From robert.vergnes at yahoo.fr Mon Apr 2 05:17:36 2007 From: robert.vergnes at yahoo.fr (Robert VERGNES) Date: Mon, 2 Apr 2007 11:17:36 +0200 (CEST) Subject: [SciPy-user] QME-Dev Workbench (wxSciPy) Beta 2.7 - RELEASED TODAY on sourceforge Message-ID: <762249.28477.qm@web27406.mail.ukl.yahoo.com> What's new in the QME-DEV (wxSciPy) workbench, Beta 0.2.7.x: Download: http://sourceforge.net/project/platformdownload.php?group_id=181979 *Compatible with Python 2.4 and wxPython 2.6.x: As most people are (still) using Python 2.4 and wxPython 2.6.x, I did a back port from Python 2.5 and wxPython 2.8. So QME-DEV Beta 0.2.7 should now work on Mandriva and xUbuntu. *Externally processed Python shells: You can now run several VERY long tasks and keep working with the wxGUI! Thank you to Josiah Carlson for his library and help on this issue. It is now functional. The advantage of externally processed Python shells is that you can start long tasks without the wxGUI being blocked or frozen, so you can use it to plot, prepare your script, or do other things such as export/import data (tested with 5 shells running long tasks > 35 min; also tested with a solo long task > 1 hr). In fact, the Python shell windows in the main application are just terminals to the underlying Python shells, which are external processes of the application. The Python shell windows you see are not the shells themselves; this is clever, and it does work nicely. Again, thank you Josiah! *Externally processed Matplotlib window-canvas: Matplotlib windows are stable and stay up even when the wxGUI has some trouble or is handling something else. It also gives the full Matplotlib (pylab) functionality (zoom, save, etc.), as I use the original pylab library. Usually you cannot have that with wxMPL unless you do the implementation yourself. If people need other kinds of plotting than curves, let me know and I might look at it, but I would need an example data file / .py script, etc.
The time-stamp and parameter-stamp work (on/off) and are displayed on the graph. *Transfer of values to/from the Python shell: Transfer of data between the wxGUI and the shells now works via temporary files. It has been tested with up to 100,000 lines x 10 columns transferred to the wxGrid without trouble. It CAN be MUCH more, but I have not tested it. While writing this, I am testing long tasks > 60 min with a 1-million-line vector output which will be transferred into the wxGrid GUI. The way it works: part of the dictionary from the wxGUI application goes to the shell, and then a dictionary from the shell comes back updated to the application (after your script has run, for example), so you can plot straight away. *Improved window and sizer stability: I removed the collapsible pane controls from the notebooks. I did not manage to get the collapsible panes to work properly inside a notebook, and they were creating strange problems with resizing and reducing the main window. So they have been removed and sizing now works OK. *File Import/Export: The CSV import/export now works, as does TSV. The DataGrid accepts Float, Integer, Complex, and String (date format defaults to string). *Stability: I had no crash during several hours in a row with long tasks running (more than 5 long tasks in 5 shells). What is the QME-DEV (wxSciPy) Workbench: It is a Python-based workbench GUI to help analyze experimental data. You can load/save/export data, graph it, and apply scripts to your data using shells which are linked with the GUI (i.e. the data's GUI dictionaries are transferred to the shell namespace so you can use your data in the shell). The application is organized by experiment (one DataPage is one experiment; a set of experiments is a set of DataPages). DataPage 0 is your first experiment, DataPage 1 the second, and so on...
In each DataPage, i.e. for each of your experiments, you have a grid page (Excel-like); an equation page (where you can write your Python script and set some parameters, for curve fitting for example); a GraphPage to plot your data; a Shell Page where the data of the current experiment (in the current DataGrid) will be transferred along with your script for execution and/or manual work in the shell; and finally a NotePage to take some notes. In the equation page (where you can copy some Python script to be executed later in the shell of this page), you have 2 buttons. One button, [Eval], sends the data's dictionary of your current DataPage (experiment) to the shell along with your script for execution. Below it, a second button, [GetBack], allows you to get back the namespace from the shell (after execution and/or manual work) into the DataGrid of the current page. If you create new variables during your script executions and you want to see these variables in your grid, the labels of the grid columns must be set first, as the GUI cannot guess what you want from the shell. A help file with examples is included in the package (PDF format). The GUI is based on wxPython and it uses SciPy or other libraries for your scientific and math scripts. Needed libraries (dependencies): Python 2.4, wxPython 2.6.x, SciPy 0.5.x, NumPy, and Matplotlib (pylab). The download address: www.sf.net/projects/qme-dev --------------------------------- Discover a new way to get answers to all your questions! Take advantage of the knowledge, opinions and experiences of other Internet users on Yahoo! Questions/Réponses. -------------- next part -------------- An HTML attachment was scrubbed...
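The data round trip the announcement describes — the wxGUI's dictionary is written to a temporary file, the external shell updates it, and [GetBack] reads it into the DataGrid — can be sketched roughly as below. The helper names are hypothetical; the post does not show the workbench's actual code, only that the transfer "works via temporary files".

```python
import os
import pickle
import tempfile

def send_to_shell(data):
    """GUI side: dump the data dictionary to a temp file for the external shell."""
    fd, path = tempfile.mkstemp(suffix=".pkl")
    with os.fdopen(fd, "wb") as fh:
        pickle.dump(data, fh)
    return path

def get_back(path):
    """GUI side: read the (possibly updated) dictionary back and clean up."""
    with open(path, "rb") as fh:
        data = pickle.load(fh)
    os.remove(path)
    return data

# Round trip; in the workbench, the external shell would rewrite the file
# between these two calls, e.g. after your script has run.
path = send_to_shell({"col0": [1.0, 2.0, 3.0]})
data = get_back(path)
```

The file-based handoff is what lets the shell live in a separate process: neither side needs a shared address space, only an agreed-on serialization format.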
URL: From ebrosh at nana.co.il Mon Apr 2 06:03:03 2007 From: ebrosh at nana.co.il (Eli Brosh) Date: Mon, 2 Apr 2007 13:03:03 +0300 Subject: [SciPy-user] Bugs in special Message-ID: <957526FB6E347743AAB42B212AB54FDA7A5AED@NANAMAILBACK1.nanamail.co.il> Hello I am trying to convert from MATLAB to Python with SciPy. I am using Python 2.4.3 for Windows (Enthought Edition) As a start, I tried to use the special functions from the SciPy "special" module. There, I encountered some problems that may be a result of bugs in SciPy. The session (in IDLE) goes like: >>>from scipy import * >>> x=.5 >>> special.jv(0,x) 0.938469807241 >>> y=.5+1.j >>> y (0.5+1j) >>> special.jv(0,y) >>> ================================ RESTART ================================ When I try to put a complex argument in special.jv, I get an error message from the operating system (windows XP): It says "pythonw.exe has encountered a problem and needs to close. We are sorry for the inconvenience." The IDLE does not close but it is restarted: There appears a line: >>> ================================ RESTART ================================ This does not occur when the argument of special.jv is real. However, even real arguments in special.ber and special.ker provoked the same crash and the same error message. Is this a bug in SciPy or am I doing something wrong ? Thanks Eli -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Mon Apr 2 06:59:59 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 02 Apr 2007 12:59:59 +0200 Subject: [SciPy-user] Bugs in special In-Reply-To: <957526FB6E347743AAB42B212AB54FDA7A5AED@NANAMAILBACK1.nanamail.co.il> References: <957526FB6E347743AAB42B212AB54FDA7A5AED@NANAMAILBACK1.nanamail.co.il> Message-ID: On Mon, 2 Apr 2007 13:03:03 +0300 "Eli Brosh" wrote: > Hello > I am trying to convert from MATLAB to Python with SciPy. 
> I am using Python 2.4.3 for Windows (Enthought Edition) > As a start, I tried to use the special functions from >the SciPy "special" module. > There, I encountered some problems that may be a result >of bugs in SciPy. > > > The session (in IDLE) goes like: > >>>>from scipy import * >>>> x=.5 >>>> special.jv(0,x) > 0.938469807241 >>>> y=.5+1.j >>>> y > (0.5+1j) >>>> special.jv(0,y) >>>> ================================ RESTART >>>>================================ > > When I try to put a complex argument in special.jv, I >get an error message from the operating system (windows >XP): > It says "pythonw.exe has encountered a problem and needs >to close. We are sorry for the inconvenience." > > The IDLE does not close but it is restarted: > There appears a line: >>>> ================================ RESTART >>>>================================ > > This does not occur when the argument of special.jv is >real. > > However, even real arguments in special.ber and >special.ker provoked the same crash and the same error >message. > > > > Is this a bug in SciPy or am I doing something wrong ? > > > Thanks > Eli > See the tickets http://projects.scipy.org/scipy/scipy/ticket/301 http://projects.scipy.org/scipy/scipy/ticket/387 Nils From ebrosh at nana.co.il Mon Apr 2 08:05:48 2007 From: ebrosh at nana.co.il (Eli Brosh) Date: Mon, 2 Apr 2007 15:05:48 +0300 Subject: [SciPy-user] Bugs in special References: <957526FB6E347743AAB42B212AB54FDA7A5AED@NANAMAILBACK1.nanamail.co.il> Message-ID: <957526FB6E347743AAB42B212AB54FDA95B9E4@NANAMAILBACK1.nanamail.co.il> Thank you Nils I looked at the tickets http://projects.scipy.org/scipy/scipy/ticket/301 http://projects.scipy.org/scipy/scipy/ticket/387 It seems that indeed, there are some serious bugs in Bessel and Kelvin functions in the 'special' package. However, the bugs reported in these "tickets" are not the bug I encountered. Perhaps they are related.
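As an editorial aside (not part of the original thread): the value special.jv(0, 0.5) quoted in the session can be cross-checked, and complex arguments handled, with the defining power series of J_0, evaluated in pure Python so the crashing compiled routine is never touched. This is a minimal sketch, accurate only for moderate |z|.

```python
def j0_series(z, terms=30):
    """Bessel function J_0(z) from its power series:
    J_0(z) = sum_k (-1)^k (z^2/4)^k / (k!)^2.
    Works for real or complex z; converges quickly for moderate |z|."""
    w = (z * z) / 4.0
    term = w * 0 + 1.0        # 1, promoted to complex when z is complex
    total = term
    for k in range(1, terms):
        term *= -w / (k * k)  # ratio of successive series terms
        total += term
    return total

print(j0_series(0.5))        # ~0.9384698072, matching the session's jv(0, 0.5)
print(j0_series(0.5 + 1j))   # a finite complex value instead of a crash
```

A hand-rolled series is no substitute for a fixed library routine, but it is a cheap way to tell a wrong answer from a crashing one.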
Eli ________________________________ From: scipy-user-bounces at scipy.org on behalf of Nils Wagner Sent: Mon 02/04/2007 13:59 To: SciPy Users List Subject: Re: [SciPy-user] Bugs in special On Mon, 2 Apr 2007 13:03:03 +0300 "Eli Brosh" wrote: > Hello > I am trying to convert from MATLAB to Python with SciPy. > I am using Python 2.4.3 for Windows (Enthought Edition) > As a start, I tried to use the special functions from >the SciPy "special" module. > There, I encountered some problems that may be a result >of bugs in SciPy. > > > The session (in IDLE) goes like: > >>>>from scipy import * >>>> x=.5 >>>> special.jv(0,x) > 0.938469807241 >>>> y=.5+1.j >>>> y > (0.5+1j) >>>> special.jv(0,y) >>>> ================================ RESTART >>>>================================ > > When I try to put a complex argument in special.jv, I >get an error message from the operating system (windows >XP): > It says "pythonw.exe has encountered a problem and needs >to close. We are sorry for the inconvenience." > > The IDLE does not close but it is restarted: > There appears a line: >>>> ================================ RESTART >>>>================================ > > This does not occur when the argument of special.jv is >real. > > However, even real arguments in special.ber and >special.ker provoked the same crash and the same error >message. > > > > Is this a bug in SciPy or am I doing something wrong ? > > > Thanks > Eli > See the tickets http://projects.scipy.org/scipy/scipy/ticket/301 http://projects.scipy.org/scipy/scipy/ticket/387 Nils _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- A non-text attachment was scrubbed... 
Name: winmail.dat Type: application/ms-tnef Size: 5785 bytes Desc: not available URL: From giorgio.luciano at chimica.unige.it Tue Apr 3 08:25:27 2007 From: giorgio.luciano at chimica.unige.it (Giorgio Luciano) Date: Tue, 03 Apr 2007 14:25:27 +0200 Subject: [SciPy-user] question about standalone small software and teaching Message-ID: <461247B7.6020507@chimica.unige.it> Hello Dear All, I just have a question for all who use python/numpy/scipy/matplotlib for doing science. I use python+numpy+scipy+matplotlib on my computer with no problem and I'm very satisfied with them. I was a Matlab user. I still have not unearthed the full power of python, but I'm happy to use a programming language and not a metalanguage. When I gave people my software (in Matlab) they all asked me if I could compile it and create some interface. I tried to use Matlab GUIs and succeeded in creating them, but then I had a lot of problems. Compiling did not always work; after compiling you have no workspace, so I had to make all output as txt files... and so on. Now that I use python I have the same problem again. I create easy routines (for chemometrics) and then people ask me if I can make a standalone program with an interface. I used Orange, and for NN it's surely one of the best, but I'm not good at programming widgets. Then I thought about it, searched the web and didn't find anything. What I'm searching for is something similar to LabVIEW :) At first I thought... hey, why do people want an interface, just use the console; but after listening to their reasons I have to agree. What do I generally do? I have a matrix in txt, I apply my routines (an SVD, a PCA, a filter, etc., written in python), plot them (using matplotlib) and then I want an output. That's it. I started looking at various Qt etc., but for me it's overwhelming, because I think that the most important part should be dedicated to creating the routines and not to making a GUI, compiling, etc. I need the simple commands people want.
grids, paste and copy, small working plots :) I mean, I can go crazy setting up my program, importing, etc., but I also have to say that the need and demand for writing simple GUIs, common paste and copy, etc. should be considered by someone out there (we await the help of some guru who makes things easier ;) thanks for reading the mail Giorgio From gael.varoquaux at normalesup.org Tue Apr 3 09:21:59 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Tue, 3 Apr 2007 15:21:59 +0200 Subject: [SciPy-user] [Numpy-discussion] question about standalone small software and teaching In-Reply-To: <461247B7.6020507@chimica.unige.it> References: <461247B7.6020507@chimica.unige.it> Message-ID: <20070403132159.GC13747@clipper.ens.fr> You can do a script with a GUI front end, as described in the first chapter of my tutorial http://gael-varoquaux.info/computers/traits_tutorial/traits_tutorial.html . You can also build a complete interactive application, as described in the rest of the tutorial, but this is more work. If you have more questions about this approach feel free to ask. Gaël From raphael.langella at steria.cnes.fr Tue Apr 3 11:58:41 2007 From: raphael.langella at steria.cnes.fr (Langella Raphael) Date: Tue, 3 Apr 2007 17:58:41 +0200 Subject: [SciPy-user] Compiling numpy and scipy on AIX 5.3 Message-ID: <200704031557.l33FvTM11701@cnes.fr> Using information from this post: http://thread.gmane.org/gmane.comp.python.scientific.user/6237/focus=6286, I noticed my compiler wasn't properly detected. When I just type xlf, it brings up the man page, without any version information.
I have to type xlf -qversion and it prints: IBM XL Fortran Enterprise Edition V10.1 for AIX Version: 10.01.0000.0003 I made the following changes in ibm.py so the compiler is properly detected: Line 11 : version_pattern = r'IBM XL Fortran (Enterprise Edition |)V(?P<version>[^\s*]*)' line 14 : 'version_cmd' : ["xlf -qversion"], line 25 : xlf_dir = '/etc' line 54 : xlf_cfg = '/etc/xlf.cfg' % version Now numpy detects the compiler. Then, I set the BLAS and LAPACK environment variables, and I get this when compiling numpy : xlf95 -bshared -F/ptmp/tmp4WLhqo_xlf.cfg build/temp.aix-5.3-2.5/numpy/core/blasdot/_dotblas.o -L/usr/local/lib -lfblas -o build/lib.aix-5.3-2.5/numpy/core/_dotblas.so ld: 0711-317 ERROR: Undefined symbol: .main ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain more information. ld: 0711-317 ERROR: Undefined symbol: .main ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain more information. error: Command "xlf95 -bshared -F/ptmp/tmp4WLhqo_xlf.cfg build/temp.aix-5.3-2.5/numpy/core/blasdot/_dotblas.o -L/usr/local/lib -lfblas -o build/lib.aix-5.3-2.5/numpy/core/_dotblas.so" failed with exit status 8 What's happening there? From wizzard028wise at gmail.com Tue Apr 3 13:52:43 2007 From: wizzard028wise at gmail.com (Bobo South Africa) Date: Tue, 3 Apr 2007 19:52:43 +0200 Subject: [SciPy-user] Linear Integral Equation second kind Message-ID: <674a602a0704031052p1d9a66acr6bd463ba7256b0c2@mail.gmail.com> Hi all, I would like to compute the numerical solution of the following integral equation (Volterra integral equation of the second kind): f(t) = \int_{a}^{b} f(t-x)g(x)dx + h(t) (LaTeX notation) where the unknown function is f(t). Thanks for your help -------------- next part -------------- An HTML attachment was scrubbed...
URL: From peridot.faceted at gmail.com Tue Apr 3 17:17:36 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Tue, 3 Apr 2007 17:17:36 -0400 Subject: [SciPy-user] Linear Integral Equation second kind In-Reply-To: <674a602a0704031052p1d9a66acr6bd463ba7256b0c2@mail.gmail.com> References: <674a602a0704031052p1d9a66acr6bd463ba7256b0c2@mail.gmail.com> Message-ID: On 03/04/07, Bobo South Africa wrote: > Hi all, > > I would want to compute the numerical solution of the following integral > equation > ( Volterra integral equation for the second kind) : > > > > f(t) = \int_{a}^{ b} f(t-x)g(x)dx + h(t) (latex > symbol) > > where the unknown function is f(t). As far as I know, scipy does not implement solving these equations. But it does implement some tools which should allow you to solve them quite efficiently (at least in terms of your time, possibly not in terms of CPU time). I recommend looking at Numerical Recipes: http://www.nrbook.com/b/bookcpdf.php See if the algorithms they describe can take advantage of the tools in scipy - in particular, there are both smart adaptive quadrature rules and splines, if you'd like to work with a spline representation of your function. Alternatively, if you want to compute your function on a grid and use numerical integration adapted to the grid, scipy implements not only trapezoidal and Simpson integration on arbitrary grids, in scipy.special you will find various families of orthogonal polynomials along with all the coefficients to do Gaussian integration in terms of those. Finally, even if you implement the calculation mostly by hand, be aware that there are a host of linear algebra tools to help. 
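The grid-plus-quadrature approach sketched in this reply can be made concrete for the closely related convolution-kernel Volterra form f(t) = h(t) + \int_0^t g(t - s) f(s) ds, marching forward with the composite trapezoidal rule. This is a hedged editorial sketch under that assumption (a running upper limit starting at 0), not a solver for the poster's exact fixed-limit equation.

```python
import numpy as np

def volterra2(h, g, t_max, n):
    """Solve f(t) = h(t) + integral_0^t g(t - s) f(s) ds on [0, t_max]
    by marching with the composite trapezoidal rule (n + 1 grid points).
    h and g must accept NumPy arrays as well as scalars."""
    t = np.linspace(0.0, t_max, n + 1)
    dt = t[1] - t[0]
    f = np.empty(n + 1)
    f[0] = h(t[0])
    for i in range(1, n + 1):
        # trapezoid weights: dt/2 at both endpoints, dt at interior points;
        # the unknown f[i] appears on both sides, so solve for it explicitly
        s = 0.5 * g(t[i]) * f[0] + np.sum(g(t[i] - t[1:i]) * f[1:i])
        f[i] = (h(t[i]) + dt * s) / (1.0 - 0.5 * dt * g(0.0))
    return t, f

# sanity check: g = 1, h = 1 gives f' = f with f(0) = 1, i.e. f(t) = exp(t)
t, f = volterra2(lambda t: 0.0 * t + 1.0, lambda x: 0.0 * x + 1.0, 1.0, 1000)
```

For the fixed-limit \int_a^b form in the original post one would instead discretize once and solve the dense linear system the reply alludes to; the marching scheme above only covers the running-upper-limit case.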
Anne > > Thank for your help > > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From s.mientki at mailbox.kun.nl Tue Apr 3 17:31:51 2007 From: s.mientki at mailbox.kun.nl (stef mientki) Date: Tue, 03 Apr 2007 23:31:51 +0200 Subject: [SciPy-user] question about standalone small software and teaching In-Reply-To: <461247B7.6020507@chimica.unige.it> References: <461247B7.6020507@chimica.unige.it> Message-ID: <4612C7C7.4000900@gmail.com> sorry, my previous answer is being held by the mail manager, because of some trouble with my mail accounts. > What I'm searching is something similar to labview :) > I just succeeded in doing the first calculation in Signal WorkBench, and although it's pure ASCII text, it has some of the really useful benefits of LabView ;-) You can see this first result here http://oase.uci.kun.nl/~mientki/data_www/pic/jalcc/help/jallcc_signal_workbench.html#code_window -- cheers, Stef Mientki http://pic.flappie.nl From oliphant at ee.byu.edu Tue Apr 3 18:43:29 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 03 Apr 2007 16:43:29 -0600 Subject: [SciPy-user] NumPy 1.0.2 released Message-ID: <4612D891.8040200@ee.byu.edu> To all SciPy / NumPy users: NumPy 1.0.2 was released yesterday (4-02-07). Get it by following the download link at http://numpy.scipy.org This is a bug-fix release with a couple of additional features. Thanks to everybody who helped track down and fix bugs. -Travis From wizzard028wise at gmail.com Tue Apr 3 18:48:14 2007 From: wizzard028wise at gmail.com (Bobo South Africa) Date: Wed, 4 Apr 2007 00:48:14 +0200 Subject: [SciPy-user] Linear Integral Equation second kind In-Reply-To: References: <674a602a0704031052p1d9a66acr6bd463ba7256b0c2@mail.gmail.com> Message-ID: <674a602a0704031548l5c7b905cnad24439aaf06f9a3@mail.gmail.com> Hi Anne, You have sent me exactly what I was looking for. Thank you!
2007/4/3, Anne Archibald : > > On 03/04/07, Bobo South Africa wrote: > > Hi all, > > > > I would want to compute the numerical solution of the following integral > > equation > > ( Volterra integral equation for the second kind) : > > > > > > > > f(t) = \int_{a}^{ b} f(t-x)g(x)dx + h(t) (latex > > symbol) > > > > where the unknown function is f(t). > > As far as I know, scipy does not implement solving these equations. > But it does implement some tools which should allow you to solve them > quite efficiently (at least in terms of your time, possibly not in > terms of CPU time). > > I recommend looking at Numerical Recipes: > http://www.nrbook.com/b/bookcpdf.php > > See if the algorithms they describe can take advantage of the tools in > scipy - in particular, there are both smart adaptive quadrature rules > and splines, if you'd like to work with a spline representation of > your function. Alternatively, if you want to compute your function on > a grid and use numerical integration adapted to the grid, scipy > implements not only trapezoidal and Simpson integration on arbitrary > grids, in scipy.special you will find various families of orthogonal > polynomials along with all the coefficients to do Gaussian integration > in terms of those. Finally, even if you implement the calculation > mostly by hand, be aware that there are a host of linear algebra tools > to help. > > Anne > > > > > > > > Thank for your help > > > > > > > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From raphael.langella at steria.cnes.fr Wed Apr 4 04:26:25 2007 From: raphael.langella at steria.cnes.fr (Langella Raphael) Date: Wed, 4 Apr 2007 10:26:25 +0200 Subject: [SciPy-user] RE : Compiling numpy and scipy on AIX 5.3 Message-ID: <200704040825.l348PGM17138@cnes.fr> I just tried numpy 1.0.2, and the compiler detection works perfectly, thanks. But I still have exactly the same error message (Undefined symbol: .main). Even if I unset BLAS and LAPACK to use the internal functions, I get the same result. -----Original Message----- From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On behalf of Langella Raphael Sent: Tuesday, 3 April 2007 17:59 To: SciPy Users List Subject: [SciPy-user] Compiling numpy and scipy on AIX 5.3 Using information from this post: http://thread.gmane.org/gmane.comp.python.scientific.user/6237/focus=6286, I noticed my compiler wasn't properly detected. When I just type xlf, it brings up the man page, without any version information. I have to type xlf -qversion and it prints: IBM XL Fortran Enterprise Edition V10.1 for AIX Version: 10.01.0000.0003 I made the following changes in ibm.py so the compiler is properly detected: Line 11 : version_pattern = r'IBM XL Fortran (Enterprise Edition |)V(?P<version>[^\s*]*)' line 14 : 'version_cmd' : ["xlf -qversion"], line 25 : xlf_dir = '/etc' line 54 : xlf_cfg = '/etc/xlf.cfg' % version Now numpy detects the compiler. Then, I set the BLAS and LAPACK environment variables, and I get this when compiling numpy : xlf95 -bshared -F/ptmp/tmp4WLhqo_xlf.cfg build/temp.aix-5.3-2.5/numpy/core/blasdot/_dotblas.o -L/usr/local/lib -lfblas -o build/lib.aix-5.3-2.5/numpy/core/_dotblas.so ld: 0711-317 ERROR: Undefined symbol: .main ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain more information. ld: 0711-317 ERROR: Undefined symbol: .main ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain more information.
error: Command "xlf95 -bshared -F/ptmp/tmp4WLhqo_xlf.cfg build/temp.aix-5.3-2.5/numpy/core/blasdot/_dotblas.o -L/usr/local/lib -lfblas -o build/lib.aix-5.3-2.5/numpy/core/_dotblas.so" failed with exit status 8 What's happening there? _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user From massimo.sandal at unibo.it Wed Apr 4 05:38:36 2007 From: massimo.sandal at unibo.it (massimo sandal) Date: Wed, 04 Apr 2007 11:38:36 +0200 Subject: [SciPy-user] question about standalone small software and teaching In-Reply-To: <461247B7.6020507@chimica.unige.it> References: <461247B7.6020507@chimica.unige.it> Message-ID: <4613721C.3070708@unibo.it> Giorgio Luciano ha scritto: > At first I thought ... hey why people wat an interface, just use the > console, and then after listening to their reason I have to agree. > What do I generally do ? I have a matrix in txt, I apply my routines (a > SVD, a PCA, a filter etc etc written in python), plot them (using > maplotlib) and then I want an output. that's it. > I started looking at various Qt etc. etc. but for me it's overhelming, > because I think that the most important part should be dedicate to the > routines creation and not to making a gui, compiling, etc. etc. I need > simple command like people wants. grids, paste and copy, small working > plots :) > I mean I can get crazy with setting my program, importing etc. etc. but > I also have to say that needs and claim about writing simple guis, > common paste and copy etc should be considered from someone there (we > wait for the help of some guru that makes things easier ;) It's quite hard for me to understand what you mean. Anyway, I solved the issue of usability vs code simplicity for my data analysis application by using a mixed CLI+GUI design. 
That is, I have a very simple GUI that just shows the plot and may have some button/menu for basic operations, and a custom command line to work with it finely. Think of RasMol, for example. The custom command line is done with the Python Cmd module that is included with Python, and it's a breeze to code with. The GUI uses Matplotlib embedded with wxMPL in a wxPython frame. The command line and the GUI are threaded (they work on two different threads) and communicate by passing events (cli-->gui) and with a Queue (gui-->cli): easy. Anyway, I'd advise you to learn wxPython basics. It's powerful, free, multiplatform and it's becoming the default Python GUI toolkit in the wild. Learning a GUI toolkit cannot harm. If you can, buy the Robin Dunn book "wxPython in Action", it's wonderful to say the least. m. -- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... Name: massimo.sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From robert.vergnes at yahoo.fr Wed Apr 4 06:16:00 2007 From: robert.vergnes at yahoo.fr (Robert VERGNES) Date: Wed, 4 Apr 2007 12:16:00 +0200 (CEST) Subject: [SciPy-user] erreur de segmentation Message-ID: <20070404101600.71034.qmail@web27413.mail.ukl.yahoo.com> Hello, On Linux (Mandriva), each time we use python we get an 'erreur de segmentation' (segmentation fault). I read somewhere on the net that this happened after installation of numpy, but I can't really say what on the Mandriva config triggered that issue. Does anybody have a clue about that? Thanx Robert
-------------- next part -------------- An HTML attachment was scrubbed... URL: From s.mientki at mailbox.kun.nl Wed Apr 4 07:47:00 2007 From: s.mientki at mailbox.kun.nl (stef mientki) Date: Wed, 04 Apr 2007 13:47:00 +0200 Subject: [SciPy-user] question about standalone small software and teaching In-Reply-To: <4613721C.3070708@unibo.it> References: <461247B7.6020507@chimica.unige.it> <4613721C.3070708@unibo.it> Message-ID: <46139034.6050907@gmail.com> massimo sandal wrote: > Giorgio Luciano ha scritto: >> At first I thought ... hey why people wat an interface, just use the >> console, and then after listening to their reason I have to agree. >> What do I generally do ? I have a matrix in txt, I apply my routines >> (a SVD, a PCA, a filter etc etc written in python), plot them (using >> maplotlib) and then I want an output. that's it. >> I started looking at various Qt etc. etc. but for me it's >> overhelming, because I think that the most important part should be >> dedicate to the routines creation and not to making a gui, compiling, >> etc. etc. I need simple command like people wants. grids, paste and >> copy, small working plots :) >> I mean I can get crazy with setting my program, importing etc. etc. >> but I also have to say that needs and claim about writing simple >> guis, common paste and copy etc should be considered from someone >> there (we wait for the help of some guru that makes things easier ;) > > It's quite hard for me to understand what you mean. > > Anyway, I solved the issue of usability vs code simplicity for my data > analysis application by using a mixed CLI+GUI design. That is, I have > a very simple GUI that just shows the plot and may have some > button/menu for basic operations, and a custom command line to finely > work with it. Think of RasMol, for example. The custom command line is > done with the Python Cmd module that is included with Python, and it's > a breeze to code with. 
The GUI uses Matplotlib embedded with wxMPL in > a wxPython frame. The command line and the GUI are threaded (work on > two different threads) that communicate by passing events (cli-->gui) > and with a Queue (gui-->cli): easy. > Do you have a website or document describing your method in more detail ? I always love to see solutions of others, to see what I'm missing. cheers, Stef Mientki -- cheers, Stef Mientki http://pic.flappie.nl From s.mientki at mailbox.kun.nl Wed Apr 4 07:51:38 2007 From: s.mientki at mailbox.kun.nl (stef mientki) Date: Wed, 04 Apr 2007 13:51:38 +0200 Subject: [SciPy-user] question about standalone small software and teaching In-Reply-To: <461247B7.6020507@chimica.unige.it> References: <461247B7.6020507@chimica.unige.it> Message-ID: <4613914A.1090509@gmail.com> > I tried to use matlab GUIs, succeeded in creating them, but then I had a lot > of problems. Compiling did not always work. After compiling you have no > workspace and so I had to make all output as txt files... and so on. > Now that I use python I'm again with the same problem. I create easy > routines (for chemometrics) and then people ask me if I can make a > standalone program with interface. > I used orange and for NN it's surely one of the best, but I'm not good > at programming widgets. Then I thought about it, searched the web and > didn't find anything. > What I'm searching is something similar to labview :) > If you like LabView, then I'd suggest: buy it! NI (just like MathWorks) has a real "drugdealer policy": if you're in education, you can get it almost for nothing. But I'm afraid that if you had trouble with MatLab, and then had the same trouble with Python, you'll have even more trouble with LabView ;-) > At first I thought ... hey why do people want an interface, just use the > console, and then after listening to their reasons I have to agree. > What do I generally do ?
I have a matrix in txt, I apply my routines (a > SVD, a PCA, a filter etc etc written in python), plot them (using > matplotlib) and then I want an output. that's it. > I started looking at various Qt etc. etc. but for me it's overwhelming, > because I think that the most important part should be dedicated to > creating the routines and not to making a gui, compiling, etc. etc. I need > simple commands, like people want: grids, paste and copy, small working > plots :) > What do you mean by copy and paste? > I mean I can get crazy with setting up my program, importing etc. etc. but > I also have to say that the need for simple guis, > common paste and copy etc. should be considered by someone out there (we > wait for the help of some guru that makes things easier ;) > > I agree that a simple user interface would be very welcome. I think that's why QME and Signal WorkBench are being developed ;-) If you're familiar with another graphically oriented language (Visual Basic, Delphi, Kylix, Lazarus,...) it's easy to do the GUI in that language and glue the application with Python (gluing is one of the key benefits of Python). -- cheers, Stef Mientki http://pic.flappie.nl From cookedm at physics.mcmaster.ca Wed Apr 4 08:59:32 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 04 Apr 2007 08:59:32 -0400 Subject: [SciPy-user] RE : Compiling numpy and scipy on AIX 5.3 In-Reply-To: <200704040825.l348PGM17138@cnes.fr> References: <200704040825.l348PGM17138@cnes.fr> Message-ID: <4613A134.4070706@physics.mcmaster.ca> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Langella Raphael wrote: > > IBM XL Fortran Enterprise Edition V10.1 for AIX > Version: 10.01.0000.0003 > > Now numpy detects the compiler.
Then, I set BLAS and LAPACK > environnement variables, and I get this when compiling numpy : > > xlf95 -bshared -F/ptmp/tmp4WLhqo_xlf.cfg build/temp.aix-5.3-2.5/numpy/core/blasdot/_dotblas.o -L/usr/local/lib -lfblas -o build/lib.aix-5.3-2.5/numpy/core/_dotblas.so > ld: 0711-317 ERROR: Undefined symbol: .main > ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain more information. > ld: 0711-317 ERROR: Undefined symbol: .main > ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain more information. > error: Command "xlf95 -bshared -F/ptmp/tmp4WLhqo_xlf.cfg build/temp.aix-5.3-2.5/numpy/core/blasdot/_dotblas.o -L/usr/local/lib -lfblas -o build/lib.aix-5.3-2.5/numpy/core/_dotblas.so" failed with exit status 8 AIX is a royal PITA when it comes to making shared libraries. Really, it's awful. You need to make a file of imported symbols first, and pass that to the linker command. Fortunately, Python includes an ld_so_aix script to do this. Try the current Numpy SVN; I've tried to make it use this. - -- |>|\/|< /------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.6 (Darwin) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFGE6E0+kNzddXW8YwRAuh+AJ9SQvYzHMYt0SZ/9eOeRcB8dAc0dACeP0vy pu8mKUIU84mJUBQ4of8pZZQ= =S0Le -----END PGP SIGNATURE----- From massimo.sandal at unibo.it Wed Apr 4 10:20:30 2007 From: massimo.sandal at unibo.it (massimo sandal) Date: Wed, 04 Apr 2007 16:20:30 +0200 Subject: [SciPy-user] question about standalone small software and teaching In-Reply-To: <46139034.6050907@gmail.com> References: <461247B7.6020507@chimica.unige.it> <4613721C.3070708@unibo.it> <46139034.6050907@gmail.com> Message-ID: <4613B42E.9000308@unibo.it> stef mientki ha scritto: > > Do you have a website or document describing your method in more detail ? 
> I always love to see solutions of others, to see what I'm missing. Nope, since it's still not ready for public use. However, if you want I can send you the code (licensed under GNU GPL v.2). If you want details, feel free to mail me. m. -- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... Name: massimo.sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From raphael.langella at steria.cnes.fr Wed Apr 4 10:37:16 2007 From: raphael.langella at steria.cnes.fr (Langella Raphael) Date: Wed, 4 Apr 2007 16:37:16 +0200 Subject: [SciPy-user] RE : RE : Compiling numpy and scipy on AIX 5.3 Message-ID: <200704041438.l34Ecc025787@cnes.fr> > -----Message d'origine----- > De : scipy-user-bounces at scipy.org > [mailto:scipy-user-bounces at scipy.org] De la part de David M. Cooke > Envoyé : mercredi 4 avril 2007 15:00 > À : SciPy Users List > Objet : Re: [SciPy-user] RE : Compiling numpy and scipy on AIX 5.3 > > > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > Langella Raphael wrote: > > > > IBM XL Fortran Enterprise Edition V10.1 for AIX > > Version: 10.01.0000.0003 > > > > Now numpy detects the compiler. Then, I set BLAS and LAPACK > > environnement variables, and I get this when compiling numpy : > > > > xlf95 -bshared -F/ptmp/tmp4WLhqo_xlf.cfg > > build/temp.aix-5.3-2.5/numpy/core/blasdot/_dotblas.o > -L/usr/local/lib -lfblas -o > build/lib.aix-5.3-2.5/numpy/core/_dotblas.so > > ld: 0711-317 ERROR: Undefined symbol: .main > > ld: 0711-345 Use the -bloadmap or -bnoquiet option to > obtain more information. > > ld: 0711-317 ERROR: Undefined symbol: .main > > ld: 0711-345 Use the -bloadmap or -bnoquiet option to > obtain more information.
> > error: Command "xlf95 -bshared -F/ptmp/tmp4WLhqo_xlf.cfg > build/temp.aix-5.3-2.5/numpy/core/blasdot/_dotblas.o > -L/usr/local/lib -lfblas -o > build/lib.aix-5.3-2.5/numpy/core/_dotblas.so" failed with > exit status 8 > > AIX is a royal PITA when it comes to making shared libraries. > Really, it's awful. You need to make a file of imported > symbols first, and pass that to the linker command. > > Fortunately, Python includes an ld_so_aix script to do this. > Try the current Numpy SVN; I've tried to make it use this. I'd love to try the SVN version, but : $ svn co http://svn.scipy.org/svn/numpy/trunk numpy svn: PROPFIND request failed on '/svn/numpy/trunk' svn: PROPFIND of '/svn/numpy/trunk': 500 Server Error (http://svn.scipy.org) Am I missing something or is it a server problem as the error message suggests ? From lists.steve at arachnedesign.net Wed Apr 4 10:45:37 2007 From: lists.steve at arachnedesign.net (Steve Lianoglou) Date: Wed, 4 Apr 2007 10:45:37 -0400 Subject: [SciPy-user] RE : RE : Compiling numpy and scipy on AIX 5.3 In-Reply-To: <200704041438.l34Ecc025787@cnes.fr> References: <200704041438.l34Ecc025787@cnes.fr> Message-ID: <77F9DB2E-5F6C-4DE8-BB55-14BE1D12537D@arachnedesign.net> > I'd love to try the SVN version, but : > > $ svn co http://svn.scipy.org/svn/numpy/trunk numpy > svn: PROPFIND request failed on '/svn/numpy/trunk' > svn: PROPFIND of '/svn/numpy/trunk': 500 Server Error (http:// > svn.scipy.org) > > Am I missing something or is it a server problem as the error > message suggests ? I copy/pasted your `svn co ..` command into my terminal and it looks like it's working fine for me ... 
-steve From raphael.langella at steria.cnes.fr Wed Apr 4 10:47:42 2007 From: raphael.langella at steria.cnes.fr (Langella Raphael) Date: Wed, 4 Apr 2007 16:47:42 +0200 Subject: [SciPy-user] RE : RE : RE : Compiling numpy and scipy on AIX 5.3 Message-ID: <200704041448.l34Em4000061@cnes.fr> > -----Message d'origine----- > De : scipy-user-bounces at scipy.org > [mailto:scipy-user-bounces at scipy.org] De la part de Steve Lianoglou > Envoy? : mercredi 4 avril 2007 16:46 > ? : SciPy Users List > Objet : Re: [SciPy-user] RE : RE : Compiling numpy and scipy > on AIX 5.3 > > > > I'd love to try the SVN version, but : > > > > $ svn co http://svn.scipy.org/svn/numpy/trunk numpy > > svn: PROPFIND request failed on '/svn/numpy/trunk' > > svn: PROPFIND of '/svn/numpy/trunk': 500 Server Error (http:// > > svn.scipy.org) > > > > Am I missing something or is it a server problem as the error > > message suggests ? > > I copy/pasted your `svn co ..` command into my terminal and it looks > like it's working fine for me ... So, it's probably a proxy problem. I'll try from home... From cookedm at physics.mcmaster.ca Wed Apr 4 11:15:40 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 04 Apr 2007 11:15:40 -0400 Subject: [SciPy-user] RE : RE : RE : Compiling numpy and scipy on AIX 5.3 In-Reply-To: <200704041448.l34Em4000061@cnes.fr> References: <200704041448.l34Em4000061@cnes.fr> Message-ID: <4613C11C.2080501@physics.mcmaster.ca> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Langella Raphael wrote: >>> I'd love to try the SVN version, but : >>> >>> $ svn co http://svn.scipy.org/svn/numpy/trunk numpy >>> svn: PROPFIND request failed on '/svn/numpy/trunk' >>> svn: PROPFIND of '/svn/numpy/trunk': 500 Server Error (http:// >>> svn.scipy.org) >>> >>> Am I missing something or is it a server problem as the error >>> message suggests ? >> I copy/pasted your `svn co ..` command into my terminal and it looks >> like it's working fine for me ... 
> > So, it's probably a proxy problem. I'll try from home... If it's a proxy problem, using https instead of http will probably work. - -- |>|\/|< /------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.6 (Darwin) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFGE8Ec+kNzddXW8YwRAmRkAKDsbu+xmRLp6yq+V/RTerDsOUnHkwCgyu6D 2ZIj/uvNy3e3Cxjm3BxlZUc= =ggGA -----END PGP SIGNATURE----- From raphael.langella at steria.cnes.fr Wed Apr 4 11:40:13 2007 From: raphael.langella at steria.cnes.fr (Langella Raphael) Date: Wed, 4 Apr 2007 17:40:13 +0200 Subject: [SciPy-user] RE : RE : RE : RE : Compiling numpy and scipy on AIX 5.3 Message-ID: <200704041539.l34FdQl05490@cnes.fr> > -----Message d'origine----- > De : scipy-user-bounces at scipy.org > [mailto:scipy-user-bounces at scipy.org] De la part de David M. Cooke > Envoy? : mercredi 4 avril 2007 17:16 > ? : SciPy Users List > Objet : Re: [SciPy-user] RE : RE : RE : Compiling numpy and > scipy on AIX 5.3 > > > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > Langella Raphael wrote: > >>> I'd love to try the SVN version, but : > >>> > >>> $ svn co http://svn.scipy.org/svn/numpy/trunk numpy > >>> svn: PROPFIND request failed on '/svn/numpy/trunk' > >>> svn: PROPFIND of '/svn/numpy/trunk': 500 Server Error (http:// > >>> svn.scipy.org) > >>> > >>> Am I missing something or is it a server problem as the error > >>> message suggests ? > >> I copy/pasted your `svn co ..` command into my terminal > and it looks > >> like it's working fine for me ... > > > > So, it's probably a proxy problem. I'll try from home... > > If it's a proxy problem, using https instead of http will > probably work. Indeed. 
So here's what the SVN version gives me : creating build/temp.aix-5.3-2.5/numpy/core/blasdot compile options: '-DNO_ATLAS_INFO=1 -Inumpy/core/blasdot -Inumpy/core/include -Ibuild/src.aix-5.3-2.5/numpy/core -Inumpy/core/src -Inumpy/core/include -I/Produits/publics/powerpc.AIX.5.2/python/2.5.0/include/python2.5 -c' cc_r: numpy/core/blasdot/_dotblas.c Traceback (most recent call last): File "setup.py", line 89, in setup_package() File "setup.py", line 82, in setup_package configuration=configuration ) File "/tmp/numpy/numpy/distutils/core.py", line 174, in setup return old_setup(**new_attr) File "/usr/local/lib/python2.5/distutils/core.py", line 151, in setup File "/usr/local/lib/python2.5/distutils/dist.py", line 974, in run_commands File "/usr/local/lib/python2.5/distutils/dist.py", line 994, in run_command File "/tmp/numpy/numpy/distutils/command/install.py", line 16, in run r = old_install.run(self) File "/usr/local/lib/python2.5/distutils/command/install.py", line 506, in run File "/Produits/publics/powerpc.AIX.5.2/python/2.5.0/lib/python2.5/cmd.py", line 333, in run_command del help[cmd] File "/usr/local/lib/python2.5/distutils/dist.py", line 994, in run_command File "/usr/local/lib/python2.5/distutils/command/build.py", line 112, in run File "/Produits/publics/powerpc.AIX.5.2/python/2.5.0/lib/python2.5/cmd.py", line 333, in run_command del help[cmd] File "/usr/local/lib/python2.5/distutils/dist.py", line 994, in run_command File "/tmp/numpy/numpy/distutils/command/build_ext.py", line 121, in run self.build_extensions() File "/usr/local/lib/python2.5/distutils/command/build_ext.py", line 407, in build_extensions File "/tmp/numpy/numpy/distutils/command/build_ext.py", line 335, in build_extension build_temp=self.build_temp,**kws) File "/usr/local/lib/python2.5/distutils/ccompiler.py", line 845, in link_shared_object File "/tmp/numpy/numpy/distutils/fcompiler/__init__.py", line 496, in link self.spawn(command) File "/tmp/numpy/numpy/distutils/ccompiler.py", line 
29, in CCompiler_spawn display = ' '.join(list(display)) TypeError: sequence item 0: expected string, list found From cookedm at physics.mcmaster.ca Wed Apr 4 14:11:03 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 04 Apr 2007 14:11:03 -0400 Subject: [SciPy-user] RE : RE : RE : RE : Compiling numpy and scipy on AIX 5.3 In-Reply-To: <200704041539.l34FdQl05490@cnes.fr> References: <200704041539.l34FdQl05490@cnes.fr> Message-ID: <4613EA37.9070007@physics.mcmaster.ca> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Langella Raphael wrote: >> Langella Raphael wrote: >>>>> I'd love to try the SVN version, but : >>>>> >>>>> $ svn co http://svn.scipy.org/svn/numpy/trunk numpy >>>>> svn: PROPFIND request failed on '/svn/numpy/trunk' >>>>> svn: PROPFIND of '/svn/numpy/trunk': 500 Server Error (http:// >>>>> svn.scipy.org) >>>>> >>>>> Am I missing something or is it a server problem as the error >>>>> message suggests ? >>>> I copy/pasted your `svn co ..` command into my terminal >> and it looks >>>> like it's working fine for me ... >>> So, it's probably a proxy problem. I'll try from home... >> If it's a proxy problem, using https instead of http will >> probably work. > > Indeed. 
So here's what the SVN version gives me : > > creating build/temp.aix-5.3-2.5/numpy/core/blasdot > compile options: '-DNO_ATLAS_INFO=1 -Inumpy/core/blasdot -Inumpy/core/include -Ibuild/src.aix-5.3-2.5/numpy/core -Inumpy/core/src -Inumpy/core/include -I/Produits/publics/powerpc.AIX.5.2/python/2.5.0/include/python2.5 -c' > cc_r: numpy/core/blasdot/_dotblas.c > Traceback (most recent call last): > File "setup.py", line 89, in > setup_package() > File "setup.py", line 82, in setup_package > configuration=configuration ) > File "/tmp/numpy/numpy/distutils/core.py", line 174, in setup > return old_setup(**new_attr) > File "/usr/local/lib/python2.5/distutils/core.py", line 151, in setup > File "/usr/local/lib/python2.5/distutils/dist.py", line 974, in run_commands > File "/usr/local/lib/python2.5/distutils/dist.py", line 994, in run_command > File "/tmp/numpy/numpy/distutils/command/install.py", line 16, in run > r = old_install.run(self) > File "/usr/local/lib/python2.5/distutils/command/install.py", line 506, in run > File "/Produits/publics/powerpc.AIX.5.2/python/2.5.0/lib/python2.5/cmd.py", line 333, in run_command > del help[cmd] > File "/usr/local/lib/python2.5/distutils/dist.py", line 994, in run_command > File "/usr/local/lib/python2.5/distutils/command/build.py", line 112, in run > File "/Produits/publics/powerpc.AIX.5.2/python/2.5.0/lib/python2.5/cmd.py", line 333, in run_command > del help[cmd] > File "/usr/local/lib/python2.5/distutils/dist.py", line 994, in run_command > File "/tmp/numpy/numpy/distutils/command/build_ext.py", line 121, in run > self.build_extensions() > File "/usr/local/lib/python2.5/distutils/command/build_ext.py", line 407, in build_extensions > File "/tmp/numpy/numpy/distutils/command/build_ext.py", line 335, in build_extension > build_temp=self.build_temp,**kws) > File "/usr/local/lib/python2.5/distutils/ccompiler.py", line 845, in link_shared_object > File "/tmp/numpy/numpy/distutils/fcompiler/__init__.py", line 496, in link > 
self.spawn(command) > File "/tmp/numpy/numpy/distutils/ccompiler.py", line 29, in CCompiler_spawn > display = ' '.join(list(display)) > TypeError: sequence item 0: expected string, list found Ah, oops. Try it now. - -- |>|\/|< /------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.6 (Darwin) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFGE+o3+kNzddXW8YwRAiMkAJsFwuYAqLczeRx0lZlv5SifXZrYwgCfXLBi OsHBLcmMh+fHfYTNnBt3AT0= =WT6J -----END PGP SIGNATURE----- From s.mientki at mailbox.kun.nl Wed Apr 4 15:42:17 2007 From: s.mientki at mailbox.kun.nl (stef mientki) Date: Wed, 04 Apr 2007 21:42:17 +0200 Subject: [SciPy-user] question about standalone small software and teaching In-Reply-To: <461247B7.6020507@chimica.unige.it> References: <461247B7.6020507@chimica.unige.it> Message-ID: <4613FF99.3010503@gmail.com> Not that I want to steer you away from SciPy, but I just bounced into this http://www.celles.net/wikini/wakka.php?wiki=ScilabMecaPendule Is that what you're looking for ? -- cheers, Stef Mientki http://pic.flappie.nl From wbaxter at gmail.com Wed Apr 4 22:10:28 2007 From: wbaxter at gmail.com (Bill Baxter) Date: Thu, 5 Apr 2007 11:10:28 +0900 Subject: [SciPy-user] Big list of Numpy & Scipy users Message-ID: On 4/4/07, Robert Kern wrote: > Bill Baxter wrote: > > Is there any place on the Wiki that lists all the known software that > > uses Numpy in some way? > > >> > It would be nice to start collecting such a list if there isn't one > > already. Screenshots would be nice too. > > There is no such list that I know of, but you may start one on the wiki if you like. Ok, I made a start: http://www.scipy.org/Scipy_Projects Anyone who has a project that depends on Numpy or Scipy, please go add your info there!
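[Editor's note] The TypeError in `CCompiler_spawn` a few messages up is a generic one worth remembering: `str.join` requires every item to be a string, and the command list there contained a nested list (the `ld_so_aix` wrapper plus its arguments). A self-contained reproduction with a flattening workaround — the `flatten` helper and the sample command are invented for illustration, not the actual numpy.distutils fix:

```python
def flatten(seq):
    """Yield items from seq, expanding any nested lists."""
    for item in seq:
        if isinstance(item, list):
            yield from flatten(item)
        else:
            yield item

# A command whose first element is itself a list, as happens when the
# linker is wrapped by a helper script such as ld_so_aix.
command = [['ld_so_aix', 'cc_r'], '-o', '_dotblas.so']

try:
    display = ' '.join(command)        # fails: first item is a list
except TypeError:
    display = ' '.join(flatten(command))

print(display)                          # -> ld_so_aix cc_r -o _dotblas.so
```

(Current Python phrases the error "expected str instance, list found"; Python 2.5 said "expected string", as in the traceback above.)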
I haven't linked it from anywhere, because it looks pretty pathetic right now with only three or four entries. But hopefully everyone will jump in and add their project to the list. Part of the idea is that this should be a good place to point nay-sayers to when they say "meh - numpy... that's a niche project for a handful of scientists." So ... hopefully a good portion of the links will be things other than science projects. There will hopefully be a lot of things that "ordinary users" would care about. :-) I couldn't figure out how to add an image, but if someone knows how to do that, please do. --bb From shuntim.luk at polyu.edu.hk Thu Apr 5 01:12:20 2007 From: shuntim.luk at polyu.edu.hk (LUK ShunTim) Date: Thu, 05 Apr 2007 13:12:20 +0800 Subject: [SciPy-user] RE : RE : Compiling numpy and scipy on AIX 5.3 In-Reply-To: <77F9DB2E-5F6C-4DE8-BB55-14BE1D12537D@arachnedesign.net> References: <200704041438.l34Ecc025787@cnes.fr> <77F9DB2E-5F6C-4DE8-BB55-14BE1D12537D@arachnedesign.net> Message-ID: <46148534.3090405@polyu.edu.hk> Steve Lianoglou wrote: >> I'd love to try the SVN version, but : >> >> $ svn co http://svn.scipy.org/svn/numpy/trunk numpy >> svn: PROPFIND request failed on '/svn/numpy/trunk' >> svn: PROPFIND of '/svn/numpy/trunk': 500 Server Error (http:// >> svn.scipy.org) >> >> Am I missing something or is it a server problem as the error >> message suggests ? > > I copy/pasted your `svn co ..` command into my terminal and it looks > like it's working fine for me ... > > -steve Probably the OP is behind a proxy. Try if svn co https://svn.scipy.org/svn/numpy/trunk numpy works.
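[Editor's note] For anyone stuck behind the same proxy without an https route: Subversion can also be pointed at the proxy explicitly in its runtime configuration file (`~/.subversion/servers` on Unix). The host and port below are placeholders:

```ini
[global]
http-proxy-host = proxy.example.com
http-proxy-port = 8080
```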
See this FAQ http://subversion.tigris.org/faq.html#proxy Regards, ST -- From david at ar.media.kyoto-u.ac.jp Thu Apr 5 01:19:36 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 05 Apr 2007 14:19:36 +0900 Subject: [SciPy-user] erreur de segmentation In-Reply-To: <20070404101600.71034.qmail@web27413.mail.ukl.yahoo.com> References: <20070404101600.71034.qmail@web27413.mail.ukl.yahoo.com> Message-ID: <461486E8.7020901@ar.media.kyoto-u.ac.jp> Robert VERGNES wrote: > Hello, > > On Linux (Mandriva), each time we use Python we get an > 'erreur de segmentation' (segmentation fault). > > I read somewhere on the net that this happened after installation of > numpy, but I can't really say what on the Mandriva config triggered > that issue. > > Does anybody have a clue about that? > > Thanx > > Robert Hi Robert, First, I would strongly advise against leaving your error messages in French: this makes finding info on the net much more difficult. Now, you need to give more details: when does this segmentation fault occur? When you launch a simple python shell (unlikely)? When you import numpy? cheers, David From paul.cristini at univ-pau.fr Thu Apr 5 03:32:17 2007 From: paul.cristini at univ-pau.fr (paul cristini) Date: Thu, 05 Apr 2007 09:32:17 +0200 Subject: [SciPy-user] Linear interpolation with delaunay triangulation Message-ID: <4614A601.9020407@univ-pau.fr> Hello, I'm trying to use the delaunay package to perform interpolation on irregularly spaced data, and I was wondering if there is a way to use the linear interpolator with this package. I get an error saying the object is not callable when I replace nn with linear in the example.
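[Editor's note] On the delaunay question: I can't confirm the sandbox package's exact API here, but whatever the calling convention, what a linear interpolator over a Delaunay triangulation computes inside each triangle is plain barycentric interpolation of the vertex values. A pure-Python sketch (the function names and the sample triangle are invented for the example):

```python
def barycentric_weights(tri, p):
    """Barycentric weights of point p with respect to triangle tri,
    given as three (x, y) vertices."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    px, py = p
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (px - x3) + (x3 - x2) * (py - y3)) / det
    w2 = ((y3 - y1) * (px - x3) + (x1 - x3) * (py - y3)) / det
    return w1, w2, 1.0 - w1 - w2

def linear_interp(tri, values, p):
    # Linear interpolation inside one triangle: the barycentric-weighted
    # sum of the values at its vertices.
    w = barycentric_weights(tri, p)
    return sum(wi * vi for wi, vi in zip(w, values))

tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
vals = [0.0, 1.0, 2.0]   # samples of z = x + 2*y at the vertices
print(linear_interp(tri, vals, (0.25, 0.25)))   # -> 0.75
```

Since linear interpolation is exact for linear functions, interpolating the samples of z = x + 2y at (0.25, 0.25) reproduces the exact value 0.75; a Delaunay-based interpolator just does this per triangle after locating the triangle containing the query point.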
Paul From raphael.langella at steria.cnes.fr Thu Apr 5 04:16:39 2007 From: raphael.langella at steria.cnes.fr (Langella Raphael) Date: Thu, 5 Apr 2007 10:16:39 +0200 Subject: [SciPy-user] RE : RE : RE : RE : RE : Compiling numpy and scipy on AIX 5.3 Message-ID: <200704050816.l358Gf023376@cnes.fr> > -----Message d'origine----- > De : scipy-user-bounces at scipy.org > [mailto:scipy-user-bounces at scipy.org] De la part de David M. Cooke > Envoy? : mercredi 4 avril 2007 20:11 > ? : SciPy Users List > Objet : Re: [SciPy-user] RE : RE : RE : RE : Compiling numpy > and scipy on AIX 5.3 > > > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > Langella Raphael wrote: > >> Langella Raphael wrote: > >>>>> I'd love to try the SVN version, but : > >>>>> > >>>>> $ svn co http://svn.scipy.org/svn/numpy/trunk numpy > >>>>> svn: PROPFIND request failed on '/svn/numpy/trunk' > >>>>> svn: PROPFIND of '/svn/numpy/trunk': 500 Server Error (http:// > >>>>> svn.scipy.org) > >>>>> > >>>>> Am I missing something or is it a server problem as the error > >>>>> message suggests ? > >>>> I copy/pasted your `svn co ..` command into my terminal > >> and it looks > >>>> like it's working fine for me ... > >>> So, it's probably a proxy problem. I'll try from home... > >> If it's a proxy problem, using https instead of http will > >> probably work. > > > > Indeed. 
So here's what the SVN version gives me : > > > > creating build/temp.aix-5.3-2.5/numpy/core/blasdot > > compile options: '-DNO_ATLAS_INFO=1 -Inumpy/core/blasdot > > -Inumpy/core/include -Ibuild/src.aix-5.3-2.5/numpy/core > -Inumpy/core/src -Inumpy/core/include > -I/Produits/publics/powerpc.AIX.5.2/python/2.5.0/include/python2.5 -c' > > cc_r: numpy/core/blasdot/_dotblas.c > > Traceback (most recent call last): > > File "setup.py", line 89, in > > setup_package() > > File "setup.py", line 82, in setup_package > > configuration=configuration ) > > File "/tmp/numpy/numpy/distutils/core.py", line 174, in setup > > return old_setup(**new_attr) > > File "/usr/local/lib/python2.5/distutils/core.py", line > 151, in setup > > File "/usr/local/lib/python2.5/distutils/dist.py", line > 974, in run_commands > > File "/usr/local/lib/python2.5/distutils/dist.py", line > 994, in run_command > > File "/tmp/numpy/numpy/distutils/command/install.py", > line 16, in run > > r = old_install.run(self) > > File > "/usr/local/lib/python2.5/distutils/command/install.py", line > 506, in run > > File > "/Produits/publics/powerpc.AIX.5.2/python/2.5.0/lib/python2.5/ > cmd.py", line 333, in run_command > > del help[cmd] > > File "/usr/local/lib/python2.5/distutils/dist.py", line > 994, in run_command > > File > "/usr/local/lib/python2.5/distutils/command/build.py", line > 112, in run > > File > "/Produits/publics/powerpc.AIX.5.2/python/2.5.0/lib/python2.5/ > cmd.py", line 333, in run_command > > del help[cmd] > > File "/usr/local/lib/python2.5/distutils/dist.py", line > 994, in run_command > > File "/tmp/numpy/numpy/distutils/command/build_ext.py", > line 121, in run > > self.build_extensions() > > File > "/usr/local/lib/python2.5/distutils/command/build_ext.py", > line 407, in build_extensions > > File "/tmp/numpy/numpy/distutils/command/build_ext.py", > line 335, in build_extension > > build_temp=self.build_temp,**kws) > > File "/usr/local/lib/python2.5/distutils/ccompiler.py", > line 845, in 
link_shared_object > > File "/tmp/numpy/numpy/distutils/fcompiler/__init__.py", > line 496, in link > > self.spawn(command) > > File "/tmp/numpy/numpy/distutils/ccompiler.py", line 29, > in CCompiler_spawn > > display = ' '.join(list(display)) > > TypeError: sequence item 0: expected string, list found > > Ah, oops. Try it now. OK, compilation works fine without any optimization. As soon as I try to link with blas, lapack, atlas or essl, I get the following errors : compile options: '-Inumpy/core/src -Inumpy/core/include -I/usr/local/include/python2.5 -c' cc_r: _configtest.c cc_r _configtest.o -o _configtest ld: 0711-317 ERROR: Undefined symbol: .exp ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain more information. ld: 0711-317 ERROR: Undefined symbol: .exp ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain more information. failure. removing: _configtest.c _configtest.o /usr/local/lib/python2.5/config/ld_so_aix cc_r -bI:/usr/local/lib/python2.5/config/python.exp build/temp.aix-5.3-2.5/numpy/core/blasdot/_dotblas.o -L/usr/lib -lblas -o build/lib.aix-5.3-2.5/numpy/core/_dotblas.so ld: 0711-317 ERROR: Undefined symbol: .cblas_cdotc_sub ld: 0711-317 ERROR: Undefined symbol: .cblas_zdotc_sub ld: 0711-317 ERROR: Undefined symbol: .cblas_sdot ld: 0711-317 ERROR: Undefined symbol: .cblas_ddot ld: 0711-317 ERROR: Undefined symbol: .cblas_caxpy ld: 0711-317 ERROR: Undefined symbol: .cblas_saxpy ld: 0711-317 ERROR: Undefined symbol: .cblas_zaxpy ld: 0711-317 ERROR: Undefined symbol: .cblas_daxpy ld: 0711-317 ERROR: Undefined symbol: .cblas_cdotu_sub ld: 0711-317 ERROR: Undefined symbol: .cblas_zdotu_sub ld: 0711-317 ERROR: Undefined symbol: .cblas_cgemv ld: 0711-317 ERROR: Undefined symbol: .cblas_sgemv ld: 0711-317 ERROR: Undefined symbol: .cblas_zgemv ld: 0711-317 ERROR: Undefined symbol: .cblas_dgemv ld: 0711-317 ERROR: Undefined symbol: .cblas_cgemm ld: 0711-317 ERROR: Undefined symbol: .cblas_zgemm ld: 0711-317 ERROR: Undefined symbol: 
.cblas_sgemm ld: 0711-317 ERROR: Undefined symbol: .cblas_dgemm ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain more information. building 'numpy.lib._compiled_base' extension compiling C sources C compiler: cc_r -DNDEBUG -O /usr/local/lib/python2.5/config/ld_so_aix cc_r -bI:/usr/local/lib/python2.5/config/python.exp build/temp.aix-5.3-2.5/numpy/linalg/lapack_litemodule.o -L/usr/local/lib -lflapack -lfblas -o build/lib.aix-5.3-2.5/numpy/linalg/lapack_lite.so ld: 0711-317 ERROR: Undefined symbol: .zungqr_ ld: 0711-317 ERROR: Undefined symbol: .zgeqrf_ ld: 0711-317 ERROR: Undefined symbol: .zpotrf_ ld: 0711-317 ERROR: Undefined symbol: .zgetrf_ ld: 0711-317 ERROR: Undefined symbol: .zgesdd_ ld: 0711-317 ERROR: Undefined symbol: .zgesv_ ld: 0711-317 ERROR: Undefined symbol: .zgelsd_ ld: 0711-317 ERROR: Undefined symbol: .zgeev_ ld: 0711-317 ERROR: Undefined symbol: .dorgqr_ ld: 0711-317 ERROR: Undefined symbol: .dgeqrf_ ld: 0711-317 ERROR: Undefined symbol: .dpotrf_ ld: 0711-317 ERROR: Undefined symbol: .dgetrf_ ld: 0711-317 ERROR: Undefined symbol: .dgesdd_ ld: 0711-317 ERROR: Undefined symbol: .dgesv_ ld: 0711-317 ERROR: Undefined symbol: .dgelsd_ ld: 0711-317 ERROR: Undefined symbol: .zheevd_ ld: 0711-317 ERROR: Undefined symbol: .dsyevd_ ld: 0711-317 ERROR: Undefined symbol: .dgeev_ ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain more information. 
Also, I tried to compile scipy and got : compiling C++ sources C compiler: c++_r -DNDEBUG -O creating build/temp.aix-5.3-2.5/Lib/cluster creating build/temp.aix-5.3-2.5/Lib/cluster/src compile options: '-I/usr/local/lib/python2.5/site-packages/numpy/core/include -I/usr/local/include/python2.5 -c' c++_r: Lib/cluster/src/vq_wrap.cpp sh: c++_r: not found sh: c++_r: not found error: Command "c++_r -DNDEBUG -O -I/usr/local/lib/python2.5/site-packages/numpy/core/include -I/usr/local/include/python2.5 -c Lib/cluster/src/vq_wrap.cpp -o build/temp.aix-5.3-2.5/Lib/cluster/src/vq_wrap.o" failed with exit status 127 From zunzun at zunzun.com Thu Apr 5 04:23:53 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Thu, 5 Apr 2007 04:23:53 -0400 Subject: [SciPy-user] scipy.org Cookbook link not responding Message-ID: <20070405082353.GA15350@zunzun.com> Just note that the scipy.org Cookbook page http://scipy.org/Cookbook will not display properly, yielding: Cookbook OK The server encountered an internal error or misconfiguration and was unable to complete your request. Please contact the server administrator, root at localhost and inform them of the time the error occurred, and anything you might have done that may have caused the error. More information about this error may be available in the server error log. Apache/2.0.54 (Fedora) Server at scipy.org Port 80 James Phillips http://zunzun.com From oliver.tomic at matforsk.no Thu Apr 5 04:44:41 2007 From: oliver.tomic at matforsk.no (oliver.tomic at matforsk.no) Date: Thu, 5 Apr 2007 10:44:41 +0200 Subject: [SciPy-user] Big list of Numpy & Scipy users In-Reply-To: Message-ID: Hi list, I've put out our project on the scipy project site. Hopefully, this will become a very long list. Oliver scipy-user-bounces at scipy.org wrote on 05.04.2007 04:10:28: > On 4/4/07, Robert Kern wrote: > > Bill Baxter wrote: > > > Is there any place on the Wiki that lists all the known software that > > > uses Numpy in some way? 
> > > > >> > It would be nice to start collecting such a list if there isn't one > > > already. Screenshots would be nice too. > > > > There is no such list that I know of, but you may start one on the > wiki if you like. > > Ok, I made a start: http://www.scipy.org/Scipy_Projects > Anyone who has a project that depends on Numpy or Scipy, please go add > your info there! > > I haven't linked it from anywhere, because it looks pretty pathetic > right now with only three or four entries. But hopefully everyone > will jumps in and add their project to the list. > > Part of the idea is that this should be a good place to point > nay-sayers to when they say "meh - numpy... that's a niche project for > a handful of scientists." > > So ... hopefully a good portion of the links will be things other than > science projects. There will hopefully be a lot of things that > "ordinary users" would care about. :-) > > I couldn't figure out how to add an image, but if someone knows how to > do that, please do. > > --bb > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From raphael.langella at steria.cnes.fr Thu Apr 5 06:11:11 2007 From: raphael.langella at steria.cnes.fr (Langella Raphael) Date: Thu, 5 Apr 2007 12:11:11 +0200 Subject: [SciPy-user] RE : RE : RE : RE : RE : RE : Compiling numpy and scipy onAIX 5.3 Message-ID: <200704051009.l35A9xl29540@cnes.fr> > -----Message d'origine----- > De : scipy-user-bounces at scipy.org > [mailto:scipy-user-bounces at scipy.org] De la part de Langella Raphael > Envoy? : jeudi 5 avril 2007 10:17 > ? : SciPy Users List > Objet : [SciPy-user] RE : RE : RE : RE : RE : Compiling numpy > and scipy onAIX 5.3 > > > > -----Message d'origine----- > > De : scipy-user-bounces at scipy.org > > [mailto:scipy-user-bounces at scipy.org] De la part de David M. Cooke > > Envoy? : mercredi 4 avril 2007 20:11 > > ? 
: SciPy Users List > > Objet : Re: [SciPy-user] RE : RE : RE : RE : Compiling numpy > > and scipy on AIX 5.3 > > > > > > -----BEGIN PGP SIGNED MESSAGE----- > > Hash: SHA1 > > > > Langella Raphael wrote: > > >> Langella Raphael wrote: > > >>>>> I'd love to try the SVN version, but : > > >>>>> > > >>>>> $ svn co http://svn.scipy.org/svn/numpy/trunk numpy > > >>>>> svn: PROPFIND request failed on '/svn/numpy/trunk' > > >>>>> svn: PROPFIND of '/svn/numpy/trunk': 500 Server Error (http:// > > >>>>> svn.scipy.org) > > >>>>> > > >>>>> Am I missing something or is it a server problem as the error > > >>>>> message suggests ? > > >>>> I copy/pasted your `svn co ..` command into my terminal > > >> and it looks > > >>>> like it's working fine for me ... > > >>> So, it's probably a proxy problem. I'll try from home... > > >> If it's a proxy problem, using https instead of http > will probably > > >> work. > > > > > > Indeed. So here's what the SVN version gives me : > > > > > > creating build/temp.aix-5.3-2.5/numpy/core/blasdot > > > compile options: '-DNO_ATLAS_INFO=1 -Inumpy/core/blasdot > > > -Inumpy/core/include -Ibuild/src.aix-5.3-2.5/numpy/core > > -Inumpy/core/src -Inumpy/core/include > > > -I/Produits/publics/powerpc.AIX.5.2/python/2.5.0/include/python2.5 -c' > > > cc_r: numpy/core/blasdot/_dotblas.c > > > Traceback (most recent call last): > > > File "setup.py", line 89, in > > > setup_package() > > > File "setup.py", line 82, in setup_package > > > configuration=configuration ) > > > File "/tmp/numpy/numpy/distutils/core.py", line 174, in setup > > > return old_setup(**new_attr) > > > File "/usr/local/lib/python2.5/distutils/core.py", line > > 151, in setup > > > File "/usr/local/lib/python2.5/distutils/dist.py", line > > 974, in run_commands > > > File "/usr/local/lib/python2.5/distutils/dist.py", line > > 994, in run_command > > > File "/tmp/numpy/numpy/distutils/command/install.py", > > line 16, in run > > > r = old_install.run(self) > > > File > > 
"/usr/local/lib/python2.5/distutils/command/install.py", line > > 506, in run > > > File > > "/Produits/publics/powerpc.AIX.5.2/python/2.5.0/lib/python2.5/ > > cmd.py", line 333, in run_command > > > del help[cmd] > > > File "/usr/local/lib/python2.5/distutils/dist.py", line > > 994, in run_command > > > File > > "/usr/local/lib/python2.5/distutils/command/build.py", line > > 112, in run > > > File > > "/Produits/publics/powerpc.AIX.5.2/python/2.5.0/lib/python2.5/ > > cmd.py", line 333, in run_command > > > del help[cmd] > > > File "/usr/local/lib/python2.5/distutils/dist.py", line > > 994, in run_command > > > File "/tmp/numpy/numpy/distutils/command/build_ext.py", > > line 121, in run > > > self.build_extensions() > > > File > > "/usr/local/lib/python2.5/distutils/command/build_ext.py", > > line 407, in build_extensions > > > File "/tmp/numpy/numpy/distutils/command/build_ext.py", > > line 335, in build_extension > > > build_temp=self.build_temp,**kws) > > > File "/usr/local/lib/python2.5/distutils/ccompiler.py", > > line 845, in link_shared_object > > > File "/tmp/numpy/numpy/distutils/fcompiler/__init__.py", > > line 496, in link > > > self.spawn(command) > > > File "/tmp/numpy/numpy/distutils/ccompiler.py", line 29, > > in CCompiler_spawn > > > display = ' '.join(list(display)) > > > TypeError: sequence item 0: expected string, list found > > > > Ah, oops. Try it now. > > OK, compilation works fine without any optimization. As soon > as I try to link with blas, lapack, atlas or essl, I get the > following errors : compile options: '-Inumpy/core/src > -Inumpy/core/include -I/usr/local/include/python2.5 -c' > cc_r: _configtest.c > cc_r _configtest.o -o _configtest > ld: 0711-317 ERROR: Undefined symbol: .exp > ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain > more information. > ld: 0711-317 ERROR: Undefined symbol: .exp > ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain > more information. failure. 
> removing: _configtest.c _configtest.o > > /usr/local/lib/python2.5/config/ld_so_aix cc_r > -bI:/usr/local/lib/python2.5/config/python.exp > build/temp.aix-5.3-2.5/numpy/core/blasdot/_dotblas.o > -L/usr/lib -lblas -o build/lib.aix-5.3-2.5/numpy/core/_dotblas.so > ld: 0711-317 ERROR: Undefined symbol: .cblas_cdotc_sub > ld: 0711-317 ERROR: Undefined symbol: .cblas_zdotc_sub > ld: 0711-317 ERROR: Undefined symbol: .cblas_sdot > ld: 0711-317 ERROR: Undefined symbol: .cblas_ddot > ld: 0711-317 ERROR: Undefined symbol: .cblas_caxpy > ld: 0711-317 ERROR: Undefined symbol: .cblas_saxpy > ld: 0711-317 ERROR: Undefined symbol: .cblas_zaxpy > ld: 0711-317 ERROR: Undefined symbol: .cblas_daxpy > ld: 0711-317 ERROR: Undefined symbol: .cblas_cdotu_sub > ld: 0711-317 ERROR: Undefined symbol: .cblas_zdotu_sub > ld: 0711-317 ERROR: Undefined symbol: .cblas_cgemv > ld: 0711-317 ERROR: Undefined symbol: .cblas_sgemv > ld: 0711-317 ERROR: Undefined symbol: .cblas_zgemv > ld: 0711-317 ERROR: Undefined symbol: .cblas_dgemv > ld: 0711-317 ERROR: Undefined symbol: .cblas_cgemm > ld: 0711-317 ERROR: Undefined symbol: .cblas_zgemm > ld: 0711-317 ERROR: Undefined symbol: .cblas_sgemm > ld: 0711-317 ERROR: Undefined symbol: .cblas_dgemm > ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain > more information. 
building 'numpy.lib._compiled_base' > extension compiling C sources C compiler: cc_r -DNDEBUG -O > > /usr/local/lib/python2.5/config/ld_so_aix cc_r > -bI:/usr/local/lib/python2.5/config/python.exp > build/temp.aix-5.3-2.5/numpy/linalg/lapack_litemodule.o > -L/usr/local/lib -lflapack -lfblas -o > build/lib.aix-5.3-2.5/numpy/linalg/lapack_lite.so > ld: 0711-317 ERROR: Undefined symbol: .zungqr_ > ld: 0711-317 ERROR: Undefined symbol: .zgeqrf_ > ld: 0711-317 ERROR: Undefined symbol: .zpotrf_ > ld: 0711-317 ERROR: Undefined symbol: .zgetrf_ > ld: 0711-317 ERROR: Undefined symbol: .zgesdd_ > ld: 0711-317 ERROR: Undefined symbol: .zgesv_ > ld: 0711-317 ERROR: Undefined symbol: .zgelsd_ > ld: 0711-317 ERROR: Undefined symbol: .zgeev_ > ld: 0711-317 ERROR: Undefined symbol: .dorgqr_ > ld: 0711-317 ERROR: Undefined symbol: .dgeqrf_ > ld: 0711-317 ERROR: Undefined symbol: .dpotrf_ > ld: 0711-317 ERROR: Undefined symbol: .dgetrf_ > ld: 0711-317 ERROR: Undefined symbol: .dgesdd_ > ld: 0711-317 ERROR: Undefined symbol: .dgesv_ > ld: 0711-317 ERROR: Undefined symbol: .dgelsd_ > ld: 0711-317 ERROR: Undefined symbol: .zheevd_ > ld: 0711-317 ERROR: Undefined symbol: .dsyevd_ > ld: 0711-317 ERROR: Undefined symbol: .dgeev_ > ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain > more information. The problem comes from my version of lapack. I tried another one, which somebody else had already compiled and it works :) But there's still the problem of scipy not finding my C++ compiler, or invoking the wrong one. From grante at visi.com Thu Apr 5 10:27:45 2007 From: grante at visi.com (Grant Edwards) Date: Thu, 5 Apr 2007 14:27:45 +0000 (UTC) Subject: [SciPy-user] RE : RE : RE : RE : RE : RE : Compiling numpy and scipy on AIX 5.3 References: <200704051009.l35A9xl29540@cnes.fr> Message-ID: WTF is adding all of the "RE :" prefixes to the subject line? Somebody needs to fix his broken mail/news client. -- Grant Edwards grante Yow! MERYL STREEP is my at obstetrician!
visi.com From cookedm at physics.mcmaster.ca Thu Apr 5 10:43:55 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Thu, 05 Apr 2007 10:43:55 -0400 Subject: [SciPy-user] Compiling numpy and scipy on AIX 5.3 In-Reply-To: <200704051009.l35A9xl29540@cnes.fr> References: <200704051009.l35A9xl29540@cnes.fr> Message-ID: <46150B2B.3040408@physics.mcmaster.ca> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Langella Raphael wrote: > But there's still the problem of scipy not finding my C++ compiler, or invoking the wrong one. That's probably a problem with how python was compiled. Set the environment variable CXX to your C++ compiler, and try again. - -- |>|\/|< /------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.6 (Darwin) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFGFQsrN9ixZKFWjRQRAnUeAJ4pQ9oJw2vka6deRdlBCYpAlIyX3gCeNMSI KUwnvioU+XJP2rkvuepJXXY= =OIJN -----END PGP SIGNATURE----- From robert.kern at gmail.com Thu Apr 5 12:59:10 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 05 Apr 2007 11:59:10 -0500 Subject: [SciPy-user] Linear interpolation with delaunay triangulation In-Reply-To: <4614A601.9020407@univ-pau.fr> References: <4614A601.9020407@univ-pau.fr> Message-ID: <46152ADE.3060308@gmail.com> paul cristini wrote: > Hello, > I'm trying to use the delaunay package to perform interpolation on > irregularly spaced data and I was wondering if there was a possibility > to use the linear interpolator with this package. I get an error saying > the object is not callable when replacing nn with linear in the example. I beg your pardon. I never implemented the __call__ interface for the linear interpolator or the linear interpolation for "unstructured" query points.
However, the interface for evaluating on a grid is there: LinearInterpolator(triangulation, z)[ystart:ystop:ysteps*1j, xstart:xstop:xsteps*1j] -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Thu Apr 5 12:59:51 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 05 Apr 2007 11:59:51 -0500 Subject: [SciPy-user] scipy.org Cookbook link not responding In-Reply-To: <20070405082353.GA15350@zunzun.com> References: <20070405082353.GA15350@zunzun.com> Message-ID: <46152B07.2030206@gmail.com> zunzun at zunzun.com wrote: > Just note that the scipy.org Cookbook page > > http://scipy.org/Cookbook > > will not display properly, yielding: Seems to be okay to me, now. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From strawman at astraw.com Thu Apr 5 14:12:25 2007 From: strawman at astraw.com (Andrew Straw) Date: Thu, 05 Apr 2007 11:12:25 -0700 Subject: [SciPy-user] [Numpy-discussion] Big list of Numpy & Scipy users In-Reply-To: References: Message-ID: <46153C09.5020909@astraw.com> Bill Baxter wrote: > On 4/4/07, Robert Kern wrote: > >> Bill Baxter wrote: >> >>> Is there any place on the Wiki that lists all the known software that >>> uses Numpy in some way? >>> >>> >>>> It would be nice to start collecting such a list if there isn't one >>>> >>> already. Screenshots would be nice too. >>> >> There is no such list that I know of, but you may start one on the wiki if you like. >> > > Ok, I made a start: http://www.scipy.org/Scipy_Projects > Great idea. I renamed the page to http://www.scipy.org/Projects so Numpy-only users wouldn't feel excluded.
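A side note on the linear interpolation question answered earlier in this digest: per query point, a Delaunay-based linear interpolator just performs barycentric interpolation on the triangle containing the point. That core step is easy to write out by hand (a pure-Python sketch for illustration only; the function name is made up and is not the delaunay package's API):

```python
def lin_interp_triangle(p, tri, z):
    """Linear (barycentric) interpolation of z at point p inside triangle tri.

    tri is a sequence of three (x, y) vertices, z the three values at them.
    Returns None if p lies outside the triangle.
    """
    (x1, y1), (x2, y2), (x3, y3) = tri
    px, py = p
    # Barycentric coordinates l1, l2, l3 of p with respect to the triangle.
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    l1 = ((y2 - y3) * (px - x3) + (x3 - x2) * (py - y3)) / det
    l2 = ((y3 - y1) * (px - x3) + (x1 - x3) * (py - y3)) / det
    l3 = 1.0 - l1 - l2
    if min(l1, l2, l3) < -1e-12:
        return None  # outside the triangle
    return l1 * z[0] + l2 * z[1] + l3 * z[2]
```

Locating the containing triangle efficiently is the part the compiled package provides; given that, the weighted sum above is all the "linear" interpolator computes per point.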
From zunzun at zunzun.com Thu Apr 5 14:46:55 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Thu, 5 Apr 2007 14:46:55 -0400 Subject: [SciPy-user] scipy.org Cookbook link not responding In-Reply-To: <46152B07.2030206@gmail.com> References: <20070405082353.GA15350@zunzun.com> <46152B07.2030206@gmail.com> Message-ID: <20070405184655.GA28511@zunzun.com> Yes it is, that's strange. Thank you for checking. James On Thu, Apr 05, 2007 at 11:59:51AM -0500, Robert Kern wrote: > zunzun at zunzun.com wrote: > > Just note that the scipy.org Cookbook page > > > > http://scipy.org/Cookbook > > > > will not display properly, yielding: > > Seems to be okay to me, now. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From v-nijs at kellogg.northwestern.edu Thu Apr 5 23:48:04 2007 From: v-nijs at kellogg.northwestern.edu (Vincent Nijs) Date: Thu, 05 Apr 2007 22:48:04 -0500 Subject: [SciPy-user] Experimental design In-Reply-To: <20070405184655.GA28511@zunzun.com> Message-ID: Does anyone know of code available for scipy or python that generates factorial experimental designs? The macros I currently use are available freely for SAS (see link below). http://support.sas.com/techsup/tnote/tnote_stat.html#market Any suggestions are welcome. Here is a brief description of the macros I use. "This macro calculates reasonable sizes for main-effects experimental designs. It tries to find design sizes in which perfect balance and orthogonality can occur or at least sizes in which violations of orthogonality and balance are minimized."
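As a baseline for the question above: a *full* factorial design (every combination of factor levels) is a one-liner with the Python standard library; what the SAS macros add is finding balanced, orthogonal *fractions* of that grid, which is the hard part. A sketch (the helper name is made up):

```python
from itertools import product

def full_factorial(levels):
    """All runs of a full factorial design.

    levels gives the number of levels per factor, e.g. [2, 3] for a 2x3
    design. Each run is a tuple of 0-based level indices, one per factor.
    """
    return list(product(*(range(n) for n in levels)))

design = full_factorial([2, 3, 2])
print(len(design))  # 2 * 3 * 2 = 12 runs
```

A full factorial is trivially balanced and orthogonal but grows multiplicatively with the number of factors, which is exactly why fractional designs like those the macros produce are needed in practice.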
"The MktTex macro is a full-featured linear designer that can handle simple problems like main-effects designs and more complicated problems including designs with interactions and restrictions on which levels can appear together. The macro is particularly designed to easily create the kinds of linear designs that marketing researchers need for conjoint and choice experiments. It does not create, for example, Latin Squares, randomized blocks, response surface, resolution IV, or other specialized designs that are not as widely used in marketing research." Vincent On 4/5/07 1:46 PM, "zunzun at zunzun.com" wrote: > Yes it is, that's strange. Thank you for checking. > > James > > On Thu, Apr 05, 2007 at 11:59:51AM -0500, Robert Kern wrote: >> zunzun at zunzun.com wrote: >>> Just note that the scipy.org Cookbook page >>> >>> http://scipy.org/Cookbook >>> >>> will not display properly, yielding: >> >> Seems to be okay to me, now. >> >> -- >> Robert Kern >> >> "I have come to believe that the whole world is an enigma, a harmless enigma >> that is made terrible by our own mad attempt to interpret it as though it >> had >> an underlying truth." >> -- Umberto Eco >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Vincent R.
Nijs Assistant Professor of Marketing Kellogg School of Management, Northwestern University 2001 Sheridan Road, Evanston, IL 60208-2001 Phone: +1-847-491-4574 Fax: +1-847-491-2498 E-mail: v-nijs at kellogg.northwestern.edu Skype: vincentnijs From peridot.faceted at gmail.com Fri Apr 6 00:34:42 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Fri, 6 Apr 2007 00:34:42 -0400 Subject: [SciPy-user] Orthogonal polynomials Message-ID: Hi, I have been using the orthogonal polynomials implemented in scipy.special. They are very convenient, but I have been encountering numerical headaches. For example: In [127]: chebyt(100)(0.5) Out[127]: 3468.56206844 In [128]: cos(100*arccos(0.5)) Out[128]: -0.5 In some cases, I realize that there are numerical obstacles to computing high-degree orthogonal polynomials. Certainly their coefficients will be a problem. But in some cases (for example for Chebyshev polynomials of the first kind) there are more accurate, possibly more efficient, ways to compute them. These do not necessarily allow one to compute the weights efficiently (although root-finding does make it possible) but they would nevertheless be useful. Does any code like this exist in scipy? Is there any that would be convenient to import? How difficult is it to add special-case code to the orthogonal polynomial objects? I am willing to supply a patch for the Chebyshev, at least, if I can do it cleanly. I am particularly interested in the Legendre polynomials, but I can get away with using the Chebyshev polynomials in my application. Thanks, Anne M. Archibald From raphael.langella at steria.cnes.fr Fri Apr 6 03:06:58 2007 From: raphael.langella at steria.cnes.fr (Langella Raphael) Date: Fri, 6 Apr 2007 09:06:58 +0200 Subject: [SciPy-user] Compiling numpy and scipy on AIX 5.3 Message-ID: <200704060705.l3675hl26264@cnes.fr> Lol, you're right :) My broken mail client is Microsoft Outlook... in French (hence the RE) Maybe I'll just subscribe with a gmail address.
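On the Chebyshev evaluation issue Anne raises above: for x in [-1, 1], both the identity T_n(x) = cos(n arccos x) and the three-term recurrence T_n(x) = 2x T_{n-1}(x) - T_{n-2}(x) evaluate T_n stably, whereas chebyt(100)(0.5) evaluates the polynomial from its expanded monomial coefficients, which loses all accuracy at high degree. A sketch of both stable routes (pure Python, illustration only):

```python
from math import acos, cos

def chebyshev_recurrence(n, x):
    """T_n(x) via the three-term recurrence; stable for x in [-1, 1]."""
    t_prev, t = 1.0, x  # T_0(x), T_1(x)
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, 2.0 * x * t - t_prev
    return t

def chebyshev_trig(n, x):
    """T_n(x) via the identity T_n(x) = cos(n * arccos(x)); valid for |x| <= 1."""
    return cos(n * acos(x))

print(chebyshev_recurrence(100, 0.5))  # -0.5 up to rounding, not 3468.56...
print(chebyshev_trig(100, 0.5))        # -0.5 up to rounding
```

The recurrence has the advantage of working outside [-1, 1] as well, and the same Clenshaw-style evaluation generalizes to Legendre and the other classical families via their own three-term recurrences.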
> -----Message d'origine----- > De : scipy-user-bounces at scipy.org > [mailto:scipy-user-bounces at scipy.org] De la part de Grant Edwards > Envoyé : jeudi 5 avril 2007 16:28 > À : scipy-user at scipy.org > Objet : Re: [SciPy-user] RE : RE : RE : RE : RE : RE : > Compiling numpy and scipy on AIX 5.3 > > > WTF is adding all of the "RE :" prefixes to the subject line? > > Somebody needs to fix his broken mail/news client. > > -- > Grant Edwards grante Yow! > MERYL STREEP is my > at obstetrician! > visi.com > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From raphael.langella at steria.cnes.fr Fri Apr 6 04:33:47 2007 From: raphael.langella at steria.cnes.fr (Langella Raphael) Date: Fri, 6 Apr 2007 10:33:47 +0200 Subject: [SciPy-user] Compiling numpy and scipy on AIX 5.3 Message-ID: <200704060834.l368Xv021176@cnes.fr> Something awkward is going on : I typed : export CXX="xlc++_r" And got : compiling C++ sources C compiler: xlc++_r -DNDEBUG -O creating build/temp.aix-5.3-2.5/Lib/cluster creating build/temp.aix-5.3-2.5/Lib/cluster/src compile options: '-I/usr/local/lib/python2.5/site-packages/numpy/core/include -I/usr/local/include/python2.5 -c' xlc++_r: Lib/cluster/src/vq_wrap.cpp "Lib/cluster/src/vq_wrap.cpp", line 582.1: 1540-1101 (W) A return value of type "int" is expected. "Lib/cluster/src/vq_wrap.cpp", line 590.1: 1540-1101 (W) A return value of type "int" is expected.
xlc++_r cc_r -bI:/usr/local/lib/python2.5/config/python.exp build/temp.aix-5.3-2.5/Lib/cluster/src/vq_wrap.o -Lbuild/temp.aix-5.3-2.5 -o build/lib.aix-5.3-2.5/scipy/cluster/_vq.so xlc++_r: 1501-228 input file cc_r not found xlc++_r: 1501-228 input file cc_r not found error: Command "xlc++_r cc_r -bI:/usr/local/lib/python2.5/config/python.exp build/temp.aix-5.3-2.5/Lib/cluster/src/vq_wrap.o -Lbuild/temp.aix-5.3-2.5 -o build/lib.aix-5.3-2.5/scipy/cluster/_vq.so" failed with exit status 252 What the heck is happening here? BTW, I tried with 0.5.2 and SVN > -----Message d'origine----- > De : scipy-user-bounces at scipy.org > [mailto:scipy-user-bounces at scipy.org] De la part de David M. Cooke > Envoyé : jeudi 5 avril 2007 16:44 > À : SciPy Users List > Objet : Re: [SciPy-user] Compiling numpy and scipy on AIX 5.3 > > > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > Langella Raphael wrote: > > But there's still the problem of scipy not finding my C++ > compiler, or > > invoking the wrong one. > > That's probably a problem with how python was compiled. Set > the environment variable CXX to your C++ compiler, and try again. > > - -- > |>|\/|< > /------------------------------------------------------------------\ > |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ > |cookedm at physics.mcmaster.ca > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.6 (Darwin) > Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org > > iD8DBQFGFQsrN9ixZKFWjRQRAnUeAJ4pQ9oJw2vka6deRdlBCYpAlIyX3gCeNMSI > KUwnvioU+XJP2rkvuepJXXY= > =OIJN > -----END PGP SIGNATURE----- > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From cookedm at physics.mcmaster.ca Fri Apr 6 06:59:39 2007 From: cookedm at physics.mcmaster.ca (David M.
Cooke) Date: Fri, 06 Apr 2007 06:59:39 -0400 Subject: [SciPy-user] Compiling numpy and scipy on AIX 5.3 In-Reply-To: <200704060834.l368Xv021176@cnes.fr> References: <200704060834.l368Xv021176@cnes.fr> Message-ID: <4616281B.9050104@physics.mcmaster.ca> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Langella Raphael wrote: > Something awkward is going on : > > I typed : > export CXX="xlc++_r" > > And got : > > compiling C++ sources > C compiler: xlc++_r -DNDEBUG -O > > creating build/temp.aix-5.3-2.5/Lib/cluster > creating build/temp.aix-5.3-2.5/Lib/cluster/src > compile options: '-I/usr/local/lib/python2.5/site-packages/numpy/core/include -I/usr/local/include/python2.5 -c' > xlc++_r: Lib/cluster/src/vq_wrap.cpp > "Lib/cluster/src/vq_wrap.cpp", line 582.1: 1540-1101 (W) A return value of type "int" is expected. > "Lib/cluster/src/vq_wrap.cpp", line 590.1: 1540-1101 (W) A return value of type "int" is expected. > xlc++_r cc_r -bI:/usr/local/lib/python2.5/config/python.exp build/temp.aix-5.3-2.5/Lib/cluster/src/vq_wrap.o -Lbuild/temp.aix-5.3-2.5 -o build/lib.aix-5.3-2.5/scipy/cluster/_vq.so > xlc++_r: 1501-228 input file cc_r not found > xlc++_r: 1501-228 input file cc_r not found > error: Command "xlc++_r cc_r -bI:/usr/local/lib/python2.5/config/python.exp build/temp.aix-5.3-2.5/Lib/cluster/src/vq_wrap.o -Lbuild/temp.aix-5.3-2.5 -o build/lib.aix-5.3-2.5/scipy/cluster/_vq.so" failed with exit status 252 > > What the heck is happening here? BTW, I tried with 0.5.2 and SVN The command to compile an extension is stored as a list; C++ extensions are compiled by replacing the first item with the name of the C++ compiler. Obviously, that doesn't work here, as the first item isn't the compiler. I'll take a look. For now, if you aren't going to use scipy.cluster, you can disable it by editing Lib/setup.py and commenting out the appropriate line. It's the only subpackage compiled by default that requires C++ at install time (the others are in the sandbox, and weave).
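The failure mode described above can be sketched in a few lines (simplified, with list values taken from the error log; the real logic lives in numpy.distutils): on AIX the link command begins with the ld_so_aix wrapper script followed by the compiler, so substituting the C++ compiler for element 0 replaces the wrapper and leaves cc_r behind as a stray input-file argument:

```python
# Simplified sketch of the C++-link bug (illustration only).
# On AIX the link command starts with a wrapper script, then the compiler:
linker_cmd = ["/usr/local/lib/python2.5/config/ld_so_aix", "cc_r",
              "-bI:/usr/local/lib/python2.5/config/python.exp"]

# The buggy path swaps the C++ compiler in for the *first* element,
# which is only correct when element 0 is the compiler itself:
cxx_cmd = ["xlc++_r"] + linker_cmd[1:]
print(cxx_cmd)  # ['xlc++_r', 'cc_r', ...] -> xlc++_r now sees cc_r as an input file

# One possible fix: replace the C compiler wherever it actually appears.
fixed_cmd = ["xlc++_r" if part == "cc_r" else part for part in linker_cmd]
print(fixed_cmd)
```

This reproduces exactly the observed "xlc++_r cc_r ..." command and the "input file cc_r not found" error quoted in the message above.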
> David Cooke wrote: >>> Langella Raphael wrote: >>> But there's still the problem of scipy not finding my C++ compiler, or >>> invoking the wrong one. >> That's probably a problem with how python was compiled. Set >> the environment variable CXX to your C++ compiler, and try again. - -- |>|\/|< /------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.6 (Darwin) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFGFigaN9ixZKFWjRQRAm6sAKCzPuDEuqng3GW7iwo0Qu6XWUqhHACgnp3k 3EoD7zRZoywCKAvB5PYFQuw= =3QDS -----END PGP SIGNATURE----- From rjchacko at gmail.com Fri Apr 6 09:26:35 2007 From: rjchacko at gmail.com (Ranjit Chacko) Date: Fri, 6 Apr 2007 09:26:35 -0400 Subject: [SciPy-user] ising model Message-ID: I'd like to write a simulation of an Ising model just to play around with scipy, and I was wondering what would be the easiest way to display an NxN lattice in scipy. Would you use vpython? pygame? Thanks, Ranjit From raphael.langella at steria.cnes.fr Fri Apr 6 10:29:57 2007 From: raphael.langella at steria.cnes.fr (Langella Raphael) Date: Fri, 6 Apr 2007 16:29:57 +0200 Subject: [SciPy-user] Compiling numpy and scipy on AIX 5.3 Message-ID: <200704061429.l36ET2l00158@cnes.fr> > -----Message d'origine----- > > OK, compilation works fine without any optimization. As soon > > as I try to link with blas, lapack, atlas or essl, I get the > > following errors : compile options: '-Inumpy/core/src > > -Inumpy/core/include -I/usr/local/include/python2.5 -c' > > cc_r: _configtest.c > > cc_r _configtest.o -o _configtest > > ld: 0711-317 ERROR: Undefined symbol: .exp > > ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain > > more information.
> > ld: 0711-317 ERROR: Undefined symbol: .exp > > ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain > > more information. failure. > > removing: _configtest.c _configtest.o > > > > /usr/local/lib/python2.5/config/ld_so_aix cc_r > > -bI:/usr/local/lib/python2.5/config/python.exp > > build/temp.aix-5.3-2.5/numpy/core/blasdot/_dotblas.o > > -L/usr/lib -lblas -o build/lib.aix-5.3-2.5/numpy/core/_dotblas.so > > ld: 0711-317 ERROR: Undefined symbol: .cblas_cdotc_sub > > ld: 0711-317 ERROR: Undefined symbol: .cblas_zdotc_sub > > ld: 0711-317 ERROR: Undefined symbol: .cblas_sdot > > ld: 0711-317 ERROR: Undefined symbol: .cblas_ddot > > ld: 0711-317 ERROR: Undefined symbol: .cblas_caxpy > > ld: 0711-317 ERROR: Undefined symbol: .cblas_saxpy > > ld: 0711-317 ERROR: Undefined symbol: .cblas_zaxpy > > ld: 0711-317 ERROR: Undefined symbol: .cblas_daxpy > > ld: 0711-317 ERROR: Undefined symbol: .cblas_cdotu_sub > > ld: 0711-317 ERROR: Undefined symbol: .cblas_zdotu_sub > > ld: 0711-317 ERROR: Undefined symbol: .cblas_cgemv > > ld: 0711-317 ERROR: Undefined symbol: .cblas_sgemv > > ld: 0711-317 ERROR: Undefined symbol: .cblas_zgemv > > ld: 0711-317 ERROR: Undefined symbol: .cblas_dgemv > > ld: 0711-317 ERROR: Undefined symbol: .cblas_cgemm > > ld: 0711-317 ERROR: Undefined symbol: .cblas_zgemm > > ld: 0711-317 ERROR: Undefined symbol: .cblas_sgemm > > ld: 0711-317 ERROR: Undefined symbol: .cblas_dgemm > > ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain > > more information. 
building 'numpy.lib._compiled_base' > > extension compiling C sources C compiler: cc_r -DNDEBUG -O > > > > /usr/local/lib/python2.5/config/ld_so_aix cc_r > > -bI:/usr/local/lib/python2.5/config/python.exp > > build/temp.aix-5.3-2.5/numpy/linalg/lapack_litemodule.o > > -L/usr/local/lib -lflapack -lfblas -o > > build/lib.aix-5.3-2.5/numpy/linalg/lapack_lite.so > > ld: 0711-317 ERROR: Undefined symbol: .zungqr_ > > ld: 0711-317 ERROR: Undefined symbol: .zgeqrf_ > > ld: 0711-317 ERROR: Undefined symbol: .zpotrf_ > > ld: 0711-317 ERROR: Undefined symbol: .zgetrf_ > > ld: 0711-317 ERROR: Undefined symbol: .zgesdd_ > > ld: 0711-317 ERROR: Undefined symbol: .zgesv_ > > ld: 0711-317 ERROR: Undefined symbol: .zgelsd_ > > ld: 0711-317 ERROR: Undefined symbol: .zgeev_ > > ld: 0711-317 ERROR: Undefined symbol: .dorgqr_ > > ld: 0711-317 ERROR: Undefined symbol: .dgeqrf_ > > ld: 0711-317 ERROR: Undefined symbol: .dpotrf_ > > ld: 0711-317 ERROR: Undefined symbol: .dgetrf_ > > ld: 0711-317 ERROR: Undefined symbol: .dgesdd_ > > ld: 0711-317 ERROR: Undefined symbol: .dgesv_ > > ld: 0711-317 ERROR: Undefined symbol: .dgelsd_ > > ld: 0711-317 ERROR: Undefined symbol: .zheevd_ > > ld: 0711-317 ERROR: Undefined symbol: .dsyevd_ > > ld: 0711-317 ERROR: Undefined symbol: .dgeev_ > > ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain > > more information. > > The problem comes from my version of lapack. I tried another > one, which somebody else had already compiled and it works :) > But there's still the problem of scipy not finding my C++ > compiler, or invoking the wrong one. Oops, disregard the previous statement, I made a mistake setting the LAPACK variable and it worked only because it used only internal lapack functions (blas is included under /lib). So, I still have all these undefined symbols when I try to compile numpy with any lapack library. Note that I'm just reporting the problem, I'm not waiting or expecting a solution.
I'm already quite happy to have been able to compile a working version of numpy (damn AIX). From raphael.langella at steria.cnes.fr Fri Apr 6 11:04:32 2007 From: raphael.langella at steria.cnes.fr (Langella Raphael) Date: Fri, 6 Apr 2007 17:04:32 +0200 Subject: [SciPy-user] RE : Compiling numpy and scipy on AIX 5.3 Message-ID: <200704061503.l36F3ul14649@cnes.fr> I just compressed the attachment... > -----Message d'origine----- > De : Langella Raphael > Envoyé : vendredi 6 avril 2007 16:59 > À : 'SciPy Users List' > Objet : [SciPy-user] Compiling numpy and scipy on AIX 5.3 > > > > -----Message d'origine----- > > De : scipy-user-bounces at scipy.org > > [mailto:scipy-user-bounces at scipy.org] De la part de David M. Cooke > > Envoyé : vendredi 6 avril 2007 13:00 > > À : SciPy Users List > > Objet : Re: [SciPy-user] Compiling numpy and scipy on AIX 5.3 > > > > > > -----BEGIN PGP SIGNED MESSAGE----- > > Hash: SHA1 > > > > Langella Raphael wrote: > > > Something awkward is going on : > > > > > > I typed : > > > export CXX="xlc++_r" > > > > > > And got : > > > > > > compiling C++ sources > > > C compiler: xlc++_r -DNDEBUG -O > > > > > > creating build/temp.aix-5.3-2.5/Lib/cluster > > > creating build/temp.aix-5.3-2.5/Lib/cluster/src > > > compile options: > > > > > '-I/usr/local/lib/python2.5/site-packages/numpy/core/include > > -I/usr/local/include/python2.5 -c' > > > xlc++_r: Lib/cluster/src/vq_wrap.cpp > > > "Lib/cluster/src/vq_wrap.cpp", line 582.1: 1540-1101 (W) A return > > > value of type "int" is expected. > > "Lib/cluster/src/vq_wrap.cpp", line > > > 590.1: 1540-1101 (W) A return value of type "int" is expected.
> > > xlc++_r cc_r -bI:/usr/local/lib/python2.5/config/python.exp > > > xlc++build/temp.aix-5.3-2.5/Lib/cluster/src/vq_wrap.o > > -Lbuild/temp.aix-5.3-2.5 -o > build/lib.aix-5.3-2.5/scipy/cluster/_vq.so > > > xlc++_r: 1501-228 input file cc_r not found > > > xlc++_r: 1501-228 input file cc_r not found > > > error: Command "xlc++_r cc_r > > > -bI:/usr/local/lib/python2.5/config/python.exp > > > build/temp.aix-5.3-2.5/Lib/cluster/src/vq_wrap.o > > > -Lbuild/temp.aix-5.3-2.5 -o > > > build/lib.aix-5.3-2.5/scipy/cluster/_vq.so" failed with > exit status > > > 252 > > > > > > What the heck is happening here? BTW, I tried with 0.5.2 and SVN > > > > The command to compile an extension is stored as a list; C++ > > extensions are compiled by replacing the first item with the > > name of the C++ compiler. Obviously, that doesn't work here, > > as the first item isn't the compiler. I'll take a look. > > > > For now, if you aren't going to use scipy.cluster, you can > > disable it by editing Lib/setup.py and commenting out the > > appropriate line. It's the only subpackage compiled by default > > that requires C++ at install time (the others are in the > > sandbox, and weave). > > It worked with 0.5.2 > But I have a lot of errors when testing: failures=20, > errors=28 I attached the full compile and test log. > > I also tried the SVN but the sparse submodule also requires > C++, so I commented it out. The compile failed with: > > xlf:f77: build/src.aix-5.3-2.5/Lib/stats/mvn-f2pywrappers.f > ** f2pyinitdkblck === End of Compilation 1 === > 1501-510 Compilation successful for file mvn-f2pywrappers.f. 
> /usr/local/lib/python2.5/config/ld_so_aix xlf95 > -bI:/usr/local/lib/python2.5/config/python.exp -bshared > -F/ptmp/tmpmB5fHP_xlf.cfg > build/temp.aix-5.3-2.5/build/src.aix-5.3-2.5/Lib/stats/mvnmodu > le.o > build/temp.aix-5.3-2.5/build/src.aix-5.3-2.5/fortranobject.o > build/temp.aix-5.3-2.5/Lib/stats/mvndst.o > build/temp.aix-5.3-2.5/build/src.aix-5.3-2.5/Lib/stats/mvn-f2p > ywrappers.o -Lbuild/temp.aix-5.3-2.5 -o > build/lib.aix-5.3-2.5/scipy/stats/mvn.so > xlf95: 1501-262 One or more input object files contain IPA > information: specify -qipa for additional optimization. > building 'scipy.ndimage._nd_image' extension compiling C > sources C compiler: cc_r -DNDEBUG -O > > creating build/temp.aix-5.3-2.5/Lib/ndimage > creating build/temp.aix-5.3-2.5/Lib/ndimage/src > compile options: '-ILib/ndimage/src > -I/usr/local/lib/python2.5/site-packages/numpy/core/include > -I/usr/local/lib/python2.5/site-packages/numpy/core/include > -I/usr/local/include/python2.5 -c' > cc_r: Lib/ndimage/src/ni_interpolation.c > "Lib/ndimage/src/ni_interpolation.c", line 144.9: 1506-046 > (S) Syntax error. "Lib/ndimage/src/ni_interpolation.c", line > 145.22: 1506-1118 (W) Character constant 'in' has more than 1 > character. "Lib/ndimage/src/ni_interpolation.c", line 144.12: > 1506-045 (S) Undeclared identifier Integer. > "Lib/ndimage/src/ni_interpolation.c", line 144.9: 1506-046 > (S) Syntax error. "Lib/ndimage/src/ni_interpolation.c", line > 145.22: 1506-1118 (W) Character constant 'in' has more than 1 > character. "Lib/ndimage/src/ni_interpolation.c", line 144.12: > 1506-045 (S) Undeclared identifier Integer. 
> error: Command "cc_r -DNDEBUG -O -ILib/ndimage/src > -I/usr/local/lib/python2.5/site-packages/numpy/core/include > -I/usr/local/lib/python2.5/site-packages/numpy/core/include > -I/usr/local/include/python2.5 -c > Lib/ndimage/src/ni_interpolation.c -o > build/temp.aix-5.3-2.5/Lib/ndimage/src/ni_interpolation.o" > failed with exit status 1 > -------------- next part -------------- A non-text attachment was scrubbed... Name: scipy-AIX-build&test.zip Type: application/x-zip-compressed Size: 43224 bytes Desc: scipy-AIX-build&test.zip URL: From aisaac at american.edu Fri Apr 6 12:23:31 2007 From: aisaac at american.edu (Alan Isaac) Date: Fri, 6 Apr 2007 11:23:31 -0500 Subject: [SciPy-user] ising model In-Reply-To: References: Message-ID: On Fri, 6 Apr 2007, Ranjit Chacko wrote: > what would be the easiest way to display an NxN lattice in > scipy http://matplotlib.sourceforge.net/matplotlib.pylab.html#-imshow hth, Alan Isaac From edschofield at gmail.com Fri Apr 6 17:12:33 2007 From: edschofield at gmail.com (Edschofield) Date: Fri, 06 Apr 2007 23:12:33 +0200 Subject: [SciPy-user] pric 06-Apr-2007 Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: latest_price06-Apr-2007.zip Type: application/octet-stream Size: 38790 bytes Desc: not available URL: From ebrosh at nana.co.il Sat Apr 7 14:24:01 2007 From: ebrosh at nana.co.il (Eli Brosh) Date: Sat, 7 Apr 2007 21:24:01 +0300 Subject: [SciPy-user] SciPy+NumPy WinXP installer for AMD Duron Message-ID: <957526FB6E347743AAB42B212AB54FDA7A5AEF@NANAMAILBACK1.nanamail.co.il> Hello I encountered some bugs when trying to use SciPy (see my posting "Bugs in special"). I suspect that these bugs result from an incompatibility between the numpy/scipy Windows installers distributed by Enthought and by the Scipy.org website and my AMD Duron 1.3 GHz processor. I see that for such cases, the recommended solution is to build numpy and scipy from source code. 
Is there some easier way for overcoming this incompatibility? Does anyone have a ready WinXP installer for early AMD Duron processors? Or, perhaps, an installer is not needed and the replacement of a few files could suffice? Thanks Eli Brosh -------------- next part -------------- An HTML attachment was scrubbed... URL: From s.mientki at ru.nl Sun Apr 8 04:25:44 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Sun, 08 Apr 2007 10:25:44 +0200 Subject: [SciPy-user] (numerical) "solving" 2-dimensional equations ? Message-ID: <4618A708.3010206@ru.nl> Hello all, I'm trying to build a graphical calculator, for time-series and (simple) 2-dimensional equations. Now my math has become very rusty, and I think it shouldn't be too difficult, but I can't find the right clue (or isn't the problem so simple)? The idea is that the user types in the equation, specifies the range of one of the axes, then the program draws the picture of that equation in the specified range. i.e. for a circle sqr (x) + sqr (y) = 4 x = linspace ( -10, 10, 1000 ) Who can give me some clues? thanks, Stef Mientki From zunzun at zunzun.com Sun Apr 8 05:15:34 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Sun, 8 Apr 2007 05:15:34 -0400 Subject: [SciPy-user] Juicy tidbit from the ODRPACK User's Guide Message-ID: <20070408091534.GA15901@zunzun.com> I've been looking for a simple way to calculate a covariance matrix for nonlinear curve fitting so that I could add fit statistics to my web site, and have been wrangling back and forth over various packages, files, languages, more files, etc. for about two months now. I *almost* started to add additional ODR binding code to Robert Kern's odrpack.c, when lo and behold: On page 33 of the ODRPACK User's Reference Guide one finds: If MAXIT = 0 then no iterations will be taken, but whatever computations are required to complete the final computation report will be made. 
For example, by setting MAXIT = 0 and the third digit of JOB to zero, the user can compute the covariance matrix Vbeta for the input values beta and delta. Ohhhhh, I get it - READ THE DOCUMENTATION! Boy, do I ever feel like a total dork. On the other hand I've learned a lot about interfacing FORTRAN and Python, many thanks to Pearu Peterson and Robert Kern. James Phillips http://zunzun.com From dgalant at zahav.net.il Sun Apr 8 07:11:38 2007 From: dgalant at zahav.net.il (dgalant) Date: Sun, 8 Apr 2007 14:11:38 +0300 Subject: [SciPy-user] problem compiling scipy Message-ID: <109F7650-2420-42AE-8872-1EDA41967565@zahav.net.il> I am having a problem compiling scipy0.52. Everything is fine until the build of linsolve when I get the following sequence of messages: building extension "scipy.linsolve._zsuperlu" sources building extension "scipy.linsolve._dsuperlu" sources building extension "scipy.linsolve._csuperlu" sources building extension "scipy.linsolve._ssuperlu" sources building extension "scipy.linsolve.umfpack.__umfpack" sources creating build/src.macosx-10.3-fat-2.4/scipy/linsolve creating build/src.macosx-10.3-fat-2.4/scipy/linsolve/umfpack adding 'Lib/linsolve/umfpack/umfpack.i' to sources. 
creating build/src.macosx-10.3-fat-2.4/Lib/linsolve creating build/src.macosx-10.3-fat-2.4/Lib/linsolve/umfpack swig: Lib/linsolve/umfpack/umfpack.i swig -python -o build/src.macosx-10.3-fat-2.4/Lib/linsolve/umfpack/ _umfpack_wrap.c -outdir build/src.macosx-10.3-fat-2.4/Lib/linsolve/ umfpack Lib/linsolve/umfpack/umfpack.i Lib/linsolve/umfpack/umfpack.i:188: Error: Unable to find 'umfpack.h' Lib/linsolve/umfpack/umfpack.i:189: Error: Unable to find 'umfpack_solve.h' Lib/linsolve/umfpack/umfpack.i:190: Error: Unable to find 'umfpack_defaults.h' Lib/linsolve/umfpack/umfpack.i:191: Error: Unable to find 'umfpack_triplet_to_col.h' Lib/linsolve/umfpack/umfpack.i:192: Error: Unable to find 'umfpack_col_to_triplet.h' Lib/linsolve/umfpack/umfpack.i:193: Error: Unable to find 'umfpack_transpose.h' Lib/linsolve/umfpack/umfpack.i:194: Error: Unable to find 'umfpack_scale.h' Lib/linsolve/umfpack/umfpack.i:196: Error: Unable to find 'umfpack_report_symbolic.h' Lib/linsolve/umfpack/umfpack.i:197: Error: Unable to find 'umfpack_report_numeric.h' Lib/linsolve/umfpack/umfpack.i:198: Error: Unable to find 'umfpack_report_info.h' Lib/linsolve/umfpack/umfpack.i:199: Error: Unable to find 'umfpack_report_control.h' Lib/linsolve/umfpack/umfpack.i:211: Error: Unable to find 'umfpack_symbolic.h' Lib/linsolve/umfpack/umfpack.i:212: Error: Unable to find 'umfpack_numeric.h' Lib/linsolve/umfpack/umfpack.i:221: Error: Unable to find 'umfpack_free_symbolic.h' Lib/linsolve/umfpack/umfpack.i:222: Error: Unable to find 'umfpack_free_numeric.h' Lib/linsolve/umfpack/umfpack.i:244: Error: Unable to find 'umfpack_get_lunz.h' Lib/linsolve/umfpack/umfpack.i:268: Error: Unable to find 'umfpack_get_numeric.h' error: command 'swig' failed with exit status 1 Can anyone help me? 
Thank you, David Galant Mac mini running MacOS X.4.9 swig 1.3.31 gcc 4.01 From wbaxter at gmail.com Sun Apr 8 07:23:47 2007 From: wbaxter at gmail.com (Bill Baxter) Date: Sun, 8 Apr 2007 20:23:47 +0900 Subject: [SciPy-user] Sparse indexing workarounds Message-ID: Does anyone have an easy (and efficient way) to update a submatrix of a big sparse matrix? K = scipy.sparse.lil_matrix(bigN,bigN) ... conn = [1,4,11,12] K[ix_(conn,conn)] += elemK where elemK is a 4x4 dense matrix. This kind of thing is very commonly needed in FEM codes for assembling the global stiffness matrix. But sparse doesn't seem to support either += or indexing with the open grid of indices returned by ix_. Thanks, --bb From dgalant at zahav.net.il Sun Apr 8 08:23:44 2007 From: dgalant at zahav.net.il (dgalant) Date: Sun, 8 Apr 2007 15:23:44 +0300 Subject: [SciPy-user] problem compiling scipy followup Message-ID: <87B43897-C086-4B76-9C61-C3DDB0654E60@zahav.net.il> I included the header files from umfpack (from netlib) and got much farther, but this time, I can't guess what to do. Any help would be appreciated. Thanks, David Galant > I am having a problem compiling scipy0.52. Everything is fine until > the build of linsolve when I get the following sequence of messages: > > building extension "scipy.linsolve._zsuperlu" sources > building extension "scipy.linsolve._dsuperlu" sources > building extension "scipy.linsolve._csuperlu" sources > building extension "scipy.linsolve._ssuperlu" sources > building extension "scipy.linsolve.umfpack.__umfpack" sources > creating build/src.macosx-10.3-fat-2.4/scipy/linsolve > creating build/src.macosx-10.3-fat-2.4/scipy/linsolve/umfpack > adding 'Lib/linsolve/umfpack/umfpack.i' to sources. 
> creating build/src.macosx-10.3-fat-2.4/Lib/linsolve > creating build/src.macosx-10.3-fat-2.4/Lib/linsolve/umfpack > swig: Lib/linsolve/umfpack/umfpack.i > swig -python -o build/src.macosx-10.3-fat-2.4/Lib/linsolve/umfpack/ > _umfpack_wrap.c -outdir build/src.macosx-10.3-fat-2.4/Lib/linsolve/ > umfpack Lib/linsolve/umfpack/umfpack.i > Lib/linsolve/umfpack/umfpack.i:188: Error: Unable to find 'umfpack.h' > Lib/linsolve/umfpack/umfpack.i:189: Error: Unable to find > 'umfpack_solve.h' > Lib/linsolve/umfpack/umfpack.i:190: Error: Unable to find > 'umfpack_defaults.h' > Lib/linsolve/umfpack/umfpack.i:191: Error: Unable to find > 'umfpack_triplet_to_col.h' > Lib/linsolve/umfpack/umfpack.i:192: Error: Unable to find > 'umfpack_col_to_triplet.h' > Lib/linsolve/umfpack/umfpack.i:193: Error: Unable to find > 'umfpack_transpose.h' > Lib/linsolve/umfpack/umfpack.i:194: Error: Unable to find > 'umfpack_scale.h' > Lib/linsolve/umfpack/umfpack.i:196: Error: Unable to find > 'umfpack_report_symbolic.h' > Lib/linsolve/umfpack/umfpack.i:197: Error: Unable to find > 'umfpack_report_numeric.h' > Lib/linsolve/umfpack/umfpack.i:198: Error: Unable to find > 'umfpack_report_info.h' > Lib/linsolve/umfpack/umfpack.i:199: Error: Unable to find > 'umfpack_report_control.h' > Lib/linsolve/umfpack/umfpack.i:211: Error: Unable to find > 'umfpack_symbolic.h' > Lib/linsolve/umfpack/umfpack.i:212: Error: Unable to find > 'umfpack_numeric.h' > Lib/linsolve/umfpack/umfpack.i:221: Error: Unable to find > 'umfpack_free_symbolic.h' > Lib/linsolve/umfpack/umfpack.i:222: Error: Unable to find > 'umfpack_free_numeric.h' > Lib/linsolve/umfpack/umfpack.i:244: Error: Unable to find > 'umfpack_get_lunz.h' > Lib/linsolve/umfpack/umfpack.i:268: Error: Unable to find > 'umfpack_get_numeric.h' > error: command 'swig' failed with exit status 1 > > > Can anyone help me? 
> > Thank you, > > David Galant > > Mac mini running MacOS X.4.9 > swig 1.3.31 > gcc 4.01 > > From dgalant at zahav.net.il Sun Apr 8 08:28:26 2007 From: dgalant at zahav.net.il (dgalant) Date: Sun, 8 Apr 2007 15:28:26 +0300 Subject: [SciPy-user] problem compiling scipy followup Message-ID: I included the header files from umfpack (from netlib) and got much farther, but this time, I can't guess what to do. Any help would be appreciated. Thanks, David Galant Sorry I forgot to include the error /usr/bin/ld: can't locate file for: -lcc_dynamic collect2: ld returned 1 exit status /usr/bin/ld: can't locate file for: -lcc_dynamic collect2: ld returned 1 exit status error: Command "/usr/local/bin/g77 -g -Wall -undefined dynamic_lookup -bundle build/temp.macosx-10.3-fat-2.4/build/src.macosx-10.3-fat-2.4/ Lib/fftpack/_fftpackmodule.o build/temp.macosx-10.3-fat-2.4/Lib/ fftpack/src/zfft.o build/temp.macosx-10.3-fat-2.4/Lib/fftpack/src/ drfft.o build/temp.macosx-10.3-fat-2.4/Lib/fftpack/src/zrfft.o build/ temp.macosx-10.3-fat-2.4/Lib/fftpack/src/zfftnd.o build/ temp.macosx-10.3-fat-2.4/build/src.macosx-10.3-fat-2.4/ fortranobject.o -L/usr/local/lib -L/usr/local/lib/gcc/i686-apple- darwin8.8.1/3.4.0 -Lbuild/temp.macosx-10.3-fat-2.4 -ldfftpack -lfftw3 -lg2c -lcc_dynamic -o build/lib.macosx-10.3-fat-2.4/scipy/fftpack/ _fftpack.so" failed with exit status 1 From aisaac at american.edu Sun Apr 8 11:01:49 2007 From: aisaac at american.edu (Alan G Isaac) Date: Sun, 8 Apr 2007 11:01:49 -0400 Subject: [SciPy-user] (numerical) "solving" 2-dimensional equations ? In-Reply-To: <4618A708.3010206@ru.nl> References: <4618A708.3010206@ru.nl> Message-ID: On Sun, 08 Apr 2007, Stef Mientki apparently wrote: > sqr (x) + sqr (y) = 4 Draw it parametrically. 
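For the circle sqr(x) + sqr(y) = 4 under discussion, "parametrically" means tracing x = 2*cos(t), y = 2*sin(t) for t in [0, 2*pi] instead of solving for y. A minimal sketch (numpy assumed; the actual plot call is left as a comment, since pylab, gnuplot, or anything else would do):

```python
import numpy as np

# Parametrize the circle x**2 + y**2 = 4 as (2*cos(t), 2*sin(t)):
# the +/- branch problem of y = sqrt(4 - x**2) never arises.
t = np.linspace(0.0, 2.0 * np.pi, 1000)
x = 2.0 * np.cos(t)
y = 2.0 * np.sin(t)

# Every sampled point satisfies the implicit equation to rounding error.
print(np.abs(x**2 + y**2 - 4.0).max() < 1e-12)  # -> True

# To draw it, e.g.: import pylab; pylab.plot(x, y); pylab.axis('equal'); pylab.show()
```

The same trick covers any curve with a known parametrization; for implicit curves without one, the pixel-grid idea raised later in the thread is the fallback.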
Cheers, Alan Isaac From rjchacko at gmail.com Sun Apr 8 12:04:46 2007 From: rjchacko at gmail.com (Ranjit Chacko) Date: Sun, 8 Apr 2007 12:04:46 -0400 Subject: [SciPy-user] scipy install error Message-ID: I just installed python and scipy from here: http://pythonmac.org/packages/py24-fat/index.html When I tried to import modules from scipy I got the following error: >>> from scipy import * RuntimeError: module compiled against version 1000002 of C-API but this version of numpy is 1000009 Traceback (most recent call last): File "", line 1, in -toplevel- from scipy import * File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/io/__init__.py", line 8, in -toplevel- from numpyio import packbits, unpackbits, bswap, fread, fwrite, \ ImportError: numpy.core.multiarray failed to import How do I fix this? Thanks, Ranjit -------------- next part -------------- An HTML attachment was scrubbed... URL: From s.mientki at ru.nl Sun Apr 8 14:43:17 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Sun, 08 Apr 2007 20:43:17 +0200 Subject: [SciPy-user] (numerical) "solving" 2-dimensional equations ? In-Reply-To: References: <4618A708.3010206@ru.nl> Message-ID: <461937C5.50800@ru.nl> Alan G Isaac wrote: > On Sun, 08 Apr 2007, Stef Mientki apparently wrote: > >> sqr (x) + sqr (y) = 4 >> > > Draw it parametrically. > Sorry, what do you mean by that, (how do it, where do I find more information)? cheers, Stef > Cheers, > Alan Isaac > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > From hasslerjc at comcast.net Sun Apr 8 15:17:16 2007 From: hasslerjc at comcast.net (John Hassler) Date: Sun, 08 Apr 2007 15:17:16 -0400 Subject: [SciPy-user] Problems building SciPy Message-ID: <46193FBC.3040804@comcast.net> Since these seem to be popular at the moment .... 
I try to keep C: for the OS and use D:\Program Files\ for everything else, as far as possible. I installed msys and MinGW there, and it worked for simple tests, but something in the numpy build could not handle the blank in "Program Files," even with quotes in the path. Ok, reinstall and let msys and MinGW go where they want (ie., C:). Then I found that I had to use a double backslash in the path for [atlas], rather than the single backslash in the example on the web page. Ok, numpy now builds, installs, and passes all tests. When I tried to build SciPy, however, it builds for a long time, but finally ends with the following error. What am I missing? thanks john c:\MinGW\bin\g77.exe -g -Wall -mno-cygwin -shared build\temp.win32-2.5\Release\build\src.win32-2.5\lib\fftpack\_fftpackmodule.o build\temp.win32-2.5\Release\lib\fftpack\src\zfft.o build\temp.win32-2.5\Release\lib\fftpack\src\drfft.o build\temp.win32-2.5\Release\lib\fftpack\src\zrfft.o build\temp.win32-2.5\Release\lib\fftpack\src\zfftnd.o build\temp.win32-2.5\Release\build\src.win32-2.5\fortranobject.o -Lc:\MinGW\lib -Lc:\MinGW\lib\gcc\mingw32\3.4.2 -Ld:\program files\python25\libs -Ld:\program files\python25\PCBuild -Lbuild\temp.win32-2.5 -ldfftpack -lpython25 -lg2c -lgcc -lmsvcr71 -o build\lib.win32-2.5\scipy\fftpack\_fftpack.pyd c:\MinGW\lib\gcc\mingw32\3.4.2/libgcc.a(__main.o): undefined reference to `__EH_FRAME_BEGIN__' c:\MinGW\lib\gcc\mingw32\3.4.2/libgcc.a(__main.o): undefined reference to `__EH_FRAME_BEGIN__' collect2: ld returned 1 exit status error: Command "c:\MinGW\bin\g77.exe -g -Wall -mno-cygwin -shared build\temp.win32-2.5\Release\build\src.win32-2.5\lib\fftpack\_fftpackmodule.o build\temp.win32-2.5\Release\lib\fftpack\src\zfft.o build\temp.win32-2.5\Release\lib\fftpack\src\drfft.o build\temp.win32-2.5\Release\lib\fftpack\src\zrfft.o build\temp.win32-2.5\Release\lib\fftpack\src\zfftnd.o build\temp.win32-2.5\Release\build\src.win32-2.5\fortranobject.o -Lc:\MinGW\lib 
-Lc:\MinGW\lib\gcc\mingw32\3.4.2 "-Ld:\program files\python25\libs" "-Ld:\program files\python25\PCBuild" -Lbuild\temp.win32-2.5 -ldfftpack -lpython25 -lg2c -lgcc -lmsvcr71 -o build\lib.win32-2.5\scipy\fftpack\_fftpack.pyd" failed with exit status 1 From strawman at astraw.com Sun Apr 8 16:05:24 2007 From: strawman at astraw.com (Andrew Straw) Date: Sun, 08 Apr 2007 13:05:24 -0700 Subject: [SciPy-user] Problems building SciPy In-Reply-To: <46193FBC.3040804@comcast.net> References: <46193FBC.3040804@comcast.net> Message-ID: <46194B04.8060801@astraw.com> John Hassler wrote: > Then I found that I had to use a double backslash in the path for > [atlas], rather than the single backslash in the example on the web > page. Ok, numpy now builds, installs, and passes all tests. What web page? If it's on the wiki, we'd appreciate it if you'd fix it, or if you can't, please reference that exact page so that some can fix it. (Sorry, I can't help answer your question...) From hasslerjc at comcast.net Sun Apr 8 16:39:44 2007 From: hasslerjc at comcast.net (John Hassler) Date: Sun, 08 Apr 2007 16:39:44 -0400 Subject: [SciPy-user] Problems building SciPy In-Reply-To: <46194B04.8060801@astraw.com> References: <46193FBC.3040804@comcast.net> <46194B04.8060801@astraw.com> Message-ID: <46195310.3030600@comcast.net> An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Sun Apr 8 17:58:33 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 08 Apr 2007 16:58:33 -0500 Subject: [SciPy-user] problem compiling scipy followup In-Reply-To: References: Message-ID: <46196589.90005@gmail.com> dgalant wrote: > I included the header files from umfpack (from netlib) and got much > farther, but this time, I can't guess what to do. Any help would be > appreciated. You can't use g77 with the Universal Python binaries. You must use gfortran. 
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Sun Apr 8 17:59:14 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 08 Apr 2007 16:59:14 -0500 Subject: [SciPy-user] scipy install error In-Reply-To: References: Message-ID: <461965B2.90104@gmail.com> Ranjit Chacko wrote: > I just installed python and scipy from here: > http://pythonmac.org/packages/py24-fat/index.html > > When I tried to import modules from scipy I got the following error: > >>>> from scipy import * > RuntimeError: module compiled against version 1000002 of C-API but this > version of numpy is 1000009 Build scipy from source. That binary was compiled against an older version of numpy. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From wbaxter at gmail.com Sun Apr 8 18:36:04 2007 From: wbaxter at gmail.com (Bill Baxter) Date: Mon, 9 Apr 2007 07:36:04 +0900 Subject: [SciPy-user] (numerical) "solving" 2-dimensional equations ? In-Reply-To: <461937C5.50800@ru.nl> References: <4618A708.3010206@ru.nl> <461937C5.50800@ru.nl> Message-ID: This isn't really an answer but I think the keyword for googling would be "algebraic equations" with words like "plotting" or "graphing". For the simple case you gave you can solve by making it a function of one var or the other: y = +/- sqrt( 4 - x^2 ) But note the +/- there because there are two solutions to anything of the form x^2 = c If you're looking to handle arbitrary algebraic curves of arbitrary degree, it's not so simple. One simple way to get an approximation is to evaluate it implicitly on a pixel grid. For each pixel you look at the value of abs( x^2 + y^2 - 4 ). 
If it's less than some small number then paint the pixel black. This isn't a very robust way to draw it though, just simple. :-) On 4/9/07, Stef Mientki wrote: > > > Alan G Isaac wrote: > > On Sun, 08 Apr 2007, Stef Mientki apparently wrote: > > > >> sqr (x) + sqr (y) = 4 > >> > > > > Draw it parametrically. > > > Sorry, what do you mean by that, (how do it, where do I find more > information)? > > cheers, > Stef > > Cheers, > > Alan Isaac From aisaac at american.edu Sun Apr 8 18:43:54 2007 From: aisaac at american.edu (Alan G Isaac) Date: Sun, 8 Apr 2007 18:43:54 -0400 Subject: [SciPy-user] (numerical) "solving" 2-dimensional equations ? In-Reply-To: <461937C5.50800@ru.nl> References: <4618A708.3010206@ru.nl> <461937C5.50800@ru.nl> Message-ID: >> On Sun, 08 Apr 2007, Stef Mientki apparently wrote: >>> sqr (x) + sqr (y) = 4 > Alan G Isaac wrote: >> Draw it parametrically. On Sun, 08 Apr 2007, Stef Mientki apparently wrote: > Sorry, what do you mean by that, (how do it, where do I find more > information)? http://t16web.lanl.gov/Kawano/gnuplot/parametric-e.html hth, Alan Isaac From lorenzo.isella at gmail.com Sun Apr 8 21:47:42 2007 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Mon, 09 Apr 2007 03:47:42 +0200 Subject: [SciPy-user] Suggestions for Integro-Differential Equations Message-ID: <46199B3E.7040002@gmail.com> Dear All, I would like to solve numerically the following equation (I use latex-style notation) [for those who are interested, this is Smoluchowski equation describing the coagulation of aerosol particles]: \frac{dn(v,t)}{dt}=\frac{1}{2}\int_{v_0}^{v-v_0} K(v-q,q)n(v-q,t)n(q,t)dq-n(v,t)\int_{v_0}^\infty K(q,v)n(q,t)dq, where K(q,v) is the appropriate collision kernel and n is a particle concentration. 
I am not familiar with integro-differential equations (which could be the real problem) and I'll add that the equation above can be expressed also in a discrete form, which is supposed to be even worse to be dealt with numerically and that I am thus leaving out. Any suggestions about how to deal with the equation above in Python? I wonder if there is some tool for this sort of problems. Cheers Lorenzo From peridot.faceted at gmail.com Mon Apr 9 00:28:18 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Mon, 9 Apr 2007 00:28:18 -0400 Subject: [SciPy-user] Suggestions for Integro-Differential Equations In-Reply-To: <46199B3E.7040002@gmail.com> References: <46199B3E.7040002@gmail.com> Message-ID: On 08/04/07, Lorenzo Isella wrote: > Dear All, > I would like to solve numerically the following equation (I use > latex-style notation) [for those who are interested, this is > Smoluchowski equation describing the coagulation of aerosol particles]: > \frac{dn(v,t)}{dt}=\frac{1}{2}\int_{v_0}^{v-v_0} > K(v-q,q)n(v-q,t)n(q,t)dq-n(v,t)\int_{v_0}^\infty K(q,v)n(q,t)dq, > where K(q,v) is the appropriate collision kernel and n is a particle > concentration. I'm not very familiar with integro-differential equations either. I suspect there may be some specialized techniques for solving them (you might consider some of the techniques used for solving integral equations and see if they generalize; look at Numerical Recipes http://www.nrbook.com/b/bookcpdf.php ), but often a cruder approach can work if you're willing for it to take longer and give lower-quality answers. One approach would be to discretize n(v,t) along the first argument, to the vector {n_i(t)}, perhaps by sampling at the roots of some orthogonal polynomial, or perhaps evenly so you can use a Simpson-like rule (both are implemented in scipy; I would choose based on how smooth you expect the answer to be). Then you have \frac{dn_i(t)}{dt} = F(n_i(t)), where F is a slow, complicated function of the vector n_i. 
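As a toy illustration of that semi-discretization (everything here is invented for the sketch: a constant kernel, a plain Riemann-sum quadrature, and a crude forward-Euler march standing in for the scipy.integrate solvers):

```python
import numpy as np

# Discretize the size axis v; the grid, kernel and initial data are all
# hypothetical, chosen only to make the structure of F visible.
v = np.linspace(1.0, 10.0, 50)
dv = v[1] - v[0]

def K(q, w):
    """Stand-in collision kernel; a real one depends on q and w."""
    return 1.0

def F(n):
    """Semi-discretized right-hand side:
    gain_i = (1/2) * sum_j K(v_i - v_j, v_j) * n_{i-j} * n_j * dv
    loss_i = n_i * sum_j K(v_j, v_i) * n_j * dv
    """
    gain = np.zeros_like(n)
    for i in range(len(v)):
        j = np.arange(i + 1)
        gain[i] = 0.5 * np.sum(K(v[i] - v[j], v[j]) * n[i - j] * n[j]) * dv
    loss = n * np.sum(K(v, v) * n) * dv
    return gain - loss

n = np.exp(-v)              # initial concentration n(v, 0)
for _ in range(100):
    n = n + 1e-3 * F(n)     # explicit Euler; an ODE solver would do this properly

print(np.all(np.isfinite(n)) and np.all(n > 0))  # -> True
```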
This is now an ordinary differential equation, readily solved by the tools in scipy.integrate. If your kernel is singular (as seems depressingly common), you may find you have to use custom orthogonal polynomials for the integration; these can be found by computing a few numerical integrations. If brute force and ignorance fail, browse through the literature, or trap a numerical analyst on their way to the coffee machine, and find out about tools for solving integrodifferential equations. For example, there's a 2005 paper by Struckmeier in Numerical Algorithms that looks like it might give you pointers not just to his clever new techniques but to the best simple techniques for solving the Smoluchowski equation. Anne M. Archibald From s.mientki at ru.nl Mon Apr 9 04:29:51 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Mon, 09 Apr 2007 10:29:51 +0200 Subject: [SciPy-user] (numerical) "solving" 2-dimensional equations ? In-Reply-To: References: <4618A708.3010206@ru.nl> <461937C5.50800@ru.nl> Message-ID: <4619F97F.5030707@ru.nl> Thanks Alan and Bill, after following your links, and searching around, the answer to my question wasn't as simple as I expected. I think I need something like Maxima + Kayali. Bill your pixel idea but at the moment indeed be the simplest, and therefor the fastest way to get started. cheers, Stef Alan G Isaac wrote: >>> On Sun, 08 Apr 2007, Stef Mientki apparently wrote: >>> >>>> sqr (x) + sqr (y) = 4 >>>> > > > >> Alan G Isaac wrote: >> >>> Draw it parametrically. >>> > > > On Sun, 08 Apr 2007, Stef Mientki apparently wrote: > >> Sorry, what do you mean by that, (how do it, where do I find more >> information)? 
>> > > > http://t16web.lanl.gov/Kawano/gnuplot/parametric-e.html > > hth, > Alan Isaac > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > From rmay at ou.edu Mon Apr 9 12:03:59 2007 From: rmay at ou.edu (Ryan May) Date: Mon, 09 Apr 2007 11:03:59 -0500 Subject: [SciPy-user] scipy.io.loadmat incompatible with Numpy 1.0.2 Message-ID: <461A63EF.8090203@ou.edu> Hi, As far as I can tell, the new Numpy 1.0.2 broke scipy.io.loadmat. Here's what I get when I try to open a file with using loadmat with numpy 1.0.2 (on gentoo AMD64): In [2]: loadmat('tep_iqdata.mat') --------------------------------------------------------------------------- exceptions.AttributeError Traceback (most recent call last) /usr/lib64/python2.4/site-packages/scipy/io/mio.py in loadmat(file_name, mdict, appendmat, basename, **kwargs) 94 ''' 95 MR = mat_reader_factory(file_name, appendmat, **kwargs) ---> 96 matfile_dict = MR.get_variables() 97 if mdict is not None: 98 mdict.update(matfile_dict) /usr/lib64/python2.4/site-packages/scipy/io/miobase.py in get_variables(self, variable_names) 267 variable_names = [variable_names] 268 self.mat_stream.seek(0) --> 269 mdict = self.file_header() 270 mdict['__globals__'] = [] 271 while not self.end_of_stream(): /usr/lib64/python2.4/site-packages/scipy/io/mio5.py in file_header(self) 508 hdict = {} 509 hdr = self.read_dtype(self.dtypes['file_header']) --> 510 hdict['__header__'] = hdr['description'].strip(' \t\n\000') 511 v_major = hdr['version'] >> 8 512 v_minor = hdr['version'] & 0xFF AttributeError: 'numpy.ndarray' object has no attribute 'strip' Reverting to numpy 1.0.1 works fine for the same code. So the question is, does scipy need an update, or did something unintended creep into Numpy 1.0.2? 
(Hence the cross-post) Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma From cimrman3 at ntc.zcu.cz Tue Apr 10 04:47:10 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 10 Apr 2007 10:47:10 +0200 Subject: [SciPy-user] Sparse indexing workarounds In-Reply-To: References: Message-ID: <461B4F0E.8040509@ntc.zcu.cz> Bill Baxter wrote: > Does anyone have an easy (and efficient way) to update a submatrix of > a big sparse matrix? > > K = scipy.sparse.lil_matrix(bigN,bigN) > ... > conn = [1,4,11,12] > K[ix_(conn,conn)] += elemK > > where elemK is a 4x4 dense matrix. > > This kind of thing is very commonly needed in FEM codes for assembling > the global stiffness matrix. But sparse doesn't seem to support > either += or indexing with the open grid of indices returned by ix_. You may have a look at http://ui505p06-mbs.ntc.zcu.cz/sfe/FEUtilsExample it uses CSR matrix though... r. From hakan.jakobsson at gmail.com Tue Apr 10 05:11:51 2007 From: hakan.jakobsson at gmail.com (Håkan Jakobsson) Date: Tue, 10 Apr 2007 11:11:51 +0200 Subject: [SciPy-user] Sparse indexing workarounds In-Reply-To: <461B4F0E.8040509@ntc.zcu.cz> References: <461B4F0E.8040509@ntc.zcu.cz> Message-ID: <687bb3e80704100211necb743fi8fd7b9293c79afc3@mail.gmail.com> You can use pysparse instead of sparse. The update_add_mask function will do the trick. /Håkan J On 4/10/07, Robert Cimrman wrote: > > Bill Baxter wrote: > > Does anyone have an easy (and efficient way) to update a submatrix of > > a big sparse matrix? > > > > K = scipy.sparse.lil_matrix(bigN,bigN) > > ... > > conn = [1,4,11,12] > > K[ix_(conn,conn)] += elemK > > > > where elemK is a 4x4 dense matrix. > > > > This kind of thing is very commonly needed in FEM codes for assembling > > the global stiffness matrix. But sparse doesn't seem to support > > either += or indexing with the open grid of indices returned by ix_. 
> > You may have a look at > http://ui505p06-mbs.ntc.zcu.cz/sfe/FEUtilsExample > it uses CSR matrix though... > > r. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Tue Apr 10 08:01:33 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 10 Apr 2007 14:01:33 +0200 Subject: [SciPy-user] Bugs in special In-Reply-To: <957526FB6E347743AAB42B212AB54FDA95B9E4@NANAMAILBACK1.nanamail.co.il> References: <957526FB6E347743AAB42B212AB54FDA7A5AED@NANAMAILBACK1.nanamail.co.il> <957526FB6E347743AAB42B212AB54FDA95B9E4@NANAMAILBACK1.nanamail.co.il> Message-ID: <461B7C9D.5070906@iam.uni-stuttgart.de> Eli Brosh wrote: > Thank you Nils > I looked at the tickets > http://projects.scipy.org/scipy/scipy/ticket/301 > > http://projects.scipy.org/scipy/scipy/ticket/387 > > It seems that indeed, there are some serious bugs in Bessel and Kelvin functions in the 'special' package. > However, the bugs reported in these "tickets" are not the bug I encountered. > Perhaps they are related. > > Eli > > > > > ________________________________ > > From: scipy-user-bounces at scipy.org on behalf of Nils Wagner > Sent: Mon 02/04/2007 13:59 > To: SciPy Users List > Subject: Re: [SciPy-user] Bugs in special > > > > On Mon, 2 Apr 2007 13:03:03 +0300 > "Eli Brosh" wrote: > >> Hello >> I am trying to convert from MATLAB to Python with SciPy. >> I am using Python 2.4.3 for Windows (Enthought Edition) >> As a start, I tried to use the special functions from >> the SciPy "special" module. >> There, I encountered some problems that may be a result >> of bugs in SciPy.
>> The session (in IDLE) goes like:
>>
>> >>> from scipy import *
>> >>> x=.5
>> >>> special.jv(0,x)
>> 0.938469807241
>> >>> y=.5+1.j
>> >>> y
>> (0.5+1j)
>> >>> special.jv(0,y)
>> ================================ RESTART ================================
>>
>> When I try to put a complex argument in special.jv, I get an error message from the operating system (windows XP): It says "pythonw.exe has encountered a problem and needs to close. We are sorry for the inconvenience." The IDLE does not close but it is restarted: There appears a line:
>>
>> ================================ RESTART ================================
>>
>> This does not occur when the argument of special.jv is real. However, even real arguments in special.ber and special.ker provoked the same crash and the same error message.
>>
>> Is this a bug in SciPy or am I doing something wrong ?
>>
>> Thanks
>> Eli
>>
> See the tickets > > http://projects.scipy.org/scipy/scipy/ticket/301 > > http://projects.scipy.org/scipy/scipy/ticket/387 > > Nils > I cannot reproduce your bug here.
(SuSE Linux 64-bit)

>>> y=.5+1.j
>>> y
(0.5+1j)
>>> special.jv(0,y)
(1.179856630403078-0.27372678559101116j)
>>> import scipy
>>> scipy.__version__
'0.5.3.dev2901'

Nils From gerard.vermeulen at grenoble.cnrs.fr Tue Apr 10 16:12:50 2007 From: gerard.vermeulen at grenoble.cnrs.fr (Gerard Vermeulen) Date: Tue, 10 Apr 2007 22:12:50 +0200 Subject: [SciPy-user] ANN: PyQwt-5.0.0 released Message-ID: <20070410221250.472a97fa@zombie.grenoble.cnrs.fr>

What is PyQwt ( http://pyqwt.sourceforge.net ) ?

- it is a set of Python bindings for the Qwt C++ class library which extends the Qt framework with widgets for scientific and engineering applications. It provides a widget to plot 2-dimensional data and various widgets to display and control bounded or unbounded floating point values.
- it requires and extends PyQt, a set of Python bindings for Qt.
- it supports the use of PyQt, Qt, Qwt, and optionally NumPy or SciPy in a GUI Python application or in an interactive Python session.
- it runs on POSIX, Mac OS X and Windows platforms (practically any platform supported by Qt and Python).
- it plots fast: displaying data with 100,000 points takes about 0.1 s.
- it is licensed under the GPL with an exception to allow dynamic linking with non-free releases of Qt and PyQt.

PyQwt-5.0.0 is a major release with support for Qt-4.x, many API changes compared to PyQwt-4.2.x, and an NSIS Windows installer. PyQwt-5.0.0 supports:

1. Python-2.5, -2.4 or -2.3.
2. PyQt-3.18 (to be released in April 2007), or PyQt-3.17.
3. PyQt-4.2 (to be released in April 2007), or PyQt-4.1.x.
4. SIP-4.6 (to be released in April 2007), or SIP-4.5.x.
5. Qt-3.3.x, or -3.2.x.
6. Qt-4.2.x, or -4.1.x.
7. Recent versions of NumPy, numarray, and/or Numeric.
Enjoy -- Gerard Vermeulen From lbolla at gmail.com Wed Apr 11 06:11:51 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Wed, 11 Apr 2007 12:11:51 +0200 Subject: [SciPy-user] PerformancePython -- comparison with Matlab Message-ID: <80c99e790704110311n5afe9e77m690f737bcc1a8f9@mail.gmail.com> Dear all, I've "expanded" the performance tests by Prabhu Ramachandran in http://www.scipy.org/PerformancePython with a comparison with matlab. for anyone interested, see http://lbolla.wordpress.com/2007/04/11/numerical-computing-matlab-vs-pythonnumpyweave/ . it's still a work in progress, but worth seeing by anyone still uncertain in switching from matlab to numpy. Regards, Lorenzo. -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Wed Apr 11 06:33:15 2007 From: matthew.brett at gmail.com (Matthew Brett) Date: Wed, 11 Apr 2007 11:33:15 +0100 Subject: [SciPy-user] PerformancePython -- comparison with Matlab In-Reply-To: <80c99e790704110311n5afe9e77m690f737bcc1a8f9@mail.gmail.com> References: <80c99e790704110311n5afe9e77m690f737bcc1a8f9@mail.gmail.com> Message-ID: <1e2af89e0704110333x5f3186bct9b0e6d62089419f0@mail.gmail.com> Thanks - that's very helpful. Any chance of adding that to the main PerformancePython python page? Matthew On 4/11/07, lorenzo bolla wrote: > Dear all, > I've "expanded" the performance tests by Prabhu Ramachandran in > http://www.scipy.org/PerformancePython with a comparison > with matlab. > for anyone interested, see > http://lbolla.wordpress.com/2007/04/11/numerical-computing-matlab-vs-pythonnumpyweave/. > it's still a work in progress, but worth seeing by anyone still uncertain in > switching from matlab to numpy. > Regards, > Lorenzo. 
> _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From matthieu.brucher at gmail.com Wed Apr 11 06:42:32 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 11 Apr 2007 12:42:32 +0200 Subject: [SciPy-user] PerformancePython -- comparison with Matlab In-Reply-To: <1e2af89e0704110333x5f3186bct9b0e6d62089419f0@mail.gmail.com> References: <80c99e790704110311n5afe9e77m690f737bcc1a8f9@mail.gmail.com> <1e2af89e0704110333x5f3186bct9b0e6d62089419f0@mail.gmail.com> Message-ID: Excellent, I've put the link on my french blog :) 2007/4/11, Matthew Brett : > > Thanks - that's very helpful. Any chance of adding that to the main > PerformancePython python page? > > Matthew > > On 4/11/07, lorenzo bolla wrote: > > Dear all, > > I've "expanded" the performance tests by Prabhu Ramachandran in > > http://www.scipy.org/PerformancePython with a comparison > > with matlab. > > for anyone interested, see > > > http://lbolla.wordpress.com/2007/04/11/numerical-computing-matlab-vs-pythonnumpyweave/ > . > > it's still a work in progress, but worth seeing by anyone still > uncertain in > > switching from matlab to numpy. > > Regards, > > Lorenzo. > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lbolla at gmail.com Wed Apr 11 10:10:17 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Wed, 11 Apr 2007 16:10:17 +0200 Subject: [SciPy-user] EMPython Message-ID: <80c99e790704110710j37d046eat88b37854de57b6ef@mail.gmail.com> Dear all, does anyone know who is responsible for the website: http://www.empython.org/? if I get it right, he should be Robert Lytle, formerly responsible for www.electromagneticpython.org, which is now defunct. do you know an e-mail address I can write to? Thanks! Lorenzo. -------------- next part -------------- An HTML attachment was scrubbed... URL: From hasslerjc at comcast.net Wed Apr 11 11:32:45 2007 From: hasslerjc at comcast.net (John Hassler) Date: Wed, 11 Apr 2007 11:32:45 -0400 Subject: [SciPy-user] Building SciPy Message-ID: <461CFF9D.6050904@comcast.net> I have an ancient and honorable Athlon, without SSE, so I can't use the binary for Scipy. I tried to build it, but ran into problems. When all else fails, read the directions (although they might have been placed in a more obvious location): "Furthermore, version 3.4.5 of gcc seems to be required or you end up with a linker error at the end." So I put 3.4.5 into MinGW. Scipy built, and it runs my programs (ode, optimize, fsolve) ok. However, scipy.test() crashes in the sparse matrix test: Running scipy.test(), I get a warning: Warning: FAILURE importing tests for D:\Program Files\Python25\Lib\site-packages\scipy\linsolve\umfpack\tests\test_umfpack.py:17: AttributeError: 'module' object has no attribute 'umfpack' (in ) And then it crashes on sparse test: .......... check_quadrature (scipy.integrate.tests.test_quadrature.test_quadrature)Took 13 points. ... ok check_romb (scipy.integrate.tests.test_quadrature.test_quadrature) ... ok check_romberg (scipy.integrate.tests.test_quadrature.test_quadrature) ... ok check_eye (scipy.sparse.tests.test_sparse.test_construct_utils) ... ok check_identity (scipy.sparse.tests.test_sparse.test_construct_utils) ...
ok check_normalize (scipy.sparse.tests.test_sparse.test_coo) ... ok check_add (scipy.sparse.tests.test_sparse.test_csc) >>> ================= RESTART ================= The debugger says:

Unhandled exception in pythonw.exe (SPARSETOOLS.PYD): 0xC000001D: Illegal instruction
69B99471  and ebx,3
69B99474  cvtsi2ss xmm1,dword ptr [edx+4]   <-- This one has the yellow arrow
69B99479  mov edx,dword ptr [ebp-28h]

Any suggestions? john From lorenzo.isella at gmail.com Wed Apr 11 12:48:27 2007 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Wed, 11 Apr 2007 18:48:27 +0200 Subject: [SciPy-user] Integrate.odeint Message-ID: Dear All, First of all I want to make clear that I am not looking for someone taking care of my homework. I am struggling to solve a system of nonlinear ODE's (mentioned in a previous mail of mine). I am resorting to integrate.odeint which worked wonderfully for me in the past. What I am finding puzzling is that I can understand getting the wrong result, but not somehow ending up with a solution which does not respect the initial condition. I cut and paste the code. Does anyone have a clue at what is going on? Kind Regards Lorenzo

#! /usr/bin/env python
from scipy import *
import pylab # used to read the .csv file

x=linspace(1.,300.,20) #set of initial particle diameters
lensum=len(x) # number of particle bins
myvar1=1.5 #standard deviation of the log-normal distribution
mu1=70. # mean of the distribution
A1=1000. #amplitude of the distribution
print 'x is', x

def distr(A1,mu1,myvar1,x):
    # function representing the initial log-normal distribution
    z=log(10.)*A1/sqrt(2.*pi)/log(myvar1)*exp(-((log(x/mu1))**2.) \
        /2./log(myvar1)/log(myvar1))
    return z

vec_distr=vectorize(distr) # I vectorized the previous function
y0=vec_distr(A1,mu1,myvar1,x) # initial condition of log-normally distributed particles
#print 'n_ini is', n_ini

#I plot the initial state condition
pylab.plot(x,y0)
pylab.xlabel('D_p')
pylab.ylabel('n_ini(0)')
#pylab.legend(('prey population','predator population'))
pylab.title('initial distribution')
pylab.grid(True)
pylab.savefig('N_initial')
pylab.hold(False)

# this is the system of 1st order ODE's I want to solve:
# \frac{dy[k]}{dt}=0.5*sum_{i+j=k}kernel[i,j]*y[i]*y[j] (creation) +
# -y[k]*sum_{i=1}^infty kernel[i,k]*y[i] (destruction)
# NB: careful since in the formula both i and j start from 1!
# In the following, I will be using a trivial constant kernel to test the code

kern=1e-6 # kernel

def coupling(y,t,kernel,lensum):
    out=zeros(lensum) # array which I'll use to write down the differential equations
    creation=zeros(lensum)
    destruction=zeros(lensum)
    for k in range(0,lensum):
        for i in range(0,lensum):
            destruction[k]=destruction[k]-kernel*y[i]
            for j in range(0,lensum):
                if (i+1+j+1==k+1): #I add 1 to correct the array indexing
                    creation[k]=creation[k]+kernel*y[i]*y[j]
        destruction[k]=y[k]*destruction[k]
        creation[k]=0.5*creation[k]
        if (creation[k]+destruction[k]>=0.):
            out[k]=creation[k]+destruction[k] # output array
    return out

t=arange(0.,100.,1.)
print 't is', t
#t_fix=0.
#test=coupling(y0,t_fix,kern,lensum)
#print 'test is', test

ysol = integrate.odeint(coupling, y0, t, args=(kern,lensum), printmessg=1)
print 'the shape of y is', shape(ysol)

pylab.plot(x,ysol[0,:]/max(ysol[0,:]),x,ysol[90,:]/max(ysol[90,:]))
pylab.xlabel('D_p')
pylab.ylabel('population')
pylab.title('evolution')
pylab.grid(True)
pylab.savefig('N_evolved')

print 'the solution at t=0 is', ysol[0,:]
print 'and the initial condition is', y0
print 'So far so good'

From nicolas.pettiaux at ael.be Wed Apr 11 13:42:55 2007 From: nicolas.pettiaux at ael.be (Nicolas Pettiaux) Date: Wed, 11 Apr 2007 19:42:55 +0200 Subject: [SciPy-user] PerformancePython -- comparison with Matlab In-Reply-To: <80c99e790704110311n5afe9e77m690f737bcc1a8f9@mail.gmail.com> References: <80c99e790704110311n5afe9e77m690f737bcc1a8f9@mail.gmail.com> Message-ID: 2007/4/11, lorenzo bolla : > Dear all, > I've "expanded" the performance tests by Prabhu Ramachandran in > http://www.scipy.org/PerformancePython with a comparison > with matlab. > for anyone interested, see > http://lbolla.wordpress.com/2007/04/11/numerical-computing-matlab-vs-pythonnumpyweave/. > it's still a work in progress, but worth seeing by anyone still uncertain in > switching from matlab to numpy. Thank you. I am looking for any info, help, and examples to help argue for the change from matlab to numpy/scipy at the university where I teach ... numerical analysis and matlab. Thanks a lot. Nicolas -- Nicolas Pettiaux - email: nicolas.pettiaux at ael.be From robert.kern at gmail.com Wed Apr 11 15:05:31 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 11 Apr 2007 14:05:31 -0500 Subject: [SciPy-user] EMPython In-Reply-To: <80c99e790704110710j37d046eat88b37854de57b6ef@mail.gmail.com> References: <80c99e790704110710j37d046eat88b37854de57b6ef@mail.gmail.com> Message-ID: <461D317B.5050604@gmail.com> lorenzo bolla wrote: > Dear all, > does anyone know who is responsible for the website: > http://www.empython.org/?
> if I get it right, he should be Robert Lytle, formerly responsible for > www.electromagneticpython.org, > which is now defunct. > do you know an e-mail address I can write to? He's posted on enthought-dev recently. rob (at) empython.org -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From s.mientki at ru.nl Wed Apr 11 15:06:54 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Wed, 11 Apr 2007 21:06:54 +0200 Subject: [SciPy-user] What's "a sequence" Message-ID: <461D31CE.8070609@ru.nl> hello, I'm trying to design a filter by the Remez exchange algorithm, but don't know how to define "a sequence", like the bands parameter.

From the help:

remez(numtaps, bands, desired, weight=None, Hz=1, type='bandpass', maxiter=25, grid_density=16)
    Calculate the minimax optimal filter using the Remez exchange algorithm.
    Inputs:
        numtaps -- The desired number of taps in the filter.
        bands -- A monotonic sequence containing the band edges. All elements
                 must be non-negative and less than 1/2 the sampling frequency
                 as given by Hz.

I tried several options:

i=signal.remez(16, ([0.02,0.06]), ...
i=signal.remez(16, [0.02,0.06], ...
i=signal.remez(16, (0.02,0.06), ...

but they all give an error on the bands parameter. From robert.kern at gmail.com Wed Apr 11 15:21:40 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 11 Apr 2007 14:21:40 -0500 Subject: [SciPy-user] What's "a sequence" In-Reply-To: <461D31CE.8070609@ru.nl> References: <461D31CE.8070609@ru.nl> Message-ID: <461D3544.9080405@gmail.com> Stef Mientki wrote: > hello, > > I'm trying to design a filter by the Remez exchange algorithm, > but don't know how to define "a sequence", like the bands parameter.
> > From the help: > remez(numtaps, bands, desired, weight=None, Hz=1, type='bandpass', > maxiter=25, grid_density=16) > Calculate the minimax optimal filter using Remez exchange algorithm. > Inputs: > numtaps -- The desired number of taps in the filter. > bands -- A monotonic sequence containing the band edges. All elements > must be non-negative and less than 1/2 the sampling frequency > as given by Hz. > > I tried several options > i=signal.remez(16, ([0.02,0.06]), ... > i=signal.remez(16, [0.02,0.06], ... > i=signal.remez(16, (0.02,0.06), ... > > but they all give an error on the bands-parameter. What error? Please always copy-and-paste the traceback. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From s.mientki at mailbox.kun.nl Wed Apr 11 15:43:22 2007 From: s.mientki at mailbox.kun.nl (stef mientki) Date: Wed, 11 Apr 2007 21:43:22 +0200 Subject: [SciPy-user] What's "a sequence" In-Reply-To: <461D3544.9080405@gmail.com> References: <461D31CE.8070609@ru.nl> <461D3544.9080405@gmail.com> Message-ID: <461D3A5A.20101@gmail.com> Robert Kern wrote: > Stef Mientki wrote: > >> hello, >> >> I'm trying to design a filter by the Remez exchange algorithm, >> but don't know how to define "a sequence", like the bands parameter. >> >> From the help: >> remez(numtaps, bands, desired, weight=None, Hz=1, type='bandpass', >> maxiter=25, grid_density=16) >> Calculate the minimax optimal filter using Remez exchange algorithm. >> Inputs: >> numtaps -- The desired number of taps in the filter. >> bands -- A monotonic sequence containing the band edges. All elements >> must be non-negative and less than 1/2 the sampling frequency >> as given by Hz. >> >> I tried several options >> i=signal.remez(16, ([0.02,0.06]), ... >> i=signal.remez(16, [0.02,0.06], ... >> i=signal.remez(16, (0.02,0.06), ...
>> >> but they all give an error on the bands-parameter. >> > > What error? Please always copy-and-paste the traceback. > > Sorry for not giving the traceback and not signing my message (I hit the enter key by accident). Anyway, I found a solution: i = signal.remez (16, array([0,0.02,0.06,0.5]), ... One other question remains: do I need the border elements 0 and 0.5 in the sequences for the Remez exchange algorithm? I guess yes, because of: bands -- A monotonic sequence containing the band edges. All elements must be non-negative and less than 1/2 the sampling frequency as given by Hz. desired -- A sequence half the size of bands containing the desired gain in each of the specified bands (Coming from MatLab, I'm spoiled to just fill in some boxes ;-) thanks, Stef Mientki From robert.kern at gmail.com Wed Apr 11 15:54:23 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 11 Apr 2007 14:54:23 -0500 Subject: [SciPy-user] What's "a sequence" In-Reply-To: <461D3A5A.20101@gmail.com> References: <461D31CE.8070609@ru.nl> <461D3544.9080405@gmail.com> <461D3A5A.20101@gmail.com> Message-ID: <461D3CEF.90008@gmail.com>
-- Umberto Eco From ggellner at uoguelph.ca Wed Apr 11 16:13:32 2007 From: ggellner at uoguelph.ca (Gabriel Gellner) Date: Wed, 11 Apr 2007 16:13:32 -0400 Subject: [SciPy-user] f2py, fortran 95, and present() Message-ID: <20070411201332.GA22593@angus> Not sure if f2py allows this, but I have some code that uses optional arguments in fortran 95. I want these to be optional in the python interface as well, but it seems that the intrinsic 'present()' does not work correctly (that is if you have an optional F95 argument, it is made equal to 0, and is always present from the python interface) with the wrapped interface, that is, the optional variable is set. I imagine this is an issue with the C interface, that doesn't support optional arguments in the way that fortran does. But I am not great in either C, or the C extension interface. So I was hoping someone could tell me definitively Here is a simple example (which does nothing useful) to make what I am saying clear: fadd.f95 subroutine fadd(x, a, b) implicit none real(8), intent(out) :: x real(8), intent(in) :: a real(8), intent(in), optional :: b if (present(b)) then x = a + b else x = a + 1.0d0 endif end subroutine fadd We compile this with $ f2py -c -m fadd fadd.f95 but when I import and use the module in python I get >>> fadd.fadd(2) 2.0 Instead of 3.0. Is this just something that can't be done with f2py? Gabriel From lechtlr at yahoo.com Wed Apr 11 16:12:34 2007 From: lechtlr at yahoo.com (lechtlr) Date: Wed, 11 Apr 2007 13:12:34 -0700 (PDT) Subject: [SciPy-user] fprime for L-BFGS-B Message-ID: <877496.85367.qm@web57904.mail.re3.yahoo.com> Can anyone give some clues as to how to define gradient array (fprime) for optimize.fmin_l_bfgs_b. I have attached a simple parameter estimation example below to test fmin_l_bfgs_b with and without fprime. It works with approx_grad (i.e., optimize.fmin_l_bfgs_b(resid, p0, args=(y_meas,x), fprime=None, approx_grad=True). 
However, when I define fprime, it returns the initial guess. I suspect that I am not defining fprime correctly. Any help would be greatly appreciated. -Lex

from numpy import *
from scipy import optimize

def evalY(p, x):
    return x**3 * p[3] + x**2 * p[2] + x * p[1] + p[0]

# function to be minimized for parameter estimation
def resid(p, y, x):
    err = y - evalY(p, x)
    return dot(err,err)

# fprime for fmin_bfgs
def func_der(p, y, x):
    g = zeros(4,float)
    g0 = 1.0
    g1 = x
    g2 = x**2
    g3 = x**3
    g[0] = g0
    g[1] = dot(g1,g1)
    g[2] = dot(g2,g2)
    g[3] = dot(g3,g3)
    return g

x = array([0., 1., 2., 3., 4., 5.]).astype('d')
coeffs = [4., 3., 5., 2.]
yErrs = array([0.1, 0.12, -0.1, 0.05, 0,-.02]).astype('d')
y_true = evalY(coeffs, x)
y_meas = y_true + yErrs

#Initial guess
p0 = [2., 1.2, 1.1, 1.]

pMin_BFGS, f_BFGS, d_BFGS = optimize.fmin_l_bfgs_b(resid, p0, args=(y_meas,x), fprime=func_der, approx_grad=None)

print "\nFinal parameters:"
print '%s %20s %20s ' % ('Para', 'Actual', 'fmin_BFGS')
for i in range(len(coeffs)):
    print '%s %20.2f %20.2f ' % (i, coeffs[i], pMin_BFGS[i])

-------------- next part -------------- An HTML attachment was scrubbed... URL: From lbolla at gmail.com Wed Apr 11 17:05:44 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Wed, 11 Apr 2007 23:05:44 +0200 Subject: [SciPy-user] PerformancePython -- comparison with Matlab In-Reply-To: <1e2af89e0704110333x5f3186bct9b0e6d62089419f0@mail.gmail.com> References: <80c99e790704110311n5afe9e77m690f737bcc1a8f9@mail.gmail.com> <1e2af89e0704110333x5f3186bct9b0e6d62089419f0@mail.gmail.com> Message-ID: <80c99e790704111405g64c947ebt3b31c05987038cb0@mail.gmail.com> I'd like to. Just let me know if it is helpful for someone else and I'll do it. lorenzo On 4/11/07, Matthew Brett wrote: > > Thanks - that's very helpful. Any chance of adding that to the main > PerformancePython python page?
> > Matthew > > On 4/11/07, lorenzo bolla wrote: > > Dear all, > > I've "expanded" the performance tests by Prabhu Ramachandran in > > http://www.scipy.org/PerformancePython with a comparison > > with matlab. > > for anyone interested, see > > > http://lbolla.wordpress.com/2007/04/11/numerical-computing-matlab-vs-pythonnumpyweave/ > . > > it's still a work in progress, but worth seeing by anyone still > uncertain in > > switching from matlab to numpy. > > Regards, > > Lorenzo. > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emsellem at obs.univ-lyon1.fr Wed Apr 11 18:25:54 2007 From: emsellem at obs.univ-lyon1.fr (Eric Emsellem) Date: Thu, 12 Apr 2007 00:25:54 +0200 Subject: [SciPy-user] Meijer G functions ? Message-ID: <461D6072.8040305@obs.univ-lyon1.fr> Hi, does anybody know if an implementation of the Meijer G functions has been done in scipy? thanks Eric From arokem at berkeley.edu Wed Apr 11 21:20:10 2007 From: arokem at berkeley.edu (Ariel Rokem) Date: Wed, 11 Apr 2007 18:20:10 -0700 Subject: [SciPy-user] Plea: Installing SciPy on Mac 10.4.9: _fftpack.so problems? Message-ID: Hi - I have been having a very similar problem building scipy. I am running Mac OS10.4.9 on a PPC with gfortran 4.3.0 and gcc3.3 I was running: ariel-rokems-ibook-g4:~ ariel$ python setup.py build_src build_clib --fcompiler=gnu95 build_ext --fcompiler=gnu95 build as instructed here: http://scipy.org/Installing_SciPy/Mac_OS_X But it doesn't seem to work: ariel-rokems-ibook-g4:~ ariel$ python Python 2.4.4 (#1, Oct 18 2006, 10:34:39) [GCC 4.0.1 (Apple Computer, Inc.
build 5341)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import scipy Traceback (most recent call last): File "", line 1, in ? ImportError: No module named scipy This is the end of the build process. Looks ominous, but I am puzzled ~:-? error: Command "gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3 -DUSE_VENDOR_BLAS=1 -c Lib/linsolve/SuperLU/SRC/zgscon.c -o build/temp.macosx-10.3-fat-2.4/Lib/linsolve/SuperLU/SRC/zgscon.o" failed with exit status 1 Nick - is this the issue you had? Did you manage to solve it? Thanks, Ariel -------------- next part -------------- An HTML attachment was scrubbed... URL: From zpincus at stanford.edu Wed Apr 11 21:49:52 2007 From: zpincus at stanford.edu (Zachary Pincus) Date: Wed, 11 Apr 2007 18:49:52 -0700 Subject: [SciPy-user] Plea: Installing SciPy on Mac 10.4.9: _fftpack.so problems? In-Reply-To: References: Message-ID: <14D902D4-B8E5-494A-BC97-23664C9FC81E@stanford.edu> Hi Ariel, Are you in fact using gcc 3.3? If so, note that gfortran must be used with gcc 4. As per the instructions on the page you referenced, first run 'sudo gcc_select 4.0' to start using gcc 4 before building scipy. As for scipy not being available in python ("no module named scipy"), it stands to reason that since the build process terminated in an error, it would not be usable and hence not be installed. Finally, if the above does not fix the problem, it would be helpful if in the future you could include the actual error lines in your message. These are the lines immediately before the "failed with exit status" lines that you copied to the bottom of the last email. These error lines can be distinguished as they should attempt to provide a (likely cryptic) error message somewhat more revealing than "failed".
Best luck, Zach On Apr 11, 2007, at 6:20 PM, Ariel Rokem wrote: > Hi - I have been having a very similar problem building scipy. > > I am running Mac OS10.4.9 on a PPC with gfortran 4.3.0 and gcc3.3 > I was running: > > ariel-rokems-ibook-g4:~ ariel$ python setup.py build_src build_clib > --fcompiler=gnu95 build_ext --fcompiler=gnu95 build > > as instructed here : http://scipy.org/Installing_SciPy/Mac_OS_X > > But it doesn't seem to work: > > ariel-rokems-ibook-g4:~ ariel$ python > Python 2.4.4 (#1, Oct 18 2006, 10:34:39) > [GCC 4.0.1 (Apple Computer, Inc. build 5341)] on darwin > Type "help", "copyright", "credits" or "license" for more information. > >>> import scipy > Traceback (most recent call last): > File "", line 1, in ? > ImportError: No module named scipy > > > This is the end of the build proccess. Looks ominous, but I am > puzzled ~:-? > > > error: Command "gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/ > MacOSX10.4u.sdk -fno-s > rict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -fno- > common -dynamic -DNDE > UG -g -O3 -DUSE_VENDOR_BLAS=1 -c Lib/linsolve/SuperLU/SRC/zgscon.c - > o build/temp.macosx-1 > .3-fat-2.4/Lib/linsolve/SuperLU/SRC/zgscon.o" failed with exit > status 1 > > > Nick - is this the issue you had? Did you manage to solve it? > > Thanks, > > Ariel > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From robert.kern at gmail.com Wed Apr 11 22:07:19 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 11 Apr 2007 21:07:19 -0500 Subject: [SciPy-user] Plea: Installing SciPy on Mac 10.4.9: _fftpack.so problems? In-Reply-To: References: Message-ID: <461D9457.3060207@gmail.com> Ariel Rokem wrote: > Hi - I have been having a very similar problem building scipy. 
> > I am running Mac OS10.4.9 on a PPC with gfortran 4.3.0 and gcc3.3 > I was running: > > ariel-rokems-ibook-g4:~ ariel$ python setup.py build_src build_clib > --fcompiler=gnu95 build_ext --fcompiler=gnu95 build > > as instructed here : http://scipy.org/Installing_SciPy/Mac_OS_X > > But it doesn't seem to work: > > ariel-rokems-ibook-g4:~ ariel$ python > Python 2.4.4 (#1, Oct 18 2006, 10:34:39) > [GCC 4.0.1 (Apple Computer, Inc. build 5341)] on darwin > Type "help", "copyright", "credits" or "license" for more information. >>>> import scipy > Traceback (most recent call last): > File "", line 1, in ? > ImportError: No module named scipy Note that after it is built (correctly, see Zach's message), it must then be installed per the directions given on that page. $ sudo python setup.py install -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From arokem at berkeley.edu Wed Apr 11 22:32:58 2007 From: arokem at berkeley.edu (Ariel Rokem) Date: Wed, 11 Apr 2007 19:32:58 -0700 Subject: [SciPy-user] Plea: Installing SciPy on Mac 10.4.9: _fftpack.so problems? In-Reply-To: <461D9457.3060207@gmail.com> References: <461D9457.3060207@gmail.com> Message-ID: Thanks Zach and Robert, I've tried to include more this time (does the build process save a log somewhere?). Please be patient with me, I am just a newbie, trying to migrate over from the world of evil proprietary software for :) So - as advised I changed the gcc version to 4.0 and verified that was the version. 
Then, I ran the building process again, as before: Along the way, several kinds of error messages appeared: This: non-existing path in 'Lib/maxentropy': 'doc' and this: Couldn't match compiler version for 'GNU Fortran (GCC) 4.3.0 20070316 (experimental)\nCopyright (C) 2007 Free Software Foundation, Inc.\n\nGNU Fortran comes with NO WARRANTY, to the extent permitted by law.\nYou may redistribute copies of GNU Fortran\nunder the terms of the GNU General Public License.\nFor more information about these matters, see the file named COPYING\n' customize Gnu95FCompiler using build_clib building 'superlu_src' library compiling C sources C compiler: gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3 This looks like it may have something to do with the compiler versions and yet, it goes on compiling. More error messages appear - things that look like this: Lib/linsolve/SuperLU/SRC/scomplex.c: In function 'c_div': Lib/linsolve/SuperLU/SRC/scomplex.c:30: warning: incompatible implicit declaration of built-in function 'exit' and things that look like this: fortran:f77: Lib/special/cdflib/dzror.f Lib/special/cdflib/dzror.f:92.72: ASSIGN 10 TO i99999 1 Warning: Obsolete: ASSIGN statement at (1) Lib/special/cdflib/dzror.f:100.72: Finally, these are the last things that appear on the screen: creating build/temp.macosx-10.3-fat-2.4/build creating build/temp.macosx-10.3-fat-2.4/build/src.macosx-10.3-fat-2.4 creating build/temp.macosx-10.3-fat-2.4/build/src.macosx-10.3-fat-2.4/Lib creating build/temp.macosx-10.3-fat-2.4/build/src.macosx-10.3-fat-2.4/Lib/fftpack creating build/temp.macosx-10.3-fat-2.4/Lib/fftpack/src compile options: '-DSCIPY_FFTW3_H -I/usr/local/include -Ibuild/src.macosx-10.3-fat-2.4 -I/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/core/include -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 -c' gcc:
build/src.macosx-10.3-fat-2.4/fortranobject.c gcc: Lib/fftpack/src/zrfft.c gcc: Lib/fftpack/src/drfft.c gcc: build/src.macosx-10.3-fat-2.4/Lib/fftpack/_fftpackmodule.c gcc: Lib/fftpack/src/zfft.c gcc: Lib/fftpack/src/zfftnd.c Traceback (most recent call last): File "setup.py", line 55, in ? setup_package() File "setup.py", line 47, in setup_package configuration=configuration ) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-p s/core.py", line 174, in setup return old_setup(**new_attr) File "/Library/Frameworks/Python.framework/Versions/2.4//lib/ python2.4/distu 9, in setup dist.run_commands() File "/Library/Frameworks/Python.framework/Versions/2.4//lib/ python2.4/distu 6, in run_commands self.run_command(cmd) File "/Library/Frameworks/Python.framework/Versions/2.4//lib/ python2.4/distu 6, in run_command cmd_obj.run() File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-p s/command/build_ext.py", line 121, in run self.build_extensions() File "/Library/Frameworks/Python.framework/Versions/2.4//lib/ python2.4/distu .py", line 405, in build_extensions self.build_extension(ext) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-p s/command/build_ext.py", line 312, in build_extension link = self.fcompiler.link_shared_object AttributeError: 'NoneType' object has no attribute 'link_shared_object' Looks pretty bad, no? I ran the installation command too, just to be sure. 
Unsurprisingly, it didn't work, giving this message in the end: compile options: '-DSCIPY_FFTW3_H -I/usr/local/include -Ibuild/ src.macosx-10.3-fat-2.4 -I/Library/Frameworks/Python.framewo rk/Versions/2.4/lib/python2.4/site-packages/numpy/core/include -I/ Library/Frameworks/Python.framework/Versions/2.4/include/ python2.4 -c' /usr/local/bin/g77 -g -Wall -undefined dynamic_lookup -bundle build/ temp.macosx-10.3-fat-2.4/build/src.macosx-10.3-fat-2.4/ Lib/fftpack/_fftpackmodule.o build/temp.macosx-10.3-fat-2.4/Lib/ fftpack/src/zfft.o build/temp.macosx-10.3-fat-2.4/Lib/fftpa ck/src/drfft.o build/temp.macosx-10.3-fat-2.4/Lib/fftpack/src/zrfft.o build/temp.macosx-10.3-fat-2.4/Lib/fftpack/src/zfftnd .o build/temp.macosx-10.3-fat-2.4/build/src.macosx-10.3-fat-2.4/ fortranobject.o -L/usr/local/lib -L/usr/local/lib/gcc/power pc-apple-darwin7.9.0/3.4.4 -Lbuild/temp.macosx-10.3-fat-2.4 - ldfftpack -lfftw3 -lg2c -lcc_dynamic -o build/lib.macosx-10.3- fat-2.4/scipy/fftpack/_fftpack.so /usr/bin/ld: can't locate file for: -lcc_dynamic collect2: ld returned 1 exit status /usr/bin/ld: can't locate file for: -lcc_dynamic collect2: ld returned 1 exit status error: Command "/usr/local/bin/g77 -g -Wall -undefined dynamic_lookup -bundle build/temp.macosx-10.3-fat-2.4/build/src.maco sx-10.3-fat-2.4/Lib/fftpack/_fftpackmodule.o build/temp.macosx-10.3- fat-2.4/Lib/fftpack/src/zfft.o build/temp.macosx-10.3-f at-2.4/Lib/fftpack/src/drfft.o build/temp.macosx-10.3-fat-2.4/Lib/ fftpack/src/zrfft.o build/temp.macosx-10.3-fat-2.4/Lib/ff tpack/src/zfftnd.o build/temp.macosx-10.3-fat-2.4/build/ src.macosx-10.3-fat-2.4/fortranobject.o -L/usr/local/lib -L/usr/loc al/lib/gcc/powerpc-apple-darwin7.9.0/3.4.4 -Lbuild/temp.macosx-10.3- fat-2.4 -ldfftpack -lfftw3 -lg2c -lcc_dynamic -o build/ lib.macosx-10.3-fat-2.4/scipy/fftpack/_fftpack.so" failed with exit status 1 I hope this gives enough information this time (in fact, I hope that's not too much this time). Does anyone have any ideas? 
Thanks a lot, Ariel On Apr 11, 2007, at 7:07 PM, Robert Kern wrote: > Ariel Rokem wrote: >> Hi - I have been having a very similar problem building scipy. >> >> I am running Mac OS10.4.9 on a PPC with gfortran 4.3.0 and gcc3.3 >> I was running: >> >> ariel-rokems-ibook-g4:~ ariel$ python setup.py build_src build_clib >> --fcompiler=gnu95 build_ext --fcompiler=gnu95 build >> >> as instructed here : http://scipy.org/Installing_SciPy/Mac_OS_X >> >> But it doesn't seem to work: >> >> ariel-rokems-ibook-g4:~ ariel$ python >> Python 2.4.4 (#1, Oct 18 2006, 10:34:39) >> [GCC 4.0.1 (Apple Computer, Inc. build 5341)] on darwin >> Type "help", "copyright", "credits" or "license" for more >> information. >>>>> import scipy >> Traceback (most recent call last): >> File "", line 1, in ? >> ImportError: No module named scipy > > Note that after it is built (correctly, see Zach's message), it > must then be > installed per the directions given on that page. > > $ sudo python setup.py install > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a > harmless enigma > that is made terrible by our own mad attempt to interpret it as > though it had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From robert.kern at gmail.com Wed Apr 11 22:45:26 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 11 Apr 2007 21:45:26 -0500 Subject: [SciPy-user] Plea: Installing SciPy on Mac 10.4.9: _fftpack.so problems? In-Reply-To: References: <461D9457.3060207@gmail.com> Message-ID: <461D9D46.3000606@gmail.com> Ariel Rokem wrote: > Along the way, several kinds of error messages appeared: > > This: > > non-existing path in 'Lib/maxentropy': 'doc' Harmless warning. Ignore. 
> and this: > > Couldn't match compiler version for 'GNU Fortran (GCC) 4.3.0 20070316 > (experimen > tal)\nCopyright (C) 2007 Free Software Foundation, Inc.\n\nGNU > Fortran comes wit > h NO WARRANTY, to the extent permitted by law.\nYou may redistribute > copies of G > NU Fortran\nunder the terms of the GNU General Public License.\nFor > more informa > tion about these matters, see the file named COPYING\n' > customize Gnu95FCompiler using build_clib > building 'superlu_src' library > compiling C sources > C compiler: gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/ > MacOSX10.4u.sdk - > fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd - > fno-common > -dynamic -DNDEBUG -g -O3 > > This looks like it may have something to do with the compiler > versions and yet, it goes on compiling, This was fixed in numpy r3598, so it should be in numpy 1.0.2. Please update your numpy installation. > More error messages appear - things that look like this: > > Lib/linsolve/SuperLU/SRC/scomplex.c: In function 'c_div': > Lib/linsolve/SuperLU/SRC/scomplex.c:30: warning: incompatible > implicit declaration of built-in function 'exit' > > and things that look like this: > > fortran:f77: Lib/special/cdflib/dzror.f > Lib/special/cdflib/dzror.f:92.72: > > ASSIGN 10 TO i99999 > > 1 > Warning: Obsolete: ASSIGN statement at (1) > Lib/special/cdflib/dzror.f:100.72: These are warnings, not errors. 
> Finally, this is the last things that appear on the screen: > > > creating build/temp.macosx-10.3-fat-2.4/build > creating build/temp.macosx-10.3-fat-2.4/build/src.macosx-10.3-fat-2.4 > creating build/temp.macosx-10.3-fat-2.4/build/src.macosx-10.3-fat-2.4/ > Lib > creating build/temp.macosx-10.3-fat-2.4/build/src.macosx-10.3-fat-2.4/ > Lib/fftp > creating build/temp.macosx-10.3-fat-2.4/Lib/fftpack/src > compile options: '-DSCIPY_FFTW3_H -I/usr/local/include -Ibuild/ > src.macosx-10.3 > ameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/ > numpy/core/ > meworks/Python.framework/Versions/2.4/include/python2.4 -c' > gcc: build/src.macosx-10.3-fat-2.4/fortranobject.c > gcc: Lib/fftpack/src/zrfft.c > gcc: Lib/fftpack/src/drfft.c > gcc: build/src.macosx-10.3-fat-2.4/Lib/fftpack/_fftpackmodule.c > gcc: Lib/fftpack/src/zfft.c > gcc: Lib/fftpack/src/zfftnd.c > Traceback (most recent call last): > File "setup.py", line 55, in ? > setup_package() > File "setup.py", line 47, in setup_package > configuration=configuration ) > File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ > python2.4/site-p > s/core.py", line 174, in setup > return old_setup(**new_attr) > File "/Library/Frameworks/Python.framework/Versions/2.4//lib/ > python2.4/distu > 9, in setup > dist.run_commands() > File "/Library/Frameworks/Python.framework/Versions/2.4//lib/ > python2.4/distu > 6, in run_commands > self.run_command(cmd) > File "/Library/Frameworks/Python.framework/Versions/2.4//lib/ > python2.4/distu > 6, in run_command > cmd_obj.run() > File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ > python2.4/site-p > s/command/build_ext.py", line 121, in run > self.build_extensions() > File "/Library/Frameworks/Python.framework/Versions/2.4//lib/ > python2.4/distu > .py", line 405, in build_extensions > self.build_extension(ext) > File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ > python2.4/site-p > s/command/build_ext.py", line 312, in build_extension > link 
= self.fcompiler.link_shared_object > AttributeError: 'NoneType' object has no attribute 'link_shared_object' > > > > Looks pretty bad, no? > > I ran the installation command too, just to be sure. Unsurprisingly, > it didn't work, giving this message in the end: > > > compile options: '-DSCIPY_FFTW3_H -I/usr/local/include -Ibuild/ > src.macosx-10.3-fat-2.4 -I/Library/Frameworks/Python.framewo > rk/Versions/2.4/lib/python2.4/site-packages/numpy/core/include -I/ > Library/Frameworks/Python.framework/Versions/2.4/include/ > python2.4 -c' > /usr/local/bin/g77 -g -Wall -undefined dynamic_lookup -bundle build/ > temp.macosx-10.3-fat-2.4/build/src.macosx-10.3-fat-2.4/ > Lib/fftpack/_fftpackmodule.o build/temp.macosx-10.3-fat-2.4/Lib/ > fftpack/src/zfft.o build/temp.macosx-10.3-fat-2.4/Lib/fftpa > ck/src/drfft.o build/temp.macosx-10.3-fat-2.4/Lib/fftpack/src/zrfft.o > build/temp.macosx-10.3-fat-2.4/Lib/fftpack/src/zfftnd > .o build/temp.macosx-10.3-fat-2.4/build/src.macosx-10.3-fat-2.4/ > fortranobject.o -L/usr/local/lib -L/usr/local/lib/gcc/power > pc-apple-darwin7.9.0/3.4.4 -Lbuild/temp.macosx-10.3-fat-2.4 - > ldfftpack -lfftw3 -lg2c -lcc_dynamic -o build/lib.macosx-10.3- > fat-2.4/scipy/fftpack/_fftpack.so > /usr/bin/ld: can't locate file for: -lcc_dynamic > collect2: ld returned 1 exit status > /usr/bin/ld: can't locate file for: -lcc_dynamic > collect2: ld returned 1 exit status > error: Command "/usr/local/bin/g77 -g -Wall -undefined dynamic_lookup > -bundle build/temp.macosx-10.3-fat-2.4/build/src.maco > sx-10.3-fat-2.4/Lib/fftpack/_fftpackmodule.o build/temp.macosx-10.3- > fat-2.4/Lib/fftpack/src/zfft.o build/temp.macosx-10.3-f > at-2.4/Lib/fftpack/src/drfft.o build/temp.macosx-10.3-fat-2.4/Lib/ > fftpack/src/zrfft.o build/temp.macosx-10.3-fat-2.4/Lib/ff > tpack/src/zfftnd.o build/temp.macosx-10.3-fat-2.4/build/ > src.macosx-10.3-fat-2.4/fortranobject.o -L/usr/local/lib -L/usr/loc > al/lib/gcc/powerpc-apple-darwin7.9.0/3.4.4 -Lbuild/temp.macosx-10.3- > fat-2.4 
-ldfftpack -lfftw3 -lg2c -lcc_dynamic -o build/ > lib.macosx-10.3-fat-2.4/scipy/fftpack/_fftpack.so" failed with exit > status 1 > > > I hope this gives enough information this time (in fact, I hope > that's not too much this time). Does anyone have any ideas? This is caused by the build process picking up g77 (which won't work with gcc 4) since it couldn't verify your gfortran. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From arokem at berkeley.edu Thu Apr 12 00:18:54 2007 From: arokem at berkeley.edu (Ariel Rokem) Date: Wed, 11 Apr 2007 21:18:54 -0700 Subject: [SciPy-user] Plea: Installing SciPy on Mac 10.4.9: _fftpack.so problems? In-Reply-To: <461D9D46.3000606@gmail.com> References: <461D9457.3060207@gmail.com> <461D9D46.3000606@gmail.com> Message-ID: So - what should I do in order to make it verify gfortran? Or should I remove g77 somehow? Thanks again, Ariel > This is caused by the build process picking up g77 (which won't > work with gcc 4) > since it couldn't verify your gfortran. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a > harmless enigma > that is made terrible by our own mad attempt to interpret it as > though it had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From robert.kern at gmail.com Thu Apr 12 00:28:27 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 11 Apr 2007 23:28:27 -0500 Subject: [SciPy-user] Plea: Installing SciPy on Mac 10.4.9: _fftpack.so problems? In-Reply-To: References: <461D9457.3060207@gmail.com> <461D9D46.3000606@gmail.com> Message-ID: <461DB56B.2040906@gmail.com> Ariel Rokem wrote: > So - what should I do in order to make it verify gfortran? 
Or should > I remove g77 somehow? Like I said, upgrade numpy to incorporate the fix that allows it to recognize that particular release of gfortran. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From S.Mientki at ru.nl Thu Apr 12 03:29:14 2007 From: S.Mientki at ru.nl (Stef Mientki) Date: Thu, 12 Apr 2007 09:29:14 +0200 Subject: [SciPy-user] PerformancePython -- comparison with Matlab In-Reply-To: <80c99e790704110311n5afe9e77m690f737bcc1a8f9@mail.gmail.com> References: <80c99e790704110311n5afe9e77m690f737bcc1a8f9@mail.gmail.com> Message-ID: <461DDFCA.3030209@ru.nl> lorenzo bolla wrote: > Dear all, > I've "expanded" the performance tests by Prabhu Ramachandran > in > http://www.scipy.org/PerformancePython with a comparison with matlab. > for anyone interested, see > http://lbolla.wordpress.com/2007/04/11/numerical-computing-matlab-vs-pythonnumpyweave/. > > it's still a work in progress, but worth seeing by anyone still > uncertain in switching from matlab to numpy. Very good job Lorenzo ! Please could you explain one thing to me: - the contents of your page is very valuable - the layout of your page is perfect, without clicking I can see every detail and every image completely But ... ... when I move my mouse over a graph, ... ... a small version of the graph popups (showing non information at all), and hiding the large graph underneath :-( .... 
cheers, Stef Kamer van Koophandel - handelsregister 41055629 / Netherlands Chamber of Commerce - trade register 41055629 From gael.varoquaux at normalesup.org Thu Apr 12 03:41:44 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 12 Apr 2007 09:41:44 +0200 Subject: [SciPy-user] PerformancePython -- comparison with Matlab In-Reply-To: <461DDFCA.3030209@ru.nl> References: <80c99e790704110311n5afe9e77m690f737bcc1a8f9@mail.gmail.com> <461DDFCA.3030209@ru.nl> Message-ID: <20070412074143.GB9810@clipper.ens.fr> On Thu, Apr 12, 2007 at 09:29:14AM +0200, Stef Mientki wrote: > lorenzo bolla wrote: > > I've "expanded" the performance tests by Prabhu Ramachandran > > in > > http://www.scipy.org/PerformancePython with a comparison with matlab. > > for anyone interested, see > > http://lbolla.wordpress.com/2007/04/11/numerical-computing-matlab-vs-pythonnumpyweave/. > > it's still a work in progress, but worth seeing by anyone still > > uncertain in switching from matlab to numpy. > Very good job Lorenzo ! > Please could you explain one thing to me: > - the contents of your page is very valuable This is why I think it should be added to the wiki, with a link for the Matlab/Numpy comparison page, and one from the PerformancePython page. My 2 cents, Gaël From lou_boog2000 at yahoo.com Thu Apr 12 09:09:34 2007 From: lou_boog2000 at yahoo.com (Lou Pecora) Date: Thu, 12 Apr 2007 06:09:34 -0700 (PDT) Subject: [SciPy-user] Plea: Installing SciPy on Mac 10.4.9: _fftpack.so problems?
In-Reply-To: Message-ID: <628927.9032.qm@web34401.mail.mud.yahoo.com> This might not be what you want, but you might consider using prebuilt packages at http://pythonmac.org/packages/. I have used them to install on my old PPC laptop and my new Mac Pro Intel dual core desktop. All worked fine. I installed Python 2.4.4, NumPy, Matplotlib/PyLab, and SciPy from the packages. I also additionally installed iPython from the iPython site using setup.py etc. So far everything works well together. --- Ariel Rokem wrote: > Hi - I have been having a very similar problem > building scipy. > > I am running Mac OS10.4.9 on a PPC with gfortran > 4.3.0 and gcc3.3 -- Lou Pecora, my views are my own. --------------- "I knew I was going to take the wrong train, so I left early." --Yogi Berra ____________________________________________________________________________________ Sucker-punch spam with award-winning protection. Try the free Yahoo! Mail Beta. http://advision.webevents.yahoo.com/mailbeta/features_spam.html From ckkart at hoc.net Thu Apr 12 20:43:17 2007 From: ckkart at hoc.net (Christian K) Date: Fri, 13 Apr 2007 09:43:17 +0900 Subject: [SciPy-user] fprime for L-BFGS-B In-Reply-To: <877496.85367.qm@web57904.mail.re3.yahoo.com> References: <877496.85367.qm@web57904.mail.re3.yahoo.com> Message-ID: lechtlr wrote: > Can anyone give some clues as to how to define gradient array (fprime) for optimize.fmin_l_bfgs_b. > > I have attached a simple parameter estimation example below to test fmin_l_bfgs_b with and without fprime. It works with approx_grad (i.e., optimize.fmin_l_bfgs_b(resid, p0, args=(y_meas,x), fprime=None, approx_grad=True). However, when I define, fprime it returns the initial guess. I suspect that I am not defining the fprime correctly. > > Any help would greatly be appreciated. 
> -Lex > > from numpy import * > from scipy import optimize > > def evalY(p, x): > return x**3 * p[3] + x**2 * p[2] + x * p[1] + p[0] > > # function to be minimized for parameter estimation > def resid(p, y, x): > err = y - evalY(p, x) > return dot(err,err) > > # fprime for fmin_bfgs > def func_der(p, y, x): > > g = zeros(4,float) > > g0 = 1.0 > g1 = x > g2 = x**2 > g3 = x**3 > > g[0] = g0 > g[1] = dot(g1,g1) > g[2] = dot(g2,g2) > g[3] = dot(g3,g3) > > return g This returns the derivative of 'evalY', not that of 'resid'. It should be something like: return -g*2*(y-evalY(p, x)).sum() Christian From s.mientki at ru.nl Fri Apr 13 17:24:14 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Fri, 13 Apr 2007 23:24:14 +0200 Subject: [SciPy-user] small error in documentation signal.py Message-ID: <461FF4FE.4080601@ru.nl> I don't know where to report bugs in scipy package, and I don't know if errors in documentation are called bugs, but anyway, here is the bug, lfilter(b, a, x, axis=-1, zi=None) Algorithm: The filter function is implemented as a direct II transposed structure. This means that the filter implements y[n] = b[0]*x[n] + b[1]*x[n-1] + ... + b[nb]*x[n-nb] - a[1]*y[n-1] + ... + a[na]*y[n-na] should be y[n] = b[0]*x[n] + b[1]*x[n-1] + ... + b[nb]*x[n-nb] - a[1]*y[n-1] - ... - a[na]*y[n-na] or this might even be better a[0]*y[n] = b[0]*x[n] + b[1]*x[n-1] + ... + b[nb]*x[n-nb] - a[1]*y[n-1] - ... - a[na]*y[n-na] cheers, Stef Mientki From robert.kern at gmail.com Fri Apr 13 17:30:42 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 13 Apr 2007 16:30:42 -0500 Subject: [SciPy-user] small error in documentation signal.py In-Reply-To: <461FF4FE.4080601@ru.nl> References: <461FF4FE.4080601@ru.nl> Message-ID: <461FF682.8070201@gmail.com> Stef Mientki wrote: > I don't know where to report bugs in scipy package, http://projects.scipy.org/scipy/scipy Click the "Register" link in the upper-right corner to make an account. Then click "New Ticket".
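Stef's corrected difference equation is easy to check numerically. Below is a minimal pure-Python reference implementation of the direct form (a sketch only — scipy.signal's lfilter is implemented in C, and the name lfilter_ref is invented here for illustration):

```python
def lfilter_ref(b, a, x):
    # Corrected direct-form difference equation:
    #   a[0]*y[n] = b[0]*x[n] + ... + b[nb]*x[n-nb]
    #               - a[1]*y[n-1] - ... - a[na]*y[n-na]
    # (note the minus sign on every a[k] term with k >= 1)
    y = []
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y.append(acc / a[0])
    return y

# One-pole recursion y[n] = x[n] + 0.5*y[n-1], fed a unit impulse:
print(lfilter_ref([1.0], [1.0, -0.5], [1.0, 0.0, 0.0, 0.0]))
# [1.0, 0.5, 0.25, 0.125]
```

With the sign error in the documented formula (plus instead of minus on the a terms), the same impulse response would alternate in sign instead of decaying monotonically, which is what makes the documentation typo worth a ticket.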
> and I don't know if errors in documentation are called bugs, Yup. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cookedm at physics.mcmaster.ca Fri Apr 13 17:31:44 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 13 Apr 2007 17:31:44 -0400 Subject: [SciPy-user] Meijer G functions ? In-Reply-To: <461D6072.8040305@obs.univ-lyon1.fr> References: <461D6072.8040305@obs.univ-lyon1.fr> Message-ID: <461FF6C0.4020809@physics.mcmaster.ca> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Eric Emsellem wrote: > Hi, > > does anybody know if an implementation of the Meijer G functions has > been done in scipy? Nope. Meijer G is so general that any implementation is going to be a big piece of work. You're best off trying to see if you can reduce your specific case to more elementary functions. - -- |>|\/|< /------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.6 (Darwin) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFGH/bAN9ixZKFWjRQRAu7/AKCj2RhSSB+4awU/HU4YC8BxVtCwMgCdGAFv p6ye7IO1rzxLz1Xy+Biw2rE= =8ws4 -----END PGP SIGNATURE----- From wangxj.uc at gmail.com Fri Apr 13 17:53:53 2007 From: wangxj.uc at gmail.com (Xiaojian Wang) Date: Fri, 13 Apr 2007 14:53:53 -0700 Subject: [SciPy-user] Dose the interpolation functions exist in Python or Scipy module? Message-ID: Hi, Dose the interpolation function library exist in Python or Scipy module?, I want to generate a surface with known points in 3D, such as using bi-cubic spline etc. I did it in F77 long time ago. thanks in advance and have a nice weekend. Xiaojian -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From peridot.faceted at gmail.com Fri Apr 13 17:58:38 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Fri, 13 Apr 2007 17:58:38 -0400 Subject: [SciPy-user] Dose the interpolation functions exist in Python or Scipy module? In-Reply-To: References: Message-ID: On 13/04/07, Xiaojian Wang wrote: > Hi, Dose the interpolation function library exist in Python or Scipy > module?, > I want to generate a surface with known points in 3D, such as using > bi-cubic spline etc. I did it in F77 long time ago. There is a library, scipy.interpolate, which provides access to many interpolation functions, and in particular cubic (and I think bicubic) splines. Be aware that it sometimes defaults to smoothing splines rather than strictly interpolating splines. Anne M. Archibald From s.mientki at ru.nl Fri Apr 13 18:17:49 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Sat, 14 Apr 2007 00:17:49 +0200 Subject: [SciPy-user] small error in documentation signal.py In-Reply-To: <461FF682.8070201@gmail.com> References: <461FF4FE.4080601@ru.nl> <461FF682.8070201@gmail.com> Message-ID: <4620018D.8080804@ru.nl> thanks and done. cheers, Stef Robert Kern wrote: > Stef Mientki wrote: > >> I don't know where to report bugs in scipy package, >> > > http://projects.scipy.org/scipy/scipy > > Click the "Register" link in the upper-right corner to make an account. Then > click "New Ticket". > > >> and I don't know if errors in documentation are called bugs, >> > > Yup. > > From wbaxter at gmail.com Sat Apr 14 00:21:52 2007 From: wbaxter at gmail.com (Bill Baxter) Date: Sat, 14 Apr 2007 13:21:52 +0900 Subject: [SciPy-user] Sparse indexing workarounds In-Reply-To: <687bb3e80704100211necb743fi8fd7b9293c79afc3@mail.gmail.com> References: <461B4F0E.8040509@ntc.zcu.cz> <687bb3e80704100211necb743fi8fd7b9293c79afc3@mail.gmail.com> Message-ID: Thanks for the suggestion. Is there any easy way to print out a pysparse spmatrix or convert it to a numpy dense matrix? 
It seems like the functionality is there in pysparse, but the interface is a bit maddening. A little more flexibility would be nice. I guess the interface is unforgiving because it's implemented in C/C++? But there are lots of little issues like matvec insisting on having a return parameter, and refusing to work with anything besides 1D arrays (no automatic conversion of python lists to arrays, or forgiving treatment of (N,1) or (1,N) arrays as "close enough". Documentation is quite minimal too... but anyway, if I can get it to solve my system, and if it's fast, I'll be a happy camper. :-) --bb On 4/10/07, H?kan Jakobsson wrote: > You can use pysparse instead of sparse. The update_add_mask function will do > the trick. > /H?kan J > > > On 4/10/07, Robert Cimrman < cimrman3 at ntc.zcu.cz> wrote: > > Bill Baxter wrote: > > > Does anyone have an easy (and efficient way) to update a submatrix of > > > a big sparse matrix? > > > > > > K = scipy.sparse.lil_matrix(bigN,bigN) > > > ... > > > conn = [1,4,11,12] > > > K[ix_(conn,conn)] += elemK > > > > > > where elemK is a 4x4 dense matrix. > > > > > > This kind of thing is very commonly needed in FEM codes for assembling > > > the global stiffness matrix. But sparse doesn't seem to support > > > either += or indexing with the open grid of indices returned by ix_. > > > > You may have a look at > > http://ui505p06-mbs.ntc.zcu.cz/sfe/FEUtilsExample > > it uses CSR matrix though... > > > > r. 
> > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From wbaxter at gmail.com Sat Apr 14 03:40:43 2007 From: wbaxter at gmail.com (Bill Baxter) Date: Sat, 14 Apr 2007 16:40:43 +0900 Subject: [SciPy-user] Sparse indexing workarounds In-Reply-To: <687bb3e80704100211necb743fi8fd7b9293c79afc3@mail.gmail.com> References: <461B4F0E.8040509@ntc.zcu.cz> <687bb3e80704100211necb743fi8fd7b9293c79afc3@mail.gmail.com> Message-ID: Ack! Pysparse's update_add_mask is bugged! In [188]: K = arange(12).reshape((3,4)) In [189]: L = spmatrix.ll_mat(10,10) In [190]: L.update_add_mask(K, [7,8,9],[3,4,5,6], [True]*3,[True]*4) In [191]: todense(L) Out[191]: array([[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 3., 6., 9., 0., 0., 0.], [ 0., 0., 0., 1., 4., 7., 10., 0., 0., 0.], [ 0., 0., 0., 2., 5., 8., 11., 0., 0., 0.]]) In [192]: K Out[192]: array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) Notice how K got transposed and reshaped. To make it work properly you have to do L.update_add_mask(K.T.reshape(3,4), [7,8,9],[3,4,5,6], [True]*3,[True]*4) --bb On 4/10/07, H?kan Jakobsson wrote: > You can use pysparse instead of sparse. The update_add_mask function will do > the trick. > /H?kan J > > > On 4/10/07, Robert Cimrman < cimrman3 at ntc.zcu.cz> wrote: > > Bill Baxter wrote: > > > Does anyone have an easy (and efficient way) to update a submatrix of > > > a big sparse matrix? > > > > > > K = scipy.sparse.lil_matrix(bigN,bigN) > > > ... 
> > > conn = [1,4,11,12] > > > K[ix_(conn,conn)] += elemK > > > > > > where elemK is a 4x4 dense matrix. > > > > > > This kind of thing is very commonly needed in FEM codes for assembling > > > the global stiffness matrix. But sparse doesn't seem to support > > > either += or indexing with the open grid of indices returned by ix_. > > > > You may have a look at > > http://ui505p06-mbs.ntc.zcu.cz/sfe/FEUtilsExample > > it uses CSR matrix though... > > > > r. > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From lbolla at gmail.com Sat Apr 14 04:51:43 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Sat, 14 Apr 2007 10:51:43 +0200 Subject: [SciPy-user] PerformancePython -- comparison with Matlab In-Reply-To: <461DDFCA.3030209@ru.nl> References: <80c99e790704110311n5afe9e77m690f737bcc1a8f9@mail.gmail.com> <461DDFCA.3030209@ru.nl> Message-ID: <80c99e790704140151h38175250x46dfdce6dd64bb25@mail.gmail.com> it's a "feature" of wordpress. I'll disable it as soon as possible! cheers, L. On 4/12/07, Stef Mientki wrote: > > > > lorenzo bolla wrote: > > Dear all, > > I've "expanded" the performance tests by Prabhu Ramachandran > > in > > http://www.scipy.org/PerformancePython with a comparison with matlab. > > for anyone interested, see > > > http://lbolla.wordpress.com/2007/04/11/numerical-computing-matlab-vs-pythonnumpyweave/ > . > > > > it's still a work in progress, but worth seeing by anyone still > > uncertain in switching from matlab to numpy. > Very good job Lorenzo ! > > Please could you explain one thing to me: > - the contents of your page is very valuable > - the layout of your page is perfect, without clicking I can see every > detail and every image completely > But ... > ... 
when I move my mouse over a graph, ... > ... a small version of the graph popups (showing non information at all), > and hiding the large graph underneath :-( > .... > > cheers, > Stef > > > Kamer van Koophandel - handelsregister 41055629 / Netherlands Chamber of > Commerce - trade register 41055629 > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbolla at gmail.com Sat Apr 14 04:52:34 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Sat, 14 Apr 2007 10:52:34 +0200 Subject: [SciPy-user] PerformancePython -- comparison with Matlab In-Reply-To: <20070412074143.GB9810@clipper.ens.fr> References: <80c99e790704110311n5afe9e77m690f737bcc1a8f9@mail.gmail.com> <461DDFCA.3030209@ru.nl> <20070412074143.GB9810@clipper.ens.fr> Message-ID: <80c99e790704140152l742ef902tf56c75556b495912@mail.gmail.com> I'm happy to add it to the wiki (at least the link). I've been very busy for the last 3 days, but I'll do it soon. thank you, L. On 4/12/07, Gael Varoquaux wrote: > > On Thu, Apr 12, 2007 at 09:29:14AM +0200, Stef Mientki wrote: > > lorenzo bolla wrote: > > > I've "expanded" the performance tests by Prabhu Ramachandran > > > in > > > http://www.scipy.org/PerformancePython with a comparison with matlab. > > > for anyone interested, see > > > > http://lbolla.wordpress.com/2007/04/11/numerical-computing-matlab-vs-pythonnumpyweave/ > . > > > > it's still a work in progress, but worth seeing by anyone still > > > uncertain in switching from matlab to numpy. > > Very good job Lorenzo ! > > > Please could you explain one thing to me: > > - the contents of your page is very valuable > > This is why I think it should be added to the wiki, with a link for the > Matlab/Numpy comparison page, and one from the PerformancePython page. 
> > My 2 cents, > > Gaël > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hakan.jakobsson at gmail.com Sat Apr 14 07:55:00 2007 From: hakan.jakobsson at gmail.com (=?ISO-8859-1?Q?H=E5kan_Jakobsson?=) Date: Sat, 14 Apr 2007 13:55:00 +0200 Subject: [SciPy-user] Sparse indexing workarounds In-Reply-To: References: <461B4F0E.8040509@ntc.zcu.cz> <687bb3e80704100211necb743fi8fd7b9293c79afc3@mail.gmail.com> Message-ID: <687bb3e80704140455u123e1fadq1dc50b42e988b52c@mail.gmail.com> Yes, I actually knew that but forgot to mention. Sorry. In any case, would this kind of functionality not be very nice to have in the standard sparse package? For one, I find that pysparse really doesn't install without fuss. I couldn't get it to work with Python 2.5 and had to install 2.4 on my Mac in order to do so. At uni we've been trying to get it to work on some Linux boxes but no luck. Maybe this functionality is already planned for in sparse, otherwise that's my suggestion. Unfortunately I'm not savvy enough (yet) when it comes to Python, but when I am I'll be happy to help out. :) /Håkan On 4/14/07, Bill Baxter wrote: > > Ack! Pysparse's update_add_mask is bugged!
> > In [188]: K = arange(12).reshape((3,4)) > In [189]: L = spmatrix.ll_mat(10,10) > In [190]: L.update_add_mask(K, [7,8,9],[3,4,5,6], [True]*3,[True]*4) > In [191]: todense(L) > Out[191]: > array([[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], > [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], > [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], > [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], > [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], > [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], > [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], > [ 0., 0., 0., 0., 3., 6., 9., 0., 0., 0.], > [ 0., 0., 0., 1., 4., 7., 10., 0., 0., 0.], > [ 0., 0., 0., 2., 5., 8., 11., 0., 0., 0.]]) > In [192]: K > Out[192]: > array([[ 0, 1, 2, 3], > [ 4, 5, 6, 7], > [ 8, 9, 10, 11]]) > > Notice how K got transposed and reshaped. To make it work properly > you have to do > > L.update_add_mask (K.T.reshape(3,4), [7,8,9],[3,4,5,6], > [True]*3,[True]*4) > > --bb > > > On 4/10/07, H?kan Jakobsson wrote: > > You can use pysparse instead of sparse. The update_add_mask function > will do > > the trick. > > /H?kan J > > > > > > On 4/10/07, Robert Cimrman < cimrman3 at ntc.zcu.cz> wrote: > > > Bill Baxter wrote: > > > > Does anyone have an easy (and efficient way) to update a submatrix > of > > > > a big sparse matrix? > > > > > > > > K = scipy.sparse.lil_matrix(bigN,bigN) > > > > ... > > > > conn = [1,4,11,12] > > > > K[ix_(conn,conn)] += elemK > > > > > > > > where elemK is a 4x4 dense matrix. > > > > > > > > This kind of thing is very commonly needed in FEM codes for > assembling > > > > the global stiffness matrix. But sparse doesn't seem to support > > > > either += or indexing with the open grid of indices returned by ix_. > > > > > > You may have a look at > > > http://ui505p06-mbs.ntc.zcu.cz/sfe/FEUtilsExample > > > it uses CSR matrix though... > > > > > > r. 
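[For readers skimming the thread later: the loop-based fallback for scipy.sparse that the question implies (accumulating a dense element matrix into an lil_matrix one entry at a time, since `+=` with the index grid from `ix_` is not supported) can be sketched as below. This is an illustrative sketch only; the matrix size, element values, and helper name are made up.]

```python
import numpy as np
from scipy import sparse

def add_submatrix(K, conn, elemK):
    """Accumulate the dense element matrix elemK into sparse K
    at the global rows/columns listed in conn (hypothetical helper)."""
    for i, gi in enumerate(conn):
        for j, gj in enumerate(conn):
            K[gi, gj] += elemK[i, j]

bigN = 20
K = sparse.lil_matrix((bigN, bigN))   # lil_matrix is cheap to modify in place
elemK = np.ones((4, 4))               # stand-in 4x4 element stiffness matrix
conn = [1, 4, 11, 12]                 # global DOF indices for this element
add_submatrix(K, conn, elemK)
```

[Once assembly is finished, converting with `K.tocsr()` gives a format better suited to matrix-vector products and solvers.]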
> > > _______________________________________________ > > > SciPy-user mailing list > > > SciPy-user at scipy.org > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emsellem at obs.univ-lyon1.fr Sat Apr 14 13:48:27 2007 From: emsellem at obs.univ-lyon1.fr (Eric Emsellem) Date: Sat, 14 Apr 2007 19:48:27 +0200 Subject: [SciPy-user] Meijer G functions ? Message-ID: <462113EB.3050003@obs.univ-lyon1.fr> An HTML attachment was scrubbed... URL: From bnuttall at uky.edu Sat Apr 14 17:18:41 2007 From: bnuttall at uky.edu (Brandon C. Nuttall) Date: Sat, 14 Apr 2007 17:18:41 -0400 Subject: [SciPy-user] SciPy.Stats.linregress question Message-ID: <1176585521.ae24c7cbnuttall@uky.edu> Hello, I have a question about the least squares linear regression module in scipy.stats.linregress. The standard error of the estimate returned is the population estimate, not the sample estimate. Shouldn't this be an estimate for the sample (i.e., the degrees of freedom should be n-2 where n is the sample size)? That's not really my question. My question is, should there be a parameter for this and similar routines that specifies whether you want a population or sample estimate?
So, for example, using data from http://onlinestatbook.com/chapter12/accuracy.html, you can back out the sample standard error of the estimate: >>> ================================ RESTART ================================ >>> from scipy.stats import linregress >>> from math import * >>> online = [[1.0, 1.0], [2.0, 2.0], [3.0, 1.3], [4.0, 3.75], [5.0, 2.25]] >>> slope,intercept,r,twotail,stderr = linregress(online) >>> print "Population stderr:",stderr Population stderr: 0.747094371549 >>> sample = sqrt((stderr*stderr*len(online))/(len(online)-2)) >>> print "Sample stderr:",sample Sample stderr: 0.964494686351 >>> Thanks. Brandon Nuttall Brandon C. Nuttall bnuttall at uky.edu www.uky.edu/kgs 859-257-5500 ext 174 From tgrav at mac.com Sat Apr 14 18:47:56 2007 From: tgrav at mac.com (Tommy Grav) Date: Sat, 14 Apr 2007 18:47:56 -0400 Subject: [SciPy-user] using errors in scipy.optimize.leastsq Message-ID: <8B84322F-CE15-4637-85A1-927DED3FC117@mac.com> I am using scipy.optimize.leastsq to fit a sinusoid function to a set of observations given by three arrays atime,amag,aerr. I am able to make the leastsq function find what seems to be the appropriate solution, but I am now wondering how to use the amerr (the array of errors in the measurement to weight the individual points used in the solution. Does anyone have any hints for doing this? Cheers Tommy Snipped example: def func(x,period,time,mag,merr,n): diff = ndarray(len(time)) tmp = 2.*pi/period for l in range(1,n+1): fit = x[0] + x[2]*sin(tmp*(time - x[1])) + x[3]*cos(tmp* (time - x[1])) diff = fit - mag return diff mavg = average(amag) mmax = max(amag) - mavg mmin = mavg - min(amag) print mavg, mmax, mmin x0 = array([mavg,54128.0,mmax,mmin]) norder = 1 plist = array([float(p/1000.) 
for p in range(10,1000)]) chilist = ndarray(len(plist)) n = 0 for period in plist: res = leastsq(func,x0,args= (period,atime,amag,amerr,norder),full_output=1,col_deriv=1,epsfcn=0.001) diff = func(res[0],period,atime,amag,amerr,norder) chilist[n] = sqrt(dot(diff,diff)/(len(diff)-1.)) n += 1 period = plist[chilist.argmin()] print "Period = %12.5f h" % (period*24.) res = leastsq(func,x0,args= (period,atime,amag,amerr,norder),full_output=1,col_deriv=1) print "Amplitude = %12.5f mag" % (res[0][2]) From robert.kern at gmail.com Sat Apr 14 18:57:05 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 14 Apr 2007 17:57:05 -0500 Subject: [SciPy-user] using errors in scipy.optimize.leastsq In-Reply-To: <8B84322F-CE15-4637-85A1-927DED3FC117@mac.com> References: <8B84322F-CE15-4637-85A1-927DED3FC117@mac.com> Message-ID: <46215C41.3010003@gmail.com> Tommy Grav wrote: > I am using scipy.optimize.leastsq to fit a sinusoid function to a set of > observations given by three arrays atime,amag,aerr. I am able to make > the leastsq function find what seems to be the appropriate solution, > but I am now wondering how to use the amerr (the array of errors in > the measurement to weight the individual points used in the solution. > Does anyone have any hints for doing this? diff /= merr -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From s.mientki at ru.nl Sun Apr 15 09:55:50 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Sun, 15 Apr 2007 15:55:50 +0200 Subject: [SciPy-user] is there an easy way to extend/continuate a modulo function ? 
Message-ID: <46222EE6.9010101@ru.nl> In comparing different filters, I plot the phase of the transfer function # calculate the transfer function h,w = signal.freqz ( filt_coeff[0], filt_coeff[1] ) # calculate the phase angle w_Phase = arctan( imag(w) / real(w)) Now the phase-angle "w_Phase" flips around pi/2 and -pi/2, which of course is logical. But supposse I want to continuate the phase over the pi/2 border, (and I know that phase at either first or last element is zero), is there an easy way to accomplish that ? thanks, Stef Mientki From gary.pajer at gmail.com Sun Apr 15 10:10:36 2007 From: gary.pajer at gmail.com (Gary Pajer) Date: Sun, 15 Apr 2007 10:10:36 -0400 Subject: [SciPy-user] is there an easy way to extend/continuate a modulo function ? In-Reply-To: <46222EE6.9010101@ru.nl> References: <46222EE6.9010101@ru.nl> Message-ID: <88fe22a0704150710v211eda1ds972a2eee6bffec85@mail.gmail.com> On 4/15/07, Stef Mientki wrote: > In comparing different filters, I plot the phase of the transfer function > > # calculate the transfer function > h,w = signal.freqz ( filt_coeff[0], filt_coeff[1] ) > # calculate the phase angle > w_Phase = arctan( imag(w) / real(w)) > > Now the phase-angle "w_Phase" flips around pi/2 and -pi/2, > which of course is logical. > But supposse I want to continuate the phase over the pi/2 border, > (and I know that phase at either first or last element is zero), > is there an easy way to accomplish that ? Take a look at numpy.unwrap() Sounds like what you want. > > thanks, > Stef Mientki > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From stefan at sun.ac.za Sun Apr 15 11:32:32 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sun, 15 Apr 2007 17:32:32 +0200 Subject: [SciPy-user] Dose the interpolation functions exist in Python or Scipy module? 
In-Reply-To: References: Message-ID: <20070415153232.GM18196@mentat.za.net> On Fri, Apr 13, 2007 at 05:58:38PM -0400, Anne Archibald wrote: > On 13/04/07, Xiaojian Wang wrote: > > Hi, Dose the interpolation function library exist in Python or Scipy > > module?, > > I want to generate a surface with known points in 3D, such as using > > bi-cubic spline etc. I did it in F77 long time ago. > > There is a library, scipy.interpolate, which provides access to many > interpolation functions, and in particular cubic (and I think bicubic) > splines. Be aware that it sometimes defaults to smoothing splines > rather than strictly interpolating splines. John Travers recently refactored parts of the fitpack interface to distinguish between interpolation at smoothing. Take a look at scipy.interpolate.RectBivariateSpline scipy.interpolate.UnivariateSpline scipy.interpolate.BivariateSpline Regards St?fan From s.mientki at ru.nl Sun Apr 15 11:57:48 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Sun, 15 Apr 2007 17:57:48 +0200 Subject: [SciPy-user] is there an easy way to extend/continuate a modulo function ? In-Reply-To: <88fe22a0704150710v211eda1ds972a2eee6bffec85@mail.gmail.com> References: <46222EE6.9010101@ru.nl> <88fe22a0704150710v211eda1ds972a2eee6bffec85@mail.gmail.com> Message-ID: <46224B7C.3090005@ru.nl> > Take a look at numpy.unwrap() > Sounds like what you want. > Thanks Gary, that's indeed what I'm looking for. cheers, Stef From rmay at ou.edu Sun Apr 15 13:29:22 2007 From: rmay at ou.edu (Ryan May) Date: Sun, 15 Apr 2007 12:29:22 -0500 Subject: [SciPy-user] is there an easy way to extend/continuate a modulo function ? 
In-Reply-To: <46222EE6.9010101@ru.nl> References: <46222EE6.9010101@ru.nl> Message-ID: <462260F2.5040406@ou.edu> Stef Mientki wrote: > In comparing different filters, I plot the phase of the transfer function > > # calculate the transfer function > h,w = signal.freqz ( filt_coeff[0], filt_coeff[1] ) > # calculate the phase angle > w_Phase = arctan( imag(w) / real(w)) > You could use arctan2 instead of arctan: w_Phase = arctan2( imag(w), real(w)) Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma From antoinecailliau at gmail.com Sun Apr 15 14:10:24 2007 From: antoinecailliau at gmail.com (Antoine Cailliau) Date: Sun, 15 Apr 2007 20:10:24 +0200 Subject: [SciPy-user] TypeError: an integer is required Message-ID: <1176660624.6517.17.camel@localhost> Hi, For a university project, I need to compute some big matrix (roughly 2000*85000). But when I try to create my matrix I've an error I don't understand. Here is the code fragment > movies_seen_by_user = sparse.csc_matrix((max_movie_id+1, > cursor.rowcount),cursor.rowcount,None) cursor.rowcount = 2000 and is an integer. And here is the error: > File "/var/www/project/project/algorithm.py", line 103, in suggest_films > movies_seen_by_user = sparse.csc_matrix((max_movie_id+1, cursor.rowcount),cursor.rowcount,None) > File "/usr/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 530, in __init__ > self.data = zeros((nzmax,), self.dtype) > TypeError: an integer is required The problem is the time that the loop takes. With a sparse.zero it's quicker than with a sparse matrix (csc) but I've a lot of zeros in my matrix and I want to have a sparse matrix for the operations I do afterwards. I tried > movies_seen_by_user = sparse.csc_matrix((max_movie_id+1, cursor.rowcount)) > movies_seen_by_user.nzmax = cursor.rowcount > movies_seen_by_user.allocsize = cursor.rowcount + 1 But my script is still very, very slow. Thanks to everyone, Antoine C.
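[A general workaround for the slow element-by-element fill described above: collect (row, column, value) triplets and build the matrix in one step as a COO matrix, converting to CSC only at the end. A minimal sketch; the pairs and shape below are made-up stand-ins for the database cursor rows.]

```python
from scipy import sparse

# Stand-in for rows fetched from the database: (user_id, movie_id) pairs.
seen = [(0, 3), (1, 7), (2, 3), (2, 9)]

rows, cols, vals = [], [], []
for user_id, movie_id in seen:
    rows.append(movie_id)   # one row per movie
    cols.append(user_id)    # one column per user
    vals.append(1.0)

# Build once from the triplets (O(nnz)), then convert to CSC
# for fast column operations afterwards.
movies_seen_by_user = sparse.coo_matrix(
    (vals, (rows, cols)), shape=(10, 3)).tocsc()
```

[Assigning into a CSC matrix inside the loop forces repeated reallocation of the compressed storage; building from triplets avoids that entirely.]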
-- Antoine Cailliau Rue de l'Ang?lique, 2 1348 Louvain-La-Neuve Mobile : 00 32 496 67 82 52 a.cailliau at ac-graphic.net http://www.ac-graphic.net/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: This is a digitally signed message part URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 3869 bytes Desc: not available URL: From s.mientki at ru.nl Sun Apr 15 15:26:10 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Sun, 15 Apr 2007 21:26:10 +0200 Subject: [SciPy-user] is there an easy way to extend/continuate a modulo function ? In-Reply-To: <462260F2.5040406@ou.edu> References: <46222EE6.9010101@ru.nl> <462260F2.5040406@ou.edu> Message-ID: <46227C52.8020702@ru.nl> Ryan May wrote: > Stef Mientki wrote: > >> In comparing different filters, I plot the phase of the transfer function >> >> # calculate the transfer function >> h,w = signal.freqz ( filt_coeff[0], filt_coeff[1] ) >> # calculate the phase angle >> w_Phase = arctan( imag(w) / real(w)) >> >> > You could use arctan2 instead of arctan: > > w_Phase = arctan2( imag(w), real(w)) > > Ryan > Thanks Ryan, arctan2 works twice as well as arctan+unwrap ;-) i.e. it unfolds the from (-pi/1 .. pi/2) into (-pi ..pi) But unfortunately, both methods don't seem to work on this specific transfer function, I see a lot "-1.#IND", seems to be a special presentation of "-1", but don't know what it "really" means, my wrapper application also seems to have trouble with that. If I find more details, you'll hear again from me. cheers, Stef Mientki From s.mientki at ru.nl Sun Apr 15 16:35:12 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Sun, 15 Apr 2007 22:35:12 +0200 Subject: [SciPy-user] problems with signal-functions impulse and step ... 
Message-ID: <46228C80.1070705@ru.nl> I wonder if I'm doing something wrong, or if the signal library might have some small bugs. Do others have the same experience ? #I create a highpass filter, #which amplitude and phase characteristic looks good filt_1 = signal.iirdesign( 0.06, 0.002, 1, 50, 0, 'butter') # Now when I want to calculate the impulse response, aa,bb = signal.impulse( filt_1[0], filt_1[1] ) # depending on the IDE I'm using, one of the following happens # - I get an error message (which is (yet) above my knowledge of Python), see below # - the IDE crashes totally # - the script stops, without an error message Traceback (most recent call last): File "", line 114, in ? File "P:\PROGRA~1\PYTHON\lib\site-packages\scipy\signal\ltisys.py", line 470, in impulse h[k] = squeeze(dot(dot(C,eA),B)) TypeError: only length-1 arrays can be converted to Python scalars # creating an LTI-object before calculating impulse or step response # mostly will return no error, but just stops execution of the script LTI_1 = signal.lti( filt_1[0], filt_1[1]) aa,bb = LTI_1.impulse() aa,bb = LTI_1.step() thanks for any clarification, Stef Mientki From stefan at sun.ac.za Sun Apr 15 17:40:58 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sun, 15 Apr 2007 23:40:58 +0200 Subject: [SciPy-user] problems with signal-functions impulse and step ... In-Reply-To: <46228C80.1070705@ru.nl> References: <46228C80.1070705@ru.nl> Message-ID: <20070415214058.GR18196@mentat.za.net> Hi Stef On Sun, Apr 15, 2007 at 10:35:12PM +0200, Stef Mientki wrote: > I wonder if I'm doing something wrong, > or if the signal library might have some small bugs. > Do others have the same experience ?
> > #I create a highpass filter, > #which amplitude and phase characteristic looks good > filt_1 = signal.iirdesign( 0.06, 0.002, 1, 50, 0, 'butter') > > # Now when I want to calculate the impuls response, > aa,bb = signal.impulse( filt_1[0], filt_1[1] ) The function signature for signal.impulse is signal.impulse(system, X0=None, T=None, N=None) Where system -- an instance of the LTI class or a tuple with 2, 3, or 4 elements representing (num, den), (zero, pole, gain), or (A, B, C, D) representation of the system. (By the way, IPython is extremely useful in investigating this sort of thing -- you simply type signal.impulse?) So, I think what you want to do is: aa,bb = signal.impulse(filt_1[:2]) Cheers St?fan From S.Mientki at ru.nl Mon Apr 16 04:01:14 2007 From: S.Mientki at ru.nl (Stef Mientki) Date: Mon, 16 Apr 2007 10:01:14 +0200 Subject: [SciPy-user] problems with signal-functions impulse and step ... In-Reply-To: <20070415214058.GR18196@mentat.za.net> References: <46228C80.1070705@ru.nl> <20070415214058.GR18196@mentat.za.net> Message-ID: <46232D4A.9040704@ru.nl> Stefan van der Walt wrote: > Hi Stef > > On Sun, Apr 15, 2007 at 10:35:12PM +0200, Stef Mientki wrote: > >> I wonder if I'm doing something wrong, >> or if the signal library might have has some small bugs. >> Do others have the same experience ? >> >> #I create a highpass filter, >> #which amplitude and phase characteristic looks good >> filt_1 = signal.iirdesign( 0.06, 0.002, 1, 50, 0, 'butter') >> >> # Now when I want to calculate the impuls response, >> aa,bb = signal.impulse( filt_1[0], filt_1[1] ) >> > > The function signature for signal.impulse is > > signal.impulse(system, X0=None, T=None, N=None) > > Where > > system -- an instance of the LTI class or a tuple with 2, 3, or 4 > elements representing (num, den), (zero, pole, gain), or > (A, B, C, D) representation of the system. > > (By the way, IPython is extremely useful in investigating this sort of > thing -- you simply type signal.impulse?) 
> > So, I think what you want to do is: > > aa,bb = signal.impulse(filt_1[:2]) > > thanks St?fan, but it doesn't seem to work either. Although I'm just a beginner with Python (coming from MatLab), I don't see the difference between: aa,bb = signal.impulse(filt_1[:2]) aa,bb = signal.impulse(filt_1) knowing that the first dimension of filt_1 = 2, and indeed it gives the same problems ;-) or am I missing something ? cheers, Stef Kamer van Koophandel - handelsregister 41055629 / Netherlands Chamber of Commerce - trade register 41055629 From stefan at sun.ac.za Mon Apr 16 05:51:45 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Mon, 16 Apr 2007 11:51:45 +0200 Subject: [SciPy-user] problems with signal-functions impulse and step ... In-Reply-To: <46232D4A.9040704@ru.nl> References: <46228C80.1070705@ru.nl> <20070415214058.GR18196@mentat.za.net> <46232D4A.9040704@ru.nl> Message-ID: <20070416095145.GV18196@mentat.za.net> On Mon, Apr 16, 2007 at 10:01:14AM +0200, Stef Mientki wrote: > > The function signature for signal.impulse is > > > > signal.impulse(system, X0=None, T=None, N=None) > > > > Where > > > > system -- an instance of the LTI class or a tuple with 2, 3, or 4 > > elements representing (num, den), (zero, pole, gain), or > > (A, B, C, D) representation of the system. > > > > (By the way, IPython is extremely useful in investigating this sort of > > thing -- you simply type signal.impulse?) > > > > So, I think what you want to do is: > > > > aa,bb = signal.impulse(filt_1[:2]) > > > > > thanks St?fan, > but it doesn't seem to work either. Are you saying that the following code snippet crashes? b,a = signal.iirdesign(0.06,0.006,1,50,0,'butter') signal.impulse((b,a)) If so, which version of scipy are you using? This works fine on 0.5.3 r2897. 
> I don't see the difference between: > > aa,bb = signal.impulse(filt_1[:2]) > > aa,bb = signal.impulse(filt_1) There is no difference -- I was just trying to emphasise that you need to specify a tuple/list of two elements as the input parameter -- not two separate parameters like you had it. Regards Stéfan From ctmedra at unizar.es Mon Apr 16 13:04:28 2007 From: ctmedra at unizar.es (Carlos Medrano) Date: Mon, 16 Apr 2007 19:04:28 +0200 Subject: [SciPy-user] misc.imresize Message-ID: <200704161904.28691.ctmedra@unizar.es> Hi: I have been working with imresize and I think that there is a problem when the size is a real number. It only accepts float but not float32 or float64 (if not exactly float it thinks it can be a tuple). It is not critical but it should probably accept all these types. It is nice to work directly with ndarray instead of PIL images. I think that the PIL method imresize accepts only a tuple. Python: 2.4.3, scipy and numpy from Andrew Straw repository Scipy: '0.5.2.dev2299' Numpy: '1.0' PIL: 1.1.5-4ubuntu1 --------------------------------------------------------------- Python 2.4.3 (#2, Oct 6 2006, 07:52:30) [GCC 4.0.3 (Ubuntu 4.0.3-1ubuntu5)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from scipy import * >>> im=lena() >>> im.shape (512, 512) >>> im1=misc.imresize(im,1.1) /usr/lib/python2.4/site-packages/PIL/Image.py:1200: DeprecationWarning: integer argument expected, got float im = self.im.resize(size, resample) >>> im1.shape (563, 563) >>> im1=misc.imresize(im,float(1.1)) >>> im1=misc.imresize(im,float32(1.1)) Traceback (most recent call last): File "", line 1, in ? File "/usr/lib/python2.4/site-packages/scipy/misc/pilutil.py", line 256, in imresize size = (size[1],size[0]) TypeError: unsubscriptable object >>> im1=misc.imresize(im,float64(1.1)) Traceback (most recent call last): File "", line 1, in ?
File "/usr/lib/python2.4/site-packages/scipy/misc/pilutil.py", line 256, in imresize size = (size[1],size[0]) TypeError: unsubscriptable object Regards: Carlos Medrano From s.mientki at ru.nl Mon Apr 16 14:46:44 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Mon, 16 Apr 2007 20:46:44 +0200 Subject: [SciPy-user] Bad publicity, was:Re: problems with signal-functions impulse and step ... In-Reply-To: <20070416095145.GV18196@mentat.za.net> References: <46228C80.1070705@ru.nl> <20070415214058.GR18196@mentat.za.net> <46232D4A.9040704@ru.nl> <20070416095145.GV18196@mentat.za.net> Message-ID: <4623C494.9080209@ru.nl> Stefan van der Walt wrote: > On Mon, Apr 16, 2007 at 10:01:14AM +0200, Stef Mientki wrote: > >>> The function signature for signal.impulse is >>> >>> signal.impulse(system, X0=None, T=None, N=None) >>> >>> Where >>> >>> system -- an instance of the LTI class or a tuple with 2, 3, or 4 >>> elements representing (num, den), (zero, pole, gain), or >>> (A, B, C, D) representation of the system. >>> >>> (By the way, IPython is extremely useful in investigating this sort of >>> thing -- you simply type signal.impulse?) >>> >>> So, I think what you want to do is: >>> >>> aa,bb = signal.impulse(filt_1[:2]) >>> >>> >>> >> thanks St?fan, >> but it doesn't seem to work either. >> > > hello St?fan (and others), > Are you saying that the following code snippet crashes? > > b,a = signal.iirdesign(0.06,0.006,1,50,0,'butter') > signal.impulse((b,a)) > yes (to be sure, I copied them again from your email ;-) > If so, which version of scipy are you using? This works fine on 0.5.3 > r2897. > > Python 2.4.3 - Enthought Edition 1.0.0 (#69, Aug 2 2006, 12:09:59) [MSC v.1310 32 bit (Intel)] on win32. Sorry, probably you mean a version of some sublibrary, I don't know which library you mean and I don't know how I can get that version. Of course this is not very important to me, because I can get the step and impuls response just as easy from lfilter. 
But I think this is (very) bad publicity for SciPy, after all, everyone starting with (deterministic) signal analysis, will start with some simple filters, and the first thing they want to see is amplitude response, the second thing they want to see is the impulse response. If indeed these functions are so buggy, it would be better to just remove them, because every beginner knows how to determine the impulse response, by filtering a Dirac pulse. just my 2 cents ;-) But anyway thanks for the answers Stéfan ! cheers, Stef Mientki From robert.kern at gmail.com Mon Apr 16 15:20:49 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 16 Apr 2007 14:20:49 -0500 Subject: [SciPy-user] Bad publicity, was:Re: problems with signal-functions impulse and step ... In-Reply-To: <4623C494.9080209@ru.nl> References: <46228C80.1070705@ru.nl> <20070415214058.GR18196@mentat.za.net> <46232D4A.9040704@ru.nl> <20070416095145.GV18196@mentat.za.net> <4623C494.9080209@ru.nl> Message-ID: <4623CC91.70801@gmail.com> Stef Mientki wrote: > > Stefan van der Walt wrote: >> If so, which version of scipy are you using? This works fine on 0.5.3 >> r2897. >> >> > Python 2.4.3 - Enthought Edition 1.0.0 (#69, Aug 2 2006, 12:09:59) [MSC > v.1310 32 bit (Intel)] on win32. > Sorry, probably you mean a version of some sublibrary, > I don't know which library you mean and > I don't know how I can get that version. >>> import scipy >>> print scipy.__version__ 0.5.3.dev2819 > Of course this is not very important to me, > because I can get the step and impulse response just as easy from lfilter. > > But I think this is (very) bad publicity for SciPy, > after all, everyone starting with (deterministic) signal analysis, > will start with some simple filters, > and the first thing they want to see is amplitude response, > the second thing they want to see is the impulse response. > If indeed these functions are so buggy, They're not anymore.
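[The lfilter route Stef mentions is indeed a one-liner per response: filter a unit impulse, or a unit step, directly. A hedged sketch; the Butterworth design and length here are example values, not the exact filter from the thread.]

```python
import numpy as np
from scipy import signal

b, a = signal.butter(2, 0.2)        # example low-pass design (b, a coefficients)

n = 50
x_impulse = np.zeros(n)
x_impulse[0] = 1.0                  # discrete Dirac pulse
x_step = np.ones(n)                 # unit step

h_impulse = signal.lfilter(b, a, x_impulse)  # impulse response
h_step = signal.lfilter(b, a, x_step)        # step response
```

[Since the step is the cumulative sum of the impulse and the filter is linear and time-invariant, `np.cumsum(h_impulse)` matches `h_step` to floating-point precision, which makes a handy sanity check.]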
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From s.mientki at ru.nl Mon Apr 16 15:39:12 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Mon, 16 Apr 2007 21:39:12 +0200 Subject: [SciPy-user] Bad publicity, was:Re: problems with signal-functions impulse and step ... In-Reply-To: <4623CC91.70801@gmail.com> References: <46228C80.1070705@ru.nl> <20070415214058.GR18196@mentat.za.net> <46232D4A.9040704@ru.nl> <20070416095145.GV18196@mentat.za.net> <4623C494.9080209@ru.nl> <4623CC91.70801@gmail.com> Message-ID: <4623D0E0.2000605@ru.nl> >>>> import scipy >>>> print scipy.__version__ >>>> > 0.5.3.dev2819 > > thanks Robert, so the Enthought edition I use, has Scipy 0.5.0.2033 >> the second thing the want to see is the impuls response. >> If indeed these functions are so buggy, >> > > They're not anymore. > > That's very good to hear ! I now see there's a version 0.5.2 for win32, for Python 2.4 and 2.5. Has 0.5.2 already signal functions ? Now I've installed the Enthought edition, and that one is still based on Scipy 0.5.0.2033. Can I just install Scipy 0.5.2 (for Python 2.4 I guess), over the existing Enthought edition ? Will there be a new Enthought edition (soon) ? cheers, Stef Mientki From robert.kern at gmail.com Mon Apr 16 15:56:39 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 16 Apr 2007 14:56:39 -0500 Subject: [SciPy-user] Bad publicity, was:Re: problems with signal-functions impulse and step ... 
In-Reply-To: <4623D0E0.2000605@ru.nl> References: <46228C80.1070705@ru.nl> <20070415214058.GR18196@mentat.za.net> <46232D4A.9040704@ru.nl> <20070416095145.GV18196@mentat.za.net> <4623C494.9080209@ru.nl> <4623CC91.70801@gmail.com> <4623D0E0.2000605@ru.nl> Message-ID: <4623D4F7.7090603@gmail.com> Stef Mientki wrote: >>>>> import scipy >>>>> print scipy.__version__ >>>>> >> 0.5.3.dev2819 >> >> > thanks Robert, > so the Enthought edition I use, has Scipy 0.5.0.2033 > >>> the second thing the want to see is the impuls response. >>> If indeed these functions are so buggy, >>> >> They're not anymore. >> > That's very good to hear ! > I now see there's a version 0.5.2 for win32, for Python 2.4 and 2.5. > Has 0.5.2 already signal functions ? I don't know when the bug got fixed. > Now I've installed the Enthought edition, > and that one is still based on Scipy 0.5.0.2033. > > Can I just install Scipy 0.5.2 (for Python 2.4 I guess), Yes, for 2.4. > over the existing Enthought edition ? I would remove the c:\Python24\Lib\site-packages\scipy\ directory first. > Will there be a new Enthought edition (soon) ? Not in the same monolithic installer, no. We are distributing eggs now, including current builds of scipy. http://code.enthought.com/enstaller/ Note that the version numbers listed on that page are out-of-date. We make nightly builds of scipy from SVN. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From s.mientki at ru.nl Mon Apr 16 16:13:27 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Mon, 16 Apr 2007 22:13:27 +0200 Subject: [SciPy-user] Bad publicity, was:Re: problems with signal-functions impulse and step ... 
In-Reply-To: <4623D4F7.7090603@gmail.com> References: <46228C80.1070705@ru.nl> <20070415214058.GR18196@mentat.za.net> <46232D4A.9040704@ru.nl> <20070416095145.GV18196@mentat.za.net> <4623C494.9080209@ru.nl> <4623CC91.70801@gmail.com> <4623D0E0.2000605@ru.nl> <4623D4F7.7090603@gmail.com> Message-ID: <4623D8E7.1000008@ru.nl> thanks Robert, I'll try to install new versions, and report what it brings. cheers, Stef Mientki >>> >>> >> That's very good to hear ! >> I now see there's a version 0.5.2 for win32, for Python 2.4 and 2.5. >> Has 0.5.2 already signal functions ? >> > > I don't know when the bug got fixed. > > >> Now I've installed the Enthought edition, >> and that one is still based on Scipy 0.5.0.2033. >> >> Can I just install Scipy 0.5.2 (for Python 2.4 I guess), >> > > Yes, for 2.4. > > >> over the existing Enthought edition ? >> > > I would remove the c:\Python24\Lib\site-packages\scipy\ directory first. > > >> Will there be a new Enthought edition (soon) ? >> > > Not in the same monolithic installer, no. We are distributing eggs now, > including current builds of scipy. > > http://code.enthought.com/enstaller/ > > Note that the version numbers listed on that page are out-of-date. We make > nightly builds of scipy from SVN. > > From william.ratcliff at gmail.com Mon Apr 16 17:02:43 2007 From: william.ratcliff at gmail.com (william ratcliff) Date: Mon, 16 Apr 2007 17:02:43 -0400 Subject: [SciPy-user] question about installation of odr Message-ID: <827183970704161402i34d3b276lbedbe1a3cdf51564@mail.gmail.com> I just checked odr out from svn and attempted to install it. It wasn't happy about the absence of BLAS libraries. However, it did build. I ran the tests and they all failed. It seems that the return values are the same as the initial values. Can anyone offer any suggestions? I should note that after the build, I moved odr from the sandbox to its own directory under scipy. 
Thanks, William Ratcliff Build: $ /c/python24/python.exe setup.py install Setting mingw32 as default compiler for nt. blas_opt_info: blas_mkl_info: libraries mkl,vml,guide not find in c:\python24\lib libraries mkl,vml,guide not find in C:\ libraries mkl,vml,guide not find in c:\python24\libs NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not find in c:\python24\lib libraries ptf77blas,ptcblas,atlas not find in C:\ libraries ptf77blas,ptcblas,atlas not find in c:\python24\libs NOT AVAILABLE atlas_blas_info: libraries f77blas,cblas,atlas not find in c:\python24\lib libraries f77blas,cblas,atlas not find in C:\ libraries f77blas,cblas,atlas not find in c:\python24\libs NOT AVAILABLE blas_info: libraries blas not find in c:\python24\lib libraries blas not find in C:\ libraries blas not find in c:\python24\libs NOT AVAILABLE blas_src_info: NOT AVAILABLE NOT AVAILABLE Appending sandbox.odr configuration to sandbox Ignoring attempt to set 'name' (from 'sandbox' to 'sandbox.odr') Appending sandbox.odr configuration to sandbox Ignoring attempt to set 'name' (from 'sandbox' to 'sandbox.odr') running install running build running config_fc running build_src building library "odrpack" sources building library "odrpack" sources building extension "sandbox.odr.__odrpack" sources building extension "sandbox.odr.__odrpack" sources building data_files sources running build_py running build_clib customize Mingw32CCompiler customize Mingw32CCompiler using build_clib 0 Could not locate executable f77 Could not locate executable gfortran Could not locate executable f95 customize GnuFCompiler c:\python24\lib\site-packages\numpy\distutils\system_info.py:1233: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. 
warnings.warn(AtlasNotFoundError.__doc__) c:\python24\lib\site-packages\numpy\distutils\system_info.py:1242: UserWarning: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. warnings.warn(BlasNotFoundError.__doc__) c:\python24\lib\site-packages\numpy\distutils\system_info.py:1245: UserWarning: Blas (http://www.netlib.org/blas/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable. warnings.warn(BlasSrcNotFoundError.__doc__) .\odr\setup.py:26: UserWarning: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. warnings.warn(BlasNotFoundError.__doc__) customize GnuFCompiler customize GnuFCompiler using build_clib building 'odrpack' library compiling Fortran sources Fortran f77 compiler: c:\Python24\Enthought\MingW\bin\g77.exe -g -Wall -fno-seco nd-underscore -O3 -funroll-loops -march=pentium4 -mmmx -msse2 -msse -fomit-frame -pointer -malign-double compile options: '-c' g77.exe:f77: odr\odrpack\d_lpkbls.f ar: adding 4 object files to build\temp.win32-2.4\libodrpack.a running build_ext customize Mingw32CCompiler customize Mingw32CCompiler using build_ext customize GnuFCompiler customize GnuFCompiler customize GnuFCompiler using build_ext building 'sandbox.odr.__odrpack' extension compiling C sources C compiler: gcc -O2 -Wall -Wstrict-prototypes creating build\temp.win32-2.4\Release creating build\temp.win32-2.4\Release\odr compile options: '-Iodr -Ic:\python24\lib\site-packages\numpy\core\include -Ic:\ python24\include -Ic:\python24\PC -c' gcc -O2 -Wall -Wstrict-prototypes -Iodr -Ic:\python24\lib\site-packages\numpy\co re\include 
-Ic:\python24\include -Ic:\python24\PC -c odr\__odrpack.c -o build\te mp.win32-2.4\Release\odr\__odrpack.o c:\Python24\Enthought\MingW\bin\g77.exe -shared build\temp.win32- 2.4\Release\odr \__odrpack.o -Lc:/Python24/Enthought/MingW/bin/../lib/gcc/mingw32/3.4.5 -Lc:\pyt hon24\libs -Lc:\python24\PCBuild -Lbuild\temp.win32-2.4 -lodrpack -lpython24 -lg 2c -o build\lib.win32-2.4\sandbox\odr\__odrpack.pyd running install_lib creating c:\python24\Lib\site-packages\sandbox creating c:\python24\Lib\site-packages\sandbox\odr copying build\lib.win32-2.4\sandbox\odr\info.py -> c:\python24\Lib\site-packages \sandbox\odr copying build\lib.win32-2.4\sandbox\odr\models.py -> c:\python24\Lib\site-packag es\sandbox\odr copying build\lib.win32-2.4\sandbox\odr\odrpack.py -> c:\python24\Lib\site-packa ges\sandbox\odr copying build\lib.win32-2.4\sandbox\odr\setup.py -> c:\python24\Lib\site-package s\sandbox\odr copying build\lib.win32-2.4\sandbox\odr\__init__.py -> c:\python24\Lib\site-pack ages\sandbox\odr copying build\lib.win32-2.4\sandbox\odr\__odrpack.pyd -> c:\python24\Lib\site-pa ckages\sandbox\odr copying build\lib.win32-2.4\sandbox\__init__.py -> c:\python24\Lib\site-packages \sandbox byte-compiling c:\python24\Lib\site-packages\sandbox\odr\info.py to info.pyc byte-compiling c:\python24\Lib\site-packages\sandbox\odr\models.py to models.pyc byte-compiling c:\python24\Lib\site-packages\sandbox\odr\odrpack.py to odrpack.p yc byte-compiling c:\python24\Lib\site-packages\sandbox\odr\setup.py to setup.pyc byte-compiling c:\python24\Lib\site-packages\sandbox\odr\__init__.py to __init__ .pyc byte-compiling c:\python24\Lib\site-packages\sandbox\__init__.py to __init__.pyc running install_data creating c:\python24\Lib\site-packages\sandbox\odr\tests copying odr\tests\test_odr.py -> c:\python24\Lib\site-packages\sandbox\odr\tests Test failures: $ /c/python24/python.exe test_odr.py Setting mingw32 as default compiler for nt. 
Found 5 tests for __main__ FFFFF ====================================================================== FAIL: test_explicit (__main__.test_odr) ---------------------------------------------------------------------- Traceback (most recent call last): File "test_odr.py", line 49, in test_explicit np.array([ 1.2646548050648876e+03, -5.4018409956678255e+01, File "C:\Python24\lib\site-packages\numpy\testing\utils.py", line 222, in asse rt_array_almost_equal header='Arrays are not almost equal') File "C:\Python24\lib\site-packages\numpy\testing\utils.py", line 207, in asse rt_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 1.50000000e+03, -5.00000000e+01, -1.00000000e-01]) y: array([ 1.26465481e+03, -5.40184100e+01, -8.78497122e-02]) ====================================================================== FAIL: test_implicit (__main__.test_odr) ---------------------------------------------------------------------- Traceback (most recent call last): File "test_odr.py", line 94, in test_implicit np.array([-0.9993809167281279, -2.9310484652026476, 0.0875730502693354, File "C:\Python24\lib\site-packages\numpy\testing\utils.py", line 222, in asse rt_array_almost_equal header='Arrays are not almost equal') File "C:\Python24\lib\site-packages\numpy\testing\utils.py", line 207, in asse rt_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([-1. , -3. , 0.09, 0.02, 0.08]) y: array([-0.99938092, -2.93104847, 0.08757305, 0.01622997, 0.0797538 ]) ====================================================================== FAIL: test_lorentz (__main__.test_odr) ---------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From robert.kern at gmail.com Mon Apr 16 17:16:37 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 16 Apr 2007 16:16:37 -0500 Subject: [SciPy-user] question about installation of odr In-Reply-To: <827183970704161402i34d3b276lbedbe1a3cdf51564@mail.gmail.com> References: <827183970704161402i34d3b276lbedbe1a3cdf51564@mail.gmail.com> Message-ID: <4623E7B5.2050203@gmail.com> william ratcliff wrote: > I just checked odr out from svn and attempted to install it. It wasn't > happy about the absence of BLAS libraries. However, it did build. I > ran the tests and they all failed. It seems that the return values are > the same as the initial values. Can anyone offer any suggestions? Try editing your site.cfg to configure the BLAS libraries. That might be the cause of the problem. When we can't find BLAS libraries, we do include the routines from the reference BLAS, but that might be buggy. > I > should note that after the build, I moved odr from the sandbox to its > own directory under scipy. Don't do that. odr is no longer in the sandbox; it is part of scipy proper now, so I don't know what sandbox you moved it from. Just build and install scipy as a whole. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From stefan at sun.ac.za Mon Apr 16 17:55:01 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Mon, 16 Apr 2007 23:55:01 +0200 Subject: [SciPy-user] misc.imresize In-Reply-To: <200704161904.28691.ctmedra@unizar.es> References: <200704161904.28691.ctmedra@unizar.es> Message-ID: <20070416215501.GB18196@mentat.za.net> Hi Carlos On Mon, Apr 16, 2007 at 07:04:28PM +0200, Carlos Medrano wrote: > I have been working with imresize and I think that there is a problem when > the size is a real number. It only accepts float but not float32 or float64 > (if not exactly float it thinks it can be a tuple). 
It is not critical but it > should probably accept all these types. It is nice to work directly with > ndarray instead of PIL images. Should be fixed in r2926. Thanks for the report. Cheers St?fan From zunzun at zunzun.com Mon Apr 16 19:55:44 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Mon, 16 Apr 2007 19:55:44 -0400 Subject: [SciPy-user] question about installation of odr In-Reply-To: <4623E7B5.2050203@gmail.com> References: <827183970704161402i34d3b276lbedbe1a3cdf51564@mail.gmail.com> <4623E7B5.2050203@gmail.com> Message-ID: <20070416235544.GA2579@zunzun.com> On Mon, Apr 16, 2007 at 04:16:37PM -0500, Robert Kern wrote: > > Don't do that. odr is no longer in the sandbox; it is part of scipy proper now, > so I don't know what sandbox you moved it from. Just build and install scipy as > a whole. I'm working with odr now; the SVN version is indeed _not_ in the sandbox now. James From william.ratcliff at gmail.com Mon Apr 16 21:20:46 2007 From: william.ratcliff at gmail.com (william ratcliff) Date: Mon, 16 Apr 2007 21:20:46 -0400 Subject: [SciPy-user] question about installation of odr In-Reply-To: <20070416235544.GA2579@zunzun.com> References: <827183970704161402i34d3b276lbedbe1a3cdf51564@mail.gmail.com> <4623E7B5.2050203@gmail.com> <20070416235544.GA2579@zunzun.com> Message-ID: <827183970704161820h4aa7326es60a8c241a8878225@mail.gmail.com> For your installation, did you have to install BLAS separately, or add anything to direct the ODR portion of the install towards those libraries? Or, in the latest SVN release of scipy, does everything install seamlessly? Thanks, William On 4/16/07, zunzun at zunzun.com wrote: > > On Mon, Apr 16, 2007 at 04:16:37PM -0500, Robert Kern wrote: > > > > Don't do that. odr is no longer in the sandbox; it is part of scipy > proper now, > > so I don't know what sandbox you moved it from. Just build and install > scipy as > > a whole. > > I'm working with odr now; the SVN version is indeed _not_ in the sandbox > now. 
> > James > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Mon Apr 16 21:29:34 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 16 Apr 2007 20:29:34 -0500 Subject: [SciPy-user] question about installation of odr In-Reply-To: <827183970704161820h4aa7326es60a8c241a8878225@mail.gmail.com> References: <827183970704161402i34d3b276lbedbe1a3cdf51564@mail.gmail.com> <4623E7B5.2050203@gmail.com> <20070416235544.GA2579@zunzun.com> <827183970704161820h4aa7326es60a8c241a8878225@mail.gmail.com> Message-ID: <462422FE.7030800@gmail.com> william ratcliff wrote: > For your installation, did you have to install BLAS separately, or add > anything to direct the ODR portion of the install towards those > libraries? Or, in the latest SVN release of scipy, does everything > install seamlessly? Yes, you should install some kind of BLAS separately. The build process will use whatever you have configured in the [blas_opt] section of your site.cfg. See the example: http://svn.scipy.org/svn/numpy/trunk/site.cfg.example -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From zunzun at zunzun.com Mon Apr 16 21:37:48 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Mon, 16 Apr 2007 21:37:48 -0400 Subject: [SciPy-user] question about installation of odr In-Reply-To: <462422FE.7030800@gmail.com> References: <827183970704161402i34d3b276lbedbe1a3cdf51564@mail.gmail.com> <4623E7B5.2050203@gmail.com> <20070416235544.GA2579@zunzun.com> <827183970704161820h4aa7326es60a8c241a8878225@mail.gmail.com> <462422FE.7030800@gmail.com> Message-ID: <20070417013748.GA4102@zunzun.com> On Mon, Apr 16, 2007 at 08:29:34PM -0500, Robert Kern wrote: > > Yes, you should install some kind of BLAS separately. The build process will use > whatever you have configured in the [blas_opt] section of your site.cfg. See the > example: > > http://svn.scipy.org/svn/numpy/trunk/site.cfg.example That worked for me earlier, all right. site.cfg is the key. James From william.ratcliff at gmail.com Mon Apr 16 22:11:52 2007 From: william.ratcliff at gmail.com (william ratcliff) Date: Mon, 16 Apr 2007 22:11:52 -0400 Subject: [SciPy-user] question about installation of odr In-Reply-To: <462422FE.7030800@gmail.com> References: <827183970704161402i34d3b276lbedbe1a3cdf51564@mail.gmail.com> <4623E7B5.2050203@gmail.com> <20070416235544.GA2579@zunzun.com> <827183970704161820h4aa7326es60a8c241a8878225@mail.gmail.com> <462422FE.7030800@gmail.com> Message-ID: <827183970704161911g7aa3a03bmb6659ffee7b515be@mail.gmail.com> Where should the site.cfg file live? That is, should it be in the ODR directory, or in some higher level directory? Thanks On 4/16/07, Robert Kern wrote: > > william ratcliff wrote: > > For your installation, did you have to install BLAS separately, or add > > anything to direct the ODR portion of the install towards those > > libraries? Or, in the latest SVN release of scipy, does everything > > install seamlessly? > > Yes, you should install some kind of BLAS separately. 
The build process > will use > whatever you have configured in the [blas_opt] section of your site.cfg. > See the > example: > > http://svn.scipy.org/svn/numpy/trunk/site.cfg.example > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma > that is made terrible by our own mad attempt to interpret it as though it > had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Mon Apr 16 22:24:20 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 16 Apr 2007 21:24:20 -0500 Subject: [SciPy-user] question about installation of odr In-Reply-To: <827183970704161911g7aa3a03bmb6659ffee7b515be@mail.gmail.com> References: <827183970704161402i34d3b276lbedbe1a3cdf51564@mail.gmail.com> <4623E7B5.2050203@gmail.com> <20070416235544.GA2579@zunzun.com> <827183970704161820h4aa7326es60a8c241a8878225@mail.gmail.com> <462422FE.7030800@gmail.com> <827183970704161911g7aa3a03bmb6659ffee7b515be@mail.gmail.com> Message-ID: <46242FD4.9060808@gmail.com> william ratcliff wrote: > Where should the site.cfg file live? That is, should it be in the ODR > directory, or in some higher level directory? Next to scipy's setup.py. Note that you need to run that setup.py and build all of scipy, not just scipy.odr alone. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From william.ratcliff at gmail.com Mon Apr 16 22:38:10 2007 From: william.ratcliff at gmail.com (william ratcliff) Date: Mon, 16 Apr 2007 22:38:10 -0400 Subject: [SciPy-user] question about installation of odr In-Reply-To: <46242FD4.9060808@gmail.com> References: <827183970704161402i34d3b276lbedbe1a3cdf51564@mail.gmail.com> <4623E7B5.2050203@gmail.com> <20070416235544.GA2579@zunzun.com> <827183970704161820h4aa7326es60a8c241a8878225@mail.gmail.com> <462422FE.7030800@gmail.com> <827183970704161911g7aa3a03bmb6659ffee7b515be@mail.gmail.com> <46242FD4.9060808@gmail.com> Message-ID: <827183970704161938q5ab6f320tacdf78cf3bdfc816@mail.gmail.com> Then, a general question: Do I need to rebuild all of scipy each time a package is added? For example, if I want to add a package from the sandbox from svn, then do I need to rebuild all of scipy to use it? Also, could you modify the version of ODR in svn to change statements of the form, "if if...GOTO", to IF (( ) .AND. ( )) GOTO....The previous caused some problems with the g77 in mingw. Thanks, William On 4/16/07, Robert Kern wrote: > > william ratcliff wrote: > > Where should the site.cfg file live? That is, should it be in the ODR > > directory, or in some higher level directory? > > Next to scipy's setup.py. Note that you need to run that setup.py and > build all > of scipy, not just scipy.odr alone. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma > that is made terrible by our own mad attempt to interpret it as though it > had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From robert.kern at gmail.com Mon Apr 16 22:41:03 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 16 Apr 2007 21:41:03 -0500 Subject: [SciPy-user] question about installation of odr In-Reply-To: <827183970704161938q5ab6f320tacdf78cf3bdfc816@mail.gmail.com> References: <827183970704161402i34d3b276lbedbe1a3cdf51564@mail.gmail.com> <4623E7B5.2050203@gmail.com> <20070416235544.GA2579@zunzun.com> <827183970704161820h4aa7326es60a8c241a8878225@mail.gmail.com> <462422FE.7030800@gmail.com> <827183970704161911g7aa3a03bmb6659ffee7b515be@mail.gmail.com> <46242FD4.9060808@gmail.com> <827183970704161938q5ab6f320tacdf78cf3bdfc816@mail.gmail.com> Message-ID: <462433BF.40609@gmail.com> william ratcliff wrote: > Then, a general question: > > Do I need to rebuild all of scipy each time a package is added? For > example, if I want to add a package from the sandbox from svn, then do I > need to rebuild all of scipy to use it? At the moment, yes. If you don't delete build/, then it shouldn't take much time. > Also, could you modify the > version of ODR in svn to change statements of the form, "if if...GOTO", to > IF (( ) .AND. ( )) GOTO....The previous caused some problems with the > g77 in mingw. Provide a patch that works, and I'll apply it. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From matthieu.brucher at gmail.com Tue Apr 17 08:41:26 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 17 Apr 2007 14:41:26 +0200 Subject: [SciPy-user] Finding Neighboors Message-ID: Hi, I wanted to know if there was a module in scipy that is able to find the k-neighboors of a point ? If so, is there an optimized one - tree-based search - ? If not, I'm doing the optimized version... Matthieu -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jelle.feringa at ezct.net Tue Apr 17 15:42:09 2007 From: jelle.feringa at ezct.net (Jelle Feringa / EZCT Architecture & Design Research) Date: Tue, 17 Apr 2007 21:42:09 +0200 Subject: [SciPy-user] kdtree Message-ID: <37fac3b90704171242m4cfe46a8t7f4284c955395007@mail.gmail.com> Hi Matthieu, You might want to take a look at BioPython's KDTree implementation. Currently I think it's still using Numeric rather than Numpy, but for what I've been using it for so far, that hasn't been an issue. Cheers, -jelle -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.brucher at gmail.com Tue Apr 17 16:00:44 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 17 Apr 2007 22:00:44 +0200 Subject: [SciPy-user] kdtree In-Reply-To: <37fac3b90704171242m4cfe46a8t7f4284c955395007@mail.gmail.com> References: <37fac3b90704171242m4cfe46a8t7f4284c955395007@mail.gmail.com> Message-ID: Thanks for the link, but I do not think I'll install Biopython - I've installed enough already - and it uses Numeric :| I'll check the C++ implementation, though. I've implemented a search tree, but it is in Python, so it is somewhat slow at the moment. Matthieu 2007/4/17, Jelle Feringa / EZCT Architecture & Design Research < jelle.feringa at ezct.net>: > > Hi Matthieu, > > You might want to take a look at BioPython's KDTree implementation. > Currently I think it's still using Numeric rather than Numpy, but for what > I've been using it for so far, that hasn't been an issue. > > Cheers, > > -jelle > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From jelle.feringa at ezct.net Tue Apr 17 16:11:28 2007 From: jelle.feringa at ezct.net (Jelle Feringa / EZCT Architecture & Design Research) Date: Tue, 17 Apr 2007 22:11:28 +0200 Subject: [SciPy-user] KDTree Message-ID: <37fac3b90704171311l5a42d3f8h56106c478c96addb@mail.gmail.com> Matthieu, I haven't got all of BioPython installed either, I just took out the KDTree... Cheers, -jelle -------------- next part -------------- An HTML attachment was scrubbed... URL: From zpincus at stanford.edu Tue Apr 17 21:53:29 2007 From: zpincus at stanford.edu (Zachary Pincus) Date: Tue, 17 Apr 2007 18:53:29 -0700 Subject: [SciPy-user] Finding Neighboors In-Reply-To: References: Message-ID: <3A71FB63-E36D-46D9-8E5C-26E8370CA0B4@stanford.edu> Biopython has an implementation of KD-trees -- that might be a good starting place. http://biopython.org/DIST/docs/api/public/trees.html http://biopython.org/DIST/docs/api/public/Bio.KDTree-module.html On Apr 17, 2007, at 5:41 AM, Matthieu Brucher wrote: > Hi, > > I wanted to know if there was a module in scipy that is able to > find the k-neighboors of a point ? > If so, is there an optimized one - tree-based search - ? > If not, I'm doing the optimized version... 
> > Matthieu > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From lbolla at gmail.com Wed Apr 18 05:39:30 2007 From: lbolla at gmail.com (lorenzo bolla) Date: Wed, 18 Apr 2007 11:39:30 +0200 Subject: [SciPy-user] PerformancePython -- comparison with Matlab In-Reply-To: <80c99e790704140152l742ef902tf56c75556b495912@mail.gmail.com> References: <80c99e790704110311n5afe9e77m690f737bcc1a8f9@mail.gmail.com> <461DDFCA.3030209@ru.nl> <20070412074143.GB9810@clipper.ens.fr> <80c99e790704140152l742ef902tf56c75556b495912@mail.gmail.com> Message-ID: <80c99e790704180239o3bc4239aud7ffec7d6d5a5d1@mail.gmail.com> I've updated the wiki page on PerformancePython http://www.scipy.org/PerformancePython with the Matlab benchmarks. I've also added some comments about Octave (which turns out to be about twice as slow as numpy). Details here: http://lbolla.wordpress.com/2007/04/11/numerical-computing-matlab-vs-pythonnumpyweave/ cheers, L. On 4/14/07, lorenzo bolla wrote: > > I'm happy to add it to the wiki (at least the link). I've been very busy > for the last 3 days, but I'll do it soon. > thank you, > L. > > > On 4/12/07, Gael Varoquaux wrote: > > > > On Thu, Apr 12, 2007 at 09:29:14AM +0200, Stef Mientki wrote: > > > lorenzo bolla wrote: > > > > I've "expanded" the performance tests by Prabhu Ramachandran > > > > in > > > > http://www.scipy.org/PerformancePython with a comparison with > > matlab. > > > > for anyone interested, see > > > > http://lbolla.wordpress.com/2007/04/11/numerical-computing-matlab-vs-pythonnumpyweave/ > > . > > > > > > it's still a work in progress, but worth seeing by anyone still > > > > uncertain about switching from matlab to numpy. > > > Very good job Lorenzo !
> > > > > Please could you explain one thing to me: > > > - the contents of your page is very valuable > > > > This is why I think it should be added to the wiki, with a link for the > > Matlab/Numpy comparison page, and one from the PerformancePython page. > > > > My 2 cents, > > > > Ga?l > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From peridot.faceted at gmail.com Wed Apr 18 13:19:10 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Wed, 18 Apr 2007 13:19:10 -0400 Subject: [SciPy-user] bogus immutable page on wiki Message-ID: Hi, The page http://www.scipy.org/Cookbook/Autovectorize is wrong in claiming that vectorized functions don't work on scalars, making the page irrelevant. It's set to be immutable, though, so I can't fix it. Anne P.S.: In [2]: vectorize(lambda x: x**2)(3) Out[2]: 9 -A From davidlinke at tiscali.de Wed Apr 18 14:57:13 2007 From: davidlinke at tiscali.de (David Linke) Date: Wed, 18 Apr 2007 20:57:13 +0200 Subject: [SciPy-user] bogus immutable page on wiki In-Reply-To: References: Message-ID: <46266A09.10400@tiscali.de> Hmm, are you sure that you are logged in? I can edit / delete that page. Another reason may be that your name is not on http://new.scipy.org/Wiki/EditorsGroup. If you like I can add you as another editor so that you can make the fix. Regards, David Anne Archibald wrote: > Hi, > > The page http://www.scipy.org/Cookbook/Autovectorize is wrong in > claiming that vectorized functions don't work on scalars, making the > page irrelevant. It's set to be immutable, though, so I can't fix it. 
> > Anne > > P.S.: > In [2]: vectorize(lambda x: x**2)(3) > Out[2]: 9 > > -A > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From ryanlists at gmail.com Wed Apr 18 15:08:40 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 18 Apr 2007 14:08:40 -0500 Subject: [SciPy-user] ubuntu feisty? Message-ID: I am having some issues with Ubuntu Edgy and my new laptop's wireless card. I have some hope that Feisty may fix these problems, but I am wondering if it will cause more problems than it fixes. Has anyone tried Feisty beta with Scipy? Thanks, Ryan From strawman at astraw.com Wed Apr 18 15:55:21 2007 From: strawman at astraw.com (Andrew Straw) Date: Wed, 18 Apr 2007 12:55:21 -0700 Subject: [SciPy-user] ubuntu feisty? In-Reply-To: References: Message-ID: <462677A9.2090204@astraw.com> Hi Ryan, I've tested on my feisty chroot on an i386 architecture and all scipy and numpy tests pass with latest svn for both Python 2.4 and 2.5. Ryan Krauss wrote: > I am having some issues with Ubuntu Edgy and my new laptop's wireless > card. I have some hope that Feisty may fix these problems, but I am > wondering if it will cause more problems than it fixes. Has anyone > tried Feisty beta with Scipy? > > Thanks, > > Ryan > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From s.mientki at ru.nl Wed Apr 18 18:10:23 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Thu, 19 Apr 2007 00:10:23 +0200 Subject: [SciPy-user] Bad publicity, was:Re: problems with signal-functions impulse and step ... 
In-Reply-To: <4623D8E7.1000008@ru.nl> References: <46228C80.1070705@ru.nl> <20070415214058.GR18196@mentat.za.net> <46232D4A.9040704@ru.nl> <20070416095145.GV18196@mentat.za.net> <4623C494.9080209@ru.nl> <4623CC91.70801@gmail.com> <4623D0E0.2000605@ru.nl> <4623D4F7.7090603@gmail.com> <4623D8E7.1000008@ru.nl> Message-ID: <4626974F.4080602@ru.nl> hi Robert, I tried to install the new package through "enstaller". Although I think it's a good idea to use an egg installer, the installation didn't go entirely smoothly, but after all it's an alpha version ;-) (I wrote some remarks, but have no idea whether the moderator will let them pass ;-) The signal library seems to have improved: it no longer crashes. But I'm not completely satisfied with the results, so I'll have to dig into that more deeply. cheers, Stef Mientki From s.mientki at ru.nl Wed Apr 18 18:15:27 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Thu, 19 Apr 2007 00:15:27 +0200 Subject: [SciPy-user] FIR filter, calculated with Remez exchange algorithm ? Message-ID: <4626987F.1030300@ru.nl> Does anyone have experience with the Remez function? I tried to design a simple highpass filter with 24 coefficients: stopband 40 dB from 0..0.01, transition band from 0.01..0.2, passband 0 dB from 0.2..0.5 (which I think is not a critical design; after all, MatLab handles it without trouble ;-) filt_4 = signal.remez (24, (0, 0.01, 0.2, 0.49), (0.01, 1)) but I get some weird impulse responses. Did I fill in the parameters correctly? thanks, Stef Mientki From travis at enthought.com Wed Apr 18 18:33:01 2007 From: travis at enthought.com (Travis Vaught) Date: Wed, 18 Apr 2007 17:33:01 -0500 Subject: [SciPy-user] ANN: SciPy 2007 Conference Message-ID: <200704181733.01437.travis@enthought.com> Greetings, The *SciPy 2007 Conference* has been scheduled for mid-August at CalTech.
http://www.scipy.org/SciPy2007 Here's the rough schedule: Tutorials: August 14-15 (Tuesday and Wednesday) Conference: August 16-17 (Thursday and Friday) Sprints: August 18 (Saturday) Exciting things are happening in the Python community, and the SciPy 2007 Conference is an excellent opportunity to exchange ideas, learn techniques, contribute code and affect the direction of scientific computing (or just to learn what all the fuss is about). Last year's conference saw a near-doubling of attendance to 138, and we're looking forward to continued gains in participation. We'll be announcing the Keynote Speaker and providing a detailed schedule in the coming weeks. Registration: ------------- Registration is now open. You may register online at https://www.enthought.com/scipy07. Early registration for the conference is $150.00 and includes breakfast and lunch Thursday & Friday and a very nice dinner Thursday night. Tutorial registration is an additional $75.00. After July 15, 2007, conference registration will increase to $200.00 (tutorial registration will remain the same at $75.00). Call for Presenters ------------------- If you are interested in presenting at the conference, you may submit an abstract in Plain Text, PDF or MS Word formats to abstracts at scipy.org -- the deadline for abstract submission is July 6, 2007. Papers and/or presentation slides are acceptable and are due by August 3, 2007. Tutorial Sessions ----------------- Last year's conference saw an overwhelming turnout for our first-ever tutorial sessions. In order to better accommodate the community interest in tutorials, we've expanded them to 2 days and are providing food (requiring us to charge a modest fee for tutorials this year). 
A tentative list of topics for tutorials includes: - Wrapping Code with Python (extension module development) - Building Rich Scientific Applications with Python - Using SciPy for Statistical Analysis - Using SciPy for Signal Processing and Image Processing - Using Python as a Scientific IDE/Workbench - Others... This is a preliminary list; topics will change and be extended. If you'd like to present a tutorial, or are interested in a particular topic for a tutorial, please email the SciPy users mailing list (link below). A current list will be maintained here: http://www.scipy.org/SciPy2007/Tutorials Coding Sprints -------------- We've dedicated the Saturday after the conference for a Coding Sprint. Please include any ideas for Sprint topics on the Sprints wiki page here: http://www.scipy.org/SciPy2007/Sprints We're looking forward to another great conference! Best, Travis ------------- Links to various SciPy and NumPy mailing lists may be found here: http://www.scipy.org/Mailing_Lists From rhc28 at cornell.edu Wed Apr 18 18:56:32 2007 From: rhc28 at cornell.edu (Rob Clewley) Date: Wed, 18 Apr 2007 18:56:32 -0400 Subject: [SciPy-user] Is numpy's argsort lying about its numpy.int32 types? Message-ID: Hi, I'm having a problem comparing some types when using numpy's argsort. I'm using numpy 1.0.2. I can reproduce it simply: Python 2.4.4 (#71, Oct 18 2006, 08:34:43) [MSC v.1310 32 bit (Intel)] << snip >> In [1]: from numpy import argsort, sort, int32, array In [2]: x=array([1,3,2]) In [3]: aa=argsort(x) In [4]: as=sort(x) In [5]: type(aa[0]) Out[5]: <type 'numpy.int32'> In [6]: type(as[0]) Out[6]: <type 'numpy.int32'> In [7]: int32 Out[7]: <type 'numpy.int32'> In [8]: type(as[0])==int32 Out[8]: True In [9]: type(aa[0])==int32 Out[9]: False Any of the three indices in aa gives me the same problem. Can someone explain if I should be doing this a different way, and if this is a bug?! 
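(The session above can be reproduced with a short stand-alone script — not part of the original post; it also uses the `numpy.integer` super-class check that the thread settles on, rather than comparing against a single bit-width name:)

```python
import numpy as np

x = np.array([1, 3, 2])
aa = np.argsort(x)   # index array: its scalar type comes from the C index type
ss = np.sort(x)      # value array: same dtype as x

# On the poster's platform the dtypes compare equal even though the
# Python-level types do not, because two C types share the name "int32".
same_dtype = (aa.dtype == ss.dtype)

# A robust integer test that accepts plain ints and every numpy integer
# scalar, regardless of which C type hides behind a given bit-width name:
def is_int_like(v):
    return isinstance(v, (int, np.integer))
```
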
Thanks, Rob From jturner at gemini.edu Wed Apr 18 19:11:41 2007 From: jturner at gemini.edu (James Turner) Date: Wed, 18 Apr 2007 19:11:41 -0400 Subject: [SciPy-user] ANN: SciPy 2007 Conference In-Reply-To: <39BD0132-B9E0-4C56-B543-D3B23134B5E9@enthought.com> References: <39BD0132-B9E0-4C56-B543-D3B23134B5E9@enthought.com> Message-ID: <4626A5AD.3010407@gemini.edu> Hi Travis(es), > If you'd like to present a tutorial, or are interested in a > particular topic for a tutorial, please email the SciPy users mailing > list (link below). I'd vote for a NumPy tutorial, if you are able to do it -- I see there was one last year. As a fairly new astronomy user, I'd also be interested in matplotlib and things like image processing, which you already have planned. Regarding the tutorial on extension module development, I've come across various separate references to things like "extending Python", "extending numpy" and "extending ndimage" (each of those presumably being a special case of the previous one). Then, on top of that, there are things like Weave, SWIG and F2PY and no doubt a set of conventions for presenting new SciPy functions in a consistent way. It would be good to have some overview of how these different things fit together :-). Don't know if that's what you had in mind... Thanks! James. From oliphant at ee.byu.edu Wed Apr 18 19:28:51 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 18 Apr 2007 17:28:51 -0600 Subject: [SciPy-user] Is numpy's argsort lying about its numpy.int32 types? In-Reply-To: References: Message-ID: <4626A9B3.40906@ee.byu.edu> Rob Clewley wrote: >Hi, > >I'm having a problem comparing some types when using numpy's argsort. >I'm using numpy 1.0.2. 
I can reproduce it simply: > > >In [1]: from numpy import argsort, sort, int32, array > >In [2]: x=array([1,3,2]) > >In [3]: aa=argsort(x) > >In [4]: as=sort(x) > >In [5]: type(aa[0]) >Out[5]: <type 'numpy.int32'> > >In [6]: type(as[0]) >Out[6]: <type 'numpy.int32'> > >In [7]: int32 >Out[7]: <type 'numpy.int32'> > >In [8]: type(as[0])==int32 >Out[8]: True > >In [9]: type(aa[0])==int32 >Out[9]: False > >Any of the three indices in aa gives me the same problem. Can someone >explain if I should be doing this a different way, and if this is a >bug?! > > The "problem" is that there are two types that display as "int32" on some platforms (e.g. c_long and c_int are both numpy.int32 on my machine). These are equivalent, which can be seen by looking at the output of aa.dtype == as.dtype -Travis From rhc28 at cornell.edu Wed Apr 18 23:11:50 2007 From: rhc28 at cornell.edu (Rob Clewley) Date: Wed, 18 Apr 2007 23:11:50 -0400 Subject: [SciPy-user] Is numpy's argsort lying about its numpy.int32 types? In-Reply-To: <4626A9B3.40906@ee.byu.edu> References: <4626A9B3.40906@ee.byu.edu> Message-ID: Fair enough, but it does cause a *real* problem when I extract the values from aa and pass them on to other functions which try to compare their types to the integer types int and int32 that I can import from numpy. Since the values I'm testing could equally have been generated by functions that return the regular int type I can't guarantee that those values will have a dtype attribute! I have some initialization code for a big class that has to set up some state differently depending on the type of the input. So, I was trying to do something like this if type(x) in [int, int32]: ## do stuff specific to integer x but now it seems like I'll need try: isint = x.dtype == dtype('int32') except AttributeError: isint = type(x) == int if isint: ## do stuff specific to integer x -- which is a mess! Is there a better way to do this test cleanly and robustly? 
And why couldn't c_long always correspond to a unique numpy name (i.e., not shared with int32) regardless of how it's implemented? Either way it would be helpful to have a name for this "other" int32 that I can test against using the all-purpose type() ... so that I could test something like type(x) in [int, int32_c_long, int32_c_int] Thanks in advance for the clarification! Rob From oliphant.travis at ieee.org Thu Apr 19 00:00:06 2007 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 18 Apr 2007 22:00:06 -0600 Subject: [SciPy-user] Is numpy's argsort lying about its numpy.int32 types? In-Reply-To: References: <4626A9B3.40906@ee.byu.edu> Message-ID: <4626E946.2090107@ieee.org> Rob Clewley wrote: > Fair enough, but it does cause a *real* problem when I extract the > values from aa and pass them on to other functions which try to > compare their types to the integer types int and int32 that I can > import from numpy. Since the values I'm testing could equally have > been generated by functions that return the regular int type I can't > guarantee that those values will have a dtype attribute! > You don't have to use the bit-width names (which can be confusing) in such cases. There is a regular name for every C-like type. You can use the names byte, short, intc, int_, longlong (and corresponding unsigned names prefixed with u) > I have some initialization code for a big class that has to set up > some state differently depending on the type of the input. So, I was > trying to do something like this > > if type(x) in [int, int32]: > ## do stuff specific to integer x > > but now it seems like I'll need > > try: > isint = x.dtype == dtype('int32') > except AttributeError: > isint = type(x) == int > if isint: > ## do stuff specific to integer x > Try: if isinstance(x, (int, integer)) -- integer is the super-class of all C-like integer types. > -- which is a mess! Is there a better way to do this test cleanly and > robustly? 
And why couldn't c_long always correspond to a unique numpy > name (i.e., not shared with int32) regardless of how it's implemented? > There is a unique numpy name for all of them. The bit-width names just can't be unique. > Either way it would be helpful to have a name for this "other" int32 > that I can test against using the all-purpose type() ... so that I > could test something like > > type(x) in [int, int32_c_long, int32_c_int] > isinstance(x, (int, intc, int_)) is what you want. -Travis From rhc28 at cornell.edu Wed Apr 18 23:58:26 2007 From: rhc28 at cornell.edu (Rob Clewley) Date: Wed, 18 Apr 2007 23:58:26 -0400 Subject: [SciPy-user] Is numpy's argsort lying about its numpy.int32 types? In-Reply-To: <4626E946.2090107@ieee.org> References: <4626A9B3.40906@ee.byu.edu> <4626E946.2090107@ieee.org> Message-ID: Excellent. I didn't know isinstance could be used with a tuple in the second argument! That helps a lot. Cheers! From stefan at sun.ac.za Thu Apr 19 05:40:56 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu, 19 Apr 2007 11:40:56 +0200 Subject: [SciPy-user] Is numpy's argsort lying about its numpy.int32 types? In-Reply-To: <4626E946.2090107@ieee.org> References: <4626A9B3.40906@ee.byu.edu> <4626E946.2090107@ieee.org> Message-ID: <20070419094056.GC6154@mentat.za.net> On Wed, Apr 18, 2007 at 10:00:06PM -0600, Travis Oliphant wrote: > try: > > if isinstance(x, (int, integer)) > > integer is the super-class of all c-like integer types. Is issubdtype(x,int) also a safe bet? Cheers Stéfan From gael.varoquaux at normalesup.org Thu Apr 19 07:39:50 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 19 Apr 2007 13:39:50 +0200 Subject: [SciPy-user] Fast saving/loading of huge matrices Message-ID: <20070419113949.GM22826@clipper.ens.fr> I have a huge matrix (I don't know how big it is, it hasn't finished loading yet, but the ascii file weighs 381M). I was wondering what format had the best speed efficiency for saving/loading huge files. 
I don't mind using a hdf5 even if it is not included in scipy itself. Cheers, Gaël From tgrav at mac.com Thu Apr 19 09:05:40 2007 From: tgrav at mac.com (Tommy Grav) Date: Thu, 19 Apr 2007 09:05:40 -0400 Subject: [SciPy-user] leastsq information message Message-ID: <2579220A-19C8-4F5C-A6BC-252231A33198@mac.com> I am using the scipy.optimize.leastsq to find the best solution to a Fourier Analysis of the rotational lightcurve of asteroids. The function works well and finds good solutions but I keep getting an information message: Both actual and predicted relative reductions in the sum of squares are at most 0.000000 I am calling the method with res = leastsq(diffunc,x0,args= (period,atime,amag,amerr,ahindex,norder),full_output=1,col_deriv=0,epsfcn=0.01) so I am asking the function to estimate the derivatives. Can anyone enlighten me as to what the information message is caused by and if there is some way of improving my fit to avoid it. Cheers Tommy From robert.kern at gmail.com Thu Apr 19 10:23:08 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 19 Apr 2007 09:23:08 -0500 Subject: [SciPy-user] Fast saving/loading of huge matrices In-Reply-To: <20070419113949.GM22826@clipper.ens.fr> References: <20070419113949.GM22826@clipper.ens.fr> Message-ID: <46277B4C.5050002@gmail.com> Gael Varoquaux wrote: > I have a huge matrix (I don't know how big it is, it hasn't finished > loading yet, but the ascii file weighs 381M). I was wondering what > format had best speed efficiency for saving/loading huge file. I don't > mind using a hdf5 even if it is not included in scipy itself. I think we've found that a simple pickle using protocol 2 works the fastest. At the time (a year or so ago) this was faster than PyTables for loading the entire array of about 1GB size. PyTables might be better now, possibly because of the new numpy support. 
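(A minimal sketch of the protocol-2 approach Robert describes — not from the thread; the array size is illustrative, and modern Python's pickle module plays the role of the cPickle module in use here in 2007:)

```python
import pickle  # the thread's Python 2 code would use cPickle; same interface
import numpy as np

a = np.random.rand(1000, 10)

# Protocol 2 (binary) is what makes this fast: the raw 8-byte doubles are
# written directly instead of being formatted and parsed as text.
blob = pickle.dumps(a, protocol=2)

# For a file on disk, pickle.dump(a, open('arr.pkl', 'wb'), protocol=2)
# does the same thing; loading back is a single call:
b = pickle.loads(blob)
```
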
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From gary.pajer at gmail.com Thu Apr 19 11:39:00 2007 From: gary.pajer at gmail.com (Gary Pajer) Date: Thu, 19 Apr 2007 11:39:00 -0400 Subject: [SciPy-user] ubuntu feisty? In-Reply-To: References: Message-ID: <88fe22a0704190839n31e23a96o66873a43654162fb@mail.gmail.com> On 4/18/07, Ryan Krauss wrote: > I am having some issues with Ubuntu Edgy and my new laptop's wireless > card. I have some hope that Feisty may fix these problems, but I am > wondering if it will cause more problems than it fixes. Has anyone > tried Feisty beta with Scipy? > > Thanks, > > Ryan Edgy doesn't play well with some wireless systems for some reason. I had problems on my Thinkpad with the Intel Pro Wireless 2100 adapter and the linux ipw2100 driver. I found a website with a solution that worked for me. I can't find the site now (I know I kept track of it somewhere), but here's the result: Execute sudo ifconfig eth1 up iwlist eth1 scanning find the access point you want, then sudo iwconfig eth1 essid MY_ACCESS_POINT sudo iwconfig eth1 key MY_WEP_KEY_IF_I_HAVE_ONE sudo dhclient I made scripts for the two APs that I frequently use: ifconfig eth1 up iwlist eth1 scanning iwconfig eth1 essid XXXXXX iwconfig eth1 key YYYYYYYYYY dhclient and run it as sudo. Works like a charm. *Sometimes* I can connect with one of the Ubuntu GUI tools. 
If one of the tools finds an access point but doesn't complete the connection, I can complete it manually with sudo dhclient I'm looking forward to Feisty :) -gary > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From gael.varoquaux at normalesup.org Thu Apr 19 11:51:22 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 19 Apr 2007 17:51:22 +0200 Subject: [SciPy-user] Fast saving/loading of huge matrices In-Reply-To: <46277B4C.5050002@gmail.com> References: <20070419113949.GM22826@clipper.ens.fr> <46277B4C.5050002@gmail.com> Message-ID: <20070419155119.GS22826@clipper.ens.fr> On Thu, Apr 19, 2007 at 09:23:08AM -0500, Robert Kern wrote: > I think we've found that a simple pickle using protocol 2 works the > fastest. At the time (a year or so ago) this was faster than PyTables > for loading the entire array of about 1GB size. PyTables might be > better now, possibly because of the new numpy support. Thank you Robert. This is useful to know. Gaël From fperez.net at gmail.com Thu Apr 19 12:03:17 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 19 Apr 2007 10:03:17 -0600 Subject: [SciPy-user] ubuntu feisty? In-Reply-To: <88fe22a0704190839n31e23a96o66873a43654162fb@mail.gmail.com> References: <88fe22a0704190839n31e23a96o66873a43654162fb@mail.gmail.com> Message-ID: On 4/19/07, Gary Pajer wrote: > Edgy doesn't play well with some wireless systems for some reason. I > had problems on my Thinkpad with the Intel Pro Wireless 2100 adapter > and the linux ipw2100 driver. This is OT, but in case it saves others time, here goes. After trying lots of different things, finally I settled on sudo wlassistant as my wifi connection tool. It's the only one of all the half-working tools shipped in Edgy that so far has reliably, 100% of the time, been able to find any available wireless networks and connect to them. 
Just apt-get it if you don't have it installed, it's a KDE tool so it may pull in some dependencies for you if you don't have the KDE libs already. HTH, f From ryanlists at gmail.com Thu Apr 19 12:09:48 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 19 Apr 2007 11:09:48 -0500 Subject: [SciPy-user] ubuntu feisty? In-Reply-To: References: <88fe22a0704190839n31e23a96o66873a43654162fb@mail.gmail.com> Message-ID: Thanks Fernando, Gary, and Andrew. It is OT and I moved conversations with Gary and Andrew off list. But, as usual, people on this list know a lot about computers and are much more helpful than folks in other places. That is why I post my general Python questions here and not on the main python list. I usually have good luck with ubuntuforums, but my wireless Edgy problems are still unresolved. If the suggestions of Gary and Fernando actually get my wireless working consistently in Ubuntu Edgy, I may be tempted to post my Ubuntu questions here as well. Just kidding. But I might start a mailing list for Scipy+Ubuntu people. On 4/19/07, Fernando Perez wrote: > On 4/19/07, Gary Pajer wrote: > > > Edgy doesn't play well with some wireless systems for some reason. I > > had problems on my Thinkpad with the Intel Pro Wireless 2100 adapter > > and the linux ipw2100 driver. > > This is OT, but in case it saves others time, here goes. > > After trying lots of different things, finally I settled on > > sudo wlassistant > > as my wifi connection tool. It's the only one of all the half-working > tools shipped in Edgy that so far has reliably, 100% of the time, been > able to find any available wireless networks and connect to them. > > Just apt-get it if you don't have it installed, it's a KDE tool so it > may pull in some dependencies for you if you don't have the KDE libs > already. 
> > HTH, > > f > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From zunzun at zunzun.com Thu Apr 19 12:58:01 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Thu, 19 Apr 2007 12:58:01 -0400 Subject: [SciPy-user] leastsq information message In-Reply-To: <2579220A-19C8-4F5C-A6BC-252231A33198@mac.com> References: <2579220A-19C8-4F5C-A6BC-252231A33198@mac.com> Message-ID: <20070419165800.GA14658@zunzun.com> On Thu, Apr 19, 2007 at 09:05:40AM -0400, Tommy Grav wrote: > > Both actual and predicted relative reductions in the sum of squares > are at most 0.000000 I poked around and found in the scipy source code minpack.py: (lmdif or lmder is first called and this test is made) if info in [5,6,7,8]: print "Warning: " + errors[info][0] and according to the minpack lmdif info available at: http://www.math.utah.edu/software/minpack/minpack/lmdif.html one of the following (5,6,7,8) causes your error message to print INFO = 5 Number of calls to FCN has reached or exceeded MAXFEV. INFO = 6 FTOL is too small. No further reduction in the sum of squares is possible. INFO = 7 XTOL is too small. No further improvement in the approximate solution X is possible. INFO = 8 GTOL is too small. FVEC is orthogonal to the columns of the Jacobian to machine precision. So it looks to be caused by one of these conditions. James Phillips http://zunzun.com From ryanlists at gmail.com Thu Apr 19 15:19:46 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 19 Apr 2007 14:19:46 -0500 Subject: [SciPy-user] Fast saving/loading of huge matrices In-Reply-To: <20070419155119.GS22826@clipper.ens.fr> References: <20070419113949.GM22826@clipper.ens.fr> <46277B4C.5050002@gmail.com> <20070419155119.GS22826@clipper.ens.fr> Message-ID: I just changed from simply reading a text file using io.read_array to cPickle and got a factor of 4 or 5 speed up for my medium sized array. 
But the cPickle file is quite large (about twice the size of the ascii file - I don't think the ascii has very many digits). I thought there used to be some built-in functions called something like shelve that stored dictionaries fairly quickly and compactly. Are those functions still around and am I just remembering the name wrong? Or have they been done away with? I remember vaguely that they stored data in 3 separate files - a python file that could later be imported, a dat file (I think) and something else. The cPickle approach seems fast, I just wish there was some way to make the files smaller. Is there a good way to do this that doesn't slow down the read time too much? Thanks, Ryan On 4/19/07, Gael Varoquaux wrote: > On Thu, Apr 19, 2007 at 09:23:08AM -0500, Robert Kern wrote: > > I think we've found that a simple pickle using protocol 2 works the > > fastest. At the time (a year or so ago) this was faster than PyTables > > for loading the entire array of about 1GB size. PyTables might be > > better now, possibly because of the > new numpy support. > > Thank you Robert. This is useful to know. > > Gaël > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From faltet at carabos.com Thu Apr 19 15:30:32 2007 From: faltet at carabos.com (Francesc Altet) Date: Thu, 19 Apr 2007 21:30:32 +0200 Subject: [SciPy-user] Fast saving/loading of huge matrices In-Reply-To: <46277B4C.5050002@gmail.com> References: <20070419113949.GM22826@clipper.ens.fr> <46277B4C.5050002@gmail.com> Message-ID: <1177011032.2543.94.camel@localhost.localdomain> El dj 19 de 04 del 2007 a les 09:23 -0500, en/na Robert Kern va escriure: > Gael Varoquaux wrote: > > I have a huge matrix (I don't know how big it is, it hasn't finished > > loading yet, but the ascii file weighs 381M). I was wondering what > > format had best speed efficiency for saving/loading huge file. 
I don't > > mind using a hdf5 even if it is not included in scipy itself. > > I think we've found that a simple pickle using protocol 2 works the fastest. At > the time (a year or so ago) this was faster than PyTables for loading the entire > array of about 1GB size. PyTables might be better now, possibly because of the > new numpy support. I was curious as well if PyTables 2.0 is getting somewhat faster than the 1.4 series (although I already knew that for this sort of thing, the space for improvement should be rather small). For that, I've made a small benchmark (see attachments) and compared the performance for PyTables 1.4 and 2.0 against pickle (protocol 2). In the benchmark, a NumPy array of around 1 GB is created and the time for writing and reading it from disk is written to stdout. You can see the outputs for the runs in the attachments as well. From there, some conclusions can be drawn: 1. The difference in performance between PyTables 1.4 and 2.0 for this specific task is almost negligible. This was something expected because, although 1.4 was using numarray at the core, the use of the array protocol made copies of the arrays unnecessary (and hence, the overhead over 2.0, with NumPy at the core, is negligible). 2. For writing, the EArray (Extensible Array) object of PyTables has roughly the same speed as pickle (15% faster in fact, but this is not that much). However, for reading, the speed-up of PyTables over pickle is more than 2x (up to 2.35x for 2.0), which is something to consider. 3. For compressed EArrays, writing times are relatively bad: between 0.06x (zlib and PyTables 1.4) and 0.15x (lzo and PyTables 2.0). However, for reading the ratios are quite good: between 0.57x (zlib and PyTables 1.4) and 1.45x (lzo and PyTables 2.0). 
In general, one should expect better performance from compressed data, but I've chosen completely random data here, so the compressors weren't able to achieve even decent compression ratios and that hurts I/O performance quite a bit. 4. The best performance is achieved by the simple Array object (it can be neither enlarged nor compressed, but is rather effective in terms of I/O). For writing, it can be up to 1.74x faster (using PyTables 2.0) than pickle and up to 3.56x (using PyTables 1.4) for reading, which is quite a lot (more than 500 MB/s) in terms of I/O speed. I will warn the reader that these times are taken *without* taking into account the flush time to disk for writing. When that time is included, the gap between PyTables and pickle will shrink significantly (but not when using compression, where PyTables will continue to be rather slower in comparison). So, you should take the above figures as *peak* throughputs (that can be achieved when the dataset fits comfortably in main memory because of the filesystem cache). For reading, when the files don't fit in the filesystem cache or are read for the first time, one should expect a significant degradation relative to all the figures that I presented here. However, when using compression over real data (where compression ratios of 2x or more are realistic), the compressed EArray should be up to 2x faster for reading than other solutions (I've noticed this many times in other contexts; this is so because one has to read less data from disk and moreover, CPUs today are exceedingly fast at decompressing). The above benchmarks have been run on a machine running SuSE Linux with an AMD Opteron @ 2 GHz, 8 GB of main memory and a 7200 rpm IDE disk. Cheers, -- Francesc Altet | Be careful about using the following code -- Carabos Coop. V. | I've only proven that it works, www.carabos.com | I haven't tested it. -- Donald Knuth -------------- next part -------------- A non-text attachment was scrubbed... 
Name: iobench-2.0.py Type: text/x-python Size: 3679 bytes Desc: URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: iobench-1.4.py Type: text/x-python Size: 3670 bytes Desc: URL: -------------- next part -------------- Python version: 2.4.4 (#1, Nov 6 2006, 12:24:47) [GCC 4.0.2 20050901 (prerelease) (SUSE Linux)] NumPy version: 1.0.1 PyTables version: 1.4 Checking with a 1000x125000 matrix of float64 elements (953.674 MB) ***** cPickle (protocol 2) ***** Time for writing: 3.992s File size: 955M Time for reading: 6.222s ***** PyTables EArray (dump row to row) ***** Time for writing: 3.745s. Speed-up over cPickle: 1.07x File size: 955M Time for reading: 2.73s. Speed-up over cPickle: 2.28x File size: 955M ***** PyTables EArray (dump row to row, compressed with zlib) ****** Time for writing: 68.575s. Speed-up over cPickle: 0.06x File size: 810M Time for reading: 10.956s. Speed-up over cPickle: 0.57x File size: 810M ***** PyTables EArray (dump row to row, compressed with lzo) ***** Time for writing: 33.865s. Speed-up over cPickle: 0.12x File size: 840M Time for reading: 7.694s. Speed-up over cPickle: 0.81x File size: 840M ***** PyTables EArray (complete dump) ***** Time for writing: 3.389s. Speed-up over cPickle: 1.18x File size: 955M Time for reading: 2.758s. Speed-up over cPickle: 2.26x File size: 955M ***** PyTables Array ***** Time for writing: 2.659s. Speed-up over cPickle: 1.5x File size: 955M Time for reading: 1.746s. Speed-up over cPickle: 3.56x File size: 955M -------------- next part -------------- Python version: 2.5 (r25:51908, Nov 3 2006, 12:01:01) [GCC 4.0.2 20050901 (prerelease) (SUSE Linux)] NumPy version: 1.0.2.dev3640 PyTables version: 2.0b2pro Checking with a 1000x125000 matrix of float64 elements (953.674 MB) ***** cPickle (protocol 2) ***** Time for writing: 4.674s File size: 955M Time for reading: 6.254s ***** PyTables EArray (dump row to row) ***** Time for writing: 3.844s. 
Speed-up over cPickle: 1.22x File size: 972M Time for reading: 2.663s. Speed-up over cPickle: 2.35x File size: 972M ***** PyTables EArray (dump row to row, compressed with zlib) ****** Time for writing: 48.956s. Speed-up over cPickle: 0.1x File size: 831M Time for reading: 8.597s. Speed-up over cPickle: 0.73x File size: 831M ***** PyTables EArray (dump row to row, compressed with lzo) ***** Time for writing: 30.643s. Speed-up over cPickle: 0.15x File size: 842M Time for reading: 4.302s. Speed-up over cPickle: 1.45x File size: 842M ***** PyTables EArray (complete dump) ***** Time for writing: 4.071s. Speed-up over cPickle: 1.15x File size: 972M Time for reading: 2.701s. Speed-up over cPickle: 2.32x File size: 972M ***** PyTables Array ***** Time for writing: 2.693s. Speed-up over cPickle: 1.74x File size: 955M Time for reading: 1.81s. Speed-up over cPickle: 3.46x File size: 955M From faltet at carabos.com Thu Apr 19 15:42:32 2007 From: faltet at carabos.com (Francesc Altet) Date: Thu, 19 Apr 2007 21:42:32 +0200 Subject: [SciPy-user] Fast saving/loading of huge matrices In-Reply-To: References: <20070419113949.GM22826@clipper.ens.fr> <46277B4C.5050002@gmail.com> <20070419155119.GS22826@clipper.ens.fr> Message-ID: <1177011752.2543.105.camel@localhost.localdomain> El dj 19 de 04 del 2007 a les 14:19 -0500, en/na Ryan Krauss va escriure: > I just changed from simply reading a text file using io.read_array to > cPickle and got a factor of 4 or 5 speed up for my medium sized array. > But the cPickle file is quite large (about twice the size of the > ascii file - I don't think the ascii has very many digits). Yeah. This can be expected because pickle saves the complete set of digits in binary form (8 bytes for double precision), while if you keep only 2 digits (+ the decimal point + a space) you will need only 4 bytes per value, hence the space savings. 
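(Francesc's byte count — 8 binary bytes per double versus roughly 4 ASCII characters for a 2-digit value — also explains why low-precision data compresses well; a small sketch, not from the thread, using only numpy and the standard library, and previewing the zlib suggestion below:)

```python
import pickle
import zlib

import numpy as np

# Data rounded to 2 decimals has only ~100 distinct values, so the binary
# pickle is highly redundant and zlib can shrink it substantially.
a = np.round(np.random.rand(100000), 2)

raw = pickle.dumps(a, protocol=2)   # 8 bytes per value, plus a small header
packed = zlib.compress(raw, 6)      # typically several times smaller

# Decompress + unpickle restores the array exactly (compression is lossless):
restored = pickle.loads(zlib.decompress(packed))
```
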
> I thought there used to be some built in functions called something > like shelve that stored dictionaries fairly quickly and compactly. > Are those functions still around and I am just remembering the name > wrong? Or have they been done away with? I remember vaguely that > they stored data in 3 separate files - a python file that could later > be imported, a dat file (I think) and something else. > > The cPickle approach seems fast, I just wish there was some way to > make the files smaller. Is there a good way to do this that doesn't > slow down the read time too much? Try using compression. If your data doesn't have many decimals, chances are that it can be easily compressed up to 3x. There are many compressors that have a Python interface (your best bet is to use the zlib module included in Python). Or try PyTables for transparent compression support. HTH, -- Francesc Altet | Be careful about using the following code -- Carabos Coop. V. | I've only proven that it works, www.carabos.com | I haven't tested it. -- Donald Knuth From s.mientki at ru.nl Thu Apr 19 16:19:59 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Thu, 19 Apr 2007 22:19:59 +0200 Subject: [SciPy-user] FIR filter, calculated with Remez exchange algorithm ? In-Reply-To: <4626987F.1030300@ru.nl> References: <4626987F.1030300@ru.nl> Message-ID: <4627CEEF.4080002@ru.nl> ok, I got the answer (I think) A slightly changed design works perfectly: filt_4 = signal.remez (25, (0, 0.01, 0.2, 0.5), (0.01, 1)) The filter length must be odd, because it's a high pass filter. If the length is even, the response at Nyquist is zero, so my original example filt_4 = signal.remez (24, (0, 0.01, 0.2, 0.49), (0.01, 1)) will try to create a transition band between 0.49 and 0.5, which is much too steep for this filter length. I never encountered this problem because MatLab, and previous programs I used, always corrected this themselves. 
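(The odd-length requirement can be verified with a short script — not part of the original post; it assumes scipy is installed and uses the same band edges given above:)

```python
import numpy as np
from scipy import signal

# 25 taps (odd): a type-I linear-phase FIR, which is free to be non-zero
# at the Nyquist frequency -- exactly what a highpass filter needs.
taps = signal.remez(25, (0, 0.01, 0.2, 0.5), (0.01, 1))

# Linear phase shows up as symmetric coefficients:
symmetric = np.allclose(taps, taps[::-1])

# Response at Nyquist (f = 0.5 of the sampling rate): H = sum h[n]*(-1)^n.
# An even-length (type-II) symmetric filter is structurally zero here,
# which is why the 24-tap attempt produced a strange impulse response.
h_nyquist = np.sum(taps * (-1.0) ** np.arange(taps.size))
```
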
Now would it be possible to implement this behaviour in the library (I think it's useful for beginners and previous MatLab users) if last amplitude band = 1 (because it also must be odd for bandstop filters) make N odd else make N even cheers, Stef Mientki Stef Mientki wrote: > Does anyone have experience with the Remez function ? > > I tried to design a simple highpass filter, 24 coefficients: > stopband 40 dB from 0..0.01 > transition band from 0.01 .. 0.2 > passband 0 dB from 0.2 .. 0.5 > (which I think is not a critical design, > after all MatLab does it fluently ;-) > > filt_4 = signal.remez (24, (0, 0.01, 0.2, 0.49), (0.01, 1)) > > but I get some weird impulse responses. > > Did I fill in the parameters correctly ? > > thanks, > Stef Mientki > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > From gael.varoquaux at normalesup.org Thu Apr 19 16:38:57 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 19 Apr 2007 22:38:57 +0200 Subject: [SciPy-user] Fast saving/loading of huge matrices In-Reply-To: References: <20070419113949.GM22826@clipper.ens.fr> <46277B4C.5050002@gmail.com> <20070419155119.GS22826@clipper.ens.fr> Message-ID: <20070419203856.GA18273@clipper.ens.fr> On Thu, Apr 19, 2007 at 02:19:46PM -0500, Ryan Krauss wrote: > The cPickle approach seems fast, I just wish there was some way to > make the files smaller. Is there a good way to do this that doesn't > slow down the read time too much? The problem with pickle is that the compatibility is not guaranteed from one version of modules to another. I would use pytables for everything other than temporary storage. Another good thing about pytables is that the file is standard and can be read by many other programs. I made the decision for our lab to standardise on hdf5, even without knowing that it was one of the fastest I/O formats usable with scipy. Thanks Cárabos for pytables ! 
Gaël From s.mientki at ru.nl Thu Apr 19 17:36:45 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Thu, 19 Apr 2007 23:36:45 +0200 Subject: [SciPy-user] PerformancePython -- comparison with Matlab In-Reply-To: <80c99e790704180239o3bc4239aud7ffec7d6d5a5d1@mail.gmail.com> References: <80c99e790704110311n5afe9e77m690f737bcc1a8f9@mail.gmail.com> <461DDFCA.3030209@ru.nl> <20070412074143.GB9810@clipper.ens.fr> <80c99e790704140152l742ef902tf56c75556b495912@mail.gmail.com> <80c99e790704180239o3bc4239aud7ffec7d6d5a5d1@mail.gmail.com> Message-ID: <4627E0ED.1040706@ru.nl> nice work Lorenzo, and I'm glad to see that you managed to remove the popup "feature" ;-) cheers, Stef lorenzo bolla wrote: > I've updated the wiki page on PerformancePython > http://www.scipy.org/PerformancePython with the Matlab benchmarks. > > I've also added some comments about Octave (which comes out about twice as slow > as numpy). > Details here: > http://lbolla.wordpress.com/2007/04/11/numerical-computing-matlab-vs-pythonnumpyweave/ > > cheers, > L. > > On 4/14/07, *lorenzo bolla* > wrote: > > I'm happy to add it to the wiki (at least the link). I've been > very busy for the last 3 days, but I'll do it soon. > thank you, > L. > > > On 4/12/07, *Gael Varoquaux* > wrote: > > On Thu, Apr 12, 2007 at 09:29:14AM +0200, Stef Mientki wrote: > > lorenzo bolla wrote: > > > I've "expanded" the performance tests by Prabhu Ramachandran > > > in > > > http://www.scipy.org/PerformancePython with a comparison > with matlab. > > > for anyone interested, see > > > > http://lbolla.wordpress.com/2007/04/11/numerical-computing-matlab-vs-pythonnumpyweave/ . > > > > it's still a work in progress, but worth seeing by anyone still > > > uncertain about switching from matlab to numpy. > > Very good job Lorenzo !
> > > Please could you explain one thing to me: > > - the contents of your page are very valuable > > This is why I think it should be added to the wiki, with a > link for the Matlab/Numpy comparison page, and one from the > PerformancePython page. > > My 2 cents, > > Gaël > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From v-nijs at kellogg.northwestern.edu Thu Apr 19 18:26:44 2007 From: v-nijs at kellogg.northwestern.edu (Vincent Nijs) Date: Thu, 19 Apr 2007 17:26:44 -0500 Subject: [SciPy-user] Fast saving/loading of huge matrices In-Reply-To: <1177011032.2543.94.camel@localhost.localdomain> Message-ID: Pytables looks very interesting and clearly has a ton of features. However, if I am trying to just read in a csv file can it figure out the correct data types on its own (e.g., dates, floats, strings)? Read "I am too lazy to type in variable names and types myself if the names are already in the file" :) Similarly can you just dump a dictionary or rec-array into a pytable with one 'save' command and have pytables figure out the variable names and types? This seems relevant since you wouldn't have to do that with cPickle, which saves user-time if not computer time. Sorry if this is too off-topic. Vincent On 4/19/07 2:30 PM, "Francesc Altet" wrote: > El dj 19 de 04 del 2007 a les 09:23 -0500, en/na Robert Kern va > escriure: >> Gael Varoquaux wrote: >>> I have a huge matrix (I don't know how big it is, it hasn't finished >>> loading yet, but the ascii file weighs 381M). I was wondering what >>> format had the best speed efficiency for saving/loading huge files.
I don't >>> mind using an hdf5 even if it is not included in scipy itself. >> >> I think we've found that a simple pickle using protocol 2 works the fastest. >> At >> the time (a year or so ago) this was faster than PyTables for loading the >> entire >> array of about 1GB size. PyTables might be better now, possibly because of >> the >> new numpy support. > > I was curious as well if PyTables 2.0 is getting somewhat faster than > the 1.4 series (although I already knew that for this sort of thing, the > space for improvement should be rather small). > > For that, I've made a small benchmark (see attachments) and compared the > performance of PyTables 1.4 and 2.0 against pickle (protocol 2). In the > benchmark, a NumPy array of around 1 GB is created and the time for > writing and reading it from disk is written to stdout. You can see the > outputs for the runs in the attachments as well. > >> From there, some conclusions can be drawn: > > 1. The difference in performance between PyTables 1.4 and 2.0 for this > specific task is almost negligible. This was something expected because, > although 1.4 was using numarray at the core, the use of the array > protocol made copies of the arrays unnecessary (and hence, the > overhead over 2.0, with NumPy at the core, is negligible). > > 2. For writing, the EArray (Extensible Array) object of PyTables has > roughly the same speed as NumPy (15% faster in fact, but that is not > much). However, for reading, the speed-up of PyTables over pickle > is more than 2x (up to 2.35x for 2.0), which is something to consider. > > 3. For compressed EArrays, writing times are relatively bad: between > 0.06x (zlib and PyTables 1.4) and 0.15x (lzo and PyTables 2.0). However, > for reading the ratios are quite good: between 0.57x (zlib and PyTables > 1.4) and 1.45x (lzo and PyTables 2.0).
In general, one should expect > better performance from compressed data, but I've chosen completely > random data here, so the compressors weren't able to achieve even decent > compression ratios, and that hurts I/O performance quite a bit. > > 4. The best performance is achieved by the simple Array object (it can > be neither enlarged nor compressed, but is rather effective in terms of > I/O). For writing, it can be up to 1.74x faster (using PyTables 2.0) > than pickle, and up to 3.56x (using PyTables 1.4) for reading, which is > quite a lot (more than 500 MB/s) in terms of I/O speed. > > I will warn the reader that these times are taken *without* taking into > account the flush time to disk for writing. When this time is included, the > gap between PyTables and pickle will shrink significantly (but not when > using compression, where PyTables will remain rather slower in > comparison). So, you should take the above figures as *peak* > throughputs (that can be achieved when the dataset fits comfortably in > main memory because of the filesystem cache). > > For reading, when the files don't fit in the filesystem cache, or are > read for the first time, one should expect significant degradation of > all the figures that I presented here. However, when using > compression on real data (where compression ratios of 2x or more are > realistic), the compressed EArray should be up to 2x faster for reading > than other solutions (I've noticed this many times in other contexts; > this is so because one has to read less data from disk and, > moreover, CPUs today are exceedingly fast at decompressing). > > The above benchmarks have been run on a Linux machine running SuSe Linux > with an AMD Opteron @ 2 GHz, 8 GB of main memory and a 7200 rpm IDE > disk.
> > Cheers, -- From ryanlists at gmail.com Thu Apr 19 19:01:44 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 19 Apr 2007 18:01:44 -0500 Subject: [SciPy-user] Fast saving/loading of huge matrices In-Reply-To: References: <1177011032.2543.94.camel@localhost.localdomain> Message-ID: I have a very similar question. Pytables clearly has much more capability than I need and the documentation is a bit intimidating. I have tests that involve multiple channels of data that I need to store. Can you give a simple example of using pytables to store 3 separate Nx1 vectors in the same file and easily retrieve the individual channels? The cPickle equivalent would be something like:

v1=rand(1000,)
v2=rand(1000,)
mydict={'v1':v1,'v2':v2}

and then dump mydict to a pickle file. How would I do this same thing in pytables? Thanks, Ryan On 4/19/07, Vincent Nijs wrote: > Pytables looks very interesting and clearly has a ton of features. However, > if I am trying to just read in a csv file can it figure out the correct data > types on its own (e.g., dates, floats, strings)? Read "I am too lazy to > type in variable names and types myself if the names are already in the > file" :) > > Similarly can you just dump a dictionary or rec-array into a pytable with > one 'save' command and have pytables figure out the variable names and > types? This seems relevant since you wouldn't have to do that with cPickle > which saves user-time if not computer time. > > Sorry if this is too off-topic. > > Vincent > > > > > On 4/19/07 2:30 PM, "Francesc Altet" wrote: > > El dj 19 de 04 del 2007 a les 09:23 -0500, en/na Robert Kern va > > escriure: > >> Gael Varoquaux wrote: > >>> I have a huge matrix (I don't know how big it is, it hasn't finished > >>> loading yet, but the ascii file weighs 381M). I was wondering what > >>> format had the best speed efficiency for saving/loading huge files. I don't > >>> mind using an hdf5 even if it is not included in scipy itself.
> >> > >> I think we've found that a simple pickle using protocol 2 works the fastest. > >> At > >> the time (a year or so ago) this was faster than PyTables for loading the > >> entire > >> array of about 1GB size. PyTables might be better now, possibly because of > >> the > >> new numpy support. > > > > I was curious as well if PyTables 2.0 is getting somewhat faster than > > 1.4 series (although I already knew that for this sort of things, the > > space for improvement should be rather small). > > > > For that, I've made a small benchmark (see attachments) and compared the > > performance for PyTables 1.4 and 2.0 against pickle (protocol 2). In the > > benchmark, a NumPy array of around 1 GB is created and the time for > > writing and reading it from disk is written to stdout. You can see the > > outputs for the runs in the attachments as well. > > > >> From there, some conclusions can be draw: > > > > 1. The difference of performance between PyTables 1.4 and 2.0 for this > > especific task is almost negligible. This was somthing expected because, > > although 1.4 was using numarray at the core, the use of the array > > protocol made unnecessary the copies of the arrays (and hence, the > > overhead over 2.0, with NumPy at the core, is negligible). > > > > 2. For writing, the EArray (Extensible Array) object of PyTables has > > roughly the same speed than NumPy (a 15% faster in fact, but this is not > > that much). However, for reading, the speed-up of PyTables over pickle > > is more than 2x (up to 2.35x for 2.0), which is something to consider. > > > > 3. For compressed EArrays, writing times are relatively bad: between > > 0.06x (zlib and PyTables 1.4) and 0.15x (lzo and PyTables 2.0). However, > > for reading the ratios are quite good: between 0.57x (zlib and PyTables > > 1.4) and 1.45x (lzo and PyTables 2.0). 
In general, one should expect > > better performance from compressed data, but I've chosen completely > > random data here, so the compressors weren't able to achieve even decent > > compression ratios and that hurts I/O performance quite a few. > > > > 4. The best performance is achieved by the simple (it doesn't allow to > > be enlarged nor compressed), but rather effective in terms of I/O, Array > > object. For writing, it can be up to 1.74x faster (using PyTables 2.0) > > than pickle and up to 3.56x (using PyTables 1.4) for reading, which is > > quite a lot (more than 500 MB/s) in terms of I/O speed. > > > > I will warn the reader that these times are taken *without* having in > > account the flush time to disk for writing. When this time is taken, the > > gap between PyTables and pickle will reduce significantly (but not when > > using compression, were PyTables will continue to be rather slower in > > comparison). So, you should take the the above figures as *peak* > > throughputs (that can be achieved when the dataset fits comfortably in > > the main memory because of the filesystem cache). > > > > For reading, and when the files doesn't fit in the filesystem cache or > > are read from the first time one should expect an important degrading > > over all the figures that I presented here. However, when using > > compression over real data (where a 2x or more compression ratios are > > realistic), the compressed EArray should be up to 2x faster (I've > > noticed this many times in other contexts) for reading than other > > solutions (this is so because one have to read less data from disk and > > moreover, CPUs today are exceedingly fast at decompressing). > > > > The above benchmarks have been run on a Linux machine running SuSe Linux > > with an AMD Opteron @ 2 GHz, 8 GB of main memory and a 7200 rpm IDE > > disk. 
> > > > Cheers, > > -- > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From oliphant.travis at ieee.org Fri Apr 20 00:15:39 2007 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 19 Apr 2007 22:15:39 -0600 Subject: [SciPy-user] FIR filter, calculated with Remez exchange algorithm ? In-Reply-To: <4627CEEF.4080002@ru.nl> References: <4626987F.1030300@ru.nl> <4627CEEF.4080002@ru.nl> Message-ID: <46283E6B.7000705@ieee.org> Stef Mientki wrote: > ok, I got the answer (I think) > > A slightly changed design, works perfect: > > filt_4 = signal.remez (25, (0, 0.01, 0.2, 0.5), (0.01, 1)) > > The filterlength must be odd, because it's a high pass filter. > If the length is even, the respons at Nyquist is zero, > so my orginal example > > filt_4 = signal.remez (24, (0, 0.01, 0.2, 0.49), (0.01, 1)) > > will try to create a transition band between 0.49 and 0.5, > which is much to steep for this filterlength. > > I never encountered this problem because MatLab, > and previous programs I used always corrected this themselfs. > > Now would it be possible to implement this behaviour in the library > (I think it's usefull for beginners and previous MatLab users) > if last amplitude band = 1 (because it also must be odd for bandstop filters) > make N odd > else > make N even > > I guess the question is should we raise an error or just auto-correct. I'm thinking raising an error may help avoid this mis-learning. But, then again, if we document that it rounds up to the nearest odd-length under such conditions that may suffice. 
-Travis From gael.varoquaux at normalesup.org Fri Apr 20 02:24:20 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Fri, 20 Apr 2007 08:24:20 +0200 Subject: [SciPy-user] Fast saving/loading of huge matrices In-Reply-To: References: <1177011032.2543.94.camel@localhost.localdomain> Message-ID: <20070420062420.GC28829@clipper.ens.fr> I agree that pytables lacks a really simple interface. Say something that dumps a dic to an hdf5 file, and vice-versa (although hdf5 -> dic is a bit harder, as all the hdf5 types may not convert nicely to python types). In my experiment I use this code to load the data:

"""
def load_h5(file_name):
    """ Loads an hdf5 file and returns a dict with the hdf5 data in it.
    """
    file = tables.openFile(file_name)
    out_dict = {}
    for key, value in file.leaves.iteritems():
        if isinstance(value, tables.UnImplemented):
            continue
        try:
            value = value.read()
            try:
                if isinstance(value, CharArray):
                    value = value.tolist()
            except Exception, inst:
                print "Couldn't convert %s to a list" % key
                print inst
            if len(value) == 1:
                value = value[0]
            out_dict[key[1:]] = value
        except Exception, inst:
            print "couldn't load %s" % key
            print inst
    file.close()
    return(out_dict)
"""

It works well on our files, but our files are produced by code I wrote, so they do not explore all the possibilities of hdf5. Similarly I have some python code to dump a dic of arrays to an hdf5 file:

"""
def dic_to_h5(filename, dic):
    """ Saves all the arrays in a dictionary to an hdf5 file.
    """
    out_file = tables.openFile(filename, mode = "w")
    for key, value in dic.iteritems():
        if isinstance( value, ndarray):
            out_file.createArray('/', str(key), value)
    out_file.close()
"""

This code is not general enough to go in pytables, but if the list wants to improve it a bit, then we could propose it for inclusion, or at least put it on the cookbook. Cheers, Gaël On Thu, Apr 19, 2007 at 06:01:44PM -0500, Ryan Krauss wrote: > I have a very similar question.
Pytables clearly has much more > capability than I need and the documentation is a bit intimidating. I > have tests that involve multiple channels of data that I need to > store. Can you give a simple example of using pytables to store 3 > seperate Nx1 vectors in the same file and easily retreive the > individual channels. The cPickle equivalent would be something like: > v1=rand(1000,) > v2=rand(1000,) > mydict={'v1':v1,'v2':v2} > and then dump mydict to a pickle file. How would I do this samething > in pytables? > Thanks, > Ryan > On 4/19/07, Vincent Nijs wrote: > > Pytables looks very interesting and clearly has a ton of features. However, > > if I am trying to just read-in a csv file can it figure out the correct data > > types on its own (e.g., dates, floats, strings)? Read "I am too lazy to > > types in variables names and types myself if the names are already in the > > file" :) > > Similarly can you just dump a dictionary or rec-array into a pytable with > > one 'save' command and have pytables figure out the variable names and > > types? This seems relevant since you wouldn't have to do that with cPickle > > which saves user-time if not computer time. > > Sorry if this is too off-topic. > > Vincent From peridot.faceted at gmail.com Fri Apr 20 02:51:07 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Fri, 20 Apr 2007 02:51:07 -0400 Subject: [SciPy-user] Fast saving/loading of huge matrices In-Reply-To: <20070420062420.GC28829@clipper.ens.fr> References: <1177011032.2543.94.camel@localhost.localdomain> <20070420062420.GC28829@clipper.ens.fr> Message-ID: On 20/04/07, Gael Varoquaux wrote: > This code is not general enough to go in pytables, but if the list wants > to improve it a bit, then we could propose it for inclusion, or at least > put it on the cookbook. I have a similar but even more basic hack to get pickle-based persistence in ipython - keeps track of a list of names, then saves a dict extracted from globals() to a pickle on disk. 
It's handy to be able to store classes, and for now I'm not worried about compatibility in the pickle protocol (which is supposed to be pretty good, actually; the first pickle protocol ever is still implemented) though finding class implementations can be a problem. Anne From faltet at carabos.com Fri Apr 20 03:05:40 2007 From: faltet at carabos.com (Francesc Altet) Date: Fri, 20 Apr 2007 09:05:40 +0200 Subject: [SciPy-user] Fast saving/loading of huge matrices In-Reply-To: References: Message-ID: <1177052740.2842.20.camel@localhost.localdomain> El dj 19 de 04 del 2007 a les 17:26 -0500, en/na Vincent Nijs va escriure: > Pytables looks very interesting and clearly has a ton of features. However, > if I am trying to just read-in a csv file can it figure out the correct data > types on its own (e.g., dates, floats, strings)? Read "I am too lazy to > types in variables names and types myself if the names are already in the > file" :) PyTables itself doesn't have a csv importer as such, but provided the existence of the csv module, making one shouldn't be difficult at all. Regarding type discovering, no. PyTables is designed to cope with extremely large amounts of data, and knowing exactly which type is desired for each dataset is *crucial* for keeping the storage requirements under a minimum. However, if you don't mind about the space that will take you data on disk, you can always import the csv into a NumPy array (or recarray) and save it into PyTables in a straightforward way (see below). > Similarly can you just dump a dictionary or rec-array into a pytable with > one 'save' command and have pytables figure out the variable names and > types? This seems relevant since you wouldn't have to do that with cPickle > which saves user-time if not computer time. 
You can easily save any numpy array (or recarray) in pytables:

>>> import numpy
>>> import tables
>>> f=tables.openFile('/tmp/tmp.h5','w')
>>> na=numpy.arange(10).reshape(2,5)
# saving an array
>>> tna=f.createArray('/', 'na', na)
# retrieving the array
>>> na_fromdisk = tna[:]
>>> na_fromdisk
array([[0, 1, 2, 3, 4],
       [5, 6, 7, 8, 9]])
>>> ra=numpy.empty(shape=2, dtype='i4,f8')
# saving a recarray
>>> tra=f.createTable('/', 'ra', ra)
# retrieving the recarray
>>> ra_fromdisk=tra[:]
>>> ra_fromdisk
array([(-1209301768, 1.0675549246111695e-269), (135695288, -1.3687048847096049e-40)],
      dtype=[('f0', '<i4'), ('f1', '<f8')])
>>> f.root.na[1]
array([5, 6, 7, 8, 9])
>>> f.root.na[1,2:4]
array([7, 8])
>>> f.root.na[1,2::2]
array([7, 9])
>>> f.root.ra[1]
(135695288, -1.3687048847096049e-40)
>>> f.root.ra[::2]
array([(-1209301768, 1.0675549246111695e-269)],
      dtype=[('f0', '<i4'), ('f1', '<f8')])

> Sorry if this is too off-topic.

Well, I don't think so, so don't be afraid to ask. Cheers, -- Francesc Altet | Be careful about using the following code -- Carabos Coop. V. | I've only proven that it works, www.carabos.com | I haven't tested it. -- Donald Knuth From faltet at carabos.com Fri Apr 20 03:13:28 2007 From: faltet at carabos.com (Francesc Altet) Date: Fri, 20 Apr 2007 09:13:28 +0200 Subject: [SciPy-user] Fast saving/loading of huge matrices In-Reply-To: References: <1177011032.2543.94.camel@localhost.localdomain> Message-ID: <1177053208.2842.28.camel@localhost.localdomain> El dj 19 de 04 del 2007 a les 18:01 -0500, en/na Ryan Krauss va escriure: > I have a very similar question. Pytables clearly has much more > capability than I need and the documentation is a bit intimidating. I > have tests that involve multiple channels of data that I need to > store. Can you give a simple example of using pytables to store 3 > separate Nx1 vectors in the same file and easily retrieve the > individual channels?
The cPickle equivalent would be something like: > > v1=rand(1000,) > v2=rand(1000,) > mydict={'v1':v1,'v2':v2} > > and then dump mydict to a pickle file. How would I do this same thing > in pytables? In a former message, I've shown how easy it is to save a recarray with pytables. You can achieve what you want by getting used to recarrays (which are very powerful beasts for doing many numerical tasks). There are many convenient ways of creating recarrays, and for your problem, rec.fromarrays is best:

>>> v1=numpy.random.rand(10,)
>>> v2=numpy.random.rand(10,)
>>> ra=numpy.rec.fromarrays([v1,v2], dtype=[('v1', 'f8'), ('v2', 'f8')])
# save 'ra' in pytables
>>> tra=f.createTable('/', 'ra2', ra)
# read the entire 'ra' from disk
>>> tra[:]
array([(0.33512633154470473, 0.065904821918977619),
       (0.53020917640437315, 0.025182584316907786),
       (0.8336930367762152, 0.78541751699681861),
       (0.36623947675706092, 0.67927796809305641),
       (0.464207127024309, 0.85144582476536301),
       (0.012377388362621145, 0.4211020211753902),
       (0.3012702551957076, 0.71677896535796437),
       (0.38805504723782103, 0.48775066039074322),
       (0.92300245732715691, 0.52952648422581394),
       (0.1549704417226937, 0.070114112948997387)],
      dtype=[('v1', '<f8'), ('v2', '<f8')])

From faltet at carabos.com Fri Apr 20 03:30:15 2007 From: faltet at carabos.com (Francesc Altet) Date: Fri, 20 Apr 2007 09:30:15 +0200 Subject: [SciPy-user] Fast saving/loading of huge matrices In-Reply-To: <20070420062420.GC28829@clipper.ens.fr> References: <1177011032.2543.94.camel@localhost.localdomain> <20070420062420.GC28829@clipper.ens.fr> Message-ID: <1177054215.2842.38.camel@localhost.localdomain> El dv 20 de 04 del 2007 a les 08:24 +0200, en/na Gael Varoquaux va escriure: > I agree that pytables lacks a really simple interface. Say something that > dumps a dic to an hdf5 file, and vice-versa (although hdf5 -> dic is a > bit harder as all the hdf5 types may not convert nicely to python types). As I said before, get used to recarrays. If you have reasons for sticking with dictionaries, it is straightforward to convert a dict into a recarray.
For example:

>>> v1=numpy.random.rand(10,)
>>> v2=numpy.random.randint(10, size=10)
>>> mydict={'v1':v1,'v2':v2}
# Conversion to a recarray begins
>>> cols = [col for col in mydict.itervalues()]
>>> ratype = [(name, col.dtype) for (name, col) in mydict.iteritems()]
>>> ra=numpy.rec.fromarrays(cols, dtype=ratype)
# now, you can proceed to saving (and reading) the data
>>> tra3=f.createTable('/', 'ra3', ra)
>>> tra3[:]
array([(0.71896141583591389, 3), (0.6147395923362261, 8),
       (0.74390300993242819, 8), (0.85740583591803832, 8),
       (0.058988577053635471, 4), (0.33839332688847212, 9),
       (0.3847836118934358, 2), (0.0072535131033339972, 5),
       (0.42023038711482563, 5), (0.26398728887523382, 6)],
      dtype=[('v1', '<f8'), ('v2', '<i8')])

> Similarly I have some python code to dump a dic of arrays to an hdf5
> file:
>
> """
> def dic_to_h5(filename, dic):
>     """ Saves all the arrays in a dictionary to an hdf5 file.
>     """
>     out_file = tables.openFile(filename, mode = "w")
>     for key, value in dic.iteritems():
>         if isinstance( value, ndarray):
>             out_file.createArray('/', str(key), value)
>     out_file.close()
> """
>
> This code is not general enough to go in pytables, but if the list wants
> to improve it a bit, then we could propose it for inclusion, or at least
> put it on the cookbook.

Yeah, there are infinite possibilities in that regard. However, I think that there is a beauty in keeping the values of a dictionary (or fields in a recarray) tied together in a table. This approach has proven to be very powerful in many situations (but, of course, the user has to decide the best way to arrange his own data). Cheers, -- Francesc Altet | Be careful about using the following code -- Carabos Coop. V. | I've only proven that it works, www.carabos.com | I haven't tested it.
-- Donald Knuth From gael.varoquaux at normalesup.org Fri Apr 20 03:36:38 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Fri, 20 Apr 2007 09:36:38 +0200 Subject: [SciPy-user] Fast saving/loading of huge matrices In-Reply-To: <1177054215.2842.38.camel@localhost.localdomain> References: <1177011032.2543.94.camel@localhost.localdomain> <20070420062420.GC28829@clipper.ens.fr> <1177054215.2842.38.camel@localhost.localdomain> Message-ID: <20070420073638.GE28829@clipper.ens.fr> On Fri, Apr 20, 2007 at 09:30:15AM +0200, Francesc Altet wrote: > As I said before, get used to recarrays. If you have reasons for sticking > with dictionaries, it is straightforward to convert a dict into a > recarray. You are most definitely right that most people (including me) don't use recarrays enough. However, when importing hdf5 I want to keep part of the versatility of hdf5: it is hierarchical and can save much more than arrays. Dictionaries can mirror this possibility quite well in Python. This is why I think they are well suited for helper functions to do IO with hdf5. As a side note, we do use this richness of hdf5 in our experiment, to store, say, the time of an experimental run, the temperature of the room... Gaël From josegomez at gmx.net Fri Apr 20 04:38:54 2007 From: josegomez at gmx.net (Jose Gomez) Date: Fri, 20 Apr 2007 08:38:54 +0000 (UTC) Subject: [SciPy-user] Installing the SVM sandbox Message-ID: Hi, I am interested in testing out the SVM bit in the sandbox. However, I can't quite work out how to do it. I have installed the scipy package in Kubuntu (6.10). I have downloaded the source tarball, went into the relevant directory, compiled the libsvm C++ code, copied the .so and .os files to the main sandbox/svm directory, issued a python setup.py build and python setup.py install, and tried the tests in the tests directory. They work.
However, if I try to import svm from my IPython shell, I get this: |from svm.dataset import LibSvmRegressionDataSet, LibSvmTestDataSet --------------------------------------------------------------------------- exceptions.ImportError Traceback (most recent call last) /home/cuenta/ ImportError: No module named svm.dataset I must have forgotten something :) From nwagner at iam.uni-stuttgart.de Fri Apr 20 04:56:02 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 20 Apr 2007 10:56:02 +0200 Subject: [SciPy-user] Installing the SVM sandbox In-Reply-To: References: Message-ID: <46288022.40100@iam.uni-stuttgart.de> Jose Gomez wrote: > Hi, > I am interested in testing out the SVM bit in the sandbox. However, I don't seem > to be able to understand how to do it. I have installed the scipy package in > Kubuntu (6.10). I have downloaded the source tarball, and went into the relevant > directory, compiled the libsvm C++ code, copied the .so and .os files to the > main sandbox/svm directory, issued a python setup.py build and python setup.py > install, and tried the tests in the tests directory. They work. However, if I > try to import svm from my IPython shell, I get this: > > |from svm.dataset import LibSvmRegressionDataSet, LibSvmTestDataSet > --------------------------------------------------------------------------- > exceptions.ImportError Traceback (most recent call > last) > > /home/cuenta/ > > ImportError: No module named svm.dataset > > I must have forgotten something :) > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Hi, I am not familiar with Kubuntu but I was able to install the sandbox package svm using scipy/Lib/sandbox/enabled_packages.txt on SuSE Linux 10.x Just add a line with svm to that file and reinstall it. 
(rm -rf build; python setup.py install) Then you can do something like >>> from scipy.sandbox import svm >>> HTH, Nils From millman at berkeley.edu Fri Apr 20 05:04:02 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Fri, 20 Apr 2007 02:04:02 -0700 Subject: [SciPy-user] Installing the SVM sandbox In-Reply-To: References: Message-ID: Hello Jose, Assuming you have everything you need to build scipy installed as well as ctypes 1.0.1 it should be fairly straightforward. I haven't tested this recently (I can give it a try tomorrow), but I think you should just have to do something like this: cd ~/src svn co http://svn.scipy.org/svn/scipy/trunk/ ./scipy-trunk cd scipy-trunk echo svm > Lib/sandbox/enabled_packages.txt python setup.py build sudo python setup.py install Then if all goes well just try something like this at the python prompt: from scipy.sandbox import svm from scipy.sandbox.svm import LibSvmRegressionDataSet, LibSvmTestDataSet Good luck, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryanlists at gmail.com Fri Apr 20 07:36:06 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Fri, 20 Apr 2007 06:36:06 -0500 Subject: [SciPy-user] Fast saving/loading of huge matrices In-Reply-To: <20070420073638.GE28829@clipper.ens.fr> References: <1177011032.2543.94.camel@localhost.localdomain> <20070420062420.GC28829@clipper.ens.fr> <1177054215.2842.38.camel@localhost.localdomain> <20070420073638.GE28829@clipper.ens.fr> Message-ID: So, I wanted to try out pytables and run the benchmark Francesc posted, but I ran into an error. 
I downloaded http://www.pytables.org/download/preliminary/tables-2.0b2.win32-py2.4.exe and installed it, but import tables gives this error: In [1]: import tables --------------------------------------------------------------------------- exceptions.ImportError Traceback (most recent call last) C:\Documents and Settings\Ryan\ C:\Python24\Lib\site-packages\tables\__init__.py 49 # Import the user classes from the proper modules 50 from tables.exceptions import * ---> 51 from tables.file import File, openFile, copyFile 52 from tables.node import Node 53 from tables.group import Group C:\Python24\Lib\site-packages\tables\file.py 244 # It is necessary to import Table after openFile, because it solves a ci rcular 245 # import reference. --> 246 from tables.table import Table 247 248 C:\Python24\Lib\site-packages\tables\table.py 34 import numpy 35 ---> 36 from tables import tableExtension 37 from tables.conditions import split_condition 38 from tables.numexpr.compiler import getType as numexpr_getType ImportError: cannot import name tableExtension Is the executable not stand alone? Ryan On 4/20/07, Gael Varoquaux wrote: > On Fri, Apr 20, 2007 at 09:30:15AM +0200, Francesc Altet wrote: > > As I said before, be used to recarrays. If you have reasons for sticking > > with dictionaries, it is straighforward converting a dict into a > > recarray. > > You are most definitely right that most people (including me) don't use > recarrays enough. However when importing hdf5 I want to keep part of the > versatility of hdf5: it is hierarchical and can save much more than > arrays. Dictionnaries can mirror this possibility quite well in Python. > This is why I think they are well suited for helper functions to do IO > with hdf5. > > As a side note, we do use this richness of hdf5 in our experiment, to > store say the time of an experimental run, the temperature of the room... 
> > Gaël
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From faltet at carabos.com Fri Apr 20 08:11:25 2007
From: faltet at carabos.com (Francesc Altet)
Date: Fri, 20 Apr 2007 14:11:25 +0200
Subject: [SciPy-user] Fast saving/loading of huge matrices
In-Reply-To:
References: <1177011032.2543.94.camel@localhost.localdomain>
	<20070420073638.GE28829@clipper.ens.fr>
Message-ID: <200704201411.26979.faltet@carabos.com>

A Divendres 20 Abril 2007 13:36, Ryan Krauss escrigué:
> So, I wanted to try out pytables and run the benchmark Francesc
> posted, but I ran into an error. I downloaded
> http://www.pytables.org/download/preliminary/tables-2.0b2.win32-py2.4.exe
> and installed it, but import tables gives this error:
[snip]

Have you followed the installation instructions carefully? If not, have
a look at:

http://www.pytables.org/docs/manual-2.0b2/ch02.html#binaryInstallationDescr

and try again.

Cheers,

--
>0,0<   Francesc Altet     http://www.carabos.com/
V   V   Cárabos Coop. V.   Enjoy Data
 "-"

From ryanlists at gmail.com Fri Apr 20 12:09:29 2007
From: ryanlists at gmail.com (Ryan Krauss)
Date: Fri, 20 Apr 2007 11:09:29 -0500
Subject: [SciPy-user] Fast saving/loading of huge matrices
In-Reply-To: <200704201411.26979.faltet@carabos.com>
References: <1177011032.2543.94.camel@localhost.localdomain>
	<20070420073638.GE28829@clipper.ens.fr>
	<200704201411.26979.faltet@carabos.com>
Message-ID:

I have followed the dll instructions now but am still getting the same
error message:

Here are the dll's in the tables directory;

In [2]: cd C:/Python24/Lib/site-packages/tables/
C:\Python24\Lib\site-packages\tables

In [3]: ls *.dll
 Volume in drive C has no label.
 Volume Serial Number is 4464-F810

 Directory of C:\Python24\Lib\site-packages\tables

11/14/2005  04:28 PM           217,088 hdf5_cppdll.dll
11/14/2005  04:25 PM           815,104 hdf5dll.dll
01/06/2005  06:28 PM            90,112 szlibdll.dll
04/17/2006  05:29 PM            73,728 zlib1.dll

My error message is still:

In [1]: import tables
---------------------------------------------------------------------------
exceptions.ImportError                     Traceback (most recent call last)

Z:\

C:\Python24\Lib\site-packages\tables\__init__.py
     49 # Import the user classes from the proper modules
     50 from tables.exceptions import *
---> 51 from tables.file import File, openFile, copyFile
     52 from tables.node import Node
     53 from tables.group import Group

C:\Python24\Lib\site-packages\tables\file.py
    244 # It is necessary to import Table after openFile, because it solves a circular
    245 # import reference.
--> 246 from tables.table import Table
    247
    248

C:\Python24\Lib\site-packages\tables\table.py
     34 import numpy
     35
---> 36 from tables import tableExtension
     37 from tables.conditions import split_condition
     38 from tables.numexpr.compiler import getType as numexpr_getType

ImportError: cannot import name tableExtension

Thanks,

Ryan

On 4/20/07, Francesc Altet wrote:
> A Divendres 20 Abril 2007 13:36, Ryan Krauss escrigué:
> > So, I wanted to try out pytables and run the benchmark Francesc
> > posted, but I ran into an error. I downloaded
> > http://www.pytables.org/download/preliminary/tables-2.0b2.win32-py2.4.exe
> > and installed it, but import tables gives this error:
> [snip]
>
> Have you followed the installation instructions carefully? If not, have
> a look at:
>
> http://www.pytables.org/docs/manual-2.0b2/ch02.html#binaryInstallationDescr
>
> and try again.
>
> Cheers,
>
> --
> >0,0<   Francesc Altet     http://www.carabos.com/
> V   V   Cárabos Coop. V.
Enjoy Data
> "-"
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From gnchen at cortechs.net Fri Apr 20 12:18:31 2007
From: gnchen at cortechs.net (Gennan Chen)
Date: Fri, 20 Apr 2007 09:18:31 -0700
Subject: [SciPy-user] Installing the SVM sandbox
In-Reply-To:
References:
Message-ID: <8C987F6C-B5D1-415A-8609-E9A9F37F1B34@cortechs.net>

Hi!

I got confused here. I thought you needed to change setup.py under
sandbox. So, what is the official way to include the sandbox?

Gen

On Apr 20, 2007, at 2:04 AM, Jarrod Millman wrote:

> Hello Jose,
>
> Assuming you have everything you need to build scipy installed as
> well as ctypes 1.0.1 it should be fairly straightforward. I
> haven't tested this recently (I can give it a try tomorrow), but I
> think you should just have to do something like this:
>
> cd ~/src
> svn co http://svn.scipy.org/svn/scipy/trunk/ ./scipy-trunk
> cd scipy-trunk
> echo svm > Lib/sandbox/enabled_packages.txt
> python setup.py build
> sudo python setup.py install
>
> Then if all goes well just try something like this at the python
> prompt:
> from scipy.sandbox import svm
> from scipy.sandbox.svm import LibSvmRegressionDataSet,
> LibSvmTestDataSet
>
> Good luck,
>
> --
> Jarrod Millman
> Computational Infrastructure for Research Labs
> 10 Giannini Hall, UC Berkeley
> phone: 510.643.4014
> http://cirl.berkeley.edu/
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From faltet at carabos.com Fri Apr 20 12:26:00 2007
From: faltet at carabos.com (Francesc Altet)
Date: Fri, 20 Apr 2007 18:26:00 +0200
Subject: [SciPy-user] Fast saving/loading of huge matrices
In-Reply-To:
References: <1177011032.2543.94.camel@localhost.localdomain>
	<20070420073638.GE28829@clipper.ens.fr>
	<200704201411.26979.faltet@carabos.com>
Message-ID: <1177086360.2559.12.camel@localhost.localdomain>

El dv 20 de 04 del 2007 a les 11:09 -0500, en/na Ryan Krauss va
escriure:
> I have followed the dll instructions now but am still getting the same
> error message:
>
> Here are the dll's in the tables directory;
>
> In [2]: cd C:/Python24/Lib/site-packages/tables/
> C:\Python24\Lib\site-packages\tables
>
> In [3]: ls *.dll
>  Volume in drive C has no label.
>  Volume Serial Number is 4464-F810
>
>  Directory of C:\Python24\Lib\site-packages\tables
>
> 11/14/2005  04:28 PM           217,088 hdf5_cppdll.dll
> 11/14/2005  04:25 PM           815,104 hdf5dll.dll
> 01/06/2005  06:28 PM            90,112 szlibdll.dll
> 04/17/2006  05:29 PM            73,728 zlib1.dll
>
>
> My error message is still:

Mmmm, I always put the DLLs in \windows\system32 and that usually works.
I'm not a windows expert, but I've read that it is enough to put them in
the package directory (i.e., the place you are trying out), but I
vaguely remember that this can create problems...

Another possibility is that the binary that I've generated could be
broken (but that is strange, as nobody has complained about this, and it
has been downloaded many times by now).

Well, try with the \windows\system32 directory (btw, you only need to
copy hdf5dll.dll, szlibdll.dll and zlib1.dll there) and tell me how it
goes. If it works, I'll remove the alternate place for DLL's (i.e.
python_installation_path\Lib\site-packages\tables) unless somebody with
more experience with Windows can shed more light here.

Cheers,

--
Francesc Altet    |  Be careful about using the following code --
Carabos Coop. V.
                  |  I've only proven that it works,
www.carabos.com   |  I haven't tested it. -- Donald Knuth

From robert.kern at gmail.com Fri Apr 20 12:28:53 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 20 Apr 2007 11:28:53 -0500
Subject: [SciPy-user] Installing the SVM sandbox
In-Reply-To: <8C987F6C-B5D1-415A-8609-E9A9F37F1B34@cortechs.net>
References: <8C987F6C-B5D1-415A-8609-E9A9F37F1B34@cortechs.net>
Message-ID: <4628EA45.4020102@gmail.com>

Gennan Chen wrote:
> Hi!
>
> I got confused here. I thought you needed to change setup.py under
> sandbox. So, what is the official way to include the sandbox?

Follow Jarrod's instructions. He is describing the official way.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From pav at iki.fi Fri Apr 20 12:43:36 2007
From: pav at iki.fi (Pauli Virtanen)
Date: Fri, 20 Apr 2007 16:43:36 +0000 (UTC)
Subject: [SciPy-user] Fast saving/loading of huge matrices
References: <1177011032.2543.94.camel@localhost.localdomain>
	<20070420062420.GC28829@clipper.ens.fr>
Message-ID:

Fri, 20 Apr 2007 08:24:20 +0200, Gael Varoquaux kirjoitti:
>
> I agree that pytables lacks a really simple interface. Say something that
> dumps a dict to an hdf5 file, and vice-versa (although hdf5 -> dict is a
> bit harder as all the hdf5 types may not convert nicely to python types).

In a different attempt to make storing stuff in Pytables easier,
I wrote a library to dump and load any objects directly to HDF5 files:

http://www.iki.fi/pav/software/hdf5pickle/index.html

It uses the pickle protocol to interface with Python, but unrolls
objects so that they are stored in the "native" Pytables formats, if
possible, instead of pickled strings.

It's a bit rough around some edges and a bit slow, but it works. (Also,
all security issues associated with pickling should be remembered...)
For example:

import numpy as N
import hdf5pickle, tables

class Foo(object):
    def __init__(self, c):
        self.a = N.array([1,2,3,4,5], float)
        self.b = 12345
        self.c = c

foo = Foo(N.array([[1+2j, 3+4j]]))

f = tables.openFile('test.h5', 'w')
hdf5pickle.dump(foo, f, '/foo')
f.close()

f = tables.openFile('test.h5', 'r')
foo2 = hdf5pickle.load(f, '/foo')
f.close()

assert N.all(foo.a == foo2.a)
assert N.all(foo.b == foo2.b)

... meanwhile, in the shell ...

$ h5ls -dvr test.h5
/foo                     Group
/foo/__                  Group
/foo/__/args             Dataset {1}
    Data:
        (0) 0
/foo/__/cls              Dataset {12}
    Data:
        (0) 95, 95, 109, 97, 105, 110, 95, 95, 10, 70, 111, 111
/foo/a                   Dataset {5}
    Data:
        (0) 1, 2, 3, 4, 5
/foo/b                   Dataset {SCALAR}
    Data:
        (0) 12345
/foo/c                   Dataset {1, 2}
    Data:
        (0,0) {1, 2}, {3, 4}

--
Pauli Virtanen

From ryanlists at gmail.com Fri Apr 20 12:56:40 2007
From: ryanlists at gmail.com (Ryan Krauss)
Date: Fri, 20 Apr 2007 11:56:40 -0500
Subject: [SciPy-user] Fast saving/loading of huge matrices
In-Reply-To: <1177086360.2559.12.camel@localhost.localdomain>
References: <1177011032.2543.94.camel@localhost.localdomain>
	<20070420073638.GE28829@clipper.ens.fr>
	<200704201411.26979.faltet@carabos.com>
	<1177086360.2559.12.camel@localhost.localdomain>
Message-ID:

Changing the dll directory didn't help.

On 4/20/07, Francesc Altet wrote:
> El dv 20 de 04 del 2007 a les 11:09 -0500, en/na Ryan Krauss va
> escriure:
> > I have followed the dll instructions now but am still getting the same
> > error message:
> >
> > Here are the dll's in the tables directory;
> >
> > In [2]: cd C:/Python24/Lib/site-packages/tables/
> > C:\Python24\Lib\site-packages\tables
> >
> > In [3]: ls *.dll
> >  Volume in drive C has no label.
> >  Volume Serial Number is 4464-F810
> >
> >  Directory of C:\Python24\Lib\site-packages\tables
> >
> > 11/14/2005  04:28 PM           217,088 hdf5_cppdll.dll
> > 11/14/2005  04:25 PM           815,104 hdf5dll.dll
> > 01/06/2005  06:28 PM            90,112 szlibdll.dll
> > 04/17/2006  05:29 PM            73,728 zlib1.dll
> >
> >
> > My error message is still:
>
> Mmmm, I always put the DLLs in \windows\system32 and that usually works.
> I'm not a windows expert, but I've read that it is enough to put them in
> the package directory (i.e., the place you are trying out), but I
> vaguely remember that this can create problems...
>
> Another possibility is that the binary that I've generated could be
> broken (but it is strange as nobody has complained about this, and it
> has been downloaded many times by now).
>
> Well, try with the \windows\system32 (btw, you only need to copy there
> hdf5dll.dll, szlibdll.dll and zlib1.dll) directory and tell me how it
> goes. If it works, I'll remove the alternate place for DLL's (i.e.
> python_installation_path\Lib\site-packages\tables) unless somebody with
> more experience with Windows can bring more light here.
>
> Cheers,
>
> --
> Francesc Altet    |  Be careful about using the following code --
> Carabos Coop. V.  |  I've only proven that it works,
> www.carabos.com   |  I haven't tested it.
 -- Donald Knuth
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From faltet at carabos.com Fri Apr 20 13:06:08 2007
From: faltet at carabos.com (Francesc Altet)
Date: Fri, 20 Apr 2007 19:06:08 +0200
Subject: [SciPy-user] Fast saving/loading of huge matrices
In-Reply-To:
References: <1177011032.2543.94.camel@localhost.localdomain>
	<20070420062420.GC28829@clipper.ens.fr>
Message-ID: <1177088768.2559.17.camel@localhost.localdomain>

El dv 20 de 04 del 2007 a les 16:43 +0000, en/na Pauli Virtanen va
escriure:
> Fri, 20 Apr 2007 08:24:20 +0200, Gael Varoquaux kirjoitti:
> >
> > I agree that pytables lacks a really simple interface. Say something that
> > dumps a dict to an hdf5 file, and vice-versa (although hdf5 -> dict is a
> > bit harder as all the hdf5 types may not convert nicely to python types).
>
> In a different attempt to make storing stuff in Pytables easier,
> I wrote a library to dump and load any objects directly to HDF5 files:
>
> http://www.iki.fi/pav/software/hdf5pickle/index.html
>
> It uses the pickle protocol to interface with Python, but unrolls
> objects so that they are stored in the "native" Pytables formats, if
> possible, instead of pickled strings.
>
> It's a bit rough around some edges and a bit slow, but it works. (Also,
> all security issues associated with pickling should be remembered...)
>
> For example:
[snip]

Wow, a very conscientious piece of work. Why haven't you advertised it
before? I love to hear about this kind of development...

Cheers,

--
Francesc Altet    |  Be careful about using the following code --
Carabos Coop. V.  |  I've only proven that it works,
www.carabos.com   |  I haven't tested it.
 -- Donald Knuth

From faltet at carabos.com Fri Apr 20 13:08:45 2007
From: faltet at carabos.com (Francesc Altet)
Date: Fri, 20 Apr 2007 19:08:45 +0200
Subject: [SciPy-user] Fast saving/loading of huge matrices
In-Reply-To:
References: <1177011032.2543.94.camel@localhost.localdomain>
	<20070420073638.GE28829@clipper.ens.fr>
	<200704201411.26979.faltet@carabos.com>
	<1177086360.2559.12.camel@localhost.localdomain>
Message-ID: <1177088925.2559.21.camel@localhost.localdomain>

El dv 20 de 04 del 2007 a les 11:56 -0500, en/na Ryan Krauss va
escriure:
> Changing the dll directory didn't help.

Ok. I don't have a Windows machine at hand to test on myself, so I will
ask on the PyTables list just in case anybody has tried the Python 2.4
package (it is true that most people are using the Python 2.5 one) and
will come back with more news.

Cheers,

>
> On 4/20/07, Francesc Altet wrote:
> > El dv 20 de 04 del 2007 a les 11:09 -0500, en/na Ryan Krauss va
> > escriure:
> > > I have followed the dll instructions now but am still getting the same
> > > error message:
> > >
> > > Here are the dll's in the tables directory;
> > >
> > > In [2]: cd C:/Python24/Lib/site-packages/tables/
> > > C:\Python24\Lib\site-packages\tables
> > >
> > > In [3]: ls *.dll
> > >  Volume in drive C has no label.
> > >  Volume Serial Number is 4464-F810
> > >
> > >  Directory of C:\Python24\Lib\site-packages\tables
> > >
> > > 11/14/2005  04:28 PM           217,088 hdf5_cppdll.dll
> > > 11/14/2005  04:25 PM           815,104 hdf5dll.dll
> > > 01/06/2005  06:28 PM            90,112 szlibdll.dll
> > > 04/17/2006  05:29 PM            73,728 zlib1.dll
> > >
> > >
> > > My error message is still:
> >
> > Mmmm, I always put the DLLs in \windows\system32 and that usually works.
> > I'm not a windows expert, but I've read that it is enough to put them in
> > the package directory (i.e., the place you are trying out), but I
> > vaguely remember that this can create problems...
> >
> > Another possibility is that the binary that I've generated could be
> > broken (but it is strange as nobody has complained about this, and it
> > has been downloaded many times by now).
> >
> > Well, try with the \windows\system32 (btw, you only need to copy there
> > hdf5dll.dll, szlibdll.dll and zlib1.dll) directory and tell me how it
> > goes. If it works, I'll remove the alternate place for DLL's (i.e.
> > python_installation_path\Lib\site-packages\tables) unless somebody with
> > more experience with Windows can bring more light here.
> >
> > Cheers,
> >
> > --
> > Francesc Altet    |  Be careful about using the following code --
> > Carabos Coop. V.  |  I've only proven that it works,
> > www.carabos.com   |  I haven't tested it. -- Donald Knuth
> >
> > _______________________________________________
> > SciPy-user mailing list
> > SciPy-user at scipy.org
> > http://projects.scipy.org/mailman/listinfo/scipy-user
> >
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

--
Francesc Altet    |  Be careful about using the following code --
Carabos Coop. V.  |  I've only proven that it works,
www.carabos.com   |  I haven't tested it. -- Donald Knuth

From s.mientki at ru.nl Fri Apr 20 16:02:18 2007
From: s.mientki at ru.nl (Stef Mientki)
Date: Fri, 20 Apr 2007 22:02:18 +0200
Subject: [SciPy-user] power(10,2) accuracy ?
Message-ID: <46291C4A.3020709@ru.nl>

Is this good behavior ?

>>> power(10,2)
99
>>> power(10,3)
1000
>>> power(10.0,2.0)
100.0

thanks,
Stef Mientki

From nwagner at iam.uni-stuttgart.de Fri Apr 20 16:10:29 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Fri, 20 Apr 2007 22:10:29 +0200
Subject: [SciPy-user] power(10,2) accuracy ?
In-Reply-To: <46291C4A.3020709@ru.nl>
References: <46291C4A.3020709@ru.nl>
Message-ID:

On Fri, 20 Apr 2007 22:02:18 +0200
 Stef Mientki wrote:
> Is this good behavior ?
>
> >>> power(10,2)
> 99
> >>> power(10,3)
> 1000
> >>> power(10.0,2.0)
> 100.0
>
> thanks,
> Stef Mientki
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

>>> from scipy import *
>>> power(10,2)
100
>>> power(10,3)
1000
>>> power(10,2.)
100.0
>>> import scipy
>>> scipy.__version__
'0.5.3.dev2931'

From robert.kern at gmail.com Fri Apr 20 16:11:50 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 20 Apr 2007 15:11:50 -0500
Subject: [SciPy-user] power(10,2) accuracy ?
In-Reply-To: <46291C4A.3020709@ru.nl>
References: <46291C4A.3020709@ru.nl>
Message-ID: <46291E86.8090606@gmail.com>

Stef Mientki wrote:
> Is this good behavior ?
>
> >>> power(10,2)
> 99
> >>> power(10,3)
> 1000
> >>> power(10.0,2.0)
> 100.0

Please always tell us what version of numpy you are using and on what
platform when reporting possible bugs. power(10, 2) gives me 100 with a
recent SVN checkout on OS X.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From jturner at gemini.edu Fri Apr 20 16:26:46 2007
From: jturner at gemini.edu (James Turner)
Date: Fri, 20 Apr 2007 16:26:46 -0400
Subject: [SciPy-user] HDF5 vs FITS (was: Fast saving/loading of huge matrices)
In-Reply-To: <20070420073638.GE28829@clipper.ens.fr>
References: <20070420073638.GE28829@clipper.ens.fr>
Message-ID: <46292206.3010203@gemini.edu>

> As a side note, we do use this richness of hdf5 in our experiment, to
> store say the time of an experimental run, the temperature of the room...

It sounds like HDF5 provides much the same capabilities as FITS, the
main file standard used for some decades in astronomy. It also sounds
like there may be a lot of overlap between Pytables and STScI's binary
tables, as implemented in PyFITS.
I imagine that's why Pytables was based on numarray, come to think of
it... Does anyone have a good overview of how they compare, or know
whether this HDF format is the same one that was used years ago by the
Starlink project in the UK?

Now that Python + NumPy is gaining popularity in astronomy, I suppose
there will be more motivation/opportunity to use the tools that other
scientists are using (although I don't believe there is any immediate
prospect of replacing FITS with another format). I hope this doesn't
lead to too much incompatibility :-(. Anyway, these tools seem useful
to have around!

Cheers,

James.

From s.mientki at ru.nl Fri Apr 20 16:37:20 2007
From: s.mientki at ru.nl (Stef Mientki)
Date: Fri, 20 Apr 2007 22:37:20 +0200
Subject: [SciPy-user] power(10,2) accuracy ?
In-Reply-To: <46291E86.8090606@gmail.com>
References: <46291C4A.3020709@ru.nl> <46291E86.8090606@gmail.com>
Message-ID: <46292480.1030907@ru.nl>

Robert Kern wrote:
> Stef Mientki wrote:
>
>> Is this good behavior ?
>>
>> >>> power(10,2)
>> 99
>> >>> power(10,3)
>> 1000
>> >>> power(10.0,2.0)
>> 100.0
>
> Please always tell us what version of numpy you are using and on what
> platform when reporting possible bugs. power(10, 2) gives me 100 with a
> recent SVN checkout on OS X.
>
Sorry, I'm not used to giving version numbers.

Numpy: 1.0.3.dev3716
WinXP, SP1

cheers,
Stef

From aisaac at american.edu Fri Apr 20 17:51:45 2007
From: aisaac at american.edu (Alan Isaac)
Date: Fri, 20 Apr 2007 17:51:45 -0400
Subject: [SciPy-user] power(10,2) accuracy ?
In-Reply-To: <46292480.1030907@ru.nl>
References: <46291C4A.3020709@ru.nl> <46291E86.8090606@gmail.com>
	<46292480.1030907@ru.nl>
Message-ID:

On Fri, 20 Apr 2007, Stef Mientki wrote:
> Numpy: 1.0.3.dev3716
> WinXP, SP1

>> import numpy as N
>> N.power(10,2)
100
>> N.__version__
'1.0'

Which reminds me, I need to update on this machine ...
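For reference, the behavior under discussion can be checked directly with a quick sanity test (a sketch, not from the thread; the 99 result reported above appears to have been specific to that dev build and platform):

```python
import numpy as np

# Integer inputs: power should return an exact integer result.
assert np.power(10, 2) == 100
assert np.power(10, 3) == 1000

# Float inputs go through floating-point pow() and return a float.
assert np.power(10.0, 2.0) == 100.0
assert isinstance(np.power(10.0, 2.0), np.floating)
```

If the first assertion fails, reporting the exact `numpy.__version__` and platform (as Robert Kern asks above) is the useful next step.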
Alan Isaac (on Win XP SP2)

From strawman at astraw.com Fri Apr 20 16:55:47 2007
From: strawman at astraw.com (Andrew Straw)
Date: Fri, 20 Apr 2007 13:55:47 -0700
Subject: [SciPy-user] control theory?
Message-ID: <462928D3.7040002@astraw.com>

Hi all,

I'm looking for Python-centric software (and probably even better, with
tutorials) that lets one do various control-theory stuff. In particular
I'm interested in complex frequency plane tools such as root locus
plots. Any suggestions? I see that scipy.signal might have a few
starting points...

-Andrew

From perry at stsci.edu Fri Apr 20 17:14:20 2007
From: perry at stsci.edu (Perry Greenfield)
Date: Fri, 20 Apr 2007 17:14:20 -0400
Subject: [SciPy-user] HDF5 vs FITS (was: Fast saving/loading of huge matrices)
In-Reply-To: <46292206.3010203@gemini.edu>
References: <20070420073638.GE28829@clipper.ens.fr>
	<46292206.3010203@gemini.edu>
Message-ID: <7DDBDD97-339A-42B3-AE55-534DE1A23BB2@stsci.edu>

On Apr 20, 2007, at 4:26 PM, James Turner wrote:

>> As a side note, we do use this richness of hdf5 in our experiment, to
>> store say the time of an experimental run, the temperature of the
>> room...
>
> It sounds like HDF5 provides much the same capabilities as FITS, the
> main file standard used for some decades in astronomy. It also sounds
> like there may be a lot of overlap between Pytables and STScI's binary
> tables, as implemented in PyFITS. I imagine that's why Pytables was
> based on numarray, come to think of it... Does anyone have a good
> overview of how they compare, or know whether this HDF format is the

I think that is a bit too broadly posed to answer in any simple way
(if you are wondering how HDF and FITS compare). Speed? Flexibility?
Etc. FITS is generally much less flexible. However, it is archival.
Something that HDF has a harder time claiming. And it is very well
entrenched in astronomy.

> same one that was used years ago by the Starlink project in the UK?
>

I believe so (at least some version of HDF).
Perry

From jturner at gemini.edu Fri Apr 20 18:51:21 2007
From: jturner at gemini.edu (James Turner)
Date: Fri, 20 Apr 2007 18:51:21 -0400
Subject: [SciPy-user] HDF5 vs FITS (was: Fast saving/loading of huge matrices)
Message-ID: <462943E9.5040304@gemini.edu>

Hi Perry,

> I think that is a bit too broadly posed to answer in any simple way
> (if you are wondering how HDF and FITS compare). Speed? Flexibility?
> Etc. FITS is generally much less flexible. However, it is archival.
> Something that HDF has a harder time claiming. And it is very well
> entrenched in astronomy.

Thanks for the interesting summary. My question was a bit vague just
because I was asking more out of curiosity than for a specific
application.

Actually, in the back of my mind I was wondering how SciPy and STScI
Python might end up playing together... I think SciPy will prove
rather useful and it obviously helps to talk the same language as
other users/developers (so thumbs up for NumPy :-) ). Maybe if there
were a single interface to image/table-type formats, it would allow
more interaction -- but I imagine such a thing might be impractical
to implement. Don't worry too much -- I'm just thinking out loud!

Cheers,

James.

From peridot.faceted at gmail.com Fri Apr 20 19:25:34 2007
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Fri, 20 Apr 2007 19:25:34 -0400
Subject: [SciPy-user] HDF5 vs FITS (was: Fast saving/loading of huge matrices)
In-Reply-To: <7DDBDD97-339A-42B3-AE55-534DE1A23BB2@stsci.edu>
References: <20070420073638.GE28829@clipper.ens.fr>
	<46292206.3010203@gemini.edu>
	<7DDBDD97-339A-42B3-AE55-534DE1A23BB2@stsci.edu>
Message-ID:

On 20/04/07, Perry Greenfield wrote:
> I think that is a bit too broadly posed to answer in any simple way
> (if you are wondering how HDF and FITS compare). Speed? Flexibility?
> Etc. FITS is generally much less flexible. However, it is archival.
> Something that HDF has a harder time claiming. And it is very well
> entrenched in astronomy.
FITS has certain limitations, which people seem to work their way
around in different ways. For example, RXTE data processing produces
FITS files which generally use a sui generis data description language
to describe the layout of bitfields in its tables. Is there a nice
page summarizing the capabilities of HDF5, for comparison?

Thanks,
Anne

From robert.kern at gmail.com Fri Apr 20 19:34:06 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 20 Apr 2007 18:34:06 -0500
Subject: [SciPy-user] HDF5 vs FITS
In-Reply-To:
References: <20070420073638.GE28829@clipper.ens.fr>
	<46292206.3010203@gemini.edu>
	<7DDBDD97-339A-42B3-AE55-534DE1A23BB2@stsci.edu>
Message-ID: <46294DEE.1040400@gmail.com>

Anne Archibald wrote:
> On 20/04/07, Perry Greenfield wrote:
>
>> I think that is a bit too broadly posed to answer in any simple way
>> (if you are wondering how HDF and FITS compare). Speed? Flexibility?
>> Etc. FITS is generally much less flexible. However, it is archival.
>> Something that HDF has a harder time claiming. And it is very well
>> entrenched in astronomy.
>
> FITS has certain limitations, which people seem to work their way
> around in different ways. For example, RXTE data processing produces
> FITS files which generally use a sui generis data description language
> to describe the layout of bitfields in its tables. Is there a nice
> page summarizing the capabilities of HDF5, for comparison?

This one is pretty good.

http://hdfgroup.com/whatishdf5.html

It links to this paper which goes more in depth, but is still an
overview of the capabilities (rather than documentation about how
to use the libraries).

http://hdfgroup.com/HDF5/RD100-2002/All_About_HDF5.pdf

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From michelemazzucco at gmail.com Fri Apr 20 19:48:07 2007
From: michelemazzucco at gmail.com (Michele Mazzucco)
Date: Sat, 21 Apr 2007 00:48:07 +0100
Subject: [SciPy-user] scipy compilation succeeds, test fails
Message-ID: <1f38e67c0704201648i51e2447dwf628fc8631d230b4@mail.gmail.com>

Hi all,

I've just built SciPy on an Intel Mac OS 10.4, python 2.5.1, gcc
4.0.1, gfortran 4.3.0 and fftw 3.1.2.
I've followed the instructions described here [1]. I can successfully
install the library, however the call to
scipy.test(1,10)
fails:

======================================================================
ERROR: check_integer (scipy.io.tests.test_array_import.test_read_array)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_array_import.py",
line 55, in check_integer
    from scipy import stats
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/__init__.py",
line 7, in
    from stats import *
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/stats.py",
line 190, in
    import scipy.special as special
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/__init__.py",
line 8, in
    from basic import *
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/basic.py",
line 8, in
    from _cephes import *
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/_cephes.so,
2): Symbol not found: ___dso_handle
  Referenced from:
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/_cephes.so
  Expected in: dynamic lookup


======================================================================
ERROR: check_simple_todense
(scipy.io.tests.test_mmio.test_mmio_coordinate)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mmio.py",
line 151, in check_simple_todense
    b = mmread(fn).todense()
AttributeError: 'numpy.ndarray' object has no attribute 'todense'

----------------------------------------------------------------------
Ran 417 tests in 1.788s

FAILED (errors=2)


The call numpy.test(1,10) succeeds. Any idea?

Thanks,
Michele


[1] http://www.scipy.org/Installing_SciPy/Mac_OS_X

From jturner at gemini.edu Fri Apr 20 20:07:02 2007
From: jturner at gemini.edu (James Turner)
Date: Fri, 20 Apr 2007 20:07:02 -0400
Subject: [SciPy-user] HDF5 vs FITS
In-Reply-To: ce557a360704201625l64884a39w585be8d9df6f2ef9@mail.gmail.com
References: ce557a360704201625l64884a39w585be8d9df6f2ef9@mail.gmail.com
Message-ID: <462955A6.70503@gemini.edu>

> It links to this paper which goes more in depth, but is still an
> overview of the capabilities (rather than documentation about how
> to use the libraries).

Thanks -- that paper (2nd link) is a good reference, including a
format comparison table on pages 8-9. I see the paper claims that
"HDF5 is compatible with all of the competing formats discussed in
Item 10b in that those data models can be expressed in terms of
HDF5.", with one of the "competing formats" being FITS.

Cheers,

James.

From michelemazzucco at gmail.com Fri Apr 20 20:35:20 2007
From: michelemazzucco at gmail.com (Michele Mazzucco)
Date: Sat, 21 Apr 2007 01:35:20 +0100
Subject: [SciPy-user] scipy compilation succeeds, test fails
In-Reply-To: <1f38e67c0704201648i51e2447dwf628fc8631d230b4@mail.gmail.com>
References: <1f38e67c0704201648i51e2447dwf628fc8631d230b4@mail.gmail.com>
Message-ID: <1f38e67c0704201735l3fd27fb9g62e7a9314908c11c@mail.gmail.com>

Hi again,

I've also noticed the problem below, which happens every time I import
the scipy module.
What went wrong during the setup process? There were no error messages.

Sealbook:/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linalg/tests
nmm42$ python test_basic.py
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/numpytest.py:634:
DeprecationWarning: ScipyTest is now called NumpyTest; please update your code
  DeprecationWarning)
Traceback (most recent call last):
  File "test_basic.py", line 28, in
    from linalg import solve,inv,det,lstsq, toeplitz, hankel, tri, triu, tril
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linalg/__init__.py",
line 8, in
    from basic import *
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linalg/basic.py",
line 17, in
    from lapack import get_lapack_funcs
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linalg/lapack.py",
line 17, in
    from scipy.linalg import flapack
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linalg/__init__.py",
line 8, in
    from basic import *
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linalg/basic.py",
line 17, in
    from lapack import get_lapack_funcs
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linalg/lapack.py",
line 17, in
    from scipy.linalg import flapack
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linalg/flapack.so,
2): Symbol not found: ___dso_handle
  Referenced from:
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linalg/flapack.so
  Expected in: dynamic lookup

Sealbook:/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linalg/tests
nmm42$ ls ../
__init__.py        basic.py           cblas.so           flapack.so
interface_gen.py   lapack.pyc         setup.py           __init__.pyc
basic.pyc
clapack.so flinalg.py interface_gen.pyc linalg_version.py setup.pyc _flinalg.so blas.py decomp.py flinalg.pyc iterative.py linalg_version.pyc setup_atlas_version.py _iterative.so blas.pyc decomp.pyc info.py iterative.pyc matfuncs.py setup_atlas_version.pyc atlas_version.so calc_lwork.so fblas.so info.pyc lapack.py matfuncs.pyc tests Thanks, Michele On 4/21/07, Michele Mazzucco wrote: > Hi all, > > I've just built SciPy on an Intel Mac OS 10.4, python 2.5.1, gcc > 4.0.1, gfortran 4.3.0 and fftw 3.1.2. > I've followed the instructions described here [1]. I can successfully > install the library, howevever the call to > scipy.test(1,10) > fails: > > ====================================================================== > ERROR: check_integer (scipy.io.tests.test_array_import.test_read_array) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_array_import.py", > line 55, in check_integer > from scipy import stats > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/__init__.py", > line 7, in > from stats import * > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/stats.py", > line 190, in > import scipy.special as special > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/__init__.py", > line 8, in > from basic import * > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/basic.py", > line 8, in > from _cephes import * > ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/_cephes.so, > 2): Symbol not found: ___dso_handle > Referenced from: > /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/_cephes.so > Expected in: 
dynamic lookup > > > ====================================================================== > ERROR: check_simple_todense (scipy.io.tests.test_mmio.test_mmio_coordinate) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mmio.py", > line 151, in check_simple_todense > b = mmread(fn).todense() > AttributeError: 'numpy.ndarray' object has no attribute 'todense' > > ---------------------------------------------------------------------- > Ran 417 tests in 1.788s > > FAILED (errors=2) > > > > The call numpy.test(1,10) succeeds. Any idea? > > Thanks, > Michele > > > > [1] http://www.scipy.org/Installing_SciPy/Mac_OS_X > From perry at stsci.edu Fri Apr 20 21:54:14 2007 From: perry at stsci.edu (Perry Greenfield) Date: Fri, 20 Apr 2007 21:54:14 -0400 Subject: [SciPy-user] HDF5 vs FITS (was: Fast saving/loading of hugematrices) In-Reply-To: <462943E9.5040304@gemini.edu> Message-ID: James Turner wrote: > Thanks for the interesting summary. My question was a bit vague just > because I was asking more out of curiosity than for a specific > application. > > Actually, in the back of my mind I was wondering how SciPy and STScI > Python might end up playing together... I think SciPy will prove > rather useful and it obviously helps to talk the same language as > other users/developers (so thumbs up for NumPy :-) ). Maybe if there > were a single interface to image/table-type formats, it would allow > more interaction -- but I imagine such a thing might be impractical > to implement. Don't worry too much -- I'm just thinking out loud! > Well, the next release of stsci_python will be based on numpy, and that should be out in early summer. 
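[Editorial note: on the numpy side, one mechanism for exposing the same table-like data through more than one interface already exists: a record array can be a zero-copy view of a structured array's buffer. A minimal sketch with plain numpy and made-up column names, not any STScI or pyfits API:]

```python
import numpy as np

# A small structured array holding table-like data
# (the column names 'flux' and 'id' are invented for illustration).
table = np.zeros(3, dtype=[('flux', '<f8'), ('id', '<i4')])

# A record-array view of the SAME memory: no copy is made,
# and the fields become attributes.
rec = table.view(np.recarray)
rec.flux[:] = 1.5

# Writes through the view are visible in the original array.
print(table['flux'])   # all elements are now 1.5
print(rec.id)          # integer column, still zeros
```

Because both objects alias one buffer, scaled or null-masked FITS semantics would still need a layer on top; the view only shares the raw values.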
As far as tables go, certain aspects of FITS table semantics make compatibility difficult (scaled columns, Null values, etc), though we do plan to make raw numpy record arrays views of the same data buffer available. I don't know if that's what you are referring to here or not. Perry From robert.kern at gmail.com Fri Apr 20 22:18:27 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 20 Apr 2007 21:18:27 -0500 Subject: [SciPy-user] scipy compilation succeeds, test fails In-Reply-To: <1f38e67c0704201648i51e2447dwf628fc8631d230b4@mail.gmail.com> References: <1f38e67c0704201648i51e2447dwf628fc8631d230b4@mail.gmail.com> Message-ID: <46297473.3040209@gmail.com> Michele Mazzucco wrote: > Hi all, > > I've just built SciPy on an Intel Mac OS 10.4, python 2.5.1, gcc > 4.0.1, gfortran 4.3.0 and fftw 3.1.2. > I've followed the instructions described here [1]. I can successfully > install the library, howevever the call to > scipy.test(1,10) > fails: > > ====================================================================== > ERROR: check_integer (scipy.io.tests.test_array_import.test_read_array) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_array_import.py", > line 55, in check_integer > from scipy import stats > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/__init__.py", > line 7, in > from stats import * > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/stats.py", > line 190, in > import scipy.special as special > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/__init__.py", > line 8, in > from basic import * > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/basic.py", > line 8, in > from _cephes import 
* > ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/_cephes.so, > 2): Symbol not found: ___dso_handle > Referenced from: > /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/_cephes.so > Expected in: dynamic lookup

Install cctools: ftp://gcc.gnu.org/pub/gcc/infrastructure/cctools-590.36.dmg

I'm afraid I don't know much of the how or why, but it should fix this particular error.

-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From gael.varoquaux at normalesup.org Sat Apr 21 04:53:50 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sat, 21 Apr 2007 10:53:50 +0200 Subject: [SciPy-user] Fast saving/loading of huge matrices In-Reply-To: References: <1177011032.2543.94.camel@localhost.localdomain> <20070420062420.GC28829@clipper.ens.fr> Message-ID: <20070421085350.GB5974@clipper.ens.fr>

On Fri, Apr 20, 2007 at 04:43:36PM +0000, Pauli Virtanen wrote:
> In a different attempt to make storing stuff in Pytables easier,
> I wrote a library to dump and load any objects directly to HDF5 files
> http://www.iki.fi/pav/software/hdf5pickle/index.html

Do you think this can be used to save data in a way that can be shared between programs? Something a bit more universal than Python's pickle. If so, I vote for inclusion in PyTables.

Gaël

From s.mientki at ru.nl Sat Apr 21 05:28:00 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Sat, 21 Apr 2007 11:28:00 +0200 Subject: [SciPy-user] FIR filter, calculated with Remez exchange algorithm ?
In-Reply-To: <46283E6B.7000705@ieee.org> References: <4626987F.1030300@ru.nl> <4627CEEF.4080002@ru.nl> <46283E6B.7000705@ieee.org> Message-ID: <4629D920.8050008@ru.nl>

Travis Oliphant wrote:
> Stef Mientki wrote:
>> ok, I got the answer (I think)
>>
>> A slightly changed design works perfectly:
>>
>> filt_4 = signal.remez (25, (0, 0.01, 0.2, 0.5), (0.01, 1))
>>
>> The filter length must be odd, because it's a high-pass filter.
>> If the length is even, the response at Nyquist is zero,
>> so my original example
>>
>> filt_4 = signal.remez (24, (0, 0.01, 0.2, 0.49), (0.01, 1))
>>
>> will try to create a transition band between 0.49 and 0.5,
>> which is much too steep for this filter length.
>>
>> I never encountered this problem because MatLab
>> and previous programs I used always corrected this themselves.
>>
>> Now, would it be possible to implement this behaviour in the library
>> (I think it's useful for beginners and previous MatLab users):
>> if last amplitude band = 1 (because it also must be odd for bandstop filters)
>> make N odd
>> else
>> make N even
>>
> I guess the question is should we raise an error or just auto-correct.
> I'm thinking raising an error may help avoid this mis-learning. But,
> then again, if we document that it rounds up to the nearest odd length
> under such conditions, that may suffice.

Thanks for your attention, Travis. I leave it up to the SciPy gurus if and how to solve it; personally I'm going to use a wrapper to solve the problem for the future.

BTW, the problem was not described completely yet. Until now I didn't understand why I had so much trouble designing FIR filters with the Remez exchange algorithm, while it always went very fluently in MatLab. I think I understand now why I was so distracted.
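[Editorial note: the odd/even-length point in the quoted design can be checked numerically. For a linear-phase FIR filter, the response at Nyquist is sum_n h[n]*(-1)**n, and an even-length symmetric (type II) filter forces that sum to exactly zero, so a highpass reaching Nyquist needs an odd tap count. A sketch of the working 25-tap design from the thread; with scipy.signal.remez's default sample rate of 1, band edges run from 0 to 0.5:]

```python
import numpy as np
from scipy import signal

# The odd-length (25-tap) highpass from the thread:
# stopband 0-0.01, passband 0.2-0.5 (Nyquist = 0.5).
taps = signal.remez(25, [0, 0.01, 0.2, 0.5], [0.01, 1])

# Response at Nyquist: H(pi) = sum_n h[n] * (-1)**n.
# For an even number of taps the symmetric impulse response makes
# this sum exactly zero, which is why the 24-tap variant cannot work.
h_nyquist = np.sum(taps * (-1.0) ** np.arange(taps.size))
print(taps.size, abs(h_nyquist))  # 25 taps, gain close to 1 at Nyquist
```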
Coming from MatLab there are a few extra problems:
- The Signal Toolbox manual and MatLab help files contain many errors, but using the fdatool hides this
- the MatLab equivalent "firpm" specifies the filter order (which is Ntap-1) instead of Ntap

The solution in my opinion is simple:
- for bandpass filters, always make the filter length odd
- for differentiators, always make the filter length even

Because FIR filters always have a relatively large length, this is no problem at all. I've written up my notes, with an example of how bad it can be, on my website: http://oase.uci.kun.nl/~mientki/data_www/pic/jalcc/help/jallcc_swb_filters_2.html

cheers, Stef Mientki

From michelemazzucco at gmail.com Sat Apr 21 07:04:01 2007 From: michelemazzucco at gmail.com (Michele Mazzucco) Date: Sat, 21 Apr 2007 12:04:01 +0100 Subject: [SciPy-user] scipy compilation succeeds, test fails In-Reply-To: <46297473.3040209@gmail.com> References: <1f38e67c0704201648i51e2447dwf628fc8631d230b4@mail.gmail.com> <46297473.3040209@gmail.com> Message-ID: <1f38e67c0704210404w2d7141a8sc578e90687a8ff32@mail.gmail.com>

Robert, I've installed cctools as you told me, but unfortunately it still does not work, for the same reason (you can find the log as an attachment). Any idea?

Michele

On 4/21/07, Robert Kern wrote: > Michele Mazzucco wrote: > > Hi all, > > > > I've just built SciPy on an Intel Mac OS 10.4, python 2.5.1, gcc > > 4.0.1, gfortran 4.3.0 and fftw 3.1.2. > > I've followed the instructions described here [1].
I can successfully > > install the library, howevever the call to > > scipy.test(1,10) > > fails: > > > > ====================================================================== > > ERROR: check_integer (scipy.io.tests.test_array_import.test_read_array) > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_array_import.py", > > line 55, in check_integer > > from scipy import stats > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/__init__.py", > > line 7, in > > from stats import * > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/stats.py", > > line 190, in > > import scipy.special as special > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/__init__.py", > > line 8, in > > from basic import * > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/basic.py", > > line 8, in > > from _cephes import * > > ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/_cephes.so, > > 2): Symbol not found: ___dso_handle > > Referenced from: > > /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/_cephes.so > > Expected in: dynamic lookup > > Install cctools: > > ftp://gcc.gnu.org/pub/gcc/infrastructure/cctools-590.36.dmg > > I'm afraid I don't know much of the how or why, but it should fix this > particular error. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth." 
> -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- Python 2.5.1 (r251:54869, Apr 18 2007, 22:08:04) [GCC 4.0.1 (Apple Computer, Inc. build 5367)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import scipy /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/numpytest.py:634: DeprecationWarning: ScipyTest is now called NumpyTest; please update your code DeprecationWarning) >>> scipy.test(1,10) /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/numpytest.py:198: DeprecationWarning: ScipyTestCase is now called NumpyTestCase; please update your code DeprecationWarning) Found 4 tests for scipy.io.array_import Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/optimize/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/ndimage/tests for module Found 397 tests for scipy.ndimage Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/lib/lapack/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linsolve/umfpack/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/maxentropy/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/misc/tests for module Warning: No test file found in 
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/lib/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linalg/tests for module Found 12 tests for scipy.io.mmio Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/signal/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/tests for module Found 4 tests for scipy.io.recaster Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/ndimage/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/misc/tests for module Warning: FAILURE importing tests for /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/sparse/sparse.py:12: ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/sparse/sparsetools.so, 2): Symbol not found: ___dso_handle Referenced from: /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/sparse/sparsetools.so Expected in: dynamic lookup (in ) Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/signal/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/fftpack/tests for 
module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/sparse/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/ndimage/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/ndimage/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linalg/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/cluster/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/interpolate/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/integrate/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/ndimage/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/lib/blas/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/lib/tests for module Warning: No test file found in 
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/ndimage/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/fftpack/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linalg/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/weave/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/misc/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/ndimage/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/misc/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/ndimage/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/signal/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests for module Warning: No test file found in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests for module Warning: No test file found in 
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linsolve/tests for module Found 0 tests for __main__ check_basic (scipy.io.tests.test_array_import.test_numpyio) Don't worry about a warning regarding the number of bytes read. Warning: 1000000 bytes requested, 20 bytes read. ... ok check_complex (scipy.io.tests.test_array_import.test_read_array) ... ok check_float (scipy.io.tests.test_array_import.test_read_array) ... ok check_integer (scipy.io.tests.test_array_import.test_read_array) ... ERROR affine_transform 1 ... ok affine transform 2 ... ok affine transform 3 ... ok affine transform 4 ... ok affine transform 5 ... ok affine transform 6 ... ok affine transform 7 ... ok affine transform 8 ... ok affine transform 9 ... ok affine transform 10 ... ok affine transform 11 ... ok affine transform 12 ... ok affine transform 13 ... ok affine transform 14 ... ok affine transform 15 ... ok affine transform 16 ... ok affine transform 17 ... ok affine transform 18 ... ok affine transform 19 ... ok affine transform 20 ... ok affine transform 21 ... ok binary closing 1 ... ok binary closing 2 ... ok binary dilation 1 ... ok binary dilation 2 ... ok binary dilation 3 ... ok binary dilation 4 ... ok binary dilation 5 ... ok binary dilation 6 ... ok binary dilation 7 ... ok binary dilation 8 ... ok binary dilation 9 ... ok binary dilation 10 ... ok binary dilation 11 ... ok binary dilation 12 ... ok binary dilation 13 ... ok binary dilation 14 ... ok binary dilation 15 ... ok binary dilation 16 ... ok binary dilation 17 ... ok binary dilation 18 ... ok binary dilation 19 ... ok binary dilation 20 ... ok binary dilation 21 ... ok binary dilation 22 ... ok binary dilation 23 ... ok binary dilation 24 ... ok binary dilation 25 ... ok binary dilation 26 ... ok binary dilation 27 ... ok binary dilation 28 ... ok binary dilation 29 ... ok binary dilation 30 ... ok binary dilation 31 ... ok binary dilation 32 ... ok binary dilation 33 ... 
ok binary dilation 34 ... ok binary dilation 35 ... ok binary erosion 1 ... ok binary erosion 2 ... ok binary erosion 3 ... ok binary erosion 4 ... ok binary erosion 5 ... ok binary erosion 6 ... ok binary erosion 7 ... ok binary erosion 8 ... ok binary erosion 9 ... ok binary erosion 10 ... ok binary erosion 11 ... ok binary erosion 12 ... ok binary erosion 13 ... ok binary erosion 14 ... ok binary erosion 15 ... ok binary erosion 16 ... ok binary erosion 17 ... ok binary erosion 18 ... ok binary erosion 19 ... ok binary erosion 20 ... ok binary erosion 21 ... ok binary erosion 22 ... ok binary erosion 23 ... ok binary erosion 24 ... ok binary erosion 25 ... ok binary erosion 26 ... ok binary erosion 27 ... ok binary erosion 28 ... ok binary erosion 29 ... ok binary erosion 30 ... ok binary erosion 31 ... ok binary erosion 32 ... ok binary erosion 33 ... ok binary erosion 34 ... ok binary erosion 35 ... ok binary erosion 36 ... ok binary fill holes 1 ... ok binary fill holes 2 ... ok binary fill holes 3 ... ok binary opening 1 ... ok binary opening 2 ... ok binary propagation 1 ... ok binary propagation 2 ... ok black tophat 1 ... ok black tophat 2 ... ok center of mass 1 ... ok center of mass 2 ... ok center of mass 3 ... ok center of mass 4 ... ok center of mass 5 ... ok center of mass 6 ... ok center of mass 7 ... ok center of mass 8 ... ok center of mass 9 ... ok correlation 1 ... ok correlation 2 ... ok correlation 3 ... ok correlation 4 ... ok correlation 5 ... ok correlation 6 ... ok correlation 7 ... ok correlation 8 ... ok correlation 9 ... ok correlation 10 ... ok correlation 11 ... ok correlation 12 ... ok correlation 13 ... ok correlation 14 ... ok correlation 15 ... ok correlation 16 ... ok correlation 17 ... ok correlation 18 ... ok correlation 19 ... ok correlation 20 ... ok correlation 21 ... ok correlation 22 ... ok correlation 23 ... ok correlation 24 ... ok correlation 25 ... ok brute force distance transform 1 ... 
ok brute force distance transform 2 ... ok brute force distance transform 3 ... ok brute force distance transform 4 ... ok brute force distance transform 5 ... ok brute force distance transform 6 ... ok chamfer type distance transform 1 ... ok chamfer type distance transform 2 ... ok chamfer type distance transform 3 ... ok euclidean distance transform 1 ... ok euclidean distance transform 2 ... ok euclidean distance transform 3 ... ok euclidean distance transform 4 ... ok line extension 1 ... ok line extension 2 ... ok line extension 3 ... ok line extension 4 ... ok line extension 5 ... ok line extension 6 ... ok line extension 7 ... ok line extension 8 ... ok line extension 9 ... ok line extension 10 ... ok extrema 1 ... ok extrema 2 ... ok extrema 3 ... ok extrema 4 ... ok find_objects 1 ... ok find_objects 2 ... ok find_objects 3 ... ok find_objects 4 ... ok find_objects 5 ... ok find_objects 6 ... ok find_objects 7 ... ok find_objects 8 ... ok find_objects 9 ... ok ellipsoid fourier filter for complex transforms 1 ... ok ellipsoid fourier filter for real transforms 1 ... ok gaussian fourier filter for complex transforms 1 ... ok gaussian fourier filter for real transforms 1 ... ok shift filter for complex transforms 1 ... ok shift filter for real transforms 1 ... ok uniform fourier filter for complex transforms 1 ... ok uniform fourier filter for real transforms 1 ... ok gaussian filter 1 ... ok gaussian filter 2 ... ok gaussian filter 3 ... ok gaussian filter 4 ... ok gaussian filter 5 ... ok gaussian filter 6 ... ok gaussian gradient magnitude filter 1 ... ok gaussian gradient magnitude filter 2 ... ok gaussian laplace filter 1 ... ok gaussian laplace filter 2 ... ok generation of a binary structure 1 ... ok generation of a binary structure 2 ... ok generation of a binary structure 3 ... ok generation of a binary structure 4 ... ok generic filter 1 ... ok generic 1d filter 1 ... ok generic gradient magnitude 1 ... ok generic laplace filter 1 ... 
ok geometric transform 1 ... ok geometric transform 2 ... ok geometric transform 3 ... ok geometric transform 4 ... ok geometric transform 5 ... ok geometric transform 6 ... ok geometric transform 7 ... ok geometric transform 8 ... ok geometric transform 10 ... ok geometric transform 13 ... ok geometric transform 14 ... ok geometric transform 15 ... ok geometric transform 16 ... ok geometric transform 17 ... ok geometric transform 18 ... ok geometric transform 19 ... ok geometric transform 20 ... ok geometric transform 21 ... ok geometric transform 22 ... ok geometric transform 23 ... ok geometric transform 24 ... ok grey closing 1 ... ok grey closing 2 ... ok grey dilation 1 ... ok grey dilation 2 ... ok grey dilation 3 ... ok grey erosion 1 ... ok grey erosion 2 ... ok grey erosion 3 ... ok grey opening 1 ... ok grey opening 2 ... ok histogram 1 ... ok histogram 2 ... ok histogram 3 ... ok binary hit-or-miss transform 1 ... ok binary hit-or-miss transform 2 ... ok binary hit-or-miss transform 3 ... ok iterating a structure 1 ... ok iterating a structure 2 ... ok iterating a structure 3 ... ok label 1 ... ok label 2 ... ok label 3 ... ok label 4 ... ok label 5 ... ok label 6 ... ok label 7 ... ok label 8 ... ok label 9 ... ok label 10 ... ok label 11 ... ok label 12 ... ok label 13 ... ok laplace filter 1 ... ok laplace filter 2 ... ok map coordinates 1 ... ok map coordinates 2 ... ok maximum 1 ... ok maximum 2 ... ok maximum 3 ... ok maximum 4 ... ok maximum filter 1 ... ok maximum filter 2 ... ok maximum filter 3 ... ok maximum filter 4 ... ok maximum filter 5 ... ok maximum filter 6 ... ok maximum filter 7 ... ok maximum filter 8 ... ok maximum filter 9 ... ok maximum position 1 ... ok maximum position 2 ... ok maximum position 3 ... ok maximum position 4 ... ok maximum position 5 ... ok maximum position 6 ... ok mean 1 ... ok mean 2 ... ok mean 3 ... ok mean 4 ... ok minimum 1 ... ok minimum 2 ... ok minimum 3 ... ok minimum 4 ... ok minimum filter 1 ... 
ok minimum filter 2 ... ok minimum filter 3 ... ok minimum filter 4 ... ok minimum filter 5 ... ok minimum filter 6 ... ok minimum filter 7 ... ok minimum filter 8 ... ok minimum filter 9 ... ok minimum position 1 ... ok minimum position 2 ... ok minimum position 3 ... ok minimum position 4 ... ok minimum position 5 ... ok minimum position 6 ... ok minimum position 7 ... ok morphological gradient 1 ... ok morphological gradient 2 ... ok morphological laplace 1 ... ok morphological laplace 2 ... ok prewitt filter 1 ... ok prewitt filter 2 ... ok prewitt filter 3 ... ok prewitt filter 4 ... ok rank filter 1 ... ok rank filter 2 ... ok rank filter 3 ... ok rank filter 4 ... ok rank filter 5 ... ok rank filter 6 ... ok rank filter 7 ... ok median filter 8 ... ok rank filter 9 ... ok rank filter 10 ... ok rank filter 11 ... ok rank filter 12 ... ok rank filter 13 ... ok rank filter 14 ... ok rotate 1 ... ok rotate 2 ... ok rotate 3 ... ok rotate 4 ... ok rotate 5 ... ok rotate 6 ... ok rotate 7 ... ok rotate 8 ... ok shift 1 ... ok shift 2 ... ok shift 3 ... ok shift 4 ... ok shift 5 ... ok shift 6 ... ok shift 7 ... ok shift 8 ... ok shift 9 ... ok sobel filter 1 ... ok sobel filter 2 ... ok sobel filter 3 ... ok sobel filter 4 ... ok spline filter 1 ... ok spline filter 2 ... ok spline filter 3 ... ok spline filter 4 ... ok spline filter 5 ... ok standard deviation 1 ... ok standard deviation 2 ... ok standard deviation 3 ... ok standard deviation 4 ... ok standard deviation 5 ... ok standard deviation 6 ... ok sum 1 ... ok sum 2 ... ok sum 3 ... ok sum 4 ... ok sum 5 ... ok sum 6 ... ok sum 7 ... ok sum 8 ... ok sum 9 ... ok sum 10 ... ok sum 11 ... ok sum 12 ... ok uniform filter 1 ... ok uniform filter 2 ... ok uniform filter 3 ... ok uniform filter 4 ... ok uniform filter 5 ... ok uniform filter 6 ... ok variance 1 ... ok variance 2 ... ok variance 3 ... ok variance 4 ... ok variance 5 ... ok variance 6 ... ok watershed_ift 1 ... ok watershed_ift 2 ... 
ok watershed_ift 3 ... ok watershed_ift 4 ... ok watershed_ift 5 ... ok watershed_ift 6 ... ok watershed_ift 7 ... ok white tophat 1 ... ok white tophat 2 ... ok zoom 1 ... ok zoom 2 ... ok zoom 3 ... ok zoom 4 ... ok zoom 5 ... ok zoom 6 ... ok zoom 7 ... ok zoom 8 ... ok zoom 9 ... ok zoom 10 ... ok zoom by affine transformation 1 ... ok check_random_rect_real (scipy.io.tests.test_mmio.test_mmio_array) ... ok check_random_symmetric_real (scipy.io.tests.test_mmio.test_mmio_array) ... ok check_simple (scipy.io.tests.test_mmio.test_mmio_array) ... ok check_simple_complex (scipy.io.tests.test_mmio.test_mmio_array) ... ok check_simple_hermitian (scipy.io.tests.test_mmio.test_mmio_array) ... ok check_simple_real (scipy.io.tests.test_mmio.test_mmio_array) ... ok check_simple_rectangular (scipy.io.tests.test_mmio.test_mmio_array) ... ok check_simple_rectangular_real (scipy.io.tests.test_mmio.test_mmio_array) ... ok check_simple_skew_symmetric (scipy.io.tests.test_mmio.test_mmio_array) ... ok check_simple_skew_symmetric_float (scipy.io.tests.test_mmio.test_mmio_array) ... ok check_simple_symmetric (scipy.io.tests.test_mmio.test_mmio_array) ... ok check_simple_todense (scipy.io.tests.test_mmio.test_mmio_coordinate) ... ERROR test_downcasts (scipy.io.tests.test_recaster.test_recaster) ... ok test_init (scipy.io.tests.test_recaster.test_recaster) ... ok test_smallest_int_sctype (scipy.io.tests.test_recaster.test_recaster) ... ok test_smallest_same_kind (scipy.io.tests.test_recaster.test_recaster) ... 
ok ====================================================================== ERROR: check_integer (scipy.io.tests.test_array_import.test_read_array) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_array_import.py", line 55, in check_integer from scipy import stats File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/__init__.py", line 7, in <module> from stats import * File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/stats.py", line 190, in <module> import scipy.special as special File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/__init__.py", line 8, in <module> from basic import * File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/basic.py", line 8, in <module> from _cephes import * ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/_cephes.so, 2): Symbol not found: ___dso_handle Referenced from: /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/special/_cephes.so Expected in: dynamic lookup ====================================================================== ERROR: check_simple_todense (scipy.io.tests.test_mmio.test_mmio_coordinate) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mmio.py", line 151, in check_simple_todense b = mmread(fn).todense() AttributeError: 'numpy.ndarray' object has no attribute 'todense' ---------------------------------------------------------------------- Ran 417 tests in 2.013s FAILED (errors=2) >>> From pav at iki.fi Sat Apr 21 07:19:56 2007 From: pav at iki.fi (Pauli Virtanen)
Date: Sat, 21 Apr 2007 11:19:56 +0000 (UTC) Subject: [SciPy-user] Fast saving/loading of huge matrices References: <1177011032.2543.94.camel@localhost.localdomain> <20070420062420.GC28829@clipper.ens.fr> <20070421085350.GB5974@clipper.ens.fr> Message-ID: Sat, 21 Apr 2007 10:53:50 +0200, Gael Varoquaux wrote: > On Fri, Apr 20, 2007 at 04:43:36PM +0000, Pauli Virtanen wrote: >> In a different attempt to make storing stuff in Pytables easier, I >> wrote a library to dump and load any objects directly to HDF5 files > >> http://www.iki.fi/pav/software/hdf5pickle/index.html > > Do you think this can be used to save data in a way that can be used to > share it between programs? Something a bit more universal than > python's pickle. > If so I vote for inclusion in pytables. I guess that between different Python programs both using hdf5pickle, the sharing characteristics are the same as for Python pickle: you can share objects if their class is present in both programs. Sharing data between a Python program A using hdf5pickle and a (possibly non-Python) program B not using it works at least in the direction A -> B for the data that can be saved in a native HDF5 format (e.g. ints, floats, arrays, dicts, __dicts__ of objects, etc.). Direction B -> A is trickier, as hdf5pickle currently expects to find a 'pickletype' attribute describing what type of object is stored in a node. A simple fallback should be easy to implement: map groups to dicts and everything else to arrays. -- Pauli From ryanlists at gmail.com Sat Apr 21 11:24:08 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Sat, 21 Apr 2007 10:24:08 -0500 Subject: [SciPy-user] control theory? In-Reply-To: <462928D3.7040002@astraw.com> References: <462928D3.7040002@astraw.com> Message-ID: I have tinkered with adding to signals and creating my own. I teach and do research in this area, so I am definitely into developing something. I don't think anything exists beyond what is in signals.
If it does, I would like to know about it. I created some crude root locus stuff a while back. Knowing me, it is likely hackish and poorly documented. Most of my work is based on Bode plots and Bode-based compensator design. On 4/20/07, Andrew Straw wrote: > Hi all, > > I'm looking for Python-centric software (and probably even better, with > tutorials) that lets one do various control-theory things. In > particular I'm interested in complex frequency plane tools such as root > locus plots. Any suggestions? I see that scipy.signal might have a few > starting points... > > -Andrew > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From robert.kern at gmail.com Sat Apr 21 14:22:38 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 21 Apr 2007 13:22:38 -0500 Subject: [SciPy-user] scipy compilation succeeds, test fails In-Reply-To: <1f38e67c0704210404w2d7141a8sc578e90687a8ff32@mail.gmail.com> References: <1f38e67c0704201648i51e2447dwf628fc8631d230b4@mail.gmail.com> <46297473.3040209@gmail.com> <1f38e67c0704210404w2d7141a8sc578e90687a8ff32@mail.gmail.com> Message-ID: <462A566E.1090501@gmail.com> Michele Mazzucco wrote: > Robert, > > I've installed cctools as you told me, but unfortunately it still does > not work for the same reason (you can find the log as an attachment). > Any idea? I'm sorry, I forgot to mention that you need to rebuild after installing cctools. Delete the build/ directory first to make sure you have a clean build. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
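[Editor's note: Ryan's Bode-based workflow above boils down to evaluating a transfer function H(s) at s = j*omega over a grid of frequencies and reading off magnitude and phase. A minimal standard-library sketch of that evaluation is below; the first-order lag H(s) = 1/(s + 1) is an illustrative choice, not something from this thread, and no scipy.signal is assumed.]

```python
import cmath
import math

def freq_response(num, den, omegas):
    """Evaluate H(jw) = num(jw) / den(jw) for polynomial coefficient
    lists given highest power first. Returns a list of
    (magnitude_dB, phase_degrees) pairs -- the raw material of a
    Bode plot."""
    def polyval(coeffs, x):
        # Horner's rule for complex polynomial evaluation.
        result = 0j
        for c in coeffs:
            result = result * x + c
        return result

    out = []
    for w in omegas:
        h = polyval(num, 1j * w) / polyval(den, 1j * w)
        out.append((20.0 * math.log10(abs(h)),
                    math.degrees(cmath.phase(h))))
    return out

# First-order lag H(s) = 1/(s + 1): at the corner frequency
# w = 1 rad/s the gain is about -3 dB and the phase is -45 degrees.
resp = freq_response([1.0], [1.0, 1.0], [0.1, 1.0, 10.0])
```

From pairs like these one can sketch the asymptotic Bode plot by hand or hand them to any plotting tool; the same `freq_response` works for any rational transfer function given as numerator and denominator coefficient lists.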
-- Umberto Eco From michelemazzucco at gmail.com Sat Apr 21 14:49:02 2007 From: michelemazzucco at gmail.com (Michele Mazzucco) Date: Sat, 21 Apr 2007 19:49:02 +0100 Subject: [SciPy-user] scipy compilation succeeds, test fails In-Reply-To: <462A566E.1090501@gmail.com> References: <1f38e67c0704201648i51e2447dwf628fc8631d230b4@mail.gmail.com> <46297473.3040209@gmail.com> <1f38e67c0704210404w2d7141a8sc578e90687a8ff32@mail.gmail.com> <462A566E.1090501@gmail.com> Message-ID: <1f38e67c0704211149t411db931t598d65a6d3519f56@mail.gmail.com> Robert, I did what you told me, however now I get even more failures when I test the library: ====================================================================== ERROR: check loadmat case 3dmatrix ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case matdict = loadmat(file_name) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat matfile_dict = MR.get_variables() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables mdict = self.file_header() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header hdict['__header__'] = hdr['description'].strip(' \t\n\000') AttributeError: 'numpy.ndarray' object has no attribute 'strip' ====================================================================== ERROR: check loadmat case cell ---------------------------------------------------------------------- Traceback (most recent call last): File 
"/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case matdict = loadmat(file_name) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat matfile_dict = MR.get_variables() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables mdict = self.file_header() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header hdict['__header__'] = hdr['description'].strip(' \t\n\000') AttributeError: 'numpy.ndarray' object has no attribute 'strip' ====================================================================== ERROR: check loadmat case cellnest ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case matdict = loadmat(file_name) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat matfile_dict = MR.get_variables() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables mdict = self.file_header() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header hdict['__header__'] = hdr['description'].strip(' \t\n\000') AttributeError: 'numpy.ndarray' object has no 
attribute 'strip' ====================================================================== ERROR: check loadmat case complex ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case matdict = loadmat(file_name) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat matfile_dict = MR.get_variables() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables mdict = self.file_header() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header hdict['__header__'] = hdr['description'].strip(' \t\n\000') AttributeError: 'numpy.ndarray' object has no attribute 'strip' ====================================================================== ERROR: check loadmat case double ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case matdict = loadmat(file_name) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat matfile_dict = MR.get_variables() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables mdict = self.file_header() File 
"/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header hdict['__header__'] = hdr['description'].strip(' \t\n\000') AttributeError: 'numpy.ndarray' object has no attribute 'strip' ====================================================================== ERROR: check loadmat case emptycell ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case matdict = loadmat(file_name) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat matfile_dict = MR.get_variables() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables mdict = self.file_header() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header hdict['__header__'] = hdr['description'].strip(' \t\n\000') AttributeError: 'numpy.ndarray' object has no attribute 'strip' ====================================================================== ERROR: check loadmat case matrix ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case matdict = loadmat(file_name) File 
"/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat matfile_dict = MR.get_variables() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables mdict = self.file_header() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header hdict['__header__'] = hdr['description'].strip(' \t\n\000') AttributeError: 'numpy.ndarray' object has no attribute 'strip' ====================================================================== ERROR: check loadmat case minus ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case matdict = loadmat(file_name) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat matfile_dict = MR.get_variables() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables mdict = self.file_header() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header hdict['__header__'] = hdr['description'].strip(' \t\n\000') AttributeError: 'numpy.ndarray' object has no attribute 'strip' ====================================================================== ERROR: check loadmat case multi ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", 
line 85, in cc self._check_case(name, files, expected) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case matdict = loadmat(file_name) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat matfile_dict = MR.get_variables() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables mdict = self.file_header() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header hdict['__header__'] = hdr['description'].strip(' \t\n\000') AttributeError: 'numpy.ndarray' object has no attribute 'strip' ====================================================================== ERROR: check loadmat case object ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case matdict = loadmat(file_name) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat matfile_dict = MR.get_variables() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables mdict = self.file_header() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header hdict['__header__'] = hdr['description'].strip(' \t\n\000') AttributeError: 'numpy.ndarray' object has no attribute 'strip' ====================================================================== ERROR: check loadmat case 
onechar ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case matdict = loadmat(file_name) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat matfile_dict = MR.get_variables() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables mdict = self.file_header() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header hdict['__header__'] = hdr['description'].strip(' \t\n\000') AttributeError: 'numpy.ndarray' object has no attribute 'strip' ====================================================================== ERROR: check loadmat case sparse ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case matdict = loadmat(file_name) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat matfile_dict = MR.get_variables() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables mdict = self.file_header() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in 
file_header hdict['__header__'] = hdr['description'].strip(' \t\n\000') AttributeError: 'numpy.ndarray' object has no attribute 'strip' ====================================================================== ERROR: check loadmat case sparsecomplex ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case matdict = loadmat(file_name) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat matfile_dict = MR.get_variables() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables mdict = self.file_header() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header hdict['__header__'] = hdr['description'].strip(' \t\n\000') AttributeError: 'numpy.ndarray' object has no attribute 'strip' ====================================================================== ERROR: check loadmat case string ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case matdict = loadmat(file_name) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat matfile_dict = MR.get_variables() File 
"/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables mdict = self.file_header() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header hdict['__header__'] = hdr['description'].strip(' \t\n\000') AttributeError: 'numpy.ndarray' object has no attribute 'strip' ====================================================================== ERROR: check loadmat case stringarray ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case matdict = loadmat(file_name) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat matfile_dict = MR.get_variables() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables mdict = self.file_header() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header hdict['__header__'] = hdr['description'].strip(' \t\n\000') AttributeError: 'numpy.ndarray' object has no attribute 'strip' ====================================================================== ERROR: check loadmat case struct ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File 
"/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case matdict = loadmat(file_name) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat matfile_dict = MR.get_variables() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables mdict = self.file_header() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header hdict['__header__'] = hdr['description'].strip(' \t\n\000') AttributeError: 'numpy.ndarray' object has no attribute 'strip' ====================================================================== ERROR: check loadmat case structarr ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case matdict = loadmat(file_name) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat matfile_dict = MR.get_variables() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables mdict = self.file_header() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header hdict['__header__'] = hdr['description'].strip(' \t\n\000') AttributeError: 'numpy.ndarray' object has no attribute 'strip' ====================================================================== ERROR: check loadmat case structnest 
---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case matdict = loadmat(file_name) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat matfile_dict = MR.get_variables() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables mdict = self.file_header() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header hdict['__header__'] = hdr['description'].strip(' \t\n\000') AttributeError: 'numpy.ndarray' object has no attribute 'strip' ====================================================================== ERROR: check loadmat case unicode ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case matdict = loadmat(file_name) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat matfile_dict = MR.get_variables() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables mdict = self.file_header() File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header 
hdict['__header__'] = hdr['description'].strip(' \t\n\000') AttributeError: 'numpy.ndarray' object has no attribute 'strip' ====================================================================== FAIL: check_cosine_weighted_infinite (scipy.integrate.tests.test_quadpack.test_quad) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/integrate/tests/test_quadpack.py", line 66, in check_cosine_weighted_infinite a/(a**2 + ome**2)) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/integrate/tests/test_quadpack.py", line 9, in assert_quad assert abs(value-tabledValue) < err, (value, tabledValue, err) AssertionError: (0.21667686818858298, 0.21663778162911612, 6.5911747520269308e-10) ====================================================================== FAIL: check_sine_weighted_finite (scipy.integrate.tests.test_quadpack.test_quad) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/integrate/tests/test_quadpack.py", line 46, in check_sine_weighted_finite (20*sin(ome)-ome*cos(ome)+ome*exp(-20))/(20**2 + ome**2)) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/integrate/tests/test_quadpack.py", line 9, in assert_quad assert abs(value-tabledValue) < err, (value, tabledValue, err) AssertionError: (-2.0010659195591835e-06, -0.0266069863325, 1.4302337979274612e-14) ====================================================================== FAIL: check_sine_weighted_infinite (scipy.integrate.tests.test_quadpack.test_quad) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/integrate/tests/test_quadpack.py", line 56, in check_sine_weighted_infinite ome/(a**2 + ome**2)) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/integrate/tests/test_quadpack.py", line 9, in assert_quad assert abs(value-tabledValue) < err, (value, tabledValue, err) AssertionError: (0.12001853532338669, 0.12, 4.3115018629240958e-10) ====================================================================== FAIL: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/lib/blas/tests/test_blas.py", line 76, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", line 156, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: ACTUAL: 1.0340576422315888e-36j DESIRED: (-9+2j) ====================================================================== FAIL: check_dot (scipy.linalg.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linalg/tests/test_blas.py", line 75, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", line 156, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: ACTUAL: 1.0340576422315888e-36j DESIRED: (-9+2j) ---------------------------------------------------------------------- Ran 1596 tests in 6.405s FAILED 
(failures=5, errors=19) I guess this is not the expected behaviour, is it? Michele On 4/21/07, Robert Kern wrote: > Michele Mazzucco wrote: > > Robert, > > > > I've installed cctools as you told me, but unfortunately it still does > > not work for the same reason (you can find the log as an attachment). > > Any idea? > > I'm sorry, I forgot to mention that you need to rebuild after installing > cctools. Delete the build/ directory first to make sure you have a clean build. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From lists.steve at arachnedesign.net Sat Apr 21 15:12:48 2007 From: lists.steve at arachnedesign.net (Steve Lianoglou) Date: Sat, 21 Apr 2007 15:12:48 -0400 Subject: [SciPy-user] scipy compilation succeeds, test fails In-Reply-To: <1f38e67c0704211149t411db931t598d65a6d3519f56@mail.gmail.com> References: <1f38e67c0704201648i51e2447dwf628fc8631d230b4@mail.gmail.com> <46297473.3040209@gmail.com> <1f38e67c0704210404w2d7141a8sc578e90687a8ff32@mail.gmail.com> <462A566E.1090501@gmail.com> <1f38e67c0704211149t411db931t598d65a6d3519f56@mail.gmail.com> Message-ID: <8E994821-B3F8-495A-BC64-E3B8A4AA6D17@arachnedesign.net> I remember reading mention of some recently changed behavior in the matlab io functions, and I think it's fixed in the latest SVN. If you haven't already, try rebuilding numpy from the latest SVN checkout, and then scipy from the latest SVN again.
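[Editor's note: the repeated AttributeError above happens because `hdr['description']`, which an older numpy returned as a plain string, now comes back as an array that has no `.strip` method. A generic defensive pattern is to coerce the value to `str` before calling string methods. The sketch below uses a hypothetical stand-in class for the array, so it runs without numpy; it is not the actual scipy fix.]

```python
class FakeNdarrayField:
    """Stand-in for the numpy array that mio5.py unexpectedly
    received; real numpy 0-d arrays and scalars expose .item() to
    extract the underlying Python value."""
    def __init__(self, payload):
        self._payload = payload

    def item(self):
        return self._payload

def as_text(value):
    """Coerce a str-or-array-wrapped-str to a plain str so that
    string methods like .strip() are safe to call."""
    if isinstance(value, str):
        return value
    item = getattr(value, "item", None)  # numpy-style scalar extraction
    if callable(item):
        return as_text(item())
    return str(value)

# Mimic the failing line in mio5.py's file_header(), but defensively.
hdr_description = FakeNdarrayField("MATLAB 5.0 MAT-file \t\n\000")
header = as_text(hdr_description).strip(' \t\n\000')
```

With a coercion step like this, the same code path works whether the field arrives as a plain string (old numpy) or as an array-like wrapper (newer numpy), which is the kind of version skew Steve's "rebuild both from SVN" advice resolves.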
-steve

On Apr 21, 2007, at 2:49 PM, Michele Mazzucco wrote:

> Robert,
>
> I did what you told me, however now I get even more failures when I
> test the library:
>
> ======================================================================
> ERROR: check loadmat case 3dmatrix
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc
>     self._check_case(name, files, expected)
>   File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case
>     matdict = loadmat(file_name)
>   File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat
>     matfile_dict = MR.get_variables()
>   File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables
>     mdict = self.file_header()
>   File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header
>     hdict['__header__'] = hdr['description'].strip(' \t\n\000')
> AttributeError: 'numpy.ndarray' object has no attribute 'strip'
>
> [... the identical traceback repeats for the loadmat cases cell,
> cellnest, complex, double, emptycell, matrix, minus, multi, object,
> onechar, sparse, sparsecomplex, string, stringarray, struct,
> structarr, structnest and unicode ...]
>
> ======================================================================
> FAIL: check_cosine_weighted_infinite
> (scipy.integrate.tests.test_quadpack.test_quad)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/integrate/tests/test_quadpack.py", line 66, in check_cosine_weighted_infinite
>     a/(a**2 + ome**2))
>   File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/integrate/tests/test_quadpack.py", line 9, in assert_quad
>     assert abs(value-tabledValue) < err, (value, tabledValue, err)
> AssertionError: (0.21667686818858298, 0.21663778162911612,
> 6.5911747520269308e-10)
>
> ======================================================================
> FAIL: check_sine_weighted_finite
> (scipy.integrate.tests.test_quadpack.test_quad)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/integrate/tests/test_quadpack.py", line 46, in check_sine_weighted_finite
>     (20*sin(ome)-ome*cos(ome)+ome*exp(-20))/(20**2 + ome**2))
>   File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/integrate/tests/test_quadpack.py", line 9, in assert_quad
>     assert abs(value-tabledValue) < err, (value, tabledValue, err)
> AssertionError: (-2.0010659195591835e-06, -0.0266069863325,
> 1.4302337979274612e-14)
>
> ======================================================================
> FAIL: check_sine_weighted_infinite
> (scipy.integrate.tests.test_quadpack.test_quad)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/integrate/tests/test_quadpack.py", line 56, in check_sine_weighted_infinite
>     ome/(a**2 + ome**2))
>   File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/integrate/tests/test_quadpack.py", line 9, in assert_quad
>     assert abs(value-tabledValue) < err, (value, tabledValue, err)
> AssertionError: (0.12001853532338669, 0.12, 4.3115018629240958e-10)
>
> ======================================================================
> FAIL: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/lib/blas/tests/test_blas.py", line 76, in check_dot
>     assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j)
>   File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", line 156, in assert_almost_equal
>     assert round(abs(desired - actual),decimal) == 0, msg
> AssertionError:
> Items are not equal:
>  ACTUAL: 1.0340576422315888e-36j
>  DESIRED: (-9+2j)
>
> ======================================================================
> FAIL: check_dot (scipy.linalg.tests.test_blas.test_fblas1_simple)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linalg/tests/test_blas.py", line 75, in check_dot
>     assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j)
>   File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", line 156, in assert_almost_equal
>     assert round(abs(desired - actual),decimal) == 0, msg
> AssertionError:
> Items are not equal:
>  ACTUAL: 1.0340576422315888e-36j
>  DESIRED: (-9+2j)
>
> ----------------------------------------------------------------------
> Ran 1596 tests in 6.405s
>
> FAILED (failures=5, errors=19)
>
> I guess this is not the expected behaviour, is it?
>
> Michele
>
> On 4/21/07, Robert Kern wrote:
>> Michele Mazzucco wrote:
>>> Robert,
>>>
>>> I've installed cctools as you told me, but unfortunately it still does
>>> not work for the same reason (you can find the log as attachment).
>>> Any idea?
>>
>> I'm sorry, I forgot to mention that you need to rebuild after installing
>> cctools. Delete the build/ directory first to make sure you have a clean build.
>>
>> --
>> Robert Kern
>>
>> "I have come to believe that the whole world is an enigma, a harmless enigma
>> that is made terrible by our own mad attempt to interpret it as though it had
>> an underlying truth."
>>   -- Umberto Eco
>> _______________________________________________
>> SciPy-user mailing list
>> SciPy-user at scipy.org
>> http://projects.scipy.org/mailman/listinfo/scipy-user
>>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From michelemazzucco at gmail.com  Sat Apr 21 15:33:07 2007
From: michelemazzucco at gmail.com (Michele Mazzucco)
Date: Sat, 21 Apr 2007 20:33:07 +0100
Subject: [SciPy-user] scipy compilation succeeds, test fails
In-Reply-To: <8E994821-B3F8-495A-BC64-E3B8A4AA6D17@arachnedesign.net>
References: <1f38e67c0704201648i51e2447dwf628fc8631d230b4@mail.gmail.com>
	<46297473.3040209@gmail.com>
	<1f38e67c0704210404w2d7141a8sc578e90687a8ff32@mail.gmail.com>
	<462A566E.1090501@gmail.com>
	<1f38e67c0704211149t411db931t598d65a6d3519f56@mail.gmail.com>
	<8E994821-B3F8-495A-BC64-E3B8A4AA6D17@arachnedesign.net>
Message-ID: <1f38e67c0704211233u22c911e5w2d1ff87018be5778@mail.gmail.com>

Hi Steve,

do I need to rebuild numpy as well? numpy's test is ok.

Michele

On 4/21/07, Steve Lianoglou wrote:
> I remember reading mention of some recent behavior regarding the
> matlab io functions, and I think it's fixed in the latest SVN branch.
>
> If you haven't already, try rebuilding numpy from the latest SVN
> checkout, and again scipy from the latest SVN.
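Before rebuilding, it can help to confirm which numpy and which version the interpreter actually imports, since a scipy built against a different numpy than the one on `sys.path` is a common source of spurious test failures. A quick check in plain Python (nothing scipy-specific):

```python
import numpy

# Which numpy does this interpreter pick up, and from where?
# If __file__ points somewhere other than the tree you just rebuilt,
# the old install is still shadowing the new one.
print(numpy.__version__)
print(numpy.__file__)
# numpy.test() would then exercise that same install directly.
```

If the reported path is a stale site-packages copy, deleting the old `build/` directory and reinstalling (as Robert Kern suggested earlier in the thread) brings the two back in sync.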
>
> -steve
>
> On Apr 21, 2007, at 2:49 PM, Michele Mazzucco wrote:
>
> > Robert,
> >
> > I did what you told me, however now I get even more failures when I
> > test the library:
> >
> > ======================================================================
> > ERROR: check loadmat case 3dmatrix
> > ----------------------------------------------------------------------
> > Traceback (most recent call last):
> >   File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc
> >     self._check_case(name, files, expected)
> >   File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case
> >     matdict = loadmat(file_name)
> >   File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio.py", line 96, in loadmat
> >     matfile_dict = MR.get_variables()
> >   File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/miobase.py", line 269, in get_variables
> >     mdict = self.file_header()
> >   File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/mio5.py", line 510, in file_header
> >     hdict['__header__'] = hdr['description'].strip(' \t\n\000')
> > AttributeError: 'numpy.ndarray' object has no attribute 'strip'
> >
> > [... the identical traceback repeats for the loadmat cases cell,
> > cellnest, complex, double and emptycell ...]
> >
> > ======================================================================
> > ERROR: check loadmat case matrix
> > ----------------------------------------------------------------------
> > Traceback (most recent call last):
> >   File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mio.py", line 85, in cc
> >     self._check_case(name, files, expected)
> >   File
"/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/tests/test_mio.py", > > line 75, in _check_case > > matdict = loadmat(file_name) > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/mio.py", > > line 96, in loadmat > > matfile_dict = MR.get_variables() > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/miobase.py", > > line 269, in get_variables > > mdict = self.file_header() > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/mio5.py", > > line 510, in file_header > > hdict['__header__'] = hdr['description'].strip(' \t\n\000') > > AttributeError: 'numpy.ndarray' object has no attribute 'strip' > > > > ====================================================================== > > ERROR: check loadmat case minus > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/tests/test_mio.py", > > line 85, in cc > > self._check_case(name, files, expected) > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/tests/test_mio.py", > > line 75, in _check_case > > matdict = loadmat(file_name) > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/mio.py", > > line 96, in loadmat > > matfile_dict = MR.get_variables() > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/miobase.py", > > line 269, in get_variables > > mdict = self.file_header() > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/mio5.py", > > line 510, in file_header > > hdict['__header__'] = hdr['description'].strip(' \t\n\000') > > AttributeError: 'numpy.ndarray' object has no 
attribute 'strip' > > > > ====================================================================== > > ERROR: check loadmat case multi > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/tests/test_mio.py", > > line 85, in cc > > self._check_case(name, files, expected) > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/tests/test_mio.py", > > line 75, in _check_case > > matdict = loadmat(file_name) > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/mio.py", > > line 96, in loadmat > > matfile_dict = MR.get_variables() > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/miobase.py", > > line 269, in get_variables > > mdict = self.file_header() > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/mio5.py", > > line 510, in file_header > > hdict['__header__'] = hdr['description'].strip(' \t\n\000') > > AttributeError: 'numpy.ndarray' object has no attribute 'strip' > > > > ====================================================================== > > ERROR: check loadmat case object > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/tests/test_mio.py", > > line 85, in cc > > self._check_case(name, files, expected) > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/tests/test_mio.py", > > line 75, in _check_case > > matdict = loadmat(file_name) > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/mio.py", > > line 96, in loadmat > > matfile_dict = 
MR.get_variables() > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/miobase.py", > > line 269, in get_variables > > mdict = self.file_header() > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/mio5.py", > > line 510, in file_header > > hdict['__header__'] = hdr['description'].strip(' \t\n\000') > > AttributeError: 'numpy.ndarray' object has no attribute 'strip' > > > > ====================================================================== > > ERROR: check loadmat case onechar > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/tests/test_mio.py", > > line 85, in cc > > self._check_case(name, files, expected) > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/tests/test_mio.py", > > line 75, in _check_case > > matdict = loadmat(file_name) > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/mio.py", > > line 96, in loadmat > > matfile_dict = MR.get_variables() > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/miobase.py", > > line 269, in get_variables > > mdict = self.file_header() > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/mio5.py", > > line 510, in file_header > > hdict['__header__'] = hdr['description'].strip(' \t\n\000') > > AttributeError: 'numpy.ndarray' object has no attribute 'strip' > > > > ====================================================================== > > ERROR: check loadmat case sparse > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > 
python2.5/site-packages/scipy/io/tests/test_mio.py", > > line 85, in cc > > self._check_case(name, files, expected) > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/tests/test_mio.py", > > line 75, in _check_case > > matdict = loadmat(file_name) > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/mio.py", > > line 96, in loadmat > > matfile_dict = MR.get_variables() > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/miobase.py", > > line 269, in get_variables > > mdict = self.file_header() > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/mio5.py", > > line 510, in file_header > > hdict['__header__'] = hdr['description'].strip(' \t\n\000') > > AttributeError: 'numpy.ndarray' object has no attribute 'strip' > > > > ====================================================================== > > ERROR: check loadmat case sparsecomplex > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/tests/test_mio.py", > > line 85, in cc > > self._check_case(name, files, expected) > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/tests/test_mio.py", > > line 75, in _check_case > > matdict = loadmat(file_name) > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/mio.py", > > line 96, in loadmat > > matfile_dict = MR.get_variables() > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/miobase.py", > > line 269, in get_variables > > mdict = self.file_header() > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/mio5.py", > > line 
510, in file_header > > hdict['__header__'] = hdr['description'].strip(' \t\n\000') > > AttributeError: 'numpy.ndarray' object has no attribute 'strip' > > > > ====================================================================== > > ERROR: check loadmat case string > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/tests/test_mio.py", > > line 85, in cc > > self._check_case(name, files, expected) > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/tests/test_mio.py", > > line 75, in _check_case > > matdict = loadmat(file_name) > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/mio.py", > > line 96, in loadmat > > matfile_dict = MR.get_variables() > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/miobase.py", > > line 269, in get_variables > > mdict = self.file_header() > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/mio5.py", > > line 510, in file_header > > hdict['__header__'] = hdr['description'].strip(' \t\n\000') > > AttributeError: 'numpy.ndarray' object has no attribute 'strip' > > > > ====================================================================== > > ERROR: check loadmat case stringarray > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/tests/test_mio.py", > > line 85, in cc > > self._check_case(name, files, expected) > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/tests/test_mio.py", > > line 75, in _check_case > > matdict = loadmat(file_name) > > File 
"/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/mio.py", > > line 96, in loadmat > > matfile_dict = MR.get_variables() > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/miobase.py", > > line 269, in get_variables > > mdict = self.file_header() > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/mio5.py", > > line 510, in file_header > > hdict['__header__'] = hdr['description'].strip(' \t\n\000') > > AttributeError: 'numpy.ndarray' object has no attribute 'strip' > > > > ====================================================================== > > ERROR: check loadmat case struct > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/tests/test_mio.py", > > line 85, in cc > > self._check_case(name, files, expected) > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/tests/test_mio.py", > > line 75, in _check_case > > matdict = loadmat(file_name) > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/mio.py", > > line 96, in loadmat > > matfile_dict = MR.get_variables() > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/miobase.py", > > line 269, in get_variables > > mdict = self.file_header() > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/mio5.py", > > line 510, in file_header > > hdict['__header__'] = hdr['description'].strip(' \t\n\000') > > AttributeError: 'numpy.ndarray' object has no attribute 'strip' > > > > ====================================================================== > > ERROR: check loadmat case structarr > > 
---------------------------------------------------------------------- > > Traceback (most recent call last): > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/tests/test_mio.py", > > line 85, in cc > > self._check_case(name, files, expected) > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/tests/test_mio.py", > > line 75, in _check_case > > matdict = loadmat(file_name) > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/mio.py", > > line 96, in loadmat > > matfile_dict = MR.get_variables() > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/miobase.py", > > line 269, in get_variables > > mdict = self.file_header() > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/mio5.py", > > line 510, in file_header > > hdict['__header__'] = hdr['description'].strip(' \t\n\000') > > AttributeError: 'numpy.ndarray' object has no attribute 'strip' > > > > ====================================================================== > > ERROR: check loadmat case structnest > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/tests/test_mio.py", > > line 85, in cc > > self._check_case(name, files, expected) > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/tests/test_mio.py", > > line 75, in _check_case > > matdict = loadmat(file_name) > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/mio.py", > > line 96, in loadmat > > matfile_dict = MR.get_variables() > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/miobase.py", > > line 
269, in get_variables > > mdict = self.file_header() > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/mio5.py", > > line 510, in file_header > > hdict['__header__'] = hdr['description'].strip(' \t\n\000') > > AttributeError: 'numpy.ndarray' object has no attribute 'strip' > > > > ====================================================================== > > ERROR: check loadmat case unicode > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/tests/test_mio.py", > > line 85, in cc > > self._check_case(name, files, expected) > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/tests/test_mio.py", > > line 75, in _check_case > > matdict = loadmat(file_name) > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/mio.py", > > line 96, in loadmat > > matfile_dict = MR.get_variables() > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/miobase.py", > > line 269, in get_variables > > mdict = self.file_header() > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/mio5.py", > > line 510, in file_header > > hdict['__header__'] = hdr['description'].strip(' \t\n\000') > > AttributeError: 'numpy.ndarray' object has no attribute 'strip' > > > > ====================================================================== > > FAIL: check_cosine_weighted_infinite > > (scipy.integrate.tests.test_quadpack.test_quad) > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/integrate/tests/test_quadpack.py", > > line 66, in 
check_cosine_weighted_infinite > > a/(a**2 + ome**2)) > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/integrate/tests/test_quadpack.py", > > line 9, in assert_quad > > assert abs(value-tabledValue) < err, (value, tabledValue, err) > > AssertionError: (0.21667686818858298, 0.21663778162911612, > > 6.5911747520269308e-10) > > > > ====================================================================== > > FAIL: check_sine_weighted_finite > > (scipy.integrate.tests.test_quadpack.test_quad) > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/integrate/tests/test_quadpack.py", > > line 46, in check_sine_weighted_finite > > (20*sin(ome)-ome*cos(ome)+ome*exp(-20))/(20**2 + ome**2)) > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/integrate/tests/test_quadpack.py", > > line 9, in assert_quad > > assert abs(value-tabledValue) < err, (value, tabledValue, err) > > AssertionError: (-2.0010659195591835e-06, -0.0266069863325, > > 1.4302337979274612e-14) > > > > ====================================================================== > > FAIL: check_sine_weighted_infinite > > (scipy.integrate.tests.test_quadpack.test_quad) > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/integrate/tests/test_quadpack.py", > > line 56, in check_sine_weighted_infinite > > ome/(a**2 + ome**2)) > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/integrate/tests/test_quadpack.py", > > line 9, in assert_quad > > assert abs(value-tabledValue) < err, (value, tabledValue, err) > > AssertionError: (0.12001853532338669, 0.12, 
4.3115018629240958e-10) > > > > ====================================================================== > > FAIL: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple) > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/lib/blas/tests/test_blas.py", > > line 76, in check_dot > > assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/numpy/testing/utils.py", > > line 156, in assert_almost_equal > > assert round(abs(desired - actual),decimal) == 0, msg > > AssertionError: > > Items are not equal: > > ACTUAL: 1.0340576422315888e-36j > > DESIRED: (-9+2j) > > > > ====================================================================== > > FAIL: check_dot (scipy.linalg.tests.test_blas.test_fblas1_simple) > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/linalg/tests/test_blas.py", > > line 75, in check_dot > > assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/numpy/testing/utils.py", > > line 156, in assert_almost_equal > > assert round(abs(desired - actual),decimal) == 0, msg > > AssertionError: > > Items are not equal: > > ACTUAL: 1.0340576422315888e-36j > > DESIRED: (-9+2j) > > > > ---------------------------------------------------------------------- > > Ran 1596 tests in 6.405s > > > > FAILED (failures=5, errors=19) > > > > > > > > > > I guess this is not the expected behaviour, is it? 
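[All 19 loadmat errors above stem from one line: mio5.py calls a string method on what is actually a NumPy array element. A minimal sketch of that failure mode and the usual fix; the dtype, field width, and header text below are illustrative assumptions, not read from a real MAT-file:]

```python
import numpy as np

# Structured scalar resembling the MAT-file header record read by mio5.py;
# the field name matches the traceback, the width and text are hypothetical.
hdr = np.zeros((), dtype=[('description', 'S116')])
hdr['description'] = b'MATLAB 5.0 MAT-file \t\n'

field = hdr['description']    # a 0-d numpy.ndarray, not a bytes/str object
# field.strip(' \t\n\000')    # -> AttributeError, exactly as in the tracebacks

# Extracting a native Python scalar first makes .strip() available again:
text = hdr['description'].item().decode('latin-1').strip(' \t\n\000')
print(text)  # MATLAB 5.0 MAT-file
```

[Converting the header field to a native string before stripping, along these lines, is presumably what a fixed mio5.py has to do.]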
> > >
> > > Michele
> >
> > On 4/21/07, Robert Kern wrote:
> >> Michele Mazzucco wrote:
> >>> Robert,
> >>>
> >>> I've installed cctools as you told me, but unfortunately it still does
> >>> not work for the same reason (you can find the log as an attachment).
> >>> Any idea?
> >>
> >> I'm sorry, I forgot to mention that you need to rebuild after installing
> >> cctools. Delete the build/ directory first to make sure you have a
> >> clean build.
> >>
> >> --
> >> Robert Kern
> >>
> >> "I have come to believe that the whole world is an enigma, a harmless enigma
> >> that is made terrible by our own mad attempt to interpret it as
> >> though it had an underlying truth."
> >>   -- Umberto Eco
> >> _______________________________________________
> >> SciPy-user mailing list
> >> SciPy-user at scipy.org
> >> http://projects.scipy.org/mailman/listinfo/scipy-user

From lists.steve at arachnedesign.net  Sat Apr 21 15:40:05 2007
From: lists.steve at arachnedesign.net (Steve Lianoglou)
Date: Sat, 21 Apr 2007 15:40:05 -0400
Subject: [SciPy-user] scipy compilation succeeds, test fails
In-Reply-To: <1f38e67c0704211233u22c911e5w2d1ff87018be5778@mail.gmail.com>
References: <1f38e67c0704201648i51e2447dwf628fc8631d230b4@mail.gmail.com>
	<46297473.3040209@gmail.com>
	<1f38e67c0704210404w2d7141a8sc578e90687a8ff32@mail.gmail.com>
	<462A566E.1090501@gmail.com>
	<1f38e67c0704211149t411db931t598d65a6d3519f56@mail.gmail.com>
	<8E994821-B3F8-495A-BC64-E3B8A4AA6D17@arachnedesign.net>
	<1f38e67c0704211233u22c911e5w2d1ff87018be5778@mail.gmail.com>
Message-ID: <69C8A379-EA0B-49AA-8C37-1BD5818281D1@arachnedesign.net>

Hi,

> do I need to rebuild numpy as well?,
> numpy's test is ok.

I'm not really sure, but I would (it's pretty quick, anyway).

Just rebuild from the latest SVN for both numpy and scipy and see
what happens.

-steve

From michelemazzucco at gmail.com  Sat Apr 21 15:59:05 2007
From: michelemazzucco at gmail.com (Michele Mazzucco)
Date: Sat, 21 Apr 2007 20:59:05 +0100
Subject: [SciPy-user] scipy compilation succeeds, test fails
In-Reply-To: <69C8A379-EA0B-49AA-8C37-1BD5818281D1@arachnedesign.net>
References: <1f38e67c0704201648i51e2447dwf628fc8631d230b4@mail.gmail.com>
	<46297473.3040209@gmail.com>
	<1f38e67c0704210404w2d7141a8sc578e90687a8ff32@mail.gmail.com>
	<462A566E.1090501@gmail.com>
	<1f38e67c0704211149t411db931t598d65a6d3519f56@mail.gmail.com>
	<8E994821-B3F8-495A-BC64-E3B8A4AA6D17@arachnedesign.net>
	<1f38e67c0704211233u22c911e5w2d1ff87018be5778@mail.gmail.com>
	<69C8A379-EA0B-49AA-8C37-1BD5818281D1@arachnedesign.net>
Message-ID: <1f38e67c0704211259w715a0216l409e984223aead05@mail.gmail.com>

Steven,

I did what you said, but unfortunately it still fails (I've re-built
numpy as well):

======================================================================
ERROR: check_simple_todense (scipy.io.tests.test_mmio.test_mmio_coordinate)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/io/tests/test_mmio.py", line 151, in check_simple_todense
    b = mmread(fn).todense()
AttributeError: 'numpy.ndarray' object has no attribute 'todense'

======================================================================
FAIL: check_cosine_weighted_infinite
(scipy.integrate.tests.test_quadpack.test_quad)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/integrate/tests/test_quadpack.py", line 66, in check_cosine_weighted_infinite
    a/(a**2 + ome**2))
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/integrate/tests/test_quadpack.py", line 9, in assert_quad
    assert abs(value-tabledValue) < err, (value, tabledValue, err)
AssertionError: (0.21667686818858298, 0.21663778162911612,
6.5911747520269308e-10)

======================================================================
FAIL: check_sine_weighted_finite
(scipy.integrate.tests.test_quadpack.test_quad)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/integrate/tests/test_quadpack.py", line 46, in check_sine_weighted_finite
    (20*sin(ome)-ome*cos(ome)+ome*exp(-20))/(20**2 + ome**2))
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/integrate/tests/test_quadpack.py", line 9, in assert_quad
    assert abs(value-tabledValue) < err, (value, tabledValue, err)
AssertionError: (-2.0010659195591835e-06, -0.0266069863325,
1.4302337979274612e-14)

======================================================================
FAIL: check_sine_weighted_infinite
(scipy.integrate.tests.test_quadpack.test_quad)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/integrate/tests/test_quadpack.py", line 56, in check_sine_weighted_infinite
    ome/(a**2 + ome**2))
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/integrate/tests/test_quadpack.py", line 9, in assert_quad
    assert abs(value-tabledValue) < err, (value, tabledValue, err)
AssertionError: (0.12001853532338669, 0.12, 4.3115018629240958e-10)

======================================================================
FAIL: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/lib/blas/tests/test_blas.py", line 76, in check_dot
    assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j)
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", line 156, in assert_almost_equal
    assert round(abs(desired - actual),decimal) == 0, msg
AssertionError:
Items are not equal:
 ACTUAL: 7.1376556515469226e-37j
 DESIRED: (-9+2j)

======================================================================
FAIL: check_dot (scipy.linalg.tests.test_blas.test_fblas1_simple)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linalg/tests/test_blas.py", line 75, in check_dot
    assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j)
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/testing/utils.py", line 156, in assert_almost_equal
    assert round(abs(desired - actual),decimal) == 0, msg
AssertionError:
Items are not equal:
 ACTUAL: 7.2076299948304295e-37j
 DESIRED: (-9+2j)

----------------------------------------------------------------------
Ran 1487 tests in 5.350s

FAILED (failures=5, errors=1)


Michele

On 4/21/07, Steve Lianoglou wrote:
> Hi,
>
> > do I need to rebuild numpy as well?, numpy's test is ok.
>
> I'm not really sure, but I would (it's pretty quick, anyway).
>
> Just rebuild from the latest SVN for both numpy and scipy and see
> what happens.
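[For reference, the assert_quad helper that all three quadpack failures pass through can be reconstructed from the assertion line in the tracebacks. A sketch: only the assert line is taken from the log, the default tolerance here is a guess:]

```python
def assert_quad(value, tabled_value, err=1.5e-8):
    # Fail when the quadrature result strays from the tabled value by more
    # than the requested error bound, reporting all three numbers on failure.
    assert abs(value - tabled_value) < err, (value, tabled_value, err)

# A comparison inside the tolerance passes silently:
assert_quad(0.12, 0.12 + 1e-10, err=1e-9)

# The logged failure, 0.12001853532338669 vs. 0.12 with err=4.31e-10,
# raises AssertionError carrying the tuple (value, tabled_value, err).
```

[What stands out in the logs is how far outside the bound the computed values are, e.g. about 1.85e-5 against a 4.3e-10 bound, which points at the quadrature result itself rather than a slightly too tight tolerance.]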
>
> -steve
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From lists.steve at arachnedesign.net  Sun Apr 22 00:29:54 2007
From: lists.steve at arachnedesign.net (Steve Lianoglou)
Date: Sun, 22 Apr 2007 00:29:54 -0400
Subject: [SciPy-user] scipy compilation succeeds, test fails
In-Reply-To: <1f38e67c0704211259w715a0216l409e984223aead05@mail.gmail.com>
References: <1f38e67c0704201648i51e2447dwf628fc8631d230b4@mail.gmail.com>
	<46297473.3040209@gmail.com>
	<1f38e67c0704210404w2d7141a8sc578e90687a8ff32@mail.gmail.com>
	<462A566E.1090501@gmail.com>
	<1f38e67c0704211149t411db931t598d65a6d3519f56@mail.gmail.com>
	<8E994821-B3F8-495A-BC64-E3B8A4AA6D17@arachnedesign.net>
	<1f38e67c0704211233u22c911e5w2d1ff87018be5778@mail.gmail.com>
	<69C8A379-EA0B-49AA-8C37-1BD5818281D1@arachnedesign.net>
	<1f38e67c0704211259w715a0216l409e984223aead05@mail.gmail.com>
Message-ID:

Hi,

> I did what you said, but unfortunately it still fails (I've re-built
> numpy as well):

Yes, but in different ways now ... at least you're not getting the
loadmat failures in the tests anymore :-)

For what it's worth, I think those `check_dot` failures have been
around for a long time (on mac builds, at least) -- and I *think* I
remember someone saying that it's no big deal (they fail on my install
as well).

I also have a failure on `check_simple_to_dense` but none for the
`check_*_weighted_infinite` functions (maybe those are new (?))

I'm using:

scipy -> 0.5.3.dev2806 (3 failures)
numpy -> 1.0.3.dev3683 (no failures)

Honestly I'm not sure what's up with the failures. Perhaps someone
else will chime in soon enough.
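[The expected value in the failing `check_dot` tests is just the unconjugated complex dot product of the two vectors from the assertion, which can be sanity-checked in plain Python without any BLAS at all:]

```python
# Vectors from the failing assertion in the thread:
#   assert_almost_equal(f([3j, -4, 3-4j], [2, 3, 1]), -9+2j)
x = [3j, -4, 3 - 4j]
y = [2, 3, 1]

# Unconjugated dot product: 3j*2 + (-4)*3 + (3-4j)*1 = 6j - 12 + 3 - 4j
dotu = sum(a * b for a, b in zip(x, y))
print(dotu)  # (-9+2j)
```

[So the tabled value -9+2j is correct; the ~1e-37j results in the logs suggest the complex return value is being lost somewhere between the BLAS routine and its wrapper on this platform, not that the test is wrong.]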
-steve > > ====================================================================== > ERROR: check_simple_todense > (scipy.io.tests.test_mmio.test_mmio_coordinate) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/scipy/io/tests/test_mmio.py", > line 151, in check_simple_todense > b = mmread(fn).todense() > AttributeError: 'numpy.ndarray' object has no attribute 'todense' > > ====================================================================== > FAIL: check_cosine_weighted_infinite > (scipy.integrate.tests.test_quadpack.test_quad) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/scipy/integrate/tests/test_quadpack.py", > line 66, in check_cosine_weighted_infinite > a/(a**2 + ome**2)) > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/scipy/integrate/tests/test_quadpack.py", > line 9, in assert_quad > assert abs(value-tabledValue) < err, (value, tabledValue, err) > AssertionError: (0.21667686818858298, 0.21663778162911612, > 6.5911747520269308e-10) > > ====================================================================== > FAIL: check_sine_weighted_finite > (scipy.integrate.tests.test_quadpack.test_quad) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/scipy/integrate/tests/test_quadpack.py", > line 46, in check_sine_weighted_finite > (20*sin(ome)-ome*cos(ome)+ome*exp(-20))/(20**2 + ome**2)) > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/scipy/integrate/tests/test_quadpack.py", > line 9, in assert_quad > assert abs(value-tabledValue) < err, 
(value, tabledValue, err) > AssertionError: (-2.0010659195591835e-06, -0.0266069863325, > 1.4302337979274612e-14) > > ====================================================================== > FAIL: check_sine_weighted_infinite > (scipy.integrate.tests.test_quadpack.test_quad) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/scipy/integrate/tests/test_quadpack.py", > line 56, in check_sine_weighted_infinite > ome/(a**2 + ome**2)) > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/scipy/integrate/tests/test_quadpack.py", > line 9, in assert_quad > assert abs(value-tabledValue) < err, (value, tabledValue, err) > AssertionError: (0.12001853532338669, 0.12, 4.3115018629240958e-10) > > ====================================================================== > FAIL: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/scipy/lib/blas/tests/test_blas.py", > line 76, in check_dot > assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/numpy/testing/utils.py", > line 156, in assert_almost_equal > assert round(abs(desired - actual),decimal) == 0, msg > AssertionError: > Items are not equal: > ACTUAL: 7.1376556515469226e-37j > DESIRED: (-9+2j) > > ====================================================================== > FAIL: check_dot (scipy.linalg.tests.test_blas.test_fblas1_simple) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/scipy/linalg/tests/test_blas.py", 
> line 75, in check_dot > assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > python2.5/site-packages/numpy/testing/utils.py", > line 156, in assert_almost_equal > assert round(abs(desired - actual),decimal) == 0, msg > AssertionError: > Items are not equal: > ACTUAL: 7.2076299948304295e-37j > DESIRED: (-9+2j) > > ---------------------------------------------------------------------- > Ran 1487 tests in 5.350s > > FAILED (failures=5, errors=1) > > > > Michele > > > On 4/21/07, Steve Lianoglou wrote: >> Hi, >> >>> do I need to rebuild numpy as well?, numpy's test is ok. >> >> I'm not really sure, but I would (it's pretty quick, anyway). >> >> Just rebuild from the latest SVN for both numpy and scipy and see >> what happens. >> >> -steve >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From faltet at carabos.com Sun Apr 22 05:57:04 2007 From: faltet at carabos.com (Francesc Altet) Date: Sun, 22 Apr 2007 11:57:04 +0200 Subject: [SciPy-user] HDF5 vs FITS (was: Fast saving/loading of huge matrices) In-Reply-To: <46292206.3010203@gemini.edu> References: <20070420073638.GE28829@clipper.ens.fr> <46292206.3010203@gemini.edu> Message-ID: <1177235824.2549.31.camel@localhost.localdomain> El dv 20 de 04 del 2007 a les 16:26 -0400, en/na James Turner va escriure: > > As a side note, we do use this richness of hdf5 in our experiment, to > > store say the time of an experimental run, the temperature of the room... > > It sounds like HDF5 provides much the same capabilities as FITS, the > main file standard used for some decades in astronomy. 
It also sounds
> like there may be a lot of overlap between Pytables and STScI's binary
> tables, as implemented in PyFITS. I imagine that's why Pytables was
> based on numarray, come to think of it...

Well, I must admit that I didn't know about STScI's TABLES package
until a couple of years ago (when PyTables was already more than three
years old), so I have no idea how much overlap there is between the two
projects. However, I'd say that both projects are rather different in
aims or, at the very least, in the formats that they are based on. So,
probably, most of the overlap comes simply from the similarity of the
names ;-). TABLES, born before 1998, has clear precedence over PyTables
(whose first public version was released in 2002) in claiming the name,
but as their application fields are quite different (in principle), I
don't think there is much point in changing the name of the latter, IMO.

And no, I didn't choose numarray for the very first version of PyTables
because it was already used for TABLES (in fact, I think that TABLES
appeared well before numarray; correct me if I'm wrong), but because
numarray was the only Python package with a powerful implementation of
recarrays, objects that were critical for the PyTables aims. Let me
stress that the PyTables project is indebted to the excellent (although,
with the advent of NumPy, the qualifier 'venerable' is beginning to
apply as well) numarray package and its developers, because without them
PyTables wouldn't exist (or at least, wouldn't have many of its current
features).

Cheers,

--
Francesc Altet      |  Be careful about using the following code --
Carabos Coop. V.    |  I've only proven that it works,
www.carabos.com     |  I haven't tested it.
-- Donald Knuth

From faltet at carabos.com  Sun Apr 22 06:02:42 2007
From: faltet at carabos.com (Francesc Altet)
Date: Sun, 22 Apr 2007 12:02:42 +0200
Subject: [SciPy-user] HDF5 vs FITS (was: Fast saving/loading of huge matrices)
In-Reply-To: <7DDBDD97-339A-42B3-AE55-534DE1A23BB2@stsci.edu>
References: <20070420073638.GE28829@clipper.ens.fr> <46292206.3010203@gemini.edu> <7DDBDD97-339A-42B3-AE55-534DE1A23BB2@stsci.edu>
Message-ID: <1177236162.2549.38.camel@localhost.localdomain>

Hi Perry,

El dv 20 de 04 del 2007 a les 17:14 -0400, en/na Perry Greenfield va escriure:
> On Apr 20, 2007, at 4:26 PM, James Turner wrote:
> > It sounds like HDF5 provides much the same capabilities as FITS, the
> > main file standard used for some decades in astronomy. It also sounds
> > like there may be a lot of overlap between Pytables and STScI's binary
> > tables, as implemented in PyFITS. I imagine that's why Pytables was
> > based on numarray, come to think of it... Does anyone have a good
> > overview of how they compare, or know whether this HDF format is the
>
> I think that is a bit too broadly posed to answer in any simple way
> (if you are wondering how HDF and FITS compare). Speed? Flexibility?
> Etc. FITS is generally much less flexible. However, it is archival.
> Something that HDF has a harder time claiming. And it is very well
> entrenched in astronomy.

Sorry for my ignorance, but can you explain what the term 'archival'
means in this context? I suppose that it has a very concrete meaning,
but I can't see why a flexible format like HDF5 would not be appropriate
for archival (in the general sense of the term) purposes.

Cheers,

--
Francesc Altet      |  Be careful about using the following code --
Carabos Coop. V.    |  I've only proven that it works,
www.carabos.com     |  I haven't tested it.
-- Donald Knuth

From faltet at carabos.com  Sun Apr 22 06:18:40 2007
From: faltet at carabos.com (Francesc Altet)
Date: Sun, 22 Apr 2007 12:18:40 +0200
Subject: [SciPy-user] HDF5 vs FITS
In-Reply-To: <462955A6.70503@gemini.edu>
References: ce557a360704201625l64884a39w585be8d9df6f2ef9@mail.gmail.com <462955A6.70503@gemini.edu>
Message-ID: <1177237120.2549.48.camel@localhost.localdomain>

El dv 20 de 04 del 2007 a les 20:07 -0400, en/na James Turner va escriure:
> > It links to this paper which goes more in depth, but is still an
> > overview of the capabilities (rather than documentation about how
> > to use the libraries).
>
> Thanks -- that paper (2nd link) is a good reference, including a
> format comparison table on pages 8-9.
>
> I see the paper claims that "HDF5 is compatible with all of the
> competing formats discussed in Item 10b in that those data models can
> be expressed in terms of HDF5.", with one of the "competing formats"
> being FITS.

It depends on what the author means by 'compatible'. I think that HDF5
is not meant to read FITS directly (nor will it be in the future), but
rather through a converter. There is an RFC in progress on this subject:

http://www.hdfgroup.uiuc.edu/RFC/HDF5/fits2h5/fits2h5.htm

Cheers,

--
Francesc Altet      |  Be careful about using the following code --
Carabos Coop. V.    |  I've only proven that it works,
www.carabos.com     |  I haven't tested it.
-- Donald Knuth

From michelemazzucco at gmail.com  Sun Apr 22 07:52:58 2007
From: michelemazzucco at gmail.com (Michele Mazzucco)
Date: Sun, 22 Apr 2007 12:52:58 +0100
Subject: [SciPy-user] scipy compilation succeeds, test fails
In-Reply-To:
References: <1f38e67c0704201648i51e2447dwf628fc8631d230b4@mail.gmail.com> <46297473.3040209@gmail.com> <1f38e67c0704210404w2d7141a8sc578e90687a8ff32@mail.gmail.com> <462A566E.1090501@gmail.com> <1f38e67c0704211149t411db931t598d65a6d3519f56@mail.gmail.com> <8E994821-B3F8-495A-BC64-E3B8A4AA6D17@arachnedesign.net> <1f38e67c0704211233u22c911e5w2d1ff87018be5778@mail.gmail.com> <69C8A379-EA0B-49AA-8C37-1BD5818281D1@arachnedesign.net> <1f38e67c0704211259w715a0216l409e984223aead05@mail.gmail.com>
Message-ID: <1f38e67c0704220452r61d464c9q2ecb797f33761c46@mail.gmail.com>

Steve,

yes, now it fails in a different way. However, I'm not very confident
about using a library whose tests fail :(. If, as you say, these
failures are well known, maybe it's the tests that are buggy ;)

Best,
Michele

On 4/22/07, Steve Lianoglou wrote:
> Hi,
>
> > I did what you said, but unfortunately it still fails (I've re-built
> > numpy as well):
>
> Yes, but in different ways now ... at least you're not getting the
> loadmat failures in the tests anymore :-)
>
> For what it's worth, I think those `check_dot` failures have been
> around for a long time (on mac builds, at least) -- and I *think* I
> remember someone saying that it's no big deal (they fail on my
> install as well).
>
> I also have a failure on `check_simple_to_dense` but none for
> `check_*_weighted_infinite` functions (maybe those are new (?))
>
> I'm using:
>
> scipy -> 0.5.3.dev2806 (3 failures)
> numpy -> 1.0.3.dev3683 (no failures)
>
> Honestly I'm not sure what's up with the failures. Perhaps someone
> else will chime in soon enough.
> > -steve > > > > > > ====================================================================== > > ERROR: check_simple_todense > > (scipy.io.tests.test_mmio.test_mmio_coordinate) > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/io/tests/test_mmio.py", > > line 151, in check_simple_todense > > b = mmread(fn).todense() > > AttributeError: 'numpy.ndarray' object has no attribute 'todense' > > > > ====================================================================== > > FAIL: check_cosine_weighted_infinite > > (scipy.integrate.tests.test_quadpack.test_quad) > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/integrate/tests/test_quadpack.py", > > line 66, in check_cosine_weighted_infinite > > a/(a**2 + ome**2)) > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/integrate/tests/test_quadpack.py", > > line 9, in assert_quad > > assert abs(value-tabledValue) < err, (value, tabledValue, err) > > AssertionError: (0.21667686818858298, 0.21663778162911612, > > 6.5911747520269308e-10) > > > > ====================================================================== > > FAIL: check_sine_weighted_finite > > (scipy.integrate.tests.test_quadpack.test_quad) > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/integrate/tests/test_quadpack.py", > > line 46, in check_sine_weighted_finite > > (20*sin(ome)-ome*cos(ome)+ome*exp(-20))/(20**2 + ome**2)) > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > 
python2.5/site-packages/scipy/integrate/tests/test_quadpack.py", > > line 9, in assert_quad > > assert abs(value-tabledValue) < err, (value, tabledValue, err) > > AssertionError: (-2.0010659195591835e-06, -0.0266069863325, > > 1.4302337979274612e-14) > > > > ====================================================================== > > FAIL: check_sine_weighted_infinite > > (scipy.integrate.tests.test_quadpack.test_quad) > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/integrate/tests/test_quadpack.py", > > line 56, in check_sine_weighted_infinite > > ome/(a**2 + ome**2)) > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/integrate/tests/test_quadpack.py", > > line 9, in assert_quad > > assert abs(value-tabledValue) < err, (value, tabledValue, err) > > AssertionError: (0.12001853532338669, 0.12, 4.3115018629240958e-10) > > > > ====================================================================== > > FAIL: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple) > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/lib/blas/tests/test_blas.py", > > line 76, in check_dot > > assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/numpy/testing/utils.py", > > line 156, in assert_almost_equal > > assert round(abs(desired - actual),decimal) == 0, msg > > AssertionError: > > Items are not equal: > > ACTUAL: 7.1376556515469226e-37j > > DESIRED: (-9+2j) > > > > ====================================================================== > > FAIL: check_dot (scipy.linalg.tests.test_blas.test_fblas1_simple) > > 
---------------------------------------------------------------------- > > Traceback (most recent call last): > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/scipy/linalg/tests/test_blas.py", > > line 75, in check_dot > > assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) > > File "/Library/Frameworks/Python.framework/Versions/2.5/lib/ > > python2.5/site-packages/numpy/testing/utils.py", > > line 156, in assert_almost_equal > > assert round(abs(desired - actual),decimal) == 0, msg > > AssertionError: > > Items are not equal: > > ACTUAL: 7.2076299948304295e-37j > > DESIRED: (-9+2j) > > > > ---------------------------------------------------------------------- > > Ran 1487 tests in 5.350s > > > > FAILED (failures=5, errors=1) > > > > > > > > Michele > > > > > > On 4/21/07, Steve Lianoglou wrote: > >> Hi, > >> > >>> do I need to rebuild numpy as well?, numpy's test is ok. > >> > >> I'm not really sure, but I would (it's pretty quick, anyway). > >> > >> Just rebuild from the latest SVN for both numpy and scipy and see > >> what happens. 
> >> > >> -steve > >> _______________________________________________ > >> SciPy-user mailing list > >> SciPy-user at scipy.org > >> http://projects.scipy.org/mailman/listinfo/scipy-user > >> > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From perry at stsci.edu Sun Apr 22 10:23:48 2007 From: perry at stsci.edu (Perry Greenfield) Date: Sun, 22 Apr 2007 10:23:48 -0400 Subject: [SciPy-user] HDF5 vs FITS (was: Fast saving/loading of huge matrices) In-Reply-To: <1177236162.2549.38.camel@localhost.localdomain> References: <20070420073638.GE28829@clipper.ens.fr> <46292206.3010203@gemini.edu> <7DDBDD97-339A-42B3-AE55-534DE1A23BB2@stsci.edu> <1177236162.2549.38.camel@localhost.localdomain> Message-ID: On Apr 22, 2007, at 6:02 AM, Francesc Altet wrote: > Hi Perry, > > El dv 20 de 04 del 2007 a les 17:14 -0400, en/na Perry Greenfield va > escriure: >> >> I think that is a bit too broadly posed to answer in any simple way >> (if you are wondering how HDF and FITS compare). Speed? Flexibility? >> Etc. FITS is generally much less flexible. However, it is archival. >> Something that HDF has a harder time claiming. And it is very well >> entrenched in astronomy. > > Sorry for my ignorance, but can you explain what 'archival' term means > in this context? I suppose that it has a very concrete meaning, but I > can't realize why a flexible format like HDF5 is not appropriate for > archival (in the general sense of the term) purposes. > What I meant was that FITS was defined in terms of the actual binary representation (originally on tape, but now generalized to other storage). The idea being that once written as a FITS file, it would always be supported in the future as a format. 
With HDF, the focus (as I understood it) was on the software interface, and that the binary representation used may change. And for HDF that binary representation has changed over time (again, as I understand it, perhaps I've been misinformed). That kind of variability is a real killer for archival purposes. Is HDF5 considered stable enough that no future changes are envisioned? (And have they guaranteed to support HDF5 indefinitely even if new enhancements are proposed?). That's what it would take to be accepted as an archival format. Perry From perry at stsci.edu Sun Apr 22 10:26:44 2007 From: perry at stsci.edu (Perry Greenfield) Date: Sun, 22 Apr 2007 10:26:44 -0400 Subject: [SciPy-user] HDF5 vs FITS In-Reply-To: <1177237120.2549.48.camel@localhost.localdomain> References: ce557a360704201625l64884a39w585be8d9df6f2ef9@mail.gmail.com <462955A6.70503@gemini.edu> <1177237120.2549.48.camel@localhost.localdomain> Message-ID: <06AF478C-7B86-4CFA-BDD9-9106A97EEFAC@stsci.edu> On Apr 22, 2007, at 6:18 AM, Francesc Altet wrote: > El dv 20 de 04 del 2007 a les 20:07 -0400, en/na James Turner va > escriure: >>> It links to this paper which goes more in depth, but is still an >>> overview of the capabilities (rather than documentation about how >>> to use the libraries). >> >> Thanks -- that paper (2nd link) is a good reference, including a >> format comparison table on pages 8-9. >> >> I see the paper claims that "HDF5 is compatible with all of the >> competing formats discussed in Item 10b in that those data models can >> be expressed in terms of HDF5.", with one of the "competing formats" >> being FITS. > > It depends on what the author would mean by 'compatible'. I think that > HDF5 is not meant to read FITS directly (nor will be in the > future), but > through a conversor. There is a RFC about this subject going on: > > http://www.hdfgroup.uiuc.edu/RFC/HDF5/fits2h5/fits2h5.htm That's right. 
Presumably they just mean that all possible FITS structures can be mapped into HDF constructs isomorphically. And that's what the converter does. Perry From perry at stsci.edu Sun Apr 22 11:10:04 2007 From: perry at stsci.edu (Perry Greenfield) Date: Sun, 22 Apr 2007 11:10:04 -0400 Subject: [SciPy-user] HDF5 vs FITS (was: Fast saving/loading of huge matrices) In-Reply-To: <1177235824.2549.31.camel@localhost.localdomain> References: <20070420073638.GE28829@clipper.ens.fr> <46292206.3010203@gemini.edu> <1177235824.2549.31.camel@localhost.localdomain> Message-ID: <27C1C4E7-CE55-4FBE-AEFD-E61D97405381@stsci.edu> On Apr 22, 2007, at 5:57 AM, Francesc Altet wrote: > > Well, I must recognize that I didn't know about STScI's TABLES > packages > until a couple of years ago (when PyTables was already more than three > years old), so no idea on the amount of overlap between both projects. > However, I'd say that both projects are rather different in aims > or, at > very least, on the formats that they are based on. So, probably, most > of the overlap would come basically from the similarity of the > names ;-). TABLES being born before 1998 has a clear precedence for > reclaiming the name over PyTables which first public version was > released in 2002, but as its application fields are quite different > (in > principle), I don't think there is not much point in thinking about > changing the name of the latter, IMO. I'm not sure what the TABLES reference is to. STScI does have a package called TABLES, but it was developed for IRAF and has no Python heritage to it. It uses an entirely different environment to access FITS tables (it's actually based on CFITSIO). PyFITS also provides access to FITS tables. As I mentioned, since FITS tables have some special features, it isn't possible to deal with these conveniently with either numarray's or numpy's record arrays directly. Both have used a wrapping class to add these features. 
Perry

From fredmfp at gmail.com  Sun Apr 22 12:31:51 2007
From: fredmfp at gmail.com (fred)
Date: Sun, 22 Apr 2007 18:31:51 +0200
Subject: [SciPy-user] filling array without loop...
Message-ID: <462B8DF7.9010500@gmail.com>

Hi,

I am trying to solve the following problem using scipy array methods.
The trouble is that I don't know which methods to use for it.

For now, I build this array with loops, like this:

cells_array = array([[VTK_HEXAHEDRON_NB_POINTS, \
                      k*(nx*ny) + j*nx + i, \
                      k*(nx*ny) + j*nx + i+1, \
                      k*(nx*ny) + (j+1)*nx + i+1, \
                      k*(nx*ny) + (j+1)*nx + i, \
                      (k+1)*(nx*ny) + j*nx + i, \
                      (k+1)*(nx*ny) + j*nx + i+1, \
                      (k+1)*(nx*ny) + (j+1)*nx + i+1, \
                      (k+1)*(nx*ny) + (j+1)*nx + i] \
                     for k in range(nz-1) for j in range(ny-1) for i in range(nx-1)], dtype='i').ravel()

This works fine, but it is far too slow for nx,ny,nz \approx 10^2 (endless).

As an example, let's say nx, ny, nz = 2, 3, 4; cells_array looks like this:

[[ 8  0  1  3  2  6  7  9  8]
 [ 8  2  3  5  4  8  9 11 10]
 [ 8  6  7  9  8 12 13 15 14]
 [ 8  8  9 11 10 14 15 17 16]
 [ 8 12 13 15 14 18 19 21 20]
 [ 8 14 15 17 16 20 21 23 22]]

My first idea is to work on the transpose of the array:

[[ 8  8  8  8  8  8]
 [ 0  2  6  8 12 14]
 [ 1  3  7  9 13 15]
 [ 3  5  9 11 15 17]
 [ 2  4  8 10 14 16]
 [ 6  8 12 14 18 20]
 [ 7  9 13 15 19 21]
 [ 9 11 15 17 21 23]
 [ 8 10 14 16 20 22]]

My second idea is to proceed step by step.

1) Filling the first row is trivial.
2) Second row: I'm thinking of setting a = arange(0, ny, 2) (i.e. [0, 2]),
   then "replicating" a to the next items, adding (ny-1)*(nz-1) each time.
   -> a = [ 0 2 6 8 12 14]
3) 3rd, 4th & 5th rows: a+1, a+3, a+2.
4) 6th to 9th rows: repeat the previous steps.

I think I have the idea (at least, one idea ;-) but don't know how to
do this with scipy.

Any suggestions are welcome.

Thanks in advance.

Cheers,

--
http://scipy.org/FredericPetit

From peridot.faceted at gmail.com  Sun Apr 22 13:23:42 2007
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Sun, 22 Apr 2007 13:23:42 -0400
Subject: [SciPy-user] filling array without loop...
In-Reply-To: <462B8DF7.9010500@gmail.com>
References: <462B8DF7.9010500@gmail.com>
Message-ID:

On 22/04/07, fred wrote:
> Works fine, but a bit too slow for nx,ny,nz \approx 10^2 (endless).

This probably gives you an array that needs some transpose()ing before
you ravel it, but you can more or less write your formula as is:

i = arange(nx)[newaxis,newaxis,:]
j = arange(ny)[newaxis,:,newaxis]
k = arange(nz)[:,newaxis,newaxis]

Now any elementwise arithmetic you do on i, j and k will broadcast them
up to arrays of the right shape, so for example

k[:-1,:,:]*(nx*ny)+j[:,:-1,:]*nx+i[:,:,:-1]

yields a cubical matrix that is essentially the first formula you gave.
Putting them all together (I'm sure there's a tidier way to do this,
particularly if you don't care much about the order):

ca1 = array([VTK_HEXAHEDRON_NB_POINTS*ones((nz-1,ny-1,nx-1),dtype=int),
             k[:-1,:,:]*(nx*ny)+j[:,:-1,:]*nx+i[:,:,:-1],
             k[:-1,:,:]*(nx*ny)+j[:,:-1,:]*nx+i[:,:,1:],
             k[:-1,:,:]*(nx*ny)+j[:,1:,:]*nx+i[:,:,1:],
             k[:-1,:,:]*(nx*ny)+j[:,1:,:]*nx+i[:,:,:-1],
             k[1:,:,:]*(nx*ny)+j[:,:-1,:]*nx+i[:,:,:-1],
             k[1:,:,:]*(nx*ny)+j[:,:-1,:]*nx+i[:,:,1:],
             k[1:,:,:]*(nx*ny)+j[:,1:,:]*nx+i[:,:,1:],
             k[1:,:,:]*(nx*ny)+j[:,1:,:]*nx+i[:,:,:-1]], dtype=int)

This could also be improved by using one of numpy's stacking functions
to put the matrices together instead of the catch-all
too-clever-for-its-own-good "array". The point is, elementwise
operations and broadcasting make it easy to transcribe your generation
routine.

You could also, and again this may take some transposing, use something like

indices = arange(nx*ny*nz).reshape((nz,ny,nx))

so that indices[k,j,i] replaces k*(nx*ny)+j*nx+i. It'll be slower,
though hopefully more maintainable.

Anne M. Archibald

From fredmfp at gmail.com  Sun Apr 22 14:47:33 2007
From: fredmfp at gmail.com (fred)
Date: Sun, 22 Apr 2007 20:47:33 +0200
Subject: [SciPy-user] filling array without loop...
In-Reply-To:
References: <462B8DF7.9010500@gmail.com>
Message-ID: <462BADC5.4090803@gmail.com>

Anne Archibald a écrit :
> ca1 = array([VTK_HEXAHEDRON_NB_POINTS*ones((nz-1,ny-1,nx-1),dtype=int),
>              k[:-1,:,:]*(nx*ny)+j[:,:-1,:]*nx+i[:,:,:-1],
>              k[:-1,:,:]*(nx*ny)+j[:,:-1,:]*nx+i[:,:,1:],
>              k[:-1,:,:]*(nx*ny)+j[:,1:,:]*nx+i[:,:,1:],
>              k[:-1,:,:]*(nx*ny)+j[:,1:,:]*nx+i[:,:,:-1],
>              k[1:,:,:]*(nx*ny)+j[:,:-1,:]*nx+i[:,:,:-1],
>              k[1:,:,:]*(nx*ny)+j[:,:-1,:]*nx+i[:,:,1:],
>              k[1:,:,:]*(nx*ny)+j[:,1:,:]*nx+i[:,:,1:],
>              k[1:,:,:]*(nx*ny)+j[:,1:,:]*nx+i[:,:,:-1]], dtype=int)

Whaouh! Many thanks, Anne!

I understand absolutely nothing yet, but I'll work hard on it ;-)

I think you are ready to answer a more complex (at least for me, of
course) issue, Anne :-))))

Each array cell is a convolution that I wrote as a scalar product
between KW, a weights filter vector, and input_data, the "raw" data
(3D); output_data is the filtered response given by the filter.

def calc_output(self):
    from scipy import copy, inner, zeros

    nx, ny, nz = self.nx, self.ny, self.nz
    nvx, nvy, nvz = self.nvx, self.nvy, self.nvz
    nx_nvx, ny_nvy, nz_nvz = nx-nvx, ny-nvy, nz-nvz
    nvx1, nvy1, nvz1 = nvx+1, nvy+1, nvz+1
    KWsize = self.KW.size
    KW = self.KW
    data_input = self.input
    data_output = self.output = copy(data_input)
    for k1 in range(nvz, nz_nvz):
        for j1 in range(nvy, ny_nvy):
            for i1 in range(nvx, nx_nvx):
                data_output[i1,j1,k1] = inner(KW, \
                    data_input[i1-nvx:i1+nvx1,j1-nvy:j1+nvy1,k1-nvz:k1+nvz1].reshape(KWsize))

These triple loops are _very_ CPU time consuming.

Is it possible to write this function without loops?

If yes, you are my God, Anne! :-))))

Cheers,

--
http://scipy.org/FredericPetit

From peridot.faceted at gmail.com  Sun Apr 22 15:31:26 2007
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Sun, 22 Apr 2007 15:31:26 -0400
Subject: [SciPy-user] filling array without loop...
In-Reply-To: <462BADC5.4090803@gmail.com>
References: <462B8DF7.9010500@gmail.com> <462BADC5.4090803@gmail.com>
Message-ID:

On 22/04/07, fred wrote:
> Each array cell is a convolution that I wrote as a scalar product
> between KW, a weights filter vector, and input_data,
> the "raw" data (3D); output_data is the filtered response given by the
> filter.

First: it is almost certainly possible to write this without a loop.
But the algorithm will still be O(n**3*m**3). You can rewrite
convolutions using the Fourier transform to get an algorithm that is
O((n+m)**3*log(n+m)**3), which will be much faster. Doing it for a
three-dimensional matrix will be a bit messy.

One problem with convolutions is you need to specify what happens at
the boundaries. You can use periodic boundaries, you can pad the input
with zeros, or you can discard everything in the output that's too
close to the edge.

For your particular problem, I'd say, use the FFT approach. So:

* Pad your arrays to the same size.
** The FFT method will give you periodic boundary conditions, that is,
   if your convolution blurs your image, the FFT will blur the left
   side into the right side. If the blur is only 2n pixels wide, then
   trimming off the n pixels on either side (as you do now) will get
   rid of this.
** For comprehensibility you may want to center your "blur" array at
   index zero (negative indices go at the end of the array). Not doing
   this will shift the result array.
* Take numpy.fft.rfftn of both arrays.
* Multiply the resulting (complex) arrays.
* Take numpy.fft.irfftn of the result.

A good test case is using as your "blur" array something like [1,0,0,0]
(should leave the array unchanged), [0,1,0,0] (should shift the array
over by one), or [1,0,1,0] (should add a shifted copy to the original),
or rather, their three-dimensional equivalents.
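[Editor's note: the FFT recipe above can be sketched in a few lines of plain NumPy. This is a hedged illustration, not Anne's code; the cube size `n = 8` and the 3x3x3 averaging kernel are arbitrary stand-ins chosen for the demo, and it includes the delta-kernel sanity checks she suggests.]

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8                                   # padded (common) cube size
data = rng.standard_normal((n, n, n))

# Kernel centred at index 0, with negative offsets wrapped to the end
# of the array, as suggested above; here a 3x3x3 averaging kernel.
kern = np.zeros((n, n, n))
for dz in (-1, 0, 1):
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            kern[dz % n, dy % n, dx % n] = 1.0 / 27

# Periodic (circular) convolution via the FFT:
# transform both arrays, multiply, transform back.
out = np.fft.irfftn(np.fft.rfftn(data) * np.fft.rfftn(kern), s=data.shape)

# Each interior point is now the mean of its 27-point neighbourhood.
assert np.isclose(out[2, 2, 2], data[1:4, 1:4, 1:4].mean())

# Sanity checks from the message: a delta kernel at index 0 leaves the
# array unchanged ...
delta0 = np.zeros((n, n, n))
delta0[0, 0, 0] = 1.0
same = np.fft.irfftn(np.fft.rfftn(data) * np.fft.rfftn(delta0), s=data.shape)
assert np.allclose(same, data)

# ... and a delta at index 1 shifts the array by one, periodically.
delta1 = np.zeros((n, n, n))
delta1[0, 0, 1] = 1.0
shifted = np.fft.irfftn(np.fft.rfftn(data) * np.fft.rfftn(delta1), s=data.shape)
assert np.allclose(shifted, np.roll(data, 1, axis=2))
```

Because the kernel is centred at index 0 with wrapped negative offsets, the result is not shifted; with zero-padding wide enough, trimming the edges afterwards recovers the "discard the border" behaviour of the original triple loop.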
For more on what you can do with the FFT in terms of convolution (for
example, if your convolution kernel is small you can make this run
faster), look at Numerical Recipes in C (which is available for free on
the Web).

Good luck,
Anne

From fredmfp at gmail.com  Sun Apr 22 16:01:46 2007
From: fredmfp at gmail.com (fred)
Date: Sun, 22 Apr 2007 22:01:46 +0200
Subject: [SciPy-user] filling array without loop...
In-Reply-To:
References: <462B8DF7.9010500@gmail.com> <462BADC5.4090803@gmail.com>
Message-ID: <462BBF2A.7000103@gmail.com>

Anne Archibald a écrit :
> On 22/04/07, fred wrote:
>
>> Each array cell is a convolution that I wrote as a scalar product
>> between KW, a weights filter vector, and input_data,
>> the "raw" data (3D); output_data is the filtered response given by the
>> filter.
>
> First: it is almost certainly possible to write this without a loop.
> But the algorithm will still be O(n**3*m**3). You can rewrite
> convolutions using the Fourier transform to get an algorithm that is
> O((n+m)**3*log(n+m)**3), which will be much faster. Doing it for a
> three-dimensional matrix will be a bit messy.

Ok. Forget that it is a convolution (because, in fact, it really is
not); it is simply a scalar product (an inner product, as scipy calls
it) in each cell of an array, as I wrote in my example.

Does it change anything? In this case, I don't see the point of using
an FFT (maybe I'm wrong).

Is it _slow_ because it is O(n**3*m**3) or because of the loops in
Python (maybe a silly question)?

Cheers,

--
http://scipy.org/FredericPetit

From jturner at gemini.edu  Sun Apr 22 16:40:38 2007
From: jturner at gemini.edu (James Turner)
Date: Sun, 22 Apr 2007 16:40:38 -0400
Subject: [SciPy-user] HDF5 vs FITS
In-Reply-To: <1177237120.2549.48.camel@localhost.localdomain>
References: <1177237120.2549.48.camel@localhost.localdomain>
Message-ID: <462BC846.2050606@gemini.edu>

Hi Francesc,

Thanks for the Pytables history.
I wasn't suggesting that there is a problem with the similarity of the names -- I did wonder at first if there is a connection with FITS tables, but that can always be solved with a comment in the documentation :-). > It depends on what the author would mean by 'compatible'. I think > that HDF5 is not meant to read FITS directly (nor will be in the > future), but through a conversor. Yes, I understood that. > There is a RFC about this subject going on: > http://www.hdfgroup.uiuc.edu/RFC/HDF5/fits2h5/fits2h5.htm That IS interesting to know. Defining a standard mapping between the two formats seems like a good idea, allowing at least some level of interoperability at the end-user-program level (as opposed to NumPy). Since the convertor is only one-way, I infer an expectation that HDF5 would supersede FITS, which I wouldn't really agree with, for similar reasons to Perry's comments about archiving. There is a LOT of existing astronomy software that does not handle HDF5, so a conversion back to FITS from the "FITS within HDF5" structure would be needed before the latter is really useful. Of course sticking to "FITS within HDF5" sort-of defeats the point, but at least it could allow processing FITS data with HDF5 software or vice-versa, assuming the HDF5 software is flexible enough in its expectations regarding data structures. Just an off-the-top-of-my-head reaction; I'm sure Perry et al. are familiar with such issues in more detail. Cheers, James. 
From faltet at carabos.com Sun Apr 22 17:08:42 2007 From: faltet at carabos.com (Francesc Altet) Date: Sun, 22 Apr 2007 23:08:42 +0200 Subject: [SciPy-user] HDF5 vs FITS (was: Fast saving/loading of huge matrices) In-Reply-To: References: <20070420073638.GE28829@clipper.ens.fr> <46292206.3010203@gemini.edu> <7DDBDD97-339A-42B3-AE55-534DE1A23BB2@stsci.edu> <1177236162.2549.38.camel@localhost.localdomain> Message-ID: <1177276123.2945.17.camel@localhost.localdomain> On Sun, 22 Apr 2007 at 10:23 -0400, Perry Greenfield wrote: > On Apr 22, 2007, at 6:02 AM, Francesc Altet wrote: > > > Hi Perry, > > > > On Fri, 20 Apr 2007 at 17:14 -0400, Perry Greenfield wrote: > >> > >> I think that is a bit too broadly posed to answer in any simple way > >> (if you are wondering how HDF and FITS compare). Speed? Flexibility? > >> Etc. FITS is generally much less flexible. However, it is archival. > >> Something that HDF has a harder time claiming. And it is very well > >> entrenched in astronomy. > > > > Sorry for my ignorance, but can you explain what the term 'archival' means > > in this context? I suppose that it has a very concrete meaning, but I > > can't see why a flexible format like HDF5 is not appropriate for > > archival (in the general sense of the term) purposes. > > > What I meant was that FITS was defined in terms of the actual binary > representation (originally on tape, but now generalized to other > storage). The idea being that once written as a FITS file, it would > always be supported in the future as a format. With HDF, the focus > (as I understood it) was on the software interface, and that the > binary representation used may change. And for HDF that binary > representation has changed over time (again, as I understand it, > perhaps I've been misinformed). That kind of variability is a real > killer for archival purposes. Is HDF5 considered stable enough that > no future changes are envisioned? 
(And have they guaranteed to > support HDF5 indefinitely even if new enhancements are proposed?). > That's what it would take to be accepted as an archival format. Ok. Thanks for the explanation. Well, my impression is that the THG people are trying hard to stick with a stable version of the format. In fact, the latest incarnation of HDF5 (1.8.0, in beta stage now) claims that it is able to read files from *all* the previous versions of HDF5. From the "What's New" announcement of 1.8.0 [1]: """ Backward and Forward Format Compatibility: The HDF5 Release 1.8.0 library will read all existing HDF5 files, from this or any prior release. Although this release contains features that require additions and/or changes to the HDF5 file format, by default this release will write out files that conform to a "maximum compatibility" principle. That is, files are written with the earliest version of the file format that describes the information, rather than always using the latest version possible. This provides the best forward compatibility by allowing the maximum number of older versions of the library to read files produced with this release. If library features are used that require new file format features, or if the application requests that the library write out only the latest version of the file format, the files produced with this version of the library may not be readable by older versions of the HDF5 library. """ So, not only is backward compatibility important to them, but so is forward compatibility (which could also be important for archival purposes). Furthermore, they have a pretty complete FAQ [2] on the issues about bugs in previous releases that might prevent this backward/forward compatibility, with suggestions and workarounds (when they are known/possible) for coping with them. 
This is not to say that HDF5 is completely free of issues for archival purposes, but at least their developers seem to try hard to help users avoid such issues (or work around them when problems do arise). Cheers, [1] http://www.hdfgroup.uiuc.edu/HDF5/doc_1.8pre/WhatsNew180.html [2] http://www.hdfgroup.org/HDF5/faq/bkfwd-compat.html -- Francesc Altet | Be careful about using the following code -- Carabos Coop. V. | I've only proven that it works, www.carabos.com | I haven't tested it. -- Donald Knuth From faltet at carabos.com Sun Apr 22 17:25:23 2007 From: faltet at carabos.com (Francesc Altet) Date: Sun, 22 Apr 2007 23:25:23 +0200 Subject: [SciPy-user] HDF5 vs FITS In-Reply-To: <462BC846.2050606@gemini.edu> References: <1177237120.2549.48.camel@localhost.localdomain> <462BC846.2050606@gemini.edu> Message-ID: <1177277123.2945.35.camel@localhost.localdomain> On Sun, 22 Apr 2007 at 16:40 -0400, James Turner wrote: > Hi Francesc, > > Thanks for the Pytables history. I wasn't suggesting that there is a > problem with the similarity of the names -- I did wonder at first if > there is a connection with FITS tables, but that can always be solved > with a comment in the documentation :-). Yes, that would be enough (at least in those fortunate cases where people do actually read docs ;-). > > It depends on what the author would mean by 'compatible'. I think > > that HDF5 is not meant to read FITS directly (nor will be in the > > future), but through a conversor. > > Yes, I understood that. > > > There is a RFC about this subject going on: > > http://www.hdfgroup.uiuc.edu/RFC/HDF5/fits2h5/fits2h5.htm > > That IS interesting to know. Defining a standard mapping between the > two formats seems like a good idea, allowing at least some level of > interoperability at the end-user-program level (as opposed to NumPy). 
> Since the convertor is only one-way, I infer an expectation that HDF5 > would supersede FITS, which I wouldn't really agree with, for similar > reasons to Perry's comments about archiving. There is a LOT of > existing astronomy software that does not handle HDF5, so a > conversion back to FITS from the "FITS within HDF5" structure would > be needed before the latter is really useful. Of course sticking to > "FITS within HDF5" sort-of defeats the point, but at least it could > allow processing FITS data with HDF5 software or vice-versa, assuming > the HDF5 software is flexible enough in its expectations regarding > data structures. Well, I don't know the plans for HDF5 and FITS for the future at all, so I'm not the best person to talk about this, but my impression is that fits2h5 is an attempt to give astronomers a conversion tool that lets them use HDF5-aware tools to work with their data. That's all. In particular, I doubt that the goal would be to make astronomers change their format, because, among many other reasons, there is a *vast* set of libraries that already work against FITS, and changing all of this to use HDF5 would simply be a no go. Although, perhaps having a bidirectional "FITS within HDF5" solution as you are suggesting could be a great idea, who knows. > Just an off-the-top-of-my-head reaction; I'm sure Perry et al. are > familiar with such issues in more detail. Mine too indeed. Cheers, -- Francesc Altet | Be careful about using the following code -- Carabos Coop. V. | I've only proven that it works, www.carabos.com | I haven't tested it. -- Donald Knuth From fredmfp at gmail.com Sun Apr 22 18:45:28 2007 From: fredmfp at gmail.com (fred) Date: Mon, 23 Apr 2007 00:45:28 +0200 Subject: [SciPy-user] filling array without loop... 
In-Reply-To: References: <462B8DF7.9010500@gmail.com> Message-ID: <462BE588.7040009@gmail.com> Anne Archibald wrote: > up to arrays of the right shape, so for example > k[:-1,:,:]*(nx*ny)+j[:,:-1,:]*nx+i[:,:,:-1] > yields a cubical matrix that is essentially the first formula you > gave. Putting them all together (I'm sure there's a tidier way to do > this, particularly if you don't care much about the order): > VTK does care about the order. > ca1 = array([VTK_HEXAHEDRON_NB_POINTS*ones((nz-1,ny-1,nx-1),dtype=int), > k[:-1,:,:]*(nx*ny)+j[:,:-1,:]*nx+i[:,:,:-1], > k[:-1,:,:]*(nx*ny)+j[:,:-1,:]*nx+i[:,:,1:], > k[:-1,:,:]*(nx*ny)+j[:,1:,:]*nx+i[:,:,1:], > k[:-1,:,:]*(nx*ny)+j[:,1:,:]*nx+i[:,:,:-1], > k[1:,:,:]*(nx*ny)+j[:,:-1,:]*nx+i[:,:,:-1], > k[1:,:,:]*(nx*ny)+j[:,:-1,:]*nx+i[:,:,1:], > k[1:,:,:]*(nx*ny)+j[:,1:,:]*nx+i[:,:,1:], > k[1:,:,:]*(nx*ny)+j[:,1:,:]*nx+i[:,:,:-1]], dtype=int) > > Ok, I think I got the trick. Very powerful ! ;-) > This could also be improved by using one of numpy's stacking functions > to put the matrices together instead of the catch-all > too-clever-for-its-own-good "array". > > I tried this:

cells_array = vstack([hstack(VTK_HEXAHEDRON_NB_POINTS*ones((nz-1,ny-1,nx-1),dtype=int)),
                      hstack(k[:-1,:,:]*(nx*ny)+j[:,:-1,:]*nx+i[:,:,:-1]),
                      hstack(k[:-1,:,:]*(nx*ny)+j[:,:-1,:]*nx+i[:,:,1:]),
                      hstack(k[:-1,:,:]*(nx*ny)+j[:,1:,:]*nx+i[:,:,1:]),
                      hstack(k[:-1,:,:]*(nx*ny)+j[:,1:,:]*nx+i[:,:,:-1]),
                      hstack(k[1:,:,:]*(nx*ny)+j[:,:-1,:]*nx+i[:,:,:-1]),
                      hstack(k[1:,:,:]*(nx*ny)+j[:,:-1,:]*nx+i[:,:,1:]),
                      hstack(k[1:,:,:]*(nx*ny)+j[:,1:,:]*nx+i[:,:,1:]),
                      hstack(k[1:,:,:]*(nx*ny)+j[:,1:,:]*nx+i[:,:,:-1])]).transpose().ravel()

Written like this, I get a memory allocation error: for nx=ny=nz=200, it exceeds 2GB, although it works fine when array() is used. Any idea ? For this peculiar case, I get the same issue when ijk loops are used... > The point is, elementwise operations and broadcasting make it easy to > transcribe your generation routine. 
> > You could also, and again this may take some transposing, use something like > indices = arange(nx*ny*nz).reshape((nz,ny,nx)) > so that indices[k,j,i] replaces k*(nx*ny)+j*nx+i. It'll be slower, > though hopefully more maintainable. > Ok, but the idea is to speed up the computing of the array. Thanks. Cheers, -- http://scipy.org/FredericPetit From peridot.faceted at gmail.com Sun Apr 22 19:41:15 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Sun, 22 Apr 2007 19:41:15 -0400 Subject: [SciPy-user] filling array without loop... In-Reply-To: <462BE588.7040009@gmail.com> References: <462B8DF7.9010500@gmail.com> <462BE588.7040009@gmail.com> Message-ID: On 22/04/07, fred wrote: > Ok, I think I got the trick. > Very powerful ! ;-) Broadcasting and elementwise operations are the basic raison d'être for numpy. > I tried this: > > cells_array = > vstack([hstack(VTK_HEXAHEDRON_NB_POINTS*ones((nz-1,ny-1,nx-1),dtype=int)), > > hstack(k[:-1,:,:]*(nx*ny)+j[:,:-1,:]*nx+i[:,:,:-1]), > > hstack(k[:-1,:,:]*(nx*ny)+j[:,:-1,:]*nx+i[:,:,1:]), > > hstack(k[:-1,:,:]*(nx*ny)+j[:,1:,:]*nx+i[:,:,1:]), > > hstack(k[:-1,:,:]*(nx*ny)+j[:,1:,:]*nx+i[:,:,:-1]), > > hstack(k[1:,:,:]*(nx*ny)+j[:,:-1,:]*nx+i[:,:,:-1]), > > hstack(k[1:,:,:]*(nx*ny)+j[:,:-1,:]*nx+i[:,:,1:]), > > hstack(k[1:,:,:]*(nx*ny)+j[:,1:,:]*nx+i[:,:,1:]), > > hstack(k[1:,:,:]*(nx*ny)+j[:,1:,:]*nx+i[:,:,:-1])]).transpose().ravel() > > Written as this, I get a memory allocation error: for nx=ny=nz=200, it > exceeds 2GB, > altough it works fine when array() used. Any idea ? Um, first of all, what are you trying to accomplish with all those hstack()s? The stacking functions are a good way to make an array out of a list or tuple, but if what you have is already an array, some combination of reshape() and transpose() is almost certainly what you want. If you're doing something simple, that probably won't copy your data; if it does copy your data, rearrange how you fill your arrays to start with. 
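[Editorial note: the indices trick quoted at the top of this message, indices = arange(nx*ny*nz).reshape((nz,ny,nx)) as a replacement for the hand-computed flat index k*(nx*ny)+j*nx+i, is easy to verify for small sizes; the sizes below are made up.]

```python
import numpy as np

nx, ny, nz = 3, 4, 5
indices = np.arange(nx * ny * nz).reshape((nz, ny, nx))

# In C (row-major) order, indices[k, j, i] equals the flat index
# computed by hand in the thread:
for k in range(nz):
    for j in range(ny):
        for i in range(nx):
            assert indices[k, j, i] == k * (nx * ny) + j * nx + i
```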
at 7x200x200x200x8 bytes, the final array will be 427MiB, so any duplication of it is liable to push you close to 2 GB. It's a bit crazy to be working so close to the point at which you run out of RAM. But you have a point; numpy's vector operations, if written in the obvious way, can use far more memory than looping. The easiest solution is a compromise: partially vectorize your function. Instead of replacing all three loops by numpy vector operations, try replacing only one or two. This will still cut your overhead drastically while keeping the array sizes small. If you need more speed from this, it's going to take more work and produce uglier code. You can rewrite the code so all matrix operations are done in place (using += or ufunc output arguments), or there are tools like numexpr, pyrex, or weave. Good luck, Anne From peridot.faceted at gmail.com Sun Apr 22 19:51:04 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Sun, 22 Apr 2007 19:51:04 -0400 Subject: [SciPy-user] filling array without loop... In-Reply-To: <462BBF2A.7000103@gmail.com> References: <462B8DF7.9010500@gmail.com> <462BADC5.4090803@gmail.com> <462BBF2A.7000103@gmail.com> Message-ID: On 22/04/07, fred wrote: > Forget that is a convolution (because in fact, it is really not), > simply a scalar product (inner product says scipy) in each cells of an > array, as I wrote in my example. > > Does it changes something ? Uh, maybe I'm confused - the code you sent sure looks like a convolution to me (although KW has for some reason been flattened). Are you saying that the code you sent is not doing a convolution, or are you asking me to optimize code I haven't seen? Assuming the latter, dot() and tensordot() work fine on higher-dimensional arrays; just transpose() your arrays so that the indices you're contracting along are in the right places. 
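[Editorial note: to illustrate the tensordot() suggestion, suppose the nine neighbourhood values for every output cell were gathered into one array; the contraction then needs no Python loop at all. A sketch with made-up shapes and hypothetical names (windows, w):]

```python
import numpy as np

rng = np.random.RandomState(0)
ny, nx = 4, 5
windows = rng.rand(ny, nx, 9)  # hypothetical: 9 neighbour values per cell
w = rng.rand(9)                # hypothetical: the weights vector

# Contract the last axis of windows against w; the result has shape (ny, nx):
b = np.tensordot(windows, w, axes=([2], [0]))

# The same computation with explicit loops, for comparison:
b_loop = np.empty((ny, nx))
for i in range(ny):
    for j in range(nx):
        b_loop[i, j] = np.dot(windows[i, j], w)
```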
As a general comment, it seems as if you are using flat arrays to store multidimensional data (for example, KW in the code you posted), and as a result have loads of reshape()s. It's generally better, when possible, to keep multidimensional arrays multidimensional - less error-prone, and indexing is more likely to be easy. Anne From fredmfp at gmail.com Mon Apr 23 07:14:22 2007 From: fredmfp at gmail.com (fred) Date: Mon, 23 Apr 2007 13:14:22 +0200 Subject: [SciPy-user] filling array without loop... In-Reply-To: References: <462B8DF7.9010500@gmail.com> <462BE588.7040009@gmail.com> Message-ID: <462C950E.7030109@gmail.com> Anne Archibald wrote: >> I tried this: >> >> cells_array = >> vstack([hstack(VTK_HEXAHEDRON_NB_POINTS*ones((nz-1,ny-1,nx-1),dtype=int)), >> >> hstack(k[:-1,:,:]*(nx*ny)+j[:,:-1,:]*nx+i[:,:,:-1]), >> >> [snip] >> Um, first of all, what are you trying to accomplish with all those >> hstack()s? The stacking functions are a good way to make an array out >> of a list or tuple, but if what you have is already an array, some >> combination of reshape() and transpose() is almost certainly what you >> want. If you're doing something simple, that probably won't copy your >> data; if it does copy your data, rearrange how you fill your arrays to >> start with. >> Ok, maybe I misunderstood. In this example:

x = x0 + arange(nx, dtype=float32)*dx
y = y0 + arange(ny, dtype=float32)*dy
z = z0 + arange(nz, dtype=float32)*dz
c = vstack([hstack([x]*ny*nz), \
            repeat(hstack([y]*nz), nx), \
            repeat(z, nx*ny)]).transpose().ravel()

using hstack()/vstack() is much faster than using array() for nx,ny,nz = 200,250,300. >> at 7x200x200x200x8 bytes, the final array will be 427MiB, so any >> duplication of it is liable to push you close to 2 GB. It's a bit >> crazy to be working so close to the point at which you run out of RAM. >> It's only for testing ;-) What I don't understand for this example (200x200x200) is why it works fine with array and does not with vstack/hstack ? 
Do the *stack() methods duplicate data where array() doesn't? >> But you have a point; numpy's vector operations, if written in the >> obvious way, can use far more memory than looping. The easiest >> solution is a compromise: partially vectorize your function. Instead >> of replacing all three loops by numpy vector operations, try replacing >> only one or two. This will still cut your overhead drastically while >> keeping the array sizes small. >> Ok. >> If you need more speed from this, it's going to take more work and >> produce uglier code. You can rewrite the code so all matrix operations >> are done in place (using += or ufunc output arguments), or there are >> tools like numexpr, pyrex, or weave. >> I'll think about this when I really want to optimize my code ;-) Thanks. Cheers, -- http://scipy.org/FredericPetit From fredmfp at gmail.com Mon Apr 23 07:16:59 2007 From: fredmfp at gmail.com (fred) Date: Mon, 23 Apr 2007 13:16:59 +0200 Subject: [SciPy-user] filling array without loop... In-Reply-To: References: <462B8DF7.9010500@gmail.com> <462BE588.7040009@gmail.com> Message-ID: <462C95AB.1050806@gmail.com> Anne Archibald wrote: > Um, first of all, what are you trying to accomplish with all those > hstack()s? > Because you told me so ;-) in your first answer. So, if it is not right, what do you mean by "using one of numpy's stacking functions to put the matrices together..." ? Cheers, -- http://scipy.org/FredericPetit From fredmfp at gmail.com Mon Apr 23 09:06:40 2007 From: fredmfp at gmail.com (fred) Date: Mon, 23 Apr 2007 15:06:40 +0200 Subject: [SciPy-user] filling array without loop... 
In-Reply-To: References: <462B8DF7.9010500@gmail.com> <462BADC5.4090803@gmail.com> <462BBF2A.7000103@gmail.com> Message-ID: <462CAF60.90406@gmail.com> Anne Archibald wrote: > On 22/04/07, fred wrote: > > >> Forget that is a convolution (because in fact, it is really not), >> simply a scalar product (inner product says scipy) in each cells of an >> array, as I wrote in my example. >> >> Does it changes something ? >> > > Uh, maybe I'm confused - Ok, so let me explain a little more... I have two 2D matrices, say A and B, with the same dims = 571x876. B is to be computed, A is known. Each cell of the matrix B is computed from a "scalar product" with a few cells of the matrix A and a "weights" vector W:

B[i,j] = \sum_{n=0}^{8} w_n a_n

where
- W = [w_n] is a 1D vector (computed from solve(a,b)) with dim = 9
- the a_n are cells of A, selected from a submatrix of A (much smaller than A), say A':

       a_2 -- a_5 -- a_8
        |      |      |
A' =   a_1 -- a_4 -- a_7
        |      |      |
       a_0 -- a_3 -- a_6

My idea is to write B[i,j] as a scalar product between W and A' flattened to a 1D vector: B[i,j] = dot(W, A') (ok, the inner() method was ill-suited). The problem, in my opinion, is once again the two loops (three in the general case) needed to compute each cell of the matrix B. So my question is: is it possible to fill the B matrix without loops? I hope I am clearer now. Thanks in advance. Cheers, -- http://scipy.org/FredericPetit From Giovanni.Samaey at cs.kuleuven.be Mon Apr 23 09:35:40 2007 From: Giovanni.Samaey at cs.kuleuven.be (Giovanni Samaey) Date: Mon, 23 Apr 2007 15:35:40 +0200 Subject: [SciPy-user] random numbers in scipy Message-ID: <462CB62C.30304@cs.kuleuven.be> Hi all, I apologize if this question is silly; I checked the mailing list, the module documentation, the numpy book and the scipy site for the answer but couldn't find it. 1. Is the numpy.random module identical to the scipy.stats module? 2. (How) can I set the seed for the random number generator? Best, and thanks in advance. 
Giovanni From nwagner at iam.uni-stuttgart.de Mon Apr 23 09:38:05 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 23 Apr 2007 15:38:05 +0200 Subject: [SciPy-user] random numbers in scipy In-Reply-To: <462CB62C.30304@cs.kuleuven.be> References: <462CB62C.30304@cs.kuleuven.be> Message-ID: <462CB6BD.4000408@iam.uni-stuttgart.de> Giovanni Samaey wrote: > Hi all, > > I apologize if this question is silly; I checked the mailing list, the > module documentation, the numpy book and the scipy site for the answer > but couldn't find it. > > 1. Is the numpy.random module identical to the scipy.stats module? > > 2. (How) can I set the seed for the random number generator? > > random.seed > Best, and thanks in advance. > > Giovanni > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From Giovanni.Samaey at cs.kuleuven.be Mon Apr 23 09:58:16 2007 From: Giovanni.Samaey at cs.kuleuven.be (Giovanni Samaey) Date: Mon, 23 Apr 2007 15:58:16 +0200 Subject: [SciPy-user] random numbers in scipy In-Reply-To: <462CB6BD.4000408@iam.uni-stuttgart.de> References: <462CB62C.30304@cs.kuleuven.be> <462CB6BD.4000408@iam.uni-stuttgart.de> Message-ID: <462CBB78.90301@cs.kuleuven.be> Thanks for the (short) answer: 1. Only numpy.random has a "random.seed". So I assume I should be using that module, and not scipy.stats? 2. Do I seed the module, or an instance of something (as in pygsl)? The method is not described in the numpy book, and the docstring says it is a method of mtrand.RandomState, which is an object. I deduce from this that the module initialises one single object of this type on import, which is used by every call to generate a random number, for any distribution? Is this a right deduction? 
Giovanni Nils Wagner wrote: > Giovanni Samaey wrote: > >> Hi all, >> >> I apologize if this question is silly; I checked the mailing list, the >> module documentation, the numpy book and the scipy site for the answer >> but couldn't find it. >> >> 1. Is the numpy.random module identical to the scipy.stats module? >> >> 2. (How) can I set the seed for the random number generator? >> >> >> > random.seed > > From emanuelez at gmail.com Mon Apr 23 11:20:14 2007 From: emanuelez at gmail.com (Emanuele Zattin) Date: Mon, 23 Apr 2007 17:20:14 +0200 Subject: [SciPy-user] regional maxima Message-ID: Hello everybody, i am considering porting to Python an application i wrote in Matlab. My only concern right now is that i need something similar to the function "imregionalmax": http://www.mathworks.com/access/helpdesk/help/toolbox/images/index.html?/access/helpdesk/help/toolbox/images/imregionalmax.html&http://www.google.dk/search?q=imregionalmax&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a My googling so far has turned up pymorphpro ( http://www.mmorph.com/pymorphpro/) which does exactly what i want but doesn't seem to be free and works with numeric and not numpy. I can also see that scipy offers some morphology functions but nothing to spot regional maxima. I wonder if it is possible to achieve this using the available primitives (erosion, dilation, ...) and i think i found the answer (or part of it) here: http://www.qgar.org/doc/QgarLib/html/classqgar_1_1RegionalMaxBinaryImage.html where this is defined: *RMAX(f) = f - R(f,E(f))*, where *f* is the given image, *E* its erosion, and *R(a,b)* the reconstruction by dilatation of *b* using *a*. Anyhow... my experiments so far were not successful so that's why i'm writing here. Any suggestion? 
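[Editorial note: for what it is worth, a naive version of that reconstruction recipe can be put together from scipy.ndimage primitives. The sketch below is not from the thread and has not been checked against imregionalmax: it implements greyscale reconstruction by iterated dilation, and it uses the common greyscale variant RMAX(f) = f - R(f, f-1) (marker f-1 instead of the erosion used in the quoted binary-image formula), assuming integer pixel values and 8-connectivity.]

```python
import numpy as np
from scipy import ndimage

def reconstruct_by_dilation(marker, mask):
    # Naive morphological reconstruction: repeatedly dilate the marker
    # (3x3 window, i.e. 8-connectivity) and clip it under the mask until
    # it stops changing.  A queue-based algorithm would be needed for
    # images of any real size.
    while True:
        grown = np.minimum(ndimage.grey_dilation(marker, size=(3, 3)), mask)
        if np.array_equal(grown, marker):
            return marker
        marker = grown

def regional_max(f):
    # RMAX(f) = f - R(f, f - 1): the residue is positive exactly on the
    # regional maxima (integer-valued image assumed).
    return (f - reconstruct_by_dilation(f - 1, f)) > 0
```

On a toy image with a flat plateau of 3s and an isolated 2 on a background of 1s, both regions come out as regional maxima, which is the behaviour imregionalmax is documented to have.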
From matthew.brett at gmail.com Mon Apr 23 11:40:33 2007 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 23 Apr 2007 16:40:33 +0100 Subject: [SciPy-user] regional maxima In-Reply-To: References: Message-ID: <1e2af89e0704230840r1d8a718dqd07548ec10962074@mail.gmail.com> Hi, On 4/23/07, Emanuele Zattin wrote: > Hello everybody, > i am considering porting to Python an application i wrote in Matlab. > My only concern right now is that i need something similar to the function > "imregionalmax": I don't know this area well - but have you checked out ITK www.itk.org? They have some low-level python bindings as standard. Matthew From peridot.faceted at gmail.com Mon Apr 23 12:17:34 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Mon, 23 Apr 2007 12:17:34 -0400 Subject: [SciPy-user] filling array without loop... In-Reply-To: <462C95AB.1050806@gmail.com> References: <462B8DF7.9010500@gmail.com> <462BE588.7040009@gmail.com> <462C95AB.1050806@gmail.com> Message-ID: On 23/04/07, fred wrote: > Anne Archibald wrote: > > Um, first of all, what are you trying to accomplish with all those > > hstack()s? > > > Because you told me so ;-) in your first answer. > So, if it is not right, what do you mean by "using one of numpy's > stacking functions to put the matrices together..." ? Well, I see your problem in two parts: (1) build an array of shape (7,nx,ny,nz), then (2) flatten it enough to make VTK happy. Step (1) is most easily accomplished by making a list of 7 arrays of shape (nx,ny,nz) then putting them together along a new axis. Making the arrays is no problem, you just use broadcasting. Step (2): putting a list of arrays together into one array along a new axis you can do with array(), or you could use one of the stacking functions. You only have to do that once, so I'm not clear on why you use so many hstack()s. 
I'd take a look at the Numpy Example List; it looks like none of the available functions do quite what you want, although a combination of [...,newaxis] and concatenate might be all right. Since all your hstack()s and vstack()s aren't doing anything useful, I'd get rid of them - at the least they have to make copies of their arguments. Anne From robert.kern at gmail.com Mon Apr 23 12:30:46 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 23 Apr 2007 11:30:46 -0500 Subject: [SciPy-user] random numbers in scipy In-Reply-To: <462CBB78.90301@cs.kuleuven.be> References: <462CB62C.30304@cs.kuleuven.be> <462CB6BD.4000408@iam.uni-stuttgart.de> <462CBB78.90301@cs.kuleuven.be> Message-ID: <462CDF36.7000402@gmail.com> Giovanni Samaey wrote: > Thanks for the (short) answer: > > 1. Only numpy.random has a "random.seed". So I assume I should be using > that module, and not scipy.stats? If it has the distributions that you need. If you need distributions that are only provided by the distribution objects in scipy.stats, then you might need to use scipy.stats. > 2. Do I seed the module, or an instance of something (as in pygsl)? > > The method is not described in the numpy book, and the docstring says > it is a method of mtrand.RandomState, which is an object. I deduce from > this that the module initialises one single object of this type on > import, which is used by every call to generate a random number, for any > distribution? Is this a right deduction? The "functions" in numpy.random are just aliases to the methods on the global instance. They are there only for convenience when you don't need real control over the stream and for backwards compatibility with Numeric's RandomArray which only had global state. The preferred way is to make your own instance of RandomState and call methods off of it. You should only use numpy.random.seed() if you need to work around other code which uses the global instance (which, unfortunately, is much of scipy.stats). 
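[Editorial note: in code, Robert's recommendation might look like the following sketch; the seed value 1234 is arbitrary.]

```python
import numpy as np

# A private RandomState carries its own stream, so seeding it does not
# disturb code that uses the global numpy.random instance.
rng = np.random.RandomState(1234)
sample = rng.normal(loc=0.0, scale=1.0, size=5)

# Re-creating an instance with the same seed reproduces the stream:
rng2 = np.random.RandomState(1234)
sample2 = rng2.normal(loc=0.0, scale=1.0, size=5)
```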
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From faltet at carabos.com Mon Apr 23 14:22:42 2007 From: faltet at carabos.com (Francesc Altet) Date: Mon, 23 Apr 2007 20:22:42 +0200 Subject: [SciPy-user] Optimization in numexpr for unaligned arrays Message-ID: <1177352562.2614.19.camel@localhost.localdomain> Hi, I've filed a patch with ticket #400 of SciPy trac. I've tried to assign it to David, but it seems that I've not privileges to do this. Anyway, I'm puzzled about why, when working with unaligned arrays, a plain "a.copy()" in Python space can be faster (between a 30% and a 70%) than a 'PyArray_SimpleNewFromDescr' call in C space. Any hints? -- Francesc Altet | Be careful about using the following code -- Carabos Coop. V. | I've only proven that it works, www.carabos.com | I haven't tested it. -- Donald Knuth From Giovanni.Samaey at cs.kuleuven.be Tue Apr 24 03:27:50 2007 From: Giovanni.Samaey at cs.kuleuven.be (Giovanni Samaey) Date: Tue, 24 Apr 2007 09:27:50 +0200 Subject: [SciPy-user] random numbers in scipy In-Reply-To: <462CDF36.7000402@gmail.com> References: <462CB62C.30304@cs.kuleuven.be> <462CB6BD.4000408@iam.uni-stuttgart.de> <462CBB78.90301@cs.kuleuven.be> <462CDF36.7000402@gmail.com> Message-ID: <462DB176.8030104@cs.kuleuven.be> Dear Robert, thanks for the answer ! > > The preferred way is to make your own instance of RandomState and call methods > off of it. You should only use numpy.random.seed() if you need to work around > other code which uses the global instance (which, unfortunately, is much of > scipy.stats). 
So I should do:

rand = numpy.RandomState(seed)
rand.normal(loc=0.,scale=1.,size=100)

I am willing to document this somewhere: - I was unable to deduce this from the module's docstring (it doesn't have one) - I didn't find this on the web nor in the book (I am not saying it isn't there somewhere -- I only looked for an hour. Maybe it is there in a non-obvious place). I think the book or the docstring of the module are the best places. As a side remark: I find it confusing to have a numpy.random and a scipy.stats module (in which stats does not allow setting seeds). Do both use the Mersenne Twister? Furthermore, you can request random numbers both directly from the numpy.random module, and from a RandomState object. Giovanni From aisaac at american.edu Tue Apr 24 04:08:31 2007 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 24 Apr 2007 04:08:31 -0400 Subject: [SciPy-user] random numbers in scipy In-Reply-To: <462DB176.8030104@cs.kuleuven.be> References: <462CB62C.30304@cs.kuleuven.be> <462CB6BD.4000408@iam.uni-stuttgart.de> <462CBB78.90301@cs.kuleuven.be><462CDF36.7000402@gmail.com><462DB176.8030104@cs.kuleuven.be> Message-ID: On Tue, 24 Apr 2007, Giovanni Samaey apparently wrote: > rand = numpy.RandomState(seed)

rand = numpy.random.RandomState(seed)

At least in 1.0. Cheers, Alan Isaac From Giovanni.Samaey at cs.kuleuven.be Tue Apr 24 08:35:10 2007 From: Giovanni.Samaey at cs.kuleuven.be (Giovanni Samaey) Date: Tue, 24 Apr 2007 14:35:10 +0200 Subject: [SciPy-user] weird behaviour in scipy.random seed Message-ID: <462DF97E.4020802@cs.kuleuven.be> Hi, I am seeing the following weird behaviour when seeding the random number generator in scipy.random. The basic idea is that it gives an error when I take the seed out of a scipy array of integers, but works fine when I cast them to standard python integers. (See code below.) Should I file a ticket for this? Does anyone have the same? 
Giovanni

from scipy import random as R

a = R.RandomState(seed=0)    # this is OK
a = R.RandomState(seed=0.)   # this gives an error, which is OK
  ValueError: object of too small depth for desired array

Now I try the following:

x = scipy.arange(500)
x[0].dtype                    # this says 'int64'
a = R.RandomState(seed=x[0])  # this again gives an error, which is not OK
  ValueError: object of too small depth for desired array

Casting to int resolves the problem:

a = R.RandomState(seed=int(x[0]))  # this does not give an error.

From emanuelez at gmail.com Tue Apr 24 12:11:00 2007 From: emanuelez at gmail.com (Emanuele Zattin) Date: Tue, 24 Apr 2007 18:11:00 +0200 Subject: [SciPy-user] regional maxima In-Reply-To: <1e2af89e0704230840r1d8a718dqd07548ec10962074@mail.gmail.com> References: <1e2af89e0704230840r1d8a718dqd07548ec10962074@mail.gmail.com> Message-ID: Yeah, i have stumbled upon that in my googling but its installation process seems to be kinda overkill :-S And most of all there is no c/c++ compiler installed on the servers of the university i am working on. Any other suggestion? On 4/23/07, Matthew Brett wrote: > > Hi, > > On 4/23/07, Emanuele Zattin wrote: > > Hello everybody, > > i am considering porting to Python an application i wrote in Matlab. > > My only concern right now is that i need something similar to the > function > > "imregionalmax": > > I don't know this area well - but have you checked out ITK > www.itk.org? They have some low-level python bindings as standard. > > Matthew > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > 
URL: From robert.kern at gmail.com Tue Apr 24 12:24:49 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 24 Apr 2007 11:24:49 -0500 Subject: [SciPy-user] random numbers in scipy In-Reply-To: <462DB176.8030104@cs.kuleuven.be> References: <462CB62C.30304@cs.kuleuven.be> <462CB6BD.4000408@iam.uni-stuttgart.de> <462CBB78.90301@cs.kuleuven.be> <462CDF36.7000402@gmail.com> <462DB176.8030104@cs.kuleuven.be> Message-ID: <462E2F51.2010406@gmail.com> Giovanni Samaey wrote: > As a side remark: I find it confusing to have a numpy.random and a > scipy.stats module (in which stats does not allow to set seeds). They are two different things. scipy.stats has many more things in it. All of its random number generation capabilities (only part of what scipy.stats does) use numpy.random. > Do > both use the Mersenne Twister? Since scipy.stats uses numpy.random, yes. > Furhtermore, you can ask random numbers > both directly from the numpy.random module, and from a RandomState object. I've already explained why this is the case. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From samrobertsmith at gmail.com Tue Apr 24 20:22:52 2007 From: samrobertsmith at gmail.com (linda.s) Date: Tue, 24 Apr 2007 17:22:52 -0700 Subject: [SciPy-user] book Message-ID: <1d987df30704241722u6cd1594au30a8676ef4b7b017@mail.gmail.com> Hi, I am very new to SciPy. Is there any good tutorial book? Thanks, Linda From aisaac at american.edu Tue Apr 24 21:00:24 2007 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 24 Apr 2007 21:00:24 -0400 Subject: [SciPy-user] book In-Reply-To: <1d987df30704241722u6cd1594au30a8676ef4b7b017@mail.gmail.com> References: <1d987df30704241722u6cd1594au30a8676ef4b7b017@mail.gmail.com> Message-ID: On Tue, 24 Apr 2007, "linda.s" apparently wrote: > I am very new to SciPy. > Is there any good tutorial book? 
http://www.scipy.org/Documentation See in particular: Cheers, Alan Isaac From david.warde.farley at utoronto.ca Tue Apr 24 22:28:35 2007 From: david.warde.farley at utoronto.ca (David Warde-Farley) Date: Tue, 24 Apr 2007 22:28:35 -0400 Subject: [SciPy-user] book In-Reply-To: References: <1d987df30704241722u6cd1594au30a8676ef4b7b017@mail.gmail.com> Message-ID: <1177468115.8747.12.camel@rodimus> On Tue, 2007-04-24 at 21:00 -0400, Alan G Isaac wrote: > On Tue, 24 Apr 2007, "linda.s" apparently wrote: > > I am very new to SciPy. > > Is there any good tutorial book? > > http://www.scipy.org/Documentation > > See in particular: > > Also, http://www.scipy.org/Cookbook Cheers, David From S.Mientki at ru.nl Wed Apr 25 03:53:15 2007 From: S.Mientki at ru.nl (Stef Mientki) Date: Wed, 25 Apr 2007 09:53:15 +0200 Subject: [SciPy-user] How to install SciPy in the most simplest way ? Message-ID: <462F08EB.8010101@ru.nl> hello, I'm still in the transition of going from MatLab to Scipy, and last week I installed SciPy on a PC twice, through the new "Enstaller". It's a pity that there will be no old installer versions anymore (although I can understand why). Although I succeeded, the behavior of the Enstaller was different both times, and you can clearly see it's an alpha version. (I already wrote up my experiences with the first install; the second install had the weird phenomenon that none of the successfully installed packages was detected). As a spoiled windows user, as are most people around me by the way, I'm used to a "one-button-install", so I wonder if it's possible to make a much simpler install procedure. I don't know anything about what's required for a good install, or what kind of things should be stored in the windows registry, but as Python is an interpreter, I would expect there should be a very easy procedure: - Install it on one machine, - copy the complete subdirectory to another computer Does this work for Python + Scipy ?
Though the above question might seem a lot of fuss about nearly nothing, it's essential to my plan, in which I want to convince the other people at our university to move from MatLab to Python. For windows users, the "one-button-install" is essential, otherwise most windows users will not even try a new package. Sorry for the long post, about "nothing" for non-windows users ;-) thanks, Stef Mientki Kamer van Koophandel - handelsregister 41055629 / Netherlands Chamber of Commerce - trade register 41055629 From giovanni.samaey at cs.kuleuven.be Wed Apr 25 04:21:18 2007 From: giovanni.samaey at cs.kuleuven.be (Giovanni Samaey) Date: Wed, 25 Apr 2007 10:21:18 +0200 Subject: [SciPy-user] random numbers in scipy In-Reply-To: <462E2F51.2010406@gmail.com> References: <462CB62C.30304@cs.kuleuven.be> <462CB6BD.4000408@iam.uni-stuttgart.de> <462CBB78.90301@cs.kuleuven.be> <462CDF36.7000402@gmail.com> <462DB176.8030104@cs.kuleuven.be> <462E2F51.2010406@gmail.com> Message-ID: <462F0F7E.1040806@cs.kuleuven.be> I now understand the situation that scipy.stats builds upon numpy.random, and adds stuff without claiming to expose all that is in numpy.random (such as the seeding). Thanks for helping ! I repeat that I am willing to document this, if this effort will end up in an "authoritative" documentation (i.e. in the right place) and will be properly reviewed. That part of my previous message has not been answered.
Giovanni From gael.varoquaux at normalesup.org Wed Apr 25 04:29:10 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Wed, 25 Apr 2007 10:29:10 +0200 Subject: [SciPy-user] random numbers in scipy In-Reply-To: <462F0F7E.1040806@cs.kuleuven.be> References: <462CB62C.30304@cs.kuleuven.be> <462CB6BD.4000408@iam.uni-stuttgart.de> <462CBB78.90301@cs.kuleuven.be> <462CDF36.7000402@gmail.com> <462DB176.8030104@cs.kuleuven.be> <462E2F51.2010406@gmail.com> <462F0F7E.1040806@cs.kuleuven.be> Message-ID: <20070425082906.GC15538@clipper.ens.fr> On Wed, Apr 25, 2007 at 10:21:18AM +0200, Giovanni Samaey wrote:
> I repeat that I am willing to document this, if this effort will end up in
> an "authoritative" documentation (i.e. in the right place) and will be
> properly reviewed.
The place where I would put such notes would be on the wiki, by adding a "rand" page under http://scipy.org/SciPy_packages. Gaël From fredmfp at gmail.com Wed Apr 25 05:00:08 2007 From: fredmfp at gmail.com (fred) Date: Wed, 25 Apr 2007 11:00:08 +0200 Subject: [SciPy-user] filling array without loop... In-Reply-To: References: <462B8DF7.9010500@gmail.com> <462BE588.7040009@gmail.com> <462C95AB.1050806@gmail.com> Message-ID: <462F1898.7040103@gmail.com> Anne Archibald a écrit :
> Well, I see your problem in two parts: (1) build an array of shape
> (7,nx,ny,nz), then (2) flatten it enough to make VTK happy. Step (1)
> is most easily accomplished by making a list of 7 arrays of shape
> (nx,ny,nz) then putting them together along a new axis. Making the
> arrays is no problem, you just use broadcasting. Step (2), putting a
> list of arrays together into one array along a new axis, you can do
> with array(), or you could use one of the stacking functions. You only
> have to do that once, so I'm not clear on why you use so many
> hstack()s.
> I'd take a look at the Numpy Example List; it looks like
> none of the available functions do quite what you want, although a
> combination of [...,newaxis] and concatenate might be all right.
Hi Anne, This is what I got from the French Python newsgroup. Beautiful and fast ;-)
csize = (nx-1)*(ny-1)*(nz-1)
a = array([0, 1, nx+1, nx], dtype=int32)
a = hstack((a, nx*ny + a))
i = arange(nx-1, dtype=int32)[newaxis,newaxis,:]
j = arange(ny-1, dtype=int32)[newaxis,:,newaxis]
k = arange(nz-1, dtype=int32)[:,newaxis,newaxis]
b = (k*(nx*ny) + j*nx + i).flatten()
cells_array = zeros(((nx-1)*(ny-1)*(nz-1),VTK_HEXAHEDRON_NB_POINTS+1), dtype=int32)
cells_array[:,0] = VTK_HEXAHEDRON_NB_POINTS
cells_array[:,1:] = b[:,newaxis] + a
cells_id = array([VTK_HEXAHEDRON]*csize, dtype=uint8)
Cheers, -- http://scipy.org/FredericPetit From lfriedri at imtek.de Wed Apr 25 05:55:37 2007 From: lfriedri at imtek.de (Lars Friedrich) Date: Wed, 25 Apr 2007 11:55:37 +0200 Subject: [SciPy-user] odeintegrate, speed Message-ID: <462F2599.9060408@imtek.de> Hello, recently, I successfully used scipy.integrate.odepack.odeint (scipy version 0.5.2) to solve a simple ODE. The system function I used is a standard python function like
def systemFunction(self, x, t):
    fThermal = ...
    k = ...
    gamma = ...
    fExternal = externalForce(x)
    xDeriv = (fThermal - k * x - fExternal) / gamma
    return xDeriv
where externalForce(x) is another python function. I have the feeling that the simulation is somewhat "slow", although I have no hard timing data available at the moment. I compared it to simulations done in Igor (a scientific software package from WaveMetrics) and tried to set the accuracy targets to the same values, and Igor was much faster. I can imagine that my Python code is slow because I use a standard python function as the systemFunction. I read that it is possible to speed things up using scipy.weave.
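[Editor's note: a system function of the kind described above can be fed to scipy.integrate.odeint directly. The following is a minimal self-contained sketch; the constants and the trivial external_force are made-up placeholders, not the poster's actual values.]

```python
import numpy as np
from scipy.integrate import odeint

# Hypothetical constants standing in for fThermal, k and gamma.
F_THERMAL = 0.0
K = 2.0
GAMMA = 1.0

def external_force(x):
    # Placeholder for the real force law.
    return 0.0

def system_function(x, t):
    # dx/dt = (fThermal - k*x - fExternal) / gamma
    return (F_THERMAL - K * x - external_force(x)) / GAMMA

t = np.linspace(0.0, 5.0, 101)
x = odeint(system_function, 1.0, t)[:, 0]

# With zero thermal and external forces this reduces to exponential decay,
# so the result can be checked against exp(-K*t/GAMMA).
assert np.allclose(x, np.exp(-K * t / GAMMA), atol=1e-6)
```

A pure-Python right-hand side like this incurs one Python call per integrator step, which is where the slowness relative to compiled tools typically comes from.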
However, I am not sure if this is the right way for me, because I do not have a complicated numpy-expression in my systemFunction but rather simple one-number-arithmetic. Anyway, I did
import scipy.weave
scipy.weave.test
and the result was:
Found 0 tests for scipy.weave.c_spec
Found 2 tests for scipy.weave.blitz_tools
building extensions here: c:\docume~1\larsfr~1\locals~1\temp\Lars Friedrich\python24_compiled\m1
Found 1 tests for scipy.weave.ext_tools
Found 74 tests for scipy.weave.size_check
Found 9 tests for scipy.weave.build_tools
Found 0 tests for scipy.weave.inline_tools
Found 1 tests for scipy.weave.ast_tools
Found 0 tests for scipy.weave.wx_spec
Found 3 tests for scipy.weave.standard_array_spec
Found 26 tests for scipy.weave.catalog
Found 16 tests for scipy.weave.slice_handler
Found 0 tests for __main__
Traceback (most recent call last):
  File "", line 1, in ?
  File "C:\PROGRA~1\Python24\Lib\site-packages\numpy\testing\numpytest.py", line 476, in test
    runner.run(all_tests)
  File "C:\Program Files\Python24\lib\unittest.py", line 696, in run
    test(result)
  File "C:\Program Files\Python24\lib\unittest.py", line 428, in __call__
    return self.run(*args, **kwds)
  File "C:\Program Files\Python24\lib\unittest.py", line 424, in run
    test(result)
  File "C:\PROGRA~1\Python24\Lib\site-packages\numpy\testing\numpytest.py", line 140, in __call__
    unittest.TestCase.__call__(self, result)
  File "C:\Program Files\Python24\lib\unittest.py", line 281, in __call__
    return self.run(*args, **kwds)
  File "C:\Program Files\Python24\lib\unittest.py", line 276, in run
    if ok: result.addSuccess(self)
  File "C:\Program Files\Python24\lib\unittest.py", line 648, in addSuccess
    self.stream.write('.')
  File "C:\PROGRA~1\Python24\Lib\site-packages\numpy\testing\numpytest.py", line 107, in write
    self.stream.flush()
IOError: [Errno 9] Bad file descriptor
My numpy version is 1.0.1 What is the right way to do a fast ODE simulation with scipy? Shall I use weave.inline? Do I need an additional C-compiler then?
Or is there an easier / better approach? Thanks for lighting my darkness.... Lars -- Dipl.-Ing. Lars Friedrich Optical Measurement Technology Department of Microsystems Engineering -- IMTEK University of Freiburg Georges-Köhler-Allee 102 D-79110 Freiburg Germany phone: +49-761-203-7531 fax: +49-761-203-7537 room: 01 088 email: lfriedri at imtek.de From strawman at astraw.com Wed Apr 25 11:48:05 2007 From: strawman at astraw.com (Andrew Straw) Date: Wed, 25 Apr 2007 08:48:05 -0700 Subject: [SciPy-user] random numbers in scipy In-Reply-To: <462F0F7E.1040806@cs.kuleuven.be> References: <462CB62C.30304@cs.kuleuven.be> <462CB6BD.4000408@iam.uni-stuttgart.de> <462CBB78.90301@cs.kuleuven.be> <462CDF36.7000402@gmail.com> <462DB176.8030104@cs.kuleuven.be> <462E2F51.2010406@gmail.com> <462F0F7E.1040806@cs.kuleuven.be> Message-ID: <462F7835.8090204@astraw.com> Giovanni Samaey wrote:
> I now understand the situation that scipy.stats builds upon
> numpy.random, and adds stuff without claiming to expose all that is in
> numpy.random (such as the seeding). Thanks for helping !
>
> I repeat that I am willing to document this, if this effort will end up in
> an "authoritative" documentation (i.e. in the right place) and will be
> properly reviewed.
> That part of my previous message has not been answered.
>
Gael already answered if you want to make a wiki page, which is one of the "authoritative" means of documentation we have. Another, probably slightly more so, is the docstrings. Your efforts in this regard will certainly be reviewed and incorporated if acceptable to a developer if you make a patch against the svn version of scipy and create a Trac ticket containing the patch.
-Andrew From strawman at astraw.com Wed Apr 25 11:52:02 2007 From: strawman at astraw.com (Andrew Straw) Date: Wed, 25 Apr 2007 08:52:02 -0700 Subject: [SciPy-user] Python issue of Computing in Science and Engineering available Message-ID: <462F7922.7060704@astraw.com> The May/June issue of Computing in Science and Engineering http://computer.org/cise: is out and has a Python theme. Many folks we know and love from the community and mailing lists contribute to the issue. Read articles by Paul Dubois and Travis Oliphant for free online. From fperez.net at gmail.com Wed Apr 25 12:11:03 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 25 Apr 2007 10:11:03 -0600 Subject: [SciPy-user] [Matplotlib-users] Python issue of Computing in Science and Engineering available In-Reply-To: <462F7922.7060704@astraw.com> References: <462F7922.7060704@astraw.com> Message-ID: On 4/25/07, Andrew Straw wrote: > The May/June issue of Computing in Science and Engineering > http://computer.org/cise: is out and has a Python theme. Many folks we > know and love from the community and mailing lists contribute to the > issue. Read articles by Paul Dubois and Travis Oliphant for free online. 
Since authors are allowed by their publication policy to keep a publicly available copy of their papers on their personal website, here's the ipython one: http://amath.colorado.edu/faculty/fperez/preprints/ipython-cise-final.pdf Cheers, f From jdh2358 at gmail.com Wed Apr 25 12:18:56 2007 From: jdh2358 at gmail.com (John Hunter) Date: Wed, 25 Apr 2007 11:18:56 -0500 Subject: [SciPy-user] [Matplotlib-users] Python issue of Computing in Science and Engineering available In-Reply-To: References: <462F7922.7060704@astraw.com> Message-ID: <88e473830704250918n6c7631f4w514d8dc68b3dde94@mail.gmail.com> On 4/25/07, Fernando Perez wrote: > Since authors are allowed by their publication policy to keep a > publicly available copy of their papers on their personal website, > here's the ipython one: Didn't know that... here's a link to my matplotlib article http://nitace.bsd.uchicago.edu/misc/c3sci.pdf It might be nice to create a scipy wiki page linking to these PDFs. JDH From fperez.net at gmail.com Wed Apr 25 12:29:01 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 25 Apr 2007 10:29:01 -0600 Subject: [SciPy-user] [Matplotlib-users] Python issue of Computing in Science and Engineering available In-Reply-To: <88e473830704250918n6c7631f4w514d8dc68b3dde94@mail.gmail.com> References: <462F7922.7060704@astraw.com> <88e473830704250918n6c7631f4w514d8dc68b3dde94@mail.gmail.com> Message-ID: On 4/25/07, John Hunter wrote: > On 4/25/07, Fernando Perez wrote: > > Since authors are allowed by their publication policy to keep a > > publicly available copy of their papers on their personal website, > > here's the ipython one: > > Didn't know that... 
here's a link to my matplotlib article I'm going by the language here: http://www.ieee.org/web/publications/rights/policies.html Specifically: When IEEE publishes the work, the author must replace the previous electronic version of the accepted paper with either (1) the full citation to the IEEE work or (2) the IEEE-published version, including the IEEE copyright notice and full citation. Prior or revised versions of the paper must not be represented as the published version. This explicitly mentions author website redistribution, as long as the official IEEE version is used. Unless I'm misreading the above, I think it's OK for us to keep such copies in our personal sites. We can link to them from the scipy wiki, though I don't think it would be OK to /copy/ the PDFs to the scipy wiki. As always, IANAL and all that. Cheers, f From giovanni.samaey at cs.kuleuven.be Wed Apr 25 13:30:26 2007 From: giovanni.samaey at cs.kuleuven.be (Giovanni Samaey) Date: Wed, 25 Apr 2007 19:30:26 +0200 Subject: [SciPy-user] random numbers in scipy In-Reply-To: <462F7835.8090204@astraw.com> References: <462CB62C.30304@cs.kuleuven.be> <462CB6BD.4000408@iam.uni-stuttgart.de> <462CBB78.90301@cs.kuleuven.be> <462CDF36.7000402@gmail.com> <462DB176.8030104@cs.kuleuven.be> <462E2F51.2010406@gmail.com> <462F0F7E.1040806@cs.kuleuven.be> <462F7835.8090204@astraw.com> Message-ID: <462F9032.7070804@cs.kuleuven.be> Andrew Straw wrote: > Giovanni Samaey wrote: > >> I now understand the situation that scipy.stats builds upon >> numpy.random, and adds stuff without claiming to expose all that is in >> numpy.random (such as the seeding). Thanks for helping ! >> >> I repeat that I am willing to document this, if this effort will come in >> an "authorative" documentation (ie at the right place) and will be >> properly reviewed. >> That part of my previous message has not been answered. 
>> >> > Gael already answered if you want to make a wiki page, which is one of > the "authoritative" means of documentation we have. Another, probably > slightly more so, is the docstrings. Your efforts in this regard will > certainly be reviewed and incorporated if acceptable to a developer if > you make a patch against the svn version of scipy and create a Trac > ticket containing the patch. > Thanks: I will go through the docstrings that I find for other numpy modules and add one for numpy.random in the same style. Since numpy.random is, strictly speaking, a numpy package and not a scipy package, I don't know where I should put it on the wiki. Can I just create an account to make a trac ticket or should I ask for one? This being said: don't be surprised if it takes a few weeks... I put it on my to-do list and it will show up when I am able to do this. But I am making a hard commitment for myself and towards you guys. It is my pleasure to help you create documentation if it only involves translating loose emails and experience into a proper docstring :-) Giovanni Giovanni From robert.kern at gmail.com Wed Apr 25 13:40:26 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 25 Apr 2007 12:40:26 -0500 Subject: [SciPy-user] random numbers in scipy In-Reply-To: <462F9032.7070804@cs.kuleuven.be> References: <462CB62C.30304@cs.kuleuven.be> <462CB6BD.4000408@iam.uni-stuttgart.de> <462CBB78.90301@cs.kuleuven.be> <462CDF36.7000402@gmail.com> <462DB176.8030104@cs.kuleuven.be> <462E2F51.2010406@gmail.com> <462F0F7E.1040806@cs.kuleuven.be> <462F7835.8090204@astraw.com> <462F9032.7070804@cs.kuleuven.be> Message-ID: <462F928A.3070700@gmail.com> Giovanni Samaey wrote: > Thanks: I will go through the docstrings that I find for other numpy > modules and add one for numpy.random in the same style. > Since numpy.random is, strictly speaking, a numpy package and not a > scipy package, I don't know where I should put it on the wiki. 
> Can I just create an account to make a trac ticket or should I ask for one? Make yourself an account on the numpy Trac: http://projects.scipy.org/scipy/numpy/register Then you will see a "New Ticket" option in the navigation bar. If you are submitting an actual patch file, be sure to check the box "I have files to attach to this ticket" at the bottom of the new ticket page and then submit. You will then be given a chance to upload the patch file. > This being said: don't be surprised if it takes a few weeks... I put it > on my to-do list and it will show up when I am able to do this. > But I am making a hard commitment for myself and towards you guys. It > is my pleasure to help you create documentation if it only involves > translating loose emails and experience into a proper docstring :-) Any level of assistance is appreciated. Thank you. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From yaroslavvb at gmail.com Wed Apr 25 17:17:29 2007 From: yaroslavvb at gmail.com (Yaroslav Bulatov) Date: Wed, 25 Apr 2007 14:17:29 -0700 Subject: [SciPy-user] bug in pilutils? 
Message-ID:
In [12]: scipy.misc.fromimage(scipy.misc.toimage(20*ones((1,1))))
Out[12]: array([[0]], dtype=uint8)
No matter what the starting array is, converting it to image and then back to array gives me an array of 0's, but I expect to get the starting array. From ryanlists at gmail.com Wed Apr 25 18:01:36 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 25 Apr 2007 17:01:36 -0500 Subject: [SciPy-user] [Matplotlib-users] Python issue of Computing in Science and Engineering available In-Reply-To: <88e473830704250918n6c7631f4w514d8dc68b3dde94@mail.gmail.com> References: <462F7922.7060704@astraw.com> <88e473830704250918n6c7631f4w514d8dc68b3dde94@mail.gmail.com> Message-ID: My CiSE article can be downloaded from here: http://www.siue.edu/~rkrauss/python_stuff.html Ryan On 4/25/07, John Hunter wrote: > On 4/25/07, Fernando Perez wrote: > > Since authors are allowed by their publication policy to keep a > > publicly available copy of their papers on their personal website, > > here's the ipython one: > > Didn't know that... here's a link to my matplotlib article > > http://nitace.bsd.uchicago.edu/misc/c3sci.pdf > > It might be nice to create a scipy wiki page linking to these PDFs. > > JDH > > ------------------------------------------------------------------------- > This SF.net email is sponsored by DB2 Express > Download DB2 Express C - the FREE version of DB2 express and take > control of your XML. No limits. Just data. Click to get it now. > http://sourceforge.net/powerbar/db2/ > _______________________________________________ > Matplotlib-users mailing list > Matplotlib-users at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/matplotlib-users > From fredmfp at gmail.com Wed Apr 25 18:02:44 2007 From: fredmfp at gmail.com (fred) Date: Thu, 26 Apr 2007 00:02:44 +0200 Subject: [SciPy-user] [f2py] hide ? Message-ID: <462FD004.9090507@gmail.com> Hi, I read the PerformancePython doc, and especially the example using f2py.
I'm trying to get my own example to work, but I can't :-( Here is the short Fortran code:
subroutine essai(n)
cf2py integer intent(hide) :: n
integer :: n
print *, 'n = ', n
end subroutine
After compiling it, I try this:
Python 2.4.4 (#2, Apr 5 2007, 20:11:18)
Type "copyright", "credits" or "license" for more information.
IPython 0.7.2 -- An enhanced Interactive Python.
? -> Introduction to IPython's features.
%magic -> Information about IPython's 'magic' % functions.
help -> Python's own help system.
object? -> Details about 'object'. ?object also works, ?? prints more.
In [1]: n=13
In [2]: import essai; essai.essai()
n = 0
In [3]:
What am I doing wrong ? I read & re-read the "Using f2py" section, I don't understand. Thanks in advance. Cheers, -- http://scipy.org/FredericPetit From robert.kern at gmail.com Wed Apr 25 18:05:40 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 25 Apr 2007 17:05:40 -0500 Subject: [SciPy-user] [f2py] hide ? In-Reply-To: <462FD004.9090507@gmail.com> References: <462FD004.9090507@gmail.com> Message-ID: <462FD0B4.7070304@gmail.com> fred wrote:
> Hi,
>
> I read the PerformancePython doc, and especially the example using f2py.
>
> I'm trying to get my own example to work, but I can't :-(
>
> Here is the short Fortran code:
>
> subroutine essai(n)
>
> cf2py integer intent(hide) :: n
>
> integer :: n
>
> print *, 'n = ', n
>
> end subroutine
>
> After compiling it, I try this:
>
> Python 2.4.4 (#2, Apr 5 2007, 20:11:18)
> Type "copyright", "credits" or "license" for more information.
>
> IPython 0.7.2 -- An enhanced Interactive Python.
> ? -> Introduction to IPython's features.
> %magic -> Information about IPython's 'magic' % functions.
> help -> Python's own help system.
> object? -> Details about 'object'. ?object also works, ?? prints more.
>
> In [1]: n=13
>
> In [2]: import essai; essai.essai()
> n = 0
>
> In [3]:
>
> What am I doing wrong ?
Variables "just sitting there" won't have any effect whatsoever inside the FORTRAN subroutine.
You would have to pass it in as an argument to essai(); it can't be hidden. intent(hide) is for situations where you can derive the value from other inputs, like the length of an array. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From stefan at sun.ac.za Wed Apr 25 18:54:57 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu, 26 Apr 2007 00:54:57 +0200 Subject: [SciPy-user] bug in pilutils? In-Reply-To: References: Message-ID: <20070425225457.GA1569@mentat.za.net> On Wed, Apr 25, 2007 at 02:17:29PM -0700, Yaroslav Bulatov wrote:
> In [12]: scipy.misc.fromimage(scipy.misc.toimage(20*ones((1,1))))
> Out[12]: array([[0]], dtype=uint8)
>
> No matter what the starting array is, converting it to image and then
> back to array gives me an array of 0's, but I expect to get the
> starting array
scipy.misc.toimage tries to estimate the range of the input image. If you supply data that is already of dtype N.uint8 it won't, i.e.
In [9]: scipy.misc.fromimage(scipy.misc.toimage(20*ones((1,1)).astype(uint8)))
Out[9]: array([[20]], dtype=uint8)
The following also illustrates the behaviour you are seeing:
In [15]: scipy.misc.fromimage(scipy.misc.toimage([[0,1,2]]))
Out[15]: array([[ 0, 127, 255]], dtype=uint8)
Regards Stéfan From fredmfp at gmail.com Wed Apr 25 18:59:37 2007 From: fredmfp at gmail.com (fred) Date: Thu, 26 Apr 2007 00:59:37 +0200 Subject: [SciPy-user] [f2py] hide ? In-Reply-To: <462FD0B4.7070304@gmail.com> References: <462FD004.9090507@gmail.com> <462FD0B4.7070304@gmail.com> Message-ID: <462FDD59.2040407@gmail.com> Robert Kern a écrit :
> Variables "just sitting there" won't have any effect whatsoever inside the
> FORTRAN subroutine. You would have to pass it in as an argument to essai(); it
> can't be hidden. intent(hide) is for situations where you can derive the value
> from other inputs, like the length of an array.
>
Oh, thanks ! I did not understand how it could be possible. But now, it's ok, thanks to you. Cheers, -- http://scipy.org/FredericPetit From fredmfp at gmail.com Thu Apr 26 03:12:02 2007 From: fredmfp at gmail.com (fred) Date: Thu, 26 Apr 2007 09:12:02 +0200 Subject: [SciPy-user] [f2py] hide ?
In-Reply-To: <462FD0B4.7070304@gmail.com> References: <462FD004.9090507@gmail.com> <462FD0B4.7070304@gmail.com> Message-ID: <463050C2.5070400@gmail.com> Robert Kern a écrit :
> Variables "just sitting there" won't have any effect whatsoever inside the
> FORTRAN subroutine. You would have to pass it in as an argument to essai(); it
> can't be hidden. intent(hide) is for situations where you can derive the value
> from other inputs, like the length of an array.
>
Hi, Ok, let's talk about arrays now ;-) The following sample code fails if I use the nvx arg in essai():
essai:nx=3
Traceback (most recent call last):
  File "./essai.py", line 22, in ?
    a = essai.essai(a, b, KW, nx, ny, nz, nvx)
essai.error: (shape(a,0)==nx) failed for 1st keyword nx
and works fine if not. What's wrong ? I'm thinking of an issue with memory allocation, but what's going on ? (wrong array dimensions ? argument order in the function call ?) It must be a stupid error, but where ? Thanks in advance. Cheers, -- http://scipy.org/FredericPetit -------------- next part -------------- A non-text attachment was scrubbed... Name: essai.py Type: text/x-python Size: 376 bytes Desc: not available URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: essai.f URL: From fredmfp at gmail.com Thu Apr 26 04:39:27 2007 From: fredmfp at gmail.com (fred) Date: Thu, 26 Apr 2007 10:39:27 +0200 Subject: [SciPy-user] solve for symetric matrix ? Message-ID: <4630653F.1090803@gmail.com> Hi, I use scipy.linalg.solve() to solve Au=b, where A is symmetric (and diagonal filled with 0). Is there a better (i.e. faster) method for this kind of matrix ? Thanks in advance. Cheers, -- http://scipy.org/FredericPetit From nwagner at iam.uni-stuttgart.de Thu Apr 26 04:47:34 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 26 Apr 2007 10:47:34 +0200 Subject: [SciPy-user] solve for symetric matrix ?
In-Reply-To: <4630653F.1090803@gmail.com> References: <4630653F.1090803@gmail.com> Message-ID: <46306726.8080904@iam.uni-stuttgart.de> fred wrote:
> Hi,
>
> I use scipy.linalg.solve() to solve Au=b, where A is symmetric (and
> diagonal filled with 0).
>
> Is there a better (i.e. faster) method for this kind of matrix ?
>
> Thanks in advance.
>
> Cheers,
>
If your matrix is also positive definite you may use
linalg.cho_factor
linalg.cho_solve
Cheers, Nils From fredmfp at gmail.com Thu Apr 26 05:01:51 2007 From: fredmfp at gmail.com (fred) Date: Thu, 26 Apr 2007 11:01:51 +0200 Subject: [SciPy-user] solve for symetric matrix ? In-Reply-To: <46306726.8080904@iam.uni-stuttgart.de> References: <4630653F.1090803@gmail.com> <46306726.8080904@iam.uni-stuttgart.de> Message-ID: <46306A7F.10500@gmail.com> Nils Wagner a écrit :
> If your matrix is also positive definite you may use
> linalg.cho_factor
> linalg.cho_solve
>
Hi Nils, Do you mean that using Cholesky decomposition plus solve methods is faster than simply using solve() ? Cheers, -- http://scipy.org/FredericPetit From nwagner at iam.uni-stuttgart.de Thu Apr 26 05:05:12 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 26 Apr 2007 11:05:12 +0200 Subject: [SciPy-user] solve for symetric matrix ? In-Reply-To: <46306A7F.10500@gmail.com> References: <4630653F.1090803@gmail.com> <46306726.8080904@iam.uni-stuttgart.de> <46306A7F.10500@gmail.com> Message-ID: <46306B48.3000301@iam.uni-stuttgart.de> fred wrote:
> Nils Wagner a écrit :
>
>> If your matrix is also positive definite you may use
>> linalg.cho_factor
>> linalg.cho_solve
>>
>
> Hi Nils,
>
> Do you mean that using Cholesky decomposition plus solve methods is faster
> than simply using solve() ?
>
> Cheers,
>
Hi Fred, Yes.
See http://en.wikipedia.org/wiki/Cholesky_decomposition#Computing_the_Cholesky_decomposition HTH, Nils From fredmfp at gmail.com Thu Apr 26 05:12:48 2007 From: fredmfp at gmail.com (fred) Date: Thu, 26 Apr 2007 11:12:48 +0200 Subject: [SciPy-user] solve for symetric matrix ? In-Reply-To: <46306B48.3000301@iam.uni-stuttgart.de> References: <4630653F.1090803@gmail.com> <46306726.8080904@iam.uni-stuttgart.de> <46306A7F.10500@gmail.com> <46306B48.3000301@iam.uni-stuttgart.de> Message-ID: <46306D10.3070305@gmail.com> Nils Wagner a écrit :
> Hi Fred,
>
> Yes.
> See
> http://en.wikipedia.org/wiki/Cholesky_decomposition#Computing_the_Cholesky_decomposition
>
Thanks Nils ! -- http://scipy.org/FredericPetit From nwagner at iam.uni-stuttgart.de Thu Apr 26 05:20:03 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 26 Apr 2007 11:20:03 +0200 Subject: [SciPy-user] solve for symetric matrix ? In-Reply-To: <46306D10.3070305@gmail.com> References: <4630653F.1090803@gmail.com> <46306726.8080904@iam.uni-stuttgart.de> <46306A7F.10500@gmail.com> <46306B48.3000301@iam.uni-stuttgart.de> <46306D10.3070305@gmail.com> Message-ID: <46306EC3.3050601@iam.uni-stuttgart.de> fred wrote:
> Nils Wagner a écrit :
>
>> Hi Fred,
>>
>> Yes.
>> See
>> http://en.wikipedia.org/wiki/Cholesky_decomposition#Computing_the_Cholesky_decomposition
>>
>
> Thanks Nils !
>
Hi Fred, If you can't believe it, run the attached script. I get:
python -i cholesky_lu.py
Elapsed time LU 0.291770935059
Residual 1.65732165386e-14
Elapsed time Cholesky 0.210819959641
Residual 1.8009241117e-14
Nils -------------- next part -------------- A non-text attachment was scrubbed... Name: cholesky_lu.py Type: text/x-python Size: 444 bytes Desc: not available URL: From fredmfp at gmail.com Thu Apr 26 05:33:28 2007 From: fredmfp at gmail.com (fred) Date: Thu, 26 Apr 2007 11:33:28 +0200 Subject: [SciPy-user] solve for symetric matrix ?
In-Reply-To: <46306B48.3000301@iam.uni-stuttgart.de> References: <4630653F.1090803@gmail.com> <46306726.8080904@iam.uni-stuttgart.de> <46306A7F.10500@gmail.com> <46306B48.3000301@iam.uni-stuttgart.de> Message-ID: <463071E8.6010605@gmail.com> Nils Wagner wrote: > fred wrote: > >> Nils Wagner wrote: >> >> >>> If your matrix is also positive definite you may use >>> linalg.cho_factor >>> linalg.cho_solve >>> By construction, they are positive definite. But just out of curiosity: I did not find any method to test it directly. Ok, I could compute eigvals, but is there another way ? Cheers, -- http://scipy.org/FredericPetit From fredmfp at gmail.com Thu Apr 26 05:42:20 2007 From: fredmfp at gmail.com (fred) Date: Thu, 26 Apr 2007 11:42:20 +0200 Subject: [SciPy-user] solve for symetric matrix ? In-Reply-To: <46306EC3.3050601@iam.uni-stuttgart.de> References: <4630653F.1090803@gmail.com> <46306726.8080904@iam.uni-stuttgart.de> <46306A7F.10500@gmail.com> <46306B48.3000301@iam.uni-stuttgart.de> <46306D10.3070305@gmail.com> <46306EC3.3050601@iam.uni-stuttgart.de> Message-ID: <463073FC.6010000@gmail.com> Nils Wagner wrote: > fred wrote: > >> Nils Wagner wrote: >> >> >>> Hi Fred, >>> >>> Yes. >>> See >>> http://en.wikipedia.org/wiki/Cholesky_decomposition#Computing_the_Cholesky_decomposition >>> >>> >>> >> Thanks Nils ! >> >> >> > Hi Fred, > > If you can't believe it run the attached script. > Oh, I believe you, Nils ! ;-) > I get > python -i cholesky_lu.py > Elapsed time LU 0.291770935059 > Residual 1.65732165386e-14 > Elapsed time Cholesky 0.210819959641 > Residual 1.8009241117e-14 > And me :-) Elapsed time LU 0.424935102463 Residual 5.81146345637e-15 Elapsed time Cholesky 0.253962039948 Residual 5.74426431766e-15 You labelled it 'LU' but used the solve() method. Do you mean that solve() in fact uses an LU decomposition ?
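[Editor's note] The cholesky_lu.py attachment was scrubbed by the archiver. A sketch in the same spirit as the timings quoted above; the matrix size and construction are guesses:

```python
import time

import numpy as np
from scipy import linalg

n = 1000
rng = np.random.default_rng(0)
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)    # symmetric positive definite
b = rng.standard_normal(n)

# General solver: LU factorization under the hood
t0 = time.time()
x_lu = linalg.solve(A, b)
print('Elapsed time LU      ', time.time() - t0)
print('Residual', np.linalg.norm(A @ x_lu - b))

# Cholesky route: factor, then triangular solves
t0 = time.time()
c, low = linalg.cho_factor(A)
x_ch = linalg.cho_solve((c, low), b)
print('Elapsed time Cholesky', time.time() - t0)
print('Residual', np.linalg.norm(A @ x_ch - b))
```

Cholesky needs roughly n^3/3 flops against 2n^3/3 for LU, which is consistent with the factor-of-two-ish gap in the numbers quoted above.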
PS : what cpu do you use (just for comparison ;-) Cheers, -- http://scipy.org/FredericPetit From nwagner at iam.uni-stuttgart.de Thu Apr 26 05:43:54 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 26 Apr 2007 11:43:54 +0200 Subject: [SciPy-user] solve for symetric matrix ? In-Reply-To: <463071E8.6010605@gmail.com> References: <4630653F.1090803@gmail.com> <46306726.8080904@iam.uni-stuttgart.de> <46306A7F.10500@gmail.com> <46306B48.3000301@iam.uni-stuttgart.de> <463071E8.6010605@gmail.com> Message-ID: <4630745A.3050900@iam.uni-stuttgart.de> fred wrote: > Nils Wagner a ?crit : > >> fred wrote: >> >> >>> Nils Wagner a ?crit : >>> >>> >>> >>>> If your matrix is also positive definite you may use >>>> linalg.cho_factor >>>> linalg.cho_solve >>>> >>>> > By construction, they are positive definite. > > But just for curiosity, I did not find any method to test directly it. > > Ok, I could compute eigvals but other idea ? > > Cheers, > > Hi Fred, If the Cholesky decomposition doesn't exist, the matrix is not positive definite. For further information, see S.~M. Rump, Verification of positive definiteness, BIT Numerical Mathematics, 46:433--452, 2006. http://www.ti3.tu-harburg.de/cgi-bin/smrbibsearch/publications/ti3.html?author=rump Cheers, Nils From nwagner at iam.uni-stuttgart.de Thu Apr 26 05:46:30 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 26 Apr 2007 11:46:30 +0200 Subject: [SciPy-user] solve for symetric matrix ? In-Reply-To: <463073FC.6010000@gmail.com> References: <4630653F.1090803@gmail.com> <46306726.8080904@iam.uni-stuttgart.de> <46306A7F.10500@gmail.com> <46306B48.3000301@iam.uni-stuttgart.de> <46306D10.3070305@gmail.com> <46306EC3.3050601@iam.uni-stuttgart.de> <463073FC.6010000@gmail.com> Message-ID: <463074F6.4030000@iam.uni-stuttgart.de> fred wrote: > Nils Wagner a ?crit : > >> fred wrote: >> >> >>> Nils Wagner a ?crit : >>> >>> >>> >>>> Hi Fred, >>>> >>>> Yes. 
>>>> See >>>> http://en.wikipedia.org/wiki/Cholesky_decomposition#Computing_the_Cholesky_decomposition >>>> >>>> >>>> >>>> >>> Thanks Nils ! >>> >>> >>> >>> >> Hi Fred, >> >> If you can't believe it run the attached script. >> >> > Oh, I believe you, Nils ! ;-) > >> I get >> python -i cholesky_lu.py >> Elapsed time LU 0.291770935059 >> Residual 1.65732165386e-14 >> Elapsed time Cholesky 0.210819959641 >> Residual 1.8009241117e-14 >> >> > And me :-) > Elapsed time LU 0.424935102463 > Residual 5.81146345637e-15 > Elapsed time Cholesky 0.253962039948 > Residual 5.74426431766e-15 > > You titled 'LU' and used solve() method. > Yes solve() is for general matrices and based on an LU factorization. > Do you mean that solve() use in fact LU decomposition ? > > PS : what cpu do you use (just for comparison ;-) > > processor : 0 vendor_id : AuthenticAMD cpu family : 15 model : 47 model name : AMD Athlon(tm) 64 Processor 3200+ stepping : 2 cpu MHz : 2000.138 cache size : 512 KB fpu : yes fpu_exception : yes cpuid level : 1 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt lm 3dnowext 3dnow pni lahf_lm bogomips : 4009.73 TLB size : 1024 4K pages clflush size : 64 cache_alignment : 64 address sizes : 40 bits physical, 48 bits virtual power management: ts fid vid ttp tm stc > Cheers, > > From fredmfp at gmail.com Thu Apr 26 06:15:17 2007 From: fredmfp at gmail.com (fred) Date: Thu, 26 Apr 2007 12:15:17 +0200 Subject: [SciPy-user] solve for symetric matrix ? In-Reply-To: <4630745A.3050900@iam.uni-stuttgart.de> References: <4630653F.1090803@gmail.com> <46306726.8080904@iam.uni-stuttgart.de> <46306A7F.10500@gmail.com> <46306B48.3000301@iam.uni-stuttgart.de> <463071E8.6010605@gmail.com> <4630745A.3050900@iam.uni-stuttgart.de> Message-ID: <46307BB5.8020001@gmail.com> Nils Wagner a ?crit : > If the Cholesky decomposition doesn't exist, the matrix is not positive > definite. 
> Yes, but you catch an exception if it is not positive definite (ok, you can still use try/except). I was thinking of something similar to testing the sign of the eigenvalues, for instance. Cheers, -- http://scipy.org/FredericPetit From fredmfp at gmail.com Thu Apr 26 06:16:14 2007 From: fredmfp at gmail.com (fred) Date: Thu, 26 Apr 2007 12:16:14 +0200 Subject: [SciPy-user] solve for symetric matrix ? In-Reply-To: <463074F6.4030000@iam.uni-stuttgart.de> References: <4630653F.1090803@gmail.com> <46306726.8080904@iam.uni-stuttgart.de> <46306A7F.10500@gmail.com> <46306B48.3000301@iam.uni-stuttgart.de> <46306D10.3070305@gmail.com> <46306EC3.3050601@iam.uni-stuttgart.de> <463073FC.6010000@gmail.com> <463074F6.4030000@iam.uni-stuttgart.de> Message-ID: <46307BEE.7000000@gmail.com> Nils Wagner wrote: > Yes solve() is for general matrices and based on an LU factorization. > Ok, thanks. -- http://scipy.org/FredericPetit From cimrman3 at ntc.zcu.cz Thu Apr 26 06:18:56 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 26 Apr 2007 12:18:56 +0200 Subject: [SciPy-user] solve for symetric matrix ? In-Reply-To: <46306EC3.3050601@iam.uni-stuttgart.de> References: <4630653F.1090803@gmail.com> <46306726.8080904@iam.uni-stuttgart.de> <46306A7F.10500@gmail.com> <46306B48.3000301@iam.uni-stuttgart.de> <46306D10.3070305@gmail.com> <46306EC3.3050601@iam.uni-stuttgart.de> Message-ID: <46307C90.8060303@ntc.zcu.cz> Nils Wagner wrote: > fred wrote: >> Nils Wagner wrote: >> >>> Hi Fred, >>> >>> Yes. >>> See >>> http://en.wikipedia.org/wiki/Cholesky_decomposition#Computing_the_Cholesky_decomposition >>> >>> >> Thanks Nils ! >> >> > Hi Fred, > > If you can't believe it run the attached script. > I get > python -i cholesky_lu.py > Elapsed time LU 0.291770935059 > Residual 1.65732165386e-14 > Elapsed time Cholesky 0.210819959641 > Residual 1.8009241117e-14 > ... > t_0 = time.time() > x_lu = linalg.solve(A,b) In [7]:linalg.solve?
Definition: la.solve(a, b, sym_pos=0, lower=0, overwrite_a=0, overwrite_b=0, debug=0) Docstring: solve(a, b, sym_pos=0, lower=0, overwrite_a=0, overwrite_b=0) -> x ... sym_pos -- Assume a is symmetric and positive definite. In [8]:scipy.__version__ Out[8]:'0.5.3.dev2935' Have you tried sym_pos=True? solve should then use Cholesky... r. From fredmfp at gmail.com Thu Apr 26 06:32:35 2007 From: fredmfp at gmail.com (fred) Date: Thu, 26 Apr 2007 12:32:35 +0200 Subject: [SciPy-user] solve for symetric matrix ? In-Reply-To: <46307C90.8060303@ntc.zcu.cz> References: <4630653F.1090803@gmail.com> <46306726.8080904@iam.uni-stuttgart.de> <46306A7F.10500@gmail.com> <46306B48.3000301@iam.uni-stuttgart.de> <46306D10.3070305@gmail.com> <46306EC3.3050601@iam.uni-stuttgart.de> <46307C90.8060303@ntc.zcu.cz> Message-ID: <46307FC3.8030206@gmail.com> Robert Cimrman a ?crit : > Have you tried sym_pos=True? solve should then use Cholesky... > Unhappily, my colleague told me that the matrices were positive definite. and I did not verify that before. But in fact, he was confused. They are negative definite (kriging matrices). Bad news :-( Thanks anyway ! -- http://scipy.org/FredericPetit From Ronan.Perrussel at ec-lyon.fr Thu Apr 26 06:35:29 2007 From: Ronan.Perrussel at ec-lyon.fr (Ronan Perrussel) Date: Thu, 26 Apr 2007 12:35:29 +0200 Subject: [SciPy-user] solve for symetric matrix ? In-Reply-To: <46307FC3.8030206@gmail.com> References: <4630653F.1090803@gmail.com> <46306726.8080904@iam.uni-stuttgart.de> <46306A7F.10500@gmail.com> <46306B48.3000301@iam.uni-stuttgart.de> <46306D10.3070305@gmail.com> <46306EC3.3050601@iam.uni-stuttgart.de> <46307C90.8060303@ntc.zcu.cz> <46307FC3.8030206@gmail.com> Message-ID: <46308071.6080300@ec-lyon.fr> fred a ?crit : > But in fact, he was confused. They are negative definite (kriging matrices). > Bad news :-( Then you can solve with -A instead of A? Ronan -------------- next part -------------- A non-text attachment was scrubbed... 
Name: ronan.perrussel.vcf Type: text/x-vcard Size: 382 bytes Desc: not available URL: From fredmfp at gmail.com Thu Apr 26 06:52:42 2007 From: fredmfp at gmail.com (fred) Date: Thu, 26 Apr 2007 12:52:42 +0200 Subject: [SciPy-user] solve for symetric matrix ? In-Reply-To: <46308071.6080300@ec-lyon.fr> References: <4630653F.1090803@gmail.com> <46306726.8080904@iam.uni-stuttgart.de> <46306A7F.10500@gmail.com> <46306B48.3000301@iam.uni-stuttgart.de> <46306D10.3070305@gmail.com> <46306EC3.3050601@iam.uni-stuttgart.de> <46307C90.8060303@ntc.zcu.cz> <46307FC3.8030206@gmail.com> <46308071.6080300@ec-lyon.fr> Message-ID: <4630847A.40702@gmail.com> Ronan Perrussel a ?crit : > fred a ?crit : >> But in fact, he was confused. They are negative definite (kriging >> matrices). >> Bad news :-( > Then you can solve with -A instead of A? I tried it, of course ;-) But cho_factor() rejects -A too. One of the definition of the positive definite (if I understand right) is that the eigenvalues must be positive. So in my case, as A has negative egv, -A has positive ones. Although -A has positive ones, cho_factor() raise error. -- http://scipy.org/FredericPetit From dahl.joachim at gmail.com Thu Apr 26 06:57:40 2007 From: dahl.joachim at gmail.com (Joachim Dahl) Date: Thu, 26 Apr 2007 12:57:40 +0200 Subject: [SciPy-user] solve for symetric matrix ? In-Reply-To: <4630653F.1090803@gmail.com> References: <4630653F.1090803@gmail.com> Message-ID: <47347f490704260357n1da1d279g4ef3f19abeac8c3a@mail.gmail.com> On 4/26/07, fred wrote: > > Hi, > > I use scipy.linalg.solve() to solve Au=b, where A is symetric (and > diagonal filled with 0). > > If the diagonal is 0, then the matrix is not definite (so, neither is -A)... -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Thu Apr 26 07:31:53 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 26 Apr 2007 13:31:53 +0200 Subject: [SciPy-user] solve for symetric matrix ? 
In-Reply-To: <47347f490704260357n1da1d279g4ef3f19abeac8c3a@mail.gmail.com> References: <4630653F.1090803@gmail.com> <47347f490704260357n1da1d279g4ef3f19abeac8c3a@mail.gmail.com> Message-ID: <46308DA9.5050007@iam.uni-stuttgart.de> Joachim Dahl wrote: > > > On 4/26/07, *fred* > wrote: > > Hi, > > I use scipy.linalg.solve() to solve Au=b, where A is symetric (and > diagonal filled with 0). > > > If the diagonal is 0, then the matrix is not definite (so, neither is > -A)... > > > I missed the main information (zero diagonal) Are you aware of a simple proof that a symmetric matrix with zero diagonal entries is indefinite ? Anyway it would be great if scipy has a solver for symmetric indefinite systems e.g. the LAPACK routine http://www.netlib.org/lapack/double/dsysv.f AFAIK, this subroutine is currently not available via linalg.flapack. Nils > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From fredmfp at gmail.com Thu Apr 26 07:37:31 2007 From: fredmfp at gmail.com (fred) Date: Thu, 26 Apr 2007 13:37:31 +0200 Subject: [SciPy-user] solve for symetric matrix ? In-Reply-To: <47347f490704260357n1da1d279g4ef3f19abeac8c3a@mail.gmail.com> References: <4630653F.1090803@gmail.com> <47347f490704260357n1da1d279g4ef3f19abeac8c3a@mail.gmail.com> Message-ID: <46308EFB.8030101@gmail.com> Joachim Dahl a ?crit : > If the diagonal is 0, then the matrix is not definite (so, neither is > -A)... Uh...., so... is there any peculiar method to solve symetric matrices with 0 on the diagonal ? Cheers, -- http://scipy.org/FredericPetit From dahl.joachim at gmail.com Thu Apr 26 07:40:11 2007 From: dahl.joachim at gmail.com (Joachim Dahl) Date: Thu, 26 Apr 2007 13:40:11 +0200 Subject: [SciPy-user] solve for symetric matrix ? 
In-Reply-To: <46308DA9.5050007@iam.uni-stuttgart.de> References: <4630653F.1090803@gmail.com> <47347f490704260357n1da1d279g4ef3f19abeac8c3a@mail.gmail.com> <46308DA9.5050007@iam.uni-stuttgart.de> Message-ID: <47347f490704260440k44ee0b16x591e8b4f54f2b986@mail.gmail.com> On 4/26/07, Nils Wagner wrote: > > Joachim Dahl wrote: > > > > > > On 4/26/07, *fred* > wrote: > > > > Hi, > > > > I use scipy.linalg.solve() to solve Au=b, where A is symetric (and > > diagonal filled with 0). > > > > > > If the diagonal is 0, then the matrix is not definite (so, neither is > > -A)... > > > > > > > I missed the main information (zero diagonal) > Are you aware of a simple proof that a symmetric matrix with zero > diagonal entries is indefinite ? if diag(A)=0 then ei'*A*ei = 0 so A is not positive definite. Same goes for -A, so A is not negative definite either. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Thu Apr 26 07:41:36 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 26 Apr 2007 13:41:36 +0200 Subject: [SciPy-user] solve for symetric matrix ? In-Reply-To: <46308EFB.8030101@gmail.com> References: <4630653F.1090803@gmail.com> <47347f490704260357n1da1d279g4ef3f19abeac8c3a@mail.gmail.com> <46308EFB.8030101@gmail.com> Message-ID: <46308FF0.8090907@iam.uni-stuttgart.de> fred wrote: > Joachim Dahl a ?crit : > >> If the diagonal is 0, then the matrix is not definite (so, neither is >> -A)... >> > Uh...., so... is there any peculiar method to solve symetric matrices > with 0 on the diagonal ? > > Cheers, > > Fred, See my previous mail. You are looking for a solver for symmetric indefinite systems. Should I file a ticket (task/enhancement) for a symmetric indefinite solver ? 
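[Editor's note] Nils's wish here was eventually granted: SciPy 0.19 and later expose the LAPACK ?SYTRF Bunch-Kaufman factorization as scipy.linalg.ldl, which handles exactly these symmetric indefinite matrices, and the dsysv driver is wrapped in recent versions as scipy.linalg.lapack.dsysv. A sketch with the modern API, not available in the 2007-era SciPy discussed here:

```python
import numpy as np
from scipy.linalg import ldl

# A symmetric indefinite matrix with a zero diagonal, the case from the thread
A = np.array([[0., 1., 2.],
              [1., 0., 1.],
              [2., 1., 0.]])

# Bunch-Kaufman: A = L D L^T with D block diagonal (1x1 and 2x2 blocks),
# no positive definiteness required
lu, d, perm = ldl(A)

print(np.allclose(lu @ d @ lu.T, A))   # True: the factors reconstruct A
```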
Nils From nwagner at iam.uni-stuttgart.de Thu Apr 26 07:58:38 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 26 Apr 2007 13:58:38 +0200 Subject: [SciPy-user] dense versus sparse Message-ID: <463093EE.6030405@iam.uni-stuttgart.de> Hi, How can I "translate" the last line of the dense version #S=-diag(ones(n)) S=-identity(n) S[:,-1] = S[:,-1] + ones(n) into a sparse version #S= -sparse.spdiags([ones(n)],[0],n,n) S= -sparse.speye(n,n) ... Nils From dahl.joachim at gmail.com Thu Apr 26 08:00:45 2007 From: dahl.joachim at gmail.com (Joachim Dahl) Date: Thu, 26 Apr 2007 14:00:45 +0200 Subject: [SciPy-user] solve for symetric matrix ? In-Reply-To: <46308FF0.8090907@iam.uni-stuttgart.de> References: <4630653F.1090803@gmail.com> <47347f490704260357n1da1d279g4ef3f19abeac8c3a@mail.gmail.com> <46308EFB.8030101@gmail.com> <46308FF0.8090907@iam.uni-stuttgart.de> Message-ID: <47347f490704260500g7c67604fj2183a394090921c6@mail.gmail.com> On 4/26/07, Nils Wagner wrote: > > fred wrote: > > Joachim Dahl a ?crit : > > > >> If the diagonal is 0, then the matrix is not definite (so, neither is > >> -A)... > >> > > Uh...., so... is there any peculiar method to solve symetric matrices > > with 0 on the diagonal ? > > > > Cheers, > > > > > Fred, > > See my previous mail. You are looking for a solver for symmetric > indefinite systems. > Should I file a ticket (task/enhancement) for a symmetric indefinite > solver ? > > Nils > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > A good free LDL' solver is not easy to find, I think... If someone knows of one, please post a pointer. Joachim -------------- next part -------------- An HTML attachment was scrubbed... URL: From fredmfp at gmail.com Thu Apr 26 08:46:29 2007 From: fredmfp at gmail.com (fred) Date: Thu, 26 Apr 2007 14:46:29 +0200 Subject: [SciPy-user] solve for symetric matrix ? 
In-Reply-To: <47347f490704260440k44ee0b16x591e8b4f54f2b986@mail.gmail.com> References: <4630653F.1090803@gmail.com> <47347f490704260357n1da1d279g4ef3f19abeac8c3a@mail.gmail.com> <46308DA9.5050007@iam.uni-stuttgart.de> <47347f490704260440k44ee0b16x591e8b4f54f2b986@mail.gmail.com> Message-ID: <46309F25.8000707@gmail.com> Joachim Dahl a ?crit : > if diag(A)=0 then ei'*A*ei = 0 so A is not positive definite. Same > goes for -A, so A is > not negative definite either. > Uh, I surely misunderstand something but what ?.... diag(A) = 0 but b'Ab != 0 cat test.py #! /usr/bin/env python #-*- coding: iso-8859-15 -*- from scipy import * from scipy.linalg import * n = 5 A = zeros((n,n)) b = rand(n) A[0,:] = range(5) A[1,1:] = range(4) A[2,2:] = range(3) A[3,3:] = range(2) for i in range(1,n): for j in range(0,i): A[i,j] = A[j,i] print A print inner(inner(b.transpose(), A), b) ./test.py [[ 0. 1. 2. 3. 4.] [ 1. 0. 1. 2. 3.] [ 2. 1. 0. 1. 2.] [ 3. 2. 1. 0. 1.] [ 4. 3. 2. 1. 0.]] 16.9365786993 -- http://scipy.org/FredericPetit From fredmfp at gmail.com Thu Apr 26 08:50:29 2007 From: fredmfp at gmail.com (fred) Date: Thu, 26 Apr 2007 14:50:29 +0200 Subject: [SciPy-user] [f2py] hide ? In-Reply-To: <463050C2.5070400@gmail.com> References: <462FD004.9090507@gmail.com> <462FD0B4.7070304@gmail.com> <463050C2.5070400@gmail.com> Message-ID: <4630A015.2090200@gmail.com> fred a ?crit : > > Ok, let's talk about array now ;-) > > The following sample code fails if I use nvx arg in essai(), Re, Nobody can help me on this issue, please ? Cheers, -- http://scipy.org/FredericPetit From Ronan.Perrussel at ec-lyon.fr Thu Apr 26 09:01:07 2007 From: Ronan.Perrussel at ec-lyon.fr (Ronan Perrussel) Date: Thu, 26 Apr 2007 15:01:07 +0200 Subject: [SciPy-user] solve for symetric matrix ? 
In-Reply-To: <46309F25.8000707@gmail.com> References: <4630653F.1090803@gmail.com> <47347f490704260357n1da1d279g4ef3f19abeac8c3a@mail.gmail.com> <46308DA9.5050007@iam.uni-stuttgart.de> <47347f490704260440k44ee0b16x591e8b4f54f2b986@mail.gmail.com> <46309F25.8000707@gmail.com> Message-ID: <4630A293.4080701@ec-lyon.fr> fred wrote: > Joachim Dahl wrote: > >> if diag(A)=0 then ei'*A*ei = 0 so A is not positive definite. Same >> goes for -A, so A is >> not negative definite either. > Uh, I surely misunderstand something, but what ?.... > ei is a vector of the canonical basis of R^n. For instance, e1 = (1, 0, ..., 0)' and e1'*A*e1 = A_{11} = 0. If A is positive definite, it should be strictly positive. (As you can see, all the coefficients of your diagonal would have to be strictly positive.) Ronan -------------- next part -------------- A non-text attachment was scrubbed... Name: ronan.perrussel.vcf Type: text/x-vcard Size: 382 bytes Desc: not available URL: From fredmfp at gmail.com Thu Apr 26 09:28:50 2007 From: fredmfp at gmail.com (fred) Date: Thu, 26 Apr 2007 15:28:50 +0200 Subject: [SciPy-user] solve for symetric matrix ? In-Reply-To: <4630A293.4080701@ec-lyon.fr> References: <4630653F.1090803@gmail.com> <47347f490704260357n1da1d279g4ef3f19abeac8c3a@mail.gmail.com> <46308DA9.5050007@iam.uni-stuttgart.de> <47347f490704260440k44ee0b16x591e8b4f54f2b986@mail.gmail.com> <46309F25.8000707@gmail.com> <4630A293.4080701@ec-lyon.fr> Message-ID: <4630A912.6030700@gmail.com> Ronan Perrussel wrote: > fred wrote: >> Joachim Dahl wrote: >> >>> if diag(A)=0 then ei'*A*ei = 0 so A is not positive definite. >>> Same goes for -A, so A is >>> not negative definite either. >> Uh, I surely misunderstand something, but what ?.... >> > ei is a vector of the canonical basis of R^n. > For instance, e1 = (1, 0, ..., 0)' and e1'*A*e1 = A_{11} = 0. > If A is positive definite, it should be strictly positive. That's ok. Just a typo in the (French) wiki page on positive definite matrices.
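[Editor's note] Ronan's argument, checked numerically against the zero-diagonal matrix from fred's test.py earlier in the thread:

```python
import numpy as np

# fred's symmetric test matrix with a zero diagonal
A = np.array([[0., 1., 2., 3., 4.],
              [1., 0., 1., 2., 3.],
              [2., 1., 0., 1., 2.],
              [3., 2., 1., 0., 1.],
              [4., 3., 2., 1., 0.]])

# A random b can give b'Ab > 0, but definiteness requires v'Av > 0 for
# EVERY nonzero v.  The canonical basis vector e1 already gives zero:
e1 = np.zeros(5)
e1[0] = 1.0
q = e1 @ A @ e1          # equals A[0, 0]
print(q)                 # 0.0: rules out positive (and negative) definiteness

w = np.linalg.eigvalsh(A)
print(w.min() < 0 < w.max())   # True: eigenvalues of both signs, indefinite
```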
Thanks. -- http://scipy.org/FredericPetit From david.huard at gmail.com Thu Apr 26 09:57:37 2007 From: david.huard at gmail.com (David Huard) Date: Thu, 26 Apr 2007 09:57:37 -0400 Subject: [SciPy-user] [f2py] hide ? In-Reply-To: <4630A015.2090200@gmail.com> References: <462FD004.9090507@gmail.com> <462FD0B4.7070304@gmail.com> <463050C2.5070400@gmail.com> <4630A015.2090200@gmail.com> Message-ID: <91cf711d0704260657r12d785f1ve7d0f78d762dcc75@mail.gmail.com> Here are some things that could be wrong. 1. You declare a as intent(in, out), this means that a is modified in place, ie, the subroutine does not return a value. >>> essai.essai(a, b, KW, nx, ny, nz) will hence return none. 2. Declare the dimension explicitely in the cf2py comments cf2py integer dimension(nx,ny,nz), intent(in) :: b 3. If you want to hide the shape integers, you need to tell it to. cf2py integer, intent(hide) :: nx=shape(b,0) cf2py integer, intent(hide) :: ny=shape(b,1) cf2py integer, intent(hide) :: nz=shape(b,2) My advice would be to start with the plain fortran function without cf2py comments, and add them one at a time. David 2007/4/26, fred : > > fred a ?crit : > > > > Ok, let's talk about array now ;-) > > > > The following sample code fails if I use nvx arg in essai(), > Re, > > Nobody can help me on this issue, please ? > > Cheers, > > -- > http://scipy.org/FredericPetit > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fredmfp at gmail.com Thu Apr 26 10:10:53 2007 From: fredmfp at gmail.com (fred) Date: Thu, 26 Apr 2007 16:10:53 +0200 Subject: [SciPy-user] [f2py] hide ? 
In-Reply-To: <91cf711d0704260657r12d785f1ve7d0f78d762dcc75@mail.gmail.com> References: <462FD004.9090507@gmail.com> <462FD0B4.7070304@gmail.com> <463050C2.5070400@gmail.com> <4630A015.2090200@gmail.com> <91cf711d0704260657r12d785f1ve7d0f78d762dcc75@mail.gmail.com> Message-ID: <4630B2ED.1060409@gmail.com> David Huard a ?crit : > Here are some things that could be wrong. > Hi David, > 1. You declare a as intent(in, out), this means that a is modified in > place, ie, the subroutine does not return a value. > >>> essai.essai(a, b, KW, nx, ny, nz) > will hence return none. > essai.essai(a, b, KW, nx, ny, nz) does return the right value because it works fine ;-) > 2. Declare the dimension explicitely in the cf2py comments > cf2py integer dimension(nx,ny,nz), intent(in) :: b > I did it. Nothing changes. And the PerformancePython does not mention this. > 3. If you want to hide the shape integers, you need to tell it to. > cf2py integer, intent(hide) :: nx=shape(b,0) > cf2py integer, intent(hide) :: ny=shape(b,1) > cf2py integer, intent(hide) :: nz=shape(b,2) > I'll see this later, when my sample code works fine ;-) Cheers, -- http://scipy.org/FredericPetit From gnurser at googlemail.com Thu Apr 26 10:12:35 2007 From: gnurser at googlemail.com (George Nurser) Date: Thu, 26 Apr 2007 15:12:35 +0100 Subject: [SciPy-user] [f2py] hide ? In-Reply-To: <4630A015.2090200@gmail.com> References: <462FD004.9090507@gmail.com> <462FD0B4.7070304@gmail.com> <463050C2.5070400@gmail.com> <4630A015.2090200@gmail.com> Message-ID: <1d1e6ea70704260712v4e8fecdo2355ef07bb76af9f@mail.gmail.com> On 26/04/07, fred wrote: > fred a ?crit : > > > > Ok, let's talk about array now ;-) > > > > The following sample code fails if I use nvx arg in essai(), > Re, > Fred, The problem is that because nx,ny and ny have become optional keyword arguments (they will automatically be set by the shape of a), they have to go /after/ nvx. 
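[Editor's note] George's point is the usual Python calling convention: dimensions that f2py infers from the arrays become trailing optional keyword arguments. The generated wrapper behaves roughly like the pure-Python sketch below; essai and its argument names come from fred's example, and the body is a placeholder standing in for the compiled routine:

```python
import numpy as np

def essai(a, b, kw, nvx, nx=None, ny=None, nz=None):
    """Morally the signature f2py generates: nx, ny, nz are optional
    keyword arguments defaulted from a.shape, so the required nvx must
    come before them in the call."""
    if nx is None:
        nx = a.shape[0]
    if ny is None:
        ny = a.shape[1]
    if nz is None:
        nz = a.shape[2]
    # Placeholder: the real routine would operate on a and b here.
    return nx, ny, nz

a = np.ones((2, 3, 4))
b = np.ones((2, 3, 4))

print(essai(a, b, 2.0, 7))                      # (2, 3, 4): dims inferred
print(essai(a, b, 2.0, 7, nx=2, ny=3, nz=4))    # (2, 3, 4): dims explicit
```

Calling essai(a, b, kw, nx, ny, nz, nvx), the original failing order, would bind nx to the slot meant for nvx, which is why the dimensions have to go after it.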
Try doing: print essai.__doc__ in order to get the information of how to call essai.essai Either do. 1) essai.essai(a,b,kw,nvx) or 2) essai.essai(a,b,kw,nvx,nx=nx,ny=ny,nz=nz) where a.shape = (nx,ny,nz) or 3) essai.essai(a,b,kw,nvx,nx,ny,nz) HTH. George Nurser. From fredmfp at gmail.com Thu Apr 26 10:58:35 2007 From: fredmfp at gmail.com (fred) Date: Thu, 26 Apr 2007 16:58:35 +0200 Subject: [SciPy-user] [f2py] hide ? In-Reply-To: <1d1e6ea70704260712v4e8fecdo2355ef07bb76af9f@mail.gmail.com> References: <462FD004.9090507@gmail.com> <462FD0B4.7070304@gmail.com> <463050C2.5070400@gmail.com> <4630A015.2090200@gmail.com> <1d1e6ea70704260712v4e8fecdo2355ef07bb76af9f@mail.gmail.com> Message-ID: <4630BE1B.9050404@gmail.com> George Nurser a ?crit : > Fred, > The problem is that because nx,ny and ny have become optional keyword > arguments (they will automatically be set by the shape of a), they > have to go /after/ nvx. > !!! Good shot, George ! :-)))) Thanks a lot ! -- http://scipy.org/FredericPetit From t_crane at mrl.uiuc.edu Thu Apr 26 11:06:56 2007 From: t_crane at mrl.uiuc.edu (Trevis Crane) Date: Thu, 26 Apr 2007 10:06:56 -0500 Subject: [SciPy-user] question about ODE output and time steps Message-ID: <9EADC1E53F9C70479BF65593703691141343DC@mrlnt6.mrl.uiuc.edu> Hi all, I'm new to python and want to start using it as a replacement for the too-expensive MATLAB. To that end, I've been playing around with the ode solver, as it's a central feature of a simulation I'm working on now in MATLAB. I finally got the solver working, but there are some things about it that seem cumbersome or not ideal. I'm hoping you can help me figure out a better way of implementing it... First, my test code is attached. It's a system of three coupled equations, an example taken from the MATLAB documentation on their ode45 function. 1) The Y output from ode seems to be of type array. 
That is, I'm solving a system of three coupled equations, so for each iteration the solver generates an array of three elements as well as the time at that iteration. So, as you can see in my code, I append the output from each iteration. This ends up giving me a list of three-element arrays. This seems to be a rather cumbersome way of organizing the output and makes plotting it rather laborious. The only way I've been able to plot it is by running a for-loop in which I populate three other lists (y1,y2,y3) with the appropriate values from the ode solver. Then I can plot it (see attached code). As I said, this seems overly-laborious. Any suggestions? 2) The basic solver scipy.integrate.ode requires you to specify the time step. I would prefer a solver with an adaptive time step algorithm. What do you suggest I do for this? Any help is appreciated. thanks, trevis ________________________________________________ Trevis Crane Postdoctoral Research Assoc. Department of Physics University of Ilinois 1110 W. Green St. Urbana, IL 61801 p: 217-244-8652 f: 217-244-2278 e: tcrane at uiuc.edu ________________________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: odeTest.py Type: application/octet-stream Size: 635 bytes Desc: odeTest.py URL: From anand at soe.ucsc.edu Thu Apr 26 11:11:11 2007 From: anand at soe.ucsc.edu (Anand Patil) Date: Thu, 26 Apr 2007 08:11:11 -0700 Subject: [SciPy-user] Solve for symmetric matrix? In-Reply-To: References: Message-ID: <4630C10F.4060409@cse.ucsc.edu> If the matrix has a zero eigenvalue, it has a nullspace so it's singular so you can't really invert it. However, if you know in advance that the RHS vector is orthogonal to the nullspace you can solve for the remaining components using a low-rank submatrix. Fred, if this is a kriging problem I've dealt with it in a Gaussian process module I'm writing. 
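[Editor's note] Anand's remark, sketched numerically: when a symmetric A is singular but b is orthogonal to the nullspace (equivalently, b lies in the range of A), an SVD-based least-squares solve still produces an exact, minimum-norm solution. The 3x3 example is made up:

```python
import numpy as np

# Singular symmetric system: plain solve() would fail (A has a nullspace),
# but the system is still consistent because b is in the range of A.
A = np.array([[1., 1., 0.],
              [1., 1., 0.],
              [0., 0., 2.]])     # rank 2, nullspace spanned by (1, -1, 0)
b = np.array([2., 2., 4.])       # orthogonal to (1, -1, 0)

# SVD-based solver: reports the numerical rank and returns the
# minimum-norm solution
x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)

print(rank)                    # 2: one zero singular value detected
print(np.allclose(A @ x, b))   # True: the consistent system is solved exactly
```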
It's not even close to stable yet, but get it from code.google.com/p/gaussian-process. Please email me if you've got any questions, comments or especially interest in helping out. :) For purposes of the ticket, there's a Linpack routine called DCHDC that can compute a Cholesky decomposition even if some eigenvalues are zero. It's kind of hard to work with because of some funky pivoting, but it could probably be wrapped nicely for scipy. I'm using a hacked version of Lapack's DPOTF2 that's too dangerous for general usage. Anand Message: 10 Date: Thu, 26 Apr 2007 13:41:36 +0200 From: Nils Wagner Subject: Re: [SciPy-user] solve for symetric matrix ? To: SciPy Users List Message-ID: <46308FF0.8090907 at iam.uni-stuttgart.de> Content-Type: text/plain; charset=ISO-8859-15 fred wrote: >> Joachim Dahl wrote: >> > > >>>> If the diagonal is 0, then the matrix is not definite (so, neither is >>>> -A)... >>>> >> >> >> Uh...., so... is there any peculiar method to solve symetric matrices >> with 0 on the diagonal ? >> >> Cheers, >> >> > > Fred, See my previous mail. You are looking for a solver for symmetric indefinite systems. Should I file a ticket (task/enhancement) for a symmetric indefinite solver ? Nils From nwagner at iam.uni-stuttgart.de Thu Apr 26 11:23:29 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 26 Apr 2007 17:23:29 +0200 Subject: [SciPy-user] question about ODE output and time steps In-Reply-To: <9EADC1E53F9C70479BF65593703691141343DC@mrlnt6.mrl.uiuc.edu> References: <9EADC1E53F9C70479BF65593703691141343DC@mrlnt6.mrl.uiuc.edu> Message-ID: <4630C3F1.8050000@iam.uni-stuttgart.de> Trevis Crane wrote: > > Hi all, > > I'm new to python and want to start using it as a replacement for the > too-expensive MATLAB. To that end, I've been playing around with the > ode solver, as it's a central feature of a simulation I'm working on > now in MATLAB.
I finally got the solver working, but there are some > things about it that seem cumbersome or not ideal. I'm hoping you can > help me figure out a better way of implementing it... > > First, my test code is attached. It's a system of three coupled > equations, an example taken from the MATLAB documentation on their > ode45 function. > > 1) The Y output from ode seems to be of type array. That is, I'm > solving a system of three coupled equations, so for each iteration the > solver generates an array of three elements as well as the time at > that iteration. So, as you can see in my code, I append the output > from each iteration. This ends up giving me a list of three-element > arrays. This seems to be a rather cumbersome way of organizing the > output and makes plotting it rather laborious. The only way I've been > able to plot it is by running a for-loop in which I populate three > other lists (y1,y2,y3) with the appropriate values from the ode > solver. Then I can plot it (see attached code). As I said, this seems > overly laborious. Any suggestions? > > 2) The basic solver scipy.integrate.ode requires you to specify the > time step. I would prefer a solver with an adaptive time step > algorithm. What do you suggest I do for this? > > Any help is appreciated. > > thanks, > > trevis > > ________________________________________________ > > Trevis Crane > > Postdoctoral Research Assoc. > > Department of Physics > > University of Illinois > > 1110 W. Green St. > > Urbana, IL 61801 > > p: 217-244-8652 > > f: 217-244-2278 > > e: tcrane at uiuc.edu > > ________________________________________________ > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Trevis, Did you try integrate.odeint ? I have attached a small script for completeness.
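[Editor's note] Nils's test_ode.py attachment was scrubbed by the archiver. A sketch of the odeint approach on the rigid-body example from the MATLAB ode45 documentation that Trevis mentions; the time grid is my choice. It speaks to both questions: odeint drives an adaptive-step LSODA integrator internally and merely reports the solution at the requested times, and it returns a single 2-D array, so the per-step list bookkeeping disappears:

```python
import numpy as np
from scipy.integrate import odeint

def rigid_body(y, t):
    # Euler rigid-body equations, the ode45 example from the MATLAB docs
    y1, y2, y3 = y
    return [y2 * y3, -y1 * y3, -0.51 * y1 * y2]

t = np.linspace(0.0, 12.0, 200)              # where the solution is reported
y = odeint(rigid_body, [0.0, 1.0, 1.0], t)   # adaptive LSODA under the hood

# y is a (200, 3) array: each component is simply a column
y1, y2, y3 = y.T
print(y.shape)   # (200, 3)
```

Plotting is then a one-liner per component, e.g. plot(t, y[:, 0]), with no intermediate lists.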
Cheers, Nils -------------- next part -------------- A non-text attachment was scrubbed... Name: test_ode.py Type: text/x-python Size: 455 bytes Desc: not available URL: From peridot.faceted at gmail.com Thu Apr 26 11:27:56 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 26 Apr 2007 11:27:56 -0400 Subject: [SciPy-user] solve for symetric matrix ? In-Reply-To: <46306EC3.3050601@iam.uni-stuttgart.de> References: <4630653F.1090803@gmail.com> <46306726.8080904@iam.uni-stuttgart.de> <46306A7F.10500@gmail.com> <46306B48.3000301@iam.uni-stuttgart.de> <46306D10.3070305@gmail.com> <46306EC3.3050601@iam.uni-stuttgart.de> Message-ID: On 26/04/07, Nils Wagner wrote: > If you can't believe it run the attached script. > I get > python -i cholesky_lu.py > Elapsed time LU 0.291770935059 > Residual 1.65732165386e-14 > Elapsed time Cholesky 0.210819959641 > Residual 1.8009241117e-14 Notice here that Cholesky is less than a factor of two faster. It'll never be much better than that in general. So you're not going to win much on speed. Personally I prefer to use SVD for matrix inversions - it's much more reliable, which I value more than speed. It allows me to diagnose whether the matrix is singular or ill-conditioned and deal with that. Anne From S.Mientki at ru.nl Thu Apr 26 11:27:59 2007 From: S.Mientki at ru.nl (Stef Mientki) Date: Thu, 26 Apr 2007 17:27:59 +0200 Subject: [SciPy-user] Is it allowed to distribute SciPy ? Message-ID: <4630C4FF.1060706@ru.nl> hello as I indicated in a previous post I need a simple installer, I just tried Inno, and at first look, it seems that I can produce with 10 lines of text a "1-button" installer for SciPy + PyScripter + Signal WorkBench Is this allowed ? 
(I realize that it's a bad habit not to choose the official path, but I think it's even worse, not to promote SciPy ;-) cheers, Stef Mientki Kamer van Koophandel - handelsregister 41055629 / Netherlands Chamber of Commerce - trade register 41055629 From anand at soe.ucsc.edu Thu Apr 26 11:33:28 2007 From: anand at soe.ucsc.edu (Anand Patil) Date: Thu, 26 Apr 2007 08:33:28 -0700 Subject: [SciPy-user] Solve for symmetric matrix? In-Reply-To: <4630C10F.4060409@cse.ucsc.edu> References: <4630C10F.4060409@cse.ucsc.edu> Message-ID: <4630C648.10407@cse.ucsc.edu> > If the matrix has a zero eigenvalue, it has a nullspace so it's > singular so you can't really invert it. However, if you know in > advance that the RHS vector is orthogonal to the nullspace you can > solve for the remaining components using a low-rank submatrix. I mean if you know in advance that the _LHS_ vector (the vector you're solving for) is orthogonal to the nullspace you can solve for the remaining components using a _full_-rank submatrix. Ehh, too much to do. From nwagner at iam.uni-stuttgart.de Thu Apr 26 11:32:32 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 26 Apr 2007 17:32:32 +0200 Subject: [SciPy-user] Solve for symmetric matrix? In-Reply-To: <4630C10F.4060409@cse.ucsc.edu> References: <4630C10F.4060409@cse.ucsc.edu> Message-ID: <4630C610.6070402@iam.uni-stuttgart.de> Anand Patil wrote: > If the matrix has a zero eigenvalue, it has a nullspace so it's singular so you can't really invert it. However, if you know in advance that the RHS vector is orthogonal to the nullspace you can solve for the remaining components using a low-rank submatrix. > > Fred, if this is a kriging problem I've dealt with it in a Gaussian process module I'm writing. It's not even close to stable yet, but get it from code.google.com/p/gaussian-process. Please email me if you've got any questions, comments or especially interest in helping out. 
:) > > For purposes of the ticket, there's a Linpack routine called DCHDC that can compute a Cholesky decomposition even if some eigenvalues are zero. It's kind of hard to work with because of some funky pivoting, but it could probably be wrapped nicely for scipy. I'm using a hacked version of Lapack's DPOTF2 that's too dangerous for general usage. > > Anand > LINPACK is somewhat outdated. GAMS gives http://gams.nist.gov/serve.cgi/Class/D2b1a/ Nils > > Message: 10 > Date: Thu, 26 Apr 2007 13:41:36 +0200 > From: Nils Wagner > Subject: Re: [SciPy-user] solve for symetric matrix ? > To: SciPy Users List > Message-ID: <46308FF0.8090907 at iam.uni-stuttgart.de> > Content-Type: text/plain; charset=ISO-8859-15 > > fred wrote: > > >>> Joachim Dahl wrote: >>> >>> >> >> >> >>>>> If the diagonal is 0, then the matrix is not definite (so, neither is >>>>> -A)... >>>>> >>>>> >>> >>> >>> Uh...., so... is there any peculiar method to solve symmetric matrices >>> with 0 on the diagonal ? >>> >>> Cheers, >>> >>> >>> >> >> >> > Fred, > > See my previous mail. You are looking for a solver for symmetric > indefinite systems. > Should I file a ticket (task/enhancement) for a symmetric indefinite > solver ? > > Nils > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From peridot.faceted at gmail.com Thu Apr 26 11:38:08 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 26 Apr 2007 11:38:08 -0400 Subject: [SciPy-user] question about ODE output and time steps In-Reply-To: <9EADC1E53F9C70479BF65593703691141343DC@mrlnt6.mrl.uiuc.edu> References: <9EADC1E53F9C70479BF65593703691141343DC@mrlnt6.mrl.uiuc.edu> Message-ID: On 26/04/07, Trevis Crane wrote: > 1) The Y output from ode seems to be of type array.
That is, I'm > solving a system of three coupled equations, so for each iteration the > solver generates an array of three elements as well as the time at that > iteration. So, as you can see in my code, I append the output from each > iteration. This ends up giving me a list of three-element arrays. This > seems to be a rather cumbersome way of organizing the output and makes > plotting it rather laborious. The only way I've been able to plot it is by > running a for-loop in which I populate three other lists (y1,y2,y3) with the > appropriate values from the ode solver. Then I can plot it (see attached > code). As I said, this seems overly-laborious. Any suggestions? The plotting functions are a pain, really, I almost always have to do something to rearrange my data before I can plot it. The errorbar functions are the worst. But the rearrangements can be made less painful. I suggest, before plotting, applying array() to your list; then you can use slicing to feed the plotting functions: A = array(L) plot(A[:,0],A[:,1]) > 2) The basic solver scipy.integrate.ode requires you to specify the > time step. I would prefer a solver with an adaptive time step algorithm. > What do you suggest I do for this? In fact it *is* an adaptive algorithm, they both are (odeint is just as basic). But they are quite clumsy to use. There is an option which will tell odeint to return after just one step of the internal integrator. If you're doing anything even moderately sophisticated (such as, for example, stopping on a particular y value, or obtaining a solution object you can treat like a function) I would look at PyDSTool. Anne M. Archibald From nwagner at iam.uni-stuttgart.de Thu Apr 26 11:38:57 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 26 Apr 2007 17:38:57 +0200 Subject: [SciPy-user] solve for symetric matrix ? 
In-Reply-To: References: <4630653F.1090803@gmail.com> <46306726.8080904@iam.uni-stuttgart.de> <46306A7F.10500@gmail.com> <46306B48.3000301@iam.uni-stuttgart.de> <46306D10.3070305@gmail.com> <46306EC3.3050601@iam.uni-stuttgart.de> Message-ID: <4630C791.1020802@iam.uni-stuttgart.de> Anne Archibald wrote: > On 26/04/07, Nils Wagner wrote: > > >> If you can't believe it run the attached script. >> I get >> python -i cholesky_lu.py >> Elapsed time LU 0.291770935059 >> Residual 1.65732165386e-14 >> Elapsed time Cholesky 0.210819959641 >> Residual 1.8009241117e-14 >> > > Notice here that Cholesky is less than a factor of two faster. It'll > never be much better than that in general. So you're not going to win > much on speed. > > Personally I prefer to use SVD for matrix inversions - it's much more > reliable, which I value more than speed. It allows me to diagnose > whether the matrix is singular or ill-conditioned and deal with that. > > Anne > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Strange. I have increased the number of unknowns to 5000. What happens with the residual ?? Elapsed time LU 28.8819189072 Residual 7.64156112039e-14 Elapsed time Cholesky 15.6276760101 Residual 141.421356237 >>> n 5000 Any idea ? Nils From peridot.faceted at gmail.com Thu Apr 26 11:43:02 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 26 Apr 2007 11:43:02 -0400 Subject: [SciPy-user] solve for symetric matrix ? In-Reply-To: <4630C791.1020802@iam.uni-stuttgart.de> References: <4630653F.1090803@gmail.com> <46306726.8080904@iam.uni-stuttgart.de> <46306A7F.10500@gmail.com> <46306B48.3000301@iam.uni-stuttgart.de> <46306D10.3070305@gmail.com> <46306EC3.3050601@iam.uni-stuttgart.de> <4630C791.1020802@iam.uni-stuttgart.de> Message-ID: On 26/04/07, Nils Wagner wrote: > Strange. I have increased the number of unknowns to 5000. What happens > with the residual ?? 
> > Elapsed time LU 28.8819189072 > Residual 7.64156112039e-14 > Elapsed time Cholesky 15.6276760101 > Residual 141.421356237 > >>> n > 5000 > > Any idea ? Well, roundoff error is always a problem in numerical linear algebra. Perhaps use the svd to check your condition number? Anne From peridot.faceted at gmail.com Thu Apr 26 11:45:46 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 26 Apr 2007 11:45:46 -0400 Subject: [SciPy-user] Solve for symmetric matrix? In-Reply-To: <4630C10F.4060409@cse.ucsc.edu> References: <4630C10F.4060409@cse.ucsc.edu> Message-ID: On 26/04/07, Anand Patil wrote: > If the matrix has a zero eigenvalue, it has a nullspace so it's singular so you can't really invert it. However, if you know in advance that the RHS vector is orthogonal to the nullspace you can solve for the remaining components using a low-rank submatrix. Let me reiterate my support for the SVD. It allows you to find a least-squares solution for matrices with nullspaces, real or numerical, even if the result vector is not in the span of the matrix. It also lets you check how far your answer is from being a real solution, in both spaces. Anne From nwagner at iam.uni-stuttgart.de Thu Apr 26 11:48:53 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 26 Apr 2007 17:48:53 +0200 Subject: [SciPy-user] solve for symetric matrix ? In-Reply-To: References: <4630653F.1090803@gmail.com> <46306726.8080904@iam.uni-stuttgart.de> <46306A7F.10500@gmail.com> <46306B48.3000301@iam.uni-stuttgart.de> <46306D10.3070305@gmail.com> <46306EC3.3050601@iam.uni-stuttgart.de> <4630C791.1020802@iam.uni-stuttgart.de> Message-ID: <4630C9E5.7010904@iam.uni-stuttgart.de> Anne Archibald wrote: > On 26/04/07, Nils Wagner wrote: > > >> Strange. I have increased the number of unknowns to 5000. What happens >> with the residual ?? 
>> >> Elapsed time LU 28.8819189072 >> Residual 7.64156112039e-14 >> Elapsed time Cholesky 15.6276760101 >> Residual 141.421356237 >> >>>>> n >>>>> >> 5000 >> >> Any idea ? >> > > Well, roundoff error is always a problem in numerical linear algebra. > Perhaps use the svd to check your condition number? > > Anne > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Anne, Just because you have mentioned the condition number - it would be nice to have a built-in function like cond or condest in scipy. Any comment ? Nils From t_crane at mrl.uiuc.edu Thu Apr 26 11:50:55 2007 From: t_crane at mrl.uiuc.edu (Trevis Crane) Date: Thu, 26 Apr 2007 10:50:55 -0500 Subject: [SciPy-user] question about ODE output and time steps Message-ID: <9EADC1E53F9C70479BF6559370369114142EB0@mrlnt6.mrl.uiuc.edu> > -----Original Message----- > From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On > Behalf Of Anne Archibald > Sent: Thursday, April 26, 2007 10:38 AM > To: SciPy Users List > Subject: Re: [SciPy-user] question about ODE output and time steps > > On 26/04/07, Trevis Crane wrote: > > > 1) The Y output from ode seems to be of type array. That is, I'm > > solving a system of three coupled equations, so for each iteration the > > solver generates an array of three elements as well as the time at that > > iteration. So, as you can see in my code, I append the output from each > > iteration. This ends up giving me a list of three-element arrays. This > > seems to be a rather cumbersome way of organizing the output and makes > > plotting it rather laborious. The only way I've been able to plot it is by > > running a for-loop in which I populate three other lists (y1,y2,y3) with the > > appropriate values from the ode solver. Then I can plot it (see attached > > code). As I said, this seems overly-laborious. Any suggestions? 
> > The plotting functions are a pain, really, I almost always have to do > something to rearrange my data before I can plot it. The errorbar > functions are the worst. But the rearrangements can be made less > painful. > > I suggest, before plotting, applying array() to your list; then you > can use slicing to feed the plotting functions: > A = array(L) > plot(A[:,0],A[:,1]) [Trevis Crane] ah, yes, good... thanks > > > 2) The basic solver scipy.integrate.ode requires you to specify the > > time step. I would prefer a solver with an adaptive time step algorithm. > > What do you suggest I do for this? > > In fact it *is* an adaptive algorithm, they both are (odeint is just > as basic). But they are quite clumsy to use. There is an option which > will tell odeint to return after just one step of the internal > integrator. > > If you're doing anything even moderately sophisticated (such as, for > example, stopping on a particular y value, or obtaining a solution > object you can treat like a function) I would look at PyDSTool. > [Trevis Crane] I assumed it was fixed since you supply the dt value (or in the case of odeint, an array of t values). By your statement above, though, I assume these are simply to tell the solver at what times you want y information (much like MATLAB's solvers). What I really want to do, however, requires that I stop the solver at some iteration step that, a priori, is unknown. I'm simulating a dynamic system and my condition for stopping the solver is when the energy of the system stops changing beyond a specified threshold. So each iteration of the solver I calculate the energy, and if after so many iterations it stops changing (much), I want the solver to stop. As for PyDSTool -- I'll check it out. thanks again, trevis > Anne M.
Archibald > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From peridot.faceted at gmail.com Thu Apr 26 11:53:47 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 26 Apr 2007 11:53:47 -0400 Subject: [SciPy-user] solve for symetric matrix ? In-Reply-To: <4630C9E5.7010904@iam.uni-stuttgart.de> References: <4630653F.1090803@gmail.com> <46306726.8080904@iam.uni-stuttgart.de> <46306A7F.10500@gmail.com> <46306B48.3000301@iam.uni-stuttgart.de> <46306D10.3070305@gmail.com> <46306EC3.3050601@iam.uni-stuttgart.de> <4630C791.1020802@iam.uni-stuttgart.de> <4630C9E5.7010904@iam.uni-stuttgart.de> Message-ID: On 26/04/07, Nils Wagner wrote: > Anne, > > Just because you have mentioned the condition number - it would be nice > to have > a built-in function like cond or condest in scipy. > Any comment ? Well, it sort of would, but it's easy enough with the svd: def cond(M): s = svd(M, compute_uv=False) return amax(s)/amin(s) Anne From peridot.faceted at gmail.com Thu Apr 26 11:57:49 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 26 Apr 2007 11:57:49 -0400 Subject: [SciPy-user] question about ODE output and time steps In-Reply-To: <9EADC1E53F9C70479BF6559370369114142EB0@mrlnt6.mrl.uiuc.edu> References: <9EADC1E53F9C70479BF6559370369114142EB0@mrlnt6.mrl.uiuc.edu> Message-ID: On 26/04/07, Trevis Crane wrote: > [Trevis Crane] > > I assumed it was fixed since you supply the dt value (or in the case of > odeint, an array of t values). By your statement above, though, I assume > these are simply to tell the solver at what times you want y information > (much like MATLAB's solvers). > > What I really want to do, however, requires that I stop the solver at > some iteration step that, a priori, is unknown.
I'm simulating a > dynamic system and my condition for stopping the solver is when the > energy of the system stops changing beyond a specified threshold. So > each iteration of the solver I calculate the energy, and if after so > many iterations it stops changing (much), I want the solver to stop. I had essentially the same need. I tried various hacks - it turns out odeint actually allows you to backtrack within one integrator step - but they failed and were a pain to use. I came up with a wrapper for ODEPACK's LSODAR, which has a stop-on-constraint function that does exactly the right thing but is otherwise identical to LSODA that odeint uses. But PyDSTool did it right, apparently (I haven't used it). Anne From ivilata at carabos.com Thu Apr 26 12:06:17 2007 From: ivilata at carabos.com (Ivan Vilata i Balaguer) Date: Thu, 26 Apr 2007 18:06:17 +0200 Subject: [SciPy-user] [ANN] PyTables 2.0rc1 released Message-ID: <20070426160617.GD7924@tardis.terramar.selidor.net> Hi all, The development of the second major release of PyTables is steadily approaching its goal! Today we want to announce the *first release candidate version of PyTables 2.0*, i.e. PyTables 2.0rc1. This version settles the API and file format for the upcoming PyTables 2.0 series. No more features will be added until the final version of 2.0 is released, so we now enter a period of exhaustive platform testing and fixing the last remaining bugs, as well as updating any outdated documentation. Your collaboration is very important in this stage of development, so we encourage you to download PyTables, test it, and report any problems you find or any suggestions you have. Thank you! And now, the official announcement: ============================ Announcing PyTables 2.0rc1 ============================ PyTables is a library for managing hierarchical datasets and designed to efficiently cope with extremely large amounts of data with support for full 64-bit file addressing.
PyTables runs on top of the HDF5 library and NumPy package for achieving maximum throughput and convenient use. You can download a source package of the version 2.0rc1 with generated PDF and HTML docs and binaries for Windows from http://www.pytables.org/download/preliminary/ For an on-line version of the manual, visit: http://www.pytables.org/docs/manual-2.0rc1 Please bear in mind that some sections in the manual can be obsolete (especially the "Optimization tips" chapter). Other chapters should be fairly up-to-date though (although still a bit in state of flux). In case you want to know more in detail what has changed in this version, have a look at ``RELEASE_NOTES.txt``. Find the HTML version for this document at: http://www.pytables.org/moin/ReleaseNotes/Release_2.0rc1 If you are a user of PyTables 1.x, it is probably worth looking at the ``MIGRATING_TO_2.x.txt`` file, where you will find directions on how to migrate your existing PyTables 1.x apps to the 2.0 version. You can find an HTML version of this document at http://www.pytables.org/moin/ReleaseNotes/Migrating_To_2.x Keep reading for an overview of the most prominent improvements in the PyTables 2.0 series. New features of PyTables 2.0 ============================ - NumPy is finally at the core! That means that PyTables no longer needs numarray in order to operate, although it continues to be supported (as well as Numeric). This also means that you should be able to run PyTables in scenarios combining Python 2.5 and 64-bit platforms (these are a source of problems with numarray/Numeric because they don't support this combination as of this writing). - Most of the operations in PyTables have seen noticeable speed-ups (sometimes up to 2x, like in regular Python table selections). This is a consequence of both using NumPy internally and a considerable effort in terms of refactoring and optimization of the new code. - Combined conditions are finally supported for in-kernel selections.
So, now it is possible to perform complex selections like:: result = [ row['var3'] for row in table.where('(var2 < 20) | (var1 == "sas")') ] or:: complex_cond = '((%s <= col5) & (col2 <= %s)) ' \ '| (sqrt(col1 + 3.1*col2 + col3*col4) > 3)' result = [ row['var3'] for row in table.where(complex_cond % (inf, sup)) ] and run them at full C-speed (or perhaps more, due to the cache-tuned computing kernel of Numexpr, which has been integrated into PyTables). - Now, it is possible to get fields of the ``Row`` iterator by specifying their position, or even ranges of positions (extended slicing is supported). For example, you can do:: result = [ row[4] for row in table # fetch field #4 if row[1] < 20 ] result = [ row[:] for row in table # fetch all fields if row['var2'] < 20 ] result = [ row[1::2] for row in # fetch odd fields table.iterrows(2, 3000, 3) ] in addition to the classical:: result = [row['var3'] for row in table.where('var2 < 20')] - ``Row`` has received a new method called ``fetch_all_fields()`` in order to easily retrieve all the fields of a row in situations like:: [row.fetch_all_fields() for row in table.where('column1 < 0.3')] The difference between ``row[:]`` and ``row.fetch_all_fields()`` is that the former will return all the fields as a tuple, while the latter will return the fields in a NumPy void type and should be faster. Choose whatever fits better to your needs. - Now, all data that is read from disk is converted, if necessary, to the native byteorder of the hosting machine (before, this only happened with ``Table`` objects). This should help to accelerate applications that have to do computations with data generated in platforms with a byteorder different than the user machine. - The modification of values in ``*Array`` objects (through __setitem__) now doesn't make a copy of the value in the case that the shape of the value passed is the same as the slice to be overwritten. 
This results in considerable memory savings when you are modifying disk objects with big array values. - All leaf constructors (except for ``Array``) have received a new ``chunkshape`` argument that lets the user explicitly select the chunksizes for the underlying HDF5 datasets (only for advanced users). - All leaf constructors have received a new parameter called ``byteorder`` that lets the user specify the byteorder of their data *on disk*. This effectively allows creating datasets with a byteorder other than that of the native platform. - Native HDF5 datasets with ``H5T_ARRAY`` datatypes are fully supported for reading now. - The test suites for the different packages are installed now, so you don't need a copy of the PyTables sources to run the tests. Besides, you can run the test suite from the Python console by using:: >>> tables.tests() Resources ========= Go to the PyTables web site for more details: http://www.pytables.org About the HDF5 library: http://hdf.ncsa.uiuc.edu/HDF5/ About NumPy: http://numpy.scipy.org/ To know more about the company behind the development of PyTables, see: http://www.carabos.com/ Acknowledgments =============== Thanks to many users who provided feature improvements, patches, bug reports, support and suggestions. See the ``THANKS`` file in the distribution package for an (incomplete) list of contributors. Many thanks also to SourceForge who have helped to make and distribute this package! And last, but not least thanks a lot to the HDF5 and NumPy (and numarray!) makers. Without them PyTables simply would not exist. Share your experience ===================== Let us know of any bugs, suggestions, gripes, kudos, etc. you may have. ---- **Enjoy data!** -- The PyTables Team :: Ivan Vilata i Balaguer >qo< http://www.carabos.com/ Cárabos Coop. V. V V Enjoy Data "" -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 307 bytes Desc: Digital signature URL: From robert.kern at gmail.com Thu Apr 26 12:29:25 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 26 Apr 2007 11:29:25 -0500 Subject: [SciPy-user] Is it allowed to distribute SciPy ? In-Reply-To: <4630C4FF.1060706@ru.nl> References: <4630C4FF.1060706@ru.nl> Message-ID: <4630D365.3090002@gmail.com> Stef Mientki wrote: > hello > > as I indicated in a previous post I need a simple installer, > I just tried Inno, and at first look, > it seems that I can produce with 10 lines of text a "1-button" installer > for > SciPy + PyScripter + Signal WorkBench > > Is this allowed ? Yes. The license is quite permissive. > (I realize that it's a bad habit not to choose the official path, > but I think it's even worse, not to promote SciPy ;-) Whatever gets the job done. That's why we went with such a permissive license. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From anand at soe.ucsc.edu Thu Apr 26 12:41:22 2007 From: anand at soe.ucsc.edu (Anand Patil) Date: Thu, 26 Apr 2007 09:41:22 -0700 Subject: [SciPy-user] Solve for symmetric matrix? In-Reply-To: References: Message-ID: <4630D632.9010805@cse.ucsc.edu> >LINPACK is somewhat outdated. GAMS gives >http://gams.nist.gov/serve.cgi/Class/D2b1a/ > >Nils > > Yeah, I had to get the manual by interlibrary loan. I wasn't familiar with GAMS, thanks for the pointer. It's worth noting that many of the routines in the GAMS class seem to be from LINPACK, so it's at least worth considering even though it's outdated. The nice thing about DCHDC is it returns actual low-rank Cholesky factors for positive semidefinite matrices, so if you try to decompose [1 1] [1 1] you'll essentially get the vector [1 1] back. 
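That low-rank behaviour is easy to demonstrate without DCHDC. A hedged sketch using NumPy's symmetric eigendecomposition to build a rank-revealing Cholesky-like factor (the 1e-12 cutoff is an arbitrary choice of mine, not anything from LINPACK):

```python
# Not DCHDC itself: recover a low-rank factor L with A == L @ L.T for a
# positive semidefinite matrix by dropping numerically zero eigenvalues.
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0]])          # rank 1, positive semidefinite

w, V = np.linalg.eigh(A)            # eigenvalues in ascending order
keep = w > 1e-12                    # discard the (numerically) zero modes
L = V[:, keep] * np.sqrt(w[keep])   # low-rank factor: A == L @ L.T

# L is a single column proportional to [1, 1], matching the DCHDC example.
```

This is slower than a pivoted Cholesky for large matrices, but it makes the rank-deficient case explicit.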
I didn't see anything in the link that does that, they seem to mostly be U^T D U factorizations for symmetric indefinite matrices. You could get a low-rank Cholesky factor out of that, but it's less direct. Cheers, Anand From anand at soe.ucsc.edu Thu Apr 26 13:03:18 2007 From: anand at soe.ucsc.edu (Anand Patil) Date: Thu, 26 Apr 2007 10:03:18 -0700 Subject: [SciPy-user] SciPy-user Digest, Vol 44, Issue 54 In-Reply-To: References: Message-ID: <4630DB56.2040600@cse.ucsc.edu> >Let me reiterate my support for the SVD. It allows you to find a >least-squares solution for matrices with nullspaces, real or >numerical, even if the result vector is not in the span of the matrix. >It also lets you check how far your answer is from being a real >solution, in both spaces. > > > >Anne > > Anne, I completely agree with you, the SVD is very much nicer and more intuitive to work with than the Cholesky decomposition. The only thing Cholesky has to recommend it is that it's a lot faster for large matrices, and in my applications (and possibly the OP's) the need for speed can be acute. Since you can eventually get everything you need from a Cholesky decomposition even for matrices with nullspaces, it unfortunately becomes the method of choice. Cheers, Anand In [9]: A=eye(1000) In [10]: %time b=svd(A) CPU times: user 11.54 s, sys: 0.69 s, total: 12.22 s Wall time: 19.25 In [11]: %time b=cholesky(A) CPU times: user 0.53 s, sys: 0.16 s, total: 0.69 s Wall time: 0.89 From anand at soe.ucsc.edu Thu Apr 26 13:05:12 2007 From: anand at soe.ucsc.edu (Anand Patil) Date: Thu, 26 Apr 2007 10:05:12 -0700 Subject: [SciPy-user] Changes of coordinates Message-ID: <4630DBC8.5040602@cse.ucsc.edu> Hi all, I was wondering if anyone knows where I can get fast Python-callable or easy-to-wrap code that does the following for a nice complete-ish set of coordinate systems: - Computes the distance between two points x and y - When it makes sense, converts points to Euclidean coordinates. 
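For the spherical case, a hedged sketch of the two operations listed above (the function names and the physics convention for the angles are my own choices for illustration, not an existing API):

```python
# A minimal sketch for one coordinate system (spherical): convert to
# Euclidean coordinates, and measure the distance between two points.
import numpy as np

def spherical_to_euclidean(r, theta, phi):
    """(r, theta, phi) -> (x, y, z); theta is the polar angle, phi the azimuth."""
    return np.array([r * np.sin(theta) * np.cos(phi),
                     r * np.sin(theta) * np.sin(phi),
                     r * np.cos(theta)])

def distance(p, q):
    """Euclidean distance between two points given in spherical coordinates."""
    return np.linalg.norm(spherical_to_euclidean(*p) - spherical_to_euclidean(*q))
```

As a sanity check, antipodal points on the unit sphere, (1, 0, 0) and (1, pi, 0), come out a distance 2 apart.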
Thanks in advance, Anand From fredmfp at gmail.com Thu Apr 26 13:09:53 2007 From: fredmfp at gmail.com (fred) Date: Thu, 26 Apr 2007 19:09:53 +0200 Subject: [SciPy-user] SciPy-user Digest, Vol 44, Issue 54 In-Reply-To: <4630DB56.2040600@cse.ucsc.edu> References: <4630DB56.2040600@cse.ucsc.edu> Message-ID: <4630DCE1.6070208@gmail.com> Anand Patil wrote: >> Let me reiterate my support for the SVD. It allows you to find a >> least-squares solution for matrices with nullspaces, real or >> numerical, even if the result vector is not in the span of the matrix. >> It also lets you check how far your answer is from being a real >> solution, in both spaces. >> >> >> >> Anne >> >> >> > Anne, > > I completely agree with you, the SVD is very much nicer and more > intuitive to work with than the Cholesky decomposition. The only thing > Cholesky has to recommend it is that it's a lot faster for large > matrices, and in my applications (and possibly the OP's) the need for > Not possibly ;-) I _do_ need for speed. > Cheers, > Anand > > In [9]: A=eye(1000) > > In [10]: %time b=svd(A) > CPU times: user 11.54 s, sys: 0.69 s, total: 12.22 s > Wall time: 19.25 > > In [11]: %time b=cholesky(A) > CPU times: user 0.53 s, sys: 0.16 s, total: 0.69 s > Wall time: 0.89 I did not ask Anne to speed up filling my VTK arrays with broadcasting to lose it all in solving my matrices ;-))) Cheers, -- http://scipy.org/FredericPetit From fredmfp at gmail.com Thu Apr 26 13:16:45 2007 From: fredmfp at gmail.com (fred) Date: Thu, 26 Apr 2007 19:16:45 +0200 Subject: [SciPy-user] Solve for symmetric matrix? In-Reply-To: <4630C10F.4060409@cse.ucsc.edu> References: <4630C10F.4060409@cse.ucsc.edu> Message-ID: <4630DE7D.60709@gmail.com> Anand Patil wrote: > Fred, if this is a kriging problem I've dealt with it in a Gaussian process module I'm writing. It's not even close to stable yet, but get it from code.google.com/p/gaussian-process.
Please email me if you've got any questions, comments or especially interest in helping out. :) > Sure ! ;-) I wrote some stuff of my own, works fine with "classical" python/scipy tricks, but now, I'm looking to speed up runs. Stay tuned ;-) -- http://scipy.org/FredericPetit From oliphant.travis at ieee.org Thu Apr 26 13:30:40 2007 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 26 Apr 2007 11:30:40 -0600 Subject: [SciPy-user] question about ODE output and time steps In-Reply-To: References: <9EADC1E53F9C70479BF6559370369114142EB0@mrlnt6.mrl.uiuc.edu> Message-ID: <4630E1C0.3000504@ieee.org> Anne Archibald wrote: >> [Trevis Crane] >> >> I assumed it was fixed since you supply the dt value (or in the case of >> odeint, an array of t values). By your statement above, though, I assume >> these are simply to tell the solver at what times you want y information >> (much like MATLAB's solvers). >> >> What I really want to do, however, requires that I stop the solver at >> some iteration step that, a priori, is unknown. I'm simulating a >> dynamic system and my condition for stopping the solver is when the >> energy of the system stops changing beyond a specified threshold. So >> each iteration of the solver I calculate the energy, and if after so >> many iterations it stops changing (much), I want the solver to stop. >> > > I had essentially the same need. I tried various hacks - it turns out > odeint actually allows you to backtrack within one integrator step - > but they failed and were a pain to use. I came up with a wrapper for > ODEPACK's LSODAR, which has a stop-on-constraint function that does > exactly the right thing but is otherwise identical to LSODA that > odeint uses. But PyDSTool did it right, apparently (I haven't used > it). > So why don't we put what's in PyDSTool into scipy? SciPy's odeint is a wrapper I did of ODEPACK about 10 years ago. It is pretty low-level and reflects my needs for ordinary differential equation solving.
It does not claim to be the end-all solution. Pearu wrote some additional tools (VSODE) which may be easier for some to use. It would be great if other people who wrote wrappers put them into SciPy. It seems that there is a strong tendency to just release your own package rather than to help with another package. We SciPy developers do try to be easy to work with to avoid this kind of forking. -Travis From robert.kern at gmail.com Thu Apr 26 13:24:50 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 26 Apr 2007 12:24:50 -0500 Subject: [SciPy-user] Changes of coordinates In-Reply-To: <4630DBC8.5040602@cse.ucsc.edu> References: <4630DBC8.5040602@cse.ucsc.edu> Message-ID: <4630E062.1040701@gmail.com> Anand Patil wrote: > Hi all, > > I was wondering if anyone knows where I can get fast Python-callable or > easy-to-wrap code that does the following for a nice complete-ish set of > coordinate systems: > > - Computes the distance between two points x and y > - When it makes sense, converts points to Euclidean coordinates. Uh, what *kind* of coordinate systems? If you need geographic coordinate systems, look at pyproj. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From gael.varoquaux at normalesup.org Thu Apr 26 13:26:14 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 26 Apr 2007 19:26:14 +0200 Subject: [SciPy-user] question about ODE output and time steps In-Reply-To: <4630E1C0.3000504@ieee.org> References: <9EADC1E53F9C70479BF6559370369114142EB0@mrlnt6.mrl.uiuc.edu> <4630E1C0.3000504@ieee.org> Message-ID: <20070426172610.GK24024@clipper.ens.fr> On Thu, Apr 26, 2007 at 11:30:40AM -0600, Travis Oliphant wrote: > So why don't we put what PyDSTool has into scipy? SciPy's odeint is a > wrapper I did of ODEPACK about 10 years ago.
It is pretty low-level > and reflects my needs for ordinary differential equation solving. It > does not claim to be the end-all solution. Pearu wrote some additional > tools (VSODE) which may be easier for some to use. It would be great if > other people who wrote wrappers put them into SciPy. +10 ! I have never needed more than odeint, but I hear so much good about PyDSTool that I wish it was in scipy. Indeed if it is not in scipy I will never get my colleagues to use it, therefore it will limit my use. Isn't the point of scipy to collect all these wrappers and small chunks of code into something organized ? Cheers, Gaël From rhc28 at cornell.edu Thu Apr 26 14:10:02 2007 From: rhc28 at cornell.edu (Rob Clewley) Date: Thu, 26 Apr 2007 14:10:02 -0400 Subject: [SciPy-user] question about ODE output and time steps In-Reply-To: <20070426172610.GK24024@clipper.ens.fr> References: <9EADC1E53F9C70479BF6559370369114142EB0@mrlnt6.mrl.uiuc.edu> <4630E1C0.3000504@ieee.org> <20070426172610.GK24024@clipper.ens.fr> Message-ID: Hi, I don't foresee having time to add (probably only some of) PyDSTool to SciPy unless someone would graciously give me a hand. Part of that would involve some very specific feedback on the basic classes we've implemented and an evaluation of whether they are suitable for general usage: in particular, our Interval, Pointset, Variable, and Trajectory classes. To bring on board PyDSTool's ODE solvers in their current form is also to commit to the use of these basic classes, as there's not a trivial way to factor out the solvers without losing some of the fancy functionality (e.g., having autonomous external inputs to a RHS interpolated from arrays). In my biased opinion, I think that these basic classes could add value to SciPy in general, but I'm not going to push it until I get a sense of sufficient support from you guys. Some technically-minded folks might want to help me improve our implementations too.
Also, the lsodar routine already wrapped nicely in SloppyCell could much more easily be incorporated into SciPy with minimal baggage included. Anyway, we haven't yet got around our ugly method for generating C-based right-hand-sides in a platform-independent way, which is to (mis-)use distutils! We hope to use auto-generated makefiles instead, and we have some chance of coding that up over the summer. I would think that would be a deal-breaker for having our integrators in SciPy, no? In the meantime, I would encourage anyone who writes for SciPy, has played with PyDSTool, and is curious about incorporating our solvers into SciPy, to please continue this discussion with a critical evaluation of our base classes (recent improvements are in our SVN repository). You can read more about their design and implementation at our wiki. -Rob From sherwood at cam.cornell.edu Thu Apr 26 14:24:05 2007 From: sherwood at cam.cornell.edu (Erik Sherwood) Date: Thu, 26 Apr 2007 14:24:05 -0400 Subject: [SciPy-user] question about ODE output and time steps In-Reply-To: References: <9EADC1E53F9C70479BF6559370369114142EB0@mrlnt6.mrl.uiuc.edu> <4630E1C0.3000504@ieee.org> <20070426172610.GK24024@clipper.ens.fr> Message-ID: <7BF45F8A-D471-45E2-8681-121FBA527B15@cam.cornell.edu> I'm willing to work on it, for sure, but I want to get our integrator/generator class structure in better shape first. I'm really happy about this line: +10 ! I have never needed more than odeint, but I hear so much good about PyDSTool that I wish it was in scipy. Erik Sherwood Center for Applied Mathematics | Phone: (607) 255-4195 657 Rhodes Hall | Fax: (607) 255-9860 Cornell University | Email: sherwood at cam.cornell.edu Ithaca, NY 14853 | Web: http://www.cam.cornell.edu/~sherwood On Apr 26, 2007, at 2:10 PM, Rob Clewley wrote: > Hi, > > I don't foresee having time to add (probably only some of) PyDSTool to > SciPy or unless someone would graciously give me a hand.
Part of that > would involve some very specific feedback on the basic classes we've > implemented and an evaluation of whether they are suitable for general > usage: in particular, our Interval, Pointset, Variable, and Trajectory > classes. To bring on board PyDSTool's ODE solvers in their current > form is also to commit to the use of these basic classes, as there's > not a trivial way to factor out the solvers without losing some of the > fancy functionality (e.g., having autonomous external inputs to a RHS > interpolated from arrays). In my biased opinion, I think that these > basic classes could add value to SciPy in general, but I'm not going > to push it until I get a sense of sufficient support from you guys. > Some technically-minded folks might want to help me improve our > implementations too. > > Also, the lsodar routine already wrapped nicely in SloppyCell could > much more easily be incorporated into SciPy with minimal baggage > included. > > Anyway, we haven't yet got around our ugly method for generating > C-based right-hand-sides in a platform-independent way, which is to > (mis-)use distutils! We hope to use auto-generated makefiles instead, > and we have some chance of coding that up over the summer. I would > think that would be a deal-breaker for having our integrators in > SciPy, no? > > In the meantime, I would encourage anyone who writes for SciPy, has > played with PyDSTool, and is curious about incorporating our solvers > into SciPy, to please continue this discussion with a critical > evaluation of our base classes (recent improvements are in our SVN > repository). You can read more about their design and implementation > at our wiki. 
> > -Rob > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From samrobertsmith at gmail.com Thu Apr 26 14:38:11 2007 From: samrobertsmith at gmail.com (linda.s) Date: Thu, 26 Apr 2007 11:38:11 -0700 Subject: [SciPy-user] shape Message-ID: <1d987df30704261138x7a9ea317rd03e25871b4e08d0@mail.gmail.com> I am curious about the (9,) for the shape. Does it mean 9 rows one column or one row 9 columns? The final array looks like one row 9 columns but why (9,) gives me an impression that it is 9 rows one column? Thanks. >>> from numpy import * >>> numarr2 array([[ 2, 4, 6], [ 8, 10, 12], [14, 16, 18]]) >>> numarr2.shape = (9,) >>> print numarr2 [ 2 4 6 8 10 12 14 16 18] From anand at soe.ucsc.edu Thu Apr 26 15:35:42 2007 From: anand at soe.ucsc.edu (Anand Patil) Date: Thu, 26 Apr 2007 12:35:42 -0700 Subject: [SciPy-user] Changes of coordinates Message-ID: <4630FF0E.7060104@cse.ucsc.edu> Robert Kern wrote: > Uh, what *kind* of coordinate systems? If you need geographic coordinate > systems, look at pyproj. Sorry that was unclear, let me try again. I'm mostly interested in spherical, polar, cylindrical and geographical. However, if there's a more general package out there that can convert between other coordinate systems for R^n or the surface of the sphere, or compute the shortest distance between points in some other spaces, that would be great. I'll check out pyproj, thanks for the tip.
Anand From oliphant.travis at ieee.org Thu Apr 26 15:45:33 2007 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 26 Apr 2007 13:45:33 -0600 Subject: [SciPy-user] question about ODE output and time steps In-Reply-To: References: <9EADC1E53F9C70479BF6559370369114142EB0@mrlnt6.mrl.uiuc.edu> <4630E1C0.3000504@ieee.org> <20070426172610.GK24024@clipper.ens.fr> Message-ID: <4631015D.3010502@ieee.org> Rob Clewley wrote: > Hi, > > I don't foresee having time to add (probably only some of) PyDSTool to > SciPy or unless someone would graciously give me a hand. Part of that > would involve some very specific feedback on the basic classes we've > implemented and an evaluation of whether they are suitable for general > usage: in particular, our Interval, Pointset, Variable, and Trajectory > classes. > To bring on board PyDSTool's ODE solvers in their current > form is also to commit to the use of these basic classes, as there's > not a trivial way to factor out the solvers without losing some of the > fancy functionality (e.g., having autonomous external inputs to a RHS > interpolated from arrays). In my biased opinion, I think that these > basic classes could add value to SciPy in general, but I'm not going > to push it until I get a sense of sufficient support from you guys. > Most of the main contributors to SciPy have been cautious object-oriented people. We've got only a few limited classes but there is no central objection to adding classes other than my experience is that it is very hard to get classes that are really central enough to be useful in more than one domain. But, that doesn't preclude us from providing package-specific classes (which we've already done). So, if adding these classes makes it easier / faster to solve dynamical systems, then I don't think there is going to be any real resistance to putting them in the ODE tool chain. 
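[Editorial aside] For readers who have not used PyDSTool: the "named rows" idea behind its Pointset class, mentioned in this thread, can be illustrated with a toy stand-in (hypothetical code, not PyDSTool's actual API):

```python
import numpy as np

class NamedRows(object):
    """Toy stand-in: a 2-D array whose rows are addressed by variable name."""
    def __init__(self, names, data):
        self._index = dict((n, i) for i, n in enumerate(names))
        self.data = np.asarray(data)

    def __getitem__(self, name):
        # Look up the row for a named variable instead of a bare integer index.
        return self.data[self._index[name]]

# Two variables sampled at three times: no need to remember which row is which.
traj = NamedRows(['x', 'v'], [[0.0, 0.1, 0.4], [1.0, 1.1, 1.2]])
assert np.allclose(traj['v'], [1.0, 1.1, 1.2])
```

The real class adds slicing, an optional independent variable, and much more; this only shows why such a wrapper is pleasant for large multi-variable datasets.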
In fact, there has been lots of discussion suggesting pulling out ODE's from their current home in the integrate package (where they only loosely fit) and making say a "dynamic" module that houses several solution approaches to ordinary differential equations: from simple wrappers around ODEPACK to full-featured class-based approaches like PyDSTool provides. > Some technically-minded folks might want to help me improve our > > implementations too. > I could help with implementations in terms of Python-compiled language wrappings if that's what you mean. > Also, the lsodar routine already wrapped nicely in SloppyCell could > much more easily be incorporated into SciPy with minimal baggage > included. > It sounds like this is something that we should do already. There are several low-level routines with compiled code already available in the SciPy compiled code without a Python wrapper. I'd like to fix this. I welcome anyone who wants to do more fancy things on top. If you want or need SVN access to SciPy to contribute the work just ask. Usually, the changes take place in the sandbox, but we need to move lots of things over from the sandbox. I'm planning some major work on SciPy this summer (as soon as I get this Python buffer interface thing done for Python 3.0) > Anyway, we haven't yet got around our ugly method for generating > C-based right-hand-sides in a platform-independent way, which is to > (mis-)use distutils! We hope to use auto-generated makefiles instead, > and we have some chance of coding that up over the summer. I would > think that would be a deal-breaker for having our integrators in > SciPy, no? > There is general interest in adapting weave so that pieces of it which would be helpful for this kind of work are easier to access. 
> In the meantime, I would encourage anyone who writes for SciPy, has > played with PyDSTool, and is curious about incorporating our solvers > into SciPy, to please continue this discussion with a critical > evaluation of our base classes (recent improvements are in our SVN > repository). You can read more about their design and implementation > at our wiki. > Thanks for continuing the discussion. -Travis From gael.varoquaux at normalesup.org Thu Apr 26 15:47:21 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 26 Apr 2007 21:47:21 +0200 Subject: [SciPy-user] shape In-Reply-To: <1d987df30704261138x7a9ea317rd03e25871b4e08d0@mail.gmail.com> References: <1d987df30704261138x7a9ea317rd03e25871b4e08d0@mail.gmail.com> Message-ID: <20070426194717.GD4393@clipper.ens.fr> On Thu, Apr 26, 2007 at 11:38:11AM -0700, linda.s wrote: > I am curious about the (9,) for the shape. Does it mean 9 rows one > column or one row 9 columns? The final array looks like one row 9 > columns but why (9,) gives me an impression that it is 9 rows one > column? Thanks. Rows and columns have a meaning only for 2D arrays. shape=(9,1) is a 9-row vector: a = array([[2], [4], [6], [8], [10], [12], [14], [16], [18]]) shape=(1,9) is a 9-column vector: a = array([[2, 4, 6, 8, 10, 12, 14, 16, 18]]) You can suppress the dimensions of length 1 with the numpy function "squeeze". HTH, Gaël From ryanlists at gmail.com Thu Apr 26 15:49:09 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 26 Apr 2007 14:49:09 -0500 Subject: [SciPy-user] shape In-Reply-To: <1d987df30704261138x7a9ea317rd03e25871b4e08d0@mail.gmail.com> References: <1d987df30704261138x7a9ea317rd03e25871b4e08d0@mail.gmail.com> Message-ID: It is a 1 dimensional array, so it has only one dimension. Numpy does not assume everything is a 2D array the way Matlab does. The N dimensional array is the primary class.
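[Editorial aside] Gaël's three shapes can be checked directly:

```python
import numpy as np

a = np.arange(2, 20, 2)        # the nine even numbers from the example
assert a.shape == (9,)         # 1-D: neither a row nor a column

col = a.reshape(9, 1)          # explicit column vector
row = a.reshape(1, 9)          # explicit row vector
assert col.shape == (9, 1) and row.shape == (1, 9)

# squeeze drops the length-1 dimensions, giving back the 1-D array
assert np.squeeze(col).shape == (9,)
```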
The sort of freaky thing is that it can then be either a row or a column vector depending on what the situation requires:

B=array([1,2,3])
shape(B)
A=array([[4,5,6],[7,8,9],[10,11,12]])
dot(A,B) # B is a column vector here
dot(B,A) # B is a row vector here

On 4/26/07, linda.s wrote: > I am curious about the (9,) for the shape. Does it mean 9 rows one > column or one row 9 columns? The final array looks like one row 9 > columns but why (9,) gives me an impression that it is 9 rows one > column? Thanks. > > >>> from numpy import * > >>> numarr2 > array([[ 2, 4, 6], > [ 8, 10, 12], > [14, 16, 18]]) > >>> numarr2.shape = (9,) > >>> print numarr2 > [ 2 4 6 8 10 12 14 16 18] > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From ryanlists at gmail.com Thu Apr 26 16:17:41 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 26 Apr 2007 15:17:41 -0500 Subject: [SciPy-user] Bicycle Wheel Precession Message-ID: I am trying to teach my dynamics students about the precession of a bicycle wheel like in this video: http://commons.bcit.ca/physics/video/precession.shtml If the x axis is along the axle of the wheel and the z axis points up, I think the motion can be modeled using the following ode:

def NewtonEuler(x,t,M,I):
    Ixx=I[0]
    Iyy=I[1]
    Izz=I[2]
    wx=x[0]
    wy=x[1]
    wz=x[2]
    dxdt=zeros(3,)
    dxdt[0]=((Iyy-Izz)*wy*wz+M[0])/Ixx
    dxdt[1]=((Izz-Ixx)*wx*wz+M[1])/Iyy
    dxdt[2]=((Ixx-Iyy)*wx*wy+M[2])/Izz
    return dxdt

I use this function along with integrate.odeint in the attached file, but the surprising thing is that wz is sinusoidal rather than nearly constant. Experimentally, I see nearly constant precession. Can anyone help me find what is wrong with my approach?
Thanks, Ryan -------------- next part -------------- from scipy import * from pylab import figure, cla, clf, plot, subplot, show, ylabel, xlabel, xlim, ylim, semilogx, legend, title, savefig, yticks, grid, rcParams #from IPython.Debugger import Pdb import copy, os, sys R=0.5#meters m=5.0;#kg Ixx=0.5*m*R**2 Iyy=0.25*m*R**2 Izz=Iyy wx=2.0*pi#rad/sec roughly def NewtonEuler(x,t,M,I): Ixx=I[0] Iyy=I[1] Izz=I[2] wx=x[0] wy=x[1] wz=x[2] dxdt=zeros(3,) dxdt[0]=((Iyy-Izz)*wy*wz+M[0])/Ixx dxdt[1]=((Izz-Ixx)*wx*wz+M[1])/Iyy dxdt[2]=((Ixx-Iyy)*wx*wy+M[2])/Izz return dxdt def omegaplot(fi, t, out, mylegend=['$\\omega_x$','$\\omega_y$','$\\omega_z$']): figure(fi) cla() for col in out.T: plot(t,col) xlabel('Time (sec)') ylabel('Angular Velocity (rad/sec)') if mylegend: legend(mylegend) wo=[wx,0.0,0.0] I=[Ixx,Iyy,Izz] d=0.05 g=9.81 My=d*g*m M=[0,My,0] t=arange(0,5,0.001) out=integrate.odeint(NewtonEuler, wo, t, args=(M,I,)) figure(1) cla() plot(t,out[:,0],t,out[:,1],t,out[:,2]) legend(['$\\omega_x$','$\\omega_y$','$\\omega_z$']) xlabel('Time (sec)') ylabel('Angular Velocity (rad/sec)') savefig('wx_positive.eps') ############ w2=[-wx,0.0,0.0] out2=integrate.odeint(NewtonEuler, w2, t, args=(M,I,)) figure(2) cla() plot(t,out2[:,0],t,out2[:,1],t,out2[:,2]) legend(['$\\omega_x$','$\\omega_y$','$\\omega_z$']) xlabel('Time (sec)') ylabel('Angular Velocity (rad/sec)') savefig('wx_negative.eps') ## def NewtonEulerSO(x,t,M,I): ## Ixx=I[0] ## Iyy=I[1] ## Izz=I[2] ## thx=x[0] ## thy=x[1] ## thz=x[2] ## wx=x[3] ## wy=x[4] ## wz=x[5] ## dxdt=zeros(6,) ## dxdt[0]=wx ## dxdt[1]=wy ## dxdt[2]=wz ## dxdt[3]=((Iyy-Izz)*wy*wz+M[0])/Ixx ## dxdt[4]=((Izz-Ixx)*wx*wz+M[1]*cos(thx))/Iyy ## dxdt[5]=((Ixx-Iyy)*wx*wy+M[2])/Izz ## return dxdt ## Xo=[0.0,0.0,0.0,wx,0.0,0.0] ## outSO=integrate.odeint(NewtonEulerSO, Xo, t, args=(M,I,)) ## xyzlist=['x','y','z'] ## omegaleg=['$\\omega_'+item+'$' for item in xyzlist] ## thetalist=['$\\theta_'+item+'$' for item in xyzlist] ## solegend=thetalist+omegaleg ## 
omegaplot(3,t,outSO,mylegend=solegend) show() From rhc28 at cornell.edu Thu Apr 26 17:35:11 2007 From: rhc28 at cornell.edu (Rob Clewley) Date: Thu, 26 Apr 2007 17:35:11 -0400 Subject: [SciPy-user] question about ODE output and time steps In-Reply-To: <4631015D.3010502@ieee.org> References: <9EADC1E53F9C70479BF6559370369114142EB0@mrlnt6.mrl.uiuc.edu> <4630E1C0.3000504@ieee.org> <20070426172610.GK24024@clipper.ens.fr> <4631015D.3010502@ieee.org> Message-ID: > Most of the main contributors to SciPy have been cautious > object-oriented people. We've got only a few limited classes but there > is no central objection to adding classes other than my experience is > that it is very hard to get classes that are really central enough to be > useful in more than one domain. That makes sense. That's one of the things I'd like feedback on. For instance, I imagine that a class that defines a numeric interval (PyDSTool's Interval class) in a mathematically meaningful way has universal appeal. I'm not talking about interval arithmetic here -- more in terms of defining tolerances/bounds on a variable, and with containment tests implemented using __in__, appropriate end-point behavior in the presence of rounding errors, support for semi- or bi-infinite intervals, etc. Also, a Pointset encapsulates arrays with named rows, an optional dedicated independent variable (for intrinsically 'parameterized' pointsets) and provides a friendlier face on large datasets for which keeping track of which index corresponds to which variable's data becomes a pain in the behind. Would/does anyone use these classes outside of PyDSTool? > But, that doesn't preclude us from providing package-specific classes > (which we've already done). So, if adding these classes makes it easier > / faster to solve dynamical systems, then I don't think there is going > to be any real resistance to putting them in the ODE tool chain. I'm hoping that it would be acceptable to keep odeint etc. 
more or less as they are rather than have them recoded to utilize Pointsets, Trajectories, etc. in their output, unless someone volunteers to re-wrap the existing fortran integrator codes! That's a job I will probably never find time for. > In fact, there has been lots of discussion suggesting pulling out ODE's > from their current home in the integrate package (where they only > loosely fit) and making say a "dynamic" module that houses several > solution approaches to ordinary differential equations: from simple > wrappers around ODEPACK to full-featured class-based approaches like > PyDSTool provides. This sounds very appealing to me, and would make me feel more comfortable about imposing new classes on SciPy. Others who are interested in PDEs or in symbolic dynamics might like to have their own sub-modules of a "dynamic" module. It might encourage some convergence in broadly useful classes for supporting interpolated curves & surfaces from discrete data sets. I currently only have a linearly interpolated curve class ("Trajectory") which does not easily generalize. I would be willing to get involved in setting up a "dynamic" module. > > Some technically-minded folks might want to help me improve our > > implementations too. > > > I could help with implementations in terms of Python-compiled language > wrappings if that's what you mean. Although that's not what I meant, that might also be helpful. Erik Sherwood might want to get back to you about that once we've explored how we should better approach it. I was meaning basic issues such as whether our class API and initializations are sufficiently robust and pythonic, in the eyes of more professional/experienced pythoneers. For instance, take the Pointset class: does the API need cleaning up? Is its slicing behavior reasonable? Is it unacceptable to coerce all incoming float arrays to float64!? 
(I think I know the answer to the last question, but I'm not planning to fix that soon :) > I'm planning some major work on SciPy this summer (as soon as I get this > Python buffer interface thing done for Python 3.0) > > Anyway, we haven't yet got around our ugly method for generating > > C-based right-hand-sides in a platform-independent way, which is to > > (mis-)use distutils! We hope to use auto-generated makefiles instead, > > and we have some chance of coding that up over the summer. I would > > think that would be a deal-breaker for having our integrators in > > SciPy, no? > > There is general interest in adapting weave so that pieces of it which > would be helpful for this kind of work are easier to access. I have no idea whether the way that we build C functions on the fly from user specifications can fit into the weave framework (I've never used weave), but I'll spend some time later to think about it. Perhaps others can already answer that question for me... -Rob From jturner at gemini.edu Thu Apr 26 17:57:41 2007 From: jturner at gemini.edu (James Turner) Date: Thu, 26 Apr 2007 17:57:41 -0400 Subject: [SciPy-user] JOB: Data Process Developers at Gemini Observatory Message-ID: <46312055.1040102@gemini.edu> I just wanted to point out an opening for developers at the Gemini Observatory, working on PyRAF-based data reduction software: http://www.gemini.edu/jobs/ http://members.aas.org/JobReg/JobDetailPage.cfm?JobID=23528 Just to be clear, please don't send applications to me personally, but follow the instructions at the above links. Apologies if anyone gets this from both SciPy and AstroPy. Thanks! James Turner. From gcross at u.washington.edu Thu Apr 26 18:57:14 2007 From: gcross at u.washington.edu (Gregory Crosswhite) Date: Thu, 26 Apr 2007 15:57:14 -0700 Subject: [SciPy-user] Distributed Array Library? Message-ID: <9D6787C2-4FA8-40F8-B828-B253EE4DC458@u.washington.edu> Hey everyone! I would appreciate some advice on a problem I am facing. 
I have written code using the numpy library that (among other things) performs contractions of a tensor network. Unfortunately, I have reached the point where my tensors are growing too big to handle on a single computer, so I want to rework my code so that it works on a cluster or grid. My question is: do you have suggestions for tools that would let me have ndarray-like functionality with an array that could be distributed over many processors? Specifically, I would like to be able to create very large (possibly multi-gigabyte) tensors with an arbitrary number of dimensions, to be able to transpose indices and reshape dimensions, and to take general tensor products. After searching online, it looked like there was a package online called GlobalArrays that allows one to easily create distributed arrays, but it has the following characteristics that I would have to work around: *) No Python binding at present. (One used to exist, but it has disappeared from the internet. :-) ) *) No capability for transposing indices or reshaping dimensions *) The distributed inner product operations do not take stride arguments. I also saw something called the Tensor Contraction Engine which might have some support for this kind of thing, but the documentation for the actual tensor contraction part of the system seemed very sparse so I cannot tell whether it does. I wonder whether it would be feasible to integrate something like this into the numpy core; I looked through the Guide to NumPy (thanks to Travis for taking the time to write such comprehensive documentation!) and saw that there were various hooks to implement one's own type, along with operations to perform a dot product, ufuncs, and the like, but all of these seem to assume that one has a uniform memory layout, so that adapting them for a distributed array would be an exercise in futility. Do the wise men and women of this list have any advice regarding the best tool to use? :-) Thank you very much in advance!
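[Editorial aside] The single-machine version of the contractions Gregory describes is numpy.tensordot, which any distributed replacement would need to reproduce alongside transpose and reshape (the shapes below are purely illustrative):

```python
import numpy as np

rng = np.random.RandomState(0)
A = rng.rand(3, 4, 5)   # tensor with indices (i, j, k)
B = rng.rand(5, 4, 6)   # tensor with indices (k, j, m)

# Contract k (axis 2 of A with axis 0 of B) and j (axis 1 with axis 1),
# leaving a result indexed by (i, m).
C = np.tensordot(A, B, axes=([2, 1], [0, 1]))
assert C.shape == (3, 6)

# The same contraction written out elementwise, as a spot check.
manual = np.zeros((3, 6))
for i in range(3):
    for m in range(6):
        manual[i, m] = sum(A[i, j, k] * B[k, j, m]
                           for j in range(4) for k in range(5))
assert np.allclose(C, manual)
```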
- Gregory Crosswhite From steve at shrogers.com Thu Apr 26 21:44:16 2007 From: steve at shrogers.com (Steven H. Rogers) Date: Thu, 26 Apr 2007 19:44:16 -0600 Subject: [SciPy-user] Distributed Array Library? In-Reply-To: <9D6787C2-4FA8-40F8-B828-B253EE4DC458@u.washington.edu> References: <9D6787C2-4FA8-40F8-B828-B253EE4DC458@u.washington.edu> Message-ID: <46315570.8070908@shrogers.com> I don't know of an existing library that does what you want, but IPython1 (http://ipython.scipy.org/moin/IPython1) is intended to facilitate this type of distributed application. Currently alpha and under heavy development, but it's working well enough for you to look at. # Steve From answer at tnoo.net Fri Apr 27 05:27:34 2007 From: answer at tnoo.net (Martin Lüthi) Date: Fri, 27 Apr 2007 11:27:34 +0200 Subject: [SciPy-user] Distributed Array Library? References: <9D6787C2-4FA8-40F8-B828-B253EE4DC458@u.washington.edu> Message-ID: <87d51q6ubd.fsf@tnoo.net> Petsc [1] does distributed processing (MPI). There is a Python wrapper petsc4py [2]. [1] http://www-unix.mcs.anl.gov/petsc/petsc-as/ [2] http://code.google.com/p/petsc4py/ -- Martin Lüthi answer at tnoo.net From ryanlists at gmail.com Fri Apr 27 08:59:47 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Fri, 27 Apr 2007 07:59:47 -0500 Subject: [SciPy-user] Creating my own pseudo-Enthought installer Message-ID: I am making some progress at converting my students and colleagues to Scipy and friends from other, lesser, commercial programs. One of the biggest hurdles is getting everything installed. Enthought is great, but doesn't get updated super often. I understand why and appreciate what they do and am not complaining. Is there a relatively easy way for me to make my own installer that does what Enthought Python does so that I can have a more updated version available for my students and colleagues? Right now I tell them to install Enthought and then upgrade Numpy/Scipy/Matplotlib.
Thanks, Ryan From S.Mientki at ru.nl Fri Apr 27 09:15:35 2007 From: S.Mientki at ru.nl (Stef Mientki) Date: Fri, 27 Apr 2007 15:15:35 +0200 Subject: [SciPy-user] Creating my own pseudo-Enthought installer In-Reply-To: References: Message-ID: <4631F777.6030809@ru.nl> Ryan Krauss wrote: > I am making some progress at converting my students and colleagues to > Scipy and friends from other, lesser, commercial programs. One of the > biggest hurdles is getting everything installed. Enthought is great, > but doesn't get updated super often. I understand why and appreciate > what they do and am not complaining. Is there a relatively easy way > for me to make my own installer that does what Enthought Python does > so that I can have a more updated version available for my students > and colleagues? Right now I tell them to install Enthought and then > upgrade Numpy/Scipy/Matplotlib. > I asked this question a few days ago, but didn't get the right answer. Yesterday I did a very simple test, (I don't know anything about the file structure of SciPy) basically, - just install everything on my own PC - copy some extra Python24.DLL - put everything in an Inno Setup install (10 lines of code) What I see now, looks to work perfectly, and it's even portable on a USB stick ! I'll try to put everything this weekend on my website and let you know the URL. cheers, Stef Mientki > Thanks, > > Ryan > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > Kamer van Koophandel - handelsregister 41055629 / Netherlands Chamber of Commerce - trade register 41055629 From bnuttall at uky.edu Fri Apr 27 10:00:00 2007 From: bnuttall at uky.edu (Brandon Nuttall) Date: Fri, 27 Apr 2007 10:00:00 -0400 Subject: [SciPy-user] Matplotlib/pylab help In-Reply-To: <4631F777.6030809@ru.nl> References: <4631F777.6030809@ru.nl> Message-ID: <6.0.1.1.2.20070427095338.02401328@pop.uky.edu> Folks, I have a simple(?)
question. I need to plot some data with the y axis inverted. That is, the minimum y is at the top of the y axis (top left corner of figure) and the maximum y in at the bottom of the y axis (bottom left corner of figure) where it intersects the x axis. I'm plotting depth-related data xy data. I've browsed the FAQs, docstrings, Matplotlib User's Guide and haven't found out how to do this. I'm certain I'm overlooking some argument or method somewhere. Thanks for any assistance. Brandon Brandon C. Nuttall BNUTTALL at UKY.EDU Kentucky Geological Survey (859) 257-5500 University of Kentucky (859) 257-1147 (fax) 228 Mining & Mineral Resources Bldg http://www.uky.edu/KGS/home.htm Lexington, Kentucky 40506-0107 From ryanlists at gmail.com Fri Apr 27 10:12:44 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Fri, 27 Apr 2007 09:12:44 -0500 Subject: [SciPy-user] Matplotlib/pylab help In-Reply-To: <6.0.1.1.2.20070427095338.02401328@pop.uky.edu> References: <4631F777.6030809@ru.nl> <6.0.1.1.2.20070427095338.02401328@pop.uky.edu> Message-ID: I think it is as simple as specifying your ylim to be [ymax,ymin]: t=arange(0,0.2,0.001) y=sin(2*10*pi*t) plot(t,y) ylim([1.0,-1.0]) On 4/27/07, Brandon Nuttall wrote: > Folks, > > I have a simple(?) question. I need to plot some data with the y axis > inverted. That is, the minimum y is at the top of the y axis (top left > corner of figure) and the maximum y in at the bottom of the y axis (bottom > left corner of figure) where it intersects the x axis. I'm plotting > depth-related data xy data. I've browsed the FAQs, docstrings, Matplotlib > User's Guide and haven't found out how to do this. I'm certain I'm > overlooking some argument or method somewhere. > > Thanks for any assistance. > > Brandon > > > > Brandon C. 
Nuttall > > BNUTTALL at UKY.EDU Kentucky Geological Survey > (859) 257-5500 University of Kentucky > (859) 257-1147 (fax) 228 Mining & Mineral > Resources Bldg > http://www.uky.edu/KGS/home.htm Lexington, Kentucky 40506-0107 > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From gnchen at cortechs.net Fri Apr 27 10:37:35 2007 From: gnchen at cortechs.net (Gennan Chen) Date: Fri, 27 Apr 2007 07:37:35 -0700 Subject: [SciPy-user] question about scipy.linalg + numpy.linalg + numpy.dual?? Message-ID: <784CA98C-587D-4C90-9A38-C6BE4A8FF8C1@cortechs.net> Hi! All, I need to solve a linear system and its A is sym-pos. The problem I have is there are so choices in scipy/numpy. Can anyone clarify which one I should use?? My system is OS X running on a macpro. I did manage to get numpy/ scipy compiled against MKL. Gen-Nan Chen, PhD -------------- next part -------------- An HTML attachment was scrubbed... URL: From bnuttall at uky.edu Fri Apr 27 10:45:02 2007 From: bnuttall at uky.edu (Brandon Nuttall) Date: Fri, 27 Apr 2007 10:45:02 -0400 Subject: [SciPy-user] Matplotlib/pylab help In-Reply-To: References: <4631F777.6030809@ru.nl> <6.0.1.1.2.20070427095338.02401328@pop.uky.edu> Message-ID: <6.0.1.1.2.20070427104437.023ea430@pop.uky.edu> That does it. Thanks. At 10:12 AM 4/27/2007, you wrote: >I think it is as simple as specifying your ylim to be [ymax,ymin]: > >t=arange(0,0.2,0.001) >y=sin(2*10*pi*t) >plot(t,y) >ylim([1.0,-1.0]) > >On 4/27/07, Brandon Nuttall wrote: > > Folks, > > > > I have a simple(?) question. I need to plot some data with the y axis > > inverted. That is, the minimum y is at the top of the y axis (top left > > corner of figure) and the maximum y in at the bottom of the y axis (bottom > > left corner of figure) where it intersects the x axis. I'm plotting > > depth-related data xy data. 
I've browsed the FAQs, docstrings, Matplotlib > > User's Guide and haven't found out how to do this. I'm certain I'm > > overlooking some argument or method somewhere. > > > > Thanks for any assistance. > > > > Brandon > > > > > > > > Brandon C. Nuttall > > > > BNUTTALL at UKY.EDU Kentucky Geological Survey > > (859) 257-5500 University of Kentucky > > (859) 257-1147 (fax) 228 Mining & Mineral > > Resources Bldg > > http://www.uky.edu/KGS/home.htm Lexington, Kentucky 40506-0107 > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user Brandon C. Nuttall BNUTTALL at UKY.EDU Kentucky Geological Survey (859) 257-5500 University of Kentucky (859) 257-1147 (fax) 228 Mining & Mineral Resources Bldg http://www.uky.edu/KGS/home.htm Lexington, Kentucky 40506-0107 From t_crane at mrl.uiuc.edu Fri Apr 27 11:11:06 2007 From: t_crane at mrl.uiuc.edu (Trevis Crane) Date: Fri, 27 Apr 2007 10:11:06 -0500 Subject: [SciPy-user] best way of finding a function Message-ID: <9EADC1E53F9C70479BF6559370369114142EB3@mrlnt6.mrl.uiuc.edu> Hi, One thing that makes it hard to get into using SciPy and Python is the decentralized nature of the documentation. My problem is that I want to use the arc-hyperbolic sine function. I have no idea where this is in order to import it (and in all likelihood I could import it from any number of sources). I can't seem to find it looking through the documentation on scipy.org. This is a specific example, but in general what's the best way of finding where some given function is in order to import it? Once you've been using scipy and such long enough, I imagine this is not so much an issue, but it does create something of a barrier to begin with. Or at least, I've found it to be so. 
Do you all have any suggestions how to make this easier? thanks, trevis ________________________________________________ Trevis Crane Postdoctoral Research Assoc. Department of Physics University of Ilinois 1110 W. Green St. Urbana, IL 61801 p: 217-244-8652 f: 217-244-2278 e: tcrane at uiuc.edu ________________________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From bhendrix at enthought.com Fri Apr 27 11:33:25 2007 From: bhendrix at enthought.com (Bryce Hendrix) Date: Fri, 27 Apr 2007 10:33:25 -0500 Subject: [SciPy-user] How to install SciPy in the most simplest way ? In-Reply-To: <462F08EB.8010101@ru.nl> References: <462F08EB.8010101@ru.nl> Message-ID: <463217C5.805@enthought.com> Stef, What problems did you run into with Enstaller? Its no longer alpha- although some features are missing, it should behave very well. There haven't been any changes in several weeks, I encourage you to try it again if its been a while. If you want a simple install, you can create an msi that has a custom action to call a script which will install eggs using the Enstaller in the command line mode. You can also just copy the Python install directory to another machine, it should work okay, but you won't get the start menu additions or the explorer integration (double clicking on a python file will do nothing). Bryce Stef Mientki wrote: > hello, > > I'm still in the transition of going from MatLab to Scipy, > and installed previous week a SciPy on a PC twice, > through the new "Enstaller". > It's a pitty that there will be no old installer versions anymore > (although I can understand why). > > Although I succeeded, the behavior of the Enstaller was different both > times, > and you can clearly see it's an alpha version. > (I already wrote my experiences with the first install, > the second install had the weird phenomena that none of the succesful > installed packages was detected). 
> > As a spoiled windows user, which by the way are most people in my > surrounding, > I'm used to a "one-button-install", > so I wonder if it's possible to make a much simpeler install procedure. > > I don't know anything about what's required for a good install, > what kind of things things should be stored in the windows registry, > but as Python is an interpretor, > I would expect there should be a very easy procedure: > - Install it on one machine, > - copy the complete subdirectory to another computer > Does this work for Python + Scipy ? > > Though the above question might seem a lot of fuzz about nearly nothing, > it's very essential for my plan, > in which I want to convince the other people at our university to move > from MatLab to Python. > For windows users, the "one-button-install" is essential, > otherwise most windows users, will not even try a new package. > > Sorry for the long post, about "nothing" for non-windows users ;-) > > thanks, > Stef Mientki > > > Kamer van Koophandel - handelsregister 41055629 / Netherlands Chamber of Commerce - trade register 41055629 > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From t_crane at mrl.uiuc.edu Fri Apr 27 11:44:30 2007 From: t_crane at mrl.uiuc.edu (Trevis Crane) Date: Fri, 27 Apr 2007 10:44:30 -0500 Subject: [SciPy-user] How to install SciPy in the most simplest way ? Message-ID: <9EADC1E53F9C70479BF65593703691141343EB@mrlnt6.mrl.uiuc.edu> I just recently decided to the same as you Stef (go from Matlab to Python). After going back and forth several times, I finally just decided to go with the Enthought install. This was a "one-button" install procedure and you get everything you'll need to more or less duplicate Matlab's capabilities. 
As mentioned elsewhere, it doesn't always have the absolute latest version of the various packages included with it, but I decided for my purposes this wasn't so important. trevis > -----Original Message----- > From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On > Behalf Of Bryce Hendrix > Sent: Friday, April 27, 2007 10:33 AM > To: SciPy Users List > Subject: Re: [SciPy-user] How to install SciPy in the most simplest way ? > > Stef, > > What problems did you run into with Enstaller? Its no longer alpha- > although some features are missing, it should behave very well. There > haven't been any changes in several weeks, I encourage you to try it > again if its been a while. > > If you want a simple install, you can create an msi that has a custom > action to call a script which will install eggs using the Enstaller in > the command line mode. You can also just copy the Python install > directory to another machine, it should work okay, but you won't get the > start menu additions or the explorer integration (double clicking on a > python file will do nothing). > > Bryce > > Stef Mientki wrote: > > hello, > > > > I'm still in the transition of going from MatLab to Scipy, > > and installed previous week a SciPy on a PC twice, > > through the new "Enstaller". > > It's a pitty that there will be no old installer versions anymore > > (although I can understand why). > > > > Although I succeeded, the behavior of the Enstaller was different both > > times, > > and you can clearly see it's an alpha version. > > (I already wrote my experiences with the first install, > > the second install had the weird phenomena that none of the succesful > > installed packages was detected). > > > > As a spoiled windows user, which by the way are most people in my > > surrounding, > > I'm used to a "one-button-install", > > so I wonder if it's possible to make a much simpeler install procedure. 
> > > > I don't know anything about what's required for a good install, > > what kind of things things should be stored in the windows registry, > > but as Python is an interpretor, > > I would expect there should be a very easy procedure: > > - Install it on one machine, > > - copy the complete subdirectory to another computer > > Does this work for Python + Scipy ? > > > > Though the above question might seem a lot of fuzz about nearly nothing, > > it's very essential for my plan, > > in which I want to convince the other people at our university to move > > from MatLab to Python. > > For windows users, the "one-button-install" is essential, > > otherwise most windows users, will not even try a new package. > > > > Sorry for the long post, about "nothing" for non-windows users ;-) > > > > thanks, > > Stef Mientki > > > > > > Kamer van Koophandel - handelsregister 41055629 / Netherlands Chamber of > Commerce - trade register 41055629 > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From ellisonbg.net at gmail.com Fri Apr 27 11:47:47 2007 From: ellisonbg.net at gmail.com (Brian Granger) Date: Fri, 27 Apr 2007 09:47:47 -0600 Subject: [SciPy-user] Distributed Array Library? In-Reply-To: <9D6787C2-4FA8-40F8-B828-B253EE4DC458@u.washington.edu> References: <9D6787C2-4FA8-40F8-B828-B253EE4DC458@u.washington.edu> Message-ID: <6ce0ac130704270847k59bbd522tad40102209e677d0@mail.gmail.com> Hi, Some thoughts and potential directions: 1. petsc4py is definitely worth looking at 2. Also pytrillinos is another really good parallel array/matrix library: http://software.sandia.gov/trilinos/packages/pytrilinos/ It seems very powerful and is well supported. 3. 
Global arrays Robert Harrison at ORNL has python bindings to this. They probably need updating, and I am not sure if/where they can be downloaded. This could be very nice. It also might make sense to do a simple ctypes wrapper for the global array library. I would be interested in this. 4. Someone could write low-level code using numpy+mpi4py We (the IPython1 devs) have thought about this some. To provide basic distributed arrays wouldn't be very difficult. The challenge is that once you have such things, people will want things like eigensolvers, linear solvers, etc. These wouldn't be as easy. But, there would be an advantage. The overall focus of the above packages is that they are focused on linear algebra (matrices). For higher rank tensors, I am not sure they are that great. I would be really nice to have something that was better for "real" tensor work. But it might make sense to go with global arrays instead. 4. IPython1. While IPython1 doesn't provide any distributed array library, it would provide a very nice context in which to use one of the above solutions. It would integrate seamlessly with any of the above and enable interactive development/debugging/execution. Here are details: http://ipython.scipy.org/moin/Parallel_Computing It would be great to have a tutorial showing how to use petsc4py or pytrillinos with ipython1. Any takers? Brian On 4/26/07, Gregory Crosswhite wrote: > Hey everyone! I would appreciate some advice on a problem I am facing. > > I have written a code using the numpy library that (among other > things) performs contractions of a tensor network. Unfortunately, I > have reached the point where my tensors are growing too big to handle > in a single computer, so I want to rework my code so that it works on > a cluster or grid. > > My question is: do you have suggestions for tools that would let me > have ndarray like functionality with an array that could be > distributed over many processors? 
Specifically, I would like to be > able to create very large (possibly multi-gigabyte) tensors with an > arbitrary number of dimensions, to be able to transpose indices and > reshape dimensions, and to take general tensor products. > > After searching online, it looked like there was a package online > called GlobalArrays that allows one to easily create distributed > arrays, but it has the following characteristics that I would have to > work around: > > *) No Python binding at present. (One used to exist, but it has > disappeared from the internet. :-) ) > *) No capability for transposing indices or reshaping dimensions > *) The distributed inner product operations do not take stride > arguments. > > I also saw something called the Tensor Contraction Engine which might > have some support for this kind of thing, but the documentation for > the actual tensor contraction part of the system seemed very sparse > so I cannot tell whether . > > I wonder whether it would be feasible to integrate something like > this into the numpy core; I looked through the Guide to NumPy (thank > to Travis for taking the time to write such comprehensive > documentation!) and saw that there were various hooks to implement > one's own type, along with operations to perform a dot product, > ufuncs, and the like, but all of these seem to assume that one has a > uniform memory layout so that adopting them for a distributed array > would be an exercise in futility. > > Do the wise men and women of this list have any advice regarding the > best tool to use? :-) > > Thank you very much in advance! 
> > - Gregory Crosswhite > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From fred.jen at web.de Fri Apr 27 12:04:02 2007 From: fred.jen at web.de (Fred Jendrzejewski) Date: Fri, 27 Apr 2007 18:04:02 +0200 Subject: [SciPy-user] Finding a function Message-ID: <1177689842.5648.4.camel@muli> Hello, I wanted to say that I really know this problem. I started scipy half a year ago and everything took a long time to understand. At the moment there is a computational physics course at my university and I know that a lot of other students have to deal with the same problem. So, is there a really fast and practical way to find the functions in scipy? thanks, fred From s.mientki at ru.nl Fri Apr 27 12:39:34 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Fri, 27 Apr 2007 18:39:34 +0200 Subject: [SciPy-user] how to make vectors and arrays of the same length ? Message-ID: <46322746.8040107@ru.nl> hello, I have vectors (1d-arrays) and 2d-arrays which should have equal length along the last axis, extending the too-short vectors with the value of their last sample. And as I try to write this code (see my try below), I have the feeling that I'm complicating things unnecessarily. Maybe someone has a nice trick ?
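One possible trick for Stef's padding question: keep a single code path for the 1-d and 2-d cases by using an Ellipsis index plus repeat() along the last axis. This is only a sketch against made-up data, not tested on his real arrays:

```python
import numpy as np

def pad_last_axis(a, length):
    """Extend `a` along its last axis to `length`, repeating the last sample."""
    short = length - a.shape[-1]
    if short <= 0:
        return a
    tail = a[..., -1:].repeat(short, axis=-1)  # last sample/column, repeated
    return np.concatenate((a, tail), axis=-1)

b1 = np.floor(10 * np.random.random((2, 10)))
b2 = np.ones(20)
ml = max(b1.shape[-1], b2.shape[-1])
b1 = pad_last_axis(b1, ml)
b2 = pad_last_axis(b2, ml)
print(b1.shape, b2.shape)  # (2, 20) (20,)
```

Because the indexing is a[..., -1:], the same function handles vectors and 2-d arrays without branching on ndim.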
thanks, Stef Mientki b1 = floor(10*random.random((2,10))) b2 = ones(20) # test dimension along the last axis (= time-axis) _ml = max ( b1.shape[-1], b2.shape[-1] ) if b1.ndim == 2 : if b1.shape[0]<_ml: last_col = b1[:,-1] #while b1.shape[0] < _ml: #b1 = vstack((b1,last_col)) print last_col xc = vstack((last_col,last_col)) print 'xc',xc b1 = hstack((b1,xc)) print b1 print xc[0] # the 1-d case is simple (but probably can also be improved ;-) else: if len(b1)<_ml: b1 = r_ [b1, b1[-1]*ones(_ml-len(b1))] if len(b2)<_ml: b2 = r_ [b2, b2[-1]*ones(_ml-len(b2))] From robert.kern at gmail.com Fri Apr 27 12:43:07 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 27 Apr 2007 11:43:07 -0500 Subject: [SciPy-user] best way of finding a function In-Reply-To: <9EADC1E53F9C70479BF6559370369114142EB3@mrlnt6.mrl.uiuc.edu> References: <9EADC1E53F9C70479BF6559370369114142EB3@mrlnt6.mrl.uiuc.edu> Message-ID: <4632281B.5070705@gmail.com> Trevis Crane wrote: > Hi, > > One thing that makes it hard to get into using SciPy and Python is the > decentralized nature of the documentation. My problem is that I want to > use the arc-hyperbolic sine function. I have no idea where this is in > order to import it (and in all likelihood I could import it from any > number of sources). I can't seem to find it looking through the > documentation on scipy.org. This is a specific example, but in general > what's the best way of finding where some given function is in order to > import it? Well, it is numpy.arcsinh(). Googling for "numpy arcsinh" brings up numerous hits including this: http://www.scipy.org/Numpy_Example_List The problem is that you had to know it was called "arcsinh" rather than searching for it in a (nonunique) expanded form "arc-hyperbolic sine."
This isn't a terribly soluble problem; we refer to it in the docstring with the (possibly more correct) expanded form "inverse hyperbolic sine", a search for which would have given you this: http://www.scipy.org/Numpy_Example_List_With_Doc -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From t_crane at mrl.uiuc.edu Fri Apr 27 12:49:36 2007 From: t_crane at mrl.uiuc.edu (Trevis Crane) Date: Fri, 27 Apr 2007 11:49:36 -0500 Subject: [SciPy-user] best way of finding a function Message-ID: <9EADC1E53F9C70479BF6559370369114142EB4@mrlnt6.mrl.uiuc.edu> OK thank you. But in the generalized case, for you guys/gals who are more experienced, when looking for a function for the first time, is this what you usually do -- Google it? > -----Original Message----- > From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On > Behalf Of Robert Kern > Sent: Friday, April 27, 2007 11:43 AM > To: SciPy Users List > Subject: Re: [SciPy-user] best way of finding a function > > Trevis Crane wrote: > > Hi, > > > > One thing that makes it hard to get into using SciPy and Python is the > > decentralized nature of the documentation. My problem is that I want to > > use the arc-hyperbolic sine function. I have no idea where this is in > > order to import it (and in all likelihood I could import it from any > > number of sources). I can't seem to find it looking through the > > documentation on scipy.org. This is a specific example, but in general > > what's the best way of finding where some given function is in order to > > import it? > > Well, it is numpy.arcsinh(). 
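For completeness, numpy.arcsinh is a ufunc, so it applies elementwise to whole arrays, and a quick identity check confirms it is the inverse hyperbolic sine being asked about:

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 11)
y = np.arcsinh(x)

# Closed form of the inverse hyperbolic sine.
y_ref = np.log(x + np.sqrt(x ** 2 + 1.0))

print(np.allclose(y, y_ref))       # True
print(np.allclose(np.sinh(y), x))  # True: sinh undoes arcsinh
```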
Googling for "numpy arcsinh" brings up numerous > hits including this: > > http://www.scipy.org/Numpy_Example_List > > The problem is that you had to know it was called "arcsinh" rather than > searching for it in a (nonunique) expanded form "arc-hyperbolic sine." This > isn't a terribly soluble problem; we refer to it in the docstring with the > (possibly more correct) expanded form "inverse hyperbolic sine", a search for > which would have given you this: > > http://www.scipy.org/Numpy_Example_List_With_Doc > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From s.mientki at ru.nl Fri Apr 27 13:00:12 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Fri, 27 Apr 2007 19:00:12 +0200 Subject: [SciPy-user] How to install SciPy in the most simplest way ? In-Reply-To: <463217C5.805@enthought.com> References: <462F08EB.8010101@ru.nl> <463217C5.805@enthought.com> Message-ID: <46322C1C.1010700@ru.nl> hello Bryce, Bryce Hendrix wrote: > Stef, > > What problems did you run into with Enstaller? I put a copy of my remarks below, don't know if they arrived at Enthought. > Its no longer alpha- > but as far as know, you can't find it on the Enthought site, unless you know the exact URL. > although some features are missing, it should behave very well. There > haven't been any changes in several weeks, I encourage you to try it > again if its been a while. > I tried a few days ago. The problem might be a difference in the definition of user friendly, between NIX and Windows users. > If you want a simple install, you can create an msi that has a custom > action to call a script which will install eggs using the Enstaller in > the command line mode. 
You can also just copy the Python install > directory to another machine, it should work okay, That's really great news, (and the answer I was hoping to get ;-) because I just tested it just a little bit and it seems to work. > but you won't get the > start menu additions I think there's a solution for, but I have to test it. > or the explorer integration (double clicking on a > python file will do nothing). > No, but for that you've to change the windows registry. So possibly I make this an optional install feature. thanks, Stef Mientki > > ========== start copy =========== hello, thank you for creating Enthought edition. I downloaded it a few months ago, at everything went fluently. Because of a buggy signal library, I was advised (by Robert Kern) to upgrade my package. As a beginner to Python and a spoiled windows-user this didn't went so fluently (and I realize it's an alpha version) Here a my steps and remarks. - I removed all python from my winXP (SP 1) - I used the msi installer, which works great (real windows ;-) - After it installed basic Python on "P:\Program Files\Python\" it started some kind of DOS box (probably Python) which closed right away - Then I figured out (by trying some different installs) that the problem was in the SPACE in the filepath - So I deinstalled Python again - Installed Python at "P:\Python\", and it worked, great !! And my first tests with signal.py showed less bugs, I think I still have some troubles, but I will investigate that in the next few days. 
Now some remarks about enstaller (remember I'm a spoiled windows user ;-) - the user interaction is terribly slow (2.5 GHz AMD) - the scroll button on the mouse makes much too small steps - why not a button "select all packages to install" , after all what's a few GB nowadays - there seemed to be no progressbar, so I wondered what was going on (I later discovered some logging) - the reposteries where not updated correctly ( a number of packages were marked as "n", which I presumes means "No", but were colored blue, and reinstalling gave message "succesfully installed" but marked was still "n" Thanks for enstaller (despite it's alpha characteristics), cheers, Stef Mientki ========= end copy ========= > From aisaac at american.edu Fri Apr 27 13:58:55 2007 From: aisaac at american.edu (Alan Isaac) Date: Fri, 27 Apr 2007 13:58:55 -0400 Subject: [SciPy-user] best way of finding a function In-Reply-To: <9EADC1E53F9C70479BF6559370369114142EB4@mrlnt6.mrl.uiuc.edu> References: <9EADC1E53F9C70479BF6559370369114142EB4@mrlnt6.mrl.uiuc.edu> Message-ID: On Fri, 27 Apr 2007, Trevis Crane wrote: > when looking for a function for the first time If it is likely to be a numpy function, I first look at the excellent Guide to Numpy: http://www.tramy.us/ If I don't have the book with me, I look at http://www.scipy.org/Numpy_Example_List The SciPy API docs are very useful: http://www.scipy.org/doc/api_docs/ fwiw, Alan Isaac From robert.kern at gmail.com Fri Apr 27 13:02:07 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 27 Apr 2007 12:02:07 -0500 Subject: [SciPy-user] best way of finding a function In-Reply-To: <9EADC1E53F9C70479BF6559370369114142EB4@mrlnt6.mrl.uiuc.edu> References: <9EADC1E53F9C70479BF6559370369114142EB4@mrlnt6.mrl.uiuc.edu> Message-ID: <46322C8F.6020202@gmail.com> Trevis Crane wrote: > OK thank you. 
But in the generalized case, for you guys/gals who are > more experienced, when looking for a function for the first time, is > this what you usually do -- Google it? I pretty much just know what's there at this point. Maybe a little bit of grepping in the source tree. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From s.mientki at ru.nl Fri Apr 27 13:08:46 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Fri, 27 Apr 2007 19:08:46 +0200 Subject: [SciPy-user] best way of finding a function In-Reply-To: <4632281B.5070705@gmail.com> References: <9EADC1E53F9C70479BF6559370369114142EB3@mrlnt6.mrl.uiuc.edu> <4632281B.5070705@gmail.com> Message-ID: <46322E1E.5020905@ru.nl> Robert Kern wrote: > Trevis Crane wrote: > >> Hi, >> >> One thing that makes it hard to get into using SciPy and Python is the >> decentralized nature of the documentation. My problem is that I want to >> use the arc-hyperbolic sine function. I have no idea where this is in >> order to import it (and in all likelihood I could import it from any >> number of sources). I can't seem to find it looking through the >> documentation on scipy.org. This is a specific example, but in general >> what's the best way of finding where some given function is in order to >> import it? >> Very good question, I often had the feeling to ask that question, but I didn't dare ;-) > > Well, it is numpy.arcsinh(). Googling for "numpy arcsinh" brings up numerous > hits including this: > > http://www.scipy.org/Numpy_Example_List > > The problem is that you had to know it was called "arcsinh" rather than > searching for it in a (nonunique) expanded form "arc-hyperbolic sine." That is one of the problems, the other problem is to know that you have to refer to "numpy" !!
I understand that it's very difficult to get a homogeneous help documentation, on the other hand, it shouldn't be too difficult to generate some kind of database, where all functions from all packages in the Python directory are gathered. So I really wonder if someone hasn't done that yet. cheers, Stef Mientki From faltet at carabos.com Fri Apr 27 13:08:22 2007 From: faltet at carabos.com (Francesc Altet) Date: Fri, 27 Apr 2007 19:08:22 +0200 Subject: [SciPy-user] best way of finding a function In-Reply-To: <9EADC1E53F9C70479BF6559370369114142EB4@mrlnt6.mrl.uiuc.edu> References: <9EADC1E53F9C70479BF6559370369114142EB4@mrlnt6.mrl.uiuc.edu> Message-ID: <1177693702.2625.4.camel@localhost.localdomain> On Fri, 27 Apr 2007 at 11:49 -0500, Trevis Crane wrote: > OK thank you. But in the generalized case, for you guys/gals who are > more experienced, when looking for a function for the first time, is > this what you usually do -- Google it? Google helps a lot indeed. I use also quite a lot the TAB key in IPython, and the '?' mark after a function name that I'm not sure if it is what I'm after.
2007/4/26, Ryan Krauss : > > I am trying to teach my dynamics students about teh precession of a > bicycle wheel like in this video: > > http://commons.bcit.ca/physics/video/precession.shtml > > If the x axis is along the axle of the wheel and the z axis points up, > I think the motion can be modeled using the following ode: > > def NewtonEuler(x,t,M,I): > Ixx=I[0] > Iyy=I[1] > Izz=I[2] > wx=x[0] > wy=x[1] > wz=x[2] > dxdt=zeros(3,) > dxdt[0]=((Iyy-Izz)*wy*wz+M[0])/Ixx > dxdt[1]=((Izz-Ixx)*wx*wz+M[1])/Iyy > dxdt[2]=((Ixx-Iyy)*wx*wy+M[2])/Izz > return dxdt > > I use this function along with integrate.odeint in the attached file, > but the surprising thing is that wz is sinusoidal rather than nearly > constant. Experimentally, I see nearly constant precession. > > Can anyone help me find what is wrong with my approach? > > Thanks, > > Ryan > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Fri Apr 27 13:14:34 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 27 Apr 2007 11:14:34 -0600 Subject: [SciPy-user] best way of finding a function In-Reply-To: <1177693702.2625.4.camel@localhost.localdomain> References: <9EADC1E53F9C70479BF6559370369114142EB4@mrlnt6.mrl.uiuc.edu> <1177693702.2625.4.camel@localhost.localdomain> Message-ID: On 4/27/07, Francesc Altet wrote: > El dv 27 de 04 del 2007 a les 11:49 -0500, en/na Trevis Crane va > escriure: > > OK thank you. But in the generalized case, for you guys/gals who are > > more experienced, when looking for a function for the first time, is > > this what you usually do -- Google it? > > Google helps a lot indeed. > > I use also quite a lot the TAB key in IPython, and the '?' mark after a > name function that I'm not sure if it is what I'm after. 
When this > technique is used in combination with packages that are hierarchically > structured (i.e. subpackages of subpackages) like SciPy is, this turns > out to be stunningly effective (at least for me). The following little trick in ipython is also worth knowing about: In [2]: import numpy In [3]: numpy.*cos*? numpy.arccos numpy.arccosh numpy.cos numpy.cosh Cheers, f From t_crane at mrl.uiuc.edu Fri Apr 27 13:17:04 2007 From: t_crane at mrl.uiuc.edu (Trevis Crane) Date: Fri, 27 Apr 2007 12:17:04 -0500 Subject: [SciPy-user] best way of finding a function Message-ID: <9EADC1E53F9C70479BF65593703691141343EE@mrlnt6.mrl.uiuc.edu> > -----Original Message----- > From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On > Behalf Of Fernando Perez > Sent: Friday, April 27, 2007 12:15 PM > To: SciPy Users List > Subject: Re: [SciPy-user] best way of finding a function > > On 4/27/07, Francesc Altet wrote: > > El dv 27 de 04 del 2007 a les 11:49 -0500, en/na Trevis Crane va > > escriure: > > > OK thank you. But in the generalized case, for you guys/gals who are > > > more experienced, when looking for a function for the first time, is > > > this what you usually do -- Google it? > > > > Google helps a lot indeed. > > > > I use also quite a lot the TAB key in IPython, and the '?' mark after a > > name function that I'm not sure if it is what I'm after. When this > > technique is used in combination with packages that are hierarchically > > structured (i.e. subpackages of subpackages) like SciPy is, this turns > > out to be stunningly effective (at least for me). > > The following little trick in ipython is also worth knowing about: > > In [2]: import numpy > > In [3]: numpy.*cos*? > numpy.arccos > numpy.arccosh > numpy.cos > numpy.cosh > > [Trevis Crane] OK cool... 
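Fernando's numpy.*cos*? wildcard is IPython-only syntax; outside IPython, roughly the same lookup can be done with fnmatch over dir(), e.g.:

```python
import fnmatch
import numpy

def wildcard_search(module, pattern):
    """List a module's attribute names matching a shell-style pattern."""
    return sorted(name for name in dir(module)
                  if fnmatch.fnmatch(name, pattern))

names = wildcard_search(numpy, '*cos*')
print(names)  # includes 'arccos', 'arccosh', 'cos', 'cosh'; the exact list varies by NumPy version
```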
thanks > Cheers, > > f > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From faltet at carabos.com Fri Apr 27 13:18:17 2007 From: faltet at carabos.com (Francesc Altet) Date: Fri, 27 Apr 2007 19:18:17 +0200 Subject: [SciPy-user] best way of finding a function In-Reply-To: References: <9EADC1E53F9C70479BF6559370369114142EB4@mrlnt6.mrl.uiuc.edu> <1177693702.2625.4.camel@localhost.localdomain> Message-ID: <1177694297.2625.7.camel@localhost.localdomain> On Fri, 27 Apr 2007 at 11:14 -0600, Fernando Perez wrote: > On 4/27/07, Francesc Altet wrote: > > On Fri, 27 Apr 2007 at 11:49 -0500, Trevis Crane > > wrote: > > > OK thank you. But in the generalized case, for you guys/gals who are > > > more experienced, when looking for a function for the first time, is > > > this what you usually do -- Google it? > > > > Google helps a lot indeed. > > > > I use also quite a lot the TAB key in IPython, and the '?' mark after a > > name function that I'm not sure if it is what I'm after. When this > > technique is used in combination with packages that are hierarchically > > structured (i.e. subpackages of subpackages) like SciPy is, this turns > > out to be stunningly effective (at least for me). > > The following little trick in ipython is also worth knowing about: > > In [2]: import numpy > > In [3]: numpy.*cos*? > numpy.arccos > numpy.arccosh > numpy.cos > numpy.cosh That's great. I should find time to read the ipython manual in more depth. It will save me a lot of time :) -- Francesc Altet | Be careful about using the following code -- Carabos Coop. V. | I've only proven that it works, www.carabos.com | I haven't tested it.
-- Donald Knuth

From zunzun at zunzun.com Fri Apr 27 13:30:20 2007
From: zunzun at zunzun.com (zunzun at zunzun.com)
Date: Fri, 27 Apr 2007 13:30:20 -0400
Subject: [SciPy-user] Finding a function
In-Reply-To: <1177689842.5648.4.camel@muli>
References: <1177689842.5648.4.camel@muli>
Message-ID: <20070427173020.GA24483@zunzun.com>

Doxygen (http://www.stack.nl/~dimitri/doxygen/) should do the job rather nicely, since all methods in all classes are indexed. We use it for C++ and C# where I work; having syntax-highlighted code with every function, class and variable hyperlinked to each other is just way cool - and, uhhhhhh, efficient too. It works for Python code as well so it's certainly worth a try.

James Phillips
http://zunzun.com

On Fri, Apr 27, 2007 at 06:04:02PM +0200, Fred Jendrzejewski wrote:
> So, is there a really fast and practical way to find the functions in scipy?

From robert.kern at gmail.com Fri Apr 27 14:24:50 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 27 Apr 2007 13:24:50 -0500
Subject: [SciPy-user] best way of finding a function
In-Reply-To: <9EADC1E53F9C70479BF6559370369114142EB4@mrlnt6.mrl.uiuc.edu>
References: <9EADC1E53F9C70479BF6559370369114142EB4@mrlnt6.mrl.uiuc.edu>
Message-ID: <46323FF2.6010305@gmail.com>

Trevis Crane wrote:
> OK thank you. But in the generalized case, for you guys/gals who are
> more experienced, when looking for a function for the first time, is
> this what you usually do -- Google it?

This page is also quite helpful:

http://www.hjcb.nl/python/Arrays.html

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco

From David.L.Goldsmith at noaa.gov Fri Apr 27 14:33:51 2007
From: David.L.Goldsmith at noaa.gov (David Goldsmith)
Date: Fri, 27 Apr 2007 11:33:51 -0700
Subject: [SciPy-user] Bicycle Wheel Precession
In-Reply-To: <91cf711d0704271010q6ef73941o1f0a0f67d64497ac@mail.gmail.com>
References: <91cf711d0704271010q6ef73941o1f0a0f67d64497ac@mail.gmail.com>
Message-ID: <4632420F.9010207@noaa.gov>

As a generalization of David's comment below, I would emphasize that you check that the numerical values you're using in your simulation (e.g., for things like proportionality constants, initial conditions, etc.) accurately reflect those inherent in your experimental apparatus, both wrt value and units. For instance, whether or not what you're observing is nutation or precession, if your numerical values are such that the amplitude of the sinusoid is small, it would appear to be "near" a constant function...

Another David

David Huard wrote:
> The wheel's rotation looks too slow, and the effect you're seeing
> would then be nutation, which I think is normal given your initial
> conditions.
>
> David
>
> Reference: Goldstein's Classical Mechanics, pp. 217-219.
>
> 2007/4/26, Ryan Krauss:
>
> I am trying to teach my dynamics students about the precession of a
> bicycle wheel like in this video:
>
> http://commons.bcit.ca/physics/video/precession.shtml
>
> If the x axis is along the axle of the wheel and the z axis points up,
> I think the motion can be modeled using the following ode:
>
> def NewtonEuler(x,t,M,I):
>     Ixx=I[0]
>     Iyy=I[1]
>     Izz=I[2]
>     wx=x[0]
>     wy=x[1]
>     wz=x[2]
>     dxdt=zeros(3,)
>     dxdt[0]=((Iyy-Izz)*wy*wz+M[0])/Ixx
>     dxdt[1]=((Izz-Ixx)*wx*wz+M[1])/Iyy
>     dxdt[2]=((Ixx-Iyy)*wx*wy+M[2])/Izz
>     return dxdt
>
> I use this function along with integrate.odeint in the attached file,
> but the surprising thing is that wz is sinusoidal rather than nearly
> constant. Experimentally, I see nearly constant precession.
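The quoted NewtonEuler routine runs as-is under scipy.integrate.odeint; here is a self-contained sketch of exercising it. The inertia values, torque, and initial spin below are illustrative stand-ins, not the numbers from Ryan's attached file:

```python
import numpy as np
from scipy.integrate import odeint

def NewtonEuler(x, t, M, I):
    """Euler's rigid-body equations for the body-frame angular velocity."""
    Ixx, Iyy, Izz = I
    wx, wy, wz = x
    dxdt = np.zeros(3)
    dxdt[0] = ((Iyy - Izz) * wy * wz + M[0]) / Ixx
    dxdt[1] = ((Izz - Ixx) * wx * wz + M[1]) / Iyy
    dxdt[2] = ((Ixx - Iyy) * wx * wy + M[2]) / Izz
    return dxdt

# Illustrative numbers only: an axisymmetric wheel spinning fast about x,
# torque-free, with a small transverse kick in wy.
I = (0.30, 0.16, 0.16)     # kg m^2, spin axis vs. diametral moments
M = (0.0, 0.0, 0.0)        # N m
w0 = [20.0, 0.5, 0.0]      # rad/s
t = np.linspace(0.0, 2.0, 201)

w = odeint(NewtonEuler, w0, t, args=(M, I))
# For a torque-free axisymmetric body, wx stays constant while wy and wz
# trade off sinusoidally -- the "sinusoidal wz" Ryan describes.
```

In the body frame this oscillation of wy and wz is expected (it is the body-frame signature of the coning/nutation David Huard mentions); a nearly constant lab-frame precession rate only appears after transforming the body-frame solution back to space axes.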
> > Can anyone help me find what is wrong with my approach? > > Thanks, > > Ryan > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From gcross at u.washington.edu Fri Apr 27 16:35:30 2007 From: gcross at u.washington.edu (Gregory Crosswhite) Date: Fri, 27 Apr 2007 13:35:30 -0700 Subject: [SciPy-user] Distributed Array Library? In-Reply-To: <6ce0ac130704270847k59bbd522tad40102209e677d0@mail.gmail.com> References: <9D6787C2-4FA8-40F8-B828-B253EE4DC458@u.washington.edu> <6ce0ac130704270847k59bbd522tad40102209e677d0@mail.gmail.com> Message-ID: <9715C84B-5C9F-4C5D-8AB7-D774BCF6BAD3@u.washington.edu> Thank you very much for the comprehensive response. :-) > 4. Someone could write low-level code using numpy+mpi4py > > We (the IPython1 devs) have thought about this some. To provide basic > distributed arrays wouldn't be very difficult. ... Out of curiosity, how had you all been thinking about doing this? One thing that had come to my mind was somehow one might use GlobalArray as a back-end and the ndarray interface as a front-end, but so much of the ndarray code assumes that one can use pointer arithmetic to access the data that I couldn't see a good way to make this work. Is there a clever way of implementing a distributed array that would allow one to re-use as much of the numpy core code as possible? 
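Before any transport layer (mpi4py, GlobalArray) enters the picture, a distributed array needs the index bookkeeping that maps a global axis onto per-process blocks. A minimal, library-agnostic sketch; the function name and the remainder-handling rule here are mine, not from IPython1 or Global Arrays:

```python
def block_bounds(n, nprocs, rank):
    """Return the half-open [lo, hi) slice of a length-n axis owned by
    `rank` under a block distribution; the remainder elements go to the
    first ranks, so block sizes differ by at most one."""
    base, extra = divmod(n, nprocs)
    lo = rank * base + min(rank, extra)
    hi = lo + base + (1 if rank < extra else 0)
    return lo, hi

# The blocks tile the global axis exactly:
bounds = [block_bounds(10, 4, r) for r in range(4)]
print(bounds)  # [(0, 3), (3, 6), (6, 8), (8, 10)]
```

Each process would then allocate only its local slice as an ordinary ndarray, which is exactly where reusing the numpy core per-block (rather than teaching ndarray itself about remote memory) becomes attractive.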
Thanks again, Greg

From issa at aims.ac.za Fri Apr 27 16:35:05 2007
From: issa at aims.ac.za (Issa Karambal)
Date: Fri, 27 Apr 2007 22:35:05 +0200
Subject: [SciPy-user] n-th derivative using scipy
In-Reply-To: <46232D4A.9040704@ru.nl>
References: <46228C80.1070705@ru.nl> <20070415214058.GR18196@mentat.za.net> <46232D4A.9040704@ru.nl>
Message-ID: <46325E79.1040001@aims.ac.za>

Hello, I would like to know how I can compute the nth derivative of a Lagrange polynomial.

issa

From coughlan at ski.org Fri Apr 27 16:59:35 2007
From: coughlan at ski.org (James Coughlan)
Date: Fri, 27 Apr 2007 13:59:35 -0700
Subject: [SciPy-user] Matplotlib/pylab help
In-Reply-To:
References: <4631F777.6030809@ru.nl> <6.0.1.1.2.20070427095338.02401328@pop.uky.edu>
Message-ID: <46326437.3080002@ski.org>

Hi, Ryan, thanks for the ylim solution. Just wanted to point out that one more step needs to be taken to display matrices in image form (such as matrices that represent images), where the y axis typically designates row number, which increases from the top to the bottom of the figure.

The ylim function does make the y-axis increase from top to bottom, *but* the entire image then appears upside down (i.e. flipped about the middle row).

To correct this, just use the flipud() function. Example:

h,w = im.shape   # im is a grayscale image
figure()
imshow(flipud(im))
ylim([h, 0])

If anyone knows an easier solution, please let me know. (In Matlab it's as simple as typing "axis ij" after displaying the matrix.)

Best, James

Ryan Krauss wrote:
> I think it is as simple as specifying your ylim to be [ymax,ymin]:
>
> t=arange(0,0.2,0.001)
> y=sin(2*10*pi*t)
> plot(t,y)
> ylim([1.0,-1.0])
>
> On 4/27/07, Brandon Nuttall wrote:
>
>> Folks,
>>
>> I have a simple(?) question. I need to plot some data with the y axis
>> inverted. That is, the minimum y is at the top of the y axis (top left
>> corner of figure) and the maximum y is at the bottom of the y axis (bottom
>> left corner of figure) where it intersects the x axis.
>> I'm plotting depth-related xy data. I've browsed the FAQs, docstrings, Matplotlib User's Guide and haven't found out how to do this. I'm certain I'm overlooking some argument or method somewhere.
>>
>> Thanks for any assistance.
>>
>> Brandon
>>
>> Brandon C. Nuttall
>>
>> BNUTTALL at UKY.EDU               Kentucky Geological Survey
>> (859) 257-5500                   University of Kentucky
>> (859) 257-1147 (fax)             228 Mining & Mineral Resources Bldg
>> http://www.uky.edu/KGS/home.htm  Lexington, Kentucky 40506-0107
>>
>> _______________________________________________
>> SciPy-user mailing list
>> SciPy-user at scipy.org
>> http://projects.scipy.org/mailman/listinfo/scipy-user
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

-- 
-------------------------------------------------------
James Coughlan, Ph.D., Associate Scientist
Smith-Kettlewell Eye Research Institute
Email: coughlan at ski.org
URL: http://www.ski.org/Rehab/Coughlan_lab/
Phone: 415-345-2146 Fax: 415-345-8455
-------------------------------------------------------

From pgmdevlist at gmail.com Fri Apr 27 17:08:28 2007
From: pgmdevlist at gmail.com (Pierre GM)
Date: Fri, 27 Apr 2007 17:08:28 -0400
Subject: [SciPy-user] Matplotlib/pylab help
In-Reply-To: <46326437.3080002@ski.org>
References: <46326437.3080002@ski.org>
Message-ID: <200704271708.29129.pgmdevlist@gmail.com>

On Friday 27 April 2007 16:59:35 James Coughlan wrote:
> Hi,
[...]
> The ylim function does make the y-axis increase from top to bottom,
> *but* the entire image then appears upside down (i.e. flipped about the
> middle row).
>
> To correct this, just use the flipud() function. Example:

Or use the 'origin' keyword?
http://matplotlib.sourceforge.net/matplotlib.pylab.html#-imshow

And remember the matplotlib mailing list ;)

From s.mientki at ru.nl Fri Apr 27 18:16:57 2007
From: s.mientki at ru.nl (Stef Mientki)
Date: Sat, 28 Apr 2007 00:16:57 +0200
Subject: [SciPy-user] [ANN] Portable SciPy v0.1 released
Message-ID: <46327659.2050900@ru.nl>

Portable SciPy is an easy installer of SciPy for M$ windows users. For the moment, you can find the description page, with all links, here:

http://oase.uci.kun.nl/~mientki/data_www/pic/jalcc/python/portable_scipy.html

For future use, it's advised to always use my redirector page:

http://pic.flappie.nl/

The simple method described here can be used to create any set of Python packages + other programs, with just a few lines of code (example available).

Have fun, and let me hear what you think of it.

Stef Mientki

From s.mientki at ru.nl Sat Apr 28 03:46:41 2007
From: s.mientki at ru.nl (Stef Mientki)
Date: Sat, 28 Apr 2007 09:46:41 +0200
Subject: [SciPy-user] how to make vectors and arrays of the same length ?
In-Reply-To: <46322746.8040107@ru.nl>
References: <46322746.8040107@ru.nl>
Message-ID: <4632FBE1.5040602@ru.nl>

hello, I've found a working solution for the problem, but I'm not happy with it, because I can't manage to do it without 2 transposes, and I have the feeling my solution is much too complicated. So I'd be much obliged if someone gives me a better solution, or tells me that this is the solution.
thanks,
Stef Mientki

# start with a 2-dimensional array
# we want to expand the second index with the last value
a=asarray([[1,2,3],[4,5,6]])

# get the last column
b=asarray(a[:,-1])

# now extend each row with 3 samples
a=a.transpose()
a=vstack((a,b))
a=vstack((a,b))
a=vstack((a,b))
a=a.transpose()

And here is the result
[[1 2 3 3 3 3]
 [4 5 6 6 6 6]]

From mforbes at physics.ubc.ca Sat Apr 28 04:20:26 2007
From: mforbes at physics.ubc.ca (Michael McNeil Forbes)
Date: Sat, 28 Apr 2007 01:20:26 -0700
Subject: [SciPy-user] how to make vectors and arrays of the same length ?
In-Reply-To: <4632FBE1.5040602@ru.nl>
References: <46322746.8040107@ru.nl> <4632FBE1.5040602@ru.nl>
Message-ID: <3A86EAC7-E20F-4CF3-A349-1D90D5FDE093@physics.ubc.ca>

How about this:

def extend(a,extent):
    """Return an extended version of a.

    The new array will have shape = (...,extent) and will be
    padded with the last 'column' of a.

    >>> a = asarray([[1,2,3],[4,5,6]])
    >>> extend(a,6)
    array([[1, 2, 3, 3, 3, 3],
           [4, 5, 6, 6, 6, 6]])
    """
    new_shape = list(a.shape)              # Make new_shape a mutable list
    old_extent = min(extent,new_shape[-1]) # Allow extent to shrink array
    new_shape[-1] = extent
    extended_a = empty(new_shape,dtype=a.dtype)
    extended_a[...,:old_extent] = a[...,:old_extent]
    extended_a[...,old_extent:] = a[...,-1:]
    return extended_a

Michael.

On 28 Apr 2007, at 12:46 AM, Stef Mientki wrote:

> hello,
>
> I've found a working solution for the problem,
> but I'm not happy with it,
> because I can't manage to do it without 2 transposes,
> and I have the feeling my solution is much too complicated.
> So I'd be much obliged if someone gives me a better solution,
> or tells me that this is the solution.
>
> thanks,
> Stef Mientki
>
> # start with a 2-dimensional array
> # we want to expand the second index with the last value
> a=asarray([[1,2,3],[4,5,6]])
>
> # get the last column
> b=asarray(a[:,-1])
>
> # now extend each row with 3 samples
> a=a.transpose()
> a=vstack((a,b))
> a=vstack((a,b))
> a=vstack((a,b))
> a=a.transpose()
>
> And here is the result
> [[1 2 3 3 3 3]
>  [4 5 6 6 6 6]]
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From matthieu.brucher at gmail.com Sat Apr 28 12:19:31 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Sat, 28 Apr 2007 18:19:31 +0200
Subject: [SciPy-user] Where are the window functions ?
Message-ID:

Hi,

I'm wondering where all the window functions are; some seem to be in the global scipy namespace, others in the signal namespace. Why is that so? Compatibility reasons?

Matthieu

From elcorto at gmx.net Sun Apr 29 06:14:59 2007
From: elcorto at gmx.net (Steve Schmerler)
Date: Sun, 29 Apr 2007 12:14:59 +0200
Subject: [SciPy-user] difference between x, array(x) and array([x])
Message-ID: <46347023.9080202@gmx.net>

I saw this sometimes but never really paid attention to it: While playing with scipy.factorial I found that it returns an array:

In [23]: scipy.factorial(3)
Out[23]: array(6.0)

E.g. 6 is a scalar, array([6]) a 1D-array of len 1, but array(6)?

In [8]: import numpy as N

In [9]: type(N.array([6]))
Out[9]: <type 'numpy.ndarray'>

In [10]: type(N.array(6))
Out[10]: <type 'numpy.ndarray'>

So, array(6) is an array.
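A quick way to pin down what array(6) is: it is a 0-d ndarray, which keeps an array's dtype machinery while item() recovers the plain Python scalar. A short sketch (in modern numpy spelling; the thread's Python 2.4 session prints slightly different reprs):

```python
import numpy as np

a0 = np.array(6)     # 0-d array
a1 = np.array([6])   # 1-d array of length 1

print(a0.ndim, a0.shape)   # 0 ()
print(a1.ndim, a1.shape)   # 1 (1,)
print(a0.item())           # 6 -- back to a plain Python int
print(a0.dtype)            # unlike a bare 6, a0 carries a dtype
```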
But this looks as if it is a scalar:

In [11]: N.array(6) == N.array([6])
Out[11]: array([ True], dtype=bool)

In [12]: 6 == N.array([6])
Out[12]: array([ True], dtype=bool)

In [13]: 6 == N.array(6)
Out[13]: True

In [14]: N.array([6])[0]
Out[14]: 6

In [15]: N.array(6)[0]
---------------------------------------------------------------------------
exceptions.IndexError                Traceback (most recent call last)

/home/elcorto/<ipython console>

IndexError: 0-d arrays can't be indexed

OK, array(6) is a 0-d array. What is the advantage of returning a 0-d array object array(6) instead of just 6?

-- 
cheers,
steve

I love deadlines. I like the whooshing sound they make as they fly by. -- Douglas Adams

From williams at astro.ox.ac.uk Sun Apr 29 13:38:06 2007
From: williams at astro.ox.ac.uk (Michael Williams)
Date: Sun, 29 Apr 2007 18:38:06 +0100
Subject: [SciPy-user] book
In-Reply-To: <1d987df30704241722u6cd1594au30a8676ef4b7b017@mail.gmail.com>
References: <1d987df30704241722u6cd1594au30a8676ef4b7b017@mail.gmail.com>
Message-ID: <20070429173806.GA5590@astro.ox.ac.uk>

On Tue, Apr 24, 2007 at 05:22:52PM -0700, linda.s wrote:
> I am very new to SciPy. Is there any good tutorial book?

This came up on this list a couple of months back: http://thread.gmane.org/gmane.comp.python.scientific.user/10734
From lorenzo.isella at gmail.com Mon Apr 30 11:37:49 2007 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Mon, 30 Apr 2007 17:37:49 +0200 Subject: [SciPy-user] Installation SciPy on FC5 Message-ID: Dear All, I am running Fedora Core 5 on my desktop at work and I would like to install SciPy to run some simulations. Now, I have only the basic repositories enabled and I do not have the SciPy library available. I went to the SciPy homepage and the download & installation of NumPy went fine, but I am experiencing some problems with SciPy, due to missing dependencies. Here is the output of my latest attempt: python setup.py install mkl_info: libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE fftw3_info: libraries fftw3 not found in /usr/local/lib libraries fftw3 not found in /usr/lib fftw3 not found NOT AVAILABLE fftw2_info: libraries rfftw,fftw not found in /usr/local/lib libraries rfftw,fftw not found in /usr/lib fftw2 not found NOT AVAILABLE dfftw_info: libraries drfftw,dfftw not found in /usr/local/lib libraries drfftw,dfftw not found in /usr/lib dfftw not found NOT AVAILABLE djbfft_info: NOT AVAILABLE blas_opt_info: blas_mkl_info: libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib libraries ptf77blas,ptcblas,atlas not found in /usr/lib/sse2 libraries ptf77blas,ptcblas,atlas not found in /usr/lib NOT AVAILABLE atlas_blas_info: libraries f77blas,cblas,atlas not found in /usr/local/lib libraries f77blas,cblas,atlas not found in /usr/lib/sse2 libraries f77blas,cblas,atlas not found in /usr/lib NOT AVAILABLE /usr/lib/python2.4/site-packages/numpy/distutils/system_info.py:1301: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found.
Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. warnings.warn(AtlasNotFoundError.__doc__) blas_info: libraries blas not found in /usr/local/lib FOUND: libraries = ['blas'] library_dirs = ['/usr/lib'] language = f77 FOUND: libraries = ['blas'] library_dirs = ['/usr/lib'] define_macros = [('NO_ATLAS_INFO', 1)] language = f77 lapack_opt_info: lapack_mkl_info: NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib libraries lapack_atlas not found in /usr/local/lib libraries ptf77blas,ptcblas,atlas not found in /usr/lib/sse2 libraries lapack_atlas not found in /usr/lib/sse2 libraries ptf77blas,ptcblas,atlas not found in /usr/lib libraries lapack_atlas not found in /usr/lib numpy.distutils.system_info.atlas_threads_info NOT AVAILABLE atlas_info: libraries f77blas,cblas,atlas not found in /usr/local/lib libraries lapack_atlas not found in /usr/local/lib libraries f77blas,cblas,atlas not found in /usr/lib/sse2 libraries lapack_atlas not found in /usr/lib/sse2 libraries f77blas,cblas,atlas not found in /usr/lib libraries lapack_atlas not found in /usr/lib numpy.distutils.system_info.atlas_info NOT AVAILABLE /usr/lib/python2.4/site-packages/numpy/distutils/system_info.py:1210: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. warnings.warn(AtlasNotFoundError.__doc__) lapack_info: libraries lapack not found in /usr/local/lib libraries lapack not found in /usr/lib NOT AVAILABLE /usr/lib/python2.4/site-packages/numpy/distutils/system_info.py:1221: UserWarning: Lapack (http://www.netlib.org/lapack/) libraries not found. 
Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [lapack]) or by setting the LAPACK environment variable. warnings.warn(LapackNotFoundError.__doc__) lapack_src_info: NOT AVAILABLE /usr/lib/python2.4/site-packages/numpy/distutils/system_info.py:1224: UserWarning: Lapack (http://www.netlib.org/lapack/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [lapack_src]) or by setting the LAPACK_SRC environment variable. warnings.warn(LapackSrcNotFoundError.__doc__) Traceback (most recent call last): File "setup.py", line 55, in ? setup_package() File "setup.py", line 47, in setup_package configuration=configuration ) File "/usr/lib/python2.4/site-packages/numpy/distutils/core.py", line 144, in setup config = configuration() File "setup.py", line 19, in configuration config.add_subpackage('Lib') File "/usr/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line 765, in add_subpackage caller_level = 2) File "/usr/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line 748, in get_subpackage caller_level = caller_level + 1) File "/usr/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line 695, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "./Lib/setup.py", line 10, in configuration config.add_subpackage('lib') File "/usr/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line 765, in add_subpackage caller_level = 2) File "/usr/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line 748, in get_subpackage caller_level = caller_level + 1) File "/usr/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line 695, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "Lib/lib/setup.py", line 8, in configuration config.add_subpackage('lapack') File "/usr/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line 765, in add_subpackage 
caller_level = 2) File "/usr/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line 748, in get_subpackage caller_level = caller_level + 1) File "/usr/lib/python2.4/site-packages/numpy/distutils/misc_util.py", line 695, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "Lib/lib/lapack/setup.py", line 32, in configuration lapack_opt = get_info('lapack_opt',notfound_action=2) File "/usr/lib/python2.4/site-packages/numpy/distutils/system_info.py", line 256, in get_info return cl().get_info(notfound_action) File "/usr/lib/python2.4/site-packages/numpy/distutils/system_info.py", line 403, in get_info raise self.notfounderror,self.notfounderror.__doc__ numpy.distutils.system_info.NotFoundError: Some third-party program or library is not found. [root at erlive153 scipy-0.5.2]# Before ending up in dependency hell (I tried installing Blas and it went like a breeze, but something went wrong when I tried with Atlas), I would like to know if there is a smarter way of doing this. Many thanks for your help. Lorenzo From gnchen at cortechs.net Mon Apr 30 12:12:59 2007 From: gnchen at cortechs.net (Gennan Chen) Date: Mon, 30 Apr 2007 09:12:59 -0700 Subject: [SciPy-user] cannot compile under centos 5 after weekend svn update Message-ID: <1471CA42-3B78-4BAE-A5BB-AB960DFE2AD2@cortechs.net> Hi! All, I cannot compile numpy anymore after last weekend's update. Anyone know what's wrong?? [gnchen at cortechs25:numpy]$ python setup.py config Running from numpy source directory. F2PY Version 2_3730 blas_opt_info: blas_mkl_info: Traceback (most recent call last): File "setup.py", line 89, in ? 
setup_package() File "setup.py", line 82, in setup_package configuration=configuration ) File "/snap/c02_raid0/other_src/numpy/numpy/distutils/core.py", line 144, in setup config = configuration() File "setup.py", line 48, in configuration config.add_subpackage('numpy') File "/snap/c02_raid0/other_src/numpy/numpy/distutils/ misc_util.py", line 765, in add_subpackage caller_level = 2) File "/snap/c02_raid0/other_src/numpy/numpy/distutils/ misc_util.py", line 748, in get_subpackage caller_level = caller_level + 1) File "/snap/c02_raid0/other_src/numpy/numpy/distutils/ misc_util.py", line 695, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "./numpy/setup.py", line 9, in configuration config.add_subpackage('core') File "/snap/c02_raid0/other_src/numpy/numpy/distutils/ misc_util.py", line 765, in add_subpackage caller_level = 2) File "/snap/c02_raid0/other_src/numpy/numpy/distutils/ misc_util.py", line 748, in get_subpackage caller_level = caller_level + 1) File "/snap/c02_raid0/other_src/numpy/numpy/distutils/ misc_util.py", line 695, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "numpy/core/setup.py", line 228, in configuration blas_info = get_info('blas_opt',0) File "/snap/c02_raid0/other_src/numpy/numpy/distutils/ system_info.py", line 256, in get_info return cl().get_info(notfound_action) File "/snap/c02_raid0/other_src/numpy/numpy/distutils/ system_info.py", line 399, in get_info self.calc_info() File "/snap/c02_raid0/other_src/numpy/numpy/distutils/ system_info.py", line 1287, in calc_info blas_mkl_info = get_info('blas_mkl') File "/snap/c02_raid0/other_src/numpy/numpy/distutils/ system_info.py", line 256, in get_info return cl().get_info(notfound_action) File "/snap/c02_raid0/other_src/numpy/numpy/distutils/ system_info.py", line 399, in get_info self.calc_info() File "/snap/c02_raid0/other_src/numpy/numpy/distutils/ system_info.py", line 819, in calc_info dict_append(libraries = 
['pthread']) TypeError: dict_append() takes exactly 1 non-keyword argument (0 given) Gen-Nan Chen, PhD Chief Scientific Officer Research and Development Group CorTechs Labs Inc (www.cortechs.net) 1020 Prospect St., #304, La Jolla, CA, 92037 Tel: 1-858-459-9703 Fax: 1-858-459-9705 Email: gnchen at cortechs.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From millman at berkeley.edu Mon Apr 30 12:23:45 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Mon, 30 Apr 2007 09:23:45 -0700 Subject: [SciPy-user] Installation SciPy on FC5 In-Reply-To: References: Message-ID: On 4/30/07, Lorenzo Isella wrote: > > Dear All, > I am running Fedora Core 5 on my desktop at work and I would like to > install SciPy to run some simulations. > > Before ending up in dependency hell (I tried installing Blas and it > went like a breeze, but something went wrong when I tried with Atlas), > I would like to know if there is a smarter way of doing this. > Many thanks for your help. > You should be able to do something like this: http://projects.scipy.org/neuroimaging/ni/wiki/DevelopmentInstallFedora Just skip the NIPY-specific steps and you don't need to enable the models code in the sandbox. Good luck, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Mon Apr 30 12:30:15 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 30 Apr 2007 11:30:15 -0500 Subject: [SciPy-user] cannot compile under centos 5 after weekend svn update In-Reply-To: <1471CA42-3B78-4BAE-A5BB-AB960DFE2AD2@cortechs.net> References: <1471CA42-3B78-4BAE-A5BB-AB960DFE2AD2@cortechs.net> Message-ID: <46361997.40602@gmail.com> Gennan Chen wrote: > Hi! All, > > I cannot compile numpy anymore after last weekend's update. Anyone know > what's wrong?? A typo. Fixed in r3731. 
-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco

From fred.jen at web.de Mon Apr 30 13:16:31 2007
From: fred.jen at web.de (Fred Jendrzejewski)
Date: Mon, 30 Apr 2007 19:16:31 +0200
Subject: [SciPy-user] Timeseries
Message-ID: <1177953391.7961.1.camel@muli>

Hello, I wanted to play around with the timeseries package, but I already made a mistake during the installation (Ubuntu 7.04 AMD64). Is it at such an early stage, or am I doing something stupid?

Fred Jendrzejewski

From pgmdevlist at gmail.com Mon Apr 30 13:21:42 2007
From: pgmdevlist at gmail.com (Pierre GM)
Date: Mon, 30 Apr 2007 13:21:42 -0400
Subject: [SciPy-user] Timeseries
In-Reply-To: <1177953391.7961.1.camel@muli>
References: <1177953391.7961.1.camel@muli>
Message-ID: <200704301321.43239.pgmdevlist@gmail.com>

On Monday 30 April 2007 13:16:31 Fred Jendrzejewski wrote:
> Hello,

Hello

> I wanted to play around with the timeseries package, but I already made
> a mistake during the installation (Ubuntu 7.04 AMD64).
> Is it at such an early stage, or am I doing something stupid?

I can't tell for the second possibility, as you didn't give us enough information. The first possibility is likely, however: the package runs well on our machines, but it's already installed... Please Fred, could you give us the error messages that you get?

From t_crane at mrl.uiuc.edu Mon Apr 30 13:38:51 2007
From: t_crane at mrl.uiuc.edu (Trevis Crane)
Date: Mon, 30 Apr 2007 12:38:51 -0500
Subject: [SciPy-user] ode/programming question
Message-ID: <9EADC1E53F9C70479BF6559370369114142EB6@mrlnt6.mrl.uiuc.edu>

Hi all,

When using one of the ODE solvers, you can pass it a list of arguments. These arguments are used in the function that defines the system of linear equations that you're solving.
What if I want to modify an argument every iteration and re-pass this modified argument to the helper function in the next iteration? In Matlab (what I'm most familiar with), this is easy to do, using "nested" functions, because they share the same scope/namespace as the function they're nested in. This is not the case in Python, however, so I'm curious what the best way of doing this would be. I assume I define a global variable, but I'm wondering if there's another, perhaps better, way of doing it.

thanks,
trevis

________________________________________________
Trevis Crane
Postdoctoral Research Assoc.
Department of Physics
University of Illinois
1110 W. Green St.
Urbana, IL 61801

p: 217-244-8652
f: 217-244-2278
e: tcrane at uiuc.edu
________________________________________________

From peridot.faceted at gmail.com Mon Apr 30 13:48:04 2007
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Mon, 30 Apr 2007 13:48:04 -0400
Subject: [SciPy-user] ode/programming question
In-Reply-To: <9EADC1E53F9C70479BF6559370369114142EB6@mrlnt6.mrl.uiuc.edu>
References: <9EADC1E53F9C70479BF6559370369114142EB6@mrlnt6.mrl.uiuc.edu>
Message-ID:

On 30/04/07, Trevis Crane wrote:
> When using one of the ODE solvers, you can pass it a list of arguments.
> These arguments are used in the function that defines the system of linear
> equations that you're solving. What if I want to modify an argument every
> iteration and re-pass this modified argument to the helper function in the
> next iteration? In Matlab (what I'm most familiar with), this is easy to do,
> using "nested" functions, because they share the same scope/namespace as the
> function they're nested in. This is not the case in Python, however, so I'm
> curious what the best way of doing this would be. I assume I define a
> global variable, but I'm wondering if there's another, perhaps better, way
> of doing it.
I would use a class that implements the __callable__ method if I wanted to store some state. But be warned that the ODE solver is going to assume that your function always returns the same value for the same inputs, and it's unlikely to call the function in t order. Incidentally, I've never understood why python has all those "args" arguments. It makes the code and signature for functions with function arguments complicated and confusing, and it's not usually enough: for example, if you want to use the minimization functions to maximize, you either have to write a function apurpose or you have to feed it a lambda; in either case you can easily curry the function arbitrarily. So I never ever use the "args" arguments. Why are they there? Anne From t_crane at mrl.uiuc.edu Mon Apr 30 16:27:40 2007 From: t_crane at mrl.uiuc.edu (Trevis Crane) Date: Mon, 30 Apr 2007 15:27:40 -0500 Subject: [SciPy-user] ode/programming question Message-ID: <9EADC1E53F9C70479BF6559370369114142EB7@mrlnt6.mrl.uiuc.edu> > -----Original Message----- > From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On > Behalf Of Anne Archibald > Sent: Monday, April 30, 2007 12:48 PM > To: SciPy Users List > Subject: Re: [SciPy-user] ode/programming question > > On 30/04/07, Trevis Crane wrote: > > > When using one of the ODE solvers, you can pass it a list of arguments. > > These arguments are used in the function that defines the system of linear > > equations that you're solving. What if I want to modify an argument every > > iteration and re-pass this modified argument to the helper function in the > > next iteration? In Matlab (what I'm most familiar with), this is easy to do, > > using "nested" functions, because they share the same scope/namespace as the > > function they're nested in. This is not the case in Python, however, so I'm > > curious what the best way of doing this would be. 
I assume I define a > > global variable, but I'm wondering if there's another, perhaps better, way > > of doing it. > > I would use a class that implements the __callable__ method if I > wanted to store some state. But be warned that the ODE solver is going > to assume that your function always returns the same value for the > same inputs, and it's unlikely to call the function in t order. [Trevis Crane] I found reference to a __call__ method, but not the __callable__ method. Can you point me to a description of it? In reference to your comment about the ODE not likely calling the function in t order -- you're saying that the ODE solver doesn't necessarily progress in a monotonic fashion from t = t_0 to t = t_final? Hmm. I hadn't thought of that. thanks, trevis > > Incidentally, I've never understood why python has all those "args" > arguments. It makes the code and signature for functions with function > arguments complicated and confusing, and it's not usually enough: for > example, if you want to use the minimization functions to maximize, > you either have to write a function apurpose or you have to feed it a > lambda; in either case you can easily curry the function arbitrarily. > So I never ever use the "args" arguments. Why are they there? > > Anne > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From robert.kern at gmail.com Mon Apr 30 16:35:29 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 30 Apr 2007 15:35:29 -0500 Subject: [SciPy-user] ode/programming question In-Reply-To: <9EADC1E53F9C70479BF6559370369114142EB7@mrlnt6.mrl.uiuc.edu> References: <9EADC1E53F9C70479BF6559370369114142EB7@mrlnt6.mrl.uiuc.edu> Message-ID: <46365311.5050103@gmail.com> Trevis Crane wrote: > I found reference to a __call__ method, but not the __callable__ method. > Can you point me to a description of it? She meant __call__.
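A minimal sketch of the callable-class idea under discussion: an instance with a `__call__` method can be passed anywhere a plain function is expected (for example as the right-hand side handed to scipy.integrate.odeint), and its attributes persist between calls. The class and parameter names below are illustrative, not taken from the thread:

```python
import math

class RHS:
    """Right-hand side phi_dot = Ij - Ic*sin(phi) that keeps state between calls."""
    def __init__(self, Ij, Ic):
        self.Ij = Ij
        self.Ic = Ic
        self.ncalls = 0  # mutable state, updated on every evaluation

    def __call__(self, phi, t):
        self.ncalls += 1
        # Ij and Ic could be recomputed here based on earlier calls
        return self.Ij - self.Ic * math.sin(phi)

rhs = RHS(Ij=1.0, Ic=0.5)
print(rhs(0.0, 0.0))   # instance behaves like a function; sin(0) = 0, so prints 1.0
print(rhs.ncalls)      # the state survived the call: prints 1
```

Note that Anne's caveat still applies: an adaptive solver may evaluate the function out of t order, so state that assumes monotonically increasing time can silently go wrong.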
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From peridot.faceted at gmail.com Mon Apr 30 17:05:48 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Mon, 30 Apr 2007 17:05:48 -0400 Subject: [SciPy-user] ode/programming question In-Reply-To: <9EADC1E53F9C70479BF6559370369114142EB7@mrlnt6.mrl.uiuc.edu> References: <9EADC1E53F9C70479BF6559370369114142EB7@mrlnt6.mrl.uiuc.edu> Message-ID: On 30/04/07, Trevis Crane wrote: > I found reference to a __call__ method, but not the __callable__ method. > Can you point me to a description of it? As Robert pointed out, I got this wrong; you do indeed want a __call__ method to make your class callable. Sorry about that! > In reference to your comment about the ODE not likely calling the > function in t order -- you're saying that the ODE solver doesn't > necessarily progress in a monotonic fashion from t = t_0 to t = t_final? > Hmm. I hadn't thought of that. Most serious ODE solvers are adaptive, taking smaller steps when the function behaves sharply and larger when it is well-predicted by the model the ODE solver is using. You may find it enlightening to read chapter 16 of Numerical Recipes in C (http://www.nrbook.com/b/bookcpdf.php) - don't implement any of those methods (there are almost certainly better, solider implementations already built into scipy) but it will tell you more about how ODE solvers work, when they are likely to run into problems, and what to do about it. What is the problem you are working on? It may be possible to avoid using state, or to use it in a way that won't be a problem.
Anne From t_crane at mrl.uiuc.edu Mon Apr 30 17:36:20 2007 From: t_crane at mrl.uiuc.edu (Trevis Crane) Date: Mon, 30 Apr 2007 16:36:20 -0500 Subject: [SciPy-user] ode/programming question Message-ID: <9EADC1E53F9C70479BF6559370369114142EB8@mrlnt6.mrl.uiuc.edu> > -----Original Message----- > From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On > Behalf Of Anne Archibald > Sent: Monday, April 30, 2007 4:06 PM > To: SciPy Users List > Subject: Re: [SciPy-user] ode/programming question > > On 30/04/07, Trevis Crane wrote: > > > I found reference to a __call__ method, but not the __callable__ method. > > Can you point me to a description of it? > > As Robert pointed out, I got this wrong; you do indeed want a __call__ > method to make your class callable. Sorry about that! > > > In reference to your comment about the ODE not likely calling the > > function in t order -- you're saying that the ODE solver doesn't > > necessarily progress in a monotonic fashion from t = t_0 to t = t_final? > > Hmm. I hadn't thought of that. > > Most serious ODE solvers are adaptive, taking smaller steps when the > function behaves sharply and larger when it is well-predicted by the > model the ODE solver is using. You may find it enlightening to read > chapter 16 of Numerical Recipes in C > (http://www.nrbook.com/b/bookcpdf.php) - don't implement any of those > methods (there are almost certainly better, solider implementations > already built into scipy) but it will tell you more about how ODE > solvers work, when they are likely to run into problems, and what to > do about it. [Trevis Crane] I'll look into it... thanks! > > What is the problem you are working on? It may be possible to avoid > using state, or to use it in a way that won't be a problem. [Trevis Crane] I have a system of linear equations.
Each equation has the exact same form: phi_dot = Ij - Ic*sin(phi) Each of the equations has a different value for Ij, Ic, and phi, and indeed their coupling is through these parameters. Ij and Ic change dynamically in time subject to various constraints, including the way in which they're coupled. Whenever t changes, I have to recalculate several parameters that are then used to determine Ij and Ic, but when the next time step comes, I need to use the most recent values of Ij and Ic in order to calculate the next values. Furthermore, this evolution needs to continue until the energy versus time of the system flattens out. And the energy is determined based upon these continually updating parameters. Does this make sense? I have this simulation working in Matlab, but as I've mentioned I want to try using Python in the future, so I thought I'd start with something for which I already have a correct answer. Now here's another question -- I'm trying to pass an extra argument to odeint like this y = odeint(y0,t,x) where x is the extra argument (a parameter I want to pass to the helper function). But this returns an error that tells me extra arguments must be in a tuple. I'm not sure what the appropriate syntax for this would be. Any help is appreciated... thanks, trevis > > Anne > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From gnchen at cortechs.net Mon Apr 30 18:59:09 2007 From: gnchen at cortechs.net (Gennan Chen) Date: Mon, 30 Apr 2007 15:59:09 -0700 Subject: [SciPy-user] ndimage crash on Rocks 4.0 Message-ID: Hi! All, scipy.ndimage gave me seg fault on Rocks 4.0, python 2.3.4. Anyone has a solution? [root at cluster01:~]# ipython Python 2.3.4 (#1, Feb 22 2005, 04:09:37) Type "copyright", "credits" or "license" for more information.
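On the odeint syntax question above: the function is the first argument, and extra parameters go through the `args` keyword as a tuple (a single extra argument needs the trailing comma, `args=(x,)`). A minimal sketch using the equation from this thread, with illustrative parameter values:

```python
import numpy as np
from scipy.integrate import odeint

def phidot(phi, t, Ij, Ic):
    # right-hand side: phi_dot = Ij - Ic*sin(phi)
    return Ij - Ic * np.sin(phi)

t = np.linspace(0.0, 10.0, 101)
phi0 = 0.1
# extra arguments must be packed into a tuple, here args=(Ij, Ic);
# for one extra argument x it would be args=(x,)
sol = odeint(phidot, phi0, t, args=(1.0, 0.5))
print(sol.shape)  # (101, 1): one row per time point
```

With Ij > Ic the right-hand side stays positive, so phi grows monotonically over the integration interval.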
IPython 0.8.0 -- An enhanced Interactive Python. ? -> Introduction to IPython's features. %magic -> Information about IPython's 'magic' % functions. help -> Python's own help system. object? -> Details about 'object'. ?object also works, ?? prints more. In [1]: import scipy.ndimage In [2]: scipy.ndimage.test(0 ...: KeyboardInterrupt In [2]: scipy.ndimage.test() Found 398 tests for scipy.ndimage Found 0 tests for __main__ ........................................................................ .............................../usr/lib/python2.3/site-packages/scipy/ ndimage/interpolation.py:41: UserWarning: Mode "reflect" may yield incorrect results on boundaries. Please use "mirror" instead. warnings.warn('Mode "reflect" may yield incorrect results on ' ........................................................................ .....................Segmentation fault I link numpy/scipy against MKL Gen From robert.kern at gmail.com Mon Apr 30 19:09:39 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 30 Apr 2007 18:09:39 -0500 Subject: [SciPy-user] ndimage crash on Rocks 4.0 In-Reply-To: References: Message-ID: <46367733.1060900@gmail.com> Gennan Chen wrote: > Hi! All, > > scipy.ndimage gave me seg fault on Rocks 4.0, python 2.3.4. Anyone has a > solution? Can you rerun the tests with scipy.ndimage.test(verbosity=2)? That will put the test framework into verbose mode and print out the name of the test before it's run. That way, we can know which test failed. A gdb backtrace would also be helpful if you know how to get one. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From stefan at sun.ac.za Mon Apr 30 19:18:30 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Tue, 1 May 2007 01:18:30 +0200 Subject: [SciPy-user] ndimage crash on Rocks 4.0 In-Reply-To: References: Message-ID: <20070430231830.GB6385@mentat.za.net> Hi Gen On Mon, Apr 30, 2007 at 03:59:09PM -0700, Gennan Chen wrote: > scipy.ndimage gave me seg fault on Rocks 4.0, python 2.3.4. Anyone has a > solution? Please add any debug information you can to http://projects.scipy.org/scipy/scipy/ticket/404 I'm busy tracing this bug (which especially annoys me since I don't see these errors on my machine). The problem should only influence ndimage.generic_filter*. Cheers Stéfan From s.mientki at ru.nl Mon Apr 30 19:36:26 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Tue, 01 May 2007 01:36:26 +0200 Subject: [SciPy-user] Python Reference Card Message-ID: <46367D7A.8060202@ru.nl> have you seen the Python Reference card: http://www.limsi.fr/Individu/pointal/python/pqrc/ this is really a must have for newbies and most scipy users (because they probably will not use these general functions so often).
Oh, wouldn't it be nice to have reference cards for Scipy ;-) cheers, Stef Mientki From gnchen at cortechs.net Mon Apr 30 20:45:19 2007 From: gnchen at cortechs.net (Gennan Chen) Date: Mon, 30 Apr 2007 17:45:19 -0700 Subject: [SciPy-user] ndimage crash on Rocks 4.0 In-Reply-To: <46367733.1060900@gmail.com> References: <46367733.1060900@gmail.com> Message-ID: <3F34907A-0112-4831-82C8-C3A81F6E4060@cortechs.net> Here is the debug info: In [4]: scipy.ndimage.test(verbosity=2) Found 398 tests for scipy.ndimage Warning: No test file found in /usr/lib/python2.3/site-packages/scipy/ ndimage/tests for module Warning: No test file found in /usr/lib/python2.3/site-packages/scipy/ ndimage/tests for module Warning: No test file found in /usr/lib/python2.3/site-packages/scipy/ ndimage/tests for module Warning: No test file found in /usr/lib/python2.3/site-packages/scipy/ ndimage/tests for module Warning: No test file found in /usr/lib/python2.3/site-packages/scipy/ ndimage/tests for module Warning: No test file found in /usr/lib/python2.3/site-packages/scipy/ ndimage/tests for module Warning: No test file found in /usr/lib/python2.3/site-packages/scipy/ ndimage/tests for module Warning: No test file found in /usr/lib/python2.3/site-packages/scipy/ ndimage/tests for module Found 0 tests for __main__ affine_transform 1 ... ok affine transform 2 ... ok affine transform 3 ... ok affine transform 4 ... ok affine transform 5 ... ok affine transform 6 ... ok affine transform 7 ... ok affine transform 8 ... ok affine transform 9 ... ok affine transform 10 ... ok affine transform 11 ... ok affine transform 12 ... ok affine transform 13 ... ok affine transform 14 ... ok affine transform 15 ... ok affine transform 16 ... ok affine transform 17 ... ok affine transform 18 ... ok affine transform 19 ... ok affine transform 20 ... ok affine transform 21 ... ok binary closing 1 ... ok binary closing 2 ... ok binary dilation 1 ... ok binary dilation 2 ... ok binary dilation 3 ... 
ok binary dilation 4 ... ok binary dilation 5 ... ok binary dilation 6 ... ok binary dilation 7 ... ok binary dilation 8 ... ok binary dilation 9 ... ok binary dilation 10 ... ok binary dilation 11 ... ok binary dilation 12 ... ok binary dilation 13 ... ok binary dilation 14 ... ok binary dilation 15 ... ok binary dilation 16 ... ok binary dilation 17 ... ok binary dilation 18 ... ok binary dilation 19 ... ok binary dilation 20 ... ok binary dilation 21 ... ok binary dilation 22 ... ok binary dilation 23 ... ok binary dilation 24 ... ok binary dilation 25 ... ok binary dilation 26 ... ok binary dilation 27 ... ok binary dilation 28 ... ok binary dilation 29 ... ok binary dilation 30 ... ok binary dilation 31 ... ok binary dilation 32 ... ok binary dilation 33 ... ok binary dilation 34 ... ok binary dilation 35 ... ok binary erosion 1 ... ok binary erosion 2 ... ok binary erosion 3 ... ok binary erosion 4 ... ok binary erosion 5 ... ok binary erosion 6 ... ok binary erosion 7 ... ok binary erosion 8 ... ok binary erosion 9 ... ok binary erosion 10 ... ok binary erosion 11 ... ok binary erosion 12 ... ok binary erosion 13 ... ok binary erosion 14 ... ok binary erosion 15 ... ok binary erosion 16 ... ok binary erosion 17 ... ok binary erosion 18 ... ok binary erosion 19 ... ok binary erosion 20 ... ok binary erosion 21 ... ok binary erosion 22 ... ok binary erosion 23 ... ok binary erosion 24 ... ok binary erosion 25 ... ok binary erosion 26 ... ok binary erosion 27 ... ok binary erosion 28 ... ok binary erosion 29 ... ok binary erosion 30 ... ok binary erosion 31 ... ok binary erosion 32 ... ok binary erosion 33 ... ok binary erosion 34 ... ok binary erosion 35 ... ok binary erosion 36 ... ok binary fill holes 1 ... ok binary fill holes 2 ... ok binary fill holes 3 ... ok binary opening 1 ... ok binary opening 2 ... ok binary propagation 1 ... ok binary propagation 2 ... ok black tophat 1 ... ok black tophat 2 ... 
ok boundary modes/usr/lib/python2.3/site-packages/scipy/ndimage/ interpolation.py:41: UserWarning: Mode "reflect" may yield incorrect results on boundaries. Please use "mirror" instead. warnings.warn('Mode "reflect" may yield incorrect results on ' ... ok center of mass 1 ... ok center of mass 2 ... ok center of mass 3 ... ok center of mass 4 ... ok center of mass 5 ... ok center of mass 6 ... ok center of mass 7 ... ok center of mass 8 ... ok center of mass 9 ... ok correlation 1 ... ok correlation 2 ... ok correlation 3 ... ok correlation 4 ... ok correlation 5 ... ok correlation 6 ... ok correlation 7 ... ok correlation 8 ... ok correlation 9 ... ok correlation 10 ... ok correlation 11 ... ok correlation 12 ... ok correlation 13 ... ok correlation 14 ... ok correlation 15 ... ok correlation 16 ... ok correlation 17 ... ok correlation 18 ... ok correlation 19 ... ok correlation 20 ... ok correlation 21 ... ok correlation 22 ... ok correlation 23 ... ok correlation 24 ... ok correlation 25 ... ok brute force distance transform 1 ... ok brute force distance transform 2 ... ok brute force distance transform 3 ... ok brute force distance transform 4 ... ok brute force distance transform 5 ... ok brute force distance transform 6 ... ok chamfer type distance transform 1 ... ok chamfer type distance transform 2 ... ok chamfer type distance transform 3 ... ok euclidean distance transform 1 ... ok euclidean distance transform 2 ... ok euclidean distance transform 3 ... ok euclidean distance transform 4 ... ok line extension 1 ... ok line extension 2 ... ok line extension 3 ... ok line extension 4 ... ok line extension 5 ... ok line extension 6 ... ok line extension 7 ... ok line extension 8 ... ok line extension 9 ... ok line extension 10 ... ok extrema 1 ... ok extrema 2 ... ok extrema 3 ... ok extrema 4 ... ok find_objects 1 ... ok find_objects 2 ... ok find_objects 3 ... ok find_objects 4 ... ok find_objects 5 ... ok find_objects 6 ... ok find_objects 7 ... 
ok find_objects 8 ... ok find_objects 9 ... ok ellipsoid fourier filter for complex transforms 1 ... ok ellipsoid fourier filter for real transforms 1 ... ok gaussian fourier filter for complex transforms 1 ... ok gaussian fourier filter for real transforms 1 ... ok shift filter for complex transforms 1 ... ok shift filter for real transforms 1 ... ok uniform fourier filter for complex transforms 1 ... ok uniform fourier filter for real transforms 1 ... ok gaussian filter 1 ... ok gaussian filter 2 ... ok gaussian filter 3 ... ok gaussian filter 4 ... ok gaussian filter 5 ... ok gaussian filter 6 ... ok gaussian gradient magnitude filter 1 ... ok gaussian gradient magnitude filter 2 ... ok gaussian laplace filter 1 ... ok gaussian laplace filter 2 ... ok generation of a binary structure 1 ... ok generation of a binary structure 2 ... ok generation of a binary structure 3 ... ok generation of a binary structure 4 ... ok generic filter 1Segmentation fault And Stefan is right. The problem is generic_filter. Gen On Apr 30, 2007, at 4:09 PM, Robert Kern wrote: > Gennan Chen wrote: >> Hi! All, >> >> scipy.ndimage gave me seg fault on Rocks 4.0, python 2.3.4. Anyone >> has a >> solution? > > Can you rerun the tests with scipy.ndimage.test(verbosity=2)? That > will put the > test framework into verbose mode and print out the name of the test > before it's > run. That way, we can know which test failed. > > A gdb backtrace would also be helpful if you know how to get one. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a > harmless enigma > that is made terrible by our own mad attempt to interpret it as > though it had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user >
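The test that crashes above exercises scipy.ndimage.generic_filter, which calls back into a Python function once per output element. A minimal standalone call in the same spirit (illustrative, not taken from the thread) can serve as a quick smoke test on an affected installation:

```python
import numpy as np
from scipy import ndimage

a = np.arange(12.0).reshape(3, 4)
# generic_filter passes each 3x3 neighborhood (flattened to 1-D) to the callback
out = ndimage.generic_filter(a, lambda v: v.mean(), size=3)
print(out.shape)   # (3, 4): same shape as the input
print(out[1, 1])   # mean of the 3x3 block around element (1, 1): 5.0
```

If the generic_filter bug being traced in ticket 404 is present, a call like this is the kind of thing expected to segfault; on a healthy build it returns normally.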