From lou_boog2000 at yahoo.com Mon Jan 1 08:35:48 2007
From: lou_boog2000 at yahoo.com (Lou Pecora)
Date: Mon, 1 Jan 2007 05:35:48 -0800 (PST)
Subject: [SciPy-user] additional superpak problems
In-Reply-To: <9DC36E73-F8AE-4FA1-AFC5-A09C037D9FC8@cs.hmc.edu>
Message-ID: <867287.82526.qm@web34406.mail.mud.yahoo.com>

I use matplotlib. No real problems there. Just have to remember to set it to
interactive for certain things. I think you need to set mpl to use TkAgg for
interactive. TkAgg can be invoked in the interpreter by just importing
matplotlib and executing matplotlib.use("TkAgg"). I think if you use the wx
backend you have to have mpl synchronized with wxPython. Note: WXAgg does
*not* work with interactive. I can't recall the problem exactly. Otherwise I
have mpl running fine.

-- Lou

--- belinda thom wrote:

> So you've had no use of matplotlib? Getting that one running in
> interactive mode was as tricky as the scipy one, IMHO.
>
> --b

From fperez.net at gmail.com Mon Jan 1 11:36:08 2007
From: fperez.net at gmail.com (Fernando Perez)
Date: Mon, 1 Jan 2007 09:36:08 -0700
Subject: [SciPy-user] additional superpak problems
In-Reply-To: <867287.82526.qm@web34406.mail.mud.yahoo.com>
References: <9DC36E73-F8AE-4FA1-AFC5-A09C037D9FC8@cs.hmc.edu> <867287.82526.qm@web34406.mail.mud.yahoo.com>
Message-ID:

Hi Lou,

On 1/1/07, Lou Pecora wrote:
> I use matplotlib. No real problems there. Just have to remember to set
> it to interactive for certain things. I think you need to set mpl to use
> TkAgg for interactive. TkAgg can be invoked in the interpreter by just
> importing matplotlib and executing matplotlib.use("TkAgg"). I think if
> you use the wx backend you have to have mpl synchronized with wxPython.
> Note: WXAgg does *not* work with interactive. I can't recall the problem
> exactly. Otherwise I have mpl running fine.

I think Belinda's question was regarding the fact that with ipython, if you
start it as "ipython -pylab" it automatically synchronizes with mpl for
interactive use, with all (well, except FLTK) the GUI backends. IPython has
custom threading support to allow GTK, WX or QT3/4 to correctly work without
blocking the input terminal.

If I understand correctly, getting the GUI bits to work under OSX has been
giving Belinda some grief, unfortunately.

Cheers,

f

From v-nijs at kellogg.northwestern.edu Mon Jan 1 17:52:32 2007
From: v-nijs at kellogg.northwestern.edu (Vincent Nijs)
Date: Mon, 01 Jan 2007 16:52:32 -0600
Subject: [SciPy-user] Scipy website
In-Reply-To:
Message-ID:

Is there a problem with the scipy website? Connection seems to have been
spotty the last couple of days.

Vincent

From sbasu at physics.cornell.edu Mon Jan 1 18:09:53 2007
From: sbasu at physics.cornell.edu (Sourish Basu)
Date: Mon, 01 Jan 2007 18:09:53 -0500
Subject: [SciPy-user] New user questions
In-Reply-To: <9BE12344-C99F-4CDD-82F5-B167E3C382B6@gmail.com>
References: <9BE12344-C99F-4CDD-82F5-B167E3C382B6@gmail.com>
Message-ID: <459994C1.5080008@physics.cornell.edu>

Niklas Saers wrote:
> Warning: FAILURE importing tests for 'scipy.linsolve.umfpack.umfpack' from '...y/linsolve/umfpack/umfpack.pyc'>
> Warning: FAILURE importing tests for from '.../linsolve/umfpack/__init__.pyc'>

Installing UMFPACK is also quite easy.
Just install it somewhere and add the following lines to your numpy site.cfg
file before building/installing:

    [umfpack]
    library_dirs = /usr/local/lib (this is the path where libumfpack.a can be found)
    umfpack_libs = umfpack

Then reinstall numpy/scipy. If you've already done it, this extra step should
not complicate matters any more.

-Sourish

From eric at enthought.com Mon Jan 1 21:41:10 2007
From: eric at enthought.com (eric jones)
Date: Mon, 01 Jan 2007 20:41:10 -0600
Subject: [SciPy-user] Scipy website
In-Reply-To:
References:
Message-ID: <4599C646.7020009@enthought.com>

I just tried and am having issues as well. I will check with Jeff to see
what is going on.

eric

Vincent Nijs wrote:
> Is there a problem with the scipy website? Connection seems to have been
> spotty the last couple of days.
>
> Vincent

From eric at enthought.com Mon Jan 1 22:31:32 2007
From: eric at enthought.com (eric jones)
Date: Mon, 01 Jan 2007 21:31:32 -0600
Subject: [SciPy-user] Scipy website
In-Reply-To: <4599C646.7020009@enthought.com>
References: <4599C646.7020009@enthought.com>
Message-ID: <4599D214.4010206@enthought.com>

It looks like Jeff got it back up. I am able to access it again. Thanks Jeff.

eric

eric jones wrote:
> I just tried and am having issues as well. I will check with Jeff to
> see what is going on.
>
> eric
>
> Vincent Nijs wrote:
>> Is there a problem with the scipy website? Connection seems to have been
>> spotty the last couple of days.
>>
>> Vincent

From joishi at amnh.org Tue Jan 2 10:22:24 2007
From: joishi at amnh.org (J Oishi)
Date: Tue, 2 Jan 2007 10:22:24 -0500
Subject: [SciPy-user] SciPy finds wrong lapack
References:
Message-ID: <4AA8D91C-14D0-472A-A46B-8F3D09989089@amnh.org>

Hi,

I'm attempting to use SciPy 0.5.2 with NumPy 1.0.1 on a Scientific Linux
distribution (which is itself based on Redhat Enterprise, I think). I have
built ATLAS and ensured I have a complete LAPACK installed in
/usr/local/lib/atlas (following the instructions on the ATLAS site). NumPy
and SciPy both build ok, although I had to modify site.cfg for both in
order to get them to find my ATLAS libraries.
However, scipy.test() fails as follows:

    ======================================================================
    ERROR: check_integer (scipy.io.tests.test_array_import.test_read_array)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "/usr/local/lib/python2.4/site-packages/scipy/io/tests/test_array_import.py", line 55, in check_integer
        from scipy import stats
      File "/usr/local/lib/python2.4/site-packages/scipy/stats/__init__.py", line 7, in ?
        from stats import *
      File "/usr/local/lib/python2.4/site-packages/scipy/stats/stats.py", line 190, in ?
        import scipy.special as special
      File "/usr/local/lib/python2.4/site-packages/scipy/special/__init__.py", line 10, in ?
        import orthogonal
      File "/usr/local/lib/python2.4/site-packages/scipy/special/orthogonal.py", line 65, in ?
        from scipy.linalg import eig
      File "/usr/local/lib/python2.4/site-packages/scipy/linalg/__init__.py", line 8, in ?
        from basic import *
      File "/usr/local/lib/python2.4/site-packages/scipy/linalg/basic.py", line 23, in ?
        from scipy.linalg import calc_lwork
    ImportError: cannot import name calc_lwork
    ----------------------------------------------------------------------

When I run ldd on the calc_lwork.so, I get the following:

    [joishi at hachiko scipy-0.5.2]$ ldd /usr/local/lib/python2.4/site-packages/scipy/linalg/calc_lwork.so
        liblapack.so.3 => /usr/lib/liblapack.so.3 (0x0045a000)
        libblas.so.3 => /usr/lib/libblas.so.3 (0x00111000)
        libm.so.6 => /lib/tls/libm.so.6 (0x00293000)
        libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x00162000)
        libc.so.6 => /lib/tls/libc.so.6 (0x002b6000)
        libg2c.so.0 => /usr/lib/libg2c.so.0 (0x00b65000)
        /lib/ld-linux.so.2 (0x00ae3000)

However, that's not where the ATLAS version of LAPACK is! I do not have a
shared version of liblapack in /usr/local/lib/atlas, only a static .a
version. It seems that somewhere scipy is picking up the shared version from
/usr/lib instead of the static version in /usr/local/lib/atlas. How can I
control this behavior? Or am I chasing the wrong problem?

Any advice would be greatly appreciated.

thanks,

jeff

From robert.kern at gmail.com Tue Jan 2 11:05:01 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 02 Jan 2007 10:05:01 -0600
Subject: [SciPy-user] SciPy finds wrong lapack
In-Reply-To: <4AA8D91C-14D0-472A-A46B-8F3D09989089@amnh.org>
References: <4AA8D91C-14D0-472A-A46B-8F3D09989089@amnh.org>
Message-ID: <459A82AD.6070705@gmail.com>

J Oishi wrote:
> Hi,
>
> I'm attempting to use SciPy 0.5.2 with NumPy 1.0.1 on a Scientific Linux
> distribution (which is itself based on Redhat Enterprise, I think). I have
> built ATLAS and ensured I have a complete LAPACK installed in
> /usr/local/lib/atlas (following the instructions on the ATLAS site). NumPy
> and SciPy both build ok, although I had to modify site.cfg for both in
> order to get them to find my ATLAS libraries.

What is your site.cfg and where (and what) are the relevant libraries on
your system? You might want to use

    [DEFAULT]
    search_static_first=true

in order to pick up the static library rather than the shared one.

-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth." -- Umberto Eco
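A quick way to see which BLAS/LAPACK directories a given numpy build actually
recorded (a hedged aside: numpy.distutils writes this information into the
package at build time, so this assumes a from-source build like the one
described above):

    import numpy
    numpy.show_config()   # prints the atlas/lapack library_dirs found at build time

If the printed library_dirs still point at /usr/lib rather than
/usr/local/lib/atlas, the site.cfg changes have not taken effect.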
From joishi at amnh.org Tue Jan 2 11:35:38 2007
From: joishi at amnh.org (J Oishi)
Date: Tue, 2 Jan 2007 11:35:38 -0500
Subject: [SciPy-user] SciPy finds wrong lapack
Message-ID: <1EE61315-A361-4BCC-A0EC-24DCB612453A@amnh.org>

Thanks for the tip. I tried setting the search_static_first=true, but the
problem remains.

> What is your site.cfg and where (and what) are the relevant libraries on
> your system?

my site.cfg (for both scipy and numpy) reads

    [DEFAULT]
    search_static_first=true
    [atlas]
    atlas_libs = lapack, cblas, atlas, f77blas

My atlas libs are in /usr/local/lib/atlas, but the /usr/lib/liblapack.so.3
versions are the ones being linked into calc_lwork.so. I think this might be
related to the fact that

    liblapack.so.3 => /usr/lib/liblapack.so.3 (0x0045a000)
    libblas.so.3 => /usr/lib/libblas.so.3 (0x00111000)

are from my (possibly tainted) RedHat libraries, rather than from my good
ATLAS versions

    /usr/local/lib/atlas/liblapack.a
    /usr/local/lib/atlas/libcblas.a

thanks,

j

From robert.kern at gmail.com Tue Jan 2 12:18:18 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 02 Jan 2007 11:18:18 -0600
Subject: [SciPy-user] SciPy finds wrong lapack
In-Reply-To: <1EE61315-A361-4BCC-A0EC-24DCB612453A@amnh.org>
References: <1EE61315-A361-4BCC-A0EC-24DCB612453A@amnh.org>
Message-ID: <459A93DA.3020506@gmail.com>

J Oishi wrote:
> Thanks for the tip. I tried setting the search_static_first=true, but
> the problem remains.
>
>> What is your site.cfg and where (and what) are the relevant libraries
>> on your system?
>
> my site.cfg (for both scipy and numpy) reads
>
>     [DEFAULT]
>     search_static_first=true
>     [atlas]
>     atlas_libs = lapack, cblas, atlas, f77blas
>
> My atlas libs are in /usr/local/lib/atlas, but the /usr/lib/liblapack.so.3
> versions are the ones being linked into calc_lwork.so. I think this might
> be related to the fact that
>
>     liblapack.so.3 => /usr/lib/liblapack.so.3 (0x0045a000)
>     libblas.so.3 => /usr/lib/libblas.so.3 (0x00111000)
>
> are from my (possibly tainted) RedHat libraries, rather than from my
> good ATLAS versions
>
>     /usr/local/lib/atlas/liblapack.a
>     /usr/local/lib/atlas/libcblas.a

Ah. Then you need to include that information in your site.cfg:

    [atlas]
    atlas_libs = lapack, f77blas, cblas, atlas
    library_dirs = /usr/local/lib/atlas

Also, note the correction to the order of the libraries.

-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth." -- Umberto Eco

From niklassaers at gmail.com Tue Jan 2 08:02:55 2007
From: niklassaers at gmail.com (Niklas Saers)
Date: Tue, 2 Jan 2007 14:02:55 +0100
Subject: [SciPy-user] New user questions
In-Reply-To: <459994C1.5080008@physics.cornell.edu>
References: <9BE12344-C99F-4CDD-82F5-B167E3C382B6@gmail.com> <459994C1.5080008@physics.cornell.edu>
Message-ID: <5F1323E6-C6DE-4E87-8503-8462CED08BCE@gmail.com>

Thank you very much for your great advice. :-)

Cheers

Nik

On 2. jan. 2007, at 00.09, Sourish Basu wrote:
> Installing UMFPACK is also quite easy. Just install it somewhere and
> add the following lines to your numpy site.cfg file before
> building/installing:

From mattknox_ca at hotmail.com Tue Jan 2 12:36:20 2007
From: mattknox_ca at hotmail.com (Matt Knox)
Date: Tue, 2 Jan 2007 12:36:20 -0500
Subject: [SciPy-user] time series plotting (slightly off topic)
Message-ID:

I added a sub-module to the time series module in the sandbox to help with
plotting time series. Take a look at plotting.py in the examples
subdirectory if you are interested. It requires matplotlib obviously.
Basically, it makes it very easy to generate plots of time series at
different frequencies (business day, daily, monthly, quarterly, and annual
currently supported).
I have only tried it with the most recent version of matplotlib, so no idea
how it would fare with earlier versions. Suggestions, criticisms, etc.
always welcome.

- Matt

From m.starnes at imperial.ac.uk Tue Jan 2 13:05:59 2007
From: m.starnes at imperial.ac.uk (Mark Starnes)
Date: Tue, 02 Jan 2007 18:05:59 +0000
Subject: [SciPy-user] csc, csr sparse complex matrix problems. Possibly memory issues.
In-Reply-To: <4596FF09.3050208@imperial.ac.uk>
References: <457E91FD.2000302@iam.uni-stuttgart.de> <4580C072.3060608@imperial.ac.uk> <458124BC.6080508@ntc.zcu.cz> <4581355A.2020806@imperial.ac.uk> <4596FF09.3050208@imperial.ac.uk>
Message-ID: <459A9F07.2080907@imperial.ac.uk>

An HTML attachment was scrubbed...
URL:

From pgmdevlist at gmail.com Tue Jan 2 13:22:43 2007
From: pgmdevlist at gmail.com (Pierre GM)
Date: Tue, 2 Jan 2007 13:22:43 -0500
Subject: [SciPy-user] time series plotting (slightly off topic)
In-Reply-To:
Message-ID: <200701021322.44174.pgmdevlist@gmail.com>

Ciao Matt,

Happy New Year! I've been quite busy during Xmas and NYD with your
TimeSeries module. Please have a look in the mtimeseries subfolder. You'll
find the basic code along with an updated version of your example and some
unittests.

As I had told you last year, my implementation of TimeSeries relies on
maskedarray. That means that TimeSeries objects are basically subclasses of
MaskedArray, and ultimately of ndarray. Support of non-regular time series,
with gaps in dates, is operational. The trick is to store the dates along
with the data, in a new DateArray object. That's a bit of overhead when you
have regular time series, but that's the only way.

The DateArray objects are described in the tsdate.py file. I put some
creation/conversion tests: you can convert a DateArray to another frequency,
to strings, or even to Gregorian proleptic ordinals, which means that
plotting is as simple as:

    plot_date(series._dates.toordinal(), series._series)

I had already written some subclasses of MPL.SubPlot to simplify that. I'll
have to clean my code a bit and will post it later.

The nice thing is that you can select items directly by their date, entered
as a DateObject, or even a string, without having to convert them to indices
first. Conversion to other frequencies works as expected. Using basic
arithmetic functions works also (a nice side effect of working with
maskedarray).

A couple of comments:
- Do we still need dateOf ? cseries.convert seems to do the work. dateOf
  could then just be a python wrapper.
- It'd be cool if cseries.convert could accept arrays instead of only
  scalars. That way, we wouldn't have to loop over a DateArray for the
  conversion.

Now I intend to work on the full support of multi-timeseries, viz, one
TimeSeries object where each row corresponds to a particular series. In
particular, I'd want to be able to access data by names, a bit like records.
I also want to work on supporting other frequencies, such as weekly, hourly
and minutely. There's already some basic implementation in tsdate, but I
haven't tried to modify the C code yet. I didn't really try to support the
regular numpy.core.ma: I can try to work a bit on that, but that's not my
main objective.

Let me know what you think.

Cheers

P.
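A minimal illustration of the plot_date call Pierre mentions, assuming only
matplotlib's pylab interface and the standard datetime module (the DateArray
class itself lives in the sandbox): proleptic Gregorian ordinals line up
with the date numbers matplotlib uses internally, so they can be passed
straight in.

    import datetime
    import pylab

    # toordinal() returns proleptic Gregorian day numbers
    days = [datetime.date(2007, 1, d).toordinal() for d in range(1, 11)]
    values = [d * d for d in range(1, 11)]
    pylab.plot_date(days, values)
    pylab.show()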
From joishi at amnh.org Tue Jan 2 13:40:21 2007
From: joishi at amnh.org (J Oishi)
Date: Tue, 2 Jan 2007 13:40:21 -0500
Subject: [SciPy-user] SciPy finds wrong lapack
Message-ID:

Sorry to be a pain, but I'm still not having any luck!

> [atlas]
> atlas_libs = lapack, f77blas, cblas, atlas
> library_dirs = /usr/local/lib/atlas
>
> Also, note the correction to the order of the libraries.

Hmm. I made both of these changes to both numpy and scipy site.cfg, and the
error still persists. ldd continues to report linking to /usr/lib versions.
Looking through the output of python setup.py install, scipy reports that it
finds the atlas libraries in /usr/local/lib/atlas:

    atlas_blas_threads_info:
    Setting PTATLAS=ATLAS
    Setting PTATLAS=ATLAS
    Setting PTATLAS=ATLAS
    FOUND:
        libraries = ['lapack', 'f77blas', 'cblas', 'atlas']
        library_dirs = ['/usr/local/lib/atlas']
        language = c
        include_dirs = ['/usr/local/lib/atlas']

thanks again,

j

From mattknox_ca at hotmail.com Tue Jan 2 14:26:42 2007
From: mattknox_ca at hotmail.com (Matt Knox)
Date: Tue, 2 Jan 2007 19:26:42 +0000 (UTC)
Subject: [SciPy-user] time series plotting (slightly off topic)
References: <200701021322.44174.pgmdevlist@gmail.com>
Message-ID:

> Ciao Matt,
> Happy New Year! I've been quite busy during Xmas and NYD with your
> TimeSeries module. Please have a look in the mtimeseries subfolder.
> You'll find the basic code along with an updated version of your example
> and some unittests.

Thanks. Happy new year to you too. Hope you found some time to relax during
the holiday season too! I'll take a look at your changes.

> which means that plotting is as simple as:
> plot_date(series._dates.toordinal(), series._series)
> I had already written some subclasses of MPL.SubPlot to simplify that.
> I'll have to clean my code a bit and will post it later.

My first attempt at plotting the time series objects was similar to this,
but I found getting intelligent labels quite tricky. My new plotting code
basically bypasses the built-in matplotlib date axis labelling and outputs
the labels from scratch using my own logic (which works better for handling
different frequencies in my opinion). A lot of the logic I created concerns
the spacing of the labels and how many labels to draw, so you don't end up
with an over-crowded axis. And quarterly frequency labelling is supported,
which I believe is not possible (not easily anyway) with the standard stuff.

> A couple of comments:
> - Do we still need dateOf ? cseries.convert seems to do the work. dateOf
>   could then just be a python wrapper.

Yeah, you are probably right. Originally all the logic was not implemented
in C yet, so I needed this function. The name of the function (dateOf)
probably isn't entirely intuitive either. The reason for the name is because
that's what the equivalent function in FAME is called and I just named it
the same :)

> - It'd be cool if cseries.convert could accept arrays instead of only
>   scalars. That way, we wouldn't have to loop over a DateArray for the
>   conversion.

Yeah, that could be done. Wouldn't be very difficult likely.

> I also want to work on supporting other frequencies, such as weekly,
> hourly and minutely. There's already some basic implementation in tsdate,
> but I haven't tried to modify the C code yet.

As you are aware, weekly is one of those tricky frequencies that isn't a
nice subset of the other frequencies. It's not a data frequency I use very
often, but we'll probably need to discuss some strategy for converting it to
other frequencies since it is an odd-ball case. Another thing to consider
with weekly frequency is defining when the week ends. FAME actually
recognizes seven different weekly frequencies: weekly(Monday),
weekly(Tuesday), etc.,
and this impacts how frequency conversion is done too. It's not really
something I care too much about, but just something to consider...

Also, looking at your code for the weekly frequency in tsdate... I think it
might be a good idea to add an additional parameter to the __init__ method
called "period" so you could specify a date with just the year and period
(15th week for example, or maybe -1 for last period in year, etc).

- Matt

From pgmdevlist at gmail.com Tue Jan 2 15:11:58 2007
From: pgmdevlist at gmail.com (Pierre GM)
Date: Tue, 2 Jan 2007 15:11:58 -0500
Subject: [SciPy-user] time series plotting (slightly off topic)
In-Reply-To:
References: <200701021322.44174.pgmdevlist@gmail.com>
Message-ID: <200701021511.58613.pgmdevlist@gmail.com>

> Thanks. Happy new year to you too. Hope you found some time to relax
> during the holiday season too! I'll take a look at your changes.

Oh I did, no worry :)

> My first attempt at plotting the time series objects was similar to this,
> but I found getting intelligent labels quite tricky. My new plotting code
> basically bypasses the built-in matplotlib date axis labelling and outputs
> the labels from scratch using my own logic...

I see. So, what about this:
- we transform your class so that it is a real subclass of mpl.axes or
  better, mpl.subplot (which means that we can use it in a figure with other
  plots, and have access to all the basic methods)
- we write a proper DateFormatter to handle the dates, that would implement
  your labelling in a mpl-friendly way. We wouldn't even need to go through
  the conversion to ordinals: just stick to the corresponding integers, and
  use the frequency to decide what to do, just like you did.

> The name of the function (dateOf) probably isn't entirely intuitive
> either.

Yeah, I've noticed. The 'asfreq' method sounds more natural.

> As you are aware, weekly is one of those tricky frequencies that isn't a
> nice subset of the other frequencies. It's not a data frequency I use very
> often, but we'll probably need to discuss some strategy for converting it
> to other frequencies since it is an odd-ball case. ...

As you've noticed, I went for the easiest: a week is an isoweek, starting on
Mondays (or whatever the convention in mx.DateTime is to process weeks).
Adding a different first day of the week might be an idea to consider, but I
wouldn't rush on that...

> Also, looking at your code for the weekly frequency in tsdate... I think
> it might be a good idea to add an additional parameter to the __init__
> method called "period" so you could specify a date with just the year and
> period (15th week for example, or maybe -1 for last period in year, etc).

That's an idea as well. But if we are to deal with other frequencies, I'd
like to implement seasons, one season being a particular group of months. I
don't think C would be needed, just using months should do the trick...

BTW, I just updated tseries.py and the corresponding test to handle
pickling/unpickling. That works well on 1D data, I'll work on nD data later
on.

From v-nijs at kellogg.northwestern.edu Wed Jan 3 21:22:21 2007
From: v-nijs at kellogg.northwestern.edu (Vincent Nijs)
Date: Wed, 03 Jan 2007 20:22:21 -0600
Subject: [SciPy-user] Concatenate without making a copy? + sandbox: models
In-Reply-To: <200701021511.58613.pgmdevlist@gmail.com>
Message-ID:

If:

    d = {'a':array([1,2,3]), 'b':array([4,3,5])}

is it possible to do something like c_[d['a'],d['b']] in scipy without
making a copy?

The models module expects data structured in dictionaries I believe.
I am not sure I completely understand the code in the models module but I
think it creates a copy of the original data that is then processed by a
class in the module. While the idea of using a 'data-dictionary' is very
nice, this copy can be expensive when using large data-sets. Is there a way
to avoid it (i.e., a c_-like command without making a copy)?

An alternative solution that avoids copies but still allows access to data
by using variable names would be to create a single data matrix:

    array([[1,4],
           [2,3],
           [3,5]])

Store the indices in a dictionary:

    d = {'a':0, 'b':1}

The data passed to the model would then be:

    data[:,(d['a'],d['b'])]

The second approach can lead to a bit more book-keeping but that can easily
be wrapped in a few functions/methods. Since it uses indexing it does not
make a copy. Is there a generally recommended/preferred approach to
accessing data using variable names?

Best,

Vincent

From robert.kern at gmail.com Wed Jan 3 21:28:17 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 03 Jan 2007 20:28:17 -0600
Subject: [SciPy-user] Concatenate without making a copy? + sandbox: models
In-Reply-To:
References:
Message-ID: <459C6641.9030802@gmail.com>

Vincent Nijs wrote:
> If:
>
> d = {'a':array([1,2,3]), 'b':array([4,3,5])}
>
> is it possible to do something like c_[d['a'],d['b']] in scipy without
> making a copy?

No. In general the data will not be next to each other in memory, so they
must be copied to create a single concatenated array.

-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth." -- Umberto Eco

From v-nijs at kellogg.northwestern.edu Wed Jan 3 22:00:05 2007
From: v-nijs at kellogg.northwestern.edu (Vincent Nijs)
Date: Wed, 03 Jan 2007 21:00:05 -0600
Subject: [SciPy-user] Concatenate without making a copy? + sandbox: models
In-Reply-To: <459C6641.9030802@gmail.com>
Message-ID:

Robert,

If x is:

    array([[1,4],
           [2,3],
           [3,5]])

and

    d = {'x0':x[:,0], 'x1':x[:,1]}

would d['x0'] and d['x1'] be next to each other in memory?

Vincent

> Vincent Nijs wrote:
>> If:
>>
>> d = {'a':array([1,2,3]), 'b':array([4,3,5])}
>>
>> is it possible to do something like c_[d['a'],d['b']] in scipy without
>> making a copy?
>
> No. In general the data will not be next to each other in memory, so they
> must be copied to create a single concatenated array.

From robert.kern at gmail.com Wed Jan 3 22:19:00 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 03 Jan 2007 21:19:00 -0600
Subject: [SciPy-user] Concatenate without making a copy? + sandbox: models
In-Reply-To:
References:
Message-ID: <459C7224.7010609@gmail.com>

Vincent Nijs wrote:
> Robert,
>
> If x is:
>
>     array([[1,4],
>            [2,3],
>            [3,5]])
>
> and d = {'x0':x[:,0], 'x1':x[:,1]}
>
> would d['x0'] and d['x1'] be next to each other in memory?

Yes, they would, for these purposes.

-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth." -- Umberto Eco
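A quick check of the distinction at play here (a small sketch, assuming
numpy imported as N as elsewhere in the thread): plain column slices are
views that write through to the parent array, whereas indexing with a tuple
of column numbers is fancy indexing and returns a fresh copy.

    import numpy as N

    x = N.array([[1, 4],
                 [2, 3],
                 [3, 5]])
    d = {'x0': x[:, 0], 'x1': x[:, 1]}

    d['x0'][0] = -1
    print x[0, 0]            # -1: the slice is a view into x

    y = x[:, (0, 1)]         # fancy indexing, by contrast, copies
    y[0, 0] = 99
    print x[0, 0]            # still -1: x is untouched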
From oliphant at ee.byu.edu Wed Jan 3 22:33:03 2007
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Wed, 03 Jan 2007 20:33:03 -0700
Subject: [SciPy-user] Concatenate without making a copy? + sandbox: models
In-Reply-To:
References:
Message-ID: <459C756F.1020604@ee.byu.edu>

Vincent Nijs wrote:
> If:
>
> d = {'a':array([1,2,3]), 'b':array([4,3,5])}
>
> is it possible to do something like c_[d['a'],d['b']] in scipy without
> making a copy?
>
> The models module expects data structured in dictionaries I believe.

You could not go backwards like this. You would have to allocate the array
for 'a' and 'b' as one big chunk and then assign 'a' and 'b' to parts of
that original data.

-Travis

From v-nijs at kellogg.northwestern.edu Wed Jan 3 22:30:35 2007
From: v-nijs at kellogg.northwestern.edu (Vincent Nijs)
Date: Wed, 03 Jan 2007 21:30:35 -0600
Subject: [SciPy-user] Concatenate without making a copy? + sandbox: models
In-Reply-To: <459C7224.7010609@gmail.com>
Message-ID:

    z = c_[d['x0'],d['x1']]

still makes a copy however. That is, if I set z[0,0] = -1, z is changed but
d and x are not. Is there an alternative command to c_ that would not make
the copy?

Also, what if x.shape is (5,3). Could you concatenate d['x0'] and d['x2']?

Vincent

On 1/3/07 9:19 PM, "Robert Kern" wrote:
> Vincent Nijs wrote:
>> Robert,
>>
>> If x is:
>>
>>     array([[1,4],
>>            [2,3],
>>            [3,5]])
>>
>> and d = {'x0':x[:,0], 'x1':x[:,1]}
>>
>> would d['x0'] and d['x1'] be next to each other in memory?
>
> Yes, they would, for these purposes.

From robert.kern at gmail.com Wed Jan 3 22:48:56 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 03 Jan 2007 21:48:56 -0600
Subject: [SciPy-user] Concatenate without making a copy? + sandbox: models
In-Reply-To:
References:
Message-ID: <459C7928.3010308@gmail.com>

Vincent Nijs wrote:
> z = c_[d['x0'],d['x1']]
>
> still makes a copy however. That is, if I set z[0,0] = -1, z is changed
> but d and x are not.

Correct. As I said, "in general" that is not the case, so c_ will copy.

> Is there an alternative command to c_ that would not make the copy?

No. You could write one (probably even in Python by abusing the
__array_interface__ appropriately) and perhaps convince Jonathon Taylor to
use it in scipy.sandbox.models, but the chance of a performance gain is
almost certainly not worth the effort or the multiplication of similar
functions.

> Also, what if x.shape is (5,3). Could you concatenate d['x0'] and d['x2']?

Yes, you could concatenate them, but they would have to copy because they
are not laid out in memory appropriately.

-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth." -- Umberto Eco
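A minimal sketch of the allocate-one-big-chunk approach Travis describes,
assuming numpy imported as N (the names here are illustrative, not part of
any scipy API):

    import numpy as N

    # allocate the full block first, then hand out named column views
    block = N.empty((3, 2))
    d = {'a': block[:, 0], 'b': block[:, 1]}
    d['a'][:] = [1, 2, 3]
    d['b'][:] = [4, 3, 5]

    # 'a' and 'b' now share memory with `block`, so the "concatenated"
    # array already exists and no copy is ever made:
    d['a'][0] = -1
    print block[0, 0]        # -1.0: the view writes through to the block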
From lfriedri at imtek.de Thu Jan 4 05:10:35 2007
From: lfriedri at imtek.de (Lars Friedrich)
Date: Thu, 04 Jan 2007 11:10:35 +0100
Subject: [SciPy-user] Fast simple plotting
Message-ID: <1167905435.4888.18.camel@localhost>

Hello everyone,

I am using python/scipy to control some hardware using ctypes and some
self-written dlls. Now I have the need to display some data online while
doing other things. Until now I used matplotlib to plot everything, but this
is too slow for the online display. What do you recommend?

What I would like to do as a start is a small simple oscilloscope. I read
data from a data acquisition card and plot it to a simple 2d-plot. I would
like to reach roughly 5 frames per second but *being able to do something
else at the same time*.

As I said, I tried matplotlib. At the moment I am playing around with VTK. I
read about chaco. I know that there is gnuplot. Certainly there is also wx
or something equivalent.

Is there anyone doing something similar? What do you recommend?

Thanks for every comment

Lars Friedrich

--
Dipl.-Ing. Lars Friedrich
Optical Measurement Technology
Department of Microsystems Engineering -- IMTEK
University of Freiburg
Georges-Köhler-Allee 102
D-79110 Freiburg
Germany

phone: +49-761-203-7531
fax: +49-761-203-7537
room: 01 088
email: lfriedri at imtek.de

From gael.varoquaux at normalesup.org Thu Jan 4 05:15:53 2007
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Thu, 4 Jan 2007 11:15:53 +0100
Subject: [SciPy-user] Fast simple plotting
In-Reply-To: <1167905435.4888.18.camel@localhost>
References: <1167905435.4888.18.camel@localhost>
Message-ID: <20070104101548.GA14945@clipper.ens.fr>

On Thu, Jan 04, 2007 at 11:10:35AM +0100, Lars Friedrich wrote:
> As I said, I tried matplotlib. At the moment I am playing around with
> VTK. I read about chaco. I know that there is gnuplot. Certainly there
> is also wx or something equivalent.
>
> Is there anyone doing something similar? What do you recommend?

MPL was good enough for me, but I was told that chaco is much faster than
MPL (it is part of its design goals). I doubt gnuplot will help much, here.
I suggest you try chaco (and report your experience here, please).

Gaël

From sgarcia at olfac.univ-lyon1.fr Thu Jan 4 05:28:37 2007
From: sgarcia at olfac.univ-lyon1.fr (Samuel GARCIA)
Date: Thu, 04 Jan 2007 11:28:37 +0100
Subject: [SciPy-user] Fast simple plotting
In-Reply-To: <1167905435.4888.18.camel@localhost>
References: <1167905435.4888.18.camel@localhost>
Message-ID: <459CD6D5.40600@olfac.univ-lyon1.fr>

If you want I have written an oscilloscope for mpl and pyqt4. But the code
is really dirty. I have tested it with a usb2 Measurement Computing card. It
works quite well for low sampling rate (16 channel 200Hz) and small window
(5s.) And I am doing something else at the same time (recording to a file
and trigger detection)

So for me mpl is good enough too.

What is the size of the data you want to refresh 5 frames per second ?

Sam

Lars Friedrich wrote:
> Hello everyone,
>
> I am using python/scipy to control some hardware using ctypes and some
> self-written dlls. Now I have the need to display some data online while
> doing other things. Until now I used matplotlib to plot everything, but
> this is too slow for the online display. What do you recommend?
>
> What I would like to do as a start is a small simple oscilloscope. I
> read data from a data acquisition card and plot it to a simple 2d-plot.
> I would like to reach roughly 5 frames per second but *being able to do
> something else at the same time*.
>
> As I said, I tried matplotlib. At the moment I am playing around with
> VTK. I read about chaco. I know that there is gnuplot. Certainly there
> is also wx or something equivalent.
>
> Is there anyone doing something similar? What do you recommend?
>
> Thanks for every comment
>
> Lars Friedrich

--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Samuel Garcia
Universite Claude Bernard LYON 1
CNRS - UMR5020, Laboratoire des Neurosciences et Systemes Sensoriels
50, avenue Tony Garnier
69366 LYON Cedex 07
FRANCE
Tél : 04 37 28 74 64
Fax : 04 37 28 76 01
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
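A rough sketch of the kind of polling loop under discussion, using
matplotlib's interactive mode; read_samples() is a hypothetical stand-in for
the ctypes acquisition call, not a real API:

    import time
    import numpy as N
    import pylab

    pylab.ion()                       # interactive mode, as discussed above
    line, = pylab.plot(N.zeros(1000))
    pylab.ylim(-1, 1)

    for frame in range(100):
        data = read_samples(1000)     # hypothetical: fetch one buffer from the card
        line.set_ydata(data)
        pylab.draw()                  # redraw the existing line in place
        time.sleep(0.2)               # roughly 5 frames per second

Updating an existing line with set_ydata and calling draw() is usually much
faster than calling plot() again for every frame.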
From psaade at gmail.com Thu Jan 4 07:53:20 2007
From: psaade at gmail.com (Philippe Saade)
Date: Thu, 4 Jan 2007 13:53:20 +0100
Subject: [SciPy-user] SciPy and xplt/gplt/plt issue : a Newbie question
Message-ID: <7ae676150701040453h2afa3f07s1eead397e19bcbb7@mail.gmail.com>

Hi everybody,

i'm a newbie in scientific computation and plotting. I need your help for a
*very* stupid problem : i'm trying to use python and the Scipy package (on
Ubuntu, python v2.4).

While reading and practicing the "Scipy Tutorial", i can't plot anything
because the plt/xplt/gplt modules are not known to my python.

I tried

    from scipy.xplt import *
    [...]

and many other apt-get install python-***

but i don't understand where the plotting functions are hidden...

Can anybody give me a hint

Thanks !

P.Saadé

From nwagner at iam.uni-stuttgart.de Thu Jan 4 08:11:19 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Thu, 04 Jan 2007 14:11:19 +0100
Subject: [SciPy-user] SciPy and xplt/gplt/plt issue : a Newbie question
In-Reply-To: <7ae676150701040453h2afa3f07s1eead397e19bcbb7@mail.gmail.com>
References: <7ae676150701040453h2afa3f07s1eead397e19bcbb7@mail.gmail.com>
Message-ID:

On Thu, 4 Jan 2007 13:53:20 +0100 "Philippe Saade" wrote:
> Hi everybody,
>
> i'm a newbie in scientific computation and plotting. I need your help
> for a *very* stupid problem : i'm trying to use python and the Scipy
> package (on Ubuntu, python v2.4).
>
> While reading and practicing the "Scipy Tutorial", i can't plot anything
> because the plt/xplt/gplt modules are not known to my python.
>
> I tried
> from scipy.xplt import *
> [...]
>
> and many other apt-get install python-***
>
> but i don't understand where the plotting functions are hidden...
>
> Can anybody give me a hint
>
> Thanks !
>
> P.Saadé

xplt is in the sandbox. IIRC it works only for 32-bit machines.

Anyway, matplotlib is recommended for 2-D plotting.

Nils

http://matplotlib.sourceforge.net/

From psaade at gmail.com Thu Jan 4 08:33:33 2007
From: psaade at gmail.com (Philippe Saade)
Date: Thu, 4 Jan 2007 14:33:33 +0100
Subject: [SciPy-user] SciPy and xplt/gplt/plt issue : a Newbie question
In-Reply-To:
References: <7ae676150701040453h2afa3f07s1eead397e19bcbb7@mail.gmail.com>
Message-ID: <7ae676150701040533w6c3c185cqb72cdfb41d6a6626@mail.gmail.com>

On 1/4/07, Nils Wagner wrote:
>
> xplt is in the sandbox.
> IIRC it works only for 32-bit machines.
>
> Anyway, matplotlib is recommended for
> 2-D plotting.
>
> Nils
>
> http://matplotlib.sourceforge.net/

Thanks for your answer.

I'm trying to write a course for my students using scipy, numpy and
matplotlib. But i'm a newbie and :
** scipy tutorial doesn't use matplotlib
** numpy doc is not complete/available (yes, i bought Travis Oliphant's book...)
** i'm slowly trying to translate all scipy tutorial's examples so i
   can rely solely on matplotlib...

I would be happy if i could use xplt, first....

About "sandbox" :
    In [4]: scipy.sandbox.
    scipy.sandbox.__class__           scipy.sandbox.__hash__            scipy.sandbox.__reduce_ex__
    scipy.sandbox.__delattr__         scipy.sandbox.__init__            scipy.sandbox.__repr__
    scipy.sandbox.__dict__            scipy.sandbox.__name__            scipy.sandbox.__setattr__
    scipy.sandbox.__doc__             scipy.sandbox.__new__             scipy.sandbox.__str__
    scipy.sandbox.__file__            scipy.sandbox.__path__
    scipy.sandbox.__getattribute__    scipy.sandbox.__reduce__

i don't see any xplt in sandbox....
is there a problem with the Ubuntu scipy package ?

P.Saadé

From nwagner at iam.uni-stuttgart.de Thu Jan 4 08:39:50 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Thu, 04 Jan 2007 14:39:50 +0100
Subject: [SciPy-user] SciPy and xplt/gplt/plt issue : a Newbie question
In-Reply-To: <7ae676150701040533w6c3c185cqb72cdfb41d6a6626@mail.gmail.com>
References: <7ae676150701040453h2afa3f07s1eead397e19bcbb7@mail.gmail.com> <7ae676150701040533w6c3c185cqb72cdfb41d6a6626@mail.gmail.com>
Message-ID:

On Thu, 4 Jan 2007 14:33:33 +0100 "Philippe Saade" wrote:
> Thanks for your answer.
>
> I'm trying to write a course for my students using scipy, numpy and
> matplotlib. But i'm a newbie and :
> ** scipy tutorial doesn't use matplotlib
> ** numpy doc is not complete/available (yes, i bought Travis Oliphant's
>    book...)
> ** i'm slowly trying to translate all scipy tutorial's examples so i
>    can rely solely on matplotlib...
>
> I would be happy if i could use xplt, first....
>
> About "sandbox" :
> In [4]: scipy.sandbox.
> scipy.sandbox.__class__           scipy.sandbox.__hash__            scipy.sandbox.__reduce_ex__
> scipy.sandbox.__delattr__         scipy.sandbox.__init__            scipy.sandbox.__repr__
> scipy.sandbox.__dict__            scipy.sandbox.__name__            scipy.sandbox.__setattr__
> scipy.sandbox.__doc__             scipy.sandbox.__new__             scipy.sandbox.__str__
> scipy.sandbox.__file__            scipy.sandbox.__path__
> scipy.sandbox.__getattribute__    scipy.sandbox.__reduce__
>
> i don't see any xplt in sandbox....
> is there a problem with the Ubuntu scipy package ?
>
> P.Saadé

I am not familiar with Ubuntu. If you build scipy via svn you can include
xplt in the build process by using a file called enabled_packages.txt with a
line containing xplt.

http://www.scipy.org/SciPySandbox

HTH,

Nils

From gael.varoquaux at normalesup.org Thu Jan 4 09:02:42 2007
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Thu, 4 Jan 2007 15:02:42 +0100
Subject: [SciPy-user] SciPy and xplt/gplt/plt issue : a Newbie question
In-Reply-To: <7ae676150701040533w6c3c185cqb72cdfb41d6a6626@mail.gmail.com>
References: <7ae676150701040453h2afa3f07s1eead397e19bcbb7@mail.gmail.com> <7ae676150701040533w6c3c185cqb72cdfb41d6a6626@mail.gmail.com>
Message-ID: <20070104140239.GB7576@clipper.ens.fr>

> ** i'm slowly trying to translate all scipy tutorial's examples so i
>    can rely solely on matplotlib...

I think that is the best solution. Someone (you, I, anyone who manages to
find the time) should translate this tutorial to use matplotlib (for a very
quick introduction, see http://scipy.org/Plotting_Tutorial). xplt is not a
terribly good solution compared to matplotlib, so one might as well start
straight ahead with matplotlib.

By the way, which tutorial are you talking about ?

Gaël
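Along the lines of the Plotting_Tutorial page Gael points to, a minimal
matplotlib replacement for a simple xplt-style line plot might look like
this (a sketch, assuming the pylab interface):

    import numpy as N
    import pylab

    x = N.linspace(0, 2 * N.pi, 100)
    pylab.plot(x, N.sin(x))
    pylab.xlabel('x')
    pylab.ylabel('sin(x)')
    pylab.show()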
From bldrake at adaptcs.com Thu Jan 4 09:40:40 2007
From: bldrake at adaptcs.com (Barry Drake)
Date: Thu, 4 Jan 2007 06:40:40 -0800 (PST)
Subject: [SciPy-user] Fast simple plotting
In-Reply-To: <459CD6D5.40600@olfac.univ-lyon1.fr>
Message-ID: <525182.28300.qm@web203.biz.mail.re2.yahoo.com>

Samuel,

I would be interested in your code. I plan to buy some of the hardware you
mentioned and set something up for a teaching lab. Thanks.

Barry Drake

Samuel GARCIA wrote:

If you want I have written an oscilloscope for mpl and pyqt4. But the code
is really dirty. I have tested it with a usb2 Measurement Computing card. It
works quite well for low sampling rate (16 channel 200Hz) and small window
(5s.) And I am doing something else at the same time (recording to a file
and trigger detection)

So for me mpl is good enough too.

What is the size of the data you want to refresh 5 frames per second ?

Sam

Lars Friedrich wrote:
> Hello everyone,
>
> I am using python/scipy to control some hardware using ctypes and some
> self-written dlls. Now I have the need to display some data online while
> doing other things. Until now I used matplotlib to plot everything, but
> this is too slow for the online display. What do you recommend?
>
> What I would like to do as a start is a small simple oscilloscope. I
> read data from a data acquisition card and plot it to a simple 2d-plot.
> I would like to reach roughly 5 frames per second but *being able to do
> something else at the same time*.
>
> As I said, I tried matplotlib. At the moment I am playing around with
> VTK. I read about chaco. I know that there is gnuplot. Certainly there
> is also wx or something equivalent.
>
> Is there anyone doing something similar? What do you recommend?

--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Samuel Garcia
Universite Claude Bernard LYON 1
CNRS - UMR5020, Laboratoire des Neurosciences et Systemes Sensoriels
50, avenue Tony Garnier
69366 LYON Cedex 07
FRANCE
Tél : 04 37 28 74 64
Fax : 04 37 28 76 01
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

An HTML attachment was scrubbed...
URL:

From pwang at enthought.com Thu Jan 4 09:54:00 2007
From: pwang at enthought.com (Peter Wang)
Date: Thu, 4 Jan 2007 08:54:00 -0600
Subject: [SciPy-user] Fast simple plotting
In-Reply-To: <1167905435.4888.18.camel@localhost>
References: <1167905435.4888.18.camel@localhost>
Message-ID: <1C4BBCED-8075-47C6-BB99-FF3840D994CD@enthought.com>

On Jan 4, 2007, at 4:10 AM, Lars Friedrich wrote:
> I am using python/scipy to control some hardware using ctypes and some
> self-written dlls. Now I have the need to display some data online while
> doing other things. Until now I used matplotlib to plot everything, but
> this is too slow for the online display. What do you recommend?
>
> What I would like to do as a start is a small simple oscilloscope. I
> read data from a data acquisition card and plot it to a simple 2d-plot.
> I would like to reach roughly 5 frames per second but *being able to do
> something else at the same time*.

Hi Lars,

Someone else asked a similar question about a data acquisition application
on enthought-dev a couple of weeks ago and I wrote up an example program
using Chaco for him.
Here is a link to his original email:

https://mail.enthought.com/pipermail/enthought-dev/2006-December/003810.html

Here is the source code of the application (140 lines):

https://svn.enthought.com/enthought/browser/trunk/src/lib/enthought/chaco2/examples/data_stream.py

And finally, a screenshot:

https://mail.enthought.com/pipermail/enthought-dev/attachments/20061221/63fa296e/attachment-0001.png

Note that in this demo application, it's updating the screen 10 times per
second, and it's plenty responsive. (It can easily do 50.)

-Peter

From hasslerjc at adelphia.net Thu Jan 4 10:04:09 2007
From: hasslerjc at adelphia.net (John Hassler)
Date: Thu, 04 Jan 2007 10:04:09 -0500
Subject: [SciPy-user] Fast simple plotting
In-Reply-To: <525182.28300.qm@web203.biz.mail.re2.yahoo.com>
References: <525182.28300.qm@web203.biz.mail.re2.yahoo.com>
Message-ID: <459D1769.9090301@adelphia.net>

An HTML attachment was scrubbed...
URL:

From bldrake at adaptcs.com Thu Jan 4 10:19:08 2007
From: bldrake at adaptcs.com (Barry Drake)
Date: Thu, 4 Jan 2007 07:19:08 -0800 (PST)
Subject: [SciPy-user] Fast simple plotting
In-Reply-To: <459D1769.9090301@adelphia.net>
Message-ID: <63336.76484.qm@web207.biz.mail.re2.yahoo.com>

John,

Thank you! I just downloaded all the software and ordered the temperature
and dew point sensor. It's certain I'll have questions after I get started.
Cheers!

Barry

John Hassler wrote:

You might want to look at PyUniversalLibrary, too:
http://www.its.caltech.edu/~astraw/pyul.html

john

Barry Drake wrote:

Samuel,
I would be interested in your code. I plan to buy some of the hardware you
mentioned and set something up for a teaching lab. Thanks.

Barry Drake

Samuel GARCIA wrote:

If you want I have written an oscilloscope for mpl and pyqt4. But the code
is really dirty. I have tested it with a usb2 Measurement Computing card. It
works quite well for low sampling rate (16 channel 200Hz) and small window
(5s.) And I am doing something else at the same time (recording to a file
and trigger detection)

So for me mpl is good enough too.

What is the size of the data you want to refresh 5 frames per second ?

Sam

Lars Friedrich wrote:
> Hello everyone,
>
> I am using python/scipy to control some hardware using ctypes and some
> self-written dlls. Now I have the need to display some data online while
> doing other things. Until now I used matplotlib to plot everything, but
> this is too slow for the online display. What do you recommend?
>
> What I would like to do as a start is a small simple oscilloscope. I
> read data from a data acquisition card and plot it to a simple 2d-plot.
> I would like to reach roughly 5 frames per second but *being able to do
> something else at the same time*.
>
> As I said, I tried matplotlib. At the moment I am playing around with
> VTK. I read about chaco. I know that there is gnuplot. Certainly there
> is also wx or something equivalent.
>
> Is there anyone doing something similar? What do you recommend?
>
> Thanks for every comment
>
> Lars Friedrich

--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Samuel Garcia
Universite Claude Bernard LYON 1
CNRS - UMR5020, Laboratoire des Neurosciences et Systemes Sensoriels
50, avenue Tony Garnier
69366 LYON Cedex 07
FRANCE
Tél : 04 37 28 74 64
Fax : 04 37 28 76 01
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

From vmas at carabos.com Thu Jan 4 11:46:09 2007
From: vmas at carabos.com (Vicent Mas (V+))
Date: Thu, 4 Jan 2007 17:46:09 +0100
Subject: [SciPy-user] CLAHE histogram
Message-ID: <200701041746.09956.vmas@carabos.com>

Hello list,

does anybody know if scipy provides some method for contrast-limited
adaptive histogram equalization (CLAHE) of grayscale images? It seems this
functionality is not available in the ndimage module.

Thanks in advance.

--
:: \ / Vicent Mas http://www.carabos.com
0;0 / \ Cárabos Coop. Enjoy Data
V V
" "

From rowen at cesmail.net Thu Jan 4 13:18:23 2007
From: rowen at cesmail.net (Russell E Owen)
Date: Thu, 04 Jan 2007 10:18:23 -0800
Subject: [SciPy-user] Fast simple plotting
References: <1167905435.4888.18.camel@localhost>
Message-ID:

In article <1167905435.4888.18.camel at localhost>, Lars Friedrich wrote:

> Hello everyone,
>
> I am using python/scipy to control some hardware using ctypes and some
> self-written dlls. Now I have the need to display some data online while
> doing other things. Until now I used matplotlib to plot everything, but
> this is too slow for the online display. What do you recommend?

When we ran into a similar problem we switched from matplotlib to HippoDraw.
It is very nice and very fast, but uses Qt, which is a big package.

-- Russell

From strawman at astraw.com Thu Jan 4 14:44:11 2007
From: strawman at astraw.com (Andrew Straw)
Date: Thu, 04 Jan 2007 11:44:11 -0800
Subject: [SciPy-user] Fast simple plotting
In-Reply-To: <459D1769.9090301@adelphia.net>
References: <525182.28300.qm@web203.biz.mail.re2.yahoo.com> <459D1769.9090301@adelphia.net>
Message-ID: <459D590B.8020504@astraw.com>

John Hassler wrote:
> You might want to look at PyUniversalLibrary, too:
> http://www.its.caltech.edu/~astraw/pyul.html
>
> john

FWIW, I just switched the webpage to
https://code.astraw.com/projects/PyUniversalLibrary/ (The old URL will just
redirect you to the above.)

Also, I put the svn repo online at
https://code.astraw.com/PyUniversalLibrary/trunk/

I hope that having the svn repo online will encourage people to send me
patches against svn HEAD. I don't have a lot of time to maintain this
myself... Also, I'll gladly give anyone write access once they've sent me a
couple of useful patches.

If anyone wants to build an oscilloscope app to go with PyUniversalLibrary,
that would be pretty cool and I'd be happy to include it with PyUL. I have
the start of such a thing at
https://code.astraw.com/projects/PyUniversalLibrary/file/trunk/examples/pyscope.py

This is matplotlib based, but I'm open to other toolkits.
-Andrew

From grante at visi.com Thu Jan 4 14:50:45 2007
From: grante at visi.com (Grant Edwards)
Date: Thu, 4 Jan 2007 19:50:45 +0000 (UTC)
Subject: [SciPy-user] Fast simple plotting
References: <525182.28300.qm@web203.biz.mail.re2.yahoo.com> <459D1769.9090301@adelphia.net> <459D590B.8020504@astraw.com>
Message-ID:

On 2007-01-04, Andrew Straw wrote:
> John Hassler wrote:
>> You might want to look at PyUniversalLibrary, too:
>> http://www.its.caltech.edu/~astraw/pyul.html
>
> FWIW, I just switched the webpage to
> https://code.astraw.com/projects/PyUniversalLibrary/ (The old URL will
> just redirect you to the above.)

A "universal library" that only works on one platform? 1/2 ;)

--
Grant Edwards                   grante             Yow! Everybody gets free
                                  at                 BORSCHT!
                                  visi.com

From lanceboyle at qwest.net Thu Jan 4 17:11:50 2007
From: lanceboyle at qwest.net (Jerry)
Date: Thu, 4 Jan 2007 15:11:50 -0700
Subject: [SciPy-user] Fast simple plotting
In-Reply-To: <1167905435.4888.18.camel@localhost>
References: <1167905435.4888.18.camel@localhost>
Message-ID:

Check out PLplot at http://plplot.sourceforge.net/index.html. It is a native
C library with Python bindings. Don't be put off by the rather crappy
graphical examples on the home page. I don't know how fast it is but I'd say
give it a try. It is a very active sourceforge project.

BTW, gnuplot can only plot data that is first written to disk (other than
that which it calculates itself) so it might be slower than you would like.

Jerry

On Jan 4, 2007, at 3:10 AM, Lars Friedrich wrote:
> Hello everyone,
>
> I am using python/scipy to control some hardware using ctypes and some
> self-written dlls. Now I have the need to display some data online while
> doing other things. Until now I used matplotlib to plot everything, but
> this is too slow for the online display. What do you recommend?
>
> What I would like to do as a start is a small simple oscilloscope. I
> read data from a data acquisition card and plot it to a simple 2d-plot.
> I would like to reach roughly 5 frames per second but *being able to do
> something else at the same time*.
>
> As I said, I tried matplotlib. At the moment I am playing around with
> VTK. I read about chaco. I know that there is gnuplot. Certainly there
> is also wx or something equivalent.
>
> Is there anyone doing something similar? What do you recommend?
>
> Thanks for every comment
>
> Lars Friedrich

From sransom at nrao.edu Thu Jan 4 17:25:55 2007
From: sransom at nrao.edu (Scott Ransom)
Date: Thu, 4 Jan 2007 17:25:55 -0500
Subject: [SciPy-user] Fast simple plotting
In-Reply-To:
References: <1167905435.4888.18.camel@localhost>
Message-ID: <200701041725.55647.sransom@nrao.edu>

Similarly, PGPLOT works great as well. Almost all of my published figures
are generated using it. There is a python module called ppgplot that works
with numeric and numarray, and only minor changes are needed to make it work
with numpy (I can send those changes if you want to check them out). It is a
fortran library with C-wrappers and python around that, but it is still
_very_ fast. I'm not sure how it handles Windows, though...

http://www.astro.caltech.edu/~tjp/pgplot/
http://efault.net/npat/hacks/ppgplot/

Scott

On Thursday 04 January 2007 17:11, Jerry wrote:
> Check out PLplot at http://plplot.sourceforge.net/index.html. It is a
> native C library with Python bindings. Don't be put off by the rather
> crappy graphical examples on the home page. I don't know how fast it
> is but I'd say give it a try. It is a very active sourceforge project.
> BTW, gnuplot can only plot data that is first written to disk (other
> than that which it calculates itself) so it might be slower than you
> would like.
>
> Jerry
>
> On Jan 4, 2007, at 3:10 AM, Lars Friedrich wrote:
>> Hello everyone,
>>
>> I am using python/scipy to control some hardware using ctypes and
>> some self-written dlls. Now I have the need to display some data
>> online while doing other things. Until now I used matplotlib to plot
>> everything, but this is too slow for the online display. What do you
>> recommend?
>>
>> What I would like to do as a start is a small simple oscilloscope. I
>> read data from a data acquisition card and plot it to a simple
>> 2d-plot. I would like to reach roughly 5 frames per second but *being
>> able to do something else at the same time*.
>>
>> As I said, I tried matplotlib. At the moment I am playing around
>> with VTK. I read about chaco. I know that there is gnuplot.
>> Certainly there is also wx or something equivalent.
>>
>> Is there anyone doing something similar? What do you recommend?
>>
>> Thanks for every comment
>>
>> Lars Friedrich

--
Scott M. Ransom            Address: NRAO
Phone: (434) 296-0320               520 Edgemont Rd.
email: sransom at nrao.edu         Charlottesville, VA 22903 USA
GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989

From grante at visi.com Thu Jan 4 17:40:42 2007
From: grante at visi.com (Grant Edwards)
Date: Thu, 4 Jan 2007 22:40:42 +0000 (UTC)
Subject: [SciPy-user] Fast simple plotting
References: <1167905435.4888.18.camel@localhost>
Message-ID:

On 2007-01-04, Jerry wrote:

> BTW, gnuplot can only plot data that is first written to disk

Not really true:

1) Gnuplot can be sent data through a pipe.

2) All decent OSes (and even a few crappy ones) cache filesystem data
   blocks in RAM, so small temp-files will rarely even hit a disk platter.

> (other than that which it calculates itself) so it might be
> slower than you would like.

In my experience gnuplot (pygnuplot) is pretty fast. I'm able to smoothly
animate rotation of 3-D wireframe plots containing hundreds of line segments
without any problems.

--
Grant Edwards                   grante             Yow! YOW!! I am having
                                  at                 FUN!!
                                  visi.com

From alan at ajackson.org Thu Jan 4 22:00:31 2007
From: alan at ajackson.org (Alan Jackson)
Date: Thu, 4 Jan 2007 21:00:31 -0600
Subject: [SciPy-user] NaN's in numpy (and Scipy)
Message-ID: <20070104210031.ffecdfaa.alan@ajackson.org>

I've been working with numpy (Numeric actually, we're a little behind), in a
framework that requires that we allow for NaN's to appear at random in our
arrays. We have handled that with masked arrays, but I am finding those
somewhat awkward to work with, and given that the NaN's will normally be
rare, it seems a shame to double the storage costs that way.

Any suggestions for a better overall strategy for dealing with NaN's?

--
-----------------------------------------------------------------------
| Alan K. Jackson          | To see a World in a Grain of Sand        |
| alan at ajackson.org        | And a Heaven in a Wild Flower,           |
| www.ajackson.org         | Hold Infinity in the palm of your hand   |
| Houston, Texas           | And Eternity in an hour.
-- Grant Edwards grante Yow! YOW!! I am having at FUN!! visi.com

From alan at ajackson.org Thu Jan 4 22:00:31 2007
From: alan at ajackson.org (Alan Jackson)
Date: Thu, 4 Jan 2007 21:00:31 -0600
Subject: [SciPy-user] NaN's in numpy (and Scipy)
Message-ID: <20070104210031.ffecdfaa.alan@ajackson.org>

I've been working with numpy (Numeric actually, we're a little behind), in a framework that requires that we allow for NaN's to appear at random in our arrays. We have handled that with masked arrays, but I am finding those somewhat awkward to work with, and given that the NaN's will normally be rare, it seems a shame to double the storage costs that way.

Any suggestions for a better overall strategy for dealing with NaN's?

-----------------------------------------------------------------------
| Alan K. Jackson            | To see a World in a Grain of Sand      |
| alan at ajackson.org          | And a Heaven in a Wild Flower,         |
| www.ajackson.org           | Hold Infinity in the palm of your hand |
| Houston, Texas             | And Eternity in an hour. - Blake       |
-----------------------------------------------------------------------

From robert.kern at gmail.com Thu Jan 4 22:37:08 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 04 Jan 2007 21:37:08 -0600
Subject: [SciPy-user] NaN's in numpy (and Scipy)
In-Reply-To: <20070104210031.ffecdfaa.alan@ajackson.org>
References: <20070104210031.ffecdfaa.alan@ajackson.org>
Message-ID: <459DC7E4.6090607@gmail.com>

Alan Jackson wrote:
> I've been working with numpy (Numeric actually, we're a little behind), in a
> framework that requires that we allow for NaN's to appear at random in our
> arrays. We have handled that with masked arrays, but I am finding those
> somewhat awkward to work with, and given that the NaN's will normally be
> rare, it seems a shame to double the storage costs that way.
>
> Any suggestions for a better overall strategy for dealing with NaN's?

It depends. What do you want to do with them? Why do they show up in your data? I.e. is your framework using them to represent missing data? or are they simply showing up from computations (0/0)?

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
  -- Umberto Eco

From lbolla at gmail.com Sat Jan 6 05:51:27 2007
From: lbolla at gmail.com (lorenzo bolla)
Date: Sat, 6 Jan 2007 11:51:27 +0100
Subject: [SciPy-user] Python EM Project
In-Reply-To: <80c99e790701060242y235d5f89h536e1675d54f08f3@mail.gmail.com>
References: <80c99e790701060242y235d5f89h536e1675d54f08f3@mail.gmail.com>
Message-ID: <80c99e790701060251w5ccf01abt946e8a36a3b264fe@mail.gmail.com>

Hi all,
I'm not sure if I am slightly off topic, but I can't find information anywhere else. Can someone tell me where the Python EM Project has gone? The website www.pythonemproject.com that used to host it is now a bunch of ads, property of DomainDrop. Also Robert Lytle, who seemed to be "responsible" for the project, cannot be contacted via e-mail.
Thank you!
Lorenzo Bolla.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From alan at ajackson.org Sat Jan 6 16:08:52 2007
From: alan at ajackson.org (Alan Jackson)
Date: Sat, 6 Jan 2007 15:08:52 -0600
Subject: [SciPy-user] NaN's in numpy (and Scipy)
In-Reply-To: <459DC7E4.6090607@gmail.com>
References: <20070104210031.ffecdfaa.alan@ajackson.org> <459DC7E4.6090607@gmail.com>
Message-ID: <20070106150852.c3c015fb.alan@ajackson.org>

On Thu, 04 Jan 2007 21:37:08 -0600 Robert Kern wrote:

> Alan Jackson wrote:
> > I've been working with numpy (Numeric actually, we're a little behind), in a
> > framework that requires that we allow for NaN's to appear at random in our
> > arrays. We have handled that with masked arrays, but I am finding those
> > somewhat awkward to work with, and given that the NaN's will normally be
> > rare, it seems a shame to double the storage costs that way.
> >
> > Any suggestions for a better overall strategy for dealing with NaN's?
>
> It depends. What do you want to do with them? Why do they show up in your data?
> I.e. is your framework using them to represent missing data? or are they simply
> showing up from computations (0/0)?

Missing data. Basically want to handle missing data as painlessly as possible without having to add a bunch of stuff every time a calculation is done.
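For example, ideally something along these lines would just work, with the missing entry simply skipped (a sketch using numpy.core.ma -- I haven't run this exact session):

>>> import numpy as N
>>> import numpy.core.ma as MA
>>> x = N.array([1., 2., N.nan, 4.])
>>> xm = MA.masked_array(x, mask=N.isnan(x))
>>> print MA.average(xm)    # masked entry ignored: (1+2+4)/3
2.33333333333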
> > --
> > Robert Kern
> >
> > "I have come to believe that the whole world is an enigma, a harmless enigma
> > that is made terrible by our own mad attempt to interpret it as though it had
> > an underlying truth."
> >   -- Umberto Eco
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

-----------------------------------------------------------------------
| Alan K. Jackson            | To see a World in a Grain of Sand      |
| alan at ajackson.org          | And a Heaven in a Wild Flower,         |
| www.ajackson.org           | Hold Infinity in the palm of your hand |
| Houston, Texas             | And Eternity in an hour. - Blake       |
-----------------------------------------------------------------------

From pgmdevlist at gmail.com Sat Jan 6 16:25:56 2007
From: pgmdevlist at gmail.com (Pierre GM)
Date: Sat, 6 Jan 2007 16:25:56 -0500
Subject: [SciPy-user] NaN's in numpy (and Scipy)
In-Reply-To: <20070106150852.c3c015fb.alan@ajackson.org>
References: <20070104210031.ffecdfaa.alan@ajackson.org> <459DC7E4.6090607@gmail.com> <20070106150852.c3c015fb.alan@ajackson.org>
Message-ID: <200701061625.56738.pgmdevlist@gmail.com>

On Saturday 06 January 2007 16:08, Alan Jackson wrote:
> Missing data. Basically want to handle missing data as painlessly as
> possible without having to add a bunch of stuff every time a calculation is
> done.

Alan,
Could you be a bit more specific about what you mean by this "bunch of stuff"? What are your complaints about the current implementation of masked arrays?
Assuming you have NaN in your data, you can get a masked array as easily as that:
>>> import numpy as N
>>> import numpy.core.ma as MA
>>> x = N.array([1,2,N.nan,4])
>>> X = MA.masked_array(x, mask=N.isnan(x))
>>> X
array(data =
 [ 1.00000000e+00 2.00000000e+00 1.00000000e+20 4.00000000e+00],
 mask =
 [False False True False],
 fill_value=1e+20)

And then you can play with X.

A few months ago, I ran into some problems while trying to subclass MaskedArray. I rewrote most of numpy.core.ma to solve my particular issues. This new implementation is available on the scipy SVN server, in the sandbox/maskedarray folder. I'd be glad if you could give it a try, so that I could improve it.

From v-nijs at kellogg.northwestern.edu Sat Jan 6 17:14:00 2007
From: v-nijs at kellogg.northwestern.edu (Vincent Nijs)
Date: Sat, 06 Jan 2007 16:14:00 -0600
Subject: [SciPy-user] NaN's in numpy (and Scipy)
In-Reply-To: <200701061625.56738.pgmdevlist@gmail.com>
Message-ID:

It may be relevant to note that 'isnan' only seems to work with floats. If you change Pierre's example a bit, mask creation doesn't work as I might expect. If you create the array as follows

>>> x = N.array([1,2,N.nan,4],dtype='int16')

or

>>> x = N.arange(1,5)
>>> x[2] = N.nan

Then x = [1,2,0,4] and isnan(x) = [False,False,False,False]

It might be convenient if an array would be automatically up-cast to float if a nan is present.

Vincent

On 1/6/07 3:25 PM, "Pierre GM" wrote:

> On Saturday 06 January 2007 16:08, Alan Jackson wrote:
>> Missing data. Basically want to handle missing data as painlessly as
>> possible without having to add a bunch of stuff every time a calculation is
>> done.
>
> Alan,
> Could you be a bit more specific about what you mean by this "bunch of stuff"?
> What are your complaints about the current implementation of masked arrays?
> Assuming you have NaN in your data, you can get a masked array as easily as
> that:
>>>> import numpy as N
>>>> import numpy.core.ma as MA
>>>> x = N.array([1,2,N.nan,4])
>>>> X = MA.masked_array(x, mask=N.isnan(x))
>>>> X
> array(data =
> [ 1.00000000e+00 2.00000000e+00 1.00000000e+20 4.00000000e+00],
> mask =
> [False False True False],
> fill_value=1e+20)
>
> And then you can play with X.
>
> A few months ago, I ran into some problems while trying to subclass
> MaskedArray. I rewrote most of numpy.core.ma to solve my particular issues.
> This new implementation is available on the scipy SVN server, in the
> sandbox/maskedarray folder. I'd be glad if you could give it a try, so that I
> could improve it.
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From peridot.faceted at gmail.com Sat Jan 6 17:28:42 2007
From: peridot.faceted at gmail.com (A. M. Archibald)
Date: Sat, 6 Jan 2007 17:28:42 -0500
Subject: [SciPy-user] NaN's in numpy (and Scipy)
In-Reply-To:
References: <200701061625.56738.pgmdevlist@gmail.com>
Message-ID:

On 06/01/07, Vincent Nijs wrote:
> It may be relevant to note that 'isnan' only seems to work with floats. If
> you change Pierre's example a bit, mask creation doesn't work as I might
> expect. If you create the array as follows
>
> >>> x = N.array([1,2,N.nan,4],dtype='int16')

[...]

> It might be convenient if an array would be automatically up-cast to float
> if a nan is present.

This is in fact the normal behaviour of numpy: if there's a floating-point number in the array when you create it, the array gets upcast - unless you specify the data type, forcing it to a particular type. Which you did.

The unavailability of NaNs for integer types is sometimes unfortunate, but there's really nothing numpy can do about it.

A. M. Archibald

From v-nijs at kellogg.northwestern.edu Sat Jan 6 18:47:28 2007
From: v-nijs at kellogg.northwestern.edu (Vincent Nijs)
Date: Sat, 06 Jan 2007 17:47:28 -0600
Subject: [SciPy-user] NaN's in numpy (and Scipy)
In-Reply-To:
Message-ID:

I see. But when you assign a float (e.g., nan or 0.1) to an element of an array filled with 'int16' elements the array is not up-cast but the element is down-cast. Thanks for the clarification!

Vincent

On 1/6/07 4:28 PM, "A. M. Archibald" wrote:

> On 06/01/07, Vincent Nijs wrote:
>> It may be relevant to note that 'isnan' only seems to work with floats. If
>> you change Pierre's example a bit, mask creation doesn't work as I might
>> expect. If you create the array as follows
>>
>>>>> x = N.array([1,2,N.nan,4],dtype='int16')
>
> [...]
>
>> It might be convenient if an array would be automatically up-cast to float
>> if a nan is present.
>
> This is in fact the normal behaviour of numpy: if there's a
> floating-point number in the array when you create it, the array gets
> upcast - unless you specify the data type, forcing it to a particular
> type. Which you did.
>
> The unavailability of NaNs for integer types is sometimes unfortunate,
> but there's really nothing numpy can do about it.
>
> A. M. Archibald
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From alan at ajackson.org Sat Jan 6 19:22:42 2007
From: alan at ajackson.org (Alan Jackson)
Date: Sat, 6 Jan 2007 18:22:42 -0600
Subject: [SciPy-user] NaN's in numpy (and Scipy)
In-Reply-To: <200701061625.56738.pgmdevlist@gmail.com>
References: <20070104210031.ffecdfaa.alan@ajackson.org> <459DC7E4.6090607@gmail.com> <20070106150852.c3c015fb.alan@ajackson.org> <200701061625.56738.pgmdevlist@gmail.com>
Message-ID: <20070106182242.969dd57b.alan@ajackson.org>

On Sat, 6 Jan 2007 16:25:56 -0500 Pierre GM wrote:

> On Saturday 06 January 2007 16:08, Alan Jackson wrote:
> > Missing data. Basically want to handle missing data as painlessly as
> > possible without having to add a bunch of stuff every time a calculation is
> > done.
>
> Alan,
> Could you be a bit more specific about what you mean by this "bunch of stuff"?
> What are your complaints about the current implementation of masked arrays?

Well for one it seems a shame to carry around the extra storage when 99.9% of my data is not missing.

It may be my own inexperience with python, numpy, and masked arrays, but it seemed more complex than necessary to do the following. I wanted to take an array and find the forward (or is it backward? one-sided anyway) difference, and then append the final element so the output array had the same size as the input. With normal arrays this could be done with

b = concatenate((a[1:]-a[:-1],[a[-1]-a[-2]] ))

or something close to that, I'm writing from memory. With masked arrays, you seem to lose the 'maskedness' when they are only one-d, so the concatenate fails and you have to fiddle about a bit to get it to work right. In my mind, it should "just work", so that the code looks the same in either case, but I couldn't get it to do that.

Perhaps the new implementation has taken care of that. If so, that would take care of a lot of my pain. I'm still pushing my IT support guys to install the latest release.

Hmmm... just tried it and it seems to work. I definitely need to start using a newer version at work. I really hate that my home system is so much more up to date. I'll get back to you!

Thanks!

> Assuming you have NaN in your data, you can get a masked array as easily as
> that:
> >>> import numpy as N
> >>> import numpy.core.ma as MA
> >>> x = N.array([1,2,N.nan,4])
> >>> X = MA.masked_array(x, mask=N.isnan(x))
> >>> X
> array(data =
> [ 1.00000000e+00 2.00000000e+00 1.00000000e+20 4.00000000e+00],
> mask =
> [False False True False],
> fill_value=1e+20)
>
> And then you can play with X.
>
> A few months ago, I ran into some problems while trying to subclass
> MaskedArray. I rewrote most of numpy.core.ma to solve my particular issues.
> This new implementation is available on the scipy SVN server, in the
> sandbox/maskedarray folder. I'd be glad if you could give it a try, so that I
> could improve it.
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

-----------------------------------------------------------------------
| Alan K. Jackson            | To see a World in a Grain of Sand      |
| alan at ajackson.org          | And a Heaven in a Wild Flower,         |
| www.ajackson.org           | Hold Infinity in the palm of your hand |
| Houston, Texas             | And Eternity in an hour. - Blake       |
-----------------------------------------------------------------------
From pgmdevlist at gmail.com Sat Jan 6 19:52:50 2007
From: pgmdevlist at gmail.com (Pierre GM)
Date: Sat, 6 Jan 2007 19:52:50 -0500
Subject: [SciPy-user] NaN's in numpy (and Scipy)
In-Reply-To:
References:
Message-ID: <200701061952.50496.pgmdevlist@gmail.com>

On Saturday 06 January 2007 18:47, Vincent Nijs wrote:
> I see. But when you assign a float (e.g., nan or 0.1) to an element of an
> array filled with 'int16' elements the array is not up-cast but the element
> is down-cast.

True. When you create an array, you select a particular area of memory. How much depends on the shape of the array, and its dtype. Picture building a checkerboard with a given number of rows and columns (the shape), and with a fixed cell size (the dtype). Once the array is created, you can't fill a cell with some data larger than the cell size: the data has to be shrunk to fit.

So, once you create an array of int_, any float/complex you add to it will be shrunk to int_, because the memory space has been already allocated for int_ and nothing larger. If you really want another larger tile size, then you have to create a new array from scratch. That's what you do by using 'astype'.

In our particular case: If you expect missing data as NaN, then use float_ as a default (which most of the numpy functions do). Note that you're not limited to NaN for the missing data: anything will do:

>>> x = N.array([1,2,-99,4])
>>> x = MA.array(x, mask=(x==-99))

will work, and so will

>>> x = MA.masked_values([1,2,-99,4], -99)

From pgmdevlist at gmail.com Sat Jan 6 19:56:56 2007
From: pgmdevlist at gmail.com (Pierre GM)
Date: Sat, 6 Jan 2007 19:56:56 -0500
Subject: [SciPy-user] NaN's in numpy (and Scipy)
In-Reply-To: <20070106182242.969dd57b.alan@ajackson.org>
References: <20070104210031.ffecdfaa.alan@ajackson.org> <200701061625.56738.pgmdevlist@gmail.com> <20070106182242.969dd57b.alan@ajackson.org>
Message-ID: <200701061956.56808.pgmdevlist@gmail.com>

On Saturday 06 January 2007 19:22, Alan Jackson wrote:
> Well for one it seems a shame to carry around the extra storage when 99.9%
> of my data is not missing.

Booleans aren't that large.

> With normal arrays this could be done with
> b = concatenate((a[1:]-a[:-1],[a[-1]-a[-2]] )) or something close to that,
> I'm writing from memory. With masked arrays, you seem to lose the
> 'maskedness' when they are only one-d, so the concatenate fails and you
> have to fiddle about a bit to get it to work right. In my mind, it should
> "just work", so that the code looks the same in either case, but I couldn't
> get it to do that.

Are you sure you use the concatenate function from numpy.core.ma? And not the one from numpy?

Because if you do use the one from numpy, you get a warning that the data is masked in one or more locations, and you end up with a basic ndarray, True.
In the new implementation, you raise an exception (not a helpful one, but I can't control that):

>>> import numpy as N
>>> import numpy.core.ma as ma
>>> x = ma.array([1,2,3], mask=[0,0,1])
>>> print ma.concatenate([x,x])
[1 2 -- 1 2 --]
>>> N.concatenate([x,x])
[1 2 3 1 2 3]

Now, with the new implementation:

>>> import maskedarray as MA
>>> x = MA.array([1,2,3], mask=[0,0,1])
>>> print MA.concatenate([x,x])
[1 2 -- 1 2 --]
>>> N.concatenate([x,x])
AttributeError: 'NoneType' object has no attribute 'shape'

From v-nijs at kellogg.northwestern.edu Sat Jan 6 22:50:07 2007
From: v-nijs at kellogg.northwestern.edu (v-nijs at kellogg.northwestern.edu)
Date: Sun, 07 Jan 2007 3:50:07 +0000
Subject: [SciPy-user] NaN's in numpy (and Scipy)
Message-ID: <20070107035007.DE5C523@lulu.it.northwestern.edu>

An embedded and charset-unspecified text was scrubbed...
Name: not available
URL:

From v-nijs at kellogg.northwestern.edu Sat Jan 6 22:51:08 2007
From: v-nijs at kellogg.northwestern.edu (v-nijs at kellogg.northwestern.edu)
Date: Sun, 07 Jan 2007 3:51:08 +0000
Subject: [SciPy-user] ediff1d and NaN's
Message-ID: <20070107035108.EE38F64@lulu.it.northwestern.edu>

An embedded and charset-unspecified text was scrubbed...
Name: not available
URL:

From pgmdevlist at gmail.com Sat Jan 6 23:15:56 2007
From: pgmdevlist at gmail.com (Pierre GM)
Date: Sat, 6 Jan 2007 23:15:56 -0500
Subject: [SciPy-user] ediff1d and NaN's
In-Reply-To: <20070107035108.EE38F64@lulu.it.northwestern.edu>
References: <20070107035108.EE38F64@lulu.it.northwestern.edu>
Message-ID: <200701062315.57045.pgmdevlist@gmail.com>

On Saturday 06 January 2007 22:51, v-nijs at kellogg.northwestern.edu wrote:
> In the code for ediff1d, to_end, append() is used which up-casts the array.
> In the code for to_begin, insert() is used which does not up-cast the
> array.

Indeed. `append` calls `concatenate`, which will create a new array and will use the largest dtype (in your example, float).

I was thinking about how to code it for maskedarray, and was naturally going for creating an empty array and filling it as needed. In that case, the result would have the same dtype as the initial array, no matter what you specify for to_begin/to_end. Should that be the case ?
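For reference, here is the rough sketch I have in mind (untested; to_begin/to_end are assumed to be scalars here):

import numpy as N
import maskedarray as MA

def mediff1d(a, to_begin=None, to_end=None):
    """Untested sketch of a masked ediff1d that keeps the input's dtype."""
    a = MA.asarray(a)
    n = a.size - 1
    nb = 0
    if to_begin is not None:
        nb = 1
    ne = 0
    if to_end is not None:
        ne = 1
    # allocate the result with the dtype of `a`, then fill it in place
    out = MA.masked_array(N.empty(n + nb + ne, dtype=a.dtype))
    if nb:
        out[0] = to_begin
    out[nb:nb+n] = a[1:] - a[:-1]
    if ne:
        out[-1] = to_end
    return out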
From alan at ajackson.org Sun Jan 7 21:08:08 2007
From: alan at ajackson.org (Alan Jackson)
Date: Sun, 7 Jan 2007 20:08:08 -0600
Subject: [SciPy-user] NaN's in numpy (and Scipy)
In-Reply-To: <20070107035007.DE5C523@lulu.it.northwestern.edu>
References: <20070107035007.DE5C523@lulu.it.northwestern.edu>
Message-ID: <20070107200808.a8009006.alan@ajackson.org>

Thanks! That looks like it could be what I need.

On Sun, 07 Jan 2007 3:50:07 +0000 v-nijs at kellogg.northwestern.edu wrote:

> Alan,
>
> There is an ediff1d() function in numpy that you could use.
>
> >>> b = numpy.ediff1d(a,to_end=a[-1]-a[-2])
>
> Or if you want to specify nan's at the beginning (or end):
>
> >>> b = numpy.ediff1d(a,to_begin=numpy.nan)
>
> Not sure what that will do with your masked-arrays however.
>
> I need this function in my own time-series program as well so I was happy to find this function myself.
>
> Vincent
>
> ==============Original message text===============
> On Sun, 07 Jan 2007 12:56:56 am +0000 Pierre GM wrote:
>
> On Saturday 06 January 2007 19:22, Alan Jackson wrote:
> > Well for one it seems a shame to carry around the extra storage when 99.9%
> > of my data is not missing.
>
> Booleans aren't that large.
>
> > With normal arrays this could be done with
> > b = concatenate((a[1:]-a[:-1],[a[-1]-a[-2]] )) or something close to that,
> > I'm writing from memory. With masked arrays, you seem to lose the
> > 'maskedness' when they are only one-d, so the concatenate fails and you
> > have to fiddle about a bit to get it to work right. In my mind, it should
> > "just work", so that the code looks the same in either case, but I couldn't
> > get it to do that.
>
> Are you sure you use the concatenate function from numpy.core.ma?
> And not the one from numpy?
>
> Because if you do use the one from numpy, you get a warning that the data is
> masked in one or more locations, and you end up with a basic ndarray, True. In
> the new implementation, you raise an exception (not a helpful one, but I
> can't control that)
>
> >>> import numpy as N
> >>> import numpy.core.ma as ma
> >>> x = ma.array([1,2,3], mask=[0,0,1])
> >>> print ma.concatenate([x,x])
> [1 2 -- 1 2 --]
> >>> N.concatenate([x,x])
> [1 2 3 1 2 3]
>
> Now, with the new implementation
> >>> import maskedarray as MA
> >>> x = MA.array([1,2,3], mask=[0,0,1])
> >>> print MA.concatenate([x,x])
> [1 2 -- 1 2 --]
> >>> N.concatenate([x,x])
> AttributeError: 'NoneType' object has no attribute 'shape'
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
> ===========End of original message text===========

-----------------------------------------------------------------------
| Alan K. Jackson            | To see a World in a Grain of Sand      |
| alan at ajackson.org          | And a Heaven in a Wild Flower,         |
| www.ajackson.org           | Hold Infinity in the palm of your hand |
| Houston, Texas             | And Eternity in an hour. - Blake       |
-----------------------------------------------------------------------

From lanceboyle at qwest.net Mon Jan 8 23:39:44 2007
From: lanceboyle at qwest.net (Jerry)
Date: Mon, 8 Jan 2007 21:39:44 -0700
Subject: [SciPy-user] Fast simple plotting
In-Reply-To:
References: <1167905435.4888.18.camel@localhost>
Message-ID:

Thanks for the quasi-correction and gnuplot speed report.

Jerry

On Jan 4, 2007, at 3:40 PM, Grant Edwards wrote:

> On 2007-01-04, Jerry wrote:
>
>> BTW, gnuplot can only plot data that is first written to disk
>
> Not really true:
>
> 1) Gnuplot can be sent data through a pipe.
>
> 2) All decent OSes (and even a few crappy ones) cache
>    filesystem data blocks in RAM, so small temp-files will
>    rarely even hit a disk platter.
>
>> (other than that which it calculates itself) so it might be
>> slower than you would like.
>
> In my experience gnuplot (pygnuplot) is pretty fast. I'm able
> to smoothly animate rotation of 3-D wireframe plots containing
> hundreds of line segments without any problems.
>

From nwagner at iam.uni-stuttgart.de Tue Jan 9 03:59:18 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 09 Jan 2007 09:59:18 +0100
Subject: [SciPy-user] Found 397 tests for scipy.ndimage
Message-ID: <45A35966.8010907@iam.uni-stuttgart.de>

Hi,

I have installed scipy from svn on different machines. Sometimes I can't see the tests for scipy.ndimage, but for what reason ? Any idea ?
Found 128 tests for scipy.linalg.fblas Found 397 tests for scipy.ndimage Found 10 tests for scipy.integrate.quadpack Found 128 tests for scipy.linalg.fblas Found 10 tests for scipy.integrate.quadpack Found 98 tests for scipy.stats.stats Nils From robert.kern at gmail.com Tue Jan 9 04:02:31 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 09 Jan 2007 03:02:31 -0600 Subject: [SciPy-user] Found 397 tests for scipy.ndimage In-Reply-To: <45A35966.8010907@iam.uni-stuttgart.de> References: <45A35966.8010907@iam.uni-stuttgart.de> Message-ID: <45A35A27.7040200@gmail.com> Nils Wagner wrote: > Hi, > > I have installed scipy from svn on different machines. > Sometimes I can't see the tests for scipy.ndimage, bur for what reason ? ndimage is still broken on 64-bit platforms, so we don't import it. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From nwagner at iam.uni-stuttgart.de Tue Jan 9 04:07:01 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 09 Jan 2007 10:07:01 +0100 Subject: [SciPy-user] Found 397 tests for scipy.ndimage In-Reply-To: <45A35A27.7040200@gmail.com> References: <45A35966.8010907@iam.uni-stuttgart.de> <45A35A27.7040200@gmail.com> Message-ID: <45A35B35.1010801@iam.uni-stuttgart.de> Robert Kern wrote: > Nils Wagner wrote: > >> Hi, >> >> I have installed scipy from svn on different machines. >> Sometimes I can't see the tests for scipy.ndimage, bur for what reason ? >> > > ndimage is still broken on 64-bit platforms, so we don't import it. > > But I found the 397 tests for scipy.ndimage on processor : 0 vendor_id : AuthenticAMD cpu family : 15 model : 75 model name : AMD Athlon(tm) 64 X2 Dual Core Processor 4200+ stepping : 2 cpu MHz : 2204.600 cache size : 512 KB physical id : 0 siblings : 2 core id : 0 cpu cores : 2 fpu : yes fpu_exception : yes cpuid level : 1 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt lm 3dnowext 3dnow pni cx16 lahf_lm cmp_legacy svm cr8_legacy bogomips : 4417.22 TLB size : 1024 4K pages clflush size : 64 cache_alignment : 64 address sizes : 40 bits physical, 48 bits virtual power management: ts fid vid ttp tm stc processor : 1 vendor_id : AuthenticAMD cpu family : 15 model : 75 model name : AMD Athlon(tm) 64 X2 Dual Core Processor 4200+ stepping : 2 cpu MHz : 2204.600 cache size : 512 KB physical id : 0 siblings : 2 core id : 1 cpu cores : 2 fpu : yes fpu_exception : yes cpuid level : 1 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt lm 3dnowext 3dnow pni cx16 lahf_lm cmp_legacy svm cr8_legacy bogomips : 4409.72 TLB size : 1024 4K pages clflush size : 64 cache_alignment : 64 address sizes : 40 bits physical, 48 bits virtual power management: ts fid vid ttp tm stc Linux marilyn 2.6.16.27-0.6-smp #1 SMP Wed Dec 13 09:34:50 UTC 2006 x86_64 x86_64 x86_64 GNU/Linux Am I missing something ? 
Nils

From robert.kern at gmail.com Tue Jan 9 04:28:03 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 09 Jan 2007 03:28:03 -0600
Subject: [SciPy-user] Found 397 tests for scipy.ndimage
In-Reply-To: <45A35B35.1010801@iam.uni-stuttgart.de>
References: <45A35966.8010907@iam.uni-stuttgart.de> <45A35A27.7040200@gmail.com> <45A35B35.1010801@iam.uni-stuttgart.de>
Message-ID: <45A36023.3000101@gmail.com>

Nils Wagner wrote:
> Robert Kern wrote:
>> Nils Wagner wrote:
>>
>>> Hi,
>>>
>>> I have installed scipy from svn on different machines.
>>> Sometimes I can't see the tests for scipy.ndimage, but for what reason ?
>>>
>> ndimage is still broken on 64-bit platforms, so we don't import it.
>>
> But I found the 397 tests for scipy.ndimage on [a 64bit machine]
> Am I missing something ?

No, sorry, I was wrong. On which machines do you get the tests, and on which do you not?

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
  -- Umberto Eco

From robert.kern at gmail.com Tue Jan 9 04:29:18 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 09 Jan 2007 03:29:18 -0600
Subject: [SciPy-user] Found 397 tests for scipy.ndimage
In-Reply-To: <45A35966.8010907@iam.uni-stuttgart.de>
References: <45A35966.8010907@iam.uni-stuttgart.de>
Message-ID: <45A3606E.10809@gmail.com>

Nils Wagner wrote:
> Hi,
>
> I have installed scipy from svn on different machines.
> Sometimes I can't see the tests for scipy.ndimage, but for what reason ?

Can you explicitly run the scipy.ndimage tests on each of the machines?

$ python -c "from numpy.testing import NumpyTest; NumpyTest('scipy.ndimage').test()"

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
  -- Umberto Eco

From nwagner at iam.uni-stuttgart.de Tue Jan 9 04:34:00 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 09 Jan 2007 10:34:00 +0100
Subject: [SciPy-user] Found 397 tests for scipy.ndimage
In-Reply-To: <45A3606E.10809@gmail.com>
References: <45A35966.8010907@iam.uni-stuttgart.de> <45A3606E.10809@gmail.com>
Message-ID: <45A36188.2080302@iam.uni-stuttgart.de>

Robert Kern wrote:
> Nils Wagner wrote:
>
>> Hi,
>>
>> I have installed scipy from svn on different machines.
>> Sometimes I can't see the tests for scipy.ndimage, but for what reason ?
>>
>
> Can you explicitly run the scipy.ndimage tests on each of the machines?
>
> $ python -c "from numpy.testing import NumpyTest; NumpyTest('scipy.ndimage').test()"
>

No I cannot. Here are the results...

python -c "from numpy.testing import NumpyTest; NumpyTest('scipy.ndimage').test()"
Traceback (most recent call last):
  File "<string>", line 1, in ?
  File "/usr/lib64/python2.4/site-packages/numpy/testing/numpytest.py", line 437, in test
    exec 'import %s as this_package' % (self.package)
  File "<string>", line 1, in ?
ImportError: No module named ndimage /home/nwagner> uname -a Linux lisa 2.6.13-15.13-default #1 Tue Nov 28 13:43:50 UTC 2006 x86_64 x86_64 x86_64 GNU/Linux python -c "from numpy.testing import NumpyTest; NumpyTest('scipy.ndimage').test()" Found 397 tests for scipy.ndimage Found 0 tests for __main__ ............................................................................................................................................................................................................................................................................................................................................................................................................. ---------------------------------------------------------------------- Ran 397 tests in 0.554s OK /home/nwagner> uname -a Linux marilyn 2.6.16.27-0.6-smp #1 SMP Wed Dec 13 09:34:50 UTC 2006 x86_64 x86_64 x86_64 GNU/Linux python -c "from numpy.testing import NumpyTest; NumpyTest('scipy.ndimage').test()" Found 397 tests for scipy.ndimage Found 0 tests for __main__ ............................................................................................................................................................................................................................................................................................................................................................................................................. ---------------------------------------------------------------------- Ran 397 tests in 1.886s OK /home/nwagner> uname -a Linux amanda 2.6.11.4-21.15-default #1 Tue Nov 28 13:39:58 UTC 2006 i686 athlon i386 GNU/Linux From robert.kern at gmail.com Tue Jan 9 04:42:41 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 09 Jan 2007 03:42:41 -0600 Subject: [SciPy-user] Found 397 tests for scipy.ndimage In-Reply-To: <45A36188.2080302@iam.uni-stuttgart.de> References: <45A35966.8010907@iam.uni-stuttgart.de> <45A3606E.10809@gmail.com> <45A36188.2080302@iam.uni-stuttgart.de> Message-ID: <45A36391.2000507@gmail.com> Nils Wagner wrote: > Robert Kern wrote: >> Nils Wagner wrote: >> >>> Hi, >>> >>> I have installed scipy from svn on different machines. >>> Sometimes I can't see the tests for scipy.ndimage, bur for what reason ? >>> >> Can you explicitly run the scipy.ndimage tests on each of the machines? >> >> $ python -c "from numpy.testing import NumpyTest; NumpyTest('scipy.ndimage').test()" >> > No I cannot. Here are the results... > > python -c "from numpy.testing import NumpyTest; > NumpyTest('scipy.ndimage').test()" > Traceback (most recent call last): > File "", line 1, in ? > File "/usr/lib64/python2.4/site-packages/numpy/testing/numpytest.py", > line 437, in test > exec 'import %s as this_package' % (self.package) > File "", line 1, in ? > ImportError: No module named ndimage Well, that's why. Look at how you built scipy on that machine and try to figure out how the ndimage package didn't get built (or installed). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
  -- Umberto Eco

From jens_brandenburg at gmx.net Tue Jan 9 09:18:50 2007
From: jens_brandenburg at gmx.net (Jens Brandenburg)
Date: Tue, 09 Jan 2007 15:18:50 +0100
Subject: [SciPy-user] NumPy and SciPy installation
Message-ID: <45A3A44A.4060501@gmx.net>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Hello,

I am trying to install numpy on 64bit Linux platform (SuSE 10.2). It doesn't work out of the box... But I am sure that it's not a big deal or that the problem has been solved some time ago. Due to lack of time, I hope that by asking the experts I can get it running as fast as possible...

This is what the inst.log looks like:

Running from numpy source directory.
Traceback (most recent call last):
  File "setup.py", line 89, in <module>
    setup_package()
  File "setup.py", line 59, in setup_package
    from numpy.distutils.core import setup
  File "/home/CPFS/brandenb/downloads/numpy/numpy/distutils/core.py", line 24, in <module>
    from numpy.distutils.command import build_ext
  File "/home/CPFS/brandenb/downloads/numpy/numpy/distutils/command/build_ext.py", line 16, in <module>
    from numpy.distutils.system_info import combine_paths
  File "/home/CPFS/brandenb/downloads/numpy/numpy/distutils/system_info.py", line 159, in <module>
    so_ext = distutils.sysconfig.get_config_vars('SO')[0] or ''
  File "/usr/lib64/python2.5/distutils/sysconfig.py", line 493, in get_config_vars
    func()
  File "/usr/lib64/python2.5/distutils/sysconfig.py", line 352, in _init_posix
    raise DistutilsPlatformError(my_msg)
distutils.errors.DistutilsPlatformError: invalid Python installation: unable to open /usr/lib64/python2.5/config/Makefile (No such file or directory)

Thank you a lot in advance!

Cheers,

Jens Brandenburg

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.5 (GNU/Linux)
Comment: Using GnuPG with SUSE - http://enigmail.mozdev.org

iD8DBQFFo6RJ/2KgYWbwWI4RAq3xAJ9B0kBIWOuN1IRsKhc0x9kSUQsPSACbBQHk
KHklOrbLL61VclOY5A9EFuM=
=Gu2V
-----END PGP SIGNATURE-----

From nwagner at iam.uni-stuttgart.de Tue Jan 9 09:53:12 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 09 Jan 2007 15:53:12 +0100
Subject: [SciPy-user] NumPy and SciPy installation
In-Reply-To: <45A3A44A.4060501@gmx.net>
References: <45A3A44A.4060501@gmx.net>
Message-ID: <45A3AC58.1060300@iam.uni-stuttgart.de>

Jens Brandenburg wrote:
> Hello,
>
> I am trying to install numpy on 64bit Linux platform (SuSE 10.2). It
> doesn't work out of the box... But I am sure that it's not a big deal
> or that the problem has been solved some time ago. Due to lack of
> time, I hope that by asking the experts I can get it running as fast
> as possible...
>
> This is what the inst.log looks like:
>
> Running from numpy source directory.
> Traceback (most recent call last):
>   File "setup.py", line 89, in <module>
>     setup_package()
>   File "setup.py", line 59, in setup_package
>     from numpy.distutils.core import setup
>   File "/home/CPFS/brandenb/downloads/numpy/numpy/distutils/core.py", line 24, in <module>
>     from numpy.distutils.command import build_ext
>   File "/home/CPFS/brandenb/downloads/numpy/numpy/distutils/command/build_ext.py", line 16, in <module>
>     from numpy.distutils.system_info import combine_paths
>   File "/home/CPFS/brandenb/downloads/numpy/numpy/distutils/system_info.py", line 159, in <module>
>     so_ext = distutils.sysconfig.get_config_vars('SO')[0] or ''
>   File "/usr/lib64/python2.5/distutils/sysconfig.py", line 493, in get_config_vars
>     func()
>   File "/usr/lib64/python2.5/distutils/sysconfig.py", line 352, in _init_posix
>     raise DistutilsPlatformError(my_msg)
> distutils.errors.DistutilsPlatformError: invalid Python installation:
> unable to open /usr/lib64/python2.5/config/Makefile (No such file or
> directory)
>
> Thank you a lot in advance!
>
> Cheers,
>
> Jens Brandenburg

IIRC numpy/scipy is installed in /usr/local/lib64/python2.5 by default (on openSUSE10.2). Did you use python setup.py install ? (as root)

Nils

From matthew.brett at gmail.com Tue Jan 9 09:59:00 2007
From: matthew.brett at gmail.com (Matthew Brett)
Date: Tue, 9 Jan 2007 14:59:00 +0000
Subject: [SciPy-user] NumPy and SciPy installation
In-Reply-To: <45A3A44A.4060501@gmx.net>
References: <45A3A44A.4060501@gmx.net>
Message-ID: <1e2af89e0701090659p3884539bv4b16513accfb0a7b@mail.gmail.com>

Hi,

> I am trying to install numpy on 64bit Linux platform (SuSE 10.2). It
> doesn't work out of the box... But I am sure that it's not a big deal
> or that the problem has been solved some time ago. Due to lack of
> time, I hope that by asking the experts I can get it running as fast
> as possible...

> distutils.errors.DistutilsPlatformError: invalid Python installation:
> unable to open /usr/lib64/python2.5/config/Makefile (No such file or
> directory)

Have you installed the python-devel package for your distribution?

Best,

Matthew

From jks at iki.fi Tue Jan 9 11:15:48 2007
From: jks at iki.fi (=?iso-8859-1?Q?Jouni_K=2E_Sepp=E4nen?=)
Date: Tue, 09 Jan 2007 18:15:48 +0200
Subject: [SciPy-user] Masked arrays with dtype=object: elements get wrapped in 0D arrays
Message-ID:

For a numpy array A with dtype=object, the elements A[i] are the objects stored in the array, not wrapped in zero-dimensional arrays. For a numpy.core.ma array A with dtype=object, this seems to be initially true, but no longer for elements that have been assigned to.
The following script demonstrates this:

------------------------------------------------------------------------
import numpy

for constructor in numpy.array, numpy.core.ma.array:
    print '===', constructor, '==='
    a = constructor([1,2,3,4.0,'foo'],dtype=object)
    print "Expecting: int, int, int, float, str"
    print [type(a[i]) for i in range(len(a))]
    a[0] = 0; a[2] = 0; a[4] = 0
    print "Expecting: int, int, int, float, int"
    print [type(a[i]) for i in range(len(a))]
------------------------------------------------------------------------

My output (with numpy.__version__ == '1.0.1.dev3435') is

------------------------------------------------------------------------
=== <built-in function array> ===
Expecting: int, int, int, float, str
[<type 'int'>, <type 'int'>, <type 'int'>, <type 'float'>, <type 'str'>]
Expecting: int, int, int, float, int
[<type 'int'>, <type 'int'>, <type 'int'>, <type 'float'>, <type 'int'>]
=== <function array at 0x...> ===
Expecting: int, int, int, float, str
[<type 'int'>, <type 'int'>, <type 'int'>, <type 'float'>, <type 'str'>]
Expecting: int, int, int, float, int
[<type 'numpy.ndarray'>, <type 'int'>, <type 'numpy.ndarray'>, <type 'float'>, <type 'numpy.ndarray'>]
------------------------------------------------------------------------

Is this a bug, or am I understanding something wrong?

--
Jouni K. Seppänen
http://www.iki.fi/jks

From pgmdevlist at gmail.com Tue Jan 9 12:22:12 2007
From: pgmdevlist at gmail.com (Pierre GM)
Date: Tue, 9 Jan 2007 12:22:12 -0500
Subject: [SciPy-user] Masked arrays with dtype=object: elements get wrapped in 0D arrays
In-Reply-To:
References:
Message-ID: <200701091222.12561.pgmdevlist@gmail.com>

> Is this a bug, or am I understanding something wrong?

Not really. Well, I don't know whether it should be a bug or not.

When you set an element `i` of masked array `a` to a value `val`, you actually call the function "filled" on the value to set the `_data` part of `a`. This function returns a numpy.array no matter what: `numpy.array(val)`. So, when you type

a[0] = 0

you actually have

a[0] = numpy.array(0)

which explains what you see.

If `a` has a classic dtype such as int_, float_..., `numpy.array(val)` is transformed into a single element (int_, float_...), to fit the memory size reserved for an array cell. In your case, `a` is an object_ array: there's no transformation, as the memory size of an object_ array cell can be modified.
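Concretely, a quick session should show something like this (a sketch, not actually run here, with the address elided):

>>> import numpy.core.ma as MA
>>> a = MA.array([1, 2, 3], dtype=object)
>>> a[0] = 0            # internally: a._data[0] = numpy.array(0)
>>> type(a[0])
<type 'numpy.ndarray'>
>>> type(a[1])          # untouched element, still unwrapped
<type 'int'>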
I convert the result into a masked array with dtype=object and iterate through the array, casting anything that looks like an int or a float into the appropriate data type, and setting the mask at the locations where the value is the empty string. Each column in the file is supposed to contain values of just one type, so I tried to check this, and was surprised to find that each number is of type numpy.ndarray. I suppose the simplest workaround is to fix the types and masks in the lists returned by csv.reader before making the data into an array. -- Jouni K. Sepp?nen http://www.iki.fi/jks From v-nijs at kellogg.northwestern.edu Tue Jan 9 13:27:11 2007 From: v-nijs at kellogg.northwestern.edu (Vincent Nijs) Date: Tue, 09 Jan 2007 12:27:11 -0600 Subject: [SciPy-user] Masked arrays with dtype=object: elements get wrapped in 0D arrays In-Reply-To: Message-ID: Jouni, If you think your code might be generally useful (i.e., apply to any csv file) would you mind posting it? I put a simple class for reading, writing, manipulating, and plotting time-series data from/to csv files at http://scipy.org/Cookbook/dbase However, it does not work for files with strings (except dates) or missing values. I'd really like to see how you do that. Thanks, Vincent On 1/9/07 12:12 PM, "Jouni K. Sepp?nen" wrote: > Pierre GM writes: > >> In the meanwhile: what are you trying to do ? There should be some kind of >> workaround. > > I have a csv file containing strings, floating-point numbers, > integers, and empty cells (denoting missing data; this is why I want > to use masked arrays). I read it using the standard csv module, which > always returns just strings. I convert the result into a masked array > with dtype=object and iterate through the array, casting anything that > looks like an int or a float into the appropriate data type, and > setting the mask at the locations where the value is the empty string. > Each column in the file is supposed to contain values of just one > type, so I tried to check this, and was surprised to find that each > number is of type numpy.ndarray. > > I suppose the simplest workaround is to fix the types and masks in the > lists returned by csv.reader before making the data into an array. From jks at iki.fi Tue Jan 9 15:12:47 2007 From: jks at iki.fi (=?iso-8859-1?Q?Jouni_K=2E_Sepp=E4nen?=) Date: Tue, 09 Jan 2007 22:12:47 +0200 Subject: [SciPy-user] Masked arrays with dtype=object: elements get wrapped in 0D arrays References: Message-ID: Vincent Nijs writes: > If you think your code might be generally useful (i.e., apply to any > csv file) would you mind posting it? Here's my current version. -- Jouni K. Sepp?nen http://www.iki.fi/jks -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: nameddata.py URL: From jeff at taupro.com Wed Jan 10 06:08:49 2007 From: jeff at taupro.com (Jeff Rush) Date: Wed, 10 Jan 2007 05:08:49 -0600 Subject: [SciPy-user] Reminder: Early Bird Registration for PyCon Ending Soon Message-ID: <45A4C941.7080903@taupro.com> Greetings. As the co-chair for the upcoming Python conference, being held in Dallas (Addison) Texas, I want to remind folk to register before early bird registration prices end. The event is the fifth international Python Conference, being held Feb 23-25, 2007 at the Marriott-Quorum in Addison, with early-bird registration ending **Jan 15**. 
The conference draws approximately 400-500 attendees from diverse backgrounds such as scientists from national and medical labs, college/K-12 educators, web engineers and the myriad of IT developers and programming hobbyists. Those new to the Python language are welcome, and we're offering a half-day "Python 101" tutorial on the day before the conference, Thursday Feb 22 to help you get up to speed and better enjoy the rest of the conference. Some of the really cool talks are: - Topographica: Python used for Computational Neuroscience - Python and wxPython for Experimental Economics - Interactive Parallel and Distributed Computing with IPython - Understanding and Using NumPy - IPython: getting the most out of working interactively in Python - Accessing and serving scientific datasets with Python - Galaxy: A Python based web framework for comparative genomics - PyDX: mathematics is code - Visual Python in a Computational Physics Course - Sony Pictures Imageworks Being run by the Python community as a non-profit event, the conference strives to be inexpensive, with registration being only $260 (or $195 if you register prior to Jan 15th), with a further discount for students. On the day before the conference we are running a full day of classroom tutorials (extra charge per class) and then after the conference is a free four-days of sprints, which are informal gatherings of programmers to work together in coding on various projects. Sprints are excellent opportunities to do agile pair-programming side-by-side with experienced programmers and make new friends. Other activities are lightning talks, which are 5-minute presentations to show off a cool technology or spread the word about a project, open space talks, which are spontaneous gatherings around a topic and, new this year, a Python Lab where experienced and novice programmers will work together to solve challenging problems and then present their solutions. The conference is also running four keynote talks by leaders in the programming field, with a special focus on education this year: "The Power of Dangerous Ideas: Python and One Laptop per Child" by Ivan Krstic, senior member of the One Laptop per Child project "Premise: eLearning does not Belong in Public Schools" by Adele Goldberg, of SmallTalk fame "Python 3000" by Guido van Rossum, creator of Python "The Importance of Programming Literacy" by Robert M. "r0ml" Lefkowitz, a frequent speaker at O'Reilly conferences I believe you will find the conference educational and enjoyable. More information about the conference along with the full schedule of presentations with abstracts, is available online: http://us.pycon.org/ Thanks for any help you can give in spreading the word, Jeff Rush Co-Chair PyCon 2007 From sbasu at physics.cornell.edu Wed Jan 10 07:03:46 2007 From: sbasu at physics.cornell.edu (sbasu at physics.cornell.edu) Date: Wed, 10 Jan 2007 07:03:46 -0500 (EST) Subject: [SciPy-user] Reminder: Early Bird Registration for PyCon Ending Soon Message-ID: <20070110120346.85583D08@server.physics.cornell.edu> Dear Sender, I am taking a vacation from the 8th to the 20th of January. I may be able to check this email remotely, but sadly I cannot guarantee either that or a quick response. I will certainly read all my pending emails on the 21st. If you're mailing to contact the system administrator of PECF, please contact backup admin Ivan Daykov . To everyone else: please be patient and I will get back to you when I return! I am sorry if this has inconvenienced you in any way. 
Thanks, Sourish Basu From oliphant at ee.byu.edu Wed Jan 10 09:43:28 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 10 Jan 2007 07:43:28 -0700 Subject: [SciPy-user] Docstring standards for NumPy and SciPy Message-ID: <45A4FB90.9000809@ee.byu.edu> There was a lively discussion on the SciPy List before Christmas regarding establishing a standard for documentation strings for NumPy / SciPy. I am very interested in establishing such a standard. A hearty thanks goes to William Stein for encouraging the discussion. I hope it is very clear that the developers of NumPy / SciPy are quite interested in good documentation strings but recognize that producing them can be fairly tedious and un-interesting work. This is the best explanation I can come up with for the relative paucity of documentation rather than some underlying agenda *not* to produce them. I suspect a standard has not been established largely because of all the discussions taking place within the documentation communities of epydoc, docutils, etc. and a relative unclarity on what to do about Math in docstrings. I'd like to get something done within the next few days (like placing the standard on a wiki and/or placing a HOWTO_DOCUMENT file in the distribution of NumPy). My preference is to use our own basic format for documentation with something that will translate the result into something that the epydoc package can process (like epytext or reStructuredText). The reason, I'd like to use our own simple syntax, is that I'm not fully happy with either epytext or reStructuredText. In general, I don't like a lot of line-noise and "formatting" extras. Unfortuntately both epytext and reStructuredText seem to have their fair share of such things. Robert wrote some great documentation for a few functions (apparently following a reStructuredText format). While I liked that he did this, it showed me that I don't very much like all the line-noise needed for structured text. I've looked through a large number of documentation strings that I've written over the years and believe that the following format suffices. I would like all documentation to follow this format. This format attempts to be a combination of epytext and restructured text with additions for latex-math. The purpose is to make a docstring readable but also allowing for some structured text directives. At some point we will have a sub-routine that will translate docstrings in this format to pure epytext or pure restructured text. """ one-line summary not using variable names or the function name A few sentences giving an extended description. Inputs: var1 -- Explanation variable2 -- Explanation Outputs: named, list, of, outputs named -- explanation list -- more detail of -- blah, blah. outputs -- even more blah Additional Inputs: kwdarg1 -- A little-used input not always needed. kwdarg2 -- Some keyword arguments can and should be given in Inputs Section. This is just for "little-used" inputs. Algorithm: Notes about the implemenation algorithm (if needed). This can have multiple paragraphs as can all sections. Notes: Additional notes if needed Authors: name (date): notes about what was done name (date): major documentation updates can be included here also. See also: * func1 -- any notes about the relationship * func2 -- * func3 -- (or this can be a comma separated list) func1, func2, func3 (For NumPy functions, these do not need to have numpy. namespace in front of them) (For SciPy they don't need the scipy. namespace in front of them). 
(Use np and sp for abbreviations to numpy and scipy if you need to reference the other package). Examples: examples in doctest format Comments: This section should include anything that should not be displayed in a help or other hard-copy output. Such things as substitution-directed directives should go here. """ Additional Information: For paragraphs, indentation is significant and indicates indentation in the output. New paragraphs are marked with blank line. Text-emphasis: Use *italics*, **bold**, and `courier` if needed in any explanations (but not for variable names and doctest code or multi-line code) Math: Use \[...\] or $...$ for math in latex format (remember to use the raw-format for your text string in such cases). Place it in a new-paragraph for displaystyle or in-line for inline style. References: Use L{reference-link} for any code links (except in the see-also section). The reference-link should contain the full path-name (unless the function is in the same name-space as this one is. Use http:// for any URL's Lists: * item1 - subitem + subsubitem * item2 * item3 or 1. item1 a. subitem i. subsubitem1 ii. subsubitem2 2. item2 3. item3 for lists. Definitions: descripition This is my description for any definitions needed. Addtional Code-blocks: {{{ for multi-line code-blocks that are not examples to be run but should be formatted as code. }}} Tables: Tables should be constructed as either: +------------------------+------------+----------+ | Header row, column 1 | Header 2 | Header 3 | +========================+============+==========+ | body row 1, column 1 | column 2 | column 3 | +------------------------+------------+----------+ | body row 2 | Cells may span | +------------------------+-----------------------+ or || Header row, column 1 || Header 2 || Header 3 || ------------------------------------------------------- || body row, column 1 || column 2 || column 3 || || body row 2 |||| Cells may span columns || Footnotes: [1] or [CITATION3] for Footnotes which are placed at the bottom of the docstring as [1] Footnote [CITATION3] Additional note. Substitution: Use |somename|{optional text} with (the next line is placed at the bottom of the docstring in the Comments: section) .. |somename| image::myfile.png or .. |somename| somedirective:: {optional text} for placing restructured text directives in the main text. Please address comments to this proposal, very soon. I'd like to finalize it within a few days. -Travis From sbasu at physics.cornell.edu Wed Jan 10 09:39:34 2007 From: sbasu at physics.cornell.edu (sbasu at physics.cornell.edu) Date: Wed, 10 Jan 2007 09:39:34 -0500 (EST) Subject: [SciPy-user] Docstring standards for NumPy and SciPy Message-ID: <20070110143934.E46B0CA3@server.physics.cornell.edu> Dear Sender, I am taking a vacation from the 8th to the 20th of January. I may be able to check this email remotely, but sadly I cannot guarantee either that or a quick response. I will certainly read all my pending emails on the 21st. If you're mailing to contact the system administrator of PECF, please contact backup admin Ivan Daykov . To everyone else: please be patient and I will get back to you when I return! I am sorry if this has inconvenienced you in any way. Thanks, Sourish Basu From Glen.Mabey at swri.org Wed Jan 10 09:51:42 2007 From: Glen.Mabey at swri.org (Glen W. 
Mabey) Date: Wed, 10 Jan 2007 08:51:42 -0600 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <45A4FB90.9000809@ee.byu.edu> References: <45A4FB90.9000809@ee.byu.edu> Message-ID: <20070110145141.GC12949@swri16wm.electro.swri.edu> On Wed, Jan 10, 2007 at 07:43:28AM -0700, Travis Oliphant wrote: > > There was a lively discussion on the SciPy List before Christmas > regarding establishing a standard for documentation strings for NumPy / > SciPy. > > I am very interested in establishing such a standard. > See also: > * func1 -- any notes about the relationship > * func2 -- > * func3 -- > (or this can be a comma separated list) > func1, func2, func3 Can I just say that the "See also" feature is often a tedious field to maintain, but it is *extremely* useful when trying to explore the available features and functionality of numpy/scipy. Glen From sbasu at physics.cornell.edu Wed Jan 10 09:51:58 2007 From: sbasu at physics.cornell.edu (sbasu at physics.cornell.edu) Date: Wed, 10 Jan 2007 09:51:58 -0500 (EST) Subject: [SciPy-user] Docstring standards for NumPy and SciPy Message-ID: <20070110145158.84156CE3@server.physics.cornell.edu> Dear Sender, I am taking a vacation from the 8th to the 20th of January. I may be able to check this email remotely, but sadly I cannot guarantee either that or a quick response. I will certainly read all my pending emails on the 21st. If you're mailing to contact the system administrator of PECF, please contact backup admin Ivan Daykov . To everyone else: please be patient and I will get back to you when I return! I am sorry if this has inconvenienced you in any way. Thanks, Sourish Basu From edschofield at gmail.com Wed Jan 10 10:02:17 2007 From: edschofield at gmail.com (Ed Schofield) Date: Wed, 10 Jan 2007 16:02:17 +0100 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <45A4FB90.9000809@ee.byu.edu> References: <45A4FB90.9000809@ee.byu.edu> Message-ID: <1b5a37350701100702iac4565en93ad7e4c8d20dea5@mail.gmail.com> On 1/10/07, Travis Oliphant wrote: > > > I've looked through a large number of documentation strings that I've > written over the years and believe that the following format suffices. > I would like all documentation to follow this format. This looks great to me. I don't have any particular criticisms. -- Ed -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbasu at physics.cornell.edu Wed Jan 10 10:02:31 2007 From: sbasu at physics.cornell.edu (sbasu at physics.cornell.edu) Date: Wed, 10 Jan 2007 10:02:31 -0500 (EST) Subject: [SciPy-user] Docstring standards for NumPy and SciPy Message-ID: <20070110150231.C34A5B8C@server.physics.cornell.edu> Dear Sender, I am taking a vacation from the 8th to the 20th of January. I may be able to check this email remotely, but sadly I cannot guarantee either that or a quick response. I will certainly read all my pending emails on the 21st. If you're mailing to contact the system administrator of PECF, please contact backup admin Ivan Daykov . To everyone else: please be patient and I will get back to you when I return! I am sorry if this has inconvenienced you in any way. 
Thanks, Sourish Basu
From gael.varoquaux at normalesup.org Wed Jan 10 10:22:14 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Wed, 10 Jan 2007 16:22:14 +0100 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <45A4FB90.9000809@ee.byu.edu> References: <45A4FB90.9000809@ee.byu.edu> Message-ID: <20070110152206.GA21746@clipper.ens.fr> All I can say is that using standard restructured text allows us to use existing tools and, hopefully, to interact with the wiki. Now I don't contribute any code, so my opinion is not important. Gaël
From sbasu at physics.cornell.edu Wed Jan 10 10:22:32 2007 From: sbasu at physics.cornell.edu (sbasu at physics.cornell.edu) Date: Wed, 10 Jan 2007 10:22:32 -0500 (EST) Subject: [SciPy-user] Docstring standards for NumPy and SciPy Message-ID: <20070110152232.E2CD9D0F@server.physics.cornell.edu> Dear Sender, I am taking a vacation from the 8th to the 20th of January. I may be able to check this email remotely, but sadly I cannot guarantee either that or a quick response. I will certainly read all my pending emails on the 21st. If you're mailing to contact the system administrator of PECF, please contact backup admin Ivan Daykov . To everyone else: please be patient and I will get back to you when I return! I am sorry if this has inconvenienced you in any way. Thanks, Sourish Basu
From oliphant at ee.byu.edu Wed Jan 10 10:24:15 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 10 Jan 2007 08:24:15 -0700 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <20070110152206.GA21746@clipper.ens.fr> References: <45A4FB90.9000809@ee.byu.edu> <20070110152206.GA21746@clipper.ens.fr> Message-ID: <45A5051F.2040604@ee.byu.edu> Gael Varoquaux wrote: >All I can say is that using standard restructured text allows us to use >existing tools and, hopefully, to interact with the wiki. Now I don't >contribute any code, so my opinion is not important. > > I'm aware of this, which is why I used epytext and restructured text as guides. However, I'm strongly against line-noise in the docstrings. My preference is to have some simple tools to convert NumPy/SciPy docstrings to pure re-structured text and/or epytext to take advantage of those tools. The conversion should not be difficult and these tools should be included with NumPy. -Travis
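As a rough illustration of how mechanical such a conversion could be, here is a toy sketch (not an actual NumPy tool; the section names it rewrites are assumptions):

    def plain_to_rest(doc):
        # Toy converter: rename plain section headers to reST-style
        # fields. A real tool would also have to handle indentation,
        # lists, tables, and math notation.
        headers = {'Inputs:': ':Parameters:',
                   'Outputs:': ':Returns:',
                   'Notes:': ':Note:'}
        out = []
        for line in doc.splitlines():
            stripped = line.strip()
            if stripped in headers:
                line = line.replace(stripped, headers[stripped])
            out.append(line)
        return '\n'.join(out)

From sbasu at physics.cornell.edu Wed Jan 10 10:24:35 2007 From: sbasu at physics.cornell.edu (sbasu at physics.cornell.edu) Date: Wed, 10 Jan 2007 10:24:35 -0500 (EST) Subject: [SciPy-user] Docstring standards for NumPy and SciPy Message-ID: <20070110152435.381F2D11@server.physics.cornell.edu> Dear Sender, I am taking a vacation from the 8th to the 20th of January. I may be able to check this email remotely, but sadly I cannot guarantee either that or a quick response. I will certainly read all my pending emails on the 21st. If you're mailing to contact the system administrator of PECF, please contact backup admin Ivan Daykov . To everyone else: please be patient and I will get back to you when I return! I am sorry if this has inconvenienced you in any way.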
Thanks, Sourish Basu From dd55 at cornell.edu Wed Jan 10 10:30:47 2007 From: dd55 at cornell.edu (Darren Dale) Date: Wed, 10 Jan 2007 10:30:47 -0500 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <45A4FB90.9000809@ee.byu.edu> References: <45A4FB90.9000809@ee.byu.edu> Message-ID: <200701101030.47770.dd55@cornell.edu> On Wednesday 10 January 2007 09:43, Travis Oliphant wrote: > There was a lively discussion on the SciPy List before Christmas > regarding establishing a standard for documentation strings for NumPy / > SciPy. [...] > I'd like to get something done within the next few days (like placing > the standard on a wiki and/or placing a HOWTO_DOCUMENT file in the > distribution of NumPy). [...] > I've looked through a large number of documentation strings that I've > written over the years and believe that the following format suffices. > I would like all documentation to follow this format. [...] > Lists: > > * item1 > - subitem > + subsubitem > * item2 > * item3 > > or > > 1. item1 > a. subitem > i. subsubitem1 > ii. subsubitem2 > 2. item2 > 3. item3 > > for lists. [...] > Tables: > > Tables should be constructed as either: > > +------------------------+------------+----------+ > > | Header row, column 1 | Header 2 | Header 3 | > > +========================+============+==========+ > > | body row 1, column 1 | column 2 | column 3 | > > +------------------------+------------+----------+ > > | body row 2 | Cells may span | > > +------------------------+-----------------------+ > > or > > || Header row, column 1 || Header 2 || Header 3 || > > ------------------------------------------------------- > > || body row, column 1 || column 2 || column 3 || > || body row 2 |||| Cells may span columns || From a users perspective, thank you all for working on this. If the Scipy/Numpy community is establishing a standard, now would be the time to pick one format or the other for lists and tables. It would be nice if the wiki/HOWTO_DOCUMENT included a template as well as a specific example that illustrates each element of a properly formatted docstring in more detail, to help documentation writers to produce something that feels consistent. Could these documents also include instructions for where and how users without commit privileges should submit documentation patches or requests? Darren From sbasu at physics.cornell.edu Wed Jan 10 10:31:29 2007 From: sbasu at physics.cornell.edu (sbasu at physics.cornell.edu) Date: Wed, 10 Jan 2007 10:31:29 -0500 (EST) Subject: [SciPy-user] Docstring standards for NumPy and SciPy Message-ID: <20070110153129.851A2B09@server.physics.cornell.edu> Dear Sender, I am taking a vacation from the 8th to the 20th of January. I may be able to check this email remotely, but sadly I cannot guarantee either that or a quick response. I will certainly read all my pending emails on the 21st. If you're mailing to contact the system administrator of PECF, please contact backup admin Ivan Daykov . To everyone else: please be patient and I will get back to you when I return! I am sorry if this has inconvenienced you in any way. 
Thanks, Sourish Basu
From stefan at sun.ac.za Wed Jan 10 10:40:22 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 10 Jan 2007 17:40:22 +0200 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <45A4FB90.9000809@ee.byu.edu> References: <45A4FB90.9000809@ee.byu.edu> Message-ID: <20070110154022.GN16625@mentat.za.net> On Wed, Jan 10, 2007 at 07:43:28AM -0700, Travis Oliphant wrote: > Outputs: named, list, of, outputs > named -- explanation > list -- more detail > of -- blah, blah. > outputs -- even more blah Why list the outputs when they follow in the list? > Authors: > name (date): notes about what was done > name (date): major documentation updates can be included here > also. I don't much like the idea of author info in the docstring. We have an SVN log and a credits file. This doesn't enhance the reader's understanding of the function usage. Thanks for pushing this -- it would be good to have a standardised format for documentation. Cheers Stéfan
From sbasu at physics.cornell.edu Wed Jan 10 10:42:22 2007 From: sbasu at physics.cornell.edu (sbasu at physics.cornell.edu) Date: Wed, 10 Jan 2007 10:42:22 -0500 (EST) Subject: [SciPy-user] Docstring standards for NumPy and SciPy Message-ID: <20070110154222.78613B77@server.physics.cornell.edu> Dear Sender, I am taking a vacation from the 8th to the 20th of January. I may be able to check this email remotely, but sadly I cannot guarantee either that or a quick response. I will certainly read all my pending emails on the 21st. If you're mailing to contact the system administrator of PECF, please contact backup admin Ivan Daykov . To everyone else: please be patient and I will get back to you when I return! I am sorry if this has inconvenienced you in any way. Thanks, Sourish Basu
From aisaac at american.edu Wed Jan 10 10:55:47 2007 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 10 Jan 2007 10:55:47 -0500 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <45A5051F.2040604@ee.byu.edu> References: <45A4FB90.9000809@ee.byu.edu><20070110152206.GA21746@clipper.ens.fr><45A5051F.2040604@ee.byu.edu> Message-ID: On Wed, 10 Jan 2007, Travis Oliphant apparently wrote: > I'm strongly against line-noise in the docstrings. Although I am a user not a developer, I have some experience with reStructuredText and epydoc, and I wonder if they are as "noisy" as you fear. In general I think it gives a very clean and readable look, and links are particularly clean. A big advantage is that the tools for processing exist and are very good. The idea that reST introduces a lot of "noise" is contrary to my limited experience. Do I recall correctly that Fernando Perez was exploring the latex-math support in documentation? If so, maybe he can report on the experience. (I have used it in documents, where it works great, but not in documentation.) I believe your example becomes something like the below (doing this quickly). There is also additional functionality available. For example, you can accumulate a change log, which is nice! The main thing that seems to be missing in the example is a :notes: field that could hold multiple notes and perhaps something equivalent for :see:. fwiw, Alan Isaac """ one-line summary not using variable names or the function name A few sentences giving an extended description. :Parameters: `var1` Explanation `variable2` Explanation :return: named, list, of, outputs - named: explanation - list: more detail - of: blah, blah.
- outputs: even more blah :Keywords: `kwdarg1` A little-used input not always needed. `kwdarg2` Some keyword arguments can and should be given in Inputs Section. This is just for "little-used" inputs. :note: Additional notes if needed :note: More additional notes if needed :author: name (date): notes about what was done :author: name (date): major documentation updates can be included here also. :see: func1 -- any notes about the relationship :see: func2 -- :see: func3 -- :see: func4, func5, func6 Algorithm --------- Notes about the implementation algorithm (if needed). This can have multiple paragraphs as can all sections. Examples -------- examples in doctest format Comments -------- This section should include anything that should not be displayed in a help or other hard-copy output. Such things as substitution-directed directives should go here. """
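A small practical aside on the example above: to have epydoc interpret such docstrings as reStructuredText, the module can declare its docstring format. A minimal sketch (the module and function names are made up):

    # mymodule.py -- hypothetical module documented with reST fields
    __docformat__ = 'restructuredtext en'

    def square(x):
        """Return the square of `x`.

        :Parameters:
          `x` : number
            Value to square.
        :return: x multiplied by itself
        """
        return x * x

Running, e.g., epydoc --html mymodule.py should then render the fields as structured sections.

From fperez.net at gmail.com Wed Jan 10 11:05:14 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 10 Jan 2007 09:05:14 -0700 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: References: <45A4FB90.9000809@ee.byu.edu> <20070110152206.GA21746@clipper.ens.fr> <45A5051F.2040604@ee.byu.edu> Message-ID: On 1/10/07, Alan G Isaac wrote: > On Wed, 10 Jan 2007, Travis Oliphant apparently wrote: > > I'm strongly against line-noise in the docstrings. > > Although I am a user not a developer, I have some experience > with reStructuredText and epydoc, and I wonder if they are > as "noisy" as you fear. In general I think it gives > a very clean and readable look, and links are particularly > clean. A big advantage is that the tools for processing > exist and are very good. The idea that reST introduces > a lot of "noise" is contrary to my limited experience. Do > I recall correctly that Fernando Perez was exploring the > latex-math support in documentation? If so, maybe he can > report on the experience. (I have used it in documents, > where it works great, but not in documentation.) No, it wasn't me. I'm /interested/ in it, but I haven't used it yet. I should add though, that I agree with Alan in preferring plain reST against pseudo-reST. My brain is starting to fill up with all the 'enhanced plaintext' formats floating around: Trac, Moin, reST, epytext, ... (I'm not even sure if Trac and Moin are 100% identical anymore). For that reason, I'd much rather just learn one of them and use it well. Keeping track of multiple slightly different formats becomes a major PITB (pain in the brain). It's actually much harder than keeping /very/ different formats sorted out, since our brain seems pretty good at creating 'cognitive buckets' for widely disparate things, but those that are oh-ever-so-slightly different easily blur into one another. So while I'm +100 for Travis' proposal in general, I'd vote for it just following plain reST (with the LaTeX support that has been discussed here) without adding yet a new 'numpytext' format. Cheers, f
From sbasu at physics.cornell.edu Wed Jan 10 11:05:39 2007 From: sbasu at physics.cornell.edu (sbasu at physics.cornell.edu) Date: Wed, 10 Jan 2007 11:05:39 -0500 (EST) Subject: [SciPy-user] Docstring standards for NumPy and SciPy Message-ID: <20070110160539.A02BEB76@server.physics.cornell.edu> Dear Sender, I am taking a vacation from the 8th to the 20th of January. I may be able to check this email remotely, but sadly I cannot guarantee either that or a quick response. I will certainly read all my pending emails on the 21st.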
If you're mailing to contact the system administrator of PECF, please contact backup admin Ivan Daykov . To everyone else: please be patient and I will get back to you when I return! I am sorry if this has inconvenienced you in any way. Thanks, Sourish Basu From fperez.net at gmail.com Wed Jan 10 11:10:10 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 10 Jan 2007 09:10:10 -0700 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <20070110160539.A02BEB76@server.physics.cornell.edu> References: <20070110160539.A02BEB76@server.physics.cornell.edu> Message-ID: On 1/10/07, sbasu at physics.cornell.edu wrote: > Dear Sender, > > I am taking a vacation from the 8th to the 20th of January. I may > be able to check this email remotely, but sadly I cannot guarantee > either that or a quick response. I will certainly read all my pending > emails on the 21st. Can somebody with admin access /please/ unsubscribe this guy? We're going to be flooded with this junk for the next 10 days... Thanks f From sbasu at physics.cornell.edu Wed Jan 10 11:10:23 2007 From: sbasu at physics.cornell.edu (sbasu at physics.cornell.edu) Date: Wed, 10 Jan 2007 11:10:23 -0500 (EST) Subject: [SciPy-user] Docstring standards for NumPy and SciPy Message-ID: <20070110161023.08D42BD2@server.physics.cornell.edu> Dear Sender, I am taking a vacation from the 8th to the 20th of January. I may be able to check this email remotely, but sadly I cannot guarantee either that or a quick response. I will certainly read all my pending emails on the 21st. If you're mailing to contact the system administrator of PECF, please contact backup admin Ivan Daykov . To everyone else: please be patient and I will get back to you when I return! I am sorry if this has inconvenienced you in any way. Thanks, Sourish Basu From pgmdevlist at gmail.com Wed Jan 10 11:15:39 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 10 Jan 2007 11:15:39 -0500 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: References: <45A4FB90.9000809@ee.byu.edu> <45A5051F.2040604@ee.byu.edu> Message-ID: <200701101115.39965.pgmdevlist@gmail.com> On Wednesday 10 January 2007 10:55, Alan G Isaac wrote: > On Wed, 10 Jan 2007, Travis Oliphant apparently wrote: > > I'm strongly against line-noise in the docstrings. > > Although I am a user not a developer, I have some experience > with reStructuredText and epydoc, and I wonder if they are > as "noisy" as you fear. My 2c: I'd be happier if we'd stick to reST. As Alan mentioned, the format is simple, not cluttered, and the tools are already here. Why yet another format ? Travis, what do you consider as 'noise' in the rest fields ? I also second Stefan's suggestion: I'm not really interested in knowing who wrote a particular piece of code when trying to access the docstring of a function. Now, Travis, if your example is very generic and covers also the docstring of a full module, that's different. But then it might be worth to add some :date: and :version: fields. > """ > one-line summary not using variable names or the function name > > A few sentences giving an extended description. 
> :Parameters: > `var1` > Explanation > `variable2` > Explanation I like specifying the type of input and the default directly in front of the variables: :Parameters: - `var1` : ndarray Explanation - `var2` : Boolean *[False]* Explanation
From sbasu at physics.cornell.edu Wed Jan 10 11:16:02 2007 From: sbasu at physics.cornell.edu (sbasu at physics.cornell.edu) Date: Wed, 10 Jan 2007 11:16:02 -0500 (EST) Subject: [SciPy-user] Docstring standards for NumPy and SciPy Message-ID: <20070110161602.7FA97D08@server.physics.cornell.edu> Dear Sender, I am taking a vacation from the 8th to the 20th of January. I may be able to check this email remotely, but sadly I cannot guarantee either that or a quick response. I will certainly read all my pending emails on the 21st. If you're mailing to contact the system administrator of PECF, please contact backup admin Ivan Daykov . To everyone else: please be patient and I will get back to you when I return! I am sorry if this has inconvenienced you in any way. Thanks, Sourish Basu
From matthew at sel.cam.ac.uk Wed Jan 10 11:16:32 2007 From: matthew at sel.cam.ac.uk (Matthew Vernon) Date: Wed, 10 Jan 2007 16:16:32 +0000 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: References: <20070110160539.A02BEB76@server.physics.cornell.edu> Message-ID: On 10 Jan 2007, at 16:10, Fernando Perez wrote: > On 1/10/07, sbasu at physics.cornell.edu > wrote: >> Dear Sender, >> >> I am taking a vacation from the 8th to the 20th of January. I may >> be able to check this email remotely, but sadly I cannot guarantee >> either that or a quick response. I will certainly read all my pending >> emails on the 21st. > > Can somebody with admin access /please/ unsubscribe this guy? We're > going to be flooded with this junk for the next 10 days... I emailed the address given in said email "if you wanted sysadmin help". I await developments. Matthew -- Matthew Vernon MA VetMB LGSM MRCVS Farm Animal Epidemiology and Informatics Unit Department of Veterinary Medicine, University of Cambridge http://www.cus.cam.ac.uk/~mcv21/
From sbasu at physics.cornell.edu Wed Jan 10 11:17:03 2007 From: sbasu at physics.cornell.edu (sbasu at physics.cornell.edu) Date: Wed, 10 Jan 2007 11:17:03 -0500 (EST) Subject: [SciPy-user] Docstring standards for NumPy and SciPy Message-ID: <20070110161703.90446D11@server.physics.cornell.edu> Dear Sender, I am taking a vacation from the 8th to the 20th of January. I may be able to check this email remotely, but sadly I cannot guarantee either that or a quick response. I will certainly read all my pending emails on the 21st. If you're mailing to contact the system administrator of PECF, please contact backup admin Ivan Daykov . To everyone else: please be patient and I will get back to you when I return! I am sorry if this has inconvenienced you in any way. Thanks, Sourish Basu
From swisher at enthought.com Wed Jan 10 12:07:43 2007 From: swisher at enthought.com (Janet Swisher) Date: Wed, 10 Jan 2007 11:07:43 -0600 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: References: Message-ID: <45A51D5F.1000904@enthought.com> Darren Dale wrote: > >> Lists: >> >> * item1 >> - subitem >> + subsubitem >> * item2 >> * item3 >> >> or >> >> 1. item1 >> a. subitem >> i. subsubitem1 >> ii. subsubitem2 >> 2. item2 >> 3. item3 >> >> for lists. >> > [...]
> >> Tables: >> >> Tables should be constructed as either: >> >> +------------------------+------------+----------+ >> >> | Header row, column 1 | Header 2 | Header 3 | >> >> +========================+============+==========+ >> >> | body row 1, column 1 | column 2 | column 3 | >> >> +------------------------+------------+----------+ >> >> | body row 2 | Cells may span | >> >> +------------------------+-----------------------+ >> >> or >> >> || Header row, column 1 || Header 2 || Header 3 || >> >> ------------------------------------------------------- >> >> || body row, column 1 || column 2 || column 3 || >> || body row 2 |||| Cells may span columns || >> > > > >From a users perspective, thank you all for working on this. > Agreed! Thanks to Travis for pushing the ball forward on this. > If the Scipy/Numpy community is establishing a standard, now would be the time > to pick one format or the other for lists and tables. For tables, I agree that picking one format would be desirable. I personally prefer the second of the two examples, as it's simpler. The first type is tedious to construct and maintain, unless you use the emacs package that supports it; if "everybody" is using emacs, then the first type might be preferable. For lists, however, I see a need for both bulleted and numbered list formats. There are times when you *want* to imply a sequence among the items, and there are other times when a sequence would be misleading. This is why HTML has both
<ul> and <ol>
lists. -- Janet Swisher --- Senior Technical Writer Enthought, Inc. http://www.enthought.com
From david.huard at gmail.com Wed Jan 10 12:07:51 2007 From: david.huard at gmail.com (David Huard) Date: Wed, 10 Jan 2007 12:07:51 -0500 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <200701101115.39965.pgmdevlist@gmail.com> References: <45A4FB90.9000809@ee.byu.edu> <45A5051F.2040604@ee.byu.edu> <200701101115.39965.pgmdevlist@gmail.com> Message-ID: <91cf711d0701100907g4fca7067wcfaab9b007eb1261@mail.gmail.com> Hi, I think we should stick with ReST, but define our own roles and directives (Input, Output, etc). This would lead to a standard close to what Travis proposes, with some small differences: fields would be defined as :Field: instead of Field:, and mathematical formulas as :math: instead of $, although we could probably do something about that too. I started to implement the standard discussed before the holidays by hacking epydoc. I managed to add Input and Output fields, but it's not pretty. Epydoc does not seem to be easily customizable, hence the suggestion to define reST roles for the fields Travis proposed and make minimal changes to epydoc to integrate those new fields. In any case, it would be time to get a documentation folder in the trunk or sandbox to store: * patches to docutils and epydoc, * a typical docstring to use as a benchmark, * a documentation makefile, * documentation guidelines. Cheers, David 2007/1/10, Pierre GM : > On Wednesday 10 January 2007 10:55, Alan G Isaac wrote: > > On Wed, 10 Jan 2007, Travis Oliphant apparently wrote: > > > I'm strongly against line-noise in the docstrings. > > > > Although I am a user not a developer, I have some experience > > with reStructuredText and epydoc, and I wonder if they are > > as "noisy" as you fear. > > My 2c: > > I'd be happier if we'd stick to reST. As Alan mentioned, the format is > simple, > not cluttered, and the tools are already here. Why yet another format ? > Travis, what do you consider as 'noise' in the rest fields ? > > I also second Stefan's suggestion: I'm not really interested in knowing > who > wrote a particular piece of code when trying to access the docstring of a > function. > > Now, Travis, if your example is very generic and covers also the docstring > of > a full module, that's different. But then it might be worth to add > some :date: and :version: fields. > > > > """ > > one-line summary not using variable names or the function name > > > > A few sentences giving an extended description. > > > > :Parameters: > > > > `var1` > > Explanation > > `variable2` > > Explanation > > I like specifying the type of input and the default directly in front of > the > variables: > :Parameters: > - `var1` : ndarray > Explanation > - `var2` : Boolean *[False]* > Explanation > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL:
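An illustrative aside on David's suggestion: registering a custom role with docutils takes only a few lines. A minimal sketch (the role name and the literal rendering are assumptions; a real implementation would emit proper math markup rather than literal text):

    from docutils import nodes
    from docutils.parsers.rst import roles

    def latex_math_role(role, rawtext, text, lineno, inliner,
                        options={}, content=[]):
        # For now, carry the LaTeX source through as literal text; a
        # real writer would translate it into rendered math output.
        node = nodes.literal(rawtext, text)
        return [node], []

    roles.register_local_role('latex-math', latex_math_role)

From sbasu at physics.cornell.edu Wed Jan 10 12:08:02 2007 From: sbasu at physics.cornell.edu (sbasu at physics.cornell.edu) Date: Wed, 10 Jan 2007 12:08:02 -0500 (EST) Subject: [SciPy-user] Docstring standards for NumPy and SciPy Message-ID: <20070110170802.47C93B08@server.physics.cornell.edu> Dear Sender, I am taking a vacation from the 8th to the 20th of January. I may be able to check this email remotely, but sadly I cannot guarantee either that or a quick response.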
I will certainly read all my pending emails on the 21st. If you're mailing to contact the system administrator of PECF, please contact backup admin Ivan Daykov . To everyone else: please be patient and I will get back to you when I return! I am sorry if this has inconvenienced you in any way. Thanks, Sourish Basu
From sbasu at physics.cornell.edu Wed Jan 10 12:08:07 2007 From: sbasu at physics.cornell.edu (sbasu at physics.cornell.edu) Date: Wed, 10 Jan 2007 12:08:07 -0500 (EST) Subject: [SciPy-user] Docstring standards for NumPy and SciPy Message-ID: <20070110170807.E6665B76@server.physics.cornell.edu> Dear Sender, I am taking a vacation from the 8th to the 20th of January. I may be able to check this email remotely, but sadly I cannot guarantee either that or a quick response. I will certainly read all my pending emails on the 21st. If you're mailing to contact the system administrator of PECF, please contact backup admin Ivan Daykov . To everyone else: please be patient and I will get back to you when I return! I am sorry if this has inconvenienced you in any way. Thanks, Sourish Basu
From oliphant at ee.byu.edu Wed Jan 10 12:21:56 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 10 Jan 2007 10:21:56 -0700 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: References: <45A4FB90.9000809@ee.byu.edu><20070110152206.GA21746@clipper.ens.fr><45A5051F.2040604@ee.byu.edu> Message-ID: <45A520B4.4060206@ee.byu.edu> Alan G Isaac wrote: > >""" >one-line summary not using variable names or the function name > >A few sentences giving an extended description. > >:Parameters: > `var1` > Explanation > `variable2` > Explanation >:return: named, list, of, outputs > - named: explanation > - list: more detail > - of: blah, blah. > - outputs: even more blah >:Keywords: > `kwdarg1` > A little-used input not always needed. > `kwdarg2` > Some keyword arguments can and should be given in Inputs > Section. This is just for "little-used" inputs. >:note: Additional notes if needed >:note: More additional notes if needed >:author: name (date): notes about what was done >:author: name (date): major documentation updates can be included here also. >:see: func1 -- any notes about the relationship >:see: func2 -- >:see: func3 -- >:see: func4, func5, func6 > > Several things: 1) I don't like ":Input:". "Input:" looks much better to me. The extra ':' is noise for the human reader. 2) I don't like little back-ticks on the variable names; they are just line noise. It is easily inferred that these are parameters. I prefer the names "Inputs:" and "Outputs:" because these are already pretty standard throughout SciPy. Why switch to underlining in the Algorithm, Examples, and Comments sections? I don't necessarily mind it, but it seems arbitrary. I'm happy to have a notes section. I don't care if the author field is there or not. It is nice to use tools like epydoc, but we should not force ourselves to use their stuff since we will have to hack it anyway to manage math. Therefore, why not "hack" it to manage the format we decide on? -Travis
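A sketch of what such a "hack" could look like: a toy parser for the plain format. The left-margin 'Name:' header convention is assumed here, not taken from any existing tool:

    import re

    def split_sections(doc):
        # Group a plain-format docstring into {section: lines},
        # treating left-margin lines like 'Inputs:' or 'See also:'
        # as section headers.
        header = re.compile(r'^([A-Z][A-Za-z ]*):\s*$')
        sections = {'Summary': []}
        current = 'Summary'
        for line in doc.splitlines():
            m = header.match(line)
            if m:
                current = m.group(1)
                sections[current] = []
            else:
                sections[current].append(line)
        return sections

The resulting mapping could then be handed to a reST or epytext writer.

From sbasu at physics.cornell.edu Wed Jan 10 12:22:14 2007 From: sbasu at physics.cornell.edu (sbasu at physics.cornell.edu) Date: Wed, 10 Jan 2007 12:22:14 -0500 (EST) Subject: [SciPy-user] Docstring standards for NumPy and SciPy Message-ID: <20070110172214.3CA3AD08@server.physics.cornell.edu> Dear Sender, I am taking a vacation from the 8th to the 20th of January.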
I may be able to check this email remotely, but sadly I cannot guarantee either that or a quick response. I will certainly read all my pending emails on the 21st. If you're mailing to contact the system administrator of PECF, please contact backup admin Ivan Daykov . To everyone else: please be patient and I will get back to you when I return! I am sorry if this has inconvenienced you in any way. Thanks, Sourish Basu
From seefeld at sympatico.ca Wed Jan 10 12:26:34 2007 From: seefeld at sympatico.ca (Stefan Seefeld) Date: Wed, 10 Jan 2007 12:26:34 -0500 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <45A520B4.4060206@ee.byu.edu> References: <45A4FB90.9000809@ee.byu.edu><20070110152206.GA21746@clipper.ens.fr><45A5051F.2040604@ee.byu.edu> <45A520B4.4060206@ee.byu.edu> Message-ID: <45A521CA.2030201@sympatico.ca> Travis Oliphant wrote: > It is nice to use tools like epydoc, but we should not > force ourselves to use their stuff since we will have to hack it anyway > to manage math. Therefore, why not "hack" it to manage the format we > decide on? FWIW, I'm the author of synopsis (http://synopsis.fresco.org/), which is used to document source code, but has a very open and extensible architecture as to how to handle the individual processing steps. It does support Python, but also C, C++, and (CORBA) IDL. So, in case you are interested in a flexible tool that can also cover your back-end code, you may give it a try. Regards, Stefan -- ...ich hab' noch einen Koffer in Berlin...
From sbasu at physics.cornell.edu Wed Jan 10 12:32:19 2007 From: sbasu at physics.cornell.edu (sbasu at physics.cornell.edu) Date: Wed, 10 Jan 2007 12:32:19 -0500 (EST) Subject: [SciPy-user] Docstring standards for NumPy and SciPy Message-ID: <20070110173219.A600CB7C@server.physics.cornell.edu> Dear Sender, I am taking a vacation from the 8th to the 20th of January. I may be able to check this email remotely, but sadly I cannot guarantee either that or a quick response. I will certainly read all my pending emails on the 21st. If you're mailing to contact the system administrator of PECF, please contact backup admin Ivan Daykov . To everyone else: please be patient and I will get back to you when I return! I am sorry if this has inconvenienced you in any way. Thanks, Sourish Basu
From oliphant at ee.byu.edu Wed Jan 10 12:39:46 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 10 Jan 2007 10:39:46 -0700 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: References: <45A4FB90.9000809@ee.byu.edu> <20070110152206.GA21746@clipper.ens.fr> <45A5051F.2040604@ee.byu.edu> Message-ID: <45A524E2.4030608@ee.byu.edu> Fernando Perez wrote: >I should add though, that I agree with Alan in preferring plain reST >against pseudo-reST. My brain is starting to fill up with all the >'enhanced plaintext' formats floating around: Trac, Moin, reST, >epytext, ... (I'm not even sure if Trac and Moin are 100% identical >anymore). For that reason, I'd much rather just learn one of them and >use it well. Keeping track of multiple slightly different formats >becomes a major PITB (pain in the brain). It's actually much harder >than keeping /very/ different formats sorted out, since our brain >seems pretty good at creating 'cognitive buckets' for widely disparate >things, but those that are oh-ever-so-slightly different easily blur >into one another. > > Probably not. I know that Moin and reST are not identical. Problem is that Moin does a better job with tables than reST does.
>So while I'm +100 for Travis' proposal in general, I'd vote for it >just following plain reST (with the LaTeX support that has been >discussed here) without adding yet a new 'numpytext' format. > > But that's just it: we have to "add latex" support, which is already going to be non-standard. The way reST defines directives is not pretty. Putting .. :latex-math: everywhere is line-noise to me. If the only issue is having automated tools, we can easily translate from our format to those tools. -Travis
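Continuing the translation idea, turning $...$ math into an explicit reST role is essentially a one-line regex. A sketch (the :latex-math: role name is an assumption from this thread, not an existing docutils role):

    import re

    _dollar = re.compile(r'\$([^$]+)\$')

    def dollars_to_role(text):
        # Rewrite $f(x)=x^2$ as :latex-math:`f(x)=x^2`.
        return _dollar.sub(r':latex-math:`\1`', text)

From sbasu at physics.cornell.edu Wed Jan 10 12:40:03 2007 From: sbasu at physics.cornell.edu (sbasu at physics.cornell.edu) Date: Wed, 10 Jan 2007 12:40:03 -0500 (EST) Subject: [SciPy-user] Docstring standards for NumPy and SciPy Message-ID: <20070110174003.8E58FCB4@server.physics.cornell.edu> Dear Sender, I am taking a vacation from the 8th to the 20th of January. I may be able to check this email remotely, but sadly I cannot guarantee either that or a quick response. I will certainly read all my pending emails on the 21st. If you're mailing to contact the system administrator of PECF, please contact backup admin Ivan Daykov . To everyone else: please be patient and I will get back to you when I return! I am sorry if this has inconvenienced you in any way. Thanks, Sourish Basu
From aisaac at american.edu Wed Jan 10 13:54:38 2007 From: aisaac at american.edu (Alan Isaac) Date: Wed, 10 Jan 2007 13:54:38 -0500 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <200701101115.39965.pgmdevlist@gmail.com> References: <45A4FB90.9000809@ee.byu.edu> <45A5051F.2040604@ee.byu.edu> <200701101115.39965.pgmdevlist@gmail.com> Message-ID: I am not trying to "argue" for reST+epydoc, but rather to add my perspective on some of the arguments pro and con. Here are some issues that have been raised. I will copy Ed Loper, in case he wants to comment. Alan Isaac Must items (e.g., parameters) in a consolidated field be marked as interpreted text (with back ticks). Yes. It does seem redundant, so I will ask why. Would it not be nice to have :Inputs: and :Outputs: consolidated fields as synonyms for :Parameters: and :Returns:? Yes! Perhaps Ed Loper would be willing to add this. Can new fields be added? http://epydoc.sourceforge.net/fields.html#newfield (In addition, the author has added fields in response to requests.) Is Epydoc easily customizable? In what ways? It is easy to add new fields (see above), but I do not know about new consolidated fields. Can you put type and precision right after the variable in a :Parameters: list? http://epydoc.sourceforge.net/fields.html#rst (Yes; see the very last example) Is table support adequate in reST? http://docutils.sourceforge.net/docs/ref/rst/directives.html#tables So *maybe* not, although I am not currently imagining a table that will be used in documentation that cannot be accommodated (are elements in multiple cells really needed?). And of course, any plain text table can be retained as a literal block. Is the use of directives "cluttered"? Not in my opinion. For example:: \[ f(x) = x^2 \] does not seem more cluttered to me than:: .. latex-math:: f(x) = x^2 However it would be nice to keep the default role for math, so we could inline `f(x)=x^2` rather than :latex-math:`f(x)=x^2`. It may be worth asking whether epydoc developers would be willing to pass $f(x)=x^2$ as latex-math. Are there date and version fields? There are, called 'since' and 'version'. Why use underlining to define sections? So that they are really sections.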
The indented examples will display fine but will not give access to sectioning controls. From robert.kern at gmail.com Wed Jan 10 14:29:01 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 10 Jan 2007 13:29:01 -0600 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <20070110154022.GN16625@mentat.za.net> References: <45A4FB90.9000809@ee.byu.edu> <20070110154022.GN16625@mentat.za.net> Message-ID: <45A53E7D.5000608@gmail.com> Stefan van der Walt wrote: > On Wed, Jan 10, 2007 at 07:43:28AM -0700, Travis Oliphant wrote: >> Outputs: named, list, of, outputs >> named -- explanation >> list -- more detail >> of -- blah, blah. >> outputs -- even more blah > > Why list the outputs when they follow in the list? On first glance, it's difficult for a new user to understand that the list is describing elements of a tuple. I can go either way on this, though. I use a substantially similar format at work, and I don't include the redundant list. >> Authors: >> name (date): notes about what was done >> name (date): major documentation updates can be included here >> also. > > I don't much like the idea of author info in the docstring. We have > an SVN log and a credits file. This doesn't enhance the reader's > understanding of the function usage. Agreed. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From sbasu at physics.cornell.edu Wed Jan 10 14:29:27 2007 From: sbasu at physics.cornell.edu (sbasu at physics.cornell.edu) Date: Wed, 10 Jan 2007 14:29:27 -0500 (EST) Subject: [SciPy-user] Docstring standards for NumPy and SciPy Message-ID: <20070110192927.7C2D8D1B@server.physics.cornell.edu> Dear Sender, I am taking a vacation from the 8th to the 20th of January. I may be able to check this email remotely, but sadly I cannot guarantee either that or a quick response. I will certainly read all my pending emails on the 21st. If you're mailing to contact the system administrator of PECF, please contact backup admin Ivan Daykov . To everyone else: please be patient and I will get back to you when I return! I am sorry if this has inconvenienced you in any way. Thanks, Sourish Basu From robert.kern at gmail.com Wed Jan 10 14:49:34 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 10 Jan 2007 13:49:34 -0600 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <45A4FB90.9000809@ee.byu.edu> References: <45A4FB90.9000809@ee.byu.edu> Message-ID: <45A5434E.2090609@gmail.com> Travis Oliphant wrote: > """ > one-line summary not using variable names or the function name > > A few sentences giving an extended description. > > Inputs: > var1 -- Explanation > variable2 -- Explanation > > Outputs: named, list, of, outputs > named -- explanation > list -- more detail > of -- blah, blah. > outputs -- even more blah My objection to this kind of list is that it uses up a lot of valuable horizontal space. Additionally, it either entails annoying column alignment *or* it looks really ugly with all of the descriptions unaligned. Look at most of the docstrings in scipy for examples. This is the Enthought standard: Parameters ---------- var1 : type of var1 (loosely) Description of var1. variable2 : type of variable2 (loosely) Description of variable2. kwdarg : type of kwdarg, optional Description of kwdarg. The (loose) type information is quite handy. 
Sometimes it is quite difficult to tell what kind of thing the function needs from the usual description, particularly when both scalars or arrays are flying around. The problem with both of these forms is that they are difficult to parse into correct lists. One issue that should be noted is that sometimes the description of a variable is omitted because the description of the function, the variable name, and the type information are more than enough context to tell you everything you need to know about the parameter. The only description you can give is just a repetition of the information that you just gave. Or two variables are substantially similar such that you want to describe them together only once. """ Clip the values of an array. Parameters ---------- arr : array lower : number upper : number The lower and upper bounds to clip the array against. """ This is one reason why I'm beginning to like the ReST form of this; it always has a marker to say that this is part of a list. """ Clip the values of an array. :Parameters: - arr : array - lower : number - upper : number The lower and upper bounds to clip the array against. """ I don't think that you can write a tool that takes the ambiguous forms to the unambiguous ones. > Additional Inputs: kwdarg1 -- A little-used input not always needed. > kwdarg2 -- Some keyword arguments can and should be given in Inputs > Section. This is just for "little-used" inputs. These should be in the Inputs section, not in a separate section. > Algorithm: > Notes about the implemenation algorithm (if needed). > > This can have multiple paragraphs as can all sections. Meh. This should be in the multi-line description at the top. > Notes: > Additional notes if needed Also in the multi-line description. > Authors: > name (date): notes about what was done > name (date): major documentation updates can be included here also. I agree with Stefan that this shouldn't be in the docstring. > See also: > * func1 -- any notes about the relationship > * func2 -- > * func3 -- > (or this can be a comma separated list) > func1, func2, func3 This kind of "See also" section is annoying to maintain, and I think that, as we have currently been using them, they aren't very useful. The functions in scipy.integrate shouldn't point to every other function in scipy.integrate. That's pointless and creates a bunch of similar-looking text that gets ignored. They should each point to scipy.integrate itself. If they have a much closer relationship to another function (trapz() and cumtrapz() for example), then they should include that here, but we should show great restraint when doing so. We reduce the utility of this section by being indiscriminate. > (For NumPy functions, these do not need to have numpy. namespace in > front of them) > (For SciPy they don't need the scipy. namespace in front of them). > (Use np and sp for abbreviations to numpy and scipy if you need to > reference > the other package). We shouldn't abbreviate. > Examples: > examples in doctest format > > Comments: > This section should include anything that should not be displayed in a > help > or other hard-copy output. Such things as substitution-directed > directives > should go here. This should be in a comment, not a docstring. Most tools won't be aware of this special processing that you need to apply. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From sbasu at physics.cornell.edu Wed Jan 10 14:50:00 2007 From: sbasu at physics.cornell.edu (sbasu at physics.cornell.edu) Date: Wed, 10 Jan 2007 14:50:00 -0500 (EST) Subject: [SciPy-user] Docstring standards for NumPy and SciPy Message-ID: <20070110195000.01996D34@server.physics.cornell.edu> Dear Sender, I am taking a vacation from the 8th to the 20th of January. I may be able to check this email remotely, but sadly I cannot guarantee either that or a quick response. I will certainly read all my pending emails on the 21st. If you're mailing to contact the system administrator of PECF, please contact backup admin Ivan Daykov . To everyone else: please be patient and I will get back to you when I return! I am sorry if this has inconvenienced you in any way. Thanks, Sourish Basu From oliphant at ee.byu.edu Wed Jan 10 15:15:08 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 10 Jan 2007 13:15:08 -0700 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <45A5434E.2090609@gmail.com> References: <45A4FB90.9000809@ee.byu.edu> <45A5434E.2090609@gmail.com> Message-ID: <45A5494C.9020405@ee.byu.edu> Robert Kern wrote: >>""" >>one-line summary not using variable names or the function name >> >>A few sentences giving an extended description. >> >>Inputs: >> var1 -- Explanation >> variable2 -- Explanation >> >>Outputs: named, list, of, outputs >> named -- explanation >> list -- more detail >> of -- blah, blah. >> outputs -- even more blah >> >> > >My objection to this kind of list is that it uses up a lot of valuable >horizontal space. Additionally, it either entails annoying column alignment *or* >it looks really ugly with all of the descriptions unaligned. Look at most of the >docstrings in scipy for examples. This is the Enthought standard: > > > I'm not sure I understand what is taking up horizontal space (do you mean the required alighment)? I do like aligned descriptions. >Parameters >---------- >var1 : type of var1 (loosely) > Description of var1. >variable2 : type of variable2 (loosely) > Description of variable2. >kwdarg : type of kwdarg, optional > Description of kwdarg. > > > This is acceptable in my mind, also. Although I probably like '-' instead of ':' as a separator, but just because I'm used to it. >The (loose) type information is quite handy. Sometimes it is quite difficult to >tell what kind of thing the function needs from the usual description, >particularly when both scalars or arrays are flying around. > > Yeah, I could see that. As you understand but others may not, we shouldn't get overly specific with the "types" though. We're supposed to be doing duck typing wherever possible. >The problem with both of these forms is that they are difficult to parse into >correct lists. One issue that should be noted is that sometimes the description >of a variable is omitted because the description of the function, the variable >name, and the type information are more than enough context to tell you >everything you need to know about the parameter. The only description you can >give is just a repetition of the information that you just gave. Or two >variables are substantially similar such that you want to describe them together >only once. > >""" Clip the values of an array. > >Parameters >---------- >arr : array >lower : number >upper : number > The lower and upper bounds to clip the array against. >""" > >This is one reason why I'm beginning to like the ReST form of this; it always >has a marker to say that this is part of a list. 
> > I don't see why indentation doesn't give you that information already. >""" Clip the values of an array. > >:Parameters: > - arr : array > - lower : number > - upper : number > The lower and upper bounds to clip the array against. >""" > >I don't think that you can write a tool that takes the ambiguous forms to the >unambiguous ones. > > > I don't know. If you require indentation for non list entries, then it seems un-ambiguous to me. >>Additional Inputs: kwdarg1 -- A little-used input not always needed. >> kwdarg2 -- Some keyword arguments can and should be given in Inputs >> Section. This is just for "little-used" inputs. >> >> > >These should be in the Inputs section, not in a separate section. > > I don't like this as a requirement because occassionally this clutters the inputs list with a lot of inputs that are rarely used. The common usage should be listed. I'd like there to be an option for an Additional Inputs section. See some of the functions in scipy.optimize for examples. > > >>Algorithm: >> Notes about the implemenation algorithm (if needed). >> >> This can have multiple paragraphs as can all sections. >> >> > > > >Meh. This should be in the multi-line description at the top. > > I don't know. Not always. Sometimes sure, but the algorithm can be quite complicated to explain and should therefore be here. Perhaps we should use Algorithm Notes for this section so that it is clear that simple algorithms should be explained in the multi-line description part. > > >>Notes: >> Additional notes if needed >> >> > >Also in the multi-line description. > > > > >>Authors: >> name (date): notes about what was done >> name (date): major documentation updates can be included here also. >> >> > >I agree with Stefan that this shouldn't be in the docstring. > > Great. I have no problem with eliminating this section. >>See also: >> * func1 -- any notes about the relationship >> * func2 -- >> * func3 -- >> (or this can be a comma separated list) >> func1, func2, func3 >> >> > >This kind of "See also" section is annoying to maintain, and I think that, as we >have currently been using them, they aren't very useful. The functions in >scipy.integrate shouldn't point to every other function in scipy.integrate. >That's pointless and creates a bunch of similar-looking text that gets ignored. >They should each point to scipy.integrate itself. If they have a much closer >relationship to another function (trapz() and cumtrapz() for example), then they >should include that here, but we should show great restraint when doing so. We >reduce the utility of this section by being indiscriminate. > > I tend to agree. I think one rule of thumb which could be applied to use see also only if the additional docstring gives more insight into what the current function is actually doing. >>Examples: >> examples in doctest format >> >>Comments: >> This section should include anything that should not be displayed in a >>help >> or other hard-copy output. Such things as substitution-directed >>directives >> should go here. >> >> > >This should be in a comment, not a docstring. Most tools won't be aware of this >special processing that you need to apply. > > > I'm fine with that. Prefer it actually. 
-Travis From sbasu at physics.cornell.edu Wed Jan 10 15:15:26 2007 From: sbasu at physics.cornell.edu (sbasu at physics.cornell.edu) Date: Wed, 10 Jan 2007 15:15:26 -0500 (EST) Subject: [SciPy-user] Docstring standards for NumPy and SciPy Message-ID: <20070110201526.C999AAC9@server.physics.cornell.edu> Dear Sender, I am taking a vacation from the 8th to the 20th of January. I may be able to check this email remotely, but sadly I cannot guarantee either that or a quick response. I will certainly read all my pending emails on the 21st. If you're mailing to contact the system administrator of PECF, please contact backup admin Ivan Daykov . To everyone else: please be patient and I will get back to you when I return! I am sorry if this has inconvenienced you in any way. Thanks, Sourish Basu From robert.kern at gmail.com Wed Jan 10 15:31:29 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 10 Jan 2007 14:31:29 -0600 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <45A5494C.9020405@ee.byu.edu> References: <45A4FB90.9000809@ee.byu.edu> <45A5434E.2090609@gmail.com> <45A5494C.9020405@ee.byu.edu> Message-ID: <45A54D21.7060900@gmail.com> Travis Oliphant wrote: > Robert Kern wrote: > >>> """ >>> one-line summary not using variable names or the function name >>> >>> A few sentences giving an extended description. >>> >>> Inputs: >>> var1 -- Explanation >>> variable2 -- Explanation >>> >>> Outputs: named, list, of, outputs >>> named -- explanation >>> list -- more detail >>> of -- blah, blah. >>> outputs -- even more blah >>> >>> >> My objection to this kind of list is that it uses up a lot of valuable >> horizontal space. Additionally, it either entails annoying column alignment *or* >> it looks really ugly with all of the descriptions unaligned. Look at most of the >> docstrings in scipy for examples. This is the Enthought standard: >> > I'm not sure I understand what is taking up horizontal space (do you > mean the required alighment)? > > I do like aligned descriptions. The "named -- " part is taking up horizontal space that should be used by the explanation part. It's a better use of both horizontal and vertical space, IMO, to put the parameter on its own line and follow it with a description. Longish parameter names make this kind of formatting prohibitive (and the other way around!), and I would like to encourage longer, more understandable names. >> Parameters >> ---------- >> var1 : type of var1 (loosely) >> Description of var1. >> variable2 : type of variable2 (loosely) >> Description of variable2. >> kwdarg : type of kwdarg, optional >> Description of kwdarg. >> > This is acceptable in my mind, also. Although I probably like '-' > instead of ':' as a separator, but just because I'm used to it. > >> The (loose) type information is quite handy. Sometimes it is quite difficult to >> tell what kind of thing the function needs from the usual description, >> particularly when both scalars or arrays are flying around. >> > Yeah, I could see that. As you understand but others may not, we > shouldn't get overly specific with the "types" though. We're supposed > to be doing duck typing wherever possible. Well, there's certainly no pointless type-*checking* going on. It's just that when I say I want an int, for example, I'm going to be treating what you give me as an int whatever it happens to be, so it better quack like an int. 
I'm just explicitly stating that precondition which already existed and give users a much clearer idea of how they are expected to use the function. It guides the user to doing the sensible, expected thing without preventing them from sensible, but unexpected things. >> The problem with both of these forms is that they are difficult to parse into >> correct lists. One issue that should be noted is that sometimes the description >> of a variable is omitted because the description of the function, the variable >> name, and the type information are more than enough context to tell you >> everything you need to know about the parameter. The only description you can >> give is just a repetition of the information that you just gave. Or two >> variables are substantially similar such that you want to describe them together >> only once. >> >> """ Clip the values of an array. >> >> Parameters >> ---------- >> arr : array >> lower : number >> upper : number >> The lower and upper bounds to clip the array against. >> """ >> >> This is one reason why I'm beginning to like the ReST form of this; it always >> has a marker to say that this is part of a list. > > I don't see why indentation doesn't give you that information already. > >> """ Clip the values of an array. >> >> :Parameters: >> - arr : array >> - lower : number >> - upper : number >> The lower and upper bounds to clip the array against. >> """ >> >> I don't think that you can write a tool that takes the ambiguous forms to the >> unambiguous ones. >> > I don't know. If you require indentation for non list entries, then it > seems un-ambiguous to me. Hmm. Point. reST didn't when I was playing with it in epydoc (although it handles the full form where all list elements have a description just fine). I overgeneralized. >>> Additional Inputs: kwdarg1 -- A little-used input not always needed. >>> kwdarg2 -- Some keyword arguments can and should be given in Inputs >>> Section. This is just for "little-used" inputs. >>> >> These should be in the Inputs section, not in a separate section. >> > I don't like this as a requirement because occassionally this clutters > the inputs list with a lot of inputs that are rarely used. The common > usage should be listed. I'd like there to be an option for an > Additional Inputs section. > > See some of the functions in scipy.optimize for examples. I can live with that. >>> Algorithm: >>> Notes about the implemenation algorithm (if needed). >>> >>> This can have multiple paragraphs as can all sections. >> >> Meh. This should be in the multi-line description at the top. >> > I don't know. Not always. Sometimes sure, but the algorithm can be > quite complicated to explain and should therefore be here. Perhaps we > should use Algorithm Notes for this section so that it is clear that > simple algorithms should be explained in the multi-line description part. I'm happy to also allow any arbitrary, unstructured section the author feels is appropriate. But I feel that the minimal standard that we promote should exclude it. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From sbasu at physics.cornell.edu Wed Jan 10 15:32:06 2007 From: sbasu at physics.cornell.edu (sbasu at physics.cornell.edu) Date: Wed, 10 Jan 2007 15:32:06 -0500 (EST) Subject: [SciPy-user] Docstring standards for NumPy and SciPy Message-ID: <20070110203206.E39DFAFE@server.physics.cornell.edu> Dear Sender, I am taking a vacation from the 8th to the 20th of January. I may be able to check this email remotely, but sadly I cannot guarantee either that or a quick response. I will certainly read all my pending emails on the 21st. If you're mailing to contact the system administrator of PECF, please contact backup admin Ivan Daykov . To everyone else: please be patient and I will get back to you when I return! I am sorry if this has inconvenienced you in any way. Thanks, Sourish Basu From swisher at enthought.com Wed Jan 10 16:28:57 2007 From: swisher at enthought.com (Janet Swisher) Date: Wed, 10 Jan 2007 15:28:57 -0600 Subject: [SciPy-user] SciPy-user Digest, Vol 41, Issue 18 In-Reply-To: References: Message-ID: <45A55A99.3090804@enthought.com> Robert Kern wrote: > This is the Enthought standard: > > > Parameters > ---------- > var1 : type of var1 (loosely) > Description of var1. > variable2 : type of variable2 (loosely) > Description of variable2. > kwdarg : type of kwdarg, optional > Description of kwdarg. > One issue that should be noted is that sometimes the description > of a variable is omitted because the description of the function, the variable > name, and the type information are more than enough context to tell you > everything you need to know about the parameter. The only description you can > give is just a repetition of the information that you just gave. Or two > variables are substantially similar such that you want to describe them together > only once. > > """ Clip the values of an array. > > Parameters > ---------- > arr : array > lower : number > upper : number > The lower and upper bounds to clip the array against. > """ > > This is one reason why I'm beginning to like the ReST form of this; it always > has a marker to say that this is part of a list. The first form above *is* a ReST list -- a definition list. However, ReST definition lists *require* a description paragraph for each item. > """ Clip the values of an array. > > :Parameters: > - arr : array > - lower : number > - upper : number > The lower and upper bounds to clip the array against. > """ I think ReST would interpret this as a bullet list. However, as written, it would output the description on the same line as the last bullet. To make the description come out as a continuation paragraph, it would need a blank line in between, which arguably creates too much visual separation in the source, especially if there are more items: :Parameters: - arr : array - lower : number - upper : number The lower and upper bounds to clip the array against. - another : number -- Janet Swisher --- Senior Technical Writer Enthought, Inc. 
http://www.enthought.com From stefan at sun.ac.za Wed Jan 10 16:43:24 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 10 Jan 2007 23:43:24 +0200 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <45A54D21.7060900@gmail.com> References: <45A4FB90.9000809@ee.byu.edu> <45A5434E.2090609@gmail.com> <45A5494C.9020405@ee.byu.edu> <45A54D21.7060900@gmail.com> Message-ID: <20070110214324.GO16625@mentat.za.net> On Wed, Jan 10, 2007 at 02:31:29PM -0600, Robert Kern wrote: > >> Parameters > >> ---------- > >> var1 : type of var1 (loosely) > >> Description of var1. > >> variable2 : type of variable2 (loosely) > >> Description of variable2. > >> kwdarg : type of kwdarg, optional > >> Description of kwdarg. > >> > > This is acceptable in my mind, also. Although I probably like '-' > > instead of ':' as a separator, but just because I'm used to it. > > > >> The (loose) type information is quite handy. Sometimes it is quite difficult to > >> tell what kind of thing the function needs from the usual description, > >> particularly when both scalars or arrays are flying around. > >> > > Yeah, I could see that. As you understand but others may not, we > > shouldn't get overly specific with the "types" though. We're supposed > > to be doing duck typing wherever possible. > > Well, there's certainly no pointless type-*checking* going on. It's just that > when I say I want an int, for example, I'm going to be treating what you give me > as an int whatever it happens to be, so it better quack like an int. I'm just > explicitly stating that precondition which already existed and give users a much > clearer idea of how they are expected to use the function. It guides the user to > doing the sensible, expected thing without preventing them from sensible, but > unexpected things. I like the idea of providing a loose type. Normally, when I write documentation, this is included in the description, i.e. "A 3xMxN array that indicates...", but separating it from the description actually makes a lot of sense. Duck typing is only useful when the user knows which objects quack. Cheers St?fan From v-nijs at kellogg.northwestern.edu Thu Jan 11 01:35:03 2007 From: v-nijs at kellogg.northwestern.edu (Vincent Nijs) Date: Thu, 11 Jan 2007 00:35:03 -0600 Subject: [SciPy-user] Install from source on mac Message-ID: I followed Robert Kerns' excellent source install guide for OS X 10.4 (G4). My programs seem to run fine. 
However scipy.test() does produce the following errors: Scipy from svn 2530 ====================================================================== FAIL: check loadmat case sparse ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packag es/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packag es/scipy/io/tests/test_mio.py", line 80, in _check_case self._check_level(k_label, expected, matdict[k]) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packag es/scipy/io/tests/test_mio.py", line 63, in _check_level decimal = 5) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packag es/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packag es/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal test sparse; file /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-package s/scipy/io/tests/./data/testsparse_6.5.1_GLNX86.mat, variable testsparse (mismatch 46.6666666667%) x: array([[ 3.03865194e-319, 3.16202013e-322, 1.04346664e-320, 2.05531309e-320, 2.56123631e-320], [ 3.16202013e-322, 0.00000000e+000, 0.00000000e+000,... y: array([[ 1., 2., 3., 4., 5.], [ 2., 0., 0., 0., 0.], [ 3., 0., 0., 0., 0.]]) ====================================================================== FAIL: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packag es/scipy/lib/blas/tests/test_blas.py", line 76, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packag es/numpy/testing/utils.py", line 156, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: ACTUAL: (-1.9985508918762207+2.8094236063523161e-37j) DESIRED: (-9+2j) ====================================================================== FAIL: check_dot (scipy.linalg.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packag es/scipy/linalg/tests/test_blas.py", line 75, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packag es/numpy/testing/utils.py", line 156, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: ACTUAL: (-1.9985508918762207+2.8094164317041787e-37j) DESIRED: (-9+2j) ---------------------------------------------------------------------- Ran 1604 tests in 17.649s FAILED (failures=3) From bthom at cs.hmc.edu Thu Jan 11 02:12:18 2007 From: bthom at cs.hmc.edu (belinda thom) Date: Wed, 10 Jan 2007 23:12:18 -0800 Subject: [SciPy-user] Install from source on mac In-Reply-To: References: Message-ID: <517AA203-758F-4183-B058-45EFB0637ECC@cs.hmc.edu> Hi Vincent, On Jan 10, 2007, at 10:35 PM, Vincent Nijs wrote: > I followed 
Robert Kerns' excellent source install guide for OS X > 10.4 (G4). > My programs seem to run fine. However scipy.test() does produce the > following errors: > > Scipy from svn 2530 I'm curious, did you install the wx package he recommended and if so, were you able to plot w/matplotlib using the WXagg backend? Thanks, --b From nwagner at iam.uni-stuttgart.de Thu Jan 11 03:11:17 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 11 Jan 2007 09:11:17 +0100 Subject: [SciPy-user] Handling of attachments in the bug tracker Message-ID: <45A5F125.3080907@iam.uni-stuttgart.de> Hi, How can I download attachments of tickets in the bug tracker ? Copy and paste doesn't work due to line numbers. Is it possible to toggle between a description with/without line numbers ? Nils From robert.kern at gmail.com Thu Jan 11 03:20:19 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 11 Jan 2007 02:20:19 -0600 Subject: [SciPy-user] Handling of attachments in the bug tracker In-Reply-To: <45A5F125.3080907@iam.uni-stuttgart.de> References: <45A5F125.3080907@iam.uni-stuttgart.de> Message-ID: <45A5F343.6070007@gmail.com> Nils Wagner wrote: > Hi, > > How can I download attachments of tickets in the bug tracker ? > Copy and paste doesn't work due to line numbers. Is it possible > to toggle between a description with/without line numbers ? Go to the bottom of the page. Click the link "Original Format". -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From gruben at bigpond.net.au Thu Jan 11 06:27:46 2007 From: gruben at bigpond.net.au (Gary Ruben) Date: Thu, 11 Jan 2007 22:27:46 +1100 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <45A5434E.2090609@gmail.com> References: <45A4FB90.9000809@ee.byu.edu> <45A5434E.2090609@gmail.com> Message-ID: <45A61F32.7000907@bigpond.net.au> I'm not sure I can add much to this discussion, except that currently no tools seem to do exactly what I think we want with ReST anyway, so if we're going to rely on existing tools, ReST may not be all that useful. I think what is needed is to generate a simple reference section in the proposed LaTeX version of the document which contains one indexed entry per numpy/scipy function, probably with 'see also' cross-references, and also an html version of same. Currently epydoc generates far too much information (2371 pages worth when I ran it on the numpy source a few days ago) and seems unable to be easily modified to reduce its output. The other thing we want is to be able to generate examples from heavily marked-up example modules a'la what FiPy does. I don't think epydoc even allows that without modification. Having said that, I think I'd tend toward using ReST because it doesn't grate with me like it seems to with Travis. 
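(For concreteness: the kind of run Gary describes above looks roughly like the invocations below. These are a sketch; epydoc's exact flags and the size of the output vary between epydoc versions, and the output directory names are illustrative only:)

    $ epydoc --html -o numpy-api numpy    # HTML API reference, one page per object
    $ epydoc --pdf -o numpy-ref numpy     # LaTeX -> PDF reference manual
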
Gary

From gael.varoquaux at normalesup.org Thu Jan 11 06:31:18 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 11 Jan 2007 12:31:18 +0100 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <45A61F32.7000907@bigpond.net.au> References: <45A4FB90.9000809@ee.byu.edu> <45A5434E.2090609@gmail.com> <45A61F32.7000907@bigpond.net.au> Message-ID: <20070111113111.GE20540@clipper.ens.fr> On Thu, Jan 11, 2007 at 10:27:46PM +1100, Gary Ruben wrote: > I'm not sure I can add much to this discussion, except that currently no > tools seem to do exactly what I think we want with ReST anyway, so if > we're going to rely on existing tools, ReST may not be all that useful. Well it is very easy to build tools using the docutils. Once the docstrings are extracted the layout becomes easy. Another reason to use ReST is that it can also go on the wiki. One could imagine Wiki editing of the docstrings. Yes I know, I am getting a bit carried away. Gaël

From sbasu at physics.cornell.edu Thu Jan 11 07:03:53 2007 From: sbasu at physics.cornell.edu (Sourish Basu) Date: Thu, 11 Jan 2007 07:03:53 -0500 Subject: [SciPy-user] Apologies for the vacation autoresponder that backfired so badly Message-ID: <45A627A9.2070907@physics.cornell.edu> Dear Scipy users, I'm really, really sorry for all the autoresponse emails that you received. I set my preferences just before leaving the house, and made the mistake of not checking the right filters. Worse, the place I'm visiting won't allow outgoing SSH connections, so I couldn't even change it even though I noticed the problem yesterday. Thanks to Robert Kern for turning my mail delivery off. Once again, I'm terribly sorry to have sent a lot of junk to everyone. It was rude, not to speak of the embarrassment :( Sourish Basu

From v-nijs at kellogg.northwestern.edu Thu Jan 11 08:51:43 2007 From: v-nijs at kellogg.northwestern.edu (Vincent Nijs) Date: Thu, 11 Jan 2007 07:51:43 -0600 Subject: [SciPy-user] Install from source on mac In-Reply-To: <517AA203-758F-4183-B058-45EFB0637ECC@cs.hmc.edu> Message-ID: Seems to work fine. I put the following in ~/.matplotlib/matplotlibrc:

    backend : WXAgg
    numerix : numpy

The following also works:

    backend : TKAgg

Vincent On 1/11/07 1:12 AM, "belinda thom" wrote: > Hi Vincent, > > On Jan 10, 2007, at 10:35 PM, Vincent Nijs wrote: > >> I followed Robert Kerns' excellent source install guide for OS X >> 10.4 (G4). >> My programs seem to run fine.
However scipy.test() does produce the >> following errors: >> >> Scipy from svn 2530 > > I'm curious, did you install the wx package he recommended and if so, > were you able to plot w/matplotlib using the WXagg backend? > > Thanks, > > --b > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From aisaac at american.edu Thu Jan 11 09:19:26 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 11 Jan 2007 09:19:26 -0500 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <45A61F32.7000907@bigpond.net.au> References: <45A4FB90.9000809@ee.byu.edu> <45A5434E.2090609@gmail.com><45A61F32.7000907@bigpond.net.au> Message-ID: On Thu, 11 Jan 2007, Gary Ruben apparently wrote: > currently no tools seem to do exactly what I think we want > with ReST ... I think what is needed is to generate > a simple reference section in the proposed LaTeX version > of the document which contains one indexed entry per > numpy/scipy function, probably with 'see also' > cross-references, and also an html version of same. ... > The other thing we want is to be able to generate examples > from heavily marked-up example modules a'la what FiPy > does. I don't think epydoc even allows that without > modification. I would be great if someone who can be very specific about such needs would ask Ed Loper if they are easily achieved. It would be a shame to speculate incorrectly about that. Cheers, Alan Isaac From nwagner at iam.uni-stuttgart.de Thu Jan 11 10:08:44 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 11 Jan 2007 16:08:44 +0100 Subject: [SciPy-user] Reordering of variables in sparse matrices in scipy Message-ID: <45A652FC.5090207@iam.uni-stuttgart.de> Hi all, Is there a way for reordering variables in sparse matrices in scipy for example by Metis ? Nils http://glaros.dtc.umn.edu/gkhome/views/metis http://www.csit.fsu.edu/~burkardt/c_src/oemetis/oemetis.html From ellisonbg.net at gmail.com Thu Jan 11 10:58:47 2007 From: ellisonbg.net at gmail.com (Brian Granger) Date: Thu, 11 Jan 2007 08:58:47 -0700 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: References: <45A4FB90.9000809@ee.byu.edu> <45A5434E.2090609@gmail.com> <45A61F32.7000907@bigpond.net.au> Message-ID: <6ce0ac130701110758jfc19b3fmf2b205dccc796fd3@mail.gmail.com> While I am all for better docstrings, I feel that the current state of docstring formats is similar to how array libraries were before NumPy: fragmented. I think that using an existing standard (I prefer ReST that can be processed by epydoc) will go much further in encouraging folks to contribute good docstrings. Also, using a common standard opens the door for interesting applications and usages of those docstrings. For instance, IPython gives a user access to docstrings using the foo? notation. We already have a AJAX/web based IPython GUI and it would be very nice if foo? in that could convert the docstring to html and display that. Using ReST, that is easy - with everyone using their own favorite markup language, it is not practical. Because of this, and the reasons that Fernando listed (too much cognitive load in tracking all the different markup languages already) we are using epydoc+ReST in IPython from now forward. I realize that the line noise of ReST is less readable in Terminal based things, but I feel it is a nice compromise for the capabilities/features that you get. 
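(A minimal sketch of the kind of rendering tool Brian describes, built on the public docutils API. It assumes the docstring is valid ReST; `docstring_to_html` is an illustrative name, not an existing IPython or docutils function:)

    import inspect
    from docutils.core import publish_string

    def docstring_to_html(obj):
        """Render an object's ReST docstring to an HTML string."""
        # inspect.getdoc also cleans up the docstring's indentation
        doc = inspect.getdoc(obj) or ''
        return publish_string(doc, writer_name='html')

A web frontend could then serve `docstring_to_html(some_function)` directly in response to a `foo?` query.
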
Cheers, Brian From perry at stsci.edu Thu Jan 11 11:27:15 2007 From: perry at stsci.edu (Perry Greenfield) Date: Thu, 11 Jan 2007 11:27:15 -0500 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <6ce0ac130701110758jfc19b3fmf2b205dccc796fd3@mail.gmail.com> References: <45A4FB90.9000809@ee.byu.edu> <45A5434E.2090609@gmail.com> <45A61F32.7000907@bigpond.net.au> <6ce0ac130701110758jfc19b3fmf2b205dccc796fd3@mail.gmail.com> Message-ID: <7A0A7D8E-5E35-4D11-822B-DCE038CD93BA@stsci.edu> On Jan 11, 2007, at 10:58 AM, Brian Granger wrote: > > Because of this, and the reasons that Fernando listed (too much > cognitive load in tracking all the different markup languages already) > we are using epydoc+ReST in IPython from now forward. > > I realize that the line noise of ReST is less readable in Terminal > based things, but I feel it is a nice compromise for the > capabilities/features that you get. Any reason ipython can't use epydoc or some other tool to format the markup in ascii (I forget if epydoc does ascii output) so that the user doesn't see the 'line noise' when using the ipython introspection features? Perry From ellisonbg.net at gmail.com Thu Jan 11 17:42:18 2007 From: ellisonbg.net at gmail.com (Brian Granger) Date: Thu, 11 Jan 2007 15:42:18 -0700 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <7A0A7D8E-5E35-4D11-822B-DCE038CD93BA@stsci.edu> References: <45A4FB90.9000809@ee.byu.edu> <45A5434E.2090609@gmail.com> <45A61F32.7000907@bigpond.net.au> <6ce0ac130701110758jfc19b3fmf2b205dccc796fd3@mail.gmail.com> <7A0A7D8E-5E35-4D11-822B-DCE038CD93BA@stsci.edu> Message-ID: <6ce0ac130701111442r3f6fc7d5k5c464a5dc3063d95@mail.gmail.com> > Any reason ipython can't use epydoc or some other tool to format the > markup in ascii (I forget if epydoc does ascii output) so that the > user doesn't see the 'line noise' when using the ipython > introspection features? This would be really nice and address the line noise issue - at least within ipython. I'm sure it could be done as epydoc just uses docutils to parse the docstrings. I don't know much about the docutils internals, but I am assuming you could access the document tree of the doc string and then print things however you want. This would be rather cool and could be done dynamically on the fly. Thoughts Fernando? Brian > Perry > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From gael.varoquaux at normalesup.org Thu Jan 11 17:51:55 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 11 Jan 2007 23:51:55 +0100 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <6ce0ac130701111442r3f6fc7d5k5c464a5dc3063d95@mail.gmail.com> References: <45A4FB90.9000809@ee.byu.edu> <45A5434E.2090609@gmail.com> <45A61F32.7000907@bigpond.net.au> <6ce0ac130701110758jfc19b3fmf2b205dccc796fd3@mail.gmail.com> <7A0A7D8E-5E35-4D11-822B-DCE038CD93BA@stsci.edu> <6ce0ac130701111442r3f6fc7d5k5c464a5dc3063d95@mail.gmail.com> Message-ID: <20070111225155.GC4479@clipper.ens.fr> On Thu, Jan 11, 2007 at 03:42:18PM -0700, Brian Granger wrote: > This would be really nice and address the line noise issue - at least > within ipython. I'm sure it could be done as epydoc just uses > docutils to parse the docstrings. I don't know much about the > docutils internals, but I am assuming you could access the document > tree of the doc string and then print things however you want. 
> This would be rather cool and could be done dynamically on the fly.

I can volunteer to implement this, if nobody is in too much of a hurry (I am quite busy currently). I have a bit of experience with the docutils. On the other hand, docutils are pretty simple to understand, so I guess any regular contributor to scipy could do this. Gaël

From gael.varoquaux at normalesup.org Thu Jan 11 17:56:11 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 11 Jan 2007 23:56:11 +0100 Subject: [SciPy-user] Getting started wiki page Message-ID: <20070111225611.GD4479@clipper.ens.fr> Once again I had to explain to a new student what scipy was, what python was, and what he should read in order to start working with these tools. I have started a "Getting Started" page on the wiki to answer this problem. The idea would be to guide the complete beginner to get him productive as quickly as possible (like the "Getting started" page in matlab). Currently the page is quite empty and pretty bad, but if people start improving it it could be very useful and hopefully be linked from the front page. The page is not linked by any page yet, but it is accessible at: http://scipy.org/Getting_Started Cheers, Gaël

From fperez.net at gmail.com Thu Jan 11 17:58:55 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 11 Jan 2007 15:58:55 -0700 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <6ce0ac130701111442r3f6fc7d5k5c464a5dc3063d95@mail.gmail.com> References: <45A4FB90.9000809@ee.byu.edu> <45A5434E.2090609@gmail.com> <45A61F32.7000907@bigpond.net.au> <6ce0ac130701110758jfc19b3fmf2b205dccc796fd3@mail.gmail.com> <7A0A7D8E-5E35-4D11-822B-DCE038CD93BA@stsci.edu> <6ce0ac130701111442r3f6fc7d5k5c464a5dc3063d95@mail.gmail.com> Message-ID: On 1/11/07, Brian Granger wrote: > > Any reason ipython can't use epydoc or some other tool to format the > > markup in ascii (I forget if epydoc does ascii output) so that the > > user doesn't see the 'line noise' when using the ipython > > introspection features? > > This would be really nice and address the line noise issue - at least > within ipython. I'm sure it could be done as epydoc just uses > docutils to parse the docstrings. I don't know much about the > docutils internals, but I am assuming you could access the document > tree of the doc string and then print things however you want. This > would be rather cool and could be done dynamically on the fly. > > Thoughts Fernando? +1, and super-easy to implement. The place to look is: http://projects.scipy.org/ipython/ipython/browser/ipython/trunk/IPython/OInspect.py The getdoc function is the single point of entry for docstring extraction; it's just a matter of enhancing it to use docutils if present (but reverting gracefully to normal operation if not). Cheers, f

From s.mientki at ru.nl Thu Jan 11 18:29:14 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Fri, 12 Jan 2007 00:29:14 +0100 Subject: [SciPy-user] Getting started wiki page In-Reply-To: <20070111225611.GD4479@clipper.ens.fr> References: <20070111225611.GD4479@clipper.ens.fr> Message-ID: <45A6C84A.4070900@ru.nl> Gael Varoquaux wrote: > Once again I had to explain to a new student what scipy was, what python > was, and what he should read in order to start working with these tools. > > I have started a "Getting Started" page on the wiki to answer this > problem. The idea would be to guide the complete beginner to get him > productive as quickly as possible (like the "Getting started" page in > matlab).
> very good idea, as I'm a starter myself (started 2 weeks ago), I spent a few days typing my first "Python characters". That was until I discovered the Enthought edition (worth mentioning on your page). Secondly, you mention SPE, which crashed 2 times in 5 minutes :-( So I switched to PyScripter (which crashes when doing plots in debug mode), but I can live with that ;-) btw I know it's a wiki, but I feel myself too much of a newbie. cheers, Stef Mientki

From stefan at sun.ac.za Thu Jan 11 18:33:15 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu, 11 Jan 2007 23:33:15 +0000 Subject: [SciPy-user] Getting started wiki page In-Reply-To: <45A6C84A.4070900@ru.nl> References: <20070111225611.GD4479@clipper.ens.fr> <45A6C84A.4070900@ru.nl> Message-ID: <20070111233315.GA14863@zaphod.lagged.za.net> On Fri, Jan 12, 2007 at 12:29:14AM +0100, Stef Mientki wrote: > Gael Varoquaux wrote: > > Once again I had to explain to a new student what scipy was, what python > > was, and what he should read in order to start working with these tools. > > > > I have started a "Getting Started" page on the wiki to answer this > > problem. The idea would be to guide the complete beginner to get him > > productive as quickly as possible (like the "Getting started" page in > > matlab). > > very good idea, as I'm a starter myself (started 2 weeks ago), > I spent a few days typing my first "Python characters". > That was until I discovered the Enthought edition (worth mentioning on > your page). > Secondly, you mention SPE, which crashed 2 times in 5 minutes :-( > So I switched to PyScripter (which crashes when doing plots in debug mode), > but I can live with that ;-) > > btw I know it's a wiki, but I feel myself too much of a newbie. Being a 'newbie' is maybe the best time to write documentation -- while you still know what new users need to know, and how to explain it to them simply. Don't feel shy to contribute -- as you say, it's a wiki, so even if you just scratch down a few ideas those will eventually turn into useful paragraphs. Cheers Stéfan

From ggellner at uoguelph.ca Thu Jan 11 19:08:22 2007 From: ggellner at uoguelph.ca (Gabriel Gellner) Date: Thu, 11 Jan 2007 19:08:22 -0500 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <45A4FB90.9000809@ee.byu.edu> References: <45A4FB90.9000809@ee.byu.edu> Message-ID: <1168560502.22630.6.camel@angus> +1 on epydoc and reST I have had to learn to parse far too many markup languages already . . . reST seems as good as any (and is not going away in many other projects, so we would be adding a new language, not replacing anything). Gabriel

From peridot.faceted at gmail.com Thu Jan 11 19:20:03 2007 From: peridot.faceted at gmail.com (A. M. Archibald) Date: Thu, 11 Jan 2007 19:20:03 -0500 Subject: [SciPy-user] Getting started wiki page In-Reply-To: <20070111225611.GD4479@clipper.ens.fr> References: <20070111225611.GD4479@clipper.ens.fr> Message-ID: On 11/01/07, Gael Varoquaux wrote: > I have started a "Getting Started" page on the wiki to answer this > problem. The idea would be to guide the complete beginner to get him > productive as quickly as possible (like the "Getting started" page in > matlab). I think this is a good idea! I made some small changes and added an example session, just so potential users could see what it looks like, and what you can do with just a few lines. I'm not sure what all else belongs there - pointers to a tutorial? (Is there a nice "how to work effectively with scipy" tutorial?) Pointers to other web resources?
Pointers to specialist packages like PyDS? A. M. Archibald

From fredmfp at gmail.com Fri Jan 12 03:56:45 2007 From: fredmfp at gmail.com (fred) Date: Fri, 12 Jan 2007 09:56:45 +0100 Subject: [SciPy-user] Getting started wiki page In-Reply-To: References: <20070111225611.GD4479@clipper.ens.fr> Message-ID: <45A74D4D.4090106@gmail.com> A. M. Archibald a écrit : > I think this is a good idea! I made some small changes and added an > example session, just so potential users could see what it looks like, > and what you can do with just a few lines. I'm not sure what all else > belongs there - pointers to a tutorial? (Is there a nice "how to work > effectively with scipy" tutorial?) Pointers to other web resources? http://www.rexx.com/~dkuhlman/scipy_course_01.html ? -- http://scipy.org/FredericPetit

From millman at berkeley.edu Fri Jan 12 10:33:49 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Fri, 12 Jan 2007 07:33:49 -0800 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <1168560502.22630.6.camel@angus> References: <45A4FB90.9000809@ee.byu.edu> <1168560502.22630.6.camel@angus> Message-ID: It seems to me that the discussion about the proposed documentation standard is well enough along that it is time for some decisions to be reached. To help facilitate this, I created a wiki page to try and capture the main proposals: http://projects.scipy.org/scipy/numpy/wiki/DocstringStandards To help simplify the discussion, I split the issue into 2 parts:

1. content (or the sections of the docstring)
2. markup

I think that the basic content or sections are more or less agreed upon, and I believe the sections I listed were what everyone was agreeing on. Hopefully, I haven't misunderstood. There are some minor unresolved points, like whether to use Inputs or Parameters. But that should be easy to resolve once we decide which markup to use. From what I can see it looks like there are 2 main markup choices:

m1. reST
m2. preST (pseudo-reST)

My understanding is that most people have a slight preference for using reST. Travis is a strong supporter of preST because of the increased line-noise in reST. The main argument for reST seems to be that it is a pre-existing standard. If there are no more major arguments to be made for either markup choice, I think we should vote or come to some sort of agreement on whether to use reST or preST by early next week. Once the markup is chosen, I will update the wiki. Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/

From giorgio.luciano at chimica.unige.it Fri Jan 12 10:44:18 2007 From: giorgio.luciano at chimica.unige.it (Giorgio Luciano) Date: Fri, 12 Jan 2007 16:44:18 +0100 Subject: [SciPy-user] round all element of an array Message-ID: <45A7ACD2.6030501@chimica.unige.it> I've searched in the manual and found nothing. Is it possible to round all the values of an array to fixed decimals? The round option works for scalars but not for arrays. Thanks in advance for replies Giorgio

From nkottens at nd.edu Fri Jan 12 10:47:55 2007 From: nkottens at nd.edu (Nicholas Kottenstette) Date: Fri, 12 Jan 2007 10:47:55 -0500 Subject: [SciPy-user] round all element of an array In-Reply-To: <45A7ACD2.6030501@chimica.unige.it> References: <45A7ACD2.6030501@chimica.unige.it> Message-ID: <45A7ADAB.3060909@nd.edu> Giorgio Luciano wrote: > I've searched in the manual and found nothing. > Is it possible to round all the values of an array to fixed decimals?
> The round option works for scalars but not for arrays.
> Thanks in advance for replies
> Giorgio
>
> _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user

?scipy.round_
Base Class: String Form: Namespace: Interactive
File: /usr/local/neclab_tools/lib/python2.4/site-packages/numpy/core/fromnumeric.py
Definition: scipy.round_(a, decimals=0, out=None)
Docstring:
    Returns reference to result. Copies a and rounds to 'decimals' places.

    Keyword arguments:
        decimals -- number of decimal places to round to (default 0).
        out -- existing array to use for output (default copy of a).

    Returns:
        Reference to out, where None specifies a copy of the original array a.

    Round to the specified number of decimals. When 'decimals' is negative it specifies the number of positions to the left of the decimal point. The real and imaginary parts of complex numbers are rounded separately. Nothing is done if the array is not of float type and 'decimals' is greater than or equal to 0. The keyword 'out' may be used to specify a different array to hold the result rather than the default 'a'. If the type of the array specified by 'out' differs from that of 'a', the result is cast to the new type, otherwise the original type is kept. Floats round to floats by default. Numpy rounds to even. Thus 1.5 and 2.5 round to 2.0, -0.5 and 0.5 round to 0.0, etc. Results may also be surprising due to the inexact representation of decimal fractions in IEEE floating point and the errors introduced in scaling the numbers when 'decimals' is something other than 0.

    The function around is an alias for round_.

Sincerely, Nicholas Kottenstette

From ndbecker2 at gmail.com Fri Jan 12 11:03:11 2007 From: ndbecker2 at gmail.com (Neal Becker) Date: Fri, 12 Jan 2007 11:03:11 -0500 Subject: [SciPy-user] Docstring standards for NumPy and SciPy References: <45A4FB90.9000809@ee.byu.edu> <45A5434E.2090609@gmail.com> <45A61F32.7000907@bigpond.net.au> <6ce0ac130701110758jfc19b3fmf2b205dccc796fd3@mail.gmail.com> <7A0A7D8E-5E35-4D11-822B-DCE038CD93BA@stsci.edu> <6ce0ac130701111442r3f6fc7d5k5c464a5dc3063d95@mail.gmail.com> <20070111225155.GC4479@clipper.ens.fr> Message-ID: Gael Varoquaux wrote: > On Thu, Jan 11, 2007 at 03:42:18PM -0700, Brian Granger wrote: >> This would be really nice and address the line noise issue - at least >> within ipython. I'm sure it could be done as epydoc just uses >> docutils to parse the docstrings. I don't know much about the >> docutils internals, but I am assuming you could access the document >> tree of the doc string and then print things however you want. This >> would be rather cool and could be done dynamically on the fly. > > I can volunteer to implement this, if nobody is in too much of a hurry (I > am quite busy currently). I have a bit of experience with the docutils. > > On the other hand, docutils are pretty simple to understand, so I guess > any regular contributor to scipy could do this. > I'd just like to add that a number of posts have advocated for a design where docstrings are read via parsing instead of introspection. Unfortunately, we need to document objects written in languages other than python (C, C++). A hybrid system is possible (use parsing when possible, revert to introspection), but a system that only uses parsing doesn't meet the requirements.
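(Two quick illustrations from a Python 2 interpreter session. First, Neal's point: introspection retrieves the docstring of a C-implemented ufunc, where a source parser would have nothing to read. Second, returning to Giorgio's question above, the array-aware rounding that the round_/around docstring documents. Exact output formatting depends on your numpy version:)

    >>> import inspect, numpy
    >>> # numpy.cos is a C ufunc, so there is no Python source to parse,
    >>> # yet introspection still reaches its docstring:
    >>> inspect.getdoc(numpy.cos) is not None
    True
    >>> # around/round_ rounds every element of an array at once:
    >>> numpy.around([1.2345, 6.789], decimals=2)
    array([ 1.23,  6.79])
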
From perry at stsci.edu Fri Jan 12 11:44:38 2007 From: perry at stsci.edu (Perry Greenfield) Date: Fri, 12 Jan 2007 11:44:38 -0500 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: References: <45A4FB90.9000809@ee.byu.edu> <45A5434E.2090609@gmail.com> <45A61F32.7000907@bigpond.net.au> <6ce0ac130701110758jfc19b3fmf2b205dccc796fd3@mail.gmail.com> <7A0A7D8E-5E35-4D11-822B-DCE038CD93BA@stsci.edu> <6ce0ac130701111442r3f6fc7d5k5c464a5dc3063d95@mail.gmail.com> <20070111225155.GC4479@clipper.ens.fr> Message-ID: A few more comments on this topic.

1) regarding having ipython process the docstring. I suppose that one could argue that preprocessing the docstring universally may screw up those not written for the markup language. Wouldn't it be possible to put some sort of marker in the module (flag definition, e.g., ipdoc=True) so that ipython could check the module that the docstring comes from to see if it should be preprocessed? Maybe this isn't worth worrying about.

2) I personally much prefer "arguments" to "inputs" (and secondarily "parameters" to "inputs"). Some functions may modify their arguments. Some arguments may be both inputs and outputs. The explanation should indicate which arguments may be modified by the function (I'd prefer some very terse notation that was standard to scipy/numpy, e.g., "arr: [m] the 2-d array to be FFT'ed. This is done in-place." Yes, this example is redundant, but it is nice to make it very clear visually that the argument is modified in some way without having to read the whole thing).

3) I like the idea of some loose indication of the type. But here also it would be nice for some terse (for common cases) notation *and* some precision on terminology. For example, in Fernando's example, does number mean complex number? I'd tend to think that in most common cases, a numerical type includes all those lower types. If I require some sort of float, an int will suffice unless explicitly listed as not. So perhaps the use of b, i, f, c for boolean, int, float, or complex interpretation of the parameter with the default interpretation including any lower type. Perhaps some notation for dimensionality should be standardized (e.g., 2d, <3d, nd, s) where s means scalar only (mind you, I haven't given the notation much thought so I'm not wedded to these examples). The use of a concise standard notation may be a little bit of an impediment to new users, but if learning it is easy and the key is easy to get, it can make the documentation both easier to scan and much less wordy, and more precise.

4) I also tend to think there is sometimes some imprecision regarding what are keyword parameters. If I'm not mistaken, rarely used positional parameters that have defaults are often referred to as keyword arguments (since that is often how they are specified, i.e., using keyword syntax). So, in the case of these docstrings, are these to go under arguments or keywords?

5) along this line, it would be nice to have a standard notation for default values for arguments. Perhaps this is all getting into premature detail, but I don't see why this can't all be settled ahead of time (and save some grief down the road).
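(Pulling the threads of this discussion together, a docstring under the convention being converged on might look something like the sketch below. It is purely illustrative -- the function is hypothetical and none of this notation had been ratified at this point in the thread:)

    def clip(arr, lower, upper=None):
        """Clip the values of an array.

        Parameters
        ----------
        arr : array
            The array to clip. This is modified in-place.
        lower : number
            The lower bound to clip against.
        upper : number, optional
            The upper bound. If not given, only lower clipping is done
            (special default handling explained in prose, rather than
            repeating the literal default from the signature).

        Returns
        -------
        arr : array
            The clipped input, returned for convenience.
        """
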
Perry From gael.varoquaux at normalesup.org Fri Jan 12 11:53:18 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Fri, 12 Jan 2007 17:53:18 +0100 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: References: <45A4FB90.9000809@ee.byu.edu> <45A5434E.2090609@gmail.com> <45A61F32.7000907@bigpond.net.au> <6ce0ac130701110758jfc19b3fmf2b205dccc796fd3@mail.gmail.com> <7A0A7D8E-5E35-4D11-822B-DCE038CD93BA@stsci.edu> <6ce0ac130701111442r3f6fc7d5k5c464a5dc3063d95@mail.gmail.com> <20070111225155.GC4479@clipper.ens.fr> Message-ID: <20070112165313.GK1822@clipper.ens.fr> On Fri, Jan 12, 2007 at 11:44:38AM -0500, Perry Greenfield wrote: > 1) regarding having ipython process the docstring. I suppose that one > could argue that preprocessing the docstring universally may screw up > those not written for the markup language. Wouldn't it be possible to > put some sort of marker in the module (flag definition, e.g., > ipdoc=True) so that ipython could check the module that the docstring > comes from to see if it should be preprocessed? Maybe this isn't > worth worrying about. This already exists, a variable in the module can specify this to epydoc: __docformat__ = "restructuredtext en" See http://epydoc.sourceforge.net/othermarkup.html Ga?l From robert.kern at gmail.com Fri Jan 12 14:58:20 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 12 Jan 2007 13:58:20 -0600 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: References: <45A4FB90.9000809@ee.byu.edu> <45A5434E.2090609@gmail.com> <45A61F32.7000907@bigpond.net.au> <6ce0ac130701110758jfc19b3fmf2b205dccc796fd3@mail.gmail.com> <7A0A7D8E-5E35-4D11-822B-DCE038CD93BA@stsci.edu> <6ce0ac130701111442r3f6fc7d5k5c464a5dc3063d95@mail.gmail.com> <20070111225155.GC4479@clipper.ens.fr> Message-ID: <45A7E85C.9030609@gmail.com> Neal Becker wrote: > I'd just like to add, that a number of posts have advocated for a design > where docstrings are read via parsing instead of introspection. > Unfortunately, we need to document objects written in languages other than > python (C, C++). A hybrid system is possible (use parsing when possible, > revert to introspection), but a system that only uses parsing doesn't meet > the requirements. And for what it's worth, epydoc 3 is just such a hybrid system. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Fri Jan 12 15:09:05 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 12 Jan 2007 14:09:05 -0600 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: References: <45A4FB90.9000809@ee.byu.edu> <45A5434E.2090609@gmail.com> <45A61F32.7000907@bigpond.net.au> <6ce0ac130701110758jfc19b3fmf2b205dccc796fd3@mail.gmail.com> <7A0A7D8E-5E35-4D11-822B-DCE038CD93BA@stsci.edu> <6ce0ac130701111442r3f6fc7d5k5c464a5dc3063d95@mail.gmail.com> <20070111225155.GC4479@clipper.ens.fr> Message-ID: <45A7EAE1.6080506@gmail.com> Perry Greenfield wrote: > 3) I like the idea of some loose indication of the type. But here > also it would be nice for some terse (for common cases) notation > *and* some precision on terminology. For example, in Fernando's > example, does number mean complex number? I'd tend to think that in > most common cases, a numerical type includes all those lower types. 
> If I require some sort of float, an int will suffice unless > explicitly listed as not. So perhaps the use of b, i, f, c for > boolean, int, float, or complex interpretation of the parameter with > the default interpretation including any lower type. I'd rather use the full words. > Perhaps some > notation for dimensionality should be standardized (e.g., 2d, <3d, > nd, s) where s means scalar only (mind you, I haven't given the > notation much thought so I'm not wedded to these examples). The use > of a concise standard notation may be a little bit of an impediment > to new users, but if learning it is easy and they key is easy to get, > it can make the documentation both easier to scan and much less > wordy, and more precise. I've taken to using shape tuples: (n, m), (n,), etc. For several arguments that may need to have some axes of the same size, this is a quite terse way of expressing that. It also gives you a way to refer to a specific axis ("the n vectors of m dimensions each"). > 4) I also tend to think there is sometimes some imprecision regarding > what are keyword parameters. If I'm not mistaken, rarely used > positional parameters that have defaults are often referred to as > keyword arguments (since that is often how they are specified, i.e., > using keyword syntax). So, in the case of these docstrings, are > these to go under arguments or keywords? I would suggest that there are no "positional arguments with defaults." They are all keyword arguments. pylab-style argument lists should be forbidden in numpy and scipy. :-) So I'd go with Arguments: for everything you have to specify and Additional Arguments: for everything that you don't. > 5) along this line, it would be nice to have a standard notation for > default values for arguments. Actually, I'd suggest leaving them out of the docstring entirely. The introspection tells you what the default is. If there is some special processing of that default value ("if arg is None: arg = something_else()"), it should be explained in prose (mostly because the general case needs explanation in prose and can't be reduced to "default=something"). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From perry at stsci.edu Fri Jan 12 15:28:25 2007 From: perry at stsci.edu (Perry Greenfield) Date: Fri, 12 Jan 2007 15:28:25 -0500 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <45A7EAE1.6080506@gmail.com> References: <45A4FB90.9000809@ee.byu.edu> <45A5434E.2090609@gmail.com> <45A61F32.7000907@bigpond.net.au> <6ce0ac130701110758jfc19b3fmf2b205dccc796fd3@mail.gmail.com> <7A0A7D8E-5E35-4D11-822B-DCE038CD93BA@stsci.edu> <6ce0ac130701111442r3f6fc7d5k5c464a5dc3063d95@mail.gmail.com> <20070111225155.GC4479@clipper.ens.fr> <45A7EAE1.6080506@gmail.com> Message-ID: <830F2850-A4E0-478D-9B91-9E03BD490A00@stsci.edu> On Jan 12, 2007, at 3:09 PM, Robert Kern wrote: > Perry Greenfield wrote: > >> 3) I like the idea of some loose indication of the type. But here >> also it would be nice for some terse (for common cases) notation >> *and* some precision on terminology. For example, in Fernando's >> example, does number mean complex number? I'd tend to think that in >> most common cases, a numerical type includes all those lower types. >> If I require some sort of float, an int will suffice unless >> explicitly listed as not. 
So perhaps the use of b, i, f, c for >> boolean, int, float, or complex interpretation of the parameter with >> the default interpretation including any lower type. > > I'd rather use the full words. > That's fine, but there should still be conventions about what words mean (e.g., number). > > I would suggest that there are no "positional arguments with > defaults." They are > all keyword arguments. pylab-style argument lists should be > forbidden in numpy > and scipy. :-) > If you mean that if an argument has a default, it should be treated as a keyword and advertised as such, that's fine (or are you arguing that it would be impermissible to define functions with positional arguments with defaults? If so, that would run contrary to lots of current usage). Is it an issue of how these are documented or implemented? (If a positional argument has a default, you can't keep people from using it in a positional way rather than with keyword-style usage.) > So I'd go with Arguments: for everything you have to specify and > Additional > Arguments: for everything that you don't. > Fine with me. >> 5) along this line, it would be nice to have a standard notation for >> default values for arguments. > > Actually, I'd suggest leaving them out of the docstring entirely. The > introspection tells you what the default is. If there is some > special processing > of that default value ("if arg is None: arg = something_else()"), > it should be > explained in prose (mostly because the general case needs > explanation in prose > and can't be reduced to "default=something"). True enough (and I wondered about that right after posting, since having to specify something that is already in the source code is bound to cause errors when the docstring isn't updated appropriately). Still, when showing the formatted docstring, it may be helpful to show the default as derived from introspection through use of some software formatting mechanism (this matches a function argument, does it have a default? If yes, show it). Certainly there are lots of cases where explanatory text is needed (the default of None usually doesn't say much by itself since the software will usually interpret that as meaning something in particular. No software formatting system is going to be able to figure that out). But for things that aren't None (or odd cases) it can be useful to show. Perry

From s.mientki at ru.nl Sat Jan 13 05:16:36 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Sat, 13 Jan 2007 11:16:36 +0100 Subject: [SciPy-user] is there a difference between numpy and scipy array ? Message-ID: <45A8B184.4010902@ru.nl> Forgive my ignorance, but reading more and more docs, I now get the feeling there's a difference between Numpy and Scipy arrays, is that so ? and if so what is the difference ? I'm trying to move from MatLab to Python, (for real-time data-analysis), and I always import Scipy (with good success ;-), (supposing that Scipy is an extension of Numpy, and Scipy has all I need). thanks, Stef Mientki

From robert.kern at gmail.com Sat Jan 13 05:20:44 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 13 Jan 2007 04:20:44 -0600 Subject: [SciPy-user] is there a difference between numpy and scipy array ? In-Reply-To: <45A8B184.4010902@ru.nl> References: <45A8B184.4010902@ru.nl> Message-ID: <45A8B27C.3050604@gmail.com> Stef Mientki wrote: > Forgive my ignorance, but reading more and more docs, > I now get the feeling there's a difference between Numpy and Scipy arrays, > is that so ? No. scipy is a package that *uses* numpy.
That's all. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From s.mientki at ru.nl Sat Jan 13 05:55:15 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Sat, 13 Jan 2007 11:55:15 +0100 Subject: [SciPy-user] is there a difference between numpy and scipy array ? In-Reply-To: <45A8B27C.3050604@gmail.com> References: <45A8B184.4010902@ru.nl> <45A8B27C.3050604@gmail.com> Message-ID: <45A8BA93.1040705@ru.nl> Robert Kern wrote: > Stef Mientki wrote: > >> Forgive my ignorance, but reading more and more docs, >> I now get the feeling there's a difference between Numpy and Scipy arrays, >> is that so ? >> > > No. scipy is a package that *uses* numpy. That's all. > Thanks Robert, I was distracted by the terms "array" and "ndarray", so I assume now the latter stands for N-dimensional array. cheers, Stef

From rdmoores at gmail.com Sat Jan 13 09:39:28 2007 From: rdmoores at gmail.com (Dick Moores) Date: Sat, 13 Jan 2007 06:39:28 -0800 Subject: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places? Message-ID: Can python compute ln(640320**3 + 744)/163**.5 to 30 places? I ran across this the other day. Its value is said to be the same as pi to about 30 places. I'd like to know if it is possible to do this in python with SciPy. A friend pointed out that the Windows Calculator gives about 30 significant digits. I tried it and got 3.1415926535897932384626433832797. Pi to 32 places is 3.14159265358979323846264338327950, so the calculator is accurate to 31 significant digits (including the 3.). So, how about SciPy? Thanks, Dick Moores Win XP Pro Python 2.5 yet to install SciPy

From dd55 at cornell.edu Sat Jan 13 13:35:37 2007 From: dd55 at cornell.edu (Darren Dale) Date: Sat, 13 Jan 2007 13:35:37 -0500 Subject: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places? In-Reply-To: References: Message-ID: <200701131335.37536.dd55@cornell.edu> On Saturday 13 January 2007 9:39 am, Dick Moores wrote: > Can python compute ln(640320**3 + 744)/163**.5 to 30 places? It looks like it is good to about 16 digits:

>>> '%1.32f'%(log(640320**3 + 744)/163**.5)
'3.14159265358979311599796346854419'
'3.14159265358979323846264338327950'
 1.234567890123456^
In-Reply-To: References: <200701131335.37536.dd55@cornell.edu> Message-ID: <20070113190555.GA29640@ssh.cv.nrao.edu> On Sat, Jan 13, 2007 at 10:59:51AM -0800, Dick Moores wrote: > On 1/13/07, Darren Dale wrote: > > On Saturday 13 January 2007 9:39 am, Dick Moores wrote: > > > Can python compute ln(640320**3 + 744)/163**.5 to 30 places? > > > > It looks like it is good to about 16 digits: > > > > >>> '%1.32f'%(log(640320**3 + 744)/163**.5) > > '3.14159265358979311599796346854419' > > > > '3.14159265358979323846264338327950' > > 1.234567890123456^ > > Is that SciPy at work? If so, it's no better than plain old Python: > > >>> from math import log as ln > >>> ln(640320**3 + 744)/163**.5 > 3.1415926535897931 > > But thanks, > > Dick That is correct. Scipy does not have builtin arbitrary precision capabilities. However, you should be able to find the old "real.py" module around on the net. With that you get: In [1]: from real import * << Ignoring a DeprecationWarning about the regex module here >> In [4]: log(640320**3 + 744)/sqrt(163) Out[4]: 3.1415926535897932384626433832797+-4 In [5]: log(640320**3 + 744)/sqrt(163) / pi() Out[5]: 1.000000000000000000000000000000+-1 Scott -- Scott M. Ransom Address: NRAO Phone: (434) 296-0320 520 Edgemont Rd. email: sransom at nrao.edu Charlottesville, VA 22903 USA GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 From peridot.faceted at gmail.com Sat Jan 13 14:11:27 2007 From: peridot.faceted at gmail.com (A. M. Archibald) Date: Sat, 13 Jan 2007 14:11:27 -0500 Subject: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places? In-Reply-To: References: <200701131335.37536.dd55@cornell.edu> Message-ID: On 13/01/07, Dick Moores wrote: > Is that SciPy at work? If so, it's no better than plain old Python: It sounds like what you want is arbitrary-precision arithmetic, which is not what scipy is for (see instead packages like clnum or dmath). NumPy is for calculating efficiently with large arrays of floating-point numbers at machine precision, and scipy is a package of scientific algorithms mostly taking advantage of numpy. While it is possible to combine arbitrary-precision arithmetic with numpy, arbitrary-precision arithmetic is exceedingly slow and unsuitable for most numerical tasks. A. M. Archibald From aisaac at american.edu Sat Jan 13 14:57:26 2007 From: aisaac at american.edu (Alan G Isaac) Date: Sat, 13 Jan 2007 14:57:26 -0500 Subject: [SciPy-user] =?utf-8?q?Can_SciPy_compute_ln=28640320**3_+_744=29/?= =?utf-8?q?163**=2E5_to=0930_places=3F?= In-Reply-To: References: <200701131335.37536.dd55@cornell.edu> Message-ID: On Sat, 13 Jan 2007, "A. M. Archibald" apparently wrote: > It sounds like what you want is arbitrary-precision arithmetic Or maybe the OP was asking if it is possible to force an "extended precision" calculation? fwiw, Alan Isaac From rdmoores at gmail.com Sat Jan 13 14:55:29 2007 From: rdmoores at gmail.com (Dick Moores) Date: Sat, 13 Jan 2007 11:55:29 -0800 Subject: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places? In-Reply-To: References: <200701131335.37536.dd55@cornell.edu> Message-ID: On 1/13/07, A. M. Archibald wrote: > On 13/01/07, Dick Moores wrote: > > > Is that SciPy at work? If so, it's no better than plain old Python: > > It sounds like what you want is arbitrary-precision arithmetic, which > is not what scipy is for (see instead packages like clnum or dmath). 
> NumPy is for calculating efficiently with large arrays of > floating-point numbers at machine precision, and scipy is a package of > scientific algorithms mostly taking advantage of numpy. While it is > possible to combine arbitrary-precision arithmetic with numpy, > arbitrary-precision arithmetic is exceedingly slow and unsuitable for > most numerical tasks. Thanks for the info. Dick Moores From tgrav at mac.com Sat Jan 13 15:02:42 2007 From: tgrav at mac.com (Tommy Grav) Date: Sat, 13 Jan 2007 15:02:42 -0500 Subject: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places? In-Reply-To: References: <200701131335.37536.dd55@cornell.edu> Message-ID: <83A7262E-1DA6-4DB1-B2B4-B145027C44AF@mac.com> Actually if you use floats instead of int is the expression you get >>> print '%1.32f' % (log(640320.**3. + 744.)/163.**.5) 3.14159265358979311599796346854419 >>> print '%1.32f' % (pi) 3.14159265358979311599796346854419 >>> Cheers Tommy On Jan 13, 2007, at 1:59 PM, Dick Moores wrote: > On 1/13/07, Darren Dale wrote: >> On Saturday 13 January 2007 9:39 am, Dick Moores wrote: >>> Can python compute ln(640320**3 + 744)/163**.5 to 30 places? >> >> It looks like it is good to about 16 digits: >> >>>>> '%1.32f'%(log(640320**3 + 744)/163**.5) >> '3.14159265358979311599796346854419' >> >> '3.14159265358979323846264338327950' >> 1.234567890123456^ > > Is that SciPy at work? If so, it's no better than plain old Python: > >>>> from math import log as ln >>>> ln(640320**3 + 744)/163**.5 > 3.1415926535897931 > > But thanks, > > Dick > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From sransom at nrao.edu Sat Jan 13 15:22:47 2007 From: sransom at nrao.edu (Scott Ransom) Date: Sat, 13 Jan 2007 15:22:47 -0500 Subject: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places? In-Reply-To: <83A7262E-1DA6-4DB1-B2B4-B145027C44AF@mac.com> References: <200701131335.37536.dd55@cornell.edu> <83A7262E-1DA6-4DB1-B2B4-B145027C44AF@mac.com> Message-ID: <20070113202247.GA12246@ssh.cv.nrao.edu> On Sat, Jan 13, 2007 at 03:02:42PM -0500, Tommy Grav wrote: > Actually if you use floats instead of int is the expression you get > > >>> print '%1.32f' % (log(640320.**3. + 744.)/163.**.5) > 3.14159265358979311599796346854419 > >>> print '%1.32f' % (pi) > 3.14159265358979311599796346854419 > >>> Too bad neither of those is close to the real value of the OP's relation or the real value of Pi: 3.141592653589793238462643383279502884197169.... You can't do better than 15-16 decimal points of precision with "normal" double-precision floating point math (which is what python and numpy use). S -- Scott M. Ransom Address: NRAO Phone: (434) 296-0320 520 Edgemont Rd. email: sransom at nrao.edu Charlottesville, VA 22903 USA GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 From fperez.net at gmail.com Sat Jan 13 15:37:50 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 13 Jan 2007 13:37:50 -0700 Subject: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places? 
In-Reply-To: <20070113202247.GA12246@ssh.cv.nrao.edu> References: <200701131335.37536.dd55@cornell.edu> <83A7262E-1DA6-4DB1-B2B4-B145027C44AF@mac.com> <20070113202247.GA12246@ssh.cv.nrao.edu> Message-ID: On 1/13/07, Scott Ransom wrote: > On Sat, Jan 13, 2007 at 03:02:42PM -0500, Tommy Grav wrote: > > Actually if you use floats instead of int is the expression you get > > > > >>> print '%1.32f' % (log(640320.**3. + 744.)/163.**.5) > > 3.14159265358979311599796346854419 > > >>> print '%1.32f' % (pi) > > 3.14159265358979311599796346854419 > > >>> > > Too bad neither of those is close to the real value of the OP's > relation or the real value of Pi: > 3.141592653589793238462643383279502884197169.... > > You can't do better than 15-16 decimal points of precision with > "normal" double-precision floating point math (which is what > python and numpy use). SAGE, which is python-based, offers the python MPFR wrappers as a builtin, where this computation is trivial. I've left a computation here: http://sage.math.washington.edu:8100/fperez Feel free to open a new worksheet there to experiment, just name it uniquely and start playing (the SAGE notebook server is free and open to the public). MPFR is LGPL, so it unfortunately can't be included as a scipy component. But if LGPL is OK for your work, it's an excellent library. Cheers, f From rdmoores at gmail.com Sat Jan 13 16:00:49 2007 From: rdmoores at gmail.com (Dick Moores) Date: Sat, 13 Jan 2007 13:00:49 -0800 Subject: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places? In-Reply-To: References: <200701131335.37536.dd55@cornell.edu> Message-ID: On 1/13/07, A. M. Archibald wrote: > > It sounds like what you want is arbitrary-precision arithmetic, which > is not what scipy is for (see instead packages like clnum or dmath). OK, I tracked down clnum and installed it in Python 2.5. But it doesn't look as if I can use it for logarithms or non-integer exponents. Or can I? The only manual I could find is . I was able to execute all the examples and got the same results, but... And dmath. From a blog entry by one of the developers () dmath seems the most promising for what I want to do, but the download for python 2.5, from is dmath-0.9-py2.5.egg (md5), a "Python Egg". I've never seen that extension before, and doubt I can use dmath on Windows. Is that correct? If I can, how do I install it? Thanks, Dick Moores From rdmoores at gmail.com Sat Jan 13 16:19:13 2007 From: rdmoores at gmail.com (Dick Moores) Date: Sat, 13 Jan 2007 13:19:13 -0800 Subject: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places? In-Reply-To: References: <200701131335.37536.dd55@cornell.edu> <83A7262E-1DA6-4DB1-B2B4-B145027C44AF@mac.com> <20070113202247.GA12246@ssh.cv.nrao.edu> Message-ID: On 1/13/07, Fernando Perez wrote: > SAGE, which is python-based, offers the python MPFR wrappers as a > builtin, where this computation is trivial. I've left a computation > here: > > http://sage.math.washington.edu:8100/fperez Thank you! I see what it can do. > Feel free to open a new worksheet there to experiment, just name it > uniquely and start playing (the SAGE notebook server is free and open > to the public). > > MPFR is LGPL, so it unfortunately can't be included as a scipy > component. But if LGPL is OK for your work, it's an excellent > library. Are you saying that I should be able to hook up SAGE with Python, using the "the python MPFR wrappers"? That would be terrific. But can I use it with Python on Windows? It seems to be for UNIX/LInux. No? 
Dick Moores Win XP Pro Python 2.5 From gael.varoquaux at normalesup.org Sat Jan 13 16:21:10 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sat, 13 Jan 2007 22:21:10 +0100 Subject: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places? In-Reply-To: References: <200701131335.37536.dd55@cornell.edu> Message-ID: <20070113212110.GE6242@clipper.ens.fr> On Sat, Jan 13, 2007 at 01:00:49PM -0800, Dick Moores wrote: > And dmath. From a blog entry by one of the developers > () dmath seems the most promising for what I > want to do, but the download for python 2.5, from > is dmath-0.9-py2.5.egg > (md5), a "Python Egg". You can install it using easyinstall, see http://peak.telecommunity.com/DevCenter/EasyInstall This should be doable under windows, eventhough I have never tried. Ga?l From fperez.net at gmail.com Sat Jan 13 16:21:29 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 13 Jan 2007 14:21:29 -0700 Subject: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places? In-Reply-To: References: <200701131335.37536.dd55@cornell.edu> Message-ID: On 1/13/07, Dick Moores wrote: > On 1/13/07, A. M. Archibald wrote: > > > > It sounds like what you want is arbitrary-precision arithmetic, which > > is not what scipy is for (see instead packages like clnum or dmath). > > OK, I tracked down clnum and installed it in Python 2.5. But it > doesn't look as if I can use it for logarithms or non-integer > exponents. Or can I? The only manual I could find is > . I was able to > execute all the examples and got the same results, but... MPFR is better than clnum for certain things (at least as of Feb'06, when the SAGE guys did some detailed comparisons) but it's certainly a very usable library, which I like a lot. Using it, your computation can be easily done (here I requested 50 digits, clnum gives a few more probably because the inputs are exact integers): In [16]: import clnum as n In [17]: n.set_default_precision(50) In [18]: n.log(640320**3 + 744)/n.sqrt(163) Out[18]: mpf('3.141592653589793238462643383279726619347549880883522422293',55) Below is Mathematica's value for reference: tlon[~]> math Mathematica 5.2 for Linux Copyright 1988-2005 Wolfram Research, Inc. -- Motif graphics initialized -- In[1]:= N[Log[640320^3+744]/Sqrt[163],58] Out[1]= 3.141592653589793238462643383279726619347549880883522422293 [clnum:]3.141592653589793238462643383279726619347549880883522422293 That looks pretty good to me. Here's a comparison of the approximate formula with Pi itself: In[2]:= N[Pi,50] Out[2]= 3.1415926535897932384626433832795028841971693993751 [approx]3.141592653589793238462643383279726619347549880883522422293 Indeed, they match on the first 30 digits. Cheers, f From fperez.net at gmail.com Sat Jan 13 16:26:00 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 13 Jan 2007 14:26:00 -0700 Subject: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places? In-Reply-To: References: <200701131335.37536.dd55@cornell.edu> <83A7262E-1DA6-4DB1-B2B4-B145027C44AF@mac.com> <20070113202247.GA12246@ssh.cv.nrao.edu> Message-ID: On 1/13/07, Dick Moores wrote: > On 1/13/07, Fernando Perez wrote: > > > SAGE, which is python-based, offers the python MPFR wrappers as a > > builtin, where this computation is trivial. I've left a computation > > here: > > > > http://sage.math.washington.edu:8100/fperez > > Thank you! I see what it can do. 
> > > Feel free to open a new worksheet there to experiment, just name it > > uniquely and start playing (the SAGE notebook server is free and open > > to the public). > > > > MPFR is LGPL, so it unfortunately can't be included as a scipy > > component. But if LGPL is OK for your work, it's an excellent > > library. > > Are you saying that I should be able to hook up SAGE with Python, > using the "the python MPFR wrappers"? That would be terrific. But can > I use it with Python on Windows? It seems to be for UNIX/LInux. No? No, SAGE /is/ a python program. But it's a standalone program that includes its own private copy of python, pyrex (forked), mpfr, and lots and lots more (it includes copies of ipython, numpy, scipy, matplotlib and just about anything you can think of). It's a free download and they provide windows binaries, I'm pretty sure. Go to their downloads page for more details (I don't really know anything about Windows, so I can't help you there). As far as ripping out their mpfr wrappers for standalone use with a regular python install, I'm sure it's doable but I haven't done so myself. Cheers, f From hasslerjc at adelphia.net Sat Jan 13 20:44:26 2007 From: hasslerjc at adelphia.net (John Hassler) Date: Sat, 13 Jan 2007 20:44:26 -0500 Subject: [SciPy-user] Athlon problem again? Message-ID: <45A98AFA.4010203@adelphia.net> An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Sat Jan 13 20:48:16 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 13 Jan 2007 19:48:16 -0600 Subject: [SciPy-user] Athlon problem again? In-Reply-To: <45A98AFA.4010203@adelphia.net> References: <45A98AFA.4010203@adelphia.net> Message-ID: <45A98BE0.6060308@gmail.com> John Hassler wrote: >>>> from scipy import integrate >>>> integrate.test() > Found 10 tests for scipy.integrate.quadpack > Found 3 tests for scipy.integrate.quadrature > Found 1 tests for scipy.integrate > Found 0 tests for __main__ >>>> ================================ RESTART ====== Can you try the latest SVN of scipy? Try using integrate.test(1, 10) so that we can see which test fails. > This was discussed at length on Scipy-User. Here are the beginnings of > a couple of threads: > /Fri Jul 7 05:30:30 CDT 2006/ > /Sat Jul 29 16:15:58 CDT 2006 What are the Subject's of those threads? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From hasslerjc at adelphia.net Sat Jan 13 21:02:18 2007 From: hasslerjc at adelphia.net (John Hassler) Date: Sat, 13 Jan 2007 21:02:18 -0500 Subject: [SciPy-user] Athlon problem again? In-Reply-To: <45A98BE0.6060308@gmail.com> References: <45A98AFA.4010203@adelphia.net> <45A98BE0.6060308@gmail.com> Message-ID: <45A98F2A.3050206@adelphia.net> An HTML attachment was scrubbed... URL: From s.mientki at ru.nl Sun Jan 14 16:10:21 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Sun, 14 Jan 2007 22:10:21 +0100 Subject: [SciPy-user] is there a difference between numpy and scipy array ? In-Reply-To: <45A8B27C.3050604@gmail.com> References: <45A8B184.4010902@ru.nl> <45A8B27C.3050604@gmail.com> Message-ID: <45AA9C3D.30106@ru.nl> Robert Kern wrote: > Stef Mientki wrote: > >> Forgive my ignorance, but reading more and more docs, >> I now get he feeling there's a difference between Numpy en Scipy arrays, >> is that so ? >> > > No. scipy is a package that *uses* numpy. That's all. 
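The point is easy to verify in a live session. A quick check (using scipy.zeros, which, as the exchange below shows, is simply numpy's constructor re-exported):

    import numpy
    import scipy

    a = numpy.zeros((3, 3))
    b = scipy.zeros((3, 3))
    print type(a) is type(b)   # True -- there is exactly one ndarray class

The identity test with `is` makes the point sharply: not two array types that happen to look alike, but one and the same class.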
> > I found it again: from: SciPy Course Outline, by Author: Dave Kuhlman At times you may need to convert an array from one type to another, for example from a numpy array to a scipy array or the reverse. The array protocol will help. In particular, the asarray() function can convert an array without copying. Examples: In [8]: import numpy In [9]: import scipy In [10]: a1 = zeros((4,6)) In [11]: type(a1) Out[11]: In [12]: a2 = numpy.asarray(a1) In [13]: type(a2) Out[13]: In [14]: a3 = numpy.zeros((3,5)) In [15]: type(a3) Out[15]: In [16]: a4 = scipy.asarray(a3) In [17]: type(a4) Out[17]: So after all there is a difference, but I still don't know what exactly the difference is ;-) == now something weird happens here: [Dbg]>>> aap=(11,12,11,23) [Dbg]>>> type(aap) [Dbg]>>> IOI=scipy.asarray(aap) <== make it a scipy array [Dbg]>>> print IOI [11 12 11 23] [Dbg]>>> type(IOI) <== it has NOT become a scipy array ?? Maybe in general this (small ?) difference might not make a big deal, but as I want to do real calculations, with mainly scipy functions, I want to optimize the array type (probably i4, but maybe f8 is even faster depending on the kind of calculations) cheers, Stef Mientki -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Sun Jan 14 17:18:36 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 14 Jan 2007 16:18:36 -0600 Subject: [SciPy-user] is there a difference between numpy and scipy array ? In-Reply-To: <45AA9C3D.30106@ru.nl> References: <45A8B184.4010902@ru.nl> <45A8B27C.3050604@gmail.com> <45AA9C3D.30106@ru.nl> Message-ID: <45AAAC3C.1030408@gmail.com> Stef Mientki wrote: > > > Robert Kern wrote: >> Stef Mientki wrote: >> >>> Forgive my ignorance, but reading more and more docs, >>> I now get he feeling there's a difference between Numpy en Scipy arrays, >>> is that so ? >>> >> No. scipy is a package that *uses* numpy. That's all. >> > I found it again: > from: SciPy Course Outline, by > Author: Dave Kuhlman > > At times you may need to convert an array from one type to another, for > example from a numpy array to a scipy array or the reverse. The array > protocol will help. In particular, the asarray() function can convert an > array without copying. Examples: > > In [8]: import numpy > In [9]: import scipy > In [10]: a1 = zeros((4,6)) > In [11]: type(a1) > Out[11]: > > In [12]: a2 = numpy.asarray(a1) > In [13]: type(a2) > Out[13]: > > In [14]: a3 = numpy.zeros((3,5)) > In [15]: type(a3) > Out[15]: > > In [16]: a4 = scipy.asarray(a3) > In [17]: type(a4) > Out[17]: > > So after all there is a difference, > but I still don't know what exactly the difference is ;-) No, that example doesn't even run. If there ever were a difference, it's long gone (although for the life of me I cannot remember a time when there was a numpy package and a scipy package each with their own ndarray). In [1]: import numpy In [2]: import scipy In [3]: a1 = numpy.zeros((3,3)) In [4]: type(a1) Out[4]: In [5]: a2 = scipy.zeros((3,3)) In [6]: type(a2) Out[6]: In [7]: print numpy.__version__ 1.0.2.dev3507 In [8]: print scipy.__version__ 0.5.3.dev2500 -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From rdmoores at gmail.com Mon Jan 15 00:35:21 2007 From: rdmoores at gmail.com (Dick Moores) Date: Sun, 14 Jan 2007 21:35:21 -0800 Subject: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places? In-Reply-To: References: <200701131335.37536.dd55@cornell.edu> Message-ID: On 1/13/07, Fernando Perez wrote: > In [16]: import clnum as n > > In [17]: n.set_default_precision(50) > > In [18]: n.log(640320**3 + 744)/n.sqrt(163) > Out[18]: mpf('3.141592653589793238462643383279726619347549880883522422293',55) > I assumed you did that in Python. So I tried it with this script, adding a print statement: =================================== # clnumTest1-a.py import clnum as n n.set_default_precision(50) print "%.50f" % (n.log(640320**3 + 744)/n.sqrt(163)) ====================================== result: 3.141592653589793100000000000000 So clarify, if you would, please. Thanks, Dick Moores From bryanv at enthought.com Mon Jan 15 01:44:54 2007 From: bryanv at enthought.com (Bryan Van de Ven) Date: Mon, 15 Jan 2007 00:44:54 -0600 Subject: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places? In-Reply-To: References: <200701131335.37536.dd55@cornell.edu> Message-ID: <45AB22E6.2060104@enthought.com> I'm going to go out on a limb here and say that when you do that print, the mpf object is first converted to a regular python float, which does not have enough precision, of course. Converting the mpf object to a clnum rational mpq object in ipython: In [17]: a = cl.log(640320**3 + 744)/cl.sqrt(163) In [18]: cl.mpq(a) Out[18]: mpq(181338285223186289202345004581,57721768930156197489438103340) And then just as check, evaluating this fraction to 50 digits in Mathematica: In[6]:= N[ 181338285223186289202345004581/57721768930156197489438103340,50] Out[6]= 3.1415926535897932384626433832797266193475498808835 Seems to match up... Dick Moores wrote: > On 1/13/07, Fernando Perez wrote: > >> In [16]: import clnum as n >> >> In [17]: n.set_default_precision(50) >> >> In [18]: n.log(640320**3 + 744)/n.sqrt(163) >> Out[18]: mpf('3.141592653589793238462643383279726619347549880883522422293',55) >> > > I assumed you did that in Python. So I tried it with this script, > adding a print statement: > =================================== > # clnumTest1-a.py > > import clnum as n > > n.set_default_precision(50) > > print "%.50f" % (n.log(640320**3 + 744)/n.sqrt(163)) > ====================================== > result: 3.141592653589793100000000000000 > > So clarify, if you would, please. > > Thanks, > > Dick Moores > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From rdmoores at gmail.com Mon Jan 15 02:06:03 2007 From: rdmoores at gmail.com (Dick Moores) Date: Sun, 14 Jan 2007 23:06:03 -0800 Subject: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places? In-Reply-To: <45AB22E6.2060104@enthought.com> References: <200701131335.37536.dd55@cornell.edu> <45AB22E6.2060104@enthought.com> Message-ID: Please show me how to do that in just plain python. I don't have or know ipython. Thanks On 1/14/07, Bryan Van de Ven wrote: > I'm going to go out on a limb here and say that when you do that print, > the mpf object is first converted to a regular python float, which does > not have enough precision, of course. 
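Bryan's diagnosis is easy to confirm in isolation; a small sketch (assuming clnum's mpf converts through the usual float() protocol, which is what %-formatting triggers):

    import clnum

    clnum.set_default_precision(50)
    a = clnum.log(640320**3 + 744) / clnum.sqrt(163)
    print float(a)   # 3.14159265359 -- collapsed to a 64-bit double
    print repr(a)    # the full 50-digit mpf is still intact

Once a value has passed through float(), no format string can recover the lost digits.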
> > Converting the mpf object to a clnum rational mpq object in ipython: > > In [17]: a = cl.log(640320**3 + 744)/cl.sqrt(163) > > In [18]: cl.mpq(a) > Out[18]: mpq(181338285223186289202345004581,57721768930156197489438103340) > > And then just as check, evaluating this fraction to 50 digits in > Mathematica: > > In[6]:= N[ 181338285223186289202345004581/57721768930156197489438103340,50] > > Out[6]= 3.1415926535897932384626433832797266193475498808835 > > Seems to match up... > > > Dick Moores wrote: > > On 1/13/07, Fernando Perez wrote: > > > >> In [16]: import clnum as n > >> > >> In [17]: n.set_default_precision(50) > >> > >> In [18]: n.log(640320**3 + 744)/n.sqrt(163) > >> Out[18]: mpf('3.141592653589793238462643383279726619347549880883522422293',55) > >> > > > > I assumed you did that in Python. So I tried it with this script, > > adding a print statement: > > =================================== > > # clnumTest1-a.py > > > > import clnum as n > > > > n.set_default_precision(50) > > > > print "%.50f" % (n.log(640320**3 + 744)/n.sqrt(163)) > > ====================================== > > result: 3.141592653589793100000000000000 > > > > So clarify, if you would, please. > > > > Thanks, > > > > Dick Moores > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From robert.kern at gmail.com Mon Jan 15 02:14:13 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 15 Jan 2007 01:14:13 -0600 Subject: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places? In-Reply-To: References: <200701131335.37536.dd55@cornell.edu> <45AB22E6.2060104@enthought.com> Message-ID: <45AB29C5.3020105@gmail.com> Dick Moores wrote: > Please show me how to do that in just plain python. I don't have or > know ipython. It's the same. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From rdmoores at gmail.com Mon Jan 15 02:33:03 2007 From: rdmoores at gmail.com (Dick Moores) Date: Sun, 14 Jan 2007 23:33:03 -0800 Subject: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places? In-Reply-To: <45AB29C5.3020105@gmail.com> References: <200701131335.37536.dd55@cornell.edu> <45AB22E6.2060104@enthought.com> <45AB29C5.3020105@gmail.com> Message-ID: On 1/14/07, Robert Kern wrote: > Dick Moores wrote: > > Please show me how to do that in just plain python. I don't have or > > know ipython. > > It's the same. OK. Bryan Van de Ven's code lacked an import statement, a precision statement, and a print statement. So I tried ===================== # clnumTest2-a.py import clnum as cl cl.set_default_precision(50) a = cl.log(640320**3 + 744)/cl.sqrt(163) print cl.mpq(a) ===================== And got 181338285223186289202345004581/57721768930156197489438103340 Doing that division at the python interactive prompt gets me 3.1415926535897927 hardly the precision I'm after. Someone please show me a whole python script. Thanks, Dick Moores From robert.kern at gmail.com Mon Jan 15 02:47:48 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 15 Jan 2007 01:47:48 -0600 Subject: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places? 
In-Reply-To: References: <200701131335.37536.dd55@cornell.edu> <45AB22E6.2060104@enthought.com> <45AB29C5.3020105@gmail.com>
Message-ID: <45AB31A4.5040502@gmail.com>

Dick Moores wrote:
> On 1/14/07, Robert Kern wrote:
>> Dick Moores wrote:
>>> Please show me how to do that in just plain python. I don't have or
>>> know ipython.
>> It's the same.
>
> OK. Bryan Van de Ven's code lacked an import statement, a precision
> statement, and a print statement. So I tried
> =====================
> # clnumTest2-a.py
>
> import clnum as cl
> cl.set_default_precision(50)
> a = cl.log(640320**3 + 744)/cl.sqrt(163)
>
> print cl.mpq(a)
> =====================
> And got
> 181338285223186289202345004581/57721768930156197489438103340
> Doing that division at the python interactive prompt gets me
> 3.1415926535897927
> hardly the precision I'm after.
>
> Someone please show me a whole python script.

Fernando already gave you exactly what you needed. You just printed it wrong.
Doing the string interpolation through '%.50f' casts the number to a Python
float object (15-16 significant digits of precision). If you simply print the
object that is the result of the computation (`a` above), you will get the
answer that you are looking for.
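The same effect shows up with no extended-precision library at all: '%.50f' happily prints fifty places of an ordinary double, but everything past the 16th significant digit is binary round-off rather than information. A plain Python 2 illustration:

    pi_literal = 3.14159265358979323846264338327950   # more digits than a double can hold
    print '%.50f' % pi_literal
    # 3.14159265358979311599796346854418516159057617187500

The printed tail matches the '%1.32f' output earlier in the thread: the digits after ...5897931 belong to the nearest representable double, not to pi.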
As you can see from what Fernando showed you, > there is in fact 50 digits of precision using clnum: > > > In [16]: import clnum as n > > In [17]: n.set_default_precision(50) > > In [18]: n.log(640320**3 + 744)/n.sqrt(163) > Out[18]: mpf('3.141592653589793238462643383279726619347549880883522422293',55) > Well if you mean for me to do: ============================= # clnumTest1-b.py import clnum as n n.set_default_precision(50) print n.log(640320**3 + 744)/n.sqrt(163) ================================== That gives me 3.1415926535897932385 . So I STILL don't understand. Also, what are those In [16]: things all you people use? Is that line 16? If so, line 16 of what? Dick Moores From robert.kern at gmail.com Mon Jan 15 03:06:01 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 15 Jan 2007 02:06:01 -0600 Subject: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places? In-Reply-To: References: <45AB22E6.2060104@enthought.com> <45AB29C5.3020105@gmail.com> <45AB31A4.5040502@gmail.com> Message-ID: <45AB35E9.3070301@gmail.com> Dick Moores wrote: > Well if you mean for me to do: > ============================= > # clnumTest1-b.py > > import clnum as n > > n.set_default_precision(50) > > print n.log(640320**3 + 744)/n.sqrt(163) > ================================== > That gives me 3.1415926535897932385 . > > So I STILL don't understand. It looks like the __str__ representation of clnum numbers (i.e. what you get by printing them, or by calling str() on them) is restricted to that many digits regardless of what the actual precision of the object is. Look at the example at the bottom of clnum's webpage: http://calcrpnpy.sourceforge.net/clnum.html However, looking at the __repr__ representation (i.e. what was printed in the Out[18] in the above example, and what can be obtained by calling repr() on the object), you can see all of the digits that are available according to the precision of the object. > Also, what are those In [16]: things all you people use? Is that line > 16? If so, line 16 of what? It's just the IPython input prompt, like the >>> of the regular interpreter. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From rdmoores at gmail.com Mon Jan 15 03:35:31 2007 From: rdmoores at gmail.com (Dick Moores) Date: Mon, 15 Jan 2007 00:35:31 -0800 Subject: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places? In-Reply-To: <45AB35E9.3070301@gmail.com> References: <45AB22E6.2060104@enthought.com> <45AB29C5.3020105@gmail.com> <45AB31A4.5040502@gmail.com> <45AB35E9.3070301@gmail.com> Message-ID: On 1/15/07, Robert Kern wrote: > Dick Moores wrote: > > > Well if you mean for me to do: > > ============================= > > # clnumTest1-b.py > > > > import clnum as n > > > > n.set_default_precision(50) > > > > print n.log(640320**3 + 744)/n.sqrt(163) > > ================================== > > That gives me 3.1415926535897932385 . > > > > So I STILL don't understand. > > It looks like the __str__ representation of clnum numbers (i.e. what you get by > printing them, or by calling str() on them) is restricted to that many digits > regardless of what the actual precision of the object is. Look at the example at > the bottom of clnum's webpage: > > http://calcrpnpy.sourceforge.net/clnum.html > > However, looking at the __repr__ representation (i.e. 
what was printed in the > Out[18] in the above example, and what can be obtained by calling repr() on the > object), you can see all of the digits that are available according to the > precision of the object. OK! I tried ============================ # clnumTest1-c.py import clnum as n n.set_default_precision(50) print repr(n.log(640320**3 + 744)/n.sqrt(163)) ================================== And got mpf('3.141592653589793238462643383279726619347549880883522422293',55) Great! Thanks to you all! BTW what's the 55 at the end? There are 58 digits including the initial 3. > > Also, what are those In [16]: things all you people use? Is that line > > 16? If so, line 16 of what? > > It's just the IPython input prompt, like the >>> of the regular interpreter. Oh. Dick From rdmoores at gmail.com Mon Jan 15 08:49:32 2007 From: rdmoores at gmail.com (Dick Moores) Date: Mon, 15 Jan 2007 05:49:32 -0800 Subject: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places? In-Reply-To: References: <45AB22E6.2060104@enthought.com> <45AB29C5.3020105@gmail.com> <45AB31A4.5040502@gmail.com> <45AB35E9.3070301@gmail.com> Message-ID: I hope I may ask another question. I'd like to know how use clnum to calculate 2**3.9 to a precision of 40. I looked carefully at http://calcrpnpy.sourceforge.net/clnumManual.html and tried various things, but to no avail. Despite these words in the manual: "Most of the functions in the clnum module are arbitrary precision floating point versions of Python builtins. See the Python documentation in the math, cmath, and builtins sections for the following functions: acos, acosh, asin, asinh, atan, atan2, atanh, ceil, cos, cosh, degrees, exp, floor, hypot, log, log10, radians, round, sin, sinh, sqrt, tan, and tanh." it seems there is no clnum.pow(). Help, please. Thanks, Dick Moores From palazzol at comcast.net Mon Jan 15 09:45:20 2007 From: palazzol at comcast.net (Frank Palazzolo) Date: Mon, 15 Jan 2007 09:45:20 -0500 Subject: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places? In-Reply-To: References: <45AB22E6.2060104@enthought.com> <45AB29C5.3020105@gmail.com> <45AB31A4.5040502@gmail.com> <45AB35E9.3070301@gmail.com> Message-ID: <45AB9380.60209@comcast.net> Hi, You should really ask the clnum guy, but I think you need to use exp() and log() instead of pow(). Also, you have to use only integers. pow(x,y) = exp(log(x)*y) In your case... from clnum import * set_default_precision(50) exp(log(2)*39/10) which gives: mpf('14.92852786458891865570149225839907467243567942962390462241',55) -Frank Dick Moores wrote: > I hope I may ask another question. I'd like to know how use clnum to > calculate 2**3.9 to a precision of 40. I looked carefully at > http://calcrpnpy.sourceforge.net/clnumManual.html and tried various > things, but to no avail. > > Despite these words in the manual: > > "Most of the functions in the clnum module are arbitrary precision > floating point versions of Python builtins. See the Python > documentation in the math, cmath, and builtins sections for the > following functions: acos, acosh, asin, asinh, atan, atan2, atanh, > ceil, cos, cosh, degrees, exp, floor, hypot, log, log10, radians, > round, sin, sinh, sqrt, tan, and tanh." > > it seems there is no clnum.pow(). Help, please. 
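Frank's identity is worth packaging up once. A sketch of a tiny wrapper (mpf_pow is an invented name, not part of clnum, and it is only valid for positive bases, since log() fails for non-positive arguments):

    import clnum

    clnum.set_default_precision(50)

    def mpf_pow(x, y):
        # x**y for x > 0, via the identity x**y = exp(y * log(x))
        return clnum.exp(clnum.log(x) * y)

    print repr(mpf_pow(2, clnum.mpq(39, 10)))
    # mpf('14.928527864588918655701492258399...',55), as in Frank's reply above

Passing the exponent as mpq(39, 10) rather than the float 3.9 keeps it exact, which is the substance of the "use only integers" advice.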
> > Thanks, > > Dick Moores > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From fperez.net at gmail.com Mon Jan 15 10:27:42 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 15 Jan 2007 08:27:42 -0700 Subject: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places? In-Reply-To: References: <200701131335.37536.dd55@cornell.edu> Message-ID: On 1/14/07, Dick Moores wrote: > On 1/13/07, Fernando Perez wrote: > > > In [16]: import clnum as n > > > > In [17]: n.set_default_precision(50) > > > > In [18]: n.log(640320**3 + 744)/n.sqrt(163) > > Out[18]: mpf('3.141592653589793238462643383279726619347549880883522422293',55) > > > > I assumed you did that in Python. So I tried it with this script, > adding a print statement: > =================================== > # clnumTest1-a.py > > import clnum as n > > n.set_default_precision(50) > > print "%.50f" % (n.log(640320**3 + 744)/n.sqrt(163)) > ====================================== > result: 3.141592653589793100000000000000 > > So clarify, if you would, please. In [8]: print "%.50f" % (n.log(640320**3 + 744)/n.sqrt(163)) 3.14159265358979311599796346854418516159057617187500 In [9]: n.log(640320**3 + 744)/n.sqrt(163) Out[9]: mpf('3.141592653589793238462643383279726619347549880883522422293',55) In [10]: print n.log(640320**3 + 744)/n.sqrt(163) 3.1415926535897932385 There's a difference between: a) '%Xf' formatting, which is done by Python by converting your object into a regular float (64-bit) and then generating a string representation for it, seen in [8] b) the __repr__ of an mpf object, seen in [9] c) the __str__ of an mpf object, seen in [10] Same object (the mpf number), three different mechanisms for getting a string form out of it, three different results (as a /string/). Cheers, f From rdm at rcblue.com Mon Jan 15 11:52:46 2007 From: rdm at rcblue.com (Dick Moores) Date: Mon, 15 Jan 2007 08:52:46 -0800 Subject: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places? In-Reply-To: <45AB9380.60209@comcast.net> References: <45AB22E6.2060104@enthought.com> <45AB29C5.3020105@gmail.com> <45AB31A4.5040502@gmail.com> <45AB35E9.3070301@gmail.com> <45AB9380.60209@comcast.net> Message-ID: <7.0.1.0.2.20070115084916.07543c20@rcblue.com> That's great! Thanks! Now, how about something such as (5/23)**(2/7)? print repr(n.exp(n.log(5/23)*2/7)) gets ValueError: log domain error Dick At 06:45 AM 1/15/2007, you wrote: >Hi, > >You should really ask the clnum guy, but I think you need to use exp() >and log() instead of pow(). Also, you have to use only integers. > >pow(x,y) = exp(log(x)*y) > >In your case... > >from clnum import * >set_default_precision(50) >exp(log(2)*39/10) > >which gives: > >mpf('14.92852786458891865570149225839907467243567942962390462241',55) > >-Frank > > >Dick Moores wrote: > > I hope I may ask another question. I'd like to know how use clnum to > > calculate 2**3.9 to a precision of 40. I looked carefully at > > http://calcrpnpy.sourceforge.net/clnumManual.html and tried various > > things, but to no avail. > > > > Despite these words in the manual: > > > > "Most of the functions in the clnum module are arbitrary precision > > floating point versions of Python builtins. 
See the Python > > documentation in the math, cmath, and builtins sections for the > > following functions: acos, acosh, asin, asinh, atan, atan2, atanh, > > ceil, cos, cosh, degrees, exp, floor, hypot, log, log10, radians, > > round, sin, sinh, sqrt, tan, and tanh." > > > > it seems there is no clnum.pow(). Help, please. > > > > Thanks, > > > > Dick Moores > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user From dd55 at cornell.edu Mon Jan 15 12:19:58 2007 From: dd55 at cornell.edu (Darren Dale) Date: Mon, 15 Jan 2007 12:19:58 -0500 Subject: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places? In-Reply-To: <7.0.1.0.2.20070115084916.07543c20@rcblue.com> References: <45AB9380.60209@comcast.net> <7.0.1.0.2.20070115084916.07543c20@rcblue.com> Message-ID: <200701151219.58595.dd55@cornell.edu> On Monday 15 January 2007 11:52, Dick Moores wrote: > That's great! Thanks! > > Now, how about something such as (5/23)**(2/7)? > print repr(n.exp(n.log(5/23)*2/7)) > gets ValueError: log domain error integer division 5/23=0. The log of zero is either going to give you -infinity or an error. Darren From chiaracaronna at hotmail.com Mon Jan 15 12:51:55 2007 From: chiaracaronna at hotmail.com (Chiara Caronna) Date: Mon, 15 Jan 2007 17:51:55 +0000 Subject: [SciPy-user] Newbie: Reading a slice of a file In-Reply-To: <200701151219.58595.dd55@cornell.edu> Message-ID: I need to read just some rows of a file... do you know how to do it? I found out that with pylab.load I can skip some rows from the top, but I don't know how to skip from the bottom... maybe there is an other function? Thanks Chiara >From: Darren Dale >Reply-To: SciPy Users List >To: SciPy Users List >Subject: Re: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 >to30 places? >Date: Mon, 15 Jan 2007 12:19:58 -0500 > >On Monday 15 January 2007 11:52, Dick Moores wrote: > > That's great! Thanks! > > > > Now, how about something such as (5/23)**(2/7)? > > print repr(n.exp(n.log(5/23)*2/7)) > > gets ValueError: log domain error > >integer division 5/23=0. The log of zero is either going to give you >-infinity >or an error. > >Darren >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user _________________________________________________________________ Don't just search. Find. Check out the new MSN Search! http://search.msn.com/ From v-nijs at kellogg.northwestern.edu Mon Jan 15 12:54:18 2007 From: v-nijs at kellogg.northwestern.edu (Vincent Nijs) Date: Mon, 15 Jan 2007 11:54:18 -0600 Subject: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places? In-Reply-To: <7.0.1.0.2.20070115084916.07543c20@rcblue.com> Message-ID: Try putting: from __future__ import division At the top of your program. This wil make division on integers act like 'true' division. Then your calculation will work fine: In [1]: from __future__ import division In [2]: import numpy as n In [3]: n.exp(n.log(5/23)*2/7) 0.646607324065 Vincent On 1/15/07 10:52 AM, "Dick Moores" wrote: > That's great! Thanks! > > Now, how about something such as (5/23)**(2/7)? 
> print repr(n.exp(n.log(5/23)*2/7)) > gets ValueError: log domain error > > Dick > > At 06:45 AM 1/15/2007, you wrote: > >> Hi, >> >> You should really ask the clnum guy, but I think you need to use exp() >> and log() instead of pow(). Also, you have to use only integers. >> >> pow(x,y) = exp(log(x)*y) >> >> In your case... >> >> from clnum import * >> set_default_precision(50) >> exp(log(2)*39/10) >> >> which gives: >> >> mpf('14.92852786458891865570149225839907467243567942962390462241',55) >> >> -Frank >> >> >> Dick Moores wrote: >>> I hope I may ask another question. I'd like to know how use clnum to >>> calculate 2**3.9 to a precision of 40. I looked carefully at >>> http://calcrpnpy.sourceforge.net/clnumManual.html and tried various >>> things, but to no avail. >>> >>> Despite these words in the manual: >>> >>> "Most of the functions in the clnum module are arbitrary precision >>> floating point versions of Python builtins. See the Python >>> documentation in the math, cmath, and builtins sections for the >>> following functions: acos, acosh, asin, asinh, atan, atan2, atanh, >>> ceil, cos, cosh, degrees, exp, floor, hypot, log, log10, radians, >>> round, sin, sinh, sqrt, tan, and tanh." >>> >>> it seems there is no clnum.pow(). Help, please. >>> >>> Thanks, >>> >>> Dick Moores >>> _______________________________________________ >>> SciPy-user mailing list >>> SciPy-user at scipy.org >>> http://projects.scipy.org/mailman/listinfo/scipy-user >>> >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Vincent R. Nijs Assistant Professor of Marketing Kellogg School of Management, Northwestern University 2001 Sheridan Road, Evanston, IL 60208-2001 Phone: +1-847-491-4574 Fax: +1-847-491-2498 E-mail: v-nijs at kellogg.northwestern.edu Skype: vincentnijs From hetland at tamu.edu Mon Jan 15 12:52:08 2007 From: hetland at tamu.edu (Rob Hetland) Date: Mon, 15 Jan 2007 11:52:08 -0600 Subject: [SciPy-user] Problem (bug?) using modules with f2py Message-ID: I am trying to create a module that will store a model grid, and operate on that grid. The grids for different applications are different sizes. I imagine that I will recompile the module for each application separately, defining the grid size at compile time. Perhaps this is not the best approach, but the grids may be large, and I want to be efficient with memory. I would like the simple example outlined below to work, but now it does not. It seems that if a matrix is passed to f2py with intent (out), it does not acknowledge the parameters in the module. It *does* work without output variable -- i.e., and empty argument list to get_array. There are ways around this, but I think this should work, and suspect a bug. Any suggestions? -Rob ### TEST MODULE: MODULE mod_test implicit none integer, parameter :: Lm = 512 integer, parameter :: Mm = 256 TYPE T_GRID real*8 x(1:Lm, 1:Mm) END TYPE T_GRID TYPE (T_GRID) :: GRID CONTAINS SUBROUTINE initialize (val) integer i, j real*8 val do i=1,Lm do j=1,Mm GRID % x(i,j) = val end do end do RETURN END SUBROUTINE initialize END MODULE mod_test ### PY_TEST.F subroutine init(a) USE mod_test real*8 a CALL initialize(a) return end subroutine get_array_dont_work(val) ! does not compile, but does if the above line has an empty argument list.. 
USE mod_test
real*8 val(1:Lm, 1:Mm)
cf2py intent(out) val
integer i, j
do i=1,Lm
do j=1,Mm
val(i,j) = GRID % x(i,j)
end do
end do
return
end

subroutine get_array_works() ! This routine, for example, DOES work as expected.
USE mod_test
real*8 val(1:Lm, 1:Mm)
integer i, j
do i=1,Lm
do j=1,Mm
val(i,j) = GRID % x(i,j)
print *, i, j, val(i,j)
end do
end do
return
end

----
Rob Hetland, Associate Professor
Dept. of Oceanography, Texas A&M University
http://pong.tamu.edu/~rob
phone: 979-458-0096, fax: 979-845-6331

From v-nijs at kellogg.northwestern.edu Mon Jan 15 13:28:50 2007
From: v-nijs at kellogg.northwestern.edu (Vincent Nijs)
Date: Mon, 15 Jan 2007 12:28:50 -0600
Subject: [SciPy-user] Newbie: Reading a slice of a file
In-Reply-To: Message-ID:

I don't think you can do that directly with pylab.load. You could do

data = pylab.load(f,delimiter=',',skiprows=n-10)

Where n = the length of the file.

You could probably do something more flexible with the csv module. The
following will read all your data into an array (assumes all elements are
floats). You can add-in selection conditions as needed.

import csv
import numpy
reader = csv.reader(file('yourfile.csv','r'))
data = numpy.array([i for i in reader])
data = data.astype('f')

Vincent

On 1/15/07 11:51 AM, "Chiara Caronna" wrote:

> I need to read just some rows of a file... do you know how to do it?
> I found out that with pylab.load I can skip some rows from the top, but I
> don't know how to skip from the bottom... maybe there is an other function?
> Thanks
> Chiara
>
>
>> From: Darren Dale
>> Reply-To: SciPy Users List
>> To: SciPy Users List
>> Subject: Re: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5
>> to 30 places?
>> Date: Mon, 15 Jan 2007 12:19:58 -0500
>>
>> On Monday 15 January 2007 11:52, Dick Moores wrote:
>>> That's great! Thanks!
>>>
>>> Now, how about something such as (5/23)**(2/7)?
>>> print repr(n.exp(n.log(5/23)*2/7))
>>> gets ValueError: log domain error
>>
>> integer division 5/23=0. The log of zero is either going to give you
>> -infinity
>> or an error.
>>
>> Darren
>> _______________________________________________
>> SciPy-user mailing list
>> SciPy-user at scipy.org
>> http://projects.scipy.org/mailman/listinfo/scipy-user
>
> _________________________________________________________________
> Don't just search. Find. Check out the new MSN Search!
> http://search.msn.com/
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

--
Vincent R. Nijs
Assistant Professor of Marketing
Kellogg School of Management, Northwestern University
2001 Sheridan Road, Evanston, IL 60208-2001
Phone: +1-847-491-4574 Fax: +1-847-491-2498
E-mail: v-nijs at kellogg.northwestern.edu
Skype: vincentnijs

From rdm at rcblue.com Mon Jan 15 13:45:44 2007
From: rdm at rcblue.com (Dick Moores)
Date: Mon, 15 Jan 2007 10:45:44 -0800
Subject: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places?
In-Reply-To: References: <7.0.1.0.2.20070115084916.07543c20@rcblue.com>
Message-ID: <7.0.1.0.2.20070115104218.0783c580@rcblue.com>

Of course! I forgot about integer division. Thanks, Vincent and Darren.
But I still don't get the precision: ========================== # clnumTest3-c.py from __future__ import division import clnum as n n.set_default_precision(40) print repr(n.exp(n.log(5/23)*2/7)) ========================= gets mpf('0.6466073240654112295',17) How come? Dick At 09:54 AM 1/15/2007, you wrote: >Try putting: > >from __future__ import division > >At the top of your program. This wil make division on integers act like >'true' division. > >Then your calculation will work fine: > >In [1]: from __future__ import division >In [2]: import numpy as n >In [3]: n.exp(n.log(5/23)*2/7) >0.646607324065 > >Vincent > > >On 1/15/07 10:52 AM, "Dick Moores" wrote: > > > That's great! Thanks! > > > > Now, how about something such as (5/23)**(2/7)? > > print repr(n.exp(n.log(5/23)*2/7)) > > gets ValueError: log domain error > > > > Dick > > > > At 06:45 AM 1/15/2007, you wrote: > > > >> Hi, > >> > >> You should really ask the clnum guy, but I think you need to use exp() > >> and log() instead of pow(). Also, you have to use only integers. > >> > >> pow(x,y) = exp(log(x)*y) > >> > >> In your case... > >> > >> from clnum import * > >> set_default_precision(50) > >> exp(log(2)*39/10) > >> > >> which gives: > >> > >> mpf('14.92852786458891865570149225839907467243567942962390462241',55) > >> > >> -Frank > >> > >> > >> Dick Moores wrote: > >>> I hope I may ask another question. I'd like to know how use clnum to > >>> calculate 2**3.9 to a precision of 40. I looked carefully at > >>> http://calcrpnpy.sourceforge.net/clnumManual.html and tried various > >>> things, but to no avail. > >>> > >>> Despite these words in the manual: > >>> > >>> "Most of the functions in the clnum module are arbitrary precision > >>> floating point versions of Python builtins. See the Python > >>> documentation in the math, cmath, and builtins sections for the > >>> following functions: acos, acosh, asin, asinh, atan, atan2, atanh, > >>> ceil, cos, cosh, degrees, exp, floor, hypot, log, log10, radians, > >>> round, sin, sinh, sqrt, tan, and tanh." > >>> > >>> it seems there is no clnum.pow(). Help, please. > >>> > >>> Thanks, > >>> > >>> Dick Moores > >>> _______________________________________________ > >>> SciPy-user mailing list > >>> SciPy-user at scipy.org > >>> http://projects.scipy.org/mailman/listinfo/scipy-user > >>> > >> > >> _______________________________________________ > >> SciPy-user mailing list > >> SciPy-user at scipy.org > >> http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > >-- >Vincent R. Nijs >Assistant Professor of Marketing >Kellogg School of Management, Northwestern University >2001 Sheridan Road, Evanston, IL 60208-2001 >Phone: +1-847-491-4574 Fax: +1-847-491-2498 >E-mail: v-nijs at kellogg.northwestern.edu >Skype: vincentnijs > > > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user From fperez.net at gmail.com Mon Jan 15 15:03:05 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 15 Jan 2007 13:03:05 -0700 Subject: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places? In-Reply-To: <7.0.1.0.2.20070115104218.0783c580@rcblue.com> References: <7.0.1.0.2.20070115084916.07543c20@rcblue.com> <7.0.1.0.2.20070115104218.0783c580@rcblue.com> Message-ID: On 1/15/07, Dick Moores wrote: > Of, course! 
I forgot about integer division. Thanks, Vincent and Darren. > > But I still don't get the precision: > ========================== > # clnumTest3-c.py > from __future__ import division > import clnum as n > n.set_default_precision(40) > print repr(n.exp(n.log(5/23)*2/7)) > ========================= > gets mpf('0.6466073240654112295',17) > > How come? Because you shoulnd't use from __future__ import division in this case, since that will turn 5/23 into a plain float, with 64-bit accuracy (well, 53 bits for the mantissa, really). In [3]: n.set_default_precision(50) In [4]: n.exp(n.log(n.mpq(5,23)*n.mpq(2,7))) Out[4]: mpf('0.06211180124223602484472049689440993788819875776397515527949',55) or alternatively: In [7]: n.exp(n.log(n.mpq('5/23')*n.mpq('2/7'))) Out[7]: mpf('0.06211180124223602484472049689440993788819875776397515527949',55) clnum exposes true rationals, so you can use them for a computation such as this one. In this regard SAGE has the convenience that it preparses your input to apply on-the-fly conversions so that all objects are treated either as rationals or as extended-precision floats. This has a performance impact so it's not a good choice for everyday numerics, but it does save a lot of typing and makes it a bit more convenient for this kind of thing: sage: r=RealField(200) sage: r(5/3)**(r(2/7)) 1.1571385359117507748889046340135881102888601841088600244770 Which of the two approaches (full SAGE or clnum/mpfr inside regular python) suits your needs best is a question only you can answer. Cheers, f ps - due to security problems, the public SAGE notebook is unfortunately down at the moment. From rdm at rcblue.com Mon Jan 15 16:18:20 2007 From: rdm at rcblue.com (Dick Moores) Date: Mon, 15 Jan 2007 13:18:20 -0800 Subject: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places? In-Reply-To: References: <7.0.1.0.2.20070115084916.07543c20@rcblue.com> <7.0.1.0.2.20070115104218.0783c580@rcblue.com> Message-ID: <7.0.1.0.2.20070115125148.0319d2c0@rcblue.com> At 12:03 PM 1/15/2007, Fernando Perez wrote: >On 1/15/07, Dick Moores wrote: > > Of, course! I forgot about integer division. Thanks, Vincent and Darren. > > > > But I still don't get the precision: > > ========================== > > # clnumTest3-c.py > > from __future__ import division > > import clnum as n > > n.set_default_precision(40) > > print repr(n.exp(n.log(5/23)*2/7)) > > ========================= > > gets mpf('0.6466073240654112295',17) > > > > How come? > >Because you shoulnd't use > >from __future__ import division > >in this case, since that will turn 5/23 into a plain float, with >64-bit accuracy (well, 53 bits for the mantissa, really). > >In [3]: n.set_default_precision(50) > >In [4]: n.exp(n.log(n.mpq(5,23)*n.mpq(2,7))) >Out[4]: >mpf('0.06211180124223602484472049689440993788819875776397515527949',55) > >or alternatively: > >In [7]: n.exp(n.log(n.mpq('5/23')*n.mpq('2/7'))) >Out[7]: >mpf('0.06211180124223602484472049689440993788819875776397515527949',55) That's terrific! ======================= # clnumTest3-c.py import clnum as n n.set_default_precision(50) a = repr(n.exp(n.log(n.mpq(5,23)*n.mpq(2,7)))) print a print str(a)[5:-5] """ results mpf('0.06211180124223602484472049689440993788819875776397515527949',55) 0.06211180124223602484472049689440993788819875776397515527949 """ ========================================= Is there a better way to print the result without the leading "mpf('" and the trailing "',55)"? The only clnum manual I found was , which is obviously incomplete. 
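One caution about the session just above: exp(log(x*y)) merely returns x*y, so the 0.06211180124... figure is 5/23 times 2/7 (exactly 10/161), not (5/23)**(2/7). For a power, the exponent must multiply the logarithm. A corrected sketch, keeping both fractions exact with mpq until the final exp():

    import clnum as n

    n.set_default_precision(50)
    base = n.mpq(5, 23)
    expo = n.mpq(2, 7)
    print repr(n.exp(n.log(base) * expo))
    # 0.6466073240654112295... -- consistent with the earlier 17-digit
    # clnum value and with numpy's 0.646607324065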
How does anyone learn as much about it as you do? And finally (I think), could you show me how to do division and multiplication with precision? Say 7596.52/632517 and 123.45678987654321 times 298374.287364827364287346? >clnum exposes true rationals, so you can use them for a computation >such as this one. I do want to use clnum, so I can use it to make some useful functions to to keep around for use within Python scripts. Thanks very much, Dick From fperez.net at gmail.com Mon Jan 15 17:32:53 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 15 Jan 2007 15:32:53 -0700 Subject: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places? In-Reply-To: <7.0.1.0.2.20070115125148.0319d2c0@rcblue.com> References: <7.0.1.0.2.20070115084916.07543c20@rcblue.com> <7.0.1.0.2.20070115104218.0783c580@rcblue.com> <7.0.1.0.2.20070115125148.0319d2c0@rcblue.com> Message-ID: On 1/15/07, Dick Moores wrote: > Is there a better way to print the result without the leading "mpf('" > and the trailing "',55)"? Not that I can see right away. > > The only clnum manual I found was > , which is > obviously incomplete. How does anyone learn as much about it as you do? I don't know how good the manual is, I've never had a look at it. I just use the tab key and the ? operator in ipython. It's a handy little tool: http://ipython.scipy.org For instance: In [29]: n.*cos*? n.acos n.acosh n.cos n.cosh In [30]: n.mpf? Type: type Base Class: Namespace: Interactive File: /home/fperez/tmp/local/lib/python2.4/site-packages/clnum/__init__.py Docstring: mpf(x) -> extended precision floating point number mpf(x,prec) -> extended precision floating point number Convert a string or number to an extended precision floating point number, if possible. The optional prec is the number of decimal digits. > And finally (I think), could you show me how to do division and > multiplication with precision? Say 7596.52/632517 and > 123.45678987654321 times 298374.287364827364287346? In [25]: f = n.mpf In [26]: f('7596.52')/632517 Out[26]: mpf('0.012009985502365944314540162556895703988983695299889172939225',55) In [27]: f('123.45678987654321')*f('298374.287364827364287346') Out[27]: mpf('3.683633169976281356247722948026362522066000000000000000001e7',55) Cheers, f From seismic73 at yahoo.com Tue Jan 16 01:23:07 2007 From: seismic73 at yahoo.com (Jonathan Kane) Date: Mon, 15 Jan 2007 22:23:07 -0800 (PST) Subject: [SciPy-user] scipy installation problems for a newbie Message-ID: <930713.75058.qm@web81812.mail.mud.yahoo.com> Hi. I am a newbie trying to install both numpy and scipy. Numpy seems to have installed fine, but scipy gives me errors when I try certain operations. Can anyone help? I am using an Intel Mac Mini with OS X 10.4.8. I downloaded and installed the binaries for ActivePython. Here is the readout when I start python ActivePython 2.4.3 Build 11 (ActiveState Software Inc.) based on Python 2.4.3 (#1, Apr 3 2006, 18:07:14) [GCC 4.0.1 (Apple Computer, Inc. build 5247)] on darwin Type "help", "copyright", "credits" or "license" for more information. Here is an example of when I try to load scipy modules: >>> from numpy import matrix >>> from scipy.linalg import inv, det, eig Traceback (most recent call last): File "", line 1, in ? File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/linalg/__init__.py", line 8, in ? from basic import * File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/linalg/basic.py", line 17, in ? 
from lapack import get_lapack_funcs File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/linalg/lapack.py", line 17, in ? from scipy.linalg import flapack ImportError: Failure linking new module: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/linalg/flapack.so: Library not loaded: /usr/local/lib/libgfortran.1.dylib Referenced from: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/linalg/flapack.so Reason: image not found What is going on?? Help if you can. Thanks. Jon -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Tue Jan 16 01:29:18 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 16 Jan 2007 00:29:18 -0600 Subject: [SciPy-user] scipy installation problems for a newbie In-Reply-To: <930713.75058.qm@web81812.mail.mud.yahoo.com> References: <930713.75058.qm@web81812.mail.mud.yahoo.com> Message-ID: <45AC70BE.5070402@gmail.com> Jonathan Kane wrote: > Hi. I am a newbie trying to install both numpy and scipy. Numpy seems > to have installed fine, but scipy gives me errors when I try certain > operations. Can anyone help? > > I am using an Intel Mac Mini with OS X 10.4.8. I downloaded and > installed the binaries for ActivePython. Here is the readout when I > start python Which numpy and scipy binaries did you install and where did you get them? Whoever built them used gfortran to compile them. Consequently, they got linked to the gfortran runtime libraries. You will need to download and install the gfortran compiler available from here: http://hpc.sourceforge.net Specifically, download the file gfortran-intel-bin.tar.gz . Read the paragraph above the download link for information on how to install it. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From chiaracaronna at hotmail.com Tue Jan 16 08:35:13 2007 From: chiaracaronna at hotmail.com (Chiara Caronna) Date: Tue, 16 Jan 2007 13:35:13 +0000 Subject: [SciPy-user] Newbie: Reading a slice of a file In-Reply-To: Message-ID: Thanks for your suggestion. I managed to modify the pylab.load function; I inserted a new optional input, skiprowsfrom=n, where n is the rows from which you want to skip;I think this can bu useful, for example, if in the file there are not only floats, but also uncommented text. Here is my modification in the file mlab.py: for i,line in enumerate(fh): if i=skiprowsfrom-1: continue ... ... >From: Vincent Nijs >Reply-To: SciPy Users List >To: SciPy Users List >Subject: Re: [SciPy-user] Newbie: Reading a slice of a file >Date: Mon, 15 Jan 2007 12:28:50 -0600 > >I don't think you can do that directly with pylab.load. You could do > >data = pylab.load(f,delimiter=',',skiprows=n-10) > >Where n = the length of the file. > >You could probably do something more flexible with the csv module. The >following will read all your data into an array (assumes all elements are >floats). You can add-in selection conditions as needed. > >import numpy >reader = csv.reader(file('yourfile.csv','r')) >data = numpy.array([i for i in reader]) >data = data.astype('f') > >Vincent > > >On 1/15/07 11:51 AM, "Chiara Caronna" wrote: > > > I need to read just some rows of a file... do you know how to do it? 
>From: Vincent Nijs
>Reply-To: SciPy Users List
>To: SciPy Users List
>Subject: Re: [SciPy-user] Newbie: Reading a slice of a file
>Date: Mon, 15 Jan 2007 12:28:50 -0600
>
>I don't think you can do that directly with pylab.load. You could do
>
>data = pylab.load(f,delimiter=',',skiprows=n-10)
>
>Where n = the length of the file.
>
>You could probably do something more flexible with the csv module. The
>following will read all your data into an array (assumes all elements are
>floats). You can add-in selection conditions as needed.
>
>import csv
>import numpy
>reader = csv.reader(file('yourfile.csv','r'))
>data = numpy.array([i for i in reader])
>data = data.astype('f')
>
>Vincent
>
>On 1/15/07 11:51 AM, "Chiara Caronna" wrote:
>
> > I need to read just some rows of a file... do you know how to do it?
> > I found out that with pylab.load I can skip some rows from the top, but I
> > don't know how to skip from the bottom... maybe there is another function?
> > Thanks
> > Chiara
> >
> >> From: Darren Dale
> >> Reply-To: SciPy Users List
> >> To: SciPy Users List
> >> Subject: Re: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5
> >> to 30 places?
> >> Date: Mon, 15 Jan 2007 12:19:58 -0500
> >>
> >> On Monday 15 January 2007 11:52, Dick Moores wrote:
> >>> That's great! Thanks!
> >>>
> >>> Now, how about something such as (5/23)**(2/7)?
> >>> print repr(n.exp(n.log(5/23)*2/7))
> >>> gets ValueError: log domain error
> >>
> >> integer division 5/23=0. The log of zero is either going to give you
> >> -infinity
> >> or an error.
> >>
> >> Darren
> >> _______________________________________________
> >> SciPy-user mailing list
> >> SciPy-user at scipy.org
> >> http://projects.scipy.org/mailman/listinfo/scipy-user
> >
> > _______________________________________________
> > SciPy-user mailing list
> > SciPy-user at scipy.org
> > http://projects.scipy.org/mailman/listinfo/scipy-user
>
>--
>Vincent R. Nijs
>Assistant Professor of Marketing
>Kellogg School of Management, Northwestern University
>2001 Sheridan Road, Evanston, IL 60208-2001
>Phone: +1-847-491-4574 Fax: +1-847-491-2498
>E-mail: v-nijs at kellogg.northwestern.edu
>Skype: vincentnijs
>
>_______________________________________________
>SciPy-user mailing list
>SciPy-user at scipy.org
>http://projects.scipy.org/mailman/listinfo/scipy-user

From edloper at gradient.cis.upenn.edu  Tue Jan 16 13:52:53 2007
From: edloper at gradient.cis.upenn.edu (Edward Loper)
Date: Tue, 16 Jan 2007 13:52:53 -0500
Subject: [SciPy-user] Docstring standards for NumPy and SciPy
Message-ID: <4DF09477-38C1-44AA-90E0-96D659BC2C2B@gradient.cis.upenn.edu>

[I sent this 5 days ago, but it's been held because I was not subscribed -- so I decided to just go ahead & subscribe and resend it. Apologies if it ends up being a dup.]

I'm glad to hear that you're making a push towards using standardized markup in docstrings -- I think this is a worthy goal. I wanted to respond to a few points that have come up, though.

First, I'd pretty strongly recommend against inventing your own markup language. It increases the barrier for contributions, makes life more difficult for tools, and takes up that much more brain space that could be devoted to better things. Plus, it's surprisingly hard to do right, even if you're translating from your markup to an existing one -- there are just too many corner cases to consider. I know Travis has reservations about the amount of 'line noise,' but believe me, there are good reasons why that 'line noise' is there, and the authors of ReST have done a *very* good job at keeping it to a minimum.

Given the expressive power that's needed for scipy docs, I would recommend using ReST. Epytext is a much simpler markup language, and most likely won't be expressive enough. (e.g., it has no support for tables.)

Whatever markup language you settle on, be sure to indicate it by setting module-level __docformat__ variables, as described in PEP 258.
__docformat__ should be a string containing the name of the module's markup language. The name of the markup language may optionally be followed by a language code (such as en for English). Conventionally, the definition of the __docformat__ variable immediately follows the module's docstring. E.g.:

    __docformat__ = 'restructuredtext'

Other standard values include 'plaintext' and 'epytext'.

As for extending ReST and/or epydoc to support any specializations you want to make, I don't think it'll be that hard. E.g., adding 'input' and 'output' as aliases for 'parameters' and 'returns' is pretty simple. And adding support for generating latex-math should be pretty straightforward. I think concerns about the markup for marking latex-math are perhaps exaggerated, given that the *contents* of latex-math expressions are quite likely to look like line-noise to the uninitiated. :) I've patched my local version of docutils to support inline math with `x=12`:math: and block math with:

    .. math:: F(x,y;w) = \langle w, \Phi(x,y) \rangle

And I've been pretty happy with how well it reads. And for people who aren't latex gurus, it may be more obvious what's going on if they see :math:`..big latex expr..` than if they just see $..big latex expr..$.

If you really think that's too line-noise-like, then you could set the default role to be math, so `x=12` would render as math. But then you'd need to explicitly mark crossreferences, so I doubt that would be a win overall.

[Alan Isaac]
> Must items (e.g., parameters) in a consolidated field be
> marked as interpreted text (with back ticks).
> Yes. It does seem redundant, so I will ask why.

I wouldn't mind changing this to work both with & without the backticks around parameter names. At the time when I implemented it, I just checked what the standard practice within docutils for writing consolidated fields was, and wrote a parser for that.

[Alan Isaac]
> Would it not be nice to have :Inputs: and :Outputs:
> consolidated fields as synonyms for :Parameters:
> and :Returns:?
> Yes! Perhaps Ed Loper would be willing to add this.

The only concern might be if other projects have defined custom :input: and :output: fields that they use for other uses -- I'll try to check if this is the case. In the meantime, the following should do what you want:

    from epydoc.docstringparser import *
    register_field_handler(process_return_field, 'output')

    from epydoc.markup import restructuredtext as epytext_rst
    epytext_rst.CONSOLIDATED_FIELDS['input'] = 'param'
    epytext_rst.CONSOLIDATED_DEFLIST_FIELDS.append('input')

[Alan Isaac]
> Is Epydoc easily customizable?
> In what ways? It is easy to add new fields
> (see above), but I do not know about new
> consolidated fields.

I intend for epydoc to be easily customizable, but at the moment it's only customizable in those places where I've thought to make it customizable. If you find there's some customization you'd like to do, but there's no hook for it, let me know & I can try to think about what kind of hook would be appropriate.

[Alan Isaac]
> Is table support adequate in reST?

See <http://docutils.sourceforge.net/docs/ref/rst/restructuredtext.html#tables>. If ReST table support isn't expressive enough for you, then you must be using some pretty complex tables. :)

[Alan Isaac]
> math, so we could inline `f(x)=x^2` rather than
> :latex-math:`f(x)=x^2`.

As I noted above, this would mean you'd have to explicitly mark crossreferences to python objects with some tag -- rst can't read your mind to know whether `foo` refers to a math expression or a variable.
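For reference, a custom role like the :math: one described above can be registered through docutils' public role API. A minimal sketch (the choice to store the LaTeX in a literal node tagged 'math' is illustrative, not Edward's actual patch):

    from docutils import nodes
    from docutils.parsers.rst import roles

    def math_role(role, rawtext, text, lineno, inliner, options={}, content=[]):
        # Keep the raw LaTeX source; a writer or a later transform decides
        # how to render nodes carrying the 'math' class.
        node = nodes.literal(rawtext, text, classes=['math'])
        return [node], []

    roles.register_local_role('math', math_role)

With the role registered, :math:`x=12` parses cleanly; making math the default role (so bare `x=12` works) is a one-line ``.. default-role:: math`` setting, at the cost to cross-references described above.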
> It may be worth asking whether
> epydoc developers would be willing to pass $f(x)=x^2$
> as latex-math.

Overall, I'm reluctant to make changes to the markup language(s) themselves that aren't supported by the markup language's own extension facilities.

> Why use underlining to define sections?
> So that they are really sections.
> The indented examples will display fine
> but will not give access to sectioning controls.

If you don't use underlining, you'll get definition lists instead of sections. It would be possible to register a transformation w/ ReST that checks for top-level definition lists & transforms them to sections, but I doubt it's worth it. In my experience, the only time when you need to add section headings within a docstring is if the docstring is quite long, and in that case the underlining doesn't bother me too much.

[Gary Ruben]
> Currently epydoc generates far too much
> information (2371 pages worth when I ran it on the numpy source a few
> days ago) and seems unable to be easily modified to reduce its output.

If you can explicitly specify what you'd like included in the output, and how you'd like it formatted, then I can give you an idea of how hard that would be to produce. You are right that, at the moment, epydoc's output generators are not terribly customizable. And the latex output isn't as pretty as I'd like. :)

[Gary Ruben]
> The other thing we want is to be able to generate examples from
> heavily marked-up example modules a la what FiPy does. I don't think
> epydoc even allows that without modification.

For this, I highly recommend writing stand-alone doctest files, which can be run through docutils as-is to generate marked-up examples; and can be run through doctest to verify that all examples are correct. E.g., see: Each of the files linked from that page is generated from a rst-formatted doctest file.

[Perry Greenfield]
> Any reason ipython can't use epydoc or some other tool to format the
> markup in ascii (I forget if epydoc does ascii output) so that the
> user doesn't see the 'line noise' when using the ipython
> introspection features?

If you add this to ipython, please be sure to check the __docformat__ variable before deciding how to convert the docstring! (If you encounter an unknown markup, then just render it as plaintext.)

As a final note, it's probably true that epydoc may currently be missing some of the hooks that you'd need to specialize ReST without doing some monkey-patching. If you find that this is the case, please let me know what hooks you'd like to see added to epydoc. Or if the construction you're trying to add is one that's likely to be useful to other epydoc users (e.g., latex-math), then it could certainly be added to epydoc itself.

-Edward

(disclaimer: I'm not subscribed to scipy-user; I just read the thread from the archives. So please cc me on responses.)
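A stand-alone doctest file of the kind recommended above can be both rendered by docutils and checked by doctest. A small sketch (the file name is hypothetical):

    example_fit.txt:

        Least-squares fit of a line
        ===========================

        >>> import numpy
        >>> x = numpy.arange(5.)
        >>> A = numpy.vstack([x, numpy.ones(5)]).T
        >>> m, c = numpy.linalg.lstsq(A, 2*x + 1)[0]
        >>> print round(m, 6), round(c, 6)
        2.0 1.0

    and, to verify the examples:

        import doctest
        doctest.testfile('example_fit.txt')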
From fperez.net at gmail.com  Tue Jan 16 14:42:44 2007
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 16 Jan 2007 12:42:44 -0700
Subject: [SciPy-user] Docstring standards for NumPy and SciPy
In-Reply-To: <4DF09477-38C1-44AA-90E0-96D659BC2C2B@gradient.cis.upenn.edu>
References: <4DF09477-38C1-44AA-90E0-96D659BC2C2B@gradient.cis.upenn.edu>
Message-ID:

On 1/16/07, Edward Loper wrote:

[...] many thanks for your feedback, Ed. This is a sorely needed development in scipy's growth.

> [Perry Greenfield]
> > Any reason ipython can't use epydoc or some other tool to format the
> > markup in ascii (I forget if epydoc does ascii output) so that the
> > user doesn't see the 'line noise' when using the ipython
> > introspection features?
>
> If you add this to ipython, please be sure to check the __docformat__
> variable before deciding how to convert the docstring! (If you
> encounter an unknown markup, then just render it as plaintext.)

No worries, that's how it will be done.

Cheers,

f

From markusro at element.fkp.physik.tu-darmstadt.de  Tue Jan 16 18:21:02 2007
From: markusro at element.fkp.physik.tu-darmstadt.de (Markus Rosenstihl)
Date: Wed, 17 Jan 2007 00:21:02 +0100
Subject: [SciPy-user] Newbie: Reading a slice of a file
In-Reply-To:
References:
Message-ID:

On 15.01.2007, at 18:51, Chiara Caronna wrote:

> I need to read just some rows of a file... do you know how to do it?
> I found out that with pylab.load I can skip some rows from the top, but I
> don't know how to skip from the bottom... maybe there is another function?
> Thanks
> Chiara

Hi

You can use scipy.io.read_array:

In [313]: b
Out[313]:
array([[0, 0, 0],
       [1, 0, 0],
       [2, 0, 0],
       [3, 0, 0],
       [4, 0, 0],
       [5, 0, 0],
       [6, 0, 0],
       [7, 0, 0],
       [8, 0, 0],
       [9, 0, 0]])

In [314]: scipy.io.write_array('testarray',b)

In [315]: scipy.io.read_array('testarray',lines=(1,(4,7)))
Out[315]:
array([[ 1.,  0.,  0.],
       [ 4.,  0.,  0.],
       [ 5.,  0.,  0.],
       [ 6.,  0.,  0.]])

In [316]:

HTH,

Markus

From s.mientki at ru.nl  Tue Jan 16 18:47:43 2007
From: s.mientki at ru.nl (Stef Mientki)
Date: Wed, 17 Jan 2007 00:47:43 +0100
Subject: [SciPy-user] Getting started wiki page
In-Reply-To: <20070111225611.GD4479@clipper.ens.fr>
References: <20070111225611.GD4479@clipper.ens.fr>
Message-ID: <45AD641F.3080406@ru.nl>

Hi Gael,

apparently not everybody is allowed to edit the pages. I saw yesterday the lecture of Eric Jones and Travis Oliphant; despite the not-overwhelming image quality, I think it's real information for newbies. And if the lecture is too long, at least the handouts are great!! Maybe an idea to add it to "How to learn to use scipy": http://www.nanohub.org/resources/?id=99.

Another interesting point for newbies might be an overview of variable types: what's the difference between a list and a tuple? I have put my personal notes on my website; maybe it's worth including something like this: http://oase.uci.kun.nl/~mientki/data_www/pic/jalcc/help/python_vars.html

success,
Stef Mientki
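The list/tuple question above has a one-line answer: lists are mutable, tuples are not. For example:

    a = [1, 2, 3]
    a[0] = 99     # fine: lists can be modified in place
    t = (1, 2, 3)
    t[0] = 99     # raises TypeError: tuples are immutable

Because tuples are immutable (and hashable when their elements are), they can serve as dictionary keys, which lists cannot.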
Gael Varoquaux wrote:
> Once again I had to explain to a new student what scipy was, what python
> was, and what he should read in order to start working with these tools.
>
> I have started a "Getting Started" page on the wiki to answer this
> problem. The idea would be to guide the complete beginner to get him
> productive as quickly as possible (like the "Getting started" page in
> matlab).
>
> Currently the page is quite empty and pretty bad, but if people start
> improving it, it could be very useful and hopefully be linked from the
> front page.
>
> The page is not linked by any page yet, but it is accessible at:
>
> http://scipy.org/Getting_Started
>
> Cheers,
>
> Gaël
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From oliphant at ee.byu.edu  Tue Jan 16 18:33:34 2007
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Tue, 16 Jan 2007 16:33:34 -0700
Subject: [SciPy-user] Docstring standards for NumPy and SciPy
In-Reply-To: <4DF09477-38C1-44AA-90E0-96D659BC2C2B@gradient.cis.upenn.edu>
References: <4DF09477-38C1-44AA-90E0-96D659BC2C2B@gradient.cis.upenn.edu>
Message-ID: <45AD60CE.6090102@ee.byu.edu>

Edward Loper wrote:

>[I sent this 5 days ago, but it's been held because I was not
>subscribed -- so I decided to just go ahead & subscribe and resend
>it. Apologies if it ends up being a dup.]

I'm ccing to the users list, but the discussion has been taking place on the developers list, so I'm addressing it there.

>I'm glad to hear that you're making a push towards using standardized
>markup in docstrings -- I think this is a worthy goal. I wanted to
>respond to a few points that have come up, though.
>
>First, I'd pretty strongly recommend against inventing your own
>markup language. It increases the barrier for contributions, makes
>life more difficult for tools, and takes up that much more brain
>space that could be devoted to better things.

I'm not really convinced by this argument. I agree we shouldn't be flippant about introducing new markup, and that is not what I've proposed. But we already must learn multiple markups. For example, Moin Moin and restructured text each describe tables in a different way.

Basically, I've said that the existing markups do not seem geared toward mathematical description (as latex must be hacked onto them). In addition, I don't like the look of existing markup docstrings --- especially in the parameter section. That's where my biggest problem really lies. What should be clear ends up with all kinds of un-necessary back-ticks. I also don't really like the extra ":" at the beginning of a paragraph to denote a section. I could live with the underline though.

In the end, none of the markup languages seem to have been designed with the scientific user community in mind, and so I'm not feeling a particular need to cram my brain into what they came up with. Basically, scipy docstrings have been developed already and they follow a sort of markup. Why should they all be changed (in essentially unnecessary ways, because a computer program could be written to change them, and they will look "messier" in the end) just to satisfy a particular markup language that was designed without inquiring about our needs?

This is not a case of Not Invented Here. I'm very happy to use an existing markup standard. In fact, I basically like restructured Text. There are only a few issues I have with it. Ideally, we can get those issues resolved in epydoc itself, so that we can use a slightly modified form of restructured Text in the docstrings. I'm all for simplifying things, but there is a limit to how much I'm willing to give up when we can easily automate the conversions to what epydoc currently expects.

>Plus, it's
>surprisingly hard to do right, even if you're translating from your
>markup to an existing one -- there are just too many corner cases to
>consider.
>I know Travis has reservations about the amount of 'line
>noise,' but believe me, there are good reasons why that 'line noise'
>is there, and the authors of ReST have done a *very* good job at
>keeping it to a minimum.

Well, that is debatable in the specific instance of parameter descriptions. The extra back-ticks are annoying to say the least, and unnecessary.

>Given the expressive power that's needed for scipy docs, I would
>recommend using ReST. Epytext is a much simpler markup language, and
>most likely won't be expressive enough. (e.g., it has no support for
>tables.)
>
>Whatever markup language you settle on, be sure to indicate it by
>setting module-level __docformat__ variables, as described in PEP
>258. __docformat__ should be a string containing the name of the
>module's markup language. The name of the markup language may
>optionally be followed by a language code (such as en for English).
>Conventionally, the definition of the __docformat__ variable
>immediately follows the module's docstring. E.g.:
>
>    __docformat__ = 'restructuredtext'
>
>Other standard values include 'plaintext' and 'epytext'.

SciPy is big enough that I see no reason we cannot define a slightly modified form of restructured Text (i.e. it uses MoinMoin tables, gets rid of back-ticks in parameter lists, understands math ($ $), and has specific layout for certain sections).

>As for extending ReST and/or epydoc to support any specializations
>you want to make, I don't think it'll be that hard. E.g., adding
>'input' and 'output' as aliases for 'parameters' and 'returns' is
>pretty simple. And adding support for generating latex-math should
>be pretty straightforward. I think concerns about the markup for
>marking latex-math are perhaps exaggerated, given that the *contents*
>of latex-math expressions are quite likely to look like line-noise to
>the uninitiated. :) I've patched my local version of docutils to
>support inline math with `x=12`:math: and block math with:
>
>.. math:: F(x,y;w) = \langle w, \Phi(x,y) \rangle
>
>And I've been pretty happy with how well it reads. And for people
>who aren't latex gurus, it may be more obvious what's going on if
>they see :math:`..big latex expr..` than if they just see $..big
>latex expr..$.
>
>If you really think that's too line-noise-like, then you could set
>the default role to be math, so `x=12` would render as math. But
>then you'd need to explicitly mark crossreferences, so I doubt that
>would be a win overall.
>
>[Alan Isaac]
>
>>Must items (e.g., parameters) in a consolidated field be
>>marked as interpreted text (with back ticks).
>> Yes. It does seem redundant, so I will ask why.
>
>I wouldn't mind changing this to work both with & without the
>backticks around parameter names. At the time when I implemented it,
>I just checked what the standard practice within docutils for writing
>consolidated fields was, and wrote a parser for that.

Allowing us not to have backticks in parameter names would help me like using restructured Text quite a bit. I see no reason why parameter lists cannot be handled specially. After all, it is the most important part of a docstring.

>>Is table support adequate in reST?
>
>See <http://docutils.sourceforge.net/docs/ref/rst/restructuredtext.html#tables>
>
>If ReST table support isn't expressive enough for you, then you must
>be using some pretty complex tables. :)

Moin Moin uses a different way to describe tables. :-(

>[Alan Isaac]
>
>> math, so we could inline `f(x)=x^2` rather than
>> :latex-math:`f(x)=x^2`.
>As I noted above, this would mean you'd have to explicitly mark
>crossreferences to python objects with some tag -- rst can't read
>your mind to know whether `foo` refers to a math expression or a
>variable.
>
>>It may be worth asking whether
>> epydoc developers would be willing to pass $f(x)=x^2$
>> as latex-math.
>
>Overall, I'm reluctant to make changes to the markup language(s)
>themselves that aren't supported by the markup language's own
>extension facilities.

That understandable reluctance is why we need to make changes to the standard for SciPy docstrings. Math support is critical and it just isn't built in to restructured Text as well as it could be. Having to do :latex-math:` ` for in-line math is silly when $$ has been the way to define latex math for a long time.

In summary, my biggest issues with just straight restructured Text are

1) back-ticks in parameter lists.
2) the way math is included
3) doesn't understand Moin Moin tables
4) doesn't seem to have special processing for standard section headers (I may be wrong about this, as I'm not a reST expert.)

I also don't really like the way bold, italics, and courier are handled. My favorite is now *bold* /italics/ and `fixed-width`. I like the {{{ code }}} for code blocks that Moin Moin uses, but that's not a big deal to me. I can live with the :: that restructured-Text uses.

It seems like we are going to have to customize (hack) epydoc to do what we want anyway. Why then can we not "tweak" reST a little bit too?

-Travis

From aisaac at american.edu  Tue Jan 16 20:56:11 2007
From: aisaac at american.edu (Alan G Isaac)
Date: Tue, 16 Jan 2007 20:56:11 -0500
Subject: [SciPy-user] [SciPy-dev] Docstring standards for NumPy and SciPy
In-Reply-To: <45AD60CE.6090102@ee.byu.edu>
References: <4DF09477-38C1-44AA-90E0-96D659BC2C2B@gradient.cis.upenn.edu> <45AD60CE.6090102@ee.byu.edu>
Message-ID:

On Tue, 16 Jan 2007, Travis Oliphant apparently wrote:
> In summary, my biggest issues with just straight
> restructured Text are
> 1) back-ticks in parameter lists.

I understood Ed to say:

1) this can probably be dispensed with easily enough
2) if dispensed with, we may lose cross refs from the parameters?

I assume I misunderstood (2), since this is just a matter of how definition lists are parsed in consolidated fields, and definition lists definitely do not require the back tics. So I *think* Ed is saying both that this problem can be overcome and that he is willing to help with it.

> 2) the way math is included

I understood Ed to say that for inline math we could just make LaTeX the default role, so that we write e.g., `f(x)=x^2`. Back ticks look at least as good as dollar signs, IMO. The cost is that cross refs then must be explicitly marked. How important are these considered to be for the documentation? (How did you intend to mark them?)

I did not get a sense of how easy it would be to make dollar signs special (so they could be used to indicate a math role). I guess it would not be hard, but I feel pretty confident that this would NOT be welcomed as some kind of patch to reST. (But I'm just a user.)

> 3) doesn't understand Moin Moin tables

This seems just a matter of hacking reST, it seems to me. I hazard that the reST developers would welcome a patch to handle Moin Moin tables. In the meantime I ask, what features missing from reST tables would actually be used?

> 4) doesn't seem to have special processing for standard section headers
> (I may be wrong about this, as I'm not a reST expert.)
I am not sure what you mean here. Section headers can be styled as you wish, pretty much. What kind of "processing" is desired here?

> I also don't really like the way bold, italics, and
> courier are handled. My favorite is now *bold* /italics/
> and `fixed-width`.

This seems to me not worth haggling over. Right? Really **bold**, *italics*, and ``fixed-width`` are just as good. (Note that you could even use single back ticks for fixed width by hijacking the default role, but it seems better to save that for math?) Remember that each time you steal a character you need some way to escape it to get it back when needed: reST minimizes this.

> I like the {{{ code }}} for code blocks that Moin Moin uses, but that's
> not a big deal to me. I can live with the :: that restructured-Text uses.

I think the reST convention is much cleaner.

fwiw,
Alan Isaac

From chiaracaronna at hotmail.com  Wed Jan 17 03:34:39 2007
From: chiaracaronna at hotmail.com (Chiara Caronna)
Date: Wed, 17 Jan 2007 08:34:39 +0000
Subject: [SciPy-user] Newbie: Reading a slice of a file
In-Reply-To:
Message-ID:

Thank you, but the problem is that in the file from time to time there are non-numerical values, so I need to skip those rows...

>From: Markus Rosenstihl
>Reply-To: SciPy Users List
>To: SciPy Users List
>Subject: Re: [SciPy-user] Newbie: Reading a slice of a file
>Date: Wed, 17 Jan 2007 00:21:02 +0100
>
>On 15.01.2007, at 18:51, Chiara Caronna wrote:
>
>>I need to read just some rows of a file... do you know how to do it?
>>I found out that with pylab.load I can skip some rows from the top, but I
>>don't know how to skip from the bottom... maybe there is another function?
>>Thanks
>>Chiara
>
>Hi
>
>You can use scipy.io.read_array:
>
>In [313]: b
>Out[313]:
>array([[0, 0, 0],
>       [1, 0, 0],
>       [2, 0, 0],
>       [3, 0, 0],
>       [4, 0, 0],
>       [5, 0, 0],
>       [6, 0, 0],
>       [7, 0, 0],
>       [8, 0, 0],
>       [9, 0, 0]])
>
>In [314]: scipy.io.write_array('testarray',b)
>
>In [315]: scipy.io.read_array('testarray',lines=(1,(4,7)))
>Out[315]:
>array([[ 1.,  0.,  0.],
>       [ 4.,  0.,  0.],
>       [ 5.,  0.,  0.],
>       [ 6.,  0.,  0.]])
>
>In [316]:
>
>HTH,
>
>Markus
>
>_______________________________________________
>SciPy-user mailing list
>SciPy-user at scipy.org
>http://projects.scipy.org/mailman/listinfo/scipy-user

From agn at noc.soton.ac.uk  Tue Jan 16 18:10:22 2007
From: agn at noc.soton.ac.uk (George Nurser)
Date: Tue, 16 Jan 2007 23:10:22 +0000
Subject: [SciPy-user] scipy, f2py and ifort on intel mac
Message-ID:

I managed to get f2py to work with ifort on a MBP C2D by following the hint in Pearu Petersen's post in http://comments.gmane.org/gmane.comp.python.f2py.user/778 & included

    if sys.platform=='darwin':
        opt.extend(['-undefined', 'dynamic_lookup', '-bundle'])

in /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/distutils/fcompiler/intel.py

In intel.py I also replaced -KPIC by -fPIC as recommended by the ifort manual, and removed -shared from the linker_so dictionary list, as the manual indicated it's not a valid option.

After that, e.g.

    f2py --fcompiler=intel -c Wright.f -m Wright

produced a module that imported fine into numpy.

Emboldened by this I had a go at compiling scipy using ifort, with

    python setup.py build config_fc --fcompiler=intel

This did *not* work ...
It made blas.so successfully, but the compiler fortcom went into an infinite loop when trying to compile lapack.so. The last output (I also then changed -O3 to -O2 in case it helped) was:

build/src.macosx-10.3-fat-2.4/build/src.macosx-10.3-fat-2.4/scipy/linalg/flapackmodule.c:11799: warning: passing argument 6 of 'f2py_func' from incompatible pointer type
compiling Fortran sources
Fortran f77 compiler: /usr/bin/ifort -72 -w90 -w95 -fPIC -cm -O2
Fortran f90 compiler: /usr/bin/ifort -FR -fPIC -cm -O2
Fortran fix compiler: /usr/bin/ifort -FI -fPIC -cm -O2
compile options: '-Ibuild/src.macosx-10.3-fat-2.4 -I/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/core/include -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 -c'
ifort:f77: build/src.macosx-10.3-fat-2.4/build/src.macosx-10.3-fat-2.4/scipy/linalg/flapack-f2pywrappers.f
/usr/bin/ifort -nofor_main -undefined dynamic_lookup -bundle build/temp.macosx-10.3-fat-2.4/build/src.macosx-10.3-fat-2.4/build/src.macosx-10.3-fat-2.4/scipy/linalg/flapackmodule.o build/temp.macosx-10.3-fat-2.4/build/src.macosx-10.3-fat-2.4/fortranobject.o build/temp.macosx-10.3-fat-2.4/build/src.macosx-10.3-fat-2.4/build/src.macosx-10.3-fat-2.4/scipy/linalg/flapack-f2pywrappers.o -Lbuild/temp.macosx-10.3-fat-2.4 -o build/lib.macosx-10.3-fat-2.4/scipy/linalg/flapack.so -Wl,-framework -Wl,Accelerate

After this output, terminal hung until the compilation job was killed.

Compilation with gfortran

    python setup.py build

seemed to work without any problems. Anybody have any ideas?

--George Nurser.

From markusro at element.fkp.physik.tu-darmstadt.de  Wed Jan 17 04:59:05 2007
From: markusro at element.fkp.physik.tu-darmstadt.de (Markus Rosenstihl)
Date: Wed, 17 Jan 2007 10:59:05 +0100
Subject: [SciPy-user] Newbie: Reading a slice of a file
In-Reply-To:
References:
Message-ID: <46f082fea88c676aa0fea765398630a3@element.fkp.physik.tu-darmstadt.de>

On 17.01.2007, at 09:34, Chiara Caronna wrote:
>
> Thank you, but the problem is that in the file from time to time there are
> non-numerical values, so I need to skip those rows...

doc(scipy.io.read_array) says the following:

    scipy.io.read_array(fileobject, separator=None, columns=None,
                        comment='#', lines=None, atype='d', linesep='\n',
                        rowsize=10000, missing=0)

    comment -- the comment character (line will be ignored even if it is
               specified by the lines tuple)
    missing -- value to insert in array when conversion to number fails.

So you could try to use one of them: If your file looks like this

    0 0 0 0 0 1 ! a comment
    0 0 0 0 0 2 ! some more comment

you can set: comment='!'
This would skip all lines starting with "!"

Markus
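If read_array behaves as its docstring above says, the two keywords can be combined; a small sketch (the in-memory StringIO simply stands in for a data file, relying on read_array's documented acceptance of file objects):

    from StringIO import StringIO
    import scipy.io

    f = StringIO("0 0 1 ! a comment\n2 x 3\n4 5 6\n")
    a = scipy.io.read_array(f, comment='!', missing=0)
    # The trailing comment is stripped, and the unparseable 'x' should
    # become 0, giving [[0,0,1], [2,0,3], [4,5,6]] as floats.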
>> From: Markus Rosenstihl
>> Reply-To: SciPy Users List
>> To: SciPy Users List
>> Subject: Re: [SciPy-user] Newbie: Reading a slice of a file
>> Date: Wed, 17 Jan 2007 00:21:02 +0100
>>
>> On 15.01.2007, at 18:51, Chiara Caronna wrote:
>>
>>> I need to read just some rows of a file... do you know how to do it?
>>> I found out that with pylab.load I can skip some rows from the top,
>>> but I
>>> don't know how to skip from the bottom... maybe there is another
>>> function?
>>> Thanks
>>> Chiara
>>
>> Hi
>>
>> You can use scipy.io.read_array:
>>
>> In [313]: b
>> Out[313]:
>> array([[0, 0, 0],
>>        [1, 0, 0],
>>        [2, 0, 0],
>>        [3, 0, 0],
>>        [4, 0, 0],
>>        [5, 0, 0],
>>        [6, 0, 0],
>>        [7, 0, 0],
>>        [8, 0, 0],
>>        [9, 0, 0]])
>>
>> In [314]: scipy.io.write_array('testarray',b)
>>
>> In [315]: scipy.io.read_array('testarray',lines=(1,(4,7)))
>> Out[315]:
>> array([[ 1.,  0.,  0.],
>>        [ 4.,  0.,  0.],
>>        [ 5.,  0.,  0.],
>>        [ 6.,  0.,  0.]])
>>
>> In [316]:
>>
>> HTH,
>>
>> Markus
>>
>> _______________________________________________
>> SciPy-user mailing list
>> SciPy-user at scipy.org
>> http://projects.scipy.org/mailman/listinfo/scipy-user

_______________________________________________
SciPy-user mailing list
SciPy-user at scipy.org
http://projects.scipy.org/mailman/listinfo/scipy-user

From lfriedri at imtek.de  Wed Jan 17 05:10:33 2007
From: lfriedri at imtek.de (Lars Friedrich)
Date: Wed, 17 Jan 2007 11:10:33 +0100
Subject: [SciPy-user] matplotlib, small numbers
Message-ID: <1169028633.4400.3.camel@localhost>

Hello,

I just tried

    import pylab
    pylab.plot((1e-20, 2e-20))

The line is displayed fine, but when I use the pan-tool the y-lims are set to 1e-10 so that I can not see any slope anymore. Is there a way to change this?

Thanks

Lars

--
Dipl.-Ing. Lars Friedrich
Optical Measurement Technology
Department of Microsystems Engineering -- IMTEK
University of Freiburg
Georges-Köhler-Allee 102
D-79110 Freiburg
Germany

phone: +49-761-203-7531
fax: +49-761-203-7537
room: 01 088
email: lfriedri at imtek.de
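A common workaround for plotting such tiny values (a sketch that sidesteps the pan tool rather than fixing it) is to rescale the data and note the units on the axis label:

    import pylab

    y = pylab.array([1e-20, 2e-20])
    pylab.plot(y / 1e-20)                    # plot in units of 1e-20
    pylab.ylabel('signal (units of 1e-20)')
    pylab.show()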
From gruben at bigpond.net.au  Wed Jan 17 07:59:34 2007
From: gruben at bigpond.net.au (Gary Ruben)
Date: Wed, 17 Jan 2007 23:59:34 +1100
Subject: [SciPy-user] Docstring standards for NumPy and SciPy
In-Reply-To: <4DF09477-38C1-44AA-90E0-96D659BC2C2B@gradient.cis.upenn.edu>
References: <4DF09477-38C1-44AA-90E0-96D659BC2C2B@gradient.cis.upenn.edu>
Message-ID: <45AE1DB6.8040707@bigpond.net.au>

Edward Loper wrote:
> If you really think that's too line-noise-like, then you could set
> the default role to be math, so `x=12` would render as math. But
> then you'd need to explicitly mark crossreferences, so I doubt that
> would be a win overall.

I want to highlight the distinction between the class/method-level docstrings and modules which are simply a large docstring used to implement an example. If we are to follow this model, as FiPy does, then I don't think there will be much (any?) LaTeX math in the class/method-level docstrings. In fact, I'd argue against LaTeX in these docstrings and probably against using matplotlib in the examples in these docstrings. It will mostly (only?) be in the module-level docstrings, which won't be seen in such environments as ipython. I expect they'll only ever appear to users as full html pages or LaTeX pages. Ed suggests using docutils directly for these. In FiPy, graphical output such as graphs and bitmaps are generated by a plotting package and embedded back into the resulting LaTeX. We would like to be able to do this for both html and LaTeX. I don't know if docutils supports this.

> [Gary Ruben]
>
>> Currently epydoc generates far too much
>> information (2371 pages worth when I ran it on the numpy source a few
>> days ago) and seems unable to be easily modified to reduce its output.
>
> If you can explicitly specify what you'd like included in the output,
> and how you'd like it formatted, then I can give you an idea of how
> hard that would be to produce. You are right that, at the moment,
> epydoc's output generators are not terribly customizable. And the
> latex output isn't as pretty as I'd like. :)

I think "see also" cross references to other methods are also important. I think we want a way to just generate the docstrings as html and LaTeX, with cross-references as navigational aids. No inheritance information. This is because SciPy is currently more of a library of functions grouped into modules and the inheritance information is not usually important. A user just wants to know what methods and variables/fields are available. The endo-generated API docs here: demonstrate this approach, where the class hierarchy has been separated out.

Gary R.

From robert.vergnes at yahoo.fr  Wed Jan 17 08:52:32 2007
From: robert.vergnes at yahoo.fr (Robert VERGNES)
Date: Wed, 17 Jan 2007 14:52:32 +0100 (CET)
Subject: [SciPy-user] SciPy Data Analysis Workbench
In-Reply-To: <45AE1DB6.8040707@bigpond.net.au>
Message-ID: <20070117135232.79039.qmail@web27409.mail.ukl.yahoo.com>

Hello,

I started to write my own Data Analysis Workbench based on Scipy and wxpython. (I haven't seen one around?) Is there anybody interested in it and/or in helping to write it?

Originally I was using the French software 'regressi' for experimental data analysis, but I need some equations which are not included in it, and I cannot check how exactly regressi is working - it is shareware but not open - so I started my own, based on python (Scipy, Numpy, labpy and wxpython). It is on the verge of being functional (near a 0.9 version).

The aim is to be able to acquire data from experiments (text or other) and display it (grid), graph it and apply some ready functions such as curve fitting. It also has the possibility to compare several sets of data from the same or different experiments and find some parameters and/or variations in parameters due to changes in initial parameters. Also with the possibility to transfer acquired/calculated data from/to the python-shell to play with the data and send the data back right away to the plotting and/or to curve fitting - just with one click. It can be applied to signal processing, chemistry or mechanics. It is for people who want to focus on their data more than on programming...

Let me know if some of you are interested. I used this mailing list because I did not know where to send this information. Sorry if I bothered you.

Best Regards

Robert

Gary Ruben wrote:

Edward Loper wrote:
> If you really think that's too line-noise-like, then you could set
> the default role to be math, so `x=12` would render as math. But
> then you'd need to explicitly mark crossreferences, so I doubt that
> would be a win overall.

I want to highlight the distinction between the class/method-level docstrings and modules which are simply a large docstring used to implement an example. If we are to follow this model, as FiPy does, then I don't think there will be much (any?) LaTeX math in the class/method-level docstrings. In fact, I'd argue against LaTeX in these docstrings and probably against using matplotlib in the examples in these docstrings. It will mostly (only?)
be in the module-level docstrings, which won't be seen in such environments as ipython. I expect they'll only ever appear to users as full html pages or LaTeX pages. Ed suggests using docutils directly for these. In FiPy, graphical output such as graphs and bitmaps are generated by a plotting package and embedded back into the resulting LaTeX. We would like to be able to do this for both html and LaTeX. I don't know if docutils supports this.

> [Gary Ruben]
>
>> Currently epydoc generates far too much
>> information (2371 pages worth when I ran it on the numpy source a few
>> days ago) and seems unable to be easily modified to reduce its output.
>
> If you can explicitly specify what you'd like included in the output,
> and how you'd like it formatted, then I can give you an idea of how
> hard that would be to produce. You are right that, at the moment,
> epydoc's output generators are not terribly customizable. And the
> latex output isn't as pretty as I'd like. :)

I think "see also" cross references to other methods are also important. I think we want a way to just generate the docstrings as html and LaTeX, with cross-references as navigational aids. No inheritance information. This is because SciPy is currently more of a library of functions grouped into modules and the inheritance information is not usually important. A user just wants to know what methods and variables/fields are available. The endo-generated API docs here: demonstrate this approach, where the class hierarchy has been separated out.

Gary R.

_______________________________________________
SciPy-user mailing list
SciPy-user at scipy.org
http://projects.scipy.org/mailman/listinfo/scipy-user

From michael.ingrisch at med.uni-muenchen.de  Wed Jan 17 09:46:46 2007
From: michael.ingrisch at med.uni-muenchen.de (Michael Ingrisch)
Date: Wed, 17 Jan 2007 15:46:46 +0100
Subject: [SciPy-user] still missing io.fopen
Message-ID: <1169045206.9118.5.camel@localhost>

Dear all,

Two weeks ago, J.Oishi posted a question concerning the obviously missing class io.fopen. I recently changed my system from Windows and SciPy 0.4.something to Ubuntu Linux and Scipy 0.5.2, and now I have exactly the same problem. With info(io), I get a hopeful message, but info(io.fopen) tells me that the class doesn't exist. My script for importing (binary) measurement data from the MR-scanner doesn't work anymore for this reason. Is there any known solution or workaround for this problem?

Thanks in advance,

Michi
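Until io.fopen is sorted out, plain numpy can often stand in for it when reading binary instrument data. A sketch only -- the filename, header layout and dtypes below are assumptions, not the MR-scanner's actual format:

    import numpy

    f = open('scan.dat', 'rb')                               # hypothetical file
    header = numpy.fromfile(f, dtype=numpy.int32, count=4)   # assumed header
    data = numpy.fromfile(f, dtype=numpy.float32)            # remaining samples
    f.close()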
From ewald.zietsman at gmail.com  Wed Jan 17 10:01:45 2007
From: ewald.zietsman at gmail.com (Ewald Zietsman)
Date: Wed, 17 Jan 2007 17:01:45 +0200
Subject: [SciPy-user] SciPy Data Analysis Workbench
Message-ID:

I will be interested in helping out with your project since I can benefit greatly from such a program. I regularly need to edit files containing experimental data (astronomical time series data). I then have to fit various kinds of functions, remove points, calculate periodograms, fit sine functions, subtract fitted functions from datasets, add sets together, etc. I have written some code to do some of this but I have no expertise in the development of user interfaces and the like. I also have to get publication quality plots (which can be done easily enough in Matplotlib).

-Ewald Zietsman

From S.Mientki at ru.nl  Wed Jan 17 11:03:59 2007
From: S.Mientki at ru.nl (Stef Mientki)
Date: Wed, 17 Jan 2007 17:03:59 +0100
Subject: [SciPy-user] SciPy Data Analysis Workbench
In-Reply-To: <20070117135232.79039.qmail@web27409.mail.ukl.yahoo.com>
References: <20070117135232.79039.qmail@web27409.mail.ukl.yahoo.com>
Message-ID: <45AE48EF.8070708@ru.nl>

hi Robert,

Robert VERGNES wrote:
> Hello,
>
> I started to write my own Data Analysis Workbench based on Scipy and
> wxpython. (I haven't seen one around?)

You mean something like this?
http://www.linbox.com/ucome.rvt?file=/any/en/Ressources/ProjetsLibres/siglab/twentymin.txt
or this
http://www.ee.iitb.ac.in/uma/%7Eishan/scilab/rltool.htm
or this
http://oase.uci.kun.nl/~mientki/data_www/pic/jalcc/help/jalcc_data_acquisition.html
or better, can you give an impression of what it will look like?

Yes, I'm working on the last example, replacing MatLab with SciPy. So I'm definitely interested.

cheers,
Stef Mientki

From edloper at gradient.cis.upenn.edu  Wed Jan 17 11:46:26 2007
From: edloper at gradient.cis.upenn.edu (Edward Loper)
Date: Wed, 17 Jan 2007 11:46:26 -0500
Subject: [SciPy-user] Docstring standards for NumPy and SciPy
In-Reply-To: <45AD60CE.6090102@ee.byu.edu>
References: <4DF09477-38C1-44AA-90E0-96D659BC2C2B@gradient.cis.upenn.edu> <45AD60CE.6090102@ee.byu.edu>
Message-ID: <369FDD09-3A75-49F3-9CD1-0CCD143CD493@gradient.cis.upenn.edu>

>> First, I'd pretty strongly recommend against inventing your own
>> markup language. [...]
> I'm not really convinced by this argument. I agree we shouldn't
> be flippant about introducing new markup, and that is not what I've
> proposed. But we already must learn multiple markups. For
> example, Moin Moin and restructured text each describe tables in
> a different way.

[Travis]
> Basically, I've said that the existing markups do not seem geared
> toward mathematical description (as latex must be hacked onto them).

This is certainly true -- the only markup language I know of for concisely representing math as text *is* latex. But it doesn't sound like you're proposing doing anything different -- you just want to use $expr$ to hack latex into the markup, rather than using :math:`expr` or `expr`.

> In addition, I don't like the look of existing markup docstrings
> --- especially in the parameter section. That's where my biggest
> problem really lies. What should be clear ends up with all kinds
> of un-necessary back-ticks.

As I mentioned in my previous email, that is *not* part of the definition of rst as a markup language; it's just a detail about how that markup's output (a document tree) gets processed by epydoc. It would be fairly simple to change epydoc to accept parameter sections where the variables don't have to be marked by backticks; and based on the feedback here, I intend to do just that. Responding to Alan Isaac's comment on the same topic, this change would *not* result in any ability to cross reference from the parameters.

But there's a separate issue of what meaning should be associated with the expression `foo`. There are two possibilities: it can be used to mark cross-references; or it can be used to mark mathematical expressions. Whichever of those it is *not* used for will need to be marked with some more explicit markup.
My preference is for `foo` to be used for cross-references, since they're used more often in general. But I understand there's a push from scipy to make math easy to express. (Although, as I said before, anything more than simple expressions will look like line-noise to the uninitiated, whether it's surrounded by $...$ or `...` or :math:`...`.)

> I also don't really like the extra ":" at the beginning of a
> paragraph to denote a section. I could live with the underline
> though.

I don't think the ":" at the beginning detracts from readability, and it certainly doesn't take much work to write, so I guess this is primarily an aesthetic reaction. My guess is that it wouldn't take you too long to get used to. Does anyone other than Travis consider the use of e.g. ":Parameters:" instead of "Parameters:" to be a show-stopper for using rst?

> Basically, scipy docstrings have been developed already and they
> follow a sort of markup. Why should they all be changed (in
> essentially unnecessary ways because a computer program could be
> written to change them and they will look "messier" in the end)
> just to satisfy a particular markup language that was designed
> without inquiring about our needs.

This is a somewhat more persuasive argument than the ones I've seen you give before. If the docs are already formatted in a particular style, and that style is widely known & used etc., then I can understand not wanting to make sudden radical departures from it. (Keep in mind that I don't have first-hand experience with what the current state of the scipy docs is.) But if you would need to make substantial changes in order for your automatic transformation to work correctly anyway, and those changes are on a scale comparable to the changes you'd need to make to switch to rst, then rst seems to me to be far preferable. E.g., there's already an emacs mode to correctly colorize rst text, and there's been some work on making a combined emacs/rst colorizing mode.

> This is not a case of Not Invented Here. I'm very happy to use an
> existing markup standard. In fact, I basically like restructured
> Text. There are only a few issues I have with it. Ideally, we can
> get those issues resolved in epydoc itself. So that we can use a
> slightly modified form of restructured Text in the docstrings.

I'm happy to discuss changes in how epydoc interprets the document tree generated by restructuredtext. What I'm much less likely to entertain, at least for incorporation into epydoc itself, is changes to the markup language itself. (I realize that the difference between those can seem very arbitrary from the outside, but I don't want epydoc to require some modified version of rst to run -- and I'd like to discourage you from branching/patching epydoc if it's possible to make the changes you need in the official version.)

> I'm all for simplifying things, but there is a limit to how much
> I'm willing to give up when we can easily automate the conversions
> to what epydoc currently expects.

I think you may underestimate how difficult this type of conversion will be, at least if you want it to be technically correct (as opposed to working "most of the time") -- there are a *lot* of corner cases to consider. Of course, this may just reflect my personal aversion to "messy" markup languages that don't have any precise definition. As just a simple example, would your translation render the following as math or not? If you pay me $12, I'll make this face: :$

[me]
>> [...]
>> the authors of ReST have done a *very* good job at keeping
>> it to a minimum.
> Well that is debatable in the specific instances of parameter
> descriptions. The extra back-ticks are annoying to say the least,
> and un-necessary.

That's not part of how ReST is defined; it's part of how epydoc is defined. And as I said, it can be changed.

>> __docformat__ = 'restructuredtext'
> SciPy is big enough that I see no reason we cannot define a
> slightly modified form of restructured Text (i.e. it uses MoinMoin
> tables, gets rid of back-ticks in parameter lists, understands math
> ($ $), and has specific layout for certain sections).

Well, the final decisions are of course up to the scipy developers. I think I've made my position fairly clear. But if you *do* use your own slightly-modified-rst, then please give it a new name and specify that name in __docformat__ variables in all your modules.

>> If ReST table support isn't expressive enough for you, then you
>> must be using some pretty complex tables. :)
> Moin Moin uses a different way to describe tables. :-(

Is there a reason you're attached to MoinMoin's syntax instead of rst's? MoinMoin's doesn't seem particularly more readable to me. If you're using rst for most of your markup, why not use it for tables?

> In summary, my biggest issues with just straight restructured Text are
> 1) back-ticks in parameter lists.

Easy to fix.

> 2) the way math is included

If you can't live with :math:`x=12`, then this may be an issue.

> 3) doesn't understand Moin Moin tables

I don't see why you're wedded to MoinMoin table syntax.

> 4) doesn't seem to have special processing for standard section
> headers (I may be wrong about this, as I'm not a reST expert.)

As long as you mark the section headers with ":foo:" instead of "foo:", this shouldn't be a problem. You could even define a custom rst transform to replace paragraphs of the form "foo:" with section headers if you *really* object to that extra ":"

> I also don't really like the way bold, italics, and courier are
> handled. My favorite is now *bold* /italics/ and `fixed-width`.

Well, rst defines *...* to be used for "emph," so you could use stylesheets to translate that to bold instead of italics if you want. `...` already gets rendered as fixed-width, although ``...`` would be the more technically correct thing to use -- `...` is supposed to be used for crossreferences, but will end up rendered as fixed width w/ no xref link if no target is found. You won't get /italics/, though -- could you live with **italics**? (**...** is defined in rst as "strong", which again could be rendered however you like w/ stylesheets.)

> It seems like we are going to have to customize (hack) epydoc to do
> what we want anyway. Why then can we not "tweak" reST a little bit
> too?

Well, if it's possible, I'd like to allow you to do everything you want with standard epydoc -- that way, your users don't have to worry about compiling & maintaining some variant version. Failing that, I'd like to add enough hooks to epydoc that you can do what you want without using patches, but by just having a custom interface.

-Edward

From robert.vergnes at yahoo.fr  Wed Jan 17 11:54:25 2007
From: robert.vergnes at yahoo.fr (Robert VERGNES)
Date: Wed, 17 Jan 2007 17:54:25 +0100 (CET)
Subject: [SciPy-user] SciPy Data Analysis Workbench
In-Reply-To: <45AE48EF.8070708@ru.nl>
Message-ID: <20070117165425.78770.qmail@web27403.mail.ukl.yahoo.com>

Hi,

Nice - I received a lot of answers, so I will try to answer everybody.
I also attached some screenshots of the development version (0.6 actually - I hope to have a more or less workable one at 0.8 by the end of the week... I hope), in OpenOffice format. Don't get put off by this early version... I believe the concept is good, and help and advice are welcome.

Answer to Stef: Siglab is really for signal processing only - it seems. But yes, the same can be achieved, and more ergonomically (at least I believe). RLTool is for scilab; again, scilab is an environment, and my problem is to work around the environment and improve speed of work. So yes, if you know which function you need, it is possible to implement things easily for your needs, as my workbench was designed to be flexible in terms of the variables and parameters for the equations that we use (ode, FFT or other). About medilab - yes, my interface will ultimately look like that, but with a plus: you can manage many datapages and make a dataset (which includes several datapages) and perform calculations on the dataset. So to summarize: you have a datapage - where you have a shell, can perform some calculations, and can graph and load/save data - and you have access to a dataset, which is a set of your datapages - again with the possibility to graph, use the shell, etc... Some previews attached. I am working to have an alpha version for Monday.

Answer to Ewald: Yes, I have a similar problem, and I need a prog where I can sort my data and try to make my work faster and easier, while allowing the possibility to use the shell to play with data along with standard functions. I have implemented a module based on Matplotlib, so you get graphs directly for your publication.

Answer to Paul: no - I have no real web site (no time). In fact the workbench has emerged from a project QME-DEV, and so I called it QME-DEV workbench (I had to analyse non-linear effects - Quantum Macroscopic Effects actually, hence the name QME-). And my project was about the quantization of some experiments and not about a workbench. But I think it would be good for the community to have it. So I would have to change the aim/title of my original project.. unless somebody comes up with a new name?

I attached the screenshots. I do not send anything else - it is too buggy for the moment. If you are interested, please let me have 1) your science orientation (i.e. what you want from the soft), 2) your python ability (strength), and I will try to organize something. We can stay on this mailing list for the time being. I will try to organize a sourceforge.

PS: if you don't get the attached file I will email it to you separately. Contact me at: robert DOT vergnes AT yahoo DOT fr - put in the subject: QME-DEV Workbench

Best Regards,

Robert

Stef Mientki wrote:

hi Robert,

Robert VERGNES wrote:
> Hello,
>
> I started to write my own Data Analysis Workbench based on Scipy and
> wxpython. (I haven't seen one around?)

You mean something like this?
http://www.linbox.com/ucome.rvt?file=/any/en/Ressources/ProjetsLibres/siglab/twentymin.txt
or this
http://www.ee.iitb.ac.in/uma/%7Eishan/scilab/rltool.htm
or this
http://oase.uci.kun.nl/~mientki/data_www/pic/jalcc/help/jalcc_data_acquisition.html
or better, can you give an impression of what it will look like?

Yes, I'm working on the last example, replacing MatLab with SciPy. So I'm definitely interested.

cheers,
Stef Mientki

_______________________________________________
SciPy-user mailing list
SciPy-user at scipy.org
http://projects.scipy.org/mailman/listinfo/scipy-user
From prabhu at aero.iitb.ac.in  Wed Jan 17 12:17:05 2007
From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran)
Date: Wed, 17 Jan 2007 22:47:05 +0530
Subject: [SciPy-user] SciPy Data Analysis Workbench
In-Reply-To: <20070117135232.79039.qmail@web27409.mail.ukl.yahoo.com>
References: <45AE1DB6.8040707@bigpond.net.au> <20070117135232.79039.qmail@web27409.mail.ukl.yahoo.com>
Message-ID: <17838.23057.636366.263684@monster.iitb.ac.in>

>>>>> "Robert" == Robert VERGNES writes:

    Robert> Hello, I started to write my own Data Analysis Workbench
    Robert> based on Scipy and wxpython. (I haven't seen one around?)
    Robert> Is there anybody interested in it and/or to help writing
    Robert> it ?

I think it would be a good idea to use the Enthought Tool Suite (ETS) (http://code.enthought.com/ets/) as the underlying toolset for this package. Gael Varoquaux has written a very nice article highlighting why traits UI (which is a part of ETS) is cool here:

http://gael-varoquaux.info/computers/traits_tutorial/

cheers,
prabhu

From edloper at gradient.cis.upenn.edu  Wed Jan 17 12:20:04 2007
From: edloper at gradient.cis.upenn.edu (Edward Loper)
Date: Wed, 17 Jan 2007 12:20:04 -0500
Subject: [SciPy-user] Docstring standards for NumPy and SciPy
In-Reply-To:
References:
Message-ID:

[Gary Ruben]
> In FiPy, graphical
> output such as graphs and bitmaps are generated by a plotting package
> and embedded back into the resulting LaTeX. We would like to be
> able to
> do this for both html and LaTeX. I don't know if docutils supports
> this.

Yes and no. Docutils has support for defining custom directives, which can do pretty much anything. For example, epydoc defines a custom directive "..digraph::" which can be used to render directed graphs, defined using the graphviz dot language [1]. And it also defines some more specialized directives like "..classtree::". So it would certainly be possible to add a "..plot::" directive, or something like that. But those directives are not currently part of the official docutils distribution -- they're something you'd need to add on. (But adding them on isn't too complex.)

[1] http://www.graphviz.org/

> I think "see also" cross references to other methods are also
> important.

To create an inline hyperlink to another method with rst, you just mention it and put backticks around it. E.g., something like "This is similar to `frobble()`, but a little bit tastier." You can also have an explicit see-also section using ":see: `frobble`"

> I think we want a way to just generate the docstrings as html and
> LaTeX,
> with cross-references as navigational aids. No inheritance
> information.
> This is because SciPy is currently more of a library of functions
> grouped into modules and the inheritance information is not usually
> important. A user just wants to know what methods and variables/fields
> are available. The endo-generated API docs here: demonstrate this
> approach, where the class hierarchy has been separated out.

Epydoc doesn't currently offer any way to suppress inheritance information, but it looks to me (from cursory browsing) like the "endo-generated API docs" are structured in a somewhat similar way to the way that epydoc would do it.

-Edward
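A rough sketch of what such a ".. plot::" directive could look like, using docutils' class-based directive API (the details below are illustrative, not epydoc's actual code; in particular, the image-generation step is elided):

    from docutils import nodes
    from docutils.parsers.rst import Directive, directives

    class PlotDirective(Directive):
        # The directive body is plotting code; a real implementation
        # would execute it, save the figure, and reference the file.
        has_content = True

        def run(self):
            source = '\n'.join(self.content)
            image = nodes.image(uri='plot-output.png', alt=source)
            return [image]

    directives.register_directive('plot', PlotDirective)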
-Edward

From oliphant at ee.byu.edu Wed Jan 17 12:57:14 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 17 Jan 2007 10:57:14 -0700 Subject: [SciPy-user] [SciPy-dev] Docstring standards for NumPy and SciPy In-Reply-To: References: <4DF09477-38C1-44AA-90E0-96D659BC2C2B@gradient.cis.upenn.edu><45AD60CE.6090102@ee.byu.edu> Message-ID: <45AE637A.4070603@ee.byu.edu>

Alan G Isaac wrote:

>On Tue, 16 Jan 2007, Travis Oliphant apparently wrote:
>>In summary, my biggest issues with just straight restructured Text are
>>1) back-ticks in parameter lists.
>
>I understood Ed to say:
>1) this can probably be dispensed with easily enough
>2) if dispensed with, we may lose cross refs from the parameters?
>
>I assume I misunderstood (2), since this is just a matter of how definition
>lists are parsed in consolidated fields, and definition lists definitely do
>not require the back ticks. So I *think* Ed is saying both that this problem
>can be overcome and that he is willing to help with it.

Great. Getting rid of back-ticks in the parameters section would help a great deal.

So does this mean

:Parameters:
    a : array
        this is something
    b : int
        this is a scalar input

could be how argument parameters are specified? This would be fine.

I'd like an additional "field" name to describe little-used parameters so that functions with a large number of keyword arguments can still be well documented without cluttering the main parameters list. Is that possible? Where are these standard field names used?

>>2) the way math is included
>
>I understood Ed to say that for inline math we could just make LaTeX the
>default role, so that we write e.g., `f(x)=x^2`. Back ticks look at least
>as good as dollar signs, IMO.

I don't have a problem with back-ticks used for math.

>The cost is that cross refs then must be explicitly marked. How important
>are these considered to be for the documentation? (How did you intend to
>mark them?)
>
>I did not get a sense of how easy it would be to make dollar signs special
>(so they could be used to indicate a math role). I guess it would not be
>hard, but I feel pretty confident that this would NOT be welcomed as some
>kind of patch to reST. (But I'm just a user.)

>>3) doesn't understand Moin Moin tables
>
>This seems just a matter of hacking reST, it seems to me. I hazard that the
>reST developers would welcome a patch to handle Moin Moin tables. In the
>meantime I ask, what features missing from reST tables would actually be used?

I don't care so much about this. I'm not wedded to Moin Moin's definition of tables, except that it seemed pretty minimalistic and easy to use.

>>I also don't really like the way bold, italics, and courier are handled.
>>My favorite is now *bold* /italics/ and `fixed-width`.
>
>This seems to me not worth haggling over. Right?

I agree. I'm not that concerned about it. I think if we can agree on a suitable description of the parameters list and on what to do about including math (I don't think there will be a lot of math in docstrings, but there may need to be some, so we ought to think about it), then we are there for using reST in docstrings.

I guess my major reaction to reST was the prevalent use of backticks in parameter descriptions. I can see now that this is not a requirement of reST but only a convention. I'd prefer a different convention for NumPy/SciPy.
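Purely for illustration, a hypothetical function documented with the backtick-free convention might read like this (the function, names, and types below are made up, not a settled standard):

    def smooth(x, width, mode='same'):
        """Smooth a 1-d signal with a moving-average window.

        :Parameters:
            x : array
                Input signal.
            width : int
                Length of the averaging window.
            mode : {'same', 'valid'}
                How to treat the boundaries.
        """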
-Travis

From oliphant at ee.byu.edu Wed Jan 17 13:24:31 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 17 Jan 2007 11:24:31 -0700 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <45A4FB90.9000809@ee.byu.edu> References: <45A4FB90.9000809@ee.byu.edu> Message-ID: <45AE69DF.80909@ee.byu.edu>

Travis Oliphant wrote:

>There was a lively discussion on the SciPy List before Christmas
>regarding establishing a standard for documentation strings for NumPy /
>SciPy.

After some more lively discussion, here is my latest proposal. We will use reST for the docstrings with specific sections expected as given. The Notes and Examples sections are optional, but the Examples section is strongly encouraged.

"""
one-line summary not using variable names or the function name

A few more sentences giving an extended description

:Parameters:
    var1 : any-type information
        Explanation
    var2 : type
        Explanation
    long_variable_name
        Explanation

:Keywords:
    only_seldom_used_keywords : type
        Explanation
    any_common_keywords
        Should be placed in the parameters section

Notes
-----
Any algorithm or other notes that may be needed.

Examples
--------
Doctest-formatted examples
"""

Remaining questions:

1) I'm still unclear on how to include math --- please help.
2) I'm still unclear on what to do about see-also. I know we want to be conservative in how this is used, but perhaps we ought to give some help.
3) I don't really like using :Keywords: like I'm using it above. I would prefer another special field name that epydoc understands. Perhaps we can just use spacing in the :Parameters: section to convey the same split in the author's interpretation of the parameters.

Something like this:

:Parameters:
    var1 : any-type information
        Explanation
    var2 : type
        Explanation
    long_variable_name
        Explanation

    only_seldom_used_keywords : type
        Explanation
    any_common_keywords
        Should be placed in the parameters section

-Travis

From Karl.Young at ucsf.edu Wed Jan 17 14:08:57 2007 From: Karl.Young at ucsf.edu (Karl Young) Date: Wed, 17 Jan 2007 11:08:57 -0800 Subject: [SciPy-user] [Nipy-devel] how to wrap C++ In-Reply-To: References: <45AE6C33.4090805@radmail.ucsf.edu> Message-ID: <45AE7449.30007@ucsf.edu>

>Many of us happily use python for speed-critical code, and we feel
>that a careful mix of python, numpy, pyrex and raw C provides a very
>good environment with a minimal amount of confusion (since you
>basically only need to know Python and C syntax, pyrex being a blend
>of the two).

Sorry for the tangential question, but Fernando's post reminded me of a question I had: what do scipy folks recommend these days for wrapping C++ packages? I'm working on a project that would like to wrap a C++ NMR simulation package, either as a NiPy component or as a standalone package to be used in conjunction with NiPy (we need to determine the best way to do that - NiPy is primarily focused on neuroimaging, and adding a spectroscopy component might be adding unnecessary clutter, even though the ultimate goal is analysis of spectroscopic imaging data). Before the original C++ package ceased its evolution (a couple of years ago), one of the developers had gotten about halfway through using Boost to provide a Python interface. I did a little bit of work with SWIG a few years ago, but for a very small package. So we're not sure whether to try and complete the original effort (using newer versions of Python and Boost) or to start over with a currently more appropriate choice.
Any thoughts welcome, thanks,

-- KY

Karl Young
Center for Imaging of Neurodegenerative Diseases, UCSF
VA Medical Center (114M)       Phone: (415) 221-4810 x3114 lab
4150 Clement Street            FAX: (415) 668-2864
San Francisco, CA 94121        Email: karl young at ucsf edu

From v-nijs at kellogg.northwestern.edu Wed Jan 17 15:11:47 2007 From: v-nijs at kellogg.northwestern.edu (Vincent Nijs) Date: Wed, 17 Jan 2007 14:11:47 -0600 Subject: [SciPy-user] Multivariate regression In-Reply-To: <20070117165425.78770.qmail@web27403.mail.ukl.yahoo.com> Message-ID:

I posted a simple class to generate various fit statistics for a multivariate regression using OLS. The sandbox.models class is much more elaborate, but this provides the fit statistics I needed for my own work. The code may also be useful to new users to find out where some of the most basic scipy tools can be found and how they can be used.

http://www.scipy.org/Cookbook/OLS

If anybody has functions lying around to test for residual heteroscedasticity, (P)ACF, Portmanteau, VIF, etc., feel free to add them to the class. Otherwise, I will add these features slowly over time as I need them myself. Also, the formatting of the output seems to jump around a bit when using the class from ipython. If anyone can help me fix that, that would be great.

Vincent

From gruben at bigpond.net.au Wed Jan 17 17:41:55 2007 From: gruben at bigpond.net.au (Gary Ruben) Date: Thu, 18 Jan 2007 09:41:55 +1100 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <45AE69DF.80909@ee.byu.edu> References: <45A4FB90.9000809@ee.byu.edu> <45AE69DF.80909@ee.byu.edu> Message-ID: <45AEA633.5080902@bigpond.net.au>

I think the :Returns: section is missing.

Is see-also to be included at the end of the Notes section? For see-also, Ed says we can just add ":see: `frobble`" markup, so we should be able to just add a line like in the numpy example list on the wiki.

I'm not clear on what the :Keywords: section adds. I think the 2nd example is just as good.

Reiterating my previous post about math, I'm arguing against LaTeX and matplotlib examples in these docstrings. LaTeX math and graphical examples would only be in the module-level docstrings, which won't be seen in such environments as ipython. This means they'll only ever appear to users as fully processed html pages or in pdfs. I think we can use the :math:`eqn` and .. math:: idioms.

Gary R.

Travis Oliphant wrote:
> After some more lively discussion, here is my latest proposal. We will
> use reST for the docstrings with specific sections expected as given.
> The Notes and Examples sections are optional, but the Examples section
> is strongly encouraged.
>
> """
> one-line summary not using variable names or the function name
>
> A few more sentences giving an extended description
>
> :Parameters:
>     var1 : any-type information
>         Explanation
>     var2 : type
>         Explanation
>     long_variable_name
>         Explanation
>
> :Keywords:
>     only_seldom_used_keywords : type
>         Explanation
>     any_common_keywords
>         Should be placed in the parameters section
>
> Notes
> -----
> Any algorithm or other notes that may be needed.
>
> Examples
> --------
> Doctest-formatted examples
> """
>
> Remaining questions:
>
> 1) I'm still unclear on how to include math --- please help.
> 2) I'm still unclear on what to do about see-also. I know we want to be
> conservative in how this is used, but perhaps we ought to give some help.
> 3) I don't really like using :Keywords: like I'm using it above. I
> would prefer another special field name that epydoc understands.
> Perhaps we can just use spacing in the :Parameters: section to convey
> the same split in the author's interpretation of the parameters.
>
> Something like this:
>
> :Parameters:
>     var1 : any-type information
>         Explanation
>     var2 : type
>         Explanation
>     long_variable_name
>         Explanation
>
>     only_seldom_used_keywords : type
>         Explanation
>     any_common_keywords
>         Should be placed in the parameters section
>
> -Travis

From fperez.net at gmail.com Wed Jan 17 17:48:36 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 17 Jan 2007 15:48:36 -0700 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <45AEA633.5080902@bigpond.net.au> References: <45A4FB90.9000809@ee.byu.edu> <45AE69DF.80909@ee.byu.edu> <45AEA633.5080902@bigpond.net.au> Message-ID:

On 1/17/07, Gary Ruben wrote:
> I think the :Returns: section is missing.
>
> Is see-also to be included at the end of the Notes section? For
> see-also, Ed says we can just add ":see: `frobble`" markup, so we should
> be able to just add a line like in the numpy example list on the wiki.
>
> I'm not clear on what the :Keywords: section adds. I think the 2nd
> example is just as good.

No, please let's explicitly differentiate between keywords and positional arguments. I don't care much what they are called, but there's a sharp functional difference in the language between the two: you can NOT omit positional arguments, while you CAN omit keywords (or call them arguments-with-defaults, or whatever). So a big -100 from me on conflating two things which are very, very distinct at the language level (and yes, I know you can reorder a call in python if you use named arguments for everything, etc. My point still stands).

Cheers, f

From oliphant.travis at ieee.org Wed Jan 17 17:57:55 2007 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 17 Jan 2007 15:57:55 -0700 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: References: <45A4FB90.9000809@ee.byu.edu> <45AE69DF.80909@ee.byu.edu> <45AEA633.5080902@bigpond.net.au> Message-ID:

Fernando Perez wrote:
> On 1/17/07, Gary Ruben wrote:
>> I think the :Returns: section is missing.
>>
>> Is see-also to be included at the end of the Notes section? For
>> see-also, Ed says we can just add ":see: `frobble`" markup, so we should
>> be able to just add a line like in the numpy example list on the wiki.
>>
>> I'm not clear on what the :Keywords: section adds. I think the 2nd
>> example is just as good.
>
> No, please let's explicitly differentiate between keywords and
> positional arguments. I don't care much what they are called, but
> there's a sharp functional difference in the language between the two:
> you can NOT omit positional arguments, while you CAN omit keywords (or
> call them arguments-with-defaults, or whatever).

But whether one has a positional or keyword argument is obvious from the calling information, right? I guess if you are using def f(*args, **kwds) then it isn't obvious. In that case (which I would discourage generally), I would suggest using separate :Parameters: and :Keywords: sections.

So, I guess I'm leaning toward a single :Parameters: section with an empty line distinguishing between "essential" and "little used" keyword arguments (it is up to the author to decide what that means). The other acceptable distinction would be to use separate :Parameters: and :Keywords: sections.
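For concreteness, the two variants might look like this (the argument names here are hypothetical):

:Parameters:
    x : array
        Input data.

    axis : int
        Axis along which to operate (seldom used).

versus

:Parameters:
    x : array
        Input data.

:Keywords:
    axis : int
        Axis along which to operate.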
Many people may be unclear on this, but basically these sections are interpreted by epydoc (they are not standard reST). This is why we can get rid of the back-ticks and have epydoc process the documentation as if the back-ticks were actually there.

-Travis

From robert.kern at gmail.com Wed Jan 17 17:58:33 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 17 Jan 2007 16:58:33 -0600 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <45AEA633.5080902@bigpond.net.au> References: <45A4FB90.9000809@ee.byu.edu> <45AE69DF.80909@ee.byu.edu> <45AEA633.5080902@bigpond.net.au> Message-ID: <45AEAA19.7020102@gmail.com>

Gary Ruben wrote:
> I think the :Returns: section is missing.
>
> Is see-also to be included at the end of the Notes section? For
> see-also, Ed says we can just add ":see: `frobble`" markup, so we should
> be able to just add a line like in the numpy example list on the wiki.

Personally, I'd like to encourage a prose explanation with each see-also reference. Doing so discourages the (relatively unhelpful, IMO) scattershot lists that we've been using so far.

> I'm not clear on what the :Keywords: section adds. I think the 2nd
> example is just as good.

In plain text, it looks fine, but the code that interprets the two sections as (slightly) separate and outputs HTML and PDF as such remains to be written.

> Reiterating my previous post about math, I'm arguing against LaTeX and
> matplotlib examples in these docstrings. LaTeX math and graphical
> examples would only be in the module-level docstrings, which won't be
> seen in such environments as ipython. This means they'll only ever
> appear to users as fully processed html pages or in pdfs. I think we can
> use the :math:`eqn` and .. math:: idioms.

I don't know. Expressing math formulae is most important in function/class docstrings rather than modules. See most of the docstrings for the scipy.stats distributions, for example. And module docstrings will be seen in ipython and similar environments, too; I'm not sure why you think they won't. While I'm not a fan of reading raw LaTeX math expressions in plain text, it beats the readily-available alternatives.

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From fperez.net at gmail.com Wed Jan 17 18:07:54 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 17 Jan 2007 16:07:54 -0700 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: References: <45A4FB90.9000809@ee.byu.edu> <45AE69DF.80909@ee.byu.edu> <45AEA633.5080902@bigpond.net.au> Message-ID:

On 1/17/07, Travis Oliphant wrote:
> > No, please let's explicitly differentiate between keywords and
> > positional arguments. I don't care much what they are called, but
> > there's a sharp functional difference in the language between the two:
> > you can NOT omit positional arguments, while you CAN omit keywords (or
> > call them arguments-with-defaults, or whatever).
>
> But whether one has a positional or keyword argument is obvious
> from the calling information, right?

... which is inaccessible if the function is written in C, for example:

In [2]: n.cos?
Type:           ufunc
Base Class:     <type 'numpy.ufunc'>
Namespace:      Interactive
Docstring:
    y = cos(x) cosine elementwise.

The call line can only be introspected out (which is what ipython, help() and other tools do) if the function is written in pure python.
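(Contrast with a pure-python function, where the signature is recoverable with the standard library — here `f` is just a throwaway example function:

    >>> import inspect
    >>> def f(x, axis=0):
    ...     pass
    >>> inspect.getargspec(f)   # (args, varargs, varkw, defaults)
    (['x', 'axis'], None, None, (0,))

No such luck for a C-implemented ufunc, which is the point above.)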
Also, a formatting system may not properly show an extra blank line in the middle of a formatted list. reST, for example, makes blank lines optional between list items, so it would collapse all arguments in your convention (I think) into a single list. IMHO, relying on something as brittle as a single line of whitespace (which may well be swallowed by the renderer du jour) for conveying this distinction is not a good idea. And having different conventions for different cases (such as f(*a,**k) or C functions) is also, I think, not very nice. I'd rather this be consistent across /all/ cases.

Just my 1e-2; ultimately I'm just happy to see /something/ emerge.

f

From ckkart at hoc.net Wed Jan 17 18:08:50 2007 From: ckkart at hoc.net (Christian Kristukat) Date: Thu, 18 Jan 2007 08:08:50 +0900 Subject: [SciPy-user] SciPy Data Analysis Workbench In-Reply-To: <20070117135232.79039.qmail@web27409.mail.ukl.yahoo.com> References: <20070117135232.79039.qmail@web27409.mail.ukl.yahoo.com> Message-ID: <45AEAC82.50409@hoc.net>

Robert VERGNES wrote:
> Hello,
>
> I started to write my own Data Analysis Workbench based on Scipy and wxpython. (I haven't seen one around?)

Do you know peak-o-mat (http://lorentz.sf.net)? It is a data analysis/peak fitting tool based on scipy and wxpython, mainly designed to analyse data from optical spectroscopy experiments. The data can be manipulated using numpy expressions; a shell window is not currently implemented, but it could be added easily, and this is in fact a nice idea, I think. peak-o-mat talks to xmgrace to produce publication-quality output, but I would like to add a matplotlib backend, too. I am about to refactor the code so that hopefully people other than me will be able to read and expand it. There's not much documentation available, but the latest version is fully functional. Feel free to ask if you need more instructions on how to use it.

Christian

From oliphant at ee.byu.edu Wed Jan 17 18:55:58 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 17 Jan 2007 16:55:58 -0700 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <45AEAA19.7020102@gmail.com> References: <45A4FB90.9000809@ee.byu.edu> <45AE69DF.80909@ee.byu.edu> <45AEA633.5080902@bigpond.net.au> <45AEAA19.7020102@gmail.com> Message-ID: <45AEB78E.7050007@ee.byu.edu>

Robert Kern wrote:

>Gary Ruben wrote:
>>I think the :Returns: section is missing.
>>
>>Is see-also to be included at the end of the Notes section? For
>>see-also, Ed says we can just add ":see: `frobble`" markup, so we should
>>be able to just add a line like in the numpy example list on the wiki.
>
>Personally, I'd like to encourage a prose explanation with each see-also
>reference. Doing so discourages the (relatively unhelpful, IMO) scattershot
>lists that we've been using so far.

Any suggestions on how that might be done?

>>I'm not clear on what the :Keywords: section adds. I think the 2nd
>>example is just as good.
>
>In plain text, it looks fine, but the code that interprets the two sections as
>(slightly) separate and outputs HTML and PDF as such remains to be written.

I understood that epydoc translates :Parameters: and :Keywords: sections for us (see the consolidated fields section of http://epydoc.sourceforge.net/fields.html#fields ).

Personally, I'd like to use :Parameters: and then add an :Extra Parameters: section or something like that if just using a blank line in the :Parameters: section is not useful enough.
>>Reiterating my previous post about math, I'm arguing against LaTeX and
>>matplotlib examples in these docstrings. LaTeX math and graphical
>>examples would only be in the module-level docstrings, which won't be
>>seen in such environments as ipython. This means they'll only ever
>>appear to users as fully processed html pages or in pdfs. I think we can
>>use the :math:`eqn` and .. math:: idioms.
>
>I don't know. Expressing math formulae is most important in function/class
>docstrings rather than modules. See most of the docstrings for the scipy.stats
>distributions, for example. And module docstrings will be seen in ipython and
>similar environments, too; I'm not sure why you think they won't. While I'm not
>a fan of reading raw LaTeX math expressions in plain text, it beats the
>readily-available alternatives.

I agree that we can't really just ignore the problem. I could live with :math:`eqn` or :latex-math:`eqn` (although I would really prefer the abbreviations :m:`eqn` and :lm:`eqn`). Abbreviations are just fine in my mind for something that is used often enough. Ideally, we could use $$eqn$$ (I saw the double dollar signs used in ASCIIDOC).

-Travis

From fperez.net at gmail.com Wed Jan 17 19:02:08 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 17 Jan 2007 17:02:08 -0700 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <45AEB78E.7050007@ee.byu.edu> References: <45A4FB90.9000809@ee.byu.edu> <45AE69DF.80909@ee.byu.edu> <45AEA633.5080902@bigpond.net.au> <45AEAA19.7020102@gmail.com> <45AEB78E.7050007@ee.byu.edu> Message-ID:

On 1/17/07, Travis Oliphant wrote:
> Personally, I'd like to use :Parameters: and then add an :Extra
> Parameters: section or something like that if just using a blank line in
> the :Parameters: section is not useful enough.

Glad to hear that :)

I'd suggest 'Optional Parameters' or 'Named Parameters' if the existing 'Keywords' convention that epydoc/reST uses isn't deemed satisfactory (I happen to be OK with 'Keywords', but that's just me). I find the word 'Extra' rather vague and void of real meaning in this case.

Cheers, f

From edloper at gradient.cis.upenn.edu Wed Jan 17 19:03:37 2007 From: edloper at gradient.cis.upenn.edu (Edward Loper) Date: Wed, 17 Jan 2007 19:03:37 -0500 Subject: [SciPy-user] [SciPy-dev] Docstring standards for NumPy and SciPy In-Reply-To: References: Message-ID:

[Travis]
> Great. Getting rid of back-ticks in the parameters section would help
> a great deal.
>
> So does this mean
>
> :Parameters:
>     a : array
>         this is something
>     b : int
>         this is a scalar input
>
> could be how argument parameters are specified?

Indeed. I just made & committed the change (epydoc subversion revision 1418).

> This would be fine.
>
> I'd like an additional "field" name to describe little-used parameters
> so that functions with a large number of keyword arguments can still be
> well documented without cluttering the main parameters list. Is that
> possible?

As it stands, epydoc is perfectly happy with more than one ":Parameters:" section.. So you could list the primary parameters up near the top, and have a second ":Parameters:" section later on with the 'extra' arguments. In fact, you might consider using the ":Keywords:" field tag for this second section -- the only difference between ":Parameters:" and ":Keywords:" is that the former will complain if you give a description for a non-existent parameter...
":keywords:" was originally intended for **kwargs, but epydoc is perfectly happy with you applying it to explicit args too... So your docs would end up looking something like: def f(x, range=10): """ Summary description... :Parameters: x : int description of x... Longer description, or whatever other info you think should go here. :Keywords: y : int description of y... """ Of course, you don't *have* to use consolidated fields.. when you have a small number of parameters you could also use the slightly shorter: def f(x, range=10): """ Summary description... :param x: description of x :type x: int Longer description, or whatever other info you think should go here. :param y: description of y :type y: int """ > Where are these standard field names used? Not sure what you mean by this question.. Docutils/rst doesn't define the meanings of any fields.. The fields that epydoc currently supports are described at . > I don't have a problem with back-ticks used for math. Keep in mind, though, that using bare back-ticks for math means they can't also be used for cross-references. I suspect that cross- references will be more common.. not just in see-also type fields, but also as hyperlinks in sentences such as "precision is determined by the `flooble` instance variable." If you use bare back-ticks for math, then that would need to become "precision is determined by the :xref:`flooble` instance variable. You suggest later that you don't expect there to be a lot of math embedded in docstrings, in which case the slightly more verbose `math`:\sum_i^Ni^2` doesn't seem like as much of a big deal. >>> 3) doesn't understand Moin Moin tables > I don't care so much about this. I'm not wedded to Moin Moin's > definition of tables except it seemed pretty minimialistic and easy > to use. In that case, I see no advantage to using moinmoin table syntax. > I think if we can agree on a suitable description of the parameters > list > and on what to do about including math (I don't think there will be a > lot of math in docstrings but there may need to be some, so we > ought to > think about it) then we are there for using reST in docstrings. Some issues still need to be worked out with formatting inline math, but I'm hopeful that you'll be able to use non-modified rst :) -Edward From robert.kern at gmail.com Wed Jan 17 19:04:19 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 17 Jan 2007 18:04:19 -0600 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <45AEB78E.7050007@ee.byu.edu> References: <45A4FB90.9000809@ee.byu.edu> <45AE69DF.80909@ee.byu.edu> <45AEA633.5080902@bigpond.net.au> <45AEAA19.7020102@gmail.com> <45AEB78E.7050007@ee.byu.edu> Message-ID: <45AEB983.8050307@gmail.com> Travis Oliphant wrote: > Robert Kern wrote: > >> Gary Ruben wrote: >> >>> I think the :Returns: section is missing. >>> >>> Is see-also to be included at the end of the Notes section. For >>> see-also, Ed says we can just add ":see: `frobble`" markup, so we should >>> be able to just add a line like in the numpy example list on the wiki. >>> >> Personally, I'd like to encourage a prose explanation with each see-also >> reference. Doing so discourages the (relatively unhelpful, IMO) scattershot >> lists that we've been using so far. > > Any suggestions on how that might be done? Not writing any code to try to parse the "see-alsos" into simple lists. That's as elegant a solution as you get, IMO. 
:-) More seriously, simply making the "See also" section unstructured, and writing the initial examples with appropriate verbosity, will probably suffice.

>>> I'm not clear on what the :Keywords: section adds. I think the 2nd
>>> example is just as good.
>>
>> In plain text, it looks fine, but the code that interprets the two sections as
>> (slightly) separate and outputs HTML and PDF as such remains to be written.
>
> I understood that epydoc translates :Parameters: and :Keywords: sections
> for us (see the consolidated fields section of
> http://epydoc.sourceforge.net/fields.html#fields ).

Yes, I was referring to the hypothetical code that would parse the blank line in the single :Parameters: section as a separator between parameters and keywords. Personally, I simply prefer having explicit, separate sections here, and don't care what the :Names: are.

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From robert.kern at gmail.com Wed Jan 17 19:12:22 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 17 Jan 2007 18:12:22 -0600 Subject: [SciPy-user] [SciPy-dev] Docstring standards for NumPy and SciPy In-Reply-To: References: Message-ID: <45AEBB66.5010304@gmail.com>

Edward Loper wrote:
> [Travis]
>> Great. Getting rid of back-ticks in the parameters section would help
>> a great deal.
>>
>> So does this mean
>>
>> :Parameters:
>>     a : array
>>         this is something
>>     b : int
>>         this is a scalar input
>>
>> could be how argument parameters are specified?
>
> Indeed. I just made & committed the change (epydoc subversion revision
> 1418).

Excellent. May I add some possibly troublesome examples to the menagerie that I would like to be able to parse? I'll check out SVN epydoc tonight to play with it myself.

:Parameters:
    x : array
    y : array
        The x and y coordinates.

:Parameters:
    f : callable
        The function to optimize.
    *args
        Positional arguments to apply to f().
    **kwds
        Keyword arguments to apply to f().

# I don't care too much about this one, but it'd be nice.
:Parameters:
    f : callable
        The function to optimize.
    *args, **kwds
        Additional arguments to apply to f().

# I'd settle for this:
:Parameters:
    f : callable
        The function to optimize.
    *args
    **kwds
        Additional arguments to apply to f().

>> I don't have a problem with back-ticks used for math.
>
> Keep in mind, though, that using bare back-ticks for math means they
> can't also be used for cross-references. I suspect that cross-references
> will be more common.. not just in see-also type fields, but also as
> hyperlinks in sentences such as "precision is determined by the `flooble`
> instance variable." If you use bare back-ticks for math, then that would
> need to become "precision is determined by the :xref:`flooble` instance
> variable." You suggest later that you don't expect there to be a lot of
> math embedded in docstrings, in which case the slightly more verbose
> :math:`\sum_i^N i^2` doesn't seem like as much of a big deal.

I agree that cross-references will probably be more common. Especially `n`.

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco

From oliphant at ee.byu.edu Wed Jan 17 19:12:40 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 17 Jan 2007 17:12:40 -0700 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: References: <45A4FB90.9000809@ee.byu.edu> <45AE69DF.80909@ee.byu.edu> <45AEA633.5080902@bigpond.net.au> <45AEAA19.7020102@gmail.com> <45AEB78E.7050007@ee.byu.edu> Message-ID: <45AEBB78.8060109@ee.byu.edu>

Fernando Perez wrote:

>On 1/17/07, Travis Oliphant wrote:
>>Personally, I'd like to use :Parameters: and then add an :Extra
>>Parameters: section or something like that if just using a blank line in
>>the :Parameters: section is not useful enough.
>
>Glad to hear that :)
>
>I'd suggest 'Optional Parameters' or 'Named Parameters' if the
>existing 'Keywords' convention that epydoc/reST uses isn't deemed
>satisfactory (I happen to be OK with 'Keywords', but that's just me).
>I find the word 'Extra' rather vague and void of real meaning in this
>case.

I'm a little worried that epydoc is using :Keywords: differently here, as in: these are keywords for the function that should go into some kind of index.

-Travis

From fperez.net at gmail.com Wed Jan 17 19:17:10 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 17 Jan 2007 17:17:10 -0700 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <45AEBB78.8060109@ee.byu.edu> References: <45A4FB90.9000809@ee.byu.edu> <45AE69DF.80909@ee.byu.edu> <45AEA633.5080902@bigpond.net.au> <45AEAA19.7020102@gmail.com> <45AEB78E.7050007@ee.byu.edu> <45AEBB78.8060109@ee.byu.edu> Message-ID:

On 1/17/07, Travis Oliphant wrote:
> I'm a little worried that epydoc is using :Keywords: differently here, as
> in: these are keywords for the function that should go into some kind of
> index.

Good point. Though hopefully in addition to keyword indexing, at some point we'll also have full-text indexing, which is handy.

'Optional Parameters', 'Named Parameters', something else...?

Cheers, f

From oliphant at ee.byu.edu Wed Jan 17 19:17:27 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 17 Jan 2007 17:17:27 -0700 Subject: [SciPy-user] [SciPy-dev] Docstring standards for NumPy and SciPy In-Reply-To: References: Message-ID: <45AEBC97.2030603@ee.byu.edu>

Edward Loper wrote:

>Indeed. I just made & committed the change (epydoc subversion revision
>1418).

Very nice. Thank you.

>>This would be fine.
>>
>>I'd like an additional "field" name to describe little-used parameters
>>so that functions with a large number of keyword arguments can still be
>>well documented without cluttering the main parameters list. Is that
>>possible?
>
>As it stands, epydoc is perfectly happy with more than one
>":Parameters:" section..

Ahh! This would be just fine. I like this idea.

>Some issues still need to be worked out with formatting inline math,
>but I'm hopeful that you'll be able to use non-modified rst :)

Could abbreviations like :m:`eqn` or :lm:`eqn` (for latex-interpreted math) be acceptable? I don't think we should change the default interpretation of `` because math will not be *that* common.

-Travis

From edloper at gradient.cis.upenn.edu Wed Jan 17 19:41:09 2007 From: edloper at gradient.cis.upenn.edu (Edward Loper) Date: Wed, 17 Jan 2007 19:41:09 -0500 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: References: Message-ID: <0647F00A-9552-45F6-AC31-3C3D0C569D18@gradient.cis.upenn.edu>

[Travis]
> 1) I'm still unclear on how to include math --- please help.
The options, as I see it, are:

a) default role=math, explicitly mark xrefs:
   `x+3` and :xref:`get_floop()`
b) default role=xref, explicitly mark math:
   :math:`x=3` and `get_floop()`

I'm +1 for option b, but it sounds like you lean more towards option a.

> 2) I'm still unclear on what to do about see-also. I know we want to be
> conservative in how this is used, but perhaps we ought to give some help.

Two options: you can just reference other related objects as part of the description, and mark them as cross-references using `backticks`; or you can have explicit see-also fields, preferably including a description of *how* this object relates to that object.

So option 1:

def f(x):
    """Return the flarp of x. For the fleep of x, use `g()`."""

Option 2:

def f(x):
    """Return the flarp of x.

    :see: `g()`, which is used to find the fleep of x."""

> 3) I don't really like using :Keywords: like I'm using it above. I
> would prefer another special field name that epydoc understands.
> Perhaps we can just use spacing in the :Parameters: section to convey
> the same split in the author's interpretation of the parameters.

As it turns out, epydoc *does* understand a section named :Keywords:, as I mentioned in my previous email. But in the generated output, there will be no separation between the two groups of arguments (although they will be listed in the same order that you list them in). So the distinction will be visible in plaintext, but not in the HTML/PDF output generated by epydoc. If you *really* feel like you need to distinguish a second set of arguments, we could perhaps add a new field for this, with a new name.

[Fernando]
> No, please let's explicitly differentiate between keywords and
> positional arguments. [...]

They should be distinguished by the function signature.

[Fernando]
> ... which is inaccessible if the function is written in C, for example:

All C functions are strongly encouraged to begin their docstrings with a single-line signature. This convention is followed by all standard builtin objects, functions, & methods (e.g., "help(list.append)"), and is understood by epydoc. I.e., if the first line of a docstring is a single-line signature, epydoc will parse that signature and use it. E.g., see

[Robert]
> Personally, I'd like to encourage a prose explanation with each see-also
> reference. Doing so discourages the (relatively unhelpful, IMO) scattershot
> lists that we've been using so far.

+1. Or just include the references to other objects as part of the description, and mark them as cross-references.

[Robert]
> I don't know. Expressing math formulae is most important in function/class
> docstrings rather than modules. See most of the docstrings for the
> scipy.stats distributions, for example. And module docstrings will be seen
> in ipython and similar environments, too; I'm not sure why you think they
> won't. While I'm not a fan of reading raw LaTeX math expressions in plain
> text, it beats the readily-available alternatives.

I tend to agree with this -- for mathematical libraries, equations can be a very precise & concise way to describe exactly what a function is doing. On the other hand, if users can't read it, all the precision in the world won't help.. :)

[Robert]
> Excellent. May I add some possibly troublesome examples to the menagerie
> that I would like to be able to parse? I'll check out SVN epydoc tonight
> to play with it myself.

Unfortunately, epydoc/rst don't currently do too well with these..
> :Parameters:
>     x : array
>     y : array
>         The x and y coordinates.

rst fails on this one -- it parses it as a paragraph containing "x : array y : array" followed by an illegally indented paragraph "The x and y coordinates", instead of parsing it as a definition list. I could plausibly see the rst folks being convinced to change rst so that this would be parsed as a definition list, but you'd have to run it by them and see what they think. As long as it's not parsed as a definition list by rst, it's not really possible for me to process it correctly.

> :Parameters:
>     f : callable
>         The function to optimize.
>     *args
>         Positional arguments to apply to f().
>     **kwds
>         Keyword arguments to apply to f().

rst balks at the use of * and ** in these -- it thinks they are start symbols for emph and strong, respectively. If you use `*args` and `**kwds` instead, then rst parses it fine, and epydoc generates the output you want (it will complain because it thinks *args and **kwds are unknown parameters, but that could be fixed).

> # I don't care too much about this one, but it'd be nice.
> :Parameters:
>     f : callable
>         The function to optimize.
>     *args, **kwds
>         Additional arguments to apply to f().

Assuming you put `backticks` around `*args` and `**kwds`, rst will process this correctly. Epydoc will complain that the second item is ill-formatted, and complain about failed xrefs for `*args` and `**kwds`, but will generate the correct output. Those epydoc errors/warnings could be fixed.

> # I'd settle for this:
> :Parameters:
>     f : callable
>         The function to optimize.
>     *args
>     **kwds
>         Additional arguments to apply to f().

This one fails, again because it doesn't look like a definition list to rst. rst parses this as a definition list with one element (f), followed by a paragraph "*args **kwds", followed by an illegally indented paragraph "Additional arguments to apply to f()".. So this one could probably be fixed iff your first example could be fixed.

-Edward

From wbaxter at gmail.com Wed Jan 17 19:44:59 2007 From: wbaxter at gmail.com (Bill Baxter) Date: Thu, 18 Jan 2007 09:44:59 +0900 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <45AEA633.5080902@bigpond.net.au> References: <45A4FB90.9000809@ee.byu.edu> <45AE69DF.80909@ee.byu.edu> <45AEA633.5080902@bigpond.net.au> Message-ID:

On 1/18/07, Gary Ruben wrote:
> I think the :Returns: section is missing.
>
> Is see-also to be included at the end of the Notes section? For
> see-also, Ed says we can just add ":see: `frobble`" markup, so we should
> be able to just add a line like in the numpy example list on the wiki.

That sounds like it will be worse than the backticks in the parameters section. For example, from the Numpy Example List, mean() has this:

    See also: average, median, var, std, sum

So you're saying in the docstrings that will become

    :see: `average`, `median`, `var`, `std`, `sum`

Or worse:

    :see:`average`, :see:`median`, :see:`var`, :see:`std`, :see:`sum`

I think a good See Also list is one of the most useful things a package like SciPy can have in its documentation. I can't count the number of times I've learned about useful and relevant functions in Matlab thanks to its copious See Also's.
--bb

From edloper at gradient.cis.upenn.edu Wed Jan 17 20:26:32 2007 From: edloper at gradient.cis.upenn.edu (Edward Loper) Date: Wed, 17 Jan 2007 20:26:32 -0500 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: References: Message-ID:

[Travis]
> I understood that epydoc translates :Parameters: and :Keywords: sections
> for us (see the consolidated fields section of
> http://epydoc.sourceforge.net/fields.html#fields ).

[Travis, responding to Edward]
>> As it stands, epydoc is perfectly happy with more than one
>> ":Parameters:" section..
>
> Ahh! This would be just fine. I like this idea.

One issue with both of these solutions is that they will *not* make any distinction between the two sets of arguments in the generated html/pdf output. It sounds like you *do* want this distinction to get preserved in the generated output. If that's the case, then I think we'll need to define a new field tag.

[Travis]
> Personally, I'd like to use :Parameters: and then add an :Extra
> Parameters: section or something like that if just using a blank line in
> the :Parameters: section is not useful enough.

[Fernando]
> I'd suggest 'Optional Parameters' or 'Named Parameters' if the
> existing 'Keywords' convention that epydoc/reST uses isn't deemed
> satisfactory (I happen to be OK with 'Keywords', but that's just me).

I'd be fine with any of "Extra Parameters", "Named Parameters", or "Optional Parameters". (There will be some trickiness here, because rst will interpret these as, e.g., an "Extra" field with an argument "Parameters" -- compare ":Extra Parameters: ..." to ":param x: ...". But I think that can be handled.)

Using :keywords: is not an option, assuming that you want the two sets of parameters to be distinguished in the output -- :keywords: already has a meaning in epydoc, and it will not put those parameters in a separate section.

[Travis]
> I agree that we can't really just ignore the problem. I could live
> with :math:`eqn` or :latex-math:`eqn` (although I would really prefer
> the abbreviations :m:`eqn` and :lm:`eqn`). Abbreviations are just fine
> in my mind for something that is used often enough.

I think the standard name should be something like :math:`foo`, but that it should be easy enough to expose a hook so you can register :m:`foo` as an alias for :math:`foo`.

[Travis]
>> I'm a little worried that epydoc is using :Keywords: differently here,
>> as in: these are keywords for the function that should go into some
>> kind of index.

No, :keywords: is used for keyword arguments. As of now, I haven't added support for adding index terms in rst. (In epytext, they're marked X{thusly}.) Adding support wouldn't be particularly difficult -- presumably the syntax would be something like :indexed:`this`. It's just one of many, many things I haven't gotten around to.

-Edward

From strawman at astraw.com Wed Jan 17 21:51:27 2007 From: strawman at astraw.com (Andrew Straw) Date: Wed, 17 Jan 2007 18:51:27 -0800 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <45AE69DF.80909@ee.byu.edu> References: <45A4FB90.9000809@ee.byu.edu> <45AE69DF.80909@ee.byu.edu> Message-ID: <45AEE0AF.5040407@astraw.com>

Another issue, which I don't think has been addressed in the present discussion: what do we do with optional outputs? (e.g. the retall argument to scipy.optimize.fmin())
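For instance (a quick sketch of the behavior in question; f and x0 stand in for any objective function and starting point):

    >>> from scipy.optimize import fmin
    >>> xopt = fmin(f, x0)                        # just the minimizer
    >>> xopt, allvecs = fmin(f, x0, retall=True)  # plus the iterates

The docstring somehow has to explain that the shape of the return value depends on the flags.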
-Andrew

From robert.kern at gmail.com Wed Jan 17 22:02:50 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 17 Jan 2007 21:02:50 -0600 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <45AEE0AF.5040407@astraw.com> References: <45A4FB90.9000809@ee.byu.edu> <45AE69DF.80909@ee.byu.edu> <45AEE0AF.5040407@astraw.com> Message-ID: <45AEE35A.6070609@gmail.com>

Andrew Straw wrote:
> Another issue, which I don't think has been addressed in the present
> discussion: what do we do with optional outputs? (e.g. the retall
> argument to scipy.optimize.fmin())

:Parameters:
    func : callable
        ...
    retall : bool, optional
        If True, then return all of the optional outputs as well as the
        minimizing input.

:Returns:
    xopt : array
        The minimizing input of the function.
    fopt : scalar, optional
        The value of the function at the minimizing input.
    iter : int, optional
        The number of iterations.
    funcalls : int, optional
        The number of function calls.
    warnflag : int, optional
        1: Maximum number of function evaluations reached.
        2: Maximum number of iterations reached.
    allvecs : list of arrays, optional
        List of the attempted minimizing inputs, one per iteration.

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From oliphant at ee.byu.edu Thu Jan 18 00:09:57 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 17 Jan 2007 22:09:57 -0700 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: References: Message-ID: <45AF0125.2040003@ee.byu.edu>

Edward Loper wrote:

> One issue with both of these solutions is that they will *not* make
> any distinction between the two sets of arguments in the generated
> html/pdf output. It sounds like you *do* want this distinction to
> get preserved in the generated output. If that's the case, then I
> think we'll need to define a new field tag.

I think you are right. Probably something like :OtherParameters: or something like that. I prefer whitespace, as in :Other Parameters:, if it can be made to work.

> I think the standard name should be something like :math:`foo`, but
> that it should be easy enough to expose a hook so you can
> register :m:`foo` as an alias for :math:`foo`.

This is completely fine. I think two aliases are useful:

1) :lm:`foo` for :latex-math:`foo` (can we use '-' in a standard name?)
2) :m:`foo` for :math:`foo`

One issue that hasn't been addressed is the :return: field. In NumPy/SciPy, tuples are often returned and meant to be unpacked, so that multiple outputs are returned. It would be best to have a :Returns: consolidated field that looks very much like the :Parameters: section:

:Returns:
    named : type
        Explanation
    output : type
        Explanation
    variables : type
        Explanation

Is this possible? If there is just one output, then

:return var: Explanation
:rtype var: type

would work.

-Travis

From seismic73 at yahoo.com Thu Jan 18 00:36:43 2007 From: seismic73 at yahoo.com (Jonathan Kane) Date: Wed, 17 Jan 2007 21:36:43 -0800 (PST) Subject: [SciPy-user] scipy installation problems for a newbie In-Reply-To: <45AC70BE.5070402@gmail.com> Message-ID: <631550.62567.qm@web81809.mail.mud.yahoo.com>

Hi, Thanks for your reply.
I downloaded my python from http://www.activestate.com/Products/ActivePython/ because it was recommended at http://www.scipy.org/Download

Next I downloaded numpy and scipy from the same page in the file http://idisk.mac.com/fonnesbeck-Public/ScipySuperpack-Intel-10.4-py2.4.zip

From that installation I got the errors that I showed in my first mail. I took your advice and installed the entire DeveloperTools from Apple (about 3.3 Gbytes!). I also installed the file you recommended, gfortran-intel-bin.tar.gz. Still no luck. I then saw on the page http://www.scipy.org/Installing_SciPy/Mac_OS_X that I should install gcc-intel-bin.tar.gz to get the fortran compilers, so I also installed that (another ~250 Mbytes). Still no luck. I got rid of a few errors along the way, but still get the following:

>>> from scipy import *
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/linalg/__init__.py", line 8, in ?
    from basic import *
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/linalg/basic.py", line 17, in ?
    from lapack import get_lapack_funcs
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/linalg/lapack.py", line 17, in ?
    from scipy.linalg import flapack
ImportError: Failure linking new module: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/linalg/flapack.so: Library not loaded: /usr/local/lib/libgfortran.1.dylib
  Referenced from: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/linalg/flapack.so
  Reason: image not found

It can't seem to find the file /usr/local/lib/libgfortran.1.dylib (there are files called libgfortran.2.0.0.dylib, libgfortran.2.dylib, libgfortran.a, libgfortran.dylib, and libgfortran.la, however...)

I really wish it wasn't this hard. I am not a hotshot programmer/hacker, but I am not clueless either. It's very confusing and frustrating downloading and installing SciPy. Can you help me? Should I send another mail to the list? Should I uninstall everything and start over (although I don't know how to uninstall the compiler packages...)? Please let me know. Thanks in advance.

Jon

Robert Kern wrote:

Jonathan Kane wrote:
> Hi. I am a newbie trying to install both numpy and scipy. Numpy seems
> to have installed fine, but scipy gives me errors when I try certain
> operations. Can anyone help?
>
> I am using an Intel Mac Mini with OS X 10.4.8. I downloaded and
> installed the binaries for ActivePython. Here is the readout when I
> start python

Which numpy and scipy binaries did you install, and where did you get them? Whoever built them used gfortran to compile them. Consequently, they got linked to the gfortran runtime libraries. You will need to download and install the gfortran compiler available from here:

http://hpc.sourceforge.net

Specifically, download the file gfortran-intel-bin.tar.gz. Read the paragraph above the download link for information on how to install it.

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

_______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user
From robert.kern at gmail.com Thu Jan 18 01:00:11 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 18 Jan 2007 00:00:11 -0600 Subject: [SciPy-user] scipy installation problems for a newbie In-Reply-To: <631550.62567.qm@web81809.mail.mud.yahoo.com> References: <631550.62567.qm@web81809.mail.mud.yahoo.com> Message-ID: <45AF0CEB.6040004@gmail.com>

Jonathan Kane wrote:
> It can't seem to find the file /usr/local/lib/libgfortran.1.dylib
> (there are files called libgfortran.2.0.0.dylib, libgfortran.2.dylib,
> libgfortran.a, libgfortran.dylib, and libgfortran.la, however...)

Ah. Looks like the gfortran binaries were upgraded to a newer version after Chris built the Superpack. That's ... unfortunate. I hereby curse all people who release tarballs without version information.

Well, you have a couple of options. You can uninstall the compilers that you got from the hpc.sf.net page pretty cleanly. You can get a list of the files it unpacked like so:

$ tar ztf /path/to/gfortran-intel-bin.tar.gz
usr/local/
usr/local/._.DS_Store
usr/local/.DS_Store
usr/local/bin/
usr/local/bin/gfortran
...

I would probably want to remove those files manually. Then you can install the gfortran-intel-bin.tar.gz that is in the ScipySuperpack zip file. That should be the one it was built with. Don't worry about the corresponding gcc. It's unnecessary.

Alternatively, now that you have your compilers already set up, you could build numpy and scipy yourself. They're not too hard. matplotlib is harder, if you need that too, but we'll take this two steps at a time. See the instructions I gave earlier on the list. The matplotlib part is missing a crucial detail, but the numpy and scipy bits are complete and accurate.

http://projects.scipy.org/pipermail/numpy-discussion/2007-January/025368.html

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From fperez.net at gmail.com Thu Jan 18 01:44:07 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 17 Jan 2007 23:44:07 -0700 Subject: [SciPy-user] [Nipy-devel] how to wrap C++ In-Reply-To: <45AE7449.30007@ucsf.edu> References: <45AE6C33.4090805@radmail.ucsf.edu> <45AE7449.30007@ucsf.edu> Message-ID:

On 1/17/07, Karl Young wrote:
>
> >Many of us happily use python for speed-critical code, and we feel
> >that a careful mix of python, numpy, pyrex and raw C provides a very
> >good environment with a minimal amount of confusion (since you
> >basically only need to know Python and C syntax, pyrex being a blend
> >of the two).
>
> Sorry for the tangential question, but Fernando's post reminded me of a
> question I had: what do scipy folks recommend these days for wrapping
> C++ packages? I'm working on a project that would like to wrap a C++
> NMR simulation package, either as a NiPy component or as a standalone
> package to be used in conjunction with NiPy (we need to determine the
> best way to do that - NiPy is primarily focused on neuroimaging, and
> adding a spectroscopy component might be adding unnecessary clutter,
> even though the ultimate goal is analysis of spectroscopic imaging
> data). Before the original C++ package ceased its evolution (a couple
> of years ago), one of the developers had gotten about halfway through
> using Boost to provide a Python interface. I did a little bit of work
> with SWIG a few years ago, but for a very small package.
> So we're not sure whether to try and complete the original
So we're not sure whether to try and complete the original > effort (using newer versions of python and Boost) or to start over with > a currently more appropriate choice. Any thoughts welcome, thanks, What I recall from a while ago, as relayed by John Hunter (matplotlib) and Prabhu Ramachandran (mayavi) was that while boost was great, the compile times and the resulting binary library sizes were completely out of control. All that templating forces instantiation of an enormous number of classes, which makes both time and size explode, to the point of making both development and distribution seriously problematic. Both John and Prabhu decided to go with other alternatives due to this, I think it was SWIG (whose C++ support has greatly improved). You might also want to have a look at this: http://www.scipy.org/PyRoot Root is a gigantic C++ framework coming from the data analysis world of High-Energy physics (aka particle physics). It was developed at CERN and is very widely used. This guy's talk was very interesting, and they seem to have gone for their own (non-boost, non-SWIG) solution, with arguments backing up their decision. Cheers, f From robert.vergnes at yahoo.fr Thu Jan 18 02:08:22 2007 From: robert.vergnes at yahoo.fr (Robert VERGNES) Date: Thu, 18 Jan 2007 08:08:22 +0100 (CET) Subject: [SciPy-user] SciPy Data Analysis Workbench In-Reply-To: <17838.23057.636366.263684@monster.iitb.ac.in> Message-ID: <20070118070822.94776.qmail@web27405.mail.ukl.yahoo.com> Hello Prabhu, Thanx for your input. I browsed the info relatively fast. Indeed it looks interesting. But the interface of the qme-dev workbench is nearly finished... in wxpython and should be out this or next week. I looked at Gael's article. yes impressive, but wxpython is relatively fast to do similar things as well. Albeit wxpython was not exactly designed for scientific work I have to admit. It is GUI only. But the advantage of wxpython is that you can tape in the large wxpython library of demos to develop very specific GUI fast look at the Demos that's impressive. I am not sure that Traits has such large librairy of demo code available. Additionnaly you can get the support of the wxpython community which is vital when you want to go fast. - never heard of ETS before... unfortunately. I would have known Traits/ETS before, I would have considered it. I considered Qt and Tk and wxpython. and made a choice...2 months ago. And now, I cannot afford to learn a new language/wrapper for python's GUI. It took me too much time for wxpython and unfortunately, time is what I have the less...So I will stick to wxpython for the time being. But flexibility of qme-dev workbench allows to use Math-Plot-Lib (Pylab) and could certainly use the ETS lib in the future, if they do not conflict with wxpython. If you want to participate to the workbench development you are warmly welcome. Thanx. Robert Prabhu Ramachandran a ?crit : >>>>> "Robert" == Robert VERGNES writes: Robert> Hello, I started to write my own Data Anlysis Workbench Robert> based on Scipy and wxpython. (I haven't seen one around?) Robert> Is there anybody interested in it and/or to help writing Robert> it ? I think it would be a good idea to use the Enthought Tool Suite (ETS) (http://code.enthought.com/ets/) as the underlying toolset for this package. 
Gael Varoquaux has written a very nice article highlighting why traits UI (which is a part of ETS) is cool here: http://gael-varoquaux.info/computers/traits_tutorial/ cheers, prabhu From gael.varoquaux at normalesup.org Thu Jan 18 03:36:51 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 18 Jan 2007 09:36:51 +0100 Subject: [SciPy-user] SciPy Data Analysis Workbench In-Reply-To: <20070118070822.94776.qmail@web27405.mail.ukl.yahoo.com> References: <17838.23057.636366.263684@monster.iitb.ac.in> <20070118070822.94776.qmail@web27405.mail.ukl.yahoo.com> Message-ID: <20070118083651.GA14390@clipper.ens.fr> Hello Robert, Concerning TraitsUI, I think it is a _great_ productivity gain. It is a bit hard to learn for people who have never used a graphical toolkit, but if you are fluent with wxPython, it should be pretty fast to learn. Bear in mind that it is only a layer above wxPython, so if you understand wxPython and if you read section 4 of my tutorial you can blend the two together. As far as ETS goes, it is a fantastic tool suite, but it is not terribly well distributed currently (no packages under linux or mac); the developers are very helpful, though, and they are actively working on that. I have never regretted using TraitsUI. The fact that the widget manipulation code vanishes from my code makes it much more readable, and the automatic generation of panels is great. The biggest problem with it is that it is still young. If you have feedback on my tutorial, I would be very interested; I am trying to lower the effort needed to learn GUI programming as much as possible. I see many people around me who need this. Cheers, Gaël On Thu, Jan 18, 2007 at 08:08:22AM +0100, Robert VERGNES wrote: > Hello Prabhu, > Thanx for your input. I browsed the info relatively fast. > Indeed it looks interesting. But the interface of the qme-dev workbench is > nearly finished... in wxpython, and should be out this or next week. [...]
-- Gael Varoquaux, Groupe d'optique atomique, Laboratoire Charles Fabry de l'Institut d'Optique Campus Polytechnique, RD 128 91127 Palaiseau cedex FRANCE Tel : 33 (0) 1 69 35 88 30 - Fax : 33 (0) 1 69 35 87 00 From david at ar.media.kyoto-u.ac.jp Thu Jan 18 04:16:07 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 18 Jan 2007 18:16:07 +0900 Subject: [SciPy-user] [Nipy-devel] how to wrap C++ In-Reply-To: References: <45AE6C33.4090805@radmail.ucsf.edu> <45AE7449.30007@ucsf.edu> Message-ID: <45AF3AD7.5090403@ar.media.kyoto-u.ac.jp> Fernando Perez wrote: > > What I recall from a while ago, as relayed by John Hunter (matplotlib) > and Prabhu Ramachandran (mayavi), was that while boost was great, the > compile times and the resulting binary library sizes were completely > out of control. [...] > > You might also want to have a look at this: > > http://www.scipy.org/PyRoot I took a quick look at the slides, and wow, this looks pretty cool. I am wondering if this scales up well if you have a lot of communication between python and the C++ "backend"; the ability to catch memory problems looks impressive too, and I am wondering how this is done, as they seem to keep everything in the same address space. FWIW, I have used boost a bit with python, and while I find it more 'elegant' than SWIG (well, as elegant as C++ can be), my experience is exactly the same as John's and Prabhu's.
The compilation time is enormous, to the point where it is really a pain to use on anything but really powerful computers, especially on linux where g++ is kind of slow at compiling C++. cheers, David From robert.vergnes at yahoo.fr Thu Jan 18 04:29:42 2007 From: robert.vergnes at yahoo.fr (Robert VERGNES) Date: Thu, 18 Jan 2007 10:29:42 +0100 (CET) Subject: [SciPy-user] RE : Re: SciPy Data Analysis Workbench In-Reply-To: <20070118083651.GA14390@clipper.ens.fr> Message-ID: <20070118092942.25922.qmail@web27411.mail.ukl.yahoo.com> Hi Gael, Ok. I admit that! From what I can see, Traits can make coding a GUI much clearer than wxpython, and faster. ;-) But being in the middle of finishing a first release of the workbench - having spent so many hours coding the GUI in wxpython... aaarff - changing now does NOT make me feel comfortable... Also I want to use the soft on Win and Linux, and I am not sure that Traits could handle what I need. I will release a first version soon and you can give your opinion on whether it could be handled by Traits. For example: creating one datapage of the workbench can certainly be done with Traits (a datapage being a notebook which includes different pages of different types, such as datagrid, graph, shell, etc.). But then I use a notebook inside a notebook to generate the dataset - i.e. a set of datapages. So I am not so sure that it would be easy to handle with Traits and to control all the controls... I will let you judge when it is released. Bye Robert PS for Gael: I still haven't understood what SciLab is for... Six months ago I considered scilab for my development. Then, when I saw the mess, I dropped it. It is really awful... Why didn't 'they' use Python and/or develop a GUI for octave... Sometimes I wonder about the French 'politico-businesso-scientific' choices... we really know how to spend money in France. :-) Gael Varoquaux wrote: Hello Robert, Concerning TraitsUI, I think it is a _great_ productivity gain. [...]
From prabhu at aero.iitb.ac.in Thu Jan 18 07:12:30 2007 From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran) Date: Thu, 18 Jan 2007 17:42:30 +0530 Subject: [SciPy-user] RE : Re: SciPy Data Analysis Workbench In-Reply-To: <20070118092942.25922.qmail@web27411.mail.ukl.yahoo.com> References: <20070118083651.GA14390@clipper.ens.fr> <20070118092942.25922.qmail@web27411.mail.ukl.yahoo.com> Message-ID: <17839.25646.470771.76061@prpc.aero.iitb.ac.in> >>>>> "Robert" == Robert VERGNES writes: Robert> Hi Gael, Ok. I admit that! >>> From what I can see, Traits can make coding a GUI much >>> clearer than wxpython, and faster.
Robert> ;-) But being in the middle of finishing a first release Robert> of the workbench - having spent so many hours coding Robert> the GUI in wxpython... aaarff - changing now does NOT make me I understand. I did not realize you had already spent a ton of time on this. Also note that traits is one part of ETS. There is a lot of other stuff. This includes envisage, which lets you build applications via plugins. It also includes kiva, chaco and mayavi2. There is more coming. Robert> feel comfortable... Also I want to use the soft on Win Robert> and Linux, and I am not sure that Traits could handle Some of us develop code for ETS on Linux and the Mac. Many others develop entirely on win32. Needless to say, it works on Win32 as well. Robert> what I need. I will release a first version soon and you Robert> can give your opinion on whether it could be handled by Robert> Traits. For example: creating one datapage of the workbench Robert> can certainly be done with Traits (a datapage being a Robert> notebook which includes different pages of different types, Robert> such as datagrid, graph, shell, etc.). But then I use a Robert> notebook inside a notebook to generate the dataset - i.e. a Robert> set of datapages. So I am not so sure that it would be easy Robert> to handle with Traits and to control all the controls... I Robert> will let you judge when it is released. With traits and envisage there are some pretty cool things you can do. Unfortunately, I think the ETS developers (myself included) haven't done enough by way of reaching out to people and telling them what you can do with it. regards, prabhu From gael.varoquaux at normalesup.org Thu Jan 18 08:23:44 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 18 Jan 2007 14:23:44 +0100 Subject: [SciPy-user] RE : Re: SciPy Data Analysis Workbench In-Reply-To: <20070118092942.25922.qmail@web27411.mail.ukl.yahoo.com> References: <20070118083651.GA14390@clipper.ens.fr> <20070118092942.25922.qmail@web27411.mail.ukl.yahoo.com> Message-ID: <20070118132330.GA20853@clipper.ens.fr> Hi Robert, I think I share your point of view: in the short term, for your project, traits is not a good option. In the long term, I think it might be. > [rant in French about the French government putting money into scilab > which is much less interesting than scipy and co.] Yes, I agree with what you said. I have used scilab before, and, apart from the reduced language abilities and toolbox, I must say that when I sent in patches (one-liners) I was told by the developers that my problems (Linux related) were not a priority. The thing isn't even properly open-source. What you should do is do some advocacy around you about python. Gaël From david at ar.media.kyoto-u.ac.jp Thu Jan 18 08:30:16 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 18 Jan 2007 22:30:16 +0900 Subject: [SciPy-user] RE : Re: SciPy Data Analysis Workbench In-Reply-To: <20070118132330.GA20853@clipper.ens.fr> References: <20070118083651.GA14390@clipper.ens.fr> <20070118092942.25922.qmail@web27411.mail.ukl.yahoo.com> <20070118132330.GA20853@clipper.ens.fr> Message-ID: <45AF7668.6020709@ar.media.kyoto-u.ac.jp> Gael Varoquaux wrote: [...] > I have used scilab before, and, apart from the reduced language > abilities and toolbox, I must say that when I sent in patches > (one-liners) I was told by the developers that my problems (Linux > related) were not a priority. > The thing isn't even properly open-source.
To be fair to scilab (which I don't like much myself either), it was started quite some time ago, and it has some possibilities that neither octave nor python seem to have, such as a kind of clone of simulink, which is supposed to be really useful in some circles. David From hasslerjc at adelphia.net Thu Jan 18 08:56:15 2007 From: hasslerjc at adelphia.net (John Hassler) Date: Thu, 18 Jan 2007 08:56:15 -0500 Subject: [SciPy-user] RE : Re: SciPy Data Analysis Workbench In-Reply-To: <45AF7668.6020709@ar.media.kyoto-u.ac.jp> References: <20070118083651.GA14390@clipper.ens.fr> <20070118092942.25922.qmail@web27411.mail.ukl.yahoo.com> <20070118132330.GA20853@clipper.ens.fr> <45AF7668.6020709@ar.media.kyoto-u.ac.jp> Message-ID: <45AF7C7F.1060800@adelphia.net> An HTML attachment was scrubbed... From matthew at sel.cam.ac.uk Thu Jan 18 09:03:38 2007 From: matthew at sel.cam.ac.uk (Matthew Vernon) Date: Thu, 18 Jan 2007 14:03:38 +0000 Subject: [SciPy-user] RE : Re: SciPy Data Analysis Workbench In-Reply-To: <45AF7C7F.1060800@adelphia.net> References: <20070118083651.GA14390@clipper.ens.fr> <20070118092942.25922.qmail@web27411.mail.ukl.yahoo.com> <20070118132330.GA20853@clipper.ens.fr> <45AF7668.6020709@ar.media.kyoto-u.ac.jp> <45AF7C7F.1060800@adelphia.net> Message-ID: On 18 Jan 2007, at 13:56, John Hassler wrote: > To be still more fair, Scilab should be compared to Matlab, not > Python. Scilab (free) doesn't have the polish of Matlab > (expensive), but is as usable as Matlab for a very wide range of > problems. In fact, I like the structure of Scilab much better than > that of Matlab and its abominable "function equals m-file" > requirement. If you're looking for matlab replacements, I'd recommend GNU Octave. It is mostly compatible with matlab syntax, too. http://www.gnu.org/software/octave/ Cheers, Matthew -- Matthew Vernon MA VetMB LGSM MRCVS Farm Animal Epidemiology and Informatics Unit Department of Veterinary Medicine, University of Cambridge http://www.cus.cam.ac.uk/~mcv21/ From hasslerjc at adelphia.net Thu Jan 18 09:16:51 2007 From: hasslerjc at adelphia.net (John Hassler) Date: Thu, 18 Jan 2007 09:16:51 -0500 Subject: [SciPy-user] RE : Re: SciPy Data Analysis Workbench In-Reply-To: References: <20070118083651.GA14390@clipper.ens.fr> <20070118092942.25922.qmail@web27411.mail.ukl.yahoo.com> <20070118132330.GA20853@clipper.ens.fr> <45AF7668.6020709@ar.media.kyoto-u.ac.jp> <45AF7C7F.1060800@adelphia.net> Message-ID: <45AF8153.20607@adelphia.net> An HTML attachment was scrubbed... From matthew at sel.cam.ac.uk Thu Jan 18 09:21:41 2007 From: matthew at sel.cam.ac.uk (Matthew Vernon) Date: Thu, 18 Jan 2007 14:21:41 +0000 Subject: [SciPy-user] RE : Re: SciPy Data Analysis Workbench In-Reply-To: <45AF8153.20607@adelphia.net> References: <20070118083651.GA14390@clipper.ens.fr> <20070118092942.25922.qmail@web27411.mail.ukl.yahoo.com> <20070118132330.GA20853@clipper.ens.fr> <45AF7668.6020709@ar.media.kyoto-u.ac.jp> <45AF7C7F.1060800@adelphia.net> <45AF8153.20607@adelphia.net> Message-ID: On 18 Jan 2007, at 14:16, John Hassler wrote: > Unfortunately, Octave isn't easy to run on windows. (As I > remember, you have to run it with Cygwin.) Python/SciPy and Scilab > play nicely with either Linux or windows.
We're drifting off the topic a smidge here, but you can get windows binaries for Octave if you don't have Cygwin, according to http://www.gnu.org/software/octave/download.html disclaimer: I only ever use OS X, solaris and linux. HTH, Matthew -- Matthew Vernon MA VetMB LGSM MRCVS Farm Animal Epidemiology and Informatics Unit Department of Veterinary Medicine, University of Cambridge http://www.cus.cam.ac.uk/~mcv21/ From S.Mientki at ru.nl Thu Jan 18 09:27:33 2007 From: S.Mientki at ru.nl (Stef Mientki) Date: Thu, 18 Jan 2007 15:27:33 +0100 Subject: [SciPy-user] RE : Re: SciPy Data Analysis Workbench In-Reply-To: <45AF7668.6020709@ar.media.kyoto-u.ac.jp> References: <20070118083651.GA14390@clipper.ens.fr> <20070118092942.25922.qmail@web27411.mail.ukl.yahoo.com> <20070118132330.GA20853@clipper.ens.fr> <45AF7668.6020709@ar.media.kyoto-u.ac.jp> Message-ID: <45AF83D5.8080906@ru.nl> David Cournapeau wrote: [...] > To be fair to scilab (which I don't like much myself either), it was > started quite some time ago, and it has some possibilities that neither > octave nor python seem to have, such as a kind of clone of simulink, > which is supposed to be really useful in some circles. Very useful information. Although I had more or less chosen SciPy as the main tool, I was still doubting what to choose, SciPy or SciLab, but now I think I'll definitely go for SciPy. A few months ago, I was asked to see if LabView could be a good alternative to both MatLab and our own data-acquisition and analysis package (in the field of medical research). After testing LabView for at most 2 days, I threw it in the garbage can ;-) So then the question arose: are there other alternatives? And so I bounced into SciLab and SciPy. The SciCos part of SciLab covers not only MatLab's Simulink (transfer functions) but also MatLab's PowerSim (2-poles). Testing the demos, I think SciCos can really compete with the MatLab versions. The following issues decided me to choose SciPy: - embedding of SciLab is very difficult (impossible?) - the usenet group is much less active than the SciPy and Python usergroups - Python is usable as a general programming language The fact that SciPy doesn't have a Simulink equivalent delayed my decision for a couple of days, but let's be positive: maybe SimuPy will be born some day ;-) Comparing MatLab and SciPy is boring; everything points in the same direction. The only disadvantage of SciPy is "nobody knows it !!" ;-) cheers, Stef Mientki From emsellem at obs.univ-lyon1.fr Thu Jan 18 09:52:18 2007 From: emsellem at obs.univ-lyon1.fr (Eric Emsellem) Date: Thu, 18 Jan 2007 15:52:18 +0100 Subject: [SciPy-user] cblas_dtrsm not found Message-ID: <45AF89A2.5060000@obs.univ-lyon1.fr> Hi!
I have been using numpy+scipy+matplotlib extensively in scripts for quite some time, and have regularly updated all these packages using svn. For some reason the following does not work anymore, and it is trying to find cblas_dtrsm. I have it in /usr/local/lib/atlas/libcblas.a (I checked by doing an "ar -t" on that library). I also have that directory in my ld.so.conf (and indeed as the ATLAS default dir). It must be a very stupid mistake I am making here, but I cannot make it work. (It always worked, and I haven't changed anything in my ATLAS/BLAS, etc.) (I tried recompiling scipy but no luck.) Does anyone have an idea of what is wrong with that? Thanks for any help here. Eric ==================================== ---> 21 from scipy import optimize, special, interpolate 22 from scipy.optimize import leastsq, fsolve 23 from scipy.special import gammainc /usr/local/lib/python2.4/site-packages/scipy/optimize/__init__.py 9 from zeros import * 10 from anneal import * ---> 11 from lbfgsb import fmin_l_bfgs_b 12 from tnc import fmin_tnc 13 from cobyla import fmin_cobyla /usr/local/lib/python2.4/site-packages/scipy/optimize/lbfgsb.py 28 29 from numpy import zeros, float64, array, int32 ---> 30 import _lbfgsb 31 import optimize 32 ImportError: /usr/local/lib/python2.4/site-packages/scipy/optimize/_lbfgsb.so: undefined symbol: cblas_dtrsm From wbaxter at gmail.com Thu Jan 18 10:17:30 2007 From: wbaxter at gmail.com (Bill Baxter) Date: Fri, 19 Jan 2007 00:17:30 +0900 Subject: [SciPy-user] RE : Re: SciPy Data Analysis Workbench In-Reply-To: References: <20070118083651.GA14390@clipper.ens.fr> <20070118092942.25922.qmail@web27411.mail.ukl.yahoo.com> <20070118132330.GA20853@clipper.ens.fr> <45AF7668.6020709@ar.media.kyoto-u.ac.jp> <45AF7C7F.1060800@adelphia.net> <45AF8153.20607@adelphia.net> Message-ID: On 1/18/07, Matthew Vernon wrote: > > On 18 Jan 2007, at 14:16, John Hassler wrote: > > > Unfortunately, Octave isn't easy to run on windows. (As I > > remember, you have to run it with Cygwin.) Python/SciPy and Scilab > > play nicely with either Linux or windows. > > We're drifting off the topic a smidge here, but you can get windows > binaries for Octave if you don't have Cygwin, according to > http://www.gnu.org/software/octave/download.html > > disclaimer: I only ever use OS X, solaris and linux. Hmm, the download link for Windows says: "WARNING: do not install this package if you have cygwin installed". Which suggests to me that it probably just installs the parts of cygwin that it needs to run. (But note that I have not tried installing it to confirm.) --bb From simon at arrowtheory.com Thu Jan 18 10:48:41 2007 From: simon at arrowtheory.com (Simon Burton) Date: Thu, 18 Jan 2007 07:48:41 -0800 Subject: [SciPy-user] RE : Re: SciPy Data Analysis Workbench In-Reply-To: <17839.25646.470771.76061@prpc.aero.iitb.ac.in> References: <20070118083651.GA14390@clipper.ens.fr> <20070118092942.25922.qmail@web27411.mail.ukl.yahoo.com> <17839.25646.470771.76061@prpc.aero.iitb.ac.in> Message-ID: <20070118074841.497f7e79.simon@arrowtheory.com> On Thu, 18 Jan 2007 17:42:30 +0530 Prabhu Ramachandran wrote: ... > With traits and envisage there are some pretty cool things you can do. > Unfortunately, I think the ETS developers (myself included) haven't > done enough by way of reaching out to people and telling them what > you can do with it. Are you guys showing off ETS at PyCon this year? Simon.
From robert.kern at gmail.com Thu Jan 18 11:27:22 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 18 Jan 2007 10:27:22 -0600 Subject: [SciPy-user] cblas_dtrsm not found In-Reply-To: <45AF89A2.5060000@obs.univ-lyon1.fr> References: <45AF89A2.5060000@obs.univ-lyon1.fr> Message-ID: <45AF9FEA.1020702@gmail.com> Eric Emsellem wrote: > Hi! > > I have been using numpy+scipy+matplotlib extensively in scripts for > quite some time, and have regularly updated all these packages using svn. [...] > Does anyone have an idea of what is wrong with that? Not without knowing the details of how you built scipy. Can you show us the output of $ python setup.py config ? Also, I believe that you can find symbols which are statically linked into the extension module by using nm(1) (but I'm too lazy to find something that I can check; I use a shared-library ATLAS on my Linux system). $ nm $PYLIB/scipy/optimize/_lbfgsb.so | grep cblas_dtrsm -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Thu Jan 18 11:31:08 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 18 Jan 2007 10:31:08 -0600 Subject: [SciPy-user] RE : Re: SciPy Data Analysis Workbench In-Reply-To: <20070118074841.497f7e79.simon@arrowtheory.com> References: <20070118083651.GA14390@clipper.ens.fr> <20070118092942.25922.qmail@web27411.mail.ukl.yahoo.com> <17839.25646.470771.76061@prpc.aero.iitb.ac.in> <20070118074841.497f7e79.simon@arrowtheory.com> Message-ID: <45AFA0CC.5000405@gmail.com> Simon Burton wrote: > Are you guys showing off ETS at PyCon this year? Not in an official talk, but several of us will be there, and we will try to snag some of the time for an open space talk. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From emsellem at obs.univ-lyon1.fr Thu Jan 18 11:35:07 2007 From: emsellem at obs.univ-lyon1.fr (Eric Emsellem) Date: Thu, 18 Jan 2007 17:35:07 +0100 Subject: [SciPy-user] cblas_dtrsm not found In-Reply-To: <45AF9FEA.1020702@gmail.com> References: <45AF89A2.5060000@obs.univ-lyon1.fr> <45AF9FEA.1020702@gmail.com> Message-ID: <45AFA1BB.6030801@obs.univ-lyon1.fr> An HTML attachment was scrubbed...
From robert.kern at gmail.com Thu Jan 18 11:38:37 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 18 Jan 2007 10:38:37 -0600 Subject: [SciPy-user] cblas_dtrsm not found In-Reply-To: <45AFA1BB.6030801@obs.univ-lyon1.fr> References: <45AF89A2.5060000@obs.univ-lyon1.fr> <45AF9FEA.1020702@gmail.com> <45AFA1BB.6030801@obs.univ-lyon1.fr> Message-ID: <45AFA28D.8000003@gmail.com> Eric Emsellem wrote: > thanks for the quick reply. > > doing the nm I get: > U cblas_dtrsm > > and the python config: (not much seems to be found?) Okay, then you should make a site.cfg file with the information to find and link against your ATLAS libraries. If you have a relatively recent numpy, there should be a site.cfg.example file with plenty of comments on how to fill it out. Otherwise, you can get that file from here: http://svn.scipy.org/svn/numpy/trunk/site.cfg.example
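For ATLAS living in /usr/local/lib/atlas, the relevant section would look something like this (a sketch only; the exact library names depend on how your ATLAS was built, so adjust them to match what is actually in that directory):

[atlas]
library_dirs = /usr/local/lib/atlas
atlas_libs = lapack, f77blas, cblas, atlas

Then rebuild numpy and scipy so the new configuration is picked up.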
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From Karl.Young at ucsf.edu Thu Jan 18 12:11:38 2007 From: Karl.Young at ucsf.edu (Karl Young) Date: Thu, 18 Jan 2007 09:11:38 -0800 Subject: [SciPy-user] [Nipy-devel] how to wrap C++ In-Reply-To: References: <45AE6C33.4090805@radmail.ucsf.edu> <45AE7449.30007@ucsf.edu> Message-ID: <45AFAA4A.9080505@ucsf.edu> >What I recall from a while ago, as relayed by John Hunter (matplotlib) >and Prabhu Ramachandran (mayavi), was that while boost was great, the >compile times and the resulting binary library sizes were completely >out of control. [...] >Root is a gigantic C++ framework coming from the data analysis world >of high-energy physics (aka particle physics). It was developed at >CERN and is very widely used. This guy's talk was very interesting, >and they seem to have gone for their own (non-boost, non-SWIG) >solution, with arguments backing up their decision. Thanks much for the suggestions, Fernando. I shudder at the thought of using Root, though; I used it for a while when I was at SLAC, mainly for plotting from a C++ package. It is a very ambitious project with a lot of nice features developed by some good programmers, but it's a bit of a monstrosity (one of the jokes at SLAC was that it also serves as a toothpaste and floor wax). I liked how easy it was to use SWIG, though, so given the improved C++ support that sounds like the most promising direction. I'll report back re. what I find. -- Karl Young Center for Imaging of Neurodegenerative Diseases, UCSF VA Medical Center (114M) Phone: (415) 221-4810 x3114 lab 4150 Clement Street FAX: (415) 668-2864 San Francisco, CA 94121 Email: karl young at ucsf edu From fperez.net at gmail.com Thu Jan 18 13:28:26 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 18 Jan 2007 11:28:26 -0700 Subject: [SciPy-user] [Nipy-devel] how to wrap C++ In-Reply-To: <45AF3AD7.5090403@ar.media.kyoto-u.ac.jp> References: <45AE6C33.4090805@radmail.ucsf.edu> <45AE7449.30007@ucsf.edu> <45AF3AD7.5090403@ar.media.kyoto-u.ac.jp> Message-ID: On 1/18/07, David Cournapeau wrote: > I took a quick look at the slides, and wow, this looks pretty cool. I am > wondering if this scales up well if you have a lot of communication > between python and the C++ "backend"; the ability to catch memory > problems looks impressive too, and I am wondering how this is done, as > they seem to keep everything in the same address space. [...] Keep in mind that I wasn't suggesting using Root itself, but rather trying to use their strategy for C++-Python coupling. Wim (the author of the slides) mentioned they'd try to separate that component from PyRoot itself; you might want to email him and ask how far along that has gone (or how hard it would be to rip it out yourself). I also come from a high-energy background so I'm reasonably familiar with Root's reputation, and I don't think I'd foist it on anyone who hasn't harmed me before :) Cheers, f From zunzun at zunzun.com Thu Jan 18 13:38:58 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Thu, 18 Jan 2007 13:38:58 -0500 Subject: [SciPy-user] Python Equations released under GPL Message-ID: <20070118183858.GA24001@zunzun.com> The middleware code for http://zunzun.com has been released under the GPL. The project is a collection of Python equations that can fit themselves to both 2D and 3D data sets, output source code in several computing languages, and can use an included genetic algorithm for initial parameter estimation. Requires SciPy and SciPy's weave module.
James From aisaac at american.edu Thu Jan 18 13:43:24 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 18 Jan 2007 13:43:24 -0500 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <45AF0125.2040003@ee.byu.edu> References: <45AF0125.2040003@ee.byu.edu> Message-ID: On Wed, 17 Jan 2007, Travis Oliphant apparently wrote: > Probably something like :OtherParameters: or something > like that. I prefer white-space as in :Other Parameters: > if it can be made to work. While Ed suggested that he can make this work, it would be a deviation from standard reST. Each time there is such a deviation, it makes it harder for other reST-based processors (besides Epydoc) to process the docs. So perhaps :OtherParameters: or :Parameters2: would be better. > 1) :lm:`foo` for :latex-math:`foo` > 2) :m:`foo` for :math:`foo` I must have missed a post: what is the distinction being drawn here? I do not know what :math: is unless it is :latex-math:. I can imagine also having, say, :lout-math: but I do not recall this being discussed. > It would be best to have an :Returns: consolidated field > that looks very-much like the :Parameters: section. Absolutely! Cheers, Alan Isaac From aisaac at american.edu Thu Jan 18 13:43:25 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 18 Jan 2007 13:43:25 -0500 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <369FDD09-3A75-49F3-9CD1-0CCD143CD493@gradient.cis.upenn.edu> References: <4DF09477-38C1-44AA-90E0-96D659BC2C2B@gradient.cis.upenn.edu> <45AD60CE.6090102@ee.byu.edu> <369FDD09-3A75-49F3-9CD1-0CCD143CD493@gradient.cis.upenn.edu> Message-ID: > Travis wrote: >> Moin Moin uses a different way to describe tables. :-( On Wed, 17 Jan 2007, Edward Loper apparently wrote: > Is there a reason you're attached to MoinMoin's syntax > instead of rst's? MoinMoin's doesn't seem particularly > more readable to me. If you're using rst for most of your > markup, why not use it for tables? Since this thread has grown rather long, I'm going to risk being a bit repetitive. 1. reST tables *are* more limited, but nobody has yet illustrated with a table that would be - needed in the docs - incompatible with the reST tables 2. It would startle me if a patch to handle MoinMoin tables were rejected by the reST developers Cheers, Alan Isaac From oliphant at ee.byu.edu Thu Jan 18 14:24:35 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 18 Jan 2007 12:24:35 -0700 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: References: <45AF0125.2040003@ee.byu.edu> Message-ID: <45AFC973.1090306@ee.byu.edu> Alan G Isaac wrote: >On Wed, 17 Jan 2007, Travis Oliphant apparently wrote: >>Probably something like :OtherParameters: or something >>like that. I prefer white-space as in :Other Parameters: >>if it can be made to work. [...] >>1) :lm:`foo` for :latex-math:`foo` >>2) :m:`foo` for :math:`foo` > >I must have missed a post: what is the distinction being >drawn here? I do not know what :math: is unless it is >:latex-math:. :math: would be an equation that does not necessarily use latex syntax for math. Several markup languages define math so that, for example, :math:`y=f(x)` renders differently. :latex-math: would use latex symbols and commands for math --- something like :latex-math:`y=f\left(x\right)` for example. -Travis
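To make the proposal concrete, a docstring combining the consolidated fields and the math role under discussion might look like the following (an illustrative sketch of the candidate markup only, with a made-up function; this is not an agreed standard):

def f(x):
    """
    Evaluate :math:`y = f(x)` at a point.

    :Parameters:
        x : float
            Point at which to evaluate the function.

    :Returns:
        y : float
            The value :math:`f(x)`.
    """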
From aisaac at american.edu Thu Jan 18 14:39:44 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 18 Jan 2007 14:39:44 -0500 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <45AFC973.1090306@ee.byu.edu> References: <45AF0125.2040003@ee.byu.edu> <45AFC973.1090306@ee.byu.edu> Message-ID: On Thu, 18 Jan 2007, Travis Oliphant apparently wrote: > :math: would be an equation that does not necessarily use latex syntax for > math. Several markup languages define math so that, for example, > :math:`y=f(x)` renders differently. I think I understand: the intent is essentially to allow *styling* of a particular span or div, without support for any "type-setting" commands. Is that right? So, for example, if someone wants to use Lout notation, they can for now just use a math role (to get, e.g., fixed-width italic rendering), but at some future date, if reST supports a `lout-math` role, they could change over to this role while making no other changes. This seems generously open-ended. Thanks, Alan From cckidder at gmail.com Thu Jan 18 15:59:20 2007 From: cckidder at gmail.com (Chad Kidder) Date: Thu, 18 Jan 2007 20:59:20 +0000 (UTC) Subject: [SciPy-user] RHEL 4 SciPy Install Failure Message-ID: I was trying to install SciPy today after a successful numpy install and it went well until it got to the fortran compilation of dfftpack. Below is an excerpt of the error. Anyone know what's going on? From cckidder at gmail.com Thu Jan 18 16:23:57 2007 From: cckidder at gmail.com (Chad Kidder) Date: Thu, 18 Jan 2007 15:23:57 -0600 Subject: [SciPy-user] RHEL 4 SciPy Install Failure Message-ID: I was trying to install SciPy today after a successful numpy install and it went well until it got to the fortran compilation of dfftpack. Below is an excerpt of the error. Anyone know what's going on? running build_clib customize UnixCCompiler customize UnixCCompiler using build_clib customize GnuFCompiler customize GnuFCompiler customize GnuFCompiler using build_clib building 'dfftpack' library compiling Fortran sources Fortran f77 compiler: /usr/bin/g77 -g -Wall -fno-second-underscore -fPIC -O3 -funroll-loops -march=i686 -mmmx -msse2 -msse -fomit-frame-pointer creating build/temp.linux-x86_64-2.3 creating build/temp.linux-x86_64-2.3/Lib creating build/temp.linux-x86_64-2.3/Lib/fftpack creating build/temp.linux-x86_64-2.3/Lib/fftpack/dfftpack compile options: '-c' g77:f77: Lib/fftpack/dfftpack/dsinqi.f Lib/fftpack/dfftpack/dsinqi.f:0: error: CPU you selected does not support x86-64 instruction set Lib/fftpack/dfftpack/dsinqi.f:0: error: CPU you selected does not support x86-64 instruction set Lib/fftpack/dfftpack/dsinqi.f:0: error: CPU you selected does not support x86-64 instruction set Lib/fftpack/dfftpack/dsinqi.f:0: error: CPU you selected does not support x86-64 instruction set error: Command "/usr/bin/g77 -g -Wall -fno-second-underscore -fPIC -O3 -funroll-loops -march=i686 -mmmx -msse2 -msse -fomit-frame-pointer -c -c Lib/fftpack/dfftpack/dsinqi.f -o build/temp.linux-x86_64-2.3/Lib/fftpack/dfftpack/dsinqi.o" failed with exit status 1 Thanks for your help. From zunzun at zunzun.com Thu Jan 18 16:33:09 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Thu, 18 Jan 2007 16:33:09 -0500 Subject: [SciPy-user] Re-releasing Python Equations under a new license?
Message-ID: <20070118213309.GA31714@zunzun.com> Can I get any advice on which software license to distribute my code under? The sourceforge URL, which I forgot to post before, is http://sourceforge.net/projects/pythonequations I'm just interested in feedback from users; whether someone uses the code in a commercial application is not particularly important to me personally. I would like people to know where to get the source code itself if someone does redistribute, again so that the chance of feedback on the source code is increased. I'm not chained to the GPL, I'm just naive and inexperienced. Any advice would be greatly appreciated, and my thanks to Glen Mabey for his thoughts below. Re-releasing would happen this weekend if I knew which license to use other than the GPL.
John Hunter has a standardized and extremely well-written response to this question. Rather than poorly paraphrasing him, I just hit 'jdh bsd pitch' in my gmail search box, and I'll let his words tell the rest. This should (with John's permission) probably go into the scipy FAQ... Cheers, f ###### Written by John Hunter in response to this same question Would you consider licensing your code under a more permissive license? Most of the essential scientific computing tools in python (scipy, numpy, matplotlib, ipython, vtk, enthought tool suite, ....) are licensed under a BSD-ish style license, and cannot reuse GPLd code. My standard "licensing pitch" is included below:: I'll start by summarizing what many of you already know about open source licenses. I believe this discussion is broadly correct, though it is not a legal document and if you want legally precise statements you should reference the original licenses cited here. The Open-Source-Initiative is a clearing house for OS licenses, so you can read more there. The two dominant license variants in the wild are GPL-style and BSD-style. There are countless other licenses that place specific restrictions on code reuse, but the purpose of this document is to discuss the differences between the GPL and BSD variants, specifically in regards to my experience developing matplotlib and in my discussions with other developers about licensing issues. The best known and perhaps most widely used license is the GPL, which in addition to granting you full rights to the source code including redistribution, carries with it an extra obligation. If you use GPL code in your own code, or link with it, your product must be released under a GPL compatible license. I.e., you are required to give the source code to other people and give them the right to redistribute it as well. Many of the most famous and widely used open source projects are released under the GPL, including linux, gcc and emacs. The second major class are the BSD-style licenses (which includes MIT and the python PSF license). These basically allow you to do whatever you want with the code: ignore it, include it in your own open source project, include it in your proprietary product, sell it, whatever. python itself is released under a BSD compatible license, in the sense that, quoting from the PSF license page There is no GPL-like "copyleft" restriction. Distributing binary-only versions of Python, modified or not, is allowed. There is no requirement to release any of your source code. You can also write extension modules for Python and provide them only in binary form. Famous projects released under a BSD-style license in the permissive sense of the last paragraph are the BSD operating system, python and TeX. I believe the choice of license is an important one, and I advocate a BSD-style license. In my experience, the most important commodity an open source project needs to succeed is users. Of course, doing something useful is a prerequisite to getting users, but I also believe users are something of a prerequisite to doing something useful. It is very difficult to design in a vacuum, and users drive good software by suggesting features and finding bugs. If you satisfy the needs of some users, you will inadvertently end up satisfying the needs of a large class of users. And users become developers, especially if they have some skills and find a feature they need implemented, or if they have a thesis to write. 
Once you have a lot of users and a number of developers, a network effect kicks in, exponentially increasing your users and developers. In open source parlance, this is sometimes called competing for mind share. So I believe the number one (or at least number two) commodity an open source project can possess is mind share, which means you want as many damned users using your software as you can get. Even though you are giving it away for free, you have to market your software, promote it, and support it as if you were getting paid for it. Now, how does this relate to licensing, you are asking? Many software companies will not use GPL code in their own software, even those that are highly committed to open source development, such as enthought, out of legitimate concern that use of the GPL will "infect" their code base by its viral nature. In effect, they want to retain the right to release some proprietary code. And in my experience, companies make for some of the best developers, because they have the resources to get a job done, even a boring one, if they need it in their code. Two of the matplotlib backends (FLTK and WX) were contributed by private sector companies who are using matplotlib either internally or in a commercial product -- I doubt these companies would have been using matplotlib if the code were GPL. In my experience, the benefits of collaborating with the private sector are real, whereas the fear that some private company will "steal" your product and sell it in a proprietary application, leaving you with nothing, is not. There is a lot of GPL code in the world, and it is a constant reality in the development of matplotlib that when we want to reuse some algorithm, we have to go on a hunt for a non-GPL version. Most recently this occurred in a search for a good contouring algorithm. I worry that the "license wars", the effects of which are starting to be felt on many projects, have the potential to do real harm to open source software development. There are two unpalatable options. 1) Go with GPL and lose the mind-share of the private sector. 2) Forgo GPL code and retain the contribution of the private sector. This is a very tough decision because there is a lot of very high quality software that is GPL and we need to use it; they don't call the license viral for nothing. The third option, which is what is motivating me to write this, is to convince people who have released code under the GPL to re-release it under a BSD compatible license. Package authors retain the copyright to their software and have discretion to re-release it under a license of their choosing. Many people choose the GPL when releasing a package because it is the most famous open source license, and did not consider issues such as those raised here when choosing a license. When asked, these developers will often be amenable to re-releasing their code under a more permissive license. Fernando Perez did this with ipython, which was released under the LGPL and then re-released under a BSD license to ease integration with matplotlib, scipy and enthought code. The LGPL is more permissive than the GPL, allowing you to link with it non-virally, but many companies are still loath to use it out of legal concerns, and you cannot reuse LGPL code in a proprietary product. So I encourage you to release your code under a BSD compatible license, and when you encounter an open source developer whose code you want to use, encourage them to do the same. Feel free to forward this document to them.
Comments, suggestions for improvements, corrections, etc., should be sent to jdhunter at ace.bsd.uchicago.edu From val at vtek.com Thu Jan 18 19:02:13 2007 From: val at vtek.com (val) Date: Thu, 18 Jan 2007 19:02:13 -0500 Subject: [SciPy-user] SciPy Data Analysis Workbench References: <17838.23057.636366.263684@monster.iitb.ac.in> <20070118070822.94776.qmail@web27405.mail.ukl.yahoo.com> <20070118083651.GA14390@clipper.ens.fr> Message-ID: <035101c73b5d$13185eb0$6400a8c0@D380> Hi Gael: I've read your tutorial and was quite impressed; for me, it is a new vision of traits (and traits.ui) that I didn't have before. Thanks for a great contribution. My first impression was that traits could help with the structured description and manipulation of physical properties of real-world objects, a very important (and missing in Python?) component of physics-related programming. How would you see/elaborate on that side of traits? Thanks, Val ----- Original Message ----- From: "Gael Varoquaux" To: "SciPy Users List" Sent: Thursday, January 18, 2007 3:36 AM Subject: Re: [SciPy-user] SciPy Data Analysis Workbench Hello Robert, Concerning TraitsUI, I think it is a _great_ productivity gain. [...]
> If you want to participate to the workbench development you are warmly > welcome. > Thanx. > Robert

> Prabhu Ramachandran a écrit : > >>>>> "Robert" == Robert VERGNES writes: > Robert> Hello, I started to write my own Data Analysis Workbench > Robert> based on Scipy and wxpython. (I haven't seen one around?) > Robert> Is there anybody interested in it and/or to help writing > Robert> it ? > I think it would be a good idea to use the Enthought Tool Suite (ETS) > (http://code.enthought.com/ets/) as the underlying toolset for this > package. > Gael Varoquaux has written a very nice article highlighting why traits > UI (which is a part of ETS) is cool here: > http://gael-varoquaux.info/computers/traits_tutorial/ > cheers, > prabhu > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user

-- Gael Varoquaux, Groupe d'optique atomique, Laboratoire Charles Fabry de l'Institut d'Optique Campus Polytechnique, RD 128 91127 Palaiseau cedex FRANCE Tel : 33 (0) 1 69 35 88 30 - Fax : 33 (0) 1 69 35 87 00 _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user

From zunzun at zunzun.com Thu Jan 18 20:42:22 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Thu, 18 Jan 2007 20:42:22 -0500 Subject: [SciPy-user] Re-releasing Python Equations under a new license? In-Reply-To: References: <20070118213309.GA31714@zunzun.com> Message-ID: <20070119014222.GA4710@zunzun.com> On Thu, Jan 18, 2007 at 02:47:53PM -0700, Fernando Perez wrote: > > This should (with John's permission) probably go into the scipy FAQ... AARRGHH! - why didn't I know this? PLEASE add it to the FAQ. > because it is the most famous open source license, and did not > consider issues such as those raised here when choosing a > license. Guilty as charged. > When asked, these developers will often be amenable to > re-releasing their code under a more permissive license. Re-release scheduled for this weekend. I very much appreciate the help and advice; we "license newbies" sure need it. I'll post again after changing the license. James

From bblais at bryant.edu Thu Jan 18 20:45:45 2007 From: bblais at bryant.edu (Brian Blais) Date: Thu, 18 Jan 2007 20:45:45 -0500 Subject: [SciPy-user] scilab versus scipy (was Re: SciPy Data Analysis Workbench) In-Reply-To: <45AF83D5.8080906@ru.nl> References: <20070118083651.GA14390@clipper.ens.fr> <20070118092942.25922.qmail@web27411.mail.ukl.yahoo.com> <20070118132330.GA20853@clipper.ens.fr> <45AF7668.6020709@ar.media.kyoto-u.ac.jp> <45AF83D5.8080906@ru.nl> Message-ID: <45B022C9.6080608@bryant.edu> Stef Mientki wrote: > Very useful information. > Although I've more or less chosen SciPy as the main tool, > I'm still hesitating between SciPy and SciLab, > but now I think I'll definitely go for SciPy. I would not hesitate... go with scipy. :) > The SciCos part of SciLab is not only MatLab's-Simulink (transfer > functions), > but also includes MatLab's-PowerSim (2-poles).
I guess if you need this part of it, then I can't complain, but I've used scilab extensively in the recent past (not the scicos stuff, though), and I can add to the list of reasons to lean toward Python/Scipy. > The following issues decided me to choose SciPy: > - embedding of SciLab is very difficult (impossible ?) I would add that *extending* scilab is a pain. Writing extensions in python, especially using Pyrex, is nearly trivial, and a must for any scientific programming. For scilab, to write one extension function, you need at least *3* different files, with different syntax. Yikes! > - the usenet group is much less active than SciPy, Python usergroups Yes, and I agree with a previous post that they do not respond well to suggestions. I got blown off a couple of times, and not answered a couple of times. The python groups, in contrast, are always very friendly even when I post stupid things. :) > - Python is usable as a general program language Very true. There is a real arbitrariness in the scilab syntax and function names, which seems to come from the top and is inflexible. For 2d plots, matplotlib blows it out of the water. wxpython, or any of the other gui toolkits, are much better than the inconsistent options under scilab. I left scilab for octave, in Linux, for many of these same reasons. The windows port of octave, as of a year or so ago, was *terrible*: performance was awful, to the point of making a perfectly running project in Linux completely unusable in Windows. This was one of the factors that started me looking elsewhere; I happened upon Python a year ago and never looked back! I go with the Enthought edition of Python, to get all the necessary packages in one download. My 2 cents. bb -- ----------------- bblais at bryant.edu http://web.bryant.edu/~bblais

From srinath at txcorp.com Thu Jan 18 21:13:13 2007 From: srinath at txcorp.com (Srinath Vadlamani) Date: Thu, 18 Jan 2007 19:13:13 -0700 Subject: [SciPy-user] OS X test issues Message-ID: <45B02939.7040700@txcorp.com> I have made my n-th attempt at installing a proper scipy version on my MacBook Pro. These are my steps. 1) svn co numpy and scipy 2) built numpy configured with gfortran 3) built fftw 4) built scipy configured with gfortran All tests for numpy pass. For scipy it's a different story: ============================ Python 2.4.4 (#1, Oct 18 2006, 10:34:39) [GCC 4.0.1 (Apple Computer, Inc. build 5341)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> import scipy >>> scipy.test(1) Found 4 tests for scipy.io.array_import Found 128 tests for scipy.linalg.fblas Found 397 tests for scipy.ndimage Found 53 tests for scipy.linalg.decomp Found 20 tests for scipy.fftpack.pseudo_diffs Found 6 tests for scipy.optimize.optimize Found 12 tests for scipy.io.mmio Found 4 tests for scipy.linalg.lapack Found 18 tests for scipy.fftpack.basic Found 4 tests for scipy.io.recaster Found 4 tests for scipy.optimize.zeros Warning: FAILURE importing tests for /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/sparse/sparse.py:15: ImportError: Failure linking new module: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/sparse/sparsetools.so: Library not loaded: /usr/local/lib/libgfortran.1.dylib Referenced from: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/sparse/sparsetools.so Reason: image not found (in ?)
**************************************************************** WARNING: clapack module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses flapack instead of clapack. **************************************************************** Found 42 tests for scipy.lib.lapack Found 128 tests for scipy.lib.blas.fblas Found 7 tests for scipy.linalg.matfuncs Found 41 tests for scipy.linalg.basic Found 1 tests for scipy.optimize.cobyla Found 16 tests for scipy.lib.blas Found 14 tests for scipy.linalg.blas Found 4 tests for scipy.fftpack.helper Found 0 tests for __main__ Don't worry about a warning regarding the number of bytes read. Warning: 1000000 bytes requested, 20 bytes read. ...E...caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 ............................................................................................................... ............................................................................................................... ............................................................................................................... ............................................................................................................... .............................................E.. **************************************************************** WARNING: clapack module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses flapack instead of clapack. **************************************************************** .........................................................................caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 ...Result may be inaccurate, approximate err = 2.66420674161e-08 ...Result may be inaccurate, approximate err = 7.27595761418e-12 .....................................................F....... **************************************************************** WARNING: cblas module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses fblas instead of cblas. **************************************************************** .......F.......... 
====================================================================== ERROR: check_integer (scipy.io.tests.test_array_import.test_read_array) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/io/tests/test_array_import.py", line 55, in check_integer from scipy import stats File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/stats/__init__.py", line 7, in ? from stats import * File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/stats/stats.py", line 190, in ? import scipy.special as special File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/special/__init__.py", line 8, in ? from basic import * File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/special/basic.py", line 8, in ? from _cephes import * ImportError: Failure linking new module: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/special/_cephes.so: Symbol not found: _e_wsle Referenced from: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/special/_cephes.so Expected in: dynamic lookup ====================================================================== ERROR: check_simple_todense (scipy.io.tests.test_mmio.test_mmio_coordinate) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/io/tests/test_mmio.py", line 151, in check_simple_todense b = mmread(fn).todense() AttributeError: 'numpy.ndarray' object has no attribute 'todense' ====================================================================== FAIL: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/lib/blas/tests/test_blas.py", line 76, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/testing/utils.py", line 156, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: ACTUAL: 9.8394918217557107e-39j DESIRED: (-9+2j) ====================================================================== FAIL: check_dot (scipy.linalg.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/linalg/tests/test_blas.py", line 75, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/testing/utils.py", line 156, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: ACTUAL: 9.7746285184390436e-39j DESIRED: (-9+2j) ---------------------------------------------------------------------- Ran 903 tests in 1.773s FAILED (failures=2, errors=2) >>> ===================================== Any suggestions will help.
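A quick way to tell link-time breakage (like the missing libgfortran and the unresolved _e_wsle symbol above) apart from genuine numerical failures is to import the compiled submodules directly and see which ones fail. A small sketch of that check; the module list is illustrative, taken from the errors above, not exhaustive:

    mods = ["scipy.sparse.sparsetools", "scipy.special._cephes",
            "scipy.linalg.flapack", "scipy.lib.blas.fblas"]
    for name in mods:
        try:
            __import__(name)
            print name, "imports OK"
        except ImportError, e:
            print name, "FAILED:", e

Any module that fails here points at a build/link problem rather than an accuracy bug.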
thanks, SV -- ========================================================== Srinath Vadlamani srinath at txcorp.com Tech-X Corp. o:303-996-2034 5621 Arapahoe Ave. Suite A f:303-448-7756 Boulder, CO 80303 ==========================================================

From lists.steve at arachnedesign.net Thu Jan 18 22:00:20 2007 From: lists.steve at arachnedesign.net (Steve Lianoglou) Date: Thu, 18 Jan 2007 22:00:20 -0500 Subject: [SciPy-user] Re-releasing Python Equations under a new license? In-Reply-To: <20070119014222.GA4710@zunzun.com> References: <20070118213309.GA31714@zunzun.com> <20070119014222.GA4710@zunzun.com> Message-ID: <2288E66F-2D0C-43DC-A8E4-72D8A9F800AA@arachnedesign.net> Hi James, > Re-release scheduled for this weekend. I very much appreciate > the help and advice, we "license newbies" sure need it. > > I'll post again after changing the license. What about re-releasing it under the name of "pyquations"? You know .. just kidding, but I think it has a nice ring to it :-) -steve

From v-nijs at kellogg.northwestern.edu Thu Jan 18 22:19:58 2007 From: v-nijs at kellogg.northwestern.edu (Vincent Nijs) Date: Thu, 18 Jan 2007 21:19:58 -0600 Subject: [SciPy-user] OS X test issues In-Reply-To: <45B02939.7040700@txcorp.com> Message-ID: I used steps 1, 2, and 4 on a G4 laptop (OS X 10.4.8). Installed fftw using MacPorts. I get the same check_dot errors as Srinath but not the scipy.io ones. I do also get the following check_normal and check loadmat errors: ====================================================================== FAIL: check_normal (scipy.stats.tests.test_morestats.test_anderson) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/stats/tests/test_morestats.py", line 45, in check_normal assert_array_less(crit[:-1], A) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/testing/utils.py", line 235, in assert_array_less header='Arrays are not less-ordered') File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not less-ordered (mismatch 100.0%) x: array([ 0.538, 0.613, 0.736, 0.858]) y: array(nan) ====================================================================== FAIL: check loadmat case sparse ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 85, in cc self._check_case(name, files, expected) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 80, in _check_case self._check_level(k_label, expected, matdict[k]) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 63, in _check_level decimal = 5) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal test sparse; file
/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/io/tests/./data/testsparse_6.5.1_GLNX86.mat, variable testsparse (mismatch 46.6666666667%) x: array([[ 3.03865194e-319, 3.16202013e-322, 1.04346664e-320, 2.05531309e-320, 2.56123631e-320], [ 3.16202013e-322, 0.00000000e+000, 0.00000000e+000,... y: array([[ 1., 2., 3., 4., 5.], [ 2., 0., 0., 0., 0.], [ 3., 0., 0., 0., 0.]]) Ran 1604 tests in 27.705s FAILED (failures=4)

On 1/18/07 8:13 PM, "Srinath Vadlamani" wrote: > [...]
-- Vincent R.
Nijs Assistant Professor of Marketing Kellogg School of Management, Northwestern University 2001 Sheridan Road, Evanston, IL 60208-2001 Phone: +1-847-491-4574 Fax: +1-847-491-2498 E-mail: v-nijs at kellogg.northwestern.edu Skype: vincentnijs

From akumar at iitm.ac.in Fri Jan 19 02:35:08 2007 From: akumar at iitm.ac.in (Kumar Appaiah) Date: Fri, 19 Jan 2007 13:05:08 +0530 Subject: [SciPy-user] r_ not defined while using signal.freqs Message-ID: <20070119073508.GA5624@localhost> Dear SciPy users, While trying to use signal.freqs, I get the following error: In [1]: from scipy import * In [2]: [w, h] = signal.freqs([1, 2, 5626], [1, 2, 2]) --------------------------------------------------------------------------- exceptions.NameError Traceback (most recent call last) /home/kumar/python/filter_test/ /usr/lib/python2.4/site-packages/scipy/signal/filter_design.py in freqs(b, a, worN, plot) 57 """ 58 if worN is None: ---> 59 w = findfreqs(b,a,200) 60 elif isinstance(worN, types.IntType): 61 N = worN /usr/lib/python2.4/site-packages/scipy/signal/filter_design.py in findfreqs(num, den, N) 23 ep = atleast_1d(-1000)+0j 24 ---> 25 ez = r_['-1',numpy.compress(ep.imag >=0, ep,axis=-1), numpy.compress((abs(tz) < 1e5) & (tz.imag >=0),tz,axis=-1)] 26 27 integ = abs(ez) < 1e-10 NameError: global name 'r_' is not defined However, r_ seems to work when I try it from the ipython console. I am using the packaged scipy on Debian GNU/Linux (i386), if that helps. Thanks! Kumar -- Kumar Appaiah, 462, Jamuna Hostel, Indian Institute of Technology Madras, Chennai - 600 036

From robert.kern at gmail.com Fri Jan 19 02:34:30 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 19 Jan 2007 01:34:30 -0600 Subject: [SciPy-user] r_ not defined while using signal.freqs In-Reply-To: <20070119073508.GA5624@localhost> References: <20070119073508.GA5624@localhost> Message-ID: <45B07486.1020709@gmail.com> Kumar Appaiah wrote: > [...] > NameError: global name 'r_' is not defined Fixed. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From akumar at iitm.ac.in Fri Jan 19 02:46:10 2007 From: akumar at iitm.ac.in (Kumar Appaiah) Date: Fri, 19 Jan 2007 13:16:10 +0530 Subject: [SciPy-user] r_ not defined while using signal.freqs In-Reply-To: <20070119073508.GA5624@localhost> References: <20070119073508.GA5624@localhost> Message-ID: <20070119074610.GA6223@localhost> On Fri, Jan 19, 2007 at 01:05:08PM +0530, Kumar Appaiah wrote: > However, r_ seems to work when I try it from the ipython console.
> > I am using the packaged scipy on Debian GNU/Linux (i386), if that > helps. Sorry, forgot to add the version: python-scipy - 0.5.2-0.1 (latest). Thanks. Kumar -- Kumar Appaiah, 462, Jamuna Hostel, Indian Institute of Technology Madras, Chennai - 600 036

From nwagner at iam.uni-stuttgart.de Fri Jan 19 04:23:24 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 19 Jan 2007 10:23:24 +0100 Subject: [SciPy-user] triu, tril for sparse matrices ? Message-ID: <45B08E0C.8050709@iam.uni-stuttgart.de> Hi all, Is there a way to use triu and tril for sparse matrices? Nils

From nwagner at iam.uni-stuttgart.de Fri Jan 19 04:29:20 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 19 Jan 2007 10:29:20 +0100 Subject: [SciPy-user] Convergence behaviour of iterative solvers Message-ID: <45B08F70.80208@iam.uni-stuttgart.de> Hi all, Is there a way to monitor the convergence behaviour of iterative solvers (cg, gmres, ...) in scipy? I mean a plot of the current residual r_i = linalg.norm(b - A*x_i) versus the iteration i. Nils

From Giovanni.Samaey at cs.kuleuven.be Fri Jan 19 07:50:05 2007 From: Giovanni.Samaey at cs.kuleuven.be (Giovanni Samaey) Date: Fri, 19 Jan 2007 13:50:05 +0100 Subject: [SciPy-user] Convergence behaviour of iterative solvers In-Reply-To: <45B08F70.80208@iam.uni-stuttgart.de> References: <45B08F70.80208@iam.uni-stuttgart.de> Message-ID: <45B0BE7D.1070609@cs.kuleuven.ac.be> I needed this as well, and I altered the GMRES function in scipy's linalg file as follows: I added an input parameter prnt and added the following lines at line 618:

if prnt:
    print "#GMRES: ", nbiter, resid

This location is inside a loop and only runs when ijob=2, so you get exactly one print per iteration. Best Giovanni

Nils Wagner wrote: > Hi all, > > Is there a way to monitor the convergence behaviour of iterative solvers > (cg, gmres, ...) in scipy ? > I mean a plot of the current residual r_i=linalg.norm(b-A x_i) versus > the iteration i. > > Nils > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm

From matthew.brett at gmail.com Fri Jan 19 08:57:45 2007 From: matthew.brett at gmail.com (Matthew Brett) Date: Fri, 19 Jan 2007 13:57:45 +0000 Subject: [SciPy-user] Convergence behaviour of iterative solvers In-Reply-To: <45B0BE7D.1070609@cs.kuleuven.ac.be> References: <45B08F70.80208@iam.uni-stuttgart.de> <45B0BE7D.1070609@cs.kuleuven.ac.be> Message-ID: <1e2af89e0701190557n597de98bv9f685b2a7e97ad67@mail.gmail.com> On 1/19/07, Giovanni Samaey wrote: > I needed this as well and I altered the GMRES function in scipy's linalg > file as follows: > > I added an input parameter prnt and > added at line 618 the following lines > > if prnt: > print "#GMRES: ", nbiter,resid Or maybe an observer function of some sort for more flexibility:

def show_convergence(nbiter, resid):
    print "#GMRES: ", nbiter, resid

def solver(...., observer_func=None):
    ...
    if observer_func:
        observer_func(nbiter, resid)

sort of thing. Matthew

From ryanlists at gmail.com Fri Jan 19 10:21:30 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Fri, 19 Jan 2007 09:21:30 -0600 Subject: [SciPy-user] c2d Message-ID: Is there a scipy equivalent to the matlab function c2d for converting a continuous LTI system to digital? I poked around in scipy.signal and didn't see anything. I don't mind writing one, but don't want to reinvent the wheel.
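For reference, a zero-order-hold discretization is only a few lines on top of scipy.linalg.expm. A hedged sketch (the function name c2d_zoh is invented here, and it covers only the state-space A and B matrices, not the full menu of methods matlab's c2d offers):

    import numpy as np
    from scipy.linalg import expm

    def c2d_zoh(A, B, dt):
        """Discretize x' = A x + B u, assuming u is held constant over dt.

        Uses the standard augmented-matrix identity
        expm([[A, B], [0, 0]] * dt) = [[Ad, Bd], [0, I]].
        """
        A = np.asarray(A, dtype=float)
        B = np.asarray(B, dtype=float)
        n, m = B.shape
        M = np.zeros((n + m, n + m))
        M[:n, :n] = A       # upper-left block: the system matrix
        M[:n, n:] = B       # upper-right block: the input matrix
        Md = expm(M * dt)   # one matrix exponential gives both results
        return Md[:n, :n], Md[:n, n:]   # Ad, Bd

As a sanity check, c2d_zoh([[0., 1.], [0., 0.]], [[0.], [1.]], 0.1) returns the familiar double-integrator result Ad = [[1, 0.1], [0, 1]], Bd = [[0.005], [0.1]].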
Ryan

From nwagner at iam.uni-stuttgart.de Fri Jan 19 10:25:28 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 19 Jan 2007 16:25:28 +0100 Subject: [SciPy-user] Convergence behaviour of iterative solvers In-Reply-To: <1e2af89e0701190557n597de98bv9f685b2a7e97ad67@mail.gmail.com> References: <45B08F70.80208@iam.uni-stuttgart.de> <45B0BE7D.1070609@cs.kuleuven.ac.be> <1e2af89e0701190557n597de98bv9f685b2a7e97ad67@mail.gmail.com> Message-ID: <45B0E2E8.7070000@iam.uni-stuttgart.de> Matthew Brett wrote: > [...] Whatever will be implemented in the future, it would be a nice enhancement. +1 for a monitoring function. I can file a ticket if needed. Nils

From robert.vergnes at yahoo.fr Fri Jan 19 10:27:28 2007 From: robert.vergnes at yahoo.fr (Robert VERGNES) Date: Fri, 19 Jan 2007 16:27:28 +0100 (CET) Subject: [SciPy-user] QME-Dev Workbench (wxSciPy) Alpha release In-Reply-To: Message-ID: <20070119152728.41681.qmail@web27408.mail.ukl.yahoo.com> Hello, I hope to have an Alpha release this week-end. A site has been opened (but it is under construction) at: http://sourceforge.net/projects/qme-dev/ Those interested in the workbench will be able to test it -- and participate (everybody disappeared? ;-)). I am struggling a bit with the CVS/ftp access at sourceforge; it might be easier for me to send the alpha version by email to the people interested, in case I do not find an easy solution for uploading to sourceforge. (Can anybody help? winscp is not connecting.) As for what is needed to run this wxSciPy workbench: Python 2.5 or higher, SciPy (and numpy), PyLab (matplotlib) and wxPython 2.8, plus any other package you want to use in the shell(s). Load/save will be slow for now, since for debugging I save/load data in ascii format, so large matrices (say > 1000x100) take some time. But we will switch to binary to speed up the load/save process soon. Best Regards Robert

Matthew Vernon wrote: On 18 Jan 2007, at 13:56, John Hassler wrote: > To be still more fair, Scilab should be compared to Matlab, not > Python. Scilab (free) doesn't have the polish of Matlab > (expensive), but is as usable as Matlab for a very wide range of > problems. In fact, I like the structure of Scilab much better than > that of Matlab and its abominable "function equals m-file" > requirement. If you're looking for matlab replacements, I'd recommend GNU Octave. It is mostly compatible with matlab syntax, too.
http://www.gnu.org/software/octave/ Cheers, Matthew -- Matthew Vernon MA VetMB LGSM MRCVS Farm Animal Epidemiology and Informatics Unit Department of Veterinary Medicine, University of Cambridge http://www.cus.cam.ac.uk/~mcv21/ _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user

From edschofield at gmail.com Fri Jan 19 11:38:36 2007 From: edschofield at gmail.com (Ed Schofield) Date: Fri, 19 Jan 2007 17:38:36 +0100 Subject: [SciPy-user] Convergence behaviour of iterative solvers In-Reply-To: <45B0E2E8.7070000@iam.uni-stuttgart.de> References: <45B08F70.80208@iam.uni-stuttgart.de> <45B0BE7D.1070609@cs.kuleuven.ac.be> <1e2af89e0701190557n597de98bv9f685b2a7e97ad67@mail.gmail.com> <45B0E2E8.7070000@iam.uni-stuttgart.de> Message-ID: <1b5a37350701190838p772932f6h29a23962ce016230@mail.gmail.com> On 1/19/07, Nils Wagner wrote: > [...] > Whatever will be implemented in the future it would be a nice enhancement. > > +1 for a monitoring function. > > I can file a ticket if needed. A few months ago I added a callback argument to various nonlinear solvers in optimize.py. We might as well aim for the same syntax for the linear solvers. It should be easy to do... -- Ed

From ggellner at uoguelph.ca Fri Jan 19 11:54:46 2007 From: ggellner at uoguelph.ca (Gabriel Gellner) Date: Fri, 19 Jan 2007 11:54:46 -0500 Subject: [SciPy-user] Re-releasing Python Equations under a new license? In-Reply-To: References: <20070118213309.GA31714@zunzun.com> Message-ID: <20070119165446.GA32131@encolpuis> At risk of sounding like a troll . . . This is an issue I have noticed that the python science world has VERY strong opinions about . . . and I was hoping that maybe people would answer my final questions before becoming a BSD'er ;-) In John Hunter's very good pitch, I find his main points being: 1) GPL generally can not be comfortably/legally used by companies, so you cut off these developers needlessly. 2) The safety that the GPL offers is a straw man, as in general companies do not steal from the community. 3) GPL is generally used by developers without thinking about the above, and they use it because it is popular. Re 1): I have never understood why this is the case.
GTK+, gcc stack, and many other libraries are LGPL and are used by companies for their products (like acrobat for linux, and OS X as far as I know (gcc stack)), and I have never heard of their code being 'infected'. If so, could it be that the problem is not point 3, but rather that companies do not understand that the LGPL neither takes their code nor stops them from making proprietary products with it? (You would only have to release bug fixes to the library; anything else would just link to the library and could stay closed.) I would love it if anyone can set me straight. I promise this is the last thing I will say on the issue, as I have seen the response on these lists when someone advocates the GPL before . . . I am neutral, but confused. Gabriel

On Thu, Jan 18, 2007 at 02:47:53PM -0700, Fernando Perez wrote: > On 1/18/07, zunzun at zunzun.com wrote: > > Can I get any advice on which software license to distribute > > my code? The sourceforge URL, which I forgot to post before, is > > > > http://sourceforge.net/projects/pythonequations > > > > I'm just interested in feedback from users, whether someone uses > > the code in a commercial application is not particularly important > > to me personally. I would like people to know where to get the > > source code itself if someone does redistribute, again so that > > the chance of feedback on the source code is increased. > > > > I'm not chained to the GPL, I'm just naive and inexperienced. > > Any advice would be greatly appreciated, and my thanks to > > Glen Mabey for his thoughts below. Re-releasing would happen > > this weekend if I knew which license to use other than the GPL. > > John Hunter has a standardized and extremely well-written response to > this question. Rather than poorly paraphrasing him, I just hit 'jdh > bsd pitch' in my gmail search box, and I'll let his words tell the > rest. > > This should (with John's permission) probably go into the scipy FAQ... > > Cheers, > > f > > ###### Written by John Hunter in response to this same question > > [...]
From jdhunter at ace.bsd.uchicago.edu Fri Jan 19 11:55:39 2007 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Fri, 19 Jan 2007 10:55:39 -0600 Subject: [SciPy-user] Re-releasing Python Equations under a new license? In-Reply-To: ("Fernando Perez"'s message of "Thu, 18 Jan 2007 14:47:53 -0700") References: <20070118213309.GA31714@zunzun.com> Message-ID: <87hcunklbo.fsf@peds-pc311.bsd.uchicago.edu> >>>>> "Fernando" == Fernando Perez writes: Fernando> This should (with John's permission) probably go into Fernando> the scipy FAQ... I'm happy to see it there, at the risk that one day I may get an angry look from Stallman.... JDH

From jdhunter at ace.bsd.uchicago.edu Fri Jan 19 12:10:12 2007 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Fri, 19 Jan 2007 11:10:12 -0600 Subject: [SciPy-user] Re-releasing Python Equations under a new license? In-Reply-To: <20070119165446.GA32131@encolpuis> (Gabriel Gellner's message of "Fri, 19 Jan 2007 11:54:46 -0500") References: <20070118213309.GA31714@zunzun.com> <20070119165446.GA32131@encolpuis> Message-ID: <87y7nzj62z.fsf@peds-pc311.bsd.uchicago.edu> >>>>> "Gabriel" == Gabriel Gellner writes: Gabriel> Re 1): I have never understood why this is the Gabriel> case.
GTK+, gcc stack, and many other libraries are LGPL Gabriel> and are used by companies for their products (like Gabriel> acrobat for linux, and OS X as far as I know (gcc Gabriel> stack)), and I have never heard of their code being Gabriel> 'infected'. Gabriel> If so then could it be that the problem is not point 3, Gabriel> but rather that companies do not understand that the LGPL Gabriel> does not take their code, and stop them from making Gabriel> proprietary products with it (you would only have to

I think this is broadly correct -- the LGPL should be safe for these companies and should not threaten their code. Part of what is going on here is that companies that distribute software don't want to risk it. The FSF is fairly fanatical, and as we are seeing with GPL3, may change the rules as time goes on. For small companies who want to distribute source code with their software but cannot afford lawyers and legal fights, I think the preference is to not get involved with the GPL, L or otherwise. As Eric Jones has said about licenses, the fewer words the better. His point is, I think, that the larger and more complex the license, the more likely it is that lawyers can disagree over it.

And LGPL can hinder code reuse. If someone releases a large LGPL package that defines some small but nifty scientific algorithm, we may prefer to cut that algorithm out and insert it into our own code base w/o making the entire package a dependency at link time. This makes it much easier to include and reuse just the bits you want. Of course, the package authors may not want this and may want us to depend on their package, which is understandable and is their choice, but it certainly restricts our choice in building a set of tools that we can distribute with minimal size and dependencies as we see fit. JDH

From nwagner at iam.uni-stuttgart.de Fri Jan 19 12:12:32 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 19 Jan 2007 18:12:32 +0100 Subject: [SciPy-user] Convergence behaviour of iterative solvers In-Reply-To: <1b5a37350701190838p772932f6h29a23962ce016230@mail.gmail.com> References: <45B08F70.80208@iam.uni-stuttgart.de> <45B0BE7D.1070609@cs.kuleuven.ac.be> <1e2af89e0701190557n597de98bv9f685b2a7e97ad67@mail.gmail.com> <45B0E2E8.7070000@iam.uni-stuttgart.de> <1b5a37350701190838p772932f6h29a23962ce016230@mail.gmail.com> Message-ID: On Fri, 19 Jan 2007 17:38:36 +0100 "Ed Schofield" wrote: > [...] > A few months ago I added a callback argument to various nonlinear solvers in optimize.py.
> We might as well aim for the same syntax for the linear solvers. It > should be easy to do... > > -- Ed

So how can I use this functionality within the following script?

from scipy import *

def func(x):
    return 0.5*dot(x,dot(A,x)) - dot(x,b)

def callback(x):
    return linalg.norm(dot(A,x) - b)

n = 10
x0 = zeros(n,float)
A = random.rand(n,n) + diag(4*ones(n))
A = 0.5*(A+A.T)
b = random.rand(n)
x = optimize.fmin_cg(func,x0)

Nils
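One way to hook it up, sketched against the callback keyword Ed mentions above (fmin_cg calls the callback once per iteration with the current iterate xk; the name report and the print format are just for illustration):

    import numpy as np
    from scipy import optimize

    n = 10
    A = np.random.rand(n, n) + np.diag(4 * np.ones(n))
    A = 0.5 * (A + A.T)
    b = np.random.rand(n)
    x0 = np.zeros(n)

    def func(x):
        return 0.5 * np.dot(x, np.dot(A, x)) - np.dot(x, b)

    def report(xk):
        # invoked by fmin_cg after each iteration with the current iterate xk
        print "residual: %g" % np.linalg.norm(np.dot(A, xk) - b)

    x = optimize.fmin_cg(func, x0, callback=report)

Note that the callback should print (or store) the residual rather than return it; fmin_cg ignores the callback's return value.

From ggellner at uoguelph.ca Fri Jan 19 12:20:48 2007 From: ggellner at uoguelph.ca (Gabriel Gellner) Date: Fri, 19 Jan 2007 12:20:48 -0500 Subject: Re: [SciPy-user] Re-releasing Python Equations under a new license? In-Reply-To: <87y7nzj62z.fsf@peds-pc311.bsd.uchicago.edu> References: <20070118213309.GA31714@zunzun.com> <20070119165446.GA32131@encolpuis> <87y7nzj62z.fsf@peds-pc311.bsd.uchicago.edu> Message-ID: <20070119172048.GA793@encolpuis>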
There's a side-point about whether it's good to encourage non-free software to be used in science. I would always advocate doing science with free software wherever possible [that discussion really is for somewhere else. Maybe I should blog about it ;) ].

> 2) The safety that the GPL offers is a straw man, as in general companies do not steal from the community.

This isn't true. Microsoft uses BSD-licenced code, for example: http://everything2.com/index.pl?node=BSD%20Code%20in%20Windows

Much of Mac OS X is based upon BSD-licenced code. It's not uncommon for this to happen, although in fairness some BSD-licence proponents see this as a good thing.

> 3) GPL is generally used by developers without thinking about the above, and they use it because it is popular.

Now this really isn't on. Many developers (like myself) choose to use the GPL because we think that its model of free software is superior to the BSD licences'. As close to a balanced piece on the topic as I can find is: http://en.wikipedia.org/wiki/BSD_and_GPL_licensing

Regards,

Matthew

--
Matthew Vernon MA VetMB LGSM MRCVS
Farm Animal Epidemiology and Informatics Unit
Department of Veterinary Medicine, University of Cambridge
http://www.cus.cam.ac.uk/~mcv21/

From aisaac at american.edu Fri Jan 19 12:34:21 2007
From: aisaac at american.edu (Alan G Isaac)
Date: Fri, 19 Jan 2007 12:34:21 -0500
Subject: [SciPy-user] Re-releasing Python Equations under a new license?
In-Reply-To: <87y7nzj62z.fsf@peds-pc311.bsd.uchicago.edu>
References: <20070118213309.GA31714@zunzun.com> <20070119165446.GA32131@encolpuis> <87y7nzj62z.fsf@peds-pc311.bsd.uchicago.edu>
Message-ID:

On Fri, 19 Jan 2007, John Hunter apparently wrote:
> LGPL can hinder code reuse.

This is a key point: the restriction on incorporating portions of a library. The other points about legal uncertainty and changing rules of the game may or may not prove to be well founded, but of course our inability to tell is for now exactly the point.

My exposure to Python has really reshaped my views of the GPL, although I still think it can occasionally be appropriate. I am inclined to think that avoidance of the GPL has been important for the spread of Python and for quite a few of the best Python apps. Ruby tries to split the difference with a fairly complex licensing arrangement: GPL or ... or ... It will be interesting to see how that affects its spread beyond academe in the long run.

Cheers,
Alan Isaac

From edschofield at gmail.com Fri Jan 19 12:33:52 2007
From: edschofield at gmail.com (Ed Schofield)
Date: Fri, 19 Jan 2007 18:33:52 +0100
Subject: [SciPy-user] Re-releasing Python Equations under a new license?
In-Reply-To: <87hcunklbo.fsf@peds-pc311.bsd.uchicago.edu>
References: <20070118213309.GA31714@zunzun.com> <87hcunklbo.fsf@peds-pc311.bsd.uchicago.edu>
Message-ID: <1b5a37350701190933k1d904856oaa8d5ef95e72b879@mail.gmail.com>

On 1/19/07, John Hunter wrote:
> >>>>> "Fernando" == Fernando Perez writes:
>
> Fernando> This should (with John's permission) probably go into
> Fernando> the scipy FAQ...
>
> I'm happy to see it there, at the risk that one day I may get an angry look from Stallman....

Interesting article. But, if we post it on scipy.org, I'd like to make two important corrections:

> The best known and perhaps most widely used license is the GPL, which in addition to granting you full rights to the source code including redistribution, carries with it an extra obligation.
> If you use GPL code in your own code, or link with it, your product must be released under a GPL compatible license. I.e., you are required to give the source code to other people and give them the right to redistribute it as well.

You *may* use GPL code in your own code in-house without releasing it under the GPL. You must *only* release your product under the GPL if you distribute it to the public in binary form (Section 2).

> The LGPL is more permissive than the GPL, allowing you to link with it non-virally, but many companies are still loath to use it out of legal concerns, and you cannot reuse LGPL code in a proprietary product.

You *may* reuse LGPL code in a proprietary product in-house, as with the GPL. You may also distribute closed-source binaries that link with LGPL code, provided you give "prominent notice" that you're using LGPL code and offer to redistribute any modifications you've made to the LGPL code (Section 6).

-- Ed

From ggellner at uoguelph.ca Fri Jan 19 12:59:15 2007
From: ggellner at uoguelph.ca (Gabriel Gellner)
Date: Fri, 19 Jan 2007 12:59:15 -0500
Subject: [SciPy-user] Re-releasing Python Equations under a new license?
In-Reply-To:
References: <87y7nzj62z.fsf@peds-pc311.bsd.uchicago.edu>
Message-ID: <20070119175915.GA1627@encolpuis>

This is an interesting point, but couldn't you just cut out the piece of the library and make it a standalone library? Really, this seems to make LGPL code tricky for reusing algorithms (that can't easily/efficiently be turned into a library :-), or any piece of code that is not easily turned into a callable/(subclassable) library.

I have to admit that I too have had to reconsider my ideas on the GPL since becoming a big fan of python/scipy. Though I still **think** I prefer the GPL, this discussion has been interesting. The strongest argument for me is that the legal issues can be frightening for small companies (or individuals); in this sense the BSD to me seems unambiguously superior.

Sorry if I stirred up any emotions; I simply had never read JDH's email before and I found it challenging . . .

Gabriel

From fperez.net at gmail.com Fri Jan 19 13:03:56 2007
From: fperez.net at gmail.com (Fernando Perez)
Date: Fri, 19 Jan 2007 11:03:56 -0700
Subject: [SciPy-user] Re-releasing Python Equations under a new license?
In-Reply-To: <3FE52EDF-DA72-4A2F-8AC7-5A03F0CCB945@sel.cam.ac.uk>
References: <20070118213309.GA31714@zunzun.com> <20070119165446.GA32131@encolpuis> <3FE52EDF-DA72-4A2F-8AC7-5A03F0CCB945@sel.cam.ac.uk>
Message-ID:

On 1/19/07, Matthew Vernon wrote:
> There's a side-point about whether it's good to encourage non-free software to be used in science. I would always advocate doing science with free software wherever possible [that discussion really is for somewhere else. Maybe I should blog about it ;) ].

Just to make sure we don't muddy the waters here: BSD-licensed software /is free software/, period. Someone may take the BSD sources and turn them into a proprietary product later down the road, but the original BSD code /is free/. Nobody is charging you for, nor preventing you from modifying, any of numpy, scipy, matplotlib, ipython or python itself.

Now, to summarize my /personal/ perspective on this. I think there's certainly a place for the GPL, and I'm glad it exists and is used. If I were writing for free, in an academic setting, code whose commercial potential was obvious and direct (something to put in an embedded gizmo, for example), I'd most likely GPL it. But for ipython (the one I had a say over) I went with BSD simply because I think that the risk of losing potential contributors outweighs the risk of someone 'stealing' (that's a poor term, I know) it.

I think in the end it comes down to a risk balance:

- GPL prevents anyone from closing down your code, at the risk of keeping some parties away from using and/or contributing to your project.
- BSD may encourage more such contributions, though the door remains open for someone to take that code and set up a closed shop with it [1].

This risk balance will probably lead to a different decision for each person/project. For me and ipython I decided that BSD was the right choice. Others are free to think differently. I'll try to be quiet on this issue now; it can easily spin into an endless discussion.

Cheers,
f

Footnotes:
[1] It is possible that some people may shy away from contributing to BSD projects altogether out of fear that their contribution may be 'stolen' one day, but that seems to happen less often in practice than people avoiding GPL code.

From rshepard at appl-ecosys.com Fri Jan 19 13:12:21 2007
From: rshepard at appl-ecosys.com (Rich Shepard)
Date: Fri, 19 Jan 2007 10:12:21 -0800 (PST)
Subject: [SciPy-user] Re-releasing Python Equations under a new license?
In-Reply-To: <20070119175915.GA1627@encolpuis>
References: <87y7nzj62z.fsf@peds-pc311.bsd.uchicago.edu> <20070119175915.GA1627@encolpuis>
Message-ID:

On Fri, 19 Jan 2007, Gabriel Gellner wrote:
> for small companies (or individuals), in this sense the BSD to me seems unambiguously superior.

Gabriel, et al.:

Here's something for you to consider:

As the Groklaw post introduces the subject,

"BSD - The Dark Horse of Open Source, by Brendan Scott, OS Law
Sunday, January 14 2007 @ 02:34 PM EST

"Brendan Scott has been studying the BSD license, particularly in the context of Australian law, and he has come up with some startling questions. Is the BSD license as permissive as we've thought? The paper is principally for lawyers to consider, but it's certainly of interest to everyone, and note his disclaimer:

"'Nothing in this paper is legal advice or a statement of the law. This paper is an exposition of an (untested) argument as to the effect of the BSD license.'
"Scott will be speaking at the Gaming Miniconf at LCA2007 in Australia on Tuesday at 11:15 if anyone wants to discuss the paper afterwards. The paper is also available in PDF format." Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc. | Accelerator(TM) Voice: 503-667-4517 Fax: 503-667-8863 From robert.kern at gmail.com Fri Jan 19 14:00:03 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 19 Jan 2007 13:00:03 -0600 Subject: [SciPy-user] Re-releasing Python Equations under a new license? In-Reply-To: <3FE52EDF-DA72-4A2F-8AC7-5A03F0CCB945@sel.cam.ac.uk> References: <20070118213309.GA31714@zunzun.com> <20070119165446.GA32131@encolpuis> <3FE52EDF-DA72-4A2F-8AC7-5A03F0CCB945@sel.cam.ac.uk> Message-ID: <45B11533.5010705@gmail.com> Matthew Vernon wrote: > Hi, > >> This is an issue I have noticed that the python science world has VERY >> strong opinions about . . . and I was hoping that maybe people would >> answer my final questions before becoming a BSD'er ;-) > > I'm a GPL person myself. > >> In John Hunters very good pitch, I find his main points being: >> 1) GPL generally can not be comfortably/legally used by companies, so >> you cut off these developers needlessly. > > That's not entirely true; for example, companies could agree to pay > you to let them release your code under a non-GPL licence. This is not really an option for projects with multiple contributors that sometimes disappear. At least those that aren't large enough to require copyright assignment to a Foundation. If you're hoping to accept nontrivial patches, you need to make sure that you get them under a license that allows you to relicense (or sublicense; I don't want to look up the difference now) to something else. But if you are going to ask that of your contributors, it just makes sense to extend that same courtesy to them, too. > Also, some > companies do work on GPLd code (e.g. Ubuntu) And there are many more companies that don't. We call them "customers." > - if their business > model doesn't rely on creating proprietary software and selling it, > then there's no reason for them not to use GPL code. That makes complete sense. Unfortunately, as with most sensible things involving law and business, it's not true. While the GPL is fine for them for some software, like say gcc and the rest of the GNU/Linux toolchain, it's not for other things. It's often very not fine for libraries. Companies buy proprietary libraries and code-bases all of the time and hire other companies to build on them and combine them with other things. "In-house" code frequently isn't. > There's a side-point about whether it's good to encourage non-free > software to be used in science. I would always advocate doing science > with free software wherever possible [that discussion really is for > somewhere else. Maybe I should blog about it ;) ]. Well, we're not. We're encouraging people to use BSD. Discouraging the GPL in favor of BSD is not encouraging non-free software for science. >> 3) GPL is generally used by developers without thinking about the >> above, and they use it because it is popular. > > Now this really isn't on. Many developers (like myself) chose to use > the GPL, because we think that it's model of free software is > superior to the BSD-licences'. However, it *is* very often true. Like in this case. See the post that starts this thread. 
--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From zunzun at zunzun.com Fri Jan 19 14:04:33 2007
From: zunzun at zunzun.com (zunzun at zunzun.com)
Date: Fri, 19 Jan 2007 14:04:33 -0500
Subject: [SciPy-user] Re-releasing Python Equations under a new license?
In-Reply-To: <45B11533.5010705@gmail.com>
References: <20070118213309.GA31714@zunzun.com> <20070119165446.GA32131@encolpuis> <3FE52EDF-DA72-4A2F-8AC7-5A03F0CCB945@sel.cam.ac.uk> <45B11533.5010705@gmail.com>
Message-ID: <20070119190433.GA13562@zunzun.com>

On Fri, Jan 19, 2007 at 01:00:03PM -0600, Robert Kern wrote:
> However, it *is* very often true. Like in this case. See the post that starts this thread.

Ah - that's me. BSD it shall be, and thank you for the advice.

James

From robert.kern at gmail.com Fri Jan 19 14:31:46 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 19 Jan 2007 13:31:46 -0600
Subject: [SciPy-user] Re-releasing Python Equations under a new license?
In-Reply-To:
References: <87y7nzj62z.fsf@peds-pc311.bsd.uchicago.edu> <20070119175915.GA1627@encolpuis>
Message-ID: <45B11CA2.6020707@gmail.com>

Rich Shepard wrote:
> On Fri, 19 Jan 2007, Gabriel Gellner wrote:
>> for small companies (or individuals), in this sense the BSD to me seems unambiguously superior.
>
> Gabriel, et al.:
>
> Here's something for you to consider:
>
> As the Groklaw post introduces the subject,
>
> "BSD - The Dark Horse of Open Source, by Brendan Scott, OS Law
> Sunday, January 14 2007 @ 02:34 PM EST
>
> "Brendan Scott has been studying the BSD license, particularly in the context of Australian law, and he has come up with some startling questions. Is the BSD license as permissive as we've thought? The paper is principally for lawyers to consider, but it's certainly of interest to everyone, and note his disclaimer:

I also encourage everyone to read the comments and not just the paper. That title and abstract is much more sound and fury than the arguments entail, IMO. Seriously, "Dark Horse"? Lawyers should save their sophistry for court appearances, not papers.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From rhc28 at cornell.edu Fri Jan 19 16:45:00 2007
From: rhc28 at cornell.edu (Rob Clewley)
Date: Fri, 19 Jan 2007 16:45:00 -0500
Subject: [SciPy-user] c2d
In-Reply-To:
References:
Message-ID:

Hi Ryan,

I don't think there is, but PyDSTool has such a function, after a similar question was asked on this list last year (see http://projects.scipy.org/pipermail/scipy-user/2006-March/007327.html). The code adapts the interp1d class that's been in Scipy for a while (you could just dig out my interp0d class). You can see an example in PyDSTool/Tests/interp_pcwc.py, which contains code like

timeData = linspace(0, 10, 30)
sindata = sin(timeData)
xData = makeDataDict(['sinx'], [sindata])  # a PyDSTool util to "zip" names and arrays
pcwc_interp = InterpolateTable({
    'tdata': timeData,
    'ics': xData,
    'name': 'interp0d',
    'method': 'constant',
}).compute('interp')

x = linspace(0,10,300)
plot(x, pcwc_interp(x, ['sinx']))
plot(x, sin(x))

I've attached the output as a JPG graphic. I hope this helps.

Rob

-------------- next part --------------
A non-text attachment was scrubbed...
Name: pydstool_sin_c2d_example.jpg
Type: image/jpeg
Size: 24163 bytes
Desc: not available
URL:

From oliphant at ee.byu.edu Fri Jan 19 16:50:33 2007
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Fri, 19 Jan 2007 14:50:33 -0700
Subject: [SciPy-user] c2d
In-Reply-To:
References:
Message-ID: <45B13D29.1010205@ee.byu.edu>

Ryan Krauss wrote:
> Is there a scipy equivalent to the matlab function c2d for converting a continuous LTI system to digital? I poked around in scipy.signal and didn't see anything. I don't mind writing one, but don't want to reinvent the wheel.

I don't think so. I remember thinking about it several years ago, but I don't think it actually got written.

-Travis

From nkottens at nd.edu Sat Jan 20 00:08:52 2007
From: nkottens at nd.edu (Nicholas Kottenstette)
Date: Fri, 19 Jan 2007 23:08:52 -0600
Subject: [SciPy-user] Trouble with arrays of arrays
Message-ID: <45B1A3E4.2040500@nd.edu>

Hey All,

I was trying to write some code to handle polynomial matrices. I want a matrix in which each entry is a reference to a possibly different length array describing the i,jth polynomial.

For simplicity I want to represent the following 2x2 matrix
[2s+3, s+2;
 3.3s+4.3, 1.2s+6.3]

M = array(

In [75]: N = array([[array([2.0,3.0]),array([1.0,2.0])],[array([3.3,4.3]),array([1.2,6.3])]],dtype=object_)

In [76]: N
Out[76]:
array([[[2.0, 3.0],
        [1.0, 2.0]],
       [[3.3, 4.3],
        [1.2, 6.3]]], dtype=object)

In [77]: N.shape
Out[77]: (2, 2, 2)

Clearly this is not what I wanted to happen; I was expecting a 2x2 array with references to each object array. If I modify my declaration of N slightly so that not all the arrays "fit", I can hack this to get what I want to see:

In [78]: N = array([[array([2.0,3.0]),array([1.0,2.0])],[array([3.3,4.3]),array([0.0,1.2,6.3])]],dtype=object_)

In [79]: N
Out[79]:
array([[[ 2.  3.], [ 1.  2.]],
       [[ 3.3  4.3], [ 0.   1.2  6.3]]], dtype=object)

since 0s^2 + 1.2s + 6.3 = 1.2s + 6.3. Any suggestions to handle this problem differently are welcome.

Sincerely,
Nicholas

From robert.kern at gmail.com Sat Jan 20 01:01:28 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 20 Jan 2007 00:01:28 -0600
Subject: [SciPy-user] Trouble with arrays of arrays
In-Reply-To: <45B1A3E4.2040500@nd.edu>
References: <45B1A3E4.2040500@nd.edu>
Message-ID: <45B1B038.8030106@gmail.com>

Nicholas Kottenstette wrote:
> Hey All,
>
> I was trying to write some code to handle polynomial matrices. I want a matrix in which each entry is a reference to a possibly different length array describing the i,jth polynomial.
>
> For simplicity I want to represent the following 2x2 matrix
> [2s+3, s+2;
>  3.3s+4.3, 1.2s+6.3]
>
> In [75]: N = array([[array([2.0,3.0]),array([1.0,2.0])],[array([3.3,4.3]),array([1.2,6.3])]],dtype=object_)
>
> In [76]: N
> Out[76]:
> array([[[2.0, 3.0],
>         [1.0, 2.0]],
>        [[3.3, 4.3],
>         [1.2, 6.3]]], dtype=object)
>
> In [77]: N.shape
> Out[77]: (2, 2, 2)
>
> Clearly this is not what I wanted to happen; I was expecting a 2x2 array with references to each object array.

Well, it's not clear to array(). There's no way for it to know how deep you wanted it to go. Object arrays with elements that are sequences are simply ambiguous, and array() does the best that it can with the information it has available. The generally accepted approach to resolve this ambiguity is to use empty() to make an empty object array of the shape that you want and fill it in from your input.
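A minimal sketch of that empty()-and-fill approach, reusing the 2x2 polynomial matrix from the original post (the loop and variable names here are illustrative, not from the thread):

import numpy as np

rows = [[np.array([2.0, 3.0]), np.array([1.0, 2.0])],
        [np.array([3.3, 4.3]), np.array([1.2, 6.3])]]

# Build the object container first, so array() never has to guess the depth.
N = np.empty((2, 2), dtype=object)
for i in range(2):
    for j in range(2):
        N[i, j] = rows[i][j]

print N.shape    # (2, 2) -- each cell holds a reference to its own coefficient array
print N[1, 1]    # [ 1.2  6.3]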
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From akumar at iitm.ac.in Sat Jan 20 02:08:37 2007 From: akumar at iitm.ac.in (Kumar Appaiah) Date: Sat, 20 Jan 2007 12:38:37 +0530 Subject: [SciPy-user] r_ not defined while using signal.freqs In-Reply-To: <45B07486.1020709@gmail.com> References: <20070119073508.GA5624@localhost> <45B07486.1020709@gmail.com> Message-ID: <20070120070837.GA5964@localhost> On Fri, Jan 19, 2007 at 01:34:30AM -0600, Robert Kern wrote: > Fixed. Pardon me if I'm asking something obvious, but where do I get the fix or fixed version? Is it the one in the svn repository? Also, do I just replace the relevant file filter_design.py with the current one there, or do I have to sync with the whole repository? Finally, if I don't want to do all this, I should just wait for the next minor release to get into Debian, right? Thanks. Kumar -- Kumar Appaiah, 462, Jamuna Hostel, Indian Institute of Technology Madras, Chennai - 600 036 From robert.kern at gmail.com Sat Jan 20 02:12:16 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 20 Jan 2007 01:12:16 -0600 Subject: [SciPy-user] r_ not defined while using signal.freqs In-Reply-To: <20070120070837.GA5964@localhost> References: <20070119073508.GA5624@localhost> <45B07486.1020709@gmail.com> <20070120070837.GA5964@localhost> Message-ID: <45B1C0D0.6040505@gmail.com> Kumar Appaiah wrote: > On Fri, Jan 19, 2007 at 01:34:30AM -0600, Robert Kern wrote: >> Fixed. > > Pardon me if I'm asking something obvious, but where do I get the fix > or fixed version? Is it the one in the svn repository? Yes. > Also, do I just replace the relevant file filter_design.py with the > current one there, or do I have to sync with the whole repository? For a fix as localized as this (and because I don't think that file has been otherwise changed since the last release), you can just replace the relevant file. Other times, fixes will be more involved and will require changes in multiple files. Or other changes have been incorporated into the file since the last release that will depend on changes in other files. > Finally, if I don't want to do all this, I should just wait for the > next minor release to get into Debian, right? If waiting for Debian is the alternative, then I'm almost certain that you *do* want to do at least one of the things you described to get the fix. We don't make official releases of scipy particularly often, and Debian will, of course, lag that release. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From akumar at iitm.ac.in Sat Jan 20 03:14:29 2007 From: akumar at iitm.ac.in (Kumar Appaiah) Date: Sat, 20 Jan 2007 13:44:29 +0530 Subject: [SciPy-user] r_ not defined while using signal.freqs In-Reply-To: <45B1C0D0.6040505@gmail.com> References: <20070119073508.GA5624@localhost> <45B07486.1020709@gmail.com> <20070120070837.GA5964@localhost> <45B1C0D0.6040505@gmail.com> Message-ID: <20070120081429.GA8597@localhost> On Sat, Jan 20, 2007 at 01:12:16AM -0600, Robert Kern wrote: > If waiting for Debian is the alternative, then I'm almost certain > that you *do* want to do at least one of the things you described to > get the fix. 
> We don't make official releases of scipy particularly often, and Debian will, of course, lag that release.

Well, I've filed a bug report and directed the maintainer to refer to this thread and the svn repository. So, even Debian will have the fixed version soon, I guess.

Thanks.

Kumar

--
Kumar Appaiah, 462, Jamuna Hostel, Indian Institute of Technology Madras, Chennai - 600 036

From robert.kern at gmail.com Sat Jan 20 06:23:21 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 20 Jan 2007 05:23:21 -0600
Subject: [SciPy-user] r_ not defined while using signal.freqs
In-Reply-To: <20070120081429.GA8597@localhost>
References: <20070119073508.GA5624@localhost> <45B07486.1020709@gmail.com> <20070120070837.GA5964@localhost> <45B1C0D0.6040505@gmail.com> <20070120081429.GA8597@localhost>
Message-ID: <45B1FBA9.7070903@gmail.com>

Kumar Appaiah wrote:
> On Sat, Jan 20, 2007 at 01:12:16AM -0600, Robert Kern wrote:
>> If waiting for Debian is the alternative, then I'm almost certain that you *do* want to do at least one of the things you described to get the fix. We don't make official releases of scipy particularly often, and Debian will, of course, lag that release.
>
> Well, I've filed a bug report and directed the maintainer to refer to this thread and the svn repository. So, even Debian will have the fixed version soon, I guess.

That is not usually the case. Upstream bug fixes of the size and severity of this one are usually folded in when the Debian packager starts updates to a new upstream release. If that package tends to track an upstream source control repository instead of official releases, then this might happen fairly quickly in Debian unstable. However, most packages do not; python-scipy is just such a package. But look at the changelog history if you don't believe me:

http://packages.debian.org/changelogs/pool/main/p/python-scipy/python-scipy_0.5.2-0.1/changelog

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From akumar at iitm.ac.in Sat Jan 20 06:56:51 2007
From: akumar at iitm.ac.in (Kumar Appaiah)
Date: Sat, 20 Jan 2007 17:26:51 +0530
Subject: [SciPy-user] r_ not defined while using signal.freqs
In-Reply-To: <45B1FBA9.7070903@gmail.com>
References: <20070119073508.GA5624@localhost> <45B07486.1020709@gmail.com> <20070120070837.GA5964@localhost> <45B1C0D0.6040505@gmail.com> <20070120081429.GA8597@localhost> <45B1FBA9.7070903@gmail.com>
Message-ID: <20070120115651.GA14501@localhost>

On Sat, Jan 20, 2007 at 05:23:21AM -0600, Robert Kern wrote:
> That is not usually the case. Upstream bug fixes of the size and severity of this one are usually folded in when the Debian packager starts updates to a new upstream release. If that package tends to track an upstream source control repository instead of official releases, then this might happen fairly quickly in Debian unstable. However, most packages do not; python-scipy is just such a package.

Didn't think of that. Thanks for the clarification.
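For anyone dropping the fixed filter_design.py into an installed tree by hand, a quick smoke test of the code path that used to raise the NameError (a sketch -- the coefficients below are arbitrary, not taken from the thread):

from scipy import signal

# freqs() is the routine that died with "NameError: r_ is not defined"
# inside filter_design.py; on a fixed tree it simply returns the response.
b, a = [1.0], [1.0, 1.0]      # H(s) = 1/(s + 1)
w, h = signal.freqs(b, a)
print len(w), abs(h[0])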
Kumar

--
Kumar Appaiah, 462, Jamuna Hostel, Indian Institute of Technology Madras, Chennai - 600 036

From robert.kern at gmail.com Sat Jan 20 08:10:21 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 20 Jan 2007 07:10:21 -0600
Subject: [SciPy-user] Docstring standards for NumPy and SciPy
In-Reply-To: <45A4FB90.9000809@ee.byu.edu>
References: <45A4FB90.9000809@ee.byu.edu>
Message-ID: <45B214BD.80503@gmail.com>

There's another thing we should standardize that's related to docstrings: encoding. Many names of authors of modules as well as authors of papers I want to cite in the docstrings have non-ASCII characters. Let's standardize on a UTF-8 encoding for all files that have non-ASCII characters.

Now, having gone that far, I'm sitting on the fence as to whether we should discourage/encourage/leave alone the idea of using the math symbols in Unicode to help display math better.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From jks at iki.fi Sat Jan 20 09:20:35 2007
From: jks at iki.fi (=?iso-8859-1?Q?Jouni_K=2E_Sepp=E4nen?=)
Date: Sat, 20 Jan 2007 16:20:35 +0200
Subject: [SciPy-user] Docstring standards for NumPy and SciPy
References: <45A4FB90.9000809@ee.byu.edu> <45B214BD.80503@gmail.com>
Message-ID:

Robert Kern writes:
> Now, having gone that far, I'm sitting on the fence as to whether we should discourage/encourage/leave alone the idea of using the math symbols in Unicode to help display math better.

"Unicode Nearly Plain-Text Encoding of Mathematics" by Murray Sargent III may be worth considering: http://unicode.org/notes/tn28/

I don't know if there is yet any software support for this format, but it shouldn't be too difficult to convert it to (La)TeX. This format would solve the question of delimiting math expressions: "U+23A8 starts a math zone and U+23AC ends it."

--
Jouni K. Seppänen
http://www.iki.fi/jks

From aisaac at american.edu Sat Jan 20 10:58:46 2007
From: aisaac at american.edu (Alan G Isaac)
Date: Sat, 20 Jan 2007 10:58:46 -0500
Subject: [SciPy-user] Docstring standards for NumPy and SciPy
In-Reply-To:
References: <45A4FB90.9000809@ee.byu.edu> <45B214BD.80503@gmail.com>
Message-ID:

> Robert Kern writes:
>> Now, having gone that far, I'm sitting on the fence as to whether we should discourage/encourage/leave alone the idea of using the math symbols in Unicode to help display math better.

On Sat, 20 Jan 2007, Jouni K. Seppänen apparently wrote:
> "Unicode Nearly Plain-Text Encoding of Mathematics" by Murray Sargent III may be worth considering: http://unicode.org/notes/tn28/
> I don't know if there is yet any software support for this format, but it shouldn't be too difficult to convert it to (La)TeX.
> This format would solve the question of delimiting math expressions: "U+23A8 starts a math zone and U+23AC ends it."

First, on Jouni's last comment. Of course we can choose any characters we want to delimit a *certain kind* of math zone. If this proposal were adopted, it would be fairly natural to follow its choices. We would still need to decide how to delimit other kinds of math zones.

Next, a couple comments on the proposal, which I confess to having read very quickly:

- the document was written in Word and as far as I can tell does not actually describe any implementation of the conversion from the "linear format" to the "built-up format".
So only the "linear format" is currently relevant, and IMO it is neither pretty nor substantively novel.

- a more natural comparison would have been to LaTeX with Unicode support, rather than ASCII LaTeX vs. a new unicode proposal. The contrast between examples would then not look nearly as impressive.

- I saw no discussion of Fortress! I find that very odd in the context of the proposal. (By the way, Fortress has been open sourced.)

In short, this looks like MS coming late to the party with ideas that are not novel, a proposal that is vapor, and comparisons that are not objective. Again, this is just a quick reaction.

Finally, a question about Robert's core question: how many people will be able to read the Unicode math characters as they examine the docs while working at an interpreter prompt?

I am not trying to advocate for or against the use of Unicode math symbols. I must say, however, that I doubt I will soon be able to get most Unicode math symbols into my docs as fast as I can type their LaTeX representations.

Alan

From ndbecker2 at gmail.com Sat Jan 20 11:09:29 2007
From: ndbecker2 at gmail.com (Neal Becker)
Date: Sat, 20 Jan 2007 11:09:29 -0500
Subject: [SciPy-user] Docstring standards for NumPy and SciPy
References: <45A4FB90.9000809@ee.byu.edu> <45B214BD.80503@gmail.com>
Message-ID:

Alan G Isaac wrote:
> Fortress

I wonder what this is referring to? The FORTRAN thing from Sun?

From steve at shrogers.com Sat Jan 20 11:27:34 2007
From: steve at shrogers.com (Steven H. Rogers)
Date: Sat, 20 Jan 2007 09:27:34 -0700
Subject: [SciPy-user] Docstring standards for NumPy and SciPy
In-Reply-To:
References: <45A4FB90.9000809@ee.byu.edu> <45B214BD.80503@gmail.com>
Message-ID: <45B242F6.3080103@shrogers.com>

Neal Becker wrote:
> Alan G Isaac wrote:
>> Fortress
>
> I wonder what this is referring to? The FORTRAN thing from Sun?

I believe so, though Fortress (http://fortress.sunsource.net/) really isn't all that Fortran-like. It's intended to address Fortran's scientific computing domain in a more scientist-friendly manner, so its goals are similar to SciPy's.

Regards,
Steve

--
Steven H. Rogers, Ph.D., steve at shrogers.com
Weblog: http://shrogers.com/weblog
"He who refuses to do arithmetic is doomed to talk nonsense." -- John McCarthy

From aisaac at american.edu Sat Jan 20 12:12:10 2007
From: aisaac at american.edu (Alan G Isaac)
Date: Sat, 20 Jan 2007 12:12:10 -0500
Subject: [SciPy-user] Docstring standards for NumPy and SciPy
In-Reply-To:
References: <45A4FB90.9000809@ee.byu.edu> <45B214BD.80503@gmail.com>
Message-ID:

> Alan G Isaac wrote:
>> Fortress

On Sat, 20 Jan 2007, Neal Becker apparently wrote:
> I wonder what this is referring to? The FORTRAN thing from Sun?

It is Sun's new (alpha) scientific computing language: http://research.sun.com/projects/plrg/faq/index.html

Code example: http://research.sun.com/projects/plrg/faq/NAS-CG.pdf

Cheers,
Alan Isaac

From gnurser at googlemail.com Sat Jan 20 13:50:08 2007
From: gnurser at googlemail.com (George Nurser)
Date: Sat, 20 Jan 2007 18:50:08 +0000
Subject: [SciPy-user] OS X test issues
In-Reply-To: <50A935D6-89E8-4996-B17C-8EB5EA5C6B7B@kellogg.northwestern.edu>
References: <50A935D6-89E8-4996-B17C-8EB5EA5C6B7B@kellogg.northwestern.edu>
Message-ID: <1d1e6ea70701201050w29ac4f34q18ddaf3b7faef13f@mail.gmail.com>

After compiling scipy with gfortran on a Core 2 Duo under Mac OS X 10.4.8, I get similar but slightly different test errors to Srinath and Vincent. After

import scipy
scipy.test(10)

I get three test failures:

1.
FAIL: check_y_stride (scipy.linalg.tests.test_fblas.test_cgemv)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/linalg/tests/test_fblas.py", line 367, in check_y_stride
    assert_array_almost_equal(desired_y,y)
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal
    header='Arrays are not almost equal')
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare
    assert cond, msg
AssertionError:
Arrays are not almost equal
(mismatch 16.6666666667%)
 x: array([-1.35128093 +1.35128093j,  1. +1.j,  -8.87044144+12.87044144j,
            3. +3.j,  -8.92136478+16.92136383j,  5. +5.j], dtype=complex64)
 y: array([-1.35128105 +1.35128105j,  1. +1.j,  -8.87044144+12.87044144j,
            3. +3.j,  -8.92136574+16.92136574j,  5. +5.j], dtype=complex64)

2 & 3:

FAIL: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/lib/blas/tests/test_blas.py", line 76, in check_dot
    assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j)
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/testing/utils.py", line 156, in assert_almost_equal
    assert round(abs(desired - actual),decimal) == 0, msg
AssertionError: Items are not equal:
ACTUAL: 3.7444811487959628e-37j
DESIRED: (-9+2j)

======================================================================
FAIL: check_dot (scipy.linalg.tests.test_blas.test_fblas1_simple)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/linalg/tests/test_blas.py", line 75, in check_dot
    assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j)
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/testing/utils.py", line 156, in assert_almost_equal
    assert round(abs(desired - actual),decimal) == 0, msg
AssertionError: Items are not equal:
ACTUAL: 3.7444703868237568e-37j
DESIRED: (-9+2j)

A simple dot product gives the right answer, though:

In [9]: scipy.dot([3j,-4,3.-4j],[2,3,1])
Out[9]: (-9+2j)

Do these test failures matter?

Regards,
George Nurser.

From gael.varoquaux at normalesup.org Sat Jan 20 14:18:53 2007
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sat, 20 Jan 2007 20:18:53 +0100
Subject: [SciPy-user] SciPy Data Analysis Workbench
In-Reply-To: <035101c73b5d$13185eb0$6400a8c0@D380>
References: <20070118083651.GA14390@clipper.ens.fr> <035101c73b5d$13185eb0$6400a8c0@D380>
Message-ID: <20070120191853.GA21935@clipper.ens.fr>

On Thu, Jan 18, 2007 at 07:02:13PM -0500, val wrote:
> My first impression of traits was its help in its structured description and manipulation of physical properties of real-world objects, a very important (and missing in Python?) component of physics-related programming. How would you see/elaborate on that side of traits?

Hi Val,

Sorry for taking so long to answer, this week has been hectic.

I most definitely agree. Manipulating objects that the programmer can identify as real objects, and building graphical representations, is great as it helps him build a mental representation of his code.
Manipuling objects that the programmer can identify has real objects, and build graphical representations, is great as it helps him building a mental representation of his code. It is actually a great way to introduce object oriented programming to physicists. Traits is nice for describing physical objects as one can use custom validators to constrain the attributes of the objects to meaningful values (say for instance that an electrical power must be positive). Another nice thing is that with the "_traits_changed" hook, you can have objects that update their properties when some of their attributes are changed. I must say I think that traitsUI is _the_ solution for physics-related GUI programming, unless you have to use C for performance reason. And even if I had to use C for performance, and a lot of time in front of me, I would try to use a mixed language solution. Ga?l From s.mientki at ru.nl Sat Jan 20 14:49:50 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Sat, 20 Jan 2007 20:49:50 +0100 Subject: [SciPy-user] SciPy Data Analysis Workbench In-Reply-To: <20070120191853.GA21935@clipper.ens.fr> References: <20070118083651.GA14390@clipper.ens.fr> <035101c73b5d$13185eb0$6400a8c0@D380> <20070120191853.GA21935@clipper.ens.fr> Message-ID: <45B2725E.8010407@ru.nl> > I must say I think that traitsUI is _the_ solution for physics-related > GUI programming, unless you have to use C for performance reason. And > even if I had to use C for performance, and a lot of time in front of me, > I would try to use a mixed language solution. > I'm just learning Python, so I might not have enough overview, but what about Orange or Viper ? cheers, Stef Mientki From ggellner at uoguelph.ca Sat Jan 20 15:01:28 2007 From: ggellner at uoguelph.ca (Gabriel Gellner) Date: Sat, 20 Jan 2007 15:01:28 -0500 Subject: [SciPy-user] SciPy Data Analysis Workbench In-Reply-To: <20070120191853.GA21935@clipper.ens.fr> References: <20070118083651.GA14390@clipper.ens.fr> <035101c73b5d$13185eb0$6400a8c0@D380> <20070120191853.GA21935@clipper.ens.fr> Message-ID: <20070120200128.GA13237@encolpuis> Does using traits change from a Duck typing solution to a static typing (type checking) solution? I have to admit I am vague on these issues, but all the books I have learned python from go crazy about not doing type checking (they say over and over that python is strongly typed, but not statically typed . . . and you should use duck typing not type checking). Having a fortran background I miss a lot of the subtly of this, but I worry about not using pythonic code, as I am trying to change my old programming habits (hopefully the new ones are better . . .). Does traits suffer this problem? As from what I have read (as a previous fortran programmer) I find traits very, very intriguing (especially the easy generation of a gui). Gabriel On Sat, Jan 20, 2007 at 08:18:53PM +0100, Gael Varoquaux wrote: > On Thu, Jan 18, 2007 at 07:02:13PM -0500, val wrote: > > My first impression of traits was its help in its structured > > description and manipulation of physical properties of real-world > > objects, a very important (and missing in Python?) component of > > physics-related programming. > > How you would see/elaborate on that side of traits? > > Hi Val, > > Sorry for taking so long to answer, this week has been hectic. > > I most definitely agree. Manipuling objects that the programmer can > identify has real objects, and build graphical representations, is great > as it helps him building a mental representation of his code. 
From gael.varoquaux at normalesup.org Sat Jan 20 15:38:40 2007
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sat, 20 Jan 2007 21:38:40 +0100
Subject: [SciPy-user] SciPy Data Analysis Workbench
In-Reply-To: <20070120200128.GA13237@encolpuis>
References: <20070118083651.GA14390@clipper.ens.fr> <035101c73b5d$13185eb0$6400a8c0@D380> <20070120191853.GA21935@clipper.ens.fr> <20070120200128.GA13237@encolpuis>
Message-ID: <20070120203839.GA4369@clipper.ens.fr>

On Sat, Jan 20, 2007 at 03:01:28PM -0500, Gabriel Gellner wrote:
> Does using traits change from a duck typing solution to a static typing (type checking) solution?

Well, yes and no. There is type checking going on, but it happens at run-time. The compiler (here a byte-code compiler) is not aware of this type-checking and cannot perform optimisations, or checking of the code, so no language theorist will call this type-checking.

> I have to admit I am vague on these issues, but all the books I have learned Python from go crazy about not doing type checking (they say over and over that Python is strongly typed, but not statically typed . . . and you should use duck typing, not type checking).

I have begun to see this as an extension of the object-oriented maxim "program to interfaces, not implementations". I think what all this really means is that if a number is not an integer, but behaves like an integer, then just go ahead with it. What is important is not the type (the implementation), but how the object behaves (the interface).

Of course the standard techniques used in program analysis and proofs that rely on types are much harder to implement with duck-typing (I am not even sure they can be), and if you add Python's monkey patching abilities, then all hopes of type-based proofs disappear.

> From what I have read (as a former Fortran programmer) I find traits very, very intriguing (especially the easy generation of a GUI).

Traits is a way of constraining the values the attributes of an object can take by adding run-time checks. TraitsUI uses the extra information added by Traits to build dialogs using simple rules.
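To make that concrete, here is a minimal sketch of the pattern (my own toy example, not from the Traits documentation; the import path assumed here is the enthought.traits.api of this era):

from enthought.traits.api import HasTraits, Float

class Resistor(HasTraits):
    resistance = Float(1000.0)   # validated at run-time: must be a float
    current = Float(0.0)
    power = Float(0.0)

    def _current_changed(self, old, new):
        # static notification hook: runs on every assignment to .current
        self.power = self.resistance * new**2

r = Resistor()
r.current = 0.01
print r.power            # 0.1
# r.resistance = "lots"  # would raise TraitError -- the run-time check above
# r.configure_traits()   # pops up an auto-generated edit dialog (TraitsUI)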
The example on the traits page is quite easy to understand, I find: http://code.enthought.com/traits/

HTH,
Gaël

From gael.varoquaux at normalesup.org Sat Jan 20 15:44:35 2007
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sat, 20 Jan 2007 21:44:35 +0100
Subject: [SciPy-user] SciPy Data Analysis Workbench
In-Reply-To: <45B2725E.8010407@ru.nl>
References: <20070118083651.GA14390@clipper.ens.fr> <035101c73b5d$13185eb0$6400a8c0@D380> <20070120191853.GA21935@clipper.ens.fr> <45B2725E.8010407@ru.nl>
Message-ID: <20070120204433.GB4369@clipper.ens.fr>

On Sat, Jan 20, 2007 at 08:49:50PM +0100, Stef Mientki wrote:
> I'm just learning Python, so I might not have enough overview, but what about Orange or Viper?

Interesting. I didn't know those alternatives. I had a quick look, and one thing I like about traitsUI is that it comes from an OOP (object oriented programming) background. Orange and Viper seem to be more visual-programming oriented, and that is one thing I don't really like.

Gaël

From aronovitch at gmail.com Sat Jan 20 19:36:48 2007
From: aronovitch at gmail.com (Amit Aronovitch)
Date: Sun, 21 Jan 2007 02:36:48 +0200
Subject: [SciPy-user] special.jn(3,x) became inexact in recent versions
Message-ID:

Hi,

I've been holding back from installing recent releases of scipy for a long time now - the reason being some numerical errors I did not have time to track down. I just found out the problem (by tracing the calculations down to a binary lib function...), so here's a description - in case it would help some other users:

In the old version: special.jn(3,62) gives 0.101359688424111 (correct)
In the new version: special.jn(3,62) gives -0.0027352572609119909 (wrong!)

(Plotting a graph of jn in this region shows several "jumps".)

The solution:
--------------
Quoting from the C source (jn.c, taken from netlib's source):

 *
 * Not suitable for large n or x. Use jv() instead.
 *

Indeed, special.jv(3,62) gives the correct result (0.101).

Possible explanation: older releases of cephes used the original names (j0, jn, ...), while the newer release contains macros converting these names to "cephes_jn" etc. It is possible that the older cephes module did not really use the cephes functions, but the ones from glibc instead (glibc contains many mathematical functions, including a "jn", which does work for large x).

I wonder how the cephes functions compare to the glibc ones performance-wise. Might be a good idea to generate ufunc wrappers to those (on platforms where glibc is available).

Regards,
Amit A.

From robert.kern at gmail.com Sat Jan 20 20:26:52 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 20 Jan 2007 19:26:52 -0600
Subject: [SciPy-user] special.jn(3,x) became inexact in recent versions
In-Reply-To:
References:
Message-ID: <45B2C15C.5050007@gmail.com>

Amit Aronovitch wrote:
> Possible explanation: older releases of cephes used the original names (j0, jn, ...), while the newer release contains macros converting these names to "cephes_jn" etc. It is possible that the older cephes module did not really use the cephes functions, but the ones from glibc instead (glibc contains many mathematical functions, including a "jn", which does work for large x).

Excellent detective work! This seems quite likely to me.

> I wonder how the cephes functions compare to the glibc ones performance-wise. Might be a good idea to generate ufunc wrappers to those (on platforms where glibc is available).

And several other C runtimes as well.
For some reason I was under the impression that at least j0 was part of the C99 standard, but I do not see any of the Bessel functions in the standard itself (section 7.12): http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1124.pdf

It would be possible to check for each of these and construct a configuration header with HAVE_LIBM_JN #defines. The #define hackery that renames jn to cephes_jn would be conditional on the HAVE_LIBM settings. I hesitate to add more to the configuration steps, though. Better all around would be for us to find (or write) a suitably licensed implementation that's better.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From sransom at nrao.edu Sat Jan 20 21:13:34 2007
From: sransom at nrao.edu (Scott Ransom)
Date: Sat, 20 Jan 2007 21:13:34 -0500
Subject: [SciPy-user] build issue on 64-bit Intel Core2 Duo
Message-ID: <20070121021333.GA26374@ssh.cv.nrao.edu>

Hi All,

Just put together a new Intel Core2 Duo machine that is running 64-bit Debian unstable (etch-to-be). Scipy gives an error during the compile of dfftpack:

creating build/temp.linux-x86_64-2.4/Lib/fftpack/dfftpack
compile options: '-c'
g77:f77: Lib/fftpack/dfftpack/dcosqf.f
Lib/fftpack/dfftpack/dcosqf.f:0: error: CPU you selected does not support x86-64 instruction set
Lib/fftpack/dfftpack/dcosqf.f:0: error: CPU you selected does not support x86-64 instruction set
Lib/fftpack/dfftpack/dcosqf.f:0: error: CPU you selected does not support x86-64 instruction set
Lib/fftpack/dfftpack/dcosqf.f:0: error: CPU you selected does not support x86-64 instruction set
error: Command "/usr/bin/g77 -g -Wall -fno-second-underscore -fPIC -O3 -funroll-loops -march=i686 -mmmx -msse2 -msse -fomit-frame-pointer -c -c Lib/fftpack/dfftpack/dcosqf.f -o build/temp.linux-x86_64-2.4/Lib/fftpack/dfftpack/dcosqf.o" failed with exit status 1

ransom at hugin:/usr/local/src/scipy$ g77 --version
GNU Fortran (GCC) 3.4.6 (Debian 3.4.6-5)
Copyright (C) 2006 Free Software Foundation, Inc.

Here is the key line from

python -c 'from numpy.f2py.diagnose import run; run()'

-----------------------
CPU information: getNCPUs has_mmx has_sse has_sse2 is_64bit is_Intel is_i686
------
-----------------------

and here is my uname info:

ransom at hugin:/usr/local/src/scipy$ uname -a
Linux hugin 2.6.18-3-amd64 #1 SMP Sun Dec 10 19:57:44 CET 2006 x86_64 GNU/Linux

So my guess is that the is_64bit and is_i686 are tripping up the CFLAGS choices....

Scott

--
Scott M. Ransom    Address: NRAO
Phone: (434) 296-0320    520 Edgemont Rd.

From robert.kern at gmail.com Sat Jan 20 21:24:46 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 20 Jan 2007 20:24:46 -0600
Subject: [SciPy-user] build issue on 64-bit Intel Core2 Duo
In-Reply-To: <20070121021333.GA26374@ssh.cv.nrao.edu>
References: <20070121021333.GA26374@ssh.cv.nrao.edu>
Message-ID: <45B2CEEE.8070105@gmail.com>

Scott Ransom wrote:
> and here is my uname info:
> ransom at hugin:/usr/local/src/scipy$ uname -a
> Linux hugin 2.6.18-3-amd64 #1 SMP Sun Dec 10 19:57:44 CET 2006 x86_64 GNU/Linux
>
> So my guess is that the is_64bit and is_i686 are tripping up the CFLAGS choices....

I imagine the Linux 2.6.18-3-amd64 might be an issue, too, for an Intel Core 2 Duo machine.
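One quick way to see which predicates the detector is asserting on such a box (a sketch; numpy.distutils.cpuinfo exposes the same is_*/has_* tests that the f2py diagnose output above listed):

from numpy.distutils import cpuinfo

cpu = cpuinfo.cpu
# These are the tests distutils consults when picking -march and friends.
for name in ("is_Intel", "is_i686", "is_64bit", "is_PentiumIV", "is_Nocona"):
    print name, getattr(cpu, name)()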
--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From robert.kern at gmail.com Sat Jan 20 22:40:21 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 20 Jan 2007 21:40:21 -0600
Subject: [SciPy-user] Docstring standards for NumPy and SciPy
In-Reply-To:
References: <45A4FB90.9000809@ee.byu.edu> <45B214BD.80503@gmail.com>
Message-ID: <45B2E0A5.9030100@gmail.com>

Alan G Isaac wrote:
> Finally, a question about Robert's core question: how many people will be able to read the Unicode math characters as they examine the docs while working at an interpreter prompt?

Pretty much everyone except Windows users has a UTF-8-capable terminal more or less by default. But yes, this is why I'm on the three-way fence here, whether to encourage, discourage, or simply not make any proclamation.

> I am not trying to advocate for or against the use of Unicode math symbols. I must say, however, that I doubt I will soon be able to get most Unicode math symbols into my docs as fast as I can type their LaTeX representations.

True, but I type once and read the plaintext many, many times. The occasional ? or ? would be nice, sometimes.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From akumar at iitm.ac.in Sun Jan 21 00:01:07 2007
From: akumar at iitm.ac.in (Kumar Appaiah)
Date: Sun, 21 Jan 2007 10:31:07 +0530
Subject: [SciPy-user] Problems with filter design
Message-ID: <20070121050107.GA9168@localhost>

Dear Scipy users,

I am having problems with the filter design functions in SciPy. I am unable to find out what I am doing wrong.

Here's the relevant code section:

omega_c = sqrt(0.8)
omega_r = 1 / sqrt(0.8)
omega_s = 7.5

gpass = 0.1
gstop = 43.46

(n, Wn) = signal.ellipord(omega_c, omega_r, gpass, gstop, analog=1)
[B, A] = signal.ellip(n, gpass, gstop, Wn, analog=1)

However, it throws an error at me, like this:

Traceback (most recent call last):
  File "ellip.py", line 18, in ?
    [B, A] = signal.ellip(n, gpass, gstop, Wn, analog=1)
  File "/usr/lib/python2.4/site-packages/scipy/signal/filter_design.py", line 514, in ellip
    return iirfilter(N, Wn, rs=rs, rp=rp, btype=btype, analog=analog, output=output, ftype='elliptic')
  File "/usr/lib/python2.4/site-packages/scipy/signal/filter_design.py", line 448, in iirfilter
    b, a = lp2lp(b,a,wo=wo)
  File "/usr/lib/python2.4/site-packages/scipy/signal/filter_design.py", line 186, in lp2lp
    wo = wo[0]
IndexError: 0-d arrays can't be indexed

Referring to filter_design.py, I guessed an array may be expected in Wn. But neither does that make sense, nor does it give the expected output. What is the mistake?

Thanks.

Kumar

--
Kumar Appaiah, 462, Jamuna Hostel, Indian Institute of Technology Madras, Chennai - 600 036

From robert.kern at gmail.com Sun Jan 21 00:09:43 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 20 Jan 2007 23:09:43 -0600
Subject: [SciPy-user] Problems with filter design
In-Reply-To: <20070121050107.GA9168@localhost>
References: <20070121050107.GA9168@localhost>
Message-ID: <45B2F597.7000400@gmail.com>

Kumar Appaiah wrote:
> Dear Scipy users,
>
> I am having problems with the filter design functions in SciPy. I am unable to find out what I am doing wrong.
> Here's the relevant code section:
>
> omega_c = sqrt(0.8)
> omega_r = 1 / sqrt(0.8)
> omega_s = 7.5
>
> gpass = 0.1
> gstop = 43.46
>
> (n, Wn) = signal.ellipord(omega_c, omega_r, gpass, gstop, analog=1)
> [B, A] = signal.ellip(n, gpass, gstop, Wn, analog=1)
>
> However, it throws an error at me, like this:
>
> Traceback (most recent call last):
>   File "ellip.py", line 18, in ?
>     [B, A] = signal.ellip(n, gpass, gstop, Wn, analog=1)
>   File "/usr/lib/python2.4/site-packages/scipy/signal/filter_design.py", line 514, in ellip
>     return iirfilter(N, Wn, rs=rs, rp=rp, btype=btype, analog=analog, output=output, ftype='elliptic')
>   File "/usr/lib/python2.4/site-packages/scipy/signal/filter_design.py", line 448, in iirfilter
>     b, a = lp2lp(b,a,wo=wo)
>   File "/usr/lib/python2.4/site-packages/scipy/signal/filter_design.py", line 186, in lp2lp
>     wo = wo[0]
> IndexError: 0-d arrays can't be indexed
>
> Referring to filter_design.py, I guessed an array may be expected in Wn. But neither does that make sense, nor does it give the expected output. What is the mistake?

It's a subtle bug. Wn is being coerced to an array in iirfilter(). When Wn is a scalar, it becomes a rank-0 array. Unfortunately, there was a bit of inaccurate type-checking in lp2lp and lp2hp that assumed that if the type of wo was an array, then it would be a (2,)-array of 2 critical frequencies, not a ()-array (and don't ask me whether or not having two critical frequencies makes sense. It's been a long time since I took my signal processing class).

I've checked in a workaround.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From sransom at nrao.edu Sun Jan 21 00:19:20 2007
From: sransom at nrao.edu (Scott Ransom)
Date: Sun, 21 Jan 2007 00:19:20 -0500
Subject: [SciPy-user] build issue on 64-bit Intel Core2 Duo
In-Reply-To: <45B2CEEE.8070105@gmail.com>
References: <20070121021333.GA26374@ssh.cv.nrao.edu> <45B2CEEE.8070105@gmail.com>
Message-ID: <20070121051920.GA23112@ssh.cv.nrao.edu>

On Sat, Jan 20, 2007 at 08:24:46PM -0600, Robert Kern wrote:
> Scott Ransom wrote:
>
>> and here is my uname info:
>> ransom at hugin:/usr/local/src/scipy$ uname -a
>> Linux hugin 2.6.18-3-amd64 #1 SMP Sun Dec 10 19:57:44 CET 2006 x86_64 GNU/Linux
>>
>> So my guess is that the is_64bit and is_i686 are tripping up the CFLAGS choices....
>
> I imagine the Linux 2.6.18-3-amd64 might be an issue, too, for an Intel Core 2 Duo machine.

I guess it could be. But that is the way that Debian is doing things with Etch now. See here:

http://www.ducea.com/2006/10/10/debian-etch-kernels/

Scott

--
Scott M. Ransom    Address: NRAO
Phone: (434) 296-0320    520 Edgemont Rd.
email: sransom at nrao.edu Charlottesville, VA 22903 USA GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989

From robert.kern at gmail.com Sun Jan 21 00:28:46 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 20 Jan 2007 23:28:46 -0600 Subject: [SciPy-user] build issue on 64-bit Intel Core2 Duo In-Reply-To: <20070121051920.GA23112@ssh.cv.nrao.edu> References: <20070121021333.GA26374@ssh.cv.nrao.edu> <45B2CEEE.8070105@gmail.com> <20070121051920.GA23112@ssh.cv.nrao.edu> Message-ID: <45B2FA0E.1030406@gmail.com>

Scott Ransom wrote:
> On Sat, Jan 20, 2007 at 08:24:46PM -0600, Robert Kern wrote:
>> Scott Ransom wrote:
>>> and here is my uname info:
>>>
>>>     ransom at hugin:/usr/local/src/scipy$ uname -a
>>>     Linux hugin 2.6.18-3-amd64 #1 SMP Sun Dec 10 19:57:44 CET 2006 x86_64 GNU/Linux
>>>
>>> So my guess is that the is_64bit and is_i686 are tripping up the CFLAGS choices....
>>
>> I imagine the Linux 2.6.18-3-amd64 might be an issue, too, for an Intel Core 2 Duo machine.
>
> I guess it could be. But that is the way that Debian is doing things with Etch now. See here:
>
> http://www.ducea.com/2006/10/10/debian-etch-kernels/

Okay, never mind. Just an odd label rather than a real problem. I'll have to dig into the cpu-detection code and the gcc manpages to see what's incompatible.

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From akumar at iitm.ac.in Sun Jan 21 00:44:48 2007 From: akumar at iitm.ac.in (Kumar Appaiah) Date: Sun, 21 Jan 2007 11:14:48 +0530 Subject: [SciPy-user] Problems with filter design In-Reply-To: <45B2F597.7000400@gmail.com> References: <20070121050107.GA9168@localhost> <45B2F597.7000400@gmail.com> Message-ID: <20070121054448.GA10166@localhost>

On Sat, Jan 20, 2007 at 11:09:43PM -0600, Robert Kern wrote:
> It's a subtle bug. Wn is being coerced to an array in iirfilter(). When Wn is a scalar, it becomes a rank-0 array. Unfortunately, there was a bit of inaccurate type-checking in lp2lp and lp2hp that assumed that if the type of wo was an array, then it would be a (2,)-array of 2 critical frequencies, not a ()-array (and don't ask me whether or not having two critical frequencies makes sense. It's been a long time since I took my signal processing class).

It's quite all right. Signal processing isn't something one is too good at unless one is in it throughout!

> I've checked in a workaround.

Which leads me to think: haven't people been using these functions? They seem to have quite a bit of unpolished stuff. Another example is the docstring for signal.ellipord. If you notice, at the bottom, it says "Chebyshev" instead of "Elliptic"; obviously, it was overlooked while doing a copy-paste!

Anyway, when I find a fault like this, I still feel I should post it on the list to check whether others are facing the same problems, rather than call for a bug filing straightaway. Is this OK?

Thanks for the quick response!

Kumar -- Kumar Appaiah, 462, Jamuna Hostel, Indian Institute of Technology Madras, Chennai - 600 036
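[The rank-0 failure that Robert Kern describes above is easy to reproduce outside of scipy. A minimal sketch in plain numpy -- not the actual filter_design.py code path; numpy.atleast_1d() here merely stands in for the kind of workaround he checked in:

    import numpy

    wo = numpy.asarray(0.25)   # a Python scalar coerced to a rank-0 array
    print wo.shape             # prints "()" -- there is no axis to index
    # wo[0] would raise "IndexError: 0-d arrays can't be indexed"

    # one defensive fix: promote to at least one dimension before indexing
    wo = numpy.atleast_1d(wo)
    print wo[0]                # prints 0.25

The same promotion also leaves a genuine (2,)-array of two critical frequencies untouched, which is why it makes a reasonable guard in code that accepts both forms.]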
From robert.kern at gmail.com Sun Jan 21 00:52:55 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 20 Jan 2007 23:52:55 -0600 Subject: [SciPy-user] Problems with filter design In-Reply-To: <20070121054448.GA10166@localhost> References: <20070121050107.GA9168@localhost> <45B2F597.7000400@gmail.com> <20070121054448.GA10166@localhost> Message-ID: <45B2FFB7.4060507@gmail.com>

Kumar Appaiah wrote:
> Which leads me to think: haven't people been using these functions? They seem to have quite a bit of unpolished stuff.

Yes, but they've also been through the Numeric->numpy automatic translation and haven't seen much active development since that time. There is quite a lot of code that needs a thorough review. Thank you for doing your part!

> Another example is the docstring for signal.ellipord. If you notice, at the bottom, it says "Chebyshev" instead of "Elliptic"; obviously, it was overlooked while doing a copy-paste!
>
> Anyway, when I find a fault like this, I still feel I should post it on the list to check whether others are facing the same problems, rather than call for a bug filing straightaway. Is this OK?

For things like this, I'd prefer the bug report. Others are almost certainly facing the same problems. Platform-specific and build-related issues are different.

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From akumar at iitm.ac.in Sun Jan 21 01:17:21 2007 From: akumar at iitm.ac.in (Kumar Appaiah) Date: Sun, 21 Jan 2007 11:47:21 +0530 Subject: [SciPy-user] Problems with filter design In-Reply-To: <45B2FFB7.4060507@gmail.com> References: <20070121050107.GA9168@localhost> <45B2F597.7000400@gmail.com> <20070121054448.GA10166@localhost> <45B2FFB7.4060507@gmail.com> Message-ID: <20070121061721.GA10800@localhost>

On Sat, Jan 20, 2007 at 11:52:55PM -0600, Robert Kern wrote:
> Yes, but they've also been through the Numeric->numpy automatic translation and haven't seen much active development since that time. There is quite a lot of code that needs a thorough review. Thank you for doing your part!

I am actually a beneficiary! It's just a side effect that the code is corrected for others as well.

>> Anyway, when I find a fault like this, I still feel I should post it on the list to check whether others are facing the same problems, rather than call for a bug filing straightaway. Is this OK?
>
> For things like this, I'd prefer the bug report. Others are almost certainly facing the same problems. Platform-specific and build-related issues are different.

Accepted. Thanks a lot!

Kumar -- Kumar Appaiah, 462, Jamuna Hostel, Indian Institute of Technology Madras, Chennai - 600 036

From nadavh at visionsense.com Sun Jan 21 04:17:09 2007 From: nadavh at visionsense.com (Nadav Horesh) Date: Sun, 21 Jan 2007 11:17:09 +0200 Subject: [SciPy-user] build issue on 64-bit Intel Core2 Duo Message-ID: <1169371029.31713.12.camel@nadav.envision.co.il>

The cpu identification in "site-packages/numpy/distutils/cpuinfo.py" fails to recognize the Core 2 cpu.

My solution: Modify the _is_Nocona method in cpuinfo.py to:

    def _is_Nocona(self):
        # return self.is_PentiumIV() and self.is_64bit()
        return re.match(r'Intel.*?Core.*\b', self.info[0]['model name']) is not None

(The commented line is the old code that fails to recognize Core 2.)

Nadav.

-------------- next part -------------- An HTML attachment was scrubbed... URL:
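[To check the patch on your own machine, the detection can be exercised directly. A small sketch, assuming numpy.distutils' usual attribute lookup that exposes each _is_XXX method as is_XXX():

    from numpy.distutils import cpuinfo

    print cpuinfo.cpu.is_Nocona()   # should be True on a Core 2 after the fix
    print cpuinfo.cpu.is_64bit()

On Linux this reads /proc/cpuinfo, which is where the 'model name' string matched by the patch comes from.]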
From sransom at nrao.edu Sun Jan 21 09:27:42 2007 From: sransom at nrao.edu (Scott Ransom) Date: Sun, 21 Jan 2007 09:27:42 -0500 Subject: [SciPy-user] build issue on 64-bit Intel Core2 Duo In-Reply-To: <1169371029.31713.12.camel@nadav.envision.co.il> References: <1169371029.31713.12.camel@nadav.envision.co.il> Message-ID: <20070121142742.GB19549@ssh.cv.nrao.edu>

Hi Nadav,

Yup. This seems to work. Everything compiles and the tests pass. Thanks a bunch for the quick patch.

Scott

On Sun, Jan 21, 2007 at 11:17:09AM +0200, Nadav Horesh wrote:
> The cpu identification in "site-packages/numpy/distutils/cpuinfo.py" fails to recognize the Core 2 cpu.
>
> My solution: Modify the _is_Nocona method in cpuinfo.py to:
>
>     def _is_Nocona(self):
>         # return self.is_PentiumIV() and self.is_64bit()
>         return re.match(r'Intel.*?Core.*\b', self.info[0]['model name']) is not None
>
> (The commented line is the old code that fails to recognize Core 2.)
>
> Nadav.

-- Scott M. Ransom Address: NRAO Phone: (434) 296-0320 520 Edgemont Rd. email: sransom at nrao.edu Charlottesville, VA 22903 USA GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989

From zunzun at zunzun.com Sun Jan 21 10:37:38 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Sun, 21 Jan 2007 10:37:38 -0500 Subject: [SciPy-user] Python Equations released under BSD license In-Reply-To: <20070119190433.GA13562@zunzun.com> References: <20070118213309.GA31714@zunzun.com> <20070119165446.GA32131@encolpuis> <3FE52EDF-DA72-4A2F-8AC7-5A03F0CCB945@sel.cam.ac.uk> <45B11533.5010705@gmail.com> <20070119190433.GA13562@zunzun.com> Message-ID: <20070121153738.GA23096@zunzun.com>

The middleware code for http://zunzun.com has been released under a BSD license at http://sourceforge.net/projects/pythonequations/. The project is a collection of Python equations that can fit themselves to both 2D and 3D data sets, output source code in several computing languages, and can use an included genetic algorithm for initial parameter estimation. Requires SciPy and SciPy's weave module from http://scipy.org.

From ivilata at carabos.com Sun Jan 21 12:01:44 2007 From: ivilata at carabos.com (Ivan Vilata i Balaguer) Date: Sun, 21 Jan 2007 18:01:44 +0100 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <45B214BD.80503@gmail.com> References: <45A4FB90.9000809@ee.byu.edu> <45B214BD.80503@gmail.com> Message-ID: <20070121170144.GH2093@tardis.terramar.selidor.net>

Robert Kern (on 2007-01-20 at 07:10:21 -0600) said::
> There's another thing we should standardize that's related to docstrings: encoding. Many names of authors of modules as well as authors of papers I want to cite in the docstrings have non-ASCII characters. Let's standardize on a UTF-8 encoding for all files that have non-ASCII characters.

Wouldn't it be safer and more cross-platform friendly to use Unicode docstrings with escape sequences where needed, instead of hard-wiring the UTF-8 encoding into docstrings? This would keep the files ASCII, and I guess there won't be a need for that many non-ASCII characters (occasionally for author names, as you pointed out).

::
> Now, having gone that far, I'm sitting on the fence as to whether we should discourage/encourage/leave alone the idea of using the math symbols in Unicode to help display math better.

I guess that, with so many non-obvious Unicode math characters, one would better use LaTeX or plain ASCII math to get consistent math notation from different authors. ;)

Just my impressions. Cheers,

:: Ivan Vilata i Balaguer >qo< http://www.carabos.com/ Cárabos Coop. V. V V Enjoy Data ""

-------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 309 bytes Desc: Digital signature URL:
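[Side by side, the two conventions under discussion would look something like this -- a sketch; the __author__ assignments are made-up examples, and the first line is the standard PEP 263 coding declaration:

    # -*- coding: utf-8 -*-
    # UTF-8 source file: the name reads naturally in the file itself
    __author__ = u"Cárabos Coop."

versus a pure-ASCII source file, where the same string is spelled with an escape sequence:

    __author__ = u"C\xe1rabos Coop."

Both produce identical Unicode strings at runtime; the difference is only in what a reader of the source file (or of the raw docstring) sees.]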
From seismic73 at yahoo.com Sun Jan 21 12:20:52 2007 From: seismic73 at yahoo.com (Jonathan Kane) Date: Sun, 21 Jan 2007 09:20:52 -0800 (PST) Subject: [SciPy-user] scipy installation problems for a newbie In-Reply-To: <45AF0CEB.6040004@gmail.com> Message-ID: <683587.22127.qm@web81810.mail.mud.yahoo.com>

Thanks for your help so far. A few more questions before I re-install:

1) What happens if I just install the correct gfortran-intel-bin.tar over the incorrect one without removing the files manually? Will that mess things up?

2) I looked at some of the files listed by the tar ztf /path/to/gfortran-intel-bin.tar.gz command, and it seems they were installed a long time ago:

    seismic73$ ls -al /usr/local/include/gmp.h
    -rw-r--r-- 1 seismic7 staff 80824 Oct 8 13:15 /usr/local/include/gmp.h

I thought that when it installed the file, the time stamp would be for the day of installation (which was Jan. 17). That makes me afraid to erase it because maybe it's not the one I just installed.........

Is it safe to erase every file listed by tar ztf /path/to/gfortran-intel-bin.tar.gz???

3) If I want to build Numpy and Scipy using the compilers I have, how do I uninstall the current version first?

Sorry to bother with all these questions.... obviously I am a very new newbie.....

Jon

Robert Kern wrote:

Jonathan Kane wrote:
> It can't seem to find the file /usr/local/lib/libgfortran.1.dylib (there are files called libgfortran.2.0.0.dylib, libgfortran.2.dylib, libgfortran.a, libgfortran.dylib, libgfortran.la, however......)

Ah. Looks like the gfortran binaries were upgraded to a newer version after Chris built the Superpack. That's ... unfortunate. I hereby curse all people who release tarballs without version information.

Well, you have a couple of options. You can uninstall the compilers that you got from the hpc.sf.net page pretty cleanly. You can get a list of the files it unpacked like so

    $ tar ztf /path/to/gfortran-intel-bin.tar.gz
    usr/local/
    usr/local/._.DS_Store
    usr/local/.DS_Store
    usr/local/bin/
    usr/local/bin/gfortran
    ...

I would probably want to remove those files manually. Then you can install the gfortran-intel-bin.tar.gz that is in the ScipySuperpack zip file. That should be the one it was built with. Don't worry about the corresponding gcc. It's unnecessary.

Alternatively, now that you have your compilers already set up, you could build numpy and scipy yourself. They're not too hard. matplotlib is harder, if you need that, too, but we'll take this two steps at a time. See the instructions I gave earlier on the list. The matplotlib part is missing a crucial detail, but the numpy and scipy bits are complete and accurate.
http://projects.scipy.org/pipermail/numpy-discussion/2007-January/025368.html

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From aronovitch at gmail.com Sun Jan 21 16:07:58 2007 From: aronovitch at gmail.com (Amit Aronovitch) Date: Sun, 21 Jan 2007 23:07:58 +0200 Subject: [SciPy-user] special.jn(3,x) became inexact in recent versions In-Reply-To: <45B2C15C.5050007@gmail.com> References: <45B2C15C.5050007@gmail.com> Message-ID:

Robert Kern wrote:
> And several other C runtimes as well. For some reason I was under the impression that at least j0 was part of the C99 standard, but I do not see any of the Bessel functions in the standard itself (section 7.12):
>
> http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1124.pdf
>
> It would be possible to check for each of these and construct a configuration header with HAVE_LIBM_JN #defines. The #define hackery that renames jn to cephes_jn would be conditional on the HAVE_LIBM settings.
>
> I hesitate to add more to the configuration steps, though. Better all around would be for us to find (or write) a suitably licensed implementation that's better.

The code for these functions in glibc seems to be related to (and almost identical to) Sun's "unbundled" libm, which is freely available from netlib: http://www.netlib.org/fdlibm/ (not sure about the exact ancestry - possibly came through NetBSD)

I've no idea how it compares to cephes, but I see no problems with the license. Might be worth running some benchmarks.

Regards, AA

From srinath at txcorp.com Sun Jan 21 16:53:41 2007 From: srinath at txcorp.com (Srinath Vadlamani) Date: Sun, 21 Jan 2007 14:53:41 -0700 Subject: [SciPy-user] issue with scipytest on Mac OS X core 2 duo Message-ID: <45B3E0E5.1080103@txcorp.com>

I don't know the best way to reply when I have the announcements in digest form. So I will recap my scipy test issues:

1) Machine: MacBook Pro Core 2 Duo 2.16GHz
2) GCC: 4.0.1 (darwin build)
3) python: MacPython 2.4 (tried 2.5 but then read that 2.4 is a more complete build)
4) Numpy: svn co
5) gfortran: 4.3
6) g77: GNU Fortran (GCC) 3.4.0 (from hpc)
7) fftw: 3.1.3
8) scipy: svn co

So numpy now builds after reinstalling gcc from xcode. And fftw built and installed.
Previous errors, as before:

======================================================================
ERROR: check_simple_todense (scipy.io.tests.test_mmio.test_mmio_coordinate)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/io/tests/test_mmio.py", line 151, in check_simple_todense
    b = mmread(fn).todense()
AttributeError: 'numpy.ndarray' object has no attribute 'todense'

======================================================================
FAIL: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/lib/blas/tests/test_blas.py", line 76, in check_dot
    assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j)
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/testing/utils.py", line 156, in assert_almost_equal
    assert round(abs(desired - actual),decimal) == 0, msg
AssertionError: Items are not equal:
ACTUAL: 3.4021356382745012e-37j
DESIRED: (-9+2j)

======================================================================
FAIL: check_dot (scipy.linalg.tests.test_blas.test_fblas1_simple)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/linalg/tests/test_blas.py", line 75, in check_dot
    assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j)
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/testing/utils.py", line 156, in assert_almost_equal
    assert round(abs(desired - actual),decimal) == 0, msg
AssertionError: Items are not equal:
ACTUAL: 3.4046969876595327e-37j
DESIRED: (-9+2j)

----------------------------------------------------------------------
Ran 1470 tests in 2.116s

FAILED (failures=2, errors=1)

Thanks for any help.

Srinath V.

-- ========================================================== Srinath Vadlamani srinath at txcorp.com Tech-X Corp. o:303-996-2034 5621 Arapahoe Ave. Suite A f:303-448-7756 Boulder, CO 80303 ==========================================================

From robert.kern at gmail.com Sun Jan 21 17:26:23 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 21 Jan 2007 16:26:23 -0600 Subject: [SciPy-user] Docstring standards for NumPy and SciPy In-Reply-To: <20070121170144.GH2093@tardis.terramar.selidor.net> References: <45A4FB90.9000809@ee.byu.edu> <45B214BD.80503@gmail.com> <20070121170144.GH2093@tardis.terramar.selidor.net> Message-ID: <45B3E88F.5060209@gmail.com>

Ivan Vilata i Balaguer wrote:
> Robert Kern (on 2007-01-20 at 07:10:21 -0600) said::
>> There's another thing we should standardize that's related to docstrings: encoding. Many names of authors of modules as well as authors of papers I want to cite in the docstrings have non-ASCII characters. Let's standardize on a UTF-8 encoding for all files that have non-ASCII characters.
>
> Wouldn't it be safer and more cross-platform friendly to use Unicode docstrings with escape sequences where needed, instead of hard-wiring the UTF-8 encoding into docstrings?
> This would keep the files ASCII, and I guess there won't be a need for that many non-ASCII characters (occasionally for author names, as you pointed out).

Well, most of the point is to be able to read names as they actually are. If you submit a module, I want to see

    Copyright 2007 Cárabos Coop.

not

    Copyright 2007 C\xe1rabos Coop.

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From robert.kern at gmail.com Sun Jan 21 17:32:06 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 21 Jan 2007 16:32:06 -0600 Subject: [SciPy-user] scipy installation problems for a newbie In-Reply-To: <683587.22127.qm@web81810.mail.mud.yahoo.com> References: <683587.22127.qm@web81810.mail.mud.yahoo.com> Message-ID: <45B3E9E6.4080109@gmail.com>

Jonathan Kane wrote:
> Thanks for your help so far. A few more questions before I re-install:
>
> 1) What happens if I just install the correct gfortran-intel-bin.tar over the incorrect one without removing the files manually? Will that mess things up?

Probably.

> 2) I looked at some of the files listed by the tar ztf /path/to/gfortran-intel-bin.tar.gz command, and it seems they were installed a long time ago:
>
>     seismic73$ ls -al /usr/local/include/gmp.h
>     -rw-r--r-- 1 seismic7 staff 80824 Oct 8 13:15 /usr/local/include/gmp.h
>
> I thought that when it installed the file, the time stamp would be for the day of installation (which was Jan. 17). That makes me afraid to erase it because maybe it's not the one I just installed.........

Tarballs carry timestamp information. So those timestamps are whatever the timestamps were for the person who made the tarball when he made it.

> Is it safe to erase every file listed by tar ztf /path/to/gfortran-intel-bin.tar.gz???

The files, but not the common directories. For example, if you are also deleting the gcc that you got from hpc.sf.net (recommended!), you can just delete /usr/local/lib/gcc/ entirely, but you shouldn't delete /usr/local/lib/.

> 3) If I want to build Numpy and Scipy using the compilers I have, how do I uninstall the current version first?

If you are using Python 2.4, there will be numpy/ and scipy/ directories in the following directory. Delete them.

    /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From rdm at rcblue.com Sun Jan 21 22:07:34 2007 From: rdm at rcblue.com (Dick Moores) Date: Sun, 21 Jan 2007 19:07:34 -0800 Subject: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places? In-Reply-To: References: <7.0.1.0.2.20070115084916.07543c20@rcblue.com> <7.0.1.0.2.20070115104218.0783c580@rcblue.com> Message-ID: <7.0.1.0.2.20070121185922.0748ad80@rcblue.com>

At 12:03 PM 1/15/2007, you wrote:
> On 1/15/07, Dick Moores wrote:
>> Of course! I forgot about integer division. Thanks, Vincent and Darren.
>>
>> But I still don't get the precision:
>>
>>     ==========================
>>     # clnumTest3-c.py
>>     from __future__ import division
>>     import clnum as n
>>     n.set_default_precision(40)
>>     print repr(n.exp(n.log(5/23)*2/7))
>>     =========================
>>
>> gets mpf('0.6466073240654112295',17)
>>
>> How come?
> Because you shouldn't use
>
>     from __future__ import division
>
> in this case, since that will turn 5/23 into a plain float, with 64-bit accuracy (well, 53 bits for the mantissa, really).
>
>     In [3]: n.set_default_precision(50)
>
>     In [4]: n.exp(n.log(n.mpq(5,23)*n.mpq(2,7)))
>     Out[4]: mpf('0.06211180124223602484472049689440993788819875776397515527949',55)
>
> or alternatively:
>
>     In [7]: n.exp(n.log(n.mpq('5/23')*n.mpq('2/7')))
>     Out[7]: mpf('0.06211180124223602484472049689440993788819875776397515527949',55)

I hadn't checked this until now, but I used the Python interactive prompt:

    >>> from __future__ import division
    >>> (5/23)**(2/7)
    0.64660732406541122

And double-checked it with the Windows calculator: 0.64660732406541123462639015242381

Why the discrepancy?

Thanks,

Dick Moores

From rdm at rcblue.com Sun Jan 21 22:18:50 2007 From: rdm at rcblue.com (Dick Moores) Date: Sun, 21 Jan 2007 19:18:50 -0800 Subject: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places? In-Reply-To: <7.0.1.0.2.20070121185922.0748ad80@rcblue.com> References: <7.0.1.0.2.20070115084916.07543c20@rcblue.com> <7.0.1.0.2.20070115104218.0783c580@rcblue.com> <7.0.1.0.2.20070121185922.0748ad80@rcblue.com> Message-ID: <7.0.1.0.2.20070121191650.074bf8c8@rcblue.com>

Sorry, I replied to the wrong post.
What I really want to see is how > to use clnum to do (5/23)**(2/27). The answer was already in the message you quoted: > > >In [7]: n.exp(n.log(n.mpq('5/23')*n.mpq('2/7'))) > > >Out[7]: > > >mpf('0.06211180124223602484472049689440993788819875776397515527949',55) Cheers, f ps - you don't need to cc me separately, I'm on the scipy list already. From rdm at rcblue.com Sun Jan 21 23:42:40 2007 From: rdm at rcblue.com (Dick Moores) Date: Sun, 21 Jan 2007 20:42:40 -0800 Subject: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places? In-Reply-To: References: <7.0.1.0.2.20070115084916.07543c20@rcblue.com> <7.0.1.0.2.20070115104218.0783c580@rcblue.com> <7.0.1.0.2.20070121185922.0748ad80@rcblue.com> <7.0.1.0.2.20070121191650.074bf8c8@rcblue.com> Message-ID: <7.0.1.0.2.20070121203809.06287f08@rcblue.com> At 08:01 PM 1/21/2007, you wrote: >On 1/21/07, Dick Moores wrote: > > Sorry, I replied to the wrong post. What I really want to see is how > > to use clnum to do (5/23)**(2/27). > >The answer was already in the message you quoted: > > > > >In [7]: n.exp(n.log(n.mpq('5/23')*n.mpq('2/7'))) > > > >Out[7]: > > > >mpf('0.06211180124223602484472049689440993788819875776397515527949',55) No, think it's this: >>>from __future__ import division >>>(5/23)**(2/7) 0.64660732406541122 No? Dick From fperez.net at gmail.com Mon Jan 22 00:01:20 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Sun, 21 Jan 2007 22:01:20 -0700 Subject: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places? In-Reply-To: <7.0.1.0.2.20070121203809.06287f08@rcblue.com> References: <7.0.1.0.2.20070115084916.07543c20@rcblue.com> <7.0.1.0.2.20070115104218.0783c580@rcblue.com> <7.0.1.0.2.20070121185922.0748ad80@rcblue.com> <7.0.1.0.2.20070121191650.074bf8c8@rcblue.com> <7.0.1.0.2.20070121203809.06287f08@rcblue.com> Message-ID: On 1/21/07, Dick Moores wrote: > At 08:01 PM 1/21/2007, you wrote: > >On 1/21/07, Dick Moores wrote: > > > Sorry, I replied to the wrong post. What I really want to see is how > > > to use clnum to do (5/23)**(2/27). > > > >The answer was already in the message you quoted: > > > > > > >In [7]: n.exp(n.log(n.mpq('5/23')*n.mpq('2/7'))) > > > > >Out[7]: > > > > >mpf('0.06211180124223602484472049689440993788819875776397515527949',55) > > No, think it's this: > >>>from __future__ import division > >>>(5/23)**(2/7) > 0.64660732406541122 > > No? Well, since your question was: > > > Sorry, I replied to the wrong post. What I really want to see is how > > > to use clnum to do (5/23)**(2/27). I gave you a clnum answer (with 2/7 instead of 2/27) (this assumes "import clnum as n"): n.exp(n.log(n.mpq('5/23')*n.mpq('2/7'))) Now, if your question had been "What I really want to see is how to use /plain python/ to do (5/23)**(2/7)." then the proper response would indeed be In [1]: from __future__ import division In [2]: (5/23)**(2/7) Out[2]: 0.64660732406541122 or (without the __future__ statement): In [1]: (5/23.)**(2/7.) Out[1]: 0.64660732406541122 Regards, f From rdm at rcblue.com Mon Jan 22 00:14:58 2007 From: rdm at rcblue.com (Dick Moores) Date: Sun, 21 Jan 2007 21:14:58 -0800 Subject: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places? 
In-Reply-To: References: <7.0.1.0.2.20070115084916.07543c20@rcblue.com> <7.0.1.0.2.20070115104218.0783c580@rcblue.com> <7.0.1.0.2.20070121185922.0748ad80@rcblue.com> <7.0.1.0.2.20070121191650.074bf8c8@rcblue.com> <7.0.1.0.2.20070121203809.06287f08@rcblue.com> Message-ID: <7.0.1.0.2.20070121210534.06418028@rcblue.com> At 09:01 PM 1/21/2007, you wrote: >On 1/21/07, Dick Moores wrote: > > At 08:01 PM 1/21/2007, you wrote: > > >On 1/21/07, Dick Moores wrote: > > > > Sorry, I replied to the wrong post. What I really want to see is how > > > > to use clnum to do (5/23)**(2/27). > > > > > >The answer was already in the message you quoted: > > > > > > > > >In [7]: n.exp(n.log(n.mpq('5/23')*n.mpq('2/7'))) > > > > > >Out[7]: > > > > > >mpf('0.0621118012422360248447204968944099378881987577639751 > 5527949',55) > > > > No, think it's this: > > >>>from __future__ import division > > >>>(5/23)**(2/7) > > 0.64660732406541122 > > > > No? > >Well, since your question was: > > > > > Sorry, I replied to the wrong post. What I really want to see is how > > > > to use clnum to do (5/23)**(2/27). > >I gave you a clnum answer (with 2/7 instead of 2/27) (this assumes >"import clnum as n"): > >n.exp(n.log(n.mpq('5/23')*n.mpq('2/7'))) > > >Now, if your question had been > >"What I really want to see is how to use /plain python/ to do (5/23)**(2/7)." > >then the proper response would indeed be > >In [1]: from __future__ import division > >In [2]: (5/23)**(2/7) >Out[2]: 0.64660732406541122 > >or (without the __future__ statement): > >In [1]: (5/23.)**(2/7.) >Out[1]: 0.64660732406541122 OK, I'll reclarify. Of course, I'm trying to learn clnum. I thought that was understood. Please show me how to use clnum to compute (5/23)**(2/7) (that's 5/23 to the 2/7 power), with a precision of 50. Surely the answer is not mpf('0.06211180124223602484472049689440993788819875776397515527949',55)?? Just by inspection this is clear, is it not? A number between 0 and 1 raised to a power between 0 and 1 will get closer to 1, right? (5/23 is 0.217391304348) I believe you've given me (5/23.)*(2/7.). Thanks for your patience, Dick From fperez.net at gmail.com Mon Jan 22 00:22:14 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Sun, 21 Jan 2007 22:22:14 -0700 Subject: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places? In-Reply-To: <7.0.1.0.2.20070121210534.06418028@rcblue.com> References: <7.0.1.0.2.20070115084916.07543c20@rcblue.com> <7.0.1.0.2.20070115104218.0783c580@rcblue.com> <7.0.1.0.2.20070121185922.0748ad80@rcblue.com> <7.0.1.0.2.20070121191650.074bf8c8@rcblue.com> <7.0.1.0.2.20070121203809.06287f08@rcblue.com> <7.0.1.0.2.20070121210534.06418028@rcblue.com> Message-ID: On 1/21/07, Dick Moores wrote: > OK, I'll reclarify. Of course, I'm trying to learn clnum. I thought > that was understood. Please show me how to use clnum to compute > (5/23)**(2/7) (that's 5/23 to the 2/7 power), with a precision of > 50. Surely the answer is not > mpf('0.06211180124223602484472049689440993788819875776397515527949',55)?? > Just by inspection this is clear, is it not? A number between 0 and 1 > raised to a power between 0 and 1 will get closer to 1, right? (5/23 > is 0.217391304348) I believe you've given me (5/23.)*(2/7.). Sorry, my bad: one misplaced parenthesis and me being terribly sloppy in not actually checking the numerical value (I just hastily saw 0..6.. and didn't look further). 
In [2]: import clnum as n In [3]: n.set_default_precision(50) This is what I repeated to you like a mindless idiot: In [4]: n.exp(n.log(n.mpq('5/23')*n.mpq('2/7'))) Out[4]: mpf('0.06211180124223602484472049689440993788819875776397515527949',55) This is the correct form (note position of parenthesis): In [5]: n.exp(n.log(n.mpq('5/23'))*n.mpq('2/7')) Out[5]: mpf('0.6466073240654112346263901524238077888294103593272266200345',55) Again, my apology for the confusion, it was entirely my fault. Regards, f From rdm at rcblue.com Mon Jan 22 00:34:09 2007 From: rdm at rcblue.com (Dick Moores) Date: Sun, 21 Jan 2007 21:34:09 -0800 Subject: [SciPy-user] Can SciPy compute ln(640320**3 + 744)/163**.5 to 30 places? In-Reply-To: References: <7.0.1.0.2.20070115084916.07543c20@rcblue.com> <7.0.1.0.2.20070115104218.0783c580@rcblue.com> <7.0.1.0.2.20070121185922.0748ad80@rcblue.com> <7.0.1.0.2.20070121191650.074bf8c8@rcblue.com> <7.0.1.0.2.20070121203809.06287f08@rcblue.com> <7.0.1.0.2.20070121210534.06418028@rcblue.com> Message-ID: <7.0.1.0.2.20070121213304.066b4be0@rcblue.com> At 09:22 PM 1/21/2007, you wrote: >On 1/21/07, Dick Moores wrote: > > > OK, I'll reclarify. Of course, I'm trying to learn clnum. I thought > > that was understood. Please show me how to use clnum to compute > > (5/23)**(2/7) (that's 5/23 to the 2/7 power), with a precision of > > 50. Surely the answer is not > > mpf('0.06211180124223602484472049689440993788819875776397515527949',55)?? > > Just by inspection this is clear, is it not? A number between 0 and 1 > > raised to a power between 0 and 1 will get closer to 1, right? (5/23 > > is 0.217391304348) I believe you've given me (5/23.)*(2/7.). > >Sorry, my bad: one misplaced parenthesis and me being terribly sloppy >in not actually checking the numerical value (I just hastily saw >0..6.. and didn't look further). > >In [2]: import clnum as n > >In [3]: n.set_default_precision(50) > > >This is what I repeated to you like a mindless idiot: > >In [4]: n.exp(n.log(n.mpq('5/23')*n.mpq('2/7'))) >Out[4]: >mpf('0.06211180124223602484472049689440993788819875776397515527949',55) > > >This is the correct form (note position of parenthesis): > >In [5]: n.exp(n.log(n.mpq('5/23'))*n.mpq('2/7')) >Out[5]: mpf('0.6466073240654112346263901524238077888294103593272266200345',55) > > >Again, my apology for the confusion, it was entirely my fault. No, no. I kept screwing up my question. You've been of immense assistance. Thank you. Dick From emsellem at obs.univ-lyon1.fr Mon Jan 22 02:53:25 2007 From: emsellem at obs.univ-lyon1.fr (Eric Emsellem) Date: Mon, 22 Jan 2007 08:53:25 +0100 Subject: [SciPy-user] cblas_dtrsm still not found In-Reply-To: <45AFA28D.8000003@gmail.com> References: <45AF89A2.5060000@obs.univ-lyon1.fr> <45AF9FEA.1020702@gmail.com> <45AFA1BB.6030801@obs.univ-lyon1.fr> <45AFA28D.8000003@gmail.com> Message-ID: <45B46D75.5030300@obs.univ-lyon1.fr> An HTML attachment was scrubbed... URL: From robert.vergnes at yahoo.fr Mon Jan 22 02:55:23 2007 From: robert.vergnes at yahoo.fr (Robert VERGNES) Date: Mon, 22 Jan 2007 08:55:23 +0100 (CET) Subject: [SciPy-user] QME-Dev Workbench (wxSciPy) Alpha - RELEASED TODAY on sourceforge In-Reply-To: <20070120204433.GB4369@clipper.ens.fr> Message-ID: <20070122075523.87563.qmail@web27408.mail.ukl.yahoo.com> Dear All, The QME-Dev (wxSciPy) workbench for data analysis has been released. It is an Alpha version for test. Don't blame me for the crashes. 
But it is already possible to use it to graph and process data, load/save, and get/send data to/from the shell and apply Python modules to your data.

You will find it on sourceforge at: http://sourceforge.net/projects/qme-dev/

For any questions, please use the Forum page of the site. Also please note that the dependencies are (normally):

    Python 2.5
    SciPy and NumPy
    wxPython 2.8

Earlier versions of Python and other earlier libraries might work, but I have not tested them. As well, I did not (yet) test on MacOS and Linux.

In the zip file you will find a Mini-Help in PDF format with a small how-to, and a concept explanation for developers in the last chapter of the Mini-Help for those interested.

If you want some modules in: any useful function can be added easily and is welcome, provided that we are supplied with modules/functions that take a grid (matrix of float numbers) or tuples (with a matrix) as input, and give back a grid or tuple (with a matrix) of floating points as output. But you can already copy/paste your most used function into the editor of a data-page and then process it and get back the result to graph it - nearly directly (you have to press a button).

The Graph page is simple now and allows you only 3 curves at a time of x,y type, but it will be updated to allow other types of graph/plot (3D) etc...

Best Regards

Robert

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From robert.kern at gmail.com Mon Jan 22 03:02:07 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 22 Jan 2007 02:02:07 -0600 Subject: [SciPy-user] cblas_dtrsm still not found In-Reply-To: <45B46D75.5030300@obs.univ-lyon1.fr> References: <45AF89A2.5060000@obs.univ-lyon1.fr> <45AF9FEA.1020702@gmail.com> <45AFA1BB.6030801@obs.univ-lyon1.fr> <45AFA28D.8000003@gmail.com> <45B46D75.5030300@obs.univ-lyon1.fr> Message-ID: <45B46F7F.5000709@gmail.com>

Eric Emsellem wrote:
> HI again
>
> well I have now spent a lot of time trying to get back to my normal setup for scipy and nothing seems to work. I still get an error message on cblas_dtrsm even though I have tried to include all the libraries I can think of, even rewriting the links to these using the site.cfg (it always worked before without that, but I followed your advice there).

Okay, what are the contents of your site.cfg?

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From emsellem at obs.univ-lyon1.fr Mon Jan 22 04:47:38 2007 From: emsellem at obs.univ-lyon1.fr (Eric Emsellem) Date: Mon, 22 Jan 2007 10:47:38 +0100 Subject: [SciPy-user] cblas_dtrsm still not found References: 45B46D75.5030300@obs.univ-lyon1.fr Message-ID: <45B4883A.2050606@obs.univ-lyon1.fr>

Well, the uncommented lines are:

    #
    [DEFAULT]
    library_dirs = /usr/local/lib
    include_dirs = /usr/local/include
    #
    [blas_opt]
    libraries = f77blas, cblas, atlas
    #
    [lapack_opt]
    libraries = lapack, f77blas, cblas, atlas
    #
    [fftw]
    libraries = fftw3

it finds the libraries but not the _info part (not sure this is critical..?).

Eric

-- ==================================================================== Eric Emsellem emsellem at obs.univ-lyon1.fr Centre de Recherche Astrophysique de Lyon
9 av. Charles-Andre tel: +33 (0)4 78 86 83 84 69561 Saint-Genis Laval Cedex fax: +33 (0)4 78 86 83 86 France http://www-obs.univ-lyon1.fr/eric.emsellem ====================================================================

From massimo.sandal at unibo.it Mon Jan 22 05:12:22 2007 From: massimo.sandal at unibo.it (massimo sandal) Date: Mon, 22 Jan 2007 11:12:22 +0100 Subject: [SciPy-user] Re-releasing Python Equations under a new license? In-Reply-To: References: <20070118213309.GA31714@zunzun.com> Message-ID: <45B48E06.8050109@unibo.it>

Fernando Perez wrote:
> Two of the matplotlib backends (FLTK and WX) were contributed by private sector companies who are using matplotlib either internally or in a commercial product -- I doubt these companies would have been using matplotlib if the code were GPL. In my experience, the benefits of collaborating with the private sector are real, whereas the fear that some private company will "steal" your product and sell it in a proprietary application leaving you with nothing is not.

Probably you are right, but remember that with a BSD-ish licence, the fact they contributed *back* these backends was just courtesy - not their obligation. So, perhaps there is much good code based on scipy/mpl that won't ever be given back to the community (and that maybe would have been given back). And using a GPL program internally and modifying it is absolutely right. GPL covers redistribution, not internal use. Moreover, companies do not fear the GPL that much anymore.

I personally think LGPL is better than GPL in this regard, because having to make a program GPL just because it links a library seems frankly too much sometimes. OTOH:

- There are indeed large (i.e. with many contributors) GPL projects that allow commercial redistribution/double licensing - see MySQL and QT for example. I think that what matters is *starting* the project making the double-licensing exception clear to contributors.

- As you remarked, BSD code cannot engulf GPL code, but GPL can engulf BSD. So the advantages of collaborating with the private sector are somewhat hindered by the disadvantages of having to look hard for free, non-GPL-ish code.

m.

-- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387

-------------- next part -------------- A non-text attachment was scrubbed... Name: massimo.sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL:

From Giovanni.Samaey at cs.kuleuven.be Mon Jan 22 05:18:55 2007 From: Giovanni.Samaey at cs.kuleuven.be (Giovanni Samaey) Date: Mon, 22 Jan 2007 11:18:55 +0100 Subject: [SciPy-user] solve_banded documentation Message-ID: <45B48F8F.7000802@cs.kuleuven.ac.be>

Dear all,

the docstring of solve_banded seems to be incorrect or out of date, or at least unclear. It is not clear what the dimensions of the band matrix should be: the example uses an (l+u+1)-row matrix, while the input parameter description shows its transpose. This makes it difficult to figure out which elements of the diagonals are used and which are not.

I am currently figuring out on a small example how to use this method by comparing it to the solve method. This should not be necessary. (I can contribute docstrings later, if wanted.)

The input parameter naming in the docstring is inconsistent with the declaration line of the method, only adding to the confusion.

Is someone checking that the docstrings are correct when something is put in, and that they remain correct when methods are changed?

Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm
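[For reference, a small worked sketch of the banded storage that solve_banded expects -- an (l+u+1, n) array with ab[u + i - j, j] == A[i, j] -- inferred exactly as described above, by comparing against the dense solve:

    import numpy
    from scipy import linalg

    # tridiagonal system, so l = u = 1
    A = numpy.array([[4., 1., 0.],
                     [1., 4., 1.],
                     [0., 1., 4.]])
    b = numpy.array([1., 2., 3.])

    ab = numpy.array([[0., 1., 1.],    # superdiagonal; first entry unused
                      [4., 4., 4.],    # main diagonal
                      [1., 1., 0.]])   # subdiagonal; last entry unused
    x = linalg.solve_banded((1, 1), ab, b)

    print numpy.allclose(numpy.dot(A, x), b)   # True

The corner entries of ab (the unused ones above) are never read, so they can hold anything.]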
From clovisgo at gmail.com Mon Jan 22 05:20:45 2007 From: clovisgo at gmail.com (Clovis Goldemberg) Date: Mon, 22 Jan 2007 08:20:45 -0200 Subject: [SciPy-user] C2D answer Message-ID: <6f239f130701220220t5ea025ncdac89f07e8626e7@mail.gmail.com>

To Travis/Kern,

I am attaching the "quick and dirty" c2d function below. It can surely be improved, but I hope it helps...

ClovisGo

    ################################################################################################
    # Convert a frequency-domain transfer function into two discrete-domain polynomial functions
    def c2d(num_array, den_array, TS, w1=None):
        '''This function converts a frequency-domain transfer function, given by its
        numerator and denominator arrays, into a discrete-domain transfer function,
        considering TS sampling time [s]. Outputs are polynomials!
        Tustin method is used for this conversion, using an optional pre-warping
        frequency w1 [rd/s].
        Examples:
            num_array = [1.0]
            den_array = [1.0,1.0]
            TS = 1.0
            num_z, den_z = c2d(num_array, den_array, TS)
        Produces:
            num_z = poly1d([ 0.33333333 , 0.33333333])
            den_z = poly1d([ 1.0 , -0.33333333])
        '''
        num_z = make_z_poly(num_array, TS, w1=w1)
        den_z = make_z_poly(den_array, TS, w1=w1)
        M = numpy.poly1d(num_array).order
        N = numpy.poly1d(den_array).order
        poly1 = numpy.poly1d.__pow__(numpy.poly1d([1.0, 1.0]), (N-M))
        num_z = numpy.polymul(poly1, num_z)
        K = den_z.coeffs[0]
        poly1 = numpy.poly1d(1.0/K)
        num_z = numpy.polymul(poly1, num_z)
        den_z = numpy.polymul(poly1, den_z)
        return num_z, den_z

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From Giovanni.Samaey at cs.kuleuven.be Mon Jan 22 05:33:35 2007 From: Giovanni.Samaey at cs.kuleuven.be (Giovanni Samaey) Date: Mon, 22 Jan 2007 11:33:35 +0100 Subject: [SciPy-user] solve_banded documentation In-Reply-To: <45B48F8F.7000802@cs.kuleuven.ac.be> References: <45B48F8F.7000802@cs.kuleuven.ac.be> Message-ID: <45B492FF.10502@cs.kuleuven.ac.be>

The example is correct, and the input parameter description wasn't. A possible update is attached.

Giovanni Samaey wrote:
> Dear all,
>
> the docstring of solve_banded seems to be incorrect or out of date, or at least unclear. It is not clear what the dimensions of the band matrix should be: the example uses an (l+u+1)-row matrix, while the input parameter description shows its transpose. This makes it difficult to figure out which elements of the diagonals are used and which are not.
>
> I am currently figuring out on a small example how to use this method by comparing it to the solve method. This should not be necessary. (I can contribute docstrings later, if wanted.)
>
> The input parameter naming in the docstring is inconsistent with the declaration line of the method, only adding to the confusion.
>
> Is someone checking that the docstrings are correct when something is put in, and that they remain correct when methods are changed?
>
> Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm

Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm

-------------- next part -------------- An embedded and charset-unspecified text was scrubbed...
Name: doc.txt URL:

From cimrman3 at ntc.zcu.cz Mon Jan 22 05:41:06 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 22 Jan 2007 11:41:06 +0100 Subject: [SciPy-user] initial release of a finite element code Message-ID: <45B494C2.3060902@ntc.zcu.cz>

Hi all,

I have released the Python-based finite element code we use for our mainly biomechanical simulations. A very simple web-site runs on my own computer - http://ui505p06-mbs.ntc.zcu.cz/sfe - do not fear the long hostname. Its state can be called alpha at best (documentation missing), so be warned, yet I think it could be interesting for people doing science in Python (and Linux).

r.

From edschofield at gmail.com Mon Jan 22 05:55:50 2007 From: edschofield at gmail.com (Ed Schofield) Date: Mon, 22 Jan 2007 11:55:50 +0100 Subject: [SciPy-user] Convergence behaviour of iterative solvers In-Reply-To: References: <45B08F70.80208@iam.uni-stuttgart.de> <45B0BE7D.1070609@cs.kuleuven.ac.be> <1e2af89e0701190557n597de98bv9f685b2a7e97ad67@mail.gmail.com> <45B0E2E8.7070000@iam.uni-stuttgart.de> <1b5a37350701190838p772932f6h29a23962ce016230@mail.gmail.com> Message-ID: <1b5a37350701220255k599084bew2ead9d662e355a03@mail.gmail.com>

On 1/19/07, Nils Wagner wrote:
> So how can I use this functionality within the following script
>
>     from scipy import *
>
>     def func(x):
>         return 0.5*dot(x,dot(A,x))-dot(x,b)
>
>     def callback(x):
>         return linalg.norm(dot(A,x)-b)
>
>     n = 10
>     x0 = zeros(n,float)
>     A = random.rand(n,n)+diag(4*ones(n))
>     A = 0.5*(A+A.T)
>     b = random.rand(n)
>
>     x = optimize.fmin_cg(func,x0)

Hi Nils,

The callback function's return value is ignored. Make it do something, e.g.:

    def callback(x):
        print "||A.x - b|| = " + str(linalg.norm(dot(A,x)-b))

Then call it with:

    x = optimize.fmin_cg(func, x0, callback=callback)

-- Ed

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From matthew.brett at gmail.com Mon Jan 22 06:23:27 2007 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 22 Jan 2007 11:23:27 +0000 Subject: [SciPy-user] initial release of a finite element code In-Reply-To: <45B494C2.3060902@ntc.zcu.cz> Message-ID: <1e2af89e0701220323k65b7cd1flc0cd9781a8f3a68@mail.gmail.com>

Hi,

> A very simple web-site runs on my own computer - http://ui505p06-mbs.ntc.zcu.cz/sfe - do not fear the long hostname.

Ah - thank you very much for making this available. I noticed SFE was GPL licensed - I don't suppose there is any chance that you would consider BSD, would you? We:

http://neuroimaging.scipy.org/

are committed to a BSD license, so it's a big thing for us.

Thanks a lot,

Matthew

From cimrman3 at ntc.zcu.cz Mon Jan 22 06:43:01 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 22 Jan 2007 12:43:01 +0100 Subject: [SciPy-user] initial release of a finite element code In-Reply-To: <1e2af89e0701220323k65b7cd1flc0cd9781a8f3a68@mail.gmail.com> References: <45B494C2.3060902@ntc.zcu.cz> <1e2af89e0701220323k65b7cd1flc0cd9781a8f3a68@mail.gmail.com> Message-ID: <45B4A345.7070406@ntc.zcu.cz>

Matthew Brett wrote:
>> A very simple web-site runs on my own computer - http://ui505p06-mbs.ntc.zcu.cz/sfe - do not fear the long hostname.
>
> Ah - thank you very much for making this available. I noticed SFE was GPL licensed - I don't suppose there is any chance that you would consider BSD, would you? We:
>
> http://neuroimaging.scipy.org/
>
> are committed to a BSD license, so it's a big thing for us.
I have read carefully the discussion about licensing issues (Python Equations thread) and, personally, I am in favor of BSD, too. But I develop the code for my department and here things become more complicated... My colleagues do not understand the open source / free software license issues much, so after my brief introduction we quickly agreed on GPL - I wanted to make the release as soon as possible. IMHO more such reactions/opinions (together with cooperation offers :)) could flip their opinion.

cheers, r.

From nwagner at iam.uni-stuttgart.de Mon Jan 22 07:54:53 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 22 Jan 2007 13:54:53 +0100 Subject: [SciPy-user] Convergence behaviour of iterative solvers In-Reply-To: <1b5a37350701220255k599084bew2ead9d662e355a03@mail.gmail.com> References: <45B08F70.80208@iam.uni-stuttgart.de> <45B0BE7D.1070609@cs.kuleuven.ac.be> <1e2af89e0701190557n597de98bv9f685b2a7e97ad67@mail.gmail.com> <45B0E2E8.7070000@iam.uni-stuttgart.de> <1b5a37350701190838p772932f6h29a23962ce016230@mail.gmail.com> <1b5a37350701220255k599084bew2ead9d662e355a03@mail.gmail.com> Message-ID: <45B4B41D.9070704@iam.uni-stuttgart.de>

Ed Schofield wrote:
> On 1/19/07, *Nils Wagner* wrote:
>> So how can I use this functionality within the following script
>>
>>     from scipy import *
>>
>>     def func(x):
>>         return 0.5*dot(x,dot(A,x))-dot(x,b)
>>
>>     def callback(x):
>>         return linalg.norm(dot(A,x)-b)
>>
>>     n = 10
>>     x0 = zeros(n,float)
>>     A = random.rand(n,n)+diag(4*ones(n))
>>     A = 0.5*(A+A.T)
>>     b = random.rand(n)
>>
>>     x = optimize.fmin_cg(func,x0)
>
> Hi Nils,
>
> The callback function's return value is ignored. Make it do something, e.g.:
>
>     def callback(x):
>         print "||A.x - b|| = " + str(linalg.norm(dot(A,x)-b))
>
> Then call it with:
>
>     x = optimize.fmin_cg(func, x0, callback=callback)
>
> -- Ed

Hi Ed,

Thank you very much! Where should I place the following two lines?

    if callback is not None:
        callback(x)

Here is an excerpt of iterative.py (namely cg):

    while 1:
        x, iter_, resid, info, ndx1, ndx2, sclr1, sclr2, ijob = \
           revcom(b, x, work, iter_, resid, info, ndx1, ndx2, ijob)
        slice1 = slice(ndx1-1, ndx1-1+n)
        slice2 = slice(ndx2-1, ndx2-1+n)
        if (ijob == -1):
            if callback is not None:
                callback(x)
            break
        elif (ijob == 1):
            if matvec is None:
                matvec = get_matvec(A)
            work[slice2] *= sclr2
            work[slice2] += sclr1*matvec(work[slice1])
        elif (ijob == 2):
            if psolve is None:
                psolve = get_psolve(A)
            work[slice1] = psolve(work[slice2])
        elif (ijob == 3):
            if matvec is None:
                matvec = get_matvec(A)
            work[slice2] *= sclr2
            work[slice2] += sclr1*matvec(x)
        elif (ijob == 4):
            if ftflag:
                info = -1
                ftflag = False
            bnrm2, resid, info = stoptest(work[slice1], b, bnrm2, tol, info)
        ijob = 2

    return x, info

From aisaac at american.edu Mon Jan 22 08:55:34 2007 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 22 Jan 2007 08:55:34 -0500 Subject: [SciPy-user] Re-releasing Python Equations under a new license?
In-Reply-To: <45B48E06.8050109@unibo.it> References: <20070118213309.GA31714@zunzun.com> <45B48E06.8050109@unibo.it> Message-ID:

> Fernando Perez wrote:
>> Two of the matplotlib backends (FLTK and WX) were contributed by private sector companies who are using matplotlib either internally or in a commercial product -- I doubt these companies would have been using matplotlib if the code were GPL. In my experience, the benefits of collaborating with the private sector are real, whereas the fear that some private company will "steal" your product and sell it in a proprietary application leaving you with nothing is not.

On Mon, 22 Jan 2007, massimo sandal apparently responded:
> Probably you are right, but remember that with a BSD-ish licence, the fact they contributed back these backends was just courtesy - not their obligation. So, perhaps there is much good code based on scipy/mpl that won't ever be given back to the community (and that maybe would have been given back).

It is difficult/impossible to adequately discuss these issues in a mailing list format. This is an example. What does "*just* courtesy" mean? I would be more inclined to speculate that they also found it in their *interest*. Also you speculate that there is unshared code, but of course Fernando can then speculate that with a GPL license neither the shared nor the possibly unshared code would exist. (I think this is often right.) In short, the effect of the licensing issue on available free (as in speech) code is tricky to sort out, and we have to turn to our experiences to make decisions. Fernando's experience has been reported by others, and it is a powerful anecdote.

> Moreover, companies do not fear GPL that much anymore. I personally think LGPL is better than GPL in this regard, because having to make a program GPL just because it links a library seems frankly too much sometimes.

I would say "often" rather than "sometimes". Personally I think the GPL is brilliant and occasionally is important. But *usually* it is not necessary and too often it is counterproductive. Most troubling, and especially troubling in academe, it can really get in the way of sharing with less encumbered code (as we see repeatedly in discussions on this list).

Anyway, my original point was that a collection of GPL pros and cons is probably better generated in another forum. So I have wandered...

Cheers, Alan Isaac

From robert.vergnes at yahoo.fr Mon Jan 22 10:00:29 2007 From: robert.vergnes at yahoo.fr (Robert VERGNES) Date: Mon, 22 Jan 2007 16:00:29 +0100 (CET) Subject: [SciPy-user] QME-Dev Workbench (wxSciPy) Alpha - wx.aui error on line 17 In-Reply-To: Message-ID: <20070122150029.99256.qmail@web27412.mail.ukl.yahoo.com>

Hello,

This means that you need to upgrade your wxPython library to the latest one:

    for Python 2.5   -> 2.8.1.1
    for Python 2.4.x -> 2.8.1.1

So the Alpha09 should run on Python 2.4.x as well as on Python 2.5. (ANSI or Unicode should change nothing for this application.)

Report if not to: http://sourceforge.net/forum/forum.php?thread_id=1655366&forum_id=632392

Regards

-------------- next part -------------- An HTML attachment was scrubbed...
URL: 

From pgmdevlist at gmail.com Mon Jan 22 10:07:22 2007
From: pgmdevlist at gmail.com (Pierre GM)
Date: Mon, 22 Jan 2007 10:07:22 -0500
Subject: [SciPy-user] Misc Pyrex questions
Message-ID: <200701221007.22776.pgmdevlist@gmail.com>

Dear All,

I started playing with Pyrex this week-end, to see how easy it would be
to port some subclasses to C. A good thing is that it's not as bad as I
was dreading. I do have a lot of questions, however:

- Can I create a subclass w/ Pyrex? If yes, how? I haven't been able to
find any example in the numerous tutorials I browsed through, and my
naive attempt to:
  cdef NewSubclass(ndarray)
didn't work at all.

- I want to create an output ndarray from a given input
sequence/ndarray. I'm using PyArray_FROM_OTF, which asks me for a dtype.
How can I guess the dtype of the input sequence? In Python, a
numpy.asarray(obj) would do the trick, the dtype would be set to the
maximum possible.

- What'd be a more pyrex-esque way to code the resize and reshape
functions?

- Is there a standard way to deal w/ __getitem__? Especially when the
item is not an integer (but a slice or a sequence)?

Thanks a lot in advance for your help.
P.

PS: Sorry for the cross-posting

From matthew.brett at gmail.com Mon Jan 22 10:12:04 2007
From: matthew.brett at gmail.com (Matthew Brett)
Date: Mon, 22 Jan 2007 15:12:04 +0000
Subject: [SciPy-user] initial release of a finite element code
In-Reply-To: <45B4A345.7070406@ntc.zcu.cz>
References: <45B494C2.3060902@ntc.zcu.cz>
	<1e2af89e0701220323k65b7cd1flc0cd9781a8f3a68@mail.gmail.com>
	<45B4A345.7070406@ntc.zcu.cz>
Message-ID: <1e2af89e0701220712j6f00aa75i651017525d53f29f@mail.gmail.com>

Hi,

> My colleagues do not understand the open source / free software license
> issues much, so after my brief introduction we quickly agreed on GPL - I
> wanted to make the release as soon as possible. IMHO more such
> reactions/opinions (together with cooperation offers :)) could flip
> their opinion.

Is it easy to characterize why they preferred GPL?

I know this has been rehearsed in detail, but my summary of it would
be, if you want your code to become _the_ standard implementation,
there's a considerably better chance if you have a BSD license.

Thanks a lot,

Matthew

From matthew at sel.cam.ac.uk Mon Jan 22 10:18:51 2007
From: matthew at sel.cam.ac.uk (Matthew Vernon)
Date: Mon, 22 Jan 2007 15:18:51 +0000
Subject: [SciPy-user] initial release of a finite element code
In-Reply-To: <1e2af89e0701220712j6f00aa75i651017525d53f29f@mail.gmail.com>
References: <45B494C2.3060902@ntc.zcu.cz>
	<1e2af89e0701220323k65b7cd1flc0cd9781a8f3a68@mail.gmail.com>
	<45B4A345.7070406@ntc.zcu.cz>
	<1e2af89e0701220712j6f00aa75i651017525d53f29f@mail.gmail.com>
Message-ID: <8009A253-B6FA-485D-BAB9-C1263B489F67@sel.cam.ac.uk>

On 22 Jan 2007, at 15:12, Matthew Brett wrote:

> I know this has been rehearsed in detail, but my summary of it would
> be, if you want your code to become _the_ standard implementation,
> there's a considerably better chance if you have a BSD license.

The authors of emacs, gcc, gdb, autoconf, bash, etc., etc. would be
very surprised to hear you say that.
Matthew

-- 
Matthew Vernon MA VetMB LGSM MRCVS
Farm Animal Epidemiology and Informatics Unit
Department of Veterinary Medicine, University of Cambridge
http://www.cus.cam.ac.uk/~mcv21/

From matthew.brett at gmail.com Mon Jan 22 10:25:44 2007
From: matthew.brett at gmail.com (Matthew Brett)
Date: Mon, 22 Jan 2007 15:25:44 +0000
Subject: [SciPy-user] initial release of a finite element code
In-Reply-To: <8009A253-B6FA-485D-BAB9-C1263B489F67@sel.cam.ac.uk>
References: <45B494C2.3060902@ntc.zcu.cz>
	<1e2af89e0701220323k65b7cd1flc0cd9781a8f3a68@mail.gmail.com>
	<45B4A345.7070406@ntc.zcu.cz>
	<1e2af89e0701220712j6f00aa75i651017525d53f29f@mail.gmail.com>
	<8009A253-B6FA-485D-BAB9-C1263B489F67@sel.cam.ac.uk>
Message-ID: <1e2af89e0701220725t27b0f959t9330b9c903cdd614@mail.gmail.com>

> > I know this has been rehearsed in detail, but my summary of it would
> > be, if you want your code to become _the_ standard implementation,
> > there's a considerably better chance if you have a BSD license.
>
> The authors of emacs, gcc, gdb, autoconf, bash, etc., etc. would be
> very surprised to hear you say that.

Ah; well I would hope that, by reading the other threads, they would
think that I was referring to library code rather than applications, but
I agree that if they just had the email above, they might think I was
being a bit gauche. Obviously the differences between BSD and LGPL are
considerably more subtle.

Best,

Matthew

From matthew.brett at gmail.com Mon Jan 22 10:45:17 2007
From: matthew.brett at gmail.com (Matthew Brett)
Date: Mon, 22 Jan 2007 15:45:17 +0000
Subject: [SciPy-user] initial release of a finite element code
In-Reply-To: <1e2af89e0701220725t27b0f959t9330b9c903cdd614@mail.gmail.com>
References: <45B494C2.3060902@ntc.zcu.cz>
	<1e2af89e0701220323k65b7cd1flc0cd9781a8f3a68@mail.gmail.com>
	<45B4A345.7070406@ntc.zcu.cz>
	<1e2af89e0701220712j6f00aa75i651017525d53f29f@mail.gmail.com>
	<8009A253-B6FA-485D-BAB9-C1263B489F67@sel.cam.ac.uk>
	<1e2af89e0701220725t27b0f959t9330b9c903cdd614@mail.gmail.com>
Message-ID: <1e2af89e0701220745o586e1e60v4b36dbd2c11bf9bd@mail.gmail.com>

> > > I know this has been rehearsed in detail, but my summary of it would
> > > be, if you want your code to become _the_ standard implementation,
> > > there's a considerably better chance if you have a BSD license.

Or, put simply, for library code, the GPL takes a calculated risk of
losing users in the hope that this will force developers to commit to
open-source software:

http://www.gnu.org/licenses/why-not-lgpl.html

Best,

Matthew

From robert.kern at gmail.com Mon Jan 22 12:38:26 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 22 Jan 2007 11:38:26 -0600
Subject: [SciPy-user] cblas_dtrsm still not found
In-Reply-To: <45B4883A.2050606@obs.univ-lyon1.fr>
References: 45B46D75.5030300@obs.univ-lyon1.fr
	<45B4883A.2050606@obs.univ-lyon1.fr>
Message-ID: <45B4F692.6080904@gmail.com>

Eric Emsellem wrote:
> Well the uncommented lines are:
>
> #
> [DEFAULT]
> library_dirs = /usr/local/lib
> include_dirs = /usr/local/include
> #
> [blas_opt]
> libraries = f77blas, cblas, atlas
> #
> [lapack_opt]
> libraries = lapack, f77blas, cblas, atlas

These libraries are all in /usr/local/lib/atlas, right? So add

  library_dirs = /usr/local/lib/atlas

to [blas_opt] and [lapack_opt].

> #
> [fftw]
> libraries = fftw3
>
> it finds the libraries but not the _info part (not sure this is
> critical..?).

No, the _info part comes from the names of the classes in the code that
implement each section. Nothing to worry about.
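A quick way to verify that the edited site.cfg is actually the one being
read is to ask numpy's own detection machinery; a minimal sketch (run it
from the directory holding site.cfg, or after reinstalling numpy):

from numpy.distutils.system_info import get_info

# Both dicts should list the ATLAS libraries and show /usr/local/lib/atlas
# in 'library_dirs'; if they don't, the site.cfg edit is not being picked up.
print get_info('blas_opt')
print get_info('lapack_opt')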
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From tom.denniston at alum.dartmouth.org Mon Jan 22 12:50:25 2007 From: tom.denniston at alum.dartmouth.org (Tom Denniston) Date: Mon, 22 Jan 2007 11:50:25 -0600 Subject: [SciPy-user] lowess in scipy? Message-ID: I was looking for a lowess function like the R one in scipy. I was unable to find anything (via a simple sourcecode grep) and my google search yielded a thread between Robert and Travis which mentioned, among many other things, including the BioPython implementation of lowess in scipy. Was this ever done or is BioPython still the best place to look for a lowess algorithm? Or does anyone have any other suggestions for the best python lowess implementation? Robert Kern rkern at ucsd.edu Wed Oct 12 18:15:02 CDT 2005 Previous message: [SciPy-dev] Introductions, sparse matrix support Next message: [SciPy-dev] Package organization Messages sorted by: [ date ] [ thread ] [ subject ] [ author ] -------------------------------------------------------------------------------- Travis Oliphant wrote: > Robert Kern wrote: > >>I have more opinions on this, scipy package organization, and What >>Belongs In Scipy, but I think I've used up my opinion budget for the week. > > I don't know, people who contribute code get much, much larger opinion > budgets.... ;-) Okay. I warned you. I would like to see scipy's package organization become flatter and more oriented towards easy, lightweight, modular packaging rather than subject matter. For example, some people want bindings to SUNDIALS for ODEs. They could go into scipy.integrate, but that introduces a large, needless dependency for those who just want to compute integrals. So I would suggest that SUNDIALS bindings would be in their own scipy.sundials package. As another example, I might also suggest moving the simulated annealing module out into scipy.globalopt along with diffev.py and pso.py that are currently in my sandbox. They're all optimizers, but functionally they are unrelated to the extension-heavy optimizers that make up the remainder of scipy.optimize. The wavelets library that Fernando is working on would go in as scipy.wavelets rather than being stuffed into scipy.signal. You get the idea. This is also why I suggested making the scipy_core versions of fftpack and linalg be named scipy.corefft and scipy.corelinalg and not be aliased to scipy.fftpack and scipy.linalg. The "core" names reflect their packaging and their limited functionality. For one thing, this naming allows us to try to import the possibly optimized versions in the full scipy: # scipy/corelinalg/__init__.py import basic_lite svd = basic_lite.singular_value_decomposition ... try: from scipy import linalg svd = linalg.svd ... except ImportError: pass The explicit "core" names help us and others keep control over dependencies. Lots of the scipy subpackages need some linear algebra functions, but AFAICT, none actually require anything beyond what's in scipy_core. With the "core" names, we won't accidentally add a dependency on the full scipy.linalg without due deliberation. Okay, What Belongs In Scipy. It's somewhat difficult to answer the question, "Does this package belong in scipy?" without having a common answer to, "What is scipy?" 
I won't pretend to have the single answer to that last question, but I will start the dialogue based on the rationalizations I've come up with to defend my gut feelings. Things scipy is not: * A framework. You shouldn't have to restructure your programs to use the algorithms implemented in scipy. Sometimes the algorithms themselves may require it (e.g. reverse communication solvers), but that's not imposed by scipy. * Everything a scientist will need to do computing. For a variety of reasons, it's just not an achievable goal and, more importantly, it's not a good standard for making decisions. A lot of scientists need a good RDBMS, but there's no reason to put pysqlite into scipy. Enthon, package repositories, and specialized LiveCDs are better places to collect "everything." * A plotting library. (Sorry, had to throw that in.) Things scipy is: * A loose collection of slightly interdependent modules for numerical computing. * A common build environment that handles much of the annoying work for numerical extension modules. Does your module rely on a library that needs LAPACK or BLAS? If you put it in scipy, your users can configure the location of their optimized libraries *once*, and all of the scipy modules they build can use that information. * A good place to put numerical modules that don't otherwise have a good home. Things scipy *could* be: * An *excellent* build environment for library-heavy extension modules. To realize this, we would need to integrate the configuration portion of PETSc's BuildSystem or something equivalent. The automatic discovery/download/build works quite well. If this were to be realized, some packages might make more sense as subpackages of scipy. For example, matplotlib and pytables don't have much reason to be part of scipy right now, but if the libraries they depend on could be automatically detected/downloaded/built and shared with other scipy subpackages, then I think it might make sense for them to live in scipy, too. As Pearu suggested, as we port scipy packages to the new scipy_core we should audit and label them. To that end: * gui_thread, gplt, and plt are dead, I think. * xplt shambles along, but shouldn't return as scipy.xplt. It can never be *the* plotting library for scipy, and leaving it as scipy.xplt gives people that impression. * scipy.cluster is sorta broken and definitely incomplete. We should port over what's in Bio.Cluster. For that matter, there's quite a bit in biopython we should stea^Wport (Bio.GA, Bio.HMM, Bio.KDTree, Bio.NeuralNetwork, Bio.NaiveBayes, Bio.MaxEntropy, Bio.MarkovModel, Bio.LogisticRegression, Bio.Statistics.lowess). * The bindings to ODEPACK, QUADPACK, FITPACK, and MINPACK are handwritten. Should we mark them as, "f2py when you get the chance"? Otherwise, they probably count as "state-of-the-art" although we could always expand our offerings like exposing some of the other functions in ODEPACK. * scipy.optimize: I think I recently ran into a regression in the old scipy. fmin() wasn't finding the minimum of the Rosenbrock function in the tutorial. I'll have to check that again. The simulated annealing code could use some review. * scipy.special: cephes.round() seems to be buggy depending on the platform, and I think we got a bug report about one of the other functions. * I will maintain much of scipy.stats. Of course, that will probably mean, "throwing anova() into the sandbox never to return." Many of the other functions in stats.py need vetting. Now I'm sure I've used up my opinion budget. 
-- 
Robert Kern
rkern at ucsd.edu

"In the fields of hell where the grass grows high
 Are the graves of dreams allowed to die."
  -- Richard Harter

From mattknox_ca at hotmail.com Mon Jan 22 13:21:00 2007
From: mattknox_ca at hotmail.com (Matt Knox)
Date: Mon, 22 Jan 2007 13:21:00 -0500
Subject: [SciPy-user] (off topic) experience with pytables and masked arrays?
Message-ID: 

Sorry for the off topic post... but this seemed like the most likely spot
to find a response... I spoke with Francesc about this, and PyTables does
not support masked arrays (and won't in the near future). It's not too
hard to envision some possibilities about how one would create
workarounds for doing this kind of thing... but I thought I would just
ask to see if anyone has already tried this on their own. So... does
anyone that works with masked arrays use pytables for storage? And if so,
have any code they would be willing to share?

My initial thoughts on how one would do this would be to pick a "special"
value for each data type (eg. sys.maxint for int32, NaN for float,
etc...), and replace masked values with this value prior to writing. And
conversely, mask these values when reading (and probably not only mask
them, but replace them in the underlying data array with safer values for
performing operations (ie. array.fill_value)). Several additional
attributes would have to be stored along with the array: fill_value, some
kind of flag to indicate it is a masked array, etc...

In a perfect world this would be done within the pytables core, and then
all the array related functions and methods in pytables would just work
with masked arrays directly, requiring no special handling from the user,
but that isn't going to happen anytime soon. So if anyone has built their
own functions for dealing with this situation (or has some thoughts on
strategies for dealing with missing data points in pytables in general),
I would love to hear about it.

Thanks,

- Matt Knox

From vmas at carabos.com Mon Jan 22 14:03:33 2007
From: vmas at carabos.com (Vicent Mas (V+))
Date: Mon, 22 Jan 2007 20:03:33 +0100
Subject: [SciPy-user] docstrings typo in cluster module
Message-ID: <200701222003.33341.vmas@carabos.com>

Hi,

Recently I've been using the cluster module. After reading the module
docstrings I think there are a couple of typos in the vq method. It says

               #   c0   c1   c2   c3
   code_book = [[ 1., 2., 3., 4.], #f0
                [ 1., 2., 3., 4.], #f1
                [ 1., 2., 3., 4.]]) #f2

   Outputs:
      code -- 1D array. If obs is a NxM array, then a length M array

but should say:

               #   f0   f1   f2   f3
   code_book = [[ 1., 2., 3., 4.], #c0
                [ 1., 2., 3., 4.], #c1
                [ 1., 2., 3., 4.]]) #c2

   Outputs:
      code -- 1D array. If obs is a NxM array, then a length N array

Please, let me know if I'm wrong.

-- 
::
 \ /   Vicent Mas        http://www.carabos.com
 0;0   Cárabos Coop.     Enjoy Data
 / \
 V V
 " "

From ryanlists at gmail.com Mon Jan 22 15:59:53 2007
From: ryanlists at gmail.com (Ryan Krauss)
Date: Mon, 22 Jan 2007 14:59:53 -0600
Subject: [SciPy-user] C2D answer
In-Reply-To: <6f239f130701220220t5ea025ncdac89f07e8626e7@mail.gmail.com>
References: <6f239f130701220220t5ea025ncdac89f07e8626e7@mail.gmail.com>
Message-ID: 

Hey ClovisGo,

I would like to build on this c2d function (mainly verify it and add a
zoh option). Can you please provide me with the function make_z_poly and
some description of your algorithm?

Thanks,

Ryan

On 1/22/07, Clovis Goldemberg wrote:
> To Travis/Kern,
>
> I am attaching the "quick and dirty" c2d function below.
> It can be surely improved but I hope it helps...
>
> ClovisGo
>
> ################################################################################################
> # Convert a frequency-domain transfer function into two discrete-domain
> # polynomial functions
> def c2d(num_array, den_array, TS, w1=None):
>     '''This function converts a frequency-domain transfer function,
>     given by its numerator and denominator arrays, into a discrete-domain
>     transfer function, considering TS sampling time [s]. Outputs are
>     polynomials! Tustin method is used for this conversion, using an
>     optional pre-warping frequency w1 [rd/s].
>     Examples:
>         num_array = [ 1.0]
>         den_array = [1.0,1.0]
>         TS = 1.0
>         num_z, den_z = c2d(num_array, den_array, TS)
>     Produces:
>         num_z = poly1d([ 0.33333333 , 0.33333333])
>         den_z = poly1d([ 1.0 , -0.33333333])
>     '''
>     num_z = make_z_poly(num_array, TS, w1=w1)
>     den_z = make_z_poly(den_array, TS, w1=w1)
>     M = numpy.poly1d(num_array).order
>     N = numpy.poly1d(den_array).order
>     poly1 = numpy.poly1d.__pow__(numpy.poly1d([1.0, 1.0]), (N-M))
>     num_z = numpy.polymul(poly1, num_z)
>     K = den_z.coeffs[0]
>     poly1 = numpy.poly1d(1.0/K)
>     num_z = numpy.polymul(poly1, num_z)
>     den_z = numpy.polymul(poly1, den_z)
>     return num_z, den_z
>
>
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>
>

From Karl.Young at ucsf.edu Mon Jan 22 17:16:56 2007
From: Karl.Young at ucsf.edu (Karl Young)
Date: Mon, 22 Jan 2007 14:16:56 -0800
Subject: [SciPy-user] scipy data mining ?
Message-ID: <45B537D8.7080103@ucsf.edu>

I'm currently using a nice Java based data mining package called Weka
(essentially as a black box as I don't have time to learn Java) but was
looking for something more python/scipy friendly to switch to as I'd
prefer more interactive use. I found a python package on the web that
potentially looks pretty nice (Orange - http://www.ailab.si/orange) but
given that it uses GPL (and also given the recent discussion on license
issues) and doesn't look to have made any effort to be numpy/scipy
friendly, I was wondering if anyone was aware of a more scipy friendly
effort. Should someone (maybe even me...) be talked into contacting the
Orange developers and seeing if they'd be interested in a switch to BSD
and a gradual evolution towards integration with numpy... ?

-- 

Karl Young
Center for Imaging of Neurodegenerative Diseases, UCSF
VA Medical Center (114M)    Phone: (415) 221-4810 x3114 lab
4150 Clement Street         FAX: (415) 668-2864
San Francisco, CA 94121     Email: karl young at ucsf edu

From pgmdevlist at gmail.com Mon Jan 22 19:12:56 2007
From: pgmdevlist at gmail.com (Pierre GM)
Date: Mon, 22 Jan 2007 19:12:56 -0500
Subject: [SciPy-user] lowess in scipy?
In-Reply-To: 
References: 
Message-ID: <200701221912.56436.pgmdevlist@gmail.com>

On Monday 22 January 2007 12:50, Tom Denniston wrote:
> Or does anyone have any other
> suggestions for the best python lowess implementation?

The easiest by far turned out to be to find the Fortran sources for
lowess and/or stl from netlib (the links are also available here:
http://stat.bell-labs.com/wsc/smoothsoft.html
), and write an f2py wrapper.

If you're interested, I have a slightly modified version of lowess.f and
stl.f (just merging the two), and the corresponding .pyf to process it.
I'm not sure about the actual license status. If it's BSD, we could think
about putting the fortran sources in scipy. I never really tried to find
an answer to that.
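To make the f2py route concrete, a minimal sketch. It assumes the plain
netlib lowess.f, whose main routine has the classic signature
LOWESS(X,Y,N,F,NSTEPS,DELTA,YS,RW,RES), auto-wrapped with
"f2py -c lowess.f -m flowess" (the module name is arbitrary); a
hand-written .pyf like the one Pierre mentions would declare intent(out)
and give a cleaner signature. Note that the netlib source is single
precision, so float32 arrays are needed; otherwise f2py silently works on
copies and the in-place outputs are lost:

import numpy as N
import flowess   # hypothetical name; built with: f2py -c lowess.f -m flowess

x = N.sort(N.random.rand(100)).astype(N.float32)
y = (x + 0.1 * N.random.randn(100)).astype(N.float32)
ys = N.zeros(100, N.float32)    # smoothed values (filled in place)
rw = N.zeros(100, N.float32)    # robustness weights (filled in place)
res = N.zeros(100, N.float32)   # residuals (filled in place)

# f = smoother span, nsteps = robustness iterations, delta = interpolation
# cutoff; n is normally inferred from len(x), otherwise pass it explicitly.
flowess.lowess(x, y, 0.3, 2, 0.0, ys, rw, res)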
From tom.denniston at alum.dartmouth.org Mon Jan 22 21:48:22 2007
From: tom.denniston at alum.dartmouth.org (Tom Denniston)
Date: Mon, 22 Jan 2007 20:48:22 -0600
Subject: [SciPy-user] lowess in scipy?
In-Reply-To: <200701221912.56436.pgmdevlist@gmail.com>
References: <200701221912.56436.pgmdevlist@gmail.com>
Message-ID: 

Pierre,

Thanks for your answer.

It's a part of netlib I think. The legal info from netlib is here:
http://netlib.bell-labs.com/netlib/research/boilerplate.gz

It seems to come with a BSD-like license from what I can tell. They only
ask that you cite them as the source of the code. I think it would be
great to get it into scipy. Am I reading the legal notice correctly?

--Tom

On 1/22/07, Pierre GM wrote:
> On Monday 22 January 2007 12:50, Tom Denniston wrote:
> > Or does anyone have any other
> > suggestions for the best python lowess implementation?
>
> The easiest by far turned out to be to find the Fortran sources for
> lowess and/or stl from netlib (the links are also available here:
> http://stat.bell-labs.com/wsc/smoothsoft.html
> ), and write an f2py wrapper.
>
> If you're interested, I have a slightly modified version of lowess.f
> and stl.f (just merging the two), and the corresponding .pyf to
> process it. I'm not sure about the actual license status. If it's BSD,
> we could think about putting the fortran sources in scipy. I never
> really tried to find an answer to that.
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From robert.kern at gmail.com Mon Jan 22 22:07:11 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 22 Jan 2007 21:07:11 -0600
Subject: [SciPy-user] lowess in scipy?
In-Reply-To: 
References: <200701221912.56436.pgmdevlist@gmail.com>
Message-ID: <45B57BDF.4040507@gmail.com>

Tom Denniston wrote:
> Pierre,
>
> Thanks for your answer.
>
> It's a part of netlib I think. The legal info from netlib is here:
> http://netlib.bell-labs.com/netlib/research/boilerplate.gz
>
> It seems to come with a BSD-like license from what I can tell. They
> only ask that you cite them as the source of the code. I think it
> would be great to get it into scipy. Am I reading the legal notice
> correctly?

netlib is just a repository; it is not a package that one can be "part
of." The code in netlib has varying licenses. That file does not apply
to any particular piece of code, in fact; it's just a recommended (or
perhaps internally required by Bell Labs at one point) template for
authors to fill out.

The clear statement of the copyright for the DLOESS subroutine can be
found in its shar archive:

http://www.netlib.org/a/dloess

/*
 * The authors of this software are Cleveland, Grosse, and Shyu.
 * Copyright (c) 1989, 1992 by AT&T.
 * Permission to use, copy, modify, and distribute this software for any
 * purpose without fee is hereby granted, provided that this entire notice
 * is included in all copies of any software which is or includes a copy
 * or modification of this software and in all copies of the supporting
 * documentation for such software.
 * THIS SOFTWARE IS BEING PROVIDED "AS IS", WITHOUT ANY EXPRESS OR IMPLIED
 * WARRANTY. IN PARTICULAR, NEITHER THE AUTHORS NOR AT&T MAKE ANY
 * REPRESENTATION OR WARRANTY OF ANY KIND CONCERNING THE MERCHANTABILITY
 * OF THIS SOFTWARE OR ITS FITNESS FOR ANY PARTICULAR PURPOSE.
*/ -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From gael.varoquaux at normalesup.org Tue Jan 23 01:52:23 2007 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Tue, 23 Jan 2007 07:52:23 +0100 Subject: [SciPy-user] lowess in scipy? In-Reply-To: <45B57BDF.4040507@gmail.com> References: <200701221912.56436.pgmdevlist@gmail.com> <45B57BDF.4040507@gmail.com> Message-ID: <20070123065223.GB6954@clipper.ens.fr> On Mon, Jan 22, 2007 at 09:07:11PM -0600, Robert Kern wrote: > /* > * The authors of this software are Cleveland, Grosse, and Shyu. > * Copyright (c) 1989, 1992 by AT&T. > * Permission to use, copy, modify, and distribute this software for any > * purpose without fee is hereby granted, provided that this entire notice > * is included in all copies of any software which is or includes a copy > * or modification of this software and in all copies of the supporting > * documentation for such software. > * THIS SOFTWARE IS BEING PROVIDED "AS IS", WITHOUT ANY EXPRESS OR IMPLIED > * WARRANTY. IN PARTICULAR, NEITHER THE AUTHORS NOR AT&T MAKE ANY > * REPRESENTATION OR WARRANTY OF ANY KIND CONCERNING THE MERCHANTABILITY > * OF THIS SOFTWARE OR ITS FITNESS FOR ANY PARTICULAR PURPOSE. > */ Now I am being stupid, but what kind of license is that ? Ga?l From robert.kern at gmail.com Tue Jan 23 02:27:01 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 23 Jan 2007 01:27:01 -0600 Subject: [SciPy-user] lowess in scipy? In-Reply-To: <20070123065223.GB6954@clipper.ens.fr> References: <200701221912.56436.pgmdevlist@gmail.com> <45B57BDF.4040507@gmail.com> <20070123065223.GB6954@clipper.ens.fr> Message-ID: <45B5B8C5.3070900@gmail.com> Gael Varoquaux wrote: > On Mon, Jan 22, 2007 at 09:07:11PM -0600, Robert Kern wrote: >> /* >> * The authors of this software are Cleveland, Grosse, and Shyu. >> * Copyright (c) 1989, 1992 by AT&T. >> * Permission to use, copy, modify, and distribute this software for any >> * purpose without fee is hereby granted, provided that this entire notice >> * is included in all copies of any software which is or includes a copy >> * or modification of this software and in all copies of the supporting >> * documentation for such software. >> * THIS SOFTWARE IS BEING PROVIDED "AS IS", WITHOUT ANY EXPRESS OR IMPLIED >> * WARRANTY. IN PARTICULAR, NEITHER THE AUTHORS NOR AT&T MAKE ANY >> * REPRESENTATION OR WARRANTY OF ANY KIND CONCERNING THE MERCHANTABILITY >> * OF THIS SOFTWARE OR ITS FITNESS FOR ANY PARTICULAR PURPOSE. >> */ > > Now I am being stupid, but what kind of license is that ? It's substantially similar to the MIT license. http://opensource.org/licenses/mit-license.php It's scipy-compatible, which is all that I care about. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From emsellem at obs.univ-lyon1.fr Tue Jan 23 03:56:09 2007 From: emsellem at obs.univ-lyon1.fr (Eric Emsellem) Date: Tue, 23 Jan 2007 09:56:09 +0100 Subject: [SciPy-user] cblas_dtrsm still not found In-Reply-To: <45B4F692.6080904@gmail.com> References: 45B46D75.5030300@obs.univ-lyon1.fr <45B4883A.2050606@obs.univ-lyon1.fr> <45B4F692.6080904@gmail.com> Message-ID: <45B5CDA9.9020508@obs.univ-lyon1.fr> An HTML attachment was scrubbed... 
URL: From emsellem at obs.univ-lyon1.fr Tue Jan 23 10:48:55 2007 From: emsellem at obs.univ-lyon1.fr (Eric Emsellem) Date: Tue, 23 Jan 2007 16:48:55 +0100 Subject: [SciPy-user] Installing scipy: getting worse.. In-Reply-To: <45B4F692.6080904@gmail.com> References: 45B46D75.5030300@obs.univ-lyon1.fr <45B4883A.2050606@obs.univ-lyon1.fr> <45B4F692.6080904@gmail.com> Message-ID: <45B62E67.303@obs.univ-lyon1.fr> Hi, after the trouble I had with lapack and blas, I decided to reinstall everything.. Now I end up with a different (still annoying) message about libraries not being found. Here it is (and below I provide the details of what I have done to install everything again). Any help is VERY welcome ....!!! Eric ### Doing a config on scipy gives: python setup.py config Traceback (most recent call last): File "setup.py", line 55, in ? setup_package() File "setup.py", line 28, in setup_package from numpy.distutils.core import setup File "/usr/local/lib/python2.4/site-packages/numpy/__init__.py", line 40, in ? import linalg File "/usr/local/lib/python2.4/site-packages/numpy/linalg/__init__.py", line 4, in ? from linalg import * File "/usr/local/lib/python2.4/site-packages/numpy/linalg/linalg.py", line 25, in ? from numpy.linalg import lapack_lite ImportError: /usr/local/lib/python2.4/site-packages/numpy/linalg/lapack_lite.so: undefined symbol: ATL_cGetNB ######### Here are the details of my installation: ################# NUMPY/SCIPY ################## ## INSTALLATION including blas, lapack, atlas ## ################################################ ## Getting the svn for numpy and scipy svn co http://svn.scipy.org/svn/numpy/trunk numpy svn co http://svn.scipy.org/svn/scipy/trunk scipy ## allowing root su ## build and install numpy cd numpy python setup.py install >& inst.log & tail -f inst.log ## Going to scipy directory to build everything else cd .. cd scipy ## First BLAS mkdir -p blas cd blas ## Getting the archive wget http://www.netlib.org/blas/blas.tgz tar xvzf blas.tgz ## Compiling g77 -fno-second-underscore -O2 -c *.f ## Making the library ar r libfblas.a *.o ranlib libfblas.a ## Cleaning rm -rf *.o ## Coping the output where it should be cp libfblas.a /usr/local/lib # For bash use export, for tcsh use setenv of course # export BLAS=/usr/local/lib/libfblas.a setenv BLAS /usr/local/lib/libfblas.a ## Doing LAPACK now cd .. wget http://www.netlib.org/lapack/lapack.tgz tar xzf lapack.tgz cd lapack-3.1.0 cp INSTALL/make.inc.LINUX make.inc ###### Now you must edit make.inc and change (if necessary) the following values: OPTS = "-O2" ###### ## !!!!!! ## WARNING ## With 10.2 OpenSuse you must do: ## cd /lib ## ln -s libgcc_s.so.1 libgcc_s.so ## Maybe it will be corrected later ## !!!!!! ###### make lapacklib >& make.log & tail -f make.log ###### ## Now we obtain and compile the ATLAS package. Get it from: ## https://sourceforge.net/projects/math-atlas/ ###### ## ## !!!!!! ## Seems that version 3.6.0 does not always work and provide errors such as: ## make[3]: *** [res/cMMRES] Erreur 255 ## GetMMRES: Assertion `fp' failed. 
## ## -> so getting the latest "unstable version": 3.7.25 ## cd scipy bunzip2 atlas3.7.25.tar.bz2 tar xvf atlas3.7.25.tar ## CPU THROTTLING OFF (otherwise ATLAS builing makes no sense ) cpufreq-set -g performance ## compiling ATLAS cd ATLAS ## giving a name for this build mkdir ATLAS_Linux_P4E ; cd ATLAS_Linux_P4E ../configure ## Now merging the lapack lib and the programs from atlas make cd lib mkdir tmp cd tmp ## extracting the obj from ATLAS lapack lib ar x ../liblapack.a cp ../liblapack.a ../liblapack.a.sav cp ../../../../lapack-3.1.0/lapack_LINUX.a ../liblapack.a ## adding files to the lapack lib from lapack ar r ../liblapack.a *.o cd .. rm -rf tmp cp liblapack.a /usr/local/lib # export LAPACK=/usr/local/lib/liblapack.a setenv LAPACK /usr/local/lib/liblapack.a mkdir /usr/local/lib/atlas cp *.a /usr/local/lib/atlas # export ATLAS=/usr/local/lib/atlas setenv ATLAS /usr/local/lib/atlas ## Until that point everything seems to have gone ok, but now... cd ../../.. python setup.py install ==> crashing with the message mentioned above -- ==================================================================== Eric Emsellem emsellem at obs.univ-lyon1.fr Centre de Recherche Astrophysique de Lyon 9 av. Charles-Andre tel: +33 (0)4 78 86 83 84 69561 Saint-Genis Laval Cedex fax: +33 (0)4 78 86 83 86 France http://www-obs.univ-lyon1.fr/eric.emsellem ==================================================================== From gnata at obs.univ-lyon1.fr Tue Jan 23 12:25:40 2007 From: gnata at obs.univ-lyon1.fr (Gnata Xavier) Date: Tue, 23 Jan 2007 18:25:40 +0100 Subject: [SciPy-user] Installing scipy: getting worse.. In-Reply-To: <45B62E67.303@obs.univ-lyon1.fr> References: 45B46D75.5030300@obs.univ-lyon1.fr <45B4883A.2050606@obs.univ-lyon1.fr> <45B4F692.6080904@gmail.com> <45B62E67.303@obs.univ-lyon1.fr> Message-ID: <45B64514.70302@obs.univ-lyon1.fr> Hi, undefined symbol: ATL_cGetNB : Ok, looks like you have problem with the way you link lapack, blass and atlas after you compile it. I cannot remember your way to compile all this stuff because it is not needed on my box :). Google tells me that "libatlas needs to be linked in after liblapack" A line containing //"-latlas -llapack" is wrong. It should be "//-llapack -latlas" or "-llapack -lf77blas -latlas" if needed. I hope this helps. I have no clue why you haven't seen this bug before. Maybe just because this function was not yet wrapped in scipy... Xavier > Hi, > > after the trouble I had with lapack and blas, I decided to reinstall > everything.. > Now I end up with a different (still annoying) message about libraries > not being found. > Here it is (and below I provide the details of what I have done to > install everything again). > > Any help is VERY welcome ....!!! > > Eric > > ### Doing a config on scipy gives: > python setup.py config > > Traceback (most recent call last): > File "setup.py", line 55, in ? > setup_package() > File "setup.py", line 28, in setup_package > from numpy.distutils.core import setup > File "/usr/local/lib/python2.4/site-packages/numpy/__init__.py", line > 40, in ? > import linalg > File > "/usr/local/lib/python2.4/site-packages/numpy/linalg/__init__.py", line > 4, in ? > from linalg import * > File "/usr/local/lib/python2.4/site-packages/numpy/linalg/linalg.py", > line 25, in ? 
> from numpy.linalg import lapack_lite > ImportError: > /usr/local/lib/python2.4/site-packages/numpy/linalg/lapack_lite.so: > undefined symbol: ATL_cGetNB > > ######### Here are the details of my installation: > ################# NUMPY/SCIPY ################## > ## INSTALLATION including blas, lapack, atlas ## > ################################################ > > ## Getting the svn for numpy and scipy > svn co http://svn.scipy.org/svn/numpy/trunk numpy > svn co http://svn.scipy.org/svn/scipy/trunk scipy > > ## allowing root > su > > ## build and install numpy > cd numpy > python setup.py install >& inst.log & > tail -f inst.log > > ## Going to scipy directory to build everything else > cd .. > cd scipy > > ## First BLAS > mkdir -p blas > cd blas > ## Getting the archive > wget http://www.netlib.org/blas/blas.tgz > tar xvzf blas.tgz > ## Compiling > g77 -fno-second-underscore -O2 -c *.f > ## Making the library > ar r libfblas.a *.o > ranlib libfblas.a > ## Cleaning > rm -rf *.o > ## Coping the output where it should be > cp libfblas.a /usr/local/lib > # For bash use export, for tcsh use setenv of course > # export BLAS=/usr/local/lib/libfblas.a > setenv BLAS /usr/local/lib/libfblas.a > > ## Doing LAPACK now > cd .. > wget http://www.netlib.org/lapack/lapack.tgz > tar xzf lapack.tgz > cd lapack-3.1.0 > cp INSTALL/make.inc.LINUX make.inc > > ###### > Now you must edit make.inc and change (if necessary) the following values: > OPTS = "-O2" > ###### > ## !!!!!! > ## WARNING > ## With 10.2 OpenSuse you must do: > ## cd /lib > ## ln -s libgcc_s.so.1 libgcc_s.so > ## Maybe it will be corrected later > ## !!!!!! > ###### > > make lapacklib >& make.log & > tail -f make.log > > ###### > ## Now we obtain and compile the ATLAS package. Get it from: > ## https://sourceforge.net/projects/math-atlas/ > ###### > ## > ## !!!!!! > ## Seems that version 3.6.0 does not always work and provide errors such as: > ## make[3]: *** [res/cMMRES] Erreur 255 > ## GetMMRES: Assertion `fp' failed. > ## > ## -> so getting the latest "unstable version": 3.7.25 > ## > cd scipy > bunzip2 atlas3.7.25.tar.bz2 > tar xvf atlas3.7.25.tar > > ## CPU THROTTLING OFF (otherwise ATLAS builing makes no sense ) > cpufreq-set -g performance > ## compiling ATLAS > cd ATLAS > ## giving a name for this build > mkdir ATLAS_Linux_P4E ; cd ATLAS_Linux_P4E > ../configure > > ## Now merging the lapack lib and the programs from atlas > make > cd lib > mkdir tmp > cd tmp > ## extracting the obj from ATLAS lapack lib > ar x ../liblapack.a > cp ../liblapack.a ../liblapack.a.sav > cp ../../../../lapack-3.1.0/lapack_LINUX.a ../liblapack.a > ## adding files to the lapack lib from lapack > ar r ../liblapack.a *.o > cd .. > rm -rf tmp > cp liblapack.a /usr/local/lib > # export LAPACK=/usr/local/lib/liblapack.a > setenv LAPACK /usr/local/lib/liblapack.a > mkdir /usr/local/lib/atlas > cp *.a /usr/local/lib/atlas > # export ATLAS=/usr/local/lib/atlas > setenv ATLAS /usr/local/lib/atlas > > ## Until that point everything seems to have gone ok, but now... > cd ../../.. > python setup.py install > > ==> crashing with the message mentioned above > > From robert.kern at gmail.com Tue Jan 23 13:58:14 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 23 Jan 2007 12:58:14 -0600 Subject: [SciPy-user] Installing scipy: getting worse.. 
In-Reply-To: <45B62E67.303@obs.univ-lyon1.fr> References: 45B46D75.5030300@obs.univ-lyon1.fr <45B4883A.2050606@obs.univ-lyon1.fr> <45B4F692.6080904@gmail.com> <45B62E67.303@obs.univ-lyon1.fr> Message-ID: <45B65AC6.4080402@gmail.com> [Pre. S.: You don't have to include me on the To: line as well. I'm on the list, and it messes up my mail filters (which admittedly are probably pretty lame).] Eric Emsellem wrote: > Hi, > > after the trouble I had with lapack and blas, I decided to reinstall > everything.. > Now I end up with a different (still annoying) message about libraries > not being found. > Here it is (and below I provide the details of what I have done to > install everything again). > > Any help is VERY welcome ....!!! > > Eric > > ### Doing a config on scipy gives: > python setup.py config > > Traceback (most recent call last): > File "setup.py", line 55, in ? > setup_package() > File "setup.py", line 28, in setup_package > from numpy.distutils.core import setup > File "/usr/local/lib/python2.4/site-packages/numpy/__init__.py", line > 40, in ? > import linalg > File > "/usr/local/lib/python2.4/site-packages/numpy/linalg/__init__.py", line > 4, in ? > from linalg import * > File "/usr/local/lib/python2.4/site-packages/numpy/linalg/linalg.py", > line 25, in ? > from numpy.linalg import lapack_lite > ImportError: > /usr/local/lib/python2.4/site-packages/numpy/linalg/lapack_lite.so: > undefined symbol: ATL_cGetNB Xavier is probably correct in his assessment why you get this error. However, it looks like you built ATLAS after numpy which is odd. Make sure that you build ATLAS first. You're putting it in /usr/local/lib/atlas/ which is fine. Now, make a site.cfg file with the right information. # The order of the libraries is important! Don't change them! [blas_opt] library_dirs = /usr/local/lib/atlas/ libraries = f77blas, cblas, atlas [lapack_opt] library_dirs = /usr/local/lib/atlas/ libraries = lapack, f77blas, cblas, atlas Now build numpy. Put the site.cfg file next to the setup.py file in the root of your checkout directory. The install should go smoothly. Run the test suite to make sure. $ python -c "import numpy; numpy.test()" Now build scipy. Again, put the site.cfg file next to the setup.py file in the root of your checkout directory. The install should also go smoothly. Run the test suite to make sure. $ python -c "import scipy; scipy.test()" -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From michael.sorich at gmail.com Tue Jan 23 18:49:56 2007 From: michael.sorich at gmail.com (Michael Sorich) Date: Wed, 24 Jan 2007 10:19:56 +1030 Subject: [SciPy-user] scipy data mining ? In-Reply-To: <45B537D8.7080103@ucsf.edu> References: <45B537D8.7080103@ucsf.edu> Message-ID: <16761e100701231549n68f6cf7vd134c55cb314c772@mail.gmail.com> Hi Karl, I have used Orange before and it is quite good - the range of methods available are not as comprehensive as Weka but is much easier to write a script compared to using jython and weka. I would also prefer it to be better integrated with scipy - both in terms of code and/or licensing. I don't know how realistic this is but it is worth asking. I suspect that at least some of the code may be based on other GPL libraries. Also the main data class is an ExampleTable which is implemented in C++ and is structured quite differently to any numpy array - recarray is probably closest. 
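For the ExampleTable point, a bridge would presumably look something like
the sketch below; it assumes nothing about Orange's API beyond being able
to dump a table to a plain Python list of rows (the rows here are made up
by hand):

import numpy

# two numeric features plus a class label, as rows might come out of a
# data-mining table
rows = [(5.1, 3.5, 'a'), (4.9, 3.0, 'b'), (6.2, 2.8, 'a')]
ra = numpy.rec.fromrecords(rows, names='f0,f1,cls')

print ra.f0.mean()        # numeric columns behave like ordinary arrays
print ra[ra.cls == 'a']   # the label column supports boolean selection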
Good luck, Michael On 1/23/07, Karl Young wrote: > > I'm currently using a nice Java based data mining package called Weka > (essentially as a black box as I don't have time to learn Java) but was > looking for something more python/scipy friendly to switch to as I'd > prefer more interactive use. I found a python package on the web that > potentially looks pretty nice (Orange - http://www.ailab.si/orange) but > given that it uses GPL (and also given the recent discussion on license > issues) and doesn't look to have made any effort to be numpy/scipy > friendly I was wondering if anyone was aware of a more scipy friendly > effort. Should someone (maybe even me...) be talked into contacting the > Orange developers and seeing if they'd be interested in a switch to BSD > and a gradual evolution towards integration with numpy... ? > > -- > > Karl Young > Center for Imaging of Neurodegenerative Diseases, UCSF > VA Medical Center (114M) Phone: (415) 221-4810 x3114 lab > 4150 Clement Street FAX: (415) 668-2864 > San Francisco, CA 94121 Email: karl young at ucsf edu > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From wbaxter at gmail.com Tue Jan 23 19:11:16 2007 From: wbaxter at gmail.com (Bill Baxter) Date: Wed, 24 Jan 2007 09:11:16 +0900 Subject: [SciPy-user] lowess in scipy? In-Reply-To: <45B5B8C5.3070900@gmail.com> References: <200701221912.56436.pgmdevlist@gmail.com> <45B57BDF.4040507@gmail.com> <20070123065223.GB6954@clipper.ens.fr> <45B5B8C5.3070900@gmail.com> Message-ID: On 1/23/07, Robert Kern wrote: > Gael Varoquaux wrote: > > On Mon, Jan 22, 2007 at 09:07:11PM -0600, Robert Kern wrote: > >> /* > >> * The authors of this software are Cleveland, Grosse, and Shyu. > >> * Copyright (c) 1989, 1992 by AT&T. > >> * Permission to use, copy, modify, and distribute this software for any > >> * purpose without fee is hereby granted, provided that this entire notice > >> * is included in all copies of any software which is or includes a copy > >> * or modification of this software and in all copies of the supporting > >> * documentation for such software. > >> * THIS SOFTWARE IS BEING PROVIDED "AS IS", WITHOUT ANY EXPRESS OR IMPLIED > >> * WARRANTY. IN PARTICULAR, NEITHER THE AUTHORS NOR AT&T MAKE ANY > >> * REPRESENTATION OR WARRANTY OF ANY KIND CONCERNING THE MERCHANTABILITY > >> * OF THIS SOFTWARE OR ITS FITNESS FOR ANY PARTICULAR PURPOSE. > >> */ > > > > Now I am being stupid, but what kind of license is that ? > > It's substantially similar to the MIT license. > > http://opensource.org/licenses/mit-license.php > > It's scipy-compatible, which is all that I care about. SciPy allows such advertising clauses? The term "software" is pretty vague ... does that mean any binaries eventually built using the source as well as source distributions? "Included" is pretty vague too. Does a string embedded in the data segment of a binary somewhere count? If there are other bits of software in SciPy with such advertising clauses it would be nice to collect all their licenses into the top level LICENSE.txt or somewhere else easily locatable for people building applications so that can copy paste into their about box and/or documentation. --bb From Karl.Young at ucsf.edu Tue Jan 23 18:34:01 2007 From: Karl.Young at ucsf.edu (Karl Young) Date: Tue, 23 Jan 2007 15:34:01 -0800 Subject: [SciPy-user] scipy data mining ? 
In-Reply-To: <16761e100701231549n68f6cf7vd134c55cb314c772@mail.gmail.com> References: <45B537D8.7080103@ucsf.edu> <16761e100701231549n68f6cf7vd134c55cb314c772@mail.gmail.com> Message-ID: <45B69B69.3020503@ucsf.edu> Hi Michael, Thanks for the info. I suspect nothing will be quite as comprehensive as Weka; it has a pretty large and well established group of contributors. But close seems good enough re. having the convenience of a python/C++ package (e.g. it would be much easier for me to add methods that I was interested in using). You're right that getting the Orange developers interested in integration is probably a long shot but probably worth floating as nobody seems to be aware of any similar efforts that are numpy friendly and both user groups would benefit (but somebody would have to do the work...). I don't know how difficult it would be to write an ExampleTable to recarray converter but if the Orange developers had any interest in integration it might be worth investigating. >Hi Karl, > >I have used Orange before and it is quite good - the range of methods >available are not as comprehensive as Weka but is much easier to write >a script compared to using jython and weka. I would also prefer it to >be better integrated with scipy - both in terms of code and/or >licensing. I don't know how realistic this is but it is worth asking. >I suspect that at least some of the code may be based on other GPL >libraries. Also the main data class is an ExampleTable which is >implemented in C++ and is structured quite differently to any numpy >array - recarray is probably closest. > >Good luck, > >Michael > >On 1/23/07, Karl Young wrote: > > >>I'm currently using a nice Java based data mining package called Weka >>(essentially as a black box as I don't have time to learn Java) but was >>looking for something more python/scipy friendly to switch to as I'd >>prefer more interactive use. I found a python package on the web that >>potentially looks pretty nice (Orange - http://www.ailab.si/orange) but >>given that it uses GPL (and also given the recent discussion on license >>issues) and doesn't look to have made any effort to be numpy/scipy >>friendly I was wondering if anyone was aware of a more scipy friendly >>effort. Should someone (maybe even me...) be talked into contacting the >>Orange developers and seeing if they'd be interested in a switch to BSD >>and a gradual evolution towards integration with numpy... ? >> >>-- >> >>Karl Young >>Center for Imaging of Neurodegenerative Diseases, UCSF >>VA Medical Center (114M) Phone: (415) 221-4810 x3114 lab >>4150 Clement Street FAX: (415) 668-2864 >>San Francisco, CA 94121 Email: karl young at ucsf edu >>_______________________________________________ >>SciPy-user mailing list >>SciPy-user at scipy.org >>http://projects.scipy.org/mailman/listinfo/scipy-user >> >> >> >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user > > > > -- Karl Young Center for Imaging of Neurodegenerative Diseases, UCSF VA Medical Center (114M) Phone: (415) 221-4810 x3114 lab 4150 Clement Street FAX: (415) 668-2864 San Francisco, CA 94121 Email: karl young at ucsf edu From robert.kern at gmail.com Tue Jan 23 19:35:10 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 23 Jan 2007 18:35:10 -0600 Subject: [SciPy-user] lowess in scipy? 
In-Reply-To: References: <200701221912.56436.pgmdevlist@gmail.com> <45B57BDF.4040507@gmail.com> <20070123065223.GB6954@clipper.ens.fr> <45B5B8C5.3070900@gmail.com> Message-ID: <45B6A9BE.1000401@gmail.com> Bill Baxter wrote: > On 1/23/07, Robert Kern wrote: >> Gael Varoquaux wrote: >>> On Mon, Jan 22, 2007 at 09:07:11PM -0600, Robert Kern wrote: >>>> /* >>>> * The authors of this software are Cleveland, Grosse, and Shyu. >>>> * Copyright (c) 1989, 1992 by AT&T. >>>> * Permission to use, copy, modify, and distribute this software for any >>>> * purpose without fee is hereby granted, provided that this entire notice >>>> * is included in all copies of any software which is or includes a copy >>>> * or modification of this software and in all copies of the supporting >>>> * documentation for such software. >>>> * THIS SOFTWARE IS BEING PROVIDED "AS IS", WITHOUT ANY EXPRESS OR IMPLIED >>>> * WARRANTY. IN PARTICULAR, NEITHER THE AUTHORS NOR AT&T MAKE ANY >>>> * REPRESENTATION OR WARRANTY OF ANY KIND CONCERNING THE MERCHANTABILITY >>>> * OF THIS SOFTWARE OR ITS FITNESS FOR ANY PARTICULAR PURPOSE. >>>> */ >>> Now I am being stupid, but what kind of license is that ? >> It's substantially similar to the MIT license. >> >> http://opensource.org/licenses/mit-license.php >> >> It's scipy-compatible, which is all that I care about. > > SciPy allows such advertising clauses? What advertising clause? The documentation clause is not the same thing as an advertising clause. Scipy's license has a similar one: b. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. If you want a good overview of how to interpret open source licenses and the various legal issues surrounding them, I suggest Larry Rosen's book _Open Source Licensing: Software Freedom and Intellectual Property Law_ available under an open source license itself: http://www.rosenlaw.com/oslbook.htm -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From sherwood at cam.cornell.edu Tue Jan 23 20:33:31 2007 From: sherwood at cam.cornell.edu (Erik Sherwood) Date: Tue, 23 Jan 2007 20:33:31 -0500 Subject: [SciPy-user] scipy 0.5.2, fortran, distutils on Intel Core 2 Duo, Mac OS X In-Reply-To: <45B6A9BE.1000401@gmail.com> References: <200701221912.56436.pgmdevlist@gmail.com> <45B57BDF.4040507@gmail.com> <20070123065223.GB6954@clipper.ens.fr> <45B5B8C5.3070900@gmail.com> <45B6A9BE.1000401@gmail.com> Message-ID: <34950542-8D14-4504-8E8C-76DD86D7AD5A@cam.cornell.edu> I'm trying to use numpy.distutils to compile fortran sources on a MacBook Pro Core 2 Duo machine, with python 2.4, scipy0.5.2, numpy 1.0.1 (all installed using fink). I have g95, also installed with fink. fortran is called using a call to numpy.distutils.core.setup distutils goes into interactive mode, and I get the following error message/output: ======================================================================== Starting interactive session ------------------------------------------------------------------------ Tasks: i - Show python/platform/machine information ie - Show environment information c - Show C compilers information c - Set C compiler (current:None) f - Show Fortran compilers information f - Set Fortran compiler (current:None) e - Edit proposed sys.argv[1:]. 
Task aliases: 0 - Configure 1 - Build 2 - Install 2 - Install with prefix. 3 - Inplace build 4 - Source distribution 5 - Binary distribution Proposed sys.argv = ['SLIP_2D_pdc.py'] Choose a task (^D to quit, Enter to continue with setup): f/sw/bin/g95 Tasks: i - Show python/platform/machine information ie - Show environment information c - Show C compilers information c - Set C compiler (current:None) f - Show Fortran compilers information f - Set Fortran compiler (current:/sw/bin/g95) e - Edit proposed sys.argv[1:]. Task aliases: 0 - Configure 1 - Build 2 - Install 2 - Install with prefix. 3 - Inplace build 4 - Source distribution 5 - Binary distribution Proposed sys.argv = ['SLIP_2D_pdc.py'] Choose a task (^D to quit, Enter to continue with setup): ------------------------------------------------------------------------ customize AbsoftFCompiler customize IbmFCompiler customize GnuFCompiler customize Gnu95FCompiler customize Gnu95FCompiler customize Gnu95FCompiler using build_clib building 'radau5' library compiling Fortran sources Fortran f77 compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno- second-underscore -fPIC -O3 -funroll-loops Fortran f90 compiler: /usr/local/bin/gfortran -Wall -fno-second- underscore -fPIC -O3 -funroll-loops Fortran fix compiler: /usr/local/bin/gfortran -Wall -ffixed-form -fno- second-underscore -Wall -fno-second-underscore -fPIC -O3 -funroll-loops compile options: '-c' gfortran:f77: /Users/sherwood/Apps/PyDSTool/PyDSTool/trunk/PyDSTool/ integrator/radau5.f gfortran: error trying to exec 'f951': execvp: No such file or directory gfortran: error trying to exec 'f951': execvp: No such file or directory Error occurred in generating Radau system... exceptions.SystemExit error: Command "/usr/local/bin/gfortran -Wall - ffixed-form -fno-second-underscore -fPIC -O3 -funroll-loops -c -c / Users/sherwood/Apps/PyDSTool/PyDSTool/trunk/PyDSTool/integrator/ radau5.f -o /Users/sherwood/Apps/PyDSTool/PyDSTool/trunk/PyDSTool/ tests/radau5_temp/Users/sherwood/Apps/PyDSTool/PyDSTool/trunk/ PyDSTool/integrator/radau5.o" failed with exit status 1 Any ideas how to avoid this error? It's not clear to me why gfortran is being used instead of g95. If I get a list of fortran compilers in interactive mode, I get List of available Fortran compilers: --fcompiler=g95 G95 Fortran Compiler (0.90) --fcompiler=gnu95 GNU 95 Fortran Compiler (4.2.0) What do I need to do to use g95 instead of gfortran? Also, I had posted before about the problem of being taken to interactive mode when calling distutils.core.setup on Mac OS X. I believe there was a patch for 0.5.1, but apparently it wasn't included in 0.5.2. What needs to be changed to avoid interactive mode? (Or which subversion revision had the patch?) Thanks, Erik From robert.kern at gmail.com Tue Jan 23 20:43:50 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 23 Jan 2007 19:43:50 -0600 Subject: [SciPy-user] scipy 0.5.2, fortran, distutils on Intel Core 2 Duo, Mac OS X In-Reply-To: <34950542-8D14-4504-8E8C-76DD86D7AD5A@cam.cornell.edu> References: <200701221912.56436.pgmdevlist@gmail.com> <45B57BDF.4040507@gmail.com> <20070123065223.GB6954@clipper.ens.fr> <45B5B8C5.3070900@gmail.com> <45B6A9BE.1000401@gmail.com> <34950542-8D14-4504-8E8C-76DD86D7AD5A@cam.cornell.edu> Message-ID: <45B6B9D6.8010300@gmail.com> Erik Sherwood wrote: > I'm trying to use numpy.distutils to compile fortran sources on a > MacBook Pro Core 2 Duo machine, with python 2.4, scipy0.5.2, numpy > 1.0.1 (all installed using fink). 
> I have g95, also installed with fink.
>
> fortran is called using a call to numpy.distutils.core.setup

The setup() function isn't meant to be called like that, if I understand
correctly what you are doing. It needs command-line arguments in
sys.argv. While the interactive mode stuff is novel to numpy.distutils,
the need for command-line arguments is a restriction in distutils'
design. distutils is not really programmatically drivable. I recommend
writing a real setup.py script and driving it by calling it as a
subprocess.

If I don't correctly understand what is going on, please describe it
more, preferably with code and exactly what you type at the command line
to run that code.

> It's not clear to me why gfortran is being used instead of g95. If I
> get a list of fortran compilers in interactive mode, I get
>
> List of available Fortran compilers:
> --fcompiler=g95 G95 Fortran Compiler (0.90)
> --fcompiler=gnu95 GNU 95 Fortran Compiler (4.2.0)
>
> What do I need to do to use g95 instead of gfortran?

Use --fcompiler=g95 as an argument to the build_ext command (and
build_clib if you are building FORTRAN libraries using distutils).

> Also, I had posted before about the problem of being taken to
> interactive mode when calling distutils.core.setup on Mac OS X. I
> believe there was a patch for 0.5.1, but apparently it wasn't
> included in 0.5.2. What needs to be changed to avoid interactive
> mode? (Or which subversion revision had the patch?)

As it's part of numpy, not scipy, no, there was no change between scipy
0.5.1 and 0.5.2. However, removing the interactive mode won't solve your
problem. Without command-line arguments, setup() will simply do nothing.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From ivilata at carabos.com Wed Jan 24 03:23:40 2007
From: ivilata at carabos.com (Ivan Vilata i Balaguer)
Date: Wed, 24 Jan 2007 09:23:40 +0100
Subject: [SciPy-user] scipy data mining ?
In-Reply-To: <45B537D8.7080103@ucsf.edu>
References: <45B537D8.7080103@ucsf.edu>
Message-ID: <20070124082340.GI2093@tardis.terramar.selidor.net>

Karl Young (on 2007-01-22 at 14:16:56 -0800) wrote::

> I'm currently using a nice Java based data mining package called Weka
> (essentially as a black box as I don't have time to learn Java) but was
> looking for something more python/scipy friendly to switch to as I'd
> prefer more interactive use. I found a python package on the web that
> potentially looks pretty nice (Orange - http://www.ailab.si/orange) but
> given that it uses GPL (and also given the recent discussion on license
> issues) and doesn't look to have made any effort to be numpy/scipy
> friendly, I was wondering if anyone was aware of a more scipy friendly
> effort. Should someone (maybe even me...) be talked into contacting the
> Orange developers and seeing if they'd be interested in a switch to BSD
> and a gradual evolution towards integration with numpy... ?

You may also give a try to PyTables_, which is already being used by
some people to perform data mining. It is not similar to Orange or Weka
in the sense that PyTables is a lower-level, non-GUI Python library.
However, it uses NumPy at its core, so integration with SciPy should be
no problem, and it is designed to be comfortable in interactive usage
(on a Python console). The standard version is free/libre software
under a BSD license.
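To give a flavour of the interactive usage, a minimal sketch in the
PyTables spelling of this era (openFile/createTable); the file and table
names are arbitrary, and whether a NumPy recarray is accepted directly as
the table description depends on the version (older numarray-based
releases may need a conversion first):

import numpy
import tables

# 1000 "observations": an integer id and a float measurement
data = numpy.rec.fromrecords([(i, i * 0.5) for i in range(1000)],
                             names='id,val')
h5 = tables.openFile('mining.h5', mode='w')
tbl = h5.createTable('/', 'obs', data)  # recarray doubles as description + content
big = [row['val'] for row in tbl if row['val'] > 400.0]  # iterate like a cursor
h5.close()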
On the GUI part, you could use ViTables_ for textual browsing of big files, or HDFView_ if you need plotting or image visualisation capabilities. .. _PyTables: http://www.pytables.org/ .. _ViTables: http://www.carabos.com/products/vitables .. _HDFView: http://www.hdfgroup.org/hdf-java-html/hdfview/ Hope that helps, :: Ivan Vilata i Balaguer >qo< http://www.carabos.com/ C?rabos Coop. V. V V Enjoy Data "" -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 309 bytes Desc: Digital signature URL: From david at ar.media.kyoto-u.ac.jp Wed Jan 24 03:26:20 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 24 Jan 2007 17:26:20 +0900 Subject: [SciPy-user] scipy data mining ? In-Reply-To: <20070124082340.GI2093@tardis.terramar.selidor.net> References: <45B537D8.7080103@ucsf.edu> <20070124082340.GI2093@tardis.terramar.selidor.net> Message-ID: <45B7182C.1080103@ar.media.kyoto-u.ac.jp> Ivan Vilata i Balaguer wrote: > Karl Young (el 2007-01-22 a les 14:16:56 -0800) va dir:: > >> I'm currently using a nice Java based data mining package called Weka >> (essentially as a black box as I don't have time to learn Java) but was >> looking for something more python/scipy friendly to switch to as I'd >> prefer more interactive use. I found a python package on the web that >> potentially looks pretty nice (Orange - http://www.ailab.si/orange) but >> given that it uses GPL (and also given the recent discussion on license >> issues) and doesn't look to have made any effort to be numpy/scipy >> friendly I was wondering if anyone was aware of a more scipy friendly >> effort. Should someone (maybe even me...) be talked into contacting the >> Orange developers and seeing if they'd be interested in a switch to BSD >> and a gradual evolution towards integration with numpy... ? > > You may also give a try to PyTables_, which is already being used by > some people to perform data mining. It is not similar to Orange or Weka > in the sense that PyTables is a lower-level, non-GUI Python library. > However, it uses NumPy at its core, so integration with SciPy should be > no problem, and it is designed to be comfortable in interactive usage > (on a Python console). The standard version is free/libre software > under a BSD license. > > On the GUI part, you could use ViTables_ for textual browsing of big > files, or HDFView_ if you need plotting or image visualisation > capabilities. > > .. _PyTables: http://www.pytables.org/ > .. _ViTables: http://www.carabos.com/products/vitables > .. _HDFView: http://www.hdfgroup.org/hdf-java-html/hdfview/ That would give the IO and interface part of orange, but not the core machine learning part. This is I think one area where numpy/scipy is still lacking, at least integration-wise, compared to matlab which has major toolboxes such as netlab for this kind of thing. David From emsellem at obs.univ-lyon1.fr Wed Jan 24 04:42:50 2007 From: emsellem at obs.univ-lyon1.fr (Eric Emsellem) Date: Wed, 24 Jan 2007 10:42:50 +0100 Subject: [SciPy-user] Installing scipy/numpy: getting worse.. and worse Message-ID: <45B72A1A.7050906@obs.univ-lyon1.fr> Hi again (sorry for the double emailing, indeed I was using reply all: bad habit) Things are getting out hands now... I am not even getting numpy to work anymore now even though I reinstalled Everything from scratch... (getting undefined symbol: ATL_cGetNB). 
I'll try to solve all this on my side not to continue the email spamming on the scipy list (with maybe the help of Xavier who is in the same institute) and I'll report to the list if relevant. cheers and thanks for all the help Eric -- ==================================================================== Eric Emsellem emsellem at obs.univ-lyon1.fr Centre de Recherche Astrophysique de Lyon 9 av. Charles-Andre tel: +33 (0)4 78 86 83 84 69561 Saint-Genis Laval Cedex fax: +33 (0)4 78 86 83 86 France http://www-obs.univ-lyon1.fr/eric.emsellem ==================================================================== From emsellem at obs.univ-lyon1.fr Wed Jan 24 06:08:41 2007 From: emsellem at obs.univ-lyon1.fr (Eric Emsellem) Date: Wed, 24 Jan 2007 12:08:41 +0100 Subject: [SciPy-user] scipy installation, slightly better, but not there yet: still looking for cblas_dtrsm and others Message-ID: <45B73E39.3090104@obs.univ-lyon1.fr> Back... Now I have numpy running fine (all tests ok). But scipy is still having problems. Sorry for all this, but at this point I have no clue what to do anymore. (it is clear I don't have umfpack, but I guess this is not a problem, just a matter of choice. I am more concerned with the optimize failures) cheers Eric =============================================== ## Below are the failures I get when launching scipy.test ## (only including the failure, many tests look ok): >>> scipy.test() Warning: FAILURE importing tests for /usr/local/lib/python2.4/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py:17: AttributeError: 'module' object has no attribute 'umfpack' (in ?) ... Warning: FAILURE importing tests for /usr/local/lib/python2.4/site-packages/scipy/optimize/lbfgsb.py:30: ImportError: /usr/local/lib/python2.4/site-packages/scipy/optimize/_lbfgsb.so: undefined sym bol: cblas_dtrsm (in ?) ... Warning: FAILURE importing tests for /usr/local/lib/python2.4/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py:17: AttributeError: 'module' object has no attribute 'umfpack' (in ?) Warning: FAILURE importing tests for /usr/local/lib/python2.4/site-packages/scipy/optimize/lbfgsb.py:30: ImportError: /usr/local/lib/python2.4/site-packages/scipy/optimize/_lbfgsb.so: undefined sym bol: cblas_dtrsm (in ?) ... Warning: FAILURE importing tests for /usr/local/lib/python2.4/site-packages/scipy/linalg/lapack.py:17: ImportError: /usr/local/lib/python2.4/site-packages/scipy/linalg/flapack.so: undefined symbol: ATL_xerbla (in ?) ====================================================================== ERROR: check_integer (scipy.io.tests.test_array_import.test_read_array) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.4/site-packages/scipy/io/tests/test_array_import.py", line 55, in check_integer from scipy import stats File "/usr/local/lib/python2.4/site-packages/scipy/stats/__init__.py", line 7, in ? from stats import * File "/usr/local/lib/python2.4/site-packages/scipy/stats/stats.py", line 190, in ? import scipy.special as special File "/usr/local/lib/python2.4/site-packages/scipy/special/__init__.py", line 10, in ? import orthogonal File "/usr/local/lib/python2.4/site-packages/scipy/special/orthogonal.py", line 65, in ? from scipy.linalg import eig File "/usr/local/lib/python2.4/site-packages/scipy/linalg/__init__.py", line 8, in ? from basic import * File "/usr/local/lib/python2.4/site-packages/scipy/linalg/basic.py", line 17, in ? 
from lapack import get_lapack_funcs File "/usr/local/lib/python2.4/site-packages/scipy/linalg/lapack.py", line 17, in ? from scipy.linalg import flapack ImportError: /usr/local/lib/python2.4/site-packages/scipy/linalg/flapack.so: undefined symbol: ATL_xerbla ---------------------------------------------------------------------- Ran 752 tests in 3.335s FAILED (errors=1) -- ==================================================================== Eric Emsellem emsellem at obs.univ-lyon1.fr Centre de Recherche Astrophysique de Lyon 9 av. Charles-Andre tel: +33 (0)4 78 86 83 84 69561 Saint-Genis Laval Cedex fax: +33 (0)4 78 86 83 86 France http://www-obs.univ-lyon1.fr/eric.emsellem ==================================================================== From gnata at obs.univ-lyon1.fr Wed Jan 24 06:15:23 2007 From: gnata at obs.univ-lyon1.fr (Xavier Gnata) Date: Wed, 24 Jan 2007 12:15:23 +0100 Subject: [SciPy-user] scipy installation, slightly better, but not there yet: still looking for cblas_dtrsm and others In-Reply-To: <45B73E39.3090104@obs.univ-lyon1.fr> References: <45B73E39.3090104@obs.univ-lyon1.fr> Message-ID: <45B73FCB.2000708@obs.univ-lyon1.fr> Well on Debian, you can solve your first pb with apt-get install libumfpack4-dev apt tells me that libumfpack4 is "a set of routines for solving unsymmetric sparse linear systems" No idea if it is packaged into suse rpm or not :( Xavier > Back... Now I have numpy running fine (all tests ok). But scipy is still > having problems. > Sorry for all this, but at this point I have no clue what to do anymore. > (it is clear I don't have umfpack, but I guess this is not a problem, > just a matter of choice. I am more concerned with the optimize failures) > > cheers > > Eric > =============================================== > ## Below are the failures I get when launching scipy.test > ## (only including the failure, many tests look ok): > > >>>> scipy.test() >>>> > Warning: FAILURE importing tests for 'scipy.linsolve.umfpack.umfpack' from '...y/linsolve/umfpack/umfpack.pyc'> > /usr/local/lib/python2.4/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py:17: > AttributeError: 'module' object has no attribute 'umfpack' (in ?) > ... > > Warning: FAILURE importing tests for from '...es/scipy/optimize/optimize.pyc'> > /usr/local/lib/python2.4/site-packages/scipy/optimize/lbfgsb.py:30: > ImportError: > /usr/local/lib/python2.4/site-packages/scipy/optimize/_lbfgsb.so: > undefined sym > bol: cblas_dtrsm (in ?) > > ... > > Warning: FAILURE importing tests for from '.../linsolve/umfpack/__init__.pyc'> > /usr/local/lib/python2.4/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py:17: > AttributeError: 'module' object has no attribute 'umfpack' (in ?) > Warning: FAILURE importing tests for '...kages/scipy/optimize/zeros.pyc'> > /usr/local/lib/python2.4/site-packages/scipy/optimize/lbfgsb.py:30: > ImportError: > /usr/local/lib/python2.4/site-packages/scipy/optimize/_lbfgsb.so: > undefined sym > bol: cblas_dtrsm (in ?) > > ... > > Warning: FAILURE importing tests for '...ckages/scipy/special/basic.pyc'> > /usr/local/lib/python2.4/site-packages/scipy/linalg/lapack.py:17: > ImportError: > /usr/local/lib/python2.4/site-packages/scipy/linalg/flapack.so: > undefined symbol: > ATL_xerbla (in ?) 
> > > ====================================================================== > ERROR: check_integer (scipy.io.tests.test_array_import.test_read_array) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/local/lib/python2.4/site-packages/scipy/io/tests/test_array_import.py", > line 55, in check_integer > from scipy import stats > File "/usr/local/lib/python2.4/site-packages/scipy/stats/__init__.py", > line 7, in ? > from stats import * > File "/usr/local/lib/python2.4/site-packages/scipy/stats/stats.py", > line 190, in ? > import scipy.special as special > File > "/usr/local/lib/python2.4/site-packages/scipy/special/__init__.py", line > 10, in ? > import orthogonal > File > "/usr/local/lib/python2.4/site-packages/scipy/special/orthogonal.py", > line 65, in ? > from scipy.linalg import eig > File > "/usr/local/lib/python2.4/site-packages/scipy/linalg/__init__.py", line > 8, in ? > from basic import * > File "/usr/local/lib/python2.4/site-packages/scipy/linalg/basic.py", > line 17, in ? > from lapack import get_lapack_funcs > File "/usr/local/lib/python2.4/site-packages/scipy/linalg/lapack.py", > line 17, in ? > from scipy.linalg import flapack > ImportError: > /usr/local/lib/python2.4/site-packages/scipy/linalg/flapack.so: > undefined symbol: ATL_xerbla > > ---------------------------------------------------------------------- > Ran 752 tests in 3.335s > > FAILED (errors=1) > > > -- ############################################ Xavier Gnata CRAL - Observatoire de Lyon 9, avenue Charles Andr? 69561 Saint Genis Laval cedex Phone: +33 4 78 86 85 28 Fax: +33 4 78 86 83 86 E-mail: gnata at obs.univ-lyon1.fr ############################################ From gnata at obs.univ-lyon1.fr Wed Jan 24 06:25:51 2007 From: gnata at obs.univ-lyon1.fr (Xavier Gnata) Date: Wed, 24 Jan 2007 12:25:51 +0100 Subject: [SciPy-user] FAILURE importing tests for Hi, I'm running scipy and numpy svn. It looks just fine for me but one test from scipy.test() fails : Warning: FAILURE importing tests for /usr/lib/python2.4/site-packages/scipy/sparse/sparse.py:15: ImportError: cannot import name densetocsr (in ?) Maybe just a temporary problem in svn. Is someone able to reproduce that? Xavier -- ############################################ Xavier Gnata CRAL - Observatoire de Lyon 9, avenue Charles Andr? 
69561 Saint Genis Laval cedex Phone: +33 4 78 86 85 28 Fax: +33 4 78 86 83 86 E-mail: gnata at obs.univ-lyon1.fr ############################################ From nwagner at iam.uni-stuttgart.de Wed Jan 24 06:26:50 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 24 Jan 2007 12:26:50 +0100 Subject: [SciPy-user] scipy installation, slightly better, but not there yet: still looking for cblas_dtrsm and others In-Reply-To: <45B73FCB.2000708@obs.univ-lyon1.fr> References: <45B73E39.3090104@obs.univ-lyon1.fr> <45B73FCB.2000708@obs.univ-lyon1.fr> Message-ID: <45B7427A.2050203@iam.uni-stuttgart.de> Xavier Gnata wrote: > Well on Debian, you can solve your first pb with apt-get install > libumfpack4-dev > apt tells me that libumfpack4 is "a set of routines for solving > unsymmetric sparse linear systems" > No idea if it is packaged into suse rpm or not :( > > Xavier > > AFAIK it is not packaged but you can add it to a wishlist ;-) http://en.opensuse.org/Wishlist_Science Nils From tim.leslie at gmail.com Wed Jan 24 07:21:14 2007 From: tim.leslie at gmail.com (Tim Leslie) Date: Wed, 24 Jan 2007 23:21:14 +1100 Subject: [SciPy-user] FAILURE importing tests for References: <45B7423F.5080107@obs.univ-lyon1.fr> Message-ID: On 1/24/07, Xavier Gnata wrote: > Hi, > > I'm running scipy and numpy svn. > It looks just fine for me but one test from scipy.test() fails : > > Warning: FAILURE importing tests for '...site-packages/scipy/io/mio.pyc'> > /usr/lib/python2.4/site-packages/scipy/sparse/sparse.py:15: ImportError: > cannot import name densetocsr (in ?) > > Maybe just a temporary problem in svn. > > Is someone able to reproduce that? I can't reproduce this. Which svn versions do you have for each of numpy and scipy? Tim > > Xavier > > -- > ############################################ > Xavier Gnata > CRAL - Observatoire de Lyon > 9, avenue Charles Andr? > 69561 Saint Genis Laval cedex > Phone: +33 4 78 86 85 28 > Fax: +33 4 78 86 83 86 > E-mail: gnata at obs.univ-lyon1.fr > ############################################ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From fredmfp at gmail.com Wed Jan 24 08:31:33 2007 From: fredmfp at gmail.com (fred) Date: Wed, 24 Jan 2007 14:31:33 +0100 Subject: [SciPy-user] displaying spherical harmonics with different resolution... Message-ID: <45B75FB5.20706@gmail.com> Hi all, I'm playing with mayavi2 to display spherical harmonics, which works fine, thanks to scipy (I use 0.5.1 on debian etch). However, I have noticed a "strange" behaviour if I change theta & phi resolution. If I set dphi, dtheta = pi/66, pi/66 [phi, theta] = mgrid[0:2*pi+dphi:dphi, 0:pi+dtheta:dtheta] display is fine: http://fredantispam.free.fr/Y_7_3_a.png If I use nicer mesh, say dphi, dtheta = pi/180, pi/180 [phi, theta] = mgrid[0:2*pi+dphi:dphi, 0:pi+dtheta:dtheta] display is obviously wrong http://fredantispam.free.fr/Y_7_3_b.png What's wrong ? Thanks in advance. Cheers, -- http://scipy.org/FredericPetit From fredmfp at gmail.com Wed Jan 24 08:41:05 2007 From: fredmfp at gmail.com (fred) Date: Wed, 24 Jan 2007 14:41:05 +0100 Subject: [SciPy-user] displaying spherical harmonics with different resolution... In-Reply-To: <45B75FB5.20706@gmail.com> References: <45B75FB5.20706@gmail.com> Message-ID: <45B761F1.7000506@gmail.com> fred a ?crit : >What's wrong ? 
> > I forgot to mention that if I use litteral expressions instead of scipy.special.sph_harm func, it works fine with the nicer mesh. Cheers, -- http://scipy.org/FredericPetit From edschofield at gmail.com Wed Jan 24 09:35:09 2007 From: edschofield at gmail.com (Ed Schofield) Date: Wed, 24 Jan 2007 15:35:09 +0100 Subject: [SciPy-user] Callback feature for iterative solvers Message-ID: <1b5a37350701240635m2e99cd54j26c439ce5be6ae52@mail.gmail.com> On 1/23/07, Nils Wagner wrote: > > Hi Ed, > > Please find attached my modified iterative.py. The output of my test is > attached as well. > > python -i test_callback.py yields > > iter_ 1 > ||A.x - b|| = 2.04635901219 > iter_ 1 > ||A.x - b|| = 2.04635901219 > iter_ 1 > ||A.x - b|| = 0.528173459455 ... > Yeah, so iter_ doesn't increase monotonically. We probably want to call the callback function only once per iteration -- whenever iter_ increases. I've added a check for this and checked it into SVN for bicg(), bicgstab(), cg(), cgs(), gmres(), and qmr(). I've also added some unit tests based on your test case. So the iterative solvers now support callback functions. If anyone runs into any problems, please shout! -- Ed -------------- next part -------------- An HTML attachment was scrubbed... URL: From zunzun at zunzun.com Wed Jan 24 11:34:31 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Wed, 24 Jan 2007 11:34:31 -0500 Subject: [SciPy-user] Scipy web site Cookbook example will not run? Message-ID: <20070124163430.GA28574@zunzun.com> Can anyone get the Scipy web site Cookbook example http://www.scipy.org/Cookbook/OLS to run? I can't, and would very much like to add the fit statistics to the Python Equations package http://sf.net/projects/pythonequations. I tried on Linux and XP, re-installing the latest Enthought Python on my XP box. Just having the example run is enough for me. James http://zunzun.com From bryanv at enthought.com Wed Jan 24 11:43:46 2007 From: bryanv at enthought.com (Byan Van de Ven) Date: Wed, 24 Jan 2007 10:43:46 -0600 Subject: [SciPy-user] Scipy web site Cookbook example will not run? In-Reply-To: <20070124163430.GA28574@zunzun.com> References: <20070124163430.GA28574@zunzun.com> Message-ID: <45B78CC2.7080705@enthought.com> It seems to run fine for me. The only difference from the example is that file that actually downloads is named ols.0.1.py, which I had to rename to ols.py before trying "import ols". zunzun at zunzun.com wrote: > Can anyone get the Scipy web site Cookbook example > http://www.scipy.org/Cookbook/OLS to run? I can't, > and would very much like to add the fit statistics to the > Python Equations package http://sf.net/projects/pythonequations. > I tried on Linux and XP, re-installing the latest Enthought > Python on my XP box. > > Just having the example run is enough for me. > > James > http://zunzun.com > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From prabhu at aero.iitb.ac.in Wed Jan 24 12:38:52 2007 From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran) Date: Wed, 24 Jan 2007 23:08:52 +0530 Subject: [SciPy-user] scipy data mining ? 
In-Reply-To: <45B537D8.7080103@ucsf.edu> References: <45B537D8.7080103@ucsf.edu> Message-ID: <17847.39340.717279.277869@gargle.gargle.HOWL> >>>>> "Karl" == Karl Young writes: Karl> I'm currently using a nice Java based data mining package Karl> called Weka (essentially as a black box as I don't have time Karl> to learn Java) but was looking for something more Karl> python/scipy friendly to switch to as I'd prefer more Karl> interactive use. I found a python package on the web that Karl> potentially looks pretty nice (Orange - Karl> http://www.ailab.si/orange) but given that it uses GPL (and [...] A while ago I was talking to Eric Jones about creating a BSD network pipeline network editor and he said that it should be possible to create a simple first cut of such an application in a week if a bunch of enthusiastic folks sat together and worked on it. Perhaps someone with the time should take Eric up on that offer (assuming of course that Eric would be interested)? This might be a good coding sprint for SciPy07. cheers, prabhu From gnata at obs.univ-lyon1.fr Wed Jan 24 12:41:07 2007 From: gnata at obs.univ-lyon1.fr (Xavier Gnata) Date: Wed, 24 Jan 2007 18:41:07 +0100 Subject: [SciPy-user] FAILURE importing tests for References: <45B7423F.5080107@obs.univ-lyon1.fr> Message-ID: <45B79A33.9030800@obs.univ-lyon1.fr> Hi, I have remove all the scipy dir and svn co http://svn.scipy.org/svn/scipy/trunk scipy It works just fine. No more errors. I used to update my svn dir using svn update. Maybe a problem during one of these updates. Sorry for the noise. Xavier. > On 1/24/07, Xavier Gnata wrote: > >> Hi, >> >> I'm running scipy and numpy svn. >> It looks just fine for me but one test from scipy.test() fails : >> >> Warning: FAILURE importing tests for > '...site-packages/scipy/io/mio.pyc'> >> /usr/lib/python2.4/site-packages/scipy/sparse/sparse.py:15: ImportError: >> cannot import name densetocsr (in ?) >> >> Maybe just a temporary problem in svn. >> >> Is someone able to reproduce that? >> > > I can't reproduce this. Which svn versions do you have for each of > numpy and scipy? > > Tim > > >> Xavier >> >> -- >> ############################################ >> Xavier Gnata >> CRAL - Observatoire de Lyon >> 9, avenue Charles Andr? >> 69561 Saint Genis Laval cedex >> Phone: +33 4 78 86 85 28 >> Fax: +33 4 78 86 83 86 >> E-mail: gnata at obs.univ-lyon1.fr >> ############################################ >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -- ############################################ Xavier Gnata CRAL - Observatoire de Lyon 9, avenue Charles Andr? 69561 Saint Genis Laval cedex Phone: +33 4 78 86 85 28 Fax: +33 4 78 86 83 86 E-mail: gnata at obs.univ-lyon1.fr ############################################ From zunzun at zunzun.com Wed Jan 24 13:01:51 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Wed, 24 Jan 2007 13:01:51 -0500 Subject: [SciPy-user] Scipy web site Cookbook example *will* run now In-Reply-To: <45B78CC2.7080705@enthought.com> References: <20070124163430.GA28574@zunzun.com> <45B78CC2.7080705@enthought.com> Message-ID: <20070124180150.GA4518@zunzun.com> On Wed, Jan 24, 2007 at 10:43:46AM -0600, Byan Van de Ven wrote: > It seems to run fine for me. 
The only difference from the example is > that file that actually downloads is named ols.0.1.py, which I had to > rename to ols.py before trying "import ols". On Winders Dang XP Super-Duper Edition, uninstalling Enthought Python and separately installing python, numpy and scipy worked. Looks like a default Enthought install won't run it at the moment. Thank you very much for letting me know it worked for you. James From Karl.Young at ucsf.edu Wed Jan 24 12:48:16 2007 From: Karl.Young at ucsf.edu (Karl Young) Date: Wed, 24 Jan 2007 09:48:16 -0800 Subject: [SciPy-user] scipy data mining ? In-Reply-To: <45B7182C.1080103@ar.media.kyoto-u.ac.jp> References: <45B537D8.7080103@ucsf.edu> <20070124082340.GI2093@tardis.terramar.selidor.net> <45B7182C.1080103@ar.media.kyoto-u.ac.jp> Message-ID: <45B79BE0.1070908@ucsf.edu> David Cournapeau wrote: >Ivan Vilata i Balaguer wrote: > > >>Karl Young (el 2007-01-22 a les 14:16:56 -0800) va dir:: >> >> >> >>>I'm currently using a nice Java based data mining package called Weka >>>(essentially as a black box as I don't have time to learn Java) but was >>>looking for something more python/scipy friendly to switch to as I'd >>>prefer more interactive use. I found a python package on the web that >>>potentially looks pretty nice (Orange - http://www.ailab.si/orange) but >>>given that it uses GPL (and also given the recent discussion on license >>>issues) and doesn't look to have made any effort to be numpy/scipy >>>friendly I was wondering if anyone was aware of a more scipy friendly >>>effort. Should someone (maybe even me...) be talked into contacting the >>>Orange developers and seeing if they'd be interested in a switch to BSD >>>and a gradual evolution towards integration with numpy... ? >>> >>> >>You may also give a try to PyTables_, which is already being used by >>some people to perform data mining. It is not similar to Orange or Weka >>in the sense that PyTables is a lower-level, non-GUI Python library. >>However, it uses NumPy at its core, so integration with SciPy should be >>no problem, and it is designed to be comfortable in interactive usage >>(on a Python console). The standard version is free/libre software >>under a BSD license. >> >>On the GUI part, you could use ViTables_ for textual browsing of big >>files, or HDFView_ if you need plotting or image visualisation >>capabilities. >> >>.. _PyTables: http://www.pytables.org/ >>.. _ViTables: http://www.carabos.com/products/vitables >>.. _HDFView: http://www.hdfgroup.org/hdf-java-html/hdfview/ >> >> >That would give the IO and interface part of orange, but not the core >machine learning part. This is I think one area where numpy/scipy is >still lacking, at least integration-wise, compared to matlab which has >major toolboxes such as netlab for this kind of thing. > > Thanks for the suggestions but, yes, I feel that it's the set of core machine learning algorithms that is the important piece. There are some implementations of specific algorithms around but it seems like a lot of work to include a significant fraction of recently developed machine learning algorithms in a single package with a consistent interface (as the Weka guys have done). Since the (exploratory) tendency in data mining is to not trust any single algorithm for all data analysis it's important to have a range available. 
Given that the Orange developers have already started something like that it seemed reasonable to explore some kind of integration (I could just use it anyway and hack whatever I need to use it with numpy array utilities but some kind of moderated integration seemed more consistent with the scipy philosophy). -- KY Karl Young Center for Imaging of Neurodegenerative Diseases, UCSF VA Medical Center (114M) Phone: (415) 221-4810 x3114 lab 4150 Clement Street FAX: (415) 668-2864 San Francisco, CA 94121 Email: karl young at ucsf edu From robert.kern at gmail.com Wed Jan 24 14:22:01 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 24 Jan 2007 13:22:01 -0600 Subject: [SciPy-user] scipy installation, slightly better, but not there yet: still looking for cblas_dtrsm and others In-Reply-To: <45B73E39.3090104@obs.univ-lyon1.fr> References: <45B73E39.3090104@obs.univ-lyon1.fr> Message-ID: <45B7B1D9.1070400@gmail.com> Eric Emsellem wrote: > Back... Now I have numpy running fine (all tests ok). But scipy is still > having problems. > Sorry for all this, but at this point I have no clue what to do anymore. What site.cfg did you use? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Wed Jan 24 14:26:23 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 24 Jan 2007 13:26:23 -0600 Subject: [SciPy-user] Scipy web site Cookbook example *will* run now In-Reply-To: <20070124180150.GA4518@zunzun.com> References: <20070124163430.GA28574@zunzun.com> <45B78CC2.7080705@enthought.com> <20070124180150.GA4518@zunzun.com> Message-ID: <45B7B2DF.3010906@gmail.com> zunzun at zunzun.com wrote: > On Wed, Jan 24, 2007 at 10:43:46AM -0600, Byan Van de Ven wrote: >> It seems to run fine for me. The only difference from the example is >> that file that actually downloads is named ols.0.1.py, which I had to >> rename to ols.py before trying "import ols". > > On Winders Dang XP Super-Duper Edition, uninstalling Enthought > Python and separately installing python, numpy and scipy worked. > > Looks like a default Enthought install won't run it at the moment. Can you tell us what error messages you got? If there is a problem with the Enthought distribution, I'd like to get it fixed. I suspect that the versions of numpy or scipy were too old, but without seeing the error messages, I don't know for sure. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From zunzun at zunzun.com Wed Jan 24 14:57:01 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Wed, 24 Jan 2007 14:57:01 -0500 Subject: [SciPy-user] Scipy web site Cookbook example *will* run now In-Reply-To: <45B7B2DF.3010906@gmail.com> References: <20070124163430.GA28574@zunzun.com> <45B78CC2.7080705@enthought.com> <20070124180150.GA4518@zunzun.com> <45B7B2DF.3010906@gmail.com> Message-ID: <20070124195700.GA15456@zunzun.com> On Wed, Jan 24, 2007 at 01:26:23PM -0600, Robert Kern wrote: > > Can you tell us what error messages you got? numpy is too old. Just run the example on the scipy.org page http://scipy.org/Cookbook/OLS, the file URL is at the very top of the page where it says "To see the class in action download the ols.py file" - that file is all you need to test. 
James From bhendrix at enthought.com Wed Jan 24 15:06:03 2007 From: bhendrix at enthought.com (Bryce Hendrix) Date: Wed, 24 Jan 2007 14:06:03 -0600 Subject: [SciPy-user] Scipy web site Cookbook example *will* run now In-Reply-To: <20070124195700.GA15456@zunzun.com> References: <20070124163430.GA28574@zunzun.com> <45B78CC2.7080705@enthought.com> <20070124180150.GA4518@zunzun.com> <45B7B2DF.3010906@gmail.com> <20070124195700.GA15456@zunzun.com> Message-ID: <45B7BC2B.6010809@enthought.com> We (Enthought) have a released a few versions, and the numpy/scipy versions are different in each of these. We are in the process of migrating towards an egg-based distribution where the version of numpy isn't set in stone. Knowing which version of python and which version of numpy you were using would help us determine if it was an old pre-1.0.0 version of numpy or one of the newer versions. In the future, if you'd like to use the eggs we have built, you can download our installer application at http://code.enthought.com/enstaller, or you can download individual eggs at http://code.enthought.com/enstaller/eggs. Bryce zunzun at zunzun.com wrote: > On Wed, Jan 24, 2007 at 01:26:23PM -0600, Robert Kern wrote: > >> Can you tell us what error messages you got? >> > > numpy is too old. Just run the example on the scipy.org page > http://scipy.org/Cookbook/OLS, the file URL is at the very > top of the page where it says "To see the class in action > download the ols.py file" - that file is all you need to test. > > James > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zunzun at zunzun.com Wed Jan 24 15:12:51 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Wed, 24 Jan 2007 15:12:51 -0500 Subject: [SciPy-user] Scipy web site Cookbook example *will* run now In-Reply-To: <45B7BC2B.6010809@enthought.com> References: <20070124163430.GA28574@zunzun.com> <45B78CC2.7080705@enthought.com> <20070124180150.GA4518@zunzun.com> <45B7B2DF.3010906@gmail.com> <20070124195700.GA15456@zunzun.com> <45B7BC2B.6010809@enthought.com> Message-ID: <20070124201251.GA17214@zunzun.com> On Wed, Jan 24, 2007 at 02:06:03PM -0600, Bryce Hendrix wrote: > > Knowing which version of python and which version of > numpy you were using would help us determine if it was an old pre-1.0.0 > version of numpy or one of the newer versions. I downloaded enthon-python2.4-1.0.0.exe from the web page http://code.enthought.com/enthon. Is that enough version info? I can try again if you need. James From bhendrix at enthought.com Wed Jan 24 15:22:17 2007 From: bhendrix at enthought.com (Bryce Hendrix) Date: Wed, 24 Jan 2007 14:22:17 -0600 Subject: [SciPy-user] Scipy web site Cookbook example *will* run now In-Reply-To: <20070124201251.GA17214@zunzun.com> References: <20070124163430.GA28574@zunzun.com> <45B78CC2.7080705@enthought.com> <20070124180150.GA4518@zunzun.com> <45B7B2DF.3010906@gmail.com> <20070124195700.GA15456@zunzun.com> <45B7BC2B.6010809@enthought.com> <20070124201251.GA17214@zunzun.com> Message-ID: <45B7BFF9.6020403@enthought.com> No, thats enough info, thanks. That version's numpy was a pre 1.0 beta, I believe, and its starting to show its age. 
We won't be releasing another monolithic install unless there is a very strong desire from a significant number of users, so if anyone needs to update their numpy/scipy/matplotlib/ets versions, you can simply remove them from your site-package dir and install the eggs using setuptools. There is no need to uninstall the whole python distribution unless you want to. Bryce zunzun at zunzun.com wrote: > On Wed, Jan 24, 2007 at 02:06:03PM -0600, Bryce Hendrix wrote: > >> Knowing which version of python and which version of >> numpy you were using would help us determine if it was an old pre-1.0.0 >> version of numpy or one of the newer versions. >> > > I downloaded enthon-python2.4-1.0.0.exe from the web page > http://code.enthought.com/enthon. Is that enough version > info? I can try again if you need. > > James > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From David.L.Goldsmith at noaa.gov Wed Jan 24 18:39:38 2007 From: David.L.Goldsmith at noaa.gov (David L Goldsmith) Date: Wed, 24 Jan 2007 15:39:38 -0800 Subject: [SciPy-user] SciPy '07 Conference? Message-ID: <45B7EE3A.5020603@noaa.gov> Any news? If/when dates/place are determined, they'll be announced here, yes? (I.e., not just posted on the Web site.) DG -- ERD/ORR/NOS/NOAA From robert.kern at gmail.com Wed Jan 24 18:57:19 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 24 Jan 2007 17:57:19 -0600 Subject: [SciPy-user] SciPy '07 Conference? In-Reply-To: <45B7EE3A.5020603@noaa.gov> References: <45B7EE3A.5020603@noaa.gov> Message-ID: <45B7F25F.4040205@gmail.com> David L Goldsmith wrote: > Any news? If/when dates/place are determined, they'll be announced > here, yes? (I.e., not just posted on the Web site.) We don't really start planning until April or so. Yes, we will make announcements here. The conference will (almost certainly) be held at Caltech in Pasadena, CA and will probably (with less certainty) be somewhere in late August or September. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at ar.media.kyoto-u.ac.jp Wed Jan 24 23:55:55 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 25 Jan 2007 13:55:55 +0900 Subject: [SciPy-user] scipy data mining ? In-Reply-To: <45B79BE0.1070908@ucsf.edu> References: <45B537D8.7080103@ucsf.edu> <20070124082340.GI2093@tardis.terramar.selidor.net> <45B7182C.1080103@ar.media.kyoto-u.ac.jp> <45B79BE0.1070908@ucsf.edu> Message-ID: <45B8385B.3020304@ar.media.kyoto-u.ac.jp> Karl Young wrote: > > Thanks for the suggestions but, yes, I feel that it's the set of core > machine learning algorithms that is the important piece. There are some > implementations of specific algorithms around but it seems like a lot of > work to include a significant fraction of recently developed machine > learning algorithms in a single package with a consistent interface (as > the Weka guys have done). Since the (exploratory) tendency in data > mining is to not trust any single algorithm for all data analysis it's > important to have a range available. 
Given that the Orange developers > have already started something like that it seemed reasonable to explore > some kind of integration (I could just use it anyway and hack whatever I > need to use it with numpy array utilities but some kind of moderated > integration seemed more consistent with the scipy philosophy). I don't know much about data mining; I am a user of some algorithms used in pattern recognition, but not really into the exploratory part which is important in data mining I think. Right know, scipy has an implementation of SVM (scipy.sandbox.svm), EM for finite mixture of Gaussians (scipy.sandbox.pyem), and some other which I don't really know (models, maxent) which are all more or less relevant for machine learning. There are also some tools around for other stuff like Kalman filtering, MCMC, which are not part (at least in my knowledge) of scipy. Some kind of unification would be nice, but this would require some work, and people interested in it. One thing you could try is using scipy with R, which should have much more complete models, and which can be used from a python session (at least I remember having seen that somewhere, I have never done it myself). It looks like Orange uses pyqt for the graphical part, which is GPL (because QT is since at least 4.0 on all supported plateforms including windows); this would make the graphical part of orange quite difficult to relicense without some recoding. But I don't much about license "mixing", and the mix of python, modules, library with different license makes it quite difficult for me to really understand what is possible or not. cheers, David From emsellem at obs.univ-lyon1.fr Thu Jan 25 03:16:18 2007 From: emsellem at obs.univ-lyon1.fr (Eric Emsellem) Date: Thu, 25 Jan 2007 09:16:18 +0100 Subject: [SciPy-user] scipy installation, slightly better, but not there yet: still looking for cblas_dtrsm and others In-Reply-To: <45B7B1D9.1070400@gmail.com> References: <45B73E39.3090104@obs.univ-lyon1.fr> <45B7B1D9.1070400@gmail.com> Message-ID: <45B86752.4040201@obs.univ-lyon1.fr> An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu Jan 25 03:24:28 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 25 Jan 2007 02:24:28 -0600 Subject: [SciPy-user] scipy installation, slightly better, but not there yet: still looking for cblas_dtrsm and others In-Reply-To: <45B86752.4040201@obs.univ-lyon1.fr> References: <45B73E39.3090104@obs.univ-lyon1.fr> <45B7B1D9.1070400@gmail.com> <45B86752.4040201@obs.univ-lyon1.fr> Message-ID: <45B8693C.2040409@gmail.com> Eric Emsellem wrote: > Here is the site.cfg that I am using starting with blas: Well, that looks fine. The only thing I can suggest now is to make sure that you've removed the build/ directory to make sure that you don't have old, incorrect binaries that aren't getting rebuilt with the correct linker settings. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From emsellem at obs.univ-lyon1.fr Thu Jan 25 04:00:09 2007 From: emsellem at obs.univ-lyon1.fr (Eric Emsellem) Date: Thu, 25 Jan 2007 10:00:09 +0100 Subject: [SciPy-user] scipy installation, slightly better, but not there yet: still looking for cblas_dtrsm and others In-Reply-To: <45B8693C.2040409@gmail.com> References: <45B73E39.3090104@obs.univ-lyon1.fr> <45B7B1D9.1070400@gmail.com> <45B86752.4040201@obs.univ-lyon1.fr> <45B8693C.2040409@gmail.com> Message-ID: <45B87199.5000603@obs.univ-lyon1.fr> An HTML attachment was scrubbed... URL: From jjv5 at nih.gov Thu Jan 25 07:46:58 2007 From: jjv5 at nih.gov (James Vincent) Date: Thu, 25 Jan 2007 07:46:58 -0500 Subject: [SciPy-user] Fitting sphere to 3d data points Message-ID: <715CC8AB-4406-41A8-83EF-72F419738896@nih.gov> Hello, Is it possible to fit a sphere to 3D data points using scipy.optimize.leastsq? I'd like to minimize the residual for the distance from the actual x,y,z point and the fitted sphere surface. I can see how to minimize for z, but that's not really what I'm looking for. Is there a better way to do this? Thanks for any help. params = a,b,c and r a,b,c are the fitted center point of the sphere, r is the radius err = distance-to-center - radius err = sqrt( x-a)**2 + (y-b)**2 + (z-c)**2) - r ---- James J. Vincent, Ph.D. National Cancer Institute National Institutes of Health Laboratory of Molecular Biology Building 37, Room 5120 37 Convent Drive, MSC 4264 Bethesda, MD 20892 USA 301-451-8755 jjv5 at nih.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From akumar at iitm.ac.in Thu Jan 25 08:06:00 2007 From: akumar at iitm.ac.in (Kumar Appaiah) Date: Thu, 25 Jan 2007 18:36:00 +0530 Subject: [SciPy-user] Analog band stop filter design Message-ID: <20070125130600.GA10226@localhost> Dear Scipy users, I am unsure of how to use the filter design package to design analog band stop filters. For example, referring to the documentation, I came up with this code: from scipy import * omega_c1 = 430 omega_r1 = 350 omega_c2 = 600 omega_r2 = 700 gpass = 0.5 gstop = 40 (n, Wn) = signal.cheb1ord([omega_r1, omega_r2], [omega_c1, omega_c2], gpass, gstop, analog=1) print n print Wn But the output is: -2147483648 [429 699] Now, I am sure I am doing something wrong, but I guess there is some error checking which needs to be done in filter_design.py as well, though I couldn't quite figure it out from the file. What is the mistake? Thanks. Kumar -- Kumar Appaiah, 462, Jamuna Hostel, Indian Institute of Technology Madras, Chennai - 600 036 From david.huard at gmail.com Thu Jan 25 09:31:10 2007 From: david.huard at gmail.com (David Huard) Date: Thu, 25 Jan 2007 09:31:10 -0500 Subject: [SciPy-user] Fitting sphere to 3d data points In-Reply-To: <715CC8AB-4406-41A8-83EF-72F419738896@nih.gov> References: <715CC8AB-4406-41A8-83EF-72F419738896@nih.gov> Message-ID: <91cf711d0701250631r35a9602g8f8ed48979e7cde3@mail.gmail.com> Hi James, As a first guess, I'd say the center of the sphere is simply the mean of your data points, if they're all weighted equally. With only one parameter left to fit, it should be easy enough. However, you may want to look at the paper: Werman, Michael and Keren, Daniel A Bayesian method for fitting parametric and nonparametric models to noisy data Ieee Transactions on Pattern Analysis and Machine Intelligence, 23, 2001. They write that the Mean Square Error approach overestimates the radius in the case of circles. They don't talk about the 3D case, but I'd guess similar problems arise. 
They provide a method to fit parametric shapes with some robustness to data errors. Cheers, David 2007/1/25, James Vincent : > > Hello, > Is it possible to fit a sphere to 3D data points using > scipy.optimize.leastsq? I'd like to minimize the residual for the distance > from the actual x,y,z point and the fitted sphere surface. I can see how to > minimize for z, but that's not really what I'm looking for. Is there a > better way to do this? Thanks for any help. > > params = a,b,c and r > a,b,c are the fitted center point of the sphere, r is the radius > > err = distance-to-center - radius > err = sqrt( x-a)**2 + (y-b)**2 + (z-c)**2) - r > > > > ---- > James J. Vincent, Ph.D. > National Cancer Institute > National Institutes of Health > Laboratory of Molecular Biology > Building 37, Room 5120 > 37 Convent Drive, MSC 4264 > Bethesda, MD 20892 USA > > 301-451-8755 > jjv5 at nih.gov > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jjv5 at nih.gov Thu Jan 25 09:47:25 2007 From: jjv5 at nih.gov (James Vincent) Date: Thu, 25 Jan 2007 09:47:25 -0500 Subject: [SciPy-user] Fitting sphere to 3d data points In-Reply-To: <91cf711d0701250631r35a9602g8f8ed48979e7cde3@mail.gmail.com> References: <715CC8AB-4406-41A8-83EF-72F419738896@nih.gov> <91cf711d0701250631r35a9602g8f8ed48979e7cde3@mail.gmail.com> Message-ID: David, Thanks for the reply. I think I should have been clearer about the problem. I have a surface patch of data points, not an actual whole sphere. It will probably be a very small section of total sphere surface. I would like to fit the sphere that best fits just those points that I have. The center of the sphere will be highly dependent on the curvature of the points. I think the leastsq routine is right, I just can't figure out how to pass the data in yet (it's my first work with scipy). Jim On Jan 25, 2007, at 9:31 AM, David Huard wrote: > Hi James, > > As a first guess, I'd say the center of the sphere is simply the > mean of your data points, if they're all weighted equally. With > only one parameter left to fit, it should be easy enough. However, > you may want to look at the paper: > > Werman, Michael and Keren, Daniel > A Bayesian method for fitting parametric and nonparametric models > to noisy data > Ieee Transactions on Pattern Analysis and Machine Intelligence, 23, > 2001. > > They write that the Mean Square Error approach overestimates the > radius in the case of circles. They don't talk about the 3D case, > but I'd guess similar problems arise. They provide a method to fit > parametric shapes with some robustness to data errors. > > Cheers, > > David > > > > 2007/1/25, James Vincent : > Hello, > > Is it possible to fit a sphere to 3D data points using > scipy.optimize.leastsq? I'd like to minimize the residual for the > distance from the actual x,y,z point and the fitted sphere surface. > I can see how to minimize for z, but that's not really what I'm > looking for. Is there a better way to do this? Thanks for any help. > > params = a,b,c and r > a,b,c are the fitted center point of the sphere, r is the radius > > err = distance-to-center - radius > err = sqrt( x-a)**2 + (y-b)**2 + (z-c)**2) - r > > > > ---- > James J. Vincent, Ph.D. 
> National Cancer Institute > National Institutes of Health > Laboratory of Molecular Biology > Building 37, Room 5120 > 37 Convent Drive, MSC 4264 > Bethesda, MD 20892 USA > > 301-451-8755 > jjv5 at nih.gov > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user ---- James J. Vincent, Ph.D. National Cancer Institute National Institutes of Health Laboratory of Molecular Biology Building 37, Room 5120 37 Convent Drive, MSC 4264 Bethesda, MD 20892 USA 301-451-8755 jjv5 at nih.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From ckkart at hoc.net Thu Jan 25 09:57:57 2007 From: ckkart at hoc.net (ckkart at hoc.net) Date: Thu, 25 Jan 2007 23:57:57 +0900 Subject: [SciPy-user] Fitting sphere to 3d data points In-Reply-To: References: <715CC8AB-4406-41A8-83EF-72F419738896@nih.gov> <91cf711d0701250631r35a9602g8f8ed48979e7cde3@mail.gmail.com> Message-ID: <45B8C575.5060609@hoc.net> James Vincent wrote: > David, > > Thanks for the reply. I think I should have been clearer about the > problem. I have a surface patch of data points, not an actual whole > sphere. It will probably be a very small section of total sphere > surface. I would like to fit the sphere that best fits just those points > that I have. The center of the sphere will be highly dependent on the > curvature of the points. > > I think the leastsq routine is right, I just can't figure out how > to pass the data in yet (it's my first work with scipy). Have a look at scipy.sandbox.odr, too. It is a module to do orthogonal distance regression in contrast to ordinary least squares of the z-values. Christian From david.douard at logilab.fr Thu Jan 25 10:47:37 2007 From: david.douard at logilab.fr (David Douard) Date: Thu, 25 Jan 2007 16:47:37 +0100 Subject: [SciPy-user] Fitting sphere to 3d data points In-Reply-To: <91cf711d0701250631r35a9602g8f8ed48979e7cde3@mail.gmail.com> References: <715CC8AB-4406-41A8-83EF-72F419738896@nih.gov> <91cf711d0701250631r35a9602g8f8ed48979e7cde3@mail.gmail.com> Message-ID: <20070125154737.GB19408@crater.logilab.fr> On Thu, Jan 25, 2007 at 09:31:10AM -0500, David Huard wrote: > Hi James, > > As a first guess, I'd say the center of the sphere is simply the mean of > your data points, if they're all weighted equally. Hello, I would have rather said that you need to find the point that minimize the distance to the normals of the triangles you have from your data points (not sure this is really meaningful...). If the points really are on a sphere, all the normal will cut on one point. If not, there really is a minimization problema to solve. David > With only one parameter > left to fit, it should be easy enough. However, you may want to look at the > paper: > > Werman, Michael and Keren, Daniel > A Bayesian method for fitting parametric and nonparametric models to noisy > data > Ieee Transactions on Pattern Analysis and Machine Intelligence, 23, 2001. > > They write that the Mean Square Error approach overestimates the radius in > the case of circles. They don't talk about the 3D case, but I'd guess > similar problems arise. They provide a method to fit parametric shapes with > some robustness to data errors. 
> > Cheers, > > David > > > > 2007/1/25, James Vincent : > > > >Hello, > >Is it possible to fit a sphere to 3D data points using > >scipy.optimize.leastsq? I'd like to minimize the residual for the distance > >from the actual x,y,z point and the fitted sphere surface. I can see how to > >minimize for z, but that's not really what I'm looking for. Is there a > >better way to do this? Thanks for any help. > > > >params = a,b,c and r > >a,b,c are the fitted center point of the sphere, r is the radius > > > >err = distance-to-center - radius > >err = sqrt( x-a)**2 + (y-b)**2 + (z-c)**2) - r > > > > > > > >---- > >James J. Vincent, Ph.D. > >National Cancer Institute > >National Institutes of Health > >Laboratory of Molecular Biology > >Building 37, Room 5120 > >37 Convent Drive, MSC 4264 > >Bethesda, MD 20892 USA > > > >301-451-8755 > >jjv5 at nih.gov > > > > > > > >_______________________________________________ > >SciPy-user mailing list > >SciPy-user at scipy.org > >http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user -- David Douard LOGILAB, Paris (France) Formations Python, Zope, Plone, Debian : http://www.logilab.fr/formations D?veloppement logiciel sur mesure : http://www.logilab.fr/services Informatique scientifique : http://www.logilab.fr/science -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: Digital signature URL: From jjv5 at nih.gov Thu Jan 25 10:53:56 2007 From: jjv5 at nih.gov (James Vincent) Date: Thu, 25 Jan 2007 10:53:56 -0500 Subject: [SciPy-user] Fitting sphere to 3d data points In-Reply-To: <20070125154737.GB19408@crater.logilab.fr> References: <715CC8AB-4406-41A8-83EF-72F419738896@nih.gov> <91cf711d0701250631r35a9602g8f8ed48979e7cde3@mail.gmail.com> <20070125154737.GB19408@crater.logilab.fr> Message-ID: <0C9AE871-55EE-4008-80AE-3B4DA0897615@nih.gov> Thanks for all the input. I think I've got it. This works: def resSphere(p,x,y,z): """ residuals from sphere fit """ a,b,c,r = p # a,b,c are center x,y,c coords to be fit, r is the radius to be fit distance = sqrt( (x-a)**2 + (y-b)**2 + (z-c)**2 ) err = distance - r # err is distance from input point to current fitted surface return err params = [0.,0.,0.,0.] myResult = leastsq(resSphere, params, args=(myX,myY,myZ) ) print myResult[0] On Jan 25, 2007, at 10:47 AM, David Douard wrote: > On Thu, Jan 25, 2007 at 09:31:10AM -0500, David Huard wrote: >> Hi James, >> >> As a first guess, I'd say the center of the sphere is simply the >> mean of >> your data points, if they're all weighted equally. > > Hello, > > I would have rather said that you need to find the point that minimize > the distance to the normals of the triangles you have from your data > points (not sure this is really meaningful...). > If the points really are on a sphere, all the normal will cut on one > point. If not, there really is a minimization problema to solve. > > David > > > >> With only one parameter >> left to fit, it should be easy enough. However, you may want to >> look at the >> paper: >> >> Werman, Michael and Keren, Daniel >> A Bayesian method for fitting parametric and nonparametric models >> to noisy >> data >> Ieee Transactions on Pattern Analysis and Machine Intelligence, >> 23, 2001. 
>> >> They write that the Mean Square Error approach overestimates the >> radius in >> the case of circles. They don't talk about the 3D case, but I'd guess >> similar problems arise. They provide a method to fit parametric >> shapes with >> some robustness to data errors. >> >> Cheers, >> >> David >> >> >> >> 2007/1/25, James Vincent : >>> >>> Hello, >>> Is it possible to fit a sphere to 3D data points using >>> scipy.optimize.leastsq? I'd like to minimize the residual for the >>> distance >>> from the actual x,y,z point and the fitted sphere surface. I can >>> see how to >>> minimize for z, but that's not really what I'm looking for. Is >>> there a >>> better way to do this? Thanks for any help. >>> >>> params = a,b,c and r >>> a,b,c are the fitted center point of the sphere, r is the radius >>> >>> err = distance-to-center - radius >>> err = sqrt( x-a)**2 + (y-b)**2 + (z-c)**2) - r >>> >>> >>> >>> ---- >>> James J. Vincent, Ph.D. >>> National Cancer Institute >>> National Institutes of Health >>> Laboratory of Molecular Biology >>> Building 37, Room 5120 >>> 37 Convent Drive, MSC 4264 >>> Bethesda, MD 20892 USA >>> >>> 301-451-8755 >>> jjv5 at nih.gov >>> >>> >>> >>> _______________________________________________ >>> SciPy-user mailing list >>> SciPy-user at scipy.org >>> http://projects.scipy.org/mailman/listinfo/scipy-user >>> >>> >>> > >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user > > > -- > David Douard LOGILAB, Paris (France) > Formations Python, Zope, Plone, Debian : http://www.logilab.fr/ > formations > D?veloppement logiciel sur mesure : http://www.logilab.fr/ > services > Informatique scientifique : http://www.logilab.fr/science > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user ---- James J. Vincent, Ph.D. National Cancer Institute National Institutes of Health Laboratory of Molecular Biology Building 37, Room 5120 37 Convent Drive, MSC 4264 Bethesda, MD 20892 USA 301-451-8755 jjv5 at nih.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From zunzun at zunzun.com Thu Jan 25 11:57:12 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Thu, 25 Jan 2007 11:57:12 -0500 Subject: [SciPy-user] non-linear model coefficient standard error estimates Message-ID: <20070125165711.GA23687@zunzun.com> I can easily calculate coefficient standard errors for linear models such as this cubic polynomial: y = ax^0 + bx^1 + cx^2 + dx^3 using the scipy Cookbook code referenced at http://scipy.org/Cookbook/OLS I do not know how to estimate coefficient standard errors for non-linear models such as y = exp(ax - b) Can anyone point me at a reference? I'll add it to the Python Equations project as soon as I can. James From zunzun at zunzun.com Thu Jan 25 13:30:52 2007 From: zunzun at zunzun.com (zunzun at zunzun.com) Date: Thu, 25 Jan 2007 13:30:52 -0500 Subject: [SciPy-user] Robust Standard Errors for Robust Estimators Message-ID: <20070125183051.GA32335@zunzun.com> Mr. Croux, This is James Phillips of http://zunzun.com, an online data modeling site. I have released the source code for the web site middleware under a liberal BSD license at http://sourceforge.net/projects/pythonequations. 
I was researching how to calculate coefficient standard errors for nonlinear models, and came across http://www.econ.kuleuven.be/ew/academic/econmetr/members/Dhaene/papers/rsejan2004.pdf where I obtained your e-mail address. The paper appears to be exactly what I need. Do you know where I might find any source code that already implements the estimators discussed in your paper? James Philliips http://zunzun.com zunzun at zunzun.com 2548 Vera Cruz Drive Birmingham, AL 35235 USA From s.mientki at ru.nl Thu Jan 25 14:12:47 2007 From: s.mientki at ru.nl (Stef Mientki) Date: Thu, 25 Jan 2007 20:12:47 +0100 Subject: [SciPy-user] does vectorize optimize ? Message-ID: <45B9012F.3040208@ru.nl> does vectorize optimize the performance, by trying to move the for-loop to the innermost algorithm, or is it just meant for easy writing ? thanks, Stef Mientki From peridot.faceted at gmail.com Thu Jan 25 14:31:57 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Thu, 25 Jan 2007 14:31:57 -0500 Subject: [SciPy-user] does vectorize optimize ? In-Reply-To: <45B9012F.3040208@ru.nl> References: <45B9012F.3040208@ru.nl> Message-ID: On 25/01/07, Stef Mientki wrote: > does vectorize optimize the performance, > by trying to move the for-loop to the innermost algorithm, > or is it just meant for easy writing ? Vectorize uses frompyfunc, which does not disassemble the python function and attempt to move loops around or use ufuncs. (For that, you want something like numexpr.) So, in particular, vectorize(lambda x, y: x+y)(a,b) will not be nearly as fast as a+b because it will call the python closure many times. However, it is faster than looping in python because all the allocation and loop indexing is done in C. Finally, don't underestimate convenience. Anne M. Archibald From Karl.Young at ucsf.edu Thu Jan 25 13:51:07 2007 From: Karl.Young at ucsf.edu (Karl Young) Date: Thu, 25 Jan 2007 10:51:07 -0800 Subject: [SciPy-user] scipy data mining ? In-Reply-To: <45B8385B.3020304@ar.media.kyoto-u.ac.jp> References: <45B537D8.7080103@ucsf.edu> <20070124082340.GI2093@tardis.terramar.selidor.net> <45B7182C.1080103@ar.media.kyoto-u.ac.jp> <45B79BE0.1070908@ucsf.edu> <45B8385B.3020304@ar.media.kyoto-u.ac.jp> Message-ID: <45B8FC1B.2030500@ucsf.edu> >Right know, scipy has an implementation of SVM (scipy.sandbox.svm), EM >for finite mixture of Gaussians (scipy.sandbox.pyem), and some other >which I don't really know (models, maxent) which are all more or less >relevant for machine learning. There are also some tools around for >other stuff like Kalman filtering, MCMC, which are not part (at least in >my knowledge) of scipy. Some kind of unification would be nice, but this >would require some work, and people interested in it. > > Yes, it's a little strange that just using the algorithms scattered around isn't sufficient, but it's a certain style that one gets used to after working with a package like Weka that is useful. At first I do the unthinkable and just run a data set (using different subsets in a way that Weka makes easy and automateable) through a zillion methods (which the unified interface makes easy), essentially as black boxes. The object is then to explore more carefully why particular algorithms and subsets of the data (hopefully) show some structure (not to succumb to the temptation to cherry pick the best result and publish ! but nobody ever does that...). 
From Karl.Young at ucsf.edu  Thu Jan 25 13:51:07 2007
From: Karl.Young at ucsf.edu (Karl Young)
Date: Thu, 25 Jan 2007 10:51:07 -0800
Subject: [SciPy-user] scipy data mining ?
In-Reply-To: <45B8385B.3020304@ar.media.kyoto-u.ac.jp>
References: <45B537D8.7080103@ucsf.edu> <20070124082340.GI2093@tardis.terramar.selidor.net> <45B7182C.1080103@ar.media.kyoto-u.ac.jp> <45B79BE0.1070908@ucsf.edu> <45B8385B.3020304@ar.media.kyoto-u.ac.jp>
Message-ID: <45B8FC1B.2030500@ucsf.edu>

>Right now, scipy has an implementation of SVM (scipy.sandbox.svm), EM
>for finite mixture of Gaussians (scipy.sandbox.pyem), and some others
>which I don't really know (models, maxent) which are all more or less
>relevant for machine learning. There are also some tools around for
>other stuff like Kalman filtering, MCMC, which are not part (at least
>to my knowledge) of scipy. Some kind of unification would be nice, but
>this would require some work, and people interested in it.

Yes, it's a little strange that just using the algorithms scattered
around isn't sufficient, but it's a certain style that one gets used to
after working with a package like Weka that is useful. At first I do
the unthinkable and just run a data set (using different subsets in a
way that Weka makes easy and automatable) through a zillion methods
(which the unified interface makes easy), essentially as black boxes.
The object is then to explore more carefully why particular algorithms
and subsets of the data (hopefully) show some structure (not to succumb
to the temptation to cherry-pick the best result and publish! but
nobody ever does that...).

>One thing you could try is using scipy with R, which should have much
>more complete models, and which can be used from a python session (at
>least I remember having seen that somewhere; I have never done it
>myself).

I do use R and rpy in code that also does a lot of scipy processing
but, interestingly, even R doesn't have any packages, that I'm aware
of, that pull together as many algorithms as Weka, or even Orange,
with a unified interface — i.e. in R it would presumably amount to some
package that would allow one to easily hack a data frame and pass that
off to a single command with an argument specifying the machine
learning algorithm and optional parameters.

>It looks like Orange uses pyqt for the graphical part, which is GPL
>(because QT is, since at least 4.0, on all supported platforms
>including windows); this would make the graphical part of orange quite
>difficult to relicense without some recoding. But I don't know much
>about license "mixing", and the mix of python, modules, and libraries
>with different licenses makes it quite difficult for me to really
>understand what is possible.

After forwarding some of this discussion to the Orange developers they
seemed amenable to exploring scipy and integration possibilities,
including changing licenses, but were understandably reluctant to
commit to anything that might require a lot of effort on their part.
Given the complication added by QT licensing, a possible route might be
to see if the compute engine could be effectively decoupled from the UI
and licensed separately.

Karl Young
Center for Imaging of Neurodegenerative Diseases, UCSF
VA Medical Center (114M)
4150 Clement Street
San Francisco, CA 94121

Phone: (415) 221-4810 x3114 lab
FAX: (415) 668-2864
Email: karl young at ucsf edu

From robert.kern at gmail.com  Thu Jan 25 15:35:16 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 25 Jan 2007 14:35:16 -0600
Subject: [SciPy-user] non-linear model coefficient standard error estimates
In-Reply-To: <20070125165711.GA23687@zunzun.com>
References: <20070125165711.GA23687@zunzun.com>
Message-ID: <45B91484.9090707@gmail.com>

zunzun at zunzun.com wrote:
> I can easily calculate coefficient standard errors for
> linear models such as this cubic polynomial:
>
> y = ax^0 + bx^1 + cx^2 + dx^3
>
> using the scipy Cookbook code referenced at
>
> http://scipy.org/Cookbook/OLS
>
> I do not know how to estimate coefficient standard errors
> for non-linear models such as
>
> y = exp(ax - b)
>
> Can anyone point me at a reference?  I'll add it to
> the Python Equations project as soon as I can.

scipy.odr gives error estimates for such nonlinear parameters. A
description of their method (which works for nonlinear OLS models) is
given in this paper:

  Paul T. Boggs and Janet E. Rogers (1990), "The Computation and Use of
  the Asymptotic Covariance Matrix for Measurement Error Models,"
  NIST IR 89-4102, U.S. Government Printing Office.
  http://www.boulder.nist.gov/mcsd/Staff/JRogers/papers/odr_vcv.dvi

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco
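To make Robert's pointer concrete, a minimal sketch of scipy.odr run in
ordinary-least-squares mode — the data, starting values, and model
below are invented for illustration, not from the thread:

    import numpy as np
    from scipy import odr

    def f(beta, x):
        a, b = beta
        return np.exp(a * x - b)

    x = np.linspace(0.0, 2.0, 50)
    y = np.exp(1.3 * x - 0.7) + 0.01 * np.random.randn(x.size)

    model = odr.Model(f)
    data = odr.Data(x, y)
    fit = odr.ODR(data, model, beta0=[1.0, 0.0])
    fit.set_job(fit_type=2)   # fit_type=2: ordinary least squares, not full ODR
    output = fit.run()
    print(output.beta)        # estimated parameters
    print(output.sd_beta)     # their standard errors
    print(output.cov_beta)    # covariance matrix (unscaled)

sd_beta already includes the residual-variance scaling, so it can be
read directly as the asymptotic standard errors described in the Boggs
and Rogers paper.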
From David.L.Goldsmith at noaa.gov  Thu Jan 25 15:53:23 2007
From: David.L.Goldsmith at noaa.gov (David Goldsmith)
Date: Thu, 25 Jan 2007 12:53:23 -0800
Subject: [SciPy-user] SciPy '07 Conference?
In-Reply-To: <45B7F25F.4040205@gmail.com>
References: <45B7EE3A.5020603@noaa.gov> <45B7F25F.4040205@gmail.com>
Message-ID: <45B918C3.6060506@noaa.gov>

If there's any chance the ball could get rolling sooner: our travel
budget is _very_ tight and, since the sooner one gets in a request
around here, the more likely there will be funds for it, our travel
approval this year is on a "first come, first served" basis.

FWIW,

DG

PS: I could probably submit a request w/ estimated amounts and dates.

Robert Kern wrote:
> David L Goldsmith wrote:
>> Any news?  If/when dates/place are determined, they'll be announced
>> here, yes?  (I.e., not just posted on the Web site.)
>
> We don't really start planning until April or so. Yes, we will make
> announcements here. The conference will (almost certainly) be held at
> Caltech in Pasadena, CA and will probably (with less certainty) be
> somewhere in late August or September.

From David.L.Goldsmith at noaa.gov  Thu Jan 25 16:33:56 2007
From: David.L.Goldsmith at noaa.gov (David Goldsmith)
Date: Thu, 25 Jan 2007 13:33:56 -0800
Subject: [SciPy-user] Fitting sphere to 3d data points
In-Reply-To: <0C9AE871-55EE-4008-80AE-3B4DA0897615@nih.gov>
References: <715CC8AB-4406-41A8-83EF-72F419738896@nih.gov> <91cf711d0701250631r35a9602g8f8ed48979e7cde3@mail.gmail.com> <20070125154737.GB19408@crater.logilab.fr> <0C9AE871-55EE-4008-80AE-3B4DA0897615@nih.gov>
Message-ID: <45B92244.5060000@noaa.gov>

James Vincent wrote:
> Thanks for all the input. I _think_ I've got it. (emphasis added)

Have you tested it w/ points known to lie on a sphere, e.g., (+/-1,0,0),
(0,+/-1,0), (0,0,+/-1), which should of course yield a=b=c=0, r=1 with
zero residual error? One reason why I ask is that, when fitting data to
a polynomial model (a sphere is the locus of the solution set of the
polynomial A(x^2 + y^2 + z^2) + Bx + Cy + Dz - 1 = 0), I usually like to
take advantage of the fact that a polynomial is a linear function of the
"monomial" pieces, in this case x^2 + y^2 + z^2, x, y, & z. Since in
general one can't solve a polynomial for what one wants formulaically,
this is the only general way to fit data to a polynomial by linear least
squares; of course in your quadratic case, one can solve for any one of
the data variables in terms of the others (as you have done), but to do
so you have to introduce the square-root function which, numerically
speaking, isn't quite as exact as simply squaring. In other words, I
would respectfully suggest that you fit the four quantities
(x^2 + y^2 + z^2), x, y, & z to the model
A(x^2 + y^2 + z^2) + Bx + Cy + Dz - 1 = 0 and calculate your a, b, c,
and r from the A, B, C, & D. (And even if you choose not to do that, at
least test your algorithm with some pseudo-data chosen to yield known
exact results.)

DG

PS: I've had to fit data to a circle before (using IDL); if anyone else
thinks they might have to do something like this in the future, I could
contribute "under-test" polyFit, circleFit, and sphereFit functions,
not immediately 'cause I'm under deadline, but in the not too distant
future.

> This works:
>
> def resSphere(p,x,y,z):
>     """ residuals from sphere fit """
>
>     a,b,c,r = p             # a,b,c are center x,y,z coords to be fit, r is the radius to be fit
>     distance = sqrt( (x-a)**2 + (y-b)**2 + (z-c)**2 )
>     err = distance - r      # err is distance from input point to current fitted surface
>
>     return err
>
> params = [0.,0.,0.,0.]
> myResult = leastsq(resSphere, params, args=(myX,myY,myZ) ) > print myResult[0] > > > > > On Jan 25, 2007, at 10:47 AM, David Douard wrote: > >> On Thu, Jan 25, 2007 at 09:31:10AM -0500, David Huard wrote: >>> Hi James, >>> >>> As a first guess, I'd say the center of the sphere is simply the mean of >>> your data points, if they're all weighted equally. >> >> Hello, >> >> I would have rather said that you need to find the point that minimize >> the distance to the normals of the triangles you have from your data >> points (not sure this is really meaningful...). >> If the points really are on a sphere, all the normal will cut on one >> point. If not, there really is a minimization problema to solve. >> >> David >> >> >> >>> With only one parameter >>> left to fit, it should be easy enough. However, you may want to look >>> at the >>> paper: >>> >>> Werman, Michael and Keren, Daniel >>> A Bayesian method for fitting parametric and nonparametric models to >>> noisy >>> data >>> Ieee Transactions on Pattern Analysis and Machine Intelligence, 23, >>> 2001. >>> >>> They write that the Mean Square Error approach overestimates the >>> radius in >>> the case of circles. They don't talk about the 3D case, but I'd guess >>> similar problems arise. They provide a method to fit parametric >>> shapes with >>> some robustness to data errors. >>> >>> Cheers, >>> >>> David >>> >>> >>> >>> 2007/1/25, James Vincent >: >>>> >>>> Hello, >>>> Is it possible to fit a sphere to 3D data points using >>>> scipy.optimize.leastsq? I'd like to minimize the residual for the >>>> distance >>>> from the actual x,y,z point and the fitted sphere surface. I can >>>> see how to >>>> minimize for z, but that's not really what I'm looking for. Is there a >>>> better way to do this? Thanks for any help. >>>> >>>> params = a,b,c and r >>>> a,b,c are the fitted center point of the sphere, r is the radius >>>> >>>> err = distance-to-center - radius >>>> err = sqrt( x-a)**2 + (y-b)**2 + (z-c)**2) - r >>>> >>>> >>>> >>>> ---- >>>> James J. Vincent, Ph.D. >>>> National Cancer Institute >>>> National Institutes of Health >>>> Laboratory of Molecular Biology >>>> Building 37, Room 5120 >>>> 37 Convent Drive, MSC 4264 >>>> Bethesda, MD 20892 USA >>>> >>>> 301-451-8755 >>>> jjv5 at nih.gov >>>> >>>> >>>> >>>> _______________________________________________ >>>> SciPy-user mailing list >>>> SciPy-user at scipy.org >>>> http://projects.scipy.org/mailman/listinfo/scipy-user >>>> >>>> >>>> >> >>> _______________________________________________ >>> SciPy-user mailing list >>> SciPy-user at scipy.org >>> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> >> -- >> David Douard LOGILAB, Paris (France) >> Formations Python, Zope, Plone, Debian : http://www.logilab.fr/formations >> D?veloppement logiciel sur mesure : http://www.logilab.fr/services >> Informatique scientifique : http://www.logilab.fr/science >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user > > ---- > James J. Vincent, Ph.D. 
> National Cancer Institute
> National Institutes of Health
> Laboratory of Molecular Biology
> Building 37, Room 5120
> 37 Convent Drive, MSC 4264
> Bethesda, MD 20892 USA
>
> 301-451-8755
> jjv5 at nih.gov
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From helmrp at yahoo.com  Thu Jan 25 16:41:03 2007
From: helmrp at yahoo.com (The Helmbolds)
Date: Thu, 25 Jan 2007 13:41:03 -0800 (PST)
Subject: [SciPy-user] Fitting Sphere to Data Points
Message-ID: <304093.89117.qm@web31814.mail.mud.yahoo.com>

Try:

    err_sq[i] = ((x[i]-a)**2 + (y[i]-b)**2 + (z[i]-c)**2 - r**2)**2
    err_sum_of_squares = sum(err_sq)

and minimize err_sum_of_squares by varying the parameters (a,b,c,r).
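Both this suggestion and David Goldsmith's linearization above amount
to minimizing the algebraic error of the sphere equation, and under
Goldsmith's A, B, C, D parameterization the fit needs no iteration at
all. A minimal sketch — the helper name and the lstsq route are an
illustration, not code from the thread:

    import numpy as np

    def sphere_fit_linear(x, y, z):
        # Fit A*(x^2+y^2+z^2) + B*x + C*y + D*z = 1 by linear least
        # squares, then convert (A,B,C,D) back to center and radius.
        M = np.column_stack([x**2 + y**2 + z**2, x, y, z])
        rhs = np.ones_like(x)
        coef, res, rank, sv = np.linalg.lstsq(M, rhs)
        A, B, C, D = coef
        a, b, c = -B / (2*A), -C / (2*A), -D / (2*A)
        r = np.sqrt(a**2 + b**2 + c**2 + 1.0/A)
        return a, b, c, r

With points sampled from the unit sphere this recovers a=b=c=0, r=1 up
to rounding — exactly the sanity check suggested above. Note the "= 1"
normalization assumes the sphere does not pass through the origin;
plugging (0,0,0) into the model gives 0 = 1, so that degenerate case
needs a different normalization.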
From fredmfp at gmail.com  Thu Jan 25 16:48:18 2007
From: fredmfp at gmail.com (fred)
Date: Thu, 25 Jan 2007 22:48:18 +0100
Subject: [SciPy-user] displaying spherical harmonics with different resolution...
In-Reply-To: <45B761F1.7000506@gmail.com>
References: <45B75FB5.20706@gmail.com> <45B761F1.7000506@gmail.com>
Message-ID: <45B925A2.7030108@gmail.com>

fred a écrit :
>fred a écrit :
>
>>What's wrong ?
>
>I forgot to mention that if I use literal expressions instead of the
>scipy.special.sph_harm func,
>it works fine with the nicer mesh.

Nobody can help me to fix this issue?? :-(

Cheers,

--
http://scipy.org/FredericPetit

From peridot.faceted at gmail.com  Thu Jan 25 17:17:28 2007
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Thu, 25 Jan 2007 17:17:28 -0500
Subject: [SciPy-user] displaying spherical harmonics with different resolution...
In-Reply-To: <45B925A2.7030108@gmail.com> References: <45B75FB5.20706@gmail.com> <45B761F1.7000506@gmail.com> <45B925A2.7030108@gmail.com> Message-ID: On 25/01/07, fred wrote: > fred a ?crit : > > >fred a ?crit : > > > > > > > >>What's wrong ? > >> > >> > >> > >> > >I forgot to mention that if I use litteral expressions instead of > >scipy.special.sph_harm func, > >it works fine with the nicer mesh. > > > > > Nobody can help me to fix this issue ?? :-( It looks to me like you've found another problem with cephes; we just had a discussion of the unsatisfactoriness of cephes' Bessel functions. But replacing cephes is a big big job, as is improving it... so when it appears you can work around your problem, you hear a guilty silence. Anne M. Archibald From jrevans1 at earthlink.net Thu Jan 25 18:52:37 2007 From: jrevans1 at earthlink.net (James Evans) Date: Thu, 25 Jan 2007 15:52:37 -0800 Subject: [SciPy-user] numpy build problems... Message-ID: <012901c740db$e4de7c30$0202fea9@jpl.nasa.gov> I am attempting to build numpy-1.0.1 and am seeing incorrect behavior. I am using Python-2.4.4 (built as a shared library). I have a regular build of lapack/atlas on my system. I have modified the site.cfg file to point to the correct lib dir. Following is what I get when I type "python setup.py build" (truncated for readability, see attached for full details) Apparently the link step is doing two things wrong: 1) Not linking against python, which this is dependant upon. 2) Not specifying the '-shared' flag . creating build/temp.linux-i686-2.4/numpy/linalg compile options: '-DNO_ATLAS_INFO=2 -Inumpy/core/include -Ibuild/src.linux-i686-2.4/numpy/core -Inumpy/core/src -Inumpy/core/include -I/home/jrevans/COTS/PyTest/include/python2.4 -c' gcc: numpy/linalg/lapack_litemodule.c /home/jrevans/local/COTS/1.0-20060808A/bin/gcc -fPIC -Wl,-E -L/home/jrevans/local/COTS/1.0-20060808A/lib build/temp.linux-i686-2.4/numpy/linalg/lapack_litemodule.o -L/home/jrevans/COTS/devel/lib -llapack -llapack -lblas -lg2c -o build/lib.linux-i686-2.4/numpy/linalg/lapack_lite.so /usr/lib/crt1.o: In function `_start': (.text+0x18): undefined reference to `main' build/temp.linux-i686-2.4/numpy/linalg/lapack_litemodule.o: In function `check_object': numpy/linalg/lapack_litemodule.c:103: undefined reference to `PyType_IsSubtype' numpy/linalg/lapack_litemodule.c:114: undefined reference to `PyErr_Format' numpy/linalg/lapack_litemodule.c:109: undefined reference to `PyErr_Format' build/temp.linux-i686-2.4/numpy/linalg/lapack_litemodule.o: In function `lapack_lite_dgeev': numpy/linalg/lapack_litemodule.c:149: undefined reference to `PyArg_ParseTuple' numpy/linalg/lapack_litemodule.c:165: undefined reference to `Py_BuildValue' build/temp.linux-i686-2.4/numpy/linalg/lapack_litemodule.o: In function `lapack_lite_dsyevd': numpy/linalg/lapack_litemodule.c:235: undefined reference to `PyArg_ParseTuple' numpy/linalg/lapack_litemodule.c:248: undefined reference to `Py_BuildValue' build/temp.linux-i686-2.4/numpy/linalg/lapack_litemodule.o: In function `lapack_lite_zheevd': numpy/linalg/lapack_litemodule.c:321: undefined reference to `PyArg_ParseTuple' numpy/linalg/lapack_litemodule.c:335: undefined reference to `Py_BuildValue' build/temp.linux-i686-2.4/numpy/linalg/lapack_litemodule.o: In function `lapack_lite_dgelsd': numpy/linalg/lapack_litemodule.c:359: undefined reference to `PyArg_ParseTuple' numpy/linalg/lapack_litemodule.c:374: undefined reference to `Py_BuildValue' build/temp.linux-i686-2.4/numpy/linalg/lapack_litemodule.o: In function 
`lapack_lite_dgesv': numpy/linalg/lapack_litemodule.c:392: undefined reference to `PyArg_ParseTuple' numpy/linalg/lapack_litemodule.c:401: undefined reference to `Py_BuildValue' build/temp.linux-i686-2.4/numpy/linalg/lapack_litemodule.o: In function `lapack_lite_dgesdd': numpy/linalg/lapack_litemodule.c:424: undefined reference to `PyArg_ParseTuple' numpy/linalg/lapack_litemodule.c:466: undefined reference to `Py_BuildValue' build/temp.linux-i686-2.4/numpy/linalg/lapack_litemodule.o: In function `lapack_lite_dgetrf': numpy/linalg/lapack_litemodule.c:482: undefined reference to `PyArg_ParseTuple' numpy/linalg/lapack_litemodule.c:490: undefined reference to `Py_BuildValue' build/temp.linux-i686-2.4/numpy/linalg/lapack_litemodule.o: In function `lapack_lite_dpotrf': numpy/linalg/lapack_litemodule.c:504: undefined reference to `PyArg_ParseTuple' numpy/linalg/lapack_litemodule.c:510: undefined reference to `Py_BuildValue' build/temp.linux-i686-2.4/numpy/linalg/lapack_litemodule.o: In function `lapack_lite_dgeqrf': numpy/linalg/lapack_litemodule.c:523: undefined reference to `PyArg_ParseTuple' numpy/linalg/lapack_litemodule.c:534: undefined reference to `Py_BuildValue' build/temp.linux-i686-2.4/numpy/linalg/lapack_litemodule.o: In function `lapack_lite_dorgqr': numpy/linalg/lapack_litemodule.c:549: undefined reference to `PyArg_ParseTuple' numpy/linalg/lapack_litemodule.c:556: undefined reference to `Py_BuildValue' build/temp.linux-i686-2.4/numpy/linalg/lapack_litemodule.o: In function `lapack_lite_zgeev': numpy/linalg/lapack_litemodule.c:579: undefined reference to `PyArg_ParseTuple' numpy/linalg/lapack_litemodule.c:595: undefined reference to `Py_BuildValue' build/temp.linux-i686-2.4/numpy/linalg/lapack_litemodule.o: In function `lapack_lite_zgelsd': numpy/linalg/lapack_litemodule.c:620: undefined reference to `PyArg_ParseTuple' numpy/linalg/lapack_litemodule.c:635: undefined reference to `Py_BuildValue' build/temp.linux-i686-2.4/numpy/linalg/lapack_litemodule.o: In function `lapack_lite_zgesv': numpy/linalg/lapack_litemodule.c:652: undefined reference to `PyArg_ParseTuple' numpy/linalg/lapack_litemodule.c:661: undefined reference to `Py_BuildValue' build/temp.linux-i686-2.4/numpy/linalg/lapack_litemodule.o: In function `lapack_lite_zgesdd': numpy/linalg/lapack_litemodule.c:685: undefined reference to `PyArg_ParseTuple' numpy/linalg/lapack_litemodule.c:702: undefined reference to `Py_BuildValue' build/temp.linux-i686-2.4/numpy/linalg/lapack_litemodule.o: In function `lapack_lite_zgetrf': numpy/linalg/lapack_litemodule.c:718: undefined reference to `PyArg_ParseTuple' numpy/linalg/lapack_litemodule.c:726: undefined reference to `Py_BuildValue' build/temp.linux-i686-2.4/numpy/linalg/lapack_litemodule.o: In function `lapack_lite_zpotrf': numpy/linalg/lapack_litemodule.c:740: undefined reference to `PyArg_ParseTuple' numpy/linalg/lapack_litemodule.c:745: undefined reference to `Py_BuildValue' build/temp.linux-i686-2.4/numpy/linalg/lapack_litemodule.o: In function `lapack_lite_zgeqrf': numpy/linalg/lapack_litemodule.c:758: undefined reference to `PyArg_ParseTuple' numpy/linalg/lapack_litemodule.c:768: undefined reference to `Py_BuildValue' build/temp.linux-i686-2.4/numpy/linalg/lapack_litemodule.o: In function `lapack_lite_zungqr': numpy/linalg/lapack_litemodule.c:781: undefined reference to `PyArg_ParseTuple' numpy/linalg/lapack_litemodule.c:791: undefined reference to `Py_BuildValue' build/temp.linux-i686-2.4/numpy/linalg/lapack_litemodule.o: In function `initlapack_lite': 
numpy/linalg/lapack_litemodule.c:827: undefined reference to `Py_InitModule4'
build/temp.linux-i686-2.4/numpy/linalg/lapack_litemodule.o: In function `_import_array':
build/src.linux-i686-2.4/numpy/core/__multiarray_api.h:945: undefined reference to `PyImport_ImportModule'
build/src.linux-i686-2.4/numpy/core/__multiarray_api.h:948: undefined reference to `PyObject_GetAttrString'
build/src.linux-i686-2.4/numpy/core/__multiarray_api.h:950: undefined reference to `PyCObject_Type'
build/temp.linux-i686-2.4/numpy/linalg/lapack_litemodule.o: In function `initlapack_lite':
numpy/linalg/lapack_litemodule.c:830: undefined reference to `PyErr_Print'
numpy/linalg/lapack_litemodule.c:830: undefined reference to `PyExc_ImportError'
numpy/linalg/lapack_litemodule.c:830: undefined reference to `PyErr_SetString'
build/temp.linux-i686-2.4/numpy/linalg/lapack_litemodule.o: In function `initlapack_lite':
build/src.linux-i686-2.4/numpy/core/__multiarray_api.h:958: undefined reference to `PyExc_RuntimeError'
build/src.linux-i686-2.4/numpy/core/__multiarray_api.h:958: undefined reference to `PyErr_Format'
build/src.linux-i686-2.4/numpy/core/__multiarray_api.h:951: undefined reference to `PyCObject_AsVoidPtr'
build/temp.linux-i686-2.4/numpy/linalg/lapack_litemodule.o: In function `initlapack_lite':
numpy/linalg/lapack_litemodule.c:831: undefined reference to `PyModule_GetDict'
numpy/linalg/lapack_litemodule.c:832: undefined reference to `PyErr_NewException'
numpy/linalg/lapack_litemodule.c:833: undefined reference to `PyDict_SetItemString'
collect2: ld returned 1 exit status
error: Command "/home/jrevans/local/COTS/1.0-20060808A/bin/gcc -fPIC -Wl,-E
-L/home/jrevans/local/COTS/1.0-20060808A/lib
build/temp.linux-i686-2.4/numpy/linalg/lapack_litemodule.o
-L/home/jrevans/COTS/devel/lib -llapack -llapack -lblas -lg2c -o
build/lib.linux-i686-2.4/numpy/linalg/lapack_lite.so" failed with exit status 1

Thanks,

--James Evans

From robert.kern at gmail.com  Thu Jan 25 18:56:38 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 25 Jan 2007 17:56:38 -0600
Subject: [SciPy-user] numpy build problems...
In-Reply-To: <012901c740db$e4de7c30$0202fea9@jpl.nasa.gov>
References: <012901c740db$e4de7c30$0202fea9@jpl.nasa.gov>
Message-ID: <45B943B6.6030704@gmail.com>

James Evans wrote:
> I am attempting to build numpy-1.0.1 and am seeing incorrect behavior.
>
> I am using Python-2.4.4 (built as a shared library).
>
> I have a regular build of lapack/atlas on my system.
>
> I have modified the site.cfg file to point to the correct lib dir.
>
> Following is what I get when I type 'python setup.py build'
> (truncated for readability, see attached for full details)
>
> Apparently the link step is doing two things wrong:
>
> 1) Not linking against python, which this is dependent upon.
>
> 2) Not specifying the '-shared' flag

This usually happens when you have $CFLAGS or $LDFLAGS set. These
*override* the default flags rather than add to them.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From robert.kern at gmail.com  Thu Jan 25 18:58:20 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 25 Jan 2007 17:58:20 -0600
Subject: [SciPy-user] displaying spherical harmonics with different resolution...
In-Reply-To: <45B925A2.7030108@gmail.com>
References: <45B75FB5.20706@gmail.com> <45B761F1.7000506@gmail.com> <45B925A2.7030108@gmail.com>
Message-ID: <45B9441C.6040903@gmail.com>

Anne Archibald wrote:
> On 25/01/07, fred wrote:
>> fred a écrit :
>>> fred a écrit :
>>>> What's wrong ?
>>>> >>> I forgot to mention that if I use litteral expressions instead of >>> scipy.special.sph_harm func, >>> it works fine with the nicer mesh. >>> >> Nobody can help me to fix this issue ?? :-( > > It looks to me like you've found another problem with cephes; we just > had a discussion of the unsatisfactoriness of cephes' Bessel > functions. But replacing cephes is a big big job, as is improving > it... so when it appears you can work around your problem, you hear a > guilty silence. I don't think so. Plotting the real and imaginary parts as images gives the same image (modulo resolution) for me on OS X. The problem is possibly in the code that was used to generate the meshes. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From fredmfp at gmail.com Thu Jan 25 19:32:53 2007 From: fredmfp at gmail.com (fred) Date: Fri, 26 Jan 2007 01:32:53 +0100 Subject: [SciPy-user] displaying spherical harmonics with different resolution... In-Reply-To: <45B9441C.6040903@gmail.com> References: <45B75FB5.20706@gmail.com> <45B761F1.7000506@gmail.com> <45B925A2.7030108@gmail.com> <45B9441C.6040903@gmail.com> Message-ID: <45B94C35.6060003@gmail.com> Robert Kern a ?crit : > I don't think so. Plotting the real and imaginary parts as images gives the same > image (modulo resolution) for me on OS X. The problem is possibly in the code > that was used to generate the meshes. I don't think so too, but who knows ;-) Ok, let's see a short example. Here, it looks very bad. Note that if I replace 2 by 3 in dphi_deg & dtheta_deg, it looks very good. Where could the bug be hidden ??? Cheers, -- http://scipy.org/FredericPetit -------------- next part -------------- A non-text attachment was scrubbed... Name: a.py Type: text/x-python Size: 695 bytes Desc: not available URL: From fredmfp at gmail.com Thu Jan 25 19:39:23 2007 From: fredmfp at gmail.com (fred) Date: Fri, 26 Jan 2007 01:39:23 +0100 Subject: [SciPy-user] displaying spherical harmonics with different resolution... In-Reply-To: References: <45B75FB5.20706@gmail.com> <45B761F1.7000506@gmail.com> <45B925A2.7030108@gmail.com> Message-ID: <45B94DBB.9090700@gmail.com> Anne Archibald a ?crit : > It looks to me like you've found another problem with cephes; we just > had a discussion of the unsatisfactoriness of cephes' Bessel > functions. But replacing cephes is a big big job, as is improving > it... so when it appears you can work around your problem, you hear a > guilty silence. Hmm, I can't say I worked around my problem, because litteral expressions are not easy to get for any values of l,m. Cheers, -- http://scipy.org/FredericPetit From robert.kern at gmail.com Thu Jan 25 19:49:54 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 25 Jan 2007 18:49:54 -0600 Subject: [SciPy-user] displaying spherical harmonics with different resolution... In-Reply-To: <45B94C35.6060003@gmail.com> References: <45B75FB5.20706@gmail.com> <45B761F1.7000506@gmail.com> <45B925A2.7030108@gmail.com> <45B9441C.6040903@gmail.com> <45B94C35.6060003@gmail.com> Message-ID: <45B95032.7030809@gmail.com> fred wrote: > Robert Kern a ?crit : > >> I don't think so. Plotting the real and imaginary parts as images gives the same >> image (modulo resolution) for me on OS X. The problem is possibly in the code >> that was used to generate the meshes. > I don't think so too, but who knows ;-) > > Ok, let's see a short example. 
> > Here, it looks very bad. > > Note that if I replace 2 by 3 in dphi_deg & dtheta_deg, > it looks very good. > > Where could the bug be hidden ??? Here: > a = scipy.special.sph_harm(m,l,phi,theta) You have phi and theta backwards assuming we're not using different naming conventions. I'll fix the docstring to be explicit about which is which. Here, phi is the polar angle [0, pi] and theta is the azimuthal angle [0, 2*pi]. Here is the implementation (note: *not* Cephes): def _sph_harmonic(m,n,theta,phi): """Spherical harmonic of order m,n (|m|<=n) and argument theta and phi: Y^m_n(theta,phi) """ x = cos(phi) m,n = int(m), int(n) Pmn,Pmnd = lpmn(m,n,x) val = Pmn[-1, -1] val *= sqrt((2*n+1)/4.0/pi) val *= exp(0.5*(gammaln(n-m+1)-gammaln(n+m+1))) val *= exp(1j*m*theta) return val -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From fredmfp at gmail.com Thu Jan 25 20:04:42 2007 From: fredmfp at gmail.com (fred) Date: Fri, 26 Jan 2007 02:04:42 +0100 Subject: [SciPy-user] displaying spherical harmonics with different resolution... In-Reply-To: <45B95032.7030809@gmail.com> References: <45B75FB5.20706@gmail.com> <45B761F1.7000506@gmail.com> <45B925A2.7030108@gmail.com> <45B9441C.6040903@gmail.com> <45B94C35.6060003@gmail.com> <45B95032.7030809@gmail.com> Message-ID: <45B953AA.1080001@gmail.com> Robert Kern a ?crit : > Here: > > >>a = scipy.special.sph_harm(m,l,phi,theta) > > > You have phi and theta backwards assuming we're not using different naming > conventions. I'll fix the docstring to be explicit about which is which. Here, > phi is the polar angle [0, pi] and theta is the azimuthal angle [0, 2*pi]. If I use scipy.special.sph_harm(m,l,theta,phi) (and so, permute phi & theta in cos/sin if I understand right), it also looks wrong (as in the previous case, in fact). Can someone try my short example, and tell if it looks right or wrong please ? Cheers, -- http://scipy.org/FredericPetit From fredmfp at gmail.com Thu Jan 25 20:17:35 2007 From: fredmfp at gmail.com (fred) Date: Fri, 26 Jan 2007 02:17:35 +0100 Subject: [SciPy-user] displaying spherical harmonics with different resolution... In-Reply-To: <45B953AA.1080001@gmail.com> References: <45B75FB5.20706@gmail.com> <45B761F1.7000506@gmail.com> <45B925A2.7030108@gmail.com> <45B9441C.6040903@gmail.com> <45B94C35.6060003@gmail.com> <45B95032.7030809@gmail.com> <45B953AA.1080001@gmail.com> Message-ID: <45B956AF.2020007@gmail.com> fred a ?crit : > Can someone try my short example, and tell if it looks right or wrong > please ? Ok, I rewrote my short example. Can you tell me please if it looks good ? Because the result is not "ugly" but not good. 
#!/usr/bin/env mayavi2

import scipy
import scipy.special

from enthought.mayavi.tools import mlab

l = 1
m = 1

phi_deg_min, phi_deg_max, dphi_deg = 0, 180, 2
theta_deg_min, theta_deg_max, dtheta_deg = 0, 360, 2

phi_deg, theta_deg = scipy.mgrid[phi_deg_min:phi_deg_max+dphi_deg:dphi_deg,
                                 theta_deg_min:theta_deg_max+dtheta_deg:dtheta_deg]

phi, theta = phi_deg*scipy.pi/180, theta_deg*scipy.pi/180

a = scipy.special.sph_harm(m, l, theta, phi)

r = scipy.real(a)**2

x = r*scipy.cos(theta)*scipy.sin(phi)
y = r*scipy.sin(theta)*scipy.sin(phi)
z = r*scipy.cos(phi)

mlab.figure()
mlab.surf(x, y, z, r)

http://fredantispam.free.fr/foo.png

Cheers,

--
http://scipy.org/FredericPetit

From robert.kern at gmail.com  Thu Jan 25 20:48:04 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 25 Jan 2007 19:48:04 -0600
Subject: [SciPy-user] displaying spherical harmonics with different resolution...
In-Reply-To: <45B956AF.2020007@gmail.com>
References: <45B75FB5.20706@gmail.com> <45B761F1.7000506@gmail.com> <45B925A2.7030108@gmail.com> <45B9441C.6040903@gmail.com> <45B94C35.6060003@gmail.com> <45B95032.7030809@gmail.com> <45B953AA.1080001@gmail.com> <45B956AF.2020007@gmail.com>
Message-ID: <45B95DD4.40303@gmail.com>

fred wrote:
> fred a écrit :
>
>> Can someone try my short example, and tell if it looks right or wrong
>> please ?
> Ok, I rewrote my short example.
> Can you tell me please if it looks good ?
>
> Because the result is not "ugly" but not good.
>
> #!/usr/bin/env mayavi2
>
> import scipy
> import scipy.special
>
> from enthought.mayavi.tools import mlab
>
> l = 1
> m = 1
>
> http://fredantispam.free.fr/foo.png

I do not get this result with these parameters. I get the expected
p-orbital-like configuration.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From prabhu at aero.iitb.ac.in  Thu Jan 25 23:14:37 2007
From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran)
Date: Fri, 26 Jan 2007 09:44:37 +0530
Subject: [SciPy-user] displaying spherical harmonics with different resolution...
In-Reply-To: <45B95DD4.40303@gmail.com>
References: <45B75FB5.20706@gmail.com> <45B761F1.7000506@gmail.com> <45B925A2.7030108@gmail.com> <45B9441C.6040903@gmail.com> <45B94C35.6060003@gmail.com> <45B95032.7030809@gmail.com> <45B953AA.1080001@gmail.com> <45B956AF.2020007@gmail.com> <45B95DD4.40303@gmail.com>
Message-ID: <17849.32813.107880.442509@gargle.gargle.HOWL>

>>>>> "Robert" == Robert Kern writes:

  >> http://fredantispam.free.fr/foo.png

  Robert> I do not get this result with these parameters. I get the
  Robert> expected p-orbital-like configuration.

I also get a p-orbital-like configuration with scipy.__version__ ==
'0.5.0.2158', numpy.__version__ == '1.0.2.dev3487'

cheers,
prabhu
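Given the confusion over argument order earlier in the thread, a quick
numerical spot-check against the closed form of Y_1^1 can settle which
convention a given scipy build uses. This sketch assumes the convention
Robert describes — sph_harm(m, n, theta, phi) with theta the azimuthal
angle in [0, 2*pi] and phi the polar angle in [0, pi]; the test angles
are arbitrary:

    import numpy as np
    from scipy.special import sph_harm

    # Closed form: Y_1^1(theta, phi) = -sqrt(3/(8*pi)) * sin(phi) * exp(1j*theta)
    theta, phi = 0.4, 1.1
    lhs = sph_harm(1, 1, theta, phi)
    rhs = -np.sqrt(3.0 / (8.0 * np.pi)) * np.sin(phi) * np.exp(1j * theta)
    print(lhs, rhs, abs(lhs - rhs))  # should agree to rounding error

If the two values disagree, the arguments are being passed in the
swapped order.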
From fredmfp at gmail.com  Fri Jan 26 03:03:34 2007
From: fredmfp at gmail.com (fred)
Date: Fri, 26 Jan 2007 09:03:34 +0100
Subject: [SciPy-user] displaying spherical harmonics with different resolution...
In-Reply-To: <17849.32813.107880.442509@gargle.gargle.HOWL>
References: <45B75FB5.20706@gmail.com> <45B761F1.7000506@gmail.com> <45B925A2.7030108@gmail.com> <45B9441C.6040903@gmail.com> <45B94C35.6060003@gmail.com> <45B95032.7030809@gmail.com> <45B953AA.1080001@gmail.com> <45B956AF.2020007@gmail.com> <45B95DD4.40303@gmail.com> <17849.32813.107880.442509@gargle.gargle.HOWL>
Message-ID: <45B9B5D6.5030706@gmail.com>

Prabhu Ramachandran a écrit :
>>>>>> "Robert" == Robert Kern writes:
>
>  >> http://fredantispam.free.fr/foo.png
>
>  Robert> I do not get this result with these parameters. I get the
>  Robert> expected p-orbital-like configuration.
>
> I also get a p-orbital-like configuration with scipy.__version__ ==
> '0.5.0.2158', numpy.__version__ == '1.0.2.dev3487'

Thanks to you two.

Robert, which versions do you use ?

I use scipy.__version__==0.5.1 & numpy.__version__==1.0rc1.

Cheers,

--
http://scipy.org/FredericPetit

From robert.kern at gmail.com  Fri Jan 26 03:10:53 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 26 Jan 2007 02:10:53 -0600
Subject: [SciPy-user] displaying spherical harmonics with different resolution...
In-Reply-To: <45B9B5D6.5030706@gmail.com>
References: <45B75FB5.20706@gmail.com> <45B761F1.7000506@gmail.com> <45B925A2.7030108@gmail.com> <45B9441C.6040903@gmail.com> <45B94C35.6060003@gmail.com> <45B95032.7030809@gmail.com> <45B953AA.1080001@gmail.com> <45B956AF.2020007@gmail.com> <45B95DD4.40303@gmail.com> <17849.32813.107880.442509@gargle.gargle.HOWL> <45B9B5D6.5030706@gmail.com>
Message-ID: <45B9B78D.4060109@gmail.com>

fred wrote:
> Thanks to you two.
>
> Robert, which versions do you use ?
>
> I use scipy.__version__==0.5.1 & numpy.__version__==1.0rc1.

>>> import numpy
>>> print numpy.__version__
1.0.2.dev3507
>>> import scipy
>>> print scipy.__version__
0.5.3.dev2500

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From fredmfp at gmail.com  Fri Jan 26 03:21:26 2007
From: fredmfp at gmail.com (fred)
Date: Fri, 26 Jan 2007 09:21:26 +0100
Subject: [SciPy-user] displaying spherical harmonics with different resolution...
In-Reply-To: <45B9B78D.4060109@gmail.com>
References: <45B75FB5.20706@gmail.com> <45B761F1.7000506@gmail.com> <45B925A2.7030108@gmail.com> <45B9441C.6040903@gmail.com> <45B94C35.6060003@gmail.com> <45B95032.7030809@gmail.com> <45B953AA.1080001@gmail.com> <45B956AF.2020007@gmail.com> <45B95DD4.40303@gmail.com> <17849.32813.107880.442509@gargle.gargle.HOWL> <45B9B5D6.5030706@gmail.com> <45B9B78D.4060109@gmail.com>
Message-ID: <45B9BA06.4090701@gmail.com>

Robert Kern a écrit :

>>>> import numpy
>>>> print numpy.__version__
> 1.0.2.dev3507
>
>>>> import scipy
>>>> print scipy.__version__
> 0.5.3.dev2500

Hmmm, Prabhu has an older scipy release, and you, a newer one.
But both have a newer numpy release.

I guess you will advise me to update my numpy release.
What else ?
--
http://scipy.org/FredericPetit

From robert.kern at gmail.com  Fri Jan 26 03:23:57 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 26 Jan 2007 02:23:57 -0600
Subject: [SciPy-user] displaying spherical harmonics with different resolution...
In-Reply-To: <45B9BA06.4090701@gmail.com>
References: <45B75FB5.20706@gmail.com> <45B761F1.7000506@gmail.com> <45B925A2.7030108@gmail.com> <45B9441C.6040903@gmail.com> <45B94C35.6060003@gmail.com> <45B95032.7030809@gmail.com> <45B953AA.1080001@gmail.com> <45B956AF.2020007@gmail.com> <45B95DD4.40303@gmail.com> <17849.32813.107880.442509@gargle.gargle.HOWL> <45B9B5D6.5030706@gmail.com> <45B9B78D.4060109@gmail.com> <45B9BA06.4090701@gmail.com>
Message-ID: <45B9BA9D.2040702@gmail.com>

fred wrote:
> Robert Kern a écrit :
>
>>>>> import numpy
>>>>> print numpy.__version__
>> 1.0.2.dev3507
>>
>>>>> import scipy
>>>>> print scipy.__version__
>> 0.5.3.dev2500
>
> Hmmm, Prabhu has an older scipy release, and you, a newer one.
> But both have a newer numpy release.
>
> I guess you will advise me to update my numpy release.
> What else ?

I can't think of anything else.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From fredmfp at gmail.com  Fri Jan 26 03:45:53 2007
From: fredmfp at gmail.com (fred)
Date: Fri, 26 Jan 2007 09:45:53 +0100
Subject: [SciPy-user] displaying spherical harmonics with different resolution...
In-Reply-To: <45B9BA9D.2040702@gmail.com>
References: <45B75FB5.20706@gmail.com> <45B761F1.7000506@gmail.com> <45B925A2.7030108@gmail.com> <45B9441C.6040903@gmail.com> <45B94C35.6060003@gmail.com> <45B95032.7030809@gmail.com> <45B953AA.1080001@gmail.com> <45B956AF.2020007@gmail.com> <45B95DD4.40303@gmail.com> <17849.32813.107880.442509@gargle.gargle.HOWL> <45B9B5D6.5030706@gmail.com> <45B9B78D.4060109@gmail.com> <45B9BA06.4090701@gmail.com> <45B9BA9D.2040702@gmail.com>
Message-ID: <45B9BFC1.5020401@gmail.com>

Robert Kern a écrit :

> I can't think of anything else.

No problem, the advice was the right one :-)

However, my first python script now displays the right configuration.
It was not guilty (even if phi/theta are inverted :-)

I do not know why/where, but it seems it was rather numpy's fault...

Thanks.
Cheers,

--
http://scipy.org/FredericPetit

From nwagner at iam.uni-stuttgart.de  Fri Jan 26 06:57:58 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Fri, 26 Jan 2007 12:57:58 +0100
Subject: [SciPy-user] Installing MayaVi
Message-ID: <45B9ECC6.8030107@iam.uni-stuttgart.de>

Hi,

Which enthought libs will be installed by

  ./build_install.sh /usr/local [numpy]

in /usr/local/lib/python2.5/site-packages ?

I couldn't find these details on
http://www.scipy.org/Cookbook/MayaVi/Installation

Nils

From prabhu at aero.iitb.ac.in  Fri Jan 26 12:23:27 2007
From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran)
Date: Fri, 26 Jan 2007 22:53:27 +0530
Subject: [SciPy-user] Installing MayaVi
In-Reply-To: <45B9ECC6.8030107@iam.uni-stuttgart.de>
References: <45B9ECC6.8030107@iam.uni-stuttgart.de>
Message-ID: <17850.14607.999448.13600@gargle.gargle.HOWL>

>>>>> "Nils" == Nils Wagner writes:

  Nils> Hi, which enthought libs will be installed by
  Nils> ./build_install.sh /usr/local [numpy]
  Nils> in /usr/local/lib/python2.5/site-packages
  Nils> I couldn't find these details on

AFAIK, everything, plus a couple of scripts (mayavi2 and endo.py) in
$prefix/bin/

cheers,
prabhu

From mark at mitre.org  Fri Jan 26 13:25:28 2007
From: mark at mitre.org (Mark Heslep)
Date: Fri, 26 Jan 2007 13:25:28 -0500
Subject: [SciPy-user] [Nipy-devel] how to wrap C++
References: <45AE6C33.4090805@radmail.ucsf.edu> <45AE7449.30007@ucsf.edu>
Message-ID: <45BA4798.5090602@mitre.org>

>> ). Before the original C++ package ceased its
>> evolution (a couple of years ago) one of the developers had gotten about
>> halfway through using Boost to provide a python interface. I did a
>> little bit of work with SWIG a few years ago but for a very small
>> package. So we're not sure whether to try and complete the original
>> effort (using newer versions of python and Boost) or to start over with
>> a currently more appropriate choice. Any thoughts welcome, thanks,
>
> What I recall from a while ago, as relayed by John Hunter (matplotlib)
> and Prabhu Ramachandran (mayavi) was that while boost was great, the
> compile times and the resulting binary library sizes were completely
> out of control. All that templating forces instantiation of an
> enormous number of classes, which makes both time and size explode, to
> the point of making both development and distribution seriously
> problematic. Both John and Prabhu decided to go with other
> alternatives due to this, I think it was SWIG (whose C++ support has
> greatly improved).

A bit of redirect here: If the OP had been talking about an existing C
lib and not C++, would the recommendation be 'always reach first for
Ctypes'? It seems too easy and direct to waste time elsewhere.

-Mark

From oliphant.travis at ieee.org  Sat Jan 27 03:39:50 2007
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Sat, 27 Jan 2007 01:39:50 -0700
Subject: [SciPy-user] [Nipy-devel] how to wrap C++
In-Reply-To: <45BA4798.5090602@mitre.org>
References: <45AE6C33.4090805@radmail.ucsf.edu> <45AE7449.30007@ucsf.edu> <45BA4798.5090602@mitre.org>
Message-ID: <45BB0FD6.7060206@ieee.org>

Mark Heslep wrote:
>>> ). Before the original C++ package ceased its
>>> evolution (a couple of years ago) one of the developers had gotten about
>>> halfway through using Boost to provide a python interface. I did a
>>> little bit of work with SWIG a few years ago but for a very small
>>> package.
>>> So we're not sure whether to try and complete the original >>> effort (using newer versions of Python and Boost) or to start over with >>> a currently more appropriate choice. Any thoughts welcome, thanks, >>> >>> >> What I recall from a while ago, as relayed by John Hunter (matplotlib) >> and Prabhu Ramachandran (mayavi), was that while Boost was great, the >> compile times and the resulting binary library sizes were completely >> out of control. All that templating forces instantiation of an >> enormous number of classes, which makes both time and size explode, to >> the point of making both development and distribution seriously >> problematic. Both John and Prabhu decided to go with other >> alternatives due to this; I think it was SWIG (whose C++ support has >> greatly improved). >> >> > A bit of redirect here: If the OP had been talking about an existing C > lib and not C++, would the recommendation be 'always reach first for > Ctypes'? It seems too easy and direct to waste time elsewhere. > > It still depends a little. A large library is probably more easily handled with SWIG, but for custom interfaces I would say ctypes is best. The only problems with ctypes are:

1) Not easy to distribute the source code with the interface, as you can't use distutils out of the box for building shared libraries.
2) Doesn't work with C++ code.
3) Automatic generation of the interface file is not as sophisticated as f2py's.

-Travis > -Mark > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > From robert.kern at gmail.com Sat Jan 27 03:44:26 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 27 Jan 2007 02:44:26 -0600 Subject: [SciPy-user] [Nipy-devel] how to wrap C++ In-Reply-To: <45BB0FD6.7060206@ieee.org> References: <45AE6C33.4090805@radmail.ucsf.edu> <45AE7449.30007@ucsf.edu> <45BA4798.5090602@mitre.org> <45BB0FD6.7060206@ieee.org> Message-ID: <45BB10EA.2090903@gmail.com> Travis Oliphant wrote: > Mark Heslep wrote: >> A bit of redirect here: If the OP had been talking about an existing C >> lib and not C++, would the recommendation be 'always reach first for >> Ctypes'? It seems too easy and direct to waste time elsewhere. >> > It still depends a little. A large library is probably more easily > handled with SWIG, but for custom interfaces I would say ctypes is best. I would also suggest that if the interfaces are simple enough (or rather, FORTRAN-like enough) that f2py can handle them, f2py might be easiest. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From oliphant.travis at ieee.org Sat Jan 27 03:59:15 2007 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sat, 27 Jan 2007 01:59:15 -0700 Subject: [SciPy-user] [Nipy-devel] how to wrap C++ In-Reply-To: <45BB10EA.2090903@gmail.com> References: <45AE6C33.4090805@radmail.ucsf.edu> <45AE7449.30007@ucsf.edu> <45BA4798.5090602@mitre.org> <45BB0FD6.7060206@ieee.org> <45BB10EA.2090903@gmail.com> Message-ID: <45BB1463.4060507@ieee.org> Robert Kern wrote: > Travis Oliphant wrote: > >> Mark Heslep wrote: >> >>> A bit of redirect here: If the OP had been talking about an existing C >>> lib and not C++, would the recommendation be 'always reach first for >>> Ctypes'? It seems too easy and direct to waste time elsewhere. >>> >>> >> It still depends a little.
>> A large library is probably more easily >> handled with SWIG, but for custom interfaces I would say ctypes is best. >> > > I would also suggest that if the interfaces are simple enough (or rather, > FORTRAN-like enough) that f2py can handle them, f2py might be easiest. > > Yeah, f2py is still my favorite way to interface to a library. -Travis From mclaughlinbryan at yahoo.com Sat Jan 27 12:57:53 2007 From: mclaughlinbryan at yahoo.com (mclaugb) Date: Sat, 27 Jan 2007 17:57:53 -0000 Subject: [SciPy-user] scipy.optimise.lbfgsb /bounded optimization help Message-ID: I have been using a Gauss-Newton algorithm, but it is unconstrained and I need a bounded, multivariate algorithm. Does anyone have any example code that uses the "lbfgsb" optimization algorithm? I have a function of two variables that is bounded to the positive half-space and would like to use lbfgsb to minimize over the two variables. I have tried to get it working for a 2-variable function but have been unsuccessful; any example code would help me. I have also seen a version of leastsq called "mpfit" which some have suggested porting from Numeric to NumPy. Has anyone used this Levenberg-Marquardt algorithm successfully? Do you have the converted code that works with Python 2.4? Do you have any example code? Kind Regards, Bryan From robert.kern at gmail.com Sat Jan 27 16:38:42 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 27 Jan 2007 15:38:42 -0600 Subject: [SciPy-user] scipy.optimise.lbfgsb /bounded optimization help In-Reply-To: References: Message-ID: <45BBC662.6050202@gmail.com> mclaugb wrote: > I have been using a Gauss-Newton algorithm, but it is unconstrained and I > need a bounded, multivariate algorithm. > > Does anyone have any example code that uses the "lbfgsb" optimization > algorithm? I have a function of two variables that is bounded to the > positive half-space and would like to use lbfgsb to minimize over the two > variables. I have tried to get it working for a 2-variable function but > have been unsuccessful; any example code would help me. You mean that your function f maps

    f
    RR^2 ---> RR^2

? Note that all of the minimizers in scipy (and even mpfit, which you mention below) only minimize functions that map to scalars; i.e.

    f
    RR^n ---> RR

These are not multiobjective optimizers. Please clarify what you mean. Can you show us the code that you already tried? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From simon at arrowtheory.com Sat Jan 27 17:02:50 2007 From: simon at arrowtheory.com (Simon Burton) Date: Sat, 27 Jan 2007 14:02:50 -0800 Subject: [SciPy-user] scipy.optimise.lbfgsb /bounded optimization help In-Reply-To: <45BBC662.6050202@gmail.com> References: <45BBC662.6050202@gmail.com> Message-ID: <20070127140250.4fd2f10a.simon@arrowtheory.com> On Sat, 27 Jan 2007 15:38:42 -0600 Robert Kern wrote: > > > Does anyone have any example code that uses the "lbfgsb" optimization > > algorithm? I have a function of two variables that is bounded to the > > positive half-space and would like to use lbfgsb to minimize over the two > > variables. I have tried to get it working for a 2-variable function but > > have been unsuccessful; any example code would help me. > > You mean that your function f maps > > f > RR^2 ---> RR^2 > > ? I read it as f: R^2 -> R, but with the domain constrained to y >= 0.
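If that's the right reading, a minimal sketch should do it -- untested, and the quadratic objective is just a stand-in for the real function:

    from scipy.optimize import fmin_l_bfgs_b

    def f(x):
        # stand-in scalar objective of two variables
        return (x[0] - 1.0)**2 + (x[1] - 2.0)**2

    x0 = [0.5, 0.5]                      # floats, not ints
    bounds = [(1e-4, 1e3), (1e-4, 1e3)]  # keep both variables positive
    x, fval, info = fmin_l_bfgs_b(f, x0, approx_grad=True, bounds=bounds)
    print x, fval

Simon.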
From robert.kern at gmail.com Sat Jan 27 17:11:09 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 27 Jan 2007 16:11:09 -0600 Subject: [SciPy-user] scipy.optimise.lbfgsb /bounded optimization help In-Reply-To: <20070127140250.4fd2f10a.simon@arrowtheory.com> References: <45BBC662.6050202@gmail.com> <20070127140250.4fd2f10a.simon@arrowtheory.com> Message-ID: <45BBCDFD.1040304@gmail.com> Simon Burton wrote: > On Sat, 27 Jan 2007 15:38:42 -0600 > Robert Kern wrote: > >>> Does anyone have any example code that uses the "lbfgsb" optimization >>> algorithm? ... >> You mean that your function f maps >> >> f >> RR^2 ---> RR^2 >> >> ? > > I read it as f: R^2 -> R, but with the domain constrained to y >= 0. Not according to the last time he asked this question on c.l.py: http://mail.python.org/pipermail/python-list/2007-January/423272.html -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From stefan at sun.ac.za Sat Jan 27 17:12:37 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sun, 28 Jan 2007 00:12:37 +0200 Subject: [SciPy-user] [Nipy-devel] how to wrap C++ In-Reply-To: <45BB0FD6.7060206@ieee.org> References: <45AE6C33.4090805@radmail.ucsf.edu> <45AE7449.30007@ucsf.edu> <45BA4798.5090602@mitre.org> <45BB0FD6.7060206@ieee.org> Message-ID: <20070127221236.GA5742@mentat.za.net> On Sat, Jan 27, 2007 at 01:39:50AM -0700, Travis Oliphant wrote: > It still depends a little. A large library is probably more easily > handled with SWIG, but for custom interfaces I would say ctypes is best. > > The only problems with ctypes are > > 1) Not easy to distribute the source code with the interface, as you > can't use distutils out of the box for building shared libraries. I've heard this being said many times, but I'm not sure I understand. I've built shared libraries under Linux, and I've seen people do it under Windows. Is there a problem under MacOSX or Solaris platforms? In the documentation, they mention: http://pydoc.org/1.6/distutils.ccompiler.html#CCompiler-link_shared_lib which I saw in action in http://www.koders.com/python/fid58C5EF5218DFA16D8F6EFBA42676DE605EBAF607.aspx Cheers Stéfan From robert.kern at gmail.com Sat Jan 27 17:30:17 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 27 Jan 2007 16:30:17 -0600 Subject: [SciPy-user] [Nipy-devel] how to wrap C++ In-Reply-To: <20070127221236.GA5742@mentat.za.net> References: <45AE6C33.4090805@radmail.ucsf.edu> <45AE7449.30007@ucsf.edu> <45BA4798.5090602@mitre.org> <45BB0FD6.7060206@ieee.org> <20070127221236.GA5742@mentat.za.net> Message-ID: <45BBD279.5010807@gmail.com> Stefan van der Walt wrote: > On Sat, Jan 27, 2007 at 01:39:50AM -0700, Travis Oliphant wrote: >> It still depends a little. A large library is probably more easily >> handled with SWIG, but for custom interfaces I would say ctypes is best. >> >> The only problems with ctypes are >> >> 1) Not easy to distribute the source code with the interface, as you >> can't use distutils out of the box for building shared libraries. > > I've heard this being said many times, but I'm not sure I understand.
> I've built shared libraries under Linux, and I've seen people do it > under Windows. Is there a problem under MacOSX or Solaris platforms? The problem is the location of installation. Under Windows, it's not really a problem; you just put the DLL next to the .pyd that links against it, and DLL resolution looks in the directory of the linking DLL first. Linux and other Unices don't do this. You have to install the .so to a location where it will be picked up by the linker, and distutils does not really support that. You have to extend it to do so, as in the Togl example that you show. Thus, "not easy". -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Sat Jan 27 17:32:18 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 27 Jan 2007 16:32:18 -0600 Subject: [SciPy-user] [Nipy-devel] how to wrap C++ In-Reply-To: <20070127221236.GA5742@mentat.za.net> References: <45AE6C33.4090805@radmail.ucsf.edu> <45AE7449.30007@ucsf.edu> <45BA4798.5090602@mitre.org> <45BB0FD6.7060206@ieee.org> <20070127221236.GA5742@mentat.za.net> Message-ID: <45BBD2F2.3080806@gmail.com> Stefan van der Walt wrote: > On Sat, Jan 27, 2007 at 01:39:50AM -0700, Travis Oliphant wrote: >> It still depends a little. A large library is probably more easily >> handled with SWIG, but for custom interfaces I would say ctypes is best. >> >> The only problems with ctypes are >> >> 1) Not easy to distribute the source code with the interface, as you >> can't use distutils out of the box for building shared libraries. > > I've heard this being said many times, but I'm not sure I understand. > I've built shared libraries under Linux, and I've seen people do it > under Windows. Is there a problem under MacOSX or Solaris platforms? Actually, let me back up. With ctypes, we want the .so/.dll to go into the package rather than being installed anywhere else. You still have to extend distutils to do a build of a non-Python .so and then to put it in the right place. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From mclaughlinbryan at yahoo.com Sat Jan 27 17:36:36 2007 From: mclaughlinbryan at yahoo.com (mclaugb) Date: Sat, 27 Jan 2007 22:36:36 -0000 Subject: [SciPy-user] scipy.optimise.lbfgsb /bounded optimization help References: <45BBC662.6050202@gmail.com> Message-ID: I can make the function map to a scalar--though I thought multivariate algorithms used vector differentials between iterations. Note, the error below is the same regardless of whether the function returns a scalar or a vector.
I seem to be receiving this error:

    x, f, d = lbfgsb.fmin_l_bfgs_b(Permmin, x0, approx_grad=1, args=params, bounds=[(.0001, 1000), (.0001, 1000)])
      File "C:\Python24\lib\site-packages\scipy\optimize\lbfgsb.py", line 197, in fmin_l_bfgs_b
        isave, dsave)
    ValueError: failed to initialize intent(inout) array -- expected elsize=8 but got 4 -- input 'l' not compatible to 'd'

------------------------------ Here is the code:

    def Permmin(x0, params):
        # x0 holds the real and imaginary permittivity guess values;
        # params contains the frequency, measured S11 values, etc.
        S11Measured = params[1]
        Freq = params[0]
        BeadPerm = params[2]
        InnerRad = params[4]
        OuterRad = params[5]
        # HodgGaussNonCon returns real and imaginary values for S11
        S11 = HodgGaussNonCon.HodgGauss(BeadPerm, Freq, InnerRad, OuterRad, 1e9, x0)
        S11diff = array([S11.real - S11Measured.real, S11.imag - S11Measured.imag])
        return abs(S11diff)

    xstart = [20, 20]
    # test values at 50 MHz
    GammaMeas = .999964 - 0.006289j
    x0 = xstart
    Freq = 50e6
    BeadPerm = 2.08 + .00832j
    Inner = .125/2
    Outer = .2032
    params = [GammaMeas, Freq, BeadPerm, Inner, Outer]
    x, f, d = lbfgsb.fmin_l_bfgs_b(Permmin, x0, approx_grad=1, args=params, bounds=[(.0001, 1000), (.0001, 1000)])

-------------------------- "Robert Kern" wrote in message news:45BBC662.6050202 at gmail.com... > mclaugb wrote: >> I have been using a Gauss-Newton algorithm, but it is unconstrained and I >> need a bounded, multivariate algorithm. >> >> Does anyone have any example code that uses the "lbfgsb" optimization >> algorithm? I have a function of two variables that is bounded to the >> positive half-space and would like to use lbfgsb to minimize over the two >> variables. I have tried to get it working for a 2-variable function but >> have been unsuccessful; any example code would help me. > > You mean that your function f maps > > f > RR^2 ---> RR^2 > > ? > > Note that all of the minimizers in scipy (and even mpfit, which you mention > below) > only minimize functions that map to scalars; i.e. > > f > RR^n ---> RR > > These are not multiobjective optimizers. > > Please clarify what you mean. Can you show us the code that you already > tried? > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma > that is made terrible by our own mad attempt to interpret it as though it > had > an underlying truth." > -- Umberto Eco From robert.kern at gmail.com Sat Jan 27 17:40:47 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 27 Jan 2007 16:40:47 -0600 Subject: [SciPy-user] scipy.optimise.lbfgsb /bounded optimization help In-Reply-To: References: <45BBC662.6050202@gmail.com> Message-ID: <45BBD4EF.8080707@gmail.com> mclaugb wrote: > I can make the function map to a scalar--though I thought multivariate > algorithms used vector differentials between iterations. "Multivariate" describes the input of the objective function, not the output. None of the optimizers in scipy are multiobjective optimizers. > Note, the error > below is the same regardless of whether the function returns a scalar or a > vector. > > I seem to be receiving this error: > > x, f, d = lbfgsb.fmin_l_bfgs_b(Permmin, x0, approx_grad=1, args=params, > bounds = [(.0001, 1000),(.0001, 1000)]) > File "C:\Python24\lib\site-packages\scipy\optimize\lbfgsb.py", line 197, > in fmin_l_bfgs_b > isave, dsave) > ValueError: failed to initialize intent(inout) array -- expected elsize=8 > but got 4 -- input 'l' not compatible to 'd' And as I asked you on comp.lang.python, what platform are you on?
What FORTRAN compiler are you using to build scipy? I think this is a 64-bit problem, but until you give us that information, we can't help you. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.vergnes at yahoo.fr Sat Jan 27 17:44:41 2007 From: robert.vergnes at yahoo.fr (Robert VERGNES) Date: Sat, 27 Jan 2007 23:44:41 +0100 (CET) Subject: [SciPy-user] QME-Dev wxSciPY workbench updated and corrected for Python 2.4 In-Reply-To: Message-ID: <20070127224441.60301.qmail@web27410.mail.ukl.yahoo.com> Version Alpha 0.0.9.2.4 was updated today. A correction was made to the plotting/graph issue with Python 2.4, and example files were added for review of the workbench. https://sourceforge.net/project/showfiles.php?group_id=181979 From stefan at sun.ac.za Sat Jan 27 18:09:50 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sun, 28 Jan 2007 01:09:50 +0200 Subject: [SciPy-user] [Nipy-devel] how to wrap C++ In-Reply-To: <45BBD2F2.3080806@gmail.com> References: <45AE6C33.4090805@radmail.ucsf.edu> <45AE7449.30007@ucsf.edu> <45BA4798.5090602@mitre.org> <45BB0FD6.7060206@ieee.org> <20070127221236.GA5742@mentat.za.net> <45BBD2F2.3080806@gmail.com> Message-ID: <20070127230950.GE5742@mentat.za.net> On Sat, Jan 27, 2007 at 04:32:18PM -0600, Robert Kern wrote: > Stefan van der Walt wrote: > > On Sat, Jan 27, 2007 at 01:39:50AM -0700, Travis Oliphant wrote: > >> It still depends a little. A large library is probably more easily > >> handled with SWIG, but for custom interfaces I would say ctypes is best. > >> > >> The only problems with ctypes are > >> > >> 1) Not easy to distribute the source code with the interface, as you > >> can't use distutils out of the box for building shared libraries. > > > > I've heard this being said many times, but I'm not sure I understand. > > I've built shared libraries under Linux, and I've seen people do it > > under Windows. Is there a problem under MacOSX or Solaris platforms? > > Actually, let me back up. With ctypes, we want the .so/.dll to go into the > package rather than being installed anywhere else. You still have to extend > distutils to do a build of a non-Python .so and then to put it in the right > place. Why do you specifically have to build a non-Python .so? Ctypes doesn't seem to mind. My setup.py for my extensions directory looks like this:

    from os.path import join, dirname
    from glob import glob

    def configuration(parent_package='', top_path=None, package_name='ext'):
        from numpy.distutils.misc_util import Configuration
        config = Configuration(package_name, parent_package, top_path)
        config.add_extension('libext_',
                             sources=glob(join(dirname(__file__), '*.c')))
        config.add_data_dir('tests')
        return config

    if __name__ == '__main__':
        from numpy.distutils.core import setup
        setup(configuration=configuration)

It is probably overly simple, so I'd appreciate any feedback on how to improve it to do the right thing.
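On the loading side I do something along these lines with numpy's ctypes helper (quoting from memory, so treat it as a sketch; 'some_function' is a placeholder name for one of the C routines):

    import os
    import numpy.ctypeslib

    # picks up libext_.so (or the platform's equivalent) from the package dir
    lib = numpy.ctypeslib.load_library('libext_', os.path.dirname(__file__))

    lib.some_function.restype = None   # declare the placeholder routine
    lib.some_function.argtypes = []
    lib.some_function()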
Cheers Stéfan From mclaughlinbryan at yahoo.com Sat Jan 27 18:16:09 2007 From: mclaughlinbryan at yahoo.com (mclaugb) Date: Sat, 27 Jan 2007 23:16:09 -0000 Subject: [SciPy-user] scipy.optimise.lbfgsb /bounded optimization help References: <45BBC662.6050202@gmail.com> <45BBD4EF.8080707@gmail.com> Message-ID: I have not compiled scipy on my machine; I used the pre-built self-installers instead. Perhaps that is the problem. I am using a Windows XP 32-bit system. Bryan "Robert Kern" wrote in message news:45BBD4EF.8080707 at gmail.com... > mclaugb wrote: >> I can make the function map to a scalar--though I thought multivariate >> algorithms used vector differentials between iterations. > > "Multivariate" describes the input of the objective function, not the > output. > None of the optimizers in scipy are multiobjective optimizers. > >> Note, the error >> below is the same regardless of whether the function returns a scalar or a >> vector. >> >> I seem to be receiving this error: >> >> x, f, d = lbfgsb.fmin_l_bfgs_b(Permmin, x0, approx_grad=1, args=params, >> bounds = [(.0001, 1000),(.0001, 1000)]) >> File "C:\Python24\lib\site-packages\scipy\optimize\lbfgsb.py", line >> 197, >> in fmin_l_bfgs_b >> isave, dsave) >> ValueError: failed to initialize intent(inout) array -- expected elsize=8 >> but got 4 -- input 'l' not compatible to 'd' > > And as I asked you on comp.lang.python, what platform are you on? What > FORTRAN > compiler are you using to build scipy? I think this is a 64-bit problem, > but > until you give us that information, we can't help you. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma > that is made terrible by our own mad attempt to interpret it as though it > had > an underlying truth." > -- Umberto Eco From robert.kern at gmail.com Sat Jan 27 18:21:27 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 27 Jan 2007 17:21:27 -0600 Subject: [SciPy-user] [Nipy-devel] how to wrap C++ In-Reply-To: <20070127230950.GE5742@mentat.za.net> References: <45AE6C33.4090805@radmail.ucsf.edu> <45AE7449.30007@ucsf.edu> <45BA4798.5090602@mitre.org> <45BB0FD6.7060206@ieee.org> <20070127221236.GA5742@mentat.za.net> <45BBD2F2.3080806@gmail.com> <20070127230950.GE5742@mentat.za.net> Message-ID: <45BBDE77.7020507@gmail.com> Stefan van der Walt wrote: > Why do you specifically have to build a non-Python .so? Ctypes > doesn't seem to mind. Not on your platform, perhaps. Windows does have a problem building a non-extension as an extension. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From stefan at sun.ac.za Sat Jan 27 18:37:40 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sun, 28 Jan 2007 01:37:40 +0200 Subject: [SciPy-user] [Nipy-devel] how to wrap C++ In-Reply-To: <45BBDE77.7020507@gmail.com> References: <45AE6C33.4090805@radmail.ucsf.edu> <45AE7449.30007@ucsf.edu> <45BA4798.5090602@mitre.org> <45BB0FD6.7060206@ieee.org> <20070127221236.GA5742@mentat.za.net> <45BBD2F2.3080806@gmail.com> <20070127230950.GE5742@mentat.za.net> <45BBDE77.7020507@gmail.com> Message-ID: <20070127233740.GI5742@mentat.za.net> On Sat, Jan 27, 2007 at 05:21:27PM -0600, Robert Kern wrote: > Stefan van der Walt wrote: > > > Why do you specifically have to build a non-Python .so? Ctypes > > > doesn't seem to mind. > > Not on your platform, perhaps.
> > Windows does have a problem building a non-extension as an extension. I should ask Albert what he did with scipy.sandbox.svm. I see he plays it safe with *3* different methods of building:

    svm/setup.py
    svm/libsvm-2.82/Makefile.win
    svm/libsvm-2.82/SConstruct

Looking at things from the other side, he also pointed me to this URL the other day, showing how to build Python extensions in SCons: http://www.scons.org/wiki/PythonExtensions Something to keep in mind. Cheers Stéfan From robert.kern at gmail.com Sat Jan 27 19:04:07 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 27 Jan 2007 18:04:07 -0600 Subject: [SciPy-user] scipy.optimise.lbfgsb /bounded optimization help In-Reply-To: References: <45BBC662.6050202@gmail.com> <45BBD4EF.8080707@gmail.com> Message-ID: <45BBE877.9070907@gmail.com> mclaugb wrote: > Perhaps that is the problem. I am using a Windows XP 32-bit system. No, I think I found it. Use floats for x0, not integers. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.vergnes at yahoo.fr Sun Jan 28 08:59:32 2007 From: robert.vergnes at yahoo.fr (Robert VERGNES) Date: Sun, 28 Jan 2007 14:59:32 +0100 (CET) Subject: [SciPy-user] QME-Dev wxSciPY workbench 0.0.9.24 released - updated and corrected for Python 2.4 Message-ID: <20070128135932.6628.qmail@web27407.mail.ukl.yahoo.com> Version Alpha 0.0.9.2.4 was updated today; there is a new ZIP file for download. A correction was made to the plotting/graph issue with Python 2.4, and example files were added for review of the workbench. https://sourceforge.net/project/showfiles.php?group_id=181979 Note: wxPython 2.8 is still necessary to run the workbench. From mclaughlinbryan at yahoo.com Sun Jan 28 11:18:26 2007 From: mclaughlinbryan at yahoo.com (mclaugb) Date: Sun, 28 Jan 2007 16:18:26 -0000 Subject: [SciPy-user] scipy.optimise.lbfgsb /bounded optimization help References: <45BBC662.6050202@gmail.com> <45BBD4EF.8080707@gmail.com> <45BBE877.9070907@gmail.com> Message-ID: Thanks, that got me a little further. I am now getting this error from the approx_fprime function in optimize.py:

    File "C:\Python24\lib\site-packages\scipy\optimize\optimize.py", line 577, in approx_fprime
      grad[k] = (apply(f,(xk+ei,)+(args)) - f0)/epsilon
    ValueError: setting an array element with a sequence.

It appears that args is a tuple rather than type 'numpy.ndarray' like the others in the list. The list of args that I provided to lbfgsb was

    args = numpy.array([GammaMeas.real, GammaMeas.imag, Freq, BeadPerm.real,
                        BeadPerm.imag, Inner, Outer])

I could edit the section of code in optimize.py which is causing the error at the grad[k] line, but I'm not sure that is the correct thing to do.

    def approx_fprime(xk, f, epsilon, *args):
        f0 = apply(f, (xk,) + args)
        grad = numpy.zeros((len(xk),), float)
        ei = numpy.zeros((len(xk),), float)
        for k in range(len(xk)):
            ei[k] = epsilon
            grad[k] = (apply(f, (xk + ei,) + (args)) - f0) / epsilon
            ei[k] = 0.0
        return grad

Any thoughts? Bryan "Robert Kern" wrote in message news:45BBE877.9070907 at gmail.com... > mclaugb wrote: > >> Perhaps that is the problem. I am using a Windows XP 32-bit system.
> > No, I think I found it. Use floats for x0, not integers. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma > that is made terrible by our own mad attempt to interpret it as though it > had > an underlying truth." > -- Umberto Eco From robert.kern at gmail.com Sun Jan 28 15:41:34 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 28 Jan 2007 14:41:34 -0600 Subject: [SciPy-user] scipy.optimise.lbfgsb /bounded optimization help In-Reply-To: References: <45BBC662.6050202@gmail.com> <45BBD4EF.8080707@gmail.com> <45BBE877.9070907@gmail.com> Message-ID: <45BD0A7E.4030601@gmail.com> mclaugb wrote: > Thanks, that got me a little further. I am now getting this error from the > approx_fprime function in optimize.py: > File "C:\Python24\lib\site-packages\scipy\optimize\optimize.py", line 577, > in approx_fprime > grad[k] = (apply(f,(xk+ei,)+(args)) - f0)/epsilon > ValueError: setting an array element with a sequence. > > It appears that args is a tuple rather than type 'numpy.ndarray' like the > others in the list. The list of args that I provided to lbfgsb was > args = numpy.array([GammaMeas.real, GammaMeas.imag, Freq, BeadPerm.real, > BeadPerm.imag, Inner, Outer]) > > I could edit the section of code in optimize.py which is causing the error > at the grad[k] line, but I'm not sure that is the correct thing to do. No, you should pass in args as a tuple rather than a list or an array. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From mclaughlinbryan at yahoo.com Sun Jan 28 16:18:24 2007 From: mclaughlinbryan at yahoo.com (mclaugb) Date: Sun, 28 Jan 2007 21:18:24 -0000 Subject: [SciPy-user] scipy.optimise.lbfgsb /bounded optimization help References: <45BBC662.6050202@gmail.com> <45BBD4EF.8080707@gmail.com> <45BBE877.9070907@gmail.com> <45BD0A7E.4030601@gmail.com> Message-ID: If passed as a tuple, the error from my previous post occurs: "ValueError: setting an array element with a sequence". The line of code tries to add a tuple to ndarrays. Should something be modified? "Robert Kern" wrote in message news:45BD0A7E.4030601 at gmail.com... > mclaugb wrote: >> Thanks, that got me a little further. I am now getting this error from >> the >> approx_fprime function in optimize.py: >> File "C:\Python24\lib\site-packages\scipy\optimize\optimize.py", line >> 577, >> in approx_fprime >> grad[k] = (apply(f,(xk+ei,)+(args)) - f0)/epsilon >> ValueError: setting an array element with a sequence. >> >> It appears that args is a tuple rather than type 'numpy.ndarray' like the >> others in the list. The list of args that I provided to lbfgsb was >> args = numpy.array([GammaMeas.real, GammaMeas.imag, Freq, BeadPerm.real, >> BeadPerm.imag, Inner, Outer]) >> >> I could edit the section of code in optimize.py which is causing the >> error >> at the grad[k] line, but I'm not sure that is the correct thing to do. > > No, you should pass in args as a tuple rather than a list or an array. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma > that is made terrible by our own mad attempt to interpret it as though it > had > an underlying truth."
> -- Umberto Eco From robert.kern at gmail.com Sun Jan 28 16:36:31 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 28 Jan 2007 15:36:31 -0600 Subject: [SciPy-user] scipy.optimise.lbfgsb /bounded optimization help In-Reply-To: References: <45BBC662.6050202@gmail.com> <45BBD4EF.8080707@gmail.com> <45BBE877.9070907@gmail.com> <45BD0A7E.4030601@gmail.com> Message-ID: <45BD175F.9050803@gmail.com> mclaugb wrote: > If passed as a tuple, the error from my previous post occurs: > > "ValueError: setting an array element with a sequence". The line of code > tries to add a tuple to ndarrays. No, it doesn't. It adds a tuple to a tuple.

    (xk+ei,)+args

(xk+ei,) is a tuple. args is a tuple. > Should something be modified? It appears that the RHS of that assignment is an array, not a scalar. Permmin() must return a scalar. That is where the exception is coming from. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From mclaughlinbryan at yahoo.com Sun Jan 28 18:43:57 2007 From: mclaughlinbryan at yahoo.com (mclaugb) Date: Sun, 28 Jan 2007 23:43:57 -0000 Subject: [SciPy-user] scipy.optimise.lbfgsb /bounded optimization help References: <45BBC662.6050202@gmail.com> <45BBD4EF.8080707@gmail.com> <45BBE877.9070907@gmail.com> <45BD0A7E.4030601@gmail.com> <45BD175F.9050803@gmail.com> Message-ID: I was using the abs function on the return value, which was not actually giving a single scalar. It now works. Bryan "Robert Kern" wrote in message news:45BD175F.9050803 at gmail.com... > mclaugb wrote: >> If passed as a tuple, the error from my previous post occurs: >> >> "ValueError: setting an array element with a sequence". The line of >> code >> tries to add a tuple to ndarrays. > > No, it doesn't. It adds a tuple to a tuple. > > (xk+ei,)+args > > (xk+ei,) is a tuple. args is a tuple. > >> Should something be modified? > > It appears that the RHS of that assignment is an array, not a scalar. > Permmin() > must return a scalar. That is where the exception is coming from. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma > that is made terrible by our own mad attempt to interpret it as though it > had > an underlying truth." > -- Umberto Eco From r.demaria at tiscali.it Mon Jan 29 11:35:48 2007 From: r.demaria at tiscali.it (Riccardo) Date: Mon, 29 Jan 2007 17:35:48 +0100 Subject: [SciPy-user] build issue on 64-bit Intel Core2 Duo In-Reply-To: <20070121142742.GB19549@ssh.cv.nrao.edu> References: <1169371029.31713.12.camel@nadav.envision.co.il> <20070121142742.GB19549@ssh.cv.nrao.edu> Message-ID: <20070129163548.GA960@ABPC10780> Hello, I have a similar problem with a Xeon. At the same location the g77 flags are wrong:

    Lib/fftpack/dfftpack/dcosqb.f:0: error: CPU you selected does not support x86-64 instruction set

The fix proposed does not work.
Here is my /proc/cpuinfo:

    processor       : 0
    vendor_id       : GenuineIntel
    cpu family      : 6
    model           : 15
    model name      : Intel(R) Xeon(R) CPU 5150 @ 2.66GHz
    stepping        : 6
    cpu MHz         : 2327.501
    cache size      : 4096 KB
    physical id     : 0
    siblings        : 2
    core id         : 0
    cpu cores       : 2
    fpu             : yes
    fpu_exception   : yes
    cpuid level     : 10
    wp              : yes
    flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm pni monitor ds_cpl est tm2 cx16 xtpr
    bogomips        : 5323.64
    clflush size    : 64
    cache_alignment : 64
    address sizes   : 36 bits physical, 48 bits virtual
    power management:

Any ideas? Riccardo On Sun, Jan 21, 2007 at 09:27:42AM -0500, Scott Ransom wrote: > Hi Nadav, > > Yup. This seems to work. Everything compiles and the tests pass. > Thanks a bunch for the quick patch. > > Scott > > On Sun, Jan 21, 2007 at 11:17:09AM +0200, Nadav Horesh wrote: > > The cpu identification in "site-packages/numpy/distutils/cpuinfo.py" > > fails to recognize Core2 CPUs. > > > > My solution: modify the _is_Nocona method in cpuinfo.py to: > > > > def _is_Nocona(self): > > # return self.is_PentiumIV() and self.is_64bit() > > return re.match(r'Intel.*?Core.*\b', > > self.info[0]['model name']) is not None > > > > > > (The commented line is the old code that fails to recognize Core 2.) > > > > Nadav. > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > -- > -- > Scott M. Ransom Address: NRAO > Phone: (434) 296-0320 520 Edgemont Rd. > email: sransom at nrao.edu Charlottesville, VA 22903 USA > GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From r.demaria at tiscali.it Mon Jan 29 12:00:42 2007 From: r.demaria at tiscali.it (Riccardo) Date: Mon, 29 Jan 2007 18:00:42 +0100 Subject: [SciPy-user] build issue on 64-bit Intel Core2 Duo In-Reply-To: <20070129163548.GA960@ABPC10780> References: <1169371029.31713.12.camel@nadav.envision.co.il> <20070121142742.GB19549@ssh.cv.nrao.edu> <20070129163548.GA960@ABPC10780> Message-ID: <20070129170042.GA1366@ABPC10780> Some more information. Running python numpy/distutils/cpuinfo.py gives:

    CPU information: getNCPUs=4 has_mmx has_sse has_sse2 is_64bit is_Intel is_XEON is_Xeon is_i686

The offending options are:

    /usr/bin/g77 -g -Wall -fno-second-underscore -fPIC -O3 -funroll-loops -march=i686 -mmmx -msse2 -msse -fomit-frame-pointer -c -c ...

Riccardo On Mon, Jan 29, 2007 at 05:35:48PM +0100, Riccardo wrote: > Hello, > > I have a similar problem with a Xeon. At the same location the g77 > flags are wrong: > > Lib/fftpack/dfftpack/dcosqb.f:0: error: CPU you selected does not > support x86-64 instruction set > > The fix proposed does not work.
> Here is my /proc/cpuinfo:
>
>     processor       : 0
>     vendor_id       : GenuineIntel
>     cpu family      : 6
>     model           : 15
>     model name      : Intel(R) Xeon(R) CPU 5150 @ 2.66GHz
>     stepping        : 6
>     cpu MHz         : 2327.501
>     cache size      : 4096 KB
>     physical id     : 0
>     siblings        : 2
>     core id         : 0
>     cpu cores       : 2
>     fpu             : yes
>     fpu_exception   : yes
>     cpuid level     : 10
>     wp              : yes
>     flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm pni monitor ds_cpl est tm2 cx16 xtpr
>     bogomips        : 5323.64
>     clflush size    : 64
>     cache_alignment : 64
>     address sizes   : 36 bits physical, 48 bits virtual
>     power management:
>
> Any ideas? > Riccardo > On Sun, Jan 21, 2007 at 09:27:42AM -0500, Scott Ransom wrote: > > Hi Nadav, > > > > Yup. This seems to work. Everything compiles and the tests pass. > > Thanks a bunch for the quick patch. > > > > Scott > > > > On Sun, Jan 21, 2007 at 11:17:09AM +0200, Nadav Horesh wrote: > > > The cpu identification in "site-packages/numpy/distutils/cpuinfo.py" > > > fails to recognize Core2 CPUs. > > > > > > My solution: modify the _is_Nocona method in cpuinfo.py to: > > > > > > def _is_Nocona(self): > > > # return self.is_PentiumIV() and self.is_64bit() > > > return re.match(r'Intel.*?Core.*\b', > > > self.info[0]['model name']) is not None > > > > > > (The commented line is the old code that fails to recognize Core 2.) > > > > > > Nadav. > > > > > _______________________________________________ > > > SciPy-user mailing list > > > SciPy-user at scipy.org > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > -- > > -- > > Scott M. Ransom Address: NRAO > > Phone: (434) 296-0320 520 Edgemont Rd. > > email: sransom at nrao.edu Charlottesville, VA 22903 USA > > GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From chris at trichech.us Mon Jan 29 13:30:44 2007 From: chris at trichech.us (Chris Fonnesbeck) Date: Mon, 29 Jan 2007 13:30:44 -0500 Subject: [SciPy-user] SciPy (SVN) tries to use f95 Message-ID: <723eb6930701291030p10a42912y2ccc4b375e1ce789@mail.gmail.com> Trying to build SciPy from SVN fails on my Intel Mac. For some reason, it wants to use f95, which does not exist on my system:

    f95:f77: Lib/fftpack/dfftpack/dcosqb.f
    sh: line 1: f95: command not found
    sh: line 1: f95: command not found
    error: Command "f95 -fixed -O4 -target=native -c -c Lib/fftpack/dfftpack/dcosqb.f -o build/temp.darwin-8.8.1-i386-2.4/Lib/fftpack/dfftpack/dcosqb.o" failed with exit status 127

This has not occurred previously. Any idea what the problem may be? Thanks,
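P.S. If it helps to narrow this down: I believe numpy's distutils can list the Fortran compilers it recognizes with (going from memory here, so double-check the spelling):

    python setup.py config_fc --help-fcompiler

-- Chris Fonnesbeck + Atlanta, GA + http://trichech.us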
From chris at trichech.us Mon Jan 29 13:44:51 2007 From: chris at trichech.us (Chris Fonnesbeck) Date: Mon, 29 Jan 2007 13:44:51 -0500 Subject: [SciPy-user] Fwd: SciPy (SVN) tries to use f95 In-Reply-To: <723eb6930701291030p10a42912y2ccc4b375e1ce789@mail.gmail.com> References: <723eb6930701291030p10a42912y2ccc4b375e1ce789@mail.gmail.com> Message-ID: <723eb6930701291044p4f432432lc482c4a532a3f011@mail.gmail.com> A bit more information: I am using the gfortran compiler from hpc.sourceforge.net. I noticed an earlier thread complaining about wanting to use f95 from Fink instead. Perhaps changes to address that problem caused this to break? There appears to be no gfortran flag for --fcompiler, so I don't know what to do. Thanks ---------- Forwarded message ---------- From: Chris Fonnesbeck Date: Jan 29, 2007 1:30 PM Subject: SciPy (SVN) tries to use f95 To: SciPy Users List Trying to build SciPy from SVN fails on my Intel Mac. For some reason, it wants to use f95, which does not exist on my system:

    f95:f77: Lib/fftpack/dfftpack/dcosqb.f
    sh: line 1: f95: command not found
    sh: line 1: f95: command not found
    error: Command "f95 -fixed -O4 -target=native -c -c Lib/fftpack/dfftpack/dcosqb.f -o build/temp.darwin-8.8.1-i386-2.4/Lib/fftpack/dfftpack/dcosqb.o" failed with exit status 127

This has not occurred previously. Any idea what the problem may be? Thanks, -- Chris Fonnesbeck + Atlanta, GA + http://trichech.us -- Chris Fonnesbeck + Atlanta, GA + http://trichech.us From robert.kern at gmail.com Mon Jan 29 14:10:49 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 29 Jan 2007 13:10:49 -0600 Subject: [SciPy-user] Fwd: SciPy (SVN) tries to use f95 In-Reply-To: <723eb6930701291044p4f432432lc482c4a532a3f011@mail.gmail.com> References: <723eb6930701291030p10a42912y2ccc4b375e1ce789@mail.gmail.com> <723eb6930701291044p4f432432lc482c4a532a3f011@mail.gmail.com> Message-ID: <45BE46B9.4070000@gmail.com> Chris Fonnesbeck wrote: > A bit more information: I am using the gfortran compiler from > hpc.sourceforge.net . I noticed an earlier > thread complaining about wanting to use f95 from Fink instead. Perhaps > changes to address that problem caused this to break? There appears to > be no gfortran flag for --fcompiler, so I don't know what to do. --fcompiler=gnu95 -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From fullung at gmail.com Mon Jan 29 17:20:30 2007 From: fullung at gmail.com (Albert Strasheim) Date: Tue, 30 Jan 2007 00:20:30 +0200 Subject: [SciPy-user] [Nipy-devel] how to wrap C++ In-Reply-To: <20070127233740.GI5742@mentat.za.net> References: <45AE7449.30007@ucsf.edu> <45BA4798.5090602@mitre.org> <45BB0FD6.7060206@ieee.org> <20070127221236.GA5742@mentat.za.net> <45BBD2F2.3080806@gmail.com> <20070127230950.GE5742@mentat.za.net> <45BBDE77.7020507@gmail.com> <20070127233740.GI5742@mentat.za.net> Message-ID: <20070129222029.GA14510@dogbert.sdsl.sun.ac.za> Hello all On Sun, 28 Jan 2007, Stefan van der Walt wrote: > On Sat, Jan 27, 2007 at 05:21:27PM -0600, Robert Kern wrote: > > Stefan van der Walt wrote: > > > > > Why do you specifically have to build a non-Python .so? Ctypes > > > doesn't seem to mind. > > > > Not on your platform, perhaps. Windows does have a problem building a > > non-extension as an extension.
> > I should ask Albert what he did with scipy.sandbox.svm. I see he > plays it safe with *3* different methods of building: > > svm/setup.py > svm/libsvm-2.82/Makefile.win > svm/libsvm-2.82/SConstruct Development of my libsvm wrapper went like this:

    0. Got it building on Windows with Visual Studio.
    1. Got it building on Windows using nmake (Makefile.win).
    2. Got it building on Windows and Linux using SCons.
    3. Got it building on most platforms (I hope) using distutils, by hacking
       the following bit of cruft into the libsvm sources to make the linker
       happy on Windows:

    LIBSVM_API void initlibsvm_(){}

The reason this is needed is that distutils passes an explicit /EXPORT argument to the Microsoft linker to export the init_ symbol. I don't think there's an option to turn this off without hacking into distutils at a low level. The various bits of magic needed to build a proper DLL on Windows can be seen at the top of this file: http://projects.scipy.org/scipy/scipy/browser/trunk/Lib/sandbox/svm/libsvm-2.82/svm.h with some defines and stuff in setup.py: http://projects.scipy.org/scipy/scipy/browser/trunk/Lib/sandbox/svm/setup.py I don't know if the "empty init" hack makes distutils's Python module builder a viable shared library builder on all platforms, but it works on Linux and Windows at least. Feels like a hack, though. Probably because it is... > Looking at things from the other side, he also pointed me to this URL > the other day, showing how to build Python extensions in SCons > > http://www.scons.org/wiki/PythonExtensions > > Something to keep in mind. Indeed. Cheers, Albert From strawman at astraw.com Mon Jan 29 18:05:57 2007 From: strawman at astraw.com (Andrew Straw) Date: Mon, 29 Jan 2007 15:05:57 -0800 Subject: [SciPy-user] scipy binary on pythonmac.org appears broken Message-ID: <45BE7DD5.7000000@astraw.com> This is just a heads-up that the scipy binary on pythonmac.org appears to be broken, or at least incompatible with the version of numpy on the same website. I downloaded http://pythonmac.org/packages/py24-fat/mpkg/scipy-0.5.1-py2.4-macosx10.4.mpkg.zip (referenced from the package listing page at http://pythonmac.org/packages/py24-fat/ . I am running the Python 2.4 listed on that page.) The command

    python -c "import scipy; scipy.test(10,10)"

runs fine, although only 397 tests are run, which seems rather few. However, the following command gives an error:

    $ python -c "import scipy.io"
    RuntimeError: module compiled against version 1000002 of C-API but this version of numpy is 1000009
    Traceback (most recent call last):
      File "", line 1, in ?
      File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/io/__init__.py", line 8, in ?
        from numpyio import packbits, unpackbits, bswap, fread, fwrite, \
    ImportError: numpy.core.multiarray failed to import

I am using the numpy also listed on that page: http://pythonmac.org/packages/py24-fat/dmg/numpy-1.0.1-py2.4-macosx10.4-2006-12-12.dmg As far as I know, this is a clean install. This is my first go with Mac OS X in a long time, so I could easily be doing something stupid... I will probably endeavor to build from sources before too long, but I thought someone should know that the package appears broken.
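P.S. For anyone comparing notes, the exact version pair is quick to print (nothing cleverer than the obvious checks):

    $ python -c "import numpy; print numpy.__version__"
    $ python -c "import scipy; print scipy.__version__"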
From fonnesbeck at gmail.com Mon Jan 29 21:02:06 2007 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Tue, 30 Jan 2007 02:02:06 +0000 (UTC) Subject: [SciPy-user] Fwd: SciPy (SVN) tries to use f95 References: <723eb6930701291030p10a42912y2ccc4b375e1ce789@mail.gmail.com> <723eb6930701291044p4f432432lc482c4a532a3f011@mail.gmail.com> <45BE46B9.4070000@gmail.com> Message-ID: Robert Kern gmail.com> writes: > > Chris Fonnesbeck wrote: > > A bit more information: I am using the gfortran compiler from > > hpc.sourceforge.net . I noticed an earlier > > thread complaining about wanting to use f95 from Fink instead. Perhaps > > changes to address that problem caused this to break? There appears to > > be no gfortran flag for --fcompiler, so I don't know what to do. > > --fcompiler=gnu95 > Using this flag causes an error:

    ...
    customize Gnu95FCompiler
    Traceback (most recent call last):
      File "setupegg.py", line 7, in ?
        execfile('setup.py')
      File "setup.py", line 55, in ?
        setup_package()
      File "setup.py", line 47, in setup_package
        configuration=configuration )
      File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/distutils/core.py", line 174, in setup
        return old_setup(**new_attr)
      File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/distutils/core.py", line 149, in setup
        dist.run_commands()
      File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/distutils/dist.py", line 946, in run_commands
        self.run_command(cmd)
      File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/distutils/dist.py", line 966, in run_command
        cmd_obj.run()
      File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/distutils/command/build.py", line 112, in run
        self.run_command(cmd_name)
      File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/distutils/cmd.py", line 333, in run_command
        self.distribution.run_command(command)
      File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/distutils/dist.py", line 966, in run_command
        cmd_obj.run()
      File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/distutils/command/build_clib.py", line 77, in run
        self.fcompiler.customize(self.distribution)
      File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/distutils/fcompiler/__init__.py", line 340, in customize
        linker_so_flags = self.__get_flags(self.get_flags_linker_so,'LDFLAGS')
      File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/distutils/fcompiler/__init__.py", line 527, in __get_flags
        var = command()
      File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/distutils/fcompiler/gnu.py", line 66, in get_flags_linker_so
        osx_major, osx_minor = osx_version.split('.')
    ValueError: need more than 1 value to unpack

From robert.kern at gmail.com Tue Jan 30 00:27:41 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 29 Jan 2007 23:27:41 -0600 Subject: [SciPy-user] Fwd: SciPy (SVN) tries to use f95 In-Reply-To: References: <723eb6930701291030p10a42912y2ccc4b375e1ce789@mail.gmail.com> <723eb6930701291044p4f432432lc482c4a532a3f011@mail.gmail.com> <45BE46B9.4070000@gmail.com> Message-ID: <45BED74D.5070308@gmail.com> Chris Fonnesbeck wrote: > File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/distutils/fcompiler/gnu.py", line 66, in get_flags_linker_so > osx_major, osx_minor = osx_version.split('.') > ValueError: need more than 1 value to unpack Crap.
Can you show me what you get for this?

    [~]$ python
    Python 2.5 (r25:51918, Sep 19 2006, 08:49:13)
    [GCC 4.0.1 (Apple Computer, Inc. build 5341)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import platform
    >>> platform.mac_ver()
    ('10.4.8', ('', '', ''), 'i386')

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From emsellem at obs.univ-lyon1.fr Tue Jan 30 10:58:57 2007 From: emsellem at obs.univ-lyon1.fr (Eric Emsellem) Date: Tue, 30 Jan 2007 16:58:57 +0100 Subject: [SciPy-user] back with a problem in scipy installation... In-Reply-To: <45B4F692.6080904@gmail.com> References: 45B46D75.5030300@obs.univ-lyon1.fr <45B4883A.2050606@obs.univ-lyon1.fr> <45B4F692.6080904@gmail.com> Message-ID: <45BF6B41.304@obs.univ-lyon1.fr> Hi, after reinstalling EVERYTHING (ATLAS, LAPACK, etc.) I still hit the wall of "no way to install scipy". It seems that it is still a problem with linking (and maybe at the numpy stage, in fact). When running python setup.py config to test the configuration of scipy, I get an undefined symbol:

    Traceback (most recent call last):
      File "setup.py", line 55, in ?
        setup_package()
    ............
    ImportError: /usr/local/lib/python2.4/site-packages/numpy/linalg/lapack_lite.so: undefined symbol: ATL_cGetNB

The ATL_cGetNB.o is in fact included in libatlas.a, so I wonder why it did not find it. The numpy installation went well, in fact (at least apparently). Any clue here? thanks a lot Eric P.S.: this is a desperate shot... I have been trying to install scipy for 2 weeks now. -- ==================================================================== Eric Emsellem emsellem at obs.univ-lyon1.fr Centre de Recherche Astrophysique de Lyon 9 av. Charles-Andre tel: +33 (0)4 78 86 83 84 69561 Saint-Genis Laval Cedex fax: +33 (0)4 78 86 83 86 France http://www-obs.univ-lyon1.fr/eric.emsellem ==================================================================== From robert.kern at gmail.com Tue Jan 30 12:29:47 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 30 Jan 2007 11:29:47 -0600 Subject: [SciPy-user] back with a problem in scipy installation... In-Reply-To: <45BF6B41.304@obs.univ-lyon1.fr> References: 45B46D75.5030300@obs.univ-lyon1.fr <45B4883A.2050606@obs.univ-lyon1.fr> <45B4F692.6080904@gmail.com> <45BF6B41.304@obs.univ-lyon1.fr> Message-ID: <45BF808B.1090807@gmail.com> Eric Emsellem wrote: > Any clue here? Not until you show us the site.cfg that you used to build numpy. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From chris at trichech.us Tue Jan 30 14:01:42 2007 From: chris at trichech.us (Chris Fonnesbeck) Date: Tue, 30 Jan 2007 14:01:42 -0500 Subject: [SciPy-user] Fwd: SciPy (SVN) tries to use f95 Message-ID: <723eb6930701301101p18711941wb138ab0b43b3e1c1@mail.gmail.com> Chris Fonnesbeck wrote: > File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/distutils/fcompiler/gnu.py", line 66, in get_flags_linker_so > osx_major, osx_minor = osx_version.split('.') > ValueError: need more than 1 value to unpack > Crap. Can you show me what you get for this? The problem is, on my Intel Mac platform.mac_ver() returns ('', ('', '', ''), '')
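Maybe gnu.py should guard against the empty string there? Something along these lines (just a thought -- untested, and the fallback version is a guess):

    osx_version = platform.mac_ver()[0]
    if osx_version:
        # keep only major.minor, even when a patch level is present
        osx_major, osx_minor = osx_version.split('.')[:2]
    else:
        # mac_ver() came back empty (broken Python build?); assume something sane
        osx_major, osx_minor = '10', '4'

cf -- Chris Fonnesbeck + Atlanta, GA + http://trichech.us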
The problem is, on my Intel Mac platform.mac_ver() returns ('', ('', '', ''), '') cf -- Chris Fonnesbeck + Atlanta, GA + http://trichech.us -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Tue Jan 30 14:08:11 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 30 Jan 2007 13:08:11 -0600 Subject: [SciPy-user] Fwd: SciPy (SVN) tries to use f95 In-Reply-To: <723eb6930701301101p18711941wb138ab0b43b3e1c1@mail.gmail.com> References: <723eb6930701301101p18711941wb138ab0b43b3e1c1@mail.gmail.com> Message-ID: <45BF979B.3060408@gmail.com> Chris Fonnesbeck wrote: > Chris Fonnesbeck wrote: > >> File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/ >> distutils/fcompiler/gnu.py", line 66, in get_flags_linker_so >> osx_major, osx_minor = osx_version.split('.') >> ValueError: need more than 1 value to unpack > >> Crap. Can you show me what you get for this? > > The problem is, on my Intel Mac platform.mac_ver() returns ('', ('', '', > ''), '') Crazy! I need more details. What Python are you using? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From lists.steve at arachnedesign.net Tue Jan 30 14:09:13 2007 From: lists.steve at arachnedesign.net (Steve Lianoglou) Date: Tue, 30 Jan 2007 14:09:13 -0500 Subject: [SciPy-user] Fwd: SciPy (SVN) tries to use f95 In-Reply-To: <45BF979B.3060408@gmail.com> References: <723eb6930701301101p18711941wb138ab0b43b3e1c1@mail.gmail.com> <45BF979B.3060408@gmail.com> Message-ID: <59C4893B-0379-4FB5-838F-37ACD592D510@arachnedesign.net> >> The problem is, on my Intel Mac platform.mac_ver() returns ('', >> ('', '', >> ''), '') > > Crazy! I need more details. What Python are you using? Mine does too, as a matter of fact. Python 2.4.3 (macports) MacBook Pro In [11]: platform.mac_ver() Out[11]: ('', ('', '', ''), '') -steve From robert.kern at gmail.com Tue Jan 30 14:12:17 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 30 Jan 2007 13:12:17 -0600 Subject: [SciPy-user] Fwd: SciPy (SVN) tries to use f95 In-Reply-To: <59C4893B-0379-4FB5-838F-37ACD592D510@arachnedesign.net> References: <723eb6930701301101p18711941wb138ab0b43b3e1c1@mail.gmail.com> <45BF979B.3060408@gmail.com> <59C4893B-0379-4FB5-838F-37ACD592D510@arachnedesign.net> Message-ID: <45BF9891.5000108@gmail.com> Steve Lianoglou wrote: >>> The problem is, on my Intel Mac platform.mac_ver() returns ('', >>> ('', '', >>> ''), '') >> Crazy! I need more details. What Python are you using? > > Mine does too, as a matter of fact. > > Python 2.4.3 (macports) > MacBook Pro > > In [11]: platform.mac_ver() > Out[11]: ('', ('', '', ''), '') Gah. Stupid broken MacPorts Python grumblegrumble .... I'll revert my change. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From fonnesbeck at gmail.com Tue Jan 30 15:18:13 2007 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Tue, 30 Jan 2007 20:18:13 +0000 (UTC) Subject: [SciPy-user] Fwd: SciPy (SVN) tries to use f95 References: <723eb6930701301101p18711941wb138ab0b43b3e1c1@mail.gmail.com> <45BF979B.3060408@gmail.com> <59C4893B-0379-4FB5-838F-37ACD592D510@arachnedesign.net> Message-ID: Steve Lianoglou arachnedesign.net> writes: > > >> The problem is, on my Intel Mac platform.mac_ver() returns ('', ('', '', ''), '') > > > > Crazy! I need more details. What Python are you using? > I'm using ActivePython 2.4.3. cf From robert.kern at gmail.com Tue Jan 30 15:25:59 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 30 Jan 2007 14:25:59 -0600 Subject: [SciPy-user] Fwd: SciPy (SVN) tries to use f95 In-Reply-To: References: <723eb6930701301101p18711941wb138ab0b43b3e1c1@mail.gmail.com> <45BF979B.3060408@gmail.com> <59C4893B-0379-4FB5-838F-37ACD592D510@arachnedesign.net> Message-ID: <45BFA9D7.70200@gmail.com> Chris Fonnesbeck wrote: > Steve Lianoglou arachnedesign.net> writes: > >>>> The problem is, on my Intel Mac platform.mac_ver() returns ('', ('', '', ''), '') >>> Crazy! I need more details. What Python are you using? > > I'm using ActivePython 2.4.3. Well, I reverted the change, but please report to ActiveState and MacPorts that their builds are broken! They are either missing the gestalt module or the MacOS module. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From lists.steve at arachnedesign.net Tue Jan 30 16:08:00 2007 From: lists.steve at arachnedesign.net (Steve Lianoglou) Date: Tue, 30 Jan 2007 16:08:00 -0500 Subject: [SciPy-user] Fwd: SciPy (SVN) tries to use f95 In-Reply-To: <45BFA9D7.70200@gmail.com> References: <723eb6930701301101p18711941wb138ab0b43b3e1c1@mail.gmail.com> <45BF979B.3060408@gmail.com> <59C4893B-0379-4FB5-838F-37ACD592D510@arachnedesign.net> <45BFA9D7.70200@gmail.com> Message-ID: <063BBC6F-EE3A-4AE5-AACF-F7D73B90B3FF@arachnedesign.net> >>>>> The problem is, on my Intel Mac platform.mac_ver() returns ('', ('', '', ''), '') >>>> Crazy! I need more details. What Python are you using? >> >> I'm using ActivePython 2.4.3. > > Well, I reverted the change, but please report to ActiveState and > MacPorts that > their builds are broken! They are either missing the gestalt module > or the MacOS > module. Hmm... yeah... the funny thing is that the MacPorts Python 2.5 looks like it works:

    Python 2.5 (r25:51908, Dec 30 2006, 12:58:49)
    [GCC 4.0.1 (Apple Computer, Inc. build 5367)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import platform
    >>> platform.mac_ver()
    ('10.4.8', ('', '', ''), 'i386')

Maybe we can bug them to roll out a 2.4.4 update with the fix, too. -steve From lanceboyle at qwest.net Tue Jan 30 19:16:29 2007 From: lanceboyle at qwest.net (Jerry) Date: Tue, 30 Jan 2007 17:16:29 -0700 Subject: [SciPy-user] What is the best way to install on a Mac? Message-ID: I've been mostly lurking here for quite a while and am thinking I should get some stuff installed and see what all the excitement is about. I can't help but notice that some of the posts are about installation problems on various platforms. I also recall (I think) that someone was keeping an up-to-date installer for the Mac. If so, is this a "proper" Mac installer?
Or can I install from Fink or another package-manager-type place? I recall some discussion about PyLab (?) or the like which included all the SciPy stuff and plotters and that sounds like the way to go. I'm mainly hoping for a low- hassle installation so that I can get to the good stuff fast. Jerry From robert.kern at gmail.com Tue Jan 30 19:27:51 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 30 Jan 2007 18:27:51 -0600 Subject: [SciPy-user] What it the best way to install on a Mac? In-Reply-To: References: Message-ID: <45BFE287.80803@gmail.com> Jerry wrote: > I've been mostly lurking here for quite a while and am thinking I > should get some stuff installed and see what all the excitement is > about. I can't help but notice that some of the posts are about > installation problems on various platforms. I also recall (I think) > that someone was keeping an up-to-date installer for the Mac. Probably Chris Fonnesbeck's Scipy Superpack. > If so, > is this a "proper" Mac installer? Yes, however, the last release of it was improperly compiled, IIRC. > Or can I install from Fink or > another package-manager-type place? I don't recommend it. Both the Python interpreter and numpy/scipy/matplotlib tend to be broken in various ways in both Fink and MacPorts. > I recall some discussion about > PyLab (?) or the like which included all the SciPy stuff and plotters > and that sounds like the way to go. pylab is a particular module distributed with matplotlib exposing a particular interface to its plotting capabilities. It has nothing to do with scipy. > I'm mainly hoping for a low- > hassle installation so that I can get to the good stuff fast. Learn to build from source. Because of the dependency on FORTRAN runtime libraries, this is the lowest-hassle method. http://projects.scipy.org/pipermail/numpy-discussion/2007-January/025368.html http://projects.scipy.org/pipermail/numpy-discussion/2007-January/025374.html http://sourceforge.net/mailarchive/message.php?msg_id=37895022 -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From nadavh at visionsense.com Wed Jan 31 02:09:43 2007 From: nadavh at visionsense.com (Nadav Horesh) Date: Wed, 31 Jan 2007 09:09:43 +0200 Subject: [SciPy-user] build issue on 64-bit Intel Core2 Duo Message-ID: <1170227383.12321.11.camel@nadav.envision.co.il> > Few more informations: > >python numpy/distutils/cpuinfo.py: > >CPU information: getNCPUs=4 has_mmx has_sse has_sse2 is_64bit is_Intel >is_XEON is_Xeon is_i686 Well, Crossing our systems' info it looks to me that the best and straight forward way to identify Intel's imitation to amd64 arch is: def _is_Nocona(self): return self.is_64bit() and self.is_Intel() and self.is_i686() Did not test it but it fits and looks logical. Nadav. -------------- next part -------------- An HTML attachment was scrubbed... 
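For anyone wanting to test Nadav's predicate before it is checked in, cpuinfo can be poked at interactively. A quick sketch, assuming the module-level cpu singleton that numpy.distutils.cpuinfo defines:

    from numpy.distutils.cpuinfo import cpu

    # The three building blocks of the proposed _is_Nocona():
    print cpu.is_64bit(), cpu.is_Intel(), cpu.is_i686()

    # On a 64-bit Core2/Xeon this conjunction should print True;
    # on an Opteron, is_Intel() makes it False.
    print cpu.is_64bit() and cpu.is_Intel() and cpu.is_i686()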
From robert.kern at gmail.com Wed Jan 31 02:15:29 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 31 Jan 2007 01:15:29 -0600
Subject: [SciPy-user] build issue on 64-bit Intel Core2 Duo
In-Reply-To: <1170227383.12321.11.camel@nadav.envision.co.il>
References: <1170227383.12321.11.camel@nadav.envision.co.il>
Message-ID: <45C04211.7010600@gmail.com>

Nadav Horesh wrote:
>> Few more informations:
>>
>> python numpy/distutils/cpuinfo.py:
>>
>> CPU information: getNCPUs=4 has_mmx has_sse has_sse2 is_64bit is_Intel is_XEON is_Xeon is_i686
>
> Well,
> Crossing our systems' info it looks to me that the best and straight forward way to identify Intel's imitation to amd64 arch is:
>
> def _is_Nocona(self):
>     return self.is_64bit() and self.is_Intel() and self.is_i686()
>
> Did not test it but it fits and looks logical.

If someone will test it and verify, then I will check it in.

-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From markusro at element.fkp.physik.tu-darmstadt.de Wed Jan 31 07:32:17 2007
From: markusro at element.fkp.physik.tu-darmstadt.de (Markus Rosenstihl)
Date: Wed, 31 Jan 2007 13:32:17 +0100
Subject: [SciPy-user] What it the best way to install on a Mac?
In-Reply-To: References: Message-ID:

On 31.01.2007, at 01:16, Jerry wrote:
> I've been mostly lurking here for quite a while and am thinking I should get some stuff installed and see what all the excitement is about. I can't help but notice that some of the posts are about installation problems on various platforms. I also recall (I think) that someone was keeping an up-to-date installer for the Mac. If so, is this a "proper" Mac installer? Or can I install from Fink or another package-manager-type place? I recall some discussion about PyLab (?) or the like which included all the SciPy stuff and plotters and that sounds like the way to go. I'm mainly hoping for a low-hassle installation so that I can get to the good stuff fast.

Hi,
I can second Robert's advice. I installed python, numpy, matplotlib and scipy from source. Matplotlib was not that easy to install (but not difficult either), and there are instructions to be found on the net. You didn't say which Mac OS X you are using; in case you are on Panther 10.3.9, you should consider installing the 10.4 SDK (there is a libstc.dylib only available on Tiger, and you can get linking problems while installing).
Regards
Markus
-------------- next part --------------
A non-text attachment was scrubbed...
Name: PGP.sig
Type: application/pgp-signature
Size: 186 bytes
Desc: Signed part of the message
URL:

From joris at ster.kuleuven.ac.be Wed Jan 31 08:18:03 2007
From: joris at ster.kuleuven.ac.be (joris at ster.kuleuven.ac.be)
Date: Wed, 31 Jan 2007 14:18:03 +0100
Subject: [SciPy-user] What it the best way to install on a Mac?
Message-ID: <1170249483.45c0970b55b58@webmail.ster.kuleuven.be>

On Wednesday 31 January 2007 01:27, Robert Kern wrote:
> Learn to build from source. Because of the dependency on FORTRAN runtime libraries, this is the lowest-hassle method.

This may be true at the moment, but I hope this will not be the general philosophy for the future? Having to build from source may scare off potential numpy/scipy/matplotlib users... Yesterday someone said that he was trying to install scipy from source for about 2 weeks now.
This does not look good for someone who is contemplating whether to use numpy/scipy/matplotlib or not. It's not that I demand other people to make easy-to-install packages, but I think that we're heading in the wrong direction if we don't even think that easy-to-install packages are really necessary and that installing from source works just fine. Perhaps Robert didn't mean it as I put it, but I just wanted to express my concern. J. Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From david at ar.media.kyoto-u.ac.jp Wed Jan 31 08:58:14 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 31 Jan 2007 22:58:14 +0900 Subject: [SciPy-user] Release 0.6.1 of pyaudio, renamed pyaudiolab Message-ID: <45C0A076.2090805@ar.media.kyoto-u.ac.jp> Hi There, A few months ago, I posted the first release of pyaudio, a python module to give numpy/scipy environment audio file IO capabilities (ala matlab wavread and co). I recently took time to update it significantly, and as several people showed interest in pyaudio recently, I thought this may interest some people here. The main improvements since last public annoucement: - renamed to pyaudiolab to avoid nameclash with other package pyaudio. - ability to seek into audio files and sync files for flushing IO buffers - matlab-like API: wavread and wavwrite - improvements of the API: more similar to numpy conventions, ability to retrieve most meta-data of a file. - some bug fixing (one bug was quite severe, leading to data corruption on ubuntu with python2.5, due to some weirdness in ubuntu python2.5 packaging) - a real documentation If some people manage to use it on something else than linux, I would happy to hear (particularly Mac OS X, which I cannot test myself). Cheers, David ====== pyaudiolab ====== * WHAT FOR ?: The Goal of pyaudiolab is to give to a numpy/scipy environment some basic audio IO facilities (ala sound, wavread, wavwrite of matlab). With pyaudiolab, you should be able to read and write most common audio files from and to numpy arrays. The underlying IO operations are done using libsndfile from Erik Castro Lopo (http://www.mega-nerd.com/libsndfile/), so he is the one who did the hard work. As libsndfile has support for a vast number of audio files (including wav, aiff, but also htk, ircam, and flac, an open source lossless codec), pyaudiolab enables the import from and export to a fairly large number of file formats. * WHERE TO GET ?: http://www.ar.media.kyoto-u.ac.jp/members/david/softwares/pyaudiolab/index.html#download bzr archive: http://www.ar.media.kyoto-u.ac.jp/members/david/archives/pyaudiolab/pyaudiolab.dev # HOW TO USE ?: http://www.ar.media.kyoto-u.ac.jp/members/david/softwares/pyaudiolab/index.html From chris at trichech.us Wed Jan 31 09:17:39 2007 From: chris at trichech.us (Chris Fonnesbeck) Date: Wed, 31 Jan 2007 09:17:39 -0500 Subject: [SciPy-user] What it the best way to install on a Mac? Message-ID: <723eb6930701310617v42a4a098s5582ef6dff01e8d4@mail.gmail.com> Jerry wrote: > I've been mostly lurking here for quite a while and am thinking I > should get some stuff installed and see what all the excitement is > about. I can't help but notice that some of the posts are about > installation problems on various platforms. I also recall (I think) > that someone was keeping an up-to-date installer for the Mac. >Probably Chris Fonnesbeck's Scipy Superpack. > If so, > is this a "proper" Mac installer? >Yes, however, the last release of it was improperly compiled, IIRC. 
I posted a new build yesterday, which should work.

C.
-- Chris Fonnesbeck + Atlanta, GA + http://trichech.us
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From oliver.tomic at matforsk.no Wed Jan 31 09:54:30 2007
From: oliver.tomic at matforsk.no (oliver.tomic at matforsk.no)
Date: Wed, 31 Jan 2007 15:54:30 +0100
Subject: [SciPy-user] PLS from chemometrics.py (PyChem)
Message-ID:

Hi list,

this might be a bit off-topic, but I hope someone might have an answer to this. I wondered whether anyone on this list has experience using PLS (Partial least squares regression) from the chemometrics module of PyChem (it is SciPy-based)? https://sourceforge.net/projects/pychem/

What I would like to do is to build a fully cross-validated PLS model based on two multivariate data matrices. I have the following questions:

1. Is it possible to build a fully cross-validated PLS model? (I am not sure if this is possible from what I can see from the code)

2. Also, I get the following error message when I try to run this code:

    from scipy import *
    from numpy import *
    from scipy.io import *
    from chemometrics import *

    fluorescense = read_array('OST_flu.txt')
    sensory = read_array('OST_sens.txt')

    rows = fluorescense.shape[0]
    mask = zeros((rows, 1), float)
    for i in range(7,14):
        mask[i] = 1
    print mask

    results = PLS(fluorescense,sensory,mask,7,None)

I get this error:
============
Traceback (most recent call last):
  File "C:\Work\PanelCheck\Methods\Weighing\testPLS\testPLS_IDLE.py", line 22, in ?
    results = PLS(fluorescense,sensory,mask,7,None)
  File "chemometrics.py", line 462, in PLS
    x1,x2,x3,y1,y2,y3,dummy1,dummy2,dummy3 = __split__(xdata,ydata,mask) # raw data
ValueError: need more than 6 values to unpack

Any help appreciated.

regards
Oliver

(See attached file: chemometrics.py)(See attached file: process.py)
-------------- next part --------------
A non-text attachment was scrubbed...
Name: chemometrics.py
Type: application/octet-stream
Size: 26437 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: process.py
Type: application/octet-stream
Size: 9515 bytes
Desc: not available
URL:

From giorgio.luciano at chimica.unige.it Wed Jan 31 10:10:01 2007
From: giorgio.luciano at chimica.unige.it (Giorgio Luciano)
Date: Wed, 31 Jan 2007 16:10:01 +0100
Subject: [SciPy-user] PLS from chemometrics.py (PyChem)
In-Reply-To: References: Message-ID: <45C0B149.7060007@chimica.unige.it>

I've contacted the author some time ago because I've wrote to the ICS if anyone was interested in creating a repository of free chemometrics software python based. He was interested in having his software included and we keep in conctact because I'm "translating" matlab chemometrics routins in python. Now he's developing version 3.0 of pychem and probably can give you hints about the question you asked. I've also made modules for regression and just finished to write a python nipals algorithm and I'm planning also to "translate" a PLS algorithm if you are interested i will be happy to keep in touch

Giorgio

From oliver.tomic at matforsk.no Wed Jan 31 10:22:32 2007
From: oliver.tomic at matforsk.no (oliver.tomic at matforsk.no)
Date: Wed, 31 Jan 2007 16:22:32 +0100
Subject: [SciPy-user] PLS from chemometrics.py (PyChem)
In-Reply-To: <45C0B149.7060007@chimica.unige.it>
Message-ID:

That's great news, Giorgio! I am really looking forward to having access to more chemometric methods coded in python.
I'd be glad if you could let me know when your "translations" and those of Roger Jarvis (PyChem author?) are ready. I'd be happy to contribute with testing and giving feedbacks once you are ready. Oliver scipy-user-bounces at scipy.org wrote on 31.01.2007 16:10:01: > I've contacted the author some time ago because I've wrote to the ICS if > anyone was interested in creating a repository of free chemometrics > software python based. > He was interested in having his software included and we keep in > conctact because I'm "translating" matlab chemometrics routins in python. > Now he's developing version 3.0 of pychem and probably can give you > hints about the question you asked. > I've also made modules for regression and just finished to write a > python nipals algorithm and I'm planning also to "translate" a PLS algorithm > if you are interested i will be happy to keep in touch > > Giorgio > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From sransom at nrao.edu Wed Jan 31 11:16:21 2007 From: sransom at nrao.edu (Scott Ransom) Date: Wed, 31 Jan 2007 11:16:21 -0500 Subject: [SciPy-user] build issue on 64-bit Intel Core2 Duo In-Reply-To: <1170227383.12321.11.camel@nadav.envision.co.il> References: <1170227383.12321.11.camel@nadav.envision.co.il> Message-ID: <200701311116.22016.sransom@nrao.edu> This works for me on a 64-bit Debian Core2 Duo system. eiger:~$ uname -a Linux eiger 2.6.18-3-amd64 #1 SMP Sun Dec 10 19:57:44 CET 2006 x86_64 GNU/Linux eiger:~$ python cpuinfo.py CPU information: getNCPUs=2 has_mmx has_sse has_sse2 is_64bit is_Intel is_Nocona is_i686 Scott On Wednesday 31 January 2007 02:09, Nadav Horesh wrote: > > Few more informations: > > > >python numpy/distutils/cpuinfo.py: > > > >CPU information: getNCPUs=4 has_mmx has_sse has_sse2 is_64bit > > is_Intel is_XEON is_Xeon is_i686 > > Well, > Crossing our systems' info it looks to me that the best and straight > forward way to identify Intel's imitation to amd64 arch is: > > def _is_Nocona(self): > return self.is_64bit() and self.is_Intel() and self.is_i686() > > Did not test it but it fits and looks logical. > > Nadav. -- Scott M. Ransom Address: NRAO Phone: (434) 296-0320 520 Edgemont Rd. email: sransom at nrao.edu Charlottesville, VA 22903 USA GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 From cbc at unc.edu Wed Jan 31 11:38:13 2007 From: cbc at unc.edu (Chris Calloway) Date: Wed, 31 Jan 2007 11:38:13 -0500 Subject: [SciPy-user] What it the best way to install on a Mac? In-Reply-To: <723eb6930701310617v42a4a098s5582ef6dff01e8d4@mail.gmail.com> References: <723eb6930701310617v42a4a098s5582ef6dff01e8d4@mail.gmail.com> Message-ID: <45C0C5F5.2000100@unc.edu> Chris Fonnesbeck wrote: > I posted a new build yesterday, which should work. Not a "universal" (PPC & Intel) installer, correct? Not complaining. Just making sure. -- Sincerely, Chris Calloway http://www.seacoos.org office: 332 Chapman Hall phone: (919) 962-4323 mail: Campus Box #3300, UNC-CH, Chapel Hill, NC 27599 From hetland at tamu.edu Wed Jan 31 12:09:29 2007 From: hetland at tamu.edu (Rob Hetland) Date: Wed, 31 Jan 2007 11:09:29 -0600 Subject: [SciPy-user] Release 0.6.1 of pyaudio, renamed pyaudiolab In-Reply-To: <45C0A076.2090805@ar.media.kyoto-u.ac.jp> References: <45C0A076.2090805@ar.media.kyoto-u.ac.jp> Message-ID: pyaudiolab works on Mac OS X, but the library needs to point to libsndfile.dylab instead of libsndfile.so.1. 
This seems to be hardwired in the setup, and I needed to change the pysndfile.py in the actual python site-packages after the install. Also, the example on the web page does not work exactly right. setting frames to 1e4 does not work, as 1e4 is a float, not an int. I think that it would make sense to coerce the frames input to an integer before calling libsndfile. Still, veryveryvery cool. -Rob On Jan 31, 2007, at 7:58 AM, David Cournapeau wrote: > Hi There, > > A few months ago, I posted the first release of pyaudio, a python > module to give numpy/scipy environment audio file IO capabilities (ala > matlab wavread and co). I recently took time to update it > significantly, > and as several people showed interest in pyaudio recently, I thought > this may interest some people here. > The main improvements since last public annoucement: > > - renamed to pyaudiolab to avoid nameclash with other package > pyaudio. > - ability to seek into audio files and sync files for > flushing IO > buffers > - matlab-like API: wavread and wavwrite > - improvements of the API: more similar to numpy conventions, > ability to retrieve most meta-data of a file. > - some bug fixing (one bug was quite severe, leading to data > corruption on ubuntu with python2.5, due to some weirdness in ubuntu > python2.5 packaging) > - a real documentation > > If some people manage to use it on something else than linux, I > would happy to hear (particularly Mac OS X, which I cannot test > myself). > > Cheers, > > David > > ====== pyaudiolab ====== > > * WHAT FOR ?: > > The Goal of pyaudiolab is to give to a numpy/scipy environment > some basic audio IO > facilities (ala sound, wavread, wavwrite of matlab). > > With pyaudiolab, you should be able to read and write most > common audio > files from and to numpy arrays. The underlying IO operations are done > using libsndfile from Erik Castro Lopo > (http://www.mega-nerd.com/libsndfile/), so he is the one who did the > hard work. As libsndfile has support for a vast number of audio files > (including wav, aiff, but also htk, ircam, and flac, an open source > lossless codec), pyaudiolab enables the import from and export to a > fairly > large number of file formats. > > * WHERE TO GET ?: > > http://www.ar.media.kyoto-u.ac.jp/members/david/softwares/ > pyaudiolab/index.html#download > > bzr archive: http://www.ar.media.kyoto-u.ac.jp/members/david/ > archives/pyaudiolab/pyaudiolab.dev > > # HOW TO USE ?: > > http://www.ar.media.kyoto-u.ac.jp/members/david/softwares/ > pyaudiolab/index.html > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user ---- Rob Hetland, Associate Professor Dept. of Oceanography, Texas A&M University http://pong.tamu.edu/~rob phone: 979-458-0096, fax: 979-845-6331 From fperez.net at gmail.com Wed Jan 31 12:47:22 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 31 Jan 2007 10:47:22 -0700 Subject: [SciPy-user] What it the best way to install on a Mac? In-Reply-To: <45C0C5F5.2000100@unc.edu> References: <723eb6930701310617v42a4a098s5582ef6dff01e8d4@mail.gmail.com> <45C0C5F5.2000100@unc.edu> Message-ID: Hi all, I'm not a Mac person, but just yesterday Brian Granger and Sanjiv Das put together a nice step-by-step installation guide for the ipython/numpy/scipy triad on a Mac. 
Here are temporary links to it in PowerPoint and PDF formats:

http://scumis.scu.edu/~srdas/temp/iPython_installation.ppt
http://scumis.scu.edu/~srdas/temp/iPython_installation.pdf

This will probably be cleaned up and updated a bit, and we can put it up in the wiki so it can be more easily edited and maintained in the future.

Cheers,
f

From robert.kern at gmail.com Wed Jan 31 13:28:26 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 31 Jan 2007 12:28:26 -0600
Subject: [SciPy-user] What it the best way to install on a Mac?
In-Reply-To: <1170249483.45c0970b55b58@webmail.ster.kuleuven.be>
References: <1170249483.45c0970b55b58@webmail.ster.kuleuven.be>
Message-ID: <45C0DFCA.8010302@gmail.com>

joris at ster.kuleuven.ac.be wrote:
> On Wednesday 31 January 2007 01:27, Robert Kern wrote:
>> Learn to build from source. Because of the dependency on FORTRAN runtime libraries, this is the lowest-hassle method.
>
> This may be true at the moment, but I hope this will not be the general philosophy for the future?

It's not a philosophy; it's a technical problem. If you can solve the technical problem, building from source will no longer be the lowest-hassle method.

> Having to build from source may scare off potential numpy/scipy/matplotlib users... Yesterday someone said that he was trying to install scipy from source for about 2 weeks now.

I mean no disrespect to all the parties involved, but the common problem that I see between all of the people who take such a long time to build scipy is that they tend to thrash around, trying everything, without understanding what they're doing. Going back and forth on a mailing list trying to diagnose a problem naturally takes a long time. It's such a grossly inefficient way to troubleshoot.

But when there's a problem with the binary package, the same exact thing happens. There is no substitute for learning.

> It's not that I demand other people to make easy-to-install packages, but I think that we're heading in the wrong direction if we don't even think that easy-to-install packages are really necessary and that installing from source works just fine.

We're not. Solve the technical problem, and the whole issue will go away.

-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From meesters at uni-mainz.de Wed Jan 31 13:32:49 2007
From: meesters at uni-mainz.de (Christian Meesters)
Date: Wed, 31 Jan 2007 19:32:49 +0100
Subject: [SciPy-user] bug in cookbook example: rebinning?
Message-ID: <200701311932.50317.meesters@uni-mainz.de>

Hi,
Does anybody know what 'xi' is in the rebinning example (at http://www.scipy.org/Cookbook/Rebinning ; example No. 3, line 7)?
TIA
Christian

From conor.robinson at gmail.com Wed Jan 31 13:58:08 2007
From: conor.robinson at gmail.com (Conor Robinson)
Date: Wed, 31 Jan 2007 10:58:08 -0800
Subject: [SciPy-user] build issue on 64-bit Intel Core2 Duo
In-Reply-To: <200701311116.22016.sransom@nrao.edu>
References: <1170227383.12321.11.camel@nadav.envision.co.il> <200701311116.22016.sransom@nrao.edu>
Message-ID:

I'm coming in on this discussion late; however, I have been recompiling libs using icc and ifort on the 3.0 GHz version of this machine with the -fast arg, and there are very noticeable improvements when using the compilers from Intel. I had problems getting my i686-apple-darwin8-gcc-4.0.1 to optimize for the 5150.
There is 64-bit support for Linux and Windows in ifort and icc. Most errors I ran into with optimizing centered around -O3 or Intel's -fast global optimize args implying -static; you may need to manually feed what -O3 implies, without -static. GNU does not really support this chip yet (from what I understand), although the Pentium Pro (i686 arch) target seems to do a decent job.

On 1/31/07, Scott Ransom wrote:
> This works for me on a 64-bit Debian Core2 Duo system.
>
> eiger:~$ uname -a
> Linux eiger 2.6.18-3-amd64 #1 SMP Sun Dec 10 19:57:44 CET 2006 x86_64 GNU/Linux
>
> eiger:~$ python cpuinfo.py
> CPU information: getNCPUs=2 has_mmx has_sse has_sse2 is_64bit is_Intel is_Nocona is_i686
>
> Scott
>
> On Wednesday 31 January 2007 02:09, Nadav Horesh wrote:
> > > Few more informations:
> > >
> > > python numpy/distutils/cpuinfo.py:
> > >
> > > CPU information: getNCPUs=4 has_mmx has_sse has_sse2 is_64bit is_Intel is_XEON is_Xeon is_i686
> >
> > Well,
> > Crossing our systems' info it looks to me that the best and straight forward way to identify Intel's imitation to amd64 arch is:
> >
> > def _is_Nocona(self):
> >     return self.is_64bit() and self.is_Intel() and self.is_i686()
> >
> > Did not test it but it fits and looks logical.
> >
> > Nadav.
>
> --
> Scott M. Ransom    Address: NRAO
> Phone: (434) 296-0320    520 Edgemont Rd.
> email: sransom at nrao.edu    Charlottesville, VA 22903 USA
> GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From amcmorl at gmail.com Wed Jan 31 15:50:47 2007
From: amcmorl at gmail.com (Angus McMorland)
Date: Thu, 1 Feb 2007 09:50:47 +1300
Subject: [SciPy-user] bug in cookbook example: rebinning?
In-Reply-To: <200701311932.50317.meesters@uni-mainz.de>
References: <200701311932.50317.meesters@uni-mainz.de>
Message-ID:

Hi Christian,

On 01/02/07, Christian Meesters wrote:
> Does anybody know what 'xi' is in the rebinning example (at http://www.scipy.org/Cookbook/Rebinning ; example No. 3, line 7)?

Sorry, that was my bad. I wrote that routine before I knew a lot about numpy or python, and didn't know many of the available helper functions. xi was another small routine:

def xi(x, *args):
    return x

Looking back now, I've no idea why it's like that, and a much easier solution is to use numpy.indices in place of nvals in the congrid routine, since

nvals(i, dims) == n.indices(dims)[i]

I will go and make the change on the wiki now.

Angus.
--
AJC McMorland, PhD Student
Physiology, University of Auckland

From lev at columbia.edu Wed Jan 31 17:38:00 2007
From: lev at columbia.edu (Lev Givon)
Date: Wed, 31 Jan 2007 17:38:00 -0500
Subject: [SciPy-user] What it the best way to install on a Mac?
In-Reply-To: References: <723eb6930701310617v42a4a098s5582ef6dff01e8d4@mail.gmail.com> <45C0C5F5.2000100@unc.edu>
Message-ID: <20070131223800.GA20121@avicenna.cc.columbia.edu>

Received from Fernando Perez on Wed, Jan 31, 2007 at 12:47:22PM EST:
> Hi all,
>
> I'm not a Mac person, but just yesterday Brian Granger and Sanjiv Das put together a nice step-by-step installation guide for the ipython/numpy/scipy triad on a Mac.
Here are temporary links to it in > PowerPoint and PDF formats: > > http://scumis.scu.edu/~srdas/temp/iPython_installation.ppt > http://scumis.scu.edu/~srdas/temp/iPython_installation.pdf > > This will probably be cleaned up and updated a bit, and we can put it > up in the wiki so it can be more easily edited and maintained in the > future. > > Cheers, > > f It might be desirable to encourage a matplotlib person to add some information regarding the installation of that package, as I suspect that many folks in need of installation wisdom think in terms of the tetrad ipython/numpy/scipy/matplotlib. L.G. From fperez.net at gmail.com Wed Jan 31 17:55:16 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 31 Jan 2007 15:55:16 -0700 Subject: [SciPy-user] What it the best way to install on a Mac? In-Reply-To: <20070131223800.GA20121@avicenna.cc.columbia.edu> References: <723eb6930701310617v42a4a098s5582ef6dff01e8d4@mail.gmail.com> <45C0C5F5.2000100@unc.edu> <20070131223800.GA20121@avicenna.cc.columbia.edu> Message-ID: On 1/31/07, Lev Givon wrote: > It might be desirable to encourage a matplotlib person to add some > information regarding the installation of that package, as I suspect > that many folks in need of installation wisdom think in terms of the > tetrad ipython/numpy/scipy/matplotlib. Certainly. Hence my hope that this will be wikified soon so that the community can contribute and continue maintaining/improving this as conditions change. Cheers, f From gnchen at cortechs.net Wed Jan 31 21:14:40 2007 From: gnchen at cortechs.net (Gennan Chen) Date: Wed, 31 Jan 2007 18:14:40 -0800 Subject: [SciPy-user] something like GTS?? Message-ID: Hi! Is there something like GTS (GNU Triangulated Surface Library: http:// gts.sourceforge.net/) in Scipy? I hate to cook my own for all those bookkeeping of vertices and surfaces. Any recommendation will be welcomed.. Gen-Nan Chen -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Wed Jan 31 22:54:29 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 01 Feb 2007 12:54:29 +0900 Subject: [SciPy-user] Release 0.6.1 of pyaudio, renamed pyaudiolab In-Reply-To: References: <45C0A076.2090805@ar.media.kyoto-u.ac.jp> Message-ID: <45C16475.90800@ar.media.kyoto-u.ac.jp> Rob Hetland wrote: > pyaudiolab works on Mac OS X, but the library needs to point to > libsndfile.dylab instead of libsndfile.so.1. > Argh, I thought I solved the problem. > This seems to be hardwired in the setup, and I needed to change the > pysndfile.py in the actual python site-packages after the install. It is not hardcoded: the setup.py is supposed to detect the library extension used on the platform, and build the name of the library accordingly (on Mac OS X, libsndfile.dylib.1): If you have time, I would appreciate to have the mac os X library name problem fixed. This is not difficult to fix, but I don't have easy access to mac os X, and this kind of info is not easy to find with simple googling. Helpful info (you can post me privately) are: - what does python -c "from numpy.distutils.system_info import so_ext; print so_ext" return ? - the name of compiled libsndfile (my understanding is that under mac os X, for the equivalent of shared , not dynamically loaded libraries, the convention is 'lib' + library_name + so_ext + '.' + 'major version', where library_name is sndfile, major version is 1, and so_ext '.dylib') > > Also, the example on the web page does not work exactly right. 
> setting frames to 1e4 does not work, as 1e4 is a float, not an int. > I think that it would make sense to coerce the frames input to an > integer before calling libsndfile. Are you sure you are using the version posted on the website ? Because nframes are coerced into 64 bits integers for some time, now (actually, that was exactly the problem related to ctypes and ubuntu, where long long were 32 bits instead of 64). David From robert.kern at gmail.com Wed Jan 31 23:22:49 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 31 Jan 2007 22:22:49 -0600 Subject: [SciPy-user] something like GTS?? In-Reply-To: References: Message-ID: <45C16B19.4000306@gmail.com> Gennan Chen wrote: > Hi! > > Is there something like GTS (GNU Triangulated Surface Library: > http://gts.sourceforge.net/) in Scipy? I hate to cook my own for all > those bookkeeping of vertices and surfaces. Any recommendation will be > welcomed.. Not currently in scipy, no. At Enthought, we have written a library that does much the same thing (Boolean operations on triangulated-surface-bounded volumes) somewhat more robustly (in our experience). It is not, at the moment, open source although we've talked about opening it. Offers to help develop and maintain it would go a long way towards that end. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From lanceboyle at qwest.net Wed Jan 31 23:52:47 2007 From: lanceboyle at qwest.net (Jerry) Date: Wed, 31 Jan 2007 21:52:47 -0700 Subject: [SciPy-user] What it the best way to install on a Mac? In-Reply-To: <45C0DFCA.8010302@gmail.com> References: <1170249483.45c0970b55b58@webmail.ster.kuleuven.be> <45C0DFCA.8010302@gmail.com> Message-ID: <4255B628-91C3-4FC5-BCDA-19C5026CA144@qwest.net> Thanks for all the comments. I've already downloaded Chris's package and will see how that goes--this isn't a high priority at the moment-- just planning ahead. I have nothing but respect for you guys who make all of this happen, and I'm not trying to start a flame war. (I've unintentionally done that before, if not on this list, then on another, by suggesting that the installation process should be easy.) First, I have installed a number of programs from source and it's usually a pain in the ass. If all that's required is one cycle of tar- cd-configure-make, that's OK by me, but it's not usually that easy, it seems. In the OS X world, it is customary that all of those convenient installation instructions such as were provided elsewhere in this thread to be codified into an installer. (Pedantic comments follow--sorry.) On OS X, there are two kinds of installation methods (for mainstream software, that is). One is a single drag-and-drop operation from a disk image to the package's final destination. (Apple's "package" is a directory that acts like a double-clickable application program.) The other way (more appropriate for a lot of the open source stuff, probably) is an automated installer that "just works" and performs any necessary operations and then puts any number of files wherever they need to go, leaving a receipt file in a special place to aid uninstalling. Even installation from source could be wrapped up into such an installer. 
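On David's library-naming question a few messages back: a ctypes-based wrapper can sidestep per-platform suffix guessing entirely with ctypes.util.find_library, which already knows the .so/.dylib conventions. A sketch of that alternative approach, not what pyaudiolab's setup.py currently does:

    import ctypes
    import ctypes.util

    # find_library returns a loadable name or path -- e.g.
    # 'libsndfile.so.1' on Linux, '/usr/local/lib/libsndfile.dylib'
    # on OS X -- or None when nothing is found.
    name = ctypes.util.find_library('sndfile')
    if name is None:
        raise ImportError('libsndfile not found')
    _sndfile = ctypes.CDLL(name)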
There are a great many very smart people who would benefit from using Python and its accessories in a technical computing environment but who have no inclination (in fact, a great disinclination) to install from source. Many of these folks use Macs and simply won't stand for a difficult installation because it's not the Mac culture to do so. And let me tell you that "difficult installation" for a Mac user is a low bar indeed. I would guess that four out of five technically-oriented Mac users would turn and run if they were considering using Python and its friends for their technical computing needs and encountered instructions to install from source. Fortunately, Chris's installer seems to be the answer. By the way, Chris, if you haven't done so already, you might post a notice over at http://www.macresearch.org/. I think it's a pretty new site but they passed the 10000-registered- user mark some time back and the rate of news postings has increased a lot lately. Also, a notice to Apple's ADC page about the installer would probably get some notice because they send e-mails to all the ADC folks every month or so with links to open source software. Thanks again for the helpful comments. Jerry On Jan 31, 2007, at 11:28 AM, Robert Kern wrote: > joris at ster.kuleuven.ac.be wrote: >> On Wednesday 31 January 2007 01:27, Robert Kern wrote: >> >>> Learn to build from source. Because of the dependency on FORTRAN >>> runtime >>> libraries, this is the lowest-hassle method. >> >> This may be true at the moment, but I hope this will not be the >> general >> philosophy for the future? > > It's not a philosophy; it's a technical problem. If you can solve > the technical > problem, building from source will no longer be the lowest-hassle > method. > >> Having to build from source may scare off potential >> numpy/scipy/matplotlib users... Yesterday someone said that he was >> trying to >> install scipy from source for about 2 weeks now. > > I mean no disrespect to all the parties involved, but the common > problem that I > see between all of the people who take such a long time to build > scipy is that > they tend to thrash around, trying everything, without > understanding what > they're doing. Going back and forth on a mailing list trying to > diagnose a > problem naturally takes a long time. It's such a grossly > inefficient way to > troubleshoot. > > But when there's a problem with the binary package, the same exact > thing > happens. There is no substitute for learning. > >> It's not that I demand other people to make easy-to-install >> packages, but I >> think that we're heading in the wrong direction if we don't even >> think that >> easy-to-install packages are really necessary and that installing >> from source >> works just fine. > > We're not. Solve the technical problem, and the whole issue will go > away. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a > harmless enigma > that is made terrible by our own mad attempt to interpret it as > though it had > an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user >
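Finally, on the GTS question above: scipy has nothing for surface booleans, but when the pain point is only the vertex/face bookkeeping, a pair of numpy arrays goes a long way. A minimal sketch of the shared-vertex layout for a tetrahedron; the conventions here are illustrative, not any particular library's API:

    import numpy

    # (n, 3) float array of vertex coordinates, and (m, 3) int array
    # of triangles, each row holding three indices into the vertex array.
    vertices = numpy.array([[0., 0., 0.],
                            [1., 0., 0.],
                            [0., 1., 0.],
                            [0., 0., 1.]])
    faces = numpy.array([[0, 2, 1],
                         [0, 1, 3],
                         [1, 2, 3],
                         [0, 3, 2]])

    # Per-face normals fall out of one vectorized cross product;
    # no per-edge or per-face objects to keep in sync.
    v0 = vertices[faces[:, 0]]
    v1 = vertices[faces[:, 1]]
    v2 = vertices[faces[:, 2]]
    normals = numpy.cross(v1 - v0, v2 - v0)
    lengths = numpy.sqrt((normals ** 2).sum(axis=1))
    normals = normals / lengths[:, numpy.newaxis]
    print normals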