From pwang at enthought.com Fri May 2 16:37:54 2008 From: pwang at enthought.com (Peter Wang) Date: Fri, 2 May 2008 15:37:54 -0500 Subject: [SciPy-dev] [IT] Maintenance and scheduled downtime this evening and weekend Message-ID: Hi everyone, This evening and this weekend, we will be doing a major overhaul of Enthought's internal network infrastructure. We will be cleaning up a large amount of legacy structure and transitioning to a more maintainable, better documented configuration. We have planned the work so that externally-facing servers will experience a minimum of downtime. In the event of unforeseen difficulties that cause outages to extend beyond the times given below, we will update the network status page, located at http://dr0.enthought.com/status/ . This page will remain available and be unaffected by any network outages. Downtimes: Friday May 2, 2008, 8pm - 10pm Central time (= 9pm - 11pm Eastern time) (= 1am - 3am UTC Sat. May 3) Saturday May 2, 2008, 10am - 11am Central time (= 11am - 12 noon Eastern time) (= 3pm - 4pm UTC) To reach us during the downtime, please use the contact information provided on the network status page. Please let me know if you have any questions or concerns. We will send out another email once the outage is complete. Thanks for your patience! -Peter From eads at soe.ucsc.edu Fri May 2 18:45:03 2008 From: eads at soe.ucsc.edu (Damian Eads) Date: Fri, 02 May 2008 15:45:03 -0700 Subject: [SciPy-dev] [IT] Maintenance and scheduled downtime this evening and weekend In-Reply-To: References: Message-ID: <481B996F.9020504@soe.ucsc.edu> Will this effect SVN and Trac access? Thanks! Damian Peter Wang wrote: > Hi everyone, > > This evening and this weekend, we will be doing a major overhaul of > Enthought's internal network infrastructure. We will be cleaning up a > large amount of legacy structure and transitioning to a more > maintainable, better documented configuration. From pwang at enthought.com Fri May 2 18:50:55 2008 From: pwang at enthought.com (Peter Wang) Date: Fri, 2 May 2008 17:50:55 -0500 Subject: [SciPy-dev] [IT] Maintenance and scheduled downtime this evening and weekend In-Reply-To: <481B996F.9020504@soe.ucsc.edu> References: <481B996F.9020504@soe.ucsc.edu> Message-ID: <6B53627F-13CD-4671-8C30-1217264A0BC9@enthought.com> On May 2, 2008, at 5:45 PM, Damian Eads wrote: > Will this effect SVN and Trac access? > Thanks! > Damian Yes, this affects svn, trac, web, mailing lists,...everything, because we will be working on the underlying network infrastructure. -Peter From eads at soe.ucsc.edu Sat May 3 08:19:52 2008 From: eads at soe.ucsc.edu (Damian Eads) Date: Sat, 03 May 2008 05:19:52 -0700 Subject: [SciPy-dev] importing scipy.stats generates exception on deprecate decorator Message-ID: <481C5868.9070402@soe.ucsc.edu> Hi, I just encountered a rather puzzling exception I have never seen before, which is generated when scipy.stats is imported. It occurs regardless if I use scipy from the latest SVN checkout or Fedora 8-yum. [redfox at localhost chestnut]$ python Python 2.5.1 (r251:54863, Oct 30 2007, 13:54:11) [GCC 4.1.2 20070925 (Red Hat 4.1.2-33)] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> import scipy.stats Traceback (most recent call last): File "", line 1, in File "/tmp/qq/lib/python2.5/site-packages/scipy/stats/__init__.py", line 7, in from stats import * File "/tmp/qq/lib/python2.5/site-packages/scipy/stats/stats.py", line 192, in import scipy.linalg as linalg File "/tmp/qq/lib/python2.5/site-packages/scipy/linalg/__init__.py", line 13, in from iterative import * File "/tmp/qq/lib/python2.5/site-packages/scipy/linalg/iterative.py", line 5, in from scipy.sparse.linalg import isolve File "/tmp/qq/lib/python2.5/site-packages/scipy/sparse/__init__.py", line 5, in from base import * File "/tmp/qq/lib/python2.5/site-packages/scipy/sparse/base.py", line 45, in class spmatrix(object): File "/tmp/qq/lib/python2.5/site-packages/scipy/sparse/base.py", line 139, in spmatrix @deprecate TypeError: deprecate() takes exactly 3 arguments (1 given) >>> Any ideas? Damian From matthieu.brucher at gmail.com Sat May 3 09:17:34 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Sat, 3 May 2008 15:17:34 +0200 Subject: [SciPy-dev] importing scipy.stats generates exception on deprecate decorator In-Reply-To: <481C5868.9070402@soe.ucsc.edu> References: <481C5868.9070402@soe.ucsc.edu> Message-ID: Hi, Did you use latest numpy SVN as well ? Matthieu 2008/5/3 Damian Eads : > Hi, > > I just encountered a rather puzzling exception I have never seen before, > which is generated when scipy.stats is imported. It occurs regardless if > I use scipy from the latest SVN checkout or Fedora 8-yum. > > [redfox at localhost chestnut]$ python > Python 2.5.1 (r251:54863, Oct 30 2007, 13:54:11) > [GCC 4.1.2 20070925 (Red Hat 4.1.2-33)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>> import scipy.stats > Traceback (most recent call last): > File "", line 1, in > File "/tmp/qq/lib/python2.5/site-packages/scipy/stats/__init__.py", > line 7, in > from stats import * > File "/tmp/qq/lib/python2.5/site-packages/scipy/stats/stats.py", line > 192, in > import scipy.linalg as linalg > File "/tmp/qq/lib/python2.5/site-packages/scipy/linalg/__init__.py", > line 13, in > from iterative import * > File "/tmp/qq/lib/python2.5/site-packages/scipy/linalg/iterative.py", > line 5, in > from scipy.sparse.linalg import isolve > File "/tmp/qq/lib/python2.5/site-packages/scipy/sparse/__init__.py", > line 5, in > from base import * > File "/tmp/qq/lib/python2.5/site-packages/scipy/sparse/base.py", line > 45, in > class spmatrix(object): > File "/tmp/qq/lib/python2.5/site-packages/scipy/sparse/base.py", line > 139, in spmatrix > @deprecate > TypeError: deprecate() takes exactly 3 arguments (1 given) > >>> > > Any ideas? > > Damian > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From robince at gmail.com Sat May 3 10:14:19 2008 From: robince at gmail.com (Robin) Date: Sat, 3 May 2008 15:14:19 +0100 Subject: [SciPy-dev] pickle of dok_matrix fails with protocol 2 Message-ID: Hi, I found that trying to load a protocol 2 binary pickle of a dok_matrix results in an error: Protocol 1 binary and protocol 0 seem to work OK. 
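For readers puzzled by the deprecate() TypeError in the traceback above, a minimal illustration (not the actual numpy source) of why a helper that requires three positional arguments cannot be applied as a bare decorator: Python passes the decorated function as the only argument.

    import warnings

    def deprecate(func, old_name, new_name):
        # Old-style helper: meant to be called as deprecate(f, 'old', 'new').
        def wrapper(*args, **kwargs):
            warnings.warn("%s is deprecated, use %s instead" % (old_name, new_name),
                          DeprecationWarning)
            return func(*args, **kwargs)
        return wrapper

    # Bare decorator syntax passes exactly one argument, the function itself:
    #
    #     @deprecate            # equivalent to: spam = deprecate(spam)
    #     def spam():
    #         pass
    #
    # hence "deprecate() takes exactly 3 arguments (1 given)" whenever the
    # installed scipy and numpy disagree about deprecate()'s signature.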
In [37]: Ad = sparse.dok_matrix((10,10),dtype=int8) In [38]: Ad[2,array([1,2,3,4])]=1 In [39]: fd = open('test.pkl','wb') In [40]: cPickle.dump(Ad,fd,2) In [41]: fd.close() In [42]: fd = open('test.pkl','rb') In [43]: cPickle.load(fd) --------------------------------------------------------------------------- Traceback (most recent call last) /Users/robince/phd/maxent/python/ in () /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/sparse/dok.py in __setitem__(self, key, value) 192 j += self.shape[1] 193 --> 194 if i < 0 or i >= self.shape[0] or j < 0 or j >= self.shape[1]: 195 raise IndexError, "index out of bounds" 196 if isintlike(value) and value == 0: /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/sparse/base.py in __getattr__(self, attr) 314 return self.getnnz() 315 else: --> 316 raise AttributeError, attr + " not found" 317 318 def transpose(self): : shape not found In [44]: numpy.__version__ Out[44]: '1.0.5.dev4987' In [45]: scipy.__version__ Out[45]: '0.7.0.dev4114' In [46]: cPickle.__version__ Out[46]: '1.71' I have submitted this as ticket #661 http://scipy.org/scipy/scipy/ticket/661 I hope this was the correct thing to do. Robin From pwang at enthought.com Sun May 4 13:12:01 2008 From: pwang at enthought.com (Peter Wang) Date: Sun, 4 May 2008 12:12:01 -0500 Subject: [SciPy-dev] [IT] (updated) Maintenance and scheduled downtime this evening and weekend In-Reply-To: References: Message-ID: Hi everyone, We will need to do some more on the network today, Sunday May 4, from 1pm to 3pm Central time. (This is 2pm-4pm Eastern, 6pm-8pm UTC.) This affects the main Enthought and Scipy.org servers, including SVN, Trac, the mailing lists, and the web site. As usual, we don't anticipate the services being down for the entire time, but there may be intermittent connectivity issues during that time. Please try to avoid editing any Trac wikis during this time, since you may lose your changes. We will try to complete the work as quickly as possible, and we will send a status update when the scheduled work as been completed. Please check http://dr0.enthought.com/status for status updates. On behalf of the IT crew, thanks again for your patience and bearing with us. We recognize that these outages are inconvenient but they are a critical part of the transition to a new infrastructure that better supports the growing needs of the Scipy open-source community. Please let us know if you have any questions or concerns! -Peter From david at ar.media.kyoto-u.ac.jp Mon May 5 04:01:10 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 05 May 2008 17:01:10 +0900 Subject: [SciPy-dev] How to run scipy benchmarks "unit tests" ? Message-ID: <481EBEC6.70309@ar.media.kyoto-u.ac.jp> Hi, Before the conversion to nosetests, there used to be a bunch of benchmarks in the unit tests. (for example, in fftpack). How to run them now from the python interpreter ? I tried scipy.fftpack.bench, but it does not find any benchmark. cheers, David From pwang at enthought.com Mon May 5 05:44:23 2008 From: pwang at enthought.com (Peter Wang) Date: Mon, 5 May 2008 04:44:23 -0500 Subject: [SciPy-dev] [IT] Weekend outage complete In-Reply-To: References: Message-ID: <4364B874-7A19-4F77-88D1-F8855058CF1E@enthought.com> Hi everyone, The downtime took a little longer than expected (perhaps that is to be expected?), but everything should be back up and running now. Mail, web, SVN, and Trac for scipy.org and enthought.com are all functional. 
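Returning briefly to the dok_matrix pickling report above: until the ticket is resolved, one hedged workaround is to pickle a representation that does not rebuild itself through __setitem__, for example by round-tripping through COO. This is a sketch only, using the standard tocoo()/todok() conversions, and has not been tested against every scipy revision.

    import cPickle
    from numpy import int8
    from scipy import sparse

    Ad = sparse.dok_matrix((10, 10), dtype=int8)
    for j in (1, 2, 3, 4):
        Ad[2, j] = 1

    # Pickle the COO form instead of the dok_matrix itself...
    s = cPickle.dumps(Ad.tocoo(), 2)
    # ...and convert back after loading.
    Ad2 = cPickle.loads(s).todok()

    assert (Ad2.todense() == Ad.todense()).all()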
The mail server is working through some backlogged mail but that should clear up in a few hours. Thanks again for your patience during this upgrade process. We're in much better shape now to continue improving our network without causing such intrusive outages in the future. Please let me know if you have any questions or comments. On behalf of the IT crew, Thanks, Peter From matthew.brett at gmail.com Mon May 5 06:24:50 2008 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 5 May 2008 10:24:50 +0000 Subject: [SciPy-dev] How to run scipy benchmarks "unit tests" ? In-Reply-To: <481EBEC6.70309@ar.media.kyoto-u.ac.jp> References: <481EBEC6.70309@ar.media.kyoto-u.ac.jp> Message-ID: <1e2af89e0805050324q6b235e7esbaba9e81357b1ac3@mail.gmail.com> Hi, > Before the conversion to nosetests, there used to be a bunch of > benchmarks in the unit tests. (for example, in fftpack). How to run them > now from the python interpreter ? I tried scipy.fftpack.bench, but it > does not find any benchmark. Thanks for reporting this - good point. Someone (yes, it was me) forgot to put the benchmark directories into the setup.py files, so they were not being copied to the distribution. Fixed in latest SVN. Matthew From david at ar.media.kyoto-u.ac.jp Mon May 5 07:09:15 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 05 May 2008 20:09:15 +0900 Subject: [SciPy-dev] How to run scipy benchmarks "unit tests" ? In-Reply-To: <1e2af89e0805050324q6b235e7esbaba9e81357b1ac3@mail.gmail.com> References: <481EBEC6.70309@ar.media.kyoto-u.ac.jp> <1e2af89e0805050324q6b235e7esbaba9e81357b1ac3@mail.gmail.com> Message-ID: <481EEADB.3060207@ar.media.kyoto-u.ac.jp> Matthew Brett wrote: > > Thanks for reporting this - good point. Someone (yes, it was me) > forgot to put the benchmark directories into the setup.py files, so > they were not being copied to the distribution. Fixed in latest SVN. > Great, it is working now, thanks ! cheers, David From nwagner at iam.uni-stuttgart.de Tue May 6 12:30:20 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 06 May 2008 18:30:20 +0200 Subject: [SciPy-dev] ERROR: test_hdquantiles (test_mmorestats.TestQuantiles) Message-ID: Hi all, I discovered a new error wrt scipy.test() ====================================================================== ERROR: test_hdquantiles (test_mmorestats.TestQuantiles) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib64/python2.5/site-packages/scipy/stats/tests/test_mmorestats.py", line 86, in test_hdquantiles hdq = mms.hdquantiles_sd(data,[0.25, 0.5, 0.75]) File "/usr/local/lib64/python2.5/site-packages/scipy/stats/mmorestats.py", line 167, in hdquantiles_sd result = _hdsd_1D(data.compressed(), p) File "/usr/local/lib64/python2.5/site-packages/scipy/stats/mmorestats.py", line 143, in _hdsd_1D xsorted = np.sort(data.compressed()) AttributeError: 'numpy.ndarray' object has no attribute 'compressed' Cheers, Nils From pgmdevlist at gmail.com Tue May 6 13:14:07 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Tue, 6 May 2008 13:14:07 -0400 Subject: [SciPy-dev] ERROR: test_hdquantiles (test_mmorestats.TestQuantiles) In-Reply-To: References: Message-ID: <200805061314.07971.pgmdevlist@gmail.com> On Tuesday 06 May 2008 12:30:20 Nils Wagner wrote: > I discovered a new error wrt scipy.test() Oops, my bad: that's a side effect of a recent change in numpy.ma (where .compressed returns a _baseclass (ndarray) array, instead of a MaskedArray. 
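A small sketch of the numpy.ma change Pierre describes (illustrative only; the masked-array calls assumed here, np.ma.masked_values and np.ma.compressed, are long-standing): once .compressed() returns a plain ndarray, compressing twice fails, which is exactly the AttributeError in the report.

    import numpy as np

    data = np.ma.masked_values([1., 2., -999., 4.], -999.)

    plain = data.compressed()     # now a plain ndarray, no longer a MaskedArray
    # plain.compressed()          # would raise AttributeError, as in the report

    # Robust pattern: compress exactly once, or use the module-level function,
    # which accepts masked and unmasked input alike.
    xsorted = np.sort(np.ma.compressed(data))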
Anyhow, that's fixed in the SVN (r4236). P. From nwagner at iam.uni-stuttgart.de Tue May 6 15:00:07 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 06 May 2008 21:00:07 +0200 Subject: [SciPy-dev] numpy.test() versus scipy.test() Message-ID: Hi all, numpy.test() starts with some useful information >>> numpy.test() Numpy is installed in /usr/lib/python2.4/site-packages/numpy Numpy version 1.1.0.dev5133 IMHO, it would be nice to have this output for scipy.test() as well. Nils From helias at bccn.uni-freiburg.de Wed May 7 04:51:29 2008 From: helias at bccn.uni-freiburg.de (Moritz Helias) Date: Wed, 07 May 2008 10:51:29 +0200 Subject: [SciPy-dev] bug in hyperg #659 Message-ID: Hi, I found a bug in the implementation of hyp1f1 (special/cephes/hyperg.c), most probably due to changeset 3012. Please see ticket #659. A possible solution seems to be to use the function chgm in special/specfun/specfun.f. I changed my local copy of scipy accordingly and so far it works fine. Greetings, Moritz From Norbert.Nemec.List at gmx.de Thu May 8 04:40:20 2008 From: Norbert.Nemec.List at gmx.de (Norbert Nemec) Date: Thu, 08 May 2008 10:40:20 +0200 Subject: [SciPy-dev] Patch (weave): Fix warnings about missing const Message-ID: <20080508084020.225480@gmx.net> Hi there, I tried to submit this as a Trac ticket but the system gave me a "500 Server Error". Could someone please check it into SVN, so I can forget about it. The change is trivial and obvious. Thanks, Norbert -- The GMX SmartSurfer helps you save up to 70% of your online costs! Ideal for modem and ISDN: http://www.gmx.net/de/go/smartsurfer -------------- next part -------------- A non-text attachment was scrubbed... Name: weave-fix-const-warnings.diff Type: text/x-diff Size: 1291 bytes Desc: not available URL: From wnbell at gmail.com Thu May 8 12:46:22 2008 From: wnbell at gmail.com (Nathan Bell) Date: Thu, 8 May 2008 11:46:22 -0500 Subject: [SciPy-dev] [SciPy-user] failures=15, errors=7, scipy svn r4244 on OS X Leopard 10.5.2 In-Reply-To: <1d1e6ea70805080658o37a0af05od865cca046aa141@mail.gmail.com> References: <1d1e6ea70805080658o37a0af05od865cca046aa141@mail.gmail.com> Message-ID: On Thu, May 8, 2008 at 8:58 AM, George Nurser wrote: > I'm getting errors. This was with gfortran 4.3.0 from MacPorts, > standard gcc 4.0.1, numpy svn 5148. > > Quite a few errors were with matrices having the incorrect shape in > test_matvec etc; also a couple of failures in fancy indexing. > I wonder if recent work on matrices in numpy may be inconsistent with > the scipy test routines now. After updating to numpy svn 5148 I see the same errors. I don't follow the numpy mailing list, so the matrix indexing change was a surprise for me. I find it a little obnoxious that A[1] on a numpy matrix now returns a rank-1 array. This isn't the sort of change I would have expected from software that bears the magical 1.x designation. The only thing worse than counterintuitive behavior is non-backwards-compatible changes in a minor version release. I don't know when/if I'll change sparse to conform to the dense matrices. Numpy matrices seem to be a moving target these days, and my spare time is not so abundant that I can write code for moving targets. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ -------------- next part -------------- A non-text attachment was scrubbed...
Name: scipy_sparse.out Type: application/octet-stream Size: 8047 bytes Desc: not available URL: From robert.kern at gmail.com Thu May 8 18:48:10 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 8 May 2008 17:48:10 -0500 Subject: [SciPy-dev] Patch (weave): Fix warnings about missing const In-Reply-To: <20080508084020.225480@gmx.net> References: <20080508084020.225480@gmx.net> Message-ID: <3d375d730805081548t5b553fcdh30e9bcf41a0615f4@mail.gmail.com> On Thu, May 8, 2008 at 3:40 AM, Norbet Namec wrote: > Hi there, > > I tried to submit this as trac-ticket but the system gave me a "500 Server Error". Could someone please check it into SVN, so I can forget about it. The change is trivial and obvious. Applied in r4252. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cimrman3 at ntc.zcu.cz Fri May 9 05:46:08 2008 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Fri, 09 May 2008 11:46:08 +0200 Subject: [SciPy-dev] bug: indexing CSR matrix, 64bits Message-ID: <48241D60.5040002@ntc.zcu.cz> Hi, indexing by slices is broken on a 64 bit machine. On a 32 bit one, all is ok. Is something missing in the INSTANTIATE_ALL macro in sparsetools.i? r. In [1]: import scipy In [2]: scipy.__version__ Out[2]: '0.7.0.dev4195' In [3]: import numpy In [4]: numpy.__version__ Out[4]: '1.1.0.dev5106' (Pdb) mtx <12160x12160 sparse matrix of type '' with 235578 stored elements in Compressed Sparse Row format> (Pdb) mtx.data array([ 0. , 0. , 0. , ..., -0.00540995, -0.00636418, 0.0202062 ]) (Pdb) mtx.data.dtype dtype('float64') (Pdb) mtx.indices array([ 0, 1, 36, ..., 12136, 12155, 12159], dtype=int32) (Pdb) mtx.indptr array([ 0, 15, 30, ..., 235543, 235560, 235578], dtype=int32) (Pdb) ir slice(8974, 12160, None) (Pdb) ic slice(8974, 12160, None) (Pdb) mtx[ir,ic] *** NotImplementedError: Wrong number of arguments for overloaded function 'get_csr_submatrix'. 
Possible C/C++ prototypes are: get_csr_submatrix< int,signed char >(int const,int const,int const [],int const [],signed char const [],int const,int const,int const,int const,std::vector< int > *,std::vector< int > *,std::vector< signed char > *) get_csr_submatrix< int,unsigned char >(int const,int const,int const [],int const [],unsigned char const [],int const,int const,int const,int const,std::vector< int > *,std::vector< int > *,std::vector< unsigned char > *) get_csr_submatrix< int,short >(int const,int const,int const [],int const [],short const [],int const,int const,int const,int const,std::vector< int > *,std::vector< int > *,std::vector< short > *) get_csr_submatrix< int,unsigned short >(int const,int const,int const [],int const [],unsigned short const [],int const,int const,int const,int const,std::vector< int > *,std::vector< int > *,std::vector< unsigned short > *) get_csr_submatrix< int,int >(int const,int const,int const [],int const [],int const [],int const,int const,int const,int const,std::vector< int > *,std::vector< int > *,std::vector< int > *) get_csr_submatrix< int,unsigned int >(int const,int const,int const [],int const [],unsigned int const [],int const,int const,int const,int const,std::vector< int > *,std::vector< int > *,std::vector< unsigned int > *) get_csr_submatrix< int,long long >(int const,int const,int const [],int const [],long long const [],int const,int const,int const,int const,std::vector< int > *,std::vector< int > *,std::vector< long long > *) get_csr_submatrix< int,unsigned long long >(int const,int const,int const [],int const [],unsigned long long const [],int const,int const,int const,int const,std::vector< int > *,std::vector< int > *,std::vector< unsigned long long > *) get_csr_submatrix< int,float >(int const,int const,int const [],int const [],float const [],int const,int const,int const,int const,std::vector< int > *,std::vector< int > *,std::vector< float > *) get_csr_submatrix< int,double >(int const,int const,int const [],int const [],double const [],int const,int const,int const,int const,std::vector< int > *,std::vector< int > *,std::vector< double > *) get_csr_submatrix< int,long double >(int const,int const,int const [],int const [],long double const [],int const,int const,int const,int const,std::vector< int > *,std::vector< int > *,std::vector< long double > *) get_csr_submatrix< int,npy_cfloat_wrapper >(int const,int const,int const [],int const [],npy_cfloat_wrapper const [],int const,int const,int const,int const,std::vector< int > *,std::vector< int > *,std::vector< npy_cfloat_wrapper > *) get_csr_submatrix< int,npy_cdouble_wrapper >(int const,int const,int const [],int const [],npy_cdouble_wrapper const [],int const,int const,int const,int const,std::vector< int > *,std::vector< int > *,std::vector< npy_cdouble_wrapper > *) get_csr_submatrix< int,npy_clongdouble_wrapper >(int const,int const,int const [],int const [],npy_clongdouble_wrapper const [],int const,int const,int const,int const,std::vector< int > *,std::vector< int > *,std::vector< npy_clongdouble_wrapper > *) From wnbell at gmail.com Fri May 9 13:17:34 2008 From: wnbell at gmail.com (Nathan Bell) Date: Fri, 9 May 2008 12:17:34 -0500 Subject: [SciPy-dev] bug: indexing CSR matrix, 64bits In-Reply-To: <48241D60.5040002@ntc.zcu.cz> References: <48241D60.5040002@ntc.zcu.cz> Message-ID: On Fri, May 9, 2008 at 4:46 AM, Robert Cimrman wrote: > Hi, > > indexing by slices is broken on a 64 bit machine. On a 32 bit one, all > is ok. 
Is something missing in the INSTANTIATE_ALL macro in sparsetools.i? > Robert, I can't reproduce your problem on my system (Ubuntu 8.04, Athlon 64): $uname -a Linux droog 2.6.24-16-generic #1 SMP Thu Apr 10 12:47:45 UTC 2008 x86_64 GNU/Linux Since your .indptr and .indices arrays are 32-bits, I doubt that the problem is with INSTANTIATE_ALL. Currently sparsetools does not instantiate functions using 64-bit indices because it doubles the compilation time and it's not likely that anyone will have sparse matrices with dimensions >= 2**31 anytime soon. OTOH I don't know what else it could be so please try the following: 1) do a completely fresh install of numpy/scipy (rm -rf build and rm -rf /site-packages/scipy ) See if the problem still exists. 2) instantiate additional "long long" versions of the sparsetools functions: First check that you have a recent SWIG: $ swig -version SWIG Version 1.3.34 Add the following after line 145 of sparsetools.i DECLARE_INDEX_TYPE( long long ) http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/sparse/sparsetools/sparsetools.i#L145 Then, for each line that looks like: 174 %template(f_name) f_name; 175 %template(f_name) f_name; 176 %template(f_name) f_name; ..... 187 %template(f_name) f_name; add a long long version also, e.g. %template(f_name) f_name; http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/sparse/sparsetools/sparsetools.i#L188 Lastly, regenerate the SWIG wrappers as described here: http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/sparse/sparsetools/README.txt This will rule out the possible backend problems. Also, can you tell me what your C compiler thinks sizeof(int) is? -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From cimrman3 at ntc.zcu.cz Mon May 12 05:11:45 2008 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 12 May 2008 11:11:45 +0200 Subject: [SciPy-dev] bug: indexing CSR matrix, 64bits In-Reply-To: References: <48241D60.5040002@ntc.zcu.cz> Message-ID: <482809D1.1080000@ntc.zcu.cz> Hi Nathan, Nathan Bell wrote: > On Fri, May 9, 2008 at 4:46 AM, Robert Cimrman wrote: >> Hi, >> >> indexing by slices is broken on a 64 bit machine. On a 32 bit one, all >> is ok. Is something missing in the INSTANTIATE_ALL macro in sparsetools.i? >> > > Robert, I can't reproduce your problem on my system (Ubuntu 8.04, Athlon 64): > > $uname -a > Linux droog 2.6.24-16-generic #1 SMP Thu Apr 10 12:47:45 UTC 2008 > x86_64 GNU/Linux $ uname -a Linux uk709n03-kme 2.6.22-gentoo-r8 #1 SMP Fri Sep 7 09:38:53 CEST 2007 x86_64 Intel(R) Xeon(R) CPU X5355 @ 2.66GHz GenuineIntel GNU/Linux > Since your .indptr and .indices arrays are 32-bits, I doubt that the > problem is with INSTANTIATE_ALL. Currently sparsetools does not > instantiate functions using 64-bit indices because it doubles the > compilation time and it's not likely that anyone will have sparse > matrices with dimensions >= 2**31 anytime soon. OTOH I don't know > what else it could be so please try the following: Yep. I do not need 64 bit indices. I have also installed swig 1.3.34 to match your version. 
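As a quick way to answer Nathan's last question above (what the C compiler thinks sizeof(int) is) without writing a C test program, the following can be run from the interpreter; ctypes ships with Python 2.5, and the numpy dtype names used here are standard.

    import ctypes
    import numpy as np

    print ctypes.sizeof(ctypes.c_int)      # size of a C int, in bytes
    print ctypes.sizeof(ctypes.c_long)     # 8 on LP64 platforms such as x86_64 Linux
    print np.dtype(np.intc).itemsize       # numpy's C-int dtype
    print np.dtype(np.intp).itemsize       # pointer-sized integer used for indexing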
However the problem persists and is really strange: - I have created a small test script and it works - in pdb, indexing with slices works too (Pdb) mtx[slice(2290, 2845, None),slice(2290, 2845, None)] <555x555 sparse matrix of type '' with 3313 stored elements in Compressed Sparse Row format> - but not with the 'ir', 'ic' (Pdb) ir slice(2290, 2845, None) (Pdb) mtx[ir,ic] *** NotImplementedError: Wrong number of arguments for overloaded function 'get_csr_submatrix'. Possible C/C++ prototypes are: ... So thanks for your suggestions, but I fear the bug might not be in scipy.sparse... r. From cimrman3 at ntc.zcu.cz Mon May 12 05:34:57 2008 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 12 May 2008 11:34:57 +0200 Subject: [SciPy-dev] bug: indexing CSR matrix, 64bits In-Reply-To: <482809D1.1080000@ntc.zcu.cz> References: <48241D60.5040002@ntc.zcu.cz> <482809D1.1080000@ntc.zcu.cz> Message-ID: <48280F41.1030204@ntc.zcu.cz> Robert Cimrman wrote: > Hi Nathan, > > Nathan Bell wrote: >> On Fri, May 9, 2008 at 4:46 AM, Robert Cimrman wrote: >>> Hi, >>> >>> indexing by slices is broken on a 64 bit machine. On a 32 bit one, all >>> is ok. Is something missing in the INSTANTIATE_ALL macro in sparsetools.i? >>> >> Robert, I can't reproduce your problem on my system (Ubuntu 8.04, Athlon 64): >> >> $uname -a >> Linux droog 2.6.24-16-generic #1 SMP Thu Apr 10 12:47:45 UTC 2008 >> x86_64 GNU/Linux > > $ uname -a Linux uk709n03-kme 2.6.22-gentoo-r8 #1 SMP Fri Sep 7 09:38:53 > CEST 2007 x86_64 Intel(R) Xeon(R) CPU X5355 @ 2.66GHz GenuineIntel GNU/Linux > >> Since your .indptr and .indices arrays are 32-bits, I doubt that the >> problem is with INSTANTIATE_ALL. Currently sparsetools does not >> instantiate functions using 64-bit indices because it doubles the >> compilation time and it's not likely that anyone will have sparse >> matrices with dimensions >= 2**31 anytime soon. OTOH I don't know >> what else it could be so please try the following: > > Yep. I do not need 64 bit indices. I have also installed swig 1.3.34 to > match your version. > > However the problem persists and is really strange: > > - I have created a small test script and it works > > - in pdb, indexing with slices works too > (Pdb) mtx[slice(2290, 2845, None),slice(2290, 2845, None)] > <555x555 sparse matrix of type '' > with 3313 stored elements in Compressed Sparse Row format> > > - but not with the 'ir', 'ic' > (Pdb) ir > slice(2290, 2845, None) > (Pdb) mtx[ir,ic] > *** NotImplementedError: Wrong number of arguments for overloaded > function 'get_csr_submatrix'. > Possible C/C++ prototypes are: ... > > So thanks for your suggestions, but I fear the bug might not be in > scipy.sparse... I tracked it in my code. Due to construction, the items of ir, ic slices (ir.start, ir.stop, ...) had 'numpy.int32' type. After retyping to 'int' everything works. Nathan, do you think that ensuring correct slice data deserves a ticket/fix? I cannot implement it now, but it would be IMHO fairly easy to do. cheers, r. From david at ar.media.kyoto-u.ac.jp Mon May 12 10:59:29 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 12 May 2008 23:59:29 +0900 Subject: [SciPy-dev] Dropping djbfft ? Message-ID: <48285B51.4070903@ar.media.kyoto-u.ac.jp> Hi, I would like to ask again about dropping djbfft support. Djbfft is a fft implementation, which has not been updated since 1999, and is not packaged by most linux distributions (I quite doubt anyone on windows is using it). 
Also, because it only supports 2^n sizes, it has to be used simultaneously with another backend (fftpack, fftw, fftw3, etc....). The latter is why I would like to drop it: it is quite a PITA to test all combinations, and it takes a lot of time for maybe one or two users out there. I would much prefer spending time on improving things like our fftw3 support, float support, etc... cheers, David From millman at berkeley.edu Mon May 12 12:30:48 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Mon, 12 May 2008 09:30:48 -0700 Subject: [SciPy-dev] Dropping djbfft ? In-Reply-To: <48285B51.4070903@ar.media.kyoto-u.ac.jp> References: <48285B51.4070903@ar.media.kyoto-u.ac.jp> Message-ID: On Mon, May 12, 2008 at 7:59 AM, David Cournapeau wrote: > I would like to ask again about dropping djbfft support. Djbfft is a > fft implementation, which has not been updated since 1999, and is not > packaged by most linux distributions (I quite doubt anyone on windows is > using it). Also, because it only supports 2^n sizes, it has to be used > simultaneously with another backend (fftpack, fftw, fftw3, etc....). The > latter is why I would like to drop it: it is quite a PITA to test all > combinations, and it takes a lot of time for maybe one or two users out > there. I would much prefer spending time on improving things like our > fftw3 support, float support, etc... +1 I don't think I ever reported it, but I am pretty sure I have run into problems by just having it installed when building SciPy. I would be interested in whether anyone is able to successfully use it. -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From robert.kern at gmail.com Mon May 12 12:36:04 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 12 May 2008 11:36:04 -0500 Subject: [SciPy-dev] Dropping djbfft ? In-Reply-To: References: <48285B51.4070903@ar.media.kyoto-u.ac.jp> Message-ID: <3d375d730805120936u6b565663qa793cb5ef52bcc4e@mail.gmail.com> On Mon, May 12, 2008 at 11:30 AM, Jarrod Millman wrote: > On Mon, May 12, 2008 at 7:59 AM, David Cournapeau > wrote: > > I would like to ask again about dropping djbfft support. Djbfft is a > > fft implementation, which has not been updated since 1999, and is not > > packaged by most linux distributions (I quite doubt anyone on windows is > > using it). Also, because it only supports 2^n sizes, it has to be used > > simultaneously with another backend (fftpack, fftw, fftw3, etc....). The > > latter is why I would like to drop it: it is quite a PITA to test all > > combinations, and it takes a lot of time for maybe one or two users out > > there. I would much prefer spending time on improving things like our > > fftw3 support, float support, etc... > > +1 > I don't think I ever reported it, but I am pretty sure I have run into > problems by just having it installed when building SciPy. I would be > interested in whether anyone is able to successfully use it. I usually do have it installed and built into scipy, and I don't think I have had any problems with it. But I'm not averse to removing it if it is causing problems for other people. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From millman at berkeley.edu Mon May 12 12:53:52 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Mon, 12 May 2008 09:53:52 -0700 Subject: [SciPy-dev] Dropping djbfft ? In-Reply-To: <3d375d730805120936u6b565663qa793cb5ef52bcc4e@mail.gmail.com> References: <48285B51.4070903@ar.media.kyoto-u.ac.jp> <3d375d730805120936u6b565663qa793cb5ef52bcc4e@mail.gmail.com> Message-ID: On Mon, May 12, 2008 at 9:36 AM, Robert Kern wrote: > I usually do have it installed and built into scipy, and I don't think > I have had any problems with it. But I'm not averse to removing it if > it causing problems for other people. If no one else has run into this problem, I will see if I can figure out what the exact issue I ran into was. -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From ndbecker2 at gmail.com Mon May 12 13:39:34 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Mon, 12 May 2008 13:39:34 -0400 Subject: [SciPy-dev] Dropping djbfft ? References: <48285B51.4070903@ar.media.kyoto-u.ac.jp> <3d375d730805120936u6b565663qa793cb5ef52bcc4e@mail.gmail.com> Message-ID: Robert Kern wrote: > On Mon, May 12, 2008 at 11:30 AM, Jarrod Millman > wrote: >> On Mon, May 12, 2008 at 7:59 AM, David Cournapeau >> wrote: >> > I would like to ask again about dropping djbfft support. Djbfft is >> > a >> > fft implementation, which has not been updated since 1999, and is not >> > packaged by most linux distributions (I quite doubt anyone on windows >> > is using it). Also, because it only supports 2^n sizes, it has to be >> > used simultaneously with another backend (fftpack, fftw, fftw3, >> > etc....). The later is why I would like to drop it: it is quite a >> > PITA to test all combinations, and it takes a lot of time for maybe >> > one or two users out there. I would much prefer spending times on >> > improving things like our fftw3 support, float support, etc... >> >> +1 >> I don't think I ever reported it, but I am pretty sure I have run into >> problems by just having it installed when building SciPy. I would be >> interested in whether anyone is able to successful use it. > > I usually do have it installed and built into scipy, and I don't think > I have had any problems with it. But I'm not averse to removing it if > it causing problems for other people. > fftw is great, but be aware it has a restricted license. From pearu at cens.ioc.ee Mon May 12 15:08:10 2008 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Mon, 12 May 2008 22:08:10 +0300 (EEST) Subject: [SciPy-dev] Dropping djbfft ? In-Reply-To: <48285B51.4070903@ar.media.kyoto-u.ac.jp> References: <48285B51.4070903@ar.media.kyoto-u.ac.jp> Message-ID: <49877.88.90.135.57.1210619290.squirrel@cens.ioc.ee> On Mon, May 12, 2008 5:59 pm, David Cournapeau wrote: > Hi, > > I would like to ask again about dropping djbfft support. Djbfft is a > fft implementation, which has not been updated since 1999, and is not > packaged by most linux distributions (I quite doubt anyone on windows is > using it). Also, because it only supports 2^n sizes, it has to be used > simultaneously with another backend (fftpack, fftw, fftw3, etc....). The > later is why I would like to drop it: it is quite a PITA to test all > combinations, and it takes a lot of time for maybe one or two users out > there. I would much prefer spending times on improving things like our > fftw3 support, float support, etc... 
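For anyone following the thread and wondering which FFT backend their own scipy was actually built against: the build-time configuration is recorded at install time and can be dumped from the interpreter. This is a sketch; the exact section names (e.g. fftw3_info or djbfft_info) depend on what the build found.

    import numpy, scipy

    numpy.show_config()      # BLAS/LAPACK details
    scipy.show_config()      # lists e.g. fftw3_info / djbfft_info sections
                             # when those libraries were detected at build time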
djbfft is important for applications that need only the 2^n sizes support, and here djbfft has a speed advantage over other fft implementations. For some of these applications the fft speed can be crucial. Therefore my suggestion is not to drop the djbfft support but rather to move it from scipy to a standalone package. Then users who need the fastest 2^n fft can use it. This makes maintaining scipy.fft as well as the djbfft wrappers easier. I have been using the djbfft backend in scipy without problems. In my experience, most problems from using djbfft as a fft backend in scipy are due to not installing djbfft according to its installation instructions. Somehow people tend to prefer their own conventions, but djbfft happens to be very sensitive to anything but its own, and then various problems follow easily. Regards, Pearu From stefan at sun.ac.za Mon May 12 16:17:50 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 12 May 2008 22:17:50 +0200 Subject: [SciPy-dev] Dropping djbfft ? In-Reply-To: <49877.88.90.135.57.1210619290.squirrel@cens.ioc.ee> References: <48285B51.4070903@ar.media.kyoto-u.ac.jp> <49877.88.90.135.57.1210619290.squirrel@cens.ioc.ee> Message-ID: <9457e7c80805121317x1e452153h23e141200190f408@mail.gmail.com> 2008/5/12 Pearu Peterson : > djbfft is important for applications that need only the 2^n sizes support > and here djbfft has a speed advantage over other fft implementations. > For some of these applications the fft speed can be crucial. I'm not so sure this is true any longer. From the FFTW discussion page on Wikipedia, by one of the authors of FFTW: """ Nowadays, both djbfft and pfftw are trounced on Intel processors by anything that uses SSE/SSE2 instructions, such as recent FFTW versions or the Intel Math Kernel Libraries. """ Here pfftw refers to their implementation of fixed-size, out-of-order FFTs (what djbfft does). As for the licensing issue which someone else brought up: djbfft doesn't even have a license (not on its website, anyway). It is also no longer supported (last release 1999). That's extra ammo for the "release frequently" argument in the other thread: if you don't, then people think your project is dead! I'm of the opinion that, unless benchmarks show an advantage to keeping djbfft, keeping it around may be more trouble than it is worth. Regards Stéfan From robert.kern at gmail.com Mon May 12 16:24:51 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 12 May 2008 15:24:51 -0500 Subject: [SciPy-dev] Dropping djbfft ? In-Reply-To: <9457e7c80805121317x1e452153h23e141200190f408@mail.gmail.com> References: <48285B51.4070903@ar.media.kyoto-u.ac.jp> <49877.88.90.135.57.1210619290.squirrel@cens.ioc.ee> <9457e7c80805121317x1e452153h23e141200190f408@mail.gmail.com> Message-ID: <3d375d730805121324x267914c8k70496bbc5a2fad1c@mail.gmail.com> On Mon, May 12, 2008 at 3:17 PM, Stéfan van der Walt wrote: > As for the licensing issue which someone else brought up: djbfft > doesn't even have a license (not on its website, anyway). D.J. Bernstein has released all of his code to the public domain. (One can quibble about the exact technicalities of this, a la Larry Rosen, but I think we're safe.) http://video.google.com/videoplay?docid=-3147768955127254412 -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From wnbell at gmail.com Mon May 12 16:25:04 2008 From: wnbell at gmail.com (Nathan Bell) Date: Mon, 12 May 2008 15:25:04 -0500 Subject: [SciPy-dev] bug: indexing CSR matrix, 64bits In-Reply-To: <48280F41.1030204@ntc.zcu.cz> References: <48241D60.5040002@ntc.zcu.cz> <482809D1.1080000@ntc.zcu.cz> <48280F41.1030204@ntc.zcu.cz> Message-ID: On Mon, May 12, 2008 at 4:34 AM, Robert Cimrman wrote: > > I tracked it in my code. Due to construction, the items of ir, ic slices > (ir.start, ir.stop, ...) had 'numpy.int32' type. After retyping to 'int' > everything works. Thanks for isolating the problem. Please try r4294: http://projects.scipy.org/scipy/scipy/changeset/4294 > Nathan, do you think that ensuring correct slice data deserves a > ticket/fix? I cannot implement it now, but it would be IMHO fairly easy > to do. If I had more time I'd test sparse indexing more exhaustively. The bug you found is probably not the only example of an integer-like object slipping through to sparsetools. The better approach may be to make SWIG convert integer-like singletons to C integers. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From stefan at sun.ac.za Mon May 12 16:48:23 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 12 May 2008 22:48:23 +0200 Subject: [SciPy-dev] Dropping djbfft ? In-Reply-To: <3d375d730805121324x267914c8k70496bbc5a2fad1c@mail.gmail.com> References: <48285B51.4070903@ar.media.kyoto-u.ac.jp> <49877.88.90.135.57.1210619290.squirrel@cens.ioc.ee> <9457e7c80805121317x1e452153h23e141200190f408@mail.gmail.com> <3d375d730805121324x267914c8k70496bbc5a2fad1c@mail.gmail.com> Message-ID: <9457e7c80805121348i7cd2fa93w49db5e9b893ba207@mail.gmail.com> 2008/5/12 Robert Kern : > On Mon, May 12, 2008 at 3:17 PM, St?fan van der Walt wrote: > > As for the licensing issue which someone else brought up: djbfft > > doesn't even have a license (not on its website, anyway). > > D.J. Bernstein has released all of his code to the public domain. (One > can quibble about the exact technicalities of this, a la Larry Rosen, > but I think we're safe.) > > http://video.google.com/videoplay?docid=-3147768955127254412 Interestring that that conversation took place at Sage Days 6 (them using the GPL and all). I suppose D. Bernstein hadn't gotten around to changing the notices on his web page in the end. Thanks for the link! St?fan From millman at berkeley.edu Mon May 12 17:00:31 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Mon, 12 May 2008 14:00:31 -0700 Subject: [SciPy-dev] Dropping djbfft ? In-Reply-To: <9457e7c80805121348i7cd2fa93w49db5e9b893ba207@mail.gmail.com> References: <48285B51.4070903@ar.media.kyoto-u.ac.jp> <49877.88.90.135.57.1210619290.squirrel@cens.ioc.ee> <9457e7c80805121317x1e452153h23e141200190f408@mail.gmail.com> <3d375d730805121324x267914c8k70496bbc5a2fad1c@mail.gmail.com> <9457e7c80805121348i7cd2fa93w49db5e9b893ba207@mail.gmail.com> Message-ID: On Mon, May 12, 2008 at 1:48 PM, St?fan van der Walt wrote: > Interestring that that conversation took place at Sage Days 6 (them > using the GPL and all). I suppose D. Bernstein hadn't gotten around > to changing the notices on his web page in the end. 
He has the notice up on his FAQ: http://cr.yp.to/distributors.html -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From stefan at sun.ac.za Mon May 12 17:10:59 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 12 May 2008 23:10:59 +0200 Subject: [SciPy-dev] Dropping djbfft ? In-Reply-To: References: <48285B51.4070903@ar.media.kyoto-u.ac.jp> <49877.88.90.135.57.1210619290.squirrel@cens.ioc.ee> <9457e7c80805121317x1e452153h23e141200190f408@mail.gmail.com> <3d375d730805121324x267914c8k70496bbc5a2fad1c@mail.gmail.com> <9457e7c80805121348i7cd2fa93w49db5e9b893ba207@mail.gmail.com> Message-ID: <9457e7c80805121410l67b3b583o1e927a2d68f0b6f8@mail.gmail.com> 2008/5/12 Jarrod Millman : > On Mon, May 12, 2008 at 1:48 PM, St?fan van der Walt wrote: > > Interestring that that conversation took place at Sage Days 6 (them > > using the GPL and all). I suppose D. Bernstein hadn't gotten around > > to changing the notices on his web page in the end. > > He has the notice up on his FAQ: > http://cr.yp.to/distributors.html And now I know why it is a FAQ! :) Thanks St?fan From oliphant at enthought.com Mon May 12 17:50:26 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Mon, 12 May 2008 16:50:26 -0500 Subject: [SciPy-dev] Test Message-ID: <4828BBA2.3020907@enthought.com> This is a test. -teo From eric at enthought.com Mon May 12 18:01:04 2008 From: eric at enthought.com (eric jones) Date: Mon, 12 May 2008 17:01:04 -0500 Subject: [SciPy-dev] Dropping djbfft ? In-Reply-To: <48285B51.4070903@ar.media.kyoto-u.ac.jp> References: <48285B51.4070903@ar.media.kyoto-u.ac.jp> Message-ID: <83B4CDB9-2BEB-48FF-9B42-7996350ED229@enthought.com> Hey David, When we put the fft library together back in 2001-2002, there wasn't a good alternative for a fast fft package that wasn't GPLed. fftw was reasonably fast and fully functioning, but GPLed. fftpack had the functionality, but not the speed. djbfft (at the time) was noticeably faster than either of these for its important subset of functionality (radix 2), but wasn't full featured. The combination of djbfft and fftpack gave us a BSD compatible and fast library for SciPy. This was the combination that we used to build the binaries that were available from the scipy.org website. Having been out of the building SciPy game for a while, my question would be, what fft libraries are used now to build the binaries for scipy.org? Are they using fftpack only? If so, they are likely much slower than they need to be. My guess is that the binaries are not linked to MKL or FFTW because of licensing issues. Is there another alternative that we can use now that is fast, BSD compatible, and simpler to build? I'm very sympathetic to build issues, so if getting rid of djbfft simplifies things, then it is well worth considering. If, on the other hand, it results in slow-ish fft algorithms in the SciPy provided binaries, this also needs to be weighed as a drawback. eric On May 12, 2008, at 9:59 AM, David Cournapeau wrote: > Hi, > > I would like to ask again about dropping djbfft support. Djbfft > is a > fft implementation, which has not been updated since 1999, and is not > packaged by most linux distributions (I quite doubt anyone on > windows is > using it). Also, because it only supports 2^n sizes, it has to be used > simultaneously with another backend (fftpack, fftw, fftw3, etc....). 
> The > later is why I would like to drop it: it is quite a PITA to test all > combinations, and it takes a lot of time for maybe one or two users > out > there. I would much prefer spending times on improving things like our > fftw3 support, float support, etc... > > cheers, > > David > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From david at ar.media.kyoto-u.ac.jp Mon May 12 21:07:41 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 13 May 2008 10:07:41 +0900 Subject: [SciPy-dev] Dropping djbfft ? In-Reply-To: <83B4CDB9-2BEB-48FF-9B42-7996350ED229@enthought.com> References: <48285B51.4070903@ar.media.kyoto-u.ac.jp> <83B4CDB9-2BEB-48FF-9B42-7996350ED229@enthought.com> Message-ID: <4828E9DD.6000401@ar.media.kyoto-u.ac.jp> eric jones wrote: > Hey David, > > When we put the fft library together back in 2001-2002, there wasn't a > good alternative for a fast fft package that wasn't GPLed. fftw was > reasonably fast and fully functioning, but GPLed. fftpack had the > functionality, but not the speed. djbfft (at the time) was noticeably > faster than either of these for its important subset of functionality > (radix 2), but wasn't full featured. The combination of djbfft and > fftpack gave us a BSD compatible and fast library for SciPy. This was > the combination that we used to build the binaries that were available > from the scipy.org website. > > Having been out of the building SciPy game for a while, my question > would be, what fft libraries are used now to build the binaries for > scipy.org? Are they using fftpack only? Hi Eric, My understanding is that, at least for the windows binaries, only fftpack is used. I would guess that on linux, people use fftw3, since it is packaged by most distributions. The problem is not a build issue per se. Because djbfft only supports 2^n, single dimension, we need to use another backend at the same time. Today, we support 4 backends (+ djbfft), which means I have to test 8 combinations (not 9 because when mkl is used, djbfft is not used at all). This, combined to the fact that every fft backend is different, makes moving forward more painful that I wished. I think the speed issue is a bit moot, in the sense that right now, we do not use the backends very efficiently anyway. I started working on fftpack to get more speed (in place and out of place, float support, fftw plan control), but instead of improving the code, I am spending most of my time checking that I am not breaking any possible environment combination. cheers, David From david at ar.media.kyoto-u.ac.jp Mon May 12 21:26:25 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 13 May 2008 10:26:25 +0900 Subject: [SciPy-dev] Dropping djbfft ? In-Reply-To: <49877.88.90.135.57.1210619290.squirrel@cens.ioc.ee> References: <48285B51.4070903@ar.media.kyoto-u.ac.jp> <49877.88.90.135.57.1210619290.squirrel@cens.ioc.ee> Message-ID: <4828EE41.8040603@ar.media.kyoto-u.ac.jp> Pearu Peterson wrote: > > djbfft is important for applications that need only the 2^n sizes support > and here djbfft have speed advantage over other fft implementations. > For some of these applications the fft speed can be crusial. Are you sure djbfft speed advantage is still true today ? 
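A rough way for readers to check this point on their own build: a timing sketch using timeit, comparing scipy.fftpack (which runs against whichever backend it was compiled with) to numpy.fft for power-of-two sizes. Numbers will of course vary by machine and build options.

    import timeit

    setup = ("import numpy as np; import scipy.fftpack; "
             "x = np.random.rand(%d) + 1j * np.random.rand(%d)")

    for n in (2 ** 10, 2 ** 14, 2 ** 18):
        t_sp = min(timeit.Timer('scipy.fftpack.fft(x)', setup % (n, n)).repeat(3, 100))
        t_np = min(timeit.Timer('np.fft.fft(x)', setup % (n, n)).repeat(3, 100))
        print 'n = %7d   scipy.fftpack: %.4fs   numpy.fft: %.4fs' % (n, t_sp, t_np)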
When I run scipy.fftpack.bench, I hardly see any different between fftw3 and djbfft, and the current use of fftw3 is less than optimal (we can at least get a two fold increase with an improved design, because right now, we are using the worst possible plan method). When I run the benchmarks, the only backend significantly faster than the other ones is the MKL. cheers, David From robert.kern at gmail.com Mon May 12 21:56:20 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 12 May 2008 20:56:20 -0500 Subject: [SciPy-dev] Dropping djbfft ? In-Reply-To: <4828EE41.8040603@ar.media.kyoto-u.ac.jp> References: <48285B51.4070903@ar.media.kyoto-u.ac.jp> <49877.88.90.135.57.1210619290.squirrel@cens.ioc.ee> <4828EE41.8040603@ar.media.kyoto-u.ac.jp> Message-ID: <3d375d730805121856s9578280k59df43c2d92f3b2a@mail.gmail.com> On Mon, May 12, 2008 at 8:26 PM, David Cournapeau wrote: > Pearu Peterson wrote: >> >> djbfft is important for applications that need only the 2^n sizes support >> and here djbfft have speed advantage over other fft implementations. >> For some of these applications the fft speed can be crusial. > > Are you sure djbfft speed advantage is still true today ? When I run > scipy.fftpack.bench, I hardly see any different between fftw3 and > djbfft, and the current use of fftw3 is less than optimal (we can at > least get a two fold increase with an improved design, because right > now, we are using the worst possible plan method). > > When I run the benchmarks, the only backend significantly faster than > the other ones is the MKL. Neither djbfft vs. fftw nor djbfft vs. MKL are definitive comparisons. Not everyone can use GPLed code or proprietary code. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at ar.media.kyoto-u.ac.jp Mon May 12 21:53:05 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 13 May 2008 10:53:05 +0900 Subject: [SciPy-dev] Dropping djbfft ? In-Reply-To: <3d375d730805121856s9578280k59df43c2d92f3b2a@mail.gmail.com> References: <48285B51.4070903@ar.media.kyoto-u.ac.jp> <49877.88.90.135.57.1210619290.squirrel@cens.ioc.ee> <4828EE41.8040603@ar.media.kyoto-u.ac.jp> <3d375d730805121856s9578280k59df43c2d92f3b2a@mail.gmail.com> Message-ID: <4828F481.40400@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > > Neither djbfft vs. fftw nor djbfft vs. MKL are definitive comparisons. > Not everyone can use GPLed code or proprietary code. I understand that, I was merely answering to the fact that djbfft is the fastest. We have fftpack for people who do not care so much about speed, no ? IOW, I understand there are people who care about speed, people who care about open source, and people who care about not depending on both GPL and proprietary code. We support all this, and can still do it without depending on djbfft. But do we need to satisfy all the combinations of the above ? cheers, David From robert.kern at gmail.com Mon May 12 22:44:01 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 12 May 2008 21:44:01 -0500 Subject: [SciPy-dev] Dropping djbfft ? 
In-Reply-To: <4828F481.40400@ar.media.kyoto-u.ac.jp> References: <48285B51.4070903@ar.media.kyoto-u.ac.jp> <49877.88.90.135.57.1210619290.squirrel@cens.ioc.ee> <4828EE41.8040603@ar.media.kyoto-u.ac.jp> <3d375d730805121856s9578280k59df43c2d92f3b2a@mail.gmail.com> <4828F481.40400@ar.media.kyoto-u.ac.jp> Message-ID: <3d375d730805121944g1b353c6sae8045b338539dd0@mail.gmail.com> On Mon, May 12, 2008 at 8:53 PM, David Cournapeau wrote: > Robert Kern wrote: >> >> Neither djbfft vs. fftw nor djbfft vs. MKL are definitive comparisons. >> Not everyone can use GPLed code or proprietary code. > > I understand that, I was merely answering to the fact that djbfft is the > fastest. We have fftpack for people who do not care so much about speed, > no ? IOW, I understand there are people who care about speed, people who > care about open source, and people who care about not depending on both > GPL and proprietary code. We support all this, and can still do it > without depending on djbfft. But do we need to satisfy all the > combinations of the above ? I'd drop FFTW and MKL support first before djbfft because they are not compatible with the scipy license. Perhaps this subsystem just needs to be redesigned. I think a lot of the complexity derives from the fact that we're doing the dispatching in C. If we moved the dispatching up to Python and just let external packages register themselves, we can probably save ourselves a substantial number of headaches. By separating the backend implementations, each can actually be unit-tested instead of integration-tested, and no one has to face a combinatorial explosion. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at ar.media.kyoto-u.ac.jp Mon May 12 22:56:22 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 13 May 2008 11:56:22 +0900 Subject: [SciPy-dev] Dropping djbfft ? In-Reply-To: <3d375d730805121944g1b353c6sae8045b338539dd0@mail.gmail.com> References: <48285B51.4070903@ar.media.kyoto-u.ac.jp> <49877.88.90.135.57.1210619290.squirrel@cens.ioc.ee> <4828EE41.8040603@ar.media.kyoto-u.ac.jp> <3d375d730805121856s9578280k59df43c2d92f3b2a@mail.gmail.com> <4828F481.40400@ar.media.kyoto-u.ac.jp> <3d375d730805121944g1b353c6sae8045b338539dd0@mail.gmail.com> Message-ID: <48290356.6050104@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > > I'd drop FFTW and MKL support first before djbfft because they are not > compatible with the scipy license. Ok. > If we moved the dispatching up to Python and just let external > packages register themselves, we can probably save ourselves a > substantial number of headaches. By separating the backend > implementations, each can actually be unit-tested instead of > integration-tested, and no one has to face a combinatorial explosion. Yes, that's exactly what I wanted to do (fftpack by default, always, and dynamically changing the backend if wanted). I am trying to define fftpack as a C header, and each backend as an implementation of this API. But since no backend implements the same things (fftpack aside), that's not easy. A lot of the complexities comes from this: for example, real to real fft is only implemented in fftpack right now; complex to complex and multi dimensional are available in fftw3 and mkl, complex to real available in fftw3. 
This makes refactoring fftpack as an API + different backends implementing this API wihout breaking anything tedious. The other approach could be to drop all backends but fftpack, and forcing a new backend to implement the whole api (I would be willing to implement a full backend for fftw3 because it is open source, as well as djbfft, but no other). cheers, David From wnbell at gmail.com Mon May 12 23:11:10 2008 From: wnbell at gmail.com (Nathan Bell) Date: Mon, 12 May 2008 22:11:10 -0500 Subject: [SciPy-dev] Dropping djbfft ? In-Reply-To: <3d375d730805121944g1b353c6sae8045b338539dd0@mail.gmail.com> References: <48285B51.4070903@ar.media.kyoto-u.ac.jp> <49877.88.90.135.57.1210619290.squirrel@cens.ioc.ee> <4828EE41.8040603@ar.media.kyoto-u.ac.jp> <3d375d730805121856s9578280k59df43c2d92f3b2a@mail.gmail.com> <4828F481.40400@ar.media.kyoto-u.ac.jp> <3d375d730805121944g1b353c6sae8045b338539dd0@mail.gmail.com> Message-ID: On Mon, May 12, 2008 at 9:44 PM, Robert Kern wrote: > >> Neither djbfft vs. fftw nor djbfft vs. MKL are definitive comparisons. > >> Not everyone can use GPLed code or proprietary code. > > > > I understand that, I was merely answering to the fact that djbfft is the > > fastest. We have fftpack for people who do not care so much about speed, > > no ? IOW, I understand there are people who care about speed, people who > > care about open source, and people who care about not depending on both > > GPL and proprietary code. We support all this, and can still do it > > without depending on djbfft. But do we need to satisfy all the > > combinations of the above ? > > I'd drop FFTW and MKL support first before djbfft because they are not > compatible with the scipy license. I don't see how this addresses David's argument. While being the "fastest BSD-compatible FFT for power of 2 problem sizes" achieves a certain Pareto optimality, wouldn't it be more productive to provide better support for actively maintained libraries that are faster and more general? How large is the djbfft + SciPy user base? -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From robert.kern at gmail.com Mon May 12 23:22:18 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 12 May 2008 22:22:18 -0500 Subject: [SciPy-dev] Dropping djbfft ? In-Reply-To: References: <48285B51.4070903@ar.media.kyoto-u.ac.jp> <49877.88.90.135.57.1210619290.squirrel@cens.ioc.ee> <4828EE41.8040603@ar.media.kyoto-u.ac.jp> <3d375d730805121856s9578280k59df43c2d92f3b2a@mail.gmail.com> <4828F481.40400@ar.media.kyoto-u.ac.jp> <3d375d730805121944g1b353c6sae8045b338539dd0@mail.gmail.com> Message-ID: <3d375d730805122022n2af8e219ldc08654cccdea0fc@mail.gmail.com> On Mon, May 12, 2008 at 10:11 PM, Nathan Bell wrote: > On Mon, May 12, 2008 at 9:44 PM, Robert Kern wrote: > > I'd drop FFTW and MKL support first before djbfft because they are not > > compatible with the scipy license. > > I don't see how this addresses David's argument. Sure it does. He wants to reduce the number of combinations of optional libraries to support. Dropping any one of them reduces that number. > While being the "fastest BSD-compatible FFT for power of 2 problem > sizes" achieves a certain Pareto optimality, wouldn't it be more > productive to provide better support for actively maintained libraries > that are faster and more general? Not if their licenses conflict with scipy's, no. I think it's a waste of time to support optional backends instead of scipy itself. 
These optional backends will always be second-class citizens since official distributions won't (or shouldn't) include them. As with UMFPACK, I think that optional components, particularly incompatibly-licensed components, lead to more trouble than they are worth. I'd drop djbfft, too, if we can move to a system where external packages can register their implementations instead of having to build everything in. Or if we could absorb djbfft like we did FFTPACK so it is not an optional component. > How large is the djbfft + SciPy user base? I don't think you'll ever get a confident answer to that question, for any combination of "foo" + scipy. Speculating will get us nowhere. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at ar.media.kyoto-u.ac.jp Mon May 12 23:10:42 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 13 May 2008 12:10:42 +0900 Subject: [SciPy-dev] Dropping djbfft ? In-Reply-To: References: <48285B51.4070903@ar.media.kyoto-u.ac.jp> <49877.88.90.135.57.1210619290.squirrel@cens.ioc.ee> <4828EE41.8040603@ar.media.kyoto-u.ac.jp> <3d375d730805121856s9578280k59df43c2d92f3b2a@mail.gmail.com> <4828F481.40400@ar.media.kyoto-u.ac.jp> <3d375d730805121944g1b353c6sae8045b338539dd0@mail.gmail.com> Message-ID: <482906B2.7010206@ar.media.kyoto-u.ac.jp> Nathan Bell wrote: > > I don't see how this addresses David's argument. > > I understand that Robert's position is if supporting too many backends is a burden, just drop mkl and fftw(2/3). Supporting fftpack and djbfft only would certainly be much easier. cheers, David From eric at enthought.com Tue May 13 00:22:37 2008 From: eric at enthought.com (eric jones) Date: Mon, 12 May 2008 23:22:37 -0500 Subject: [SciPy-dev] Dropping djbfft ? In-Reply-To: <4828E9DD.6000401@ar.media.kyoto-u.ac.jp> References: <48285B51.4070903@ar.media.kyoto-u.ac.jp> <83B4CDB9-2BEB-48FF-9B42-7996350ED229@enthought.com> <4828E9DD.6000401@ar.media.kyoto-u.ac.jp> Message-ID: <50C4D634-AB06-4D4C-B97D-94D7E34D8C86@enthought.com> On May 12, 2008, at 8:07 PM, David Cournapeau wrote: > eric jones wrote: >> Hey David, >> >> When we put the fft library together back in 2001-2002, there >> wasn't a >> good alternative for a fast fft package that wasn't GPLed. fftw was >> reasonably fast and fully functioning, but GPLed. fftpack had the >> functionality, but not the speed. djbfft (at the time) was >> noticeably >> faster than either of these for its important subset of functionality >> (radix 2), but wasn't full featured. The combination of djbfft and >> fftpack gave us a BSD compatible and fast library for SciPy. This >> was >> the combination that we used to build the binaries that were >> available >> from the scipy.org website. >> >> Having been out of the building SciPy game for a while, my question >> would be, what fft libraries are used now to build the binaries for >> scipy.org? Are they using fftpack only? > Hi Eric, > > My understanding is that, at least for the windows binaries, only > fftpack is used. > Ok. That is likely not optimal, but your other comments about speeding up the use of fftpack algorithms may ameliorate this issue while simplifying build. > I would guess that on linux, people use fftw3, since it > is packaged by most distributions. How do 3rd party Linux version packagers ( Redhat, Ubuntu, Suse, etc) package SciPy? Do they link against fftw or fftpack? 
> > > The problem is not a build issue per se. Because djbfft only > supports 2^n, single dimension, we need to use another backend at the > same time. Today, we support 4 backends (+ djbfft), which means I have > to test 8 combinations (not 9 because when mkl is used, djbfft is not > used at all). This, combined with the fact that every fft backend is > different, makes moving forward more painful than I wished. One option would be to just use djbfft with fftpack. This was the idea when it was added. > > > I think the speed issue is a bit moot, in the sense that right now, > we do not use the backends very efficiently anyway. I started > working on > fftpack to get more speed (in place and out of place, float support, > fftw plan control), but instead of improving the code, I am spending > most of my time checking that I am not breaking any possible > environment > combination. If it can be removed and we can keep the speed, that seems like a great win. If we lose the speed, I'd prefer just limiting djbfft to use with fftpack and disallowing the other combinations. eric > > > cheers, > > David > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From peridot.faceted at gmail.com Tue May 13 00:52:47 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Tue, 13 May 2008 00:52:47 -0400 Subject: [SciPy-dev] Dropping djbfft ? In-Reply-To: <48290356.6050104@ar.media.kyoto-u.ac.jp> References: <48285B51.4070903@ar.media.kyoto-u.ac.jp> <49877.88.90.135.57.1210619290.squirrel@cens.ioc.ee> <4828EE41.8040603@ar.media.kyoto-u.ac.jp> <3d375d730805121856s9578280k59df43c2d92f3b2a@mail.gmail.com> <4828F481.40400@ar.media.kyoto-u.ac.jp> <3d375d730805121944g1b353c6sae8045b338539dd0@mail.gmail.com> <48290356.6050104@ar.media.kyoto-u.ac.jp> Message-ID: 2008/5/12 David Cournapeau : > Robert Kern wrote: >> If we moved the dispatching up to Python and just let external >> packages register themselves, we can probably save ourselves a >> substantial number of headaches. By separating the backend >> implementations, each can actually be unit-tested instead of >> integration-tested, and no one has to face a combinatorial explosion. > Yes, that's exactly what I wanted to do (fftpack by default, always, and > dynamically changing the backend if wanted). I am trying to define > fftpack as a C header, and each backend as an implementation of this > API. But since no backend implements the same things (fftpack aside), > that's not easy. A lot of the complexities comes from this: for example, > real to real fft is only implemented in fftpack right now; complex to > complex and multi dimensional are available in fftw3 and mkl, complex to > real available in fftw3. This makes refactoring fftpack as an API + > different backends implementing this API without breaking anything tedious. > > The other approach could be to drop all backends but fftpack, and > force a new backend to implement the whole API (I would be willing to > implement a full backend for fftw3 because it is open source, as well as > djbfft, but no other). How about a third approach (which I think may be what Robert Kern was suggesting)? Simply SWIGify (or cythonify or f2pyify) each library, so that they each provide their own API to python. Then write the detection/wrapper code in python, where it's relatively easy to muck about with dispatching and API finagling. Would there be a big performance hit for operations that do many small FFTs?
Is there much code that calls the FFTs at a C level? Anne From david at ar.media.kyoto-u.ac.jp Tue May 13 01:06:12 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 13 May 2008 14:06:12 +0900 Subject: [SciPy-dev] Dropping djbfft ? In-Reply-To: References: <48285B51.4070903@ar.media.kyoto-u.ac.jp> <49877.88.90.135.57.1210619290.squirrel@cens.ioc.ee> <4828EE41.8040603@ar.media.kyoto-u.ac.jp> <3d375d730805121856s9578280k59df43c2d92f3b2a@mail.gmail.com> <4828F481.40400@ar.media.kyoto-u.ac.jp> <3d375d730805121944g1b353c6sae8045b338539dd0@mail.gmail.com> <48290356.6050104@ar.media.kyoto-u.ac.jp> Message-ID: <482921C4.6090100@ar.media.kyoto-u.ac.jp> Anne Archibald wrote: > > How about a third approach (which I think may be what Robert Kern was > suggesting)? Simply SWIGify (or cythonify or f2pyify) each library, so > that they each provide their own API to python. Then write the > detection/wrapper code in python, where it's relatively easy to muck > about with dispatching and API finagling. > Yes, in my mind, that's the same thing: the complicated part is not the wrapping (it is already done through f2py anyway), but the "libification", that is making sure that each backend is independent. Doing it from scratch is easy, doing it gradually while keeping compatibility is more work. But well, since dropping djbfft was not an option, I went forward, and I am almost done having one backend = one library in the refactor_fft branch. I still need to fight with distutils, but I know it well enough to win this time :) cheers, David From matthieu.brucher at gmail.com Tue May 13 01:52:06 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 13 May 2008 07:52:06 +0200 Subject: [SciPy-dev] Dropping djbfft ? In-Reply-To: <482906B2.7010206@ar.media.kyoto-u.ac.jp> References: <48285B51.4070903@ar.media.kyoto-u.ac.jp> <49877.88.90.135.57.1210619290.squirrel@cens.ioc.ee> <4828EE41.8040603@ar.media.kyoto-u.ac.jp> <3d375d730805121856s9578280k59df43c2d92f3b2a@mail.gmail.com> <4828F481.40400@ar.media.kyoto-u.ac.jp> <3d375d730805121944g1b353c6sae8045b338539dd0@mail.gmail.com> <482906B2.7010206@ar.media.kyoto-u.ac.jp> Message-ID: 2008/5/13 David Cournapeau : > Nathan Bell wrote: > > > > I don't see how this addresses David's argument. > > > > > > I understand that Robert's position is if supporting too many backends > is a burden, just drop mkl and fftw(2/3). Supporting fftpack and djbfft > only would certainly be much easier. Well, for people that want Matlab-fast FFT (and those are the people you want to satisfy with the plug-in system with numpy), dropping the fastest libraries is not the best way to get them. Just my two cents. Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Tue May 13 01:52:55 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 13 May 2008 00:52:55 -0500 Subject: [SciPy-dev] Dropping djbfft ?
In-Reply-To: <482921C4.6090100@ar.media.kyoto-u.ac.jp> References: <48285B51.4070903@ar.media.kyoto-u.ac.jp> <49877.88.90.135.57.1210619290.squirrel@cens.ioc.ee> <4828EE41.8040603@ar.media.kyoto-u.ac.jp> <3d375d730805121856s9578280k59df43c2d92f3b2a@mail.gmail.com> <4828F481.40400@ar.media.kyoto-u.ac.jp> <3d375d730805121944g1b353c6sae8045b338539dd0@mail.gmail.com> <48290356.6050104@ar.media.kyoto-u.ac.jp> <482921C4.6090100@ar.media.kyoto-u.ac.jp> Message-ID: <3d375d730805122252g3eb6903ay9f8c130f45d4db46@mail.gmail.com> On Tue, May 13, 2008 at 12:06 AM, David Cournapeau wrote: > Anne Archibald wrote: > > > > How about a third approach (which I think may be what Robert Kern was > > suggesting)? Simply SWIGify (or cythonify or f2pyify) each library, so > > that they each provide their own API to python. Then write the > > detection/wrapper code in python, where it's relatively easy to muck > > about with dispatching and API finagling. > > Yes, in my mind, that's the same thing: the complicated part is not the > wrapping (it is already done through f2py anway), but the > "libification", that is making sure that each backend is independant. > Doing it from scratch is easy, doing it gradually while keeping > compatibility is more work. > > But well, since dropping djbfft was not an option, I went forward, and I > am almost done having one backend = one library in the refactor_fft > branch. I still need to fight with distutils, but I know it well enough > to win this time :) I think we're still talking past each other. What I was suggesting is that FFTW(3), MKL, etc. are moved to separate *packages* outside of scipy. Each would expose Python-level fft(), rfft(), ... whatever each library implements. Each package would have a registration function which would tell scipy.fftpack that these are optimized implementations that scipy.fftpack.fft(), etc. could call. scipy.fftpack.fft() and friends would be pure Python and check to see if an optimized version was registered (and possibly if the inputs are a power of 2 if the optimized version requires it). If not, it falls back to the FFTPACK implementation which comes with scipy.fftpack. If it were feasible to build the djbfft library in our build system, it would be nice if we could simply absorb the source into ours and make it a non-optional backend. Otherwise, it also gets kicked out into a scikit like the others. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at ar.media.kyoto-u.ac.jp Tue May 13 01:52:59 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 13 May 2008 14:52:59 +0900 Subject: [SciPy-dev] Dropping djbfft ? In-Reply-To: <3d375d730805122252g3eb6903ay9f8c130f45d4db46@mail.gmail.com> References: <48285B51.4070903@ar.media.kyoto-u.ac.jp> <49877.88.90.135.57.1210619290.squirrel@cens.ioc.ee> <4828EE41.8040603@ar.media.kyoto-u.ac.jp> <3d375d730805121856s9578280k59df43c2d92f3b2a@mail.gmail.com> <4828F481.40400@ar.media.kyoto-u.ac.jp> <3d375d730805121944g1b353c6sae8045b338539dd0@mail.gmail.com> <48290356.6050104@ar.media.kyoto-u.ac.jp> <482921C4.6090100@ar.media.kyoto-u.ac.jp> <3d375d730805122252g3eb6903ay9f8c130f45d4db46@mail.gmail.com> Message-ID: <48292CBB.10800@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > > I think we're still talking past each other. What I was suggesting is > that FFTW(3), MKL, etc. are moved to separate *packages* outside of > scipy. 
Yes, I understand that, but if we want to provide non-broken packages for mkl and co, we need to separate the code in scipy.fftpack. Or did you mean just put the current code in scikits, even if not buildable, and people who want them just have to upgrade the code ? For djbfft, the problem is that its performance is dependent on the compiler options. IOW, building it should be easy, but building it such that it is faster than fftpack maybe not so. cheers, David From david at ar.media.kyoto-u.ac.jp Tue May 13 04:57:15 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 13 May 2008 17:57:15 +0900 Subject: [SciPy-dev] Dropping djbfft ? In-Reply-To: <48292CBB.10800@ar.media.kyoto-u.ac.jp> References: <48285B51.4070903@ar.media.kyoto-u.ac.jp> <49877.88.90.135.57.1210619290.squirrel@cens.ioc.ee> <4828EE41.8040603@ar.media.kyoto-u.ac.jp> <3d375d730805121856s9578280k59df43c2d92f3b2a@mail.gmail.com> <4828F481.40400@ar.media.kyoto-u.ac.jp> <3d375d730805121944g1b353c6sae8045b338539dd0@mail.gmail.com> <48290356.6050104@ar.media.kyoto-u.ac.jp> <482921C4.6090100@ar.media.kyoto-u.ac.jp> <3d375d730805122252g3eb6903ay9f8c130f45d4db46@mail.gmail.com> <48292CBB.10800@ar.media.kyoto-u.ac.jp> Message-ID: <482957EB.90908@ar.media.kyoto-u.ac.jp> David Cournapeau wrote: > Robert Kern wrote: >> I think we're still talking past each other. What I was suggesting is >> that FFTW(3), MKL, etc. are moved to separate *packages* outside of >> scipy. Robert, if you have a few minutes, would you mind taking a look at the refactor_fft branch ? Basically, I tried to split each backend as much as possible (each one is a library, instead of the whole extensions being one big source file including other source files), so that we can remove them and put them into scikits without breaking them totally. As it is, the branch is 100 % backward compatible with the trunk (I tried to test all possible combinations, but I may have missed some). So if you think the code is OK, I could then work on the registration thing, and put mkl, fftw and (djbfft ?) in scikits. cheers, David From david at ar.media.kyoto-u.ac.jp Tue May 13 05:13:58 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 13 May 2008 18:13:58 +0900 Subject: [SciPy-dev] Dropping djbfft ? In-Reply-To: <50C4D634-AB06-4D4C-B97D-94D7E34D8C86@enthought.com> References: <48285B51.4070903@ar.media.kyoto-u.ac.jp> <83B4CDB9-2BEB-48FF-9B42-7996350ED229@enthought.com> <4828E9DD.6000401@ar.media.kyoto-u.ac.jp> <50C4D634-AB06-4D4C-B97D-94D7E34D8C86@enthought.com> Message-ID: <48295BD6.5020102@ar.media.kyoto-u.ac.jp> eric jones wrote: > > How do 3rd party Linux version packagers ( Redhat, Ubuntu, Suse, etc) > package SciPy? Do they link against fftw or fftpack? fftpack is always used (for real to real fft, for example), and integrated in scipy. Ubuntu and Debian link against fftw2. I don't know about rpm-based distributions. The code in the refactor_fft branch is now ready to split fftpack into the parts which will stay in scipy (fftpack, djbfft ?) and the others, which will go into a scikit. I will then work on the registration system suggested by Robert, which should not be difficult to do.
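To make the idea concrete, a first cut at such a registration layer could look roughly like the sketch below. None of these names exist in scipy today -- this is only an illustration of the dispatching Robert described, with numpy.fft standing in for the builtin fftpack wrappers and an optional power-of-two restriction for backends like djbfft:

import numpy as np

# Hypothetical registry: maps a function name ("fft", "rfft", ...) to an
# optimized implementation plus a flag saying whether it only handles
# power-of-two sizes.
_backends = {}

def register_backend(name, func, power_of_two_only=False):
    # Called by an external package (e.g. a scikit wrapping fftw or mkl)
    # to advertise an optimized implementation of `name`.
    _backends[name] = (func, power_of_two_only)

def _default_fft(x, n=None):
    # Stand-in for the fftpack implementation that always ships with scipy.
    return np.fft.fft(x, n)

def fft(x, n=None):
    # Pure-Python front end: use a registered backend when it applies,
    # otherwise fall back to the default implementation.
    size = len(x) if n is None else n
    entry = _backends.get("fft")
    if entry is not None:
        func, pow2_only = entry
        if not pow2_only or (size & (size - 1)) == 0:
            return func(x, n)
    return _default_fft(x, n)

A backend package would then only need to call register_backend("fft", its_own_fft) at import time, with no build-time coupling to scipy.fftpack.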
cheers, David From david at ar.media.kyoto-u.ac.jp Fri May 16 02:26:53 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 16 May 2008 15:26:53 +0900 Subject: [SciPy-dev] fftpack: building all backends and setting them at runtime (was dropping djbfft) Message-ID: <482D292D.9040301@ar.media.kyoto-u.ac.jp> Hi, After some more work, I have modified scipy.fftpack such that backends are truly independent code-wise, and can be selected at runtime. I hope it addresses all the remarks and comments made in the previous thread: http://projects.scipy.org/scipy/scipy/browser/branches/refactor_fft/ Concretely: - all fftw/fftw3/djbfft/mkl code has been put in scipy/fftpack/backends directory. - fftpack is always built and used by default for all functions - for each backend, if the corresponding libraries are available, the module is built (so if you have mkl, fftw, fftw3 and djbfft, all backends will always be built) - the fftpack module now uses some magic to set up functions from one backend or the other (scipy/fftpack/common.py). Basically, the functions are defined at runtime depending on which backend is selected. If the function is not available, the default backend is used as a fallback. TODO: - I have not implemented any registration method, because I was not sure how to do it (config file, monkey patching, import magic ?). - The convolution module is not done yet. - I have not implemented the djbfft backend yet (all the other ones should work), because it needs a bit more logic, but nothing serious. - All module .pyf files are almost duplication, but since we want to have them independent, I am not sure how to do better. It should be possible to put all the code in backends out of scipy once the registration system is in place. cheers, David From pearu at cens.ioc.ee Fri May 16 03:04:26 2008 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Fri, 16 May 2008 09:04:26 +0200 Subject: [SciPy-dev] fftpack: building all backends and setting them at runtime (was dropping djbfft) In-Reply-To: <482D292D.9040301@ar.media.kyoto-u.ac.jp> References: <482D292D.9040301@ar.media.kyoto-u.ac.jp> Message-ID: <482D31FA.809@cens.ioc.ee> David Cournapeau wrote: > Hi, > > After some more work, I have modified scipy.fftpack such that backends > are truly independent code-wise, and can be selected at runtime. I hope > it addresses all the remarks and comments made in the previous thread: > > http://projects.scipy.org/scipy/scipy/browser/branches/refactor_fft/ > > Concretely: > - all fftw/fftw3/djbfft/mkl code has been put in > scipy/fftpack/backends directory. looks good to me. > - All module .pyf files are almost duplication, but since we want to > have them independent, I am not sure how to do better. I agree on the independence. On the other hand, the signatures of wrapper functions should be identical for all backends and I think it can be ensured at build time (and so saves some unittests). The content of signature files that can be different for different backends is the callprotoargument and callstatement sections in .pyf files. Other parts of the signature files should be identical (except the module name). So, to minimize code duplication at the expense of introducing two auxiliary files, one can organize the .pyf files as follows: !
File backends/fftw/fftw.pyf: python module _fftw include ../common/fft_part0.pyf callprotoargument complex_double*,int,int*,int,int,int callstatement {& int i,sz=1,xsz=size(x); & for (i=0;i References: <482D292D.9040301@ar.media.kyoto-u.ac.jp> <482D31FA.809@cens.ioc.ee> Message-ID: <482D32FC.4090601@ar.media.kyoto-u.ac.jp> Pearu Peterson wrote: > I agree on the independence. On the other hand, the signatures of > wrapper functions should be identical for all backends Yes, that's my main concern, specially if the code is put in scikits and get a life on its own, we should make sure the C code call convention is the same (the other solution would be for each backend to provide a fatter wrapper, with fft implementation at the python level, but then there is python code duplication). > The content of signature files that can be different for different > backends, is the callprotoargument and callstatement sections in .pyf files. > Ok, bear with me here for being slow, but I don't know anything about f2py :) Do the callprotoargument and callstatement apply to all functions from the included functions ? > Note that the wrapper function names do not need to contain the name > of a backend. > At the C level, the function should certainly have different names, and I didn't find how to wrap a C function foo_bar as a foo function in the f2py generated module. I didn't look for long, though. cheers, David From matthew.brett at gmail.com Fri May 16 05:35:25 2008 From: matthew.brett at gmail.com (Matthew Brett) Date: Fri, 16 May 2008 10:35:25 +0100 Subject: [SciPy-dev] [IT] Weekend outage complete In-Reply-To: <4364B874-7A19-4F77-88D1-F8855058CF1E@enthought.com> References: <4364B874-7A19-4F77-88D1-F8855058CF1E@enthought.com> Message-ID: <1e2af89e0805160235k239d0804h2cc8906521e1a0ac@mail.gmail.com> Hi, I hope you're the right person to ask about this - sorry if not. I have just noticed that our (neuroimaging.scipy.org) wiki link no longer works: http://projects.scipy.org/neuroimaging/ni/wiki gives a 502 proxy error: Proxy Error The proxy server received an invalid response from an upstream server. The proxy server could not handle the request GET /neuroimaging/ni/wiki. Reason: DNS lookup failure for: neuroimaging.scipy.org I don't think we've changed any setup - is it possible there is an error at the server end? Thanks a lot, Matthew From pwang at enthought.com Fri May 16 08:04:37 2008 From: pwang at enthought.com (Peter Wang) Date: Fri, 16 May 2008 08:04:37 -0400 Subject: [SciPy-dev] [IT] Weekend outage complete In-Reply-To: <1e2af89e0805160235k239d0804h2cc8906521e1a0ac@mail.gmail.com> References: <4364B874-7A19-4F77-88D1-F8855058CF1E@enthought.com> <1e2af89e0805160235k239d0804h2cc8906521e1a0ac@mail.gmail.com> Message-ID: <2A7512CA-2EF1-4631-BE64-2ADBFAE931F9@enthought.com> On May 16, 2008, at 5:35 AM, Matthew Brett wrote: > Hi, > I hope you're the right person to ask about this - sorry if not. > I have just noticed that our (neuroimaging.scipy.org) wiki link no > longer works: > http://projects.scipy.org/neuroimaging/ni/wiki > gives a 502 proxy error: > Proxy Error > The proxy server received an invalid response from an upstream server. > The proxy server could not handle the request GET /neuroimaging/ni/ > wiki. > Reason: DNS lookup failure for: neuroimaging.scipy.org Hi Matthew, Thanks for reporting this. Indeed, this was affecting some other subdomains as well, but I have just now fixed it. 
-Peter From pearu at cens.ioc.ee Fri May 16 09:45:10 2008 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Fri, 16 May 2008 16:45:10 +0300 (EEST) Subject: [SciPy-dev] fftpack: building all backends and setting them at runtime (was dropping djbfft) In-Reply-To: <482D32FC.4090601@ar.media.kyoto-u.ac.jp> References: <482D292D.9040301@ar.media.kyoto-u.ac.jp> <482D31FA.809@cens.ioc.ee> <482D32FC.4090601@ar.media.kyoto-u.ac.jp> Message-ID: <51588.88.90.135.57.1210945510.squirrel@cens.ioc.ee> On Fri, May 16, 2008 10:08 am, David Cournapeau wrote: > Pearu Peterson wrote: >> I agree on the independence. On the other hand, the signatures of >> wrapper functions should be identical for all backends > > Yes, that's my main concern, specially if the code is put in scikits > and get a life on its own, we should make sure the C code call > convention is the same (the other solution would be for each backend to > provide a fatter wrapper, with fft implementation at the python level, > but then there is python code duplication). (the other solution should be avoided if possible) >> The content of signature files that can be different for different >> backends is the callprotoargument and callstatement sections in .pyf >> files. >> > > Ok, bear with me here for being slow, but I don't know anything about > f2py :) Do the callprotoargument and callstatement apply to all > functions from the included functions ? No, the suggestion was to split the pyf file into 3 pieces: part 0, the call* statements, part 1. Now that I look at it, it seems that also the call* parts are identical for all backends except the module name part. This should also make the split of files unnecessary. I'll take a look at the branch and see if I can figure something easier out.. >> Note that the wrapper function names do not need to contain the name >> of a backend. >> > > At the C level, the function should certainly have different names, Why? Aren't C functions static and so there should be no conflicts provided that all backends have their own extension modules. Maybe I am overlooking something.. > and > I didn't find how to wrap a C function foo_bar as a foo function in the > f2py generated module. I didn't look for long, though. That's easy, one should use `fortranname` statement in pyd files. Search fortranname in f2py users manual. Pearu From david at ar.media.kyoto-u.ac.jp Fri May 16 09:45:24 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 16 May 2008 22:45:24 +0900 Subject: [SciPy-dev] fftpack: building all backends and setting them at runtime (was dropping djbfft) In-Reply-To: <51588.88.90.135.57.1210945510.squirrel@cens.ioc.ee> References: <482D292D.9040301@ar.media.kyoto-u.ac.jp> <482D31FA.809@cens.ioc.ee> <482D32FC.4090601@ar.media.kyoto-u.ac.jp> <51588.88.90.135.57.1210945510.squirrel@cens.ioc.ee> Message-ID: <482D8FF4.6010100@ar.media.kyoto-u.ac.jp> Pearu Peterson wrote: > > No, the suggestion was to split the pyf file into 3 pieces: > part 0, the call* statements, part 1. > Now that I look at it, it seems that also the call* parts are > identical for all backends except the module name part. This > should also make the split of files unnecessary. > I'll take a look at the branch and see if I can figure something > easier out.. Great, thanks for looking at this. > > Why? Aren't C functions static and so there should be no conflicts > provided that all backends have their own extension modules. > Maybe I am overlooking something..
The functions exposed by f2py cannot be static, and contrary to what was the case before, it is possible to have several backends loaded at the same time at the python <-> C layer (this was not a problem before since the several backends were only used inside C, and as such, several backends were never present in the python <-> C layer). Actually, it is one of the nice benefit of the current approach "for free": you don't have to rebuild fftpack to use a different backend. Typically, I can now test all backends just by changing one line in the fftpack module, and once we decide on the best approach for registration, we will be able to do it purely at runtime in python. > > That's easy, one should use `fortranname` statement in pyd files. > Search fortranname in f2py users manual. Thanks, I will modify this part, then, instead of doing the renaming in python. cheers, David From david at ar.media.kyoto-u.ac.jp Fri May 16 09:53:19 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 16 May 2008 22:53:19 +0900 Subject: [SciPy-dev] fftpack: building all backends and setting them at runtime (was dropping djbfft) In-Reply-To: <482D8FF4.6010100@ar.media.kyoto-u.ac.jp> References: <482D292D.9040301@ar.media.kyoto-u.ac.jp> <482D31FA.809@cens.ioc.ee> <482D32FC.4090601@ar.media.kyoto-u.ac.jp> <51588.88.90.135.57.1210945510.squirrel@cens.ioc.ee> <482D8FF4.6010100@ar.media.kyoto-u.ac.jp> Message-ID: <482D91CF.7060601@ar.media.kyoto-u.ac.jp> David Cournapeau wrote: > > Thanks, I will modify this part, then, instead of doing the renaming in > python. > While you're here, Pearu, is fftpack the same as dfftpack API wise ? While refactoring, I noticed nobody wrapped the float version even if the fortran code is there, and that would really nice to have (would give a nice speed boost "for free" without relying on GPL or proprietary code). cheers, David From pearu at cens.ioc.ee Fri May 16 10:13:15 2008 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Fri, 16 May 2008 17:13:15 +0300 (EEST) Subject: [SciPy-dev] fftpack: building all backends and setting them at runtime (was dropping djbfft) In-Reply-To: <482D91CF.7060601@ar.media.kyoto-u.ac.jp> References: <482D292D.9040301@ar.media.kyoto-u.ac.jp> <482D31FA.809@cens.ioc.ee> <482D32FC.4090601@ar.media.kyoto-u.ac.jp> <51588.88.90.135.57.1210945510.squirrel@cens.ioc.ee> <482D8FF4.6010100@ar.media.kyoto-u.ac.jp> <482D91CF.7060601@ar.media.kyoto-u.ac.jp> Message-ID: <55183.88.90.135.57.1210947195.squirrel@cens.ioc.ee> On Fri, May 16, 2008 4:53 pm, David Cournapeau wrote: > David Cournapeau wrote: >> >> Thanks, I will modify this part, then, instead of doing the renaming in >> python. >> > > While you're here, Pearu, is fftpack the same as dfftpack API wise ? afaik, yes. See the DFFTPACK/doc.double file for details. > While refactoring, I noticed nobody wrapped the float version even if > the fortran code is there, and that would really nice to have (would > give a nice speed boost "for free" without relying on GPL or proprietary > code). At the time only double version was wrapped because it was a start (and I needed only the double version for my app). 
Pearu From pearu at cens.ioc.ee Fri May 16 10:44:46 2008 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Fri, 16 May 2008 17:44:46 +0300 (EEST) Subject: [SciPy-dev] fftpack: building all backends and setting them at runtime (was dropping djbfft) In-Reply-To: <51588.88.90.135.57.1210945510.squirrel@cens.ioc.ee> References: <482D292D.9040301@ar.media.kyoto-u.ac.jp> <482D31FA.809@cens.ioc.ee> <482D32FC.4090601@ar.media.kyoto-u.ac.jp> <51588.88.90.135.57.1210945510.squirrel@cens.ioc.ee> Message-ID: <60151.88.90.135.57.1210949086.squirrel@cens.ioc.ee> On Fri, May 16, 2008 4:45 pm, Pearu Peterson wrote: > I'll take a look at the branch and see if I can figure something > easier out.. Ok, here follows a simpler solution that uses numpy.distutils pyf.src templating tools. I'll give an example using fftw3 backend only because exactly the same technique applies to other backends. Below follows the corresponding diff and it works. If you find it a reasonable approach, I can commit the diff to the refactor_fft branch. Pearu pearu at utlaan-laptop:~/svn/refactor_fft/scipy/fftpack/backends$ svn diff Index: fftw3/setup.py =================================================================== --- fftw3/setup.py (revision 4373) +++ fftw3/setup.py (working copy) @@ -13,7 +13,10 @@ sources = src, include_dirs = ["../common", info['include_dirs']]) - config.add_extension("_fftw3", sources = ["fftw3.pyf"], extra_info = info, libraries = ["fftw3_backend"]) + config.add_extension("_fftw3", sources = ["fftw3.pyf.src"], + depends = ['../fft_template.pyf.src'], + extra_info = info, + libraries = ["fftw3_backend"]) def configuration(parent_package='',top_path=None): from numpy.distutils.misc_util import Configuration Index: fftw3/fftw3.pyf.src =================================================================== --- fftw3/fftw3.pyf.src (revision 0) +++ fftw3/fftw3.pyf.src (revision 0) @@ -0,0 +1,9 @@ +! -*- f90 -*- + +! + +python module _fftw3 + interface +include '../fft_template.pyf.src' + end interface +end python module _fftw3 Index: fft_template.pyf.src =================================================================== --- fft_template.pyf.src (revision 0) +++ fft_template.pyf.src (revision 0) @@ -0,0 +1,72 @@ +!%f90 -*- f90 -*- +! Author: Pearu Peterson, August 2002 +! This file is included by the files /.pyf.src + + subroutine zfft(x,n,direction,howmany,normalize) + ! y = fft(x[,n,direction,normalize,overwrite_x]) + intent(c) zfft + complex*16 intent(c,in,out,copy,out=y) :: x(*) + integer optional,depend(x),intent(c,in) :: n=size(x) + check(n>0) n + integer depend(x,n),intent(c,hide) :: howmany = size(x)/n + check(n*howmany==size(x)) howmany + integer optional,intent(c,in) :: direction = 1 + integer optional,intent(c,in),depend(direction) & + :: normalize = (direction<0) + end subroutine zfft + + subroutine drfft(x,n,direction,howmany,normalize) + ! y = drfft(x[,n,direction,normalize,overwrite_x]) + intent(c) drfft + real*8 intent(c,in,out,copy,out=y) :: x(*) + integer optional,depend(x),intent(c,in) :: n=size(x) + check(n>0&&n<=size(x)) n + integer depend(x,n),intent(c,hide) :: howmany = size(x)/n + check(n*howmany==size(x)) howmany + integer optional,intent(c,in) :: direction = 1 + integer optional,intent(c,in),depend(direction) & + :: normalize = (direction<0) + end subroutine drfft + +! subroutine zrfft(x,n,direction,howmany,normalize) +! ! y = zrfft(x[,n,direction,normalize,overwrite_x]) +! intent(c) zrfft +! complex*16 intent(c,in,out,overwrite,out=y) :: x(*) +! 
integer optional,depend(x),intent(c,in) :: n=size(x) +! check(n>0&&n<=size(x)) n +! integer depend(x,n),intent(c,hide) :: howmany = size(x)/n +! check(n*howmany==size(x)) howmany +! integer optional,intent(c,in) :: direction = 1 +! integer optional,intent(c,in),depend(direction) & +! :: normalize = (direction<0) +! end subroutine zrfft + + subroutine zfftnd(x,r,s,direction,howmany,normalize,j) + ! y = zfftnd(x[,s,direction,normalize,overwrite_x]) + intent(c) zfftnd + complex*16 intent(c,in,out,copy,out=y) :: x(*) + integer intent(c,hide),depend(x) :: r=old_rank(x) + integer intent(c,hide) :: j=0 + integer optional,depend(r),dimension(r),intent(c,in) & + :: s=old_shape(x,j++) + check(r>=len(s)) s + integer intent(c,hide) :: howmany = 1 + integer optional,intent(c,in) :: direction = 1 + integer optional,intent(c,in),depend(direction) :: & + normalize = (direction<0) + callprotoargument complex_double*,int,int*,int,int,int + callstatement {& + int i,sz=1,xsz=size(x); & + for (i=0;i_error, & + "inconsistency in x.shape and s argument"); & + } & + } + end subroutine zfftnd + +! See http://cens.ioc.ee/projects/f2py2e/ From jh at physics.ucf.edu Sat May 17 01:45:32 2008 From: jh at physics.ucf.edu (Joe Harrington) Date: Sat, 17 May 2008 01:45:32 -0400 Subject: [SciPy-dev] ANN: NumPy/SciPy Documentation Marathon 2008 Message-ID: NUMPY/SCIPY DOCUMENTATION MARATHON 2008 As we all know, the state of the numpy and scipy reference documentation (aka the docstrings) is best described as "incomplete". Most functions have docstrings shorter than 5 lines, whereas our competitors IDL and Matlab usually have a concise and well-written page or two per function. The (wonderful) categorized list of functions is very new and isn't included in the package yet. There isn't even a "Getting Started"-type of document you can hand a new user so they can dive right in. Documentation tools are limited to plain-text paginators, while our competition enjoys HTML-based documents with formulae, images, search capability, and cross linking. Tales of woe abound. A university class switched to Numpy and got hopelessly bogged down because students couldn't find out how to call the functions. A developer looked something up while giving a presentation and the words "Blah, Blah, Blah" stared down at the audience in response. To head off another pedagogical meltdown, the University of Central Florida has hired Stefan van der Walt full time to coordinate a community documentation effort to write reference documentation and tools. The project starts now and continues through the summer. The goals: 1. Produce complete docstrings for all numpy functions and as much of scipy as possible, 2. Produce an 8-15 page Getting Started tutorial that is not discipline-specific, 3. Write reference sections on topics in numpy, such as slicing and the use principles of the modules, 4. Complete a first edition, in both PDF and HTML, of a NumPy Reference Manual, and 5. Check everything into the sources by 1 August 2008 so that the Packaging Team can cut a release and have it available in time for Fall 2008 classes. Even Stefan could not document the hundreds of functions that need it by himself, and in any case such a large contribution requires community review. To make it easy for everyone to contribute, Pauli Virtanen and Emmanuelle Guillart have provided a wiki system for editing reference documentation. The idea was developed by Fernando Perez, Stefan, and Gael Varoquaux. 
We encourage community members to write, review, and proofread reference pages on this wiki. Stefan will check updates into the sources roughly weekly. Near the end of the project, we will put these wiki pages through a vetting process and then check them into the sources a final time for a release hopefully to occur in early August. Meanwhile, Perry Greenfield has taken the lead on on task 3, writing reference docs for things that currently don't have docstrings, such as basic concepts like slicing. We have proposed two small extensions to the current docstring format, for images (to be used sparingly) and indexing. These appear in updated versions of the doc standard, which are linked from the wiki frontpage. Please take a look and comment on these if you like. All docstrings will remain readable in plain text, but we are now generating a full reference guide in PDF and HTML (you guessed it, linked from the wiki). These are searchable formats. There are several ways you can help: 1. Write some docstrings on the wiki! Many people can do this, many more than can write code for the package itself. However, you must know numpy, the function group, and the function you are writing well. You should be familiar with the concept of a reference page and write in that concise style. We'll do tutorial docs in another project at a later date. See the instructions on the wiki for guidelines and format. 2. Review others' docstrings and leave comments on their wiki pages. 3. Proofread docstrings. Make sure they are correct, complete, and concise. Fix grammar. 4. Write examples ("doctests"). Even if you are not a top-notch English writer, you can help by producing a code snippet of a few lines that demonstrates a function. It is fine for them to go into the docstring templates before the actual text. 5. Write a new help function that optionally produces ASCII or points the user's PDF or HTML reader to the right page (either local or global). 6. If you are in a position to hire someone, such as a knowledgeable student or short-term consultant, hire them to work on the tasks above for the summer. We can provide supervision to them or guidance to you if you like. The home for this project is here: http://scipy.org/Developer_Zone/DocMarathon2008 This is not a sprint. It is a marathon, and this time we are going to finish. We hope you will join us! --jh-- and Stefan and Perry and Pauli and Emmanuelle...and you! Joe Harrington Stefan van der Walt Perry Greenfield Pauli Virtanen Emmanuelle Guillart ...and you! From david at ar.media.kyoto-u.ac.jp Sat May 17 05:07:51 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 17 May 2008 18:07:51 +0900 Subject: [SciPy-dev] fftpack: building all backends and setting them at runtime (was dropping djbfft) In-Reply-To: <60151.88.90.135.57.1210949086.squirrel@cens.ioc.ee> References: <482D292D.9040301@ar.media.kyoto-u.ac.jp> <482D31FA.809@cens.ioc.ee> <482D32FC.4090601@ar.media.kyoto-u.ac.jp> <51588.88.90.135.57.1210945510.squirrel@cens.ioc.ee> <60151.88.90.135.57.1210949086.squirrel@cens.ioc.ee> Message-ID: <482EA067.6040704@ar.media.kyoto-u.ac.jp> Pearu Peterson wrote: > > Ok, here follows a simpler solution that uses numpy.distutils pyf.src > templating tools. I'll give an example using fftw3 backend only because > exactly the same technique applies to other backends. Ok, I got it working, thanks. I have another question, though: where is the code to handle .pyf.src files handled, I don't find it in numpy.distutils (I need it for numscons build). 
cheers, David From pearu at cens.ioc.ee Sat May 17 05:42:18 2008 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Sat, 17 May 2008 12:42:18 +0300 (EEST) Subject: [SciPy-dev] fftpack: building all backends and setting them at runtime (was dropping djbfft) In-Reply-To: <482EA067.6040704@ar.media.kyoto-u.ac.jp> References: <482D292D.9040301@ar.media.kyoto-u.ac.jp> <482D31FA.809@cens.ioc.ee> <482D32FC.4090601@ar.media.kyoto-u.ac.jp> <51588.88.90.135.57.1210945510.squirrel@cens.ioc.ee> <60151.88.90.135.57.1210949086.squirrel@cens.ioc.ee> <482EA067.6040704@ar.media.kyoto-u.ac.jp> Message-ID: <55068.88.90.135.57.1211017338.squirrel@cens.ioc.ee> On Sat, May 17, 2008 12:07 pm, David Cournapeau wrote: > Pearu Peterson wrote: >> >> Ok, here follows a simpler solution that uses numpy.distutils pyf.src >> templating tools. I'll give an example using fftw3 backend only because >> exactly the same technique applies to other backends. > > Ok, I got it working, thanks. I have another question, though: where is > the code to handle .pyf.src files handled, I don't find it in > numpy.distutils (I need it for numscons build). The *.src support code is in numpy/distutils/from_template.py. Pearu From hoytak at gmail.com Sat May 17 12:48:48 2008 From: hoytak at gmail.com (Hoyt Koepke) Date: Sat, 17 May 2008 09:48:48 -0700 Subject: [SciPy-dev] asscalar and asarray in numpy Message-ID: <4db580fd0805170948q1db8ae36k3c58787b07ff669d@mail.gmail.com> Hello, Perhaps I'm misunderstanding things, I'm a bit surprised that asscalar cannot take a scalar argument, i.e. d = asscalar(1.0) doesn't work. Although the doc string specifies that it returns the scalar part of a single element array, my assumption about this method's behavior based on how asarray is used is that it should accept anything that can be turned into a scalar and do the conversion based on that, i.e. we can use it to ensure that a particular type is a scalar. The current code in the function is: def asscalar(a): """Convert an array of size 1 to its scalar equivalent. """ return a.item() I'm thinking it would be reasonable to replace this by something more robust, perhaps: def asscalar(a): """Convert the input to its scalar equivalent, if possible. """ if numpy.isscalar(a): return a elif isinstance(a, numpy.ndarray): return a.item() else: raise TypeError("Argument cannot be converted to a scalar.") This is untested, and I may be missing subtleties in how I'm doing this -- if so, let me know. I'd be happy to write a unit test if people want this change. Those worried about speed can just use the a.item() method directly. Am I misunderstanding the reasoning behind the as* functions, or is this reasonable? Thanks, --Hoyt -- +++++++++++++++++++++++++++++++++++ Hoyt Koepke UBC Department of Computer Science http://www.cs.ubc.ca/~hoytak/ hoytak at gmail.com +++++++++++++++++++++++++++++++++++ From peridot.faceted at gmail.com Sat May 17 17:08:02 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Sat, 17 May 2008 17:08:02 -0400 Subject: [SciPy-dev] [Numpy-discussion] [SciPy-user] ANN: NumPy/SciPy Documentation Marathon 2008 In-Reply-To: References: Message-ID: 2008/5/17 Joe Harrington : > Ryan writes: >> This is very good news. I will find some way to get involved. > > Great! Please dive right in, and sign up on the Developer_Zone page > so we can keep track of who's involved. > > One thing I forgot to mention in my too-wordy announcement was that > discussion of documentation is on the scipy-dev mailing list. 
We had > to pick one spot and decided that since we are going after scipy as > soon as numpy is done, we'd like to use that list rather than > numpy-discussion. We also wanted to keep it on a development list > rather than polluting the new users' discussion space. This sounds like a great idea! I have a few questions, though: * Am I supposed to be able to edit the wiki? If I go to, for example, http://sd-2116.dedibox.fr/doc/Docstrings/numpy/core/multiarray/empty and click on any of the "edit" links I am told I don't have permission to edit the page, in spite of having registered. * How and when are merges back from the wiki into the docstrings going to happen? Presumably, since any examples will become doctests, this will require substantial human intervention to fix all the broken tests (and remove instances of "shutil.deltree('/')" and the like). * Is there some special format the wiki markup needs to be in to ensure that the reST docstrings are properly formatted? * What fate is planned for the Numpy Example List and its progeny? Presumably everything in the basic Example List should be moved into docstrings (and therefore become doctests)? The categorized example list would presumably then develop links to the function documentation pages instead... Anne From matteo at naufraghi.net Sat May 17 17:00:40 2008 From: matteo at naufraghi.net (Matteo Bertini) Date: Sat, 17 May 2008 23:00:40 +0200 Subject: [SciPy-dev] FLENS Message-ID: I've recently found a Blitz++ competitor: http://flens.sourceforge.net/ I think one of the main FLENS plus is: http://flens.sourceforge.net/session1/tut8.html which can lead to speedups at almost only ... compiler effort. The question is: is Blitz++ so smart? and, if not I'd like a weave developer opinion about a possible weave.flens module (module I can try to develop :P). Ciao, Matteo Bertini From stefan at sun.ac.za Sat May 17 17:24:33 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Sat, 17 May 2008 23:24:33 +0200 Subject: [SciPy-dev] [Numpy-discussion] [SciPy-user] ANN: NumPy/SciPy Documentation Marathon 2008 In-Reply-To: References: Message-ID: <9457e7c80805171424v641f00e9t823a4f41b6a37d23@mail.gmail.com> Hi Anne 2008/5/17 Anne Archibald : > * Am I supposed to be able to edit the wiki? If I go to, for example, > http://sd-2116.dedibox.fr/doc/Docstrings/numpy/core/multiarray/empty > and click on any of the "edit" links I am told I don't have permission > to edit the page, in spite of having registered. The front page of the docwiki contains all the necessary instructions: http://sd-2116.dedibox.fr/doc I just need your username, then I'll add you to the EditorPage. > * How and when are merges back from the wiki into the docstrings going > to happen? Presumably, since any examples will become doctests, this > will require substantial human intervention to fix all the broken > tests (and remove instances of "shutil.deltree('/')" and the like). For now, I shall merge the docstrings once every couple of days (at least once a week). Each changed docstring becomes a different patch in a bzr repository, which we host in the background. I can therefore easily go through the changes, and decide which ones to accept. > * Is there some special format the wiki markup needs to be in to > ensure that the reST docstrings are properly formatted? As long as you provide valid restructuredtext between the start and end tags provided, your changes will be picked up correctly. > * What fate is planned for the Numpy Example List and its progeny? 
> Presumably everything in the basic Example List should be moved into > docstrings (and therefore become doctests)? The categorized example > list would presumably then develop links to the function documentation > pages instead... Those examples should definately end up in the docstrings (some of them have, already -- see e.g. dtype). We've introduced indexing tags in the docstrings, which should help us to generate a reference guide organised roughly according to the categories set out in your examples list (we haven't finalised the indexing categories, and could use some help on that still). Regards St?fan From wbaxter at gmail.com Sat May 17 17:31:09 2008 From: wbaxter at gmail.com (Bill Baxter) Date: Sun, 18 May 2008 06:31:09 +0900 Subject: [SciPy-dev] FLENS In-Reply-To: References: Message-ID: On Sun, May 18, 2008 at 6:00 AM, Matteo Bertini wrote: > I've recently found a Blitz++ competitor: > http://flens.sourceforge.net/ > > I think one of the main FLENS plus is: > http://flens.sourceforge.net/session1/tut8.html > > which can lead to speedups at almost only ... compiler effort. > > The question is: > > is Blitz++ so smart? That's the main advantage of FLENS over Blitz++ from what I understand. Blitz++ does not use BLAS at all. FLENS is nice. I ported it to D to use for my work. --bb From david at ar.media.kyoto-u.ac.jp Sun May 18 05:08:54 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sun, 18 May 2008 18:08:54 +0900 Subject: [SciPy-dev] FLENS In-Reply-To: References: Message-ID: <482FF226.2020607@ar.media.kyoto-u.ac.jp> Bill Baxter wrote: > > That's the main advantage of FLENS over Blitz++ from what I > understand. Blitz++ does not use BLAS at all. > The above link shows the use of expression template to avoid copies, and this is done by blitz++ as well. As you said, blitz does not use blas/lapack (and is not worked on anymore, right ?). cheers, David From cohen at slac.stanford.edu Sun May 18 06:11:05 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Sun, 18 May 2008 12:11:05 +0200 Subject: [SciPy-dev] [SciPy-user] ANN: NumPy/SciPy Documentation Marathon 2008 Message-ID: <483000B9.7000602@slac.stanford.edu> Great! I am onboard : wiki username JohannCohenTanugi - review docstrings - test examples - write some? I have not developed any core numpy/scipy functions, but I use python extensively at work. - I am not a native English speaker, but I lived a good 6 years in the U.S.A, so I can make do if needed. ;) Sunday is perfect to start diving in... See you all "there". best, Johann From david at ar.media.kyoto-u.ac.jp Sun May 18 06:17:31 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sun, 18 May 2008 19:17:31 +0900 Subject: [SciPy-dev] [Numpy-discussion] [SciPy-user] ANN: NumPy/SciPy Documentation Marathon 2008 In-Reply-To: <9457e7c80805171424v641f00e9t823a4f41b6a37d23@mail.gmail.com> References: <9457e7c80805171424v641f00e9t823a4f41b6a37d23@mail.gmail.com> Message-ID: <4830023B.5070903@ar.media.kyoto-u.ac.jp> St?fan van der Walt wrote: > > The front page of the docwiki contains all the necessary instructions: > > http://sd-2116.dedibox.fr/doc > > BIG thanks to you Stefan (and everybody else involved, I believe Gael was quite involved in this ?). Finally, a pdf and html doc for numpy which does not make my eyes bleed. I think it will be a good incentive to make better docs. 
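For anyone who wants to dive in, a reference docstring written against the draft standard might look roughly like the sketch below. The function itself is made up, and the authoritative template is the doc standard linked from the wiki front page; this only shows the general shape of the Parameters/Returns/Examples sections:

import numpy

def clip_to_range(x, lo, hi):
    """
    Limit the values of an array to the interval [lo, hi].

    Parameters
    ----------
    x : array_like
        Input data.
    lo, hi : float
        Lower and upper bounds.

    Returns
    -------
    out : ndarray
        Copy of `x` with values below `lo` set to `lo` and values
        above `hi` set to `hi`.

    Examples
    --------
    >>> import numpy
    >>> clip_to_range(numpy.array([1, 5, 10]), 2, 8)
    array([2, 5, 8])
    """
    return numpy.clip(x, lo, hi)

The Examples section doubles as a doctest, which is why the choice of import convention for the examples matters.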
cheers, David From pav at iki.fi Sun May 18 06:57:48 2008 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 18 May 2008 13:57:48 +0300 Subject: [SciPy-dev] [SciPy-user] ANN: NumPy/SciPy Documentation Marathon 2008 In-Reply-To: <483000B9.7000602@slac.stanford.edu> References: <483000B9.7000602@slac.stanford.edu> Message-ID: <1211108268.8207.0.camel@localhost.localdomain> Hi, su, 2008-05-18 kello 12:11 +0200, Johann Cohen-Tanugi kirjoitti: > Great! I am onboard : wiki username JohannCohenTanugi > - review docstrings > - test examples > - write some? I have not developed any core numpy/scipy functions, but I > use python extensively at work. > - I am not a native English speaker, but I lived a good 6 years in the > U.S.A, so I can make do if needed. ;) > Sunday is perfect to start diving in... See you all "there". Welcome aboard. I added you to the EditorGroup in the wiki, so you can edit pages now. BR, Pauli From pav at iki.fi Sun May 18 08:06:03 2008 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 18 May 2008 15:06:03 +0300 Subject: [SciPy-dev] ANN: NumPy/SciPy Documentation Marathon 2008 In-Reply-To: <483015B9.1060904@slac.stanford.edu> References: <483000B9.7000602@slac.stanford.edu> <1211108268.8207.0.camel@localhost.localdomain> <483015B9.1060904@slac.stanford.edu> Message-ID: <1211112363.8207.11.camel@localhost.localdomain> Hi, su, 2008-05-18 kello 13:40 +0200, Johann Cohen-Tanugi kirjoitti: > one quesiton, the preliminary doc refers to saving the pages locally, > then editing, and in the end uploading everything. I do not understand > how the wiki or any page thereof can be saved on a local machine. I > tried a wget but clearly the wiki system gets broken this way. You can wget pages by adding ?action=raw at the end of the url. But I think Joe's idea was that for larger rewriting, it is more convenient to copy-and-paste the text for the docstring you want to edit, edit it on your own machine until you are happy with the result, and after that put it to the wiki. > ?P.S. I logged out and in again but the page > http://sd-2116.dedibox.fr/doc/EditorGroup tells me that I am not > allowed to see this page. I do not know whether this is expected. It's expected; you should be able to edit other pages. But I just changed this to reduce confusion. -- Pauli Virtanen From pav at iki.fi Sun May 18 09:06:49 2008 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 18 May 2008 16:06:49 +0300 Subject: [SciPy-dev] ANN: NumPy/SciPy Documentation Marathon 2008 In-Reply-To: <483021DA.7060200@slac.stanford.edu> References: <483000B9.7000602@slac.stanford.edu> <1211108268.8207.0.camel@localhost.localdomain> <483015B9.1060904@slac.stanford.edu> <1211112363.8207.11.camel@localhost.localdomain> <483021DA.7060200@slac.stanford.edu> Message-ID: <1211116009.8207.29.camel@localhost.localdomain> Hi, su, 2008-05-18 kello 14:32 +0200, Johann Cohen-Tanugi kirjoitti: > thanks. I am looking at fromnumeric.py, and more precisely the examples. > When I try doctest.testfile I get a lot of errors and it seems that in > every single examples I need to add : > >>> from numpy import array > >>> from numpy.core.fromnumeric import * > > So that array is known within the example, as well as the actual > function being tested.... I find that a bit strange so I would like to > know if there is smthg I do wrong. 
Here is what I do : > [cohen at jarrett core]$ ipython -c "import > doctest;doctest.testfile('/home/cohen/data1/sources/python/numpy-svn/numpy/core/fromnumeric.py',module_relative=False)" I think the problem is that right now very few of the examples can actually be ran as doctests, because of the missing import statements. We still need to decide on a standard way to use imports in the examples, and document it more clearly on the wiki. After this is done, we need to can out how to run doctests so that the examples pass. And after this, we can start testing and fixing the examples. I think there are three reasonable choices: 1) Assume 'from numpy import *' 'import doctest, numpy; doctest.testfile(file_name, globs=numpy.__dict__)' 2) Assume 'import numpy' ?? ? 'import doctest, numpy; doctest.testfile(file_name, globs={"numpy": numpy})' 3) Assume 'import numpy as np' ? 'import doctest, numpy; doctest.testfile(file_name, globs={"np": numpy})' The default doctest choice of using the __dict__ of the module where the test is in might not be reasonable if we want to write examples instead of doctests. (And this default choice apparently doesn't work with testfile but it works with testmod?) I think St?fan suggested 2). I'd say 1) or 2). Pauli PS. Could you use "Reply all", so that the messages go also to the scipy-dev list? I think this discussion has interest also to other people. From cohen at slac.stanford.edu Sun May 18 09:40:32 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Sun, 18 May 2008 15:40:32 +0200 Subject: [SciPy-dev] ANN: NumPy/SciPy Documentation Marathon 2008 In-Reply-To: <1211116009.8207.29.camel@localhost.localdomain> References: <483000B9.7000602@slac.stanford.edu> <1211108268.8207.0.camel@localhost.localdomain> <483015B9.1060904@slac.stanford.edu> <1211112363.8207.11.camel@localhost.localdomain> <483021DA.7060200@slac.stanford.edu> <1211116009.8207.29.camel@localhost.localdomain> Message-ID: <483031D0.6070400@slac.stanford.edu> ok I will work on the 2) assumption. I do not get it to work with docmod either. best, Johann Pauli Virtanen wrote: > Hi, > > su, 2008-05-18 kello 14:32 +0200, Johann Cohen-Tanugi kirjoitti: > >> thanks. I am looking at fromnumeric.py, and more precisely the examples. >> When I try doctest.testfile I get a lot of errors and it seems that in >> every single examples I need to add : >> >>> from numpy import array >> >>> from numpy.core.fromnumeric import * >> >> So that array is known within the example, as well as the actual >> function being tested.... I find that a bit strange so I would like to >> know if there is smthg I do wrong. Here is what I do : >> [cohen at jarrett core]$ ipython -c "import >> doctest;doctest.testfile('/home/cohen/data1/sources/python/numpy-svn/numpy/core/fromnumeric.py',module_relative=False)" >> > > I think the problem is that right now very few of the examples can > actually be ran as doctests, because of the missing import statements. > > We still need to decide on a standard way to use imports in the > examples, and document it more clearly on the wiki. After this is done, > we need to can out how to run doctests so that the examples pass. And > after this, we can start testing and fixing the examples. > > I think there are three reasonable choices: > > 1) Assume 'from numpy import *' > > 'import doctest, numpy; doctest.testfile(file_name, globs=numpy.__dict__)' > > 2) Assume 'import numpy' > ?? > ? 
'import doctest, numpy; doctest.testfile(file_name, globs={"numpy": numpy})' > > 3) Assume 'import numpy as np' > > ? 'import doctest, numpy; doctest.testfile(file_name, globs={"np": numpy})' > > The default doctest choice of using the __dict__ of the module where the > test is in might not be reasonable if we want to write examples instead > of doctests. (And this default choice apparently doesn't work with > testfile but it works with testmod?) > > I think St?fan suggested 2). I'd say 1) or 2). > > Pauli > > > PS. Could you use "Reply all", so that the messages go also to the > scipy-dev list? I think this discussion has interest also to other > people. > > > From matthew.brett at gmail.com Sun May 18 11:24:44 2008 From: matthew.brett at gmail.com (Matthew Brett) Date: Sun, 18 May 2008 15:24:44 +0000 Subject: [SciPy-dev] ANN: NumPy/SciPy Documentation Marathon 2008 In-Reply-To: <1211116009.8207.29.camel@localhost.localdomain> References: <483000B9.7000602@slac.stanford.edu> <1211108268.8207.0.camel@localhost.localdomain> <483015B9.1060904@slac.stanford.edu> <1211112363.8207.11.camel@localhost.localdomain> <483021DA.7060200@slac.stanford.edu> <1211116009.8207.29.camel@localhost.localdomain> Message-ID: <1e2af89e0805180824l767483e9lf9e7388d8f983be9@mail.gmail.com> Hi, > 1) Assume 'from numpy import *' > 2) Assume 'import numpy'?? > 3) Assume 'import numpy as np' Strong vote against 1), for 3). I think we'll end up just having to retrain people out of import * later, as did wx and others. Best, Matthew From stefan at sun.ac.za Sun May 18 12:12:58 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Sun, 18 May 2008 18:12:58 +0200 Subject: [SciPy-dev] ANN: NumPy/SciPy Documentation Marathon 2008 In-Reply-To: <1e2af89e0805180824l767483e9lf9e7388d8f983be9@mail.gmail.com> References: <483000B9.7000602@slac.stanford.edu> <1211108268.8207.0.camel@localhost.localdomain> <483015B9.1060904@slac.stanford.edu> <1211112363.8207.11.camel@localhost.localdomain> <483021DA.7060200@slac.stanford.edu> <1211116009.8207.29.camel@localhost.localdomain> <1e2af89e0805180824l767483e9lf9e7388d8f983be9@mail.gmail.com> Message-ID: <9457e7c80805180912g2567ca8age4d6eeb97a2218cc@mail.gmail.com> 2008/5/18 Matthew Brett : > Hi, > >> 1) Assume 'from numpy import *' >> 2) Assume 'import numpy'?? >> 3) Assume 'import numpy as np' > > Strong vote against 1), for 3). I think we'll end up just having to > retrain people out of import * later, as did wx and others. I'm also not in favour of 1; we don't want to encourage people to do "from numpy import *" -- ever. The argument against 3 is that many (some?) people use different shorthand forms for numpy. If we simply rely on 'numpy' being called 'numpy', no one gets confused. Even so, I'd be ok with either 'numpy' or 'np' ('np' has the additional [and maybe obvious] advantage of being a lot shorter). 
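As a concrete illustration of option 2) above, a minimal sketch of a driver
that injects the numpy namespace into the examples could look like the
following (the file path is only a placeholder, and this is not part of any
agreed tooling yet):

    import doctest
    import numpy

    # Pretend the reader has already typed "import numpy"; nothing else
    # is added to the namespace the examples run in.
    failed, attempted = doctest.testfile(
        '/path/to/numpy/core/fromnumeric.py',   # placeholder path
        module_relative=False,
        globs={'numpy': numpy},
    )
    print(failed, attempted)

The same globs dictionary can be passed to doctest.testmod() when testing an
already-imported module instead of a file on disk.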
Regards St?fan From pearu at cens.ioc.ee Sun May 18 12:50:31 2008 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Sun, 18 May 2008 19:50:31 +0300 (EEST) Subject: [SciPy-dev] ANN: NumPy/SciPy Documentation Marathon 2008 In-Reply-To: <9457e7c80805180912g2567ca8age4d6eeb97a2218cc@mail.gmail.com> References: <483000B9.7000602@slac.stanford.edu> <1211108268.8207.0.camel@localhost.localdomain> <483015B9.1060904@slac.stanford.edu> <1211112363.8207.11.camel@localhost.localdomain> <483021DA.7060200@slac.stanford.edu> <1211116009.8207.29.camel@localhost.localdomain> <1e2af89e0805180824l767483e9lf9e7388d8f983be9@mail.gmail.com> <9457e7c80805180912g2567ca8age4d6eeb97a2218cc@mail.gmail.com> Message-ID: <59729.88.90.135.57.1211129431.squirrel@cens.ioc.ee> On Sun, May 18, 2008 7:12 pm, St?fan van der Walt wrote: > 2008/5/18 Matthew Brett : >> Hi, >> >>> 1) Assume 'from numpy import *' >>> 2) Assume 'import numpy'?????? >>> 3) Assume 'import numpy as np' >> >> Strong vote against 1), for 3). I think we'll end up just having to >> retrain people out of import * later, as did wx and others. > > I'm also not in favour of 1; we don't want to encourage people to do > "from numpy import *" -- ever. The argument against 3 is that many > (some?) people use different shorthand forms for numpy. If we simply > rely on 'numpy' being called 'numpy', no one gets confused. Even so, > I'd be ok with either 'numpy' or 'np' ('np' has the additional [and > maybe obvious] advantage of being a lot shorter). +1 for 2. numpy is not that long. Pearu From cohen at slac.stanford.edu Sun May 18 16:14:43 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Sun, 18 May 2008 22:14:43 +0200 Subject: [SciPy-dev] ANN: NumPy/SciPy Documentation Marathon 2008 In-Reply-To: <1211116009.8207.29.camel@localhost.localdomain> References: <483000B9.7000602@slac.stanford.edu> <1211108268.8207.0.camel@localhost.localdomain> <483015B9.1060904@slac.stanford.edu> <1211112363.8207.11.camel@localhost.localdomain> <483021DA.7060200@slac.stanford.edu> <1211116009.8207.29.camel@localhost.localdomain> Message-ID: <48308E33.7040000@slac.stanford.edu> Pauli Virtanen wrote: > Hi, > > su, 2008-05-18 kello 14:32 +0200, Johann Cohen-Tanugi kirjoitti: > >> thanks. I am looking at fromnumeric.py, and more precisely the examples. >> When I try doctest.testfile I get a lot of errors and it seems that in >> every single examples I need to add : >> >>> from numpy import array >> >>> from numpy.core.fromnumeric import * >> >> So that array is known within the example, as well as the actual >> function being tested.... I find that a bit strange so I would like to >> know if there is smthg I do wrong. Here is what I do : >> [cohen at jarrett core]$ ipython -c "import >> doctest;doctest.testfile('/home/cohen/data1/sources/python/numpy-svn/numpy/core/fromnumeric.py',module_relative=False)" >> > > I think the problem is that right now very few of the examples can > actually be ran as doctests, because of the missing import statements. > > We still need to decide on a standard way to use imports in the > examples, and document it more clearly on the wiki. After this is done, > we need to can out how to run doctests so that the examples pass. And > after this, we can start testing and fixing the examples. > > I think there are three reasonable choices: > > 1) Assume 'from numpy import *' > > 'import doctest, numpy; doctest.testfile(file_name, globs=numpy.__dict__)' > > 2) Assume 'import numpy' > ?? > ? 
'import doctest, numpy; doctest.testfile(file_name, globs={"numpy": numpy})' > well, there is a little bit of magic that I do not understand. If I do not put an 'import numpy' at the very first Example section of the fromnumeric.py file (id est in the doctest of the choose function), I get plenty of errors stating that numpy is not known, irrespective of whether I call ipython -c "import doctest, numpy;doctest.testfile('/home/cohen/data1/sources/python/numpy-svn/numpy/core/fromnumeric.py',module_relative=False,globs={"numpy": numpy})" or ipython -c "import doctest;doctest.testfile('/home/cohen/data1/sources/python/numpy-svn/numpy/core/fromnumeric.py',module_relative=False)" On the other hand, if I put 'import numpy' in the doctest of the choose function, it seems to become valid for the rest of the doctests, irrespective of whether I execute the first or second line above. any idea? Johann > 3) Assume 'import numpy as np' > > ? 'import doctest, numpy; doctest.testfile(file_name, globs={"np": numpy})' > > The default doctest choice of using the __dict__ of the module where the > test is in might not be reasonable if we want to write examples instead > of doctests. (And this default choice apparently doesn't work with > testfile but it works with testmod?) > > I think St?fan suggested 2). I'd say 1) or 2). > > Pauli > > > PS. Could you use "Reply all", so that the messages go also to the > scipy-dev list? I think this discussion has interest also to other > people. > > > From hoytak at gmail.com Sun May 18 16:56:03 2008 From: hoytak at gmail.com (Hoyt Koepke) Date: Sun, 18 May 2008 13:56:03 -0700 Subject: [SciPy-dev] FLENS In-Reply-To: <482FF226.2020607@ar.media.kyoto-u.ac.jp> References: <482FF226.2020607@ar.media.kyoto-u.ac.jp> Message-ID: <4db580fd0805181356o64644224ib9c07948182f8c01@mail.gmail.com> > That's the main advantage of FLENS over Blitz++ from what I > understand. Blitz++ does not use BLAS at all. FLENS looks really cool, but from my brief look, it doesn't seem like it's really a drop in replacement for blitz++. blitz++ seems to have more functionality in terms of slicing and assignment operations that I'd associate with arrays, which makes weave.blitz practical. It seems that FLENS is primarily focused on matrix type operations, which blitz++ doesn't have, so having a weave.flens package could be really useful. I might have missed something, though -- let me know if I did. --Hoyt -- +++++++++++++++++++++++++++++++++++ Hoyt Koepke UBC Department of Computer Science http://www.cs.ubc.ca/~hoytak/ hoytak at gmail.com +++++++++++++++++++++++++++++++++++ From alan at ajackson.org Sun May 18 16:59:25 2008 From: alan at ajackson.org (Alan Jackson) Date: Sun, 18 May 2008 15:59:25 -0500 Subject: [SciPy-dev] ANN: NumPy/SciPy Documentation Marathon 2008 Message-ID: <20080518155925.7fd5d18e@ajackson.org> I'll squeeze in a little time on it. I think it is an outstanding idea. wiki name : AlanJackson - write some docs - review docs - test examples Should we have people flag that they have "checked out" a function to work on it so we don't get two people working offline on the same thing and then colliding when they are done? Similarly, for reviews, if there were a notification that a major update had occurred, then reviewers would know to go take a look. -- ----------------------------------------------------------------------- | Alan K. 
Jackson | To see a World in a Grain of Sand | | alan at ajackson.org | And a Heaven in a Wild Flower, | | www.ajackson.org | Hold Infinity in the palm of your hand | | Houston, Texas | And Eternity in an hour. - Blake | ----------------------------------------------------------------------- From wbaxter at gmail.com Sun May 18 17:12:54 2008 From: wbaxter at gmail.com (Bill Baxter) Date: Mon, 19 May 2008 06:12:54 +0900 Subject: [SciPy-dev] FLENS In-Reply-To: <4db580fd0805181356o64644224ib9c07948182f8c01@mail.gmail.com> References: <482FF226.2020607@ar.media.kyoto-u.ac.jp> <4db580fd0805181356o64644224ib9c07948182f8c01@mail.gmail.com> Message-ID: On Mon, May 19, 2008 at 5:56 AM, Hoyt Koepke wrote: > > That's the main advantage of FLENS over Blitz++ from what I > > understand. Blitz++ does not use BLAS at all. > > FLENS looks really cool, but from my brief look, it doesn't seem like > it's really a drop in replacement for blitz++. blitz++ seems to have > more functionality in terms of slicing and assignment operations that > I'd associate with arrays, which makes weave.blitz practical. It > seems that FLENS is primarily focused on matrix type operations, which > blitz++ doesn't have, so having a weave.flens package could be really > useful. I might have missed something, though -- let me know if I > did. > Yes that is correct, I would say that the main point of FLENS is to provide convenient access to BLAS and LAPACK. It has no n-dimensional array types, just matrix types. But it does support all of BLAS's triangular and banded storage schemes. It has some support for sparse matrices too, but at this point the support for sparse operations and types is, well, sparse. Only the CRS storage scheme is implemented, and only connection with Intel's MKL is supported. On the FLENS list Michael said he was going to be looking into rounding out the sparse part of the code, so if anyone is interested in helping out, he'd probably be interested. --bb -------------- next part -------------- An HTML attachment was scrubbed... URL: From ndbecker2 at gmail.com Sun May 18 17:22:08 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Sun, 18 May 2008 17:22:08 -0400 Subject: [SciPy-dev] cdflib wrapper - missing functions? Message-ID: I'm interested in cdf for non-central chi-square, to use for r.v. generation. I noticed that scipy has cdflib, and I see cdflib has cumchf, which seems to be what I want. It looks like cdflib_wrappers.c does not expose cumchf? I know nothing about f2py and scipy packaging, but I suppose it should be easy to add this? Any other suggestions? From robert.kern at gmail.com Sun May 18 17:33:34 2008 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 18 May 2008 16:33:34 -0500 Subject: [SciPy-dev] cdflib wrapper - missing functions? In-Reply-To: References: Message-ID: <3d375d730805181433x11e89545m337ca5c5c094755@mail.gmail.com> On Sun, May 18, 2008 at 4:22 PM, Neal Becker wrote: > I'm interested in cdf for non-central chi-square, to use for r.v. > generation. I noticed that scipy has cdflib, and I see cdflib has cumchf, > which seems to be what I want. It looks like cdflib_wrappers.c does not > expose cumchf? It's not exposed there because the Cephes library already provides it. See scipy.special.chndtr*. For random number generation, though, you should use numpy.random.noncentral_chisquare(). If you want all of that bundled up into a distribution object, use scipy.stats.ncx2. 
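Put together, those pieces can be used along these lines (a rough sketch
with arbitrary numbers, not taken from any scipy docstring):

    import numpy
    from scipy import special, stats

    df, nonc = 4.0, 2.5

    # Random values straight from numpy's generator.
    samples = numpy.random.noncentral_chisquare(df, nonc, size=10000)

    # CDF via the Cephes-backed special function ...
    p1 = special.chndtr(9.0, df, nonc)

    # ... or via the bundled distribution object, which also exposes
    # pdf, ppf, rvs and so on in one place.
    p2 = stats.ncx2(df, nonc).cdf(9.0)

    # The empirical CDF of the samples should land close to both.
    print(p1, p2, (samples <= 9.0).mean())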
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From timcera at earthlink.net Sun May 18 18:39:39 2008 From: timcera at earthlink.net (Tim Cera) Date: Sun, 18 May 2008 18:39:39 -0400 Subject: [SciPy-dev] Docstring editing request on SciPy Doc Wiki In-Reply-To: References: Message-ID: <4830B02B.8030308@earthlink.net> Hello, I would like to help out with the documentation effort. I might be able to write, but at a minimum I can help review. Definitely do not put me down as a profreader. My login on SciPy Doc Wiki is TimCera Kindest regards, Tim From ndbecker2 at gmail.com Sun May 18 18:43:33 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Sun, 18 May 2008 18:43:33 -0400 Subject: [SciPy-dev] cdflib wrapper - missing functions? References: <3d375d730805181433x11e89545m337ca5c5c094755@mail.gmail.com> Message-ID: Robert Kern wrote: > On Sun, May 18, 2008 at 4:22 PM, Neal Becker wrote: >> I'm interested in cdf for non-central chi-square, to use for r.v. >> generation. I noticed that scipy has cdflib, and I see cdflib has >> cumchf, >> which seems to be what I want. It looks like cdflib_wrappers.c does not >> expose cumchf? > > It's not exposed there because the Cephes library already provides it. > See scipy.special.chndtr*. For random number generation, though, you > should use numpy.random.noncentral_chisquare(). If you want all of > that bundled up into a distribution object, use scipy.stats.ncx2. > Thanks for the tips! I wonder if there are any references on the method used in numpy.random.noncentral_chisquare? This is the code: double rk_noncentral_chisquare(rk_state *state, double df, double nonc) { double Chi2, N; Chi2 = rk_chisquare(state, df-1); N = rk_gauss(state) + sqrt(nonc); return Chi2 + N*N; } From robert.kern at gmail.com Sun May 18 19:21:02 2008 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 18 May 2008 18:21:02 -0500 Subject: [SciPy-dev] cdflib wrapper - missing functions? In-Reply-To: References: <3d375d730805181433x11e89545m337ca5c5c094755@mail.gmail.com> Message-ID: <3d375d730805181621o4b64790ew253bd1a7b36c4628@mail.gmail.com> On Sun, May 18, 2008 at 5:43 PM, Neal Becker wrote: > I wonder if there are any references on the method used in > numpy.random.noncentral_chisquare? I took the method from RANLIB. It is not referenced there. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From pav at iki.fi Mon May 19 04:02:13 2008 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 19 May 2008 11:02:13 +0300 Subject: [SciPy-dev] Docstring editing request on SciPy Doc Wiki In-Reply-To: <4830B02B.8030308@earthlink.net> References: <4830B02B.8030308@earthlink.net> Message-ID: <1211184133.6597.62.camel@localhost> su, 2008-05-18 kello 18:39 -0400, Tim Cera kirjoitti: > Hello, > > I would like to help out with the documentation effort. I might be able > to write, but at a minimum I can help review. Definitely do not put me > down as a profreader. > > My login on SciPy Doc Wiki is TimCera Thanks, you can now edit the wiki. 
Pauli From stefan at sun.ac.za Mon May 19 04:58:44 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 19 May 2008 10:58:44 +0200 Subject: [SciPy-dev] Docstring editing request on SciPy Doc Wiki In-Reply-To: <1211184133.6597.62.camel@localhost> References: <4830B02B.8030308@earthlink.net> <1211184133.6597.62.camel@localhost> Message-ID: <9457e7c80805190158q1f19e39dua6aa3055097f9e3@mail.gmail.com> Hi all, When editing, please take careful note of the documentation guidelines. Specifically: Images are allowed, but should not be central to the explanation; users viewing the docstring as text must be able to comprehend its meaning without resorting to an image viewer. We don't have a good way of including images in the documentation yet, so for now it may be better not to use them (at least until we can discuss it further). We have a math tag for formulas, so use those (sparingly) to write equations. Examples should be formatted in the standard doctest format. I am currently doing a merge into the NumPy sources. If a docstring is not ready for inclusion, the NumPy version will overwrite the version on the wiki. If you made changes to the page, don't worry -- those haven't been lost. You may revert your changes and update the docstring again, until it can be included. For developers: please don't change docstrings directly in the source tree, it makes merging difficult. If you urgently need a change to go in, make it on the wiki and let me know. Regards St?fan From stefan at sun.ac.za Mon May 19 07:30:00 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 19 May 2008 13:30:00 +0200 Subject: [SciPy-dev] ANN: NumPy/SciPy Documentation Marathon 2008 In-Reply-To: <48308E33.7040000@slac.stanford.edu> References: <483000B9.7000602@slac.stanford.edu> <1211108268.8207.0.camel@localhost.localdomain> <483015B9.1060904@slac.stanford.edu> <1211112363.8207.11.camel@localhost.localdomain> <483021DA.7060200@slac.stanford.edu> <1211116009.8207.29.camel@localhost.localdomain> <48308E33.7040000@slac.stanford.edu> Message-ID: <9457e7c80805190430tdd6ef6ob240d380eefca018@mail.gmail.com> Hi Johann 2008/5/18 Johann Cohen-Tanugi : > well, there is a little bit of magic that I do not understand. If I do > not put an 'import numpy' at the very first Example section of the > fromnumeric.py file (id est in the doctest of the choose function), I > get plenty of errors stating that numpy is not known, irrespective of > whether I call > ipython -c "import doctest, > numpy;doctest.testfile('/home/cohen/data1/sources/python/numpy-svn/numpy/core/fromnumeric.py',module_relative=False,globs={"numpy": > numpy})" > or > ipython -c "import > doctest;doctest.testfile('/home/cohen/data1/sources/python/numpy-svn/numpy/core/fromnumeric.py',module_relative=False)" > On the other hand, if I put 'import numpy' in the doctest of the choose > function, it seems to become valid for the rest of the doctests, > irrespective of whether I execute the first or second line above. Your observation is correct. We shall probably inject 'numpy' artificially into the doctest namespace before running them. Doctest picks up both imports made in the file itself and in doctest sections. Regards St?fan (Johann) van der Walt :) From ndbecker2 at gmail.com Mon May 19 08:21:55 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Mon, 19 May 2008 08:21:55 -0400 Subject: [SciPy-dev] noncentral_chisquare buglet? 
Message-ID: def noncentral_chisquare(self, df, nonc, size=None): """Noncentral Chi^2 distribution. noncentral_chisquare(df, nonc, size=None) -> random values """ cdef ndarray odf, ononc cdef double fdf, fnonc fdf = PyFloat_AsDouble(df) fnonc = PyFloat_AsDouble(nonc) if not PyErr_Occurred(): if fdf <= 1: raise ValueError("df <= 0") << References: <4830B02B.8030308@earthlink.net> <1211184133.6597.62.camel@localhost> <9457e7c80805190158q1f19e39dua6aa3055097f9e3@mail.gmail.com> Message-ID: <48317074.5090002@ar.media.kyoto-u.ac.jp> St?fan van der Walt wrote: > For developers: please don't change docstrings directly in the source > tree, it makes merging difficult. If you urgently need a change to go > in, make it on the wiki and let me know. > Good to know, I was about to put some "restification" for numpy.fft. Can you add me to the editor list, please ? (CournapeauDavid), Another question: where can I find instructions on how to build the documentation by myself ? cheers, David From stefan at sun.ac.za Mon May 19 09:18:00 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 19 May 2008 15:18:00 +0200 Subject: [SciPy-dev] Docstring editing request on SciPy Doc Wiki In-Reply-To: <48317074.5090002@ar.media.kyoto-u.ac.jp> References: <4830B02B.8030308@earthlink.net> <1211184133.6597.62.camel@localhost> <9457e7c80805190158q1f19e39dua6aa3055097f9e3@mail.gmail.com> <48317074.5090002@ar.media.kyoto-u.ac.jp> Message-ID: <9457e7c80805190618k4f8b61e7r7b87d0faeb899689@mail.gmail.com> 2008/5/19 David Cournapeau : > St?fan van der Walt wrote: >> For developers: please don't change docstrings directly in the source >> tree, it makes merging difficult. If you urgently need a change to go >> in, make it on the wiki and let me know. >> > > Good to know, I was about to put some "restification" for numpy.fft. Can > you add me to the editor list, please ? (CournapeauDavid), Done. > Another question: where can I find instructions on how to build the > documentation by myself ? I'm still working on that code, but I've made my bzr branch available at https://code.launchpad.net/~stefanv/+junk/numpy-refguide Regards St?fan From david at ar.media.kyoto-u.ac.jp Mon May 19 09:26:11 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 19 May 2008 22:26:11 +0900 Subject: [SciPy-dev] Docstring editing request on SciPy Doc Wiki In-Reply-To: <9457e7c80805190618k4f8b61e7r7b87d0faeb899689@mail.gmail.com> References: <4830B02B.8030308@earthlink.net> <1211184133.6597.62.camel@localhost> <9457e7c80805190158q1f19e39dua6aa3055097f9e3@mail.gmail.com> <48317074.5090002@ar.media.kyoto-u.ac.jp> <9457e7c80805190618k4f8b61e7r7b87d0faeb899689@mail.gmail.com> Message-ID: <48317FF3.2060305@ar.media.kyoto-u.ac.jp> St?fan van der Walt wrote: > > Done. > Thanks. 
> I'm still working on that code, but I've made my bzr branch available at > > https://code.launchpad.net/~stefanv/+junk/numpy-refguide > Thanks (FYI, the valid address is http://bazaar.launchpad.net/~stefanv/scipy/numpy-refguide/) cheers, David From stefan at sun.ac.za Mon May 19 10:41:34 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 19 May 2008 16:41:34 +0200 Subject: [SciPy-dev] Docstring editing request on SciPy Doc Wiki In-Reply-To: <48317FF3.2060305@ar.media.kyoto-u.ac.jp> References: <4830B02B.8030308@earthlink.net> <1211184133.6597.62.camel@localhost> <9457e7c80805190158q1f19e39dua6aa3055097f9e3@mail.gmail.com> <48317074.5090002@ar.media.kyoto-u.ac.jp> <9457e7c80805190618k4f8b61e7r7b87d0faeb899689@mail.gmail.com> <48317FF3.2060305@ar.media.kyoto-u.ac.jp> Message-ID: <9457e7c80805190741t7383af05ja74b4d4afb33276a@mail.gmail.com> 2008/5/19 David Cournapeau : > St?fan van der Walt wrote: >> >> Done. >> > > Thanks. > >> I'm still working on that code, but I've made my bzr branch available at >> >> https://code.launchpad.net/~stefanv/+junk/numpy-refguide >> > > Thanks (FYI, the valid address is > http://bazaar.launchpad.net/~stefanv/scipy/numpy-refguide/) Sorry, I changed the project after I sent you the URL. Would you please try the latest version and let me know whether you can build the documentation? Note that docutils from SVN is broken, so you'll need v0.4 or the Debian-patched SVN version. Regards St?fan From robert.kern at gmail.com Mon May 19 11:26:35 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 19 May 2008 10:26:35 -0500 Subject: [SciPy-dev] Docstring editing request on SciPy Doc Wiki In-Reply-To: <9457e7c80805190158q1f19e39dua6aa3055097f9e3@mail.gmail.com> References: <4830B02B.8030308@earthlink.net> <1211184133.6597.62.camel@localhost> <9457e7c80805190158q1f19e39dua6aa3055097f9e3@mail.gmail.com> Message-ID: <3d375d730805190826y3179abc9i92b2636b7035da82@mail.gmail.com> On Mon, May 19, 2008 at 3:58 AM, St?fan van der Walt wrote: > For developers: please don't change docstrings directly in the source > tree, it makes merging difficult. If you urgently need a change to go > in, make it on the wiki and let me know. Ooh. Not good. It's not time-critical, but this needs to be fixed. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Mon May 19 11:27:31 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 19 May 2008 10:27:31 -0500 Subject: [SciPy-dev] noncentral_chisquare buglet? In-Reply-To: References: Message-ID: <3d375d730805190827y4d6b437bo880445e73b5915a8@mail.gmail.com> On Mon, May 19, 2008 at 7:21 AM, Neal Becker wrote: > def noncentral_chisquare(self, df, nonc, size=None): > """Noncentral Chi^2 distribution. > > noncentral_chisquare(df, nonc, size=None) -> random values > """ > cdef ndarray odf, ononc > cdef double fdf, fnonc > fdf = PyFloat_AsDouble(df) > fnonc = PyFloat_AsDouble(nonc) > if not PyErr_Occurred(): > if fdf <= 1: > raise ValueError("df <= 0") << > I think this message should be "df <= 1"? Yup. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From stefan at sun.ac.za Mon May 19 11:50:21 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 19 May 2008 17:50:21 +0200 Subject: [SciPy-dev] Docstring editing request on SciPy Doc Wiki In-Reply-To: <3d375d730805190826y3179abc9i92b2636b7035da82@mail.gmail.com> References: <4830B02B.8030308@earthlink.net> <1211184133.6597.62.camel@localhost> <9457e7c80805190158q1f19e39dua6aa3055097f9e3@mail.gmail.com> <3d375d730805190826y3179abc9i92b2636b7035da82@mail.gmail.com> Message-ID: <9457e7c80805190850u608e4eefl5941bc3d170aef5a@mail.gmail.com> 2008/5/19 Robert Kern : > On Mon, May 19, 2008 at 3:58 AM, St?fan van der Walt wrote: > >> For developers: please don't change docstrings directly in the source >> tree, it makes merging difficult. If you urgently need a change to go >> in, make it on the wiki and let me know. > > Ooh. Not good. It's not time-critical, but this needs to be fixed. It's not a train smash -- but it causes some patches to be rejected. I don't think this is something that can be "fixed"; it's inherently a merge problem. Regards St?fan From jh at physics.ucf.edu Sun May 18 21:10:56 2008 From: jh at physics.ucf.edu (Joe Harrington) Date: Sun, 18 May 2008 21:10:56 -0400 Subject: [SciPy-dev] ANN: NumPy/SciPy Documentation Marathon 2008 Message-ID: <1211159457.5853.84.camel@glup.physics.ucf.edu> I'm excited that so many talented people are signing up to contribute to the docs! Now I'm wishing that I had turned my debate with Stefan about the number who would play into a bet for ice cream...ah well. A quick comment about getting started. Please do read the front page of the wiki and the docs it refers to. We spent considerable time fine-tuning these to respond to exactly the questions that have been raised regarding import format, how to sign up, ReST conventions, etc. I'll provide rationale for some of these below. 1. I signed up on the wiki but can't edit; why? Post your name and wait a day. We're trying to avoid wiki spam by having the wiki chieftains approve editors by adding them to EditorGroup. We don't have huge resources so we're trying to pre-empt management problems like cleaning up wiki spam. Then we can spend more of our limited time writing docs. 2. Workflow? How do I know what to edit, review, or proof? We also don't have a workflow yet, though we have a utilitarian hack in mind that should show the status of each function. Right now they're all in need of editing, so we have a few weeks. Part of the problem is that the wiki gets torn down and rebuilt every time Stefan checks the docs in, and it's not a full web ap. If you have interest in helping solve this problem *and you have no interest in writing*, please let us know. Otherwise, please write, write, write! We have like 1000 proofed pages to produce, maybe more. 3. import convention? >>> 1) Assume 'from numpy import *' >>> 2) Assume 'import numpy'?????? >>> 3) Assume 'import numpy as np' For consistency in the docs, and likely to make doctests work, we need to have just one set of assumptions for imports. It will confuse new users and look very unprofessional if one docstring assumes "import numpy", another "import numpy as np", another "from numpy import thefunctionbeingdocumented" and so forth. We want to be clear and to make the examples cut-and-pasteable for as many people as possible. 
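To make that concrete, one purely illustrative shape for an Examples section
under a single package-level import assumption, with every other import
spelled out inside the example itself (the calls and output are invented for
illustration, not taken from a real docstring):

    Examples
    --------
    >>> numpy.add(1, 2)                    # 'import numpy' is assumed
    3
    >>> from numpy import linalg           # anything further is explicit
    >>> linalg.norm(numpy.array([3.0, 4.0]))
    5.0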
This is why we have asked people (in the doc instructions and example) to assume "import numpy" for numpy docs, to assume that and "import scipy" for scipy docs, and to make all other imports explicitly in the examples (e.g., if you want to plot in an example). We can discuss alternatives here, but we saw recently a variety of opinion (my own perhaps too loudly expressed) on how best to abbreviate, so I think that we'll probably end up in a religious war about how we want to tell people to import. Since it should always be ok simply to "import numpy", and since that is the simplest form of the import command, my personal take is that this is the appropriate assumption to make in the docs. The "assume import numpy" convention is at the very least understood by all, as you have to do it to access the package. My second preference, which looks more like other programming languages and is briefest, is to assume the user has done "from whatever import foo" on every function in the example. Sadly, this looks suspiciously like "from whatever import *", and we definitely don't want to encourage that practice. It's also not likely to be true. I'm still debating with myself over whether we should put that explicit import at the start of each example. It's an extra line, but it makes for the clearest and briefest rest of the example, which is the interesting part. (Allow me to show my silly side by saying that I usually lose these debates with myself...by not deciding.) --jh-- From pgmdevlist at gmail.com Mon May 19 12:33:09 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Mon, 19 May 2008 12:33:09 -0400 Subject: [SciPy-dev] Cookbook/Documentation Message-ID: <200805191233.10380.pgmdevlist@gmail.com> All, * I've just noticed that the page describing RecordArrays (http://www.scipy.org/RecordArrays) is not listed under the Cookbook: should this be changed ? Shouldn't there be at least a link in the documentation page ? * Same problem with Subclasses (http://www.scipy.org/Subclasses) * I was eventually considering writing down some basic docs for MaskedArrays: should I create a page under the Cookbook ? Elsewhere ? * Congrats for the DocMarathon initiative ! The 3 points I've just raised would fit nicely with objective #3 (reference sections): what's the plan for that ? Any specific directions to follow ? From jdh2358 at gmail.com Mon May 19 12:49:08 2008 From: jdh2358 at gmail.com (John Hunter) Date: Mon, 19 May 2008 11:49:08 -0500 Subject: [SciPy-dev] ANN: NumPy/SciPy Documentation Marathon 2008 In-Reply-To: <1211159457.5853.84.camel@glup.physics.ucf.edu> References: <1211159457.5853.84.camel@glup.physics.ucf.edu> Message-ID: <88e473830805190949s44bac124j6e94f91b517c5f20@mail.gmail.com> On Sun, May 18, 2008 at 8:10 PM, Joe Harrington wrote: > This is why we have asked people (in the doc instructions and example) > to assume "import numpy" for numpy docs, to assume that and "import > scipy" for scipy docs, and to make all other imports explicitly in the > examples (e.g., if you want to plot in an example). in the numpy sprint at Berkeley the numpy devs including Travis, Robert, Jarrod and others voted to recommend 'import numpy as np'. Perhaps we can go with that and avoid further debate. 
The vote was for: import numpy as np import scipy as sp import matplotlib.pyplot as plt JDH >From the sprint chat: In the chat room: Fernando Perez (fdo.perez at gmail.com), chris.d.burns at gmail.com, Robert Kern (robert.kern at gmail.com), Jarrod Millman (millman.ucb at gmail.com), teoliphant at gmail.com 2:43 PM Fernando: You've been invited to this chat room! john? me: hey Fernando: quick, I know you're busy... import numpy as np import scipy as sp import matplotlib as mpl 2:44 PM me: no problem what's up? Fernando: these as conventions for shorhands? the mpl one I don't like... awkawrd to touch type... OK with the first two? me: well, you probably mean pyplot, not maptlotlib, since that is where the pylab plotting functionality lives import pyplot as plt? 2:45 PM the first two look good to me -- I much prefer the lower case over import numpy as N and single letter lower case is too brittle 2:46 PM Fernando: rkern says pyl... travis says lab 2:47 PM travis also votes for plt... me: I like plt reasonably well -- I'd also like to do apylab next generation along the lines of what we talked about before import pyplot as plot import numpy as np import scipy as sp import numpy.fft as fft 2:48 PM import scipy.optimize as opt etc.... Fernando: OK, the show of hands here also goes for plt. Let's just run with it, you OK? me: yep Fernando: so it is: np, sp, plt. Hammer goes down. 2:49 PM We may ping you again for quickies like this if you approve. I'll make sure not to disturb you too much, but it helps to make decisions and move along. Thanks! me: sounds good -- good luck! From cohen at slac.stanford.edu Mon May 19 12:54:47 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Mon, 19 May 2008 18:54:47 +0200 Subject: [SciPy-dev] ANN: NumPy/SciPy Documentation Marathon 2008 In-Reply-To: <88e473830805190949s44bac124j6e94f91b517c5f20@mail.gmail.com> References: <1211159457.5853.84.camel@glup.physics.ucf.edu> <88e473830805190949s44bac124j6e94f91b517c5f20@mail.gmail.com> Message-ID: <4831B0D7.5010206@slac.stanford.edu> yesterday when I started to modify the doctest I used import numpy. It is easy enough to change to import numpy as np, but please let's get that out of the way once and for all. I have no preference between the 2. Johann John Hunter wrote: > On Sun, May 18, 2008 at 8:10 PM, Joe Harrington wrote: > > >> This is why we have asked people (in the doc instructions and example) >> to assume "import numpy" for numpy docs, to assume that and "import >> scipy" for scipy docs, and to make all other imports explicitly in the >> examples (e.g., if you want to plot in an example). >> > > in the numpy sprint at Berkeley the numpy devs including Travis, > Robert, Jarrod and others voted to recommend 'import numpy as np'. > Perhaps we can go with that and avoid further debate. The vote was > for: > > import numpy as np > import scipy as sp > import matplotlib.pyplot as plt > > JDH > > >From the sprint chat: > > > In the chat room: Fernando Perez (fdo.perez at gmail.com), > chris.d.burns at gmail.com, Robert Kern (robert.kern at gmail.com), Jarrod > Millman (millman.ucb at gmail.com), teoliphant at gmail.com > > 2:43 PM Fernando: You've been invited to this chat room! > john? > me: hey > Fernando: quick, I know you're busy... > import numpy as np > import scipy as sp > import matplotlib as mpl > 2:44 PM me: no problem what's up? > Fernando: these as conventions for shorhands? > the mpl one I don't like... awkawrd to touch type... > OK with the first two? 
> me: well, you probably mean pyplot, not maptlotlib, since that is > where the pylab plotting functionality lives > import pyplot as plt? > 2:45 PM the first two look good to me -- I much prefer the lower case > over import numpy as N and single letter lower case is too brittle > 2:46 PM Fernando: rkern says pyl... > travis says lab > 2:47 PM travis also votes for plt... > me: I like plt reasonably well -- I'd also like to do apylab next > generation along the lines of what we talked about before > import pyplot as plot > import numpy as np > import scipy as sp > import numpy.fft as fft > 2:48 PM import scipy.optimize as opt > etc.... > Fernando: OK, the show of hands here also goes for plt. > Let's just run with it, you OK? > me: yep > Fernando: so it is: np, sp, plt. Hammer goes down. > 2:49 PM We may ping you again for quickies like this if you approve. > I'll make sure not to disturb you too much, but it helps to make > decisions and move along. Thanks! > me: sounds good -- good luck! > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From robert.kern at gmail.com Mon May 19 13:04:11 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 19 May 2008 12:04:11 -0500 Subject: [SciPy-dev] Docstring editing request on SciPy Doc Wiki In-Reply-To: <9457e7c80805190850u608e4eefl5941bc3d170aef5a@mail.gmail.com> References: <4830B02B.8030308@earthlink.net> <1211184133.6597.62.camel@localhost> <9457e7c80805190158q1f19e39dua6aa3055097f9e3@mail.gmail.com> <3d375d730805190826y3179abc9i92b2636b7035da82@mail.gmail.com> <9457e7c80805190850u608e4eefl5941bc3d170aef5a@mail.gmail.com> Message-ID: <3d375d730805191004p46615398qa67dae44e72902eb@mail.gmail.com> On Mon, May 19, 2008 at 10:50 AM, St?fan van der Walt wrote: > 2008/5/19 Robert Kern : >> On Mon, May 19, 2008 at 3:58 AM, St?fan van der Walt wrote: >> >>> For developers: please don't change docstrings directly in the source >>> tree, it makes merging difficult. If you urgently need a change to go >>> in, make it on the wiki and let me know. >> >> Ooh. Not good. It's not time-critical, but this needs to be fixed. > > It's not a train smash -- but it causes some patches to be rejected. > I don't think this is something that can be "fixed"; it's inherently a > merge problem. Okay, but this has to be a problem that the wiki->docstring system (or its human operator) needs to deal with, not developers. It's just not acceptable to have parts of the source that we can't edit directly. Is the wiki->docstring code publicly available? 3-way merges are quite typical, so I don't really understand why it would be especially hard in this case. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From stefan at sun.ac.za Mon May 19 13:19:27 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 19 May 2008 19:19:27 +0200 Subject: [SciPy-dev] Docstring editing request on SciPy Doc Wiki In-Reply-To: <3d375d730805191004p46615398qa67dae44e72902eb@mail.gmail.com> References: <4830B02B.8030308@earthlink.net> <1211184133.6597.62.camel@localhost> <9457e7c80805190158q1f19e39dua6aa3055097f9e3@mail.gmail.com> <3d375d730805190826y3179abc9i92b2636b7035da82@mail.gmail.com> <9457e7c80805190850u608e4eefl5941bc3d170aef5a@mail.gmail.com> <3d375d730805191004p46615398qa67dae44e72902eb@mail.gmail.com> Message-ID: <9457e7c80805191019j122a7f3fv5ec1896be446a597@mail.gmail.com> 2008/5/19 Robert Kern : > On Mon, May 19, 2008 at 10:50 AM, St?fan van der Walt wrote: >> 2008/5/19 Robert Kern : >>> On Mon, May 19, 2008 at 3:58 AM, St?fan van der Walt wrote: >>> >>>> For developers: please don't change docstrings directly in the source >>>> tree, it makes merging difficult. If you urgently need a change to go >>>> in, make it on the wiki and let me know. >>> >>> Ooh. Not good. It's not time-critical, but this needs to be fixed. >> >> It's not a train smash -- but it causes some patches to be rejected. >> I don't think this is something that can be "fixed"; it's inherently a >> merge problem. > > Okay, but this has to be a problem that the wiki->docstring system (or > its human operator) needs to deal with, not developers. It's just not > acceptable to have parts of the source that we can't edit directly. You may edit the code directly, if you so wish, but the changes won't immediately be visible to other editors working via the wiki. So rather than a technological limitation, I see it as a matter of courtesy. But don't worry about it, I'll merge the docstrings as necessary. Regards St?fan From stefan at sun.ac.za Mon May 19 13:20:45 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 19 May 2008 19:20:45 +0200 Subject: [SciPy-dev] ANN: NumPy/SciPy Documentation Marathon 2008 In-Reply-To: <4831B0D7.5010206@slac.stanford.edu> References: <1211159457.5853.84.camel@glup.physics.ucf.edu> <88e473830805190949s44bac124j6e94f91b517c5f20@mail.gmail.com> <4831B0D7.5010206@slac.stanford.edu> Message-ID: <9457e7c80805191020m5171e209ja4e8018638d1006b@mail.gmail.com> Hi Johann 2008/5/19 Johann Cohen-Tanugi : > yesterday when I started to modify the doctest I used import numpy. It > is easy enough to change to import numpy as np, but please let's get > that out of the way once and for all. I have no preference between the 2. Keep using numpy.func for now. When this thread comes to a conclusion, I shall do a search and replace if necessary. 
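For instance, an interim docstring example written in that fully spelled-out
style might read as follows (a made-up snippet, not the actual choose
docstring), so that a later move to 'np.' really is a mechanical search and
replace:

    >>> import numpy
    >>> numpy.choose([1, 0], ([10, 20], [30, 40]))
    array([30, 20])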
Regards St?fan From robert.kern at gmail.com Mon May 19 13:31:29 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 19 May 2008 12:31:29 -0500 Subject: [SciPy-dev] Docstring editing request on SciPy Doc Wiki In-Reply-To: <9457e7c80805191019j122a7f3fv5ec1896be446a597@mail.gmail.com> References: <4830B02B.8030308@earthlink.net> <1211184133.6597.62.camel@localhost> <9457e7c80805190158q1f19e39dua6aa3055097f9e3@mail.gmail.com> <3d375d730805190826y3179abc9i92b2636b7035da82@mail.gmail.com> <9457e7c80805190850u608e4eefl5941bc3d170aef5a@mail.gmail.com> <3d375d730805191004p46615398qa67dae44e72902eb@mail.gmail.com> <9457e7c80805191019j122a7f3fv5ec1896be446a597@mail.gmail.com> Message-ID: <3d375d730805191031m28f10f5wbb24cc106043e9d8@mail.gmail.com> On Mon, May 19, 2008 at 12:19 PM, St?fan van der Walt wrote: > You may edit the code directly, if you so wish, but the changes won't > immediately be visible to other editors working via the wiki. Would it help if you had an SVN checkin hook? > So > rather than a technological limitation, I see it as a matter of > courtesy. I'm wondering if there isn't a technical solution, though. Is the code available somewhere? I can answer these questions myself if it is. > But don't worry about it, I'll merge the docstrings as > necessary. Thank you. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From pav at iki.fi Mon May 19 15:26:35 2008 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 19 May 2008 22:26:35 +0300 Subject: [SciPy-dev] Docstring editing request on SciPy Doc Wiki In-Reply-To: <3d375d730805191031m28f10f5wbb24cc106043e9d8@mail.gmail.com> References: <4830B02B.8030308@earthlink.net> <1211184133.6597.62.camel@localhost> <9457e7c80805190158q1f19e39dua6aa3055097f9e3@mail.gmail.com> <3d375d730805190826y3179abc9i92b2636b7035da82@mail.gmail.com> <9457e7c80805190850u608e4eefl5941bc3d170aef5a@mail.gmail.com> <3d375d730805191004p46615398qa67dae44e72902eb@mail.gmail.com> <9457e7c80805191019j122a7f3fv5ec1896be446a597@mail.gmail.com> <3d375d730805191031m28f10f5wbb24cc106043e9d8@mail.gmail.com> Message-ID: <1211225195.8349.60.camel@localhost.localdomain> ma, 2008-05-19 kello 12:31 -0500, Robert Kern kirjoitti: > On Mon, May 19, 2008 at 12:19 PM, St?fan van der Walt wrote: [clip] > > So rather than a technological limitation, I see it as a matter of > > courtesy. > > I'm wondering if there isn't a technical solution, though. Is the code > available somewhere? I can answer these questions myself if it is. See here: http://sd-2116.dedibox.fr/wiki/numpydoc/ (Note that you can bzr clone this.) Currently the workflow is roughly like so: 1) Collect docstrings: (see pydoc.dtd) pydoc_moin collect source-tree module > docstrings.xml 2) Upload docstrings (overwrite current ones): pydoc_moin moin-upload-local /moin_location < docstrings.xml 3) Download docstrings pydoc_moin moin-collect-local /moin_location > new_docstrings.xml 4) Replace docstrings in sources, one bzr commit per docstring pydoc_moin bzr docstrings.xml new_docstrings.xml bzr-tree 4b) Or make a patch ? pydoc_moin patch docstrings.xml new_docstrings.xml source-tree > docs.patch What happens after this is up to the person doing the merge. If the source tree is indeed managed with bzr, one could use its merge capabilities for doing all the merges, conflict resolution, or cherry picking. 
At the moment it seems to me that the "correct" workflow for this case would be to - Always keep the wiki bzr branch in sync with the stuff in the wiki. ? (Wiki needs to be locked at some point so that we don't accidentally overwrite change not in bzr.) - Periodically merge wiki's bzr branch with SVN, resolve conflicts, and push changes back to the wiki. (But don't push changes in wiki back to SVN.) - At times, cherry-pick good docstrings to SVN. I'm not sure bzr is smart enough to mark these as no-op merges when merging the changes back to wiki's bzr branch. But I think St?fan had objections to this workflow? Some drawbacks in the above mechanism are: a) Everything must be merged at once. b) Responsibility for a single merge is concentrated on a single person. A fix for a) could be to do 3-way merging ourselves and not rely on bzr. We'd need to track the base revision and handle conflict resolution. Luckily, the code that would handle merging needs only to munge XML files and ask the user what to do on conflicts. If we want to allow anyone to handle conflict resolution (to get rid of b), then merging would need to be pushed down to the wiki level, which could lead to a mess. But it would probably be doable in a custom web-app. -- Pauli Virtanen From stefan at sun.ac.za Mon May 19 15:45:22 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 19 May 2008 21:45:22 +0200 Subject: [SciPy-dev] Docstring editing request on SciPy Doc Wiki In-Reply-To: <1211225195.8349.60.camel@localhost.localdomain> References: <4830B02B.8030308@earthlink.net> <1211184133.6597.62.camel@localhost> <9457e7c80805190158q1f19e39dua6aa3055097f9e3@mail.gmail.com> <3d375d730805190826y3179abc9i92b2636b7035da82@mail.gmail.com> <9457e7c80805190850u608e4eefl5941bc3d170aef5a@mail.gmail.com> <3d375d730805191004p46615398qa67dae44e72902eb@mail.gmail.com> <9457e7c80805191019j122a7f3fv5ec1896be446a597@mail.gmail.com> <3d375d730805191031m28f10f5wbb24cc106043e9d8@mail.gmail.com> <1211225195.8349.60.camel@localhost.localdomain> Message-ID: <9457e7c80805191245k3ee51836ka48d05ac55525a9a@mail.gmail.com> 2008/5/19 Pauli Virtanen : > Currently the workflow is roughly like so: > > 1) Collect docstrings: (see pydoc.dtd) > > pydoc_moin collect source-tree module > docstrings.xml > > 2) Upload docstrings (overwrite current ones): > > pydoc_moin moin-upload-local /moin_location < docstrings.xml > > 3) Download docstrings > > pydoc_moin moin-collect-local /moin_location > new_docstrings.xml > > 4) Replace docstrings in sources, one bzr commit per docstring > > pydoc_moin bzr docstrings.xml new_docstrings.xml bzr-tree > > 4b) Or make a patch > > ? pydoc_moin patch docstrings.xml new_docstrings.xml source-tree > docs.patch > > > What happens after this is up to the person doing the merge. > > If the source tree is indeed managed with bzr, one could use its merge > capabilities for doing all the merges, conflict resolution, or cherry > picking. At the moment it seems to me that the "correct" workflow for > this case would be to > > - Always keep the wiki bzr branch in sync with the stuff in the wiki. > ? (Wiki needs to be locked at some point so that we don't > accidentally > overwrite change not in bzr.) > > - Periodically merge wiki's bzr branch with SVN, resolve conflicts, > and push changes back to the wiki. > (But don't push changes in wiki back to SVN.) > > - At times, cherry-pick good docstrings to SVN. 
> I'm not sure bzr is smart enough to mark these as no-op merges > when merging the changes back to wiki's bzr branch. > > But I think St?fan had objections to this workflow? I was concerned about the situation arising where we have very different docstrings in SVN and on the wiki. As long as we keep the wiki in sync with SVN, I am happy. Regards St?fan From nmarais at sun.ac.za Mon May 19 16:45:13 2008 From: nmarais at sun.ac.za (Neilen Marais) Date: Mon, 19 May 2008 20:45:13 +0000 (UTC) Subject: [SciPy-dev] petsc4py support? (segfault) Message-ID: Hi, I'm trying out petsc4py, but running into trouble. While it's hosted on http://code.google.com/p/petsc4py/, there doesn't seem to be any mailing lists/forums. Where is the best place to get support? Going out on a limb, I'll post my problem here for now :) I reported an ubuntu bug here: https://bugs.launchpad.net/ubuntu/+source/petsc4py/ +bug/232036 I also tried compiling myself with SVN numpy. I edited setup.cfg in the petsc4py-0.7.5 build dir: [config] petsc_dir = /usr/lib/petsc petsc_arch = linux-gnu-c-opt [build_ext] debug = 0 [sdist] force_manifest=1 [bdist_rpm] group = Libraries/Python vendor = CIMEC packager = Lisandro Dalcin doc_files = README.txt docs/*.txt config reads as follows: $ python setup.py config running config ---------------------------------------------------------------------- PETSC_DIR: /usr/lib/petsc ---------------------------------------------------------------------- PETSC_ARCH: linux-gnu-c-opt language: CONLY compiler: /usr/bin/mpicc scalar-type: real precision: double ---------------------------------------------------------------------- and when I try and import petsc I get a segfault $ python Python 2.5.2 (r252:60911, Apr 21 2008, 11:17:30) [GCC 4.2.3 (Ubuntu 4.2.3-2ubuntu7)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import petsc4py.PETSc Segmentation fault This is on an ubuntu hardy (8.04) AMD64 machine: $ uname -a Linux genugtig 2.6.24-16-generic #1 SMP Thu Apr 10 12:47:45 UTC 2008 x86_64 GNU/Linux Regards Neilen From robert.kern at gmail.com Mon May 19 16:54:24 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 19 May 2008 15:54:24 -0500 Subject: [SciPy-dev] petsc4py support? (segfault) In-Reply-To: References: Message-ID: <3d375d730805191354v21eff53eo8598d8c555773fd0@mail.gmail.com> On Mon, May 19, 2008 at 3:45 PM, Neilen Marais wrote: > Hi, > > I'm trying out petsc4py, but running into trouble. While it's hosted on > http://code.google.com/p/petsc4py/, there doesn't seem to be any mailing > lists/forums. Where is the best place to get support? Here's a reasonable place for now. Ondrej Certik and Robert Cimrman show up fairly regularly here, and I've seen Lisando Dalcin occasionally. Unfortunately, I don't have any specific help for you. However, if you run python under gdb, you can get a backtrace which could help you identify the problem. This is what it looks like on my Mac; it will look different on Ubuntu, but the major features (run, continue) should be done in the same sequence: $ gdb python GNU gdb 6.3.50-20050815 (Apple version gdb-768) (Tue Oct 2 04:07:49 UTC 2007) Copyright 2004 Free Software Foundation, Inc. GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details. 
This GDB was configured as "i386-apple-darwin"...Reading symbols for shared libraries .. done (gdb) run Starting program: /usr/local/bin/python Reading symbols for shared libraries +. done Program received signal SIGTRAP, Trace/breakpoint trap. 0x8fe01010 in __dyld__dyld_start () (gdb) continue Continuing. Reading symbols for shared libraries .. done Python 2.5.1 (r251:54869, Apr 18 2007, 22:08:04) [GCC 4.0.1 (Apple Computer, Inc. build 5367)] on darwin Type "help", "copyright", "credits" or "license" for more information. Reading symbols for shared libraries .. done >>> import petsc4py.PETSc -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From nmarais at sun.ac.za Mon May 19 17:00:56 2008 From: nmarais at sun.ac.za (Neilen Marais) Date: Mon, 19 May 2008 21:00:56 +0000 (UTC) Subject: [SciPy-dev] petsc4py support? (segfault) References: <3d375d730805191354v21eff53eo8598d8c555773fd0@mail.gmail.com> Message-ID: On Mon, 19 May 2008 15:54:24 -0500, Robert Kern wrote: > However, if you run > python under gdb, you can get a backtrace which could help you identify > the problem. Hi Robert, This is what I see: $ gdb python GNU gdb 6.8-debian Copyright (C) 2008 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-linux-gnu"... (gdb) run Starting program: /usr/bin/python [Thread debugging using libthread_db enabled] Python 2.5.2 (r252:60911, Apr 21 2008, 11:17:30) [GCC 4.2.3 (Ubuntu 4.2.3-2ubuntu7)] on linux2 Type "help", "copyright", "credits" or "license" for more information. [New Thread 0x7f0b081746e0 (LWP 12261)] >>> import petsc4py.PETSc Program received signal SIGSEGV, Segmentation fault. [Switching to Thread 0x7f0b081746e0 (LWP 12261)] 0x00007f0b02f38b8b in _int_malloc () from /usr/lib/libopen-pal.so.0 (gdb) Thanks Neilen From cohen at slac.stanford.edu Mon May 19 16:59:10 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Mon, 19 May 2008 22:59:10 +0200 Subject: [SciPy-dev] petsc4py support? (segfault) In-Reply-To: References: <3d375d730805191354v21eff53eo8598d8c555773fd0@mail.gmail.com> Message-ID: <4831EA1E.6040406@slac.stanford.edu> and now type where at the gdb prompt.... Johann Neilen Marais wrote: > On Mon, 19 May 2008 15:54:24 -0500, Robert Kern wrote: > > >> However, if you run >> python under gdb, you can get a backtrace which could help you identify >> the problem. >> > > > Hi Robert, > > This is what I see: > > $ gdb python > GNU gdb 6.8-debian > Copyright (C) 2008 Free Software Foundation, Inc. > License GPLv3+: GNU GPL version 3 or later gpl.html> > This is free software: you are free to change and redistribute it. > There is NO WARRANTY, to the extent permitted by law. Type "show copying" > and "show warranty" for details. > This GDB was configured as "x86_64-linux-gnu"... > (gdb) run > Starting program: /usr/bin/python > [Thread debugging using libthread_db enabled] > Python 2.5.2 (r252:60911, Apr 21 2008, 11:17:30) > [GCC 4.2.3 (Ubuntu 4.2.3-2ubuntu7)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > [New Thread 0x7f0b081746e0 (LWP 12261)] > >>>> import petsc4py.PETSc >>>> > > Program received signal SIGSEGV, Segmentation fault. 
> [Switching to Thread 0x7f0b081746e0 (LWP 12261)] > 0x00007f0b02f38b8b in _int_malloc () from /usr/lib/libopen-pal.so.0 > (gdb) > > Thanks > Neilen > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From robert.kern at gmail.com Mon May 19 17:07:22 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 19 May 2008 16:07:22 -0500 Subject: [SciPy-dev] petsc4py support? (segfault) In-Reply-To: References: <3d375d730805191354v21eff53eo8598d8c555773fd0@mail.gmail.com> Message-ID: <3d375d730805191407j316d9fd3i76b71f2f1fc20a3@mail.gmail.com> On Mon, May 19, 2008 at 4:00 PM, Neilen Marais wrote: > On Mon, 19 May 2008 15:54:24 -0500, Robert Kern wrote: > >> However, if you run >> python under gdb, you can get a backtrace which could help you identify >> the problem. > > Hi Robert, > > This is what I see: > > $ gdb python > GNU gdb 6.8-debian > Copyright (C) 2008 Free Software Foundation, Inc. > License GPLv3+: GNU GPL version 3 or later gpl.html> > This is free software: you are free to change and redistribute it. > There is NO WARRANTY, to the extent permitted by law. Type "show copying" > and "show warranty" for details. > This GDB was configured as "x86_64-linux-gnu"... > (gdb) run > Starting program: /usr/bin/python > [Thread debugging using libthread_db enabled] > Python 2.5.2 (r252:60911, Apr 21 2008, 11:17:30) > [GCC 4.2.3 (Ubuntu 4.2.3-2ubuntu7)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > [New Thread 0x7f0b081746e0 (LWP 12261)] >>>> import petsc4py.PETSc > > Program received signal SIGSEGV, Segmentation fault. > [Switching to Thread 0x7f0b081746e0 (LWP 12261)] > 0x00007f0b02f38b8b in _int_malloc () from /usr/lib/libopen-pal.so.0 > (gdb) Whoops! I left out the important bit: Now type "bt" to get a backtrace. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From nmarais at sun.ac.za Mon May 19 17:13:55 2008 From: nmarais at sun.ac.za (Neilen Marais) Date: Mon, 19 May 2008 21:13:55 +0000 (UTC) Subject: [SciPy-dev] petsc4py support? (segfault) References: <3d375d730805191354v21eff53eo8598d8c555773fd0@mail.gmail.com> <4831EA1E.6040406@slac.stanford.edu> Message-ID: On Mon, 19 May 2008 22:59:10 +0200, Johann Cohen-Tanugi wrote: > and now type where at the gdb prompt.... Johann Sorry, bit fuzzy in the brain today :) Remembered to install the dbg pacakges, but not do the backtrace hehe. $ gdb python GNU gdb 6.8-debian Copyright (C) 2008 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "x86_64-linux-gnu"... (gdb) run Starting program: /usr/bin/python [Thread debugging using libthread_db enabled] Python 2.5.2 (r252:60911, Apr 21 2008, 11:17:30) [GCC 4.2.3 (Ubuntu 4.2.3-2ubuntu7)] on linux2 Type "help", "copyright", "credits" or "license" for more information. [New Thread 0x7fef39d086e0 (LWP 14775)] >>> import petsc4py.PETSc Program received signal SIGSEGV, Segmentation fault. 
[Switching to Thread 0x7fef39d086e0 (LWP 14775)] 0x00007fef34accb8b in _int_malloc () from /usr/lib/libopen-pal.so.0 (gdb) where #0 0x00007fef34accb8b in _int_malloc () from /usr/lib/libopen-pal.so.0 #1 0x00007fef34acde58 in malloc () from /usr/lib/libopen-pal.so.0 #2 0x00007fef34aafbfb in opal_class_initialize () from /usr/lib/libopen-pal.so.0 #3 0x00007fef34ac3e2b in opal_malloc_init () from /usr/lib/libopen- pal.so.0 #4 0x00007fef34ab0d97 in opal_init_util () from /usr/lib/libopen-pal.so.0 #5 0x00007fef34ab0e76 in opal_init () from /usr/lib/libopen-pal.so.0 #6 0x00007fef34f94723 in ompi_mpi_init () from /usr/lib/libmpi.so.0 #7 0x00007fef34fb60d6 in PMPI_Init () from /usr/lib/libmpi.so.0 #8 0x00007fef36fd090e in PetscInitialize () from /usr/lib/petsc/lib/linux-gnu-c-opt/libpetsc.so.2.3.3 #9 0x00007fef383c224b in _wrap_PetscInitialize (self=, args=) at petsc/lib/ext/petscext_wrap.c:6784 #10 0x0000000000488a88 in PyEval_EvalFrameEx (f=0x7e92e0, throwflag=) at ../Python/ceval.c:3557 #11 0x000000000048a376 in PyEval_EvalCodeEx (co=0x7fef39bdb558, globals=, locals=, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ../Python/ceval.c:2836 #12 0x000000000048a492 in PyEval_EvalCode (co=0x0, globals=0x0, locals=0xff4ba000) at ../Python/ceval.c:494 #13 0x00000000004a0a00 in PyImport_ExecCodeModuleEx ( name=0x7fff41d38f40 "petsc4py.lib._petsc", co=0x7fef39bdb558, ---Type to continue, or q to quit--- pathname=0x7fff41d36de0 "/usr/lib/python2.5/site-packages/petsc4py/ lib/_petsc.pyc") at ../Python/import.c:675 #14 0x00000000004a1230 in load_source_module ( name=0x7fff41d38f40 "petsc4py.lib._petsc", pathname=0x7fff41d36de0 "/usr/lib/python2.5/site-packages/petsc4py/ lib/_petsc.pyc", fp=) at ../Python/import.c:959 #15 0x00000000004a1809 in import_submodule (mod=0x7fef39bd5638, subname=0x7fef39cc5114 "_petsc", fullname=0x7fff41d38f40 "petsc4py.lib._petsc") at ../Python/ import.c:2400 #16 0x00000000004a1ad1 in ensure_fromlist (mod=0x7fef39bd5638, fromlist=0x7fef39cf7d50, buf=0x7fff41d38f40 "petsc4py.lib._petsc", buflen=12, recursive=0) at ../Python/import.c:2311 #17 0x00000000004a2115 in import_module_level (name=0x0, globals=, locals=, fromlist=0x7fef39cf7d50, level=) at ../Python/import.c:2038 #18 0x00000000004a23c5 in PyImport_ImportModuleLevel ( name=0x7fef39cc3b7c "petsc4py.lib", globals=0x7dbef0, locals=0x7dbef0, fromlist=0x7fef39cf7d50, level=-1) at ../Python/import.c:2072 #19 0x0000000000481a19 in builtin___import__ (self=, args=, kwds=) at ../Python/bltinmodule.c:47 #20 0x0000000000417e73 in PyObject_Call (func=0x0, arg=0x0, kw=0xff4ba000) ---Type to continue, or q to quit--- at ../Objects/abstract.c:1861 #21 0x0000000000481fc2 in PyEval_CallObjectWithKeywords (func=0x7fef39cd55f0, arg=0x7fef39cb6940, kw=0x0) at ../Python/ceval.c:3442 #22 0x0000000000485b61 in PyEval_EvalFrameEx (f=0x7e8ba0, throwflag=) at ../Python/ceval.c:2067 #23 0x000000000048a376 in PyEval_EvalCodeEx (co=0x7fef39cbf5d0, globals=, locals=, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ../Python/ceval.c:2836 #24 0x000000000048a492 in PyEval_EvalCode (co=0x0, globals=0x0, locals=0xff4ba000) at ../Python/ceval.c:494 #25 0x00000000004a0a00 in PyImport_ExecCodeModuleEx ( name=0x7fff41d3c500 "petsc4py.Error", co=0x7fef39cbf5d0, pathname=0x7fff41d3a3b0 "/usr/lib/python2.5/site-packages/petsc4py/ Error.pyc") at ../Python/import.c:675 #26 0x00000000004a1230 in load_source_module ( name=0x7fff41d3c500 "petsc4py.Error", pathname=0x7fff41d3a3b0 
"/usr/lib/python2.5/site-packages/petsc4py/ Error.pyc", fp=) at ../Python/import.c:959 #27 0x00000000004a1809 in import_submodule (mod=0x7fef39cc3a60, subname=0x7fff41d3c509 "Error", fullname=0x7fff41d3c500 "petsc4py.Error") at ../Python/import.c:2400 #28 0x00000000004a1cdb in load_next (mod=0x7fef39cc3a60, ---Type to continue, or q to quit--- altmod=0x7fef39cc3a60, p_name=, buf=0x7fff41d3c500 "petsc4py.Error", p_buflen=0x7fff41d3c4f8) at ../Python/import.c:2220 #29 0x00000000004a1f5d in import_module_level (name=0x0, globals=, locals=, fromlist=0x7fef39cbe210, level=) at ../Python/import.c:2008 #30 0x00000000004a23c5 in PyImport_ImportModuleLevel ( name=0x7fef39cc3de4 "petsc4py.Error", globals=0x7dbdd0, locals=0x7dbdd0, fromlist=0x7fef39cbe210, level=-1) at ../Python/import.c:2072 #31 0x0000000000481a19 in builtin___import__ (self=, args=, kwds=) at ../Python/bltinmodule.c:47 #32 0x0000000000417e73 in PyObject_Call (func=0x0, arg=0x0, kw=0xff4ba000) at ../Objects/abstract.c:1861 #33 0x0000000000481fc2 in PyEval_CallObjectWithKeywords (func=0x7fef39cd55f0, arg=0x7fef39c9baa0, kw=0x0) at ../Python/ceval.c:3442 #34 0x0000000000485b61 in PyEval_EvalFrameEx (f=0x7e7300, throwflag=) at ../Python/ceval.c:2067 #35 0x000000000048a376 in PyEval_EvalCodeEx (co=0x7fef39cabaf8, globals=, locals=, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ../Python/ceval.c:2836 ---Type to continue, or q to quit--- #36 0x000000000048a492 in PyEval_EvalCode (co=0x0, globals=0x0, locals=0xff4ba000) at ../Python/ceval.c:494 #37 0x00000000004a0a00 in PyImport_ExecCodeModuleEx ( name=0x7fff41d3fac0 "petsc4py.PETSc", co=0x7fef39cabaf8, pathname=0x7fff41d3d970 "/usr/lib/python2.5/site-packages/petsc4py/ PETSc.pyc") at ../Python/import.c:675 #38 0x00000000004a1230 in load_source_module ( name=0x7fff41d3fac0 "petsc4py.PETSc", pathname=0x7fff41d3d970 "/usr/lib/python2.5/site-packages/petsc4py/ PETSc.pyc", fp=) at ../Python/import.c:959 #39 0x00000000004a1809 in import_submodule (mod=0x7fef39cc3a60, subname=0x7fff41d3fac9 "PETSc", fullname=0x7fff41d3fac0 "petsc4py.PETSc") at ../Python/import.c:2400 #40 0x00000000004a1cdb in load_next (mod=0x7fef39cc3a60, altmod=0x7fef39cc3a60, p_name=, buf=0x7fff41d3fac0 "petsc4py.PETSc", p_buflen=0x7fff41d3fab8) at ../Python/import.c:2220 #41 0x00000000004a1f5d in import_module_level (name=0x0, globals=, locals=, fromlist=0x72c460, level=) at ../Python/ import.c:2008 #42 0x00000000004a23c5 in PyImport_ImportModuleLevel ( name=0x7fef39cc3a2c "petsc4py.PETSc", globals=0x77e2e0, locals=0x77e2e0, fromlist=0x72c460, level=-1) at ../Python/import.c:2072 ---Type to continue, or q to quit--- #43 0x0000000000481a19 in builtin___import__ (self=, args=, kwds=) at ../Python/bltinmodule.c:47 #44 0x0000000000417e73 in PyObject_Call (func=0x0, arg=0x0, kw=0xff4ba000) at ../Objects/abstract.c:1861 #45 0x0000000000481fc2 in PyEval_CallObjectWithKeywords (func=0x7fef39cd55f0, arg=0x7fef39c9b940, kw=0x0) at ../Python/ceval.c:3442 #46 0x0000000000485b61 in PyEval_EvalFrameEx (f=0x7e6f00, throwflag=) at ../Python/ceval.c:2067 #47 0x000000000048a376 in PyEval_EvalCodeEx (co=0x7fef39cb3300, globals=, locals=, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ../Python/ceval.c:2836 #48 0x000000000048a492 in PyEval_EvalCode (co=0x0, globals=0x0, locals=0xff4ba000) at ../Python/ceval.c:494 #49 0x00000000004ac459 in PyRun_InteractiveOneFlags (fp=, filename=0x4e5c58 "", flags=0x7fff41d410a0) at ../Python/pythonrun.c:1273 #50 0x00000000004ac664 in 
PyRun_InteractiveLoopFlags (fp=0x7fef392796a0, filename=0x4e5c58 "", flags=0x7fff41d410a0) at ../Python/pythonrun.c:723 #51 0x00000000004ac76a in PyRun_AnyFileExFlags (fp=0x7fef392796a0, filename=0x4e5c58 "", closeit=0, flags=0x7fff41d410a0) ---Type to continue, or q to quit--- at ../Python/pythonrun.c:692 #52 0x00000000004145ad in Py_Main (argc=, argv=) at ../Modules/main.c:523 #53 0x00007fef38f3b1c4 in __libc_start_main () from /lib/libc.so.6 #54 0x0000000000413b29 in _start () From nmarais at sun.ac.za Mon May 19 17:41:36 2008 From: nmarais at sun.ac.za (Neilen Marais) Date: Mon, 19 May 2008 21:41:36 +0000 (UTC) Subject: [SciPy-dev] iterative callback function signature Message-ID: Hi, I'm trying to monitor the residual while doing an iterative solve. In iterative.py I see: x, iter_, resid, info, ndx1, ndx2, sclr1, sclr2, ijob = \ revcom(b, x, work, iter_, resid, info, ndx1, ndx2, ijob) if callback is not None and iter_ > olditer: callback(x) I'm a bit uneducated about the meaning of all the iterative variables, but is there a way I can obtain the residual from x here? Otherwise, would it not make sense to allow more complicated callback functions? Thanks Neilen From nmb at wartburg.edu Tue May 20 10:31:52 2008 From: nmb at wartburg.edu (Neil Martinsen-Burrell) Date: Tue, 20 May 2008 09:31:52 -0500 Subject: [SciPy-dev] Doc Marathon: Ooh, ooh, pick me, pick me Message-ID: <4832E0D8.5050608@wartburg.edu> NeilMartinsenBurrell I love writing documentation. I don't have the time or skills to write code for scipy yet, or fix many bugs, but I used numpy/scipy with my students in a class this term and I'd do anything for a good reference guide. My goal is to do at least one docstring per day for the summer (here's to impending commencements!) -Neil -- "Any teacher that can be replaced by a computer should be replaced by a computer." From stefan at sun.ac.za Tue May 20 11:17:37 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 20 May 2008 17:17:37 +0200 Subject: [SciPy-dev] Doc Marathon: Ooh, ooh, pick me, pick me In-Reply-To: <4832E0D8.5050608@wartburg.edu> References: <4832E0D8.5050608@wartburg.edu> Message-ID: <9457e7c80805200817q40ca9ddcw3d019a01a2a885f2@mail.gmail.com> Hi Neil 2008/5/20 Neil Martinsen-Burrell : > NeilMartinsenBurrell > > I love writing documentation. I don't have the time or skills to write > code for scipy yet, or fix many bugs, but I used numpy/scipy with my > students in a class this term and I'd do anything for a good reference > guide. My goal is to do at least one docstring per day for the summer > (here's to impending commencements!) Welcome on board! I look forward to your contributions. Regards St?fan From wnbell at gmail.com Tue May 20 16:52:36 2008 From: wnbell at gmail.com (Nathan Bell) Date: Tue, 20 May 2008 15:52:36 -0500 Subject: [SciPy-dev] iterative callback function signature In-Reply-To: References: Message-ID: On Mon, May 19, 2008 at 4:41 PM, Neilen Marais wrote: > > if callback is not None and iter_ > olditer: > callback(x) > > I'm a bit uneducated about the meaning of all the iterative variables, > but is there a way I can obtain the residual from x here? Otherwise, > would it not make sense to allow more complicated callback functions? > Neilen, since you know A and b when you call the iterative solve you can compute residuals like so: A = ..... #some matrix b = ..... 
# some rhs residuals = [] def callback(x): residuals.append(norm(b - A*x)) x,info = cg(A, b, callback=callback) # residuals now contains ||b-A*x|| at each iteration -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From kwgoodman at gmail.com Wed May 21 09:48:46 2008 From: kwgoodman at gmail.com (Keith Goodman) Date: Wed, 21 May 2008 06:48:46 -0700 Subject: [SciPy-dev] _nanmedian Message-ID: I think _nanmedian in stats.py uses an unnecessary sort. Perhaps it is left over from a time when _nanmedian calculated the median instead of calling median. From sinclaird at ukzn.ac.za Wed May 21 06:10:32 2008 From: sinclaird at ukzn.ac.za (Scott Sinclair) Date: Wed, 21 May 2008 12:10:32 +0200 Subject: [SciPy-dev] ANN: NumPy/SciPy Documentation Marathon 2008 Message-ID: <483411380200009F0002CDD6@dbnsmtp.ukzn.ac.za> Hi, My user name registered on the Wiki is ScottSinclair. I can offer to review and edit (items 2 & 3 below) and will also have a go at a few docstrings and doctests (items 1 & 4). Cheers, Scott >>> Joe Harrington 05/17/08 7:45 AM >>> NUMPY/SCIPY DOCUMENTATION MARATHON 2008 There are several ways you can help: 1. Write some docstrings on the wiki! Many people can do this, many more than can write code for the package itself. However, you must know numpy, the function group, and the function you are writing well. You should be familiar with the concept of a reference page and write in that concise style. We'll do tutorial docs in another project at a later date. See the instructions on the wiki for guidelines and format. 2. Review others' docstrings and leave comments on their wiki pages. 3. Proofread docstrings. Make sure they are correct, complete, and concise. Fix grammar. 4. Write examples ("doctests"). Even if you are not a top-notch English writer, you can help by producing a code snippet of a few lines that demonstrates a function. It is fine for them to go into the docstring templates before the actual text. 5. Write a new help function that optionally produces ASCII or points the user's PDF or HTML reader to the right page (either local or global). 6. If you are in a position to hire someone, such as a knowledgeable student or short-term consultant, hire them to work on the tasks above for the summer. We can provide supervision to them or guidance to you if you like. The home for this project is here: http://scipy.org/Developer_Zone/DocMarathon2008 This is not a sprint. It is a marathon, and this time we are going to finish. We hope you will join us! Please find our Email Disclaimer here: http://www.ukzn.ac.za/disclaimer/ From stefan at sun.ac.za Wed May 21 16:05:11 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Wed, 21 May 2008 22:05:11 +0200 Subject: [SciPy-dev] ANN: NumPy/SciPy Documentation Marathon 2008 In-Reply-To: <483411380200009F0002CDD6@dbnsmtp.ukzn.ac.za> References: <483411380200009F0002CDD6@dbnsmtp.ukzn.ac.za> Message-ID: <9457e7c80805211305i38092091y7ace0901d1bc2931@mail.gmail.com> 2008/5/21 Scott Sinclair : > > My user name registered on the Wiki is ScottSinclair. > > I can offer to review and edit (items 2 & 3 below) and will also have a go at a few docstrings and doctests (items 1 & 4). Thank you, Scott! Your name has been added to the editors list. 
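A self-contained version of the residual-monitoring callback sketched by Nathan Bell above might look as follows; the matrix, right-hand side and problem size are made-up test data, and cg is assumed to be importable from scipy.sparse.linalg (its module path has moved between SciPy releases).

import numpy as np
from numpy.linalg import norm
from scipy.sparse.linalg import cg

n = 100
A = np.diag(np.arange(1.0, n + 1.0))   # a simple symmetric positive definite test matrix
b = np.ones(n)                         # some right-hand side

residuals = []

def callback(xk):
    # cg() hands the callback the current iterate, not the residual,
    # so the residual is recomputed here from A and b
    residuals.append(norm(b - A.dot(xk)))

x, info = cg(A, b, callback=callback)
# residuals now holds ||b - A*x_k|| for each iteration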
Regards St?fan From bsouthey at gmail.com Wed May 21 16:50:20 2008 From: bsouthey at gmail.com (Bruce Southey) Date: Wed, 21 May 2008 15:50:20 -0500 Subject: [SciPy-dev] ANN: NumPy/SciPy Documentation Marathon 2008 In-Reply-To: <9457e7c80805211305i38092091y7ace0901d1bc2931@mail.gmail.com> References: <483411380200009F0002CDD6@dbnsmtp.ukzn.ac.za> <9457e7c80805211305i38092091y7ace0901d1bc2931@mail.gmail.com> Message-ID: Hi, I signed up as bsouthey. Is there a list of all the names/attributes/functions available in the various NumPy and SciPy name spaces? For example, len(dir(numpy)) is 516 with version '1.1.0.dev5133'. Regards Bruce On Wed, May 21, 2008 at 3:05 PM, St?fan van der Walt wrote: > 2008/5/21 Scott Sinclair : >> >> My user name registered on the Wiki is ScottSinclair. >> >> I can offer to review and edit (items 2 & 3 below) and will also have a go at a few docstrings and doctests (items 1 & 4). > > Thank you, Scott! Your name has been added to the editors list. > > Regards > St?fan > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From stefan at sun.ac.za Wed May 21 17:02:44 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Wed, 21 May 2008 23:02:44 +0200 Subject: [SciPy-dev] ANN: NumPy/SciPy Documentation Marathon 2008 In-Reply-To: References: <483411380200009F0002CDD6@dbnsmtp.ukzn.ac.za> <9457e7c80805211305i38092091y7ace0901d1bc2931@mail.gmail.com> Message-ID: <9457e7c80805211402uc296de3tb427fafd51145360@mail.gmail.com> Hi Bruce 2008/5/21 Bruce Southey : > I signed up as bsouthey. I added you to the editorgroup. WikiLoginNames are normally CamelCase, but I don't think that should make a difference. > Is there a list of all the names/attributes/functions available in the > various NumPy and SciPy name spaces? > For example, len(dir(numpy)) is 516 with version '1.1.0.dev5133'. Yes, click on the NumPy module at the bottom of the frontpage, and scroll down. Regards St?fan From cohen at slac.stanford.edu Wed May 21 18:13:11 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Thu, 22 May 2008 00:13:11 +0200 Subject: [SciPy-dev] about data sets Message-ID: <48349E77.5060508@slac.stanford.edu> hello, well I apologize in advance for jumping ahead, this email can anyway always be set aside for later.... I just had the opportunity to use loadtxt today and looked for the docstring with the marathon in mind. And of course it is a non working example as the data file that it is supposed to load does not exist.... I noticed some time ago that David Cournapeau (I think) had started a discussion about this at http://scipy.org/scipy/scikits/wiki/DataSets .... I agree with him that loadable datasets in R are a big plus of this software. I guess that in principle I could create a buffer to read back with loadtxt in the example, but given the long term goals of this doc marathon, I thought that it might be useful to raise this issue right away.... best, Johann From millman at berkeley.edu Wed May 21 20:18:48 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Wed, 21 May 2008 17:18:48 -0700 Subject: [SciPy-dev] about data sets In-Reply-To: <48349E77.5060508@slac.stanford.edu> References: <48349E77.5060508@slac.stanford.edu> Message-ID: On Wed, May 21, 2008 at 3:13 PM, Johann Cohen-Tanugi wrote: > I just had the opportunity to use loadtxt today and looked for the > docstring with the marathon in mind. 
And of course it is a non working > example as the data file that it is supposed to load does not exist.... > I noticed some time ago that David Cournapeau (I think) had started a > discussion about this at http://scipy.org/scipy/scikits/wiki/DataSets > .... I agree with him that loadable datasets in R are a big plus of this > software. > I guess that in principle I could create a buffer to read back with > loadtxt in the example, but given the long term goals of this doc > marathon, I thought that it might be useful to raise this issue right > away.... I am not entirely sure that the discussion that David started is applicable in this case. In the loadtxt example, it is showing how to use a data io function, so it doesn't make sense to use the kind of interface that was being proposed in David's discussion. You may be more interested in this: http://projects.scipy.org/scipy/numpy/browser/trunk/numpy/lib/_datasource.py That way the data can be hosted on a remote website. -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From robert.kern at gmail.com Wed May 21 20:23:14 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 21 May 2008 19:23:14 -0500 Subject: [SciPy-dev] about data sets In-Reply-To: References: <48349E77.5060508@slac.stanford.edu> Message-ID: <3d375d730805211723h1f611cf9x805423435cd039c2@mail.gmail.com> On Wed, May 21, 2008 at 7:18 PM, Jarrod Millman wrote: > On Wed, May 21, 2008 at 3:13 PM, Johann Cohen-Tanugi > wrote: >> I just had the opportunity to use loadtxt today and looked for the >> docstring with the marathon in mind. And of course it is a non working >> example as the data file that it is supposed to load does not exist.... >> I noticed some time ago that David Cournapeau (I think) had started a >> discussion about this at http://scipy.org/scipy/scikits/wiki/DataSets >> .... I agree with him that loadable datasets in R are a big plus of this >> software. >> I guess that in principle I could create a buffer to read back with >> loadtxt in the example, but given the long term goals of this doc >> marathon, I thought that it might be useful to raise this issue right >> away.... > > I am not entirely sure that the discussion that David started is > applicable in this case. In the loadtxt example, it is showing how to > use a data io function, so it doesn't make sense to use the kind of > interface that was being proposed in David's discussion. You may be > more interested in this: > http://projects.scipy.org/scipy/numpy/browser/trunk/numpy/lib/_datasource.py > That way the data can be hosted on a remote website. doctestable examples accessing the internet gives me heebie-jeebies. Frankly, I'd prefer that one just use a StringIO and keep the data small. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From cohen at slac.stanford.edu Thu May 22 01:31:06 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Thu, 22 May 2008 07:31:06 +0200 Subject: [SciPy-dev] about data sets In-Reply-To: <3d375d730805211723h1f611cf9x805423435cd039c2@mail.gmail.com> References: <48349E77.5060508@slac.stanford.edu> <3d375d730805211723h1f611cf9x805423435cd039c2@mail.gmail.com> Message-ID: <4835051A.8020500@slac.stanford.edu> Robert Kern wrote: > On Wed, May 21, 2008 at 7:18 PM, Jarrod Millman wrote: > >> On Wed, May 21, 2008 at 3:13 PM, Johann Cohen-Tanugi >> wrote: >> >>> I just had the opportunity to use loadtxt today and looked for the >>> docstring with the marathon in mind. And of course it is a non working >>> example as the data file that it is supposed to load does not exist.... >>> I noticed some time ago that David Cournapeau (I think) had started a >>> discussion about this at http://scipy.org/scipy/scikits/wiki/DataSets >>> .... I agree with him that loadable datasets in R are a big plus of this >>> software. >>> I guess that in principle I could create a buffer to read back with >>> loadtxt in the example, but given the long term goals of this doc >>> marathon, I thought that it might be useful to raise this issue right >>> away.... >>> >> I am not entirely sure that the discussion that David started is >> applicable in this case. In the loadtxt example, it is showing how to >> use a data io function, so it doesn't make sense to use the kind of >> interface that was being proposed in David's discussion. You may be >> more interested in this: >> http://projects.scipy.org/scipy/numpy/browser/trunk/numpy/lib/_datasource.py >> That way the data can be hosted on a remote website. >> > > doctestable examples accessing the internet gives me heebie-jeebies. > > I fully agree > Frankly, I'd prefer that one just use a StringIO and keep the data small. > Ok, so that would be the standard way to do it for doctestable examples...... I will change the wiki loadtxt example accordingly, and make a note of the normal way one would use this function. Johann From igorsyl at gmail.com Thu May 22 12:32:06 2008 From: igorsyl at gmail.com (Igor Sylvester) Date: Thu, 22 May 2008 11:32:06 -0500 Subject: [SciPy-dev] building issue in Windows Message-ID: Hi. I am trying to build scipy in Windows. I am using binutils/mingw and get the following link error. How is the official scipy distribution built? What version of gcc/g77/g++ is used? I am using the following binutils. gcc-core-3.4.5-20060117-1.tar.gz gcc-g77-3.4.5-20060117-1.tar.gz gcc-g++-3.4.5-20060117-1.tar.gz mingw-runtime-3.12.tar.gz w32api-3.9.tar.gz Thanks for your help! compile options: '-IPython2.5\include -Inumpy\core\src -Inumpy\core\inclue -IPython2.5\include -IPython2.5\C -c' gcc -mno-cygwin -O2 -Wall -Wstrict-prototypes -IPython2.5\include -Inumpycore\src -Inumpy\core\include -IPython2.5\include -IPython2.5\PC -c _configtest.c -o _configtest.o Found executable binutils\bin\gcc.exe g++ -mno-cygwin _configtest.o -LPython2.5\lib -LC:\ -LPython2.5\libs -lmsvcr71 -o _configtest.exe Found executable binutils\bin\g++.exe Unknown option: -Bdynamic (ignored) libmingw32.a(pseudo-reloc.o) : error LNK2019: unresolved external symbol __image_base__ referenced in function _pei386_runtime_relocator _configtest.exe : fatal error LNK1120: 1 unresolved externals collect2: ld returned 96 exit status failure. removing: _configtest.c _configtest.o -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From joe at nowsol.com Thu May 22 14:47:19 2008 From: joe at nowsol.com (Joe Covalesky) Date: Thu, 22 May 2008 11:47:19 -0700 Subject: [SciPy-dev] stderr missing in IPython + Emacs + WinXP Message-ID: <4835BFB7.9030408@nowsol.com> I'm using: WinXP Pro SP2 Emacs 22.1.1 ipython.el 2927 python-mode.el 4.75 I can get ipython nominally working in emacs ok so long as I use pyreadline 1.3. If I use more recent releases of pyreadline, none of the output prompts/ stdout shows up. Even with pyreadline1.3 (or with -noreadline), stderr doesn't show up, e.g.: In [2]: from sys import stderr,stdout In [3]: stdout.write('\nfoo\n') foo In [4]: stderr.write('\nbar\n') In [5]: Outside of emacs, this works just fine. Any ideas? Joe From robert.kern at gmail.com Thu May 22 14:52:18 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 22 May 2008 13:52:18 -0500 Subject: [SciPy-dev] stderr missing in IPython + Emacs + WinXP In-Reply-To: <4835BFB7.9030408@nowsol.com> References: <4835BFB7.9030408@nowsol.com> Message-ID: <3d375d730805221152h3af2723v918ee711f3b48400@mail.gmail.com> On Thu, May 22, 2008 at 1:47 PM, Joe Covalesky wrote: > I'm using: > > WinXP Pro SP2 > Emacs 22.1.1 > ipython.el 2927 > python-mode.el 4.75 > > I can get ipython nominally working in emacs ok so long as I use > pyreadline 1.3. If I use more recent releases of pyreadline, none of > the output prompts/ stdout shows up. You will want to ask on the IPython mailing list. http://projects.scipy.org/mailman/listinfo/ipython-user -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From pav at iki.fi Thu May 22 17:47:58 2008 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 23 May 2008 00:47:58 +0300 Subject: [SciPy-dev] Duplicated docstring in Scipy Doc Wiki In-Reply-To: <1211482254.8271.20.camel@localhost.localdomain> References: <4cdacb1d-b91d-46f3-b854-e500dd7e54ec@l42g2000hsc.googlegroups.com> <7135f262-7fd6-4985-b6e5-56602d367939@l42g2000hsc.googlegroups.com> <1211477421.8271.15.camel@localhost.localdomain> <1211482254.8271.20.camel@localhost.localdomain> Message-ID: <1211492878.8271.27.camel@localhost.localdomain> Hi all, to, 2008-05-22 kello 21:50 +0300, Pauli Virtanen kirjoitti: > to, 2008-05-22 kello 11:28 -0700, joep kirjoitti: > [clip] > > However, when I do a search on the DocWiki for example for arccos (or > > log, log10, exp, tan,...), I see it 9 times, and it is not clear which > > ones refer to the same docstring and where several imports of the same > > function are picked up separately, and which ones refer to actually > > different functions in the source. > [clip] > > A recommendation for docstring editing might be to verify duplicates > > and copy doc strings if the function is (almost) duplicated or > > triplicated in the numpy source and possibly cross link different > > versions. > > This is a problem with the tool on handling extension objects and > Pyrex-generated classes, and the editors shouldn't have to concern > themselves with it. I'll fix it and remove any unedited duplicates from > the wiki. I corrected this problem on the Scipy Doc Wiki today. You can see many automated changes in RecentChanges, but these only add links to the Numpy source codes (as requested by Joe, IIRC) and I verified manually that any real work wasn't clobbered. 
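A minimal self-contained loadtxt example in the spirit of the StringIO suggestion made earlier in the data-sets thread, with the "file" built inline so a doctest needs no external data (the numbers are made up):

import numpy as np
from io import StringIO    # on the Python 2 series this lived in the StringIO/cStringIO modules

data = StringIO("1.0 2.0 3.0\n4.0 5.0 6.0\n")
arr = np.loadtxt(data)
print(arr.shape)           # prints (2, 3)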
Pauli From prabhu at aero.iitb.ac.in Fri May 23 01:24:28 2008 From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran) Date: Fri, 23 May 2008 10:54:28 +0530 Subject: [SciPy-dev] [ANN] Mayavi sprint in July 2008 Message-ID: <4836550C.3050204@aero.iitb.ac.in> Hi, This is to announce a Mayavi sprint between 2nd July to 9th July, 2008. The sprint will be held at the Enthought Office, Austin Texas. Here are the details: Dates: 2nd July 2008 to 9th July 2008 Location: Enthought Office at Austin, TX Please do join us -- even if it is only for a few days. Both Ga?l Varoquaux and myself will be at the sprint on all days and there will be developers from Enthought joining us as well. Enthought is graciously hosting the sprint at their office. The agenda for the sprint is yet to be decided. Please contact me off-list if you plan on attending. Thanks! About Mayavi ------------ Mayavi seeks to provide easy and interactive visualization of 3D data. It is distributed under the terms of the new BSD license. It is built atop the Enthought Tool Suite and VTK. It provides an optional rich UI and a clean Pythonic API with native support for numpy arrays. Mayavi strives to be a reusable tool that can be embedded in your applications in different ways or combined with the envisage application-building framework to assemble domain-specific tools. For more information see here: http://code.enthought.com/projects/mayavi/ cheers, -- Prabhu Ramachandran http://www.aero.iitb.ac.in/~prabhu From alan at ajackson.org Fri May 23 22:29:27 2008 From: alan at ajackson.org (Alan Jackson) Date: Fri, 23 May 2008 21:29:27 -0500 Subject: [SciPy-dev] Documentation Project - Tables Message-ID: <20080523212927.7301c6a8@ajackson.org> Unless I'm doing something wrong (always a real possibility) the table functionality doesn't seem to be working in the wiki. Example in the kaiser windowing function. -- ----------------------------------------------------------------------- | Alan K. Jackson | To see a World in a Grain of Sand | | alan at ajackson.org | And a Heaven in a Wild Flower, | | www.ajackson.org | Hold Infinity in the palm of your hand | | Houston, Texas | And Eternity in an hour. - Blake | ----------------------------------------------------------------------- From pav at iki.fi Sat May 24 07:22:17 2008 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 24 May 2008 14:22:17 +0300 Subject: [SciPy-dev] Documentation Project - Tables In-Reply-To: <20080523212927.7301c6a8@ajackson.org> References: <20080523212927.7301c6a8@ajackson.org> Message-ID: <1211628137.7180.5.camel@localhost.localdomain> pe, 2008-05-23 kello 21:29 -0500, Alan Jackson kirjoitti: > Unless I'm doing something wrong (always a real possibility) the table > functionality doesn't seem to be working in the wiki. Example in the > kaiser windowing function. The docstrings should use the restructured text syntax throughout, also for the tables. The proper table syntax is here [1]. Please use the "simple" syntax, because it is easier to read and write. Like so: ==== ====================== beta Window shape ?==== ====================== 0 Rectangular 5 Similar to a Hamming 6 Similar to a Hanning 8.6 Similar to a Blackman ?==== ====================== I think you may have been confused by the Moinmoin syntax reference at the bottom of the edit pages; I'll see if I can change it to something more sensible. 
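As an illustration of how the simple-table syntax above might sit inside a docstring, here is a sketch; the function is a stand-in, not the actual numpy.kaiser reference documentation, and the delimiter rows are plain runs of '=' signs aligned with the column widths.

import numpy as np

def kaiser_window(M, beta):
    """Return an M-point Kaiser window (illustrative docstring only).

    Notes
    -----
    The ``beta`` parameter sets the window shape:

    ====  ======================
    beta  Window shape
    ====  ======================
    0     Rectangular
    5     Similar to a Hamming
    6     Similar to a Hanning
    8.6   Similar to a Blackman
    ====  ======================
    """
    return np.kaiser(M, beta)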
Pauli [1] http://docutils.sourceforge.net/docs/user/rst/quickref.html#tables From david at ar.media.kyoto-u.ac.jp Wed May 28 05:42:43 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 28 May 2008 18:42:43 +0900 Subject: [SciPy-dev] scikits.learn: manifold learning does not build Message-ID: <483D2913.7000401@ar.media.kyoto-u.ac.jp> Hi, The package manifold learning does not build cleanly on Linux, and I don't think it would on any other platform because of some directories mispelling at least. I could not make it work in a few minutes, and instead of messing with a package which is not mine, I have disabled it in the setup.py for the moment. Please make sure that a package is at least buildable when adding something in scikits.learn, otherwise, every user of scikits.learn is affected, even when they do not use the package. Thank you, David From matthieu.brucher at gmail.com Wed May 28 06:11:33 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 28 May 2008 12:11:33 +0200 Subject: [SciPy-dev] scikits.learn: manifold learning does not build In-Reply-To: <483D2913.7000401@ar.media.kyoto-u.ac.jp> References: <483D2913.7000401@ar.media.kyoto-u.ac.jp> Message-ID: Hi, Strange, it builds fine on my box. Can you give me the errors that were generated ? I can't reproduce them for the moment. Matthieu 2008/5/28 David Cournapeau : > Hi, > > The package manifold learning does not build cleanly on Linux, and I > don't think it would on any other platform because of some directories > mispelling at least. I could not make it work in a few minutes, and > instead of messing with a package which is not mine, I have disabled it > in the setup.py for the moment. > Please make sure that a package is at least buildable when adding > something in scikits.learn, otherwise, every user of scikits.learn is > affected, even when they do not use the package. > > Thank you, > > David > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Wed May 28 06:05:54 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 28 May 2008 19:05:54 +0900 Subject: [SciPy-dev] scikits.learn: manifold learning does not build In-Reply-To: References: <483D2913.7000401@ar.media.kyoto-u.ac.jp> Message-ID: <483D2E82.7070000@ar.media.kyoto-u.ac.jp> Matthieu Brucher wrote: > Hi, > > Strange, it builds fine on my box. > Can you give me the errors that were generated ? I can't reproduce > them for the moment. > Did you try from a fresh svn checkout ? It is easy to miss some trivial mistakes when developing a package because some of the files do not exist in svn but are only on your development tree (it happens to me all the time). The first obvious problem is in regression/setup.py, which uses some source files which do not exist in subversion (neighbour vs neighbor): ... 
compile options: '-Iregression -I/home/david/local/lib/python2.5/site-packages/numpy/core/include -I/usr/include/python2.5 -c' g++: neighbours/neighbours.cpp g++: neighbours/neighbours.cpp: No such file or directory g++: no input files g++: neighbours/neighbours.cpp: No such file or directory g++: no input files Also, I don't think you should use * in add_subpackage: it may hide some missing files, and make the problem I mentioned just before about difference between your tree and svn tree harder to track. It may well be the cause of the problem (you may have both neighbour and neighbor on your machine, for example). cheers, David From david at ar.media.kyoto-u.ac.jp Wed May 28 06:10:10 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 28 May 2008 19:10:10 +0900 Subject: [SciPy-dev] scikits.learn: manifold learning does not build In-Reply-To: <483D2E82.7070000@ar.media.kyoto-u.ac.jp> References: <483D2913.7000401@ar.media.kyoto-u.ac.jp> <483D2E82.7070000@ar.media.kyoto-u.ac.jp> Message-ID: <483D2F82.8050401@ar.media.kyoto-u.ac.jp> David Cournapeau wrote: > > Did you try from a fresh svn checkout ? It is easy to miss some trivial > mistakes when developing a package because some of the files do not > exist in svn but are only on your development tree (it happens to me all > the time). > Another problem is that matrix folder is not added to CPPPATH: ... building 'manifold_learning.compression.cost_function._cost_function' extension compiling C++ sources C compiler: g++ -pthread -fno-strict-aliasing -DNDEBUG -g -O2 -Wall -fPIC compile options: '-Icompression -I/home/david/local/lib/python2.5/site-packages/numpy/core/include -I/usr/include/python2.5 -c' g++: compression/cost_function/cost_function.cpp In file included from compression/cost_function/cost_function.cpp:2: compression/cost_function/modifiedCompression.h:12:31: error: matrix/matrix_lib.h: No such file or directory compression/cost_function/modifiedCompression.h:13:35: error: matrix/sub_matrix_lib.h: No such file or directory Once you fix this, feel free to re-enable manifold learning (but be sure to fix this, because some people complained that machine.em and machine.svm were broken before I disabled it) cheers, David From matthieu.brucher at gmail.com Wed May 28 06:26:08 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 28 May 2008 12:26:08 +0200 Subject: [SciPy-dev] scikits.learn: manifold learning does not build In-Reply-To: <483D2E82.7070000@ar.media.kyoto-u.ac.jp> References: <483D2913.7000401@ar.media.kyoto-u.ac.jp> <483D2E82.7070000@ar.media.kyoto-u.ac.jp> Message-ID: 2008/5/28 David Cournapeau : > Matthieu Brucher wrote: > > Hi, > > > > Strange, it builds fine on my box. > > Can you give me the errors that were generated ? I can't reproduce > > them for the moment. > > > > Did you try from a fresh svn checkout ? It is easy to miss some trivial > mistakes when developing a package because some of the files do not > exist in svn but are only on your development tree (it happens to me all > the time). Exactly, my mistake, there were some uncommitted changes in the file setup.py file. Strange that they were not committed when I committed the changed names. Well, at least this is now committed, I'll try from a fresh svn co thios afternoon. 
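For reference, a sketch of the setup.py layout being suggested in this thread: subpackages and extension sources listed explicitly instead of add_subpackage('*'), and the directory holding the shared headers passed through include_dirs as an absolute path. All package, source and header names below are illustrative, not the actual scikits.learn tree.

import os

def configuration(parent_package='', top_path=None):
    from numpy.distutils.misc_util import Configuration
    config = Configuration('manifold_learning', parent_package, top_path)

    here = os.path.dirname(os.path.abspath(__file__))

    # name each subpackage explicitly so a file missing from svn fails the build early
    config.add_subpackage('compression')
    config.add_subpackage('regression')

    # extension sources spelled out; header directory resolved relative to this file
    config.add_extension(
        'compression.cost_function._cost_function',
        sources=['compression/cost_function/cost_function.cpp'],
        include_dirs=[here],
    )
    return config

if __name__ == '__main__':
    from numpy.distutils.core import setup
    setup(configuration=configuration)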
Thanks for letting me know of this ;) Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.brucher at gmail.com Wed May 28 06:28:13 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 28 May 2008 12:28:13 +0200 Subject: [SciPy-dev] scikits.learn: manifold learning does not build In-Reply-To: <483D2F82.8050401@ar.media.kyoto-u.ac.jp> References: <483D2913.7000401@ar.media.kyoto-u.ac.jp> <483D2E82.7070000@ar.media.kyoto-u.ac.jp> <483D2F82.8050401@ar.media.kyoto-u.ac.jp> Message-ID: > > > Another problem is that matrix folder is not added to CPPPATH: > > ... > building 'manifold_learning.compression.cost_function._cost_function' > extension > compiling C++ sources > C compiler: g++ -pthread -fno-strict-aliasing -DNDEBUG -g -O2 -Wall -fPIC > > compile options: '-Icompression > -I/home/david/local/lib/python2.5/site-packages/numpy/core/include > -I/usr/include/python2.5 -c' > g++: compression/cost_function/cost_function.cpp > In file included from compression/cost_function/cost_function.cpp:2: > compression/cost_function/modifiedCompression.h:12:31: error: > matrix/matrix_lib.h: No such file or directory > compression/cost_function/modifiedCompression.h:13:35: error: > matrix/sub_matrix_lib.h: No such file or directory I'll check this as well, thanks for reporting this. I explicitely added the folder in the include options, but it seems that it is gone :| > Once you fix this, feel free to re-enable manifold learning (but be sure > to fix this, because some people complained that machine.em and > machine.svm were broken before I disabled it) That must be me :D Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From helias at bccn.uni-freiburg.de Wed May 28 07:40:39 2008 From: helias at bccn.uni-freiburg.de (Moritz Helias) Date: Wed, 28 May 2008 13:40:39 +0200 Subject: [SciPy-dev] hyper1f1, ticket #659 Message-ID: <7F6B4B71-F01B-4CA5-B05E-2C6737F86BEE@bccn.uni-freiburg.de> Hi, I found some strange behavior of the hypergeometric function with real arguments (hyp1f1). I do not know, who is the specialist on this subject, but could somebody please try to reproduce it (see ticket #659, attached .py file): For large negative x, the function becomes quite jagged. I also described a possible reason and a workaround in the ticket. My work relies on this function, so currently I have to patch each of my scipy installations to continue working. If the bug can be reproduced by others, is there the possibility that I provide a patch? Thanks in advance, greetings, Moritz From matthieu.brucher at gmail.com Wed May 28 08:02:03 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 28 May 2008 14:02:03 +0200 Subject: [SciPy-dev] scikits.learn: manifold learning does not build In-Reply-To: <483D2F82.8050401@ar.media.kyoto-u.ac.jp> References: <483D2913.7000401@ar.media.kyoto-u.ac.jp> <483D2E82.7070000@ar.media.kyoto-u.ac.jp> <483D2F82.8050401@ar.media.kyoto-u.ac.jp> Message-ID: 2008/5/28 David Cournapeau : > David Cournapeau wrote: > > > > Did you try from a fresh svn checkout ? 
It is easy to miss some trivial > > mistakes when developing a package because some of the files do not > > exist in svn but are only on your development tree (it happens to me all > > the time). > > > > Another problem is that matrix folder is not added to CPPPATH: > > ... > building 'manifold_learning.compression.cost_function._cost_function' > extension > compiling C++ sources > C compiler: g++ -pthread -fno-strict-aliasing -DNDEBUG -g -O2 -Wall -fPIC > > compile options: '-Icompression > -I/home/david/local/lib/python2.5/site-packages/numpy/core/include > -I/usr/include/python2.5 -c' > g++: compression/cost_function/cost_function.cpp > In file included from compression/cost_function/cost_function.cpp:2: > compression/cost_function/modifiedCompression.h:12:31: error: > matrix/matrix_lib.h: No such file or directory > compression/cost_function/modifiedCompression.h:13:35: error: > matrix/sub_matrix_lib.h: No such file or directory I didn't find your error, in fact I do not have the same compiler options as you have : creating build/temp.linux-i686-2.5/scikits/learn/machine/manifold_learning/compression creating build/temp.linux-i686-2.5/scikits/learn/machine/manifold_learning/compression/cost_function compile options: '-Iscikits/learn/machine/manifold_learning -I/home/brucher/local/lib/python2.5/site-packages/numpy/core/include -I/usr/include/python2.5 -c' g++: scikits/learn/machine/manifold_learning/compression/cost_function/cost_function.cpp I have added the include folder in compression/setup.py with : include_dirs = [os.path.dirname(__file__) + '/..'], and the matrix folder is in the parent folder of compression. Perhaps should I add something like abspath() ? Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From cohen at slac.stanford.edu Wed May 28 07:53:37 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Wed, 28 May 2008 13:53:37 +0200 Subject: [SciPy-dev] hyper1f1, ticket #659 In-Reply-To: <7F6B4B71-F01B-4CA5-B05E-2C6737F86BEE@bccn.uni-freiburg.de> References: <7F6B4B71-F01B-4CA5-B05E-2C6737F86BEE@bccn.uni-freiburg.de> Message-ID: <483D47C1.80504@slac.stanford.edu> hello, I confirm this on my linux box with a very recent version of scipy from the bzr trunk. It seems somewhat related to http://www.scipy.org/scipy/scipy/ticket/419 .... Anyway, this should be fixed.... Johann Moritz Helias wrote: > Hi, > > I found some strange behavior of the hypergeometric function with real > arguments (hyp1f1). I do not know, who is the specialist on this > subject, but > could somebody please try to reproduce it (see ticket #659, > attached .py file): For large negative x, the function becomes quite > jagged. > > I also described a possible reason and a workaround in the ticket. My > work relies on this function, so currently I have to patch each of my > scipy installations to continue working. > If the bug can be reproduced by others, is there the possibility that > I provide a patch? 
> > Thanks in advance, > > greetings, > > Moritz > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From helias at bccn.uni-freiburg.de Wed May 28 08:22:34 2008 From: helias at bccn.uni-freiburg.de (Moritz Helias) Date: Wed, 28 May 2008 14:22:34 +0200 Subject: [SciPy-dev] hyper1f1, ticket #659 In-Reply-To: <483D47C1.80504@slac.stanford.edu> References: <7F6B4B71-F01B-4CA5-B05E-2C6737F86BEE@bccn.uni-freiburg.de> <483D47C1.80504@slac.stanford.edu> Message-ID: <0E8C806C-594B-46C4-9675-73D33FAC425F@bccn.uni-freiburg.de> Hello, Johann Cohen-Tanugi wrote: > hello, > I confirm this on my linux box with a very recent version of scipy > from > the bzr trunk. It seems somewhat related to > http://www.scipy.org/scipy/scipy/ticket/419 .... true. The problem seems to be the error estimation of Kahan's sum algorithm which is too optimistic in this case. > > Anyway, this should be fixed.... > Johann > special/specfun/specfun.f contains an implementation of M(a,b,x) which is called chgm. This works fine for me. It just has to be wrapped (which I did on my machine). Greetings, Moritz > Moritz Helias wrote: >> Hi, >> >> I found some strange behavior of the hypergeometric function with >> real >> arguments (hyp1f1). I do not know, who is the specialist on this >> subject, but >> could somebody please try to reproduce it (see ticket #659, >> attached .py file): For large negative x, the function becomes quite >> jagged. >> >> I also described a possible reason and a workaround in the ticket. My >> work relies on this function, so currently I have to patch each of my >> scipy installations to continue working. >> If the bug can be reproduced by others, is there the possibility that >> I provide a patch? >> >> Thanks in advance, >> >> greetings, >> >> Moritz >> _______________________________________________ >> Scipy-dev mailing list >> Scipy-dev at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-dev >> > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev From bsouthey at gmail.com Wed May 28 09:17:19 2008 From: bsouthey at gmail.com (Bruce Southey) Date: Wed, 28 May 2008 08:17:19 -0500 Subject: [SciPy-dev] scikits.learn: manifold learning does not build In-Reply-To: <483D2913.7000401@ar.media.kyoto-u.ac.jp> References: <483D2913.7000401@ar.media.kyoto-u.ac.jp> Message-ID: <483D5B5F.1080607@gmail.com> David Cournapeau wrote: > Hi, > > The package manifold learning does not build cleanly on Linux, and I > don't think it would on any other platform because of some directories > mispelling at least. I could not make it work in a few minutes, and > instead of messing with a package which is not mine, I have disabled it > in the setup.py for the moment. > Please make sure that a package is at least buildable when adding > something in scikits.learn, otherwise, every user of scikits.learn is > affected, even when they do not use the package. > > Thank you, > > David > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > > Hi, I filed Ticket 55 (http://scipy.org/scipy/scikits/ticket/55) but this probably got missed due to the issues with Trac. 
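On the hyp1f1 question, a small consistency check in the spirit of the tests being discussed for ticket #659 can be written against the Kummer transformation M(a,b,x) = exp(x)*M(b-a,b,-x), which is numerically well behaved for large negative x; the parameters and sample points below are illustrative and not taken from the attached files.

import numpy as np
from scipy.special import hyp1f1

a, b = 0.5, 1.5
for x in (-50.0, -100.0, -200.0):
    direct = hyp1f1(a, b, x)
    via_kummer = np.exp(x) * hyp1f1(b - a, b, -x)
    # the two columns should agree closely if hyp1f1 handles negative x well
    print(x, direct, via_kummer)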
Bruce From stefan at sun.ac.za Wed May 28 16:16:23 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Wed, 28 May 2008 22:16:23 +0200 Subject: [SciPy-dev] hyper1f1, ticket #659 In-Reply-To: <0E8C806C-594B-46C4-9675-73D33FAC425F@bccn.uni-freiburg.de> References: <7F6B4B71-F01B-4CA5-B05E-2C6737F86BEE@bccn.uni-freiburg.de> <483D47C1.80504@slac.stanford.edu> <0E8C806C-594B-46C4-9675-73D33FAC425F@bccn.uni-freiburg.de> Message-ID: <9457e7c80805281316i77c1aac0x79cd3d2eccc5f46@mail.gmail.com> 2008/5/28 Moritz Helias : > special/specfun/specfun.f contains an implementation of M(a,b,x) which > is called chgm. > This works fine for me. It just has to be wrapped (which I did on my > machine). Would you mind attaching the patch to a ticket for review? Please also include some tests to verify its behaviour. Thanks St?fan From helias at bccn.uni-freiburg.de Thu May 29 04:24:04 2008 From: helias at bccn.uni-freiburg.de (Moritz Helias) Date: Thu, 29 May 2008 10:24:04 +0200 Subject: [SciPy-dev] hyper1f1, ticket #659 In-Reply-To: <9457e7c80805281316i77c1aac0x79cd3d2eccc5f46@mail.gmail.com> References: <7F6B4B71-F01B-4CA5-B05E-2C6737F86BEE@bccn.uni-freiburg.de> <483D47C1.80504@slac.stanford.edu> <0E8C806C-594B-46C4-9675-73D33FAC425F@bccn.uni-freiburg.de> <9457e7c80805281316i77c1aac0x79cd3d2eccc5f46@mail.gmail.com> Message-ID: Hello, I just attached a patch to ticket #659 which replaces the implementation of hyp1f1 by the fortran routine chgm (scipy/special/specfun/specfun.f). The file hyp1f1_ticket.py (found in the same ticket) may serve as a first test. I'll try to write a test, which compares the function to matlab's implementation. Maybe somebody has already done so for the old implementation? Greetings, Moritz On May 28, 2008, at 10:16 PM, St?fan van der Walt wrote: > 2008/5/28 Moritz Helias : >> special/specfun/specfun.f contains an implementation of M(a,b,x) >> which >> is called chgm. >> This works fine for me. It just has to be wrapped (which I did on my >> machine). > > Would you mind attaching the patch to a ticket for review? Please > also include some tests to verify its behaviour. > > Thanks > St?fan > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev From matthieu.brucher at gmail.com Thu May 29 05:25:00 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 29 May 2008 11:25:00 +0200 Subject: [SciPy-dev] scikits.learn: manifold learning does not build In-Reply-To: <483D5B5F.1080607@gmail.com> References: <483D2913.7000401@ar.media.kyoto-u.ac.jp> <483D5B5F.1080607@gmail.com> Message-ID: > > Hi, > I filed Ticket 55 (http://scipy.org/scipy/scikits/ticket/55) but this > probably got missed due to the issues with Trac. > Indeed, I didn't know of this ticket. It should be correct now. Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From helias at bccn.uni-freiburg.de Thu May 29 09:45:15 2008 From: helias at bccn.uni-freiburg.de (Moritz Helias) Date: Thu, 29 May 2008 15:45:15 +0200 Subject: [SciPy-dev] hyper1f1, ticket #659 In-Reply-To: References: <7F6B4B71-F01B-4CA5-B05E-2C6737F86BEE@bccn.uni-freiburg.de> <483D47C1.80504@slac.stanford.edu> <0E8C806C-594B-46C4-9675-73D33FAC425F@bccn.uni-freiburg.de> <9457e7c80805281316i77c1aac0x79cd3d2eccc5f46@mail.gmail.com> Message-ID: <7A733201-4904-42B1-98A7-63DED099849C@bccn.uni-freiburg.de> Hello, now you will also find a test for hyp1f1 attached to ticket #659: 1.) test_hyp1f1.nb Mathematica script to create reference data at randomly drawn points (a,b,x) in [-30,30] x [-30,30] x [-30,30] using Mathematica's implementation of hyp1f1 2.) hyp1f1_test.py uses reference data created from test_hyp1f1.nb and compares it to the result of scipy's implementation. calculates root mean square error relative error and maximum relative error 3.) scipy_4393_hyp1f1_chgm.patch is a patch (build based on revision 4393) to be applied to scipy root directory to use chgm (from specfun.f) instead of hyperg (from hyperg.c) for hyp1f1. this patch applied, the maximum relative error on my machine is 4.41274416139e-05 I hope this will facilitate testing. Greetings, Moritz On May 29, 2008, at 10:24 AM, Moritz Helias wrote: > Hello, > > I just attached a patch to ticket #659 which replaces the > implementation of hyp1f1 by the fortran routine > chgm (scipy/special/specfun/specfun.f). > > The file hyp1f1_ticket.py (found in the same ticket) may serve as a > first test. I'll try to write a test, which compares the function to > matlab's implementation. > Maybe somebody has already done so for the old implementation? > > Greetings, > > Moritz > > On May 28, 2008, at 10:16 PM, St?fan van der Walt wrote: > >> 2008/5/28 Moritz Helias : >>> special/specfun/specfun.f contains an implementation of M(a,b,x) >>> which >>> is called chgm. >>> This works fine for me. It just has to be wrapped (which I did on my >>> machine). >> >> Would you mind attaching the patch to a ticket for review? Please >> also include some tests to verify its behaviour. >> >> Thanks >> St?fan >> _______________________________________________ >> Scipy-dev mailing list >> Scipy-dev at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-dev > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From millman at berkeley.edu Thu May 29 18:10:54 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Thu, 29 May 2008 15:10:54 -0700 Subject: [SciPy-dev] ANN: NumPy 1.1.0 Message-ID: I'm pleased to announce the release of NumPy 1.1.0. NumPy is the fundamental package needed for scientific computing with Python. It contains: * a powerful N-dimensional array object * sophisticated (broadcasting) functions * basic linear algebra functions * basic Fourier transforms * sophisticated random number capabilities * tools for integrating Fortran code. Besides it's obvious scientific uses, NumPy can also be used as an efficient multi-dimensional container of generic data. Arbitrary data-types can be defined. This allows NumPy to seamlessly and speedily integrate with a wide-variety of databases. This is the first minor release since the 1.0 release in October 2006. There are a few major changes, which introduce some minor API breakage. 
In addition this release includes tremendous improvements in terms of bug-fixing, testing, and documentation. For information, please see the release notes: http://sourceforge.net/project/shownotes.php?release_id=602575&group_id=1369 Thank you to everybody who contributed to this release. Enjoy, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From fperez.net at gmail.com Thu May 29 22:45:24 2008 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 29 May 2008 19:45:24 -0700 Subject: [SciPy-dev] Tutorials at Scipy 2008 Message-ID: [ This is meant as a heads-up here, please keep the discussion on the SciPy user list so we can focus the conversation in one list only. ] Hi all, Travis Oliphant and myself have signed up to coordinate the tutorials sessions at this year's SciPy conference. Our tentative plan is described here: http://scipy.org/SciPy2008/Tutorials but it basically consists of holding in parallel: 1. A 2-day hands-on tutorial for beginners. 2. A set of 2 or 4 hour sessions on special topics. We need input from people on: - Do you like this idea? - If yes for #1, any suggestions/wishes? Eric Jones, Travis O and myself have all taught similar things and could potentially do it again, but none of us is trying to impose it. If someone else wants to do it, by all means mention it. The job could be split across multiple people once an agenda is organized. - For #2, please go to the wiki and fill in ideas for topics and/or presenters. We'll need a list of viable topics with actual presenters before we start narrowing down the schedule into something more concrete. Feel free to either discuss things here or to just put topics on the wiki. I find wikis to be a poor place for conversation but excellent for summarizing items. I'll try to update the wiki with ideas that arise here, but feel free to directly edit the wiki if you just want to suggest a specific topic or brief piece of info, do NOT feel like you have to vet anything on list. Cheers, Travis and Fernando. From vshah at interactivesupercomputing.com Fri May 30 16:53:46 2008 From: vshah at interactivesupercomputing.com (Viral Shah) Date: Fri, 30 May 2008 13:53:46 -0700 Subject: [SciPy-dev] Error building from svn on Intel Macs. Message-ID: <00829E43-B304-4254-8A59-79225D23576E@interactivesupercomputing.com> I am using an Intel Mac, and trying to build scipy following the instructions at: http://www.scipy.org/Installing_SciPy/Mac_OS_X I was able to successfully build numpy (from svn) without any extra configuration. For scipy, even though I have installed the gfortran compiler, setup doesn't seem to like the fact that I have 4.2.3. I am using gcc 4.0.1 that is supplied by Apple. I get the following: customize G95FCompiler customize GnuFCompiler customize Gnu95FCompiler Couldn't match compiler version for 'GNU Fortran (GCC) 4.2.3\nCopyright (C) 2007 Free Software Foundation, Inc.\n\nGNU Fortran comes with NO WARRANTY, to the extent permitted by law.\nYou may redistribute copies of GNU Fortran\nunder the terms of the GNU General Public License.\nFor more information about these matters, see the file named COPYING\n' After searching around on scipy-dev, I resorted to doing this: $ python setup.py config_fc --fcompiler=gnu95 build That allowed me to get further, but left me with the following error that I don't understand. I can help update the instructions page with info about compilers, if I can get this process working. Thanks in advance. 
building 'scipy.fftpack._fftpack' extension compiling C sources C compiler: gcc -fno-strict-aliasing -Wno-long-double -no-cpp-precomp - mno-fused-madd -fno-common -dynamic -DNDEBUG -g -Os -Wall -DMACOSX -I/ usr/include/ffi -DENABLE_DTRACE -arch i386 -arch ppc -pipe compile options: '-Ibuild/src.macosx-10.5-i386-2.5 -I/System/Library/ Frameworks/Python.framework/Versions/2.5/Extras/lib/python/numpy/core/ include -I/System/Library/Frameworks/Python.framework/Versions/2.5/ include/python2.5 -c' Traceback (most recent call last): File "setup.py", line 92, in setup_package() File "setup.py", line 84, in setup_package configuration=configuration ) File "/System/Library/Frameworks/Python.framework/Versions/2.5/ Extras/lib/python/numpy/distutils/core.py", line 174, in setup return old_setup(**new_attr) File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/distutils/core.py", line 151, in setup dist.run_commands() File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/distutils/dist.py", line 974, in run_commands self.run_command(cmd) File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/distutils/dist.py", line 994, in run_command cmd_obj.run() File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/distutils/command/build.py", line 112, in run self.run_command(cmd_name) File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/distutils/cmd.py", line 333, in run_command self.distribution.run_command(command) File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/distutils/dist.py", line 994, in run_command cmd_obj.run() File "/System/Library/Frameworks/Python.framework/Versions/2.5/ Extras/lib/python/numpy/distutils/command/build_ext.py", line 121, in run self.build_extensions() File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/distutils/command/build_ext.py", line 416, in build_extensions self.build_extension(ext) File "/System/Library/Frameworks/Python.framework/Versions/2.5/ Extras/lib/python/numpy/distutils/command/build_ext.py", line 312, in build_extension link = self.fcompiler.link_shared_object AttributeError: 'NoneType' object has no attribute 'link_shared_object' -viral From robert.kern at gmail.com Fri May 30 17:12:42 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 30 May 2008 16:12:42 -0500 Subject: [SciPy-dev] Error building from svn on Intel Macs. In-Reply-To: <00829E43-B304-4254-8A59-79225D23576E@interactivesupercomputing.com> References: <00829E43-B304-4254-8A59-79225D23576E@interactivesupercomputing.com> Message-ID: <3d375d730805301412t37517289x91e4b7e9d27f816b@mail.gmail.com> On Fri, May 30, 2008 at 3:53 PM, Viral Shah wrote: > I am using an Intel Mac, and trying to build scipy following the > instructions at: http://www.scipy.org/Installing_SciPy/Mac_OS_X > > I was able to successfully build numpy (from svn) without any extra > configuration. For scipy, even though I have installed the gfortran > compiler, setup doesn't seem to like the fact that I have 4.2.3. I am > using gcc 4.0.1 that is supplied by Apple. 
I get the following: > > customize G95FCompiler > customize GnuFCompiler > customize Gnu95FCompiler > Couldn't match compiler version for 'GNU Fortran (GCC) > 4.2.3\nCopyright (C) 2007 Free Software Foundation, Inc.\n\nGNU > Fortran comes with NO WARRANTY, to the extent permitted by law.\nYou > may redistribute copies of GNU Fortran\nunder the terms of the GNU > General Public License.\nFor more information about these matters, see > the file named COPYING\n' > > After searching around on scipy-dev, I resorted to doing this: > > $ python setup.py config_fc --fcompiler=gnu95 build > > That allowed me to get further, but left me with the following error > that I don't understand. I can help update the instructions page with > info about compilers, if I can get this process working. Thanks in > advance. Please provide the full output. The error that you see is a result of a failure to correctly configure the Fortran compiler earlier. The current numpy SVN should be able to handle the version string you give above. In [37]: from numpy.distutils.fcompiler import gnu In [38]: fc = gnu.Gnu95FCompiler() In [39]: v = 'GNU Fortran (GCC) 4.2.3\nCopyright (C) 2007 Free Software Foundation, Inc.\n\nGNU Fortran comes with NO WARRANTY, to the extent permitted by law.\nYou may redistribute copies of GNU Fortran\nunder the terms of the GNU General Public License.\nFor more information about these matters, see the file named COPYING\n' In [40]: print v GNU Fortran (GCC) 4.2.3 Copyright (C) 2007 Free Software Foundation, Inc. GNU Fortran comes with NO WARRANTY, to the extent permitted by law. You may redistribute copies of GNU Fortran under the terms of the GNU General Public License. For more information about these matters, see the file named COPYING In [41]: fc.version_match(v) Out[41]: '4.2.3' -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From vshah at interactivesupercomputing.com Sat May 31 15:54:36 2008 From: vshah at interactivesupercomputing.com (Viral Shah) Date: Sat, 31 May 2008 12:54:36 -0700 Subject: [SciPy-dev] Error building from svn on Intel Macs. In-Reply-To: References: Message-ID: <10A02ACB-AAC1-4EAB-856C-8558BF21C07B@interactivesupercomputing.com> Attaching full output. $ python setup.py build Warning: No configuration returned, assuming unavailable. 
mkl_info: libraries mkl,vml,guide not found in /System/Library/Frameworks/ Python.framework/Versions/2.5/lib libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE fftw3_info: libraries fftw3 not found in /System/Library/Frameworks/ Python.framework/Versions/2.5/lib libraries fftw3 not found in /usr/local/lib libraries fftw3 not found in /usr/lib fftw3 not found NOT AVAILABLE fftw2_info: libraries rfftw,fftw not found in /System/Library/Frameworks/ Python.framework/Versions/2.5/lib libraries rfftw,fftw not found in /usr/local/lib libraries rfftw,fftw not found in /usr/lib fftw2 not found NOT AVAILABLE dfftw_info: libraries drfftw,dfftw not found in /System/Library/Frameworks/ Python.framework/Versions/2.5/lib libraries drfftw,dfftw not found in /usr/local/lib libraries drfftw,dfftw not found in /usr/lib dfftw not found NOT AVAILABLE djbfft_info: NOT AVAILABLE blas_opt_info: FOUND: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3)] extra_compile_args = ['-msse3', '-I/System/Library/Frameworks/ vecLib.framework/Headers'] lapack_opt_info: FOUND: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3)] extra_compile_args = ['-msse3'] umfpack_info: libraries umfpack not found in /System/Library/Frameworks/ Python.framework/Versions/2.5/lib libraries umfpack not found in /usr/local/lib libraries umfpack not found in /usr/lib /System/Library/Frameworks/Python.framework/Versions/2.5/Extras/lib/ python/numpy/distutils/system_info.py:401: UserWarning: UMFPACK sparse solver (http://www.cise.ufl.edu/research/sparse/umfpack/ ) not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [umfpack]) or by setting the UMFPACK environment variable. warnings.warn(self.notfounderror.__doc__) NOT AVAILABLE running build running config_fc running build_src building py_modules sources building library "dfftpack" sources building library "linpack_lite" sources building library "mach" sources building library "quadpack" sources building library "odepack" sources building library "fitpack" sources building library "odrpack" sources building library "minpack" sources building library "rootfind" sources building library "superlu_src" sources building library "arpack" sources building library "c_misc" sources building library "cephes" sources building library "mach" sources building library "toms" sources building library "amos" sources building library "cdf" sources building library "specfun" sources building library "statlib" sources building extension "scipy.cluster._vq" sources building extension "scipy.cluster._hierarchy_wrap" sources building extension "scipy.fftpack._fftpack" sources f2py options: [] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. building extension "scipy.fftpack.convolve" sources f2py options: [] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. building extension "scipy.integrate._quadpack" sources building extension "scipy.integrate._odepack" sources building extension "scipy.integrate.vode" sources f2py options: [] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. 
building extension "scipy.interpolate._fitpack" sources building extension "scipy.interpolate.dfitpack" sources f2py options: [] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. adding 'build/src.macosx-10.5-i386-2.5/scipy/interpolate/dfitpack- f2pywrappers.f' to sources. building extension "scipy.io.numpyio" sources building extension "scipy.lib.blas.fblas" sources f2py options: ['skip:', ':'] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. adding 'build/src.macosx-10.5-i386-2.5/build/src.macosx-10.5- i386-2.5/scipy/lib/blas/fblas-f2pywrappers.f' to sources. building extension "scipy.lib.blas.cblas" sources adding 'build/src.macosx-10.5-i386-2.5/scipy/lib/blas/cblas.pyf' to sources. f2py options: ['skip:', ':'] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. building extension "scipy.lib.lapack.flapack" sources f2py options: ['skip:', ':'] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. building extension "scipy.lib.lapack.clapack" sources adding 'build/src.macosx-10.5-i386-2.5/scipy/lib/lapack/ clapack.pyf' to sources. f2py options: ['skip:', ':'] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. building extension "scipy.lib.lapack.calc_lwork" sources f2py options: [] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. building extension "scipy.lib.lapack.atlas_version" sources building extension "scipy.linalg.fblas" sources adding 'build/src.macosx-10.5-i386-2.5/scipy/linalg/fblas.pyf' to sources. f2py options: [] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. adding 'build/src.macosx-10.5-i386-2.5/build/src.macosx-10.5- i386-2.5/scipy/linalg/fblas-f2pywrappers.f' to sources. building extension "scipy.linalg.cblas" sources adding 'build/src.macosx-10.5-i386-2.5/scipy/linalg/cblas.pyf' to sources. f2py options: [] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. building extension "scipy.linalg.flapack" sources adding 'build/src.macosx-10.5-i386-2.5/scipy/linalg/flapack.pyf' to sources. f2py options: [] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. adding 'build/src.macosx-10.5-i386-2.5/build/src.macosx-10.5- i386-2.5/scipy/linalg/flapack-f2pywrappers.f' to sources. building extension "scipy.linalg.clapack" sources adding 'build/src.macosx-10.5-i386-2.5/scipy/linalg/clapack.pyf' to sources. f2py options: [] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. building extension "scipy.linalg._flinalg" sources f2py options: [] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. building extension "scipy.linalg.calc_lwork" sources f2py options: [] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. 
building extension "scipy.linalg.atlas_version" sources building extension "scipy.odr.__odrpack" sources building extension "scipy.optimize._minpack" sources building extension "scipy.optimize._zeros" sources building extension "scipy.optimize._lbfgsb" sources f2py options: [] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. building extension "scipy.optimize.moduleTNC" sources building extension "scipy.optimize._cobyla" sources f2py options: [] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. building extension "scipy.optimize.minpack2" sources f2py options: [] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. building extension "scipy.optimize._slsqp" sources f2py options: [] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. building extension "scipy.signal.sigtools" sources building extension "scipy.signal.spline" sources building extension "scipy.sparse.linalg.isolve._iterative" sources f2py options: [] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. building extension "scipy.sparse.linalg.dsolve._zsuperlu" sources building extension "scipy.sparse.linalg.dsolve._dsuperlu" sources building extension "scipy.sparse.linalg.dsolve._csuperlu" sources building extension "scipy.sparse.linalg.dsolve._ssuperlu" sources building extension "scipy.sparse.linalg.dsolve.umfpack.__umfpack" sources building extension "scipy.sparse.linalg.eigen.arpack._arpack" sources f2py options: [] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. adding 'build/src.macosx-10.5-i386-2.5/build/src.macosx-10.5- i386-2.5/scipy/sparse/linalg/eigen/arpack/_arpack-f2pywrappers.f' to sources. building extension "scipy.sparse.sparsetools._csr" sources building extension "scipy.sparse.sparsetools._csc" sources building extension "scipy.sparse.sparsetools._coo" sources building extension "scipy.sparse.sparsetools._bsr" sources building extension "scipy.sparse.sparsetools._dia" sources building extension "scipy.special._cephes" sources building extension "scipy.special.specfun" sources f2py options: ['--no-wrap-functions'] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. building extension "scipy.stats.statlib" sources f2py options: ['--no-wrap-functions'] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. building extension "scipy.stats.futil" sources f2py options: [] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. building extension "scipy.stats.mvn" sources f2py options: [] adding 'build/src.macosx-10.5-i386-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.5-i386-2.5' to include_dirs. adding 'build/src.macosx-10.5-i386-2.5/scipy/stats/mvn- f2pywrappers.f' to sources. 
building extension "scipy.ndimage._nd_image" sources building extension "scipy.ndimage._segment" sources building extension "scipy.ndimage._register" sources building extension "scipy.stsci.convolve._correlate" sources building extension "scipy.stsci.convolve._lineshape" sources building extension "scipy.stsci.image._combine" sources building data_files sources running build_py copying scipy/__svn_version__.py -> build/lib.macosx-10.5-i386-2.5/scipy copying build/src.macosx-10.5-i386-2.5/scipy/__config__.py -> build/ lib.macosx-10.5-i386-2.5/scipy running build_clib customize UnixCCompiler customize UnixCCompiler using build_clib customize NAGFCompiler customize AbsoftFCompiler customize IbmFCompiler Could not locate executable g77 Could not locate executable f77 Could not locate executable f95 customize GnuFCompiler customize Gnu95FCompiler Couldn't match compiler version for 'GNU Fortran (GCC) 4.2.3\nCopyright (C) 2007 Free Software Foundation, Inc.\n\nGNU Fortran comes with NO WARRANTY, to the extent permitted by law.\nYou may redistribute copies of GNU Fortran\nunder the terms of the GNU General Public License.\nFor more information about these matters, see the file named COPYING\n' customize G95FCompiler customize GnuFCompiler customize Gnu95FCompiler Couldn't match compiler version for 'GNU Fortran (GCC) 4.2.3\nCopyright (C) 2007 Free Software Foundation, Inc.\n\nGNU Fortran comes with NO WARRANTY, to the extent permitted by law.\nYou may redistribute copies of GNU Fortran\nunder the terms of the GNU General Public License.\nFor more information about these matters, see the file named COPYING\n' customize NAGFCompiler customize NAGFCompiler using build_clib running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext customize NAGFCompiler customize AbsoftFCompiler customize IbmFCompiler customize GnuFCompiler customize Gnu95FCompiler Couldn't match compiler version for 'GNU Fortran (GCC) 4.2.3\nCopyright (C) 2007 Free Software Foundation, Inc.\n\nGNU Fortran comes with NO WARRANTY, to the extent permitted by law.\nYou may redistribute copies of GNU Fortran\nunder the terms of the GNU General Public License.\nFor more information about these matters, see the file named COPYING\n' customize G95FCompiler customize GnuFCompiler customize Gnu95FCompiler Couldn't match compiler version for 'GNU Fortran (GCC) 4.2.3\nCopyright (C) 2007 Free Software Foundation, Inc.\n\nGNU Fortran comes with NO WARRANTY, to the extent permitted by law.\nYou may redistribute copies of GNU Fortran\nunder the terms of the GNU General Public License.\nFor more information about these matters, see the file named COPYING\n' warning: build_ext: fcompiler=nag is not available. 
building 'scipy.fftpack._fftpack' extension compiling C sources C compiler: gcc -fno-strict-aliasing -Wno-long-double -no-cpp-precomp - mno-fused-madd -fno-common -dynamic -DNDEBUG -g -Os -Wall -DMACOSX -I/ usr/include/ffi -DENABLE_DTRACE -arch i386 -arch ppc -pipe compile options: '-Ibuild/src.macosx-10.5-i386-2.5 -I/System/Library/ Frameworks/Python.framework/Versions/2.5/Extras/lib/python/numpy/core/ include -I/System/Library/Frameworks/Python.framework/Versions/2.5/ include/python2.5 -c' Traceback (most recent call last): File "setup.py", line 92, in setup_package() File "setup.py", line 84, in setup_package configuration=configuration ) File "/System/Library/Frameworks/Python.framework/Versions/2.5/ Extras/lib/python/numpy/distutils/core.py", line 174, in setup return old_setup(**new_attr) File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/distutils/core.py", line 151, in setup dist.run_commands() File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/distutils/dist.py", line 974, in run_commands self.run_command(cmd) File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/distutils/dist.py", line 994, in run_command cmd_obj.run() File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/distutils/command/build.py", line 112, in run self.run_command(cmd_name) File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/distutils/cmd.py", line 333, in run_command self.distribution.run_command(command) File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/distutils/dist.py", line 994, in run_command cmd_obj.run() File "/System/Library/Frameworks/Python.framework/Versions/2.5/ Extras/lib/python/numpy/distutils/command/build_ext.py", line 121, in run self.build_extensions() File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/ python2.5/distutils/command/build_ext.py", line 416, in build_extensions self.build_extension(ext) File "/System/Library/Frameworks/Python.framework/Versions/2.5/ Extras/lib/python/numpy/distutils/command/build_ext.py", line 312, in build_extension link = self.fcompiler.link_shared_object AttributeError: 'NoneType' object has no attribute 'link_shared_object' > On Fri, May 30, 2008 at 3:53 PM, Viral Shah > wrote: >> I am using an Intel Mac, and trying to build scipy following the >> instructions at: http://www.scipy.org/Installing_SciPy/Mac_OS_X >> >> I was able to successfully build numpy (from svn) without any extra >> configuration. For scipy, even though I have installed the gfortran >> compiler, setup doesn't seem to like the fact that I have 4.2.3. I am >> using gcc 4.0.1 that is supplied by Apple. I get the following: >> >> customize G95FCompiler >> customize GnuFCompiler >> customize Gnu95FCompiler >> Couldn't match compiler version for 'GNU Fortran (GCC) >> 4.2.3\nCopyright (C) 2007 Free Software Foundation, Inc.\n\nGNU >> Fortran comes with NO WARRANTY, to the extent permitted by law.\nYou >> may redistribute copies of GNU Fortran\nunder the terms of the GNU >> General Public License.\nFor more information about these matters, >> see >> the file named COPYING\n' >> >> After searching around on scipy-dev, I resorted to doing this: >> >> $ python setup.py config_fc --fcompiler=gnu95 build >> >> That allowed me to get further, but left me with the following error >> that I don't understand. I can help update the instructions page with >> info about compilers, if I can get this process working. Thanks in >> advance. 
> > Please provide the full output. The error that you see is a result of > a failure to correctly configure the Fortran compiler earlier. The > current numpy SVN should be able to handle the version string you give > above. > > > In [37]: from numpy.distutils.fcompiler import gnu > > In [38]: fc = gnu.Gnu95FCompiler() > > In [39]: v = 'GNU Fortran (GCC) 4.2.3\nCopyright (C) 2007 Free > Software Foundation, Inc.\n\nGNU Fortran comes with NO WARRANTY, to > the extent permitted by law.\nYou may redistribute copies of GNU > Fortran\nunder the terms of the GNU General Public License.\nFor more > information about these matters, see the file named COPYING\n' > > In [40]: print v > GNU Fortran (GCC) 4.2.3 > Copyright (C) 2007 Free Software Foundation, Inc. > > GNU Fortran comes with NO WARRANTY, to the extent permitted by law. > You may redistribute copies of GNU Fortran > under the terms of the GNU General Public License. > For more information about these matters, see the file named COPYING > > > In [41]: fc.version_match(v) > Out[41]: '4.2.3' > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco From vshah at interactivesupercomputing.com Sat May 31 16:24:48 2008 From: vshah at interactivesupercomputing.com (Viral Shah) Date: Sat, 31 May 2008 13:24:48 -0700 Subject: [SciPy-dev] Error building from svn on Intel Macs. In-Reply-To: References: Message-ID: <1D24F3A0-AEA9-4D1C-90E5-E6FAA59A78F3@interactivesupercomputing.com> I did 'rm -rf build', and here's the output from scratch. I'm building with "python setup.py build". Since the log file is large, I am putting it up here: http://gauss.cs.ucsb.edu/~viral/scipy-build.log -viral > Message: 2 > Date: Fri, 30 May 2008 16:12:42 -0500 > From: "Robert Kern" > Subject: Re: [SciPy-dev] Error building from svn on Intel Macs. > To: "SciPy Developers List" > Message-ID: > <3d375d730805301412t37517289x91e4b7e9d27f816b at mail.gmail.com> > Content-Type: text/plain; charset=UTF-8 > > On Fri, May 30, 2008 at 3:53 PM, Viral Shah > wrote: >> I am using an Intel Mac, and trying to build scipy following the >> instructions at: http://www.scipy.org/Installing_SciPy/Mac_OS_X >> >> I was able to successfully build numpy (from svn) without any extra >> configuration. For scipy, even though I have installed the gfortran >> compiler, setup doesn't seem to like the fact that I have 4.2.3. I am >> using gcc 4.0.1 that is supplied by Apple. I get the following: >> >> customize G95FCompiler >> customize GnuFCompiler >> customize Gnu95FCompiler >> Couldn't match compiler version for 'GNU Fortran (GCC) >> 4.2.3\nCopyright (C) 2007 Free Software Foundation, Inc.\n\nGNU >> Fortran comes with NO WARRANTY, to the extent permitted by law.\nYou >> may redistribute copies of GNU Fortran\nunder the terms of the GNU >> General Public License.\nFor more information about these matters, >> see >> the file named COPYING\n' >> >> After searching around on scipy-dev, I resorted to doing this: >> >> $ python setup.py config_fc --fcompiler=gnu95 build >> >> That allowed me to get further, but left me with the following error >> that I don't understand. I can help update the instructions page with >> info about compilers, if I can get this process working. Thanks in >> advance. > > Please provide the full output. The error that you see is a result of > a failure to correctly configure the Fortran compiler earlier. 
The > current numpy SVN should be able to handle the version string you give > above. > > > In [37]: from numpy.distutils.fcompiler import gnu > > In [38]: fc = gnu.Gnu95FCompiler() > > In [39]: v = 'GNU Fortran (GCC) 4.2.3\nCopyright (C) 2007 Free > Software Foundation, Inc.\n\nGNU Fortran comes with NO WARRANTY, to > the extent permitted by law.\nYou may redistribute copies of GNU > Fortran\nunder the terms of the GNU General Public License.\nFor more > information about these matters, see the file named COPYING\n' > > In [40]: print v > GNU Fortran (GCC) 4.2.3 > Copyright (C) 2007 Free Software Foundation, Inc. > > GNU Fortran comes with NO WARRANTY, to the extent permitted by law. > You may redistribute copies of GNU Fortran > under the terms of the GNU General Public License. > For more information about these matters, see the file named COPYING > > > In [41]: fc.version_match(v) > Out[41]: '4.2.3' > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > > > ------------------------------ > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > End of Scipy-dev Digest, Vol 55, Issue 29 > ***************************************** From robert.kern at gmail.com Sat May 31 16:55:07 2008 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 31 May 2008 15:55:07 -0500 Subject: [SciPy-dev] Error building from svn on Intel Macs. In-Reply-To: <1D24F3A0-AEA9-4D1C-90E5-E6FAA59A78F3@interactivesupercomputing.com> References: <1D24F3A0-AEA9-4D1C-90E5-E6FAA59A78F3@interactivesupercomputing.com> Message-ID: <3d375d730805311355rc61eb06k964be661f844ffa5@mail.gmail.com> On Sat, May 31, 2008 at 3:24 PM, Viral Shah wrote: > I did 'rm -rf build', and here's the output from scratch. I'm building > with "python setup.py build". Since the log file is large, I am > putting it up here: > > http://gauss.cs.ucsb.edu/~viral/scipy-build.log I don't know what to tell you. Near as I can tell, SVN numpy should be able to parse that version string. I can't even find the place in the code where it is giving that error message. You can try installing the older gfortran-4.2.1, which works fine for me. I will try upgrading to 4.2.3 sometime later. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From pav at iki.fi Sat May 31 20:55:01 2008 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 01 Jun 2008 03:55:01 +0300 Subject: [SciPy-dev] Namespaces in documentation (was: ANN: NumPy/SciPy Documentation Marathon 2008) In-Reply-To: <9457e7c80805191020m5171e209ja4e8018638d1006b@mail.gmail.com> References: <1211159457.5853.84.camel@glup.physics.ucf.edu> <88e473830805190949s44bac124j6e94f91b517c5f20@mail.gmail.com> <4831B0D7.5010206@slac.stanford.edu> <9457e7c80805191020m5171e209ja4e8018638d1006b@mail.gmail.com> Message-ID: <1212281702.8410.56.camel@localhost.localdomain> ma, 2008-05-19 kello 19:20 +0200, St?fan van der Walt kirjoitti: > Hi Johann > > 2008/5/19 Johann Cohen-Tanugi : > > yesterday when I started to modify the doctest I used import numpy. It > > is easy enough to change to import numpy as np, but please let's get > > that out of the way once and for all. I have no preference between the 2. 
> Keep using numpy.func for now.  When this thread comes to a
> conclusion, I shall do a search and replace if necessary.

I took the liberty of changing this in

http://projects.scipy.org/scipy/numpy/wiki/CodingStyleGuidelines

as I think this issue should be settled sometime soon; changing the examples later is not very productive work. It now says:

- Docstring examples may assume ``import numpy`` in numpy and ``import scipy`` in scipy.

- ``See Also`` sections should use the full namespaced name. For targets in the same module as the documented one, omitting all namespace prefixes is OK, though.

These choices seemed simple, understandable without knowing the consensus about "good import abbreviations", and they avoid the "from foo import *" practice. If you have objections, let's restart the discussion.

	Pauli

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 189 bytes
Desc: Digitally signed message part
URL:
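To make the two bullet points quoted above concrete, a docstring written against them might look roughly like the sketch below. The function and the ``See Also`` targets are invented purely for illustration (nothing here is a proposed addition to numpy); the Examples section relies on ``import numpy`` being assumed rather than spelled out, and the ``See Also`` entries show the namespacing rule:

def clipped_mean(a, a_min, a_max):
    """
    Mean of `a` after clipping its values to [a_min, a_max].

    (Hypothetical function, used only to illustrate the docstring
    conventions described above.)

    See Also
    --------
    numpy.clip : lives in another module, so the full namespaced name is used
    mean, std : same-module targets may omit the namespace prefix

    Examples
    --------
    >>> numpy.clip(numpy.arange(5), 1, 3).mean()
    2.0
    """
    import numpy
    return numpy.clip(a, a_min, a_max).mean()

The same pattern would hold for a function in scipy, with ``import scipy`` assumed instead.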