From david at ar.media.kyoto-u.ac.jp Wed May 2 01:57:48 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Wed, 02 May 2007 14:57:48 +0900
Subject: [SciPy-dev] Cleaning and fixing fft in scipy ?
In-Reply-To: <60533.84.50.128.236.1177758912.squirrel@cens.ioc.ee>
References: <4631D3E8.30400@ar.media.kyoto-u.ac.jp> <463246D5.6040303@ieee.org> <46324944.7010208@ieee.org> <4632F4CD.5010302@ar.media.kyoto-u.ac.jp> <58658.84.50.128.236.1177754802.squirrel@cens.ioc.ee> <58855.84.50.128.236.1177755256.squirrel@cens.ioc.ee> <46332762.10701@ar.media.kyoto-u.ac.jp> <60533.84.50.128.236.1177758912.squirrel@cens.ioc.ee>
Message-ID: <4638285C.2040904@ar.media.kyoto-u.ac.jp>

Pearu Peterson wrote:
>
> I am ok with both steps. Let us know when the first step is complete
> so that we can test that the interface is working for different backends.

I have something almost ready for review (complex 1d only), but I have a
couple of questions:
    - My understanding is that except for the djbfft backend, all other
backends implement ffts of any size (e.g. different from 2^n); so the
djbfft backend falls back on a different backend for sizes other than 2^n.
For example, if FFTW (v2) is selected and djbfft detected, then the zfft
function is effectively using 2 backends: djbfft and fftw2. Would it be a
big problem if, in the djbfft case, sizes != 2^n were always handled
through a fixed backend (fftpack, I guess)?
    - there is one backend called fftwork. I would like to check whether my
patch at least compiles for all backends, but I don't know this backend,
and didn't find anything about it on the internet.

cheers,

David

From jtravs at gmail.com Wed May 2 05:02:50 2007
From: jtravs at gmail.com (John Travers)
Date: Wed, 2 May 2007 10:02:50 +0100
Subject: [SciPy-dev] Cleaning and fixing fft in scipy ?
In-Reply-To: <4638285C.2040904@ar.media.kyoto-u.ac.jp>
References: <4631D3E8.30400@ar.media.kyoto-u.ac.jp> <463246D5.6040303@ieee.org> <46324944.7010208@ieee.org> <4632F4CD.5010302@ar.media.kyoto-u.ac.jp> <58658.84.50.128.236.1177754802.squirrel@cens.ioc.ee> <58855.84.50.128.236.1177755256.squirrel@cens.ioc.ee> <46332762.10701@ar.media.kyoto-u.ac.jp> <60533.84.50.128.236.1177758912.squirrel@cens.ioc.ee> <4638285C.2040904@ar.media.kyoto-u.ac.jp>
Message-ID: <3a1077e70705020202r42f6668cu32756f76425f832f@mail.gmail.com>

On 02/05/07, David Cournapeau wrote:
> - My understanding is that except for the djbfft backend, all other
> backends implement ffts of any size (e.g. different from 2^n); so the
> djbfft backend falls back on a different backend for sizes other than
> 2^n. For example, if FFTW (v2) is selected and djbfft detected, then the
> zfft function is effectively using 2 backends: djbfft and fftw2. Would it
> be a big problem if, in the djbfft case, sizes != 2^n were always handled
> through a fixed backend (fftpack, I guess)?

Well, probably not, as people using djbfft probably know what they are
doing. But it does seem a bit drastic to drop from the fastest backend to
the slowest if, say, fftw or MKL is also installed. Especially as this
would provide lower performance than the current implementation.

But why is this necessary anyway? It is only decided at compile time,
right? Or are you going the whole way and compiling all available backends
and then letting people choose dynamically? That would be a nice feature.

Cheers,
John

From david at ar.media.kyoto-u.ac.jp Wed May 2 07:23:30 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Wed, 02 May 2007 20:23:30 +0900
Subject: [SciPy-dev] Cleaning and fixing fft in scipy ?
In-Reply-To: <3a1077e70705020202r42f6668cu32756f76425f832f@mail.gmail.com>
References: <4631D3E8.30400@ar.media.kyoto-u.ac.jp> <463246D5.6040303@ieee.org> <46324944.7010208@ieee.org> <4632F4CD.5010302@ar.media.kyoto-u.ac.jp> <58658.84.50.128.236.1177754802.squirrel@cens.ioc.ee> <58855.84.50.128.236.1177755256.squirrel@cens.ioc.ee> <46332762.10701@ar.media.kyoto-u.ac.jp> <60533.84.50.128.236.1177758912.squirrel@cens.ioc.ee> <4638285C.2040904@ar.media.kyoto-u.ac.jp> <3a1077e70705020202r42f6668cu32756f76425f832f@mail.gmail.com>
Message-ID: <463874B2.7000004@ar.media.kyoto-u.ac.jp>

John Travers wrote:
> On 02/05/07, David Cournapeau wrote:
>> - My understanding is that except for the djbfft backend, all other
>> backends implement ffts of any size (e.g. different from 2^n); so the
>> djbfft backend falls back on a different backend for sizes other than
>> 2^n. For example, if FFTW (v2) is selected and djbfft detected, then
>> the zfft function is effectively using 2 backends: djbfft and fftw2.
>> Would it be a big problem if, in the djbfft case, sizes != 2^n were
>> always handled through a fixed backend (fftpack, I guess)?
>
> Well, probably not, as people using djbfft probably know what they are
> doing. But it does seem a bit drastic to drop from the fastest backend
> to the slowest if, say, fftw or MKL is also installed. Especially as
> this would provide lower performance than the current implementation.

If fftw or mkl is installed, I am not sure djbfft would be used anyway. It
may have been the fastest fft a few years ago, but I cannot reproduce any
of the results given for djbfft with recent CPUs (and by recent, I mean a
Pentium 4, which is already a few years old) and compilers. More
fundamentally, if you care about fft efficiency, you use 2^n sizes.

> But why is this necessary anyway? It is only decided at compile time,
> right?

The goal of the reorganization was:
    - to make the existing code easier to read
    - to make it easier to add additional backends
It was a bit complicated to support point 2 with only C macros in a
reliable way (because of djbfft exceptions, like the ones I assume you put
in for mkl/djbfft). But after having thought a bit, I took a totally
different approach: I am now generating the code with a python script (see
my other email), such that adding a backend just consists of writing your
implementation in a C file and re-running the python script.

David

From david at ar.media.kyoto-u.ac.jp Wed May 2 07:59:39 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Wed, 02 May 2007 20:59:39 +0900
Subject: [SciPy-dev] Cleaning zfft: first patch
Message-ID: <46387D2B.9010701@ar.media.kyoto-u.ac.jp>

Hi,

    I've just finished the first part of cleaning fftpack (only zfft for
now; I wanted to get dev feedback before implementing the scheme for
everything). The patch is available as ticket 408:

http://projects.scipy.org/scipy/scipy/ticket/408

The patch consists of two parts:
    - one part splits each implementation out of zfft.c into zfft_name.c
files, where name is the name of the backend (mkl, fftw, fftw3, etc.). The
corresponding files have no #ifdefs and should be easy to read/improve
now.
    - the above source files are then used to generate a zfft.c file,
through a python script. The main reason why I used python to generate the
file instead of C macros is that this solution makes it easier to add a
backend: adding one only consists of creating a new zfft_name.c file and
adding its name to a python list in the python script. Also, as the
generated zfft.c is human-readable and does not use new macros, it should
be easy to read now.
    - Regenerating the zfft.c file is only necessary when one of the
backend sources is changed, or when a new backend is added. Otherwise, it
should work exactly as before; any change of behaviour with respect to
configuring/compiling/running is a bug :)

I've made no attempt to make the python script pretty, either.

cheers,

David
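A minimal sketch of what such a generator script might look like -- the
backend list, file naming, and guard macros here are illustrative, not
taken from the actual patch::

    # generate_zfft.py -- hypothetical sketch of the scheme described above
    backends = ['fftpack', 'fftw', 'fftw3', 'mkl']  # illustrative list

    def generate_zfft(out='zfft.c'):
        chunks = []
        for name in backends:
            # each backend keeps its implementation in its own,
            # #ifdef-free source file
            src = open('zfft_%s.c' % name).read()
            # wrap each implementation in a build-time guard
            chunks.append('#ifdef WITH_%s\n%s\n#endif\n' % (name.upper(), src))
        open(out, 'w').write('\n'.join(chunks))

    if __name__ == '__main__':
        generate_zfft()

Adding a backend then amounts to dropping in a new zfft_name.c file and
appending its name to the list, as described.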
From dosu004 at sges.auckland.ac.nz Wed May 2 08:12:05 2007
From: dosu004 at sges.auckland.ac.nz (David O'Sullivan)
Date: Thu, 03 May 2007 00:12:05 +1200
Subject: [SciPy-dev] Old numarray convolve2d fft=1 option
Message-ID: <46388015.60604@sges.auckland.ac.nz>

Hi,
I'm relatively new to this, and certainly no expert on the underlying math
of all this stuff, but I'm trying to do relatively large kernel
convolutions on large 2d matrices. The kernels might be 160x160 while the
matrices themselves might be 2500x2500. They're also likely to be sparse
matrices, but I'll worry about that later - for now I'm just scoping
things out.

So... anyway... with those size convolutions in mind, I'm intrigued by the
fft=1 option that was in the numarray.convolve2d function (or so it says
at http://stsdas.stsci.edu/numarray/numarray-1.5.html/node65.html).

This option doesn't seem to be in the current scipy.signal.convolve2d
function. Presumably it would speed 2d convolutions up a lot?

Is there a way around this?

Or a plan to put the fft implementation into scipy.signal.convolve2d?

Thanks

David

-- 
David O'Sullivan, Senior Lecturer
School of Geography and Environmental Science
University of Auckland | Te Whare Wananga o Tamaki Makaurau

From chanley at stsci.edu Wed May 2 10:07:27 2007
From: chanley at stsci.edu (Christopher Hanley)
Date: Wed, 02 May 2007 10:07:27 -0400
Subject: [SciPy-dev] Old numarray convolve2d fft=1 option
In-Reply-To: <46388015.60604@sges.auckland.ac.nz>
References: <46388015.60604@sges.auckland.ac.nz>
Message-ID: <46389B1F.50509@stsci.edu>

David O'Sullivan wrote:
> Hi,
> I'm relatively new to this, and certainly no expert on the underlying
> math of all this stuff, but I'm trying to do relatively large kernel
> convolutions on large 2d matrices. The kernels might be 160x160 while
> the matrices themselves might be 2500x2500. They're also likely to be
> sparse matrices, but I'll worry about that later - for now I'm just
> scoping things out.
>
> This option doesn't seem to be in the current scipy.signal.convolve2d
> function. Presumably it would speed 2d convolutions up a lot?
>
> Is there a way around this?
>
> Or a plan to put the fft implementation into scipy.signal.convolve2d?

David,

The numarray convolve package is currently living in
scipy.stsci.convolve. It should work in the same way the numarray version
did.
Chris

From a.schmolck at gmx.net Wed May 2 11:53:38 2007
From: a.schmolck at gmx.net (Alexander Schmolck)
Date: Wed, 02 May 2007 16:53:38 +0100
Subject: [SciPy-dev] getting test method name
In-Reply-To: <796269930704302025y26dadc5cq43f72ea6892aca3d@mail.gmail.com> (Brian Hawthorne's message of "Mon\, 30 Apr 2007 20\:25\:26 -0700")
References: <796269930704291725w76c30f7ewd7f1c44c362ac923@mail.gmail.com> <796269930704302025y26dadc5cq43f72ea6892aca3d@mail.gmail.com>
Message-ID:

[sorry, again moving something from email to the list]

"Brian Hawthorne" writes:

[concerning backwards-compatibility in mlabwrap's testing.py]

> I noticed though that you added a python >= 2.5 dependency in the
> setup.py file in order to have absolute imports.

Yes -- but I reckon we might want to stay a bit more conservative with the
testing stuff since our hope is to get that included somewhere else and
since scipy/matplotlib/numpy still support 2.3, IIRC. If it turns out not
to be needed (or there is little external interest in testing.py anyway),
I'd drop it immediately.

> I finally got matlab installed on my laptop but couldn't build until
> after removing that dependency (because I have py2.4). Since 2.4 is
> still the default version distributed for example with Fedora 6, do you
> think we could just wait a bit before adding a 2.5 dependency?

Sure, if installing python2.5 is inconvenient for you, feel free to revert
the changes (just comment the relevant blocks out). I'll try to at least
make some more headway with the proxying thing before I plan to make use
of python 2.5 features (customization of proxying behavior would also be a
good candidate for 2.5-style with-statements, however).

I'm a bit surprised by the implication that fedora (unlike debian or
ubuntu) doesn't manage to support multiple versions of python, which
sounds pretty sucky. BTW, is the following any help?
<http://www.serpentine.com/blog/2006/12/22/how-to-build-safe-clean-python-25-rpms-for-fedora-core-6/>

> It helps for everything to be as vanilla as possible.

Possibly the users, but not necessarily the developers :)

Obviously for what's currently in mlabwrap there is no need to require
python 2.5 -- I was just following Jarrod's idea that, provided we
eventually want a 2.5 dependency, requiring 2.5 as early as possible would
be a good thing (no 2.3/2.4 people already using it and starting to
complain). I think there is a high likelihood that we'll eventually want
to use 2.5 features, esp. with-statements[1], but ctypes out-of-the-box
seems also useful ("you need python2.5" is less likely to lead to
confusion than "either 2.5 or python 2.4 with ctypes >= 1.0.1"; BTW what
version is the fedora 6 rpm-ctypes?) and there are some other small
conveniences (conditional expressions, partial etc.). The other thing is
that mlabwrap-1.0, which supports pretty much any version of python,
matlab and numpy/numeric, is always available (and could conceivably even
be updated to reflect mlabwrap2.0's API more closely if necessary).

Anyway, let's defer the decision on this for the moment, but I'd be
interested to hear what others think, especially potential users. Since
Jarrod was OK with the 2.5 dependency I assume it's not much of a problem
for NiPY?

> For example, omitting the 2.5 dependency, mlabwrap could be distributed
> as an RPM for Fedora 6 (like scipy, numpy, and python).

I'm not sure how good the prospects of that are regardless of python2.5
dependencies since many of the various versions of matlab in general use
are mutually incompatible at a binary level.
Of course this is hardly a rebuttal of your point, since people with
fedora 6 (and fedora 7 will only come out this month) and plenty of python
packages installed from rpms are unlikely to be keen on installing 2.5
from source in order to then get a mlabwrap egg.

cheers,

'as

Footnotes:
[1] I thought we might use something like the following to locally change
    the (default) behavior of an MLabWrap instance::

      # either a)
      with mlab(proxy='never', squeeze=True) as mlab:
          x=mlab.some_fun(some_arg)
          y=mlab.some_other_fun(*some_more_args)
          ...

      # or b) just
      with mlab(proxy='never', squeeze=True):
          ...

    With both styles we can also use ``mlab(proxy='never', squeeze=True)``
    to create a new MlabWrap instance with changed defaults, for the case
    where one just wants to customize behavior for a single call, e.g.
    ``mlab(proxy='never', squeeze=True).some_fun(some_arg)``[2]. In the
    first case (a), we just use the with-statement as a fluid-let and the
    implementation is super-trivial (__enter__ returns self, __exit__ does
    nothing). The disadvantage is that this type of shadowing might feel
    unnatural to pythonistas whilst choosing another name (``... as
    mlab2``) might also feel clumsy; and more importantly, if the ``as``
    clause is accidentally omitted no error will be raised. Scheme (b)
    requires keeping track of the previous bindings of the __call__
    arguments in the newly created instance, setting them in the parent
    instance and restoring the parent's settings on __exit__, which has
    the disadvantage of being a bit contorted.

    BTW, can you think of another way to change defaults? I think we might
    need the __call__ syntax for that (meaning we can't use it for
    ``mlab._do``), but maybe there's some better possibility I'm
    overlooking.

[2] Although one could always use the syntax ``mlab.some_fun(some_arg,
    proxy='never', squeeze=True)`` for the one-off case, I think the
    ``mlab(proxy='never', squeeze=True).some_fun(...)`` spelling might be
    preferable since one might argue that conceptually -- unlike ``nout``
    -- these things don't really belong to the function call itself, and
    who knows, maybe matlab will grow keyword arguments one day.
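For option (a), the context-manager half of the sketch really is trivial;
a minimal illustration (the class and attribute names are invented here,
not mlabwrap's actual ones)::

    class MlabWrap(object):
        def __init__(self, **defaults):
            self._defaults = defaults

        def __call__(self, **overrides):
            # clone-with-modifications: a fresh instance, parent untouched
            return MlabWrap(**dict(self._defaults, **overrides))

        # fluid-let style (a): __enter__ returns self, __exit__ does nothing
        def __enter__(self):
            return self

        def __exit__(self, exc_type, exc_value, traceback):
            return False  # nothing to restore; don't swallow exceptions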
From a.schmolck at gmx.net Wed May 2 12:04:38 2007
From: a.schmolck at gmx.net (Alexander Schmolck)
Date: Wed, 02 May 2007 17:04:38 +0100
Subject: [SciPy-dev] should scikits.mlabwrap require python2.5 [Was: Re: getting test method name]
In-Reply-To: (Alexander Schmolck's message of "Wed\, 02 May 2007 16\:53\:38 +0100")
References: <796269930704291725w76c30f7ewd7f1c44c362ac923@mail.gmail.com> <796269930704302025y26dadc5cq43f72ea6892aca3d@mail.gmail.com>
Message-ID:

Sorry, I forgot to fix the subject line, please reply to this email
instead, I'll paste in the same body.

-- 
'as
From peridot.faceted at gmail.com Wed May 2 13:32:09 2007
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Wed, 2 May 2007 13:32:09 -0400
Subject: [SciPy-dev] Old numarray convolve2d fft=1 option
In-Reply-To: <46388015.60604@sges.auckland.ac.nz>
References: <46388015.60604@sges.auckland.ac.nz>
Message-ID:

On 02/05/07, David O'Sullivan wrote:
> I'm relatively new to this, and certainly no expert on the underlying
> math of all this stuff, but I'm trying to do relatively large kernel
> convolutions on large 2d matrices. The kernels might be 160x160 while
> the matrices themselves might be 2500x2500. They're also likely to be
> sparse matrices, but I'll worry about that later - for now I'm just
> scoping things out.

Sparse matrices are going to want another algorithm - if they're sparse
enough, the direct approach to convolutions (implemented taking sparseness
into account) will almost certainly be more efficient.

> This option doesn't seem to be in the current scipy.signal.convolve2d
> function. Presumably it would speed 2d convolutions up a lot?

Basically, yes. FFTs are fairly fast and they turn convolutions into
elementwise multiplication.

It would not be terribly difficult to implement from scratch: you take a
two-dimensional real FFT of each array, padding appropriately, multiply
the FFTs, and take an inverse two-dimensional real FFT. You may have to
fiddle with the normalizations a bit. But, as always, I would not spend
much effort optimizing the procedure until you're sure that this step will
be slow enough to matter. Though as you're convolving a matrix with over
six million elements, it probably will...

If you're *really* in a hurry, and the input matrices are sparse enough
but the output matrix is dense enough (possible with a convolution), it
might be reasonable to actually use a sparse DFT to construct the (dense)
Fourier transforms, taking advantage of the sparseness, and then an FFT to
reconstruct the convolution. I doubt it'll be worth the trouble. I've done
this sort of thing but only to avoid having to bin event data.

Anne
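The recipe Anne describes fits in a few lines of numpy -- a minimal
sketch, assuming real-valued inputs (the function name is made up)::

    import numpy as np

    def conv2_fft(a, k):
        # pad to the full linear-convolution size so the circular
        # convolution the FFT computes doesn't wrap around
        s = (a.shape[0] + k.shape[0] - 1, a.shape[1] + k.shape[1] - 1)
        return np.fft.irfft2(np.fft.rfft2(a, s) * np.fft.rfft2(k, s), s)

numpy's inverse transform takes care of the normalization here.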
From cookedm at physics.mcmaster.ca Wed May 2 13:46:05 2007
From: cookedm at physics.mcmaster.ca (David M. Cooke)
Date: Wed, 2 May 2007 13:46:05 -0400
Subject: [SciPy-dev] Cleaning and fixing fft in scipy ?
In-Reply-To: <3a1077e70705020202r42f6668cu32756f76425f832f@mail.gmail.com>
References: <4631D3E8.30400@ar.media.kyoto-u.ac.jp> <463246D5.6040303@ieee.org> <46324944.7010208@ieee.org> <4632F4CD.5010302@ar.media.kyoto-u.ac.jp> <58658.84.50.128.236.1177754802.squirrel@cens.ioc.ee> <58855.84.50.128.236.1177755256.squirrel@cens.ioc.ee> <46332762.10701@ar.media.kyoto-u.ac.jp> <60533.84.50.128.236.1177758912.squirrel@cens.ioc.ee> <4638285C.2040904@ar.media.kyoto-u.ac.jp> <3a1077e70705020202r42f6668cu32756f76425f832f@mail.gmail.com>
Message-ID: <0FB0B9F0-EF79-4F0E-BC5D-674B9C480A87@physics.mcmaster.ca>

On May 2, 2007, at 05:02 , John Travers wrote:
> On 02/05/07, David Cournapeau wrote:
>> - My understanding is that except for the djbfft backend, all other
>> backends implement ffts of any size (e.g. different from 2^n); so the
>> djbfft backend falls back on a different backend for sizes other than
>> 2^n. For example, if FFTW (v2) is selected and djbfft detected, then
>> the zfft function is effectively using 2 backends: djbfft and fftw2.
>> Would it be a big problem if, in the djbfft case, sizes != 2^n were
>> always handled through a fixed backend (fftpack, I guess)?
>
> Well, probably not, as people using djbfft probably know what they are
> doing.

Not necessarily :-) It's listed as a possible dependency, so people (like
me) just install it.

-- 
|>|\/|<
/------------------------------------------------------------------\
|David M. Cooke              http://arbutus.physics.mcmaster.ca/dmc/
|cookedm at physics.mcmaster.ca

From millman at berkeley.edu Wed May 2 14:00:55 2007
From: millman at berkeley.edu (Jarrod Millman)
Date: Wed, 2 May 2007 11:00:55 -0700
Subject: [SciPy-dev] getting test method name
In-Reply-To:
References: <796269930704291725w76c30f7ewd7f1c44c362ac923@mail.gmail.com> <796269930704302025y26dadc5cq43f72ea6892aca3d@mail.gmail.com>
Message-ID:

On 5/2/07, Alexander Schmolck wrote:
>
> [sorry, again moving something from email to the list]

Thanks.

> I'm a bit surprised by the implication that fedora (unlike debian or
> ubuntu) doesn't manage to support multiple versions of python, which
> sounds pretty sucky. BTW, is the following any help?
> <http://www.serpentine.com/blog/2006/12/22/how-to-build-safe-clean-python-25-rpms-for-fedora-core-6/>

Fedora is a community project and I bet they would be interested if
someone were to provide a python2.5 for Fedora 6. I don't have any time;
but if anyone is interested, just take a look at:
http://fedoraproject.org/wiki/PackageMaintainers

> It helps for everything to be as vanilla as possible.
>
> Anyway, let's defer the decision on this for the moment, but I'd be
> interested to hear what others think, especially potential users. Since
> Jarrod was OK with the 2.5 dependency I assume it's not much of a
> problem for NiPY?

I was initially; but then I noticed that there is no Python 2.5 package
for any of the released versions of Fedora. For the next year at least,
there will be people running older versions of Fedora. So it might be
better to hold off on a 2.5 dependency until most users have ready access
to it. (The version of ctypes provided by Fedora doesn't work with NumPy,
so you need to easy_install ctypes. That is pretty easy.)
-- 
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/

From millman at berkeley.edu Wed May 2 14:11:03 2007
From: millman at berkeley.edu (Jarrod Millman)
Date: Wed, 2 May 2007 11:11:03 -0700
Subject: [SciPy-dev] mlabwrap 2.0 vs. mlabwrap 1.1/1.5
Message-ID:

I have been thinking about what the priorities should be for the next
stable release of mlabwrap. My only personal priority is to have it become
a scikit release. I know that it is still somewhat unclear exactly what is
required to be a scikit, but my 3 priorities are:
1) use setuptools namespace packages.
2) remove support for Numeric and Numarray.
3) use NumPy testing.

Maybe the best way to proceed would be to quickly get these 3 issues
resolved and then release a stable release of mlabwrap. It should probably
be called 1.1 or 1.5. Then we could work on the 2.0 release, which would
switch to using ctypes and provide a new API. This could possibly depend
on python 2.5 as well.

-- 
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/
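For reference, priority 1 boils down to very little code with setuptools;
a minimal sketch (version number and dependencies illustrative)::

    # scikits/__init__.py -- the shared namespace stub:
    __import__('pkg_resources').declare_namespace(__name__)

    # setup.py (abridged):
    from setuptools import setup, find_packages

    setup(name='scikits.mlabwrap',
          version='1.1',
          packages=find_packages(),
          namespace_packages=['scikits'],
          install_requires=['numpy'])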
From brian.lee.hawthorne at gmail.com Wed May 2 18:04:47 2007
From: brian.lee.hawthorne at gmail.com (Brian Hawthorne)
Date: Wed, 2 May 2007 15:04:47 -0700
Subject: [SciPy-dev] Fwd: mlabwrap high-level user interface
In-Reply-To: <796269930705021458k73d768f2q315bedc6eec4c24f@mail.gmail.com>
References: <796269930704300050k272ba290t30d60098a9a72b3a@mail.gmail.com> <796269930704300114n1c3e85d6x691504d424fbdd4f@mail.gmail.com> <796269930705021458k73d768f2q315bedc6eec4c24f@mail.gmail.com>
Message-ID: <796269930705021504g772d3090jd2db35ec47f7981d@mail.gmail.com>

---------- Forwarded message ----------
From: Brian Hawthorne
Date: May 2, 2007 2:58 PM
Subject: Re: mlabwrap high-level user interface
To: Alexander Schmolck
Cc: scipy-dev at scipy.org, Matthew Brett, Taylor Berg-Kirkpatrick

On 4/30/07, Alexander Schmolck wrote:
>
> [moving this to scipy-dev]
>
> "Brian Hawthorne" writes:
>
> > Ah, forgot one critical use case, the function call!
> >
> >>>> res = engine.svd(X, nout=1)
> >>>> res = engine["svd"](X, nout=1)
>
> As I mentioned in the other post, I don't like that. There ought to be
> one way to do things, and the first type of call is clearly what we'd
> like function calls to typically look like.

Agreed, a single way to do things is preferable to multiple arbitrary
choices. I was thinking of rarer cases for the dictionary style lookup,
with attribute style being the common use case. For example:

1) Building layers on top (like an object browser), one might like to say:

>>> for name in engine.names: do_something_with(engine[name])

But I guess in such a case, you could always just use the built-in getattr
function.

2) Name conflicts. In the rare case that some lunatic named a matlab
variable "__getattr__", you could retrieve its value with
engine["__getattr__"], whereas engine.__getattr__ would return a bound
python method.

> I suspect that ``engine["svd"]`` ought to be equivalent to
> ``engine.svd()``.
>
> The reason why I'd at least strongly consider this funny looking
> behavior is that nullary-function call vs. variable lookup is below the
> interface level in matlab (i.e. syntactically indistinguishable and
> hence something that one is often at liberty to change for a variable
> that isn't intended for mutation; a bit like property access can be
> transparently replaced by a function call in python (using e.g.
> __getattr__) but not in lesser languages like java or C++).

Hmm, that does throw a wrench in the works.

-brian

From brian.lee.hawthorne at gmail.com Wed May 2 18:37:47 2007
From: brian.lee.hawthorne at gmail.com (Brian Hawthorne)
Date: Wed, 2 May 2007 15:37:47 -0700
Subject: [SciPy-dev] getting test method name
In-Reply-To:
References: <796269930704291725w76c30f7ewd7f1c44c362ac923@mail.gmail.com> <796269930704302025y26dadc5cq43f72ea6892aca3d@mail.gmail.com>
Message-ID: <796269930705021537y4ce061a1v1de29f95468818ba@mail.gmail.com>

> > For example, omitting the 2.5 dependency, mlabwrap could be
> > distributed as an RPM for Fedora 6 (like scipy, numpy, and python).
>
> I'm not sure how good the prospects of that are regardless of python2.5
> dependencies since many of the various versions of matlab in general
> use are mutually incompatible at a binary level. Of course this is
> hardly a rebuttal of your point, since people with fedora 6 (and fedora
> 7 will only come out this month) and plenty of python packages installed
> from rpms are unlikely to be keen on installing 2.5 from source in order
> to then get a mlabwrap egg.

Yes, excellent point. Since it's astronomically unlikely that matlab will
ever be distributed as a binary RPM from any open repository, it wouldn't
help much to distribute mlabwrap in that format, as it will probably need
to be built from source against whatever matlab is installed on the user's
system. I have not read all the egg documentation yet, is there such a
thing as a source egg, which will build itself when you install it? Or is
this the function of easy_install...?

> Footnotes:
> [1] I thought we might use something like the following to locally
>     change the (default) behavior of an MLabWrap instance::
>
>       # either a)
>       with mlab(proxy='never', squeeze=True) as mlab:
>           x=mlab.some_fun(some_arg)
>           y=mlab.some_other_fun(*some_more_args)
>           ...
>
>       # or b) just
>       with mlab(proxy='never', squeeze=True):
>           ...
>
>     With both styles we can also use ``mlab(proxy='never',
>     squeeze=True)`` to create a new MlabWrap instance with changed
>     defaults, for the case where one just wants to customize behavior
>     for a single call, e.g.
>     ``mlab(proxy='never', squeeze=True).some_fun(some_arg)``[2]. In the
>     first case (a), we just use the with-statement as a fluid-let and
>     the implementation is super-trivial (__enter__ returns self,
>     __exit__ does nothing). The disadvantage is that this type of
>     shadowing might feel unnatural to pythonistas whilst choosing
>     another name (``... as mlab2``) might also feel clumsy; and more
>     importantly, if the ``as`` clause is accidentally omitted no error
>     will be raised. Scheme (b) requires keeping track of the previous
>     bindings of the __call__ arguments in the newly created instance,
>     setting them in the parent instance and restoring the parent's
>     settings on __exit__, which has the disadvantage of being a bit
>     contorted.
>
>     BTW, can you think of another way to change defaults? I think we
>     might need the __call__ syntax for that (meaning we can't use it
>     for ``mlab._do``), but maybe there's some better possibility I'm
>     overlooking.

Ha, we are running out of magic syntax hooks!
Technically, there's no problem using __call__ for this purpose, since the
argument to be evaluated (as described in the previous design suggestions)
will not interfere with the keyword arguments. However, there may be some
cognitive dissonance on the users' part since __call__ would end up being
used for two very distinct purposes. But I can't think of anything better
right now.

Regarding styles a) and b) above, I personally prefer a), since b) has the
feeling of a temporary change in state to the primary engine object (even
if it's not implemented that way), whereas a) is more like
clone-with-modifications, which I prefer in general because it doesn't
require the object you're cloning to be mutable (and I always try to
reduce mutable state whenever I can). Also, it will work without the
"with" statement since you could say:

>>> engine2 = engine(proxy='never', squeeze=True)

As far as reconciling the two different functions of __call__, we can say
whenever a string is provided for evaluation, do not return a cloned
engine, but evaluate the string (with the settings provided as keyword
args, if any). Then whenever a string is *not* provided, always return a
cloned engine, modified according to the keyword args given.

> [2] Although one could always use the syntax ``mlab.some_fun(some_arg,
>     proxy='never', squeeze=True)`` for the one-off case, I think the
>     ``mlab(proxy='never', squeeze=True).some_fun(...)`` spelling might
>     be preferable since one might argue that conceptually -- unlike
>     ``nout`` -- these things don't really belong to the function call
>     itself, and who knows, maybe matlab will grow keyword arguments one
>     day.

Yes, probably a bad idea to add those settings to the function calls.

-brian
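A bare-bones sketch of the dual-purpose __call__ Brian describes (class
and method names invented for illustration)::

    class Engine(object):
        def __init__(self, **settings):
            self._settings = settings

        def __call__(self, code=None, **settings):
            merged = dict(self._settings, **settings)
            if code is None:
                # no string: return a clone with modified defaults
                return Engine(**merged)
            # string given: evaluate it under the merged settings
            return self._evaluate(code, merged)

        def _evaluate(self, code, settings):
            raise NotImplementedError  # would hand `code` to the matlab engine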
From stefan at sun.ac.za Wed May 2 20:54:21 2007
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Thu, 3 May 2007 02:54:21 +0200
Subject: [SciPy-dev] Old numarray convolve2d fft=1 option
In-Reply-To:
References: <46388015.60604@sges.auckland.ac.nz>
Message-ID: <20070503005421.GN10731@mentat.za.net>

On Wed, May 02, 2007 at 01:32:09PM -0400, Anne Archibald wrote:
> On 02/05/07, David O'Sullivan wrote:
>
> > I'm relatively new to this, and certainly no expert on the underlying
> > math of all this stuff, but I'm trying to do relatively large kernel
> > convolutions on large 2d matrices. The kernels might be 160x160 while
> > the matrices themselves might be 2500x2500. They're also likely to be
> > sparse matrices, but I'll worry about that later - for now I'm just
> > scoping things out.
>
> > This option doesn't seem to be in the current scipy.signal.convolve2d
> > function. Presumably it would speed 2d convolutions up a lot?
>
> Basically, yes. FFTs are fairly fast and they turn convolutions into
> elementwise multiplication.
>
> It would not be terribly difficult to implement from scratch: you take
> a two-dimensional real FFT of each array, padding appropriately,
> multiply the FFTs, and take an inverse two-dimensional real FFT. You
> may have to fiddle with the normalizations a bit.

Or try scipy.signal.fftconvolve...

Cheers
Stéfan

From a.schmolck at gmx.net Thu May 3 11:09:15 2007
From: a.schmolck at gmx.net (Alexander Schmolck)
Date: Thu, 03 May 2007 16:09:15 +0100
Subject: [SciPy-dev] Fwd: mlabwrap high-level user interface
In-Reply-To: <796269930705021504g772d3090jd2db35ec47f7981d@mail.gmail.com> (Brian Hawthorne's message of "Wed\, 2 May 2007 15\:04\:47 -0700")
References: <796269930704300050k272ba290t30d60098a9a72b3a@mail.gmail.com> <796269930704300114n1c3e85d6x691504d424fbdd4f@mail.gmail.com> <796269930705021458k73d768f2q315bedc6eec4c24f@mail.gmail.com> <796269930705021504g772d3090jd2db35ec47f7981d@mail.gmail.com>
Message-ID:

> Agreed, a single way to do things is preferable to multiple arbitrary
> choices. I was thinking of rarer cases for the dictionary style lookup,
> with attribute style being the common use case. For example:
>
> 1) Building layers on top (like an object browser), one might like to
> say:
>
> >>> for name in engine.names: do_something_with(engine[name])
>
> But I guess in such a case, you could always just use the built-in
> getattr function.

Yup.

> 2) Name conflicts. In the rare case that some lunatic named a matlab
> variable "__getattr__", you could retrieve its value with
> engine["__getattr__"], whereas engine.__getattr__ would return a bound
> python method.

Ah, that's not a problem -- a big part of the reason that things like
``mlab._get`` and ``mlab._do`` start with an underscore is that legal
matlab names don't. So currently name conflicts really *can't* happen, and
this is a reason I'd like to keep "non-private" methods out.

Actually, there is one additional wrinkle: to allow for identifiers that
are legal in matlab but not in python (prime example: ``print``), a final
trailing '_' is stripped on the way to matlab (e.g. ``mlab.print_`` ->
``print`` in matlab). This name-mangling is obviously bijective, so there
are still no name clashes, but to achieve consistency lookup methods such
as ``._get`` or the one you proposed (``mlab['print']``) should maybe also
implement it (i.e. mangle 'print_' and even throw an error on 'print',
because it is a python keyword); this is currently not the case.

>> I suspect that ``engine["svd"]`` ought to be equivalent to
>> ``engine.svd()``.
>>
>> The reason why I'd at least strongly consider this funny looking
>> behavior is that nullary-function call vs. variable lookup is below the
>> interface level in matlab (i.e. syntactically indistinguishable and
>> hence something that one is often at liberty to change for a variable
>> that isn't intended for mutation; a bit like property access can be
>> transparently replaced by a function call in python (using e.g.
>> __getattr__) but not in lesser languages like java or C++).
>
> Hmm, that does throw a wrench in the works

Well, not necessarily. Although I'd suspect that quite a few users of
matlab are not aware of the fact that ``x = inf`` does involve a function
call (``inf()``), I'm not sure that resolving ``mlab.x`` to a
variable-getting operation when ``x`` is a variable and not a function
would cause much interface breakage on the whole -- in fact I suppose the
breakage would be quite negligible (since it would only affect constants
that were reimplemented as functions (or vice versa), not something that I
think occurs often). Can anyone think of examples?

Another consideration is that this would prevent users from just assigning
things to a matlab instance or the MlabWrap class, which is something that
I think can be useful (e.g. to customize the default ``nout`` (etc.) value
for a certain function ``f`` you can now just do:

  mlab.f = mlab._make_mlab_command('f', nout=3, doc=mlab._raw_eval("help('f')"))

).

Lastly, performance-related considerations: the terrible performance of
all matlab engine calls means that this scheme could be close to 2x slower
in some cases (two engine calls: engEvalString + engGetVariable) than
mlab['x'] (one engine call: engGetVariable); it would also render the
memoizing that currently happens on ``getattr(mlab, x)`` more problematic.

So, in summary, I think that treating ``mlab.x`` as variable
lookup/assignment when ``x`` is a matlab variable is in principle feasible
-- the downsides and difficulties I see are not overwhelming, but I
currently would favor using ``mlab['x']`` for variables and reserving
``mlab.x`` for functions, unless someone can come up with a compelling
argument. Unlike the __call__ syntax, I don't see anything else that the
``[]`` syntax could be used for instead, either.

cheers,

'as
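The bijective mangling rule described above amounts to something like this
(a sketch, not the actual mlabwrap code)::

    def python_to_matlab(name):
        # mlab's own methods start with '_'; legal matlab names never do,
        # so clashes cannot happen
        assert not name.startswith('_')
        # strip exactly one trailing '_' (mlab.print_ -> matlab's print);
        # stripping only ever one underscore keeps the mapping invertible
        return name[:-1] if name.endswith('_') else name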
From a.schmolck at gmx.net Thu May 3 12:33:48 2007
From: a.schmolck at gmx.net (Alexander Schmolck)
Date: Thu, 03 May 2007 17:33:48 +0100
Subject: [SciPy-dev] getting test method name
In-Reply-To: <796269930705021537y4ce061a1v1de29f95468818ba@mail.gmail.com> (Brian Hawthorne's message of "Wed\, 2 May 2007 15\:37\:47 -0700")
References: <796269930704291725w76c30f7ewd7f1c44c362ac923@mail.gmail.com> <796269930704302025y26dadc5cq43f72ea6892aca3d@mail.gmail.com> <796269930705021537y4ce061a1v1de29f95468818ba@mail.gmail.com>
Message-ID:

"Brian Hawthorne" writes:

> Yes, excellent point. Since it's astronomically unlikely that matlab
> will ever be distributed as a binary RPM from any open repository, it
> wouldn't help much to distribute mlabwrap in that format, as it will
> probably need to be built from source against whatever matlab is
> installed on the user's system. I have not read all the egg
> documentation yet, is there such a thing as a source egg, which will
> build itself when you install it? Or is this the function of
> easy_install...?

Yup -- I think so, easy_install should allow an
out-of-the-box-from-source install -- at least in cases that don't succumb
to the general horrors of matlab mex/engine extension building that you've
experienced firsthand.

> > Footnotes:
> >> [1] I thought we might use something like the following to locally
> >>     change the (default) behavior of an MLabWrap instance::
> >>
> >>       # either a)
> >>       with mlab(proxy='never', squeeze=True) as mlab:
> >>           x=mlab.some_fun(some_arg)
> >>           y=mlab.some_other_fun(*some_more_args)
> >>           ...
> >>
> >>       # or b) just
> >>       with mlab(proxy='never', squeeze=True):
> >>           ...
>
> Ha, we are running out of magic syntax hooks! Technically, there's no
> problem using __call__ for this purpose, since the argument to be
> evaluated (as described in the previous design suggestions) will not
> interfere with the keyword arguments. However, there may be some
> cognitive dissonance on the users' part since __call__ would end up
> being used for two very distinct purposes.

Uhm, there could be some cognitive dissonance on my part as well...

> But I can't think of anything better right now.

I can: ``mlab._do('whatever')``. It's just an extra 4 chars of typing
overhead and, best of all, already implemented. I really don't see the
attraction of creating something really messy just to save 4 keystrokes
for bypassing the high-level interface -- something which IMO should
neither be often required nor particularly encouraged. Can you give some
examples why you think it's important to make this more convenient?
> Regarding styles a) and b) above, I personally prefer a), since b) has
> the feeling of a temporary change in state to the primary engine object
> (even if it's not implemented that way)

Actually it is (by necessity), at least if I understand you correctly.

> whereas a) is more like clone-with-modifications, which I prefer in
> general because it doesn't require the object you're cloning to be
> mutable (and I always try to reduce mutable state whenever I can).

Yeah, I'm with you on avoiding spurious uses of state.

> Also, it will work without the "with" statement since you could say:
>
> >>> engine2 = engine(proxy='never', squeeze=True)

Yup -- that's the idea (but this remains true for the other option as
well, as I tried to indicate -- the implementation obviously gets nastier
though). Anyway, I'm happy to go with a) for starters; if we find it
doesn't work well enough we can always implement the more complex scheme.

> As far as reconciling the two different functions of __call__, we can
> say whenever a string is provided for evaluation, do not return a cloned
> engine, but evaluate the string (with the settings provided as keyword
> args, if any). Then whenever a string is *not* provided, always return a
> cloned engine, modified according to the keyword args given.

This looks like it is taken from a perl manpage (i.e. no thanks :)

cheers,

alex

From david at ar.media.kyoto-u.ac.jp Fri May 4 03:54:11 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Fri, 04 May 2007 16:54:11 +0900
Subject: [SciPy-dev] mlabwrap, alternative to LD_LIBRARY_PATH ?
Message-ID: <463AE6A3.6040000@ar.media.kyoto-u.ac.jp>

Hi,

    I had some problems using mlabwrap on my workstation due to a stupid
problem, and I think it can cause problems for other people too, so I
report it here (I didn't see any way to report bugs for scikits yet?).
The problem:
    - to use mlabwrap, one needs to set LD_LIBRARY_PATH to the directory
where libeng.so is located.
    - the directory containing libeng.so contains a lot of libraries,
including a libz.so.
    - as LD_LIBRARY_PATH takes precedence over system-wide libraries, this
means that when you launch python, it will use the libz from matlab
instead of the system-wide libz, which is likely to cause problems (it
caused me a headache because of unfound symbols when launching some python
scripts).

    I don't know an easy solution to this problem: a workaround is to link
the necessary libraries in another directory, and set this directory in
LD_LIBRARY_PATH. Another one is to use -rpath, but I don't know if
distutils exposes this link option, and if it is available on all
platforms anyway.

Cheers,

David

From a.schmolck at gmx.net Fri May 4 04:33:38 2007
From: a.schmolck at gmx.net (Alexander Schmolck)
Date: Fri, 04 May 2007 09:33:38 +0100
Subject: [SciPy-dev] mlabwrap, alternative to LD_LIBRARY_PATH ?
In-Reply-To: <463AE6A3.6040000@ar.media.kyoto-u.ac.jp> (David Cournapeau's message of "Fri\, 04 May 2007 16\:54\:11 +0900")
References: <463AE6A3.6040000@ar.media.kyoto-u.ac.jp>
Message-ID:

David Cournapeau writes:

> Hi,
>
> I had some problems using mlabwrap on my workstation due to a stupid
> problem, and I think it can cause problems for other people too, so I
> report it here (I didn't see any way to report bugs for scikits yet?).
> The problem:
> - to use mlabwrap, one needs to set LD_LIBRARY_PATH to the directory
> where libeng.so is located.
> - the directory containing libeng.so contains a lot of libraries,
> including a libz.so.
> - as LD_LIBRARY_PATH takes precedence over system-wide libraries, this
> means that when you launch python, it will use the libz from matlab
> instead of the system-wide libz, which is likely to cause problems (it
> caused me a headache because of unfound symbols when launching some
> python scripts).
>
> I don't know an easy solution to this problem: a workaround is to link
> the necessary libraries in another directory, and set this directory in
> LD_LIBRARY_PATH. Another one is to use -rpath, but I don't know if
> distutils exposes this link option, and if it is available on all
> platforms anyway.

Yeah, the whole LD_LIBRARY_PATH thing is a huge pain -- there ought to be
a better way to do this, but it's the "official" one:

Any unix build experts who can comment on this?

cheers,

'as

From david at ar.media.kyoto-u.ac.jp Fri May 4 04:35:09 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Fri, 04 May 2007 17:35:09 +0900
Subject: [SciPy-dev] mlabwrap, alternative to LD_LIBRARY_PATH ?
In-Reply-To:
References: <463AE6A3.6040000@ar.media.kyoto-u.ac.jp>
Message-ID: <463AF03D.70807@ar.media.kyoto-u.ac.jp>

Alexander Schmolck wrote:
>
> Yeah, the whole LD_LIBRARY_PATH thing is a huge pain -- there ought to
> be a better way to do this, but it's the "official" one:
>
> Any unix build experts who can comment on this?

If I were to build the thing with gcc, I would use -rpath, which hardcodes
the library runtime path into the binary. I used this quite often with mex
files, when I was still using matlab.

David

From matthew.brett at gmail.com Fri May 4 04:53:29 2007
From: matthew.brett at gmail.com (Matthew Brett)
Date: Fri, 4 May 2007 09:53:29 +0100
Subject: [SciPy-dev] mlabwrap, alternative to LD_LIBRARY_PATH ?
In-Reply-To: <463AF03D.70807@ar.media.kyoto-u.ac.jp>
References: <463AE6A3.6040000@ar.media.kyoto-u.ac.jp> <463AF03D.70807@ar.media.kyoto-u.ac.jp>
Message-ID: <1e2af89e0705040153q6306671md58a897874079a8c@mail.gmail.com>

Hi,

> If I were to build the thing with gcc, I would use -rpath, which
> hardcodes the library runtime path into the binary. I used this quite
> often with mex files, when I was still using matlab.

distutils has a runtime_library_dirs option

For eggs:
http://cheeseshop.python.org/pypi/zc.recipe.egg/1.0.0b6#creating-eggs-with-extensions-neededing-custom-build-settings

I guess that would cover it?

Matthew
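In distutils terms that looks something like the following (the matlab
path, source file, and library names are illustrative)::

    from distutils.core import setup, Extension

    matlab_libdir = '/opt/matlab/bin/glnx86'   # wherever libeng.so lives

    ext = Extension('mlabraw',
                    sources=['mlabraw.cpp'],
                    libraries=['eng', 'mx'],
                    library_dirs=[matlab_libdir],
                    # bakes the path into the binary (gcc's -rpath), so
                    # LD_LIBRARY_PATH is not needed at run time:
                    runtime_library_dirs=[matlab_libdir])

    setup(name='mlabraw', ext_modules=[ext])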
From a.schmolck at gmx.net Fri May 4 05:38:16 2007
From: a.schmolck at gmx.net (Alexander Schmolck)
Date: Fri, 04 May 2007 10:38:16 +0100
Subject: [SciPy-dev] mlabwrap, alternative to LD_LIBRARY_PATH ?
In-Reply-To: <1e2af89e0705040153q6306671md58a897874079a8c@mail.gmail.com> (Matthew Brett's message of "Fri\, 4 May 2007 09\:53\:29 +0100")
References: <463AE6A3.6040000@ar.media.kyoto-u.ac.jp> <463AF03D.70807@ar.media.kyoto-u.ac.jp> <1e2af89e0705040153q6306671md58a897874079a8c@mail.gmail.com>
Message-ID:

"Matthew Brett" writes:

> distutils has a runtime_library_dirs option
>
> For eggs:
> http://cheeseshop.python.org/pypi/zc.recipe.egg/1.0.0b6#creating-eggs-with-extensions-neededing-custom-build-settings
>
> I guess that would cover it?

Hey, thanks -- I was just about to send off an email to the distutils-sig,
asking about this (having not much luck googling setuptools
LD_LIBRARY_PATH and similar), but this seems to do the trick -- I've
modified setup.py accordingly and it appears to work fine. Great!

cheers,

'as

From bart.vandereycken at cs.kuleuven.be Fri May 4 09:22:30 2007
From: bart.vandereycken at cs.kuleuven.be (Bart Vandereycken)
Date: Fri, 04 May 2007 15:22:30 +0200
Subject: [SciPy-dev] Callback feature for iterative solvers
In-Reply-To: <1b5a37350701240945q309f2855y60ad2516bf1d959@mail.gmail.com>
References: <1b5a37350701240635m2e99cd54j26c439ce5be6ae52@mail.gmail.com> <45B7768D.4080209@iam.uni-stuttgart.de> <1b5a37350701240833j342991c5mb0dd9f8cc124743b@mail.gmail.com> <45B78B73.7040405@iam.uni-stuttgart.de> <1b5a37350701240945q309f2855y60ad2516bf1d959@mail.gmail.com>
Message-ID:

Ed Schofield wrote:
> On 1/24/07, *Nils Wagner* wrote:
>> Ed Schofield wrote:
>>> On 1/24/07, *Nils Wagner* wrote:
>>>> Hi Ed,
>>>>
>>>> Great ! Thank you very much ! It seems to work well with the
>>>> exception of GMRES. Please find attached a short test. Am I missing
>>>> something ?
>>>
>>> No, I agree. It seems that there's something wrong with GMRES ...
>>
>> Hi Ed,
>>
>> Done. Maybe you can add some comments. Again thank you very much for
>> the new functionality !!
>>
>> http://projects.scipy.org/scipy/scipy/ticket/360
>
> Actually, I'm mystified about the meaning of iter_ in gmres() and the
> others. Can anyone shed any light on this? Why is the output value of
> iter_ from revcom() used as the *input* value in the next iteration?
> Why is the maxiter argument only used once, for the first call to
> revcom(), and then apparently ignored for all subsequent calls?

The meaning of iter_ is the number of restarted GMRES iterations, not the
total number. The bottom line is the nice property of GMRES that it can
compute the residuals without updating the iterate x.

> I'm inclined to revert the callback patch -- it seems broken. But if the
> revcom Fortran functions can perform multiple iterations each, we can't
> easily call back Python functions each iteration. Is there a better
> solution?

I've made a patch in http://projects.scipy.org/scipy/scipy/ticket/360 that
gives the relative residual to the callback routine.

-- 
bart
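For anyone wanting to try the callback, usage looks roughly like this
(sketched against the current scipy.sparse.linalg interface; the module
path and keyword names differed in the 2007 tree)::

    import numpy as np
    from scipy.sparse.linalg import gmres

    A = np.eye(100) + 0.01 * np.random.rand(100, 100)
    b = np.ones(100)

    resids = []
    def cb(rk):
        # called with the (preconditioned) residual norm at each iteration
        resids.append(rk)

    x, info = gmres(A, b, callback=cb, callback_type='pr_norm')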
From david.huard at gmail.com Mon May 7 14:54:38 2007
From: david.huard at gmail.com (David Huard)
Date: Mon, 7 May 2007 14:54:38 -0400
Subject: [SciPy-dev] maskedarray.maximum inconsistent with numpy.ma.maximum
Message-ID: <91cf711d0705071154i63699637mcc402580eb6300f7@mail.gmail.com>

Hi,

maskedarray.maximum(2darray) returns a 1d array, whereas
numpy.ma.maximum returns a scalar.

This difference breaks matplotlib's contour function when numpy.ma is
replaced by maskedarray.

David

From pgmdevlist at gmail.com Mon May 7 16:48:28 2007
From: pgmdevlist at gmail.com (Pierre GM)
Date: Mon, 7 May 2007 16:48:28 -0400
Subject: [SciPy-dev] maskedarray.maximum inconsistent with numpy.ma.maximum
In-Reply-To: <91cf711d0705071154i63699637mcc402580eb6300f7@mail.gmail.com>
References: <91cf711d0705071154i63699637mcc402580eb6300f7@mail.gmail.com>
Message-ID: <200705071648.28579.pgmdevlist@gmail.com>

On Monday 07 May 2007 14:54:38 David Huard wrote:
> maskedarray.maximum(2darray) returns a 1d array, whereas
> numpy.ma.maximum returns a scalar.

David, thanks for the info.

The explanation is that numpy.core.ma.maximum computes the max of the
compressed array (turned into 1D), while maskedarray.maximum calls
numpy.core.umath.maximum.reduce on the 2D array. And
numpy.core.umath.maximum.reduce on a 2D array returns a 1D array (the
default axis is 0). Is this behavior wanted? An axis=None is not
recognized as a valid argument for numpy.core.umath.maximum,
unfortunately...

> This difference breaks matplotlib's contour function when numpy.ma is
> replaced by maskedarray.

That's unfortunate indeed. A quick fix is to force a 'ravel' on 'target'
in '_extrema_operation.reduce', just after kargs={}. Before committing the
fix, I'd like to know whether numpy.core.umath.maximum.reduce should
accept a None value for the axis...
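The axis-0 default is easy to see in isolation with plain numpy::

    import numpy as np

    a = np.arange(6).reshape(2, 3)
    np.maximum.reduce(a)           # array([3, 4, 5]): reduces along axis 0
    np.maximum.reduce(a.ravel())   # 5: the scalar the compressed path gives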
In the meantime, with (pre-scikits) mlabwrap 1.0, there is already something viable and stable in the open that people who want to use mlabwrap to interface to matlab can use (and I think scikits.mlabwrap should make the transition easy by offering a compatibility layer, if required). For those who are willing to live on the bleeding edge there's always svn, and we can also make development release-eggs (BTW, I think we should really move the svn repository to scipy.org soon, if possible). Introducing incompatible changes piecemeal into scikits.mlabwrap without causing user-annoyance and confusion seems less manageable to me in comparison. Having said this, I do agree with the priorities outlined below, and I think we should come up with some reasonable project time-table, so that we're working towards concrete milestones -- I'll append some things on my mind below.

>> I know that it is still somewhat unclear exactly what is required to be a scikit, but my 3 priorities are:
>> 1) use setuptools namespace packages.

Essentially done, but there are some related issues (where ``testing.py`` should go and getting the version.py thing to work properly).

>> 2) remove support for Numeric and Numarray.

Done.

>> 3) use NumPy testing.

Mostly done. The only thing that needs to be reviewed IMO, *if* we do want to make a release, is whether we really want to have testing.py in the scikits namespace (i.e. ``scikits.testing``). For an official release, I suggest not including testing (making test_mlabwrap import it conditionally), since a) it's not clear to me that we should put something other than mlabwrap into scikits/ without further discussion b) it's just there to ease test-driven development and is not needed to run the tests in batch-mode c) it's pretty pre-alpha.

Adding to your list, here are the things currently on my mind:

4) Move svn to scipy.org (as mentioned above)
5) Get the info on sorted out; this should indirectly resolve all remaining mlabwrap issues pertaining to all scikits (things like version.py, the use of numpy.distutils and trac etc.; I'll try to send an email to the list about this in a few hours)
6) Maintenance release of mlabwrap-1.0.1 which solves the LD_LIBRARY_PATH problem (unless we do a scikits.mlabwrap 1.0 release)
7) Make interactive, test-driven development convenient by augmenting numpy.testing as required
8) Commit the hybrid-proxying stuff, so people can give it a spin

>> Maybe the best way to proceed would be to quickly get these 3 issues resolved and then release a stable release of mlabwrap. It should probably be called 1.1 or 1.5.
> That seems like a good incremental step. I believe the three issues above are taken care of. I created a 1.1 branch to represent the release. Alexander, can you think of anything else that needs to be done for these three things before making the 1.1 release? If not, I'll go ahead and post an egg to PyPI.

I'm all for releasing an egg, but as I said, I'd prefer such an egg to be clearly marked as a development release unless there is a compelling reason we need an official 1.1 release. cheers, 'as

From matthew.brett at gmail.com Wed May 9 13:35:10 2007
From: matthew.brett at gmail.com (Matthew Brett)
Date: Wed, 9 May 2007 18:35:10 +0100
Subject: [SciPy-dev] mlabwrap 2.0 vs. mlabwrap 1.1/1.5
In-Reply-To: References: <796269930705071739i4b710fa1qd879dea5bf149f4f@mail.gmail.com>
Message-ID: <1e2af89e0705091035l54f3cd0cx9803e5132595891c@mail.gmail.com>

Hi,

> What are your reasons for wanting to get out a scikits release first and foremost? I ask because I'm a bit reluctant to release a repackaged mlabwrap 1.0 as scikits.mlabwrap 1.1, unless it solves some concrete problem.

I would have thought this would be a useful step. It will get people used to the scikits location, provide a scikits example for other projects, and remove numarray/numeric compatibility issues, thus simplifying the install. Also the rpath stuff will help the install, and those of us using mlabwrap in larger projects should have fewer places to point at for installation instructions as scikits gets going. I doubt the differences in the import statement will bother many of us. See you, Matthew

From david.huard at gmail.com Thu May 10 11:49:12 2007
From: david.huard at gmail.com (David Huard)
Date: Thu, 10 May 2007 11:49:12 -0400
Subject: [SciPy-dev] patch to support ND convert in timeseries package
Message-ID: <91cf711d0705100849n77b42babhf19e47142e99bb8a@mail.gmail.com>

Hi, here are two patches to support the convert method on ND timeseries: hsplit.patch must be applied to the maskedarrays package, and ndconvert.patch to the timeseries package. Cheers, David

From matthieu.brucher at gmail.com Fri May 11 03:18:07 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Fri, 11 May 2007 09:18:07 +0200
Subject: [SciPy-dev] Scikits for optimization ?
Message-ID:

Hi, I posted my proposal to the scipy Trac page, but I'm wondering if it wouldn't be better to provide a "toolbox", and hence a scikit, instead of trying to put it in scipy. My change of thought started from the fact that scipy has very good optimizers at this point, and my proposal is for people that want to use very specific algorithms and to tweak them. And isn't that what scikits were made for? Matthieu

From mforbes at physics.ubc.ca Fri May 11 03:36:43 2007
From: mforbes at physics.ubc.ca (Michael McNeil Forbes)
Date: Fri, 11 May 2007 00:36:43 -0700
Subject: [SciPy-dev] Scikits for optimization ?
In-Reply-To: References: Message-ID: <1B9AC669-F70B-471F-AB5A-1A59021C0239@physics.ubc.ca>

Hi Matthieu, I have been busy lately, but I have been thinking about optimization, especially the interface to optimizers, and I think that the current optimization module could do with a lot of cleaning up; I have not had time to flesh out a full proposal yet. (I started to do this on the Trac Wiki but need to think this through more carefully.) Basically, there is very little cohesion with the various optimizers: they just wrap underlying fortran routines. The arguments do not even have the same name. It would be nice to flesh out a consistent interface and way of dealing with the plethora of options. Once a consistent interface is established, then it would be easier to include additional optimizers built out of modules.
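Such a consistent front end might look roughly like the following (just a sketch for discussion -- the function name, signature and dispatch table here are all made up, not an actual proposal):

def minimize(func, x0, method='neldermead', tol=1e-8, maxiter=1000,
             callback=None):
    """One uniformly named entry point that dispatches to concrete solvers."""
    try:
        solver = _SOLVERS[method]
    except KeyError:
        raise ValueError('unknown method %r' % method)
    return solver(func, x0, tol=tol, maxiter=maxiter, callback=callback)

def _neldermead(func, x0, tol, maxiter, callback):
    # a real implementation would wrap the existing fmin-style routine here
    raise NotImplementedError

# registering a new optimizer would then just mean adding an entry here
_SOLVERS = {'neldermead': _neldermead}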
Once this is set up, I think it would be good to have this in SciPy itself rather than a scikit because optimization is a core scientific task, but perhaps a scikit is a good place to start (also, a scikit is the only option if any code has an incompatible licence). Michael.

On 11 May 2007, at 12:18 AM, Matthieu Brucher wrote:
> Hi,
> I posted my proposal to the scipy Trac page, but I'm wondering if it wouldn't be better to provide a "toolbox", and hence a scikit, instead of trying to put it in scipy.
> My change of thought started from the fact that scipy has very good optimizers at this point, and my proposal is for people that want to use very specific algorithms and to tweak them. And isn't that what scikits were made for?
> Matthieu

From matthieu.brucher at gmail.com Fri May 11 04:03:25 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Fri, 11 May 2007 10:03:25 +0200
Subject: [SciPy-dev] Scikits for optimization ?
In-Reply-To: <1B9AC669-F70B-471F-AB5A-1A59021C0239@physics.ubc.ca>
References: <1B9AC669-F70B-471F-AB5A-1A59021C0239@physics.ubc.ca>
Message-ID:

2007/5/11, Michael McNeil Forbes :
> Hi Matthieu, I have been busy lately, but I have been thinking about optimization, especially the interface to optimizers, and I think that the current optimization module could do with a lot of cleaning up; I have not had time to flesh out a full proposal yet. (I started to do this on the Trac Wiki but need to think this through more carefully.)

I'm not a pro here, so I'll follow the proposal as soon as it is ready ;)

> Basically, there is very little cohesion with the various optimizers: they just wrap underlying fortran routines. The arguments do not even have the same name. It would be nice to flesh out a consistent interface and way of dealing with the plethora of options. Once a consistent interface is established, then it would be easier to include additional optimizers built out of modules.

That would indeed be a good idea; I have my own naming convention for parameters (x0 or start_point?, ...)

> Once this is set up, I think it would be good to have this in SciPy itself rather than a scikit because optimization is a core scientific task, but perhaps a scikit is a good place to start (also, a scikit is the only option if any code has an incompatible licence).

While I agree that optimization must stay in scipy, my point of view is similar to Matlab's: some simple, ready-to-go functions for optimization (fmin, leastsq, ...) in the core, and an additional toolbox with more flexibility, ... Otherwise, it could turn into a mess? Matthieu

From mforbes at physics.ubc.ca Fri May 11 04:24:53 2007
From: mforbes at physics.ubc.ca (Michael McNeil Forbes)
Date: Fri, 11 May 2007 01:24:53 -0700
Subject: [SciPy-dev] Scikits for optimization ?
In-Reply-To: References: <1B9AC669-F70B-471F-AB5A-1A59021C0239@physics.ubc.ca>
Message-ID: <83F797BF-10C0-4933-B533-316338B8B114@physics.ubc.ca>

> I'm not a pro here, so I'll follow the proposal as soon as it is ready ;)

Don't mistake me for a pro either!-) I would just like to see a clean interface so it is easy to switch optimizers. Please put any ideas you have on the wiki. When we get a fairly complete proposal, we can bring it to the attention of the mailing list for comments.

> That would indeed be a good idea; I have my own naming convention for parameters (x0 or start_point?, ...)
I think definitely x0 on this one. (Shorter, language agnostic, pretty ubiquitous in the current algorithms.)

> While I agree that optimization must stay in scipy, my point of view is similar to Matlab's: some simple, ready-to-go functions for optimization (fmin, leastsq, ...) in the core, and an additional toolbox with more flexibility, ... Otherwise, it could turn into a mess?

Maybe. If it is carefully organized and documented, however, I don't think it has to be a mess. Michael.

From ondrej at certik.cz Sun May 13 13:20:14 2007
From: ondrej at certik.cz (Ondrej Certik)
Date: Sun, 13 May 2007 19:20:14 +0200
Subject: [SciPy-dev] broyden methods committed
Message-ID: <85b5c3130705131020l2f177bd4x53f75653927a9ad2@mail.gmail.com>

Hi, I just committed the broyden scheme methods: http://projects.scipy.org/scipy/scipy/ticket/402 They are in the optimize module, together with tests that run fine. Documentation is in the form of docstrings. Please tell me if it is understandable, or whether I should work a little more on it. If it is ok, then the next step for me will be to look at the differential evolution codes and commit the better one into scipy. Thanks, Ondrej

From nwagner at iam.uni-stuttgart.de Sun May 13 13:23:12 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Sun, 13 May 2007 19:23:12 +0200
Subject: [SciPy-dev] broyden methods committed
In-Reply-To: <85b5c3130705131020l2f177bd4x53f75653927a9ad2@mail.gmail.com>
References: <85b5c3130705131020l2f177bd4x53f75653927a9ad2@mail.gmail.com>
Message-ID:

On Sun, 13 May 2007 19:20:14 +0200 "Ondrej Certik" wrote:
> Hi, I just committed the broyden scheme methods: http://projects.scipy.org/scipy/scipy/ticket/402
> They are in the optimize module, together with tests that run fine. Documentation is in the form of docstrings. Please tell me if it is understandable, or whether I should work a little more on it.

>>> scipy.__version__
'0.5.3.dev2990'
>>> help(optimize.anderson)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'anderson'
>>> from scipy.optimize import anderson
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: cannot import name anderson

Nils

From ondrej at certik.cz Sun May 13 14:22:15 2007
From: ondrej at certik.cz (Ondrej Certik)
Date: Sun, 13 May 2007 20:22:15 +0200
Subject: [SciPy-dev] broyden methods committed
In-Reply-To: References: <85b5c3130705131020l2f177bd4x53f75653927a9ad2@mail.gmail.com>
Message-ID: <85b5c3130705131122o1fdb9f81t1d972d266416bead@mail.gmail.com>

> >>> help(optimize.anderson)
> AttributeError: 'module' object has no attribute 'anderson'
> >>> from scipy.optimize import anderson
> ImportError: cannot import name anderson
> Nils

The methods are in optimize.nonlin -- should I import them in optimize.__init__, so that they are accessible immediately?
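(For reference, the re-export under discussion would amount to something like the following in scipy/optimize/__init__.py -- a sketch only; apart from anderson, the exact names exported from nonlin are not confirmed in this thread:)

# scipy/optimize/__init__.py (sketch)
from nonlin import *    # or, explicitly: from nonlin import anderson, broyden1, ...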
At the moment, it works like this:

>>> import scipy
>>> scipy.__version__
'0.5.3.dev2992'
>>> from scipy import optimize
>>> help(optimize.nonlin)
>>> help(optimize.nonlin.anderson)

(the two help() commands will print help) Ondrej

From nwagner at iam.uni-stuttgart.de Sun May 13 15:35:18 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Sun, 13 May 2007 21:35:18 +0200
Subject: [SciPy-dev] broyden methods committed
In-Reply-To: <85b5c3130705131122o1fdb9f81t1d972d266416bead@mail.gmail.com>
References: <85b5c3130705131020l2f177bd4x53f75653927a9ad2@mail.gmail.com> <85b5c3130705131122o1fdb9f81t1d972d266416bead@mail.gmail.com>
Message-ID:

On Sun, 13 May 2007 20:22:15 +0200 "Ondrej Certik" wrote:
> The methods are in optimize.nonlin -- should I import them in optimize.__init__, so that they are accessible immediately?

IMHO, I would prefer direct access. Nils

From ondrej at certik.cz Sun May 13 16:00:11 2007
From: ondrej at certik.cz (Ondrej Certik)
Date: Sun, 13 May 2007 22:00:11 +0200
Subject: [SciPy-dev] broyden methods committed
In-Reply-To: References: <85b5c3130705131020l2f177bd4x53f75653927a9ad2@mail.gmail.com> <85b5c3130705131122o1fdb9f81t1d972d266416bead@mail.gmail.com>
Message-ID: <85b5c3130705131300u68b69b27w3bc86af8c342f9dc@mail.gmail.com>

> IMHO, I would prefer direct access.

Committed. Use it like this:

>>> import scipy
>>> scipy.__version__
'0.5.3.dev2992'
>>> from scipy import optimize
>>> help(optimize.anderson)

Ondrej

From david at ar.media.kyoto-u.ac.jp Sun May 13 20:24:52 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Mon, 14 May 2007 09:24:52 +0900
Subject: [SciPy-dev] Presentation of pymachine, a python package for machine learning
Message-ID: <4647AC54.9030907@ar.media.kyoto-u.ac.jp>

Dear scipy developers and users,

As some of you may know already, my proposal for pymachine, a python toolbox for machine learning in python, has been accepted for the Summer of Code 2007. The detailed proposal is online [1], and wikified [2]. The proposal timeline consists of two main steps:
- first improving existing tools related to machine learning in scipy, so that they become part of "official scipy" (e.g. all tools in the toolbox going into the main scipy namespace). This includes scipy.cluster, scipy.sandbox.pyem and scipy.sandbox.svm.
- then building from this set of toolboxes a more high-level package, in the spirit of similar software such as orange or weka [3], including some visualization tools for data exploration. This part of the code would be put in scikits (because it will require extra dependencies).

All development will happen in the scipy and scikits subversion repositories.
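(Of the existing tools mentioned, scipy.cluster is already usable today; for instance, a minimal k-means run on made-up data:)

>>> from numpy import array
>>> from scipy.cluster.vq import whiten, kmeans, vq
>>> obs = array([[1.0, 1.0], [0.9, 1.1], [10.0, 10.0], [10.2, 9.8]])
>>> features = whiten(obs)                      # normalize per-feature variance
>>> codebook, distortion = kmeans(features, 2)  # fit 2 cluster centroids
>>> labels, dist = vq(features, codebook)       # assign each point to a centroid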
Now, before starting to work on it, I would like to get some feedback about what other people think is necessary with respect to those goals:
- What are the requirements for a toolbox to go from the sandbox into the scipy namespace?
- For people willing to use machine learning related software in python/scipy, what are the main requirements/concerns? (e.g. data exploration GUI, efficiency, readability of the algorithms, etc...)

cheers, David

[1] http://www.ar.media.kyoto-u.ac.jp/members/david/fullproposal.html
[2] http://projects.scipy.org/scipy/scipy/wiki/MachineLearning
[3] orange http://magix.fri.uni-lj.si/orange/, weka: http://www.cs.waikato.ac.nz/ml/weka/

From sim at klubko.net Sun May 13 20:40:19 2007
From: sim at klubko.net (Petr Šimon)
Date: Mon, 14 May 2007 08:40:19 +0800
Subject: [SciPy-dev] Presentation of pymachine, a python package for machine learning
In-Reply-To: <4647AC54.9030907@ar.media.kyoto-u.ac.jp>
References: <4647AC54.9030907@ar.media.kyoto-u.ac.jp>
Message-ID: <200705140840.20317.sim@klubko.net>

On Monday 14 May 2007 08:24:52 David Cournapeau wrote:
> Now, before starting to work on it, I would like to get some feedback about what other people think is necessary with respect to those goals:
> - What are the requirements for a toolbox to go from the sandbox into the scipy namespace?
> - For people willing to use machine learning related software in python/scipy, what are the main requirements/concerns? (e.g. data exploration GUI, efficiency, readability of the algorithms, etc...)

Hello, one concern might be data handling. I usually work with rather large datasets that can't fit into memory, and most of the machine learning packages handle this natively (weka, orange). Best Petr

-- Petr Šimon http://www.klubko.net PhD student, TIGP-CLCLP Academia Sinica http://clclp.ling.sinica.edu.tw
"... what the Buddhist call 'right livelihood', I didn't have that, I didn't have any way of making a living, and to make a living is to be doing something that you love, something that was creative, something that made sense..."
Mark Bittner, parrot caretaker, Telegraph Hill

From oliphant.travis at ieee.org Mon May 14 03:09:07 2007
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Mon, 14 May 2007 01:09:07 -0600
Subject: [SciPy-dev] Presentation of pymachine, a python package for machine learning
In-Reply-To: <4647AC54.9030907@ar.media.kyoto-u.ac.jp>
References: <4647AC54.9030907@ar.media.kyoto-u.ac.jp>
Message-ID: <46480B13.7070308@ieee.org>

David Cournapeau wrote:
> Now, before starting to work on it, I would like to get some feedback about what other people think is necessary with respect to those goals:
> - What are the requirements for a toolbox to go from the sandbox into the scipy namespace?

For me, it is the willingness of somebody to take responsibility for it. Of course we would like some kind of standard in place, which is why I brought up the notion of a peer-reviewed "journal" idea. At this point, however, working code for capability that scipy doesn't already have is pretty convincing for me.

From peter.skomoroch at gmail.com Mon May 14 03:17:14 2007
From: peter.skomoroch at gmail.com (Peter Skomoroch)
Date: Mon, 14 May 2007 03:17:14 -0400
Subject: [SciPy-dev] Presentation of pymachine, a python package for machine learning
In-Reply-To: <4647AC54.9030907@ar.media.kyoto-u.ac.jp>
References: <4647AC54.9030907@ar.media.kyoto-u.ac.jp>
Message-ID:

David, I would probably use the code quite a bit. For me an exploration GUI isn't important, but correctness and performance are. Readability of the code comes next after that. Let me know if you want me to be a guinea pig at some point on the algorithm correctness; I can toss some additional datasets at it. -Pete

-- Peter N. Skomoroch
peter.skomoroch at gmail.com http://www.datawrangling.com

From nwagner at iam.uni-stuttgart.de Mon May 14 03:23:56 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Mon, 14 May 2007 09:23:56 +0200
Subject: [SciPy-dev] Changeset 2992 - NameError: global name 'fitpack' is not defined
Message-ID: <46480E8C.5040301@iam.uni-stuttgart.de>

Hi all, Changeset 2992 causes a NameError.

======================================================================
ERROR: test_interp2d (scipy.tests.test_interpolate.test_interp2d)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib64/python2.4/site-packages/scipy/interpolate/tests/test_interpolate.py", line 14, in test_interp2d
    I = interp2d(x, y, z)
  File "/usr/lib64/python2.4/site-packages/scipy/interpolate/interpolate.py", line 91, in __init__
    self.tck = fitpack.bisplrep(self.x, self.y, self.z, kx=kx, ky=ky, s=0.)
NameError: global name 'fitpack' is not defined

Nils

From david at ar.media.kyoto-u.ac.jp Mon May 14 03:47:08 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Mon, 14 May 2007 16:47:08 +0900
Subject: [SciPy-dev] Presentation of pymachine, a python package for machine learning
In-Reply-To: References: <4647AC54.9030907@ar.media.kyoto-u.ac.jp>
Message-ID: <464813FC.3060301@ar.media.kyoto-u.ac.jp>

Peter Skomoroch wrote:
> David, I would probably use the code quite a bit. For me an exploration GUI isn't important, but correctness and performance are. Readability of the code comes next after that. Let me know if you want me to be a guinea pig at some point on the algorithm correctness; I can toss some additional datasets at it.

Datasets would be great! I am converting the basic datasets available in R to be usable in python, but I cannot spend too much time on this. David

From peter.skomoroch at gmail.com Mon May 14 08:14:21 2007
From: peter.skomoroch at gmail.com (Peter Skomoroch)
Date: Mon, 14 May 2007 08:14:21 -0400
Subject: [SciPy-dev] Presentation of pymachine, a python package for machine learning
In-Reply-To: <464813FC.3060301@ar.media.kyoto-u.ac.jp>
References: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> <464813FC.3060301@ar.media.kyoto-u.ac.jp>
Message-ID:

I followed some of the discussion around datasets in the previous threads. As you mentioned, it might make sense to make some larger datasets available separately or as CRAN-style optional installs, but I think it would also be a big plus for scipy to have some more smaller built-in datasets of various types. I'm thinking of something along the lines of matlab, where a large number of demos and help are bundled with the software and the datasets used in them are built right in. If you go through the standard "intro to matlab" books that come with the software, it uses these same datasets over and over -- things like load(magic) for a magic square, sunspot data etc. I think this is preferable to hunting down datafiles or packages for small test cases. Scipy already takes a similar approach in a few cases, for example:

>>> from scipy import *
>>> lena()
array([[162, 162, 162, ..., 170, 155, 128],
       [162, 162, 162, ..., 170, 155, 128],
       [162, 162, 162, ..., 170, 155, 128],
       ...,
       [ 43,  43,  50, ..., 104, 100,  98],
       [ 44,  44,  55, ..., 104, 105, 108],
       [ 44,  44,  55, ..., 104, 105, 108]])

This kind of thing just makes life a lot easier, my .02 cents ...
-Pete

-- Peter N. Skomoroch peter.skomoroch at gmail.com http://www.datawrangling.com

From david at ar.media.kyoto-u.ac.jp Mon May 14 21:48:52 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 15 May 2007 10:48:52 +0900
Subject: [SciPy-dev] Presentation of pymachine, a python package for machine learning
In-Reply-To: References: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> <464813FC.3060301@ar.media.kyoto-u.ac.jp>
Message-ID: <46491184.4090900@ar.media.kyoto-u.ac.jp>

Peter Skomoroch wrote:
> I followed some of the discussion around datasets in the previous threads. As you mentioned, it might make sense to make some larger datasets available separately or as CRAN-style optional installs, but I think it would also be a big plus for scipy to have some more smaller built-in datasets of various types.

I agree, but there is the problem of licensing. I don't know what the status of data is from a legal point of view: if I copy data from a book, I guess it is copyrighted like the rest of the book. But then there are some famous datasets (e.g. Old Faithful, iris, etc.) which are available in different software packages under different licenses. Copying the datasets of R (in r-base) would be useful, but they fall under the GPL, hence they cannot be included in scipy, at least if the datasets are also under the GPL. Unfortunately, I don't know the legal details of those cases (the status of data in a package licensed under the GPL). Anyway, if I have some useful datasets, I will certainly put them in a separate package, so that they can be used outside pymachine. David

From peter.skomoroch at gmail.com Mon May 14 22:59:24 2007
From: peter.skomoroch at gmail.com (Peter Skomoroch)
Date: Mon, 14 May 2007 22:59:24 -0400
Subject: [SciPy-dev] Presentation of pymachine, a python package for machine learning
In-Reply-To: <46491184.4090900@ar.media.kyoto-u.ac.jp>
References: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> <464813FC.3060301@ar.media.kyoto-u.ac.jp> <46491184.4090900@ar.media.kyoto-u.ac.jp>
Message-ID:

With some digging, we should be able to find text, images, scientific data, census info, and audio which is in the public domain. I see this a lot:

"The information on government web pages is in the public domain unless specifically annotated otherwise (copyright may be held elsewhere) and may therefore be used freely by the public."

I'm still learning the ins and outs of licensing, but my understanding is that data/information released as "public domain" can be bundled with anything, i.e. BSD licenses. Does anyone have any insight into this?
Here are a few starter sites which mention that the data they provide is in the public domain:

Geophysical data: http://www.ngdc.noaa.gov/ngdcinfo/onlineaccess.html
NASA images and measurements: http://adc.gsfc.nasa.gov/adc/questions_feedback.html#policies1
Weather, forecasts: http://www.weather.gov/disclaimer.php
Genomic data: http://www.drugresearcher.com/news/ng.asp?id=10960-snp-database-goes

-Pete

-- Peter N. Skomoroch peter.skomoroch at gmail.com http://www.datawrangling.com

From peter.skomoroch at gmail.com Mon May 14 23:01:16 2007
From: peter.skomoroch at gmail.com (Peter Skomoroch)
Date: Mon, 14 May 2007 23:01:16 -0400
Subject: [SciPy-dev] Presentation of pymachine, a python package for machine learning
In-Reply-To: References: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> <464813FC.3060301@ar.media.kyoto-u.ac.jp> <46491184.4090900@ar.media.kyoto-u.ac.jp>
Message-ID:

This Google search turns up more sources: http://www.google.com/search?q=data+%22in+the+public+domain%22+%22freely+by+the+public%22&ie=utf-8&oe=utf-8&rls=org.mozilla:en-US:official&client=firefox-a
-- Peter N. Skomoroch peter.skomoroch at gmail.com http://www.datawrangling.com

From millman at berkeley.edu Tue May 15 02:04:55 2007
From: millman at berkeley.edu (Jarrod Millman)
Date: Mon, 14 May 2007 23:04:55 -0700
Subject: [SciPy-dev] mlabwrap 2.0 vs. mlabwrap 1.1/1.5
In-Reply-To: <1e2af89e0705091035l54f3cd0cx9803e5132595891c@mail.gmail.com>
References: <796269930705071739i4b710fa1qd879dea5bf149f4f@mail.gmail.com> <1e2af89e0705091035l54f3cd0cx9803e5132595891c@mail.gmail.com>
Message-ID:

On 5/9/07, Matthew Brett wrote:
>> What are your reasons for wanting to get out a scikits release first and foremost? I ask because I'm a bit reluctant to release a repackaged mlabwrap 1.0 as scikits.mlabwrap 1.1, unless it solves some concrete problem.
> I would have thought this would be a useful step. It will get people used to the scikits location, provide a scikits example for other projects, and remove numarray/numeric compatibility issues, thus simplifying the install. Also the rpath stuff will help the install, and those of us using mlabwrap in larger projects should have fewer places to point at for installation instructions as scikits gets going. I doubt the differences in the import statement will bother many of us.

I agree with Matthew. I think it would be useful to get a stable version of the new mlabwrap ASAP. The new API changes and ctypes conversion should be given time to fully bake.
In the meantime, it would be useful to: (1) get a scikits project released; (2) have a release of mlabwrap with a simplified install; (3) remove numeric and numarray code; and (4) use numpy testing. You could call the release 1.1/1.5/1.9 or whatever. The higher 1.X release would indicate that the release is close to a 2.X release. I don't think the import changes are going to cause anyone problems. I am eager to get a scikit project released, so that other potential scikit projects will have some example to follow. Thanks,

-- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/

From nwagner at iam.uni-stuttgart.de Tue May 15 02:48:40 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 15 May 2007 08:48:40 +0200
Subject: [SciPy-dev] Typo in Changeset 3000 line 521
Message-ID: <464957C8.7040902@iam.uni-stuttgart.de>

byte-compiling /usr/lib64/python2.4/site-packages/scipy/interpolate/interpolate.py to interpolate.pyc
  File "/usr/lib64/python2.4/site-packages/scipy/interpolate/interpolate.py", line 521
    def
      ^
SyntaxError: invalid syntax

From a.schmolck at gmx.net Tue May 15 20:45:51 2007
From: a.schmolck at gmx.net (Alexander Schmolck)
Date: Wed, 16 May 2007 01:45:51 +0100
Subject: [SciPy-dev] Presentation of pymachine, a python package for machine learning
In-Reply-To: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> (David Cournapeau's message of "Mon, 14 May 2007 09:24:52 +0900")
References: <4647AC54.9030907@ar.media.kyoto-u.ac.jp>
Message-ID:

David Cournapeau writes:
> - For people willing to use machine learning related software in python/scipy, what are the main requirements/concerns? (e.g. data exploration GUI, efficiency, readability of the algorithms, etc...)

In order of priority: robustness, readability and efficiency. Obviously very gross inefficiencies are a show-stopper, but on the whole I'd rather see fine-tuning effort go into things like numerical robustness etc. first.

P.S. Are you aware of Bishop's new book? Just because in your references you IIRC mention his Neural Networks one favorably, and I think you might be rather interested in his "pat.rec. and machine learning" if you haven't seen it yet. cheers 'as

From a.schmolck at gmx.net Wed May 16 20:55:16 2007
From: a.schmolck at gmx.net (Alexander Schmolck)
Date: Thu, 17 May 2007 01:55:16 +0100
Subject: [SciPy-dev] requesting feedback on/editing of scikits wiki-page
References: <462EBAE6.1000906@ar.media.kyoto-u.ac.jp> <462EEA26.6000509@gmail.com>
Message-ID:

Hi everyone,

As the author of the first (if as yet unreleased) scikit, I've tried to put some info for prospective scikit authors on the scikits wiki page that tries to condense down the most important points for those who are not yet familiar with the required infrastructure like setuptools (which is fairly rich) etc., to make things a little less overwhelming. A few things also still need clarification. As I'm myself just learning to get to grips with various of these things, this page could use some attention from someone more knowledgeable than me.

Here are a few specific points (search for FIXME/XXX in the document):

0. Blurb: What sort of things are suitable for a scikit project, what requirements must be met by projects or authors to qualify (this should also include something on acceptable/desirable[1] licenses, dependencies, code quality etc.) and what the overall aims and immediate future for scikits look like.
I'm not really the person to write that so I just wrote something brief and vague.

1. Trac: I'm not really that familiar with trac, but I assume some sort of conventions/infrastructure will need to be established in order for logically independent projects to use the same trac effectively. I guess one first step might be to use ``trac-admin`` to create each scikit (e.g. mlabwrap) as a component?

2. Homepage/Wiki/Docstandards etc. info. Hmm, it occurs to me that a link to coding standards might also be a good idea...

3. Versioning: I think it would be nice if there were a standard way how people version their scikits, both conventions-wise as well as in how the process is automated as far as is possible. I've outlined a setuptools-based solution in preference to one using numpy.distutils, whose docs are pretty opaque[1].

4. Testing: looks like I'll need to write this some other time...

Please fix directly as you see fit or post here if you have suggestions or want to discuss it. thanks, 'as

p.s. The transfer of the mlabwrap svn to scipy.org has been delayed, otherwise I'd have added some links to the mlabwrap code too.

Footnotes:
[1] From looking at , it's pretty difficult to figure out what it's good for -- there is no rationale about who might want to use it and why (see below for an attempt); instead it starts out by describing the scipy structure and requirements for scipy packages. Evidently some people who are not interested in writing scipy packages might also benefit from numpy.distutils (scikits authors for example), so it'd really help to have a short list of features that numpy.distutils has over plain distutils and setuptools and to whom they might be relevant. The second obvious question is how numpy.distutils relates to/plays along with setuptools. Here is an attempt at what the numpy.distutils docs could say at the beginning concerning possible candidates for packages that should use numpy.distutils:
- all scipy packages (?)
- packages that need fortran extensions
- packages that require subpackages (but setuptools should take care of that too?)
- packages that want certain conveniences such as non-manual version-info support and other code generation
I think setuptools should also be explicitly mentioned.

From a.schmolck at gmx.net Wed May 16 21:20:41 2007
From: a.schmolck at gmx.net (Alexander Schmolck)
Date: Thu, 17 May 2007 02:20:41 +0100
Subject: [SciPy-dev] mlabwrap 2.0 vs. mlabwrap 1.1/1.5
References: <796269930705071739i4b710fa1qd879dea5bf149f4f@mail.gmail.com> <1e2af89e0705091035l54f3cd0cx9803e5132595891c@mail.gmail.com>
Message-ID:

Hi Matthew, I'm sorry answering this took so long.

"Matthew Brett" writes:
> Hi,
>> What are your reasons for wanting to get out a scikits release first and foremost? I ask because I'm a bit reluctant to release a repackaged mlabwrap 1.0 as scikits.mlabwrap 1.1, unless it solves some concrete problem.
> I would have thought this would be a useful step.

Maybe it is. I suggest the following:
1. We make a list of things, dates and responsibilities for a 1.1 milestone (getting svn on scipy.org and figuring out how to use trac for scikits, etc. -- see my other email -- would be my first priorities)
2. We do them
3. We make either an official or a dev egg release

I'm willing to make an official release if everyone else still thinks this is best at that point; but I'll try to explain again below why I think doing a dev scikits.mlabwrap release and instead releasing mlabwrap 1.0.1 now might be better.

> It will get people used to the scikits location,

Yes.
> provide a scikits example for other projects,

I agree it's nicer if there's an official release, but isn't it enough if there is a dev version and something in svn, together with some reasonable documentation for mlabwrap, to serve as an example for other projects? I've put some effort into writing the scikits wiki page, because I think that's most important, but it clearly needs some more work (and by someone better qualified).

> remove numarray numeric compatibility issues, thus simplifying the install.

(only Numeric actually, numarray was never supported); I might be wrong but I don't think that the ability to use Numeric will lead to many installation complications. IIRC all complications that people have reported fall into 2 categories: 1. building *any* type of python extension on windows 2. matlab's very peculiar ideas about under what compiler-version/shell etc. it deigns to allow you to build something that uses its engine interface, producing nothing more than the single informative error message "can't open engine". It's possible that setuptools will marginally alleviate 1. OTOH I have eventually managed to get user-contributed binaries, which IMO present a superior solution for most windows users who are not used to building stuff themselves.

> Also the rpath stuff will help the install,

Yes, I think your rpath tip is very helpful, which is why I intend to do a mlabwrap 1.0.1 maintenance release that contains it, unless we do the scikits.mlabwrap release now.

> and those of us using mlabwrap in larger projects should have fewer places to point at for installation instructions as scikits gets going.

Well, currently mlabwrap is still the only scikit so I hope by the time that becomes an issue, 2.0 is pretty far ahead. Currently there is a single webpage; we
Since various people have helped out by making the install easier (by providing binaries and patches and info for windows and OS X) and all the questions and requests I've been getting deal with install related issues (no bug reports or feature requests) I am tempted to conclude that, regardless of all the things that mlabwrap undoubtedly could do better or doesn't support at the momen, as far as most current users are concerned, there aren't any glaring problems or omissions in functionality in mlabwrap that prevent them from using it productively (the other interpretation is of course that once they managed to install they immediately ran away in disgust ;). 2. We can more easily minimize user headache resulting from incompatible changes if they coincide with the first release of matlab as a scikit; at the same time it's easy to relase dev builds for people who do want someting more bleeding edge. 3. The stuff we're contemplating on doing for mlabwrap 2.0 isn't all that hard and shouldn't take that long. I think we've got a pretty good idea of what we want and I don't think that it should take me more than a week of full-time work to implement the required functionality, unless the design turns out to be flawed; I think it should be rather less effort than what I've spent so far in the scikikit conversion process (exchanging emails, getting to grips with setuptools and other infrastructure and documenting it on the wiki for other prospective authors etc.). I think the tricky bit is more likely to be choosing the right results and fine tuning other interface matters, for which qutie a bit of user-feedback will be needed -- but just getting a 2.0 alpha ready shouldn't take that much more time than doing 1.1. 4. I'm really spending more time on mlabwrap than I can afford right now, so work load is also a real issue; I don't think I'll be able to find longer stretches of time for mlabwrap for at least a week (and I'd of course also like to do some hacking again rather than more 'adminstrative' stuff like writing wiki stuff and emails etc. as I've been mostly doing of late). Aargh - 3:15am. I really need to learn to be more concise in my emails... night, 'as From david at ar.media.kyoto-u.ac.jp Wed May 16 22:16:07 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 17 May 2007 11:16:07 +0900 Subject: [SciPy-dev] requesting feedback on/editing of scikits wiki-page In-Reply-To: References: <462EBAE6.1000906@ar.media.kyoto-u.ac.jp> <462EEA26.6000509@gmail.com> Message-ID: <464BBAE7.1040809@ar.media.kyoto-u.ac.jp> Alexander Schmolck wrote: > Hi everyone, > > > As the author of the first (if yet unreleased) scikit, I've tried to put some > info for prospective scikit authors on > > > > that tries to condense down the most important points for those who are not > yet familiar with the required infrastructure like setuptools (which is fairly > rich) etc. to make things a little less overwhelming. A few things also still > need clarification. As I'm myself just learning to get to grips with various > of these things, so this page could use some attention from someone more > knowledgable than me. > > Here are a few specific points (search for FIXME/XXX in the document) > > 0. Blurp: What sort of things are suitable for a scikit project, what > requirements must be met by projects or authors to qualify (this should > also include something on acceptable/desirable[1] licenses, dependencies, > code quality etc.) and what the overall aims and immediate future for > scikits look like. 
I'm not really the person to write that so I just wrote > something brief and vague. > > 1. Trac: I'm not really that familiar with trac, but I assume some sorts of > conventions/infrastructure will need to be established in order for > logically independent projects to use the same trac effectively. I guess > one first step might be to use ``trac-admin`` to create each scikit (i.e. > mlabwrap) as a component? > > 2. Homepage/Wiki/Docstandards etc. info. > > Hmm, it occurs to me that a link to coding standards might also be a good > idea... > > 3. Versioning: I think it would be nice if there were a standard way how > people version their scikits, both conventions-wise as well as how the > process is automated as far as is possible. I've outlined a > setuptools-based solution in preference to one using numpy.distutils docs > pretty opaque[1]. > > 4. Testing: looks like I'll need to write this some other time... > > Please fix directly as you see fit or post here if you have suggestions or > want to discuss it. > One thing which may need to be changed is the license requirement. I thought one of the point of scikits was to include code useful for scipy community, but which cannot be licensed to BDS/MIT. For example, including GPL code, or package which depends on LGPL libraries. malabwrap for that matter depends on proprietary libraries to work... Once this issue is solved, I would be happy to give two toolboxes of my own for audio signal processing (released under the BSD, but depending on LGPL libraries). David From hanu_man_ji at hotmail.com Sat May 19 17:46:29 2007 From: hanu_man_ji at hotmail.com (jai hanuman) Date: Sat, 19 May 2007 17:46:29 -0400 Subject: [SciPy-dev] can't install timeseries Message-ID: hi, i dont have vs 2003. i've tried with both mingw32 and cygwin and it fails to build and install timeseries. (maskedarray installs fine). regards ------------------------ >python setup.py install .... running install running build running config_fc running build_src building extension "timeseries.cseries" sources running build_py running build_ext No module named msvccompiler in numpy.distutils, trying from distutils.. error: Python was built with Visual Studio 2003; extensions must be built with a compiler than can generate compatible binaries. Visual Studio 2003 was not found on this system. If you have Cygwin installed, you can try compiling with MingW32, by passing "-c mingw32" to setup.py. >python setup.py install -c mingw32 .. error: invalid command 'mingw32' >python setup.py -c mingw32 install .. error: option -c not recognized ------------------ From pearu at cens.ioc.ee Sat May 19 17:54:18 2007 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Sun, 20 May 2007 00:54:18 +0300 (EEST) Subject: [SciPy-dev] can't install timeseries In-Reply-To: References: Message-ID: <59675.84.202.199.60.1179611658.squirrel@cens.ioc.ee> On Sun, May 20, 2007 12:46 am, jai hanuman wrote: > hi, > i dont have vs 2003. i've tried with both mingw32 and cygwin and it fails > to > build and install timeseries. (maskedarray installs fine). > > regards > > ------------------------ >>python setup.py install > .... > running install > running build > running config_fc > running build_src > building extension "timeseries.cseries" sources > running build_py > running build_ext > No module named msvccompiler in numpy.distutils, trying from distutils.. > error: Python was built with Visual Studio 2003; > extensions must be built with a compiler than can generate compatible > binaries. 
> Visual Studio 2003 was not found on this system. If you have Cygwin installed, you can try compiling with MingW32, by passing "-c mingw32" to setup.py.
> >python setup.py install -c mingw32
> error: invalid command 'mingw32'
> >python setup.py -c mingw32 install
> error: option -c not recognized

Try:

python setup.py build --compiler=mingw32 install

HTH, Pearu

From mattknox_ca at hotmail.com Sat May 19 19:54:06 2007
From: mattknox_ca at hotmail.com (Matt Knox)
Date: Sat, 19 May 2007 23:54:06 +0000 (UTC)
Subject: [SciPy-dev] can't install timeseries
References: Message-ID:

> Hi, I don't have VS 2003. I've tried with both mingw32 and cygwin, and it fails to build and install timeseries (maskedarray installs fine).

cygwin works pretty seamlessly on windows with python 2.5, so I'd recommend that approach. It doesn't require any special witchcraft that used to be needed for compiling extensions on windows with earlier versions of python. Here are some good general instructions for compiling extensions with cygwin and python 2.5: http://boodebr.org/main/python/build-windows-extensions

Once you've followed the setup steps there, just open the cygwin shell and you can do "python setup.py build_ext -i" to build the extension in place in that same directory. Or I *believe* "python setup.py install" will compile the extension and install it in your site-packages directory, if memory serves me correctly... but I haven't tried that in a while and don't have the necessary stuff set up on my pc here to test. You can even create a nice binary installer by doing "python setup.py bdist_wininst" if you want to pass it around to your friends and family :)

Let me know if you have any other problems. If you need any help using the timeseries module once you get it up and running, feel free to post on the scipy-user list and I can give you a hand. - Matt

From david at ar.media.kyoto-u.ac.jp Sun May 20 23:02:04 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Mon, 21 May 2007 12:02:04 +0900
Subject: [SciPy-dev] requesting feedback on/editing of scikits wiki-page
In-Reply-To: References: <462EBAE6.1000906@ar.media.kyoto-u.ac.jp> <462EEA26.6000509@gmail.com>
Message-ID: <46510BAC.6080308@ar.media.kyoto-u.ac.jp>

Alexander Schmolck wrote:
> 3. Versioning: I think it would be nice if there were a standard way how people version their scikits, both conventions-wise as well as in how the process is automated as far as is possible. I've outlined a setuptools-based solution in preference to one using numpy.distutils, whose docs are pretty opaque[1].
> 1. Trac: I'm not really that familiar with trac, but I assume some sort of
> conventions/infrastructure will need to be established in order for
> logically independent projects to use the same trac effectively. I guess
> one first step might be to use ``trac-admin`` to create each scikit (i.e.
> mlabwrap) as a component?
>
> 2. Homepage/Wiki/Docstandards etc. info.
>
> Hmm, it occurs to me that a link to coding standards might also be a good
> idea...
>
> 3. Versioning: I think it would be nice if there were a standard way for
> people to version their scikits, both conventions-wise as well as how the
> process is automated as far as is possible. I've outlined a
> setuptools-based solution in preference to one using numpy.distutils,
> whose docs I find pretty opaque[1].
>
> 4. Testing: looks like I'll need to write this some other time...
>
> Please fix directly as you see fit or post here if you have suggestions or
> want to discuss it.
>
I was thinking, why not provide a "fake" scikits package as an example? I
have mostly converted one project to scikits/setuptools, and the points I
think may be useful are:
* how to use numpy.distutils (in particular, the system_info classes)?
* I cannot make the proposed versioning scheme work, because
pkg_resources.require('scikits') fails, even though I installed it (I can
import it). I guess I am doing something wrong in the scikits namespace, or
in the installation, but I don't know what.
* more generally, how to convert a distutils package to setuptools.

cheers,

David

> thanks,
>
> 'as
>
> p.s. The transfer of the mlabwrap svn to scipy org has been delayed,
> otherwise I'd have added some links to the mlabwrap code too.
>
> Footnotes:
> [1] From looking at , it's pretty difficult to figure out what it's good
> for -- there is no rationale for who might want to use it and why (see
> below for an attempt); instead it starts out by describing the scipy
> structure and requirements for scipy packages. Evidently some people who
> are not interested in writing scipy packages might also benefit from
> numpy.distutils (scikits authors for example), so it would really help to
> have a short list of features that numpy.distutils has over plain
> distutils and setuptools and to whom they might be relevant. The second
> obvious question is how numpy.distutils relates to/plays along with
> setuptools.
>
> Here is an attempt at what the numpy.distutils docs could say at the
> beginning concerning possible candidates for packages that should use
> numpy.distutils:
>
> - all scipy packages (?)
> - packages that need fortran extensions
> - packages that require subpackages (but setuptools should take care of
> that too?)
> - packages that want certain conveniences such as non-manual
> version-info support and other code generation
>
> I think setuptools should also be explicitly mentioned.
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev
>

From millman at berkeley.edu Mon May 21 01:10:19 2007
From: millman at berkeley.edu (Jarrod Millman)
Date: Sun, 20 May 2007 22:10:19 -0700
Subject: [SciPy-dev] requesting feedback on/editing of scikits wiki-page
In-Reply-To: <464BBAE7.1040809@ar.media.kyoto-u.ac.jp>
References: <462EBAE6.1000906@ar.media.kyoto-u.ac.jp> <462EEA26.6000509@gmail.com> <464BBAE7.1040809@ar.media.kyoto-u.ac.jp>
Message-ID:

On 5/16/07, David Cournapeau wrote:
> One thing which may need to be changed is the license requirement. I
> thought one of the points of scikits was to include code useful for the
> scipy community, but which cannot be licensed as BSD/MIT: for example,
> GPL code, or packages which depend on LGPL libraries. mlabwrap, for that
> matter, depends on proprietary libraries to work...

Thanks to Alex for putting together this information. I don't have the time
to respond to all his points/questions, but I wanted to quickly respond to
David's comments about licensing.

Originally, I assumed that *all* the scikit code would be released under the
same license (i.e., the revised BSD license, like all the rest of scipy).
That would certainly be my preference. My understanding was that the basic
idea behind scikits was simply to provide a modular mechanism for
distributing "scipy" packages under one namespace.

There are several arguments for using the same license for all the packages.
First, I think that it would be confusing for users importing "from scikits
import foo, bar" to find that foo is released under one license and bar is
released under another. Second, while we want to keep dependencies between
scikits packages to a minimum, there will inevitably be a need for code from
one package to be used in another. This may lead to code duplication if the
needed functionality already exists in a scikits package, but is released
under the GPL when BSD code is needed. Third, it is reasonable to imagine
that at some point, someone will want to refactor the various scikits
packages. During a major refactoring, it may be desirable to move code from
one package to another or even from scikits to scipy. This will be difficult
if it is necessary to keep track of what license each piece of code is
released under.

Basically, I think that opening up the possibility for each package to be
released under its own license is going to make it difficult to maintain
some unity in the scipy namespace. I also don't know whether there is a need
to include non-BSD code in the scikits namespace. We could easily create
links to non-BSD code on the wiki pages or even create a new namespace for
code released under different licenses. I would be very interested to hear
arguments for why non-BSD code needs to be available under the scikits
namespace.

Thanks,

--
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/

From sebastian.schoeller at gmail.com Mon May 21 02:42:48 2007
From: sebastian.schoeller at gmail.com (Sebastian Schoeller)
Date: Mon, 21 May 2007 08:42:48 +0200
Subject: [SciPy-dev] Help for 5-min frequency for timeseries module
Message-ID: <9fb85f420705202342w474dfedgf9eeff184d2db208@mail.gmail.com>

Hi Matt and Pierre,

I had already written to Matt thanking him for how useful the timeseries
library has proven to us so far. Great work, thanks again.

We are utilizing timeseries to process mostly hydrological datasets, meaning
mostly precipitation and discharge data. Now I would like to tackle, using
the timeseries library, the issue of homogenizing the frequencies that we
work with. Some observation devices have a base frequency of 5-min, others
adjustable ones depending on the required resolution. Generally speaking,
though, we need user-definable base frequencies (e.g. 1/5-min) for
timeseries that may be converted into others (e.g. 5/15-min etc.).

In Matt's and Pierre's posting for the alpha release of the timeseries
library it is mentioned that additional frequencies may be added fairly
easily.
Could somebody be so kind as to point me in the right direction on how to do
that?

Best greetings from Darmstadt / Germany

Sebastian

From david at ar.media.kyoto-u.ac.jp Mon May 21 02:55:00 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Mon, 21 May 2007 15:55:00 +0900
Subject: [SciPy-dev] requesting feedback on/editing of scikits wiki-page
In-Reply-To:
References: <462EBAE6.1000906@ar.media.kyoto-u.ac.jp> <462EEA26.6000509@gmail.com> <464BBAE7.1040809@ar.media.kyoto-u.ac.jp>
Message-ID: <46514244.4050405@ar.media.kyoto-u.ac.jp>

Jarrod Millman wrote:
> On 5/16/07, David Cournapeau wrote:
>> One thing which may need to be changed is the license requirement. I
>> thought one of the points of scikits was to include code useful for the
>> scipy community, but which cannot be licensed as BSD/MIT: for example,
>> GPL code, or packages which depend on LGPL libraries. mlabwrap, for that
>> matter, depends on proprietary libraries to work...
>
> Thanks to Alex for putting together this information. I don't have the
> time to respond to all his points/questions, but I wanted to quickly
> respond to David's comments about licensing.
>
> Originally, I assumed that *all* the scikit code would be released under
> the same license (i.e., the revised BSD license, like all the rest of
> scipy). That would certainly be my preference. My understanding was that
> the basic idea behind scikits was simply to provide a modular mechanism
> for distributing "scipy" packages under one namespace.

Ok, my understanding was different on this issue. I thought scikits and
scipy had different purposes, and that one of scikits' goals was to enable
code depending on GPL code. I thought scipy modularization was a totally
different issue.

> There are several arguments for using the same license for all the
> packages. First, I think that it would be confusing for users importing
> "from scikits import foo, bar" to find that foo is released under one
> license and bar is released under another. Second, while we want to keep
> dependencies between scikits packages to a minimum, there will inevitably
> be a need for code from one package to be used in another. This may lead
> to code duplication if the needed functionality already exists in a
> scikits package, but is released under the GPL when BSD code is needed.
> Third, it is reasonable to imagine that at some point, someone will want
> to refactor the various scikits packages. During a major refactoring, it
> may be desirable to move code from one package to another or even from
> scikits to scipy. This will be difficult if it is necessary to keep track
> of what license each piece of code is released under.
>
> Basically, I think that opening up the possibility for each package to be
> released under its own license is going to make it difficult to maintain
> some unity in the scipy namespace. I also don't know whether there is a
> need to include non-BSD code in the scikits namespace.

For example, my package pyaudiolab is licensed under the BSD, but depends on
LGPL libraries. Technically, it is a BSD package, but it is unusable without
non-BSD code. This was given as a reason why pyaudiolab could not be
included in scipy, but could be included in scikits.

There is also the problem of GPL libraries (qt, pyqt come to mind): I don't
think you can use them in BSD code, right? But I guess this becomes the grey
area of derivative work. Can I use GPL libraries in non-GPL Python code if I
access the library through ctypes?

If there is a Python package under the GPL, and I import it in my own
package, do I have to use the GPL?

David

From a.schmolck at gmx.net Mon May 21 04:42:25 2007
From: a.schmolck at gmx.net (Alexander Schmolck)
Date: Mon, 21 May 2007 09:42:25 +0100
Subject: [SciPy-dev] requesting feedback on/editing of scikits wiki-page
In-Reply-To: <46514244.4050405@ar.media.kyoto-u.ac.jp> (David Cournapeau's message of "Mon\, 21 May 2007 15\:55\:00 +0900")
References: <462EBAE6.1000906@ar.media.kyoto-u.ac.jp> <462EEA26.6000509@gmail.com> <464BBAE7.1040809@ar.media.kyoto-u.ac.jp> <46514244.4050405@ar.media.kyoto-u.ac.jp>
Message-ID:

David Cournapeau writes:
> Ok, my understanding was different on this issue. I thought scikits and
> scipy had different purposes, and that one of scikits' goals was to
> enable code depending on GPL code. I thought scipy modularization was a
> totally different issue.

Well, this is something the official developers of scipy have to make their
minds up about; but personally I'm not sure allowing GPLed code in anywhere
is a good idea -- IMO there's enough viral scientific computing code out
there already. Allowing LGPL or LGPL-dependent code in scikits is a
different matter; BSD/MIT-style should be encouraged, but I think it would
be useful for scikits to be able to include modules that wrap or use an
LGPLed third-party lib for which no suitable freer alternatives exist.

> There is also the problem of GPL libraries (qt, pyqt come to mind): I
> don't think you can use them in BSD code, right? But I guess this
> becomes the grey area of derivative work.

No, not really. The FSF did its best to make sure that only GPLed code can
use GPLed code -- that's the whole point.

> Can I use GPL libraries in non-GPL Python code if I access the library
> through ctypes?

I don't think so.

> If there is a Python package under the GPL, and I import it in my own
> package, do I have to use the GPL?

Yes. The only exception might be if your package can *optionally* use
something that's GPLed (in which case it would still have to be
GPL-compatible), but even then you might find yourself on shaky ground
(google clisp readline GPL). IANAL etc., but unless you are, I think you're
best off assuming that GPLed code is off-limits unless you want to GPL your
code too, because the FSF has hired good lawyers to make it that way.

cheers,

'as

From a.schmolck at gmx.net Mon May 21 04:47:04 2007
From: a.schmolck at gmx.net (Alexander Schmolck)
Date: Mon, 21 May 2007 09:47:04 +0100
Subject: [SciPy-dev] requesting feedback on/editing of scikits wiki-page
In-Reply-To: (Alexander Schmolck's message of "Mon\, 21 May 2007 09\:42\:25 +0100")
References: <462EBAE6.1000906@ar.media.kyoto-u.ac.jp> <462EEA26.6000509@gmail.com> <464BBAE7.1040809@ar.media.kyoto-u.ac.jp> <46514244.4050405@ar.media.kyoto-u.ac.jp>
Message-ID:

Alexander Schmolck writes:
>
>> Can I use GPL libraries in non-GPL Python code if I access the library
>> through ctypes?
>
> I don't think so.

Just to clarify: your Python code will have to be GPLed (at least that's
what I strongly assume), but Python's license is not a hindrance (it's
GPL-compatible, which essentially means one can relicense Python under the
GPL).
'as

From a.schmolck at gmx.net Mon May 21 04:26:57 2007
From: a.schmolck at gmx.net (Alexander Schmolck)
Date: Mon, 21 May 2007 09:26:57 +0100
Subject: [SciPy-dev] requesting feedback on/editing of scikits wiki-page
In-Reply-To: <46510BAC.6080308@ar.media.kyoto-u.ac.jp> (David Cournapeau's message of "Mon\, 21 May 2007 12\:02\:04 +0900")
References: <462EBAE6.1000906@ar.media.kyoto-u.ac.jp> <462EEA26.6000509@gmail.com> <46510BAC.6080308@ar.media.kyoto-u.ac.jp>
Message-ID:

David Cournapeau writes:

> I was thinking, why not provide a "fake" scikits package as an example?

There definitely should be some package that can serve as an example, and
the wiki-page should link to its setup.py in svn (one of the reasons I'd
like to get mlabwrap into the official scikits svn soon). Fake packages
share the same disadvantage over real code as comments and docs, though --
there's nothing to ensure they're up to date and accurate -- which is why
I'd favor using a real package, unless there isn't anything that's both
suitably simple and not completely trivial.

I think it might even be best to eventually have 2 or 3 types of example
packages:

1. A data-only package
2. A python-only package
3. Something that builds a C-extension

We should of course make sure that the setup.py files of these packages
really do implement 'best practices'; once these setup.py files look
reasonable we could even see if people in distutils-sig might be willing to
give them a quick glance.

> I have mostly converted one project to scikits/setuptools, and the points
> I think may be useful are:
> * how to use numpy.distutils (in particular, the system_info classes)?

Why do you want to use numpy.distutils? Doesn't setuptools provide the
functionality you want (it might not, but I'd personally just avoid using
numpy.distutils for everything for which it isn't strictly required)?

> * I cannot make the proposed versioning scheme work, because
> pkg_resources.require('scikits') fails, even though I installed it (I
> can import it).

Hmm, why are you trying to require('scikits')? Scikits is just a namespace,
so it doesn't really make sense to require it, I think (it doesn't make that
much sense to import it *either*, but that's just how python's module system
works).

As I wrote on the wiki-page, you want to do:

  from pkg_resources import require
  __version__ = require('scikits.mlabwrap')[0].version

substituting your project name -- doesn't that work for you?

> I guess I am doing something wrong in the scikits namespace, or in the
> installation, but I don't know what.
> * more generally, how to convert a distutils package to setuptools.

Changing ``from distutils.core import setup`` to ``from setuptools import
setup`` should be enough for a minimal conversion, I think. If that works OK
for you, I suggest we add a subheading 'minimal conversion from a
distutils-based install', including the required
(``namespace_packages=['scikits']``) and recommended (``zip_safe=True|False,
url='...', download_url='...', test_suite``) additional setuptools flags
(unfortunately no one has yet expounded on where the 'homepages' of
individual scikits are meant to go).
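
To make this concrete, a minimal converted setup.py might look roughly like
the sketch below (untested; 'scikits.example' is just a placeholder project
name):

  # setup.py -- minimal scikits-style setuptools conversion (untested sketch)
  from setuptools import setup, find_packages

  setup(name='scikits.example',
        version='0.1',
        namespace_packages=['scikits'],  # required: share the scikits namespace
        packages=find_packages(),
        zip_safe=False,
        test_suite='scikits.example.tests',
        )

together with a scikits/__init__.py that contains nothing but the namespace
declaration:

  __import__('pkg_resources').declare_namespace(__name__)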

cheers,

'as

From david at ar.media.kyoto-u.ac.jp Mon May 21 05:05:31 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Mon, 21 May 2007 18:05:31 +0900
Subject: [SciPy-dev] requesting feedback on/editing of scikits wiki-page
In-Reply-To:
References: <462EBAE6.1000906@ar.media.kyoto-u.ac.jp> <462EEA26.6000509@gmail.com> <46510BAC.6080308@ar.media.kyoto-u.ac.jp>
Message-ID: <465160DB.8040302@ar.media.kyoto-u.ac.jp>

Alexander Schmolck wrote:
> David Cournapeau writes:
>
>> I was thinking, why not provide a "fake" scikits package as an example?
>
> There definitely should be some package that can serve as an example, and
> the wiki-page should link to its setup.py in svn (one of the reasons I'd
> like to get mlabwrap into the official scikits svn soon). Fake packages
> share the same disadvantage over real code as comments and docs, though --
> there's nothing to ensure they're up to date and accurate -- which is why
> I'd favor using a real package, unless there isn't anything that's both
> suitably simple and not completely trivial.
>
> I think it might even be best to eventually have 2 or 3 types of example
> packages:
>
> 1. A data-only package
> 2. A python-only package
> 3. Something that builds a C-extension
>
> We should of course make sure that the setup.py files of these packages
> really do implement 'best practices'; once these setup.py files look
> reasonable we could even see if people in distutils-sig might be willing
> to give them a quick glance.
>
Well, if we have one example with many options, I guess it should not be too
difficult to keep it up to date. For example, for the package I would like
to submit to scikits, I have a makefile which builds the pdf doc using some
python scripts, and I try to run those scripts to make sure everything is
updated.

>> I have mostly converted one project to scikits/setuptools, and the points
>> I think may be useful are:
>> * how to use numpy.distutils (in particular, the system_info classes)?
>
> Why do you want to use numpy.distutils? Doesn't setuptools provide the
> functionality you want (it might not, but I'd personally just avoid using
> numpy.distutils for everything for which it isn't strictly required)?

I am using the system_info scheme from numpy.distutils (to retrieve a shared
library and its headers). I could do without, but that would basically mean
reimplementing system_info, I guess.

> Hmm, why are you trying to require('scikits')? Scikits is just a
> namespace, so it doesn't really make sense to require it, I think (it
> doesn't make that much sense to import it *either*, but that's just how
> python's module system works).
>
> As I wrote on the wiki-page, you want to do:
>
>   from pkg_resources import require
>   __version__ = require('scikits.mlabwrap')[0].version
>
> substituting your project name -- doesn't that work for you?

No, it does not. I assumed the problem was with scikits, because I do not
have a .egg file for scikits when I install my package (whereas I have a
pyaudiolab.egg). But I know nothing about eggs (I do not even know what they
are used for).

The other problem I have is that python setup.py test does not work either.
After some random investigation, it looks like setuptools does not call
check_* functions, only test_*. Does that mean I have to use only check_*
functions in my tests?
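
For context, my test classes currently follow the numpy.testing convention,
roughly like this simplified sketch (class and method names are only
illustrative):

  from numpy.testing import NumpyTest, NumpyTestCase

  class test_basic(NumpyTestCase):
      def check_simple_read(self):  # numpy-style name; note it is not test_*
          pass

  if __name__ == '__main__':
      NumpyTest().run()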

David

From brian.lee.hawthorne at gmail.com Mon May 21 05:25:35 2007
From: brian.lee.hawthorne at gmail.com (Brian Hawthorne)
Date: Mon, 21 May 2007 02:25:35 -0700
Subject: [SciPy-dev] requesting feedback on/editing of scikits wiki-page
In-Reply-To: <465160DB.8040302@ar.media.kyoto-u.ac.jp>
References: <462EBAE6.1000906@ar.media.kyoto-u.ac.jp> <462EEA26.6000509@gmail.com> <46510BAC.6080308@ar.media.kyoto-u.ac.jp> <465160DB.8040302@ar.media.kyoto-u.ac.jp>
Message-ID: <796269930705210225y427beeb5n5c113fb613a00c70@mail.gmail.com>

> The other problem I have is that python setup.py test does not work
> either. After some random investigation, it looks like setuptools does
> not call check_* functions, only test_*. Does that mean I have to use
> only check_* functions in my tests?

Just a small chime in on that last point -- I don't think there's any reason
to use the check_* form of test method names (and I plan to switch
mlabwrap's test methods back to test_*).

-Brian

David
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From stefan at sun.ac.za Mon May 21 05:39:54 2007
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Mon, 21 May 2007 11:39:54 +0200
Subject: [SciPy-dev] requesting feedback on/editing of scikits wiki-page
In-Reply-To:
References: <462EBAE6.1000906@ar.media.kyoto-u.ac.jp> <462EEA26.6000509@gmail.com> <464BBAE7.1040809@ar.media.kyoto-u.ac.jp> <46514244.4050405@ar.media.kyoto-u.ac.jp>
Message-ID: <20070521093954.GH14819@mentat.za.net>

On Mon, May 21, 2007 at 09:42:25AM +0100, Alexander Schmolck wrote:
> > There is also the problem of GPL libraries (qt, pyqt come to mind): I
> > don't think you can use them in BSD code, right? But I guess this
> > becomes the grey area of derivative work.
>
> No, not really. The FSF did its best to make sure that only GPLed code
> can use GPLed code -- that's the whole point.

That's not true, strictly speaking. You can still license *your* part of the
software under the modified BSD license. From
http://www.gnu.org/licenses/gpl-faq.html#CombinePublicDomainWithGPL

"""
If a program combines public-domain code with GPL-covered code, can I take
the public-domain part and use it as public domain code?

You can do that, if you can figure out which part is the public domain part
and separate it from the rest. If code was put in the public domain by its
developer, it is in the public domain no matter where it has been.
"""

The combined product, however, must be distributed under terms of the GPL.

Since users of Python and Scipy expect BSD-compatible licenses, it might be
a good idea to have scikits print their license terms on import.

Regards
Stéfan

From openopt at ukr.net Mon May 21 05:53:53 2007
From: openopt at ukr.net (dmitrey)
Date: Mon, 21 May 2007 12:53:53 +0300
Subject: [SciPy-dev] requesting feedback on/editing of scikits wiki-page
In-Reply-To:
References: <462EBAE6.1000906@ar.media.kyoto-u.ac.jp> <462EEA26.6000509@gmail.com> <464BBAE7.1040809@ar.media.kyoto-u.ac.jp>
Message-ID: <46516C31.4030706@ukr.net>

Hi all,

I think scikits should be represented as a bunch of software with
OSI-approved licenses. Each routine should inform the user which license it
has, by including the information in the routine description, by text or
graphical output while working, or in some other way.
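
For example, even something as simple as this at import time would do (just
a sketch; the module name and license are only illustrative):

  # scikits/foo/__init__.py -- announce the license when the package is imported
  __license__ = "BSD"
  print "scikits.foo is distributed under the %s license" % __license__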

Of course, there could be a division into modules like "scikitsbsd" and
"scikitsgpl", but what about the other dozens of OSI-approved licenses? For
example, take a look at the PSF page: http://www.python.org/psf/contrib/
they require either the Academic Free License v. 2.1 or the Apache License,
Version 2.0.

Of course, other licenses can appear in the future, and one software
package, for example lpsolve, can change its license from LGPL to BSD (or
vice versa) - so how do you intend to handle that situation?

//just my 15 cents

Regards,
D.

Jarrod Millman wrote:
> On 5/16/07, David Cournapeau wrote:
>
>> One thing which may need to be changed is the license requirement. I
>> thought one of the points of scikits was to include code useful for the
>> scipy community, but which cannot be licensed as BSD/MIT: for example,
>> GPL code, or packages which depend on LGPL libraries. mlabwrap, for that
>> matter, depends on proprietary libraries to work...
>>
>
> Thanks to Alex for putting together this information. I don't have the
> time to respond to all his points/questions, but I wanted to quickly
> respond to David's comments about licensing.
>
> Originally, I assumed that *all* the scikit code would be released under
> the same license (i.e., the revised BSD license, like all the rest of
> scipy). That would certainly be my preference. My understanding was that
> the basic idea behind scikits was simply to provide a modular mechanism
> for distributing "scipy" packages under one namespace.
>
> There are several arguments for using the same license for all the
> packages. First, I think that it would be confusing for users importing
> "from scikits import foo, bar" to find that foo is released under one
> license and bar is released under another. Second, while we want to keep
> dependencies between scikits packages to a minimum, there will inevitably
> be a need for code from one package to be used in another. This may lead
> to code duplication if the needed functionality already exists in a
> scikits package, but is released under the GPL when BSD code is needed.
> Third, it is reasonable to imagine that at some point, someone will want
> to refactor the various scikits packages. During a major refactoring, it
> may be desirable to move code from one package to another or even from
> scikits to scipy. This will be difficult if it is necessary to keep track
> of what license each piece of code is released under.
>
> Basically, I think that opening up the possibility for each package to be
> released under its own license is going to make it difficult to maintain
> some unity in the scipy namespace. I also don't know whether there is a
> need to include non-BSD code in the scikits namespace. We could easily
> create links to non-BSD code on the wiki pages or even create a new
> namespace for code released under different licenses. I would be very
> interested to hear arguments for why non-BSD code needs to be available
> under the scikits namespace.
>
> Thanks,
>

From a.schmolck at gmx.net Mon May 21 06:55:34 2007
From: a.schmolck at gmx.net (Alexander Schmolck)
Date: Mon, 21 May 2007 11:55:34 +0100
Subject: [SciPy-dev] requesting feedback on/editing of scikits wiki-page
In-Reply-To: <20070521093954.GH14819@mentat.za.net> (Stefan van der Walt's message of "Mon\, 21 May 2007 11\:39\:54 +0200")
References: <462EBAE6.1000906@ar.media.kyoto-u.ac.jp> <462EEA26.6000509@gmail.com> <464BBAE7.1040809@ar.media.kyoto-u.ac.jp> <46514244.4050405@ar.media.kyoto-u.ac.jp> <20070521093954.GH14819@mentat.za.net>
Message-ID:

Stefan van der Walt writes:
> On Mon, May 21, 2007 at 09:42:25AM +0100, Alexander Schmolck wrote:
>> > There is also the problem of GPL libraries (qt, pyqt come to mind): I
>> > don't think you can use them in BSD code, right? But I guess this
>> > becomes the grey area of derivative work.
>>
>> No, not really. The FSF did its best to make sure that only GPLed code
>> can use GPLed code -- that's the whole point.
>
> That's not true, strictly speaking.

I think it is.

> You can still license *your* part of the software under the modified BSD
> license.

My understanding is you can't if it depends on the GPLed code. You'd have to
write a non-GPL'ed drop-in replacement first.

'as

From a.schmolck at gmx.net Mon May 21 08:17:08 2007
From: a.schmolck at gmx.net (Alexander Schmolck)
Date: Mon, 21 May 2007 13:17:08 +0100
Subject: [SciPy-dev] requesting feedback on/editing of scikits wiki-page
In-Reply-To: <465160DB.8040302@ar.media.kyoto-u.ac.jp> (David Cournapeau's message of "Mon\, 21 May 2007 18\:05\:31 +0900")
References: <462EBAE6.1000906@ar.media.kyoto-u.ac.jp> <462EEA26.6000509@gmail.com> <46510BAC.6080308@ar.media.kyoto-u.ac.jp> <465160DB.8040302@ar.media.kyoto-u.ac.jp>
Message-ID:

David Cournapeau writes:
> Alexander Schmolck wrote:
>> David Cournapeau writes:
> Well, if we have one example with many options, I guess it should not be
> too difficult to keep it up to date. For example, for the package I would
> like to submit to scikits, I have a makefile which builds the pdf doc
> using some python scripts, and I try to run those scripts to make sure
> everything is updated.

Sorry, I'm not quite clear what you are getting at, so possibly this answer
doesn't address your point at all, but basically what I was trying to say is
that if you use some real package, chances are that unless it gets
abandoned, someone will be pretty motivated to make sure its install works
properly (even if something in setuptools changes) -- more so than if it's
just a toy for the benefit of *others*.

>> I have mostly converted one project to scikits/setuptools, and the points
>> I think may be useful are:
>> * how to use numpy.distutils (in particular, the system_info classes)?
>
> Why do you want to use numpy.distutils? Doesn't setuptools provide the
> functionality you want (it might not, but I'd personally just avoid using
> numpy.distutils for everything for which it isn't strictly required)?

> I am using the system_info scheme from numpy.distutils (to retrieve a
> shared library and its headers). I could do without, but that would
> basically mean reimplementing system_info, I guess.

That's unfortunate -- I find numpy.distutils pretty impenetrable, and I also
tend to believe that pretty much anything that's useful about it ought to
eventually end up in setuptools, unless it's really specific to numpy or
numerical python code -- IMO having so many general-purpose building (and
testing) facilities in numpy is a bad thing and increases the barrier of
entry for scikits writers and other contributors of numerical python code
unnecessarily, because the learning curve gets steeper and the number of
wrong or largely equivalent choices and combinations that won't work grows.

Of course, if you manage to figure out what parts of numpy.distutils might
be helpful for scikits-users, and how to use them, please do share your
insights on the wiki -- using functionality from numpy.distutils is
certainly better than rolling one's own and will likely remain necessary for
some time. But I do think that the longer-term goal of the scipy-community
should be to rely more on setuptools and less on numpy.distutils
functionality (as far as non-special-purpose stuff is concerned; support for
fortran stuff e.g. might well be special-purpose enough to keep it out of
setuptools).

>> Hmm, why are you trying to require('scikits')? Scikits is just a
>> namespace, so it doesn't really make sense to require it, I think (it
>> doesn't make that much sense to import it *either*, but that's just how
>> python's module system works).
>>
>> As I wrote on the wiki-page, you want to do:
>>
>>   from pkg_resources import require
>>   __version__ = require('scikits.mlabwrap')[0].version
>>
>> substituting your project name -- doesn't that work for you?
> No, it does not.

What happens? If your code doesn't have any strange dependencies and if you
email it to me (or point me to an url with a tgz), I could try and have a
brief look at it, if you want.

> I assumed the problem was with scikits, because I do not have a .egg file
> for scikits when I install my package (whereas I have a pyaudiolab.egg).
> But I know nothing about eggs (I do not even know what they are used
> for).

Eggs are like jar files. Essentially they are both a distribution format
(like rpm) and an archive format from which you can run code directly.

> The other problem I have is that python setup.py test does not work
> either. After some random investigation, it looks like setuptools does
> not call check_* functions, only test_*. Does that mean I have to use
> only check_* functions in my tests?

[test_* you mean?] Yes, that's what I'd suggest. check_* and bench_* are
numpy.testing only -- I've never understood the point of introducing them as
additional aliases for 'test*' (which is what unittest uses, by default); it
just seems to create unnecessary confusion and incompatibility without any
real benefit. Having said that, something like this should also work:

  # test_pyaudiolab.py
  [...]
  def test_em():
      NumpyTest().run() # or whatever

  # setup.py
  [...]
  test_suite="scikits.pyaudiolab.tests.test_pyaudiolab.test_em",
  [...]

cheers,

'as

From mattknox_ca at hotmail.com Mon May 21 09:17:46 2007
From: mattknox_ca at hotmail.com (Matt Knox)
Date: Mon, 21 May 2007 13:17:46 +0000 (UTC)
Subject: [SciPy-dev] Help for 5-min frequency for timeseries module
References: <9fb85f420705202342w474dfedgf9eeff184d2db208@mail.gmail.com>
Message-ID:

> We are utilizing timeseries to process mostly hydrological datasets,
> meaning mostly precipitation and discharge data.
> Now I would like to tackle, using the timeseries library, the issue of
> homogenizing the frequencies that we work with. Some observation devices
> have a base frequency of 5-min, others adjustable ones depending on the
> required resolution. Generally speaking, though, we need user-definable
> base frequencies (e.g. 1/5-min) for timeseries that may be converted into
> others (e.g. 5/15-min etc.).
>
> In Matt's and Pierre's posting for the alpha release of the timeseries
> library it is mentioned that additional frequencies may be added fairly
> easily. Could somebody be so kind as to point me in the right direction
> on how to do that?

Well... "fairly easily" was perhaps a bit of an exaggeration :) .
Frequencies are defined in the C code, and it is not too difficult to
hard-code new frequencies in C (i.e. explicitly define all the frequency
conversion algorithms for the new frequency), but there is no mechanism
currently for a user to create a new frequency as a multiple of a base
frequency. This is something I have contemplated a bit, but have not got
around to implementing. It will be a decent amount of work to add this
feature, but it definitely is worth having. Unfortunately, I will not likely
be able to work on it anytime soon (but would happily accept a patch if a
motivated individual felt like implementing it :) ).

So anyway... if you wanted to hard-code those frequencies in C in your copy
of the code, you could certainly do that, but it would likely prove a
frustrating experience if you aren't familiar with C. The answer at the
moment, unfortunately, is that it can't be done without modifying the code.

- Matt

From mattknox_ca at hotmail.com Mon May 21 10:28:11 2007
From: mattknox_ca at hotmail.com (Matt Knox)
Date: Mon, 21 May 2007 14:28:11 +0000 (UTC)
Subject: [SciPy-dev] Help for 5-min frequency for timeseries module
References: <9fb85f420705202342w474dfedgf9eeff184d2db208@mail.gmail.com>
Message-ID:

>> a base frequency of 5-min, others adjustable ones depending on the
>> required resolution. Generally speaking, though, we need user-definable
>> base frequencies (e.g. 1/5-min) for timeseries that may be converted
>> into others (e.g. 5/15-min etc.).

One thing I should note is that you can create series at a specific
frequency that don't have completely sequential dates. For example, you
could create a minutely frequency series with the dates 1-jan-2007 10:10,
1-jan-2007 10:15, 1-jan-2007 10:20, etc., and you could "convert" this
series to 15-minute intervals by doing something like:

>>> myseries_15min = myseries_5min[(myseries_5min.minute % 15) == 0]

And for simple scenarios like this, where you are talking about a strict
multiple of another underlying base frequency, it would be fairly easy to
manually compute the desired frequency conversion results.

Also, as a side note... the plotting code still doesn't support all
frequencies. In particular, anything of higher frequency than daily still
isn't supported by the plotting sub-module, so you won't be able to plot
these minutely frequency series directly at the moment. It's on my list of
things to do, but not sure when I'll get around to it.
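
And if you need aggregation rather than just picking every third
observation, a rough sketch along these lines should work (plain numpy,
untested; it assumes a complete 5-minute series whose length is a multiple
of 3):

>>> import numpy as np
>>> values_5min = np.asarray(myseries_5min)   # raw values of the 5-min series
>>> values_15min = values_5min.reshape(-1, 3).sum(axis=1)  # 15-min sums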

- Matt

From robert.kern at gmail.com Mon May 21 12:55:23 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 21 May 2007 11:55:23 -0500
Subject: [SciPy-dev] requesting feedback on/editing of scikits wiki-page
In-Reply-To:
References: <462EBAE6.1000906@ar.media.kyoto-u.ac.jp> <462EEA26.6000509@gmail.com> <464BBAE7.1040809@ar.media.kyoto-u.ac.jp>
Message-ID: <4651CEFB.8010507@gmail.com>

Jarrod Millman wrote:
> Originally, I assumed that *all* the scikit code would be released under
> the same license (i.e., the revised BSD license, like all the rest of
> scipy). That would certainly be my preference. My understanding was that
> the basic idea behind scikits was simply to provide a modular mechanism
> for distributing "scipy" packages under one namespace.

This is not the case. An explicit goal was for each package to have its own
license such that we can provide project infrastructure for projects that
have more restrictive licenses than scipy. It is *not* the intention to move
scipy.linalg, etc. over to scikits. Nothing currently in scipy (except maybe
some of the currently sandboxed projects) will move over to scikits.

> There are several arguments for using the same license for all the
> packages. First, I think that it would be confusing for users importing
> "from scikits import foo, bar" to find that foo is released under one
> license and bar is released under another. Second, while we want to keep
> dependencies between scikits packages to a minimum, there will inevitably
> be a need for code from one package to be used in another. This may lead
> to code duplication if the needed functionality already exists in a
> scikits package, but is released under the GPL when BSD code is needed.
> Third, it is reasonable to imagine that at some point, someone will want
> to refactor the various scikits packages. During a major refactoring, it
> may be desirable to move code from one package to another or even from
> scikits to scipy. This will be difficult if it is necessary to keep track
> of what license each piece of code is released under.

I don't think that will be difficult at all. You just have to stop thinking
of "scikits" as a project. It's not. It's just a namespace for other
projects.

> Basically, I think that opening up the possibility for each package to be
> released under its own license is going to make it difficult to maintain
> some unity in the scipy namespace. I also don't know whether there is a
> need to include non-BSD code in the scikits namespace. We could easily
> create links to non-BSD code on the wiki pages or even create a new
> namespace for code released under different licenses. I would be very
> interested to hear arguments for why non-BSD code needs to be available
> under the scikits namespace.

There is a need. It was, in fact, the *largest* motivation for starting
scikits.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it
had an underlying truth."
-- Umberto Eco

From robert.kern at gmail.com Mon May 21 13:00:55 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 21 May 2007 12:00:55 -0500
Subject: [SciPy-dev] requesting feedback on/editing of scikits wiki-page
In-Reply-To:
References: <462EBAE6.1000906@ar.media.kyoto-u.ac.jp> <462EEA26.6000509@gmail.com> <464BBAE7.1040809@ar.media.kyoto-u.ac.jp> <46514244.4050405@ar.media.kyoto-u.ac.jp> <20070521093954.GH14819@mentat.za.net>
Message-ID: <4651D047.2040605@gmail.com>

Alexander Schmolck wrote:
> Stefan van der Walt writes:
>> You can still license *your* part of the software under the modified BSD
>> license.
>
> My understanding is you can't if it depends on the GPLed code. You'd have
> to write a non-GPL'ed drop-in replacement first.

Your understanding is incorrect. You can combine code under a GPL-compatible
but non-GPL license with GPLed code. The work as a whole must be distributed
under the terms of the GPL. The readline module in Python, for example, is
distributed under the PSF license just like the rest of Python. There is no
drop-in replacement for libreadline (I don't believe libedit has been a
drop-in replacement for many years, now). The non-GPLed bit *must* be
GPL-compatible, but it does not have to be GPLed. There are explicit
statements from Richard Stallman to that effect.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it
had an underlying truth."
-- Umberto Eco

From robert.kern at gmail.com Mon May 21 13:13:42 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 21 May 2007 12:13:42 -0500
Subject: [SciPy-dev] Official scikits licensing (was: Re: requesting feedback on/editing of scikits wiki-page)
In-Reply-To: <464BBAE7.1040809@ar.media.kyoto-u.ac.jp>
References: <462EBAE6.1000906@ar.media.kyoto-u.ac.jp> <462EEA26.6000509@gmail.com> <464BBAE7.1040809@ar.media.kyoto-u.ac.jp>
Message-ID: <4651D346.3090703@gmail.com>

David Cournapeau wrote:
> One thing which may need to be changed is the license requirement. I
> thought one of the points of scikits was to include code useful for the
> scipy community, but which cannot be licensed as BSD/MIT: for example,
> GPL code, or packages which depend on LGPL libraries. mlabwrap, for that
> matter, depends on proprietary libraries to work...
>
> Once this issue is solved, I would be happy to contribute two toolboxes
> of my own for audio signal processing (released under the BSD, but
> depending on LGPL libraries).

This is the official word on scikits licensing:

scikits packages are free to choose their own open source license. The
license should be officially OSI approved. We will allow packages to contain
code with licenses that, in our judgment, comply with the Open Source
Definition but have not gone through the approval process. This is to allow
us to adopt old code with permissive licenses. The package itself, though,
should use a well-known OSI-approved license.

Packages depending on proprietary code are discouraged, but we will allow
some packages on a case-by-case basis. Primarily, only packages which depend
on proprietary code in order to interact with a proprietary system will be
allowed. Wrappers for proprietary *libraries* most likely won't.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it
had an underlying truth."
-- Umberto Eco

From a.schmolck at gmx.net Mon May 21 13:41:09 2007
From: a.schmolck at gmx.net (Alexander Schmolck)
Date: Mon, 21 May 2007 18:41:09 +0100
Subject: [SciPy-dev] Official scikits licensing
In-Reply-To: <4651D346.3090703@gmail.com> (Robert Kern's message of "Mon\, 21 May 2007 12\:13\:42 -0500")
References: <462EBAE6.1000906@ar.media.kyoto-u.ac.jp> <462EEA26.6000509@gmail.com> <464BBAE7.1040809@ar.media.kyoto-u.ac.jp> <4651D346.3090703@gmail.com>
Message-ID:

Robert Kern writes:
> This is the official word on scikits licensing:

OK, I think this bit should just go verbatim on the scikits wiki:

> scikits packages are free to choose their own open source license. The
> license should be officially OSI approved. We will allow packages to
> contain code with licenses that, in our judgment, comply with the Open
> Source Definition but have not gone through the approval process. This is
> to allow us to adopt old code with permissive licenses. The package
> itself, though, should use a well-known OSI-approved license.

I think a recommendation for packages whose authors don't have any strong
preferences or that do not have other license constraints would be good to
give (BSD? Anything else that's just as good?), because this will reduce
unintended license incompatibilities between packages (not all OSI licenses
are compatible, obviously).

cheers,

'as

From a.schmolck at gmx.net Mon May 21 13:46:32 2007
From: a.schmolck at gmx.net (Alexander Schmolck)
Date: Mon, 21 May 2007 18:46:32 +0100
Subject: [SciPy-dev] requesting feedback on/editing of scikits wiki-page
In-Reply-To: <4651D047.2040605@gmail.com> (Robert Kern's message of "Mon\, 21 May 2007 12\:00\:55 -0500")
References: <462EBAE6.1000906@ar.media.kyoto-u.ac.jp> <462EEA26.6000509@gmail.com> <464BBAE7.1040809@ar.media.kyoto-u.ac.jp> <46514244.4050405@ar.media.kyoto-u.ac.jp> <20070521093954.GH14819@mentat.za.net> <4651D047.2040605@gmail.com>
Message-ID:

Robert Kern writes:
> Alexander Schmolck wrote:
>> Stefan van der Walt writes:
>>> You can still license *your* part of the software under the modified
>>> BSD license.
>>
>> My understanding is you can't if it depends on the GPLed code. You'd
>> have to write a non-GPL'ed drop-in replacement first.
>
> Your understanding is incorrect. You can combine code under a
> GPL-compatible but non-GPL license with GPLed code. The work as a whole
> must be distributed under the terms of the GPL. The readline module in
> Python, for example, is distributed under the PSF license just like the
> rest of Python. There is no drop-in replacement for libreadline (I don't
> believe libedit has been a drop-in replacement for many years, now).

Thanks for the clarification. The GPL is weaker than I thought, then,
because that means you can rewrite a GPL'ed program or library piecewise,
thus eventually "liberating" it from the GPL. I am surprised that I haven't
heard of anyone doing that -- apart from the likely bad karma, it seems like
a reasonable strategy in certain cases... maybe because it's difficult to
argue that it would be really clean-room...

'as

From robert.kern at gmail.com Mon May 21 13:52:39 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 21 May 2007 12:52:39 -0500
Subject: [SciPy-dev] requesting feedback on/editing of scikits wiki-page
In-Reply-To:
References: <462EBAE6.1000906@ar.media.kyoto-u.ac.jp> <462EEA26.6000509@gmail.com> <464BBAE7.1040809@ar.media.kyoto-u.ac.jp> <46514244.4050405@ar.media.kyoto-u.ac.jp> <20070521093954.GH14819@mentat.za.net> <4651D047.2040605@gmail.com>
Message-ID: <4651DC67.6060401@gmail.com>

Alexander Schmolck wrote:
> Robert Kern writes:
>
>> Alexander Schmolck wrote:
>>> Stefan van der Walt writes:
>>>> You can still license *your* part of the software under the modified
>>>> BSD license.
>>> My understanding is you can't if it depends on the GPLed code. You'd
>>> have to write a non-GPL'ed drop-in replacement first.
>> Your understanding is incorrect. You can combine code under a
>> GPL-compatible but non-GPL license with GPLed code. The work as a whole
>> must be distributed under the terms of the GPL. The readline module in
>> Python, for example, is distributed under the PSF license just like the
>> rest of Python. There is no drop-in replacement for libreadline (I don't
>> believe libedit has been a drop-in replacement for many years, now).
>
> Thanks for the clarification. The GPL is weaker than I thought, then,
> because that means you can rewrite a GPL'ed program or library piecewise,
> thus eventually "liberating" it from the GPL. I am surprised that I
> haven't heard of anyone doing that -- apart from the likely bad karma, it
> seems like a reasonable strategy in certain cases... maybe because it's
> difficult to argue that it would be really clean-room...

It wouldn't be. That's why you haven't heard of anyone doing it.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it
had an underlying truth."
-- Umberto Eco

From robert.kern at gmail.com Mon May 21 13:55:02 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 21 May 2007 12:55:02 -0500
Subject: [SciPy-dev] Official scikits licensing
In-Reply-To:
References: <462EBAE6.1000906@ar.media.kyoto-u.ac.jp> <462EEA26.6000509@gmail.com> <464BBAE7.1040809@ar.media.kyoto-u.ac.jp> <4651D346.3090703@gmail.com>
Message-ID: <4651DCF6.6040509@gmail.com>

Alexander Schmolck wrote:
> Robert Kern writes:
>
>> This is the official word on scikits licensing:
>
> OK, I think this bit should just go verbatim on the scikits wiki:
>
>> scikits packages are free to choose their own open source license. The
>> license should be officially OSI approved. We will allow packages to
>> contain code with licenses that, in our judgment, comply with the Open
>> Source Definition but have not gone through the approval process. This
>> is to allow us to adopt old code with permissive licenses. The package
>> itself, though, should use a well-known OSI-approved license.
>
> I think a recommendation for packages whose authors don't have any strong
> preferences or that do not have other license constraints would be good
> to give (BSD? Anything else that's just as good?), because this will
> reduce unintended license incompatibilities between packages (not all OSI
> licenses are compatible, obviously).

"""
The recommended license for scikits projects is the (new) BSD license.
http://www.opensource.org/licenses/bsd-license.html
"""

Unofficially, for David, I do recommend against using the BSD license for
wrappers of LGPLed or GPLed libraries. It just confuses matters. There
probably isn't much code in the wrapper that can be usefully separated from
the library. For such bits, people can simply ask you for a new license on
the little bits of code.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it
had an underlying truth."
-- Umberto Eco

From a.schmolck at gmx.net Mon May 21 14:53:48 2007
From: a.schmolck at gmx.net (Alexander Schmolck)
Date: Mon, 21 May 2007 19:53:48 +0100
Subject: [SciPy-dev] requesting feedback on/editing of scikits wiki-page
In-Reply-To: <4651DC67.6060401@gmail.com> (Robert Kern's message of "Mon\, 21 May 2007 12\:52\:39 -0500")
References: <462EBAE6.1000906@ar.media.kyoto-u.ac.jp> <462EEA26.6000509@gmail.com> <464BBAE7.1040809@ar.media.kyoto-u.ac.jp> <46514244.4050405@ar.media.kyoto-u.ac.jp> <20070521093954.GH14819@mentat.za.net> <4651D047.2040605@gmail.com> <4651DC67.6060401@gmail.com>
Message-ID:

Robert Kern writes:
> It wouldn't be. That's why you haven't heard of anyone doing it.

Why not, if your eventual goal is BSD anyway? If I had to do a BSD'd netlab
lookalike and intended to use the library for internal stuff in the
meantime, I think I'd rather start out with netlab than from scratch.

'as

From a.schmolck at gmx.net Mon May 21 15:01:17 2007
From: a.schmolck at gmx.net (Alexander Schmolck)
Date: Mon, 21 May 2007 20:01:17 +0100
Subject: [SciPy-dev] Official scikits licensing
In-Reply-To: <4651DCF6.6040509@gmail.com> (Robert Kern's message of "Mon\, 21 May 2007 12\:55\:02 -0500")
References: <462EBAE6.1000906@ar.media.kyoto-u.ac.jp> <462EEA26.6000509@gmail.com> <464BBAE7.1040809@ar.media.kyoto-u.ac.jp> <4651D346.3090703@gmail.com> <4651DCF6.6040509@gmail.com>
Message-ID:

Robert Kern writes:
> Alexander Schmolck wrote:
>> Robert Kern writes:
>>
>>> This is the official word on scikits licensing:
>>
>> OK, I think this bit should just go verbatim on the scikits wiki:
>>
>>> scikits packages are free to choose their own open source license. The
>>> license should be officially OSI approved. We will allow packages to
>>> contain code with licenses that, in our judgment, comply with the Open
>>> Source Definition but have not gone through the approval process. This
>>> is to allow us to adopt old code with permissive licenses. The package
>>> itself, though, should use a well-known OSI-approved license.
>>
>> I think a recommendation for packages whose authors don't have any
>> strong preferences or that do not have other license constraints would
>> be good to give (BSD? Anything else that's just as good?), because this
>> will reduce unintended license incompatibilities between packages (not
>> all OSI licenses are compatible, obviously).
>
> """
> The recommended license for scikits projects is the (new) BSD license.
> http://www.opensource.org/licenses/bsd-license.html
> """

Thanks -- I have updated accordingly.

OK, as far as I'm concerned the scikits wiki page is almost usable now, and
the last 2 remaining vital pieces of information that are missing are:

1. Homepage/Download urls

I assume the .eggs should end up on cheeseshop and the project homepage will
just be the ../scikits/wiki/? E.g. mlabwrap's setup would include:

  setup(...
url="http://projects.scipy.org/scipy/scikits/wiki/mlabwrap", download_url="http://cheeseshop.python.org/pypi/mlabwrap/" ...) 2. Trac: How should individual scikits use the common trac? Does every scikit become a component? If so, whom does one have to bug to get the component created? Also, provided the rest looks OK to you, I'll remove the "WARNING THIS INFORMATION IS NOT RELIABLE YET" once these two points are addressed. cheers, 'as From robert.kern at gmail.com Mon May 21 15:09:43 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 21 May 2007 14:09:43 -0500 Subject: [SciPy-dev] requesting feedback on/editing of scikits wiki-page In-Reply-To: References: <462EBAE6.1000906@ar.media.kyoto-u.ac.jp> <462EEA26.6000509@gmail.com> <464BBAE7.1040809@ar.media.kyoto-u.ac.jp> <46514244.4050405@ar.media.kyoto-u.ac.jp> <20070521093954.GH14819@mentat.za.net> <4651D047.2040605@gmail.com> <4651DC67.6060401@gmail.com> Message-ID: <4651EE77.90004@gmail.com> Alexander Schmolck wrote: > Robert Kern writes: > >> It wouldn't be. That's why you haven't heard of anyone doing it. > > Why not, if your eventual goal is BSD anyway? I think if I had to do a BSD'd > netlab lookalike and intended to use the library for internal stuff in the > meantime, I think I'd rather start out with netlab than from scratch. Pardon my ellipses. "It wouldn't be [clean room]." -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From gnchen at cortechs.net Mon May 21 15:13:53 2007 From: gnchen at cortechs.net (Gennan Chen) Date: Mon, 21 May 2007 12:13:53 -0700 Subject: [SciPy-dev] (no subject) Message-ID: <5A8B0B13-5973-45D3-AF20-FFE5D0FA2E57@cortechs.net> Hi! I need to build a distribution OS X package from source for numpy and scipy . But setup.py in both of them seems lacking that option. Can anyone tell me how to do that? Thanks! Gen -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Mon May 21 15:18:47 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 21 May 2007 14:18:47 -0500 Subject: [SciPy-dev] (no subject) In-Reply-To: <5A8B0B13-5973-45D3-AF20-FFE5D0FA2E57@cortechs.net> References: <5A8B0B13-5973-45D3-AF20-FFE5D0FA2E57@cortechs.net> Message-ID: <4651F097.7090706@gmail.com> Gennan Chen wrote: > Hi! > > I need to build a distribution OS X package from source for numpy and > scipy . But setup.py in both of them seems lacking that option. Can > anyone tell me how to do that? You will need to use bdist_mpkg: http://cheeseshop.python.org/pypi/bdist_mpkg/ Note that with scipy, the FORTRAN-using extension modules will end up linked against your FORTRAN compiler's dynamic runtime libraries. The binaries you make will need those runtime libraries to be present. You will have to have your users install the same FORTRAN compiler you used. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cookedm at physics.mcmaster.ca Mon May 21 15:34:01 2007 From: cookedm at physics.mcmaster.ca (David M. 
From: cookedm at physics.mcmaster.ca (David M. Cooke)
Date: Mon, 21 May 2007 15:34:01 -0400
Subject: [SciPy-dev] [SciPy-user] (no subject)
In-Reply-To: <5A8B0B13-5973-45D3-AF20-FFE5D0FA2E57@cortechs.net>
References: <5A8B0B13-5973-45D3-AF20-FFE5D0FA2E57@cortechs.net>
Message-ID: <20070521193401.GA3524@arbutus.physics.mcmaster.ca>

On Mon, May 21, 2007 at 12:13:53PM -0700, Gennan Chen wrote:
> Hi!
>
> I need to build a distribution OS X package from source for numpy and
> scipy, but setup.py in both of them seems to lack that option. Can anyone
> tell me how to do that?

In numpy, the setupegg.py script works the same as setup.py, except it
imports setuptools first, so stuff like bdist_egg and bdist_mpkg should work
(assuming you have the py2app stuff installed). You can do the same with
scipy by importing setuptools first:

$ python -c 'import setuptools; execfile("setup.py")' bdist_mpkg

--
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/
|cookedm at physics.mcmaster.ca

From david at ar.media.kyoto-u.ac.jp Mon May 21 19:49:11 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 22 May 2007 08:49:11 +0900
Subject: [SciPy-dev] Official scikits licensing
In-Reply-To: <4651DCF6.6040509@gmail.com>
References: <462EBAE6.1000906@ar.media.kyoto-u.ac.jp> <462EEA26.6000509@gmail.com> <464BBAE7.1040809@ar.media.kyoto-u.ac.jp> <4651D346.3090703@gmail.com> <4651DCF6.6040509@gmail.com>
Message-ID: <46522FF7.6030407@ar.media.kyoto-u.ac.jp>

Robert Kern wrote:
>
> """
> The recommended license for scikits projects is the (new) BSD license.
> http://www.opensource.org/licenses/bsd-license.html
> """
>
> Unofficially, for David, I do recommend against using the BSD license for
> wrappers of LGPLed or GPLed libraries. It just confuses matters. There
> probably isn't much code in the wrapper that can be usefully separated
> from the library. For such bits, people can simply ask you for a new
> license on the little bits of code.

So you mean relicensing under the GPL? I have to confess that I myself much
prefer the GPL to BSD anyway, and the only reason I released the code under
BSD was to make it easier to integrate with scipy (that project was started
before scikits).

David

From robert.kern at gmail.com Mon May 21 20:02:52 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 21 May 2007 19:02:52 -0500
Subject: [SciPy-dev] Official scikits licensing
In-Reply-To: <46522FF7.6030407@ar.media.kyoto-u.ac.jp>
References: <462EBAE6.1000906@ar.media.kyoto-u.ac.jp> <462EEA26.6000509@gmail.com> <464BBAE7.1040809@ar.media.kyoto-u.ac.jp> <4651D346.3090703@gmail.com> <4651DCF6.6040509@gmail.com> <46522FF7.6030407@ar.media.kyoto-u.ac.jp>
Message-ID: <4652332C.1080005@gmail.com>

David Cournapeau wrote:
> Robert Kern wrote:
>> Unofficially, for David, I do recommend against using the BSD license
>> for wrappers of LGPLed or GPLed libraries. It just confuses matters.
>> There probably isn't much code in the wrapper that can be usefully
>> separated from the library. For such bits, people can simply ask you for
>> a new license on the little bits of code.
> So you mean relicensing under the GPL?

I recommend making wrappers available under the license of the underlying
library if it's one of the standard ones. If it's GPL, then GPL. If it's
LGPL, then LGPL. BSD, then BSD. Some ad hoc but permissive license, then
BSD. You said that the underlying library was LGPLed, so I recommend the
LGPL in that case.
> I have to confess that I myself > much prefer the GPL to BSD anyway, and the only reason I released the > code under BSD was to make it easier to integrate with scipy (that > project was started before scikits). Well, since that wouldn't have worked anyways, there's not much point in keeping the wrappers as BSD. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From david at ar.media.kyoto-u.ac.jp Mon May 21 21:10:07 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 22 May 2007 10:10:07 +0900 Subject: [SciPy-dev] requesting feedback on/editing of scikits wiki-page In-Reply-To: References: <462EBAE6.1000906@ar.media.kyoto-u.ac.jp> <462EEA26.6000509@gmail.com> <46510BAC.6080308@ar.media.kyoto-u.ac.jp> <465160DB.8040302@ar.media.kyoto-u.ac.jp> Message-ID: <465242EF.9040407@ar.media.kyoto-u.ac.jp> Alexander Schmolck wrote: > David Cournapeau writes: > >> Alexander Schmolck wrote: >>> David Cournapeau writes: >> Well, if we have one example with many options, I guess it should not be >> too difficult to make it up-to-date. For example, for the package I >> would like to submit to scikits, I have a makefile which builds the pdf >> doc using some python scripts, and try to run those scripts to make sure >> everything is updated. > > Sorry, I'm not quite clear what you are getting at, so possibly this answer > doesn't address your point at all, but basically what I was trying to say is > that if you use some real package, chances are that unless it gets abandoned, > someone will be pretty motivated to make sure its install works properly (even > if something in setuptools changes) -- more so than if it's just a toy for the > benefit of *others*. I was just saying that if the only drawback of having a "fake" package is that it is not updated, it should not be too difficult to have something in place to test it automatically. But anyway, this is a very minor point (real package vs fake package). > > That's unfortunate -- I find numpy.distutils pretty impenetrable, and I also > tend to believe that pretty much anything that's useful about it ought to > eventually end up in setuptools, unless it's really specific to numpy or > numerical python code -- IMO having so many general-purpose building > (and testing) facilities in numpy is a bad thing and increases the barrier of > entry for scikits writers and other contributors of numerical python code > unnecessarily, because the learning curve gets steeper and the number of wrong > or largely equivalent choices and combinations that won't work grows. I agree with you here, but that's a consequence of the history of numpy, I guess. Now, I don't find setuptools that much clearer myself. Maybe it is just because I have not used it extensively, but I always found setuptools (and distutils for that matter) quite complicated for anything not trivial. I had to spend 2 hours to convert my project to setuptools, and the only benefit is namespace support. And it still does not work because, for some reason, files automatically included by setuptools for the install target are not included when using the sdist target, whereas there is no difference between those targets with distutils...
And I have not tested it yet on other platforms (I am always afraid of windows :) ) > Of course if you manage to figure out what parts of numpy.distutils might be > helpful for scikits-users, and how to use them, please do share your insights > on the wiki -- using functionality from numpy.distutils is certainly better > than rolling one's own and will likely remain necessary for some time. But I do > think that the longer term goal of the scipy-community should be to rely more > on setuptools and less on numpy.distutils functionality (as far as > non-special-purpose stuff is concerned; support for fortran stuff e.g. might > well be special-purpose enough to keep it out of setuptools). I don't know much about numpy.distutils, but if you take a quick look at numpy/distutils, it is around 1.9 MB, of which: - 1 MB of tests - 350 kB for fortran (fcompiler) - 300 kB for extending distutils (command) - half of the rest seems pretty specific to numpy too (system_info is, cpuinfo too, other I don't know). You could say that the modules of numpy/scipy fall into 3 categories wrt distutils: - pure python packages: not difficult to handle with distutils - python packages which use their own C extension: not difficult to handle with distutils. - python packages which depend on C/Fortran code (other libraries): difficult to handle with distutils, if not impossible. In my case, I use numpy.distutils to find a library and its header files. As far as I know, neither distutils nor setuptools offer any facility for that. > > What happens? I got the following error when importing the package (from scikits import pyaudiolab):

Traceback (most recent call last):

/usr/media/boulot/softwares/JapSoft/pykdic/src/<ipython console> in <module>()

/home/david/local/lib/python2.5/site-packages/scikits/pyaudiolab/__init__.py in <module>()
      2 # Last Change: Wed Jan 31 03:00 PM 2007 J
      3
----> 4 from info import __doc__
      5
      6 #from pyaudioio import play as linux_player

/home/david/local/lib/python2.5/site-packages/scikits/pyaudiolab/info.py in <module>()
     15 License: BSD-style (see LICENSE.txt in main source directory)
     16 """
---> 17 import version as vpy
     18 version = vpy.__version__
     19

/home/david/local/lib/python2.5/site-packages/scikits/pyaudiolab/version.py in <module>()
      3
      4 from pkg_resources import require
----> 5 __version__ = require('scikits.pyaudiolab')[0].version
      6 #__version__ = '0.7'
      7

/usr/lib/python2.5/site-packages/pkg_resources.py in require(self, *requirements)
    583     """
    584
--> 585     needed = self.resolve(parse_requirements(requirements))
    586
    587     for dist in needed:

/usr/lib/python2.5/site-packages/pkg_resources.py in resolve(self, requirements, env, installer)
    481     dist = best[req.key] = env.best_match(req, self, installer)
    482     if dist is None:
--> 483         raise DistributionNotFound(req)  # XXX put more info here
    484     to_activate.append(dist)
    485     if dist not in req:

DistributionNotFound: scikits.pyaudiolab

If I comment out the line __version__ = require('scikits.pyaudiolab')[0].version, everything is fine. > > If your code doesn't have any strange dependencies and if you email it to me > (or point me to an url with a tgz), I could try and have a brief look at it, > if you want. You can find it here: http://www.ar.media.kyoto-u.ac.jp/members/david/archives/pyaudiolab-scikits.tbz2 (this is a tarball of the current source directory, as setup.py sdist does not work yet...). The only dependency is libsndfile, which is easily available if you use linux.
David

From david at ar.media.kyoto-u.ac.jp Mon May 21 21:15:31 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 22 May 2007 10:15:31 +0900 Subject: [SciPy-dev] requesting feedback on/editing of scikits wiki-page In-Reply-To: <465242EF.9040407@ar.media.kyoto-u.ac.jp> References: <462EBAE6.1000906@ar.media.kyoto-u.ac.jp> <462EEA26.6000509@gmail.com> <46510BAC.6080308@ar.media.kyoto-u.ac.jp> <465160DB.8040302@ar.media.kyoto-u.ac.jp> <465242EF.9040407@ar.media.kyoto-u.ac.jp> Message-ID: <46524433.6050709@ar.media.kyoto-u.ac.jp> David Cournapeau wrote: > I agree with you here, but that's a consequence of the history of numpy, > I guess. Now, I don't find setuptools that much clearer myself. Maybe it > is just because I have not used it extensively, but I always found > setuptools (and distutils for that matter) quite complicated for > anything not trivial. > > I had to spend 2 hours to convert my project to setuptools, and the > only benefit is namespace support. And it still does not work > because, for some reason, files automatically included by setuptools for > the install target are not included when using the sdist target, whereas there > is no difference between those targets with distutils... Well, I spent one hour trying to work around something which is actually a bug, and it is not even solved yet. Yeah... http://www.mail-archive.com/distutils-sig at python.org/msg02794.html David

From robert.kern at gmail.com Mon May 21 21:24:55 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 21 May 2007 20:24:55 -0500 Subject: [SciPy-dev] requesting feedback on/editing of scikits wiki-page In-Reply-To: <465242EF.9040407@ar.media.kyoto-u.ac.jp> References: <462EBAE6.1000906@ar.media.kyoto-u.ac.jp> <462EEA26.6000509@gmail.com> <46510BAC.6080308@ar.media.kyoto-u.ac.jp> <465160DB.8040302@ar.media.kyoto-u.ac.jp> <465242EF.9040407@ar.media.kyoto-u.ac.jp> Message-ID: <46524667.3060704@gmail.com> David Cournapeau wrote: > I got the following error when importing the package (from scikits > import pyaudiolab) > DistributionNotFound: scikits.pyaudiolab > > If I comment out the line __version__ = > require('scikits.pyaudiolab')[0].version, everything is fine. Change DISTNAME = 'pyaudiolab' to DISTNAME = 'scikits.pyaudiolab' -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From gnchen at cortechs.net Mon May 21 21:30:15 2007 From: gnchen at cortechs.net (Gennan Chen) Date: Mon, 21 May 2007 18:30:15 -0700 Subject: [SciPy-dev] [SciPy-user] (no subject) In-Reply-To: <20070521193401.GA3524@arbutus.physics.mcmaster.ca> References: <5A8B0B13-5973-45D3-AF20-FFE5D0FA2E57@cortechs.net> <20070521193401.GA3524@arbutus.physics.mcmaster.ca> Message-ID: <7652F73B-EE59-49B6-971C-1C9870B61786@cortechs.net> Thanks for all the info. It worked... Gen On May 21, 2007, at 12:34 PM, David M. Cooke wrote: > On Mon, May 21, 2007 at 12:13:53PM -0700, Gennan Chen wrote: >> Hi! >> >> I need to build a distribution OS X package from source for numpy and >> scipy. But setup.py in both of them seems to lack that option. Can >> anyone tell me how to do that? > > In numpy, the setupegg.py script works the same as setup.py, except it > imports setuptools first so stuff like bdist_egg and bdist_mpkg should > work (assuming you have the py2app stuff installed). You can do the same > with scipy by importing setuptools first: > > $ python -c 'import setuptools; execfile("setup.py")' bdist_mpkg [...]

From wbaxter at gmail.com Mon May 21 21:31:38 2007 From: wbaxter at gmail.com (Bill Baxter) Date: Tue, 22 May 2007 10:31:38 +0900 Subject: [SciPy-dev] Derivative-based nonlinear optimization with linear ineqality constraints Message-ID: There don't seem to be any optimizers in SciPy that have all of these attributes:
1) nonlinear objective function
2) uses analytical derivatives
3) allows (non-)linear (in)equality constraints
Did I just miss it? The only things I see are methods that can only handle simple bounds, or don't make use of derivative info. BFGS isn't that difficult to adapt to accept linear constraints using the active set approach and Lagrange multipliers. I have a BFGS routine I've been using for a while based on code from the book: // Optimization of Dynamic Systems // by Sunil Agrawal and Brian Fabien // http://abs-5.me.washington.edu/pub/fabien/book/code.tar.gz The code seems to be AWOL now, but I'm sure I have a copy I could dig up if anyone's interested. Or there's my C++ adaptation of it, which I do have handy. Given a working BFGS implementation, the basic idea with active set is simple though. Either you're pinned against a constraint and treat it as an equality constraint (active), or the constraint is off (inactive). If your line search leads you away from an active constraint, you make it inactive. If it takes you up against an inactive constraint, make it active. The active constraints can be handled using Lagrange multipliers. --bb

From david at ar.media.kyoto-u.ac.jp Mon May 21 22:02:03 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 22 May 2007 11:02:03 +0900 Subject: [SciPy-dev] requesting feedback on/editing of scikits wiki-page In-Reply-To: <46524667.3060704@gmail.com> References: <462EBAE6.1000906@ar.media.kyoto-u.ac.jp> <462EEA26.6000509@gmail.com> <46510BAC.6080308@ar.media.kyoto-u.ac.jp> <465160DB.8040302@ar.media.kyoto-u.ac.jp> <465242EF.9040407@ar.media.kyoto-u.ac.jp> <46524667.3060704@gmail.com> Message-ID: <46524F1B.6020004@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > > Change > > DISTNAME = 'pyaudiolab' > > to > > DISTNAME = 'scikits.pyaudiolab' That was it, thanks, David

From david at ar.media.kyoto-u.ac.jp Mon May 21 22:09:32 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 22 May 2007 11:09:32 +0900 Subject: [SciPy-dev] requesting feedback on/editing of scikits wiki-page In-Reply-To: References: <462EBAE6.1000906@ar.media.kyoto-u.ac.jp> <462EEA26.6000509@gmail.com> Message-ID: <465250DC.2040604@ar.media.kyoto-u.ac.jp> Alexander Schmolck wrote: > Hi everyone, > > > As the author of the first (if yet unreleased) scikit, I've tried to put some > info for prospective scikit authors on > > > > that tries to condense down the most important points for those who are not > yet familiar with the required infrastructure like setuptools (which is fairly > rich) etc. to make things a little less overwhelming.
> A few things also still need clarification. I'm myself just learning to get to grips with various > of these things, so this page could use some attention from someone more > knowledgeable than me. > > Here are a few specific points (search for FIXME/XXX in the document) > > 0. Blurb: What sort of things are suitable for a scikit project, what > requirements must be met by projects or authors to qualify (this should > also include something on acceptable/desirable[1] licenses, dependencies, > code quality etc.) and what the overall aims and immediate future for > scikits look like. I'm not really the person to write that so I just wrote > something brief and vague. > > 1. Trac: I'm not really that familiar with trac, but I assume some sorts of > conventions/infrastructure will need to be established in order for > logically independent projects to use the same trac effectively. I guess > one first step might be to use ``trac-admin`` to create each scikit (i.e. > mlabwrap) as a component? > > 2. Homepage/Wiki/Docstandards etc. info. > > Hmm, it occurs to me that a link to coding standards might also be a good > idea... > > 3. Versioning: I think it would be nice if there were a standard way for > people to version their scikits, both conventions-wise as well as how the > process is automated as far as is possible. I've outlined a > setuptools-based solution in preference to one using numpy.distutils, whose docs are > pretty opaque[1]. I found one problem with the approach you mention for versioning: when building a source package from sdist, only the release version part is appended to the source file, whereas with the "scipy approach", if your version is "0.7dev", it is appended to the source package. Also, having setuptools handling the versioning means it is more difficult to get the exact version back from other programs (think shell scripts and Makefiles, for example), whereas extracting a string from a python file is trivial. I am not sure I understand the advantages of the approach you are proposing? David

From openopt at ukr.net Tue May 22 02:08:09 2007 From: openopt at ukr.net (dmitrey) Date: Tue, 22 May 2007 09:08:09 +0300 Subject: [SciPy-dev] Derivative-based nonlinear optimization with linear ineqality constraints In-Reply-To: References: Message-ID: <465288C9.2070707@ukr.net> Bill Baxter wrote: > There don't seem to be any optimizers in SciPy that have all of these > attributes: > 1) nonlinear objective function > 2) uses analytical derivatives > 3) allows (non-)linear (in)equality constraints > So your message differs very much from the topic "Derivative-based nonlinear optimization with linear ineqality constraints" > Did I just miss it? The only things I see are methods that can only > handle simple bounds, or don't make use of derivative info. > It seems to me I had seen solver(s) in scipy.optimize that allow handling nonlinear inequality constraints (with derivatives), but not equality. There is only a very small percentage of NLP solvers able to handle equality constraints. Also note please that many SQP/rSQP solvers can handle equality, but not inequality constraints, so the number of NLP solvers capable of handling both eq & ineq is very small. Usually they are based on the linearization method. Afaik there is the free IPopt (but it has a GPL license) from the COIN-OR project, written in C++, and some commercial ones, for example KNitro. There was a Ukrainian academician, Pshenichniy, who spent dozens of years researching linearization-based optimization methods, but he died some months ago.
BTW in his book "Linearization method" (1983) he makes a very important observation: "many years of practice solving optimization problems made specialists think that there can't be a universal NLP algorithm that solves ALL problems well enough" (he agrees with the statement, and so do I). I would back the statement with my own arguments, but that would require a lot of English text, so let me omit it here. And some lines below Pshenichniy continues: "Practical experience shows that there are sets of problems where an algorithm that seems the most ineffective from the theoretical point of view turns out to yield good results, due to the specifics of the problem. Hence, a number of different algorithms is required." An alternative approach (to constructing a single universal solver, like some do) can be like TOMLAB (or like in my OpenOpt): assignment of the problem to a single object + trying different solvers. And, of course, observing convergence pictures, because just a single output <f_opt, x_opt, time_elapsed> can be very uninformative in many cases. Regards, D > BFGS isn't that difficult to adapt to accept linear constraints using > the active set approach and Lagrange multipliers. > > I have a BFGS routine I've been using for a while based on code from the book: > // Optimization of Dynamic Systems > // by Sunil Agrawal and Brian Fabien > // http://abs-5.me.washington.edu/pub/fabien/book/code.tar.gz > > The code seems to be AWOL now, but I'm sure I have a copy I could dig > up if anyone's interested. Or there's my C++ adaptation of it, which I > do have handy. > > Given a working BFGS implementation, the basic idea with active set is > simple though. Either you're pinned against a constraint and treat > it as an equality constraint (active), or the constraint is off > (inactive). If your line search leads you away from an active > constraint, you make it inactive. If it takes you up against an > inactive constraint, make it active. The active constraints can be > handled using Lagrange multipliers. > > > --bb

From wbaxter at gmail.com Tue May 22 03:05:24 2007 From: wbaxter at gmail.com (Bill Baxter) Date: Tue, 22 May 2007 16:05:24 +0900 Subject: [SciPy-dev] Derivative-based nonlinear optimization with linear ineqality constraints In-Reply-To: <465288C9.2070707@ukr.net> References: <465288C9.2070707@ukr.net> Message-ID: On 5/22/07, dmitrey wrote: > Bill Baxter wrote: > > There don't seem to be any optimizers in SciPy that have all of these > > attributes: > > 1) nonlinear objective function > > 2) uses analytical derivatives > > 3) allows (non-)linear (in)equality constraints > > > So your message differs very much from the topic "Derivative-based nonlinear > optimization with linear ineqality constraints" Not sure what you mean. I have gradient information and I am interested in using it to optimize a non-linear objective function subject to linear inequality constraints. But at the same time I don't see any gradient-based solvers in scipy that can handle *any* constraints other than simple bounds, hence the (non-) and (in) in parentheses. > > Did I just miss it? The only things I see are methods that can only > > handle simple bounds, or don't make use of derivative info. > > > It seems to me I had seen solver(s) in scipy.optimize that allow handling > nonlinear inequality constraints (with derivatives), but not > equality.
Here's what there is:
fmin - simplex method, no derivs no constraints
fmin_powell - Powell's method, no derivs no constraints
fmin_cg - uses derivs, no constraints
fmin_ncg - uses derivs (and hessian), no constraints
fmin_bfgs - uses derivs, no constraints
fmin_l_bfgs_b - uses derivs, bounds only
fmin_tnc - uses derivs, bounds only
fmin_cobyla - no derivs, but non-linear constraints
So again the question... did I miss anything? It seems like there's a hole in the category of "uses derivs, supports constraints". > There is only a very small percentage of NLP solvers able to handle > equality constraints. [...] > An alternative approach (to constructing a single universal solver, like > some do) can be like TOMLAB (or like in my OpenOpt): assignment of the > problem to a single object + trying different solvers. And, of course, > observing convergence pictures, because just a single output <f_opt, > x_opt, time_elapsed> can be very uninformative in many cases. All very interesting, but I'm not sure what point you're trying to make. I'd be happy to try different solvers on my problem, and in fact I have tried a couple. The BFGS solvers seem to work pretty well if I start from a feasible point, but it makes me a little nervous using them since the values of my function are bogus outside the feasible region. I'm just setting f(x) to HUGE_NUM outside the feasible region. fmin_tnc did *not* work. It seems to take a step out of bounds and get lost. --bb

From david at ar.media.kyoto-u.ac.jp Tue May 22 03:02:18 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 22 May 2007 16:02:18 +0900 Subject: [SciPy-dev] Finish the stats review ? Message-ID: <4652957A.9060704@ar.media.kyoto-u.ac.jp> Hi there, Last year, the statistics review was launched, but it is far from being done: http://projects.scipy.org/scipy/scipy/wiki/StatisticsReview I think it would be nice to have a cleaned-up stats module. Would other people be willing to finish it: in particular, removing unnecessary functions (duplicating functionality available elsewhere, e.g. numpy), adding tests, etc...? David
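For illustration, here is a minimal sketch of the one constrained entry in Bill's inventory above, fmin_cobyla; the quadratic objective and the single linear inequality are invented for the example, and COBYLA expects each constraint function to return >= 0 at feasible points:

    from scipy.optimize import fmin_cobyla

    def f(x):
        # toy objective (illustrative only): unconstrained minimum at (2, 1)
        return (x[0] - 2.0)**2 + (x[1] - 1.0)**2

    def g(x):
        # feasible region x0 + x1 <= 2, rewritten as g(x) >= 0 for cobyla
        return 2.0 - x[0] - x[1]

    xopt = fmin_cobyla(f, [0.0, 0.0], cons=[g], rhoend=1e-7, iprint=0)
    print xopt  # ends up near (1.5, 0.5), on the constraint boundary

Since COBYLA builds linear approximations from function values only, it needs no derivative information at all, which is exactly the trade-off Bill points out.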
From matthieu.brucher at gmail.com Tue May 22 04:21:28 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 22 May 2007 10:21:28 +0200 Subject: [SciPy-dev] Derivative-based nonlinear optimization with linear ineqality constraints In-Reply-To: References: <465288C9.2070707@ukr.net> Message-ID: Hi, I think a part of the GSoC on optimization handles the constraints in the framework I proposed, doesn't it? Writing a solver in this framework would mainly mean writing a new Optimizer that would use the same steps or line searches and that would handle all convex constraints, I suppose. Matthieu 2007/5/22, Bill Baxter : > Not sure what you mean. I have gradient information and I am > interested in using it to optimize a non-linear objective function > subject to linear inequality constraints. [...] > So again the question... did I miss anything? It seems like there's a > hole in the category of "uses derivs, supports constraints". [...] > --bb

From openopt at ukr.net Tue May 22 03:55:29 2007 From: openopt at ukr.net (dmitrey) Date: Tue, 22 May 2007 10:55:29 +0300 Subject: [SciPy-dev] Derivative-based nonlinear optimization with linear ineqality constraints In-Reply-To: References: <465288C9.2070707@ukr.net> Message-ID: <4652A1F1.6050406@ukr.net> Bill Baxter wrote: > On 5/22/07, dmitrey wrote: >> Bill Baxter wrote: >>> Did I just miss it? The only things I see are methods that can only >>> handle simple bounds, or don't make use of derivative info. Sorry, maybe I saw the ones in the Octave-forge repository. > [...] > I'm just setting f(x) to HUGE_NUM outside the feasible region. > fmin_tnc did *not* work. It seems to take a step out of bounds and get > lost. Since fmin_tnc uses derivatives, of course it will fail to solve the problem with HUGE_NUM outside the feasible region. You should either search for a BSD (or other appropriately licensed) solver outside the scipy optim bunch, or use penalty coefficients. BTW ralg is capable of successfully handling very huge penalties, and this is one more reason why our dept uses the solver very often in our software. So, if your problem is smooth, you can use something like F = lambda x: f(x) + N*sum(max(0, c[i](x))**2) + N*sum(h[i](x)**2) (for constraints c[i](x) <= 0, h[i](x) = 0), so you should raise N until your solution becomes feasible, in a cycle or somehow else. If you use a nonsmooth optimizer (like ralg), it's better to handle penalties as F = lambda x: f(x) + N*sum(max(0, c[i](x))) + N*sum(abs(h[i](x))) Automatic tuning of penalty coefficients (in Naum Z. Shor's r-alg) is implemented for example in my OpenOpt for MATLAB/Octave, or in SolvOpt by Alexey Kuntsevich for C/Fortran/MATLAB (2001) (also from our department). But for Python it's not done yet, nor is the smooth NLP optimizer that I promised in my GSoC project. WBR, D.
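To make the penalty recipe concrete, a minimal sketch using fmin_bfgs with numerical gradients; the objective, the single inequality constraint and the schedule for N are all invented, and the max(0, .) keeps feasible points unpenalized:

    from numpy import array
    from scipy.optimize import fmin_bfgs

    f = lambda x: (x[0] - 2.0)**2 + (x[1] - 1.0)**2  # toy objective
    c = [lambda x: x[0] + x[1] - 2.0]                # feasible iff c_i(x) <= 0

    def F(x, N):
        # smooth exterior quadratic penalty, as in the message above
        return f(x) + N * sum(max(0.0, ci(x))**2 for ci in c)

    x = array([0.0, 0.0])
    for N in (1e2, 1e4, 1e6):   # raise N; iterates approach the feasible set
        x = fmin_bfgs(F, x, args=(N,), disp=0)
    print x                     # close to the constrained minimum (1.5, 0.5)

If an analytic gradient is available, as in Bill's case, the penalty gradient comes almost for free, since the derivative of max(0, c(x))**2 is 2*max(0, c(x))*c'(x).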
From cimrman3 at ntc.zcu.cz Tue May 22 06:17:49 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 22 May 2007 12:17:49 +0200 Subject: [SciPy-dev] lobpcg eigenvalue solver Message-ID: <4652C34D.8040509@ntc.zcu.cz> Hi, a pure SciPy implementation of the Locally Optimal Block Preconditioned Conjugate Gradient method (LOBPCG) for symmetric eigenvalue problems, see http://www-math.cudenver.edu/~aknyazev/software/BLOPEX/, was added into scipy.sandbox. It was written in collaboration with Andrew Knyazev, one of the LOBPCG authors; the license is BSD. LOBPCG solves eigenproblems Ax = lambda Bx with a large, possibly sparse, symmetric matrix A and a symmetric positive definite matrix B. Matrices can be given as full numpy arrays, sparse scipy matrices, or as functions performing the matrix-vector product. There is not much documentation yet, so look at the web page above, and send me your comments! r. PS: Later I would like to move the module into the main scipy tree - where would be the best place?

From nwagner at iam.uni-stuttgart.de Tue May 22 07:17:02 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 22 May 2007 13:17:02 +0200 Subject: [SciPy-dev] lobpcg eigenvalue solver In-Reply-To: <4652C34D.8040509@ntc.zcu.cz> References: <4652C34D.8040509@ntc.zcu.cz> Message-ID: <4652D12E.5050206@iam.uni-stuttgart.de> Robert Cimrman wrote: > a pure SciPy implementation of the Locally Optimal Block Preconditioned > Conjugate Gradient method (LOBPCG) for symmetric eigenvalue problems [...] Hi Robert, Great work! Thank you very much! I have just enabled the package. Here is the output of my short test:

Traceback (most recent call last):
  File "test_lobpcg.py", line 17, in ?
    w = lobpcg.lobpcg(X,A,B)
  File "/usr/lib64/python2.4/site-packages/scipy/sandbox/lobpcg/lobpcg.py", line 148, in lobpcg
    residualTolerance = sqrt( 1e-15 ) * n
NameError: global name 'sqrt' is not defined

Cheers, Nils P.S. Is there a specific reason why you have used a pure python implementation?

From nwagner at iam.uni-stuttgart.de Tue May 22 07:53:47 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 22 May 2007 13:53:47 +0200 Subject: [SciPy-dev] lobpcg eigenvalue solver In-Reply-To: <4652C34D.8040509@ntc.zcu.cz> References: <4652C34D.8040509@ntc.zcu.cz> Message-ID: <4652D9CB.50505@iam.uni-stuttgart.de> Robert Cimrman wrote: > a pure SciPy implementation of the Locally Optimal Block Preconditioned > Conjugate Gradient method (LOBPCG) for symmetric eigenvalue problems [...] Robert, You forgot to mention that symeig is currently required by lobpcg. Reference: http://mdp-toolkit.sourceforge.net/symeig.html Cheers, Nils P.S. Ticket 373 http://projects.scipy.org/scipy/scipy/ticket/373 could be closed.

From peridot.faceted at gmail.com Tue May 22 07:57:08 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Tue, 22 May 2007 07:57:08 -0400 Subject: [SciPy-dev] Derivative-based nonlinear optimization with linear ineqality constraints In-Reply-To: References: <465288C9.2070707@ukr.net> Message-ID: On 22/05/07, Bill Baxter wrote: > All very interesting, but I'm not sure what point you're trying to make. > I'd be happy to try different solvers on my problem, and in fact I > have tried a couple. The BFGS solvers seem to work pretty well if I start > from a feasible point, but it makes me a little nervous using them > since the values of my function are bogus outside the feasible region. > I'm just setting f(x) to HUGE_NUM outside the feasible region. > fmin_tnc did *not* work. It seems to take a step out of bounds and get > lost. What happened when you tried using a derivative-less solver (which I guess is just fmin_cobyla)? Too slow? I've never actually gone to the trouble of implementing analytic derivatives even when I could have, but then I've never had particularly demanding minimizations to do. Anne

From cimrman3 at ntc.zcu.cz Tue May 22 08:36:34 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 22 May 2007 14:36:34 +0200 Subject: [SciPy-dev] lobpcg eigenvalue solver In-Reply-To: <4652D12E.5050206@iam.uni-stuttgart.de> References: <4652C34D.8040509@ntc.zcu.cz> <4652D12E.5050206@iam.uni-stuttgart.de> Message-ID: <4652E3D2.20109@ntc.zcu.cz> Nils Wagner wrote: > Traceback (most recent call last): > > File "test_lobpcg.py", line 17, in ? > w = lobpcg.lobpcg(X,A,B) > File > "/usr/lib64/python2.4/site-packages/scipy/sandbox/lobpcg/lobpcg.py", > line 148, in lobpcg > residualTolerance = sqrt( 1e-15 ) * n > NameError: global name 'sqrt' is not defined Thanks, it should be nm.sqrt... > Cheers, > Nils > > P.S. Is there a specific reason why you have used a pure python > implementation ? Because otherwise I would have to take care of providing optimized BLAS/LAPACK functions to the abstract LOBPCG core. This way I get the optimized dot, cholesky, etc. for free, without the gory details of detecting the right libraries myself... It is also better for future extensibility. The current implementation works and gives the same results as the reference matlab version by the LOBPCG algorithm author(s), but it is slower, so some optimization/profiling will be required. BTW, the matlab version is much faster than a naive implementation in C (without BLAS) ... cheers, r.
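From the two tracebacks in this thread one can piece together roughly what Nils's test must look like; the lobpcg.lobpcg(X, A, B) call and its (eigs, vecs) return are taken from there, while the import path, matrix sizes and matrix contents are guesses (the sandbox package must be enabled and symeig installed):

    import numpy as nm
    from scipy.sandbox.lobpcg import lobpcg  # module layout guessed from the traceback paths

    n, m = 100, 3                        # matrix order, number of wanted eigenpairs
    A = nm.diag(nm.arange(1.0, n + 1))   # symmetric A with known eigenvalues 1..n
    B = nm.eye(n)                        # symmetric positive definite B
    X = nm.random.rand(n, m)             # initial block of m vectors

    eigs, vecs = lobpcg.lobpcg(X, A, B)
    print eigs                           # the m computed eigenvalues

Because A, B may also be passed as callables performing the matrix-vector product, the same call works for matrix-free problems, which is the point of the abstract core Robert describes.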
From matthieu.brucher at gmail.com Tue May 22 09:24:49 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 22 May 2007 15:24:49 +0200 Subject: [SciPy-dev] requesting feedback on/editing of scikits wiki-page In-Reply-To: <46524F1B.6020004@ar.media.kyoto-u.ac.jp> References: <462EBAE6.1000906@ar.media.kyoto-u.ac.jp> <46510BAC.6080308@ar.media.kyoto-u.ac.jp> <465160DB.8040302@ar.media.kyoto-u.ac.jp> <465242EF.9040407@ar.media.kyoto-u.ac.jp> <46524667.3060704@gmail.com> <46524F1B.6020004@ar.media.kyoto-u.ac.jp> Message-ID: I tried the last tar file you gave, with the above correction, but setup.py test does not work on FC5 because of: pkg_resources.DistributionNotFound: numpy while running build_ext. I tried that because I was trying to create a scikit on my own, but the same error occurs if I use "install_requires='numpy'" What are the needed versions of the different libraries to create a scikit? Matthieu 2007/5/22, David Cournapeau : > Robert Kern wrote: > > Change > > DISTNAME = 'pyaudiolab' > > to > > DISTNAME = 'scikits.pyaudiolab' > That was it, thanks, > David

From a.schmolck at gmx.net Tue May 22 09:26:59 2007 From: a.schmolck at gmx.net (Alexander Schmolck) Date: Tue, 22 May 2007 14:26:59 +0100 Subject: [SciPy-dev] requesting feedback on/editing of scikits wiki-page In-Reply-To: <465242EF.9040407@ar.media.kyoto-u.ac.jp> (David Cournapeau's message of "Tue\, 22 May 2007 10\:10\:07 +0900") References: <462EBAE6.1000906@ar.media.kyoto-u.ac.jp> <462EEA26.6000509@gmail.com> <46510BAC.6080308@ar.media.kyoto-u.ac.jp> <465160DB.8040302@ar.media.kyoto-u.ac.jp> <465242EF.9040407@ar.media.kyoto-u.ac.jp> Message-ID: David Cournapeau writes: > I was just saying that if the only drawback of having a "fake" package > is that it is not updated, it should not be too difficult to have > something in place to test it automatically. Yes, but I'd personally attach far higher priority to fixing mlabwrap if a test revealed a setup.py problem than fixing the scikits example package. However, I do agree that such a package would be useful provided there is someone or a group of people who'll take care of it, so if you feel like writing it, and don't think maintenance will be a problem, by all means do -- and if you like we can also replace all the mlabwrap examples on the wiki with ``scikits.example`` (or whatever you want to call it). >> That's unfortunate -- I find numpy.distutils pretty impenetrable, and I also >> tend to believe that pretty much anything that's useful about it ought to >> eventually end up in setuptools, unless it's really specific to numpy or >> numerical python code -- IMO having so many general-purpose building >> (and testing) facilities in numpy is a bad thing and increases the barrier of >> entry for scikits writers and other contributors of numerical python code >> unnecessarily, because the learning curve gets steeper and the number of wrong >> or largely equivalent choices and combinations that won't work grows. > I agree with you here, but that's a consequence of the history of numpy, > I guess.
Indeed, and I'm sure that writing numpy.distutils was a good idea: distutils has been languishing for ages without an active maintainer (IIRC) and numpy/scipy clearly needed reasonable (and for some things even specialized) build support. I just think that with the arrival of setuptools (which is itself quite rich), it would be good to consolidate the two as much as possible and move to a situation where numpy.distutils really only deals with what can't and shouldn't be done by setuptools. > Now, I don't find setuptools that much clearer myself. Maybe it is just > because I have not used it extensively, but I always found setuptools (and > distutils for that matter) quite complicated for anything not trivial. Yes, setuptools alone is already enough to chew on. On the other hand setuptools really does offer extremely valuable functionality, first and foremost CPAN-like functionality (i.e. easy automatic package installation for users) but also many really useful conveniences (such as ``python setup.py develop``; ``test`` also seems like a good idea) and it is officially blessed for inclusion as distutils replacement -- so there's no way around dealing with it anyway and it will be with us for some time. So there seems more benefit in learning about setuptools than numpy.distutils, especially for functionality that could also be used for non-numpy projects (adding a dependency to numpy just so you can use numpy.distutils seems a bit excessive, right?), which is why I tried to mostly stick to setuptools on the wiki. > I had to spend 2 hours to convert my project to setuptools, and the > only benefit is namespace support. I can commiserate -- all the work I've done on mlabwrap of late falls into the no-real-coding category, too. However, are you sure that namespace support is the only benefit? Surely it's a benefit if your users can say 'easy_install pyaudiolab' and it will just be downloaded and installed, automatically also downloading numpy if necessary? > And it still does not work because, for some reason, files automatically > included by setuptools for the install target are not included when using the sdist > target, whereas there is no difference between those targets with > distutils... Hmm maybe a bug? I recommend asking on the distutils-list, I've found it quite helpful. > And I have not tested it yet on other platforms (I am always afraid of > windows :) ) Rightly so, as I can attest from windows-user reports from the mlabwrap-user list. > I don't know much about numpy.distutils, but if you take a quick look at > numpy/distutils, it is around 1.9 MB, of which: > - 1 MB of tests > - 350 kB for fortran (fcompiler) > - 300 kB for extending distutils (command) > - half of the rest seems pretty specific to numpy too (system_info > is, cpuinfo too, other I don't know). From what I hear setuptools makes good cross-platform support a whole lot easier, but I can't comment on it from personal experience. > - python packages which depend on C/Fortran code (other libraries): > difficult to handle with distutils, if not impossible. Right -- I wonder whether it might make sense to include support for that in setuptools, but it might be too specialized and better handled by numpy.distutils.
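As an aside, the library-and-header lookup David mentions in the next quote is the numpy.distutils.system_info machinery. A sketch of how a scikit's setup.py can use it, assuming the dependency is one of the libraries system_info already knows about (fftw here; something like libsndfile would require a custom system_info subclass), with all package and file names invented:

    from numpy.distutils.misc_util import Configuration
    from numpy.distutils.system_info import get_info

    def configuration(parent_package='', top_path=None):
        config = Configuration('example', parent_package, top_path)
        info = get_info('fftw')  # {} if not found, else libraries/include_dirs/...
        if info:
            config.add_extension('_wrapper', sources=['_wrapper.c'],
                                 extra_info=info)
        return config

    if __name__ == '__main__':
        from numpy.distutils.core import setup
        setup(configuration=configuration)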
> In my case, I use numpy.distutils to find a library and its header > files. As far as I know, neither distutils nor setuptools offer any > facility for that. Sounds like a good reason for using it then -- but wouldn't you agree that it would be even nicer if setuptools would also offer this functionality and you could use it for non-numpy-related projects? (not that I'd necessarily know if setuptools did already offer something like this) >> What happens? > I got the following error when importing the package (from scikits > import pyaudiolab) [snipped] I assume Robert Kern's tip solved the problem? cheers, 'as

From nwagner at iam.uni-stuttgart.de Tue May 22 09:27:23 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 22 May 2007 15:27:23 +0200 Subject: [SciPy-dev] lobpcg eigenvalue solver In-Reply-To: <4652E3D2.20109@ntc.zcu.cz> References: <4652C34D.8040509@ntc.zcu.cz> <4652D12E.5050206@iam.uni-stuttgart.de> <4652E3D2.20109@ntc.zcu.cz> Message-ID: <4652EFBB.1030005@iam.uni-stuttgart.de> Robert Cimrman wrote: > Nils Wagner wrote: > >> Traceback (most recent call last): >> >> File "test_lobpcg.py", line 17, in ? >> w = lobpcg.lobpcg(X,A,B) >> File >> "/usr/lib64/python2.4/site-packages/scipy/sandbox/lobpcg/lobpcg.py", >> line 148, in lobpcg >> residualTolerance = sqrt( 1e-15 ) * n >> NameError: global name 'sqrt' is not defined >> > > Thanks, it should be nm.sqrt... > > O.k. The next problem is:

Traceback (most recent call last):
  File "test_lobpcg.py", line 17, in <module>
    eigs,vecs = lobpcg.lobpcg(X,A,B)
  File "/usr/local/lib64/python2.5/site-packages/scipy/sandbox/lobpcg/lobpcg.py", line 287, in lobpcg
    gramYBY, blockVectorBY, blockVectorY )
UnboundLocalError: local variable 'gramYBY' referenced before assignment

Nils >> Cheers, >> Nils >> >> P.S. Is there a specific reason why you have used a pure python >> implementation ?

From a.schmolck at gmx.net Tue May 22 10:17:38 2007 From: a.schmolck at gmx.net (Alexander Schmolck) Date: Tue, 22 May 2007 15:17:38 +0100 Subject: [SciPy-dev] requesting feedback on/editing of scikits wiki-page In-Reply-To: <465250DC.2040604@ar.media.kyoto-u.ac.jp> (David Cournapeau's message of "Tue\, 22 May 2007 11\:09\:32 +0900") References: <462EBAE6.1000906@ar.media.kyoto-u.ac.jp> <462EEA26.6000509@gmail.com> <465250DC.2040604@ar.media.kyoto-u.ac.jp> Message-ID: David Cournapeau writes: > I found one problem with the approach you mention for versioning: when > building a source package from sdist, only the release version part is > appended to the source file, whereas with the "scipy approach", if your > version is "0.7dev", it is appended to the source package.
Well, I certainly did mean to have the 0.7dev bit appended to the source package, too, and I thought the setup.cfg entry should take care of that, but apparently it doesn't. > Also, having setuptools handling the versioning means it is more difficult > to get the exact version back from other programs (think shell scripts and > Makefiles, for example), whereas extracting a string from a python file is > trivial. > > I am not sure I understand the advantages of the approach you are > proposing? None, since it obviously doesn't work properly. In an ideal world something like this should be trivially handled by the VC system. In a slightly less ideal world, the standard build tool (setuptools) should take care of it -- sane version handling certainly doesn't seem to have anything to do with numpy, but if that's the easiest way to do it for scikits and you know how to, by all means let's do that and update the wiki-page accordingly. Doing it with numpy.distutils actually was my first attempt but it didn't seem all that simple and easy either, and since I thought it didn't really belong there anyway, I decided to ask on the distutils-list, where I eventually got the setuptool-specific info that's currently on the wiki: I think I've more than exhausted my time budget for this now; if need be I'll just hardcode the version numbers for mlabwrap or write a 3-line script that does it for me, but it would obviously be great if you or someone else could come up with a better way and document it on the wiki. If you'd like to use the numpy.distutils approach, in the older wiki-versions and in the mlabwrap svn there's still an attempt at a fairly generic version.py file that I thought could get used for all scikits: (NB. I think the file's location is wrong; version.py really ought to go under trunk/mlabwrap/scikits/mlabwrap) The corresponding setup.py, which tries to use numpy.distutils.Configuration just for the __svn_version__ file creation: Otherwise, it would presumably be best if someone asked on the distutils-list again for clarification. cheers, 'as

From cimrman3 at ntc.zcu.cz Tue May 22 10:43:08 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 22 May 2007 16:43:08 +0200 Subject: [SciPy-dev] lobpcg eigenvalue solver In-Reply-To: <4652EFBB.1030005@iam.uni-stuttgart.de> References: <4652C34D.8040509@ntc.zcu.cz> <4652D12E.5050206@iam.uni-stuttgart.de> <4652E3D2.20109@ntc.zcu.cz> <4652EFBB.1030005@iam.uni-stuttgart.de> Message-ID: <4653017C.5020106@ntc.zcu.cz> Nils Wagner wrote: > O.k. The next problem is: > Traceback (most recent call last): > File "test_lobpcg.py", line 17, in <module> > eigs,vecs = lobpcg.lobpcg(X,A,B) > File > "/usr/local/lib64/python2.5/site-packages/scipy/sandbox/lobpcg/lobpcg.py", > line 287, in lobpcg > gramYBY, blockVectorBY, blockVectorY ) > UnboundLocalError: local variable 'gramYBY' referenced before assignment Fixed. BTW you can send these bug reports directly to me, in order not to flood the list :) r.

From wbaxter at gmail.com Tue May 22 14:58:59 2007 From: wbaxter at gmail.com (Bill Baxter) Date: Wed, 23 May 2007 03:58:59 +0900 Subject: [SciPy-dev] Derivative-based nonlinear optimization with linear ineqality constraints In-Reply-To: References: <465288C9.2070707@ukr.net> Message-ID: On 5/22/07, Anne Archibald wrote: > On 22/05/07, Bill Baxter wrote: > > All very interesting, but I'm not sure what point you're trying to make. > > I'd be happy to try different solvers on my problem, and in fact I > > have tried a couple.
The BFGS solvers seem to work pretty well if I start > > from a feasible point, but it makes me a little nervous using them > > since the values of my function are bogus outside the feasible region. > > I'm just setting f(x) to HUGE_NUM outside the feasible region. > > fmin_tnc did *not* work. It seems to take a step out of bounds and get > > lost. > What happened when you tried using a derivative-less solver (which I > guess is just fmin_cobyla)? Too slow? I've never actually gone to the > trouble of implementing analytic derivatives even when I could have, > but then I've never had particularly demanding minimizations to do. I should do more testing, but when I tried BFGS with finite differences it used about 10x as many function evaluations. The difference in computation between just the function and the function+gradient is not that big, so using finite differences was significantly slower. I'll give cobyla and the standard fmin simplex a try just to see, though. --bb

From david at ar.media.kyoto-u.ac.jp Tue May 22 19:34:15 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 23 May 2007 08:34:15 +0900 Subject: [SciPy-dev] requesting feedback on/editing of scikits wiki-page In-Reply-To: References: <462EBAE6.1000906@ar.media.kyoto-u.ac.jp> <462EEA26.6000509@gmail.com> <46510BAC.6080308@ar.media.kyoto-u.ac.jp> <465160DB.8040302@ar.media.kyoto-u.ac.jp> <465242EF.9040407@ar.media.kyoto-u.ac.jp> Message-ID: <46537DF7.5020008@ar.media.kyoto-u.ac.jp> Alexander Schmolck wrote: > > So there seems more benefit in learning about setuptools than numpy.distutils, > especially for functionality that could also be used for non-numpy projects > (adding a dependency to numpy just so you can use numpy.distutils seems a bit > excessive, right?), which is why I tried to mostly stick to setuptools on the > wiki. If you do not use numpy, then yes, it seems rather stupid to depend on numpy.distutils. > > However, are you sure that namespace support is the only benefit? Surely it's > a benefit if your users can say 'easy_install pyaudiolab' and it will just be > downloaded and installed, automatically also downloading numpy if necessary? I haven't tried it, but I am pretty sure it won't work because it depends on an external library anyway. > > Hmm maybe a bug? I recommend asking on the distutils-list, I've found it > quite helpful. It is a bug, but it seems it won't be resolved soon. So basically, I am doing everything in the configuration function provided by numpy.distutils for now. Anyway, I will add some info on the wiki, cheers, David

From david at ar.media.kyoto-u.ac.jp Tue May 22 19:51:55 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 23 May 2007 08:51:55 +0900 Subject: [SciPy-dev] requesting feedback on/editing of scikits wiki-page In-Reply-To: References: <462EBAE6.1000906@ar.media.kyoto-u.ac.jp> <462EEA26.6000509@gmail.com> <465250DC.2040604@ar.media.kyoto-u.ac.jp> Message-ID: <4653821B.3010806@ar.media.kyoto-u.ac.jp> Alexander Schmolck wrote: > > In an ideal world something like this should be trivially handled by the VC > system. In a slightly less ideal world, the standard build tool (setuptools) > should take care of it -- sane version handling certainly doesn't seem to have > anything to do with numpy, but if that's the easiest way to do it for scikits > and you know how to, by all means let's do that and update the wiki-page > accordingly.
> > Doing it with numpy.distutils actually was my first attempt but it didn't seem > all that simple and easy either, and since I thought it didn't really belong > there anyway, I decided to ask on the distutils-list, where I eventually got > the setuptools-specific info that's currently on the wiki: > > > > I think I've more than exhausted my time budget for this now; if needs be I'll > just hardcode the version numbers for mlabwrap or write a 3-line script that > does it for me, but it would obviously be great if you or someone else could come > up with a better way and document it on the wiki. I don't have any solution for something automated, but I found the "scipy way" to work well enough for me. The version is encoded in the info.py, the setup.py retrieves the version from there, and a shell script can extract it easily. It is only hardcoded in one place. As I use an SCM other than svn, I cannot use the svn versioning provided by numpy.distutils. David From david at ar.media.kyoto-u.ac.jp Fri May 25 00:23:58 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 25 May 2007 13:23:58 +0900 Subject: [SciPy-dev] Adding pyaudiolab and pysamplerate to scikits ? Message-ID: <465664DE.9010902@ar.media.kyoto-u.ac.jp> Hi, I would like to know if I could include two projects of mine in the scikits svn ? I finished the conversion to the scikits layout and I believe there is no regression anymore compared to the standalone version. The packages are: - pyaudiolab for import/export between audio files and numpy arrays - pysamplerate, a wrapper around libsamplerate for high quality resampling of audio data. They are both under an OSI-approved license (LGPL). cheers, David From millman at berkeley.edu Fri May 25 00:39:34 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Thu, 24 May 2007 21:39:34 -0700 Subject: [SciPy-dev] Adding pyaudiolab and pysamplerate to scikits ? In-Reply-To: <465664DE.9010902@ar.media.kyoto-u.ac.jp> References: <465664DE.9010902@ar.media.kyoto-u.ac.jp> Message-ID: Hey David, I would be happy to see your projects added to scikits. Thanks for taking the time to get them into the scikits layout. Thanks, Jarrod On 5/24/07, David Cournapeau wrote: > Hi, > > I would like to know if I could include two projects of mine in the > scikits svn ? I finished the conversion to the scikits layout and I > believe there is no regression anymore compared to the standalone > version. The packages are: > - pyaudiolab for import/export between audio files and numpy arrays > - pysamplerate, a wrapper around libsamplerate for high quality > resampling of audio data. > They are both under an OSI-approved license (LGPL). > > cheers, > > David > -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From david at ar.media.kyoto-u.ac.jp Fri May 25 00:51:40 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 25 May 2007 13:51:40 +0900 Subject: [SciPy-dev] Adding pyaudiolab and pysamplerate to scikits ? In-Reply-To: References: <465664DE.9010902@ar.media.kyoto-u.ac.jp> Message-ID: <46566B5C.9000706@ar.media.kyoto-u.ac.jp> Jarrod Millman wrote: > Hey David, > > I would be happy to see your projects added to scikits. Thanks for > taking the time to get them into the scikits layout. Hi Jarrod, Can I request writing access to svn ? 
Committing the code may take a bit of time, because I have a whole history to commit (I converted the history under bzr to svn). David From millman at berkeley.edu Fri May 25 01:22:07 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Thu, 24 May 2007 22:22:07 -0700 Subject: [SciPy-dev] Adding pyaudiolab and pysamplerate to scikits ? In-Reply-To: <46566B5C.9000706@ar.media.kyoto-u.ac.jp> References: <465664DE.9010902@ar.media.kyoto-u.ac.jp> <46566B5C.9000706@ar.media.kyoto-u.ac.jp> Message-ID: On 5/24/07, David Cournapeau wrote: > Can I request writing access to svn ? Committing the code may take a > bit of time, because I have a whole history to commit (I converted the > history under bzr to svn). You should be able to use your cdavid account on the scikits svn server now. Let me know if you have any problems. Jarrod From brian.lee.hawthorne at gmail.com Fri May 25 01:35:23 2007 From: brian.lee.hawthorne at gmail.com (Brian Hawthorne) Date: Thu, 24 May 2007 22:35:23 -0700 Subject: [SciPy-dev] Adding pyaudiolab and pysamplerate to scikits ? In-Reply-To: <46566B5C.9000706@ar.media.kyoto-u.ac.jp> References: <465664DE.9010902@ar.media.kyoto-u.ac.jp> <46566B5C.9000706@ar.media.kyoto-u.ac.jp> Message-ID: <796269930705242235xe187689m80bd9509e6a2425c@mail.gmail.com> Hi, Glad to see more code moving into scikits! I would make one recommendation regarding naming: it might be nice to remove the "py" from the names, so that they become scikits.audiolab and scikits.samplerate. (we'll be renaming pymat also :) -Brian On 5/24/07, David Cournapeau wrote: > > Jarrod Millman wrote: > > Hey David, > > > > I would be happy to see your projects added to scikits. Thanks for > > taking the time to get them into the scikits layout. > Hi Jarrod, > > Can I request writing access to svn ? Committing the code may take a > bit of time, because I have a whole history to commit (I converted the > history under bzr to svn). > > David > From millman at berkeley.edu Fri May 25 01:41:15 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Thu, 24 May 2007 22:41:15 -0700 Subject: [SciPy-dev] Adding pyaudiolab and pysamplerate to scikits ? In-Reply-To: <796269930705242235xe187689m80bd9509e6a2425c@mail.gmail.com> References: <465664DE.9010902@ar.media.kyoto-u.ac.jp> <46566B5C.9000706@ar.media.kyoto-u.ac.jp> <796269930705242235xe187689m80bd9509e6a2425c@mail.gmail.com> Message-ID: On 5/24/07, Brian Hawthorne wrote: > I would make one recommendation regarding naming: it might be nice to > remove the "py" from the names, so that they become scikits.audiolab and > scikits.samplerate . (we'll be renaming pymat also :) +1 Given that all the scikits projects will be written in Python, I don't think that the 'py' is very useful any more and the extra characters just make the name that much longer. Thanks, Jarrod From david at ar.media.kyoto-u.ac.jp Fri May 25 01:54:24 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 25 May 2007 14:54:24 +0900 Subject: [SciPy-dev] Adding pyaudiolab and pysamplerate to scikits ? 
In-Reply-To: References: <465664DE.9010902@ar.media.kyoto-u.ac.jp> <46566B5C.9000706@ar.media.kyoto-u.ac.jp> <796269930705242235xe187689m80bd9509e6a2425c@mail.gmail.com> Message-ID: <46567A10.2090900@ar.media.kyoto-u.ac.jp> Jarrod Millman wrote: > On 5/24/07, Brian Hawthorne wrote: >> I would make one recommendation regarding naming: it might be nice to >> remove the "py" from the names, so that they become scikits.audiolab and >> scikits.samplerate . (we'll be renaming pymat also :) > > +1 Given that all the scikits projects will be written in Python, I > don't think that the 'py' is very useful any more and the extra > characters just make the name that much longer. I agree, now that it will be in scikits, the py prefix does not make any sense anymore. david From david at ar.media.kyoto-u.ac.jp Fri May 25 03:21:51 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 25 May 2007 16:21:51 +0900 Subject: [SciPy-dev] Scikits, installation of setuptools required ? Message-ID: <46568E8F.1030006@ar.media.kyoto-u.ac.jp> Hi, I was trying to test my package converted for scikits on windows. I never use windows, so I install the bare minimum to test my python packages, which does not contain setuptools. One of the points of setuptools was to be able to bootstrap itself anyway, from what I understood. This works ok, up to the point where, when I want to run it, I have a failure because pkg_resources is not found (in the scikits/__init__.py). Does that mean that we cannot use the bootstrapping capability of setuptools, and we require setuptools to be installed ? Or did I do something wrong ? David From robert.kern at gmail.com Fri May 25 13:06:29 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 25 May 2007 12:06:29 -0500 Subject: [SciPy-dev] Scikits, installation of setuptools required ? In-Reply-To: <46568E8F.1030006@ar.media.kyoto-u.ac.jp> References: <46568E8F.1030006@ar.media.kyoto-u.ac.jp> Message-ID: <46571795.9070400@gmail.com> David Cournapeau wrote: > Hi, > > I was trying to test my package converted for scikits on windows. I > never use windows, so I install the bare minimum to test my python > packages, which does not contain setuptools. One of the points of setuptools > was to be able to bootstrap itself anyway, from what I understood. > This works ok, up to the point where, when I want to run it, I have a > failure because pkg_resources is not found (in the scikits/__init__.py). > Does that mean that we cannot use the bootstrapping capability of > setuptools, and we require setuptools to be installed ? Or did I do > something wrong ? I do not recommend using the ez_setup bootstrap. It is deprecated. Yes, setuptools is required to be installed. See here for instructions: http://www.python.org/pypi/setuptools/0.6c5 You should be importing setuptools in your setup.py script, too, before you import numpy.distutils. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From wnbell at gmail.com Sun May 27 00:41:52 2007 From: wnbell at gmail.com (Nathan Bell) Date: Sat, 26 May 2007 21:41:52 -0700 Subject: [SciPy-dev] Timetable for 0.5.3 release? Message-ID: Is there a timetable for the next SciPy release? The last release was quite a while ago, so I think we're due :) I realize that releases are somewhat unimportant for much of the SciPy audience, since they'll go straight for the SVN version anyway. 
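(To make Robert's last point in the setuptools thread concrete: a scikits-style setup.py would import setuptools first so that its machinery is active before numpy.distutils takes over. A minimal sketch, with a made-up package name:)

import setuptools  # imported for its side effects; must come before numpy.distutils
from numpy.distutils.misc_util import Configuration
from numpy.distutils.core import setup

def configuration(parent_package='', top_path=None):
    # describe the package tree; a real scikit adds subpackages, data, etc.
    return Configuration('example', parent_package, top_path)

if __name__ == '__main__':
    setup(configuration=configuration)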
However, it's nice to tell casual users that they can apt-get XYZ and try your software. Personally, my code depends critically on the improvements to scipy.sparse made shortly after the last release, so I'm anxious to push this along. FWIW there are currently 15 items on the roadmap for 0.5.3: http://projects.scipy.org/scipy/scipy/roadmap -- Nathan Bell wnbell at gmail.com From david at ar.media.kyoto-u.ac.jp Sun May 27 04:47:14 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sun, 27 May 2007 17:47:14 +0900 Subject: [SciPy-dev] Scikits, installation of setuptools required ? In-Reply-To: <46571795.9070400@gmail.com> References: <46568E8F.1030006@ar.media.kyoto-u.ac.jp> <46571795.9070400@gmail.com> Message-ID: <46594592.30904@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > David Cournapeau wrote: >> Hi, >> >> I was trying to test my package converted for scikits on windows. I >> never use windows, so I install the bare minimum to test my python >> packages, which does not contain setuptools. One of the points of setuptools >> was to be able to bootstrap itself anyway, from what I understood. >> This works ok, up to the point where, when I want to run it, I have a >> failure because pkg_resources is not found (in the scikits/__init__.py). >> Does that mean that we cannot use the bootstrapping capability of >> setuptools, and we require setuptools to be installed ? Or did I do >> something wrong ? > > I do not recommend using the ez_setup bootstrap. It is deprecated. Ok, I didn't know that. I used it because I didn't have setuptools on windows (which I only use for testing my packages), but as I have to install setuptools, there is no reason to keep the bootstrapping anyway. David From ondrej at certik.cz Sun May 27 10:51:08 2007 From: ondrej at certik.cz (Ondrej Certik) Date: Sun, 27 May 2007 16:51:08 +0200 Subject: [SciPy-dev] Timetable for 0.5.3 release? In-Reply-To: References: Message-ID: <85b5c3130705270751s3f01efccx294e0bfd5768e21a@mail.gmail.com> > I realize that releases are somewhat unimportant for much of the SciPy > audience, since they'll go straight for the SVN version anyway. > However, it's nice to tell casual users that they can apt-get XYZ and > try your software. Personally, my code depends critically on the > improvements to scipy.sparse made shortly after the last release, so > I'm anxious to push this along. I think the releases are very important, at least I prefer to install scipy using apt-get and use it as a nice library. The svn should rather serve for implementing and testing new features imho. Ondrej From charlesr.harris at gmail.com Sun May 27 13:28:28 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 27 May 2007 11:28:28 -0600 Subject: [SciPy-dev] Old numarray convolve2d fft=1 option In-Reply-To: <46389B1F.50509@stsci.edu> References: <46388015.60604@sges.auckland.ac.nz> <46389B1F.50509@stsci.edu> Message-ID: On 5/2/07, Christopher Hanley wrote: > > David O'Sullivan wrote: > > Hi, > > I'm relatively new to this, and certainly no expert on the underlying > > math of all this stuff, but I'm trying to do relatively large kernel > > convolutions on large 2d matrices. The kernels might be 160x160 while > > the matrices themselves might be 2500x2500. They're also likely to be > > sparse matrices, but I'll worry about that later - for now I'm just > > scoping things out. > > > > So... anyway... 
with those size convolutions in mind, I'm intrigued by > > the fft=1 option that was in the numarray.convolve2d function (or so it > > says at > > > > http://stsdas.stsci.edu/numarray/numarray-1.5.html/node65.html > > > > This option doesn't seem to be in the current scipy.signal.convolve2d > > function. Presumably it would speed 2d convolutions up a lot? > > > > Is there a way around this? > > > > Or a plan to put the fft implementation into scipy.signal.convolve2d? > > > > Thanks > > > > David > > > > > > > David, > > The numarray convolve package is currently living in > scipy.stsci.convolve. It should work in the same way the numarray > version did. I believe scipy.stsci.convolve is buggy. Convolve calls _correlate2d_fft, which in turn calls fft2 for the forward transforms, multiplies, then calls irfft2 for the inverse transform. However, fft2 does *complex* transforms, so you need to call ifft2. This is easily checked. The pair rfft2, irfft2 can be used for the real case. Chuck From cookedm at physics.mcmaster.ca Sun May 27 19:38:49 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Sun, 27 May 2007 19:38:49 -0400 Subject: [SciPy-dev] Timetable for 0.5.3 release? In-Reply-To: References: Message-ID: <20070527233849.GA5182@arbutus.physics.mcmaster.ca> On Sat, May 26, 2007 at 09:41:52PM -0700, Nathan Bell wrote: > Is there a timetable for the next SciPy release? The last release was > quite a while ago, so I think we're due :) > > I realize that releases are somewhat unimportant for much of the SciPy > audience, since they'll go straight for the SVN version anyway. > However, it's nice to tell casual users that they can apt-get XYZ and > try your software. Personally, my code depends critically on the > improvements to scipy.sparse made shortly after the last release, so > I'm anxious to push this along. 
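(Chuck's point about the transform pairing is easy to check with plain numpy: for real inputs, the forward and inverse transforms must be paired as fft2/ifft2 or rfft2/irfft2, never mixed. A small self-contained sketch, not the stsci code itself:)

import numpy as np

a = np.random.rand(8, 8)    # real "image"
k = np.random.rand(3, 3)    # real kernel
s = (a.shape[0] + k.shape[0] - 1, a.shape[1] + k.shape[1] - 1)

# complex pair: fft2 forward, ifft2 inverse
c1 = np.fft.ifft2(np.fft.fft2(a, s) * np.fft.fft2(k, s)).real
# real pair: rfft2 forward, irfft2 inverse (s gives the real output shape)
c2 = np.fft.irfft2(np.fft.rfft2(a, s) * np.fft.rfft2(k, s), s)

print np.allclose(c1, c2)   # both give the full linear convolution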
>> >> FWIW there are currently 15 items on the roadmap for 0.5.3: >> http://projects.scipy.org/scipy/scipy/roadmap > > Perhaps we should move to a timetable-based release schedule for scipy? > Every three months, release the current svn version. Also making a > release after a numpy release is a good idea, so that the current > versions work with each other. As the scipy community seems to be growing, and as Travis wanted to have a release manager for scipy, what about adopting a scheme similar to bzr, which seems to work fine for them: having a different release manager for each release ? Not that this is against having a timetable, cheers, David From david at ar.media.kyoto-u.ac.jp Sun May 27 20:32:37 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 28 May 2007 09:32:37 +0900 Subject: [SciPy-dev] trac account different on scikits and scipy ? Message-ID: <465A2325.1020102@ar.media.kyoto-u.ac.jp> I am a bit confused by the trac login system: are the accounts different for the scikits and scipy tracs ? When I try to login for the scikits' trac by https://projects.scipy.org/scipy/scikits/, I have an error, saying my password is wrong, whereas I can connect to scipy's trac without any problems. I thought maybe the accounts were different on the different projects, so I tried to register on scikits' trac, but then it tells me my login name already exists, suggesting the contrary... cheers, David From david at ar.media.kyoto-u.ac.jp Sun May 27 20:35:51 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 28 May 2007 09:35:51 +0900 Subject: [SciPy-dev] trac account different on scikits and scipy ? In-Reply-To: <465A2325.1020102@ar.media.kyoto-u.ac.jp> References: <465A2325.1020102@ar.media.kyoto-u.ac.jp> Message-ID: <465A23E7.8020108@ar.media.kyoto-u.ac.jp> David Cournapeau wrote: > I am a bit confused by the trac login system: are the accounts > different for the scikits and scipy tracs ? > > When I try to login for the scikits' trac by > https://projects.scipy.org/scipy/scikits/, I have an error, saying my > password is wrong, whereas I can connect to scipy's trac without any > problems. I thought maybe the accounts were different on the different > projects, so I tried to register on scikits' trac, but then it tells me > my login name already exists, suggesting the contrary... To add another question to this: I would like to use the milestone system of trac for my project pymachine, but the code will be both in scikits and scipy. Can I create the milestones in scikits only, and reference tickets from scipy ? Or should I create the milestones in the projects where the code lies ? David From david at ar.media.kyoto-u.ac.jp Mon May 28 00:36:45 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 28 May 2007 13:36:45 +0900 Subject: [SciPy-dev] Presentation of pymachine, a python package for machine learning In-Reply-To: <46480B13.7070308@ieee.org> References: <4647AC54.9030907@ar.media.kyoto-u.ac.jp> <46480B13.7070308@ieee.org> Message-ID: <465A5C5D.30805@ar.media.kyoto-u.ac.jp> Travis Oliphant wrote: > David Cournapeau wrote: >> Now, before starting work on it, I would like to get some feedback >> about what other people think is necessary with respect to those goals: >> - What are the requirements for a toolbox to go from the sandbox into >> the scipy namespace ? >> > For me, it is willingness of somebody to take responsibility for it. 
> Of course we would like some kind of standard in place, which is why I > brought up the notion of a peer-reviewed "journal" idea. At this > point, however, working code for capability that scipy doesn't already > have is pretty convincing for me. May I request to include pyem in the main scipy, then ? The basic functionalities are there (standard EM for finite mixtures) and tested, the ones which I am not confident with are not included by default. The code has not changed much recently, but I am using it regularly, and intend to improve it, using the time that SoC is giving to me. Including pyem in the mainline would make it easier to track things in Trac (this is the main reason I am requesting this now), David From strawman at astraw.com Mon May 28 04:47:11 2007 From: strawman at astraw.com (Andrew Straw) Date: Mon, 28 May 2007 01:47:11 -0700 Subject: [SciPy-dev] [patch] RQ decomposition In-Reply-To: <465A9674.4050105@astraw.com> References: <465A9674.4050105@astraw.com> Message-ID: <465A970F.4090508@astraw.com> Grr, why can I never remember to actually include the patch? :) -------------- next part -------------- A non-text attachment was scrubbed... Name: rq.patch Type: text/x-patch Size: 7627 bytes Desc: not available URL: From strawman at astraw.com Mon May 28 04:31:49 2007 From: strawman at astraw.com (Andrew Straw) Date: Mon, 28 May 2007 01:31:49 -0700 Subject: [SciPy-dev] scipy Trac error Message-ID: <465A9375.9020901@astraw.com> Hi scipy web admins, I attempted to log in to http://projects.scipy.org/scipy/scipy/login but I get this error: Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/trac/web/main.py", line 387, in dispatch_request dispatcher.dispatch(req) File "/usr/lib/python2.4/site-packages/trac/web/main.py", line 238, in dispatch resp = chosen_handler.process_request(req) File "/home/scipy/trac/scipy/plugins/TracAccountManager-0.1.2-py2.4.egg/acct_mgr/web_ui.py", line 179, in process_request return auth.LoginModule.process_request(self, req) File "/usr/lib/python2.4/site-packages/trac/web/auth.py", line 100, in process_request self._do_login(req) File "/home/scipy/trac/scipy/plugins/TracAccountManager-0.1.2-py2.4.egg/acct_mgr/web_ui.py", line 184, in _do_login return auth.LoginModule._do_login(self, req) File "/usr/lib/python2.4/site-packages/trac/web/auth.py", line 139, in _do_login "VALUES (%s, %s, %s, %s)", (cookie, remote_user, File "/usr/lib/python2.4/site-packages/trac/db/util.py", line 50, in execute return self.cursor.execute(sql_escape_percent(sql), args) File "/usr/src/build/539311-i386/install//usr/lib/python2.4/site-packages/sqlite/main.py", line 255, in execute DatabaseError: database is full There must be a lot of users if the database is full! :) Note that I'm not 100% sure if I have an account with the Trac system. I'm pretty sure I do with numpy, but I'm not sure about scipy. But this doesn't look like the typical "wrong password" error message. -Andrew From strawman at astraw.com Mon May 28 05:02:17 2007 From: strawman at astraw.com (Andrew Straw) Date: Mon, 28 May 2007 02:02:17 -0700 Subject: [SciPy-dev] [patch] RQ decomposition In-Reply-To: <465A970F.4090508@astraw.com> References: <465A9674.4050105@astraw.com> <465A970F.4090508@astraw.com> Message-ID: <465A9A99.5020608@astraw.com> Andrew Straw wrote: > Grr, why can I never remember to actually include the patch? :) (Parenthetical remark 1: I would submit this straight to Trac, but as I can't log in, I'm sending it to the list in case I don't get back to this for a while.) 
(Parenthetical remark 2: grr, it looks like my original email was lost. At least, I didn't get it. So I'm re-sending this one.) (Parenthetical remark 3: I sure like parenthetical remarks.) I attach a patch that implements RQ decomposition (similar to the QR decomposition). This is basically a touch-up of Harry Kalogirou's patch of 2 years ago. His original patch can be found at http://projects.scipy.org/pipermail/scipy-dev/2005-July/003075.html My version converts his code to numpy, adds checks to exclude non-square and complex matrices (I couldn't get them to work in the time I tried, this is listed as TODO in the source code comments), and performs some tests on the correctness of the answers. If a committer could commit this or give me feedback, that would be great. Cheers! Andrew From strawman at astraw.com Mon May 28 04:44:36 2007 From: strawman at astraw.com (Andrew Straw) Date: Mon, 28 May 2007 01:44:36 -0700 Subject: [SciPy-dev] [patch] RQ decomposition Message-ID: <465A9674.4050105@astraw.com> (I would submit this straight to Trac, but as I can't log in, I'm sending it to the list in case I don't get back to this for a while.) I attach a patch that implements RQ decomposition (similar to the QR decomposition). This is basically a touch-up of Harry Kalogirou's patch of 2 years ago. His original patch can be found at http://projects.scipy.org/pipermail/scipy-dev/2005-July/003075.html My version converts his code to numpy, adds checks to exclude non-square and complex matrices (I couldn't get them to work in the time I tried, this is listed as TODO in the source code comments), and performs some tests on the correctness of the answers. If a committer could commit this or give me feedback, that would be great. Cheers! Andrew From stefan at sun.ac.za Mon May 28 07:25:46 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Mon, 28 May 2007 13:25:46 +0200 Subject: [SciPy-dev] [patch] RQ decomposition In-Reply-To: <465A9A99.5020608@astraw.com> References: <465A9674.4050105@astraw.com> <465A970F.4090508@astraw.com> <465A9A99.5020608@astraw.com> Message-ID: <20070528112546.GA15276@mentat.za.net> On Mon, May 28, 2007 at 02:02:17AM -0700, Andrew Straw wrote: > I attach a patch that implements RQ decomposition (similar to the QR > decomposition). This is basically a touch-up of Harry Kalogirou's > patch Thanks, Andrew. This is useful (I wish more people would write unit tests for their patches). Applied as r3054. Cheers Stéfan From robert.kern at gmail.com Mon May 28 15:09:01 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 28 May 2007 14:09:01 -0500 Subject: [SciPy-dev] scipy Trac error In-Reply-To: <465A9375.9020901@astraw.com> References: <465A9375.9020901@astraw.com> Message-ID: <465B28CD.2080601@gmail.com> Andrew Straw wrote: > Hi scipy web admins, > > I attempted to log in to http://projects.scipy.org/scipy/scipy/login but I get this error: I can log in. Are you still having problems? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From strawman at astraw.com Mon May 28 15:54:41 2007 From: strawman at astraw.com (Andrew Straw) Date: Mon, 28 May 2007 12:54:41 -0700 Subject: [SciPy-dev] scipy Trac error In-Reply-To: <465B28CD.2080601@gmail.com> References: <465A9375.9020901@astraw.com> <465B28CD.2080601@gmail.com> Message-ID: <465B3381.9070407@astraw.com> Robert Kern wrote: > Andrew Straw wrote: > >> Hi scipy web admins, >> >> I attempted to log in to http://projects.scipy.org/scipy/scipy/login but I get this error: >> > > I can log in. Are you still having problems? > > Nope. It seems to work now. -Andrew From oliphant.travis at ieee.org Mon May 28 16:12:09 2007 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 28 May 2007 14:12:09 -0600 Subject: [SciPy-dev] Timetable for 0.5.3 release? In-Reply-To: <465A1A1A.2050605@ar.media.kyoto-u.ac.jp> References: <20070527233849.GA5182@arbutus.physics.mcmaster.ca> <465A1A1A.2050605@ar.media.kyoto-u.ac.jp> Message-ID: <465B3799.6070507@ieee.org> David Cournapeau wrote: >> Perhaps we should move to a timetable-based release schedule for scipy? >> Every three months, release the current svn version. Also making a >> release after a numpy release is a good idea, so that the current >> versions work with each other. >> > As the scipy community seems to be growing, and as Travis wanted to have > a release manager for scipy, what about adopting a scheme similar to > bzr, which seems to work fine for them: having a different release > manager for each release ? Not that this is against having a timetable, > Hear, hear. The releases are slow in coming only because it seems to be entirely relying on my finding time for them. I would like to see more code move from the sandbox into the scipy namespace. I am currently working on the interpolation module to enhance the number of ways in which you can do interpolation using B-splines (the basic functionality is in fitpack, but sometimes I can't make sense of what it is doing with the knot points --- there is a lot less flexibility than there could be). I would also like to see the netcdf library in scipy.io be able to write netcdf files. Currently it can only read them. I was hoping to be able to do this before the release, but I've run out of time. -Travis From wbaxter at gmail.com Mon May 28 17:30:43 2007 From: wbaxter at gmail.com (Bill Baxter) Date: Tue, 29 May 2007 06:30:43 +0900 Subject: [SciPy-dev] Timetable for 0.5.3 release? In-Reply-To: <465B3799.6070507@ieee.org> References: <20070527233849.GA5182@arbutus.physics.mcmaster.ca> <465A1A1A.2050605@ar.media.kyoto-u.ac.jp> <465B3799.6070507@ieee.org> Message-ID: On 5/29/07, Travis Oliphant wrote: > David Cournapeau wrote: > >> Perhaps we should move to a timetable-based release schedule for scipy? > >> Every three months, release the current svn version. Also making a > >> release after a numpy release is a good idea, so that the current > >> versions work with each other. > >> > > As the scipy community seems to be growing, and as Travis wanted to have > > a release manager for scipy, what about adopting a scheme similar to > > bzr, which seems to work fine for them: having a different release > > manager for each release ? Not that this is against having a timetable, > > > > Hear, hear. The releases are slow in coming only because it seems to be > entirely relying on my finding time for them. > > I would like to see more code move from the sandbox into the scipy > namespace. 
I am currently working on the interpolation module to > enhance the number of ways in which you can do interpolation using > B-splines (the basic functionality is in fitpack, but sometimes I can't > make sense of what it is doing with the knot points --- there is a lot > less flexibility than there could be). I would like to hear comments on a possible change to a 3-stage deployment of sandbox modules: Stage I - "pre-Alpha" like current sandbox - module not distributed with scipy builds. Have to build from source if you really want to try it. At this stage it's probably just one or two folks tinkering around. Module may or may not work. Stage II - "Alpha" - publicly release the prebuilt module, but still in sandbox namespace. At this stage the module seems to work, for the authors at least. Stage III - "Beta" - move module out of sandbox namespace. Module API has settled down and most likely there won't be any more significant (incompatible) changes to it. The middle stage is the new thing. Make a sandbox module visible for a while first. Then move it out after people have had a release or two to hammer on it. --bb From david at ar.media.kyoto-u.ac.jp Mon May 28 20:45:30 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 29 May 2007 09:45:30 +0900 Subject: [SciPy-dev] Timetable for 0.5.3 release? In-Reply-To: <465B3799.6070507@ieee.org> References: <20070527233849.GA5182@arbutus.physics.mcmaster.ca> <465A1A1A.2050605@ar.media.kyoto-u.ac.jp> <465B3799.6070507@ieee.org> Message-ID: <465B77AA.3070904@ar.media.kyoto-u.ac.jp> Travis Oliphant wrote: > David Cournapeau wrote: > >> As the scipy community seems to be growing, and as Travis wanted to have >> a release manager for scipy, what about adopting a scheme similar to >> bzr, which seems to work fine for them: having a different release >> manager for each release ? Not that this is against having a timetable, >> > > Hear, hear. The releases are slow in coming only because it seems to be >> entirely relying on my finding time for them. 
>> > I was certainly not implying that you are not doing enough, quite the > contrary :) As you already spend so much time on numpy and scipy, I > thought the idea of a different release manager for every release could > put more burden on others, and less on you and other main maintainers ? > Don't worry, I didn't get any negative implication from what you said. On the contrary, I very much appreciate the suggestion. I agree wholeheartedly. We welcome all assistance. Best regards, -Travis From tom.grydeland at gmail.com Wed May 30 08:48:16 2007 From: tom.grydeland at gmail.com (Tom Grydeland) Date: Wed, 30 May 2007 14:48:16 +0200 Subject: [SciPy-dev] Fwd: Nearest neighbour interpolation in scipy (ticket #305) In-Reply-To: References: Message-ID: Hello all, This is in relation to ticket #305 on trac, http://projects.scipy.org/scipy/scipy/ticket/305 I made a "nearest neighbour" interpolation a while back and submitted it on trac, but I suppose that's not the best way to have it noticed. When I got a new version of scipy with a recent ubuntu upgrade, I found that interpolate.py had morphed enough that my contribution could no longer work, so I made a new one, which was also posted under the same ticket on trac. Since then, I've looked at the svn repository and found even more substantial changes. I downloaded rev. 3049 of interpolate.py and I've tried making my implementation of nearest neighbour interpolation in a form which I '''think''' should work against this revision. Unfortunately, I don't have a way to test this. The script I have used for testing is below. If there is a framework to which I can add an automatically run test case, please point me in the right direction. Cheers, //togr #!/usr/bin/python import scipy.interpolate as interpolate primes = [ 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47 ] nearest = interpolate.interp1d(primes, primes, 'nearest') inear = nearest(range(50)) for i in range(len(inear)): print "%3d: %3d" % (i, inear[i]) -- Tom Grydeland -------------- next part -------------- A non-text attachment was scrubbed... Name: rev3049-patch Type: application/octet-stream Size: 3345 bytes Desc: not available URL: From bsouthey at gmail.com Wed May 30 09:15:59 2007 From: bsouthey at gmail.com (Bruce Southey) Date: Wed, 30 May 2007 08:15:59 -0500 Subject: [SciPy-dev] Machine learning datasets (was Presentation of pymachine, a python package for machine learning) Message-ID: Hi, You might find the UCI Machine Learning Repository a useful resource for data: http://www.ics.uci.edu/~mlearn/MLRepository.html Standard sources are: Statlib: http://lib.stat.cmu.edu/ Netlib: http://www.netlib.org/ Even those included with R may be used because some are in the public domain. Regards Bruce On 5/14/07, Peter Skomoroch wrote: > This google search turns up more sources: > > http://www.google.com/search?q=data+%22in+the+public+domain%22+%22freely+by+the+public%22&ie=utf-8&oe=utf-8&rls=org.mozilla:en-US:official&client=firefox-a > > > On 5/14/07, Peter Skomoroch wrote: > > With some digging, we should be able to find text, images, scientific > data, census info, and audio which is in the public domain. I see this a > lot: 
> > > > I'm still learning the ins and outs of licensing, but my understanding is > that data/information released as "public domain" can be bundled with > anything, i.e. BSD licenses. Does anyone have any insight into this? > > > > Here are a few starter sites which mention that the data they provide is > in the public domain: > > > > Geophysical data: > > > > http://www.ngdc.noaa.gov/ngdcinfo/onlineaccess.html > > > > Nasa images and measurements: > > > > > http://adc.gsfc.nasa.gov/adc/questions_feedback.html#policies1%22 > > > > Weather, forecasts: > > > > http://www.weather.gov/disclaimer.php > > > > Genomic data: > > > > > http://www.drugresearcher.com/news/ng.asp?id=10960-snp-database-goes > > > > > > -Pete > > > > > > On 5/14/07, David Cournapeau < david at ar.media.kyoto-u.ac.jp> wrote: > > > > > Peter Skomoroch wrote: > > > > I followed some of the discussion around datasets in the previous > > > > threads. As you mentioned, it might make sense to make some larger > > > > datasets available separately or as cran-style optional installs, but > > > > I think it would also be a big plus for scipy to have some more > > > > smaller built-in datasets of various types. > > > I agree, but there is the problem of licensing. I don't know what is the > > > status of data from a legal point of view: if I copy data from a book, I > > > guess this is copyrighted as the rest of the book. But then there are > > > some famous datasets (eg old faithful, iris, etc...) which are available > > > in different softwares with different licenses. > > > > > > Copying the datasets of R (in r-base) would be useful, but they fall > > > under the GPL, hence they cannot be included in scipy, at least if > > > datasets are also under the GPL. Unfortunately, I don't know the legal > > > details of those cases (status of data in a package licensed under the > GPL). > > > > > > Anyway, if I have some useful datasets, I will certainly make them in a > > > separate package, so that it can be used outside pymachine. > > > > > > David > > > _______________________________________________ > > > Scipy-dev mailing list > > > Scipy-dev at scipy.org > > > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > > > > > > > > > > > -- > > Peter N. Skomoroch > > peter.skomoroch at gmail.com > > http://www.datawrangling.com > > > > -- > Peter N. Skomoroch > peter.skomoroch at gmail.com > http://www.datawrangling.com > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > > From peter.skomoroch at gmail.com Wed May 30 09:37:59 2007 From: peter.skomoroch at gmail.com (Peter Skomoroch) Date: Wed, 30 May 2007 09:37:59 -0400 Subject: [SciPy-dev] Machine learning datasets (was Presentation of pymachine, a python package for machine learning) In-Reply-To: References: Message-ID: I was going to suggest UCI, but the licensing wasn't clear from the site (there are a few restricted datasets). Maybe contacting the maintainer is the best way to sort out the usage: Librarian: Patrick M. Murphy (ml-repository at ics.uci.edu) On 5/30/07, Bruce Southey wrote: > Hi, > You might find the UCI Machine Learning Repository a useful resource for data: > http://www.ics.uci.edu/~mlearn/MLRepository.html > > Standard sources are: > Statlib: http://lib.stat.cmu.edu/ > Netlib: http://www.netlib.org/ > > Even with those included with R may be used because some are in public domain. 
-- Peter N. Skomoroch peter.skomoroch at gmail.com http://www.datawrangling.com From jeremit0 at gmail.com Wed May 30 11:04:20 2007 From: jeremit0 at gmail.com (Jeremy Conlin) Date: Wed, 30 May 2007 11:04:20 -0400 Subject: [SciPy-dev] Problem compiling/installing/using arpack Please advise Message-ID: <3db594f70705300804l52c7a5ffqd1bd8dfa2f3f72c2@mail.gmail.com> I am trying to use arpack, found in the scipy sandbox. It appears to compile/install fine, but I get an error (shown below) when I try to import arpack. I know there has been some discussion lately on Trac about this module. I'm hoping someone can help me figure out what is wrong and how I can use this. I am running python2.5 and I just updated my scipy from the svn repository. Thanks, Jeremy $ python Python 2.5 (r25:51918, Sep 19 2006, 08:49:13) [GCC 4.0.1 (Apple Computer, Inc. build 5341)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import arpack Traceback (most recent call last): File "", line 1, in File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/arpack/__init__.py", line 2, in from arpack import * File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/arpack/arpack.py", line 9, in import _arpack ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/arpack/_arpack.so, 2): Symbol not found: _etime_ Referenced from: /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/arpack/_arpack.so Expected in: dynamic lookup >>> From ravi.rajagopal at amd.com Wed May 30 12:29:24 2007 From: ravi.rajagopal at amd.com (Ravikiran Rajagopal) Date: Wed, 30 May 2007 12:29:24 -0400 Subject: [SciPy-dev] Make numpy.distutils work with python 2.5 + cygwin Message-ID: <200705301229.24199.ravi@ati.com> Hi, I am trying to compile numpy with python 2.5.1 under cygwin and ran into the following issue. I cannot convince numpy.distutils to tell distutils that I am using cygwin. 
Using the switch "--compiler=ming32" with both 'config' > and 'build' options of setup.py, I end up with an error from distutils (not > numpy.distutils) that python was built with Visual Studio 2003 and asking me > to pass "-c ming32" to setup.py. What magic incantation do I need to convince > distutils+numpy.distutils to accept my cygwin compilers? --compiler=mingw32 ^ -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ravi.rajagopal at amd.com Wed May 30 14:05:18 2007 From: ravi.rajagopal at amd.com (Ravikiran Rajagopal) Date: Wed, 30 May 2007 14:05:18 -0400 Subject: [SciPy-dev] Make numpy.distutils work with python 2.5 + cygwin In-Reply-To: <465DB991.8020203@gmail.com> References: <200705301229.24199.ravi@ati.com> <465DB991.8020203@gmail.com> Message-ID: <200705301405.18138.ravi@ati.com> On Wednesday 30 May 2007 1:51:13 pm Robert Kern wrote: > > --compiler=mingw32 > ^ Apologies. That was a typo in my email. I had tried it with --compiler=mingw32 as well as --compiler=cygwin with identical results. The problem still remains. In fact, after creating a distutils.cfg file with [build] compiler=mingw32 I can compile other extensions and load them with python setup.py build but numpy fails. Regards, Ravi From robert.kern at gmail.com Wed May 30 14:30:00 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 30 May 2007 13:30:00 -0500 Subject: [SciPy-dev] Make numpy.distutils work with python 2.5 + cygwin In-Reply-To: <200705301405.18138.ravi@ati.com> References: <200705301229.24199.ravi@ati.com> <465DB991.8020203@gmail.com> <200705301405.18138.ravi@ati.com> Message-ID: <465DC2A8.90405@gmail.com> Ravikiran Rajagopal wrote: > On Wednesday 30 May 2007 1:51:13 pm Robert Kern wrote: >> --compiler=mingw32 >> ^ > > Apologies. That was a typo in my email. I had tried it with > --compiler=mingw32 > as well as > --compiler=cygwin > with identical results. The problem still remains. In fact, after creating a > distutils.cfg file with > > [build] > compiler=mingw32 > > I can compile other extensions and load them with > python setup.py build > but numpy fails. Can you copy-and-paste the full output? We can't tell what's wrong from your description. Also, which version of numpy are you trying to build? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From nwagner at iam.uni-stuttgart.de Wed May 30 14:53:01 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 30 May 2007 20:53:01 +0200 Subject: [SciPy-dev] Typo in line 615 of interpolate.py Message-ID: Hi, Please remove the ":" at the end of line 615. File "/usr/local/lib64/python2.5/site-packages/scipy/interpolate/interpolate.py", line 615 return _find_user(xk, yk, order, conds, B): ^ SyntaxError: invalid syntax Nils From ravi.rajagopal at amd.com Wed May 30 14:56:11 2007 From: ravi.rajagopal at amd.com (Ravikiran Rajagopal) Date: Wed, 30 May 2007 14:56:11 -0400 Subject: [SciPy-dev] Make numpy.distutils work with python 2.5 + cygwin In-Reply-To: <465DC2A8.90405@gmail.com> References: <200705301229.24199.ravi@ati.com> <200705301405.18138.ravi@ati.com> <465DC2A8.90405@gmail.com> Message-ID: <200705301456.11887.ravi@ati.com> On Wednesday 30 May 2007 2:30:00 pm Robert Kern wrote: > Can you copy-and-paste the full output? 
We can't tell what's wrong from > your description. Also, which version of numpy are you trying to build? I am trying to build numpy svn version from a couple of hours ago. I am running python 2.5.1, cygwin updated on monday, self-built lapack 3.1.1 and atlas 3.7.32. The error log is as follows: $ python setup.py config --compiler=mingw32 build --compiler=mingw32 Running from numpy source directory. F2PY Version 2_3842 blas_opt_info: blas_mkl_info: libraries mkl,vml,guide not found in c:\python25\lib libraries mkl,vml,guide not found in C:\ libraries mkl,vml,guide not found in c:\python25\libs NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS FOUND: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['c:/cygwin/home/rrajagop/scipylibs/atlas-build/lib'] language = c No module named msvccompiler in numpy.distutils; trying from distutils FOUND: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['c:/cygwin/home/rrajagop/scipylibs/atlas-build/lib'] language = c define_macros = [('ATLAS_INFO', '"\\"?.?.?\\""')] lapack_opt_info: lapack_mkl_info: mkl_info: libraries mkl,vml,guide not found in c:\python25\lib libraries mkl,vml,guide not found in C:\ libraries mkl,vml,guide not found in c:\python25\libs NOT AVAILABLE NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS numpy.distutils.system_info.atlas_threads_info Setting PTATLAS=ATLAS Setting PTATLAS=ATLAS FOUND: libraries = ['lapack', 'lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['c:/cygwin/home/rrajagop/scipylibs/atlas-build/lib'] language = f77 No module named msvccompiler in numpy.distutils; trying from distutils FOUND: libraries = ['lapack', 'lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['c:/cygwin/home/rrajagop/scipylibs/atlas-build/lib'] language = f77 define_macros = [('ATLAS_INFO', '"\\"?.?.?\\""')] running config running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler opti ons running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler opt ions running build_src building py_modules sources building extension "numpy.core.multiarray" sources Generating build\src.win32-2.5\numpy\core\config.h 0 1 error: Python was built with Visual Studio 2003; extensions must be built with a compiler than can generate compatible binaries. Visual Studio 2003 was not found on this system. If you have Cygwin installed, you can try compiling with MingW32, by passing "-c mingw32" to setup.py. From erin.sheldon at gmail.com Wed May 30 15:14:34 2007 From: erin.sheldon at gmail.com (Erin Sheldon) Date: Wed, 30 May 2007 15:14:34 -0400 Subject: [SciPy-dev] Binary i/o package Message-ID: <331116dc0705301214k4b38c97ocdb7e266d6c6873a@mail.gmail.com> Hi all - The tofile() and fromfile() methods are excellent for reading and writing arrays to disk, and there are good tools in scipy for ascii input/output. I often find myself writing huge binary files to disk and then wanting to extract particular rows and columns from that file. It is natural to associate the fields of a numpy array with the fields in the file, which may be inhomogeneous. The ability to extract this information is straightforward to code in C/C++ and brings the file close to a database in functionality without all the overhead of working with a full database or pytables. I have written a simple C++ numpy extension for reading binary data into a numpy array, with the ability to select rows and fields (columns). 
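(For comparison, when the whole file fits in memory, the row/field idea can already be approximated in pure numpy with record dtypes; the field names below are made up, and what the C++ extension adds on top of this is selecting rows and fields without reading everything:)

import numpy as np

row = np.dtype([('id', '<i4'), ('x', '<f8'), ('y', '<f8')])

a = np.zeros(5, dtype=row)      # five records
a['id'] = np.arange(5)
a['x'] = np.random.rand(5)
a.tofile('data.bin')            # raw, headerless binary

b = np.fromfile('data.bin', dtype=row)   # read all rows back
print b['x']                             # then pick out one field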
One enters the dtype that describes each row in list-of-tuples form and the code creates a numpy array (with perhaps a subset of the fields), reads in the requested data, and returns the result. Pretty simple. I feel like this is a pretty generic and useful type of operation, and if people agree I think it could go into the scipy io subpackage. The package is called readfields currently; it contains the readfields.so from the C++ code as well as simple_format.py, which contains modules to create files with a simple self-describing header (the data written using tofile()) and a read function which parses the header and uses readfields to extract subsets of data. Anyone interested in trying it out can get the package here: http://sdss.physics.nyu.edu/esheldon/python/code/ Erin From robert.kern at gmail.com Wed May 30 15:17:30 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 30 May 2007 14:17:30 -0500 Subject: [SciPy-dev] Make numpy.distutils work with python 2.5 + cygwin In-Reply-To: <200705301456.11887.ravi@ati.com> References: <200705301229.24199.ravi@ati.com> <200705301405.18138.ravi@ati.com> <465DC2A8.90405@gmail.com> <200705301456.11887.ravi@ati.com> Message-ID: <465DCDCA.5040904@gmail.com> Ravikiran Rajagopal wrote: > On Wednesday 30 May 2007 2:30:00 pm Robert Kern wrote: >> Can you copy-and-paste the full output? We can't tell what's wrong from >> your description. Also, which version of numpy are you trying to build? > > I am trying to build numpy svn version from a couple of hours ago. I am > running python 2.5.1, cygwin updated on monday, self-built lapack 3.1.1 and > atlas 3.7.32. The error log is as follows: > > $ python setup.py config --compiler=mingw32 build --compiler=mingw32 Try $ python setup.py config --compiler=mingw32 build_ext --compiler=mingw32 build If that fails, go back to numpy 1.0.3. There have been some large changes in numpy.distutils after the release that may be causing a problem. Dave Cooke, can you take a look at this? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From nwagner at iam.uni-stuttgart.de Wed May 30 16:48:37 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 30 May 2007 22:48:37 +0200 Subject: [SciPy-dev] Typo in line 615 of interpolate.py In-Reply-To: References: Message-ID: On Wed, 30 May 2007 20:53:01 +0200 "Nils Wagner" wrote: > > Hi, > > Please remove the ":" at the end of line 615. > File > "/usr/local/lib64/python2.5/site-packages/scipy/interpolate/interpolate.py", > line 615 > return _find_user(xk, yk, order, conds, B): > ^ > SyntaxError: invalid syntax > > Nils Hi Travis, Thank you for your prompt fix (r3060). 
You probably know that scipy.test() yields a memory error related to interpolate.py

======================================================================
ERROR: Make sure that appropriate exceptions are raised when invalid values
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib64/python2.5/site-packages/scipy/interpolate/tests/test_interpolate.py", line 44, in test_validation
    interp1d(self.x10, self.y10, kind='cubic')
  File "/usr/local/lib64/python2.5/site-packages/scipy/interpolate/interpolate.py", line 220, in __init__
    self._spline = splmake(x,oriented_y,order=order)
  File "/usr/local/lib64/python2.5/site-packages/scipy/interpolate/interpolate.py", line 681, in splmake
    B = _fitpack._bsplmat(order, xk)
MemoryError

Cheers,
Nils

From nwagner at iam.uni-stuttgart.de Wed May 30 18:06:45 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Thu, 31 May 2007 00:06:45 +0200
Subject: [SciPy-dev] Ticket 420
Message-ID:

Hi,

AFAIK ticket 420 can be closed. It's fixed in svn. Am I missing something ?
http://projects.scipy.org/scipy/scipy/ticket/420

Nils

From peridot.faceted at gmail.com Wed May 30 18:12:19 2007
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Wed, 30 May 2007 18:12:19 -0400
Subject: [SciPy-dev] Ticket 420
In-Reply-To:
References:
Message-ID:

On 30/05/07, Nils Wagner wrote:
> AFAIK ticket 420 can be closed. It's fixed in svn.
> Am I missing something ?
> http://projects.scipy.org/scipy/scipy/ticket/420

Fine by me (I reported it). I tried to close it but I don't think I have permission.

Anne

From nwagner at iam.uni-stuttgart.de Wed May 30 18:18:02 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Thu, 31 May 2007 00:18:02 +0200
Subject: [SciPy-dev] Ticket 420
In-Reply-To:
References:
Message-ID:

On Wed, 30 May 2007 18:12:19 -0400 "Anne Archibald" wrote:
> On 30/05/07, Nils Wagner wrote:
>> AFAIK ticket 420 can be closed.
>> Am I missing something ?
>> http://projects.scipy.org/scipy/scipy/ticket/420
>
> Fine by me (I reported it). I tried to close it but I don't think I have permission.
>
> Anne

IMHO, the reporter should be able to close his own tickets.

Nils

From robert.kern at gmail.com Wed May 30 18:26:33 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 30 May 2007 17:26:33 -0500
Subject: [SciPy-dev] Ticket 420
In-Reply-To:
References:
Message-ID: <465DFA19.5050900@gmail.com>

Nils Wagner wrote:
> IMHO, the reporter should be able to close his own tickets.

Unfortunately, your HO does not make a patch to Trac's security model.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From ravi.rajagopal at amd.com Wed May 30 19:03:49 2007
From: ravi.rajagopal at amd.com (Ravikiran Rajagopal)
Date: Wed, 30 May 2007 19:03:49 -0400
Subject: [SciPy-dev] Make numpy.distutils work with python 2.5 + cygwin
In-Reply-To: <465DCDCA.5040904@gmail.com>
References: <200705301229.24199.ravi@ati.com> <200705301456.11887.ravi@ati.com> <465DCDCA.5040904@gmail.com>
Message-ID: <200705301903.49708.ravi@ati.com>

On Wednesday 30 May 2007 3:17:30 pm Robert Kern wrote:
> Try
>
> $ python setup.py config --compiler=mingw32 build_ext --compiler=mingw32
> build

That fails identically as well :-(

From robert.kern at gmail.com Wed May 30 19:07:32 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 30 May 2007 18:07:32 -0500
Subject: [SciPy-dev] Make numpy.distutils work with python 2.5 + cygwin
In-Reply-To: <200705301903.49708.ravi@ati.com>
References: <200705301229.24199.ravi@ati.com> <200705301456.11887.ravi@ati.com> <465DCDCA.5040904@gmail.com> <200705301903.49708.ravi@ati.com>
Message-ID: <465E03B4.9040204@gmail.com>

Ravikiran Rajagopal wrote:
> On Wednesday 30 May 2007 3:17:30 pm Robert Kern wrote:
>> Try
>>
>> $ python setup.py config --compiler=mingw32 build_ext --compiler=mingw32
>> build
>
> That fails identically as well :-(

Okay, try the numpy 1.0.3 release.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From hagberg at lanl.gov Wed May 30 19:35:58 2007
From: hagberg at lanl.gov (Aric Hagberg)
Date: Wed, 30 May 2007 17:35:58 -0600
Subject: [SciPy-dev] Problem compiling/installing/using arpack Please advise
In-Reply-To: <3db594f70705300804l52c7a5ffqd1bd8dfa2f3f72c2@mail.gmail.com>
References: <3db594f70705300804l52c7a5ffqd1bd8dfa2f3f72c2@mail.gmail.com>
Message-ID: <20070530233558.GL7821@cygnid.lanl.gov>

On Wed, May 30, 2007 at 11:04:20AM -0400, Jeremy Conlin wrote:
> I am trying to use arpack, found in the scipy sandbox. It appears to
> compile/install fine, but I get an error (shown below) when I try to
> import arpack. I know there has been some discussion lately on Trac
> about this module. I'm hoping someone can help me figure out what is
> wrong and how I can use this. I am running python2.5 and I just
> updated my scipy from the svn repository.

I just checked in [3061] a new version of "second.f" that removes the reference to "external etime". I think that is the correct solution and should now work with gfortran.

Aric

From david at ar.media.kyoto-u.ac.jp Wed May 30 21:18:29 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Thu, 31 May 2007 10:18:29 +0900
Subject: [SciPy-dev] Machine learning datasets (was Presentation of pymachine, a python package for machine learning)
In-Reply-To:
References:
Message-ID: <465E2265.7010003@ar.media.kyoto-u.ac.jp>

Bruce Southey wrote:
> Hi,
> You might find the UCI Machine Learning Repository a useful resource for data:
> http://www.ics.uci.edu/~mlearn/MLRepository.html
>
> Standard sources are:
> Statlib: http://lib.stat.cmu.edu/
> Netlib: http://www.netlib.org/
>
> Even those included with R may be used because some are in the public domain.

The main problem with datasets seems to be licensing. For example, you say that some of the datasets in R are public domain: do you know which ones (and how do you know? I looked for information on this issue, without any luck)? For all I know, the datasets (at least the ones in R core) are under the GPL.

cheers,

David

From bsouthey at gmail.com Wed May 30 22:24:55 2007
From: bsouthey at gmail.com (Bruce Southey)
Date: Wed, 30 May 2007 21:24:55 -0500
Subject: [SciPy-dev] Machine learning datasets (was Presentation of pymachine, a python package for machine learning)
In-Reply-To: <465E2265.7010003@ar.media.kyoto-u.ac.jp>
References: <465E2265.7010003@ar.media.kyoto-u.ac.jp>
Message-ID:

Hi,

An example, AirPassengers is not under the GPL. If you do help("AirPassengers") you will see the source:
" Box, G.
E. P., Jenkins, G. M. and Reinsel, G. C. (1976) _Time Series Analysis, Forecasting and Control._ Third Edition. Holden-Day. Series G."

Likewise for BJsales, where the help notes were copied from the Time Series Data Library (http://www-personal.buseco.monash.edu.au/~hyndman/TSDL/). The license for this site is free. However, the source provided is:
"G. E. P. Box and G. M. Jenkins (1976): _Time Series Analysis, Forecasting and Control_, Holden-Day, San Francisco, p. 537."

I don't have either book so I cannot tell you if there are any terms for use of the dataset. In some cases I presume people would argue 'fair use'. Also note that these books predate the GPL (v1 was released Jan 1989)!

Bruce

On 5/30/07, David Cournapeau wrote:
> Bruce Southey wrote:
> > Hi,
> > You might find the UCI Machine Learning Repository a useful resource for data:
> > http://www.ics.uci.edu/~mlearn/MLRepository.html
> >
> > Standard sources are:
> > Statlib: http://lib.stat.cmu.edu/
> > Netlib: http://www.netlib.org/
> >
> > Even those included with R may be used because some are in the public domain.
>
> The main problem with datasets seems to be licensing. For example, you say that some of the datasets in R are public domain: do you know which ones (and how do you know? I looked for information on this issue, without any luck)? For all I know, the datasets (at least the ones in R core) are under the GPL.
>
> cheers,
>
> David

From david at ar.media.kyoto-u.ac.jp Wed May 30 22:44:43 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Thu, 31 May 2007 11:44:43 +0900
Subject: [SciPy-dev] Machine learning datasets (was Presentation of pymachine, a python package for machine learning)
In-Reply-To:
References: <465E2265.7010003@ar.media.kyoto-u.ac.jp>
Message-ID: <465E369B.6090300@ar.media.kyoto-u.ac.jp>

Bruce Southey wrote:
> Hi,
> An example, AirPassengers is not under the GPL. If you do help("AirPassengers") you will see the source:
> " Box, G. E. P., Jenkins, G. M. and Reinsel, G. C. (1976) _Time Series Analysis, Forecasting and Control._ Third Edition. Holden-Day. Series G."

This is why I feel a bit uneasy about this: when you distribute something under a license, say the GPL, you can only apply it to the parts for which you own the copyright. But R obviously does not own the datasets they distribute. This goes well beyond my knowledge of copyright, and I don't know what I should do about "fair use" (this concept is pretty much specific to the American copyright system anyway, no?). E.g. if the package is included in scipy, and tomorrow someone sells super visual scipy, can't they be in trouble because they use (distribute) datasets which they do not own?

For the GPL, at least, nobody can "close back" the sources, but with BSD this is not that clear, and I don't want to add code which I am not 100% sure can be distributed under the scipy license.
cheers,

David

From peridot.faceted at gmail.com Wed May 30 23:04:08 2007
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Wed, 30 May 2007 23:04:08 -0400
Subject: [SciPy-dev] Machine learning datasets (was Presentation of pymachine, a python package for machine learning)
In-Reply-To: <465E369B.6090300@ar.media.kyoto-u.ac.jp>
References: <465E2265.7010003@ar.media.kyoto-u.ac.jp> <465E369B.6090300@ar.media.kyoto-u.ac.jp>
Message-ID:

On 30/05/07, David Cournapeau wrote:
> Bruce Southey wrote:
> > Hi,
> > An example, AirPassengers is not under the GPL. If you do help("AirPassengers") you will see the source:
> > " Box, G. E. P., Jenkins, G. M. and Reinsel, G. C. (1976) _Time Series Analysis, Forecasting and Control._ Third Edition. Holden-Day. Series G."
>
> This is why I feel a bit uneasy about this: when you distribute something under a license, say the GPL, you can only apply it to the parts for which you own the copyright. But R obviously does not own the datasets they distribute. This goes well beyond my knowledge of copyright, and I don't know what I should do about "fair use" (this concept is pretty much specific to the American copyright system anyway, no?). E.g. if the package is included in scipy, and tomorrow someone sells super visual scipy, can't they be in trouble because they use (distribute) datasets which they do not own?
>
> For the GPL, at least, nobody can "close back" the sources, but with BSD this is not that clear, and I don't want to add code which I am not 100% sure can be distributed under the scipy license.

It is good that you're concerned. I'm no expert on copyright - it's a staggeringly complicated subject - but I would say that what R is doing is probably illegal. If a piece of code, or text, or an image, or practically anything(*) has no explicit license, you basically have to assume it is owned by the author, and you can only legally make copies with their explicit permission. If you follow up some of those R datasets and find the authors explicitly offer them for wide use, great - although you can't do anything they didn't say is permissible. If the authors haven't said anything, they are on firm legal footing if they decide to sue anyone who makes a copy of R.

There are a few exceptions to this, but they are nation-specific; fair use, for example, only exists in the United States of America. It allows the copying of material under certain extremely restrictive conditions (of which intent is one, leaving one open to litigation even if one is pure as the driven snow) which are incompatible with inclusion in an open-source project. Datasets published in academic papers are no less subject to these restrictions; generally if you want to use one you must negotiate with the author. Yes, this is an ugly situation.

One important exception is that the US federal government has a policy that it releases material it creates into the public domain, that is, for free use in any way by anybody. (This does not apply to work created by contractors for the US government, which can be copyrighted.) So if you're looking for free materials, US government websites are a good place to start whether or not you live there. But always look for an explicit copyright statement; sometimes you need to email them to find out.
Anne

(*) For example, the arrangement of lights on the Eiffel tower is copyright somebody or other, so filmmakers must pay them money to include a scene of the Eiffel tower at night, and it is technically illegal to include it in your holiday snaps. Brilliant, eh? -A

From peter.skomoroch at gmail.com Wed May 30 23:24:26 2007
From: peter.skomoroch at gmail.com (Peter Skomoroch)
Date: Wed, 30 May 2007 23:24:26 -0400
Subject: [SciPy-dev] Machine learning datasets (was Presentation of pymachine, a python package for machine learning)
In-Reply-To:
References: <465E2265.7010003@ar.media.kyoto-u.ac.jp>
Message-ID:

The licensing of datasets is an interesting issue; it sounds like they will need to be tackled one by one unless explicitly released to the public domain. Check out the wikipedia entry on "Open Data":
http://en.wikipedia.org/wiki/Open_Data

"Creators of data often do not consider the need to state the conditions of ownership, licensing and re-use. For example, many scientists do not regard the published data arising from their work to be theirs to control and the act of publication in a journal is an implicit release of the data into the commons. However the lack of a license makes it difficult to determine the status of a data set and may restrict the use of data offered in an Open spirit. Because of this uncertainty it is also possible for public or private organisations to aggregate such data, protect it with copyright and then resell it."

I remember a while back Leslie Kaelbling bought the enron dataset http://www.cs.cmu.edu/~enron/ for use in machine learning.

Maybe we can start a scipy wikipage with a list/table of datasets along with license status... and check off the ones which we find are not compatible, so we can find replacements or get permission. Also, we might want to add a column for which modules use the data in scipy tests etc. Should I go ahead and create the page?

On 5/30/07, Bruce Southey wrote:
>
> Hi,
> An example, AirPassengers is not under the GPL. If you do help("AirPassengers") you will see the source:
> " Box, G. E. P., Jenkins, G. M. and Reinsel, G. C. (1976) _Time Series Analysis, Forecasting and Control._ Third Edition. Holden-Day. Series G."
>
> Likewise for BJsales, where the help notes were copied from the Time Series Data Library (http://www-personal.buseco.monash.edu.au/~hyndman/TSDL/). The license for this site is free. However, the source provided is:
> "G. E. P. Box and G. M. Jenkins (1976): _Time Series Analysis, Forecasting and Control_, Holden-Day, San Francisco, p. 537."
>
> I don't have either book so I cannot tell you if there are any terms for use of the dataset. In some cases I presume people would argue 'fair use'. Also note that these books predate the GPL (v1 was released Jan 1989)!
>
> Bruce
>
> On 5/30/07, David Cournapeau wrote:
> > Bruce Southey wrote:
> > > Hi,
> > > You might find the UCI Machine Learning Repository a useful resource for data:
> > > http://www.ics.uci.edu/~mlearn/MLRepository.html
> > >
> > > Standard sources are:
> > > Statlib: http://lib.stat.cmu.edu/
> > > Netlib: http://www.netlib.org/
> > >
> > > Even those included with R may be used because some are in the public domain.
> >
> > The main problem with datasets seems to be licensing. For example, you say that some of the datasets in R are public domain: do you know which ones (and how do you know? I looked for information on this issue, without any luck)?
> > For all I know, the datasets (at least the ones in R core) are under the GPL.
> >
> > cheers,
> >
> > David

--
Peter N. Skomoroch
peter.skomoroch at gmail.com
http://www.datawrangling.com

From david at ar.media.kyoto-u.ac.jp Wed May 30 23:51:30 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Thu, 31 May 2007 12:51:30 +0900
Subject: [SciPy-dev] Machine learning datasets (was Presentation of pymachine, a python package for machine learning)
In-Reply-To:
References: <465E2265.7010003@ar.media.kyoto-u.ac.jp> <465E369B.6090300@ar.media.kyoto-u.ac.jp>
Message-ID: <465E4642.5080404@ar.media.kyoto-u.ac.jp>

Anne Archibald wrote:
> On 30/05/07, David Cournapeau wrote:
>>
>> This is why I feel a bit uneasy about this: when you distribute something under a license, say the GPL, you can only apply it to the parts for which you own the copyright. But R obviously does not own the datasets they distribute. This goes well beyond my knowledge of copyright, and I don't know what I should do about "fair use" (this concept is pretty much specific to the American copyright system anyway, no?). E.g. if the package is included in scipy, and tomorrow someone sells super visual scipy, can't they be in trouble because they use (distribute) datasets which they do not own?
>>
>> For the GPL, at least, nobody can "close back" the sources, but with BSD this is not that clear, and I don't want to add code which I am not 100% sure can be distributed under the scipy license.
>
> It is good that you're concerned. I'm no expert on copyright - it's a staggeringly complicated subject - but I would say that what R is doing is probably illegal.

That's what I thought too: distributing under the GPL (or any license for that matter) something you do not own seemed strange to me.

> (*) For example, the arrangement of lights on the Eiffel tower is copyright somebody or other, so filmmakers must pay them money to include a scene of the Eiffel tower at night, and it is technically illegal to include it in your holiday snaps. Brilliant, eh? -A

Good to know that we (French) are not the last when it comes to doing stupid things related to copyright :)

From matthieu.brucher at gmail.com Thu May 31 01:42:01 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Thu, 31 May 2007 07:42:01 +0200
Subject: [SciPy-dev] Machine learning datasets (was Presentation of pymachine, a python package for machine learning)
In-Reply-To: <465E369B.6090300@ar.media.kyoto-u.ac.jp>
References: <465E2265.7010003@ar.media.kyoto-u.ac.jp> <465E369B.6090300@ar.media.kyoto-u.ac.jp>
Message-ID:

> For the GPL, at least, nobody can "close back" the sources, but with BSD this is not that clear, and I don't want to add code which I am not 100% sure can be distributed under the scipy license.

With the BSD license, you can do what Apple does with OS X: get the code, add something, and make the customers pay for every big update. The only thing they have to provide is adding a copyright notice that says that the code was originally under BSD, provided by xxx, ...
Matthieu

From oliphant.travis at ieee.org Thu May 31 01:30:00 2007
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Wed, 30 May 2007 23:30:00 -0600
Subject: [SciPy-dev] SciPy Journal
Message-ID: <465E5D58.9030107@ieee.org>

Hi everybody,

I'm sorry for the cross-posting, but I wanted to reach a wide audience and I know not everybody subscribes to all the lists.

I've been thinking more about the "SciPy Journal" that we discussed before and I have some thoughts.

1) I'd like to get it going so that we can push out an electronic issue after the SciPy conference (in September).

2) I think its scope should be limited to papers that describe algorithms and code that are in NumPy / SciPy / SciKits. Perhaps we could also accept papers that describe code that depends on NumPy / SciPy and is also easily available.

3) I'd like to make it a requirement for inclusion of new code in SciPy that it have an associated journal article describing the algorithms, design approach, etc. I don't see this journal article as being user-interface documentation for the code. I see this as a place to describe why the code is organized as it is and to detail any algorithms that are used.

4) The purpose of the journal as I see it is to

a) provide someplace to document what is actually done in SciPy and related software.
b) provide a teaching tool of numerical methods with actual "people use it" code that would be useful to researchers, students, and professionals.
c) hopefully encourage clever new algorithms to be developed for SciPy by people using Python, which could be showcased here.
d) provide a peer-reviewed publication opportunity for people who contribute to open-source software.

5) We obviously need associate editors and people willing to review submitted articles, as well as people willing to submit articles. I have two articles that can be submitted within the next two months. What do other people have?

As an example of the kind of thing a SciPy Journal would be useful for: I have recently overhauled the interpolation.py file for SciPy by incorporating the B-spline stuff that is partly in fitpack. In the process I noticed two things:

1) I have (what seems to me) a different recursive algorithm for calculating derivatives of B-splines than I could find in fitpack.

2) I have developed a different way to determine the K-1 extra degrees of freedom for Kth-order spline fitting than I have seen before.

The SciPy Journal would be a great place to document both of these things while describing the spline interpolation design of scipy.interpolate. It is true that I could submit this stuff to other journals, but it seems like doing that makes the information harder to find in the future, not easier. I'm also dissatisfied with how exclusionary academic journals seem to be with information. They are catching up, but they are still not as accessible as other things available on the internet. Given the open nature of most scientific research, it is remarkable that getting access to the information is not as easy as it should be with modern search engines (if your internet domain does not subscribe to the e-journal).

Comments and feedback are welcome.
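For context on the B-spline derivatives mentioned above: the standard textbook recurrence (de Boor), which any alternative recursive algorithm would be measured against, writes the derivative of an order-k B-spline B_{i,k} on knots t_i as

    \frac{d}{dx} B_{i,k}(x) = (k-1)\left( \frac{B_{i,k-1}(x)}{t_{i+k-1}-t_i} - \frac{B_{i+1,k-1}(x)}{t_{i+k}-t_{i+1}} \right)

This is the classical identity, not the new algorithm referred to in the message above; it is included here only as the baseline such an algorithm would differ from.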
-Travis From nwagner at iam.uni-stuttgart.de Thu May 31 03:19:10 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 31 May 2007 09:19:10 +0200 Subject: [SciPy-dev] Severe installation problems Message-ID: <465E76EE.5020505@iam.uni-stuttgart.de> Hi all, I tried to install the latest svn of scipy but it failed.(SuSE Linux 10.0) python setup.py install non-existing path in 'scipy/cluster': 'tests' non-existing path in 'scipy/cluster': 'src/vq_wrap.cpp' mkl_info: libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE fftw3_info: FOUND: libraries = ['fftw3'] library_dirs = ['/usr/lib64'] define_macros = [('SCIPY_FFTW3_H', None)] include_dirs = ['/usr/include'] djbfft_info: NOT AVAILABLE non-existing path in 'scipy/fftpack': 'tests' could not resolve pattern in 'scipy/fftpack': 'dfftpack/*.f' non-existing path in 'scipy/fftpack': 'fftpack.pyf' non-existing path in 'scipy/fftpack': 'src/zfft.c' non-existing path in 'scipy/fftpack': 'src/drfft.c' non-existing path in 'scipy/fftpack': 'src/zrfft.c' non-existing path in 'scipy/fftpack': 'src/zfftnd.c' non-existing path in 'scipy/fftpack': 'src/zfft_djbfft.c' non-existing path in 'scipy/fftpack': 'src/zfft_fftpack.c' non-existing path in 'scipy/fftpack': 'src/zfft_fftw.c' non-existing path in 'scipy/fftpack': 'src/zfft_fftw3.c' non-existing path in 'scipy/fftpack': 'src/zfft_mkl.c' non-existing path in 'scipy/fftpack': 'convolve.pyf' non-existing path in 'scipy/fftpack': 'src/convolve.c' blas_opt_info: blas_mkl_info: libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib/atlas libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib libraries ptf77blas,ptcblas,atlas not found in /usr/lib NOT AVAILABLE atlas_blas_info: FOUND: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/local/lib/atlas'] language = c include_dirs = ['/usr/local/include/atlas'] customize GnuFCompiler Found executable /usr/bin/g77 Found executable /usr/bin/g77 customize GnuFCompiler Found executable /usr/bin/g77 Found executable /usr/bin/g77 customize GnuFCompiler using config compiling '_configtest.c': /* This file is generated from numpy/distutils/system_info.py */ void ATL_buildinfo(void); int main(void) { ATL_buildinfo(); return 0; } C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g -fPIC compile options: '-c' gcc: _configtest.c gcc -pthread _configtest.o -L/usr/local/lib/atlas -lf77blas -lcblas -latlas -o _configtest ATLAS version 3.7.11 built by root on Fri Mar 3 10:15:49 CET 2006: UNAME : Linux lisa 2.6.13-15.8-default #1 Tue Feb 7 11:07:24 UTC 2006 x86_64 x86_64 x86_64 GNU/Linux INSTFLG : -1 1 MMDEF : ARCHDEF : F2CDEFS : -DAdd__ -DStringSunStyle CACHEEDGE: 393216 F77 : /usr/bin/g77, version GNU Fortran (GCC) 3.3.5 20050117 (prerelease) (SUSE Linux) F77FLAGS : -fomit-frame-pointer -O -m64 -fPIC CC : /usr/bin/gcc, version gcc (GCC) 4.0.2 20050901 (prerelease) (SUSE Linux) CC FLAGS : -fomit-frame-pointer -O -mfpmath=387 -m64 -fPIC MCC : /usr/bin/gcc, version gcc (GCC) 4.0.2 20050901 (prerelease) (SUSE Linux) MCCFLAGS : -fomit-frame-pointer -O -mfpmath=387 -m64 -fPIC success! 
removing: _configtest.c _configtest.o _configtest FOUND: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/local/lib/atlas'] language = c define_macros = [('ATLAS_INFO', '"\\"3.7.11\\""')] include_dirs = ['/usr/local/include/atlas'] could not resolve pattern in 'scipy/integrate': 'linpack_lite/*.f' could not resolve pattern in 'scipy/integrate': 'mach/*.f' could not resolve pattern in 'scipy/integrate': 'quadpack/*.f' could not resolve pattern in 'scipy/integrate': 'odepack/*.f' non-existing path in 'scipy/integrate': '_quadpackmodule.c' non-existing path in 'scipy/integrate': '_odepackmodule.c' non-existing path in 'scipy/integrate': 'vode.pyf' non-existing path in 'scipy/integrate': 'tests' could not resolve pattern in 'scipy/interpolate': 'fitpack/*.f' non-existing path in 'scipy/interpolate': '_fitpackmodule.c' non-existing path in 'scipy/interpolate': 'fitpack.pyf' non-existing path in 'scipy/interpolate': 'tests' non-existing path in 'scipy/io': 'numpyiomodule.c' non-existing path in 'scipy/io': 'tests' non-existing path in 'scipy/io': 'examples' non-existing path in 'scipy/io': 'docs' ATLAS version 3.7.11 non-existing path in 'scipy/lib/blas': 'fblas.pyf.src' non-existing path in 'scipy/lib/blas': 'fblaswrap.f.src' could not resolve pattern in 'scipy/lib/blas': 'fblas_l?.pyf.src' non-existing path in 'scipy/lib/blas': 'cblas.pyf.src' could not resolve pattern in 'scipy/lib/blas': 'cblas_l?.pyf.src' non-existing path in 'scipy/lib/blas': 'tests' lapack_opt_info: lapack_mkl_info: NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib/atlas libraries lapack_atlas not found in /usr/local/lib/atlas libraries ptf77blas,ptcblas,atlas not found in /usr/local/lib libraries lapack_atlas not found in /usr/local/lib libraries ptf77blas,ptcblas,atlas not found in /usr/lib libraries lapack_atlas not found in /usr/lib numpy.distutils.system_info.atlas_threads_info NOT AVAILABLE atlas_info: libraries lapack_atlas not found in /usr/local/lib/atlas numpy.distutils.system_info.atlas_info FOUND: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/local/lib/atlas'] language = f77 include_dirs = ['/usr/local/include/atlas'] customize GnuFCompiler Found executable /usr/bin/g77 Found executable /usr/bin/g77 customize GnuFCompiler Found executable /usr/bin/g77 Found executable /usr/bin/g77 customize GnuFCompiler using config compiling '_configtest.c': /* This file is generated from numpy/distutils/system_info.py */ void ATL_buildinfo(void); int main(void) { ATL_buildinfo(); return 0; } C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g -fPIC compile options: '-c' gcc: _configtest.c gcc -pthread _configtest.o -L/usr/local/lib/atlas -llapack -lf77blas -lcblas -latlas -o _configtest ATLAS version 3.7.11 built by root on Fri Mar 3 10:15:49 CET 2006: UNAME : Linux lisa 2.6.13-15.8-default #1 Tue Feb 7 11:07:24 UTC 2006 x86_64 x86_64 x86_64 GNU/Linux INSTFLG : -1 1 MMDEF : ARCHDEF : F2CDEFS : -DAdd__ -DStringSunStyle CACHEEDGE: 393216 F77 : /usr/bin/g77, version GNU Fortran (GCC) 3.3.5 20050117 (prerelease) (SUSE Linux) F77FLAGS : -fomit-frame-pointer -O -m64 -fPIC CC : /usr/bin/gcc, version gcc (GCC) 4.0.2 20050901 (prerelease) (SUSE Linux) CC FLAGS : -fomit-frame-pointer -O -mfpmath=387 -m64 -fPIC MCC : /usr/bin/gcc, version gcc (GCC) 4.0.2 20050901 (prerelease) (SUSE Linux) MCCFLAGS : -fomit-frame-pointer -O -mfpmath=387 -m64 -fPIC success! 
removing: _configtest.c _configtest.o _configtest FOUND: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/local/lib/atlas'] language = f77 define_macros = [('ATLAS_INFO', '"\\"3.7.11\\""')] include_dirs = ['/usr/local/include/atlas'] ATLAS version 3.7.11 non-existing path in 'scipy/lib/lapack': 'flapack.pyf.src' could not resolve pattern in 'scipy/lib/lapack': 'flapack_*.pyf.src' non-existing path in 'scipy/lib/lapack': 'clapack.pyf.src' non-existing path in 'scipy/lib/lapack': 'calc_lwork.f' non-existing path in 'scipy/lib/lapack': 'atlas_version.c' non-existing path in 'scipy/lib/lapack': 'tests' ATLAS version 3.7.11 non-existing path in 'scipy/linalg': 'src/fblaswrap.f' non-existing path in 'scipy/linalg': 'generic_fblas.pyf' non-existing path in 'scipy/linalg': 'generic_fblas1.pyf' non-existing path in 'scipy/linalg': 'generic_fblas2.pyf' non-existing path in 'scipy/linalg': 'generic_fblas3.pyf' non-existing path in 'scipy/linalg': 'interface_gen.py' non-existing path in 'scipy/linalg': 'generic_cblas.pyf' non-existing path in 'scipy/linalg': 'generic_cblas1.pyf' non-existing path in 'scipy/linalg': 'interface_gen.py' non-existing path in 'scipy/linalg': 'generic_flapack.pyf' non-existing path in 'scipy/linalg': 'flapack_user_routines.pyf' non-existing path in 'scipy/linalg': 'interface_gen.py' non-existing path in 'scipy/linalg': 'generic_clapack.pyf' non-existing path in 'scipy/linalg': 'interface_gen.py' non-existing path in 'scipy/linalg': 'src/det.f' non-existing path in 'scipy/linalg': 'src/lu.f' non-existing path in 'scipy/linalg': 'src/calc_lwork.f' non-existing path in 'scipy/linalg': 'atlas_version.c' non-existing path in 'scipy/linalg': 'iterative/STOPTEST2.f.src' non-existing path in 'scipy/linalg': 'iterative/getbreak.f.src' non-existing path in 'scipy/linalg': 'iterative/BiCGREVCOM.f.src' non-existing path in 'scipy/linalg': 'iterative/BiCGSTABREVCOM.f.src' non-existing path in 'scipy/linalg': 'iterative/CGREVCOM.f.src' non-existing path in 'scipy/linalg': 'iterative/CGSREVCOM.f.src' non-existing path in 'scipy/linalg': 'iterative/GMRESREVCOM.f.src' non-existing path in 'scipy/linalg': 'iterative/QMRREVCOM.f.src' non-existing path in 'scipy/linalg': 'iterative/_iterative.pyf.src' non-existing path in 'scipy/linalg': 'tests' non-existing path in 'scipy/linsolve': 'tests' could not resolve pattern in 'scipy/linsolve': 'SuperLU/SRC/*.c' non-existing path in 'scipy/linsolve': '_zsuperlumodule.c' non-existing path in 'scipy/linsolve': '_superlu_utils.c' non-existing path in 'scipy/linsolve': '_superluobject.c' non-existing path in 'scipy/linsolve': '_dsuperlumodule.c' non-existing path in 'scipy/linsolve': '_superlu_utils.c' non-existing path in 'scipy/linsolve': '_superluobject.c' non-existing path in 'scipy/linsolve': '_csuperlumodule.c' non-existing path in 'scipy/linsolve': '_superlu_utils.c' non-existing path in 'scipy/linsolve': '_superluobject.c' non-existing path in 'scipy/linsolve': '_ssuperlumodule.c' non-existing path in 'scipy/linsolve': '_superlu_utils.c' non-existing path in 'scipy/linsolve': '_superluobject.c' non-existing path in 'scipy/linsolve/umfpack': 'tests' umfpack_info: libraries umfpack not found in /usr/local/lib libraries umfpack not found in /usr/lib /usr/lib64/python2.4/site-packages/numpy/distutils/system_info.py:403: UserWarning: UMFPACK sparse solver (http://www.cise.ufl.edu/research/sparse/umfpack/) not found. 
Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [umfpack]) or by setting the UMFPACK environment variable. warnings.warn(self.notfounderror.__doc__) NOT AVAILABLE non-existing path in 'scipy/linsolve/umfpack': 'umfpack.i' non-existing path in 'scipy/linsolve/umfpack': 'umfpack.i' non-existing path in 'scipy/maxentropy': 'tests' non-existing path in 'scipy/maxentropy': 'examples' non-existing path in 'scipy/maxentropy': 'doc' non-existing path in 'scipy/misc': 'lena.dat' non-existing path in 'scipy/misc': 'tests' non-existing path in 'scipy/odr': 'odrpack/d_odr.f' non-existing path in 'scipy/odr': 'odrpack/d_mprec.f' non-existing path in 'scipy/odr': 'odrpack/dlunoc.f' non-existing path in 'scipy/odr': 'odrpack/d_lpk.f' non-existing path in 'scipy/odr': '__odrpack.c' non-existing path in 'scipy/odr': 'tests' could not resolve pattern in 'scipy/optimize': 'minpack/*f' non-existing path in 'scipy/optimize': '_minpackmodule.c' could not resolve pattern in 'scipy/optimize': 'Zeros/*.c' non-existing path in 'scipy/optimize': 'zeros.c' non-existing path in 'scipy/optimize': 'lbfgsb/lbfgsb.pyf' non-existing path in 'scipy/optimize': 'lbfgsb/routines.f' non-existing path in 'scipy/optimize': 'tnc/moduleTNC.c' non-existing path in 'scipy/optimize': 'tnc/tnc.c' non-existing path in 'scipy/optimize': 'cobyla/cobyla.pyf' non-existing path in 'scipy/optimize': 'cobyla/cobyla2.f' non-existing path in 'scipy/optimize': 'cobyla/trstlp.f' non-existing path in 'scipy/optimize': 'minpack2/minpack2.pyf' non-existing path in 'scipy/optimize': 'minpack2/dcsrch.f' non-existing path in 'scipy/optimize': 'minpack2/dcstep.f' non-existing path in 'scipy/optimize': 'tests' non-existing path in 'scipy/signal': 'tests' non-existing path in 'scipy/signal': 'sigtoolsmodule.c' non-existing path in 'scipy/signal': 'firfilter.c' non-existing path in 'scipy/signal': 'medianfilter.c' non-existing path in 'scipy/signal': 'sigtools.h' non-existing path in 'scipy/signal': 'splinemodule.c' non-existing path in 'scipy/signal': 'S_bspline_util.c' non-existing path in 'scipy/signal': 'D_bspline_util.c' non-existing path in 'scipy/signal': 'C_bspline_util.c' non-existing path in 'scipy/signal': 'Z_bspline_util.c' non-existing path in 'scipy/signal': 'bspline_util.c' non-existing path in 'scipy/sparse': 'tests' non-existing path in 'scipy/sparse': 'sparsetools/sparsetools.py' non-existing path in 'scipy/sparse': 'sparsetools/sparsetools_wrap.cxx' non-existing path in 'scipy/sparse': 'sparsetools' could not resolve pattern in 'scipy/special': 'c_misc/*.c' could not resolve pattern in 'scipy/special': 'cephes/*.c' could not resolve pattern in 'scipy/special': 'mach/*.f' could not resolve pattern in 'scipy/special': 'amos/*.f' could not resolve pattern in 'scipy/special': 'toms/*.f' could not resolve pattern in 'scipy/special': 'cdflib/*.f' could not resolve pattern in 'scipy/special': 'specfun/*.f' non-existing path in 'scipy/special': '_cephesmodule.c' non-existing path in 'scipy/special': 'amos_wrappers.c' non-existing path in 'scipy/special': 'specfun_wrappers.c' non-existing path in 'scipy/special': 'toms_wrappers.c' non-existing path in 'scipy/special': 'cdf_wrappers.c' non-existing path in 'scipy/special': 'ufunc_extras.c' non-existing path in 'scipy/special': 'specfun.pyf' non-existing path in 'scipy/special': 'tests' non-existing path in 'scipy/stats': 'tests' could not resolve pattern in 'scipy/stats': 'statlib/*.f' non-existing path in 'scipy/stats': 'statlib.pyf' 
non-existing path in 'scipy/stats': 'futil.f' non-existing path in 'scipy/stats': 'mvn.pyf' non-existing path in 'scipy/stats': 'mvndst.f' non-existing path in 'scipy/ndimage': 'src/nd_image.c' non-existing path in 'scipy/ndimage': 'src/ni_filters.c' non-existing path in 'scipy/ndimage': 'src/ni_fourier.c' non-existing path in 'scipy/ndimage': 'src/ni_interpolation.c' non-existing path in 'scipy/ndimage': 'src/ni_measure.c' non-existing path in 'scipy/ndimage': 'src/ni_morphology.c' non-existing path in 'scipy/ndimage': 'src/ni_support.c' non-existing path in 'scipy/ndimage': 'src' non-existing path in 'scipy/ndimage': 'tests' non-existing path in 'scipy/stsci/convolve': 'src/_correlatemodule.c' non-existing path in 'scipy/stsci/convolve': 'src/_lineshapemodule.c' non-existing path in 'scipy/stsci/image': 'src/_combinemodule.c' non-existing path in 'scipy/weave': 'tests' non-existing path in 'scipy/weave': 'scxx' non-existing path in 'scipy/weave': 'blitz/blitz' non-existing path in 'scipy/weave': 'doc' non-existing path in 'scipy/weave': 'examples' Warning: Subpackage 'Lib' configuration returned as 'scipy' running install running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src building py_modules sources creating build creating build/src.linux-x86_64-2.4 creating build/src.linux-x86_64-2.4/scipy building library "dfftpack" sources building library "linpack_lite" sources building library "mach" sources building library "quadpack" sources building library "odepack" sources building library "fitpack" sources building library "superlu_src" sources building library "odrpack" sources building library "minpack" sources building library "rootfind" sources building library "c_misc" sources building library "cephes" sources building library "mach" sources building library "toms" sources building library "amos" sources building library "cdf" sources building library "specfun" sources building library "statlib" sources building extension "scipy.cluster._vq" sources building extension "scipy.fftpack._fftpack" sources target build/src.linux-x86_64-2.4/_fftpackmodule.c does not exist: Assuming _fftpackmodule.c was generated with "build_src --inplace" command. error: '_fftpackmodule.c' missing Any pointer ? Nils From pearu at cens.ioc.ee Thu May 31 03:34:28 2007 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Thu, 31 May 2007 09:34:28 +0200 Subject: [SciPy-dev] Severe installation problems In-Reply-To: <465E76EE.5020505@iam.uni-stuttgart.de> References: <465E76EE.5020505@iam.uni-stuttgart.de> Message-ID: <465E7A84.6060409@cens.ioc.ee> This was caused by Changeset 3845 in numpy/distutils/misc_util.py. Travis, what problems did you have with data-files in top-level package directory? I can look at it and find a proper fix. 
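To make the failure mode concrete: the "non-existing path" and "could not resolve pattern" lines in the log suggest that per-package source patterns were being joined to a wrong base directory, so every pattern missed at once. A toy sketch of that mechanism (invented names; this is not numpy.distutils' actual code):

    import glob
    import os

    def resolve(package_path, pattern):
        # Patterns such as 'dfftpack/*.f' are expanded relative to the
        # package directory; if the base path is computed wrongly, every
        # pattern in every subpackage fails the same way.
        hits = glob.glob(os.path.join(package_path, pattern))
        if not hits:
            print("could not resolve pattern in %r: %r" % (package_path, pattern))
        return hits

    resolve('scipy/fftpack', 'dfftpack/*.f')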
Thanks,
Pearu

Nils Wagner wrote:
> Hi all,
>
> I tried to install the latest svn of scipy but it failed. (SuSE Linux 10.0)
>
> python setup.py install
> [build log snipped; it is identical to the full log quoted above]
> error: '_fftpackmodule.c' missing
>
> Any pointer ?
From pearu at cens.ioc.ee Thu May 31 04:28:11 2007
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Thu, 31 May 2007 10:28:11 +0200
Subject: [SciPy-dev] Make numpy.distutils work with python 2.5 + cygwin
In-Reply-To: <200705301456.11887.ravi@ati.com>
References: <200705301229.24199.ravi@ati.com> <200705301405.18138.ravi@ati.com> <465DC2A8.90405@gmail.com> <200705301456.11887.ravi@ati.com>
Message-ID: <465E871B.9080500@cens.ioc.ee>

Ravikiran Rajagopal wrote:
> On Wednesday 30 May 2007 2:30:00 pm Robert Kern wrote:
>> Can you copy-and-paste the full output? We can't tell what's wrong from
>> your description. Also, which version of numpy are you trying to build?
>
> I am trying to build numpy svn version from a couple of hours ago. I am
> running python 2.5.1, cygwin updated on Monday, self-built lapack 3.1.1
> and atlas 3.7.32. The error log is as follows:
>
> $ python setup.py config --compiler=mingw32 build --compiler=mingw32
...
> atlas_threads_info:
> Setting PTATLAS=ATLAS
> numpy.distutils.system_info.atlas_threads_info
> Setting PTATLAS=ATLAS
> Setting PTATLAS=ATLAS
>   FOUND:
>     libraries = ['lapack', 'lapack', 'f77blas', 'cblas', 'atlas']
>     library_dirs = ['c:/cygwin/home/rrajagop/scipylibs/atlas-build/lib']
>     language = f77

You are using Fortran-compiled libraries that are picked up by
numpy.core._dotblas and numpy.linalg.lapack_lite, so the Fortran
compiler must be used to link the relevant extension modules. That's
just a note on why a Fortran compiler is sometimes needed to build
numpy.

I'll look for your specific issue in a moment...

Pearu

From pearu at cens.ioc.ee Thu May 31 04:31:19 2007
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Thu, 31 May 2007 10:31:19 +0200
Subject: [SciPy-dev] Severe installation problems
In-Reply-To: <465E7A84.6060409@cens.ioc.ee>
References: <465E76EE.5020505@iam.uni-stuttgart.de> <465E7A84.6060409@cens.ioc.ee>
Message-ID: <465E87D7.2050903@cens.ioc.ee>

This issue should now be fixed in svn.

Pearu

Pearu Peterson wrote:
> This was caused by Changeset 3845 in numpy/distutils/misc_util.py.
> Travis, what problems did you have with data-files in the top-level
> package directory? I can look at it and find a proper fix.

From nwagner at iam.uni-stuttgart.de Thu May 31 04:34:07 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Thu, 31 May 2007 10:34:07 +0200
Subject: [SciPy-dev] Severe installation problems
In-Reply-To: <465E87D7.2050903@cens.ioc.ee>
References: <465E76EE.5020505@iam.uni-stuttgart.de> <465E7A84.6060409@cens.ioc.ee> <465E87D7.2050903@cens.ioc.ee>
Message-ID: <465E887F.7050608@iam.uni-stuttgart.de>

Pearu Peterson wrote:
> This issue should now be fixed in svn.
>
> Pearu

Yes! Thank you very much!

Nils

From pearu at cens.ioc.ee Thu May 31 04:47:48 2007
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Thu, 31 May 2007 10:47:48 +0200
Subject: [SciPy-dev] Make numpy.distutils work with python 2.5 + cygwin
In-Reply-To: <200705301903.49708.ravi@ati.com>
References: <200705301229.24199.ravi@ati.com> <200705301456.11887.ravi@ati.com> <465DCDCA.5040904@gmail.com> <200705301903.49708.ravi@ati.com>
Message-ID: <465E8BB4.3080802@cens.ioc.ee>

Ravikiran Rajagopal wrote:
> On Wednesday 30 May 2007 3:17:30 pm Robert Kern wrote:
>> Try
>>
>>   $ python setup.py config --compiler=mingw32 build_ext
>>   --compiler=mingw32 build
>
> That fails identically as well :-(

Try:

    python setup.py build --compiler=mingw32 --fcompiler=gnu

The current svn version of numpy is recommended.

Pearu
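(For anyone following the mingw32/cygwin thread: numpy.distutils can list the
Fortran compilers it can find, which helps confirm that --fcompiler=gnu is
resolvable before starting a full build. A minimal sketch; show_fcompilers is
the helper behind setup.py's --help-fcompiler output:)

    # minimal sketch: list the Fortran compilers numpy.distutils detects
    from numpy.distutils.fcompiler import show_fcompilers

    show_fcompilers()  # prints available and unavailable --fcompiler choices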
From nwagner at iam.uni-stuttgart.de Thu May 31 07:00:10 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Thu, 31 May 2007 13:00:10 +0200
Subject: [SciPy-dev] LAPACK issue
Message-ID: <465EAABA.7060500@iam.uni-stuttgart.de>

Hi all,

scipy.test() has some failures which seem to be related to the new
LAPACK version: http://www.netlib.org/lapack/lapack-3.1.1.changes

======================================================================
FAIL: check_syevr (scipy.lib.tests.test_lapack.test_flapack_float)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib64/python2.5/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 41, in check_syevr
    assert_array_almost_equal(w,exact_w)
  File "/usr/local/lib64/python2.5/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal
    header='Arrays are not almost equal')
  File "/usr/local/lib64/python2.5/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare
    assert cond, msg
AssertionError:
Arrays are not almost equal

(mismatch 33.3333333333%)
 x: array([-0.66992444,  0.48769474,  9.18222618], dtype=float32)
 y: array([-0.66992434,  0.48769389,  9.18223045])

======================================================================
FAIL: check_syevr_irange (scipy.lib.tests.test_lapack.test_flapack_float)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib64/python2.5/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 66, in check_syevr_irange
    assert_array_almost_equal(w,exact_w[rslice])
  File "/usr/local/lib64/python2.5/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal
    header='Arrays are not almost equal')
  File "/usr/local/lib64/python2.5/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare
    assert cond, msg
AssertionError:
Arrays are not almost equal

(mismatch 33.3333333333%)
 x: array([-0.66992444,  0.48769474,  9.18222618], dtype=float32)
 y: array([-0.66992434,  0.48769389,  9.18223045])

======================================================================
FAIL: Solve: single precision complex
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib64/python2.5/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py", line 32, in check_solve_complex_without_umfpack
    assert_array_almost_equal(a*x, b)
  File "/usr/local/lib64/python2.5/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal
    header='Arrays are not almost equal')
  File "/usr/local/lib64/python2.5/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare
    assert cond, msg
AssertionError:
Arrays are not almost equal

(mismatch 20.0%)
 x: array([ 1.        +0.j,  1.99999809+0.j,  3.        +0.j,  4.00000048+0.j,
        5.        +0.j], dtype=complex64)
 y: array([1, 2, 3, 4, 5])

----------------------------------------------------------------------

These failures are not present with LAPACK 3.0. Any comment?

Nils
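(The syevr mismatches above are small float32 relative errors that land just
outside the default tolerance of assert_array_almost_equal, which checks to
decimal=6. A minimal sketch that reproduces the comparison with the values
copied from the first failure:)

    # minimal sketch: the reported values fail at decimal=6, pass at decimal=5
    import numpy as np
    from numpy.testing import assert_array_almost_equal

    w       = np.array([-0.66992444, 0.48769474, 9.18222618], dtype=np.float32)
    exact_w = np.array([-0.66992434, 0.48769389, 9.18223045])

    assert_array_almost_equal(w, exact_w, decimal=5)  # passes; max diff ~4e-6
    assert_array_almost_equal(w, exact_w)             # decimal=6 -> AssertionError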
From jeremit0 at gmail.com Thu May 31 09:34:10 2007
From: jeremit0 at gmail.com (Jeremy Conlin)
Date: Thu, 31 May 2007 09:34:10 -0400
Subject: [SciPy-dev] Problem compiling/installing/using arpack  Please advise
In-Reply-To: <20070530233558.GL7821@cygnid.lanl.gov>
References: <3db594f70705300804l52c7a5ffqd1bd8dfa2f3f72c2@mail.gmail.com> <20070530233558.GL7821@cygnid.lanl.gov>
Message-ID: <3db594f70705310634y7d828250p84fcc020964399ef@mail.gmail.com>

On 5/30/07, Aric Hagberg wrote:
> On Wed, May 30, 2007 at 11:04:20AM -0400, Jeremy Conlin wrote:
> > I am trying to use arpack, found in the scipy sandbox. It appears to
> > compile/install fine, but I get an error (shown below) when I try to
> > import arpack. I know there has been some discussion lately on Trac
> > about this module. I'm hoping someone can help me figure out what is
> > wrong and how I can use this. I am running python2.5 and I just
> > updated my scipy from the svn repository.
>
> I just checked in [3061] a new version of "second.f" that removes the
> reference to "external etime". I think that is the correct solution
> and should now work with gfortran.
>
> Aric

Perfect. That works now. Thanks for the quick update.

Jeremy

From ravi.rajagopal at amd.com Thu May 31 10:31:34 2007
From: ravi.rajagopal at amd.com (Ravikiran Rajagopal)
Date: Thu, 31 May 2007 10:31:34 -0400
Subject: [SciPy-dev] Make numpy.distutils work with python 2.5 + cygwin
In-Reply-To: <465E8BB4.3080802@cens.ioc.ee>
References: <200705301229.24199.ravi@ati.com> <200705301903.49708.ravi@ati.com> <465E8BB4.3080802@cens.ioc.ee>
Message-ID: <200705311031.34502.ravi@ati.com>

On Thursday 31 May 2007 4:47:48 am Pearu Peterson wrote:
> >> $ python setup.py config --compiler=mingw32 build_ext
> >> --compiler=mingw32 build
> >
> > That fails identically as well :-(
>
> Try:
>
>     python setup.py build --compiler=mingw32 --fcompiler=gnu

That too fails in the same way. My humble opinion (worth very little,
since I do not understand distutils) is that it is not related to the
Fortran compiler: it fails with an error message from distutils (not
numpy.distutils) saying that the C compiler is incompatible.

Regards,
Ravi
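(Ravi's C-compiler failure can be reproduced outside setup.py, which at least
shows whether plain distutils, rather than numpy.distutils, rejects mingw32 on
a given Python build. A minimal sketch using only the standard library; the
exact exception raised may differ by Python version:)

    # minimal sketch: ask plain distutils for the mingw32 compiler directly
    from distutils.ccompiler import new_compiler
    from distutils.errors import DistutilsPlatformError

    try:
        cc = new_compiler(compiler='mingw32', verbose=1)
        print 'got compiler:', cc.compiler_type
    except DistutilsPlatformError, e:
        print 'distutils rejected mingw32:', e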
From robert.kern at gmail.com Thu May 31 13:17:23 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 31 May 2007 12:17:23 -0500
Subject: [SciPy-dev] Machine learning datasets (was Presentation of pymachine, a python package for machine learning)
In-Reply-To:
References: <465E2265.7010003@ar.media.kyoto-u.ac.jp> <465E369B.6090300@ar.media.kyoto-u.ac.jp>
Message-ID: <465F0323.9050605@gmail.com>

Anne Archibald wrote:
> Datasets published in academic papers are no less subject to these
> restrictions; generally if you want to use one you must negotiate with
> the author.

Not necessarily. There is another US-specific exception. Data is not
copyrightable in the United States. For something to be copyrightable
here, it must contain some creative content. Thus, while I may not
photocopy a phone book and sell the copy (the arrangement, typography,
etc. are deemed creative and copyrightable), I may write down all of
the numbers and typeset my own phone book.

Now, most other countries don't have this rule. Notably, countries in
the EU tend to recognize "the sweat of the brow" expended in collecting
the data as being worthy of copyright protection.

IANAL, but my approach would be to get in touch with the original
source of the data if possible, and ask. The biggest problem you'll
face is that few of those sources have ever thought about their
datasets in terms of copyright licenses, particularly *software*
copyright licenses that permit modification to their precious data. If
it's an American source and the data appears to be freely distributed,
as in the UCI database, I would probably just take it as public domain
according to US law. But of course, I'm in the US. There is a *tiny*
possibility that although the "author" of the data is inside the US,
too, he intends to pursue copyright outside of the US. However, if it's
on something as visible as the UCI site, this possibility is really
tiny.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From oliphant at ee.byu.edu Thu May 31 19:18:34 2007
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Thu, 31 May 2007 17:18:34 -0600
Subject: [SciPy-dev] Severe installation problems
In-Reply-To: <465E7A84.6060409@cens.ioc.ee>
References: <465E76EE.5020505@iam.uni-stuttgart.de> <465E7A84.6060409@cens.ioc.ee>
Message-ID: <465F57CA.3060803@ee.byu.edu>

Pearu Peterson wrote:
> This was caused by Changeset 3845 in numpy/distutils/misc_util.py.
> Travis, what problems did you have with data-files in the top-level
> package directory? I can look at it and find a proper fix.

I'm not sure if you've fixed this or not, but the problem I had was
that I could not add data_files that were in the top-level package
namespace, no matter what I tried.

Look specifically in numpy/setup.py to see the addition of
COMPATIBILITY, scipy_compatibility, and site.cfg.example. These files
were not being added to the distribution, despite the presence of an
add_data_file line in the setup.py code, until I made the change.

Granted, I don't follow all of what is going on with numpy.distutils.
If you were able to fix the problem a different way, then great.

-Travis
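(The pattern Travis describes looks roughly like the following in a
numpy.distutils setup.py. This is a minimal sketch, not the actual
numpy/setup.py; the file name is borrowed from his example:)

    # minimal sketch: a top-level data file added via numpy.distutils
    def configuration(parent_package='', top_path=None):
        from numpy.distutils.misc_util import Configuration
        config = Configuration(None, parent_package, top_path)
        # a data file in the top-level package directory -- the case
        # that Changeset 3845 affected
        config.add_data_files('site.cfg.example')
        return config

    if __name__ == '__main__':
        from numpy.distutils.core import setup
        setup(configuration=configuration)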
From amcmorl at gmail.com Thu May 31 23:33:49 2007
From: amcmorl at gmail.com (Angus McMorland)
Date: Fri, 1 Jun 2007 15:33:49 +1200
Subject: [SciPy-dev] scipy compilation error on RHEL4
Message-ID:

Hi list,

I'm trying to install scipy from svn source on our dual Intel Xeon
x86_64 server, which for administrative reasons beyond my control runs
RHEL4, and for which I don't have root access, so I'm doing a home-dir
install. I'm following the instructions in INSTALL.txt and have, I
think, successfully compiled LAPACK and ATLAS.

In the scipy compilation step, I get the following error:

/usr/bin/ld: /home/raid/amcmorl/lib/atlas/libptf77blas.a(dscal.o):
relocation R_X86_64_PC32 against `atl_f77wrap_dscal__' can not be used
when making a shared object; recompile with -fPIC
/usr/bin/ld: final link failed: Bad value
collect2: ld returned 1 exit status
/usr/bin/ld: /home/raid/amcmorl/lib/atlas/libptf77blas.a(dscal.o):
relocation R_X86_64_PC32 against `atl_f77wrap_dscal__' can not be used
when making a shared object; recompile with -fPIC
/usr/bin/ld: final link failed: Bad value
collect2: ld returned 1 exit status
error: Command "/usr/bin/g77 -g -Wall -shared
build/temp.linux-x86_64-2.3/Lib/integrate/_odepackmodule.o
-L/home/raid/amcmorl/lib/atlas -Lbuild/temp.linux-x86_64-2.3 -lodepack
-llinpack_lite -lmach -lptf77blas -lptcblas -latlas -lg2c -o
build/lib.linux-x86_64-2.3/scipy/integrate/_odepack.so" failed with
exit status 1

I'm not sure what precisely needs to be recompiled with the -fPIC
flag. Any advice on how to proceed would be appreciated.

Thanks,

Angus.
-- 
AJC McMorland, PhD Student
Physiology, University of Auckland

From robert.kern at gmail.com Thu May 31 23:35:14 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 31 May 2007 22:35:14 -0500
Subject: [SciPy-dev] scipy compilation error on RHEL4
In-Reply-To:
References:
Message-ID: <465F93F2.1040009@gmail.com>

Angus McMorland wrote:
> Hi list,
>
> I'm trying to install scipy from svn source on our dual Intel Xeon
> x86_64 server, which for administrative reasons beyond my control runs
> RHEL4, and for which I don't have root access, so I'm doing a home-dir
> install. I'm following the instructions in INSTALL.txt and have, I
> think, successfully compiled LAPACK and ATLAS.
>
> In the scipy compilation step, I get the following error:
>
> /usr/bin/ld: /home/raid/amcmorl/lib/atlas/libptf77blas.a(dscal.o):
> relocation R_X86_64_PC32 against `atl_f77wrap_dscal__' can not be used
> when making a shared object; recompile with -fPIC
> /usr/bin/ld: final link failed: Bad value
> collect2: ld returned 1 exit status
> /usr/bin/ld: /home/raid/amcmorl/lib/atlas/libptf77blas.a(dscal.o):
> relocation R_X86_64_PC32 against `atl_f77wrap_dscal__' can not be used
> when making a shared object; recompile with -fPIC
> /usr/bin/ld: final link failed: Bad value
> collect2: ld returned 1 exit status
> error: Command "/usr/bin/g77 -g -Wall -shared
> build/temp.linux-x86_64-2.3/Lib/integrate/_odepackmodule.o
> -L/home/raid/amcmorl/lib/atlas -Lbuild/temp.linux-x86_64-2.3 -lodepack
> -llinpack_lite -lmach -lptf77blas -lptcblas -latlas -lg2c -o
> build/lib.linux-x86_64-2.3/scipy/integrate/_odepack.so" failed with
> exit status 1
>
> I'm not sure what precisely needs to be recompiled with the -fPIC
> flag. Any advice on how to proceed would be appreciated.

ATLAS needs to be recompiled with -fPIC.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco
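(Once ATLAS has been rebuilt with -fPIC and scipy reinstalled, a quick smoke
test is importing the extension that failed to link; the module name below is
taken from the error message in Angus's report:)

    # minimal sketch: verify the previously failing extension now loads
    import scipy.integrate._odepack   # the .so named in the link error
    from scipy.integrate import odeint
    print odeint.__doc__.splitlines()[0]  # odeint wraps the _odepack extension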