From david at ar.media.kyoto-u.ac.jp Wed Aug 1 01:13:43 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Wed, 01 Aug 2007 14:13:43 +0900
Subject: [SciPy-dev] FFTW performances in scipy and numpy
Message-ID: <46B01687.3050604@ar.media.kyoto-u.ac.jp>

Hi Steven,

I am one of the contributors to numpy/scipy. Let me first say I am *not* the main author of the fftw wrapping for scipy, that I am a relative newcomer to scipy, and that I do not claim a deep understanding of numpy arrays. But I have been thinking a bit about the problem, since I am a big user of fft and have debugged some problems in the scipy code.

- About copying killing performance: I am well aware of the problem. This was only a quick hack, because performance was abysmal before it (the plan was computed for every fft!), and I had some difficulty following the code well enough to do something better. At least it made performance acceptable.
- Because I found the code difficult to follow, I started cleaning up the sources. The real goal is to add a better mechanism to use fftw as efficiently as possible.

To improve performance, I thought about several approaches, which happen to be the ones you suggest :)

- Making numpy data 16-byte aligned. This one is a bit tricky. I won't bother you with the details, but in general numpy data may not even be "word aligned". Since some archs require some kind of alignment, the numpy API has mechanisms to get aligned buffers from unaligned ones; I was thinking about an additional "SIMD alignment" flag, since this could be quite useful for many optimized libraries using SIMD. But maybe this does not make sense; I have not yet thought about it enough to propose anything concrete to the main numpy developers.
- I have tried FFTW_UNALIGNED + FFTW_ESTIMATE plans; unfortunately, I found that the performance was worse than using FFTW_MEASURE + copy (the copies are done into aligned buffers).
I have since discovered that this may be due to the totally broken architecture of my main workstation (a Pentium 4): on my recent macbook (on Linux, 32 bits, CoreDuo2), using no copy with FFTW_UNALIGNED is much better.
- The above problem is fixable if we add a mechanism to choose plans (ESTIMATE vs MEASURE vs ...; I found that for 1d cases at least, ESTIMATE vs MEASURE is what really counts performance-wise).
- I have also tried keeping two plans in parallel for each size (one SIMD, one not SIMD), but this does not work very well, because numpy arrays are almost never 16-byte aligned, so it does not seem worth the effort.

If you are interested in more concrete results/code, I can take a look at my test programs, make them buildable, and make them available for your comments (the test programs do not depend on python; they are pure C code directly using the C wrapping used by scipy).

cheers,

David

From peridot.faceted at gmail.com Wed Aug 1 03:04:03 2007
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Wed, 1 Aug 2007 03:04:03 -0400
Subject: [SciPy-dev] FFTW performances in scipy and numpy
In-Reply-To: <46B01687.3050604@ar.media.kyoto-u.ac.jp>
References: <46B01687.3050604@ar.media.kyoto-u.ac.jp>
Message-ID: 

On 01/08/07, David Cournapeau wrote:
> I am one of the contributor to numpy/scipy. Let me first say I am
> *not* the main author of the fftw wrapping for scipy, and that I am a
> relatively newcommer in scipy, and do not claim a deep understanding of
> numpy arrays. But I have been thinking a bit on the problem since I am a
> big user of fft and debugged some problems in the scipy code since.

I have been using FFTW from raw C, and thinking about how it could be used efficiently from python. (I also do oodles of FFTs in numpy, but they are not performance-critical.)
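The 16-byte-alignment question being discussed can in fact be explored directly from Python with plain numpy: check the buffer address, and over-allocate-and-slice to force alignment. A sketch only — `is_simd_aligned` and `aligned_empty` below are illustrative helpers, not numpy API:

```python
import numpy as np

def is_simd_aligned(a, boundary=16):
    # A buffer is usable by FFTW's SIMD codelets only if it starts
    # on the given boundary (16 bytes for SSE/SSE2).
    return a.ctypes.data % boundary == 0

def aligned_empty(n, dtype=np.float64, boundary=16):
    # Over-allocate a raw byte buffer, then slice at the first
    # offset that lands on the boundary.
    itemsize = np.dtype(dtype).itemsize
    buf = np.empty(n * itemsize + boundary, dtype=np.uint8)
    offset = (-buf.ctypes.data) % boundary
    return buf[offset:offset + n * itemsize].view(dtype)

a = aligned_empty(1024)
assert is_simd_aligned(a)
```

Note that the aligned array keeps the over-allocated byte buffer alive through numpy's view mechanism, so no extra bookkeeping is needed.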
> - about copying killing performances: I am well aware of the
> problem, this was only a quick hack because the performances were abysal
> before this hack (the plan was computed for every fft !), and I had some
> difficulties to follow the code for something better. At least, it made
> the performances acceptable.

How much faster are plans likely to get with FFTW_DESTROY_INPUT? Is copy+FFTW_DESTROY_INPUT still much slower than FFTW_PRESERVE_INPUT? Multidimensional real ffts may require copying anyway, since they don't support FFTW_PRESERVE_INPUT (and the numpy fft interface doesn't damage its input).

> - Because I found the code difficult to follow code, I started
> cleaning up the sources. The real goal is to add a better mechanism to
> use fftw as efficiently as possible.

It might be worth providing a thinner wrapper (as well as the current simple one) so that one could control FFTW's planning from python code. Perhaps a Plan object, which keeps track of (and optionally allocates, with optimal alignment and striding) its input and output arrays?

> - making numpy data 16 bytes aligned. This one is a bit tricky. I
> won't bother you with the details, but generally, numpy data may not be
> even "word aligned". Since some archs require some kind of alignement,
> there are some mechanisms to get aligned buffers from unaligned buffers
> in numpy API; I was thinking about an additional flag "SIMD alignement",
> since this could be quite useful for many optimized libraries using
> SIMD. But maybe this does not make sense, I have not yet thought enough
> about it to propose anything concrete to the main numpy developers.

Not just libraries; with SSE2 and related instruction sets, it's quite possible that even ufuncs could be radically accelerated - it's reasonable to use SIMD (and cache control!) for even the simple case of adding two arrays into a third. No code yet exists in numpy to do so, but an aggressive optimizing compiler could do something with the code that is there.
(Of course, this has observable numerical effects, so there would be the same problem as for gcc's -ffast-math flag.) Really large numpy arrays are already going to be SIMD-aligned (on Linux at least), because they are allocated on fresh pages. Small arrays are going to waste space if they're SIMD-aligned. So the default allocator is probably fine as it is, but it would be handy to have alignment as an additional property one could request from constructors and check from anywhere. I would hesitate to make it a flag, since one might well care about page alignment, 32-bit alignment, or whatever. Really I suppose one can tell alignment pretty easily by looking at the pointer (does FFTW_ESTIMATE do this automatically?) and IIRC one can produce aligned arrays at the python level, if one is determined to, so even without changes to the numpy C API this could be kluged into place. > - I have tried FFTW_UNALIGNED + FFTW_ESTIMATE plans; unfortunately, > I found that the performances were worse than using FFTW_MEASURE + copy > (the copies are done into aligned buffers). I have since discover that > this may be due to the totally broken architecture of my main > workstation (a Pentium four): on my recent macbook (On linux, 32 bits, > CoreDuo2), using no copy with FFTW_UNALIGNED is much better. > - The above problem is fixable if we add a mechanisme to choose > plans (ESTIMATE vs MEASURE vs ... I found that for 1d cases at least, > ESTIMATE vs MEASURE is what really count performance wise). > - I have also tried to get two plans in parallel for each size (one > SIMD, one not SIMD), but this does not work very well, because numpy > arrays are almost never 16 bytes aligned, so this does not seem to worth > the effort. How much reuse of wisdom is there? Building a good wisdom database during installation might make planning every time reasonable. What properties of an array does the planner care about if used in guru mode? Anne M. 
Archibald

From david at ar.media.kyoto-u.ac.jp Wed Aug 1 03:49:34 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Wed, 01 Aug 2007 16:49:34 +0900
Subject: [SciPy-dev] FFTW performances in scipy and numpy
In-Reply-To: 
References: <46B01687.3050604@ar.media.kyoto-u.ac.jp>
Message-ID: <46B03B0E.5050605@ar.media.kyoto-u.ac.jp>

Anne Archibald wrote:
> On 01/08/07, David Cournapeau wrote:
>> I am one of the contributor to numpy/scipy. Let me first say I am
>> *not* the main author of the fftw wrapping for scipy, and that I am a
>> relatively newcommer in scipy, and do not claim a deep understanding of
>> numpy arrays. But I have been thinking a bit on the problem since I am a
>> big user of fft and debugged some problems in the scipy code since.

Ok, I prepared a small package to test several strategies:

http://www.ar.media.kyoto-u.ac.jp/members/david/archives/fftdev.tbz2

By doing make test, it should build out of the box and run the tests (if you are on Linux and have gcc and fftw3, of course :) ). I did not even check whether the computation is OK (I just tested against memory problems under valgrind).

3 strategies are available:
- Have a flag to check whether the given array is 16 bytes aligned, and conditionally build plans using this info
- Use FFTW_UNALIGNED, and do not care about alignment
- Current strategy: copy.

The three strategies use FFTW_MEASURE, which I didn't do before, and which may explain the wrong things I said in my previous email. There are four binaries, two for the first strategy: in one case (testsimd), I use standard malloc, and in the second case, I force malloc to be 16 bytes aligned.
The results on my pentium 4, for a size of 1024 and 5000 iterations:

========================================
16 bytes aligned malloc + FFTW_MEASURE
========================================
size is 1024, iter is 5000
testing cached (estimate)
cycle is 780359600.00, 156071.92 per execution, min is 56816.00
countaligned is 5000

========================================
not aligned malloc + FFTW_MEASURE
========================================
size is 1024, iter is 5000
testing cached (estimate)
cycle is 948300378.00, 189660.08 per execution, min is 88216.00
countaligned is 0

========================================
not aligned malloc + FFTW_MEASURE | FFTW_UNALIGNED
========================================
size is 1024, iter is 5000
testing cached (estimate)
cycle is 1110396876.00, 222079.38 per execution, min is 87788.00
countaligned is 0

========================================
not aligned malloc + FFTW_MEASURE + copy
========================================
size is 1024, iter is 5000
testing cached (estimate)
cycle is 1644872686.00, 328974.54 per execution, min is 219936.00
countaligned is 0

TESTING Done

min is what matters: it is the minimum number of cycles within the 5000 iterations. On my pentium m, I got totally different results of course:

51000 cycles
55000 cycles
55000 cycles
150000 cycles

And also, those are almost always reproducible (they do not change much between runs). I don't know if this is because of the Pentium 4 of my workstation, or because I have my home on NFS, which may screw things up in a way I don't understand, but on my workstation each run gives different results. Basically, we should change the current scipy implementation now, as the third strategy is easy to implement, and is 3 times better than the current one :)

> Not just libraries; with SSE2 and related instruction sets, it's quite
> possible that even ufuncs could be radically accelerated - it's
> reasonable to use SIMD (and cache control!) for even the simple case
> of adding two arrays into a third. No code yet exists in numpy to do
> so, but an aggressive optimizing compiler could do something with the
> code that is there. (Of course, this has observable numerical effects,
> so there would be the same problem as for gcc's -ffast-math flag.)

The problem of precision really is specific to SSE and x86, right? But since apple computers also use those now, I guess the problem is kind of pervasive :)

> Really large numpy arrays are already going to be SIMD-aligned (on
> Linux at least), because they are allocated on fresh pages. Small
> arrays are going to waste space if they're SIMD-aligned. So the
> default allocator is probably fine as it is, but it would be handy to
> have alignment as an additional property one could request from
> constructors and check from anywhere. I would hesitate to make it a
> flag, since one might well care about page alignment, 32-bit
> alignment, or whatever.

Are you sure about the page thing? A page is 4kb, right? This would mean any double numpy array above 512 items is aligned... which is not what I observed when I tested. Since I screwed things up last time I checked, I should test again, though.

> Really I suppose one can tell alignment pretty easily by looking at
> the pointer (does FFTW_ESTIMATE do this automatically?)
This is trivial: just check whether your pointer address is a multiple of 16 (one line of code in my benchmark, in zfft_fftw3.c) David From jtravs at gmail.com Wed Aug 1 07:18:25 2007 From: jtravs at gmail.com (John Travers) Date: Wed, 1 Aug 2007 12:18:25 +0100 Subject: [SciPy-dev] FFTW performances in scipy and numpy In-Reply-To: <46B03B0E.5050605@ar.media.kyoto-u.ac.jp> References: <46B01687.3050604@ar.media.kyoto-u.ac.jp> <46B03B0E.5050605@ar.media.kyoto-u.ac.jp> Message-ID: <3a1077e70708010418r32d507bwbd837feb6101037f@mail.gmail.com> On 01/08/07, David Cournapeau wrote: > Anne Archibald wrote: > > On 01/08/07, David Cournapeau wrote: > > > >> I am one of the contributor to numpy/scipy. Let me first say I am > >> *not* the main author of the fftw wrapping for scipy, and that I am a > >> relatively newcommer in scipy, and do not claim a deep understanding of > >> numpy arrays. But I have been thinking a bit on the problem since I am a > >> big user of fft and debugged some problems in the scipy code since. > > > Ok, I prepared a small package to test several strategies: > > http://www.ar.media.kyoto-u.ac.jp/members/david/archives/fftdev.tbz2 > > By doing make test, it should build out of the box and run the tests (if > you are on Linux, have gcc and fftw3, of course :) ). I did not even > check whether the computation is OK (I just tested against memory > problems under valgrind). > > 3 strategies are available: > - Have a flag to check whether the given array is 16 bytes aligned, > and conditionnally build plans using this info > - Use FFTW_UNALIGNED, and do not care about alignement > - Current strategy: copy. > > The three strategies use FFTW_MEASURE, which I didn't do before, and may Another strategy worth trying is using FFTW_MEASURE once and then using FFTW_ESTIMATE for additional arrays. FFTW accumulates wisdom and so the initial call with MEASURE means that further estimated plans also benefit. 
In my simple tests it comes very close to measuring for each individual array. J From david at ar.media.kyoto-u.ac.jp Wed Aug 1 07:41:20 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 01 Aug 2007 20:41:20 +0900 Subject: [SciPy-dev] FFTW performances in scipy and numpy In-Reply-To: <3a1077e70708010418r32d507bwbd837feb6101037f@mail.gmail.com> References: <46B01687.3050604@ar.media.kyoto-u.ac.jp> <46B03B0E.5050605@ar.media.kyoto-u.ac.jp> <3a1077e70708010418r32d507bwbd837feb6101037f@mail.gmail.com> Message-ID: <46B07160.9090500@ar.media.kyoto-u.ac.jp> John Travers wrote: > On 01/08/07, David Cournapeau wrote: > >> Anne Archibald wrote: >> >>> On 01/08/07, David Cournapeau wrote: >>> >>> >>>> I am one of the contributor to numpy/scipy. Let me first say I am >>>> *not* the main author of the fftw wrapping for scipy, and that I am a >>>> relatively newcommer in scipy, and do not claim a deep understanding of >>>> numpy arrays. But I have been thinking a bit on the problem since I am a >>>> big user of fft and debugged some problems in the scipy code since. >>>> >> Ok, I prepared a small package to test several strategies: >> >> http://www.ar.media.kyoto-u.ac.jp/members/david/archives/fftdev.tbz2 >> >> By doing make test, it should build out of the box and run the tests (if >> you are on Linux, have gcc and fftw3, of course :) ). I did not even >> check whether the computation is OK (I just tested against memory >> problems under valgrind). >> >> 3 strategies are available: >> - Have a flag to check whether the given array is 16 bytes aligned, >> and conditionnally build plans using this info >> - Use FFTW_UNALIGNED, and do not care about alignement >> - Current strategy: copy. >> >> The three strategies use FFTW_MEASURE, which I didn't do before, and may >> > > Another strategy worth trying is using FFTW_MEASURE once and then > using FFTW_ESTIMATE for additional arrays. 
> FFTW accumulates wisdom and
> so the initial call with MEASURE means that further estimated plans
> also benefit. In my simple tests it comes very close to measuring for
> each individual array.

Is this true for different array sizes?

David

From openopt at ukr.net Wed Aug 1 09:49:16 2007
From: openopt at ukr.net (dmitrey)
Date: Wed, 01 Aug 2007 16:49:16 +0300
Subject: [SciPy-dev] GSoC schedule question (letter for my mentors & Matthieu Brucher)
Message-ID: <46B08F5C.1090507@ukr.net>

hi all,
this is a letter primarily for my mentors & Matthieu Brucher.

According to the schedule proposed by my mentors, I should work on chapter 2:

2. Make existing openopt code usable by adding docstrings, unit tests, and documented sample scripts. Docstrings must conform to the standard:
http://projects.scipy.org/scipy/numpy/wiki/CodingStyleGuidelines
Tests must conform to the TestingGuidelines:
http://projects.scipy.org/scipy/scipy/wiki/TestingGuidelines
Make sure ralg and lincher are fully functional, tested, and documented, completing as much as possible of your projected work on ralg (i.e., constraints to nonsmooth ralg solver (c<0, h=0, lb, ub, A, Aeq), as well as their derivatives). Documentation must be such as to allow other developers to understand and maintain the code, should you cease maintenance. Sample scripts should ensure that users have informative (and documented!) examples to work with. (7 days)

So, one of the main problems for lincher (along with the extern QP solver from cvxopt) is that a good line-search minimizer that takes the slope angle into account is absent (we discussed the problem some weeks ago in the mailing lists). In scipy.optimize there is only one appropriate func: line_search. However, there are some problems with the line_search docstring:

line_search(f, myfprime, xk, pk, gfk, old_fval, old_old_fval, args=(), c1=0.0001, c2=0.90000000000000002, amax=50)
Find alpha that satisfies strong Wolfe conditions.
Uses the line search algorithm to enforce strong Wolfe conditions
Wright and Nocedal, 'Numerical Optimization', 1999, pg. 59-60
For the zoom phase it uses an algorithm by
Outputs: (alpha0, gc, fc)

So, as you see, the params are not described; in particular I'm interested in old_fval and old_old_fval.

There was a letter from the NLPy developer about an alternative (a link to an article was provided); also, Matthieu proposed using one of his own solvers. I think the importance of the auxiliary solver is very high, and it should be addressed first. My opinion on the next thing to do is to use an appropriate solver from Matthieu's package. However, Matthieu's syntax differs too much from the openopt one. I think first of all there should be an openopt binding to Matthieu's package, which will allow oo users to use the same syntax:

prob = NLP(...)
r = prob.solve()

I would implement the binding by myself, but I lack well-described API documentation of Matthieu's code. First of all I'm interested in:
1) what solvers does it contain?
2) how can xtol, funtol, contol etc be passed to the solvers?
then, secondarily (it can wait; maybe default parameters would be enough for now):
3) which params can I modify (like the type of step (Armijo|Powell|etc), etc)?

BTW ralg is also missing a good line-search optimizer. It requires one that finds solutions with slope angle > pi/2. But that can wait; it already has a reasonably good one, and the problem with lincher is more pressing.

So I think the next GSoC schedule step should be connecting 1-2 of Matthieu's solvers (ones that take the slope angle into account, like strong Wolfe conditions do) to native openopt syntax. Do you agree?

If yes, my question to Matthieu: can you provide a description of some appropriate solvers from your package? and/or an example of usage?

Regards, D.
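For reference, the strong Wolfe conditions linked above are short enough to state directly in code — a sketch only, with c1 and c2 defaults matching the line_search signature quoted earlier:

```python
import numpy as np

def satisfies_strong_wolfe(f, grad, x, p, alpha, c1=1e-4, c2=0.9):
    # Sufficient decrease (Armijo): f(x + a*p) <= f(x) + c1*a*p.g(x)
    # Strong curvature:             |p.g(x + a*p)| <= c2*|p.g(x)|
    g0 = np.dot(p, grad(x))
    armijo = f(x + alpha * p) <= f(x) + c1 * alpha * g0
    curvature = abs(np.dot(p, grad(x + alpha * p))) <= c2 * abs(g0)
    return armijo and curvature
```

For f(x) = x.x at x = 1 with direction p = -1, a step of 0.5 satisfies both conditions, while 0.01 fails the curvature condition (the step is too short).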
From matthieu.brucher at gmail.com Wed Aug 1 10:18:41 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Wed, 1 Aug 2007 16:18:41 +0200
Subject: [SciPy-dev] GSoC schedule question (letter for my mentors & Matthieu Brucher)
In-Reply-To: <46B08F5C.1090507@ukr.net>
References: <46B08F5C.1090507@ukr.net>
Message-ID: 

Hi,

> So, as you see, params are not described; especially I'm interested in
> old_fval and old_old_fval.
> So, there was a letter from NLPy developer about an alternative (link to
> an article was provided), also, Matthieu proposed using one of his own
> solvers.
> I think the importance of the auxiliary solver is very high and should be
> done in 1st order. My opinion about next thing to do is using an appropriate
> solver from Matthieu's package. However, Matthieu's syntax differs too much
> from openopt one. I think first of all there should be an openopt binding to
> Matthieu's package, that will allow for oo users to use same syntax:
> prob = NLP(...)
> r = prob.solve()
> I would implement the binding by myself, but I miss well-described API
> documentation of Matthieu's code. First of all I'm interested
> 1) what solvers does it contain?

I put a small description here: https://projects.scipy.org/scipy/scikits/wiki/Optimization (still in progress).

> 2) how xtol, funtol, contol etc can be passed to the solvers?

Each of these parameters is either step information, line search information or criterion information. Each parameter must be given to the corresponding object that will use it (I didn't want to centralize everything, as some modules need pre-computation before they can be used in the optimizer, like the Fibonacci section search).

> then, secondary (it can wait, maybe default parameters would be enough for
> now)
> 3) which params can I modify (like type of step (Armijo|Powell|etc), etc)

You can modify everything. The goal is to provide bricks that you can build together so that the optimizer makes what you need.
If you want to provide new modules, here are some "rules":
- the function should be provided as an object that defines the correct methods, like __call__, gradient or hessian if needed (the case of approximation of the gradient as a finite-element one should be programmed with a class from which the function derives, but we can speak of this in another mail if you want details on this one)
- a criterion module takes only one argument, which is the current state of the optimizer
- a step module takes three arguments: the function being optimized, the point where to search for a step and the state of the optimizer
- a line search module takes four arguments: the point, the computed step, the function and the state (I suppose this should be refactored to be more consistent with the step module...)
- the core optimizer uses these modules and dispatches the data accordingly

> BTW ralg is also missing a good line-search optimizer. It requires the one
> that finds solutions with slope angle > pi/2. But it can wait, it has one
> quite good and problem with lincher is more actual.

If you have an algorithm that can do this, you only have to program it and everyone will be able to use it with the other modules.

> So I think the next GSoC schedule step should be connection of 1-2
> Matthieu's solvers (that take into account slope angle, like strong Wolfe
> conditions do) to native openopt syntax.

If I understand this correctly, it is wrapping some usual combinations together so that people use them without knowing, like for the brent function and the Brent class? It should be easy by overriding the optimizer constructor and by adding a solve method that just calls optimize() (in this case, I'll probably modify optimize() so that it does not return something, and the state of the optimizer would provide the answer).
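To make the "rules" above concrete, here is a toy rendition of the brick-style composition: a step module, a line-search module, a criterion module and a core optimizer that dispatches between them. All class names are illustrative, not the actual API of Matthieu's package:

```python
import numpy as np

class GradientStep:
    # step module: (function, point, state) -> search direction
    def __call__(self, function, point, state):
        return -function.gradient(point)

class FixedStepSearch:
    # line-search module: (point, computed step, function, state) -> new point
    def __init__(self, alpha):
        self.alpha = alpha
    def __call__(self, point, step, function, state):
        return point + self.alpha * step

class IterationCriterion:
    # criterion module: sees only the current optimizer state
    def __init__(self, maxiter):
        self.maxiter = maxiter
    def __call__(self, state):
        return state["iteration"] < self.maxiter

class Optimizer:
    # core optimizer: dispatches data between the modules
    def __init__(self, function, x0, step, line_search, criterion):
        self.function = function
        self.x = np.asarray(x0, dtype=float)
        self.step = step
        self.line_search = line_search
        self.criterion = criterion
        self.state = {"iteration": 0}
    def optimize(self):
        while self.criterion(self.state):
            d = self.step(self.function, self.x, self.state)
            self.x = self.line_search(self.x, d, self.function, self.state)
            self.state["iteration"] += 1
        return self.x

class Quadratic:
    # function object defining __call__ and gradient, as required
    def __call__(self, x):
        return float(np.dot(x - 1.0, x - 1.0))
    def gradient(self, x):
        return 2.0 * (x - 1.0)

x = Optimizer(Quadratic(), [0.0, 0.0], GradientStep(),
              FixedStepSearch(0.1), IterationCriterion(100)).optimize()
```

Swapping in a different line-search or stopping criterion only means passing a different object to the constructor; the core loop never changes.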
If you need more details, ask specific questions, I'll gladly answer them (and add them to the wiki) Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From peridot.faceted at gmail.com Wed Aug 1 11:42:57 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Wed, 1 Aug 2007 11:42:57 -0400 Subject: [SciPy-dev] FFTW performances in scipy and numpy In-Reply-To: <46B03B0E.5050605@ar.media.kyoto-u.ac.jp> References: <46B01687.3050604@ar.media.kyoto-u.ac.jp> <46B03B0E.5050605@ar.media.kyoto-u.ac.jp> Message-ID: On 01/08/07, David Cournapeau wrote: > Anne Archibald wrote: > > > > Not just libraries; with SSE2 and related instruction sets, it's quite > > possible that even ufuncs could be radically accelerated - it's > > reasonable to use SIMD (and cache control!) for even the simple case > > of adding two arrays into a third. No code yet exists in numpy to do > > so, but an aggressive optimizing compiler could do something with the > > code that is there. (Of course, this has observable numerical effects, > > so there would be the same problem as for gcc's -ffast-math flag.) > The problem of precision really is specific to SSE and x86, right ? But > since apple computers also use those now, I guess the problem is kind of > pervasive :) I think some other architectures (MIPS? not sure) may also use an intermediate representation with more accuracy. As you say, though, x86 and x86-64 are fairly pervasive. BLAS would of course probably be faster (though how well does it cope with peculiarly-strided data?) but I expect resistance to making numpy depend on BLAS. > > Really large numpy arrays are already going to be SIMD-aligned (on > > Linux at least), because they are allocated on fresh pages. Small > > arrays are going to waste space if they're SIMD-aligned. 
So the > > default allocator is probably fine as it is, but it would be handy to > > have alignment as an additional property one could request from > > constructors and check from anywhere. I would hesitate to make it a > > flag, since one might well care about page alignment, 32-bit > > alignment, or whatever. > Are you sure about the page thing ? A page is 4kb, right ? This would > mean any double numpy arrays above 512 items is aligned... which is not > what I observed when I tested. Since I screwed things up last time I > checked, I should test again, though. By "really large" I don't necessarily mean "larger than a page"; I don't know what malloc's threshold is. I had in mind the 300-MB arrays I'm allocating, which are definitely on fresh pages (which allows malloc to dump them back to the OS when they get freed). Anne From openopt at ukr.net Wed Aug 1 12:57:23 2007 From: openopt at ukr.net (dmitrey) Date: Wed, 01 Aug 2007 19:57:23 +0300 Subject: [SciPy-dev] GSoC schedule question (letter for my mentors & Matthieu Brucher) In-Reply-To: References: <46B08F5C.1090507@ukr.net> Message-ID: <46B0BB73.6040100@ukr.net> Hi Matthieu, thank you for the answer. Please inform me, does your package provide a func that is equivalent to scipy.optimize.line_search? Can you supply an example of usage of the one? here's a link to Wolfe conditions: http://en.wikipedia.org/wiki/Wolfe_conditions so 1) for lincher I need something like that (If you have a solver that uses better conditions than Wolfe ones - ok). 2) for ralg I need auxiliary line search solver that will surely provide (pk, grad f(xk+alpha*pk)) <= 0 1st task is much more important. 2nd one have a solver already, but not the best one, in future I intend to replace the one by something better. Do you have any idea about 1st , or maybe 2nd? Regards, D. Matthieu Brucher wrote: > Hi, > > > So, as you see, params are not described; especially I'm > interested in old_fval and old_old_fval. 
> So, there was a letter from NLPy developer about an alternative > (link to an article was provided), also, Matthieu proposed using > one of his own solvers. > I think the importance of the auxiliary solver is very high and > should be done in 1st order. My opinion about next thing to do is > using an appropriate solver from Matthieu's package. However, > Matthieu's syntax differs too much from openopt one. I think first > of all there should be an openopt binding to Matthieu's package, > that will allow for oo users to use same syntax: > prob = NLP(...) > r = prob.solve() > I would implement the binding by myself, but I miss well-described > API documentation of Matthieu's code. First of all I'm interested > 1) what solvers does it contain? > > > > I put a small description here : > https://projects.scipy.org/scipy/scikits/wiki/Optimization (still in > progress). > > > 2) how xtol, funtol, contol etc can be passed to the solvers? > > > > Each of these parameters are either step information, line search > information or criterion information. Each parameter must be given to > the corresponding object that will use it (I didn't want to centralize > everything as some modules need pre-computation before they can be > used in the optimizer, like the Fibonacci section search). > > > then, secondary (it can wait, maybe default parameters would be > enough for now) > 3) which params can I modify (like type of step > (Armijo|Powell|etc), etc) > > > > You can modify everything. The goal is to provide bricks that you can > build together so that the optimizer makes what you need. 
If you want > to provide new modules, here are some "rules" : > - the function should be provided as an object that defines the > correct methods, like __call__, gradient or hessian if needed (the > case of approximation of the gradient as a finite-element one should > be programmed with a class from which the function derives, but we can > speak of this in another mail if you want details on this one) > - a criterion module takes only one argument which is the current > state of the optimizer > - a step module takes three arguments : the function being optimized, > the point where to search for a step and the state of the optimizer > - a line search module takes a four arguments : the point, the > computed step, the function and the state (I suppose this should be > refactored to be more consistent with the step module...) > - the core optimizer that uses these mdoules and dispatches the data > accordingly > > > BTW ralg is also missing a good line-search optimizer. It requires > the one that finds solutions with slope angle > pi/2. But it can > wait, it has one quite good and problem with lincher is more actual. > > > > If you have an algorithm that can do this, you only have to program it > and everyone will be able to use it with the other modules. > > > So I think the next GSoC schedule step should be connection of 1-2 > Matthieu's solvers (that take into account slope angle, like > strong Wolfe conditions do) to native openopt syntax. > > > > If I understand this correctly, it is wrapping some usual combinations > together so that people use them without knowing, like for the brent > functiona nd the Brent class ? It should be easy by overriding the > optimizer constructor and by adding a solve method that just calls > optimize() (in this case, I'll probably modify optimize() so that it > does not return something, and the state of the optimizer would > provide the answer). > > > Are you agree? 
> > if yes, my question to Matthieu: can you provide a description of > some appropriate solvers from your package? and/or an example of > usage? > > > If you need more details, ask specific questions, I'll gladly answer > them (and add them to the wiki) > > Matthieu > ------------------------------------------------------------------------ > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From jtravs at gmail.com Wed Aug 1 13:22:59 2007 From: jtravs at gmail.com (John Travers) Date: Wed, 1 Aug 2007 18:22:59 +0100 Subject: [SciPy-dev] FFTW performances in scipy and numpy In-Reply-To: <46B07160.9090500@ar.media.kyoto-u.ac.jp> References: <46B01687.3050604@ar.media.kyoto-u.ac.jp> <46B03B0E.5050605@ar.media.kyoto-u.ac.jp> <3a1077e70708010418r32d507bwbd837feb6101037f@mail.gmail.com> <46B07160.9090500@ar.media.kyoto-u.ac.jp> Message-ID: <3a1077e70708011022q6d426b08qb642fdf485615ae8@mail.gmail.com> On 01/08/07, David Cournapeau wrote: > John Travers wrote: > > On 01/08/07, David Cournapeau wrote: > > > >> Anne Archibald wrote: > >> > >>> On 01/08/07, David Cournapeau wrote: > >>> > >>> > >>>> I am one of the contributor to numpy/scipy. Let me first say I am > >>>> *not* the main author of the fftw wrapping for scipy, and that I am a > >>>> relatively newcommer in scipy, and do not claim a deep understanding of > >>>> numpy arrays. But I have been thinking a bit on the problem since I am a > >>>> big user of fft and debugged some problems in the scipy code since. > >>>> > >> Ok, I prepared a small package to test several strategies: > >> > >> http://www.ar.media.kyoto-u.ac.jp/members/david/archives/fftdev.tbz2 > >> > >> By doing make test, it should build out of the box and run the tests (if > >> you are on Linux, have gcc and fftw3, of course :) ). I did not even > >> check whether the computation is OK (I just tested against memory > >> problems under valgrind). 
> >> > >> 3 strategies are available: > >> - Have a flag to check whether the given array is 16 bytes aligned, > >> and conditionally build plans using this info > >> - Use FFTW_UNALIGNED, and do not care about alignment > >> - Current strategy: copy. > >> > >> The three strategies use FFTW_MEASURE, which I didn't do before, and may > >> > > > > Another strategy worth trying is using FFTW_MEASURE once and then > > using FFTW_ESTIMATE for additional arrays. FFTW accumulates wisdom and > > so the initial call with MEASURE means that further estimated plans > > also benefit. In my simple tests it comes very close to measuring for > > each individual array. > > > Is this true for different array sizes? Yes it is, in fact, if you use the fftw_flops function you find that the number of operations required is identical if you plan with measure, plan with estimate (with experience at the same size) or if you plan with estimate with experience at a different size. Of course this is only on my machine (AMD Athlon 64 3200+). The only extra overhead is the planning for each fft (and I haven't tried a comparison with unaligned data). This overhead appears to be about 10% for small (2**15) size arrays. J

From matthieu.brucher at gmail.com Wed Aug 1 13:42:35 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 1 Aug 2007 19:42:35 +0200 Subject: [SciPy-dev] GSoC schedule question (letter for my mentors & Matthieu Brucher) In-Reply-To: <46B0BB73.6040100@ukr.net> References: <46B08F5C.1090507@ukr.net> <46B0BB73.6040100@ukr.net> Message-ID: > > Please inform me, does your package provide a func that is equivalent to > scipy.optimize.line_search? Yes, every class in the line_search module. There are simple line searches as well as exact ones, and inexact searches such as the Wolfe-Powell and strong Wolfe-Powell rules. Sometimes you don't need something as complicated as the WP rules; this is the reason for the simpler searches.
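The distinction Matthieu draws between the full Wolfe-Powell rules and the simpler searches can be illustrated with a minimal backtracking search that enforces only the Armijo sufficient-decrease condition. This is a hedged sketch, not code from Matthieu's package; the function names and defaults are made up for illustration, and it assumes p is a descent direction (otherwise the loop would not terminate).

```python
def backtracking_line_search(f, grad_f, x, p, alpha0=1.0, c1=1e-4, tau=0.5):
    """Shrink the step until f(x + alpha*p) <= f(x) + c1*alpha*<grad f(x), p>.

    Assumes p is a descent direction, i.e. <grad f(x), p> < 0.
    """
    fx = f(x)
    # directional derivative of f at x along p
    slope = sum(g * d for g, d in zip(grad_f(x), p))
    alpha = alpha0
    while f([xi + alpha * di for xi, di in zip(x, p)]) > fx + c1 * alpha * slope:
        alpha *= tau
    return alpha

# minimize f(x) = x^2 from x = 3 along the steepest-descent direction
f = lambda x: x[0] ** 2
grad = lambda x: [2 * x[0]]
alpha = backtracking_line_search(f, grad, [3.0], [-6.0])
```

The Wolfe-Powell rules add a curvature condition on top of the sufficient-decrease test, which prevents the accepted step from being too short; the sketch above checks only the first condition.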
Can you supply an example of usage of the one? > here's a link to Wolfe conditions: > http://en.wikipedia.org/wiki/Wolfe_conditions > so > 1) for lincher I need something like that (If you have a solver that > uses better conditions than Wolfe ones - ok). Standard rules are implemented, as I said. Better conditions could be achieved with the second order step rules, I think I put them in the list that Alan sent you. 2) for ralg I need auxiliary line search solver that will surely provide > (pk, grad f(xk+alpha*pk)) <= 0 I do not have this at the moment (unless you count the exact line searches), but I'll search for one and implement it (if you have an idea of one, please telle me) 1st task is much more important. > 2nd one have a solver already, but not the best one, in future I intend > to replace the one by something better. I'm all ears, I'll be happy to help ;) Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Thu Aug 2 00:54:54 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 02 Aug 2007 13:54:54 +0900 Subject: [SciPy-dev] FFTW performances in scipy and numpy In-Reply-To: References: <46B01687.3050604@ar.media.kyoto-u.ac.jp> Message-ID: <46B1639E.30902@ar.media.kyoto-u.ac.jp> Steven G. Johnson wrote: > On Wed, 1 Aug 2007, David Cournapeau wrote: >> - making numpy data 16 bytes aligned. This one is a bit tricky. I >> won't bother you with the details, but generally, numpy data may not >> be even "word aligned". Since some archs require some kind of >> alignement, there are some mechanisms to get aligned buffers from >> unaligned buffers in numpy API; I was thinking about an additional >> flag "SIMD alignement", since this could be quite useful for many >> optimized libraries using SIMD. But maybe this does not make sense, I >> have not yet thought enough about it to propose anything concrete to >> the main numpy developers. 
> > One possibility might be to allocate *all* arrays over a certain size > using posix_memalign, to get 16-byte alignment, instead of malloc or > whatever you are doing now, since if the array is big enough the > overhead of alignment is negligible. It depends on the importance of > FFTs, of course, although there may well be other things where 16-byte > alignment will make SIMD optimization easier. I don't know anything > about Python/NumPy's memory management, however, so I don't know > whether this suggestion is feasible. From my understanding of numpy (quite limited, I cannot emphasize this enough), the data buffer can easily be made 16 bytes aligned, since it is always allocated using a macro which calls malloc, but could be replaced by posix_memalign. But then, there is another problem: many numpy arrays are more than just a data buffer; they are not always contiguous, etc... To interface with most C libraries, this means that a copy has to be made somehow if the arrays are not contiguous or aligned; a whole infrastructure exists in numpy to do exactly this, but I do not know it really well yet. Whether replacing the current allocation with posix_memalign would enable the current infrastructure to always return 16 bytes aligned buffers is unclear to me; I should discuss this with people more knowledgeable than me. > > On x86, malloc is guaranteed to be 8-byte aligned, so barring > conspiracy the probability of 16-byte alignment should be 50%. On > x86_64, on the other hand, I've read that malloc is guaranteed to be > 16-byte aligned (for allocations > 16 bytes) by the ABI, so you should > have no worries in this case as long as Python uses malloc or similar. See below for my findings about this problem. >> - I have tried FFTW_UNALIGNED + FFTW_ESTIMATE plans; unfortunately, >> I found that the performances were worse than using FFTW_MEASURE + >> copy (the copies are done into aligned buffers). 
I have since >> discovered that this may be due to the totally broken architecture of >> my main workstation (a Pentium four): on my recent macbook (On linux, >> 32 bits, CoreDuo2), using no copy with FFTW_UNALIGNED is much better. > > If you use FFTW_UNALIGNED, then essentially FFTW cannot use SIMD (with > some exceptions). Whether a copy will be worth it for performance in > this case will depend upon the problem size. For very large problems, > I'm guessing it's probably not worth it, since SIMD has less of an > impact there and cache has a bigger impact. > >> - The above problem is fixable if we add a mechanism to choose >> plans (ESTIMATE vs MEASURE vs ... I found that for 1d cases at least, >> ESTIMATE vs MEASURE is what really counts performance wise). > > One thing that might help is to build in wisdom at least for a few > small power-of-two sizes, created at build time. e.g. wisdom for > sizes 128, 256, 512, 1024, 2048, 4096, and 8192 is < 2kB. > > (We also need to work on the estimator heuristics...this has proven to > be a hard problem in general, unfortunately.) This is the way matlab works, right ? If I understand correctly, wisdoms are a way to compute plans "offline". So for example, if you compute plans with FFTW_MEASURE | FFTW_UNALIGNED, for inplace transforms and a set of sizes, you can record it in a wisdom, and reload it so that later calls with FFTW_MEASURE | FFTW_UNALIGNED will be fast ? Anyway, all this sounds like it should be solved by adding a better infrastructure to the current wrappers (a la matlab). > Your claim that numpy arrays are almost never 16-byte aligned strikes > me as odd; if true, it means that NumPy is doing something terribly > weird in its allocation. > > I just did a quick test, and on i386 with glibc (Debian GNU/Linux), > malloc'ing 10000 arrays of double variables of random size, almost > exactly 50% are 16-byte aligned as expected. And on x86_64, 100% are > 16-byte aligned (because of the abovementioned ABI requirement). 
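The 16-byte test both sides are running boils down to checking the low four bits of the address. The same experiment can be sketched in Python with ctypes; this is only an illustration of the check itself, not of numpy's allocator, since the addresses here come from ctypes buffers and the measured ratio is merely indicative.

```python
import ctypes
import random

def is_simd_aligned(addr, nbytes=16):
    """True if addr sits on an nbytes boundary (nbytes a power of two)."""
    return (addr & (nbytes - 1)) == 0

def aligned_ratio(niter=1000, maxsize=1 << 16):
    """Allocate niter buffers of random size; return the 16-byte-aligned fraction."""
    hits = 0
    for _ in range(niter):
        buf = ctypes.create_string_buffer(random.randint(1, maxsize))
        if is_simd_aligned(ctypes.addressof(buf)):
            hits += 1
    return hits / niter

ratio = aligned_ratio()
```

As in the C discussion, the interesting quantity is how far the ratio is from 50%, which depends entirely on the allocator underneath.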
Well, then I must be doing something wrong in my tests (I attached the test program I use to test pointers returned by malloc). This always returns 1 for buffers allocated with fftw_malloc, but always 0 for buffers allocated with malloc on my machine, which is quite similar to yours (ubuntu, x86, glibc). Not sure what I am missing (surely something obvious). regards, David

/*
 * test.c: allocate buffers of random size, and count the ones which are
 * 16 bytes aligned
 */
#include <stdio.h>
#include <stdlib.h>
#include <stddef.h>
#include <math.h>
#include <time.h>

#ifdef FORCE_ALIGNED_MALLOC
#include <fftw3.h>
#define MALLOC(size) fftw_malloc((size))
#define FREE(ptr) fftw_free((ptr))
#else
#define MALLOC(size) malloc((size))
#define FREE(ptr) free((ptr))
#endif

/* Return 1 if the pointer x is 16 bytes aligned */
#define CHECK_SIMD_ALIGNMENT(x) \
    ( ((ptrdiff_t)(x) & 0xf) == 0)

const size_t MAXMEM = 1 << 26;

int allocmem(size_t n);

int main(void)
{
    int i;
    int niter = 100000;
    size_t s;
    int acc = 0;
    double mean = 0, min = 1e100, max = 0;
    double r;

    /* not really random, but enough for here */
    srand48(time(0));

    for(i = 0; i < niter; ++i) {
        /* forcing the size to be in [1, MAXMEM] */
        r = drand48() * MAXMEM;
        s = floor(r);
        if (s < 1) {
            s = 1;
        } else if (s > MAXMEM) {
            s = MAXMEM;
        }
        /* computing max and min allocated size for summary */
        mean += r;
        if (max < s) {
            max = s;
        }
        if (min > s) {
            min = s;
        }
        /* allocating */
        acc += allocmem(s);
    }
    fprintf(stdout, "ratio %d / %d; average size is %f (m %f, M %f)\n",
            acc, niter, mean / niter, min, max);

    return 0;
}

/*
 * Alloc n bytes, and return 1 if the memory allocated with this size was 16
 * bytes aligned.
 */
int allocmem(size_t n)
{
    char* tmp;
    int isal = 0;

    tmp = MALLOC(n);
    isal = CHECK_SIMD_ALIGNMENT(tmp);

    /* Not sure this is useful, but this should guarantee that tmp is really
     * allocated, and that the compiler is not getting too smart */
    tmp[0] = 0;
    tmp[n-1] = 0;

    FREE(tmp);

    return isal;
}

From jtravs at gmail.com Thu Aug 2 09:41:00 2007 From: jtravs at gmail.com (John Travers) Date: Thu, 2 Aug 2007 14:41:00 +0100 Subject: [SciPy-dev] FFTW performances in scipy and numpy In-Reply-To: <3a1077e70708011022q6d426b08qb642fdf485615ae8@mail.gmail.com> References: <46B01687.3050604@ar.media.kyoto-u.ac.jp> <46B03B0E.5050605@ar.media.kyoto-u.ac.jp> <3a1077e70708010418r32d507bwbd837feb6101037f@mail.gmail.com> <46B07160.9090500@ar.media.kyoto-u.ac.jp> <3a1077e70708011022q6d426b08qb642fdf485615ae8@mail.gmail.com> Message-ID: <3a1077e70708020641m5556d0bam8e89d3246d6dd23a@mail.gmail.com> On 01/08/07, John Travers wrote: > On 01/08/07, David Cournapeau wrote: > > John Travers wrote: > > > Another strategy worth trying is using FFTW_MEASURE once and then > > > using FFTW_ESTIMATE for additional arrays. FFTW accumulates wisdom and > > > so the initial call with MEASURE means that further estimated plans > > > also benefit. In my simple tests it comes very close to measuring for > > > each individual array. > > > > > Is this true for different arrays size ? > > Yes it is, in fact, if you use the fftw_flops function you find that > the number of operations required is identical if you plan with > measure, plan with estimate (with experience at the same size) or if > you plan with estimate with experience at a different size. Of course > this is only on my machine (AMD Athlon 64 3200+). The only extra > overhead is the planning for each fft (and I haven't tried a > comparison with unaligned data). This overhead appears to be about 10% > for small (2**15) size arrays. > I realized as I was cycling home last night that the fftw_flops function doesn't quite do what I thought. 
From timing measurements (using your cycles.h) I've found that the accumulated wisdom doesn't seem to significantly help for different sizes. In fact, I've found that planning with wisdom (but still using FFTW_ESTIMATE) increases the total time as the planning time increases. Though this is only significant for small arrays, as the wisdom does help for same size arrays which are large. Anyway, back to work for me. J From david at ar.media.kyoto-u.ac.jp Fri Aug 3 00:38:26 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 03 Aug 2007 13:38:26 +0900 Subject: [SciPy-dev] FFTW performances in scipy and numpy In-Reply-To: References: <46B01687.3050604@ar.media.kyoto-u.ac.jp> <46B1639E.30902@ar.media.kyoto-u.ac.jp> Message-ID: <46B2B142.9020909@ar.media.kyoto-u.ac.jp> Steven G. Johnson wrote: > On Thu, 2 Aug 2007, David Cournapeau wrote: >> This is the way matlab works, right ? If I understand correctly, >> wisdoms are a way to compute plans "offline". So for example, if you >> compute plans with FFTW_MEASURE | FFTW_UNALIGNED, for inplace >> transforms and a set of sizes, you can record it in a wisdom, and >> reload it such as later calls with FFTW_MEASURE | FFTW_UNALIGNED will >> be fast ? > > Yes, although a wisdom file can contain as many saved plans as you want. > >> Anyway, all this sounds like it should be solved by adding a better >> infrastructure the current wrappers (ala matlab). > > I know a little about how the Matlab usage of FFTW works, and they are > definitely not getting the full performance you would get by calling > FFTW yourself from C etc. So they are not necessarily the gold standard. I certainly do not consider matlab as the gold standard. That's more a reason why the current situation to have worse performances than matlab for fft in scipy with fftw3 is not good. 
But I will work on a better cache mechanism with a user interface in python (by the way, the complex implementation of fft with fftw3 has not used copies for a few days now). > If you malloc and then immediately free, most of the time the malloc > implementation is just going to re-use the same memory and so you will > get the same pointer over and over. So it's not a good test of malloc > alignment. Ah, this was stupid indeed. I should have checked the addresses returned by malloc. But then, playing a bit with the test program, I found that sizes above ~17000 doubles start to make the ratio of aligned data decrease. This decrease does not happen if I force malloc to use sbrk and not mmap for big sizes: this is consistent with the fact that the threshold for mmapping areas on 32 bits is 128 kb in gnu libc. Basically, areas which are allocated through mmap seem never to be 16 bytes aligned ! This is starting to go way beyond my knowledge... I thought mmap'd areas were page aligned, which implies 16-byte alignment. Maybe malloc does not return the pointer it got from mmap directly, but a shifted version for some reason ? Maybe my test is flawed again ? I pasted it just below. For example, if you have N = 65384, the ratio is closer to 10 % than 50 %; if you force not using mmap (M_MMAP_MAX = 0), then it goes back to ~50 %. 
regards, David

---------------------------
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <time.h>
#include <malloc.h>

#define NARRAY 10000
#define N (1 << 16)

int main(void)
{
    void *a[NARRAY];
    uintptr_t p;
    int i, nalign = 0;
    int st;

    /* default value, at least on my 32 bits ubuntu with glibc: 128 kb */
    st = mallopt(M_MMAP_THRESHOLD, 128 * 1024);
    if (st == 0) {
        fprintf(stderr, "changing malloc option failed\n");
    }
    st = mallopt(M_MMAP_MAX, NARRAY);
    if (st == 0) {
        fprintf(stderr, "changing malloc option failed\n");
    }

    srand(time(NULL));
    for (i = 0; i < NARRAY; ++i) {
        a[i] = malloc((rand() % N + 2) * sizeof(double));
        p = (uintptr_t) a[i];
        if (p % 16 == 0)
            ++nalign;
    }
    printf("%d/%d = %g%% are 16-byte aligned\n",
           nalign, NARRAY, nalign * 100.0 / NARRAY);

    for (i = 0; i < NARRAY; ++i)
        free(a[i]);

    return 0;
}

From openopt at ukr.net Fri Aug 3 09:58:46 2007 From: openopt at ukr.net (dmitrey) Date: Fri, 03 Aug 2007 16:58:46 +0300 Subject: [SciPy-dev] DocstringStandards webpage Message-ID: <46B33496.1030906@ukr.net> hi all, there is a doctest format mentioned here http://projects.scipy.org/scipy/numpy/wiki/DocstringStandards can you attach an example to the web page? Also, it would be nice if a real func from scipy or numpy were provided (as an example) on the web page. Regards, D.

From openopt at ukr.net Fri Aug 3 10:58:34 2007 From: openopt at ukr.net (dmitrey) Date: Fri, 03 Aug 2007 17:58:34 +0300 Subject: [SciPy-dev] max and argmax Message-ID: <46B3429A.2010804@ukr.net> hi all, in MATLAB

[val ind] = max(arr)

returns both max and index of the max element. In Python I use

ind = argmax(arr)
val = arr[ind]

is it possible to avoid 2 lines, i.e. somehow to replace by single one? D. 
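For plain Python sequences (as opposed to the numpy/scipy core function dmitrey is asking about), the two steps can be collapsed into one line with enumerate. This is an editorial aside sketching the idiom, not part of numpy's API:

```python
arr = [3.0, 7.5, 1.2]

# index and value of the maximum in one line:
# enumerate yields (index, value) pairs; max compares by the value
ind, val = max(enumerate(arr), key=lambda pair: pair[1])
# -> ind == 1, val == 7.5
```

For ties, max returns the first maximal element, matching the behavior of argmax on a flat array.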
From robert.kern at gmail.com Fri Aug 3 11:06:50 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 03 Aug 2007 10:06:50 -0500 Subject: [SciPy-dev] max and argmax In-Reply-To: <46B3429A.2010804@ukr.net> References: <46B3429A.2010804@ukr.net> Message-ID: <46B3448A.20702@gmail.com> dmitrey wrote: > hi all, > in MATLAB > [val ind] = max(arr) > returns both max and index of the max element. > > In Python I use > ind = argmax(arr) > val = arr[ind] > is it possible to avoid 2 lines, i.e. somehow to replace by single one?

def matlablike_max(arr):
    ind = argmax(arr)
    return ind, arr[ind]

ind, val = matlablike_max(arr)

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From openopt at ukr.net Fri Aug 3 11:10:05 2007 From: openopt at ukr.net (dmitrey) Date: Fri, 03 Aug 2007 18:10:05 +0300 Subject: [SciPy-dev] max and argmax In-Reply-To: <46B3448A.20702@gmail.com> References: <46B3429A.2010804@ukr.net> <46B3448A.20702@gmail.com> Message-ID: <46B3454D.80601@ukr.net> Thank you, I understood the solution, but I meant a func from numpy or scipy core. Regards, D. Robert Kern wrote: > dmitrey wrote: > >> hi all, >> in MATLAB >> [val ind] = max(arr) >> returns both max and index of the max element. >> >> In Python I use >> ind = argmax(arr) >> val = arr[ind] >> is it possible to avoid 2 lines, i.e. somehow to replace by single one? 
>> > > def matlablike_max(arr): > ind = argmax(arr) > return ind, arr[ind] > > ind, val = matlablike_max(arr) > > From charlesr.harris at gmail.com Fri Aug 3 21:47:12 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 3 Aug 2007 19:47:12 -0600 Subject: [SciPy-dev] DocstringStandards webpage In-Reply-To: <46B33496.1030906@ukr.net> References: <46B33496.1030906@ukr.net> Message-ID: On 8/3/07, dmitrey wrote: > > hi all, > there is a doctest format mentioned here > http://projects.scipy.org/scipy/numpy/wiki/DocstringStandards > can you attach an example in the web page? > Also, it would be nice if any real func from scipy or numpy have been > provided (as an example) in the web page. Unfortunately, the DocstringStandards produce crap and epydoc errors. Try it and see. I will commit another version sometime this weekend. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Fri Aug 3 21:58:30 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 3 Aug 2007 19:58:30 -0600 Subject: [SciPy-dev] Build problems Message-ID: Hi All, Anyone see this before? atlas_blas_info: libraries lapack,blas,cblas,atlas not found in /usr/local/atlas/lib/ libraries lapack,blas,cblas,atlas not found in /usr/local/lib libraries lapack,blas,cblas,atlas not found in /usr/lib/sse2 libraries lapack,blas,cblas,atlas not found in /usr/lib NOT AVAILABLE $[charris at localhost scipy]$ ls /usr/local/atlas/lib libatlas.a libcblas.a libf77blas.a liblapack.a libptcblas.a libptf77blas.a libatlas.so libcblas.so libf77blas.so liblapack.so libptcblas.so libptf77blas.so How come the libraries aren't found? Numpy builds and runs fine. This is scipy revision 3220. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From robert.kern at gmail.com Fri Aug 3 22:02:48 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 03 Aug 2007 21:02:48 -0500 Subject: [SciPy-dev] Build problems In-Reply-To: References: Message-ID: <46B3DE48.7010002@gmail.com> Charles R Harris wrote: > Hi All, > > Anyone see this before? > > atlas_blas_info: > libraries lapack,blas,cblas,atlas not found in /usr/local/atlas/lib/ > libraries lapack,blas,cblas,atlas not found in /usr/local/lib > libraries lapack,blas,cblas,atlas not found in /usr/lib/sse2 > libraries lapack,blas,cblas,atlas not found in /usr/lib > NOT AVAILABLE > > $[charris at localhost scipy]$ ls /usr/local/atlas/lib > libatlas.a libcblas.a libf77blas.a liblapack.a libptcblas.a > libptf77blas.a > libatlas.so libcblas.so libf77blas.so liblapack.so libptcblas.so > libptf77blas.so > > How come the libraries aren't found? Numpy builds and runs fine. This is > scipy revision 3220. "libraries lapack,blas,cblas,atlas not found" ^^^^ It's looking for a libblas, and there is none. Try setting the libraries to lapack,f77blas,cblas,atlas -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From openopt at ukr.net Sat Aug 4 06:56:55 2007 From: openopt at ukr.net (dmitrey) Date: Sat, 04 Aug 2007 13:56:55 +0300 Subject: [SciPy-dev] GSoC weekly report Message-ID: <46B45B77.9010404@ukr.net> hi all, This week I tried to use Matthieu's line-search routines in openopt. I need a good line-search alg that takes into account slope angle, like scipy.optimize.line_search. Unfortunately, that one seems to be obsolete and not maintained any more; no one can (or wants to) answer about the old_fval and old_old_fval parameters (moreover, other ones are not described as well, fortunately I understood them). So, my mailing with Matthieu continues, and I hope to solve all the problems raised in connection with it soon. 
Also, I began to convert the openopt documentation to the new standards. There I've got some troubles too, see my post and Charles R Harris's answer on the scipy-dev mailing list. rev 3209 contains changes (ticket 285) proposed by Alan Isaac (instead of those proposed by the ticket author). Several more hours were spent on ticket 464, which (as I had reported) was already done. Then it turned out that in Nils Wagner's numpy 1.0.4dev, asfarray(matrix([[0.3]])) is a matrix, while my numpy 1.0.1 (yesterday I have updated to 1.0.4) yields numpy.ndarray. As for me, I think the latter is more correct; I don't know why it was changed in the more recent numpy version. Nils has promised to submit a ticket. Regards, D.

From aisaac at american.edu Sat Aug 4 11:29:56 2007 From: aisaac at american.edu (Alan G Isaac) Date: Sat, 4 Aug 2007 11:29:56 -0400 Subject: [SciPy-dev] GSoC weekly report In-Reply-To: <46B45B77.9010404@ukr.net> References: <46B45B77.9010404@ukr.net> Message-ID: On Sat, 04 Aug 2007, dmitrey apparently wrote: > scipy.optimize.line_search. > Unfortunately, that one seems to be obsolete and not maintained any more; > no one can (or wants to) answer about the old_fval and old_old_fval parameters 1. Please make a distinction between "obsolete" and "not maintained". This is not obsolete and the lack of maintenance reflects its satisfactory state (aside from documentation). Please see the example of use in the definition of fmin_bfgs http://svn.scipy.org/svn/scipy/trunk/Lib/optimize/optimize.py 2. Your problem certainly is showing the inadequacy of documentation of this code. However the two old function values are being used to compute an initial step, as documented at http://svn.scipy.org/svn/scipy/trunk/Lib/optimize/minpack2/dcsrch.f The documentation there is quite good. The line search code is essentially an interface to the dcsrch code. 
Cheers, Alan Isaac

From aisaac at american.edu Sat Aug 4 11:37:41 2007 From: aisaac at american.edu (Alan G Isaac) Date: Sat, 4 Aug 2007 11:37:41 -0400 Subject: [SciPy-dev] GSoC weekly report In-Reply-To: <46B45B77.9010404@ukr.net> References: <46B45B77.9010404@ukr.net> Message-ID: On Sat, 04 Aug 2007, dmitrey apparently wrote: > I began to convert openopt documentation to new standards. > There I've got some troubles too, see my post and Charles > R Harris answer in scipy dev mail list. This should not slow you down much. Much of the example is fine. Until Charles posts the rewrite, just avoid parameter lists that do not specify the type

:Parameters:
    this : works as expected
    this : does not work

Cheers, Alan Isaac

From pearu at cens.ioc.ee Sat Aug 4 16:24:02 2007 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Sat, 4 Aug 2007 23:24:02 +0300 (EEST) Subject: [SciPy-dev] ExtGen - a Python Extension module Generation tool - something very new and fresh Message-ID: <50332.85.166.31.64.1186259042.squirrel@cens.ioc.ee> Hi, I have started to work on an ExtGen package that is a high-level tool for constructing Python extension modules. I'll need it in the G3 F2PY project but ExtGen might have more general interest and usage, hence the early announcement. For more information and a hello example, see http://www.scipy.org/ExtGen Enjoy, Pearu

From l.mastrodomenico at gmail.com Sat Aug 4 17:54:22 2007 From: l.mastrodomenico at gmail.com (Lino Mastrodomenico) Date: Sat, 4 Aug 2007 23:54:22 +0200 Subject: [SciPy-dev] ExtGen - a Python Extension module Generation tool - something very new and fresh In-Reply-To: <50332.85.166.31.64.1186259042.squirrel@cens.ioc.ee> References: <50332.85.166.31.64.1186259042.squirrel@cens.ioc.ee> Message-ID: 2007/8/4, Pearu Peterson : > I have started to work on an ExtGen package that is a high-level tool > for constructing Python extension modules. 
I'll need it in the G3 F2PY > project but ExtGen might have more general interest and usage, > hence the early announcement. If you somehow can magic up a way to generate modules written in C (for the standard CPython), in Java (for Jython) and in RPython (for PyPy) from the same source, that would be really awesome and incredibly useful. But I admit that I have no idea about how to do it (if it's possible at all). BTW, if you don't know it yet you may want to check out Pyrex () which does something very similar, but in a different way. -- Lino Mastrodomenico E-mail: l.mastrodomenico at gmail.com

From pearu at cens.ioc.ee Sat Aug 4 18:13:00 2007 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Sun, 5 Aug 2007 01:13:00 +0300 (EEST) Subject: [SciPy-dev] ExtGen - a Python Extension module Generation tool - something very new and fresh In-Reply-To: References: <50332.85.166.31.64.1186259042.squirrel@cens.ioc.ee> Message-ID: <59449.85.166.31.64.1186265580.squirrel@cens.ioc.ee> On Sun, August 5, 2007 12:54 am, Lino Mastrodomenico wrote: > BTW, if you don't know it yet you may want to check out Pyrex > () which > does something very similar, but in a different way. Yes, I know about Pyrex. Pyrex defines a Python-like language from which extension modules are generated, but for generating wrapper functions for Fortran or C programs, I'll need better control over how the extension modules are created. This includes dealing with converters between Python-C-Fortran types, for instance. I would consider ExtGen as a tool for programs that need to generate extension modules. 
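The "tool for programs that generate extension modules" idea can be caricatured in a few lines of Python: a generator emits C source text for a CPython wrapper function. This is a hypothetical string-template sketch, in no way ExtGen's actual API; all names below are invented for illustration, and the emitted body is only a skeleton.

```python
# Template for a CPython wrapper function; {{ and }} are literal braces.
C_TEMPLATE = """\
static PyObject *
{modname}_{name}(PyObject *self, PyObject *args)
{{
    /* parse the arguments, call the wrapped C function, build the result */
    return NULL;  /* placeholder body */
}}
"""

def make_wrapper(modname, name):
    """Return C source for a CPython wrapper function (skeleton only)."""
    return C_TEMPLATE.format(modname=modname, name=name)

src = make_wrapper("example", "hello")
```

A real generator would additionally track converters between Python, C, and Fortran types, which is exactly the kind of control Pearu says Pyrex does not give him.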
Pearu From david at ar.media.kyoto-u.ac.jp Sun Aug 5 09:11:32 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sun, 05 Aug 2007 22:11:32 +0900 Subject: [SciPy-dev] Need some info to solve #238: cblas vs fblas Message-ID: <46B5CC84.4010401@ar.media.kyoto-u.ac.jp> Hi, While trying to look at some of the pending bugs on mac os X, I came across a quite severe problem for scipy.linalg when building with gfortran and veclib on intel. I put my findings in #238: the problem seems to be that 4 functions in veclib use the "old fortran ABI" for returned values, and this causes memory corruption when used in scipy. I am not sure how to solve this problem: I tried many different things, to observe at the end that removing a file (fblaswrap) from the sources of fblas in the setup of linalg solved the problem... What is this file used for ? Can it be removed from the fblas module ? (conditionally to having found vecLib, of course). cheers, David From david at ar.media.kyoto-u.ac.jp Sun Aug 5 11:20:02 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 06 Aug 2007 00:20:02 +0900 Subject: [SciPy-dev] Need some info to solve #238: cblas vs fblas In-Reply-To: <46B5CC84.4010401@ar.media.kyoto-u.ac.jp> References: <46B5CC84.4010401@ar.media.kyoto-u.ac.jp> Message-ID: <46B5EAA2.2040601@ar.media.kyoto-u.ac.jp> David Cournapeau wrote: > Hi, > > While trying to look at some of the pending bugs on mac os X, I came > across a quite severe problem for scipy.linalg when building with > gfortran and veclib on intel. I put my findings in #238: the problem > seems to be that 4 functions in veclib use the "old fortran ABI" for > returned values, and this causes memory corruption when used in scipy. > > I am not sure how to solve this problem: I tried many different things, > to observe at the end that removing a file (fblaswrap) from the sources > of fblas in the setup of linalg solved the problem... What is this file > used for ? 
Can it be removed from the fblas module ? (conditionally to > having found vecLib, of course). Mmh, the above does not make sense, sorry. I have a patch for the bug in #238 which seems to solve the problem, but noticed that the error appears in two different submodules: scipy.lib and scipy.linalg. What's the difference between fblas in scipy.lib and fblas in scipy.linalg ? Isn't scipy.lib redundant with scipy.linalg ? This actually made me crazy since the above bug reappears at random depending on the order of tests in scipy (they both define the same symbols, and this took me a while to understand that the failing function was not the same as the one I was working on). cheers, David From pearu at cens.ioc.ee Sun Aug 5 11:59:24 2007 From: pearu at cens.ioc.ee (Pearu Peterson) Date: Sun, 5 Aug 2007 18:59:24 +0300 (EEST) Subject: [SciPy-dev] Need some info to solve #238: cblas vs fblas In-Reply-To: <46B5EAA2.2040601@ar.media.kyoto-u.ac.jp> References: <46B5CC84.4010401@ar.media.kyoto-u.ac.jp> <46B5EAA2.2040601@ar.media.kyoto-u.ac.jp> Message-ID: <52694.85.166.31.64.1186329564.squirrel@cens.ioc.ee> On Sun, August 5, 2007 6:20 pm, David Cournapeau wrote: > I have a patch for the bug in #238 which seems to solve the problem, but > noticed that the error appears in two different submodules: scipy.lib > and scipy.linalg. What's the difference between fblas in scipy.lib and > fblas in scipy.linalg ? Isn't scipy.lib redundant with scipy.linalg ? > This actually made me crazy since the above bug reappears at random > depending on the order of tests in scipy (they both define the same > symbols, and this took me a while to understand that the failing > function was not the same as the one I was working on). The plan was to move blas/lapack wrappers from scipy.linalg to scipy.lib.blas/lapack so that other scipy subpackage could use them without importing the whole scipy.linalg. It's still work in progress plan.. 
Blas/lapack wrappers in scipy.lib are improved versions found in scipy.linalg, just replacing the usage of scipy.linalg.blas/lapack with using scipy.lib.blas/lapack is unfinished. Pearu From stefan at sun.ac.za Sun Aug 5 17:15:28 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sun, 5 Aug 2007 23:15:28 +0200 Subject: [SciPy-dev] GSoC weekly report In-Reply-To: <46B45B77.9010404@ukr.net> References: <46B45B77.9010404@ukr.net> Message-ID: <20070805211528.GC13502@mentat.za.net> On Sat, Aug 04, 2007 at 01:56:55PM +0300, dmitrey wrote: > Several hours more were elapsed for (as I g\had report) already done > ticket 464. The, it turned out that in Nils Wagner numpy 1.0.4dev he has > asfarray(matrix([[0.3]])) being matrix, > while my numpy 1.0.1 (yesterday I have updated to 1.0.4) yields > numpy.ndarray. > As for me, I think latter is more correct, I don't know why it was > changed in more recent numpy version. > Nils have promised to submit a ticket. It was changed because Nils filed a ticket against the earlier behaviour. Regards St?fan From david at ar.media.kyoto-u.ac.jp Sun Aug 5 22:19:01 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 06 Aug 2007 11:19:01 +0900 Subject: [SciPy-dev] Need some info to solve #238: cblas vs fblas In-Reply-To: <52694.85.166.31.64.1186329564.squirrel@cens.ioc.ee> References: <46B5CC84.4010401@ar.media.kyoto-u.ac.jp> <46B5EAA2.2040601@ar.media.kyoto-u.ac.jp> <52694.85.166.31.64.1186329564.squirrel@cens.ioc.ee> Message-ID: <46B68515.8070005@ar.media.kyoto-u.ac.jp> Pearu Peterson wrote: > The plan was to move blas/lapack wrappers from scipy.linalg > to scipy.lib.blas/lapack so that other scipy subpackage could use them > without importing the whole scipy.linalg. It's still work in progress plan.. > Blas/lapack wrappers in scipy.lib are improved versions found in > scipy.linalg, > just replacing the usage of scipy.linalg.blas/lapack with using > scipy.lib.blas/lapack is unfinished. 
> Ok, thanks for the clarification. So what would be your advice for solving the problem in #238: applying the fix on both modules ? Maybe we can just disable scipy.lib until it is finished ? David From openopt at ukr.net Mon Aug 6 13:10:31 2007 From: openopt at ukr.net (dmitrey) Date: Mon, 06 Aug 2007 20:10:31 +0300 Subject: [SciPy-dev] New coding style (docstrings) : question Message-ID: <46B75607.4050904@ukr.net> Hi all, I try to rewrite scipy.optimize docstrings (as well as openopt ones) in new docstrings standard. (it was assigned to my GSoC milestones) So please take a look at the example below - is all correct? Especially I'm interested in func handlers - are they need any type describer? (see those 1 line above and some lines below of line x0 : ndarray -- the initial guess ) Regards, D. def fmin(func, x0, args=(), xtol=1e-4, ftol=1e-4, maxiter=None, maxfun=None, full_output=0, disp=1, retall=0, callback=None): """Minimize a function using the downhill simplex algorithm. :Parameters: func -- the Python function or method to be minimized. x0 : ndarray -- the initial guess. args -- extra arguments for func. callback -- an optional user-supplied function to call after each iteration. It is called as callback(xk), where xk is the current parameter vector. :Returns: (xopt, {fopt, iter, funcalls, warnflag}) xopt : ndarray -- minimizer of function fopt : number -- value of function at minimum: fopt = func(xopt) iter : number -- number of iterations funcalls : number-- number of function calls warnflag : number -- Integer warning flag: 1 : 'Maximum number of function evaluations.' 2 : 'Maximum number of iterations.' allvecs : Python list -- a list of solutions at each iteration :OtherParameters: xtol : number -- acceptable relative error in xopt for convergence. ftol : number -- acceptable relative error in func(xopt) for convergence. maxiter : number -- the maximum number of iterations to perform. maxfun : number -- the maximum number of function evaluations. 
full_output : number -- non-zero if fval and warnflag outputs are desired. disp : number -- non-zero to print convergence messages. retall : number -- non-zero to return list of solutions at each iteration :SeeAlso: fmin, fmin_powell, fmin_cg, fmin_bfgs, fmin_ncg -- multivariate local optimizers leastsq -- nonlinear least squares minimizer fmin_l_bfgs_b, fmin_tnc, fmin_cobyla -- constrained multivariate optimizers anneal, brute -- global optimizers fminbound, brent, golden, bracket -- local scalar minimizers fsolve -- n-dimensional root-finding brentq, brenth, ridder, bisect, newton -- one-dimensional root-finding fixed_point -- scalar fixed-point finder Notes ----- Uses a Nelder-Mead simplex algorithm to find the minimum of a function of one or more variables. """ From peridot.faceted at gmail.com Mon Aug 6 13:23:14 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Mon, 6 Aug 2007 13:23:14 -0400 Subject: [SciPy-dev] New coding style (docstrings) : question In-Reply-To: <46B75607.4050904@ukr.net> References: <46B75607.4050904@ukr.net> Message-ID: On 06/08/07, dmitrey wrote: > Especially I'm interested in func handlers - are they need any type > describer? I don't know if there's any formal standard for how they're written, but it definitely helps to be as specific as possible about what callback functions get passed and must return. > func -- the Python function or method to be minimized. > x0 : ndarray -- the initial guess. > args -- extra arguments for func. > callback -- an optional user-supplied function to call after each > iteration. It is called as callback(xk), where xk is the > current parameter vector. Specifically, what does func get passed? The x value, as a one-dimensional array, and args? Is it called as f(x,args) or f(x,*args)? It's also worth mentioning that it should return a scalar, the function value there. Similarly for the callback, which should presumably return None.
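To make the calling conventions under discussion concrete, here is a minimal, self-contained sketch. It is illustrative only (a toy coordinate-descent loop, not scipy's implementation); the point is the contract: the objective is invoked as f(x, *args) and must return a scalar, and callback(xk) is invoked once per iteration with its return value ignored.

```python
def toy_minimize(f, x0, args=(), callback=None, maxiter=5):
    """Crude fixed-step coordinate descent, just to show the hooks."""
    xk = list(x0)
    fk = f(xk, *args)                    # objective called as f(x, *args), returns a scalar
    for _ in range(maxiter):
        for i in range(len(xk)):
            for step in (0.1, -0.1):
                trial = list(xk)
                trial[i] += step
                ft = f(trial, *args)
                if ft < fk:              # keep the trial point if it improves f
                    xk, fk = trial, ft
        if callback is not None:
            callback(xk)                 # called as callback(xk); return value ignored
    return xk, fk

# The callback here just records the iterates, a common use case.
history = []
xopt, fopt = toy_minimize(
    lambda x, a: (x[0] - a) ** 2,        # scalar objective of one variable
    x0=[0.0],
    args=(1.0,),                         # extra args forwarded as f(x, *args)
    callback=history.append,
)
```

Documenting exactly this contract in the docstring (signature, argument order, expected return type) would answer the questions raised above.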
I take it that callback can throw exceptions to abort the optimization if necessary (that is, this won't leave the optimizer's internal state in a mess)? Anne From aisaac at american.edu Mon Aug 6 14:09:05 2007 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 6 Aug 2007 14:09:05 -0400 Subject: [SciPy-dev] New coding style (docstrings) : question In-Reply-To: <46B75607.4050904@ukr.net> References: <46B75607.4050904@ukr.net> Message-ID: On Mon, 06 Aug 2007, dmitrey apparently wrote: > I try to rewrite scipy.optimize docstrings (as well as openopt ones) in > new docstrings standard. > (it was assigned to my GSoC milestones) > So please take a look at the example below - is all correct? I will add to Anne's comments. 1. Rely more heavily on the epydoc documentation. Be sure to run epydoc and see what you get: is it right? (Actually, your example is pretty self-contained, so you can just run rst2html.py on it first, to make sure it is valid reST.) 2. In reST, you need an empty line before a field list: http://docutils.sourceforge.net/docs/ref/rst/restructuredtext.html#field-lists 3. You do not need an empty line between a field name and the field body. 4. You are mixing stuff together arbitrarily in the field body. Use definition lists: http://docutils.sourceforge.net/docs/ref/rst/restructuredtext.html#definition-lists Especially: it is the classifier that is optional, not the definition. 5. I forget whether the OtherParameters field is implemented in epydoc yet ... ? May be "forthcoming". You should be able to leave it anyway ... 6. Do not add indentation for the text block after your section headers (e.g., your Notes section). hth, Alan Isaac PS A working example based on your post follows ...

Section
-------
No blank line needed after section (but it looks better). Blank line is needed before field list.
:field1: body1
:field2: body2

:Returns: (xopt, {fopt, iter, funcalls, warnflag})

    xopt : ndarray
        minimizer of function
    fopt : number
        value of function at minimum: fopt = func(xopt)
    iter : number
        number of iterations
    funcalls : number
        number of function calls
    warnflag : number
        Integer warning flag:
        1 : 'Maximum number of function evaluations.'
        2 : 'Maximum number of iterations.'
    allvecs : list
        a list of solutions at each iteration

From jeremit0 at gmail.com Mon Aug 6 14:49:39 2007 From: jeremit0 at gmail.com (Jeremy Conlin) Date: Mon, 6 Aug 2007 14:49:39 -0400 Subject: [SciPy-dev] Seg fault while running scipy.test under Red Hat EL Message-ID: <3db594f70708061149h5b3bb6c6s411c591e0dbb14c9@mail.gmail.com> I have just installed numpy and scipy from the svn repository and I am having a few problems. I get a seg. fault when I run scipy.test(1,10) (see results below). And I get the following error when I try to load the stats package: $ python2.5 Python 2.5 (r25:51908, Apr 10 2007, 10:37:31) [GCC 3.4.6 20060404 (Red Hat 3.4.6-3)] on linux2 Type "help", "copyright", "credits" or "license" for more information.
>>> import scipy.stats Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python2.5/site-packages/scipy/stats/__init__.py", line 7, in <module> from stats import * File "/usr/local/lib/python2.5/site-packages/scipy/stats/stats.py", line 192, in <module> import scipy.linalg as linalg File "/usr/local/lib/python2.5/site-packages/scipy/linalg/__init__.py", line 8, in <module> from basic import * File "/usr/local/lib/python2.5/site-packages/scipy/linalg/basic.py", line 313, in <module> import decomp File "/usr/local/lib/python2.5/site-packages/scipy/linalg/decomp.py", line 22, in <module> from blas import get_blas_funcs File "/usr/local/lib/python2.5/site-packages/scipy/linalg/blas.py", line 14, in <module> from scipy.linalg import fblas ImportError: /usr/local/lib/python2.5/site-packages/scipy/linalg/fblas.so: undefined symbol: srotmg_ >>> I am using RedHat Enterprise Linux with gcc and g77 version 3.4.6. Can someone help me understand what is wrong with my installation and how to fix it? Thanks, Jeremy $ python2.5 Python 2.5 (r25:51908, Apr 10 2007, 10:37:31) [GCC 3.4.6 20060404 (Red Hat 3.4.6-3)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import scipy >>> scipy.test(1,10) uniform fourier filter for real transforms 1 ... ok gaussian filter 1 ... ok gaussian filter 2 ... ok gaussian filter 3 ... ok gaussian filter 4 ... ok gaussian filter 5 ... ok gaussian filter 6 ... ok gaussian gradient magnitude filter 1 ... ok gaussian gradient magnitude filter 2 ... ok gaussian laplace filter 1 ... ok gaussian laplace filter 2 ... ok generation of a binary structure 1 ... ok generation of a binary structure 2 ... ok generation of a binary structure 3 ... ok generation of a binary structure 4 ... ok generic filter 1 Illegal instruction $ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From openopt at ukr.net Mon Aug 6 15:33:13 2007 From: openopt at ukr.net (dmitrey) Date: Mon, 06 Aug 2007 22:33:13 +0300 Subject: [SciPy-dev] one more scipy.optimize.line_search question Message-ID: <46B77779.6050209@ukr.net> hi all, I wonder what does the parameter amax=50 mean (in optimize.line_search func)? Seems like this parameter is never used in the func, moreover, amax is defined in scipy.optimize module as a func. Also, in the middle of the func it has line (optimize.py) maxiter = 10 this one seems to be very small to me. don't you think it's better to handle the param in input args? Also, don't you think that having 2 line_search funcs is ambiguous? (I mean one in scipy.optimize, python-written, and one in scipy.optimize.linesearch, binding to minpack2) Regards, D. From aisaac at american.edu Mon Aug 6 16:29:56 2007 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 6 Aug 2007 16:29:56 -0400 Subject: [SciPy-dev] one more scipy.optimize.line_search question In-Reply-To: <46B77779.6050209@ukr.net> References: <46B77779.6050209@ukr.net> Message-ID: On Mon, 06 Aug 2007, dmitrey apparently wrote: > I wonder what does the parameter amax=50 mean (in > optimize.line_search func)? Seems like this parameter is > never used in the func Is there something wrong with the minpack2 documentation? c stpmax is a double precision variable. c On entry stpmax is a nonnegative upper bound for the step. c On exit stpmax is unchanged. > moreover, amax is defined in scipy.optimize module as > a func I think you are confusing this with the numpy array method? > Also, in the middle of the func it has line > (optimize.py) > maxiter = 10 > this one seems to be very small to me. > don't you think it's better to handle the param in input args? That is just for the bracketing phase. Are any troubles resulting from this value? > Also, don't you think that having 2 line_search funcs is ambiguous? 
> (I mean one in scipy.optimize, python-written, and one in > scipy.optimize.linesearch, binding to minpack2) It would be nice to have some history on this. I expect we'll have to wait until Travis has time to look this discussion over, which may not be soon. Cheers, Alan Isaac From openopt at ukr.net Mon Aug 6 16:44:33 2007 From: openopt at ukr.net (dmitrey) Date: Mon, 06 Aug 2007 23:44:33 +0300 Subject: [SciPy-dev] one more scipy.optimize.line_search question In-Reply-To: References: <46B77779.6050209@ukr.net> Message-ID: <46B78831.2030709@ukr.net> Alan G Isaac wrote: > On Mon, 06 Aug 2007, dmitrey apparently wrote: > >> I wonder what does the parameter amax=50 mean (in >> optimize.line_search func)? Seems like this parameter is >> never used in the func >> > > Is there something wrong with the minpack2 documentation? > > c stpmax is a double precision variable. > c On entry stpmax is a nonnegative upper bound for the step. > c On exit stpmax is unchanged. > the line_search func from scipy.optimize has no any relation to the fortran routine you have mentioned. Maybe you mean line_search func from scipy.optimize.linesearch, but as I mentioned, scipy has 2 funcs line_search, one python-written (from optimize.py), other from /optimize/linesearch.py. So, as I have mentioned, amax is unused in scipy.optimize.linesearch > > >> moreover, amax is defined in scipy.optimize module as >> a func >> > > I think you are confusing this with the numpy array method? > > > >> Also, in the middle of the func it has line >> (optimize.py) >> maxiter = 10 >> this one seems to be very small to me. >> don't you think it's better to handle the param in input args? >> > > That is just for the bracketing phase. > Are any troubles resulting from this value? > I have troubles all over the func. 
Seems like sometimes Matthieu's func that declines same goal (finding x that satisfies strong Wolfe conditions) works much more better (in 1st iteration, but makes my CPU hanging on in 2nd iter), but in other cases scipy.optimize provides at least some decrease, while Matthieu's - not (as i mentioned above, also, sometimes Matthieu's func return same x0 after 2nd iter and makes my alg stop very far from x_optim). 1) this test, Matthieu: itn 0: Fk= 8596.39550577 maxResidual= 804.031293334 11 (between these lines I call Matthieu's optimizer) 22 ------------------ xnew: objfun: 227.538805239 max residual: 2.17603712827e-12 (because now I use test where only linear inequalities are present) 11 (CPU hanging on) 2) Same test, scipy.optimize: itn 0: Fk= 8596.39550577 maxResidual= 804.031293334 itn 100 : Fk= 8141.04226717 maxResidual= 784.640128722 itn 200 : Fk= 7708.62739973 maxResidual= 765.716680829 ... as you see, objFun decrease, as well as max constraint decrease, is very slow: If I provide gradient of my func numerically obtaining, nothing changes except of some calculation speed decrease. I tried to modify sigma (Matthieu notation) = c2 (scipy notation) = 0.1...0.9, almost nothing changes (Matthieu's default val = 0.4, scipy - 0.9, as it is in http://en.wikipedia.org/wiki/Wolfe_conditions) afaik c1 is default 0.0001 in both mentioned. So now I'm trying to learn where's the problem. Regards, D. > > >> Also, don't you think that having 2 line_search funcs is ambiguous? >> (I mean one in scipy.optimize, python-written, and one in >> scipy.optimize.linesearch, binding to minpack2) >> > > It would be nice to have some history on this. > I expect we'll have to wait until Travis has time to look > this discussion over, which may not be soon. 
> > Cheers, > Alan Isaac > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > > From openopt at ukr.net Mon Aug 6 16:46:54 2007 From: openopt at ukr.net (dmitrey) Date: Mon, 06 Aug 2007 23:46:54 +0300 Subject: [SciPy-dev] one more scipy.optimize.line_search question In-Reply-To: <46B78831.2030709@ukr.net> References: <46B77779.6050209@ukr.net> <46B78831.2030709@ukr.net> Message-ID: <46B788BE.5080607@ukr.net> dmitrey wrote: > Alan G Isaac wrote: > >> On Mon, 06 Aug 2007, dmitrey apparently wrote: >> >> >>> I wonder what does the parameter amax=50 mean (in >>> optimize.line_search func)? Seems like this parameter is >>> never used in the func >>> >>> >> Is there something wrong with the minpack2 documentation? >> >> c stpmax is a double precision variable. >> c On entry stpmax is a nonnegative upper bound for the step. >> c On exit stpmax is unchanged. >> >> > the line_search func from scipy.optimize has no any relation to the > fortran routine you have mentioned. > Maybe you mean line_search func from scipy.optimize.linesearch, but as I > mentioned, scipy has 2 funcs line_search, one python-written (from > optimize.py), other from /optimize/linesearch.py. > So, as I have mentioned, amax is unused in scipy.optimize.linesearch > sorry, I meant in scipy.optimize.line_search, this is another one consequence from the ambiguity. D. From aisaac at american.edu Mon Aug 6 17:03:24 2007 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 6 Aug 2007 17:03:24 -0400 Subject: [SciPy-dev] one more scipy.optimize.line_search question In-Reply-To: <46B78831.2030709@ukr.net> References: <46B77779.6050209@ukr.net> <46B78831.2030709@ukr.net> Message-ID: >> On Mon, 06 Aug 2007, dmitrey apparently wrote: >>> I wonder what does the parameter amax=50 mean (in >>> optimize.line_search func)? 
Seems like this parameter is >>> never used in the func > Alan G Isaac wrote: >> Is there something wrong with the minpack2 documentation? >> c stpmax is a double precision variable. >> c On entry stpmax is a nonnegative upper bound for the step. >> c On exit stpmax is unchanged. On Mon, 06 Aug 2007, dmitrey apparently wrote: > the line_search func from scipy.optimize has no any relation to the > fortran routine you have mentioned. > Maybe you mean line_search func from scipy.optimize.linesearch, but as I > mentioned, scipy has 2 funcs line_search, one python-written (from > optimize.py), other from /optimize/linesearch.py. > So, as I have mentioned, amax is unused in scipy.optimize.linesearch In optimize.linesearch we define: line_search(f, myfprime, xk, pk, gfk, old_fval, old_old_fval, args=(), c1=1e-4, c2=0.9, amax=50): In optimize.optimize we define: line_search(f, myfprime, xk, pk, gfk, old_fval, old_old_fval, args=(), c1=1e-4, c2=0.9, amax=50): Clearly this is just a matter of a consistent interface, so all args get the same interpretation. As to why the stepsize bound is not used in the latter, you will have to ask Travis. Do you find a difference in the performance of the two? Cheers, Alan Isaac From openopt at ukr.net Mon Aug 6 17:18:03 2007 From: openopt at ukr.net (dmitrey) Date: Tue, 07 Aug 2007 00:18:03 +0300 Subject: [SciPy-dev] one more scipy.optimize.line_search question In-Reply-To: References: <46B77779.6050209@ukr.net> <46B78831.2030709@ukr.net> Message-ID: <46B7900B.6020209@ukr.net> Alan G Isaac wrote: >>> On Mon, 06 Aug 2007, dmitrey apparently wrote: >>> >>>> I wonder what does the parameter amax=50 mean (in >>>> optimize.line_search func)? Seems like this parameter is >>>> never used in the func >>>> > > > >> Alan G Isaac wrote: >> >>> Is there something wrong with the minpack2 documentation? >>> > > >>> c stpmax is a double precision variable. >>> c On entry stpmax is a nonnegative upper bound for the step. >>> c On exit stpmax is unchanged. 
>>> > > > On Mon, 06 Aug 2007, dmitrey apparently wrote: > >> the line_search func from scipy.optimize has no any relation to the >> fortran routine you have mentioned. >> Maybe you mean line_search func from scipy.optimize.linesearch, but as I >> mentioned, scipy has 2 funcs line_search, one python-written (from >> optimize.py), other from /optimize/linesearch.py. >> So, as I have mentioned, amax is unused in scipy.optimize.linesearch >> > > > In optimize.linesearch we define: > line_search(f, myfprime, xk, pk, gfk, old_fval, old_old_fval, args=(), c1=1e-4, c2=0.9, amax=50): > In optimize.optimize we define: > line_search(f, myfprime, xk, pk, gfk, old_fval, old_old_fval, args=(), c1=1e-4, c2=0.9, amax=50): > Clearly this is just a matter of a consistent interface, > so all args get the same interpretation. > If you'll just try ctrl-f "amax" in optimize.py, you'll see that it has only 2 : one - definition of amax as a func and one - in line_search func input args. So the value is unused in line_search. > As to why the stepsize bound is not used in the latter, > you will have to ask Travis. > > Do you find a difference in the performance of the two? > Yes, I do. At least, they return different numbers for my whole problem. fortran: itn 0: Fk= 8596.39550577 maxResidual= 804.031293334 itn 1 : Fk= 8596.39529687 maxResidual= 804.031284626 itn 2 : Fk= 8596.39509622 maxResidual= 804.03127639 itn 3 : Fk= 8596.39489987 maxResidual= 804.031268349 ... itn 10 : Fk= 8596.39355523 maxResidual= 804.031212067 itn 20 : Fk= 8596.39164144 maxResidual= 804.031131664 itn 30 : Fk= 8596.3897277 maxResidual= 804.031051261 itn 40 : Fk= 8596.38781395 maxResidual= 804.030970858 itn 50 : Fk= 8596.38590021 maxResidual= 804.030890455 python: itn 0: Fk= 8596.39550577 maxResidual= 804.031293334 itn 1 : Fk= 8591.30146061 maxResidual= 803.818915875 itn 2 : Fk= 8586.40946095 maxResidual= 803.61802955 itn 3 : Fk= 8581.62365205 maxResidual= 803.421917306 ... 
itn 10 : Fk= 8548.91083789 maxResidual= 802.050160864 itn 20 : Fk= 8502.55420774 maxResidual= 800.094501458 itn 30 : Fk= 8456.43702405 maxResidual= 798.143610745 itn 40 : Fk= 8410.55715335 maxResidual= 796.19747748 itn 50 : Fk= 8364.9134737 maxResidual= 794.256090032 So Python implementation seems to be better (at least for the single test func; also, I don't check how much time and/or iterations do they consume). But Matthieu's implementation works much more better in single (1st) iteration, but after 2nd i constantly get either zero shift (x_new=x0) or CPU hanging. Maybe he has some variables remembered from previous calculations? D. > Cheers, > Alan Isaac > > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > > From aisaac at american.edu Mon Aug 6 19:41:20 2007 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 6 Aug 2007 19:41:20 -0400 Subject: [SciPy-dev] one more scipy.optimize.line_search question In-Reply-To: <46B7900B.6020209@ukr.net> References: <46B77779.6050209@ukr.net> <46B78831.2030709@ukr.net> <46B7900B.6020209@ukr.net> Message-ID: > Alan G Isaac wrote: >> As to why the stepsize bound is not used in the latter, >> you will have to ask Travis. On Tue, 07 Aug 2007, dmitrey apparently wrote: > If you'll just try ctrl-f "amax" in optimize.py, you'll see that it has > only 2 : one - definition of amax as a func and one - in line_search > func input args. So the value is unused in line_search. Right; as I said, only Travis can say why this is currently unused the line_search function in optimize.py. If I were to guess, I'd guess he did not get around to implementing the stepsize bound yet, but there may be a more interesting explanation. 
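As a reference point for comparing the two implementations, the strong Wolfe conditions that both line_search signatures expose through c1 and c2 can be written out in a few lines. This is an illustrative sketch, not scipy's code; phi(a) denotes f(xk + a*pk), the objective restricted to the search direction:

```python
# Strong Wolfe conditions for a step length a along a descent direction:
#   sufficient decrease:  phi(a) <= phi(0) + c1 * a * phi'(0)
#   curvature:            |phi'(a)| <= c2 * |phi'(0)|
# with the defaults c1=1e-4, c2=0.9 appearing in both signatures above.

def strong_wolfe(phi, dphi, a, c1=1e-4, c2=0.9):
    sufficient_decrease = phi(a) <= phi(0) + c1 * a * dphi(0)
    curvature = abs(dphi(a)) <= c2 * abs(dphi(0))
    return sufficient_decrease and curvature

# Example: f(x) = x**2 from x0 = 1 along p = -1, so phi(a) = (1 - a)**2
# and phi'(a) = -2*(1 - a).
phi = lambda a: (1.0 - a) ** 2
dphi = lambda a: -2.0 * (1.0 - a)

ok_step = strong_wolfe(phi, dphi, 1.0)     # exact minimizer: both conditions hold
tiny_step = strong_wolfe(phi, dphi, 0.05)  # too small: curvature condition fails
huge_step = strong_wolfe(phi, dphi, 3.0)   # overshoots: sufficient decrease fails
```

For a non-smooth objective such as f + max(constraints), phi'(a) is not defined everywhere, which is one plausible reason curvature-based searches struggle on this problem.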
Cheers, Alan Isaac From david at ar.media.kyoto-u.ac.jp Mon Aug 6 21:53:42 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 07 Aug 2007 10:53:42 +0900 Subject: [SciPy-dev] Seg fault while running scipy.test under Red Hat EL In-Reply-To: <3db594f70708061149h5b3bb6c6s411c591e0dbb14c9@mail.gmail.com> References: <3db594f70708061149h5b3bb6c6s411c591e0dbb14c9@mail.gmail.com> Message-ID: <46B7D0A6.7080004@ar.media.kyoto-u.ac.jp> Jeremy Conlin wrote: > I have just installed numpy and scipy from the svn repository and I am > having a few problems. I get a seg. fault when I run scipy.test(1,10) > (see results below). And I get the following error when I try to load > the stats package: Did you install blas and lapack from the official rpms ? If yes, it is likely they are broken (they were on FC5 and maybe 6 for a long time, a least). You should compile blas and LAPACK by yourself. David From cookedm at physics.mcmaster.ca Tue Aug 7 11:41:51 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 07 Aug 2007 11:41:51 -0400 Subject: [SciPy-dev] FFTW performances in scipy and numpy In-Reply-To: <3a1077e70708010418r32d507bwbd837feb6101037f@mail.gmail.com> (John Travers's message of "Wed, 1 Aug 2007 12:18:25 +0100") References: <46B01687.3050604@ar.media.kyoto-u.ac.jp> <46B03B0E.5050605@ar.media.kyoto-u.ac.jp> <3a1077e70708010418r32d507bwbd837feb6101037f@mail.gmail.com> Message-ID: "John Travers" writes: > On 01/08/07, David Cournapeau wrote: >> >> 3 strategies are available: >> - Have a flag to check whether the given array is 16 bytes aligned, >> and conditionnally build plans using this info >> - Use FFTW_UNALIGNED, and do not care about alignement >> - Current strategy: copy. >> >> The three strategies use FFTW_MEASURE, which I didn't do before, and may > > Another strategy worth trying is using FFTW_MEASURE once and then > using FFTW_ESTIMATE for additional arrays. 
FFTW accumulates wisdom and > so the initial call with MEASURE means that further estimated plans > also benefit. In my simple tests it comes very close to measuring for > each individual array. We could also cache the wisdom in a file using the guru interface, so that you could do even do a FFTW_PATIENT once. -- |>|\/|< /------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From matthieu.brucher at gmail.com Wed Aug 8 13:30:01 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 8 Aug 2007 19:30:01 +0200 Subject: [SciPy-dev] Typo in docstrings Message-ID: Some typos for some functions docstrings: signal.qmr : residucal instead of residual signal : misses invresz() medfilt2 instead of medfilt2d signal.wiener in the description it's not kernel_size but mysize I found another typo before, but I forgot where. Next time, I'll send it directly to the list. Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From millman at berkeley.edu Thu Aug 9 07:29:29 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Thu, 9 Aug 2007 04:29:29 -0700 Subject: [SciPy-dev] I am volunteering to be the release manager for NumPy 1.0.3.1 and SciPy 0.5.2 Message-ID: I volunteer to be the release manager for NumPy 1.0.3.1 and SciPy 0.5.3. In order to actually get them both released I will obviously need some help. But given the amount of work required and the number of people who have offered to help, I believe this will be doable. Given the extensive discussion about what is needed for these releases, I am fairly confident that I know what needs to be done. I will try to be very specific about what I will do and what I will need help with. Basically, I am just rewriting the plan described by Robert Kern last month. 
Please let me know if you have any suggestions/comments/problems with this plan and please let me know if you can commit to helping in any way. [[NOTE: I just (on Monday) hired 2 full-time programmers to work on the neuroimaging in python (NIPY) project, so they will be able to help out with bug fixing as well as testing the pre-releases on different platforms.]] Releasing NumPy 1.0.3.1 =================== On July 24th, Robert suggested making a numpy 1.0.3.1 point release. He was concerned that there were some changes in numpy.distutils that needed to cook a little longer. So I am offering to make a 1.0.3.1 release. If Travis or one of the other core NumPy developers want to make a 1.0.4 release in the next week or so, then there won't be a need for a 1.0.3.1 release. First, I will branch from the 1.0.3 tag: svn cp http://svn.scipy.org/svn/numpy/tags/1.0.3 http://svn.scipy.org/svn/numpy/branches/1.0.3 Second, I will apply all the patches necessary to build scipy from svn, but nothing else. Then I will just follow the NumPy release instructions: http://projects.scipy.org/scipy/numpy/wiki/MakingReleases I will make the tarball and source rpm; but will need help with everything else. Things will go faster if someone else can build the Windows binaries. If not, my new programmers and I will make the binaries. Finally, one of the sourceforge admins will need upload those files once we are done. (I am happy to be made an admin and upload the files myself, if it would be more convenient.) Releasing SciPy 0.5.3 ================= I will make a 0.5.3 scipy branch: svn cp http://svn.scipy.org/svn/scipy/trunk http://svn.scipy.org/svn/scipy/branches/0.5.3 >From then on normal development will continue on the trunk, but only bug fixes will be allowed on the branch. I will ask everyone to test the branch for at least 1 week depending on whether we get any bug reports. 
Once we are able to get the most serious bugs fixed, I will start working with everyone to build as many binaries as possible. I will rely on David Cournapeau and Andrew Straw to provide RPMs and DEBs. Again, things will go faster if someone else can build the Windows binaries. But if not, my new programmers and I will figure out how to make the binaries for Windows. We can also make the OS X binaries, especially if Robert Kern is still willing to help. I will also draft a release announcement and give everyone time to comment on it. I will either need to get access to the sourceforge site and the PyPi records or someone will have to update them for me. Timeline ======= If this is agreeable to everyone, I will make the NumPy branch on Friday and apply the relevant patches. Then if I can get someone else to make the Windows executables and upload the files, we should be able to have a new NumPy release before the beginning of the SciPy conference. As for the 0.5.3 SciPy branch, we can discuss this in some detail if everyone is OK with the basic plan. In general, I hope that I will be able to have a 1.0.3.1 NumPy release before August 20th. Perhaps we could even make the 0.5.3 branch by the 20th. Fortunately, as David said earlier the main issue is getting a new release of NumPy out. Resources ======== As I mentioned I just hired 2 full-time programmers to work on NIPY who will be able to help me get the binaries built and tested for the different platforms. All 3 of us will be at the SciPy conference next week. So we will hopefully be able to solve whatever problems we run into very quickly given that it will be so easy to get help. Additionally, David Cournapeau has said that he is willing to help get a new release of SciPy out. He has already been busy at work squashing bugs. 
Sincerely, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From millman at berkeley.edu Fri Aug 10 04:45:07 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Fri, 10 Aug 2007 01:45:07 -0700 Subject: [SciPy-dev] NumPy-1.0.3.x Message-ID: Hello everyone, I made a numpy-1.0.3.x branch from the 1.0.3 tag and tried to get everything working (see changesets 3957-3961). I added back get_path to numpy/distutils/misc_util.py, which is used by Lib/odr/setup.py in scipy 0.5.2. I also tried to clean up a few issues by doing the same thing that was done to the trunk in: http://projects.scipy.org/scipy/numpy/changeset/3845 http://projects.scipy.org/scipy/numpy/changeset/3848 I am still seeing 2 problems: 1) http://projects.scipy.org/scipy/numpy/ticket/535 2) when I run scipy.test(1,10), I get: check_cosine_weighted_infinite (scipy.integrate.tests.test_quadpack.test_quad) Illegal instruction If anyone has any ideas as to what is wrong, please let me know. Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From matthieu.brucher at gmail.com Fri Aug 10 15:33:59 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 10 Aug 2007 21:33:59 +0200 Subject: [SciPy-dev] Error in signal.cascade Message-ID: Hi, I've found that in line 120 in wavelets.py, the code looks for a type "Float" in module sb (alias for numpy). I think it should be float instead, no ? As I found a lot of execution bugs in "usual" functions, are they tested ? Matthieu -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stefan at sun.ac.za Fri Aug 10 20:11:20 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sat, 11 Aug 2007 02:11:20 +0200 Subject: [SciPy-dev] Error in signal.cascade In-Reply-To: References: Message-ID: <20070811001119.GF17460@mentat.za.net> On Fri, Aug 10, 2007 at 09:33:59PM +0200, Matthieu Brucher wrote: > I've found that in line 120 in wavelets.py, the code looks for a type "Float" > in module sb (alias for numpy). I think it should be float instead, no ? > > As I found a lot of execution bugs in "usual" functions, are they > tested ? I fixed some errors and added tests. I wonder if it is worth keeping this module around, since wavelets.scipy.org is much further advanced? Regards St?fan From matthieu.brucher at gmail.com Sat Aug 11 04:57:40 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Sat, 11 Aug 2007 10:57:40 +0200 Subject: [SciPy-dev] Error in signal.cascade In-Reply-To: <20070811001119.GF17460@mentat.za.net> References: <20070811001119.GF17460@mentat.za.net> Message-ID: > > I fixed some errors and added tests. I wonder if it is worth keeping > this module around, since wavelets.scipy.org is much further advanced? > That's a good question... Is it going to be a scikit or something, like pyaudio (I found them both in the same section in Topical Software) ? The website does not seem to be up to date on what is available, how to use it, ... :( Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Sun Aug 12 16:09:57 2007 From: aisaac at american.edu (Alan G Isaac) Date: Sun, 12 Aug 2007 16:09:57 -0400 Subject: [SciPy-dev] Fwd: from GSoC student Dmitrey Kroshko Message-ID: Dmitrey's ISP in the Ukraine is experiencing problems, so he asked from someone else's computer that I post this brief weekly update. He'll be back in communication as soon as possible. 
Cheers, Alan Isaac ------ Forwarded message ------ Date: Sat, 11 Aug 2007 12:58:30 +0400 Subject: from GSoC student Dmitrey Kroshko this week I wrote a line-search optimizer based on the Armijo rule. It's very simple, but lincher no longer requires other line-search optimizers from other software - I mean Matthieu's one (unfortunately it very often makes the CPU hang up), scipy.optimize.line_search or its fortran version (they can't solve the problem as well). Maybe these solvers just can't handle a non-smooth func like f + max(constraints) is. On the other hand, I want to use these line-search solvers when no constraints are present (however, UC solvers like bfgs etc usually will be better in that case), or if the user's problem has just 1 constraint (in that case max(constraint)=constraint=smooth func). Also, I made some changes to docstrings (both openopt and scipy.optimize), but it still requires more work. -------- End of message ------- From matthieu.brucher at gmail.com Mon Aug 13 03:06:00 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 13 Aug 2007 09:06:00 +0200 Subject: [SciPy-dev] Fwd: from GSoC student Dmitrey Kroshko In-Reply-To: References: Message-ID: Hi, Some precisions :) this week I wrote a line-search optimizer based on the Armijo rule. It seems not to be committed yet. I've coded it myself thursday or friday : http://projects.scipy.org/scipy/scikits/browser/trunk/openopt/scikits/openopt/solvers/optimizers/line_search/backtracking_search.py Too sad dmitrey did not code it with my framework. It's > very simple, but lincher no longer requires other line-search > optimizers from other software - I mean Matthieu's one (unfortunately > it very often makes the CPU hang up) Precision here so that there is no misunderstanding : the one dmitrey is talking about is one of many, it's the one with the Strong Wolfe-Powell rule, and dmitrey's cost function is not smooth, thus the line search has trouble dealing with it.
For those who are interested : http://projects.scipy.org/scipy/scikits/wiki/Optimization Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Mon Aug 13 03:07:26 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 13 Aug 2007 16:07:26 +0900 Subject: [SciPy-dev] Using C++ in non optional code ? Message-ID: <46C0032E.6020101@ar.media.kyoto-u.ac.jp> Hi, I wanted to know if the possibility to use C++ code in scipy was discussed before ? The only C++ code lying around is for optional packages. Several people have expressed the wish to use C++, and I would like to know if there is any official policy regarding its usage. cheers, David From robert.kern at gmail.com Mon Aug 13 03:14:15 2007 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 13 Aug 2007 02:14:15 -0500 Subject: [SciPy-dev] Using C++ in non optional code ? In-Reply-To: <46C0032E.6020101@ar.media.kyoto-u.ac.jp> References: <46C0032E.6020101@ar.media.kyoto-u.ac.jp> Message-ID: <46C004C7.1020309@gmail.com> David Cournapeau wrote: > Hi, > > I wanted to know if the possibility to use C++ code in scipy was > discussed before ? The only C++ code lying around is for optional > packages. Several people have expressed the wish to use C++, and I would > like to know if there is any official policy regarding its usage. It's fine. Try not to use non-standard STL (e.g. hash-based maps). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at ar.media.kyoto-u.ac.jp Mon Aug 13 05:38:45 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 13 Aug 2007 18:38:45 +0900 Subject: [SciPy-dev] Using C++ in non optional code ?
In-Reply-To: <46C004C7.1020309@gmail.com> References: <46C0032E.6020101@ar.media.kyoto-u.ac.jp> <46C004C7.1020309@gmail.com> Message-ID: <46C026A5.6090606@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > David Cournapeau wrote: > >> Hi, >> >> I wanted to know if the possibility to use C++ code in scipy was >> discussed before ? The only C++ code lying around is for optional >> packages. Several people have expressed the wish to use C++, and I would >> like to know if there is any official policy regarding its usage. >> > > It's fine. Try not to use non-standard STL (e.g. hash-based maps). > > Great, thank you, David From wnbell at gmail.com Mon Aug 13 08:33:12 2007 From: wnbell at gmail.com (Nathan Bell) Date: Mon, 13 Aug 2007 05:33:12 -0700 Subject: [SciPy-dev] Using C++ in non optional code ? In-Reply-To: <46C0032E.6020101@ar.media.kyoto-u.ac.jp> References: <46C0032E.6020101@ar.media.kyoto-u.ac.jp> Message-ID: On 8/13/07, David Cournapeau wrote: > I wanted to know if the possibility to use C++ code in scipy was > discussed before ? The only C++ code lying around is for optional > packages. Several people have expressed the wish to use C++, and I would > like to know if there is any official policy regarding its usage. FWIW scipy.sparse.sparsetools uses C++: http://projects.scipy.org/scipy/scipy/browser/trunk/Lib/sparse/sparsetools In order to template the numpy complex types, I subclassed them and wrote the necessary operators: http://projects.scipy.org/scipy/scipy/browser/trunk/Lib/sparse/sparsetools/complex_ops.h -- Nathan Bell wnbell at gmail.com From openopt at ukr.net Mon Aug 13 14:06:09 2007 From: openopt at ukr.net (Dmitrey Kroshko) Date: Mon, 13 Aug 2007 21:06:09 +0300 Subject: [SciPy-dev] Fwd: from GSoC student Dmitrey Kroshko Message-ID: Hi Matthieu, I just implemented a very simple algorithm for the line-search subproblem, described in the book B. Pshenichniy "Linearisation method", page 46, and lincher is almost the same as the NLP solver from the same page 46.
You can check that one in lincher.py, line 296 (as you see, it doesn't use gradient info vs yours). But since the book is dated 1983, I guess there are better algorithms by now. I will be happy if you'll provide any one (that can be used with non-smooth funcs). So I have updated svn, and if you are interested here are the results of my and your funcs: #my Armijo implementation itn 66 : Fk= 85.4300044557 maxResidual= 8.77891444873e-07 istop: 4 (|| F[k] - F[k-1] || < funtol) Solver: Time elapsed = 7.41 CPU Time Elapsed = 7.28 objFunValue: 85.4300044557 (feasible) #Matthieu: state = {'direction' : direction, 'gradient': lsF.gradient(x0)} mylinesearch = line_search.StrongWolfePowellRule(sigma=5) destination = mylinesearch(function = lsF, origin = x0, step = direction, state = state) itn 78 : Fk= 85.4300278074 maxResidual= 6.97178904829e-07 istop: 4 (|| F[k] - F[k-1] || < funtol) Solver: Time elapsed = 8.58 CPU Time Elapsed = 8.46 objFunValue: 85.4300278074 (feasible) if I use line_search.StrongWolfePowellRule() (i.e. with default param sigma) it yields (the example from lincher.py head) itn 0: Fk= 1428.11851019 maxResidual= 2242631.78131 itn 10 : Fk= 86.6072664467 maxResidual= 0.466521056114 N= 5336.61507377 and then the CPU hangs up (at least, I didn't observe anything till ~2 min and then stopped; my iprint = 10). Matthieu Brucher wrote: Hi, Some precisions :) this week I wrote a line-search optimizer based on the Armijo rule. It seems not to be committed yet. I've coded it myself thursday or friday : http://projects.scipy.org/scipy/scikits/browser/trunk/openopt/scikits/openopt/solvers/optimizers/line_search/backtracking_search.py Too sad dmitrey did not code it with my framework.
It's very simple, but lincher no longer requires other line-search optimizers from other software - I mean Matthieu's one (unfortunately it very often makes the CPU hang up) Precision here so that there is no misunderstanding : the one dmitrey is talking about is one of many, it's the one with the Strong Wolfe-Powell rule, and dmitrey's cost function is not smooth, thus the line search has trouble dealing with it. For those who are interested : http://projects.scipy.org/scipy/scikits/wiki/Optimization Matthieu _______________________________________________ Scipy-dev mailing list Scipy-dev at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.brucher at gmail.com Mon Aug 13 14:44:55 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 13 Aug 2007 20:44:55 +0200 Subject: [SciPy-dev] Fwd: from GSoC student Dmitrey Kroshko In-Reply-To: References: Message-ID: 2007/8/13, Dmitrey Kroshko : > > Hi Matthieu, > I just implemented a very simple algorithm for the line-search subproblem, > described in the book B. Pshenichniy "Linearisation method", page 46, and > lincher is almost the same as the NLP solver from the same page 46. You can > check that one in lincher.py, line 296 (as you see, it doesn't use gradient > info vs yours). But since the book is dated 1983, I guess there are better > algorithms by now. I will be happy if you'll provide any one (that can be > used with non-smooth funcs). > It's logical that 'my' line search uses the gradient information, it's the WP rules. As I stated before, this is only one of several searches that are available. One of the others available is the backtracking search, one of those mentioned in our discussion, which is only an application of the Armijo rule (actually the first of the two Wolfe-Powell rules).
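For readers following the thread: the backtracking/Armijo rule both implementations build on fits in a dozen lines. A generic hand-written sketch (the names below are mine, not the lincher or scikits.openopt API):

```python
import numpy as np

def backtracking_armijo(f, grad, x, d, c=1e-4, beta=0.5, t0=1.0, max_iter=50):
    """Shrink the step t until the Armijo (sufficient decrease) condition
    f(x + t*d) <= f(x) + c * t * <grad(x), d> holds."""
    fx = f(x)
    slope = np.dot(grad(x), d)  # negative when d is a descent direction
    t = t0
    for _ in range(max_iter):
        if f(x + t * d) <= fx + c * t * slope:
            break
        t *= beta  # a fixed 0.5 shrink factor corresponds to beta=0.5
    return t

# quadratic test problem, minimum at the origin
f = lambda x: float(np.dot(x, x))
grad = lambda x: 2.0 * x
x0 = np.array([1.0, -2.0])
step = backtracking_armijo(f, grad, x0, -grad(x0))
```

With c in (0, 1) this enforces only the sufficient-decrease (first Wolfe-Powell) condition; a Strong Wolfe-Powell search additionally checks a curvature condition on the gradient at the trial point, which is why it needs gradient evaluations during the search and struggles on non-smooth functions.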
So I have updated svn, > Thanks :) The code here http://projects.scipy.org/scipy/scikits/browser/trunk/openopt/scikits/openopt/solvers/optimizers/line_search/backtracking_search.py does almost the same, but with a custom factor instead of only the 0.5 value. and if you are interested here are the results of my and your funcs: > > #my Armijo implementation > itn 66 : Fk= 85.4300044557 maxResidual= 8.77891444873e-07 > istop: 4 (|| F[k] - F[k-1] || < funtol) > Solver: Time elapsed = 7.41 CPU Time Elapsed = 7.28 > objFunValue: 85.4300044557 (feasible) > > #Matthieu: > state = {'direction' : direction, 'gradient': lsF.gradient(x0)} > mylinesearch = line_search.StrongWolfePowellRule(sigma=5) > destination = mylinesearch(function = lsF, origin = x0, step = > direction, state = state) > > itn 78 : Fk= 85.4300278074 maxResidual= 6.97178904829e-07 > istop: 4 (|| F[k] - F[k-1] || < funtol) > Solver: Time elapsed = 8.58 CPU Time Elapsed = 8.46 > objFunValue: 85.4300278074 (feasible) > > if I use > line_search.StrongWolfePowellRule() (i.e. with default param sigma) > it yields (the example from lincher.py head) > itn 0: Fk= 1428.11851019 maxResidual= 2242631.78131 > itn 10 : Fk= 86.6072664467 maxResidual= 0.466521056114 > N= 5336.61507377 > and then the CPU hangs up (at least, I didn't observe anything till ~2 min and > then stopped; my iprint = 10). > Well, with this value, outside the (0,1) interval, you are using the Armijo rule only ;) I'm happy you found a correct line search for your optimizer :) Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From openopt at ukr.net Tue Aug 14 03:43:55 2007 From: openopt at ukr.net (Dmitrey Kroshko) Date: Tue, 14 Aug 2007 10:43:55 +0300 Subject: [SciPy-dev] problems with login/register at scikits page Message-ID: hi all, I constantly get problems with login/register at the scikits page https://projects.scipy.org/scipy/scikits/register My old registered login names don't work.
If I try to register a new one - after I press "register account" I am asked (in popup window) to enter login name and password and those (just created 5 sec ago, as well as old ones) are not working. if it's important I'm using Mozilla Firefox 2.0.0.6 (& Ubuntu 7.04) (but I had encountered same problems with earlier Mozilla versions as well). Can anyone fix the problem? Regards, D. -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.brucher at gmail.com Tue Aug 14 03:53:36 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 14 Aug 2007 09:53:36 +0200 Subject: [SciPy-dev] problems with login/register at scikits page In-Reply-To: References: Message-ID: I had the same problem, I don't understand why, but Jeff Strunk solved it for me. Matthieu 2007/8/14, Dmitrey Kroshko : > > hi all, > I constantly get problems with login/register at scikits page > > https://projects.scipy.org/scipy/scikits/register > > My old registered login names don't work. If I try to register a new one - > after I press "register account" I am asked (in popup window) to enter login > name and password and those (just created 5 sec ago, as well as old ones) > are not working. > if it's important I'm using Mozilla Firefox 2.0.0.6 (& Ubuntu 7.04) (but I > had encountered same problems with earlier Mozilla versions as well). > > Can anyone fix the problem? > > Regards, D. > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From josh.p.marshall at gmail.com Tue Aug 14 05:03:20 2007 From: josh.p.marshall at gmail.com (Josh Marshall) Date: Tue, 14 Aug 2007 19:03:20 +1000 Subject: [SciPy-dev] OS X universal SciPy: success! In-Reply-To: References: Message-ID: I've been trying to easily build an OS X universal SciPy for some time now. 
[1] My prior efforts consisted of lipo-ing together the PPC and x86 'Superpacks' put together by Chris Fonnesbeck [2], which worked for distributing my image processing app locally. However, this isn't really a good way to do it, especially not for putting up for general use. I came across a successful Universal build of gfortran [4,5], and then shortly found this message [5] claiming it could possibly be used to build a universal SciPy. So, I gave it a shot and it works! (with some tricks...) The build has both ppc and i386 architectures in every .so in the scipy install. The tests run fine on my G4, but I haven't yet had the chance to try it on an Intel Mac. If anyone is keen to do so, please let me know. There will need to be some modifications to numpy distutils, since it presumes that there isn't a universal Fortran. The patch below is just a hack to get it working, and will break any non-universal gfortran. Essentially, all that needs to happen is to have '-arch ppc -arch i386' added to any call to gfortran (both compile and link), and the '-march' flags removed. What's the best way to add this functionality to numpy distutils? I couldn't think of any way to test for a universal compiler, other than trying to compile a test file and seeing if it dies with the multiple arch flags.
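The probe Josh describes — compile a trivial file with the multiple -arch flags and see whether the compiler chokes — might be sketched like this (my own illustration, not numpy.distutils code; the function name and flag set are assumptions):

```python
import os
import subprocess
import tempfile

def fortran_is_universal(fc="gfortran", archs=("ppc", "i386")):
    """Return True if `fc` accepts several -arch flags on a trivial file,
    i.e. looks like a universal (fat-binary) compiler."""
    src = tempfile.NamedTemporaryFile(suffix=".f", delete=False, mode="w")
    src.write("      program conftest\n      end\n")
    src.close()
    obj = src.name + ".o"
    cmd = [fc]
    for arch in archs:
        cmd += ["-arch", arch]
    cmd += ["-c", src.name, "-o", obj]
    try:
        ok = subprocess.call(cmd, stdout=subprocess.DEVNULL,
                             stderr=subprocess.DEVNULL) == 0
    except OSError:  # compiler not installed at all
        ok = False
    finally:
        os.unlink(src.name)
        if os.path.exists(obj):
            os.unlink(obj)
    return ok
```

A build system could call this once and fall back to the single-arch flag set when it returns False, rather than unconditionally patching the flags in as the hack above does.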
Regards, Josh Marshall [1] http://mail.python.org/pipermail/pythonmac-sig/2006-December/ 018556.html [2] http://trichech.us/?page_id=5 [3] http://r.research.att.com/tools/ [4] http://r.research.att.com/gfortran-4.2.1.dmg [5] http://mail.python.org/pipermail/pythonmac-sig/2007-May/018975.html isengard:~/Development/Python/numpy-svn/numpy/distutils/fcompiler Josh $ svn diff gnu.py Index: gnu.py =================================================================== --- gnu.py (revision 3964) +++ gnu.py (working copy) @@ -102,6 +102,7 @@ minor) opt.extend(['-undefined', 'dynamic_lookup', '-bundle']) + opt.extend(['-arch ppc -arch i386']) else: opt.append("-shared") if sys.platform.startswith('sunos'): @@ -183,12 +184,13 @@ # Since Apple doesn't distribute a GNU Fortran compiler, we # can't add -arch ppc or -arch i386, as only their version # of the GNU compilers accepts those. - for a in '601 602 603 603e 604 604e 620 630 740 7400 7450 750'\ - '403 505 801 821 823 860'.split(): - if getattr(cpu,'is_ppc%s'%a)(): - opt.append('-mcpu='+a) - opt.append('-mtune='+a) - break + #for a in '601 602 603 603e 604 604e 620 630 740 7400 7450 750'\ + # '403 505 801 821 823 860'.split(): + # if getattr(cpu,'is_ppc%s'%a)(): + # opt.append('-mcpu='+a) + # opt.append('-mtune='+a) + # break + opt.append('-arch ppc -arch i386') return opt From openopt at ukr.net Tue Aug 14 11:41:51 2007 From: openopt at ukr.net (Dmitrey Kroshko) Date: Tue, 14 Aug 2007 18:41:51 +0300 Subject: [SciPy-dev] about numpy.max() and numpy.min() - isn't it a bug? Message-ID: Hi all, as for me I think the behavior of the func is not optimal >>> from numpy import * >>> max(array(2.), 0) array(2.0) >>> max(array(-2.), 0) 0 So suppose usually I have x values > 0 and 1 time from 100000 - less than zero. Then sometimes I just get the error "int object has no attribute T" (provided I have code r = max(x,0).T, also, here could be other funcs - max(x,0).tolist(), max(x,0).sum() etc) the same bug if I have 0. (float) instead of 0. Maybe, the same should be fixed for numpy.min() : >>> min(array(-2.), 0) array(-2.0) >>> min(array(2.), 0) 0 Regards, D. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dd55 at cornell.edu Tue Aug 14 12:05:08 2007 From: dd55 at cornell.edu (Darren Dale) Date: Tue, 14 Aug 2007 12:05:08 -0400 Subject: [SciPy-dev] about numpy.max() and numpy.min() - isn't it a bug? In-Reply-To: References: Message-ID: <200708141205.08499.dd55@cornell.edu> On Tuesday 14 August 2007 11:41:51 am Dmitrey Kroshko wrote: > Hi all, > as for me I think the behavior of the func is not optimal > > >>> from numpy import * > >>> max(array(2.), 0) > > array(2.0) > > >>> max(array(-2.), 0) > > 0 You are not using numpy's amax and amin, but the python builtins.
>>> import numpy >>> numpy.amin(numpy.array(-2.), 0) -2.0 From ondrej at certik.cz Tue Aug 14 12:24:57 2007 From: ondrej at certik.cz (Ondrej Certik) Date: Tue, 14 Aug 2007 09:24:57 -0700 Subject: [SciPy-dev] scipy svn fails to build on Debian Message-ID: <85b5c3130708140924q2e768589q8b1cbe6543f0272c@mail.gmail.com> Hi, I have a fresh Debian unstable and scipy svn fails to build: $ ./setup.py build [...] adding 'build/src.linux-i686-2.4/fortranobject.c' to sources. adding 'build/src.linux-i686-2.4' to include_dirs. building extension "scipy.linsolve._zsuperlu" sources building extension "scipy.linsolve._dsuperlu" sources building extension "scipy.linsolve._csuperlu" sources building extension "scipy.linsolve._ssuperlu" sources building extension "scipy.linsolve.umfpack.__umfpack" sources creating build/src.linux-i686-2.4/scipy/linsolve creating build/src.linux-i686-2.4/scipy/linsolve/umfpack adding 'Lib/linsolve/umfpack/umfpack.i' to sources. creating build/src.linux-i686-2.4/Lib/linsolve creating build/src.linux-i686-2.4/Lib/linsolve/umfpack swig: Lib/linsolve/umfpack/umfpack.i swig -python -I/usr/include -o build/src.linux-i686-2.4/Lib/linsolve/umfpack/_umfpack_wrap.c -outdir build/src.linux-i686-2.4/Lib/linsolve/umfpack Lib/linsolve/umfpack/umfpack.i Lib/linsolve/umfpack/umfpack.i:192: Error: Unable to find 'umfpack.h' Lib/linsolve/umfpack/umfpack.i:193: Error: Unable to find 'umfpack_solve.h' Lib/linsolve/umfpack/umfpack.i:194: Error: Unable to find 'umfpack_defaults.h' Lib/linsolve/umfpack/umfpack.i:195: Error: Unable to find 'umfpack_triplet_to_col.h' Lib/linsolve/umfpack/umfpack.i:196: Error: Unable to find 'umfpack_col_to_triplet.h' Lib/linsolve/umfpack/umfpack.i:197: Error: Unable to find 'umfpack_transpose.h' Lib/linsolve/umfpack/umfpack.i:198: Error: Unable to find 'umfpack_scale.h' Lib/linsolve/umfpack/umfpack.i:200: Error: Unable to find 'umfpack_report_symbolic.h' Lib/linsolve/umfpack/umfpack.i:201: Error: Unable to find 'umfpack_report_numeric.h' 
Lib/linsolve/umfpack/umfpack.i:202: Error: Unable to find 'umfpack_report_info.h' Lib/linsolve/umfpack/umfpack.i:203: Error: Unable to find 'umfpack_report_control.h' Lib/linsolve/umfpack/umfpack.i:215: Error: Unable to find 'umfpack_symbolic.h' Lib/linsolve/umfpack/umfpack.i:216: Error: Unable to find 'umfpack_numeric.h' Lib/linsolve/umfpack/umfpack.i:225: Error: Unable to find 'umfpack_free_symbolic.h' Lib/linsolve/umfpack/umfpack.i:226: Error: Unable to find 'umfpack_free_numeric.h' Lib/linsolve/umfpack/umfpack.i:248: Error: Unable to find 'umfpack_get_lunz.h' Lib/linsolve/umfpack/umfpack.i:272: Error: Unable to find 'umfpack_get_numeric.h' error: command 'swig' failed with exit status 1 Ondrej From openopt at ukr.net Tue Aug 14 12:27:22 2007 From: openopt at ukr.net (Dmitrey Kroshko) Date: Tue, 14 Aug 2007 19:27:22 +0300 Subject: [SciPy-dev] about numpy.max() and numpy.min() - isn't it a bug? Message-ID: I use the min and max from numpy (from numpy import *) Ok, maybe they are the same as the python builtins But ordinary users usually use min and max, not amin and amax, so to prevent the bug mentioned either they somehow should be warned, or the behavior of numpy.min and max should be set to always produce a numpy.array (for more safety). Also, I guess it's possible to contact the python developers that wrote the python min and max and ask them to yield a numpy.array (when any of the min/max args is a numpy.array) instead of a python type. Also, I think the behavior of the +/- operators (to yield a python type) is not optimal. Suppose I have x = (4.4*t+myNumpyArr).tolist() So I should care whether myNumpyArr is of size > 1 or not. Of course, there are funcs like atleast_1d or something like that, but users begin to use those only after some time has elapsed to find and fix the bug. Same for amax/amin vs max/min. //my 2 cents, D.
Darren Dale wrote: On Tuesday 14 August 2007 11:41:51 am Dmitrey Kroshko wrote: Hi all, as for me I think the behavior of the func is not optimal from numpy import * max(array(2.), 0) array(2.0) max(array(-2.), 0) 0 You are not using numpy's amax and amin, but the python builtins. import numpy numpy.amin(numpy.array(-2.), 0) -2.0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ondrej at certik.cz Tue Aug 14 12:34:07 2007 From: ondrej at certik.cz (Ondrej Certik) Date: Tue, 14 Aug 2007 09:34:07 -0700 Subject: [SciPy-dev] scipy svn fails to build on Debian In-Reply-To: <85b5c3130708140924q2e768589q8b1cbe6543f0272c@mail.gmail.com> References: <85b5c3130708140924q2e768589q8b1cbe6543f0272c@mail.gmail.com> Message-ID: <85b5c3130708140934i488b8ab0nac58ded990165bc0@mail.gmail.com> > Lib/linsolve/umfpack/umfpack.i:216: Error: Unable to find 'umfpack_numeric.h' > Lib/linsolve/umfpack/umfpack.i:225: Error: Unable to find > 'umfpack_free_symbolic.h' > Lib/linsolve/umfpack/umfpack.i:226: Error: Unable to find > 'umfpack_free_numeric.h' > Lib/linsolve/umfpack/umfpack.i:248: Error: Unable to find 'umfpack_get_lunz.h' > Lib/linsolve/umfpack/umfpack.i:272: Error: Unable to find > 'umfpack_get_numeric.h' > error: command 'swig' failed with exit status 1 The attached patch fixes the problem. I didn't commit it to svn, since I think it would break the build for non Debian people. Ondrej -------------- next part -------------- A non-text attachment was scrubbed... Name: scipy_debian_umfpack.patch Type: text/x-patch Size: 2178 bytes Desc: not available URL: From aisaac at american.edu Tue Aug 14 13:09:17 2007 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 14 Aug 2007 13:09:17 -0400 Subject: [SciPy-dev] about numpy.max() and numpy.min() - isn't it a bug? In-Reply-To: References: Message-ID: On Tue, 14 Aug 2007, Dmitrey Kroshko apparently wrote: > I use the min and max from numpy (from numpy import *) No, Darren is giving you the answer. 
See below. There are two problems. One problem is that numpy.max exists for historical reasons. Another problem is that numpy.max is not imported when you use import *. As the saying goes: "That hurts!" "So don't do it!" I.e., adopt practices that ensure you know the namespace you are using. Cheers, Alan Isaac Python 2.5.1 (r251:54863, Apr 18 2007, 08:51:08) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> from numpy import * >>> help(numpy.max) Help on function amax in module numpy.core.fromnumeric: amax(a, axis=None, out=None) Return the maximum of 'a' along dimension axis. >>> help(max) Help on built-in function max in module __builtin__: max(...) max(iterable[, key=func]) -> value max(a, b, c, ...[, key=func]) -> value With a single iterable argument, return its largest item. With two or more arguments, return the largest argument. >>> From robfalck at gmail.com Wed Aug 15 11:59:16 2007 From: robfalck at gmail.com (Rob Falck) Date: Wed, 15 Aug 2007 11:59:16 -0400 Subject: [SciPy-dev] SLSQP optimizer Message-ID: Hello, I've been tinkering with the SLSQP (Sequential Least SQuares Programming) optimizer on netlib, and have had success in wrapping it using f2py. I'd like to put this optimizer in Scipy because it offers bounds on the independent variables as well as both equality and inequality constraints. My first concern is, is the license compatible with SciPy? SLSQP is apparently under the ACM license. The following URL's show the Fortran source and some documentation: - http://www.netlib.org/toms/733 - http://portal.acm.org/citation.cfm?doid=192115.192124 - Rob Falck -------------- next part -------------- An HTML attachment was scrubbed...
URL: From millman at berkeley.edu Wed Aug 15 12:22:38 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Wed, 15 Aug 2007 09:22:38 -0700 Subject: [SciPy-dev] NumPy 1.0.3.x and SciPy 0.5.2.x Message-ID: Hello, I am hoping to release NumPy 1.0.3.1 and SciPy 0.5.2.1 this weekend. These releases will work with each other and get rid of the annoying deprecation warning about SciPyTest. They are both basically ready to release. If you have some time, please build and install the stable branches and let me know if you have any errors. You can check out the code here: svn co http://svn.scipy.org/svn/numpy/branches/1.0.3.x svn co http://svn.scipy.org/svn/scipy/branches/0.5.2.x Below is a list of the changes I have made. NumPy 1.0.3.1 ============ * Adds back get_path to numpy.distutils.misc_utils SciPy 0.5.2.1 ========== * Replaces ScipyTest with NumpyTest * Fixes mio5.py as per revision 2893 * Adds missing test definition in scipy.cluster as per revision 2941 * Synchs odr module with trunk since odr is broken in 0.5.2 * Updates for SWIG > 1.3.29 and fixes memory leak of type 'void *' Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From robert.kern at gmail.com Wed Aug 15 12:56:02 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 15 Aug 2007 09:56:02 -0700 Subject: [SciPy-dev] SLSQP optimizer In-Reply-To: References: Message-ID: <46C33022.8050900@gmail.com> Rob Falck wrote: > Hello, > > I've been tinkering with the SLSQP (Sequential Least SQuares > Programming) optimizer on netlib, and have had success in wrapping it > using f2py. I'd like to put this optimizer in Scipy because it offers > bounds on the independent variables as well as both equality and > inequality constraints. My first concern is, is the license compatible > with SciPy? SLSQP is apparently under the ACM license. No, the ACM forbids commercial use. 
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From aisaac at american.edu Wed Aug 15 13:47:14 2007 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 15 Aug 2007 13:47:14 -0400 Subject: [SciPy-dev] SLSQP optimizer In-Reply-To: <46C33022.8050900@gmail.com> References: <46C33022.8050900@gmail.com> Message-ID: > Rob Falck wrote: >> My first concern is, is the license compatible with >> SciPy? SLSQP is apparently under the ACM license. On Wed, 15 Aug 2007, Robert Kern apparently wrote: > the ACM forbids commercial use. 1. Suppose it is under the ACM license: it never hurts to ask for it to be relicensed as BSD. 2. It is not clear to me that this is under the ACM license. Why not write the author, explain briefly what SciPy is, and ask him to provide an explicit BSD license? "Dieter Kraft" Cheers, Alan Isaac From robert.kern at gmail.com Wed Aug 15 13:49:15 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 15 Aug 2007 10:49:15 -0700 Subject: [SciPy-dev] SLSQP optimizer In-Reply-To: References: <46C33022.8050900@gmail.com> Message-ID: <46C33C9B.60603@gmail.com> Alan G Isaac wrote: >> Rob Falck wrote: >>> My first concern is, is the license compatible with >>> SciPy? SLSQP is apparently under the ACM license. > > On Wed, 15 Aug 2007, Robert Kern apparently wrote: >> the ACM forbids commercial use. > > 1. Suppose it is under the ACM license: it never hurts to > ask for it to be relicensed as BSD. > 2. It is not clear to me that this is under the ACM license. http://www.acm.org/pubs/copyright_policy/#Works "ACM requires authors to assign their copyrights to ACM as a condition of publishing the work." 
http://www.acm.org/pubs/copyright_policy/softwareCRnotice.html "All software, both binary and source published by the Association for Computing Machinery (hereafter, Software) is copyrighted by the Association (hereafter, ACM) and ownership of all right, title and interest in and to the Software remains with ACM." The author no longer owns the copyright. He cannot give it to us under a BSD license. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From matthew.brett at gmail.com Wed Aug 15 14:38:11 2007 From: matthew.brett at gmail.com (Matthew Brett) Date: Wed, 15 Aug 2007 19:38:11 +0100 Subject: [SciPy-dev] NumPy 1.0.3.x and SciPy 0.5.2.x In-Reply-To: References: Message-ID: <1e2af89e0708151138y69865f9brcfb43b1b6f42dfd@mail.gmail.com> good job On 8/15/07, Jarrod Millman wrote: > Hello, > > I am hoping to release NumPy 1.0.3.1 and SciPy 0.5.2.1 this weekend. > These releases will work with each other and get rid of the annoying > deprecation warning about SciPyTest. > > They are both basically ready to release. If you have some time, > please build and install the stable branches and let me know if you > have any errors. > > You can check out the code here: > svn co http://svn.scipy.org/svn/numpy/branches/1.0.3.x > svn co http://svn.scipy.org/svn/scipy/branches/0.5.2.x > > Below is a list of the changes I have made. 
> > NumPy 1.0.3.1 > ============ > * Adds back get_path to numpy.distutils.misc_utils > > SciPy 0.5.2.1 > ========== > * Replaces ScipyTest with NumpyTest > * Fixes mio5.py as per revision 2893 > * Adds missing test definition in scipy.cluster as per revision 2941 > * Synchs odr module with trunk since odr is broken in 0.5.2 > * Updates for SWIG > 1.3.29 and fixes memory leak of type 'void *' > > > Thanks, > > -- > Jarrod Millman > Computational Infrastructure for Research Labs > 10 Giannini Hall, UC Berkeley > phone: 510.643.4014 > http://cirl.berkeley.edu/ > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From aisaac at american.edu Wed Aug 15 14:57:21 2007 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 15 Aug 2007 14:57:21 -0400 Subject: [SciPy-dev] SLSQP optimizer In-Reply-To: <46C33C9B.60603@gmail.com> References: <46C33022.8050900@gmail.com> <46C33C9B.60603@gmail.com> Message-ID: On Wed, 15 Aug 2007, Robert Kern apparently wrote: > http://www.acm.org/pubs/copyright_policy/#Works > "ACM requires authors to assign their copyrights to ACM as a condition of > publishing the work. Yes, but you can see his copyright retention in the published code! Anyway, I'll write them both. Cheers, Alan Isaac From pgmdevlist at gmail.com Wed Aug 15 15:33:23 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 15 Aug 2007 15:33:23 -0400 Subject: [SciPy-dev] Bandwidth selection Message-ID: <200708151533.23570.pgmdevlist@gmail.com> All, I'm slowly cleaning some code I had written a few months ago, porting SiZer to python. SiZer, or Significant Zero crossing, is a graphical tool to estimate what features in a distribution are significant. It is based on gaussian kernel smoothing at different bandwidths: depending on the first derivative of the smooth, features are judged significant or not at a given confidence level. 
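The idea Pierre describes - smoothing the same data at several bandwidths and checking the sign changes of the first derivative - can be illustrated with a hand-rolled gaussian kernel estimate (a toy sketch, not SiZer or Pierre's code; the function name and data are made up):

```python
import numpy as np

def gaussian_smooth(data, grid, bandwidth):
    """Gaussian kernel density estimate of `data` on `grid` (toy version)."""
    # Each data point contributes a gaussian bump of width `bandwidth`;
    # averaging the bumps gives the smoothed density.
    u = (grid[:, None] - data[None, :]) / bandwidth
    return np.exp(-0.5 * u ** 2).mean(axis=1) / (bandwidth * np.sqrt(2 * np.pi))

# Bimodal toy data: two well-separated gaussian clusters.
rng = np.random.RandomState(0)
data = np.concatenate([rng.normal(-2, 0.5, 200), rng.normal(2, 0.5, 200)])
grid = np.linspace(-4, 4, 81)

# Count extrema (sign changes of the first derivative) per bandwidth:
# a small bandwidth keeps the bimodal structure visible, a large one
# smooths it away to a single mode.
extrema = {}
for h in (0.3, 3.0):
    density = gaussian_smooth(data, grid, h)
    extrema[h] = int(np.sum(np.diff(np.sign(np.diff(density))) != 0))
print(extrema)
```

SiZer then goes one step further and asks, per bandwidth, which of those derivative sign changes are statistically significant at a given confidence level; the sketch above only counts them.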
The initial code was written in Matlab w/ pieces of Fortran, and I received permission from the main author to port it to scipy and release it as BSD. The package introduces some gaussian-kernel smoothing functions, as well as some automatic bandwidth selectors. As a first step, I want to limit myself to 1D datasets, w/ gaussian kernels.
* What has already been done in scipy concerning bandwidth selectors? There's a scipy.stats.kde, but the documentation is scarce...
* What would be the best way to release the code? A scikit?
Thanks a lot in advance for your input
P.

From ondrej at certik.cz Wed Aug 15 17:05:28 2007 From: ondrej at certik.cz (Ondrej Certik) Date: Wed, 15 Aug 2007 14:05:28 -0700 Subject: [SciPy-dev] NumPy 1.0.3.x and SciPy 0.5.2.x In-Reply-To: <1e2af89e0708151138y69865f9brcfb43b1b6f42dfd@mail.gmail.com> References: <1e2af89e0708151138y69865f9brcfb43b1b6f42dfd@mail.gmail.com> Message-ID: <85b5c3130708151405w4f02f299q47178ceb373e2fa8@mail.gmail.com>

Good job. Those using Debian and Ubuntu know that the python-scipy package was broken for almost two months. I submitted a patch to the bug database that fixes this and makes scipy build and install: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=426012 (scroll down). Hopefully the maintainer will upload the new package soon; in the meantime, just apply my patch and build the package by hand.

Ondrej

On 8/15/07, Matthew Brett wrote: > good job > > On 8/15/07, Jarrod Millman wrote: > > Hello, > > > > I am hoping to release NumPy 1.0.3.1 and SciPy 0.5.2.1 this weekend. > > These releases will work with each other and get rid of the annoying > > deprecation warning about SciPyTest. > > > > They are both basically ready to release. If you have some time, > > please build and install the stable branches and let me know if you > > have any errors.
> > > > You can check out the code here: > > svn co http://svn.scipy.org/svn/numpy/branches/1.0.3.x > > svn co http://svn.scipy.org/svn/scipy/branches/0.5.2.x > > > > Below is a list of the changes I have made. > > > > NumPy 1.0.3.1 > > ============ > > * Adds back get_path to numpy.distutils.misc_utils > > > > SciPy 0.5.2.1 > > ========== > > * Replaces ScipyTest with NumpyTest > > * Fixes mio5.py as per revision 2893 > > * Adds missing test definition in scipy.cluster as per revision 2941 > > * Synchs odr module with trunk since odr is broken in 0.5.2 > > * Updates for SWIG > 1.3.29 and fixes memory leak of type 'void *' > > > > > > Thanks, > > > > -- > > Jarrod Millman > > Computational Infrastructure for Research Labs > > 10 Giannini Hall, UC Berkeley > > phone: 510.643.4014 > > http://cirl.berkeley.edu/ > > _______________________________________________ > > Scipy-dev mailing list > > Scipy-dev at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev >

From openopt at ukr.net Sat Aug 18 05:05:41 2007 From: openopt at ukr.net (dmitrey) Date: Sat, 18 Aug 2007 12:05:41 +0300 Subject: [SciPy-dev] GSoC weekly report Message-ID: <46C6B665.6040704@ukr.net>

Hi all, for various reasons (including work and study in my dept) I'm a little bit behind my GSoC schedule, but I hope to have it all done in time, and/or I will spend some more days on it after GSoC finishes. So, this week I updated the openopt documentation, and some examples were added (see ./examples/nlp_1.py, nlp_2.py, qp_1.py, milp_1.py, lp_1.py, nsp_1.py). 2 solvers from scipy were connected: l_bfgs_b as scipy_lbfgsb and tnc as scipy_tnc. I tried to connect cobyla, but I encountered some problems described in my blog http://openopt.blogspot.com/2007/08/lbfgsb-from-scipyoptimize-have-been.html I've got another letter from the ALGENCAN developers.
They informed me that they took some of my suggestions into account (using numpy.array instead of a Python list, etc.) and have already implemented them. However, I sent them some further suggestions, and there is no answer yet as to whether they will implement those. On the other hand, I think it already meets enough requirements to be connected as an openopt solver, and I guess it will be much more effective than lincher (because it is sometimes more effective than the well-known IPOPT). However, it requires several days to connect, and I'm unable to do this without modifying my GSoC schedule. Read more about the ALGENCAN letter at http://openopt.blogspot.com/2007/08/news-from-algencan.html

openopt tickets 33 and 34 filed by Nils Wagner have been fixed (although I did not observe the problem from the 33rd) and should be closed, but I still have trouble logging in / registering at projects.scikits.org, and Jeff Strunk hasn't answered my letter yet. See http://openopt.blogspot.com/2007/08/openopt-tickets-33-and-34.html for more details. Also, some minor changes to lincher were made (I have no time for major enhancements).

Regards, D.

From openopt at ukr.net Sun Aug 19 11:43:13 2007 From: openopt at ukr.net (dmitrey) Date: Sun, 19 Aug 2007 18:43:13 +0300 Subject: [SciPy-dev] report on some tickets related to scipy.optimize Message-ID: <46C86511.1090805@ukr.net>

hi all, here is a summary of my ticket work, as requested by Alan Isaac. All tickets assigned to me are mentioned here.
======================
Pending
-------
http://projects.scipy.org/scipy/scipy/ticket/344 should be closed: changeset [3172]. It has been fixed, and now all fmin_cg tests known to me, including the unittests, pass OK. The problem was in the _cubicmin func: it used Num.dot([[dc**2, -db**2],[-dc**3, db**3]],[fb-fa-C*db,fc-fa-C*dc]), and this gave an incorrect multiply with the current numpy version (a matrix of shape 2x2x1 times a matrix of shape 2x2, or something like that).

http://projects.scipy.org/scipy/scipy/ticket/384 tnc: TypeError: argument 2 must be list, not numpy.ndarray
The problem was ... = fmin_tnc(func, x, ...); earlier, tnc required x to be a Python list, not a numpy.ndarray. I added the line x = asfarray(x).tolist() so now tnc works with both a python list (see /optimize/tests/test_optimize.py, funcs test3fg, test4fg and others) and a numpy.ndarray (func test38fg from the same file).

http://projects.scipy.org/scipy/scipy/ticket/390 (bfgs fails)
We decided this one should be closed because the bfgs solver is being used on a non-convex objfun with lots of local minima, so it is nothing special that it sometimes yields a different solution than the one mentioned by the ticket author. The ticket should be closed.

http://projects.scipy.org/scipy/scipy/ticket/234 suppressing warning in leastsq and fsolve in optimize
It's done in the way proposed by the ticket author; the ticket should be closed.

http://projects.scipy.org/scipy/scipy/ticket/285 (enhancement proposition for fmin_powell)
As proposed by Alan Isaac and agreed on the scipy mailing list, alternative changes to fmin_powell were implemented (object-oriented style: a class with some funcs and fields). // to Alan Isaac: it was Nils who published the comment, not I. // (please clearly summarize the blog comments here; // we cannot assume persistence of the blog) Afaik David intended to continue transforming other scipy.optimize funcs to the new style, but I don't know whether he will do it, or whether it is worth doing.
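The ticket 384 fix above amounts to normalizing the initial guess before handing it to tnc. A minimal standalone sketch of that conversion (the helper name is made up, and np.asarray with dtype=float is used here in place of the older asfarray spelling):

```python
import numpy as np

def as_tnc_x0(x):
    """Coerce an initial guess to the plain Python list of floats that
    the old tnc wrapper required (it rejected numpy.ndarray inputs)."""
    # Equivalent in effect to the asfarray(x).tolist() one-liner from the ticket.
    return np.asarray(x, dtype=float).tolist()

# Both call styles now normalize to the same thing:
assert as_tnc_x0([1, 2, 3]) == [1.0, 2.0, 3.0]
assert as_tnc_x0(np.array([1.0, 2.0, 3.0])) == [1.0, 2.0, 3.0]
```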
Won't Do
--------
http://projects.scipy.org/scipy/scipy/ticket/399 OK (won't do)

Closed
------
http://projects.scipy.org/scipy/scipy/ticket/296 OK (closed) http://projects.scipy.org/scipy/scipy/ticket/377 OK (closed) http://projects.scipy.org/scipy/scipy/ticket/416 OK (closed) http://projects.scipy.org/scipy/scipy/ticket/389 OK (closed) http://projects.scipy.org/scipy/scipy/ticket/388 OK (closed)

From fperez.net at gmail.com Sun Aug 19 20:51:11 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Sun, 19 Aug 2007 18:51:11 -0600 Subject: [SciPy-dev] NumPy 1.0.3.x and SciPy 0.5.2.x In-Reply-To: References: Message-ID:

Hey, On 8/15/07, Jarrod Millman wrote: > Hello, > > I am hoping to release NumPy 1.0.3.1 and SciPy 0.5.2.1 this weekend. > These releases will work with each other and get rid of the annoying > deprecation warning about SciPyTest.

I just wanted to give you a public, huge thank you for tackling this most thankless but important problem. Many people at the just-finished SciPy'07 conference mentioned better deployment/installation support as their main issue with scipy. Our tools are maturing, but we won't get very far if they don't actually get into the hands of users.

Regards, f

From ondrej at certik.cz Sun Aug 19 23:30:19 2007 From: ondrej at certik.cz (Ondrej Certik) Date: Sun, 19 Aug 2007 20:30:19 -0700 Subject: [SciPy-dev] NumPy 1.0.3.x and SciPy 0.5.2.x In-Reply-To: References: Message-ID: <85b5c3130708192030t6947d623oa00a710a229cbd5d@mail.gmail.com>

> I just wanted to give you a public, huge thank you for tackling this > most thankless but important problem. Many people at the just > finished SciPy'07 conference mentioned better deployment/installation > support as their main issue with scipy. Our tools are maturing, but > we won't get very far if they don't actually get in the hands of > users.

I think all of the developers should make sure that scipy and numpy install natively in their own favourite distribution.
So, for example, I am using Debian, so I'll try to keep an eye on it and help the maintainer of the deb package. This way we should cover most distributions.

Ondrej

P.S. I don't know what the native way of installing packages on Mac OS X is, but I know of the fink project, which basically allows one to use debian packages: http://finkproject.org/

From openopt at ukr.net Mon Aug 20 11:47:50 2007 From: openopt at ukr.net (dmitrey) Date: Mon, 20 Aug 2007 18:47:50 +0300 Subject: [SciPy-dev] optimizers module Message-ID: <46C9B7A6.5040205@ukr.net>

hi all, I'm trying to connect a new optimizer (line search via cubic interpolation) to Matthieu's code. Here are some things that should be discussed, I guess in a wider circle. I have already committed ./solvers/optimizers/line_search/qubic_interpolation.py tests/test_qubic_interpolation.py

The problems:
1. I have implemented the stop tolerance on x as self.minStepSize. However, isn't it more correct to observe |x_prev - x_new| according to the user-supplied xtol, rather than |alpha_prev - alpha_new| according to xtol? If the routine is called from a multi-dimensional NL problem with a known xtol provided by the user, I think it's more convenient and more correct to observe |x_prev - x_new| instead of |alpha_prev - alpha_new| as the stop criterion.
2. (this is primarily for Matthieu): where should gradtol be taken from? It's the main stop criterion, according to the alg. Currently I just set it to 1e-6.
3. Don't you think that a maxIter and/or maxFunEvals rule should be added? (I ask because I didn't see them in Matthieu's quadratic_interpolation solver.) It would make the algorithms more robust against CPU hangs caused by our own bugs and/or special funcs encountered.
I had implemented them, but since Matthieu didn't have them in quadratic_interpolation, I just commented out the stop criteria (all I can do is set my own defaults like 400 or 1000 (as well as gradtol 1e-6), but since Matthieu's "state" variable (afaik) does not have those, I can't take them as parameters). So should they exist or not?

regards, D.

From matthieu.brucher at gmail.com Mon Aug 20 12:34:33 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 20 Aug 2007 18:34:33 +0200 Subject: [SciPy-dev] optimizers module In-Reply-To: <46C9B7A6.5040205@ukr.net> References: <46C9B7A6.5040205@ukr.net> Message-ID:

Hi again ;)

I have already committed > ./solvers/optimizers/line_search/qubic_interpolation.py > tests/test_qubic_interpolation.py

qubic should be cubic, no ?

the problems: > 1. I have implemented stop tolerance x as self.minStepSize. > However, isn't it more correct to observe |x_prev - x_new| according to > given by user xtol, than to observe |alpha_prev - alpha_new| according > to xtol? If the routine is called from a multi-dimensional NL problem > with known xtol, provided by user, I think it's more convenient and more > correct to observe |x_prev - x_new| instead of |alpha_prev - alpha_new| > as stop criterion.

The basic cubic interpolation works on alpha. If you want to implement another based on x, no problem. I think that as a first step, we should add standard algorithms that are documented and described. After this step is done, we can explore.

2. (this is primarily for Matthieu): where should be gradtol taken from? > It's main stop criterion, according to alg. > Currently I just set it to 1e-6.

It should be taken in the constructor (see damped_line_search.py for instance).

3. Don't you think that maxIter and/or maxFunEvals rule should be added? > (I ask because I didn't see that ones in Matthieu's > quadratic_interpolation solver).
That is a good question, which also arose in our discussion of the Strong Wolfe-Powell rules, at least for maxIter.

It will make algorithms more stable to > CPU-hanging errors because of our errors and/or special funcs encountered. > I had implemented that ones but since Matthieu didn't have them in > quadratic_interpolation, I just comment out the stop criteria (all I can > do is to set my defaults like 400 or 1000 (as well as gradtol 1e-6), but > since Matthieu "state" variable (afaik) not have those ones - I can't > take them as parameters). > So should they exist or not?

If you want to use them, you should put them in the __init__ method as well.

The state could be populated with everything, but that would mean very cumbersome initializations. On the one hand, you would create each module with no parameters and pass all of them to the optimizer. That could mean a very long, unreadable line. On the other hand, you would create the optimizer, then create every module with the optimizer as a parameter. Not intuitive enough. This is where the limit between the separation principle and object orientation is fuzzy. So the state dictionary is only responsible for what is specifically connected to the function: either the parameters, or the different evaluations (hessian, gradient, direction and so on). That's why you "can't" put gradtol in it (for instance).

I saw that you test for the presence of the gradient method; you should not. If people want to use this line search, they _must_ provide a gradient. If they can't provide an analytical gradient, they can provide a numerical one by using helpers.ForwardFiniteDifferenceDerivatives. This is questionable, I know, but the simpler the algorithm, the simpler its use, reading, and debugging (that way, you can get rid of the f_and_df function as well, or at least of the test).

Matthieu -------------- next part -------------- An HTML attachment was scrubbed...
URL:

From openopt at ukr.net Mon Aug 20 13:40:18 2007 From: openopt at ukr.net (dmitrey) Date: Mon, 20 Aug 2007 20:40:18 +0300 Subject: [SciPy-dev] optimizers module In-Reply-To: References: <46C9B7A6.5040205@ukr.net> Message-ID: <46C9D202.6090205@ukr.net>

Matthieu Brucher wrote: > Hi again ;) > > I have already committed > ./solvers/optimizers/line_search/qubic_interpolation.py > tests/test_qubic_interpolation.py > > > > qubic should be cubic, no ?

yes, I will rename it now

> > > the problems: > 1. I have implemented stop tolerance x as self.minStepSize. > However, isn't it more correct to observe |x_prev - x_new| > according to > given by user xtol, than to observe |alpha_prev - alpha_new| > according > to xtol? If the routine is called from a multi-dimensional NL problem > with known xtol, provided by user, I think it's more convenient > and more > correct to observe |x_prev - x_new| instead of |alpha_prev - > alpha_new| > as stop criterion. > > > > The basic cubic interpolation works on alpha. If you want to implement > another based on x, not problem. I think that as a first step, we > should add standard algorithms that are documented and described. > After this step is done, we can explore.

yes, but all your solvers are already written in terms of the n-dimensional problem (x0 and direction, both of size nVars), so it would be more natural to use xtol (from the general problem), not alpha_tol (from the line-search subproblem)

> > > 2. (this is primarily for Matthieu): where should be gradtol taken > from? > It's main stop criterion, according to alg. > Currently I just set it to 1e-6. > > > > It should be taken in the constructor (see the damped_line_search.py > for instance) > > > 3. Don't you think that maxIter and/or maxFunEvals rule should be > added? > (I ask because I didn't see that ones in Matthieu's > quadratic_interpolation solver).
> > > That is a good question that raised also in our discussion for the > Strong Wolfe Powell rules, at least for the maxIter.

As for SWP, I think a check should be made for the case where a solution with the required c1 and c2 cannot be obtained and/or does not exist at all. For example, objFun(x) = 1e-5*x while c1 = 1e-4 (IIRC this is the example where I encountered alpha = 1e-28 -> f0 = f(x0+alpha*direction)). The SWP-based solver should produce something other than a CPU hang. OK, so it turned out to be impossible to obtain a new_X that satisfies c1 and c2, but an approximation will very often be good enough to continue solving the enclosing NL problem. So I think a check for |x_prev - x_new| < xtol should be added; it would be very helpful here. You have something like that with alphap in lines 68-70 (swp.py), but it is very unclear, and I suspect that for some problems it may be endless (as may the other stop criteria currently implemented in SWP).

> > > It will make algorithms more stable to > CPU-hanging errors because of our errors and/or special funcs > encountered. > I had implemented that ones but since Matthieu didn't have them in > quadratic_interpolation, I just comment out the stop criteria (all I can > do is to set my defaults like 400 or 1000 (as well as gradtol > 1e-6), but > since Matthieu "state" variable (afaik) not have those ones - I can't > take them as parameters). > So should they exist or not? > > > > If you want to use them, you should put them in the __init__ method as > well. > > The state could be populated with everything, but that would mean very > cumbersome initializations. On one hand, you should create each module > with no parameter and pass all of them to the optimizer. That could > mean a very long and not readable line. On the other hand, you should > create the optimizer, create then every module with the optimizer as a > parameter. Not intuitive enough. > This is were the limit between the separation principle and the object > orientation is fuzzy.
> So the state dictionary is only responsible for what is specifically > connected to the function. Either the parameters, or different > evaluations (hessian, gradient, direction and so on). That's why you > "can't" put gradtol in it (for instance).

I don't know your code very well yet, but why can't you just set default params as I do in /Kernel/BaseProblem.py? Then, if the user wants to change any specific parameter, he can do it very easily. And no "very long and not readable line" is present in my code.

> > I saw that you test for the presence of the gradient method, you > should not. If people want to use this line search, they _must_ > provide a gradient. If they can't provide an analytical gradient, they > can provide a numerical one by using > helpers.ForwardFiniteDifferenceDerivatives. This is questionable, I > know, but the simpler the algorithm, the simpler their use, their > reading and debugging (that way, you can get rid of the f_and_df > function as well or at least of the test).

I still think the approach is incorrect; the user shouldn't have to supply a gradient, we should calculate it ourselves if it's absent. At least, every optimization software package known to me does this trick. As for helpers.ForwardFiniteDifferenceDerivatives, it will take the user too much time to dig through the documentation to find it. Also, as you can see, my f_and_df is optimized not to recalculate f(x0) while obtaining the gradient numerically, as some implementations do, for example approx_fprime in scipy.optimize. For problems with costly funcs and small nVars (1..5) the speedup can be significant. Of course, it should be placed in a single file for the whole "optimizers" package, as I do in my ObjFunRelated.py, not in qubic_interpolation.py. But it would be better if you chose the most appropriate place (and/or informed me where it is).

Regards, D.
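The f_and_df optimization dmitrey mentions - evaluating f(x0) once and reusing it for every forward difference, so n+1 function evaluations instead of 2n - can be sketched like this (a hypothetical helper, not the actual openopt or scikit code; the names are made up):

```python
import numpy as np

def f_and_df(f, x, diff_int=1.5e-8):
    """Return f(x) and a forward-difference gradient, computing f(x)
    only once and reusing it for every coordinate step."""
    x = np.asarray(x, dtype=float)
    f0 = f(x)                                # evaluated a single time
    grad = np.empty_like(x)
    for i in range(x.size):
        xi = x.copy()
        xi[i] += diff_int
        grad[i] = (f(xi) - f0) / diff_int    # f0 reused here
    return f0, grad

# f(v) = v0^2 + 3*v1 at (2, 1): value 7.0, gradient approximately (4, 3)
f0, grad = f_and_df(lambda v: v[0] ** 2 + 3.0 * v[1], [2.0, 1.0])
```

The recomputation complaint is about an optimizer calling f(x0) for the function value and then again inside the gradient approximation; bundling both into one call like this avoids the extra evaluation.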
From jh at physics.ucf.edu Mon Aug 20 14:04:51 2007 From: jh at physics.ucf.edu (Joe Harrington) Date: Mon, 20 Aug 2007 14:04:51 -0400 Subject: [SciPy-dev] NumPy 1.0.3.x and SciPy 0.5.2.x (Fernando Perez) Message-ID: <1187633091.11253.104.camel@glup.physics.ucf.edu> As someone who is teaching a new course in numpy starting tomorrow, I want to echo Fernando's email, and add a quick request. I'll take the risk that the same thing was decided a different way at the conference, but if it was, please just let me and the other list readers know. First, a major thanks to everyone who works on making numpy and scipy a turn-key install. Though there is a ways yet to go, things have come a long way. Perhaps the most pleasing thing for me as an early advocate has been the rise in the percentage of list traffic devoted to packaging and releasing for the masses. I would very much like to see the "use SVN" instruction banished forever from the download page. But, the goal of a no-brainer install for Mac, PC, and the major Linux flavors does not seem unachievable anymore, and of course we're there for many of the Linux systems. My request is in the direction of transparency on the release cycle. I know the release of NumPy 1.0.3.1 and SciPy 0.5.2.1 is imminent, and I look forward to it very much. How does it go from a tarball to, say, a .deb in the Ubuntu repo? How long does it take for that process and who does it? How much of that is in "our" control and how much of it is in the OS vendor's/distro's control? I would favor this being part of a "Release Status" page linked off the Download page. The top item would be a chart showing the release steps across the top and the names of the distros we build for down the side. There would be one such chart for each active release of numpy and of scipy. They would contain green checks for done items, a black X for not doing, a green dash for not necessary, a red exclamation point for a problem, etc. 
Below that, there would be a short description of how a release goes from tarball to OS packages. The page should also include the names of people or lists to which to send questions like the inevitable "Version XXX of YYY has been out for weeks now, when will I be able to get it with my OS's installer?" This would also act as a way to connect potential helpers with problems to solve on the release side of the house. I'd make it and populate it, in the wiki way, except I have essentially none of the necessary information for the page. So, I'll leave it at that and hope someone who does takes interest. Either way, if someone could let us know what to expect in terms of the new releases getting out to the repos, that would be greatly appreciated, even if it's just a rough guide. Thanks again for all the hard work on packaging! --jh-- Prof. Joseph Harrington Department of Physics MAP 420 University of Central Florida Orlando, FL 32816-2385 (407) 823-3416 voice (407) 823-5112 fax (407) 823-2325 physics office jh at physics.ucf.edu (direct) jh at alum.mit.edu (permanent forwarding address, will work forever) From matthieu.brucher at gmail.com Mon Aug 20 14:27:47 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 20 Aug 2007 20:27:47 +0200 Subject: [SciPy-dev] optimizers module In-Reply-To: <46C9D202.6090205@ukr.net> References: <46C9B7A6.5040205@ukr.net> <46C9D202.6090205@ukr.net> Message-ID: > > > The basic cubic interpolation works on alpha. If you want to implement > > another based on x, not problem. I think that as a first step, we > > should add standard algorithms that are documented and described. > > After this step is done, we can explore. 
> yes, but all your solvers are already written in terms of n-dimensional > problem (x0 and direction, both are of nVars size), so it would be more > natural to use xtol (from general problem), not alpha_tol (from > line-search subproblem)

xtol is not available everywhere; it's criterion-dependent and might be given or not. The documented algorithm is based on alpha, and our main implementation should follow the documented algorithm as well.

> > That is a good question that raised also in our discussion for the > > Strong Wolfe Powell rules, at least for the maxIter. > As for SWP, I think a check should be made if solution with required c1 > and c2 can't be obtained and/or don't exist at all. > For example objFun(x) = 1e-5*x while c1 = 1e-4 (IIRC this is the example > where I encountered alpha = 1e-28 -> f0 = f(x0+alpha*direction)). The > SWP-based solver should produce something different than CPU hangup. OK, > it turned to be impossible to obtain new_X that satisfies c1 and c2, but > an approximation very often will be enough good to continue solving the > NL problem involved.

According to the theory, a solution is found if the direction is correct and if the function is continuously differentiable. But I agree that some additional tests must be added.

So I think check for |x_prev - x_new | < xtol > should be added and will be very helpful here. You have something like > that with alphap in line s68-70 (swp.py) but this is very unclear and I > suspect for some problems may be endless (as well as other stop creteria > implemented for now in the SWP).

This check is done because of the interpolation nature of the WP algorithm. You can't use |x_prev - x_new| < xtol as a stopping criterion. This is why your idea of limiting the number of iterations is relevant.

> So the state dictionary is only responsible for what is specifically > connected to the function. Either the parameters, or different > evaluations (hessian, gradient, direction and so on).
That's why you > > "can't" put gradtol in it (for instance). > I'm not know your code very good yet, but why can't you just set default > params as I do in /Kernel/BaseProblem.py?

Because there are a lot of default parameters that could be set depending on the algorithm. From an object-oriented point of view, this way of doing things is correct: the different modules possess the arguments because they are responsible for using them. Besides, you may want a different gradient tolerance for different sub-modules.

And then if user wants to change any specific parameter - he can do it > very easy. And no "very long and not readable line" are present in my > code.

It won't be in your code, it would be in the user's code. The goal of the framework is to make things readable and usable. If every parameter is given only to the optimizer, the other submodules are no longer of interest. With the parameters given to each specific module according to where they belong, one can create several instances of a specific module to use for different optimizers. This is what object-oriented code is intended for. I've done this in my PhD code, and it works very well.

I still think the approach is incorrect, user didn't ought to supply > gradient, we should calculate it by ourselves if it's absent. At least > any known to me optimization software do the trick.

Perhaps, but I have several reasons: - when it's hidden, it's a magic trick. My view of the framework is that it must not do anything like that; it's designed for advanced users who do not want those tricks - from an architectural point of view, it's wrong, plainly wrong. I'm a scientist specialized in electronics and signal processing, but I have high requirements for everything that is IT-oriented. Encapsulation is one part of the object principle, and implementing finite differences outside breaks it (and I'm not talking about code duplication).
As for > helpers.ForwardFiniteDifferenceDerivatives, it will take too much time > for user to dig into documentation to find the one.

Not if it's clearly stated in the documentation; perhaps it can be enhanced. It's on the first optimization page in the scikit wiki, with examples.

Also, as you see my f_and_df is optimized to not recalculate f(x0) while > gradient obtaining numerically, like some do, for example approx_fprime > in scipy.optimize. For problems with costly funcs and small nVars (1..5) > speedup can be significant.

Yes, I agree. For optimization, an additional argument could be given to the gradient that will be used if needed (remember that the other way of implementing finite differences does not use f(x0)), but it will bring some trouble to the user (every gradient function must have this additional argument). This should be discussed in a specific topic, because I don't think there is a simple answer to this specific point (and, as I stated before, I prefer something complete but not as fast over something incomplete but very fast; there is no free lunch in IT, we have to make compromises).

Of course, it should be placed in single file for whole "optimizers" > package, like I do in my ObjFunRelated.py, not in qubic_interpolation.py. > But it would be better would you chose the most appropriate place (and / > or informed me where is it).

It's already implemented (both versions) and available.

Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL:

From jstrunk at enthought.com Mon Aug 20 14:46:56 2007 From: jstrunk at enthought.com (Jeff Strunk) Date: Mon, 20 Aug 2007 13:46:56 -0500 Subject: [SciPy-dev] scipy and scikits tracs use the same password file now Message-ID: <20070820134656.snuwvgd3qc00wwwk@mail.enthought.com>

Since Dmitrey was unable to log in to the scikits trac, I realized that new scikits users were actually being registered in the scipy trac's password file.
However, the webserver was authenticating against the scikits password file. I changed the webserver settings so the scikits trac authenticates against the scipy password file. If you find you can't log in to the scikits trac, please try your scipy trac password.

Thank you. I'll be more responsive when I return from my vacation on Monday.

Jeff

From millman at berkeley.edu Tue Aug 21 01:50:21 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Mon, 20 Aug 2007 22:50:21 -0700 Subject: [SciPy-dev] NumPy 1.0.3.1 released Message-ID:

I'm pleased to announce the release of NumPy 1.0.3.1. This is a minor bug fix release, which enables the latest release of SciPy to build.

Bug-fixes =============== * Add back get_path to numpy.distutils.misc_utils * Fix 64-bit zgeqrf * Add parenthesis around GETPTR macros

Thank you to everybody who contributed to the recent release.

Best regards, NumPy Developers http://numpy.scipy.org

From millman at berkeley.edu Tue Aug 21 03:55:34 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 21 Aug 2007 00:55:34 -0700 Subject: [SciPy-dev] SciPy 0.5.2.1 release Message-ID:

I'm pleased to announce the release of SciPy 0.5.2.1. This is a minor bug fix release.

Bug-fixes =============== * Replaces ScipyTest with NumpyTest * Fixes mio5.py as per revision 2893 * Adds missing test definition in scipy.cluster as per revision 2941 * Synchs odr module with trunk since odr is broken in 0.5.2 * Updates for SWIG > 1.3.29 and fixes memory leak of type 'void *'

Thank you to everybody who contributed to the recent release.
Enjoy, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From openopt at ukr.net Tue Aug 21 04:04:44 2007 From: openopt at ukr.net (dmitrey) Date: Tue, 21 Aug 2007 11:04:44 +0300 Subject: [SciPy-dev] optimizers module In-Reply-To: References: <46C9B7A6.5040205@ukr.net> <46C9D202.6090205@ukr.net> Message-ID: <46CA9C9C.1070104@ukr.net> Matthieu Brucher wrote: > > > So the state dictionary is only responsible for what is > specifically > > connected to the function. Either the parameters, or different > > evaluations (hessian, gradient, direction and so on). That's why you > > "can't" put gradtol in it (for instance). > I don't know your code very well yet, but why can't you just set > default > params as I do in /Kernel/BaseProblem.py? > > > > Because there are a lot of default parameters that could be set > depending on the algorithm. From an object-oriented point of view, > this way of doing things is correct: the different modules possess the > arguments because they are responsible for using them. Besides, you > may want a different gradient tolerance for different sub-modules. I do have different gradtol values for different problem classes, for example NLP and NSP (non-smooth). In any problem class the default gradtol value should be a constant known to the user, like TOMLAB does. Setting a different gradtol for each solver is senseless; it's a matter of how close xk is to x_opt (of course, if the function is too special and/or non-convex and/or non-smooth the default value may be worth changing, but the user must decide that according to his knowledge of the function). The situation with xtol or funtol or diffInt is more complex, but still TOMLAB has its default constants common to all algs, almost in the same way as I do, and years of successful TOMLAB spreading (see http://tomopt.com/tomlab/company/customers.php) are one more piece of evidence that this approach is good.
As for special solver params, like the space transformation parameter in ralg, they can be passed via a dictionary or something like that; for example in openopt for MATLAB I do p.ralg.alpha = 2.5 p.ralg.n2 = 1.2 in Python it's not implemented yet, but I intend to do something like p = NSP(...) p.ralg = {'alpha' : 2.5, 'n2': 1.2} (so the other 3 ralg params remain unchanged - they will be loaded from the default settings) r = p.solve('ralg') or p.ralg = p.setdefaults('ralg') print p.ralg.alpha # previous approach didn't allow ... r = p.solve('ralg') BTW, TOMLAB handles numerical gradients the same way that I do. I think 90% of users don't care at all about which tolfun, tolx etc. are set or how the gradient is calculated - maybe lots of them don't even know that a gradient is being computed. They just require the problem to be solved - no matter how, whether it uses a gradient or not (as you see, the only scipy optimizer that handles nonlinear constraints is cobyla, and it takes no user-supplied gradient). Of course, if the problem is too complex to be solved quickly, they can begin investigating ways to speed it up, and the gradient will probably be the first one. > > > I still think the approach is incorrect; the user ought not to have to supply > a gradient, we should calculate it ourselves if it's absent. At least > every optimization package known to me does the trick. > > > > Perhaps, but I have several reasons : > - when it's hidden, it's a magic trick. My view of the framework is > that it must not do anything like that. It's designed for advanced > users that do not want those tricks > - from an architectural point of view, it's wrong, plainly wrong. I'm > a scientist specialized in electronics and signal processing, but I > have high requirements for everything that is IT-oriented. > Encapsulation is one part of the object principle, and implementing > finite differences outside breaks it (and I'm not talking about code duplication).
So, as I said, it could be solved like p.showDefaults('ralg') or p = NLP(...); print p.xtol. For 99.9% of users it should be enough. > > > Also, as you see my f_and_df is optimized to not recalculate f(x0) > while > obtaining the gradient numerically, like some do, for example > approx_fprime > in scipy.optimize. For problems with costly funcs and small nVars > (1..5) > the speedup can be significant. > > > > Yes, I agree. For optimization, an additional argument could be given > to the gradient that will be used if needed (remember that the other > way of implementing finite differences does not use f(x0)), but it will > bring some trouble to the user (every gradient function must have this > additional argument). I don't see any trouble for the user: he just provides either f and df, or only f, as anywhere else, without any changes. Or did you mean the gradient to be func(x, arg1, arg2,...)? That is no trouble either - for example, redefine df = lambda x: func(x, arg1, arg2) from the very beginning. As for my openopt, there are lots of tools to prevent calculating things twice. One of them is to check against the previous x; if it's the same, return the previous fval (cval, hval). For example, if a solver (or the user from his df) calls F = p.f(x) DF = p.df(x) and df is calculated numerically, it will not recalculate f(x=x0); it will just use the previous value from calculating p.f (because x is equal to p.FprevX). The same goes for dc, dh (c(x)<=0, h(x)=0). As you know, the comparison numpy.all(x==xprev) doesn't take much time; it took much less than 0.1% of the whole time/cputime elapsed when I observed MATLAB profiler results on different NL problems. Regards, D.
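The caching dmitrey describes - storing f at the last evaluated x so that a numerical gradient can reuse it instead of recomputing f(x0) - can be sketched as follows. This is an illustrative sketch, not openopt's actual API; the class and attribute names are made up.

```python
import numpy as np

class CachedProblem:
    """Cache f at the last evaluated x so a forward-difference
    gradient can reuse it instead of recomputing f(x0)."""

    def __init__(self, f, diff_int=1e-7):
        self._f = f                      # user-supplied objective
        self.diff_int = diff_int         # finite-difference step
        self._prev_x = None
        self._prev_fval = None

    def f(self, x):
        x = np.asarray(x, dtype=float)
        # Reuse the stored value if x is unchanged (cheap comparison).
        if self._prev_x is not None and np.all(x == self._prev_x):
            return self._prev_fval
        self._prev_x = x.copy()
        self._prev_fval = self._f(x)
        return self._prev_fval

    def df(self, x):
        # Forward differences: f(x0) comes from the cache when the
        # solver has just called f(x) at the same point.
        x = np.asarray(x, dtype=float)
        f0 = self.f(x)
        grad = np.empty_like(x)
        for i in range(x.size):
            xi = x.copy()
            xi[i] += self.diff_int
            grad[i] = (self._f(xi) - f0) / self.diff_int
        return grad
```

With this wrapper, a solver calling F = p.f(x) followed by DF = p.df(x) evaluates the objective n+1 times in total rather than n+2, which is exactly the saving dmitrey claims for costly functions with few variables.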
From matthieu.brucher at gmail.com Tue Aug 21 04:49:27 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 21 Aug 2007 10:49:27 +0200 Subject: [SciPy-dev] optimizers module In-Reply-To: <46CA9C9C.1070104@ukr.net> References: <46C9B7A6.5040205@ukr.net> <46C9D202.6090205@ukr.net> <46CA9C9C.1070104@ukr.net> Message-ID: > > I do have different gradtol values for different problem classes, for example NLP and NSP > (non-smooth). In any problem class the default gradtol value should be a constant known > to the user, like TOMLAB does. Setting a different gradtol for each > solver is senseless; it's a matter of how close xk is to x_opt (of course, > if the function is too special and/or non-convex and/or non-smooth the default > value may be worth changing, but the user must decide that > according to his knowledge of the function). There is a difference between a gradient tolerance for the global stop criterion that works on the real gradient and a gradient tolerance in the line search that works on a scalar. We can have the same default value for all gradient tolerances, but the framework is not designed to unify them. It's not its purpose. The situation with xtol or funtol or diffInt is more complex, but still > TOMLAB has its default constants common to all algs, almost in the > same way as I do, and years of successful TOMLAB spreading (see > http://tomopt.com/tomlab/company/customers.php) are one more piece of evidence > that this approach is good. xtol and funtol are used in the criteria, and that is only one file. Not much trouble. And diffInt is a sort of universal value (the square root of the epsilon of a 32-bit float). Once more, it's defined in one place. BTW, TOMLAB handles numerical gradients the same way that I do. I > think 90% of users don't care at all about which tolfun, tolx etc. are > set or how the gradient is calculated - maybe lots of them don't > even know that a gradient is being computed.
They just require the problem to be > solved - no matter how, whether it uses a gradient or not (as you > see, the only scipy optimizer that handles nonlinear constraints is > cobyla, and it takes no user-supplied gradient). These people are not the prime target of the framework. They want a fully fledged function that takes care of everything; it's exactly what you did with lincher, for instance. On the contrary, here, users must know a little bit about optimization because they have to build the optimizer they want. I don't expect 50% of the scipy.optimize users to use this framework. As they must know what optimization is, I can assume that they know about finite-difference methods, tolerance on the parameters, ... So, as I said, it could be solved like p.showDefaults('ralg') or p = > NLP(...); print p.xtol. For 99.9% of users it should be enough. Yes, I think we could add a dictionary "à la matplotlib" or something like it. I don't think it is a priority to do it. It can be added seamlessly when the optimizers are stabilized. I don't see any trouble for the user: he just provides either f and df or > only f, as anywhere else, without any changes. Or did you mean the gradient to > be func(x, arg1, arg2,...)? That is no trouble either - for example, > redefine df = lambda x: func(x, arg1, arg2) from the very beginning. If the gradient can accept another formal or positional argument, every gradient in a user-defined function class must have it. As it is done now, it's not acceptable (we cannot require additional arguments because the gradient might be computed with forward differences). As for my openopt, there are lots of tools to prevent calculating things > twice. One of them is to check against the previous x; if it's the same, return > the previous fval (cval, hval).
For example, if a solver (or the user from his > df) calls > F = p.f(x) > DF = p.df(x) > and df is calculated numerically, it will not recalculate f(x=x0); it > will just use the previous value from calculating p.f (because x is equal to > p.FprevX). > The same goes for dc, dh (c(x)<=0, h(x)=0). As you know, the comparison > numpy.all(x==xprev) doesn't take much time; it took much > less than 0.1% of the whole time/cputime elapsed when I observed > MATLAB profiler results on different NL problems. The last computed x can be added to the state dictionary; do it if that pleases you. But as I stated before, I prefer something tested and complete but not optimized over something not tested or not complete but optimized. BTW, I fixed your tests for the cubic interpolation. Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From openopt at ukr.net Tue Aug 21 05:20:04 2007 From: openopt at ukr.net (dmitrey) Date: Tue, 21 Aug 2007 12:20:04 +0300 Subject: [SciPy-dev] problems with scikits svn Message-ID: <46CAAE44.6070901@ukr.net> svn: PROPFIND request failed on '/svn/scikits/trunk/openopt/scikits/openopt/solvers/optimizers/line_search/tests' svn: PROPFIND of '/svn/scikits/trunk/openopt/scikits/openopt/solvers/optimizers/line_search/tests': Could not read status line: Connection reset by peer (http://svn.scipy.org) From matthieu.brucher at gmail.com Tue Aug 21 05:23:41 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 21 Aug 2007 11:23:41 +0200 Subject: [SciPy-dev] problems with scikits svn In-Reply-To: <46CAAE44.6070901@ukr.net> References: <46CAAE44.6070901@ukr.net> Message-ID: It seems that the website is down as well (projects.scipy.org/scipy/scipy for instance) 2007/8/21, dmitrey : > > svn: PROPFIND request failed on > > '/svn/scikits/trunk/openopt/scikits/openopt/solvers/optimizers/line_search/tests' > svn: PROPFIND of > > '/svn/scikits/trunk/openopt/scikits/openopt/solvers/optimizers/line_search/tests': > Could
not read status line: Connection reset by peer (http://svn.scipy.org > ) > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openopt at ukr.net Tue Aug 21 05:28:39 2007 From: openopt at ukr.net (dmitrey) Date: Tue, 21 Aug 2007 12:28:39 +0300 Subject: [SciPy-dev] unified svn timestamp In-Reply-To: References: <46CAAE44.6070901@ukr.net> Message-ID: <46CAB047.9050100@ukr.net> Hi all, don't you think all scipy devs should have a unified svn timestamp? Regards, D. Matthieu Brucher wrote: > It seems that the website is down as well > (projects.scipy.org/scipy/scipy > for instance) It's just one more reminder that Jeff's vacation ended today :) From millman at berkeley.edu Tue Aug 21 18:47:23 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 21 Aug 2007 15:47:23 -0700 Subject: [SciPy-dev] SciPy 0.6.0 release planned for August 31st Message-ID: Hello everyone, Now that SciPy 0.5.2.1 is out the door it is time to get a new feature release done as well. SciPy 0.5.2 was released over 8 months ago (2006-12-07) and a lot of things have been improved and fixed since then. After talking with Travis Oliphant and Robert Kern (as well as quite a few other people at the SciPy conference), we decided to overhaul the release process. On Monday, August 27th, I will make a 0.6.x branch. I would like to release 0.6.0 on Friday, August 31st. Continued development will occur on the trunk and will lead to a 0.7.x branch 3 months later on Monday, November 26th. If there are any important bugs in the 0.6.0 release we will make a 0.6.1 release and so on.
I have started putting together a 0.6 roadmap here (it should mostly be a list of major things that have already been added or fixed): http://projects.scipy.org/scipy/scipy/milestone/0.6 Please don't make any major changes to the trunk between now and next Monday. If you can, I would like to ask everyone to look through the open tickets and close anything that has already been fixed. Even better, if you see something that you can easily fix without breaking something else, please do so. If you find a ticket that should be a release blocker, please let me know ASAP. Currently, there are 4 release blockers that I want fixed before making the 0.6.0 tag: http://projects.scipy.org/scipy/scipy/ticket/238 http://projects.scipy.org/scipy/scipy/ticket/401 http://projects.scipy.org/scipy/scipy/ticket/406 http://projects.scipy.org/scipy/scipy/ticket/482 I wanted to get this timeline out ASAP, but there has been a lot of discussion recently about packaging. I agree that we need to get packaging sorted out as well. I will send out my thoughts about packaging later today. Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From eric at enthought.com Tue Aug 21 22:57:42 2007 From: eric at enthought.com (eric) Date: Tue, 21 Aug 2007 21:57:42 -0500 Subject: [SciPy-dev] SciPy 0.6.0 release planned for August 31st In-Reply-To: References: Message-ID: <46CBA626.8010307@enthought.com> Hey Jarrod, Jarrod Millman wrote: > Hello everyone, > > Currently, there are 4 release blockers that I want fixed > before making the 0.6.0 tag: > > http://projects.scipy.org/scipy/scipy/ticket/406 > http://projects.scipy.org/scipy/scipy/ticket/482 > I'm on the hook for these two. I will work on them this weekend. thanks, eric p.s. You're THE man.
From matthieu.brucher at gmail.com Wed Aug 22 03:23:23 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 22 Aug 2007 09:23:23 +0200 Subject: [SciPy-dev] SciPy 0.6.0 release planned for August 31st In-Reply-To: References: Message-ID: Hi, Ticket 405 can be closed as it is now part of the openopt scikit. Matthieu 2007/8/22, Jarrod Millman : > > Hello everyone, > > Now that SciPy 0.5.2.1 is out the door it is time to get a new feature > release done as well. SciPy 0.5.2 was released over 8 months ago > (2006-12-07) and a lot of things have been improved and fixed since > then. > > After talking with Travis Oliphant and Robert Kern (as well as quite a > few other people at the SciPy conference), we decided to overhaul the > release process. On Monday, August 27th, I will make a 0.6.x branch. > I would like to release 0.6.0 on Friday, August 31st. Continued > development will occur on the trunk and will lead to a 0.7.x branch 3 > months later on Monday, November 26th. If there are any important > bugs in the 0.6.0 release we will make a 0.6.1 release and so on. > > I have started putting together a 0.6 roadmap here (it should mostly > be a list of major things that have already been added or fixed): > http://projects.scipy.org/scipy/scipy/milestone/0.6 > > Please don't make any major changes to the trunk between now and next > Monday. If you can I would like to ask everyone to look through the > open tickets and close anything that has already been fixed. Even > better if you see something that you can easily fix without breaking > something else, please do so. > > If you find a ticket that should be a release blocker, please let me > know ASAP. 
Currently, there are 4 release blockers that I want fixed > before making the 0.6.0 tag: > http://projects.scipy.org/scipy/scipy/ticket/238 > http://projects.scipy.org/scipy/scipy/ticket/401 > http://projects.scipy.org/scipy/scipy/ticket/406 > http://projects.scipy.org/scipy/scipy/ticket/482 > > I wanted to get this timeline out ASAP, but there has been a lot of > discussion recently about packaging. I agree that we need to get > packaging sorted out as well. I will send out my thoughts about > packaging later today. > > Thanks, > > -- > Jarrod Millman > Computational Infrastructure for Research Labs > 10 Giannini Hall, UC Berkeley > phone: 510.643.4014 > http://cirl.berkeley.edu/ > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jh at physics.ucf.edu Wed Aug 22 15:51:40 2007 From: jh at physics.ucf.edu (Joe Harrington) Date: Wed, 22 Aug 2007 15:51:40 -0400 Subject: [SciPy-dev] EMPython site is dead...delete from Topical? In-Reply-To: 80c99e790704110710j37d046eat88b37854de57b6ef@mail.gmail.com Message-ID: <1187812300.11253.235.camel@glup.physics.ucf.edu> A user just complained to me that the link to www.empython.org on the topical software page took him to a porn site. It appears that www.empython.org is not a live site, and instead gives you one of those IP registrar ad sites. After a while it sometimes redirects you into neverland, and maybe neverland has some porn sites. At any rate, does EM python still exist anywhere as a project? Do we want to archive the code somewhere (presuming someone has it)? We should either archive the code and link to it, or delete the link entirely if we don't have that code or don't want to archive it. I found the note below from April, but when I tried emailing rob (at) empython.org, it bounced. Thoughts? 
We'll see this happen again, so we should decide some sort of consistent treatment of sites that just vanish. --jh-- [SciPy-user] EMPython Robert Kern robert.kern at gmail.... Wed Apr 11 14:05:31 CDT 2007 * Previous message: [SciPy-user] EMPython * Next message: [SciPy-user] Building SciPy * Messages sorted by: [ date ] [ thread ] [ subject ] [ author ] ________________________________________________________________________ lorenzo bolla wrote: > Dear all, > does anyone know who is responsible for the website: > http://www.empython.org/? > if I get it right, he should be Robert Lytle, former responsible for > www.electromagneticpython.org , > now dismissed. > do you know an e-mail address I can write to? He's posted on enthought-dev recently. rob (at) empython.org -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco ________________________________________________________________________ From millman at berkeley.edu Wed Aug 22 16:37:32 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Wed, 22 Aug 2007 13:37:32 -0700 Subject: [SciPy-dev] EMPython site is dead...delete from Topical? In-Reply-To: <1187812300.11253.235.camel@glup.physics.ucf.edu> References: <1187812300.11253.235.camel@glup.physics.ucf.edu> Message-ID: Hey Joe, It is too bad that the site is gone. I went ahead and deleted the link from the wiki. If we are just archiving old code that doesn't work with NumPy, I think it would be more distracting than useful. On the other hand, if someone is interested in taking this project over, porting it to NumPy, and maintaining it that would be excellent. The most current version of the site that I was able to find is this from the waybackmachine: http://web.archive.org/web/20050620082840/http://www.pythonemproject.com/ Does anyone have a more recent version of the site and code? 
Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From ondrej at certik.cz Wed Aug 22 20:48:12 2007 From: ondrej at certik.cz (Ondrej Certik) Date: Wed, 22 Aug 2007 17:48:12 -0700 Subject: [SciPy-dev] Documentation to SciPy Message-ID: <85b5c3130708221748g1ff51243rfb23ca73032eacee@mail.gmail.com> Hi, I would like to raise the question of documentation again, because I find it really confusing. 4 months ago I created http://scipy.org/DocumentationNew That is intended as a replacement for http://scipy.org/Documentation. It should probably be updated to reflect the current state of the documentation. Also a question to Dmitrey - is there some place with documentation for your new module? I believe it could go to this page: http://scipy.org/SciPy_packages Note: I myself also only put the documentation for the nonlin module inside the code. Once we agree on some way of documenting things, I'll update the documentation on the wiki. Ondrej From openopt at ukr.net Thu Aug 23 02:25:55 2007 From: openopt at ukr.net (dmitrey) Date: Thu, 23 Aug 2007 09:25:55 +0300 Subject: [SciPy-dev] Documentation to SciPy In-Reply-To: <85b5c3130708221748g1ff51243rfb23ca73032eacee@mail.gmail.com> References: <85b5c3130708221748g1ff51243rfb23ca73032eacee@mail.gmail.com> Message-ID: <46CD2873.4020004@ukr.net> Ondrej Certik wrote: > Hi, > > I would like to raise the question of documentation again, because I > find it really confusing. 4 months ago I created > > http://scipy.org/DocumentationNew > > That is intended as a replacement for http://scipy.org/Documentation. > It should probably be updated to reflect the current state of the > documentation. > > Also a question to Dmitrey - is there some place with documentation > for your new module?
I believe it could go to this page: > > http://scipy.org/SciPy_packages > Currently the situation is: Matthieu created the page https://projects.scipy.org/scipy/scikits/wiki/Optimization The title for the page is The openopt scikit - Constrained and unconstrained optimization - Framework for generic optimizers (so a web search for "python openopt" points to the page), but the web page(s) are 100% related to his module "optimizers". The module is part of the openopt svn; also, in the future, openopt will use some solvers from "optimizers" (i.e. they will be available soon (I hope) with openopt syntax - that's not implemented yet because of lack of time before the GSoC finish); on the other hand, "optimizers" remains a separate module. I haven't created webpages yet; can anyone give me a URL explaining what I should do? All I need is to extract the docstring from each function (MILP, LP, QP, NLP, NSP) in http://projects.scipy.org/scipy/scikits/browser/trunk/openopt/scikits/openopt/oo.py and attach the examples from http://projects.scipy.org/scipy/scikits/browser/trunk/openopt/scikits/openopt/examples/ milp_1.py, lp_1.py, qp_1.py, nlp_1.py, nsp_1.py I don't want to just copy-paste, because it will be changed (updated) from time to time. Is it possible for the webpages to be updated automatically along with the svn code? BTW, the http://scipy.org/ homepage is missing a link to the "scipy packages" page. Don't you think it should be added? Regards, D. > Note: I myself also only put the documentation for the nonlin module > inside the code. Once we agree on some way of documenting things, I'll > update the documentation on the wiki.
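The docstring extraction dmitrey asks about above can be done with the standard library. This is a generic sketch using inspect, not tied to the actual layout of oo.py; the function names are just those he lists.

```python
import inspect

def extract_docs(module, names):
    """Collect the docstrings of the named functions/classes in a
    module, formatted as simple wiki-style sections for a web page."""
    sections = []
    for name in names:
        obj = getattr(module, name, None)
        doc = inspect.getdoc(obj) if obj is not None else None
        sections.append("== %s ==\n%s" % (name, doc or "(no docstring)"))
    return "\n\n".join(sections)
```

Running something like extract_docs(oo, ["MILP", "LP", "QP", "NLP", "NSP"]) from, say, an svn post-commit hook would also answer the "update automatically along with the svn code" question, since the page text would be regenerated from the source on every commit.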
> > Ondrej > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > > From matthieu.brucher at gmail.com Thu Aug 23 02:37:11 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 23 Aug 2007 08:37:11 +0200 Subject: [SciPy-dev] Documentation to SciPy In-Reply-To: <46CD2873.4020004@ukr.net> References: <85b5c3130708221748g1ff51243rfb23ca73032eacee@mail.gmail.com> <46CD2873.4020004@ukr.net> Message-ID: > > (so a web search for "python openopt" points to the page), > but the web page(s) are 100% related to his module "optimizers". I didn't want to add documentation about your work because I don't exactly know what's in it. The > module is part of the openopt svn; also, in the future, openopt will use some > solvers from "optimizers" (i.e. they will be available soon (I hope) > with openopt syntax - that's not implemented yet because of lack of time > before the GSoC finish); on the other hand, "optimizers" remains > a separate module. > I haven't created webpages yet; can anyone give me a URL explaining what I should do? I suggested you could complete the first part, for instance with what is provided, what other modules are needed, ... and then you could put a link to another page for each of your optimizers, and perhaps some examples? All I need is to extract the docstring from each function (MILP, LP, QP, > NLP, NSP) in > http://projects.scipy.org/scipy/scikits/browser/trunk/openopt/scikits/openopt/oo.py > and attach the examples from > http://projects.scipy.org/scipy/scikits/browser/trunk/openopt/scikits/openopt/examples/ > milp_1.py, lp_1.py, qp_1.py, nlp_1.py, nsp_1.py > I don't want to just copy-paste, because it will be changed (updated) from > time to time. Is it possible for the webpages to be updated automatically along with > the svn code? I don't know, but what I do is keep both up to date, even if that means some more work.
Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgmdevlist at gmail.com Fri Aug 24 20:27:41 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Fri, 24 Aug 2007 20:27:41 -0400 Subject: [SciPy-dev] Maskedarray implementations Message-ID: <200708242027.41896.pgmdevlist@gmail.com> All, As you might be aware, there are currently two concurrent implementations of masked arrays in numpy: * numpy.ma is the official implementation, but it is unclear whether it is still actively maintained. * maskedarray is the alternative I've been developing initially for my own purposes from numpy.ma. It is available in the scipy svn sandbox, but it is already fully functional. The main difference between numpy.ma and maskedarray is that the objects created by numpy.ma are NOT ndarrays, while maskedarray.MaskedArray is a full subclass of ndarray. For example: >>>import numpy, maskedarray >>>x = numpy.ma.array([1,2], mask=[0,1]) >>>isinstance(x, numpy.ndarray) False >>>numpy.asanyarray(x) array([1,2]) Note that we just lost the mask... >>>x = maskedarray.array([1,2], mask=[0,1]) >>>isinstance(x, numpy.ndarray) True >>>numpy.asanyarray(x) masked_array(data = [1 --], mask = [False True], fill_value=999999) Note that the mask is conserved. Having the masked array be a subclass of ndarray makes masked arrays easier to mix with other ndarray types and to subclass. An example of application is the TimeSeries package, where the main TimeSeries class is a subclass of maskedarray.MaskedArray. * Does anyone see any *disadvantages* to this aspect of maskedarray relative to numpy.ma? * What would be the requisites to move maskedarray out of the sandbox? We hope to be able in the short term to either replace or at least merge the two implementations, once a couple of issues are addressed (but we can talk about that later...)
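The behaviour Pierre highlights - asanyarray preserving the subclass and its extra state - can be seen with a minimal ndarray subclass. This sketch is purely illustrative and far simpler than maskedarray itself; the TaggedArray name and tag attribute are made up for the example.

```python
import numpy as np

class TaggedArray(np.ndarray):
    """Minimal ndarray subclass carrying an extra attribute,
    analogous to how maskedarray carries its mask."""

    def __new__(cls, data, tag=None):
        obj = np.asarray(data).view(cls)
        obj.tag = tag
        return obj

    def __array_finalize__(self, obj):
        # Called for views and copies, so the attribute survives
        # slicing and asanyarray.
        self.tag = getattr(obj, "tag", None)
```

Because TaggedArray *is* an ndarray, numpy.asanyarray passes it through untouched (subclass and tag intact), whereas numpy.asarray drops back to a plain ndarray - the same contrast Pierre shows between maskedarray and numpy.ma.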
Thanks a lot in advance for your feedback Pierre From openopt at ukr.net Sat Aug 25 02:24:06 2007 From: openopt at ukr.net (dmitrey) Date: Sat, 25 Aug 2007 09:24:06 +0300 Subject: [SciPy-dev] GSoC weekly report Message-ID: <46CFCB06.8010401@ukr.net> Hi all, this week 2 solvers were implemented in the optimizers module: one based on cubic interpolation, and Barzilai-Borwein (monotone and non-monotone versions). Unfortunately, as I had informed Matthieu, the cubic interpolation algorithm contained a mistake (after the block "h>0?", "No" branch, there should be added "phi1', phi2' = phi2', phi1'"), and it took me significant time to find and fix it. But when I tried my examples it turned out that for Barzilai-Borwein (nonmonotone), the book I'm following doesn't provide default values for all those initial settings (sigma, sigma1, sigma2, alpha_min, alpha_max, rho). Ok, I know that 0 < sigma1 < sigma2 < 1, 0 < alpha_min < alpha_max and 0 < rho < 1 References: <200708242027.41896.pgmdevlist@gmail.com> <200708251506.08568.pgmdevlist@gmail.com> <46D08770.7050007@hawaii.edu> Message-ID: <200708252021.26965.pgmdevlist@gmail.com> On Saturday 25 August 2007 15:48:00 Eric Firing wrote: > I've made a couple of small "emergency" edits, but a separate page would > make things much more visible and less confusing. So here it is: http://projects.scipy.org/scipy/numpy/wiki/MaskedArrayAlternative Please note the section: Optimizing maskedarray. You'll find a quick description of a test case (three implementations of divide) that emerged from an off-list discussion with Eric Firing. The problem can be formulated as "do we need to fill masked arrays before processing or not?". Eric is in favor of the second solution (prefilling according to the domain mask), while I'm leaning more and more towards the third one: "bah, let numpy take care of that." I would be very grateful if you could post your comments/ideas/suggestions about the three implementations on that list.
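The trade-off Pierre describes can be illustrated with two hypothetical variants of a masked divide - one prefilling according to the domain mask, one letting numpy produce the invalid values and masking afterwards. Neither is maskedarray's actual code; the function names and mask convention are made up for the comparison.

```python
import numpy as np

def divide_prefill(a, b, mask):
    """Variant in Eric's spirit: fill domain-invalid entries (b == 0)
    with a safe value before dividing, so numpy never sees an
    invalid operation."""
    domain = (b == 0)
    b_safe = np.where(domain, 1.0, b)     # prefill the denominator
    return np.divide(a, b_safe), mask | domain

def divide_postmask(a, b, mask):
    """'Let numpy take care of that': divide first with errors
    silenced, then extend the mask over the invalid results."""
    with np.errstate(divide="ignore", invalid="ignore"):
        result = np.divide(a, b)
    return result, mask | ~np.isfinite(result)
```

Both variants produce the same mask; they differ only in what garbage sits under the masked entries and in how much extra work (an explicit fill pass versus a post-hoc isfinite scan) is done per operation.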
This is an issue I'd like to solve ASAP. Thanks a lot in advance Pierre From millman at berkeley.edu Sat Aug 25 22:44:12 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Sat, 25 Aug 2007 19:44:12 -0700 Subject: [SciPy-dev] Documentation to SciPy In-Reply-To: <85b5c3130708221748g1ff51243rfb23ca73032eacee@mail.gmail.com> References: <85b5c3130708221748g1ff51243rfb23ca73032eacee@mail.gmail.com> Message-ID: On 8/22/07, Ondrej Certik wrote: > I would like to raise the question of documentation again, because I > find it really confusing. 4 months ago I created > > http://scipy.org/DocumentationNew > > That is intended as a replacement for http://scipy.org/Documentation. > It should probably be updated to reflect the current state of the > documentation. Hey Ondrej, I like the direction you are going with the Documentation page. I completely agree that we need to move toward simplifying and streamlining the documentation. Please go ahead, update it, and make your new Documentation page the official one. I added a TOC to the page, which doesn't look particularly attractive. It would be great if you could tidy up what I did. If you are willing to do it, I would like to ask you to become the official maintainer of this page. I think that this page should be aimed at the new user. This means that when someone comes to it for the first time they shouldn't be overwhelmed with information. It should also be easy to find everything you need. Obviously, this trade-off will involve careful consideration. Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From millman at berkeley.edu Sun Aug 26 05:31:16 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Sun, 26 Aug 2007 02:31:16 -0700 Subject: [SciPy-dev] Reminder: upcoming SciPy 0.6.x branch Message-ID: Hello, This is a reminder that I will be branching for the 0.6 release on Monday.
So (until I make the branch) please don't check anything in unless it is a bugfix. You can check out the roadmap here: http://projects.scipy.org/scipy/scipy/milestone/0.6 I am also putting together some notes about making releases here: http://projects.scipy.org/scipy/scipy/wiki/MakingReleases Please let me know if you have any questions. Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From openopt at ukr.net Sun Aug 26 15:10:34 2007 From: openopt at ukr.net (dmitrey) Date: Sun, 26 Aug 2007 22:10:34 +0300 Subject: [SciPy-dev] some problems with optimize.py docstrings Message-ID: <46D1D02A.1080301@ukr.net> Hi all, unfortunately some days ago I had problems with my internet connection, and that's why I didn't commit the updated docstrings to optimize.py. At that time I encountered some problems, and that's why I switched to other tasks assigned to me and forgot to do this one in time. The problems that I mentioned are related to a lack of documentation and, moreover, to incorrect documentation. So, 1. def line_search(...) """... Outputs: (alpha, gc, fc) ...""" ... return alpha_star, _ls_fc, _ls_gc, fval_star, old_fval, fprime_star 1) is (gc, fc) really equal to _ls_fc, _ls_gc? Maybe vice versa? As for me, I don't know what those return params mean. 2) the documentation has the line "For the zoom phase it uses an algorithm by" (so what should be here?) 3) The description misses the meanings of amax, old_fval and old_old_fval (same for line_search_BFGS) 2. def line_search_BFGS(...) """... Outputs: (alpha, fc, gc)""" ... if (phi_a0 <= phi0 + c1*alpha0*derphi0): return alpha0, fc, 0, phi_a0 #NOTE: 4 params instead of 3 ... if (phi_a1 <= phi0 + c1*alpha1*derphi0): return alpha1, fc, 0, phi_a1 #same: 4 params instead of 3 ...
if (phi_a2 <= phi0 + c1*alpha2*derphi0): return alpha2, fc, 0, phi_a2 #same So, as you see, 1) it has incorrect output - 4 values instead of 3 2) it doesn't describe what those fc and gc are 3) the output gc is always equal to zero On the other hand, almost no one uses the funcs except some other funcs from optimize.py (otherwise lots of tickets would have been filed). The same goes for some other funcs from optimize.py (i.e. they are used only from other optimize.py funcs). So I suppose it could wait. Do you agree? So do you want me to commit the changes I have done to svn right now, before tomorrow's scipy 0.6 release? Let me attach links to the optimize.py file as well as the epydoc output. Regards, D. optimize.py http://www.box.net/shared/3bi5n8jtln html-file http://www.box.net/shared/0l44js08jj From aisaac at american.edu Mon Aug 27 15:02:15 2007 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 27 Aug 2007 15:02:15 -0400 Subject: [SciPy-dev] some problems with optimize.py docstrings In-Reply-To: <46D1D02A.1080301@ukr.net> References: <46D1D02A.1080301@ukr.net> Message-ID: On Sun, 26 Aug 2007, dmitrey apparently wrote: > So do you want me to commit the changes I have done to svn right now, before > tomorrow's scipy 0.6 release? You are talking *only* about documentation improvements, even though incomplete? Is that right? Then unless Jarrod objects, that seems OK. > 1) is (gc, fc) really equal to _ls_fc, _ls_gc? Maybe vice versa? > As for me, I don't know what those return params mean. Did you try comparing to the related minpack routines? > 2) the documentation has the line > "For the zoom phase it uses an algorithm by" (so what should go here?) Only Travis can say ... > 3) The description misses the meanings of amax, old_fval, old_old_fval (same for > line_search_BFGS) Again, did you try comparing to the related minpack routines? > if (phi_a0 <= phi0 + c1*alpha0*derphi0): > return alpha0, fc, 0, phi_a0 > #NOTE: 4 params instead of 3 ...
if (phi_a1 <= phi0 > + c1*alpha1*derphi0): > return alpha1, fc, 0, phi_a1 > #same: 4 params instead of 3 > ... > if (phi_a2 <= phi0 + c1*alpha2*derphi0): > return alpha2, fc, 0, phi_a2 > #same Note that in every case, one of the parameters is 0. Cheers, Alan Isaac From openopt at ukr.net Mon Aug 27 15:35:25 2007 From: openopt at ukr.net (dmitrey) Date: Mon, 27 Aug 2007 22:35:25 +0300 Subject: [SciPy-dev] some problems with optimize.py docstrings In-Reply-To: References: <46D1D02A.1080301@ukr.net> Message-ID: <46D3277D.4070508@ukr.net> Alan G Isaac wrote: > On Sun, 26 Aug 2007, dmitrey apparently wrote: > >> So do you want me to commit the changes I have done to svn right now, before >> tomorrow's scipy 0.6 release? >> > > You are talking *only* about documentation improvements, > even though incomplete? Is that right? Then unless Jarrod > objects, that seems OK. > > Although I have added the meaning of some input/output parameters, the optimize.py documentation still remains incomplete, because I don't know the meaning of ALL the parameters (from the whole of optimize.py). However, I still suppose that some enhancements are better than nothing at all, and others could be added by Travis and/or other authors of the file. > >> 1) is (gc, fc) really equal to _ls_fc, _ls_gc? Maybe vice versa? >> As for me, I don't know what those return params mean. >> > > Did you try comparing to the related minpack routines? > Ok, I have looked at some, but the fortran routine has different input/output parameters. As for the python code - it seems like fc and gc are function and gradient counters, but I still don't know the meaning of all the other parameters, or why gc is always zero. Also, I still think that in :Returns: gc and fc should be vice versa: fc and gc. >> 2) the documentation has the line >> "For the zoom phase it uses an algorithm by" (so what should go here?) >> > > Only Travis can say ...
> > > >> 3) The description misses the meanings of amax, old_fval, old_old_fval (same for >> line_search_BFGS) >> > > Again, did you try comparing to the related minpack > routines? > The minpack routines have a different syntax (some parameters from linesearch.py are transformed to other parameters and only then passed to the fortran code). Also, amax is used in minpack but not in the python code (i.e. in optimize.py it's unused, as I had mentioned). > > >> if (phi_a0 <= phi0 + c1*alpha0*derphi0): >> return alpha0, fc, 0, phi_a0 >> #NOTE: 4 params instead of 3 ... if (phi_a1 <= phi0 >> + c1*alpha1*derphi0): >> return alpha1, fc, 0, phi_a1 >> #same: 4 params instead of 3 >> ... >> if (phi_a2 <= phi0 + c1*alpha2*derphi0): >> return alpha2, fc, 0, phi_a2 >> #same >> > > Note that in every case, one of the parameters is 0. > Maybe I misunderstood something in the code and/or your answer; let me explain my suspicion once again: the documentation of the func (line_search_BFGS) says: :Returns: (alpha, fc, gc) It doesn't explain what those fc and gc are. Also, according to the function code, it always returns 4 parameters, and if someone uses my_alpha, my_fc, my_gc = line_search_BFGS(...) then he will always get my_gc = 0. Is this really what it was intended to be?! On the other hand, if gc is the number of gradient evaluations, it seems to be correct - as I noticed, the code of line_search_BFGS doesn't use the gradient. On the other hand, as I mentioned before, it is very unlikely that someone will use line_search and line_search_BFGS; they are intended for optimize.py's internal usage. Regards, D.
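The mismatch dmitrey describes can be demonstrated with a small stand-in (hypothetical code, not the real scipy routine): a function whose docstring promises three return values but whose body returns four. Note that a three-name unpack of a four-tuple actually raises ValueError rather than silently binding gc = 0, which makes the docstring mismatch very visible to any caller who trusts the documented signature.

```python
# Hypothetical stand-in for a routine whose docstring documents 3 return
# values but whose body returns 4, as observed above for line_search_BFGS.
# All names and values here are illustrative, not the real scipy code.
def line_search_sketch(phi_a0=1.0, fc=5):
    """Outputs: (alpha, fc, gc)"""    # docstring promises 3 values...
    alpha0 = 0.5
    return alpha0, fc, 0, phi_a0      # ...but 4 are returned; the gc slot is always 0

# Unpacking into the 3 documented names fails loudly:
try:
    alpha, fc, gc = line_search_sketch()
except ValueError as err:
    unpack_error = str(err)           # e.g. "too many values to unpack"

# A caller must actually use 4 names, and the third slot is always 0:
alpha, fc, gc, phi = line_search_sketch()
assert gc == 0
```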
From millman at berkeley.edu Mon Aug 27 17:39:29 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Mon, 27 Aug 2007 14:39:29 -0700 Subject: [SciPy-dev] some problems with optimize.py docstrings In-Reply-To: <46D3277D.4070508@ukr.net> References: <46D1D02A.1080301@ukr.net> <46D3277D.4070508@ukr.net> Message-ID: On 8/27/07, dmitrey wrote: > >> So do you want me to commit the changes I have done to svn right now, before > >> tomorrow's scipy 0.6 release? > > > > You are talking *only* about documentation improvements, > > even though incomplete? Is that right? Then unless Jarrod > > objects, that seems OK. > > > Although I have added the meaning of some input/output parameters, the > optimize.py documentation still remains incomplete, because I don't know > the meaning of ALL the parameters (from the whole of optimize.py). However, I still > suppose that some enhancements are better than nothing at all, and others > could be added by Travis and/or other authors of the file. Hey Dmitrey, I would like to have your improvements to the docstrings for the optimization code included in the 0.6 release. They don't have to be complete as long as they are better than what we currently have. Thanks, Jarrod From openopt at ukr.net Tue Aug 28 01:58:08 2007 From: openopt at ukr.net (dmitrey) Date: Tue, 28 Aug 2007 08:58:08 +0300 Subject: [SciPy-dev] some problems with optimize.py docstrings In-Reply-To: References: <46D1D02A.1080301@ukr.net> <46D3277D.4070508@ukr.net> Message-ID: <46D3B970.4090406@ukr.net> Jarrod Millman wrote: > On 8/27/07, dmitrey wrote: > >>>> So do you want me to commit the changes I have done to svn right now, before >>>> tomorrow's scipy 0.6 release? >>>> >>> You are talking *only* about documentation improvements, >>> even though incomplete? Is that right? Then unless Jarrod >>> objects, that seems OK.
>>> >>> >> Although I have added the meaning of some input/output parameters, the >> optimize.py documentation still remains incomplete, because I don't know >> the meaning of ALL the parameters (from the whole of optimize.py). However, I still >> suppose that some enhancements are better than nothing at all, and others >> could be added by Travis and/or other authors of the file. >> > > Hey Dmitrey, > > I would like to have your improvements to the docstrings for the > optimization code included in the 0.6 release. They don't have to be > complete as long as they are better than what we currently have. > > Thanks, > Jarrod > Hi Jarrod, I don't know when 0.6 will be released; maybe I will commit the changes and some minutes later it will already be released - then it could turn out to have some problems. I mean: I didn't check how __docformat__="restructuredtext en" works, because if I put the line at the start of the optimize.py file my epydoc (v 2.1) says "unknown docformat" and stops. So maybe epydoc will just crash on the optimize.py file. Also, I suppose that it would be better to clean the optimize.py html output of all those auxiliary files like rosenbrock, but I don't know how to do this. Ok, so I committed it 1 min ago, but it requires someone to check the epydoc output (and maybe uncomment __docformat__="restructuredtext en" at the head of the file). Regards, D. From millman at berkeley.edu Tue Aug 28 03:32:25 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 28 Aug 2007 00:32:25 -0700 Subject: [SciPy-dev] some problems with optimize.py docstrings In-Reply-To: <46D3B970.4090406@ukr.net> References: <46D1D02A.1080301@ukr.net> <46D3277D.4070508@ukr.net> <46D3B970.4090406@ukr.net> Message-ID: On 8/27/07, dmitrey wrote: > I don't know when 0.6 will be released; maybe I will commit the changes > and some minutes later it will already be released - then it could turn > out to have some problems.
I mean: I didn't check how > __docformat__="restructuredtext en" works, because if I put the line at the > start of the optimize.py file my epydoc (v 2.1) says "unknown docformat" and > stops. Before I tag the 0.6.0 release, I will first make a 0.6.x branch. I will ask everyone to test the branch and then, if no one reports any problems, I will tag the release and start making binary packages. In short, you don't need to worry about making a change and then having the release come out a few minutes later. > So maybe epydoc will just crash on the optimize.py file. > Also, I suppose that it would be better to clean the optimize.py html output > of all those auxiliary files like rosenbrock, but I don't know how to > do this. Please commit your changes, so that I can see what you have done. If there are any problems, I can always revert the changes. > Ok, so I committed it 1 min ago, but it requires someone to check the epydoc > output (and maybe uncomment __docformat__="restructuredtext en" at the > head of the file). I don't see your changes, so please try committing them again. Thanks for all your help. -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From millman at berkeley.edu Tue Aug 28 04:01:56 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 28 Aug 2007 01:01:56 -0700 Subject: [SciPy-dev] I will branch SciPy 0.6.x tomorrow night (8/29) Message-ID: Hello, Given the amount of work that was done today, I am going to wait another day before branching. The "soft-freeze" is still in effect, so (until I make the branch) please don't check anything in unless it is a bugfix. You can check out the roadmap here: http://projects.scipy.org/scipy/scipy/milestone/0.6 Please let me know if you have any questions/comments/concerns.
Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From millman at berkeley.edu Tue Aug 28 04:43:24 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 28 Aug 2007 01:43:24 -0700 Subject: [SciPy-dev] I need help with a couple of tickets Message-ID: Hello, There are a few tickets that I need some help with: http://projects.scipy.org/scipy/scipy/ticket/238 =================================== My understanding is that the issues raised by this ticket are solved. There is a patch attached that fixes the problem in scipy.linalg; but the patch needs to be generalized to apply to scipy.lib as well. Please let me know if that isn't correct. Also, what is the best way to detect whether something has been compiled with veclib? http://projects.scipy.org/scipy/scipy/ticket/325 =================================== It seems like this ticket should be closed. Is anyone still having this issue? http://projects.scipy.org/scipy/scipy/ticket/401 =================================== It seems like this ticket should be closed. Is anyone still having this issue? 
Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From openopt at ukr.net Tue Aug 28 06:20:09 2007 From: openopt at ukr.net (dmitrey) Date: Tue, 28 Aug 2007 13:20:09 +0300 Subject: [SciPy-dev] some problems with optimize.py docstrings In-Reply-To: References: <46D1D02A.1080301@ukr.net> <46D3277D.4070508@ukr.net> <46D3B970.4090406@ukr.net> Message-ID: <46D3F6D9.8060902@ukr.net> Jarrod, I can't commit: Could not read status line: Connection reset by peer (http://svn.scipy.org) and my ping svn.scipy.org fails (and has been failing for quite a long time now); either fix it if you can, or maybe it will be better if you download optimize.py from the location I had mentioned: http://www.box.net/shared/3bi5n8jtln I would attach the file to the letter, but then I get "wait for moderator approval" because the letter exceeds the max size of 40 kb or so. Regards, D. Jarrod Millman wrote: > On 8/27/07, dmitrey wrote: > >> I don't know when 0.6 will be released; maybe I will commit the changes >> and some minutes later it will already be released - then it could turn >> out to have some problems. I mean: I didn't check how >> __docformat__="restructuredtext en" works, because if I put the line at the >> start of the optimize.py file my epydoc (v 2.1) says "unknown docformat" and >> stops. >> > > Before I tag the 0.6.0 release, I will first make a 0.6.x branch. I > will ask everyone to test the branch and then, if no one reports > any problems, I will tag the release and start making binary packages. > In short, you don't need to worry about making a change and then > having the release come out a few minutes later. > > >> So maybe epydoc will just crash on the optimize.py file. >> Also, I suppose that it would be better to clean the optimize.py html output >> of all those auxiliary files like rosenbrock, but I don't know how to >> do this. >> > > Please commit your changes, so that I can see what you have done.
If > there are any problems, I can always revert the changes. > > >> Ok, so I committed it 1 min ago but it requires someone to check epydoc >> output (and maybe uncomment __docformat__="restructuredtext en" in the >> file head). >> > > I don't see your changes, so please try committing them again. > > Thanks for all your help. > > From millman at berkeley.edu Tue Aug 28 06:25:57 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 28 Aug 2007 03:25:57 -0700 Subject: [SciPy-dev] some problems with optimize.py docstrings In-Reply-To: <46D3F6D9.8060902@ukr.net> References: <46D1D02A.1080301@ukr.net> <46D3277D.4070508@ukr.net> <46D3B970.4090406@ukr.net> <46D3F6D9.8060902@ukr.net> Message-ID: Hey Dmitrey, Looks like the server is down again. It should be back up in a few hours. You can just make the commits then. Thanks, Jarrod From stefan at sun.ac.za Tue Aug 28 06:36:30 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Tue, 28 Aug 2007 12:36:30 +0200 Subject: [SciPy-dev] I will branch SciPy 0.6.x tomorrow night (8/29) In-Reply-To: References: Message-ID: <20070828103630.GY14395@mentat.za.net> On Tue, Aug 28, 2007 at 01:01:56AM -0700, Jarrod Millman wrote: > Given the amount of work that was done today, I am going to wait > another day before branching. The "soft-freeze" is still in effect, > so (until I make the branch) please don't check anything in > unless it is a bugfix. > > You can check out the roadmap here: > http://projects.scipy.org/scipy/scipy/milestone/0.6 > > Please let me know if you have any questions/comments/concerns. In addition, scipy.org is currently down, preventing any patches from being submitted. 
Cheers Stéfan From stefan at sun.ac.za Tue Aug 28 08:11:42 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Tue, 28 Aug 2007 14:11:42 +0200 Subject: [SciPy-dev] Requiring Python 2.4 Message-ID: <20070828121142.GC14395@mentat.za.net> Hi all, I noticed that some Python 2.4-only features crept into parts of SciPy (yes, I'm guilty, too). Has Python 2.5 been out for long enough that we can drop support for 2.3? Stéfan From ondrej at certik.cz Tue Aug 28 08:24:25 2007 From: ondrej at certik.cz (Ondrej Certik) Date: Tue, 28 Aug 2007 14:24:25 +0200 Subject: [SciPy-dev] Requiring Python 2.4 In-Reply-To: <20070828121142.GC14395@mentat.za.net> References: <20070828121142.GC14395@mentat.za.net> Message-ID: <85b5c3130708280524o72e6fe1dg31682a2bfad5a1e0@mail.gmail.com> > I noticed that some Python 2.4-only features crept into parts of SciPy > (yes, I'm guilty, too). Has Python 2.5 been out for long enough that > we can drop support for 2.3? Yes, I would drop the support for python 2.3, thus requiring python >= 2.4. Ondrej From ondrej at certik.cz Tue Aug 28 10:34:17 2007 From: ondrej at certik.cz (Ondrej Certik) Date: Tue, 28 Aug 2007 16:34:17 +0200 Subject: [SciPy-dev] superlu sources and Debian Message-ID: <85b5c3130708280734k74273217s69ff07085920f892@mail.gmail.com> Hi, I adopted the scipy package in debian, it will be fixed soon. And definitely I'll be available to test the new release so that it works in Debian. In the process of polishing the package I noticed there are sources of SuperLU included, but not of the umfpack. I think either there should be both sources of superlu and umfpack, or nothing. I personally am for nothing, since they are already in Debian as separate packages, so it is better to link against already installed packages - saves space and it is the cleaner solution.
Also if there are some bugs discovered and fixed by upstream (superlu, umfpack) or the maintainer of the respective debian package, all I need is to recompile the scipy package, so I am using the work made by others (contrary to distributing the sources of superlu ourselves, in which case we need to fix everything). Ondrej From robert.kern at gmail.com Tue Aug 28 14:10:51 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 28 Aug 2007 13:10:51 -0500 Subject: [SciPy-dev] Requiring Python 2.4 In-Reply-To: <20070828121142.GC14395@mentat.za.net> References: <20070828121142.GC14395@mentat.za.net> Message-ID: <46D4652B.2060507@gmail.com> Stefan van der Walt wrote: > Hi all, > > I noticed that some Python 2.4-only features crept into parts of SciPy > (yes, I'm guilty, too). Has Python 2.5 been out for long enough that > we can drop support for 2.3? I'd prefer that we didn't. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Tue Aug 28 14:20:05 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 28 Aug 2007 13:20:05 -0500 Subject: [SciPy-dev] superlu sources and Debian In-Reply-To: <85b5c3130708280734k74273217s69ff07085920f892@mail.gmail.com> References: <85b5c3130708280734k74273217s69ff07085920f892@mail.gmail.com> Message-ID: <46D46755.4020007@gmail.com> Ondrej Certik wrote: > Hi, > > I adopted the scipy package in debian, it will be fixed soon. And > definitely I'll be available to test the new release so that it works > in Debian. Thank you! That's wonderful news. > In the process of polishing the package I noticed there are sources of > SuperLU included, but not of the umfpack. I think either there should > be both sources of superlu and umfpack, or nothing. UMFPACK is GPLed and optional (for that reason), so we cannot distribute it. 
SuperLU is under a BSDish license, is not optional, and is small enough to include that the costs of doing so are outweighed by the benefits of not forcing most scipy users to download and build SuperLU separately. We don't all use Debian. > I personally am > for nothing, since they are already in Debian as separate packages, so > it is better to link against already installed packages - saves space > and it is the cleaner solution. Also if there are some bugs discovered > and fixed by upstream (superlu, umfpack) or the maintainer of the > respective debian package, all I need is to recompile the scipy > package, so I am using the work made by others (contrary to > distributing the sources of superlu ourselves, in which case we need > to fix everything). As a package maintainer, you are free to patch up scipy as you see fit in order to reuse other packages in your distribution. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From millman at berkeley.edu Tue Aug 28 14:24:43 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 28 Aug 2007 11:24:43 -0700 Subject: [SciPy-dev] Requiring Python 2.4 In-Reply-To: <46D4652B.2060507@gmail.com> References: <20070828121142.GC14395@mentat.za.net> <46D4652B.2060507@gmail.com> Message-ID: On 8/28/07, Robert Kern wrote: > > I noticed that some Python 2.4-only features crept into parts of SciPy > > (yes, I'm guilty, too). Has Python 2.5 been out for long enough that > > we can drop support for 2.3? What 2.4 features are being used? 
-- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From stefan at sun.ac.za Tue Aug 28 17:22:46 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Tue, 28 Aug 2007 23:22:46 +0200 Subject: [SciPy-dev] Requiring Python 2.4 In-Reply-To: References: <20070828121142.GC14395@mentat.za.net> <46D4652B.2060507@gmail.com> Message-ID: <20070828212246.GH14395@mentat.za.net> On Tue, Aug 28, 2007 at 11:24:43AM -0700, Jarrod Millman wrote: > On 8/28/07, Robert Kern wrote: > > > I noticed that some Python 2.4-only features crept into parts of SciPy > > > (yes, I'm guilty, too). Has Python 2.5 been out for long enough that > > > we can drop support for 2.3? > > What 2.4 features are being used? Mostly list comprehension (should be easy to find using a regular expression search). No decorators (those you don't use by accident). I hope that ParametricTest is still compatible with 2.3, but I don't have 2.3 around to test anymore. Robert says we should still support 2.3 -- but is there a time limit or a migration plan? Regards Stéfan From pgmdevlist at gmail.com Tue Aug 28 17:33:21 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Tue, 28 Aug 2007 17:33:21 -0400 Subject: [SciPy-dev] Requiring Python 2.4 In-Reply-To: <20070828212246.GH14395@mentat.za.net> References: <20070828121142.GC14395@mentat.za.net> <20070828212246.GH14395@mentat.za.net> Message-ID: <200708281733.22448.pgmdevlist@gmail.com> On Tuesday 28 August 2007 17:22:46 Stefan van der Walt wrote: > Robert says we should still support 2.3 -- but is there a time limit > or a migration plan? FYI, the maskedarray implementation uses properties. Those are 2.4 features, right ?
From robert.kern at gmail.com Tue Aug 28 17:48:19 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 28 Aug 2007 16:48:19 -0500 Subject: [SciPy-dev] Requiring Python 2.4 In-Reply-To: <200708281733.22448.pgmdevlist@gmail.com> References: <20070828121142.GC14395@mentat.za.net> <20070828212246.GH14395@mentat.za.net> <200708281733.22448.pgmdevlist@gmail.com> Message-ID: <46D49823.5000502@gmail.com> Pierre GM wrote: > On Tuesday 28 August 2007 17:22:46 Stefan van der Walt wrote: >> Robert says we should still support 2.3 -- but is there a time limit >> or a migration plan? > > FYI, the maskedarray implementation uses properties. Those are 2.4 features, > right ? No. Only the decorator syntax is new. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From l.mastrodomenico at gmail.com Tue Aug 28 17:52:04 2007 From: l.mastrodomenico at gmail.com (Lino Mastrodomenico) Date: Tue, 28 Aug 2007 23:52:04 +0200 Subject: [SciPy-dev] Requiring Python 2.4 In-Reply-To: <20070828212246.GH14395@mentat.za.net> References: <20070828121142.GC14395@mentat.za.net> <46D4652B.2060507@gmail.com> <20070828212246.GH14395@mentat.za.net> Message-ID: 2007/8/28, Stefan van der Walt : > On Tue, Aug 28, 2007 at 11:24:43AM -0700, Jarrod Millman wrote: > > What 2.4 features are being used? > > Mostly list comprehension (should be easy to find using a regular > expression search). List comprehensions, e.g. [i for i in range(10)], were introduced in Python 2.0. Maybe you mean generator expressions, e.g. (i for i in range(10)) ? 
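The distinction Lino draws can be shown in a few lines (a minimal illustration, not taken from the scipy sources): a list comprehension, available since Python 2.0, builds the whole list up front, while a 2.4-style generator expression is lazy and can only be consumed once.

```python
# List comprehension (Python 2.0+): the full list is built immediately.
squares_list = [i * i for i in range(10)]

# Generator expression (Python 2.4+): lazy; values are produced on demand.
squares_gen = (i * i for i in range(10))

assert squares_list == [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
assert list(squares_gen) == squares_list  # consuming the generator
assert list(squares_gen) == []            # exhausted after a single pass
```

So a search for the bracketed form would only turn up 2.0-era syntax; it is the parenthesized form that would have broken Python 2.3 compatibility.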
-- Lino Mastrodomenico E-mail: l.mastrodomenico at gmail.com From stefan at sun.ac.za Tue Aug 28 18:06:03 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 29 Aug 2007 00:06:03 +0200 Subject: [SciPy-dev] Requiring Python 2.4 In-Reply-To: References: <20070828121142.GC14395@mentat.za.net> <46D4652B.2060507@gmail.com> <20070828212246.GH14395@mentat.za.net> Message-ID: <20070828220603.GM14395@mentat.za.net> On Tue, Aug 28, 2007 at 11:52:04PM +0200, Lino Mastrodomenico wrote: > 2007/8/28, Stefan van der Walt : > > On Tue, Aug 28, 2007 at 11:24:43AM -0700, Jarrod Millman wrote: > > > What 2.4 features are being used? > > > > Mostly list comprehension (should be easy to find using a regular > > expression search). > > List comprehensions, e.g. [i for i in range(10)], were introduced in > Python 2.0. Well, that's good news, then :) Thanks Stéfan From robert.kern at gmail.com Tue Aug 28 18:47:06 2007 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 28 Aug 2007 17:47:06 -0500 Subject: [SciPy-dev] Requiring Python 2.4 In-Reply-To: <20070828212246.GH14395@mentat.za.net> References: <20070828121142.GC14395@mentat.za.net> <46D4652B.2060507@gmail.com> <20070828212246.GH14395@mentat.za.net> Message-ID: <46D4A5EA.5060502@gmail.com> Stefan van der Walt wrote: > On Tue, Aug 28, 2007 at 11:24:43AM -0700, Jarrod Millman wrote: >> On 8/28/07, Robert Kern wrote: >>>> I noticed that some Python 2.4-only features crept into parts of SciPy >>>> (yes, I'm guilty, too). Has Python 2.5 been out for long enough that >>>> we can drop support for 2.3? >> What 2.4 features are being used? > > Mostly list comprehension (should be easy to find using a regular > expression search). No decorators (those you don't use by accident). > I hope that ParametricTest is still compatible with 2.3, but I don't > have 2.3 around to test anymore. What 2.4 features do you think you might have used?
The list of such features is here: http://www.python.org/doc/2.4.4/whatsnew/whatsnew24.html > Robert says we should still support 2.3 -- but is there a time limit > or a migration plan? To the extent that I have a say in the matter (and a survey of recent checkin activity would suggest "significantly less than others in this discussion"), I would like to maintain compatibility with as old a Python as we can for as long as we can. I think we should have a compelling reason to drop support for a version of Python that is more than just the age of that version. People are still actively using Python 2.3. People are still actively using Python 1.5.2, for that matter; and the people that still maintain their libraries for that platform are highly regarded for it (e.g. Fredrik Lundh). We should drop compatibility when there is a significant new feature that we need to use. If scipy had a lot of places where it used popen, the subprocess module in 2.4 would be a good reason to switch, in my opinion. However, I don't see many 2.4 features that are really critical, just conveniences. On the other hand, if there were a great influx of developers who had never seen a Python older than 2.4, that might be a good reason to switch, too. But until then, I think the rest of us can have the discipline to use only features from 2.3. It's not that hard. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
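The popen-versus-subprocess convenience Robert alludes to looks roughly like this (a sketch assuming a POSIX system with `echo` on the PATH; neither snippet is from scipy itself):

```python
import os
import subprocess

# Pre-2.4 idiom: run through the shell, read output from a file-like object.
out_old = os.popen("echo hello").read().strip()

# subprocess (new in Python 2.4): explicit argument list, no shell parsing.
proc = subprocess.Popen(["echo", "hello"], stdout=subprocess.PIPE)
out_new = proc.communicate()[0].decode().strip()

assert out_old == out_new == "hello"
```

The subprocess form avoids shell-quoting pitfalls and gives direct access to the return code - the kind of improvement that might justify bumping the minimum Python version, but, as Robert argues, a convenience rather than a necessity.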
-- Umberto Eco From millman at berkeley.edu Tue Aug 28 19:01:36 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 28 Aug 2007 16:01:36 -0700 Subject: [SciPy-dev] Requiring Python 2.4 In-Reply-To: <46D4A5EA.5060502@gmail.com> References: <20070828121142.GC14395@mentat.za.net> <46D4652B.2060507@gmail.com> <20070828212246.GH14395@mentat.za.net> <46D4A5EA.5060502@gmail.com> Message-ID: On 8/28/07, Robert Kern wrote: > On the other hand, if there were a great influx of developers who had never seen > a Python older than 2.4, that might be a good reason to switch, too. But until > then, I think the rest of us can have the discipline to use only features from > 2.3. It's not that hard. I agree. Let's at least table the discussion for now; we can return to this question later. And since it doesn't sound like any Python 2.4 features have sneaked in, the 0.6 release will still be compatible with Python 2.3. Please correct me if I am missing something. Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From stefan at sun.ac.za Tue Aug 28 19:21:40 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 29 Aug 2007 01:21:40 +0200 Subject: [SciPy-dev] Requiring Python 2.4 In-Reply-To: <46D4A5EA.5060502@gmail.com> References: <20070828121142.GC14395@mentat.za.net> <46D4652B.2060507@gmail.com> <20070828212246.GH14395@mentat.za.net> <46D4A5EA.5060502@gmail.com> Message-ID: <20070828232139.GO14395@mentat.za.net> On Tue, Aug 28, 2007 at 05:47:06PM -0500, Robert Kern wrote: > I would like to maintain compatibility with as old a Python as we > can for as long as we can. I think we should have a compelling > reason to drop support for a version of Python that is more than > just the age of that version. Fair enough. For some reason I was under the impression that list comprehension was a 2.4 feature, but as Lino pointed out, it's been around forever. 
The rest of the features introduced in 2.4 were fairly minor, and we can work around decorators. The Python 2.5 list is more exciting: http://docs.python.org/whatsnew/whatsnew25.html Especially conditional expressions, partial functions, and 'with' context managers. Nothing we can't live without, though. Thanks for the feedback. Stéfan From stefan at sun.ac.za Tue Aug 28 19:30:53 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 29 Aug 2007 01:30:53 +0200 Subject: [SciPy-dev] Requiring Python 2.4 In-Reply-To: References: <20070828121142.GC14395@mentat.za.net> <46D4652B.2060507@gmail.com> <20070828212246.GH14395@mentat.za.net> <46D4A5EA.5060502@gmail.com> Message-ID: <20070828233053.GP14395@mentat.za.net> On Tue, Aug 28, 2007 at 04:01:36PM -0700, Jarrod Millman wrote: > On 8/28/07, Robert Kern wrote: > > On the other hand, if there were a great influx of developers who had never seen > > a Python older than 2.4, that might be a good reason to switch, too. But until > > then, I think the rest of us can have the discipline to use only features from > > 2.3. It's not that hard. > > I agree. Let's at least table the discussion for now; we can return > to this question later. > > And since it doesn't sound like any Python 2.4 features have sneaked > in, the 0.6 release will still be compatible with Python 2.3. Please > correct me if I am missing something. Shouldn't be a problem. I'm now compiling 2.3.6 from source, so if no one else gets around to it, I'll make sure that everything is in order tomorrow morning (GMT+2). The buildbot clients all run 2.4, 2.5 (mainly) and 2.6; I should add a machine running 2.3 as well. Regards Stéfan From openopt at ukr.net Wed Aug 29 02:42:49 2007 From: openopt at ukr.net (dmitrey) Date: Wed, 29 Aug 2007 09:42:49 +0300 Subject: [SciPy-dev] svn server still doesn't work? Message-ID: <46D51569.1090004@ukr.net> hi all, is the svn server down only for me, or for everyone else as well? does anyone know when it will be fixed?
~/install/scipy/Lib/optimize$ svn ci optimize.py svn: Commit failed (details follow): svn: PROPFIND request failed on '/svn/scipy/trunk/Lib/optimize' svn: '/svn/scipy/trunk/Lib/optimize' path not found svn: Your commit message was left in a temporary file: svn: '/home/dmitrey/install/scipy/Lib/optimize/svn-commit.tmp' Regards, D. From matthieu.brucher at gmail.com Wed Aug 29 02:47:28 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 29 Aug 2007 08:47:28 +0200 Subject: [SciPy-dev] svn server still doesn't work? In-Reply-To: <46D51569.1090004@ukr.net> References: <46D51569.1090004@ukr.net> Message-ID: Hi, I just tried 5 minutes ago, and it works. Matthieu 2007/8/29, dmitrey : > > hi all, > does svn server not work only for me or for all as well? > > does anyone know when it will be fixed? > > ~/install/scipy/Lib/optimize$ svn ci optimize.py > svn: Commit failed (details follow): > svn: PROPFIND request failed on '/svn/scipy/trunk/Lib/optimize' > svn: '/svn/scipy/trunk/Lib/optimize' path not found > svn: Your commit message was left in a temporary file: > svn: '/home/dmitrey/install/scipy/Lib/optimize/svn-commit.tmp' > > Regards, D. > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.brucher at gmail.com Wed Aug 29 02:48:01 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 29 Aug 2007 08:48:01 +0200 Subject: [SciPy-dev] svn server still doesn't work? In-Reply-To: <46D51569.1090004@ukr.net> References: <46D51569.1090004@ukr.net> Message-ID: OK, in fact the Lib folder was renamed scipy, that's why it doesn't work. Matthieu 2007/8/29, dmitrey : > > hi all, > does svn server not work only for me or for all as well? > > does anyone know when it will be fixed? 
> > ~/install/scipy/Lib/optimize$ svn ci optimize.py > svn: Commit failed (details follow): > svn: PROPFIND request failed on '/svn/scipy/trunk/Lib/optimize' > svn: '/svn/scipy/trunk/Lib/optimize' path not found > svn: Your commit message was left in a temporary file: > svn: '/home/dmitrey/install/scipy/Lib/optimize/svn-commit.tmp' > > Regards, D. > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ondrej at certik.cz Wed Aug 29 02:54:51 2007 From: ondrej at certik.cz (Ondrej Certik) Date: Wed, 29 Aug 2007 08:54:51 +0200 Subject: [SciPy-dev] superlu sources and Debian In-Reply-To: <46D46755.4020007@gmail.com> References: <85b5c3130708280734k74273217s69ff07085920f892@mail.gmail.com> <46D46755.4020007@gmail.com> Message-ID: <85b5c3130708282354x2ed19d50r128ae60a670e915c@mail.gmail.com> > > In the process of polishing the package I noticed there are sources of > > SuperLU included, but not of the umfpack. I think either there should > > be both sources of superlu and umfpack, or nothing. > > UMFPACK is GPLed and optional (for that reason), so we cannot distribute it. > SuperLU is under a BSDish license, is not optional, and is small enough to > include that the costs of doing so are outweighed by the benefits of not forcing > most scipy users to download and build SuperLU separately. We don't all use Debian. I can see now - it makes sense to include SuperLU, so that at least one solver is working out of the box for every scipy user. Generally though, my philosophy is not to distribute other programs in projects, but rather it is the job of the linux distribution (whichever you use) to do so. OK, thanks for clarification. 
Ondrej From stefan at sun.ac.za Wed Aug 29 03:09:07 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 29 Aug 2007 09:09:07 +0200 Subject: [SciPy-dev] svn server still doesn't work? In-Reply-To: <46D51569.1090004@ukr.net> References: <46D51569.1090004@ukr.net> Message-ID: <20070829070907.GR14395@mentat.za.net> On Wed, Aug 29, 2007 at 09:42:49AM +0300, dmitrey wrote: > hi all, > does svn server not work only for me or for all as well? > > does anyone know when it will be fixed? > > ~/install/scipy/Lib/optimize$ svn ci optimize.py > svn: Commit failed (details follow): > svn: PROPFIND request failed on '/svn/scipy/trunk/Lib/optimize' > svn: '/svn/scipy/trunk/Lib/optimize' path not found > svn: Your commit message was left in a temporary file: > svn: '/home/dmitrey/install/scipy/Lib/optimize/svn-commit.tmp' A good rule to follow: always "svn up" before you check in changes. Cheers Stéfan From openopt at ukr.net Wed Aug 29 03:36:10 2007 From: openopt at ukr.net (dmitrey) Date: Wed, 29 Aug 2007 10:36:10 +0300 Subject: [SciPy-dev] svn server still doesn't work? In-Reply-To: <20070829070907.GR14395@mentat.za.net> References: <46D51569.1090004@ukr.net> <20070829070907.GR14395@mentat.za.net> Message-ID: <46D521EA.1050601@ukr.net> Stefan van der Walt wrote: > On Wed, Aug 29, 2007 at 09:42:49AM +0300, dmitrey wrote: > >> hi all, >> does svn server not work only for me or for all as well? >> >> does anyone know when it will be fixed? >> >> ~/install/scipy/Lib/optimize$ svn ci optimize.py >> svn: Commit failed (details follow): >> svn: PROPFIND request failed on '/svn/scipy/trunk/Lib/optimize' >> svn: '/svn/scipy/trunk/Lib/optimize' path not found >> svn: Your commit message was left in a temporary file: >> svn: '/home/dmitrey/install/scipy/Lib/optimize/svn-commit.tmp' >> > > A good rule to follow: always "svn up" before you check in changes. > As for me, I do see the results in the shell (terminal): if it's M(erge) or, moreover, C(onflict), then I see what the matter is.
Regards, D. From openopt at ukr.net Wed Aug 29 05:28:56 2007 From: openopt at ukr.net (dmitrey) Date: Wed, 29 Aug 2007 12:28:56 +0300 Subject: [SciPy-dev] about optimization funcs Message-ID: <46D53C58.4080402@ukr.net> hi all, do you mind if I add a single line to the scipy.optimize fmin_tnc and lbfgsb docstrings saying that these solvers are also available (with another, unified call syntax) in scikits.openopt (provided scipy is installed)? (briefly, for those who aren't familiar with scikits.openopt yet: it looks like prob = scikits.openopt.NLP(fun, x0, **kwargs), for example NLP(fun, x0, iprint = 100, xtol = 1e-4, doPlot = 1, maxIter = 1000, maxCPUTime = 100,...) r = prob.solve('scipy_tnc') or r = prob.solve('scipy_lbfgsb') or 'lincher' or 'ralg' (this one is for unconstrained problems only, for now) or p = QP(...) or LP(...), NSP(...) for nonsmooth, MILP for mixed-integer LP ) The line should be something like "also you can call the solver from scikits using unified openopt syntax" or "alternatively you can call the solver from scikits using unified openopt syntax" you can correct the line because I'm not very skilled in English. If no one minds, then I will probably add the line; so what do you think? Regards, D. From aisaac at american.edu Wed Aug 29 09:52:00 2007 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 29 Aug 2007 09:52:00 -0400 Subject: [SciPy-dev] about optimization funcs In-Reply-To: <46D53C58.4080402@ukr.net> References: <46D53C58.4080402@ukr.net> Message-ID: On Wed, 29 Aug 2007, dmitrey apparently wrote: > "also you can call the solver from scikits using unified > openopt syntax" > or > "alternatively you can call the solver from scikits using unified > openopt syntax" > you can correct the line because I'm not very skilled in English. > If no one minds, then I will probably add the line; so what do you think?
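To make the unified-call pattern dmitrey describes concrete, here is a toy sketch of the idea: one problem object, many backends selected by name. Everything below is an illustrative stand-in written for this thread, not scikits.openopt's actual implementation:

```python
# Toy sketch of a unified solver interface in the spirit of
# scikits.openopt: NLP(fun, x0, **options).solve('solver_name').
# The NLP class and the 'toy_descent' backend are hypothetical.

class NLP(object):
    def __init__(self, fun, x0, **kwargs):
        self.fun = fun
        self.x0 = x0
        self.options = kwargs          # e.g. xtol, maxIter, ...

    def solve(self, solver_name):
        # Dispatch to a registered backend by name; every backend
        # receives the same problem object, so the call syntax is
        # identical no matter which solver is used.
        return _SOLVERS[solver_name](self)

def _toy_descent(prob):
    # Stand-in "solver": crude fixed-schedule coordinate search.
    x = prob.x0
    for step in (0.5, -0.5, 0.25, -0.25, 0.125, -0.125):
        if prob.fun(x + step) < prob.fun(x):
            x = x + step
    return x

_SOLVERS = {'toy_descent': _toy_descent}

# Unified call syntax, mirroring dmitrey's example:
prob = NLP(lambda x: (x - 1.0) ** 2, x0=0.0, xtol=1e-4)
x = prob.solve('toy_descent')
```

The point of the design is that swapping solvers changes only the string passed to solve(), which is what makes a one-line cross-reference in the scipy docstrings attractive.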
Taking advantage of reST, perhaps something like:: :see: scikits.openopt, which offers a unified syntax to call this and other solvers Cheers, Alan Isaac From millman at berkeley.edu Wed Aug 29 20:48:08 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Wed, 29 Aug 2007 17:48:08 -0700 Subject: [SciPy-dev] SciPy trunk has references to ParametricTestCase Message-ID: Hello, I noticed that a few references to ParametricTestCase have made it into the SciPy trunk: scipy/sparse/tests/test_sparse.py: ParametricTestCase): scipy/misc/tests/test_pilutil.py:class test_pilutil(ParametricTestCase): ParametricTestCase was only recently added to the NumPy: http://projects.scipy.org/scipy/numpy/changeset/3976/trunk/numpy/testing/parametric.py So should we remove these references just from the tag (which I will make very soon, I promise) or should we remove them from the trunk (at least until a new release of NumPy is made)? -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From robert.kern at gmail.com Wed Aug 29 21:48:46 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 29 Aug 2007 20:48:46 -0500 Subject: [SciPy-dev] SciPy trunk has references to ParametricTestCase In-Reply-To: References: Message-ID: <46D621FE.3040709@gmail.com> Jarrod Millman wrote: > Hello, > > I noticed that a few references to ParametricTestCase have made it > into the SciPy trunk: > scipy/sparse/tests/test_sparse.py: ParametricTestCase): > scipy/misc/tests/test_pilutil.py:class test_pilutil(ParametricTestCase): > > ParametricTestCase was only recently added to the NumPy: > http://projects.scipy.org/scipy/numpy/changeset/3976/trunk/numpy/testing/parametric.py > > So should we remove these references just from the tag (which I will > make very soon, I promise) or should we remove them from the trunk (at > least until a new release of NumPy is made)? 
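Alan's reST suggestion above amounts to appending a field to the end of the existing docstring. A sketch of what that would look like — `fake_solver` is a dummy stand-in, not scipy's actual fmin_tnc or lbfgsb:

```python
# Sketch: adding the suggested reST ``:see:`` field to a solver
# docstring.  The function body is a placeholder.

def fake_solver(fun, x0):
    """Minimize fun starting from x0.

    :see: scikits.openopt, which offers a unified syntax to call
        this and other solvers
    """
    return x0
```

Because `:see:` is a plain reST field, docstring-processing tools can render it as a cross-reference while it stays readable as plain text in help().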
I don't mind the trunk of scipy requiring the trunk of numpy. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From millman at berkeley.edu Wed Aug 29 21:52:02 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Wed, 29 Aug 2007 18:52:02 -0700 Subject: [SciPy-dev] SciPy trunk has references to ParametricTestCase In-Reply-To: <46D621FE.3040709@gmail.com> References: <46D621FE.3040709@gmail.com> Message-ID: On 8/29/07, Robert Kern wrote: > I don't mind the trunk of scipy requiring the trunk of numpy. Fine with me as well. I will go ahead and branch. Then, we can just remove the ParametricTestCase from the branch. -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From millman at berkeley.edu Wed Aug 29 22:00:35 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Wed, 29 Aug 2007 19:00:35 -0700 Subject: [SciPy-dev] 0.6.x branch created Message-ID: Hello, Thanks for all the work getting the trunk in shape for the 0.6.x branch: http://svn.scipy.org/svn/scipy/branches/0.6.x There are a few small changes I still want to make and I will need to ask everyone to test it before making the 0.6.0 tag and release. The version of the trunk has been updated to reflect the fact that all development on it will be in preparation for the 0.7.0 release in about 3 months. Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From millman at berkeley.edu Wed Aug 29 23:19:18 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Wed, 29 Aug 2007 20:19:18 -0700 Subject: [SciPy-dev] ticket 401 Message-ID: Hello, I can't reproduce this problem: http://projects.scipy.org/scipy/scipy/ticket/401 Neither could David Cournapeau. 
So unless someone tells me otherwise I am going to mark it closed tomorrow. Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From ondrej at certik.cz Thu Aug 30 02:55:24 2007 From: ondrej at certik.cz (Ondrej Certik) Date: Thu, 30 Aug 2007 08:55:24 +0200 Subject: [SciPy-dev] 0.6.x branch created In-Reply-To: References: Message-ID: <85b5c3130708292355u6ac0450cr60407f902a1f676@mail.gmail.com> > Thanks for all the work getting the trunk in shape for the 0.6.x branch: > http://svn.scipy.org/svn/scipy/branches/0.6.x > > There are a few small changes I still want to make and I will need to > ask everyone to test it before making the 0.6.0 tag and release. Compiles fine, only I need to patch umfpack.i with stuff like this: -%include +%include But this is specific to Debian, because there the umfpack is in the suitesparse package (I have this patch in the debian package, so you don't have to worry about it). The results of tests are here (am I executing them correctly?). Ondrej ondra at pc232:~/scipy/jarrod/dist/lib/python2.4/site-packages$ python Python 2.4.4 (#2, Aug 16 2007, 02:03:40) [GCC 4.1.3 20070812 (prerelease) (Debian 4.1.2-15)] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> import scipy >>> scipy.test() Found 9 tests for scipy.cluster.vq Found 18 tests for scipy.fftpack.basic Found 4 tests for scipy.fftpack.helper Found 20 tests for scipy.fftpack.pseudo_diffs Found 1 tests for scipy.integrate Found 10 tests for scipy.integrate.quadpack Found 3 tests for scipy.integrate.quadrature Found 6 tests for scipy.interpolate Found 6 tests for scipy.interpolate.fitpack Found 4 tests for scipy.io.array_import Found 28 tests for scipy.io.mio Found 13 tests for scipy.io.mmio Found 5 tests for scipy.io.npfile Found 4 tests for scipy.io.recaster Found 16 tests for scipy.lib.blas Found 128 tests for scipy.lib.blas.fblas Found 42 tests for scipy.lib.lapack Found 41 tests for scipy.linalg.basic Found 14 tests for scipy.linalg.blas Found 72 tests for scipy.linalg.decomp Found 128 tests for scipy.linalg.fblas Found 6 tests for scipy.linalg.iterative Found 4 tests for scipy.linalg.lapack Found 7 tests for scipy.linalg.matfuncs Found 9 tests for scipy.linsolve.umfpack Found 2 tests for scipy.maxentropy Warning: FAILURE importing tests for scipy/misc/tests/test_pilutil.py:13: NameError: name 'ParametricTestCase' is not defined (in ?) Found 399 tests for scipy.ndimage Found 5 tests for scipy.odr Found 8 tests for scipy.optimize Found 1 tests for scipy.optimize.cobyla Found 10 tests for scipy.optimize.nonlin Found 4 tests for scipy.optimize.zeros Found 5 tests for scipy.signal.signaltools Found 4 tests for scipy.signal.wavelets Warning: FAILURE importing tests for scipy/sparse/tests/test_sparse.py:792: NameError: name 'ParametricTestCase' is not defined (in ?) Warning: FAILURE importing tests for scipy/sparse/tests/test_sparse.py:792: NameError: name 'ParametricTestCase' is not defined (in ?) 
Found 342 tests for scipy.special.basic Found 3 tests for scipy.special.spfun_stats Found 107 tests for scipy.stats Found 73 tests for scipy.stats.distributions Found 23 tests for scipy.stats.models.formula Found 2 tests for scipy.stats.models.glm Found 4 tests for scipy.stats.models.regression Found 2 tests for scipy.stats.models.rlm Found 6 tests for scipy.stats.models.utils Found 10 tests for scipy.stats.morestats Found 0 tests for __main__ .../home/ondra/scipy/jarrod/dist/lib/python2.4/site-packages/scipy/cluster/vq.py:477: UserWarning: One of the clusters is empty. Re-run kmean with a different initialization. warnings.warn("One of the clusters is empty. " exception raised as expected: One of the clusters is empty. Re-run kmean with a different initialization. ................................................Residual: 1.05006950608e-07 ..................../home/ondra/scipy/jarrod/dist/lib/python2.4/site-packages/scipy/interpolate/fitpack2.py:458: UserWarning: The coefficients of the spline returned have been computed as the minimal norm least-squares solution of a (numerically) rank deficient system (deficiency=7). If deficiency is large, the results may be inaccurate. Deficiency may strongly depend on the value of eps. warnings.warn(message) ...... Don't worry about a warning regarding the number of bytes read. Warning: 1000000 bytes requested, 20 bytes read. 
.........................................................................caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 ..............................................................................................................................................................................caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 .............Result may be inaccurate, approximate err = 4.44411917524e-09 ...Result may be inaccurate, approximate err = 1.61696391382e-10 ......Use minimum degree ordering on A'+A. ..Use minimum degree ordering on A'+A. ...Use minimum degree ordering on A'+A. ..........................................................................................................scipy/ndimage/interpolation.py:41: UserWarning: Mode "reflect" may yield incorrect results on boundaries. Please use "mirror" instead. 
warnings.warn('Mode "reflect" may yield incorrect results on ' ...........................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................0.2 0.2 0.2 ......0.2 ..0.2 0.2 0.2 0.2 0.2 ..............................................................................................................................................................................................................................................Ties preclude use of exact statistic. ..Ties preclude use of exact statistic. ...... ---------------------------------------------------------------------- Ran 1608 tests in 6.435s OK From fperez.net at gmail.com Thu Aug 30 14:24:18 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 30 Aug 2007 12:24:18 -0600 Subject: [SciPy-dev] 0.6.x branch created In-Reply-To: References: Message-ID: Hey, On 8/29/07, Jarrod Millman wrote: > Hello, > > Thanks for all the work getting the trunk in shape for the 0.6.x branch: > http://svn.scipy.org/svn/scipy/branches/0.6.x > > There are a few small changes I still want to make and I will need to > ask everyone to test it before making the 0.6.0 tag and release. > > The version of the trunk has been updated to reflect the fact that all > development on it will be in preparation for the 0.7.0 release in > about 3 months. 
These tests were done on an Ubuntu Feisty 32-bit box, with python2.5 and using the official numpy: In [7]: numpy.__version__ Out[7]: '1.0.3.1' from the sourceforge released tarball. Both the short and long test suites pass; I'm attaching the result log. Oddly enough, the very first time I ran the test suite, it hung for about 1/2 hour at 100% CPU until I killed it. This is the traceback I got at that point: Use minimum degree ordering on A'+A. Traceback (most recent call last): File "./testpkg", line 23, in <module> pkg.test(10) File "/home/fperez/usr/opt/lib/python2.5/site-packages/scipy/__init__.py", line 77, in test return NumpyTest(scipy).test(level, verbosity) File "/home/fperez/usr/opt/lib/python2.5/site-packages/numpy/testing/numpytest.py", line 568, in test runner.run(all_tests) File "/usr/lib/python2.5/unittest.py", line 705, in run test(result) File "/usr/lib/python2.5/unittest.py", line 437, in __call__ return self.run(*args, **kwds) File "/usr/lib/python2.5/unittest.py", line 433, in run test(result) File "/home/fperez/usr/opt/lib/python2.5/site-packages/numpy/testing/numpytest.py", line 139, in __call__ unittest.TestCase.__call__(self, result) File "/usr/lib/python2.5/unittest.py", line 281, in __call__ return self.run(*args, **kwds) File "/usr/lib/python2.5/unittest.py", line 260, in run testMethod() File "/home/fperez/tmp/local/lib/python2.5/site-packages/scipy/optimize/tests/test_optimize.py", line 107, in check_ncg retall=False) File "/home/fperez/usr/opt/lib/python2.5/site-packages/scipy/optimize/optimize.py", line 1107, in fmin_ncg Ap = approx_fhess_p(xk,psupi,fprime,epsilon) File "/home/fperez/usr/opt/lib/python2.5/site-packages/scipy/optimize/optimize.py", line 625, in approx_fhess_p f2 = fprime(*((x0+epsilon*p,)+args)) File "/home/fperez/usr/opt/lib/python2.5/site-packages/scipy/optimize/optimize.py", line 95, in function_wrapper return function(x, *args) File "/home/fperez/tmp/local/lib/python2.5/site-packages/scipy/optimize/tests/test_optimize.py", line 41, in grad log_pdot = dot(self.F, x) KeyboardInterrupt This hasn't happened again, so I mention it just in case anyone else sees the problem (perhaps a test with random data that happened not to converge?) Cheers, f -------------- next part -------------- A non-text attachment was scrubbed... Name: scipy_tests.txt.gz Type: application/x-gzip Size: 2996 bytes Desc: not available URL: From ondrej at certik.cz Thu Aug 30 19:24:13 2007 From: ondrej at certik.cz (Ondrej Certik) Date: Fri, 31 Aug 2007 01:24:13 +0200 Subject: [SciPy-dev] 0.6.x branch created In-Reply-To: References: Message-ID: <85b5c3130708301624m1367cea3m8b9ad3620c78e09d@mail.gmail.com> > This hasn't happened again, so I mention it just in case anyone else > sees the problem (perhaps a test with random data that happened not to > converge?) I think there shouldn't be any random data in the tests, for exactly the reason above: you will not be able to reproduce the bug anyway. Ondrej
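Ondrej's point generalizes: when a test genuinely needs random inputs, seeding the generator up front makes any hang or failure reproducible. A sketch using the standard library's random module — the test case itself is hypothetical, not part of the scipy suite:

```python
import random
import unittest

# If a test must use random data, fix the seed in setUp so every run
# sees the same inputs; a failure can then be reproduced exactly.
# This test case is a made-up example.

class TestWithReproducibleData(unittest.TestCase):
    def setUp(self):
        random.seed(20070830)  # arbitrary but fixed seed
        self.data = [random.random() for _ in range(100)]

    def test_mean_in_unit_interval(self):
        mean = sum(self.data) / len(self.data)
        self.assertTrue(0.0 <= mean <= 1.0)
```

With the seed fixed, two runs of setUp produce identical self.data, so "random" convergence failures like the fmin_ncg hang above would recur deterministically instead of vanishing on the next run.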