From josef.pktd at gmail.com Sat Nov 1 00:49:16 2008
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sat, 1 Nov 2008 00:49:16 -0400
Subject: [SciPy-dev] Reminder: SciPy Sprint is TOMORROW (Saturday) and Sunday
In-Reply-To: <490B8770.6040202@enthought.com>
References: <91b4b1ab0810311423t1cc32c68x8501076730c15fa4@mail.gmail.com> <490B8770.6040202@enthought.com>
Message-ID: <1cd32cbb0810312149h3653de81t9cfcefc606f7bfa7@mail.gmail.com>

I went bug hunting today and updated http://scipy.org/scipy/scipy/ticket/745

Josef

From cournape at gmail.com Sat Nov 1 01:05:21 2008
From: cournape at gmail.com (David Cournapeau)
Date: Sat, 1 Nov 2008 14:05:21 +0900
Subject: [SciPy-dev] Renaming fftpack to fft, removing backends
In-Reply-To: <3d375d730810312050k65b1661et1078983efda255af@mail.gmail.com>
References: <4906D211.6070505@ar.media.kyoto-u.ac.jp> <3d375d730810281031s77008de4q1e956b28dd93e117@mail.gmail.com> <9457e7c80810310923m60aa9008pfef3f6a0ed70c45@mail.gmail.com> <3d375d730810311225w79ec9deey67d7e980eff798e6@mail.gmail.com> <5b8d13220810312015g1366e5c0m821ad3ed893f6275@mail.gmail.com> <3d375d730810312050k65b1661et1078983efda255af@mail.gmail.com>
Message-ID: <5b8d13220810312205i6819fa3u7d09ae611a2a7c1d@mail.gmail.com>

On Sat, Nov 1, 2008 at 12:50 PM, Robert Kern wrote:
>
> For example, Python itself is a pretty good project for maintaining
> backwards compatibility. But behavior changes do happen, and
> deeply-diving codebases like Twisted do break on just about every new
> 2.x release. Because the changes are usually beneficial to a wide
> audience while the breakage is limited to small parts of libraries
> that are depending on mostly unspecified behavior, these changes are
> usually considered acceptable.

Yes, but there is a difference: in Python, it is clear how the decision is made, and the decisions are motivated by a relatively strong direction on where the project is going. Also, Twisted is not meant to be used interactively, I guess.
Scipy and numpy are two things: a library, and an interactive tool (or at least a strong foundation for an interactive tool). For a library, a renaming has little, if any, value. Both arguments can be made, and given that I consider you know more about numpy/scipy than I do, I won't proceed with the renaming in this particular case. But this puzzles me a bit about what the scipy objectives are; maybe I am overstating it, though.

> I don't see how it follows that taking my "middle ground" stance means
> that people won't discuss possibly-code-breaking changes they are
> making.

Sorry, that was not clearly stated; let me rephrase it: I meant that I feel some changes as trivial as a rename have already happened in scipy/numpy, without any discussion. I am wondering why those changes happened rather than others; there are some things I would like to see changed in scipy in particular, and I don't want to start endless discussions about every one of them. Of course, some things will always have to be discussed, but having a somewhat formal process would be helpful, if only as a filter for the dumbest ideas. For other projects I am involved with, I have some feeling for what has a chance to be implemented/accepted and what does not. This is not the case for scipy. This may well just be my own problem, though.
cheers,

David

From stefan at sun.ac.za Sat Nov 1 09:23:55 2008
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Sat, 1 Nov 2008 15:23:55 +0200
Subject: [SciPy-dev] Reminder: SciPy Sprint is TOMORROW (Saturday) and Sunday
In-Reply-To: <1cd32cbb0810312149h3653de81t9cfcefc606f7bfa7@mail.gmail.com>
References: <91b4b1ab0810311423t1cc32c68x8501076730c15fa4@mail.gmail.com> <490B8770.6040202@enthought.com> <1cd32cbb0810312149h3653de81t9cfcefc606f7bfa7@mail.gmail.com>
Message-ID: <9457e7c80811010623u49c4d488pee651dc5cbadc57f@mail.gmail.com>

Hi Josef,

2008/11/1 :
> I went bug hunting today and updated http://scipy.org/scipy/scipy/ticket/745

I'd like to merge your work on the stats module. Will you be online sometime so we can discuss the changes?

Cheers
Stéfan

From pav at iki.fi Sat Nov 1 12:09:58 2008
From: pav at iki.fi (Pauli Virtanen)
Date: Sat, 1 Nov 2008 16:09:58 +0000 (UTC)
Subject: [SciPy-dev] Trac wiki edit permissions
Message-ID:

Hi,

Could I get permissions to edit Scipy Trac wiki pages? (Right now, it would be useful for the Scipy sprint.) My account there is "pv".

Thanks,
-- Pauli Virtanen

From gvrooyen at gmail.com Sat Nov 1 12:54:23 2008
From: gvrooyen at gmail.com (G-J van Rooyen)
Date: Sat, 1 Nov 2008 18:54:23 +0200
Subject: [SciPy-dev] Definition of gammaln(x) for negative x
Message-ID: <15c068b00811010954w563b871dgf8670beaae53c108@mail.gmail.com>

Hey everyone

Ticket #737 refers:

-----
Gamma is negative for negative x whose floor value is odd. As such, gammaln does not make sense for those values (while staying in the real domain, at least). scipy.special.gammaln returns bogus values:

import numpy as np
from scipy.special import gamma, gammaln
print np.log(gamma(-0.5))
print gammaln(-0.5)

Returns nan in the first case (expected) and 1.26551212348 in the second (a totally meaningless value).
-----

The info line for gammaln reads:

* gammaln -- Log of the absolute value of the gamma function.
With this definition of gammaln, the function actually works fine, since np.log(abs(gamma(-0.5))) is in fact 1.2655. However, this seems to be an unusual definition for gammaln. What is the best way to fix it? Options:

1) Keep it as it is, with gammaln(x) = ln|gamma(x)|
2) Change it so that it returns NaN for negative values of gamma(x) (i.e. negative x whose floor value is odd)
3) Change it to always give NaN for negative values of x (Matlab's approach)
4) Have it return complex values for negative logarithms

Which is best?

G-J

From millman at berkeley.edu Sat Nov 1 13:12:33 2008
From: millman at berkeley.edu (Jarrod Millman)
Date: Sat, 1 Nov 2008 10:12:33 -0700
Subject: [SciPy-dev] Trac wiki edit permissions
In-Reply-To:
References:
Message-ID:

On Sat, Nov 1, 2008 at 9:09 AM, Pauli Virtanen wrote:
> Could I get permissions to edit Scipy Trac wiki pages? (Right now, it
> would be useful for the Scipy sprint.) My account there is "pv".

You should be all set now. Thanks for helping out and please let me know if you need anything else.

-- Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/

From david at ar.media.kyoto-u.ac.jp Sat Nov 1 13:05:17 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Sun, 02 Nov 2008 02:05:17 +0900
Subject: [SciPy-dev] Definition of gammaln(x) for negative x
In-Reply-To: <15c068b00811010954w563b871dgf8670beaae53c108@mail.gmail.com>
References: <15c068b00811010954w563b871dgf8670beaae53c108@mail.gmail.com>
Message-ID: <490C8C4D.8010007@ar.media.kyoto-u.ac.jp>

G-J van Rooyen wrote:
> Hey everyone
>
> Ticket #737 refers:
>
> -----
>
> Gamma is negative for negative x whose floor value is odd. As such,
> gammaln does not make sense for those values (while staying in the
> real domain, at least).
scipy.special.gammaln returns bogus values:
>
> import numpy as np
> from scipy.special import gamma, gammaln
> print np.log(gamma(-0.5))
> print gammaln(-0.5)
>
> Returns nan in the first case (expected) and 1.26551212348 in the
> second (totally meaningless value).
>
> -----
>
> The info line for gammaln reads:
> * gammaln -- Log of the absolute value of the gamma function.
>
> With this definition of gammaln, the function actually works fine,
> since np.log(abs(gamma(-0.5))) is in fact 1.2655.

I have just checked with R: R does define log gamma as log(abs(gamma(x))) (I guess that's where the definition comes from). I find this definition a bit strange; it's not the one I have seen in the places where I've seen log gamma used, but I certainly don't claim to use what would be considered a reference for this stuff (I mostly use log gamma to deal with the precision problems of gamma in some statistics computations).

cheers,

David

From gvrooyen at gmail.com Sat Nov 1 15:53:56 2008
From: gvrooyen at gmail.com (G-J van Rooyen)
Date: Sat, 1 Nov 2008 21:53:56 +0200
Subject: [SciPy-dev] Definition of gammaln(x) for negative x
In-Reply-To: <490C8C4D.8010007@ar.media.kyoto-u.ac.jp>
References: <15c068b00811010954w563b871dgf8670beaae53c108@mail.gmail.com> <490C8C4D.8010007@ar.media.kyoto-u.ac.jp>
Message-ID: <15c068b00811011253mb9cc012k6bbac13f865b3f2e@mail.gmail.com>

I think it probably makes sense to keep it the way it is. AFAIK gammaln is typically used to calculate products and quotients of gamma functions, where it makes more sense to transform the entire calculation into the log domain, in order to prevent numerical inaccuracies, e.g.

gamma(A)*gamma(B)/gamma(C)*gamma(D) = exp(gammaln(A)+gammaln(B)-gammaln(C)+gammaln(D))

which works fine if all the arguments produce positive gamma-values.
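[Editor's note: the log-domain identity above can be checked numerically. The sketch below uses a hypothetical gamma_sign helper (no such function exists in scipy.special at this point), and uses Python's math.lgamma, which follows the same ln|gamma(x)| convention as Cephes' gammaln, so the check runs without scipy.]

```python
import math

def gamma_sign(x):
    # Hypothetical helper: sign of gamma(x) for non-integer x.
    # gamma(x) is negative exactly when x < 0 and floor(x) is odd.
    if x > 0:
        return 1.0
    return -1.0 if int(math.floor(x)) % 2 else 1.0

# gamma(A)*gamma(B) / (gamma(C)*gamma(D)) computed in the log domain.
# math.lgamma returns ln|gamma(x)|, so the sign is restored at the end.
A, B, C, D = -0.5, 2.5, -1.5, 3.0
sign = gamma_sign(A) * gamma_sign(B) * gamma_sign(C) * gamma_sign(D)
value = sign * math.exp(math.lgamma(A) + math.lgamma(B)
                        - math.lgamma(C) - math.lgamma(D))

# Compare against the direct (non-log-domain) computation.
direct = math.gamma(A) * math.gamma(B) / (math.gamma(C) * math.gamma(D))
assert abs(value - direct) < 1e-12
```

The same bookkeeping would work with scipy.special.gammaln itself, since Cephes also computes ln|gamma(x)| (that is option 1 in G-J's list).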
If negative gamma-values are produced (as described in ticket #737), the same calculation can still be done in the log domain: since gammaln already returns ln|gamma(x)|, only a sign correction is needed at the end. For this, only the signs of gamma(A), etc. are needed. The original scipy/special/cephes/gamma.c writes the sign to a global variable named sgngam; this never gets imported into Python. It might make sense to keep gammaln as it is, but to optionally return the sign information of gamma(A) in some way.

Your thoughts?

G-J

2008/11/1 David Cournapeau :
> G-J van Rooyen wrote:
>> Hey everyone
>>
>> Ticket #737 refers:
>>
>> -----
>>
>> Gamma is negative for negative x whose floor value is odd. As such,
>> gammaln does not make sense for those values (while staying in the
>> real domain, at least). scipy.special.gammaln returns bogus values:
>>
>> import numpy as np
>> from scipy.special import gamma, gammaln
>> print np.log(gamma(-0.5))
>> print gammaln(-0.5)
>>
>> Returns nan in the first case (expected) and 1.26551212348 in the
>> second (totally meaningless value).
>>
>> -----
>>
>> The info line for gammaln reads:
>> * gammaln -- Log of the absolute value of the gamma function.
>>
>> With this definition of gammaln, the function actually works fine,
>> since np.log(abs(gamma(-0.5))) is in fact 1.2655.
>
> I have just checked with R, R does define log gamma as the
> log(abs(gamma(x))) (I guess that's where the definition comes from). I
> find this definition a bit strange, that's not the one I have seen where
> I see it used, but I certainly don't claim to use what would be
> considered as a reference for this stuff (I mostly use log gamma to deal
> with precision problem of gamma in some statistics computation).
>
> cheers,
>
> David
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev

From nwagner at iam.uni-stuttgart.de Sat Nov 1 16:28:49 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Sat, 01 Nov 2008 21:28:49 +0100
Subject: [SciPy-dev] FAIL: test_imresize (test_pilutil.TestPILUtil)
Message-ID:

Hi all,

Can someone reproduce the following failure?

======================================================================
FAIL: test_imresize (test_pilutil.TestPILUtil)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib64/python2.5/site-packages/numpy/testing/decorators.py", line 82, in skipper
    return f(*args, **kwargs)
  File "/usr/local/lib64/python2.5/site-packages/scipy/misc/tests/test_pilutil.py", line 24, in test_imresize
    assert_equal(im1.shape,(11,22))
  File "/usr/local/lib64/python2.5/site-packages/numpy/testing/utils.py", line 174, in assert_equal
    assert_equal(len(actual),len(desired),err_msg,verbose)
  File "/usr/local/lib64/python2.5/site-packages/numpy/testing/utils.py", line 183, in assert_equal
    raise AssertionError(msg)
AssertionError:
Items are not equal:
 ACTUAL: 0
 DESIRED: 2

----------------------------------------------------------------------
Ran 2368 tests in 37.285s

FAILED (KNOWNFAIL=2, failures=1)

>>> scipy.__version__
'0.7.0.dev4902'

Cheers,
Nils

From pav at iki.fi Sat Nov 1 18:38:17 2008
From: pav at iki.fi (Pauli Virtanen)
Date: Sat, 1 Nov 2008 22:38:17 +0000 (UTC)
Subject: [SciPy-dev] docs.scipy.org -- new site for the documentation marathon
References:
Message-ID:

Mon, 27 Oct 2008 15:16:14 -0400, jh wrote:
[clip]
> For the top page layout, I propose (rationale below):
>
> (No left sidebar)
> ------------------------------------------------------------
> DOCUMENTATION EDITOR
> Write, review, or proof the docs!
>
> MATURE DOCUMENTS
>
> Guide to Numpy
> PDF book by Travis Oliphant
>
> DOCUMENT-IN-PROGRESS SNAPSHOTS
>
> Numpy Reference Guide (as of yyyy-mm-dd) PDF zipped HTML
> refguide glossary shortcut
>
> Numpy User Guide (as of yyyy-mm-dd)
> PDF zipped HTML
>
> Scipy Reference Guide (as of yyyy-mm-dd) PDF zipped HTML
>
> SEE ALSO
>
> SciPy.org
> all things NumPy/SciPy (bug reports, downloads, conferences, etc.)
>
> SciPy.org/Documentation
> more documentation not (yet) in the right format for this site
>
> SciPy.org/Cookbook
> live, user-contributed examples and recipes for common tasks
> ------------------------------------------------------------

I adapted this suggestion a bit -- it's now live on docs.scipy.org. The "See also" part lives in the sidebar, but otherwise it's almost as above. The download links definitely look better on the front page.

There's also an attempt to link the site visually to the parent scipy.org site, but I'm no web designer, and it seems there are now too many snakes on the page.

-- Pauli Virtanen

From gvrooyen at gmail.com Sat Nov 1 19:19:54 2008
From: gvrooyen at gmail.com (G-J van Rooyen)
Date: Sun, 2 Nov 2008 01:19:54 +0200
Subject: [SciPy-dev] SciPy sprint in Stellenbosch today
Message-ID: <15c068b00811011619n14003149ndbcc6730a597a359@mail.gmail.com>

OK, I post this ONLY because stefanv thought it was cool enough to send to the dev list... :P

We had a SciPy sprint at the Stellenbosch University Media Lab today (a new lab that is only a year old, focusing on next-generation web tech). A number of new devs joined, including myself. Some had no experience in open source dev, source control or unit testing at all, and for them this was a great learning experience.
If there's fizzy sugary caffeine-y drinks and pizza, the postgrads will come :)

Here's an animoto video and slideshow of the South African developers strutting their stuff:

http://catpt.blogspot.com/2008/11/scipy-sprint-07-za.html

One of my colleagues (cover pic of the video) was very impressed with closing his first ticket!

Thanks for all the support, and we look forward to being involved in future!

G-J

From millman at berkeley.edu Sat Nov 1 21:36:24 2008
From: millman at berkeley.edu (Jarrod Millman)
Date: Sat, 1 Nov 2008 18:36:24 -0700
Subject: [SciPy-dev] SciPy sprint in Stellenbosch today
In-Reply-To: <15c068b00811011619n14003149ndbcc6730a597a359@mail.gmail.com>
References: <15c068b00811011619n14003149ndbcc6730a597a359@mail.gmail.com>
Message-ID:

On Sat, Nov 1, 2008 at 4:19 PM, G-J van Rooyen wrote:
> We had a SciPy sprint at the Stellenbosch University Media Lab today
> (a new lab that is only a year old, focusing on next-generation web
> tech). A number of new devs joined, including myself. Some had no
> experience in open source dev, source control or unit testing at all,
> and for them this was a great learning experience. If there's fizzy
> sugary caffeine-y drinks and pizza, the postgrads will come :)

That's very cool. Thanks to everyone who is helping out!

> Here's an animoto video and slideshow of the South African developers
> strutting their stuff:
>
> http://catpt.blogspot.com/2008/11/scipy-sprint-07-za.html
>
> One of my colleagues (cover pic of the video) was very impressed with
> closing his first ticket!
>
> Thanks for all the support, and we look forward to being involved in future!
I added a news item about the sprint on scipy.org: http://www.scipy.org/ -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From tom.grydeland at gmail.com Sun Nov 2 07:12:38 2008 From: tom.grydeland at gmail.com (Tom Grydeland) Date: Sun, 2 Nov 2008 13:12:38 +0100 Subject: [SciPy-dev] Renaming fftpack to fft, removing backends In-Reply-To: <3d375d730810312050k65b1661et1078983efda255af@mail.gmail.com> References: <4906D211.6070505@ar.media.kyoto-u.ac.jp> <3d375d730810281031s77008de4q1e956b28dd93e117@mail.gmail.com> <9457e7c80810310923m60aa9008pfef3f6a0ed70c45@mail.gmail.com> <3d375d730810311225w79ec9deey67d7e980eff798e6@mail.gmail.com> <5b8d13220810312015g1366e5c0m821ad3ed893f6275@mail.gmail.com> <3d375d730810312050k65b1661et1078983efda255af@mail.gmail.com> Message-ID: On Sat, Nov 1, 2008 at 4:50 AM, Robert Kern wrote: > I don't see how it follows that taking my "middle ground" stance means > that people won't discuss possibly-code-breaking changes they are > making. It seems to me that it would encourage more discussion because > the tradeoffs need to be justified and agreed upon. As someone who is primarily a user, not a developer, of scipy (and numpy), I try to keep an eye on what happens on the scipy-dev mailing list, and I am quite prepared to deal with breakage for the time being. Perhaps it would be possible upon a new release to include a README-style file with a list of backward-incompatible changes that have been made since the previous release? It would list the interfaces that have disappeared (possibly with suggested replacements), those that have been moved or renamed, and those that have become deprecated (again with suggested replacements). Just a suggestion. 
> Robert Kern -- Tom Grydeland From millman at berkeley.edu Sun Nov 2 12:16:31 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Sun, 2 Nov 2008 09:16:31 -0800 Subject: [SciPy-dev] scipy.distance Message-ID: Hey, Damian Eads is joining the Berkeley sprint today and we were hoping to make a change that was discussed at the SciPy conference and on the list a few times. As part of his hierarchical clustering work, Damian wrote a module for a bunch of different distance functions: http://scipy.org/scipy/scipy/browser/trunk/scipy/cluster/distance.py This is obviously much more generic and useful than the domain of clustering. So we were proposing to make distance a top-level subpackage called: scipy.distance Before he made the change, I wanted to run it by the list one last time and see if anyone had any thoughts, comments, or suggestions. Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From aisaac at american.edu Sun Nov 2 12:55:14 2008 From: aisaac at american.edu (Alan G Isaac) Date: Sun, 02 Nov 2008 12:55:14 -0500 Subject: [SciPy-dev] scipy.distance In-Reply-To: References: Message-ID: <490DE982.9000008@american.edu> On 11/2/2008 12:16 PM Jarrod Millman apparently wrote: > I wanted to run it by the list one last > time and see if anyone had any thoughts, comments, or suggestions. Minor comment: I do not like the use of capitalized function arguments, especially when used to distinguish function arguments (e.g., v vs. V). But overall, great! Alan From aarchiba at physics.mcgill.ca Sun Nov 2 14:01:27 2008 From: aarchiba at physics.mcgill.ca (Anne Archibald) Date: Sun, 2 Nov 2008 15:01:27 -0400 Subject: [SciPy-dev] scipy.distance In-Reply-To: References: Message-ID: 2008/11/2 Jarrod Millman : > Hey, > > Damian Eads is joining the Berkeley sprint today and we were hoping to > make a change that was discussed at the SciPy conference and on the > list a few times. 
As part of his hierarchical clustering work, Damian > wrote a module for a bunch of different distance functions: > http://scipy.org/scipy/scipy/browser/trunk/scipy/cluster/distance.py > > This is obviously much more generic and useful than the domain of > clustering. So we were proposing to make distance a top-level > subpackage called: > scipy.distance > > Before he made the change, I wanted to run it by the list one last > time and see if anyone had any thoughts, comments, or suggestions. Very nice code! It seems closely related to scipy.spatial. It was agreed, fairly reasonably, that the new code in scipy.spatial wouldn't go into 0.7, but this could quite reasonably go into scipy.spatial now, I think. Just where to put it isn't so clear - if scipy.spatial already existed and were populated, I would be tempted to put this as scipy.spatial.distance. Anne From damian.eads.lists at gmail.com Sun Nov 2 16:46:32 2008 From: damian.eads.lists at gmail.com (Damian Eads) Date: Sun, 2 Nov 2008 13:46:32 -0800 Subject: [SciPy-dev] Generating SciPy Sphinx HTML Message-ID: <91b4b1ab0811021346q4c6429f7xf4eef688dc6b3b10@mail.gmail.com> Hi, I'm at the Berkeley sprint now trying to fix a few doc bugs. Can anyone point me to instructions or a script for generating Sphinx HTML documentation from the RST docstrings? Please advise. Thank you! Damian ----------------------------------------------------- Damian Eads Ph.D. Student Jack Baskin School of Engineering, UCSC E2-489 1156 High Street Machine Learning Lab Santa Cruz, CA 95064 http://www.soe.ucsc.edu/~eads From pav at iki.fi Sun Nov 2 16:56:03 2008 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 2 Nov 2008 21:56:03 +0000 (UTC) Subject: [SciPy-dev] Generating SciPy Sphinx HTML References: <91b4b1ab0811021346q4c6429f7xf4eef688dc6b3b10@mail.gmail.com> Message-ID: Hi, Sun, 02 Nov 2008 13:46:32 -0800, Damian Eads wrote: > I'm at the Berkeley sprint now trying to fix a few doc bugs. 
Can anyone
> point me to instructions or a script for generating Sphinx HTML
> documentation from the RST docstrings?

Like this, for Scipy:

svn co http://svn.scipy.org/svn/scipy/scipy-docs/trunk scipy-docs
cd scipy-docs
export PYTHONPATH=/wherever/your/scipy/is
make html

and for Numpy:

svn co http://svn.scipy.org/svn/numpy/numpy-docs/trunk numpy-docs
cd numpy-docs
export PYTHONPATH=/wherever/your/numpy/is
make html

Note that you need the Sphinx 0.5.dev development version, and that you need to actually compile Numpy or Scipy first.

Sphinx 0.5:

svn co http://svn.python.org/projects/doctools/trunk sphinx-trunk
cd sphinx-trunk
python setup.py install

-- Pauli Virtanen

From damian.eads.lists at gmail.com Sun Nov 2 19:47:14 2008
From: damian.eads.lists at gmail.com (Damian Eads)
Date: Sun, 2 Nov 2008 16:47:14 -0800
Subject: [SciPy-dev] scipy.distance
In-Reply-To:
References:
Message-ID: <91b4b1ab0811021647g4272ffb3s514f04b237244df1@mail.gmail.com>

Thanks. I created a spatial and spatial/tests directory in the SciPy trunk. This should not pose an issue when merging the kd-tree code from the spatial branch into the trunk; just change the setup.py and __init__.py accordingly. After fixing a few minor import bugs caused by the move, I am pleased to report that all the cluster and distance tests pass.

I had a chance to peruse the code in the spatial branch, and it looks like there are a considerable number of tests for the kd-tree code. It seems mature enough to be incorporated into the 0.7 release as a technology preview. I think we should move that code over into the trunk too, since this will give us more feedback from early adopters and testers so we can better refine it. It should also be noted that we are changing the module index in the documentation to indicate those modules/packages that are part of the technology preview.

Would others care to comment?
Cheers, Damian On Sun, Nov 2, 2008 at 11:01 AM, Anne Archibald wrote: > 2008/11/2 Jarrod Millman : >> Hey, >> >> Damian Eads is joining the Berkeley sprint today and we were hoping to >> make a change that was discussed at the SciPy conference and on the >> list a few times. As part of his hierarchical clustering work, Damian >> wrote a module for a bunch of different distance functions: >> http://scipy.org/scipy/scipy/browser/trunk/scipy/cluster/distance.py >> >> This is obviously much more generic and useful than the domain of >> clustering. So we were proposing to make distance a top-level >> subpackage called: >> scipy.distance >> >> Before he made the change, I wanted to run it by the list one last >> time and see if anyone had any thoughts, comments, or suggestions. > > Very nice code! > > It seems closely related to scipy.spatial. It was agreed, fairly > reasonably, that the new code in scipy.spatial wouldn't go into 0.7, > but this could quite reasonably go into scipy.spatial now, I think. > Just where to put it isn't so clear - if scipy.spatial already existed > and were populated, I would be tempted to put this as > scipy.spatial.distance. > > Anne > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -- ----------------------------------------------------- Damian Eads Ph.D. 
Student
Jack Baskin School of Engineering, UCSC
E2-489 1156 High Street
Machine Learning Lab
Santa Cruz, CA 95064
http://www.soe.ucsc.edu/~eads

From millman at berkeley.edu Sun Nov 2 19:51:58 2008
From: millman at berkeley.edu (Jarrod Millman)
Date: Sun, 2 Nov 2008 16:51:58 -0800
Subject: [SciPy-dev] scipy.distance
In-Reply-To: <91b4b1ab0811021647g4272ffb3s514f04b237244df1@mail.gmail.com>
References: <91b4b1ab0811021647g4272ffb3s514f04b237244df1@mail.gmail.com>
Message-ID:

On Sun, Nov 2, 2008 at 4:47 PM, Damian Eads wrote:
> I had a chance to peruse the code in the spatial branch, and it looks
> like there are a considerable number of tests for the kd-tree code. It
> seems mature enough for it to be incorporated into the 0.7 release as
> a technology preview. I think we should move that code over into the
> trunk too, since this will give us more feedback from early adopters
> and testers so we can better refine. It should also be noted, we are
> changing the module index in the documentation to indicate those
> modules/packages that are part of the technology preview.

+1

I think it would be better to get the kd-tree code out in this release, but clearly mark it as a technology preview. This will give us the opportunity to get more feedback and testing without any major restrictions on improving the API. I know there were some concerns that including new code could slow down the release of 0.7, but I don't think that has to be the case. Since it is a technology preview and isn't changing current behavior, as long as it compiles and doesn't create a lot of noise in testing, I don't think it will be necessary to delay the release for it.

Thoughts?
-- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From robert.kern at gmail.com Sun Nov 2 19:54:49 2008 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 2 Nov 2008 18:54:49 -0600 Subject: [SciPy-dev] scipy.distance In-Reply-To: References: <91b4b1ab0811021647g4272ffb3s514f04b237244df1@mail.gmail.com> Message-ID: <3d375d730811021654p7cb95595had881236da653e99@mail.gmail.com> On Sun, Nov 2, 2008 at 18:51, Jarrod Millman wrote: > On Sun, Nov 2, 2008 at 4:47 PM, Damian Eads wrote: >> I had a chance to peruse the code in the spatial branch, and it looks >> like there are a considerable number of tests for the kd-tree code. It >> seems mature enough for it to be incorporated into the 0.7 release as >> a technology preview. I think we should move that code over into the >> trunk too, since this will give us more feedback from early adopters >> and testers so we can better refine. It should also be noted, we are >> changing the module index in the documentation to indicate those >> modules/packages that are part of the technology preview. > > +1 > I think that it would be better to get the kd-tree code out in this > release, but clearly mark it as a technology preview. This will give > us the opportunity to get more feedback and testing without any major > restrictions on improving the API. I know that there was some > concerns that including new code could slow down the release of 0.7, > but I don't think that has to be the case. Since it is a technology > preview and isn't changing current behavior; as long as it compiles > and doesn't create a lot of noise for testing, I don't think I will be > necessary to delay the release for it. > > Thoughts? Sure. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco

From eads at soe.ucsc.edu Sun Nov 2 20:15:52 2008
From: eads at soe.ucsc.edu (Damian Eads)
Date: Sun, 2 Nov 2008 17:15:52 -0800
Subject: [SciPy-dev] Generating SciPy Sphinx HTML
In-Reply-To:
References: <91b4b1ab0811021346q4c6429f7xf4eef688dc6b3b10@mail.gmail.com>
Message-ID: <91b4b1ab0811021715h6cf216cch972119296accf5a3@mail.gmail.com>

Hi Pauli,

When invoking make, I get an error because ext/autosummary_generate.py cannot be found. Is this auto-generated or did someone forget to check this in? Please advise.

[eads at localhost scipy-docs]$ make html
mkdir -p build
./ext/autosummary_generate.py source/*.rst \
    -p dump.xml -o source/generated
/bin/sh: ./ext/autosummary_generate.py: No such file or directory
make: *** [build/generate-stamp] Error 127
[eads at localhost scipy-docs]$ ls ext/.svn/
entries  format  prop-base/  props/  text-base/  tmp/
[eads at localhost scipy-docs]$ svn up

Thanks.

Damian

On Sun, Nov 2, 2008 at 1:56 PM, Pauli Virtanen wrote:
> Hi,
>
> Sun, 02 Nov 2008 13:46:32 -0800, Damian Eads wrote:
>> I'm at the Berkeley sprint now trying to fix a few doc bugs. Can anyone
>> point me to instructions or a script for generating Sphinx HTML
>> documentation from the RST docstrings?
>
> Like this, for Scipy:
>
> svn co http://svn.scipy.org/svn/scipy/scipy-docs/trunk scipy-docs
> cd scipy-docs
> export PYTHONPATH=/wherever/your/scipy/is
> make html
>
> and for Numpy,
>
> svn co http://svn.scipy.org/svn/numpy/numpy-docs/trunk numpy-docs
> cd numpy-docs
> export PYTHONPATH=/wherever/your/numpy/is
> make html
>
> Note that you need Sphinx 0.5.dev development version, and to actually
> compile Numpy or Scipy first.
> > Sphinx 0.5: > > svn co http://svn.python.org/projects/doctools/trunk sphinx-trunk > cd sphinx-trunk > python setup.py install > > -- > Pauli Virtanen > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -- ----------------------------------------------------- Damian Eads Ph.D. Student Jack Baskin School of Engineering, UCSC E2-489 1156 High Street Machine Learning Lab Santa Cruz, CA 95064 http://www.soe.ucsc.edu/~eads From pav at iki.fi Sun Nov 2 20:57:47 2008 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 3 Nov 2008 01:57:47 +0000 (UTC) Subject: [SciPy-dev] Generating SciPy Sphinx HTML References: <91b4b1ab0811021346q4c6429f7xf4eef688dc6b3b10@mail.gmail.com> <91b4b1ab0811021715h6cf216cch972119296accf5a3@mail.gmail.com> Message-ID: (Answered on IRC, but let's answer it also here.) Sun, 02 Nov 2008 17:15:52 -0800, Damian Eads wrote: > When invoking make, I get an error because ext/autosummary_generate.py > cannot be found. Is this auto-generated or did someone forget to check > this in? Please advise. > > [eads at localhost scipy-docs]$ make html mkdir -p build > ./ext/autosummary_generate.py source/*.rst \ > -p dump.xml -o source/generated > /bin/sh: ./ext/autosummary_generate.py: No such file or directory make: > *** [build/generate-stamp] The ext directory comes from http://sphinx.googlecode.com/svn/contrib/trunk/numpyext and the Makefile checks it out by itself. (Damian clarified that in this case the checkout was interrupted earlier, but it had already created the ext/ directory, and so left the ext/ checkout incomplete.) 
-- Pauli Virtanen From jh at physics.ucf.edu Sun Nov 2 22:07:55 2008 From: jh at physics.ucf.edu (jh at physics.ucf.edu) Date: Sun, 02 Nov 2008 22:07:55 -0500 Subject: [SciPy-dev] docs.scipy.org -- new site for the documentation marathon In-Reply-To: (scipy-dev-request@scipy.org) References: Message-ID: Pauli Virtanen wrote: > Mon, 27 Oct 2008 15:16:14 -0400, jh wrote: > [clip] > > For the top page layout, I propose (rationale below): > > > > (No left sidebar) > > ------------------------------------------------------------ > > DOCUMENTATION EDITOR > > Write, review, or proof the docs! > > > > MATURE DOCUMENTS > > > > Guide to Numpy > > PDF book by Travis Oliphant > > > > DOCUMENT-IN-PROGRESS SNAPSHOTS > > > > Numpy Reference Guide (as of yyyy-mm-dd) PDF zipped HTML > > refguide glossary shortcut > > > > Numpy User Guide (as of yyyy-mm-dd) > > PDF zipped HTML > > > > Scipy Reference Guide (as of yyyy-mm-dd) PDF zipped HTML > > > > SEE ALSO > > > > SciPy.org > > all things NumPy/SciPy (bug reports, downloads, conferences, etc.) > > > > SciPy.org/Documentation > > more documentation not (yet) in the right format for this site > > > > SciPy.org/Cookbook > > live, user-contributed examples and recipes for common tasks > > ------------------------------------------------------------ > I adapted this suggestion a bit -- it's now live on docs.scipy.org. The > "See also" part lives in the side bar, but otherwise it's almost as > above. The download links look definitely better on the front page. Looks great! The tag lines under the main links are all redundant except for the one on Travis's book. E.g.: Numpy Reference reference documentation for Numpy There's no new information in the extra line. Also, please add "Guide" to the title for this one. Would it be possible to take the latest stats graph and stick it next to the numpy reference manual on the front page? Just the graph for the last week and the key. Don't do this if it's real work, or if it slows loading much. 
We can add graphs for the others when they go under the wiki. Also, how difficult is it to put the current date of the works in progress? That would help those downloading it to see how recent it is and when to get an update. Again, don't do if it's a lot of work. > There's also an attempt to link the site visually to the parent scipy.org > site, but I'm no web designer, and it seems there are now too many snakes > on the page. Well, if it's not under that site, then maybe that's not a principal concern. If a web designer wants to step in, let 'em. Nicely done. --jh-- From jh at physics.ucf.edu Sun Nov 2 22:37:55 2008 From: jh at physics.ucf.edu (jh at physics.ucf.edu) Date: Sun, 02 Nov 2008 22:37:55 -0500 Subject: [SciPy-dev] Generating SciPy Sphinx HTML In-Reply-To: (scipy-dev-request@scipy.org) References: Message-ID: Sun, 02 Nov 2008 13:46:32 -0800, Damian Eads wrote: > I'm at the Berkeley sprint now trying to fix a few doc bugs. Can anyone > point me to instructions or a script for generating Sphinx HTML > documentation from the RST docstrings? [Pauli replied with instructions.] I just added Pauli's doc building instructions to the DevZone page. --jh-- From aarchiba at physics.mcgill.ca Mon Nov 3 00:03:22 2008 From: aarchiba at physics.mcgill.ca (Anne Archibald) Date: Mon, 3 Nov 2008 01:03:22 -0400 Subject: [SciPy-dev] scipy.distance In-Reply-To: References: <91b4b1ab0811021647g4272ffb3s514f04b237244df1@mail.gmail.com> Message-ID: 2008/11/2 Jarrod Millman : > On Sun, Nov 2, 2008 at 4:47 PM, Damian Eads wrote: >> I had a chance to peruse the code in the spatial branch, and it looks >> like there are a considerable number of tests for the kd-tree code. It >> seems mature enough for it to be incorporated into the 0.7 release as >> a technology preview. I think we should move that code over into the >> trunk too, since this will give us more feedback from early adopters >> and testers so we can better refine. 
It should also be noted, we are >> changing the module index in the documentation to indicate those >> modules/packages that are part of the technology preview. > > +1 > I think that it would be better to get the kd-tree code out in this > release, but clearly mark it as a technology preview. This will give > us the opportunity to get more feedback and testing without any major > restrictions on improving the API. I know that there was some > concerns that including new code could slow down the release of 0.7, > but I don't think that has to be the case. Since it is a technology > preview and isn't changing current behavior; as long as it compiles > and doesn't create a lot of noise for testing, I don't think I will be > necessary to delay the release for it. > > Thoughts? I think for computing nearest neighbors of points, the code is basically done. For other applications, well, it's not yet clear what other applications people want. One aspect I do expect will change is the API: there are many applications where a kd-tree is a useful data structure (beyond simple point nearest neighbor), but for most of them one needs to write a custom tree traversal and possibly annotate the tree. I'm not sure how to set the kd-tree code up so that it can easily (and efficiently) be used for that, but I think the only way to find out is for users to actually go ahead and try to do it. So I think as long as it's clear to users that the API may still change, it will actually help to put the code in 0.7. Anne From eads at soe.ucsc.edu Mon Nov 3 00:09:48 2008 From: eads at soe.ucsc.edu (Damian Eads) Date: Sun, 2 Nov 2008 21:09:48 -0800 Subject: [SciPy-dev] Generating SciPy Sphinx HTML In-Reply-To: References: <91b4b1ab0811021346q4c6429f7xf4eef688dc6b3b10@mail.gmail.com> <91b4b1ab0811021715h6cf216cch972119296accf5a3@mail.gmail.com> Message-ID: <91b4b1ab0811022109t5e53812fx17d905c6b72a27a2@mail.gmail.com> Very nice work. I got Sphinx to work for me. 
When info.py files get changed, the corresponding package-level documentation does not get updated. I have to remove the build/ directory for it to correctly update it. Removing build/pkg_name.html does not work. Damian On Sun, Nov 2, 2008 at 5:57 PM, Pauli Virtanen wrote: > (Answered on IRC, but let's answer it also here.) > > Sun, 02 Nov 2008 17:15:52 -0800, Damian Eads wrote: >> When invoking make, I get an error because ext/autosummary_generate.py >> cannot be found. Is this auto-generated or did someone forget to check >> this in? Please advise. >> >> [eads at localhost scipy-docs]$ make html mkdir -p build >> ./ext/autosummary_generate.py source/*.rst \ >> -p dump.xml -o source/generated >> /bin/sh: ./ext/autosummary_generate.py: No such file or directory make: >> *** [build/generate-stamp] > > The ext directory comes from > http://sphinx.googlecode.com/svn/contrib/trunk/numpyext > and the Makefile checks it out by itself. (Damian clarified that in this > case the checkout was interrupted earlier, but it had already created the > ext/ directory, and so left the ext/ checkout incomplete.) > > -- > Pauli Virtanen > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -- ----------------------------------------------------- Damian Eads Ph.D. Student Jack Baskin School of Engineering, UCSC E2-489 1156 High Street Machine Learning Lab Santa Cruz, CA 95064 http://www.soe.ucsc.edu/~eads From arokem at berkeley.edu Mon Nov 3 00:29:43 2008 From: arokem at berkeley.edu (Ariel Rokem) Date: Sun, 2 Nov 2008 21:29:43 -0800 Subject: [SciPy-dev] percentileofscore rewrite Message-ID: <1B00B5C5-1F35-4973-A3AE-05FF541099DF@berkeley.edu> Hello - following ticket #560 (http://scipy.org/scipy/scipy/ticket/560) I just rewrote the function scipy.stats.percentileofscore. 
The change can be reviewed here: http://codereview.appspot.com/7913 Note that the rewrite does something quite different than the previous version, including changes to the signature. Is this change actually an improvement? Does it make the function more/less useful? Cheers, Ariel ------------------------------------------------------ Ariel Rokem Helen Wills Neuroscience Institute 582 Minor Hall University of California, Berkeley Berkeley, CA 94720-2020 -- Tel: +1-510-6423134 -- http://argentum.ucbso.berkeley.edu/ariel ------------------------------------------------------ From eads at soe.ucsc.edu Mon Nov 3 00:47:52 2008 From: eads at soe.ucsc.edu (Damian Eads) Date: Sun, 2 Nov 2008 21:47:52 -0800 Subject: [SciPy-dev] scipy.distance In-Reply-To: References: <91b4b1ab0811021647g4272ffb3s514f04b237244df1@mail.gmail.com> Message-ID: <91b4b1ab0811022147j348c533fkea10e6d1aba82f8d@mail.gmail.com> Excellent Anne. Please incorporate the code into the spatial/ directory at your convenience. Cheers, Damian On Sun, Nov 2, 2008 at 9:03 PM, Anne Archibald wrote: > 2008/11/2 Jarrod Millman : >> On Sun, Nov 2, 2008 at 4:47 PM, Damian Eads wrote: >>> I had a chance to peruse the code in the spatial branch, and it looks >>> like there are a considerable number of tests for the kd-tree code. It >>> seems mature enough for it to be incorporated into the 0.7 release as >>> a technology preview. I think we should move that code over into the >>> trunk too, since this will give us more feedback from early adopters >>> and testers so we can better refine. It should also be noted, we are >>> changing the module index in the documentation to indicate those >>> modules/packages that are part of the technology preview. >> >> +1 >> I think that it would be better to get the kd-tree code out in this >> release, but clearly mark it as a technology preview. This will give >> us the opportunity to get more feedback and testing without any major >> restrictions on improving the API. 
I know that there was some >> concerns that including new code could slow down the release of 0.7, >> but I don't think that has to be the case. Since it is a technology >> preview and isn't changing current behavior; as long as it compiles >> and doesn't create a lot of noise for testing, I don't think I will be >> necessary to delay the release for it. >> >> Thoughts? > > I think for computing nearest neighbors of points, the code is > basically done. For other applications, well, it's not yet clear what > other applications people want. One aspect I do expect will change is > the API: there are many applications where a kd-tree is a useful data > structure (beyond simple point nearest neighbor), but for most of them > one needs to write a custom tree traversal and possibly annotate the > tree. I'm not sure how to set the kd-tree code up so that it can > easily (and efficiently) be used for that, but I think the only way to > find out is for users to actually go ahead and try to do it. > > So I think as long as it's clear to users that the API may still > change, it will actually help to put the code in 0.7. > > Anne > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -- ----------------------------------------------------- Damian Eads Ph.D. Student Jack Baskin School of Engineering, UCSC E2-489 1156 High Street Machine Learning Lab Santa Cruz, CA 95064 http://www.soe.ucsc.edu/~eads From eads at soe.ucsc.edu Mon Nov 3 01:06:02 2008 From: eads at soe.ucsc.edu (Damian Eads) Date: Sun, 2 Nov 2008 22:06:02 -0800 Subject: [SciPy-dev] Technology Previews and Sphinx Docstring Annotations Message-ID: <91b4b1ab0811022206y1fc9ca82pce5809342ae247cd@mail.gmail.com> Hi there, Technology preview code is new code incorporated into the trunk of SciPy, but may also appear into a future SciPy release at our option. 
Such code is considered production grade and well-tested, but it carries no guarantee of a stable API, so that it can still be improved based on community feedback. The documentation will explicitly indicate which packages, modules, and functions are part of the technology preview. Jarrod and I had a discussion on how to best annotate the RST of the docstrings. One idea we came up with is to add a ".. techpreview" keyword for use in an info.py file (packages), module-level docstring, or function docstring. References in the index and detail documentation are footnoted "Technology preview". We'd like to get others to comment on the best way to annotate. Pauli: how easy would it be to add an extra keyword for this purpose? Thanks. Cheers, Damian ----------------------------------------------------- Damian Eads Ph.D. Student Jack Baskin School of Engineering, UCSC E2-489 1156 High Street Machine Learning Lab Santa Cruz, CA 95064 http://www.soe.ucsc.edu/~eads From millman at berkeley.edu Mon Nov 3 01:17:22 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Sun, 2 Nov 2008 22:17:22 -0800 Subject: [SciPy-dev] License review / weave bsd-ification Message-ID: Hey, I have been working to ensure that all the code in scipy is correctly licensed. One of the main issues was the blitz code in weave. I contacted the authors and we were given permission to use the code with a BSD license: http://scipy.org/scipy/scipy/ticket/649 I just finished most of the changes in: http://scipy.org/scipy/scipy/changeset/4949 I found one file in blitz that I am not sure what to do about: http://scipy.org/scipy/scipy/browser/trunk/scipy/weave/blitz/blitz/rand-mt.h Any ideas? Thoughts? Suggestions?
Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From pgmdevlist at gmail.com Mon Nov 3 01:26:37 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Mon, 3 Nov 2008 01:26:37 -0500 Subject: [SciPy-dev] Timeseries Unusual Behaviour Message-ID: <200811030126.37485.pgmdevlist@gmail.com> On Wednesday 29 October 2008 12:41:08 David Huard wrote: > Pierre, > > A similar problem occurs with fromrecords. Dates are ordered but not > the data that is passed. David, It should be fixed in SVN (r1569). Thanks again for reporting. From charlesr.harris at gmail.com Mon Nov 3 02:02:29 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 3 Nov 2008 00:02:29 -0700 Subject: [SciPy-dev] scipy.distance In-Reply-To: References: <91b4b1ab0811021647g4272ffb3s514f04b237244df1@mail.gmail.com> Message-ID: On Sun, Nov 2, 2008 at 10:03 PM, Anne Archibald wrote: > 2008/11/2 Jarrod Millman : > > On Sun, Nov 2, 2008 at 4:47 PM, Damian Eads > wrote: > >> I had a chance to peruse the code in the spatial branch, and it looks > >> like there are a considerable number of tests for the kd-tree code. It > >> seems mature enough for it to be incorporated into the 0.7 release as > >> a technology preview. I think we should move that code over into the > >> trunk too, since this will give us more feedback from early adopters > >> and testers so we can better refine. It should also be noted, we are > >> changing the module index in the documentation to indicate those > >> modules/packages that are part of the technology preview. > > > > +1 > > I think that it would be better to get the kd-tree code out in this > > release, but clearly mark it as a technology preview. This will give > > us the opportunity to get more feedback and testing without any major > > restrictions on improving the API. 
I know that there was some > > concerns that including new code could slow down the release of 0.7, > > but I don't think that has to be the case. Since it is a technology > > preview and isn't changing current behavior; as long as it compiles > > and doesn't create a lot of noise for testing, I don't think I will be > > necessary to delay the release for it. > > > > Thoughts? > > I think for computing nearest neighbors of points, the code is > basically done. For other applications, well, it's not yet clear what > other applications people want. One aspect I do expect will change is > the API: there are many applications where a kd-tree is a useful data > structure (beyond simple point nearest neighbor), but for most of them > one needs to write a custom tree traversal and possibly annotate the > tree. I'm not sure how to set the kd-tree code up so that it can > easily (and efficiently) be used for that, but I think the only way to > find out is for users to actually go ahead and try to do it. I usually want a complete list of points in some neighborhood. I looked through your cython code and I think the loops can be improved a bit to make better use of low level C code. I'm working on some cover tree code with an eye to future inclusion unless someone beats me to it. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From eads at soe.ucsc.edu Mon Nov 3 02:06:12 2008 From: eads at soe.ucsc.edu (Damian Eads) Date: Sun, 2 Nov 2008 23:06:12 -0800 Subject: [SciPy-dev] scipy.distance In-Reply-To: References: <91b4b1ab0811021647g4272ffb3s514f04b237244df1@mail.gmail.com> Message-ID: <91b4b1ab0811022306q2f5bc431l519d288d8f1c9e78@mail.gmail.com> Voronoi membership images seem like a good feature to have in there as well. What do you think? 
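As a concrete note on the Voronoi membership idea: such an image just labels every pixel with the index of its nearest site, so a brute-force NumPy version is only a few lines (no kd-tree acceleration; the function name is made up for illustration):

```python
import numpy as np

def voronoi_membership(sites, shape):
    """Label each pixel of a `shape` image with the index of its
    nearest site (Euclidean) -- a brute-force sketch of the Voronoi
    membership image idea, with no tree acceleration."""
    ys, xs = np.indices(shape)
    pix = np.column_stack([ys.ravel(), xs.ravel()]).astype(float)
    # squared distances pixel-to-site, shape (npixels, nsites);
    # argmin over the site axis gives the membership labels
    d2 = ((pix[:, None, :] - sites[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1).reshape(shape)
```

This is O(pixels * sites), so a kd-tree query would be the natural replacement once the spatial code lands.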
Damian On Sun, Nov 2, 2008 at 11:02 PM, Charles R Harris wrote: > > > On Sun, Nov 2, 2008 at 10:03 PM, Anne Archibald > wrote: >> >> 2008/11/2 Jarrod Millman : >> > On Sun, Nov 2, 2008 at 4:47 PM, Damian Eads >> > wrote: >> >> I had a chance to peruse the code in the spatial branch, and it looks >> >> like there are a considerable number of tests for the kd-tree code. It >> >> seems mature enough for it to be incorporated into the 0.7 release as >> >> a technology preview. I think we should move that code over into the >> >> trunk too, since this will give us more feedback from early adopters >> >> and testers so we can better refine. It should also be noted, we are >> >> changing the module index in the documentation to indicate those >> >> modules/packages that are part of the technology preview. >> > >> > +1 >> > I think that it would be better to get the kd-tree code out in this >> > release, but clearly mark it as a technology preview. This will give >> > us the opportunity to get more feedback and testing without any major >> > restrictions on improving the API. I know that there was some >> > concerns that including new code could slow down the release of 0.7, >> > but I don't think that has to be the case. Since it is a technology >> > preview and isn't changing current behavior; as long as it compiles >> > and doesn't create a lot of noise for testing, I don't think I will be >> > necessary to delay the release for it. >> > >> > Thoughts? >> >> I think for computing nearest neighbors of points, the code is >> basically done. For other applications, well, it's not yet clear what >> other applications people want. One aspect I do expect will change is >> the API: there are many applications where a kd-tree is a useful data >> structure (beyond simple point nearest neighbor), but for most of them >> one needs to write a custom tree traversal and possibly annotate the >> tree. 
I'm not sure how to set the kd-tree code up so that it can >> easily (and efficiently) be used for that, but I think the only way to >> find out is for users to actually go ahead and try to do it. > > I usually want a complete list of points in some neighborhood. I looked > through your cython code and I think the loops can be improved a bit to make > better use of low level C code. > > I'm working on some cover tree code with an eye to future inclusion unless > someone beats me to it. > > Chuck > > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > > -- ----------------------------------------------------- Damian Eads Ph.D. Student Jack Baskin School of Engineering, UCSC E2-489 1156 High Street Machine Learning Lab Santa Cruz, CA 95064 http://www.soe.ucsc.edu/~eads From david at ar.media.kyoto-u.ac.jp Mon Nov 3 02:08:57 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 03 Nov 2008 16:08:57 +0900 Subject: [SciPy-dev] License review / weave bsd-ification In-Reply-To: References: Message-ID: <490EA389.4050004@ar.media.kyoto-u.ac.jp> Jarrod Millman wrote: > Hey, > > I have been working to ensure that all the code in scipy is correctly > licensed. One of the main issues was the blitz code in weave. I > contacted the authors and we were given permission to use the code > with a BSD license: http://scipy.org/scipy/scipy/ticket/649 > > I just finished most of the changes in: > http://scipy.org/scipy/scipy/changeset/4949 > http://scipy.org/scipy/scipy/changeset/4949 > > I found one file in blitz that I am not sure what to do about: > http://scipy.org/scipy/scipy/browser/trunk/scipy/weave/blitz/blitz/rand-mt.h > > I believe you have no choice but to get the agreement of every contributor to this code if you want to change it to BSD (which looks like at least Matsumoto-san and Allan Stokes). Is this file mandatory for blitz ? 
Also, blitz is only used in weave, right ? I remember some discussion about putting weave into a scikit, but I can't find it ATM; maybe I am confused with something else, though. cheers, David From eads at soe.ucsc.edu Mon Nov 3 02:40:21 2008 From: eads at soe.ucsc.edu (Damian Eads) Date: Mon, 3 Nov 2008 00:40:21 -0700 Subject: [SciPy-dev] License review / weave bsd-ification In-Reply-To: <490EA389.4050004@ar.media.kyoto-u.ac.jp> References: <490EA389.4050004@ar.media.kyoto-u.ac.jp> Message-ID: <91b4b1ab0811022340hfe53e0bo889ebeeb1f949342@mail.gmail.com> I'm in the car with Jarrod on the bay bridge. We were just discussing this issue a bit. I tend to agree with you that it belongs outside of SciPy, perhaps as a Scikit. Several tests have been failing for quite some time, which is problematic when we're trying to push towards a more regular release cycle. It seems very unlike the rest of SciPy as it deals with inline compilation of C code rather than generic scientific code. The licensing issues are also a bit onerous. My two cents, Damian On 11/3/08, David Cournapeau wrote: > Jarrod Millman wrote: >> Hey, >> >> I have been working to ensure that all the code in scipy is correctly >> licensed. One of the main issues was the blitz code in weave. I >> contacted the authors and we were given permission to use the code >> with a BSD license: http://scipy.org/scipy/scipy/ticket/649 >> >> I just finished most of the changes in: >> http://scipy.org/scipy/scipy/changeset/4949 >> http://scipy.org/scipy/scipy/changeset/4949 >> >> I found one file in blitz that I am not sure what to do about: >> >> http://scipy.org/scipy/scipy/browser/trunk/scipy/weave/blitz/blitz/rand-mt.h >> >> > > I believe you have no choice but to get the agreement of every > contributor to this code if you want to change it to BSD (which looks > like at least Matsumoto-san and Allan Stokes). > > Is this file mandatory for blitz ? Also, blitz is only used in weave, > right ? 
I remember some discussion about putting weave into a scikit, > but I can't find it ATM; maybe I am confused with something else, though. > > cheers, > > David > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -- Sent from my mobile device ----------------------------------------------------- Damian Eads Ph.D. Student Jack Baskin School of Engineering, UCSC E2-489 1156 High Street Machine Learning Lab Santa Cruz, CA 95064 http://www.soe.ucsc.edu/~eads From aarchiba at physics.mcgill.ca Mon Nov 3 03:36:17 2008 From: aarchiba at physics.mcgill.ca (Anne Archibald) Date: Mon, 3 Nov 2008 03:36:17 -0500 Subject: [SciPy-dev] scipy.distance In-Reply-To: References: <91b4b1ab0811021647g4272ffb3s514f04b237244df1@mail.gmail.com> Message-ID: 2008/11/3 Charles R Harris : > I usually want a complete list of points in some neighborhood. I looked > through your cython code and I think the loops can be improved a bit to make > better use of low level C code. The C implementation doesn't currently do this at all. It'd be a good addition, though I think you'd have to use object arrays of lists, which have always made me faintly queasy. You'd want a whole separate tree-traversal routine here, with short-circuit branches for both all-in-the-neighborhood and all-outside-the-neighborhood. Since kdtree construction is rather fast, does it perhaps make sense to write a two-tree version? > I'm working on some cover tree code with an eye to future inclusion unless > someone beats me to it. That would be great. I'm afraid the benefits of a cython implementation are going to be somewhat limited by the fact that users are going to want to supply their own distance functions. 
But it's probably worth allowing user distance functions to accept a short-circuiting distance argument: d(A,B,r) returns the distance if it's less than r, but if the true distance is greater than r it returns an arbitrary value rather than r. Even for Minkowski distances this can be a win in high dimension, and for more sophisticated distance measures it could save even more time. Anne From cournape at gmail.com Mon Nov 3 04:03:07 2008 From: cournape at gmail.com (David Cournapeau) Date: Mon, 3 Nov 2008 18:03:07 +0900 Subject: [SciPy-dev] Definition of gammaln(x) for negative x In-Reply-To: <15c068b00811011253mb9cc012k6bbac13f865b3f2e@mail.gmail.com> References: <15c068b00811010954w563b871dgf8670beaae53c108@mail.gmail.com> <490C8C4D.8010007@ar.media.kyoto-u.ac.jp> <15c068b00811011253mb9cc012k6bbac13f865b3f2e@mail.gmail.com> Message-ID: <5b8d13220811030103s215bf96cq740792417d12ff1a@mail.gmail.com> > > It might make sense to keep gammaln as it is, but to optionally return > the sign information of gamma(A) in some way. I don't know what's best. You're right that gammaln is often useful to control precision of gamma which quickly overflows in the linear domain, and it is documented as such (I should have looked at the docstring). OTOH, having a function which actually computes the log gamma would be useful, too. Changing the current gammaln to get the sign information is not easy either; we could add an argument sign to the function, in which case the function would return two values (the value + the sign), but I don't like it (I don't like having function returning different values depending on the argument). 
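A rough sketch of what a value-plus-sign variant could look like, kept separate from gammaln so its signature never changes: this uses the standard library's math.lgamma (which already returns log|Gamma(x)|) together with the fact that Gamma alternates sign between consecutive poles on the negative axis. The helper name is made up for illustration; nothing like it exists in scipy.special at this point.

```python
import math

def signed_lgamma(x):
    """Return (log(|Gamma(x)|), sign(Gamma(x))).

    Hypothetical helper sketching the value-plus-sign idea; it is
    not a scipy.special function.
    """
    if x > 0:
        return math.lgamma(x), 1.0
    if x == math.floor(x):
        raise ValueError("Gamma(x) has poles at non-positive integers")
    # On the negative axis the sign of Gamma alternates between
    # consecutive poles: Gamma < 0 on (-1, 0), Gamma > 0 on (-2, -1), ...
    sign = -1.0 if int(math.floor(x)) % 2 else 1.0
    return math.lgamma(x), sign
```

Always returning the pair keeps the number of return values fixed regardless of the argument, which sidesteps the objection above.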
cheers, David From aarchiba at physics.mcgill.ca Mon Nov 3 04:10:19 2008 From: aarchiba at physics.mcgill.ca (Anne Archibald) Date: Mon, 3 Nov 2008 04:10:19 -0500 Subject: [SciPy-dev] scipy.distance In-Reply-To: <91b4b1ab0811022147j348c533fkea10e6d1aba82f8d@mail.gmail.com> References: <91b4b1ab0811021647g4272ffb3s514f04b237244df1@mail.gmail.com> <91b4b1ab0811022147j348c533fkea10e6d1aba82f8d@mail.gmail.com> Message-ID: 2008/11/3 Damian Eads : > Excellent Anne. Please incorporate the code into the spatial/ > directory at your convenience. Done. It's kind of a mess just now; in particular the python implementation still uses its own version of the Minkowski distances. Also the docstrings are kind of chaotic, and I'm pretty sure the scons support is broken. Anne From stefan at sun.ac.za Mon Nov 3 04:12:09 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 3 Nov 2008 11:12:09 +0200 Subject: [SciPy-dev] scipy.distance In-Reply-To: <91b4b1ab0811022306q2f5bc431l519d288d8f1c9e78@mail.gmail.com> References: <91b4b1ab0811021647g4272ffb3s514f04b237244df1@mail.gmail.com> <91b4b1ab0811022306q2f5bc431l519d288d8f1c9e78@mail.gmail.com> Message-ID: <9457e7c80811030112y1b6a5641he79eccc9ba04408b@mail.gmail.com> Hi Damian Do you have plans to add distances between sets, such as the Hausdorff distance? Regards Stéfan 2008/11/3 Damian Eads : > Voronoi membership images seem like a good feature to have in there as > well. What do you think?
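For reference, the set-to-set distance Stéfan mentions is easy to prototype with plain NumPy broadcasting before any tree acceleration is considered. A brute-force O(n*m) sketch (function name hypothetical):

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between point sets A (n, d) and
    B (m, d): the largest distance from any point in one set to its
    nearest neighbour in the other set."""
    # all pairwise Euclidean distances, shape (n, m)
    d = np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1))
    # directed Hausdorff distances in both directions, then the max
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```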
From stefan at sun.ac.za Mon Nov 3 04:17:24 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 3 Nov 2008 11:17:24 +0200 Subject: [SciPy-dev] Definition of gammaln(x) for negative x In-Reply-To: <5b8d13220811030103s215bf96cq740792417d12ff1a@mail.gmail.com> References: <15c068b00811010954w563b871dgf8670beaae53c108@mail.gmail.com> <490C8C4D.8010007@ar.media.kyoto-u.ac.jp> <15c068b00811011253mb9cc012k6bbac13f865b3f2e@mail.gmail.com> <5b8d13220811030103s215bf96cq740792417d12ff1a@mail.gmail.com> Message-ID: <9457e7c80811030117u27fb0da1ic099b479c0564ae5@mail.gmail.com> 2008/11/3 David Cournapeau : > I don't like having function returning different values depending > on the argument. This is my favorite quote of the week! Cheers Stéfan From david at ar.media.kyoto-u.ac.jp Mon Nov 3 04:03:21 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 03 Nov 2008 18:03:21 +0900 Subject: [SciPy-dev] scipy.distance In-Reply-To: References: <91b4b1ab0811021647g4272ffb3s514f04b237244df1@mail.gmail.com> <91b4b1ab0811022147j348c533fkea10e6d1aba82f8d@mail.gmail.com> Message-ID: <490EBE59.5010607@ar.media.kyoto-u.ac.jp> Anne Archibald wrote: > Done. It's kind of a mess just now; in particular the python > implementation still uses its own version of the Minkowski distances. > Also the docstrings are kind of chaotic, and I'm pretty sure the scons > support is broken. > Hi Anne, Don't worry too much about scons builds, I generally update them regularly.
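To make Anne's short-circuiting d(A, B, r) proposal from earlier in the thread concrete, here is a minimal pure-Python sketch for the Minkowski case: once the partial sum already exceeds r**p, it stops summing and returns a value that is only guaranteed to be larger than r. The function name and interface are illustrative, not an existing scipy API:

```python
def minkowski_shortcircuit(a, b, p, r):
    """Minkowski p-distance between vectors a and b, with the
    short-circuit contract d(A, B, r): if the true distance exceeds
    r, any value greater than r may be returned (here, the root of
    the partial sum).  Sketch only, not a scipy function."""
    rp = r ** p
    acc = 0.0
    for ai, bi in zip(a, b):
        acc += abs(ai - bi) ** p
        if acc > rp:            # already further away than r: stop
            break
    return acc ** (1.0 / p)
```

In high dimension most candidate points fail the radius test early, which is where the savings come from.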
David From david at ar.media.kyoto-u.ac.jp Mon Nov 3 04:05:10 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 03 Nov 2008 18:05:10 +0900 Subject: [SciPy-dev] Definition of gammaln(x) for negative x In-Reply-To: <9457e7c80811030117u27fb0da1ic099b479c0564ae5@mail.gmail.com> References: <15c068b00811010954w563b871dgf8670beaae53c108@mail.gmail.com> <490C8C4D.8010007@ar.media.kyoto-u.ac.jp> <15c068b00811011253mb9cc012k6bbac13f865b3f2e@mail.gmail.com> <5b8d13220811030103s215bf96cq740792417d12ff1a@mail.gmail.com> <9457e7c80811030117u27fb0da1ic099b479c0564ae5@mail.gmail.com> Message-ID: <490EBEC6.7060601@ar.media.kyoto-u.ac.jp> Stéfan van der Walt wrote: > > This is my favorite quote of the week! > Oups. This should of course read returning different *number* of return values depending on the argument :) David From benny.malengier at gmail.com Mon Nov 3 04:22:50 2008 From: benny.malengier at gmail.com (Benny Malengier) Date: Mon, 3 Nov 2008 10:22:50 +0100 Subject: [SciPy-dev] License review / weave bsd-ification In-Reply-To: <91b4b1ab0811022340hfe53e0bo889ebeeb1f949342@mail.gmail.com> References: <490EA389.4050004@ar.media.kyoto-u.ac.jp> <91b4b1ab0811022340hfe53e0bo889ebeeb1f949342@mail.gmail.com> Message-ID: I think weave is very important to researchers as it offers a good reason to actually use scipy over writing C in the first place. I for one use it to speed up some important short functions. We know python is for many things not the best language in scientific computing, so these interfaces to allow for faster implementations are very important. Just my 2 cents Benny 2008/11/3 Damian Eads > I'm in the car with Jarrod on the bay bridge. We were just discussing > this issue a bit. I tend to agree with you that it belongs outside of > SciPy, perhaps as a Scikit. Several tests have been failing for quite > some time, which is problematic when we're trying to push towards a > more regular release cycle.
It seems very unlike the rest of SciPy as > it deals with inline compilation of C code rather than generic > scientific code. The licensing issues are also a bit onerous. > > My two cents, > > Damian > > On 11/3/08, David Cournapeau wrote: > > Jarrod Millman wrote: > >> Hey, > >> > >> I have been working to ensure that all the code in scipy is correctly > >> licensed. One of the main issues was the blitz code in weave. I > >> contacted the authors and we were given permission to use the code > >> with a BSD license: http://scipy.org/scipy/scipy/ticket/649 > >> > >> I just finished most of the changes in: > >> http://scipy.org/scipy/scipy/changeset/4949 > >> http://scipy.org/scipy/scipy/changeset/4949 > >> > >> I found one file in blitz that I am not sure what to do about: > >> > >> > http://scipy.org/scipy/scipy/browser/trunk/scipy/weave/blitz/blitz/rand-mt.h > >> > >> > > > > I believe you have no choice but to get the agreement of every > > contributor to this code if you want to change it to BSD (which looks > > like at least Matsumoto-san and Allan Stokes). > > > > Is this file mandatory for blitz ? Also, blitz is only used in weave, > > right ? I remember some discussion about putting weave into a scikit, > > but I can't find it ATM; maybe I am confused with something else, though. > > > > cheers, > > > > David > > _______________________________________________ > > Scipy-dev mailing list > > Scipy-dev at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > > -- > Sent from my mobile device > > ----------------------------------------------------- > Damian Eads Ph.D. 
Student > Jack Baskin School of Engineering, UCSC E2-489 > 1156 High Street Machine Learning Lab > Santa Cruz, CA 95064 http://www.soe.ucsc.edu/~eads > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.varoquaux at normalesup.org Mon Nov 3 04:30:36 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 3 Nov 2008 10:30:36 +0100 Subject: [SciPy-dev] License review / weave bsd-ification In-Reply-To: References: <490EA389.4050004@ar.media.kyoto-u.ac.jp> <91b4b1ab0811022340hfe53e0bo889ebeeb1f949342@mail.gmail.com> Message-ID: <20081103093036.GA538@phare.normalesup.org> On Mon, Nov 03, 2008 at 10:22:50AM +0100, Benny Malengier wrote: > I think weave is very important to researchers as it offers a qood reason > to actually use scipy over writing C in the first place. > I for one use it to speed up some important short functions. We know > python is for many things not the best language in scientific computing, > so these interfaces to allow for faster implementations are very > important. +1 on all this. Practicality beats purity. It is going to be pretty hard to explain to people that they have to install a bunch of different packages get a full working environment. Ga?l From david at ar.media.kyoto-u.ac.jp Mon Nov 3 04:23:59 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 03 Nov 2008 18:23:59 +0900 Subject: [SciPy-dev] License review / weave bsd-ification In-Reply-To: <20081103093036.GA538@phare.normalesup.org> References: <490EA389.4050004@ar.media.kyoto-u.ac.jp> <91b4b1ab0811022340hfe53e0bo889ebeeb1f949342@mail.gmail.com> <20081103093036.GA538@phare.normalesup.org> Message-ID: <490EC32F.40102@ar.media.kyoto-u.ac.jp> Gael Varoquaux wrote: > > +1 on all this. Practicality beats purity. license issues are very practical, I think. 
cheers, David From millman at berkeley.edu Mon Nov 3 04:51:12 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Mon, 3 Nov 2008 01:51:12 -0800 Subject: [SciPy-dev] License review / weave bsd-ification In-Reply-To: <20081103093036.GA538@phare.normalesup.org> References: <490EA389.4050004@ar.media.kyoto-u.ac.jp> <91b4b1ab0811022340hfe53e0bo889ebeeb1f949342@mail.gmail.com> <20081103093036.GA538@phare.normalesup.org> Message-ID: On Mon, Nov 3, 2008 at 1:30 AM, Gael Varoquaux wrote: > +1 on all this. Practicality beats purity. It is going to be pretty hard > to explain to people that they have to install a bunch of different > packages get a full working environment. Well my vote is to move weave out of scipy (and has been for some time), but it seems like there is some interest in keeping it as part of scipy. I am not going to have any more time to work on weave for the next week. But I am opposed to releasing another version of scipy with GPL code; so this needs to be fixed or removed before the 0.7 release. I also want to have all tests passing before we release 0.7, which means that all the weave tests need to pass as well. If you feel strongly about keeping weave in scipy, now would be a good time to step up and get weave in shape for the 0.7 release. There are a number of weave test failures that need to be fixed. Here is a ticket with some (there may be more, I am not sure): http://scipy.org/scipy/scipy/ticket/490 It looks like the license issue can be resolved. Newer versions of the Mersenne Twister random number generator have a more liberal license: http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/MT2002/elicense.html Is anyone willing to volunteer to get the newer version working? Is it worth considering moving this code out of weave and making the MT RNG more generally available?
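For reference, numpy.random is already built on a Mersenne Twister generator, so an MT stream is available independently of weave; a minimal sketch (assuming only numpy) showing that seeding makes the MT stream reproducible:

```python
import numpy as np

# numpy's RandomState is a Mersenne Twister generator; two instances
# constructed with the same seed produce identical streams.
a = np.random.RandomState(1234).random_sample(5)
b = np.random.RandomState(1234).random_sample(5)
```

Here `a` and `b` are equal element-for-element, and every draw lies in [0, 1).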
Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From benny.malengier at gmail.com Mon Nov 3 04:47:26 2008 From: benny.malengier at gmail.com (Benny Malengier) Date: Mon, 3 Nov 2008 10:47:26 +0100 Subject: [SciPy-dev] License review / weave bsd-ification In-Reply-To: <490EC32F.40102@ar.media.kyoto-u.ac.jp> References: <490EA389.4050004@ar.media.kyoto-u.ac.jp> <91b4b1ab0811022340hfe53e0bo889ebeeb1f949342@mail.gmail.com> <20081103093036.GA538@phare.normalesup.org> <490EC32F.40102@ar.media.kyoto-u.ac.jp> Message-ID: 2008/11/3 David Cournapeau > Gael Varoquaux wrote: > > > > +1 on all this. Practicality beats purity. > > license issues are very practical, I think. I agree with you. I just mean, if the license issue can be resolved, weave is a worthwhile part to have inside of scipy. The fact that it is different than other pieces is just an observation. In my way of using scipy, scipy is the part I need to add after numpy to start actual work of number crunching. Having to install all scikits too is no problem for me, but it is another barrier for people who just want to install some main components and get going. Benny -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Mon Nov 3 05:09:50 2008 From: cournape at gmail.com (David Cournapeau) Date: Mon, 3 Nov 2008 19:09:50 +0900 Subject: [SciPy-dev] License review / weave bsd-ification In-Reply-To: References: <490EA389.4050004@ar.media.kyoto-u.ac.jp> <91b4b1ab0811022340hfe53e0bo889ebeeb1f949342@mail.gmail.com> <20081103093036.GA538@phare.normalesup.org> <490EC32F.40102@ar.media.kyoto-u.ac.jp> Message-ID: <5b8d13220811030209q2e8b6da0v9aadfca08853deb1@mail.gmail.com> On Mon, Nov 3, 2008 at 6:47 PM, Benny Malengier wrote: > I agree with you. > I just mean, if the license issue can be resolved, weave is a worthwhile > part to have inside of scipy. Sure. 
I mentioned scikits because one of the reasons it exists in the first place is licensing issues. It looks like if someone is willing to port the newer random code to the blitz template mechanism, the license issue can be solved. cheers, David From millman at berkeley.edu Mon Nov 3 05:24:30 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Mon, 3 Nov 2008 02:24:30 -0800 Subject: [SciPy-dev] License review / weave bsd-ification In-Reply-To: References: <490EA389.4050004@ar.media.kyoto-u.ac.jp> <91b4b1ab0811022340hfe53e0bo889ebeeb1f949342@mail.gmail.com> <20081103093036.GA538@phare.normalesup.org> Message-ID: On Mon, Nov 3, 2008 at 1:51 AM, Jarrod Millman wrote: > It looks like the license issue can be resolved. Newer versions of > the Mersenne Twister random number generator have a more liberal > license: http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/MT2002/elicense.html > Is anyone willing to volunteer to get the newer version working? Is > it worth considering moving this code out of weave and making the MT > RNG more generally available?
Matthew Brett just pointed out that numpy.random uses MT via randomkit: http://projects.scipy.org/scipy/numpy/browser/trunk/numpy/random/mtrand/randomkit.c -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From david at ar.media.kyoto-u.ac.jp Mon Nov 3 05:20:53 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 03 Nov 2008 19:20:53 +0900 Subject: [SciPy-dev] License review / weave bsd-ification In-Reply-To: References: <490EA389.4050004@ar.media.kyoto-u.ac.jp> <91b4b1ab0811022340hfe53e0bo889ebeeb1f949342@mail.gmail.com> <20081103093036.GA538@phare.normalesup.org> Message-ID: <490ED085.20509@ar.media.kyoto-u.ac.jp> Jarrod Millman wrote: > > Matthew Brett just pointed out that numpy.random uses MT via randomkit: > http://projects.scipy.org/scipy/numpy/browser/trunk/numpy/random/mtrand/randomkit.c I took a look at the weave package, and it looks like MersenneTwister is never referenced outside rand-mt.h. Can we just remove this file altogether? I am not so familiar with weave, so I don't know if the MersenneTwister code can be used within weave, or if it was just there because of blitz, cheers, David From benny.malengier at gmail.com Mon Nov 3 06:07:48 2008 From: benny.malengier at gmail.com (Benny Malengier) Date: Mon, 3 Nov 2008 12:07:48 +0100 Subject: [SciPy-dev] License review / weave bsd-ification In-Reply-To: References: <490EA389.4050004@ar.media.kyoto-u.ac.jp> <91b4b1ab0811022340hfe53e0bo889ebeeb1f949342@mail.gmail.com> <20081103093036.GA538@phare.normalesup.org> Message-ID: 2008/11/3 Jarrod Millman > > There are a number of weave test failures that need to be fixed.
Here > is a ticket with some (there may be more, I am not sure): > http://scipy.org/scipy/scipy/ticket/490 > I wanted to look at this, but running here >>> import scipy.weave >>> scipy.weave.test() Ran 135 tests in 3.217s OK Level is deprecated in next release I see, adding it changes nothing. Am I doing something wrong? Benny -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Mon Nov 3 06:08:56 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 03 Nov 2008 20:08:56 +0900 Subject: [SciPy-dev] License review / weave bsd-ification In-Reply-To: References: <490EA389.4050004@ar.media.kyoto-u.ac.jp> <91b4b1ab0811022340hfe53e0bo889ebeeb1f949342@mail.gmail.com> <20081103093036.GA538@phare.normalesup.org> Message-ID: <490EDBC8.5000205@ar.media.kyoto-u.ac.jp> Benny Malengier wrote: > > > 2008/11/3 Jarrod Millman > > > > There are a number of weave test failures that need to be fixed. Here > is a ticket with some (there may be more, I am not sure): > http://scipy.org/scipy/scipy/ticket/490 > > > I wanted to look at this, but running here > > >>> import scipy.weave > >>> scipy.weave.test() > > Ran 135 tests in 3.217s > > OK > > > Level is deprecated in next release I see, adding it changes nothing. > Am I doing something wrong? scipy.weave.test(label='full') should run the whole test suite. 
David From benny.malengier at gmail.com Mon Nov 3 06:57:22 2008 From: benny.malengier at gmail.com (Benny Malengier) Date: Mon, 3 Nov 2008 12:57:22 +0100 Subject: [SciPy-dev] License review / weave bsd-ification In-Reply-To: <490EDBC8.5000205@ar.media.kyoto-u.ac.jp> References: <490EA389.4050004@ar.media.kyoto-u.ac.jp> <91b4b1ab0811022340hfe53e0bo889ebeeb1f949342@mail.gmail.com> <20081103093036.GA538@phare.normalesup.org> <490EDBC8.5000205@ar.media.kyoto-u.ac.jp> Message-ID: 2008/11/3 David Cournapeau > Benny Malengier wrote: > > > > > > 2008/11/3 Jarrod Millman > > > > > > > > There are a number of weave test failures that need to be fixed. > Here > > is a ticket with some (there may be more, I am not sure): > > http://scipy.org/scipy/scipy/ticket/490 > > > > > > I wanted to look at this, but running here > > > > >>> import scipy.weave > > >>> scipy.weave.test() > > > > Ran 135 tests in 3.217s > > > > OK > > > > > > Level is deprecated in next release I see, adding it changes nothing. > > Am I doing something wrong? > > scipy.weave.test(label='full') should run the whole test suite. > Thanks for this; the page http://www.scipy.org/scipy/numpy/wiki/TestingGuidelines is so long that I missed it while scanning. I know, I should sit down and take the time to read what I see :-( . The full option is all over the page... Benny -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant at enthought.com Mon Nov 3 09:00:42 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Mon, 03 Nov 2008 08:00:42 -0600 Subject: [SciPy-dev] License review / weave bsd-ification In-Reply-To: References: <490EA389.4050004@ar.media.kyoto-u.ac.jp> <91b4b1ab0811022340hfe53e0bo889ebeeb1f949342@mail.gmail.com> <20081103093036.GA538@phare.normalesup.org> Message-ID: <490F040A.6030409@enthought.com> Jarrod Millman wrote: > On Mon, Nov 3, 2008 at 1:30 AM, Gael Varoquaux > wrote: > >> +1 on all this. Practicality beats purity.
It is going to be pretty hard >> to explain to people that they have to install a bunch of different >> packages get a full working environment. >> > > Well my vote is to move weave out of scipy (and has been for some > time), but it seems like there is some interest in keeping it as part > of scipy. I am not going to have anymore time to work on weave for > the next week. But I am opposed to releasing another version of scipy > with GPL code; so this needs to be fixed or removed before the 0.7 > release. I also want to have all tests passing before we release 0.7, > which means that all the weave tests need to pass as well. > > If you feel strongly about keeping weave in scipy, now would be a good > time to step up and get weave in shape for the 0.7 release. > It's only the blitz portion of weave that has "GPLish" features. These can be disabled or moved to a scikit with no difficulty. Or, they could be moved back to an older version of blitz (I recall that an upgrade to the blitz code is what changed the license. I'm pretty sure that Eric did not put GPL code into weave to begin with). The original idea was to move weave into numpy not out into a scikit. So, I'm very much against moving weave outside. If weave were to move outside then I would support it if it were to move it into a weave / f2py / pypy hybrid package called "compile" or something like that --- where C-code, Fortran-code, and RPython code could be compiled into fast loops, dynamically loaded, and integrated into NumPy (in the spirit of Ilan's fast_vectorize). > It looks like the license issue can be resolved. Newer versions of > the Mersenne Twister random number generator have a more liberal > license: http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/MT2002/elicense.html > Is anyone willing to volunteer to get the newer version working? Is > it worth considering moving this code out of weave and making the MT > RNG more generally available? 
> > Isn't NumPy's random number generator the MT algorithm? -Travis From david.huard at gmail.com Mon Nov 3 09:55:06 2008 From: david.huard at gmail.com (David Huard) Date: Mon, 3 Nov 2008 10:55:06 -0400 Subject: [SciPy-dev] Timeseries Unusual Behaviour In-Reply-To: <200811030126.37485.pgmdevlist@gmail.com> References: <200811030126.37485.pgmdevlist@gmail.com> Message-ID: <91cf711d0811030655v1e53a58fy4822cba623198a2c@mail.gmail.com> On Mon, Nov 3, 2008 at 2:26 AM, Pierre GM wrote: > On Wednesday 29 October 2008 12:41:08 David Huard wrote: > > Pierre, > > > > A similar problem occurs with fromrecords. Dates are ordered but not > > the data that is passed. > > David, > It should be fixed in SVN (r1569). > Thanks again for reporting. And thanks again for fixing it. David > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gvrooyen at gmail.com Mon Nov 3 10:38:44 2008 From: gvrooyen at gmail.com (G-J van Rooyen) Date: Mon, 3 Nov 2008 17:38:44 +0200 Subject: [SciPy-dev] Definition of gammaln(x) for negative x In-Reply-To: <490EBEC6.7060601@ar.media.kyoto-u.ac.jp> References: <15c068b00811010954w563b871dgf8670beaae53c108@mail.gmail.com> <490C8C4D.8010007@ar.media.kyoto-u.ac.jp> <15c068b00811011253mb9cc012k6bbac13f865b3f2e@mail.gmail.com> <5b8d13220811030103s215bf96cq740792417d12ff1a@mail.gmail.com> <9457e7c80811030117u27fb0da1ic099b479c0564ae5@mail.gmail.com> <490EBEC6.7060601@ar.media.kyoto-u.ac.jp> Message-ID: <15c068b00811030738i7bc288f1vfbd4686e7246fe1a@mail.gmail.com> The C code in the cephes maths library (for which gammaln is just a wrapper) takes the C-ish approach of updating the sign in a global extern int sgngam. Presumably the user of the C function would check sgngam after a call to lgamma() if needed. 
I don't much like functions returning auxiliary results in globals, since they may have side effects (e.g. with threads) and break modularity... but I also agree with David that a function should return just one type of result. Alternatively, passing an optional second mutable argument where the sign can get stored is just clunky. Creating a second, differently-named function (e.g. gammaln2) that returns magnitude and sign information is probably just as bad as having varying return types. So which of the above is the lesser evil? Or is there an elegant solution? G-J 2008/11/3 David Cournapeau : > Stéfan van der Walt wrote: >> >> This is my favorite quote of the week! >> > > Oops. This should of course read returning different *number* of return > values depending on the argument :) > > David > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From chanley at stsci.edu Mon Nov 3 11:10:54 2008 From: chanley at stsci.edu (Christopher Hanley) Date: Mon, 03 Nov 2008 11:10:54 -0500 Subject: [SciPy-dev] scipy.stsci to scikits? Message-ID: <490F228E.9040505@stsci.edu> Would there be any opposition to moving the scipy.stsci package to scikits? As a reminder, scipy.stsci contains the convolve and image modules that were once distributed with numarray. Thoughts? Opinions? A secondary question. If I have commit privileges for scipy do I need a separate account for scikit access?
Thanks, Chris -- Christopher Hanley Senior Systems Software Engineer Space Telescope Science Institute 3700 San Martin Drive Baltimore MD, 21218 (410) 338-4338 From benny.malengier at gmail.com Mon Nov 3 11:16:03 2008 From: benny.malengier at gmail.com (Benny Malengier) Date: Mon, 3 Nov 2008 17:16:03 +0100 Subject: [SciPy-dev] License review / weave bsd-ification In-Reply-To: <490F040A.6030409@enthought.com> References: <490EA389.4050004@ar.media.kyoto-u.ac.jp> <91b4b1ab0811022340hfe53e0bo889ebeeb1f949342@mail.gmail.com> <20081103093036.GA538@phare.normalesup.org> <490F040A.6030409@enthought.com> Message-ID: Looking at the bug output of the tests of weave, the unknown bugs are in the support for wxwidgets types. If nobody supports that code (the error is there since 2004 ? ( http://mlblog.osdir.com/python.scientific.devel/2004-04/index.shtml )) then why not remove support for those types? I don't see the use of wxwidget string type and such support.The core python and numpy types are supported, so conversion to that is possible in python. Benny 2008/11/3 Travis E. Oliphant > Jarrod Millman wrote: > > On Mon, Nov 3, 2008 at 1:30 AM, Gael Varoquaux > > wrote: > > > >> +1 on all this. Practicality beats purity. It is going to be pretty hard > >> to explain to people that they have to install a bunch of different > >> packages get a full working environment. > >> > > > > Well my vote is to move weave out of scipy (and has been for some > > time), but it seems like there is some interest in keeping it as part > > of scipy. I am not going to have anymore time to work on weave for > > the next week. But I am opposed to releasing another version of scipy > > with GPL code; so this needs to be fixed or removed before the 0.7 > > release. I also want to have all tests passing before we release 0.7, > > which means that all the weave tests need to pass as well. 
> > > > If you feel strongly about keeping weave in scipy, now would be a good > > time to step up and get weave in shape for the 0.7 release. > > > It's only the blitz portion of weave that has "GPLish" features. These > can be disabled or moved to a scikit with no difficulty. Or, they could > be moved back to an older version of blitz (I recall that an upgrade to > the blitz code is what changed the license. I'm pretty sure that Eric > did not put GPL code into weave to begin with). > > The original idea was to move weave into numpy not out into a scikit. > So, I'm very much against moving weave outside. If weave were to move > outside then I would support it if it were to move it into a weave / > f2py / pypy hybrid package called "compile" or something like that --- > where C-code, Fortran-code, and RPython code could be compiled into fast > loops, dynamically loaded, and integrated into NumPy (in the spirit of > Ilan's fast_vectorize). > > > It looks like the license issue can be resolved. Newer versions of > > the Mersenne Twister random number generator have a more liberal > > license: > http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/MT2002/elicense.html > > Is anyone willing to volunteer to get the newer version working? Is > > it worth considering moving this code out of weave and making the MT > > RNG more generally available? > > > > > Isn't NumPy's random number generator the MT algorithm? > > -Travis > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From robert.kern at gmail.com Mon Nov 3 12:07:58 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 3 Nov 2008 11:07:58 -0600 Subject: [SciPy-dev] License review / weave bsd-ification In-Reply-To: References: Message-ID: <3d375d730811030907j408b7009w9520db3807520192@mail.gmail.com> On Mon, Nov 3, 2008 at 00:17, Jarrod Millman wrote: > Hey, > > I have been working to ensure that all the code in scipy is correctly > licensed. One of the main issues was the blitz code in weave. I > contacted the authors and we were given permission to use the code > with a BSD license: http://scipy.org/scipy/scipy/ticket/649 > > I just finished most of the changes in: > http://scipy.org/scipy/scipy/changeset/4949 > http://scipy.org/scipy/scipy/changeset/4949 > > I found one file in blitz that I am not sure what to do about: > http://scipy.org/scipy/scipy/browser/trunk/scipy/weave/blitz/blitz/rand-mt.h > > Any ideas? thought? suggestions? To clarify a few things that came up in this thread: * The license in question is the LGPL, not the GPL. * The upstream source of rand-mt.h (the original Mersenne Twister C source) changed from the LGPL to a BSD-style license, so we're fine for that part. * The C++ modifications by Allan Stokes still require his permission to change the license. * But as David points out, it's not actually used anywhere in weave. We can just remove it. * numpy.random is a Mersenne Twister implementation. * scikits is not a project like scipy. It's just a namespace for packages. You install each individual scikits package independently. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From eads at soe.ucsc.edu Mon Nov 3 12:08:27 2008 From: eads at soe.ucsc.edu (Damian Eads) Date: Mon, 3 Nov 2008 09:08:27 -0800 Subject: [SciPy-dev] scipy.distance In-Reply-To: <9457e7c80811030112y1b6a5641he79eccc9ba04408b@mail.gmail.com> References: <91b4b1ab0811021647g4272ffb3s514f04b237244df1@mail.gmail.com> <91b4b1ab0811022306q2f5bc431l519d288d8f1c9e78@mail.gmail.com> <9457e7c80811030112y1b6a5641he79eccc9ba04408b@mail.gmail.com> Message-ID: <91b4b1ab0811030908q1c64ed33w6a166c9308fe5c01@mail.gmail.com> Hi Stefan, Good question. I'm on the train now. I'll look into this distance when I get to my office. Damian On 11/3/08, Stéfan van der Walt wrote: > Hi Damian > > Do you have plans to add distances between sets, such as the Hausdorff > distance? > > Regards > Stéfan > > 2008/11/3 Damian Eads : >> Voronoi membership images seem like a good feature to have in there as >> well. What do you think? > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -- Sent from my mobile device ----------------------------------------------------- Damian Eads Ph.D. Student Jack Baskin School of Engineering, UCSC E2-489 1156 High Street Machine Learning Lab Santa Cruz, CA 95064 http://www.soe.ucsc.edu/~eads From millman at berkeley.edu Mon Nov 3 12:10:57 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Mon, 3 Nov 2008 09:10:57 -0800 Subject: [SciPy-dev] License review / weave bsd-ification In-Reply-To: <3d375d730811030907j408b7009w9520db3807520192@mail.gmail.com> References: <3d375d730811030907j408b7009w9520db3807520192@mail.gmail.com> Message-ID: On Mon, Nov 3, 2008 at 9:07 AM, Robert Kern wrote: > * But as David points out, it's not actually used anywhere in weave. > We can just remove it. Is there any objection to removing rand-mt? If not, I will be happy to take care of it later today.
-- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From robert.kern at gmail.com Mon Nov 3 12:12:49 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 3 Nov 2008 11:12:49 -0600 Subject: [SciPy-dev] Definition of gammaln(x) for negative x In-Reply-To: <15c068b00811030738i7bc288f1vfbd4686e7246fe1a@mail.gmail.com> References: <15c068b00811010954w563b871dgf8670beaae53c108@mail.gmail.com> <490C8C4D.8010007@ar.media.kyoto-u.ac.jp> <15c068b00811011253mb9cc012k6bbac13f865b3f2e@mail.gmail.com> <5b8d13220811030103s215bf96cq740792417d12ff1a@mail.gmail.com> <9457e7c80811030117u27fb0da1ic099b479c0564ae5@mail.gmail.com> <490EBEC6.7060601@ar.media.kyoto-u.ac.jp> <15c068b00811030738i7bc288f1vfbd4686e7246fe1a@mail.gmail.com> Message-ID: <3d375d730811030912k7f8a2ab6s97b2cf2661066a3b@mail.gmail.com> On Mon, Nov 3, 2008 at 09:38, G-J van Rooyen wrote: > The C code in the cephes maths library (for which gammaln is just a > wrapper) takes the C-ish approach of updating the sign in a global > extern int sgngam. Presumably the user of the C function would check > sgngam after a call to lgamma() if needed. > > I don't much like functions returning auxiliary results in globals, > since they may have side effects (e.g. with threads) and break > modularity... but I also agree with David that a function should > return just one type of result. Alternatively, passing an optional > second mutable argument where the sign can get stored is just clunky. > > Creating a second, differently-named function (e.g. gammaln2) that > returns magnitude and sign information is probably just as bad as > having varying return types. > > So which of the above is the lesser evil? Or is there an elegant solution? The latter. By far. I'm not sure why you think it would be just as bad as having varying return types. 
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cournape at gmail.com Mon Nov 3 12:14:44 2008 From: cournape at gmail.com (David Cournapeau) Date: Tue, 4 Nov 2008 02:14:44 +0900 Subject: [SciPy-dev] #331 and updating sigtoolmodules.c Message-ID: <5b8d13220811030914v32279351td4d418d5f541683a@mail.gmail.com> Hi, I wanted to fix #331, and after having scratched my head, I think the code is too clever for me. I think the code could be updated and simplified quite a bit by using the array iterator; it would mean that we would give up on the independence wrt the Python C API (Eric mentioned it in the commentary, if I believe svn blame and the comments). I just want to make sure it is not a concern anymore before rewriting the filter functions with the numpy array iterator, cheers, David From charlesr.harris at gmail.com Mon Nov 3 12:30:57 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 3 Nov 2008 10:30:57 -0700 Subject: [SciPy-dev] scipy.distance In-Reply-To: References: <91b4b1ab0811021647g4272ffb3s514f04b237244df1@mail.gmail.com> Message-ID: On Mon, Nov 3, 2008 at 1:36 AM, Anne Archibald wrote: > 2008/11/3 Charles R Harris : > > > I usually want a complete list of points in some neighborhood. I looked > > through your cython code and I think the loops can be improved a bit to make > > better use of low level C code. > > The C implementation doesn't currently do this at all. It'd be a good > addition, though I think you'd have to use object arrays of lists, > which have always made me faintly queasy. You'd want a whole separate > tree-traversal routine here, with short-circuit branches for both > all-in-the-neighborhood and all-outside-the-neighborhood. Since kdtree > construction is rather fast, does it perhaps make sense to write a > two-tree version?
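The query Anne describes — returning every point inside a given neighborhood — can be pinned down with a brute-force reference in plain numpy (the name ball_query is illustrative; no tree involved). A kd-tree version must return the same index set while pruning subtrees that lie entirely inside or outside the ball:

```python
import numpy as np

def ball_query(points, center, r):
    # Indices of all points within Euclidean distance r of center.
    # Brute force: O(n) per query. A kd-tree implementation short-circuits
    # whole subtrees that are entirely inside or outside the ball.
    d2 = ((points - center) ** 2).sum(axis=1)
    return np.nonzero(d2 <= r * r)[0]
```

This also makes the return-type question concrete: one index array per query point, hence the object-arrays-of-lists issue for vectorized queries.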
> Looks like the BioPython folks have been busy. The KDTree code has been updated to use numpy and converted to C from C++. You can view the code in CVS here . Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From metaperl at gmail.com Mon Nov 3 11:47:48 2008 From: metaperl at gmail.com (Terrence Brannon) Date: Mon, 3 Nov 2008 16:47:48 +0000 (UTC) Subject: [SciPy-dev] broken link Message-ID: At this page: http://scipy.org/Installing_SciPy/Windows The link in this sentence: """If you do not have Python installed on your system you can install the Enthought Python distribution which comes with Scipy and many other useful scientific tools""" points to a non-existent place (http://code.enthought.com/enthon/) From gvrooyen at gmail.com Mon Nov 3 12:53:33 2008 From: gvrooyen at gmail.com (G-J van Rooyen) Date: Mon, 3 Nov 2008 19:53:33 +0200 Subject: [SciPy-dev] Definition of gammaln(x) for negative x In-Reply-To: <3d375d730811030912k7f8a2ab6s97b2cf2661066a3b@mail.gmail.com> References: <15c068b00811010954w563b871dgf8670beaae53c108@mail.gmail.com> <490C8C4D.8010007@ar.media.kyoto-u.ac.jp> <15c068b00811011253mb9cc012k6bbac13f865b3f2e@mail.gmail.com> <5b8d13220811030103s215bf96cq740792417d12ff1a@mail.gmail.com> <9457e7c80811030117u27fb0da1ic099b479c0564ae5@mail.gmail.com> <490EBEC6.7060601@ar.media.kyoto-u.ac.jp> <15c068b00811030738i7bc288f1vfbd4686e7246fe1a@mail.gmail.com> <3d375d730811030912k7f8a2ab6s97b2cf2661066a3b@mail.gmail.com> Message-ID: <15c068b00811030953g6b24504byc861e38583ecd109@mail.gmail.com> >> Creating a second, differently-named function (e.g. gammaln2) that >> returns magnitude and sign information is probably just as bad as >> having varying return types. >> >> So which of the above is the lesser evil? Or is there an elegant solution? > > The latter. By far. I'm not sure why you think it would be just as bad > as having varying return types. 
It just seems to me that Y = gammaln(X) and (Y, sign) = gammaln(X,'sign') are conceptually identical to Y = gammaln(X) and (Y,sign) = gammaln_sign(X) The latter just absorbs the mode argument into the alternative function name. But since they're equivalent to me, and you have a clear preference for the latter, I'll go with the second option :) Thanks for the feedback Gert-Jan From robert.kern at gmail.com Mon Nov 3 13:07:10 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 3 Nov 2008 12:07:10 -0600 Subject: [SciPy-dev] Definition of gammaln(x) for negative x In-Reply-To: <15c068b00811030953g6b24504byc861e38583ecd109@mail.gmail.com> References: <15c068b00811010954w563b871dgf8670beaae53c108@mail.gmail.com> <490C8C4D.8010007@ar.media.kyoto-u.ac.jp> <15c068b00811011253mb9cc012k6bbac13f865b3f2e@mail.gmail.com> <5b8d13220811030103s215bf96cq740792417d12ff1a@mail.gmail.com> <9457e7c80811030117u27fb0da1ic099b479c0564ae5@mail.gmail.com> <490EBEC6.7060601@ar.media.kyoto-u.ac.jp> <15c068b00811030738i7bc288f1vfbd4686e7246fe1a@mail.gmail.com> <3d375d730811030912k7f8a2ab6s97b2cf2661066a3b@mail.gmail.com> <15c068b00811030953g6b24504byc861e38583ecd109@mail.gmail.com> Message-ID: <3d375d730811031007t11a3274asbf0a9b28ee59de88@mail.gmail.com> On Mon, Nov 3, 2008 at 11:53, G-J van Rooyen wrote: >>> Creating a second, differently-named function (e.g. gammaln2) that >>> returns magnitude and sign information is probably just as bad as >>> having varying return types. >>> >>> So which of the above is the lesser evil? Or is there an elegant solution? >> >> The latter. By far. I'm not sure why you think it would be just as bad >> as having varying return types. > > It just seems to me that > > Y = gammaln(X) > and > (Y, sign) = gammaln(X,'sign') > > are conceptually identical to > > Y = gammaln(X) > and > (Y,sign) = gammaln_sign(X) > > The latter just absorbs the mode argument into the alternative > function name. 
But since they're equivalent to me, and you have a > clear preference for the latter, I'll go with the second option :) Well, there are a couple of issues. 1) These are ufuncs, not regular functions, so they have limited options for signatures. Basically, there is no room for options like 'sign'. 2) There is a reasonable rule elucidated by Guido van Rossum: If you are using a boolean argument, and you always expect to use literals as the argument, just make different functions instead, especially if it changes the number of returned values. The boolean switch (and the changing number of return values) basically makes two different functions. The usual mechanism for the programmer to decide between two different functions is to use two different names. Now, a case can be made as an exception to the latter rule for things like the full_output= argument to the scipy.optimize functions. There are a number of these functions, and the full_output= behavior is actually a fairly small bit of the functionality for each of the functions. In this case, though, it's just one function, and the difference in behavior is substantial. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
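A rough sketch of the two-function approach for this case, in pure Python for illustration (the name gammaln_sign and the floor-based sign rule are assumptions for the sketch, not SciPy's actual API; math.lgamma returns log|Gamma(x)|):

```python
import math

def gammaln_sign(x):
    """Return (log|Gamma(x)|, sign of Gamma(x)) for non-integer x.

    Hypothetical companion to gammaln, named for illustration only.
    """
    if x > 0:
        return math.lgamma(x), 1
    # For negative non-integer x, Gamma(x) alternates sign on each unit
    # interval: negative on (-1, 0), positive on (-2, -1), and so on,
    # so the sign is +1 exactly when floor(x) is even.
    sign = 1 if math.floor(x) % 2 == 0 else -1
    return math.lgamma(x), sign
```

Each function then has a fixed return signature: gammaln keeps returning a single value, and the sign-aware variant always returns a pair.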
-- Umberto Eco From gvrooyen at gmail.com Mon Nov 3 13:14:13 2008 From: gvrooyen at gmail.com (G-J van Rooyen) Date: Mon, 3 Nov 2008 20:14:13 +0200 Subject: [SciPy-dev] Definition of gammaln(x) for negative x In-Reply-To: <3d375d730811031007t11a3274asbf0a9b28ee59de88@mail.gmail.com> References: <15c068b00811010954w563b871dgf8670beaae53c108@mail.gmail.com> <490C8C4D.8010007@ar.media.kyoto-u.ac.jp> <15c068b00811011253mb9cc012k6bbac13f865b3f2e@mail.gmail.com> <5b8d13220811030103s215bf96cq740792417d12ff1a@mail.gmail.com> <9457e7c80811030117u27fb0da1ic099b479c0564ae5@mail.gmail.com> <490EBEC6.7060601@ar.media.kyoto-u.ac.jp> <15c068b00811030738i7bc288f1vfbd4686e7246fe1a@mail.gmail.com> <3d375d730811030912k7f8a2ab6s97b2cf2661066a3b@mail.gmail.com> <15c068b00811030953g6b24504byc861e38583ecd109@mail.gmail.com> <3d375d730811031007t11a3274asbf0a9b28ee59de88@mail.gmail.com> Message-ID: <15c068b00811031014h4308fcbfrce2a258073f60ca8@mail.gmail.com> OK, that sounds very reasonable. I don't know ufuncs well yet (was hoping to cut my teeth on them with this ticket), so I wasn't aware of the first consideration. And Von Rossom's rule does make a lot of sense. Thanks, Robert G-J From pav at iki.fi Mon Nov 3 13:22:12 2008 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 3 Nov 2008 18:22:12 +0000 (UTC) Subject: [SciPy-dev] Generating SciPy Sphinx HTML References: <91b4b1ab0811021346q4c6429f7xf4eef688dc6b3b10@mail.gmail.com> <91b4b1ab0811021715h6cf216cch972119296accf5a3@mail.gmail.com> <91b4b1ab0811022109t5e53812fx17d905c6b72a27a2@mail.gmail.com> Message-ID: Sun, 02 Nov 2008 21:09:48 -0800, Damian Eads wrote: > Very nice work. I got Sphinx to work for me. > > When info.py files get changed, the corresponding package-level > documentation does not get updated. I have to remove the build/ > directory for it to correctly update it. Removing build/pkg_name.html > does not work. 
Yes, sphinx.ext.autodoc does not track changes in the docstring -- but maybe this could be added to the Sphinx issues list as a feature request? Sphinx monitors the timestamps of the *.rst files, so you can touch the files that you think should be regenerated. -- Pauli Virtanen From aisaac at american.edu Mon Nov 3 13:38:47 2008 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 03 Nov 2008 13:38:47 -0500 Subject: [SciPy-dev] scipy.stsci to scikits? In-Reply-To: <490F228E.9040505@stsci.edu> References: <490F228E.9040505@stsci.edu> Message-ID: <490F4537.6010503@american.edu> Christopher Hanley wrote: > Would there be any opposition to moving the scipy.stsci package to > scikits? As a reminder, scipy.stsci contains the convolve and image > modules that were once distributed with numarray. Thoughts? Opinions? I would be affected if ndimage were moved out of SciPy. It is not clear to me that you have this in mind when you mention the "image modules"? If anything, I'd like to see ndimage move into NumPy. Certainly not "out" into a scikit! Alan Isaac From chanley at stsci.edu Mon Nov 3 13:46:46 2008 From: chanley at stsci.edu (Christopher Hanley) Date: Mon, 03 Nov 2008 13:46:46 -0500 Subject: [SciPy-dev] scipy.stsci to scikits? In-Reply-To: <490F4537.6010503@american.edu> References: <490F228E.9040505@stsci.edu> <490F4537.6010503@american.edu> Message-ID: <490F4716.2060907@stsci.edu> Alan G Isaac wrote: > Christopher Hanley wrote: >> Would there be any opposition to moving the scipy.stsci package to >> scikits? As a reminder, scipy.stsci contains the convolve and image >> modules that were once distributed with numarray. Thoughts? Opinions? > > I would be affected if ndimage were moved out of SciPy. > It is not clear to me that you have this in mind when you > mention the "image modules"? If anything, I'd like to see > ndimage move into NumPy. Certainly not "out" into a scikit!
> > Alan Isaac > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev Moving ndimage was not part of my plans. Only what is currently under the scipy.stsci namespace. There are two packages there right now. One is called "convolve", the other is "image". Both are really geared toward 2D arrays. The ndimage package is under a completely different namespace. Chris -- Christopher Hanley Senior Systems Software Engineer Space Telescope Science Institute 3700 San Martin Drive Baltimore MD, 21218 (410) 338-4338 From eads at soe.ucsc.edu Mon Nov 3 14:14:34 2008 From: eads at soe.ucsc.edu (Damian Eads) Date: Mon, 3 Nov 2008 11:14:34 -0800 Subject: [SciPy-dev] scipy.stsci to scikits? In-Reply-To: <490F4716.2060907@stsci.edu> References: <490F228E.9040505@stsci.edu> <490F4537.6010503@american.edu> <490F4716.2060907@stsci.edu> Message-ID: <91b4b1ab0811031114g57068705qed04ee7a2bb29d3b@mail.gmail.com> +1 keep ndimage in SciPy +1 please clarify what you mean by image modules. Do you mean to be inclusive of ndimage? Image processing seems pretty generic and fundamental. It's useful throughout many fields including pathology, medical imaging, astronomy, remote sensing, computer vision, multimedia, environmental science, and meteorology. When I first started using SciPy, ndimage was the reason I downloaded it. I'm sure quite a large number of scientists depend on it. I guess before I argue any further, I'd ask you to clarify what you mean. Damian On 11/3/08, Christopher Hanley wrote: > Alan G Isaac wrote: >> Christopher Hanley wrote: >>> Would there be any opposition to moving the scipy.stsci package to >>> scikits? As a reminder, scipy.stsci contains the convolve and image >>> modules that were once distributed with numarray. Thoughts? Opinions? >> >> I would be affected if ndimage were moved out of SciPy.
>> It is not clear to me that you have this in mind when you >> mention the "image modules"? If anything, I'd like to see >> ndimage move into NumPy. Certainly not "out" into a scikit! >> >> Alan Isaac > > Moving ndimage was not part of my plans. Only what is currently under > the scipy.stsci namespace. There are two packages there right now. One > is called "convolve", the other is "image". Both are really geared > toward 2D arrays. > > The ndimage package is under a completely different namespace. > > Chris > > > -- > Christopher Hanley > Senior Systems Software Engineer > Space Telescope Science Institute > 3700 San Martin Drive > Baltimore MD, 21218 > (410) 338-4338 > -- Sent from my mobile device ----------------------------------------------------- Damian Eads Ph.D. Student Jack Baskin School of Engineering, UCSC E2-489 1156 High Street Machine Learning Lab Santa Cruz, CA 95064 http://www.soe.ucsc.edu/~eads From chanley at stsci.edu Mon Nov 3 14:37:11 2008 From: chanley at stsci.edu (Christopher Hanley) Date: Mon, 03 Nov 2008 14:37:11 -0500 Subject: [SciPy-dev] scipy.stsci to scikits? In-Reply-To: <91b4b1ab0811031114g57068705qed04ee7a2bb29d3b@mail.gmail.com> References: <490F228E.9040505@stsci.edu> <490F4537.6010503@american.edu> <490F4716.2060907@stsci.edu> <91b4b1ab0811031114g57068705qed04ee7a2bb29d3b@mail.gmail.com> Message-ID: <490F52E7.4040205@stsci.edu> Damian Eads wrote: > +1 keep ndimage in SciPy I have made no claim that ndimage should be moved. The ndimage code is separate from the code I am discussing. > +1 please clarify what you mean by image modules. Do you mean to be > inclusive of ndimage?
I am referring to the code in the directory: scipy/scipy/stsci/ These are modules that were once distributed as part of numarray because we needed them for our STScI code. There is code that we and other astronomers still use that depend on these modules. ndimage does not depend on these modules in any way. > > Image processing seems pretty generic and fundamental. It's useful > throughout many fields including pathology, medical imaging, > astronomy, remote sensing, computer vision, multimedia, enviromental > science, and meteorology. When I first started using SciPy, ndimage > was the reason I downloaded it. I'm sure quite a large number of > scientists depend on it. I guess before I argue any further, I'd ask > you to clarify what you mean. > > Damian > I am all for generic image processing and would encourage as many people as possible to use and support ndimage. That being said, these other modules still need to be kept around for compatibility. I am arguing that they be moved to a scikit because they do not support N-dimensional array processing. I see scipy as a general purpose scientific computing environment and scikits as more specialized packages. If you need more specialized array processing, grab our scikit. Otherwise just stick with scipy and don't worry about having to install yet-another-package that you will never see or use. This is my current thought process. Keep in mind that I am not saying anything should or shouldn't be done with ndimage. I am only addressing what is currently in the scipy.stsci namespace. Thanks, Chris -- Christopher Hanley Senior Systems Software Engineer Space Telescope Science Institute 3700 San Martin Drive Baltimore MD, 21218 (410) 338-4338 From oliphant at enthought.com Mon Nov 3 14:41:35 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Mon, 03 Nov 2008 13:41:35 -0600 Subject: [SciPy-dev] scipy.stsci to scikits? 
In-Reply-To: <490F228E.9040505@stsci.edu> References: <490F228E.9040505@stsci.edu> Message-ID: <490F53EF.5050404@enthought.com> Christopher Hanley wrote: > Would there be any opposition to moving the scipy.stsci package to > scikits? As a reminder, scipy.stsci contains the convolve and image > modules that were once distributed with numarray. Thoughts? Opinions? > > I think this would be fine. > A secondary question. If I have commit privileges for scipy do I need a > separate account for scikit access? > Not a separate account, but somebody needs to give you permissions on the repository. I can do that. -Travis From millman at berkeley.edu Mon Nov 3 14:42:28 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Mon, 3 Nov 2008 11:42:28 -0800 Subject: [SciPy-dev] scipy.stsci to scikits? In-Reply-To: <91b4b1ab0811031114g57068705qed04ee7a2bb29d3b@mail.gmail.com> References: <490F228E.9040505@stsci.edu> <490F4537.6010503@american.edu> <490F4716.2060907@stsci.edu> <91b4b1ab0811031114g57068705qed04ee7a2bb29d3b@mail.gmail.com> Message-ID: Hey Chris, Thanks for offering to work on this. I don't like the package *name* scipy.stsci; however, I am not sure what should be done. Obviously, there is a strong consensus that scipy.ndimage should stay and you clearly aren't suggesting otherwise. I also want to preface the following remarks by saying that rather than focusing on how to handle deprecations or renaming, I would like everyone to focus specifically on what we would ideally like to see done with scipy.stsci. Once we figure out what we want done with this package, we can discuss how to get there. It seems to me that a 2D image processing package would be useful and appropriate in scipy. We currently have two places that it may make sense to put 2D image processing functionality: scipy.signal scipy.ndimage Is it reasonable to merge the scipy.stsci functionality into either ndimage or signal? Would it be better to rename stsci to image perhaps?
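The 2D operations being discussed here are small enough to sketch in pure NumPy. The toy "same"-mode convolution below is purely illustrative (it is not the actual scipy.stsci, scipy.signal, or scipy.ndimage code), but it shows the kind of functionality whose natural home was being debated:

```python
# Toy zero-padded "same" 2D convolution in pure NumPy -- illustrative only,
# not the actual scipy.stsci, scipy.signal, or scipy.ndimage implementation.
import numpy as np

def convolve2d_toy(image, kernel):
    """Zero-padded 'same' convolution of a 2D array with an odd-sized kernel."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    # Flip the kernel: convolution, as opposed to correlation.
    k = kernel[::-1, ::-1]
    padded = np.zeros((image.shape[0] + 2 * ph, image.shape[1] + 2 * pw))
    padded[ph:ph + image.shape[0], pw:pw + image.shape[1]] = image
    out = np.empty_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            # Sum of the (flipped) kernel over the window centred on (i, j).
            out[i, j] = (padded[i:i + kh, j:j + kw] * k).sum()
    return out
```

For odd-sized kernels this should agree with scipy.signal.convolve2d(image, kernel, mode='same', boundary='fill') and with scipy.ndimage.convolve(image, kernel, mode='constant'); the real implementations differ mainly in speed, boundary options, and dimensionality.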
Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From stefan at sun.ac.za Mon Nov 3 15:12:51 2008 From: stefan at sun.ac.za (Stéfan van der Walt) Date: Mon, 3 Nov 2008 22:12:51 +0200 Subject: [SciPy-dev] scipy.stsci to scikits? In-Reply-To: References: <490F228E.9040505@stsci.edu> <490F4537.6010503@american.edu> <490F4716.2060907@stsci.edu> <91b4b1ab0811031114g57068705qed04ee7a2bb29d3b@mail.gmail.com> Message-ID: <9457e7c80811031212i26effe88r5d6fc443f983ffc9@mail.gmail.com> 2008/11/3 Jarrod Millman : > It seems to me that a 2D image processing package would be useful and > appropriate in scipy. We currently have two places that it may make > sense to put 2D image processing functionality: > scipy.signal > scipy.ndimage > > Is it reasonable to merge the scipy.stsci functionality into either > ndimage or signal? Would it be better to rename stsci to image > perhaps? I'd love to see scipy.image happen! I'm a bit worried about ndimage. Much of its functionality can now be written in pure Python, with the backing of NumPy. The code is not well supported (I don't think many others often dive in there), and I know of some fundamental bugs (for example the boundary extension modes "mirror"/"reflect", which are broken due to assumptions about the underlying spatial structure of an image; or missing 64-bit support). I started porting the C API to NumPy a while ago, but never got around to finishing the job. It may well be time to give this module a good refactoring. Cheers Stéfan From jh at physics.ucf.edu Mon Nov 3 18:39:37 2008 From: jh at physics.ucf.edu (jh at physics.ucf.edu) Date: Mon, 03 Nov 2008 18:39:37 -0500 Subject: [SciPy-dev] Technology Previews and Sphinx Docstring Annotations In-Reply-To: (scipy-dev-request@scipy.org) References: Message-ID: "Damian Eads" wrote: > Technology preview code is new code incorporated into the trunk of > SciPy ...
considered production grade and well-tested ... no > guarantees of a stable API to enable further improvements based on > community feedback. Sorry, but I feel this is a poor idea. Scipy is supposed to be stable. We got rid of the sandbox for a reason. We have too many API problems in scipy and even in numpy (median, FFT, and many others) to introduce a sanctioned mechanism for breaking APIs. If code is production-grade, *it has a stable API*. If not, release a separate package, get some usage experience, and let the API mature. If the code is add-on code to an existing package in scipy, your package can monkeypatch it into the relevant scipy package as you and interested others test it, or you can import it separately. Then propose to bring it into SciPy as a mature package once it's ready. I would certainly favor a section of the web site devoted to promoting such tests (scipy.org/nursery? scipy.org/testbed? scipy.org/greenhouse?). Putting markup in the documentation is not nearly sufficient warning since many people exchange code (e.g., Cookbook) without reading the docs for all the functions and classes the code contains. Also, having "Technology Demonstration" labels all over the place will only serve to shake people's faith in the stability of the package, and prevent it from ever getting a reputation for reliability. Numpy and scipy should never be places for experimentation. --jh-- From millman at berkeley.edu Mon Nov 3 19:36:37 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Mon, 3 Nov 2008 16:36:37 -0800 Subject: [SciPy-dev] a modest proposal for technology previews Message-ID: Hey, I have been thinking about how to best get useful, widely-needed, high-quality code with a good, stable API into scipy without creating an unnecessary burden on developers or early adopters.
Unfortunately, I don't have time to fully flesh out what I have been thinking; but I went ahead and started writing a SEP: http://projects.scipy.org/scipy/scipy/browser/trunk/doc/seps/technology-preview.rst Please note that I don't intend this to replace scikits or other staging grounds. I imagine that a project could easily start as a scikit and mature there. Then a number of developers decide that it belongs in scipy proper, rather than just working in a branch until the code is ready for release to the world. This mechanism would allow an additional incubator for code maturation and development. I would love to hear everyone's initial thoughts, suggestions, and ideas. I will try and incorporate these comments into the SEP and then we can discuss whether we should accept, reject, or defer the SEP. I have been kicking the idea around a bit with Fernando, Chris, Stefan, and others; so I can't claim the idea is mine. I am just trying to write it up. If anyone wants to help out, the SEP is checked into the scipy trunk. I plan to work on it more later tonight to flesh out some of the ideas better. But in the spirit of "release early, release often" ... have at it. Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From oliphant at enthought.com Mon Nov 3 23:43:34 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Mon, 03 Nov 2008 22:43:34 -0600 Subject: [SciPy-dev] a modest proposal for technology previews In-Reply-To: References: Message-ID: <490FD2F6.8000907@enthought.com> Jarrod Millman wrote: > Hey, > > I have been thinking about how to best get useful, widely-needed, > high-quality code with a good, stable API into scipy without creating > an unnecessary burden on developers or early adopters.
Unfortunately, > I don't have time to fully flesh out what I have been thinking; but I > went ahead and started writing a SEP: > http://projects.scipy.org/scipy/scipy/browser/trunk/doc/seps/technology-preview.rst > > Please note that I don't intend this to replace scikits or other > staging grounds. I imagine that a project could easily start as a > scikit and mature there. Then a number of developer decide that it > belongs in scipy proper. Rather than just working in a branch until > the code is ready for release to the world. This mechanism would > allow any additional incubator for code maturation and development. > > > I would love to hear everyone's initial thoughts, suggestions, and > ideas. I will try and incorporate these comments into the SEP and > then we can discuss whether we should accept, reject, or defer the > SEP. I have been kicking the idea around a bit with Fernando, Chris, > Stefan, and others; so I can't claim the idea is mine. I am just > trying to write it up. If anyone once to help out, the SEP is checked > into the scipy trunk. > > Hi Jarrod, I think it is useful to have a pattern people can follow for getting new code into SciPy in a way that produces stable APIs. Right now, though, I don't see how scipy.preview is preferable to another staging ground like, say, scikits.forscipy. In fact, I see how it might be a bad thing as it basically brings back the sandbox under a different name (although one could argue it's a sandbox that actually gets distributed). -Travis From robert.kern at gmail.com Mon Nov 3 23:59:03 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 3 Nov 2008 22:59:03 -0600 Subject: [SciPy-dev] a modest proposal for technology previews In-Reply-To: <490FD2F6.8000907@enthought.com> References: <490FD2F6.8000907@enthought.com> Message-ID: <3d375d730811032059r7b558cd0x1745ca2ac9962a7@mail.gmail.com> On Mon, Nov 3, 2008 at 22:43, Travis E.
Oliphant wrote: > Jarrod Millman wrote: >> Hey, >> >> I have been thinking about how to best get useful, widely-needed, >> high-quality code with a good, stable API into scipy without creating >> an unnecessary burden on developers or early adopters. Unfortunately, >> I don't have time to fully flesh out what I have been thinking; but I >> went ahead and started writing a SEP: >> http://projects.scipy.org/scipy/scipy/browser/trunk/doc/seps/technology-preview.rst >> >> Please note that I don't intend this to replace scikits or other >> staging grounds. I imagine that a project could easily start as a >> scikit and mature there. Then a number of developer decide that it >> belongs in scipy proper. Rather than just working in a branch until >> the code is ready for release to the world. This mechanism would >> allow any additional incubator for code maturation and development. >> >> >> I would love to hear everyone's initial thoughts, suggestions, and >> ideas. I will try and incorporate these comments into the SEP and >> then we can discuss whether we should accept, reject, or defer the >> SEP. I have been kicking the idea around a bit with Fernando, Chris, >> Stefan, and others; so I can't claim the idea is mine. I am just >> trying to write it up. If anyone once to help out, the SEP is checked >> into the scipy trunk. >> >> > Hi Jarrod, > > I think it is useful to have a pattern people can follow for getting new > code into SciPy in a way that produces stable APIs. > > Right now, though, I don't see how scipy.preview is preferrable to > another staging ground like, say, scikits.forscipy. In fact, I see how > it might be a bad thing as it basically brings back the sandbox under a > different name (although one could argue it's a sandbox that actually > gets distributed). And there was a reason that the sandbox wasn't distributed in binaries. 
The developing code might impose extra build requirements onto scipy-as-a-whole before we even agree that we want to include it. It might even break the build at times. We should have a standard path for new package- or module-sized stuff to be added, but I would prefer that it be outside of scipy for this reason. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at ar.media.kyoto-u.ac.jp Mon Nov 3 23:58:15 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 04 Nov 2008 13:58:15 +0900 Subject: [SciPy-dev] a modest proposal for technology previews In-Reply-To: <490FD2F6.8000907@enthought.com> References: <490FD2F6.8000907@enthought.com> Message-ID: <490FD667.9040601@ar.media.kyoto-u.ac.jp> Travis E. Oliphant wrote: > Jarrod Millman wrote: >> Hey, >> >> I have been thinking about how to best get useful, widely-needed, >> high-quality code with a good, stable API into scipy without creating >> an unnecessary burden on developers or early adopters. Unfortunately, >> I don't have time to fully flesh out what I have been thinking; but I >> went ahead and started writing a SEP: >> http://projects.scipy.org/scipy/scipy/browser/trunk/doc/seps/technology-preview.rst >> >> Please note that I don't intend this to replace scikits or other >> staging grounds. I imagine that a project could easily start as a >> scikit and mature there. Then a number of developer decide that it >> belongs in scipy proper. Rather than just working in a branch until >> the code is ready for release to the world. This mechanism would >> allow any additional incubator for code maturation and development. >> >> >> I would love to hear everyone's initial thoughts, suggestions, and >> ideas. 
I will try and incorporate these comments into the SEP and >> then we can discuss whether we should accept, reject, or defer the >> SEP. I have been kicking the idea around a bit with Fernando, Chris, >> Stefan, and others; so I can't claim the idea is mine. I am just >> trying to write it up. If anyone once to help out, the SEP is checked >> into the scipy trunk. >> >> > Hi Jarrod, > > I think it is useful to have a pattern people can follow for getting new > code into SciPy in a way that produces stable APIs. > > Right now, though, I don't see how scipy.preview is preferable to > another staging ground like, say, scikits.forscipy. In fact, I see how > it might be a bad thing as it basically brings back the sandbox under a > different name (although one could argue it's a sandbox that actually > gets distributed). There are basically two key differences between scikits and scipy for new code: - in scipy, it is always built, thus the package author does not have to deal with the release process - if in scipy, for the reason above, it can be easily distributable. You can just say: get scipy, and you will be guaranteed to get this code. I don't think the solution is to get the code in scipy either. The solution is to solve the problems above for scikits: ideally, there would be an automated process to regularly build tarballs/releases, and something to get the code. That would remove most needs for inclusion in scipy, no? Of course, it is easier said than done. Eggs + pypi could help for that, maybe. I wish we had a system like CRAN, where inside R, you just say you want to install a new package, and they have the infrastructure to make this work.
IMHO, that's the only solution in the long term, cheers, David From david at ar.media.kyoto-u.ac.jp Mon Nov 3 23:59:32 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 04 Nov 2008 13:59:32 +0900 Subject: [SciPy-dev] a modest proposal for technology previews In-Reply-To: <490FD667.9040601@ar.media.kyoto-u.ac.jp> References: <490FD2F6.8000907@enthought.com> <490FD667.9040601@ar.media.kyoto-u.ac.jp> Message-ID: <490FD6B4.5000209@ar.media.kyoto-u.ac.jp> David Cournapeau wrote: > > There are basically two key differences between scikits and scipy for > new code: > - in scipy, it is always built, thus the author package does not > have to deal with release process > - if in scipy, for the reason above, it can be easily distributable. > You can just say: get scipy, and you will be guaranteed to get this code. > > I don't think the solution is to get the code in scipy either. The > solution is to solve the problems above for scikits: ideally, there > would be an automated process to regularly build tarballs/releases, and > something to get the code. Sorry, this should read "and something to build the package". David From gael.varoquaux at normalesup.org Tue Nov 4 01:14:49 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Tue, 4 Nov 2008 07:14:49 +0100 Subject: [SciPy-dev] a modest proposal for technology previews In-Reply-To: <490FD667.9040601@ar.media.kyoto-u.ac.jp> References: <490FD2F6.8000907@enthought.com> <490FD667.9040601@ar.media.kyoto-u.ac.jp> Message-ID: <20081104061449.GA7680@phare.normalesup.org> On Tue, Nov 04, 2008 at 01:58:15PM +0900, David Cournapeau wrote: > Eggs + pypi could help for that, maybe. Currently Eggs + pypi don't work well enough for compiled code. There is a famous "sandbox violation" bug that strikes every now and then when installing source packages that have to be built. It may be due to a bad interaction between numpy.distutils and setuptools.
> I wish we had a system like CRAN, where inside R, you just say you want > to install a new package, and they have the infrastructure to make this > work. IMHO, that's the only solution in the long term, Absolutely. And right now this is not the case. Trying to rely on eggs and pypi for non-trivial packages means your users will experience incomprehensible failures. And I am not even discussing the mess created by an upgrade after eggs have been improperly cleaned up, which is another large source of failure for installs. Gaël From david at ar.media.kyoto-u.ac.jp Tue Nov 4 01:19:58 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 04 Nov 2008 15:19:58 +0900 Subject: [SciPy-dev] a modest proposal for technology previews In-Reply-To: <20081104061449.GA7680@phare.normalesup.org> References: <490FD2F6.8000907@enthought.com> <490FD667.9040601@ar.media.kyoto-u.ac.jp> <20081104061449.GA7680@phare.normalesup.org> Message-ID: <490FE98E.1050007@ar.media.kyoto-u.ac.jp> Gael Varoquaux wrote: > > Currently Eggs + pypi don't work well enough for compiled code. There is > a famous "sandbox violation" bug that strikes every now and then when > installing source packages that have to be built. It may be due to a > bad interaction between numpy.distutils and setuptools. Let's not go into the setuptools debate, please. I think I am in a relatively good position to know about distutils/setuptools idiosyncrasies. Pypi is independent of setuptools, and can already help quite a bit for the source release part. I think to say: grab this tarball from pypi is quite a big step toward easier installation compared to using svn. It could be a good policy to tell people developing scikits to upload sources to pypi whenever they do a release (I am as guilty as others in that department). Then, there is the problem of building binary releases. On Linux, there are automated build farms available; I spent a good deal of time on this at some point.
I am not sure you can see the following page, but this shows the state of Linux packages on various distributions https://build.opensuse.org/project/monitor?project=home%3Aashigabou It is almost 100% automated. For Windows and Mac OS X, this needs to be done differently. The problem is then the lack of resources (of which time is not the cheapest). But basically, it is not fundamentally different from how R does it. cheers, David From olivier.grisel at ensta.org Tue Nov 4 03:19:54 2008 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Tue, 4 Nov 2008 09:19:54 +0100 Subject: [SciPy-dev] a modest proposal for technology previews In-Reply-To: <490FE98E.1050007@ar.media.kyoto-u.ac.jp> References: <490FD2F6.8000907@enthought.com> <490FD667.9040601@ar.media.kyoto-u.ac.jp> <20081104061449.GA7680@phare.normalesup.org> <490FE98E.1050007@ar.media.kyoto-u.ac.jp> Message-ID: 2008/11/4 David Cournapeau : > Gael Varoquaux wrote: >> >> Currently Eggs + pypi don't work well enough for compiled code. There is >> a famous "sandbox violation" bug that strikes every now and then when >> installing source packages that have to be built. It may be due to a >> bad interaction between numpy.distutils and setuptools. > > Let's not go into the setuptools debate, please. I think I am in a > relatively good position to know about distutils/setuptools > idiosyncrasies. Pypi is independent of setuptools, and can already help > quite a bit for the source release part. I think to say: grab this > tarball from pypi is quite a big step toward easier installation > compared to using svn. > > It could be a good policy to tell people developing scikits to upload > sources to pypi whenever they do a release (I am as guilty as others in > that department). As a scikit user, +1 for using pypi more systematically for scikit releases (even a source-only release without setuptools, if setuptools is unable to manage the binary properly).
I think it would help scikits gain more visibility in the python community as a whole and hence scipy too. The pypi RSS feed is relayed on various (sometimes manually postfiltered) aggregators and it can help gain a lot of pagerank. -- Olivier From millman at berkeley.edu Tue Nov 4 03:22:13 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 4 Nov 2008 00:22:13 -0800 Subject: [SciPy-dev] a modest proposal for technology previews In-Reply-To: <490FD2F6.8000907@enthought.com> References: <490FD2F6.8000907@enthought.com> Message-ID: On Mon, Nov 3, 2008 at 8:43 PM, Travis E. Oliphant wrote: > Right now, though, I don't see how scipy.preview is preferable to > another staging ground like, say, scikits.forscipy. In fact, I see how > it might be a bad thing as it basically brings back the sandbox under a > different name (although one could argue it's a sandbox that actually > gets distributed). I absolutely agree that bringing back the sandbox would be a very bad idea. But I don't think what I am trying to propose is in any significant way similar to the sandbox. The sandbox had no vetting, no discussion, no plan for inclusion in scipy, and was not actively developed. I was imagining that scipy.preview would be reserved for code for which there was a very high bar for inclusion. Code that most likely had either: 1. some peer review in the case of new call signatures for existing code. 2. some existence as an external project. I was trying to address a specific concern I have about how to handle some of the code that has made it, once again, into the trunk right as we are trying to make a new stable release. I don't see this situation going away and I don't see it getting any better. Rather than just stating that we will have a very formal vetting process or we will follow typical release procedures with a feature freeze and regular releases, I was thinking that there may be another alternative that would better suit the developer community that we seem to have.
Since we have agreed to include scipy.spatial (this is only an example; we could use scipy.stats.models or numpy.ma or ...), I was wondering if there was a way to include this code (as opposed to random things that were thrown into scipy.sandbox) in a way that would indicate that the developers have agreed: 1. to include the code in scipy 2. that the code meets some given standards of test and documentation coverage 3. whatever else we think should be required.... Currently, the problem is that someone proposes or writes something that we agree should be part of scipy. We discuss on the list perhaps where the code should go or what it should be called. We may discuss the API. Then the code is included in the trunk and we make a release. For a large number of scipy users, this may be the first time they have been made aware of the new code and its API. They may find that the code doesn't meet their needs. Also, even though the code has been discussed on the list, most of us are so busy we don't have very much time to closely look at the code. So when we agree to include it, we only have a general sense that the code has a reasonable API. I was proposing scipy.preview to try and address these and other concerns. It seems doubtful to me that scikits addresses this problem. And I don't think that we currently have enough resources to make scikits better serve this purpose. I don't have time to work on scikits and I don't pay much attention to them unfortunately. If someone was to step forward and start pushing scikits, that would be great and I would be happy to see that happen. But even if that was to happen, we would still need a process to incorporate a scikit into scipy if we decided we wanted to. My proposal is aimed at solving that problem.
-- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From gael.varoquaux at normalesup.org Tue Nov 4 03:23:04 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Tue, 4 Nov 2008 09:23:04 +0100 Subject: [SciPy-dev] a modest proposal for technology previews In-Reply-To: References: <490FD2F6.8000907@enthought.com> <490FD667.9040601@ar.media.kyoto-u.ac.jp> <20081104061449.GA7680@phare.normalesup.org> <490FE98E.1050007@ar.media.kyoto-u.ac.jp> Message-ID: <20081104082304.GA15107@phare.normalesup.org> On Tue, Nov 04, 2008 at 09:19:54AM +0100, Olivier Grisel wrote: > > It could be a good policy to tell people developing scikits to upload > > sources to pypi whenever they do a release (I am as guilty as others in > > that department). > As a scikit user, +1 for using pypi more systematically for scikit > releases (being a source-only release without setuptools if setuptools > is unable to manage the binary properly). I think it would help > scikits gain more visibility in the python community as a whole and > hence scipy too. The pypi RSS feed is relayed on various (sometimes > manually postfiltered) aggregators and it can help gain a lot of > pagerank. Actually, I am +1 on that too. My caution about setuptools is just what it is: caution. I am not saying we should not be using it, or PyPI, to provide services that nothing else provides.
Gaël From millman at berkeley.edu Tue Nov 4 03:34:20 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 4 Nov 2008 00:34:20 -0800 Subject: [SciPy-dev] a modest proposal for technology previews In-Reply-To: <490FD667.9040601@ar.media.kyoto-u.ac.jp> References: <490FD2F6.8000907@enthought.com> <490FD667.9040601@ar.media.kyoto-u.ac.jp> Message-ID: On Mon, Nov 3, 2008 at 8:58 PM, David Cournapeau wrote: > There are basically two key differences between scikits and scipy for > new code: > - in scipy, it is always built, thus the package author does not > have to deal with the release process > - if in scipy, for the reason above, it is easily distributable. > You can just say: get scipy, and you will be guaranteed to get this code. The major difference I see is that one is a scikit and the other is part of scipy. In my mind, for something to be part of scipy means that it should fit into the package in a consistent way. If I was developing a scikit, I don't think I would necessarily write it the same way as if I was trying to make it part of scipy. Also users expect some consistency and uniformity when using scipy, but may be more tolerant of code that is part of a scikit. We currently don't have a way to indicate that a specific scikit is about to be included in scipy. By putting it in scipy.preview, the idea would be that the scipy developers have agreed that this code is intended for inclusion in a future scipy release. And that they are interested in user feedback and testing before finalizing the API. > I don't think the solution is to get the code in scipy either. The > solution is to solve the problems above for scikits: ideally, there > would be an automated process to regularly build tarballs/releases, and > something to get the code. That would remove most needs for inclusion in > scipy, no ? Of course, it is easier said than done. Eggs + pypi could > help for that, maybe.
While I don't think this is directly answering the problems that I was hoping to address with scipy.preview, I think it would be great. Frankly, I would even be happy at this point if we could get regular (much less automated and regular) builds/releases of scipy out before focusing on the scikits. -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From stefan at sun.ac.za Tue Nov 4 03:39:59 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 4 Nov 2008 10:39:59 +0200 Subject: [SciPy-dev] a modest proposal for technology previews In-Reply-To: <490FD2F6.8000907@enthought.com> References: <490FD2F6.8000907@enthought.com> Message-ID: <9457e7c80811040039i563e6b6aod9661637c05e0d0c@mail.gmail.com> 2008/11/4 Travis E. Oliphant : > Right now, though, I don't see how scipy.preview is preferable to > another staging ground like, say, scikits.forscipy. In fact, I see how > it might be a bad thing as it basically brings back the sandbox under a > different name (although one could argue it's a sandbox that actually > gets distributed). I see `preview` fulfilling two roles: 1) Exposing a package meant to be included in SciPy to the majority of users 2) Allowing developers to make use of user feedback, made possible because the APIs of packages in `preview` are still under review. In this light, I think `preview` fulfills a distinctly different role from the sandbox. In it, we shall put the packages that will definitely end up in SciPy in one or two releases. It allows us to get the build process sorted out and the code integrated, but with a layer of protection for the developers.
Regards Stéfan From millman at berkeley.edu Tue Nov 4 03:40:02 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 4 Nov 2008 00:40:02 -0800 Subject: [SciPy-dev] a modest proposal for technology previews In-Reply-To: <20081104082304.GA15107@phare.normalesup.org> References: <490FD2F6.8000907@enthought.com> <490FD667.9040601@ar.media.kyoto-u.ac.jp> <20081104061449.GA7680@phare.normalesup.org> <490FE98E.1050007@ar.media.kyoto-u.ac.jp> <20081104082304.GA15107@phare.normalesup.org> Message-ID: On Tue, Nov 4, 2008 at 12:23 AM, Gael Varoquaux wrote: > On Tue, Nov 04, 2008 at 09:19:54AM +0100, Olivier Grisel wrote: >> > It could be a good policy to tell people developing scikits to upload >> > sources to pypi whenever they do a release (I am as guilty as others in >> > that department). > >> As a scikit user, +1 for using pypi more systematically for scikit >> releases (being a source-only release without setuptools if setuptools >> is unable to manage the binary properly). I think it would help >> scikits gain more visibility in the python community as a whole and >> hence scipy too. The pypi RSS feed is relayed on various (sometimes >> manually postfiltered) aggregators and it can help gain a lot of >> pagerank. > > Actually, I am +1 on that too. My caution about setuptools is just what > it is: caution. I am not saying we should not be using it, or PyPI, to > provide services that nothing else provides. Anyone who wants to should feel free to add a note about this on the scikit's dev page: http://scipy.org/scipy/scikits/wiki/ScikitsForDevelopers I don't think anyone would disagree that new releases should be posted to pypi.
-- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From david at ar.media.kyoto-u.ac.jp Tue Nov 4 03:36:28 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 04 Nov 2008 17:36:28 +0900 Subject: [SciPy-dev] a modest proposal for technology previews In-Reply-To: References: <490FD2F6.8000907@enthought.com> <490FD667.9040601@ar.media.kyoto-u.ac.jp> Message-ID: <4910098C.4060001@ar.media.kyoto-u.ac.jp> Jarrod Millman wrote: > > The major difference I see is that one is a scikit and the other is > part of scipy. Yes - and that's exactly why I think it is good to use scikits for that :) If you put things in scipy.preview: - what happens when the build breaks on common platforms ? - what happens when you want to work in a manner which is not synchronized with the scipy release process ? The only advantage of scipy.preview I can see is that it is easier to get the code for scipy users (hence my discussion about scikits build management). I don't think it outweighs the disadvantages. Here is how I would see the process: - you start coding your scikit - once you have something, you discuss it on the ML, and you say you want it to be included in scipy (here we can put any requirement: decent doc, testsuite, etc...) - if the consensus is that it can be included, then put it somewhere in scipy As you see, from a code review standpoint there is almost no difference compared to putting it into scipy.preview. BUT, during the dev process, it can break the scipy build, may not be buildable on the platforms we usually support, etc... The whole code process not happening in scipy.preview has a lot of advantages. > In my mind, for something to be part of scipy means > that it should fit into the package in a consistent way. If I was > developing a scikit, I don't think I would necessarily write it the > same way as if I was trying to make it part of scipy.
I don't understand this: it just means you have to think that it can go into scipy when you develop your scikit. How would the code be any different if it was developed under scikits or scipy.preview ? The only difference for the source is that the namespace and the svn repository are different. cheers, David From stefan at sun.ac.za Tue Nov 4 03:51:48 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 4 Nov 2008 10:51:48 +0200 Subject: [SciPy-dev] Technology Previews and Sphinx Docstring Annotations In-Reply-To: References: Message-ID: <9457e7c80811040051t418b8e61r26f70f82dbae551e@mail.gmail.com> 2008/11/4 : > Numpy and scipy should never be places for experimentation. There is some agreement that scikits is the right place for experimentation, but unfortunately they aren't easy to install (Or, rather, the procedure isn't optimal or clear. The problem requires both technical and marketing attention.). It means that the exposure of beta packages is currently very low, so we seldom get the feedback required in time. This is one of the reasons why I like Jarrod's "preview" idea. Regards Stéfan From david at ar.media.kyoto-u.ac.jp Tue Nov 4 03:58:59 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 04 Nov 2008 17:58:59 +0900 Subject: [SciPy-dev] Technology Previews and Sphinx Docstring Annotations In-Reply-To: <9457e7c80811040051t418b8e61r26f70f82dbae551e@mail.gmail.com> References: <9457e7c80811040051t418b8e61r26f70f82dbae551e@mail.gmail.com> Message-ID: <49100ED3.7040107@ar.media.kyoto-u.ac.jp> Stéfan van der Walt wrote: > 2008/11/4 : > >> Numpy and scipy should never be places for experimentation. >> > > There is some agreement that scikits is the right place for > experimentation, but unfortunately they aren't easy to install (Or, > rather, the procedure isn't optimal or clear. The problem requires > both technical and marketing attention.).
It means that the exposure > of beta packages is currently very low, so we seldom get the feedback > required in time. Yes, that's a big problem for the scikits. But if we had a windows build machine for automated binary builds, it would largely alleviate the problem, no ? Here is the main page for this on CRAN: http://cran.r-project.org/ If we had the same system, such that you could simply upload your package tarball, and it would give you back a windows installer (then available on pypi), it would make installation quite smooth. Even smoother than with scipy, actually, since you could release a binary at any point. cheers, David From millman at berkeley.edu Tue Nov 4 04:30:04 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 4 Nov 2008 01:30:04 -0800 Subject: [SciPy-dev] a modest proposal for technology previews In-Reply-To: <4910098C.4060001@ar.media.kyoto-u.ac.jp> References: <490FD2F6.8000907@enthought.com> <490FD667.9040601@ar.media.kyoto-u.ac.jp> <4910098C.4060001@ar.media.kyoto-u.ac.jp> Message-ID: On Tue, Nov 4, 2008 at 12:36 AM, David Cournapeau wrote: > Jarrod Millman wrote: >> >> The major difference I see is that one is a scikit and the other is >> part of scipy. > > Yes - and that's exactly why I think it is good to use scikits for that > :) If you put things in scipy.preview: > - what happens when the build breaks on common platforms ? > - what happens when you want to work in a manner which is not > synchronized with the scipy release process ? Those are problems regardless of whether your code is in scipy.preview or scipy.xxx. I am not suggesting that we include unstable, broken code in scipy.preview. What I am suggesting is that for code that we agree should be in scipy and are ready to include in scipy, that the code first go into scipy.preview. > The only advantage of scipy.preview I can see is that it is easier to > get the code for scipy users (hence my discussion about scikits build > management).
I don't think it outweighs the disadvantages. I feel like you are either misunderstanding (or perhaps dismissing) my argument (I am fairly tired so I don't claim to be making it very well). The main and specific advantage is that when we are ready to include new code into scipy, we have a staging area where we can clean up or fix the API before just throwing it into the code base. I don't want to use scipy.preview as a sandbox to write new code. If you are starting a new package, you could use code.google.com like Damian did, or a branch like Anne did, or make a scikit. I don't really care how individual authors want to do that. But once you have some reasonably mature code and the scipy developers agree that they would like to include your code, then you start by including your code in scipy.preview. > Here is how I would see the process: > - you start coding your scikit > - once you have something, you discuss it on the ML, and you say you > want it to be included in scipy (here we can put any requirement: decent > doc, testsuite, etc...) > - if the consensus is that it can be included, then put it somewhere > in scipy > > As you see, from a code review standpoint there is almost no difference > compared to putting it into scipy.preview. BUT, during the dev process, > it can break the scipy build, may not be buildable on the platforms we usually > support, etc... The whole code process not happening in scipy.preview > has a lot of advantages. I am not suggesting that "the whole code process happen in scipy.preview". And there is a very concrete difference between what I am proposing and the code review process you outline above. Namely I am proposing an additional step between 1) the ML discussion with consensus and 2) put it somewhere in scipy. That step is putting it into scipy.preview, between 1) and 2). >> In my mind, for something to be part of scipy means >> that it should fit into the package in a consistent way.
If I was >> developing a scikit, I don't think I would necessarily write it the >> same way as if I was trying to make it part of scipy. > > I don't understand this: it just means you have to think that it can go > into scipy when you develop your scikit. How would the code be any > different if it was developed under scikits or scipy.preview ? The only > difference for the source is that the namespace and the svn repository > are different. It wasn't a particularly important point. I was just trying to point out that scikit code doesn't necessarily mean that the author intends to include the code in scipy. It could, of course. But again the issue I am trying to point out is that: even if there were nightly builds of windows, mac, linux, bsd, etc. of every scikit, the problem I am trying to address wouldn't necessarily be addressed. Specifically I am trying to propose a very narrowly defined way to get a technology preview of new code to end users as part of our regular and official scipy releases. I am open to discussions about what the bar should be for getting code in as part of the technology preview. -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From stefan at sun.ac.za Tue Nov 4 04:47:30 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 4 Nov 2008 11:47:30 +0200 Subject: [SciPy-dev] Technology Previews and Sphinx Docstring Annotations In-Reply-To: <49100ED3.7040107@ar.media.kyoto-u.ac.jp> References: <9457e7c80811040051t418b8e61r26f70f82dbae551e@mail.gmail.com> <49100ED3.7040107@ar.media.kyoto-u.ac.jp> Message-ID: <9457e7c80811040147j4962ea94s47e6f063c19f3c89@mail.gmail.com> 2008/11/4 David Cournapeau : > problem, no ?
Here is the main page for this on CRAN: > > http://cran.r-project.org/ > > If we had the same system, such that you could simply upload your package > tarball, and it would give you back a windows installer (then available > on pypi), it would make installation quite smooth. Even smoother than > with scipy, actually, since you could release a binary at any point. That would be great. But currently, this is all vapourware. Cheers Stéfan From cournape at gmail.com Tue Nov 4 07:18:31 2008 From: cournape at gmail.com (David Cournapeau) Date: Tue, 4 Nov 2008 21:18:31 +0900 Subject: [SciPy-dev] Technology Previews and Sphinx Docstring Annotations In-Reply-To: <9457e7c80811040147j4962ea94s47e6f063c19f3c89@mail.gmail.com> References: <9457e7c80811040051t418b8e61r26f70f82dbae551e@mail.gmail.com> <49100ED3.7040107@ar.media.kyoto-u.ac.jp> <9457e7c80811040147j4962ea94s47e6f063c19f3c89@mail.gmail.com> Message-ID: <5b8d13220811040418k16956f43sc0f54e743b1c9119@mail.gmail.com> On Tue, Nov 4, 2008 at 6:47 PM, Stéfan van der Walt wrote: > 2008/11/4 David Cournapeau : >> problem, no ? Here is the main page for this on CRAN: >> >> http://cran.r-project.org/ >> >> If we had the same system, such that you could simply upload your package >> tarball, and it would give you back a windows installer (then available >> on pypi), it would make installation quite smooth. Even smoother than >> with scipy, actually, since you could release a binary at any point. > > That would be great. But currently, this is all vapourware. Yes, but if the code is included in scipy.preview instead, the burden would be on scipy package managers. Not many people want to spend time on release management, and if we can only furnish binary installers once a year, I don't see how incorporating the code into scipy.preview helps the review compared to doing it elsewhere. IOW, a CRAN-like system is vaporware, but not more than regular scipy releases.
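To make concrete what such a CRAN-like service would have to do per package, here is a minimal dry-run sketch. All names (the tarball, the staging directory) are illustrative assumptions, and it only plans the shell commands rather than running them:

```python
# Dry-run sketch of the per-tarball steps a hypothetical CRAN-like
# build service would execute for each uploaded scikit: unpack the
# sources, build a Windows installer, and stage the result for upload.
# Tarball and directory names are illustrative, not real packages.
def plan_nightly_build(tarballs):
    """Return the command lines a nightly build job would run."""
    plan = []
    for tb in tarballs:
        src = tb[:-len(".tar.gz")]  # strip the archive suffix
        plan.append("tar xzf %s" % tb)
        plan.append("cd %s && python setup.py bdist_wininst" % src)
        plan.append("cp %s/dist/*.exe staging/" % src)
    return plan

for cmd in plan_nightly_build(["scikits.example-0.1.tar.gz"]):
    print(cmd)
```

The point is that the whole job is mechanical, which is why a batch system (or, failing that, a human release manager) has to do exactly this work either way.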
If binary packages don't matter, then I don't see the advantage of putting it in scipy.preview in the first place, since uploading it to pypi makes it as easy if not easier for people to get their hands on the sources. Jarrod mentioned that only mature packages would be put in scipy.preview: but how can a package be mature if people have not tried it out ? That's why IMO the problem really is a release-process problem more than anything else. Once this is solved (and it has to be solved anyway), we can put a review process in place, but at this point, when the package is officially reviewed, it does not matter where it is as long as it is easily available. David From olivier.grisel at ensta.org Tue Nov 4 08:49:36 2008 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Tue, 4 Nov 2008 14:49:36 +0100 Subject: [SciPy-dev] Technology Previews and Sphinx Docstring Annotations In-Reply-To: <9457e7c80811040147j4962ea94s47e6f063c19f3c89@mail.gmail.com> References: <9457e7c80811040051t418b8e61r26f70f82dbae551e@mail.gmail.com> <49100ED3.7040107@ar.media.kyoto-u.ac.jp> <9457e7c80811040147j4962ea94s47e6f063c19f3c89@mail.gmail.com> Message-ID: 2008/11/4 Stéfan van der Walt : > 2008/11/4 David Cournapeau : >> problem, no ? Here is the main page for this on CRAN: >> >> http://cran.r-project.org/ >> >> If we had the same system, such that you could simply upload your package >> tarball, and it would give you back a windows installer (then available >> on pypi), it would make installation quite smooth. Even smoother than >> with scipy, actually, since you could release a binary at any point. > > That would be great. But currently, this is all vapourware. AFAIK the scipy buildbot already has a windows slave [1]. Wouldn't it be possible to configure new buildbot tasks to build and test scikits as well?
[1] http://buildbot.scipy.org/builders/Windows_XP_x86_64_MSVC -- Olivier From stefan at sun.ac.za Tue Nov 4 08:55:26 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 4 Nov 2008 15:55:26 +0200 Subject: [SciPy-dev] Technology Previews and Sphinx Docstring Annotations In-Reply-To: References: <9457e7c80811040051t418b8e61r26f70f82dbae551e@mail.gmail.com> <49100ED3.7040107@ar.media.kyoto-u.ac.jp> <9457e7c80811040147j4962ea94s47e6f063c19f3c89@mail.gmail.com> Message-ID: <9457e7c80811040555p605e3453t1d2675100f445a35@mail.gmail.com> 2008/11/4 Olivier Grisel : >> That would be great. But currently, this is all vapourware. > > AFAIK the scipy buildbot already has a windows slave [1]. Wouldn't it > be possible to configure new buildbot tasks to build and test scikits > as well? > > [1] http://buildbot.scipy.org/builders/Windows_XP_x86_64_MSVC At the moment, these are all NumPy buildbots. Even so, we don't control those machines directly. Whenever I change something, I need to send the owners an e-mail. And then, there is the problem of getting the packages back to a central repository. Not impossible, but this isn't an ideal solution. Cheers Stéfan From olivier.grisel at ensta.org Tue Nov 4 09:09:39 2008 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Tue, 4 Nov 2008 15:09:39 +0100 Subject: [SciPy-dev] Technology Previews and Sphinx Docstring Annotations In-Reply-To: <9457e7c80811040555p605e3453t1d2675100f445a35@mail.gmail.com> References: <9457e7c80811040051t418b8e61r26f70f82dbae551e@mail.gmail.com> <49100ED3.7040107@ar.media.kyoto-u.ac.jp> <9457e7c80811040147j4962ea94s47e6f063c19f3c89@mail.gmail.com> <9457e7c80811040555p605e3453t1d2675100f445a35@mail.gmail.com> Message-ID: 2008/11/4 Stéfan van der Walt : > 2008/11/4 Olivier Grisel : >>> That would be great. But currently, this is all vapourware. >> >> AFAIK the scipy buildbot already has a windows slave [1].
Wouldn't it >> be possible to configure new buildbot tasks to build and test scikits >> as well? >> >> [1] http://buildbot.scipy.org/builders/Windows_XP_x86_64_MSVC > > At the moment, these are all NumPy buildbots. Even so, we don't > control those machines directly. Whenever I change something, I need > to send the owners an e-mail. And then, there is the problem of getting > the packages back to a central repository. Not impossible, but this > isn't an ideal solution. Wouldn't it be possible to have 2 types of buildbot tasks: - react on svn checkin to build and launch the tests (for numpy, scipy and each independent scikit) - nightly build and package a windows installer (likewise for numpy, scipy and each independent scikit) and then push it to a static webserver somewhere? Sure that would require some (unbounded) time investment but that is work that is similar to what is already done manually at each release of numpy / scipy and could help streamline the scikits maintenance / qa a lot. -- Olivier From cournape at gmail.com Tue Nov 4 09:18:08 2008 From: cournape at gmail.com (David Cournapeau) Date: Tue, 4 Nov 2008 23:18:08 +0900 Subject: [SciPy-dev] Technology Previews and Sphinx Docstring Annotations In-Reply-To: References: <9457e7c80811040051t418b8e61r26f70f82dbae551e@mail.gmail.com> <49100ED3.7040107@ar.media.kyoto-u.ac.jp> <9457e7c80811040147j4962ea94s47e6f063c19f3c89@mail.gmail.com> <9457e7c80811040555p605e3453t1d2675100f445a35@mail.gmail.com> Message-ID: <5b8d13220811040618s677880f1qc3450191ebbe9fd5@mail.gmail.com> On Tue, Nov 4, 2008 at 11:09 PM, Olivier Grisel wrote: > > Wouldn't it be possible to have 2 types of buildbot tasks: > > - react on svn checkin to build and launch the tests (for numpy, scipy > and each independent scikit) > - nightly build and package a windows installer (likewise for numpy, scipy and > each independent scikit) and then push it to a static webserver > somewhere > > ?
> > Sure that would require some (unbounded) time investment but that is > work that is similar to what is already done manually at each release > of numpy / scipy and could help streamline the scikits maintenance / > qa a lot. Yes, that's exactly my point: if we don't have a CRAN-like system, it does not mean we won't have to do its work, only that we will have to do it manually instead. And building installers for every scikit is actually much easier than building scipy or numpy installers (because of the dependencies; most scikits do not depend on anything but numpy/scipy). If we could have access to one mac VM and one windows VM, it would be relatively easy to have a batch job building nightly binaries for every scikit (it is mostly a matter of getting numpy and scipy installed + a few tools), and then uploading them automatically on scipy.org. David From nwagner at iam.uni-stuttgart.de Tue Nov 4 10:03:13 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 04 Nov 2008 16:03:13 +0100 Subject: [SciPy-dev] Generating SciPy Sphinx HTML In-Reply-To: References: <91b4b1ab0811021346q4c6429f7xf4eef688dc6b3b10@mail.gmail.com> Message-ID: On Sun, 2 Nov 2008 21:56:03 +0000 (UTC) Pauli Virtanen wrote: > Hi, > > Sun, 02 Nov 2008 13:46:32 -0800, Damian Eads wrote: >> I'm at the Berkeley sprint now trying to fix a few doc >>bugs. Can anyone >> point me to instructions or a script for generating >>Sphinx HTML >> documentation from the RST docstrings? > > Like this, for Scipy: > > svn co http://svn.scipy.org/svn/scipy/scipy-docs/trunk >scipy-docs > cd scipy-docs > export PYTHONPATH=/wherever/your/scipy/is > make html > > and for Numpy, > > svn co http://svn.scipy.org/svn/numpy/numpy-docs/trunk >numpy-docs > cd numpy-docs > export PYTHONPATH=/wherever/your/numpy/is > make html > > Note that you need the Sphinx 0.5.dev development version, >and to actually > compile Numpy or Scipy first.
> > Sphinx 0.5: > > svn co http://svn.python.org/projects/doctools/trunk >sphinx-trunk > cd sphinx-trunk > python setup.py install > > -- > Pauli Virtanen > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev I struggled to build the documentation svn/scipy-docs > make html svn co http://sphinx.googlecode.com/svn/contrib/trunk/numpyext ext A ext/LICENSE.txt A ext/tests A ext/tests/test_docscrape.py A ext/docscrape_sphinx.py A ext/traitsdoc.py A ext/numpydoc.py A ext/__init__.py A ext/autosummary_generate.py A ext/phantom_import.py A ext/comment_eater.py A ext/docscrape.py A ext/autosummary.py A ext/compiler_unparse.py Checked out revision 22. mkdir -p build ./ext/autosummary_generate.py source/*.rst \ -p dump.xml -o source/generated Traceback (most recent call last): File "./ext/autosummary_generate.py", line 18, in ? from autosummary import import_by_name File "/data/home/nwagner/svn/scipy-docs/ext/autosummary.py", line 59, in ? import sphinx.addnodes, sphinx.roles, sphinx.builder File "/data/home/nwagner/local/lib/python2.5/site-packages/Sphinx-0.5dev_20081104-py2.5.egg/sphinx/__init__.py", line 68 '-c' not in (opt[0] for opt in opts): ^ SyntaxError: invalid syntax make: *** [build/generate-stamp] Error 1 Any pointer ? 
Nils From eads at soe.ucsc.edu Tue Nov 4 11:28:11 2008 From: eads at soe.ucsc.edu (Damian Eads) Date: Tue, 4 Nov 2008 08:28:11 -0800 Subject: [SciPy-dev] Technology Previews and Sphinx Docstring Annotations In-Reply-To: <5b8d13220811040618s677880f1qc3450191ebbe9fd5@mail.gmail.com> References: <9457e7c80811040051t418b8e61r26f70f82dbae551e@mail.gmail.com> <49100ED3.7040107@ar.media.kyoto-u.ac.jp> <9457e7c80811040147j4962ea94s47e6f063c19f3c89@mail.gmail.com> <9457e7c80811040555p605e3453t1d2675100f445a35@mail.gmail.com> <5b8d13220811040618s677880f1qc3450191ebbe9fd5@mail.gmail.com> Message-ID: <91b4b1ab0811040828u3f1f9c1bvfacff0ce1a557f5a@mail.gmail.com> Hi there, After the first release of my clustering library last November, I suggested it be included in SciPy. Since the code was quite polished but could use some refinements, it was suggested a scikit be created. I found the distribution and web infrastructure for scikits somewhat lacking, and feared it would eliminate my package's chances of gaining widespread adoption. If one googles "scikit" instead of "scikits", no matches come up for the SciPy scikit wiki. If one manages to land on the scikit front page, its messiness and lack of structure are somewhat discouraging. Code development is a big time investment, and there is something very emotionally disagreeable about the thought of one's well-written code collecting dust. I think if the web site were revamped so it looked less like a wiki and more professional, you'd get more scikit developers. I envision a dynamically generated listing of projects complete with a description, download link, SVN info, and mailing list subscription information. Scikit third-party package developers can log in to create new projects, change their descriptions, or post new releases. By using dynamically generated content for the front page, a more consistent look and feel can be achieved.
The MLOSS page is a good example of a project listing infrastructure that looks nice and professional, http://www.mloss.org/software/. If the Scikit page was more professional and we had better keyword placement on search engines, we'd encourage more third-party package developers to develop using scipy.org resources over Google Code, Savannah, or Sourceforge. My two cents, Damian On Tue, Nov 4, 2008 at 6:18 AM, David Cournapeau wrote: > On Tue, Nov 4, 2008 at 11:09 PM, Olivier Grisel > wrote: >> >> Wouldn't it be possible to have 2 types of buildbot tasks: >> >> - react on svn checkin to build and launch the tests (for numpy, scipy >> and each independent scikit) >> - nightly build and package a windows installer (likewise for numpy, scipy and >> each independent scikit) and then push it to a static webserver >> somewhere >> >> ? >> >> Sure that would require some (unbounded) time investment but that is >> work that is similar to what is already done manually at each release >> of numpy / scipy and could help streamline the scikits maintenance / >> qa a lot. > > Yes, that's exactly my point: if we don't have a CRAN-like system, it > does not mean we won't have to do its work, only that we will have to > do it manually instead. And building installers for every scikit is > actually much easier than building scipy or numpy installers (because > of the dependencies; most scikits do not depend on anything but > numpy/scipy). > > If we could have access to one mac VM and one windows VM, it would be > relatively easy to have a batch job building nightly binaries for > every scikit (it is mostly a matter of getting numpy and scipy > installed + a few tools), and then uploading them automatically on > scipy.org.
> > David From eads at soe.ucsc.edu Tue Nov 4 12:18:23 2008 From: eads at soe.ucsc.edu (Damian Eads) Date: Tue, 4 Nov 2008 09:18:23 -0800 Subject: [SciPy-dev] Technology Previews and Sphinx Docstring Annotations In-Reply-To: References: Message-ID: <91b4b1ab0811040918v6ae665d6s37ed216d9178c109@mail.gmail.com> On Mon, Nov 3, 2008 at 3:39 PM, wrote: > "Damian Eads" wrote: > >> Technology preview code is new code incorporated into the trunk of >> SciPy ... considered production grade and well-tested ... no >> guarantees of a stable API to enable further improvements based on >> community feedback. > > Sorry, but I feel this is a poor idea. Scipy is supposed to be > stable. We got rid of the sandbox for a reason. We have too many API > problems in scipy and even in numpy (median, FFT, and many others) to > introduce a sanctioned mechanism for breaking APIs. > > If code is production-grade, *it has a stable API*. If not, release a > separate package, get some use experience, and let the API mature. I don't necessarily agree. Developers have their own idiosyncratic notions of what makes a good API. I certainly have my own biases in this regard. Developing my package on Google Code gave me the chance to iron out a lot of API issues and fix a lot of bugs to bring the code to stability. Discussion with other SciPy developers also helped further refine the API. There came a point when we decided that the code was really well-documented, mature and well-tested, and nothing further could be done until the code was out there. One must recognize that when you first integrate code into a big package, there are always issues you didn't think about. It's just life: you can't anticipate everything. If you make the requirements for entry too onerous, the larger package does not grow, and competitive edge is lost. As Jarrod wrote in a separate thread, I don't think it's anyone's intention to bring back the scipy.sandbox.
We're going to set the standard very high for what is acceptable for inclusion as a technology preview. The bar is high enough that the core SciPy developers really can't find anything wrong with the code--it just looks ready. Marking new code as "preview" gives us a chance to warn the user that the code is new and a few minor changes to the API might be made to iron things out. In most if not all cases, we're not talking about major changes, we're talking about very minor fixes. They are bound to occur no matter the level of maturity of the new code, so either SciPy does not grow or we give our users a heads-up. I think our users would prefer the latter. > If > the code is add-on code to an existing package in scipy, your package > can monkeypatch it into the relevant scipy package as you and > interested others test it, or you can import it separately. Then > propose to bring it into SciPy as a mature package once it's ready. Code is included in a tech preview when we think it's mature and ready. I think that's a pretty high standard. If you set it any higher, I don't think SciPy will grow. > I > would certainly favor a section of the web site devoted to promoting > such tests (scipy.org/nursery? scipy.org/testbed? > scipy.org/greenhouse?). You are not proposing anything new here. Scikits serve as one of many possible venues for maturing a new package. What we need are proposals (and the work itself!) for greatly improving the Scikit venue to encourage more developers to use it. Developers find nothing more emotionally disagreeable than the thought of their code collecting dust. > Putting markup in the documentation is not nearly sufficient warning > since many people exchange code (e.g., Cookbook) without reading the > docs for all the functions and classes the code contains.
Also, > having "Technology Demonstration" labels all over the place will only > serve to shake people's faith in the stability of the package, and > prevent it from ever getting a reputation for reliability. Tech previews are common practice. QT, a popular dual-licensed GUI package, uses the word "Technology Preview" to indicate to the user that parts of their API are new. I would say Trolltech's employment of this practice hasn't prevented them from building on top of their already very strong reputation. > Numpy and scipy should never be places for experimentation. I don't think we're talking about that. Damian From millman at berkeley.edu Tue Nov 4 12:25:20 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 4 Nov 2008 09:25:20 -0800 Subject: [SciPy-dev] Technology Previews and Sphinx Docstring Annotations In-Reply-To: <91b4b1ab0811040828u3f1f9c1bvfacff0ce1a557f5a@mail.gmail.com> References: <9457e7c80811040051t418b8e61r26f70f82dbae551e@mail.gmail.com> <49100ED3.7040107@ar.media.kyoto-u.ac.jp> <9457e7c80811040147j4962ea94s47e6f063c19f3c89@mail.gmail.com> <9457e7c80811040555p605e3453t1d2675100f445a35@mail.gmail.com> <5b8d13220811040618s677880f1qc3450191ebbe9fd5@mail.gmail.com> <91b4b1ab0811040828u3f1f9c1bvfacff0ce1a557f5a@mail.gmail.com> Message-ID: I absolutely agree with the ideas presented about scikits and look forward to seeing the numerous scikits improvements. I feel that I have gotten into a discussion where the counter argument to what I am proposing is something I strongly support. I also feel that the counterargument doesn't directly address my concern; but it may be that I am simply perceiving a problem that no one else believes exists. I am heading out for a week long vacation, so this will be my last email on this. So let me briefly state my concern: 1. I think it would be great to use the scikits as a staging ground for code that is going to be introduced into scipy. However, I don't think that this has ever happened. 
If someone can come up with an instance, I am happy to be found wrong. Even if it does become a staging ground for scipy, I would be uncomfortable requiring that package developers who have written mature code that they wish to contribute to scipy (e.g., Damian's clustering code, PyDSTools, etc.) first make their code a scikit. I look forward to all the great improvements to the scikits infrastructure. However, I would like to see people stay focused on scipy for just a little bit longer. We haven't had a release in quite some time and there is a lot of work left to be done. In fact, most of the suggestions that have been made for improving the scikits infrastructure would be great additions to the numpy/scipy infrastructure. I look forward to seeing these improvements, so please don't feel that I am arguing against them. 2. The problem I am raising is how we best bring new code into scipy, regardless of whether it comes from scikits or some other source. Let's imagine for the sake of argument that the best imaginable infrastructure for scikits exists. Will it entirely remove the need for us to look at the new code's APIs? Will it make it easier for all the scipy developers to test out the scikits? I am pretty confident that I can install the scikits (in fact I doubt that I would even use the scikits installers), but I don't. I have enough work just tracking development of numpy, scipy, and the other projects that I follow. 3. We have limited developer resources. This is the main reason that I think new code isn't better reviewed. Having more projects to follow and more infrastructure to develop isn't necessarily going to solve that issue. 4. The proposal I am making directly addresses code that has already been added to the scipy trunk. Rather than releasing 0.7 and being relatively stuck with the design decisions and API choices of the authors, I am proposing a middle ground. I am not interested in scipy.preview becoming a dumping ground in the future.
And I believe we could devise policies to ensure that it doesn't. Does everyone feel confident that all the APIs for packages included in the trunk since the last scipy release are stable and won't need to change? If you think that they might need to change, what do you propose to do? We could pull all the new packages and make them scikits. We could delay the release and review them all. We could release the code and just live with it. But are those the only choices we have? Could there be something like scipy.preview? OK, so here is my concern one last time before I take off for vacation. Since the 0.6 release we have decided to include a number of new packages, modules, etc. (e.g., radial basis functions, sparse matrices, hierarchical clustering). My feeling is that we all agreed that we should include this code and we are all happy to have done it. I don't feel that there has been a thorough code review of any of the code by the scipy developer community. Now we are preparing to release 0.7. Once that happens, are we prepared to effectively freeze the API for all that new code? Do we intend to accept that we will need two minor releases to implement any API changes? Is there a concern that the development cycle for scipy is too slow? That is my feeling. I would like to see scipy development become much more rapid now. I think that my modest proposal would help in that direction, but we will need to do much more. Automated nightly builds would be a great step. Better web infrastructure to attract new developers would help. But what I am proposing would require little effort and could basically be set up in a day or so if we agree that it would be useful. I called my proposal "modest" because I don't intend it to radically change how we develop scipy.
I was imagining that we would be able to work out fairly rigorous criteria for inclusion in scipy.preview (please see my proposal if you haven't looked at it yet: http://projects.scipy.org/scipy/scipy/browser/trunk/doc/seps/technology-preview.rst). I am open to requiring that code first become a scikit before moving to scipy.preview. At this point, it sounds like there is very little interest in pursuing an idea like scipy.preview. So unless I find, when I return in a week, that there has been more interest, I will drop it. Hopefully, the discussion to improve scikits will continue. I look forward to seeing it improve. If anyone is interested in editing the technology preview proposal, please feel free; it is checked into the scipy trunk. However, I ask that if you wish to propose improvements to scikits, you start a second SEP. Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From rmay31 at gmail.com Tue Nov 4 12:51:26 2008 From: rmay31 at gmail.com (Ryan May) Date: Tue, 04 Nov 2008 11:51:26 -0600 Subject: [SciPy-dev] Technology Previews and Sphinx Docstring Annotations In-Reply-To: References: <9457e7c80811040051t418b8e61r26f70f82dbae551e@mail.gmail.com> <49100ED3.7040107@ar.media.kyoto-u.ac.jp> <9457e7c80811040147j4962ea94s47e6f063c19f3c89@mail.gmail.com> <9457e7c80811040555p605e3453t1d2675100f445a35@mail.gmail.com> <5b8d13220811040618s677880f1qc3450191ebbe9fd5@mail.gmail.com> <91b4b1ab0811040828u3f1f9c1bvfacff0ce1a557f5a@mail.gmail.com> Message-ID: <49108B9E.8060309@gmail.com> Jarrod Millman wrote: > At this point, it sounds like there is very little interest in > pursuing an idea like scipy.preview. So unless I find, when I > return in a week, that there has been more interest, I will drop it. > Hopefully, the discussion to improve scikits will continue. I look > forward to seeing it improve.
If anyone is interested in editing the > technology preview proposal, please feel free; it is checked into the > scipy trunk. However, I ask that if you wish to propose improvements > to scikits, you start a second SEP. For what it's worth, I'm +1 on the idea of a scipy.preview. I understand the desire to not make scipy more experimental and confusing for users, and certainly understand David's point to not make building Scipy any more of a burden than it already is. However, I think a scipy.preview *uniquely* addresses the recent concern of API stability. I'm sorry, but growing up as a scikit will only get a set of functionality so much exposure. Sure, for a relatively complete package, a core group of users will be sure to install it. However, there is a group of casual users who will not install anything beyond scipy+numpy. And it is these users who usually find the roughest edges of the API; the parts that are intuitive to someone deep into the algorithms but are confusing to someone just starting out or wanting to test something. I simply don't think any amount of "cooking" as a scikit will iron out all API issues. So once code gets incorporated into scipy and released, API issues *will* crop up. It would be nice to have a place to put things so that they get a wide release *and* can still change API, rather than having a single shot to get it right. My $0.02.
Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma From bsouthey at gmail.com Tue Nov 4 13:03:05 2008 From: bsouthey at gmail.com (Bruce Southey) Date: Tue, 04 Nov 2008 12:03:05 -0600 Subject: [SciPy-dev] Technology Previews and Sphinx Docstring Annotations In-Reply-To: <49108B9E.8060309@gmail.com> References: <9457e7c80811040051t418b8e61r26f70f82dbae551e@mail.gmail.com> <49100ED3.7040107@ar.media.kyoto-u.ac.jp> <9457e7c80811040147j4962ea94s47e6f063c19f3c89@mail.gmail.com> <9457e7c80811040555p605e3453t1d2675100f445a35@mail.gmail.com> <5b8d13220811040618s677880f1qc3450191ebbe9fd5@mail.gmail.com> <91b4b1ab0811040828u3f1f9c1bvfacff0ce1a557f5a@mail.gmail.com> <49108B9E.8060309@gmail.com> Message-ID: <49108E59.3030001@gmail.com> Ryan May wrote: > Jarrod Millman wrote: > > >> At this point, it sounds like there is very little interest in >> pursuing an idea like scipy.preview. So unless I find that when I >> return in a week that there has been more interest I will drop it. >> Hopefully, the discussion to improve scikits will continue. I look >> forward to seeing it improve. If anyone is interested in editing the >> technology preview proposal, please feel free it is checked into the >> scipy trunk. However, I ask that if you wish to propose improvements >> to scikits that you start a second sep. >> > > For what it's worth, I'm +1 on the idea of a scipy.preview. I > understand the desire to not make scipy more experimental and confusing > for users, and certainly understand David's point to not make building > Scipy any more of a burden than it already is. However, I think a > scipy.preview *uniquely* addresses the recent concern of API stability. > > I'm sorry, but growing up as a scikit will only get a set of > functionality so much exposure. Sure for a relatively complete package, > a core group of users will be sure to install it. However, there is a > group of casual users that will not install anything beyond scipy+numpy. 
> And it is these users who usually find the roughest edges of the API; > the parts that are intuitive to someone deep into the algorithms but are > confusing to someone just starting out or wanting to test something. I > simply don't think any amount of "cooking" as a scikit will iron out all > API issues. So once code gets incorporated into scipy and > released, API issues *will* crop up. It would be nice to have a place > to put things so that they get a wide release *and* can still change > API, rather than having a single shot to get it right. > > My $0.02. > > Ryan > > +1 on this as I have experienced the problems with Scikits. I would note that Python provides the __future__ module (http://www.python.org/doc/2.5.2/lib/module-future.html), which serves a somewhat similar goal. Bruce From david at ar.media.kyoto-u.ac.jp Tue Nov 4 13:02:10 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 05 Nov 2008 03:02:10 +0900 Subject: [SciPy-dev] Technology Previews and Sphinx Docstring Annotations In-Reply-To: <49108B9E.8060309@gmail.com> References: <9457e7c80811040051t418b8e61r26f70f82dbae551e@mail.gmail.com> <49100ED3.7040107@ar.media.kyoto-u.ac.jp> <9457e7c80811040147j4962ea94s47e6f063c19f3c89@mail.gmail.com> <9457e7c80811040555p605e3453t1d2675100f445a35@mail.gmail.com> <5b8d13220811040618s677880f1qc3450191ebbe9fd5@mail.gmail.com> <91b4b1ab0811040828u3f1f9c1bvfacff0ce1a557f5a@mail.gmail.com> <49108B9E.8060309@gmail.com> Message-ID: <49108E22.9060907@ar.media.kyoto-u.ac.jp> Ryan May wrote: > > For what it's worth, I'm +1 on the idea of a scipy.preview. I > understand the desire to not make scipy more experimental and confusing > for users, and certainly understand David's point to not make building > Scipy any more of a burden than it already is. The point is not so much that it will make scipy more complicated to build, but that the work *has to be done anyway*, whether it is in a scikit, on Google Code, or in scipy.
Having good infrastructure for scikits is vaporware, but scipy.preview relies on regular scipy releases, and frankly, this feels even more like vaporware to me, if only by looking at the history of the project. How many times a year would you like to see scipy released for scipy.preview to be useful? And how many times has it been in recent years? cheers, David From nwagner at iam.uni-stuttgart.de Tue Nov 4 13:36:38 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 04 Nov 2008 19:36:38 +0100 Subject: [SciPy-dev] ValueError: Buffer datatype mismatch (rejecting on 'l') Message-ID: Hi all, scipy.test() results in

Ran 2595 tests in 47.866s
FAILED (KNOWNFAIL=2, errors=26, failures=2)
======================================================================
ERROR: test_kdtree.test_random_compiled.test_approx
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib64/python2.5/site-packages/nose-0.10.4-py2.5.egg/nose/case.py", line 363, in setUp
    try_run(self.inst, ('setup', 'setUp'))
  File "/usr/local/lib64/python2.5/site-packages/nose-0.10.4-py2.5.egg/nose/util.py", line 453, in try_run
    return func()
  File "/usr/local/lib64/python2.5/site-packages/scipy/spatial/tests/test_kdtree.py", line 133, in setUp
    self.kdtree = cKDTree(self.data)
  File "ckdtree.pyx", line 229, in scipy.spatial.ckdtree.cKDTree.__init__ (scipy/spatial/ckdtree.c:1039)
ValueError: Buffer datatype mismatch (rejecting on 'l')

Nils From jh at physics.ucf.edu Tue Nov 4 13:39:24 2008 From: jh at physics.ucf.edu (jh at physics.ucf.edu) Date: Tue, 04 Nov 2008 13:39:24 -0500 Subject: [SciPy-dev] toward scipy 1.0 Message-ID: This message is about cleaning up scipy and releasing 1.0. Tied up in this topic is the issue of how code is brought into scipy. We recognize that there is work to do before scipy is 1.0-ready, and much of that work is not merely adding to what is there.
So, I also think that we need to solve the problems that got us here before 1.0. Since this is a message about solving a problem in our process, some may think I am placing blame or don't support our developers. So, let me state at the outset that we could not have come as far as we have without a lot of dedicated work from many people, particularly the current set of developers. Further, they have all recognized and spoken about the organizational issues we have had, and about greater community input as a route out. All I'm intending here is to move us in that direction. "Jarrod Millman" wrote in another thread: > I imagine that a project could easily start as a > scikit and mature there. Then a number of developers decide that it > belongs in scipy proper. I think this is part of the problem. We need to define a clear route to inclusion for new packages. That route should include a period of community review and then a vote, and not merely be the decision of a few developers in private discussion. This should be a formalized process, so that if we have a disagreement, the result is clear and we all agree to live with it. I feel that the small number of people involved in decisions, and the lack of community review, has brought scipy's organization to where it is today. We are all human. As recently as the 1.2.0 numpy release we had significant API changes angling for inclusion at the last minute, and only last-minute community outcry let cooler heads prevail. To solve this problem, we need to open the process and make it more deliberate and deliberative. We don't need scipy to be nimble, we need it to be stable and well thought-out. From discussions with Stefan, I understand that some parts of scipy are not maintained and others don't hang together or mesh well with the rest of the package. Some module names don't make sense (scipy.stsci was recently pointed out). Docs are arcane (stats) or lacking.
We now have a decade or more of use experience with scipy and its predecessors, enough to make something coherent and long-term stable out of it, but this refactoring has not yet happened. So, I propose a community reassessment of scipy and a refactoring and doc effort before 1.0. The goal is that in 6.0 your 1.0-based code still works well, and we look at it and say, "That structure really stood the test of time. It still makes sense today." Here's what I propose:

1. reassess what's there:
   - break it into components for discussion purposes
   - decide for each whether it's:
     - used
     - self-consistent
     - complete at some level
     - maintained
     - documented for normal users
     - a build problem
     - well integrated into the rest of scipy

   This part of the process would be done by small teams and would result in a short report for each package recommending whether it is:
   - worth keeping as is
   - needs specific work (docs, build stuff, tests, etc.)
   - needs a maintainer
   - needs to be refactored, removed, or merged into something else

   Any component for which a team does not volunteer to do the review would be a good candidate for removal based on lack of use.

2. hunt for looming incompatible API changes:
   - in the code as it exists
   - that result from the reassessment above

   These would be collected on a page(s) for specific community review and comment.

3. community comments on the collection of reports online and looks for consensus on any overall restructuring

4. do the work resulting from the reassessment

   Yes, this is the hard part. Likely we will pick up some new developers/maintainers from step 1 and they pick up much of the work.

5. present the refactored package in a declared-unstable release (0.7.99?)

6. stabilize it and release it as 0.8

7. run a doc marathon based on that release (I can pay one or more writers)

8. allow significant additions, if any, until 0.9, but only after they have been scikits for a year *and* pass a community-based acceptance process

9. release scipy 1.0 as a cleaned-up package with community testing and full docs, but no significant new code after 0.9

10. include new code in future releases after they have been scikits for a year *and* pass a community-based acceptance process

I don't expect most scikits to be accepted after 1 year, and expect that some never will be. I think this is a 2-3-year process, of which steps 1-4 could happen in 6-8 months, say by next June. My guess is that, except for docs, most of the components would come through pretty much unchanged, but a few would get a real working over or be removed. We'd need a web tool similar to the doc wiki for this. Or, we could use a wiki and a lot of discipline. We would also need to define some sort of voting process after the assessment period. Whoever gets to vote, I propose a 2/3 majority required for inclusion of a new package, and an 80% majority for a major API change (which waits for the next major release). There is much detail to fill in, such as what this community review process really looks like, not to mention whether any of this is a good idea at all. Let me know your thoughts. --jh-- From rob.clewley at gmail.com Tue Nov 4 14:18:27 2008 From: rob.clewley at gmail.com (Rob Clewley) Date: Tue, 4 Nov 2008 14:18:27 -0500 Subject: [SciPy-dev] toward scipy 1.0 In-Reply-To: References: Message-ID: Hi, > We don't need scipy to be nimble, we need it to be stable and well > thought-out. I basically agree with your proposal. A substantial re-organization has to occur before scipy becomes Frankenstein's monster a few years down the line. It seems that scipy will only become stable and well thought out by making substantial API changes and refactoring of all sorts of code in different modules. So, yes, we need a big community effort to do that effectively, and that's what I want to address below. > Here's what I propose: > > 1. reassess what's there: > 2. hunt for looming incompatible API changes: > 3.
community comments on the collection of reports online and looks > for consensus on any overall restructuring These three initial steps are going to be very hard for people like myself who haven't ever delved into all the scipy modules, let alone the scikits. May I suggest that, to involve as many of the community as possible, part of these steps include an effective way to visualize and analyze all the co-dependent or merely related modules of scipy. > We'd need a web tool similar to the doc wiki for this. Or, we could > use a wiki and a lot of discipline. In these days of easily set-up online tools, gathering and presenting information from the community could be done even more effectively than with the somewhat linear style of a wiki. The best way to do this could be an online graphical browser for the repository. I don't mean a hierarchical navigator like Windows Explorer or the Mac Finder; I mean one of those interactive and "springy" associative network diagrams that are increasingly common on the web (e.g. the graphical thesaurus, music map). These would be community-edited diagrams, so they would build over time as people work through the code base. One diagram could show cross-dependencies from imports (maybe generated automatically), and another could show where modules have overlapping functionality. Colors and other graphical markup of the nodes and links could indicate the extent or significance of these overlaps, to help flag potential problems or places to focus effort. The nodes could link to the code sources and show markup there that highlights and collects comments about the overlaps and ideas. I'm just thinking off the top of my head here, and I have no idea if there's already a graphical repository browser that we could use. It would surely be a tremendous help in organizing any large software project, even for small groups of developers. -Rob From oliphant at enthought.com Tue Nov 4 15:50:05 2008 From: oliphant at enthought.com (Travis E.
Oliphant) Date: Tue, 04 Nov 2008 14:50:05 -0600 Subject: [SciPy-dev] a modest proposal for technology previews In-Reply-To: References: <490FD2F6.8000907@enthought.com> Message-ID: <4910B57D.8060402@enthought.com> > Currently, the problem is that someone proposes or writes something > that we agree should be part of scipy. We discuss on the list perhaps > where the code should go or what it should be called. We may discuss > the API. Then the code is included in the trunk and we make a > release. For a large number of scipy users, this may be the first time > they have been made aware of the new code and its API. They may find > that the code doesn't meet their needs. Also, even though the code > has been discussed on the list, most of us are so busy we don't have > very much time to closely look at the code. So when we agree to > include it, we only have a general sense that the code has a > reasonable API. > > I was proposing scipy.preview to try and address these and other > concerns. It seems doubtful to me that scikits addresses this > problem. And I don't think that we currently have enough resources to > make scikits better serve this purpose. I don't have time to work on > scikits and I don't pay much attention to them, unfortunately. If > someone were to step forward and start pushing scikits, that would be > great and I would be happy to see that happen. But even if that were > to happen, we would still need a process to incorporate a scikit into > scipy if we decided we wanted to. My proposal is aimed at solving > that problem. > > My biggest point is "why call it scipy.preview when scikits.forscipy works just as well?" In other words, create a *single* scikit that is the equivalent of scipy.preview. What is the benefit of having it in scipy.preview? -Travis From oliphant at enthought.com Tue Nov 4 16:01:12 2008 From: oliphant at enthought.com (Travis E.
Oliphant) Date: Tue, 04 Nov 2008 15:01:12 -0600 Subject: [SciPy-dev] Technology Previews and Sphinx Docstring Annotations In-Reply-To: References: <9457e7c80811040051t418b8e61r26f70f82dbae551e@mail.gmail.com> <49100ED3.7040107@ar.media.kyoto-u.ac.jp> <9457e7c80811040147j4962ea94s47e6f063c19f3c89@mail.gmail.com> <9457e7c80811040555p605e3453t1d2675100f445a35@mail.gmail.com> <5b8d13220811040618s677880f1qc3450191ebbe9fd5@mail.gmail.com> <91b4b1ab0811040828u3f1f9c1bvfacff0ce1a557f5a@mail.gmail.com> Message-ID: <4910B818.9040904@enthought.com> Jarrod Millman wrote: > I absolutely agree with the ideas presented about scikits and look > forward to seeing the numerous scikits improvements. I feel that I > have gotten into a discussion where the counterargument to what I am > proposing is something I strongly support. I also feel that the > counterargument doesn't directly address my concern; but it may be > that I am simply perceiving a problem that no one else believes > exists. > Let me make my point again. I'm arguing that instead of scipy.preview, let's just make a *single* scikit called scikit.preview or scikit.forscipy or scikit.future_scipy or whatever. This will create some incentive to make scikits easier to install generally, as we want to get the future_scipy out there being used. I'm very interested, though, to hear from developers of modules that they would like to see in SciPy but that have not made it there yet. I'm very interested in the question of how we make it easier to contribute to SciPy. -Travis From gael.varoquaux at normalesup.org Tue Nov 4 16:27:57 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Tue, 4 Nov 2008 22:27:57 +0100 Subject: [SciPy-dev] a modest proposal for technology previews In-Reply-To: <4910B57D.8060402@enthought.com> References: <490FD2F6.8000907@enthought.com> <4910B57D.8060402@enthought.com> Message-ID: <20081104212757.GA32578@phare.normalesup.org> On Tue, Nov 04, 2008 at 02:50:05PM -0600, Travis E.
Oliphant wrote: > In other words, create a *single* scikit that is the equivalent of > scipy.preview. What is the benefit of having it in scipy.preview? It gets built and shipped on people's boxes. Otherwise, as I think Ryan was pointing out, a large fraction of users simply never get to see it and provide feedback. Gaël From benny.malengier at gmail.com Tue Nov 4 16:54:14 2008 From: benny.malengier at gmail.com (Benny Malengier) Date: Tue, 4 Nov 2008 22:54:14 +0100 Subject: [SciPy-dev] a modest proposal for technology previews In-Reply-To: <20081104212757.GA32578@phare.normalesup.org> References: <490FD2F6.8000907@enthought.com> <4910B57D.8060402@enthought.com> <20081104212757.GA32578@phare.normalesup.org> Message-ID: 2008/11/4 Gael Varoquaux > On Tue, Nov 04, 2008 at 02:50:05PM -0600, Travis E. Oliphant wrote: > > In other words, create a *single* scikit that is the equivalent of > > scipy.preview. What is the benefit of having it in scipy.preview? > > It gets built and shipped on people's boxes. Otherwise, as I think Ryan > was pointing out, a large fraction of users simply never get to see it > and provide feedback. > As a user of scipy I can only say that although I have used it for years, I have no idea what is in it apart from the pieces I need (odes, weave). If I need new functionality, I google/search for it; if it is in scipy, then great, and if not, I'll start installing whatever I need, or code it up (the spline part in scipy, e.g., did not do it for me, and DAE solvers are missing). The idea of scipy as a whole just does not come across; it is a collection of pieces for scientific computing. If I need to install scikits for the functionality I need, then so be it. I want to save time coding, so I'll have researched beforehand what is available anyway. Having a scikit will give me more reassurance, though, than some code on SourceForge, as the link to numpy/scipy is clear.
The important part as a user is that if my code uses numpy arrays, I really don't want to convert to another type of array just to use another library. So given two more or less equal choices, it's clear which one would win. scikits look just fine for this. I haven't followed this list for long, but the idea that one could view scipy as a tool whose users know everything it contains sounds strange to me; the tools are just not there to easily discover what scipy covers. A small anecdote to end: I do a lot of line searches, and coded my own because the line search in scipy.optimize is not what I need. It looks like http://scipy.org/scipy/scikits/browser/trunk/openopt/scikits/openopt/solvers/optimizers/line_search has more than enough of what I needed, but to find this from a search on google would be really hard. So more documentation for scipy and the scikits looks like the main point to make scikits, and scipy, more visible. Putting things in a preview part will not do this in my opinion; working on documentation and visibility on the web will, however it is then packaged. Greetings, Benny From aarchiba at physics.mcgill.ca Tue Nov 4 17:34:58 2008 From: aarchiba at physics.mcgill.ca (Anne Archibald) Date: Tue, 4 Nov 2008 17:34:58 -0500 Subject: [SciPy-dev] Technology Previews and Sphinx Docstring Annotations In-Reply-To: <4910B818.9040904@enthought.com> References: <49100ED3.7040107@ar.media.kyoto-u.ac.jp> <9457e7c80811040147j4962ea94s47e6f063c19f3c89@mail.gmail.com> <9457e7c80811040555p605e3453t1d2675100f445a35@mail.gmail.com> <5b8d13220811040618s677880f1qc3450191ebbe9fd5@mail.gmail.com> <91b4b1ab0811040828u3f1f9c1bvfacff0ce1a557f5a@mail.gmail.com> <4910B818.9040904@enthought.com> Message-ID: 2008/11/4 Travis E. Oliphant : > Jarrod Millman wrote: >> I absolutely agree with the ideas presented about scikits and look >> forward to seeing the numerous scikits improvements.
I feel that I >> have gotten into a discussion where the counter argument to what I am >> proposing is something I strongly support. I also feel that the >> counterargument doesn't directly address my concern; but it may be >> that I am simply perceiving a problem that no one else believes >> exists. >> > Let me make my point again. I'm arguing that instead of scipy.preview, > let's just make a *single* scikit called scikit.preview or > scikit.forscipy or scikit.future_scipy or whatever. This will create > some incentive to make scikits easier to install generally as we want to > get the future_scipy out there being used. > > I'm very interested, though, to hear what developers of modules that > they would like to see in SciPy but have not made it there yet, think. > > I'm very interested in the question of how do we make it easier to > contribute to SciPy. As a developer who has written the module that is sparking this discussion, if the route to inclusion in scipy were "make a scikit, maintain and distribute it until you get enough user feedback to judge whether the API is optimal, then move it fully-formed into scipy" my code would simply gather dust and never be included. I don't have the time and energy to maintain a scikit. If I were a user I wouldn't bother downloading and installing a scikit in the six different computing environments I have used this week, particularly since it uses compiled code (but no additional dependencies). Why should I expect others to be different? The question is really, how do we take tested, apparently production-ready code, and get it out there so users can get at it? The current approach is "put it in scipy and live with the API". One proposal is, put it in scipy - which is still quite unstable, API-wise - but marked as "new, will be stabilizing until the next release". 
The other proposal is to organize a community - nobody has volunteered to do any actual work yet - to build, maintain, publicize and distribute scikits so that they are as accessible as scipy itself. Frankly, I think the effect of agreeing on the scikits proposal - whatever its theoretical merits - will be to leave things at the status quo, except possibly for a higher barrier to getting good code into scipy: "put it in a scikit", we will say, and into a scikit it will go (or not), where it will molder and rot, never to be included. If people want to make the scikits option reasonable, I suggest, instead of arguing on the mailing list, going out and getting things set up so it really is easy to add packages. *Then* come back and tell us how it's a better option than what we have now. In the meantime, we can either go on with the approach we have - live with occasional API breakage for new components of scipy - or spend an evening getting scipy.preview set up. Anne From wnbell at gmail.com Tue Nov 4 18:16:41 2008 From: wnbell at gmail.com (Nathan Bell) Date: Tue, 4 Nov 2008 18:16:41 -0500 Subject: [SciPy-dev] Technology Previews and Sphinx Docstring Annotations In-Reply-To: References: <9457e7c80811040147j4962ea94s47e6f063c19f3c89@mail.gmail.com> <9457e7c80811040555p605e3453t1d2675100f445a35@mail.gmail.com> <5b8d13220811040618s677880f1qc3450191ebbe9fd5@mail.gmail.com> <91b4b1ab0811040828u3f1f9c1bvfacff0ce1a557f5a@mail.gmail.com> <4910B818.9040904@enthought.com> Message-ID: On Tue, Nov 4, 2008 at 5:34 PM, Anne Archibald wrote: > > If people want to make the scikits option reasonable, I suggest, > instead of arguing on the mailing list, going out and getting things > set up so it really is easy to add packages. *Then* come back and tell > us how it's a better option than what we have now. In the meantime, we > can either go on with the approach we have - live with occasional API > breakage for new components of scipy - or spend an evening getting > scipy.preview set up.
> I completely agree. Scikits is not a viable incubator at the moment. IMO the fact that someone is willing to maintain a submodule of SciPy is far more important than matters of API stability. I'd argue that scipy.spatial should exist in SciPy 0.7 so people actually use the thing. A disclaimer in the docstring is sufficient warning. Those that ignore the warning are still better off than they would have been without the code in the first place. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From oliphant at enthought.com Tue Nov 4 18:18:56 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Tue, 04 Nov 2008 17:18:56 -0600 Subject: [SciPy-dev] Technology Previews and Sphinx Docstring Annotations In-Reply-To: References: <9457e7c80811040147j4962ea94s47e6f063c19f3c89@mail.gmail.com> <9457e7c80811040555p605e3453t1d2675100f445a35@mail.gmail.com> <5b8d13220811040618s677880f1qc3450191ebbe9fd5@mail.gmail.com> <91b4b1ab0811040828u3f1f9c1bvfacff0ce1a557f5a@mail.gmail.com> <4910B818.9040904@enthought.com> Message-ID: <4910D860.9020003@enthought.com> Nathan Bell wrote: > On Tue, Nov 4, 2008 at 5:34 PM, Anne Archibald > wrote: > >> If people want to make the scikits option reasonable, I suggest, >> instead of arguing on the mailing list, going out and getting things >> set up so it really is easy to add packages. *Then* come back and tell >> us how it's a better option than what we have now. In the meantime, we >> can either go on with the approach we have - live with occasional API >> breakage for new components of scipy - or spend an evening getting >> scipy.preview set up. >> >> > > I completely agree. Scikits is not a viable incubator at the moment. > > IMO the fact that someone is willing to maintain a submodule of SciPy > is far more important than matters of API stability. > Hear, hear.... > I'd argue that scipy.spatial should exist in SciPy 0.7 so people > actually use the thing. A disclaimer in the docstring is sufficient > warning.
Those that ignore the warning are still better off than they > would have been without the code in the first place. > Again, well spoken... -Travis From eads at soe.ucsc.edu Tue Nov 4 18:36:28 2008 From: eads at soe.ucsc.edu (Damian Eads) Date: Tue, 4 Nov 2008 15:36:28 -0800 Subject: [SciPy-dev] Technology Previews and Sphinx Docstring Annotations In-Reply-To: References: <9457e7c80811040147j4962ea94s47e6f063c19f3c89@mail.gmail.com> <9457e7c80811040555p605e3453t1d2675100f445a35@mail.gmail.com> <5b8d13220811040618s677880f1qc3450191ebbe9fd5@mail.gmail.com> <91b4b1ab0811040828u3f1f9c1bvfacff0ce1a557f5a@mail.gmail.com> <4910B818.9040904@enthought.com> Message-ID: <91b4b1ab0811041536x684bd832h9385ae59b572f7b4@mail.gmail.com> I'm on the train so I'll be short. I agree with Anne's concerns. There are lots of proposals being circulated for fairly extensive community review panels, paperwork, and refactoring without any real evidence of who is going to do the work. Many of us start writing new SciPy packages simply because they are related to our own research. Expecting huge participation toward a 1.0 release with a potentially unpassable barrier to entry for new code is a hard sell to me as a developer. I do want to remark that I perused Anne's code on the branch, and was quite impressed with it. The code was documented according to our RST standards, the documentation was well-written, regression tests covered many corner cases of the API, the code was written with what I consider fine practice (I'm known at work for being a stickler for cleanliness), and the API had a good design. I think it is a model case for inclusion in SciPy as a tech preview. I will say that crafting good API design, writing clean code, and developing good tests is damn hard and very time-consuming. Expecting us to follow an extensive review and voting period may just cause many to give up and withdraw our packages from consideration.
After all, for many of us, this isn't our day job: we have research to do, papers to submit, Ph.D.'s to defend, and grant proposals to write. A package inclusion process that is more supportive than defeating will encourage developers to make more general contributions such as documentation clean-up, code refactoring, API restructuring, and bug fixing. But I'm not so sure we have the resources at this time for a massive revamp as has been proposed. Damian On 11/4/08, Anne Archibald wrote: > 2008/11/4 Travis E. Oliphant : >> Jarrod Millman wrote: >>> I absolutely agree with the ideas presented about scikits and look >>> forward to seeing the numerous scikits improvements. I feel that I >>> have gotten into a discussion where the counter argument to what I am >>> proposing is something I strongly support. I also feel that the >>> counterargument doesn't directly address my concern; but it may be >>> that I am simply perceiving a problem that no one else believes >>> exists. >>> >> Let me make my point again. I'm arguing that instead of scipy.preview, >> let's just make a *single* scikit called scikit.preview or >> scikit.forscipy or scikit.future_scipy or whatever. This will create >> some incentive to make scikits easier to install generally as we want to >> get the future_scipy out there being used. >> >> I'm very interested, though, to hear what developers of modules that >> they would like to see in SciPy but have not made it there yet, think. >> >> I'm very interested in the question of how do we make it easier to >> contribute to SciPy. > > As a developer who has written the module that is sparking this > discussion, if the route to inclusion in scipy were "make a scikit, > maintain and distribute it until you get enough user feedback to judge > whether the API is optimal, then move it fully-formed into scipy" my > code would simply gather dust and never be included. I don't have the > time and energy to maintain a scikit. 
> > If I were a user I wouldn't bother downloading and installing a scikit > in the six different computing environments I have used this week, > particularly since it uses compiled code (but no additional > dependencies). Why should I expect others to be different? > > The question is really, how do we take tested, apparently > production-ready code, and get it out there so users can get at it? > The current approach is "put it in scipy and live with the API". One > proposal is, put it in scipy - which is still quite unstable, API-wise > - but marked as "new, will be stabilizing until the next release". The > other proposal is organize a community - nobody has volunteered to do > any actual work yet - to build, maintain, publicize and distribute > scikits so that they are as accessible as scipy itself. > > Frankly, I think the effect of agreeing on the scikits proposal - > whatever its theoretical merits - will be to leave things at the > status quo, except possibly for a higher barrier to getting good code > into scipy: "put it in a scikit", we will say, and into a scikit it > will go (or not), where it will molder and rot, never to be included. > > If people want to make the scikits option reasonable, I suggest, > instead of arguing on the mailing list, going out and getting things > set up so it really is easy to add packages. *Then* come back and tell > us how it's a better option than what we have now. In the meantime, we > can either go on with the approach we have - live with occasional API > breakage for new components of scipy - or spend an evening getting > scipy.preview set up. > > > Anne > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -- Sent from my mobile device ----------------------------------------------------- Damian Eads Ph.D. 
Student Jack Baskin School of Engineering, UCSC E2-489 1156 High Street Machine Learning Lab Santa Cruz, CA 95064 http://www.soe.ucsc.edu/~eads From pav at iki.fi Tue Nov 4 18:48:25 2008 From: pav at iki.fi (Pauli Virtanen) Date: Tue, 4 Nov 2008 23:48:25 +0000 (UTC) Subject: [SciPy-dev] Technology Previews and Sphinx Docstring Annotations References: <9457e7c80811040147j4962ea94s47e6f063c19f3c89@mail.gmail.com> <9457e7c80811040555p605e3453t1d2675100f445a35@mail.gmail.com> <5b8d13220811040618s677880f1qc3450191ebbe9fd5@mail.gmail.com> <91b4b1ab0811040828u3f1f9c1bvfacff0ce1a557f5a@mail.gmail.com> <4910B818.9040904@enthought.com> Message-ID: Tue, 04 Nov 2008 18:16:41 -0500, Nathan Bell wrote: [clip] > I'd argue that scipy.spatial should exist in SciPy 0.7 so people > actually use the thing. A disclaimer in the docstring is sufficient > warning. Those that ignore the warning are still better off than they > would have been without the code in the first place. If this is not enough, it could also be possible to issue a FutureWarning on module import. Obnoxious, yes, but hard to miss. But I believe that serious users would read the documentation or docstrings anyway. Which brings me to a point: would the authors of spatial be willing to write some documentation for their module, in addition to docstrings, now that we have some infrastructure in place for that? Basically, a page that groups the contents of the module logically and gives some brief background, defines concepts, and maybe shows some small examples would come in handy for future users. Probably anything is an improvement over an autogenerated list of classes and functions. Unfortunately, the current Sphinx-generated Scipy docs are IMO *not* good examples of how to do things right. This [1] is going in the right direction, and the Python reference docs [2] are not bad in my opinion. .. [1] http://docs.scipy.org/doc/numpy/reference/arrays.ndarray.html Check the "Show source" link on the left sidebar. .. [2] eg.
http://docs.python.org/library/datetime.html -- Pauli Virtanen From pgmdevlist at gmail.com Tue Nov 4 19:20:54 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Tue, 4 Nov 2008 19:20:54 -0500 Subject: [SciPy-dev] toward scipy 1.0 In-Reply-To: References: Message-ID: <200811041920.54975.pgmdevlist@gmail.com> All, I'm not really involved in the development of scipy, so I don't expect my opinion to matter much. Still, I'm getting a bit frustrated with Scipy at the moment: it looks like the release of 0.70 has been postponed month after month since spring because of some blocking tickets in modules I have little to no use for. It's hard to explain to some colleagues of mine that they can't use the scikits.timeseries package because it relies on a couple of functions that are not part of any official release yet. If we're going to reorganize scipy, I'd be pretty much in favor of modularity: let me install just the packages I need (scipy.stats, scipy.special, scipy.whatever) without bothering about the ones I'll never use. Breaking scipy into smaller packages sounds a lot like scikits, but is it that bad a thing ? Would it make development more difficult ? Would it make installation and maintenance more complex ? As long as there's one standard for setup.py, things should go OK, shouldn't they ? Because they'd be part of scipy and not a scikit, the different modules would have a varnish of stability that the scikits don't necessarily have, and it might simplify the transformation of a scikit to a scipy module. As you'd have guessed, I'm all in favor of a kind of central repository like cran or ctan. Each scikit could come with a few keywords (provided by their developer) to simplify the cataloguing, and with a central page it shouldn't be that difficult to know what is being developed and at what pace, or even just what is available. 
It might reduce the chances of duplicated code, help future developers by providing some examples, and generally be a good PR system for scikits/scipy packages... And yes, why not use some kind of graphical browser? Now, of course, I'll go with the flow no matter what... From cournape at gmail.com Tue Nov 4 20:25:28 2008 From: cournape at gmail.com (David Cournapeau) Date: Wed, 5 Nov 2008 10:25:28 +0900 Subject: [SciPy-dev] Technology Previews and Sphinx Docstring Annotations In-Reply-To: References: <9457e7c80811040147j4962ea94s47e6f063c19f3c89@mail.gmail.com> <9457e7c80811040555p605e3453t1d2675100f445a35@mail.gmail.com> <5b8d13220811040618s677880f1qc3450191ebbe9fd5@mail.gmail.com> <91b4b1ab0811040828u3f1f9c1bvfacff0ce1a557f5a@mail.gmail.com> <4910B818.9040904@enthought.com> Message-ID: <5b8d13220811041725w7d76423bw17e1d43dfce00f59@mail.gmail.com> On Wed, Nov 5, 2008 at 7:34 AM, Anne Archibald wrote: > 2008/11/4 Travis E. Oliphant : >> Jarrod Millman wrote: >>> I absolutely agree with the ideas presented about scikits and look >>> forward to seeing the numerous scikits improvements. I feel that I >>> have gotten into a discussion where the counter argument to what I am >>> proposing is something I strongly support. I also feel that the >>> counterargument doesn't directly address my concern; but it may be >>> that I am simply perceiving a problem that no one else believes >>> exists. >>> >> Let me make my point again. I'm arguing that instead of scipy.preview, >> let's just make a *single* scikit called scikit.preview or >> scikit.forscipy or scikit.future_scipy or whatever. This will create >> some incentive to make scikits easier to install generally as we want to >> get the future_scipy out there being used. >> >> I'm very interested, though, to hear what developers of modules that >> they would like to see in SciPy but have not made it there yet, think. >> >> I'm very interested in the question of how do we make it easier to >> contribute to SciPy.
> > As a developer who has written the module that is sparking this > discussion, if the route to inclusion in scipy were "make a scikit, > maintain and distribute it until you get enough user feedback to judge > whether the API is optimal, then move it fully-formed into scipy" my > code would simply gather dust and never be included. I don't have the > time and energy to maintain a scikit. That's what I don't understand: there is almost no difference between maintaining a scikit and a scipy submodule. In both cases you have to write some setup.py + the module itself. To get the sources, it is scikit svn vs scipy svn. Both Damian and you made this case, so I would like to understand what's so different from your POV, because I just don't get it ATM. Maybe there is some confusion about how a scikit can be made and distributed (the documentation could certainly be improved). Having a scikit also means that if you are willing to do it, you can easily build binary installers and source distributions *in one command*. You can't do that with scipy, which won't change for the foreseeable future. And you don't need to care about breaking scipy. > > The question is really, how do we take tested, apparently > production-ready code, and get it out there so users can get at it? > The current approach is "put it in scipy and live with the API". Not exactly: the "live with the API" case has been made for features which have been in scipy for years, that many people depend on. Also, I can't help noticing that in both Damian's and your case, what happened is not what scipy.preview is about, but to put code directly in scipy. And I also think the process of scipy.preview does not scale much. It worked in your case, but will it work if many people want to put code in scipy?
From aarchiba at physics.mcgill.ca Tue Nov 4 20:57:18 2008 From: aarchiba at physics.mcgill.ca (Anne Archibald) Date: Tue, 4 Nov 2008 20:57:18 -0500 Subject: [SciPy-dev] Technology Previews and Sphinx Docstring Annotations In-Reply-To: <5b8d13220811041725w7d76423bw17e1d43dfce00f59@mail.gmail.com> References: <9457e7c80811040555p605e3453t1d2675100f445a35@mail.gmail.com> <5b8d13220811040618s677880f1qc3450191ebbe9fd5@mail.gmail.com> <91b4b1ab0811040828u3f1f9c1bvfacff0ce1a557f5a@mail.gmail.com> <4910B818.9040904@enthought.com> <5b8d13220811041725w7d76423bw17e1d43dfce00f59@mail.gmail.com> Message-ID: 2008/11/4 David Cournapeau : > On Wed, Nov 5, 2008 at 7:34 AM, Anne Archibald > wrote: >> 2008/11/4 Travis E. Oliphant : >>> Jarrod Millman wrote: >>>> I absolutely agree with the ideas presented about scikits and look >>>> forward to seeing the numerous scikits improvements. I feel that I >>>> have gotten into a discussion where the counter argument to what I am >>>> proposing is something I strongly support. I also feel that the >>>> counterargument doesn't directly address my concern; but it may be >>>> that I am simply perceiving a problem that no one else believes >>>> exists. >>>> >>> Let me make my point again. I'm arguing that instead of scipy.preview, >>> let's just make a *single* scikit called scikit.preview or >>> scikit.forscipy or scikit.future_scipy or whatever. This will create >>> some incentive to make scikits easier to install generally as we want to >>> get the future_scipy out there being used. >>> >>> I'm very interested, though, to hear what developers of modules that >>> they would like to see in SciPy but have not made it there yet, think. >>> >>> I'm very interested in the question of how do we make it easier to >>> contribute to SciPy. 
>> >> As a developer who has written the module that is sparking this >> discussion, if the route to inclusion in scipy were "make a scikit, >> maintain and distribute it until you get enough user feedback to judge >> whether the API is optimal, then move it fully-formed into scipy" my >> code would simply gather dust and never be included. I don't have the >> time and energy to maintain a scikit. > > That's what I don't understand: there is almost no difference between > maintaing a scikit and a scipy submodule. In both case you have to > write some setup.py + the module itself. To get the sources, it is > scikit vs scipy svn. Both Damian and you made this case, so I would > like to understand what's so different from your POV, because I just > don't get it ATM. Maybe there are some confusion on how a scikit can > be made and distributer (the documentation could certainly be > improved). A scipy submodule will be distributed to users. It has a bug tracker, and other people who read the bug tracker, and can possibly fix minor bugs. Users can find it. It gets built and tested on all relevant architectures. The unit tests get rerun whenever some piece of scipy changes. I don't have to do *any* of that. If something goes wrong, okay, I can go in and try to fix it, but other than that I can leave the working code as it is. If it were a scikit I would have to scrounge build machines, I would have to rerun the unit tests every time some part of scipy I depend on changes, I would have to set up a bug tracker, and I would have to publicize it. More, it makes it difficult for other packages: how are dependencies between scikits handled? Is there a way to automatically download and install all scikits a package depends on? And: how many scikits have ever been incorporated in scipy? > Having a scikit also means that if you are willing to do it, you can > easily build binaries installers, source distributions *in one > command*. 
You can't do that with scipy, which won't change for the > foreseable future. And you don't need to care about breaking scipy. I can't build binary distributions at all; I don't have access to (for example) Windows machines. And scipy.preview is intended for code stable enough that breaking scipy is not a concern (i.e., not more of a problem than for the rest of scipy). >> The question is really, how do we take tested, apparently >> production-ready code, and get it out there so users can get at it? >> The current approach is "put it in scipy and live with the API". > > Not exactly: the "live with the API" case has been made for features > which have been in scipy for years, that many people depend on. > > Also, I can't help noticing than in both Damian and your case, what > happened is not what scipy.preview is about, but to put code directly > in scipy. And I also think the process of scipy.preview does not scale > much. It worked in your case, but will it work if many people want to > put code in scipy ? If scipy.preview had existed, I would have put my code there. In fact, if someone creates it, my code will probably be the first to be moved there. I think my code is useful as is, and I don't think I'll need to change the API of what's there, but when we start seeing users it may be a different story. (Specifically, I think there will be a demand for annotated kdtrees and some way to efficiently implement custom tree traversal.) 
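[Editorial note: for readers unfamiliar with the data structure under discussion, here is a purely illustrative sketch of what a k-d tree provides. This is not Anne's code (her module is compiled and far more capable); it only shows the build-once, query-with-pruning pattern that nearest-neighbour search relies on.]

```python
# Minimal k-d tree: recursive build plus nearest-neighbour query.
# Illustrative only; not the scipy.spatial implementation.
import math

def dist(a, b):
    """Euclidean distance between two k-dimensional points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def build(points, depth=0):
    """Recursively build a k-d tree from a list of k-dimensional points."""
    if not points:
        return None
    axis = depth % len(points[0])            # cycle through the axes
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2                   # median point splits the set
    return {
        "point": points[mid],
        "left": build(points[:mid], depth + 1),
        "right": build(points[mid + 1:], depth + 1),
    }

def nearest(node, target, depth=0, best=None):
    """Return the tree point closest to `target`."""
    if node is None:
        return best
    point = node["point"]
    if best is None or dist(point, target) < dist(best, target):
        best = point
    axis = depth % len(target)
    diff = target[axis] - point[axis]
    near, far = ((node["left"], node["right"]) if diff < 0
                 else (node["right"], node["left"]))
    best = nearest(near, target, depth + 1, best)
    # Only descend the far side if the splitting plane is closer than the
    # current best; this pruning is what makes k-d trees fast on average.
    if abs(diff) < dist(best, target):
        best = nearest(far, target, depth + 1, best)
    return best

tree = build([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
print(nearest(tree, (9, 2)))   # -> (8, 1)
```

The "annotated kdtrees" and custom traversals Anne anticipates would hang extra data off each node and generalize the descent logic above.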
Anne From david at ar.media.kyoto-u.ac.jp Tue Nov 4 21:32:49 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 05 Nov 2008 11:32:49 +0900 Subject: [SciPy-dev] Technology Previews and Sphinx Docstring Annotations In-Reply-To: References: <9457e7c80811040555p605e3453t1d2675100f445a35@mail.gmail.com> <5b8d13220811040618s677880f1qc3450191ebbe9fd5@mail.gmail.com> <91b4b1ab0811040828u3f1f9c1bvfacff0ce1a557f5a@mail.gmail.com> <4910B818.9040904@enthought.com> <5b8d13220811041725w7d76423bw17e1d43dfce00f59@mail.gmail.com> Message-ID: <491105D1.9020301@ar.media.kyoto-u.ac.jp> Anne Archibald wrote: > > A scipy submodule will be distributed to users. It has a bug tracker, > and other people who read the bug tracker, and can possibly fix minor > bugs. Users can find it. It gets built and tested on all relevant > architectures. The unit tests get rerun whenever some piece of scipy > changes. Scikits has a bug tracker too. > I don't have to do *any* of that. If something goes wrong, > okay, I can go in and try to fix it, but other than that I can leave > the working code as it is. Well, ok, fair enough. And in all fairness, if scipy.preview is about putting the burden on regular scipy maintainers, then I am even more strongly against scipy.preview :) It will only make the scipy release process even more cumbersome than it already is. I understand the concern that adding packages directly to scipy is easier for people who want their code in. But I think you don't realize the cost of adding code to an already big project: adding code is easy, but once it is there, it has a maintenance cost, and you can't remove it. As an example, scipy.sparse will be the huge feature for scipy 0.7. It has taken me half a day to solve an issue on Mac OS X; I am neither a user nor a developer of the package.
I certainly don't want to say that scipy.sparse should not have been in scipy: I certainly think it is a great feature (and is why I spent quite some time on something which is not even remotely fun). The whole reason why scipy has not seen a release is because it lacks resources to work on it. Adding new tasks does not help, obviously. scipy.preview makes it easier for new contributors to add new code, at the expense of scipy developers. I am much more willing to make it easier to add new code without adding work to scipy. > > If it were a scikit I would have to scrounge build machines, No, you don't have to if you don't care. > I would > have to rerun the unit tests every time some part of scipy I depend on > changes, I would have to set up a bug tracker There is a scikit bug tracker: http://projects.scipy.org/scipy/scikits > , and I would have to > publicize it. More, it makes it difficult for other packages: how are > dependencies between scikits handled? Is there a way to automatically > download and install all scikits a package depends on? Yes, with eggs. This discussion motivated me to start something which should have been done from the beginning, an example scikits.example. Nothing fancy, but it took me ten minutes to do it, 5 minutes to register it (it can be done in 10 seconds, but I have a peculiar network at my lab which makes the process cumbersome). You can now install it with easy_install:

    easy_install scikits.example

You can get the sources with easy_install, too:

    easy_install -eNb example scikits.example

You can also put up - if you want - a Windows or a Mac OS X binary. > > And: how many scikits have ever been incorporated in scipy? None, but none has asked. Most of them depend on non-BSD code, BTW. I personally started one scikit which I hope to see included in scipy at some point (talkbox). > > I can't build binary distributions at all; I don't have access to (for > example) Windows machines.
And scipy.preview is intended for code > stable enough that breaking scipy is not a concern (i.e., not more of > a problem than for the rest of scipy). But you can't know it does not break if you have not used it on supported platforms. Scipy maintainers will have to deal with this instead. > > If scipy.preview had existed, I would have put my code there. In fact, > if someone creates it, my code will probably be the first to be moved > there. My remark was not to meant to say that your code should have been done in a scikits first. It was meant: since your code was discussed, and since you are already a known contributor to numpy/scipy, we included your code directly in scipy. I personally have no problem with how things happened in that exact case. But again, this cannot scale. cheers, David From rob.clewley at gmail.com Tue Nov 4 23:36:37 2008 From: rob.clewley at gmail.com (Rob Clewley) Date: Tue, 4 Nov 2008 23:36:37 -0500 Subject: [SciPy-dev] toward scipy 1.0 In-Reply-To: <200811041920.54975.pgmdevlist@gmail.com> References: <200811041920.54975.pgmdevlist@gmail.com> Message-ID: On Tue, Nov 4, 2008 at 7:20 PM, Pierre GM wrote: > All, > I'm not really involved in the development of scipy, so I don't expect my > opinion to matter much. I think the point is that it should matter more! > Breaking scipy into smaller packages sounds a lot like scikits, but is it that > bad a thing ? Would it make development more difficult ? Would it make > installation and maintenance more complex ? As long as there's one standard > for setup.py, things should go OK, shouldn't they ? Because they'd be part of > scipy and not a scikit, the different modules would have a varnish of > stability that the scikits don't necessarily have, and it might simplify the > transformation of a scikit to a scipy module. Breaking it up might be a bad thing if it means the packages become too autonomous and disconnected. 
If the current trend continues and scipy degenerates into a collection of disjoint subpackages that are all managed independently, then what's the point of scipy at all? Doesn't it become a package in name only? I'm not in favor of that at all. It's no better than having the existing python.org web site listing of all sorts of different scientific packages that solve different types of problems and letting people download them all separately. I thought the point was that scipy should be a coherent set of core libraries for scientific computation, even if there are modular and optional add-ons. So I'm all for modularity, but I think only a smart top-down re-organization will actually help you keep the core development properly separated from more sideline issues. > As you'd have guessed, I'm all in favor of a kind of central repository like cran or ctan. Each scikit could come with a few keywords (provided by their developer) to simplify the cataloguing, and with a central page it shouldn't be that difficult to know what is being developed and at what pace, or even just what is available. I don't think this idea would lead to the situation that we would both like to see. Your description sounds like an idealization where we'd effectively start over, with authors submitting brand new, well thought out packages with keywords that help each other avoid duplication in the future. But people already have their existing packages (e.g. current scikits) that they'd immediately submit to the repository, and there is no mechanism to ensure that their initial submissions will work together coherently. And that's where the problem is. So even with keywords or something similar, the duplication and confusion will have already happened. To facilitate this process, another idea is to look at the model of online journals/encyclopedias (e.g. wikipedia, but I'm thinking more like scholarpedia).
We could agree on "chief coordinators" to manage a broad sub-package, with subs to coordinate at a finer-grained level. This has to be done in a way that doesn't place a huge burden on these coordinators - they shouldn't be the poor mugs who end up actually *doing* everything just because they signed up to help. -Rob From cournape at gmail.com Wed Nov 5 02:56:52 2008 From: cournape at gmail.com (David Cournapeau) Date: Wed, 5 Nov 2008 16:56:52 +0900 Subject: [SciPy-dev] toward scipy 1.0 In-Reply-To: <200811041920.54975.pgmdevlist@gmail.com> References: <200811041920.54975.pgmdevlist@gmail.com> Message-ID: <5b8d13220811042356o44eb5582ja989b7238aae531e@mail.gmail.com> On Wed, Nov 5, 2008 at 9:20 AM, Pierre GM wrote: > If we're going to reorganize scipy, I'd be pretty much in favor of modularity: > let me install just the packages I need (scipy.stats, scipy.special, > scipy.whatever) without bothering about the ones I'll never use. Yes, that's an ideal world, but it is hard to do in practice with a finite amount of time. > > Breaking scipy into smaller packages sounds a lot like scikits, but is it that > bad a thing ? Would it make development more difficult ? Would it make > installation and maintenance more complex ? Yes. Generating 10 packages instead of one increases the work. It means each of them can be updated independently, so you have an exponential combination of configurations to test. It really is a lot of work - unless we do no QA and release packages which may be broken on some platforms. OTOH, having a big set of packages means that a single one can postpone the release; there is a balance to find. We can propose all kinds of methods for better releasing, but IMO, the uncomfortable truth is that we simply lack the manpower to do more than what is already being done. I personally would be much more comfortable with more code put in scipy if it meant that at the same time, more people would be willing to participate in the task.
This has not been the case; everybody wants to spend time coding new algorithms, new APIs, etc. Nobody wants to spend time on platform idiosyncrasies, platform-specific bugs, etc. > As long as there's one standard > for setup.py, things should go OK, shouldn't they ? Unfortunately, no. It is true for packages which are pure python, more or less true for packages with only C code and no dependencies, and not true at all for everything else (including fortran, C++, etc.). For example, what if you install one package with one version of BLAS/LAPACK, and another with another? Crashes, wrong results, a lot of nasty things. I think what should follow is an R-like model: a well-maintained core, with code that people are willing to maintain for some time. And then, hopefully, it can be used by most other packages, including scikits, without depending too much on each other, with an infrastructure to support this. Everything else, in the current state of affairs, has no chance of succeeding, because scipy developers are already overbooked. > As you'd have guessed, I'm all in favor of a kind of central repository like > cran or ctan. Each scikit could come with a few keywords (provided by their > developer) to simplify the cataloguing, and with a central page it shouldn't > be that difficult to know what is being developed and at what pace, or even > just what is available. It might reduce the chances of duplicated code, help > future developers by providing some examples, and generally be a good PR > system for scikits/scipy packages... And yes, why not use some kind of > graphical browser? I think you vastly underestimate the size of this task. It has not entirely happened yet for python itself, BTW. Distribution problems are very difficult, challenging problems. And there is no silver bullet: it needs a lot of manpower, with a lot of not-that-rewarding tasks.
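David's combinatorial argument above can be made concrete with a toy calculation. The package and version counts here are purely illustrative, not actual scipy figures:

```python
# Illustrative only: why independently versioned subpackages multiply QA work.
# If P subpackages can each be installed at any of V supported versions, a
# release manager faces up to V**P distinct installed combinations to test.
def install_combinations(packages, versions_each):
    return versions_each ** packages

print(install_combinations(1, 3))   # one monolithic scipy, 3 releases -> 3
print(install_combinations(10, 3))  # ten independent subpackages -> 59049
```

Even with modest numbers, the testing matrix explodes exponentially, which is exactly the QA cost being described.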
cheers, David From benny.malengier at gmail.com Wed Nov 5 08:06:35 2008 From: benny.malengier at gmail.com (Benny Malengier) Date: Wed, 5 Nov 2008 14:06:35 +0100 Subject: [SciPy-dev] scikit for dae Message-ID: With all the talk of integration of new code and the difficulties involved, I decided to rewrite my patch to scipy.integrate to add a dae solver as a scikit instead: scikits.odes Is this acceptable? If allowed as a scikit I could add some further backends (I'm thinking of 2). It would ease my hacking on it, allow collaboration, and enable more extensive testing (only linux 64bit here). The code would be more accessible too, being available via easy_install instead of as a patch set. The patch is here: http://cage.ugent.be/~bm/progs.html Benny PS: If somebody needs Krylov preconditioners in dae, I could interface that in the ddaspk backend, but as I have no experience with this, somebody interested should construct a test case, and collaborate with me. ddaspk is in essence two programs in one, one with Krylov, one without. -------------- next part -------------- An HTML attachment was scrubbed... URL: From guyer at nist.gov Wed Nov 5 10:33:02 2008 From: guyer at nist.gov (Jonathan Guyer) Date: Wed, 5 Nov 2008 10:33:02 -0500 Subject: [SciPy-dev] Old SciPy wiki content? Message-ID: <310D99E5-A052-4CAD-852A-32EA00FF612F@nist.gov> Many ages ago, I posted some benchmarking results on sparse solvers to the old SciPy wiki. http://thread.gmane.org/gmane.comp.python.scientific.devel/2243/focus=3192 When the site got moved, I never did figure out where it had gone to and never made a big enough fuss about it at the time. There's an item on http://www.scipy.org/MigratingFromPlone about moving it, but it's given low priority and the link to http://old.scipy.org/wikis/featurerequests/SparseSolvers gives a 404. Is this content anywhere anymore?
From cournape at gmail.com Wed Nov 5 11:53:43 2008 From: cournape at gmail.com (David Cournapeau) Date: Thu, 6 Nov 2008 01:53:43 +0900 Subject: [SciPy-dev] a modest proposal for technology previews In-Reply-To: <20081104212757.GA32578@phare.normalesup.org> References: <490FD2F6.8000907@enthought.com> <4910B57D.8060402@enthought.com> <20081104212757.GA32578@phare.normalesup.org> Message-ID: <5b8d13220811050853r74280642ndb34ee5d702a03c9@mail.gmail.com> On Wed, Nov 5, 2008 at 6:27 AM, Gael Varoquaux wrote: > On Tue, Nov 04, 2008 at 02:50:05PM -0600, Travis E. Oliphant wrote: >> In other words, create a *single* scikit that is the equivalent of >> scipy.preview. What is the benefit of having it in scipy.preview? > > It gets built and shipped on people's boxes. Except it doesn't. Putting it into scipy does not make it magically built and released. Again, the last release of scipy was in September 2007. As I pointed out previously, putting code into scipy makes it *more* difficult to distribute, at least in practice. If scipy were released regularly, then I would at least understand the argument (I would still be against, though). But it isn't. And adding code means it takes even more time to get it out. One more package does not make much difference. But this just cannot work as a general process, not without scaling the manpower for scipy. It is not a new problem; every project at a certain size has the same problem. There is a need for a staging area. Without it, scipy.preview would become the staging area (and that's not what Jarrod had in mind). That's why I am strongly against scipy.preview in the current state of affairs; it would practically become a staging area.
cheers, David From robert.kern at gmail.com Wed Nov 5 12:53:29 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 5 Nov 2008 11:53:29 -0600 Subject: [SciPy-dev] scikit for dae In-Reply-To: References: Message-ID: <3d375d730811050953g10c77442y733caf5520ff2512@mail.gmail.com> On Wed, Nov 5, 2008 at 07:06, Benny Malengier wrote: > With all the talk of integration of new code and the difficulties involved, > I decided to rewrite my patch to scipy.integrate to add a dae solver, as a > scikit instead: scikits.odes > > Is this acceptable? Absolutely. > If allowed as a scikit I could add some further backends > (I'm thinking of 2). > It would ease my hacking on it, allow collaboration, and enable more > extensive testing (only linux 64bit here). The code would be more accessible > too being available with easy_install, instead of a patch set. > > The patch is here: http://cage.ugent.be/~bm/progs.html We'll get you SVN access so you don't have to distribute this as a patch. In the meantime, can you turn this into a tarball? scikits is not a monolithic package. It's just a namespace. You don't have to patch the main scikits repository (or even *use* the main scikits repository) to provide a scikits package. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From wnbell at gmail.com Wed Nov 5 13:15:12 2008 From: wnbell at gmail.com (Nathan Bell) Date: Wed, 5 Nov 2008 13:15:12 -0500 Subject: [SciPy-dev] a modest proposal for technology previews In-Reply-To: <5b8d13220811050853r74280642ndb34ee5d702a03c9@mail.gmail.com> References: <490FD2F6.8000907@enthought.com> <4910B57D.8060402@enthought.com> <20081104212757.GA32578@phare.normalesup.org> <5b8d13220811050853r74280642ndb34ee5d702a03c9@mail.gmail.com> Message-ID: On Wed, Nov 5, 2008 at 11:53 AM, David Cournapeau wrote: > > Except it doesn't. 
Putting it into scipy does not make it magically > built and released. Again, the last release of scipy was in September > 2007. As I pointed out previously, putting code into scipy makes it > *more* difficult to distribute, at least in practice. Well, that's (1) not true today since 0.7 is due and (2) another problem that needs to be addressed (e.g. by moving to a 6-month interval). > If scipy were released regularly, then I would at least understand the > argument (I would still be against, though). But it isn't. And adding > code means it takes even more time to get it out. One more package > does not make much difference. But this just cannot work as a general > process, not without scaling the manpower for scipy. As I see it, scipy.spatial: - has an API that was discussed publicly - has a comprehensive set of unit tests - has an active maintainer Furthermore, the release of SciPy 0.7 *is* imminent, and not a year or more away. Let's not argue abstract cases anymore. The worst-case scenario is that the spatial module requires modest API changes in a subsequent release. Anne has done due diligence in soliciting ideas and feedback from the mailing list, so I doubt these will upset people. Also, the only way to expand spatial's userbase in a meaningful way *today* is to put spatial in scipy proper. I don't fundamentally disagree with the concerns that have been laid out w.r.t. stability. At the same time, I don't think we have the luxury or the right to set arbitrarily high standards for inclusion of code into scipy. We should not forget that a significant part of why people contribute to scipy (esp. the mundane tasks) is the implicit promise that their work will be visible. Scikits does not yet meet this standard. Let's release SciPy 0.7 (w/ spatial) ASAP and then look towards making scikits a more viable alternative.
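As an aside for readers who have not followed the scipy.spatial thread: the module's central feature is fast nearest-neighbour queries via kd-trees. The query it answers can be sketched by brute force in plain Python (an illustrative stand-in, not the scipy.spatial API itself; a kd-tree such as scipy.spatial's KDTree answers the same query without scanning every point):

```python
import math

def nearest(points, x):
    """Brute-force nearest neighbour: return (distance, index) of the
    point in `points` closest to `x`. Illustrative only; a kd-tree
    avoids the full linear scan performed here."""
    idx = min(range(len(points)), key=lambda i: math.dist(points[i], x))
    return math.dist(points[idx], x), idx

points = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
dist, idx = nearest(points, (0.9, 0.1))
# (1.0, 0.0) is the closest point, so idx == 1
```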
-- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From josef.pktd at gmail.com Wed Nov 5 15:03:05 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 5 Nov 2008 15:03:05 -0500 Subject: [SciPy-dev] patchfiles in 745 Message-ID: <1cd32cbb0811051203g78dfca1apb99229a097c93357@mail.gmail.com> From cournape at gmail.com Wed Nov 5 21:05:11 2008 From: cournape at gmail.com (David Cournapeau) Date: Thu, 6 Nov 2008 11:05:11 +0900 Subject: [SciPy-dev] Dealing with precision problem: warning vs raise Message-ID: <5b8d13220811051805t2451c610t1a475c52e9db913f@mail.gmail.com> Hi, I was looking at bug #651, related to finding the roots of a polynomial with too small coefficients. Currently, the small values are discarded by the function in #651, which I find a bit surprising (it may not be obvious to the user why it gets no zero for a polynomial of order N). Wouldn't it be better to raise an exception or at least a warning instead (I was thinking about doing that in numpy.roots instead, but that may be too drastic, as it would break some code - maybe it is desirable, though, as the values in that case are more or less meaningless) cheers, David From robert.kern at gmail.com Wed Nov 5 21:28:40 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 5 Nov 2008 20:28:40 -0600 Subject: [SciPy-dev] Dealing with precision problem: warning vs raise In-Reply-To: <5b8d13220811051805t2451c610t1a475c52e9db913f@mail.gmail.com> References: <5b8d13220811051805t2451c610t1a475c52e9db913f@mail.gmail.com> Message-ID: <3d375d730811051828m493ac50cua8c1d035a9ba59bb@mail.gmail.com> On Wed, Nov 5, 2008 at 20:05, David Cournapeau wrote: > Hi, > > I was looking at bug #651, related to finding the roots of a > polynomial with too small coefficients. Currently, the small values > are discarded by the function in #651, which I find a bit surprising > (it may not be obvious to the user why it gets no zero for a > polynomial of order N).
Wouldn't it be better to raise an exception or > at least a warning instead (I was thinking about doing that in > numpy.roots instead, but that may be too drastic, as it would break > some code - maybe it is desirable, though, as the values in that case > are more or less meaningless) Issue a warning using warnings.warn(). Users may use the warnings module to turn it into an exception should they so desire. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cournape at gmail.com Wed Nov 5 23:49:49 2008 From: cournape at gmail.com (David Cournapeau) Date: Thu, 6 Nov 2008 13:49:49 +0900 Subject: [SciPy-dev] Dealing with precision problem: warning vs raise In-Reply-To: <3d375d730811051828m493ac50cua8c1d035a9ba59bb@mail.gmail.com> References: <5b8d13220811051805t2451c610t1a475c52e9db913f@mail.gmail.com> <3d375d730811051828m493ac50cua8c1d035a9ba59bb@mail.gmail.com> Message-ID: <5b8d13220811052049k7d9e595aicb6d531e80ddedcc@mail.gmail.com> On Thu, Nov 6, 2008 at 11:28 AM, Robert Kern wrote: > > Issue a warning using warnings.warn(). Users may use the warnings > module to turn it into an exception should they so desire. Ok, thanks David From david at ar.media.kyoto-u.ac.jp Thu Nov 6 06:18:41 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 06 Nov 2008 20:18:41 +0900 Subject: [SciPy-dev] scikit for dae In-Reply-To: <3d375d730811050953g10c77442y733caf5520ff2512@mail.gmail.com> References: <3d375d730811050953g10c77442y733caf5520ff2512@mail.gmail.com> Message-ID: <4912D291.8080301@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > On Wed, Nov 5, 2008 at 07:06, Benny Malengier wrote: > >> With all the talk of integration of new code and the difficulties involved, >> I decided to rewrite my patch to scipy.integrate to add a dae solver, as a >> scikit instead: scikits.odes >> >> Is this acceptable?
>> > > Absolutely. > > >> If allowed as a scikit I could add some further backends >> (I'm thinking of 2). >> It would ease my hacking on it, allow collaboration, and enable more >> extensive testing (only linux 64bit here). The code would be more accessible >> too being available with easy_install, instead of a patch set. >> >> The patch is here: http://cage.ugent.be/~bm/progs.html >> > > We'll get you SVN access so you don't have to distribute this as a > patch. In the meantime, can you turn this into a tarball? scikits is > not a monolithic package. It's just a namespace. You don't have to > patch the main scikits repository (or even *use* the main scikits > repository) to provide a scikits package. > Also, if you are not so familiar with setuptools, and you missed it, I put an example package which shows how to set up a scikits with the few commands to put your releases to pypi. http://projects.scipy.org/scipy/scikits/browser/trunk/example cheers, David From cournape at gmail.com Thu Nov 6 07:11:06 2008 From: cournape at gmail.com (David Cournapeau) Date: Thu, 6 Nov 2008 21:11:06 +0900 Subject: [SciPy-dev] a modest proposal for technology previews In-Reply-To: References: <490FD2F6.8000907@enthought.com> <4910B57D.8060402@enthought.com> <20081104212757.GA32578@phare.normalesup.org> <5b8d13220811050853r74280642ndb34ee5d702a03c9@mail.gmail.com> Message-ID: <5b8d13220811060411u1ac4a9eekb410c217165c9dd5@mail.gmail.com> On Thu, Nov 6, 2008 at 3:15 AM, Nathan Bell wrote: > On Wed, Nov 5, 2008 at 11:53 AM, David Cournapeau wrote: >> >> Except it doesn't. Putting it into scipy does not make it magically >> built and released. Again, the last release of scipy was in september >> 2007. As I pointed out previously, putting code into scipy makes it >> *more* difficult to distribute, at least in practice. > > Well, that's (1) not true today since 0.7 is due This is not true for scipy.spatial, but this will be true if we follow the same workflow for every package. 
More code means more work; it is as simple as that. > and (2) another > problem that needs to be addressed (e.g. by moving to a 6 month > interval). Yes, I also want more regular scipy releases. But adding more code makes it less likely. Adding code has a cost for release management, unless we have a fairly restricted window for getting changes, which would then be against the whole point of putting the new code in scipy. > > Let's release SciPy 0.7 (w/ spatial) ASAP and then look towards making > scikits a more viable alternative. Yes, that's the plan I want to follow. 0.7 will of course be released with scipy.spatial; I don't think anyone has ever argued the contrary. cheers, David From benny.malengier at gmail.com Thu Nov 6 09:08:57 2008 From: benny.malengier at gmail.com (Benny Malengier) Date: Thu, 6 Nov 2008 15:08:57 +0100 Subject: [SciPy-dev] scikit for dae In-Reply-To: <3d375d730811050953g10c77442y733caf5520ff2512@mail.gmail.com> References: <3d375d730811050953g10c77442y733caf5520ff2512@mail.gmail.com> Message-ID: 2008/11/5 Robert Kern > On Wed, Nov 5, 2008 at 07:06, Benny Malengier > wrote: > > With all the talk of integration of new code and the difficulties > involved, > > I decided to rewrite my patch to scipy.integrate to add a dae solver, as > a > > scikit instead: scikits.odes > > > > Is this acceptable? > > Absolutely. > > > If allowed as a scikit I could add some further backends > > (I'm thinking of 2). > > It would ease my hacking on it, allow collaboration, and enable more > > extensive testing (only linux 64bit here). The code would be more > accessible > > too being available with easy_install, instead of a patch set. > > > > The patch is here: http://cage.ugent.be/~bm/progs.html > > We'll get you SVN access so you don't have to distribute this as a > patch. In the meantime, can you turn this into a tarball? scikits is > not a monolithic package. It's just a namespace.
You don't have to > patch the main scikits repository (or even *use* the main scikits > repository) to provide a scikits package. I created a zip : http://cage.ugent.be/~bm/downloads/odes.zip If possible I'd like to use a repository inside scikits. I understand I can create a repository here and host the code there, but then I have to set up all the infrastructure too (bugs, new logins, ...). Our admin is already very busy, so I cannot offload issues with the server, ..... If SVN access is given to scikits, I can just commit from my end, you don't have to set it up. Benny -------------- next part -------------- An HTML attachment was scrubbed... URL: From benny.malengier at gmail.com Thu Nov 6 09:12:58 2008 From: benny.malengier at gmail.com (Benny Malengier) Date: Thu, 6 Nov 2008 15:12:58 +0100 Subject: [SciPy-dev] scikit for dae In-Reply-To: <4912D291.8080301@ar.media.kyoto-u.ac.jp> References: <3d375d730811050953g10c77442y733caf5520ff2512@mail.gmail.com> <4912D291.8080301@ar.media.kyoto-u.ac.jp> Message-ID: 2008/11/6 David Cournapeau > Robert Kern wrote: > > On Wed, Nov 5, 2008 at 07:06, Benny Malengier > wrote: > > > >> With all the talk of integration of new code and the difficulties > involved, > >> I decided to rewrite my patch to scipy.integrate to add a dae solver, as > a > >> scikit instead: scikits.odes > >> > >> Is this acceptable? > >> > > > > Absolutely. > > > > > >> If allowed as a scikit I could add some further backends > >> (I'm thinking of 2). > >> It would ease my hacking on it, allow collaboration, and enable more > >> extensive testing (only linux 64bit here). The code would be more > accessible > >> too being available with easy_install, instead of a patch set. > >> > >> The patch is here: http://cage.ugent.be/~bm/progs.html > >> > > > > We'll get you SVN access so you don't have to distribute this as a > > patch. In the meantime, can you turn this into a tarball? scikits is > > not a monolithic package. It's just a namespace. 
You don't have to > > patch the main scikits repository (or even *use* the main scikits > > repository) to provide a scikits package. > > > > Also, if you are not so familiar with setuptools, and you missed it, I > put an example package which shows how to set up a scikits with the few > commands to put your releases to pypi. > > http://projects.scipy.org/scipy/scikits/browser/trunk/example > I looked at it. I don't see fortran mentioned. I guess in my case with fortran files only source distribution is possible via pypi, yes? I further noted on my end that installing installs a .egg archive, and then the code to test does not run: import scikits.odes scikits.odes.test() Running unit tests for scikits.odes-0.01-py2.5-linux-x86_64.egg.scikits.odes NumPy version 1.3.0.dev5972 NumPy is installed in /usr/lib/python2.5/site-packages/numpy Python version 2.5.2 (r252:60911, Jul 31 2008, 17:31:22) [GCC 4.2.3 (Ubuntu 4.2.3-2ubuntu7)] nose version 0.10.4 E ====================================================================== ERROR: Failure: OSError (No such file /usr/lib/python2.5/site-packages/scikits.odes-0.01-py2.5-linux-x86_64.egg/scikits/odes) If I install in the old style method : sudo python setup.py install --single-version-externally-managed --root / The above works though. Is there some command I should add to make the egg work too? Benny -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Thu Nov 6 09:10:52 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 06 Nov 2008 23:10:52 +0900 Subject: [SciPy-dev] scikit for dae In-Reply-To: References: <3d375d730811050953g10c77442y733caf5520ff2512@mail.gmail.com> <4912D291.8080301@ar.media.kyoto-u.ac.jp> Message-ID: <4912FAEC.6090204@ar.media.kyoto-u.ac.jp> Benny Malengier wrote: > > I looked at it. I don't see fortran mentioned. I guess in my case with > fortran files only source distribution is possible via pypi, yes? 
Fortran is unfortunately the biggest hassle of all from a binary distribution POV. You can always build binary eggs, but not only will they be dependent on the platform and the python version, but also on the fortran compiler you are using. Otherwise, building fortran extensions is the same for scikits as for any code in numpy/scipy, so you can take a look there. If you have questions, don't hesitate to ask here, of course. > > The above works though. Is there some command I should add to make the > egg work too? It is a bit difficult to say without looking at your setup.py; I don't see anything which looks obviously wrong, at least. David From opossumnano at gmail.com Thu Nov 6 09:34:04 2008 From: opossumnano at gmail.com (Tiziano Zito) Date: Thu, 6 Nov 2008 15:34:04 +0100 Subject: [SciPy-dev] linalg.eigh tests - commit permissions? Message-ID: <20081106143404.GA30039@localhost> dear devs, I am working on the integration of symeig in scipy as a replacement for eigh. As a first step I wrote some tests for eigh, which was not tested till now. It would be better if these tests could be committed before anything else, so that even if symeig integration fails for some reason, eigh gets tested anyway. Who should I ask for commit privileges to the scipy svn repo? Alternatively, I can send a patch here or on scipy Track. The two utility functions "symrand" (generate a random symmetric positive definite matrix) and "random_rot" (generate a random rotation matrix) are in test_decomp.py, let me know if you want them available somewhere else in scipy. I think they are useful enough to be put somewhere, what about directly under scipy.linalg? some other random questions: - the signature for eigh currently is: eigh(a, lower=True, eigvals_only=False, overwrite_a=False) because eigh will grow an additional optional argument "b" (the second matrix of a general eigenproblem), the "overwrite_a" argument should become "overwrite": "b" will also be overwritten.
Is this too much of an API breakage? - in the eigh code I see snippets like this: if heev.module_name[:7] == 'flapack': lwork = calc_lwork.heev(heev.prefix,a1.shape[0],lower) w,v,info = heev(a1,lwork = lwork,... else # 'clapack' w,v,info = heev(a1,... what is 'clapack'? how can I test it? I work on a debian system with atlas installed and the 'flapack' branch is always run. why has clapack no "lwork" argument? is it important at all? thank you! tiziano From opossumnano at gmail.com Thu Nov 6 10:33:00 2008 From: opossumnano at gmail.com (Tiziano Zito) Date: Thu, 6 Nov 2008 16:33:00 +0100 Subject: [SciPy-dev] assert_dtype_equal In-Reply-To: <20081106143404.GA30039@localhost> References: <20081106143404.GA30039@localhost> Message-ID: <20081106153259.GA31562@localhost> dear devs, writing the tests for eigh I noticed that there is no "assert_dtype_equal" in numpy.testing. I implemented it in test_decomp.py as (with dtype imported from numpy): from numpy import dtype def assert_dtype_equal(act, des): assert dtype(act) == dtype(des), \ 'dtype mismatch: "%s" (should be "%s") '%(act,des) wouldn't it be a useful addition to numpy.testing? ciao, tiziano On Thu 06 Nov, 15:34, Tiziano Zito wrote: > dear devs, > > I am working on the integration of symeig in scipy as a replacement > for eigh. As a first step I wrote some tests for eigh, which was not > tested till now. It would be better if these tests could be > committed before anything else, so that even if symeig integration > fails for some reason, eigh gets tested anyway. Who should I ask > for commit privileges to the scipy svn repo? Alternatively, I can > send a patch here or on scipy Track. > > The two utility functions "symrand" (generate a random symmetric > positive definite matrix) and "random_rot" (generate a random > rotation matrix) are in test_decomp.py, let me know if you want them > available somewhere else in scipy. I think they are useful enough to > be put somewhere, what about directly under scipy.linalg?
> > some other random questions: > > - the signature for eigh currently is: > eigh(a, lower=True, eigvals_only=False, overwrite_a=False) > > because eigh will grow an additional optional argument "b" (the > second matrix of a general eigenproblem), the "overwrite_a" > argument should become "overwrite": "b" will be also overwritten. > Is this too much of an API breakage? > > - in the eigh code I see snippets like this: > > if heev.module_name[:7] == 'flapack': > lwork = calc_lwork.heev(heev.prefix,a1.shape[0],lower) > w,v,info = heev(a1,lwork = lwork,... > else # 'clapack' > w,v,info = heev(a1,... > > what is 'clapack'? how can I test it? I work on a debian system > with atlas installed and the 'flapack' branch is always run. why > has clapack no "lwork" argument? is it important at all? > > > thank you! > > tiziano > From stefan at sun.ac.za Thu Nov 6 16:38:33 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Thu, 6 Nov 2008 23:38:33 +0200 Subject: [SciPy-dev] linalg.eigh tests - commit permissions? In-Reply-To: <20081106143404.GA30039@localhost> References: <20081106143404.GA30039@localhost> Message-ID: <9457e7c80811061338ud206771rfdcfd51cbe76c9fa@mail.gmail.com> Hi Tiziano 2008/11/6 Tiziano Zito : > fails for some reason, eigh gets tested anyway. Who should I ask > for commit privileges to the scipy svn repo? Alternatively, I can > send a patch here or on scipy Track. Jarrod normally handles these requests, but since he is away, Robert may be able to help you out. Regards St?fan From robert.kern at gmail.com Thu Nov 6 16:58:37 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 6 Nov 2008 15:58:37 -0600 Subject: [SciPy-dev] linalg.eigh tests - commit permissions? 
In-Reply-To: <9457e7c80811061338ud206771rfdcfd51cbe76c9fa@mail.gmail.com> References: <20081106143404.GA30039@localhost> <9457e7c80811061338ud206771rfdcfd51cbe76c9fa@mail.gmail.com> Message-ID: <3d375d730811061358u1a6fe93cpd6c8dcaa4442f6c1@mail.gmail.com> On Thu, Nov 6, 2008 at 15:38, St?fan van der Walt wrote: > Hi Tiziano > > 2008/11/6 Tiziano Zito : >> fails for some reason, eigh gets tested anyway. Who should I ask >> for commit privileges to the scipy svn repo? Alternatively, I can >> send a patch here or on scipy Track. > > Jarrod normally handles these requests, but since he is away, Robert > may be able to help you out. I just forward these requests to Jarrod (as I did when I told Tiziano that we would get him SVN access the first time). I'm sorry that it hasn't happened, yet. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From stefan at sun.ac.za Thu Nov 6 17:12:05 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Fri, 7 Nov 2008 00:12:05 +0200 Subject: [SciPy-dev] linalg.eigh tests - commit permissions? In-Reply-To: <20081106143404.GA30039@localhost> References: <20081106143404.GA30039@localhost> Message-ID: <9457e7c80811061412xeadacffoe69f7dffbed6c898@mail.gmail.com> Hi Tiziano 2008/11/6 Tiziano Zito : > I am working on the integration of symeig in scipy as a replacement > for eigh. As a first step I wrote some tests for eigh, which was not > tested till now. It would be better if these tests could be > committed before anything else, so that even if symeig integration > fails for some reason, eigh gets tested anyway. Who should I ask > for commit privileges to the scipy svn repo? Alternatively, I can > send a patch here or on scipy Track. Looks like we'll only be able to get you access in a week's time. 
In the meanwhile, you can clone a git repository of Scipy: http://projects.scipy.org/scipy/numpy/wiki/GitMirror If you then publish your repo somewhere, I can push the changes on your behalf. If you feel more comfortable with hg, bzr or just plain patches, that's fine too. Cheers St?fan From ndbecker2 at gmail.com Thu Nov 6 22:19:50 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Thu, 06 Nov 2008 22:19:50 -0500 Subject: [SciPy-dev] complex-value remez? Message-ID: I see some posts from 2003 about complex remez. Any news? http://osdir.com/ml/python.scientific.devel/2003-02/msg00008.html From charlesr.harris at gmail.com Thu Nov 6 22:32:34 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 6 Nov 2008 20:32:34 -0700 Subject: [SciPy-dev] complex-value remez? In-Reply-To: References: Message-ID: On Thu, Nov 6, 2008 at 8:19 PM, Neal Becker wrote: > I see some posts from 2003 about complex remez. Any news? > http://osdir.com/ml/python.scientific.devel/2003-02/msg00008.html > That would be me. I have a python version sitting around somewhere and did some experiments on the roundoff error for various implementations, but haven't done much since because I haven't needed it except for the one project. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Thu Nov 6 23:35:37 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 6 Nov 2008 21:35:37 -0700 Subject: [SciPy-dev] patchfiles in 745 In-Reply-To: <1cd32cbb0811051203g78dfca1apb99229a097c93357@mail.gmail.com> References: <1cd32cbb0811051203g78dfca1apb99229a097c93357@mail.gmail.com> Message-ID: Maybe you should ask for commit privilege, you seem to be doing a lot of work in this area. Until Jarrod gets back you might try working in a git mirror (if you run on unix/linux)...Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From charlesr.harris at gmail.com Fri Nov 7 00:16:23 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 6 Nov 2008 22:16:23 -0700 Subject: [SciPy-dev] Technology Previews and Sphinx Docstring Annotations In-Reply-To: <91b4b1ab0811041536x684bd832h9385ae59b572f7b4@mail.gmail.com> References: <9457e7c80811040555p605e3453t1d2675100f445a35@mail.gmail.com> <5b8d13220811040618s677880f1qc3450191ebbe9fd5@mail.gmail.com> <91b4b1ab0811040828u3f1f9c1bvfacff0ce1a557f5a@mail.gmail.com> <4910B818.9040904@enthought.com> <91b4b1ab0811041536x684bd832h9385ae59b572f7b4@mail.gmail.com> Message-ID: On Tue, Nov 4, 2008 at 4:36 PM, Damian Eads wrote: > I'm on the train so I'll be short. I agree with Anne's concerns. > > There are lots of proposals being circulated for fairly extensive > community review panels, paperwork, and refactoring without any real > evidence of who is going to do the work. Many of us start writing new > SciPy packages simply because they are related to our own research. > Expecting huge participation toward a 1.0 release with a potentially > unpassable barrier to entry for new code is a hard sell to me as a > developer. > I agree with this. We don't have nearly enough people working on scipy to support that sort of structure. At best we have a few folks interested in a particular module. If at some future time we have more people then things can change, but at the moment the problem is getting more people involved and the best way to do that is to make things easy. I think we should let things evolve incrementally and not spend a lot of time planning social organizations. Chuck -------------- next part -------------- An HTML attachment was scrubbed...
URL: From charlesr.harris at gmail.com Fri Nov 7 00:24:23 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 6 Nov 2008 22:24:23 -0700 Subject: [SciPy-dev] Technology Previews and Sphinx Docstring Annotations In-Reply-To: <5b8d13220811041725w7d76423bw17e1d43dfce00f59@mail.gmail.com> References: <9457e7c80811040555p605e3453t1d2675100f445a35@mail.gmail.com> <5b8d13220811040618s677880f1qc3450191ebbe9fd5@mail.gmail.com> <91b4b1ab0811040828u3f1f9c1bvfacff0ce1a557f5a@mail.gmail.com> <4910B818.9040904@enthought.com> <5b8d13220811041725w7d76423bw17e1d43dfce00f59@mail.gmail.com> Message-ID: On Tue, Nov 4, 2008 at 6:25 PM, David Cournapeau wrote: > On Wed, Nov 5, 2008 at 7:34 AM, Anne Archibald > wrote: > > 2008/11/4 Travis E. Oliphant : > >> Jarrod Millman wrote: > >>> I absolutely agree with the ideas presented about scikits and look > >>> forward to seeing the numerous scikits improvements. I feel that I > >>> have gotten into a discussion where the counter argument to what I am > >>> proposing is something I strongly support. I also feel that the > >>> counterargument doesn't directly address my concern; but it may be > >>> that I am simply perceiving a problem that no one else believes > >>> exists. > >>> > >> Let me make my point again. I'm arguing that instead of scipy.preview, > >> let's just make a *single* scikit called scikit.preview or > >> scikit.forscipy or scikit.future_scipy or whatever. This will create > >> some incentive to make scikits easier to install generally as we want to > >> get the future_scipy out there being used. > >> > >> I'm very interested, though, to hear what developers of modules that > >> they would like to see in SciPy but have not made it there yet, think. > >> > >> I'm very interested in the question of how do we make it easier to > >> contribute to SciPy. 
> > As a developer who has written the module that is sparking this > > discussion, if the route to inclusion in scipy were "make a scikit, > > maintain and distribute it until you get enough user feedback to judge > > whether the API is optimal, then move it fully-formed into scipy" my > > code would simply gather dust and never be included. I don't have the > > time and energy to maintain a scikit. > > That's what I don't understand: there is almost no difference between > maintaining a scikit and a scipy submodule. In both cases you have to > write some setup.py + the module itself. To get the sources, it is > scikit vs scipy svn. Both Damian and you made this case, so I would > like to understand what's so different from your POV, because I just > don't get it ATM. Maybe there is some confusion on how a scikit can > be made and distributed (the documentation could certainly be > improved). > I think we could make a distinction between pure python/C code without dependencies, and code that needs to deal with fortran compilers or library deficiencies. For the former, a simple setup.py using distutils should do the job and once that is in place it shouldn't be difficult to maintain. For the latter, inclusion in scipy proper might be the better route. > Having a scikit also means that if you are willing to do it, you can > easily build binary installers, source distributions *in one > command*. You can't do that with scipy, which won't change for the > foreseeable future. And you don't need to care about breaking scipy. > Agree. Chuck -------------- next part -------------- An HTML attachment was scrubbed...
URL: From eads at soe.ucsc.edu Fri Nov 7 01:41:43 2008 From: eads at soe.ucsc.edu (Damian Eads) Date: Thu, 6 Nov 2008 22:41:43 -0800 Subject: [SciPy-dev] Technology Previews and Sphinx Docstring Annotations In-Reply-To: References: <9457e7c80811040555p605e3453t1d2675100f445a35@mail.gmail.com> <5b8d13220811040618s677880f1qc3450191ebbe9fd5@mail.gmail.com> <91b4b1ab0811040828u3f1f9c1bvfacff0ce1a557f5a@mail.gmail.com> <4910B818.9040904@enthought.com> <5b8d13220811041725w7d76423bw17e1d43dfce00f59@mail.gmail.com> Message-ID: <91b4b1ab0811062241k143e9b40k2584176f6c1d4420@mail.gmail.com> On Thu, Nov 6, 2008 at 9:24 PM, Charles R Harris wrote: > > > On Tue, Nov 4, 2008 at 6:25 PM, David Cournapeau wrote: >> >> On Wed, Nov 5, 2008 at 7:34 AM, Anne Archibald >> wrote: >> > 2008/11/4 Travis E. Oliphant : >> >> Jarrod Millman wrote: >> >>> I absolutely agree with the ideas presented about scikits and look >> >>> forward to seeing the numerous scikits improvements. I feel that I >> >>> have gotten into a discussion where the counter argument to what I am >> >>> proposing is something I strongly support. I also feel that the >> >>> counterargument doesn't directly address my concern; but it may be >> >>> that I am simply perceiving a problem that no one else believes >> >>> exists. >> >>> >> >> Let me make my point again. I'm arguing that instead of >> >> scipy.preview, >> >> let's just make a *single* scikit called scikit.preview or >> >> scikit.forscipy or scikit.future_scipy or whatever. This will create >> >> some incentive to make scikits easier to install generally as we want >> >> to >> >> get the future_scipy out there being used. >> >> >> >> I'm very interested, though, to hear what developers of modules that >> >> they would like to see in SciPy but have not made it there yet, think. >> >> >> >> I'm very interested in the question of how do we make it easier to >> >> contribute to SciPy. 
>> > >> > As a developer who has written the module that is sparking this >> > discussion, if the route to inclusion in scipy were "make a scikit, >> > maintain and distribute it until you get enough user feedback to judge >> > whether the API is optimal, then move it fully-formed into scipy" my >> > code would simply gather dust and never be included. I don't have the >> > time and energy to maintain a scikit. >> >> That's what I don't understand: there is almost no difference between >> maintaining a scikit and a scipy submodule. In both cases you have to >> write some setup.py + the module itself. To get the sources, it is >> scikit vs scipy svn. Both Damian and you made this case, so I would >> like to understand what's so different from your POV, because I just >> don't get it ATM. Maybe there is some confusion on how a scikit can >> be made and distributed (the documentation could certainly be >> improved). > > I think we could make a distinction between pure python/C code without > dependencies, and code that needs to deal with fortran compilers or library > deficiencies. For the former, a simple setup.py using distutils should do > the job and once that is in place it shouldn't be difficult to maintain. For > the latter, inclusion in scipy proper might be the better route. Good point. There is a big difference between new code that depends heavily on external libraries, especially new ones, and python/C-standard compliant code with no dependencies. One of the reasons why scipy.sparse has made the release process difficult is its dependencies. David has patiently worked through a lot of these issues, and I thank him for it. However, it seems we should be a bit more circumspect about including code in scipy proper that depends on external libraries. Code with few if any dependencies seems like a better candidate for inclusion. Would you care to clarify what you mean, because it seems like you're arguing the opposite?
One of the nice things about scipy.spatial and scipy.cluster is that both packages require no external dependencies, putting much less burden on the release process. For now, it seems like deciding what to include on a case by case basis may be the best approach. If a developer has shown a strong presence on the mailing lists, has made general contributions to the SciPy package, has developed well-written stable production code tested and documented to standards, is highly likely to maintain the package after its release, has shown that the code compiles and passes tests on the architectures supported by SciPy, then the package is more deserving of consideration for inclusion. Not saying these conditions are necessary or sufficient, but they are good rough guidelines about what to look for in new code. Damian ----------------------------------------------------- Damian Eads Ph.D. Student Jack Baskin School of Engineering, UCSC E2-489 1156 High Street Machine Learning Lab Santa Cruz, CA 95064 http://www.soe.ucsc.edu/~eads From david at ar.media.kyoto-u.ac.jp Fri Nov 7 01:36:15 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 07 Nov 2008 15:36:15 +0900 Subject: [SciPy-dev] Technology Previews and Sphinx Docstring Annotations In-Reply-To: <91b4b1ab0811062241k143e9b40k2584176f6c1d4420@mail.gmail.com> References: <9457e7c80811040555p605e3453t1d2675100f445a35@mail.gmail.com> <5b8d13220811040618s677880f1qc3450191ebbe9fd5@mail.gmail.com> <91b4b1ab0811040828u3f1f9c1bvfacff0ce1a557f5a@mail.gmail.com> <4910B818.9040904@enthought.com> <5b8d13220811041725w7d76423bw17e1d43dfce00f59@mail.gmail.com> <91b4b1ab0811062241k143e9b40k2584176f6c1d4420@mail.gmail.com> Message-ID: <4913E1DF.2090800@ar.media.kyoto-u.ac.jp> Damian Eads wrote: > > However, it seems we should be a bit more circumspect about including code > in scipy proper that depends on external libraries. Code with few if > any dependencies seems like a better candidate for inclusion.
Would you > care to clarify what you mean, because it seems like you're > arguing the opposite? > It is true that packages depending on external libraries mean more work for releasing, but OTOH, it means they are the ones who would benefit the most from being packaged inside scipy, because python build tools do not have good support for that case ATM. IOW, they are the ones for which there is a clear advantage to being in scipy compared to outside scipy tree (be it scikits or somewhere else). > For now, it seems like deciding what to include on a case by case > basis may be the best approach. If a developer has shown a strong > presence on the mailing lists, has made general contributions to the > SciPy package, has developed well-written stable production code > tested and documented to standards, is highly likely to maintain the > package after its release, has shown that the code compiles and passes > tests on the architectures supported by SciPy, then the package is > more deserving of consideration for inclusion. Not saying these > conditions are necessary or sufficient, but they are good rough > guidelines about what to look for in new code. Exactly.
David From charlesr.harris at gmail.com Fri Nov 7 02:54:26 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 7 Nov 2008 00:54:26 -0700 Subject: [SciPy-dev] Technology Previews and Sphinx Docstring Annotations In-Reply-To: <4913E1DF.2090800@ar.media.kyoto-u.ac.jp> References: <5b8d13220811040618s677880f1qc3450191ebbe9fd5@mail.gmail.com> <91b4b1ab0811040828u3f1f9c1bvfacff0ce1a557f5a@mail.gmail.com> <4910B818.9040904@enthought.com> <5b8d13220811041725w7d76423bw17e1d43dfce00f59@mail.gmail.com> <91b4b1ab0811062241k143e9b40k2584176f6c1d4420@mail.gmail.com> <4913E1DF.2090800@ar.media.kyoto-u.ac.jp> Message-ID: On Thu, Nov 6, 2008 at 11:36 PM, David Cournapeau < david at ar.media.kyoto-u.ac.jp> wrote: > Damian Eads wrote: > > > > However, it seems we should be a bit more circumspect about including code > > in scipy proper that depends on external libraries. Code with few if > > any dependencies seems like a better candidate for inclusion. Would you > > care to clarify what you mean, because it seems like you're > > arguing the opposite? > > > > It is true that packages depending on external libraries mean more work > for releasing, but OTOH, it means they are the ones who would benefit > the most from being packaged inside scipy, because python build tools do > not have good support for that case ATM. IOW, they are the ones for > which there is a clear advantage to being in scipy compared to outside > scipy tree (be it scikits or somewhere else). > Yes, that was my thought. Such packages shouldn't have to depend on the release schedule of scipy proper and could almost be released by a script if the proper machines and compilers were made available. I think this would be a good thing for less ambitious developers who just want to contribute a useful package and not worry about all the problems of a major scipy release. Adding the ability to select that particular package in a bug report would also make maintenance easier.
It would be even better if the mailer could also make the distinction so that one could subscribe to just those reports. I don't know if that is possible, however... Anyway, I think it would be useful to divide scipy into a core, difficult part and an easy part. The different parts could all install into the same folder, or be part of some super package, but the easy parts could also be made available as separate pieces with their own version numbers. Some might be at version 1.0 already ;) I'm not sure how svn could be set up to support this, but Anne should be able to check out only that part that is hers and there should be some way of building only that part. This also implies some sort of infrastructure to tag and release only those parts. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From ndbecker2 at gmail.com Fri Nov 7 08:16:45 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Fri, 07 Nov 2008 08:16:45 -0500 Subject: [SciPy-dev] numerical math consortium Message-ID: Have you guys seen this? http://www.nmconsortium.org/FldFaq/?id=66&page=FAQ From ndbecker2 at gmail.com Fri Nov 7 08:17:26 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Fri, 07 Nov 2008 08:17:26 -0500 Subject: [SciPy-dev] complex-value remez? References: Message-ID: Charles R Harris wrote: > On Thu, Nov 6, 2008 at 8:19 PM, Neal Becker wrote: > >> I see some posts from 2003 about complex remez. Any news? >> http://osdir.com/ml/python.scientific.devel/2003-02/msg00008.html >> > > That would be me. I have a python version sitting around somewhere and did > some experiments on the roundoff error for various implementations, but > haven't done much since because I haven't needed it except for the one > project. > > Chuck If it's usable I'd be very interested. 
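Until a complex-coefficient version exists, scipy.signal.remez covers the real, linear-phase case discussed in this thread. A small sketch (filter length and band edges are illustrative, with frequencies in units of the sample rate):

```python
import numpy as np
from scipy.signal import remez, freqz

# 33-tap equiripple lowpass: passband 0-0.1, stopband 0.15-0.5,
# band edges given as a fraction of the sample rate.
taps = remez(33, [0.0, 0.1, 0.15, 0.5], [1.0, 0.0])

# Inspect the magnitude response on a dense grid.
w, h = freqz(taps, worN=4096)
f = w / (2 * np.pi)          # rad/sample -> cycles/sample
mag = np.abs(h)

# Equiripple design: close to 1 across the passband, small in the stopband.
assert mag[f <= 0.1].min() > 0.9
assert mag[f >= 0.15].max() < 0.1
```

The complex design Chuck describes below generalizes this by allowing the response to be specified over the full band, including the negative frequencies, which a real linear-phase design cannot express.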
From berthold at despammed.com Fri Nov 7 10:08:51 2008 From: berthold at despammed.com (Berthold =?iso-8859-15?Q?H=F6llmann?=) Date: Fri, 07 Nov 2008 16:08:51 +0100 Subject: [SciPy-dev] "intent(aux)" does not honor "depend(..)" Message-ID: I try to wrap the lapack zheevr routine for a project of ours. Doing this I try to determine the optimal workspace in the wrapper routine. The generated code does work when put into correct order, but in spite of writing:: integer intent(aux), depend(n) :: nb1 = __ilaenv(1, "ZHETRD", uplo, n, -1, -1, -1) integer intent(aux), depend(n) :: nb2 = __ilaenv(1, "ZUNMTR", uplo, n, -1, -1, -1) integer intent(aux), depend(nb1, nb2) :: nb = MAX(nb1, nb2) the generated code for calculating nb1, nb2, and nb is inserted directly after `PyArg_ParseTupleAndKeywords` is called, instead of at a point after `uplo` and `n` are set. I guess this is a bug? Kind regards Berthold -- __ Address: G / \ L Germanischer Lloyd phone: +49-40-36149-7374 -+----+- Vorsetzen 35 P.O.Box 111606 fax : +49-40-36149-7320 \__/ D-20459 Hamburg D-20416 Hamburg -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 188 bytes Desc: not available URL: From opossumnano at gmail.com Fri Nov 7 10:45:15 2008 From: opossumnano at gmail.com (Tiziano Zito) Date: Fri, 7 Nov 2008 16:45:15 +0100 Subject: [SciPy-dev] "intent(aux)" does not honor "depend(..)" In-Reply-To: References: Message-ID: <20081107154514.GA5103@localhost> hi berthold, the zheevr routine is wrapped already in the symeig package (http://mdp-toolkit.sourceforge.net/symeig.html). this package is being integrated in scipy, but in the meantime it is available as a standalone package. ciao, tiziano On Fri 07 Nov, 16:08, Berthold Höllmann wrote: > > I try to wrap the lapack zheevr routine for a project of ours. Doing > this I try to determine the optimal workspace in the wrapper > routine.
The generated code does work when put into correct order, but > in spite of writing:: > > integer intent(aux), depend(n) :: nb1 = __ilaenv(1, "ZHETRD", uplo, n, -1, -1, -1) > integer intent(aux), depend(n) :: nb2 = __ilaenv(1, "ZUNMTR", uplo, n, -1, -1, -1) > integer intent(aux), depend(nb1, nb2) :: nb = MAX(nb1, nb2) > > the generated code for calculating nb1, nb2, and nb is inserted directly > after `PyArg_ParseTupleAndKeywords` is called, instead of at a point after > `uplo` and `n` are set. I guess this is a bug? > > Kind regards > Berthold > -- > __ Address: > G / \ L Germanischer Lloyd > phone: +49-40-36149-7374 -+----+- Vorsetzen 35 P.O.Box 111606 > fax : +49-40-36149-7320 \__/ D-20459 Hamburg D-20416 Hamburg > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev From charlesr.harris at gmail.com Fri Nov 7 10:55:33 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 7 Nov 2008 08:55:33 -0700 Subject: [SciPy-dev] complex-value remez? In-Reply-To: References: Message-ID: On Fri, Nov 7, 2008 at 6:17 AM, Neal Becker wrote: > Charles R Harris wrote: > > > On Thu, Nov 6, 2008 at 8:19 PM, Neal Becker wrote: > > > >> I see some posts from 2003 about complex remez. Any news? > >> http://osdir.com/ml/python.scientific.devel/2003-02/msg00008.html > >> > > > > That would be me. I have a python version sitting around somewhere and > did > > some experiments on the roundoff error for various implementations, but > > haven't done much since because I haven't needed it except for the one > > project. > > > > Chuck > > If it's usable I'd be very interested. Mind, it wasn't a general case program, it just generated complex, Hermitian coefficients for a filter whose passband could be specified right up to the sampling frequency (twice the Nyquist).
I used it to design a decimating filter since the aliasing is much easier to deal with when the negative frequencies aren't there. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant at enthought.com Fri Nov 7 18:27:17 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Fri, 07 Nov 2008 17:27:17 -0600 Subject: [SciPy-dev] Do we care about the LAPACK C interface? Message-ID: <4914CED5.7080005@enthought.com> Hi all, I'm sitting here in a BOF with Clint Whaley (the author of ATLAS). He is looking for information about people that care about the C interface to LAPACK. Do we care? I'm not clear on that. Don't we just use the FORTRAN interfaces to ATLAS? If I hear from you soon, I can give Clint the information before I leave the BOF. -Travis From robert.kern at gmail.com Fri Nov 7 18:34:42 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 7 Nov 2008 17:34:42 -0600 Subject: [SciPy-dev] Do we care about the LAPACK C interface? In-Reply-To: <4914CED5.7080005@enthought.com> References: <4914CED5.7080005@enthought.com> Message-ID: <3d375d730811071534j4cd2c2c0v2573a8c46df96839@mail.gmail.com> On Fri, Nov 7, 2008 at 17:27, Travis E. Oliphant wrote: > > Hi all, > > I'm sitting here in a BOF with Clint Whaley (the author of ATLAS). He > is looking for information about people that care about the C interface > to LAPACK. Do we care? > > I'm not clear on that. Don't we just use the FORTRAN interfaces to ATLAS? We do use the C interfaces if available for both BLAS and LAPACK. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Fri Nov 7 18:43:46 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 7 Nov 2008 17:43:46 -0600 Subject: [SciPy-dev] Do we care about the LAPACK C interface? 
In-Reply-To: <4914CED5.7080005@enthought.com> References: <4914CED5.7080005@enthought.com> Message-ID: <3d375d730811071543h607dbd4bs2752d3194d81ebce@mail.gmail.com> On Fri, Nov 7, 2008 at 17:27, Travis E. Oliphant wrote: > > Hi all, > > I'm sitting here in a BOF with Clint Whaley (the author of ATLAS). He > is looking for information about people that care about the C interface > to LAPACK. Do we care? > > I'm not clear on that. Don't we just use the FORTRAN interfaces to ATLAS? > > If I hear from you soon, I can give Clint the information before I leave > the BOF. Query: Are you talking about just the C interface to LAPACK or the C interface to the BLAS, too? I definitely want to keep the BLAS C interface. It would be nice if there were a C interface to the entire LAPACK, but I don't think that ATLAS (or anyone else) provides that currently. That way, we would be able to worry less about mixing and matching Fortran compilers (e.g. your distro builds ATLAS's LAPACK with g77 and you want to use gfortran to build scipy). But since ATLAS's LAPACK is a mixture of C and FORTRAN, I think we may be pretty much stuck with that problem regardless. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From matthieu.brucher at gmail.com Fri Nov 7 19:03:49 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Sat, 8 Nov 2008 01:03:49 +0100 Subject: [SciPy-dev] Do we care about the LAPACK C interface? In-Reply-To: <4914CED5.7080005@enthought.com> References: <4914CED5.7080005@enthought.com> Message-ID: > I'm sitting here in a BOF with Clint Whaley (the author of ATLAS). He > is looking for information about people that care about the C interface > to LAPACK. Do we care? Well, the issue is that there is no official LAPACK interface. 
Intel is trying to get one based on the C interface to BLAS, but I didn't hear about it recently. Matthieu -- Information System Engineer, Ph.D. Website: http://matthieu-brucher.developpez.com/ Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn: http://www.linkedin.com/in/matthieubrucher From oliphant at enthought.com Fri Nov 7 21:28:08 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Fri, 07 Nov 2008 20:28:08 -0600 Subject: [SciPy-dev] Do we care about the LAPACK C interface? In-Reply-To: <3d375d730811071543h607dbd4bs2752d3194d81ebce@mail.gmail.com> References: <4914CED5.7080005@enthought.com> <3d375d730811071543h607dbd4bs2752d3194d81ebce@mail.gmail.com> Message-ID: <4914F938.8040901@enthought.com> Robert Kern wrote: > On Fri, Nov 7, 2008 at 17:27, Travis E. Oliphant wrote: > >> Hi all, >> >> I'm sitting here in a BOF with Clint Whaley (the author of ATLAS). He >> is looking for information about people that care about the C interface >> to LAPACK. Do we care? >> >> I'm not clear on that. Don't we just use the FORTRAN interfaces to ATLAS? >> >> If I hear from you soon, I can give Clint the information before I leave >> the BOF. >> > > Query: Are you talking about just the C interface to LAPACK or the C > interface to the BLAS, too? I definitely want to keep the BLAS C > interface. It would be nice if there were a C interface to the entire > LAPACK, but I don't think that ATLAS (or anyone else) provides that > currently. That way, we would be able to worry less about mixing and > matching Fortran compilers (e.g. your distro builds ATLAS's LAPACK > with g77 and you want to use gfortran to build scipy). But since > ATLAS's LAPACK is a mixture of C and FORTRAN, I think we may be pretty > much stuck with that problem regardless. > Clint Whaley has written some more C interfaces to LAPACK and was wondering if anybody cared (or if he should stop doing it).
It sounds like we should let him know that his efforts in this direction are useful to us. -Travis From michael.abshoff at googlemail.com Fri Nov 7 22:35:52 2008 From: michael.abshoff at googlemail.com (Michael Abshoff) Date: Fri, 07 Nov 2008 19:35:52 -0800 Subject: [SciPy-dev] Do we care about the LAPACK C interface? In-Reply-To: <4914F938.8040901@enthought.com> References: <4914CED5.7080005@enthought.com> <3d375d730811071543h607dbd4bs2752d3194d81ebce@mail.gmail.com> <4914F938.8040901@enthought.com> Message-ID: <49150918.4040303@gmail.com> Travis E. Oliphant wrote: > Robert Kern wrote: >> On Fri, Nov 7, 2008 at 17:27, Travis E. Oliphant wrote: Hi, >>> Hi all, >>> >>> I'm sitting here in a BOF with Clint Whaley (the author of ATLAS). He >>> is looking for information about people that care about the C interface >>> to LAPACK. Do we care? >>> >>> I'm not clear on that. Don't we just use the FORTRAN interfaces to ATLAS? Wasn't there now some code in Scipy that depended on a CBLAS interface? >>> If I hear from you soon, I can give Clint the information before I leave >>> the BOF. >>> >> Query: Are you talking about just the C interface to LAPACK or the C >> interface to the BLAS, too? I definitely want to keep the BLAS C >> interface. It would be nice if there were a C interface to the entire >> LAPACK That is the current plan. The current code, which is not yet a complete interface, is in ATLAS 3.9.4. >>, but I don't think that ATLAS (or anyone else) provides that >> currently. That way, we would be able to worry less about mixing and >> matching Fortran compilers (e.g. your distro builds ATLAS's LAPACK >> with g77 and you want to use gfortran to build scipy). But since >> ATLAS's LAPACK is a mixture of C and FORTRAN, I think we may be pretty >> much stuck with that problem regardless. >> > > Clint Whaley has written some more C interfaces to LAPACK and was > wondering if anybody cared (or if he should stop doing it).
It sounds > like we should let him know that his efforts in this direction are > useful to us. Clint actually has (had?) a student working on this, too, and his point was that he only would keep working on this if there were actual users of the interface. There is certainly interest from some people in the Sage project for such a C interface. I guess in the end the hope is that by establishing a Lapack C interface via ATLAS other libraries will start to copy the interface, especially since it is BSD licensed code. > -Travis Cheers, Michael > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From cournape at gmail.com Fri Nov 7 22:42:46 2008 From: cournape at gmail.com (David Cournapeau) Date: Sat, 8 Nov 2008 12:42:46 +0900 Subject: [SciPy-dev] Do we care about the LAPACK C interface? In-Reply-To: <3d375d730811071543h607dbd4bs2752d3194d81ebce@mail.gmail.com> References: <4914CED5.7080005@enthought.com> <3d375d730811071543h607dbd4bs2752d3194d81ebce@mail.gmail.com> Message-ID: <5b8d13220811071942p58cd01bak3044d1677618099d@mail.gmail.com> On Sat, Nov 8, 2008 at 8:43 AM, Robert Kern wrote: > On Fri, Nov 7, 2008 at 17:27, Travis E. Oliphant wrote: >> >> Hi all, >> >> I'm sitting here in a BOF with Clint Whaley (the author of ATLAS). He >> is looking for information about people that care about the C interface >> to LAPACK. Do we care? >> >> I'm not clear on that. Don't we just use the FORTRAN interfaces to ATLAS? >> >> If I hear from you soon, I can give Clint the information before I leave >> the BOF. > > Query: Are you talking about just the C interface to LAPACK or the C > interface to the BLAS, too? I definitely want to keep the BLAS C > interface. I agree cblas is more interesting than clapack. In theory, we could wrap more cblas into numpy, without the need of any fortran compiler. 
There is also a technical difference between clapack and cblas: the clapack interface is a fortran-like interface (everything passed by reference, same names as lapack), contrary to cblas which is more C. This makes clapack quite confusing, actually, and not always even usable at the same time as LAPACK (because of name collision). > It would be nice if there were a C interface to the entire > LAPACK, but I don't think that ATLAS (or anyone else) provides that > currently. I am not 100 % positive, but I think the accelerate framework offers a clapack interface. AMD, too, may offer a clapack interface. It is not always obvious whether a given interface is lapack or clapack (since the names are the same). cheers, David From robert.kern at gmail.com Fri Nov 7 22:50:26 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 7 Nov 2008 21:50:26 -0600 Subject: [SciPy-dev] Do we care about the LAPACK C interface? In-Reply-To: <5b8d13220811071942p58cd01bak3044d1677618099d@mail.gmail.com> References: <4914CED5.7080005@enthought.com> <3d375d730811071543h607dbd4bs2752d3194d81ebce@mail.gmail.com> <5b8d13220811071942p58cd01bak3044d1677618099d@mail.gmail.com> Message-ID: <3d375d730811071950k329618f1pd672e987a7ce4cc3@mail.gmail.com> On Fri, Nov 7, 2008 at 21:42, David Cournapeau wrote: > On Sat, Nov 8, 2008 at 8:43 AM, Robert Kern wrote: >> On Fri, Nov 7, 2008 at 17:27, Travis E. Oliphant wrote: >>> >>> Hi all, >>> >>> I'm sitting here in a BOF with Clint Whaley (the author of ATLAS). He >>> is looking for information about people that care about the C interface >>> to LAPACK. Do we care? >>> >>> I'm not clear on that. Don't we just use the FORTRAN interfaces to ATLAS? >>> >>> If I hear from you soon, I can give Clint the information before I leave >>> the BOF. >> >> Query: Are you talking about just the C interface to LAPACK or the C >> interface to the BLAS, too? I definitely want to keep the BLAS C >> interface. > > I agree cblas is more interesting than clapack.
In theory, we could > wrap more cblas into numpy, without the need of any fortran compiler. > There is also technical difference between clapack and cblas: clapack > interface is a fortran-like interfance (everything passed by > reference, same name than lapack), contrary to cblas which is more C. > This makes clapack quite confusing, actually, and not always even > usable at the same time as LAPACK (because of name collision). > >> It would be nice if there were a C interface to the entire >> LAPACK, but I don't think that ATLAS (or anyone else) provides that >> currently. > > I am not 100 % positive, but I think the accelerate framework offers > a clapack interface. AMD, too, may offer a clapack interface. It is > not always obvious whereas a given interface is lapack or clapack > (since the names are the same). I don't think that's what we're talking about, here. The CLAPACK that you are talking about is already accomplished, and is just the result of f2c'ing the Fortran LAPACK sources; it is available here: http://netlib.org/clapack/ That is what is in OS X's Accelerate.framework, too. The C interface to LAPACK is a new effort by ATLAS (at least) along the same lines as the C interface to BLAS (pass-by-value for integer arguments, etc.). It is not yet complete, but you can see the current state in ATLAS/interfaces/lapack/C/src/ in the ATLAS 3.9.4 source tarball. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cournape at gmail.com Fri Nov 7 23:19:06 2008 From: cournape at gmail.com (David Cournapeau) Date: Sat, 8 Nov 2008 13:19:06 +0900 Subject: [SciPy-dev] Do we care about the LAPACK C interface? 
In-Reply-To: <3d375d730811071950k329618f1pd672e987a7ce4cc3@mail.gmail.com> References: <4914CED5.7080005@enthought.com> <3d375d730811071543h607dbd4bs2752d3194d81ebce@mail.gmail.com> <5b8d13220811071942p58cd01bak3044d1677618099d@mail.gmail.com> <3d375d730811071950k329618f1pd672e987a7ce4cc3@mail.gmail.com> Message-ID: <5b8d13220811072019g3fc75a41x807201743b775e3@mail.gmail.com> On Sat, Nov 8, 2008 at 12:50 PM, Robert Kern wrote: > On Fri, Nov 7, 2008 at 21:42, David Cournapeau wrote: >> On Sat, Nov 8, 2008 at 8:43 AM, Robert Kern wrote: >>> On Fri, Nov 7, 2008 at 17:27, Travis E. Oliphant wrote: >>>> >>>> Hi all, >>>> >>>> I'm sitting here in a BOF with Clint Whaley (the author of ATLAS). He >>>> is looking for information about people that care about the C interface >>>> to LAPACK. Do we care? >>>> >>>> I'm not clear on that. Don't we just use the FORTRAN interfaces to ATLAS? >>>> >>>> If I hear from you soon, I can give Clint the information before I leave >>>> the BOF. >>> >>> Query: Are you talking about just the C interface to LAPACK or the C >>> interface to the BLAS, too? I definitely want to keep the BLAS C >>> interface. >> >> I agree cblas is more interesting than clapack. In theory, we could >> wrap more cblas into numpy, without the need of any fortran compiler. >> There is also technical difference between clapack and cblas: clapack >> interface is a fortran-like interfance (everything passed by >> reference, same name than lapack), contrary to cblas which is more C. >> This makes clapack quite confusing, actually, and not always even >> usable at the same time as LAPACK (because of name collision). >> >>> It would be nice if there were a C interface to the entire >>> LAPACK, but I don't think that ATLAS (or anyone else) provides that >>> currently. >> >> I am not 100 % positive, but I think the accelerate framework offers >> a clapack interface. AMD, too, may offer a clapack interface. 
It is >> not always obvious whereas a given interface is lapack or clapack >> (since the names are the same). > > I don't think that's what we're talking about, here. The CLAPACK that > you are talking about is already accomplished, and is just the result > of f2c'ing the Fortran LAPACK sources; it is available here: > > http://netlib.org/clapack/ Ok, Netlib clapack and ATLAS clapack are not the same thing. The clapack we wrap in scipy.linalg is the latter, then. But since we have to support the Fortran interface anyway, what's the advantage of using a parallel clapack? Even if ATLAS had 100% coverage of LAPACK in its C interface, we still could not drop the Fortran wrapper (to support old versions of ATLAS, and all the other builds without the C interface). David From bgoli at sun.ac.za Sun Nov 9 12:17:21 2008 From: bgoli at sun.ac.za (Brett G. Olivier) Date: Sun, 09 Nov 2008 19:17:21 +0200 Subject: [SciPy-dev] Do we care about the LAPACK C interface? In-Reply-To: <4914F938.8040901@enthought.com> References: <4914CED5.7080005@enthought.com> <3d375d730811071543h607dbd4bs2752d3194d81ebce@mail.gmail.com> <4914F938.8040901@enthought.com> Message-ID: <49171B21.9090401@sun.ac.za> Travis E. Oliphant wrote: > Robert Kern wrote: >> On Fri, Nov 7, 2008 at 17:27, Travis E. Oliphant wrote: >> >>> Hi all, >>> >>> I'm sitting here in a BOF with Clint Whaley (the author of ATLAS). He >>> is looking for information about people that care about the C interface >>> to LAPACK. Do we care? >>> >>> I'm not clear on that. Don't we just use the FORTRAN interfaces to ATLAS? >>> >>> If I hear from you soon, I can give Clint the information before I leave >>> the BOF. >>> >> Query: Are you talking about just the C interface to LAPACK or the C >> interface to the BLAS, too? I definitely want to keep the BLAS C >> interface. It would be nice if there were a C interface to the entire >> LAPACK, but I don't think that ATLAS (or anyone else) provides that >> currently.
That way, we would be able to worry less about mixing and >> matching Fortran compilers (e.g. your distro builds ATLAS's LAPACK >> with g77 and you want to use gfortran to build scipy). But since >> ATLAS's LAPACK is a mixture of C and FORTRAN, I think we may be pretty >> much stuck with that problem regardless. >> > > Clint Whaley has written some more C interfaces to LAPACK and was > wondering if anybody cared (or if he should stop doing it). It sounds > like we should let him know that his efforts in this direction are > useful to us. Hi, Full support from us! This might be a bit late, but we use both the C (if available) and F LAPACK interfaces in our SciPy dependent software. These efforts would be highly appreciated. Cheers, Brett. > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev From nwagner at iam.uni-stuttgart.de Mon Nov 10 07:06:53 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 10 Nov 2008 13:06:53 +0100 Subject: [SciPy-dev] scipy.cluster Message-ID: >>> from scipy import cluster >>> cluster.test() ====================================================================== FAIL: Tests fcluster(Z, criterion='maxclust', t=2) on a random 3-cluster data set. ---------------------------------------------------------------------- Traceback (most recent call last): File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 457, in test_fcluster_maxclusts_2 self.failUnless(is_isomorphic(T, expectedT)) AssertionError ====================================================================== FAIL: Tests fcluster(Z, criterion='maxclust', t=3) on a random 3-cluster data set. 
---------------------------------------------------------------------- Traceback (most recent call last): File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 466, in test_fcluster_maxclusts_3 self.failUnless(is_isomorphic(T, expectedT)) AssertionError ====================================================================== FAIL: Tests fcluster(Z, criterion='maxclust', t=4) on a random 3-cluster data set. ---------------------------------------------------------------------- Traceback (most recent call last): File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 475, in test_fcluster_maxclusts_4 self.failUnless(is_isomorphic(T, expectedT)) AssertionError ====================================================================== FAIL: Tests fclusterdata(X, criterion='maxclust', t=2) on a random 3-cluster data set. ---------------------------------------------------------------------- Traceback (most recent call last): File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 434, in test_fclusterdata_maxclusts_2 self.failUnless(is_isomorphic(T, expectedT)) AssertionError ====================================================================== FAIL: Tests fclusterdata(X, criterion='maxclust', t=3) on a random 3-cluster data set. ---------------------------------------------------------------------- Traceback (most recent call last): File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 441, in test_fclusterdata_maxclusts_3 self.failUnless(is_isomorphic(T, expectedT)) AssertionError ====================================================================== FAIL: Tests fclusterdata(X, criterion='maxclust', t=4) on a random 3-cluster data set. 
---------------------------------------------------------------------- Traceback (most recent call last): File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 448, in test_fclusterdata_maxclusts_4 self.failUnless(is_isomorphic(T, expectedT)) AssertionError ---------------------------------------------------------------------- Ran 166 tests in 20.086s FAILED (errors=1, failures=6) >>> from scipy import cluster >>> scipy.__version__ '0.7.0.dev5035' ====================================================================== ERROR: Tests leaders using a flat clustering generated by single linkage. ---------------------------------------------------------------------- Traceback (most recent call last): File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", line 487, in test_leaders_single self.failUnless((L[0] == Lright[0]).all() and (L[1] == Lright[1]).all()) AttributeError: 'bool' object has no attribute 'all' From nwagner at iam.uni-stuttgart.de Mon Nov 10 09:15:45 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 10 Nov 2008 15:15:45 +0100 Subject: [SciPy-dev] Memory error in interpolate.sproot Message-ID: Hi all, I have some trouble concerning sproot from numpy import linspace, sin from scipy.interpolate import splrep, splev, sproot from pylab import plot, show, legend, savefig x = linspace(0, 10, 10) y = sin(x) tck = splrep(x, y) x2 = linspace(0, 10, 200) y2 = splev(x2, tck) plot(x, y, 'o', x2, y2) legend(('$\sin(x)$',r'Spline')) show() roots = sproot(tck, mest=2) python -i test_interpol.py /data/home/nwagner/local/lib/python2.5/site-packages/matplotlib/__init__.py:367: UserWarning: matplotlibrc text.usetex can not be used with *Agg backend unless dvipng-1.5 or later is installed on your system warnings.warn( 'matplotlibrc text.usetex can not be used with *Agg ' Traceback (most recent call last): File "test_interpol.py", line 12, in roots = sproot(tck, 
mest=2) File "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/interpolate/fitpack.py", line 576, in sproot z,ier=_fitpack._sproot(t,c,k,mest) MemoryError How can I resolve that problem ? Nils From chanley at stsci.edu Mon Nov 10 10:49:03 2008 From: chanley at stsci.edu (Christopher Hanley) Date: Mon, 10 Nov 2008 10:49:03 -0500 Subject: [SciPy-dev] recursion error in tests Message-ID: <491857EF.7050109@stsci.edu> Hi, I am running scipy.test() for version 0.7.0.dev5042. When I do so I receive the following error: .......................................................................................................................................................................................................................................... ====================================================================== ERROR: Regression test for #653. ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/chanley/dev/site-packages/lib/python/nose/case.py", line 182, in runTest self.test(*self.arg) File "/Users/chanley/dev/site-packages/lib/python/scipy/io/matlab/tests/test_mio.py", line 343, in test_regression_653 savemat(StringIO(), {'d':{1:2}}, format='5') File "/Users/chanley/dev/site-packages/lib/python/scipy/io/matlab/mio.py", line 159, in savemat MW.put_variables(mdict) File "/Users/chanley/dev/site-packages/lib/python/scipy/io/matlab/mio5.py", line 996, in put_variables mat_writer.write() File "/Users/chanley/dev/site-packages/lib/python/scipy/io/matlab/mio5.py", line 855, in write MW.write() File "/Users/chanley/dev/site-packages/lib/python/scipy/io/matlab/mio5.py", line 855, in write MW.write() ... 
File "/Users/chanley/dev/site-packages/lib/python/scipy/io/matlab/mio5.py", line 855, in write MW.write() File "/Users/chanley/dev/site-packages/lib/python/scipy/io/matlab/mio5.py", line 855, in write MW.write() File "/Users/chanley/dev/site-packages/lib/python/scipy/io/matlab/mio5.py", line 849, in write self.write_header(mclass=mxCELL_CLASS) File "/Users/chanley/dev/site-packages/lib/python/scipy/io/matlab/mio5.py", line 757, in write_header self.write_element(np.array(shape, dtype='i4')) File "/Users/chanley/dev/site-packages/lib/python/scipy/io/matlab/mio5.py", line 704, in write_element self.write_smalldata_element(arr, mdtype, byte_count) File "/Users/chanley/dev/site-packages/lib/python/scipy/io/matlab/mio5.py", line 714, in write_smalldata_element self.write_dtype(tag) File "/Users/chanley/dev/site-packages/lib/python/scipy/io/matlab/mio5.py", line 696, in write_dtype self.file_stream.write(arr.tostring()) File "/usr/stsci/pyssgdev/Python-2.5.1/lib/python2.5/StringIO.py", line 213, in write _complain_ifclosed(self.closed) RuntimeError: maximum recursion depth exceeded ---------------------------------------------------------------------- Ran 2547 tests in 33.578s FAILED (SKIP=28, errors=1) Chris -- Christopher Hanley Senior Systems Software Engineer Space Telescope Science Institute 3700 San Martin Drive Baltimore MD, 21218 (410) 338-4338 From cournape at gmail.com Mon Nov 10 12:01:09 2008 From: cournape at gmail.com (David Cournapeau) Date: Tue, 11 Nov 2008 02:01:09 +0900 Subject: [SciPy-dev] recursion error in tests In-Reply-To: <491857EF.7050109@stsci.edu> References: <491857EF.7050109@stsci.edu> Message-ID: <5b8d13220811100901w7cd1111ar955061501ab37da8@mail.gmail.com> On Tue, Nov 11, 2008 at 12:49 AM, Christopher Hanley wrote: > Hi, > > I am running scipy.test() for version 0.7.0.dev5042. When I do so I > receive the following error: Yep, that's a "fake" regression (fake because it has not been fixed yet). 
I tagged it as a known failure because the infinite recursion clutters the output, so you should just get a known failure now. David From eads at soe.ucsc.edu Mon Nov 10 12:28:14 2008 From: eads at soe.ucsc.edu (Damian Eads) Date: Mon, 10 Nov 2008 09:28:14 -0800 Subject: [SciPy-dev] scipy.cluster In-Reply-To: References: Message-ID: <91b4b1ab0811100928s585406e1jb4b0447dcd6f97a5@mail.gmail.com> I just wrote these tests this weekend. Please stand by. On 11/10/08, Nils Wagner wrote: >>>> from scipy import cluster >>>> cluster.test() > ====================================================================== > FAIL: Tests fcluster(Z, criterion='maxclust', t=2) on a > random 3-cluster data set. > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", > line 457, in test_fcluster_maxclusts_2 > self.failUnless(is_isomorphic(T, expectedT)) > AssertionError > > ====================================================================== > FAIL: Tests fcluster(Z, criterion='maxclust', t=3) on a > random 3-cluster data set. > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", > line 466, in test_fcluster_maxclusts_3 > self.failUnless(is_isomorphic(T, expectedT)) > AssertionError > > ====================================================================== > FAIL: Tests fcluster(Z, criterion='maxclust', t=4) on a > random 3-cluster data set.
> ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", > line 475, in test_fcluster_maxclusts_4 > self.failUnless(is_isomorphic(T, expectedT)) > AssertionError > > ====================================================================== > FAIL: Tests fclusterdata(X, criterion='maxclust', t=2) on > a random 3-cluster data set. > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", > line 434, in test_fclusterdata_maxclusts_2 > self.failUnless(is_isomorphic(T, expectedT)) > AssertionError > > ====================================================================== > FAIL: Tests fclusterdata(X, criterion='maxclust', t=3) on > a random 3-cluster data set. > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", > line 441, in test_fclusterdata_maxclusts_3 > self.failUnless(is_isomorphic(T, expectedT)) > AssertionError > > ====================================================================== > FAIL: Tests fclusterdata(X, criterion='maxclust', t=4) on > a random 3-cluster data set. 
> ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", > line 448, in test_fclusterdata_maxclusts_4 > self.failUnless(is_isomorphic(T, expectedT)) > AssertionError > > ---------------------------------------------------------------------- > Ran 166 tests in 20.086s > > FAILED (errors=1, failures=6) > >>>> from scipy import cluster >>>> scipy.__version__ > '0.7.0.dev5035' > > ====================================================================== > ERROR: Tests leaders using a flat clustering generated by > single linkage. > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", > line 487, in test_leaders_single > self.failUnless((L[0] == Lright[0]).all() and (L[1] > == Lright[1]).all()) > AttributeError: 'bool' object has no attribute 'all' > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -- Sent from my mobile device ----------------------------------------------------- Damian Eads Ph.D. 
Student Jack Baskin School of Engineering, UCSC E2-489 1156 High Street Machine Learning Lab Santa Cruz, CA 95064 http://www.soe.ucsc.edu/~eads From chanley at stsci.edu Mon Nov 10 12:34:03 2008 From: chanley at stsci.edu (Christopher Hanley) Date: Mon, 10 Nov 2008 12:34:03 -0500 Subject: [SciPy-dev] recursion error in tests In-Reply-To: <5b8d13220811100901w7cd1111ar955061501ab37da8@mail.gmail.com> References: <491857EF.7050109@stsci.edu> <5b8d13220811100901w7cd1111ar955061501ab37da8@mail.gmail.com> Message-ID: <4918708B.1030404@stsci.edu> David Cournapeau wrote: > On Tue, Nov 11, 2008 at 12:49 AM, Christopher Hanley wrote: >> Hi, >> >> I am running scipy.test() for version 0.7.0.dev5042. When I do so I >> receive the following error: > > Yep, that's a "fake" regression (fake because it has not been fixed > yet). I tagged it as a known failure because infinite recursion > clutter the output, so you should just get known failure instead now. > > David > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev David, Thank you for making this change. Chris -- Christopher Hanley Senior Systems Software Engineer Space Telescope Science Institute 3700 San Martin Drive Baltimore MD, 21218 (410) 338-4338 From eads at soe.ucsc.edu Mon Nov 10 12:57:16 2008 From: eads at soe.ucsc.edu (Damian Eads) Date: Mon, 10 Nov 2008 09:28:14 -0800 Subject: [SciPy-dev] scipy.cluster In-Reply-To: References: Message-ID: <91b4b1ab0811100957g3a19581fha981053e41afa257@mail.gmail.com> Hi Nils, Several months ago someone changed the dtype='i' typecodes to dtype=np.int in hierarchy.py. This works without incident when passing an array created with dtype=np.float to C code that operates on a double *. However, with ints, numpy.int and 'i' yield two different array data types on 64-bit machines.
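[As an aside to Damian's diagnosis, not part of his patch: NumPy also exposes np.intc as the scalar type defined to match the platform's C int, which avoids remembering the 'i' typecode. A minimal sketch of the distinction:]

```python
import numpy as np

# 'i' and np.intc both name the platform's C int, so arrays built with
# either one can safely be handed to C code expecting an int pointer.
assert np.dtype('i') == np.dtype(np.intc)

# np.int_ tracks the C long instead, which is 8 bytes on typical 64-bit
# Unix platforms -- the mismatch behind the failures reported above.
print(np.dtype(np.intc).itemsize, np.dtype(np.int_).itemsize)
```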
>>> import numpy >>> numpy.array([], dtype=numpy.int) array([], dtype=int64) >>> numpy.array([], dtype='i') array([], dtype=int32) >>> The 'i' string correctly corresponds to the C type int, whereas numpy.int does not. Changing the dtype fields back to 'i' for these arrays should fix the problem on 64-bit machines. I committed the fix. Please let me know if this works. Thanks, Damian On Mon, Nov 10, 2008 at 4:06 AM, Nils Wagner wrote: >>>> from scipy import cluster >>>> cluster.test() > ====================================================================== > FAIL: Tests fcluster(Z, criterion='maxclust', t=2) on a > random 3-cluster data set. > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", > line 457, in test_fcluster_maxclusts_2 > self.failUnless(is_isomorphic(T, expectedT)) > AssertionError > > ====================================================================== > FAIL: Tests fcluster(Z, criterion='maxclust', t=3) on a > random 3-cluster data set. > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", > line 466, in test_fcluster_maxclusts_3 > self.failUnless(is_isomorphic(T, expectedT)) > AssertionError > > ====================================================================== > FAIL: Tests fcluster(Z, criterion='maxclust', t=4) on a > random 3-cluster data set.
> ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", > line 475, in test_fcluster_maxclusts_4 > self.failUnless(is_isomorphic(T, expectedT)) > AssertionError > > ====================================================================== > FAIL: Tests fclusterdata(X, criterion='maxclust', t=2) on > a random 3-cluster data set. > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", > line 434, in test_fclusterdata_maxclusts_2 > self.failUnless(is_isomorphic(T, expectedT)) > AssertionError > > ====================================================================== > FAIL: Tests fclusterdata(X, criterion='maxclust', t=3) on > a random 3-cluster data set. > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", > line 441, in test_fclusterdata_maxclusts_3 > self.failUnless(is_isomorphic(T, expectedT)) > AssertionError > > ====================================================================== > FAIL: Tests fclusterdata(X, criterion='maxclust', t=4) on > a random 3-cluster data set. 
> ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", > line 448, in test_fclusterdata_maxclusts_4 > self.failUnless(is_isomorphic(T, expectedT)) > AssertionError > > ---------------------------------------------------------------------- > Ran 166 tests in 20.086s > > FAILED (errors=1, failures=6) > >>>> from scipy import cluster >>>> scipy.__version__ > '0.7.0.dev5035' > > ====================================================================== > ERROR: Tests leaders using a flat clustering generated by > single linkage. > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", > line 487, in test_leaders_single > self.failUnless((L[0] == Lright[0]).all() and (L[1] > == Lright[1]).all()) > AttributeError: 'bool' object has no attribute 'all' > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -- ----------------------------------------------------- Damian Eads Ph.D. 
Student Jack Baskin School of Engineering, UCSC E2-489 1156 High Street Machine Learning Lab Santa Cruz, CA 95064 http://www.soe.ucsc.edu/~eads From nwagner at iam.uni-stuttgart.de Mon Nov 10 15:45:03 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 10 Nov 2008 21:45:03 +0100 Subject: [SciPy-dev] Memory error in interpolate.sproot In-Reply-To: References: Message-ID: On Mon, 10 Nov 2008 15:15:45 +0100 "Nils Wagner" wrote: > Hi all, > > I have some trouble concerning sproot > > from numpy import linspace, sin > from scipy.interpolate import splrep, splev, sproot > from pylab import plot, show, legend, savefig > x = linspace(0, 10, 10) > y = sin(x) > tck = splrep(x, y) > x2 = linspace(0, 10, 200) > y2 = splev(x2, tck) > plot(x, y, 'o', x2, y2) > legend(('$\sin(x)$',r'Spline')) > show() > roots = sproot(tck, mest=2) > > > python -i test_interpol.py > /data/home/nwagner/local/lib/python2.5/site-packages/matplotlib/__init__.py:367: > UserWarning: matplotlibrc text.usetex can not be used >with > *Agg backend unless dvipng-1.5 or later is installed on > your system > warnings.warn( 'matplotlibrc text.usetex can not be > used with *Agg ' > Traceback (most recent call last): > File "test_interpol.py", line 12, in > roots = sproot(tck, mest=2) > File > "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/interpolate/fitpack.py", > line 576, in sproot > z,ier=_fitpack._sproot(t,c,k,mest) > MemoryError > > How can I resolve that problem ? The same program works for me on my old 32bit laptop. Can somebody reproduce the problem on a 64-bit system ? Nils From wnbell at gmail.com Mon Nov 10 16:15:50 2008 From: wnbell at gmail.com (Nathan Bell) Date: Mon, 10 Nov 2008 16:15:50 -0500 Subject: [SciPy-dev] Memory error in interpolate.sproot In-Reply-To: References: Message-ID: On Mon, Nov 10, 2008 at 3:45 PM, Nils Wagner wrote: > > The same program works for me on my old 32bit laptop. > > Can somebody reproduce the problem on a 64-bit system ? > Works for me. 
I get a plot and the following output: $ python -i test_iterpol.py Warning: the number of zeros exceeds mest -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From nwagner at iam.uni-stuttgart.de Mon Nov 10 16:23:24 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 10 Nov 2008 22:23:24 +0100 Subject: [SciPy-dev] Memory error in interpolate.sproot In-Reply-To: References: Message-ID: On Mon, 10 Nov 2008 16:15:50 -0500 "Nathan Bell" wrote: > On Mon, Nov 10, 2008 at 3:45 PM, Nils Wagner > wrote: >> >> The same program works for me on my old 32bit laptop. >> >> Can somebody reproduce the problem on a 64-bit system ? >> > > Works for me. I get a plot and the following output: > > $ python -i test_iterpol.py > Warning: the number of zeros exceeds mest I have modified "mest", an estimate of the number of zeros. roots = sproot(tck, mest=4) plot(roots,sin(roots),'ro') legend(('$\sin(x)$',r'Spline',r'Zeros')) Nils From pav at iki.fi Mon Nov 10 16:30:08 2008 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 10 Nov 2008 21:30:08 +0000 (UTC) Subject: [SciPy-dev] Memory error in interpolate.sproot References: Message-ID: Mon, 10 Nov 2008 16:15:50 -0500, Nathan Bell wrote: > On Mon, Nov 10, 2008 at 3:45 PM, Nils Wagner > wrote: >> >> The same program works for me on my old 32bit laptop. >> >> Can somebody reproduce the problem on a 64-bit system ? >> >> > Works for me. I get a plot and the following output: > > $ python -i test_iterpol.py > Warning: the number of zeros exceeds mest Valgrind spits out some warnings for me for the sproot line (Scipy r5054): {{{ ... >>> y2 = splev(x2, tck) >>> roots = sproot(tck, mest=2) ==4530== ==4530== Conditional jump or move depends on uninitialised value(s) ==4530== at 0x612DE8E: PyArray_NewFromDescr (arrayobject.c:5565) ==4530== by 0xADF5316: fitpack_sproot (__fitpack.h:518) ==4530== by 0x47F621: PyEval_EvalFrameEx (ceval.c:3566) ...
==4530== Conditional jump or move depends on uninitialised value(s) ==4530== at 0x612DE90: PyArray_NewFromDescr (arrayobject.c:5566) ==4530== by 0xADF5316: fitpack_sproot (__fitpack.h:518) ==4530== by 0x47F621: PyEval_EvalFrameEx (ceval.c:3566) ... ... ... ==4530== Conditional jump or move depends on uninitialised value(s) ==4530== at 0x4A1CFDA: memcpy (mc_replace_strmem.c:406) ==4530== by 0xADF5337: fitpack_sproot (__fitpack.h:520) ==4530== by 0x47F621: PyEval_EvalFrameEx (ceval.c:3566) ... ==4530== Conditional jump or move depends on uninitialised value(s) ==4530== at 0x4A1CFF9: memcpy (mc_replace_strmem.c:406) ==4530== by 0xADF5337: fitpack_sproot (__fitpack.h:520) ==4530== by 0x47F621: PyEval_EvalFrameEx (ceval.c:3566) ... ... ... ==4530== Use of uninitialised value of size 8 ==4530== at 0x4A1D074: memcpy (mc_replace_strmem.c:406) ==4530== by 0xADF5337: fitpack_sproot (__fitpack.h:520) ==4530== by 0x47F621: PyEval_EvalFrameEx (ceval.c:3566) ... ... Warning: the number of zeros exceeds mest }}} And... Bingo! {{{ __fitpack.h: 512 if ((z = (double *)malloc(mest*sizeof(double)))==NULL) { 513 PyErr_NoMemory(); 514 goto fail; 515 } 516 SPROOT(t,&n,c,z,&mest,&m,&ier); 517 if (ier==10) m=0; 518 ap_z = (PyArrayObject *)PyArray_SimpleNew(1,&m,PyArray_DOUBLE); 519 if (ap_z == NULL) goto fail; 520 memcpy(ap_z->data,z,m*sizeof(double)); }}} Obviously, the last line should be mest*sizeof(double). I'll fix this. -- Pauli Virtanen From pav at iki.fi Mon Nov 10 16:35:07 2008 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 10 Nov 2008 21:35:07 +0000 (UTC) Subject: [SciPy-dev] Memory error in interpolate.sproot References: Message-ID: Mon, 10 Nov 2008 21:30:08 +0000, Pauli Virtanen wrote: [clip: valgrind errors] > And... Bingo! [clip: probably false alarm] Ok, this was too hasty; it looks like the code is right after all, provided the Fortran routine SPROOT always sets m <= mest.
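[The allocation pattern under discussion can be mocked up outside C; a minimal Python sketch, with a hypothetical find_roots standing in for the Fortran SPROOT, of why copying m rather than mest elements is safe exactly when m <= mest:]

```python
def find_roots(buf, mest):
    """Stand-in for SPROOT: fill at most mest slots of buf and return the
    number m of entries actually written (contract: m <= mest)."""
    roots = [3.1416, 6.2832, 9.4248]   # pretend the spline has 3 zeros
    m = min(len(roots), mest)
    buf[:m] = roots[:m]
    return m

mest = 2
z = [0.0] * mest      # malloc(mest * sizeof(double))
m = find_roots(z, mest)
result = z[:m]        # memcpy(ap_z->data, z, m*sizeof(double))
# Only the first m slots of z were written, so copying m elements never
# reads past what the routine filled in -- as long as m <= mest holds.
```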
-- Pauli Virtanen From eads at soe.ucsc.edu Tue Nov 11 00:39:57 2008 From: eads at soe.ucsc.edu (Damian Eads) Date: Mon, 10 Nov 2008 21:39:57 -0800 Subject: [SciPy-dev] scipy.cluster In-Reply-To: <91b4b1ab0811100957g3a19581fha981053e41afa257@mail.gmail.com> References: <91b4b1ab0811100957g3a19581fha981053e41afa257@mail.gmail.com> Message-ID: <91b4b1ab0811102139l5a290483i438936e0ca4a828a@mail.gmail.com> Nils, Can you confirm my changes work now for you? Thanks. Damian On Mon, Nov 10, 2008 at 9:57 AM, Damian Eads wrote: > Hi Nils, > > Several months ago someone changed the dtype= to > dtype=np.typename in hierarchy.py. This works without incident when > passing an array created with dtype=np.float to C code that operates > on a double *. However, with ints, numpy.int and 'i' both yield two > different array data types on 64-bit machines. > >>>> import numpy >>>> numpy.array([], dtype=numpy.int) > array([], dtype=int64) >>>> numpy.array([], dtype='i') > array([], dtype=int32) >>>> > > The 'i' string corresponds to the C-type int, correctly whereas > numpy.int does not. Changing the dtype fields back to 'i' for these > arrays should fix the problem on 64-bit machines. I committed the fix. > Please let me know if this works. > > Thanks, > > Damian > > > On Mon, Nov 10, 2008 at 4:06 AM, Nils Wagner > wrote: >>>>> from scipy import cluster >>>>> cluster.test() >> ====================================================================== >> FAIL: Tests fcluster(Z, criterion='maxclust', t=2) on a >> random 3-cluster data set. 
>> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File >> "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", >> line 457, in test_fcluster_maxclusts_2 >> self.failUnless(is_isomorphic(T, expectedT)) >> AssertionError >> >> ====================================================================== >> FAIL: Tests fcluster(Z, criterion='maxclust', t=3) on a >> random 3-cluster data set. >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File >> "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", >> line 466, in test_fcluster_maxclusts_3 >> self.failUnless(is_isomorphic(T, expectedT)) >> AssertionError >> >> ====================================================================== >> FAIL: Tests fcluster(Z, criterion='maxclust', t=4) on a >> random 3-cluster data set. >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File >> "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", >> line 475, in test_fcluster_maxclusts_4 >> self.failUnless(is_isomorphic(T, expectedT)) >> AssertionError >> >> ====================================================================== >> FAIL: Tests fclusterdata(X, criterion='maxclust', t=2) on >> a random 3-cluster data set. >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File >> "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", >> line 434, in test_fclusterdata_maxclusts_2 >> self.failUnless(is_isomorphic(T, expectedT)) >> AssertionError >> >> ====================================================================== >> FAIL: Tests fclusterdata(X, criterion='maxclust', t=3) on >> a random 3-cluster data set. 
>> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File >> "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", >> line 441, in test_fclusterdata_maxclusts_3 >> self.failUnless(is_isomorphic(T, expectedT)) >> AssertionError >> >> ====================================================================== >> FAIL: Tests fclusterdata(X, criterion='maxclust', t=4) on >> a random 3-cluster data set. >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File >> "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", >> line 448, in test_fclusterdata_maxclusts_4 >> self.failUnless(is_isomorphic(T, expectedT)) >> AssertionError >> >> ---------------------------------------------------------------------- >> Ran 166 tests in 20.086s >> >> FAILED (errors=1, failures=6) >> >>>>> from scipy import cluster >>>>> scipy.__version__ >> '0.7.0.dev5035' >> >> ====================================================================== >> ERROR: Tests leaders using a flat clustering generated by >> single linkage. >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File >> "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", >> line 487, in test_leaders_single >> self.failUnless((L[0] == Lright[0]).all() and (L[1] >> == Lright[1]).all()) >> AttributeError: 'bool' object has no attribute 'all' ----------------------------------------------------- Damian Eads Ph.D. 
Student Jack Baskin School of Engineering, UCSC E2-489 1156 High Street Machine Learning Lab Santa Cruz, CA 95064 http://www.soe.ucsc.edu/~eads From nwagner at iam.uni-stuttgart.de Tue Nov 11 02:11:45 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 11 Nov 2008 08:11:45 +0100 Subject: [SciPy-dev] scipy.cluster In-Reply-To: <91b4b1ab0811102139l5a290483i438936e0ca4a828a@mail.gmail.com> References: <91b4b1ab0811100957g3a19581fha981053e41afa257@mail.gmail.com> <91b4b1ab0811102139l5a290483i438936e0ca4a828a@mail.gmail.com> Message-ID: On Mon, 10 Nov 2008 21:39:57 -0800 "Damian Eads" wrote: > Nils, > > Can you confirm my changes work now for you? > > Thanks. > > Damian Damian, Works fine for me >>> scipy.__version__ '0.7.0.dev5057' >>> numpy.__version__ '1.3.0.dev6001' Ran 2663 tests in 39.376s OK (KNOWNFAIL=1, SKIP=16) Thank you very much. Nils From nwagner at iam.uni-stuttgart.de Tue Nov 11 02:21:41 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 11 Nov 2008 08:21:41 +0100 Subject: [SciPy-dev] Memory error in interpolate.sproot In-Reply-To: References: Message-ID: On Mon, 10 Nov 2008 21:35:07 +0000 (UTC) Pauli Virtanen wrote: > Mon, 10 Nov 2008 21:30:08 +0000, Pauli Virtanen wrote: > [clip: valgrind errors] >> And... Bingo! > [clip: probably false alarm] > > Ok, this was too hasty, it looks like it's right >nevertheless, provided > SPROOT the fortran routine always sets m <= mest. > > -- > Pauli Virtanen > Anyway, I have filed a ticket. Nils From millman at berkeley.edu Tue Nov 11 02:37:04 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Mon, 10 Nov 2008 23:37:04 -0800 Subject: [SciPy-dev] License review / weave bsd-ification In-Reply-To: References: <3d375d730811030907j408b7009w9520db3807520192@mail.gmail.com> Message-ID: On Mon, Nov 3, 2008 at 9:10 AM, Jarrod Millman wrote: > On Mon, Nov 3, 2008 at 9:07 AM, Robert Kern wrote: >> * But as David points out, it's not actually used anywhere in weave. 
>> We can just remove it. > Is there any objection to removing rand-mt? If not, I will be happy > to take care of it later today. Removed: http://projects.scipy.org/scipy/scipy/changeset/5058 All blitz code is now properly licensed and I have closed the ticket: http://scipy.org/scipy/scipy/ticket/649 -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From millman at berkeley.edu Tue Nov 11 03:53:25 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 11 Nov 2008 00:53:25 -0800 Subject: [SciPy-dev] arpack wrapper status Message-ID: I think this ticket should be closed: http://projects.scipy.org/scipy/scipy/ticket/231 I know that there is still work planned on this, but it might be more appropriate to create a new ticket to enhance the arpack wrapper. It would also be great if Aric (or someone else) could go ahead and add an entry to the release notes: http://projects.scipy.org/scipy/scipy/milestone/0.7.0 Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From matthew.brett at gmail.com Tue Nov 11 04:06:26 2008 From: matthew.brett at gmail.com (Matthew Brett) Date: Tue, 11 Nov 2008 01:06:26 -0800 Subject: [SciPy-dev] recursion error in tests In-Reply-To: <5b8d13220811100901w7cd1111ar955061501ab37da8@mail.gmail.com> References: <491857EF.7050109@stsci.edu> <5b8d13220811100901w7cd1111ar955061501ab37da8@mail.gmail.com> Message-ID: <1e2af89e0811110106v6a46701bgfa5074b95fa12f7d@mail.gmail.com> Hi David, >> I am running scipy.test() for version 0.7.0.dev5042. When I do so I >> receive the following error: > > Yep, that's a "fake" regression (fake because it has not been fixed > yet). I tagged it as a known failure because infinite recursion > clutters the output, so you should just get a known failure instead now. Thanks for putting that useful test in.
I've reenabled it and fixed it now in SVN, in the sense that I don't think we should have been able to do what the test wants to do. At least the current code is not set up to do it. I'd be glad of more people testing it though. Thanks, Matthew From cournape at gmail.com Tue Nov 11 04:13:39 2008 From: cournape at gmail.com (David Cournapeau) Date: Tue, 11 Nov 2008 18:13:39 +0900 Subject: [SciPy-dev] arpack wrapper status In-Reply-To: References: Message-ID: <5b8d13220811110113v29a658c4yacbc6a2b9939ab41@mail.gmail.com> On Tue, Nov 11, 2008 at 5:53 PM, Jarrod Millman wrote: > I think this ticket should be closed: > http://projects.scipy.org/scipy/scipy/ticket/231 > > I know that there is still work planned on this, but it might be more > appropriate to create a new ticket to enhance the arpack wrapper. It > would also be great if Aric (or someone else) could go ahead and add an > entry to the release notes: > http://projects.scipy.org/scipy/scipy/milestone/0.7.0 Some issues were solved (Mac OS X segfaults, in particular), but not everything in that ticket has been addressed AFAIK. There is a general issue related to arpack and other 'core' Fortran libraries which I would like to see solved in a more general, systematic way in scipy (after 0.7, of course). One problem is that we need workarounds depending on the BLAS/LAPACK we are using, and there is already code duplication across arpack, scipy.lib, and scipy.linalg (to deal with the g77 vs. gfortran ABI issue, which matters on Mac OS X, as the ABI of their LAPACK/BLAS is g77's while we use gfortran on that platform). I don't know what's best, but one solution would be to push the low-level Fortran wrappers somewhere central (say scipy.lib) and make use of them everywhere else. That would be a relatively big task, though.
David From cournape at gmail.com Tue Nov 11 04:35:45 2008 From: cournape at gmail.com (David Cournapeau) Date: Tue, 11 Nov 2008 18:35:45 +0900 Subject: [SciPy-dev] recursion error in tests In-Reply-To: <1e2af89e0811110106v6a46701bgfa5074b95fa12f7d@mail.gmail.com> References: <491857EF.7050109@stsci.edu> <5b8d13220811100901w7cd1111ar955061501ab37da8@mail.gmail.com> <1e2af89e0811110106v6a46701bgfa5074b95fa12f7d@mail.gmail.com> Message-ID: <5b8d13220811110135r164bc637o92f78a9387e53575@mail.gmail.com> On Tue, Nov 11, 2008 at 6:06 PM, Matthew Brett wrote: > Thanks for putting that useful test in. I've reenabled it and fixed > it now in SVN, in the sense that I don't think we should have been > able to do what the test wants to do. At least the current code is > not set up to do it. I'd be glad of more people testing it though. Great, thanks for the work. David From millman at berkeley.edu Tue Nov 11 04:48:24 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 11 Nov 2008 01:48:24 -0800 Subject: [SciPy-dev] default matlab file format Message-ID: I think that the savemat function should default to version 5 rather than 4: http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/io/matlab/mio.py I know that there has been an increasing sense that changing the API should be done via deprecation warnings; but, in this particular case, it seems that it would be much more useful to end users to change the behavior without a deprecation. I seriously doubt that there are many (if any) Matlab users with such old versions of Matlab that they can't read version 5 files. What do other people think? Does anyone have a need for keeping version 4 as the default format for saving Matlab files? 
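For scripts that must not depend on whichever default wins, the MAT-file version can be pinned at each call site. A minimal sketch, assuming the `format` keyword accepted by savemat in mio.py, with '4' and '5' as the two values under discussion:

```python
import os
import tempfile

import numpy as np
from scipy.io import savemat, loadmat

# Pin the MAT-file version explicitly so a change in the default
# cannot silently alter the files a script produces.
data = {"a": np.arange(4.0).reshape(2, 2)}
path = os.path.join(tempfile.mkdtemp(), "demo.mat")
savemat(path, data, format="5")  # or format="4" for very old readers

# A version-5 file starts with a human-readable text header, which
# makes the chosen format easy to verify after the fact.
with open(path, "rb") as f:
    header = f.read(19)
print(header.startswith(b"MATLAB 5.0 MAT-file"))

# Round-trip check: the array survives save/load unchanged.
back = loadmat(path)
assert np.allclose(back["a"], data["a"])
```

Passing format explicitly makes a script's output identical before and after any change of default.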
-- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From berthold.hoellmann at gl-group.com Tue Nov 11 08:52:31 2008 From: berthold.hoellmann at gl-group.com (=?iso-8859-15?Q?Berthold_=22H=F6llmann=22?=) Date: Tue, 11 Nov 2008 14:52:31 +0100 Subject: [SciPy-dev] "intent(aux)" does not honor "depend(..)" In-Reply-To: <20081107154514.GA5103@localhost> (Tiziano Zito's message of "Fri\, 7 Nov 2008 16\:45\:15 +0100") References: <20081107154514.GA5103@localhost> Message-ID: Tiziano Zito writes: > hi berthold, the zheevr routine is wrapped already in the symeig > package (http://mdp-toolkit.sourceforge.net/symeig.html). this > package is being integrated in scipy, but in the meanwhile it is > available as a standalone package. > > ciao, > tiziano > > > On Fri 07 Nov, 16:08, Berthold Höllmann wrote: >> >> I try to wrap the LAPACK zheevr routine for a project of ours. Doing >> this I try to determine the optimal workspace in the wrapper >> routine. The generated code does work when put into the correct order, but >> in spite of writing::
>>
>> integer intent(aux), depend(n) :: nb1 = __ilaenv(1, "ZHETRD", uplo, n, -1, -1, -1)
>> integer intent(aux), depend(n) :: nb2 = __ilaenv(1, "ZUNMTR", uplo, n, -1, -1, -1)
>> integer intent(aux), depend(nb1, nb2) :: nb = MAX(nb1, nb2)
>>
>> the generated code for calculating nb1, nb2, and nb is inserted directly >> after `PyArg_ParseTupleAndKeywords` is called, instead of at a point after >> `uplo` and `n` are set. I guess this is a bug? I wanted to check all the arguments for valid values and use the optimal workspace size.
I found a way to achieve this by the following signature (comments left out):

subroutine zheevr(jobz, range, uplo, n, a, lda, vl, vu, il, iu, &
     & abstol, m, w, z, ldz, isuppz, work, lwork, rwork, lrwork, &
     & iwork, liwork, info)
  character intent(in), check(*jobz=='N'||*jobz=='V') :: jobz
  character intent(in), check(*range=='A'||*range=='V'||*range=='I') :: range
  character intent(in), check(*uplo=='U'||*uplo=='L') :: uplo
  integer intent(hide), check(shape(a,1)==n), depend(a) :: n=shape(a,1)
  complex*16 intent(in,out), dimension(lda,n) :: a
  integer intent(hide), check(shape(a,0)==lda), depend(a) :: lda=shape(a,0)
  double precision intent(in), optional, depend(vu), &
     & check((not *range=='V')||vl < vu) :: vl=0.
  double precision intent(in), optional :: vu=0.
  integer intent(in), optional, depend(iu), &
     & check((not *range=='I')|| &
     & (((n>0&&(1<=il<=iu<=n)))||(n==0&&(il==1&&iu==0)))) :: il=-1
  integer intent(in), optional :: iu=-1
  double precision intent(in), optional :: abstol=0.
  integer intent(out) :: m
  double precision intent(out), dimension(n) :: w
  complex*16 intent(out), dimension(ldz,n) :: z
  integer intent(hide) :: ldz=MAX(1,n)
  integer intent(out), dimension(2*MAX(1,n)) :: isuppz
  complex*16 intent(hide, cache), dimension(lwork), depend(lwork) :: work
  integer intent(hide), depend(n, uplo) :: lwork=zheevr_lwork(n, &uplo)
  double precision intent(hide, cache), dimension(lrwork), depend(lrwork) :: rwork
  integer intent(hide), depend(n, uplo) :: lrwork=zheevr_lrwork(n, &uplo)
  integer intent(hide, cache), dimension(liwork), depend(liwork) :: iwork
  integer intent(hide), depend(n) :: liwork=MAX(1, 10*n)
  integer intent(out) :: info
end subroutine zheevr

The functions `zheevr_lwork` and `zheevr_lrwork` are defined in the `usercode` block as::

extern void F_WRAPPEDFUNC(ilaenv,ILAENV)(int*,int*,string,string,int*,int*,int*,int*,size_t,size_t);

static int zheevr_nb(int n, char uplo) {
    int nb1, nb2;
    int one=1;
    int none=-1;
    size_t name_len = 6;
    size_t opts_len = 1;
    (*F_WRAPPEDFUNC(ilaenv,ILAENV))(&nb1, &one, "ZHETRD", &uplo, &n, &none, &none, &none, name_len, opts_len);
    (*F_WRAPPEDFUNC(ilaenv,ILAENV))(&nb2, &one, "ZUNMTR", &uplo, &n, &none, &none, &none, name_len, opts_len);
    return MAX(nb1, nb2);
}

static int zheevr_lwork(int n, char uplo) {
    int nb = zheevr_nb(n, uplo);
    return MAX((nb+1)*n, MAX(1, 2*n));
}

static int zheevr_lrwork(int n, char uplo) {
    int nb = zheevr_nb(n, uplo);
    return MAX((nb+1)*n, MAX(1, 24*n));
}

Any comments are welcome. Kind regards Berthold Höllmann -- Germanischer Lloyd AG CAE Development Vorsetzen 35 20459 Hamburg Phone: +49(0)40 36149-7374 Fax: +49(0)40 36149-7320 e-mail: berthold.hoellmann at gl-group.com Internet: http://www.gl-group.com From oliphant at enthought.com Tue Nov 11 09:47:58 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Tue, 11 Nov 2008 08:47:58 -0600 Subject: [SciPy-dev] default matlab file format In-Reply-To: References: Message-ID: <49199B1E.2060506@enthought.com> Jarrod Millman wrote: > I think that the savemat function should default to version 5 rather than 4: > http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/io/matlab/mio.py > > I know that there has been an increasing sense that changing the API > should be done via deprecation warnings; but, in this particular case, > it seems that it would be much more useful to end users to change the > behavior without a deprecation. I seriously doubt that there are many > (if any) Matlab users with such old versions of Matlab that they can't > read version 5 files. What do other people think?
Does anyone have a > need for keeping version 4 as the default format for saving Matlab > files? > I think this is fine to change in 0.7. People who *need* version 4 files can still specify them. -Travis From hagberg at lanl.gov Tue Nov 11 10:36:33 2008 From: hagberg at lanl.gov (Aric Hagberg) Date: Tue, 11 Nov 2008 08:36:33 -0700 Subject: [SciPy-dev] arpack wrapper status In-Reply-To: <5b8d13220811110113v29a658c4yacbc6a2b9939ab41@mail.gmail.com> References: <5b8d13220811110113v29a658c4yacbc6a2b9939ab41@mail.gmail.com> Message-ID: <20081111153633.GM12595@bigjim2.lanl.gov> On Tue, Nov 11, 2008 at 06:13:39PM +0900, David Cournapeau wrote: > On Tue, Nov 11, 2008 at 5:53 PM, Jarrod Millman wrote: > > I think this ticket should be closed: > > http://projects.scipy.org/scipy/scipy/ticket/231 > > > > I know that there is still work planned on this, but it might be more > > appropriate to create a new ticket to enhance the arpack wrapper. It > > would also be great if Aric (or someone else) could go ahead and an > > entry to the release notes: > > http://projects.scipy.org/scipy/scipy/milestone/0.7.0 > > Some issues were solved (mac os X segfaults, in particular), but not > everything in that ticket above has been addressed AFAIK. There is a > general issue related to arpack and other 'core' Fortran libraries > which I would like to see solved in a more general, systematic way in > scipy (after 0.7 of course). I agree that http://projects.scipy.org/scipy/scipy/ticket/231 can be closed and another ticket can be opened for further enhancements. Thanks to David for figuring out the LAPACK issues needed to solve the OSX segfaults. I don't have MILESTONE_MODIFY privileges on the wiki - here is some text for the release notes: == New sparse eigensolver == A new wrapper to the ARPACK eigensolver provides functions to compute a few eigenvalues and eigenvectors of general n by n sparse matrices. 
http://scipy.org/scipy/scipy/browser/trunk/scipy/sparse/linalg/eigen/arpack Aric From wnbell at gmail.com Tue Nov 11 13:10:53 2008 From: wnbell at gmail.com (Nathan Bell) Date: Tue, 11 Nov 2008 13:10:53 -0500 Subject: [SciPy-dev] arpack wrapper status In-Reply-To: <20081111153633.GM12595@bigjim2.lanl.gov> References: <5b8d13220811110113v29a658c4yacbc6a2b9939ab41@mail.gmail.com> <20081111153633.GM12595@bigjim2.lanl.gov> Message-ID: On Tue, Nov 11, 2008 at 10:36 AM, Aric Hagberg wrote: > > I don't have MILESTONE_MODIFY privileges on the wiki - here is > some text for the release notes: > > == New sparse eigensolver == > > A new wrapper to the ARPACK eigensolver provides functions to compute > a few eigenvalues and eigenvectors of general n by n sparse matrices. > > http://scipy.org/scipy/scipy/browser/trunk/scipy/sparse/linalg/eigen/arpack > Done: http://scipy.org/scipy/scipy/milestone/0.7.0 -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From oliphant at enthought.com Tue Nov 11 14:08:15 2008 From: oliphant at enthought.com (Travis E. Oliphant) Date: Tue, 11 Nov 2008 13:08:15 -0600 Subject: [SciPy-dev] [Fwd: Re: [Fwd: Re: Do we care about the LAPACK C interface?]] Message-ID: <4919D81F.4020109@enthought.com> -------------- next part -------------- An embedded message was scrubbed... From: whaley at cs.utsa.edu (Clint Whaley) Subject: Re: [Fwd: Re: Do we care about the LAPACK C interface?] Date: Tue, 11 Nov 2008 11:33:51 -0600 Size: 2365 URL: From wnbell at gmail.com Tue Nov 11 21:24:25 2008 From: wnbell at gmail.com (Nathan Bell) Date: Tue, 11 Nov 2008 21:24:25 -0500 Subject: [SciPy-dev] Missed gfortran library when linking umfpack Message-ID: Has anyone encountered this problem before or know of a solution? I know we're dropping UMFPACK support after 0.7, but this is likely to pop up again in the UMFPACK scikit. 
Also, this is the last outstanding ticket in scipy.sparse or scipy.sparse.linalg, so I'd like to resolve it :) http://scipy.org/scipy/scipy/ticket/790 Description: I had a problem when using umfpack (both scipy and scikits) due to a missing -lgfortran. I had to re-compile by hand after the regular build. I dont know if it is some problem with my umfpack/atlas build (gcc and gfortran) or some issue in the configuration. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From eads at soe.ucsc.edu Tue Nov 11 22:07:40 2008 From: eads at soe.ucsc.edu (Damian Eads) Date: Tue, 11 Nov 2008 19:07:40 -0800 Subject: [SciPy-dev] scipy.cluster In-Reply-To: <91b4b1ab0811100957g3a19581fha981053e41afa257@mail.gmail.com> References: <91b4b1ab0811100957g3a19581fha981053e41afa257@mail.gmail.com> Message-ID: <91b4b1ab0811111907j483389b6ndc51772fc0313cc3@mail.gmail.com> We can probably close tickets #781 and #782 as a result of this change. Please let me know if this is okay. Damian On Mon, Nov 10, 2008 at 9:57 AM, Damian Eads wrote: > Hi Nils, > > Several months ago someone changed the dtype= to > dtype=np.typename in hierarchy.py. This works without incident when > passing an array created with dtype=np.float to C code that operates > on a double *. However, with ints, numpy.int and 'i' both yield two > different array data types on 64-bit machines. > >>>> import numpy >>>> numpy.array([], dtype=numpy.int) > array([], dtype=int64) >>>> numpy.array([], dtype='i') > array([], dtype=int32) >>>> > > The 'i' string corresponds to the C-type int, correctly whereas > numpy.int does not. Changing the dtype fields back to 'i' for these > arrays should fix the problem on 64-bit machines. I committed the fix. > Please let me know if this works. 
> > Thanks, > > Damian > > > On Mon, Nov 10, 2008 at 4:06 AM, Nils Wagner > wrote: >>>>> from scipy import cluster >>>>> cluster.test() >> ====================================================================== >> FAIL: Tests fcluster(Z, criterion='maxclust', t=2) on a >> random 3-cluster data set. >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File >> "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", >> line 457, in test_fcluster_maxclusts_2 >> self.failUnless(is_isomorphic(T, expectedT)) >> AssertionError >> >> ====================================================================== >> FAIL: Tests fcluster(Z, criterion='maxclust', t=3) on a >> random 3-cluster data set. >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File >> "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", >> line 466, in test_fcluster_maxclusts_3 >> self.failUnless(is_isomorphic(T, expectedT)) >> AssertionError >> >> ====================================================================== >> FAIL: Tests fcluster(Z, criterion='maxclust', t=4) on a >> random 3-cluster data set. >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File >> "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", >> line 475, in test_fcluster_maxclusts_4 >> self.failUnless(is_isomorphic(T, expectedT)) >> AssertionError >> >> ====================================================================== >> FAIL: Tests fclusterdata(X, criterion='maxclust', t=2) on >> a random 3-cluster data set. 
>> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File >> "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", >> line 434, in test_fclusterdata_maxclusts_2 >> self.failUnless(is_isomorphic(T, expectedT)) >> AssertionError >> >> ====================================================================== >> FAIL: Tests fclusterdata(X, criterion='maxclust', t=3) on >> a random 3-cluster data set. >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File >> "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", >> line 441, in test_fclusterdata_maxclusts_3 >> self.failUnless(is_isomorphic(T, expectedT)) >> AssertionError >> >> ====================================================================== >> FAIL: Tests fclusterdata(X, criterion='maxclust', t=4) on >> a random 3-cluster data set. >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File >> "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", >> line 448, in test_fclusterdata_maxclusts_4 >> self.failUnless(is_isomorphic(T, expectedT)) >> AssertionError >> >> ---------------------------------------------------------------------- >> Ran 166 tests in 20.086s >> >> FAILED (errors=1, failures=6) >> >>>>> from scipy import cluster >>>>> scipy.__version__ >> '0.7.0.dev5035' >> >> ====================================================================== >> ERROR: Tests leaders using a flat clustering generated by >> single linkage. 
>> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File >> "/data/home/nwagner/local/lib/python2.5/site-packages/scipy/cluster/tests/test_hierarchy.py", >> line 487, in test_leaders_single >> self.failUnless((L[0] == Lright[0]).all() and (L[1] >> == Lright[1]).all()) >> AttributeError: 'bool' object has no attribute 'all' >> >> _______________________________________________ >> Scipy-dev mailing list >> Scipy-dev at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-dev >> > > > > -- > ----------------------------------------------------- > Damian Eads Ph.D. Student > Jack Baskin School of Engineering, UCSC E2-489 > 1156 High Street Machine Learning Lab > Santa Cruz, CA 95064 http://www.soe.ucsc.edu/~eads > -- ----------------------------------------------------- Damian Eads Ph.D. Student Jack Baskin School of Engineering, UCSC E2-489 1156 High Street Machine Learning Lab Santa Cruz, CA 95064 http://www.soe.ucsc.edu/~eads From david at ar.media.kyoto-u.ac.jp Tue Nov 11 22:17:03 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 12 Nov 2008 12:17:03 +0900 Subject: [SciPy-dev] Missed gfortran library when linking umfpack In-Reply-To: References: Message-ID: <491A4AAF.6060307@ar.media.kyoto-u.ac.jp> Nathan Bell wrote: > Has anyone encountered this problem before or know of a solution? I > know we're dropping UMFPACK support after 0.7, but this is likely to > pop up again in the UMFPACK scikit. Also, this is the last > outstanding ticket in scipy.sparse or scipy.sparse.linalg, so I'd like > to resolve it :) > > http://scipy.org/scipy/scipy/ticket/790 > Description: > I had a problem when using umfpack (both scipy and scikits) due to a > missing -lgfortran. I had to re-compile by hand after the regular > build. I dont know if it is some problem with my umfpack/atlas build > (gcc and gfortran) or some issue in the configuration. 
> We certainly can't help without more information: we need details like platform (OS), compiler, BLAS/LAPACK (and how it was built, if it was), and possibly the CPU. There should be an easy way to produce those data directly from scipy, though, because it is difficult for users to know what's relevant. Something like a scipy.build_info module, or whatever. David From ondrej at certik.cz Wed Nov 12 07:23:10 2008 From: ondrej at certik.cz (Ondrej Certik) Date: Wed, 12 Nov 2008 13:23:10 +0100 Subject: [SciPy-dev] arpack wrapper status In-Reply-To: References: <5b8d13220811110113v29a658c4yacbc6a2b9939ab41@mail.gmail.com> <20081111153633.GM12595@bigjim2.lanl.gov> Message-ID: <85b5c3130811120423q64c66ff9xac08b629722221e7@mail.gmail.com> On Tue, Nov 11, 2008 at 7:10 PM, Nathan Bell wrote: > On Tue, Nov 11, 2008 at 10:36 AM, Aric Hagberg wrote: >> >> I don't have MILESTONE_MODIFY privileges on the wiki - here is >> some text for the release notes: >> >> == New sparse eigensolver == >> >> A new wrapper to the ARPACK eigensolver provides functions to compute >> a few eigenvalues and eigenvectors of general n by n sparse matrices. >> >> http://scipy.org/scipy/scipy/browser/trunk/scipy/sparse/linalg/eigen/arpack >> > > Done: > http://scipy.org/scipy/scipy/milestone/0.7.0 Note that since arpack was recently removed from Debian main due to its non-free license, scipy should be able to configure without arpack, so that it can remain in main.
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=497724 Ondrej From david at ar.media.kyoto-u.ac.jp Wed Nov 12 07:16:53 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 12 Nov 2008 21:16:53 +0900 Subject: [SciPy-dev] arpack wrapper status In-Reply-To: <85b5c3130811120423q64c66ff9xac08b629722221e7@mail.gmail.com> References: <5b8d13220811110113v29a658c4yacbc6a2b9939ab41@mail.gmail.com> <20081111153633.GM12595@bigjim2.lanl.gov> <85b5c3130811120423q64c66ff9xac08b629722221e7@mail.gmail.com> Message-ID: <491AC935.1040105@ar.media.kyoto-u.ac.jp> Ondrej Certik wrote: > > Note, that since arpack was recently removed from Debian main due to > it's non-free license, scipy should be able to configure without > arpack, so that it can remain in main. > > http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=497724 > I did not know arpack was non-free. If it is not an acceptable free license, it should be removed from scipy, then. David From ondrej at certik.cz Wed Nov 12 07:51:30 2008 From: ondrej at certik.cz (Ondrej Certik) Date: Wed, 12 Nov 2008 13:51:30 +0100 Subject: [SciPy-dev] arpack wrapper status In-Reply-To: <491AC935.1040105@ar.media.kyoto-u.ac.jp> References: <5b8d13220811110113v29a658c4yacbc6a2b9939ab41@mail.gmail.com> <20081111153633.GM12595@bigjim2.lanl.gov> <85b5c3130811120423q64c66ff9xac08b629722221e7@mail.gmail.com> <491AC935.1040105@ar.media.kyoto-u.ac.jp> Message-ID: <85b5c3130811120451ld8437abj4f9592d45b1590a1@mail.gmail.com> On Wed, Nov 12, 2008 at 1:16 PM, David Cournapeau wrote: > Ondrej Certik wrote: >> >> Note, that since arpack was recently removed from Debian main due to >> it's non-free license, scipy should be able to configure without >> arpack, so that it can remain in main. >> >> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=497724 >> > > I did not know arpack was non-free. If it is not an acceptable free > license, it should be removed from scipy, then. 
It is not acceptable for Debian (and thus not for me either), but you should read the actual details here and form your own opinion: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=491794 Ondrej From uschmitt at mineway.de Wed Nov 12 10:22:30 2008 From: uschmitt at mineway.de (Uwe Schmitt) Date: Wed, 12 Nov 2008 16:22:30 +0100 Subject: [SciPy-dev] scipy.sparse crashes In-Reply-To: <85b5c3130811120451ld8437abj4f9592d45b1590a1@mail.gmail.com> References: <5b8d13220811110113v29a658c4yacbc6a2b9939ab41@mail.gmail.com> <20081111153633.GM12595@bigjim2.lanl.gov> <85b5c3130811120423q64c66ff9xac08b629722221e7@mail.gmail.com> <491AC935.1040105@ar.media.kyoto-u.ac.jp> <85b5c3130811120451ld8437abj4f9592d45b1590a1@mail.gmail.com> Message-ID: <491AF4B6.7010904@mineway.de> Hi, are there any known issues concerning crashes in scipy.sparse? The Python interpreter crashes if I have huge sparse matrices, but I am not able to reproduce this problem (at least now). As my matrix is about 260MB I cannot post it here, and even providing a download is difficult. Greetings, Uwe -- Dr. rer. nat. Uwe Schmitt F&E Mathematik mineway GmbH Science Park 2 D-66123 Saarbrücken Telefon: +49 (0)681 8390 5334 Telefax: +49 (0)681 830 4376 uschmitt at mineway.de www.mineway.de Geschäftsführung: Dr.-Ing.
Mathias Bauer Amtsgericht Saarbrücken HRB 12339 From cournape at gmail.com Wed Nov 12 10:35:13 2008 From: cournape at gmail.com (David Cournapeau) Date: Thu, 13 Nov 2008 00:35:13 +0900 Subject: [SciPy-dev] scipy.sparse crashes In-Reply-To: <491AF4B6.7010904@mineway.de> References: <5b8d13220811110113v29a658c4yacbc6a2b9939ab41@mail.gmail.com> <20081111153633.GM12595@bigjim2.lanl.gov> <85b5c3130811120423q64c66ff9xac08b629722221e7@mail.gmail.com> <491AC935.1040105@ar.media.kyoto-u.ac.jp> <85b5c3130811120451ld8437abj4f9592d45b1590a1@mail.gmail.com> <491AF4B6.7010904@mineway.de> Message-ID: <5b8d13220811120735l582ff1d7k3a4866941bcac3b7@mail.gmail.com> On Thu, Nov 13, 2008 at 12:22 AM, Uwe Schmitt wrote: > > Hi, > > are there any known issues concerning crashes in scipy.sparse ? > The Python interpreter crashes if I have huge sparse matrices, > but I am not able to reproduce this problem (at least now). Which platform are you on? And which version of scipy are you using? There were known defects on Mac OS X which have been solved recently. > As my matrix is about 260MB I can not post it here, and even > providing a download is difficult. Can the matrix be generated? Maybe you could try to reproduce the crash with a smaller matrix. A backtrace would be useful, too. Under Linux or Mac OS X, it is relatively simple to obtain one:

gdb python
run myscript.py
...
-> crash
bt

David From uschmitt at mineway.de Wed Nov 12 10:56:59 2008 From: uschmitt at mineway.de (Uwe Schmitt) Date: Wed, 12 Nov 2008 16:56:59 +0100 Subject: [SciPy-dev] [mailinglist] Re: scipy.sparse crashes In-Reply-To: <5b8d13220811120735l582ff1d7k3a4866941bcac3b7@mail.gmail.com> References: <5b8d13220811110113v29a658c4yacbc6a2b9939ab41@mail.gmail.com> <20081111153633.GM12595@bigjim2.lanl.gov> <85b5c3130811120423q64c66ff9xac08b629722221e7@mail.gmail.com> <491AC935.1040105@ar.media.kyoto-u.ac.jp> <85b5c3130811120451ld8437abj4f9592d45b1590a1@mail.gmail.com> <491AF4B6.7010904@mineway.de> <5b8d13220811120735l582ff1d7k3a4866941bcac3b7@mail.gmail.com> Message-ID: <491AFCCB.80601@mineway.de> David Cournapeau schrieb: > On Thu, Nov 13, 2008 at 12:22 AM, Uwe Schmitt wrote: > >> Hi, >> >> are there any known issues concerning crashes in scipy.sparse ? >> The Python interpreter crashes if I have huge sparse matrices, >> but I am not able to reproduce this problem (at least now). >> > > > Which platform are you on ? And which version of scipy are you using ? > There were known defects on mac os X which have been solved recently. > > It is cygwin / Windows XP, Scipy version is 0.6.0. >> As my matrix is about 260MB I can not post it here, and even >> providing a download is difficult. >> > > The matrix cannot be generated ? Maybe you could try to reproduce the > crash with a smaller matrix. A backtrace would be useful, too. Under > linux or mac os X, it is relatively simple to do so: > No, I'm processing large text files and the matrices originate from them. Further, I am not allowed to give those files away. There is no backtrace, just a message window telling me that the Python interpreter crashed. Further, I am not able to reproduce the problem on smaller matrices. gdb says:

Program received signal SIGSEGV, Segmentation fault.
0x102d6860 in tk84!TkFinalize () from /cygdrive/c/Python25/DLLs/tk84.dll I think the TkFinalize is from the message window, I do not use tk or matplotlib in my script. Greetings, Uwe > gdb python > run myscript.py > ... -> crash > bt > > David > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > > -- Dr. rer. nat. Uwe Schmitt F&E Mathematik mineway GmbH Science Park 2 D-66123 Saarbrücken Telefon: +49 (0)681 8390 5334 Telefax: +49 (0)681 830 4376 uschmitt at mineway.de www.mineway.de Geschäftsführung: Dr.-Ing. Mathias Bauer Amtsgericht Saarbrücken HRB 12339 -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre.gueth at free.fr Wed Nov 12 12:03:35 2008 From: pierre.gueth at free.fr (Pierre Gueth) Date: Wed, 12 Nov 2008 18:03:35 +0100 Subject: [SciPy-dev] scipy 0.6 with python 2.6 leads to strange keyword argument errors Message-ID: <491B0C67.10806@free.fr> Hi, I compiled scipy 0.6 with python 2.6 and this patch to solve the superlu compilation problem: https://bugs.gentoo.org/attachment.cgi?id=171446 I can import scipy and scipy.linalg but when I try to call some function like pinv or inv, I get a "RuntimeError: more argument specifiers than keyword list entries" error: [...] >>> import scipy as s >>> import scipy.linalg as l >>> l.pinv(s.array([[1,0],[0,1]])) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python2.6/site-packages/scipy/linalg/basic.py", line 468, in pinv return lstsq(a, b, cond=cond)[0] File "/usr/lib/python2.6/site-packages/scipy/linalg/basic.py", line 441, in lstsq lwork = calc_lwork.gelss(gelss.prefix,m,n,nrhs)[1] RuntimeError: more argument specifiers than keyword list entries (remaining format:'|:calc_lwork.gelss') [...] Does anyone know how to solve this problem?
Regards pierre From wnbell at gmail.com Wed Nov 12 12:37:12 2008 From: wnbell at gmail.com (Nathan Bell) Date: Wed, 12 Nov 2008 12:37:12 -0500 Subject: [SciPy-dev] scipy.sparse crashes In-Reply-To: <491AF4B6.7010904@mineway.de> References: <5b8d13220811110113v29a658c4yacbc6a2b9939ab41@mail.gmail.com> <20081111153633.GM12595@bigjim2.lanl.gov> <85b5c3130811120423q64c66ff9xac08b629722221e7@mail.gmail.com> <491AC935.1040105@ar.media.kyoto-u.ac.jp> <85b5c3130811120451ld8437abj4f9592d45b1590a1@mail.gmail.com> <491AF4B6.7010904@mineway.de> Message-ID: On Wed, Nov 12, 2008 at 10:22 AM, Uwe Schmitt wrote: > > are there any known issues concerning crashes in scipy.sparse ? > The Python interpreter crashes if I have huge sparse matrices, > but I am not able to reproduce this problem (at least now). > > As my matrix is about 260MB I can not post it here, and even > providing a download is difficult. > The only known crash is with scipy.linsolve on some singular matrices. You'll need to provide us with more specific information before we can diagnose the problem or tell you if it has been fixed. The size of your matrix is not itself a problem in scipy.sparse. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From wnbell at gmail.com Wed Nov 12 12:55:02 2008 From: wnbell at gmail.com (Nathan Bell) Date: Wed, 12 Nov 2008 12:55:02 -0500 Subject: [SciPy-dev] arpack wrapper status In-Reply-To: <85b5c3130811120451ld8437abj4f9592d45b1590a1@mail.gmail.com> References: <5b8d13220811110113v29a658c4yacbc6a2b9939ab41@mail.gmail.com> <20081111153633.GM12595@bigjim2.lanl.gov> <85b5c3130811120423q64c66ff9xac08b629722221e7@mail.gmail.com> <491AC935.1040105@ar.media.kyoto-u.ac.jp> <85b5c3130811120451ld8437abj4f9592d45b1590a1@mail.gmail.com> Message-ID: On Wed, Nov 12, 2008 at 7:51 AM, Ondrej Certik wrote: >> >> I did not know arpack was non-free. If it is not an acceptable free >> license, it should be removed from scipy, then. 
> > It is not acceptable for Debian (and thus neither for me), but you > should read the actual details here and make your own opinion: > > http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=491794 > Did they refuse to re-license ARPACK under something explicitly compatible? I'd hate to throw out the work that's gone into supporting ARPACK without pursuing all options. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From ondrej at certik.cz Wed Nov 12 13:04:37 2008 From: ondrej at certik.cz (Ondrej Certik) Date: Wed, 12 Nov 2008 19:04:37 +0100 Subject: [SciPy-dev] arpack wrapper status In-Reply-To: References: <5b8d13220811110113v29a658c4yacbc6a2b9939ab41@mail.gmail.com> <20081111153633.GM12595@bigjim2.lanl.gov> <85b5c3130811120423q64c66ff9xac08b629722221e7@mail.gmail.com> <491AC935.1040105@ar.media.kyoto-u.ac.jp> <85b5c3130811120451ld8437abj4f9592d45b1590a1@mail.gmail.com> Message-ID: <85b5c3130811121004g3934af38r5f799422a61bd49c@mail.gmail.com> On Wed, Nov 12, 2008 at 6:55 PM, Nathan Bell wrote: > On Wed, Nov 12, 2008 at 7:51 AM, Ondrej Certik wrote: >>> >>> I did not know arpack was non-free. If it is not an acceptable free >>> license, it should be removed from scipy, then. >> >> It is not acceptable for Debian (and thus neither for me), but you >> should read the actual details here and make your own opinion: >> >> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=491794 >> > > Did they refuse to re-license ARPACK under something explicitly > compatible? I don't know. One should ask the arpack developers. > I'd hate to throw out the work that's gone into > supporting ARPACK without pursuing all options. Yes, absolutely. It was a surprise for me as well that arpack uses a problematic license.
Ondrej From josef.pktd at gmail.com Wed Nov 12 15:55:55 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 12 Nov 2008 15:55:55 -0500 Subject: [SciPy-dev] patchfiles in 745 In-Reply-To: References: <1cd32cbb0811051203g78dfca1apb99229a097c93357@mail.gmail.com> Message-ID: <1cd32cbb0811121255n28dce1d7p32f4407b2830c5a1@mail.gmail.com> I was hoping that somebody would commit these fixes, many bugs have been around a long time and several have been reported at least last year on the mailing list. If I get commit rights, then I will fix scipy.stats.distributions myself and increase the test coverage. Since I'm working on Windows, I am only using svn and Bazaar. Josef On Thu, Nov 6, 2008 at 11:35 PM, Charles R Harris wrote: > Maybe you should ask for commit privilege, you seem to be doing a lot of > work in this area. Until Jarrod gets back you might try working in a git > mirror (if you run on unix/linux)...Chuck > > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > > From stefan at sun.ac.za Wed Nov 12 16:17:43 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Wed, 12 Nov 2008 23:17:43 +0200 Subject: [SciPy-dev] patchfiles in 745 In-Reply-To: <1cd32cbb0811121255n28dce1d7p32f4407b2830c5a1@mail.gmail.com> References: <1cd32cbb0811051203g78dfca1apb99229a097c93357@mail.gmail.com> <1cd32cbb0811121255n28dce1d7p32f4407b2830c5a1@mail.gmail.com> Message-ID: <9457e7c80811121317g1a95cd7p24fa12c4e2f4c975@mail.gmail.com> 2008/11/12 : > I was hoping that somebody would commit these fixes, many bugs have > been around a long time and several have been reported at least last > year on the mailing list. > > If I get commit rights, then I will fix scipy.stats.distributions > myself and increase the test coverage. This should definitely go in before the next release.
I'm trying my best to get around to it before Monday, but if anyone else has time, please help review these patches. Cheers Stéfan From robert.kern at gmail.com Wed Nov 12 18:32:44 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 12 Nov 2008 17:32:44 -0600 Subject: [SciPy-dev] arpack wrapper status In-Reply-To: <85b5c3130811120451ld8437abj4f9592d45b1590a1@mail.gmail.com> References: <5b8d13220811110113v29a658c4yacbc6a2b9939ab41@mail.gmail.com> <20081111153633.GM12595@bigjim2.lanl.gov> <85b5c3130811120423q64c66ff9xac08b629722221e7@mail.gmail.com> <491AC935.1040105@ar.media.kyoto-u.ac.jp> <85b5c3130811120451ld8437abj4f9592d45b1590a1@mail.gmail.com> Message-ID: <3d375d730811121532x22ba1823h6f3749c248e57ef0@mail.gmail.com> On Wed, Nov 12, 2008 at 06:51, Ondrej Certik wrote: > On Wed, Nov 12, 2008 at 1:16 PM, David Cournapeau > wrote: >> Ondrej Certik wrote: >>> >>> Note, that since arpack was recently removed from Debian main due to >>> its non-free license, scipy should be able to configure without >>> arpack, so that it can remain in main. >>> >>> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=497724 >>> >> >> I did not know arpack was non-free. If it is not an acceptable free >> license, it should be removed from scipy, then. > > It is not acceptable for Debian (and thus neither for me), but you > should read the actual details here and make your own opinion: > > http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=491794 I agree with you. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From wnbell at gmail.com Wed Nov 12 18:45:46 2008 From: wnbell at gmail.com (Nathan Bell) Date: Wed, 12 Nov 2008 18:45:46 -0500 Subject: [SciPy-dev] arpack wrapper status In-Reply-To: <85b5c3130811121004g3934af38r5f799422a61bd49c@mail.gmail.com> References: <5b8d13220811110113v29a658c4yacbc6a2b9939ab41@mail.gmail.com> <20081111153633.GM12595@bigjim2.lanl.gov> <85b5c3130811120423q64c66ff9xac08b629722221e7@mail.gmail.com> <491AC935.1040105@ar.media.kyoto-u.ac.jp> <85b5c3130811120451ld8437abj4f9592d45b1590a1@mail.gmail.com> <85b5c3130811121004g3934af38r5f799422a61bd49c@mail.gmail.com> Message-ID: On Wed, Nov 12, 2008 at 1:04 PM, Ondrej Certik wrote: >> >> Did they refuse to re-license ARPACK under something explicitly >> compatible? > > I don't know. One should ask the arpack developers. > Aric, do you want to take the lead on this one? If not, I can give it a try. I'm not sure what is required from them. Does an email stating "we agree to release ARPACK under the standard, three-clause BSD license" suffice? -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From robert.kern at gmail.com Wed Nov 12 18:54:08 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 12 Nov 2008 17:54:08 -0600 Subject: [SciPy-dev] arpack wrapper status In-Reply-To: References: <5b8d13220811110113v29a658c4yacbc6a2b9939ab41@mail.gmail.com> <20081111153633.GM12595@bigjim2.lanl.gov> <85b5c3130811120423q64c66ff9xac08b629722221e7@mail.gmail.com> <491AC935.1040105@ar.media.kyoto-u.ac.jp> <85b5c3130811120451ld8437abj4f9592d45b1590a1@mail.gmail.com> <85b5c3130811121004g3934af38r5f799422a61bd49c@mail.gmail.com> Message-ID: <3d375d730811121554r652650cj8d7bd6423a10be59@mail.gmail.com> On Wed, Nov 12, 2008 at 17:45, Nathan Bell wrote: > On Wed, Nov 12, 2008 at 1:04 PM, Ondrej Certik wrote: >>> >>> Did they refuse to re-license ARPACK under something explicitly >>> compatible? >> >> I don't know. One should ask the arpack developers. 
>> > > Aric, do you want to take the lead on this one? If not, I can give it a try. > > I'm not sure what is required from them. Does an email stating "we > agree to release ARPACK under the standard, three-clause BSD license" > suffice? Preferably, they would cut a new tarball with the new license text. It's also unlikely that the authors have the right to change the license. The copyright is owned by Rice University, so we need at least an email from an authorized person in Rice's technology transfer department (or whatever they call it). We could get going with just that, but getting that email will probably take significantly longer than making a new tarball once they get permission. Either way, I'm fairly certain that we will not get a positive response before we need to cut a beta release. We should start removing the code soon. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From charlesr.harris at gmail.com Wed Nov 12 19:28:29 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 12 Nov 2008 17:28:29 -0700 Subject: [SciPy-dev] Commit rights for josef.pktd@gmail.com? Message-ID: Hi All, Josef has been doing a lot of work on the stats package and I think it would be helpful if he could have commit permissions so he could apply his fixes. Can we do that? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Wed Nov 12 19:32:51 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 12 Nov 2008 18:32:51 -0600 Subject: [SciPy-dev] Commit rights for josef.pktd@gmail.com? 
In-Reply-To: References: Message-ID: <3d375d730811121632j53f47bb6s12f96e4cb98cad3f@mail.gmail.com> On Wed, Nov 12, 2008 at 18:28, Charles R Harris wrote: > Hi All, > > Josef has been doing a lot of work on the stats package and I think it would > be helpful if he could have commit permissions so he could apply his fixes. > Can we do that? It has been done. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From wnbell at gmail.com Wed Nov 12 19:50:19 2008 From: wnbell at gmail.com (Nathan Bell) Date: Wed, 12 Nov 2008 19:50:19 -0500 Subject: [SciPy-dev] arpack wrapper status In-Reply-To: <3d375d730811121554r652650cj8d7bd6423a10be59@mail.gmail.com> References: <20081111153633.GM12595@bigjim2.lanl.gov> <85b5c3130811120423q64c66ff9xac08b629722221e7@mail.gmail.com> <491AC935.1040105@ar.media.kyoto-u.ac.jp> <85b5c3130811120451ld8437abj4f9592d45b1590a1@mail.gmail.com> <85b5c3130811121004g3934af38r5f799422a61bd49c@mail.gmail.com> <3d375d730811121554r652650cj8d7bd6423a10be59@mail.gmail.com> Message-ID: On Wed, Nov 12, 2008 at 6:54 PM, Robert Kern wrote: > > Either way, I'm fairly certain that we will not get a positive > response before we need to cut a beta release. We should start > removing the code soon. > You're probably right. Nevertheless, I've emailed a request to arpack at caam.rice.edu. 
-- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From hagberg at lanl.gov Wed Nov 12 19:54:08 2008 From: hagberg at lanl.gov (Aric Hagberg) Date: Wed, 12 Nov 2008 17:54:08 -0700 Subject: [SciPy-dev] arpack wrapper status In-Reply-To: References: <5b8d13220811110113v29a658c4yacbc6a2b9939ab41@mail.gmail.com> <20081111153633.GM12595@bigjim2.lanl.gov> <85b5c3130811120423q64c66ff9xac08b629722221e7@mail.gmail.com> <491AC935.1040105@ar.media.kyoto-u.ac.jp> <85b5c3130811120451ld8437abj4f9592d45b1590a1@mail.gmail.com> <85b5c3130811121004g3934af38r5f799422a61bd49c@mail.gmail.com> Message-ID: <20081113005408.GA28872@bigjim2.lanl.gov> On Wed, Nov 12, 2008 at 06:45:46PM -0500, Nathan Bell wrote: > On Wed, Nov 12, 2008 at 1:04 PM, Ondrej Certik wrote: > >> > >> Did they refuse to re-license ARPACK under something explicitly > >> compatible? > > > > I don't know. One should ask the arpack developers. > > > > Aric, do you want to take the lead on this one? If not, I can give it a try. > > I'm not sure what is required from them. Does an email stating "we > agree to release ARPACK under the standard, three-clause BSD license" > suffice? I read through the threads related to including ARPACK in Debian and Fedora. The issue appears to be with the clause " . Written notification is provided to the developers of intent to use this software. Also, we ask that use of ARPACK is properly cited in any resulting publications or software documentation." which violates the Debian Free Software Guidelines. One of the authors (Sorensen) was contacted and willing to (and did) clarify the intent of the license but apparently didn't get the Rice Office of Technology Transfer involved to change the clause. I'll contact him and see if they would consider the standard BSD license instead of the modified version. Please do whatever you need regarding ARPACK code in SciPy in order to get release out on time. 
Aric From cournape at gmail.com Wed Nov 12 20:42:37 2008 From: cournape at gmail.com (David Cournapeau) Date: Thu, 13 Nov 2008 10:42:37 +0900 Subject: [SciPy-dev] arpack wrapper status In-Reply-To: <20081113005408.GA28872@bigjim2.lanl.gov> References: <20081111153633.GM12595@bigjim2.lanl.gov> <85b5c3130811120423q64c66ff9xac08b629722221e7@mail.gmail.com> <491AC935.1040105@ar.media.kyoto-u.ac.jp> <85b5c3130811120451ld8437abj4f9592d45b1590a1@mail.gmail.com> <85b5c3130811121004g3934af38r5f799422a61bd49c@mail.gmail.com> <20081113005408.GA28872@bigjim2.lanl.gov> Message-ID: <5b8d13220811121742x62b9d84ewaa1486f8a615560e@mail.gmail.com> On Thu, Nov 13, 2008 at 9:54 AM, Aric Hagberg wrote: > > I read through the threads related to including ARPACK in Debian > and Fedora. The issue appears to be with the clause > > " . Written notification is provided to the developers of intent to > use this software. Also, we ask that use of ARPACK is properly > cited in any resulting publications or software documentation." > > which violates the Debian Free Software Guidelines. I am surprised this is the only problematic part. I think the third clause of their license as much of a problem: "If you modify the source for these routines we ask that you change the name of the routine and comment the changes made to the original." It means that as it is, we are even breaking the license terms. I feel a bit guilty, I wrongly assumed ARPACK was similar to LAPACK/BLAS, I should have checked before doing changes to our internal copy of it. 
David From robert.kern at gmail.com Wed Nov 12 20:44:04 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 12 Nov 2008 19:44:04 -0600 Subject: [SciPy-dev] arpack wrapper status In-Reply-To: <5b8d13220811121742x62b9d84ewaa1486f8a615560e@mail.gmail.com> References: <85b5c3130811120423q64c66ff9xac08b629722221e7@mail.gmail.com> <491AC935.1040105@ar.media.kyoto-u.ac.jp> <85b5c3130811120451ld8437abj4f9592d45b1590a1@mail.gmail.com> <85b5c3130811121004g3934af38r5f799422a61bd49c@mail.gmail.com> <20081113005408.GA28872@bigjim2.lanl.gov> <5b8d13220811121742x62b9d84ewaa1486f8a615560e@mail.gmail.com> Message-ID: <3d375d730811121744q25d01898r6668973c22846d56@mail.gmail.com> On Wed, Nov 12, 2008 at 19:42, David Cournapeau wrote: > On Thu, Nov 13, 2008 at 9:54 AM, Aric Hagberg wrote: > >> >> I read through the threads related to including ARPACK in Debian >> and Fedora. The issue appears to be with the clause >> >> " . Written notification is provided to the developers of intent to >> use this software. Also, we ask that use of ARPACK is properly >> cited in any resulting publications or software documentation." >> >> which violates the Debian Free Software Guidelines. > > I am surprised this is the only problematic part. I think the third > clause of their license as much of a problem: > > "If you modify the source for these routines we ask that you change > the name of the routine and comment the changes made to the original." > > It means that as it is, we are even breaking the license terms. I feel > a bit guilty, I wrongly assumed ARPACK was similar to LAPACK/BLAS, I > should have checked before doing changes to our internal copy of it. "ask" is the operative word. It's not a requirement of the license. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From cournape at gmail.com Wed Nov 12 21:01:30 2008 From: cournape at gmail.com (David Cournapeau) Date: Thu, 13 Nov 2008 11:01:30 +0900 Subject: [SciPy-dev] arpack wrapper status In-Reply-To: <3d375d730811121744q25d01898r6668973c22846d56@mail.gmail.com> References: <85b5c3130811120423q64c66ff9xac08b629722221e7@mail.gmail.com> <491AC935.1040105@ar.media.kyoto-u.ac.jp> <85b5c3130811120451ld8437abj4f9592d45b1590a1@mail.gmail.com> <85b5c3130811121004g3934af38r5f799422a61bd49c@mail.gmail.com> <20081113005408.GA28872@bigjim2.lanl.gov> <5b8d13220811121742x62b9d84ewaa1486f8a615560e@mail.gmail.com> <3d375d730811121744q25d01898r6668973c22846d56@mail.gmail.com> Message-ID: <5b8d13220811121801m7fcf6d28j859348f6be851440@mail.gmail.com> On Thu, Nov 13, 2008 at 10:44 AM, Robert Kern wrote: > On Wed, Nov 12, 2008 at 19:42, David Cournapeau wrote: >> On Thu, Nov 13, 2008 at 9:54 AM, Aric Hagberg wrote: >> >>> >>> I read through the threads related to including ARPACK in Debian >>> and Fedora. The issue appears to be with the clause >>> >>> " . Written notification is provided to the developers of intent to >>> use this software. Also, we ask that use of ARPACK is properly >>> cited in any resulting publications or software documentation." >>> >>> which violates the Debian Free Software Guidelines. >> >> I am surprised this is the only problematic part. I think the third >> clause of their license as much of a problem: >> >> "If you modify the source for these routines we ask that you change >> the name of the routine and comment the changes made to the original." >> >> It means that as it is, we are even breaking the license terms. I feel >> a bit guilty, I wrongly assumed ARPACK was similar to LAPACK/BLAS, I >> should have checked before doing changes to our internal copy of it. > > "ask" is the operative word. Ok, I did not know ask was not mandatory in that context. 
Anyway, would removing arpack from scipy tarball/builds be enough for the time being, or would it be better to remove it from svn as well ? David From robert.kern at gmail.com Wed Nov 12 21:44:05 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 12 Nov 2008 20:44:05 -0600 Subject: [SciPy-dev] arpack wrapper status In-Reply-To: <5b8d13220811121801m7fcf6d28j859348f6be851440@mail.gmail.com> References: <491AC935.1040105@ar.media.kyoto-u.ac.jp> <85b5c3130811120451ld8437abj4f9592d45b1590a1@mail.gmail.com> <85b5c3130811121004g3934af38r5f799422a61bd49c@mail.gmail.com> <20081113005408.GA28872@bigjim2.lanl.gov> <5b8d13220811121742x62b9d84ewaa1486f8a615560e@mail.gmail.com> <3d375d730811121744q25d01898r6668973c22846d56@mail.gmail.com> <5b8d13220811121801m7fcf6d28j859348f6be851440@mail.gmail.com> Message-ID: <3d375d730811121844v12243d35p3cee078303040ed@mail.gmail.com> On Wed, Nov 12, 2008 at 20:01, David Cournapeau wrote: > Anyway, would removing arpack from scipy tarball/builds be enough for > the time being, or would it be better to remove it from svn as well ? It's easier to remove it from SVN, I think. Tag the trunk before you do it, though. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From cournape at gmail.com Wed Nov 12 23:37:51 2008 From: cournape at gmail.com (David Cournapeau) Date: Thu, 13 Nov 2008 13:37:51 +0900 Subject: [SciPy-dev] arpack wrapper status In-Reply-To: <3d375d730811121844v12243d35p3cee078303040ed@mail.gmail.com> References: <85b5c3130811120451ld8437abj4f9592d45b1590a1@mail.gmail.com> <85b5c3130811121004g3934af38r5f799422a61bd49c@mail.gmail.com> <20081113005408.GA28872@bigjim2.lanl.gov> <5b8d13220811121742x62b9d84ewaa1486f8a615560e@mail.gmail.com> <3d375d730811121744q25d01898r6668973c22846d56@mail.gmail.com> <5b8d13220811121801m7fcf6d28j859348f6be851440@mail.gmail.com> <3d375d730811121844v12243d35p3cee078303040ed@mail.gmail.com> Message-ID: <5b8d13220811122037o6759370fl8529b5a06278929a@mail.gmail.com> On Thu, Nov 13, 2008 at 11:44 AM, Robert Kern wrote: > On Wed, Nov 12, 2008 at 20:01, David Cournapeau wrote: >> Anyway, would removing arpack from scipy tarball/builds be enough for >> the time being, or would it be better to remove it from svn as well ? > > It's easier to remove it from SVN, I think. Tag the trunk before you > do it, though. Done. The tag before the removal is scipy-with-arpack. David From meine at informatik.uni-hamburg.de Thu Nov 13 07:52:51 2008 From: meine at informatik.uni-hamburg.de (Hans Meine) Date: Thu, 13 Nov 2008 13:52:51 +0100 Subject: [SciPy-dev] docs.scipy.org: Add edit permissions for hans_meine Message-ID: <200811131352.52385.meine@informatik.uni-hamburg.de> Hi, please give me (hans_meine) edit permissions for the docstrings. Thanks, Hans From pav at iki.fi Thu Nov 13 08:44:46 2008 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 13 Nov 2008 13:44:46 +0000 (UTC) Subject: [SciPy-dev] docs.scipy.org: Add edit permissions for hans_meine References: <200811131352.52385.meine@informatik.uni-hamburg.de> Message-ID: Hi, Thu, 13 Nov 2008 13:52:51 +0100, Hans Meine wrote: > please give me (hans_meine) edit permissions for the docstrings. Given. 
Cheers, Pauli From chanley at stsci.edu Thu Nov 13 09:53:37 2008 From: chanley at stsci.edu (Christopher Hanley) Date: Thu, 13 Nov 2008 09:53:37 -0500 Subject: [SciPy-dev] Two new test failures Message-ID: <491C3F71.1040800@stsci.edu> Hi, I am getting two new test failures from scipy. They appear to be from some tests added yesterday in the stats package. ====================================================================== FAIL: test_discrete_chisquare.test_discrete_rvs_cdf('logser', (0.59999999999999998,)) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/chanley/dev/site-packages/lib/python/nose/case.py", line 182, in runTest self.test(*self.arg) File "/Users/chanley/dev/site-packages/lib/python/scipy/stats/tests/test_discrete_chisquare.py", line 77, in check_discrete_chisquare 'at arg = %s' % (distname,str(arg)) AssertionError: chisquare - test for logserat arg = (0.59999999999999998,) ====================================================================== FAIL: test_discrete_chisquare.test_discrete_rvs_cdf('nbinom', (5, 0.5)) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/chanley/dev/site-packages/lib/python/nose/case.py", line 182, in runTest self.test(*self.arg) File "/Users/chanley/dev/site-packages/lib/python/scipy/stats/tests/test_discrete_chisquare.py", line 77, in check_discrete_chisquare 'at arg = %s' % (distname,str(arg)) AssertionError: chisquare - test for nbinomat arg = (5, 0.5) ---------------------------------------------------------------------- Ran 3161 tests in 172.342s FAILED (SKIP=28, failures=2) I am running OSX 10.5 on an Intel Mac.
I have svn versions of both numpy and scipy: >>> scipy.__version__ '0.7.0.dev5090' >>> numpy.__version__ '1.3.0.dev6022' Chris -- Christopher Hanley Senior Systems Software Engineer Space Telescope Science Institute 3700 San Martin Drive Baltimore MD, 21218 (410) 338-4338 From josef.pktd at gmail.com Thu Nov 13 10:28:40 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 13 Nov 2008 10:28:40 -0500 Subject: [SciPy-dev] Two new test failures In-Reply-To: <491C3F71.1040800@stsci.edu> References: <491C3F71.1040800@stsci.edu> Message-ID: <1cd32cbb0811130728g25214db7rb32374aeb4a267f0@mail.gmail.com> On Thu, Nov 13, 2008 at 9:53 AM, Christopher Hanley wrote: > Hi, > > I am getting two new test failures from scipy. They appear to be from > some test tests added yesterday in the stats package. > > ====================================================================== > FAIL: test_discrete_chisquare.test_discrete_rvs_cdf('logser', > (0.59999999999999998,)) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/Users/chanley/dev/site-packages/lib/python/nose/case.py", line > 182, in runTest > self.test(*self.arg) > File > "/Users/chanley/dev/site-packages/lib/python/scipy/stats/tests/test_discrete_chisquare.py", > line 77, in check_discrete_chisquare > 'at arg = %s' % (distname,str(arg)) > AssertionError: chisquare - test for logserat arg = (0.59999999999999998,) > > ====================================================================== > FAIL: test_discrete_chisquare.test_discrete_rvs_cdf('nbinom', (5, 0.5)) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/Users/chanley/dev/site-packages/lib/python/nose/case.py", line > 182, in runTest > self.test(*self.arg) > File > "/Users/chanley/dev/site-packages/lib/python/scipy/stats/tests/test_discrete_chisquare.py", > line 77, in check_discrete_chisquare > 'at arg = %s' % 
(distname,str(arg)) > AssertionError: chisquare - test for nbinomat arg = (5, 0.5) > > ---------------------------------------------------------------------- > Ran 3161 tests in 172.342s > > FAILED (SKIP=28, failures=2) > > > I am running OSX 10.5 on an Intel Mac. I have svn versions of both > numpy and scipy: > > >>> scipy.__version__ > '0.7.0.dev5090' > >>> numpy.__version__ > '1.3.0.dev6022' > > Chris > > -- > Christopher Hanley > Senior Systems Software Engineer > Space Telescope Science Institute > 3700 San Martin Drive > Baltimore MD, 21218 > (410) 338-4338 > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > Thanks for testing. The first test failure in logser is a known failure that I haven't marked yet as such (the error is in numpy.random according to my tests). The second failure, nbinom, I don't get in my test runs. Since these are random tests, it might be an accidental failure and I might have to add a comment or reduce the precision of the test. Could you run the tests again, just scipy.stats, to see if the second test fails each time or just accidentally? I will also check again, that there are no problems left with nbinom. By the way, these are failures because of new tests with more coverage and not because of new bugs. Josef From chanley at stsci.edu Thu Nov 13 10:39:36 2008 From: chanley at stsci.edu (Christopher Hanley) Date: Thu, 13 Nov 2008 10:39:36 -0500 Subject: [SciPy-dev] Two new test failures In-Reply-To: <1cd32cbb0811130728g25214db7rb32374aeb4a267f0@mail.gmail.com> References: <491C3F71.1040800@stsci.edu> <1cd32cbb0811130728g25214db7rb32374aeb4a267f0@mail.gmail.com> Message-ID: <491C4A38.9030203@stsci.edu> josef.pktd at gmail.com wrote: > On Thu, Nov 13, 2008 at 9:53 AM, Christopher Hanley wrote: >> Hi, >> >> I am getting two new test failures from scipy. They appear to be from >> some test tests added yesterday in the stats package. 
>> >> ====================================================================== >> FAIL: test_discrete_chisquare.test_discrete_rvs_cdf('logser', >> (0.59999999999999998,)) >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File "/Users/chanley/dev/site-packages/lib/python/nose/case.py", line >> 182, in runTest >> self.test(*self.arg) >> File >> "/Users/chanley/dev/site-packages/lib/python/scipy/stats/tests/test_discrete_chisquare.py", >> line 77, in check_discrete_chisquare >> 'at arg = %s' % (distname,str(arg)) >> AssertionError: chisquare - test for logserat arg = (0.59999999999999998,) >> >> ====================================================================== >> FAIL: test_discrete_chisquare.test_discrete_rvs_cdf('nbinom', (5, 0.5)) >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File "/Users/chanley/dev/site-packages/lib/python/nose/case.py", line >> 182, in runTest >> self.test(*self.arg) >> File >> "/Users/chanley/dev/site-packages/lib/python/scipy/stats/tests/test_discrete_chisquare.py", >> line 77, in check_discrete_chisquare >> 'at arg = %s' % (distname,str(arg)) >> AssertionError: chisquare - test for nbinomat arg = (5, 0.5) >> >> ---------------------------------------------------------------------- >> Ran 3161 tests in 172.342s >> >> FAILED (SKIP=28, failures=2) >> >> >> I am running OSX 10.5 on an Intel Mac. I have svn versions of both >> numpy and scipy: >> >> >>> scipy.__version__ >> '0.7.0.dev5090' >> >>> numpy.__version__ >> '1.3.0.dev6022' >> >> Chris >> >> -- >> Christopher Hanley >> Senior Systems Software Engineer >> Space Telescope Science Institute >> 3700 San Martin Drive >> Baltimore MD, 21218 >> (410) 338-4338 >> _______________________________________________ >> Scipy-dev mailing list >> Scipy-dev at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-dev >> > > Thanks for testing. 
> > The first test failure in logser is a known failure that I haven't > marked yet as such (the error is in numpy.random according to my > tests). The second failure, nbinom, I don't get in my test runs. Since > these are random tests, it might be an accidental failure and I might > have to add a comment or reduce the precision of the test. Could you > run the tests again, just scipy.stats, to see if the second test fails > each time or just accidentally? I will also check again, that there > are no problems left with nbinom. > > By the way, these are failures because of new tests with more coverage > and not because of new bugs. > > Josef > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev The nbinom test does fail randomly. Although for me it seems to fail more times than it passes. Reducing the precision of the test would probably be a good idea. Thank you for taking the time to expand the test coverage in scipy. The effort is appreciated. Chris -- Christopher Hanley Senior Systems Software Engineer Space Telescope Science Institute 3700 San Martin Drive Baltimore MD, 21218 (410) 338-4338 From josef.pktd at gmail.com Thu Nov 13 11:27:57 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 13 Nov 2008 11:27:57 -0500 Subject: [SciPy-dev] Two new test failures In-Reply-To: <491C4A38.9030203@stsci.edu> References: <491C3F71.1040800@stsci.edu> <1cd32cbb0811130728g25214db7rb32374aeb4a267f0@mail.gmail.com> <491C4A38.9030203@stsci.edu> Message-ID: <1cd32cbb0811130827x6a26468asd114c4a7c5d88c49@mail.gmail.com> Hi, I ran the test 10,15 times and I never get a failure with nbinom. I'm working with numpy 1.2.1 on Windows XP, and I don't know which part of the test causes the failure. Would you please run the attached test file with nosetests -v -s and post the output? 
It is the same test as in trunk but running only nbinom with debug printout, and runs pretty fast. To get a quick check on the random number generator you could also check: >>> stats.nbinom.stats(5,0.5) (array(5.0), array(10.0)) >>> rvs = np.random.negative_binomial(5, 0.5, 1000000) >>> rvs.mean() 5.0053890000000001 >>> rvs.var() 10.016529958705213 Thanks for the help, Josef -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: test_discrete_chisquare.py URL: From chanley at stsci.edu Thu Nov 13 11:35:14 2008 From: chanley at stsci.edu (Christopher Hanley) Date: Thu, 13 Nov 2008 11:35:14 -0500 Subject: [SciPy-dev] Two new test failures In-Reply-To: <1cd32cbb0811130827x6a26468asd114c4a7c5d88c49@mail.gmail.com> References: <491C3F71.1040800@stsci.edu> <1cd32cbb0811130728g25214db7rb32374aeb4a267f0@mail.gmail.com> <491C4A38.9030203@stsci.edu> <1cd32cbb0811130827x6a26468asd114c4a7c5d88c49@mail.gmail.com> Message-ID: <491C5742.1050704@stsci.edu> josef.pktd at gmail.com wrote: > Hi, > > I ran the test 10,15 times and I never get a failure with nbinom. I'm > working with numpy 1.2.1 on Windows XP, and I don't know which part of > the test causes the failure. > > Would you please run the attached test file with nosetests -v -s and > post the output? It is the same test as in trunk but running only > nbinom with debug printout, and runs pretty fast. 
> > To get a quick check on the random number generator you could also check: > >>>> stats.nbinom.stats(5,0.5) > (array(5.0), array(10.0)) >>>> rvs = np.random.negative_binomial(5, 0.5, 1000000) >>>> rvs.mean() > 5.0053890000000001 >>>> rvs.var() > 10.016529958705213 > > Thanks for the help, > > Josef > > > ------------------------------------------------------------------------ > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev Sure thing: [redcedar:~/dev/devCode] chanley% nosetests -v -s test_discrete_chisquare.py nbinom test_discrete_chisquare.test_discrete_rvs_cdf('nbinom', (5, 0.5)) ... /Users/chanley/dev/site-packages/lib/python/numpy/lib/function_base.py:325: Warning: The new semantics of histogram is now the default and the `new` keyword will be removed in NumPy 1.4. """, Warning) chis,pval: 14.4736379946 0.152462703179 len(distsupp), len(distmass), len(hsupp), len(freq) 12 11 12 11 distsupp [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 10. 15. 
Inf] distmass [ 5468.75 5859.375 6835.9375 6835.9375 6152.34375 5126.953125 4028.3203125 3021.24023438 3709.41162109 2666.28265381 295.44830322] freq [5504 5814 6930 6969 6102 5041 3925 3098 3748 2591 278] itemfreq [[ 0.00000000e+00 1.56800000e+03] [ 1.00000000e+00 3.93600000e+03] [ 2.00000000e+00 5.81400000e+03] [ 3.00000000e+00 6.93000000e+03] [ 4.00000000e+00 6.96900000e+03] [ 5.00000000e+00 6.10200000e+03] [ 6.00000000e+00 5.04100000e+03] [ 7.00000000e+00 3.92500000e+03] [ 8.00000000e+00 3.09800000e+03] [ 9.00000000e+00 2.22600000e+03] [ 1.00000000e+01 1.52200000e+03] [ 1.10000000e+01 1.00400000e+03] [ 1.20000000e+01 6.87000000e+02] [ 1.30000000e+01 4.48000000e+02] [ 1.40000000e+01 2.81000000e+02] [ 1.50000000e+01 1.71000000e+02] [ 1.60000000e+01 1.15000000e+02] [ 1.70000000e+01 6.80000000e+01] [ 1.80000000e+01 3.60000000e+01] [ 1.90000000e+01 2.90000000e+01] [ 2.00000000e+01 1.30000000e+01] [ 2.10000000e+01 7.00000000e+00] [ 2.20000000e+01 2.00000000e+00] [ 2.30000000e+01 5.00000000e+00] [ 2.40000000e+01 2.00000000e+00] [ 2.50000000e+01 1.00000000e+00]] n*pmf [ 1562.5 3906.25 5859.375 6835.9375 6835.9375 6152.34375 5126.953125 4028.3203125 3021.24023437 2182.00683594] ok ---------------------------------------------------------------------- Ran 1 test in 0.889s OK -- Christopher Hanley Senior Systems Software Engineer Space Telescope Science Institute 3700 San Martin Drive Baltimore MD, 21218 (410) 338-4338 From pav at iki.fi Thu Nov 13 11:51:50 2008 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 13 Nov 2008 16:51:50 +0000 (UTC) Subject: [SciPy-dev] Two new test failures References: <491C3F71.1040800@stsci.edu> <1cd32cbb0811130728g25214db7rb32374aeb4a267f0@mail.gmail.com> Message-ID: Thu, 13 Nov 2008 10:28:40 -0500, josef.pktd wrote: [clip] > The first test failure in logser is a known failure that I haven't > marked yet as such (the error is in numpy.random according to my tests). > The second failure, nbinom, I don't get in my test runs. 
Since these are > random tests, it might be an accidental failure and I might have to add > a comment or reduce the precision of the test. [clip] Could you set the random seed manually in the test fixture setup? [1] This way the tests would fail deterministically. I think scipy.stats uses numpy.random, so calling numpy.random.seed would suffice. .. [1] cf. http://somethingaboutorange.com/mrl/projects/nose/ -- Pauli Virtanen From bjracine at glosten.com Thu Nov 13 12:27:36 2008 From: bjracine at glosten.com (Benjamin J. Racine) Date: Thu, 13 Nov 2008 09:27:36 -0800 Subject: [SciPy-dev] IPython TextMate Bundle Message-ID: <8C2B20C4348091499673D86BF10AB67621C30FF3@clipper.glosten.local> I am sending this forward on behalf of Matt Foster... Be sure to look into pysmell (for completion) as well. >>>>>>>>>>>>>>>>>>>>>> Hi All, A similar mail has already been on the (ipython) users mailing list, so my apologies if you've seen most of this before. I've started working on a TextMate bundle for IPython, based on the info on the Wiki [1], the aim is to produce a BSD licensed bundle which helps to integrate TextMate with IPython. I have set up a project on Github [2] which currently contains: * Some help, which doubles as the README * commands for running the current file / line / section in IPython (via applescript, and Terminal.app) * a basic language grammar for ipythonrc config files. The GitHub page contains the README file which has instructions on how to get GetBundles, which will allow you to install the bundle (but not track changes). Alternatively, if you have git, you can get the bundle using the following commands: cd "~/Library/Application Support/TextMate/Bundles" git clone git://github.com/mattfoster/ipython-tmbundle.git IPython.tmbundle osascript -e 'tell app "TextMate" to reload bundles' GitHub users can fork the project and make their own changes. 
I'd really love to hear any ideas, suggestions or feature requests people have, and I've been told by Fernando that it's ok to use this list for discussions, provided we prefix mail subjects with [TextMate]. Thanks, Matt [1]: http://ipython.scipy.org/moin/Cookbook/UsingIPythonWithTextMate [2]: http://github.com/mattfoster/ipython-tmbundle/ -- Matt Foster | http://my-mili.eu/matt _______________________________________________ IPython-dev mailing list IPython-dev at scipy.org http://lists.ipython.scipy.org/mailman/listinfo/ipython-dev From nwagner at iam.uni-stuttgart.de Thu Nov 13 13:45:44 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 13 Nov 2008 19:45:44 +0100 Subject: [SciPy-dev] arpack wrapper status In-Reply-To: References: <20081111153633.GM12595@bigjim2.lanl.gov> <85b5c3130811120423q64c66ff9xac08b629722221e7@mail.gmail.com> <491AC935.1040105@ar.media.kyoto-u.ac.jp> <85b5c3130811120451ld8437abj4f9592d45b1590a1@mail.gmail.com> <85b5c3130811121004g3934af38r5f799422a61bd49c@mail.gmail.com> <3d375d730811121554r652650cj8d7bd6423a10be59@mail.gmail.com> Message-ID: On Wed, 12 Nov 2008 19:50:19 -0500 "Nathan Bell" wrote: > On Wed, Nov 12, 2008 at 6:54 PM, Robert Kern > wrote: >> >> Either way, I'm fairly certain that we will not get a >>positive >> response before we need to cut a beta release. We should >>start >> removing the code soon. >> > > You're probably right. Nevertheless, I've emailed a >request to > arpack at caam.rice.edu. My work relies on ARPACK. Is there any alternative solution wrt nonsymmetric matrices ? 
Nils From ondrej at certik.cz Thu Nov 13 14:15:30 2008 From: ondrej at certik.cz (Ondrej Certik) Date: Thu, 13 Nov 2008 20:15:30 +0100 Subject: [SciPy-dev] arpack wrapper status In-Reply-To: References: <85b5c3130811120423q64c66ff9xac08b629722221e7@mail.gmail.com> <491AC935.1040105@ar.media.kyoto-u.ac.jp> <85b5c3130811120451ld8437abj4f9592d45b1590a1@mail.gmail.com> <85b5c3130811121004g3934af38r5f799422a61bd49c@mail.gmail.com> <3d375d730811121554r652650cj8d7bd6423a10be59@mail.gmail.com> Message-ID: <85b5c3130811131115j7080892dqd89015cceb787615@mail.gmail.com> On Thu, Nov 13, 2008 at 7:45 PM, Nils Wagner wrote: > On Wed, 12 Nov 2008 19:50:19 -0500 > "Nathan Bell" wrote: >> On Wed, Nov 12, 2008 at 6:54 PM, Robert Kern >> wrote: >>> >>> Either way, I'm fairly certain that we will not get a >>>positive >>> response before we need to cut a beta release. We should >>>start >>> removing the code soon. >>> >> >> You're probably right. Nevertheless, I've emailed a >>request to >> arpack at caam.rice.edu. > > My work relies on ARPACK. > Is there any alternative solution > wrt nonsymmetric matrices ? Yes, there are quite a lot of libraries for that: http://www.grycap.upv.es/slepc/documentation/reports/str6.pdf Ondrej From josef.pktd at gmail.com Thu Nov 13 14:28:33 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 13 Nov 2008 14:28:33 -0500 Subject: [SciPy-dev] Two new test failures In-Reply-To: References: <491C3F71.1040800@stsci.edu> <1cd32cbb0811130728g25214db7rb32374aeb4a267f0@mail.gmail.com> Message-ID: <1cd32cbb0811131128s200bbacbv17a087b21fbe9c53@mail.gmail.com> > Could you set the random seed manually in the test fixture setup? [1] > This way the tests would fail deterministically. I think scipy.stats uses > numpy.random, so calling numpy.random.seed would suffice. > > .. [1] cf. http://somethingaboutorange.com/mrl/projects/nose/ > > -- > Pauli Virtanen > I have problems making my nose tests more complex. 
All my tests are based on test generators with yield. I did not manage to get the test generators to work inside a TestCase class and using fixtures but I managed to work around that. I added a seed for the random number generator to my test function so now I get deterministic results. The problem, I still didn't manage to figure out, is how and whether the knownfailureif decorator works with test generators, e.g. this is my test: @dec.knownfailureif(True, "This test is known to fail") def test_discrete_rvs_cdf_fail(): distknownfail = [ ['logser', (0.6,)]] for distname, arg in distknownfail: if debug: print distname yield check_discrete_chisquare, distname, arg this is the result: ...........E ====================================================================== ERROR: test_discrete_chisquare.test_discrete_rvs_cdf_fail ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Programs\Python25\lib\site-packages\nose-0.10.4-py2.5.egg\nose\case.p y", line 182, in runTest self.test(*self.arg) File "C:\Programs\Python25\lib\site-packages\numpy\testing\decorators.py", lin e 119, in skipper raise KnownFailureTest, msg KnownFailureTest: This test is known to fail ---------------------------------------------------------------------- Ran 12 tests in 0.781s FAILED (errors=1) I get an error instead of a known failure which is not counted towards failures and errors. Is there a trick to get the decorators to work with generators or is this not possible? 
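For reference, the seeding workaround now in my test file looks schematically like this (a toy check function stands in here for the real check_discrete_chisquare):

```python
import numpy as np

def check_nbinom_mean(seed):
    # stand-in for check_discrete_chisquare: with a fixed seed the
    # sample mean of nbinom(5, 0.5) rvs is reproducibly close to the
    # theoretical mean 5.0
    np.random.seed(seed)
    rvs = np.random.negative_binomial(5, 0.5, 100000)
    assert abs(rvs.mean() - 5.0) < 0.1, 'mean %f too far from 5.0' % rvs.mean()

def test_nbinom_rvs_mean():
    # nose runs each yielded (function, args) pair as a separate test;
    # seeding inside the check function makes every run deterministic
    for seed in [0, 1234]:
        yield check_nbinom_mean, seed
```

With the seed fixed, a failure is at least reproducible instead of intermittent.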
Josef From nwagner at iam.uni-stuttgart.de Thu Nov 13 14:36:59 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 13 Nov 2008 20:36:59 +0100 Subject: [SciPy-dev] arpack wrapper status In-Reply-To: <85b5c3130811131115j7080892dqd89015cceb787615@mail.gmail.com> References: <85b5c3130811120423q64c66ff9xac08b629722221e7@mail.gmail.com> <491AC935.1040105@ar.media.kyoto-u.ac.jp> <85b5c3130811120451ld8437abj4f9592d45b1590a1@mail.gmail.com> <85b5c3130811121004g3934af38r5f799422a61bd49c@mail.gmail.com> <3d375d730811121554r652650cj8d7bd6423a10be59@mail.gmail.com> <85b5c3130811131115j7080892dqd89015cceb787615@mail.gmail.com> Message-ID: On Thu, 13 Nov 2008 20:15:30 +0100 "Ondrej Certik" wrote: > On Thu, Nov 13, 2008 at 7:45 PM, Nils Wagner > wrote: >> On Wed, 12 Nov 2008 19:50:19 -0500 >> "Nathan Bell" wrote: >>> On Wed, Nov 12, 2008 at 6:54 PM, Robert Kern >>> wrote: >>>> >>>> Either way, I'm fairly certain that we will not get a >>>>positive >>>> response before we need to cut a beta release. We should >>>>start >>>> removing the code soon. >>>> >>> >>> You're probably right. Nevertheless, I've emailed a >>>request to >>> arpack at caam.rice.edu. >> >> My work relies on ARPACK. >> Is there any alternative solution >> wrt nonsymmetric matrices ? > > Yes, there are quite a lot of libraries for that: None of them is available in scipy. Do you have experience with slepc4py ? http://t2.unl.edu/documentation/slepc4py BTW, is it planned to implement polyeig in scipy ? 
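A quadratic eigenvalue problem can of course be linearized by hand and fed to scipy.linalg.eig as a generalized eigenproblem; a rough sketch (the helper name polyeig2 is made up, not an existing scipy function):

```python
import numpy as np
from scipy.linalg import eig

def polyeig2(M, C, K):
    """Quadratic eigenvalue problem (lam**2*M + lam*C + K) x = 0,
    solved via companion linearization A z = lam B z with z = [x; lam*x]."""
    n = M.shape[0]
    Z, I = np.zeros((n, n)), np.eye(n)
    # first companion form of the quadratic pencil
    A = np.vstack([np.hstack([Z, I]),
                   np.hstack([-K, -C])])
    B = np.vstack([np.hstack([I, Z]),
                   np.hstack([Z, M])])
    lam, V = eig(A, B)
    return lam, V[:n, :]   # top block of z recovers x
```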
Cheers, Nils From nwagner at iam.uni-stuttgart.de Thu Nov 13 14:50:20 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 13 Nov 2008 20:50:20 +0100 Subject: [SciPy-dev] arpack wrapper status In-Reply-To: <85b5c3130811131115j7080892dqd89015cceb787615@mail.gmail.com> References: <85b5c3130811120423q64c66ff9xac08b629722221e7@mail.gmail.com> <491AC935.1040105@ar.media.kyoto-u.ac.jp> <85b5c3130811120451ld8437abj4f9592d45b1590a1@mail.gmail.com> <85b5c3130811121004g3934af38r5f799422a61bd49c@mail.gmail.com> <3d375d730811121554r652650cj8d7bd6423a10be59@mail.gmail.com> <85b5c3130811131115j7080892dqd89015cceb787615@mail.gmail.com> Message-ID: On Thu, 13 Nov 2008 20:15:30 +0100 "Ondrej Certik" wrote: > On Thu, Nov 13, 2008 at 7:45 PM, Nils Wagner > wrote: >> On Wed, 12 Nov 2008 19:50:19 -0500 >> "Nathan Bell" wrote: >>> On Wed, Nov 12, 2008 at 6:54 PM, Robert Kern >>> wrote: >>>> >>>> Either way, I'm fairly certain that we will not get a >>>>positive >>>> response before we need to cut a beta release. We should >>>>start >>>> removing the code soon. >>>> >>> >>> You're probably right. Nevertheless, I've emailed a >>>request to >>> arpack at caam.rice.edu. >> >> My work relies on ARPACK. >> Is there any alternative solution >> wrt nonsymmetric matrices ? 
> > Yes, there are quite a lot of libraries for that: > > http://www.grycap.upv.es/slepc/documentation/reports/str6.pdf > > > Ondrej > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev FWIW http://pypi.python.org/pypi/arpack/0.0.0 Nils From pav at iki.fi Thu Nov 13 15:03:27 2008 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 13 Nov 2008 20:03:27 +0000 (UTC) Subject: [SciPy-dev] Two new test failures References: <491C3F71.1040800@stsci.edu> <1cd32cbb0811130728g25214db7rb32374aeb4a267f0@mail.gmail.com> <1cd32cbb0811131128s200bbacbv17a087b21fbe9c53@mail.gmail.com> Message-ID: Thu, 13 Nov 2008 14:28:33 -0500, josef.pktd wrote: [clip] > I have problems making my nose tests more complex. All my tests are > based on test generators with yield. > > > I did not manage to get the test generators to work inside a TestCase > class and using fixtures but I managed to work around that. I added a > seed for the random number generator to my test function so now I get > deterministic results. Nose indeed has some problems with test generators in classes inherited from TestCase. This is probably a bug in Nose. If you don't need the self.* methods available in a 'TestCase' class, you can base the class on 'object' instead. Nose still finds the tests in it, and the generators start to work. > The problem, I still didn't manage to figure out, is how and whether the > knownfailureif decorator works with test generators, e.g. > > this is my test: > > @dec.knownfailureif(True, "This test is known to fail") def > test_discrete_rvs_cdf_fail(): [clip] > yield check_discrete_chisquare, distname, arg [clip] > I get an error instead of a known failure which is not counted towards > failures and errors. > > Is there a trick to get the decorators to work with generators or is > this not possible? I think it will work if the decorators are applied to the _check methods instead. 
But this seems like a bug in Nose -- it should catch SkipTest raised already in the generator method, not only in functions yielded by it. -- Pauli Virtanen From josef.pktd at gmail.com Thu Nov 13 15:45:44 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 13 Nov 2008 15:45:44 -0500 Subject: [SciPy-dev] Two new test failures In-Reply-To: References: <491C3F71.1040800@stsci.edu> <1cd32cbb0811130728g25214db7rb32374aeb4a267f0@mail.gmail.com> <1cd32cbb0811131128s200bbacbv17a087b21fbe9c53@mail.gmail.com> Message-ID: <1cd32cbb0811131245l1bf936c0qdc859e4036c1c270@mail.gmail.com> On Thu, Nov 13, 2008 at 3:03 PM, Pauli Virtanen wrote: > Thu, 13 Nov 2008 14:28:33 -0500, josef.pktd wrote: > [clip] >> I have problems making my nose tests more complex. All my tests are >> based on test generators with yield. >> >> >> I did not manage to get the test generators to work inside a TestCase >> class and using fixtures but I managed to work around that. I added a >> seed for the random number generator to my test function so now I get >> deterministic results. > > Nose indeed has some problems with test generators in classes inherited > from TestCase. This is probably a bug in Nose. > > If you don't need the self.* methods available in a 'TestCase' class, you > can base the class on 'object' instead. Nose still finds the tests in it, > and the generators start to work. > >> The problem, I still didn't manage to figure out, is how and whether the >> knownfailureif decorator works with test generators, e.g. >> >> this is my test: >> >> @dec.knownfailureif(True, "This test is known to fail") def >> test_discrete_rvs_cdf_fail(): > [clip] >> yield check_discrete_chisquare, distname, arg > [clip] >> I get an error instead of a known failure which is not counted towards >> failures and errors. >> >> Is there a trick to get the decorators to work with generators or is >> this not possible? 
> > I think it will work if the decorators are applied to the _check methods > instead. But this seems like a bug in Nose -- it should catch SkipTest > raised already in the generator method, not only in functions yielded by > it. > > -- > Pauli Virtanen > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > applying the decorator to the check method doesn't seem to work either; nose still doesn't capture the error: @dec.knownfailureif(True, "This test is known to fail") def check_discrete_chisquare(distname, arg, alpha = 0.01): gives me: KnownFailureTest: This test is known to fail ------------------------------------------------ Ran 12 tests in 0.000s FAILED (errors=12) The second problem I have with this is that I did not find any explanation of what I can put into the if-condition of the decorator, e.g. when I try to refer to one of the function arguments, I get an undefined name. Unconditional knownfailures are not very useful when using test generators. @dec.knownfailureif(distname=='nbinom', "This test is known to fail") def check_discrete_chisquare(distname, arg, alpha = 0.01): gives me: @dec.knownfailureif(distname=='nbinom', "This test is known to fail") NameError: name 'distname' is not defined ---------------------------------------------------------------------- Ran 1 test in 0.000s FAILED (errors=1) Thanks, I will skip the failing test for now, since I need my time to finish up some missing tests. We can do stylistic changes, once I'm pretty sure the basic stuff is ok. So far, test coverage for distributions.py has increased from 54% to 86%. One more question: recently there was a discussion about not using the plain Python assert in tests. I didn't find an assert_ in numpy.testing, but I would prefer to use an assert without having to construct a testcase class. What's the recommended way now with numpy 1.2.1? 
Maybe I missed something, otherwise I will work around it. Josef From ondrej at certik.cz Thu Nov 13 15:46:00 2008 From: ondrej at certik.cz (Ondrej Certik) Date: Thu, 13 Nov 2008 21:46:00 +0100 Subject: [SciPy-dev] arpack wrapper status In-Reply-To: References: <85b5c3130811120451ld8437abj4f9592d45b1590a1@mail.gmail.com> <85b5c3130811121004g3934af38r5f799422a61bd49c@mail.gmail.com> <3d375d730811121554r652650cj8d7bd6423a10be59@mail.gmail.com> <85b5c3130811131115j7080892dqd89015cceb787615@mail.gmail.com> Message-ID: <85b5c3130811131246h3797467dyc33bff4dbd71c3ef@mail.gmail.com> On Thu, Nov 13, 2008 at 8:36 PM, Nils Wagner wrote: > On Thu, 13 Nov 2008 20:15:30 +0100 > "Ondrej Certik" wrote: >> On Thu, Nov 13, 2008 at 7:45 PM, Nils Wagner >> wrote: >>> On Wed, 12 Nov 2008 19:50:19 -0500 >>> "Nathan Bell" wrote: >>>> On Wed, Nov 12, 2008 at 6:54 PM, Robert Kern >>>> wrote: >>>>> >>>>> Either way, I'm fairly certain that we will not get a >>>>>positive >>>>> response before we need to cut a beta release. We should >>>>>start >>>>> removing the code soon. >>>>> >>>> >>>> You're probably right. Nevertheless, I've emailed a >>>>request to >>>> arpack at caam.rice.edu. >>> >>> My work relies on ARPACK. >>> Is there any alternative solution >>> wrt nonsymmetric matrices ? >> >> Yes, there are quite a lot of libraries for that: > > > None of them is available in scipy. > > Do you have experience with slepc4py ? Yes, but slepc itself is non-free, so I don't use it much. But the author of slepc told me he is planning to release the next version of slepc as opensource, so we'll see. 
Ondrej From fperez.net at gmail.com Thu Nov 13 16:05:38 2008 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 13 Nov 2008 13:05:38 -0800 Subject: [SciPy-dev] Two new test failures In-Reply-To: <1cd32cbb0811131245l1bf936c0qdc859e4036c1c270@mail.gmail.com> References: <491C3F71.1040800@stsci.edu> <1cd32cbb0811130728g25214db7rb32374aeb4a267f0@mail.gmail.com> <1cd32cbb0811131128s200bbacbv17a087b21fbe9c53@mail.gmail.com> <1cd32cbb0811131245l1bf936c0qdc859e4036c1c270@mail.gmail.com> Message-ID: On Thu, Nov 13, 2008 at 12:45 PM, wrote: >>> The problem, I still didn't manage to figure out, is how and whether the >>> knownfailureif decorator works with test generators, e.g. >>> >>> this is my test: >>> >>> @dec.knownfailureif(True, "This test is known to fail") def >>> test_discrete_rvs_cdf_fail(): >> [clip] >>> yield check_discrete_chisquare, distname, arg >> [clip] >>> I get an error instead of a known failure which is not counted towards >>> failures and errors. >>> >>> Is there a trick to get the decorators to work with generators or is >>> this not possible? >> >> I think it will work if the decorators are applied to the _check methods >> instead. But this seems like a bug in Nose -- it should catch SkipTest >> raised already in the generator method, not only in functions yielded by >> it. I have to run now, but tomorrow I'll write back on this. I've been tracking down locally how to integrate decorators with test generators, there's definitely a bug there. Matthew Brett and I worked on it and I have kludgy but functional solution, and an improved @skipif that can take a function as the test condition, and that works with test generators. I'll get back to you tomorrow with that code, and hopefully we can work out the details until we have a clean solution to push back into the numpy test decorators module. 
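The rough idea is something like the following (an untested sketch, not the actual code -- a local SkipTest class stands in for nose's): a skipif whose condition may also be a zero-argument callable, evaluated only when the test actually runs, which is what makes it usable on yielded check functions.

```python
class SkipTest(Exception):
    # stand-in for nose.SkipTest so this sketch is self-contained
    pass

def skipif(condition, msg='skipped'):
    """Skip a test if `condition` is true.

    `condition` may be a plain bool or a zero-argument callable that is
    evaluated at call time, i.e. once per yielded test, not at import time.
    """
    def decorate(func):
        def wrapper(*args, **kwargs):
            cond = condition() if callable(condition) else condition
            if cond:
                raise SkipTest(msg)
            return func(*args, **kwargs)
        # preserve the name so nose reports the right test
        wrapper.__name__ = func.__name__
        return wrapper
    return decorate
```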
Cheers, f From robert.kern at gmail.com Thu Nov 13 16:25:45 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 13 Nov 2008 15:25:45 -0600 Subject: [SciPy-dev] Two new test failures In-Reply-To: References: <491C3F71.1040800@stsci.edu> <1cd32cbb0811130728g25214db7rb32374aeb4a267f0@mail.gmail.com> <1cd32cbb0811131128s200bbacbv17a087b21fbe9c53@mail.gmail.com> Message-ID: <3d375d730811131325t36512d5aiacf42295cfd9eb7c@mail.gmail.com> On Thu, Nov 13, 2008 at 14:03, Pauli Virtanen wrote: > Thu, 13 Nov 2008 14:28:33 -0500, josef.pktd wrote: > [clip] >> I have problems making my nose tests more complex. All my tests are >> based on test generators with yield. >> >> >> I did not manage to get the test generators to work inside a TestCase >> class and using fixtures but I managed to work around that. I added a >> seed for the random number generator to my test function so now I get >> deterministic results. > > Nose indeed has some problems with test generators in classes inherited > from TestCase. This is probably a bug in Nose. No, it's a feature. When encountering a subclass of unittest.TestCase, nose simply uses unittest semantics. This lets it be backwards compatible with all unittest test suites, even if they have been customized. If you need nose features, you need to use functions or classes that don't subclass from TestCase. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From josef.pktd at gmail.com Thu Nov 13 16:47:44 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 13 Nov 2008 16:47:44 -0500 Subject: [SciPy-dev] Two new test failures In-Reply-To: References: <491C3F71.1040800@stsci.edu> <1cd32cbb0811130728g25214db7rb32374aeb4a267f0@mail.gmail.com> <1cd32cbb0811131128s200bbacbv17a087b21fbe9c53@mail.gmail.com> Message-ID: <1cd32cbb0811131347l81d45a6m7d26fde03b20197a@mail.gmail.com> the current test for scipy.stats take now about 3.6 minutes on my notebook, but I don't have any errors or failures: Ran 886 tests in 215.110s OK Do I need a slow decorator, or something else to indicate that the tests are not very fast? Josef From robert.kern at gmail.com Thu Nov 13 17:36:18 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 13 Nov 2008 16:36:18 -0600 Subject: [SciPy-dev] Two new test failures In-Reply-To: <1cd32cbb0811131347l81d45a6m7d26fde03b20197a@mail.gmail.com> References: <491C3F71.1040800@stsci.edu> <1cd32cbb0811130728g25214db7rb32374aeb4a267f0@mail.gmail.com> <1cd32cbb0811131128s200bbacbv17a087b21fbe9c53@mail.gmail.com> <1cd32cbb0811131347l81d45a6m7d26fde03b20197a@mail.gmail.com> Message-ID: <3d375d730811131436v67a1efe3s31e24b2c174b3cd0@mail.gmail.com> On Thu, Nov 13, 2008 at 15:47, wrote: > the current test for scipy.stats take now about 3.6 minutes on my > notebook, but I don't have any errors or failures: > > Ran 886 tests in 215.110s > OK > > Do I need a slow decorator, or something else to indicate that the > tests are not very fast? from numpy.testing.decorators import slow I'm not entirely sure what practical effect that has, though. Ideally, you could come up with faster tests that do 80% of the job. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From charlesr.harris at gmail.com Thu Nov 13 18:42:59 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 13 Nov 2008 16:42:59 -0700 Subject: [SciPy-dev] Two new test failures In-Reply-To: <3d375d730811131436v67a1efe3s31e24b2c174b3cd0@mail.gmail.com> References: <491C3F71.1040800@stsci.edu> <1cd32cbb0811130728g25214db7rb32374aeb4a267f0@mail.gmail.com> <1cd32cbb0811131128s200bbacbv17a087b21fbe9c53@mail.gmail.com> <1cd32cbb0811131347l81d45a6m7d26fde03b20197a@mail.gmail.com> <3d375d730811131436v67a1efe3s31e24b2c174b3cd0@mail.gmail.com> Message-ID: On Thu, Nov 13, 2008 at 3:36 PM, Robert Kern wrote: > On Thu, Nov 13, 2008 at 15:47, wrote: > > the current test for scipy.stats take now about 3.6 minutes on my > > notebook, but I don't have any errors or failures: > > > > Ran 886 tests in 215.110s > > OK > > > > Do I need a slow decorator, or something else to indicate that the > > tests are not very fast? > > from numpy.testing.decorators import slow > > I'm not entirely sure what practical effect that has, though. > > Ideally, you could come up with faster tests that do 80% of the job. > Is there a way to make the tests depend on some testing level? Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From robert.kern at gmail.com Thu Nov 13 18:50:01 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 13 Nov 2008 17:50:01 -0600 Subject: [SciPy-dev] Two new test failures In-Reply-To: References: <491C3F71.1040800@stsci.edu> <1cd32cbb0811130728g25214db7rb32374aeb4a267f0@mail.gmail.com> <1cd32cbb0811131128s200bbacbv17a087b21fbe9c53@mail.gmail.com> <1cd32cbb0811131347l81d45a6m7d26fde03b20197a@mail.gmail.com> <3d375d730811131436v67a1efe3s31e24b2c174b3cd0@mail.gmail.com> Message-ID: <3d375d730811131550p6033409bo6cdf3e0afdefdb88@mail.gmail.com> On Thu, Nov 13, 2008 at 17:42, Charles R Harris wrote: > > On Thu, Nov 13, 2008 at 3:36 PM, Robert Kern wrote: >> >> On Thu, Nov 13, 2008 at 15:47, wrote: >> > the current test for scipy.stats take now about 3.6 minutes on my >> > notebook, but I don't have any errors or failures: >> > >> > Ran 886 tests in 215.110s >> > OK >> > >> > Do I need a slow decorator, or something else to indicate that the >> > tests are not very fast? >> >> from numpy.testing.decorators import slow >> >> I'm not entirely sure what practical effect that has, though. >> >> Ideally, you could come up with faster tests that do 80% of the job. > > Is there a way to make the tests depend on some testing level? Sure, but you have to be explicit about it every time. nosetests -A "not slow" -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From pav at iki.fi Thu Nov 13 20:13:12 2008 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 14 Nov 2008 01:13:12 +0000 (UTC) Subject: [SciPy-dev] docs.scipy.org -- new site for the documentation marathon References: Message-ID: Sun, 02 Nov 2008 22:07:55 -0500, jh wrote: [clip: docs.scipy.org front page] > Looks great! The tag lines under the main links are all redundant > except for the one on Travis's book. 
E.g.: > > Numpy Reference > reference documentation for Numpy > > There's no new information in the extra line. Also, please add "Guide" > to the title for this one. Fixed. > Would it be possible to take the latest stats graph and stick it next to > the numpy reference manual on the front page? Just the graph for the > last week and the key. Don't do this if it's real work, or if it slows > loading much. We can add graphs for the others when they go under the > wiki. This is not straightforward at the moment -- the bar graph on the doc wiki is not an image, so it's not very easy to embed. But this can go to the wishlist, to be added later. > Also, how difficult is it to put the current date of the works in > progress? That would help those downloading it to see how recent it is > and when to get an update. Again, don't do if it's a lot of work. At present this would need to be done manually every time someone uploads the documents, and at times people would forget to update this. So I'd rather not add the dates here as long as we have to do it manually. But could be done, sure. -- Pauli Virtanen From david at ar.media.kyoto-u.ac.jp Fri Nov 14 00:36:56 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 14 Nov 2008 14:36:56 +0900 Subject: [SciPy-dev] arpack wrapper status In-Reply-To: References: <85b5c3130811120423q64c66ff9xac08b629722221e7@mail.gmail.com> <491AC935.1040105@ar.media.kyoto-u.ac.jp> <85b5c3130811120451ld8437abj4f9592d45b1590a1@mail.gmail.com> <85b5c3130811121004g3934af38r5f799422a61bd49c@mail.gmail.com> <3d375d730811121554r652650cj8d7bd6423a10be59@mail.gmail.com> <85b5c3130811131115j7080892dqd89015cceb787615@mail.gmail.com> Message-ID: <491D0E78.6050404@ar.media.kyoto-u.ac.jp> Nils Wagner wrote: > > None of them is available in scipy. > > Do you have experience with slepc4py ? > > http://t2.unl.edu/documentation/slepc4py > > BTW, is it planned to implement polyeig in scipy ? 
> Generally, what is planned is what people are ready to work on :) The ARPACK situation is a bit disappointing; I hope the license issues can be resolved (in which case it would be reintegrated in scipy). Otherwise, it should not be too difficult to make a scikit from it. David From josef.pktd at gmail.com Fri Nov 14 10:56:11 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 14 Nov 2008 10:56:11 -0500 Subject: [SciPy-dev] docs scipy.stats has incorrect introduction - instances of discrete distributions Message-ID: <1cd32cbb0811140756q5f09e6b0r2273f2cd5808e899@mail.gmail.com> The docs for stats distributions do not mention discrete distributions, i.e. change and add something like this: Each included distribution is an instance of the class rv_continuous or of rv_discrete:

.. autosummary::
   :toctree: generated/

   rv_discrete
   rv_discrete.pmf
   rv_discrete.cdf
   rv_discrete.sf
   rv_discrete.ppf
   rv_discrete.isf
   rv_discrete.stats

additional methods, moment, entropy, nnlf, fit, could also be added (but I haven't finished testing all of those) Josef From hagberg at lanl.gov Fri Nov 14 18:07:15 2008 From: hagberg at lanl.gov (Aric Hagberg) Date: Fri, 14 Nov 2008 16:07:15 -0700 Subject: [SciPy-dev] arpack wrapper status In-Reply-To: References: <5b8d13220811110113v29a658c4yacbc6a2b9939ab41@mail.gmail.com> <20081111153633.GM12595@bigjim2.lanl.gov> <85b5c3130811120423q64c66ff9xac08b629722221e7@mail.gmail.com> <491AC935.1040105@ar.media.kyoto-u.ac.jp> <85b5c3130811120451ld8437abj4f9592d45b1590a1@mail.gmail.com> <85b5c3130811121004g3934af38r5f799422a61bd49c@mail.gmail.com> Message-ID: <20081114230715.GD24599@frappa.lanl.gov> On Wed, Nov 12, 2008 at 06:45:46PM -0500, Nathan Bell wrote: > On Wed, Nov 12, 2008 at 1:04 PM, Ondrej Certik wrote: > >> > >> Did they refuse to re-license ARPACK under something explicitly > >> compatible? > > > > I don't know. One should ask the arpack developers. > > > > Aric, do you want to take the lead on this one?
If not, I can give it a try. > > I'm not sure what is required from them. Does an email stating "we > agree to release ARPACK under the standard, three-clause BSD license" > suffice? I exchanged email with Professor Sorensen and explained the issue with the current ARPACK license. He is definitely concerned about this and committed to do the work to relicense. This will be no small amount of work for him so I want to make sure we get it right. His intent is that ARPACK should be able to be used everywhere LAPACK is used and suggested licensing it that way. My understanding is that the current LAPACK license is exactly the 3-clause BSD ("modified BSD") so that is what I suggested. T Please let me know if there is anything I am missing here. Aric From robert.kern at gmail.com Fri Nov 14 19:18:13 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 14 Nov 2008 18:18:13 -0600 Subject: [SciPy-dev] arpack wrapper status In-Reply-To: <20081114230715.GD24599@frappa.lanl.gov> References: <20081111153633.GM12595@bigjim2.lanl.gov> <85b5c3130811120423q64c66ff9xac08b629722221e7@mail.gmail.com> <491AC935.1040105@ar.media.kyoto-u.ac.jp> <85b5c3130811120451ld8437abj4f9592d45b1590a1@mail.gmail.com> <85b5c3130811121004g3934af38r5f799422a61bd49c@mail.gmail.com> <20081114230715.GD24599@frappa.lanl.gov> Message-ID: <3d375d730811141618w416e9de3nbff315facff0b7fe@mail.gmail.com> On Fri, Nov 14, 2008 at 17:07, Aric Hagberg wrote: > On Wed, Nov 12, 2008 at 06:45:46PM -0500, Nathan Bell wrote: >> On Wed, Nov 12, 2008 at 1:04 PM, Ondrej Certik wrote: >> >> >> >> Did they refuse to re-license ARPACK under something explicitly >> >> compatible? >> > >> > I don't know. One should ask the arpack developers. >> > >> >> Aric, do you want to take the lead on this one? If not, I can give it a try. >> >> I'm not sure what is required from them. Does an email stating "we >> agree to release ARPACK under the standard, three-clause BSD license" >> suffice? 
> > I exchanged email with Professor Sorensen and explained the issue > with the current ARPACK license. > > He is definitely concerned about this and committed to do the work to > relicense. This will be no small amount of work for him so I want to make > sure we get it right. > > His intent is that ARPACK should be able to be used everywhere LAPACK > is used and suggested licensing it that way. My understanding is that > the current LAPACK license is exactly the 3-clause BSD ("modified > BSD") so that is what I suggested. T > > Please let me know if there is anything I am missing here. The remainder of the previous sentence starting with "T"? :-) Thanks for taking charge with this. This is good news. Yes, the LAPACK license is the standard 3-clause BSD license acceptable for scipy (it's almost, if not entirely identical to scipy's). A copy is here: http://netlib.org/lapack/COPYING -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From hagberg at lanl.gov Fri Nov 14 19:40:04 2008 From: hagberg at lanl.gov (Aric Hagberg) Date: Fri, 14 Nov 2008 17:40:04 -0700 Subject: [SciPy-dev] arpack wrapper status In-Reply-To: <3d375d730811141618w416e9de3nbff315facff0b7fe@mail.gmail.com> References: <20081111153633.GM12595@bigjim2.lanl.gov> <85b5c3130811120423q64c66ff9xac08b629722221e7@mail.gmail.com> <491AC935.1040105@ar.media.kyoto-u.ac.jp> <85b5c3130811120451ld8437abj4f9592d45b1590a1@mail.gmail.com> <85b5c3130811121004g3934af38r5f799422a61bd49c@mail.gmail.com> <20081114230715.GD24599@frappa.lanl.gov> <3d375d730811141618w416e9de3nbff315facff0b7fe@mail.gmail.com> Message-ID: <20081115004004.GE24599@frappa.lanl.gov> On Fri, Nov 14, 2008 at 06:18:13PM -0600, Robert Kern wrote: > On Fri, Nov 14, 2008 at 17:07, Aric Hagberg wrote: > > On Wed, Nov 12, 2008 at 06:45:46PM -0500, Nathan Bell wrote: > >> On Wed, Nov 12, 2008 at 1:04 PM, Ondrej Certik wrote: > >> >> > >> >> Did they refuse to re-license ARPACK under something explicitly > >> >> compatible? > >> > > >> > I don't know. One should ask the arpack developers. > >> > > >> > >> Aric, do you want to take the lead on this one? If not, I can give it a try. > >> > >> I'm not sure what is required from them. Does an email stating "we > >> agree to release ARPACK under the standard, three-clause BSD license" > >> suffice? > > > > I exchanged email with Professor Sorensen and explained the issue > > with the current ARPACK license. > > > > He is definitely concerned about this and committed to do the work to > > relicense. This will be no small amount of work for him so I want to make > > sure we get it right. > > > > His intent is that ARPACK should be able to be used everywhere LAPACK > > is used and suggested licensing it that way. My understanding is that > > the current LAPACK license is exactly the 3-clause BSD ("modified > > BSD") so that is what I suggested. T > > > > Please let me know if there is anything I am missing here. 
> > The remainder of the previous sentence starting with "T"? :-) How about: "This is great news!" Aric From pav at iki.fi Sat Nov 15 07:37:05 2008 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 15 Nov 2008 12:37:05 +0000 (UTC) Subject: [SciPy-dev] docs scipy.stats has incorrect introduction - instances of discrete distributions References: <1cd32cbb0811140756q5f09e6b0r2273f2cd5808e899@mail.gmail.com> Message-ID: Fri, 14 Nov 2008 10:56:11 -0500, josef.pktd wrote: > The docs for stats distributions does not mention discrete > distributions, i.e change and add something like this: [clip] Fixed in r5116. -- Pauli Virtanen From cohen at slac.stanford.edu Sat Nov 15 14:45:46 2008 From: cohen at slac.stanford.edu (Johann Cohen-Tanugi) Date: Sat, 15 Nov 2008 20:45:46 +0100 Subject: [SciPy-dev] about http://www.scipy.org/LoktaVolterraTutorial Message-ID: <491F26EA.3030408@slac.stanford.edu> Hello, I came across this web page browsing my stored emails from the mailing list, and I wondered how I could have found it other than by receiving the email that pointed to it.... Should it be moved inside the cookbook? Are there many other examples like that that are hard to find in the web site? thanks, Johann From gael.varoquaux at normalesup.org Sat Nov 15 14:51:34 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sat, 15 Nov 2008 20:51:34 +0100 Subject: [SciPy-dev] about http://www.scipy.org/LoktaVolterraTutorial In-Reply-To: <491F26EA.3030408@slac.stanford.edu> References: <491F26EA.3030408@slac.stanford.edu> Message-ID: <20081115195134.GC23252@phare.normalesup.org> On Sat, Nov 15, 2008 at 08:45:46PM +0100, Johann Cohen-Tanugi wrote: > I came across this web page browsing my stored emails from the mailing > list, and I wondered how I could have found it other than by receiving > the email that pointed to it.... Should it be moved inside the cookbook? It should. > Are there many other examples like that that are hard to find in the web > site? Maybe.
Gaël From josef.pktd at gmail.com Sun Nov 16 00:06:51 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 16 Nov 2008 00:06:51 -0500 Subject: [SciPy-dev] memoize temporary values and scipy special questions Message-ID: <1cd32cbb0811152106w54669d0aq76978526e37f1552@mail.gmail.com> 2 observations and 2 questions observation 1: gausshyper distribution uses the generic way of generating random variables, which is pretty slow for larger samples. the _pdf recalculates the same temporary value each time again. Cinv is the normalization constant for the pdf, which depends on the parameters (a, b, c, z) but not on the sample point x

class gausshyper_gen(rv_continuous):
    def _pdf(self, x, a, b, c, z):
        Cinv = gam(a)*gam(b)/gam(a+b)*special.hyp2f1(c,a,a+b,-z)
        return 1.0/Cinv * x**(a-1.0) * (1.0-x)**(b-1.0) / (1.0+z*x)**c

Note: gam = special.gamma observation 2: special.hyp2f1 seems to have problems over some parameter range question 1: When calculating the pdf for different x but same parameters, then the normalization constant could be temporarily stored somewhere as in memoizing or as a lazy attribute. Is there a precedent and pattern for this in numpy/scipy, or do I try whatever might work in the individual case, e.g. create a cache for repeated calculation of the same values? question 2: In the above case, the normalization constant could also be obtained through direct numerical integration (over x in interval [0,1]). What are the relative merits of using scipy.special versus numerical integration, in terms of speed, accuracy, and range for which correct answers are produced? These are just some ideas I had, while I was playing around with gausshyper to see why it doesn't work very well (it is slow and convergence of fit method to true parameters is not very good).
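A minimal sketch of the caching idea from question 1. Everything here is hypothetical illustration, not existing scipy API: `cached_cinv` and `gausshyper_pdf` are made-up names, and `hyp2f1` is passed in as an argument so the sketch stays self-contained (in scipy it would be `special.hyp2f1`).

```python
from math import gamma

# Cache for the normalization constant Cinv, keyed on the shape
# parameters (a, b, c, z).  Repeated pdf calls with the same
# parameters then skip the expensive gamma/hyp2f1 evaluation.
_cinv_cache = {}

def cached_cinv(a, b, c, z, hyp2f1):
    key = (a, b, c, z)
    if key not in _cinv_cache:
        _cinv_cache[key] = gamma(a) * gamma(b) / gamma(a + b) * hyp2f1(c, a, a + b, -z)
    return _cinv_cache[key]

def gausshyper_pdf(x, a, b, c, z, hyp2f1):
    # Same formula as _pdf above, but Cinv is computed once per
    # parameter tuple instead of once per call.
    Cinv = cached_cinv(a, b, c, z, hyp2f1)
    return 1.0 / Cinv * x**(a - 1.0) * (1.0 - x)**(b - 1.0) / (1.0 + z * x)**c
```

A lazy attribute on the distribution instance, or memoizing a helper that maps the parameter tuple to Cinv, would achieve the same effect; the dict above is just the most explicit form of the idea.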
I haven't profiled anything and rvs is additionally slow because the generic way of generating random variables when only the pdf is given is pretty roundabout: pdf->cdf->ppf->rvs If somebody can provide some information on this, then I don't have to figure it out myself. Thanks, Josef From millman at berkeley.edu Sun Nov 16 03:05:18 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Sun, 16 Nov 2008 00:05:18 -0800 Subject: [SciPy-dev] help needed with 0.7.0 release notes Message-ID: Hey, In preparation for the upcoming release, I have moved the release notes for SciPy into the code repository in the documentation directory: http://projects.scipy.org/scipy/scipy/browser/trunk/doc/release/0.7.0-notes.rst You can see the rendered version here: http://scipy.org/scipy/scipy/milestone/0.7.0 Please take a look at the release notes and let me know if you see anything missing or find any other problems. Of course, it would be even better if you would just go ahead and make any changes yourself. Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From pav at iki.fi Sun Nov 16 06:54:08 2008 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 16 Nov 2008 11:54:08 +0000 (UTC) Subject: [SciPy-dev] help needed with 0.7.0 release notes References: Message-ID: Sun, 16 Nov 2008 00:05:18 -0800, Jarrod Millman wrote: > Hey, > > In preparation for the upcoming release, I have moved the release notes > for SciPy into the code repository in the documentation directory: > http://projects.scipy.org/scipy/scipy/browser/trunk/doc/release/0.7.0-notes.rst > > You can see the rendered version here: > http://scipy.org/scipy/scipy/milestone/0.7.0 > > Please take a look at the release notes and let me know if you see > anything missing or find any other problems. Of course, it would be > even better if you would just go ahead and make any changes yourself. 
I added a mention of the ZVODE complex-valued ODE solver, which is new in 0.7.0. -- Pauli Virtanen From pav at iki.fi Sun Nov 16 06:58:49 2008 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 16 Nov 2008 11:58:49 +0000 (UTC) Subject: [SciPy-dev] help needed with 0.7.0 release notes References: Message-ID: Sun, 16 Nov 2008 00:05:18 -0800, Jarrod Millman wrote: > Hey, > > In preparation for the upcoming release, I have moved the release notes > for SciPy into the code repository in the documentation directory: > http://projects.scipy.org/scipy/scipy/browser/trunk/doc/release/0.7.0- notes.rst > > You can see the rendered version here: > http://scipy.org/scipy/scipy/milestone/0.7.0 > > Please take a look at the release notes and let me know if you see > anything missing or find any other problems. Of course, it would be > even better if you would just go ahead and make any changes yourself. Also, I wonder if these should be mentioned in the release notes: http://projects.scipy.org/scipy/scipy/ticket/289 http://projects.scipy.org/scipy/scipy/ticket/660 as they change the shape of interp1d return values for multidimensional interpolants. It's an API break, but IMO unavoidable, as the old behavior was clearly wrong, but I wonder if this will break someone's code. -- Pauli Virtanen From millman at berkeley.edu Sun Nov 16 07:20:15 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Sun, 16 Nov 2008 04:20:15 -0800 Subject: [SciPy-dev] help needed with 0.7.0 release notes In-Reply-To: References: Message-ID: On Sun, Nov 16, 2008 at 3:58 AM, Pauli Virtanen wrote: > Also, I wonder if these should be mentioned in the release notes: > > http://projects.scipy.org/scipy/scipy/ticket/289 > http://projects.scipy.org/scipy/scipy/ticket/660 > > as they change the shape of interp1d return values for multidimensional > interpolants. It's an API break, but IMO unavoidable, as the old behavior > was clearly wrong, but I wonder if this will break someone's code. 
Please add this to the release notes. Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From josef.pktd at gmail.com Sun Nov 16 09:23:23 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 16 Nov 2008 09:23:23 -0500 Subject: [SciPy-dev] help with special.nbdtrc Message-ID: <1cd32cbb0811160623v58a0ed91o7bfc2ae18b8bc424@mail.gmail.com> Is there a function in scipy.special equivalent to special.nbdtrc that also works when the second coefficient is smaller than one?

>>> special.nbdtrc(2, 4.5, 0.42)
0.79201423936000004
>>> special.nbdtrc(2, 0.4 , 0.42)
1.#QNAN

the reason: according to http://scipy.org/scipy/scipy/ticket/583 nbinom should also work for all parameters smaller than one, but the survival function uses special.nbdtrc

class nbinom_gen(rv_discrete):
    def _sf(self, x, n, pr):
        k = floor(x)
        return special.nbdtrc(k,n,pr)

>>> print stats.nbinom.sf(0,4.4, 0.42)
0.96888304
>>> print stats.nbinom.sf(0,0.4, 0.42)
1.#QNAN

I can work around the problem, since I can easily calculate the sf without specifying a specific formula for it. Josef From ndbecker2 at gmail.com Sun Nov 16 13:44:39 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Sun, 16 Nov 2008 13:44:39 -0500 Subject: [SciPy-dev] fftw? Message-ID: So what happens to fftw? Will this be available as an additional package? Has any work started on this? From nwagner at iam.uni-stuttgart.de Sun Nov 16 13:45:51 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sun, 16 Nov 2008 19:45:51 +0100 Subject: [SciPy-dev] scipy doc and python 2.4 Message-ID: Hi all, AFAIK scipy 0.7 requires python 2.4 and higher. "finally" is not available in 2.4.
Failed to import 'scipy.fftpack.dftfreq':
Failed to import 'scipy.io.wavfile.scipy.io.arff.loadarff':
Failed to import 'scipy.signal.medfilt2':
Failed to import 'scipy.stats.anova':
Failed to import 'scipy.stats.fretcher_l':
Failed to import 'scipy.stats.fretchet_r':
Failed to import 'scipy.stats.meanwhitneyu':
Failed to import 'scipy.stats.paired':
Failed to import 'scipy.stats.von_mises':

touch build/generate-stamp
mkdir -p build/html build/doctrees
LANG=C sphinx-build -b html -d build/doctrees source build/html

Exception occurred:
  File "/usr/lib/python2.4/site-packages/Sphinx-0.5dev_20081113-py2.4.egg/sphinx/application.py", line 161, in setup_extension
    mod = __import__(extension, None, None, ['setup'])
  File "/home/nwagner/svn/scipy/doc/ext/plot_directive.py", line 64
    finally:
    ^
SyntaxError: invalid syntax

Nils From pav at iki.fi Sun Nov 16 13:59:52 2008 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 16 Nov 2008 18:59:52 +0000 (UTC) Subject: [SciPy-dev] scipy doc and python 2.4 References: Message-ID: Sun, 16 Nov 2008 19:45:51 +0100, Nils Wagner wrote: > AFAIK scipy 0.7 requires python 2.4 and higher. "finally" is not > available in 2.4. [clip] > "/home/nwagner/svn/scipy/doc/ext/plot_directive.py", line 64 > finally: > ^ > SyntaxError: invalid syntax Finally is available in 2.4, but not the combined try-except-finally. Fixed. -- Pauli Virtanen From millman at berkeley.edu Sun Nov 16 18:06:02 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Sun, 16 Nov 2008 15:06:02 -0800 Subject: [SciPy-dev] Preparing to tag 0.7.0b1 in 1 week Message-ID: I would like to tag the 0.7.0b1 release on Saturday (11/23) night and announce the beta on Monday (11/24). If you know of any reason that we should delay the first beta, please let me know ASAP. If there is anything that you were planning to commit before the beta, please let me know. All the tests--that aren't known failures or marked to be skipped--pass on my computer.
I don't know of any major features that are waiting to be merged. It would also be great if everyone could test the trunk. Double check to make sure there aren't any regressions. We will be releasing the beta on sourceforge with binaries for Windows and Mac. I will also announce the release on all the mailing lists. I hope to have very wide-spread testing of this beta, so we need to be fairly certain there aren't any unknown regressions. I have a few remaining tasks I would like to see taken care of before the beta:

* review, fix, and apply the percentileofscore rewrite and close ticket 560: http://codereview.appspot.com/7913
* work on release notes http://projects.scipy.org/scipy/scipy/browser/trunk/doc
* quickly review and bring up-to-date the scipy tutorial http://docs.scipy.org/doc/scipy/reference/tutorial/index.html

Please take a look at the release notes and let me know if you see anything that needs to be changed or updated: http://projects.scipy.org/scipy/scipy/milestone/0.7.0 Once we get the beta out we will focus even more on bugs and further improving documentation. Stefan is organizing another scipy sprint as part of the Cape Town Python Users' Group for Saturday, November 29. I will organize a Berkeley sprint as well. So mark your calendars. Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From cournape at gmail.com Sun Nov 16 18:44:23 2008 From: cournape at gmail.com (David Cournapeau) Date: Mon, 17 Nov 2008 08:44:23 +0900 Subject: [SciPy-dev] fftw? In-Reply-To: References: Message-ID: <5b8d13220811161544i3564fc3fu74928fc07130e06b@mail.gmail.com> On Mon, Nov 17, 2008 at 3:44 AM, Neal Becker wrote: > So what happens to fftw? It was removed. > Will this be available as an additional package? No, unless someone works on it. > Has any work started on this? Not that I am aware of.
David From josef.pktd at gmail.com Sun Nov 16 22:44:47 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 16 Nov 2008 22:44:47 -0500 Subject: [SciPy-dev] percentileofscore Message-ID: <1cd32cbb0811161944h3d65b40ehbffa8854f5903ea5@mail.gmail.com> What is percentileofscore supposed to do? I did not find any good interpretation of what the numbers are supposed to mean. From statistics, I am used to a definition according to the cdf, i.e. fraction of elements weakly smaller than the "score". Instead a strictly smaller definition could be useful, as used e.g. in ranking of schools. The current implementation with histogram does not give results that I can easily interpret. The proposed implementation still has one error, as mentioned by Stefan. It uses the mean when there are multiple elements present. I looked at 3 cases: * the score element is uniquely present in array * multiple elements in the array are equal to the score * no element in the array is equal to the score I tried out 5 different definitions: percentileofscore_proposed: taken from google review with correction percentileofscore_mean: similar to proposed, give mean rank if multiple present This just adds another correction to the proposed version (start index at one instead of zero) percentileofscore_meaninterp: similar to proposed, interpolate if missing percentileofscore_strict: one liner, Fraction(x<score) percentileofscore_weak: one liner, Fraction(x<=score)

>>> percentileofscore_proposed([1,2,3,4,5,6,7,8,9,10],4)
30.0
>>> percentileofscore_mean([1,2,3,4,5,6,7,8,9,10],4)
40.0
>>> percentileofscore_meaninterp([1,2,3,4,5,6,7,8,9,10],4)
40.0
>>> percentileofscore_strict([1,2,3,4,5,6,7,8,9,10],4)
30.0
>>> percentileofscore_weak([1,2,3,4,5,6,7,8,9,10],4)
40.0

#multiple elements
>>> percentileofscore_proposed([1,2,3,4,4,5,6,7,8,9],4)
35.0
>>> percentileofscore_mean([1,2,3,4,4,5,6,7,8,9],4)
45.0
>>> percentileofscore_meaninterp([1,2,3,4,4,5,6,7,8,9],4)
45.0
>>> percentileofscore_weak([1,2,3,4,4,5,6,7,8,9],4)
50.0
>>>
percentileofscore_strict([1,2,3,4,4,5,6,7,8,9],4)
30.0

#missing elements
>>> percentileofscore_proposed([1,2,3,5,6,7,8,9,10,11],4)
30.0
>>> percentileofscore_mean([1,2,3,5,6,7,8,9,10,11],4)
30.0
>>> percentileofscore_meaninterp([1,2,3,5,6,7,8,9,10,11],4)
35.0
>>> percentileofscore_weak([1,2,3,5,6,7,8,9,10,11],4)
30.0
>>> percentileofscore_strict([1,2,3,5,6,7,8,9,10,11],4)
30.0

What's the use case for percentileofscore? I just use Fraction(x<=score) or Fraction(x<score). Josef From robert.kern at gmail.com Sun Nov 16 22:56:02 2008 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 16 Nov 2008 21:56:02 -0600 Subject: [SciPy-dev] percentileofscore In-Reply-To: <1cd32cbb0811161944h3d65b40ehbffa8854f5903ea5@mail.gmail.com> References: <1cd32cbb0811161944h3d65b40ehbffa8854f5903ea5@mail.gmail.com> Message-ID: <3d375d730811161956r3227958fs50c0875eaca9365f@mail.gmail.com> On Sun, Nov 16, 2008 at 21:44, wrote: > What is percentileofscore supposed to do? > I did not find any good interpretation what the numbers > are supposed to mean. It's a poor implementation (IMO; I wrote that comment). > From statistics, I am used to a definition according to the > cdf, i.e. fraction of elements weakly smaller than the "score". Yup. > Instead a strictly smaller definition could be useful, as > used eg. in ranking of schools. > The current implementation with histogram, does not give > results that I can easily interpret. > The proposed implementation, has still one error as mentioned > by Stefan. It uses the mean when there are multiple elements present.
> > I looked at 3 cases: > * the score element is uniquely present in array > * multiple elements in the array are equal to the score > * no element in the array is equal to the score > > I tried out 5 different definitions > percentileofscore_proposed: taken from google review with correction > percentileofscore_mean: similar to proposed, give mean rank if multiple present > This just adds another correction to the proposed version (start > index at one instead of zero) > percentileofscore_meaninterp: similar to proposed, interpolate if missing > percentileofscore_strict: one liner, Fraction(x<score) > percentileofscore_weak one liner, Fraction(x<=score) Wikipedia says to use half of the frequency of the ties (x==score) in addition to the cumulative frequency of strict x<score. http://en.wikipedia.org/wiki/Percentile_rank From josef.pktd at gmail.com (josef.pktd at gmail.com) Subject: [SciPy-dev] status of stats.distributions Message-ID: <1cd32cbb0811162124p15fda88u4c129722fa7b4ed9@mail.gmail.com> I finished with the basic cleanup of scipy.stats.distributions. All generic methods work now, all distributions (except logser.rvs) pass the basic tests for the given parameter values. Test coverage according to figleaf is about 91%. There are some remaining problems: Entropy and fit tests for the continuous rv are not included in the test suite. The entropy integration fails for 6 (out of more than 80) continuous distributions and returns nans, I haven't looked at this in detail. Also the entropy test only checks for nan, I didn't find a quick, general test for the numerical correctness of the entropy calculation. The parameter estimation with fit also does not converge very well for some distributions with sample size up to 10000, and it takes pretty long to run. Some methods defined in the specific distributions don't work correctly, but I did not find any mistakes or I could not find enough information on the statistical properties of these distributions with googling, or the bugs are outside of scipy.stats. I replaced these methods by their generic counterparts which work correctly although maybe slower. The skipped methods were renamed by appending "_skip" to the method name.
If someone finds the correction, then any help is appreciated. All my tests are currently for chosen parameter values, but I know of a few cases that are broken for some parameter values that are in the valid (but maybe uncommon) range. I did quite a bit of fuzz testing earlier on, but don't have the time now to go over the remaining cases. Tickets 697, 758, 766 and my ticket 745 can be closed now. ticket 620, I would close as don't fix, but I'm not sure how important users would think this is. I also just fixed 769, which looks correct to me. Enhancement tickets 767, 768 are about including limiting cases in distributions. I don't have a strong opinion about these. Is the speed penalty important in this case or not? Are boundary cases important in applications? There are also possibly some details that I missed, e.g. I didn't check return types, but I am basically finished and waiting to see what a more wide spread testing will bring. Josef From charlesr.harris at gmail.com Mon Nov 17 00:38:27 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 16 Nov 2008 22:38:27 -0700 Subject: [SciPy-dev] status of stats.distributions In-Reply-To: <1cd32cbb0811162124p15fda88u4c129722fa7b4ed9@mail.gmail.com> References: <1cd32cbb0811162124p15fda88u4c129722fa7b4ed9@mail.gmail.com> Message-ID: On Sun, Nov 16, 2008 at 10:24 PM, wrote: > I finished with the basic cleanup of scipy.stats.distribution. > > All generic methods work now, all distributions (except logser.rvs) pass > the > basic tests for the given parameter values. Test coverage according to > figleaf is about 91%. > > There are some remaining problems: > > Entropy and fit test for the continuous rv are not included > in the test suite. The entropy integration fails for 6 (out of more than > 80) > continuous distributions and returns nans, I haven't looked at this in > detail. 
> Also the entropy test only checks for nan, I didn't find a quick, general > test > for the numerical correctness of the entropy calculation. > The parameter estimation with fit also does not converge very well for > for some distribution with sample size up to 10000, and it takes pretty > long to run. > > Some methods defined in the specific distributions don't work > correctly, but I did not find any mistakes or I could not find enough > information of the statistical properties of these distributions with > googling or the bugs are outside of scipy.stats. > I replaced these methods by their generic counterparts which > work correctly although maybe slower. The skipped methods were > renamed by appending "_skip " to the method name. If someone finds > the correction, then any help is appreciated. > > All my tests are currently for chosen parameter values, but > I know of a few cases that are broken for some parameter > values that are in the valid (but maybe uncommon) range. I did > quite a bit of fuzz testing earlier on, but don't have the time now > to go over the remaining cases. > > Tickets 697, 758, 766 and my ticket 745 can be closed now. > ticket 620, I would close as don't fix, but I'm not sure how > important users would think this is. > I also just fixed 769, which looks correct to me. > Can you close tickets? I think you should now be able to close these yourself. Give it a shot. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From josef.pktd at gmail.com Mon Nov 17 00:53:29 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 17 Nov 2008 00:53:29 -0500 Subject: [SciPy-dev] percentileofscore In-Reply-To: <3d375d730811161956r3227958fs50c0875eaca9365f@mail.gmail.com> References: <1cd32cbb0811161944h3d65b40ehbffa8854f5903ea5@mail.gmail.com> <3d375d730811161956r3227958fs50c0875eaca9365f@mail.gmail.com> Message-ID: <1cd32cbb0811162153u72f9fbb9vc2c0d2506f2cf733@mail.gmail.com> On Sun, Nov 16, 2008 at 10:56 PM, Robert Kern wrote: > > Wikipedia says to use half of the frequency of the ties (x==score) in > addition to the cumulative frequency of strict x<score. > > http://en.wikipedia.org/wiki/Percentile_rank > The 0.5 weight looks pretty arbitrary to me: percentilescore_wikip([1,2,3,4,4,4,5,6,7,8]) = 3 + 0.5*3 = 4.5 I guess the question is, whether this is a commonly accepted convention, or maybe, which and whose convention should scipy follow. The proposed patch is pretty easy to adjust to any convention. Maybe percentileofscore should get a weight parameter for ties: 0 for strict inequality, 1 for weak inequality, 0.5 (default?) for wikipedia and -1 for mean. The inverse functions, scoreatpercentile and mquantiles in stats.mstats, give a whole range of weighting schemes, but it takes too long now for me to figure out what that actually does.
Josef From robert.kern at gmail.com Mon Nov 17 00:58:36 2008 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 16 Nov 2008 23:58:36 -0600 Subject: [SciPy-dev] percentileofscore In-Reply-To: <1cd32cbb0811162153u72f9fbb9vc2c0d2506f2cf733@mail.gmail.com> References: <1cd32cbb0811161944h3d65b40ehbffa8854f5903ea5@mail.gmail.com> <3d375d730811161956r3227958fs50c0875eaca9365f@mail.gmail.com> <1cd32cbb0811162153u72f9fbb9vc2c0d2506f2cf733@mail.gmail.com> Message-ID: <3d375d730811162158n660ec83dx27f67ab51f4de367@mail.gmail.com> On Sun, Nov 16, 2008 at 23:53, wrote: > On Sun, Nov 16, 2008 at 10:56 PM, Robert Kern wrote: > >> >> Wikipedia says to use half of the frequency of the ties (x==score) in >> addition to the cumulative frequency of strict x<score >> http://en.wikipedia.org/wiki/Percentile_rank >> > > The 0.5 weight looks pretty arbitrary to me > percentilescore_wikip([1,2,3,4,4,4,5,6,7,8], 4) = 3 + 0.5*3 = 4.5 It's not arbitrary. It's the average of the x<score and x<=score counts. > I guess the question is, whether this is a commonly accepted convention, > or maybe, which and whose convention should scipy follow. > > The proposed patch is pretty easy to adjust to any convention. > Maybe percentileofscore should get a weight parameter for ties: > 0 for strict inequality, 1 for weak inequality, > 0.5 (default?) for wikipedia and -1 for mean. I prefer strings, myself. 'strict', 'weak', 'mean'. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From josef.pktd at gmail.com Mon Nov 17 10:31:08 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 17 Nov 2008 10:31:08 -0500 Subject: [SciPy-dev] percentileofscore In-Reply-To: <3d375d730811162158n660ec83dx27f67ab51f4de367@mail.gmail.com> References: <1cd32cbb0811161944h3d65b40ehbffa8854f5903ea5@mail.gmail.com> <3d375d730811161956r3227958fs50c0875eaca9365f@mail.gmail.com> <1cd32cbb0811162153u72f9fbb9vc2c0d2506f2cf733@mail.gmail.com> <3d375d730811162158n660ec83dx27f67ab51f4de367@mail.gmail.com> Message-ID: <1cd32cbb0811170731h30975cf8jeb49b747267a74f9@mail.gmail.com> >> The proposed patch is pretty easy to adjust to any convention. >> Maybe percentileofscore should get a weight parameter for ties: >> 0 for strict inequality, 1 for weak inequality, >> 0.5 (default?) for wikipedia and -1 for mean. > > I prefer strings, myself. 'strict', 'weak', 'mean'. > How about this? for experimentation without string arguments, see attachment ''' >>> percentileofscore_gen([1,2,3,4,5,6,7,8,9,10],4) #default weight=0.5 35.0 >>> percentileofscore_gen([1,2,3,4,5,6,7,8,9,10],4,weight=0) 30.0 >>> percentileofscore_gen([1,2,3,4,5,6,7,8,9,10],4,weight=1) 40.0 # multiple - 2 >>> percentileofscore_gen([1,2,3,4,4,5,6,7,8,9],4,weight=0.0) 30.0 >>> percentileofscore_gen([1,2,3,4,4,5,6,7,8,9],4,weight=1) 50.0 >>> percentileofscore_gen([1,2,3,4,4,5,6,7,8,9],4,weight=0.5) 40.0 # multiple - 3 >>> percentileofscore_gen([1,2,3,4,4,4,5,6,7,8],4,weight=0.5) 45.0 >>> percentileofscore_gen([1,2,3,4,4,4,5,6,7,8],4,weight=0) 30.0 >>> percentileofscore_gen([1,2,3,4,4,4,5,6,7,8],4,weight=1) 60.0 # missing >>> percentileofscore_gen([1,2,3,5,6,7,8,9,10,11],4) 30.0 >>> percentileofscore_gen([1,2,3,5,6,7,8,9,10,11],4,weight=0) 30.0 >>> percentileofscore_gen([1,2,3,5,6,7,8,9,10,11],4,weight=1) 30.0 #larger numbers >>> percentileofscore_gen([10,20,30,40,50,60,70,80,90,100],40) 35.0 >>> percentileofscore_gen([10,20,30,40,50,60,70,80,90,100],40,weight=0) 30.0 >>> 
percentileofscore_gen([10,20,30,40,50,60,70,80,90,100],40,weight=1) 40.0 >>> percentileofscore_gen([10,20,30,40,40,40,50,60,70,80],40,weight=0.5) 45.0 >>> percentileofscore_gen([10,20,30,40,40,40,50,60,70,80],40,weight=0) 30.0 >>> percentileofscore_gen([10,20,30,40,40,40,50,60,70,80],40,weight=1) 60.0 >>> percentileofscore_gen([ 10,20,30,50,60,70,80,90,100,110],40,weight=0.5) 30.0 >>> percentileofscore_gen([ 10,20,30,50,60,70,80,90,100,110],40,weight=0) 30.0 >>> percentileofscore_gen([ 10,20,30,50,60,70,80,90,100,110],40,weight=1) 30.0 >>> percentileofscore_gen([20,80,100],80) 50.0 >>> (1+0.5*1)/3.0*100 50.0 >>> percentileofscore_gen([20,80,100],80) == (1+0.5*1)/3.0*100 True -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: percofscore_1.py URL: From robince at gmail.com Mon Nov 17 13:42:03 2008 From: robince at gmail.com (Robin) Date: Mon, 17 Nov 2008 18:42:03 +0000 Subject: [SciPy-dev] nearest neighbour interpolation Message-ID: Hi, I just got bitten by this bug: http://www.scipy.org/scipy/scipy/ticket/773 It is quite nasty I think (I lost a lot of time...) and could be fixed easily just by changing the documentation. (At least so people don't loose so much time). My wiki username for the documentation is robince, so if I am enabled for write access I could make this change. Is there any way to get nearest neighbour interpolation in scipy? This bug looks related: http://www.scipy.org/scipy/scipy/ticket/305 Perhaps this could be reopened? Thanks, Robin From stefan at sun.ac.za Mon Nov 17 15:18:51 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 17 Nov 2008 22:18:51 +0200 Subject: [SciPy-dev] nearest neighbour interpolation In-Reply-To: References: Message-ID: <9457e7c80811171218y1259cfddg21be2c274a2bb0fa@mail.gmail.com> 2008/11/17 Robin : > My wiki username for the documentation is robince, so if I am enabled > for write access I could make this change. Done. Thanks! 
Cheers Stéfan From pav at iki.fi Mon Nov 17 15:23:36 2008 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 17 Nov 2008 20:23:36 +0000 (UTC) Subject: [SciPy-dev] nearest neighbour interpolation References: Message-ID: Mon, 17 Nov 2008 18:42:03 +0000, Robin wrote: > Hi, > > I just got bitten by this bug: > http://www.scipy.org/scipy/scipy/ticket/773 > > It is quite nasty I think (I lost a lot of time...) and could be fixed > easily just by changing the documentation. (At least so people don't > loose so much time). > > My wiki username for the documentation is robince, so if I am enabled > for write access I could make this change. > > Is there any way to get nearest neighbour interpolation in scipy? This > bug looks related: > http://www.scipy.org/scipy/scipy/ticket/305 Perhaps this could be > reopened? I have a clean and bug-fixed implementation from #305 here: http://github.com/pv/scipy/commit/777d59eb6498b73a1c018600b2c11b42ec410eb6 http://github.com/pv/scipy/commit/20ee8bdb07d6629ebe16cf850d8c34b80ce6b0b9 Shall I commit? Or does someone immediately know how to fix the 'zero' order spline -- it appears to have also other problems: >>> from scipy.interpolate import interp1d >>> x = [0,1,2,3,4,5,6,7,8,9] >>> c = interp1d(x,x,kind=0) >>> c(x) array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 8.]) -- Pauli Virtanen From pav at iki.fi Mon Nov 17 15:25:20 2008 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 17 Nov 2008 20:25:20 +0000 (UTC) Subject: [SciPy-dev] nearest neighbour interpolation References: Message-ID: Mon, 17 Nov 2008 18:42:03 +0000, Robin wrote: > Hi, > > I just got bitten by this bug: > http://www.scipy.org/scipy/scipy/ticket/773 > > It is quite nasty I think (I lost a lot of time...) and could be fixed > easily just by changing the documentation. (At least so people don't > loose so much time). > > My wiki username for the documentation is robince, so if I am enabled > for write access I could make this change.
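[For readers following the zero-order-spline bug above: a standalone nearest-neighbour lookup is easy to sketch with `numpy.searchsorted`. This is an illustration of the intended behaviour, not the scipy API; in this sketch a query exactly halfway between two samples snaps to the right-hand one.]

```python
import numpy as np

def interp_nearest(xp, fp, x):
    """Nearest-neighbour interpolation (illustrative sketch only).

    `xp` must be sorted in increasing order.
    """
    xp = np.asarray(xp, dtype=float)
    fp = np.asarray(fp)
    mid = (xp[1:] + xp[:-1]) / 2.0            # boundaries between neighbours
    return fp[np.searchsorted(mid, x, side='right')]

x = np.arange(10.0)
# unlike interp1d(..., kind=0) above, the last node maps to itself
assert np.array_equal(interp_nearest(x, x, x), x)
```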
Unfortunately, the scipy docs aren't yet in the Doc wiki, although the main page leads to believe so. -- Pauli Virtanen From thouis at broad.mit.edu Tue Nov 18 12:59:23 2008 From: thouis at broad.mit.edu (Thouis (Ray) Jones) Date: Tue, 18 Nov 2008 12:59:23 -0500 Subject: [SciPy-dev] help needed with 0.7.0 release notes In-Reply-To: References: Message-ID: <6c17e6f50811180959r7506f6d0u7586ac1906d3a1c7@mail.gmail.com> I don't have SVN write access, but here's a patch for the release notes adding a short description the matlab io changes (hopefully I've formatted it appropriately). Ray Jones On Sun, Nov 16, 2008 at 3:05 AM, Jarrod Millman wrote: > Hey, > > In preparation for the upcoming release, I have moved the release > notes for SciPy into the code repository in the documentation > directory: > http://projects.scipy.org/scipy/scipy/browser/trunk/doc/release/0.7.0-notes.rst > > You can see the rendered version here: > http://scipy.org/scipy/scipy/milestone/0.7.0 > > Please take a look at the release notes and let me know if you see > anything missing or find any other problems. Of course, it would be > even better if you would just go ahead and make any changes yourself. > > Thanks, > > -- > Jarrod Millman > Computational Infrastructure for Research Labs > 10 Giannini Hall, UC Berkeley > phone: 510.643.4014 > http://cirl.berkeley.edu/ > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- A non-text attachment was scrubbed... 
Name: mio.RN.patch Type: application/octet-stream Size: 663 bytes Desc: not available URL: From opossumnano at gmail.com Tue Nov 18 13:05:44 2008 From: opossumnano at gmail.com (Tiziano Zito) Date: Tue, 18 Nov 2008 19:05:44 +0100 Subject: [SciPy-dev] symeig integration ready Message-ID: <20081118180543.GA28573@localhost> Dear devs, I have managed to integrate symeig in scipy, but before doing a commit I would like to get your feedback: - scipy.linalg.eigh gets the following signature: eigh(a, b=None, lower=True, eigvals_only=False, overwrite_a=False, overwrite_b=False, turbo=True, eigvals=None, type=1): this is different from the current signature: eigh(a, lower=True, eigvals_only=False, overwrite_a=False): Is it OK to break backward-compatibility? Code would only break if someone was specifing keyword arguments as positional arguments... Same thing for the eigvalsh routine. - I added the pyf wrappers for the new lapack routines in generic_lapack.pyf . They look somewhat different from all other wrappers (they were written 6 years ago without scipy in mind). In the long run I think those fortran routines should get the same kind of wrappers all others have, but I can not do such a rewriting without the guidance of the guy who wrote the generic_XXX.pyf in the first place, which I assume is pearu. In particular the calc_lwork.f module puzzles me. The new lapack routine get the minimum "lwork" and not the optimal one, which should be delivered by calc_lwork, if I only knew how... - I added 129 tests to test_decomp.py using the marvelous nose parametric tests feature. They run pretty fast, and they excercise all relevant lapack routines with most (but not all) combinations of parameters. Is it OK? Should I flag those tests as "less" important so that they are not run by default? If everything is all right with you I would commit the changes tomorrow. 
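[For readers unfamiliar with the generalized problem the proposed eigh(a, b=...) signature exposes: it solves A v = w B v for symmetric A and symmetric positive-definite B. The reduction to a standard eigenproblem via a Cholesky factor of B -- roughly what the underlying LAPACK *sygv drivers do -- can be sketched with numpy alone:]

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M + M.T                         # symmetric
B = M @ M.T + 4.0 * np.eye(4)       # symmetric positive definite

# With B = L L^T, substituting v = L^-T y turns A v = w B v into the
# standard problem (L^-1 A L^-T) y = w y.
L = np.linalg.cholesky(B)
Linv = np.linalg.inv(L)
w, y = np.linalg.eigh(Linv @ A @ Linv.T)
v = Linv.T @ y                      # back-transformed eigenvectors

assert np.allclose(A @ v, (B @ v) * w)        # generalized eigenpairs
assert np.allclose(v.T @ B @ v, np.eye(4))    # vectors are B-orthonormal
```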
I will also continue to provide symeig as a standalone module until scipy 0.7 is released and a corresponding debian package is available. A new release with all changes (and bug-fixes :-)) I made to incorporate it in scipy will be released soon. Let me know, tiziano From millman at berkeley.edu Tue Nov 18 14:27:20 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 18 Nov 2008 11:27:20 -0800 Subject: [SciPy-dev] symeig integration ready In-Reply-To: <20081118180543.GA28573@localhost> References: <20081118180543.GA28573@localhost> Message-ID: On Tue, Nov 18, 2008 at 10:05 AM, Tiziano Zito wrote: > Is it OK to break backward-compatibility? Code would only break if > someone was specifing keyword arguments as positional arguments... > Same thing for the eigvalsh routine. That's OK with me, but I don't have a strong opinion on it. > - I added 129 tests to test_decomp.py using the marvelous nose > parametric tests feature. They run pretty fast, and they excercise > all relevant lapack routines with most (but not all) combinations > of parameters. Is it OK? Should I flag those tests as "less" > important so that they are not run by default? That is excellent! I would just leave them as is for now. I would rather have the tests running relatively aggressively for the upcoming 0.7.0b1 release by default. > If everything is all right with you I would commit the changes > tomorrow. I will also continue to provide symeig as a standalone > module until scipy 0.7 is released and a corresponding debian > package is available. A new release with all changes (and bug-fixes > :-)) I made to incorporate it in scipy will be released soon. Thank you very much for putting the effort into getting this done. 
If you could, I would appreciate it if you would add this to the release notes: http://projects.scipy.org/scipy/scipy/browser/trunk/doc/release/0.7.0-notes.rst -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From josef.pktd at gmail.com Tue Nov 18 16:35:28 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 18 Nov 2008 16:35:28 -0500 Subject: [SciPy-dev] are masked array statistical function hidden intentionally? Message-ID: <1cd32cbb0811181335r66fd6deblb9e174ae3104c82@mail.gmail.com> scipy\stats\mstats.py and scipy\stats\mmorestats.py are masked array version of many of the statistical function. They are not imported in any __init__.py and I did not find them in the new documentation for scipy at http://docs.scipy.org/doc/scipy/reference/stats.html. Is this on purpose or not? Josef From pgmdevlist at gmail.com Tue Nov 18 16:52:16 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Tue, 18 Nov 2008 16:52:16 -0500 Subject: [SciPy-dev] are masked array statistical function hidden intentionally? In-Reply-To: <1cd32cbb0811181335r66fd6deblb9e174ae3104c82@mail.gmail.com> References: <1cd32cbb0811181335r66fd6deblb9e174ae3104c82@mail.gmail.com> Message-ID: <2908E30C-D30A-4588-8BC2-62CBFE0BFB25@gmail.com> On Nov 18, 2008, at 4:35 PM, josef.pktd at gmail.com wrote: > scipy\stats\mstats.py and scipy\stats\mmorestats.py are masked array > version of many of the statistical function. > > They are not imported in any __init__.py and I did not find them in > the new documentation for scipy at > http://docs.scipy.org/doc/scipy/reference/stats.html. > > Is this on purpose or not? I'd say yes. I tried to keep the names of these functions as close to their non-masked equivalent as possible. Importing them in __init__ would likely erase the non-masked ones (or vice-versa depending the import order).
It's not that difficult to access them through: from scipy.stats.mstats import thefunctionyouwant or import scipy.stats.mstats as mstats About the documentation: well, I guess I should take the blame for not having written more thorough docstrings. However, I'm not in charge of building the whole doc. Now, I wonder whether it wouldn't be worth to consolidate things a bit, by making sure a function returns a masked array if its input is a masked array, a ndarray otherwise... From josef.pktd at gmail.com Tue Nov 18 19:04:01 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 18 Nov 2008 19:04:01 -0500 Subject: [SciPy-dev] are masked array statistical function hidden intentionally? In-Reply-To: <2908E30C-D30A-4588-8BC2-62CBFE0BFB25@gmail.com> References: <1cd32cbb0811181335r66fd6deblb9e174ae3104c82@mail.gmail.com> <2908E30C-D30A-4588-8BC2-62CBFE0BFB25@gmail.com> Message-ID: <1cd32cbb0811181604i7634ed3w1022fb7a21d08249@mail.gmail.com> On Tue, Nov 18, 2008 at 4:52 PM, Pierre GM wrote: > > On Nov 18, 2008, at 4:35 PM, josef.pktd at gmail.com wrote: > >> scipy\stats\mstats.py and scipy\stats\mmorestats.py are masked array >> version of many of the statistical function. >> >> They are not imported in any __init__.py and I did not find them in >> the new documentation for scipy at >> http://docs.scipy.org/doc/scipy/reference/stats.html. >> >> Is this on purpose or not? > > > I'd say yes. I tried to keep the names of these functions as close to > their non-masked equivalent as possible. Importing them in __init__ > would likely erase the non-masked ones (or vice-versa depending the > import order). It's not that difficult to access them through: > from scipy.stats.mstats import thefunctionyouwant > or > import scipy.stats.mstats as mstats > > About the documentation: well, I guess I should take the blame for not > having written more thorough docstrings. However, I'm not in charge > of building the whole doc.
> > Now, I wonder whether it wouldn't be worth to consolidate things a > bit, by making sure a function returns a masked array if its input is > a masked array, a ndarray otherwise... > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > I thought it more as an "advertising" question, since the docs seem to pick up only functions that are exposed in __all__, also np.lookfor does not seem to pick up any of the functions. Josef From josef.pktd at gmail.com Tue Nov 18 19:10:36 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 18 Nov 2008 19:10:36 -0500 Subject: [SciPy-dev] vonmises_cython exposes all of numpy to np.lookfor Message-ID: <1cd32cbb0811181610p1418263ft546226a65e53ef03@mail.gmail.com> vonmises_cython exposes all of numpy to np.lookfor It looks like there is no __all__ in vonmises_cython, instead it exports np, scipy and __builtins__ which seems to confuse np.lookfor >>> dir(stats.vonmises_cython) ['__builtins__', '__doc__', '__file__', '__name__', 'i0', 'np', 'numpy', 'scipy' , 'von_mises_cdf', 'von_mises_cdf_normalapprox'] Josef From robert.kern at gmail.com Tue Nov 18 19:12:07 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 18 Nov 2008 18:12:07 -0600 Subject: [SciPy-dev] vonmises_cython exposes all of numpy to np.lookfor In-Reply-To: <1cd32cbb0811181610p1418263ft546226a65e53ef03@mail.gmail.com> References: <1cd32cbb0811181610p1418263ft546226a65e53ef03@mail.gmail.com> Message-ID: <3d375d730811181612v12422ebaj285c10bae04031f1@mail.gmail.com> On Tue, Nov 18, 2008 at 18:10, wrote: > vonmises_cython exposes all of numpy to np.lookfor > > It looks like there is no __all__ in vonmises_cython, instead it > exports np, scipy and > __builtins__ which seems to confuse np.lookfor Go ahead and add it. 
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From josef.pktd at gmail.com Tue Nov 18 19:19:34 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 18 Nov 2008 19:19:34 -0500 Subject: [SciPy-dev] vonmises_cython exposes all of numpy to np.lookfor In-Reply-To: <3d375d730811181612v12422ebaj285c10bae04031f1@mail.gmail.com> References: <1cd32cbb0811181610p1418263ft546226a65e53ef03@mail.gmail.com> <3d375d730811181612v12422ebaj285c10bae04031f1@mail.gmail.com> Message-ID: <1cd32cbb0811181619p1fa6d246x1208f51cd400a15@mail.gmail.com> I think it is missing inside the cython file, and since I never seriously touched cython or pyrex files, I would prefer if someone else does it. Josef On Tue, Nov 18, 2008 at 7:12 PM, Robert Kern wrote: > On Tue, Nov 18, 2008 at 18:10, wrote: >> vonmises_cython exposes all of numpy to np.lookfor >> >> It looks like there is no __all__ in vonmises_cython, instead it >> exports np, scipy and >> __builtins__ which seems to confuse np.lookfor > > Go ahead and add it. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From pav at iki.fi Wed Nov 19 04:07:13 2008 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 19 Nov 2008 09:07:13 +0000 (UTC) Subject: [SciPy-dev] are masked array statistical function hidden intentionally? 
References: <1cd32cbb0811181335r66fd6deblb9e174ae3104c82@mail.gmail.com> <2908E30C-D30A-4588-8BC2-62CBFE0BFB25@gmail.com> <1cd32cbb0811181604i7634ed3w1022fb7a21d08249@mail.gmail.com> Message-ID: Tue, 18 Nov 2008 19:04:01 -0500, josef.pktd wrote: > On Tue, Nov 18, 2008 at 4:52 PM, Pierre GM wrote: >> About the documentation: well, I guess I should take the blame for not >> having written more thorough docstrings. However, I'm not in charge of >> building the whole doc. [clip] > I thought it more as an "advertising" question, since the docs seem to > pick up only functions that are exposed in __all__, also np.lookfor does > not seem to pick up any of the functions. The docs don't pick up functions automatically -- instead, each function is manually added to the docs, to a place that makes sense. This is the way Sphinx works, and IMHO it is the correct way -- only the developer knows what it supposed to be the API of a module and how to best describe it. The Scipy docs live in the Scipy repository, under doc/. Please feel free to add whatever you think is missing there. Ditto for the numpy docs (they are also in Numpy's SVN, but under /numpy-docs/trunk). Pierre -- the masked array documentation in Numpy Reference Guide is especially lacking as not all MA functions are listed or the functionality explained. I'm not so familiar with MA, so I would appreciate help in writing the documentation for this part of Numpy. np.lookfor is a different beast, and it IIRC picks only items listed in __all__. I am the one to blame if the logic in it is incorrect, so please tell me if you have a better idea how it should work. -- Pauli Virtanen From pav at iki.fi Wed Nov 19 04:40:28 2008 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 19 Nov 2008 09:40:28 +0000 (UTC) Subject: [SciPy-dev] are masked array statistical function hidden intentionally? 
References: <1cd32cbb0811181335r66fd6deblb9e174ae3104c82@mail.gmail.com> <2908E30C-D30A-4588-8BC2-62CBFE0BFB25@gmail.com> <1cd32cbb0811181604i7634ed3w1022fb7a21d08249@mail.gmail.com> Message-ID: Wed, 19 Nov 2008 09:07:13 +0000, Pauli Virtanen wrote: [clip] > Pierre -- the masked array documentation in Numpy Reference Guide is > especially lacking as not all MA functions are listed or the > functionality explained. I'm not so familiar with MA, so I would > appreciate help in writing the documentation for this part of Numpy. [clip] Just to clarify: I tried to say that the part of the Sphinx-generated docs dealing with MA are not properly organized at the present: - http://docs.scipy.org/doc/numpy/reference/arrays.classes.html#module-numpy.ma Some general text exists, but eg. special methods and properties of MAs are not systematically listed. Also, MAs might also deserve a page of their own. - http://docs.scipy.org/doc/numpy/reference/routines.ma.html#routines-ma Not all MA functions or constants (eg. ma.masked) are listed. Reorganization of this is pending in my job queue, but help is appreciated. -- Pauli Virtanen From scott.sinclair.za at gmail.com Wed Nov 19 06:38:01 2008 From: scott.sinclair.za at gmail.com (Scott Sinclair) Date: Wed, 19 Nov 2008 13:38:01 +0200 Subject: [SciPy-dev] are masked array statistical function hidden intentionally? In-Reply-To: References: <1cd32cbb0811181335r66fd6deblb9e174ae3104c82@mail.gmail.com> <2908E30C-D30A-4588-8BC2-62CBFE0BFB25@gmail.com> <1cd32cbb0811181604i7634ed3w1022fb7a21d08249@mail.gmail.com> Message-ID: <6a17e9ee0811190338r3adb129dlaa9cd7f919b858b8@mail.gmail.com> 2008/11/19 Pauli Virtanen : > Tue, 18 Nov 2008 19:04:01 -0500, josef.pktd wrote: >> On Tue, Nov 18, 2008 at 4:52 PM, Pierre GM wrote: >>> About the documentation: well, I guess I should take the blame for not >>> having written more thorough docstrings. However, I'm not in charge of >>> building the whole doc. 
> Pierre -- the masked array documentation in Numpy Reference Guide is > especially lacking as not all MA functions are listed or the > functionality explained. I'm not so familiar with MA, so I would > appreciate help in writing the documentation for this part of Numpy. I'm putting some effort into this area at the moment, but I'm a marathon runner not a sprinter (tm) and also trying to work it out as I go along. Pierre, if you aren't able to find the time for this, then it might be most productive (for you) to review the docstrings as I work on them. That way you're just checking that the documentation doesn't lie or miss any subtleties. Cheers, Scott From josef.pktd at gmail.com Wed Nov 19 08:49:16 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 19 Nov 2008 08:49:16 -0500 Subject: [SciPy-dev] are masked array statistical function hidden intentionally? In-Reply-To: References: <1cd32cbb0811181335r66fd6deblb9e174ae3104c82@mail.gmail.com> <2908E30C-D30A-4588-8BC2-62CBFE0BFB25@gmail.com> <1cd32cbb0811181604i7634ed3w1022fb7a21d08249@mail.gmail.com> Message-ID: <1cd32cbb0811190549r4f655fe4lb89cab183d77e9d7@mail.gmail.com> On Wed, Nov 19, 2008 at 4:07 AM, Pauli Virtanen wrote: > Tue, 18 Nov 2008 19:04:01 -0500, josef.pktd wrote: >> On Tue, Nov 18, 2008 at 4:52 PM, Pierre GM wrote: >>> About the documentation: well, I guess I should take the blame for not >>> having written more thorough docstrings. However, I'm not in charge of >>> building the whole doc. > [clip] >> I thought it more as an "advertising" question, since the docs seem to >> pick up only functions that are exposed in __all__, also np.lookfor does >> not seem to pick up any of the functions. > > The docs don't pick up functions automatically -- instead, each function > is manually added to the docs, to a place that makes sense. 
This is the > way Sphinx works, and IMHO it is the correct way -- only the developer > knows what it supposed to be the API of a module and how to best describe > it. > > The Scipy docs live in the Scipy repository, under doc/. Please feel free > to add whatever you think is missing there. Ditto for the numpy docs > (they are also in Numpy's SVN, but under /numpy-docs/trunk). > > Pierre -- the masked array documentation in Numpy Reference Guide is > especially lacking as not all MA functions are listed or the > functionality explained. I'm not so familiar with MA, so I would > appreciate help in writing the documentation for this part of Numpy. > > np.lookfor is a different beast, and it IIRC picks only items listed in > __all__. I am the one to blame if the logic in it is incorrect, so please > tell me if you have a better idea how it should work. > > -- > Pauli Virtanen > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > Currently the MA related functions are split over two files, and users would have to know which function is in which module. Should we create a namespace stats.ma that imports all of mstats and mmorestats? Then stats.ma can be imported and included in stats,__init__.__all__. something like (or relative imports) stats.ma.py ----------------- from scipy.stats.mstats import * from scipy.stats.mmorestats import * ---------------- this would make it easier to find, both for users and for np.lookfor. Another question: When users want to use masked array version by default, is there a recipe to overwrite all corresponding functions in scipy.stats, i.e. something like stats.__dict__.update(stats.ma.__dict__ ... but only for public function. 
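[Restricting the update to public functions, as suggested above, could look like this -- a sketch with stand-in modules and a hypothetical `make_ma_default` helper, not actual scipy.stats code:]

```python
import types

def make_ma_default(target, source):
    """Rebind every public callable from `source` over the matching
    name in `target` (sketch of the proposed make_ma_as_default)."""
    for name, obj in vars(source).items():
        if name.startswith("_") or not callable(obj) or name not in vars(target):
            continue
        setattr(target, name, obj)

stats = types.ModuleType("stats")         # stand-ins for scipy.stats / stats.ma
stats_ma = types.ModuleType("stats.ma")
exec("def gmean(a):\n    return 'plain'\n", vars(stats))
exec("def gmean(a):\n    return 'masked'\n_version = 1\n", vars(stats_ma))

make_ma_default(stats, stats_ma)
assert stats.gmean([1, 2]) == 'masked'    # public function replaced
assert not hasattr(stats, '_version')     # private names left alone
```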
If users need this often, there could even be a function, like: def make_ma_as_default: scipy.stats.__dict__.update with stats.ma methods Josef From pgmdevlist at gmail.com Wed Nov 19 08:55:19 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 19 Nov 2008 08:55:19 -0500 Subject: [SciPy-dev] are masked array statistical function hidden intentionally? In-Reply-To: <6a17e9ee0811190338r3adb129dlaa9cd7f919b858b8@mail.gmail.com> References: <1cd32cbb0811181335r66fd6deblb9e174ae3104c82@mail.gmail.com> <2908E30C-D30A-4588-8BC2-62CBFE0BFB25@gmail.com> <1cd32cbb0811181604i7634ed3w1022fb7a21d08249@mail.gmail.com> <6a17e9ee0811190338r3adb129dlaa9cd7f919b858b8@mail.gmail.com> Message-ID: <11BCFFB9-A488-497C-B7AD-4ECF87453E32@gmail.com> On Nov 19, 2008, at 6:38 AM, Scott Sinclair wrote: > 2008/11/19 Pauli Virtanen : >> Tue, 18 Nov 2008 19:04:01 -0500, josef.pktd wrote: >>> On Tue, Nov 18, 2008 at 4:52 PM, Pierre GM >>> wrote: >>>> About the documentation: well, I guess I should take the blame >>>> for not >>>> having written more thorough docstrings. However, I'm not in >>>> charge of >>>> building the whole doc. > >> Pierre -- the masked array documentation in Numpy Reference Guide is >> especially lacking as not all MA functions are listed or the >> functionality explained. I'm not so familiar with MA, so I would >> appreciate help in writing the documentation for this part of Numpy. > > Pierre, if you aren't able to find the time for this, then it might be > most productive (for you) to review the docstrings as I work on them. > That way you're just checking that the documentation doesn't lie or > miss any subtleties. All, I tend to edit the docstrings as I edit the code, and most functions/ methods do have a proper docstring that follows the numpy standard. There's definitely a lack of examples and see alsos, though... I'm quite surprised to see that so many functions are not picked up during the doc build. 
Pauli, could you point me towards the part of the autosummary/autodoc that lists the functions in a module ? Should I edit the docstring of the module to organize the output (using ..autofunction / ..automethod directives ? Is it legit to put sphinx directives/shortcuts in the doc, or are we still trying to ensure compatibility with an extra package? Scott, thanks a lot for your suggestion, it'd be easier indeed for me to review stats.mstats functions docstrings than (re)write them. From pgmdevlist at gmail.com Wed Nov 19 09:09:56 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 19 Nov 2008 09:09:56 -0500 Subject: [SciPy-dev] are masked array statistical function hidden intentionally? In-Reply-To: <1cd32cbb0811190549r4f655fe4lb89cab183d77e9d7@mail.gmail.com> References: <1cd32cbb0811181335r66fd6deblb9e174ae3104c82@mail.gmail.com> <2908E30C-D30A-4588-8BC2-62CBFE0BFB25@gmail.com> <1cd32cbb0811181604i7634ed3w1022fb7a21d08249@mail.gmail.com> <1cd32cbb0811190549r4f655fe4lb89cab183d77e9d7@mail.gmail.com> Message-ID: > > Currently the MA related functions are split over two files, and > users would > have to know which function is in which module. Indeed. I tried to follow the basic stats organization. > > Should we create a namespace stats.ma that imports all of mstats and > mmorestats? > Then stats.ma can be imported and included in stats,__init__.__all__. Sounds a cool idea, but I'd prefer to rename stats.mstats.py as stats.mstats_basic and stats.mmorestats as stats.mstats_extras, and call the namespace stats.mstats (I have a lot of code that use `import stats.mstats as mstats`, and I don't really want to spend time in a few monthstrying to figure why it doesn't work anymore... > > Another question: > > When users want to use masked array version by default, is there a > recipe > to overwrite all corresponding functions in scipy.stats, i.e. > something like > > stats.__dict__.update(stats.ma.__dict__ ... but only for public > function. 
> > If users need this often, there could even be a function, like: > > def make_ma_as_default: > scipy.stats.__dict__.update with stats.ma methods Could be a possibility, but then we need to make sure that the standard and masked versions of the functions have exactly the same syntax and return the same thing if a ndarray is used as input (I suspect it's not always the case...). If we're to create a new namespace, we could introduce an extra layer for the functions that have been checked, while still leaving the possibility to access the non-verified functions. What about something like that: * stats.mstats.__init__ from stats.mstats.basic import * from stats.mstats.extras import * from stats.mstats.basic_unchecked import * from stats.mstats.extras_unchecked import * with `stats.mstats.basic.py` being the new name of `stats.mstats.extras` the new name of `stats.mmorestats.py`.... That way, we don't lose any time nor functionality and at term, we'll have a fully checked module. True, I could have done that earlier, but there wasn't that much interest into these functions so far, so I just coded what I needed and let it at that... From pav at iki.fi Wed Nov 19 10:08:14 2008 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 19 Nov 2008 15:08:14 +0000 (UTC) Subject: [SciPy-dev] are masked array statistical function hidden intentionally? References: <1cd32cbb0811181335r66fd6deblb9e174ae3104c82@mail.gmail.com> <2908E30C-D30A-4588-8BC2-62CBFE0BFB25@gmail.com> <1cd32cbb0811181604i7634ed3w1022fb7a21d08249@mail.gmail.com> <6a17e9ee0811190338r3adb129dlaa9cd7f919b858b8@mail.gmail.com> <11BCFFB9-A488-497C-B7AD-4ECF87453E32@gmail.com> Message-ID: Hi Pierre, Wed, 19 Nov 2008 08:55:19 -0500, Pierre GM wrote: [clip] > I tend to edit the docstrings as I edit the code, and most functions/ > methods do have a proper docstring that follows the numpy standard. > There's definitely a lack of examples and see alsos, though... 
> > I'm quite surprised to see that so many functions are not picked up > during the doc build. This is actually my fault -- I left sorting out the functions under numpy submodules last when I added most other parts of numpy, and I still haven't finished numpy.ma. Also numpy.emath, numpy.rec, numpy.numarray, numpy.oldnumeric, numpy.ctypeslib, numpy.matlib would need work (but are less important than MA). > Pauli, could you point me towards the part of the autosummary/autodoc > that lists the functions in a module ? Should I edit the docstring of > the module to organize the output (using ..autofunction / ..automethod > directives) ? Is it legit to put sphinx directives/shortcuts in the doc, > or are we still trying to ensure compatibility with an extra package? The docs pick up only those docstrings listed in an auto*:: directive in one of the *.rst files. Sphinx stuff will work in the docstrings, but also `numpy.foo` should IIRC generate reference links (this comes from the numpy Sphinx extension). I don't remember if Sphinx markup was discussed when the docstring format was agreed on, but I remember people being worried about making the docstrings more difficult to read on the terminal. If the markup doesn't compromise this, at least I don't see problems with using it. I think a useful way forward could be: 1. Editing numpy-docs/source/routines.ma.rst and adding any missing utility functions inside the autosummary:: directives. Mentioning a function/etc in an autosummary:: directive somewhere in the documentation will instruct Sphinx to generate a separate page for the function docstring, and link the function under that page in the table of contents. (Though at present this doesn't work for directives in module docstrings, only for those in .rst files.) Including the documentation using the other auto*:: directives is OK, but personally I find this a bit distracting.
Numpy's docstrings tend to become very long and detailed, so that a page with more than one on it is difficult to read. 2. Editing numpy-docs/source/arrays.classes.rst and adding any missing reference information (using autosummary::, or autoattribute:: etc.) about the masked array objects there, possibly in the same way as in arrays.ndarray.rst for the base ndarray. Alternatively, split the MA documentation to a separate page, for example arrays.ma.rst. I'm not sure what is the best organization here or if it makes sense to split the MA docs in two places. Except for the fact that autosummary:: directive doesn't fully function inside module docstrings (this can probably be fixed), I believe the main documentation can well be included in the module docstring. Then, one would need to put only a single automodule:: directive in arrays.ma.rst to dump the text there. Pauli From pgmdevlist at gmail.com Wed Nov 19 10:35:22 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 19 Nov 2008 10:35:22 -0500 Subject: [SciPy-dev] are masked array statistical function hidden intentionally? In-Reply-To: References: <1cd32cbb0811181335r66fd6deblb9e174ae3104c82@mail.gmail.com> <2908E30C-D30A-4588-8BC2-62CBFE0BFB25@gmail.com> <1cd32cbb0811181604i7634ed3w1022fb7a21d08249@mail.gmail.com> <6a17e9ee0811190338r3adb129dlaa9cd7f919b858b8@mail.gmail.com> <11BCFFB9-A488-497C-B7AD-4ECF87453E32@gmail.com> Message-ID: <91054A8B-1E5E-496D-8D3F-EA4F4D9B703F@gmail.com> >> > This is actually my fault -- I left sorting out the functions under > numpy > submodules last when I added most other parts of numpy, and I haven't > still finished numpy.ma. Also numpy.emath, numpy.rec, numpy.numarray, > numpy.oldnumeric, numpy.ctypeslib, numpy.matlib would need work (but > are > less important than MA). Pauli, no problem. I agree that there should be at least one specific page for numpy.ma functions/methods, organized by topics. Where should I create it (them) ? 
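For concreteness, an autosummary:: listing of the kind described above (in e.g. routines.ma.rst or arrays.ma.rst) would look roughly like this; the title and function entries here are illustrative, not the actual file contents:

```rst
Masked array operations
=======================

.. currentmodule:: numpy.ma

.. autosummary::
   :toctree: generated/

   average
   filled
   masked_values
```

The `:toctree:` option is what makes Sphinx generate one stub page per listed function, so each docstring gets its own page in the table of contents.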
> Sphinx stuff will work in the docstrings, but also `numpy.foo` should > IIRC generate reference links (this comes from the numpy Sphinx > extension). I don't remember if Sphinx markup was discussed when the > docstring format was agreed on, but I remember people being worried > about > making the docstrings more difficult to read on the terminal. If the > markup doesn't compromise this, at least I don't see problems with > using > it. Mmh, my question was more about links to other functions/methods inside the docstring, using for example :func:, :meth:, :attr: fields... > > I think a useful way forward could be: > > 1. Editing numpy-docs/source/routines.ma.rst and adding any missing > utility functions inside the autosummary:: directives. Can you remind me where I can find numpy-docs ? It's not on the numpy SVN, right ? What's the address of the repository ? Do I have write access to it ? > > Including the documentation using the other auto*:: directives is > OK, > but personally I find this a bit distracting. Numpy's docstrings > tend > to become very long and detailed, so that a page with more than > one on > it is difficult to read. Agreed. I guess I'll find templates on the numpy-docs site, right ? > Alternatively, split the MA documentation to a separate page, for > example arrays.ma.rst. I'm not sure what is the best organization > here or if it makes sense to split the MA docs in two places. Well, there are 2 different aspects: the actual implementation (function docstrings), and some kind of tutorial. The latter may find its place in numpy/docs, actually... From pav at iki.fi Wed Nov 19 11:28:08 2008 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 19 Nov 2008 16:28:08 +0000 (UTC) Subject: [SciPy-dev] are masked array statistical function hidden intentionally?
References: <1cd32cbb0811181335r66fd6deblb9e174ae3104c82@mail.gmail.com> <2908E30C-D30A-4588-8BC2-62CBFE0BFB25@gmail.com> <1cd32cbb0811181604i7634ed3w1022fb7a21d08249@mail.gmail.com> <6a17e9ee0811190338r3adb129dlaa9cd7f919b858b8@mail.gmail.com> <11BCFFB9-A488-497C-B7AD-4ECF87453E32@gmail.com> <91054A8B-1E5E-496D-8D3F-EA4F4D9B703F@gmail.com> Message-ID: Wed, 19 Nov 2008 10:35:22 -0500, Pierre GM wrote: [clip] > I agree that there should be at least one specific > page for numpy.ma functions/methods, organized by topics. Where should I > create it (them) ? I suggest making a new RST file "arrays.ma.rst" under numpy-docs/source. [clip] > Mmh, my question was more about links to other functions/methods inside > the docstring, using for example :func:, :meth:, :attr: fields... I don't know. Do :meth: and :attr: work in Sphinx with the bare method names without adding prefixes, if you haven't nested your method:: and attribute:: in the class:: directive. If yes, then they'll probably just work. >> I think a useful way forward could be: >> >> 1. Editing numpy-docs/source/routines.ma.rst and adding any missing >> utility functions inside the autosummary:: directives. > > Can you remind me where I can find numpy-docs ? It's not on the numpy > SVN, right ? What's the address of the repository ? Do I have write > access to it ? Ah yes, it's currently at http://svn.scipy.org/svn/numpy/numpy-docs/trunk/ It's technically a part of Numpy's SVN repo, so that if you can commit to Numpy, you can commit to the docs. I believe that we'll move the docs under the doc/ dir in the main numpy trunk in the near future, so that they're easier to find and can be tagged etc. at the same time as the code. >> Including the documentation using the other auto*:: directives is >> OK, >> but personally I find this a bit distracting. Numpy's docstrings >> tend >> to become very long and detailed, so that a page with more than >> one on >> it is difficult to read. > > Agreed. 
> I guess I'll find templates on the numpy-docs site, right ? Yes, I believe taking a look at what's there now may help. (They're also linked to from docs.scipy.org in the sidebar.) >> Alternatively, split the MA documentation to a separate page, for >> example arrays.ma.rst. I'm not sure what is the best organization >> here or if it makes sense to split the MA docs in two places. > > Well, there are 2 different aspects: the actual implementation > (function docstrings), and some kind of tutorial. The latter may find > its place in numpy/docs, actually... Well, I think the implementation docs should also reside in numpy/doc; at least the functions should be manually grouped into smaller categories that make sense, so that it is easier to find the correct one if you don't exactly know what you are looking for. At present, I'd suggest putting tutorial material in a separate file in the source/user/ directory so that it goes to the "User Guide". Btw, I'm at present not sure if it makes sense to put the tutorial stuff so far from the reference stuff, so we may need to reorganize this later on. Pauli From pgmdevlist at gmail.com Wed Nov 19 13:32:29 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 19 Nov 2008 13:32:29 -0500 Subject: [SciPy-dev] are masked array statistical function hidden intentionally? In-Reply-To: References: <1cd32cbb0811181335r66fd6deblb9e174ae3104c82@mail.gmail.com> <2908E30C-D30A-4588-8BC2-62CBFE0BFB25@gmail.com> <1cd32cbb0811181604i7634ed3w1022fb7a21d08249@mail.gmail.com> <6a17e9ee0811190338r3adb129dlaa9cd7f919b858b8@mail.gmail.com> <11BCFFB9-A488-497C-B7AD-4ECF87453E32@gmail.com> <91054A8B-1E5E-496D-8D3F-EA4F4D9B703F@gmail.com> Message-ID: <493EB18A-A54C-4BAF-BD2C-E81159C019D5@gmail.com> Pauli, Thx a lot for the info. I should be good to go. I'm currently on the move, so won't be able to post anything on the server before a couple of days. Not that we're in a rush anyway...
On Nov 19, 2008, at 11:28 AM, Pauli Virtanen wrote: > Wed, 19 Nov 2008 10:35:22 -0500, Pierre GM wrote: > [clip] >> I agree that there should be at least one specific >> page for numpy.ma functions/methods, organized by topics. Where >> should I >> create it (them) ? > > I suggest making a new RST file "arrays.ma.rst" under numpy-docs/ > source. > > [clip] >> Mmh, my question was more about links to other functions/methods >> inside >> the docstring, using for example :func:, :meth:, :attr: fields... > > I don't know. Do :meth: and :attr: work in Sphinx with the bare method > names without adding prefixes, if you haven't nested your method:: and > attribute:: in the class:: directive. If yes, then they'll probably > just > work. > >>> I think a useful way forward could be: >>> >>> 1. Editing numpy-docs/source/routines.ma.rst and adding any missing >>> utility functions inside the autosummary:: directives. >> >> Can you remind me where I can find numpy-docs ? It's not on the numpy >> SVN, right ? What's the address of the repository ? Do I have write >> access to it ? > > Ah yes, it's currently at http://svn.scipy.org/svn/numpy/numpy-docs/trunk/ > > It's technically a part of Numpy's SVN repo, so that if you can > commit to > Numpy, you can commit to the docs. I believe that we'll move the docs > under the doc/ dir in the main numpy trunk in the near future, so that > they're easier to find and can be tagged etc. at the same time as the > code. > >>> Including the documentation using the other auto*:: directives is >>> OK, >>> but personally I find this a bit distracting. Numpy's docstrings >>> tend >>> to become very long and detailed, so that a page with more than >>> one on >>> it is difficult to read. >> >> Agreed. I guess I'll find templates on the numpy-docs site, right ? > > Yes, I believe taking a look at what's there now may help. (They're > also > linked to from docs.scipy.org in the sidebar.) 
> >>> Alternatively, split the MA documentation to a separate page, for >>> example arrays.ma.rst. I'm not sure what is the best organization >>> here or if it makes sense to split the MA docs in two places. >> >> Well, there are 2 different aspects: the actual implementation >> (functions docstring), and some kind of tutorial. This latter may >> find >> its place in numpy/docs, actually... > > Well, I think the implementation docs should also reside in numpy/ > doc, at > least the functions should be manually grouped to smaller categories > that > make sense, so that it is easier to find the correct one if you don't > exactly know what you are looking for. > > At present, I'd suggest putting tutorial material to a separate file > in > the source/user/ directory so that it goes to the "User Guide". Btw, > I'm > at present not sure if it makes sense to put the tutorial stuff so far > from the reference stuff, so we may need to reorganize this later on. > > Pauli > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev From nwagner at iam.uni-stuttgart.de Wed Nov 19 14:24:07 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 19 Nov 2008 20:24:07 +0100 Subject: [SciPy-dev] LAPACK 3.2 Message-ID: FWIW Nils -------- Original Message -------- Date: Wed, 19 Nov 2008 06:03:40 -0800 From: James Demmel To: Kreinovich, Vladik , reliable computing CC: James Demmel Subject: Announcement for Reliable Computing I would appreciate it if you could post this on your mailing list. Thanks, Jim Demmel Subject: New LAPACK release with "guaranteed" error bounds We have just released LAPACK 3.2, which among other improvements includes iterative refinement for solving linear systems with "guaranteed" error bounds, measured both normwise and componentwise. Portable high precision arithmetic (using another package we just released called XBLAS) is used to compute residuals. 
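The refinement scheme behind this announcement can be sketched in miniature: solve the system, deliberately round the solution to mimic a low-precision solve, then compute the residual at full precision and apply one correction step. Everything below is an illustration of the idea only, not the LAPACK code; the 2x2 Cramer's-rule solver and 4-digit rounding are stand-ins for LAPACK's solver and XBLAS's extended-precision residual:

```python
def solve2x2(A, b):
    """Solve a 2x2 system by Cramer's rule (stand-in for a LAPACK solve)."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def chop(x, digits=4):
    """Round to `digits` significant digits, mimicking low precision."""
    return float('%.*g' % (digits, x))

A = [[4.0, 1.0], [1.0, 3.0]]
b = [2.0, 1.0]

x_exact = solve2x2(A, b)        # [5/11, 2/11]
x = [chop(v) for v in x_exact]  # "low-precision" solution

# Residual computed at full precision, then one refinement step:
r = [b[i] - (A[i][0] * x[0] + A[i][1] * x[1]) for i in range(2)]
d = solve2x2(A, r)
x_ref = [x[i] + d[i] for i in range(2)]

err_before = max(abs(x[i] - x_exact[i]) for i in range(2))
err_after = max(abs(x_ref[i] - x_exact[i]) for i in range(2))
print(err_after < err_before)  # -> True
```

For a well-conditioned matrix like this one the single correction step recovers nearly full working precision; when the condition number approaches 1/machine_epsilon, refinement stalls, which is exactly the case the new routines flag with a warning.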
What we mean by "guarantee" is that either the error bound is correctly O(machine_epsilon), or a warning is returned that the condition number is ~1/machine_epsilon or larger. In extensive testing, we have never found it to fail, either by returning a too-small error bound, or failing to solve a problem whose condition number is at least a little below 1/machine_epsilon. But if there is a community that can find a failure, it is this one, and we would be very interested if you can find one. Since we are not doing interval arithmetic, we expect to keep "guaranteed" in quotation marks. For more information, please see http://www.netlib.org/lapack/ and http://www.netlib.org/xblas/ http://www.netlib.org/lapack/lapack-3.2.html Regards, Jim Demmel, and the rest of the LAPACK team From josef.pktd at gmail.com Wed Nov 19 15:50:50 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 19 Nov 2008 15:50:50 -0500 Subject: [SciPy-dev] test coverage for each individual functions ? Message-ID: <1cd32cbb0811191250k2eb4d342he3e6b8ecf79b0f3f@mail.gmail.com> I would like to make a list of functions in scipy.stats that have no or low test coverage. Figleaf and coverage seem to work only on the module level. Does anyone know if this is easily possible, or does anyone have a script for this? Thanks, Josef From nwagner at iam.uni-stuttgart.de Thu Nov 20 12:50:06 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 20 Nov 2008 18:50:06 +0100 Subject: [SciPy-dev] scipy.test() error Message-ID: test_decomp.test_eigh('general ', 6, 'd', False, True, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'd', False, False, False, (2, 4)) ... ok test_decomp.test_eigh('general ', 6, 'd', False, False, False, (2, 4)) ... ok test_decomp.test_eigh('ordinary', 6, 'F', True, True, True, None) ...
** On entry to CSTEGR parameter number 17 had an illegal value nwagner at noname:~> >>> scipy.__version__ '0.7.0.dev5151' From hagberg at lanl.gov Thu Nov 20 13:11:11 2008 From: hagberg at lanl.gov (Aric Hagberg) Date: Thu, 20 Nov 2008 11:11:11 -0700 Subject: [SciPy-dev] arpack wrapper status In-Reply-To: <3d375d730811141618w416e9de3nbff315facff0b7fe@mail.gmail.com> References: <20081111153633.GM12595@bigjim2.lanl.gov> <85b5c3130811120423q64c66ff9xac08b629722221e7@mail.gmail.com> <491AC935.1040105@ar.media.kyoto-u.ac.jp> <85b5c3130811120451ld8437abj4f9592d45b1590a1@mail.gmail.com> <85b5c3130811121004g3934af38r5f799422a61bd49c@mail.gmail.com> <20081114230715.GD24599@frappa.lanl.gov> <3d375d730811141618w416e9de3nbff315facff0b7fe@mail.gmail.com> Message-ID: <20081120181111.GA7023@frappa.lanl.gov> On Fri, Nov 14, 2008 at 06:18:13PM -0600, Robert Kern wrote: > > > > I exchanged email with Professor Sorensen and explained the issue > > with the current ARPACK license. > > > > He is definitely concerned about this and committed to do the work to > > relicense. This will be no small amount of work for him so I want to make > > sure we get it right. > > > > His intent is that ARPACK should be able to be used everywhere LAPACK > > is used and suggested licensing it that way. My understanding is that > > the current LAPACK license is exactly the 3-clause BSD ("modified > > BSD") so that is what I suggested. T > > > > Please let me know if there is anything I am missing here. > > Thanks for taking charge with this. This is good news. Yes, the LAPACK > license is the standard 3-clause BSD license acceptable for scipy > (it's almost, if not entirely identical to scipy's). A copy is here: ARPACK now has a new 3-clause BSD license http://www.caam.rice.edu/software/ARPACK/RiceBSD.txt I've cc'ed the Debian ARPACK maintainer to make sure they are aware of the change as well. Many thanks to Professor Sorensen for making this happen! 
Aric From nwagner at iam.uni-stuttgart.de Thu Nov 20 13:16:44 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 20 Nov 2008 19:16:44 +0100 Subject: [SciPy-dev] make html in scipy/doc Message-ID: Hi all, make html doesn't work for me with latest svn: stats.uniform generated/scipy.stats.var generated/scipy.stats.variation generated/scipy.stats.wald generated/scipy.stats.weibull_max generated/scipy.stats.weibull_min generated/scipy.stats.wilcoxon generated/scipy.stats.wrapcauchy generated/scipy.stats.z generated/scipy.stats.zipf generated/scipy.stats.zmap generated/scipy.stats.zs index integrate interpolate io linalg maxentropy misc ndimage odr optimize signal sparse sparse.linalg spatial spatial.distance Math extension error: latex exited with error: [stderr] [stdout] This is pdfeTeX, Version 3.141592-1.21a-2.2 (Web2C 7.5.4) entering extended mode (/tmp/tmpJTO7je/math.tex LaTeX2e <2003/12/01> Babel and hyphenation patterns for american, french, german, ngerman, b ahasa, basque, bulgarian, catalan, croatian, czech, danish, dutch, esperanto, e stonian, finnish, greek, icelandic, irish, italian, latin, magyar, norsk, polis h, portuges, romanian, russian, serbian, slovak, slovene, spanish, swedish, tur kish, ukrainian, nohyphenation, loaded. (/usr/share/texmf/tex/latex/base/article.cls Document Class: article 2004/02/16 v1.4f Standard LaTeX document class (/usr/share/texmf/tex/latex/base/size12.clo)) (/usr/share/texmf/tex/latex/base/inputenc.sty (/usr/share/texmf/tex/latex/base/utf8.def (/usr/share/texmf/tex/latex/base/t1enc.dfu) (/usr/share/texmf/tex/latex/base/ot1enc.dfu) (/usr/share/texmf/tex/latex/base/omsenc.dfu))) (/usr/share/texmf/tex/latex/amsmath/amsmath.sty For additional information on amsmath, use the `?' option. 
(/usr/share/texmf/tex/latex/amsmath/amstext.sty (/usr/share/texmf/tex/latex/amsmath/amsgen.sty)) (/usr/share/texmf/tex/latex/amsmath/amsbsy.sty) (/usr/share/texmf/tex/latex/amsmath/amsopn.sty)) (/usr/share/texmf/tex/latex/amscls/amsthm.sty) (/usr/share/texmf/tex/latex/amsfonts/amssymb.sty (/usr/share/texmf/tex/latex/amsfonts/amsfonts.sty)) (/usr/share/texmf/tex/latex/tools/bm.sty) (/usr/share/texmf/tex/latex/preview/preview.sty No auxiliary output files. ) No file math.aux. Preview: Fontsize 12pt (/usr/share/texmf/tex/latex/amsfonts/umsa.fd) (/usr/share/texmf/tex/latex/amsfonts/umsb.fd) ! Missing } inserted. } l.16 \end{gather} ! Missing { inserted. { l.16 \end{gather} ! Missing } inserted. } l.16 \end{gather} ! Missing { inserted. { l.16 \end{gather} [1] ) (see the transcript file for additional information) Output written on /tmp/tmpJTO7je/math.dvi (1 page, 560 bytes). Transcript written on /tmp/tmpJTO7je/math.log. make: *** [html] Error 1 From millman at berkeley.edu Thu Nov 20 13:27:15 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Thu, 20 Nov 2008 10:27:15 -0800 Subject: [SciPy-dev] arpack wrapper status In-Reply-To: <20081120181111.GA7023@frappa.lanl.gov> References: <20081111153633.GM12595@bigjim2.lanl.gov> <85b5c3130811120423q64c66ff9xac08b629722221e7@mail.gmail.com> <491AC935.1040105@ar.media.kyoto-u.ac.jp> <85b5c3130811120451ld8437abj4f9592d45b1590a1@mail.gmail.com> <85b5c3130811121004g3934af38r5f799422a61bd49c@mail.gmail.com> <20081114230715.GD24599@frappa.lanl.gov> <3d375d730811141618w416e9de3nbff315facff0b7fe@mail.gmail.com> <20081120181111.GA7023@frappa.lanl.gov> Message-ID: On Thu, Nov 20, 2008 at 10:11 AM, Aric Hagberg wrote: > ARPACK now has a new 3-clause BSD license > http://www.caam.rice.edu/software/ARPACK/RiceBSD.txt > > Many thanks to Professor Sorensen for making this happen! Excellent. 
Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From nwagner at iam.uni-stuttgart.de Thu Nov 20 13:52:07 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 20 Nov 2008 19:52:07 +0100 Subject: [SciPy-dev] LAPACK 3.2 In-Reply-To: References: Message-ID: Hi all, Just curious. Has anyone tested LAPACK 3.2 in connection with numpy/scipy ? LAPACK-3.2 now requires a FORTRAN 90 compiler. Which Fortran compiler is used by default when building numpy/scipy ? Which Fortran compiler is recommended (gfortran or g95) ? Nils From pav at iki.fi Thu Nov 20 14:24:47 2008 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 20 Nov 2008 19:24:47 +0000 (UTC) Subject: [SciPy-dev] make html in scipy/doc References: Message-ID: Thu, 20 Nov 2008 19:16:44 +0100, Nils Wagner wrote: > > make html doesn't work for me with latest svn: [clip] Bad Latex in spatial.distance docstrings; fixed. I'll try to get a patch in Sphinx that makes Latex errors in math non-fatal. -- Pauli Virtanen From gael.varoquaux at normalesup.org Thu Nov 20 14:41:37 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 20 Nov 2008 20:41:37 +0100 Subject: [SciPy-dev] arpack wrapper status In-Reply-To: <20081120181111.GA7023@frappa.lanl.gov> References: <85b5c3130811120423q64c66ff9xac08b629722221e7@mail.gmail.com> <491AC935.1040105@ar.media.kyoto-u.ac.jp> <85b5c3130811120451ld8437abj4f9592d45b1590a1@mail.gmail.com> <85b5c3130811121004g3934af38r5f799422a61bd49c@mail.gmail.com> <20081114230715.GD24599@frappa.lanl.gov> <3d375d730811141618w416e9de3nbff315facff0b7fe@mail.gmail.com> <20081120181111.GA7023@frappa.lanl.gov> Message-ID: <20081120194137.GB4715@phare.normalesup.org> On Thu, Nov 20, 2008 at 11:11:11AM -0700, Aric Hagberg wrote: > Many thanks to Professor Sorensen for making this happen! Yes, that's a very big deal. Thanks to you too for your efforts.
Gaël From scott.sinclair.za at gmail.com Fri Nov 21 05:00:47 2008 From: scott.sinclair.za at gmail.com (Scott Sinclair) Date: Fri, 21 Nov 2008 12:00:47 +0200 Subject: [SciPy-dev] are masked array statistical function hidden intentionally? In-Reply-To: <493EB18A-A54C-4BAF-BD2C-E81159C019D5@gmail.com> References: <1cd32cbb0811181335r66fd6deblb9e174ae3104c82@mail.gmail.com> <2908E30C-D30A-4588-8BC2-62CBFE0BFB25@gmail.com> <1cd32cbb0811181604i7634ed3w1022fb7a21d08249@mail.gmail.com> <6a17e9ee0811190338r3adb129dlaa9cd7f919b858b8@mail.gmail.com> <11BCFFB9-A488-497C-B7AD-4ECF87453E32@gmail.com> <91054A8B-1E5E-496D-8D3F-EA4F4D9B703F@gmail.com> <493EB18A-A54C-4BAF-BD2C-E81159C019D5@gmail.com> Message-ID: <6a17e9ee0811210200qe9f3ab7mf0e0061ff1220a81@mail.gmail.com> 2008/11/19 Pierre GM : > Pauli, > Thx a lot for the info. I should be good to go. I'm currently on the > move, so won't be able to post anything on the server before a couple > of days. Not that we're in a rush anyway... I'm attaching two patches against the numpy-docs trunk that may be useful. The first adds a warning (too strong?) that the masked array sections of the Numpy reference guide are not feature complete and gives a pointer to the Doc App where all of the functions are visible. The second adds some of the missing ma functions to the Numpy reference guide. Use them as you see fit. Cheers, Scott -------------- next part -------------- A non-text attachment was scrubbed... Name: ma.warning.diff Type: text/x-patch Size: 1258 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: ma.add_routines.diff Type: text/x-patch Size: 781 bytes Desc: not available URL: From cournape at gmail.com Fri Nov 21 05:26:53 2008 From: cournape at gmail.com (David Cournapeau) Date: Fri, 21 Nov 2008 19:26:53 +0900 Subject: [SciPy-dev] arpack wrapper status In-Reply-To: <20081120181111.GA7023@frappa.lanl.gov> References: <20081111153633.GM12595@bigjim2.lanl.gov> <85b5c3130811120423q64c66ff9xac08b629722221e7@mail.gmail.com> <491AC935.1040105@ar.media.kyoto-u.ac.jp> <85b5c3130811120451ld8437abj4f9592d45b1590a1@mail.gmail.com> <85b5c3130811121004g3934af38r5f799422a61bd49c@mail.gmail.com> <20081114230715.GD24599@frappa.lanl.gov> <3d375d730811141618w416e9de3nbff315facff0b7fe@mail.gmail.com> <20081120181111.GA7023@frappa.lanl.gov> Message-ID: <5b8d13220811210226q993f34bh980015c45aeb90c2@mail.gmail.com> On Fri, Nov 21, 2008 at 3:11 AM, Aric Hagberg wrote: > > I've cc'ed the Debian ARPACK maintainer to make sure they are aware of > the change as well. > > Many thanks to Professor Sorensen for making this happen! Great. I restored the arpack module, and updated the license, David From benny.malengier at gmail.com Fri Nov 21 06:09:58 2008 From: benny.malengier at gmail.com (Benny Malengier) Date: Fri, 21 Nov 2008 12:09:58 +0100 Subject: [SciPy-dev] close tickets, add doc for odes Message-ID: Hi, can somebody close: http://projects.scipy.org/scipy/scipy/ticket/615 http://www.scipy.org/scipy/scipy/ticket/730 They are part of the new odes scikit. I have now put documentation on my site (http://cage.ugent.be/~bm/progs.html) but would like to update http://scipy.org/scipy/scikits/ I have no edit button on that wiki page though, although the text says I should have one. I suppose the page is protected. Can somebody add an entry for odes or give me permission? I will then add more on a separate wiki page. My login on the wiki is user 'bmcage' From my point of view, the PR around scikits could indeed use some improvement/streamlining.
Benny -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Fri Nov 21 09:03:09 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 21 Nov 2008 15:03:09 +0100 Subject: [SciPy-dev] Possibly bug in savetxt Message-ID: Hi all, The colon is missing in the output file when I use savetxt('dsvnode_nw.dat',A,fmt='%6i %6i', delimiter=':') to store A. >>> shape(A) (5760, 2) >>> type(A) Am I missing something ? Nils From scott.sinclair.za at gmail.com Fri Nov 21 09:38:23 2008 From: scott.sinclair.za at gmail.com (Scott Sinclair) Date: Fri, 21 Nov 2008 16:38:23 +0200 Subject: [SciPy-dev] Possibly bug in savetxt In-Reply-To: References: Message-ID: <6a17e9ee0811210638h77b927bbtde687af902a14a3@mail.gmail.com> 2008/11/21 Nils Wagner : > The colon is missing in the output file when I use > > savetxt('dsvnode_nw.dat',A,fmt='%6i %6i', delimiter=':') > > to store A. > >>>> shape(A) > (5760, 2) >>>> type(A) > I think the fmt string overrides the delimiter when multiple formats are specified. Try savetxt('dsvnode_nw.dat', A, fmt='%6i:%6i') Cheers, Scott From nwagner at iam.uni-stuttgart.de Fri Nov 21 12:18:15 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 21 Nov 2008 18:18:15 +0100 Subject: [SciPy-dev] make html in scipy/doc In-Reply-To: References: Message-ID: On Thu, 20 Nov 2008 19:24:47 +0000 (UTC) Pauli Virtanen wrote: > Thu, 20 Nov 2008 19:16:44 +0100, Nils Wagner wrote: >> >> make html doesn't work for me with latest svn: > [clip] > > Bad Latex in spatial.distance docstrings; fixed. > > I'll try to get a patch in Sphinx that makes Latex errors in math > non-fatal. > > -- > Pauli Virtanen Hi Pauli, here comes the next issue: copying static files... done dumping search index...
Exception occurred: File "/usr/local/lib64/python2.5/site-packages/Sphinx-0.5dev_20081008-py2.5.egg/sphinx/search.py", line 151, in get_descrefs pdict[name] = (fn2index[doc], i) KeyError: 'generated/scipy.linalg.cg' The full traceback has been saved in /tmp/sphinx-err-wGd8VJ.log, if you want to report the issue to the author. Please also report this if it was a user error, so that a better error message can be provided next time. Send reports to sphinx-dev at googlegroups.com. Thanks! make: *** [html] Error 1 Cheers, Nils From pav at iki.fi Fri Nov 21 12:26:02 2008 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 21 Nov 2008 17:26:02 +0000 (UTC) Subject: [SciPy-dev] make html in scipy/doc References: Message-ID: Fri, 21 Nov 2008 18:18:15 +0100, Nils Wagner wrote: [clip] > here comes the next issue > > copying static files... done > dumping search index... Exception occurred: > File > "/usr/local/lib64/python2.5/site-packages/Sphinx-0.5dev_20081008- py2.5.egg/sphinx/search.py", > line 151, in get_descrefs > pdict[name] = (fn2index[doc], i) > KeyError: 'generated/scipy.linalg.cg' The full traceback has been saved > in > /tmp/sphinx-err-wGd8VJ.log, if you want to report the issue to the > author. > Please also report this if it was a user error, so that a better error > message can be provided next time. Send reports to > sphinx-dev at googlegroups.com. Thanks! make: *** [html] Error 1 Removing build/ directory and trying again should help. This bug seems to occasionally appear -- don't know if it's in Sphinx (more probable) or in the Numpy extensions (less probable). 
Pauli From nwagner at iam.uni-stuttgart.de Fri Nov 21 12:42:11 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 21 Nov 2008 18:42:11 +0100 Subject: [SciPy-dev] make html in scipy/doc In-Reply-To: References: Message-ID: On Fri, 21 Nov 2008 17:26:02 +0000 (UTC) Pauli Virtanen wrote: >Fri, 21 Nov 2008 18:18:15 +0100, Nils Wagner wrote: > [clip] >> here comes the next issue >> >> copying static files... done >> dumping search index... Exception occurred: >> File >> "/usr/local/lib64/python2.5/site-packages/Sphinx-0.5dev_20081008- > py2.5.egg/sphinx/search.py", >> line 151, in get_descrefs >> pdict[name] = (fn2index[doc], i) >> KeyError: 'generated/scipy.linalg.cg' The full traceback >>has been saved >> in >> /tmp/sphinx-err-wGd8VJ.log, if you want to report the >>issue to the >> author. >> Please also report this if it was a user error, so that >>a better error >> message can be provided next time. Send reports to >> sphinx-dev at googlegroups.com. Thanks! make: *** [html] >>Error 1 > > Removing build/ directory and trying again should help. >This bug seems to > occasionally appear -- don't know if it's in Sphinx >(more probable) or in > the Numpy extensions (less probable). > > Pauli Works fine for me. Thank you very much. Nils From pav at iki.fi Fri Nov 21 16:12:39 2008 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 21 Nov 2008 21:12:39 +0000 (UTC) Subject: [SciPy-dev] Scipy Trac broken? 
Message-ID: Hi, When logging in or trying to view Scipy roadmap in Scipy's Trac, I get the error: Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/trac/web/main.py", line 387, in dispatch_request dispatcher.dispatch(req) File "/usr/lib/python2.4/site-packages/trac/web/main.py", line 244, in dispatch req.session.save() File "/usr/lib/python2.4/site-packages/trac/web/session.py", line 206, in save (mintime,)) File "/usr/lib/python2.4/site-packages/trac/db/util.py", line 50, in execute return self.cursor.execute(sql_escape_percent(sql), args) File "/usr/src/build/539311-i386/install//usr/lib/python2.4/site- packages/sqlite/main.py", line 255, in execute DatabaseError: database disk image is malformed -- Pauli Virtanen From robert.kern at gmail.com Fri Nov 21 16:18:42 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 21 Nov 2008 15:18:42 -0600 Subject: [SciPy-dev] Scipy Trac broken? In-Reply-To: References: Message-ID: <3d375d730811211318yee0d8dcp6aeffde53fd49bdb@mail.gmail.com> On Fri, Nov 21, 2008 at 15:12, Pauli Virtanen wrote: > Hi, > > When logging in or trying to view Scipy roadmap in Scipy's Trac, I get > the error: Working on it. Thank you for the report. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From pwang at enthought.com Fri Nov 21 16:45:22 2008 From: pwang at enthought.com (Peter Wang) Date: Fri, 21 Nov 2008 15:45:22 -0600 Subject: [SciPy-dev] Scipy Trac broken? 
In-Reply-To: <3d375d730811211318yee0d8dcp6aeffde53fd49bdb@mail.gmail.com> References: <3d375d730811211318yee0d8dcp6aeffde53fd49bdb@mail.gmail.com> Message-ID: <8E16B797-1DFB-4B5B-BDAF-4B70CD2E6DA1@enthought.com> On Nov 21, 2008, at 3:18 PM, Robert Kern wrote: > On Fri, Nov 21, 2008 at 15:12, Pauli Virtanen wrote: >> Hi, >> >> When logging in or trying to view Scipy roadmap in Scipy's Trac, I >> get >> the error: > > Working on it. Thank you for the report. OK, I have moved the broken sqlite database out of the way and put a "fixed" database in its place. The fixed DB file is 300kb smaller than the bad file, and I'm not sure what, if anything, was lost. It looks like we have the most recent ticket change in the fixed DB, but I'm not sure if we have all of the most recent wiki edits. If you edited the wiki within the last couple of hours, please just double-check that your changes are present. Thanks for pointing this out, and please let me know if you see other weirdness. Thanks, Peter From pav at iki.fi Fri Nov 21 18:00:57 2008 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 21 Nov 2008 23:00:57 +0000 (UTC) Subject: [SciPy-dev] nearest neighbour interpolation References: Message-ID: Mon, 17 Nov 2008 20:23:36 +0000, Pauli Virtanen wrote: > Mon, 17 Nov 2008 18:42:03 +0000, Robin wrote: >> I just got bitten by this bug: >> http://www.scipy.org/scipy/scipy/ticket/773 >> >> It is quite nasty I think (I lost a lot of time...) and could be fixed >> easily just by changing the documentation. (At least so people don't >> loose so much time). >> >> My wiki username for the documentation is robince, so if I am enabled >> for write access I could make this change. >> >> Is there any way to get nearest neighbour interpolation in scipy? This >> bug looks related: >> http://www.scipy.org/scipy/scipy/ticket/305 Perhaps this could be >> reopened?
> > I have a clean and bug-fixed implementation from #305 here: > > http://github.com/pv/scipy/commit/777d59eb6498b73a1c018600b2c11b42ec410eb6 > http://github.com/pv/scipy/commit/20ee8bdb07d6629ebe16cf850d8c34b80ce6b0b9 > > Shall I commit? No objections, so committed. -- Pauli Virtanen From david at ar.media.kyoto-u.ac.jp Fri Nov 21 19:44:52 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 22 Nov 2008 09:44:52 +0900 Subject: [SciPy-dev] Reverting eigh code ? Message-ID: <49275604.5040207@ar.media.kyoto-u.ac.jp> Hi, A few days ago, new code for the eigh decomposition was added, and it fails to run correctly. Since we are days - even hours - away from the beta, and I am a bit tired of looking at fortran problems, unless someone else solves it, I would like to set the changes aside for 0.7. If nobody complains within the beta time, I will remove it myself, David From josef.pktd at gmail.com Fri Nov 21 22:29:17 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 21 Nov 2008 22:29:17 -0500 Subject: [SciPy-dev] subversion commit policy for rename files question Message-ID: <1cd32cbb0811211929wb218e01k230771260fea4d0a@mail.gmail.com> I want to do the renaming and importing in __all__ discussed here: http://projects.scipy.org/pipermail/scipy-dev/2008-November/010241.html For this I had to resolve some circular imports and add some missing functions to __all__. Is there a policy whether renames should be committed separately or can it be together with changes in the file, or it doesn't matter? BTW: Is there an opinion about percentileofscore?
Josef From robert.kern at gmail.com Fri Nov 21 23:06:49 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 21 Nov 2008 22:06:49 -0600 Subject: [SciPy-dev] subversion commit policy for rename files question In-Reply-To: <1cd32cbb0811211929wb218e01k230771260fea4d0a@mail.gmail.com> References: <1cd32cbb0811211929wb218e01k230771260fea4d0a@mail.gmail.com> Message-ID: <3d375d730811212006l1214cf76k9c5d56e11aca678d@mail.gmail.com> On Fri, Nov 21, 2008 at 21:29, wrote: > I want to do the renaming and importing in __all__ discussed here: > http://projects.scipy.org/pipermail/scipy-dev/2008-November/010241.html > For this I had to resolve some circular imports and add some missing > functions to __all__. > > Is there a policy whether renames should be committed separately or > can it be together with changes in the file, > or it doesn't matter? Probably doesn't matter. > BTW: Is there an opinion about percentileofscore? You have mine. I prefer strings instead of a weight parameter. AFAICT, there are only three values one would ever use as a "weight", 0, 1, and 0.5. I don't think anyone else thinks about it in terms of a weight, so it would be a weird concept to introduce. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From millman at berkeley.edu Fri Nov 21 23:27:45 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Fri, 21 Nov 2008 20:27:45 -0800 Subject: [SciPy-dev] Reverting eigh code ? In-Reply-To: <49275604.5040207@ar.media.kyoto-u.ac.jp> References: <49275604.5040207@ar.media.kyoto-u.ac.jp> Message-ID: On Fri, Nov 21, 2008 at 4:44 PM, David Cournapeau wrote: > If nobody complains within the beta time, I will remove it myself, Let's give it 24 hours before removing it. 
I want to take a look before making a judgment; but, in principle, I would be OK with some *minor* rough edges in the first beta if someone is willing to commit to fixing them before the release candidate. But the beta should be feature complete, so we need to decide whether to remove the eigh code before we tag the release. Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From david at ar.media.kyoto-u.ac.jp Fri Nov 21 23:13:14 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 22 Nov 2008 13:13:14 +0900 Subject: [SciPy-dev] subversion commit policy for rename files question In-Reply-To: <1cd32cbb0811211929wb218e01k230771260fea4d0a@mail.gmail.com> References: <1cd32cbb0811211929wb218e01k230771260fea4d0a@mail.gmail.com> Message-ID: <492786DA.9080508@ar.media.kyoto-u.ac.jp> josef.pktd at gmail.com wrote: > I want to do the renaming and importing in __all__ discussed here: > http://projects.scipy.org/pipermail/scipy-dev/2008-November/010241.html > For this I had to resolve some circular imports and add some missing > functions to __all__. > > Is there a policy whether renames should be committed separately or > can it be together with changes in the file, > or it doesn't matter? > Independently of subversion, when you rename files and fix imports, just keep in mind to remove any previously installed version when you are testing (to avoid the risk of having both the previous and the current file names installed).
David From robert.kern at gmail.com Fri Nov 21 23:30:53 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 21 Nov 2008 22:30:53 -0600 Subject: [SciPy-dev] subversion commit policy for rename files question In-Reply-To: <1cd32cbb0811211929wb218e01k230771260fea4d0a@mail.gmail.com> References: <1cd32cbb0811211929wb218e01k230771260fea4d0a@mail.gmail.com> Message-ID: <3d375d730811212030o9e6e76bx57e97511296eb1e2@mail.gmail.com> On Fri, Nov 21, 2008 at 21:29, wrote: > I want to do the renaming and importing in __all__ discussed here: > http://projects.scipy.org/pipermail/scipy-dev/2008-November/010241.html > For this I had to resolve some circular imports and add some missing > functions to __all__. > > Is there a policy whether renames should be committed separately or > can it be together with changes in the file, > or it doesn't matter? I take my "doesn't matter" back. Yes, please do file renames and internal modifications separately. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at ar.media.kyoto-u.ac.jp Fri Nov 21 23:18:25 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 22 Nov 2008 13:18:25 +0900 Subject: [SciPy-dev] Reverting eigh code ? In-Reply-To: References: <49275604.5040207@ar.media.kyoto-u.ac.jp> Message-ID: <49278811.90903@ar.media.kyoto-u.ac.jp> Jarrod Millman wrote: > On Fri, Nov 21, 2008 at 4:44 PM, David Cournapeau > wrote: > >> If nobody complains within the beta time, I will remove it myself, >> > > Let's give it 24 hours before removing it. I want to take a look > before making a judgment; but, in principle, I would be OK with some > *minor* rough edges in the first beta if someone is willing to commit > to fixing them before the release candidate. 
But the beta should be > feature complete so we need to decide whether to remove the eigh code > before we tag the release. > I agree with the above principle in general, but in that case: - it is a fortran issue, and worse, it happens in calls to external code (BLAS/LAPACK) - the feature was added a few days ago (hence nobody really tested it, and nobody really depends on it either) - in that precise case, it is hard to know if it is minor or not (the problem seems to depend on the LAPACK version; I can't reproduce it on every machine I have at hand). So if the feature is known to break for the beta, it means we will need at least two betas, which I would rather avoid just for that issue, David From josef.pktd at gmail.com Sat Nov 22 00:10:03 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 22 Nov 2008 00:10:03 -0500 Subject: [SciPy-dev] subversion commit policy for rename files question In-Reply-To: <3d375d730811212030o9e6e76bx57e97511296eb1e2@mail.gmail.com> References: <1cd32cbb0811211929wb218e01k230771260fea4d0a@mail.gmail.com> <3d375d730811212030o9e6e76bx57e97511296eb1e2@mail.gmail.com> Message-ID: <1cd32cbb0811212110h5a565cfax47dc05cdce6708fd@mail.gmail.com> On Fri, Nov 21, 2008 at 11:30 PM, Robert Kern wrote: > On Fri, Nov 21, 2008 at 21:29, wrote: >> I want to do the renaming and importing in __all__ discussed here: >> http://projects.scipy.org/pipermail/scipy-dev/2008-November/010241.html >> For this I had to resolve some circular imports and add some missing >> functions to __all__. >> >> Is there a policy whether renames should be committed separately or >> can it be together with changes in the file, >> or it doesn't matter? > > I take my "doesn't matter" back. Yes, please do file renames and > internal modifications separately. > > -- > Robert Kern Thanks, I will do it in several steps. All tests pass (after making sure that no old stuff is lying around), but not every function is tested.
Also np.lookfor picks it up Robert, given our previous discussion, and the wikipedia definition of percentileofscore, I don't see any reason not to do a very simple implementation. Initially, I thought the proposed implementation can be vectorized, but I don't see how. Without vectorization, this version looks much simpler and, I guess, should be about as fast:

import numpy as np

def percentileofscore(a, score, kind = 'mean'):
    a = np.array(a)
    n = len(a)
    if kind == 'strict':
        return sum(a<score) / float(n) * 100
    elif kind == 'weak':
        return sum(a<=score) / float(n) * 100
    elif kind == 'mean':
        return (sum(a<score) + sum(a<=score)) / float(n) * 50
    else:
        raise NotImplementedError

If you think this is ok, I put it in svn, I'm not sure whether to call the type, "kind", doctest pass the same as previous version Josef
From robert.kern at gmail.com (Robert Kern) Subject: Re: [SciPy-dev] subversion commit policy for rename files question In-Reply-To: <1cd32cbb0811212110h5a565cfax47dc05cdce6708fd@mail.gmail.com> References: <1cd32cbb0811211929wb218e01k230771260fea4d0a@mail.gmail.com> <3d375d730811212030o9e6e76bx57e97511296eb1e2@mail.gmail.com> <1cd32cbb0811212110h5a565cfax47dc05cdce6708fd@mail.gmail.com> Message-ID: <3d375d730811212125n734e1801j15c320e1f7c2dd6b@mail.gmail.com> On Fri, Nov 21, 2008 at 23:10, wrote: > On Fri, Nov 21, 2008 at 11:30 PM, Robert Kern wrote: >> On Fri, Nov 21, 2008 at 21:29, wrote: >>> I want to do the renaming and importing in __all__ discussed here: >>> http://projects.scipy.org/pipermail/scipy-dev/2008-November/010241.html >>> For this I had to resolve some circular imports and add some missing >>> functions to __all__. >>> >>> Is there a policy whether renames should be committed separately or >>> can it be together with changes in the file, >>> or it doesn't matter? >> >> I take my "doesn't matter" back. Yes, please do file renames and >> internal modifications separately. >> >> -- >> Robert Kern > > > Thanks, I will do it in several steps. > All tests pass (after making sure that no old stuff is lying around), > but not every function is tested. > Also np.lookfor picks it up > > > Robert, > given our previous discussion, and the wikipedia definition of > percentileofscore, I don't see any reason not to do a very simple > implementation. > Initially, I thought the proposed implementation can be vectorized, > but I don't see how.
Without vectorization, this version looks much > simpler and, I guess, should be about as fast: > > import numpy as np > > def percentileofscore(a, score, kind = 'mean' ): > a=np.array(a) > n = len(a) > if kind == 'strict': > return sum(a<score) / float(n) * 100 > elif kind == 'weak': > return sum(a<=score) / float(n) * 100 > elif kind == 'mean': > return (sum(a<score) + sum(a<=score)) / float(n) * 50 > else: > raise NotImplementedError > > If you think this is ok, I put it in svn, I'm not sure whether to call > the type, "kind", doctest pass the same as previous version I'd raise a ValueError with a message stating that 'strict', 'weak', and 'mean' are the only correct values, but otherwise, that looks fine. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From josef.pktd at gmail.com Sat Nov 22 03:01:35 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 22 Nov 2008 03:01:35 -0500 Subject: [SciPy-dev] percentileofscore in svn Message-ID: <1cd32cbb0811220001o3379141alcd422e4a182c2fab@mail.gmail.com> To ariel.rokem (I don't know your email address) You beat me to correcting percentileofscore; however, your version still has a bias for multiple ties. >>> percentileofscorein([1,1,1,1],1) # yours 62.5 >>> percentileofscore([1,1,1,1],1) # wikipedia 50.0 >>> percentileofscore([1,1,1,1,1,2,2,2,2,2],1) # wikipedia 25.0 >>> percentileofscorein([1,1,1,1,1,2,2,2,2,2],1) # yours 30.0 >>> percentileofscorein([1,1,1],1) # yours 66.666666666666657 >>> percentileofscore([1,1,1],1) # wikipedia 50.0 >>> percentileofscorein([1,1,1,1,1,2,2,2,2,2],2) # yours 80.0 >>> percentileofscore([1,1,1,1,1,2,2,2,2,2],2) # wikipedia 75.0 Since we were discussing definitions, do you have a definition of your method?
I'm not too convinced of the wikipedia version, but it looks like a consistent definition. I renamed my version temporarily to percentileofscore2 Josef From opossumnano at gmail.com Sat Nov 22 04:51:28 2008 From: opossumnano at gmail.com (Tiziano Zito) Date: Sat, 22 Nov 2008 10:51:28 +0100 Subject: [SciPy-dev] Reverting eigh code ? In-Reply-To: <49278811.90903@ar.media.kyoto-u.ac.jp> References: <49275604.5040207@ar.media.kyoto-u.ac.jp> <49278811.90903@ar.media.kyoto-u.ac.jp> Message-ID: I have added the code for eigh. The corresponding fortran wrappers have been running without any problems for six years in symeig. symeig has been downloaded more than 1000 times since its first appearance, and the MDP package (more than 11000 downloads) depends on it. No bug has been reported regarding symeig in six years. I feel responsible for the eigh code, so if someone cares to submit a proper bug report, or a link to a scipy ticket, I would be glad to also have a look at it. It may be an issue with the proper "lwork" size if using a LAPACK+ATLAS compiled on a different machine than the one where the code is running. Before submitting the code, I tested it on a Debian lenny + system lapack and atlas, on a Windows XP with manually compiled ATLAS 3.8 and LAPACK 3.1, and on a Mac OS X with the Enthought Python distribution. Is there a buildbot that we can use for such cases? tiziano On Sat, Nov 22, 2008 at 5:18 AM, David Cournapeau wrote: > Jarrod Millman wrote: >> On Fri, Nov 21, 2008 at 4:44 PM, David Cournapeau >> wrote: >> >>> If nobody complains within the beta time, I will remove it myself, >>> >> >> Let's give it 24 hours before removing it. I want to take a look >> before making a judgment; but, in principle, I would be OK with some >> *minor* rough edges in the first beta if someone is willing to commit >> to fixing them before the release candidate. But the beta should be >> feature complete so we need to decide whether to remove the eigh code >> before we tag the release.
>> > > I agree with the above principle in general but in that case: > - it is a fortran issue, and worse happens in call to external code > (BLAS/LAPACK) > - the feature was added a few days ago (hence nobody really tested > it, and nobody really depends on it either) > - in that precise case, it is hard to know if it is minor or not > (the problem seems to depend on the LAPACK version; I can't reproduce it > on every machine I have at hand). > > So if the feature is known to break for the beta, it means we will need > at least two beta. Which I would rather avoid just for that issue, > > David > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From robert.kern at gmail.com Sat Nov 22 04:57:55 2008 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 22 Nov 2008 03:57:55 -0600 Subject: [SciPy-dev] Reverting eigh code ? In-Reply-To: References: <49275604.5040207@ar.media.kyoto-u.ac.jp> <49278811.90903@ar.media.kyoto-u.ac.jp> Message-ID: <3d375d730811220157t43d7015fpefd4ebf7d804e993@mail.gmail.com> On Sat, Nov 22, 2008 at 03:51, Tiziano Zito wrote: > I have added the code for eigh. The corresponding fortran wrappers > have been running without any problems since 6 years in symeig. symeig > has been dowloaded more than 1000 times since its first appearence and > the MDP package (more than 11000 dowloads) depends on it. No bug has > been reported regarding symeig in 6 years. > I feel responsible for the eigh code, so if someone cares to submit a > proper bug report, or a link to a scipy ticket, I would be glad to > also have a look at it. http://projects.scipy.org/scipy/scipy/ticket/795 -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From david at ar.media.kyoto-u.ac.jp Sat Nov 22 04:46:45 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 22 Nov 2008 18:46:45 +0900 Subject: [SciPy-dev] Reverting eigh code ? In-Reply-To: References: <49275604.5040207@ar.media.kyoto-u.ac.jp> <49278811.90903@ar.media.kyoto-u.ac.jp> Message-ID: <4927D505.7020003@ar.media.kyoto-u.ac.jp> Hi Tiziano, Tiziano Zito wrote: > I have added the code for eigh. The corresponding fortran wrappers > have been running without any problems since 6 years in symeig. symeig > has been dowloaded more than 1000 times since its first appearence and > the MDP package (more than 11000 dowloads) depends on it. No bug has > been reported regarding symeig in 6 years. I have no doubt the fortran code itself is OK. But there is a vast variety of Fortran/Blas/Lapack configurations, and sometimes the code has to be modified to run on some platforms. That's why I am a bit worried about last-minute changes to fortran-related code; experience says that it has always been a source of problems. > I feel responsible for the eigh code, so if someone cares to submit a > proper bug report, or a link to a scipy ticket, I would be glad to > also have a look at it. http://projects.scipy.org/scipy/scipy/ticket/795 I could reproduce a similar problem (indeed related to work size) on Ubuntu Hardy 32 bits with custom-built ATLAS, and on RHEL 5 64 bits - the tests fail, but do not crash, though. > Is there a buildbot that we can use for such cases? Not for scipy, unfortunately. Here are the exact configurations which fail for me: - Ubuntu 8.04 - 32 bits - packaged gcc and g77 - custom-made ATLAS (3.8.2) against LAPACK 3.1.1 And - RHEL 5 - 64 bits - packaged gcc and gfortran - custom-made ATLAS (3.8.2) against LAPACK 3.1.1 In the latter case, I cannot reproduce the problem if I use LAPACK without ATLAS (I have not taken the time to test this with Ubuntu, but I can do it if that's useful).
David From nwagner at iam.uni-stuttgart.de Sat Nov 22 05:01:49 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sat, 22 Nov 2008 11:01:49 +0100 Subject: [SciPy-dev] Reverting eigh code ? In-Reply-To: References: <49275604.5040207@ar.media.kyoto-u.ac.jp> <49278811.90903@ar.media.kyoto-u.ac.jp> Message-ID: On Sat, 22 Nov 2008 10:51:28 +0100 "Tiziano Zito" wrote: > I have added the code for eigh. The corresponding fortran wrappers > have been running without any problems since 6 years in symeig. symeig > has been dowloaded more than 1000 times since its first appearence and > the MDP package (more than 11000 dowloads) depends on it. No bug has > been reported regarding symeig in 6 years. > I feel responsible for the eigh code, so if someone cares to submit a > proper bug report, or a link to a scipy ticket, I would be glad to > also have a look at it. It may be an issue with proper "lwork" size if > using a LAPACK+ATLAS compiled on a different machine to that where the > code is running. > Before submitting the code, I tested it on a debian lenny + system > lapack and atlas, on a windows XP with manually compiled atals 3.8 and > lapack 3.1, and on a MacOsX with enthought python distribution. > Is there a buildbot that we can use for such cases?
> > tiziano Hi Tiziano, See http://projects.scipy.org/scipy/scipy/ticket/795 I use openSUSE 10.2 (X86-64) >>> show_config() atlas_threads_info: NOT AVAILABLE blas_opt_info: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/local/lib/atlas'] define_macros = [('ATLAS_INFO', '"\\"3.7.11\\""')] language = c atlas_blas_threads_info: NOT AVAILABLE lapack_opt_info: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/local/lib/atlas'] define_macros = [('ATLAS_INFO', '"\\"3.7.11\\""')] language = f77 atlas_info: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/local/lib/atlas'] language = f77 lapack_mkl_info: NOT AVAILABLE blas_mkl_info: NOT AVAILABLE atlas_blas_info: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/local/lib/atlas'] language = c mkl_info: NOT AVAILABLE Cheers, Nils From opossumnano at gmail.com Sat Nov 22 05:51:32 2008 From: opossumnano at gmail.com (Tiziano Zito) Date: Sat, 22 Nov 2008 11:51:32 +0100 Subject: [SciPy-dev] Reverting eigh code ? In-Reply-To: <4927D505.7020003@ar.media.kyoto-u.ac.jp> References: <49275604.5040207@ar.media.kyoto-u.ac.jp> <49278811.90903@ar.media.kyoto-u.ac.jp> <4927D505.7020003@ar.media.kyoto-u.ac.jp> Message-ID: Hi David! > I could reproduce a similar problem (indeed related to work size) on > Ubuntu Hardy 32 bits with custom-build ATLAS, and RHEL 5 64 bits - the > tests fail, but do not crash, though. > >> Is there a buildbot that we can use for such cases? Unfortunately I am away from my development platform; I'll be back on Monday. I think increasing the lwork size for the complex routines in the generalized eigenproblem case may help. I think the minimal lwork assumptions I have made may need to be relaxed a bit. As I already explained, I did not understand the code in calc_lwork.f well enough to be able to write an lwork calculator for the eigh routines: doing this will be the best option, but it may take some time (and someone else's work).
If you send me the exact failing tests (so that I can check which routine is failing) I may try to repair it by Monday afternoon (European time). Are you going to be online, or can you give me access to a machine where the tests are failing? thank you! tiziano PS: for obvious reasons I would be really happy to see the improved eigh in scipy 0.7 ;-)) From robince at gmail.com Sat Nov 22 08:04:17 2008 From: robince at gmail.com (Robin) Date: Sat, 22 Nov 2008 13:04:17 +0000 Subject: [SciPy-dev] nearest neighbour interpolation In-Reply-To: References: Message-ID: On Fri, Nov 21, 2008 at 11:00 PM, Pauli Virtanen wrote: > No objections, so committed. Thanks very much - it looks great to me... Robin From josef.pktd at gmail.com Sat Nov 22 09:49:17 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 22 Nov 2008 09:49:17 -0500 Subject: [SciPy-dev] percentileofscore in svn In-Reply-To: <1cd32cbb0811220001o3379141alcd422e4a182c2fab@mail.gmail.com> References: <1cd32cbb0811220001o3379141alcd422e4a182c2fab@mail.gmail.com> Message-ID: <1cd32cbb0811220649tf45828crdd6ebc9a1994f445@mail.gmail.com> On Sat, Nov 22, 2008 at 3:01 AM, wrote: >>>> percentileofscorein([1,1,1,1],1) # yours > 62.5 >>>> percentileofscore([1,1,1,1],1) # wikipedia > 50.0 >>>> percentileofscore([1,1,1,1,1,2,2,2,2,2],1) # wikipedia > 25.0 >>>> percentileofscorein([1,1,1,1,1,2,2,2,2,2],1) # yours > 30.0 >>>> percentileofscorein([1,1,1],1) # yours > 66.666666666666657 >>>> percentileofscore([1,1,1],1) # wikipedia > 50.0 >>>> percentileofscorein([1,1,1,1,1,2,2,2,2,2],2) # yours > 80.0 >>>> percentileofscore([1,1,1,1,1,2,2,2,2,2],2) # wikipedia > 75.0 > Actually, these numbers make perfect sense as rank orderings. Last night, I was thinking too much in terms of the [0,1] interval. I merged the two versions, and made "rank" the default, because it has better backwards compatibility. I also added tests.
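[Editor's sketch of the definitions being compared in this thread. The `rank` formula below is inferred from the example values quoted above, not copied from the committed SciPy code, and the function name is hypothetical.]

```python
import numpy as np

def percentileofscore_sketch(a, score, kind='rank'):
    # Percentile rank of `score` within `a`, under the definitions
    # discussed in this thread (illustrative helper, not scipy.stats code).
    a = np.asarray(a)
    n = float(len(a))
    strict = np.sum(a < score)   # values strictly below the score
    weak = np.sum(a <= score)    # values at or below the score
    if kind == 'rank':
        # mean rank of the tied values, scaled to [0, 100]
        return (strict + weak + 1) / 2.0 / n * 100
    elif kind == 'strict':
        return strict / n * 100
    elif kind == 'weak':
        return weak / n * 100
    elif kind == 'mean':
        # average of the strict and weak definitions (the Wikipedia one)
        return (strict + weak) / 2.0 / n * 100
    else:
        raise ValueError("kind must be 'rank', 'strict', 'weak' or 'mean'")
```

With these definitions, `[1,1,1,1]` with score `1` gives 62.5 for `kind='rank'` and 50.0 for `kind='mean'`, matching the two columns of numbers compared in the messages above.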
Now if percentileofscore could handle a vector of scores at the same time, then this function would be really useful. Josef From arokem at berkeley.edu Sat Nov 22 11:55:55 2008 From: arokem at berkeley.edu (Ariel Rokem) Date: Sat, 22 Nov 2008 08:55:55 -0800 Subject: [SciPy-dev] percentileofscore in svn In-Reply-To: <1cd32cbb0811220649tf45828crdd6ebc9a1994f445@mail.gmail.com> References: <1cd32cbb0811220001o3379141alcd422e4a182c2fab@mail.gmail.com> <1cd32cbb0811220649tf45828crdd6ebc9a1994f445@mail.gmail.com> Message-ID: <43958ee60811220855i7a66b3dsb07b99ea5c6ff857@mail.gmail.com> Hi Josef - cool reworking of the function. I am not quite following - what do you mean "handle a vector of scores at the same time"? Ariel On Sat, Nov 22, 2008 at 6:49 AM, wrote: > On Sat, Nov 22, 2008 at 3:01 AM, wrote: > > >>>> percentileofscorein([1,1,1,1],1) # yours > > 62.5 > >>>> percentileofscore([1,1,1,1],1) # wikipedia > > 50.0 > >>>> percentileofscore([1,1,1,1,1,2,2,2,2,2],1) # wikipedia > > 25.0 > >>>> percentileofscorein([1,1,1,1,1,2,2,2,2,2],1) # yours > > 30.0 > >>>> percentileofscorein([1,1,1],1) # yours > > 66.666666666666657 > >>>> percentileofscore([1,1,1],1) # wikipedia > > 50.0 > >>>> percentileofscorein([1,1,1,1,1,2,2,2,2,2],2) # yours > > 80.0 > >>>> percentileofscore([1,1,1,1,1,2,2,2,2,2],2) # wikipedia > > 75.0 > > > > Actually, these numbers make perfect sense are rank orderings. Last > night, I was thinking too much in terms of the [0,1] interval. > > I merged the two versions, and made "rank" as default, because it has > better backwards compatibility. I also added tests. > > How if percentileofscore could handle a vector of scores at the same > time, then this function would be really useful. > > Josef > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From josef.pktd at gmail.com Sat Nov 22 12:32:47 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 22 Nov 2008 12:32:47 -0500 Subject: [SciPy-dev] percentileofscore in svn In-Reply-To: <43958ee60811220855i7a66b3dsb07b99ea5c6ff857@mail.gmail.com> References: <1cd32cbb0811220001o3379141alcd422e4a182c2fab@mail.gmail.com> <1cd32cbb0811220649tf45828crdd6ebc9a1994f445@mail.gmail.com> <43958ee60811220855i7a66b3dsb07b99ea5c6ff857@mail.gmail.com> Message-ID: <1cd32cbb0811220932h606d474bs1d09a48d0da55b30@mail.gmail.com> On Sat, Nov 22, 2008 at 11:55 AM, Ariel Rokem wrote: > Hi Josef - cool reworking of the function. I am not quite following - what > do you mean "handle a vector of scores at the same > time"? > > Ariel evaluate for several scores at the same time: a = [1,2,3,4,5,6,7,8,9,10] percentileofscore(a,[4,5,8]) or percentileofscore(a,np.array([4,5,8])) instead of for s in [4,5,8]: percentileofscore(a,s) For example if "a" are the student grades, then percentileofscore(a,a) would give you the ranking of every student. percentileofscore(a,[4,5,8], kind = 'weak') would provide empirical cumulative frequency for 4,5,8 But I didn't see a way of gaining much in the function compared to the simple for loop. Josef From rmay31 at gmail.com Sat Nov 22 13:22:01 2008 From: rmay31 at gmail.com (Ryan May) Date: Sat, 22 Nov 2008 12:22:01 -0600 Subject: [SciPy-dev] percentileofscore in svn In-Reply-To: <1cd32cbb0811220932h606d474bs1d09a48d0da55b30@mail.gmail.com> References: <1cd32cbb0811220001o3379141alcd422e4a182c2fab@mail.gmail.com> <1cd32cbb0811220649tf45828crdd6ebc9a1994f445@mail.gmail.com> <43958ee60811220855i7a66b3dsb07b99ea5c6ff857@mail.gmail.com> <1cd32cbb0811220932h606d474bs1d09a48d0da55b30@mail.gmail.com> Message-ID: <49284DC9.8010905@gmail.com> josef.pktd at gmail.com wrote: > On Sat, Nov 22, 2008 at 11:55 AM, Ariel Rokem wrote: >> Hi Josef - cool reworking of the function. 
I am not quite following - what >> do you mean "handle a vector of scores at the same >> time"? >> >> Ariel > > evaluate for several scores at the same time: > a = [1,2,3,4,5,6,7,8,9,10] > percentileofscore(a,[4,5,8]) > or > percentileofscore(a,np.array([4,5,8])) > instead of > for s in [4,5,8]: > percentileofscore(a,s) > > For example if "a" are the student grades, then > percentileofscore(a,a) > would give you the ranking of every student. > > percentileofscore(a,[4,5,8], kind = 'weak') would provide empirical > cumulative frequency for 4,5,8 > > But I didn't see a way of gaining much in the function compared to the > simple for loop. But you could at least perform this loop within the function so that the user can get the ease of functionality and not have to write the loop his/herself. My $0.02. Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma From wnbell at gmail.com Sat Nov 22 14:50:16 2008 From: wnbell at gmail.com (Nathan Bell) Date: Sat, 22 Nov 2008 14:50:16 -0500 Subject: [SciPy-dev] percentileofscore in svn In-Reply-To: <1cd32cbb0811220932h606d474bs1d09a48d0da55b30@mail.gmail.com> References: <1cd32cbb0811220001o3379141alcd422e4a182c2fab@mail.gmail.com> <1cd32cbb0811220649tf45828crdd6ebc9a1994f445@mail.gmail.com> <43958ee60811220855i7a66b3dsb07b99ea5c6ff857@mail.gmail.com> <1cd32cbb0811220932h606d474bs1d09a48d0da55b30@mail.gmail.com> Message-ID: On Sat, Nov 22, 2008 at 12:32 PM, wrote: > > evaluate for several scores at the same time: > a = [1,2,3,4,5,6,7,8,9,10] > percentileofscore(a,[4,5,8]) > or > percentileofscore(a,np.array([4,5,8])) > instead of > for s in [4,5,8]: > percentileofscore(a,s) > > For example if "a" are the student grades, then > percentileofscore(a,a) > would give you the ranking of every student. > > percentileofscore(a,[4,5,8], kind = 'weak') would provide empirical > cumulative frequency for 4,5,8 > > But I didn't see a way of gaining much in the function compared to the > simple for loop. 
> Hi Josef, Is there a reason why you couldn't implement percentileofscore() with numpy's searchsorted()? That would give you vectorization and more efficiently handle large #s of bins. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From josef.pktd at gmail.com Sat Nov 22 22:59:08 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sat, 22 Nov 2008 22:59:08 -0500 Subject: [SciPy-dev] friedmanchisquare fixed, test based on R help ? Message-ID: <1cd32cbb0811221959t4461b800nb48cd25ab5850cbb@mail.gmail.com> I applied the patch from ticket:117 for stats.friedmanchisquare, after verifying with matlab and R, the solution is identical to both. I added tests based on examples in the patch. One copyright question: I also added an example from the R help file to the tests, which is based on a paper published in 1973. Are there any copyright problems for doing this? Josef From eads at soe.ucsc.edu Sat Nov 22 23:16:18 2008 From: eads at soe.ucsc.edu (Damian Eads) Date: Sat, 22 Nov 2008 20:16:18 -0800 Subject: [SciPy-dev] friedmanchisquare fixed, test based on R help ? In-Reply-To: <1cd32cbb0811221959t4461b800nb48cd25ab5850cbb@mail.gmail.com> References: <1cd32cbb0811221959t4461b800nb48cd25ab5850cbb@mail.gmail.com> Message-ID: <91b4b1ab0811222016l645e1ba0y298caccdfedda795@mail.gmail.com> I don't believe there would be any copyright issues, just potential license issues. The R help is distributed under the terms of the GNU Free Documentation License, which is a copyleft license. If you derive the test directly from the 1973 paper and not the R code, there shouldn't be any issues. Damian On 11/22/08, josef.pktd at gmail.com wrote: > I applied the patch from ticket:117 for stats.friedmanchisquare, after > verifying with matlab and R, the solution is identical to both. > > I added tests based on examples in the patch. 
> > One copyright question: > I also added an example from the R help file to the tests, which is > based on a paper published in 1973. > Are there any copyright problems for doing this? > > Josef > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -- Sent from my mobile device ----------------------------------------------------- Damian Eads Ph.D. Student Jack Baskin School of Engineering, UCSC E2-489 1156 High Street Machine Learning Lab Santa Cruz, CA 95064 http://www.soe.ucsc.edu/~eads From opossumnano at gmail.com Sun Nov 23 04:03:19 2008 From: opossumnano at gmail.com (Tiziano Zito) Date: Sun, 23 Nov 2008 10:03:19 +0100 Subject: [SciPy-dev] Reverting eigh code ? In-Reply-To: References: <49275604.5040207@ar.media.kyoto-u.ac.jp> <49278811.90903@ar.media.kyoto-u.ac.jp> <4927D505.7020003@ar.media.kyoto-u.ac.jp> Message-ID: Hi David, I committed a possible fix: there may be some problems in the LAPACK documentation, the minimal lwork assignments are, for some complex routines, oddly smaller than those of the corresponding real routines. The Fortran routines are actually just driver routines, which in turn call more specialized routines underneath: in the documentation of the underlying routines I found different minimal values for lwork and set that for the driver routines. I used the same values years ago on a Red Hat machine where I had to compile LAPACK and ATLAS by hand and something was not working (I found out in symeig svn history). Let me know if this solves the problem you are seeing. Nils, would you also try it? It's in revision 5175.
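[Editorial note: the lwork sizes at issue can also be obtained from LAPACK itself rather than from its documentation: calling a driver with lwork = -1 performs a workspace query and returns the optimal size in work[0]. Later SciPy versions expose this through dedicated *_lwork wrappers; a sketch of the convention using today's scipy.linalg.lapack (the 2008 code under discussion hard-coded the sizes instead):]

```python
import numpy as np
from scipy.linalg import lapack

n = 6
rng = np.random.RandomState(0)
a = rng.rand(n, n)
a = a + a.T  # symmetric input for the real symmetric eigensolver DSYEV

# Workspace query: dsyev_lwork invokes DSYEV with lwork=-1, so the driver
# itself reports the optimal workspace size instead of computing anything.
lwork, info = lapack.dsyev_lwork(n)
assert info == 0

# Call the driver with the queried size rather than a documented minimum.
w, v, info = lapack.dsyev(a, lwork=int(lwork))
assert info == 0
print(w.shape)  # (6,): eigenvalues in ascending order
```

Querying sidesteps exactly the kind of real-vs-complex documentation discrepancy described above, since the answer comes from the compiled routine that will actually run.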
ciao, tiziano From xavier.gnata at gmail.com Sun Nov 23 07:36:03 2008 From: xavier.gnata at gmail.com (Xavier Gnata) Date: Sun, 23 Nov 2008 13:36:03 +0100 Subject: [SciPy-dev] cephes broken (revision 5176) Message-ID: <49294E33.1070608@gmail.com> Hi, There are some missing declarations in the cephes module (rev 5197): scipy/special/_cephesmodule.c: In function 'Cephes_InitOperators': scipy/special/_cephesmodule.c:340: error: 'PyUFunc_f_f_As_d_d' undeclared (first use in this function) scipy/special/_cephesmodule.c:340: error: (Each undeclared identifier is reported only once scipy/special/_cephesmodule.c:340: error: for each function it appears in.) scipy/special/_cephesmodule.c:341: error: 'PyUFunc_d_d' undeclared (first use in this function) scipy/special/_cephesmodule.c:364: error: 'PyUFunc_ff_f_As_dd_d' undeclared (first use in this function) scipy/special/_cephesmodule.c:365: error: 'PyUFunc_dd_d' undeclared (first use in this function) Xavier From david at ar.media.kyoto-u.ac.jp Sun Nov 23 07:30:41 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sun, 23 Nov 2008 21:30:41 +0900 Subject: [SciPy-dev] cephes broken (revision 5176) In-Reply-To: <49294E33.1070608@gmail.com> References: <49294E33.1070608@gmail.com> Message-ID: <49294CF1.7030404@ar.media.kyoto-u.ac.jp> Xavier Gnata wrote: > Hi, > > There are some missing declarations in the cephes module (rev 5197) > Actually, that's numpy svn which is broken ATM. You can build scipy against numpy 1.2.1, for example. David From cournape at gmail.com Sun Nov 23 07:57:12 2008 From: cournape at gmail.com (David Cournapeau) Date: Sun, 23 Nov 2008 21:57:12 +0900 Subject: [SciPy-dev] Reverting eigh code ? In-Reply-To: References: <49275604.5040207@ar.media.kyoto-u.ac.jp> <49278811.90903@ar.media.kyoto-u.ac.jp> <4927D505.7020003@ar.media.kyoto-u.ac.jp> Message-ID: <5b8d13220811230457i7795153dtb7adc3055c9d4cbf@mail.gmail.com> On Sun, Nov 23, 2008 at 6:03 PM, Tiziano Zito wrote: > Hi David, > I committed a possible fix: there may be some problems in the LAPACK > documentation, the minimal lwork assignments are, for some complex > routines, oddly smaller than those of the corresponding real routines. > The Fortran routines are actually just driver routines, which in turn > call more specialized routines underneath: in the documentation of the > underlying routines I found different minimal values for lwork and set > that for the driver routines. I used the same values years ago on a > Red Hat machine where I had to compile LAPACK and ATLAS by hand and > something was not working (I found out in symeig svn history). Let me > know if this solves the problem you are seeing. It seems to work on both computers which were failing before. I also checked it did not break on Windows + atlas, and it still works. Thank you very much!
David From josef.pktd at gmail.com Sun Nov 23 09:43:00 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 23 Nov 2008 09:43:00 -0500 Subject: [SciPy-dev] friedmanchisquare fixed, test based on R help ? In-Reply-To: <91b4b1ab0811222016l645e1ba0y298caccdfedda795@mail.gmail.com> References: <1cd32cbb0811221959t4461b800nb48cd25ab5850cbb@mail.gmail.com> <91b4b1ab0811222016l645e1ba0y298caccdfedda795@mail.gmail.com> Message-ID: <1cd32cbb0811230643rc17518fv8ee1b059a982da90@mail.gmail.com> >> >> One copyright question: >> I also added an example from the R help file to the tests, which is >> based on a paper published in 1973. >> Are there any copyright problems for doing this? >> >> Josef On Sat, Nov 22, 2008 at 11:16 PM, Damian Eads wrote: > I don't believe there would be any copyright issues, just potential > license issues. The R help is distributed under the terms of the GNU > Free Documentation License, which is a copyleft license. If you derive > the test directly from the 1973 paper and not the R code, there > shouldn't be any issues. > > Damian > Actually, it's a textbook: Nonparametric Statistical Methods by Myles Hollander and Douglas A. Wolfe, published by Wiley, 1973. But since I currently don't have access to it, I removed the test. I didn't look at the R code, I had just taken the example table and the reference from the help file. Josef From pav at iki.fi Sun Nov 23 10:21:52 2008 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 23 Nov 2008 15:21:52 +0000 (UTC) Subject: [SciPy-dev] Numpy documentation editable @ docs.scipy.org Message-ID: Dear all, All of the Sphinx-generated documentation of Numpy (not only the docstrings!) can now be edited in the wiki at http://docs.scipy.org/numpy/ for example, http://docs.scipy.org/numpy/docs/numpy-docs/reference/arrays.ndarray.rst/ Contributing should now be quite a lot easier, and does not require you to install Sphinx and check out the docs from SVN any more.
There's still a small amount of work left to make the same true for Scipy as well, but this is not far off. -- Pauli Virtanen From charlesr.harris at gmail.com Sun Nov 23 19:17:49 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 23 Nov 2008 17:17:49 -0700 Subject: [SciPy-dev] Numpy documentation editable @ docs.scipy.org In-Reply-To: References: Message-ID: On Sun, Nov 23, 2008 at 8:21 AM, Pauli Virtanen wrote: > Dear all, > > All of the Sphinx-generated documentation of Numpy (not only the > docstrings!) can now be edited in the wiki at > > http://docs.scipy.org/numpy/ > > for example, > > > http://docs.scipy.org/numpy/docs/numpy-docs/reference/arrays.ndarray.rst/ > > Contributing should now be quite a lot easier, and does not require > you to install Sphinx and check out the docs from SVN any more. > There are still missing ufuncs. How do they get added? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sun Nov 23 19:27:17 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 23 Nov 2008 17:27:17 -0700 Subject: [SciPy-dev] cephes broken (revision 5176) In-Reply-To: <49294CF1.7030404@ar.media.kyoto-u.ac.jp> References: <49294E33.1070608@gmail.com> <49294CF1.7030404@ar.media.kyoto-u.ac.jp> Message-ID: On Sun, Nov 23, 2008 at 5:30 AM, David Cournapeau < david at ar.media.kyoto-u.ac.jp> wrote: > Xavier Gnata wrote: > > Hi, > > > > There are some missing declarations in the cephes module (rev 5197) > > > > Actually, that's numpy svn which is broken ATM. You can build scipy > against numpy 1.2.1, for example. > So it is. I fixed that once, but I must have broken it again somewhere along the line. Chuck From charlesr.harris at gmail.com Sun Nov 23 20:09:32 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 23 Nov 2008 18:09:32 -0700 Subject: [SciPy-dev] cephes broken (revision 5176) In-Reply-To: References: <49294E33.1070608@gmail.com> <49294CF1.7030404@ar.media.kyoto-u.ac.jp> Message-ID: On Sun, Nov 23, 2008 at 5:27 PM, Charles R Harris wrote: > > > On Sun, Nov 23, 2008 at 5:30 AM, David Cournapeau < > david at ar.media.kyoto-u.ac.jp> wrote: > >> Xavier Gnata wrote: >> > Hi, >> > >> > There are some missing declarations in the cephes module (rev 5197) >> > >> >> Actually, that's numpy svn which is broken ATM. You can build scipy >> against numpy 1.2.1, for example. >> > > So it is. I fixed that once, but I must have broken it again somewhere > along the line. > It should be fixed now. This brings up something I've been thinking about. I think the include headers should exist independently of the src. They are currently generated by scanning the files, but that is error-prone and breaks the independent signature check that a fixed include file performs. Besides, I don't think it is asking much to require programmers to declare interface functions before defining them. It is good practice. It would also simplify the current setup by making it closer to standard practice instead of having things buried in code generators and order lists. Chuck From wnbell at gmail.com Sun Nov 23 21:56:55 2008 From: wnbell at gmail.com (Nathan Bell) Date: Sun, 23 Nov 2008 21:56:55 -0500 Subject: [SciPy-dev] the state of scipy unit tests Message-ID: In the past, I would always run 'nosetests scipy' before committing changes to SVN. Due to the current state of the unit tests, I don't anymore, and I suspect I'm not alone. Here are the main offenders on my system: scipy.stats.
I appreciate the fact that rigorous testing on this module takes time, but 4 minutes on a 2.4GHz Core 2 system is unreasonable. IMO 20 seconds is a reasonable upper bound. Essential tests that don't meet this time constraint should be filtered out of the default test suite. scipy.weave Takes 2.5 minutes and litters my screen with a few hundred lines like, /tmp/tmpcRI5WR/sc_3a5c7ad3ac45a98d03cd9168232f7d8f1.cpp:618: warning: deprecated conversion from and dumps a bunch of .cpp and .so files in the current working directory. Some minor offenders: scipy.interpolate Emits several UserWarnings (should be filtered out with warnings.filterwarnings). scipy.io Several DeprecationWarnings scipy.lib A dozen lines like "zcopy:n=3" scipy.linalg Outputs ATLAS info and a dozen lines like "zcopy:n=3". I'd like to be able to run the entire battery of tests in about a minute with minimal unnecessary output. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From josef.pktd at gmail.com Sun Nov 23 21:59:39 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 23 Nov 2008 21:59:39 -0500 Subject: [SciPy-dev] percentileofscore in svn In-Reply-To: References: <1cd32cbb0811220001o3379141alcd422e4a182c2fab@mail.gmail.com> <1cd32cbb0811220649tf45828crdd6ebc9a1994f445@mail.gmail.com> <43958ee60811220855i7a66b3dsb07b99ea5c6ff857@mail.gmail.com> <1cd32cbb0811220932h606d474bs1d09a48d0da55b30@mail.gmail.com> Message-ID: <1cd32cbb0811231859j13151fe4r41ee3413e42c2522@mail.gmail.com> > > Hi Josef, > > Is there a reason why you couldn't implement percentileofscore() with > numpy's searchsorted()? That would give you vectorization and more > efficiently handle large #s of bins. > > Nathan Bell wnbell at gmail.com The reason is that I never used searchsorted, and I still don't have an overview of which functions are available in numpy/scipy. But, thank you for the hint, after I found the left and right options, searchsorted works perfectly.
It is also easy to get empirical cumulative frequency this way, and also the frequency count directly. It requires a sort, which would be a waste if I just need the cdf for a single value, but then I wouldn't need a function. The same options that I added to percentileofscore can be easily calculated: >>> hi = np.searchsorted([1,2,3,3,4,5,6,7,8,9], [1,2,3,4,5,6,7,8,9], side='right') >>> lo = np.searchsorted([1,2,3,3,4,5,6,7,8,9], [1,2,3,4,5,6,7,8,9], side='left') # rank ordering >>> hi array([ 1, 2, 4, 5, 6, 7, 8, 9, 10]) >>> lo array([0, 1, 2, 4, 5, 6, 7, 8, 9]) >>> hi-lo array([1, 1, 2, 1, 1, 1, 1, 1, 1]) percentiles of scores >>> n=10 >>> (lo+0.5*(hi-lo))/float(n)*100 # mean wikipedia array([ 5., 15., 30., 45., 55., 65., 75., 85., 95.]) >>> (0.5*(hi+1+lo))/float(n)*100 # rank (mean rank) array([ 10., 20., 35., 50., 60., 70., 80., 90., 100.]) >>> hi/float(n)*100 # weak inequality (cdf) array([ 10., 20., 40., 50., 60., 70., 80., 90., 100.]) >>> lo/float(n)*100 # strict inequality array([ 0., 10., 20., 40., 50., 60., 70., 80., 90.]) >>> hi/float(n)*100-lo/float(n)*100 # frequencies in percent array([ 10., 10., 20., 10., 10., 10., 10., 10., 10.]) Not properly tested yet, but it looks good. Josef From millman at berkeley.edu Sun Nov 23 22:32:40 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Sun, 23 Nov 2008 19:32:40 -0800 Subject: [SciPy-dev] the state of scipy unit tests In-Reply-To: References: Message-ID: On Sun, Nov 23, 2008 at 6:56 PM, Nathan Bell wrote: > scipy.stats. > I appreciate the fact that rigorous testing on this module takes time, > but 4 minutes on a 2.4GHz Core 2 system is unreasonable. IMO 20 > seconds is a reasonable upper bound. Essential tests that don't meet > this time constraint should be filtered out of the default test suite. I agree.
Josef asked whether he should reduce the number of tests run by default and I (perhaps mistakenly) said that he should focus on fixing broken code and writing new tests before we released the first beta. I was thinking that it would be best to have as many tests as possible run by default for the beta release. It would probably be better to enable all the tests by default for tagged beta and rc releases, but not the development trunk. Ideas? As soon as the beta is released, we should focus on reducing the time required to run the default test suite. > scipy.weave > Takes 2.5 minutes and litters my screen with a few hundred lines like, > /tmp/tmpcRI5WR/sc_3a5c7ad3ac45a98d03cd9168232f7d8f1.cpp:618: warning: > deprecated conversion from > and dumps a bunch of .cpp and .so files in the current working directory. I find that annoying too. > I'd like to be able to run the entire battery of tests in about a > minute with minimal unnecessary output. +1, let's make this a focus for post-beta1 attention. Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From millman at berkeley.edu Sun Nov 23 22:37:22 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Sun, 23 Nov 2008 19:37:22 -0800 Subject: [SciPy-dev] beta 1 tagged in a few hours Message-ID: I will be tagging the first beta for 0.7 in a few hours. Please let me know if there is any reason that this needs to be delayed. Thanks to everyone who has helped get the trunk ready for a beta release. Once the release is tagged, David and I will create binaries and make the actual release.
-- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From josef.pktd at gmail.com Sun Nov 23 22:55:43 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 23 Nov 2008 22:55:43 -0500 Subject: [SciPy-dev] the state of scipy unit tests In-Reply-To: References: Message-ID: <1cd32cbb0811231955v62cddf20k4771b5207e75d8fe@mail.gmail.com> On Sun, Nov 23, 2008 at 9:56 PM, Nathan Bell wrote: > In the past, I would always run 'nosetests scipy' before committing > changes to SVN. Due to the current state of the unit tests, I don't > anymore, and I suspect I'm not alone. > > Here are the main offenders on my system: > > scipy.stats. > I appreciate the fact that rigorous testing on this module takes time, > but 4 minutes on a 2.4GHz Core 2 system is unreasonable. IMO 20 > seconds is a reasonable upper bound. Essential tests that don't meet > this time constraint should be filtered out of the default test suite. > I agree that it is pretty painful, I usually just run nosetests on the module or package level, e.g. 'nosetests scipy.stats' before commit and specific test files while correcting individual functions. For my distributions tests, I use additional tests that are renamed in svn so that nose doesn't pick them up. Is it possible to use an exclude option with nose that excludes for example all tests in scipy stats or specific test files? My problem, that I raised already once on the mailing list, is that I am testing now essentially all methods of close to 100 distributions, some of which require a lot of numerical integration and optimization. I wrote the tests pretty fast, for bug hunting and to get one thorough round of testing during the next beta release. But for everyday usage they are too much. 
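[Editorial note: nose does have both mechanisms Josef is asking about: a core -e/--exclude=REGEX option that skips tests whose names match a pattern, and the bundled attrib plugin (-a/-A) that filters on attributes set by decorators. numpy.testing's dec.slow decorator is essentially the two-line tag sketched below; the test function here is a made-up stand-in for an expensive distribution check, not actual scipy.stats code:]

```python
import numpy as np

def slow(func):
    # What numpy.testing.dec.slow does, in essence: set an attribute on
    # the test function that nose's attrib plugin can filter on.
    func.slow = True
    return func

@slow
def test_expensive_distribution_check():
    # Stand-in for a heavy statistical test: a large Monte Carlo sample.
    rng = np.random.RandomState(42)
    sample = rng.standard_normal(200000)
    assert abs(sample.mean()) < 0.05

print(test_expensive_distribution_check.slow)  # True
```

With tests tagged this way, `nosetests -a '!slow' scipy.stats` runs only the untagged tests, while a plain `nosetests scipy.stats` still exercises everything; alternatively `nosetests -e distributions scipy.stats` drops every test whose name matches the regex, which answers the exclude-option question directly. Whether the attribute survives on test generators is exactly the open question in the messages that follow.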
I haven't done any profiling to see which are the most offending distributions, and since there are so many distributions and all tests are generators, it is difficult to special-case individual time-consuming methods and distributions. Another problem is tests based on random numbers: if the sample size and power of the statistical tests are too small (as was the case in scipy until a few months ago), then they don't catch many bugs; if the statistical tests are to have some power, then they require larger samples and more calculation. My initial attempts to use decorators were not very successful, since nose doesn't allow you to decorate test generators. One option would be to label most of my test functions with slow, but I haven't tried this yet. In the old test system, it was possible to assign levels to the tests. I don't know if or how it is possible to label my tests so that a few basic ones are run on a low level and the other ones only at higher levels. Triaging my tests will be quite a bit of work, but the short-term solution is to find a way to exclude most of them for everyday use but keep them available for beta testing. BTW, is there a way to profile the tests themselves (the tests yielded by a generator, not the test function)? Josef From millman at berkeley.edu Sun Nov 23 23:25:00 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Sun, 23 Nov 2008 20:25:00 -0800 Subject: [SciPy-dev] the state of scipy unit tests In-Reply-To: <1cd32cbb0811231955v62cddf20k4771b5207e75d8fe@mail.gmail.com> References: <1cd32cbb0811231955v62cddf20k4771b5207e75d8fe@mail.gmail.com> Message-ID: On Sun, Nov 23, 2008 at 7:55 PM, wrote: > My initial attempts to use decorators were not very successful, since > nose doesn't allow you to decorate test generators. One option would be to > label most of my test functions with slow, but I haven't tried this > yet. In the old test system, it was possible to assign levels to the > tests.
I don't know if or how it is possible to label my tests so that > a few basic ones are run on a low level and the other ones only at > higher levels. Fernando Perez wrote some code to allow you to decorate test generators (he mentioned it in an earlier thread, but I don't think we followed up on it). He also raised the question about this at a Baypiggies meeting and Alex Martelli blogged about his thoughts here: http://aleaxit.blogspot.com/2008/11/python-introspecting-for-generator-vs.html I'll talk to Fernando tomorrow and make sure we follow-up on this. -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From scott.sinclair.za at gmail.com Mon Nov 24 00:43:37 2008 From: scott.sinclair.za at gmail.com (Scott Sinclair) Date: Mon, 24 Nov 2008 07:43:37 +0200 Subject: [SciPy-dev] Numpy documentation editable @ docs.scipy.org In-Reply-To: References: Message-ID: <6a17e9ee0811232143v1915196ckad1c1f2e922795ef@mail.gmail.com> 2008/11/24 Charles R Harris : > > > On Sun, Nov 23, 2008 at 8:21 AM, Pauli Virtanen wrote: >> >> Dear all, >> >> All of the Sphinx-generated documentation of Numpy (not only the >> docstrings!) can be now edited in the wiki at >> >> http://docs.scipy.org/numpy/ >> >> for example, >> >> >> http://docs.scipy.org/numpy/docs/numpy-docs/reference/arrays.ndarray.rst/ >> >> Contributing should now be quite a lot easier, and does not require >> you to install Sphinx and check out the docs from SVN any more. > > There are still missing ufuncs. How do they get added? 
They need to be manually added by editing http://docs.scipy.org/numpy/docs/numpy-docs/reference/ufuncs.rst Cheers, Scott From fperez.net at gmail.com Mon Nov 24 01:31:03 2008 From: fperez.net at gmail.com (Fernando Perez) Date: Sun, 23 Nov 2008 22:31:03 -0800 Subject: [SciPy-dev] the state of scipy unit tests In-Reply-To: References: <1cd32cbb0811231955v62cddf20k4771b5207e75d8fe@mail.gmail.com> Message-ID: Howdy, On Sun, Nov 23, 2008 at 8:25 PM, Jarrod Millman wrote: > On Sun, Nov 23, 2008 at 7:55 PM, wrote: >> My initial attempts to use decorators were not very successful, since >> nose doesn't allow to decorate test generators. One option would be to >> label most of my test functions with slow, but I haven't tried this >> yet. In the old test system, it was possible to assign levels to the >> tests. I don't know if or how it is possible to label my tests so that >> a few basic ones are run on a low level and the other ones only at >> higher levels. > > Fernando Perez wrote some code to allow you to decorate test > generators (he mentioned it in an earlier thread, but I don't think we > followed up on it). He also raised the question about this at a > Baypiggies meeting and Alex Martelli blogged about his thoughts here: > http://aleaxit.blogspot.com/2008/11/python-introspecting-for-generator-vs.html > > I'll talk to Fernando tomorrow and make sure we follow-up on this. Sorry, I got busy with other things. Here's the diff for decorators, with an implementation that works with generators and also allows the test condition to be a callable (very useful for conditions that you want to evaluate only at suite run time, not at import time). I hadn't sent it because I wanted to polish it and write some tests for it, but here it is for now. I also included a patch for the verbosity problem: the issue is that we're hardcoding '-s' in the test runner, which suppresses stdout capture. This should instead be an option for the user (like test(capture=False)). 
That diff just disables -s, so it's not finished, but I don't have time right now to implement the complete solution. At least I hope pointing in the right direction will be useful if someone else can finish. Cheers, f -------------- next part -------------- A non-text attachment was scrubbed... Name: decorators.diff Type: text/x-diff Size: 3155 bytes Desc: not available URL: From millman at berkeley.edu Mon Nov 24 01:57:31 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Sun, 23 Nov 2008 22:57:31 -0800 Subject: [SciPy-dev] the state of scipy unit tests In-Reply-To: References: <1cd32cbb0811231955v62cddf20k4771b5207e75d8fe@mail.gmail.com> Message-ID: On Sun, Nov 23, 2008 at 10:31 PM, Fernando Perez wrote: > Sorry, I got busy with other things. Here's the diff for decorators, > with an implementation that works with generators and also allows the > test condition to be a callable (very useful for conditions that you > want to evaluate only at suite run time, not at import time). I > hadn't sent it because I wanted to polish it and write some tests for > it, but here it is for now. Thanks, I created a ticket and attached your patch: http://scipy.org/scipy/numpy/ticket/957 > I also included a patch for the verbosity problem: the issue is that > we're hardcoding '-s' in the test runner, which suppresses stdout > capture. This should instead be an option for the user (like > test(capture=False)). That diff just disables -s, so it's not > finished, but I don't have time right now to implement the complete > solution. At least I hope pointing in the right direction will be > useful if someone else can finish. Currently, when running scipy.test('full') there is a large amount of information printed to the screen. Presumably, this information is being printed out because the test writer is using it for debugging information. Your patch (to remove the '-s' option) will help in this respect, but we will need to do more. 
Just to state my goal: I would like to change the scipy.test so that it behaves more like numpy.test: In [1]: numpy.test('full') Running unit tests for numpy NumPy version 1.3.0.dev6099 NumPy is installed in /home/jarrod/usr/local/lib64/python2.5/site-packages/numpy Python version 2.5.1 (r251:54863, Jul 10 2008, 17:25:56) [GCC 4.1.2 20070925 (Red Hat 4.1.2-33)] nose version 0.10.3 .......................K........ ---------------------------------------------------------------------- Ran 1768 tests in 4.235s OK (KNOWNFAIL=1) That is, it just prints '.' and letter codes. -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From millman at berkeley.edu Mon Nov 24 02:17:35 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Sun, 23 Nov 2008 23:17:35 -0800 Subject: [SciPy-dev] Numpy documentation editable @ docs.scipy.org In-Reply-To: <6a17e9ee0811232143v1915196ckad1c1f2e922795ef@mail.gmail.com> References: <6a17e9ee0811232143v1915196ckad1c1f2e922795ef@mail.gmail.com> Message-ID: On Sun, Nov 23, 2008 at 9:43 PM, Scott Sinclair wrote: > 2008/11/24 Charles R Harris : >> There are still missing ufuncs. How do they get added?
> > They need to be manually added by editing > http://docs.scipy.org/numpy/docs/numpy-docs/reference/ufuncs.rst Or you can just manually add them in the trunk: http://projects.scipy.org/scipy/numpy/browser/trunk/doc/source/reference/ufuncs.rst -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From david at ar.media.kyoto-u.ac.jp Mon Nov 24 02:09:41 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 24 Nov 2008 16:09:41 +0900 Subject: [SciPy-dev] the state of scipy unit tests In-Reply-To: References: Message-ID: <492A5335.6040508@ar.media.kyoto-u.ac.jp> Nathan Bell wrote: > In the past, I would always run 'nosetests scipy' before committing > changes to SVN. Due to the current state of the unit tests, I don't > anymore, and I suspect I'm not alone. > > Here are the main offenders on my system: > > scipy.stats. > I appreciate the fact that rigorous testing on this module takes time, > but 4 minutes on a 2.4GHz Core 2 system is unreasonable. IMO 20 > seconds is a reasonable upper bound. Essential tests that don't meet > this time constraint should be filtered out of the default test suite. > I don't agree much on that reasoning. Tests are useful; the more run by default, the better; tests which are not run by default are nearly useless IMO, since not many people would run tests with options; since there are ways to restrict tests to a meaningful subset (per subpackage), I think this is enough; if some tests can be run faster, then ok, but not if it requires losing some test coverage. Why does the test time matter so much to you? > scipy.weave > Takes 2.5 minutes and litters my screen with a few hundred lines like, > /tmp/tmpcRI5WR/sc_3a5c7ad3ac45a98d03cd9168232f7d8f1.cpp:618: warning: > deprecated conversion from > and dumps a bunch of .cpp and .so files in the current working directory.
> > Some minor offenders: > > scipy.interpolate > Emits several UserWarnings (should be filtered out with warnings.warnfilter). > > scipy.io > Several DeprecationWarnings > > scipy.lib > A dozen lines like "zcopy:n=3" > > scipy.linalg > Outputs ATLAS info and a dozen lines like "zcopy:n=3". > Yes, those should be cleaned (except maybe DeprecationWarnings - if the deprecated functions are the one being tested: I am not sure what we should do in that case). But that's a lot of grunt work ; particularly scipy.lib, which should be removed IMHO (as for now, it is mostly redundant with scipy.linalg, except for some unit tests which would be useful to put into scipy.linalg). > I'd like to be able to run the entire battery of tests in about a > minute with minimal unnecessary output. > I hear you, I would like the whole build + test process for scipy to be faster too :) If 4 minutes sounds long, what about build + test on windows, which takes at least 20 minutes (to multiply by three when I build the superpack - and the process can't even be controlled remotely) ! David From robert.kern at gmail.com Mon Nov 24 02:44:04 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 24 Nov 2008 01:44:04 -0600 Subject: [SciPy-dev] the state of scipy unit tests In-Reply-To: <492A5335.6040508@ar.media.kyoto-u.ac.jp> References: <492A5335.6040508@ar.media.kyoto-u.ac.jp> Message-ID: <3d375d730811232344o288c7535u5593d71b8635382d@mail.gmail.com> On Mon, Nov 24, 2008 at 01:09, David Cournapeau wrote: > Nathan Bell wrote: >> In the past, I would always run 'nosetests scipy' before committing >> changes to SVN. Due to the current state of the unit tests, I don't >> anymore, and I suspect I'm not alone. >> >> Here are the main offenders on my system: >> >> scipy.stats. >> I appreciate the fact that rigorous testing on this module takes time, >> but 4 minutes on a 2.4GHz Core 2 system is unreasonable. IMO 20 >> seconds is a reasonable upper bound. 
Essential tests that don't meet >> this time constraint should be filtered out of the default test suite. > > I don't agree much on that reasoning. Test are useful; the more run by > default, the better; tests which are not run by default are nearly > useless IMO, since not many people would run tests with options; since > there are ways to restrict tests to a meaningful subset (per > subpackage), I think this is enough; if some tests can be run faster, > then ok, but not if it requires to lose some test coverage. > > Why does the test time matter so much to you ? You want to be able to run the main automated test suite every time before you do a check in, and more frequently while you are working on something, so that you make sure you didn't break things you weren't working on. This is a fairly well-accepted principle of testing. No one is suggesting that tests should be deleted, just that they might be moved (or marked) out of the main test suite. Multiple test suites for different purposes and constraints is far from uncommon. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From fperez.net at gmail.com Mon Nov 24 02:57:22 2008 From: fperez.net at gmail.com (Fernando Perez) Date: Sun, 23 Nov 2008 23:57:22 -0800 Subject: [SciPy-dev] the state of scipy unit tests In-Reply-To: References: <1cd32cbb0811231955v62cddf20k4771b5207e75d8fe@mail.gmail.com> Message-ID: On Sun, Nov 23, 2008 at 10:57 PM, Jarrod Millman wrote: > respect, but we will need to do more. 
Just to state my goal: I would > like to change scipy.test so that it behaves more like numpy.test: > > In [1]: numpy.test('full') > Running unit tests for numpy > NumPy version 1.3.0.dev6099 > NumPy is installed in > /home/jarrod/usr/local/lib64/python2.5/site-packages/numpy > Python version 2.5.1 (r251:54863, Jul 10 2008, 17:25:56) [GCC 4.1.2 > 20070925 (Red Hat 4.1.2-33)] > nose version 0.10.3 > .......................K........ > ---------------------------------------------------------------------- > Ran 1768 tests in 4.235s > > OK (KNOWNFAIL=1) > > That is, it just prints '.' and letter codes. Yup, that would be ideal. Just to note, some of the printouts are coming directly (I suspect) from inside the fortran code or the wrappers, so disabling those may take a bit more work. Cheers, f From cournape at gmail.com Mon Nov 24 02:59:24 2008 From: cournape at gmail.com (David Cournapeau) Date: Mon, 24 Nov 2008 16:59:24 +0900 Subject: [SciPy-dev] the state of scipy unit tests In-Reply-To: <3d375d730811232344o288c7535u5593d71b8635382d@mail.gmail.com> References: <492A5335.6040508@ar.media.kyoto-u.ac.jp> <3d375d730811232344o288c7535u5593d71b8635382d@mail.gmail.com> Message-ID: <5b8d13220811232359ja14ebd0u1a22b9275c3afc4e@mail.gmail.com> On Mon, Nov 24, 2008 at 4:44 PM, Robert Kern wrote: >> Why does the test time matter so much to you ? > > You want to be able to run the main automated test suite every time > before you do a check in, and more frequently while you are working on > something, so that you make sure you didn't break things you weren't > working on. This is a fairly well-accepted principle of testing. Yes, I understand the different suites thing, but that's not what we are talking about, right ? Weave, for example, does not output files when the default test suite is run. > Multiple test > suites for different purposes and constraints is far from uncommon. Sure, I don't argue against different-purpose test suites, but about what goes in the default.
David From gael.varoquaux at normalesup.org Mon Nov 24 03:15:29 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 24 Nov 2008 09:15:29 +0100 Subject: [SciPy-dev] the state of scipy unit tests In-Reply-To: References: <1cd32cbb0811231955v62cddf20k4771b5207e75d8fe@mail.gmail.com> Message-ID: <20081124081529.GB29511@phare.normalesup.org> On Sun, Nov 23, 2008 at 11:57:22PM -0800, Fernando Perez wrote: > Yup, that would be ideal. Just to note, some of the printouts are > coming directly (I suspect) from inside the fortran code or the > wrappers, so disabling those may take a bit more work. Actually, it is not that hard, playing with file descriptors. The implementation in the IPython codebase is in: http://bazaar.launchpad.net/%7Eipython-dev/ipython/trunk/annotate/1149?file_id=fd_redirector.py-20080801050147-n5amq5090x5mx3mk-2 Gaël From millman at berkeley.edu Mon Nov 24 03:15:54 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Mon, 24 Nov 2008 00:15:54 -0800 Subject: [SciPy-dev] the state of scipy unit tests In-Reply-To: <5b8d13220811232359ja14ebd0u1a22b9275c3afc4e@mail.gmail.com> References: <492A5335.6040508@ar.media.kyoto-u.ac.jp> <3d375d730811232344o288c7535u5593d71b8635382d@mail.gmail.com> <5b8d13220811232359ja14ebd0u1a22b9275c3afc4e@mail.gmail.com> Message-ID: On Sun, Nov 23, 2008 at 11:59 PM, David Cournapeau wrote: > Sure, I don't argue against different purposes test suite, but about > what goes in the default. I would like to see different defaults (one for the development trunk; one for binary alpha, beta, and rc releases; and possibly stable releases). For the development trunk, we need a quick and relatively complete default test suite. This will make it easier for developers to adopt the habit of running the full test suite before checking in any changes to the trunk. If a developer wants to run a more complete test suite, they should be able to run the full suite whenever they want.
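[Editor's note: a minimal, stdlib-only sketch of the label-dispatch idea Jarrod describes, in the spirit of numpy's nose-based test(label=...) entry point. The suite names and tests here are hypothetical illustrations, not real scipy suites.]

```python
import unittest

# Hypothetical suites: a cheap sanity check and an expensive one.
class FastChecks(unittest.TestCase):
    def test_basic_sanity(self):
        self.assertEqual(sum([1, 2, 3]), 6)

class SlowChecks(unittest.TestCase):
    def test_expensive_property(self):
        # Stand-in for a long-running numerical test.
        self.assertEqual(sorted(range(1000, -1, -1)), list(range(1001)))

# Registry: (test case, is_slow).
SUITES = [(FastChecks, False), (SlowChecks, True)]

def test(label='fast'):
    """Run the quick suites by default; label='full' runs everything."""
    loader = unittest.TestLoader()
    suite = unittest.TestSuite(
        loader.loadTestsFromTestCase(case)
        for case, slow in SUITES
        if label == 'full' or not slow)
    return unittest.TextTestRunner(verbosity=0).run(suite)

result = test()        # quick default: slow suites are skipped
assert result.testsRun == 1
result = test('full')  # explicit full run
assert result.testsRun == 2
```

The point of the design is that the cheap default keeps the pre-commit habit viable, while the complete run stays one argument away.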
For the binary alpha, beta, and rc releases, we want the default test suite to be as complete as possible so that we get better feedback from early adopters without having to get them to run the tests with many options. I was imagining something like this: when building the binaries, I would set a flag, which could easily be done in the scripts for building binaries: http://projects.scipy.org/scipy/scipy/browser/trunk/tools/win32/build_scripts or could use the release flag, which is already being set: http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/version.py For binaries for stable releases, we should decide whether we want the default test suite to favor speed or completeness. Thoughts? -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From matthieu.brucher at gmail.com Mon Nov 24 03:26:06 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Mon, 24 Nov 2008 09:26:06 +0100 Subject: [SciPy-dev] the state of scipy unit tests In-Reply-To: References: <492A5335.6040508@ar.media.kyoto-u.ac.jp> <3d375d730811232344o288c7535u5593d71b8635382d@mail.gmail.com> <5b8d13220811232359ja14ebd0u1a22b9275c3afc4e@mail.gmail.com> Message-ID: 2008/11/24 Jarrod Millman : > On Sun, Nov 23, 2008 at 11:59 PM, David Cournapeau wrote: >> Sure, I don't argue against different purposes test suite, but about >> what goes in the default. > > I would like to see different defaults (one for the development trunk; > one for binary alpha, beta, and rc releases; and possibly stable > releases). There may be issues with this when people modify some packages deeply, but in a way not caught by the standard test battery. And then, when you go to alpha, you get hundreds of failing tests, and it's so overwhelming that you have to start the test battery from scratch.
It could be better to have a scipy buildbot, like for numpy, that runs all the tests, and people before committing just check the most important tests. This way, you don't get hundreds of failing tests once you reactivate them, you can still track where the errors come from, and the test time for a single developer remains small (although, he could only check the result of the tests on the package he's modifying). Matthieu -- Information System Engineer, Ph.D. Website: http://matthieu-brucher.developpez.com/ Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn: http://www.linkedin.com/in/matthieubrucher From wnbell at gmail.com Mon Nov 24 03:38:17 2008 From: wnbell at gmail.com (Nathan Bell) Date: Mon, 24 Nov 2008 03:38:17 -0500 Subject: [SciPy-dev] the state of scipy unit tests In-Reply-To: <492A5335.6040508@ar.media.kyoto-u.ac.jp> References: <492A5335.6040508@ar.media.kyoto-u.ac.jp> Message-ID: On Mon, Nov 24, 2008 at 2:09 AM, David Cournapeau wrote: > > I don't agree much on that reasoning. Test are useful; the more run by > default, the better; tests which are not run by default are nearly > useless IMO, since not many people would run tests with options; since > there are ways to restrict tests to a meaningful subset (per > subpackage), I think this is enough; if some tests can be run faster, > then ok, but not if it requires to lose some test coverage. > As a general rule, more tests are better. OTOH, tests that people *choose not to run* are not helpful. > Why does the test time matter so much to you ? I want to know that my changes to scipy.sparse haven't adversely affected other parts of scipy. To my knowledge, there are only a few such modules (io, maxentropy, spatial, and sparse.linalg), so I could, in principle, test those directly and call it a day. However, it's possible that modules that depend on those modules will expose errors that would be hidden otherwise. 
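[Editor's note: the dependency-aware shortcut Nathan describes can be sketched in a few lines. The mapping below uses the package names from his message, but as a hypothetical illustration, not scipy's actual dependency graph.]

```python
# Packages known to import from a given package (illustrative only).
DEPENDENTS = {
    'sparse': ['io', 'maxentropy', 'spatial', 'sparse.linalg'],
}

def packages_to_test(changed):
    """Test the changed package plus everything known to depend on it."""
    return [changed] + DEPENDENTS.get(changed, [])

assert packages_to_test('sparse') == [
    'sparse', 'io', 'maxentropy', 'spatial', 'sparse.linalg']
assert packages_to_test('cluster') == ['cluster']
```

As the message notes, this only catches first-order breakage; errors surfacing through transitive dependents are exactly what a full-suite run would still be needed for.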
> > I hear you, I would like the whole build + test process for scipy to be > faster too :) If 4 minutes sounds long, what about build + test on > windows, which takes at least 20 minutes (to multiply by three when I > build the superpack - and the process can't even be controlled remotely) ! > Still, I'm not giving up my C++ templates :) -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From millman at berkeley.edu Mon Nov 24 03:38:42 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Mon, 24 Nov 2008 00:38:42 -0800 Subject: [SciPy-dev] the state of scipy unit tests In-Reply-To: References: <492A5335.6040508@ar.media.kyoto-u.ac.jp> <3d375d730811232344o288c7535u5593d71b8635382d@mail.gmail.com> <5b8d13220811232359ja14ebd0u1a22b9275c3afc4e@mail.gmail.com> Message-ID: On Mon, Nov 24, 2008 at 12:26 AM, Matthieu Brucher wrote: > There may be issues with this when people modify some packages deeply, > but in a way not caught by the standard test battery. And then, when > you go to alpha, you get hundreds of failing tests, and it's so > overwhelming that you have to start the test battery from scratch. I don't think that is very likely. All the tests are still there and would be run if you ran scipy.test('full'). I would imagine that many people would run the full test-suite regularly. A developer could run scipy.test() for every change they make (if it only takes a short amount of time) and then run scipy.test('full') just once or twice a day. > It could be better to have a scipy buildbot, like for numpy, that runs > all the tests, and people before committing just check the most > important tests. This way, you don't get hundreds of failing tests > once you reactivate them, you can still track where the errors come > from, and the test time for a single developer remains small > (although, he could only check the result of the tests on the package > he's modifying). 
I think this should be done anyway; but I don't think it solves the problem for developers who want to quickly run the default test suite regularly. I think that decorating more of the tests as slow would solve this problem. The other problem is that we want binaries of alpha, beta, and rc releases to run the full test suite by default, since, in this instance, time isn't as important but completeness is. This could be solved, for instance, by changing label to be full by default in nosetester.py for tagged releases. This could be done by running a script that takes care of it or by adding some logic that changes the behavior if a flag (e.g., release in version.py) is True. -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From david at ar.media.kyoto-u.ac.jp Mon Nov 24 03:38:36 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 24 Nov 2008 17:38:36 +0900 Subject: [SciPy-dev] the state of scipy unit tests In-Reply-To: References: <492A5335.6040508@ar.media.kyoto-u.ac.jp> Message-ID: <492A680C.6030100@ar.media.kyoto-u.ac.jp> Nathan Bell wrote: > > As a general rule, more tests are better. OTOH, tests that people > *choose not to run* are not helpful. Agreed. But you can choose not to run scipy.stats, right ? > > I want to know that my changes to scipy.sparse haven't adversely > affected other parts of scipy. To my knowledge, there are only a few > such modules (io, maxentropy, spatial, and sparse.linalg), so I could, > in principle, test those directly and call it a day. However, it's > possible that modules that depend on those modules will expose errors > that would be hidden otherwise. Yes, but if you don't run a subset of the tests at all, you run into the same kind of issues anyway, no ?
In Scipy, most packages are relatively independent from each other, so a 'fast' mode to check that you did not screw up badly (some import stuff, etc...) is enough most of the time. IOW, I prefer something where you have to explicitly disregard tests rather than explicitly include them. > Still, I'm not giving up my C++ templates :) :) David From wnbell at gmail.com Mon Nov 24 04:39:35 2008 From: wnbell at gmail.com (Nathan Bell) Date: Mon, 24 Nov 2008 04:39:35 -0500 Subject: [SciPy-dev] the state of scipy unit tests In-Reply-To: <492A680C.6030100@ar.media.kyoto-u.ac.jp> References: <492A5335.6040508@ar.media.kyoto-u.ac.jp> <492A680C.6030100@ar.media.kyoto-u.ac.jp> Message-ID: On Mon, Nov 24, 2008 at 3:38 AM, David Cournapeau wrote: > > Agreed. But you can choose not to run scipy.stats, right ? > That's right, and I don't currently run those tests. But can a person who changes something in scipy.linalg choose not to run those tests? > > Yes, but if you don't run a subset of the tests at all, you run into the > same kind of issues anyway, no ? In Scipy, most packages are relatively > independent from each other, so a 'fast' mode to check that you did not > screw up badly (some import stuff, etc...) is enough most of the time. > > IOW, I prefer something where you have to explicitly disregard tests > rather than explicitly include them. > I don't understand your argument. You propose to make 'fast' be the thing that developers run before committing changes to SVN and then argue that this will lead to more tests being run? Who runs the slow tests? If you make the default too slow, the *de facto default* will be 'fast' or None :) Passing 'nosetests scipy' should be the standard for modifications to scipy. It should be as comprehensive as possible while running in ~60 seconds. We can have an additional suite of 'slow' tests for releases, build bots, and paranoid developers. 
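[Editor's note: the `nosetests -A "not slow"` mechanism discussed in this thread works by filtering tests on function attributes. Below is a self-contained sketch of that idea using only the stdlib, not nose itself; the test functions are hypothetical stand-ins.]

```python
def slow(func):
    """Tag a test as slow, the way nose attribute plugins do."""
    func.slow = True
    return func

def test_quick():
    assert 1 + 1 == 2

@slow
def test_exhaustive():
    # Stand-in for a multi-minute test.
    assert sum(range(10 ** 6)) == 499999500000

def run(tests, include_slow=False):
    """Run tests, skipping slow-tagged ones by default (like -A "not slow")."""
    ran = []
    for t in tests:
        if getattr(t, 'slow', False) and not include_slow:
            continue
        t()
        ran.append(t.__name__)
    return ran

assert run([test_quick, test_exhaustive]) == ['test_quick']
assert run([test_quick, test_exhaustive], include_slow=True) == [
    'test_quick', 'test_exhaustive']
```

Because the tag is just an attribute, "paranoid developers" and release scripts can opt back in with one flag while the default stays fast.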
-- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From millman at berkeley.edu Mon Nov 24 04:43:16 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Mon, 24 Nov 2008 01:43:16 -0800 Subject: [SciPy-dev] the state of scipy unit tests In-Reply-To: References: <492A5335.6040508@ar.media.kyoto-u.ac.jp> <492A680C.6030100@ar.media.kyoto-u.ac.jp> Message-ID: On Mon, Nov 24, 2008 at 1:39 AM, Nathan Bell wrote: > Passing 'nosetests scipy' should be the standard for modifications to > scipy. It should be as comprehensive as possible while running in ~60 > seconds. We can have an additional suite of 'slow' tests for > releases, build bots, and paranoid developers. I completely agree. And it would be fairly easy to do with out much work, which is a big plus. -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From david at ar.media.kyoto-u.ac.jp Mon Nov 24 04:40:20 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 24 Nov 2008 18:40:20 +0900 Subject: [SciPy-dev] the state of scipy unit tests In-Reply-To: References: <492A5335.6040508@ar.media.kyoto-u.ac.jp> <492A680C.6030100@ar.media.kyoto-u.ac.jp> Message-ID: <492A7684.8090706@ar.media.kyoto-u.ac.jp> Nathan Bell wrote: > I don't understand your argument. You propose to make 'fast' be the > thing that developers run before committing changes to SVN and then > argue that this will lead to more tests being run? Who runs the slow > tests? Users. But well, it looks like I am in minority, so let's go for your suggestion. 
David From stefan at sun.ac.za Mon Nov 24 06:00:31 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 24 Nov 2008 13:00:31 +0200 Subject: [SciPy-dev] close tickets, add doc for odes In-Reply-To: References: Message-ID: <9457e7c80811240300s3c948de7pbf201c51468af426@mail.gmail.com> Hi Benny 2008/11/21 Benny Malengier : > From my point of view, the PR around scikits could indeed use some > improvement/streamlining. There are some plans in the pipeline to work on this. Hopefully, I'll have some good news by the end of the holidays. Regards Stéfan From millman at berkeley.edu Mon Nov 24 06:46:40 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Mon, 24 Nov 2008 03:46:40 -0800 Subject: [SciPy-dev] close tickets, add doc for odes In-Reply-To: References: Message-ID: On Fri, Nov 21, 2008 at 3:09 AM, Benny Malengier wrote: > can somebody close: > http://projects.scipy.org/scipy/scipy/ticket/615 > http://www.scipy.org/scipy/scipy/ticket/730 Done. > They are part of new odes scikit. Thanks. > I have no edit button on that wiki page though although the text says I > should have. I suppose the page is protected. Can somebody add an entry for > odes or give me permission? I will then add more on a separate wiki page. My > login on the wiki is user 'bmcage' Done. > From my point of view, the PR around scikits could indeed use some > improvement/streamlining. Absolutely. -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From bsouthey at gmail.com Mon Nov 24 10:19:27 2008 From: bsouthey at gmail.com (Bruce Southey) Date: Mon, 24 Nov 2008 09:19:27 -0600 Subject: [SciPy-dev] friedmanchisquare fixed, test based on R help ?
In-Reply-To: <1cd32cbb0811230643rc17518fv8ee1b059a982da90@mail.gmail.com> References: <1cd32cbb0811221959t4461b800nb48cd25ab5850cbb@mail.gmail.com> <91b4b1ab0811222016l645e1ba0y298caccdfedda795@mail.gmail.com> <1cd32cbb0811230643rc17518fv8ee1b059a982da90@mail.gmail.com> Message-ID: <492AC5FF.7030909@gmail.com> An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Mon Nov 24 12:12:43 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 24 Nov 2008 12:12:43 -0500 Subject: [SciPy-dev] stats.cov versus np.cov Message-ID: <1cd32cbb0811240912x8161a68k68cd7dc1d5adb93e@mail.gmail.com> in http://scipy.org/scipy/scipy/ticket/425, stats mean, var and median got deprecated. Should this also be done for cov? Note: scipy and stats versions have different defaults for column versus row variables >>> x = np.array([[0, 2], [1, 1], [2, 0]]).T >>> np.cov(x) array([[ 1., -1.], [-1., 1.]]) >>> stats.cov(x) array([[ 2., 0., -2.], [ 0., 0., 0.], [-2., 0., 2.]]) >>> stats.cov(x.T) array([[ 1., -1.], [-1., 1.]]) >>> np.mean(x) 1.0 Josef From josef.pktd at gmail.com Mon Nov 24 15:17:58 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 24 Nov 2008 15:17:58 -0500 Subject: [SciPy-dev] license for code published in articles Message-ID: <1cd32cbb0811241217o34389c4dqee28e819f3758e28@mail.gmail.com> Is code that is published verbatim inside a paper (not as attachment) license restricted? The Journal of Statistical Software publishes code under GPL. Does GPL also apply to source snippets or functions that are included in the text of the article, or is there a fair use assumption on the text of the article? example: Kolmogorov distribution for two-sided ks-test paper: http://www.jstatsoft.org/v08/i18 George Marsaglia, Wai Wan Tsang, Jingbo Wang: Evaluating Kolmogorov's Distribution, Journal of Statistical Software Vol.
8, Issue 18, Nov 2003 discussion: http://projects.scipy.org/pipermail/scipy-dev/2004-July/002182.html Josef From robert.kern at gmail.com Mon Nov 24 15:22:27 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 24 Nov 2008 14:22:27 -0600 Subject: [SciPy-dev] license for code published in articles In-Reply-To: <1cd32cbb0811241217o34389c4dqee28e819f3758e28@mail.gmail.com> References: <1cd32cbb0811241217o34389c4dqee28e819f3758e28@mail.gmail.com> Message-ID: <3d375d730811241222h23a06153s9d34848962559e3a@mail.gmail.com> On Mon, Nov 24, 2008 at 14:17, wrote: > Is code that is published verbatim inside a paper (not as attachment) > license restricted? Yes, of course, provided that the code is otherwise copyrightable. > The Journal of Statistical Software publishes code under GPL. > Does GPL also apply to source snippets or functions that are included > in the text of the article, or is there a fair use assumption on the > text of the article? Including code in scipy does not fall under fair use, so that's neither here nor there. You can ask the authors for permission, if you like, but it's best just to read the paper, and write new code. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From josef.pktd at gmail.com Mon Nov 24 15:56:24 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 24 Nov 2008 15:56:24 -0500 Subject: [SciPy-dev] license for code published in articles In-Reply-To: <3d375d730811241222h23a06153s9d34848962559e3a@mail.gmail.com> References: <1cd32cbb0811241217o34389c4dqee28e819f3758e28@mail.gmail.com> <3d375d730811241222h23a06153s9d34848962559e3a@mail.gmail.com> Message-ID: <1cd32cbb0811241256l542247f8i911b2be85862d61f@mail.gmail.com> > > but it's best > just to read the paper, and write new code. > > -- > Robert Kern Thanks for the clarification. 
I was just wondering whether we have to stop reading the paper when the authors start to show the code after finishing with the description of the algorithm (for clean room development). Josef From xavier.gnata at gmail.com Mon Nov 24 16:39:38 2008 From: xavier.gnata at gmail.com (Xavier Gnata) Date: Mon, 24 Nov 2008 22:39:38 +0100 Subject: [SciPy-dev] the state of scipy unit tests In-Reply-To: <492A7684.8090706@ar.media.kyoto-u.ac.jp> References: <492A5335.6040508@ar.media.kyoto-u.ac.jp> <492A680C.6030100@ar.media.kyoto-u.ac.jp> <492A7684.8090706@ar.media.kyoto-u.ac.jp> Message-ID: <492B1F1A.2090406@gmail.com> > Nathan Bell wrote: >> I don't understand your argument. You propose to make 'fast' be the >> thing that developers run before committing changes to SVN and then >> argue that this will lead to more tests being run? Who runs the slow >> tests? > > Users. But well, it looks like I am in minority, so let's go for your > suggestion. > > David Well looks like "unitary tests" versus "integration tests". Sounds good. Many users use the svn (for various reasons). From their point of view, it could be a problem when the svn is really broken.
Small test to be quite sure the svn is not broken and extensive tests run once per X and/or by users (after a fresh install) Xavier From eads at soe.ucsc.edu Mon Nov 24 16:54:07 2008 From: eads at soe.ucsc.edu (Damian Eads) Date: Mon, 24 Nov 2008 13:54:07 -0800 Subject: [SciPy-dev] license for code published in articles In-Reply-To: <1cd32cbb0811241256l542247f8i911b2be85862d61f@mail.gmail.com> References: <1cd32cbb0811241217o34389c4dqee28e819f3758e28@mail.gmail.com> <3d375d730811241222h23a06153s9d34848962559e3a@mail.gmail.com> <1cd32cbb0811241256l542247f8i911b2be85862d61f@mail.gmail.com> Message-ID: <91b4b1ab0811241354w1aa6e1afrb6fe2ac44f2d5cdc@mail.gmail.com> One solution is to blur your eyes so you can't read text but can distinguish between fixed width font and variable-width, cross out the code with a sharpie magic marker, and then read the paper. Damian On 11/24/08, josef.pktd at gmail.com wrote: >> >> but it's best >> just to read the paper, and write new code. >> >> -- >> Robert Kern > > Thanks for the clarification. > > I was just wondering whether we have to stop reading the paper, when > the authors start to show the code after finishing with the > description of the algorithm (for clean room development). > > Josef > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -- Sent from my mobile device ----------------------------------------------------- Damian Eads Ph.D. 
Student Jack Baskin School of Engineering, UCSC E2-489 1156 High Street Machine Learning Lab Santa Cruz, CA 95064 http://www.soe.ucsc.edu/~eads From josef.pktd at gmail.com Mon Nov 24 17:58:18 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 24 Nov 2008 17:58:18 -0500 Subject: [SciPy-dev] the state of scipy unit tests In-Reply-To: <492B1F1A.2090406@gmail.com> References: <492A5335.6040508@ar.media.kyoto-u.ac.jp> <492A680C.6030100@ar.media.kyoto-u.ac.jp> <492A7684.8090706@ar.media.kyoto-u.ac.jp> <492B1F1A.2090406@gmail.com> Message-ID: <1cd32cbb0811241458o7e3c3f70jb0478fc048008b75@mail.gmail.com> On Mon, Nov 24, 2008 at 4:39 PM, Xavier Gnata wrote: >> Nathan Bell wrote: >>> I don't understand your argument. You propose to make 'fast' be the >>> thing that developers run before committing changes to SVN and then >>> argue that this will lead to more tests being run? Who runs the slow >>> tests? >> >> Users. But well, it looks like I am in minority, so let's go for your >> suggestion. >> >> David > > > Well looks like "unitary tests" versus "integration tests". > Sounds good. Many users use the svn (for various reasons). > >From there point of view, it could be problem when the svn is really broken. > Small test to be quite sure the svn is not broken and extensive tests > run once per X and/or by users (after a fresh install) > > Xavier > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > Now that 0.7 has been tagged, shall I decorate my tests as slow? nosetests -A "not slow" or scipy.test() will then exclude the 4-5 minutes of distributions tests. Without my tests (default setting with not slow) scipy.stats takes 4-6 seconds. I started to profile one of the tests and some distributions are very slow, they provide the correct results but the generic way of calculating takes a lot of time. 
example: For the R distribution, rdist, the test runs two Kolmogorov-Smirnov tests and has about 4 million function calls to the _pdf function, I guess mostly to generate 2000 random variables in a generic way based only on the pdf. Selectively tagging and excluding time-expensive methods is too much work for me right now, because which methods are expensive depends on what methods are defined in each specific distribution. Josef From robert.kern at gmail.com Mon Nov 24 18:21:35 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 24 Nov 2008 17:21:35 -0600 Subject: [SciPy-dev] the state of scipy unit tests In-Reply-To: <1cd32cbb0811241458o7e3c3f70jb0478fc048008b75@mail.gmail.com> References: <492A5335.6040508@ar.media.kyoto-u.ac.jp> <492A680C.6030100@ar.media.kyoto-u.ac.jp> <492A7684.8090706@ar.media.kyoto-u.ac.jp> <492B1F1A.2090406@gmail.com> <1cd32cbb0811241458o7e3c3f70jb0478fc048008b75@mail.gmail.com> Message-ID: <3d375d730811241521n64d9214qf4938ea6dbafe49b@mail.gmail.com> On Mon, Nov 24, 2008 at 16:58, wrote: > On Mon, Nov 24, 2008 at 4:39 PM, Xavier Gnata wrote: >>> Nathan Bell wrote: >>>> I don't understand your argument. You propose to make 'fast' be the >>>> thing that developers run before committing changes to SVN and then >>>> argue that this will lead to more tests being run? Who runs the slow >>>> tests? >>> >>> Users. But well, it looks like I am in minority, so let's go for your >>> suggestion. >>> >>> David >> >> >> Well looks like "unitary tests" versus "integration tests". >> Sounds good. Many users use the svn (for various reasons). >> From their point of view, it could be a problem when the svn is really broken.
>> Small test to be quite sure the svn is not broken and extensive tests >> run once per X and/or by users (after a fresh install) >> >> Xavier >> _______________________________________________ >> Scipy-dev mailing list >> Scipy-dev at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-dev >> > > Now that 0.7 has been tagged, shall I decorate my tests as slow? > > nosetests -A "not slow" or scipy.test() will then exclude the 4-5 > minutes of distributions tests. > Without my tests (default setting with not slow) scipy.stats takes 4-6 seconds. > > I started to profile one of the tests and some distributions are very > slow, they provide the correct results but the generic way of > calculating takes a lot of time. > > example: For the R distribution, rdist, the test runs two kolmogorov > smirnov tests and has about 4 million function calls to the _pdf > function, I guess mostly to generate 2000 random variables in a > generic way based only on the pdf. I don't think we should be doing any K-S tests of the distributions in the test suite. Once we have validated that our algorithms work (using these tests, with large sample sizes), we should generate a small number of variates from each distribution using a fixed seed. The unit tests in the main test suite will simply generate the same number of variates with the same seed and directly compare the results. If we start to get failures, then we can recheck using the K-S tests that the algorithm is still good, and regenerate the reference variates. The only problem I can see is if there are platform-dependent results for some distributions, but that would be very good to figure out now, too. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From josef.pktd at gmail.com Mon Nov 24 19:11:42 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 24 Nov 2008 19:11:42 -0500 Subject: [SciPy-dev] the state of scipy unit tests In-Reply-To: <3d375d730811241521n64d9214qf4938ea6dbafe49b@mail.gmail.com> References: <492A5335.6040508@ar.media.kyoto-u.ac.jp> <492A680C.6030100@ar.media.kyoto-u.ac.jp> <492A7684.8090706@ar.media.kyoto-u.ac.jp> <492B1F1A.2090406@gmail.com> <1cd32cbb0811241458o7e3c3f70jb0478fc048008b75@mail.gmail.com> <3d375d730811241521n64d9214qf4938ea6dbafe49b@mail.gmail.com> Message-ID: <1cd32cbb0811241611m33bf939eq49dcfceefd2179f0@mail.gmail.com> > > I don't think we should be doing any K-S tests of the distributions in > the test suite. Once we have validated that our algorithms work (using > these tests, with large sample sizes), we should generate a small > number of variates from each distribution using a fixed seed. The unit > tests in the main test suite will simply generate the same number of > variates with the same seed and directly compare the results. If we > start to get failures, then we can recheck using the K-S tests that > the algorithm is still good, and regenerate the reference variates. > > The only problem I can see is if there are platform-dependent results > for some distributions, but that would be very good to figure out now, > too. > > -- > Robert Kern > Currently I am using generated random variables for two purposes: * To test whether the random number generator is correct, kstest or something similar would be necessary, with a large enough sample size for the tests to have reasonable power, similar to the initial kstest in the test suite. (btw. there are still 2 known failures in mtrand) * In the second type of tests, I use the sample properties as a benchmark for the theoretical properties. For this purpose any randomness could be completely removed. Currently the only outside information the tests use comes from numpy.random. e.g. 
I compare sample moments with theoretical moments. If we have a benchmark for what the true theoretical values should be, then these could be directly compared, without generating a random sample. However, I wasn't willing to go to R and generate benchmark data for 100 or so distributions, so I used the sample properties. Using sample properties and internal consistency between specific and generic methods creates, I think, quite reliable tests. For this case, we could now create our own benchmark, assuming our algorithms are correct, and use those for regression tests. A simple script should be able to create the benchmark data. One disadvantage of this is that, if we want to test a distribution with different parameter values, we still need to get the benchmark data for the new parameters. When I made changes, for example, to the behavior of a distribution method at an extreme or close-to-corner value, I was quite glad I could rely on my tests. I just needed to add a test case with new parameters and the tests checked all methods for this case, without me having to specify expected results for each method. I don't know how everyone is handling this, but I need to keep track of a public test suite (for those not working on distributions) and a "development" test suite, which is much stricter, and that I use when I make changes directly to the distribution module. But, I agree, for the purpose of a regression test suite, there is a large amount of simplification that can be done to my (bug-hunting) test suite. Josef From nwagner at iam.uni-stuttgart.de Tue Nov 25 13:29:57 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 25 Nov 2008 19:29:57 +0100 Subject: [SciPy-dev] New scipy.test() failures Message-ID: Hi all, I found some new test failures. Can someone reproduce these failures ?
====================================================================== FAIL: test_simple (test_fblas.TestCgemv) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib64/python2.5/site-packages/scipy/lib/blas/tests/test_fblas.py", line 325, in test_simple assert_array_almost_equal(desired_y,y) File "/usr/local/lib64/python2.5/site-packages/numpy/testing/utils.py", line 310, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/local/lib64/python2.5/site-packages/numpy/testing/utils.py", line 295, in assert_array_compare raise AssertionError(msg) AssertionError: Arrays are not almost equal (mismatch 33.3333333333%) x: array([ -5.82329798 +5.82329798j, 8.58068085 -6.58068085j, -12.74014378+16.74014282j], dtype=complex64) y: array([ -5.82329798 +5.82329798j, 8.58068085 -6.58068085j, -12.74014378+16.74014473j], dtype=complex64) ====================================================================== FAIL: test_imresize (test_pilutil.TestPILUtil) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib64/python2.5/site-packages/numpy/testing/decorators.py", line 82, in skipper return f(*args, **kwargs) File "/usr/local/lib64/python2.5/site-packages/scipy/misc/tests/test_pilutil.py", line 24, in test_imresize assert_equal(im1.shape,(11,22)) File "/usr/local/lib64/python2.5/site-packages/numpy/testing/utils.py", line 174, in assert_equal assert_equal(len(actual),len(desired),err_msg,verbose) File "/usr/local/lib64/python2.5/site-packages/numpy/testing/utils.py", line 183, in assert_equal raise AssertionError(msg) AssertionError: Items are not equal: ACTUAL: 0 DESIRED: 2 ====================================================================== FAIL: test_kdtree.test_distance_matrix ---------------------------------------------------------------------- Traceback (most recent call last): File 
"/usr/local/lib64/python2.5/site-packages/nose-0.10.4-py2.5.egg/nose/case.py", line 182, in runTest self.test(*self.arg) File "/usr/local/lib64/python2.5/site-packages/scipy/spatial/tests/test_kdtree.py", line 416, in test_distance_matrix assert_equal(distance(xs[i],ys[j]),ds[i,j]) File "/usr/local/lib64/python2.5/site-packages/numpy/testing/utils.py", line 183, in assert_equal raise AssertionError(msg) AssertionError: Items are not equal: ACTUAL: 2.775628274295713 DESIRED: 2.7756282742957135 ---------------------------------------------------------------------- Ran 2858 tests in 43.229s FAILED (KNOWNFAIL=2, SKIP=16, failures=3) >>> scipy.__version__ '0.7.0.dev5193' From ndbecker2 at gmail.com Tue Nov 25 13:45:14 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Tue, 25 Nov 2008 13:45:14 -0500 Subject: [SciPy-dev] Using memoryviews References: <49253821.7010909@egenix.com> Message-ID: This discussion might be of interest (from python-3000.devel): Antoine Pitrou wrote: > Josiah Carlson gmail.com> writes: >> >> From what I understand of the memoryview when I tried to do the same >> thing a few months ago (use memoryview to replace buffer in >> asyncore/asynchat), memoryview is incomplete. It didn't support >> character buffer slicing (you know, the 'offset' and 'size' arguments >> that were in buffer), and at least a handful of other things (that I >> can't remember at the moment). > > You should try again, memoryview now supports slicing (with the usual > Python syntax, e.g. m[2:5]) as well as slice assignment (with the fairly > sensible limitation that you can't resize the underlying buffer). There's > no real doc for it, but you can look at test_memoryview.py in the Lib/test > directory to have a fairly comprehensive list of the things currently > supported. > > I also support the addition of official functions or macros to access the > underlying fields of the Py_buffer struct, rather than access them > directly from 3rd party code. 
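The slicing and slice-assignment behaviour described in the quoted memoryview discussion can be tried directly in a Python 3 interpreter; a minimal sketch:

```python
buf = bytearray(b"abcdefgh")
m = memoryview(buf)

print(bytes(m[2:5]))   # a zero-copy slice of the buffer: b'cde'

m[2:5] = b"XYZ"        # slice assignment writes through to the buffer
print(buf)             # bytearray(b'abXYZfgh')

try:
    m[2:5] = b"TOOLONG"  # resizing the underlying buffer is not allowed
except ValueError:
    print("cannot resize underlying buffer")
```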
Someone please open an issue for that in the > tracker. > > The big, big limitation of memoryviews right now is that they only support > one-dimensional byte buffers. The people interested in more complex > arrangements (that is, Scipy/Numpy people) have been completely absent > from the python-dev community for many months now, and I don't think > anyone else cares enough to do the job instead of them. > > Regards > > Antoine. From ilanschnell at gmail.com Tue Nov 25 18:26:16 2008 From: ilanschnell at gmail.com (Ilan Schnell) Date: Tue, 25 Nov 2008 17:26:16 -0600 Subject: [SciPy-dev] New scipy.test() failures In-Reply-To: References: Message-ID: <2fbe16300811251526h5200cd83hf120e0bc248daefb@mail.gmail.com> Hi Nils, Can you please provide some information about the system you are using, and which compiler you used, or whether you installed scipy from a binary, ... There are known problems on different architectures. - Ilan On Tue, Nov 25, 2008 at 12:29 PM, Nils Wagner wrote: > Hi all, > > I found some new test failures. > Can someone reproduce these failures ? 
> > ====================================================================== > FAIL: test_simple (test_fblas.TestCgemv) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/local/lib64/python2.5/site-packages/scipy/lib/blas/tests/test_fblas.py", > line 325, in test_simple > assert_array_almost_equal(desired_y,y) > File > "/usr/local/lib64/python2.5/site-packages/numpy/testing/utils.py", > line 310, in assert_array_almost_equal > header='Arrays are not almost equal') > File > "/usr/local/lib64/python2.5/site-packages/numpy/testing/utils.py", > line 295, in assert_array_compare > raise AssertionError(msg) > AssertionError: > Arrays are not almost equal > > (mismatch 33.3333333333%) > x: array([ -5.82329798 +5.82329798j, 8.58068085 > -6.58068085j, > -12.74014378+16.74014282j], dtype=complex64) > y: array([ -5.82329798 +5.82329798j, 8.58068085 > -6.58068085j, > -12.74014378+16.74014473j], dtype=complex64) > > ====================================================================== > FAIL: test_imresize (test_pilutil.TestPILUtil) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/local/lib64/python2.5/site-packages/numpy/testing/decorators.py", > line 82, in skipper > return f(*args, **kwargs) > File > "/usr/local/lib64/python2.5/site-packages/scipy/misc/tests/test_pilutil.py", > line 24, in test_imresize > assert_equal(im1.shape,(11,22)) > File > "/usr/local/lib64/python2.5/site-packages/numpy/testing/utils.py", > line 174, in assert_equal > assert_equal(len(actual),len(desired),err_msg,verbose) > File > "/usr/local/lib64/python2.5/site-packages/numpy/testing/utils.py", > line 183, in assert_equal > raise AssertionError(msg) > AssertionError: > Items are not equal: > ACTUAL: 0 > DESIRED: 2 > > ====================================================================== > FAIL: test_kdtree.test_distance_matrix > 
---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/local/lib64/python2.5/site-packages/nose-0.10.4-py2.5.egg/nose/case.py", > line 182, in runTest > self.test(*self.arg) > File > "/usr/local/lib64/python2.5/site-packages/scipy/spatial/tests/test_kdtree.py", > line 416, in test_distance_matrix > assert_equal(distance(xs[i],ys[j]),ds[i,j]) > File > "/usr/local/lib64/python2.5/site-packages/numpy/testing/utils.py", > line 183, in assert_equal > raise AssertionError(msg) > AssertionError: > Items are not equal: > ACTUAL: 2.775628274295713 > DESIRED: 2.7756282742957135 > > ---------------------------------------------------------------------- > Ran 2858 tests in 43.229s > > FAILED (KNOWNFAIL=2, SKIP=16, failures=3) > > >>>> scipy.__version__ > '0.7.0.dev5193' > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From aarchiba at physics.mcgill.ca Tue Nov 25 19:36:34 2008 From: aarchiba at physics.mcgill.ca (Anne Archibald) Date: Tue, 25 Nov 2008 19:36:34 -0500 Subject: [SciPy-dev] New scipy.test() failures In-Reply-To: References: Message-ID: 2008/11/25 Nils Wagner : > ====================================================================== > FAIL: test_kdtree.test_distance_matrix > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/local/lib64/python2.5/site-packages/nose-0.10.4-py2.5.egg/nose/case.py", > line 182, in runTest > self.test(*self.arg) > File > "/usr/local/lib64/python2.5/site-packages/scipy/spatial/tests/test_kdtree.py", > line 416, in test_distance_matrix > assert_equal(distance(xs[i],ys[j]),ds[i,j]) > File > "/usr/local/lib64/python2.5/site-packages/numpy/testing/utils.py", > line 183, in assert_equal > raise AssertionError(msg) > AssertionError: > Items are not equal: > ACTUAL: 2.775628274295713 > DESIRED: 
2.7756282742957135 Should be fixed in SVN. That'll teach me to trust floating-point arithmetic. Anne From charlesr.harris at gmail.com Tue Nov 25 21:19:03 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 25 Nov 2008 19:19:03 -0700 Subject: [SciPy-dev] Using memoryviews In-Reply-To: References: <49253821.7010909@egenix.com> Message-ID: On Tue, Nov 25, 2008 at 11:45 AM, Neal Becker wrote: > This discussion might be of interest (from python-3000.devel): > > Antoine Pitrou wrote: > > > Josiah Carlson gmail.com> writes: > >> > >> From what I understand of the memoryview when I tried to do the same > >> thing a few months ago (use memoryview to replace buffer in > >> asyncore/asynchat), memoryview is incomplete. It didn't support > >> character buffer slicing (you know, the 'offset' and 'size' arguments > >> that were in buffer), and at least a handful of other things (that I > >> can't remember at the moment). > > > > You should try again, memoryview now supports slicing (with the usual > > Python syntax, e.g. m[2:5]) as well as slice assignment (with the fairly > > sensible limitation that you can't resize the underlying buffer). There's > > no real doc for it, but you can look at test_memoryview.py in the > Lib/test > > directory to have a fairly comprehensive list of the things currently > > supported. > > > > I also support the addition of official functions or macros to access the > > underlying fields of the Py_buffer struct, rather than access them > > directly from 3rd party code. Someone please open an issue for that in > the > > tracker. > > > > The big, big limitation of memoryviews right now is that they only > support > > one-dimensional byte buffers. The people interested in more complex > > arrangements (that is, Scipy/Numpy people) have been completely absent > > from the python-dev community for many months now, and I don't think > > anyone else cares enough to do the job instead of them. > > What is memoryview? 
Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From millman at berkeley.edu Wed Nov 26 01:04:29 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 25 Nov 2008 22:04:29 -0800 Subject: [SciPy-dev] ANN: SciPy 0.7.0b1 (beta release) Message-ID: I'm pleased to announce the first beta release of SciPy 0.7.0. SciPy is a package of tools for science and engineering for Python. It includes modules for statistics, optimization, integration, linear algebra, Fourier transforms, signal and image processing, ODE solvers, and more. This beta release comes almost one year after the 0.6.0 release and contains many new features, numerous bug-fixes, improved test coverage, and better documentation. Please note that SciPy 0.7.0b1 requires Python 2.4 or greater and NumPy 1.2.0 or greater. For information, please see the release notes: http://sourceforge.net/project/shownotes.php?group_id=27747&release_id=642769 You can download the release from here: http://sourceforge.net/project/showfiles.php?group_id=27747&package_id=19531&release_id=642769 Thank you to everybody who contributed to this release. Enjoy, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From efiring at hawaii.edu Wed Nov 26 02:14:30 2008 From: efiring at hawaii.edu (Eric Firing) Date: Tue, 25 Nov 2008 21:14:30 -1000 Subject: [SciPy-dev] scipy.signal.signaltools: filtfilt calls missing function Message-ID: <492CF756.4090302@hawaii.edu> The filtfilt() function calls lfilter_zi, which is absent from scipy. A version of it is in the Cookbook: http://www.scipy.org/Cookbook/FiltFilt. It looks to me like a bit of reworking would be in order before adding it to scipy; it is using matrices. I hope this can be fixed before the 0.7 release. Is it essential that I create a ticket, or is this email sufficient? 
Eric From nwagner at iam.uni-stuttgart.de Wed Nov 26 02:15:16 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 26 Nov 2008 08:15:16 +0100 Subject: [SciPy-dev] New scipy.test() failures In-Reply-To: <2fbe16300811251526h5200cd83hf120e0bc248daefb@mail.gmail.com> References: <2fbe16300811251526h5200cd83hf120e0bc248daefb@mail.gmail.com> Message-ID: On Tue, 25 Nov 2008 17:26:16 -0600 "Ilan Schnell" wrote: > Hi Nils, > > Can you please provide some information about the system > you are using, and which compiler you used, or whether >you > installed scipy from a binary, ... > There are known problems on different architectures. > > - Ilan > > Hi Ilan, I am using the latest svn versions on a x86_64 linux box (OpenSuSe 10.2). I have installed LAPACK/ATLAS numpy, scipy from source. Nils Nils From ndbecker2 at gmail.com Wed Nov 26 08:00:14 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Wed, 26 Nov 2008 08:00:14 -0500 Subject: [SciPy-dev] Howto specify mkl location? References: Message-ID: I have intel mkl on linux (x86_64) installed here: /opt/intel/mkl/10.0.5.025 How do I build scipy-0.7.0b1 using this? From ndbecker2 at gmail.com Wed Nov 26 08:04:43 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Wed, 26 Nov 2008 08:04:43 -0500 Subject: [SciPy-dev] ANN: SciPy 0.7.0b1 (beta release) References: Message-ID: TESTING ======= To test SciPy after installation (highly recommended), execute in Python >>> import scipy >>> scipy.test(level=1) Can scipy be tested _before_ installation? That would be helpful to add tests to the .spec file used for rpm build. From matthieu.brucher at gmail.com Wed Nov 26 08:07:02 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 26 Nov 2008 14:07:02 +0100 Subject: [SciPy-dev] Howto specify mkl location? 
In-Reply-To: References: Message-ID: Hi Neal, Add this to your site.cfg:

[mkl]
library_dirs = /opt/intel/mkl/10.0.5.025/lib/em64t
lapack_libs = libmkl_lapack.a
mkl_libs = libmkl_intel_lp64.a,libmkl_intel_thread.a,libmkl_core.a,iomp5,guide

I think scipy has the same issue as numpy with the mkl, so it will not work unless you use the static MKL libraries (and not the shared ones, as is the case by default). Unfortunately, I didn't find out how to do this... Although I have the static libraries in my path, they are not found by python setup.py config. Matthieu 2008/11/26 Neal Becker : > I have intel mkl on linux (x86_64) installed here: > > /opt/intel/mkl/10.0.5.025 > > How do I build scipy-0.7.0b1 using this? > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -- Information System Engineer, Ph.D. Website: http://matthieu-brucher.developpez.com/ Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn: http://www.linkedin.com/in/matthieubrucher From david at ar.media.kyoto-u.ac.jp Wed Nov 26 07:52:52 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 26 Nov 2008 21:52:52 +0900 Subject: [SciPy-dev] ANN: SciPy 0.7.0b1 (beta release) In-Reply-To: References: Message-ID: <492D46A4.7090909@ar.media.kyoto-u.ac.jp> Neal Becker wrote: > Can scipy be tested _before_ installation? That would be helpful to add tests to the .spec file used for rpm build. > What do you mean by testing before installation ?
David From ndbecker2 at gmail.com Wed Nov 26 08:23:45 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Wed, 26 Nov 2008 08:23:45 -0500 Subject: [SciPy-dev] 1 Failure in 0.7.0b1 References: Message-ID: Fedora F9 x86_64: FAIL: test_pbdv (test_basic.TestCephes) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib64/python2.5/site-packages/scipy/special/tests/test_basic.py", line 357, in test_pbdv assert_equal(cephes.pbdv(1,0),(0.0,0.0)) File "/usr/lib64/python2.5/site-packages/numpy/testing/utils.py", line 176, in assert_equal assert_equal(actual[k], desired[k], 'item=%r\n%s' % (k,err_msg), verbose) File "/usr/lib64/python2.5/site-packages/numpy/testing/utils.py", line 183, in assert_equal raise AssertionError(msg) AssertionError: Items are not equal: item=1 ACTUAL: 1.0 DESIRED: 0.0 From ndbecker2 at gmail.com Wed Nov 26 08:25:04 2008 From: ndbecker2 at gmail.com (Neal Becker) Date: Wed, 26 Nov 2008 08:25:04 -0500 Subject: [SciPy-dev] ANN: SciPy 0.7.0b1 (beta release) References: <492D46A4.7090909@ar.media.kyoto-u.ac.jp> Message-ID: David Cournapeau wrote: > Neal Becker wrote: >> Can scipy be tested _before_ installation? That would be helpful to add >> tests to the .spec file used for rpm build. >> > > What do you mean by testing before installation ? > > David Build scipy, don't install, then test it in place. Nice to know if it passes the tests before I install and overwrite my known good version. From opossumnano at gmail.com Wed Nov 26 08:34:43 2008 From: opossumnano at gmail.com (Tiziano Zito) Date: Wed, 26 Nov 2008 14:34:43 +0100 Subject: [SciPy-dev] scipy.linalg.qr deprecation warning Message-ID: <20081126133442.GA20844@localhost> Hi guys, when using scipy.linalg.qr without specifying any keyword argument the user gets the following warning: DeprecationWarning: qr econ argument will be removed after scipy 0.7. 
The economy transform will then be available through the mode='economic' argument. While I appreciate the idea of warning users of future API changes, I think emitting a warning even if the function has been called without specifying the "econ" argument is a bit too much. The user may not even know what "econ" stands for. What about setting "econ=None" by default and:

- if the user specified "econ" explicitly, emit the warning and set econ according to user preference
- if the user did not specify "econ", set it to the current default, i.e. False

? If no one objects, I will do the needed changes myself tomorrow. tiziano From david at ar.media.kyoto-u.ac.jp Wed Nov 26 08:27:33 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 26 Nov 2008 22:27:33 +0900 Subject: [SciPy-dev] ANN: SciPy 0.7.0b1 (beta release) In-Reply-To: References: <492D46A4.7090909@ar.media.kyoto-u.ac.jp> Message-ID: <492D4EC5.5090602@ar.media.kyoto-u.ac.jp> Neal Becker wrote: > > Build scipy, don't install, then test it in place. Nice to know if it passes the tests before I install and overwrite my known good version. > You can simply install several versions in parallel, using something like stow if you are on unix. That's what I use to easily switch between a stable version for my research and last svn version when developing numpy/scipy. Even simpler, you can first install it in a temporary directory to test it.
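Tiziano's "econ=None" sentinel proposal above can be sketched with a toy wrapper; this is illustrative, not scipy.linalg's actual implementation. The default None means "not passed", so the deprecation warning fires only for callers who spell out econ:

```python
import warnings

def qr(a, econ=None):
    """Toy stand-in for scipy.linalg.qr showing the sentinel pattern."""
    if econ is None:
        econ = False  # current default; caller said nothing, so stay quiet
    else:
        warnings.warn("qr econ argument will be removed after scipy 0.7; "
                      "use mode='economic' instead.", DeprecationWarning)
    return "economic" if econ else "full"

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    qr([[1.0]])             # default call: no warning
    qr([[1.0]], econ=True)  # explicit econ: one DeprecationWarning
print(len(caught))  # 1
```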
cheers, David From stefan at sun.ac.za Wed Nov 26 09:01:31 2008 From: stefan at sun.ac.za (Stéfan van der Walt) Date: Wed, 26 Nov 2008 16:01:31 +0200 Subject: [SciPy-dev] ANN: SciPy 0.7.0b1 (beta release) In-Reply-To: <492D4EC5.5090602@ar.media.kyoto-u.ac.jp> References: <492D46A4.7090909@ar.media.kyoto-u.ac.jp> <492D4EC5.5090602@ar.media.kyoto-u.ac.jp> Message-ID: <9457e7c80811260601s5a91f907vca0664edcaa8a0b5@mail.gmail.com> 2008/11/26 David Cournapeau : > You can simply install several versions in parallel, using something > like stow if you are on unix. That's what I use to easily switch between > a stable version for my research and last svn version when developing > numpy/scipy. Even simpler, you can first install it in a temporary > directory to test it. I.e. (unverified, but you get the idea):

STAGE=/path/to/stage
SCIPY=/path/to/scipy
mkdir ${STAGE}
cd ${SCIPY}
python setup.py install --prefix=${STAGE}
cd
export PYTHONPATH=${STAGE}
python -c 'import scipy; scipy.test(level=1)'

Cheers Stéfan From nwagner at iam.uni-stuttgart.de Wed Nov 26 09:08:10 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 26 Nov 2008 15:08:10 +0100 Subject: [SciPy-dev] scipy.linalg.qr deprecation warning In-Reply-To: <20081126133442.GA20844@localhost> References: <20081126133442.GA20844@localhost> Message-ID: On Wed, 26 Nov 2008 14:34:43 +0100 Tiziano Zito wrote: > Hi guys, > > when using scipy.linalg.qr without specifying any >keyword argument > the user gets the following warning: > > DeprecationWarning: qr econ argument will be removed >after scipy > 0.7. The economy transform will then be available >through the > mode='economic' argument. > > While I appreciate the idea of warning users of future >API changes, > I think emitting a warning even if the function has been >called without > specifying the "econ" argument is a bit too much. The >user may not > even know what "econ" stands for.
> What about setting "econ=None" by default and > > - if the user specified "econ" explicitly, emit the >warning and set > econ according to user preference > - if the user did not specify "econ", set it to the >current default, > i.e. False > > ? > > If no one objects, I will do the needed changes myself >tomorrow. > > tiziano > In that context http://projects.scipy.org/scipy/scipy/ticket/243 might be of interest. Nils From ramercer at gmail.com Wed Nov 26 11:23:48 2008 From: ramercer at gmail.com (Adam Mercer) Date: Wed, 26 Nov 2008 10:23:48 -0600 Subject: [SciPy-dev] ANN: SciPy 0.7.0b1 (beta release) In-Reply-To: References: Message-ID: <799406d60811260823x6367f71dvaafe2c87c52dc60b@mail.gmail.com> On Wed, Nov 26, 2008 at 00:04, Jarrod Millman wrote: > I'm pleased to announce the first beta release of SciPy 0.7.0. > > SciPy is a package of tools for science and engineering for Python. > It includes modules for statistics, optimization, integration, linear > algebra, Fourier transforms, signal and image processing, ODE solvers, > and more. > > This beta release comes almost one year after the 0.6.0 release and > contains many new features, numerous bug-fixes, improved test > coverage, and better documentation. Please note that SciPy 0.7.0b1 > requires Python 2.4 or greater and NumPy 1.2.0 or greater. > > For information, please see the release notes: > http://sourceforge.net/project/shownotes.php?group_id=27747&release_id=642769 > > You can download the release from here: > http://sourceforge.net/project/showfiles.php?group_id=27747&package_id=19531&release_id=642769 > > Thank you to everybody who contributed to this release. 
Just tried build on Mac OS X Leopard using MacPorts python and I'm getting the following build error: building 'arpack' library compiling C sources C compiler: /usr/bin/gcc-4.0 -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes error: file 'ARPACK/FWRAPPERS/veclib_cabi_c.c' does not exist Anyone seen this? Cheers Adam From josef.pktd at gmail.com Wed Nov 26 11:25:46 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 26 Nov 2008 11:25:46 -0500 Subject: [SciPy-dev] ANN: SciPy 0.7.0b1 (beta release) In-Reply-To: <9457e7c80811260601s5a91f907vca0664edcaa8a0b5@mail.gmail.com> References: <492D46A4.7090909@ar.media.kyoto-u.ac.jp> <492D4EC5.5090602@ar.media.kyoto-u.ac.jp> <9457e7c80811260601s5a91f907vca0664edcaa8a0b5@mail.gmail.com> Message-ID: <1cd32cbb0811260825w2ba90a73r13683e3aad4e70e4@mail.gmail.com> Using windows installer on windowsXP, sse2, Pentium M: slow but no errors or failures >>> scipy.test(level=1) Running unit tests for scipy NumPy version 1.2.1 NumPy is installed in C:\Programs\Python25\lib\site-packages\numpy SciPy version 0.7.0.dev5180 SciPy is installed in C:\Programs\Python25\lib\site-packages\scipy Python version 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] Ran 3688 tests in 508.000s OK (KNOWNFAIL=2, SKIP=31) The only very minor caveat is a warning about not removing a temp directory, but it is pretty common that installers and tests leave some temporary files and directories lying around in the default temp directory. 
Just out of curiosity: which part of scipy is producing temporary matrix market files? Each test run is producing around 20 of those files:

tmpyqkvrg.mtx
---------
%%MatrixMarket matrix array complex hermitian
%
2 2
1.0000000000000000e+000 0.0000000000000000e+000
2.0000000000000000e+000 -3.0000000000000000e+000
4.0000000000000000e+000 0.0000000000000000e+000
----------

Josef From malkarouri at yahoo.co.uk Wed Nov 26 12:31:28 2008 From: malkarouri at yahoo.co.uk (Muhammad Alkarouri) Date: Wed, 26 Nov 2008 17:31:28 +0000 (GMT) Subject: [SciPy-dev] Scipy not compiling in OS X 10.4 Message-ID: <487093.5338.qm@web27908.mail.ukl.yahoo.com> Hi everyone, Congratulations on getting the beta version out. When I tried to compile scipy I get the same error as in http://projects.scipy.org/pipermail/scipy-dev/2008-November/010352.html I am using Python 2.6 from python.org and OS X 10.4.11. The error is: error: file '/Users/malkarouri/tmp/scipy-0.7.0b1/ARPACK/FWRAPPERS/veclib_cabi_c.c' does not exist Regards, Muhammad Alkarouri From josef.pktd at gmail.com Wed Nov 26 13:16:54 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 26 Nov 2008 13:16:54 -0500 Subject: [SciPy-dev] ANN: SciPy 0.7.0b1 (beta release) In-Reply-To: <1cd32cbb0811260825w2ba90a73r13683e3aad4e70e4@mail.gmail.com> References: <492D46A4.7090909@ar.media.kyoto-u.ac.jp> <492D4EC5.5090602@ar.media.kyoto-u.ac.jp> <9457e7c80811260601s5a91f907vca0664edcaa8a0b5@mail.gmail.com> <1cd32cbb0811260825w2ba90a73r13683e3aad4e70e4@mail.gmail.com> Message-ID: <1cd32cbb0811261016u3b0bd662x47c83829259f9253@mail.gmail.com> On Wed, Nov 26, 2008 at 11:25 AM, wrote: > Using windows installer on windowsXP, sse2, Pentium M scipy.test(level=1): level is deprecated and I get the same results with level=10 as with level=1. I built the scipy beta source package from sourceforge with mingw:

* scipy.test(level=1) gives the same successful result as the installer
* scipy.test('full') ends with a segfault, that I always got with
one weave example nosetests pathto\scipy\weave\tests\test_ext_tools.py:TestExtModule.test_with_include ends with a segfault I think the offending code is the writing to stdout from c: code = """ std::cout << std::endl; std::cout << "test printing a value:" << a << std::endl; """ As Robert Kern mentioned several times, MinGW has a problem with stdout. Is it possible to skip this test when the compiler for weave is mingw, or change the code in the test if the purpose of the test is to test an include instead of testing stdout? After skipping this test, except for knownfailures and skips all tests pass. however, 2 knownfailures are counted as errors (maybe another problem with nose decorators): >nosetests pathto\scipy\weave\tests\ ====================================================================== ERROR: test_char_fail (test_scxx_dict.TestDictGetItemOp) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Programs\Python25\lib\site-packages\numpy\testing\decorators.py", lin e 119, in skipper raise KnownFailureTest, msg KnownFailureTest: Test skipped due to known failure ====================================================================== ERROR: test_obj_fail (test_scxx_dict.TestDictGetItemOp) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Programs\Python25\lib\site-packages\numpy\testing\decorators.py", lin e 119, in skipper raise KnownFailureTest, msg KnownFailureTest: Test skipped due to known failure ---------------------------------------------------------------------- Ran 449 tests in 775.250s FAILED (SKIP=8, errors=2) Josef From oliphant at enthought.com Wed Nov 26 13:30:02 2008 From: oliphant at enthought.com (Travis E. 
Oliphant) Date: Wed, 26 Nov 2008 12:30:02 -0600 Subject: [SciPy-dev] scipy.signal.signaltools: filtfilt calls missing function In-Reply-To: <492CF756.4090302@hawaii.edu> References: <492CF756.4090302@hawaii.edu> Message-ID: <492D95AA.6090502@enthought.com> Eric Firing wrote: > The filtfilt() function calls lfilter_zi, which is absent from scipy. A > version of it is in the Cookbook: > http://www.scipy.org/Cookbook/FiltFilt. It looks to me like a bit of > reworking would be in order before adding it to scipy; it is using matrices. > > Thanks for the catch. I've added the missing functionality to trunk. > I hope this can be fixed before the 0.7 release. > > Is it essential that I create a ticket, or is this email sufficient? > > The email is sufficient. I've fixed the problem. Thanks for the report. -Travis From josef.pktd at gmail.com Wed Nov 26 16:29:51 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 26 Nov 2008 16:29:51 -0500 Subject: [SciPy-dev] the state of scipy unit tests In-Reply-To: <1cd32cbb0811241611m33bf939eq49dcfceefd2179f0@mail.gmail.com> References: <492A5335.6040508@ar.media.kyoto-u.ac.jp> <492A680C.6030100@ar.media.kyoto-u.ac.jp> <492A7684.8090706@ar.media.kyoto-u.ac.jp> <492B1F1A.2090406@gmail.com> <1cd32cbb0811241458o7e3c3f70jb0478fc048008b75@mail.gmail.com> <3d375d730811241521n64d9214qf4938ea6dbafe49b@mail.gmail.com> <1cd32cbb0811241611m33bf939eq49dcfceefd2179f0@mail.gmail.com> Message-ID: <1cd32cbb0811261329m10c234aqc781d4e8ef89a3c6@mail.gmail.com> I reorganized the tests for stats.distributions. On my notebook, I have now 25 to 30 seconds for `nosetests -A "not slow" scipy.stats` (equivalents to scipy.stats.test() ); Without the "not slow" option, I still have around 5 minutes. I did not change the actual tests, I merged some tests to reuse generated random variables, and, after profiling, I moved some of the slowest continuous distributions into a separate test which is decorated with slow.
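The "slow" decoration Josef mentions works by tagging the test function with an attribute that nose's attrib plugin matches against -A "not slow". numpy.testing ships such a decorator; a minimal equivalent (with a hypothetical test name) looks like this:

```python
def slow(func):
    """Mark a test as slow; nosetests -A "not slow" then filters it out."""
    func.slow = True
    return func

@slow
def test_full_distribution_sweep():
    pass  # the expensive kstest-style checks would live here

print(test_full_distribution_sweep.slow)  # True
```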
The basic tests, including kstest, are now run by default for 70 out of 84 continuous and all discrete distributions. I also reduced the sample size and fixed the seed, and I hope that we don't get spurious random failures. I hope this time consumption of the tests is ok for now. Further test optimization has to wait. Josef From aisaac at american.edu Wed Nov 26 16:36:45 2008 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 26 Nov 2008 16:36:45 -0500 Subject: [SciPy-dev] ANN: SciPy 0.7.0b1 (beta release) In-Reply-To: References: Message-ID: <492DC16D.8040302@american.edu> Python 2.5 superpack on Win XP SP 3: Ran 3688 tests in 412.203s OK (KNOWNFAIL=2, SKIP=28) Alan Isaac From wnbell at gmail.com Wed Nov 26 16:53:23 2008 From: wnbell at gmail.com (Nathan Bell) Date: Wed, 26 Nov 2008 16:53:23 -0500 Subject: [SciPy-dev] the state of scipy unit tests In-Reply-To: <1cd32cbb0811261329m10c234aqc781d4e8ef89a3c6@mail.gmail.com> References: <492A680C.6030100@ar.media.kyoto-u.ac.jp> <492A7684.8090706@ar.media.kyoto-u.ac.jp> <492B1F1A.2090406@gmail.com> <1cd32cbb0811241458o7e3c3f70jb0478fc048008b75@mail.gmail.com> <3d375d730811241521n64d9214qf4938ea6dbafe49b@mail.gmail.com> <1cd32cbb0811241611m33bf939eq49dcfceefd2179f0@mail.gmail.com> <1cd32cbb0811261329m10c234aqc781d4e8ef89a3c6@mail.gmail.com> Message-ID: On Wed, Nov 26, 2008 at 4:29 PM, wrote: > I reorganized the tests for stats.distributions. > > On my notebook, I have now 25 to 30 seconds for `nosetests -A "not > slow" scipy.stats` (equivalents to scipy.stats.test() ); > Without the "not slow" option, I still have around 5 minutes > > I hope this time consumption of the tests is ok for now. Further test > optimization has to wait. > Thanks Josef, that makes things a lot better. Also, thank you for your other improvements to scipy.stats.
-- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From cournape at gmail.com Wed Nov 26 19:20:24 2008 From: cournape at gmail.com (David Cournapeau) Date: Thu, 27 Nov 2008 09:20:24 +0900 Subject: [SciPy-dev] ANN: SciPy 0.7.0b1 (beta release) In-Reply-To: <799406d60811260823x6367f71dvaafe2c87c52dc60b@mail.gmail.com> References: <799406d60811260823x6367f71dvaafe2c87c52dc60b@mail.gmail.com> Message-ID: <5b8d13220811261620w45a4ab63mc83c712b2ae96c0b@mail.gmail.com> On Thu, Nov 27, 2008 at 1:23 AM, Adam Mercer wrote: > > Just tried a build on Mac OS X Leopard using MacPorts python and I'm > getting the following build error: That's a bug in the arpack build file. It should be fixed in SVN, David From ramercer at gmail.com Wed Nov 26 20:51:34 2008 From: ramercer at gmail.com (Adam Mercer) Date: Wed, 26 Nov 2008 19:51:34 -0600 Subject: [SciPy-dev] ANN: SciPy 0.7.0b1 (beta release) In-Reply-To: <5b8d13220811261620w45a4ab63mc83c712b2ae96c0b@mail.gmail.com> References: <799406d60811260823x6367f71dvaafe2c87c52dc60b@mail.gmail.com> <5b8d13220811261620w45a4ab63mc83c712b2ae96c0b@mail.gmail.com> Message-ID: <799406d60811261751i489f8ccfnee69405e4e689dbf@mail.gmail.com> On Wed, Nov 26, 2008 at 18:20, David Cournapeau wrote: > On Thu, Nov 27, 2008 at 1:23 AM, Adam Mercer wrote: >> >> Just tried a build on Mac OS X Leopard using MacPorts python and I'm >> getting the following build error: > > That's a bug in the arpack build file. It should be fixed in SVN, I'm assuming that's r5199? Applying that change to the 0.7.0b1 tarball results in the same error.
Cheers Adam From ramercer at gmail.com Wed Nov 26 20:57:44 2008 From: ramercer at gmail.com (Adam Mercer) Date: Wed, 26 Nov 2008 19:57:44 -0600 Subject: [SciPy-dev] ANN: SciPy 0.7.0b1 (beta release) In-Reply-To: <799406d60811261751i489f8ccfnee69405e4e689dbf@mail.gmail.com> References: <799406d60811260823x6367f71dvaafe2c87c52dc60b@mail.gmail.com> <5b8d13220811261620w45a4ab63mc83c712b2ae96c0b@mail.gmail.com> <799406d60811261751i489f8ccfnee69405e4e689dbf@mail.gmail.com> Message-ID: <799406d60811261757hdfcb442o7f7efc1ee0d96a7b@mail.gmail.com> On Wed, Nov 26, 2008 at 19:51, Adam Mercer wrote: >> That's a bug in the arpack build file. It should be fixed in SVN, > > I'm assuming that's r5199? Applying that change to the 0.7.0b1 tarball > results in the same error. Ok, building from the trunk works so I must have mucked something up. Ignore the noise. Cheers Adam From millman at berkeley.edu Wed Nov 26 22:20:12 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Wed, 26 Nov 2008 19:20:12 -0800 Subject: [SciPy-dev] Can we close ticket 593? Message-ID: Tiziano Zito committed a fix in revision 5193 http://projects.scipy.org/scipy/scipy/changeset/5193 for ticket 593: http://projects.scipy.org/scipy/scipy/ticket/593 Nils reported that it fixed the problem for him. Can anyone else who was having this problem confirm that r5193 fixes it? If I don't hear anything to the contrary, I will assume that we can close this ticket. Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From millman at berkeley.edu Wed Nov 26 22:22:51 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Wed, 26 Nov 2008 19:22:51 -0800 Subject: [SciPy-dev] can we close these tickets? Message-ID: I think the following tickets can be closed: http://scipy.org/scipy/scipy/ticket/723 http://scipy.org/scipy/scipy/ticket/795 If I don't hear anything to the contrary, I will assume that these tickets can be closed.
Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From millman at berkeley.edu Wed Nov 26 22:31:11 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Wed, 26 Nov 2008 19:31:11 -0800 Subject: [SciPy-dev] two tickets with patches Message-ID: There are two tickets that contain donated code, which have been sitting around for a year or more: http://scipy.org/scipy/scipy/ticket/354 http://scipy.org/scipy/scipy/ticket/457 I would like to see at least some comments on them about why they haven't been accepted. They are both first attempts but no one has made any critical comments or useful suggestions for what the authors need to do in order to have their code accepted. One provides support for the Harwell-Boeing format and the other provides a FIR filter. For instance, would someone be willing to take a look and add a comment to the ticket that addresses at least: Do they need tests and coding style work? Or is there no interest in including this code in scipy? Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From millman at berkeley.edu Wed Nov 26 22:36:33 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Wed, 26 Nov 2008 19:36:33 -0800 Subject: [SciPy-dev] Scipy not compiling in OS X 10.4 In-Reply-To: <487093.5338.qm@web27908.mail.ukl.yahoo.com> References: <487093.5338.qm@web27908.mail.ukl.yahoo.com> Message-ID: On Wed, Nov 26, 2008 at 9:31 AM, Muhammad Alkarouri wrote: > When I tried to compile scipy I get the same error as in http://projects.scipy.org/pipermail/scipy-dev/2008-November/010352.html > I am using Python 2.6 from python.org and OS X 10.4.11. The error is: > error: file '/Users/malkarouri/tmp/scipy-0.7.0b1/ARPACK/FWRAPPERS/veclib_cabi_c.c' does not exist This should be fixed in the trunk.
Could you try checking out the trunk and checking whether the problem goes away? -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From millman at berkeley.edu Wed Nov 26 22:46:16 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Wed, 26 Nov 2008 19:46:16 -0800 Subject: [SciPy-dev] scipy.linalg.qr deprecation warning In-Reply-To: References: <20081126133442.GA20844@localhost> Message-ID: On Wed, Nov 26, 2008 at 6:08 AM, Nils Wagner wrote: > On Wed, 26 Nov 2008 14:34:43 +0100 > Tiziano Zito wrote: >> What about setting "econ=None" by default and >> >> - if the user specified "econ" explicitly, emit the >>warning and set >> econ according to user preference >> - if the user did not specify "econ", set it to the >>current default, >> i.e. False +1 Please go ahead and make the change. > http://projects.scipy.org/scipy/scipy/ticket/243 Could you also update and close this ticket? Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From millman at berkeley.edu Wed Nov 26 22:50:49 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Wed, 26 Nov 2008 19:50:49 -0800 Subject: [SciPy-dev] Using memoryviews In-Reply-To: References: <49253821.7010909@egenix.com> Message-ID: On Tue, Nov 25, 2008 at 6:19 PM, Charles R Harris wrote: > What is memoryview? 
Take a look at the new c-api calls in the buffer protocol PEP: http://www.python.org/dev/peps/pep-3118/#new-c-api-calls-are-proposed -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From wnbell at gmail.com Wed Nov 26 22:56:48 2008 From: wnbell at gmail.com (Nathan Bell) Date: Wed, 26 Nov 2008 22:56:48 -0500 Subject: [SciPy-dev] two tickets with patches In-Reply-To: References: Message-ID: On Wed, Nov 26, 2008 at 10:31 PM, Jarrod Millman wrote: > > I would like to see at least some comments on them about why they > haven't been accepted. They are both first attempts but no one has > made any critical comments or useful suggestions for what the authors > need to do in order to have their code accepted. One provides support > for the Harwell-Boeing format and the other provides a FIR filter. > > For instance, would someone be willing to take a look and add a > comment to the ticket that addresses at least: Do they need tests and > coding style work? Or is there no interest in including this code in > scipy? > Harwell-Boeing support would be useful, and the attached code is well-written. We'll need some tests, but that's not too difficult. I'll see that this gets added to SciPy 0.8.
-- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From cournape at gmail.com Wed Nov 26 23:24:51 2008 From: cournape at gmail.com (David Cournapeau) Date: Thu, 27 Nov 2008 13:24:51 +0900 Subject: [SciPy-dev] ANN: SciPy 0.7.0b1 (beta release) In-Reply-To: <799406d60811261757hdfcb442o7f7efc1ee0d96a7b@mail.gmail.com> References: <799406d60811260823x6367f71dvaafe2c87c52dc60b@mail.gmail.com> <5b8d13220811261620w45a4ab63mc83c712b2ae96c0b@mail.gmail.com> <799406d60811261751i489f8ccfnee69405e4e689dbf@mail.gmail.com> <799406d60811261757hdfcb442o7f7efc1ee0d96a7b@mail.gmail.com> Message-ID: <5b8d13220811262024p4bc80573ma90387673c16a79b@mail.gmail.com> On Thu, Nov 27, 2008 at 10:57 AM, Adam Mercer wrote: > On Wed, Nov 26, 2008 at 19:51, Adam Mercer wrote: > >>> That's a bug in the arpack build file. It should be fixed in SVN, >> >> I'm assuming thats r5199? Applying that change to the 0.7.0b1 tarball >> results in the same error. > > Ok, building from the trunk works so I must have mucked something up. > Ignore the noise. No, you may be right :) The problem is specific to the tarball (the file exists in subversion, it is not included in the tarball). To test whether the fix is OK, you should first generate the tarball from the trunk: python setup.py sdist And then use the generated tarball as the source tree to build. 
David From cournape at gmail.com Wed Nov 26 23:27:32 2008 From: cournape at gmail.com (David Cournapeau) Date: Thu, 27 Nov 2008 13:27:32 +0900 Subject: [SciPy-dev] scipy.linalg.qr deprecation warning In-Reply-To: References: <20081126133442.GA20844@localhost> Message-ID: <5b8d13220811262027m7af77c65wd0e11d8c145d0247@mail.gmail.com> On Thu, Nov 27, 2008 at 12:46 PM, Jarrod Millman wrote: > On Wed, Nov 26, 2008 at 6:08 AM, Nils Wagner > wrote: >> On Wed, 26 Nov 2008 14:34:43 +0100 >> Tiziano Zito wrote: >>> What about setting "econ=None" by default and >>> >>> - if the user specified "econ" explicitly, emit the >>>warning and set >>> econ according to user preference >>> - if the user did not specify "econ", set it to the >>>current default, >>> i.e. False > > +1 > Please go ahead and make the change. > >> http://projects.scipy.org/scipy/scipy/ticket/243 > > Could you also update and close this ticket? Hi Jarrod, the ticket cannot be closed: I am the one who added the warning, but the change indicated in the ticket has not been done yet; it will have to wait for 0.8; the deprecation is only the first step. David From cournape at gmail.com Wed Nov 26 23:33:36 2008 From: cournape at gmail.com (David Cournapeau) Date: Thu, 27 Nov 2008 13:33:36 +0900 Subject: [SciPy-dev] Scipy not compiling in OS X 10.4 In-Reply-To: References: <487093.5338.qm@web27908.mail.ukl.yahoo.com> Message-ID: <5b8d13220811262033s4ac59585y1d8112fcc28b73ac@mail.gmail.com> On Thu, Nov 27, 2008 at 12:36 PM, Jarrod Millman wrote: > On Wed, Nov 26, 2008 at 9:31 AM, Muhammad Alkarouri > wrote: >> When I tried to compile scipy I get the same error as in http://projects.scipy.org/pipermail/scipy-dev/2008-November/010352.html >> I am using Python 2.6 from python.org and OS X 10.4.11. The error is: >> error: file '/Users/malkarouri/tmp/scipy-0.7.0b1/ARPACK/FWRAPPERS/veclib_cabi_c.c' does not exist > > This should be fixed in the trunk. 
Could you try checking out the > trunk and checking whether the problem goes away? More exactly, you should check whether the generated tarball contains the needed file, that is, from the trunk: python setup.py sdist # Generate the source tarball # Untar the tarball and try building scipy from the tarball, not from the trunk David From josef.pktd at gmail.com Thu Nov 27 01:02:50 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 27 Nov 2008 01:02:50 -0500 Subject: [SciPy-dev] kstest is reporting wrong p-value ?? Message-ID: <1cd32cbb0811262202j61886db1vc7668ed348a75210@mail.gmail.com> Looking again at ticket 395 about the Kolmogorov-Smirnov test, I'm quite sure the kstest is wrong. The current implementation uses the absolute value of the deviation, therefore it is a two-sided test. A one-sided test takes either max or min of the deviations (not of absolute deviations). However, the test distribution that is used to calculate the p-value is ksone, the distribution for the one-sided Kolmogorov-Smirnov test. So, the reported p-value should be off by approximately one half, or maybe double (?). There was a discussion in http://projects.scipy.org/pipermail/scipy-dev/2004-July/002181.html about this, but I'm not sure that conclusion is correct. Can a statistics-knowledgeable person check this, or someone with access to a good book? If I am correct, then I can fix the test next week. Josef From robert.kern at gmail.com Thu Nov 27 01:43:30 2008 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 27 Nov 2008 00:43:30 -0600 Subject: [SciPy-dev] kstest is reporting wrong p-value ?? In-Reply-To: <1cd32cbb0811262202j61886db1vc7668ed348a75210@mail.gmail.com> References: <1cd32cbb0811262202j61886db1vc7668ed348a75210@mail.gmail.com> Message-ID: <3d375d730811262243o474f11atbd0ebc263bc17ade@mail.gmail.com> On Thu, Nov 27, 2008 at 00:02, wrote: > Looking again at ticket 395 about the Kolmogorov-Smirnov test, I'm > quite sure the kstest is wrong.
> > The current implementation uses absolute value of the deviation, > therefore it is a two sided test. A one-sided test takes either max or > min of the deviations (not of absolute deviations). However, the test > distribution that is used to calculate the p-value is ksone, the > distribution for the one-sided Kolmogorov-Smirnov test. So, the > reported p-value should be off by approximately one half, or maybe > double (?). No, it's only slightly off (but you are correct that it is off). The names "one-sided" and "two-sided" don't really correspond with the usual meaning for generic hypothesis tests. Rather, they describe the different statistics and their distributions. There are two different kinds of "one-sided" K-S statistics, one that uses the greatest signed difference between the ECDF and the CDF, and one that uses the greatest signed difference between the CDF and the ECDF. Note the orders. Both statistics are positive values, and both follow the same "one-sided K-S distribution". The "two-sided K-S statistic" is the maximum of both variants of the one-sided statistic. Its distribution is close to the one-sided distribution, but is difficult to compute. The K-S hypothesis test can be conducted with any of these, and can be either one-sided (e.g. "is the fit poor?") or two-sided (e.g. "is the fit either too poor or too good to be true?") in the conventional hypothesis-testing sense. kstest() implements a one-sided test using the "one-sided K-S distribution" but incorrectly uses the "two-sided K-S statistic". Is that a clear explanation? > There was a discussion in > http://projects.scipy.org/pipermail/scipy-dev/2004-July/002181.html > about this, but I'm not sure that conclusion is correct You are correct. The terminology was tripping me up at the time, too. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From millman at berkeley.edu Thu Nov 27 05:16:33 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Thu, 27 Nov 2008 02:16:33 -0800 Subject: [SciPy-dev] scipy.linalg.qr deprecation warning In-Reply-To: <5b8d13220811262027m7af77c65wd0e11d8c145d0247@mail.gmail.com> References: <20081126133442.GA20844@localhost> <5b8d13220811262027m7af77c65wd0e11d8c145d0247@mail.gmail.com> Message-ID: On Wed, Nov 26, 2008 at 8:27 PM, David Cournapeau wrote: > the ticket cannot be closed: I am the one who added the warning, but > the change indicated in the ticket has not been done yet; it will have > to wait for 0.8; the deprecation is only the first step. Thanks for the clarification, I hadn't looked at what was happening very closely. After looking more closely, there are several things I would like to clarify. I am assuming that we want to unify the calling convention and semantics of the qr function in numpy and scipy. Ideally, I think it would be best if we could move toward having only one implementation. To start with, here is the current numpy.linalg.qr signature: def qr(a, mode='full'): where mode : {'full', 'r', 'economic'} Determines what information is to be returned. 'full' is the default. Economic mode is slightly faster if only R is needed. Here is the current scipy.linalg.qr signature: def qr(a,overwrite_a=0,lwork=None,econ=False,mode='qr'): where econ : boolean Whether to compute the economy-size QR decomposition, making shapes of Q and R (M, K) and (K, N) instead of (M,M) and (M,N). K=min(M,N) mode : {'qr', 'r'} Determines what information is to be returned: either both Q and R or only R.
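For concreteness, econ only changes the shapes (and therefore the memory footprint) of the returned factors, and numpy's default call already returns the small shapes. A quick check of the shape conventions (my own sketch, independent of the mode-name question discussed here):

```python
import numpy as np

m, n = 500, 3                        # tall-and-skinny case, where economy mode matters
np.random.seed(0)
a = np.random.rand(m, n)

q, r = np.linalg.qr(a)               # default mode returns the economy-size factors
assert q.shape == (m, n)             # Q is (M, K) with K = min(M, N)
assert r.shape == (n, n)             # R is (K, N)
assert np.allclose(np.dot(q, r), a)  # the factorization still reconstructs a

# a full factorization would carry a (500, 500) Q -- 250000 doubles
# instead of 1500, which is what econ=True is meant to avoid
```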
At first glance, it seems like we should simply change the scipy.linalg.qr signature to: def qr(a,overwrite_a=0,lwork=None,mode='full'): where mode : {'full', 'r', 'economic'} If so, would it make sense for 0.7 to change the scipy.linalg.qr signature to: def qr(a,overwrite_a=0,lwork=None,econ=None,mode='full'): where mode : {'full', 'qr', 'r', 'economic'} This would allow code written for scipy 0.8 to run on scipy 0.7, while letting code written for scipy 0.6 run on 0.7. Another issue with doing this is that it appears that the current implementation of qr in scipy will let you do the following: >>> q, r = sp.linalg.qr(a, econ=True) >>> r2 = sp.linalg.qr(a, econ=True, mode='r') To do this we could do something like mode : {'full', 'r', 'economic', 'economic-full'} so that >>> q, r = sp.linalg.qr(a, mode='economic-full') >>> r2 = sp.linalg.qr(a, mode='economic') This raises the question whether it makes sense to return both q and r when using economy mode. If so, should we change the qr implementation in numpy as well? Currently, in numpy when mode='economy' qr returns A2: 444 mode = 'economic' 445 A2 : double or complex array, shape (M, N) 446 The diagonal and the upper triangle of A2 contains R, 447 while the rest of the matrix is undefined. Are there any references that we should mention in the docstring, which would be able to explain economy mode and the trade-offs regarding whether q is returned as well as r? Also, if we want to change the call signatures to be the same, should the new scipy sig be: def qr(a,mode='full',overwrite_a=0,lwork=None): That way we could add (...overwrite_a=0,lwork=None) to numpy 1.3 so that np.linalg.qr sig would be: def qr(a,mode='full',overwrite_a=0,lwork=None): NumPy uses 'zgeqrf' or 'dgeqrf', while scipy uses 'orgqr' or 'ungqr'. Does anyone know if it makes sense for the numpy and scipy implementations to use different lapack functions? Or is it just a historical artifact?
If it is merely historic, which functions should both implementations use? Another issue is that scipy has an older implementation of qr called qr_old. Can we just remove it? Or do we need to deprecate it first? Regardless of what is decided, I would like to make sure we agree on a plan and document it in the scipy 0.7 release notes. I also want to make sure we address: 1. removing sp.linalg.qr_old 2. whether economy mode should optionally return qr or r 3. if economy mode can optionally return qr, how do we specify this option 4. what lapack functions should we use and should numpy/scipy use the same lapack functions 5. can scipy's qr eventually just call numpy's 6. should we handle mode='full' and mode='economy' in scipy 0.7 7. should we update sp.linalg.qr's docstring using np.linalg.qr's Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From opossumnano at gmail.com Thu Nov 27 10:06:44 2008 From: opossumnano at gmail.com (Tiziano Zito) Date: Thu, 27 Nov 2008 16:06:44 +0100 Subject: [SciPy-dev] scipy ticket #800 Message-ID: <20081127150643.GA11392@localhost> hi! I've just added ticket #800 (numpy's test_poly1d_nan_roots is broken if scipy.linalg is imported) and I know how to fix it, but would like to hear your comments. 
The best solution IMO is to change the numpy behaviour, and that's something that needs discussion (and that I can't do myself) :-)) ciao, tiziano From opossumnano at gmail.com Thu Nov 27 10:51:11 2008 From: opossumnano at gmail.com (Tiziano Zito) Date: Thu, 27 Nov 2008 16:51:11 +0100 Subject: [SciPy-dev] scipy.linalg.qr deprecation warning In-Reply-To: References: <20081126133442.GA20844@localhost> <5b8d13220811262027m7af77c65wd0e11d8c145d0247@mail.gmail.com> Message-ID: <20081127155110.GB11392@localhost> > If so, would it make sense for 0.7 to change the scipy.linalg.qr signature to: > def qr(a,overwrite_a=0,lwork=None,econ=None,mode='full'): > where > mode : {'full', 'qr', 'r', 'economic'} > This would allow code written for scipy 0.8 to run on scipy 0.7, > while letting code written for scipy 0.6 run on 0.7. +1 > Another issue with doing this is that it appears that the current > implementation of qr in scipy will let you do the following: > >>> q, r = sp.linalg.qr(a, econ=True) > >>> r2 = sp.linalg.qr(a, econ=True, mode='r') > To do this we could do something like > mode : {'full', 'r', 'economic', 'economic-full'} > so that > >>> q, r = sp.linalg.qr(a, mode='economic-full') > >>> r2 = sp.linalg.qr(a, mode='economic') > > This raises the question whether it makes sense to return both q and r > when using economy mode. If so, should we change the qr > implementation in numpy as well? Currently, in numpy when > mode='economy' qr returns A2: > 444 mode = 'economic' > 445 A2 : double or complex array, shape (M, N) > 446 The diagonal and the upper triangle of A2 contains R, > 447 while the rest of the matrix is undefined. > > Are there any references that we should mention in the docstring, > which would be able to explain economy mode and the trade-offs > regarding whether q is returned as well as r? I think "economy" mode is the term that is used in matlab's qr function. Personally I'm not sure I understand why economy mode is important at all.
> Also if we want to change the call signatures to be the same should > the new scipy sig be: > def qr(a,mode='full',overwrite_a=0,lwork=None): > That way we could add (...overwrite_a=0,lwork=None) to numpy 1.3 so > that np.linalg.qr sig would be: > def qr(a,mode='full',overwrite_a=0,lwork=None): > > NumPy uses 'zgeqrf' or 'dgeqrf', while scipy uses 'orgqr' or 'ungqr'. > Does anyone know if it makes sense for the numpy and scipy > implementations to use different lapack functions? Or is it just > a historical artifact? If it is merely historic, which functions should > both implementations use? If I understood the issue correctly: NumPy: - 'zgeqrf' or 'dgeqrf' are used to get 'r' and then 'zungqr' or 'dorgqr' are used to get 'q'. The latter happens only if mode='full'. The only difference between mode='economic' and mode='r' is a call to fastCopyAndTranspose (there is even a comment in the code: "economic mode. Isn't actually economic.", i.e. 'economic' mode is a fake). SciPy: - the same routines as in NumPy are used, but 'econ' and 'mode' are disjoint (as in matlab). I think that the SciPy behaviour is more consistent (as in SciPy ticket #800) and that NumPy's 'qr' should be changed to match the SciPy signature (without the overwrite_a and lwork arguments). > Another issue is that scipy has an older implementation of qr called > qr_old. Can we just remove it? Or do we need to deprecate it first? +1 to just remove it. To summarize my views: > 1. removing sp.linalg.qr_old +1 > 2. whether economy mode should optionally return qr or r +1 > 3. if economy mode can optionally return qr, how do we specify this option have two different arguments: mode={'r', 'qr'} and econ={True, False} > 4. what lapack functions should we use and should numpy/scipy use the > same lapack functions Both use the same lapack functions, so no problem here. > 5.
can scipy's qr eventually just call numpy's -1 SciPy will always have overwrite_a and lwork (it's calling the f2py generated interface when present), whereas NumPy uses lapack_lite (where there is no overwrite_X argument). > 6. should we handle mode='full' and mode='economy' in scipy 0.7 -1 (see above) > 7. should we update sp.linalg.qr's docstring using np.linalg.qr's -1 (see above) ciao, tiziano From josef.pktd at gmail.com Thu Nov 27 12:30:24 2008 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 27 Nov 2008 12:30:24 -0500 Subject: [SciPy-dev] kstest is reporting wrong p-value ?? In-Reply-To: <3d375d730811262243o474f11atbd0ebc263bc17ade@mail.gmail.com> References: <1cd32cbb0811262202j61886db1vc7668ed348a75210@mail.gmail.com> <3d375d730811262243o474f11atbd0ebc263bc17ade@mail.gmail.com> Message-ID: <1cd32cbb0811270930l294c21c9ka7130650621c82ae@mail.gmail.com> I compared with R in more detail: conclusion for small samples: * stats.kstest() for less than 10 observations is pretty wrong * calculation of D differs quite a bit from R and matlab (those 2 give the same numbers) * exact method in R uses the same distribution as stats.ksone.sf(D,n)*2 up to 4 decimals ! Note: times 2 * asymptotic distribution in R (not using exact) is exactly the same as kstwobign.(D*sqrt(n)) up to more than 7 decimals For larger samples, I tried 100 normally distributed random variables; stats.kstest() still gives the wrong D and pval, but the difference is not as large as in small samples. With a sample of 1000 normal rvs, the D of stats.kstest() and of R are essentially identical, but the pvalue reported by stats.kstest() is half of the one in R >>> xxrl = stats.norm.rvs(size=1000) >>> resultrl=ksfn(xxrl,'pnorm', exact = True) #this is R's kstest through rpy >>> resultrl['p.value'] 0.2419499342788699 >>> resultrl['statistic']['D'] 0.032317405617139472 >>> stats.kstest(xxrl,'norm') (0.032317405617139472, 0.12118954799968018) So, stats.kstest() definitely needs to be fixed.
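For reference, the three statistics Robert described can be computed directly from the ECDF; a sketch against a uniform null, where the hypothesized CDF is simply F(x) = x (my own illustration, not the scipy implementation):

```python
import numpy as np

np.random.seed(0)
x = np.sort(np.random.uniform(size=1000))
n = len(x)
cdf = x                                   # uniform null hypothesis: F(x) = x

# ECDF evaluated just after (i/n) and just before ((i-1)/n) each sample point
ecdf_hi = np.arange(1.0, n + 1) / n
ecdf_lo = np.arange(0.0, n) / n

d_plus = (ecdf_hi - cdf).max()            # one-sided statistic: ECDF above CDF
d_minus = (cdf - ecdf_lo).max()           # one-sided statistic: CDF above ECDF
d = max(d_plus, d_minus)                  # two-sided statistic, what kstest computes

# the bug under discussion: d follows the two-sided distribution,
# but the p-value was taken from ksone, the one-sided distribution
assert d >= d_plus and d >= d_minus
```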
Josef From mellerf at netvision.net.il Thu Nov 27 13:02:26 2008 From: mellerf at netvision.net.il (Yosef Meller) Date: Thu, 27 Nov 2008 20:02:26 +0200 Subject: [SciPy-dev] two tickets with patches In-Reply-To: References: Message-ID: <492EE0B2.4000009@netvision.net.il> Nathan Bell wrote: > On Wed, Nov 26, 2008 at 10:31 PM, Jarrod Millman wrote: >> I would like to see at least some comments on them about why they >> haven't been accepted. They are both first attempts but no one has >> made any critical comments or useful suggestions for what the authors >> need to do in order to have their code accepted. One provides support >> the Harwell-Boeing format and the other provides a FIR filter. >> >> For instance, would someone be willing to take a look and add a >> comment to the ticket that addresses at least: Do they need tests and >> coding style work? Or is there no interest in including this code in >> scipy? >> > > Harwell-Boeing support would be useful, and the attached code is > well-written. We'll need some tests, but that's not too difficult. > > I'll see that this gets added to SciPy 0.8. While we're at it, what about #713? http://www.scipy.org/scipy/scipy/ticket/713 From james at NBN.ac.za Thu Nov 27 14:18:07 2008 From: james at NBN.ac.za (James Dominy) Date: Thu, 27 Nov 2008 21:18:07 +0200 Subject: [SciPy-dev] Constructing an ndarray around a ctypes array Message-ID: <492EF26F.3040600@nbn.ac.za> Hi, Is there a way to create an ndarray from a ctypes array, such that they both use the same memory space. Thanks, James From pav at iki.fi Thu Nov 27 15:49:19 2008 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 27 Nov 2008 20:49:19 +0000 (UTC) Subject: [SciPy-dev] #693 (Evaluating bivariate piecewise splines at arbitrary points) Message-ID: Hi all, Do we want to get this: http://www.scipy.org/scipy/scipy/ticket/693 to Scipy 0.7.0, or does it need improvements etc.? 
It's pretty straightforward; probably the only question is whether >>> zi = spline_object.ev(xi, yi) is a good interface for getting zi[j] = spline(xi[j], yi[j]). [As opposed to BivariateSpline.__call__(x,y), which returns answers evaluated on meshgrid(x,y)] -- Pauli Virtanen From anand.prabhakar.patil at gmail.com Fri Nov 28 07:37:49 2008 From: anand.prabhakar.patil at gmail.com (Anand Patil) Date: Fri, 28 Nov 2008 12:37:49 +0000 Subject: [SciPy-dev] Single precision FFT Message-ID: <2bc7a5a50811280437v44e8d3d6w2c283223d7e4607@mail.gmail.com> Hi all, I need a single precision FFT from numpy. I'm willing to hack numpy's fft module to make it but before I start I'm wondering if there's any interest in including single-precision fft functions in the numpy distribution? If that's the case I'll try to make a nice patch. Anand From gael.varoquaux at normalesup.org Fri Nov 28 07:40:37 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Fri, 28 Nov 2008 13:40:37 +0100 Subject: [SciPy-dev] Single precision FFT In-Reply-To: <2bc7a5a50811280437v44e8d3d6w2c283223d7e4607@mail.gmail.com> References: <2bc7a5a50811280437v44e8d3d6w2c283223d7e4607@mail.gmail.com> Message-ID: <20081128124037.GD12802@phare.normalesup.org> On Fri, Nov 28, 2008 at 12:37:49PM +0000, Anand Patil wrote: > I need a single precision FFT from numpy. I'm willing to hack numpy's > fft module to make it but before I start I'm wondering if there's any > interest in including single-precision fft functions in the numpy > distribution? If that's the case I'll try to make a nice patch. I'd really love it. More generally, I'd really be interested in having all the numpy functions (including the linalg ones) work with single precision, as my main limitation on the work I am doing right now is memory. I did have a quick look, and it seemed to me that this was not always possible, due to the underlying fortran libraries.
Gaël From david at ar.media.kyoto-u.ac.jp Fri Nov 28 07:37:14 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 28 Nov 2008 21:37:14 +0900 Subject: [SciPy-dev] Single precision FFT In-Reply-To: <2bc7a5a50811280437v44e8d3d6w2c283223d7e4607@mail.gmail.com> References: <2bc7a5a50811280437v44e8d3d6w2c283223d7e4607@mail.gmail.com> Message-ID: <492FE5FA.70606@ar.media.kyoto-u.ac.jp> Anand Patil wrote: > Hi all, > > I need a single precision FFT from numpy. I'm willing to hack numpy's > fft module to make it but before I start I'm wondering if there's any > interest in including single-precision fft functions in the numpy > distribution? If that's the case I'll try to make a nice patch. > Does it need to be numpy ? Because I would love to see it in scipy, and the underlying fortran code is already there, only the wrappers need to be done. If you are not familiar with scipy code, I can tell you where to look at (the module is scipy.fftpack), David From anand.prabhakar.patil at gmail.com Fri Nov 28 08:44:57 2008 From: anand.prabhakar.patil at gmail.com (Anand Patil) Date: Fri, 28 Nov 2008 13:44:57 +0000 Subject: [SciPy-dev] Single precision FFT In-Reply-To: <492FE5FA.70606@ar.media.kyoto-u.ac.jp> References: <2bc7a5a50811280437v44e8d3d6w2c283223d7e4607@mail.gmail.com> <492FE5FA.70606@ar.media.kyoto-u.ac.jp> Message-ID: <2bc7a5a50811280544s4519adabt5a9a9a40dfc409d1@mail.gmail.com> Dang, I should have checked my email an hour ago... it doesn't need to be numpy, but I already did it. I just made a new module called 'sfft' that's a copy of fft, but with everything in single precision. Is that any use to anyone? Anand On Fri, Nov 28, 2008 at 12:37 PM, David Cournapeau wrote: > Anand Patil wrote: >> Hi all, >> >> I need a single precision FFT from numpy. I'm willing to hack numpy's >> fft module to make it but before I start I'm wondering if there's any >> interest in including single-precision fft functions in the numpy >> distribution?
If that's the case I'll try to make a nice patch. >> > > Does it need to be numpy ? Because I would love to see it in scipy, and > the underlying fortran code is already there, only the wrappers need to > be done. If you are not familiar with scipy code, I can tell you where > to look at (the module is scipy.fftpack), > > David > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From ramercer at gmail.com Fri Nov 28 10:12:01 2008 From: ramercer at gmail.com (Adam Mercer) Date: Fri, 28 Nov 2008 09:12:01 -0600 Subject: [SciPy-dev] ANN: SciPy 0.7.0b1 (beta release) In-Reply-To: <5b8d13220811262024p4bc80573ma90387673c16a79b@mail.gmail.com> References: <799406d60811260823x6367f71dvaafe2c87c52dc60b@mail.gmail.com> <5b8d13220811261620w45a4ab63mc83c712b2ae96c0b@mail.gmail.com> <799406d60811261751i489f8ccfnee69405e4e689dbf@mail.gmail.com> <799406d60811261757hdfcb442o7f7efc1ee0d96a7b@mail.gmail.com> <5b8d13220811262024p4bc80573ma90387673c16a79b@mail.gmail.com> Message-ID: <799406d60811280712se009c84yb7307f9655ba4689@mail.gmail.com> On Wed, Nov 26, 2008 at 22:24, David Cournapeau wrote: > No, you may be right :) The problem is specific to the tarball (the > file exists in subversion, it is not included in the tarball). To test > whether the fix is OK, you should first generate the tarball from the > trunk: > > python setup.py sdist > > And then use the generated tarball as the source tree to build. 
Just built from a tarball created from r5203, and it builds without issue on Mac OS X Leopard, Python-2.5.2: OK (KNOWNFAIL=2, SKIP=21) Cheers Adam From koepsell at gmail.com Fri Nov 28 23:50:44 2008 From: koepsell at gmail.com (killian koepsell) Date: Fri, 28 Nov 2008 20:50:44 -0800 Subject: [SciPy-dev] Constructing an ndarray around a ctypes array In-Reply-To: <492EF26F.3040600@nbn.ac.za> References: <492EF26F.3040600@nbn.ac.za> Message-ID: On Thu, Nov 27, 2008 at 11:18 AM, James Dominy wrote: > Is there a way to create an ndarray from a ctypes array, such that they both use > the same memory space. James, you can use the function PyBuffer_FromMemory or PyBuffer_FromReadWriteMemory if you want to have write access to the memory space from python. I use the following python function:

def array_from_memory(pointer, shape, dtype):
    import ctypes as C
    import numpy as np
    from_memory = C.pythonapi.PyBuffer_FromReadWriteMemory
    from_memory.restype = C.py_object
    arr = np.empty(shape=shape, dtype=dtype)
    arr.data = from_memory(pointer, arr.nbytes)
    return arr

Kilian From robert.kern at gmail.com Sat Nov 29 00:02:37 2008 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 28 Nov 2008 23:02:37 -0600 Subject: [SciPy-dev] Constructing an ndarray around a ctypes array In-Reply-To: <492EF26F.3040600@nbn.ac.za> References: <492EF26F.3040600@nbn.ac.za> Message-ID: <3d375d730811282102p6c84ebe0y54bdfd985c6d9c2@mail.gmail.com> On Thu, Nov 27, 2008 at 13:18, James Dominy wrote: > Hi, > > Is there a way to create an ndarray from a ctypes array, such that they both use > the same memory space. from numpy.ctypeslib import as_array -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From pav at iki.fi Sat Nov 29 14:40:45 2008 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 29 Nov 2008 21:40:45 +0200 Subject: [SciPy-dev] scipy.optimize.nonlin rewrite Message-ID: <1227987645.8264.200.camel@idol> Hi all, I spent some time rewriting the scipy.optimize.nonlin module: http://github.com/pv/scipy/tree/ticket-791/scipy/optimize/nonlin.py The following things changed: - Support tolerance-based stopping conditions (cf. ticket #791) - Improved handling of input and return values from the functions, so that they are now easier to use. - Don't use np.matrix at all. - Jacobian approximations factored into classes; the iteration code is now in only one place, so trust-region handling or other improvements can be added to a single place later on. - There's now a line search along the direction given by inverting the Jacobian. (But there's no checking that it is an actual descent direction for some merit function.) - Rewrote docstrings. The routines should produce the same output as previously. The tests are still not very strong, however. But I have now some questions: * Getting this to 0.7.0; this is a complete rewrite, so is it too late, and is it better to wait for 0.8? * Some of the algorithms in there don't appear to work too well, and some appear to be redundant. I'd like to clean this up a bit, leaving only known-good stuff in. * I'd like to remove `broyden2` as the actual Jacobian approximation in this appears to be the same as in `broyden3`, and there does not appear to be much difference in the work involved in the two. Ondrej, since you wrote the original code, do you think there is a reason to keep both? * `broyden1_modified` appears to be, in the end if you work out the matrix algebra, updating the inverse Jacobian in a way that corresponds to J := J + (y - J s / 2) s^T / ||s||^2 for the Jacobian (with s = dx, y = df). Apart from the factor 1/2, it's Broyden's good method. 
[1] One can also verify that the updated inverse Jacobian does not satisfy the quasi-Newton condition s = J^{-1} y, and that `broyden1_modified` doesn't generate the same sequence as `broyden1`. Hence, I'd like to remove this routine, unless there's some literature that shows that the above works better than Broyden's method; Ondrej, do you agree? .. [1] http://en.wikipedia.org/wiki/Broyden%27s_method http://en.wikipedia.org/wiki/Sherman%E2%80%93Morrison_formula * Also, which articles were used as reference for the non-Quasi-Newton algorithms: - `broyden_modified`. This appears to be a bit exotic, and it uses several magic constants (`w0`, `wl`) whose meaning is not clear to me. A reference would be helpful here, also for the user who has to choose the parameters as appropriate for his/her specific problem. - `broyden_generalized`, `anderson`, `anderson2`. These appear to be variants of Anderson mixing, so probably we only want at most one of these. Also, broyden_generalized() == anderson(w0=0), am I correct? `anderson` and `anderson2` don't appear to function equivalently, and I suspect the update formula in the latter is wrong, since this algorithm can't solve any of the test problems. Do you have a reference for this? Is there a rule how `w0` should be chosen for some specific problem? - `excitingmixing`. A reference would be useful, to clarify the heuristics used. - `linearmixing`. I'm not sure that Scipy should advocate this method :) Linearmixing and excitingmixing also seem to require something from the objective function, possibly that the eigenvalues of its Jacobian have the opposite sign than `alpha`. For example, neither of them can find a solution for the equation ``I x = 0`` where ``I`` is the identity matrix (test function F2). So, I'm a bit tempted to remove also these routines, as it seems they probably are not too useful for general problems. 
* The code in there is still a rather naive implementation of the inexact (Quasi-)Newton method, and one could e.g. add trust-region handling or try to guarantee that the line search direction is always a decrease direction for a merit function. (I don't know if it's possible to do the latter unless one has a matrix representation of the Jacobian approximation.) So, I suspect there are problems for which eg. MINPACK code will find solutions, but for which the nonlin.py code fails. * One could add more algorithms suitable for large-scale problems; for example some limited-memory Broyden methods (eg. [1]) or Secant-Krylov methods [2]. .. [1] http://www.math.leidenuniv.nl/~verduyn/publications/reports/equadiff.ps .. [2] D. A. Knoll and D. E. Keyes. Jacobian free Newton-Krylov methods. Journal of Computational Physics, 20(2):357–397, 2004. I have implementations for both types of algorithms that could possibly go in after some polishing. -- Pauli Virtanen From david at ar.media.kyoto-u.ac.jp Sun Nov 30 00:30:53 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sun, 30 Nov 2008 14:30:53 +0900 Subject: [SciPy-dev] scipy.optimize.nonlin rewrite In-Reply-To: <1227987645.8264.200.camel@idol> References: <1227987645.8264.200.camel@idol> Message-ID: <4932250D.7040003@ar.media.kyoto-u.ac.jp> Pauli Virtanen wrote: > * Getting this to 0.7.0; this is a complete rewrite, so is it too late, > and is it better to wait for 0.8? > I would much prefer delaying this after 0.7 release. 
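The matrix algebra Pauli works through above is easy to check numerically: Broyden's good update (the formula without the factor 1/2) makes the new Jacobian approximation satisfy the secant condition J_new s = y, while the 1/2-variant quoted for `broyden1_modified` does not. A small numpy sketch, purely illustrative:

```python
import numpy as np

rng = np.random.RandomState(0)
J = rng.rand(4, 4)          # current Jacobian approximation
s = rng.rand(4)             # step dx
y = rng.rand(4)             # change in residual df

# Broyden's good method: J := J + (y - J s) s^T / ||s||^2
J_good = J + np.outer(y - J.dot(s), s) / s.dot(s)

# the variant with the extra factor 1/2 discussed above
J_half = J + np.outer(y - J.dot(s) / 2.0, s) / s.dot(s)

print(np.allclose(J_good.dot(s), y))   # True: secant condition holds
print(np.allclose(J_half.dot(s), y))   # False in general
```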
Unfortunately, I have nothing else to say, since I know nothing about optimization :) David From stefan at sun.ac.za Sun Nov 30 17:10:53 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 1 Dec 2008 00:10:53 +0200 Subject: [SciPy-dev] scipy.optimize.nonlin rewrite In-Reply-To: <4932250D.7040003@ar.media.kyoto-u.ac.jp> References: <1227987645.8264.200.camel@idol> <4932250D.7040003@ar.media.kyoto-u.ac.jp> Message-ID: <9457e7c80811301410x333f6352r5fa4e81f4ac628@mail.gmail.com> 2008/11/30 David Cournapeau : > Pauli Virtanen wrote: >> * Getting this to 0.7.0; this is a complete rewrite, so is it too late, >> and is it better to wait for 0.8? >> > I would much prefer delaying this after 0.7 release. Unfortunately, I > have nothing else to say, since I know nothing about optimization :) Postponing a patch often leads to the author losing interest, and to the patch never being applied. I don't know if we can afford that. I'd be quite happy to see this going into 0.7 rc2 and trying it out for beta 1 (or whatever the naming scheme is), especially if we can fine-tune the tests a bit. Cheers Stéfan From simpson at math.toronto.edu Sun Nov 30 18:08:24 2008 From: simpson at math.toronto.edu (Gideon Simpson) Date: Sun, 30 Nov 2008 18:08:24 -0500 Subject: [SciPy-dev] scipy.optimize.nonlin rewrite In-Reply-To: <1227987645.8264.200.camel@idol> References: <1227987645.8264.200.camel@idol> Message-ID: <920F563C-A1BB-487A-81C6-8571C4DC526C@math.toronto.edu> Still no args input for inputting arguments to the function F? Sorry to complain, but the absence of this has put me off using these routines as it would require a rewrite of much of my code. -gideon On Nov 29, 2008, at 2:40 PM, Pauli Virtanen wrote: > Hi all, > > I spent some time rewriting the scipy.optimize.nonlin module: > > http://github.com/pv/scipy/tree/ticket-791/scipy/optimize/nonlin.py > > The following things changed: > > - Support tolerance-based stopping conditions (cf. 
ticket #791) > > - Improved handling of input and return values from the functions, > so that they are now easier to use. > > - Don't use np.matrix at all. > > - Jacobian approximations factored into classes; the iteration code is > now in only one place, so trust-region handling or other improvements > can be added to a single place later on. > > - There's now a line search to the direction inverting the Jacobian > gave. (But there's no checking that it is an actual descent direction > for some merit function.) > > - Rewrote docstrings. > > The routines should produce the same output as previously. The tests > are > still not very strong, however. > > > But I have now some questions: > > * Getting this to 0.7.0; this is a complete rewrite, so is it too > late, > and is it better to wait for 0.8? > > * Some of the algorithms in there don't appear to work too well, and > some appear to be redundant. I'd like to clean up this a bit, leaving > only known-good stuff in. > > * I'd like to remove `broyden2` as the actual Jacobian approximation > in > this appears to be the same as in `broyden3`, and there does not > appear to be much difference in the work involved in the two. > > Ondrej, since you wrote the original code, do you think there is > a reason to keep both? > > * `broyden1_modified` appears to be, in the end if you work out the > matrix algebra, updating the inverse Jacobian in a way that > corresponds to > > J := J + (y - J s / 2) s^T / ||s||^2 > > for the Jacobian (with s = dx, y = df). Apart from the factor > 1/2, it's Broyden's good method. [1] One can also verify that the > updated inverse Jacobian does not satisfy the quasi-Newton condition > s = J^{-1} y, and that `broyden1_modified` doesn't generate the same > sequence as `broyden1`. > > Hence, I'd like to remove this routine, unless there's some > literature > that shows that the above works better than Broyden's method; Ondrej, > do you agree? > > .. 
[1] http://en.wikipedia.org/wiki/Broyden%27s_method > http://en.wikipedia.org/wiki/Sherman%E2%80%93Morrison_formula > > * Also, which articles were used as reference for the non-Quasi-Newton > algorithms: > > - `broyden_modified`. This appears to be a bit exotic, and it uses > several magic constants (`w0`, `wl`) whose meaning is not clear to > me. > > A reference would be helpful here, also for the user who has to > choose the parameters as appropriate for his/her specific problem. > > - `broyden_generalized`, `anderson`, `anderson2`. These appear to be > variants of Anderson mixing, so probably we only want at most > one of these. Also, broyden_generalized() == anderson(w0=0), am I > correct? > > `anderson` and `anderson2` don't appear to function equivalently, > and I suspect the update formula in the latter is wrong, since this > algorithm can't solve any of the test problems. Do you have a > reference for this? > > Is there a rule how `w0` should be chosen for some specific > problem? > > - `excitingmixing`. A reference would be useful, to clarify the > heuristics used. > > - `linearmixing`. I'm not sure that Scipy should advocate this > method :) > > Linearmixing and excitingmixing also seem to require something from > the objective function, possibly that the eigenvalues of its Jacobian > have the opposite sign than `alpha`. For example, neither of them can > find a solution for the equation ``I x = 0`` where ``I`` is the > identity matrix (test function F2). So, I'm a bit tempted to remove > also these routines, as it seems they probably are not too useful for > general problems. > > * The code in there is still a bit naive implementation of the inexact > (Quasi-)Newton method, and one could add eg. add trust-region > handling > or try to guarantee that the line search direction is always a > decrease direction for a merit function. (I don't know if it's > possible to do the latter unless one has a matrix representation of > the Jacobian approximation.) 
So, I suspect there are problems for > which eg. MINPACK code will find solutions, but for which the > nonlin.py code fails. > > * One could add more algorithms suitable for large-scale problems; for > example some limited-memory Broyden methods (eg. [1]) or Secant- > Krylov > methods [2]. > > .. [1] http://www.math.leidenuniv.nl/~verduyn/publications/reports/equadiff.ps > .. [2] D. A. Knoll and D. E. Keyes. Jacobian free Newton-Krylov > methods. > Journal of Computational Physics, 20(2):357–397, 2004. > > I have implementations for both types of algorithms that could > possibly go in after some polishing. > > -- > Pauli Virtanen > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev From xavier.gnata at gmail.com Sun Nov 30 18:12:32 2008 From: xavier.gnata at gmail.com (Xavier Gnata) Date: Mon, 01 Dec 2008 00:12:32 +0100 Subject: [SciPy-dev] 1 Failure in 0.7.0b1 In-Reply-To: References: Message-ID: <49331DE0.1040700@gmail.com> > Fedora F9 x86_64: > > FAIL: test_pbdv (test_basic.TestCephes) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib64/python2.5/site-packages/scipy/special/tests/test_basic.py", line 357, in test_pbdv > assert_equal(cephes.pbdv(1,0),(0.0,0.0)) > File "/usr/lib64/python2.5/site-packages/numpy/testing/utils.py", line 176, in assert_equal > assert_equal(actual[k], desired[k], 'item=%r\n%s' % (k,err_msg), verbose) > File "/usr/lib64/python2.5/site-packages/numpy/testing/utils.py", line 183, in assert_equal > raise AssertionError(msg) > AssertionError: > Items are not equal: > item=1 > > ACTUAL: 1.0 > DESIRED: 0.0 Ok I can reproduce this one on Intrepid Ibex 64bits (gfortran). The funny thing is that I think (if I read the correct MathWorld pages) that 1.0 is the correct answer (but it is so easy to be wrong with these special functions). 
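Xavier's reading can be sanity-checked without scipy: with the standard definition D_1(x) = x exp(-x^2/4), the value at x = 0 is 0 but the derivative at 0 is 1, which is exactly the "ACTUAL: 1.0" the test rejects (pbdv returns the pair (value, derivative)). A plain-numpy finite-difference check:

```python
import numpy as np

def D1(x):
    # parabolic cylinder function D_v for v = 1, standard definition
    return x * np.exp(-x**2 / 4.0)

h = 1e-6
value = D1(0.0)                       # D_1(0) = 0.0
deriv = (D1(h) - D1(-h)) / (2 * h)    # central difference ~ D_1'(0)

print(value)   # 0.0
print(deriv)   # ~1.0, matching the "ACTUAL: 1.0" in the failing test
```

So the test's expected derivative of 0.0 looks wrong, not the library value.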
Xavier From charlesr.harris at gmail.com Sun Nov 30 21:16:08 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 30 Nov 2008 19:16:08 -0700 Subject: [SciPy-dev] scipy.optimize.nonlin rewrite In-Reply-To: <920F563C-A1BB-487A-81C6-8571C4DC526C@math.toronto.edu> References: <1227987645.8264.200.camel@idol> <920F563C-A1BB-487A-81C6-8571C4DC526C@math.toronto.edu> Message-ID: On Sun, Nov 30, 2008 at 4:08 PM, Gideon Simpson wrote: > Still no args input for inputting arguments to the function F? > > Sorry to complain, but the absence of this has put me off using these > routines as it would require a rewrite of much of my code. > The method used for the 1d zero finders might be useful here. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From aarchiba at physics.mcgill.ca Sun Nov 30 21:59:32 2008 From: aarchiba at physics.mcgill.ca (Anne Archibald) Date: Sun, 30 Nov 2008 21:59:32 -0500 Subject: [SciPy-dev] scipy.optimize.nonlin rewrite In-Reply-To: <920F563C-A1BB-487A-81C6-8571C4DC526C@math.toronto.edu> References: <1227987645.8264.200.camel@idol> <920F563C-A1BB-487A-81C6-8571C4DC526C@math.toronto.edu> Message-ID: 2008/11/30 Gideon Simpson : > Still no args input for inputting arguments to the function F? > > Sorry to complain, but the absence of this has put me off using these > routines as it would require a rewrite of much of my code. Why? Instead of optimize.whatever(F, args=extra) just use optimize.whatever(lambda x: F(x,extra)) Anne From charlesr.harris at gmail.com Sun Nov 30 22:23:50 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 30 Nov 2008 20:23:50 -0700 Subject: [SciPy-dev] scipy.optimize.nonlin rewrite In-Reply-To: References: <1227987645.8264.200.camel@idol> <920F563C-A1BB-487A-81C6-8571C4DC526C@math.toronto.edu> Message-ID: On Sun, Nov 30, 2008 at 7:59 PM, Anne Archibald wrote: > 2008/11/30 Gideon Simpson : > > Still no args input for inputting arguments to the function F? 
> > Sorry to complain, but the absence of this has put me off using these > > routines as it would require a rewrite of much of my code. > > Why? > > Instead of > optimize.whatever(F, args=extra) > just use > optimize.whatever(lambda x: F(x,extra)) > Yeah, that was my thought years ago for the zeros functions. I was told no, no, no. I'm not sure there is a good reason beyond what folks are used to. Chuck From cournape at gmail.com Sun Nov 30 22:32:31 2008 From: cournape at gmail.com (David Cournapeau) Date: Mon, 1 Dec 2008 12:32:31 +0900 Subject: [SciPy-dev] scipy.optimize.nonlin rewrite In-Reply-To: <9457e7c80811301410x333f6352r5fa4e81f4ac628@mail.gmail.com> References: <1227987645.8264.200.camel@idol> <4932250D.7040003@ar.media.kyoto-u.ac.jp> <9457e7c80811301410x333f6352r5fa4e81f4ac628@mail.gmail.com> Message-ID: <5b8d13220811301932x7fdf8f0bn72ec63496bad8423@mail.gmail.com> On Mon, Dec 1, 2008 at 7:10 AM, Stéfan van der Walt wrote: > > Postponing a patch often leads to the author losing interest, and to > the patch never being applied. The patch should have been applied before. We are already in beta phase, a few days away from releasing scipy. I don't see the point of taking the time to do beta if we keep adding code, especially after the first beta. Also, rushing to add new code may lead to overlooking some limitation, some API bug, etc... 
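The wrapping idiom Anne suggests also works with functools.partial. Here is a self-contained sketch; `solve_bisect` is a hypothetical stand-in for a solver that, like the optimize.nonlin routines, accepts only a single-argument function and no `args` tuple:

```python
from functools import partial

def F(x, a, b):
    # residual with extra parameters a, b
    return a * x + b

# toy solver that only accepts a one-argument function
def solve_bisect(f, lo, hi, tol=1e-12):
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# both wrappers turn F(x, a, b) into a one-argument function
root1 = solve_bisect(lambda x: F(x, 2.0, -1.0), 0.0, 1.0)
root2 = solve_bisect(partial(F, a=2.0, b=-1.0), 0.0, 1.0)

print(root1, root2)  # both ~0.5, the root of 2x - 1 = 0
```

Either wrapper keeps the solver's signature free of an `args` parameter while still letting the caller bind extra arguments.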
David From charlesr.harris at gmail.com Sun Nov 30 22:53:13 2008 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 30 Nov 2008 20:53:13 -0700 Subject: [SciPy-dev] scipy.optimize.nonlin rewrite In-Reply-To: <5b8d13220811301932x7fdf8f0bn72ec63496bad8423@mail.gmail.com> References: <1227987645.8264.200.camel@idol> <4932250D.7040003@ar.media.kyoto-u.ac.jp> <9457e7c80811301410x333f6352r5fa4e81f4ac628@mail.gmail.com> <5b8d13220811301932x7fdf8f0bn72ec63496bad8423@mail.gmail.com> Message-ID: On Sun, Nov 30, 2008 at 8:32 PM, David Cournapeau wrote: > On Mon, Dec 1, 2008 at 7:10 AM, Stéfan van der Walt > wrote: > > > > > Postponing a patch often leads to the author losing interest, and to > > the patch never being applied. > > The patch should have been applied before. We are already in beta > phase, a few days away from releasing scipy. I don't see the point of > taking the time to do beta if we keep adding code, specially after the > first beta. Also, rushing to add new code may lead to oversight some > limitation, some API bug, etc... > > Agree. The code seems to need extensive rework and it would be better to take the time to get it right. Chuck From david at ar.media.kyoto-u.ac.jp Sun Nov 30 23:22:19 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 01 Dec 2008 13:22:19 +0900 Subject: [SciPy-dev] Single precision FFT In-Reply-To: <2bc7a5a50811280544s4519adabt5a9a9a40dfc409d1@mail.gmail.com> References: <2bc7a5a50811280437v44e8d3d6w2c283223d7e4607@mail.gmail.com> <492FE5FA.70606@ar.media.kyoto-u.ac.jp> <2bc7a5a50811280544s4519adabt5a9a9a40dfc409d1@mail.gmail.com> Message-ID: <4933667B.3090200@ar.media.kyoto-u.ac.jp> Anand Patil wrote: > Dang, I should have checked my email an hour ago... it doesn't need to > be numpy, but I already did it. I just made a new module called 'sfft' > that's a copy of fft, but with everything in single precision. 
Is that > any use to anyone? Hi Anand, Sorry for not having answered before. If you care about the float support being available to many people, I think the best solution really is adding it to scipy. Generally, I think there is a consensus that we would like to avoid adding new features to numpy itself, especially if the features fit scipy well. To add float support to scipy.fftpack, you need to do the following:
- Enable building the single precision version of the fftpack library (scipy/fftpack/src/fftpack) in scipy/fftpack/setup.py
- start writing fftpack wrappers in C (look at zfft_pack.c and zfft.c for a simple example: complex->complex fft, one dimension)
- add support at the python level.
The 2nd step is the one which will take time, although it should be quite similar to the double precision version. cheers, David From aarchiba at physics.mcgill.ca Sun Nov 30 23:49:10 2008 From: aarchiba at physics.mcgill.ca (Anne Archibald) Date: Sun, 30 Nov 2008 23:49:10 -0500 Subject: [SciPy-dev] Single precision FFT In-Reply-To: <4933667B.3090200@ar.media.kyoto-u.ac.jp> References: <2bc7a5a50811280437v44e8d3d6w2c283223d7e4607@mail.gmail.com> <492FE5FA.70606@ar.media.kyoto-u.ac.jp> <2bc7a5a50811280544s4519adabt5a9a9a40dfc409d1@mail.gmail.com> <4933667B.3090200@ar.media.kyoto-u.ac.jp> Message-ID: 2008/11/30 David Cournapeau : > Anand Patil wrote: >> Dang, I should have checked my email an hour ago... it doesn't need to >> be numpy, but I already did it. I just made a new module called 'sfft' >> that's a copy of fft, but with everything in single precision. Is that >> any use to anyone? > Sorry for not having answered before. If you care about the float > support being available to many people, I think the best solution really > is adding it to scipy. Generally, I think there is a consensus that we > would like to avoid adding new features to numpy itself, specially if > the features fit quite well scipy. 
> > To add support to float support to scipy.fftpack, you need to do the > following: > - Enable build the fftpack library, single version > (scipy/fftpack/src/fftpack) in scipy/fftpack/setup.py > - start writing fftpack wrappers in C (look at zfft_pack.c and > zfft.c for a simple example complex->complex fft, one dimension) > - add support at python level. > > The 2nd step is the one which will take time, although it should be > quite similar to the double prevision version. I'd also like to suggest that, if possible, it would be nice if single-precision FFTs were not a separate module, or even a separate function, but instead the usual fft function selected them when handed a single-precision input. Anne
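Anne's dtype-dispatch idea can be sketched in a few lines. The `fft_auto` name is hypothetical, and the single-precision branch here just downcasts the double-precision result; a real implementation would call a genuine single-precision transform instead:

```python
import numpy as np

def fft_auto(x):
    """FFT whose output precision follows the input dtype (sketch only)."""
    x = np.asarray(x)
    if x.dtype in (np.float32, np.complex64):
        # placeholder: a real implementation would run a true
        # single-precision transform rather than downcasting
        return np.fft.fft(x).astype(np.complex64)
    return np.fft.fft(x)

a64 = fft_auto(np.ones(8))                    # complex128 result
a32 = fft_auto(np.ones(8, dtype=np.float32))  # complex64 result

print(a64.dtype, a32.dtype)
```

The caller never sees a separate module or function; the precision of the result simply follows the precision of the input, which is the interface Anne argues for.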