From prabhu at aero.iitb.ac.in Sun Oct 1 02:05:15 2006 From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran) Date: Sun, 1 Oct 2006 11:35:15 +0530 Subject: [SciPy-dev] Online SciPy Journal In-Reply-To: <451DBA2D.3070104@ee.byu.edu> References: <451DBA2D.3070104@ee.byu.edu> Message-ID: <17695.23195.379855.269684@prpc.aero.iitb.ac.in> >>>>> "Travis" == Travis Oliphant writes: Travis> 1) Do you like the idea ? I think this is a very good idea. Some people are likely to claim "nothing new in the particular implementation". However, you could insist that code contributed here should be unit tested and reasonably integration tested (how often do we see anything close to that in other papers?). It might be a good idea to have two kinds of contributions, "original/new ideas/algorithms/applications" and "new implementations". Most papers, it would appear, are read by a handful and used by an even smaller subset of the audience. The advantage of something contributed to SciPy is that it has (on the average) far greater utility to the whole community. Besides, I think it is reasonable to argue that good code usually takes as long as (if not longer than) writing a paper. Travis> 2) Could you help (i.e. as an editor, reviewer, etc.)? I can try to help with this. cheers, prabhu From guyer at nist.gov Sun Oct 1 15:44:37 2006 From: guyer at nist.gov (Jonathan Guyer) Date: Sun, 1 Oct 2006 15:44:37 -0400 Subject: [SciPy-dev] Online SciPy Journal In-Reply-To: References: <451DBA2D.3070104@ee.byu.edu> Message-ID: <5BB2AEF0-B477-40E6-B63A-CE21C301D850@nist.gov> On Sep 30, 2006, at 10:21 AM, I wrote: > (code fusion, anybody?); ^^^^ Interesting typo, given the subject matter. I'm pretty sure I meant "cold". 
From robert.kern at gmail.com Sun Oct 1 23:24:45 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 01 Oct 2006 22:24:45 -0500 Subject: [SciPy-dev] Online SciPy Journal In-Reply-To: <17695.23195.379855.269684@prpc.aero.iitb.ac.in> References: <451DBA2D.3070104@ee.byu.edu> <17695.23195.379855.269684@prpc.aero.iitb.ac.in> Message-ID: <4520867D.6010306@gmail.com> Prabhu Ramachandran wrote: >>>>>> "Travis" == Travis Oliphant writes: > > Travis> 1) Do you like the idea ? > > I think this is a very good idea. Some people are likely to claim > "nothing new in the particular implementation". However, you could > insist that code contributed here should be unit tested and reasonably > integration tested (how often do we see anything close to that in > other papers?). It might be a good idea to have two kinds of > contributions, "original/new ideas/algorithms/applications" and "new > implementations". Most papers, it would appear, are read by a handful > and used by an even smaller subset of the audience. The advantage of > something contributed to SciPy is that it has (on the average) far > greater utility to the whole community. Besides, I think it is > reasonable to argue that good code usually takes as long as (if not > longer than) writing a paper. For comparison, here are excerpts from the JStatSoft submission instructions. http://www.jstatsoft.org/instructions.php """JSS will publish 1. Manuals, user's guides, and other forms of description of statistical software, together with the actual software in human-readable form (peer-reviewed) 2. Code snippets -- small code projects, any language (section editors Hornik and Koenker, peer-reviewed). 3. Special issues on topics in statistical computing (guest editors, peer-reviewed, by invitation only, suggestions welcome). 4. A yearly special issue documenting progress of major statistical software projects (section editor Rossini, by invitation only, suggestions welcome) . 5. 
Reviews of Books on statistical computing and software. (section editor Gentleman, by invitation only, suggestions welcome) . 6. Reviews and comparisons of statistical software (section editors Unwin and Hartman, by invitation only, suggestions welcome). The typical JSS paper will have a section explaining the statistical technique, a section explaining the code, a section with the actual code, and a section with examples. All sections will be made browsable as well as downloadable. The papers and code should be accessible to a broad community of practitioners, teachers, and researchers in the field of statistics. """ However: """If code does something standard (for instance compute an incomplete beta in Fortran) it is only acceptable if it is better than the alternatives. On the other hand, if it does an incomplete non-central beta in Xlisp-Stat, then it merely has to show that it works well. """ -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From oliphant.travis at ieee.org Mon Oct 2 03:22:16 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 02 Oct 2006 01:22:16 -0600 Subject: [SciPy-dev] Online SciPy Journal In-Reply-To: <4520867D.6010306@gmail.com> References: <451DBA2D.3070104@ee.byu.edu> <17695.23195.379855.269684@prpc.aero.iitb.ac.in> <4520867D.6010306@gmail.com> Message-ID: <4520BE28.10707@ieee.org> > For comparison, here are excerpts from the JStatSoft submission instructions. > > http://www.jstatsoft.org/instructions.php > > """JSS will publish > > 1. Manuals, user's guides, and other forms of description of statistical > software, together with the actual software in human-readable form (peer-reviewed) > 2. Code snippets -- small code projects, any language (section editors > Hornik and Koenker, peer-reviewed). > 3. 
Special issues on topics in statistical computing (guest editors, > peer-reviewed, by invitation only, suggestions welcome). > 4. A yearly special issue documenting progress of major statistical software > projects (section editor Rossini, by invitation only, suggestions welcome) . > 5. Reviews of Books on statistical computing and software. (section editor > Gentleman, by invitation only, suggestions welcome) . > 6. Reviews and comparisons of statistical software (section editors Unwin > and Hartman, by invitation only, suggestions welcome). > > The typical JSS paper will have a section explaining the statistical technique, > a section explaining the code, a section with the actual code, and a section > with examples. All sections will be made browsable as well as downloadable. The > papers and code should be accessible to a broad community of practitioners, > teachers, and researchers in the field of statistics. > """ > > However: > > """If code does something standard (for instance compute an incomplete beta in > Fortran) it is only acceptable if it is better than the alternatives. On the > other hand, if it does an incomplete non-central beta in Xlisp-Stat, then it > merely has to show that it works well. > """ > I think these are good ideas that we can glean from. It is really nice that www.jstatsoft.org is already out there. There is enough interest that we should try to push this forward. I like the idea of volumes tracked by year and issues (or numbers) simply tracked by article. Let's go with that. We need to come up with guidelines for what can be published. There is a lot to discuss here. We don't have to nail it down exactly, but we should have a general idea of what we consider "publishable". Given the existence of other journals that are more "general-purpose," I'd really like to have a journal that focuses on code written for Python. Is that feasible? I don't see why not. 
I think that Python makes an excellent language to express scientific algorithms in. I think it should be a requirement that the contribution have some relevance to computing with Python. I definitely think there will be different "tiers" of contributions. These should be recognized as such by the existence of two "types of contributions". My preferred approach is to have this divided into two Journals (call them A and B for now) which clarify the difference in novelty. The "A" journal would discuss contributions where a novel algorithm, implementation, or idea is expressed, while the "B" contributions are module-style Python implementations of previously-known algorithms. Another candidate for the "A" journal might be documentation and "packaging" of several "B" contributions into something that could be a SciPy sub-package. I could see a typical publication needing two parts: code and write-up.

Code:
1) working code
2) unit tests
3) docstring-style docs

Write-up:
1) Description of how the algorithm fits into the world at a "reasonably" high level.
2) Discussion of how the algorithm was implemented and the design decisions behind the interface; some references to other implementations would also be useful here (whether in Python or not).
3) Technical description.

I think there are some "ready-made" publications already sitting in the SciPy SVN tree. Getting the raw code "published" would be the "review" process that Robert is pushing for and could be undertaken by people who are not the original author. I think we should tag code that has been "published" in some way in the SciPy tree so that 1) it is searchable (remember the discussion about adding code-category-tags to SciPy) and 2) you can know whether or not an algorithm has had that review. Just to clarify my views, I don't think we need to make it a requirement that the code "go into SciPy" to be published. In other words, people shouldn't feel like they have to "give up" their code.
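[Editor's note: a minimal sketch of the "code" half described above — working code, unit tests, and docstring-style docs in one module. The function name and the algorithm (composite trapezoidal rule) are invented for illustration and do not come from the thread.]

```python
"""A hypothetical journal submission module: working code,
docstring-style docs, and unit tests together.  The algorithm
(composite trapezoidal rule) is chosen only for illustration."""

import unittest


def trapz(y, dx=1.0):
    """Integrate uniformly spaced samples with the trapezoidal rule.

    Parameters
    ----------
    y : sequence of float
        Sample values of the integrand.
    dx : float, optional
        Spacing between consecutive samples (default 1.0).

    Returns
    -------
    float
        Approximation of the integral of the sampled function.
    """
    if len(y) < 2:
        return 0.0
    # Endpoints get weight 1/2, interior points weight 1.
    return dx * (0.5 * (y[0] + y[-1]) + sum(y[1:-1]))


class TestTrapz(unittest.TestCase):
    """Unit tests a reviewer could run directly."""

    def test_linear(self):
        # f(x) = x on [0, 4]: the trapezoidal rule is exact for
        # linear data, so the result should be 8 exactly.
        self.assertAlmostEqual(trapz([0, 1, 2, 3, 4]), 8.0)

    def test_too_few_samples(self):
        # A single sample spans no interval.
        self.assertEqual(trapz([5.0]), 0.0)
```

Running the tests (e.g. with `python -m unittest`) would then be part of the review itself.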
I only want to require that it works with Python. Other modules that are separately installed could still be "published" as long as the code was provided. What should the title be? How about Journal of Scientific Computing with Python, for a boring one??? Anxious to hear your ideas. My time-frame for this is that I'd like to get an issue out by January of next year (remember an issue is just a single publication). -Travis From wbaxter at gmail.com Mon Oct 2 04:05:05 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Mon, 2 Oct 2006 17:05:05 +0900 Subject: [SciPy-dev] Online SciPy Journal In-Reply-To: <4520BE28.10707@ieee.org> References: <451DBA2D.3070104@ee.byu.edu> <17695.23195.379855.269684@prpc.aero.iitb.ac.in> <4520867D.6010306@gmail.com> <4520BE28.10707@ieee.org> Message-ID: On 10/2/06, Travis Oliphant wrote: > > Given the existence of other journals that are more "general-purpose," > I'd really like to have a journal that focuses on code written for > Python. Is that feasible? I don't see why not. I think that Python > makes an excellent language to express scientific algorithms in. I > think it should be a requirement that the contribution have some > relevance to computing with Python. Even if it is all Python for the time being, I think I would prefer a more generic-sounding name, like "Journal of Scientific Computing in High Level Languages". Ok, that sucks, but still, programming languages come and go, and staking the journal's name to a particular one seems to be saying the ideas don't matter as much as the technology used. Sounds maybe more like a Ziff-Davis magazine than a Journal. --bb From cimrman3 at ntc.zcu.cz Mon Oct 2 08:34:04 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 02 Oct 2006 14:34:04 +0200 Subject: [SciPy-dev] Online SciPy Journal In-Reply-To: <451DBA2D.3070104@ee.byu.edu> References: <451DBA2D.3070104@ee.byu.edu> Message-ID: <4521073C.2040606@ntc.zcu.cz> Travis Oliphant wrote: > > 1) Do you like the idea ?
+1 > 2) Could you help (i.e. as an editor, reviewer, etc.)? +1 As concerns the name, I like the proposal of Travis (Journal of Scientific Computing with Python) but to come up with something new: 'Journal of Mixed-language Computing' to stress the principal way of getting performance in Python by coding the bottleneck parts in lower level languages. r. From guyer at nist.gov Mon Oct 2 08:35:45 2006 From: guyer at nist.gov (Jonathan Guyer) Date: Mon, 2 Oct 2006 08:35:45 -0400 Subject: [SciPy-dev] Online SciPy Journal In-Reply-To: <4520BE28.10707@ieee.org> References: <451DBA2D.3070104@ee.byu.edu> <17695.23195.379855.269684@prpc.aero.iitb.ac.in> <4520867D.6010306@gmail.com> <4520BE28.10707@ieee.org> Message-ID: On Oct 2, 2006, at 3:22 AM, Travis Oliphant wrote: > Just to clarify my views, I don't think we need to make it a > requirement that the code "go in to SciPy" to be published. In > other-words, people shouldn't feel like they have to "give-up" their > code. I only want to require that it works with Python. Other > modules > that are separately installed could still be "published" as long as > the > code was provided. With this clarification, definitely count me as +1 on this scheme. I was supportive already, but was worried that a "SciPy Journal" was going to be a journal about SciPy, rather than a journal about science with Python. I should have said so, but couldn't figure out a way to do it so that I didn't come across as one of those paranoid people who didn't want to "give-up" their code... which I am...
From aisaac at american.edu Mon Oct 2 09:10:29 2006 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 2 Oct 2006 09:10:29 -0400 Subject: [SciPy-dev] Online SciPy Journal In-Reply-To: <4520BE28.10707@ieee.org> References: <451DBA2D.3070104@ee.byu.edu> <17695.23195.379855.269684@prpc.aero.iitb.ac.in><4520867D.6010306@gmail.com><4520BE28.10707@ieee.org> Message-ID: On Mon, 02 Oct 2006, Travis Oliphant apparently wrote: > My preferred approach is to have this divided into two > Journals (call them A and B for now) which clarify the > difference in novelty. > The "A" journal would discuss contributions where a novel > algorithm, implementation, or idea is expressed while the > "B" contributions are module-style Python implementations > of previously-known algorithms. Another candidate for the > "A" journal might be documentation and "packaging" of > several "B" contributions into something that could be > a SciPy sub-package. Here is an alternate proposal: have one journal with well designated sections. If desirable, a section that experiences large growth in quality submissions can be split off later. Until then, all citations will be to a single journal, which will be helpful for name recognition (which is needed for several reasons). Four of the sections could simply be called "Algorithms", "Packages", "Implementations", and "Reviews". Each can have specific criteria for submission. Oddly enough, I do not think the name "Computational Science" is taken, and it might be nice ... Finally, to reduce the editor's burden, it will be important to limit the formats allowed for submission. If it is to be a free journal, authors must typeset articles themselves. I recommend requiring LaTeX and allowing the use of the listings package. This may be overkill, although some math journal(s) have almost fully automated the publication process this way. Also allowing restructured text, which has a LaTeX writer, may be useful---I do not know. 
Cheers, Alan Isaac From pearu at cens.ioc.ee Mon Oct 2 09:36:07 2006 From: pearu at cens.ioc.ee (pearu at cens.ioc.ee) Date: Mon, 2 Oct 2006 16:36:07 +0300 (EEST) Subject: [SciPy-dev] Online SciPy Journal Message-ID: I think it is a great idea. I have found that there are a number of useful coding ideas in Numpy that would be of interest also for non-Python users. One example is accessing Fortran 90 structures from C in a compiler-independent way, even though it is a well-known fact that the layout of these structures depends on the compiler or even the compiler version. On 9/30/06, Travis Oliphant wrote: > 1) Do you like the idea ? +1 > 2) Could you help (i.e. as an editor, reviewer, etc.)? +1 Pearu From david.huard at gmail.com Mon Oct 2 10:24:28 2006 From: david.huard at gmail.com (David Huard) Date: Mon, 2 Oct 2006 10:24:28 -0400 Subject: [SciPy-dev] Online SciPy Journal In-Reply-To: References: Message-ID: <91cf711d0610020724m5bb252b2hc55637f7f531cdb1@mail.gmail.com> Most scientific journals I know don't take code very seriously, even if a paper is based entirely on numerical simulations. The code is not reviewed, only the results and the analysis, which does not make much sense IMHO. So until science journals start to review and publish code (which may take a while), a Journal of Computational Science would be a great place to submit the code and get a review, before submitting the results and analysis to domain-specific journals. These articles could then cite the Computational Science related articles, as a token of a reviewed numerical implementation (this also makes for great publicity, and a good deal of citation hits). Also, the procedure for submitting and reviewing articles could be based on svn. For each submitted article, you get svn access, commit the code and the article, wait for the reviewers to send comments and bugs, and commit the revision so the reviewers know exactly what the changes to the previous version are (svn diff).
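[Editor's note: the revision-and-diff review loop David describes can be illustrated without an svn server. The sketch below uses Python's difflib to show what a reviewer would see between two committed revisions; the submission contents are invented for this example.]

```python
"""Sketch of diff-driven review: the reviewer compares the author's
revision 2 against revision 1, just as ``svn diff`` would.  The file
contents are invented for illustration."""

import difflib

# Revision 1: the code as first committed by the author.
rev1 = [
    "def mean(xs):\n",
    "    return sum(xs) / len(xs)\n",
]

# Revision 2: committed after a reviewer reported that an empty
# input raises ZeroDivisionError.
rev2 = [
    "def mean(xs):\n",
    "    if not xs:\n",
    "        raise ValueError('mean of empty sequence')\n",
    "    return sum(xs) / len(xs)\n",
]

# The reviewer sees exactly what changed between the two revisions.
diff = list(difflib.unified_diff(
    rev1, rev2,
    fromfile="submission.py (r1)",
    tofile="submission.py (r2)"))
print("".join(diff), end="")
```

With svn itself, the same view would come from `svn diff -r 1:2` on the submission file in the repository.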
Overall, I think this is an opportunity to force good programming habits on scientists, improve the quality of software, and ensure the accessibility of code (after many hours of frustration spent trying to find code from published articles, emailing authors at old addresses, etc.). +1 +1, David P.S. There is a new journal named Computational Science and Discovery which seems to have just appeared and is probably a direct competitor. http://www.iop.org/EJ/journal/CSD 2006/10/2, pearu at cens.ioc.ee : > > > I think it is a great idea. I have found that there are number of useful > coding > ideas in Numpy that would be off interest also for non-Python users. One > example > is accessing Fortran 90 structures from C in a compiler independent way > though > it is well known fact that the layout of these structures depends on a > compiler or even of compiler version. > > On 9/30/06, Travis Oliphant wrote: > > > 1) Do you like the idea ? > > +1 > > > 2) Could you help (i.e. as an editor, reviewer, etc.)? > > +1 > > Pearu > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev From cimrman3 at ntc.zcu.cz Mon Oct 2 10:39:59 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 02 Oct 2006 16:39:59 +0200 Subject: [SciPy-dev] Online SciPy Journal In-Reply-To: References: <451DBA2D.3070104@ee.byu.edu> <17695.23195.379855.269684@prpc.aero.iitb.ac.in><4520867D.6010306@gmail.com><4520BE28.10707@ieee.org> Message-ID: <452124BF.2090607@ntc.zcu.cz> Alan G Isaac wrote: > Oddly enough, I do not think the name "Computational > Science" is taken, and it might be nice ... google says: International Journal of Computational Science and Engineering (IJCSE) (https://www.inderscience.com/browse/index.php?journalcode=ijcse) which is too close...
From aisaac at american.edu Mon Oct 2 10:55:07 2006 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 2 Oct 2006 10:55:07 -0400 Subject: [SciPy-dev] Online SciPy Journal In-Reply-To: <452124BF.2090607@ntc.zcu.cz> References: <451DBA2D.3070104@ee.byu.edu> <17695.23195.379855.269684@prpc.aero.iitb.ac.in><4520867D.6010306@gmail.com><4520BE28.10707@ieee.org> <452124BF.2090607@ntc.zcu.cz> Message-ID: > Alan G Isaac wrote: >> Oddly enough, I do not think the name "Computational >> Science" is taken, and it might be nice ... On Mon, 02 Oct 2006, Robert Cimrman apparently wrote: > google says: > International Journal of Computational Science and Engineering (IJCSE) > (https://www.inderscience.com/browse/index.php?journalcode=ijcse) > which is too close... Maybe this is field specific, but I do not consider that too close. In Economics we have, just as a very small example: Review of Political Economy Review of International Political Economy Journal of Political Economy and then there are things like Political Economy Journal etc etc etc Someone will grab the name Computational Science soon. Might as well be someone virtuous ... Cheers, Alan Isaac From cimrman3 at ntc.zcu.cz Mon Oct 2 11:13:57 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 02 Oct 2006 17:13:57 +0200 Subject: [SciPy-dev] Online SciPy Journal In-Reply-To: References: <451DBA2D.3070104@ee.byu.edu> <17695.23195.379855.269684@prpc.aero.iitb.ac.in><4520867D.6010306@gmail.com><4520BE28.10707@ieee.org> <452124BF.2090607@ntc.zcu.cz> Message-ID: <45212CB5.7000500@ntc.zcu.cz> Alan G Isaac wrote: >> Alan G Isaac wrote: >>> Oddly enough, I do not think the name "Computational >>> Science" is taken, and it might be nice ... > > On Mon, 02 Oct 2006, Robert Cimrman apparently wrote: >> google says: >> International Journal of Computational Science and Engineering (IJCSE) >> (https://www.inderscience.com/browse/index.php?journalcode=ijcse) >> which is too close... 
> > Maybe this is field specific, > but I do not consider that too close. > In Economics we have, just as a very small example: > Review of Political Economy > Review of International Political Economy > Journal of Political Economy > and then there are things like > Political Economy Journal > etc etc etc > > Someone will grab the name Computational Science soon. > Might as well be someone virtuous ... Sure - I also like the proposed name :). But still, in your examples, there is always a different word, an adjective, etc., while in our case it is just a subset of the other name, which could cause confusion (or am I just getting paranoid?). Maybe an adjective would help (J. of Computational Software Science?? - as a non-native English speaker, I am not sure if it sounds weird or not.) Nevertheless, having a journal is a great idea! (Has Travis ever had a bad idea?) cheers, r. From ellisonbg.net at gmail.com Mon Oct 2 13:57:25 2006 From: ellisonbg.net at gmail.com (Brian Granger) Date: Mon, 2 Oct 2006 11:57:25 -0600 Subject: [SciPy-dev] Online SciPy Journal In-Reply-To: <45212CB5.7000500@ntc.zcu.cz> References: <451DBA2D.3070104@ee.byu.edu> <17695.23195.379855.269684@prpc.aero.iitb.ac.in> <4520867D.6010306@gmail.com> <4520BE28.10707@ieee.org> <452124BF.2090607@ntc.zcu.cz> <45212CB5.7000500@ntc.zcu.cz> Message-ID: <6ce0ac130610021057g5cfe26f2tfebd60b717ccced4@mail.gmail.com> Hi, I would love to see something like this and am willing to help. I think it is better to have one journal with different sections. This gives more flexibility moving forward: it is easier to change sections in a journal than it is to change an entire journal. Also, I think the title should reflect an emphasis on "scientific computing", not "computational science" or, more generally, "computing." And I am fine having it be Python-focused in both its title and official goals. I have no desire to spend time promoting scientific computing in, say, Perl.
I think it would be a very good idea to find sponsors for this (companies, national labs, universities). I am willing to help coordinate this effort. I am also willing to help as an editor and reviewer. Brian From oliphant at ee.byu.edu Mon Oct 2 16:53:36 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 02 Oct 2006 14:53:36 -0600 Subject: [SciPy-dev] Online SciPy Journal In-Reply-To: References: <451DBA2D.3070104@ee.byu.edu> <17695.23195.379855.269684@prpc.aero.iitb.ac.in> <4520867D.6010306@gmail.com> <4520BE28.10707@ieee.org> Message-ID: <45217C50.4050800@ee.byu.edu> Bill Baxter wrote: >On 10/2/06, Travis Oliphant wrote: > > >>Given the existence of other journals that are more "general-purpose," >>I'd really like to have a journal that focuses on code written for >>Python. Is that feasible? I don't see why not. I think that Python >>makes an excellent language to express scientific algorithms in. I >>think it should be a requirement that the contribution have some >>relevance to computing with Python. >> >> > >Even if it is all Python for the time being, I think I would prefer a >more generic sounding name, like "Journal of Scientific Computing in >High Level Languages". > I have mixed feelings here. I, too, am concerned that a journal which expounds scientific computing with Python might be seen as too "Ziff-Davis"-like (although it will certainly have a much different focus than their magazines). On the other hand, there are already a lot of "general-purpose" journals for publishing algorithms in scientific computing. We need a journal that documents the development of tools for an open source platform for conducting scientific computing. Right now that platform for us is Python. Should it change, then another journal gets started. I think this is an example of where you lose benefit by trying to be too general. I don't think the reputation of the journal will be altered (in the long run) by whether or not the word Python is in the name.
In fact, I'd like to emphasize the use of Python for scientific code development in place of other choices which are also available. So, perhaps I'm not "ambitious" enough to create a publication that outlasts Python, but in my mind, the journal lives and dies by the use of Python in scientific computing. There are just too many other journals that take the "all languages welcome" viewpoint and provide you with papers that describe algorithms but are difficult to reproduce. I think we should make it clear that this journal details how Python can be used for computing (and for explaining algorithms). -Travis From aisaac at american.edu Mon Oct 2 17:21:13 2006 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 2 Oct 2006 17:21:13 -0400 Subject: [SciPy-dev] Online SciPy Journal In-Reply-To: <45217C50.4050800@ee.byu.edu> References: <451DBA2D.3070104@ee.byu.edu> <17695.23195.379855.269684@prpc.aero.iitb.ac.in> <4520867D.6010306@gmail.com><4520BE28.10707@ieee.org> <45217C50.4050800@ee.byu.edu> Message-ID: On Mon, 02 Oct 2006, Travis Oliphant apparently wrote: > We need a journal that documents the development of tools > for an open source platform for conducting scientific > computing. Right now that platform for us is Python. A journal has a mission statement where this can be specified. It does not need to be in its title. And the title need not turn away people who would, e.g., write C extensions or create something like f2py.
More titles (any puns intended):
Objective Science
Innovations in Scientific Computing
Extending Scientific Computing
Code Fusion (serendipity; but I forget whom)
Applied Scientific Computing
Scientific Computing Insight

But to be specific it would be: SciPy Journal Oh well, Alan Isaac From oliphant at ee.byu.edu Mon Oct 2 17:56:14 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 02 Oct 2006 15:56:14 -0600 Subject: [SciPy-dev] Online SciPy Journal In-Reply-To: References: <451DBA2D.3070104@ee.byu.edu> <17695.23195.379855.269684@prpc.aero.iitb.ac.in><4520867D.6010306@gmail.com><4520BE28.10707@ieee.org> Message-ID: <45218AFE.8040005@ee.byu.edu> Alan G Isaac wrote: >On Mon, 02 Oct 2006, Travis Oliphant apparently wrote: > > >>My preferred approach is to have this divided into two >>Journals (call them A and B for now) which clarify the >>difference in novelty. >> >> > > > >Here is an alternate proposal: have one journal with well >designated sections. > I like it. > If desirable, a section that >experiences large growth in quality submissions can be split >off later. Until then, all citations will be to a single >journal, which will be helpful for name recognition (which >is needed for several reasons). Four of the sections could >simply be called "Algorithms", "Packages", >"Implementations", and "Reviews". Each can have specific >criteria for submission. > > I like these names too. >Finally, to reduce the editor's burden, it will be important >to limit the formats allowed for submission. > Yes, that is true. This journal needs to be author-edited as much as possible.
-Travis From oliphant at ee.byu.edu Mon Oct 2 17:57:04 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 02 Oct 2006 15:57:04 -0600 Subject: [SciPy-dev] Online SciPy Journal In-Reply-To: References: <451DBA2D.3070104@ee.byu.edu> <17695.23195.379855.269684@prpc.aero.iitb.ac.in> <4520867D.6010306@gmail.com><4520BE28.10707@ieee.org> <45217C50.4050800@ee.byu.edu> Message-ID: <45218B30.7080109@ee.byu.edu> Alan G Isaac wrote: >On Mon, 02 Oct 2006, Travis Oliphant apparently wrote: > > >>We need a journal that documents the development of tools >>for an open source platform for conducting scientific >>computing. Right now that platform for us is Python. >> >> > >A journal has a mission statement where this can be >specified. It does not need to be in its title. > >And the title needs not to turn away people who would e.g., >write C extensions or create something like f2py. > >More titles (any puns intended): >Objective Science >Innovations in Scientific Computing >Extending Scientific Computing >Code Fusion (serendipity; but I forget whom) >Applied Scientific Computing >Scientific Computing Insight > > > Good name suggestions. I like the names Applied Scientific Computing Applied Computational Science -Travis From wbaxter at gmail.com Mon Oct 2 20:52:44 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Tue, 3 Oct 2006 09:52:44 +0900 Subject: [SciPy-dev] Online SciPy Journal In-Reply-To: <45217C50.4050800@ee.byu.edu> References: <451DBA2D.3070104@ee.byu.edu> <17695.23195.379855.269684@prpc.aero.iitb.ac.in> <4520867D.6010306@gmail.com> <4520BE28.10707@ieee.org> <45217C50.4050800@ee.byu.edu> Message-ID: On 10/3/06, Travis Oliphant wrote: > Bill Baxter wrote: > > >On 10/2/06, Travis Oliphant wrote: > > > > > >Even if it is all Python for the time being, I think I would prefer a > >more generic sounding name, like "Journal of Scientific Computing in > >High Level Languages". 
> We need a journal > that documents the development of tools for an open source platform for > conducting scientific computing. Right now that platform for us is > Python. > So, perhaps I'm not "ambitious" enough to create a publication that > outlasts Python Well I think it's pretty ambitious to restrict the potential pool of submitters to just one language from the outset. Maybe the policies can worm around that a little. Like "ok we call it the Journal of Scientific Python, but really we'll accept things in other languages too, as long as they are really good and take advantage of the same sorts of features that make Python great." So that way if someone goes and creates SciRuby it could be published in the Journal of Scientific Python, too. (But after that they may want to go and create the Journal of Scientific Ruby.) In that way it could serve as a way to get ideas for features that are must-haves for the next-generation Python. -- There could be a subtitle that usually no one ever mentions but the official title is like: THE JOURNAL OF SCIENTIFIC PYTHON and science with dynamic programming languages -- Another side-benefit of it being very obviously Python-centric is that it becomes a very visible sign of the size, vitality and value of the community of scientific Python users, which could help lend weight to requests for language changes. Being able to create web pages better and faster is great, but if your programming language is actually saving lives by analyzing CAT scan data or something like that, then that's really something to be proud of. And if the makers of that software say they could do a better job with feature X, that seems pretty compelling.
--bb From aisaac at american.edu Mon Oct 2 22:56:01 2006 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 2 Oct 2006 22:56:01 -0400 Subject: [SciPy-dev] Online SciPy Journal In-Reply-To: <45218B30.7080109@ee.byu.edu> References: <451DBA2D.3070104@ee.byu.edu> <17695.23195.379855.269684@prpc.aero.iitb.ac.in> <4520867D.6010306@gmail.com><4520BE28.10707@ieee.org> <45217C50.4050800@ee.byu.edu> <45218B30.7080109@ee.byu.edu> Message-ID: On Mon, 02 Oct 2006, Travis Oliphant apparently wrote: > I like the names > Applied Scientific Computing > Applied Computational Science I think the first is more accurate? The journal will also have to be hosted someplace reasonably stable, so that access to past issues is sustained. Might Enthought be willing to provide space for this? (I have absolutely no right to ask this and no clue about its appropriateness; I am just brainstorming.) Assuming LaTeX-only submissions, a good style file is important right off the bat. I like the JSS style. This is Copyright (C) 2004 Achim Zeileis Achim.Zeileis at wu-wien.ac.at If others like it too, I'm willing to ask him for permission to hack it. Cheers, Alan Isaac From fperez.net at gmail.com Tue Oct 3 02:59:33 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Tue, 3 Oct 2006 00:59:33 -0600 Subject: [SciPy-dev] Online SciPy Journal In-Reply-To: <451DBA2D.3070104@ee.byu.edu> References: <451DBA2D.3070104@ee.byu.edu> Message-ID: On 9/29/06, Travis Oliphant wrote: > As most of you know, I'm at an academic institution and have a strong > belief that open source should mesh better with the academic world than > it currently does. One of the problems is that it is not easy to just > point to open source software as a "publication", which is one of the > areas that most tenure-granting boards look at when deciding on a > candidate. > > To help with that a little bit, I'd really like to get a peer-reviewed > on-line journal for code contributed to SciPy started.
Not to implicate > him if he no longer thinks it's a good idea, but I first started > thinking seriously about this at SciPy 2006 when Fernando Perez > mentioned he had seen this model with a math-related software package > and thought we could try something similar. [...] I think this is an important and worthwhile idea, and I'm willing to help. However, I think it's worth clarifying the intent of this effort to make sure it really does something useful. The GAP process (I can confirm that what Robert mentioned is what I had in mind when I spoke with Travis; we can request further details from Steve Linton at some point if we deem it necessary) addresses specifically the issue of code contributions to GAP as peer-reviewed packages, without going 'all the way' into the creation of a Journal, which is an expensive proposition (at the very least from a time and effort perspective). There are basically, I think, two issues at play here: 1. How to ensure that time spent on developing open source packages which are truly useful is acknowledged by 'traditional' academia, for things like hiring and tenure reviews, promotions, grant applications (this means that any changes we target need to make their way into the funding agencies), etc. 2. The question about having a Journal, covering modern scientific computing development, practices and algorithms, and with a more or less narrow Python focus. I am convinced that #1 is critically important: I believe strongly that the open source model produces /better/ tools, more flexibly, and in a vastly more pleasant fashion, than mailing a big check every year to your favorite vendor. But if many of us want a professional research future that involves these ideas, we're going to need them to be acknowledged by the proverbial bean counters. Else, committing significant amounts of time to open source developments will look an awful lot like professional suicide. However, I am not yet convinced that #2 is the way to achieve #1.
It may be a worthy goal in and of itself, and that's certainly a valid question to be discussed. But I think it's really worth sorting out whether #2 is the only, or even a good way, to accomplish #1. The editing and publication of a journal requires a vast amount of work, and for a journal to be really worth anything towards goal #1, it /must/ achieve a certain level of respectability that takes a lot of work and time. The GAP developers seem to have found that a clearly defined process of review is enough to satisfy #1 without having to create a journal. It may be worth at least considering this option before going full bore with a journal idea. If, however, a Journal is deemed necessary, then we need to make sure that the ideas behind it give it a solid chance of long-term success as a genuine contribution to the scientific computing literature. We all know there's already way too many obscure journals nobody reads, we shouldn't be contributing to that. I'm listing below a few references to various topics and ideas I've noted over time on this topic, hoping they may be of value for guiding this discussion: - One of the central points of the open source model is that it provides for true reproducibility of computational results, since users can rebuild everything down to the operating system itself if need be. The big mantra of reproducible research has for a long time been championed by Stanford's Jon Claerbout, and there is a very famous paper by Donoho and Buckheit which summarizes this in the context of Wavelab, a Matlab-based wavelet toolkit. These are the basic references: http://sepwww.stanford.edu/research/redoc/ http://www-stat.stanford.edu/~donoho/Reports/1995/wavelab.pdf I think it would be great if a new journal would emphasize these ideas as a founding principle, by making full reproducibility (which obviously requires access to source code and data) a condition for publication. 
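The reproducibility requirement described here can be made concrete in a few lines. Below is a minimal sketch (stdlib only; the function name and record fields are hypothetical, not part of any proposed submission format) of the kind of environment record a journal could require alongside results, so a referee can diff it against their own setup:

```python
import hashlib
import json
import platform
import sys

def reproducibility_record(datasets):
    """Build a JSON record describing the run environment and input data.

    `datasets` maps a human-readable name to the raw bytes of an input
    file; the SHA-256 checksums let a referee verify they are re-running
    the analysis on identical data.
    """
    record = {
        "python_version": sys.version,
        "platform": platform.platform(),
        "data_checksums": {
            name: hashlib.sha256(blob).hexdigest()
            for name, blob in datasets.items()
        },
    }
    return json.dumps(record, indent=2, sort_keys=True)

print(reproducibility_record({"obs.csv": b"t,y\n0,1.0\n"}))
```

Archiving such a record next to the paper's source and data would let the 'full reproducibility' condition be checked mechanically rather than taken on faith.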
I am convinced that this alone would drastically improve the S/N ratio of the journal, by eliminating all the plotting-by-photoshop papers from the get go. - There is an interesting note in a recent LWN issue: http://lwn.net/Articles/199427/ if you scroll down to the section titled "WOS4: Quality management in free content", you'll find a description of the journal Atmospheric Chemistry and Physics (http://www.copernicus.org/EGU/acp/acp.html). They use an interesting combination of traditional peer-review and a 'publish early, discuss often' approach which apparently has produced good results. Quoting from the above: When a paper is submitted, as long as it's not complete junk, it will be immediately published as a "discussion paper" on the journal's web site. It is clearly marked as an unreviewed paper, not to be taken as definitive results at that time. While the referees are reviewing the paper, others can post comments and questions as well. These others are limited to "registered scientists," since the desire is to keep the conversation at a high level. The comments become part of the permanent record stored with the paper, and they can, at times, be cited by others in their own right. The editor will consider outside comments when deciding whether the paper is to be accepted and what revisions are to be required. After using this process for five years, Atmospheric Chemistry and Physics has the highest level of citations in the field. Citations are important in the scientific world: they are an indication that a given set of research results has helped and inspired discoveries elsewhere. The high level of citations here indicates that this publication process is succeeding in attracting high-level papers and filtering out the less useful submissions. - It's always worth having a look at the PLOS process (http://www.plos.org), which has been for a few years trying to change the publication model in the biomedical community towards a more open one. 
I'm not sure what actual impact their journals currently have, though. - In the high energy physics community, for a long time the de facto mechanism for real communication has been the arXiv (http://arxiv.org). I think it's fair to say that in several subfields of HEP, people push for publication in 'real journals' mostly for career reasons, but by the time a paper makes its way to Physical Review or Nuclear Physics, it has long ago been digested, discussed and commented, possibly in a round of short comment papers at the arXiv itself. While the arXiv is NOT peer-reviewed and hence the need for 'real' publications for bean-counting purposes remains, the crackpots and other poor-quality work never seem to be a major problem: people tend to just silently ignore them, while the good work is quickly picked up and generates response. Over time, the arXiv has developed subfields beyond HEP, I have no idea how successful these have been in their respective communities. - I also think we should target a higher, long-term goal: improving the standards of software development in scientific work, by 'showing the way' with an emphasis on documentation, testing (unit and otherwise), clean APIs, etc. Hopefully this will be a beneficial side effect of this effort, whether done via a journal or some other process. In any case, I'm very interested in this and I'm willing to help, and I didn't mean to undermine the enthusiasm displayed so far. Regards, f
From stefan at sun.ac.za Tue Oct 3 04:02:24 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Tue, 3 Oct 2006 10:02:24 +0200 Subject: [SciPy-dev] Online SciPy Journal In-Reply-To: References: <451DBA2D.3070104@ee.byu.edu> <17695.23195.379855.269684@prpc.aero.iitb.ac.in> <45217C50.4050800@ee.byu.edu> Message-ID: <20061003080224.GB26999@mentat.za.net> On Mon, Oct 02, 2006 at 10:56:01PM -0400, Alan G Isaac wrote: > Assuming LaTeX-only submissions, a good style file is > important right off the bat. I like the JSS style. > This is > Copyright (C) 2004 Achim Zeileis Achim.Zeileis at wu-wien.ac.at > If others like it too, I'm willing to ask him for permission > to hack it. I guess we are getting ahead of ourselves, but since the topic was brought up: The IEEE Journal layout is distributed under the Perl Artistic License at http://www.ctan.org/tex-archive/macros/latex/contrib/IEEEtran/ with the additional advantage that it has a LyX template (included in the LyX distribution). The idea of a journal for pythonic science appeals to me as a student. In order to complete theses, large pieces of code are often written; yet, for all that time spent, there is no reward in terms of a publication. I agree with David that this may motivate contributing scientists to write better code. We could probably do with a review of the level of testing done in scipy/numpy ourselves ;) Regards Stéfan

From oliphant at ee.byu.edu Tue Oct 3 12:11:16 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 03 Oct 2006 10:11:16 -0600 Subject: [SciPy-dev] Online SciPy Journal In-Reply-To: References: <451DBA2D.3070104@ee.byu.edu> Message-ID: <45228BA4.1050809@ee.byu.edu> Fernando Perez wrote: >I think this is an important and worthwhile idea, and I'm willing to help. >However, I think it's worth clarifying the intent of this effort to make sure >it really does something useful. > > Thank you. Much of what you have written I've been concerned about as well.
>The GAP process (I can confirm that what Robert mentioned is what I had in >mind when I spoke with Travis, we can request further details from Steve >Linton at some point if we deem it necessary) addresses specifically the >issue of code contributions to GAP as peer-reviewed packages, without going >'all the way' into the creation of a Journal, which is an expensive >proposition (at the very least from a time and effort perspective). > >There are basically, I think, two issues at play here: > >1. How to ensure that time spent on developing open source packages which >are truly useful is acknowledged by 'traditional' academia, for things like >hiring and tenure reviews, promotions, grant applications (this means that >any changes we target need to make their way into the funding agencies), etc. > >2. The question about having a Journal, covering modern scientific computing >development, practices and algorithms, and with a more or less narrow Python >focus. > > >I am convinced that #1 is critically important: I believe strongly that the >open source model produces /better/ tools, more flexibly, and in a vastly more >pleasant fashion, than mailing a big check every year to your favorite vendor. >But if many of us want a professional research future that involves these >ideas, we're going to need them to be acknowledged by the proverbial bean >counters. Else, committing significant amounts of time to open source >developments will look an awful lot like professional suicide. > > #1 is my priority as well. When I talked about a Journal, I meant it as a name one could use when citing the work described in #1. In other words, the Journal is a way to archive and document the contributions to open-source packaging. >The GAP developers seem to have found that a clearly defined process of review >is enough to satisfy #1 without having to create a journal. It may be worth >at least considering this option before going full bore with a journal idea. > > Definitely.
>In any case, I'm very interested in this and I'm willing to help, and I didn't >mean to undermine the enthusiasm displayed so far. > > I don't think you've undermined enthusiasm. You have identified what we should be thinking about. As I've looked around at what's available, there are already a host of journals one can submit "algorithms" to. I'd like to figure out a way we can get "peer-review" credit for contributions to the open-source world by establishing some sort of peer-review process. At my university (BYU) two things are looked at for "scholarship" points: 1) peer review and 2) novelty. #1 we can easily argue for with open-source software. The novelty issue is what most people looking at contributions to scientific computing with Python will probably wonder about. Is code that implements some otherwise well-known algorithm in Python "novel"? Here is what my University documents indicate about scholarship evidence: "It should be of high quality and contain some element of originality, either in the form of new knowledge, new understanding, fresh insight, or unique skill or interpretation" Other documents may differ, but I think a strong case can be made that a lot (but not all) of the software written for SciPy has at least "unique skill or interpretation" but perhaps also "new understanding" and "fresh insight." Other factors help here: the definition of a "well-known" algorithm is usually quite subjective. A lot of "well-known" algorithms actually require quite a bit of decision-making / tweaking to implement in a particular language. Not only that, but the design of an interface to code is also an aspect of code writing that could be considered scholarly.
I really think that having well-stated goals, a good peer-review process, and some way to "cite" the work would go a long way to solving the problem of "how can I get credit for this software" -Travis >Regards, > >f >_______________________________________________ >Scipy-dev mailing list >Scipy-dev at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-dev > >

From ellisonbg.net at gmail.com Tue Oct 3 14:17:30 2006 From: ellisonbg.net at gmail.com (Brian Granger) Date: Tue, 3 Oct 2006 12:17:30 -0600 Subject: [SciPy-dev] Online SciPy Journal In-Reply-To: <45228BA4.1050809@ee.byu.edu> References: <451DBA2D.3070104@ee.byu.edu> <45228BA4.1050809@ee.byu.edu> Message-ID: <6ce0ac130610031117j77d490f7u1a0e0ca1febdd482@mail.gmail.com> Thanks Fernando for spending so much time thinking about this. There is another pitfall that I see lurking here that is relevant if we want to create something that the bean counters will count. I am guessing that the reason everyone has been so enthusiastic about the idea (including myself) is that we want such a journal to exist because we want to publish articles in it. But this won't work very well if all of us are also the journal's primary organizers/editors/reviewers. From the outside (and to the bean counters), I think that scenario will simply look like a bunch of friends in an isolated community approving of each other's work (even though it may be more than that in reality). Because of this, no matter what we do, I think it is important to build something that people outside scipy-dev at scipy.org will want to i) read and ii) publish in. The question is how to go about this. One way would be to find sponsors - organizations and individuals that could get behind the effort and provide publicity, manpower and resources.
Another thing that might be important is to have a rolling publication model like that of the physics arxiv or boost - rather than a traditional model of having issues/volumes published at regular intervals. The difficulty with the traditional model is that there is pressure to have a continual flow of articles - it also creates deadlines :( Brian From oliphant at ee.byu.edu Tue Oct 3 15:21:03 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 03 Oct 2006 13:21:03 -0600 Subject: [SciPy-dev] Online SciPy Journal In-Reply-To: <6ce0ac130610031117j77d490f7u1a0e0ca1febdd482@mail.gmail.com> References: <451DBA2D.3070104@ee.byu.edu> <45228BA4.1050809@ee.byu.edu> <6ce0ac130610031117j77d490f7u1a0e0ca1febdd482@mail.gmail.com> Message-ID: <4522B81F.2010609@ee.byu.edu> Brian Granger wrote: >Thanks Fernando for spending so much time thinking about this. > >There is another pitfall that I see lurking here that is relevant if >we want to create something that the bean counters will count. > >I am guessing that the reason the everyone has been so enthusiastic >about the idea (including myself) is that we want such a journal to >exist because we want to publish articles in it. But this won't work >very well if all of us are also the journal's primary >organizers/editors/reviewers. > I can see your point, and we definitely want to make it as broadly interesting as possible. But, naturally, it will self-select those who are interested in using Python for scientific computing. Having reviewers be publishers is not the problem. We will need more reviewers than "editors" (used in the sense of assigning reviewers to submissions). >Because of this, no matter what we do, I think it is important to >build something that people outside scipy-dev at scipy.org will want to >i) read and ii) publish in. The question is how to go about this. > > Outside of scipy-dev definitely, outside of scipy-user? maybe, outside of both scipy-user and numpy-discussions? I don't think so. 
The point is to grow both scipy-user and numpy-discussions. Perhaps they become an integral part of the review process. I don't know. Maybe a forum page is better for that. >One way would be to find sponsors - organizations and individuals that >could get behind the effort and provide publicity, manpower and >resources. > > This is a great idea. >Another thing that might be important is to have a rolling publication >model like that of the physics arxiv or boost - rather than a >traditional model of having issues/volumes published at regular >intervals. > I'm not sure what the difference is between the arxiv "rolling" model and the traditional model modified only to the point that you publish each submission as it gets "accepted" as an issue. I haven't studied the arxiv model though. -Travis From david.huard at gmail.com Tue Oct 3 15:21:58 2006 From: david.huard at gmail.com (David Huard) Date: Tue, 3 Oct 2006 15:21:58 -0400 Subject: [SciPy-dev] Online SciPy Journal In-Reply-To: <6ce0ac130610031117j77d490f7u1a0e0ca1febdd482@mail.gmail.com> References: <451DBA2D.3070104@ee.byu.edu> <45228BA4.1050809@ee.byu.edu> <6ce0ac130610031117j77d490f7u1a0e0ca1febdd482@mail.gmail.com> Message-ID: <91cf711d0610031221s5fece503s974b145616d209d@mail.gmail.com> I found some resources that may be of interest: A document on the planning of an open access journal (business models, tips, etc.) http://www.soros.org/openaccess/oajguides/html/business_planning.htm Software to create and manage online open access repositories for articles: Eprints (from Southampton University, perl) DSpace (from MIT, java) CDSWare (from CERN, python) FEDORA (from Cornell and U. of Virginia, java) Copernicus, a company specializing in the publication of open access journals. http://www.copernicus.org/COPERNICUS/publications/copernicus_strategies.html Cheers, David 2006/10/3, Brian Granger : > > Thanks Fernando for spending so much time thinking about this. 
> > There is another pitfall that I see lurking here that is relevant if > we want to create something that the bean counters will count. > > I am guessing that the reason the everyone has been so enthusiastic > about the idea (including myself) is that we want such a journal to > exist because we want to publish articles in it. But this won't work > very well if all of us are also the journal's primary > organizers/editors/reviewers. From the outside, (and to the bean > counters) I think that scenario will simply look like a bunch of > friends in an isolated community approving of each other's work (even > though it may be more than that in reality). > > Because of this, no matter what we do, I think it is important to > build something that people outside scipy-dev at scipy.org will want to > i) read and ii) publish in. The question is how to go about this. > One way would be to find sponsors - organizations and individuals that > could get behind the effort and provide publicity, manpower and > resources. > > Another thing that might be important is to have a rolling publication > model like that of the physics arxiv or boost - rather than a > traditional model of having issues/volumes published at regular > intervals. The difficulty with the traditional model is that there is > pressure to have a continual flow of articles - it also creates > deadlines :( > > Brian > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bsouthey at gmail.com Tue Oct 3 20:48:58 2006 From: bsouthey at gmail.com (Bruce Southey) Date: Tue, 3 Oct 2006 19:48:58 -0500 Subject: [SciPy-dev] Online SciPy Journal In-Reply-To: <91cf711d0610031221s5fece503s974b145616d209d@mail.gmail.com> References: <451DBA2D.3070104@ee.byu.edu> <45228BA4.1050809@ee.byu.edu> <6ce0ac130610031117j77d490f7u1a0e0ca1febdd482@mail.gmail.com> <91cf711d0610031221s5fece503s974b145616d209d@mail.gmail.com> Message-ID: Hi, This is an online statistical journal that appears to address some of the aspects being discussed. Regards Bruce ABOUT INTERNATIONAL JOURNAL OF BIOSTATISTICS 'The International Journal of Biostatistics' covers the entire range of biostatistics, from theoretical advances to relevant and sensible translations of a practical problem into a statistical framework, including advances in biostatistical computing. Electronic publication also allows for data and software code to be appended, and opens the door for reproducible research allowing readers to easily replicate analyses described in a paper. Both original research and review articles will be warmly received, as will articles applying sound statistical methods to practical problems. For more details, or to submit your next paper, visit: http://www.bepress.com/ijb From ellisonbg.net at gmail.com Wed Oct 4 00:00:50 2006 From: ellisonbg.net at gmail.com (Brian Granger) Date: Tue, 3 Oct 2006 22:00:50 -0600 Subject: [SciPy-dev] Online SciPy Journal In-Reply-To: <4522B81F.2010609@ee.byu.edu> References: <451DBA2D.3070104@ee.byu.edu> <45228BA4.1050809@ee.byu.edu> <6ce0ac130610031117j77d490f7u1a0e0ca1febdd482@mail.gmail.com> <4522B81F.2010609@ee.byu.edu> Message-ID: <6ce0ac130610032100q61515368o326576d50feb478a@mail.gmail.com> > I can see your point, and we definitely want to make it as broadly > interesting as possible. But, naturally, it will self-select those who > are interested in using Python for scientific computing. 
Having > reviewers be publishers is not the problem. We will need more > reviewers than "editors" (used in the sense of assigning reviewers to > submissions). Yes, we definitely need more reviewers than editors. > >Because of this, no matter what we do, I think it is important to > >build something that people outside scipy-dev at scipy.org will want to > >i) read and ii) publish in. The question is how to go about this. > > > > > Outside of scipy-dev definitely, outside of scipy-user? maybe, outside > of both scipy-user and numpy-discussions? I don't think so. The > point is to grow both scipy-user and numpy-discussions. Perhaps they > become an integral part of the review process. I don't know. Maybe a > forum page is better for that. True, if you start to include the entire Numpy community, the group does begin to cover all those who would be interested in scientific computing with Python - and others as well. > >Another thing that might be important is to have a rolling publication > >model like that of the physics arxiv or boost - rather than a > >traditional model of having issues/volumes published at regular > >intervals. > > > I'm not sure what the difference is between the arxiv "rolling" model > and the traditional model modified only to the point that you publish > each submission as it gets "accepted" as an issue. I haven't studied > the arxiv model though. The main difference I was referring to is that in a rolling model, each article is published on a web site immediately upon being accepted/edited, without waiting for the next "volume" of the journal to be published. I guess the rolling model is more like a continuous stream of articles, whereas the traditional model (of print journals, for instance) is bunches of articles packaged as a monthly or bi-monthly "volume". The arxiv is different in other ways, but I didn't have those in mind.
From robert.kern at gmail.com Wed Oct 4 00:10:09 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 03 Oct 2006 23:10:09 -0500 Subject: [SciPy-dev] Online SciPy Journal In-Reply-To: <6ce0ac130610032100q61515368o326576d50feb478a@mail.gmail.com> References: <451DBA2D.3070104@ee.byu.edu> <45228BA4.1050809@ee.byu.edu> <6ce0ac130610031117j77d490f7u1a0e0ca1febdd482@mail.gmail.com> <4522B81F.2010609@ee.byu.edu> <6ce0ac130610032100q61515368o326576d50feb478a@mail.gmail.com> Message-ID: <45233421.2070602@gmail.com> [Brian Granger:] >>> Another thing that might be important is to have a rolling publication >>> model like that of the physics arxiv or boost - rather than a >>> traditional model of having issues/volumes published at regular >>> intervals. [Travis Oliphant:] >> I'm not sure what the difference is between the arxiv "rolling" model >> and the traditional model modified only to the point that you publish >> each submission as it gets "accepted" as an issue. I haven't studied >> the arxiv model though. [Brian Granger:] > The main difference I was referring to is that in a rolling model, > each article is published on a web site immediately upon being > accepted/edited without waiting for the next "volume" of the journal > to be published. I guess the rolling model is more like a continuous > stream of article, whereas the traditional model (of print journals > for instance) is bunches of article packages as a monthly or > bi-monthly "volume" The arxiv is different in other ways, but I > didn't have those in mind. You two are talking about the same thing using different words. :-) Travis's "modified traditional" model is that of JStatSoft's: as soon as an article is accepted, it is immediately published on the web as it's own "issue". A "volume" is simply a fairly arbitrary number of consecutive "issues". 
In the "really traditional" model, maybe a dozen or so articles are published every n months in an "issue" and a "volume" is simply a year's worth of "issues". The "modified traditional" model is just as much a rolling model as the arxiv's, it just (ab)uses the traditional "volume/issue" terminology to make the citations look more acceptable. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From gregwillden at gmail.com Thu Oct 5 21:47:22 2006 From: gregwillden at gmail.com (Greg Willden) Date: Thu, 5 Oct 2006 20:47:22 -0500 Subject: [SciPy-dev] Flattop window and docstrings Message-ID: <903323ff0610051847j259dafa8naa64386e7507f1f5@mail.gmail.com> Hi all, Based on recommendations from the NumPy list I have submitted ticket #276 which adds support in signaltools.py for the flattop window. I also added "See also" sections to the docstrings for all window functions and a few others in this file. Regards, Greg -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Fri Oct 6 09:56:32 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 06 Oct 2006 15:56:32 +0200 Subject: [SciPy-dev] Trouble with sparse complex matrices Message-ID: <45266090.10409@iam.uni-stuttgart.de> Hi all, I have constructed a 2x2 lil_matrix and added some complex values to it. The imaginary is neglected for some reason. Am I missing something ? Nils -------------- next part -------------- A non-text attachment was scrubbed... 
Name: cs.py Type: text/x-python Size: 234 bytes Desc: not available URL: From david at ar.media.kyoto-u.ac.jp Fri Oct 6 10:36:05 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 06 Oct 2006 23:36:05 +0900 Subject: [SciPy-dev] PyEM, toolbox for Expectation Maximization for Gaussian Mixtures (proposal for inclusion into scipy.sandbox) Message-ID: <452669D5.1080807@ar.media.kyoto-u.ac.jp> Hi, A few months ago, I posted a preliminary version of PyEM, a numpy package for Expectation Maximization for Gaussian Mixture Models. As it was developed during the various change of core functions in numpy (change of axis convention in mean, sum, etc...), I stopped doing public releases. Now that this numpy API has settled, I propose the package for inclusion into the scipy sandbox. The package has already been used with success by several other people; I tried to have a coherent and easy to use API, and I have included some pretty plotting functions :). By including it to scipy, I hope also to get some feedback on usage, possible improvements, more testing, etc... * DOWNLOAD: The scipy version is available here: http://www.ar.media.kyoto-u.ac.jp/members/david/pyem/pyem-scipy-0.5.3.tar.gz * INSTALLATION INSTRUCTIONS: I don't know the best way to package a package so that it is included in scipy: to make it work, you just need to uncompress the archive, and move the directory to Lib/sandbox/pyem. An example script is included in the archive, example.py. Some preliminary tests are included (including the not-enabled by default ctype version, which requires a recent version of ctype). 
* EXAMPLE USAGE:

import numpy as N
from scipy.sandbox.pyem import GM, GMM, EM
import copy

# The example assumes 'mode' and 'nframes' are defined beforehand, e.g.:
mode = 'diag'        # covariance mode
nframes = 1000       # number of frames to sample

#++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# Create an artificial 2 dimension, 3 component GMM model, sample it
#++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
d = 2
k = 3
w, mu, va = GM.gen_param(d, k, 'diag', spread = 1.5)
# GM.fromvalues is a class function
gm = GM.fromvalues(w, mu, va)
# Sample nframes frames from the model
data = gm.sample(nframes)

#++++++++++++++++++++++++
# Learn the model with EM
#++++++++++++++++++++++++
# Init the model: here we create a mixture from its meta-parameters only
# (dimension, number of components) using the GM "ctor"
lgm = GM(d, k, mode)
# Create a model to be trained from a mixture, with kmean for initialization
gmm = GMM(lgm, 'kmean')
gmm.init(data)
# The actual EM, with likelihood computation. The threshold
# is compared to the (linearly approximated) derivative of the likelihood
em = EM()
like = em.train(data, gmm, maxiter = 30, thresh = 1e-8)
# "Trained" parameters are available through gmm.gm.w, gmm.gm.mu, gmm.gm.va

* PLOTTING EXAMPLES:

http://www.ar.media.kyoto-u.ac.jp/members/david/pyem/example_1_dimension_mode_diag.png
http://www.ar.media.kyoto-u.ac.jp/members/david/pyem/example_2_dimension_mode_diag.png

* FUTURE:

I use the package myself quite regularly, and intend to improve it in the near future:
- a script online_em.py for online EM for reinforcement learning is included, but not available by default as this is beta, the API is awkward, and it is not likely to work really well for now.
- inclusion of priors to avoid covariance shrinking toward 0.
- I started to code some core functions in C with ctypes (this can be enabled if you uncomment #import _c_densities as densities in the file gmm_em.py, and comment the line import densities).
- Ideally, I was hoping to start a project of numpy packages for Machine Learning (Kalman filtering, HMM, etc...); I don't know if other people would be interested in developing such a package. Cheers, David From fullung at gmail.com Fri Oct 6 11:09:04 2006 From: fullung at gmail.com (Albert Strasheim) Date: Fri, 6 Oct 2006 17:09:04 +0200 Subject: [SciPy-dev] PyEM, toolbox for Expectation Maximization for Gaussian Mixtures (proposal for inclusion into scipy.sandbox) In-Reply-To: <452669D5.1080807@ar.media.kyoto-u.ac.jp> Message-ID: Hello all > -----Original Message----- > From: scipy-dev-bounces at scipy.org [mailto:scipy-dev-bounces at scipy.org] On > Behalf Of David Cournapeau > Sent: 06 October 2006 16:36 > To: SciPy Developers List > Subject: [SciPy-dev] PyEM, toolbox for Expectation Maximization for > Gaussian Mixtures (proposal for inclusion into scipy.sandbox) > > Hi, > > A few months ago, I posted a preliminary version of PyEM, a numpy > package for Expectation Maximization for Gaussian Mixture Models. As it > was developed during the various change of core functions in numpy > (change of axis convention in mean, sum, etc...), I stopped doing public > releases. > Now that this numpy API has settled, I propose the package for > inclusion into the scipy sandbox. The package has already been used with > success by several other people; I tried to have a coherent and easy to > use API, and I have included some pretty plotting functions :). By > including it to scipy, I hope also to get some feedback on usage, > possible improvements, more testing, etc... +1 on inclusion in the SciPy sandbox. I'd love to be able to work on David's already excellent code. > > * FUTURE: > > > > - Ideally, I was hoping to start a project of numpy packages for > Machine Learning (Kalman filtering, HMM, etc...); I don't know if other > people would be interested in developing such a package. +1. 
The sandbox already has my libsvm wrapper, but it's a bit limited by libsvm's sparse data structure -- an SVM implementation using NumPy arrays is the next logical step. There's also PyKF[1], but it doesn't seem to be actively maintained. [1] http://pykf.sourceforge.net/ Cheers, Albert From david at ar.media.kyoto-u.ac.jp Fri Oct 6 11:32:13 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 07 Oct 2006 00:32:13 +0900 Subject: [SciPy-dev] PyEM, toolbox for Expectation Maximization for Gaussian Mixtures (proposal for inclusion into scipy.sandbox) In-Reply-To: References: Message-ID: <452676FD.8060403@ar.media.kyoto-u.ac.jp> Albert Strasheim wrote: > > +1. The sandbox already has my libsvm wrapper, but it's a bit limited by > libsvm's sparse data structure -- an SVM implementation using NumPy arrays > is the next logical step. There's also PyKF[1], but it doesn't seem to be > actively maintained. > I believe that pure python versions of algorithms are really important, at least for educational purposes. I myself implement a lot of algorithms to familiarize myself with them, and it is sometimes frustrating not to have simple reference implementations available for algorithms as basic as EM. I was hoping for more collaboration, too. For example, PyEM includes code to compute multivariate Gaussian densities and to evaluate confidence ellipsoids at custom levels. Also, two years ago, I developed a quite efficient and small C library for EM (handling full covariance cases if clapack is available), which I would be happy to include in PyEM. In particular, it can estimate Gaussian densities with any kind of covariance matrix, and it would be stupid for other packages to roll their own (I didn't try to include it in the stats package because that lacks most of the functions required, and I don't think it is that useful to implement all those functions).
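The shared building block David describes -- evaluating multivariate Gaussian densities with diagonal covariances -- is small enough to sketch in plain NumPy. This is an illustrative stand-in, not PyEM's actual code; the function name and interface are invented here:

```python
import numpy as np

def gauss_den_diag(x, mu, va):
    """Densities of N(mu, diag(va)) at the rows of x.

    Illustrative only (not PyEM's API):
      x  : (n, d) data points
      mu : (d,)   mean vector
      va : (d,)   diagonal of the covariance matrix
    """
    x = np.atleast_2d(x)
    d = x.shape[1]
    # Normalisation constant: (2*pi)^(-d/2) / sqrt(det(diag(va)))
    norm = (2 * np.pi) ** (-d / 2.0) / np.sqrt(np.prod(va))
    # Mahalanobis distance is a simple weighted sum for diagonal covariance
    quad = np.sum((x - mu) ** 2 / va, axis=1)
    return norm * np.exp(-0.5 * quad)
```

With per-component weights, the same routine gives the responsibilities needed by the E-step of EM, and it applies unchanged to Gaussian-output HMMs or Kalman-filter likelihood evaluation, which is the reuse David is arguing for.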
This could be then used for all algorithms requiring this function (Kalman filtering, Recursive EM, HMM with Gaussian output, etc...) > [1] http://pykf.sourceforge.net/ > I stopped looking at PYKF because the code is GPL (it also uses numeric, which I am not familiar with). We may ask the author to re-release the code under a scipy-friendly license, though (I did that for PyEM, as initially it was quite heavily inspired by a GPL matlab toolbox). I have code for basic Kalman filtering (for now, it has roughly the same features as the Kevin Murphy toolbox on Kalman filtering: http://www.cs.ubc.ca/~murphyk/Software/Kalman/kalman.html), but I am not sure I will expand it much further. David From dd55 at cornell.edu Fri Oct 6 12:18:26 2006 From: dd55 at cornell.edu (Darren Dale) Date: Fri, 6 Oct 2006 12:18:26 -0400 Subject: [SciPy-dev] umfpack-5.1 In-Reply-To: <45100DDC.302@ntc.zcu.cz> References: <200609191124.22568.dd55@cornell.edu> <45100DDC.302@ntc.zcu.cz> Message-ID: <200610061218.27062.dd55@cornell.edu> On Tuesday 19 September 2006 11:33, Robert Cimrman wrote: > Darren Dale wrote: > > After updating umfpack to the current version (umfpack-5.1), I am not > > able to build scipy. There is a long list of errors, such as > > ... > > Will scipy support new versions of umfpack? > > Sure. Unfortunately, I have some urgent things to do now, so stick, > please, with the version 5.0 (which works for me) for the moment if you > can. I think I made a mistake, the current version is 5.0.1, not 5.1. Furthermore, I built 5.0.1 using a gentoo ebuild posted at http://svn.cryos.net/projects/gentoo-sci-overlay, which had a bug that has now been fixed. I rebuilt umfpack-5.0.1 with the improved gentoo ebuild, and then I was able to build scipy again. Sorry for the noise. 
Darren From oliphant at ee.byu.edu Fri Oct 6 20:06:45 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 06 Oct 2006 18:06:45 -0600 Subject: [SciPy-dev] Giving write access to SciPy SVN Message-ID: <4526EF95.6030004@ee.byu.edu> Hi all, What is the attitude toward members on this list of growing write-access to SVN. It seems to me that SciPy will need a lot of people who have write access in order to keep up with the different domains of tools that SciPy has. Do we want to open up the SVN tree to a lot of developers or in concert with developing a peer-review and "publication" strategy for SciPy have "associate editors" in different problem domains who have write access to SVN and encourage others to contribute patches. I'd love to hear your opinions. -Travis From eric at enthought.com Fri Oct 6 23:36:13 2006 From: eric at enthought.com (eric jones) Date: Fri, 06 Oct 2006 22:36:13 -0500 Subject: [SciPy-dev] Giving write access to SciPy SVN In-Reply-To: <4526EF95.6030004@ee.byu.edu> References: <4526EF95.6030004@ee.byu.edu> Message-ID: <452720AD.6010008@enthought.com> I am for a very open approach to this. I think we should give people access once they have (1) submitted 2-3 patches, (2) proven themselves trustworthy and committed to the community by activity on the mailing lists, and (3) asked for write access. This approach ensures a maximum number of capable hands tending the garden, and our experience up to now has been that people are generally well behaved. If trust is ever abused, then we can re-evaluate the policy. My .02, eric Travis Oliphant wrote: >Hi all, > >What is the attitude toward members on this list of growing write-access >to SVN. It seems to me that SciPy will need a lot of people who have >write access in order to keep up with the different domains of tools >that SciPy has.
Do we want to open up the SVN tree to a lot of >developers or in concert with developing a peer-review and "publication" >strategy for SciPy have "associate editors" in different problem domains >who have write access to SVN and encourage others to contribute patches. > >I'd love to hear your opinions. > >-Travis > > >_______________________________________________ >Scipy-dev mailing list >Scipy-dev at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-dev > > > From nwagner at iam.uni-stuttgart.de Mon Oct 9 04:18:40 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 09 Oct 2006 10:18:40 +0200 Subject: [SciPy-dev] More trouble with complex sparse matrices Message-ID: <452A05E0.6050100@iam.uni-stuttgart.de> Hi all, The attached script yields (0, 0) (1+1j) (0, 1) (0.974820093029+0.566704683795j) (1, 0) (0.623780655719+0.906657280726j) (1, 1) (0.770817980571+0.526185760506j) Traceback (most recent call last): File "cs.py", line 7, in ? A = A.todense() File "/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", line 358, in todense return asmatrix(self.toarray()) File "/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", line 2647, in toarray d[i, j] = self.data[i][pos] TypeError: can't convert complex to float; use abs(z) Nils -------------- next part -------------- A non-text attachment was scrubbed... Name: cs.py Type: text/x-python Size: 179 bytes Desc: not available URL: From cimrman3 at ntc.zcu.cz Mon Oct 9 05:51:28 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 09 Oct 2006 11:51:28 +0200 Subject: [SciPy-dev] Trouble with sparse complex matrices In-Reply-To: <45266090.10409@iam.uni-stuttgart.de> References: <45266090.10409@iam.uni-stuttgart.de> Message-ID: <452A1BA0.7090906@ntc.zcu.cz> Nils Wagner wrote: > Hi all, > > I have constructed a 2x2 lil_matrix and added some complex values to it. > The imaginary is neglected for some reason. 
> > from scipy import * > A = sparse.lil_matrix((2,2)) > A[:,1] = random.rand(2)+1j*random.rand(2) > A[1,:] = random.rand(2)+1j*random.rand(2) > print A.real > print A.imag > print A > A = A.todense() > print 'The imiginary part of A is missing' > print A > In [1]:A = sp.lil_matrix((2,2)) In [2]:A Out[2]: <2x2 sparse matrix of type '' with 0 stored elements in LInked List format> In [3]:A[:,1] = rand(2)+1j*rand(2) In [4]:A[1,:] = rand(2)+1j*rand(2) In [8]:A Out[8]: <2x2 sparse matrix of type '' with 3 stored elements in LInked List format> -------------------- - so here the type of the matrix is still real! I think you have found another bug... In the meantime, you can use the code below: -------------------- In [15]:A = sp.lil_matrix((2,2), dtype = nm.complex128) In [16]:A Out[16]: <2x2 sparse matrix of type '' with 0 stored elements in LInked List format> In [17]:A[:,1] = rand(2)+1j*rand(2) In [18]:A[1,:] = rand(2)+1j*rand(2) In [19]:A Out[19]: <2x2 sparse matrix of type '' with 3 stored elements in LInked List format> In [20]:print A (0, 1) (0.958469152451+0.477931946516j) (1, 0) (0.945332109928+0.92267626524j) (1, 1) (0.734181404114+0.921519994736j) In [21]:A.real Out[21]: <2x2 sparse matrix of type '' with 3 stored elements (space for 3) in Compressed Sparse Column format> In [22]:print A.real (1, 0) 0.945332109928 (0, 1) 0.958469152451 (1, 1) 0.734181404114 In [23]:print A.imag (1, 0) 0.92267626524 (0, 1) 0.477931946516 (1, 1) 0.921519994736 In [24]:A = A.todense() In [25]:print A [[ 0.00000000e+00+0.j 9.58469152e-01+0.47793195j] [ 9.45332110e-01+0.92267627j 7.34181404e-01+0.92151999j]] From nwagner at iam.uni-stuttgart.de Mon Oct 9 07:00:24 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 09 Oct 2006 13:00:24 +0200 Subject: [SciPy-dev] Trouble with sparse complex matrices In-Reply-To: <452A1BA0.7090906@ntc.zcu.cz> References: <45266090.10409@iam.uni-stuttgart.de> <452A1BA0.7090906@ntc.zcu.cz> Message-ID: 
<452A2BC8.8050606@iam.uni-stuttgart.de> Robert Cimrman wrote: > Nils Wagner wrote: > >> Hi all, >> >> I have constructed a 2x2 lil_matrix and added some complex values to it. >> The imaginary is neglected for some reason. >> >> from scipy import * >> A = sparse.lil_matrix((2,2)) >> A[:,1] = random.rand(2)+1j*random.rand(2) >> A[1,:] = random.rand(2)+1j*random.rand(2) >> print A.real >> print A.imag >> print A >> A = A.todense() >> print 'The imiginary part of A is missing' >> print A >> >> > > In [1]:A = sp.lil_matrix((2,2)) > In [2]:A > Out[2]: > <2x2 sparse matrix of type '' > with 0 stored elements in LInked List format> > > In [3]:A[:,1] = rand(2)+1j*rand(2) > In [4]:A[1,:] = rand(2)+1j*rand(2) > In [8]:A > Out[8]: > <2x2 sparse matrix of type '' > with 3 stored elements in LInked List format> > > -------------------- > - so here the type of the matrix is still real! I think you have found > another bug... In the meantime, you can use the code below: > -------------------- > > In [15]:A = sp.lil_matrix((2,2), dtype = nm.complex128) > > In [16]:A > Out[16]: > <2x2 sparse matrix of type '' > with 0 stored elements in LInked List format> > > In [17]:A[:,1] = rand(2)+1j*rand(2) > > In [18]:A[1,:] = rand(2)+1j*rand(2) > > In [19]:A > Out[19]: > <2x2 sparse matrix of type '' > with 3 stored elements in LInked List format> > > In [20]:print A > (0, 1) (0.958469152451+0.477931946516j) > (1, 0) (0.945332109928+0.92267626524j) > (1, 1) (0.734181404114+0.921519994736j) > > In [21]:A.real > Out[21]: > <2x2 sparse matrix of type '' > with 3 stored elements (space for 3) > in Compressed Sparse Column format> > > In [22]:print A.real > (1, 0) 0.945332109928 > (0, 1) 0.958469152451 > (1, 1) 0.734181404114 > > In [23]:print A.imag > (1, 0) 0.92267626524 > (0, 1) 0.477931946516 > (1, 1) 0.921519994736 > > In [24]:A = A.todense() > > In [25]:print A > [[ 0.00000000e+00+0.j 9.58469152e-01+0.47793195j] > [ 9.45332110e-01+0.92267627j 7.34181404e-01+0.92151999j]] > 
_______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > Hi Robert, Thank you very much for your explanation and workaround! Cheers, Nils From jdhunter at ace.bsd.uchicago.edu Mon Oct 9 12:07:10 2006 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Mon, 09 Oct 2006 11:07:10 -0500 Subject: [SciPy-dev] Giving write access to SciPy SVN In-Reply-To: <452720AD.6010008@enthought.com> (eric jones's message of "Fri, 06 Oct 2006 22:36:13 -0500") References: <4526EF95.6030004@ee.byu.edu> <452720AD.6010008@enthought.com> Message-ID: <87slhxeamp.fsf@peds-pc311.bsd.uchicago.edu> >>>>> "eric" == eric jones writes: eric> I am for a very open approach to this. I think we should eric> give people access once they have (1) submitted 2-3 patches, eric> (2)proven themselves trust worthy and committed to the eric> community by activity on the mailing lists, and (3) asked eric> for write access. This approach ensures a maximum number of eric> capable hands tending the garden, and our experience up to eric> now has been that people are generally well behaved. If eric> trust is ever abused, then we can re-evaluate the policy. eric> My .02, eric This is the loose policy we've used for matplotlib, and we've had between 12 and 20 people with commit access in the last year or so. We ask people to note non-trivial changes to a CHANGELOG and post to API_CHANGES when there are API changes. This has worked fairly well and I haven't seen any abuse or rogue commits to date. I recently purged anyone who hasn't made a commit in the last year just to keep the list manageable.
JDH From hagberg at lanl.gov Tue Oct 10 00:12:32 2006 From: hagberg at lanl.gov (Aric Hagberg) Date: Mon, 9 Oct 2006 22:12:32 -0600 Subject: [SciPy-dev] ARPACK wrapper first cut Message-ID: <20061010041232.GH23011@t7.lanl.gov> I made a whack at a wrapper for ARPACK (FORTRAN code for finding a few eigenvalues and eigenvectors of a matrix). Short summary and longer code at: http://projects.scipy.org/scipy/scipy/ticket/231 This is my first attempt at using ARPACK, f2py, and fitting code into the scipy framework. I do know what an eigenvalue is at least. Hopefully this is a decent start and I can find some help here to finish the development and get it completely tested. The ARPACK code is fairly complicated and it will take some effort to get all the features implemented and working. But I do think it would be a valuable addition to scipy. Best Regards, Aric From karol.langner at kn.pl Tue Oct 10 03:24:57 2006 From: karol.langner at kn.pl (Karol Langner) Date: Tue, 10 Oct 2006 09:24:57 +0200 Subject: [SciPy-dev] scipy and lapack Message-ID: <200610100924.57888.karol.langner@kn.pl> I installed scipy revision 2242 with manually built atlas, lapack, umfpack, fftw, and dbjfft libraries. The install looks fine, and the tests only give errors that are labelled as "not real problems". The install and test output can be found here: http://www.mml.ch.pwr.wroc.pl/langner/tmp/scipy/ However, when I import from scipy I get an ImportError:

>>> from scipy import *
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/langner/apps/python/lib/python2.5/site-packages/scipy/cluster/__init__.py", line 9, in <module>
    import vq
  File "/home/langner/apps/python/lib/python2.5/site-packages/scipy/cluster/vq.py", line 21, in <module>
    from scipy.stats import std, mean
  File "/home/langner/apps/python/lib/python2.5/site-packages/scipy/stats/__init__.py", line 7, in <module>
    from stats import *
  File "/home/langner/apps/python/lib/python2.5/site-packages/scipy/stats/stats.py", line 190, in <module>
    import scipy.special as special
  File "/home/langner/apps/python/lib/python2.5/site-packages/scipy/special/__init__.py", line 10, in <module>
    import orthogonal
  File "/home/langner/apps/python/lib/python2.5/site-packages/scipy/special/orthogonal.py", line 65, in <module>
    from scipy.linalg import eig
  File "/home/langner/apps/python/lib/python2.5/site-packages/scipy/linalg/__init__.py", line 8, in <module>
    from basic import *
  File "/home/langner/apps/python/lib/python2.5/site-packages/scipy/linalg/basic.py", line 17, in <module>
    from lapack import get_lapack_funcs
  File "/home/langner/apps/python/lib/python2.5/site-packages/scipy/linalg/lapack.py", line 18, in <module>
    from scipy.linalg import clapack
ImportError: /home/langner/apps/python/lib/python2.5/site-packages/scipy/linalg/clapack.so: undefined symbol: clapack_sgesv

Any suggestions? Karol -- written by Karol Langner Tue Oct 10 09:05:30 CEST 2006 From jtravs at gmail.com Tue Oct 10 14:48:04 2006 From: jtravs at gmail.com (John Travers) Date: Tue, 10 Oct 2006 19:48:04 +0100 Subject: [SciPy-dev] new function regrid for fitpack2 Message-ID: <3a1077e70610101148n7c1731f5p404f25bde4177928@mail.gmail.com> Hi all, I've just submitted a ticket (#286) with patches to add support for the regrid function from dierckx (fitpack) to the scipy interface. regrid is used for fitting smoothing or interpolating splines to rectangular mesh data. It is more efficient and stable than using surfit (which is designed for scattered data). The patch makes use of the newstyle fitpack2 interface based on hand coded fitpack.pyf. The added class is RectBivariateSpline. A simple test case is included.
Can somebody tell me why the dierckx function surfit is currently used in interp2d? As far as I can tell from the documentation (and the fitpack author's book) surfit shouldn't be used for interpolation. It is designed for smoothing scattered data. I say this as the reason I implemented the regrid function is because interp2d wouldn't work for my data. regrid does (and *is* designed for interpolation or smoothing of rectangular data). Can I suggest that interp2d uses regrid instead (or do people need to interpolate scattered data, in which case a different library altogether would be required)? I'm willing to work out the required patches to make these changes if people think I should. Best regards, John Travers From icy.flame.gm at gmail.com Wed Oct 11 04:53:09 2006 From: icy.flame.gm at gmail.com (iCy-fLaME) Date: Wed, 11 Oct 2006 09:53:09 +0100 Subject: [SciPy-dev] Perhaps an addition to scipy.signal.wavelets? Message-ID: I've been messing around with the continuous wavelet transform lately, but don't seem to be able to find many resources in scipy about wavelets. The Morlet wavelet is a pretty standard one to use, so I coded this function to generate the wavelet; hope it can be of some use to others. It might require some modification, because I coded it as a standalone function.

##########
# Morlet #
##########
def morlet(M, w = 5.0, s = 1.0, cplt = True):
    """ Returns the complex Morlet wavelet with length of M

    Inputs:
        M      Length of the wavelet
        w      Omega0
        s      Scaling factor, windowed from -s*2*pi to +s*2*pi
        cplt   Use the complete or standard version

    The standard version:
        pi**-0.25 * exp(1j*w*x) * exp(-0.5*(x**2))

        Often referred to as simply the Morlet wavelet in many texts, and
        commonly used in practice. However, this simplified version can
        cause admissibility problems at low w.

    The complete version:
        pi**-0.25 * (exp(1j*w*x) - exp(-0.5*(w**2))) * exp(-0.5*(x**2))

        Complete version of the Morlet wavelet, with the correction term
        to improve admissibility. For w greater than 5, the correction
        term will be negligible.

    The energy of the returned wavelet is NOT normalised according to s.

    NB: The fundamental frequency of this wavelet in Hz is given by:
        f = 2*s*w*r / M
        where r is the sampling rate.
    """
    from scipy import linspace
    from scipy import pi
    from scipy import exp
    from scipy import zeros

    correction = exp(-0.5*(w**2))
    c1 = 1j*w
    s *= 2*pi
    xlist = linspace(-s, s, M)
    output = zeros(M).astype('F')    # 'F': single-precision complex

    if cplt == True:
        for i in range(M):
            x = xlist[i]
            output[i] = (exp(c1*x) - correction) * exp(-0.5*(x**2))
    else:
        for i in range(M):
            x = xlist[i]
            output[i] = exp(c1*x) * exp(-0.5*(x**2))

    output *= pi**-0.25
    return output

#################################################

-- iCy-fLaME The body may be wounded, but it is the mind that hurts. From stefan at sun.ac.za Wed Oct 11 17:09:24 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 11 Oct 2006 23:09:24 +0200 Subject: [SciPy-dev] new function regrid for fitpack2 In-Reply-To: <3a1077e70610101148n7c1731f5p404f25bde4177928@mail.gmail.com> References: <3a1077e70610101148n7c1731f5p404f25bde4177928@mail.gmail.com> Message-ID: <20061011210924.GK20657@mentat.za.net> Hi John On Tue, Oct 10, 2006 at 07:48:04PM +0100, John Travers wrote: > As far as I can tell from the documentation (and the fitpack author's > book) surfit shouldn't be used for interpolation. It is designed for > smoothing scattered data. I say this as the reason I implemented the > regrid function is because interp2d wouldn't work for my data. regrid > does (and *is* designed for interpolation or smoothing of rectangular > data). Can I suggest that interp2d uses regrid instead (or do people > need to interpolate scattered data, in which case a different library > alltogether would be required)? > > I'm willing to work out the required patches to make these changes if > people think I should. As you say, we should probably use the function purpose-written for a rectangular grid in interp2d.
However, I wouldn't like to lose the ability to smooth scattered data. What do you mean by "or do people need to interpolate scattered data, in which case a different library altogether would be required"? Isn't that functionality currently exposed via the surface fitting algorithm? Regards St?fan From karol.langner at kn.pl Wed Oct 11 17:31:08 2006 From: karol.langner at kn.pl (Karol Langner) Date: Wed, 11 Oct 2006 23:31:08 +0200 Subject: [SciPy-dev] scipy and lapack In-Reply-To: <200610100924.57888.karol.langner@kn.pl> References: <200610100924.57888.karol.langner@kn.pl> Message-ID: <200610112331.08362.karol.langner@kn.pl> Just for the record, I got to the bottom of this. The problem was that I was using the LAPACK library without the atlas C routines. After merging the atlas-lapack and lapack libraries everything works fine. Karol On Tuesday 10 of October 2006 09:24, Karol Langner wrote: > I installed scipy revision 2242 with manually built atlas, lapack, umfpack, > fftw, and dbjfft libraries. The install looks fine, and the tests only give > error that are labelled as "not real problems". > > The install and test output can be found here: > http://www.mml.ch.pwr.wroc.pl/langner/tmp/scipy/ > > However, when I import from scipy I get an ImportError: > >>> from scipy import * > > Traceback (most recent call last): > File "", line 1, in > File > "/home/langner/apps/python/lib/python2.5/site-packages/scipy/cluster/__init >__. py", line 9, in > import vq > File > "/home/langner/apps/python/lib/python2.5/site-packages/scipy/cluster/vq.py" >, l ine 21, in > from scipy.stats import std, mean > File > "/home/langner/apps/python/lib/python2.5/site-packages/scipy/stats/__init__ >.py ", line 7, in > from stats import * > File > "/home/langner/apps/python/lib/python2.5/site-packages/scipy/stats/stats.py >", line 190, in > import scipy.special as special > File > "/home/langner/apps/python/lib/python2.5/site-packages/scipy/special/__init >__. 
py", line 10, in > import orthogonal > File > "/home/langner/apps/python/lib/python2.5/site-packages/scipy/special/orthog >ona l.py", line 65, in > from scipy.linalg import eig > File > "/home/langner/apps/python/lib/python2.5/site-packages/scipy/linalg/__init_ >_.p y", line 8, in > from basic import * > File > "/home/langner/apps/python/lib/python2.5/site-packages/scipy/linalg/basic.p >y", line 17, in > from lapack import get_lapack_funcs > File > "/home/langner/apps/python/lib/python2.5/site-packages/scipy/linalg/lapack. >py" , line 18, in > from scipy.linalg import clapack > ImportError: > /home/langner/apps/python/lib/python2.5/site-packages/scipy/linalg/clapa > ck.so: undefined symbol: clapack_sgesv > > Any suggestions? > Karol -- written by Karol Langner Wed Oct 11 23:30:00 CEST 2006 From jtravs at gmail.com Wed Oct 11 17:59:37 2006 From: jtravs at gmail.com (John Travers) Date: Wed, 11 Oct 2006 22:59:37 +0100 Subject: [SciPy-dev] new function regrid for fitpack2 In-Reply-To: <20061011210924.GK20657@mentat.za.net> References: <3a1077e70610101148n7c1731f5p404f25bde4177928@mail.gmail.com> <20061011210924.GK20657@mentat.za.net> Message-ID: <3a1077e70610111459u39740c36t3929b79e2bf64af9@mail.gmail.com> Hi Stefan, thanks for replying. I've answered your questions below... On 11/10/06, Stefan van der Walt wrote: > Hi John > >> ... >> > > As you say, we should probably use the function purpose written for a > rectangular grid in interp2d. However, I wouldn't like to lose the > ability to smooth scattered data. Of course. But interp2d should be for interpolation only, and I think it is OK just to have it only for a rectangular grid. regrid is designed specifically for this (though it can also smooth data). The interface to surfit (which is bisplprep and SmoothBivariateSpline) will still be there for smoothing of scattered data, though it shouldn't be used for interpolation. 
> > What do you mean by "or do people need to interpolate scattered data, > in which case a different library altogether would be required"? > Isn't that functionality currently exposed via the surface fitting > algorithm? No, the surfit routine can only smooth scattered data, to interpolate through scattered data points would require a different library from dierckx fitpack. Something like the other netlib fitpack or natgrid would be suitable. Best regards, John From stefan at sun.ac.za Wed Oct 11 19:35:06 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu, 12 Oct 2006 01:35:06 +0200 Subject: [SciPy-dev] new function regrid for fitpack2 In-Reply-To: <3a1077e70610111459u39740c36t3929b79e2bf64af9@mail.gmail.com> References: <3a1077e70610101148n7c1731f5p404f25bde4177928@mail.gmail.com> <20061011210924.GK20657@mentat.za.net> <3a1077e70610111459u39740c36t3929b79e2bf64af9@mail.gmail.com> Message-ID: <20061011233506.GN20657@mentat.za.net> On Wed, Oct 11, 2006 at 10:59:37PM +0100, John Travers wrote: > Of course. But interp2d should be for interpolation only, and I think > it is OK just to have it only for a rectangular grid. regrid is > designed specifically for this (though it can also smooth data). The > interface to surfit (which is bisplprep and SmoothBivariateSpline) > will still be there for smoothing of scattered data, though it > shouldn't be used for interpolation. [...] > > What do you mean by "or do people need to interpolate scattered data, > > in which case a different library altogether would be required"? > > Isn't that functionality currently exposed via the surface fitting > > algorithm? > > No, the surfit routine can only smooth scattered data, to interpolate > through scattered data points would require a different library from > dierckx fitpack. Something like the other netlib fitpack or natgrid > would be suitable. OK, I understand what you meant now. I've applied your patch (btw, please keep line lengths < 79 chars). 
I still need to modify interp2d, which I will do on Friday, unless you submit a patch before then. Cheers Stéfan From wbaxter at gmail.com Wed Oct 11 22:12:34 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Thu, 12 Oct 2006 11:12:34 +0900 Subject: [SciPy-dev] ARPACK wrapper first cut In-Reply-To: <20061010041232.GH23011@t7.lanl.gov> References: <20061010041232.GH23011@t7.lanl.gov> Message-ID: Just wanted to say this is great news! I added my own pyf interface descriptions for two of the arpack functions to the tracker. May not be of much use to you, but thought I'd put it up there for comparison. I'm no f2py expert, though (nor am I an 'f' expert, or a 'py' expert for that matter :-) ) But I've been using the wrapper with my pyf for a while without issues. --bb On 10/10/06, Aric Hagberg wrote: > I made a whack at a wrapper for ARPACK (FORTRAN code for finding a > few eigenvalues and eigenvectors of a matrix). > Short summary and longer code at: > http://projects.scipy.org/scipy/scipy/ticket/231 > > This is my first attempt at using ARPACK, f2py, and fitting > code into the scipy framework. I do know what an eigenvalue is at least. > > Hopefully this is a decent start and I can find some help here to > finish the development and get it completely tested. The ARPACK code > is fairly complicated and it will take some effort to get all the > features implemented and working. But I do think it would be a > valuable addition to scipy.
> > Best Regards, > Aric > > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From hagberg at lanl.gov Wed Oct 11 23:19:45 2006 From: hagberg at lanl.gov (Aric Hagberg) Date: Wed, 11 Oct 2006 21:19:45 -0600 Subject: [SciPy-dev] ARPACK wrapper first cut In-Reply-To: References: <20061010041232.GH23011@t7.lanl.gov> Message-ID: <20061012031945.GI23011@t7.lanl.gov> I have uploaded another version of the wrappers that I think clears up the errors I was seeing with the original. http://projects.scipy.org/scipy/scipy/ticket/231 The package includes wrappers to the symmetric, nonsymmetric, real and complex eigensolvers (which have just a different enough interface to make life interesting). I generated the wrappers with f2py but I'm a novice there - they could benefit from some expert eyes. The sometimes incorrect answers in the nonsymmetric solver routines were apparently coming from a mis-compiled lahqr.f in my system lapack3 libraries. I included these files in the package - the rest of the lapack library seems OK for use with ARPACK. That kind of annoying thing turns out to take a while to find. Also I have included Python code in a similar style to linalg.iterative to call the wrapped functions. That code only implements the simple eigenvalue problem (A x = lambda x) but it should be possible to add code to solve the generalized problem (A x = lambda M x) with various inverse and shifted modes. Neilen has reported doing this already using linsolve.splu as the sparse linear system solver but I don't have code for that yet. Finally there are some tests and other scipyish bits. You should be able to unpack the code into Lib/sandbox and give it a try. This is a missing piece of the chart (eigs) at http://scipy.org/NumPy_for_Matlab_Users I don't know Matlab and I didn't try to make the Python interface match eigs.
But I'm pretty sure the one I designed isn't much better or different. Best Regards, Aric From david at ar.media.kyoto-u.ac.jp Thu Oct 12 02:01:34 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 12 Oct 2006 15:01:34 +0900 Subject: [SciPy-dev] Import a package under SVN into scipy SVN repository Message-ID: <452DDA3E.7020604@ar.media.kyoto-u.ac.jp> Hi, I am not sure this is the right place to ask, but I have a question about importing a new project into the svn sandbox. I have been working on adapting my package before its inclusion into the scipy svn repository. My package was developed using bazaar-NG for source version control. I would like to import the package with its whole history. The first step I did was to convert bzr meta-data to svn metadata: this took me a while, but I finally managed to do it. I can now checkout my package from a local subversion server, with the whole history. Now, I need to import the package to the scipy subversion repository, but I do not know how to do that (I am not really familiar with subversion; the only source control system I know is arch and relatives, like bazaar-NG). Cheers, David From fperez.net at gmail.com Thu Oct 12 03:41:03 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 12 Oct 2006 01:41:03 -0600 Subject: [SciPy-dev] [Numpy-discussion] Should numpy.sqrt(-1) return 1j rather than nan? In-Reply-To: <452DEAB4.3030506@ukzn.ac.za> References: <452D56F5.9000404@ee.byu.edu> <452DDC4B.1050304@noaa.gov> <452DE009.3070100@ieee.org> <452DEAB4.3030506@ukzn.ac.za> Message-ID: [ Moved over from the numpy list at Travis' request ] On 10/12/06, Scott Sinclair wrote: > As far as I can tell this is exactly what happens. Consider the issue > under discussion... 
> > ---------------------------------- > >>> import numpy as np > >>> np.sqrt(-1) > -1.#IND > >>> np.sqrt(-1+0j) > 1j > >>> a = complex(-1) > >>> np.sqrt(a) > 1j > >>> import scipy as sp > >>> sp.sqrt(-1) > -1.#IND > >>> np.sqrt(-1+0j) > 1j > >>> sp.sqrt(a) > 1j > >>> np.__version__ > '1.0rc1' > >>> sp.__version__ > '0.5.1' > >>> > > ---------------------------------- > > I'm sure that this hasn't changed in the development versions. Check again (I just rebuilt from SVN right now): In [1]: import numpy as n, scipy as s In [2]: n.sqrt(-1) Out[2]: nan In [3]: s.sqrt(-1) Out[3]: 1j In [4]: n.__version__,s.__version__ Out[4]: ('1.0.dev3315', '0.5.2.dev2264') Travis gave all the details on this particular issue, so I won't rehash them. Cheers, f From fullung at gmail.com Thu Oct 12 06:42:55 2006 From: fullung at gmail.com (Albert Strasheim) Date: Thu, 12 Oct 2006 12:42:55 +0200 Subject: [SciPy-dev] Import a package under SVN into scipy SVN repository In-Reply-To: <452DDA3E.7020604@ar.media.kyoto-u.ac.jp> Message-ID: Hey David You could try this svn2svn script: http://blog.yanime.org/articles/2006/05/15/svn2svn You might want to try it on another repo before letting it loose on the SciPy repository. Maybe you can modify svn2svn to just create the diffs for you and then merge them into SciPy by hand. Cheers, Albert > -----Original Message----- > From: scipy-dev-bounces at scipy.org [mailto:scipy-dev-bounces at scipy.org] On > Behalf Of David Cournapeau > Sent: 12 October 2006 08:02 > To: SciPy Developers List > Subject: [SciPy-dev] Import a package under SVN into scipy SVN repository > > Hi, > > I am not sure this is the right place to ask, but I have a question > about importing a new project into the svn sandbox. I have been working > on adapting my package before its inclusion into the scipy svn > repository. My package was developed using bazaar-NG for source version > control. I would like to import the package with its whole history. 
> The first step I did was to convert bzr meta-data to svn metadata: > this took me a while, but I finally managed to do it. I can now checkout > my package from a local subversion server, with the whole history. > Now, I need to import the package to the scipy subversion > repository, but I do not know how to do that (I am not really familiar > with subversion; the only source control system I know is arch and > relatives, like bazaar-NG). > > Cheers, > > David From david at ar.media.kyoto-u.ac.jp Thu Oct 12 08:25:36 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 12 Oct 2006 21:25:36 +0900 Subject: [SciPy-dev] Import a package under SVN into scipy SVN repository In-Reply-To: References: Message-ID: <452E3440.1070000@ar.media.kyoto-u.ac.jp> Albert Strasheim wrote: > Hey David > > You could try this svn2svn script: > > http://blog.yanime.org/articles/2006/05/15/svn2svn > > You might want to try it on another repo before letting it loose on the > SciPy repository. Maybe you can modify svn2svn to just create the diffs for > you and then merge them into SciPy by hand. > > Cheers, > > Albert > > This is exactly what I needed (I didn't have the idea to extract the diff from my rep to commit them one after the other to the scipy rep) ! I hope to finally commit my package tonight, thanks to you, David From jtravs at gmail.com Thu Oct 12 08:42:44 2006 From: jtravs at gmail.com (John Travers) Date: Thu, 12 Oct 2006 13:42:44 +0100 Subject: [SciPy-dev] new function regrid for fitpack2 In-Reply-To: <20061011233506.GN20657@mentat.za.net> References: <3a1077e70610101148n7c1731f5p404f25bde4177928@mail.gmail.com> <20061011210924.GK20657@mentat.za.net> <3a1077e70610111459u39740c36t3929b79e2bf64af9@mail.gmail.com> <20061011233506.GN20657@mentat.za.net> Message-ID: <3a1077e70610120542y73a1f4ebvea778ef0985d0933@mail.gmail.com> Thanks! Sorry for the long lines, I'll be stricter on the format next time. 
I can send a patch for interp2d this evening (London time) if you want. John On 12/10/06, Stefan van der Walt wrote: > On Wed, Oct 11, 2006 at 10:59:37PM +0100, John Travers wrote: > > Of course. But interp2d should be for interpolation only, and I think > > it is OK just to have it only for a rectangular grid. regrid is > > designed specifically for this (though it can also smooth data). The > > interface to surfit (which is bisplrep and SmoothBivariateSpline) > > will still be there for smoothing of scattered data, though it > > shouldn't be used for interpolation. > > [...] > > > > What do you mean by "or do people need to interpolate scattered data, > > > in which case a different library altogether would be required"? > > > Isn't that functionality currently exposed via the surface fitting > > > algorithm? > > > > No, the surfit routine can only smooth scattered data, to interpolate > > through scattered data points would require a different library from > > dierckx fitpack. Something like the other netlib fitpack or natgrid > > would be suitable. > > OK, I understand what you meant now. I've applied your patch (btw, > please keep line lengths < 79 chars). I still need to modify > interp2d, which I will do on Friday, unless you submit a patch before > then.
> > Cheers > Stéfan > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -- My mobile (which I rarely answer): (+44) (0) 7739 105209 If you want to see what I do: www.femto.ph.ic.ac.uk From stefan at sun.ac.za Thu Oct 12 08:47:22 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu, 12 Oct 2006 14:47:22 +0200 Subject: [SciPy-dev] new function regrid for fitpack2 In-Reply-To: <3a1077e70610120542y73a1f4ebvea778ef0985d0933@mail.gmail.com> References: <3a1077e70610101148n7c1731f5p404f25bde4177928@mail.gmail.com> <20061011210924.GK20657@mentat.za.net> <3a1077e70610111459u39740c36t3929b79e2bf64af9@mail.gmail.com> <20061011233506.GN20657@mentat.za.net> <3a1077e70610120542y73a1f4ebvea778ef0985d0933@mail.gmail.com> Message-ID: <20061012124722.GD32224@mentat.za.net> On Thu, Oct 12, 2006 at 01:42:44PM +0100, John Travers wrote: > Thanks! Sorry for the long lines, I'll be stricter on the format next > time. I can send a patch for interp2d this evening (London time) if > you want. That's great, I'll keep an eye out for it. If anyone on the list currently uses interp2d on irregularly gridded data, now is the time to speak up. Cheers Stéfan From nmb at wartburg.edu Thu Oct 12 10:55:22 2006 From: nmb at wartburg.edu (Neil Martinsen-Burrell) Date: Thu, 12 Oct 2006 09:55:22 -0500 Subject: [SciPy-dev] Segfault in scipy.stats Message-ID: <20061012145521.GA16736@quaggy.local> I get the following segmentation fault when trying to import scipy.stats. This is on MacOS 10.4.8, python 2.5, IBM XLF fortran compiler, SciPy 0.5.1, numpy 1.0rc2. Any ideas? This is a fresh source build using GCC 3.3. -Neil -- nmb at quaggy[~/Desktop]$ gdb --args `python -c "import sys;print sys.executable"` GNU gdb 6.3.50-20050815 (Apple version gdb-563) (Wed Jul 19 05:17:43 GMT 2006) Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details. This GDB was configured as "powerpc-apple-darwin"...Reading symbols for shared libraries .... done (gdb) r Starting program: /Library/Frameworks/Python.framework/Versions/2.5/Resources/Python.app/Contents/MacOS/Python Reading symbols for shared libraries .+ done Python 2.5 (r25:51908, Oct 5 2006, 08:33:45) [GCC 4.0.1 (Apple Computer, Inc. build 5363)] on darwin Type "help", "copyright", "credits" or "license" for more information. Reading symbols for shared libraries .... done >>> import scipy.stats Reading symbols for shared libraries . done Reading symbols for shared libraries . done Reading symbols for shared libraries . done Reading symbols for shared libraries . done Reading symbols for shared libraries . done Reading symbols for shared libraries ......... done Reading symbols for shared libraries . done Reading symbols for shared libraries . done Reading symbols for shared libraries . done Reading symbols for shared libraries . done Reading symbols for shared libraries . done Reading symbols for shared libraries . done Reading symbols for shared libraries . done Reading symbols for shared libraries . done Reading symbols for shared libraries . done Reading symbols for shared libraries . done Reading symbols for shared libraries . done Reading symbols for shared libraries . done Reading symbols for shared libraries ........................................................... done Reading symbols for shared libraries . done Reading symbols for shared libraries . done Reading symbols for shared libraries . done Reading symbols for shared libraries . done Reading symbols for shared libraries ... done Reading symbols for shared libraries . done Reading symbols for shared libraries . 
done Reading symbols for shared libraries . done Reading symbols for shared libraries . done Reading symbols for shared libraries . done Reading symbols for shared libraries . done Reading symbols for shared libraries . done Reading symbols for shared libraries . done Reading symbols for shared libraries . done Reading symbols for shared libraries . done Reading symbols for shared libraries . done Reading symbols for shared libraries . done Reading symbols for shared libraries . done Reading symbols for shared libraries . done Reading symbols for shared libraries . done Reading symbols for shared libraries . done Reading symbols for shared libraries . done Program received signal EXC_BAD_ACCESS, Could not access memory. Reason: KERN_INVALID_ADDRESS at address: 0x2f830000 0x2f830000 in ?? () (gdb) where #0 0x2f830000 in ?? () Cannot access memory at address 0x2f830000 Cannot access memory at address 0x2f830000 Cannot access memory at address 0x2f830000 Cannot access memory at address 0x2f830000 Cannot access memory at address 0x2f830000 Cannot access memory at address 0x2f830000 #1 0x02a41510 in PyFortranObject_New (defs=Cannot access memory at address 0x2f830000 Cannot access memory at address 0x2f830000 Cannot access memory at address 0x2f830000 Cannot access memory at address 0x2f830000 Cannot access memory at address 0x2f830000 Cannot access memory at address 0x2f830000 Cannot access memory at address 0x2f830000 0x0, init=Cannot access memory at address 0x2f830000 0) at build/src.macosx-10.3-ppc-2.5/fortranobject.c:60 Cannot access memory at address 0x2f830000 #2 0x02a408a8 in initmvn () at build/src.macosx-10.3-ppc-2.5/Lib/stats/mvnmodule.c:629 Cannot access memory at address 0x2f830000 #3 0x002d0454 in _PyImport_LoadDynamicModule (name=0xbfffc5ac "scipy.stats.mvn", pathname=0xbfffc0c0 "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/mvn.so", fp=0x0) at ./Python/importdl.c:53 #4 0x002ce538 in import_submodule 
(mod=0x136f830, subname=0xbfffc5b8 "mvn", fullname=0xbfffc5ac "scipy.stats.mvn") at Python/import.c:2383 #5 0x002ce7e0 in load_next (mod=0x136f830, altmod=Cannot access memory at address 0x2f830000 Cannot access memory at address 0x2f830000 0x32a2c8, p_name=0xbfffc5b8, buf=0xbfffc5ac "scipy.stats.mvn", p_buflen=0xbfffc5a8) at Python/import.c:2203 #6 0x002cef10 in import_module_level (name=0x0, globals=0x600150, locals=Cannot access memory at address 0x2f830000 0x2a49418, fromlist=0x32a2c8, level=-1) at Python/import.c:1984 #7 0x002cf2d8 in PyImport_ImportModuleLevel (name=0x29f6c74 "mvn", globals=0x29f4e40, locals=0x29f4e40, fromlist=0x32a2c8, level=-1) at Python/import.c:2055 #8 0x002a6d28 in builtin___import__ (self=Cannot access memory at address 0x2f830000 0x4e800020, args=Cannot access memory at address 0x2f830000 0x0, kwds=Cannot access memory at address 0x2f830000 0x2a49418) at Python/bltinmodule.c:47 #9 0x0020dee4 in PyObject_Call (func=Cannot access memory at address 0x2f830000 0x4e800020, arg=Cannot access memory at address 0x2f830000 0x0, kw=Cannot access memory at address 0x2f830000 0x2a49418) at Objects/abstract.c:1860 #10 0x002ac9a0 in PyEval_CallObjectWithKeywords (func=0xe4b8, arg=0x1766a80, kw=0x0) at Python/ceval.c:3435 #11 0x002b16fc in PyEval_EvalFrameEx (f=0x6b6ea0, throwflag=7040988) at Python/ceval.c:2065 #12 0x002b45a0 in PyEval_EvalCodeEx (co=0x176d848, globals=Cannot access memory at address 0x2f830000 0x0, locals=Cannot access memory at address 0x2f830000 0xb8, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2833 #13 0x002b4740 in PyEval_EvalCode (co=Cannot access memory at address 0x2f830000 0x4e800020, globals=Cannot access memory at address 0x2f830000 0x0, locals=Cannot access memory at address 0x2f830000 0x2a49418) at Python/ceval.c:494 #14 0x002ccf30 in PyImport_ExecCodeModuleEx (name=0xbfffd73c "scipy.stats.kde", co=0x176d848, pathname=0xbfffcdd8 
"/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/kde.pyc") at Python/import.c:658 #15 0x002cd460 in load_source_module (name=0xbfffd73c "scipy.stats.kde", pathname=0xbfffcdd8 "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/kde.pyc", fp=0x176d848) at Python/import.c:942 #16 0x002ce538 in import_submodule (mod=0x136f830, subname=0xbfffd748 "kde", fullname=0xbfffd73c "scipy.stats.kde") at Python/import.c:2383 #17 0x002ce7e0 in load_next (mod=0x136f830, altmod=0x32a2c8, p_name=0xbfffd748, buf=0xbfffd73c "scipy.stats.kde", p_buflen=0xbfffd738) at Python/import.c:2203 #18 0x002cef10 in import_module_level (name=0x0, globals=0x600150, locals=Cannot access memory at address 0x2f830000 0x2a49418, fromlist=0x1346290, level=-1) at Python/import.c:1984 #19 0x002cf2d8 in PyImport_ImportModuleLevel (name=0x135e214 "kde", globals=0x13758a0, locals=0x13758a0, fromlist=0x1346290, level=-1) at Python/import.c:2055 #20 0x002a6d28 in builtin___import__ (self=Cannot access memory at address 0x2f830000 0x4e800020, args=Cannot access memory at address 0x2f830000 0x0, kwds=Cannot access memory at address 0x2f830000 0x2a49418) at Python/bltinmodule.c:47 #21 0x0020dee4 in PyObject_Call (func=Cannot access memory at address 0x2f830000 0x4e800020, arg=Cannot access memory at address 0x2f830000 0x0, kw=Cannot access memory at address 0x2f830000 0x2a49418) at Objects/abstract.c:1860 #22 0x002ac9a0 in PyEval_CallObjectWithKeywords (func=0xe4b8, arg=0x57d20, kw=0x0) at Python/ceval.c:3435 #23 0x002b16fc in PyEval_EvalFrameEx (f=0x658790, throwflag=6654156) at Python/ceval.c:2065 #24 0x002b45a0 in PyEval_EvalCodeEx (co=0x1374218, globals=Cannot access memory at address 0x2f830000 0x0, locals=Cannot access memory at address 0x2f830000 0xb8, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2833 #25 0x002b4740 in PyEval_EvalCode (co=Cannot access memory at address 
0x2f830000 0x4e800020, globals=Cannot access memory at address 0x2f830000 0x0, locals=Cannot access memory at address 0x2f830000 0x2a49418) at Python/ceval.c:494 #26 0x002ccf30 in PyImport_ExecCodeModuleEx (name=0xbfffed3c "scipy.stats", co=0x1374218, pathname=0xbfffdf68 "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/__init__.pyc") at Python/import.c:658 #27 0x002cd460 in load_source_module (name=0xbfffed3c "scipy.stats", pathname=0xbfffdf68 "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/stats/__init__.pyc", fp=0x1374218) at Python/import.c:942 #28 0x002cdf08 in load_package (name=0xbfffed3c "scipy.stats", pathname=0x1344a70 "") at Python/import.c:998 #29 0x002ce538 in import_submodule (mod=0x63430, subname=0xbfffed42 "stats", fullname=0xbfffed3c "scipy.stats") at Python/import.c:2383 #30 0x002ce7e0 in load_next (mod=0x63430, altmod=0x63430, p_name=0xbfffed42, buf=0xbfffed3c "scipy.stats", p_buflen=0xbfffed38) at Python/import.c:2203 #31 0x002cef48 in import_module_level (name=0x0, globals=0x63430, locals=Cannot access memory at address 0x2f830000 0x2a49418, fromlist=0x32a2c8, level=406576) at Python/import.c:1991 #32 0x002cf2d8 in PyImport_ImportModuleLevel (name=0x62e6c "scipy.stats", globals=0x1fc90, locals=0x1fc90, fromlist=0x32a2c8, level=-1) at Python/import.c:2055 #33 0x002a6d28 in builtin___import__ (self=Cannot access memory at address 0x2f830000 0x4e800020, args=Cannot access memory at address 0x2f830000 0x0, kwds=Cannot access memory at address 0x2f830000 0x2a49418) at Python/bltinmodule.c:47 #34 0x0020dee4 in PyObject_Call (func=Cannot access memory at address 0x2f830000 0x4e800020, arg=Cannot access memory at address 0x2f830000 0x0, kw=Cannot access memory at address 0x2f830000 0x2a49418) at Objects/abstract.c:1860 #35 0x002ac9a0 in PyEval_CallObjectWithKeywords (func=0xe4b8, arg=0x1e480, kw=0x0) at Python/ceval.c:3435 #36 0x002b16fc in PyEval_EvalFrameEx (f=0x60ce10, 
throwflag=6344524) at Python/ceval.c:2065 #37 0x002b45a0 in PyEval_EvalCodeEx (co=0x43f08, globals=Cannot access memory at address 0x2f830000 0x0, locals=Cannot access memory at address 0x2f830000 0xb8, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2833 #38 0x002b4740 in PyEval_EvalCode (co=Cannot access memory at address 0x2f830000 0x4e800020, globals=Cannot access memory at address 0x2f830000 0x0, locals=Cannot access memory at address 0x2f830000 0x2a49418) at Python/ceval.c:494 #39 0x002d9ef4 in PyRun_InteractiveOneFlags (fp=0x0, filename=0x317d40 "", flags=0xbffff668) at Python/pythonrun.c:1264 #40 0x002da100 in PyRun_InteractiveLoopFlags (fp=0xa0001b9c, filename=0x317d40 "", flags=0xbffff668) at Python/pythonrun.c:714 #41 0x002da7b0 in PyRun_AnyFileExFlags (fp=0xa0001b9c, filename=0x317d40 "", closeit=0, flags=0xbffff668) at Python/pythonrun.c:683 #42 0x002e9e2c in Py_Main (argc=1, argv=0xbffff7ec) at Modules/main.c:496 #43 0x000019bc in ?? () #44 0x000016c0 in ?? () (gdb) -- Neil Martinsen-Burrell nmb at wartburg.edu From jtravs at gmail.com Thu Oct 12 12:35:36 2006 From: jtravs at gmail.com (John Travers) Date: Thu, 12 Oct 2006 17:35:36 +0100 Subject: [SciPy-dev] new function regrid for fitpack2 In-Reply-To: <20061012124722.GD32224@mentat.za.net> References: <3a1077e70610101148n7c1731f5p404f25bde4177928@mail.gmail.com> <20061011210924.GK20657@mentat.za.net> <3a1077e70610111459u39740c36t3929b79e2bf64af9@mail.gmail.com> <20061011233506.GN20657@mentat.za.net> <3a1077e70610120542y73a1f4ebvea778ef0985d0933@mail.gmail.com> <20061012124722.GD32224@mentat.za.net> Message-ID: <3a1077e70610120935u78630c6cy4550f74f21f493c9@mail.gmail.com> I started to look at this and think some extra work is required. The problem is that regrid requires the data to be evenly spaced on a rectangular grid, rather than just a rectangular grid (which could have, say, increasing spacing in the y direction). 
I think this may limit its use more than users of interp2d would expect. The problem then is: 1. The currently used function (dierckx surfit) is not suitable for interpolation and needs to be replaced (it segfaults my simple test). 2. regrid works very well (fast, low mem etc.) for regular rectangular grids, but can't handle varying rectangular grids, let alone scattered data. 3. users probably expect that at least varying rectangular grids are handled Solutions: a) just use regrid and say we can't work with non-regular data. b) use a regridding method to get regular data if required, then use regrid I favour the second. As it happens, this is what MATLAB does which I guess is a reasonable example to follow. There are several regridding methods. MATLAB uses the freely available (I think scipy compatible) Qhull code. So I'll have a look at packaging the qhull code into scipy, then we can move on from there. In the meantime, do we use regrid over the restricted domain, or use the current surfit (which only works in very few cases). Regards, John On 12/10/06, Stefan van der Walt wrote: > On Thu, Oct 12, 2006 at 01:42:44PM +0100, John Travers wrote: > > Thanks! Sorry for the long lines, I'll be stricter on the format next > > time. I can send a patch for interp2d this evening (London time) if > > you want. > That's great, I'll keep an eye out for it. If anyone on the list > currently uses interp2d on irregularly gridded data, now is the time > to speak up.
> > Cheers > Stéfan > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -- My mobile (which I rarely answer): (+44) (0) 7739 105209 If you want to see what I do: www.femto.ph.ic.ac.uk From jtravs at gmail.com Thu Oct 12 13:32:24 2006 From: jtravs at gmail.com (John Travers) Date: Thu, 12 Oct 2006 18:32:24 +0100 Subject: [SciPy-dev] rules for api and doc changes Message-ID: <3a1077e70610121032v13c1a3b5n1af690717ab8abdc@mail.gmail.com> Hi all, I've been following the scipy/numpy lists for about half a year and have submitted 4 patches for various things - and I plan on sending in a lot more. I have a couple of questions about rules for submission that I'd like to ask for comments from everyone. I hope this turns into a useful discussion! 1. What are the rules for api changes in scipy? (I know numpy is close to 1.0 release, so I guess none are allowed there). For example, I've been looking at the interpolate module and it seems to me that some renaming of classes in fitpack2 would be very useful to make things logical/intuitive (esp with expansion), but any changes will break code. Do we leave everything as is for backwards compatibility or can I submit patches that change something which seems reasonable? (ending after, say, scipy 1.0) The key change being that most of scipy.interpolate is actually for fitting smoothing splines rather than interpolation (which took me a while to understand) - surely it would be logical to have say scipy.spline (with all the spline related stuff) and then scipy.interpolate with just the interpolation classes (which could ref the spline module). 2. What about changing docstrings when you are not the original author? 3. When is it OK to include the full source of a package rather than depend on a library? I'm thinking here with respect to an earlier post I made about regridding data.
There is a package called Qhull which does this and is used by MATLAB etc. which has a scipy-compatible license (I think). The chances are that very few users will have it, and it is not too big, so could it be included? Or is this frowned upon? I have some more questions, but I guess this is enough for starters. Any comments would be most appreciated. Best regards, John Travers From jtravs at gmail.com Thu Oct 12 14:08:49 2006 From: jtravs at gmail.com (John Travers) Date: Thu, 12 Oct 2006 19:08:49 +0100 Subject: [SciPy-dev] interp2d and sandbox/delaunay Message-ID: <3a1077e70610121108l5c996761ocaa7a4d13225df26@mail.gmail.com> Hi all, As you may have guessed from my noise (tell me to shut up if I'm irritating), I'm taking a serious look at the state of scipy.interpolate. As previously stated, I think it needs to be reorganised/split into several parts, but I'll leave that to other threads. In this post I'm wondering about the status of interp2d. At the moment it is based on netlib->dierckx->surfit. This is bad as this routine should not be used for interpolation. I think others have had problems with this code (reading about a year ago on scipy-user) - it segfaults mine. I have recently implemented netlib->dierckx->regrid, which interpolates nicely for regular rectangular grids -> this could be used in interp2d, except that it doesn't work on scattered data (is this required? I think so yes). The solution would be to call regrid for regular data and for scattered data. The best appears to be delaunay triangulation followed by natural neighbour interpolation or regrid. (this is what MATLAB does). So my questions are: 1. can I fiddle with interp2d and the other scipy.interpolate stuff? 2. what is the status of the sandbox/delaunay stuff (why is it not in the main scipy) if there is a problem with it, can I package a different package that does this?
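The regular-versus-scattered dispatch being proposed here is, in hindsight, roughly what scipy grew: a rectangular-grid spline on one side and Qhull-backed Delaunay interpolation on the other. A hypothetical sketch follows; `interp2d_dispatch`, the grid test, and the use of later-scipy names (RectBivariateSpline, griddata) are all illustrative assumptions, not existing API from this thread, and the scattered branch uses plain linear interpolation rather than natural neighbour.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline, griddata

def interp2d_dispatch(x, y, z, xi, yi):
    """Hypothetical dispatcher: gridded data goes to a regrid-style
    spline, scattered data to Delaunay-based linear interpolation."""
    x, y, z = np.asarray(x), np.asarray(y), np.asarray(z)
    if z.ndim == 2 and z.shape == (x.size, y.size):
        # z is sampled on the product of x and y: a rectangular grid.
        spline = RectBivariateSpline(x, y, z, s=0)
        return float(spline(xi, yi)[0, 0])
    # Scattered (x, y, z) triples: triangulate (via Qhull) and
    # interpolate linearly inside the convex hull of the points.
    vals = griddata((x, y), z, (np.atleast_1d(xi), np.atleast_1d(yi)),
                    method="linear")
    return float(vals[0])
```

For scattered input inside the hull of the sample points, the Delaunay branch reproduces any linear surface exactly; the gridded branch behaves like the regrid wrapper discussed above.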
Regards, John From jtravs at gmail.com Thu Oct 12 15:15:46 2006 From: jtravs at gmail.com (John Travers) Date: Thu, 12 Oct 2006 20:15:46 +0100 Subject: [SciPy-dev] new function regrid for fitpack2 In-Reply-To: <3a1077e70610120935u78630c6cy4550f74f21f493c9@mail.gmail.com> References: <3a1077e70610101148n7c1731f5p404f25bde4177928@mail.gmail.com> <20061011210924.GK20657@mentat.za.net> <3a1077e70610111459u39740c36t3929b79e2bf64af9@mail.gmail.com> <20061011233506.GN20657@mentat.za.net> <3a1077e70610120542y73a1f4ebvea778ef0985d0933@mail.gmail.com> <20061012124722.GD32224@mentat.za.net> <3a1077e70610120935u78630c6cy4550f74f21f493c9@mail.gmail.com> Message-ID: <3a1077e70610121215v5161ba80q35e33c9fb71f195e@mail.gmail.com> I've added a patch to ticket #286 that makes interp2d use regrid. I think this must be the best compromise for now until we sort out a suitable routine for scattered data interpolation. Regards, John On 12/10/06, John Travers wrote: > I started to look at this and think some extra work is required. > > The problem is that regrid requires the data to be evenly spaced on a > rectangular grid, rather than just a rectangular grid (which could > have, say, increasing spacing in the y direction). I think this may > limit its use more than users of interp2d would expect. > > The problem then is: > > 1. The currently used function (dierckx surfit) is not suitable for > interpolation and needs to be replaced (it segfaults my simple test). > > 2. regrid works very well (fast, low mem etc.) for regular rectangular > grids, but can't handle varying rectangular grids, let alone scattered > data. > > 3. users probably expect that at least varying rectangular grids are handled > > Solutions: > > a) just use regrid and say we can't work with non-regular data. > > b) use a regridding method to get regular data if required, then use regrid > > I favour the second. As it happens, this is what MATLAB does which I > guess is a reasonable example to follow.
There are several regridding > methods. MATLAB uses the freely available (I think scipy compatible) > Qhull code. > > So I'll have a look at packaging the qhull code into scipy, then we > can move on from there. > > In the meantime, do we use regrid over the restricted domain, or use > the current surfit (which only works in very few cases)? > > Regards, > John > > > On 12/10/06, Stefan van der Walt wrote: > > On Thu, Oct 12, 2006 at 01:42:44PM +0100, John Travers wrote: > > > Thanks! Sorry for the long lines, I'll be stricter on the format next > > > time. I can send a patch for interp2d this evening (London time) if > > > you want. > > > > That's great, I'll keep an eye out for it. If anyone on the list > > currently uses interp2d on irregularly gridded data, now is the time > > to speak up. > > > > Cheers > > Stéfan > > _______________________________________________ > > Scipy-dev mailing list > > Scipy-dev at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > > > -- > My mobile (which I rarely answer): (+44) (0) 7739 105209 > If you want to see what I do: www.femto.ph.ic.ac.uk > -- My mobile (which I rarely answer): (+44) (0) 7739 105209 If you want to see what I do: www.femto.ph.ic.ac.uk From oliphant at ee.byu.edu Thu Oct 12 15:33:27 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 12 Oct 2006 13:33:27 -0600 Subject: [SciPy-dev] interp2d and sandbox/delaunay In-Reply-To: <3a1077e70610121108l5c996761ocaa7a4d13225df26@mail.gmail.com> References: <3a1077e70610121108l5c996761ocaa7a4d13225df26@mail.gmail.com> Message-ID: <452E9887.7010104@ee.byu.edu> John Travers wrote: >Hi all, > >As you may have guessed from my noise (tell me to shut up if I'm >irritating), I'm taking a serious look at the state of >scipy.interpolate > > We are all very glad you are looking at it. In my mind, scipy.interpolate is very weak and needs help.
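The two-path design discussed in this thread — Dierckx's regrid for rectangular grids, Delaunay triangulation for scattered points — can be sketched with present-day scipy. This is a hedged illustration only: RectBivariateSpline is the later wrapper around the regrid routine, and LinearNDInterpolator triangulates with Qhull but performs barycentric (not natural-neighbour) interpolation.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline, LinearNDInterpolator

def f(x, y):
    return np.cos(x) * np.sin(y)

# Path 1: regular rectangular grid -> Dierckx regrid (RectBivariateSpline).
x = np.linspace(-1.5, 1.5, 11)
y = np.linspace(-1.5, 1.5, 11)
z = f(x[:, None], y[None, :])
spline = RectBivariateSpline(x, y, z, s=0)   # s=0: pure interpolation
grid_err = np.abs(spline(x, y) - z).max()    # reproduces the data exactly

# Path 2: scattered points -> Delaunay triangulation (Qhull) + linear
# interpolation inside each triangle.
rng = np.random.default_rng(0)
pts = rng.uniform(-1.5, 1.5, size=(500, 2))
interp = LinearNDInterpolator(pts, f(pts[:, 0], pts[:, 1]))
scat_err = abs(float(interp(0.2, 0.3)) - f(0.2, 0.3))

print(grid_err, scat_err)
```

The grid path is exact at the data points, while the scattered path is only piecewise-linear accurate, which is why the thread treats the two cases separately.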
>As previously stated, I think it needs to be reorganised/split into >several parts, but I'll leave that to other threads. > >In this post I'm wondering about the status of interp2d. > > interp2d is not at all what it should be. >At the moment it is based on netlib->dierckx->surfit. This is bad as >this routine should not be used for interpolation. I think others have >had problems with this code (reading about a year ago on scipy-user) - >it segfaults mine. > > Please change it from using surfit. It shouldn't. >I have recently implemented netlib->dierckx->regrid, which >interpolates nicely for regular rectangular grids -> this could be >used in interp2d, except that it doesn't work on scattered data (is >this required? I think so, yes). > > Yes, interp2d should work on scattered data, I think. >The solution would be to call regrid for regular data and > a separate method for scattered data. > >The best appears to be delaunay triangulation >followed by natural neighbour interpolation or regrid (this is what >MATLAB does). > >So my questions are: > >1. can I fiddle with interp2d and the other scipy.interpolate stuff? >2. what is the status of the sandbox/delaunay stuff (why is it not in >the main scipy)? If there is a problem with it, can I package a >different library that does this? > > It's just the review process that has kept it out, I think. Ask Robert (if he's not already preparing his response). -Travis From oliphant at ee.byu.edu Thu Oct 12 15:35:50 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 12 Oct 2006 13:35:50 -0600 Subject: [SciPy-dev] [Numpy-discussion] Should numpy.sqrt(-1) return 1j rather than nan? In-Reply-To: References: <452D56F5.9000404@ee.byu.edu> <452DDC4B.1050304@noaa.gov> <452DE009.3070100@ieee.org> <452DEAB4.3030506@ukzn.ac.za> Message-ID: <452E9916.2000805@ee.byu.edu> Fernando Perez wrote: >[ Moved over from the numpy list at Travis' request ] > > > Thank you.
Now, just because I checked in a change to scipy doesn't mean it has to stay that way. Perhaps the best thing to do is to rename the fancy functions and accept backwards incompatibility with new scipy. Some will be unhappy as it will require code changes, but it's the reason that scipy has been pre 1.0 for so long (we needed to fix up the basic array situation first). -Travis From stefan at sun.ac.za Thu Oct 12 17:41:14 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu, 12 Oct 2006 23:41:14 +0200 Subject: [SciPy-dev] new function regrid for fitpack2 In-Reply-To: <3a1077e70610120935u78630c6cy4550f74f21f493c9@mail.gmail.com> References: <3a1077e70610101148n7c1731f5p404f25bde4177928@mail.gmail.com> <20061011210924.GK20657@mentat.za.net> <3a1077e70610111459u39740c36t3929b79e2bf64af9@mail.gmail.com> <20061011233506.GN20657@mentat.za.net> <3a1077e70610120542y73a1f4ebvea778ef0985d0933@mail.gmail.com> <20061012124722.GD32224@mentat.za.net> <3a1077e70610120935u78630c6cy4550f74f21f493c9@mail.gmail.com> Message-ID: <20061012214114.GJ32224@mentat.za.net> On Thu, Oct 12, 2006 at 05:35:36PM +0100, John Travers wrote: > 1. The currently used function (dierckx surfit) is not suitable for > interpolation and needs to be replaced (it segfaults my simple > test). Please file a ticket -- scipy should never segfault.
Thanks Stéfan From stefan at sun.ac.za Thu Oct 12 18:48:15 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Fri, 13 Oct 2006 00:48:15 +0200 Subject: [SciPy-dev] new function regrid for fitpack2 In-Reply-To: <3a1077e70610121215v5161ba80q35e33c9fb71f195e@mail.gmail.com> References: <3a1077e70610101148n7c1731f5p404f25bde4177928@mail.gmail.com> <20061011210924.GK20657@mentat.za.net> <3a1077e70610111459u39740c36t3929b79e2bf64af9@mail.gmail.com> <20061011233506.GN20657@mentat.za.net> <3a1077e70610120542y73a1f4ebvea778ef0985d0933@mail.gmail.com> <20061012124722.GD32224@mentat.za.net> <3a1077e70610120935u78630c6cy4550f74f21f493c9@mail.gmail.com> <3a1077e70610121215v5161ba80q35e33c9fb71f195e@mail.gmail.com> Message-ID: <20061012224815.GK32224@mentat.za.net> On Thu, Oct 12, 2006 at 08:15:46PM +0100, John Travers wrote: > I've added a patch to ticket #286 that makes interp2d use regrid. > I think this must be the best compromise for now until we sort out a > suitable routine for scattered data interpolation. I have a couple of concerns: 1. I recently made some changes to the interp2d interface in order to make the coordinate specification more consistent. Your patch simply reverts to the older behaviour. I don't like this because it assumes that x values increase with rows and y values with columns, even though this isn't mentioned anywhere in the docstrings. I'm not suggesting that the alternative is better, but it does make fewer assumptions -- it simply flattens the x and y inputs and passes them to the fitpack routine (unless x*y != z.size, in which case it makes up a meshgrid). 2. The docstrings were updated to be fairly descriptive, whereas after patching we'd be back to short, non-descriptive documentation. I'm also not happy about the way the output shape is determined (but this is a shortcoming of the current version, not a problem with your patch). Again, do x values increase along rows or columns?
One option which overcomes this problem is for the output to be the same as the argument to __call__, i.e. x([[0,1,2],[3,4,5],[6,7,8]],[[0,1,2],[3,4,5],[6,7,8]]) returns a (3,3) array of values, whereas x([0,1,2],[0,1,2]) returns a (3,) array of values, unlike the current (3,3). Unfortunately, this doesn't allow the usage of ogrid. Ultimately, it doesn't really matter as long as we choose an axis system and document it clearly. I'd also like to see some tests to verify the behaviour of the new interp2d that is *different* from the old version. It's easier to add it now, while everything is still fresh in our memories. We can probably sort out these issues with minimal effort. Thanks a lot for working on interp2d -- it has been long overdue for an overhaul. Cheers Stéfan From jtravs at gmail.com Thu Oct 12 19:48:56 2006 From: jtravs at gmail.com (John Travers) Date: Fri, 13 Oct 2006 00:48:56 +0100 Subject: [SciPy-dev] new function regrid for fitpack2 In-Reply-To: <20061012224815.GK32224@mentat.za.net> References: <3a1077e70610101148n7c1731f5p404f25bde4177928@mail.gmail.com> <20061011210924.GK20657@mentat.za.net> <3a1077e70610111459u39740c36t3929b79e2bf64af9@mail.gmail.com> <20061011233506.GN20657@mentat.za.net> <3a1077e70610120542y73a1f4ebvea778ef0985d0933@mail.gmail.com> <20061012124722.GD32224@mentat.za.net> <3a1077e70610120935u78630c6cy4550f74f21f493c9@mail.gmail.com> <3a1077e70610121215v5161ba80q35e33c9fb71f195e@mail.gmail.com> <20061012224815.GK32224@mentat.za.net> Message-ID: <3a1077e70610121648v39fdefcew6a989fd81d8ba73c@mail.gmail.com> Hi Stefan, First, about the segfaults - I can submit a ticket, but it basically stems from surfit not liking interpolation (even the fortran-only versions segfault). Using regrid plus a method for scattered data should deal with this problem. In my enthusiasm I started two threads. In the newer thread I mention that delaunay triangulation can be used for interpolating scattered data.
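The shape question raised here — all-combinations output versus paired-points output from __call__ — can be seen directly in the fitpack2 wrapper (a sketch assuming a present-day scipy, where __call__ later grew a grid= keyword; the old interp2d always used grid semantics):

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

x = np.linspace(0.0, 1.0, 5)
y = np.linspace(0.0, 1.0, 5)
z = np.add.outer(x, y)                       # z[i, j] = x[i] + y[j]
spline = RectBivariateSpline(x, y, z, kx=1, ky=1)

xq = np.array([0.1, 0.5, 0.9])
yq = np.array([0.2, 0.4, 0.8])

# Grid semantics (the old interp2d behaviour): all combinations -> (3, 3).
zg = spline(xq, yq)

# Pointwise semantics (paired inputs, as in Stefan's sketch): (3,).
zp = spline(xq, yq, grid=False)

print(zg.shape, zp.shape)   # (3, 3) (3,)
```

Since z is linear in x and y, the pointwise values come back as exactly xq + yq, which makes the two conventions easy to tell apart.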
So I'm thinking of combining the delaunay package from the sandbox with regrid from fitpack to make a general interp2d function which should work well in most cases. On 12/10/06, Stefan van der Walt wrote: [...] > I have a couple of concerns: > > 1. I recently made some changes to the interp2d interface in order to > make the coordinate specification more consistent. Your patch > simply reverts to the older behaviour. I don't like this because > it assumes that x values increase with rows and y values with > columns, even though this isn't mentioned anywhere in the > docstrings. I'm not suggesting that the alternative is better, but > it does make fewer assumptions -- it simply flattens the x and y > inputs and passes them to the fitpack routine (unless x*y != z.size, > in which case it makes up a meshgrid). The reason I changed this is because 'regrid' can only take 1d x,y arrays which have the same number of points as the respective axis of z. Flattening doesn't work here. It probably isn't hard for interp2d to take either coordinate arrays, mgrid matrices, meshgrid matrices or ogrid arrays and for us to detect which axis they are varying in and then get things right for regrid. But as per my note above, when we get scattered data working too we can deal with the input issues better. In the meantime I think it is better to have just regrid working. > > 2. The docstrings were updated to be fairly descriptive, whereas after > patching we'd be back to short, non-descriptive documentation. > I tried to only remove things that were no longer relevant (like the input array format). But if we change as per above, we should reinstate your docs. (I don't mean to tread on toes...). I can make clearer docstrings for the temporary regrid-only interp2d first. > I'm also not happy about the way the output shape is determined (but > this is a shortcoming of the current version, not a problem with your > patch). Again, do x values increase along rows or columns?
One > option which overcomes this problem is for the output to be the same > as the argument to __call__, i.e. > > x([[0,1,2],[3,4,5],[6,7,8]],[[0,1,2,],[3,4,5],[6,7,8]]) > > returns a (3,3) array of values, whereas > > x([0,1,2],[0,1,2]) > > returns a (3,) array of values, unlike the current (3,3). > Unfortunately, this doesn't allow the usage of ogrid. Ultimately, it > doesn't really matter as long as we choose an axis system and document > it clearly. Agreed. I always use columns = x. But we should be more flexible. > > I'd also like to see some tests to verify the behaviour of the new > interp2d that is *different* from the old version. It's easier to add > it now, while everything is still fresh in our memories. Will try and do this tomorrow. > We can probably sort out these issues with minimal effort. Thanks a > lot for working on interp2d -- it has been long overdue for an > overhaul. Thanks. John From stefan at sun.ac.za Thu Oct 12 20:38:56 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Fri, 13 Oct 2006 02:38:56 +0200 Subject: [SciPy-dev] new function regrid for fitpack2 In-Reply-To: <3a1077e70610121648v39fdefcew6a989fd81d8ba73c@mail.gmail.com> References: <3a1077e70610101148n7c1731f5p404f25bde4177928@mail.gmail.com> <20061011210924.GK20657@mentat.za.net> <3a1077e70610111459u39740c36t3929b79e2bf64af9@mail.gmail.com> <20061011233506.GN20657@mentat.za.net> <3a1077e70610120542y73a1f4ebvea778ef0985d0933@mail.gmail.com> <20061012124722.GD32224@mentat.za.net> <3a1077e70610120935u78630c6cy4550f74f21f493c9@mail.gmail.com> <3a1077e70610121215v5161ba80q35e33c9fb71f195e@mail.gmail.com> <20061012224815.GK32224@mentat.za.net> <3a1077e70610121648v39fdefcew6a989fd81d8ba73c@mail.gmail.com> Message-ID: <20061013003856.GN32224@mentat.za.net> Hi John On Fri, Oct 13, 2006 at 12:48:56AM +0100, John Travers wrote: > > 1. I recently made some changes to the interp2d interface in order to > > make the coordinate specification more consistent. 
Your patch > > simply reverts to the older behaviour. I don't like this because > > it assumes that x values increase with rows and y values with > > columns, even though this isn't mentioned anywhere in the > > docstrings. I'm not suggesting that the alternative is better, but > > it does make fewer assumptions -- it simply flattens the x and y > > inputs and passes them to the fitpack routine (unless x*y != z.size, > > in which case it makes up a meshgrid). > > The reason I changed this is because 'regrid' can only take 1d x,y > arrays which have the same number of points as the respective axis of > z. Flattening doesn't work here. It probably isn't hard for interp2d > to take either coordinate arrays, mgrid matrices, meshgrid matrices or > ogrid arrays and for us to detect which axis they are varying in and > then get things right for regrid. But as per my note above, when we > get scattered data working too we can deal with the input issues > better. In the meantime I think it is better to have just regrid > working. Right, the relevant comments in regrid.f for anyone else following: c given the set of values z(i,j) on the rectangular grid (x(i),y(j)), c i=1,...,mx;j=1,...,my, subroutine regrid determines a smooth bivar- c iate spline approximation s(x,y) of degrees kx and ky on the rect- c angle xb <= x <= xe, yb <= y <= ye. c z : real array of dimension at least (mx*my). c before entry, z(my*(i-1)+j) must be set to the data value at c the grid point (x(i),y(j)) for i=1,...,mx and j=1,...,my. c unchanged on exit. > > 2. The docstrings were updated to be fairly descriptive, whereas after > > patching we'd be back to short, non-descriptive documentation. > > > > I tried to only remove things that were no longer relevant (like the > input array format). But if we change as per above, we should > reinstate your docs. (I don't mean to tread on toes...). I can make > clearer docstrings for the temporary regrid-only interp2d first. No!
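The storage rule quoted from regrid.f, z(my*(i-1)+j), is exactly C-order flattening: with 0-based indices it reads flat[my*i + j] == z[i, j], which is what numpy's ravel() produces. A quick sanity check (the shapes here are arbitrary):

```python
import numpy as np

mx, my = 4, 3
# z[i, j] holds the data value at grid point (x[i], y[j]).
z = np.arange(mx * my, dtype=float).reshape(mx, my)
flat = z.ravel()                 # C order -- the layout regrid expects

# Fortran's 1-based z(my*(i-1)+j) equals 0-based flat[my*i + j].
for i in range(mx):
    for j in range(my):
        assert flat[my * i + j] == z[i, j]
print("layout matches")
```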
Please go ahead and forget about the toes. I only ask these questions to make sure that we are on the same page. I want the documentation to clearly state exactly what we are doing, so that there is no confusion on how to use the new interface (I've never seen a ticket complaining about the detailed level of docstrings ;). > > I'd also like to see some tests to verify the behaviour of the new > > interp2d that is *different* from the old version. It's easier to add > > it now, while everything is still fresh in our memories. > > Will try and do this tomorrow. OK, unless someone beats me to it, I will merge the patch tomorrow morning (ZA time). Cheers Stéfan From samnemo at gmail.com Thu Oct 12 23:41:22 2006 From: samnemo at gmail.com (sam n) Date: Thu, 12 Oct 2006 23:41:22 -0400 Subject: [SciPy-dev] warnings+errors(check_integer) with scipy.test() on scipy-0.5.1 Message-ID: Hi All, I just built and installed numpy-1.0rc2 and scipy-0.5.1 and encountered the warnings & errors below (I only get errors from scipy.test(), no problems with numpy.test()). If anyone knows how to fix these problems please let me know. I already had LAPACK and BLAS installed. I also included diagnostic information after the warnings & errors. Sorry if this email is too long. Thanks for any help/suggestions. Sam >>> import scipy >>> scipy.test() Warning: FAILURE importing tests for /usr/arch/lib/python2.4/site-packages/scipy/linalg/basic.py:23: ImportError: cannot import name calc_lwork (in ?) Found 42 tests for scipy.lib.lapack Warning: FAILURE importing tests for /usr/arch/lib/python2.4/site-packages/scipy/linalg/basic.py:23: ImportError: cannot import name calc_lwork (in ?)
====================================================================== ERROR: check_integer (scipy.io.tests.test_array_import.test_read_array) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/arch/lib/python2.4/site-packages/scipy/io/tests/test_array_import.py", line 55, in check_integer from scipy import stats File "/usr/arch/lib/python2.4/site-packages/scipy/stats/__init__.py", line 7, in ? from stats import * File "/usr/arch/lib/python2.4/site-packages/scipy/stats/stats.py", line 191, in ? import scipy.special as special File "/usr/arch/lib/python2.4/site-packages/scipy/special/__init__.py", line 10, in ? import orthogonal File "/usr/arch/lib/python2.4/site-packages/scipy/special/orthogonal.py", line 67, in ? from scipy.linalg import eig File "/usr/arch/lib/python2.4/site-packages/scipy/linalg/__init__.py", line 8, in ? from basic import * File "/usr/arch/lib/python2.4/site-packages/scipy/linalg/basic.py", line 23, in ? 
from scipy.linalg import calc_lwork ImportError: cannot import name calc_lwork ====================================================================== ERROR: check loadmat case cellnest ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/arch/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 80, in cc self._check_case(name, expected) File "/usr/arch/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case self._check_level(k_label, expected, matdict[k]) File "/usr/arch/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 33, in _check_level self._check_level(level_label, ev, actual[i]) File "/usr/arch/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 33, in _check_level self._check_level(level_label, ev, actual[i]) File "/usr/arch/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 33, in _check_level self._check_level(level_label, ev, actual[i]) File "/usr/arch/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 30, in _check_level assert len(expected) == len(actual), "Different list lengths at %s" % label TypeError: len() of unsized object ---------------------------------------------------------------------- Ran 630 tests in 7.745s FAILED (errors=2) diagnostic information from : python -c 'from numpy.f2py.diagnose import run; run()' ------ os.name='posix' ------ sys.platform='linux2' ------ sys.version: 2.4.1 (#3, May 9 2005, 14:40:40) [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-49)] ------ sys.prefix: /usr/arch ------ sys.path=':/usr/arch/lib/python24.zip:/usr/arch/lib/python2.4:/usr/arch/lib/python2.4/plat-linux2:/usr/arch/lib/python2.4/lib-tk:/usr/arch/lib/python2.4/lib-dynload:/usr/arch/lib/python2.4/site-packages ' ------ Failed to import Numeric: No module named Numeric Failed to import numarray: No module named numarray Found new numpy version '1.0rc2' in /usr/arch/lib/python2.4/site-packages/numpy/__init__.pyc Found 
f2py2e version '2_3296' in /usr/arch/lib/python2.4/site-packages/numpy/f2py/f2py2e.pyc Found numpy.distutils version '0.4.0' in '/usr/arch/lib/python2.4/site-packages/numpy/distutils/__init__.pyc' ------ Importing numpy.distutils.fcompiler ... ok ------ Checking availability of supported Fortran compilers: customize CompaqFCompiler customize NoneFCompiler customize AbsoftFCompiler Could not locate executable ifort Could not locate executable ifc Could not locate executable ifort Could not locate executable efort Could not locate executable efc Could not locate executable ifort Could not locate executable efort Could not locate executable efc customize IntelFCompiler Could not locate executable gfortran Could not locate executable f95 customize GnuFCompiler customize SunFCompiler customize NAGFCompiler customize VastFCompiler customize GnuFCompiler customize IbmFCompiler customize Gnu95FCompiler customize IntelVisualFCompiler customize G95FCompiler customize IntelItaniumFCompiler customize PGroupFCompiler customize LaheyFCompiler customize CompaqVisualFCompiler customize MipsFCompiler customize HPUXFCompiler customize IntelItaniumVisualFCompiler customize IntelEM64TFCompiler List of available Fortran compilers: --fcompiler=gnu GNU Fortran Compiler (3.2.3) List of unavailable Fortran compilers: --fcompiler=absoft Absoft Corp Fortran Compiler --fcompiler=compaq Compaq Fortran Compiler --fcompiler=compaqv DIGITAL|Compaq Visual Fortran Compiler --fcompiler=g95 G95 Fortran Compiler --fcompiler=gnu95 GNU 95 Fortran Compiler --fcompiler=hpux HP Fortran 90 Compiler --fcompiler=ibm IBM XL Fortran Compiler --fcompiler=intel Intel Fortran Compiler for 32-bit apps --fcompiler=intele Intel Fortran Compiler for Itanium apps --fcompiler=intelem Intel Fortran Compiler for EM64T-based apps --fcompiler=intelev Intel Visual Fortran Compiler for Itanium apps --fcompiler=intelv Intel Visual Fortran Compiler for 32-bit apps --fcompiler=lahey Lahey/Fujitsu Fortran 95 Compiler 
--fcompiler=mips MIPSpro Fortran Compiler --fcompiler=nag NAGWare Fortran 95 Compiler --fcompiler=none Fake Fortran compiler --fcompiler=pg Portland Group Fortran Compiler --fcompiler=sun Sun|Forte Fortran 95 Compiler --fcompiler=vast Pacific-Sierra Research Fortran 90 Compiler List of unimplemented Fortran compilers: --fcompiler=f Fortran Company/NAG F Compiler For compiler details, run 'config_fc --verbose' setup command. ------ Importing numpy.distutils.cpuinfo ... ok ------ CPU information: getNCPUs has_mmx has_sse is_32bit is_Intel is_Pentium is_PentiumII is_PentiumIII is_i686 ------ From jtravs at gmail.com Fri Oct 13 10:42:13 2006 From: jtravs at gmail.com (John Travers) Date: Fri, 13 Oct 2006 15:42:13 +0100 Subject: [SciPy-dev] new function regrid for fitpack2 In-Reply-To: <20061013003856.GN32224@mentat.za.net> References: <3a1077e70610101148n7c1731f5p404f25bde4177928@mail.gmail.com> <3a1077e70610111459u39740c36t3929b79e2bf64af9@mail.gmail.com> <20061011233506.GN20657@mentat.za.net> <3a1077e70610120542y73a1f4ebvea778ef0985d0933@mail.gmail.com> <20061012124722.GD32224@mentat.za.net> <3a1077e70610120935u78630c6cy4550f74f21f493c9@mail.gmail.com> <3a1077e70610121215v5161ba80q35e33c9fb71f195e@mail.gmail.com> <20061012224815.GK32224@mentat.za.net> <3a1077e70610121648v39fdefcew6a989fd81d8ba73c@mail.gmail.com> <20061013003856.GN32224@mentat.za.net> Message-ID: <3a1077e70610130742s54d73815ra22a39392b4ce827@mail.gmail.com> Hi Stefan, On 13/10/06, Stefan van der Walt wrote: > > > I'd also like to see some tests to verify the behaviour of the new > > > interp2d that is *different* from the old version. It's easier to add > > > it now, while everything is still fresh in our memories. > > > > Will try and do this tomorrow. > Attached is a very simple script (will make it a test later) which uses the test data from netlib->dierckx to check the use of regrid and surfit.
The plot attached shows the results. There could be an error in my dealing with meshgrid (I hate the way it swaps axes), but I think it is right. The surfit part issues a warning on interpolation. If you increase s to say 20 (0 is needed for interpolation) then the warning goes, but then you are smoothing (as can be seen from the corresponding output plot - which was downsampled for file size). I'll make this into a test at some point and also improve the interp2d interface to be more flexible to input array layout - but this will have to be next week due to time constraints (I need to work...) Hope this demonstrates my point (and that you don't find an obvious error in my code...) John -------------- next part -------------- A non-text attachment was scrubbed... Name: intplot2.png Type: image/png Size: 22948 bytes Desc: not available URL: -------------- next part -------------- # very simple script to test interpolation with regrid and surfit # based on example data from netlib->dierckx->regrid from numpy import * from scipy.interpolate.fitpack2 import SmoothBivariateSpline, \ RectBivariateSpline import matplotlib matplotlib.use('Agg') import pylab # x,y coordinates x = linspace(-1.5,1.5,11) y = x # data taken from daregr z = array([ [-0.0325, 0.0784, 0.0432, 0.0092, 0.1523, 0.0802, 0.0925, -0.0098, \ 0.0810, -0.0146, -0.0019], [0.1276, 0.0223, 0.0357, 0.1858, 0.2818, 0.1675, 0.2239, 0.1671, \ 0.0843, 0.0151, 0.0427], [0.0860, 0.1267, 0.1839, 0.3010, 0.5002, 0.4683, 0.4562, 0.2688, \ 0.1276, 0.1244, 0.0377], [0.0802, 0.1803, 0.3055, 0.4403, 0.6116, 0.7178, 0.6797, 0.5218, \ 0.2624, 0.1341, -0.0233], [0.1321, 0.2023, 0.4446, 0.7123, 0.7944, 0.9871, 0.8430, 0.6440, \ 0.4682, 0.1319, 0.1075], [0.2561, 0.1900, 0.4614, 0.7322, 0.9777, 1.0463, 0.9481, 0.6649, \ 0.4491, 0.2442, 0.1341], [0.0981, 0.2009, 0.4616, 0.5514, 0.7692, 0.9831, 0.7972, 0.5937, \ 0.4190, 0.1436, 0.0995], [0.0991, 0.1545, 0.3399, 0.4940, 0.6328, 0.7168, 0.6886, 0.3925, \ 0.3015, 0.1758, 0.0928],
[-0.0197, 0.1479, 0.1225, 0.3254, 0.3847, 0.4767, 0.4324, 0.2827, \ 0.2287, 0.0999, 0.0785], [0.0032, 0.0917, 0.0246, 0.1780, 0.2394, 0.1765, 0.1642, 0.2081, \ 0.1049, 0.0493, -0.0502], [0.0101, 0.0297, 0.0468, 0.0221, 0.1074, 0.0433, 0.0626, 0.1436, \ 0.1092, -0.0232, 0.0132]]) # plot original data pylab.subplot(1,3,1) pylab.imshow(z) pylab.title('orig') # check regrid mrs = RectBivariateSpline(x,y,z) zr = mrs(x,y) print sum(abs(zr-z)) pylab.subplot(1,3,2) pylab.imshow(zr) pylab.title('regrid') # check surfit # x increases in columns, y increases along rows ym,xm = meshgrid(y,x) # deal with meshgrid badness mrs = SmoothBivariateSpline(ravel(xm),ravel(ym),ravel(z),kx=3,ky=3,s=0) zr = mrs(x,y) print sum(abs(zr-z)) pylab.subplot(1,3,3) pylab.imshow(zr) pylab.title('surfit') pylab.savefig('intplot.png') pylab.close() From Balazs.Nemeth at shaw.ca Fri Oct 13 10:44:59 2006 From: Balazs.Nemeth at shaw.ca (Balazs Nemeth) Date: Fri, 13 Oct 2006 14:44:59 +0000 (UTC) Subject: [SciPy-dev] blitz and gcc-4.xx Message-ID: Hello, On a system with a gcc version newer than 3 (Ubuntu 6.06) blitz fails because in /usr/lib/python2.4/site-packages/weave/blitz-20001213/blitz/config.h, due to the construction: .. #if (__GNUC__ && __GNUC__ == 3) #define BZ_HAVE_NUMERIC_LIMITS #else #undef BZ_HAVE_NUMERIC_LIMITS #endif .. BZ_HAVE_NUMERIC_LIMITS stays undefined, which is incorrect because it is defined in /usr/lib/gcc/x86_64-linux-gnu/4.1.0/include/limits.h changing the above section to .. #if (__GNUC__ && __GNUC__ > 2) #define BZ_HAVE_NUMERIC_LIMITS #else #undef BZ_HAVE_NUMERIC_LIMITS #endif .. solves the problem.
Thanks BN From stefan at sun.ac.za Fri Oct 13 12:40:30 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Fri, 13 Oct 2006 18:40:30 +0200 Subject: [SciPy-dev] new function regrid for fitpack2 In-Reply-To: <3a1077e70610130742s54d73815ra22a39392b4ce827@mail.gmail.com> References: <3a1077e70610111459u39740c36t3929b79e2bf64af9@mail.gmail.com> <20061011233506.GN20657@mentat.za.net> <3a1077e70610120542y73a1f4ebvea778ef0985d0933@mail.gmail.com> <20061012124722.GD32224@mentat.za.net> <3a1077e70610120935u78630c6cy4550f74f21f493c9@mail.gmail.com> <3a1077e70610121215v5161ba80q35e33c9fb71f195e@mail.gmail.com> <20061012224815.GK32224@mentat.za.net> <3a1077e70610121648v39fdefcew6a989fd81d8ba73c@mail.gmail.com> <20061013003856.GN32224@mentat.za.net> <3a1077e70610130742s54d73815ra22a39392b4ce827@mail.gmail.com> Message-ID: <20061013164030.GB16246@mentat.za.net> Hi John On Fri, Oct 13, 2006 at 03:42:13PM +0100, John Travers wrote: > Attached is a very simple script (will make it a test later) which > uses the test data from netlib->dierckx to check the use of regrid and > surfit. The plot attached shows the results. There could be an error > in my dealing with meshgrid (I hate the way it swaps axes), but I > think it is right. The surfit part issues a warning on interpolation. > If you increase s to say 20 (0 is needed for interpolation) then the > warning goes, but then you are smoothing (as can be seen from the > corresponding output plot - which was downsampled for file size). I think matplotlib may be throwing a spanner in the wheels here by using its own interpolation. If you use imshow(x, interpolation='nearest') you get a plot like http://mentat.za.net/results/interpolate.png (I changed to a more densely sampled grid). As for the meshgrid behaviour, take a look at numpy's mgrid, which doesn't do the argument swapping.
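The meshgrid axis swap complained about in the thread, and the mgrid alternative Stefan points to, differ like this (a quick numpy check):

```python
import numpy as np

x = np.arange(3)                 # length 3
y = np.arange(4)                 # length 4

# meshgrid follows the MATLAB convention: the first output varies
# along *columns*, so the shape is (len(y), len(x)).
X1, Y1 = np.meshgrid(x, y)
print(X1.shape)                  # (4, 3)

# mgrid uses matrix ('ij') indexing: the first index belongs to the
# first argument, so the shape is (len(x), len(y)).
X2, Y2 = np.mgrid[0:3, 0:4]
print(X2.shape)                  # (3, 4)
```

Transposing the meshgrid output recovers the mgrid layout, which is the "badness" the test script works around with ym, xm = meshgrid(y, x).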
> I'll make this into a test at some point and also improve the interp2d > interface to be more flexible to input array layout - but this will > have to be next week due to time constraints (I need to work...) I'm sorry, I still haven't had time to merge the patch -- will hopefully be able to do that over the weekend at least. > Hope this demonstrates my point (and that you don't find an obvious > error in my code...) Thanks for the demo! Pretty pictures always make for a convincing argument. Cheers Stéfan From dalcinl at gmail.com Fri Oct 13 16:09:36 2006 From: dalcinl at gmail.com (Lisandro Dalcin) Date: Fri, 13 Oct 2006 17:09:36 -0300 Subject: [SciPy-dev] NumPy/SciPy + MPI for Python Message-ID: This post is surely OT, but I cannot imagine a better place to contact people about this subject. Please, don't blame me. Any people here interested in NumPy/SciPy + MPI? For some time now, I've been developing mpi4py (first release at SF) and I am really close to releasing a new version. This package exposes an API almost identical to the MPI-2 C++ bindings. Almost all MPI-1 and MPI-2 features (even one-sided communications and parallel I/O) are fully supported for any object exposing the single-segment buffer interface, and only some of them for communication of general Python objects (with the help of pickle/marshal). The possibility of constructing any user-defined MPI datatypes, as well as virtual topologies (especially cartesian), can be really nice for anyone interested in parallel multidimensional array processing. Before the next release, I would like to wait for any comments. You can contact me via private mail to get a tarball with the latest developments, or we can have some discussion here, if many of you consider this a good idea. In the long term, I would like to see mpi4py integrated as a subpackage of SciPy.
Regards, -- Lisandro Dalcín --------------- Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) PTLC - Güemes 3450, (3000) Santa Fe, Argentina Tel/Fax: +54-(0)342-451.1594 From cookedm at physics.mcmaster.ca Fri Oct 13 17:40:06 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Fri, 13 Oct 2006 17:40:06 -0400 Subject: [SciPy-dev] blitz and gcc-4.xx In-Reply-To: References: Message-ID: <20061013174006.61d5ec06@arbutus.physics.mcmaster.ca> On Fri, 13 Oct 2006 14:44:59 +0000 (UTC) Balazs Nemeth wrote: > Hello, > > On a system with gcc version larger than 3 ( Ubuntu 6.06) blitz fails > because > in /usr/lib/python2.4/site-packages/weave/blitz-20001213/blitz/config.h due > to the construction: .. > #if (__GNUC__ && __GNUC__ == 3) > #define BZ_HAVE_NUMERIC_LIMITS > #else > #undef BZ_HAVE_NUMERIC_LIMITS > #endif You're using the old version of scipy, where weave is a separate package. The current version doesn't have this problem. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From ellisonbg.net at gmail.com Fri Oct 13 18:49:41 2006 From: ellisonbg.net at gmail.com (Brian Granger) Date: Fri, 13 Oct 2006 16:49:41 -0600 Subject: [SciPy-dev] NumPy/SciPy + MPI for Python In-Reply-To: References: Message-ID: <6ce0ac130610131549s626aae9chbf3bed45dfcb9230@mail.gmail.com> Just as a data point. I have used mpi4py before and have built it on many systems ranging from my macbook to NERSC supercomputers. In my opinion it is currently the best Python MPI bindings available. Lisandro has done a fantastic job with this. Also Fernando and I have worked hard to make sure that mpi4py works with the new parallel capabilities of IPython.
I would love to see mpi4py hosted in a public repository for others to contribute. I think this would really solidify mpi4py as a top-notch MPI interface. But my only concern is that there might be many folks who want to use mpi4py who don't need scipy. I am one of those folks - I don't necessarily need scipy on the NERSC supercomputers, but I do need mpi4py. Because of this, I would probably still recommend keeping mpi4py as a separate project. Is there any chance it could be hosted at mpi4py.scipy.org? I strongly encourage others to try it out. Installation is easy. Brian Granger On 10/13/06, Lisandro Dalcin wrote: > This post is surely OT, but I cannot imagine a better place to contact > people about this subject. Please, don't blame me. > > Any people here interested in NumPy/SciPy + MPI? For some time now, > I've been developing mpi4py (first release at SF) and I am really close > to releasing a new version. > > This package exposes an API almost identical to the MPI-2 C++ bindings. > Almost all MPI-1 and MPI-2 features (even one-sided communications and > parallel I/O) are fully supported for any object exposing the > single-segment buffer interface, and only some of them for > communication of general Python objects (with the help of > pickle/marshal). > > The possibility of constructing any user-defined MPI datatypes, as well > as virtual topologies (especially Cartesian), can be really nice for > anyone interested in parallel multidimensional array processing. > > Before the next release, I would like to wait for any comments. You can > contact me via private mail to get a tarball with the latest developments, > or we can have some discussion here, if many of you consider this a > good idea. In the long term, I would like to see mpi4py integrated as > a subpackage of SciPy. 
> > Regards, > > -- > Lisandro Dalcín > --------------- > Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) > Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) > Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) > PTLC - Güemes 3450, (3000) Santa Fe, Argentina > Tel/Fax: +54-(0)342-451.1594 > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From thamelry at binf.ku.dk Sat Oct 14 04:58:41 2006 From: thamelry at binf.ku.dk (thamelry at binf.ku.dk) Date: Sat, 14 Oct 2006 10:58:41 +0200 Subject: [SciPy-dev] NumPy/SciPy + MPI for Python In-Reply-To: References: Message-ID: <2d7c25310610140158y27c6b01ci97130eb08242e5cc@mail.gmail.com> On 10/13/06, Lisandro Dalcin wrote: > This post is surely OT, but I cannot imagine a better place to contact > people about this subject. Please, don't blame me. > > Any people here interested in NumPy/SciPy + MPI? I've been working on a Dynamic Bayesian Network (DBN) toolkit for some time (called Mocapy, freely available from sourceforge https://sourceforge.net/projects/mocapy/). The thing is almost entirely implemented in Python, and currently uses Numeric and pyMPI. I routinely train DBNs from 100.000s of observations on our 240 CPU cluster. I'm in the process of porting Mocapy to numpy. I assume pyMPI will also work with numpy, but I haven't tried it out yet. It would be great if scipy came with default MPI support, especially since pyMPI does not seem to be actively developed anymore. Cheers, -Thomas ---- Thomas Hamelryck, Post-doctoral researcher Bioinformatics center Institute of Molecular Biology and Physiology University of Copenhagen Universitetsparken 15 - Bygning 10 DK-2100 Copenhagen Ø 
Denmark Homepage: http://www.binf.ku.dk/Protein_structure From dalcinl at gmail.com Sun Oct 15 15:05:03 2006 From: dalcinl at gmail.com (Lisandro Dalcin) Date: Sun, 15 Oct 2006 16:05:03 -0300 Subject: [SciPy-dev] MPI for Python RC1 Message-ID: For all of you interested in mpi4py, I've uploaded a tarball to PyPI http://www.python.org/pypi/mpi4py/0.4.0rc1 Make sure you have mpicc in your path and then - If you have setuptools, try $ easy_install mpi4py - Or download the tarball and run $ python setup.py install [--home=$HOME] You should look at 'test/mpi-rev-v1'. I've written some tests from the MPI book, chapters 2 and 3 (BibTeX reference in README.txt). You should try to run the unittest scripts under '/tests/unittest' Paul: I've tested mpi4py with MPICH1/2, OpenMPI and LAM. Please let me know about any issues you have under AIX with your MPI implementation, in case you use a vendor MPI. -- Lisandro Dalcín --------------- Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) PTLC - Güemes 3450, (3000) Santa Fe, Argentina Tel/Fax: +54-(0)342-451.1594 From david at ar.media.kyoto-u.ac.jp Mon Oct 16 00:03:10 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 16 Oct 2006 13:03:10 +0900 Subject: [SciPy-dev] test for packages into sandbox ? Message-ID: <4533047E.1050108@ar.media.kyoto-u.ac.jp> Hi, I was wondering if anyone knows the rules for which tests run when doing a scipy.test() ? Before, I thought that running "import scipy; scipy.test()" would run all tests of the default level. But it looks like it does not run the tests in the sandbox (at least with the packages I tried, like svm, numexpr, pyem): is this normal behaviour ? I would prefer my package pyem to be tested whenever scipy.test() is run. 
cheers, David From david at ar.media.kyoto-u.ac.jp Mon Oct 16 00:26:36 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 16 Oct 2006 13:26:36 +0900 Subject: [SciPy-dev] Changing API for kmeans in scipy.cluster ? Message-ID: <453309FC.2000002@ar.media.kyoto-u.ac.jp> Hi, I would like to know if it is ok to slightly change the API of kmeans in scipy.cluster ? For my package pyem, it would be practical for kmeans to also return the observation labels, that is, an index array describing the class membership. This can be done trivially by returning it as a third argument (it is already computed in the current implementation); as the function already returns a list of arrays, I believe it would be backward compatible with the current version. If nobody has any objection, I can make the change in the subversion repository. cheers, David From bart.vandereycken at cs.kuleuven.be Mon Oct 16 04:35:12 2006 From: bart.vandereycken at cs.kuleuven.be (Bart Vandereycken) Date: Mon, 16 Oct 2006 10:35:12 +0200 Subject: [SciPy-dev] scipy.linalg shadowed Message-ID: Hi, the scipy.linalg module is shadowed by numpy.linalg > import scipy > scipy.linalg.__file__ '.../lib/python/numpy/linalg/__init__.pyc' import scipy.linalg works fine Reading http://permalink.gmane.org/gmane.comp.python.scientific.devel/4431 , I see that "del linalg" is not present in scipy/__init__.py Bart From jtravs at gmail.com Mon Oct 16 12:29:15 2006 From: jtravs at gmail.com (John Travers) Date: Mon, 16 Oct 2006 17:29:15 +0100 Subject: [SciPy-dev] interp2d and sandbox/delaunay In-Reply-To: <452E9887.7010104@ee.byu.edu> References: <3a1077e70610121108l5c996761ocaa7a4d13225df26@mail.gmail.com> <452E9887.7010104@ee.byu.edu> Message-ID: <3a1077e70610160929q741be182td3c7c70f492fdf26@mail.gmail.com> Hi Travis, and others, On 12/10/06, Travis Oliphant wrote: > John Travers wrote: > >As previously stated, I think it needs to be reorganised/split into > >several parts, but I'll leave that to other threads. 
> > > >In this post I'm wondering about the status of interp2d. > > > > > interp2d is not at all what it should be. While thinking about this problem I have had a few ideas that might be worth considering. Interp2d is used for _interpolation_ of data. It comes to mind that most people (especially with scattered data) really want smoothing (closely fitting a smoothing surface) to their data. Maybe we should provide a function smooth2d or fit2d which does this using surfit (which although, as I've said, is very bad for interpolation, is actually very good for smoothing). This would effectively be the original interp2d but with the smoothing factor s > 0 (0 is for the ill-advised interpolation). > >I have recently implemented netlib->dierckx->regrid, which > >interpolates nicely for regular rectangular grids -> this could be > >used in interp2d, except that it doesn't work on scattered data (is > >this required? I think so, yes). > > > Yes, interp2d should work on scattered data, I think. OK, we have a patch waiting for putting regrid into interp2d. The delaunay code in the sandbox looks like it is fine for the scattered data. So our new interp2d should call regrid for regular data, and use natural neighbour interpolation from delaunay for scattered data. I can submit a patch for this, but we need delaunay out of the sandbox for this to work (and it means interpolate will depend on another module). Speaking about the current module setup, we should really have a scipy.approximation.spline module (the contents of fitpack, currently in interpolate). At the moment it is just confusing (calling scipy.interpolate.fitpack2.SmoothBivariateSpline for smoothing and not interpolation???). Then we could also have scipy.approximate.smooth2d() and leave scipy.interpolate with interp1d etc. Anybody have any comments on this? Is delaunay ready to be moved, and how about the module reorganisation (it would break the api)? 
Best regards, John From robert.kern at gmail.com Mon Oct 16 12:43:38 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 16 Oct 2006 11:43:38 -0500 Subject: [SciPy-dev] interp2d and sandbox/delaunay In-Reply-To: <3a1077e70610160929q741be182td3c7c70f492fdf26@mail.gmail.com> References: <3a1077e70610121108l5c996761ocaa7a4d13225df26@mail.gmail.com> <452E9887.7010104@ee.byu.edu> <3a1077e70610160929q741be182td3c7c70f492fdf26@mail.gmail.com> Message-ID: <4533B6BA.9000900@gmail.com> John Travers wrote: > OK, we have a patch waiting for putting regrid into interp2d. The > delaunay code in the sandbox looks like it is fine for the scattered > data. So our new interp2d should call regrid for regular data, and use > natural neighbour interpolation from delaunay for scattered data. I > can submit a patch for this, but we need delaunay out of sandbox for > this to work. (and it means interpolate will depend on another > module). I am very much against interp2d() doing different algorithms. Each algorithm should be exposed as separate functions. Even though my data is regular, it is entirely possible that I would want to use natural neighbor interpolation anyways. > Speaking about the current module setup, we should really actually > have a scipy.approximation.spline (contents of fitpack, currently in > interpolate) module. At the moment it is just confusing (calling > scipy.interpolate.fitpack2.SmoothBivariateSpline for smoothing and not > interpolation???) then we could also have > scipy.approximate.smooth2d() and leave scipy.interpolate with interp1d > etc. That's not a bad idea. > Anybody have any comments on this? Is delaunay ready to be moved, how > about the module reorganisation (would break api). I think the API is stable, and I think most of the bugs have been shaken out. 
I would like to replace the underlying Delaunay triangulation algorithm with a Guibas-Stolfi divide-and-conquer algorithm using robust geometric predicates, but I'm still implementing that slowly. Also, the error handling in the C++ was implemented very naively and should be replaced. However, neither of those issues would affect the external use of the module (except perhaps by making it more robust), and probably don't need to hold up moving the module out of the sandbox if there is a great need for it. An impartial reviewer might disagree with me, of course. :-) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From jtravs at gmail.com Mon Oct 16 13:14:18 2006 From: jtravs at gmail.com (John Travers) Date: Mon, 16 Oct 2006 18:14:18 +0100 Subject: [SciPy-dev] interp2d and sandbox/delaunay In-Reply-To: <4533B6BA.9000900@gmail.com> References: <3a1077e70610121108l5c996761ocaa7a4d13225df26@mail.gmail.com> <452E9887.7010104@ee.byu.edu> <3a1077e70610160929q741be182td3c7c70f492fdf26@mail.gmail.com> <4533B6BA.9000900@gmail.com> Message-ID: <3a1077e70610161014g4875a1b2w4dfaa8bcdbb0ac99@mail.gmail.com> On 16/10/06, Robert Kern wrote: > John Travers wrote: > > OK, we have a patch waiting for putting regrid into interp2d. The > > delaunay code in the sandbox looks like it is fine for the scattered > > data. So our new interp2d should call regrid for regular data, and use > > natural neighbour interpolation from delaunay for scattered data. I > > can submit a patch for this, but we need delaunay out of sandbox for > > this to work. (and it means interpolate will depend on another > > module). > > I am very much against interp2d() doing different algorithms. Each algorithm > should be exposed as separate functions. 
Even though my data is regular, it is > entirely possible that I would want to use natural neighbor interpolation anyways. OK, so the user can specify the method and we simply note in the docs that spline interpolation isn't available for scattered data? > > Speaking about the current module setup, we should really actually > > have a scipy.approximation.spline (contents of fitpack, currently in > > interpolate) module. At the moment it is just confusing (calling > > scipy.interpolate.fitpack2.SmoothBivariateSpline for smoothing and not > > interpolation???) then we could also have > > scipy.approximate.smooth2d() and leave scipy.interpolate with interp1d > > etc. > > That's not a bad idea. > > > Anybody have any comments on this? Is delaunay ready to be moved, how > > about the module reorganisation (would break api). > > I think the API is stable, and I think most of the bugs have been shaken out. I > would like to replace the underlying Delaunay triangulation algorithm with a > Guibas-Stolfi divide-and-conquer algorithm using robust geometric predicates, > but I'm still implementing that slowly. Also, the error handling in the C++ was > implemented very naively and should be replaced. However, neither of those > issues would affect the external use of the module (except perhaps by making it > more robust), and probably don't need to hold up moving the module out of the > sandbox if there is a great need for it. An impartial reviewer might disagree > with me, of course. :-) Well, given that the current interp2d doesn't work at all for even quite small amounts of data I think the sooner the better. Alternatively, we could setup scipy.sandbox.spline and then write the new interpolation module under the sandbox for now. (scipy.sandbox.interpolation). BTW, did you consider using Qhull as the delaunay code? I think the licence is OK, and it seems to be the "standard" (i.e. octave and Matlab both use it) and actively maintained. 
Best regards, John From oliphant.travis at ieee.org Mon Oct 16 14:07:18 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 16 Oct 2006 12:07:18 -0600 Subject: [SciPy-dev] scipy.linalg shadowed In-Reply-To: References: Message-ID: <4533CA56.2070709@ieee.org> Bart Vandereycken wrote: > Hi, > > the scipy.linalg module is shadowed by numpy.linalg > > > import scipy > > scipy.linalg.__file__ > '.../lib/python/numpy/linalg/__init__.pyc' > > import scipy.linalg works fine > > That's not what I see. What version do you have? import scipy scipy.linalg.__file__ '/usr/lib/python2.4/site-packages/scipy/linalg/__init__.pyc' scipy.__version__ '0.5.2.dev2247' -Travis From robert.kern at gmail.com Mon Oct 16 14:19:02 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 16 Oct 2006 13:19:02 -0500 Subject: [SciPy-dev] interp2d and sandbox/delaunay In-Reply-To: <3a1077e70610161014g4875a1b2w4dfaa8bcdbb0ac99@mail.gmail.com> References: <3a1077e70610121108l5c996761ocaa7a4d13225df26@mail.gmail.com> <452E9887.7010104@ee.byu.edu> <3a1077e70610160929q741be182td3c7c70f492fdf26@mail.gmail.com> <4533B6BA.9000900@gmail.com> <3a1077e70610161014g4875a1b2w4dfaa8bcdbb0ac99@mail.gmail.com> Message-ID: <4533CD16.6070403@gmail.com> John Travers wrote: > BTW, did you consider using Qhull as the delaunay code? I think the > licence is OK, and it seems to be the "standard" (i.e. octave and > Matlab both use it) and actively maintained. Yes, but I have two issues with it: It uses jittering to try to avoid degeneracies and so is not truly robust. It is very, very difficult to work with as a library; it was written to be a standalone program, so there are global variables and other such nonsense floating about. For N-D, it's pretty much the only game in town; however, for 2-D, there are much better options. 
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From charlesr.harris at gmail.com Mon Oct 16 20:19:31 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 16 Oct 2006 18:19:31 -0600 Subject: [SciPy-dev] rules for api and doc changes In-Reply-To: <3a1077e70610121032v13c1a3b5n1af690717ab8abdc@mail.gmail.com> References: <3a1077e70610121032v13c1a3b5n1af690717ab8abdc@mail.gmail.com> Message-ID: On 10/12/06, John Travers wrote: > > > The key change being that most of scipy.interpolate is actually for > fitting smoothing splines rather than interpolation (which took me a > while to understand) - surely it would be logical to have say > scipy.spline (with all the spline related stuff) and then > scipy.interpolate with just the interpolation classes (which could ref > the spline module). Yep, and last I looked the spline interface was one of those awful multiargument fortran driver thingees. Not to mention the confusion between smoothing and interpolating splines. If someone wants to rationalize the package I am all in favor. Once upon a time I also wrote some polynomial interpolation routines that I was thinking of including, but never got around to it. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bart.vandereycken at cs.kuleuven.be Tue Oct 17 04:19:16 2006 From: bart.vandereycken at cs.kuleuven.be (Bart Vandereycken) Date: Tue, 17 Oct 2006 10:19:16 +0200 Subject: [SciPy-dev] scipy.linalg shadowed In-Reply-To: <4533CA56.2070709@ieee.org> References: <4533CA56.2070709@ieee.org> Message-ID: Travis Oliphant wrote: > Bart Vandereycken wrote: >> Hi, >> >> the scipy.linalg module is shadowed by numpy.linalg >> >> > import scipy >> > scipy.linalg.__file__ >> '.../lib/python/numpy/linalg/__init__.pyc' >> >> import scipy.linalg works fine >> >> > > That's not what I see. What version do you have? > > import scipy > scipy.linalg.__file__ > '/usr/lib/python2.4/site-packages/scipy/linalg/__init__.pyc' > > scipy.__version__ > '0.5.2.dev2247' '0.5.2.dev2287' From jtravs at gmail.com Tue Oct 17 11:03:38 2006 From: jtravs at gmail.com (John Travers) Date: Tue, 17 Oct 2006 16:03:38 +0100 Subject: [SciPy-dev] rules for api and doc changes In-Reply-To: References: <3a1077e70610121032v13c1a3b5n1af690717ab8abdc@mail.gmail.com> Message-ID: <3a1077e70610170803k3da056bwfb218cc21dde589c@mail.gmail.com> On 17/10/06, Charles R Harris wrote: > On 10/12/06, John Travers wrote: > > > > The key change being that most of scipy.interpolate is actually for > > fitting smoothing splines rather than interpolation (which took me a > > while to understand) - surely it would be logical to have say > > scipy.spline (with all the spline related stuff) and then > > scipy.interpolate with just the interpolation classes (which could ref > > the spline module). > > Yep, and last I looked the spline interface was one of those awful > multiargument fortran driver thingees. Not to mention the confusion between > smoothing and interpolating splines. If someone wants to rationalize the > package I am all in favor. Once upon a time I also wrote some polynomial > interpolation routines that I was thinking of including, but never got > around to it. 
> > Chuck Well, I think fitpack2 has gone some way to sorting the interface into python-style classes. Though it is far from perfect. I'll happily have a go at making a nicer spline module in the sandbox, to eventually allow interpolate to become a pure interpolation module, if someone gives me svn access. John From nwagner at iam.uni-stuttgart.de Thu Oct 19 03:40:56 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 19 Oct 2006 09:40:56 +0200 Subject: [SciPy-dev] Warning: invalid value encountered in divide Message-ID: <45372C08.6020405@iam.uni-stuttgart.de> Hi all, If I run numpy.test(1) from the latest svn I get several warnings Warning: invalid value encountered in divide Warning: divide by zero encountered in divide Can I neglect these warnings ? Nils From nwagner at iam.uni-stuttgart.de Fri Oct 20 03:00:18 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 20 Oct 2006 09:00:18 +0200 Subject: [SciPy-dev] base_classes.py Message-ID: <45387402.40907@iam.uni-stuttgart.de> File "/usr/lib64/python2.4/site-packages/numpy/f2py/lib/parser/base_classes.py", line 369 return self.is_allocatable() or self.is_pointer(): ^ SyntaxError: invalid syntax From stefan at sun.ac.za Fri Oct 20 09:50:47 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Fri, 20 Oct 2006 15:50:47 +0200 Subject: [SciPy-dev] moving imread Message-ID: <20061020135047.GH10843@mentat.za.net> Hi, I would like to move 'imread' to a more noticeable place. 'scipy.misc.pilutil.imread' is hardly suitable. It probably belongs under 'ndimage', but since it isn't *part of* ndimage, I can't simply add it there (in case someone wants to install ndimage on its own). Then, I thought setup.py would be the place, but I see ndimage is imported using config.add_subpackage('ndimage') I guess one could just add ndimage.imread = scipil.imread, but that doesn't feel quite right. Can anyone advise me on the best way to add imread to ndimage? 
Thanks Stéfan From nwagner at iam.uni-stuttgart.de Fri Oct 20 12:00:07 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 20 Oct 2006 18:00:07 +0200 Subject: [SciPy-dev] Shape mismatch sparse/dense matrix vector multiplication Message-ID: <4538F287.5070103@iam.uni-stuttgart.de> Hi all, I am confused by the result of the attached script. Is that a bug ? Any pointer would be appreciated. Nils -------------- next part -------------- A non-text attachment was scrubbed... Name: test_todense.py Type: text/x-python Size: 590 bytes Desc: not available URL: From robert.kern at gmail.com Fri Oct 20 12:54:09 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 20 Oct 2006 11:54:09 -0500 Subject: [SciPy-dev] Shape mismatch sparse/dense matrix vector multiplication In-Reply-To: <4538F287.5070103@iam.uni-stuttgart.de> References: <4538F287.5070103@iam.uni-stuttgart.de> Message-ID: <4538FF31.60608@gmail.com> Nils Wagner wrote: > Hi all, > > I am confused by the result of the attached script. > Is that a bug ? What result did you get? What result did you expect? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From haase at msg.ucsf.edu Fri Oct 20 13:40:42 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 20 Oct 2006 10:40:42 -0700 Subject: [SciPy-dev] moving imread In-Reply-To: <20061020135047.GH10843@mentat.za.net> References: <20061020135047.GH10843@mentat.za.net> Message-ID: <200610201040.43134.haase@msg.ucsf.edu> Hi, keep in mind that in ndimage the emphasis is more on the *nd* part than on the *image* part. Meaning ndimage is (so far) concerned with any-dimensional "image"-algorithms -- like segmentation or nd-erosion and so on. It is not at all about reading jpg-files. My 2 cents. 
-Sebastian Haase On Friday 20 October 2006 06:50, Stefan van der Walt wrote: > Hi, > > I would like to move 'imread' to a more noticeable place. > 'scipy.misc.pilutil.imread' is hardly suitable. It probably belongs > under 'ndimage', but since it isn't *part of* ndimage, I can't simply > add it there (in case someone wants to install ndimage on its own). > > Then, I thought setup.py would be the place, but I see ndimage is > imported using > > config.add_subpackage('ndimage') > > I guess one could just add ndimage.imread = scipil.imread, but that > doesn't feel quite right. > > Can anyone advise me on the best way to add imread to ndimage? > > Thanks > Stéfan > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev From nwagner at iam.uni-stuttgart.de Fri Oct 20 13:39:26 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 20 Oct 2006 19:39:26 +0200 Subject: [SciPy-dev] Shape mismatch sparse/dense matrix vector multiplication In-Reply-To: <4538FF31.60608@gmail.com> References: <4538F287.5070103@iam.uni-stuttgart.de> <4538FF31.60608@gmail.com> Message-ID: On Fri, 20 Oct 2006 11:54:09 -0500 Robert Kern wrote: > Nils Wagner wrote: >> Hi all, >> >> I am confused by the result of the attached script. >> Is that a bug ? > > What result did you get? Sparse matrix vector multiplication (48,) Dense matrix vector multiplication (1, 48) What result did you expect? I expected the same shape for both operations, dense and sparse multiplication. Am I missing something ? Nils 
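Nils's attached script was scrubbed from the archive, but the (48,) versus (1, 48) split he reports is characteristic of mixing numpy's 1-D arrays with its always-2-D matrix type: a sparse matrix times a 1-D vector yields a plain 1-D ndarray, while numpy.matrix forces every result to be 2-D. A minimal sketch with small hypothetical stand-in arrays (not Nils's actual test_todense.py):

```python
import numpy as np

# Hypothetical stand-ins for the scrubbed test_todense.py: a small dense
# operator A and a 1-D vector x (Nils's arrays were length 48).
A = np.eye(3)
x = np.arange(3.0)

nd_result = A.dot(x)                            # ndarray dot: 1-D in, 1-D out,
                                                # like the sparse product's (48,)
mat_result = np.asmatrix(x) * np.asmatrix(A)    # matrix type: row vector times
                                                # matrix, result is always 2-D

print(nd_result.shape)   # (3,)
print(mat_result.shape)  # (1, 3)
```

Converting the matrix result back with `np.asarray(mat_result).ravel()` makes the two paths directly comparable. The residual of about 1.e-7 that Nils mentions in his follow-up is a separate issue; it would be consistent with single-precision arithmetic somewhere in the chain, though the scrubbed script would be needed to confirm that.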
> -- Umberto Eco > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev From nwagner at iam.uni-stuttgart.de Fri Oct 20 13:52:30 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 20 Oct 2006 19:52:30 +0200 Subject: [SciPy-dev] Shape mismatch sparse/dense matrix vector multiplication In-Reply-To: <4538FF31.60608@gmail.com> References: <4538F287.5070103@iam.uni-stuttgart.de> <4538FF31.60608@gmail.com> Message-ID: On Fri, 20 Oct 2006 11:54:09 -0500 Robert Kern wrote: > Nils Wagner wrote: >> Hi all, >> >> I am confused by the result of the attached script. >> Is that a bug ? > > What result did you get? What result did you expect? > > -- > Robert Kern > > "I have come to believe that the whole world is an > enigma, a harmless enigma > that is made terrible by our own mad attempt to > interpret it as though it had > an underlying truth." > -- Umberto Eco > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev What makes it worse is that the norm $\|yd - ys\|$ is not zero but something like 1.e-7. I expected a value near machine precision (1.e-16). Any idea how to fix that ? Nils From gregwillden at gmail.com Fri Oct 20 14:32:57 2006 From: gregwillden at gmail.com (Greg Willden) Date: Fri, 20 Oct 2006 13:32:57 -0500 Subject: [SciPy-dev] Docstrings patches Message-ID: <903323ff0610201132m4be9fb7bm7455df4096ae7f1a@mail.gmail.com> Hi All, I submitted a patch in a ticket a few weeks ago that would add the flattop window to signaltools.py as well as add some helpful docstrings. I see that the window function was added but the docstring changes were not. Was this intentional or accidental? Is there some fundamental opposition to improving the docstrings so that they help refer the user to other similar functions (i.e. See Also: sections)? 
I found this type of inline documentation invaluable when I was learning Matlab. I'd like to help improve things but I won't waste my time if the group is not interested in this type of thing. Regards, Greg -- Linux. Because rebooting is for adding hardware. -------------- next part -------------- An HTML attachment was scrubbed... URL: From wbaxter at gmail.com Fri Oct 20 14:39:36 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Sat, 21 Oct 2006 03:39:36 +0900 Subject: [SciPy-dev] Docstrings patches In-Reply-To: <903323ff0610201132m4be9fb7bm7455df4096ae7f1a@mail.gmail.com> References: <903323ff0610201132m4be9fb7bm7455df4096ae7f1a@mail.gmail.com> Message-ID: On 10/21/06, Greg Willden wrote: > Hi All, > I found this type of inline documentation invaluable when I was > learning Matlab. Me too. Good see also's are very very helpful. Personally, I'd like to see pretty much everything on the numpy examples page translated into docstrings. http://www.scipy.org/Numpy_Example_List > I'd like to help improve things but I won't waste my time if the group is > not interested in this type of thing. I don't know what happened, but everyone here would like better documentation strings. --bb From robert.kern at gmail.com Fri Oct 20 14:58:40 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 20 Oct 2006 13:58:40 -0500 Subject: [SciPy-dev] Docstrings patches In-Reply-To: <903323ff0610201132m4be9fb7bm7455df4096ae7f1a@mail.gmail.com> References: <903323ff0610201132m4be9fb7bm7455df4096ae7f1a@mail.gmail.com> Message-ID: <45391C60.2050105@gmail.com> Greg Willden wrote: > Hi All, > > I submitted a patch in a ticket a few weeks ago that would add the > flattop window to signaltools.py as well as add some helpful docstrings. > I see that the window function was added but the docstring changes were > not. > > Was this intentional or accidental? Stefan wasn't too keen on adding the cross-references to all of the window functions. 
From his comment: """Is it practical to add a "See also:" section to each window function? Every time we add a new window function, we have to add the name in 15 places.""" I think I agree in this case. A more useful thing to do would be to state that one can find a list of the other window functions in the scipy.signal docstring. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From gregwillden at gmail.com Fri Oct 20 15:04:08 2006 From: gregwillden at gmail.com (Greg Willden) Date: Fri, 20 Oct 2006 14:04:08 -0500 Subject: [SciPy-dev] Docstrings patches In-Reply-To: <45391C60.2050105@gmail.com> References: <903323ff0610201132m4be9fb7bm7455df4096ae7f1a@mail.gmail.com> <45391C60.2050105@gmail.com> Message-ID: <903323ff0610201204m73641d61j71b859846f65abe3@mail.gmail.com> On 10/20/06, Robert Kern wrote: > > Stefan wasn't too keen on adding the cross-references to all of the window > functions. From his comment: > > """Is it practical to add a "See also:" section to each window function? > Every > time we add a new window function, we have to add the name in 15 > places.""" > > I think I agree in this case. A more useful thing to do would be to state > that > one can find a list of the other window functions in the scipy.signal docstring. > I can understand that. However, in this case, I think we've just about got all the window functions that are out there. But I'll try something else and submit it. Thanks for the feedback. Greg -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stefan at sun.ac.za Fri Oct 20 19:01:10 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sat, 21 Oct 2006 01:01:10 +0200 Subject: [SciPy-dev] moving imread In-Reply-To: <200610201040.43134.haase@msg.ucsf.edu> References: <20061020135047.GH10843@mentat.za.net> <200610201040.43134.haase@msg.ucsf.edu> Message-ID: <20061020230110.GJ10843@mentat.za.net> On Fri, Oct 20, 2006 at 10:40:42AM -0700, Sebastian Haase wrote: > Hi, > keep in mind that in > ndimage the emphasis is more on the *nd* part than on the *image* part. > Meaning ndimage is (so far) concerned with any-dimensional "image"-algorithms > -- like segmentation or nd-erosion and so on. > It is not at all about reading jpg-files. It is the only image-related toolbox in scipy. It could go under io as well -- anywhere more accessible/logical than scipy.misc.pilutil.imread will make me happy. Cheers Stéfan From stefan at sun.ac.za Fri Oct 20 19:18:11 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sat, 21 Oct 2006 01:18:11 +0200 Subject: [SciPy-dev] Docstrings patches In-Reply-To: <903323ff0610201132m4be9fb7bm7455df4096ae7f1a@mail.gmail.com> References: <903323ff0610201132m4be9fb7bm7455df4096ae7f1a@mail.gmail.com> Message-ID: <20061020231811.GK10843@mentat.za.net> On Fri, Oct 20, 2006 at 01:32:57PM -0500, Greg Willden wrote: > I'd like to help improve things but I won't waste my time if the group is not > interested in this type of thing. You didn't respond to the ticket comment at http://projects.scipy.org/scipy/scipy/ticket/276 so I assumed you were satisfied and the ticket remained closed. 
I updated the signal toolbox info string, as mentioned there:

Window functions:

  boxcar           -- Boxcar window
  triang           -- Triangular window
  parzen           -- Parzen window
  bohman           -- Bohman window
  blackman         -- Blackman window
  blackmanharris   -- Minimum 4-term Blackman-Harris window
  nuttall          -- Nuttall's minimum 4-term Blackman-Harris window
  flattop          -- Flat top window
  bartlett         -- Bartlett window
  hann             -- Hann window
  barthann         -- Bartlett-Hann window
  hamming          -- Hamming window
  kaiser           -- Kaiser window
  gaussian         -- Gaussian window
  general_gaussian -- Generalized Gaussian window
  slepian          -- Slepian window

The descriptions are pretty terse, so we could always expand them if you are unhappy. As for see also's, I think Robert's suggestion is the way to go. Cheers Stéfan From gregwillden at gmail.com Fri Oct 20 19:46:55 2006 From: gregwillden at gmail.com (Greg Willden) Date: Fri, 20 Oct 2006 18:46:55 -0500 Subject: [SciPy-dev] Docstrings patches In-Reply-To: <20061020231811.GK10843@mentat.za.net> References: <903323ff0610201132m4be9fb7bm7455df4096ae7f1a@mail.gmail.com> <20061020231811.GK10843@mentat.za.net> Message-ID: <903323ff0610201646w2a793f5aha58537a7fff169f9@mail.gmail.com> On 10/20/06, Stefan van der Walt wrote: > > On Fri, Oct 20, 2006 at 01:32:57PM -0500, Greg Willden wrote: > > I'd like to help improve things but I won't waste my time if the group > is not > > interested in this type of thing. > > You didn't respond to the ticket comment at > Yeah, I'm a noobie so I went looking for my ticket but couldn't find it. That's when I sent the email. After I got Robert's email I looked harder and found it. The info string in signal looks good and Robert's suggestion makes sense for each function's docstring. Regards, Greg -- Linux. Because rebooting is for adding hardware. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From kamrik at gmail.com Sun Oct 22 09:30:21 2006 From: kamrik at gmail.com (Mark Koudritsky) Date: Sun, 22 Oct 2006 15:30:21 +0200 Subject: [SciPy-dev] Docstrings patches In-Reply-To: <903323ff0610201646w2a793f5aha58537a7fff169f9@mail.gmail.com> References: <903323ff0610201132m4be9fb7bm7455df4096ae7f1a@mail.gmail.com> <20061020231811.GK10843@mentat.za.net> <903323ff0610201646w2a793f5aha58537a7fff169f9@mail.gmail.com> Message-ID: Was the possibility of editing the docstrings in a wiki style ever discussed? I think it can be done with a special wiki which has a page for every docstring in the sources (not every comment, but every docstring intended for the user). Some script can be run once in a while that will take the docstrings from the wiki and insert them into the sources. Such a system would very significantly lower the barrier for contributing to documentation. Of course, the system is not trivial, it should check the docstrings from the wiki to avoid "injection attacks" etc. etc. But it's not too complicated either. Does anybody know if a similar system already exists somewhere? Anyway, I call for discussion of this possibility. Regards - Mark From wbaxter at gmail.com Sun Oct 22 13:10:55 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Mon, 23 Oct 2006 02:10:55 +0900 Subject: [SciPy-dev] [Numpy-discussion] some work on arpack In-Reply-To: <20061022140850.GR10447@t7.lanl.gov> References: <20061022140850.GR10447@t7.lanl.gov> Message-ID: Don't know if this is any use, but to me (not knowing Fortran nearly so well as C++) this looks pretty useful: http://www.caam.rice.edu/software/ARPACK/arpack++.html http://www.ime.unicamp.br/~chico/arpack++/ It provides a nice high-level interface on top of ARPACK. 
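[Editor's note: for readers unfamiliar with what an ARPACK-style "eigs" call computes, here is a tiny illustration in plain numpy — simple power iteration for the largest-magnitude eigenpair of a symmetric matrix. This is only a sketch of the idea; ARPACK's Lanczos/Arnoldi methods are far more capable, and none of this is scipy or ARPACK code.]

```python
import numpy as np

def largest_eigpair(A, tol=1e-12, maxiter=1000):
    """Power iteration: largest-magnitude eigenpair of a symmetric matrix.

    A toy stand-in for what an ARPACK-style ``eigs`` call returns; real
    solvers use Arnoldi/Lanczos iterations and converge much faster.
    """
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)        # unit-norm starting vector
    lam = 0.0
    for _ in range(maxiter):
        y = A @ x                      # the only access to A: a matrix-vector product
        lam_new = x @ y                # Rayleigh quotient estimate
        x = y / np.linalg.norm(y)
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam, x

lam, vec = largest_eigpair(np.diag([1.0, 2.0, 5.0]))
print(round(lam, 6))  # -> 5.0
```

Note that the solver only ever touches A through a matrix-vector product — the same access pattern that ARPACK's reverse-communication interface (mentioned below in the original thread) exposes to the caller.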
I could see it being useful on a number of levels:

1) actually use its wrappers directly instead of calling the f2py'ed fortran code - at least as a stop gap measure to cover the holes ("shifted modes" etc) - this could potentially even be faster since the reverse-communication interface requires some iteration, and this way the loops would run in compiled C++ instead of python.

2) use its code just as a reference to help write more wrappers to the fortran code

3) if nothing else, it has pretty decent documentation. For instance it includes a nice table of the amount of storage you need to reserve for various different calling modes. (It seems ARPACK's main documentation is not freely available, so I'm not sure what's in it, but if you have that then ARPACK++'s docs may not be much help.)

As far as I can tell, it must be under the same licensing terms as ARPACK itself. It doesn't actually specify anything as far as I could see. But it seems to be linked prominently from the ARPACK website and appears as an offshoot project. --bb On 10/22/06, Aric Hagberg wrote: > On Sat, Oct 21, 2006 at 02:05:42PM -0700, Keith Goodman wrote: > > Did you, or anybody else on the list, have any luck making a numpy > > version of eigs? > > I made a start at an ARPACK wrapper, see > http://projects.scipy.org/scipy/scipy/ticket/231 > and the short thread at scipy-dev > http://thread.gmane.org/gmane.comp.python.scientific.devel/5166/focus=5175 > > In addition to the wrapper there is a Python interface (and some tests). > I don't know if the interface is like "eigs" - I don't use Matlab. > > It will give you a few eigenvalues and eigenvectors for the standard > eigenproblem (Ax=lx) for any type of A (symmetric/nonsymmetric, real/complex, > single/double, sparse/nonsparse). > > The generalized and shifted modes are not implemented. > I need to find some time (or some help) to get it finished.
> > Regards, > Aric From wbaxter at gmail.com Sun Oct 22 13:26:28 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Mon, 23 Oct 2006 02:26:28 +0900 Subject: [SciPy-dev] Docstrings patches In-Reply-To: References: <903323ff0610201132m4be9fb7bm7455df4096ae7f1a@mail.gmail.com> <20061020231811.GK10843@mentat.za.net> <903323ff0610201646w2a793f5aha58537a7fff169f9@mail.gmail.com> Message-ID: Automating all that seems pretty tricky, though valuable if someone can find the time to do it. But one similar idea that might be easier to get going would be to set up said wiki with a fixed base url, and then write some utility functions for scipy that let you do something like: scipy.webdoc(scipy.hamming) to launch the user's browser and take it to that wiki page. These pages can have more detail, examples, and cross-referencing than is possible with the regular built-in docs, and being wiki, could also have discussions attached. The main issue I see with straight wiki, is that it would be nice to impose some structure on the docs, such as One Line Description | Brief Description | SeeAlso | Examples | Discussion, but with wiki all you get is one big blob of text. --bb On 10/22/06, Mark Koudritsky wrote: > Was the possibility of editing the docstrings in a wiki style ever discussed? > > I think it can be done with a special wiki which has a page for every > docstring in the sources (not every comment, but every docstring > intended for the user). Some script can be run once in a while that > will take the docstrings from the wiki and insert them into the > sources. Such a system would very significantly lower the barrier for > contributing to documentation. > > Of course, the system is not trivial, it should check the docstrings > from the wiki to avoid "injection attacks" etc. etc. But it's not too > complicated either. > > Does anybody know if a similar system already exists somewhere? > > Anyway, I call for discussion of this possibility. 
> > Regards > > - Mark > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From jeff at taupro.com Sun Oct 22 14:20:19 2006 From: jeff at taupro.com (Jeff Rush) Date: Sun, 22 Oct 2006 13:20:19 -0500 Subject: [SciPy-dev] A Call for SciPy Talks at PyCon 2007 Message-ID: <453BB663.6070803@taupro.com> We're rapidly approaching the deadline (Oct 31) for submitting a talk proposal for PyCon 2007, being held Feb 23-25 in Addison (Dallas), Texas. As co-chair of this year's conference I'd like to encourage the SciPy/NumPy community to submit a few talks. While we're in the early stages of keynote selection, it appears to be shaping up as a "Python Empowers Education" conference theme, with participants from the One Laptop Per Child project. And with one focus of SciPy being education, from the SciPy Developer Zone, "Today Python is used as a tool for research and higher education, but we would also like to see secondary-school students and their teachers choosing Python as an educational tool over commercial offerings." I think SciPy can be a significant player at PyCon. We also run half-day paid tutorials taught by community members on Feb 22 and the course submission deadline for those is Nov 15. It'd be really cool to have a half-day hands-on introduction to SciPy, and the teachers get paid up to $1500 per half-day, depending upon the number of students. For talk ideas check out our wiki page at: http://us.pycon.org/TX2007/TalkIdeas And to submit a talk proposal: http://us.pycon.org/TX2007/CallForProposals or a tutorial: http://us.pycon.org/TX2007/CallForTutorials Also please spread the word about the conference and encourage those in the scientific, engineering and education communities to attend.
Jeff Rush PyCon 2007 Co-Chair From nwagner at iam.uni-stuttgart.de Tue Oct 24 06:46:51 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 24 Oct 2006 12:46:51 +0200 Subject: [SciPy-dev] PRIMME: PReconditioned Iterative MultiMethod Eigensolver Message-ID: <453DEF1B.7070903@iam.uni-stuttgart.de> Hi all, A new eigensolver is available (*NA Digest, V. 06, # 43*). PRIMME is released under the Lesser GPL license. Is this licence compatible wrt scipy ? Nils From: Andreas Stathopoulos Date: Thu, 19 Oct 2006 01:50:09 -0400 Subject: PRIMME Eigensolver PRIMME: PReconditioned Iterative MultiMethod Eigensolver http://www.cs.wm.edu/~andreas/software/ We are pleased to announce the release of PRIMME Version 1.1 featuring enhanced functionality and a year-long testing by several groups. PRIMME is a C library for the solution of large, sparse, real symmetric and complex Hermitian eigenvalue problems. It implements a main iteration similar to Davidson/Jacobi-Davidson, but with the appropriate choice of parameters it can transform to most known, and even yet undiscovered, preconditioned eigensolvers. These include the nearly optimal GD+k and JDQMR methods. PRIMME implements one of the most comprehensive sets of eigenvalue techniques, including block methods, locking, locally optimal restarting, dynamic stopping criteria for inner iteration, and many correction equation variants. Coupled with the above nearly optimal methods, PRIMME demonstrates exceptional robustness and efficiency. Even without preconditioning, PRIMME has proved faster than Lanczos methods for a small number of eigenpairs of difficult problems. PRIMME's multi-layer interface allows non-expert, end-users to access the full power of these methods. Unlike traditional Jacobi-Davidson methods, where a host of parameters must be tuned, PRIMME has a sophisticated mechanism for determining appropriate defaults. 
Moreover, under a new DYNAMIC mode, the best method can be selected dynamically based on runtime timings and measurements. Yet, experts can still control and experiment with numerous tuning knobs. PRIMME works with any external matrix-(multi)vector and preconditioner (multi-)vector functions, including indefinite preconditioners; it can find both exterior and interior eigenpairs; it is both sequential and parallel (SPMD parallelization); and can be called from any C, C++, or Fortran program. For questions, bugs, reports please email: andreas at cs.wm.edu From robert.kern at gmail.com Tue Oct 24 13:09:41 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 24 Oct 2006 12:09:41 -0500 Subject: [SciPy-dev] PRIMME: PReconditioned Iterative MultiMethod Eigensolver In-Reply-To: <453DEF1B.7070903@iam.uni-stuttgart.de> References: <453DEF1B.7070903@iam.uni-stuttgart.de> Message-ID: <453E48D5.1060009@gmail.com> Nils Wagner wrote: > Hi all, > > A new eigensolver is available (*NA Digest, V. 06, # 43*). > PRIMME is released under the Lesser GPL license. > Is this licence compatible wrt scipy ? No, it cannot be included in scipy. You have asked this question many times before, and the answer has never changed. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From oliphant.travis at ieee.org Wed Oct 25 06:18:07 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 25 Oct 2006 04:18:07 -0600 Subject: [SciPy-dev] SciPy release Message-ID: <453F39DF.1090009@ieee.org> Now that NumPy 1.0 is out, we need to make a SciPy release. Let's target Monday of next week for the release of 0.5.2. Please have code-contributions and clean-ups by then. 
-Travis From nwagner at iam.uni-stuttgart.de Wed Oct 25 08:43:50 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 25 Oct 2006 14:43:50 +0200 Subject: [SciPy-dev] optimize.fmin_tnc TypeError: argument 2 must be list, not numpy.ndarray Message-ID: <453F5C06.2030005@iam.uni-stuttgart.de> The optimizer tnc requires a list for the second argument. All other optimizers work with an array. For what reason ? xopt = optimize.fmin_tnc(P_A,x0,P_Ap) File "/usr/lib64/python2.4/site-packages/scipy/optimize/tnc.py", line 221, in fmin_tnc fmin, ftol, rescale) TypeError: argument 2 must be list, not numpy.ndarray >>> type(x0) Nils From zufus at zufus.org Wed Oct 25 09:24:05 2006 From: zufus at zufus.org (Marco Presi) Date: Wed, 25 Oct 2006 15:24:05 +0200 Subject: [SciPy-dev] SciPy release In-Reply-To: <453F39DF.1090009@ieee.org> (Travis Oliphant's message of "Wed, 25 Oct 2006 04:18:07 -0600") References: <453F39DF.1090009@ieee.org> Message-ID: <87y7r4r0ju.fsf@zufus.org> || On Wed, 25 Oct 2006 04:18:07 -0600 || Travis Oliphant wrote: to> Now that NumPy 1.0 is out, we need to make a SciPy release. Let's to> target Monday of next week for the release of 0.5.2. Hi Travis, while I think that probably there is a chance to get numpy in next stable Debian release, probably it could be late for scipy 0.5.2. Anyway, do you think that by simply rebuilding scipy 0.5.1 on top of numpy-1.0 can introduce regressions? to> Please have code-contributions and clean-ups by then. Regards Marco -- "I videogiochi non influenzano i bambini. Voglio dire, se Pac-Man avesse influenzato la nostra generazione, staremmo tutti saltando in sale scure, masticando pillole magiche e ascoltando musica elettronica ripetitiva." "Videogames do not influence kids. I mean, if Pac-Man influenced our generation, we were all jumping in dark rooms, chomping pills and listening to electronic repeating music." Kristian Wilson, Nintendo Inc. 
1989 From haase at msg.ucsf.edu Wed Oct 25 12:41:16 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 25 Oct 2006 09:41:16 -0700 Subject: [SciPy-dev] SciPy release In-Reply-To: <87y7r4r0ju.fsf@zufus.org> References: <453F39DF.1090009@ieee.org> <87y7r4r0ju.fsf@zufus.org> Message-ID: <200610250941.16621.haase@msg.ucsf.edu> On Wednesday 25 October 2006 06:24, Marco Presi wrote: > || On Wed, 25 Oct 2006 04:18:07 -0600 > || Travis Oliphant wrote: > > to> Now that NumPy 1.0 is out, we need to make a SciPy release. Let's > to> target Monday of next week for the release of 0.5.2. > > Hi Travis, > > while I think that probably there is a chance to get numpy in > next stable Debian release, probably it could be late for scipy > 0.5.2. > > Anyway, do you think that by simply rebuilding scipy 0.5.1 on > top of numpy-1.0 can introduce regressions? > > to> Please have code-contributions and clean-ups by then. > > Regards > > Marco Would it not be better to use the latest SVN instead of 0.5.1? It should be awfully close to 0.5.2 and many fixes towards numpy 1.0 are probably required. -Sebastian From oliphant at ee.byu.edu Wed Oct 25 16:20:45 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 25 Oct 2006 14:20:45 -0600 Subject: [SciPy-dev] SciPy release In-Reply-To: <87y7r4r0ju.fsf@zufus.org> References: <453F39DF.1090009@ieee.org> <87y7r4r0ju.fsf@zufus.org> Message-ID: <453FC71D.1040301@ee.byu.edu> Marco Presi wrote: > || On Wed, 25 Oct 2006 04:18:07 -0600 > || Travis Oliphant wrote: > > to> Now that NumPy 1.0 is out, we need to make a SciPy release. Let's > to> target Monday of next week for the release of 0.5.2. > > Hi Travis, > > while I think that probably there is a chance to get numpy in > next stable Debian release, probably it could be late for scipy > 0.5.2. > > Anyway, do you think that by simply rebuilding scipy 0.5.1 on > top of numpy-1.0 can introduce regressions? > > That should work for the most part.
There is only one problem I know about and that is in the umfpack interface which will raise an error on testing. There are some issues on INTEL Mac OS X, as well but I haven't seen those fixed. 0.5.1 rebuilt should be fine. -Travis From oliphant.travis at ieee.org Thu Oct 26 13:24:25 2006 From: oliphant.travis at ieee.org (Travis E. Oliphant) Date: Thu, 26 Oct 2006 11:24:25 -0600 Subject: [SciPy-dev] [ANN] NumPy 1.0 release Message-ID: <4540EF49.1040605@ieee.org> We are very pleased to announce the release of NumPy 1.0 available for download at http://www.numpy.org This release is the culmination of over 18 months of effort to allow unification of the Numeric and Numarray communities. NumPy provides the features of both packages as well as comparable speeds in the domains where both were considered fast --- often beating both packages on certain problems. If there is an area where we can speed up NumPy then we are interested in hearing about the solution. NumPy is essentially a re-write of Numeric to include the features of Numarray plus more. NumPy is a C-based extension module to Python that provides an N-dimensional array object (ndarray), a collection of fast math functions, basic linear algebra, array-producing random number generators, and basic Fourier transform capabilities. Also included with NumPy are: 1) A data-type object. The data-type of all NumPy arrays are defined by a data-type object that describes how the block of memory that makes up an element of the array is to be interpreted. Supported are all basic C-types, structures containing C-types, arrays of C-types, and structures containing structures of C-types. Data-types can also be in big or little-endian order. NumPy arrays can therefore be constructed from any regularly-sized chunk of data. A chunk of data can also be a pointer to a Python object and therefore Object arrays can be constructed (including record arrays with object members). 
2) Array scalars: there is a Python scalar object (inheriting from the standard object where possible) defined for every basic data-type that an array can have. 3) A matrix object so that '*' is re-defined as matrix-multiplication and '**' as matrix-power. 4) A character array object that can replace Numarray's similarly-named object. It is basically an array of strings (or unicode) with methods matching the string and unicode methods. 5) A record array that builds on the advanced data-type support of the basic array object to allow field access using attribute look-up as well as to provide more ways to build-up a record-array. 6) A memory-map object that makes it easier to use memory-mapped areas as the memory for an array object. 7) A basic container class that uses the ndarray as a member. This often facilitates multiple-inheritance. 8) A large collection of basic functions on the array. 9) Compatibility layer for Numeric including code to help in the conversion to NumPy and full C-API support. 10) Compatibility layer for Numarray including code to help in the conversion to NumPy and full C-API support. NumPy can work with Numeric and Numarray installed and while the three array objects are different to Python, they can all share each other's data through the use of the array interface. As the developers for Numeric we can definitively say development of Numeric has ceased as has effective support. You may still find an answer to a question or two, and Numeric will be available for download as long as Sourceforge is around, and code written to Numeric will still work, but there will not be "official" releases of Numeric for future versions of Python (including Python2.5). The development of NumPy has been supported by the people at STScI who created Numarray and support it. They have started to port their applications to NumPy and have indicated that support for Numarray will be phased out over the next year. You are strongly encouraged to move to NumPy.
The whole point of NumPy is to unite the Numeric/Numarray development and user communities. We have done our part in releasing NumPy 1.0 and doing our best to make the transition as easy as possible. Please support us by adopting NumPy. If you have trouble with that, please let us know why so that we can address the problems you identify. Even better, help us in fixing the problems. New users should download NumPy first unless they need an older package to work with third party code. Third-party package writers should migrate to use NumPy. Though it is not difficult, there are some things that have to be altered. Several people are available to help with that process, just ask (we will do it free for open source code and as work-for-hire for commercial code). This release would not have been possible without the work of many people. Thanks go to (if we have missed your contribution please let us know):

* Travis Oliphant for the majority of the code adaptation (blame him for code problems :-) )
* Jim Hugunin, Paul Dubois, Konrad Hinsen, David Ascher, Jim Fulton and many others for Numeric on which the code is based.
* Perry Greenfield, J Todd Miller, Rick White, Paul Barrett for Numarray which gave much inspiration and showed the way forward.
* Paul Dubois for Masked Arrays
* Pearu Peterson for f2py and numpy.distutils and help with code organization
* Robert Kern for mtrand, bug fixes, help with distutils, code organization, and much more.
* David Cooke for many code improvements including the auto-generated C-API and optimizations.
* Alexander Belopolsky (Sasha) for Masked array bug-fixes and tests, rank-0 array improvements, scalar math help and other code additions
* Francesc Altet for unicode and nested record tests and much help with rooting out nested record array bugs.
* Tim Hochberg for getting the build working on MSVC, optimization improvements, and code review
* Charles Harris for the sorting code originally written for Numarray and for improvements to polyfit, many bug fixes, and documentation strings.
* Robert Cimrman for numpy.distutils help and the set-operations for arrays
* David Huard for histogram code improvements including 2-d and d-d code
* Eric Jones for sundry subroutines borrowed from scipy_base
* Fernando Perez for code snippets, ideas, bugfixes, and testing.
* Ed Schofield for matrix.py patches, bugfixes, testing, and docstrings.
* John Hunter for code snippets (from matplotlib)
* Chris Hanley for help with records.py, testing, and bug fixes.
* Travis Vaught, Joe Cooper, Jeff Strunk for administration of numpy.org web site and SVN
* Andrew Straw for bug-reports and help with www.scipy.org
* Albert Strasheim for bug-reports, unit-testing and Valgrind runs
* Stefan van der Walt for bug-reports, regression-testing, and bug-fixes.
* Eric Firing for bugfixes.
* Arnd Baecker for 64-bit testing
* A.M. Archibald for code that decreases the number of times reshape makes a copy.

More information is available at http://numpy.scipy.org and http://www.scipy.org. Bug-reports and feature requests should be submitted as tickets to the Trac pages at http://projects.scipy.org/scipy/numpy/ As an anti-SPAM measure, you must create an account in order to post tickets. Enjoy the new release, Sincerely, The NumPy Developers *Disclaimer*: The main author, Travis Oliphant, has written a 350+ page book entitled "Guide to NumPy" that documents the new system fairly thoroughly. The first two chapters of this book are available on-line for free, but the remainder must be purchased (until 2010 or a certain number of total sales has been reached). See http://www.trelgol.com for more details. There is plenty of free documentation available now for NumPy, however. Go to http://www.scipy.org for more details.
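[Editor's note: the data-type object and record-array features announced above can be illustrated briefly with the modern numpy API. The field names below are made up for the example:]

```python
import numpy as np

# A structured data-type: each array element is interpreted as a
# C-like struct, exactly as described in point 1 of the announcement.
dt = np.dtype([('name', 'S10'), ('height', np.float64), ('age', np.int32)])

people = np.array([(b'Alice', 1.65, 30), (b'Bob', 1.80, 25)], dtype=dt)

# Field access returns a view of just that "column".
print(people['age'])            # -> [30 25]
print(people['height'].mean())  # -> 1.725

# The record-array subclass additionally allows attribute look-up.
rec = people.view(np.recarray)
print(rec.name[0])              # -> b'Alice'
```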
From ndbecker2 at gmail.com Thu Oct 26 21:40:22 2006 From: ndbecker2 at gmail.com (Neal Becker) Date: Thu, 26 Oct 2006 21:40:22 -0400 Subject: [SciPy-dev] scipy-0.5.1/numpy-1.0 x86_64 test Message-ID:

======================================================================
ERROR: Solve: single precision complex
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib64/python2.4/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py", line 25, in check_solve_complex_without_umfpack
    x = linsolve.spsolve(a, b)
  File "/usr/lib64/python2.4/site-packages/scipy/linsolve/linsolve.py", line 76, in spsolve
    return gssv(N, lastel, data, index0, index1, b, csc, permc_spec)[0]
TypeError: array cannot be safely cast to required type

======================================================================
ERROR: check loadmat case cellnest
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib64/python2.4/site-packages/scipy/io/tests/test_mio.py", line 80, in cc
    self._check_case(name, expected)
  File "/usr/lib64/python2.4/site-packages/scipy/io/tests/test_mio.py", line 75, in _check_case
    self._check_level(k_label, expected, matdict[k])
  File "/usr/lib64/python2.4/site-packages/scipy/io/tests/test_mio.py", line 33, in _check_level
    self._check_level(level_label, ev, actual[i])
  File "/usr/lib64/python2.4/site-packages/scipy/io/tests/test_mio.py", line 33, in _check_level
    self._check_level(level_label, ev, actual[i])
  File "/usr/lib64/python2.4/site-packages/scipy/io/tests/test_mio.py", line 33, in _check_level
    self._check_level(level_label, ev, actual[i])
  File "/usr/lib64/python2.4/site-packages/scipy/io/tests/test_mio.py", line 30, in _check_level
    assert len(expected) == len(actual), "Different list lengths at %s" % label
TypeError: len() of unsized object

======================================================================
FAIL: check_rvs
(scipy.stats.tests.test_distributions.test_randint)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib64/python2.4/site-packages/scipy/stats/tests/test_distributions.py", line 84, in check_rvs
    assert isinstance(val, numpy.ScalarType),`type(val)`
AssertionError:
----------------------------------------------------------------------
Ran 1569 tests in 5.746s

FAILED (failures=1, errors=2)
>>>

From david at ar.media.kyoto-u.ac.jp Fri Oct 27 05:53:28 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 27 Oct 2006 18:53:28 +0900 Subject: [SciPy-dev] pyaudio, a module to make noise from numpy arrays Message-ID: <4541D718.6040308@ar.media.kyoto-u.ac.jp> Hi, I announce the first release of pyaudio, a module to make noise from numpy arrays (read, write and play audio files with numpy arrays). * WHAT FOR ?: The goal is to give to a numpy/scipy environment some basic audio IO facilities (à la sound, wavread, wavwrite of matlab). With pyaudio, you should be able to read and write most common audio files from and to numpy arrays. The underlying IO operations are done using libsndfile from Erik de Castro Lopo (http://www.mega-nerd.com/libsndfile/), so he is the one who did the hard work. As libsndfile has support for a vast number of audio files (including wav, aiff, but also htk, ircam, and flac, an open source lossless codec), pyaudio enables the import from and export to a fairly large number of file formats. There is also a really crude player, which uses tempfile to play audio, and which only works for linux-alsa anyway. I intend to add better support at least for linux, and for other platforms if this does not involve too much hassle. So basically, if you are lucky enough to use a recent linux system, pyaudio already gives you the equivalent of wavread, wavwrite and sound.
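[Editor's note: for comparison, here is a rough stdlib-only version of the wavread/wavwrite round trip that pyaudio aims to provide. It handles only mono 16-bit WAV (no flac, aiff, etc.) via Python's wave module; the function and file names are ours, not pyaudio's:]

```python
import os
import tempfile
import wave

import numpy as np

def wavwrite(path, data, rate):
    """Write a mono float array in [-1, 1] as a 16-bit PCM WAV file."""
    pcm = (np.clip(data, -1.0, 1.0) * 32767).astype('<i2')
    with wave.open(path, 'wb') as w:
        w.setnchannels(1)       # mono
        w.setsampwidth(2)       # 16-bit samples
        w.setframerate(rate)
        w.writeframes(pcm.tobytes())

def wavread(path):
    """Read a mono 16-bit PCM WAV file back into a float array in [-1, 1]."""
    with wave.open(path, 'rb') as w:
        rate = w.getframerate()
        pcm = np.frombuffer(w.readframes(w.getnframes()), dtype='<i2')
    return pcm / 32767.0, rate

# Round-trip one second of a 440 Hz tone.
path = os.path.join(tempfile.gettempdir(), 'tone.wav')
tone = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
wavwrite(path, tone, 44100)
back, rate = wavread(path)
print(rate, len(back))  # -> 44100 44100
```

The round trip loses only the 16-bit quantization error, which is what makes libsndfile's lossless formats like flac attractive for anything beyond quick dumps.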
* DOWNLOAD: http://www.ar.media.kyoto-u.ac.jp/members/david/pyaudio.tar.gz

* INSTALLATION INSTRUCTIONS: Just untar the package and drop it into scipy/Lib/sandbox, and add the two following lines to scipy/Lib/sandbox/setup.py:

# Package to make some noise using numpy
config.add_subpackage('pyaudio')

(if libsndfile.so is not in /usr/lib, a fortiori if you are a windows user, you should also set the right location for libsndfile in pyaudio/pysndfile.py, at the line _snd.cdll.LoadLibrary('/usr/lib/libsndfile.so'))

* EXAMPLE USAGE

== Reading example ==

# Reading from '/home/david/blop.flac'
from scipy.sandbox.pyaudio import sndfile
a = sndfile('/home/david/blop.flac')
print a
tmp = a.read_frames_float(1024)

--> Prints:

File        : /home/david/blop.flac
Sample rate : 44100
Channels    : 2
Frames      : 9979776
Format      : 0x00170002
Sections    : 1
Seekable    : True
Duration    : 00:03:46.298

And put into tmp the 1024 first frames (a frame is the equivalent of a sample, but taking into account the number of channels: so 1024 frames gives you 2048 samples here).

== Writing example ==

# Writing to a wavfile:
from scipy.sandbox.pyaudio import sndfile
import numpy as N
noise = N.random.randn(44100)
a = sndfile('/home/david/blop.flac', sfm['SFM_WRITE'], sf_format['SF_FORMAT_WAV'] | sf_format['SF_FORMAT_PCM16'], 1, 44100)
a.write_frames(noise, 44100)
a.close()

-> should give you a lossless compressed white noise!

This is really a first release, not really tested, not much documentation, I can just say it works for me. I haven't found a good way to emulate enumerations, which libsndfile uses a lot, so I am using dictionaries generated from the library C header to get a relation enum label <=> value. If someone has a better idea, I am open to suggestions!
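[Editor's note: on the enumeration question — modern Python (3.4+) ships an enum module that covers exactly this use case. A sketch of how libsndfile-style format constants could be exposed; the member values below are illustrative placeholders, not necessarily the real SF_FORMAT_* codes:]

```python
from enum import IntEnum

class SFFormat(IntEnum):
    """Named integer constants in the style of libsndfile's format enums.

    The numeric values are placeholders for illustration only.
    """
    WAV = 0x010000
    FLAC = 0x170000
    PCM16 = 0x0002

# IntEnum members behave like plain ints, so they can be OR'ed into the
# format word a C library expects, yet each value maps back to a name.
fmt = SFFormat.WAV | SFFormat.PCM16
print(hex(fmt))                 # -> 0x10002
print(SFFormat(0x170000).name)  # -> FLAC
```

This gives the same label <=> value mapping as the header-generated dictionaries, with the lookup in both directions handled by the enum class itself.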
Cheers, David From david at ar.media.kyoto-u.ac.jp Fri Oct 27 06:03:14 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 27 Oct 2006 19:03:14 +0900 Subject: [SciPy-dev] PRIMME: PReconditioned Iterative MultiMethod Eigensolver In-Reply-To: <453E48D5.1060009@gmail.com> References: <453DEF1B.7070903@iam.uni-stuttgart.de> <453E48D5.1060009@gmail.com> Message-ID: <4541D962.4080002@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > Nils Wagner wrote: > >> Hi all, >> >> A new eigensolver is available (*NA Digest, V. 06, # 43*). >> PRIMME is released under the Lesser GPL license. >> Is this licence compatible wrt scipy ? >> > > No, it cannot be included in scipy. You have asked this question many times > before, and the answer has never changed. > What do you mean by 'cannot be included in scipy', exactly ? For example, a numpy/scipy wrapper to a LGPL library is acceptable, if the library itself is not included in scipy sources ? Is a wrapper considered as a derivative work ? cheers, David From cimrman3 at ntc.zcu.cz Fri Oct 27 06:40:24 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Fri, 27 Oct 2006 12:40:24 +0200 Subject: [SciPy-dev] PRIMME: PReconditioned Iterative MultiMethod Eigensolver In-Reply-To: <4541D962.4080002@ar.media.kyoto-u.ac.jp> References: <453DEF1B.7070903@iam.uni-stuttgart.de> <453E48D5.1060009@gmail.com> <4541D962.4080002@ar.media.kyoto-u.ac.jp> Message-ID: <4541E218.2040505@ntc.zcu.cz> David Cournapeau wrote: > Robert Kern wrote: >> Nils Wagner wrote: >> >>> Hi all, >>> >>> A new eigensolver is available (*NA Digest, V. 06, # 43*). >>> PRIMME is released under the Lesser GPL license. >>> Is this licence compatible wrt scipy ? >>> >> No, it cannot be included in scipy. You have asked this question many times >> before, and the answer has never changed. >> > What do you mean by 'cannot be included in scipy', exactly ? 
For > example, a numpy/scipy wrapper to a LGPL library is acceptable, if the > library itself is not included in scipy sources ? Is a wrapper > considered as a derivative work ? It is the same situation as with the umfpack solver (GPL) - I have written the wrappers which are included in scipy (they do not use any umfpack code, they just use information about its API) and added a check into distutils and a section into site.cfg file to see if umfpack is installed in the system. IMHO (IANAL) a wrapper is not a derivative work - it does not use any code from the library it wraps (and considering it a derivative work would exclude all extensions that use e.g. linux system calls?) r. From matthew.brett at gmail.com Fri Oct 27 07:15:40 2006 From: matthew.brett at gmail.com (Matthew Brett) Date: Fri, 27 Oct 2006 12:15:40 +0100 Subject: [SciPy-dev] PRIMME: PReconditioned Iterative MultiMethod Eigensolver In-Reply-To: <4541E218.2040505@ntc.zcu.cz> References: <453DEF1B.7070903@iam.uni-stuttgart.de> <453E48D5.1060009@gmail.com> <4541D962.4080002@ar.media.kyoto-u.ac.jp> <4541E218.2040505@ntc.zcu.cz> Message-ID: <1e2af89e0610270415q76ca24bbv2bf0920ceb645cc@mail.gmail.com> Hi, > It is the same situation as with the umfpack solver (GPL) - I have > written the wrappers which are included in scipy (they do not use any > umfpack code, they just use information about its API) and added a check > into distutils and a section into site.cfg file to see if umfpack is > installed in the system. Well - the question was about the LGPL. My reading of the license: http://www.gnu.org/licenses/lgpl.html is that (under section 6) you are allowed to distribute an LGPL library with non-derived code that links to it, as long as your own license terms 'permit modification of [your] work for the customer's own use and reverse engineering for debugging such modifications', and that you provide (section 6a) the source code to the code linking to the library and the library itself. 
If I read that correctly, it seems to me that an LGPL library can be included with SciPy without SciPy needing to change its own license. Best, Matthew From cimrman3 at ntc.zcu.cz Fri Oct 27 07:22:09 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Fri, 27 Oct 2006 13:22:09 +0200 Subject: [SciPy-dev] PRIMME: PReconditioned Iterative MultiMethod Eigensolver In-Reply-To: <1e2af89e0610270415q76ca24bbv2bf0920ceb645cc@mail.gmail.com> References: <453DEF1B.7070903@iam.uni-stuttgart.de> <453E48D5.1060009@gmail.com> <4541D962.4080002@ar.media.kyoto-u.ac.jp> <4541E218.2040505@ntc.zcu.cz> <1e2af89e0610270415q76ca24bbv2bf0920ceb645cc@mail.gmail.com> Message-ID: <4541EBE1.9080202@ntc.zcu.cz> Matthew Brett wrote: > Hi, > >> It is the same situation as with the umfpack solver (GPL) - I have >> written the wrappers which are included in scipy (they do not use any >> umfpack code, they just use information about its API) and added a check >> into distutils and a section into site.cfg file to see if umfpack is >> installed in the system. > > Well - the question was about the LGPL. My reading of the license: > > http://www.gnu.org/licenses/lgpl.html > > is that (under section 6) you are allowed to distribute an LGPL > library with non-derived code that links to it, as long as your own > license terms 'permit modification of [your] work for the customer's > own use and reverse engineering for debugging such modifications', and > that you provide (section 6a) the source code to the code linking to > the library and the library itself. > > If I read that correctly, it seems to me that an LGPL library can be > included with SciPy without SciPy needing to change its own license. duh, I have missed the 'L'. So it's even easier. cheers, r. 
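[Editor's note] The umfpack arrangement described earlier in this thread — BSD-licensed wrappers inside scipy, with the GPL library itself detected at build time and used only if the user has installed it — reduces at runtime to a guarded import. A minimal sketch, where the module name `_umfpack_wrapper` is a hypothetical stand-in for the compiled bindings, not scipy's actual module layout:

```python
def get_sparse_solver():
    """Return the UMFPACK-backed solver when its wrapper imports,
    otherwise a stand-in that reports the library is missing."""
    try:
        import _umfpack_wrapper  # hypothetical compiled binding
    except ImportError:
        def missing(A, b):
            # Pure-Python placeholder keeps the call site uniform.
            raise NotImplementedError(
                "UMFPACK is not installed; no accelerated solver available")
        return missing
    return _umfpack_wrapper.solve

solver = get_sparse_solver()
print(callable(solver))  # -> True
```

Callers always get a callable with the same signature, so code depending on scipy need not branch on whether the optional library is present — which is precisely the fragility Robert Kern objects to below when such optional wrappers multiply.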
From bart.vandereycken at cs.kuleuven.be Fri Oct 27 10:36:15 2006 From: bart.vandereycken at cs.kuleuven.be (Bart Vandereycken) Date: Fri, 27 Oct 2006 16:36:15 +0200 Subject: [SciPy-dev] scipy.linalg shadowed In-Reply-To: <4533CA56.2070709@ieee.org> References: <4533CA56.2070709@ieee.org> Message-ID: Travis Oliphant wrote: > Bart Vandereycken wrote: >> Hi, >> >> the scipy.linalg module is shadowed by numpy.linalg >> >> > import scipy >> > scipy.linalg.__file__ >> '.../lib/python/numpy/linalg/__init__.pyc' >> >> import scipy.linalg works fine >> >> > > That's not what I see. What version do you have? I still have this error with the latest svn: In [6]: scipy.linalg.__file__ Out[6]: '/home/bartvde/Local/lib/python/numpy/linalg/__init__.pyc' In [7]: scipy.__version__ Out[7]: '0.5.2.dev2299' Is this a bug or am I doing something wrong? Bart From nwagner at iam.uni-stuttgart.de Fri Oct 27 11:06:11 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 27 Oct 2006 17:06:11 +0200 Subject: [SciPy-dev] scipy.linalg shadowed In-Reply-To: References: <4533CA56.2070709@ieee.org> Message-ID: <45422063.6030407@iam.uni-stuttgart.de> Bart Vandereycken wrote: > Travis Oliphant wrote: > >> Bart Vandereycken wrote: >> >>> Hi, >>> >>> the scipy.linalg module is shadowed by numpy.linalg >>> >>> > import scipy >>> > scipy.linalg.__file__ >>> '.../lib/python/numpy/linalg/__init__.pyc' >>> >>> import scipy.linalg works fine >>> >>> >>> >> That's not what I see. What version do you have? >> > > I still have this error with the latest svn: > > In [6]: scipy.linalg.__file__ > Out[6]: '/home/bartvde/Local/lib/python/numpy/linalg/__init__.pyc' > > In [7]: scipy.__version__ > Out[7]: '0.5.2.dev2299' > > > Is this a bug or am I doing something wrong? 
> > > Bart > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > >>> import scipy >>> scipy.__version__ '0.5.2.dev2299' >>> scipy.linalg.__file__ '/usr/lib64/python2.4/site-packages/numpy/linalg/__init__.pyc' >>> import numpy >>> numpy.linalg.__file__ '/usr/lib64/python2.4/site-packages/numpy/linalg/__init__.pyc' >>> from scipy import * >>> scipy.linalg.__file__ '/usr/lib64/python2.4/site-packages/scipy/linalg/__init__.pyc' Nils From oliphant.travis at ieee.org Fri Oct 27 12:11:37 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 27 Oct 2006 10:11:37 -0600 Subject: [SciPy-dev] scipy.linalg shadowed In-Reply-To: <45422063.6030407@iam.uni-stuttgart.de> References: <4533CA56.2070709@ieee.org> <45422063.6030407@iam.uni-stuttgart.de> Message-ID: <45422FB9.9020502@ieee.org> Nils Wagner wrote: >> >> >>>> import scipy >>>> scipy.__version__ >>>> > '0.5.2.dev2299' > >>>> scipy.linalg.__file__ >>>> > '/usr/lib64/python2.4/site-packages/numpy/linalg/__init__.pyc' > >>>> import numpy >>>> numpy.linalg.__file__ >>>> > '/usr/lib64/python2.4/site-packages/numpy/linalg/__init__.pyc' > >>>> from scipy import * >>>> scipy.linalg.__file__ >>>> > '/usr/lib64/python2.4/site-packages/scipy/linalg/__init__.pyc' > > > This is odd. Here's what I get. 
In [1]: import scipy

In [2]: scipy.__version__
Out[2]: '0.5.2.dev2299'

In [3]: scipy.linalg
Out[3]: <module 'scipy.linalg' from '/usr/lib/python2.4/site-packages/scipy/linalg/__init__.pyc'>

In [4]: scipy.linalg.__file__
Out[4]: '/usr/lib/python2.4/site-packages/scipy/linalg/__init__.pyc'

-Travis From robert.kern at gmail.com Fri Oct 27 12:32:45 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 27 Oct 2006 11:32:45 -0500 Subject: [SciPy-dev] PRIMME: PReconditioned Iterative MultiMethod Eigensolver In-Reply-To: <1e2af89e0610270415q76ca24bbv2bf0920ceb645cc@mail.gmail.com> References: <453DEF1B.7070903@iam.uni-stuttgart.de> <453E48D5.1060009@gmail.com> <4541D962.4080002@ar.media.kyoto-u.ac.jp> <4541E218.2040505@ntc.zcu.cz> <1e2af89e0610270415q76ca24bbv2bf0920ceb645cc@mail.gmail.com> Message-ID: <454234AD.9010607@gmail.com> Matthew Brett wrote: > Hi, > >> It is the same situation as with the umfpack solver (GPL) - I have >> written the wrappers which are included in scipy (they do not use any >> umfpack code, they just use information about its API) and added a check >> into distutils and a section into site.cfg file to see if umfpack is >> installed in the system. > > Well - the question was about the LGPL. My reading of the license: > > http://www.gnu.org/licenses/lgpl.html > > is that (under section 6) you are allowed to distribute an LGPL > library with non-derived code that links to it, as long as your own > license terms 'permit modification of [your] work for the customer's > own use and reverse engineering for debugging such modifications', and > that you provide (section 6a) the source code to the code linking to > the library and the library itself. > > If I read that correctly, it seems to me that an LGPL library can be > included with SciPy without SciPy needing to change its own license. The only reason that wrappers for FFTW and UMFPACK are allowed are because they are optional.
We need people to be able to "install scipy" without needing to install *any* GPL or LGPL code even if the only code in our SVN repository is just the BSD-licensed wrappers. If you want more wrappers for GPL or LGPL software, it's time to start the scikits package. I'd really like to avoid more optional wrappers. It makes writing code that depends on scipy much harder. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From nwagner at iam.uni-stuttgart.de Fri Oct 27 13:47:14 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 27 Oct 2006 19:47:14 +0200 Subject: [SciPy-dev] scipy.linalg shadowed In-Reply-To: <45422FB9.9020502@ieee.org> References: <4533CA56.2070709@ieee.org> <45422063.6030407@iam.uni-stuttgart.de> <45422FB9.9020502@ieee.org> Message-ID: On Fri, 27 Oct 2006 10:11:37 -0600 Travis Oliphant wrote: > Nils Wagner wrote: >>> >>> >>>>> import scipy >>>>> scipy.__version__ >>>>> >> '0.5.2.dev2299' >> >>>>> scipy.linalg.__file__ >>>>> >> '/usr/lib64/python2.4/site-packages/numpy/linalg/__init__.pyc' >> >>>>> import numpy >>>>> numpy.linalg.__file__ >>>>> >> '/usr/lib64/python2.4/site-packages/numpy/linalg/__init__.pyc' >> >>>>> from scipy import * >>>>> scipy.linalg.__file__ >>>>> >> '/usr/lib64/python2.4/site-packages/scipy/linalg/__init__.pyc' >> >> >> > > This is odd. Here's what I get. 
> In [1]: import scipy
>
> In [2]: scipy.__version__
> Out[2]: '0.5.2.dev2299'
>
> In [3]: scipy.linalg
> Out[3]: <module 'scipy.linalg' from '/usr/lib/python2.4/site-packages/scipy/linalg/__init__.pyc'>
>
> In [4]: scipy.linalg.__file__
> Out[4]: '/usr/lib/python2.4/site-packages/scipy/linalg/__init__.pyc'
>
> -Travis
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev

Hi Travis,

I have installed everything from scratch.

>>> import scipy
>>> scipy.linalg
<module 'numpy.linalg' from '/usr/lib64/python2.4/site-packages/numpy/linalg/__init__.pyc'>
>>> scipy.__version__
'0.5.2.dev2299'

Can someone reproduce this behaviour ?

Nils

From oliphant.travis at ieee.org Fri Oct 27 14:40:43 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 27 Oct 2006 12:40:43 -0600 Subject: [SciPy-dev] PRIMME: PReconditioned Iterative MultiMethod Eigensolver In-Reply-To: <454234AD.9010607@gmail.com> References: <453DEF1B.7070903@iam.uni-stuttgart.de> <453E48D5.1060009@gmail.com> <4541D962.4080002@ar.media.kyoto-u.ac.jp> <4541E218.2040505@ntc.zcu.cz> <1e2af89e0610270415q76ca24bbv2bf0920ceb645cc@mail.gmail.com> <454234AD.9010607@gmail.com> Message-ID: <454252AB.8070108@ieee.org> Robert Kern wrote: > Matthew Brett wrote: > >> Hi, >> >> >>> It is the same situation as with the umfpack solver (GPL) - I have >>> written the wrappers which are included in scipy (they do not use any >>> umfpack code, they just use information about its API) and added a check >>> into distutils and a section into site.cfg file to see if umfpack is >>> installed in the system. >>> >> Well - the question was about the LGPL.
My reading of the license: >> >> http://www.gnu.org/licenses/lgpl.html >> >> is that (under section 6) you are allowed to distribute an LGPL >> library with non-derived code that links to it, as long as your own >> license terms 'permit modification of [your] work for the customer's >> own use and reverse engineering for debugging such modifications', and >> that you provide (section 6a) the source code to the code linking to >> the library and the library itself. >> >> If I read that correctly, it seems to me that an LGPL library can be >> included with SciPy without SciPy needing to change its own license. >> > > The only reason that wrappers for FFTW and UMFPACK are allowed are because they > are optional. We need people to be able to "install scipy" without needing to > install *any* GPL or LGPL code even if the only code in our SVN repository is > just the BSD-licensed wrappers. > > If you want more wrappers for GPL or LGPL software, it's time to start the > scikits package. I'd really like to avoid more optional wrappers. It makes > It looks like David's PyAudio is a perfect thing to get this started since it is a "wrapper" to libsndfile which is LGPL. How should we proceed? Can Robert set up a new SVN tree for scikits? Can I? What kind of things should go into scikits. How should they be installed -- under the scikits name-space (do we really like that name --- it sounds quirky to me)? We have something to get started, let's do it. 
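[Editor's note] One plausible mechanics for the shared name-space Travis asks about is a namespace package: each separately-distributed subpackage ships its own copy of a one-line `scikits/__init__.py`, so independently installed pieces merge under one import name. A sketch only — the `scikits` layout and the `audio`/`primme` subpackage names are hypothetical here, and the stdlib `pkgutil.extend_path` is just one standard way to declare such a namespace:

```python
import os
import sys
import tempfile

# Two independent install locations, each shipping its own copy of the
# namespace __init__.py plus one subpackage.
base = tempfile.mkdtemp()
for site, sub in (("site_a", "audio"), ("site_b", "primme")):
    pkg = os.path.join(base, site, "scikits")
    os.makedirs(os.path.join(pkg, sub))
    with open(os.path.join(pkg, "__init__.py"), "w") as f:
        # The namespace declaration: merge every 'scikits' dir on sys.path.
        f.write("from pkgutil import extend_path\n"
                "__path__ = extend_path(__path__, __name__)\n")
    with open(os.path.join(pkg, sub, "__init__.py"), "w") as f:
        f.write("name = %r\n" % sub)
    sys.path.insert(0, os.path.join(base, site))

# Both subpackages import even though they live in different trees.
import scikits.audio
import scikits.primme
print(scikits.audio.name, scikits.primme.name)  # -> audio primme
```

This matches the "keep the sources separate although they share a common namespace package" layout discussed later in the thread: each subpackage can be versioned, licensed, and installed on its own.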
-Travis From nwagner at iam.uni-stuttgart.de Fri Oct 27 14:45:22 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 27 Oct 2006 20:45:22 +0200 Subject: [SciPy-dev] PRIMME: PReconditioned Iterative MultiMethod Eigensolver In-Reply-To: <454252AB.8070108@ieee.org> References: <453DEF1B.7070903@iam.uni-stuttgart.de> <453E48D5.1060009@gmail.com> <4541D962.4080002@ar.media.kyoto-u.ac.jp> <4541E218.2040505@ntc.zcu.cz> <1e2af89e0610270415q76ca24bbv2bf0920ceb645cc@mail.gmail.com> <454234AD.9010607@gmail.com> <454252AB.8070108@ieee.org> Message-ID: On Fri, 27 Oct 2006 12:40:43 -0600 Travis Oliphant wrote: > Robert Kern wrote: >> Matthew Brett wrote: >> >>> Hi, >>> >>> >>>> It is the same situation as with the umfpack solver >>>>(GPL) - I have >>>> written the wrappers which are included in scipy (they >>>>do not use any >>>> umfpack code, they just use information about its API) >>>>and added a check >>>> into distutils and a section into site.cfg file to see >>>>if umfpack is >>>> installed in the system. >>>> >>> Well - the question was about the LGPL. My reading of >>>the license: >>> >>> http://www.gnu.org/licenses/lgpl.html >>> >>> is that (under section 6) you are allowed to distribute >>>an LGPL >>> library with non-derived code that links to it, as long >>>as your own >>> license terms 'permit modification of [your] work for >>>the customer's >>> own use and reverse engineering for debugging such >>>modifications', and >>> that you provide (section 6a) the source code to the >>>code linking to >>> the library and the library itself. >>> >>> If I read that correctly, it seems to me that an LGPL >>>library can be >>> included with SciPy without SciPy needing to change its >>>own license. >>> >> >> The only reason that wrappers for FFTW and UMFPACK are >>allowed are because they >> are optional. 
We need people to be able to "install >>scipy" without needing to >> install *any* GPL or LGPL code even if the only code in >>our SVN repository is >> just the BSD-licensed wrappers. >> >> If you want more wrappers for GPL or LGPL software, it's >>time to start the >> scikits package. I'd really like to avoid more optional >>wrappers. It makes >> > It looks like David's PyAudio is a perfect thing to get >this started > since it is a "wrapper" to libsndfile which is LGPL. > > How should we proceed? Can Robert set up a new SVN tree >for scikits? > Can I? What kind of things should go into scikits. How >should they be > installed -- under the scikits name-space (do we really >like that name > --- it sounds quirky to me)? > > We have something to get started, let's do it. > > -Travis > > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev scitools How about that ? Nils From robert.kern at gmail.com Fri Oct 27 16:57:58 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 27 Oct 2006 15:57:58 -0500 Subject: [SciPy-dev] PRIMME: PReconditioned Iterative MultiMethod Eigensolver In-Reply-To: <454252AB.8070108@ieee.org> References: <453DEF1B.7070903@iam.uni-stuttgart.de> <453E48D5.1060009@gmail.com> <4541D962.4080002@ar.media.kyoto-u.ac.jp> <4541E218.2040505@ntc.zcu.cz> <1e2af89e0610270415q76ca24bbv2bf0920ceb645cc@mail.gmail.com> <454234AD.9010607@gmail.com> <454252AB.8070108@ieee.org> Message-ID: <454272D6.8010907@gmail.com> Travis Oliphant wrote: > How should we proceed? Can Robert set up a new SVN tree for scikits? > Can I? What kind of things should go into scikits. How should they be > installed -- under the scikits name-space (do we really like that name > --- it sounds quirky to me)? When you decide on a name, Jeff Strunk can set up a projects.scipy.org project for it. 
Since each sciwhatever.* subpackage will be separate from the others, I would recommend following Zope's lead and keeping the sources separate although they share a common namespace package. E.g.

trunk/
  audio/
    README
    setup.py
    sciwhatever/
      __init__.py
      audio/
        __init__.py
        audiostuff.py
  primme/
    README
    setup.py
    sciwhatever/
      __init__.py
      primme/
        __init__.py
        eigenstuff.py

I think the focus of sciwhatever should still be on open source packages, so I would reject packages with research-use-only licenses and the like. I must admit that I'm kind of fond of "sciwhatever" as a name, now. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at ar.media.kyoto-u.ac.jp Fri Oct 27 22:29:23 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 28 Oct 2006 11:29:23 +0900 Subject: [SciPy-dev] PRIMME: PReconditioned Iterative MultiMethod Eigensolver In-Reply-To: <454234AD.9010607@gmail.com> References: <453DEF1B.7070903@iam.uni-stuttgart.de> <453E48D5.1060009@gmail.com> <4541D962.4080002@ar.media.kyoto-u.ac.jp> <4541E218.2040505@ntc.zcu.cz> <1e2af89e0610270415q76ca24bbv2bf0920ceb645cc@mail.gmail.com> <454234AD.9010607@gmail.com> Message-ID: <4542C083.2070208@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > > The only reason that wrappers for FFTW and UMFPACK are allowed are because they > are optional. We need people to be able to "install scipy" without needing to > install *any* GPL or LGPL code even if the only code in our SVN repository is > just the BSD-licensed wrappers. > > If you want more wrappers for GPL or LGPL software, it's time to start the > scikits package. I'd really like to avoid more optional wrappers. It makes > writing code that depends on scipy much harder. > I have seen scikits mentioned before already, but I didn't find an explanation on what it is (supposed to be ?) in the archive.
Is it just a separate svn branch to have clean separation between code which depends on different bindings ? I am not sure to understand the link between scikits and the problem of depending on optional codes ? cheers, David From david at ar.media.kyoto-u.ac.jp Fri Oct 27 22:32:29 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 28 Oct 2006 11:32:29 +0900 Subject: [SciPy-dev] PRIMME: PReconditioned Iterative MultiMethod Eigensolver In-Reply-To: <4542C083.2070208@ar.media.kyoto-u.ac.jp> References: <453DEF1B.7070903@iam.uni-stuttgart.de> <453E48D5.1060009@gmail.com> <4541D962.4080002@ar.media.kyoto-u.ac.jp> <4541E218.2040505@ntc.zcu.cz> <1e2af89e0610270415q76ca24bbv2bf0920ceb645cc@mail.gmail.com> <454234AD.9010607@gmail.com> <4542C083.2070208@ar.media.kyoto-u.ac.jp> Message-ID: <4542C13D.8050205@ar.media.kyoto-u.ac.jp> David Cournapeau wrote: > I have seen scikits mentioned before already, but I didn't find an > explanation on what it is (supposed to be ?) in the archive. Is it just > a separate svn branch to have clean separation between code which > depends on different bindings ? I am not sure to understand the link > between scikits and the problem of depending on optional codes ? 
> > I am sorry, I meant is scikits a separate svn branch for code which depends on libraries with different *licenses* than scipy, David From robert.kern at gmail.com Sat Oct 28 00:15:51 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 27 Oct 2006 23:15:51 -0500 Subject: [SciPy-dev] PRIMME: PReconditioned Iterative MultiMethod Eigensolver In-Reply-To: <4542C13D.8050205@ar.media.kyoto-u.ac.jp> References: <453DEF1B.7070903@iam.uni-stuttgart.de> <453E48D5.1060009@gmail.com> <4541D962.4080002@ar.media.kyoto-u.ac.jp> <4541E218.2040505@ntc.zcu.cz> <1e2af89e0610270415q76ca24bbv2bf0920ceb645cc@mail.gmail.com> <454234AD.9010607@gmail.com> <4542C083.2070208@ar.media.kyoto-u.ac.jp> <4542C13D.8050205@ar.media.kyoto-u.ac.jp> Message-ID: <4542D977.2020401@gmail.com> David Cournapeau wrote: > David Cournapeau wrote: >> I have seen scikits mentioned before already, but I didn't find an >> explanation on what it is (supposed to be ?) in the archive. Is it just >> a separate svn branch to have clean separation between code which >> depends on different bindings ? I am not sure to understand the link >> between scikits and the problem of depending on optional codes ? >> > I am sorry, I meant is scikits a separate svn branch for code which > depends on libraries with different *licenses* than scipy, Fernando Perez brought up the idea some time ago to establish a namespace package (with projects.scipy.org hosting) for valuable scipy-related projects that {c,w,sh}ould not actually go into scipy for whatever reason. One good reason would be the license. Instead of making more code based on LGPL and GPL software optional to avoid license hassles, it's cleaner all around to simply put the problematic code into a separate package. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From ellisonbg.net at gmail.com Sat Oct 28 01:26:19 2006 From: ellisonbg.net at gmail.com (Brian Granger) Date: Fri, 27 Oct 2006 23:26:19 -0600 Subject: [SciPy-dev] PRIMME: PReconditioned Iterative MultiMethod Eigensolver In-Reply-To: <4542D977.2020401@gmail.com> References: <453DEF1B.7070903@iam.uni-stuttgart.de> <453E48D5.1060009@gmail.com> <4541D962.4080002@ar.media.kyoto-u.ac.jp> <4541E218.2040505@ntc.zcu.cz> <1e2af89e0610270415q76ca24bbv2bf0920ceb645cc@mail.gmail.com> <454234AD.9010607@gmail.com> <4542C083.2070208@ar.media.kyoto-u.ac.jp> <4542C13D.8050205@ar.media.kyoto-u.ac.jp> <4542D977.2020401@gmail.com> Message-ID: <6ce0ac130610272226l7d9b5a97h51af9b63fb99e262@mail.gmail.com> My few cents...

While the license issues are important, I don't think they should be the only thing that is considered when deciding about what gets included into scipy. A few things that I think /should/ be strongly considered:

* Is the code strongly coupled to scipy already? Does it have scipy as a requirement? Does it only require NumPy?

* Does the code provide capabilities that *many* users of scipy would want?

* Would people ever want to use the code without the rest of scipy? If the answer is yes then I think the code should probably NOT be included in SciPy. Two examples of projects that fit into this category (license issues aside): pytables and mpi4py. While one could make an argument that scipy should include tools like these, many people need to use these things who don't need scipy at all. I work with many people who use pytables/mpi4py on supercomputers regularly - and installing things on these systems can be a huge pain. If these packages were included with scipy, scipy would become a hindrance and burden at that point.

* NumPy goes a long way in encouraging interoperability amongst numerical codes written in Python. If a code already interoperates with NumPy and SciPy is there any additional benefit to actually including it with SciPy?
I do think it is a good idea to keep the non BSD code in a separate namespace (as Fernando has suggested). Either that or they can be hosted as separate foo.scipy.org projects like ipython.scipy.org, pyxg.scipy.org, wavelets.scipy.org, etc. Why not have PyPRIMME.scipy.org? Brian On 10/27/06, Robert Kern wrote: > David Cournapeau wrote: > > David Cournapeau wrote: > >> I have seen scikits mentioned before already, but I didn't find an > >> explanation on what it is (supposed to be ?) in the archive. Is it just > >> a separate svn branch to have clean separation between code which > >> depends on different bindings ? I am not sure to understand the link > >> between scikits and the problem of depending on optional codes ? > >> > > I am sorry, I meant is scikits a separate svn branch for code which > > depends on libraries with different *licenses* than scipy, > > Fernando Perez brought up the idea some time ago to establish a namespace > package (with projects.scipy.org hosting) for valuable scipy-related projects > that {c,w,sh}ould not actually go into scipy for whatever reason. One good > reason would be the license. > > Instead of making more code based on LGPL and GPL software optional to avoid > license hassles, it's cleaner all around to simply put the problematic code into > a separate package. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth." 
> -- Umberto Eco > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From gruben at bigpond.net.au Sat Oct 28 01:43:01 2006 From: gruben at bigpond.net.au (Gary Ruben) Date: Sat, 28 Oct 2006 15:43:01 +1000 Subject: [SciPy-dev] PRIMME: PReconditioned Iterative MultiMethod Eigensolver In-Reply-To: <6ce0ac130610272226l7d9b5a97h51af9b63fb99e262@mail.gmail.com> References: <453DEF1B.7070903@iam.uni-stuttgart.de> <453E48D5.1060009@gmail.com> <4541D962.4080002@ar.media.kyoto-u.ac.jp> <4541E218.2040505@ntc.zcu.cz> <1e2af89e0610270415q76ca24bbv2bf0920ceb645cc@mail.gmail.com> <454234AD.9010607@gmail.com> <4542C083.2070208@ar.media.kyoto-u.ac.jp> <4542C13D.8050205@ar.media.kyoto-u.ac.jp> <4542D977.2020401@gmail.com> <6ce0ac130610272226l7d9b5a97h51af9b63fb99e262@mail.gmail.com> Message-ID: <4542EDE5.2050101@bigpond.net.au> Hi Brian, The issue is not one of convenience. The issue is that the GPL and LGPL are legally and practically incompatible with the scipy/MIT licence and would require a change of the scipy licence if any of their code were to be included with the existing package. What I'm unsure of is whether it's OK to combine GPL and LGPL code (and GPL3?) into a single package. My guess is that it's not and that this means a proliferation of extra packages are required. If this is the case maybe they should be called scipy_gpl and scipy_lgpl or scikits.gpl, scikits.lgpl or something. Maybe there can be an individual egg for each one which carries the relevant licence around. Gary R. Brian Granger wrote: > My few cents... > > While the license issues are important, I don't think they should be > the only thing that is considered when deciding about what gets > included into scipy. A few things that I think /should/ be strongly > considered: > > * Is the code strongly coupled to scipy already? Does it have scipy > as a requirement? Does it only require NumPy?
> > * Does the code provide capabilities that *many* users of scipy would want? > > * Would people ever want to use the code without the rest of scipy? > If the answer is yes then I think the code should probably NOT be > included in SciPy. Two example of projects that fit into this > category (license issues aside): pytables and mpi4py. While one could > make an argument that scipy should include tools like these, many > people need to use these things who don't need scipy at all. I work > with many people who use pytables/mpi4py on supercomputers regularly - > and installing things on these systems can be a huge pain. If these > packages were included with scipy, scpy would become a hinderance and > burden at that point. > > * NumPy goes a long way in encouranging interoperability amongst > numerical codes written in Python. If a code already interoperates > with NumPy and SciPy is there any additional benefit to actually > including it with with SciPy? > > I do think it is a good idea to keep the non BSD code in a separate > namespace (as Fernando has suggested). Either that or they can be > hosted as separate foo.scipy.org projects like ipython.scipy.org, > pyxg.scipy.org, wavelets.scipy.org, etc. Why not have > PyPRIMME.scipy.org? 
> > Brian From robert.kern at gmail.com Sat Oct 28 02:10:45 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 28 Oct 2006 01:10:45 -0500 Subject: [SciPy-dev] PRIMME: PReconditioned Iterative MultiMethod Eigensolver In-Reply-To: <6ce0ac130610272226l7d9b5a97h51af9b63fb99e262@mail.gmail.com> References: <453DEF1B.7070903@iam.uni-stuttgart.de> <453E48D5.1060009@gmail.com> <4541D962.4080002@ar.media.kyoto-u.ac.jp> <4541E218.2040505@ntc.zcu.cz> <1e2af89e0610270415q76ca24bbv2bf0920ceb645cc@mail.gmail.com> <454234AD.9010607@gmail.com> <4542C083.2070208@ar.media.kyoto-u.ac.jp> <4542C13D.8050205@ar.media.kyoto-u.ac.jp> <4542D977.2020401@gmail.com> <6ce0ac130610272226l7d9b5a97h51af9b63fb99e262@mail.gmail.com> Message-ID: <4542F465.9010007@gmail.com> Brian Granger wrote: > My few cents... > > While the license issues are important, I don't think they should be > the only thing that is considered when deciding about what gets > included into scipy. A few things that I think /should/ be strongly > considered: I agree. I would have mentioned them, too, but I blanked when I wrote that email. :-) > I do think it is a good idea to keep the non BSD code in a separate > namespace (as Fernando has suggested). Either that or they can be > hosted as separate foo.scipy.org projects like ipython.scipy.org, > pyxg.scipy.org, wavelets.scipy.org, etc. Why not have > PyPRIMME.scipy.org? Of course, since no one has actually stated that they will make wrappers for PRIMME, that example is a bit moot. :-) Anyways, I'd rather start making a scikits.scipy.org and establish the scikits namespace for administrative reasons if nothing else. While projects.scipy.org is a useful hub as it is, the overhead of yet another Trac/SVN/MoinMoin instance is probably needless. I think it's going to be easier to convince someone with a useful package to simply add themselves to scikits.scipy.org and share the administrative burden of the site. 
Of course, we'll still happily add new top-level projects that can't be or simply don't want to be a scikits package. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Sat Oct 28 02:12:19 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 28 Oct 2006 01:12:19 -0500 Subject: [SciPy-dev] PRIMME: PReconditioned Iterative MultiMethod Eigensolver In-Reply-To: <4542EDE5.2050101@bigpond.net.au> References: <453DEF1B.7070903@iam.uni-stuttgart.de> <453E48D5.1060009@gmail.com> <4541D962.4080002@ar.media.kyoto-u.ac.jp> <4541E218.2040505@ntc.zcu.cz> <1e2af89e0610270415q76ca24bbv2bf0920ceb645cc@mail.gmail.com> <454234AD.9010607@gmail.com> <4542C083.2070208@ar.media.kyoto-u.ac.jp> <4542C13D.8050205@ar.media.kyoto-u.ac.jp> <4542D977.2020401@gmail.com> <6ce0ac130610272226l7d9b5a97h51af9b63fb99e262@mail.gmail.com> <4542EDE5.2050101@bigpond.net.au> Message-ID: <4542F4C3.3070701@gmail.com> Gary Ruben wrote: > Hi Brian, > The issue is not one of convenience. The issue is that the GPL and LGPL > are legally and practically incompatible with the scipy/MIT licence and > would require a change of the scipy licence is any of their code were to > be included with the existing package. No, it wouldn't. scipy's license is compatible with the GPL in that one can combine scipy code and GPLed code just fine without relicensing anything. The scipy license grants strictly more freedoms than the GPL. However, the longstanding *policy* of the scipy project is not to include code with more restrictions than the BSD license. There are some exceptions made for certain wrappers which are strictly optional like FFTW. > What I'm unsure of if whether > it's OK to combine GPL and LGPL code (and GPL3?) into a single package. Yes, they are constructed explicitly to allow that. 
> My guess is that it's not and that this means a proliferation of extra > packages are required. If this is the case maybe they should be called > scipy_gpl and scipy_lgpl or scikits.gpl, scikits.lgpl or something. > Maybe there can be an individual egg for each one which carries the > relevant licence around. There's no point in that. Each scikits.* subpackage can have its own license. There will be no "scikits license" that applies to everything distributed in the scikits namespace. Users of each package are responsible for obeying the conditions of the licenses. That they are in the same SVN repository is "mere aggregation" as the term is used in the GPL and LGPL. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From gruben at bigpond.net.au Sat Oct 28 05:08:29 2006 From: gruben at bigpond.net.au (Gary Ruben) Date: Sat, 28 Oct 2006 19:08:29 +1000 Subject: [SciPy-dev] PRIMME: PReconditioned Iterative MultiMethod Eigensolver In-Reply-To: <4542F4C3.3070701@gmail.com> References: <453DEF1B.7070903@iam.uni-stuttgart.de> <453E48D5.1060009@gmail.com> <4541D962.4080002@ar.media.kyoto-u.ac.jp> <4541E218.2040505@ntc.zcu.cz> <1e2af89e0610270415q76ca24bbv2bf0920ceb645cc@mail.gmail.com> <454234AD.9010607@gmail.com> <4542C083.2070208@ar.media.kyoto-u.ac.jp> <4542C13D.8050205@ar.media.kyoto-u.ac.jp> <4542D977.2020401@gmail.com> <6ce0ac130610272226l7d9b5a97h51af9b63fb99e262@mail.gmail.com> <4542EDE5.2050101@bigpond.net.au> <4542F4C3.3070701@gmail.com> Message-ID: <45431E0D.4010701@bigpond.net.au> Robert Kern wrote: > Gary Ruben wrote: >> Hi Brian, >> The issue is not one of convenience. The issue is that the GPL and LGPL >> are legally and practically incompatible with the scipy/MIT licence and >> would require a change of the scipy licence is any of their code were to >> be included with the existing package. 
> > No, it wouldn't. scipy's license is compatible with the GPL in that one can > combine scipy code and GPLed code just fine without relicensing anything. The > scipy license grants strictly more freedoms than the GPL. > > However, the longstanding *policy* of the scipy project is not to include code > with more restrictions than the BSD license. There are some exceptions made for > certain wrappers which are strictly optional like FFTW. Sorry. Thanks for the correction. Would it be more correct to say that there might be the possibility that, were Scipy to be incorporated in commercial software, the existence of GPLed parts might cause confusion on the part of developers? Is this the reason for the policy? Gary R. From matthew.brett at gmail.com Sat Oct 28 05:26:56 2006 From: matthew.brett at gmail.com (Matthew Brett) Date: Sat, 28 Oct 2006 10:26:56 +0100 Subject: [SciPy-dev] PRIMME: PReconditioned Iterative MultiMethod Eigensolver In-Reply-To: <4542EDE5.2050101@bigpond.net.au> References: <453DEF1B.7070903@iam.uni-stuttgart.de> <4541D962.4080002@ar.media.kyoto-u.ac.jp> <4541E218.2040505@ntc.zcu.cz> <1e2af89e0610270415q76ca24bbv2bf0920ceb645cc@mail.gmail.com> <454234AD.9010607@gmail.com> <4542C083.2070208@ar.media.kyoto-u.ac.jp> <4542C13D.8050205@ar.media.kyoto-u.ac.jp> <4542D977.2020401@gmail.com> <6ce0ac130610272226l7d9b5a97h51af9b63fb99e262@mail.gmail.com> <4542EDE5.2050101@bigpond.net.au> Message-ID: <1e2af89e0610280226r75d57b53y7588766fcb5bdf19@mail.gmail.com> Hi, > The issue is not one of convenience. The issue is that the GPL and LGPL > are legally and practically incompatible with the scipy/MIT licence and > would require a change of the scipy licence is any of their code were to > be included with the existing package. I am sorry to persist, but as far as I can see, including an LGPL (rather than GPL) package, with wrappers, is not incompatible with the scipy license. 
I may have misunderstood the license, but if I haven't, then a policy of not including LGPL code will put an unnecessary hurdle in the way of scipy users. I can imagine there is a reason to prefer that option, but I wasn't sure what it was - hence my earlier question... Best, Matthew From robert.kern at gmail.com Sat Oct 28 05:33:35 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 28 Oct 2006 04:33:35 -0500 Subject: [SciPy-dev] PRIMME: PReconditioned Iterative MultiMethod Eigensolver In-Reply-To: <45431E0D.4010701@bigpond.net.au> References: <453DEF1B.7070903@iam.uni-stuttgart.de> <453E48D5.1060009@gmail.com> <4541D962.4080002@ar.media.kyoto-u.ac.jp> <4541E218.2040505@ntc.zcu.cz> <1e2af89e0610270415q76ca24bbv2bf0920ceb645cc@mail.gmail.com> <454234AD.9010607@gmail.com> <4542C083.2070208@ar.media.kyoto-u.ac.jp> <4542C13D.8050205@ar.media.kyoto-u.ac.jp> <4542D977.2020401@gmail.com> <6ce0ac130610272226l7d9b5a97h51af9b63fb99e262@mail.gmail.com> <4542EDE5.2050101@bigpond.net.au> <4542F4C3.3070701@gmail.com> <45431E0D.4010701@bigpond.net.au> Message-ID: <454323EF.2080207@gmail.com> Gary Ruben wrote: > Robert Kern wrote: >> Gary Ruben wrote: >>> Hi Brian, >>> The issue is not one of convenience. The issue is that the GPL and LGPL >>> are legally and practically incompatible with the scipy/MIT licence and >>> would require a change of the scipy licence is any of their code were to >>> be included with the existing package. >> No, it wouldn't. scipy's license is compatible with the GPL in that one can >> combine scipy code and GPLed code just fine without relicensing anything. The >> scipy license grants strictly more freedoms than the GPL. >> >> However, the longstanding *policy* of the scipy project is not to include code >> with more restrictions than the BSD license. There are some exceptions made for >> certain wrappers which are strictly optional like FFTW. > > Sorry. Thanks for the correction. 
Would it be more correct to say that > there might be the possibility that, were Scipy to be incorporated in > commercial software, the existence of GPLed parts might cause confusion > on the part of developers? Is this the reason for the policy? It's more than a possibility; it *is* incorporated in all of Enthought's proprietary software. As long as scipy is still a monolithic package with interdependencies between its components, introducing a non-optional GPLed component does more than cause confusion; it makes scipy impossible for us and others to use in this manner. The LGPL does not cause the same issue; however, we've always favored keeping scipy as a whole BSD-licensed for a number of reasons. The Python community tends towards BSD-like licenses due to Python's license; BSD-like licenses are easier to explain to our customers; their requirements are easier to explain to other programmers. At this point in time, introducing an LGPLed component is most likely to cause confusion amongst the other BSDish components, as you say. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From robert.kern at gmail.com Sat Oct 28 05:51:19 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 28 Oct 2006 04:51:19 -0500 Subject: [SciPy-dev] PRIMME: PReconditioned Iterative MultiMethod Eigensolver In-Reply-To: <1e2af89e0610280226r75d57b53y7588766fcb5bdf19@mail.gmail.com> References: <453DEF1B.7070903@iam.uni-stuttgart.de> <4541D962.4080002@ar.media.kyoto-u.ac.jp> <4541E218.2040505@ntc.zcu.cz> <1e2af89e0610270415q76ca24bbv2bf0920ceb645cc@mail.gmail.com> <454234AD.9010607@gmail.com> <4542C083.2070208@ar.media.kyoto-u.ac.jp> <4542C13D.8050205@ar.media.kyoto-u.ac.jp> <4542D977.2020401@gmail.com> <6ce0ac130610272226l7d9b5a97h51af9b63fb99e262@mail.gmail.com> <4542EDE5.2050101@bigpond.net.au> <1e2af89e0610280226r75d57b53y7588766fcb5bdf19@mail.gmail.com> Message-ID: <45432817.1060508@gmail.com> Matthew Brett wrote: > Hi, > >> The issue is not one of convenience. The issue is that the GPL and LGPL >> are legally and practically incompatible with the scipy/MIT licence and >> would require a change of the scipy licence is any of their code were to >> be included with the existing package. > > I am sorry to persist, but as far as I can see, including an LGPL > (rather than GPL) package, with wrappers, is not incompatible with the > scipy license. I may have misunderstood the license, but if I > haven't, then a policy of not including LGPL code will put an > unnecessary hurdle in the way of scipy users. I can imagine there is > a reason to prefer that option, but I wasn't sure what it was - hence > my earlier question... I've tried to explain in my latest response to Gary Ruben. To the contrary, I see *including* an LGPLed component in scipy as putting an unnecessary hurdle in the way of scipy users. While the LGPL is lighter in its requirements than the GPL, it's certainly not trivial to comply with as scipy's license currently is. It is an unnecessary hurdle because there are really not many reasons why a new package *must* be in scipy. 
Small, well-separated packages are, in fact, preferred for technical reasons above and beyond the legal ones. A scikits (scitools, sciwhatever) project will provide the benefits of the scipy community for the smaller, individual projects. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From aisaac at american.edu Sat Oct 28 14:35:05 2006 From: aisaac at american.edu (Alan G Isaac) Date: Sat, 28 Oct 2006 14:35:05 -0400 Subject: [SciPy-dev] PRIMME: PReconditioned Iterative MultiMethod Eigensolver In-Reply-To: <453DEF1B.7070903@iam.uni-stuttgart.de> References: <453DEF1B.7070903@iam.uni-stuttgart.de> Message-ID: On Tue, 24 Oct 2006, Nils Wagner apparently wrote: > A new eigensolver is available (*NA Digest, V. 06, # 43*). > PRIMME is released under the Lesser GPL license. > Is this licence compatible wrt scipy ? So as I understand what this discussion led to: This would be a great SciKit, if Nils will provide a wrapper and submit it. There may be some residual confusion about whom he should submit it to and where the kits will be found. Hope that's a correct summary. 
Cheers, Alan Isaac From nwagner at iam.uni-stuttgart.de Sun Oct 29 16:20:58 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sun, 29 Oct 2006 22:20:58 +0100 Subject: [SciPy-dev] ERROR: check loadmat case cellnest Message-ID:

scipy.test(1) yields

======================================================================
ERROR: check loadmat case cell
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 84, in cc
    self._check_case(name, files, expected)
  File "/usr/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 79, in _check_case
    self._check_level(k_label, expected, matdict[k])
  File "/usr/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 66, in _check_level
    assert_array_almost_equal(actual, expected, err_msg=label, decimal=5)
  File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal
    header='Arrays are not almost equal')
  File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 200, in assert_array_compare
    val = comparison(x,y)
  File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 228, in compare
    return around(abs(x-y),decimal) <= 10.0**(-decimal)
TypeError: unsupported operand type(s) for -: 'unicode' and 'unicode'

======================================================================
ERROR: check loadmat case cellnest
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 84, in cc
    self._check_case(name, files, expected)
  File "/usr/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 79, in _check_case
    self._check_level(k_label, expected, matdict[k])
  File "/usr/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 66, in _check_level
    assert_array_almost_equal(actual, expected, err_msg=label, decimal=5)
  File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal
    header='Arrays are not almost equal')
  File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 200, in assert_array_compare
    val = comparison(x,y)
  File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 228, in compare
    return around(abs(x-y),decimal) <= 10.0**(-decimal)
  File "/usr/lib/python2.4/site-packages/numpy/core/fromnumeric.py", line 574, in round_
    return round(decimals, out)
AttributeError: 'numpy.ndarray' object has no attribute 'rint'

======================================================================
ERROR: check loadmat case emptycell
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 84, in cc
    self._check_case(name, files, expected)
  File "/usr/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 79, in _check_case
    self._check_level(k_label, expected, matdict[k])
  File "/usr/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 66, in _check_level
    assert_array_almost_equal(actual, expected, err_msg=label, decimal=5)
  File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal
    header='Arrays are not almost equal')
  File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 200, in assert_array_compare
    val = comparison(x,y)
  File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 228, in compare
    return around(abs(x-y),decimal) <= 10.0**(-decimal)
  File "/usr/lib/python2.4/site-packages/numpy/core/fromnumeric.py", line 574, in round_
    return round(decimals, out)
AttributeError: 'numpy.float64' object has no attribute 'rint'

======================================================================
ERROR: check loadmat case stringarray
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 84, in cc
    self._check_case(name, files, expected)
  File "/usr/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 79, in _check_case
    self._check_level(k_label, expected, matdict[k])
  File "/usr/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 66, in _check_level
    assert_array_almost_equal(actual, expected, err_msg=label, decimal=5)
  File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal
    header='Arrays are not almost equal')
  File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 200, in assert_array_compare
    val = comparison(x,y)
  File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 228, in compare
    return around(abs(x-y),decimal) <= 10.0**(-decimal)
TypeError: unsupported operand type(s) for -: 'unicode' and 'unicode'

======================================================================
ERROR: check loadmat case structarr
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 84, in cc
    self._check_case(name, files, expected)
  File "/usr/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 79, in _check_case
    self._check_level(k_label, expected, matdict[k])
  File "/usr/lib/python2.4/site-packages/scipy/io/tests/test_mio.py", line 66, in _check_level
    assert_array_almost_equal(actual, expected, err_msg=label, decimal=5)
  File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal
    header='Arrays are not almost equal')
  File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 200, in assert_array_compare
    val = comparison(x,y)
  File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 228, in compare
    return around(abs(x-y),decimal) <= 10.0**(-decimal)
TypeError: unsupported operand type(s) for -: 'mat_struct' and 'mat_struct'
----------------------------------------------------------------------

From robert.kern at gmail.com Sun Oct 29 18:32:15 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 29 Oct 2006 17:32:15 -0600 Subject: [SciPy-dev] ERROR: check loadmat case cellnest In-Reply-To: References: Message-ID: <454539FF.3040104@gmail.com> Please use the bug tracker. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at ar.media.kyoto-u.ac.jp Mon Oct 30 01:40:05 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 30 Oct 2006 15:40:05 +0900 Subject: [SciPy-dev] PRIMME: PReconditioned Iterative MultiMethod Eigensolver In-Reply-To: <454272D6.8010907@gmail.com> References: <453DEF1B.7070903@iam.uni-stuttgart.de> <453E48D5.1060009@gmail.com> <4541D962.4080002@ar.media.kyoto-u.ac.jp> <4541E218.2040505@ntc.zcu.cz> <1e2af89e0610270415q76ca24bbv2bf0920ceb645cc@mail.gmail.com> <454234AD.9010607@gmail.com> <454252AB.8070108@ieee.org> <454272D6.8010907@gmail.com> Message-ID: <45459E45.7040400@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > Travis Oliphant wrote: >> How should we proceed? Can Robert set up a new SVN tree for scikits? >> Can I? What kind of things should go into scikits. How should they be >> installed -- under the scikits name-space (do we really like that name >> --- it sounds quirky to me)? > When you decide on a name, Jeff Strunk can set up > a projects.scipy.org project for it. Since each sciwhatever.* subpackage will be > separate from the others, I would recommend following Zope's lead and keeping > the sources separate although they share a common namespace package. E.g. 
>
> trunk/
>   audio/
>     README
>     setup.py
>     sciwhatever/
>       __init__.py
>       audio/
>         __init__.py
>         audiostuff.py
>   primme/
>     README
>     setup.py
>     sciwhatever/
>       __init__.py
>       primme/
>         __init__.py
>         eigenstuff.py
>
> I think the focus of sciwhatever should still be on open source packages, so I
> would reject packages with research-use-only licenses and the like.
>
> I must admit that I'm kind of fond of "sciwhatever" as a name, now.

I can quickly change the layout of pyaudio so that it looks like above. I am not sure about the setup.py and sciwhatever/__init__.py, though (I don't know much about Python packages and co):

- Is sciwhatever/__init__.py shared across all the packages (i.e. always the same in primme, pyaudio, etc.)?
- Should the setup.py follow the same convention as the scipy modules' ones?

Cheers, David From robert.kern at gmail.com Mon Oct 30 01:46:36 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 30 Oct 2006 00:46:36 -0600 Subject: [SciPy-dev] PRIMME: PReconditioned Iterative MultiMethod Eigensolver In-Reply-To: <45459E45.7040400@ar.media.kyoto-u.ac.jp> References: <453DEF1B.7070903@iam.uni-stuttgart.de> <453E48D5.1060009@gmail.com> <4541D962.4080002@ar.media.kyoto-u.ac.jp> <4541E218.2040505@ntc.zcu.cz> <1e2af89e0610270415q76ca24bbv2bf0920ceb645cc@mail.gmail.com> <454234AD.9010607@gmail.com> <454252AB.8070108@ieee.org> <454272D6.8010907@gmail.com> <45459E45.7040400@ar.media.kyoto-u.ac.jp> Message-ID: <45459FCC.3070404@gmail.com> David Cournapeau wrote:

> I can quickly change the layout of pyaudio so that it looks like above.
> I am not sure about the setup.py and sciwhatever/__init__.py, though (I
> don't know much about Python packages and co):
>
> - Is sciwhatever/__init__.py shared across all the packages (i.e.
>   always the same in primme, pyaudio, etc.)?

It will always be an empty file.

> - Should the setup.py follow the same convention as the scipy
>   modules' ones?

I haven't figured out the exact details of that. 
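[Editor's sketch: the shared-namespace layout discussed above can be exercised end to end. This assumes pkgutil-style namespace declarations — a small non-empty __init__.py, unlike the empty file mentioned in the thread — and purely illustrative package and module names.]

```python
import os
import sys
import tempfile

# Two separate source trees (mirroring trunk/audio and trunk/primme)
# that share one "scikits"-style namespace package.
root = tempfile.mkdtemp()
for project, module in [("audio", "audiostuff"), ("primme", "eigenstuff")]:
    pkg = os.path.join(root, project, "scikits", project)
    os.makedirs(pkg)
    # Identical scikits/__init__.py in every tree: extend_path makes
    # Python search all sys.path entries for scikits.* subpackages.
    with open(os.path.join(root, project, "scikits", "__init__.py"), "w") as f:
        f.write("from pkgutil import extend_path\n"
                "__path__ = extend_path(__path__, __name__)\n")
    open(os.path.join(pkg, "__init__.py"), "w").close()
    with open(os.path.join(pkg, module + ".py"), "w") as f:
        f.write("NAME = %r\n" % module)
    sys.path.insert(0, os.path.join(root, project))

# Both subpackages import from the single scikits namespace even though
# their sources live in different directories.
from scikits.audio import audiostuff
from scikits.primme import eigenstuff
print(audiostuff.NAME, eigenstuff.NAME)  # audiostuff eigenstuff
```

Each project tree stays independently installable while contributing its own subpackage to the common namespace, which is the point of keeping the sources separate.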
Most likely there will be a pyaudio/sciwhatever/pyaudio/setup.py that will do all of the heavy lifting. The pyaudio/setup.py at top will probably just .add_subpackage() that (but again, I'm not too sure about the exact details). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From karol.langner at kn.pl Mon Oct 30 10:54:34 2006 From: karol.langner at kn.pl (Karol Langner) Date: Mon, 30 Oct 2006 16:54:34 +0100 Subject: [SciPy-dev] scipy.linalg shadowed In-Reply-To: References: <45422FB9.9020502@ieee.org> Message-ID: <200610301654.34749.karol.langner@kn.pl> On Friday 27 of October 2006 19:47, Nils Wagner wrote:

> Hi Travis,
>
> I have installed everything from scratch.
>
> >>> import scipy
> >>> scipy.linalg
> <module 'numpy.linalg' from '/usr/lib/python2.4/site-packages/numpy/linalg/__init__.pyc'>
> >>> scipy.__version__
> '0.5.2.dev2299'
>
> Can someone reproduce this behaviour ?
>
> Nils

I also built everything from scratch and get exactly the same result as Nils:

Python 2.5 (r25:51908, Oct 9 2006, 13:22:58)
[GCC 3.3.5 (Debian 1:3.3.5-13)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import scipy
>>> scipy.linalg
<module 'numpy.linalg' from '/usr/lib/python2.4/site-packages/numpy/linalg/__init__.pyc'>
>>> scipy.__version__
'0.5.2.dev2299'

Karol -- written by Karol Langner Mon Oct 30 16:52:56 CET 2006 From dalcinl at gmail.com Mon Oct 30 16:08:23 2006 From: dalcinl at gmail.com (Lisandro Dalcin) Date: Mon, 30 Oct 2006 18:08:23 -0300 Subject: [SciPy-dev] [Numpy-discussion] Conversation about getting array interface into Python In-Reply-To: <45464C94.4080900@ieee.org> References: <45464C94.4080900@ieee.org> Message-ID: On 10/30/06, Travis Oliphant wrote: > > If anybody has a desire to see the array interface into Python, please > help by voicing an opinion on python-dev in the discussion about adding > data-type objects to Python. 
There are a few prominent people who > don't get why applications would need to share data-type information > about memory areas. I need help giving reasons why. > > -Travis Python-Dev is sometimes a hard place to get into. I remember posting some proposals or even bugs (considered bugs by me and others) and getting rather crude responses. Travis said "a few prominent people...". I consider Travis a *very* prominent person in the Python world. His **terrific** contribution to NumPy reveals a really smart, always-looking-ahead way of doing things. However, I have seen strong and disparate opposition to his proposals on Python-Dev before. Perhaps the reason for this is simple: few Python core developers are involved in scientific computing, and they do not have a clear idea of what is needed for it. I really believe the NumPy/SciPy community should try to raise its voice on Python-Dev. Many NumPy/SciPy users and developers really want to run high-performance Python code. Python is being used on supercomputers; some applications taking advantage of Python (SPaSM) have even won the Gordon Bell Performance Prize. A 25 Tflop/s application involving the Python programming language is a really good example of what can be achieved through the interaction of Python and compiled code. In short, I fully support Travis in his initiative to standardize access to low-level binary data, and I encourage others like me who really want this to post to Python-Dev. For my part, I will try to post my reasons in connection with my (small) experience developing MPI for Python. 
Regards, -- Lisandro Dalcín --------------- Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) PTLC - Güemes 3450, (3000) Santa Fe, Argentina Tel/Fax: +54-(0)342-451.1594 From dalcinl at gmail.com Tue Oct 31 12:01:35 2006 From: dalcinl at gmail.com (Lisandro Dalcin) Date: Tue, 31 Oct 2006 14:01:35 -0300 Subject: [SciPy-dev] Citing_SciPy link Message-ID: I cannot access the following link http://www.scipy.org/Citing_SciPy How should I cite the SciPy project? -- Lisandro Dalcín --------------- Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) PTLC - Güemes 3450, (3000) Santa Fe, Argentina Tel/Fax: +54-(0)342-451.1594 From bhendrix at enthought.com Tue Oct 31 12:30:15 2006 From: bhendrix at enthought.com (Bryce Hendrix) Date: Tue, 31 Oct 2006 11:30:15 -0600 Subject: [SciPy-dev] Citing_SciPy link In-Reply-To: References: Message-ID: <45478827.1000705@enthought.com> I didn't have any problems with the link, but here's a cut'n'paste of the page: If you would like to cite the SciPy tools in a paper or presentation, the following is recommended:

* Author: Eric Jones, Travis Oliphant, Pearu Peterson and others.
* Title: SciPy: Open Source Scientific Tools for Python
* Year: 2001 -
* URL: http://www.scipy.org

Here's an example of a BibTeX entry:

@Misc{,
  author = {Eric Jones and Travis Oliphant and Pearu Peterson and others},
  title = {{SciPy}: Open source scientific tools for {Python}},
  year = {2001--},
  url = "http://www.scipy.org/"
}

Lisandro Dalcin wrote: > I cannot access the following link > > http://www.scipy.org/Citing_SciPy > > How should I cite the SciPy project? 
From martin.hoefling at gmx.de Tue Oct 31 13:27:31 2006 From: martin.hoefling at gmx.de (Martin Höfling) Date: Tue, 31 Oct 2006 19:27:31 +0100 Subject: [SciPy-dev] Ebuilds and Attribute error Message-ID: <200610311927.31883.martin.hoefling@gmx.de> Hi there, first of all, I've created Subversion ebuilds for numpy and scipy, so if anyone is interested in them, just tell me. It seems as if there's something wrong with numpy: I tried an example from the tutorial in the documentation.

In [1]: from scipy import *

In [2]: x,y = mgrid[-1:1:20j,-1:1,20j]
---------------------------------------------------------------------------
exceptions.AttributeError                Traceback (most recent call last)

/home/martin/

/usr/lib/python2.4/site-packages/numpy/lib/index_tricks.py in __getitem__(self, key)
    129             typ = int
    130             for k in range(len(key)):
--> 131                 step = key[k].step
    132                 start = key[k].start
    133                 if start is None: start=0

AttributeError: 'complex' object has no attribute 'step'

I also tried reverting to numpy 1.0; no change. Best wishes, Martin -- HTML *always* increases the information content of a posting by a few unflattering pieces of information about its author. (Thore Tams, de.soc.netzkultur, 17.5.1999)
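[Editor's note: the AttributeError in Martin's session comes from the comma before the final 20j, which makes the bare complex number 20j its own index expression; mgrid then looks for a .step attribute that complex numbers lack. The tutorial form uses a colon throughout. A minimal sketch with numpy (the `np` import name is assumed):]

```python
import numpy as np

# Colons throughout: a complex "step" of 20j asks mgrid for 20 evenly
# spaced points between the endpoints (both included).
x, y = np.mgrid[-1:1:20j, -1:1:20j]
print(x.shape, y.shape)  # (20, 20) (20, 20)

# Writing mgrid[-1:1:20j, -1:1, 20j] instead passes the bare complex
# number 20j as a third index expression, which has no .step attribute
# and raises the AttributeError shown in the traceback above.
```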