From robert.kern at gmail.com Sat Apr 1 03:18:56 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 01 Apr 2006 02:18:56 -0600 Subject: [SciPy-dev] Trac maintenance Message-ID: <442E3770.6030809@gmail.com> I've been doing a bit of maintenance on the Trac instances for numpy and scipy. In particular, I've removed the default "component1" and "milestone2" nonsense and put meaningful values in their place. If you have any requests, or you think my component lists are bogus, enter a ticket, set the component to "Trac" and assign it to rkern. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From strawman at astraw.com Sat Apr 1 04:05:26 2006 From: strawman at astraw.com (Andrew Straw) Date: Sat, 01 Apr 2006 01:05:26 -0800 Subject: [SciPy-dev] Deleting a recipe in the cookbook wiki In-Reply-To: <200603291038.22453.faltet@carabos.com> References: <200603291038.22453.faltet@carabos.com> Message-ID: <442E4256.8030808@astraw.com> Francesc Altet wrote: >Hi, > >I've ended changing the name of a recipe of mine in the SciPy cookbook >wiki, i.e. from "A Pyrex Agnostic Class" to "A Numerical Agnostic >Pyrex Class", which I find better. Anyone with privileges can delete >the former one? > >Thanks, > > > Done. (This example is very meta... In case anyone dreams up a 4th N-dimensional array package, please make sure it's compatible with this! :) Cheers! Andrew From robert.kern at gmail.com Sat Apr 1 20:51:33 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 01 Apr 2006 19:51:33 -0600 Subject: [SciPy-dev] Statistics review months Message-ID: <442F2E25.9040705@gmail.com> In the interest of improving the quality of the scipy.stats package, I hereby declare April and May of 2006 to be Statistics Review Months. I propose that we set ourselves a goal to review each function in stats.py and morestats.py (and a few others) for correctness and completeness of implementation by the end of May. By my count, that's about 2.5 functions every day. Surely this is a reasonable amount of effort for a rather large payoff: a robust, well-tested and thorough statistics library. I have added a Wiki page describing the details: http://projects.scipy.org/scipy/scipy/wiki/StatisticsReview Barring any objections, I will be irretrievably creating the ~150 tickets or so for all of the functions to be reviewed later tonight. So if you object, act fast! [Disclosure: this idea isn't mine. Eric Jones mentioned it to me once, and I'm just running with it.] -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From jonas at mwl.mit.edu Sat Apr 1 21:20:34 2006 From: jonas at mwl.mit.edu (Eric Jonas) Date: Sat, 1 Apr 2006 21:20:34 -0500 Subject: [SciPy-dev] Statistics review months In-Reply-To: <442F2E25.9040705@gmail.com> References: <442F2E25.9040705@gmail.com> Message-ID: <20060402022034.GP16205@convolution.mit.edu> Robert, > In the interest of improving the quality of the scipy.stats package, > I hereby declare April and May of 2006 to be Statistics Review > Months. I propose that we set ourselves a goal to review each > function in stats.py and morestats.py (and a few others) for > correctness and completeness of implementation by the end of What a great idea! 
We had just started working on this ourselves. We're going to migrate some of our internal wiki content over to the trac pages over the next few days, regarding rv_continuous and rv_discrete. In particular, the distributions implementation is both very well-designed and very bug-ridden. While we're doing this level of refactoring, perhaps more attention could be given to fernando's "scikit" idea? If we're going to invest this much time in a module, perhaps it would be a good time to consider cleaning up the interface, and even writing a good how-to guide, ala the matlab toolbox users guides... ...Eric Jonas From robert.kern at gmail.com Sat Apr 1 22:17:59 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 01 Apr 2006 21:17:59 -0600 Subject: [SciPy-dev] Statistics review months In-Reply-To: <20060402022034.GP16205@convolution.mit.edu> References: <442F2E25.9040705@gmail.com> <20060402022034.GP16205@convolution.mit.edu> Message-ID: <442F4267.8060703@gmail.com> Eric Jonas wrote: > Robert, > >>In the interest of improving the quality of the scipy.stats package, >>I hereby declare April and May of 2006 to be Statistics Review >>Months. I propose that we set ourselves a goal to review each >>function in stats.py and morestats.py (and a few others) for >>correctness and completeness of implementation by the end of > > What a great idea! We had just started working on this ourselves. > We're going to migrate some of our internal wiki content over to the > trac pages over the next few days, regarding rv_continuous and > rv_discrete. In particular, the distributions implementation is both > very well-designed and very bug-ridden. Yes. Given the number of functions in stats.py and the number of distributions (multiplied by the number of methods each distribution has), I thought it best to focus on the functions this time around. But please do add the Wiki pages so we can keep track of this! If this procedure works well, I'm sure we will do a Distributions Review Month soon. > While we're doing this level of refactoring, perhaps more attention > could be given to fernando's "scikit" idea? If we're going to invest > this much time in a module, perhaps it would be a good time to > consider cleaning up the interface, and even writing a good how-to > guide, ala the matlab toolbox users guides... Good point. I will add a Wiki page where reviewers can add examples and HOWTO text for each function as they work their way down the list. When the reviewing settles down, that would be the perfect time for some reorganization and editing work to form a coherent user's guide for scipy.stats. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From jonathan.taylor at stanford.edu Sat Apr 1 23:20:02 2006 From: jonathan.taylor at stanford.edu (Jonathan Taylor) Date: Sat, 01 Apr 2006 20:20:02 -0800 Subject: [SciPy-dev] Statistics review months In-Reply-To: <442F2E25.9040705@gmail.com> References: <442F2E25.9040705@gmail.com> Message-ID: <442F50F2.2060303@stanford.edu> on this topic, as an honest-to-goodness statistician it might be nice to see more statistical modelling in scipy. i know Rpy exists, but the interface is not very pythonic. i have some "home-brew" modules for linear regression, formula building (something like R's) and a few other things. 
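For concreteness -- the sketch below is illustrative only, not code from those home-brew modules -- the ordinary/weighted least-squares core that such a module might wrap fits in a few lines (it reduces to R's lm when weights is None):

import numpy as N

def wls_fit(X, y, weights=None):
    """Weighted least squares: minimize sum_i w_i*(y_i - dot(X[i], b))**2
    for the coefficient vector b, given an (n, p) design matrix X."""
    X = N.asarray(X, dtype=float)
    y = N.asarray(y, dtype=float)
    if weights is not None:
        # scaling the rows of X and y by sqrt(w) turns the weighted
        # problem into an ordinary least-squares problem
        sw = N.sqrt(N.asarray(weights, dtype=float))
        X = X * sw[:, N.newaxis]
        y = y * sw
    # lstsq solves the system via SVD, which is numerically stabler
    # than forming the normal equations explicitly
    b, rss, rank, sv = N.linalg.lstsq(X, y)
    return b
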
if it went into something like scipy, it might gain from the criticisms of others.... is there any interest in making the equivalent of a scipy.stats.models module? i think an easily (medium-term) achievable goal is: i) linear (least-squares) regression models with/without weights or non-diagonal covariance matrices (in R: lm + more) ii) generalized linear models (in R: glm) iii) iteratively reweighted least squares algorithms (glm is a special case), i.e. robust regression (in R: rlm). iv) ordinary least squares multivariate linear models (i.e. multivariate responses) some of these models can easily be "broadcasted", others not so easily.... further goals are more general models: classification, constrained model fitting, model selection.... for some of these things, it may not be worth duplicating R's (or other packages') efforts. -- jonathan Robert Kern wrote: >In the interest of improving the quality of the scipy.stats package, I hereby >declare April and May of 2006 to be Statistics Review Months. I propose that we >set ourselves a goal to review each function in stats.py and morestats.py (and a >few others) for correctness and completeness of implementation by the end of >May. By my count, that's about 2.5 functions every day. Surely this is a >reasonable amount of effort for a rather large payoff: a robust, well-tested and >thorough statistics library. > >I have added a Wiki page describing the details: > > http://projects.scipy.org/scipy/scipy/wiki/StatisticsReview > >Barring any objections, I will be irretrievably creating the ~150 tickets or so >for all of the functions to be reviewed later tonight. So if you object, act fast! > >[Disclosure: this idea isn't mine. Eric Jones mentioned it to me once, and I'm >just running with it.] > > > -- ------------------------------------------------------------------------ I'm part of the Team in Training: please support our efforts for the Leukemia and Lymphoma Society! http://www.active.com/donate/tntsvmb/tntsvmbJTaylor GO TEAM !!! ------------------------------------------------------------------------ Jonathan Taylor Tel: 650.723.9230 Dept. of Statistics Fax: 650.725.8977 Sequoia Hall, 137 www-stat.stanford.edu/~jtaylo 390 Serra Mall Stanford, CA 94305 From steve at shrogers.com Sun Apr 2 10:07:18 2006 From: steve at shrogers.com (Steven H. Rogers) Date: Sun, 02 Apr 2006 08:07:18 -0600 Subject: [SciPy-dev] Statistics review months In-Reply-To: <442F50F2.2060303@stanford.edu> References: <442F2E25.9040705@gmail.com> <442F50F2.2060303@stanford.edu> Message-ID: <442FDA96.2050007@shrogers.com> I think this would be a useful addition. Jonathan Taylor wrote: > on this topic, as an honest-to-goodness statistician it might be nice to > see more statistical modelling in scipy. i know Rpy exists, but the > interface is not very pythonic. > > i have some "home-brew" modules for linear regression, formula building > (something like R's) and a few other things. if it went into something > like scipy, it might gain from the criticisms of others.... > > is there any interest in making the equivalent of a > > scipy.stats.models > > module? > > i think an easily (medium-term) achievable goal is: > > i) linear (least-squares) regression models with/without weights or > non-diagonal covariance matrices (in R: lm + more) > > ii) generalized linear models (in R: glm) > > iii) iteratively reweighted least squares algorithms (glm is a special > case), i.e. robust regression (in R: rlm). > > iv) ordinary least squares multivariate linear models (i.e. 
multivariate > responses) > > some of these models can easily be "broadcasted", others not so easily.... > > further goals are more general models: classification, constrained model > fitting, model selection.... for some of these things, it may not be > worth duplicating R's (or other packages') efforts. > > -- jonathan > > > Robert Kern wrote: > >> In the interest of improving the quality of the scipy.stats package, I hereby >> declare April and May of 2006 to be Statistics Review Months. I propose that we >> set ourselves a goal to review each function in stats.py and morestats.py (and a >> few others) for correctness and completeness of implementation by the end of >> May. By my count, that's about 2.5 functions every day. Surely this is a >> reasonable amount of effort for a rather large payoff: a robust, well-tested and >> thorough statistics library. >> >> I have added a Wiki page describing the details: >> >> http://projects.scipy.org/scipy/scipy/wiki/StatisticsReview >> >> Barring any objections, I will be irretrievably creating the ~150 tickets or so >> for all of the functions to be reviewed later tonight. So if you object, act fast! >> >> [Disclosure: this idea isn't mine. Eric Jones mentioned it to me once, and I'm >> just running with it.] >> >> >> > -- Steven H. Rogers, Ph.D., steve at shrogers.com Weblog: http://shrogers.com/weblog "He who refuses to do arithmetic is doomed to talk nonsense." -- John McCarthy From steve at shrogers.com Sun Apr 2 11:58:58 2006 From: steve at shrogers.com (Steven H. Rogers) Date: Sun, 02 Apr 2006 09:58:58 -0600 Subject: [SciPy-dev] Porting Plone to MoinMoin Coference Pages Message-ID: <442FF4C2.7060400@shrogers.com> I've ported the SciPy 2005 Conference pages to MoinMoin with the exception of attachments (one abstract and the presentations). I don't seem to have permission to upload attachments. If there is no desire to move the attachments to the new site, I'll just link to the old site. How are attachments to be handled in th new site? This port may be helpful as a template for future conferences, but if the Plone SciPy site will be kept for archival purposes, I see no reason to port pages from older conferences. The same applies to other static content). Steve -- Steven H. Rogers, Ph.D., steve at shrogers.com Weblog: http://shrogers.com/weblog "He who refuses to do arithmetic is doomed to talk nonsense." -- John McCarthy From strawman at astraw.com Sun Apr 2 14:00:23 2006 From: strawman at astraw.com (Andrew Straw) Date: Sun, 02 Apr 2006 11:00:23 -0700 Subject: [SciPy-dev] Porting Plone to MoinMoin Coference Pages In-Reply-To: <442FF4C2.7060400@shrogers.com> References: <442FF4C2.7060400@shrogers.com> Message-ID: <44301137.4030504@astraw.com> Steven H. Rogers wrote: >I've ported the SciPy 2005 Conference pages to MoinMoin with the >exception of attachments (one abstract and the presentations). > Great! Thanks. > I don't >seem to have permission to upload attachments. > Hmm, I just checked with a completely unpriveledged account and I could upload just fine. Why do you say you don't have permission? >If there is no desire to >move the attachments to the new site, I'll just link to the old site. >How are attachments to be handled in th new site? > > By typing attachment:filename.ext in the wiki, hitting preview or save, then clicking on the resulting link to upload the attachment. Alternatively, you can go to the "attachments" action of a given page and upload from there, creating the "attachment:filename.ext" later. 
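As a concrete example of the workflow Andrew describes (the file name here is hypothetical), one would add a line such as

attachment:scipy05_presentations.tgz

to the conference page, save it, and then follow the rendered link to reach MoinMoin's upload form for that file.
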
From nwagner at iam.uni-stuttgart.de Mon Apr 3 03:58:17 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 03 Apr 2006 09:58:17 +0200 Subject: [SciPy-dev] ImportError: ndimage does not yet support 64-bit systems Message-ID: <4430D599.3080408@iam.uni-stuttgart.de> >>> from scipy import * import linsolve.umfpack -> failed: No module named _umfpack Traceback (most recent call last): File "", line 1, in ? File "/usr/lib64/python2.4/site-packages/scipy/ndimage/__init__.py", line 33, in ? raise ImportError, "ndimage does not yet support 64-bit systems" ImportError: ndimage does not yet support 64-bit systems I know that I can disable config.add_subpackage('ndimage') in ~/scipy/Lib/setup.py but is there another option to fix the problem on 64bit systems ? Nils From pgmdevlist at mailcan.com Mon Apr 3 05:03:52 2006 From: pgmdevlist at mailcan.com (Pierre GM) Date: Mon, 3 Apr 2006 05:03:52 -0400 Subject: [SciPy-dev] Statistics review months In-Reply-To: References: Message-ID: <200604030503.53714.pgmdevlist@mailcan.com> Robert, Excellent initative, thanks a lot ! Before getting too involved, I have a question: should the functions support MaskedArrays (when possible) ? I think about var/std (already available for MA in this patch, that should be checked before inclusion [http://projects.scipy.org/scipy/numpy/attachment/wiki/MaskedArray/ma-200603280900.patch]), or median, in particular... From jdhunter at ace.bsd.uchicago.edu Mon Apr 3 12:01:53 2006 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Mon, 03 Apr 2006 11:01:53 -0500 Subject: [SciPy-dev] setup/config problem with scipy svn Message-ID: <871wweabri.fsf@peds-pc311.bsd.uchicago.edu> I just tried updating and building scipy from svn (which I've built on this platform recently in the last week) and am now getting python/svn/scipy> uname -a Linux peds-pc311 2.6.10-5-386 #1 Thu Aug 18 22:23:56 UTC 2005 i686 GNU/Linux python/svn/scipy> svn up At revision 1808. python/svn/scipy> rm -rf build python/svn/scipy> python setup.py build /usr/lib/python2.4/distutils/dist.py:236: UserWarning: Unknown distribution option: 'configuration' warnings.warn(msg) running build running config_fc python/svn/scipy> and nothing happens -- no build, no compile, etc... JDH From strawman at astraw.com Mon Apr 3 13:08:50 2006 From: strawman at astraw.com (Andrew Straw) Date: Mon, 03 Apr 2006 10:08:50 -0700 Subject: [SciPy-dev] setup/config problem with scipy svn In-Reply-To: <871wweabri.fsf@peds-pc311.bsd.uchicago.edu> References: <871wweabri.fsf@peds-pc311.bsd.uchicago.edu> Message-ID: <443156A2.808@astraw.com> I'm not sure it'll help (there may be a new problem, as you suggest), but I've found that numpy distutils is brittle when it comes to figuring out the right commands, thus I specify them explicitly: python setup.py build_src build_clib build_ext --fcompiler=gnu John Hunter wrote: >I just tried updating and building scipy from svn (which I've built on >this platform recently in the last week) and am now getting > >python/svn/scipy> uname -a >Linux peds-pc311 2.6.10-5-386 #1 Thu Aug 18 22:23:56 UTC 2005 i686 >GNU/Linux >python/svn/scipy> svn up >At revision 1808. >python/svn/scipy> rm -rf build >python/svn/scipy> python setup.py build >/usr/lib/python2.4/distutils/dist.py:236: UserWarning: Unknown >distribution option: 'configuration' > warnings.warn(msg) >running build >running config_fc >python/svn/scipy> > >and nothing happens -- no build, no compile, etc... 
> >JDH > >_______________________________________________ >Scipy-dev mailing list >Scipy-dev at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-dev > > From jdhunter at ace.bsd.uchicago.edu Mon Apr 3 13:14:27 2006 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Mon, 03 Apr 2006 12:14:27 -0500 Subject: [SciPy-dev] setup/config problem with scipy svn In-Reply-To: <443156A2.808@astraw.com> (Andrew Straw's message of "Mon, 03 Apr 2006 10:08:50 -0700") References: <871wweabri.fsf@peds-pc311.bsd.uchicago.edu> <443156A2.808@astraw.com> Message-ID: <8764lq8tu4.fsf@peds-pc311.bsd.uchicago.edu> >>>>> "Andrew" == Andrew Straw writes: Andrew> I'm not sure it'll help (there may be a new problem, as Andrew> you suggest), but I've found that numpy distutils is Andrew> brittle when it comes to figuring out the right commands, Andrew> thus I specify them explicitly: Taking a stab in the dark after looking at older versions of setup.py, I modified setup( configuration=configuration ) to read setup( **configuration().todict() ) and was able to build. JDH From robert.kern at gmail.com Mon Apr 3 13:43:23 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 03 Apr 2006 12:43:23 -0500 Subject: [SciPy-dev] setup/config problem with scipy svn In-Reply-To: <8764lq8tu4.fsf@peds-pc311.bsd.uchicago.edu> References: <871wweabri.fsf@peds-pc311.bsd.uchicago.edu> <443156A2.808@astraw.com> <8764lq8tu4.fsf@peds-pc311.bsd.uchicago.edu> Message-ID: <44315EBB.3060603@gmail.com> John Hunter wrote: >>>>>>"Andrew" == Andrew Straw writes: > > > Andrew> I'm not sure it'll help (there may be a new problem, as > Andrew> you suggest), but I've found that numpy distutils is > Andrew> brittle when it comes to figuring out the right commands, > Andrew> thus I specify them explicitly: > > Taking a stab in the dark after looking at older versions of setup.py, > I modified > > setup( configuration=configuration ) > > to read > > setup( **configuration().todict() ) > > and was able to build. Be sure your numpy is up-to-date, at least r2315. http://projects.scipy.org/scipy/numpy/changeset/2315 -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From jdhunter at ace.bsd.uchicago.edu Mon Apr 3 13:51:46 2006 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Mon, 03 Apr 2006 12:51:46 -0500 Subject: [SciPy-dev] setup/config problem with scipy svn In-Reply-To: <44315EBB.3060603@gmail.com> (Robert Kern's message of "Mon, 03 Apr 2006 12:43:23 -0500") References: <871wweabri.fsf@peds-pc311.bsd.uchicago.edu> <443156A2.808@astraw.com> <8764lq8tu4.fsf@peds-pc311.bsd.uchicago.edu> <44315EBB.3060603@gmail.com> Message-ID: <87ek0emtsd.fsf@peds-pc311.bsd.uchicago.edu> >>>>> "Robert" == Robert Kern writes: Robert> Be sure your numpy is up-to-date, at least r2315. Doh (slaps self on head). That was it. JDH From dd55 at cornell.edu Mon Apr 3 15:15:40 2006 From: dd55 at cornell.edu (Darren Dale) Date: Mon, 3 Apr 2006 15:15:40 -0400 Subject: [SciPy-dev] [PATCH] d1mach problem In-Reply-To: References: Message-ID: <200604031515.41313.dd55@cornell.edu> On Tuesday 07 March 2006 09:41, Neal Becker wrote: > I think these patches will fix the problem with d1mach miscompiling with > gcc4.1: Thank you, your patches worked for me. 
(gcc-4.1, amd64, bleeding gentoo) Darren From schofield at ftw.at Tue Apr 4 06:32:02 2006 From: schofield at ftw.at (Ed Schofield) Date: Tue, 4 Apr 2006 12:32:02 +0200 Subject: [SciPy-dev] scipy.sparse + umfpack + system_info In-Reply-To: <442A9F8D.906@ntc.zcu.cz> References: <44280161.4030708@ntc.zcu.cz> <442808AF.6090006@ftw.at> <44280C20.8000003@ntc.zcu.cz> <44297152.9000305@ftw.at> <442A698C.9000104@ntc.zcu.cz> <442A7E78.1030901@ftw.at> <442A86D2.20902@ntc.zcu.cz> <442A9A67.8050106@ftw.at> <442A9F8D.906@ntc.zcu.cz> Message-ID: On 29/03/2006, at 4:54 PM, Robert Cimrman wrote: > Ed Schofield wrote: >> Okay, I've now figured out that the umfpack hooks need NumPy >> distutils >>> = r2286. But the get_info() call in umfpack/setup.py still isn't >> acknowledging that umfpack exists. It should find it from the scipy >> source tree, right? Do you have any hints? > > Well, the umfpack sources are not in the scipy source tree - there > is just my wrapper/module code there. I did it this way, because > even if the license of the version 4.4 (which is known to work with > the module) is (imho) acceptable for scipy, the curent version > (4.6) is under LGPL... > I've run some small tests and the UMFPACK wrappers seems to work fine. Well done, Robert! Well done to Pearu too for his help with distutils. I've added some small unit tests in linsolve/umfpack/tests/ test_umfpack2.py, including two that are commented out because I can't figure out how to disable umfpack with the useUmfpack variable. Could you please check this, Robert? -- Ed From ndbecker2 at gmail.com Tue Apr 4 06:43:21 2006 From: ndbecker2 at gmail.com (Neal Becker) Date: Tue, 04 Apr 2006 06:43:21 -0400 Subject: [SciPy-dev] [PATCH] d1mach problem References: <200604031515.41313.dd55@cornell.edu> Message-ID: Darren Dale wrote: > On Tuesday 07 March 2006 09:41, Neal Becker wrote: >> I think these patches will fix the problem with d1mach miscompiling with >> gcc4.1: > > Thank you, your patches worked for me. (gcc-4.1, amd64, bleeding gentoo) > > Darren You'll also be happy to know: ------- Additional Comments From jakub at redhat.com 2006-04-01 06:22 EST ------- Fixed in rawhide gcc-4.1.0-4, whenever FC5 updates are made it will be there too. From nwagner at iam.uni-stuttgart.de Tue Apr 4 07:09:08 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 04 Apr 2006 13:09:08 +0200 Subject: [SciPy-dev] scipy.sparse + umfpack + system_info In-Reply-To: References: <44280161.4030708@ntc.zcu.cz> <442808AF.6090006@ftw.at> <44280C20.8000003@ntc.zcu.cz> <44297152.9000305@ftw.at> <442A698C.9000104@ntc.zcu.cz> <442A7E78.1030901@ftw.at> <442A86D2.20902@ntc.zcu.cz> <442A9A67.8050106@ftw.at> <442A9F8D.906@ntc.zcu.cz> Message-ID: <443253D4.90806@iam.uni-stuttgart.de> Ed Schofield wrote: > On 29/03/2006, at 4:54 PM, Robert Cimrman wrote: > > >> Ed Schofield wrote: >> >>> Okay, I've now figured out that the umfpack hooks need NumPy >>> distutils >>> >>>> = r2286. But the get_info() call in umfpack/setup.py still isn't >>>> >>> acknowledging that umfpack exists. It should find it from the scipy >>> source tree, right? Do you have any hints? >>> >> Well, the umfpack sources are not in the scipy source tree - there >> is just my wrapper/module code there. I did it this way, because >> even if the license of the version 4.4 (which is known to work with >> the module) is (imho) acceptable for scipy, the curent version >> (4.6) is under LGPL... >> >> > > I've run some small tests and the UMFPACK wrappers seems to work > fine. Well done, Robert! 
Well done to Pearu too for his help with > distutils. > > I've added some small unit tests in linsolve/umfpack/tests/ > test_umfpack2.py, including two that are commented out because I > can't figure out how to disable umfpack with the useUmfpack > variable. Could you please check this, Robert? > > -- Ed > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > I have installed version 4.6 before. Do you know if your wrapper works also with that version ? Nils From jdhunter at ace.bsd.uchicago.edu Tue Apr 4 09:31:37 2006 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Tue, 04 Apr 2006 08:31:37 -0500 Subject: [SciPy-dev] Stineman interpolation Message-ID: <87odzh1n7q.fsf@peds-pc311.bsd.uchicago.edu> A couple of matplotlib users, who know much more about these matters than I do, put together this interpolation routine which they feel is superior and better behaved than many out there. With minor tweaks this may be suitable for scipy.interpolate (it currently lives in matplotlib.mlab svn) def slopes(x,y): """ SLOPES calculate the slope y'(x) Given data vectors X and Y SLOPES calculates Y'(X), i.e the slope of a curve Y(X). The slope is estimated using the slope obtained from that of a parabola through any three consecutive points. This method should be superior to that described in the appendix of A CONSISTENTLY WELL BEHAVED METHOD OF INTERPOLATION by Russel W. Stineman (Creative Computing July 1980) in at least one aspect: Circles for interpolation demand a known aspect ratio between x- and y-values. For many functions, however, the abscissa are given in different dimensions, so an aspect ratio is completely arbitrary. The parabola method gives very similar results to the circle method for most regular cases but behaves much better in special cases Norbert Nemec, Institute of Theoretical Physics, University or Regensburg, April 2006 Norbert.Nemec at physik.uni-regensburg.de (inspired by a original implementation by Halldor Bjornsson, Icelandic Meteorological Office, March 2006 halldor at vedur.is) """ # Cast key variables as float. x=nx.asarray(x, nx.Float) y=nx.asarray(y, nx.Float) yp=nx.zeros(y.shape, nx.Float) dx=x[1:] - x[:-1] dy=y[1:] - y[:-1] dydx = dy/dx yp[1:-1] = (dydx[:-1] * dx[1:] + dydx[1:] * dx[:-1])/(dx[1:] + dx[:-1]) yp[0] = 2.0 * dy[0]/dx[0] - yp[1] yp[-1] = 2.0 * dy[-1]/dx[-1] - yp[-2] return yp def stineman_interp(xi,x,y,yp=None): """ STINEMAN_INTERP Well behaved data interpolation. Given data vectors X and Y, the slope vector YP and a new abscissa vector XI the function stineman_interp(xi,x,y,yp) uses Stineman interpolation to calculate a vector YI corresponding to XI. Here's an example that generates a coarse sine curve, then interpolates over a finer abscissa: x = linspace(0,2*pi,20); y = sin(x); yp = cos(x) xi = linspace(0,2*pi,40); yi = stineman_interp(xi,x,y,yp); plot(x,y,'o',xi,yi) The interpolation method is described in the article A CONSISTENTLY WELL BEHAVED METHOD OF INTERPOLATION by Russell W. Stineman. The article appeared in the July 1980 issue of Creative computing with a note from the editor stating that while they were not an academic journal but once in a while something serious and original comes in adding that this was "apparently a real solution" to a well known problem. For yp=None, the routine automatically determines the slopes using the "slopes" routine. 
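    In outline (as read from the implementation below): the routine
    first forms the ordinary linear interpolant yo on each segment,
    measures how far the prescribed end slopes deviate from the secant
    slope s via dy1 = (yp[i] - s)*(xi - x[i]) and
    dy2 = (yp[i+1] - s)*(xi - x[i+1]), and then adds a rational
    correction to yo whose form is selected by the sign of dy1*dy2
    (no correction when the product is zero).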
X is assumed to be sorted in increasing order For values xi[j] < x[0] or xi[j] > x[-1], the routine tries a extrapolation. The relevance of the data obtained from this, of course, questionable... original implementation by Halldor Bjornsson, Icelandic Meteorolocial Office, March 2006 halldor at vedur.is completely reworked and optimized for Python by Norbert Nemec, Institute of Theoretical Physics, University or Regensburg, April 2006 Norbert.Nemec at physik.uni-regensburg.de """ # Cast key variables as float. x=nx.asarray(x, nx.Float) y=nx.asarray(y, nx.Float) assert x.shape == y.shape N=len(y) if yp is None: yp = slopes(x,y) else: yp=nx.asarray(yp, nx.Float) xi=nx.asarray(xi, nx.Float) yi=nx.zeros(xi.shape, nx.Float) # calculate linear slopes dx = x[1:] - x[:-1] dy = y[1:] - y[:-1] s = dy/dx #note length of s is N-1 so last element is #N-2 # find the segment each xi is in # this line actually is the key to the efficiency of this implementation idx = nx.searchsorted(x[1:-1], xi) # now we have generally: x[idx[j]] <= xi[j] <= x[idx[j]+1] # except at the boundaries, where it may be that xi[j] < x[0] or xi[j] > x[-1] # the y-values that would come out from a linear interpolation: sidx = nx.take(s, idx) xidx = nx.take(x, idx) yidx = nx.take(y, idx) xidxp1 = nx.take(x, idx+1) yo = yidx + sidx * (xi - xidx) # the difference that comes when using the slopes given in yp dy1 = (nx.take(yp, idx)- sidx) * (xi - xidx) # using the yp slope of the left point dy2 = (nx.take(yp, idx+1)-sidx) * (xi - xidxp1) # using the yp slope of the right point dy1dy2 = dy1*dy2 # The following is optimized for Python. The solution actually # does more calculations than necessary but exploiting the power # of numpy, this is far more efficient than coding a loop by hand # in Python yi = yo + dy1dy2 * nx.choose(nx.array(nx.sign(dy1dy2), nx.Int32)+1, ((2*xi-xidx-xidxp1)/((dy1-dy2)*(xidxp1-xidx)), 0.0, 1/(dy1+dy2),)) return yi # Here is some example code from pylab import figure, show, nx, linspace, stineman_interp x = linspace(0,2*nx.pi,20); y = nx.sin(x); yp = None xi = linspace(x[0],x[-1],100); yi = stineman_interp(xi,x,y,yp); fig = figure() ax = fig.add_subplot(111) ax.plot(x,y,'ro',xi,yi,'-b.') show() From jonathan.taylor at stanford.edu Wed Apr 5 17:19:00 2006 From: jonathan.taylor at stanford.edu (Jonathan Taylor) Date: Wed, 05 Apr 2006 14:19:00 -0700 Subject: [SciPy-dev] interp1d, bounds_error & fill_value Message-ID: <44343444.3030407@stanford.edu> just a note to mention that it seems that the docstring on interp1d is not quite correct: it states that NaN is returned if bounds_error=0. this is not quite true, as you can set fill_value (not mentioned in docstring but evident in code). if x and y are both int arrays, it does not seem to return NaN... >>> import scipy.interpolate >>> import numpy as N >>> >>> x=N.arange(20) >>> f=scipy.interpolate.interp1d(x,x,bounds_error=0, fill_value=0.) >>> print f(30) [0] >>> >>> g=scipy.interpolate.interp1d(x,x,bounds_error=0) >>> print g(30), N.isnan(g(30)) [-2147483648] [False] >>> >>> X=N.arange(0,20.) >>> h=scipy.interpolate.interp1d(X,X,bounds_error=0) >>> print h(30) [ nan] >>> >>> X=N.arange(0,20.) >>> i=scipy.interpolate.interp1d(X,x,bounds_error=0) >>> print i(30) [ nan] >>> >>> X=N.arange(0,20.) >>> j=scipy.interpolate.interp1d(x,X,bounds_error=0) >>> print j(30) [ nan] >>> -- ------------------------------------------------------------------------ I'm part of the Team in Training: please support our efforts for the Leukemia and Lymphoma Society! 
http://www.active.com/donate/tntsvmb/tntsvmbJTaylor GO TEAM !!! ------------------------------------------------------------------------ Jonathan Taylor Tel: 650.723.9230 Dept. of Statistics Fax: 650.725.8977 Sequoia Hall, 137 www-stat.stanford.edu/~jtaylo 390 Serra Mall Stanford, CA 94305 From cimrman3 at ntc.zcu.cz Thu Apr 6 04:51:37 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 06 Apr 2006 10:51:37 +0200 Subject: [SciPy-dev] scipy.sparse + umfpack + system_info In-Reply-To: <443253D4.90806@iam.uni-stuttgart.de> References: <44280161.4030708@ntc.zcu.cz> <442808AF.6090006@ftw.at> <44280C20.8000003@ntc.zcu.cz> <44297152.9000305@ftw.at> <442A698C.9000104@ntc.zcu.cz> <442A7E78.1030901@ftw.at> <442A86D2.20902@ntc.zcu.cz> <442A9A67.8050106@ftw.at> <442A9F8D.906@ntc.zcu.cz> <443253D4.90806@iam.uni-stuttgart.de> Message-ID: <4434D699.5030102@ntc.zcu.cz> Nils Wagner wrote: > Ed Schofield wrote: >>I've run some small tests and the UMFPACK wrappers seems to work >>fine. Well done, Robert! Well done to Pearu too for his help with >>distutils. >> >>I've added some small unit tests in linsolve/umfpack/tests/ >>test_umfpack2.py, including two that are commented out because I >>can't figure out how to disable umfpack with the useUmfpack >>variable. Could you please check this, Robert? >> >>-- Ed >> > > I have installed version 4.6 before. Do you know if your wrapper works > also with that version ? > > Nils I have not tried yet. There is a chance it might work right away, given the subset of names and arguments of umfpack library functions and constants the wrapper uses did not change. Wanna try? :-) r.
Do you agree ? Nils [umfpack] library_dirs = /UMFPACK/Lib:/AMD/Lib include_dirs = /UMFPACK/Include umfpack_libs = umfpack, amd From cimrman3 at ntc.zcu.cz Thu Apr 6 05:26:07 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 06 Apr 2006 11:26:07 +0200 Subject: [SciPy-dev] scipy.sparse + umfpack + system_info In-Reply-To: References: <44280161.4030708@ntc.zcu.cz> <442808AF.6090006@ftw.at> <44280C20.8000003@ntc.zcu.cz> <44297152.9000305@ftw.at> <442A698C.9000104@ntc.zcu.cz> <442A7E78.1030901@ftw.at> <442A86D2.20902@ntc.zcu.cz> <442A9A67.8050106@ftw.at> <442A9F8D.906@ntc.zcu.cz> Message-ID: <4434DEAF.8050001@ntc.zcu.cz> Ed Schofield wrote: > > On 29/03/2006, at 4:54 PM, Robert Cimrman wrote: > >> Ed Schofield wrote: >> > > I've run some small tests and the UMFPACK wrappers seems to work > fine. Well done, Robert! Well done to Pearu too for his help with > distutils. > > I've added some small unit tests in linsolve/umfpack/tests/ > test_umfpack2.py, including two that are commented out because I > can't figure out how to disable umfpack with the useUmfpack > variable. Could you please check this, Robert? I will rename test_umfpack.py -> try_umfpack.py test_umfpack2.py -> test_umfpack.py so that the tests could be run via scipy.test(). I will also check the useUmfpack problem. However system_info now fails to find my umfpack installation (revision 2308 worked for me), details are below: umfpack_info: /home/share/software/usr/lib/python2.4/site-packages/numpy/distutils/system_info.py:540: UserWarning: Library error: libs=['umfpack', 'amd'] found_libs=[] warnings.warn("Library error: libs=%s found_libs=%s" % \ /home/share/software/usr/lib/python2.4/site-packages/numpy/distutils/system_info.py:540: UserWarning: Library error: libs=['umfpack', 'amd'] found_libs=['/home/share/software/packages/UMFPACK/UMFPACK/Lib/libumfpack.a'] warnings.warn("Library error: libs=%s found_libs=%s" % \ /home/share/software/usr/lib/python2.4/site-packages/numpy/distutils/system_info.py:540: UserWarning: Library error: libs=['umfpack', 'amd'] found_libs=['/home/share/software/packages/UMFPACK/AMD/Lib/libamd.a'] warnings.warn("Library error: libs=%s found_libs=%s" % \ /home/share/software/usr/lib/python2.4/site-packages/numpy/distutils/system_info.py:392: UserWarning: UMFPACK sparse solver (http://www.cise.ufl.edu/research/sparse/umfpack/) not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [umfpack]) or by setting the UMFPACK environment variable. warnings.warn(self.notfounderror.__doc__) NOT AVAILABLE the problem is, that len(found_libs) == len(libs) does not hold in my case (it is in system_info._check_libs()). Pearu, could you fix this, please? (btw. this is exactly the reason the original (rev. 2308) umfpack_info looked so clumsy - I was checking one lib at a time.) r. 
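To make the system_info failure above concrete, here is an illustrative sketch (not the actual numpy.distutils code) of a check that only succeeds when a single directory holds every requested library; a stock UMFPACK build, which leaves libumfpack.a and libamd.a in two separate Lib/ directories, is then reported as missing:

import os

def check_libs(lib_dirs, libs):
    # illustrative: require that *one* directory contain all libraries
    for d in lib_dirs:
        found = [lib for lib in libs
                 if os.path.isfile(os.path.join(d, 'lib%s.a' % lib))]
        if len(found) == len(libs):   # the test that fails above
            return d
    return None   # umfpack and amd in different dirs -> "not found"
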
From cimrman3 at ntc.zcu.cz Thu Apr 6 06:11:51 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 06 Apr 2006 12:11:51 +0200 Subject: [SciPy-dev] scipy.sparse + umfpack + system_info In-Reply-To: <4434DEAF.8050001@ntc.zcu.cz> References: <44280161.4030708@ntc.zcu.cz> <442808AF.6090006@ftw.at> <44280C20.8000003@ntc.zcu.cz> <44297152.9000305@ftw.at> <442A698C.9000104@ntc.zcu.cz> <442A7E78.1030901@ftw.at> <442A86D2.20902@ntc.zcu.cz> <442A9A67.8050106@ftw.at> <442A9F8D.906@ntc.zcu.cz> <4434DEAF.8050001@ntc.zcu.cz> Message-ID: <4434E967.8010500@ntc.zcu.cz> Robert Cimrman wrote: > Ed Schofield wrote: >>>Ed Schofield wrote: >>> >> >>I've run some small tests and the UMFPACK wrappers seems to work >>fine. Well done, Robert! Well done to Pearu too for his help with >>distutils. >> >>I've added some small unit tests in linsolve/umfpack/tests/ >>test_umfpack2.py, including two that are commented out because I >>can't figure out how to disable umfpack with the useUmfpack >>variable. Could you please check this, Robert? > > > I will rename > test_umfpack.py -> try_umfpack.py > test_umfpack2.py -> test_umfpack.py > > so that the tests could be run via scipy.test(). I will also check the > useUmfpack problem. done. A sparse solver to use can be now chosen by e.g. linsolve.use_solver( {'useUmfpack' : False} ) r. From cimrman3 at ntc.zcu.cz Thu Apr 6 07:32:59 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 06 Apr 2006 13:32:59 +0200 Subject: [SciPy-dev] scipy.sparse + umfpack + system_info In-Reply-To: <4434D8D3.7050200@iam.uni-stuttgart.de> References: <44280161.4030708@ntc.zcu.cz> <442808AF.6090006@ftw.at> <44280C20.8000003@ntc.zcu.cz> <44297152.9000305@ftw.at> <442A698C.9000104@ntc.zcu.cz> <442A7E78.1030901@ftw.at> <442A86D2.20902@ntc.zcu.cz> <442A9A67.8050106@ftw.at> <442A9F8D.906@ntc.zcu.cz> <443253D4.90806@iam.uni-stuttgart.de> <4434D699.5030102@ntc.zcu.cz> <4434D8D3.7050200@iam.uni-stuttgart.de> Message-ID: <4434FC6B.3000905@ntc.zcu.cz> Nils Wagner wrote: > Robert Cimrman wrote: > > Ok I will try it. BTW, is it possible to add a site.cfg to the svn tree ? > Anyway I have to replace by my local settings. > Do you agree ? > > Nils > > [umfpack] > library_dirs = /UMFPACK/Lib:/AMD/Lib > include_dirs = /UMFPACK/Include > umfpack_libs = umfpack, amd Yes, you should replace , but the version 4.6 can have other directory structure, so you might need to replace more to specify where libumfpack.a and libamd.a actually are. I doubt that having a site.cfg in the SVN would be useful (people's configurations differ), but what about adding site.cfg.example where all available sections and relevant fields would be listed, e.g. [umfpack] library_dirs = include_dirs = umfpack_libs = umfpack, amd Or should this go to the Wiki? r. 
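A short usage sketch of the use_solver() switch mentioned above (the test matrix and module spellings follow the scipy 0.4.x layout; the spsolve call is illustrative):

import numpy as N
from scipy import sparse, linsolve

A = sparse.csc_matrix(N.array([[4., 1.], [1., 3.]]))  # small test system
b = N.array([1., 2.])

linsolve.use_solver({'useUmfpack': False})  # force the SuperLU fallback
x = linsolve.spsolve(A, b)
print x  # should agree with the dense solution of A x = b
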
From nwagner at iam.uni-stuttgart.de Thu Apr 6 07:56:59 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 06 Apr 2006 13:56:59 +0200 Subject: [SciPy-dev] scipy.sparse + umfpack + system_info In-Reply-To: <4434FC6B.3000905@ntc.zcu.cz> References: <44280161.4030708@ntc.zcu.cz> <442808AF.6090006@ftw.at> <44280C20.8000003@ntc.zcu.cz> <44297152.9000305@ftw.at> <442A698C.9000104@ntc.zcu.cz> <442A7E78.1030901@ftw.at> <442A86D2.20902@ntc.zcu.cz> <442A9A67.8050106@ftw.at> <442A9F8D.906@ntc.zcu.cz> <443253D4.90806@iam.uni-stuttgart.de> <4434D699.5030102@ntc.zcu.cz> <4434D8D3.7050200@iam.uni-stuttgart.de> <4434FC6B.3000905@ntc.zcu.cz> Message-ID: <4435020B.9040705@iam.uni-stuttgart.de> Robert Cimrman wrote: > Nils Wagner wrote: > >> Robert Cimrman wrote: >> >> Ok I will try it. BTW, is it possible to add a site.cfg to the svn tree ? >> Anyway I have to replace by my local settings. >> Do you agree ? >> >> Nils >> >> [umfpack] >> library_dirs = /UMFPACK/Lib:/AMD/Lib >> include_dirs = /UMFPACK/Include >> umfpack_libs = umfpack, amd >> > > Yes, you should replace , but the version 4.6 can have > other directory structure, so you might need to replace more to > specify where libumfpack.a and libamd.a actually are. > > I doubt that having a site.cfg in the SVN would be useful (people's > configurations differ), but what about adding site.cfg.example where all > available sections and relevant fields would be listed, e.g. > > [umfpack] > library_dirs = > include_dirs = > umfpack_libs = umfpack, amd > > Or should this go to the Wiki? > > r. > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > It seems that UMFPACK cannot be detected on my system. Any idea ? Numpy version 0.9.7.2328 Scipy version 0.4.9.1812 umfpack_info: NOT AVAILABLE +1 for adding site.cfg.example to svn. Nils From cimrman3 at ntc.zcu.cz Thu Apr 6 08:15:46 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 06 Apr 2006 14:15:46 +0200 Subject: [SciPy-dev] site.cfg.example In-Reply-To: <4435020B.9040705@iam.uni-stuttgart.de> References: <44280161.4030708@ntc.zcu.cz> <442808AF.6090006@ftw.at> <44280C20.8000003@ntc.zcu.cz> <44297152.9000305@ftw.at> <442A698C.9000104@ntc.zcu.cz> <442A7E78.1030901@ftw.at> <442A86D2.20902@ntc.zcu.cz> <442A9A67.8050106@ftw.at> <442A9F8D.906@ntc.zcu.cz> <443253D4.90806@iam.uni-stuttgart.de> <4434D699.5030102@ntc.zcu.cz> <4434D8D3.7050200@iam.uni-stuttgart.de> <4434FC6B.3000905@ntc.zcu.cz> <4435020B.9040705@iam.uni-stuttgart.de> Message-ID: <44350672.4020008@ntc.zcu.cz> I have added numpy/site.cfg.example to the SVN. It should contain a list all possible sections and relevant fields, so that a (new) user sees what can be configured and then just copies the file to numpy/site.cfg, removes the unwanted sections and edits the wanted. If you think it is a good idea and have a section that is not present or properly described, contribute it, please :-) When/if the file grows, we can put it to the Wiki. cheers, r. 
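A quick way to verify that a site.cfg section is actually being picked up is to query system_info directly, via the same get_info() helper that the umfpack setup.py calls; an empty dict means the section was not found:

python -c "from numpy.distutils.system_info import get_info; print get_info('umfpack')"
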
From nwagner at iam.uni-stuttgart.de Thu Apr 6 08:19:12 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 06 Apr 2006 14:19:12 +0200 Subject: [SciPy-dev] site.cfg.example In-Reply-To: <44350672.4020008@ntc.zcu.cz> References: <44280161.4030708@ntc.zcu.cz> <442808AF.6090006@ftw.at> <44280C20.8000003@ntc.zcu.cz> <44297152.9000305@ftw.at> <442A698C.9000104@ntc.zcu.cz> <442A7E78.1030901@ftw.at> <442A86D2.20902@ntc.zcu.cz> <442A9A67.8050106@ftw.at> <442A9F8D.906@ntc.zcu.cz> <443253D4.90806@iam.uni-stuttgart.de> <4434D699.5030102@ntc.zcu.cz> <4434D8D3.7050200@iam.uni-stuttgart.de> <4434FC6B.3000905@ntc.zcu.cz> <4435020B.9040705@iam.uni-stuttgart.de> <44350672.4020008@ntc.zcu.cz> Message-ID: <44350740.2070601@iam.uni-stuttgart.de> Robert Cimrman wrote: > I have added numpy/site.cfg.example to the SVN. It should contain a list > all possible sections and relevant fields, so that a (new) user sees > what can be configured and then just copies the file to numpy/site.cfg, > removes the unwanted sections and edits the wanted. > > If you think it is a good idea and have a section that is not present or > properly described, contribute it, please :-) When/if the file grows, we > can put it to the Wiki. > > cheers, > r. > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > My site.cfg is in ~/numpy/numpy/distutils Is this location wrong ? Nils From cimrman3 at ntc.zcu.cz Thu Apr 6 08:26:46 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 06 Apr 2006 14:26:46 +0200 Subject: [SciPy-dev] site.cfg.example In-Reply-To: <44350740.2070601@iam.uni-stuttgart.de> References: <44280161.4030708@ntc.zcu.cz> <442808AF.6090006@ftw.at> <44280C20.8000003@ntc.zcu.cz> <44297152.9000305@ftw.at> <442A698C.9000104@ntc.zcu.cz> <442A7E78.1030901@ftw.at> <442A86D2.20902@ntc.zcu.cz> <442A9A67.8050106@ftw.at> <442A9F8D.906@ntc.zcu.cz> <443253D4.90806@iam.uni-stuttgart.de> <4434D699.5030102@ntc.zcu.cz> <4434D8D3.7050200@iam.uni-stuttgart.de> <4434FC6B.3000905@ntc.zcu.cz> <4435020B.9040705@iam.uni-stuttgart.de> <44350672.4020008@ntc.zcu.cz> <44350740.2070601@iam.uni-stuttgart.de> Message-ID: <44350906.8010108@ntc.zcu.cz> Nils Wagner wrote: > My site.cfg is in ~/numpy/numpy/distutils > > Is this location wrong ? It is ok (it does work, doesn't it?). But in relatively current versions of numpy you can have the file directly in numpy/. r. From nwagner at iam.uni-stuttgart.de Thu Apr 6 08:35:22 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 06 Apr 2006 14:35:22 +0200 Subject: [SciPy-dev] site.cfg.example In-Reply-To: <44350906.8010108@ntc.zcu.cz> References: <44280161.4030708@ntc.zcu.cz> <442808AF.6090006@ftw.at> <44280C20.8000003@ntc.zcu.cz> <44297152.9000305@ftw.at> <442A698C.9000104@ntc.zcu.cz> <442A7E78.1030901@ftw.at> <442A86D2.20902@ntc.zcu.cz> <442A9A67.8050106@ftw.at> <442A9F8D.906@ntc.zcu.cz> <443253D4.90806@iam.uni-stuttgart.de> <4434D699.5030102@ntc.zcu.cz> <4434D8D3.7050200@iam.uni-stuttgart.de> <4434FC6B.3000905@ntc.zcu.cz> <4435020B.9040705@iam.uni-stuttgart.de> <44350672.4020008@ntc.zcu.cz> <44350740.2070601@iam.uni-stuttgart.de> <44350906.8010108@ntc.zcu.cz> Message-ID: <44350B0A.2040800@iam.uni-stuttgart.de> Robert Cimrman wrote: > Nils Wagner wrote: > >> My site.cfg is in ~/numpy/numpy/distutils >> >> Is this location wrong ? >> > > It is ok (it does work, doesn't it?). But in relatively current versions > of numpy you can have the file directly in numpy/. > > r. 
> > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > How can I check that UMFPACK works with SciPy ? >>> from scipy import * import linsolve.umfpack -> failed: No module named _umfpack umfpack_info: NOT AVAILABLE Nils From pearu at scipy.org Sat Apr 8 04:31:40 2006 From: pearu at scipy.org (Pearu Peterson) Date: Sat, 8 Apr 2006 03:31:40 -0500 (CDT) Subject: [SciPy-dev] scipy.sparse + umfpack + system_info In-Reply-To: <4434DEAF.8050001@ntc.zcu.cz> References: <44280161.4030708@ntc.zcu.cz> <442808AF.6090006@ftw.at> <442A698C.9000104@ntc.zcu.cz><442A9A67.8050106@ftw.at> <4434DEAF.8050001@ntc.zcu.cz> Message-ID: On Thu, 6 Apr 2006, Robert Cimrman wrote: > so that the tests could be run via scipy.test(). I will also check the > useUmfpack problem. However system_info now fails to find my umfpack > installation (revision 2308 worked for me), details are below: > > umfpack_info: > /home/share/software/usr/lib/python2.4/site-packages/numpy/distutils/system_info.py:540: > UserWarning: Library error: libs=['umfpack', 'amd'] found_libs=[] > warnings.warn("Library error: libs=%s found_libs=%s" % \ > /home/share/software/usr/lib/python2.4/site-packages/numpy/distutils/system_info.py:540: > UserWarning: Library error: libs=['umfpack', 'amd'] > found_libs=['/home/share/software/packages/UMFPACK/UMFPACK/Lib/libumfpack.a'] > warnings.warn("Library error: libs=%s found_libs=%s" % \ > /home/share/software/usr/lib/python2.4/site-packages/numpy/distutils/system_info.py:540: > UserWarning: Library error: libs=['umfpack', 'amd'] > found_libs=['/home/share/software/packages/UMFPACK/AMD/Lib/libamd.a'] > warnings.warn("Library error: libs=%s found_libs=%s" % \ > /home/share/software/usr/lib/python2.4/site-packages/numpy/distutils/system_info.py:392: > UserWarning: > UMFPACK sparse solver > (http://www.cise.ufl.edu/research/sparse/umfpack/) > not found. Directories to search for the libraries can be specified > in the > numpy/distutils/site.cfg file (section [umfpack]) or by setting > the UMFPACK environment variable. > warnings.warn(self.notfounderror.__doc__) > NOT AVAILABLE > > the problem is, that len(found_libs) == len(libs) does not hold in my > case (it is in system_info._check_libs()). Pearu, could you fix this, > please? (btw. this is exactly the reason the original (rev. 2308) > umfpack_info looked so clumsy - I was checking one lib at a time.) Could you send my the details of your umfpack/amd installations and how do you specify these libraries for numpy.distutils? (site.cfg, env variables ets.) Thanks, Pearu From cimrman3 at ntc.zcu.cz Mon Apr 10 05:52:23 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 10 Apr 2006 11:52:23 +0200 Subject: [SciPy-dev] scipy.sparse + umfpack + system_info In-Reply-To: References: <44280161.4030708@ntc.zcu.cz> <442808AF.6090006@ftw.at> <442A698C.9000104@ntc.zcu.cz><442A9A67.8050106@ftw.at> <4434DEAF.8050001@ntc.zcu.cz> Message-ID: <443A2AD7.8030603@ntc.zcu.cz> Pearu Peterson wrote: > > On Thu, 6 Apr 2006, Robert Cimrman wrote: > > >>so that the tests could be run via scipy.test(). I will also check the >>useUmfpack problem. 
However system_info now fails to find my umfpack >>installation (revision 2308 worked for me), details are below: >> >> umfpack_info: >>/home/share/software/usr/lib/python2.4/site-packages/numpy/distutils/system_info.py:540: >>UserWarning: Library error: libs=['umfpack', 'amd'] found_libs=[] >> warnings.warn("Library error: libs=%s found_libs=%s" % \ >>/home/share/software/usr/lib/python2.4/site-packages/numpy/distutils/system_info.py:540: >>UserWarning: Library error: libs=['umfpack', 'amd'] >>found_libs=['/home/share/software/packages/UMFPACK/UMFPACK/Lib/libumfpack.a'] >> warnings.warn("Library error: libs=%s found_libs=%s" % \ >>/home/share/software/usr/lib/python2.4/site-packages/numpy/distutils/system_info.py:540: >>UserWarning: Library error: libs=['umfpack', 'amd'] >>found_libs=['/home/share/software/packages/UMFPACK/AMD/Lib/libamd.a'] >> warnings.warn("Library error: libs=%s found_libs=%s" % \ >>/home/share/software/usr/lib/python2.4/site-packages/numpy/distutils/system_info.py:392: >>UserWarning: >> UMFPACK sparse solver >>(http://www.cise.ufl.edu/research/sparse/umfpack/) >> not found. Directories to search for the libraries can be specified >>in the >> numpy/distutils/site.cfg file (section [umfpack]) or by setting >> the UMFPACK environment variable. >> warnings.warn(self.notfounderror.__doc__) >> NOT AVAILABLE >> >>the problem is, that len(found_libs) == len(libs) does not hold in my >>case (it is in system_info._check_libs()). Pearu, could you fix this, >>please? (btw. this is exactly the reason the original (rev. 2308) >>umfpack_info looked so clumsy - I was checking one lib at a time.) > > > Could you send my the details of your umfpack/amd installations and > how do you specify these libraries for numpy.distutils? (site.cfg, env > variables ets.) The UMFPACK sources are in /home/share/software/packages/UMFPACK directory (). The solver/wrapper needs two libraries: libumfpack.a from /UMFPACK/Lib, and libamd.a from /AMD/Lib. This is my site.cfg: --- [atlas] library_dirs = /usr/lib atlas_libs = lapack, blas, cblas, atlas [umfpack] library_dirs = /home/share/software/packages/UMFPACK/UMFPACK/Lib:/home/share/software/packages/UMFPACK/AMD/Lib include_dirs = /home/share/software/packages/UMFPACK/UMFPACK/Include umfpack_libs = umfpack, amd --- I assume that moving the both libraries into one directory would solve the problem, but this is the way the UMFPACK installation does by default. Thanks for your help, r. From pearu at scipy.org Mon Apr 10 06:55:49 2006 From: pearu at scipy.org (Pearu Peterson) Date: Mon, 10 Apr 2006 05:55:49 -0500 (CDT) Subject: [SciPy-dev] scipy.sparse + umfpack + system_info In-Reply-To: <443A2AD7.8030603@ntc.zcu.cz> References: <44280161.4030708@ntc.zcu.cz> <442808AF.6090006@ftw.at> <442A698C.9000104@ntc.zcu.cz><442A9A67.8050106@ftw.at> <4434DEAF.8050001@ntc.zcu.cz> <443A2AD7.8030603@ntc.zcu.cz> Message-ID: On Mon, 10 Apr 2006, Robert Cimrman wrote: >> Could you send my the details of your umfpack/amd installations and >> how do you specify these libraries for numpy.distutils? (site.cfg, env >> variables ets.) > > The UMFPACK sources are in /home/share/software/packages/UMFPACK > directory (). > > The solver/wrapper needs two libraries: libumfpack.a from > /UMFPACK/Lib, and libamd.a from /AMD/Lib. 
> > This is my site.cfg: > --- > [atlas] > library_dirs = /usr/lib > atlas_libs = lapack, blas, cblas, atlas > > [umfpack] > library_dirs = > /home/share/software/packages/UMFPACK/UMFPACK/Lib:/home/share/software/packages/UMFPACK/AMD/Lib > include_dirs = /home/share/software/packages/UMFPACK/UMFPACK/Include > umfpack_libs = umfpack, amd > --- > > I assume that moving the both libraries into one directory would solve > the problem, but this is the way the UMFPACK installation does by default. Thanks for the information. I have splitted umfpack_info class into umfpack_info and amd_info classes and the location of umfpack and amd libraries must be specified separately in site.cfg file. In this way the system_info tools can handle various configurations, including default UMFPACK installation as well as various linux distribution provided installations (amd lives in /usr/lib and umfpack lives in /usr/lib/umfpack, for instance). So, use the following contents in site.cfg: [amd] library_dirs = /home/share/software/packages/AMD/Lib include_dirs = /home/share/software/packages/AMD/Include amd_libs = amd [umfpack] library_dirs = /home/share/software/packages/UMFPACK/Lib include_dirs = /home/share/software/packages/UMFPACK/Include umfpack_libs = umfpack HTH, Pearu From nwagner at iam.uni-stuttgart.de Mon Apr 10 07:28:30 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 10 Apr 2006 13:28:30 +0200 Subject: [SciPy-dev] scipy.sparse + umfpack + system_info In-Reply-To: References: <44280161.4030708@ntc.zcu.cz> <442808AF.6090006@ftw.at> <442A698C.9000104@ntc.zcu.cz><442A9A67.8050106@ftw.at> <4434DEAF.8050001@ntc.zcu.cz> <443A2AD7.8030603@ntc.zcu.cz> Message-ID: <443A415E.3020309@iam.uni-stuttgart.de> Pearu Peterson wrote: > On Mon, 10 Apr 2006, Robert Cimrman wrote: > > >>> Could you send my the details of your umfpack/amd installations and >>> how do you specify these libraries for numpy.distutils? (site.cfg, env >>> variables ets.) >>> >> The UMFPACK sources are in /home/share/software/packages/UMFPACK >> directory (). >> >> The solver/wrapper needs two libraries: libumfpack.a from >> /UMFPACK/Lib, and libamd.a from /AMD/Lib. >> >> This is my site.cfg: >> --- >> [atlas] >> library_dirs = /usr/lib >> atlas_libs = lapack, blas, cblas, atlas >> >> [umfpack] >> library_dirs = >> /home/share/software/packages/UMFPACK/UMFPACK/Lib:/home/share/software/packages/UMFPACK/AMD/Lib >> include_dirs = /home/share/software/packages/UMFPACK/UMFPACK/Include >> umfpack_libs = umfpack, amd >> --- >> >> I assume that moving the both libraries into one directory would solve >> the problem, but this is the way the UMFPACK installation does by default. >> > > Thanks for the information. I have splitted umfpack_info class into > umfpack_info and amd_info classes and the location of umfpack and amd > libraries must be specified separately in site.cfg file. In this way > the system_info tools can handle various configurations, including > default UMFPACK installation as well as various linux distribution > provided installations (amd lives in /usr/lib and umfpack lives in > /usr/lib/umfpack, for instance). 
> > So, use the following contents in site.cfg: > > [amd] > library_dirs = /home/share/software/packages/AMD/Lib > include_dirs = /home/share/software/packages/AMD/Include > amd_libs = amd > > [umfpack] > library_dirs = /home/share/software/packages/UMFPACK/Lib > include_dirs = /home/share/software/packages/UMFPACK/Include > umfpack_libs = umfpack > > HTH, > Pearu > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev I have installed UMFPACK 4.6. >>> from scipy import * import linsolve.umfpack -> failed: /usr/lib64/python2.4/site-packages/scipy/linsolve/umfpack/__umfpack.so: undefined symbol: e_wsfe Is the wrapper restricted to version 4.4 ? >>> numpy.__version__ '0.9.7.2338' >>> scipy.__version__ '0.4.9.1848' Nils From cimrman3 at ntc.zcu.cz Mon Apr 10 07:33:36 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 10 Apr 2006 13:33:36 +0200 Subject: [SciPy-dev] scipy.sparse + umfpack + system_info In-Reply-To: References: <44280161.4030708@ntc.zcu.cz> <442808AF.6090006@ftw.at> <442A698C.9000104@ntc.zcu.cz><442A9A67.8050106@ftw.at> <4434DEAF.8050001@ntc.zcu.cz> <443A2AD7.8030603@ntc.zcu.cz> Message-ID: <443A4290.4050903@ntc.zcu.cz> Pearu Peterson wrote: > Thanks for the information. I have splitted umfpack_info class into > umfpack_info and amd_info classes and the location of umfpack and amd > libraries must be specified separately in site.cfg file. In this way > the system_info tools can handle various configurations, including > default UMFPACK installation as well as various linux distribution > provided installations (amd lives in /usr/lib and umfpack lives in > /usr/lib/umfpack, for instance). > ... Great, thanks! r. From cimrman3 at ntc.zcu.cz Mon Apr 10 07:38:52 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 10 Apr 2006 13:38:52 +0200 Subject: [SciPy-dev] scipy.sparse + umfpack + system_info In-Reply-To: <443A415E.3020309@iam.uni-stuttgart.de> References: <44280161.4030708@ntc.zcu.cz> <442808AF.6090006@ftw.at> <442A698C.9000104@ntc.zcu.cz><442A9A67.8050106@ftw.at> <4434DEAF.8050001@ntc.zcu.cz> <443A2AD7.8030603@ntc.zcu.cz> <443A415E.3020309@iam.uni-stuttgart.de> Message-ID: <443A43CC.8020701@ntc.zcu.cz> Nils Wagner wrote: > I have installed UMFPACK 4.6. > > >>> from scipy import * > import linsolve.umfpack -> failed: > /usr/lib64/python2.4/site-packages/scipy/linsolve/umfpack/__umfpack.so: > undefined symbol: e_wsfe > > Is the wrapper restricted to version 4.4 ? Yes, it is tested to work with 4.4. But stay tuned. There are many interesting algorithms in 4.6, and I may be tempted... :) r. From nwagner at iam.uni-stuttgart.de Mon Apr 10 08:09:11 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 10 Apr 2006 14:09:11 +0200 Subject: [SciPy-dev] scipy.sparse + umfpack + system_info In-Reply-To: <443A43CC.8020701@ntc.zcu.cz> References: <44280161.4030708@ntc.zcu.cz> <442808AF.6090006@ftw.at> <442A698C.9000104@ntc.zcu.cz><442A9A67.8050106@ftw.at> <4434DEAF.8050001@ntc.zcu.cz> <443A2AD7.8030603@ntc.zcu.cz> <443A415E.3020309@iam.uni-stuttgart.de> <443A43CC.8020701@ntc.zcu.cz> Message-ID: <443A4AE7.7060400@iam.uni-stuttgart.de> Robert Cimrman wrote: > Nils Wagner wrote: > >> I have installed UMFPACK 4.6. >> >> >>> from scipy import * >> import linsolve.umfpack -> failed: >> /usr/lib64/python2.4/site-packages/scipy/linsolve/umfpack/__umfpack.so: >> undefined symbol: e_wsfe >> >> Is the wrapper restricted to version 4.4 ? 
>> > > Yes, it is tested to work with 4.4. But stay tuned. There are many > interesting algorithms in 4.6, and I may be tempted... :) > > r. > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > Hi Robert, The problem persists with version 4.4 Am I missing something ? >>> import scipy import linsolve.umfpack -> failed: /usr/lib64/python2.4/site-packages/scipy/linsolve/umfpack/__umfpack.so: undefined symbol: e_wsfe umfpack_info: libraries = ['umfpack', 'amd', 'f77blas', 'cblas', 'atlas'] library_dirs = ['/usr/local/src/UMFPACKv4.4/UMFPACK/Lib', '/usr/local/src/UMFPACKv4.4/AMD/Lib', '/usr/local/lib/atlas'] define_macros = [('SCIPY_UMFPACK_H', None), ('SCIPY_AMD_H', None), ('ATLAS_INFO', '"\\"3.7.11\\""')] swig_opts = ['-I/usr/local/src/UMFPACKv4.4/UMFPACK/Include', '-I/usr/local/src/UMFPACKv4.4/AMD/Include'] include_dirs = ['/usr/local/src/UMFPACKv4.4/UMFPACK/Include', '/usr/local/src/UMFPACKv4.4/AMD/Include'] amd_info: libraries = ['amd'] library_dirs = ['/usr/local/src/UMFPACKv4.4/AMD/Lib'] define_macros = [('SCIPY_AMD_H', None)] swig_opts = ['-I/usr/local/src/UMFPACKv4.4/AMD/Include'] include_dirs = ['/usr/local/src/UMFPACKv4.4/AMD/Include'] Nils From cimrman3 at ntc.zcu.cz Mon Apr 10 08:22:00 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 10 Apr 2006 14:22:00 +0200 Subject: [SciPy-dev] scipy.sparse + umfpack + system_info In-Reply-To: <443A4AE7.7060400@iam.uni-stuttgart.de> References: <44280161.4030708@ntc.zcu.cz> <442808AF.6090006@ftw.at> <442A698C.9000104@ntc.zcu.cz><442A9A67.8050106@ftw.at> <4434DEAF.8050001@ntc.zcu.cz> <443A2AD7.8030603@ntc.zcu.cz> <443A415E.3020309@iam.uni-stuttgart.de> <443A43CC.8020701@ntc.zcu.cz> <443A4AE7.7060400@iam.uni-stuttgart.de> Message-ID: <443A4DE8.10806@ntc.zcu.cz> Nils Wagner wrote: > > Hi Robert, > > The problem persists with version 4.4 > Am I missing something ? > > >>> import scipy > import linsolve.umfpack -> failed: > /usr/lib64/python2.4/site-packages/scipy/linsolve/umfpack/__umfpack.so: > undefined symbol: e_wsfe > > umfpack_info: > libraries = ['umfpack', 'amd', 'f77blas', 'cblas', 'atlas'] > library_dirs = ['/usr/local/src/UMFPACKv4.4/UMFPACK/Lib', > '/usr/local/src/UMFPACKv4.4/AMD/Lib', '/usr/local/lib/atlas'] > define_macros = [('SCIPY_UMFPACK_H', None), ('SCIPY_AMD_H', None), > ('ATLAS_INFO', '"\\"3.7.11\\""')] > swig_opts = ['-I/usr/local/src/UMFPACKv4.4/UMFPACK/Include', > '-I/usr/local/src/UMFPACKv4.4/AMD/Include'] > include_dirs = ['/usr/local/src/UMFPACKv4.4/UMFPACK/Include', > '/usr/local/src/UMFPACKv4.4/AMD/Include'] > amd_info: > libraries = ['amd'] > library_dirs = ['/usr/local/src/UMFPACKv4.4/AMD/Lib'] > define_macros = [('SCIPY_AMD_H', None)] > swig_opts = ['-I/usr/local/src/UMFPACKv4.4/AMD/Include'] > include_dirs = ['/usr/local/src/UMFPACKv4.4/AMD/Include'] > > Nils $ nm -gA /usr/lib/lib* 2> /dev/null | grep e_wsfe /usr/lib/libblas.a:xerbla.o: U e_wsfe /usr/lib/libf2c.a:sfe.o:00000130 T e_wsfe /usr/lib/liblapack.a:dlamch.o: U e_wsfe /usr/lib/liblapack.a:slamch.o: U e_wsfe /usr/lib/liblapack.a:xerbla.o: U e_wsfe so apparently this symbol comes from the f2c library and is used within blas/lapack. Did you specify correctly which BLAS should be used when building and installing umfpack? BTW, if you have matlab and build the umfpack matlab interace which comes along, you could try if umpfack itself works from the matlab command-line. r. 
From nwagner at iam.uni-stuttgart.de Mon Apr 10 08:30:40 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 10 Apr 2006 14:30:40 +0200 Subject: [SciPy-dev] scipy.sparse + umfpack + system_info In-Reply-To: <443A4DE8.10806@ntc.zcu.cz> References: <44280161.4030708@ntc.zcu.cz> <442808AF.6090006@ftw.at> <442A698C.9000104@ntc.zcu.cz><442A9A67.8050106@ftw.at> <4434DEAF.8050001@ntc.zcu.cz> <443A2AD7.8030603@ntc.zcu.cz> <443A415E.3020309@iam.uni-stuttgart.de> <443A43CC.8020701@ntc.zcu.cz> <443A4AE7.7060400@iam.uni-stuttgart.de> <443A4DE8.10806@ntc.zcu.cz> Message-ID: <443A4FF0.9050008@iam.uni-stuttgart.de> Robert Cimrman wrote: > Nils Wagner wrote: > >> Hi Robert, >> >> The problem persists with version 4.4 >> Am I missing something ? >> >> >>> import scipy >> import linsolve.umfpack -> failed: >> /usr/lib64/python2.4/site-packages/scipy/linsolve/umfpack/__umfpack.so: >> undefined symbol: e_wsfe >> >> umfpack_info: >> libraries = ['umfpack', 'amd', 'f77blas', 'cblas', 'atlas'] >> library_dirs = ['/usr/local/src/UMFPACKv4.4/UMFPACK/Lib', >> '/usr/local/src/UMFPACKv4.4/AMD/Lib', '/usr/local/lib/atlas'] >> define_macros = [('SCIPY_UMFPACK_H', None), ('SCIPY_AMD_H', None), >> ('ATLAS_INFO', '"\\"3.7.11\\""')] >> swig_opts = ['-I/usr/local/src/UMFPACKv4.4/UMFPACK/Include', >> '-I/usr/local/src/UMFPACKv4.4/AMD/Include'] >> include_dirs = ['/usr/local/src/UMFPACKv4.4/UMFPACK/Include', >> '/usr/local/src/UMFPACKv4.4/AMD/Include'] >> amd_info: >> libraries = ['amd'] >> library_dirs = ['/usr/local/src/UMFPACKv4.4/AMD/Lib'] >> define_macros = [('SCIPY_AMD_H', None)] >> swig_opts = ['-I/usr/local/src/UMFPACKv4.4/AMD/Include'] >> include_dirs = ['/usr/local/src/UMFPACKv4.4/AMD/Include'] >> >> Nils >> > > $ nm -gA /usr/lib/lib* 2> /dev/null | grep e_wsfe > /usr/lib/libblas.a:xerbla.o: U e_wsfe > /usr/lib/libf2c.a:sfe.o:00000130 T e_wsfe > /usr/lib/liblapack.a:dlamch.o: U e_wsfe > /usr/lib/liblapack.a:slamch.o: U e_wsfe > /usr/lib/liblapack.a:xerbla.o: U e_wsfe > > so apparently this symbol comes from the f2c library and is used within > blas/lapack. Did you specify correctly which BLAS should be used when > building and installing umfpack? BTW, if you have matlab and build the > umfpack matlab interace which comes along, you could try if umpfack > itself works from the matlab command-line. > > r. > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > Hi Robert, Please find attached my Make.linux. # 3: with Fortran interface to the ATLAS BLAS CONFIG = LIB = -L/usr/local/lib/atlas -lf77blas -latlas -lfrtbegin -lg2c -lm Is that o.k. ? Nils -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... 
Name: Make.linux URL: From cimrman3 at ntc.zcu.cz Mon Apr 10 08:36:27 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 10 Apr 2006 14:36:27 +0200 Subject: [SciPy-dev] scipy.sparse + umfpack + system_info In-Reply-To: <443A4FF0.9050008@iam.uni-stuttgart.de> References: <44280161.4030708@ntc.zcu.cz> <442808AF.6090006@ftw.at> <442A698C.9000104@ntc.zcu.cz><442A9A67.8050106@ftw.at> <4434DEAF.8050001@ntc.zcu.cz> <443A2AD7.8030603@ntc.zcu.cz> <443A415E.3020309@iam.uni-stuttgart.de> <443A43CC.8020701@ntc.zcu.cz> <443A4AE7.7060400@iam.uni-stuttgart.de> <443A4DE8.10806@ntc.zcu.cz> <443A4FF0.9050008@iam.uni-stuttgart.de> Message-ID: <443A514B.4020705@ntc.zcu.cz> Nils Wagner wrote: > Robert Cimrman wrote: >> $ nm -gA /usr/lib/lib* 2> /dev/null | grep e_wsfe >> /usr/lib/libblas.a:xerbla.o: U e_wsfe >> /usr/lib/libf2c.a:sfe.o:00000130 T e_wsfe >> /usr/lib/liblapack.a:dlamch.o: U e_wsfe >> /usr/lib/liblapack.a:slamch.o: U e_wsfe >> /usr/lib/liblapack.a:xerbla.o: U e_wsfe >> >> so apparently this symbol comes from the f2c library and is used >> within blas/lapack. Did you specify correctly which BLAS should be >> used when building and installing umfpack? BTW, if you have matlab and >> build the umfpack matlab interace which comes along, you could try if >> umpfack itself works from the matlab command-line. >> >> r. > Hi Robert, > > Please find attached my Make.linux. > > # 3: with Fortran interface to the ATLAS BLAS > CONFIG = > LIB = -L/usr/local/lib/atlas -lf77blas -latlas -lfrtbegin -lg2c -lm > > > Is that o.k. ? Well, I don't know :). But the following works for me: # 2: with the ATLAS C-BLAS (http://www.netlib.org/atlas). CONFIG = -DCBLAS -I/usr/include/atlas LIB = -lcblas -latlas -lm - it all depends on how yout ATLAS installation looks. r. From nwagner at iam.uni-stuttgart.de Mon Apr 10 09:24:36 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 10 Apr 2006 15:24:36 +0200 Subject: [SciPy-dev] scipy.sparse + umfpack + system_info In-Reply-To: <443A514B.4020705@ntc.zcu.cz> References: <44280161.4030708@ntc.zcu.cz> <442808AF.6090006@ftw.at> <442A698C.9000104@ntc.zcu.cz><442A9A67.8050106@ftw.at> <4434DEAF.8050001@ntc.zcu.cz> <443A2AD7.8030603@ntc.zcu.cz> <443A415E.3020309@iam.uni-stuttgart.de> <443A43CC.8020701@ntc.zcu.cz> <443A4AE7.7060400@iam.uni-stuttgart.de> <443A4DE8.10806@ntc.zcu.cz> <443A4FF0.9050008@iam.uni-stuttgart.de> <443A514B.4020705@ntc.zcu.cz> Message-ID: <443A5C94.6040709@iam.uni-stuttgart.de> Robert Cimrman wrote: > Nils Wagner wrote: > >> Robert Cimrman wrote: >> >>> $ nm -gA /usr/lib/lib* 2> /dev/null | grep e_wsfe >>> /usr/lib/libblas.a:xerbla.o: U e_wsfe >>> /usr/lib/libf2c.a:sfe.o:00000130 T e_wsfe >>> /usr/lib/liblapack.a:dlamch.o: U e_wsfe >>> /usr/lib/liblapack.a:slamch.o: U e_wsfe >>> /usr/lib/liblapack.a:xerbla.o: U e_wsfe >>> >>> so apparently this symbol comes from the f2c library and is used >>> within blas/lapack. Did you specify correctly which BLAS should be >>> used when building and installing umfpack? BTW, if you have matlab and >>> build the umfpack matlab interace which comes along, you could try if >>> umpfack itself works from the matlab command-line. >>> >>> r. >>> > > >> Hi Robert, >> >> Please find attached my Make.linux. >> >> # 3: with Fortran interface to the ATLAS BLAS >> CONFIG = >> LIB = -L/usr/local/lib/atlas -lf77blas -latlas -lfrtbegin -lg2c -lm >> >> >> Is that o.k. ? >> > > Well, I don't know :). But the following works for me: > > # 2: with the ATLAS C-BLAS (http://www.netlib.org/atlas). 
> CONFIG = -DCBLAS -I/usr/include/atlas > LIB = -lcblas -latlas -lm > > - it all depends on how yout ATLAS installation looks. > > r. > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > Hi Robert, Indeed, it works fine with #2. I have added a directory /usr/local/include/atlas and copied all header files from my ATLAS installation in that directory. # 2: with the ATLAS C-BLAS (http://www.netlib.org/atlas). CONFIG = -DCBLAS -I/usr/local/include/atlas LIB = -L/usr/local/lib/atlas -lcblas -latlas -lm Thank you very much. Nils From robert.kern at gmail.com Tue Apr 11 02:02:24 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 11 Apr 2006 01:02:24 -0500 Subject: [SciPy-dev] Statistics Review progress Message-ID: <443B4670.70000@gmail.com> This weekend I made a first pass through stats.py and did a fair bit of cleanup. I checked in my changes as separate revisions for easier perusal: http://projects.scipy.org/scipy/scipy/timeline?from=04%2F09%2F06&daysback=0&changeset=on&update=Update I got through about 25 or so functions. For the most part, I focused on the docstrings and making the code look nice and use relatively modern numpy idioms. Implementing proper unit tests would be the next step for this set of functions. I have made comments on some of the functions in their associated tickets. If you would like to keep track of your favorite functions, please look at their tickets. http://projects.scipy.org/scipy/scipy/query?status=new&status=assigned&status=reopened&milestone=Statistics+Review+Months&order=priority There are some issues that I thought should be discussed here rather than the tracker. * anova(): I want to get rid of it. It and its support functions take up nearly a quarter of the entire code of stats.py. It returns nothing but simply prints it out to stdout. It uses globals. It depends on several undocumented, uncommented functions that nothing else uses; getting rid of anova() closes a lot of other tickets, too (I subscribe to the "making progress by moving goalposts" model of development). It's impossible to unit-test since it returns no values. Gary Strangman, the original author of most of stats.py, removed it from his copy some time ago because he didn't have confidence in its implementation. It appears to me that the function would need a complete rewrite to meet the standards I have set for passing review. I propose to remove it now. * Some of the functions like mean() and std() are replications of functionality in numpy and even the methods of array objects themselves. I would like to remove them, but I imagine they are being used in various places. There's a certain amount of code breakage I'm willing to accept in order to clean up stats.py (e.g. all of my other bullet items), but this seems just gratuitous. * paired() is a bit similar in that it does not return any values but just prints them out. It is somewhat better in that it does not do much computation itself but simply calls other functions depending on the type of data. However, it determines the type of data by asking the user through raw_input(). It seems to me that this kind of function does not add anything to scipy.stats. This functionality would make much more sense as a method in a class that represented a pairwise dataset. The class could hold the information about the kind of data it contained, and thus it would not need to ask the user through raw_input(). 
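As a rough sketch of the direction I mean (the class and method names here
are hypothetical and untested; they are only meant to make the idea
concrete):

from numpy import asarray
from scipy import stats

class PairedData:
    """Sketch of a class representing a pairwise dataset."""
    def __init__(self, x, y, kind='continuous'):
        # The kind of data is declared once, up front, instead of being
        # requested interactively through raw_input().
        self.x = asarray(x)
        self.y = asarray(y)
        self.kind = kind

    def correlation(self):
        # Dispatch on the stored kind rather than prompting the user.
        if self.kind == 'continuous':
            return stats.pearsonr(self.x, self.y)
        else:
            return stats.spearmanr(self.x, self.y)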
* histogram() had a bug in how it computed the default bins. I fixed it. If you
use this function's defaults, your results will now be different. This function
will be refactored later to use numpy.histogram() internally as it is much faster.

* Several functions rely on first histogramming the data and then computing some
values from the cumulative histogram. Namely, cmedian(), scoreatpercentile(),
and percentileatscore(). Essentially, these functions are using the histogram to
calculate an empirical CDF and then using that to find particular values.
However, since the fastest histogram implementation readily available,
numpy.histogram(), sorts the array first, there really isn't any point in doing
the histogram. It is faster and more accurate simply to sort the data and
linearly interpolate (sorted_x, linspace(0., 1., len(sorted_x))).

However, the current forms of these functions would be useful again if they are
slightly modified to accept *already histogrammed* data. They would make good
methods on a Histogram class, for instance.

* I changed the input arguments of pointbiserialr() so my head didn't hurt
trying to follow the old implementation. See the ticket for details:

http://projects.scipy.org/scipy/scipy/ticket/100

* We really need to sort out the issue of biased and unbiased estimators. At
least, a number of scipy.stats functions compute values that could be computed
in two different ways, conventionally given labels "biased" and "unbiased". Now
while there is some disagreement as to which is better (you get to guess which I
prefer), I think we should offer both.

Normally, I try to follow the design principle that if the value of a keyword
argument is almost always given as a constant (e.g. bias=True rather than
bias=flag_set_somewhere_else_in_my_code), then the functionality should be
exposed as two separate functions. However, there are a lot of these functions
in scipy.stats, and I don't think we would be doing anyone any favors by
doubling the number of these functions. IMO, "practicality beats purity" in
this case.

Additionally, if people start writing classes that encapsulate datasets with
methods that estimate quantities (mean, variance, etc.) from that data, they
are likely to want "biased" or "unbiased" estimates for *all* of their
quantities together. A bias flag handles this use-case much more naturally.

The names "biased" and "unbiased" are, of course, up for discussion, since the
label "biased" is not particularly precise. The default setting is also up for
discussion.

--
Robert Kern
robert.kern at gmail.com

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
 -- Umberto Eco

From travis at enthought.com Tue Apr 11 16:10:30 2006
From: travis at enthought.com (Travis N. Vaught)
Date: Tue, 11 Apr 2006 15:10:30 -0500
Subject: [SciPy-dev] ANN: SciPy 2006 Conference
Message-ID: <443C0D36.80608@enthought.com>

Greetings,

The *SciPy 2006 Conference* is scheduled for August 17-18, 2006 at CalTech.
A tremendous amount of work has gone into SciPy and Numpy over the past few
months, and the scientific python community around these and other tools has
truly flourished[1]. The SciPy 2006 Conference is an excellent opportunity
to exchange ideas, learn techniques, contribute code and affect the
direction of scientific computing with Python.

Conference details are at http://www.scipy.org/SciPy2006

Keynote
-------
Python language author Guido van Rossum (!)
has agreed to be the Keynote speaker at this year's Conference.

http://www.python.org/~guido/

Registration:
-------------

Registration is now open. You may register early online for $100.00 at
http://www.enthought.com/scipy06. Registration includes breakfast and lunch
Thursday & Friday and a very nice dinner Thursday night. After July 14,
2006, registration will cost $150.00.

Call for Presenters
-------------------

If you are interested in presenting at the conference, you may submit an
abstract in Plain Text, PDF or MS Word formats to abstracts at scipy.org --
the deadline for abstract submission is July 7, 2006. Papers and/or
presentation slides are acceptable and are due by August 4, 2006.

Tutorial Sessions
-----------------

Several people have expressed interest in attending a tutorial session. The
Wednesday before the conference might be a good day for this. Please email
the list if you have particular topics that you are interested in.

Here's a preliminary list:

- Migrating from Numeric or Numarray to Numpy
- 2D Visualization with Python
- 3D Visualization with Python
- Introduction to Scientific Computing with Python
- Building Scientific Simulation Applications
- Traits/TraitsUI

Please rate these and add others in a subsequent thread to the SciPy-user
mailing list. Perhaps we can pick 4-6 top ideas and recruit speakers as
demand dictates. The authoritative list will be tracked here:
http://www.scipy.org/SciPy2006/TutorialSessions

Coding Sprints
--------------

If anyone would like to arrive earlier (Monday and Tuesday the 14th and 15th
of August), we can borrow a room on the CalTech campus to sit and code
against particular libraries or apps of interest. Please register your
interest in these coding sprints on the SciPy-user mailing list as well. The
authoritative list will be tracked here:
http://www.scipy.org/SciPy2006/CodingSprints

Mailing list address: scipy-user at scipy.org

Mailing list archives: http://dir.gmane.org/gmane.comp.python.scientific.user

Mailing list signup: http://www.scipy.net/mailman/listinfo/scipy-user

[1] Some stats: NumPy has averaged over 16,000 downloads per month Sept. 05
to March 06. SciPy has averaged over 3,800 downloads per month in Feb. and
March 06. (both scipy and numpy figures do not include the 2000 instances
per month downloaded as part of the Python Enthought Edition Distribution
for Windows.)

From david.huard at gmail.com Tue Apr 11 20:26:23 2006
From: david.huard at gmail.com (David Huard)
Date: Tue, 11 Apr 2006 20:26:23 -0400
Subject: [SciPy-dev] Statistical review month : weighted histogram and cumfreq
Message-ID: <91cf711d0604111726h5335cf73qe5a4c7e8393ab96f@mail.gmail.com>

I recently had to compute a weighted cumulative frequency distribution,
so I modified the scipy.stats.histogram and scipy.stats.cumfreq
functions. I added a keyword argument to both function calls, namely
weight=None, where the default is simply uniform weights. I wanted to ask
if this change would be welcome before submitting the patch. My concern is
that the change modifies the result returned by the function. Presently,
the histogram and cumfreq functions return integer arrays, the number of
items lying in certain intervals. When these items are weighted, an integer
count doesn't make much sense, so I normalized the histogram and cumfreq
results. In other words, the new histogram function returns a float array
of the frequency, instead of a count. I feel that having a normalized
output is more practical, but it would ruin existing code.
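To make this concrete, here is a minimal sketch of the kind of computation
I mean (simplified to uniform bins; the function name is only a placeholder,
not the actual patch):

import numpy as n

def weighted_histogram(a, numbins=10, limits=None, weight=None):
    # Sketch only: weighted, normalized histogram over uniform bins.
    a = n.ravel(a).astype(float)
    if weight is None:
        weight = n.ones(len(a), dtype=float)   # default: uniform weights
    else:
        weight = n.asarray(weight, dtype=float)
    if limits is None:
        lo, hi = a.min(), a.max()
    else:
        lo, hi = limits
    binwidth = (hi - lo) / float(numbins)
    # Bin index of each point; the right edge falls into the last bin.
    idx = n.clip(((a - lo) / binwidth).astype(int), 0, numbins - 1)
    freq = n.zeros(numbins, dtype=float)
    for i in range(len(a)):
        freq[idx[i]] += weight[i]
    return freq / weight.sum()   # normalized frequencies, not counts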
There is always the possibility of creating whistogram and wcumfreq
functions, but this is not a pretty solution.

I'd like your feedback about that.

Cheers,

David
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From robert.kern at gmail.com Tue Apr 11 20:53:28 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 11 Apr 2006 19:53:28 -0500
Subject: [SciPy-dev] Statistical review month : weighted histogram and cumfreq
In-Reply-To: <91cf711d0604111726h5335cf73qe5a4c7e8393ab96f@mail.gmail.com>
References: <91cf711d0604111726h5335cf73qe5a4c7e8393ab96f@mail.gmail.com>
Message-ID: <443C4F88.5070606@gmail.com>

David Huard wrote:
> I recently had to compute a weighted cumulative frequency distribution,
> so I modified the scipy.stats.histogram and scipy.stats.cumfreq
> functions. I added a keyword argument to both function calls, namely
> weight=None, where the default is simply uniform weights. I wanted to ask
> if this change would be welcome before submitting the patch.

Well, it's frequently difficult to talk about whether a change is welcome or
not without seeing the code, so please submit the patch to the tracker.

http://projects.scipy.org/scipy/scipy/ticket/76

> My concern is that the change modifies the result returned by the
> function. Presently, the histogram and cumfreq functions return integer
> arrays, the number of items lying in certain intervals. When these items
> are weighted, an integer count doesn't make much sense, so I normalized
> the histogram and cumfreq results. In other words, the new histogram
> function returns a float array of the frequency, instead of a count. I
> feel that having a normalized output is more practical, but it would ruin
> existing code. There is always the possibility of creating whistogram and
> wcumfreq functions, but this is not a pretty solution.

My feeling is that there are a lot of ways to compute histograms. A lot of
those choices are orthogonal to each other.
Also largely orthogonal are the ways in which you might *use* a histogram.
Trying to manage all of those choices with keyword arguments or
differently-named functions is a nightmare. This is a place for classes.

For example, Konrad Hinsen has nice Histogram and WeightedHistogram classes
in Scientific. I think we should leave the interface of scipy.histogram()
alone and write a Histogram class instead.

--
Robert Kern
robert.kern at gmail.com

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
 -- Umberto Eco

From tim.leslie at gmail.com Tue Apr 11 21:25:00 2006
From: tim.leslie at gmail.com (Tim Leslie)
Date: Wed, 12 Apr 2006 11:25:00 +1000
Subject: [SciPy-dev] Statistics Review progress
In-Reply-To: <443B4670.70000@gmail.com>
References: <443B4670.70000@gmail.com>
Message-ID: 

On 4/11/06, Robert Kern wrote:
> * anova(): I want to get rid of it. It and its support functions take up
> nearly a quarter of the entire code of stats.py. It returns nothing but
> simply prints it out to stdout. It uses globals. It depends on several
> undocumented, uncommented functions that nothing else uses; getting rid
> of anova() closes a lot of other tickets, too (I subscribe to the "making
> progress by moving goalposts" model of development). It's impossible to
> unit-test since it returns no values. Gary Strangman, the original author
> of most of stats.py, removed it from his copy some time ago because he
> didn't have confidence in its implementation.
>
> It appears to me that the function would need a complete rewrite to meet
> the standards I have set for passing review. I propose to remove it now.

+2. I went through and made some cosmetic changes to stats.py and when I got
to anova() my eyes bugged out. After an hour of head scratching and scrolling
up and down through the code I decided it wasn't worth the effort to try to
decipher quite what was going on. Trying to call the function showed up bugs
in the _support functions. I think the only logical choice at this stage is
to remove anova() and the code which "supports" it.

Cheers,

Timl
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From a.mcmorland at auckland.ac.nz Tue Apr 11 23:10:35 2006
From: a.mcmorland at auckland.ac.nz (Angus McMorland)
Date: Wed, 12 Apr 2006 15:10:35 +1200
Subject: [SciPy-dev] Arbitrary n-dimensional rebin routine
Message-ID: <443C6FAB.2020007@auckland.ac.nz>

Hi all,

I'm not sure of the protocol here, so I'll try the list first...

Here's an n-dimensional rebinning routine I've written (based loosely on
IDL's congrid function), and I hope it might be of some use. Perhaps it can
get bundled with Gary Ruben's rebin function, currently on the wiki
(http://scipy.org/Cookbook/Rebinning).

I've thrown a few simple tests at the routine so far, with good results, but
it could probably do with more field testing. The routine is currently
optimised to take as long and as much memory as possible, so if someone with
more knowledge of these things than me is interested in refining it a bit,
that'd be great. As a scipy beginner, I'm also really interested to get some
feedback on coding best-practices that I should have used and haven't.

Cheers,

Angus.

def congrid(a, nud, method='neighbour', centre=False, minusone=False):
    '''Arbitrary resampling of source array to new dimension sizes.
    Currently only supports maintaining the same number of dimensions.
    Also doesn't work for 1-D arrays unless promoted to shape (x,1).

    Based loosely on IDL's congrid routine, which apparently originally
    came from a VAX/VMS routine of the same name.

    I'm not completely sure of the validity of using parallel 1-D
    interpolations repeated along each axis in succession, but the
    results are visually compelling.

    method:
    neighbour - closest value from original data
    the 'kinds' supported by scipy.interpolate.interp1d

    centre:
    True - interpolation points are at the centres of the bins
    False - points are at the front edge of the bin

    minusone:
    For example- inarray.shape = (i,j) & new dimensions = (x,y)
    False - inarray is resampled by factors of (i/x) * (j/y)
    True - inarray is resampled by (i-1)/(x-1) * (j-1)/(y-1)
    This prevents extrapolation one element beyond bounds of input array.
    '''
    if not a.dtype.type in [n.typeDict['Float32'], n.typeDict['Float64']]:
        print "Converting to float"
        a = a.astype('Float32')

    if minusone:
        m1 = 1.
    else:
        m1 = 0.
    if centre:
        ofs = 0.5
    else:
        ofs = 0.

    old = n.asarray( a.shape )
    ndims = len( old )
    if len( nud ) != ndims:
        print "Congrid: dimensions error. This routine currently only " \
              "supports rebinning to the same number of dimensions."
        return None
    nudr = n.asarray( nud ).astype('Float32')

    dimlist = []

    if method == 'neighbour':
        # nearest-neighbour lookup via index volumes (see nvals below)
        for i in range( ndims ):
            base = nvals(i, nudr)
            dimlist.append( (old[i] - m1) / (nudr[i] - m1) \
                            * (base + ofs) - ofs )
        cd = n.array( dimlist )
        cdr = cd.round().astype( 'UInt16' )
        nua = a[list( cdr )]
        return nua

    elif method in ['nearest','linear','cubic','spline']:
        # calculate new dims
        for i in range( ndims ):
            base = n.arange( nudr[i] )
            dimlist.append( (old[i] - m1) / (nudr[i] - m1) \
                            * (base + ofs) - ofs )

        # specify old dims
        olddims = [n.arange(i).astype('Float32') for i in list( a.shape )]

        # first interpolation - for ndims = any
        mint = scipy.interpolate.interp1d( olddims[-1], a, kind=method )
        nua = mint( dimlist[-1] )

        trorder = [ndims - 1] + range( ndims - 1 )
        for i in range( ndims - 2, -1, -1 ):
            nua = nua.transpose( trorder )

            mint = scipy.interpolate.interp1d( olddims[i], nua, kind=method )
            nua = mint( dimlist[i] )

        if ndims > 1:
            # need one more transpose to return to original dimensions
            nua = nua.transpose( trorder )

        return nua

    else:
        print "Congrid error: Unrecognized interpolation type. This " \
              "routine currently only supports 'neighbour', 'nearest', " \
              "'linear', 'cubic', and 'spline'."
        return None

--
Angus McMorland
email a.mcmorland at auckland.ac.nz
mobile +64-21-155-4906

PhD Student, Neurophysiology / Multiphoton & Confocal Imaging
Physiology, University of Auckland
phone +64-9-3737-599 x89707

Armourer, Auckland University Fencing
Secretary, Fencing North Inc.

From a.mcmorland at auckland.ac.nz Wed Apr 12 01:48:24 2006
From: a.mcmorland at auckland.ac.nz (Angus McMorland)
Date: Wed, 12 Apr 2006 17:48:24 +1200
Subject: [SciPy-dev] Arbitrary n-dimensional rebin routine
In-Reply-To: <443C6FAB.2020007@auckland.ac.nz>
References: <443C6FAB.2020007@auckland.ac.nz>
Message-ID: <443C94A8.1070101@auckland.ac.nz>

Okay, my first reply is to myself, with all the bits I forgot to add.

Angus McMorland wrote:
> [snip]

import numpy as n
import scipy.interpolate

def nvals(axis, dims):
    '''Returns xvals-like volumes, indexing over any dimension axis.
    Will probably crash and burn if axis >= len(dims).'''
    evList = ''
    for i in range( len(dims) - 1):
        if i < axis:
            evList = evList + '%d' % dims[i]
        else:
            evList = evList + '%d' % dims[i + 1]
        if i < len(dims) - 2:
            evList = evList + ', '
    evList = 'xvals( (%d,' % dims[axis] + evList + ')' + ')'
    xs = eval( evList )
    evList = ''
    for i in range( len(dims) ):
        if i < axis:
            ind = i + 1
        elif i == axis:
            ind = 0
        else:
            ind = i
        evList = evList + '%d' % ind
        if i < len(dims) - 1:
            evList = evList + ', '
    evList = 'xs.transpose(' + evList + ')'
    return eval( evList )

> def congrid(a, nud, method='neighbour', centre=False, minusone=False):
> '''Arbitrary resampling of source array to new dimension sizes.
[snip]

Also the docstring should have something about the first couple of params:

    Usage:
    arg0: input array
    arg1: tuple of resulting dimensions

    Example:
    rebinned = rebin.congrid( raw, (2,2,2), \
        method=[neighbour|linear|cubic|spline], \
        minusone=[True|False], \
        centre=[True|False])

Let's try again.

--
Angus McMorland
email a.mcmorland at auckland.ac.nz
mobile +64-21-155-4906

PhD Student, Neurophysiology / Multiphoton & Confocal Imaging
Physiology, University of Auckland
phone +64-9-3737-599 x89707

Armourer, Auckland University Fencing
Secretary, Fencing North Inc.

From a.mcmorland at auckland.ac.nz Wed Apr 12 01:57:08 2006
From: a.mcmorland at auckland.ac.nz (Angus McMorland)
Date: Wed, 12 Apr 2006 17:57:08 +1200
Subject: [SciPy-dev] Arbitrary n-dimensional rebin routine
In-Reply-To: <443C94A8.1070101@auckland.ac.nz>
References: <443C6FAB.2020007@auckland.ac.nz> <443C94A8.1070101@auckland.ac.nz>
Message-ID: <443C96B4.1020102@auckland.ac.nz>

Angus McMorland wrote:
> Okay, my first reply is to myself, with all the bits I forgot to add.

Make that _some_ of the bits I forgot. Here's the last of them:

def xi(*indices):
    # Helper used by xvals: fromfunction() calls this with one index
    # array per dimension, and each element's value is its first (x) index.
    return indices[0]

def xvals(dims):
    '''Returns a volume with the appropriate x index value in each
    element.'''
    evList = ['n.fromfunction( xi, (%d' % dims[0] ] + \
             [', %d' % dims[i+1] for i in range( len(dims) - 1 )] + \
             [') )']
    return eval( ''.join(evList) )

Apologies for the multiple postings.

--
Angus McMorland
email a.mcmorland at auckland.ac.nz
mobile +64-21-155-4906

PhD Student, Neurophysiology / Multiphoton & Confocal Imaging
Physiology, University of Auckland
phone +64-9-3737-599 x89707

Armourer, Auckland University Fencing
Secretary, Fencing North Inc.

From robert.kern at gmail.com Wed Apr 12 02:05:41 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 12 Apr 2006 01:05:41 -0500
Subject: [SciPy-dev] Arbitrary n-dimensional rebin routine
In-Reply-To: <443C96B4.1020102@auckland.ac.nz>
References: <443C6FAB.2020007@auckland.ac.nz> <443C94A8.1070101@auckland.ac.nz>
 <443C96B4.1020102@auckland.ac.nz>
Message-ID: <443C98B5.8050406@gmail.com>

Angus McMorland wrote:
> Angus McMorland wrote:
>
>>Okay, my first reply is to myself, with all the bits I forgot to add.
>
> Make that _some_ of the bits I forgot. Here's the last of them:

I'm not sure how many people are going to try to reconstruct the full module by
copying and pasting from their email clients. Could you post it, in its
entirety, to the SciPy wiki instead? This page would be good:

http://www.scipy.org/Cookbook/Rebinning

--
Robert Kern
robert.kern at gmail.com

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco From a.mcmorland at auckland.ac.nz Wed Apr 12 03:43:53 2006 From: a.mcmorland at auckland.ac.nz (Angus McMorland) Date: Wed, 12 Apr 2006 19:43:53 +1200 Subject: [SciPy-dev] Arbitrary n-dimensional rebin routine In-Reply-To: <443C98B5.8050406@gmail.com> References: <443C6FAB.2020007@auckland.ac.nz> <443C94A8.1070101@auckland.ac.nz> <443C96B4.1020102@auckland.ac.nz> <443C98B5.8050406@gmail.com> Message-ID: <443CAFB9.4060203@auckland.ac.nz> Robert Kern wrote: > I'm not sure how many people are going to try to reconstruct the full module by > copying and pasting from their email clients. Could you post it, in its > entirety, to the SciPy wiki instead? This page would be good: > > http://www.scipy.org/Cookbook/Rebinning Done. Thanks for the advice, that's much better than email. The code is now up, in its entirety at the above page. A. -- Angus McMorland email a.mcmorland at auckland.ac.nz mobile +64-21-155-4906 PhD Student, Neurophysiology / Multiphoton & Confocal Imaging Physiology, University of Auckland phone +64-9-3737-599 x89707 Armourer, Auckland University Fencing Secretary, Fencing North Inc. From schofield at ftw.at Wed Apr 12 09:03:23 2006 From: schofield at ftw.at (Ed Schofield) Date: Wed, 12 Apr 2006 15:03:23 +0200 Subject: [SciPy-dev] Statistics Review progress In-Reply-To: <443B4670.70000@gmail.com> References: <443B4670.70000@gmail.com> Message-ID: <443CFA9B.2070705@ftw.at> Robert Kern wrote: > This weekend I made a first pass through stats.py and did a fair bit of cleanup. > I checked in my changes as separate revisions for easier perusal: > http://projects.scipy.org/scipy/scipy/timeline?from=04%2F09%2F06&daysback=0&changeset=on&update=Update > > I got through about 25 or so functions. For the most part, I focused on the > docstrings and making the code look nice and use relatively modern numpy idioms. > Implementing proper unit tests would be the next step for this set of functions. Well done for your work on this! > * anova(): I want to get rid of it. It and its support functions take up nearly > a quarter of the entire code of stats.py. It returns nothing but simply prints > it out to stdout. It uses globals. It depends on several undocumented, > uncommented functions that nothing else uses; getting rid of anova() closes a > lot of other tickets, too (I subscribe to the "making progress by moving > goalposts" model of development). It's impossible to unit-test since it returns > no values. Gary Strangman, the original author of most of stats.py, removed it > from his copy some time ago because he didn't have confidence in its implementation. > > It appears to me that the function would need a complete rewrite to meet the > standards I have set for passing review. I propose to remove it now. > +1. This sounds like the best option for the short term. > * Some of the functions like mean() and std() are replications of functionality > in numpy and even the methods of array objects themselves. I would like to > remove them, but I imagine they are being used in various places. There's a > certain amount of code breakage I'm willing to accept in order to clean up > stats.py (e.g. all of my other bullet items), but this seems just gratuitous. > I think we should remove the duplicated functions mean, std, and var from stats. The corresponding functions are currently imported from numpy into the stats namespace anyway. > * We really need to sort out the issue of biased and unbiased estimators. 
At > least, a number of scipy.stats functions compute values that could be computed > in two different ways, conventionally given labels "biased" and "unbiased". Now > while there is some disagreement as to which is better (you get to guess which I > prefer), I think we should offer both.
>

I'd argue strongly that var and std should be identical to the functions in
numpy. If we want this we'd need separate functions like varbiased.

I don't really see the benefit of a 'bias' flag. If we do encounter some real
problems in handling the biased estimators consistently without it, we might
as well argue for modifying the corresponding functions in numpy. But it'd be
trivial to write

def my_var_function_with_bias_flag(a, bias=True):
    if bias:
        return varbiased(a)
    else:
        return var(a)

if this were ever necessary.

> The names "biased" and "unbiased" are, of course, up for discussion, since the
> label "biased" is not particularly precise. The default setting is also up for
> discussion.

I can't think of anything better than 'biased'. 'Sample' would be ambiguous,
as Zachary mentioned. Using 'unbiased' in the names would be incorrect for
std, as Zachary also mentioned, but we could avoid this by using numpy.std
etc instead. We could also change the numpy.std docstring to note explicitly
that it's the square root of the unbiased sample variance estimate.

-- Ed

From aisaac at american.edu Wed Apr 12 09:17:07 2006
From: aisaac at american.edu (Alan G Isaac)
Date: Wed, 12 Apr 2006 09:17:07 -0400
Subject: [SciPy-dev] Statistics Review progress
In-Reply-To: <443CFA9B.2070705@ftw.at>
References: <443B4670.70000@gmail.com><443CFA9B.2070705@ftw.at>
Message-ID: 

On Wed, 12 Apr 2006, Ed Schofield apparently wrote:
> I can't think of anything better than 'biased'. 'Sample'
> would be ambiguous, as Zachary mentioned. Using
> 'unbiased' in the names would be incorrect for std, as
> Zachary also mentioned

'central'?
http://mathworld.wolfram.com/SampleCentralMoment.html

fwiw,
Alan Isaac

From robert.kern at gmail.com Wed Apr 12 12:48:21 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 12 Apr 2006 11:48:21 -0500
Subject: [SciPy-dev] Statistics Review progress
In-Reply-To: 
References: <443B4670.70000@gmail.com><443CFA9B.2070705@ftw.at>
Message-ID: <443D2F55.1050806@gmail.com>

Alan G Isaac wrote:
> On Wed, 12 Apr 2006, Ed Schofield apparently wrote:
>
>>I can't think of anything better than 'biased'. 'Sample'
>>would be ambiguous, as Zachary mentioned. Using
>>'unbiased' in the names would be incorrect for std, as
>>Zachary also mentioned
>
> 'central'?
> http://mathworld.wolfram.com/SampleCentralMoment.html

No, that's another concept entirely. The nth moment of a distribution is
defined around 0:

    integrate(lambda x: (x-0)**n*pdf(x), -inf, inf)

The nth central moment of a distribution is defined around the mean of the
distribution:

    integrate(lambda x: (x-mean)**n*pdf(x), -inf, inf)

One can devise procedures ("estimators") to estimate these and other
quantities from actual data putatively sampled from these distributions.
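Concretely, two such procedures for the central variance of a 1-D sample x
might look like this (an illustrative sketch only, not scipy API):

import numpy

x = numpy.array([2.1, 1.9, 2.4, 2.0, 2.6])
nobs = len(x)
resid2 = ((x - x.mean())**2).sum()
var_ml = resid2 / nobs              # the maximum likelihood estimate
var_unbiased = resid2 / (nobs - 1)  # the conventional "unbiased" estimate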
These estimators are called "unbiased" when the distribution of the
*estimated quantity* (assuming we were to draw same-sized datasets from the
same underlying distribution many times and calculate estimates from each of
them separately) has a mean equal to the actual quantity. You could also
define estimators that have other features, for example, picking out the
estimate that would provide the maximum likelihood of actually getting the
data in front of you. Maximum likelihood estimators are generally "biased."
Honestly, I think this is a good thing, but many people don't.

For most of the estimators that we are talking about here, the only
difference between the two is a coefficient near 1 (and it gets closer to 1
as the size of the sample increases). For example, the maximum likelihood
estimator for (central!) variance is

    ((x-x.mean())**2).sum()/len(x)

The "unbiased" estimator is

    ((x-x.mean())**2).sum()/(len(x)-1)

--
Robert Kern
robert.kern at gmail.com

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
 -- Umberto Eco

From oliphant.travis at ieee.org Wed Apr 12 12:51:32 2006
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Wed, 12 Apr 2006 10:51:32 -0600
Subject: [SciPy-dev] Statistics Review progress
In-Reply-To: <443B4670.70000@gmail.com>
References: <443B4670.70000@gmail.com>
Message-ID: <443D3014.2020102@ieee.org>

Robert Kern wrote:
> * anova(): I want to get rid of it. It and its support functions take up nearly
> a quarter of the entire code of stats.py. It returns nothing but simply prints
> it out to stdout. It uses globals. It depends on several undocumented,
> uncommented functions that nothing else uses; getting rid of anova() closes a
> lot of other tickets, too (I subscribe to the "making progress by moving
> goalposts" model of development). It's impossible to unit-test since it returns
> no values. Gary Strangman, the original author of most of stats.py, removed it
> from his copy some time ago because he didn't have confidence in its implementation.
>
+1
>
+1 on deleting this

> * Several functions rely on first histogramming the data and then computing some
> values from the cumulative histogram. Namely, cmedian(), scoreatpercentile(),
> and percentileatscore(). Essentially, these functions are using the histogram to
> calculate an empirical CDF and then using that to find particular values.
> However, since the fastest histogram implementation readily available,
> numpy.histogram(), sorts the array first, there really isn't any point in doing
> the histogram. It is faster and more accurate simply to sort the data and
> linearly interpolate (sorted_x, linspace(0., 1., len(sorted_x))).
>
> However, the current forms of these functions would be useful again if they are
> slightly modified to accept *already histogrammed* data. They would make good
> methods on a Histogram class, for instance.
>
+1 on a Histogram class

> * I changed the input arguments of pointbiserialr() so my head didn't hurt
> trying to follow the old implementation. See the ticket for details:
>
> http://projects.scipy.org/scipy/scipy/ticket/100
>
> * We really need to sort out the issue of biased and unbiased estimators. At
> least, a number of scipy.stats functions compute values that could be computed
> in two different ways, conventionally given labels "biased" and "unbiased". Now
> while there is some disagreement as to which is better (you get to guess which I
> prefer), I think we should offer both.
>
Yes we should offer both. It would be nice if we could allow the user to
decide as well.

> Normally, I try to follow the design principle that if the value of a keyword
> argument is almost always given as a constant (e.g. bias=True rather than
> bias=flag_set_somewhere_else_in_my_code), then the functionality should be
> exposed as two separate functions. However, there are a lot of these functions
> in scipy.stats, and I don't think we would be doing anyone any favors by
> doubling the number of these functions. IMO, "practicality beats purity" in this
> case.
>
That's my thinking too.

> Additionally, if people start writing classes that encapsulate datasets with
> methods that estimate quantities (mean, variance, etc.) from that data, they are
> likely to want "biased" or "unbiased" estimates for *all* of their quantities
> together. A bias flag handles this use-case much more naturally.
>
> The names "biased" and "unbiased" are, of course, up for discussion, since the
> label "biased" is not particularly precise. The default setting is also up for
> discussion.
>
How about making the default minimize mean square error --- i.e. division by
N+1 for variance calculation :-)

-Travis

From aisaac at american.edu Wed Apr 12 13:03:36 2006
From: aisaac at american.edu (Alan Isaac)
Date: Wed, 12 Apr 2006 13:03:36 -0400 (EDT)
Subject: [SciPy-dev] Statistics Review progress
In-Reply-To: <443D2F55.1050806@gmail.com>
References: <443B4670.70000@gmail.com><443CFA9B.2070705@ftw.at>
 <443D2F55.1050806@gmail.com>
Message-ID: 

On Wed, 12 Apr 2006, Robert Kern wrote:
> that's another concept entirely.
http://mathworld.wolfram.com/SampleVariance.html

Cheers,
Alan Isaac

From robert.kern at gmail.com Wed Apr 12 13:04:20 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 12 Apr 2006 12:04:20 -0500
Subject: [SciPy-dev] Statistics Review progress
In-Reply-To: <443CFA9B.2070705@ftw.at>
References: <443B4670.70000@gmail.com> <443CFA9B.2070705@ftw.at>
Message-ID: <443D3314.3060603@gmail.com>

Ed Schofield wrote:
> Robert Kern wrote:
>>* Some of the functions like mean() and std() are replications of functionality
>>in numpy and even the methods of array objects themselves. I would like to
>>remove them, but I imagine they are being used in various places. There's a
>>certain amount of code breakage I'm willing to accept in order to clean up
>>stats.py (e.g. all of my other bullet items), but this seems just gratuitous.
>
> I think we should remove the duplicated functions mean, std, and var
> from stats. The corresponding functions are currently imported from
> numpy into the stats namespace anyway.

Well, not for long.

http://projects.scipy.org/scipy/scipy/ticket/192

But we could keep std(), var(), mean(), and median() in mind and import them
specifically. However, numpy.median() will have to grow an axis argument.

>>* We really need to sort out the issue of biased and unbiased estimators. At
>>least, a number of scipy.stats functions compute values that could be computed
>>in two different ways, conventionally given labels "biased" and "unbiased". Now
>>while there is some disagreement as to which is better (you get to guess which I
>>prefer), I think we should offer both.
>>
>>Normally, I try to follow the design principle that if the value of a keyword
>>argument is almost always given as a constant (e.g. bias=True rather than
>>bias=flag_set_somewhere_else_in_my_code), then the functionality should be
>>exposed as two separate functions. However, there are a lot of these functions
>>in scipy.stats, and I don't think we would be doing anyone any favors by
>>doubling the number of these functions. IMO, "practicality beats purity" in this
>>case.
>
> I'd argue strongly that var and std should be identical to the functions
> in numpy. If we want this we'd need separate functions like varbiased.
>
> I don't really see the benefit of a 'bias' flag.

Well, you snipped the use-case I gave.

> If we do encounter
> some real problems in handling the biased estimators consistently
> without it, we might as well argue for modifying the corresponding
> functions in numpy.

Yes. I do in fact argue for that.

> But it'd be trivial to write
>
> def my_var_function_with_bias_flag(a, bias=True):
>     if bias:
>         return varbiased(a)
>     else:
>         return var(a)
>
> if this were ever necessary.

This is a bit backwards. I would implement varbiased() and var():

def varbiased(a):
    return var_with_flag(a, bias=True)

def var(a):
    return var_with_flag(a, bias=False)

I *don't* want three versions of each of these functions.

--
Robert Kern
robert.kern at gmail.com

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
 -- Umberto Eco

From travis at enthought.com Wed Apr 12 13:33:42 2006
From: travis at enthought.com (Travis N.
Vaught) Date: Wed, 12 Apr 2006 12:33:42 -0500 Subject: [SciPy-dev] [SciPy-user] ANN: SciPy 2006 Conference In-Reply-To: References: <443C0D36.80608@enthought.com> Message-ID: <443D39F6.6040805@enthought.com> Perry Greenfield wrote: > Hi Travis, > > Who is helping organize this year's conference besides you. I'm > thinking I may want to help, particularly in involving the > astronomical community more. The next month isn't that good for me > but I'd like to be involved if that is of interest to you (i.e., with > the program and such). If you are interested, what is your schedule > of activities for planning the conference? > > Thanks, Perry Hey Perry (and anyone else interested in helping), I'm open to any help we can get to organize the conference (thus the cross-post to scipy-dev)--thanks for the willingness to pitch in. It's probably useful to give a breakdown of all the tasks (at least the one's I can think of right now): Abstracts Review/Speaker Recruitment & Acceptance ------------------------------------------------- This has traditionally been under the purview of the co-sponsors, represented by me, Eric Jones, Michael Aivazis and Michel Sanner. If anyone would like to participate, let us know and we'll organize a committee to do this with more rigor. I think there's a lot of potential in this area that, frankly, has gone untapped. Organize Auxiliary Sessions --------------------------- Several things to do here, particularly in accommodating the various sub-communities like astronomy, biology, etc. This may be a good area for you to pitch in, Perry.: - Organize Sprint projects - Organize Tutorials - Initiate and moderate conversation about BOF interests and organize BOF meeting times. Marketing the Conference ------------------------ We have largely relied on the usual mailing lists to get the word out about the conference. We could definitely have a broader campaign here. I think we could reasonably accommodate 50% more attendees and still have a nice collaborative environment. Some things that come immediately to mind are: - A prominent announcement on the python.org home page & encouraging other PSF involvement (sponsorship of a speaker(s), a sprint, student attendees?) - Announce/articles on other sites/blogs. - Better follow-up reminders for registration. - Targeting particular folks in various Organizations/Universities/Labs to spread the word internally. - Pitch in on keeping the scipy site updated, graphics, etc. - Any other ideas? Sponsorship/Encouragement of Student Participation -------------------------------------------------- There has been some discussion about ways to encourage more students to attend the meeting. Because of our goal of keeping registration costs low, we don't have funds to physically bring students to the conference. We could possibly waive student registration fees, or recruit organizations to sponsor student attendance. We could organize Sprints/Tutorials/BOFs specifically for this sort of group. Of course there's also the issue of getting the word out and targeting interested students. Any ideas? Event Planning -------------- The wonderful folks at CalTech handle this superbly. All planning for meals, meeting rooms, A/V, parking, nametags, check-in, etc. are pretty much taken care of by them--an amazing thing, really. So, not much to do here (thankfully). Ideas? ------ We're interested in any ideas to make this a compelling, productive time--I'm sure I'm forgetting/missing some things. 
Now, to actually answer your question about the schedule for the Conference planning. Here are some dates: --Between now and June, we should try to build a schedule of tutorials and sprint projects. We should follow up with threads for this. I've created wiki stubs to hold the results: http://www.scipy.org/SciPy2006/TutorialSessions http://www.scipy.org/SciPy2006/CodingSprints -- June 30,2006: Arbitrary target date for preliminary Sprint & Tutorial Schedule --manage 'registration' of tutorial and sprint attendees-- July 7, 2006: Presentation Abstracts Due --week of reviewing abstracts and defining the schedule-- July 14, 2006: Accept/announce presentation schedule July 14, 2006: Early registration deadline --Figure out a BOF schedule sometime in August and announce at the Conference-- We're a bit flexible on this, so suggestions are welcome. Thanks again, Perry. Travis From travis at enthought.com Wed Apr 12 16:16:30 2006 From: travis at enthought.com (Travis N. Vaught) Date: Wed, 12 Apr 2006 15:16:30 -0500 Subject: [SciPy-dev] [SciPy-user] ANN: SciPy 2006 Conference In-Reply-To: <443D39F6.6040805@enthought.com> References: <443C0D36.80608@enthought.com> <443D39F6.6040805@enthought.com> Message-ID: <443D601E.3020500@enthought.com> Travis N. Vaught wrote: > > Abstracts Review/Speaker Recruitment & Acceptance > ------------------------------------------------- > This has traditionally been under the purview of the co-sponsors, > represented by me, Eric Jones, Michael Aivazis and Michel Sanner. One correction...I believe Travis Oliphant took on the bulk of this work last year. Apologies to "the hard-working Travis" for forgetting about that. Travis From pgmdevlist at mailcan.com Wed Apr 12 17:22:28 2006 From: pgmdevlist at mailcan.com (Pierre GM) Date: Wed, 12 Apr 2006 17:22:28 -0400 Subject: [SciPy-dev] Statistics Review progress In-Reply-To: <443D3314.3060603@gmail.com> References: <443B4670.70000@gmail.com> <443CFA9B.2070705@ftw.at> <443D3314.3060603@gmail.com> Message-ID: <200604121722.29089.pgmdevlist@mailcan.com> My 2c > Well, not for long. > > http://projects.scipy.org/scipy/scipy/ticket/192 > > But we could keep std(), var(), mean(), and median() in mind and import > them specifically. However, numpy.median() will have to grow an axis > argument. Well, std, var, mean... are methods in numpy. We could quite well leave out the numpy functions and reimplement them in Scipy. I'm also quite in favor of a biased flag in these functions, instead of 3 different functions. For std and var, `biased=False` could be the default, in order to match their current behavior. If the flag option is deemed confusing and eventually discarded, shouldn't we change some functions in numpy.lib.mlab as well, for the sake of homogeneity ? Have a look at `numpy.lib.mlab.cov` in particular. And once again, should we look forward supporting MaskedArrays in scipy ? From andorxor at gmx.de Wed Apr 12 18:11:50 2006 From: andorxor at gmx.de (Stephan Tolksdorf) Date: Thu, 13 Apr 2006 00:11:50 +0200 Subject: [SciPy-dev] Statistics Review progress In-Reply-To: <443B4670.70000@gmail.com> References: <443B4670.70000@gmail.com> Message-ID: <443D7B26.2090208@gmx.de> Robert Kern wrote: > * We really need to sort out the issue of biased and unbiased estimators. At > least, a number of scipy.stats functions compute values that could be computed > in two different ways, conventionally given labels "biased" and "unbiased". 
Now > while there is some disagreement as to which is better (you get to guess which I > prefer), I think we should offer both. > I'd like to note that the square root of the unbiased variance estimator is not an unbiased estimator of the standard deviation* (see Jensen's inequality), which is why unbiasedness is not a terribly useful concept in this context. It's a pity I currently don't have the time to help reviewing the stats functions. Stephan *) This is small error in Travis' Guide to Numpy. From robert.kern at gmail.com Wed Apr 12 18:42:10 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 12 Apr 2006 17:42:10 -0500 Subject: [SciPy-dev] Statistics Review progress In-Reply-To: <443D7B26.2090208@gmx.de> References: <443B4670.70000@gmail.com> <443D7B26.2090208@gmx.de> Message-ID: <443D8242.3070100@gmail.com> Stephan Tolksdorf wrote: > Robert Kern wrote: > >>* We really need to sort out the issue of biased and unbiased estimators. At >>least, a number of scipy.stats functions compute values that could be computed >>in two different ways, conventionally given labels "biased" and "unbiased". Now >>while there is some disagreement as to which is better (you get to guess which I >>prefer), I think we should offer both. > > I'd like to note that the square root of the unbiased variance estimator > is not an unbiased estimator of the standard deviation* (see Jensen's > inequality), which is why unbiasedness is not a terribly useful concept > in this context. Hmmm. It occurs to me that "bias" or "mean squared error" is probably best defined in the logarithmic space for strictly positive quantities like variance. Then the standard deviation is just a scale away, and the "biasedness" ought to be invariant. This is off-topic, but has anyone seen any publications taking this approach? -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From oliphant at ee.byu.edu Wed Apr 12 19:55:13 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 12 Apr 2006 17:55:13 -0600 Subject: [SciPy-dev] Statistics Review progress In-Reply-To: <443D8242.3070100@gmail.com> References: <443B4670.70000@gmail.com> <443D7B26.2090208@gmx.de> <443D8242.3070100@gmail.com> Message-ID: <443D9361.8010802@ee.byu.edu> Robert Kern wrote: >Stephan Tolksdorf wrote: > > >>Robert Kern wrote: >> >> >> >>>* We really need to sort out the issue of biased and unbiased estimators. At >>>least, a number of scipy.stats functions compute values that could be computed >>>in two different ways, conventionally given labels "biased" and "unbiased". Now >>>while there is some disagreement as to which is better (you get to guess which I >>>prefer), I think we should offer both. >>> >>> >>I'd like to note that the square root of the unbiased variance estimator >>is not an unbiased estimator of the standard deviation* (see Jensen's >>inequality), which is why unbiasedness is not a terribly useful concept >>in this context. >> >> > >Hmmm. It occurs to me that "bias" or "mean squared error" is probably best >defined in the logarithmic space for strictly positive quantities like variance. >Then the standard deviation is just a scale away, and the "biasedness" ought to >be invariant. > >This is off-topic, but has anyone seen any publications taking this approach? > > I have not seen any, but I think you are spot-on. 
-Travis From schofield at ftw.at Thu Apr 13 06:10:42 2006 From: schofield at ftw.at (Ed Schofield) Date: Thu, 13 Apr 2006 12:10:42 +0200 Subject: [SciPy-dev] Statistics Review progress In-Reply-To: <443D3314.3060603@gmail.com> References: <443B4670.70000@gmail.com> <443CFA9B.2070705@ftw.at> <443D3314.3060603@gmail.com> Message-ID: <443E23A2.6080907@ftw.at> Travis Oliphant wrote: > How about making the default minimize mean square error --- i.e > division by N+1 for variance calculation :-) Robert Kern wrote: > I would implement varbiased() and var() > > def varbiased(a): > return var_with_flag(a, bias=True) > > def var(a): > return var_with_flag(a, bias=False) > > I *don't* want three versions of each of these functions. > If we really want just one version for the three cases with denominators (n-1), n, and (n+1), I suggest we get rid of the boolean bias flag and make it an integer flag instead, taking either -1 (default), 0, or +1. A boolean flag would be clumsy, since it'd be true for both the n and (n+1) cases, and we'd need yet another flag to distinguish between these. And, as others have pointed out, bias=False would be an inaccurate description of the std function. -- Ed From robert.kern at gmail.com Thu Apr 13 13:05:13 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 13 Apr 2006 12:05:13 -0500 Subject: [SciPy-dev] Statistics Review progress In-Reply-To: <443E23A2.6080907@ftw.at> References: <443B4670.70000@gmail.com> <443CFA9B.2070705@ftw.at> <443D3314.3060603@gmail.com> <443E23A2.6080907@ftw.at> Message-ID: <443E84C9.8080100@gmail.com> Ed Schofield wrote: > Travis Oliphant wrote: > >>How about making the default minimize mean square error --- i.e >>division by N+1 for variance calculation :-) > > Robert Kern wrote: > >>I would implement varbiased() and var() >> >>def varbiased(a): >> return var_with_flag(a, bias=True) >> >>def var(a): >> return var_with_flag(a, bias=False) >> >>I *don't* want three versions of each of these functions. > > If we really want just one version for the three cases with denominators > (n-1), n, and (n+1), I suggest we get rid of the boolean bias flag and > make it an integer flag instead, taking either -1 (default), 0, or +1. > A boolean flag would be clumsy, since it'd be true for both the n and > (n+1) cases, and we'd need yet another flag to distinguish between > these. And, as others have pointed out, bias=False would be an > inaccurate description of the std function. That would work for variance, but not for other quantities. If we really do want to support more than two forms across several quantities in a uniform manner, then enums are the way to specify them. kind.Efficient kind.MinMSE kind.MaxLikelihood -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Fri Apr 14 02:43:14 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 14 Apr 2006 01:43:14 -0500 Subject: [SciPy-dev] Statistics Review progress In-Reply-To: <443D9361.8010802@ee.byu.edu> References: <443B4670.70000@gmail.com> <443D7B26.2090208@gmx.de> <443D8242.3070100@gmail.com> <443D9361.8010802@ee.byu.edu> Message-ID: <443F4482.4010201@gmail.com> Travis Oliphant wrote: > Robert Kern wrote: >>Hmmm. It occurs to me that "bias" or "mean squared error" is probably best >>defined in the logarithmic space for strictly positive quantities like variance. 
>>Then the standard deviation is just a scale away, and the "biasedness" ought to >>be invariant. >> >>This is off-topic, but has anyone seen any publications taking this approach? > > I have not seen any, but I think you are spot-on. Interestingly, when you define the "error" of the variance estimator to be log(vhat/real_var), the minimum mean squared error estimator in the family power(x - x.mean(), 2).mean() / y gives y=n-2 rather than any of the ones proposed in this thread. Even more interestingly, it is also unbiased! At least according to my numerical experiments. I haven't made any rigorous derivations. Surely this must be Jaynes' book somewhere. I'll have to check when I get back to the office. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco -------------- next part -------------- A non-text attachment was scrubbed... Name: mmse.py Type: application/x-python Size: 789 bytes Desc: not available URL: From jonathan.taylor at stanford.edu Fri Apr 14 14:47:23 2006 From: jonathan.taylor at stanford.edu (Jonathan Taylor) Date: Fri, 14 Apr 2006 11:47:23 -0700 Subject: [SciPy-dev] Statistics Review progress In-Reply-To: <443E23A2.6080907@ftw.at> References: <443B4670.70000@gmail.com> <443CFA9B.2070705@ftw.at> <443D3314.3060603@gmail.com> <443E23A2.6080907@ftw.at> Message-ID: <443FEE3B.7030200@stanford.edu> the discussion of bias vs. unbiasedness depends on the degrees of freedom in the "residuals" (and whether you are estimating the variance (sigma^2) or the standard deviation (sigma)). most (mathematical) stats books refer to unbiased estimates of sigma^2 as compared to the MLE estimate of sigma^2 (which has denominator n). so, i think an integer flag something along the lines of "df=n-1" makes a lot of sense because that way, when estimating other standard errors (say the error from a linear regression model), the flag would be the same and could be set appropriately for these residuals. just my $0.02 -- jonathan Ed Schofield wrote: > Travis Oliphant wrote: > > >> How about making the default minimize mean square error --- i.e >> division by N+1 for variance calculation :-) >> > > Robert Kern wrote: > >> I would implement varbiased() and var() >> >> def varbiased(a): >> return var_with_flag(a, bias=True) >> >> def var(a): >> return var_with_flag(a, bias=False) >> >> I *don't* want three versions of each of these functions. >> >> > > If we really want just one version for the three cases with denominators > (n-1), n, and (n+1), I suggest we get rid of the boolean bias flag and > make it an integer flag instead, taking either -1 (default), 0, or +1. > A boolean flag would be clumsy, since it'd be true for both the n and > (n+1) cases, and we'd need yet another flag to distinguish between > these. And, as others have pointed out, bias=False would be an > inaccurate description of the std function. > > -- Ed > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > -- ------------------------------------------------------------------------ I'm part of the Team in Training: please support our efforts for the Leukemia and Lymphoma Society! http://www.active.com/donate/tntsvmb/tntsvmbJTaylor GO TEAM !!! ------------------------------------------------------------------------ Jonathan Taylor Tel: 650.723.9230 Dept. 
of Statistics Fax: 650.725.8977 Sequoia Hall, 137 www-stat.stanford.edu/~jtaylo 390 Serra Mall Stanford, CA 94305 From robert.kern at gmail.com Sun Apr 16 05:36:37 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 16 Apr 2006 04:36:37 -0500 Subject: [SciPy-dev] Trac Wikis closed for anonymous edits until further notice Message-ID: <44421025.9060804@gmail.com> We've been hit badly by spammers, so I can only presume our Trac sites are now on the traded spam lists. I am going to turn off anonymous edits for now. Ticket creation will probably still be left open for now. Many thanks to David Cooke for quickly removing the spam. I am looking into ways to allow people to register themselves with the Trac sites so they can edit the Wikis and submit tickets without needing to be added by a project admin. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From austyn at getcontrol.com Fri Apr 14 14:52:32 2006 From: austyn at getcontrol.com (Austyn Bontrager) Date: Fri, 14 Apr 2006 11:52:32 -0700 Subject: [SciPy-dev] bug in scipy.signal.waveforms Message-ID: Hi all, There is a bug in the chirp() routine in scipy.signal.waveforms. chirp() generates a frequency-swept cosine waveform. I filed a defect report in scipy trac: http://projects.scipy.org/scipy/scipy/ticket/193 In summary, the waveform is currently generated as: cos((frequency*time) + phi) However, it should really be: cos((integral of frequency over time) + phi) There is also a bug regarding the phi parameter: It ought to be multiplied by 2*pi, but it's not. I didn't test that at first, so ignore the first patch that I uploaded. Austyn Bontrager From robert.kern at gmail.com Fri Apr 14 17:46:17 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 14 Apr 2006 16:46:17 -0500 Subject: [SciPy-dev] Statistics Review progress In-Reply-To: <200604121722.29089.pgmdevlist@mailcan.com> References: <443B4670.70000@gmail.com> <443CFA9B.2070705@ftw.at> <443D3314.3060603@gmail.com> <200604121722.29089.pgmdevlist@mailcan.com> Message-ID: <44401829.7040103@gmail.com> Pierre GM wrote: > And once again, should we look forward supporting MaskedArrays in scipy ? Not from me, most likely. I don't often use MaskedArrays, so I'm not going to try to guess the appropriate semantics. If you would like to ensure support for MaskedArrays in scipy.stats, please start reviewing functions for how they handle MaskedArrays and submitting patches where necessary. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From arauzo at decsai.ugr.es Tue Apr 18 09:20:18 2006 From: arauzo at decsai.ugr.es (Antonio Arauzo Azofra) Date: Tue, 18 Apr 2006 15:20:18 +0200 Subject: [SciPy-dev] Hello, patch submmited, user to log in trac Message-ID: <4444E792.60302@decsai.ugr.es> Hello everybody, I just started using Scipy some weeks ago, and I have found it very useful. I have seen you are reviewing stats. I have tried some statistical tests I am used to, and I found a bug in friedmanchisquare at stats.py. A patch is submitted to fix the bug in trac. It is obvious the old code had a bug because the variable "data" was not used after calculating it.
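For reference, the quantity the fixed code should compute is the standard Friedman chi-square statistic. A minimal sketch of my understanding (tie correction omitted, with the n repeated measurements as rows and the k treatments as columns; this is a sketch, not the literal patch):

    import numpy
    from scipy.stats import rankdata

    def friedman_sketch(data):
        n, k = data.shape                      # n blocks x k treatments
        # rank the treatments within each block
        ranks = numpy.array([rankdata(row) for row in data])
        R = ranks.sum(axis=0)                  # rank sum of each treatment
        return 12.0/(n*k*(k+1)) * numpy.sum(R**2) - 3.0*n*(k+1)
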
I have tested the fixed function with data from tests done elsewhere and it works ok. friedmanchisquare() still lacks tie handling. This does not affect the test's reliability, but it does affect the test's power. I can prepare a patch for that too. Unintentionally I have changed the type of the ticket from review to defect. Is it right because it is a defect, or should we change it again to review? Should I have filed the patch anywhere else? Can you create a user for me in trac? I hope I can help with development in the future. Thanks for everything. -- All the best, Antonio Arauzo Azofra From robert.kern at gmail.com Tue Apr 18 10:03:23 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 18 Apr 2006 09:03:23 -0500 Subject: [SciPy-dev] Hello, patch submmited, user to log in trac In-Reply-To: <4444E792.60302@decsai.ugr.es> References: <4444E792.60302@decsai.ugr.es> Message-ID: <4444F1AB.8090909@gmail.com> Antonio Arauzo Azofra wrote: > Hello everybody, > > I just started using Scipy some weeks ago, and I have found it very useful. > > I have seen you are reviewing stats. I have tried some statistical tests > I am used to, and I found a bug in friedmanchisquare at stats.py. > A patch is submitted to fix the bug in trac. It is obvious the old code > had a bug because the variable "data" was not used after calculating it. I > have tested the fixed function with data from tests done elsewhere and > it works ok. Thank you! Can you provide some test data that we could incorporate into our unit tests? > friedmanchisquare() still lacks tie handling. This does not affect > the test's reliability, but it does affect the test's power. I can prepare > a patch for that too. That would be excellent. > Unintentionally I have changed the type of the ticket from review to > defect. Is it right because it is a defect, or should we change it again > to review? I changed it back to "review" to help me keep track of all of the review activities. Reviewing is kind of a melange of finding defects, writing enhancements, and doing non-coding tasks, so I made it its own category. > Should I have filed the patch anywhere else? Nope, that was just the right place. > Can you create a user for me in trac? I hope I can help with development in > the future. I'm looking into allowing users to register themselves. At the moment, I think the user list is tied to SVN write access. OTOH, if you keep the quality patches coming, I'm sure we'll give you that, too. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ravi at ati.com Tue Apr 18 12:54:39 2006 From: ravi at ati.com (Ravikiran Rajagopal) Date: Tue, 18 Apr 2006 12:54:39 -0400 Subject: [SciPy-dev] Changing return types in C API In-Reply-To: <17377.16528.222966.57180@owl.eos.ubc.ca> References: <17377.16528.222966.57180@owl.eos.ubc.ca> Message-ID: <200604181254.39959.ravi@ati.com> On Wednesday 01 February 2006 18:13, Philip Austin wrote: > On this topic: I've just about finished testing our > boost::python::numeric helper functions > (http://www.eos.ubc.ca/research/clouds/num_util.html) > with numpy. The only change required was > a new free function to replace array.typecode(). Once > I finish the cleanup and a couple of new examples I'll post the link > to the topical software page with a short description. Any news on this? The code on the page above still points to the Numeric version.
Regards, Ravi From schofield at ftw.at Tue Apr 18 14:00:57 2006 From: schofield at ftw.at (Ed Schofield) Date: Tue, 18 Apr 2006 20:00:57 +0200 Subject: [SciPy-dev] distutils, mtrand, Monte Carlo Message-ID: <44452959.1090008@ftw.at> Hi all, I've now modified the Monte Carlo package (still in the sandbox) to use the RNG from RandomKit that comes with numpy. It builds and works fine for me, but only with two directories hard-coded into the setup.py file. I have some questions (e.g. for Pearu) on how to use numpy distutils to do this portably. 1. Currently the add_headers method of distutils.misc_util.Configuration installs headers to $PREFIX/include/python2.4/numpy. This doesn't bother me, but do we want this? All other header files are installed into $PREFIX/lib/python2.4/site-packages/numpy or scipy/. 2. The add_library method creates a static library under build/temp.linux-etc/. Is it possible to use distutils to install this (e.g. librandomkit.a) to a system-wide location? (This is necessary to prevent scipy having a build dependency on the numpy source files.) 3. Does distutils support dynamic libraries? But randomkit is a small library, so I suppose static linking is fine here. I also have a question for Robert K: A while ago you offered to work on exporting the mtrand functions so other packages could link with them. Could you please export the function prototypes as a separate header file? This would probably require a modification the generate_mtrand_c.py script. I'd appreciate this; then I could modify the montecarlo package to accept a RandomState object. -- Ed From robert.kern at gmail.com Tue Apr 18 14:41:02 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 18 Apr 2006 13:41:02 -0500 Subject: [SciPy-dev] distutils, mtrand, Monte Carlo In-Reply-To: <44452959.1090008@ftw.at> References: <44452959.1090008@ftw.at> Message-ID: <444532BE.5010408@gmail.com> Ed Schofield wrote: > Hi all, > > I've now modified the Monte Carlo package (still in the sandbox) to use > the RNG from RandomKit that comes with numpy. It builds and works fine > for me, but only with two directories hard-coded into the setup.py > file. I have some questions (e.g. for Pearu) on how to use numpy > distutils to do this portably. > > 1. Currently the add_headers method of distutils.misc_util.Configuration > installs headers to $PREFIX/include/python2.4/numpy. This doesn't > bother me, but do we want this? All other header files are installed > into $PREFIX/lib/python2.4/site-packages/numpy or scipy/. What headers are you trying to install? > 2. The add_library method creates a static library under > build/temp.linux-etc/. Is it possible to use distutils to install this > (e.g. librandomkit.a) to a system-wide location? (This is necessary to > prevent scipy having a build dependency on the numpy source files.) No, distutils does not handle this, and probably cannot do so in any portable way. > 3. Does distutils support dynamic libraries? But randomkit is a small > library, so I suppose static linking is fine here. Not particularly. But, of course, the approach below is probably the best way: > I also have a question for Robert K: > > A while ago you offered to work on exporting the mtrand functions so > other packages could link with them. Could you please export the > function prototypes as a separate header file? This would probably > require a modification the generate_mtrand_c.py script. I'd appreciate > this; then I could modify the montecarlo package to accept a RandomState > object. 
Yes, I've started working on exporting an array of all of the functions in randomkit and distributions.c as a CObject. It will be difficult to expose the functions that are created by Pyrex since it is generated code, though. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From pearu at scipy.org Tue Apr 18 15:35:34 2006 From: pearu at scipy.org (Pearu Peterson) Date: Tue, 18 Apr 2006 14:35:34 -0500 (CDT) Subject: [SciPy-dev] distutils, mtrand, Monte Carlo In-Reply-To: <44452959.1090008@ftw.at> References: <44452959.1090008@ftw.at> Message-ID: On Tue, 18 Apr 2006, Ed Schofield wrote: > Hi all, > > I've now modified the Monte Carlo package (still in the sandbox) to use > the RNG from RandomKit that comes with numpy. It builds and works fine > for me, but only with two directories hard-coded into the setup.py > file. I have some questions (e.g. for Pearu) on how to use numpy > distutils to do this portably. > > 1. Currently the add_headers method of distutils.misc_util.Configuration > installs headers to $PREFIX/include/python2.4/numpy. This doesn't > bother me, but do we want this? All other header files are installed > into $PREFIX/lib/python2.4/site-packages/numpy or scipy/. add_headers method behaves in a way python installs header files. The reasons for not using add_headers in numpy/scipy cases have been discussed earlier in this list (we need a FAQ item, see numpy ticket #69). numpy uses add_data_files for installing numpy header files. > 2. The add_library method creates a static library under > build/temp.linux-etc/. Is it possible to use distutils to install this > (e.g. librandomkit.a) to a system-wide location? (This is necessary to > prevent scipy having a build dependency on the numpy source files.) That would be a desirable feature. For example, when scipy would ship blas/lapack source codes then it would be desirable to install blas/lapack libraries so that various scipy subpackages could use them. I have started implementing the support for this feature about half a year ago but I haven't finished it yet, mostly because I haven't had time to figure out what would be good (portable, etc.) locations of such libraries and how subpackages would access them in different situations (building whole scipy, building a subpackage without building scipy, etc.). > 3. Does distutils support dynamic libraries? But randomkit is a small library, so I suppose static linking is fine here. numpy.distutils has support for using dynamic libraries when static libraries are not available, but no support for creating ones. This would also be related to your question 2. Pearu From cookedm at physics.mcmaster.ca Tue Apr 18 16:09:24 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 18 Apr 2006 16:09:24 -0400 Subject: [SciPy-dev] [Numpy-discussion] Trac Wikis closed for anonymous edits until further notice In-Reply-To: <44421025.9060804@gmail.com> (Robert Kern's message of "Sun, 16 Apr 2006 04:36:37 -0500") References: <44421025.9060804@gmail.com> Message-ID: Robert Kern writes: > We've been hit badly by spammers, so I can only presume our Trac sites are now > on the traded spam lists. I am going to turn off anonymous edits for now. Ticket > creation will probably still be left open for now. Another thing that's concerned me is closing of tickets by anonymous; can we turn that off?
It disturbs me when I'm browsing the RSS feed and I see that. If a user who's not a developer thinks it could be closed, they could post a comment saying that, and a developer could close it. > Many thanks to David Cooke for quickly removing the spam. The RSS feeds are great for that. Although having a way to quickly revert a change would have made it easier :-) > I am looking into ways to allow people to register themselves with the Trac > sites so they can edit the Wikis and submit tickets without needing to be added > by a project admin. that'd be good. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From oliphant.travis at ieee.org Tue Apr 18 16:51:42 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 18 Apr 2006 14:51:42 -0600 Subject: [SciPy-dev] [Numpy-discussion] Trac Wikis closed for anonymous edits until further notice In-Reply-To: References: <44421025.9060804@gmail.com> Message-ID: <4445515E.5000308@ieee.org> David M. Cooke wrote: > Robert Kern writes: > > >> We've been hit badly by spammers, so I can only presume our Trac sites are now >> on the traded spam lists. I am going to turn off anonymous edits for now. Ticket >> creation will probably still be left open for now. >> > > Another thing that's concerned me is closing of tickets by anonymous; > can we turn that off? It disturbs me when I'm browsing the RSS feed > and I see that. If a user who's not a developer thinks it could be > closed, they could post a comment saying that, and a developer could > close it. > I'm sure most instances of this behavior are me not logged in :-) Bad habit... -Travis From robert.kern at gmail.com Tue Apr 18 17:09:34 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 18 Apr 2006 16:09:34 -0500 Subject: [SciPy-dev] [Numpy-discussion] Trac Wikis closed for anonymous edits until further notice In-Reply-To: References: <44421025.9060804@gmail.com> Message-ID: <4445558E.1020603@gmail.com> David M. Cooke wrote: > Robert Kern writes: > > >>We've been hit badly by spammers, so I can only presume our Trac sites are now >>on the traded spam lists. I am going to turn off anonymous edits for now. Ticket >>creation will probably still be left open for now. > > Another thing that's concerned me is closing of tickets by anonymous; > can we turn that off? It disturbs me when I'm browsing the RSS feed > and I see that. If a user who's not a developer thinks it could be > closed, they could post a comment saying that, and a developer could > close it. I'm not sure if there is a permission specifically for that. If I remove TICKET_CHANGE, then anonymous can't comment. I think maybe some combination of TICKET_ADMIN, TICKET_APPEND, TICKET_CHGPROP, TICKET_CREATE might work. I'll have to read the docs. But good idea. >>Many thanks to David Cooke for quickly removing the spam. > > The RSS feeds are great for that. Although having a way to quickly > revert a change would have made it easier :-) Are you comfortable with SQL? -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cookedm at physics.mcmaster.ca Tue Apr 18 17:50:31 2006 From: cookedm at physics.mcmaster.ca (David M. 
Cooke) Date: Tue, 18 Apr 2006 17:50:31 -0400 Subject: [SciPy-dev] [Numpy-discussion] Trac Wikis closed for anonymous edits until further notice In-Reply-To: <4445558E.1020603@gmail.com> (Robert Kern's message of "Tue, 18 Apr 2006 16:09:34 -0500") References: <44421025.9060804@gmail.com> <4445558E.1020603@gmail.com> Message-ID: Robert Kern writes: > David M. Cooke wrote: >> Robert Kern writes: >> >> >>>We've been hit badly by spammers, so I can only presume our Trac sites are now >>>on the traded spam lists. I am going to turn off anonymous edits for now. Ticket >>>creation will probably still be left open for now. >> >> Another thing that's concerned me is closing of tickets by anonymous; >> can we turn that off? It disturbs me when I'm browsing the RSS feed >> and I see that. If a user who's not a developer thinks it could be >> closed, they could post a comment saying that, and a developer could >> close it. > > I'm not sure if there is a permission specifically for that. If I remove > TICKET_CHANGE, then anonymous can't comment. I think maybe some combination of > TICKET_ADMIN, TICKET_APPEND, TICKET_CHGPROP, TICKET_CREATE might work. I'll have > to read the docs. But good idea. I think if anonymous only has TICKET_VIEW, TICKET_CREATE, TICKET_APPEND, and TICKET_CHGPROP, then that's fine. It's TICKET_MODIFY that allows resolving (at least according to http://projects.scipy.org/scipy/scipy/wiki/TracPermissions). >>>Many thanks to David Cooke for quickly removing the spam. >> >> The RSS feeds are great for that. Although having a way to quickly >> revert a change would have made it easier :-) > > Are you comfortable with SQL? Pretty much. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From schofield at ftw.at Wed Apr 19 08:49:55 2006 From: schofield at ftw.at (Ed Schofield) Date: Wed, 19 Apr 2006 14:49:55 +0200 Subject: [SciPy-dev] distutils, mtrand, Monte Carlo In-Reply-To: References: <44452959.1090008@ftw.at> Message-ID: <444631F3.3060508@ftw.at> Pearu Peterson wrote: > Ed Schofield wrote: > >> I've now modified the Monte Carlo package (still in the sandbox) to use >> the RNG from RandomKit that comes with numpy. It builds and works fine >> for me, but only with two directories hard-coded into the setup.py >> file. I have some questions (e.g. for Pearu) on how to use numpy >> distutils to do this portably. >> >> 1. Currently the add_headers method of distutils.misc_util.Configuration >> installs headers to $PREFIX/include/python2.4/numpy. This doesn't >> bother me, but do we want this? All other header files are installed >> into $PREFIX/lib/python2.4/site-packages/numpy or scipy/. >> > > add_headers method behaves in a way python installs header files. > The reasons for not using add_headers in numpy/scipy cases have been > discussed earlier in this list (we need a FAQ item, see numpy ticket #69). > numpy uses add_data_files for installing numpy header files. > Ah, thanks. I've added a brief warning about this to the add_headers section in DISTUTILS.txt, which is what lead me astray ... >> 2. The add_library method creates a static library under >> build/temp.linux-etc/. Is it possible to use distutils to install this >> (e.g. librandomkit.a) to a system-wide location? (This is necessary to >> prevent scipy having a build dependency on the numpy source files.) >> > > That would be a desirable feature. 
For example, when scipy would ship > blas/lapack source codes then it would be desirable to install blas/lapack > libraries so that various scipy subpackages could use them. > I have started implementing the support for this feature about half a year > ago but I haven't finished it yet, mostly because I haven't had time > to figure out what would be good (portable, etc.) locations of such > libraries and how subpackages would access them in different situations > (building whole scipy, building a subpackage without building scipy, > etc.). > > I've been playing around with it some more, and it actually seems that config.add_extension already supports building and installing shared libraries. Normally this is used for building Python extension modules, but the symbol table seems to be independent of Python unless the source files explicitly use Python symbols. So, unless I'm very mistaken, this works already -- by using the machinery of add_extension rather than add_library. Here's an example patch (to NumPy) that builds a beautiful shared library for randomkit:

Index: numpy/random/setup.py
===================================================================
--- numpy/random/setup.py	(revision 2374)
+++ numpy/random/setup.py	(working copy)
@@ -32,6 +32,10 @@
                     )
 
     config.add_data_files(('.', join('mtrand', 'randomkit.h')))
+
+    config.add_extension('librandomkit',
+                         sources=[join('mtrand', 'randomkit.c')],
+                         depends=[join('mtrand', 'randomkit.h')])
 
     return config

I've also committed a new sandbox/montecarlo/setup.py file to scipy that uses this. The relevant lines are:

    random_lib_dir = os.path.dirname(numpy.random.__file__)

    config.add_extension('_intsampler',
                         include_dirs = [numpy.get_numpy_include(), random_lib_dir],
                         libraries=['randomkit'],
                         library_dirs=[random_lib_dir],
                         runtime_library_dirs=[random_lib_dir],
                         sources = [join('src', f) for f in
                                    ['_intsamplermodule.c', 'compact5table.c']])

This works fine under Linux. I'll also test it out with Win32/MinGW. (By the way, does distutils provide its own way of retrieving the site-packages/numpy/random directory?) -- Ed From pearu at scipy.org Wed Apr 19 10:22:00 2006 From: pearu at scipy.org (Pearu Peterson) Date: Wed, 19 Apr 2006 09:22:00 -0500 (CDT) Subject: [SciPy-dev] distutils, mtrand, Monte Carlo In-Reply-To: <444631F3.3060508@ftw.at> References: <44452959.1090008@ftw.at> <444631F3.3060508@ftw.at> Message-ID: On Wed, 19 Apr 2006, Ed Schofield wrote: >> That would be a desirable feature. For example, when scipy would ship >> blas/lapack source codes then it would be desirable to install blas/lapack >> libraries so that various scipy subpackages could use them. >> I have started implementing the support for this feature about half a year >> ago but I haven't finished it yet, mostly because I haven't had time >> to figure out what would be good (portable, etc.) locations of such >> libraries and how subpackages would access them in different situations >> (building whole scipy, building a subpackage without building scipy, >> etc.). >> > I've been playing around with it some more, and it actually seems that > config.add_extension already supports building and installing shared > libraries. Normally this is used for building Python extension modules, > but the symbol table seems to be independent of Python unless the source > files explicitly use Python symbols. So, unless I'm very mistaken, this > works already -- by using the machinery of add_extension rather than > add_library.
Here's an example patch (to NumPy) that builds a beautiful > shared library for randomkit:
>
> Index: numpy/random/setup.py
> ===================================================================
> --- numpy/random/setup.py	(revision 2374)
> +++ numpy/random/setup.py	(working copy)
> @@ -32,6 +32,10 @@
>                      )
> 
>     config.add_data_files(('.', join('mtrand', 'randomkit.h')))
> +
> +    config.add_extension('librandomkit',
> +                         sources=[join('mtrand', 'randomkit.c')],
> +                         depends=[join('mtrand', 'randomkit.h')])
> 
>     return config
>

Hmm, I haven't thought about the above approach but it can work. However, there are some side effects that we should get rid of: the shared library will be linked against the Python library and so it's an unclean solution. To do things right, we should review command/build_ext.py code and see if it can be used for building "clean" shared libraries when Extension object has some is_shared_library flag set True (that is set via (to be implemented) add_shared_library method). If that does not work, the next step is to copy the necessary hooks for shared libraries from build_ext.py to build_clib.py or create a new command build_slib (or something like that) altogether.

> I've also committed a new sandbox/montecarlo/setup.py file to scipy that
> uses this. The relevant lines are:
>
>     random_lib_dir = os.path.dirname(numpy.random.__file__)
>
>     config.add_extension('_intsampler',
>                          include_dirs = [numpy.get_numpy_include(), random_lib_dir],
>                          libraries=['randomkit'],
>                          library_dirs=[random_lib_dir],
>                          runtime_library_dirs=[random_lib_dir],
>                          sources = [join('src', f) for f in
>                                     ['_intsamplermodule.c', 'compact5table.c']])
>
> This works fine under Linux. I'll also test it out with Win32/MinGW.
> (By the way, does distutils provide its own way of retrieving the
> site-packages/numpy/random directory?)

See the distutils.sysconfig.get_python_lib function. Using something like random_lib_dir = os.path.join(get_python_lib(), 'numpy', 'random') should work in this particular case but it's not a general solution. For in-place builds random_lib_dir should probably be the one starting with the path that the build_py.get_package_dir(..) method would return, for instance. So, supporting installing shared libraries in a robust way may require some work. Though simple solutions like the above might already work for certain platforms, we should be careful not to break scipy/numpy builds for other platforms. Pearu From robert.kern at gmail.com Wed Apr 19 11:30:30 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 19 Apr 2006 10:30:30 -0500 Subject: [SciPy-dev] distutils, mtrand, Monte Carlo In-Reply-To: <444631F3.3060508@ftw.at> References: <44452959.1090008@ftw.at> <444631F3.3060508@ftw.at> Message-ID: <44465796.1020800@gmail.com> Ed Schofield wrote: > I've been playing around with it some more, and it actually seems that > config.add_extension already supports building and installing shared > libraries. Normally this is used for building Python extension modules, > but the symbol table seems to be independent of Python unless the source > files explicitly use Python symbols. So, unless I'm very mistaken, this > works already -- by using the machinery of add_extension rather than > add_library. The platform differences between how shared libraries are looked up will prevent this from working as an actual distribution mechanism, although you've mostly figured out how to build them. This has been discussed *extensively* on the distutils-sig. It's not going to be practical.
At the very least, it's going to make building numpy and scipy more fragile and complicated. The easiest option for you now is to simply copy randomkit.[ch] to montecarlo. It won't be changing any time soon. That should tide you over until I manage to export a function pointer table from mtrand. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From schofield at ftw.at Wed Apr 19 12:08:14 2006 From: schofield at ftw.at (Ed Schofield) Date: Wed, 19 Apr 2006 18:08:14 +0200 Subject: [SciPy-dev] distutils, mtrand, Monte Carlo In-Reply-To: References: <44452959.1090008@ftw.at> <444631F3.3060508@ftw.at> Message-ID: <4446606E.5080004@ftw.at> Pearu Peterson wrote: > On Wed, 19 Apr 2006, Ed Schofield wrote: > >> I've been playing around with it some more, and it actually seems that >> config.add_extension already supports building and installing shared >> libraries. Normally this is used for building Python extension modules, >> but the symbol table seems to be independent of Python unless the source >> files explicitly use Python symbols. So, unless I'm very mistaken, this >> works already -- by using the machinery of add_extension rather than >> add_library. Here's an example patch (to NumPy) that builds a beautiful >> shared library for randomkit:
>>
>> Index: numpy/random/setup.py
>> ===================================================================
>> --- numpy/random/setup.py	(revision 2374)
>> +++ numpy/random/setup.py	(working copy)
>> @@ -32,6 +32,10 @@
>>                      )
>> 
>>     config.add_data_files(('.', join('mtrand', 'randomkit.h')))
>> +
>> +    config.add_extension('librandomkit',
>> +                         sources=[join('mtrand', 'randomkit.c')],
>> +                         depends=[join('mtrand', 'randomkit.h')])
>> 
>>     return config
>>
> Hmm, I haven't thought about the above approach but it can work. However, > there are some side effects that we should get rid of: > the shared library will be linked against the Python library and so it's > an unclean solution. > > To do things right, we should review command/build_ext.py code and > see if it can be used for building "clean" shared libraries when > Extension object has some is_shared_library flag set True (that is set via > (to be implemented) add_shared_library method). If that does not work, > the next step is to copy the necessary hooks for shared libraries from > build_ext.py to build_clib.py or create a new command build_slib (or > something like that) altogether.

Okay, here are some more details. Shared libraries built this way under Linux look perfectly "clean":

    $ ldd /usr/lib/python2.4/site-packages/numpy/random/librandomkit.so
            linux-gate.so.1 =>  (0xffffe000)
            libpthread.so.0 => /lib/tls/i686/cmov/libpthread.so.0 (0xb7eee000)
            libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0xb7dbf000)
            /lib/ld-linux.so.2 (0x80000000)

and

    $ objdump -x librandomkit.so | grep py

turns up no Python symbols. Under Win32/MinGW this approach builds a .pyd file. According to various distutils list postings, this is just a renamed DLL, but one that Python expects to export certain symbols. But librandomkit.pyd also appears clean of Python symbols, according to my 30-day trial DLL explorer; the only dependencies are:

    advapi32.dll
    kernel32.dll
    msvcr71.dll
    msvcrt.dll

So it seems the GNU linker does the right thing in the MinGW case too, not linking in Python symbols unless they're explicitly dereferenced.
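(A side note for anyone repeating the Win32 check without a commercial DLL explorer: MinGW's binutils can list the imports of a PE file too, e.g.

    $ objdump -p librandomkit.pyd | grep "DLL Name"

from an MSYS shell. I'm assuming a binutils built with PE support here, which the standard MinGW one is.)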
I've now got the SciPy montecarlo build working with MinGW too. It required three workarounds: - The first is to rename the randomkit.pyd file in site-packages\numpy\random\ to randomkit.dll. This is necessary for the build; otherwise the linker complains it cannot find -lrandomkit. - The second is to remove the runtime_library_dirs option from the config.add_extension call in montecarlo\setup.py. Keeping this seems to trigger a weird bug in Python distutils(!) on this platform, where for some reason sysconfig.get_config_var("CC") returns None. The function in question is runtime_library_dir_option() in unixccompiler.py, which has the comment # XXX Hackish, at the very least. See Python bug #445902 - The third is to copy the randomkit.dll file to site-packages\scipy\montecarlo\randomkit.pyd. Of course, manually copying the file defeats the purpose of a dynamic link library. This can probably be avoided by using the correct link flags, or maybe specifying a DLL definition file. So I agree with your conclusion that making this robustly across different platforms would require some more work ;) But it looks possible ... -- Ed From schofield at ftw.at Wed Apr 19 12:42:34 2006 From: schofield at ftw.at (Ed Schofield) Date: Wed, 19 Apr 2006 18:42:34 +0200 Subject: [SciPy-dev] distutils, mtrand, Monte Carlo In-Reply-To: <44465796.1020800@gmail.com> References: <44452959.1090008@ftw.at> <444631F3.3060508@ftw.at> <44465796.1020800@gmail.com> Message-ID: <4446687A.5040502@ftw.at> Robert Kern wrote: > The easiest option for you now is to simply copy randomkit.[ch] to montecarlo. > It won't be changing any time soon. That should tide you over until I manage to > export a function pointer table from mtrand. > I don't follow. The hardest part is with the linking. How would a function pointer table help? -- Ed From robert.kern at gmail.com Wed Apr 19 12:51:06 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 19 Apr 2006 11:51:06 -0500 Subject: [SciPy-dev] distutils, mtrand, Monte Carlo In-Reply-To: <4446687A.5040502@ftw.at> References: <44452959.1090008@ftw.at> <444631F3.3060508@ftw.at> <44465796.1020800@gmail.com> <4446687A.5040502@ftw.at> Message-ID: <44466A7A.7040508@gmail.com> Ed Schofield wrote: > Robert Kern wrote: > >>The easiest option for you now is to simply copy randomkit.[ch] to montecarlo. >>It won't be changing any time soon. That should tide you over until I manage to >>export a function pointer table from mtrand. > > I don't follow. The hardest part is with the linking. How would a > function pointer table help? mtrand would already be imported and loaded through the Python mechanism. It would expose the table as a CObject. This is exactly how other extensions use numpy (and Numeric and numarray). This has been a solved problem for the past ten years or so. We know it works. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From jonathan.taylor at stanford.edu Wed Apr 19 19:16:17 2006 From: jonathan.taylor at stanford.edu (Jonathan Taylor) Date: Wed, 19 Apr 2006 16:16:17 -0700 Subject: [SciPy-dev] some statistical models / formulas Message-ID: <4446C4C1.2060003@stanford.edu> i have made a numpy/scipy package for some linear statistical models http://www-stat.stanford.edu/~jtaylo/scipy_stats_models-0.01a.tar.gz i was hoping that it might someday get into scipy.stats, maybe as scipy.stats.models? 
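to give a flavour of the simplest piece, the OLS estimate is nothing but least squares on a design matrix -- in bare numpy/scipy it looks something like this (a generic sketch, not the package's actual interface):

    import numpy
    from scipy import linalg

    # design matrix with an intercept column, plus a response vector
    X = numpy.array([[1., 0.], [1., 1.], [1., 2.], [1., 3.]])
    y = numpy.array([-0.1, 0.9, 2.1, 3.2])

    # least-squares estimate of beta in y = X beta + noise
    beta, resid, rank, sv = linalg.lstsq(X, y)

the package wraps this sort of thing up behind formula objects and adds standard errors, t statistics and so on.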
anyways, i am sure the code needs work and more docs with examples, but right now there is basic functionality for the following (the tests give some examples): - model formulae as in R (to some extent) - OLS (ordinary least square regression) - WLS (weighted least square regression) - AR1 regression (non-diagonal covariance -- right now just AR1 but easy to extend to ARp) - generalized linear models (all of R's links and variance functions but extensible as well -- not everything has been rigorously tested but logistic agrees with R, for instance) - robust linear models using M estimators (with a number of standard default robust norms as in R's rlm) - robust scale estimates (MAD, Huber's proposal 2). it would be nice to add a few things over time, too, like: - mixed effects models - generalized additive models (gam), generalized estimating equations (gee).... - nonlinear regression (i have some quasi working code for this, too, but it is not yet included). + anything else people want to add. -- jonathan -- ------------------------------------------------------------------------ I'm part of the Team in Training: please support our efforts for the Leukemia and Lymphoma Society! http://www.active.com/donate/tntsvmb/tntsvmbJTaylor GO TEAM !!! ------------------------------------------------------------------------ Jonathan Taylor Tel: 650.723.9230 Dept. of Statistics Fax: 650.725.8977 Sequoia Hall, 137 www-stat.stanford.edu/~jtaylo 390 Serra Mall Stanford, CA 94305 -------------- next part -------------- A non-text attachment was scrubbed... Name: jonathan.taylor.vcf Type: text/x-vcard Size: 329 bytes Desc: not available URL: From robert.kern at gmail.com Wed Apr 19 19:44:00 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 19 Apr 2006 18:44:00 -0500 Subject: [SciPy-dev] some statistical models / formulas In-Reply-To: <4446C4C1.2060003@stanford.edu> References: <4446C4C1.2060003@stanford.edu> Message-ID: <4446CB40.2010009@gmail.com> Jonathan Taylor wrote: > i have made a numpy/scipy package for some linear statistical models > > http://www-stat.stanford.edu/~jtaylo/scipy_stats_models-0.01a.tar.gz > > i was hoping that it might someday get into scipy.stats, maybe as > scipy.stats.models? Awesome! I've only had a brief chance to look over the code, but it looks great so far. I'll contact you offlist to see about getting you some SVN checkin privileges. I would suggest putting it into scipy.sandbox.models for now and moving it to scipy.stats.models when we get some more eyeballs on it. Thank you! -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From arauzo at decsai.ugr.es Thu Apr 20 06:57:34 2006 From: arauzo at decsai.ugr.es (Antonio Arauzo Azofra) Date: Thu, 20 Apr 2006 12:57:34 +0200 Subject: [SciPy-dev] Hello, patch submmited, user to log in trac In-Reply-To: <4444F1AB.8090909@gmail.com> References: <4444E792.60302@decsai.ugr.es> <4444F1AB.8090909@gmail.com> Message-ID: <4447691E.8010101@decsai.ugr.es> Robert Kern wrote: > Thank you! Can you provide some test data that we could incorporate into our > unit tests? Yes, the following data, according to: Janez Demsar "Statistical Comparisons of Classifiers over Multiple Data Sets". Journal of Machine Learning Research, v7, pages 1-30, 2006. 
    scipy.stats.friedmanchisquare(
        scipy.array([0.763, 0.599, 0.954, 0.628, 0.882, 0.936, 0.661,
                     0.583, 0.775, 1.0, 0.94, 0.619, 0.972, 0.957]),
        scipy.array([0.768, 0.591, 0.971, 0.661, 0.888, 0.931, 0.668,
                     0.583, 0.838, 1.0, 0.962, 0.666, 0.981, 0.978]),
        scipy.array([0.771, 0.590, 0.968, 0.654, 0.886, 0.916, 0.609,
                     0.563, 0.866, 1.0, 0.965, 0.614, 0.9751, 0.946]),
        scipy.array([0.798, 0.569, 0.967, 0.657, 0.898, 0.931, 0.685,
                     0.625, 0.875, 1.0, 0.962, 0.669, 0.975, 0.970]))

should give a statistic value around 9.28 (the precision is low, as the value is derived from a related test given in the paper). This article ignores ties. If ties are handled, the result should be equal or greater. I am not sure how to incorporate this in the test code. I'll prepare the patch for tie handling in friedman soon. Also, I have programmed a Friedman-derived test by Iman and Davenport; I am not sure whether it would be interesting to include it. >>Can you create a user for me in trac? I hope I can help with development in >>the future. > I'm looking into allowing users to register themselves. At the moment, I think > the user list is tied to SVN write access. OTOH, if you keep the quality patches > coming, I'm sure we'll give you that, too. Ok. -- Saludos, Antonio Arauzo Azofra From arauzo at decsai.ugr.es Thu Apr 20 07:05:14 2006 From: arauzo at decsai.ugr.es (Antonio Arauzo Azofra) Date: Thu, 20 Apr 2006 13:05:14 +0200 Subject: [SciPy-dev] Google Summer of Code Message-ID: <44476AEA.7080003@decsai.ugr.es> Google Summer of Code http://code.google.com/soc/ Have you considered participating as a Mentoring organization? Offering any project about Scipy? -- Saludos, Antonio Arauzo Azofra From david.huard at gmail.com Fri Apr 21 14:39:19 2006 From: david.huard at gmail.com (David Huard) Date: Fri, 21 Apr 2006 14:39:19 -0400 Subject: [SciPy-dev] splrep broken Message-ID: <91cf711d0604211139q665ccf5bqf780115b7396bb46@mail.gmail.com> Hi, looks like splrep is broken, at least on the 0.4.8 version for windows, python2.4 and pentium3. Calling

    x = arange(1,10,.1)
    y = sin(x)
    interpolate.splrep(x,y)

crashes the python shell. Also, in version 0.4.6, the knots were chosen somewhat awkwardly, with only a series of endpoints and no knots in between. This had as a consequence that for a wide input range, the spline estimation was very poor, i.e. x = arange(1,5,.1) would work but not x = arange(1,20,.1). I seem to recall that changes were made to the knots routine due to a bug for periodic boundaries, but I don't remember if my version dated from after or before the change. Cheers, David -------------- next part -------------- An HTML attachment was scrubbed... URL: From mattknox_ca at hotmail.com Sat Apr 22 11:23:12 2006 From: mattknox_ca at hotmail.com (Matt Knox) Date: Sat, 22 Apr 2006 11:23:12 -0400 Subject: [SciPy-dev] interest in Time series functionality? Message-ID: I hope this is the correct mailing list to post this kind of question to... and if not, my apologies. I work in the quantitative finance division of a financial services company in Canada, and over the last year or so we have been doing a lot more Python-based work. Most of the data we work with is time series data (stock price data, etc.) and we have traditionally used FAME (a product of SunGard) to store and manipulate this data. We have developed a Python API on top of the included FAME C API to access FAME data from Python, but the problem now is that there aren't any available Python libraries (to my knowledge) for manipulating time series data in any way that comes close to the power of FAME.
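To make "implicit frequency conversion" concrete, the idea reduces to something like this toy example in plain numpy (a sketch only, not our actual class, and ignoring missing values and date alignment):

    import numpy

    # two years of monthly observations, converted to an annual
    # frequency by averaging each calendar year's block of 12 values
    monthly = numpy.arange(24, dtype=numpy.float64)
    annual = monthly.reshape(2, 12).mean(axis=1)

Our class does this kind of conversion automatically when two series of different frequencies meet in an arithmetic operation.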
Our motivation for being able to manipulate this data in Python is primarily for web-based applications. Although being able to bring this data into the python world opens up many other possibilities for us as well (FAME has no matrix capabilities whatsoever, which is pretty sad really given their target market). We have done some preliminary work in developing a time series class built on top of the numpy array class, and it has gotten to the point where it works reasonably well for what we are using it for, although I'm certain it could be optimized a great deal. The key features of this module are: - works with different frequencies of data (currently supports monthly, daily, business days, and secondly frequencies) - able to index the time series directly by date objects (from a custom date class we have created) - handle missing values (along the lines of masked arrays) - global module settings to dictate how certain scenarios are handled - perform operations on time series that do not necessarily have the same start/end dates (+,-,*,/) (and handle missing values appropriately in the operation according to certain global option settings). This involves an implicit resizing of the arrays. - perform operations on time series that do not have the same frequency and perform implicit frequency conversions (according to certain global option settings). Again, this involves implicitly resizing the arrays We have basically attempted to model the time series functionality to be similar to how FAME handles it since that works reasonably well. I'm wondering if there is any kind of interest in this? Our group consists mostly of financial practitioners and engineers, not really pure software developers, so if somebody is interested in taking this to the next level I would be willing to release the code (both the FAME api, and the time series module) if someone wanted to improve upon this and share their improvements in the future. The code is definitely not a polished product right now, but it is functional. If you have any thoughts on this (positive or negative) I would love to hear them. Thanks, - Matt Knox _________________________________________________________________ Express yourself instantly with MSN Messenger! Download today it's FREE! http://messenger.msn.click-url.com/go/onm00200471ave/direct/01/ From robert.kern at gmail.com Sun Apr 23 20:50:46 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 23 Apr 2006 19:50:46 -0500 Subject: [SciPy-dev] Changing the Trac authentication Message-ID: <444C20E5.7090309@gmail.com> I will be changing the Trac authentication over the next hour or so. I will be installing the AccountManagerPlugin to allow users to create accounts for themselves without needing to have SVN write access. Anonymous users will not be able to edit the Wikis or tickets. Non-developer, but registered users will be able to do so with some restrictions, notably not being able to resolve tickets. Developers who currently have accounts will have the same username/password as before. If you have problems using the Trac sites before I announce that I am done, please wait until I am finished. If there are still problems, please let me know and I will try to fix them as soon as possible. Thank you for your patience. Hopefully, this change will resolve the spam problem. 
-- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Sun Apr 23 21:11:05 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 23 Apr 2006 20:11:05 -0500 Subject: [SciPy-dev] Changing the Trac authentication In-Reply-To: <444C20E5.7090309@gmail.com> References: <444C20E5.7090309@gmail.com> Message-ID: <444C25A9.8080701@gmail.com>

Robert Kern wrote:
> I will be changing the Trac authentication over the next hour or so.

Never mind. I'll have to do it tomorrow when I get to the office. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From Norbert.Nemec.list at gmx.de Mon Apr 24 04:24:03 2006 From: Norbert.Nemec.list at gmx.de (Norbert Nemec) Date: Mon, 24 Apr 2006 10:24:03 +0200 Subject: [SciPy-dev] Numerical stability of special functions (here: chebyu) Message-ID: <444C8B23.1080706@gmx.de> Hi there, I just had my first encounter with Chebyshev polynomials of the second kind. A lengthy calculation could be solved very elegantly using these special functions. In the end, I wanted to plot the solution using scipy&numpy&matplotlib. To my disappointment, the results showed strong numerical errors, so I had to dig in deeper. I quickly found out that the call chebyu(N)(x) itself returned bad results for N>30 and complex x ~= +/- 1.0. I checked the implementation of chebyu and found that it basically first calculates the coefficients from the generating function and then adds up the polynomial. In the numerical range of question, this means adding up huge numbers with alternating sign, producing bad numerical errors. After a long odyssey of optimizations (I even considered using gmpy to simply increase the internal precision) I found a ridiculously simple solution:

----------------
def chebyu(N,x):
    previous = 0.0
    current = 1.0
    for n in range(N):
        new = 2*x*current - previous
        previous = current
        current = new
    return current
----------------

This proved to be perfectly stable for all my applications. For N>1000 and |x|>1.1 it starts producing NaNs because of an internal overflow, but I have not yet encountered error accumulation. Now, it would, of course, make sense to contribute this implementation, but this is where I'm lost: currently, chebyu is defined in special/orthogonal.py, where it first creates an orthopoly1d object which can then be evaluated. As mentioned, this two-step process is intrinsically problematic. I believe the same applies to many of the other polynomials defined in the same place. On the other hand, in other applications the coefficients themselves are valuable as well, and it would be a pity to throw them out completely. The best idea that I had so far was to replace the function chebyu by a class definition:

----------------------------
class chebyu(orthopoly1d):
    def __init__(self,N):
        self.N = N
        ... same initialization code as before ...
    def __call__(self,x):
        previous = 0.0
        current = 1.0
        for n in range(self.N):
            new = 2*x*current - previous
            previous = current
            current = new
        return current
---------------------------

however, I'm not sure whether this solution is unproblematic. Furthermore, it looks like a rather dirty hack to me...
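For array-valued x, by the way, the same recurrence vectorizes directly. A rough sketch (assuming numpy; only lightly tested, and the overflow behaviour for large N is of course the same as above):

import numpy

def chebyu_arr(N, x):
    # evaluate U_N(x) elementwise by the same upward recurrence
    x = numpy.asarray(x)
    previous = numpy.zeros(x.shape, dtype=x.dtype)
    current = numpy.ones(x.shape, dtype=x.dtype)
    for n in range(N):
        new = 2*x*current - previous
        previous = current
        current = new
    return current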
Greetings, Norbert Nemec From gruben at bigpond.net.au Mon Apr 24 07:38:58 2006 From: gruben at bigpond.net.au (Gary Ruben) Date: Mon, 24 Apr 2006 21:38:58 +1000 Subject: [SciPy-dev] Numerical stability of special functions (here: chebyu) In-Reply-To: <444C8B23.1080706@gmx.de> References: <444C8B23.1080706@gmx.de> Message-ID: <444CB8D2.8010109@bigpond.net.au> Hi Norbert, No comment in answer to your question, but you can avoid the 'new' variable by changing the loop to

for n in range(N):
    current, previous = 2*x*current - previous, current

Gary R.

Norbert Nemec wrote:
> def chebyu(N,x):
>     previous = 0.0
>     current = 1.0
>     for n in range(N):
>         new = 2*x*current - previous
>         previous = current
>         current = new
>     return current

From cimrman3 at ntc.zcu.cz Mon Apr 24 08:43:03 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 24 Apr 2006 14:43:03 +0200 Subject: [SciPy-dev] fmin_cg bug Message-ID: <444CC7D7.7020403@ntc.zcu.cz> Hi, the little script below revealed a little bug in scipy.optimize.fmin_cg:

before fix:
Warning: Desired error not necessarily achieved due to precision loss
         Current function value: 333283335000.000000
         Iterations: 0
         Function evaluations: 2
         Gradient evaluations: 1

after fix:
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 5
         Function evaluations: 26
         Gradient evaluations: 26

I dared to commit the (little :) change to SVN. r.

---
import numpy as nm
import scipy.optimize as opt

def f( x ):
    return nm.sum( x * x )

def fg( x ):
    return 2 * x

x0 = nm.arange( 100000, dtype = nm.float64 )
x = opt.fmin_cg( f, x0, fg )

From nwagner at iam.uni-stuttgart.de Mon Apr 24 08:57:39 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 24 Apr 2006 14:57:39 +0200 Subject: [SciPy-dev] fmin_cg bug In-Reply-To: <444CC7D7.7020403@ntc.zcu.cz> References: <444CC7D7.7020403@ntc.zcu.cz> Message-ID: <444CCB43.4030508@iam.uni-stuttgart.de>

Robert Cimrman wrote:
> Hi, the little script below revealed a little bug in scipy.optimize.fmin_cg:
>
> before fix:
> Warning: Desired error not necessarily achieved due to precision loss
>          Current function value: 333283335000.000000
>          Iterations: 0
>          Function evaluations: 2
>          Gradient evaluations: 1
> after fix:
> Optimization terminated successfully.
>          Current function value: 0.000000
>          Iterations: 5
>          Function evaluations: 26
>          Gradient evaluations: 26
>
> I dared to commit the (little :) change to SVN.
>
> r.
>
> ---
> import numpy as nm
> import scipy.optimize as opt
>
> def f( x ):
>     return nm.sum( x * x )
>
> def fg( x ):
>     return 2 * x
>
> x0 = nm.arange( 100000, dtype = nm.float64 )
> x = opt.fmin_cg( f, x0, fg )

After fix I get

Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 5
         Function evaluations: 31
         Gradient evaluations: 31

>>> scipy.__version__
'0.4.9.1879'
>>> nm.__version__
'0.9.7.2394'
>>> x
array([  0.00000000e+00,   2.98982049e-13,   5.97964097e-13, ...,
         2.98973079e-08,   2.98976069e-08,   2.98979059e-08])

Is the number of function evaluations a machine-dependent property ?
Nils From cimrman3 at ntc.zcu.cz Mon Apr 24 09:22:20 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 24 Apr 2006 15:22:20 +0200 Subject: [SciPy-dev] fmin_cg bug In-Reply-To: <444CCB43.4030508@iam.uni-stuttgart.de> References: <444CC7D7.7020403@ntc.zcu.cz> <444CCB43.4030508@iam.uni-stuttgart.de> Message-ID: <444CD10C.5070004@ntc.zcu.cz> Nils Wagner wrote: > Robert Cimrman wrote: > >>after fix: >>Optimization terminated successfully. >> Current function value: 0.000000 >> Iterations: 5 >> Function evaluations: 26 >> Gradient evaluations: 26 >> >>I dared to commit the (little :) change to SVN. >> > > After fix I get > > Optimization terminated successfully. > Current function value: 0.000000 > Iterations: 5 > Function evaluations: 31 > Gradient evaluations: 31 > > Is the number of function evaluations a machine-dependent property ? > no, 31 is for 100000 elements of x0, 26 for 10000. I just forgot to change the value before posting. r. From dd55 at cornell.edu Mon Apr 24 09:29:55 2006 From: dd55 at cornell.edu (Darren Dale) Date: Mon, 24 Apr 2006 09:29:55 -0400 Subject: [SciPy-dev] uncommented items in sandbox/setup.py Message-ID: <200604240929.55431.dd55@cornell.edu> I notice that in svn 1879, several subpackages have been uncommented in sandbox/setup.py. I assume this was a simple mistake, since the setup.py file indicates that all the subpackages should be commented out in the file that has been checked into the repository. I was unable to build scipy this morning until I commented them again. Darren From schofield at ftw.at Mon Apr 24 10:27:50 2006 From: schofield at ftw.at (Ed Schofield) Date: Mon, 24 Apr 2006 16:27:50 +0200 Subject: [SciPy-dev] fmin_cg bug In-Reply-To: <444CC7D7.7020403@ntc.zcu.cz> References: <444CC7D7.7020403@ntc.zcu.cz> Message-ID: <444CE066.1090105@ftw.at> Robert Cimrman wrote: > Hi, > the little script below revealed a little bug in scipy.optimize.fmin_cg: > > before fix: > Warning: Desired error not necessarily achieved due to precision loss > Current function value: 333283335000.000000 > Iterations: 0 > Function evaluations: 2 > Gradient evaluations: 1 > after fix: > Optimization terminated successfully. > Current function value: 0.000000 > Iterations: 5 > Function evaluations: 26 > Gradient evaluations: 26 > > I dared to commit the (little :) change to SVN. > Great! I'm the one who committed the check and the warning, because I noticed that sometimes it was silently failing. Now you've fixed it properly. Thanks :) -- Ed From schofield at ftw.at Mon Apr 24 10:29:25 2006 From: schofield at ftw.at (Ed Schofield) Date: Mon, 24 Apr 2006 16:29:25 +0200 Subject: [SciPy-dev] uncommented items in sandbox/setup.py In-Reply-To: <200604240929.55431.dd55@cornell.edu> References: <200604240929.55431.dd55@cornell.edu> Message-ID: <444CE0C5.5010101@ftw.at> Darren Dale wrote: > I notice that in svn 1879, several subpackages have been uncommented in > sandbox/setup.py. I assume this was a simple mistake, since the setup.py file > indicates that all the subpackages should be commented out in the file that > has been checked into the repository. I was unable to build scipy this > morning until I commented them again. > Thanks for the tip-off. I've disabled them again in SVN. 
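(For anyone who hasn't looked at it: the relevant part of Lib/sandbox/setup.py is just a list of add_subpackage calls that are meant to stay commented out in the repository. Roughly like this -- from memory, not an exact copy of the file, and the package names are only examples:

def configuration(parent_package='', top_path=None):
    from numpy.distutils.misc_util import Configuration
    config = Configuration('sandbox', parent_package, top_path)
    # All sandbox packages must be commented out before checking in!
    #config.add_subpackage('ga')
    #config.add_subpackage('xplt')
    return config

so "uncommitting" a local uncomment before an svn commit is easy to forget.)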
-- Ed From robert.kern at gmail.com Mon Apr 24 12:07:38 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 24 Apr 2006 11:07:38 -0500 Subject: [SciPy-dev] uncommented items in sandbox/setup.py In-Reply-To: <200604240929.55431.dd55@cornell.edu> References: <200604240929.55431.dd55@cornell.edu> Message-ID: <444CF7CA.7060107@gmail.com> Darren Dale wrote: > I notice that in svn 1879, several subpackages have been uncommented in > sandbox/setup.py. I assume this was a simple mistake, since the setup.py file > indicates that all the subpackages should be commented out in the file that > has been checked into the repository. I was unable to build scipy this > morning until I commented them again. D'oh! Sorry, that was my fault. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From oliphant.travis at ieee.org Mon Apr 24 12:08:31 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 24 Apr 2006 10:08:31 -0600 Subject: [SciPy-dev] Numerical stability of special functions (here: chebyu) In-Reply-To: <444C8B23.1080706@gmx.de> References: <444C8B23.1080706@gmx.de> Message-ID: <444CF7FF.2030908@ieee.org> Norbert Nemec wrote: > Hi there, > > I just had my first encounter with Chebyshev polynomials of the second > kind. A lengthy calculation could be solved very elegantly using these > special functions. In the end, I wanted to plot the solution using > scipy&numpy&matplotlib. To my disappointment, the results showed strong > numerical errors, so I had to dig in deeper. > The polynomials are not implemented in the best way as you've discovered. This is a known problem. We welcome any suggestions for improving their implementation. Thank you for your suggestions. -Travis From charlesr.harris at gmail.com Mon Apr 24 12:26:31 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 24 Apr 2006 10:26:31 -0600 Subject: [SciPy-dev] Numerical stability of special functions (here: chebyu) In-Reply-To: <444C8B23.1080706@gmx.de> References: <444C8B23.1080706@gmx.de> Message-ID: Hi, On 4/24/06, Norbert Nemec wrote: > > Hi there, > > I just had my first encounter with Chebyshev polynomials of the second > kind. A lengthy calculation could be solved very elegantly using these > special functions. In the end, I wanted to plot the solution using > scipy&numpy&matplotlib. To my disappointment, the results showed strong > numerical errors, so I had to dig in deeper. > > I quickly found out that the call chebyu(N)(x) itself returned bad > results for N>30 and complex x ~= +/- 1.0 > > I checked the implementation of chebyu and found that it basically first > calculates the coefficients from the generating function and then adds > up the polynomial. In the numerical range of question, this means adding > up huge numbers with alternating sign, producing bad numerical errors. 
> After a long odyssey of optimizations (I even considered using gmpy to
> simply increase the internal precision) I found a ridiculously simple
> solution:
>
> ----------------
> def chebyu(N,x):
>     previous = 0.0
>     current = 1.0
>     for n in range(N):
>         new = 2*x*current - previous
>         previous = current
>         current = new
>     return current
> ----------------

Chebychev polynomials are notorious for roundoff error when evaluated as power series; power series tend to be ill conditioned and often lose precision when the degree gets up around 5-10, which is one reason to use Chebychev series instead. Chebychev series can be evaluated using inverse recursion (aka Clenshaw's recurrence), which is a generalized Horner's method. You will probably want to do that at some point and you can find it covered in several texts, Numerical Recipes for instance. The normal Chebychev recursion may not be suitable for very large values of N due to roundoff error. There is a trick used to generate high-precision sin and cos for fft's using recursion that could probably be adapted to those cases. I've normally constrained myself to N < 100 since the Chebychev coefficients tend to decrease *very* rapidly. In those cases there is a loss of float64 precision with the error going from roughly 1e-17 to 1e-14. The Chebychev functions of the second kind are the normal Chebychev functions over the interval [-2,2] scaled to [-1,1], so the same precision observation should hold over at least part of the interval, [-.5,.5], but outside of that interval the solution you want to compute from the recursion is the increasing one, so you should be fine. Chuck From charlesr.harris at gmail.com Mon Apr 24 12:48:56 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 24 Apr 2006 10:48:56 -0600 Subject: [SciPy-dev] Numerical stability of special functions (here: chebyu) In-Reply-To: References: <444C8B23.1080706@gmx.de> Message-ID: Hi, I've attached the module I use for normal Chebychev stuff. You may find some of the routines adaptable to your work with the Chebychev functions of the second kind. I've done a bit of editing here and there and haven't run it lately, so caveat emptor. Chuck

-------------- next part --------------
A non-text attachment was scrubbed...
Name: chebychev.py
Type: text/x-python
Size: 19615 bytes
Desc: not available
URL:

From aisaac at american.edu Mon Apr 24 13:01:56 2006 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 24 Apr 2006 13:01:56 -0400 Subject: [SciPy-dev] Numerical stability of special functions (here: chebyu) In-Reply-To: References: <444C8B23.1080706@gmx.de> Message-ID: On Mon, 24 Apr 2006, Charles R Harris apparently wrote: > I've attached the module I use for normal Chebychev stuff. Just a reminder that code without a license statement is generally not useful. (Your statement 'caveat emptor' while well intended adds to the ambiguity.) Please clarify the license of posted code: BSD or MIT preferred on this list. Thank you, Alan Isaac PS Since this comes up so often, I wonder if there is not some way to implement a list policy: posted code is available under the BSD license unless otherwise specified. I have seen similar things on other lists.
From charlesr.harris at gmail.com Mon Apr 24 13:12:41 2006 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 24 Apr 2006 11:12:41 -0600 Subject: [SciPy-dev] Numerical stability of special functions (here: chebyu) In-Reply-To: References: <444C8B23.1080706@gmx.de> Message-ID: Hmmm, On 4/24/06, Alan G Isaac wrote:
> On Mon, 24 Apr 2006, Charles R Harris apparently wrote:
> > I've attached the module I use for normal Chebychev stuff.
>
> Just a reminder that code without a license statement is
> generally not useful. (Your statement 'caveat emptor' while
> well intended adds to the ambiguity.)
>
> Please clarify the license of posted code:
> BSD or MIT preferred on this list.

Is there some simple boilerplate for the BSD license somewhere? I assume that I also need to put my name in as copyright holder. Chuck From aisaac at american.edu Mon Apr 24 13:29:36 2006 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 24 Apr 2006 13:29:36 -0400 Subject: [SciPy-dev] Numerical stability of special functions (here: chebyu) In-Reply-To: References: <444C8B23.1080706@gmx.de> Message-ID: On Mon, 24 Apr 2006, Charles R Harris apparently wrote:
> Is there some simple boilerplate for the BSD license
> somewhere? I assume that I also need to put my name in as
> copyright holder.

http://www.opensource.org/licenses/bsd-license.php
http://www.opensource.org/licenses/mit-license.html

My own view (but IANAL) is that it completely suffices to list yourself as author, date it, and state that the code is under the MIT license: e.g.,

:author: Charles R Harris
:date: 2006-04-25
:license: MIT
    http://www.opensource.org/licenses/mit-license.html

Cheers, Alan Isaac From robert.kern at gmail.com Mon Apr 24 14:30:14 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 24 Apr 2006 13:30:14 -0500 Subject: [SciPy-dev] Numerical stability of special functions (here: chebyu) In-Reply-To: References: <444C8B23.1080706@gmx.de> Message-ID: <444D1936.8050506@gmail.com> Alan G Isaac wrote:
> PS Since this comes up so often, I wonder if there is not
> some way to implement a list policy: posted code is
> available under the BSD license unless otherwise specified.
> I have seen similar things on other lists.

No, I don't think that's really enforceable either legally or practically. A simple polite question to the poster suffices, I think. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From aisaac at american.edu Mon Apr 24 16:03:40 2006 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 24 Apr 2006 16:03:40 -0400 Subject: [SciPy-dev] interest in Time series functionality? In-Reply-To: References: Message-ID: On Sat, 22 Apr 2006, Matt Knox apparently wrote:
> Our group consists mostly of financial practitioners and
> engineers, not really pure software developers, so if
> somebody is interested in taking this to the next level
> I would be willing to release the code (both the FAME api,
> and the time series module) if someone wanted to improve
> upon this and share their improvements in the future. The
> code is definitely not a polished product right now, but
> it is functional.
> If you have any thoughts on this (positive or negative)
> I would love to hear them.
I was hoping someone else would respond first, but since they have not, I will provide a smidgen of feedback. I hope you will release the code in advance of the reassurances you seek. It sounds useful, and it sounds likely to attract development effort over time. I am interested in looking at it, if it is released under a liberal license, but I am more a user than a developer. Still, Python is great in that for many applications users can readily contribute to development. The time series module seems to be an obvious candidate for such contributions. The real question, I propose, is where to house the code and how to manage patches. As for the former, it seems to be an obvious candidate for the scipy sandbox. Cheers, Alan Isaac From dkaufman at imago.com Mon Apr 24 18:45:54 2006 From: dkaufman at imago.com (Duane Kaufman) Date: Mon, 24 Apr 2006 17:45:54 -0500 Subject: [SciPy-dev] Scipy ga module on Windows Message-ID: <949BD319240E484B984463501CD7D17C257CB2@IMAGOEXCH1.corp.imago.com> Hi, I am looking to utilize a genetic algorithm for a project I am working on. I noticed Scipy had this advertised as included, but cannot find it (I downloaded the latest for Python 2.4) Is this module not available? Thanks, Duane From robert.kern at gmail.com Mon Apr 24 18:57:59 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 24 Apr 2006 17:57:59 -0500 Subject: [SciPy-dev] Scipy ga module on Windows In-Reply-To: <949BD319240E484B984463501CD7D17C257CB2@IMAGOEXCH1.corp.imago.com> References: <949BD319240E484B984463501CD7D17C257CB2@IMAGOEXCH1.corp.imago.com> Message-ID: <444D57F7.9010907@gmail.com> Duane Kaufman wrote: > Hi, > > I am looking to utilize a genetic algorithm for a project I am working > on. > > I noticed Scipy had this advertised as included, but cannot find it (I > downloaded the latest for Python 2.4) > > Is this module not available? It has been moved to the sandbox because it has not been completely ported to use numpy, yet. You will have to build scipy from source and uncomment the appropriate line in Lib/sandbox/setup.py to have it install as part of the scipy package. However, it is pure Python, so you can just copy it from the source distribution to the appropriate location, too. We should probably strike that item from the description until we get it in the main package again. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Mon Apr 24 20:59:04 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 24 Apr 2006 19:59:04 -0500 Subject: [SciPy-dev] Changing the Trac authentication, for real this time! Message-ID: <444D7458.3020402@gmail.com> If you encounter errors accessing the Trac sites for NumPy and SciPy over the next hour or so, please wait until I have announced that I have finished. If things are still broken after that, please let me know and I will try to fix it immediately. The details of the changes were posted to the previous thread "Changing the Trac authentication". Apologies for any disruption and for the noise.
-- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From mattknox_ca at hotmail.com Mon Apr 24 21:40:10 2006 From: mattknox_ca at hotmail.com (Matt Knox) Date: Tue, 25 Apr 2006 01:40:10 +0000 (UTC) Subject: [SciPy-dev] interest in Time series functionality? References: Message-ID: Alan G Isaac american.edu> writes: > > On Sat, 22 Apr 2006, Matt Knox apparently wrote: > > Our group consists mostly of financial practitioners and > > engineers, not really pure software developers, so if > > somebody is interested in taking this to the next level > > I would be willing to release the code (both the FAME api, > > and the time series module) if someone wanted to improve > > upon this and share their improvements in the future. The > > code is definitely not a polished product right now, but > > it is functional. > > > If you have any thoughts on this (positive or negative) > > I would love to hear them. > > I was hoping someone else would respond first, but since > they have not, I will provide a smidgen of feedback. > > I hope you will release the code in advance of the > reassurances you seek. It sounds useful, and it sounds > likely to attract development effort over time. I am > interested in looking at it, if it is released under > a liberal license, but I am more a user than a developer. > Still, Python is great in that for many applications users > can readily contribute to development. The time series > module seems to be an obvious candidate for such > contributions. > > The real question, I propose, is where to house the code and > how to manage patches. As for the former, it seems to be an > obvious candidate for the scipy sandbox. > > Cheers, > Alan Isaac > Thanks for the reply Alan. The code needs some additional spit and polish, reorganization and some additional documentation before it is suitable to be released to the public, but within the next couple of months I hope to be able to find the time to do that. I have no experience writing or managing any kind of open source project (nor do any of my colleagues), so I'm not sure I would be able to offer much in the way of managing patches, etc. Currently the FAME database functionality is sort of mangled in with the time series module, but that can definitely be separated better. Would the FAME api be suitable for the sandbox as well? or just the time series capabilities? I suspect database API's aren't really something people would look for on scipy, but who knows. At any rate, I am fairly certain there is no existing python API for FAME freely available so some people might be interested in that. - Matt Knox From travis at enthought.com Mon Apr 24 22:15:42 2006 From: travis at enthought.com (Travis N. Vaught) Date: Mon, 24 Apr 2006 21:15:42 -0500 Subject: [SciPy-dev] interest in Time series functionality? In-Reply-To: References: Message-ID: <444D864E.2040704@enthought.com> Hey Matt, Matt Knox wrote: > Alan G Isaac american.edu> writes: > > >> On Sat, 22 Apr 2006, Matt Knox apparently wrote: >> >>> Our group consists mostly of financial practitioners and >>> engineers, not really pure software developers, so if >>> somebody is interested in taking this to the next level >>> I would be willing to release the code (both the FAME api, >>> and the time series module) if someone wanted to improve >>> upon this and share their improvements in the future. 
The >>> code is definitely not a polished product right now, but >>> it is functional. >>> >>> If you have any thoughts on this (positive or negative) >>> I would love to hear them. >>> >> I was hoping someone else would respond first, but since >> they have not, I will provide a smidgen of feedback. >> >> I hope you will release the code in advance of the >> reassurances you seek. It sounds useful, and it sounds >> likely to attract development effort over time. I am >> interested in looking at it, if it is released under >> a liberal license, but I am more a user than a developer. >> Still, Python is great in that for many applications users >> can readily contribute to development. The time series >> module seems to be an obvious candidate for such >> contributions. >> >> The real question, I propose, is where to house the code and >> how to manage patches. As for the former, it seems to be an >> obvious candidate for the scipy sandbox. >> >> Cheers, >> Alan Isaac >> >> > > Thanks for the reply Alan. > > The code needs some additional spit and polish, reorganization and some > additional documentation before it is suitable to be released to the public, but > within the next couple of months I hope to be able to find the time to do that. > I have no experience writing or managing any kind of open source project (nor do > any of my colleagues), so I'm not sure I would be able to offer much in the way > of managing patches, etc. > Currently the FAME database functionality is sort of mangled in with the time > series module, but that can definitely be separated better. Would the FAME api > be suitable for the sandbox as well? or just the time series capabilities? I > suspect database API's aren't really something people would look for on scipy, > but who knows. At any rate, I am fairly certain there is no existing python API > for FAME freely available so some people might be interested in that. > > - Matt Knox For the record, Enthought is also very interested in seeing time-series functionality within scipy. I'd love to see the code in the sandbox--coupled to the FAME api or not. It is a sandbox, after all, and the refactoring can be done in the open (if you're comfortable with that). To state the obvious: If it's useful, folks will pile on and it can get integrated into the core of scipy. If not, it won't. Regarding the posting of the code, we at Enthought have been involved in open source for a while now and my best advice is "don't do as poor a job as we do" in making the code available at an early stage to the wider community. We're trying to do a better job of this ourselves. The code we've developed and open-sourced has really suffered as a result of our not disseminating and promoting it earlier in the process. Best, Travis From robert.kern at gmail.com Mon Apr 24 22:38:26 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 24 Apr 2006 21:38:26 -0500 Subject: [SciPy-dev] Changing the Trac authentication, for real this time! In-Reply-To: <444D7458.3020402@gmail.com> References: <444D7458.3020402@gmail.com> Message-ID: <444D8BA2.1080407@gmail.com> I hate computers. It's still not done. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From stephen.walton at csun.edu Mon Apr 24 23:18:03 2006 From: stephen.walton at csun.edu (Stephen Walton) Date: Mon, 24 Apr 2006 20:18:03 -0700 Subject: [SciPy-dev] interest in Time series functionality? In-Reply-To: <444D864E.2040704@enthought.com> References: <444D864E.2040704@enthought.com> Message-ID: <444D94EB.4030400@csun.edu>

Travis N. Vaught wrote:
> For the record, Enthought is also very interested in seeing time-series
> functionality within scipy.

So am I. I do a lot of time series analysis (solar activity time series in my case) and am forever re-inventing the wheel here. My reading of Matt's original post sounds like it would be extremely useful to a very large community, namely those solar astronomers who study the solar cycle on daily/weekly/annual/decadal time scales. Steve Walton From robert.kern at gmail.com Tue Apr 25 01:14:27 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 25 Apr 2006 00:14:27 -0500 Subject: [SciPy-dev] Google Summer of Code In-Reply-To: <44476AEA.7080003@decsai.ugr.es> References: <44476AEA.7080003@decsai.ugr.es> Message-ID: <444DB033.4000906@gmail.com> [Cross-posted because this is partially an announcement. Continuing discussion should go to only one list, please.]

Antonio Arauzo Azofra wrote:
> Google Summer of Code
> http://code.google.com/soc/
>
> Have you considered participating as a Mentoring organization? Offering
> any project about Scipy?

I'm not sure which "you" you are referring to here, but yes! Unfortunately, it was a bit late in the process to be applying as a mentoring organization. Google started consolidating mentoring organizations. However, I and several others at Enthought are volunteering to mentor through the PSF. I encourage others on these lists to do the same or to apply as students, whichever is appropriate. We'll be happy to provide SVN workspace for numpy and scipy SoC projects. I've added one fairly general scipy entry to the python.org Wiki page listing project ideas: http://wiki.python.org/moin/SummerOfCode If you have more specific ideas, please add them to the Wiki. Potential mentors: Neal Norwitz is coordinating PSF mentors this year and has asked those he or Guido does not know personally to give personal references. If you've been active on this list, I'm sure we can play the "Two Degrees of Separation From Guido Game" and get you a reference from someone else here. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From nwagner at iam.uni-stuttgart.de Tue Apr 25 02:32:44 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 25 Apr 2006 08:32:44 +0200 Subject: [SciPy-dev] Check reading the nested fields of a nested array Message-ID: <444DC28C.6050709@iam.uni-stuttgart.de> 0.9.7.2404 on a 64 bit system. numpy.test(1,10) results in

Check reading the nested fields of a nested array (1st level)*** glibc detected *** malloc(): memory corruption (fast): 0x00000000007dee40 ***

Program received signal SIGABRT, Aborted.
[Switching to Thread 16384 (LWP 13980)]
0x00002aaaab6164f9 in kill () from /lib64/libc.so.6

Check reading the nested fields of a nested array (1st level)*** glibc detected *** free(): invalid next size (fast): 0x0000000000641100 ***

Program received signal SIGABRT, Aborted.
[Switching to Thread 16384 (LWP 13938)]
0x00002aaaab6164f9 in kill () from /lib64/libc.so.6
(gdb) bt
#0  0x00002aaaab6164f9 in kill () from /lib64/libc.so.6
#1  0x00002aaaaadf3821 in pthread_kill () from /lib64/libpthread.so.0
#2  0x00002aaaaadf3be2 in raise () from /lib64/libpthread.so.0
#3  0x00002aaaab61759d in abort () from /lib64/libc.so.6
#4  0x00002aaaab64a7be in __libc_message () from /lib64/libc.so.6
#5  0x00002aaaab64f76c in malloc_printerr () from /lib64/libc.so.6
#6  0x00002aaaab65025a in free () from /lib64/libc.so.6
#7  0x00002aaaaac3b41a in unicode_dealloc (unicode=0x770e50) at unicodeobject.c:260
#8  0x00002aaaaac5387f in PyEval_EvalFrame (f=0x608780) at ceval.c:3578
#9  0x00002aaaaac53b97 in PyEval_EvalFrame (f=0x5c6940) at ceval.c:3629
#10 0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaabbeec00, globals=, locals=, args=0x2aaaadd2b260, argcount=2, kws=0x78a940, kwcount=0, defs=0x2aaaabbf6de8, defcount=1, closure=0x0) at ceval.c:2730
#11 0x00002aaaaac0e9af in function_call (func=0x2aaaabc056e0, arg=0x2aaaadd2b248, kw=) at funcobject.c:548
#12 0x00002aaaaabfa760 in PyObject_Call (func=, arg=, kw=) at abstract.c:1751
#13 0x00002aaaaac532e2 in PyEval_EvalFrame (f=0x5d8e30) at ceval.c:3824
#14 0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaabbeec70, globals=, locals=, args=0x2aaaade0bde8, argcount=2, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2730
#15 0x00002aaaaac0e9af in function_call (func=0x2aaaabc05758, arg=0x2aaaade0bdd0, kw=) at funcobject.c:548
#16 0x00002aaaaabfa760 in PyObject_Call (func=, arg=, kw=) at abstract.c:1751
#17 0x00002aaaaac02131 in instancemethod_call (func=, arg=0x2aaaade0bdd0, kw=0x0) at classobject.c:2431
#18 0x00002aaaaabfa760 in PyObject_Call (func=, arg=, kw=) at abstract.c:1751
#19 0x00002aaaaac5380d in PyEval_EvalFrame (f=0x6231b0) at ceval.c:3755
#20 0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaabbe41f0, globals=, locals=, args=0x2aaaadd2b0b0, argcount=2, kws=0x0, kwcount=0, defs=0x2aaaabc088a8, defcount=1, closure=0x0) at ceval.c:2730
#21 0x00002aaaaac0e9af in function_call (func=0x2aaaabc09c80, arg=0x2aaaadd2b098, kw=) at funcobject.c:548
#22 0x00002aaaaabfa760 in PyObject_Call (func=, arg=, kw=) at abstract.c:1751
#23 0x00002aaaaac02131 in instancemethod_call (func=, arg=0x2aaaadd2b098, kw=0x0) at classobject.c:2431
#24 0x00002aaaaabfa760 in PyObject_Call (func=, arg=, kw=) at abstract.c:1751
#25 0x00002aaaaac33b0a in slot_tp_call (self=, args=0x2aaaade70f10, kwds=0x0) at typeobject.c:4526
#26 0x00002aaaaabfa760 in PyObject_Call (func=, arg=, kw=) at abstract.c:1751
#27 0x00002aaaaac5380d in PyEval_EvalFrame (f=0x677ba0) at ceval.c:3755
#28 0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaabbf46c0, globals=, locals=, args=0x2aaaadd29218, argcount=2, kws=0x770110, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2730
#29 0x00002aaaaac0e9af in function_call (func=0x2aaaabc05f50, arg=0x2aaaadd29200, kw=) at funcobject.c:548
#30 0x00002aaaaabfa760 in PyObject_Call (func=, arg=, kw=) at abstract.c:1751
#31 0x00002aaaaac532e2 in PyEval_EvalFrame (f=0x620960) at ceval.c:3824
#32 0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaabbf4730, globals=, locals=, args=0x2aaaadd2b920, argcount=2, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2730
#33 0x00002aaaaac0e9af in function_call (func=0x2aaaabc06050, arg=0x2aaaadd2b908, kw=) at funcobject.c:548
#34 0x00002aaaaabfa760 in PyObject_Call (func=, arg=, kw=) at abstract.c:1751
#35 0x00002aaaaac02131 in instancemethod_call (func=, arg=0x2aaaadd2b908, kw=0x0) at classobject.c:2431
#36 0x00002aaaaabfa760 in PyObject_Call (func=, arg=, kw=) at abstract.c:1751
#37 0x00002aaaaac33b0a in slot_tp_call (self=, args=0x2aaaadd2e350, kwds=0x0) at typeobject.c:4526
#38 0x00002aaaaabfa760 in PyObject_Call (func=, arg=, kw=) at abstract.c:1751
#39 0x00002aaaaac5380d in PyEval_EvalFrame (f=0x5dac10) at ceval.c:3755
#40 0x00002aaaaac53b97 in PyEval_EvalFrame (f=0x5dd1f0) at ceval.c:3629
#41 0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaabbe4c70, globals=, locals=, args=0x5d5c78, argcount=3, kws=0x5d5c90, kwcount=0, defs=0x2aaaabc07c38, defcount=2, closure=0x0) at ceval.c:2730
#42 0x00002aaaaac53aba in PyEval_EvalFrame (f=0x5d5ae0) at ceval.c:3640
#43 0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaaab23b90, globals=, locals=, args=0x541af0, argcount=2, kws=0x541b00, kwcount=0, defs=0x2aaaacb8f698, defcount=2, closure=0x0) at ceval.c:2730
#44 0x00002aaaaac53aba in PyEval_EvalFrame (f=0x541960) at ceval.c:3640
#45 0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaaab368f0, globals=, locals=, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2730
#46 0x00002aaaaac556d2 in PyEval_EvalCode (co=, globals=, locals=) at ceval.c:484
#47 0x00002aaaaac70719 in run_node (n=, filename=, globals=0x503b50, locals=0x503b50, flags=) at pythonrun.c:1265
#48 0x00002aaaaac71bc7 in PyRun_InteractiveOneFlags (fp=, filename=0x2aaaaac95e73 "", flags=0x7fffff849610) at pythonrun.c:762
#49 0x00002aaaaac71cbe in PyRun_InteractiveLoopFlags (fp=0x2aaaab809e00, filename=0x2aaaaac95e73 "", flags=0x7fffff849610) at pythonrun.c:695
#50 0x00002aaaaac7221c in PyRun_AnyFileExFlags (fp=0x2aaaab809e00, filename=0x2aaaaac95e73 "", closeit=0, flags=0x7fffff849610) at pythonrun.c:658

From robert.kern at gmail.com Tue Apr 25 02:36:57 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 25 Apr 2006 01:36:57 -0500 Subject: [SciPy-dev] Check reading the nested fields of a nested array In-Reply-To: <444DC28C.6050709@iam.uni-stuttgart.de> References: <444DC28C.6050709@iam.uni-stuttgart.de> Message-ID: <444DC389.1020103@gmail.com>

Nils Wagner wrote:
> 0.9.7.2404 on a 64 bit system
>
> numpy.test(1,10) results in
>
> Check reading the nested fields of a nested array (1st level)*** glibc
> detected *** malloc(): memory corruption (fast): 0x00000000007dee40 ***

Did it print anything at all before this? -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From oliphant.travis at ieee.org Tue Apr 25 02:38:27 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 25 Apr 2006 00:38:27 -0600 Subject: [SciPy-dev] Check reading the nested fields of a nested array In-Reply-To: <444DC28C.6050709@iam.uni-stuttgart.de> References: <444DC28C.6050709@iam.uni-stuttgart.de> Message-ID: <444DC3E3.2060604@ieee.org>

Nils Wagner wrote:
> 0.9.7.2404 on a 64 bit system
>
> numpy.test(1,10) results in
>
> Check reading the nested fields of a nested array (1st level)*** glibc
> detected *** malloc(): memory corruption (fast): 0x00000000007dee40 ***

Can you check to see which check-in caused the problem. Go back to revision 2403, 2402, and so forth. Also, please remove the build directory and the installed numpy to be sure changes to the C-API are not causing problems.
The trace-back is not really giving a clue. Perhaps if you ran under valgrind better help could be given. Are there any warnings emitted during compilation? Could somebody else with a 64-bit system verify? -Travis From nwagner at iam.uni-stuttgart.de Tue Apr 25 02:40:19 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 25 Apr 2006 08:40:19 +0200 Subject: [SciPy-dev] Check reading the nested fields of a nested array In-Reply-To: <444DC389.1020103@gmail.com> References: <444DC28C.6050709@iam.uni-stuttgart.de> <444DC389.1020103@gmail.com> Message-ID: <444DC453.3020101@iam.uni-stuttgart.de>

Robert Kern wrote:
> Nils Wagner wrote:
>> 0.9.7.2404 on a 64 bit system
>>
>> numpy.test(1,10) results in
>>
>> Check reading the nested fields of a nested array (1st level)*** glibc
>> detected *** malloc(): memory corruption (fast): 0x00000000007dee40 ***
>
> Did it print anything at all before this?

snip

Check creation from list of list of tuples ... ok
Check creation from list of tuples ... ok
Check creation from tuples ... ok
Check creation from list of list of tuples ... ok
Check creation from list of tuples ... ok
Check creation from tuples ... ok
Check creation from list of list of tuples ... ok
Check creation from list of tuples ... ok
Check creation from tuples ... ok
Check creation from list of list of tuples ... ok
Check creation from list of tuples ... ok
Check creation from tuples ... ok
Check creation of 0-dimensional objects ... ok
Check creation of multi-dimensional objects ... ok
Check creation of single-dimensional objects ... ok
Check creation of 0-dimensional objects ... ok
Check creation of multi-dimensional objects ... ok
Check creation of single-dimensional objects ... ok
Check reading the top fields of a nested array ... ok
Check reading the nested fields of a nested array (1st level) ... ok
Check access nested descriptors of a nested array (1st level) ... ok
Check reading the nested fields of a nested array (2nd level) ... ok
Check access nested descriptors of a nested array (2nd level) ... ok
Check reading the top fields of a nested array ... ok
Check reading the nested fields of a nested array (1st level)*** glibc detected *** free(): invalid next size (fast): 0x0000000000641360 ***
Abort

Nils From jonathan.taylor at stanford.edu Tue Apr 25 02:47:18 2006 From: jonathan.taylor at stanford.edu (Jonathan Taylor) Date: Mon, 24 Apr 2006 23:47:18 -0700 Subject: [SciPy-dev] interest in Time series functionality? In-Reply-To: <444D94EB.4030400@csun.edu> References: <444D864E.2040704@enthought.com> <444D94EB.4030400@csun.edu> Message-ID: <444DC5F6.8090707@stanford.edu> For the record, I deal with fMRI time series. I would like to see a proper time series module in scipy, too. Although there are lots of hacks in fMRI to speed computations, a proper time series module would be a good addition. Jonathan Taylor

Stephen Walton wrote:
> Travis N. Vaught wrote:
>> For the record, Enthought is also very interested in seeing time-series
>> functionality within scipy.
>
> So am I. I do a lot of time series analysis (solar activity time series
> in my case) and am forever re-inventing the wheel here. My reading of
> Matt's original post sounds like it would be extremely useful to a very
> large community, namely those solar astronomers who study the solar
> cycle on daily/weekly/annual/decadal time scales.
> > Steve Walton > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > -- ------------------------------------------------------------------------ I'm part of the Team in Training: please support our efforts for the Leukemia and Lymphoma Society! http://www.active.com/donate/tntsvmb/tntsvmbJTaylor GO TEAM !!! ------------------------------------------------------------------------ Jonathan Taylor Tel: 650.723.9230 Dept. of Statistics Fax: 650.725.8977 Sequoia Hall, 137 www-stat.stanford.edu/~jtaylo 390 Serra Mall Stanford, CA 94305 From nwagner at iam.uni-stuttgart.de Tue Apr 25 02:51:50 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 25 Apr 2006 08:51:50 +0200 Subject: [SciPy-dev] Check reading the nested fields of a nested array In-Reply-To: <444DC3E3.2060604@ieee.org> References: <444DC28C.6050709@iam.uni-stuttgart.de> <444DC3E3.2060604@ieee.org> Message-ID: <444DC706.1030406@iam.uni-stuttgart.de> Travis Oliphant wrote: > Nils Wagner wrote: > >> 0.9.7.2404 on a 64 bit system >> >> numpy.test(1,10) results in >> >> Check reading the nested fields of a nested array (1st level)*** glibc >> detected *** malloc(): memory corruption (fast): 0x00000000007dee40 *** >> >> > > Can you check to see which check-in caused the problem. Go back to > revision 2403, 2402, and so forth. > > Stupid question but how do I use svn to retrieve older revisions ? > Also, please remove the build directory and the installed numpy to be > sure changes to the C-API are not causing problems. > > I have removed the build directory. Maybe I should also remove the numpy directory in /usr/lib64/python2.4/site-packages/ > The trace-back is not really giving a clue. Perhaps if you ran under > valgrind better help could be given. Are there any warnings emmitted > during compilation? > > > No. > Could somebody else with a 64-bit system verify? > > > -Travis > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From robert.kern at gmail.com Tue Apr 25 03:02:11 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 25 Apr 2006 02:02:11 -0500 Subject: [SciPy-dev] interest in Time series functionality? In-Reply-To: References: Message-ID: <444DC973.5010608@gmail.com> Matt Knox wrote: > Currently the FAME database functionality is sort of mangled in with the time > series module, but that can definitely be separated better. Would the FAME api > be suitable for the sandbox as well? or just the time series capabilities? I > suspect database API's aren't really something people would look for on scipy, > but who knows. At any rate, I am fairly certain there is no existing python API > for FAME freely available so some people might be interested in that. FAME is a proprietary package, right? The website (fame.com) looks expensive. I would really like to see some good tools for handling time series (specifically calendrical time series) in scipy. I hope that as much functionality as possible can be decoupled from FAME. I don't think that wrappers to an expensive database package really belong in scipy although they might be just right for a separate projects.scipy.org project. Of course, the API and conventions that you've established by how the non-FAME bits interact with the FAME bits will probably serve as a useful standard for talking to other databases with time series information. 
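Concretely, even a seam as thin as this might do (a sketch only; every class and method name here is made up for illustration, not an existing API):

import numpy

class TimeSeriesSource:
    """Abstract reader: anything that can hand back (dates, values) for a named series."""
    def read(self, name, start, end):
        raise NotImplementedError

class FameSource(TimeSeriesSource):
    def __init__(self, db):
        # the FAME connection object stays hidden behind this seam
        self.db = db
    def read(self, name, start, end):
        # db.fetch is a stand-in for whatever the FAME wrapper actually exposes
        dates, values = self.db.fetch(name, start, end)
        return dates, numpy.asarray(values)

The time series class itself would then only ever see plain arrays and dates, and a FAME-free backend is just another subclass.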
-- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From oliphant.travis at ieee.org Tue Apr 25 03:15:10 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 25 Apr 2006 01:15:10 -0600 Subject: [SciPy-dev] Check reading the nested fields of a nested array In-Reply-To: <444DC706.1030406@iam.uni-stuttgart.de> References: <444DC28C.6050709@iam.uni-stuttgart.de> <444DC3E3.2060604@ieee.org> <444DC706.1030406@iam.uni-stuttgart.de> Message-ID: <444DCC7E.3050006@ieee.org> Nils Wagner wrote: > Travis Oliphant wrote: > >> Nils Wagner wrote: >> >> >>> 0.9.7.2404 on a 64 bit system >>> >>> numpy.test(1,10) results in >>> >>> Check reading the nested fields of a nested array (1st level)*** glibc >>> detected *** malloc(): memory corruption (fast): 0x00000000007dee40 *** >>> >>> >>> >> Can you check to see which check-in caused the problem. Go back to >> revision 2403, 2402, and so forth. >> >> >> > Stupid question but how do I use svn to retrieve older revisions ? > svn update -r 2403 svn update -r 2402 etc.. -Travis From nwagner at iam.uni-stuttgart.de Tue Apr 25 03:23:04 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 25 Apr 2006 09:23:04 +0200 Subject: [SciPy-dev] Check reading the nested fields of a nested array In-Reply-To: <444DCC7E.3050006@ieee.org> References: <444DC28C.6050709@iam.uni-stuttgart.de> <444DC3E3.2060604@ieee.org> <444DC706.1030406@iam.uni-stuttgart.de> <444DCC7E.3050006@ieee.org> Message-ID: <444DCE58.3010805@iam.uni-stuttgart.de> Travis Oliphant wrote: > Nils Wagner wrote: > >> Travis Oliphant wrote: >> >> >>> Nils Wagner wrote: >>> >>> >>> >>>> 0.9.7.2404 on a 64 bit system >>>> >>>> numpy.test(1,10) results in >>>> >>>> Check reading the nested fields of a nested array (1st level)*** glibc >>>> detected *** malloc(): memory corruption (fast): 0x00000000007dee40 *** >>>> >>>> >>>> >>>> >>> Can you check to see which check-in caused the problem. Go back to >>> revision 2403, 2402, and so forth. >>> >>> >>> >>> >> Stupid question but how do I use svn to retrieve older revisions ? >> >> > > svn update -r 2403 > svn update -r 2402 > > etc.. > > -Travis > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > Ran 363 tests in 0.306s OK >>> numpy.__version__ '0.9.7.2402' So the problem arises in 2403. Nils From nwagner at iam.uni-stuttgart.de Tue Apr 25 03:35:02 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 25 Apr 2006 09:35:02 +0200 Subject: [SciPy-dev] Check reading the nested fields of a nested array In-Reply-To: <444DC3E3.2060604@ieee.org> References: <444DC28C.6050709@iam.uni-stuttgart.de> <444DC3E3.2060604@ieee.org> Message-ID: <444DD126.1070403@iam.uni-stuttgart.de> Travis Oliphant wrote: > Nils Wagner wrote: > >> 0.9.7.2404 on a 64 bit system >> >> numpy.test(1,10) results in >> >> Check reading the nested fields of a nested array (1st level)*** glibc >> detected *** malloc(): memory corruption (fast): 0x00000000007dee40 *** >> >> > > Can you check to see which check-in caused the problem. Go back to > revision 2403, 2402, and so forth. > > Also, please remove the build directory and the installed numpy to be > sure changes to the C-API are not causing problems. > > The trace-back is not really giving a clue. 
Perhaps if you ran under
> valgrind better help could be given. Are there any warnings emitted
> during compilation?
>
> Could somebody else with a 64-bit system verify?
>
> -Travis

And how do I use valgrind here ?

Nils

From nwagner at iam.uni-stuttgart.de Tue Apr 25 03:44:53 2006
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 25 Apr 2006 09:44:53 +0200
Subject: [SciPy-dev] Check reading the nested fields of a nested array
In-Reply-To: <444DD126.1070403@iam.uni-stuttgart.de>
References: <444DC28C.6050709@iam.uni-stuttgart.de> <444DC3E3.2060604@ieee.org> <444DD126.1070403@iam.uni-stuttgart.de>
Message-ID: <444DD375.10206@iam.uni-stuttgart.de>

Nils Wagner wrote:
> Travis Oliphant wrote:
>> Nils Wagner wrote:
>>> 0.9.7.2404 on a 64 bit system
>>> numpy.test(1,10) results in
>>> Check reading the nested fields of a nested array (1st level)*** glibc
>>> detected *** malloc(): memory corruption (fast): 0x00000000007dee40 ***
>> Can you check to see which check-in caused the problem. Go back to
>> revision 2403, 2402, and so forth.
>> [...]
>> Could somebody else with a 64-bit system verify?

BTW, the problem is not restricted to 64-bit systems. I just installed the latest svn version of numpy on a 32-bit system and got

Check reading the nested fields of a nested array (1st level)*** glibc detected *** free(): invalid next size (fast): 0x081d5870 ***
Abort

Nils

From oliphant.travis at ieee.org Tue Apr 25 04:00:28 2006
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Tue, 25 Apr 2006 02:00:28 -0600
Subject: [SciPy-dev] Check reading the nested fields of a nested array
In-Reply-To: <444DD375.10206@iam.uni-stuttgart.de>
References: <444DC28C.6050709@iam.uni-stuttgart.de> <444DC3E3.2060604@ieee.org> <444DD126.1070403@iam.uni-stuttgart.de> <444DD375.10206@iam.uni-stuttgart.de>
Message-ID: <444DD71C.6000502@ieee.org>

Nils Wagner wrote:
> BTW, the problem is not restricted to 64-bit systems. I just installed
> the latest svn version of numpy on a 32-bit system and got
> Check reading the nested fields of a nested array (1st level)*** glibc
> detected *** free(): invalid next size (fast): 0x081d5870 ***
> Abort

Please remove your installation of numpy and your build directory and try again. I do not get these errors.

The copyswap function changed arguments and appears to be at the root of these problems.

-Travis

From nwagner at iam.uni-stuttgart.de Tue Apr 25 04:07:37 2006
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 25 Apr 2006 10:07:37 +0200
Subject: [SciPy-dev] Check reading the nested fields of a nested array
In-Reply-To: <444DD71C.6000502@ieee.org>
References: <444DC28C.6050709@iam.uni-stuttgart.de> <444DC3E3.2060604@ieee.org> <444DD126.1070403@iam.uni-stuttgart.de> <444DD375.10206@iam.uni-stuttgart.de> <444DD71C.6000502@ieee.org>
Message-ID: <444DD8C9.2070605@iam.uni-stuttgart.de>

Travis Oliphant wrote:
> Please remove your installation of numpy and your build directory and
> try again. I do not get these errors.

Hi Travis,

I installed numpy from scratch but the problem persists. Any idea ?

Nils

From oliphant.travis at ieee.org Tue Apr 25 04:14:11 2006
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Tue, 25 Apr 2006 02:14:11 -0600
Subject: [SciPy-dev] Check reading the nested fields of a nested array
In-Reply-To: <444DD8C9.2070605@iam.uni-stuttgart.de>
References: <444DC28C.6050709@iam.uni-stuttgart.de> <444DC3E3.2060604@ieee.org> <444DD126.1070403@iam.uni-stuttgart.de> <444DD375.10206@iam.uni-stuttgart.de> <444DD71C.6000502@ieee.org> <444DD8C9.2070605@iam.uni-stuttgart.de>
Message-ID: <444DDA53.5090104@ieee.org>

Nils Wagner wrote:
> I installed numpy from scratch but the problem persists. Any idea ?

Did you remove your installed numpy and the build directory before building?

If so, please isolate the test that is failing into a short Python snippet that exhibits the error. I've just double-checked all uses of copyswap to make sure they have all been changed. All tests are passing for me as well (also all SciPy tests pass).

So, I can't reproduce the problem and it smells a lot like an installation problem. Are you picking up include files from some other location?

Can somebody else verify to see if current SVN of numpy builds and passes tests?

-Travis

From nwagner at iam.uni-stuttgart.de Tue Apr 25 04:21:59 2006
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 25 Apr 2006 10:21:59 +0200
Subject: [SciPy-dev] Check reading the nested fields of a nested array
In-Reply-To: <444DDA53.5090104@ieee.org>
References: <444DC28C.6050709@iam.uni-stuttgart.de> <444DC3E3.2060604@ieee.org> <444DD126.1070403@iam.uni-stuttgart.de> <444DD375.10206@iam.uni-stuttgart.de> <444DD71C.6000502@ieee.org> <444DD8C9.2070605@iam.uni-stuttgart.de> <444DDA53.5090104@ieee.org>
Message-ID: <444DDC27.2020007@iam.uni-stuttgart.de>

Travis Oliphant wrote:
> Did you remove your installed numpy and the build directory before building?

Yes.

> Are you picking up include files from some other location?

How can I check that ?

I found the test in test_numerictypes.py

    def check_nested1_acessors(self):
        """Check reading the nested fields of a nested array (1st level)"""
        h = array(self._buffer, dtype=self._descr)
        if not self.multiple_rows:
            assert_equal(h['Info']['value'],
                         array(self._buffer[1][0], dtype='c16'))
            assert_equal(h['Info']['y2'],
                         array(self._buffer[1][1], dtype='f8'))
            assert_equal(h['info']['Name'],
                         array(self._buffer[3][0], dtype='U2'))
            assert_equal(h['info']['Value'],
                         array(self._buffer[3][1], dtype='c16'))
        else:
            assert_equal(h['Info']['value'],
                         array([self._buffer[0][1][0],
                                self._buffer[1][1][0]], dtype='c16'))
            assert_equal(h['Info']['y2'],
                         array([self._buffer[0][1][1],
                                self._buffer[1][1][1]], dtype='f8'))
            assert_equal(h['info']['Name'],
                         array([self._buffer[0][3][0],
                                self._buffer[1][3][0]], dtype='U2'))
            assert_equal(h['info']['Value'],
                         array([self._buffer[0][3][1],
                                self._buffer[1][3][1]], dtype='c16'))

Nils

From oliphant.travis at ieee.org Tue Apr 25 04:48:39 2006
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Tue, 25 Apr 2006 02:48:39 -0600
Subject: [SciPy-dev] Check reading the nested fields of a nested array
In-Reply-To: <444DDC27.2020007@iam.uni-stuttgart.de>
References: <444DC28C.6050709@iam.uni-stuttgart.de> <444DC3E3.2060604@ieee.org> <444DD126.1070403@iam.uni-stuttgart.de> <444DD375.10206@iam.uni-stuttgart.de> <444DD71C.6000502@ieee.org> <444DD8C9.2070605@iam.uni-stuttgart.de> <444DDA53.5090104@ieee.org> <444DDC27.2020007@iam.uni-stuttgart.de>
Message-ID: <444DE267.9030903@ieee.org>

Nils Wagner wrote:
> I found the test in test_numerictypes.py
> def check_nested1_acessors(self):
>     [...]

Please copy this code into a short Python file that, when run as

    python myfile.py

exhibits the problem. You need to adapt the support code to do it.

-Travis
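(A minimal self-contained script in that spirit might look like the sketch below. The dtype and buffer are simplified stand-ins for the test's support code, not the myfile.py attached in the next message; any nested record dtype with a unicode field should exercise the same code path.)

    from numpy import array

    # simplified stand-ins for self._descr / self._buffer in the test
    descr = [('x', 'i4'),
             ('Info', [('value', 'c16'), ('y2', 'f8')]),
             ('info', [('Name', 'U2'), ('Value', 'c16')])]
    buf = [(1, (1+1j, 0.5), (u'NN', 2+2j)),
           (2, (3+3j, 1.5), (u'OO', 4+4j))]

    h = array(buf, dtype=descr)
    print h['Info']['value']   # plain nested-field reads
    print h['info']['Name']    # the unicode field is the one that crashed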
From nwagner at iam.uni-stuttgart.de Tue Apr 25 05:02:37 2006
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 25 Apr 2006 11:02:37 +0200
Subject: [SciPy-dev] Check reading the nested fields of a nested array
In-Reply-To: <444DE267.9030903@ieee.org>
References: <444DC28C.6050709@iam.uni-stuttgart.de> <444DC3E3.2060604@ieee.org> <444DD126.1070403@iam.uni-stuttgart.de> <444DD375.10206@iam.uni-stuttgart.de> <444DD71C.6000502@ieee.org> <444DD8C9.2070605@iam.uni-stuttgart.de> <444DDA53.5090104@ieee.org> <444DDC27.2020007@iam.uni-stuttgart.de> <444DE267.9030903@ieee.org>
Message-ID: <444DE5AD.7010000@iam.uni-stuttgart.de>

Travis Oliphant wrote:
> Please copy this code into a short Python file that, when run as
> python myfile.py
> exhibits the problem. You need to adapt the support code to do it.

Hi Travis,

I started to isolate the code. But to be honest, I don't know how to proceed. Please find attached my first try of myfile.py.

Nils

-------------- next part --------------
A non-text attachment was scrubbed...
Name: myfile.py
Type: text/x-python
Size: 3664 bytes
Desc: not available
URL:

From vel.accel at gmail.com Tue Apr 25 05:06:39 2006
From: vel.accel at gmail.com (dHering)
Date: Tue, 25 Apr 2006 05:06:39 -0400
Subject: [SciPy-dev] interest in Time series functionality?
In-Reply-To: <444DC973.5010608@gmail.com>
References: <444DC973.5010608@gmail.com>
Message-ID: <1e52e0880604250206n11da1f37y479cb4342f7e9315@mail.gmail.com>

As far as Databases go, I'd love to see more people jumping on the HDF5 bandwagon. http://hdf.ncsa.uiuc.edu/HDF5/

They are about to release version 1.8 (1.8alpha already released on April 20). There's some very nice features (A new higher level interface, packet tables for socket streams, dimension scales, imaging etc). Not to mention It's totally portable and PyTables provides an excellent Python/Numpy/Numarray/Numeric API to HDF5, so database usage between Python and other HDF5 APIs (C, FORTRAN, C++, Java) is totally seamless. PyTables has a netCDF conversion facility and also very nice compression features too. See: http://www.pytables.org/moin/PyTables

By the way Matt, I'm a financial engineer myself. If I were to find the time, I'd also like to be involved in such a project.

Dieter

On 4/25/06, Robert Kern wrote:
> Matt Knox wrote:
> > Currently the FAME database functionality is sort of mangled in with the time
> > series module, but that can definitely be separated better. Would the FAME api
> > be suitable for the sandbox as well? or just the time series capabilities? I
> > suspect database API's aren't really something people would look for on scipy,
> > but who knows. At any rate, I am fairly certain there is no existing python API
> > for FAME freely available so some people might be interested in that.
>
> FAME is a proprietary package, right? The website (fame.com) looks expensive. I
> would really like to see some good tools for handling time series (specifically
> calendrical time series) in scipy. I hope that as much functionality as possible
> can be decoupled from FAME. I don't think that wrappers to an expensive database
> package really belong in scipy although they might be just right for a separate
> projects.scipy.org project.
>
> Of course, the API and conventions that you've established by how the non-FAME
> bits interact with the FAME bits will probably serve as a useful standard for
> talking to other databases with time series information.
>
> -- Robert Kern
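(As a concrete illustration of the PyTables route Dieter describes -- a sketch only, written against the PyTables 1.x API of the day; the file name and column layout are invented for the example, not taken from any post:)

    import tables

    class Tick(tables.IsDescription):        # column layout for one row
        date  = tables.Time32Col()
        value = tables.Float64Col()

    h5 = tables.openFile('series.h5', mode='w')
    tbl = h5.createTable('/', 'series', Tick, title='example time series')
    row = tbl.row
    for i in range(10):                      # append a few rows
        row['date']  = i
        row['value'] = 0.5 * i
        row.append()
    tbl.flush()
    print tbl.read(0, 3)                     # rows come back as records
    h5.close()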
From schofield at ftw.at Tue Apr 25 06:08:35 2006
From: schofield at ftw.at (Ed Schofield)
Date: Tue, 25 Apr 2006 12:08:35 +0200
Subject: [SciPy-dev] Percentile functions in scipy.stats
Message-ID: <444DF523.7060300@ftw.at>

Hi all,

A colleague of mine, Tobias Witek, has been re-writing the percentile-related functions in stats.py (currently broken). I'm helping him review the code, and it should be ready to commit soon. So, if you're working on stats.py, skip over these functions for now and we can avoid some duplication of effort :)

-- Ed

From nwagner at iam.uni-stuttgart.de Tue Apr 25 06:47:07 2006
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 25 Apr 2006 12:47:07 +0200
Subject: [SciPy-dev] Program received signal SIGABRT
Message-ID: <444DFE2B.6000002@iam.uni-stuttgart.de>

>>> numpy.__version__
'0.9.7.2406'

Now numpy.test(1,10) results in

snip

Check reading the nested fields of a nested array (1st level)*** glibc detected *** malloc(): memory corruption (fast): 0x00000000007aec90 ***

Program received signal SIGABRT, Aborted.
[Switching to Thread 16384 (LWP 9302)] 0x00002aaaab5edf39 in kill () from /lib64/libc.so.6 (gdb) bt #0 0x00002aaaab5edf39 in kill () from /lib64/libc.so.6 #1 0x00002aaaaadedb21 in pthread_kill () from /lib64/libpthread.so.0 #2 0x00002aaaaadedb52 in raise () from /lib64/libpthread.so.0 #3 0x00002aaaab5edc32 in raise () from /lib64/libc.so.6 #4 0x00002aaaab5ef013 in abort () from /lib64/libc.so.6 #5 0x00002aaaab620dfa in __libc_message () from /lib64/libc.so.6 #6 0x00002aaaab625573 in malloc_printerr () from /lib64/libc.so.6 #7 0x00002aaaab62757c in _int_malloc () from /lib64/libc.so.6 #8 0x00002aaaab628da2 in malloc () from /lib64/libc.so.6 #9 0x00002aaaabd1ded9 in PyArray_NewFromDescr (subtype=0x2aaaabe5b520, descr=, nd=0, dims=0x0, strides=0x0, data=0x0, flags=0, obj=0x0) at arrayobject.c:4413 #10 0x00002aaaabd21dc9 in Array_FromScalar (op=0x2aaaadda3210, typecode=0x2aaaaddc00f0) at arrayobject.c:5979 #11 0x00002aaaabd282ea in PyArray_FromAny (op=0x2aaaadda3210, newtype=0x2aaaaddc00f0, min_depth=0, max_depth=0, flags=, context=) at arrayobject.c:6025 #12 0x00002aaaabd3196e in _array_fromobject (ignored=, args=, kws=) at multiarraymodule.c:4580 #13 0x00002aaaaac1c17e in PyCFunction_Call () from /usr/lib64/libpython2.4.so.1.0 #14 0x00002aaaaabf7f20 in PyObject_Call () from /usr/lib64/libpython2.4.so.1.0 #15 0x00002aaaaac4ebaa in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0 #16 0x00002aaaaac510af in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0 #17 0x00002aaaaac51705 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.4.so.1.0 #18 0x00002aaaaac0d485 in function_call () from /usr/lib64/libpython2.4.so.1.0 #19 0x00002aaaaabf7f20 in PyObject_Call () from /usr/lib64/libpython2.4.so.1.0 #20 0x00002aaaaac0073d in instancemethod_call () from /usr/lib64/libpython2.4.so.1.0 #21 0x00002aaaaabf7f20 in PyObject_Call () from /usr/lib64/libpython2.4.so.1.0 #22 0x00002aaaaac4ebaa in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0 #23 0x00002aaaaac51705 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.4.so.1.0 #24 0x00002aaaaac0d485 in function_call () from /usr/lib64/libpython2.4.so.1.0 #25 0x00002aaaaabf7f20 in PyObject_Call () from /usr/lib64/libpython2.4.so.1.0 #26 0x00002aaaaac0073d in instancemethod_call () from /usr/lib64/libpython2.4.so.1.0 #27 0x00002aaaaabf7f20 in PyObject_Call () from /usr/lib64/libpython2.4.so.1.0 #28 0x00002aaaaac2df14 in slot_tp_call () from /usr/lib64/libpython2.4.so.1.0 #29 0x00002aaaaabf7f20 in PyObject_Call () from /usr/lib64/libpython2.4.so.1.0 #30 0x00002aaaaac4ebaa in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0 #31 0x00002aaaaac51705 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.4.so.1.0 #32 0x00002aaaaac0d485 in function_call () from /usr/lib64/libpython2.4.so.1.0 #33 0x00002aaaaabf7f20 in PyObject_Call () from /usr/lib64/libpython2.4.so.1.0 #34 0x00002aaaaac0073d in instancemethod_call () from /usr/lib64/libpython2.4.so.1.0 #35 0x00002aaaaabf7f20 in PyObject_Call () from /usr/lib64/libpython2.4.so.1.0 #36 0x00002aaaaac2df14 in slot_tp_call () from /usr/lib64/libpython2.4.so.1.0 #37 0x00002aaaaabf7f20 in PyObject_Call () from /usr/lib64/libpython2.4.so.1.0 #38 0x00002aaaaac4ebaa in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0 #39 0x00002aaaaac510af in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0 #40 0x00002aaaaac51705 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.4.so.1.0 #41 0x00002aaaaac4f789 in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0 #42 0x00002aaaaac51705 in PyEval_EvalCodeEx 
() from /usr/lib64/libpython2.4.so.1.0 #43 0x00002aaaaac4f789 in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0 #44 0x00002aaaaac51705 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.4.so.1.0 #45 0x00002aaaaac51992 in PyEval_EvalCode () from /usr/lib64/libpython2.4.so.1.0 #46 0x00002aaaaac6aeb9 in run_node () from /usr/lib64/libpython2.4.so.1.0 #47 0x00002aaaaac6c67f in PyRun_InteractiveOneFlags () from /usr/lib64/libpython2.4.so.1.0 ---Type to continue, or q to quit--- #48 0x00002aaaaac6c7ee in PyRun_InteractiveLoopFlags () from /usr/lib64/libpython2.4.so.1.0 #49 0x00002aaaaac6c8ec in PyRun_AnyFileExFlags () from /usr/lib64/libpython2.4.so.1.0 #50 0x00002aaaaac72023 in Py_Main () from /usr/lib64/libpython2.4.so.1.0 #51 0x00000000004008d9 in main (argc=, argv=) at ccpython.cc:10 From stefan at sun.ac.za Tue Apr 25 07:49:14 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Tue, 25 Apr 2006 13:49:14 +0200 Subject: [SciPy-dev] Check reading the nested fields of a nested array In-Reply-To: <444DE5AD.7010000@iam.uni-stuttgart.de> References: <444DC28C.6050709@iam.uni-stuttgart.de> <444DC3E3.2060604@ieee.org> <444DD126.1070403@iam.uni-stuttgart.de> <444DD375.10206@iam.uni-stuttgart.de> <444DD71C.6000502@ieee.org> <444DD8C9.2070605@iam.uni-stuttgart.de> <444DDA53.5090104@ieee.org> <444DDC27.2020007@iam.uni-stuttgart.de> <444DE267.9030903@ieee.org> <444DE5AD.7010000@iam.uni-stuttgart.de> Message-ID: <20060425114913.GD17604@alpha> I've installed numpy from svn on two machines this morning, and on both installations I see the glibc segfault. Nils' python snippet does not reproduce the problem, however. I attach the last output of valgrind -v --error-limit=no --leak-check=full python -c 'import numpy; numpy.test()' If I do export GLIBCXX_FORCE_NEW=1 the tests crash simply with "Segmentation fault" instead of with *** glibc detected *** free(): invalid next size (fast): 0x0830e828 *** Aborted I'll try to find a minimal snippet that reproduces the problem. Regards St?fan On Tue, Apr 25, 2006 at 11:02:37AM +0200, Nils Wagner wrote: > Travis Oliphant wrote: > >Nils Wagner wrote: > > > >>Travis Oliphant wrote: > >> > >> > >>>Nils Wagner wrote: > >>> > >>> > >>> > >>>>Travis Oliphant wrote: > >>>> > >>>> > >>>> > >>>> > >>>>>Nils Wagner wrote: > >>>>> > >>>>> > >>>>> > >>>>> > >>>>> > >>>>>>Nils Wagner wrote: etc. -------------- next part -------------- valgrind: m_mallocfree.c:186 (get_bszB_as_is): Assertion 'bszB_lo == bszB_hi' failed. valgrind: Heap block lo/hi size mismatch: lo = 4096, hi = 0. Probably caused by overrunning/underrunning a heap block's bounds. 
==20067== at 0xA010AB3: report_and_quit (m_libcassert.c:122) ==20067== by 0xA010CDC: vgPlain_assert_fail (m_libcassert.c:185) ==20067== by 0xA01A5F8: mkFreeBlock (m_mallocfree.c:183) ==20067== by 0xA01B761: vgPlain_arena_free (m_mallocfree.c:1115) ==20067== by 0xA034513: vgPlain_cli_free (replacemalloc_core.c:108) ==20067== by 0xA0015F0: die_and_free_mem (mac_malloc_wrappers.c:120) ==20067== by 0xA03629C: do_client_request (scheduler.c:987) ==20067== by 0xA035C0A: vgPlain_scheduler (scheduler.c:721) ==20067== by 0xA04A133: thread_wrapper (syswrap-linux.c:86) ==20067== by 0xA04A260: run_a_thread_NORETURN (syswrap-linux.c:119) sched status: running_tid=1 Thread 1: status = VgTs_Runnable ==20067== at 0x401BFCF: free (vg_replace_malloc.c:235) ==20067== by 0x8098973: (within /usr/bin/python2.4) ==20067== by 0x80B20C0: PyEval_EvalFrame (in /usr/bin/python2.4) ==20067== by 0x80B6FDA: PyEval_EvalFrame (in /usr/bin/python2.4) ==20067== by 0x80B76BE: PyEval_EvalCodeEx (in /usr/bin/python2.4) ==20067== by 0x80FBF2C: (within /usr/bin/python2.4) ==20067== by 0x805943B: PyObject_Call (in /usr/bin/python2.4) ==20067== by 0x80B4A59: PyEval_EvalFrame (in /usr/bin/python2.4) ==20067== by 0x80B76BE: PyEval_EvalCodeEx (in /usr/bin/python2.4) ==20067== by 0x80FBF2C: (within /usr/bin/python2.4) ==20067== by 0x805943B: PyObject_Call (in /usr/bin/python2.4) ==20067== by 0x805EFD4: (within /usr/bin/python2.4) ==20067== by 0x805943B: PyObject_Call (in /usr/bin/python2.4) ==20067== by 0x80B457A: PyEval_EvalFrame (in /usr/bin/python2.4) ==20067== by 0x80B76BE: PyEval_EvalCodeEx (in /usr/bin/python2.4) ==20067== by 0x80FBF2C: (within /usr/bin/python2.4) ==20067== by 0x805943B: PyObject_Call (in /usr/bin/python2.4) ==20067== by 0x805EFD4: (within /usr/bin/python2.4) ==20067== by 0x805943B: PyObject_Call (in /usr/bin/python2.4) ==20067== by 0x808B076: (within /usr/bin/python2.4) ==20067== by 0x805943B: PyObject_Call (in /usr/bin/python2.4) ==20067== by 0x80B457A: PyEval_EvalFrame (in /usr/bin/python2.4) ==20067== by 0x80B76BE: PyEval_EvalCodeEx (in /usr/bin/python2.4) ==20067== by 0x80FBF2C: (within /usr/bin/python2.4) ==20067== by 0x805943B: PyObject_Call (in /usr/bin/python2.4) ==20067== by 0x80B4A59: PyEval_EvalFrame (in /usr/bin/python2.4) ==20067== by 0x80B76BE: PyEval_EvalCodeEx (in /usr/bin/python2.4) ==20067== by 0x80FBF2C: (within /usr/bin/python2.4) ==20067== by 0x805943B: PyObject_Call (in /usr/bin/python2.4) ==20067== by 0x805EFD4: (within /usr/bin/python2.4) ==20067== by 0x805943B: PyObject_Call (in /usr/bin/python2.4) ==20067== by 0x808B076: (within /usr/bin/python2.4) ==20067== by 0x805943B: PyObject_Call (in /usr/bin/python2.4) ==20067== by 0x80B457A: PyEval_EvalFrame (in /usr/bin/python2.4) ==20067== by 0x80B6FDA: PyEval_EvalFrame (in /usr/bin/python2.4) ==20067== by 0x80B76BE: PyEval_EvalCodeEx (in /usr/bin/python2.4) ==20067== by 0x80B6F32: PyEval_EvalFrame (in /usr/bin/python2.4) ==20067== by 0x80B76BE: PyEval_EvalCodeEx (in /usr/bin/python2.4) ==20067== by 0x80B6F32: PyEval_EvalFrame (in /usr/bin/python2.4) ==20067== by 0x80B76BE: PyEval_EvalCodeEx (in /usr/bin/python2.4) ==20067== by 0x80B7904: PyEval_EvalCode (in /usr/bin/python2.4) ==20067== by 0x80D9A41: PyRun_SimpleStringFlags (in /usr/bin/python2.4) ==20067== by 0x8055702: Py_Main (in /usr/bin/python2.4) ==20067== by 0x4084EA1: __libc_start_main (in /lib/tls/i686/cmov/libc-2.3.6.so) From stefan at sun.ac.za Tue Apr 25 10:13:46 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Tue, 25 Apr 2006 16:13:46 +0200 Subject: [SciPy-dev] Check 
reading the nested fields of a nested array
In-Reply-To: <20060425114913.GD17604@alpha>
References: <444DC3E3.2060604@ieee.org> <444DD126.1070403@iam.uni-stuttgart.de> <444DD375.10206@iam.uni-stuttgart.de> <444DD71C.6000502@ieee.org> <444DD8C9.2070605@iam.uni-stuttgart.de> <444DDA53.5090104@ieee.org> <444DDC27.2020007@iam.uni-stuttgart.de> <444DE267.9030903@ieee.org> <444DE5AD.7010000@iam.uni-stuttgart.de> <20060425114913.GD17604@alpha>
Message-ID: <20060425141346.GA22846@alpha>

Hi,

My message below wasn't very helpful, mainly due to my inability to use valgrind properly. With some help from Albert, I think I narrowed down the problem. The following writes at an invalid memory location:

    from numpy import *

    ulen = 1
    ucs_value = u'\U0010FFFF'

    ua = array([[[ucs_value*ulen]*2]*3]*4, dtype='U%s' % ulen)
    ua2 = ua.newbyteorder()

which generates this warning under Valgrind:

==22943== Invalid write of size 1
==22943==    at 0x4902AAF: UNICODE_copyswap (arraytypes.inc:10223)
==22943==    by 0x48FB166: PyArray_Scalar (arrayobject.c:1093)
==22943==    by 0x49431B7: array_subscript_nice (arrayobject.c:2446)
==22943==    by 0x80B3DB4: PyEval_EvalFrame (in /usr/bin/python2.4)
==22943==    by 0x80B6FDA: PyEval_EvalFrame (in /usr/bin/python2.4)
==22943==    by 0x80B76BE: PyEval_EvalCodeEx (in /usr/bin/python2.4)
==22943==    by 0x80FBF2C: (within /usr/bin/python2.4)
==22943==  Address 0x454F771 is 1 bytes after a block of size 8 alloc'd
==22943==    at 0x401B422: malloc (vg_replace_malloc.c:149)
==22943==    by 0x48FB11D: PyArray_Scalar (arrayobject.c:1023)
==22943==    by 0x49431B7: array_subscript_nice (arrayobject.c:2446)
==22943==    by 0x80B3DB4: PyEval_EvalFrame (in /usr/bin/python2.4)
==22943==    by 0x80B6FDA: PyEval_EvalFrame (in /usr/bin/python2.4)
==22943==    by 0x80B76BE: PyEval_EvalCodeEx (in /usr/bin/python2.4)
==22943==    by 0x80FBF2C: (within /usr/bin/python2.4)

I filed a ticket at http://projects.scipy.org/scipy/numpy/ticket/79

Regards
Stéfan

From dkaufman at imago.com Tue Apr 25 10:14:21 2006
From: dkaufman at imago.com (Duane Kaufman)
Date: Tue, 25 Apr 2006 09:14:21 -0500
Subject: [SciPy-dev] Scipy ga module on Windows
Message-ID: <949BD319240E484B984463501CD7D17C257CB3@IMAGOEXCH1.corp.imago.com>

Hi,

> Duane Kaufman wrote:
> > I am looking to utilize a genetic algorithm for a project I am
> > working on.
> > I noticed Scipy had this advertised as included, but cannot find it (I
> > downloaded the latest for Python 2.4)
> > Is this module not available?
>
> It has been moved to the sandbox because it has not been
> completely ported to use numpy, yet. You will have to build
> scipy from source and uncomment the appropriate line in
> Lib/sandbox/setup.py to have it install as part of the scipy
> package. However, it is pure Python, so you can just copy it
> from the source distribution to the appropriate location, too.
Um, can I grab the 'ga' folder contents from the sandbox? What do I need to use these then? When I try this I get:

>>> import scipy.ga
Overwriting info= from scipy.misc.helpmod (was from scipy.utils.helpmod)
Overwriting factorial= from scipy.misc.common (was from scipy.utils.common)
Overwriting factorial2= from scipy.misc.common (was from scipy.utils.common)
Overwriting factorialk= from scipy.misc.common (was from scipy.utils.common)
Overwriting comb= from scipy.misc.common (was from scipy.utils.common)
Overwriting who= from scipy.misc.common (was from scipy.utils.common)
Overwriting lena= from scipy.misc.common (was from scipy.utils.common)
Overwriting central_diff_weights= from scipy.misc.common (was from scipy.utils.common)
Overwriting derivative= from scipy.misc.common (was from scipy.utils.common)
Overwriting pade= from scipy.misc.common (was from scipy.utils.common)
C:\Python24\lib\whrandom.py:38: DeprecationWarning: the whrandom module is deprecated; please use the random module DeprecationWarning)
File "", line 1, in ?
File "C:\Python24\lib\site-packages\scipy\ga\__init__.py", line 5, in ?
    import gene
File "C:\Python24\lib\site-packages\scipy\ga\gene.py", line 302, in ?
    from scipy_base.fastumath import *
exceptions.ImportError : No module named scipy_base.fastumath
>>>

Any ideas?

Thanks,
Duane

> We should probably strike that item from the description
> until we get it in the main package again.

From Bernhard.Hoefle at uibk.ac.at Tue Apr 25 10:23:00 2006
From: Bernhard.Hoefle at uibk.ac.at (Bernhard Reimar Hoefle)
Date: Tue, 25 Apr 2006 16:23:00 +0200
Subject: [SciPy-dev] Error compiling delaunay
Message-ID: <1145974980.444e30c4e46be@web-mail1.uibk.ac.at>

Hi! I updated my subversion repository today. I get the following error when I try to compile scipy with the sandbox/delaunay module. I have gcc version 4.1.0, Python 2.4.1 and Numpy 0.9.7.

creating build/temp.linux-i686-2.4/Lib/sandbox
creating build/temp.linux-i686-2.4/Lib/sandbox/delaunay
compile options: '-ILib/sandbox/delaunay -I/usr/lib/python2.4/site-packages/numpy/core/include -I/usr/include/python2.4 -c'
c++: Lib/sandbox/delaunay/_delaunay.cpp
Lib/sandbox/delaunay/VoronoiDiagramGenerator.h:208: Error: extra qualification 'VoronoiDiagramGenerator::' on member 'ELgethash'
Lib/sandbox/delaunay/VoronoiDiagramGenerator.h:231: Error: extra qualification 'VoronoiDiagramGenerator::' on member 'openpl'
Lib/sandbox/delaunay/VoronoiDiagramGenerator.h:232: Error: extra qualification 'VoronoiDiagramGenerator::' on member 'line'
Lib/sandbox/delaunay/VoronoiDiagramGenerator.h:233: Error: extra qualification 'VoronoiDiagramGenerator::' on member 'circle'
Lib/sandbox/delaunay/VoronoiDiagramGenerator.h:234: Error: extra qualification 'VoronoiDiagramGenerator::' on member 'range'
error: Command "c++ -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -m32 -march=i386 -mtune=pentium4 -fasynchronous-unwind-tables -D_GNU_SOURCE -fPIC -fPIC -ILib/sandbox/delaunay -I/usr/lib/python2.4/site-packages/numpy/core/include -I/usr/include/python2.4 -c Lib/sandbox/delaunay/_delaunay.cpp -o build/temp.linux-i686-2.4/Lib/sandbox/delaunay/_delaunay.o" failed with exit status 1

From pearu at scipy.org Tue Apr 25 11:02:54 2006
From: pearu at scipy.org (Pearu Peterson)
Date: Tue, 25 Apr 2006 10:02:54 -0500 (CDT)
Subject: [SciPy-dev] Check reading the nested fields of a nested array
In-Reply-To: <444DDA53.5090104@ieee.org>
References: <444DC28C.6050709@iam.uni-stuttgart.de> <444DC3E3.2060604@ieee.org> <444DD375.10206@iam.uni-stuttgart.de> <444DD71C.6000502@ieee.org> <444DD8C9.2070605@iam.uni-stuttgart.de> <444DDA53.5090104@ieee.org>
Message-ID:

Hi Travis,

Here's a small code that fails with current svn:

>>> from numpy import array
>>> Ndescr = [('Name','U8')]
>>> Nbuffer = [('NN',)]
>>> h = array(Nbuffer,dtype=Ndescr)
>>> h
array([(u'NN',)], dtype=[('Name', '<U8')])
>>> h['Name']
*** glibc detected *** free(): invalid next size (fast): 0x08224588 ***
Aborted

In addition, all other numpy/core test files run ok except

test_unicode.py
test_numerictypes.py

When I comment out the following lines in test_numerictypes.py

    assert_equal(h['info']['Name'],
                 array(self._buffer[3][0], dtype='U2'))

in the read_values_nested.check_nested1_acessors method, all test_numerictypes.py tests succeed ok.

HTH,
Pearu

From oliphant.travis at ieee.org Tue Apr 25 11:51:53 2006
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Tue, 25 Apr 2006 09:51:53 -0600
Subject: [SciPy-dev] Check reading the nested fields of a nested array
In-Reply-To:
References: <444DC28C.6050709@iam.uni-stuttgart.de> <444DC3E3.2060604@ieee.org> <444DD375.10206@iam.uni-stuttgart.de> <444DD71C.6000502@ieee.org> <444DD8C9.2070605@iam.uni-stuttgart.de> <444DDA53.5090104@ieee.org>
Message-ID: <444E4599.7060706@ieee.org>

Pearu Peterson wrote:
> Hi Travis,
> Here's a small code that fails with current svn:

Ahh... Thanks Pearu. The problem was UNICODE_copyswap. I was not dividing the itemsize by 4 to get the number of elements to swap...

Hopefully this is fixed, now.
-Travis

From oliphant.travis at ieee.org Tue Apr 25 12:10:08 2006
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Tue, 25 Apr 2006 10:10:08 -0600
Subject: [SciPy-dev] Check reading the nested fields of a nested array
In-Reply-To: <20060425141346.GA22846@alpha>
References: <444DC3E3.2060604@ieee.org> <444DD126.1070403@iam.uni-stuttgart.de> <444DD375.10206@iam.uni-stuttgart.de> <444DD71C.6000502@ieee.org> <444DD8C9.2070605@iam.uni-stuttgart.de> <444DDA53.5090104@ieee.org> <444DDC27.2020007@iam.uni-stuttgart.de> <444DE267.9030903@ieee.org> <444DE5AD.7010000@iam.uni-stuttgart.de> <20060425114913.GD17604@alpha> <20060425141346.GA22846@alpha>
Message-ID: <444E49E0.4000905@ieee.org>

Stefan van der Walt wrote:
> The following writes at an invalid memory location:
>
> from numpy import *
> ulen = 1
> ucs_value = u'\U0010FFFF'
> ua = array([[[ucs_value*ulen]*2]*3]*4, dtype='U%s' % ulen)
> ua2 = ua.newbyteorder()

Thank you very, very much for this very helpful demonstration. It allowed me to quickly isolate the problem which was indeed a memory-access violation due to forgetting to divide the itemsize of a unicode array by 4 (to get the number of actual unicode characters in the array).

I think the problem is now fixed. Please try current SVN.

-Travis
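(The effect of that missing division is easy to picture: with N characters stored as UCS-4, the swap loop walked itemsize four-byte units instead of itemsize/4, writing well past the allocated block. A round-trip along the lines below -- a sketch, not one of the actual regression tests -- should run cleanly, and quietly under valgrind, once the fix is in:)

    from numpy import array

    ua = array([u'\U0010FFFF'], dtype='U1')   # one UCS-4 character = 4 bytes
    ub = ua.byteswap().byteswap()             # swap there and back again
    assert ub[0] == ua[0]                     # data must survive the round-trip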
From pearu at scipy.org Tue Apr 25 12:11:37 2006
From: pearu at scipy.org (Pearu Peterson)
Date: Tue, 25 Apr 2006 11:11:37 -0500 (CDT)
Subject: [SciPy-dev] Check reading the nested fields of a nested array
In-Reply-To: <444E4599.7060706@ieee.org>
References: <444DC28C.6050709@iam.uni-stuttgart.de> <444DC3E3.2060604@ieee.org> <444DD375.10206@iam.uni-stuttgart.de> <444DD71C.6000502@ieee.org> <444DD8C9.2070605@iam.uni-stuttgart.de> <444DDA53.5090104@ieee.org> <444E4599.7060706@ieee.org>
Message-ID:

On Tue, 25 Apr 2006, Travis Oliphant wrote:
> Ahh... Thanks Pearu. The problem was UNICODE_copyswap. I was not
> dividing the itemsize by 4 to get the number of elements to swap...
> Hopefully this is fixed, now.

Hmm, I still get a crash..

Pearu

From robert.kern at gmail.com Tue Apr 25 12:55:32 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 25 Apr 2006 11:55:32 -0500
Subject: [SciPy-dev] Scipy ga module on Windows
In-Reply-To: <949BD319240E484B984463501CD7D17C257CB3@IMAGOEXCH1.corp.imago.com>
References: <949BD319240E484B984463501CD7D17C257CB3@IMAGOEXCH1.corp.imago.com>
Message-ID: <444E5484.4050704@gmail.com>

Duane Kaufman wrote:
> Um, can I grab the 'ga' folder contents from the sandbox? What do I need
> to use these then?

Well, like I said, it has not been ported to use numpy, yet. It's not going to run until it is. If you need something that works out-of-box right now, you will probably have to use another package.

If you would like to help us restore the ga package to scipy, then please check out the source from the Subversion repository and submit patches to the Trac.

http://www.scipy.org/Developer_Zone

-- Robert Kern robert.kern at gmail.com

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From oliphant.travis at ieee.org Tue Apr 25 12:57:24 2006
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Tue, 25 Apr 2006 10:57:24 -0600
Subject: [SciPy-dev] Check reading the nested fields of a nested array
In-Reply-To:
References: <444DC28C.6050709@iam.uni-stuttgart.de> <444DC3E3.2060604@ieee.org> <444DD375.10206@iam.uni-stuttgart.de> <444DD71C.6000502@ieee.org> <444DD8C9.2070605@iam.uni-stuttgart.de> <444DDA53.5090104@ieee.org> <444E4599.7060706@ieee.org>
Message-ID: <444E54F4.4000508@ieee.org>

Pearu Peterson wrote:
> Hmm, I still get a crash..

Could you try it now? Do you have a wide-build of Python? This could explain why I'm not seeing the problem.

-Travis

From Norbert.Nemec.list at gmx.de Tue Apr 25 13:08:34 2006
From: Norbert.Nemec.list at gmx.de (Norbert Nemec)
Date: Tue, 25 Apr 2006 19:08:34 +0200
Subject: [SciPy-dev] Numerical stability of special functions (here: chebyu)
In-Reply-To: <444CB8D2.8010109@bigpond.net.au>
References: <444C8B23.1080706@gmx.de> <444CB8D2.8010109@bigpond.net.au>
Message-ID: <444E5792.10207@gmx.de>

Gary Ruben wrote:
> Hi Norbert,
> No comment in answer to your question, but you can avoid the 'new'
> variable by changing the loop to
>
>     for n in range(N):
>         current, previous = 2*x*current - previous, current

Thanks! Nice idea. Even after two years of intense work with Python, I often forget the most obvious tricks... :-)
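(For context, the loop being discussed is the forward three-term recurrence for the Chebyshev polynomials of the second kind, U_{n+1}(x) = 2x*U_n(x) - U_{n-1}(x) with U_0(x) = 1 and U_1(x) = 2x. A complete evaluation routine using Gary's tuple assignment might read as follows -- an illustrative sketch, not the actual chebyu implementation:)

    def chebyu_values(N, x):
        """Return [U_0(x), ..., U_{N-1}(x)] via the forward recurrence."""
        values = []
        current, previous = 1.0, 0.0   # primed so the first update yields
        for n in range(N):             # U_1(x) = 2*x*1 - 0 = 2x
            values.append(current)
            current, previous = 2*x*current - previous, current
        return values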
From stefan at sun.ac.za Tue Apr 25 13:12:36 2006
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Tue, 25 Apr 2006 19:12:36 +0200
Subject: [SciPy-dev] Check reading the nested fields of a nested array
In-Reply-To: <444E54F4.4000508@ieee.org>
References: <444DC28C.6050709@iam.uni-stuttgart.de> <444DD71C.6000502@ieee.org> <444DD8C9.2070605@iam.uni-stuttgart.de> <444DDA53.5090104@ieee.org> <444E4599.7060706@ieee.org> <444E54F4.4000508@ieee.org>
Message-ID: <20060425171236.GA23267@alpha>

On Tue, Apr 25, 2006 at 10:57:24AM -0600, Travis Oliphant wrote:
> Could you try it now? Do you have a wide-build of Python? This could
> explain why I'm not seeing the problem.

Thanks, Travis. That fixes it for me.

Stéfan

From pearu at scipy.org Tue Apr 25 13:17:31 2006
From: pearu at scipy.org (Pearu Peterson)
Date: Tue, 25 Apr 2006 12:17:31 -0500 (CDT)
Subject: [SciPy-dev] Check reading the nested fields of a nested array
In-Reply-To: <444E54F4.4000508@ieee.org>
References: <444DC28C.6050709@iam.uni-stuttgart.de> <444DC3E3.2060604@ieee.org> <444DD375.10206@iam.uni-stuttgart.de> <444DD71C.6000502@ieee.org> <444DD8C9.2070605@iam.uni-stuttgart.de> <444DDA53.5090104@ieee.org> <444E4599.7060706@ieee.org> <444E54F4.4000508@ieee.org>
Message-ID:

On Tue, 25 Apr 2006, Travis Oliphant wrote:
> Could you try it now? Do you have a wide-build of Python? This could
> explain why I'm not seeing the problem.

Sure. Yes. Now all numpy tests pass ok.

Thanks,
Pearu

From mattknox_ca at hotmail.com Tue Apr 25 20:32:14 2006
From: mattknox_ca at hotmail.com (Matt Knox)
Date: Wed, 26 Apr 2006 00:32:14 +0000 (UTC)
Subject: [SciPy-dev] interest in Time series functionality?
References: <444DC973.5010608@gmail.com>
Message-ID:

I'm going to try and do some work this week on extricating the FAME bits from the core time series module, and flesh out the documentation and create some simple examples, then hopefully I will have some code to release towards the end of next week.

I am making no claims in regards to the quality/design of this module; I'm sure it is not up to the standards of some of the more experienced developers involved with numpy/scipy. But if we can begin to form a framework for how the core classes in a time series module should behave, then that would probably be a good start. Coming from a FAME background, the functionality really kind of mirrors FAME's approach to time series (or attempts to), but if somebody has a better approach, I'm certainly open to the idea of reshaping my view of the world :)

If someone could provide me with some instructions for where/how to post the code, that would be great. I work strictly in the Microsoft windows world, so any *nix jargon will be lost on me. And I have only tried compiling the C portion of the code under windows with numpy 0.9.4, so no promises for other platforms.

- Matt Knox

From aisaac at american.edu Tue Apr 25 20:51:54 2006
From: aisaac at american.edu (Alan G Isaac)
Date: Tue, 25 Apr 2006 20:51:54 -0400
Subject: [SciPy-dev] interest in Time series functionality?
In-Reply-To:
References: <444DC973.5010608@gmail.com>
Message-ID:

On Wed, 26 Apr 2006, (UTC) Matt Knox apparently wrote:
> hopefully I will have some code to release towards the end
> of next week.

Looking forward to it.

> I am making no claims in regards to the quality/design of
> this module

That's part of the MIT license:
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND
Go for it!

Cheers,
Alan Isaac

From mattknox_ca at hotmail.com Tue Apr 25 21:03:56 2006
From: mattknox_ca at hotmail.com (Matt Knox)
Date: Wed, 26 Apr 2006 01:03:56 +0000 (UTC)
Subject: [SciPy-dev] interest in Time series functionality?
References: <444DC973.5010608@gmail.com>
Message-ID:

Oh, one thing I should mention is that the code currently uses the mx.DateTime module (http://www.egenix.com/files/python/mxDateTime.html) because I have found it to offer a lot more flexibility than python's built-in date/time module. It could *probably* be rewritten to use only the built in date/time module, but I am not going to attempt to do that at this time
It could *probably* be rewritten to use only the built in date/time module, but I am not going to attempt to do that at this time - Matt Knox From robert.kern at gmail.com Tue Apr 25 23:09:23 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 25 Apr 2006 22:09:23 -0500 Subject: [SciPy-dev] Chang*ed* the Trac authentication Message-ID: <444EE463.10007@gmail.com> Trying not to embarass myself again, I made the changes without telling you. :-) In order to create or modify Wiki pages or tickets on the NumPy and SciPy Tracs, you will have to be logged in. You can register yourself by clicking the "Register" link in the upper right-hand corner of the page. Developers who previously had accounts have the same username/password as before. You can now change your password if you like. Only developers have the ability to close tickets, delete Wiki pages entirely, or create new ticket reports (and possibly a couple of other things). Developers, please enter your name and email by clicking on the "Settings" link up at top once logged in. Thank you for your patience. If there are any problems, please email me, and I will try to correct them quickly. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cookedm at physics.mcmaster.ca Wed Apr 26 15:31:54 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 26 Apr 2006 15:31:54 -0400 Subject: [SciPy-dev] [Numpy-discussion] Chang*ed* the Trac authentication In-Reply-To: <444EE463.10007@gmail.com> (Robert Kern's message of "Tue, 25 Apr 2006 22:09:23 -0500") References: <444EE463.10007@gmail.com> Message-ID: Robert Kern writes: > Trying not to embarass myself again, I made the changes without telling you. :-) > > In order to create or modify Wiki pages or tickets on the NumPy and SciPy Tracs, > you will have to be logged in. You can register yourself by clicking the > "Register" link in the upper right-hand corner of the page. > > Developers who previously had accounts have the same username/password as > before. You can now change your password if you like. Only developers have the > ability to close tickets, delete Wiki pages entirely, or create new ticket > reports (and possibly a couple of other things). Developers, please enter your > name and email by clicking on the "Settings" link up at top once logged in. > > Thank you for your patience. If there are any problems, please email me, and I > will try to correct them quickly. Thanks Robert; I hope this helps with our spam problem to an extent. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From aisaac at american.edu Wed Apr 26 16:15:37 2006 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 26 Apr 2006 16:15:37 -0400 Subject: [SciPy-dev] interest in Time series functionality? In-Reply-To: References: <444DC973.5010608@gmail.com> Message-ID: On Wed, 26 Apr 2006, (UTC) Matt Knox apparently wrote: > Oh, one thing I should mention is that the code currently > uses the mx.DateTime module > (http://www.egenix.com/files/python/mxDateTime.html) > because I have found it to offer a lot more flexibility > than python's built-in date/time module. 
It could probably > be rewritten to use only the built in date/time module, > but I am not going to attempt to do that at this time IMO, this is a reasonable dependency, even if I might better like reliance on python-dateutil, which is wonderful. (I think Matplotlib uses python-dateutil for time series handling; you might want to take a look.) The only thing I suggest is gracefully handling the import error if the module is not present. (E.g., say where to get it.) Cheers, Alan Isaac From robert.kern at gmail.com Wed Apr 26 17:20:12 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 26 Apr 2006 16:20:12 -0500 Subject: [SciPy-dev] interest in Time series functionality? In-Reply-To: References: <444DC973.5010608@gmail.com> Message-ID: <444FE40C.2090400@gmail.com> Alan G Isaac wrote: > On Wed, 26 Apr 2006, (UTC) Matt Knox apparently wrote: > >>Oh, one thing I should mention is that the code currently >>uses the mx.DateTime module >>(http://www.egenix.com/files/python/mxDateTime.html) >>because I have found it to offer a lot more flexibility >>than python's built-in date/time module. It could probably >>be rewritten to use only the built in date/time module, >>but I am not going to attempt to do that at this time > > IMO, this is a reasonable dependency, > even if I might better like reliance on python-dateutil, > which is wonderful. (I think Matplotlib uses > python-dateutil for time series handling; you might > want to take a look.) mxDateTime does have (at least) one relevant advantage over python-dateutil: it deals with the various "Julian Day Number" systems that astronomers and some other science fields use. It is also frequently *the* date-time type supported by various database adapters. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From fred.mailhot at gmail.com Thu Apr 27 03:50:53 2006 From: fred.mailhot at gmail.com (Fred Mailhot) Date: Thu, 27 Apr 2006 03:50:53 -0400 Subject: [SciPy-dev] Porting (some of?) Netlab to SciPy (Google Soc) Message-ID: Hello, The Python Software Foundation's page of potential projects for this year's Google Summer of Code contains the following entry: "Conduct a review of one chunk of functionality in scipy similar to the one currently in progress about the statistics package [...] Recently discussed requests have been [...], and porting the Netlab neural network code to scipy." I am highly interested in carrying out the Netlab project in the context of Google's Summer of Code (fulltime from June-August) and would like to know whether this is something that is sufficiently of interest to SciPy's developers and if so whether there is someone available and willing to serve as mentor (I envision weekly online meetings plus emergency contact as needed). A bit about myself: I am a graduate student in Cognitive Science with about 2 years of experience with Python in particular and about 6 years total of programming experience. I have implemented several neural network models (feedforward, recurrent, Hopfield) as well as many other machine learning algorithms so I am quite comfortable with the requisite concepts. If someone is interested in pursuing this please contact me as soon as possible so that we can work on putting a great proposal together. 
I am happy to pass along my CV or answer any other questions. Regards, Fred Mailhot -- Institute of Cognitive Science Carleton University Ottawa, ON, Canada From nwagner at iam.uni-stuttgart.de Thu Apr 27 05:11:31 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 27 Apr 2006 11:11:31 +0200 Subject: [SciPy-dev] sparse matrix comparisons Message-ID: <44508AC3.4030802@iam.uni-stuttgart.de> Hi all, The current status wrt comparison of sparse matrices is File "/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", line 155, in __cmp__ raise TypeError, "comparison of sparse matrices not implemented" TypeError: comparison of sparse matrices not implemented http://aspn.activestate.com/ASPN/Mail/Message/scipy-dev/2240626 Shall I submit a ticket for this feature ? Nils From cimrman3 at ntc.zcu.cz Thu Apr 27 05:52:26 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 27 Apr 2006 11:52:26 +0200 Subject: [SciPy-dev] scipy.optimize In-Reply-To: <444FA745.1010608@ieee.org> References: <444F97F3.6030100@ntc.zcu.cz> <444FA745.1010608@ieee.org> Message-ID: <4450945A.5070008@ntc.zcu.cz> Hi all, outside the list, I have proposed some enhancements to scipy.optimize, which are now below. Opinions welcome. Travis Oliphant wrote: > Robert Cimrman wrote: > >> Hi Travis, >> >> I write first to you as you are the author of scipy.optimize. >> >> I have been using scipy.optimize recently quite a lot and also coded the >> steepest descent method (named fmin_sd) myself, which has some features >> I would like to see e.g. in fmin_cg too. These are: >> 1. supporting hooks for a (user-provided?) logging class to log and >> possibly plot the convergence progress; >> 2. time statistics (via modified wrap_function) >> 3. passing configuration parameters in a structure (e.g. conf.gtol, >> conf.maxiter) >> >> Do you think that some points would be interesting to others? I attach >> my code (outside of scipy now, so you cannot run it as it is) and an >> example log plot. > > > Yes. logging is particularly requested quite often. As for > configuration parameters, I suspect that it is best done by making a > nice object-based interface to the optimization routines. I'm not sure > what the benefit is of passing in a configuration object over simply > using keyword arguments. Well, I have nothing against keyword arguments. But with a configuration object, the function argument list would be smaller, all fmin_* functions could have the same syntax and new configuration options would not influence the argument list. I am willing to work on the object-based interface. Not too much time right now, but... > But, make your suggestion on the list. It would also be nice to have > the fmin_sd code in SciPy. Yes, I am going to include it to SciPy. > In the near future, I'd like to put a trust_region optimization method > as well. Yes, that would be very useful. r. From nwagner at iam.uni-stuttgart.de Thu Apr 27 10:45:07 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 27 Apr 2006 16:45:07 +0200 Subject: [SciPy-dev] Segfault Message-ID: <4450D8F3.40606@iam.uni-stuttgart.de> Hi all, I am going to visualize the sparsity pattern of large matrices using numpy/scipy/matplotlib.
I received a segfault running the following script from pylab import show, plot, axis, xlabel, subplot, title, spy from scipy import * K = io.mmread('k.mtx') M = io.mmread('m.mtx') print 'Reading finished' K = K.todense() spy(K) show() Starting program: /usr/bin/python matrixstructure.py [Thread debugging using libthread_db enabled] [New Thread 16384 (LWP 13423)] Reading finished Program received signal SIGSEGV, Segmentation fault. [Switching to Thread 16384 (LWP 13423)] array_from_pyobj (type_num=12, dims=0x7fffffffd270, rank=2, intent=0, obj=) at fortranobject.c:547 547 PyArray_FILLWBYTE(arr, 0); Current language: auto; currently c (gdb) bt #0 array_from_pyobj (type_num=12, dims=0x7fffffffd270, rank=2, intent=0, obj=) at fortranobject.c:547 #1 0x00002aaab37606ab in f2py_rout_sparsetools_dcsctofull (capi_self=, capi_args=, capi_keywds=, f2py_func=0x2aaab3778760 ) at sparsetoolsmodule.c:12056 #2 0x00002aaab376f25b in fortran_call (fp=, arg=, kw=) at fortranobject.c:289 #3 0x00002aaaaabf7f20 in PyObject_Call () from /usr/lib64/libpython2.4.so.1.0 #4 0x00002aaaaac4ebaa in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0 #5 0x00002aaaaac510af in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0 #6 0x00002aaaaac510af in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0 #7 0x00002aaaaac510af in PyEval_EvalFrame () from /usr/lib64/libpython2.4.so.1.0 #8 0x00002aaaaac51705 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.4.so.1.0 #9 0x00002aaaaac51992 in PyEval_EvalCode () from /usr/lib64/libpython2.4.so.1.0 #10 0x00002aaaaac6aeb9 in run_node () from /usr/lib64/libpython2.4.so.1.0 #11 0x00002aaaaac6c3c9 in PyRun_SimpleFileExFlags () from /usr/lib64/libpython2.4.so.1.0 #12 0x00002aaaaac6c921 in PyRun_AnyFileExFlags () from /usr/lib64/libpython2.4.so.1.0 #13 0x00002aaaaac72023 in Py_Main () from /usr/lib64/libpython2.4.so.1.0 #14 0x00000000004008d9 in main (argc=, argv=) at ccpython.cc:10 Any idea how to fix this problem ? Nils From oliphant.travis at ieee.org Thu Apr 27 11:23:42 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 27 Apr 2006 09:23:42 -0600 Subject: [SciPy-dev] Segfault In-Reply-To: <4450D8F3.40606@iam.uni-stuttgart.de> References: <4450D8F3.40606@iam.uni-stuttgart.de> Message-ID: <4450E1FE.5000804@ieee.org> Nils Wagner wrote: > Hi all, > > I am going to visualize the sparsity pattern of large matrices using > numpy/scipy/matplotlib. > I received a segfault running the following script > > from pylab import show, plot, axis, xlabel, subplot, title, spy > from scipy import * > > K = io.mmread('k.mtx') > M = io.mmread('m.mtx') > print 'Reading finished' > > K = K.todense() > spy(K) > show() > > Thanks for the traceback. It looks like an error was not being caught. This is now fixed in SVN f2py. Your code should now raise an error that is occurring in trying to convert an object to an array in mmread... -Travis From nwagner at iam.uni-stuttgart.de Thu Apr 27 11:57:19 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 27 Apr 2006 17:57:19 +0200 Subject: [SciPy-dev] Segfault In-Reply-To: <4450E1FE.5000804@ieee.org> References: <4450D8F3.40606@iam.uni-stuttgart.de> <4450E1FE.5000804@ieee.org> Message-ID: <4450E9DF.4020403@iam.uni-stuttgart.de> Travis Oliphant wrote: > Nils Wagner wrote: > >> Hi all, >> >> I am going to visualize the sparsity pattern of large matrices using >> numpy/scipy/matplotlib. 
>> I received a segfault running the following script >> >> from pylab import show, plot, axis, xlabel, subplot, title, spy >> from scipy import * >> >> K = io.mmread('k.mtx') >> M = io.mmread('m.mtx') >> print 'Reading finished' >> >> K = K.todense() >> spy(K) >> show() >> >> >> > > Thanks for the traceback. It looks like an error was not being caught. > This is now fixed in SVN f2py. Your code should now raise an error that > is occurring in trying to convert an object to an array in mmread... > > -Travis > Reading finished Traceback (most recent call last): File "matrixstructure.py", line 8, in ? K = K.todense() File "/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", line 356, in todense return asmatrix(self.toarray()) File "/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", line 360, in toarray return csc.toarray() File "/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", line 969, in toarray return func(self.shape[0], self.data, self.rowind, self.indptr) MemoryError Nils > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From robert.kern at gmail.com Thu Apr 27 12:23:12 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 27 Apr 2006 11:23:12 -0500 Subject: [SciPy-dev] Segfault In-Reply-To: <4450E9DF.4020403@iam.uni-stuttgart.de> References: <4450D8F3.40606@iam.uni-stuttgart.de> <4450E1FE.5000804@ieee.org> <4450E9DF.4020403@iam.uni-stuttgart.de> Message-ID: <4450EFF0.1090400@gmail.com> Nils Wagner wrote: > Reading finished > Traceback (most recent call last): > File "matrixstructure.py", line 8, in ? > K = K.todense() > File "/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", line > 356, in todense > return asmatrix(self.toarray()) > File "/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", line > 360, in toarray > return csc.toarray() > File "/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", line > 969, in toarray > return func(self.shape[0], self.data, self.rowind, self.indptr) > MemoryError Notabug. The array is simply too large to be represented as a dense array. You need to either get more memory, or write a procedure to plot the sparsity pattern without creating the dense version of the array. Good luck. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From jonathan.taylor at utoronto.ca Thu Apr 27 12:28:51 2006 From: jonathan.taylor at utoronto.ca (Jonathan Taylor) Date: Thu, 27 Apr 2006 12:28:51 -0400 Subject: [SciPy-dev] Porting (some of?) Netlab to SciPy (Google Soc) In-Reply-To: References: Message-ID: <463e11f90604270928m428e5179l57b4d65cc7c95e7f@mail.gmail.com> Sounds cool. You may want to ping the numpy list as it has more traffic than this list. Jon. On 4/27/06, Fred Mailhot wrote: > Hello, > > The Python Software Foundation's page of potential projects for this > year's Google Summer of Code contains the following entry: > > "Conduct a review of one chunk of functionality in scipy similar to > the one currently in progress about the statistics package [...] > Recently discussed requests have been [...], and porting the Netlab > neural network code to scipy." 
> > I am highly interested in carrying out the Netlab project in the > context of Google's Summer of Code (fulltime from June-August) and > would like to know whether this is something that is sufficiently of > interest to SciPy's developers and if so whether there is someone > available and willing to serve as mentor (I envision weekly online > meetings plus emergency contact as needed). > > A bit about myself: > I am a graduate student in Cognitive Science with about 2 years of > experience with Python in particular and about 6 years total of > programming experience. I have implemented several neural network > models (feedforward, recurrent, Hopfield) as well as many other > machine learning algorithms so I am quite comfortable with the > requisite concepts. > > If someone is interested in pursuing this please contact me as soon as > possible so that we can work on putting a great proposal together. I > am happy to pass along my CV or answer any other questions > > > Regards, > > Fred Mailhot > -- > Institute of Cognitive Science > Carleton University > Ottawa, ON, Canada > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From oliphant at ee.byu.edu Thu Apr 27 12:36:56 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 27 Apr 2006 10:36:56 -0600 Subject: [SciPy-dev] Segfault In-Reply-To: <4450E9DF.4020403@iam.uni-stuttgart.de> References: <4450D8F3.40606@iam.uni-stuttgart.de> <4450E1FE.5000804@ieee.org> <4450E9DF.4020403@iam.uni-stuttgart.de> Message-ID: <4450F328.2080806@ee.byu.edu> Nils Wagner wrote: >Travis Oliphant wrote: > > >>Nils Wagner wrote: >> >> >> >>>Hi all, >>> >>>I am going to visualize the sparsity pattern of large matrices using >>>numpy/scipy/matplotlib. >>>I received a segfault running the following script >>> >>>from pylab import show, plot, axis, xlabel, subplot, title, spy >>>from scipy import * >>> >>>K = io.mmread('k.mtx') >>>M = io.mmread('m.mtx') >>>print 'Reading finished' >>> >>>K = K.todense() >>>spy(K) >>>show() >>> >>> >>> >>> >>> >>Thanks for the traceback. It looks like an error was not being caught. >>This is now fixed in SVN f2py. Your code should now raise an error that >>is occurring in trying to convert an object to an array in mmread... >> >>-Travis >> >> >> >Reading finished >Traceback (most recent call last): > File "matrixstructure.py", line 8, in ? > K = K.todense() > File "/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", line >356, in todense > return asmatrix(self.toarray()) > File "/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", line >360, in toarray > return csc.toarray() > File "/usr/lib64/python2.4/site-packages/scipy/sparse/sparse.py", line >969, in toarray > return func(self.shape[0], self.data, self.rowind, self.indptr) >MemoryError > > > Could you use pdb import pdb pdb.pm() to find out what self.shape[0] is? This will help track down the problem. -Travis From robert.kern at gmail.com Thu Apr 27 13:07:01 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 27 Apr 2006 12:07:01 -0500 Subject: [SciPy-dev] Porting (some of?) 
Netlab to SciPy (Google Soc) In-Reply-To: References: Message-ID: <4450FA35.4060903@gmail.com> Fred Mailhot wrote: > Hello, > > The Python Software Foundation's page of potential projects for this > year's Google Summer of Code contains the following entry: > > "Conduct a review of one chunk of functionality in scipy similar to > the one currently in progress about the statistics package [...] > Recently discussed requests have been [...], and porting the Netlab > neural network code to scipy." > > I am highly interested in carrying out the Netlab project in the > context of Google's Summer of Code (fulltime from June-August) and > would like to know whether this is something that is sufficiently of > interest to SciPy's developers Well, that would be why I put it up there. :-) > and if so whether there is someone > available and willing to serve as mentor (I envision weekly online > meetings plus emergency contact as needed). There will be. It's one of the several projects I am interested in mentoring. There are possibly others interested as well. Of course, decisions as to who will be mentoring what project won't be made until the proposals are in and reviewed. > A bit about myself: > I am a graduate student in Cognitive Science with about 2 years of > experience with Python in particular and about 6 years total of > programming experience. I have implemented several neural network > models (feedforward, recurrent, Hopfield) as well as many other > machine learning algorithms so I am quite comfortable with the > requisite concepts. > > If someone is interested in pursuing this please contact me as soon as > possible so that we can work on putting a great proposal together. I > am happy to pass along my CV or answer any other questions Actually, you get to write the proposal yourself. ;-) However, I'm more than happy to look over what you've written and make suggestions before you submit your proposal. The same goes for any other potential scipy students. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From schofield at ftw.at Thu Apr 27 13:52:39 2006 From: schofield at ftw.at (Ed Schofield) Date: Thu, 27 Apr 2006 19:52:39 +0200 Subject: [SciPy-dev] scipy.optimize In-Reply-To: <4450945A.5070008@ntc.zcu.cz> References: <444F97F3.6030100@ntc.zcu.cz> <444FA745.1010608@ieee.org> <4450945A.5070008@ntc.zcu.cz> Message-ID: <445104E7.8020000@ftw.at> Robert Cimrman wrote: > Hi all, > > outside the list, I have proposed some enhancements to scipy.optimize, > which are now below. Opinions welcome. > Hi Robert, Yes, I think these are great suggestions. If you look at maxentropy.py, you'll see that I've written various wrappers to deal with exactly these issues. If we had a common interface to the parameters of the optimize functions and a callback facility, I could simplify this code. I think a generic callback facility would be useful either for logging or a variety of other tasks, such as reliability tests in the case of stochastic optimization. We'd need to think about whether to call back each iteration or each function/gradient evaluation. I think each iteration would be preferable, but seem to recall that it's not trivial to do with, for example, L-BFGS-B. 
We'd also need to consider how to specify stopping criteria uniformly; currently the functions use several different definitions of tolerance, such as the mean or the norm of the gradient vector. It would be great to unify these. -- Ed From fred.mailhot at gmail.com Thu Apr 27 13:48:32 2006 From: fred.mailhot at gmail.com (Fred Mailhot) Date: Thu, 27 Apr 2006 13:48:32 -0400 Subject: [SciPy-dev] Porting (some of?) Netlab to SciPy (Google Soc) In-Reply-To: <4450FA35.4060903@gmail.com> References: <4450FA35.4060903@gmail.com> Message-ID: [snip] So it looks like this is definitely something worth going ahead with. Thanks to everyone for their supportive comments and suggestions. I'm going to put a proposal together over the next few days and post it (or a link to it) to this list by the middle of next week (I'm at the end of my semester here, so things are a bit hectic for the next few days). I'd love to get feedback from any and all interested parties. Cheers! Fred. -- Institute of Cognitive Science Carleton University Ottawa, ON, Canada From cimrman3 at ntc.zcu.cz Fri Apr 28 05:12:12 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Fri, 28 Apr 2006 11:12:12 +0200 Subject: [SciPy-dev] scipy.optimize In-Reply-To: <445104E7.8020000@ftw.at> References: <444F97F3.6030100@ntc.zcu.cz> <444FA745.1010608@ieee.org> <4450945A.5070008@ntc.zcu.cz> <445104E7.8020000@ftw.at> Message-ID: <4451DC6C.1070103@ntc.zcu.cz> Ed Schofield wrote: > Hi Robert, > > Yes, I think these are great suggestions. If you look at maxentropy.py, > you'll see that I've written various wrappers to deal with exactly these > issues. If we had a common interface to the parameters of the optimize > functions and a callback facility, I could simplify this code. > I think a generic callback facility would be useful either for logging > or a variety of other tasks, such as reliability tests in the case of > stochastic optimization. We'd need to think about whether to call back > each iteration or each function/gradient evaluation. I think each > iteration would be preferable, but seem to recall that it's not trivial > to do with, for example, L-BFGS-B. Yes, fmin_l_bfgs_b uses a wrapped fortran module, so I can think only about rewriting its high-level logic (the main iteration loop?) in Python, using directly the fortran subroutines. But all functions in optimize.py could use the common interface without pain. The two main possibilities are: 1) call back each iteration (as I do in my fmin_sd) only 2) call back via wrap_function macro, so that function/gradient calls e.g. in line search functions are not missed - the callback could have one arg saying from where it was called, so that in postprocessing you could plot e.g. just the data from main loop iterations. Now as I have written them down, I would vote for 2) in some form. >> We'd also need to consider how to specify stopping criteria uniformly; >> currently the functions use several different definitions of tolerance, >> such as the mean or the norm of the gradient vector. It would be great >> to unify these. I will send you off-list how I have it for now. cheers, r.
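As a concrete illustration of option 2), a wrapper along the following lines would count every function or gradient evaluation and report it to a callback tagged with its call site. The names (wrap_function, where, ConvergenceLog) and the callback signature are assumptions made for this sketch, not the actual scipy.optimize code:

import numpy as np

def wrap_function(function, args, callback=None, where='f'):
    # Count each evaluation and report it to the optional callback.
    ncalls = [0]
    def wrapper(x):
        ncalls[0] += 1
        fx = function(x, *args)
        if callback is not None:
            # 'where' tags the call site, e.g. 'f', 'grad' or 'linesearch'.
            callback(where, np.asarray(x), fx, ncalls[0])
        return fx
    return ncalls, wrapper

class ConvergenceLog:
    # Keeps only the main-loop data, as suggested above.
    def __init__(self):
        self.history = []
    def __call__(self, where, x, fx, ncalls):
        if where != 'linesearch':
            self.history.append((ncalls, fx))

A line search routine would be handed a wrapper built with where='linesearch', so that postprocessing can plot only the main-loop iterations.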
From schofield at ftw.at Fri Apr 28 05:45:09 2006 From: schofield at ftw.at (Ed Schofield) Date: Fri, 28 Apr 2006 11:45:09 +0200 Subject: [SciPy-dev] scipy.optimize In-Reply-To: <4451DC6C.1070103@ntc.zcu.cz> References: <444F97F3.6030100@ntc.zcu.cz> <444FA745.1010608@ieee.org> <4450945A.5070008@ntc.zcu.cz> <445104E7.8020000@ftw.at> <4451DC6C.1070103@ntc.zcu.cz> Message-ID: <4451E425.70408@ftw.at> Robert Cimrman wrote: > Yes, fmin_l_bfgs_b uses a wrapped fortran module, so I can think only > about rewriting its high-level logic (the main iteration loop?) in > Python, using directly the fortran subroutines. But all functions in > optimize.py could use the common interface without pain. > > The two main possibilities are: > 1) call back each iteration (as I do in my fmin_sd) only > 2) call back via wrap_function macro, so that function/gradient calls > e.g. in line search functions are not missed - the callback could have > one arg saying from where it was called, so that in postprocessing you > could plot e.g. just the data from main loop iterations. > > Now as I have written them down, I would vote for 2) in some form. > Yes, (2) would be good :) >> We'd also need to consider how to specify stopping criteria uniformly; >> currently the functions use several different definitions of tolerance, >> such as the mean or the norm of the gradient vector. It would be great >> to unify these. >> > > I will send you off-list how I have it for now. > It looks like a good start. I suggest you copy the whole scipy.optimize package into the sandbox (e.g. as 'newoptimize') and check your code in there... -- Ed From cimrman3 at ntc.zcu.cz Fri Apr 28 10:41:31 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Fri, 28 Apr 2006 16:41:31 +0200 Subject: [SciPy-dev] scipy.optimize In-Reply-To: <4451E425.70408@ftw.at> References: <444F97F3.6030100@ntc.zcu.cz> <444FA745.1010608@ieee.org> <4450945A.5070008@ntc.zcu.cz> <445104E7.8020000@ftw.at> <4451DC6C.1070103@ntc.zcu.cz> <4451E425.70408@ftw.at> Message-ID: <4452299B.9040105@ntc.zcu.cz> Ed Schofield wrote: > Robert Cimrman wrote: > >>Yes, fmin_l_bfgs_b uses a wrapped fortran module, so I can think only >>about rewriting its high-level logic (the main iteration loop?) in >>Python, using directly the fortran subroutines. But all functions in >>optimize.py could use the common interface without pain. >> >>The two main possibilities are: >>1) call back each iteration (as I do in my fmin_sd) only >>2) call back via wrap_function macro, so that function/gradient calls >>e.g. in line search functions are not missed - the callback could have >>one arg saying from where it was called, so that in postprocessing you >>could plot e.g. just the data from main loop iterations. >> >>Now as I have written them down, I would vote for 2) in some form. > > Yes, (2) would be good :) OK, I am willing to do a prototype in the sandbox. Unfortunately I will not be available until after May 9 :(, so be patient. >>I will send you off-list how I have it for now. > > > It looks like a good start. I suggest you copy the whole scipy.optimize > package into the sandbox (e.g. as 'newoptimize') and check your code in > there... It's there and I removed dependencies on my other codes, so that you can try it - run test_optimize.py, or try_log.py r.
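On the stopping criteria, a unified convergence test shared by all fmin_* routines might look like the sketch below. The Conf container and the gtol/xtol/maxiter names are illustrative only, chosen to match the conf.gtol, conf.maxiter style mentioned earlier in the thread; they are not an agreed interface:

import numpy as np

class Conf:
    # Illustrative configuration object for a common interface.
    def __init__(self, gtol=1e-5, xtol=1e-8, maxiter=200):
        self.gtol = gtol
        self.xtol = xtol
        self.maxiter = maxiter

def converged(grad, dx, it, conf):
    # One convention for every solver: infinity norm of the gradient,
    # size of the last step, and an iteration budget.
    if np.linalg.norm(grad, np.inf) < conf.gtol:
        return True
    if dx is not None and np.linalg.norm(dx, np.inf) < conf.xtol:
        return True
    return it >= conf.maxiter

Each solver's main loop could then call converged() instead of its own ad hoc test, which would also give a per-iteration callback a well-defined place to run.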
From nwagner at iam.uni-stuttgart.de Fri Apr 28 10:46:56 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 28 Apr 2006 16:46:56 +0200 Subject: [SciPy-dev] scipy.optimize In-Reply-To: <4452299B.9040105@ntc.zcu.cz> References: <444F97F3.6030100@ntc.zcu.cz> <444FA745.1010608@ieee.org> <4450945A.5070008@ntc.zcu.cz> <445104E7.8020000@ftw.at> <4451DC6C.1070103@ntc.zcu.cz> <4451E425.70408@ftw.at> <4452299B.9040105@ntc.zcu.cz> Message-ID: <44522AE0.20006@iam.uni-stuttgart.de> Robert Cimrman wrote: > Ed Schofield wrote: > >> Robert Cimrman wrote: >> >> >>> Yes, fmin_l_bfgs_b uses a wrapped fortran module, so I can think only >>> about rewriting its high-level logic (the main iteration loop?) in >>> Python, using directly the fortran subroutines. But all functions in >>> optimize.py could use the common interface without pain. >>> >>> The two main possibilities are: >>> 1) call back each iteration (as I do in my fmin_sd) only >>> 2) call back via wrap_function macro, so that function/gradient calls >>> e.g. in line search functions are not missed - the callback could have >>> one arg saying from where it was called, so that in postprocessing you >>> could plot e.g. just the data from main loop iterations. >>> >>> Now as I have written them down, I would vote for 2) in some form. >>> >> Yes, (2) would be good :) >> > > OK, I am willing to do a prototype in the sandbox. Unfortunately I will > not be available until after May 9 :(, so be patient. > > >>> I will send you off-list how I have it for now. >>> >> It looks like a good start. I suggest you copy the whole scipy.optimize >> package into the sandbox (e.g. as 'newoptimize') and check your code in >> there... >> > > It's there and I removed dependencies on my other codes, so that you can > try it - run test_optimize.py, or try_log.py > > r. > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > There is no entry for newoptimize in Lib/sandbox/setup.py. Am I missing something ? Nils From oliphant at ee.byu.edu Fri Apr 28 16:22:54 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 28 Apr 2006 14:22:54 -0600 Subject: [SciPy-dev] SciPy SVN site Message-ID: <4452799E.8010700@ee.byu.edu> Is anybody else having trouble accessing the scipy SVN site? I can't seem to update the tree on my work computer... -Travis From jonathan.taylor at stanford.edu Fri Apr 28 16:30:30 2006 From: jonathan.taylor at stanford.edu (Jonathan Taylor) Date: Fri, 28 Apr 2006 13:30:30 -0700 Subject: [SciPy-dev] SciPy SVN site In-Reply-To: <4452799E.8010700@ee.byu.edu> References: <4452799E.8010700@ee.byu.edu> Message-ID: <44527B66.8060107@stanford.edu> site seems to be working for me -- jonathan Travis Oliphant wrote: > Is anybody else having trouble accessing the scipy SVN site? I can't > seem to update the tree on my work computer... > > -Travis > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > -- ------------------------------------------------------------------------ I'm part of the Team in Training: please support our efforts for the Leukemia and Lymphoma Society! http://www.active.com/donate/tntsvmb/tntsvmbJTaylor GO TEAM !!! ------------------------------------------------------------------------ Jonathan Taylor Tel: 650.723.9230 Dept.
of Statistics Fax: 650.725.8977 Sequoia Hall, 137 www-stat.stanford.edu/~jtaylo 390 Serra Mall Stanford, CA 94305 From robert.kern at gmail.com Fri Apr 28 17:03:23 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 28 Apr 2006 16:03:23 -0500 Subject: [SciPy-dev] SciPy SVN site In-Reply-To: <4452799E.8010700@ee.byu.edu> References: <4452799E.8010700@ee.byu.edu> Message-ID: <4452831B.3070506@gmail.com> Travis Oliphant wrote: > Is anybody else having trouble accessing the scipy SVN site? I can't > seem to update the tree on my work computer... It's working for me, but I'm only 10 or so meters away from the machine itself, so perhaps that doesn't mean much. Have you made sure that your proxy settings are correct? -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From oliphant at ee.byu.edu Fri Apr 28 17:09:57 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 28 Apr 2006 15:09:57 -0600 Subject: [SciPy-dev] SciPy SVN site In-Reply-To: <4452831B.3070506@gmail.com> References: <4452799E.8010700@ee.byu.edu> <4452831B.3070506@gmail.com> Message-ID: <445284A5.2090009@ee.byu.edu> Robert Kern wrote: >Travis Oliphant wrote: > > >>Is anybody else having trouble accessing the scipy SVN site? I can't >>seem to update the tree on my work computer... >> >> > >It's working for me, but I'm only 10 or so meters away from the machine itself, >so perhaps that doesn't mean much. Have you made sure that your proxy settings >are correct? > > I have no idea how to do that... It's worked in the past and is just today not working. I'm not sure where the problem is. I am behind a firewall here, if that is any help. -Travis From robert.kern at gmail.com Fri Apr 28 17:12:13 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 28 Apr 2006 16:12:13 -0500 Subject: [SciPy-dev] SciPy SVN site In-Reply-To: <445284A5.2090009@ee.byu.edu> References: <4452799E.8010700@ee.byu.edu> <4452831B.3070506@gmail.com> <445284A5.2090009@ee.byu.edu> Message-ID: <4452852D.6060804@gmail.com> Travis Oliphant wrote: > Robert Kern wrote: > >>Travis Oliphant wrote: >> >>>Is anybody else having trouble accessing the scipy SVN site? I can't >>>seem to update the tree on my work computer... >> >>It's working for me, but I'm only 10 or so meters away from the machine itself, >>so perhaps that doesn't mean much. Have you made sure that your proxy settings >>are correct? > > I have no idea how to do that... It's worked in the past and is just > today not working. I'm not sure where the problem is. > > I am behind a firewall here, if that is any help. That is usually the source of the problem and what I was referring to when I asked you about your proxy settings. Try googling some of the relevant words in the error message. Speaking of which, what is the error message? -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From robert.kern at gmail.com Sat Apr 29 03:22:21 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 29 Apr 2006 02:22:21 -0500 Subject: [SciPy-dev] scipy.optimize In-Reply-To: <44522AE0.20006@iam.uni-stuttgart.de> References: <444F97F3.6030100@ntc.zcu.cz> <444FA745.1010608@ieee.org> <4450945A.5070008@ntc.zcu.cz> <445104E7.8020000@ftw.at> <4451DC6C.1070103@ntc.zcu.cz> <4451E425.70408@ftw.at> <4452299B.9040105@ntc.zcu.cz> <44522AE0.20006@iam.uni-stuttgart.de> Message-ID: <4453142D.2080902@gmail.com> Nils Wagner wrote: > There is no entry for newoptimize in Lib/sandbox/setup.py. > Am I missing something ? Since it's not a package, yet, no. But you could very easily turn it into one if you really wanted to. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Sat Apr 29 03:26:56 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 29 Apr 2006 02:26:56 -0500 Subject: [SciPy-dev] Error compiling delaunay In-Reply-To: <1145974980.444e30c4e46be@web-mail1.uibk.ac.at> References: <1145974980.444e30c4e46be@web-mail1.uibk.ac.at> Message-ID: <44531540.4020908@gmail.com> Bernhard Reimar Hoefle wrote: > Hi! > I updated my subversion repository today. > I get the following error when I try to compile scipy with the sandbox/delaunay > module. I have gcc version 4.1.0, Python 2.4.1 and Numpy 0.9.7. > > creating build/temp.linux-i686-2.4/Lib/sandbox > creating build/temp.linux-i686-2.4/Lib/sandbox/delaunay > compile options: '-ILib/sandbox/delaunay > -I/usr/lib/python2.4/site-packages/numpy/core/include -I/usr/include/python2.4 > -c' > c++: Lib/sandbox/delaunay/_delaunay.cpp > Lib/sandbox/delaunay/VoronoiDiagramGenerator.h:208: Fehler: extra qualification > ‘VoronoiDiagramGenerator::’ on member ‘ELgethash’ Fixed in SVN. Thank you. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From g.vanuxem at wanadoo.fr Sat Apr 29 14:02:10 2006 From: g.vanuxem at wanadoo.fr (Vanuxem Grégory) Date: Sat, 29 Apr 2006 20:02:10 +0200 Subject: [SciPy-dev] sinm and cosm for matrices with real coefficients Message-ID: <1146333730.9508.12.camel@localhost.localdomain> Hi, The code of cosm and sinm for matrices with real coefficients can be improved. The actual code is something like: sinm : (expm(j * M) - expm(- j*M)) / (2 * j) cosm : (expm(j * M) + expm(- j* M)) / 2 But this can be changed to (only for matrices with real coefficients): sinm : imaginary_part(expm(j * M)) cosm : real_part(expm(j * M)) Sorry, as you can see I'm not a python programmer, just reading the scipy source code... Cheers, Greg From robert.kern at gmail.com Sun Apr 30 01:08:29 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 30 Apr 2006 00:08:29 -0500 Subject: [SciPy-dev] sinm and cosm for matrices with real coefficients In-Reply-To: <1146333730.9508.12.camel@localhost.localdomain> References: <1146333730.9508.12.camel@localhost.localdomain> Message-ID: <4454464D.4070206@gmail.com> Vanuxem Grégory wrote: > Hi, > > The code of cosm and sinm for matrices with real coefficients can be > improved.
> > The actual code is something like: > > sinm : (expm(j * M) - expm(- j*M)) / (2 * j) > > cosm : (expm(j * M) + expm(- j* M)) / 2 > > But this can be changed to (only for matrices with real coefficients): > > sinm : imaginary_part(expm(j * M)) > > cosm : real_part(expm(j * M)) However, we want the functions to work on complex matrices, too. The optimization is probably not worth the extra code to dispatch on the datatype. Of course, if someone comes along with a use-case where they need the extra speed, and they have a patch, unit tests, and benchmarks ready, we'd probably put it in. Thank you for your interest, though. Please let us know if you find anything else that seems bogus. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From oliphant.travis at ieee.org Sun Apr 30 02:46:06 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun, 30 Apr 2006 00:46:06 -0600 Subject: [SciPy-dev] sinm and cosm for matrices with real coefficients In-Reply-To: <4454464D.4070206@gmail.com> References: <1146333730.9508.12.camel@localhost.localdomain> <4454464D.4070206@gmail.com> Message-ID: <44545D2E.8070701@ieee.org> Robert Kern wrote: > Vanuxem Grégory wrote: > >> Hi, >> >> The code of cosm and sinm for matrices with real coefficients can be >> improved. >> >> The actual code is something like: >> >> sinm : (expm(j * M) - expm(- j*M)) / (2 * j) >> >> cosm : (expm(j * M) + expm(- j* M)) / 2 >> >> But this can be changed to (only for matrices with real coefficients): >> >> sinm : imaginary_part(expm(j * M)) >> >> cosm : real_part(expm(j * M)) >> > > However, we want the functions to work on complex matrices, too. The > optimization is probably not worth the extra code to dispatch on the datatype. > Of course, if someone comes along with a use-case where they need the extra > speed, and they have a patch, unit tests, and benchmarks ready, we'd probably > put it in. > Actually the code was already dispatching on the data-type (to convert back to real instead of leaving it complex). So, this one was an easy addition and I added it. The functions still need to be adapted so they will return matrices when given matrices. -Travis From nodwell at physics.ubc.ca Sun Apr 30 14:10:57 2006 From: nodwell at physics.ubc.ca (Eric Nodwell) Date: Sun, 30 Apr 2006 11:10:57 -0700 Subject: [SciPy-dev] -Qnew option breaks scipy Message-ID: Using the -Qnew switch with python breaks scipy (version 0.4.8 and previous versions I have tested). The lines below demonstrate this. In summary, I run: (1) python without -Q option, and without "from __future__ import division". This works. (2) python without -Q option, and with "from __future__ import division". This works. (3) python with the -Qnew option, and without "from __future__ import division". This breaks scipy. $ /sw/bin/python Python 2.4.2 (#1, Mar 5 2006, 22:35:36) [GCC 4.0.1 (Apple Computer, Inc. build 5247)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> 3/2 # verify old style division 1 >>> import scipy >>> scipy.version.version '0.4.8' >>> $ /sw/bin/python Python 2.4.2 (#1, Mar 5 2006, 22:35:36) [GCC 4.0.1 (Apple Computer, Inc. build 5247)] on darwin Type "help", "copyright", "credits" or "license" for more information.
>>> from __future__ import division >>> 3/2 # verify new style division 1.5 >>> import scipy >>> $ /sw/bin/python -Qnew Python 2.4.2 (#1, Mar 5 2006, 22:35:36) [GCC 4.0.1 (Apple Computer, Inc. build 5247)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> 3/2 # verify new style division 1.5 >>> import scipy Traceback (most recent call last): File "", line 1, in ? File "/sw/lib/python2.4/site-packages/scipy/__init__.py", line 32, in ? from numpy import * File "/sw/lib/python2.4/site-packages/numpy/f2py/__init__.py", line 12, in ? import f2py2e File "/sw/lib/python2.4/site-packages/numpy/f2py/f2py2e.py", line 27, in ? import rules File "/sw/lib/python2.4/site-packages/numpy/f2py/rules.py", line 96, in ? module_rules={ File "/sw/lib/python2.4/site-packages/numpy/f2py/auxfuncs.py", line 395, in gentitle return '/*%s %s %s*/'%(l*'*',name,l*'*') TypeError: can't multiply sequence by non-int >>> From robert.kern at gmail.com Sun Apr 30 15:24:47 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 30 Apr 2006 14:24:47 -0500 Subject: [SciPy-dev] -Qnew option breaks scipy In-Reply-To: References: Message-ID: <44550EFF.4000009@gmail.com> Eric Nodwell wrote: > Using the -Qnew switch with python breaks scipy (version 0.4.8 and > previous versions I have tested). Okay. So don't do that. I really don't think Python will be switching to the new division scheme in the 2.x series any more than it will switch to all-Unicode strings in that timeframe (-U). These options are for testing only, not deployment. > The lines below demonstrate this. In summary, I run: > (1) python without -Q option, and without "from __future__ import > division". This works. > (2) python without -Q option, and with "from __future__ import > division". This works. > (3) python with the -Qnew option, and without "from __future__ import > division". This breaks scipy. Note that "from __future__ import division" is a per-module setting. Executing it at the prompt only sets it in the __main__ module; it does not extend inside scipy or any other imported module. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From nodwell at physics.ubc.ca Sun Apr 30 16:21:25 2006 From: nodwell at physics.ubc.ca (Eric Nodwell) Date: Sun, 30 Apr 2006 13:21:25 -0700 Subject: [SciPy-dev] -Qnew option breaks scipy In-Reply-To: <44550EFF.4000009@gmail.com> References: <44550EFF.4000009@gmail.com> Message-ID: If a command line switch exists, it will be used. According to http://www.python.org/dev/peps/pep-0238, this project: http://www.vpython.org, has a need for running everything with -Qnew . I can't say whether they truly need it or not, I only point it out as an example of the above axiom. This is something which will have to be done eventually anyways, since some day -Qnew will become the default. Maybe not in the 2.x series as you point out. OK, I admit I'm not submitting a patch myself, merely pointing it out in the hope that someone will say, hey I can fix that in 5 minutes. For all I know however a fix would require a major rewrite of scipy. I was using -Qnew because I often use python as an interactive calculator and I was getting tired of typing "from __future__ import division" so I aliased python -Qnew. Too bad if this is something one ought not to do, because it makes the best calculator that I know of.
It does no good to put the future statement in a module listed in PYTHONSTARTUP, because then, as you point out, the effect is limited to that module. On 4/30/06, Robert Kern wrote: > Eric Nodwell wrote: > > Using the -Qnew switch with python breaks scipy (version 0.4.8 and > > previous versions I have tested). > > Okay. So don't do that. I really don't think Python will be switching to the new > division scheme in the 2.x series any more than it will switch to all-Unicode > strings in that timeframe (-U). These options are for testing only, not deployment. > > > The lines below demonstrate this. In summary, I run: > > (1) python without -Q option, and without "from __future__ import > > division". This works. > > (2) python without -Q option, and with "from __future__ import > > division". This works. > > (3) python with the -Qnew option, and without "from __future__ import > > division". This breaks scipy. > > Note that "from __future__ import division" is a per-module setting. Executing > it at the prompt only sets it in the __main__ module; it does not extend inside > scipy or any other imported module. > > -- > Robert Kern > robert.kern at gmail.com > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth." > -- Umberto Eco > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From robert.kern at gmail.com Sun Apr 30 17:19:06 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 30 Apr 2006 16:19:06 -0500 Subject: [SciPy-dev] -Qnew option breaks scipy In-Reply-To: References: <44550EFF.4000009@gmail.com> Message-ID: <445529CA.6030408@gmail.com> Eric Nodwell wrote: > If a command line switch exists, it will be used. Sure. And those users need to take responsibility for doing weird things, not library authors. > According to http://www.python.org/dev/peps/pep-0238, this project: > http://www.vpython.org, has a need for running everything with -Qnew . > I can't say whether they truly need it or not, I only point it out > as an example of the above axiom. And they were wrong to do so. > This is something which will have to be done eventually anyways, since > some day -Qnew will become the default.
> It does no good to put the future statement in a module listed in > PYTHONSTARTUP, because then, as you point out, the effect is limited > to that module. Have you tried it? The file in PYTHONSTARTUP gets exec'ed, not imported. [~]$ cat $PYTHONSTARTUP from __future__ import division try: import readline except ImportError: print "Module readline not available." else: import rlcompleter readline.parse_and_bind("tab: complete") [~]$ python Python 2.4.1 (#2, Mar 31 2005, 00:05:10) [GCC 3.3 20030304 (Apple Computer, Inc. build 1666)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> 3/2 1.5 >>> -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From nodwell at physics.ubc.ca Sun Apr 30 18:17:44 2006 From: nodwell at physics.ubc.ca (Eric Nodwell) Date: Sun, 30 Apr 2006 15:17:44 -0700 Subject: [SciPy-dev] -Qnew option breaks scipy In-Reply-To: <445529CA.6030408@gmail.com> References: <44550EFF.4000009@gmail.com> <445529CA.6030408@gmail.com> Message-ID: OK, I'll accept that it's not really a bug (yet). But I disagree that it's something one doesn't need to care about until python 3.x, and I disagree even more so that it's not even possible to deal with this until 3.x. My understanding is that PEP238 has been accepted and is final, and the change to division is fully implemented as of 2.2, with no further changes to division occurring except the gradual phase-in for backwards compatibility. From the PEP: "" The -Q command line option takes a string argument that can take four values: "old", "warn", "warnall", or "new". The default is "old" in Python 2.2 but will change to "warn" in later 2.x versions. "" I guess I just generated a slightly premature warning. $ /sw/bin/python -Qwarn Python 2.4.2 (#1, Mar 5 2006, 22:35:36) [GCC 4.0.1 (Apple Computer, Inc. build 5247)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import scipy /sw/lib/python2.4/site-packages/numpy/core/numerictypes.py:137: DeprecationWarning: classic int division bytes = bits / 8 /sw/lib/python2.4/site-packages/numpy/core/numerictypes.py:177: DeprecationWarning: classic int division na_name = '%s%d' % (base.capitalize(), bit/2) /sw/lib/python2.4/site-packages/numpy/core/numerictypes.py:377: DeprecationWarning: classic int division nbytes[obj] = val[2] / 8 /sw/lib/python2.4/site-packages/numpy/f2py/auxfuncs.py:394: DeprecationWarning: classic int division l=(80-len(name)-6)/2 (Oh, and I did just try PYTHONSTARTUP and of course it does get exec'ed, which is just peachy. I thought I'd tried it once long ago and couldn't get "from __future__ import division" that way. Obviously that must have been something else that I tried. Thanks for pointing that out.) cheers, Eric On 4/30/06, Robert Kern wrote: > Eric Nodwell wrote: > > If a command line switch exists, it will be used. > > Sure. And those users need to take responsibility for doing weird things, not > library authors. > > > According to http://www.python.org/dev/peps/pep-0238, this project: > > http://www.vpython.org, has a need for running everything with -Qnew . > > I can't say whether they truly need it or not, I only point it out > > as an example of the above axiom. > > And they were wrong to do so. > > > This is something which will have to be done eventually anyways, since > > some day -Qnew will become the default.
Maybe not in the 2.x series > > as you point out. > > Definitely not in the 2.x series. From the PEP: > > """ > - Classic division will remain the default in the Python 2.x > series; true division will be standard in Python 3.0. > """ > > Python 3.0 is the backwards-compatibility-breaking release. Scipy will have to > go through much larger changes then than division. Python 3.0 does not exist > yet, and it is impossible to write 3.0-compatible code. We'll have to deal with > those changes later. That's the perfect time to deal with the change in division > behavior. > > > OK, I admit I'm not submitting a patch myself, > > merely pointing it out in the hope that someone will say, hey I can > > fix that in 5 minutes. For all I know however a fix would require a > > major rewrite of scipy. > > It's not a 5-minute fix. > > > I was using -Qnew because I often use python as an interactive > > calculator and I was getting tired of typing "from __future__ import > > division" so I aliased python -Qnew. Too bad if this is something one > > ought not to do, because it makes the best calculator that I know of. > > It does no good to put the future statement in a module listed in > > PYTHONSTARTUP, because then, as you point out, the effect is limited > > to that module. > > Have you tried it? The file in PYTHONSTARTUP gets exec'ed, not imported. > > [~]$ cat $PYTHONSTARTUP > from __future__ import division > try: > import readline > except ImportError: > print "Module readline not available." > else: > import rlcompleter > readline.parse_and_bind("tab: complete") > > [~]$ python > Python 2.4.1 (#2, Mar 31 2005, 00:05:10) > [GCC 3.3 20030304 (Apple Computer, Inc. build 1666)] on darwin > Type "help", "copyright", "credits" or "license" for more information. > >>> 3/2 > 1.5 > >>> > > -- > Robert Kern > robert.kern at gmail.com > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth." > -- Umberto Eco > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-dev > From robert.kern at gmail.com Sun Apr 30 18:44:55 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 30 Apr 2006 17:44:55 -0500 Subject: [SciPy-dev] -Qnew option breaks scipy In-Reply-To: References: <44550EFF.4000009@gmail.com> <445529CA.6030408@gmail.com> Message-ID: <44553DE7.8020209@gmail.com> Eric Nodwell wrote: > OK, I'll accept that it's not really a bug (yet). But I disagree that > it's something one doesn't need to care about until python 3.x, and I > disagree even more so that it's not even possible to deal with this > until 3.x. My understanding is that PEP238 has been accepted and is > final, and the change to division is fully implemented as of 2.2, with > no further changes to division occurring except the gradual phase-in > for backwards compatibility. From the PEP: > > "" > The -Q command line option takes a string argument that can take > four values: "old", "warn", "warnall", or "new". The default is > "old" in Python 2.2 but will change to "warn" in later 2.x > versions. > "" Sure. If that actually happens, then it would also be an appropriate time to deal with it. Honestly, I doubt it will; the PEP was written a long time ago, and Python 3.0 plans have changed significantly since then.
It's also reasonably likely that people will be maintaining a separate 2.x branch in parallel to 3.x, and the 2.x branch certainly won't have default warnings for 3.x features. My suggestion that it is impossible to write 3.0-compatible code is true; the full set of features of 3.0 doesn't exist yet, even in planning. We *could* do this small part now because this one feature is largely specced out already, but for little or no benefit. It makes more sense to do it when we do the real conversion for all of the new 3.0 features. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
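For reference, the cleanup that the -Qwarn output above points at is mostly mechanical: divisions that are meant to truncate become floor division (the // operator, available since Python 2.2), and any module that wants true division opts in explicitly. A minimal sketch, reusing the bits/8 pattern from the warnings quoted above (the variable names are illustrative):

from __future__ import division  # true division in this module only

bits = 64
nbytes = bits // 8   # floor division: 8 under both old and new semantics
ratio = bits / 48    # true division: 1.333... thanks to the import above

Code written this way behaves the same under -Qold, -Qnew, and whatever default a future Python chooses.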