From nwagner at iam.uni-stuttgart.de Fri Mar 2 03:51:00 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 02 Mar 2007 09:51:00 +0100 Subject: [SciPy-dev] sandbox packages maskedarray and timeseries Message-ID: <45E7E574.5080107@iam.uni-stuttgart.de> Hi all, I have enabled the sandbox packages maskedarray and timeseries. The file enabled_packages.txt in /scipy/Lib/sandbox contains

    delaunay
    pyem
    numexpr
    arpack
    spline
    rbf
    maskedarray
    timeseries

If I try to import maskedarray and timeseries I get

    Python 2.4 (#1, Oct 13 2006, 16:43:49)
    [GCC 3.3.5 20050117 (prerelease) (SUSE Linux)] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> from scipy.sandbox import maskedarray
    >>> from scipy.sandbox import timeseries
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
      File "/usr/lib64/python2.4/site-packages/scipy/sandbox/timeseries/__init__.py", line 16, in ?
        import tcore
      File "/usr/lib64/python2.4/site-packages/scipy/sandbox/timeseries/tcore.py", line 18, in ?
        import maskedarray as MA
    ImportError: No module named maskedarray
    >>> import scipy
    >>> scipy.__version__
    '0.5.3.dev2806'

How can I fix this problem? Nils

From pgmdevlist at gmail.com Fri Mar 2 12:27:16 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Fri, 2 Mar 2007 12:27:16 -0500 Subject: [SciPy-dev] sandbox packages maskedarray and timeseries In-Reply-To: <45E7E574.5080107@iam.uni-stuttgart.de> References: <45E7E574.5080107@iam.uni-stuttgart.de> Message-ID: <200703021227.16723.pgmdevlist@gmail.com> On Friday 02 March 2007 03:51:00 Nils Wagner wrote: > Hi all, > > I have enabled the sandbox packages maskedarray and timeseries. Wow, thanks a lot. We didn't even make a proper announcement on timeseries yet! (We want to have the week-end to check for some bugs first...) > How can I fix this problem? Mmh. I don't know, I never tried to install the package with Scipy. You could try to install maskedarray first, and then... *finishes his coffee and realizes that's not the question*. Oh: I guess it's a namespace problem. timeseries.tcore tries to find maskedarray, when in fact only scipy.sandbox.maskedarray is available in the namespace. Could you try something like from scipy.sandbox import maskedarray as maskedarray? I've never tried this approach; I always have maskedarray and timeseries directly in my PYTHONPATH and everything works fine (of course, except for the bugs in the modules themselves, but we're working on that...) Let me know how it goes.
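[A workaround along the lines Pierre suggests -- a sketch only, untested against the sandbox layout of the time -- is to register the sandbox package under the bare top-level name that timeseries.tcore expects, before importing timeseries:]

    import sys
    from scipy.sandbox import maskedarray

    # tcore does a bare "import maskedarray", so expose the sandbox
    # package under that top-level name before pulling in timeseries.
    sys.modules['maskedarray'] = maskedarray
    from scipy.sandbox import timeseries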
From jdh2358 at gmail.com Sun Mar 4 16:53:32 2007 From: jdh2358 at gmail.com (John Hunter) Date: Sun, 4 Mar 2007 15:53:32 -0600 Subject: [SciPy-dev] bug in scipy.stats.norm.cdf ? Message-ID: <88e473830703041353h7e978fb6k370d2994d9f666cc@mail.gmail.com> I was surprised by the nans in the cdf output below; ditto for the survival function, which is probably using the same code (svn scipy compiled for OS X panther python2.3)

    In [1]: import scipy
    In [2]: import scipy.stats
    In [3]: scipy.__version__
    Out[3]: '0.5.3.dev2818'
    In [4]: distribution = scipy.stats.norm(loc=0, scale=1)
    In [5]: x = scipy.arange(-3,3,0.1)
    In [6]: distribution.cdf(x)
    Out[6]:
    array([ 0.0013499 ,  0.00186581,  0.00255513,  0.00346697,  0.00466119,
                   nan,  0.00819754,  0.01072411,  0.01390345,  0.01786442,
                   nan,  0.02871656,  0.03593032,  0.04456546,  0.05479929,
                   nan,  0.08075666,  0.09680048,  0.11506967,  0.13566606,
                   nan,  0.18406013,  0.2118554 ,  0.24196365,  0.27425312,
                   nan,  0.34457826,  0.38208858,  0.42074029,  0.46017216,
            0.5       ,  0.53982784,  0.57925971,  0.61791142,  0.65542174,
            0.69146246,  0.72574688,  0.75803635,  0.7881446 ,  0.81593987,
            0.84134475,  0.86433394,  0.88493033,  0.90319952,  0.91924334,
            0.9331928 ,  0.94520071,  0.95543454,  0.96406968,  0.97128344,
            0.97724987,  0.98213558,  0.98609655,  0.98927589,  0.99180246,
            0.99379033,  0.99533881,  0.99653303,  0.99744487,  0.99813419])

    John-Hunters-Computer:~> uname -a
    Darwin John-Hunters-Computer.local 7.9.0 Darwin Kernel Version 7.9.0: Wed Mar 30 20:11:17 PST 2005; root:xnu/xnu-517.12.7.obj~1/RELEASE_PPC Power Macintosh powerpc

From jdh2358 at gmail.com Sun Mar 4 17:27:47 2007 From: jdh2358 at gmail.com (John Hunter) Date: Sun, 4 Mar 2007 16:27:47 -0600 Subject: [SciPy-dev] bug in scipy.stats.norm.cdf ? In-Reply-To: <88e473830703041353h7e978fb6k370d2994d9f666cc@mail.gmail.com> References: <88e473830703041353h7e978fb6k370d2994d9f666cc@mail.gmail.com> Message-ID: <88e473830703041427v7a30229cqcbc27c53254f0861@mail.gmail.com> On 3/4/07, John Hunter wrote: > I was surprised by the nans in the cdf output below; ditto for the > survival function which is probably using the same code (svn scipy > compiled for OS X panther python2.3) In case this helps one of the scipy gurus -- I've narrowed the bug to scipy.special._cephes.erf but haven't been able to narrow it further. I wonder if this is related to a platform specific nan test.... Do others see this?

    In [20]: import scipy.special._cephes as c
    In [21]: x = scipy.arange(-3, -2, 0.1)
    In [22]: x
    Out[22]: array([-3. , -2.9, -2.8, -2.7, -2.6, -2.5, -2.4, -2.3, -2.2, -2.1])
    In [23]: c.erf(x)
    Out[23]:
    array([-0.99997791, -0.9999589 , -0.99992499, -0.99986567, -0.99976397,
                   nan, -0.99931149, -0.99885682, -0.99813715, -0.99702053])
    In [24]: x[5]
    Out[24]: -2.5
    In [25]: c.erf(x[5])
    Out[25]: nan
    In [26]: c.erf(float(x[5]))
    Out[26]: nan
    In [27]: c.erf(-2.5)
    Out[27]: -0.999593047983

From eric at enthought.com Sun Mar 4 17:39:00 2007 From: eric at enthought.com (eric jones) Date: Sun, 04 Mar 2007 16:39:00 -0600 Subject: [SciPy-dev] bug in scipy.stats.norm.cdf ? In-Reply-To: <88e473830703041427v7a30229cqcbc27c53254f0861@mail.gmail.com> References: <88e473830703041353h7e978fb6k370d2994d9f666cc@mail.gmail.com> <88e473830703041427v7a30229cqcbc27c53254f0861@mail.gmail.com> Message-ID: <45EB4A84.3080507@enthought.com> It looks like it works with the latest install of Python -- Enthought Edition on WinXP. This is python 2.4.3.

    In [81]: from scipy.special import _cephes as c
    In [83]: a
    Out[83]: array([-3. , -2.9, -2.8, -2.7, -2.6, -2.5, -2.4, -2.3, -2.2, -2.1])
    In [85]: c.erf(a)
    Out[85]:
    array([-0.99997791, -0.9999589 , -0.99992499, -0.99986567, -0.99976397,
           -0.99959305, -0.99931149, -0.99885682, -0.99813715, -0.99702053])
    In [86]: import scipy
    In [87]: scipy.__version__
    Out[87]: '0.5.3.dev2749'
    In [88]: import numpy
    In [89]: numpy.__version__
    Out[89]: '1.0.2.dev3552'

John Hunter wrote: > On 3/4/07, John Hunter wrote: > >> I was surprised by the nans in the cdf output below; ditto for the >> survival function which is probably using the same code (svn scipy >> compiled for OS X panther python2.3) >> > > In case this helps one of the scipy gurus -- I've narrowed the bug to > scipy.special._cephes.erf but haven't been able to narrow it further. > I wonder if this is related to a platform specific nan test.... Do > others see this? > > > In [20]: import scipy.special._cephes as c > > In [21]: x = scipy.arange(-3, -2, 0.1) > > In [22]: x > Out[22]: array([-3. , -2.9, -2.8, -2.7, -2.6, -2.5, -2.4, -2.3, -2.2, -2.1]) > > In [23]: c.erf(x) > Out[23]: > array([-0.99997791, -0.9999589 , -0.99992499, -0.99986567, -0.99976397, > nan, -0.99931149, -0.99885682, -0.99813715, -0.99702053]) > > In [24]: x[5] > Out[24]: -2.5 > > In [25]: c.erf(x[5]) > Out[25]: nan > > In [26]: c.erf(float(x[5])) > Out[26]: nan > > In [27]: c.erf(-2.5) > Out[27]: -0.999593047983 > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > >

From fperez.net at gmail.com Sun Mar 4 17:53:31 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Sun, 4 Mar 2007 15:53:31 -0700 Subject: [SciPy-dev] bug in scipy.stats.norm.cdf ? In-Reply-To: <45EB4A84.3080507@enthought.com> References: <88e473830703041353h7e978fb6k370d2994d9f666cc@mail.gmail.com> <88e473830703041427v7a30229cqcbc27c53254f0861@mail.gmail.com> <45EB4A84.3080507@enthought.com> Message-ID: On 3/4/07, eric jones wrote: > It looks like it works with latest install of Python -- Enthought > Edition on WinXP. This is python 2.4.3. Looks OK here too:

    In [8]: c.erf(x)
    Out[8]:
    array([-0.99997791, -0.9999589 , -0.99992499, -0.99986567, -0.99976397,
           -0.99959305, -0.99931149, -0.99885682, -0.99813715, -0.99702053])
    In [9]: x[5]
    Out[9]: -2.5
    In [10]: c.erf(x[5])
    Out[10]: -0.999593047983
    In [11]: c.erf(-2.5)
    Out[11]: -0.999593047983
    In [12]: import numpy
    In [13]: numpy.__version__
    Out[13]: '1.0.2.dev3569'
    In [14]: scipy.__version__
    Out[14]: '0.5.3.dev2819'

Ubuntu Edgy on x86, built from SVN right this minute. GCC is 4.1.2. Cheers, f

From mauger at physics.ucdavis.edu Sun Mar 4 18:00:20 2007 From: mauger at physics.ucdavis.edu (Matthew Auger) Date: Sun, 4 Mar 2007 15:00:20 -0800 (PST) Subject: [SciPy-dev] bug in scipy.stats.norm.cdf ? In-Reply-To: <45EB4A84.3080507@enthought.com> References: <88e473830703041353h7e978fb6k370d2994d9f666cc@mail.gmail.com> <88e473830703041427v7a30229cqcbc27c53254f0861@mail.gmail.com> <45EB4A84.3080507@enthought.com> Message-ID: Three more data points: works fine for me on i686, x86_64, and my intel mac with scipy 0.5.3.dev2774 and numpy 1.0.2.dev3552.

From jdh2358 at gmail.com Sun Mar 4 18:36:55 2007 From: jdh2358 at gmail.com (John Hunter) Date: Sun, 4 Mar 2007 17:36:55 -0600 Subject: [SciPy-dev] bug in scipy.stats.norm.cdf ?
In-Reply-To: <45EB4A84.3080507@enthought.com> References: <88e473830703041353h7e978fb6k370d2994d9f666cc@mail.gmail.com> <88e473830703041427v7a30229cqcbc27c53254f0861@mail.gmail.com> <45EB4A84.3080507@enthought.com> Message-ID: <88e473830703041536p4ebb795di6bf6246094b9a40e@mail.gmail.com> On 3/4/07, eric jones wrote: > It looks like it works with latest install of Python -- Enthought > Edition on WinXP. This is python 2.4.3. > > In [81]: from scipy.special import _cephes as c > Yep, it's an isnan bug. If I add a dummy function "erfjdh" to ndtr.c and _cephesmodule.c

    double erfjdh(double x)
    {
        double y, z;
        if (isnan(x)) {
            mtherr("erf", DOMAIN);
            return -2.0;
        }
        return 2.0;
    }

and then run my test code

    In [1]: import scipy
    In [2]: import scipy.special._cephes as c
    In [3]: x = scipy.arange(-3, -2, 0.1)
    In [4]: c.erfjdh(x)
    Out[4]: array([ 2.,  2.,  2.,  2.,  2., -2.,  2.,  2.,  2.,  2.])

But apparently the internal isnan testing in ndtr.c is different than the toplevel testing, because

    In [5]: scipy.isnan(x)
    Out[5]: array([False, False, False, False, False, False, False, False, False, False], dtype=bool)

Still digging...

From jdh2358 at gmail.com Sun Mar 4 19:08:15 2007 From: jdh2358 at gmail.com (John Hunter) Date: Sun, 4 Mar 2007 18:08:15 -0600 Subject: [SciPy-dev] bug in scipy.stats.norm.cdf ? In-Reply-To: <88e473830703041536p4ebb795di6bf6246094b9a40e@mail.gmail.com> References: <88e473830703041353h7e978fb6k370d2994d9f666cc@mail.gmail.com> <88e473830703041427v7a30229cqcbc27c53254f0861@mail.gmail.com> <45EB4A84.3080507@enthought.com> <88e473830703041536p4ebb795di6bf6246094b9a40e@mail.gmail.com> Message-ID: <88e473830703041608u4997e40axdb6bcf27f287f7ae@mail.gmail.com> On 3/4/07, John Hunter wrote: > Yep, it's an isnan bug. If I add a dummy function "erfjdh" to ndtr.c > and _cephesmodule.c Latest update: the ndtr.c module is using cephes_isnan defined in cephes/isnan.c. With some experimenting, it appears I am falling into the "sizeof(int)==4 and #ifdef IBMPC" branch of that function, when I should be in the "sizeof(int)==4 and #ifdef MIEEE" branch, as far as I can determine. If I manually define MIEEE in isnan.h, I avoid the unwanted isnans. Looking at mconf.h where these macros are defined, it looks like I am not hitting the branch

    #elif defined(sel) || defined(pyr) || defined(mc68000) || defined (m68k) || \
          defined(is68k) || defined(tahoe) || defined(ibm032) || \
          defined(ibm370) || defined(MIPSEB) || defined(_MIPSEB) || \
          defined(__convex__) || defined(DGUX) || defined(hppa) || \
          defined(apollo) || defined(_CRAY) || defined(__hppa) || \
          defined(__hp9000) || defined(__hp9000s300) || \
          defined(__hp9000s700) || defined(__AIX) || defined(_AIX) ||\
          defined(__pyr__) || defined(__mc68000__) || defined(__sparc) ||\
          defined(_IBMR2) || defined (BIT_ZERO_ON_LEFT)
    #define MIEEE 1 /* Motorola IEEE, high order words come first */
    #define BIGENDIAN 1

Is there some additional "defined" check that needs to be added here? Are other mac power pc users seeing this problem? Thanks! JDH
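[A quick cross-check of the word-order assumption behind those two branches -- a sketch in modern Python, not part of the cephes sources: the sign/exponent word of a double is the second 32-bit word in memory on little-endian machines (cephes' IBMPC convention) and the first word on big-endian machines like the PPC (MIEEE), so an isnan that indexes the wrong word tests mantissa bits instead of the exponent:]

    import struct
    import sys

    # Memory layout of an IEEE-754 quiet NaN as two 32-bit words.
    w0, w1 = struct.unpack('=II', struct.pack('=d', float('nan')))
    print(sys.byteorder)     # 'little' on x86, 'big' on PowerPC
    print(hex(w0), hex(w1))  # little-endian: w1 == 0x7ff80000 (exponent word second)
                             # big-endian:    w0 == 0x7ff80000 (exponent word first)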
From jdh2358 at gmail.com Sun Mar 4 20:05:38 2007 From: jdh2358 at gmail.com (John Hunter) Date: Sun, 4 Mar 2007 19:05:38 -0600 Subject: [SciPy-dev] bug in scipy.stats.norm.cdf ? In-Reply-To: <45EB4A84.3080507@enthought.com> References: <88e473830703041353h7e978fb6k370d2994d9f666cc@mail.gmail.com> <88e473830703041427v7a30229cqcbc27c53254f0861@mail.gmail.com> <45EB4A84.3080507@enthought.com> Message-ID: <88e473830703041705l7879c036xd034fbda44360733@mail.gmail.com> On 3/4/07, eric jones wrote: > It looks like it works with latest install of Python -- Enthought > Edition on WinXP. This is python 2.4.3. > In [87]: scipy.__version__ > Out[87]: '0.5.3.dev2749' Hey Eric, On the enthought python download page http://code.enthought.com/enthon/, the scipy version is listed as "SciPy 0.5.0.2033: Scientific Library for Python", but you are showing a much more recent version. The code I am using in scipy.stats was broken in one of the 0.5.0.* branches I was testing on, so I want to make sure my students, who are mainly win32, are getting the latest packages. I was planning on pointing them to the enthought edition; is there a pointer to a development or testing release, or is the version number on the main page out of date? Thanks, JDH

From eric at enthought.com Sun Mar 4 21:27:33 2007 From: eric at enthought.com (eric jones) Date: Sun, 04 Mar 2007 20:27:33 -0600 Subject: [SciPy-dev] bug in scipy.stats.norm.cdf ? In-Reply-To: <88e473830703041705l7879c036xd034fbda44360733@mail.gmail.com> References: <88e473830703041353h7e978fb6k370d2994d9f666cc@mail.gmail.com> <88e473830703041427v7a30229cqcbc27c53254f0861@mail.gmail.com> <45EB4A84.3080507@enthought.com> <88e473830703041705l7879c036xd034fbda44360733@mail.gmail.com> Message-ID: <45EB8015.9070408@enthought.com> The bleeding edge is using our new "enstaller" with eggs. We're (still...) working out issues with the installation process and also trying to unscramble some bugs in our package eggs. That said, ipython, scipy, numpy, and matplotlib all seem to work fine on it. Here are the latest directions for installing the packages used in a class I am teaching:

1) Download http://code.enthought.com/downloads/enthon/enthought_setup-2.4.0.msi
2) Install, choosing the defaults.
3) The last step of the install process will ask if you want to launch the enstaller app; choose yes. This will bootstrap our custom installer app.
4) When the gui pops up, the inner frame will have 2 tabs, "Installed" and "Repositories"; click on the "Repositories" tab.
5) Click on the "Find Installable Packages" button, which will populate the table.
6) Find the package name "teaching_python", change the install status to "Install", then click the "Install Packages" button under the table. This is a meta-package which will actually install all of the dependencies.
7) When the app is done installing the package, you can close the app.

eric

John Hunter wrote: > On 3/4/07, eric jones wrote: > >> It looks like it works with latest install of Python -- Enthought >> Edition on WinXP. This is python 2.4.3. >> In [87]: scipy.__version__ >> Out[87]: '0.5.3.dev2749' >> > > Hey Eric, > > On the enthought python download page > http://code.enthought.com/enthon/, the scipy version is listed as > "SciPy 0.5.0.2033: Scientific Library for Python" but you are showing > a much more recent version. The code I am using in scipy.stats was > broken in one of the 0.5.0.* branches I was testing on, so I want to > make sure my students, who are mainly win32, are getting the latest > packages. I was planning on pointing them to the enthought edition; > is there a pointer to a development or testing release, or is the > version number on the main page out of date?
> > Thanks, > JDH > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > >

From fperez.net at gmail.com Sun Mar 4 21:32:52 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Sun, 4 Mar 2007 19:32:52 -0700 Subject: [SciPy-dev] bug in scipy.stats.norm.cdf ? In-Reply-To: <45EB8015.9070408@enthought.com> References: <88e473830703041353h7e978fb6k370d2994d9f666cc@mail.gmail.com> <88e473830703041427v7a30229cqcbc27c53254f0861@mail.gmail.com> <45EB4A84.3080507@enthought.com> <88e473830703041705l7879c036xd034fbda44360733@mail.gmail.com> <45EB8015.9070408@enthought.com> Message-ID: On 3/4/07, eric jones wrote: > The bleeding edge is using our new "enstaller" with eggs. We're > (still...) working out issues > with the installation process and also trying to unscramble some bugs > in our package eggs. > That said, ipython, scipy, numpy, and matplotlib all seem to work fine > on it. > > Here are the latest directions for installing the packages used in a > class I am teaching: > > 1) download > http://code.enthought.com/downloads/enthon/enthought_setup-2.4.0.msi [...] Does this work? Yesterday I had a student try to install everything to get MayaVi2 running on a WinXP partition of my laptop, and nothing really worked. Eventually I just rebooted to the ubuntu partition so we could get some work done. I was going to send a detailed report on the enthought-dev list perhaps tomorrow with more info. Do you still want it, or should we just follow these instructions again and see how it goes? Cheers, f

From eric at enthought.com Sun Mar 4 22:00:00 2007 From: eric at enthought.com (eric jones) Date: Sun, 04 Mar 2007 21:00:00 -0600 Subject: [SciPy-dev] bug in scipy.stats.norm.cdf ? In-Reply-To: References: <88e473830703041353h7e978fb6k370d2994d9f666cc@mail.gmail.com> <88e473830703041427v7a30229cqcbc27c53254f0861@mail.gmail.com> <45EB4A84.3080507@enthought.com> <88e473830703041705l7879c036xd034fbda44360733@mail.gmail.com> <45EB8015.9070408@enthought.com> Message-ID: <45EB87B0.4030000@enthought.com> I just tried Mayavi, and it doesn't appear to work on this install, so no. Many things have been fixed in the last week, but it isn't ready for serious use yet. I would encourage you to submit any bugs you see. Bryce and Rick will appreciate error reports on packages. thanks, eric Fernando Perez wrote: > On 3/4/07, eric jones wrote: > >> The bleeding edge is using our new "enstaller" with eggs. We're >> (still...) working out issues >> with the installation process and also trying to unscramble some bugs >> in our package eggs. >> That said, ipython, scipy, numpy, and matplotlib all seem to work fine >> on it. >> >> Here are the latest directions for installing the packages used in a >> class I am teaching: >> >> 1) download >> http://code.enthought.com/downloads/enthon/enthought_setup-2.4.0.msi >> > > [...] > > Does this work? Yesterday I had a student try to install everything > to get MayaVi2 running on a WinXP partition of my laptop, and nothing > really worked. Eventually I just rebooted to the ubuntu partition so > we could get some work done. > > I was going to send a detailed report on the enthought-dev list > perhaps tomorrow with more info. Do you still want it or should we > just follow these instructions again and see how it goes?
> > Cheers, > > f > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > >

From fperez.net at gmail.com Sun Mar 4 22:04:05 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Sun, 4 Mar 2007 20:04:05 -0700 Subject: [SciPy-dev] bug in scipy.stats.norm.cdf ? In-Reply-To: <45EB87B0.4030000@enthought.com> References: <88e473830703041353h7e978fb6k370d2994d9f666cc@mail.gmail.com> <88e473830703041427v7a30229cqcbc27c53254f0861@mail.gmail.com> <45EB4A84.3080507@enthought.com> <88e473830703041705l7879c036xd034fbda44360733@mail.gmail.com> <45EB8015.9070408@enthought.com> <45EB87B0.4030000@enthought.com> Message-ID: On 3/4/07, eric jones wrote: > I just tried Mayavi, and it doesn't appear to work on this install, so > no. Many things have been fixed in the last week, but it isn't > ready for serious use yet. > > I would encourage you to submit any bugs you see. Bryce and Rick will > appreciate error reports on packages. OK, I'll then try to do a more detailed writeup tomorrow. In the meantime, I have to say that ETS from SVN works flawlessly on ubuntu and the build is absolutely trivial, so this isn't a major problem for us yet. And the tools are amazing: you guys have done a really good job. Congrats. Cheers, f

From eric at enthought.com Sun Mar 4 22:17:34 2007 From: eric at enthought.com (eric jones) Date: Sun, 04 Mar 2007 21:17:34 -0600 Subject: [SciPy-dev] bug in scipy.stats.norm.cdf ? In-Reply-To: References: <88e473830703041353h7e978fb6k370d2994d9f666cc@mail.gmail.com> <88e473830703041427v7a30229cqcbc27c53254f0861@mail.gmail.com> <45EB4A84.3080507@enthought.com> <88e473830703041705l7879c036xd034fbda44360733@mail.gmail.com> <45EB8015.9070408@enthought.com> <45EB87B0.4030000@enthought.com> Message-ID: <45EB8BCE.9030107@enthought.com> Hey Fernando, > In the meantime, I have to say that ETS from SVN works flawlessly on > ubuntu and the build is absolutely trivial, so this isn't a major > problem for us yet. Good to hear. Judging by comments on the enthought-dev list in the last week or two, it isn't trivial on every platform (gentoo, 64 bit I believe was (is?) very painful), so I'm glad to hear it's easy in some places. People here use it regularly on Windows, Mac, Redhat (64 bit), and Ubuntu. > And the tools are amazing: you guys have done a really good job. Congrats. > Again, good to hear. It always seems to move slower than you wish, but I'm glad you are finding them useful. see ya, eric > Cheers, > > f > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > >

From eric at enthought.com Mon Mar 5 00:21:22 2007 From: eric at enthought.com (eric jones) Date: Sun, 04 Mar 2007 23:21:22 -0600 Subject: [SciPy-dev] bug in scipy.stats.norm.cdf ? In-Reply-To: References: <88e473830703041353h7e978fb6k370d2994d9f666cc@mail.gmail.com> <88e473830703041427v7a30229cqcbc27c53254f0861@mail.gmail.com> <45EB4A84.3080507@enthought.com> <88e473830703041705l7879c036xd034fbda44360733@mail.gmail.com> <45EB8015.9070408@enthought.com> <45EB87B0.4030000@enthought.com> Message-ID: <45EBA8D2.1030401@enthought.com> Bryce has fixed a few eggs and Mayavi now works. eric Fernando Perez wrote: > On 3/4/07, eric jones wrote: > >> I just tried Mayavi, and it doesn't appear to work on this install, so >> no. Many things have been fixed in the last week, but it isn't >> ready for serious use yet.
>> >> I would encourage you to submit any bugs you see. Bryce and Rick will >> appreciate error reports on packages. >> > > OK, I'll then try to do a more detailed writeup tomorrow. > > In the meantime, I have to say that ETS from SVN works flawlessly on > ubuntu and the build is absolutely trivial, so this isn't a major > problem for us yet. And the tools are amazing: you guys have done a > really good job. Congrats. > > Cheers, > > f > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > > From mattknox_ca at hotmail.com Wed Mar 7 13:46:20 2007 From: mattknox_ca at hotmail.com (Matt Knox) Date: Wed, 7 Mar 2007 13:46:20 -0500 Subject: [SciPy-dev] [ANN] alpha release of the timeseries module Message-ID: Hi everyone, As you may or may not be aware, over the past several months we (Matt and Pierre) have been busy developing a timeseries module for scipy. The module has been completely overhauled from its original form when it was added to the scipy sandbox at the end of last year. In particular, masked arrays are now fully supported, and the dependence on the external package mxDateTime has been removed. The purpose of this package is to manipulate data indexed by dates (and time). The package defines three main classes: Date (as a single element), DateArray (as a ndarray of Date objects), and TimeSeries (as the combination of a DateArray and a masked array). Each of these main classes support a wide variety of frequencies: annual, quarterly, monthly, weekly, daily, business day, hourly, minutely, secondly, and more. Additional frequencies can be added fairly easily. Note that by frequency, we really mean time units. Time series usually do NOT have to be regularly spaced : the smallest time increment between two consecutive data is just expressed in one particular set of units. Dates (and hence DateArrays and TimeSeries) can be easily converted to other frequencies (the conversion algorithms implemented in C). TimeSeries can be indexed (and sliced!) by Date objects/strings, or indexed in the convential numpy way . The TimeSeries class is a subclass of the new MaskedArray class (the version in the scipy sandbox, which is itself a subclass of ndarray). Therefore, missing data are naturally supported, and the series are recognized as ndarrays by asarray/asanyarray. In addition to these three classes, we developed a series of addons: * a plotting add-on, that allows TimeSeries objects to be easily plotted using matplotlib with dynamic, intelligent axis labels (a tad slow at the moment, but very pretty...).* A report class for generating TimeSeries reports, to export the results to a spreadsheet program via csv, generate html tables, or inspect data from the console, etc.* The 'interpolate' sub-module, that permits to interpolate masked values in a MaskedArray (and hence, TimeSeries also) * A 'filters' sub-module provides some functions for 'running window' based filtering (this is rather incomplete at this point) * An io.fame sub-module allows reading/writing to/from FAME databases (still highly experimental) Our future plans include:* Porting the Date class entirely to C. This is partly underway, so you may notice some unusual redundancy between the python and C code with regards to Date handling. 
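[To give a flavor of what the package described above looks like in use, a sketch -- the names follow the announcement and the wiki, but the exact constructor signatures may differ from the sandbox code and nothing here is tested against it:]

    from scipy.sandbox import timeseries as ts
    import numpy as np

    # A monthly ('M' frequency) series starting in January 2006.
    start = ts.Date(freq='M', year=2006, month=1)
    series = ts.time_series(np.arange(12), start_date=start)

    series['2006-06']             # index by date string
    annual = series.convert('A')  # convert to annual frequency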
From john at curioussymbols.com Wed Mar 7 23:31:16 2007 From: john at curioussymbols.com (John Pye) Date: Thu, 08 Mar 2007 15:31:16 +1100 Subject: [SciPy-dev] crash with 'import scipy.linalg' and 'import scipy.io' Message-ID: <45EF9194.8090902@curioussymbols.com> Hi all, I am getting a crash when I call "from scipy import io". Normally this works fine, but when I make the call from an *embedded* python script, it goes haywire. See below for some GDB output. You can see the C code has done some stuff, then the embedded python frame starts up. But there is a crash when it gets to "import scipy.linalg", although "import scipy" works fine. "import scipy.io" also gives a crash. Note that this is with standard Ubuntu packages for numpy and scipy, but with a locally built newer package of matplotlib.

    john at roadwork:~/ascend$ dpkg -l matplotlib python-scipy python-numpy
    Desired=Unknown/Install/Remove/Purge/Hold
    | Status=Not/Installed/Config-files/Unpacked/Failed-config/Half-installed
    |/ Err?=(none)/Hold/Reinst-required/X=both-problems (Status,Err: uppercase=bad)
    ||/ Name           Version          Description
    +++-==============-================-====================================================================
    ii  matplotlib     0.90.0-2         Matlab(TM) style python plotting package
    ii  python-numpy   1.0rc1-0ubuntu1  Numerical Python adds a fast array facility to the Python language
    ii  python-scipy   0.5.1-3ubuntu2   scientific tools for Python
    john at roadwork:~/ascend$

Has anyone else seen this behaviour? Is there anything I can do to work around this problem? Cheers JP

    base/generic/integrator/ida.c:1707 (integrator_ida_write_matrix): Calculating dg/dx...
    base/generic/system/jacobian.c:78 (system_jacobian): nr = 0
    base/generic/system/jacobian.c:79 (system_jacobian): nv = 2
    base/generic/linear/mtx_basic.c:3234 (mtx_write_region_mmio): Wrote matrix range (0 x 2) to file
    WROTE MATRICES TO FILE. NOW PROCESSING...
    IMPORTING scipy...
    IMPORTING scipy.linang...

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread -1209911104 (LWP 22996)] 0xb5a60b41 in PyArray_API () from /usr/lib/python2.4/site-packages/numpy/core/multiarray.so (gdb) where #0 0xb5a60b41 in PyArray_API () from /usr/lib/python2.4/site-packages/numpy/core/multiarray.so #1 0xb3987fc7 in init_flinalg () at /usr/lib/python2.4/site-packages/numpy/core/include/numpy/__multiarray_api.h:952 #2 0x080d6844 in _PyImport_LoadDynamicModule (name=0xbf8577d7 "scipy.linalg._flinalg", pathname=0xbf856767 "/usr/lib/python2.4/site-packages/scipy/linalg/_flinalg.so", fp=0x869f8a8) at ../Python/importdl.c:53 #3 0x080d47d5 in load_module (name=0xbf8577d7 "scipy.linalg._flinalg", fp=0xb5a60b40, buf=0xbf856767 "/usr/lib/python2.4/site-packages/scipy/linalg/_flinalg.so", type=3, loader=0x8) at ../Python/import.c:1689 #4 0x080d4f8b in import_submodule (mod=0xb482550c, subname=0xbf8577e4 "_flinalg", fullname=0xbf8577d7 "scipy.linalg._flinalg") at ../Python/import.c:2276 #5 0x080d540a in load_next (mod=0xb482550c, altmod=0x8124c58, p_name=, buf=0xbf8577d7 "scipy.linalg._flinalg", p_buflen=0xbf8577cc) at ../Python/import.c:2096 #6 0x080d563e in PyImport_ImportModuleEx (name=0xb39b5f54 "_flinalg", globals=0xb39a11c4, locals=0xb39a11c4, fromlist=0x8124c58) at ../Python/import.c:1931 #7 0x080af9e0 in builtin___import__ (self=0x0, args=0xb39b81bc) at ../Python/bltinmodule.c:45 #8 0x08058c27 in PyObject_Call (func=0x8, arg=0xb39b81bc, kw=0x0) at ../Objects/abstract.c:1795 #9 0x080b395c in PyEval_CallObjectWithKeywords (func=0xb7debd6c, arg=0xb39b81bc, kw=0x0) at ../Python/ceval.c:3435 #10 0x080b77b3 in PyEval_EvalFrame (f=0x8424b1c) at ../Python/ceval.c:2020 #11 0x080ba4b9 in PyEval_EvalCodeEx (co=0xb39b6aa0, globals=0xb39a11c4, locals=0xb39a11c4, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ../Python/ceval.c:2741 #12 0x080ba527 in PyEval_EvalCode (co=0xb39b6aa0, globals=0xb39a11c4, locals=0xb39a11c4) at ../Python/ceval.c:484 #13 0x080d3d8c in PyImport_ExecCodeModuleEx (name=0xbf85abb7 "scipy.linalg.flinalg", co=0xb39b6aa0, pathname=0xbf858b07 "/usr/lib/python2.4/site-packages/scipy/linalg/flinalg.pyc") at ../Python/import.c:636 #14 0x080d4406 in load_source_module (name=0xbf85abb7 "scipy.linalg.flinalg", pathname=0xbf858b07 "/usr/lib/python2.4/site-packages/scipy/linalg/flinalg.pyc", fp=) at ../Python/import.c:915 #15 0x080d4f8b in import_submodule (mod=0xb482550c, subname=0xbf85abc4 "flinalg", fullname=0xbf85abb7 "scipy.linalg.flinalg") at ../Python/import.c:2276 #16 0x080d540a in load_next (mod=0xb482550c, altmod=0x8124c58, p_name=, buf=0xbf85abb7 "scipy.linalg.flinalg", p_buflen=0xbf85abac) at ../Python/import.c:2096 #17 0x080d563e in PyImport_ImportModuleEx (name=0xb39b5e54 "flinalg", globals=0xb481ef0c, locals=0xb481ef0c, fromlist=0xb48215cc) at ../Python/import.c:1931 #18 0x080af9e0 in builtin___import__ (self=0x0, args=0xb39b80f4) at ../Python/bltinmodule.c:45 #19 0x08058c27 in PyObject_Call (func=0x8, arg=0xb39b80f4, kw=0x0) at ../Objects/abstract.c:1795 #20 0x080b395c in PyEval_CallObjectWithKeywords (func=0xb7debd6c, arg=0xb39b80f4, kw=0x0) at ../Python/ceval.c:3435 #21 0x080b77b3 in PyEval_EvalFrame (f=0x81a7374) at ../Python/ceval.c:2020 #22 0x080ba4b9 in PyEval_EvalCodeEx (co=0xb39b6960, globals=0xb481ef0c, locals=0xb481ef0c, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ../Python/ceval.c:2741 #23 0x080ba527 in PyEval_EvalCode (co=0xb39b6960, globals=0xb481ef0c, locals=0xb481ef0c) at ../Python/ceval.c:484 #24 0x080d3d8c in PyImport_ExecCodeModuleEx (name=0xbf85df97 
"scipy.linalg.basic", co=0xb39b6960, pathname=0xbf85bee7 "/usr/lib/python2.4/site-packages/scipy/linalg/basic.pyc") at ../Python/import.c:636 #25 0x080d4406 in load_source_module (name=0xbf85df97 "scipy.linalg.basic", pathname=0xbf85bee7 "/usr/lib/python2.4/site-packages/scipy/linalg/basic.pyc", fp=) at ../Python/import.c:915 #26 0x080d4f8b in import_submodule (mod=0xb482550c, subname=0xbf85dfa4 "basic", fullname=0xbf85df97 "scipy.linalg.basic") at ../Python/import.c:2276 #27 0x080d540a in load_next (mod=0xb482550c, altmod=0x8124c58, p_name=, buf=0xbf85df97 "scipy.linalg.basic", p_buflen=0xbf85df8c) at ../Python/import.c:2096 ---Type to continue, or q to quit--- From john at curioussymbols.com Wed Mar 7 23:50:14 2007 From: john at curioussymbols.com (John Pye) Date: Thu, 08 Mar 2007 15:50:14 +1100 Subject: [SciPy-dev] crash with 'import scipy.linalg' and 'import scipy.io' In-Reply-To: <45EF9194.8090902@curioussymbols.com> References: <45EF9194.8090902@curioussymbols.com> Message-ID: <45EF9606.7000907@curioussymbols.com> Further on this... I reverted to the standard ubuntu matplotlib just to be safe. That didn't fix the problem. I looked at that __multiarray_api.h (judging from those underscores, it must be top secret :-) and without having any kind of detailed understanding of it, I see that there are some global variables here. I wonder if it's possible that things are getting messed up by the fact that I have got embedded python loading some modules that were already loaded *outside* my embedded python frame, even though the same interpreter is being reused. Perhaps it's a categorical "you can't use scipy in embedded python". But hopefully not. Cheers JP John Pye wrote: > Hi all, > > I am getting a crash when I call "from scipy import io". Normally this > works fine, but when I make the call from an *embedded* python script, > it goes haywire. See below for some GDB output. You can see the C code > has done some stuff, then the embedded python frame starts up. But there > is a crash when it gets to "import scipy.linalg", although "import > scipy" works fine. "import scipy.io" also gives a crash. > > Note that this is with standard Ubuntu packages for numpy and scipy, but > with a locally build newer package of matplotlib. > > john at roadwork:~/ascend$ dpkg -l matplotlib python-scipy python-numpy > Desired=Unknown/Install/Remove/Purge/Hold > | Status=Not/Installed/Config-files/Unpacked/Failed-config/Half-installed > |/ Err?=(none)/Hold/Reinst-required/X=both-problems (Status,Err: > uppercase=bad) > ||/ Name Version Description > +++-==========================-==========================-==================================================================== > ii matplotlib 0.90.0-2 Matlab(TM) > style python plotting package > ii python-numpy 1.0rc1-0ubuntu1 Numerical > Python adds a fast array facility to the Python language > ii python-scipy 0.5.1-3ubuntu2 scientific > tools for Python > john at roadwork:~/ascend$ > > Has anyone else seen this behaviour? Is there anything I can do to work > around this problem? > > Cheers > JP > > base/generic/integrator/ida.c:1707 (integrator_ida_write_matrix): > Calculating dg/dx... > base/generic/system/jacobian.c:78 (system_jacobian): nr = 0 > base/generic/system/jacobian.c:79 (system_jacobian): nv = 2 > base/generic/linear/mtx_basic.c:3234 (mtx_write_region_mmio): Wrote > matrix range (0 x 2) to file > WROTE MATRICES TO FILE. NOW PROCESSING... > IMPORTING scipy... > IMPORTING scipy.linang... > > Program received signal SIGSEGV, Segmentation fault. 
> [Switching to Thread -1209911104 (LWP 22996)] > 0xb5a60b41 in PyArray_API () from > /usr/lib/python2.4/site-packages/numpy/core/multiarray.so > (gdb) where > #0 0xb5a60b41 in PyArray_API () from > /usr/lib/python2.4/site-packages/numpy/core/multiarray.so > #1 0xb3987fc7 in init_flinalg () at > /usr/lib/python2.4/site-packages/numpy/core/include/numpy/__multiarray_api.h:952 > #2 0x080d6844 in _PyImport_LoadDynamicModule (name=0xbf8577d7 > "scipy.linalg._flinalg", > pathname=0xbf856767 > "/usr/lib/python2.4/site-packages/scipy/linalg/_flinalg.so", > fp=0x869f8a8) at ../Python/importdl.c:53 > #3 0x080d47d5 in load_module (name=0xbf8577d7 "scipy.linalg._flinalg", > fp=0xb5a60b40, > buf=0xbf856767 > "/usr/lib/python2.4/site-packages/scipy/linalg/_flinalg.so", type=3, > loader=0x8) at ../Python/import.c:1689 > #4 0x080d4f8b in import_submodule (mod=0xb482550c, subname=0xbf8577e4 > "_flinalg", fullname=0xbf8577d7 "scipy.linalg._flinalg") > at ../Python/import.c:2276 > #5 0x080d540a in load_next (mod=0xb482550c, altmod=0x8124c58, > p_name=, > buf=0xbf8577d7 "scipy.linalg._flinalg", p_buflen=0xbf8577cc) at > ../Python/import.c:2096 > #6 0x080d563e in PyImport_ImportModuleEx (name=0xb39b5f54 "_flinalg", > globals=0xb39a11c4, locals=0xb39a11c4, fromlist=0x8124c58) > at ../Python/import.c:1931 > #7 0x080af9e0 in builtin___import__ (self=0x0, args=0xb39b81bc) at > ../Python/bltinmodule.c:45 > #8 0x08058c27 in PyObject_Call (func=0x8, arg=0xb39b81bc, kw=0x0) at > ../Objects/abstract.c:1795 > #9 0x080b395c in PyEval_CallObjectWithKeywords (func=0xb7debd6c, > arg=0xb39b81bc, kw=0x0) at ../Python/ceval.c:3435 > #10 0x080b77b3 in PyEval_EvalFrame (f=0x8424b1c) at ../Python/ceval.c:2020 > #11 0x080ba4b9 in PyEval_EvalCodeEx (co=0xb39b6aa0, globals=0xb39a11c4, > locals=0xb39a11c4, args=0x0, argcount=0, kws=0x0, > kwcount=0, defs=0x0, defcount=0, closure=0x0) at ../Python/ceval.c:2741 > #12 0x080ba527 in PyEval_EvalCode (co=0xb39b6aa0, globals=0xb39a11c4, > locals=0xb39a11c4) at ../Python/ceval.c:484 > #13 0x080d3d8c in PyImport_ExecCodeModuleEx (name=0xbf85abb7 > "scipy.linalg.flinalg", co=0xb39b6aa0, > pathname=0xbf858b07 > "/usr/lib/python2.4/site-packages/scipy/linalg/flinalg.pyc") at > ../Python/import.c:636 > #14 0x080d4406 in load_source_module (name=0xbf85abb7 > "scipy.linalg.flinalg", > pathname=0xbf858b07 > "/usr/lib/python2.4/site-packages/scipy/linalg/flinalg.pyc", fp= optimized out>) > at ../Python/import.c:915 > #15 0x080d4f8b in import_submodule (mod=0xb482550c, subname=0xbf85abc4 > "flinalg", fullname=0xbf85abb7 "scipy.linalg.flinalg") > at ../Python/import.c:2276 > #16 0x080d540a in load_next (mod=0xb482550c, altmod=0x8124c58, > p_name=, > buf=0xbf85abb7 "scipy.linalg.flinalg", p_buflen=0xbf85abac) at > ../Python/import.c:2096 > #17 0x080d563e in PyImport_ImportModuleEx (name=0xb39b5e54 "flinalg", > globals=0xb481ef0c, locals=0xb481ef0c, fromlist=0xb48215cc) > at ../Python/import.c:1931 > #18 0x080af9e0 in builtin___import__ (self=0x0, args=0xb39b80f4) at > ../Python/bltinmodule.c:45 > #19 0x08058c27 in PyObject_Call (func=0x8, arg=0xb39b80f4, kw=0x0) at > ../Objects/abstract.c:1795 > #20 0x080b395c in PyEval_CallObjectWithKeywords (func=0xb7debd6c, > arg=0xb39b80f4, kw=0x0) at ../Python/ceval.c:3435 > #21 0x080b77b3 in PyEval_EvalFrame (f=0x81a7374) at ../Python/ceval.c:2020 > #22 0x080ba4b9 in PyEval_EvalCodeEx (co=0xb39b6960, globals=0xb481ef0c, > locals=0xb481ef0c, args=0x0, argcount=0, kws=0x0, > kwcount=0, defs=0x0, defcount=0, closure=0x0) at ../Python/ceval.c:2741 > #23 0x080ba527 in 
PyEval_EvalCode (co=0xb39b6960, globals=0xb481ef0c, > locals=0xb481ef0c) at ../Python/ceval.c:484 > #24 0x080d3d8c in PyImport_ExecCodeModuleEx (name=0xbf85df97 > "scipy.linalg.basic", co=0xb39b6960, > pathname=0xbf85bee7 > "/usr/lib/python2.4/site-packages/scipy/linalg/basic.pyc") at > ../Python/import.c:636 > #25 0x080d4406 in load_source_module (name=0xbf85df97 "scipy.linalg.basic", > pathname=0xbf85bee7 > "/usr/lib/python2.4/site-packages/scipy/linalg/basic.pyc", fp=<value optimized out>) > at ../Python/import.c:915 > #26 0x080d4f8b in import_submodule (mod=0xb482550c, subname=0xbf85dfa4 > "basic", fullname=0xbf85df97 "scipy.linalg.basic") > at ../Python/import.c:2276 > #27 0x080d540a in load_next (mod=0xb482550c, altmod=0x8124c58, > p_name=<value optimized out>, > buf=0xbf85df97 "scipy.linalg.basic", p_buflen=0xbf85df8c) at > ../Python/import.c:2096 > ---Type <return> to continue, or q <return> to quit--- > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev >

From matthieu.brucher at gmail.com Thu Mar 8 02:59:52 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 8 Mar 2007 08:59:52 +0100 Subject: [SciPy-dev] Proposal for more generic optimizers (posted before on scipy-user) In-Reply-To: References: Message-ID: Hi, Here is a little proposal for the simple optimizer - I intend to make a damped one if the structure I propose is OK -. What is in the package:

- Rosenbrock is the Rosenbrock function, with gradient and hessian methods, the example
- Optimizer is the core optimizer, the skeleton
- StandardOptimizer is the standard optimizer - not very complicated in fact - with six optimization examples
- Criterions is a file with three simple convergence criterions, monotony, relative error and absolute error. More complex ones can be created.
- GradientStep is a class that computes the gradient step of a function at a specific point
- NewtonStep is the same as the latter, but with a Newton step.
- NoAppendList is an empty list, not derived from list, but it could be done if needed. The goal was to be able to save every set of parameters if needed, by passing a list or a container to Optimizer

Some may wonder why the step is a class and not a function. It could be a function, as I use functors, but I want a class so that state-based steps can be used as well, such as the Levenberg-Marquardt one, for instance. Now, it is not very complicated, just a bunch of classes that are really simple, but if this kind of structure is interesting, I'd like some comments so that it can be made more Pythonic. Matthieu 2007/2/27, Matthieu Brucher : > > Hi, > > I'm migrating toward Python for some weeks, but I do not find the tools I > had to develop for my PhD in SciPy at the moment. I can't, for instance, > find an elegant way to save the set of parameters used in an optimization > for the standard algorithms. What is more, I think they can be more generic. > > What I did in C++, and I'd like your opinion about porting it to Python, > was to define a standard optimizer with no iteration loop - iterate was a > pure virtual method called by an optimize method -. This iteration loop was > then defined for standard optimizer or damped optimizer. Each time, the > parameters tested could be saved. Then, the step that had to be taken was an > instance of a class that used a gradient step, a Newton step, ... and the > same was used for the stopping criterion. The function was a class that > defined value, gradient, hessian, ... if needed.
> For instance, a simplified instruction could have been:
>
>     Optimizer* optimizer = new StandardOptimizer<...>/* template arguments not relevant in Python */(function, GradientStep(), SimpleCriterion(NbMaxIterations), step, saveParameters);
>     optimizer->optimize();
>     optimizer->getOptimalParameters();
>
> The "step" argument was a constant by which the computed step had to be > multiplied; by default, it was 1. > > I know that this kind of writing is not as clear and lightweight as the > current one, which is used by Matlab too. But perhaps giving more latitude > to the user can be proposed with this system. If people want, I can try > making a real Python example... > > Matthieu > > P.S.: sorry for the multiposting, I forgot that there were two ML for > SciPy :(

-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: optimizerProposal.tar.gz Type: application/x-gzip Size: 2496 bytes Desc: not available URL:

From cookedm at physics.mcmaster.ca Thu Mar 8 04:48:42 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Thu, 8 Mar 2007 04:48:42 -0500 Subject: [SciPy-dev] bug in scipy.stats.norm.cdf ? In-Reply-To: <88e473830703041608u4997e40axdb6bcf27f287f7ae@mail.gmail.com> References: <88e473830703041353h7e978fb6k370d2994d9f666cc@mail.gmail.com> <88e473830703041427v7a30229cqcbc27c53254f0861@mail.gmail.com> <45EB4A84.3080507@enthought.com> <88e473830703041536p4ebb795di6bf6246094b9a40e@mail.gmail.com> <88e473830703041608u4997e40axdb6bcf27f287f7ae@mail.gmail.com> Message-ID: <279E5257-8AC2-45C4-983B-9C0815B7A1A4@physics.mcmaster.ca> On Mar 4, 2007, at 19:08 , John Hunter wrote: > On 3/4/07, John Hunter wrote: > >> Yep, it's an isnan bug. If I add a dummy function "erfjdh" to ndtr.c >> and _cephesmodule.c > > latest update: the ndtr.c module is using cephes_isnan defined in > cephes/isnan.c. With some experimenting, it appears I am falling into > the "sizeof(int)==4 and #ifdef IBMPC" branch of that function, when I > should be in the "sizeof(int)==4 and #ifdef MIEEE" branch as far as I > can determine. > > If I manually define MIEEE in isnan.h, I avoid the unwanted isnans. > Looking at mconf.h where these macros are defined, it looks like I am > not hitting the branch > > #elif defined(sel) || defined(pyr) || defined(mc68000) || defined (m68k) || \ > defined(is68k) || defined(tahoe) || defined(ibm032) || \ > defined(ibm370) || defined(MIPSEB) || defined(_MIPSEB) || \ > defined(__convex__) || defined(DGUX) || defined(hppa) || \ > defined(apollo) || defined(_CRAY) || defined(__hppa) || \ > defined(__hp9000) || defined(__hp9000s300) || \ > defined(__hp9000s700) || defined(__AIX) || defined(_AIX) ||\ > defined(__pyr__) || defined(__mc68000__) || defined(__sparc) ||\ > defined(_IBMR2) || defined (BIT_ZERO_ON_LEFT) > #define MIEEE 1 /* Motorola IEEE, high order words come first */ > #define BIGENDIAN 1 > > Is there some additional "defined" check that needs to be added here? > Are other mac power pc users seeing this problem? Give it a try now; I removed all that and replaced it with using WORDS_BIGENDIAN, defined in Python's pyconfig.h. (As a side effect, I removed support for VAX and PDP-11; somehow, I'm not too concerned about that...) -- |>|\/|< /------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca -------------- next part -------------- A non-text attachment was scrubbed...
Name: PGP.sig Type: application/pgp-signature Size: 186 bytes Desc: This is a digitally signed message part URL: From aisaac at american.edu Thu Mar 8 08:35:34 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 8 Mar 2007 08:35:34 -0500 Subject: [SciPy-dev] Proposal for more generic optimizers (posted before on scipy-user) In-Reply-To: References: Message-ID: On Thu, 8 Mar 2007, Matthieu Brucher apparently wrote: > Here is a little proposal for the simple optimizer - I intend to make a > damped one if the structure I propose is OK -. > What is in the package : > - Rosenbrock is the Rosenbrock function, with gradient and hessian method, > the example > - Optimizer is the core optimizer, the skeletton > - StandardOptimizer is the standard optimizer - not very complicated in fact > - with six optimization examples > - Criterions is a file with three simple convergence criterions, monotony, > relative error and absolute error. More complex can be created. > - GradientStep is a class taht computes the gradient step of a function at a > specific point > - NewtonStep is the same as the latter, but with a Newton step. > - NoAppendList is an empty list, not derived from list, but it could be done > if needed. The goal was to be able to save every set of parameters if > needed, by passing a list or a container to Optimizer Hard to say since there is no interface description, but a couple reactions ... - isolate the examples (probably you already did) perhaps in a subdirectory - possibly package the step classes together - don't introduce the NoAppendList class unless it is really needed, and it doesn't seem to be. The Optimizer can just create a standard container when needed to keep track of as much of the optimization history as might be desired. (Thinking about an interface to express various desires might be useful.) - rename Criterions, perhaps to Criteria or ConvergeTest Cheers, Alan Isaac From matthieu.brucher at gmail.com Thu Mar 8 08:51:01 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 8 Mar 2007 14:51:01 +0100 Subject: [SciPy-dev] Proposal for more generic optimizers (posted before on scipy-user) In-Reply-To: References: Message-ID: Thanks for the comments ! Hard to say since there is no interface description, In fact, the interface description is in the files, but I can put a global description here, after I used your comments to enhance the code :) but a couple reactions ... > > - isolate the examples (probably you already did) perhaps in > a subdirectory Will be done - possibly package the step classes together This can be done, also for the optimizer type and the criterias as the goal is to be able to use every optimizer with every step and every criteria. - don't introduce the NoAppendList class unless it is really > needed, and it doesn't seem to be. The Optimizer can just > create a standard container when needed to keep track of > as much of the optimization history as might be desired. > (Thinking about an interface to express various desires > might be useful.) The purpose of this list was to not make a test inside the iteration loop. In the first idea I had, a test was made to know if the parameters were to be saved or not. I'm balanced between the two solutions. - rename Criterions, perhaps to Criteria or ConvergeTest Yes, sorry, I did not use the good translation. OK, I'll work on the update. Matthieu -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From aisaac at american.edu Thu Mar 8 09:07:12 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 8 Mar 2007 09:07:12 -0500 Subject: [SciPy-dev] Proposal for more generic optimizers (posted before on scipy-user) In-Reply-To: References: Message-ID: On Thu, 8 Mar 2007, Matthieu Brucher apparently wrote: > The purpose of this list was to not make a test inside the > iteration loop. In the first idea I had, a test was made > to know if the parameters were to be saved or not. I'm > balanced between the two solutions. You can skip a test inside the loop and use a default method that is always called.

    def record_history(self, etcetera):
        pass

Allow alternative functions to be passed by the user as part of the Optimizer object initialization so that overriding is not necessary. Of course the 'etcetera' part is important ... I can imagine wanting lots of pieces of the history. Cheers, Alan Isaac

From ravi.rajagopal at amd.com Thu Mar 8 09:19:39 2007 From: ravi.rajagopal at amd.com (Ravikiran Rajagopal) Date: Thu, 8 Mar 2007 09:19:39 -0500 Subject: [SciPy-dev] crash with 'import scipy.linalg' and 'import scipy.io' In-Reply-To: <45EF9606.7000907@curioussymbols.com> References: <45EF9194.8090902@curioussymbols.com> <45EF9606.7000907@curioussymbols.com> Message-ID: <200703080919.39687.ravi@ati.com> On Wednesday 07 March 2007 11:50:14 pm John Pye wrote: > Perhaps it's a categorical "you can't use scipy in embedded python". But > hopefully not. That is not the case. I have been using scipy in embedded python (in embedded ipython!) for a while now to provide a scripting interface to my SystemC code. It uses matplotlib to plot stuff, and I have not had any issues with it for a long time. I am not doing exactly what you seem to be doing since I am combining all of the pieces of code using boost.python. Regarding your specific problem, it is hard to say without looking at the code. Did you import numpy in your C code prior to importing it in python code? If so, did you call import_array? Regards, Ravi

From matthieu.brucher at gmail.com Thu Mar 8 09:19:51 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 8 Mar 2007 15:19:51 +0100 Subject: [SciPy-dev] Proposal for more generic optimizers (posted before on scipy-user) In-Reply-To: References: Message-ID: > You can skip a test inside the loop > and use a default method that is always called. > > def record_history(self, etcetera): > pass > > Allow alternative functions to be passed by the user as part > of the Optimizer object initialization so that overriding is > not necessary. > > Of course the 'etcetera' part is important ... I can > imagine wanting lots of pieces of the history. OK, I see what you mean, and it is far more generic this way, so I will take this road. For the etcetera argument, I suppose a **parameter is a good choice, and passing every variable of the loop to record_history is what you mean by "I can imagine wanting lots of pieces of the history." My last question before returning to work would be where to find the coding conventions for SciPy; I did not find them on the website, but I must have missed something... Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL:
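[Putting the pieces of this exchange together, the interface under discussion might look something like the following sketch. It is illustrative only: the class and helper names echo Matthieu's package description, the record_history hook is Alan's suggestion, and none of this is actual sandbox code:]

    class StandardOptimizer(object):
        """Skeleton optimizer with an injected step, convergence
        criterion, and history hook."""
        def __init__(self, function, step, criterion, x0, record_history=None):
            self.function = function
            self.step = step            # e.g. a GradientStep instance
            self.criterion = criterion  # e.g. a SimpleCriterion instance
            self.x = x0
            # A do-nothing default avoids any test inside the loop.
            self.record_history = record_history or (lambda **kwargs: None)

        def optimize(self):
            iteration = 0
            while not self.criterion(iteration, self.x):
                self.x = self.x + self.step(self.function, self.x)
                self.record_history(iteration=iteration, parameters=self.x,
                                    value=self.function(self.x))
                iteration += 1
            return self.x

[A caller who wants the full history can then pass, e.g., record_history=lambda **kwargs: history.append(kwargs) with history a plain list -- the role the NoAppendList default was playing.]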
From aisaac at american.edu Thu Mar 8 11:45:32 2007 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 8 Mar 2007 11:45:32 -0500 Subject: [SciPy-dev] Proposal for more generic optimizers (posted before on scipy-user) In-Reply-To: References: Message-ID: On Thu, 8 Mar 2007, Matthieu Brucher apparently wrote: > For the etcetera argument, I suppose a **parameter is > a good choice It is the obvious choice, but I am not sure what the best approach will be. Presumably implementation will be revealing. > and passing every variable of the loop to record_history > is what you mean by "I can imagine wanting lots of pieces > of the history." Yes. Cheers, Alan Isaac

From matthieu.brucher at gmail.com Thu Mar 8 14:21:44 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 8 Mar 2007 20:21:44 +0100 Subject: [SciPy-dev] Proposal for more generic optimizers (posted before on scipy-user) In-Reply-To: References: Message-ID: > > For the etcetera argument, I suppose a **parameter is > > a good choice > > It is the obvious choice, but I am not sure what the best > approach will be. Presumably implementation will be > revealing. Some of the arguments cannot be decided beforehand. For instance, for a damped optimizer, which sets of parameters should be saved? Every one of them, including each set tested during the minimization of the function, or only the one at the end of the loop? I'll make a simple proposal for the interface with the modifications I added since your email. Matthieu -------------- next part -------------- An HTML attachment was scrubbed... URL:

From john at curioussymbols.com Thu Mar 8 18:53:36 2007 From: john at curioussymbols.com (John Pye) Date: Fri, 09 Mar 2007 10:53:36 +1100 Subject: [SciPy-dev] crash with 'import scipy.linalg' and 'import scipy.io' In-Reply-To: <200703080919.39687.ravi@ati.com> References: <45EF9194.8090902@curioussymbols.com> <45EF9606.7000907@curioussymbols.com> <200703080919.39687.ravi@ati.com> Message-ID: <45F0A200.6010503@curioussymbols.com> Hi Ravi Thanks for the reply, and I'm pleased to hear that you've had success with what I'm trying to do. Did that backtrace mean anything to you? The only other place numpy is used in my code is in the highest level, which is a python GUI written on top of my SWIG-wrapped C library. The latest thing is that I have attempted to make some python callbacks from *plugins* in the low-level C code, which, being plugins, didn't have access to all the SWIG wrappings. So numpy is being imported twice, but in different 'frames'. I bet that something scipy/numpy assumes about global variables is what's falling over. Other modules have imported OK: sys, io, subprocess. Just not scipy.io or scipy.linalg. Here is the code where python gets launched in the plugin: https://pse.cheme.cmu.edu/svn-view/ascend/code/branches/extfn/models/johnpye/extpy/extpy.c?rev=1267&view=markup For the moment, I have been able to work around the problem by instead using a subprocess to do the scipy work, but it's not ideal. If you had any further thoughts that would be great. Cheers JP Ravikiran Rajagopal wrote: > On Wednesday 07 March 2007 11:50:14 pm John Pye wrote: >> Perhaps it's a categorical "you can't use scipy in embedded python". But >> hopefully not. > That is not the case. I have been using scipy in embedded python (in embedded > ipython!) for a while now to provide a scripting interface to my SystemC > code.
It uses matplotlib to plot stuff, and I have not had any issues with it > > for a long time. I am not doing exactly what you seem to be doing since I am > > combining all of the pieces of code using boost.python. > > > > Regarding your specific problem, it is hard to say without looking at the > > code. Did you import numpy in your C code prior to importing it in python > > code? If so, did you call import_array? > > > > Regards, > > Ravi > > > > > > _______________________________________________ > > Scipy-dev mailing list > > Scipy-dev at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > From robert.kern at gmail.com Thu Mar 8 19:02:41 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 08 Mar 2007 18:02:41 -0600 Subject: [SciPy-dev] crash with 'import scipy.linalg' and 'import scipy.io' In-Reply-To: <45F0A200.6010503@curioussymbols.com> References: <45EF9194.8090902@curioussymbols.com> <45EF9606.7000907@curioussymbols.com> <200703080919.39687.ravi@ati.com> <45F0A200.6010503@curioussymbols.com> Message-ID: <45F0A421.6050005@gmail.com> John Pye wrote: > Hi Ravi > > Thanks for the reply, and I'm pleased to hear that you've had success > with what I'm trying to do. Did that backtrace mean anything to you? > > The only other place numpy is used in my code is in the highest-level, > which is a python GUI written on top of my SWIG-wrapped C library. The > latest thing is that I have attempted to make some python callbacks from > *plugins* in the low-level C code, which, being plugins, didn't have > access to all the SWIG wrappings. So numpy is being imported twice but > in different 'frames'. I bet that something scipy/numpy is assuming with > regard to global variables is what's falling over. Other modules have > imported OK: sys, io, subprocess. Just not scipy.io or scipy.linalg. Or anything else that uses numpy. numpy does not currently allow being imported from multiple interpreters in the same process because of those global variables. Unfortunately, those variables can't be trivially removed (although if you figure out a workaround, I would love to hear it). However, embedding a Python interpreter in a C extension to a Python program is probably not a good idea regardless, and I'm not entirely sure why you would need to do so. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From john at curioussymbols.com Thu Mar 8 20:08:59 2007 From: john at curioussymbols.com (John Pye) Date: Fri, 09 Mar 2007 12:08:59 +1100 Subject: [SciPy-dev] crash with 'import scipy.linalg' and 'import scipy.io' In-Reply-To: <45F0A421.6050005@gmail.com> References: <45EF9194.8090902@curioussymbols.com> <45EF9606.7000907@curioussymbols.com> <200703080919.39687.ravi@ati.com> <45F0A200.6010503@curioussymbols.com> <45F0A421.6050005@gmail.com> Message-ID: <45F0B3AB.4080908@curioussymbols.com> Hi Robert Thanks for the reply. It sounds like my guess about the global variables in numpy was right then. Globals are bad. Embedding python in this case was a necessary evil to do with the legacy architecture: the place where the callbacks needed to be made was inside a secondary DLL/SO that didn't have access to the SWIG wrapper layer. So I needed to fire up embedded python and grab hooks to values (GUI objects, for example) in the original interpreter via a global variable mechanism. 
Globals are bad, again, but at least it didn't crash :-) Perhaps the way I did this might be of interest. I used a hash table function to register global variables used in one interpreter frame so that they could be retrieved in the other frame. If numpy is creating some variables that might be required in other frames, perhaps they should be registered somewhere, if they can't be removed? Cheers JP Robert Kern wrote: > John Pye wrote: > >> Hi Ravi >> >> Thanks for the reply, and I'm pleased to hear that you've had success >> with what I'm trying to do. Did that backtrace mean anything to you? >> >> The only other place numpy is used in my code is in the highest-level, >> which is a python GUI written on top of my SWIG-wrapped C library. The >> latest thing is that I have attempted to make some python callbacks from >> *plugins* in the low-level C code, which, being plugins, didn't have >> access to all the SWIG wrappings. So numpy is being imported twice but >> in different 'frames'. I bet that something scipy/numpy is assuming with >> regard to global variables is what's falling over. Other modules have >> imported OK: sys, io, subprocess. Just not scipy.io or scipy.linalg. >> > > Or anything else that uses numpy. numpy does not currently allow being imported > from multiple interpreters in the same process because of those global > variables. Unfortunately, those variables can't be trivially removed (although > if you figure out a workaround, I would love to hear it). > > However, embedding a Python interpreter in a C extension to a Python program is > probably not a good idea regardless, and I'm not entirely sure why you would > need to do so. > > From robert.kern at gmail.com Thu Mar 8 22:06:58 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 08 Mar 2007 21:06:58 -0600 Subject: [SciPy-dev] crash with 'import scipy.linalg' and 'import scipy.io' In-Reply-To: <45F0B3AB.4080908@curioussymbols.com> References: <45EF9194.8090902@curioussymbols.com> <45EF9606.7000907@curioussymbols.com> <200703080919.39687.ravi@ati.com> <45F0A200.6010503@curioussymbols.com> <45F0A421.6050005@gmail.com> <45F0B3AB.4080908@curioussymbols.com> Message-ID: <45F0CF52.3050804@gmail.com> John Pye wrote: > Hi Robert > > Thanks for the reply. It sounds like my guess about the global variables > in numpy was right then. Globals are bad. Yes. We're not the only ones with the problem, though; the interpreter has the same problem: http://effbot.org/pyfaq/can-t-we-get-rid-of-the-global-interpreter-lock.htm > Embedding python in this case was a necessary evil to do with the legacy > architecture: the place where the callbacks needed to be made was inside > a secondary DLL/SO that didn't have access to the SWIG wrapper layer. So > I needed to fire up embedded python and grab hooks to values (GUI > objects, for example) in the original interpreter via a global variable > mechanism. Globals are bad, again, but at least it didn't crash :-) I'm afraid that I don't see the connection between that description of the problem and your solution. I'm sure there is one, but the description doesn't have enough details for me to see it. > Perhaps the way I did this might be of interest. I used a hash table > function to register global variables used in one interpreter frame so > that they could be retrieved in the other frame. If numpy is creating > some variables that might be required in other frames, perhaps they > should be registered somewhere, if they can't be removed? 
I'm happy to consider something that works and doesn't interfere with use in a single interpreter.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From bratona at yahoo.co.uk Thu Mar 8 22:32:14 2007
From: bratona at yahoo.co.uk (Adam Malinowski)
Date: Fri, 9 Mar 2007 03:32:14 +0000 (UTC)
Subject: [SciPy-dev] problem compiling arpack library from the sandbox
Message-ID: 

Hi all,

I have compiled numpy and scipy from the latest svn with no problem, but on attempting to install the arpack library I get the following result:

---
running install
running build
running config_fc
running build_src
building library "arpack" sources
building extension "arpack._arpack" sources
f2py options: []
adding 'build\src.win32-2.5\fortranobject.c' to sources.
adding 'build\src.win32-2.5' to include_dirs.
adding 'build\src.win32-2.5\build\src.win32-2.5\_arpack-f2pywrappers.f' to sources.
building data_files sources
running build_py
running build_clib
No module named msvccompiler in numpy.distutils, trying from distutils..
customize MSVCCompiler
customize MSVCCompiler using build_clib
customize GnuFCompiler
customize GnuFCompiler
customize GnuFCompiler using build_clib
building 'arpack' library
compiling Fortran sources
Fortran f77 compiler: C:\cygwin\bin\g77.exe -g -Wall -fno-second-underscore -mno-cygwin -O3 -funroll-loops -march=pentium4 -mmmx -msse2 -msse -fomit-frame-pointer -malign-double
compile options: '-c'
g77.exe:f77: .\ARPACK\SRC\cgetv0.f
.\ARPACK\SRC\cgetv0.f: In subroutine `cgetv0':
.\ARPACK\SRC\cgetv0.f:123:
   include 'debug.h'
   ^
Unable to open INCLUDE file `debug.h' at (^)
.\ARPACK\SRC\cgetv0.f:124:
   include 'stat.h'
   ^
Unable to open INCLUDE file `stat.h' at (^)
error: Command "C:\cygwin\bin\g77.exe -g -Wall -fno-second-underscore -mno-cygwin -O3 -funroll-loops -march=pentium4 -mmmx -msse2 -msse -fomit-frame-pointer -malign-double -c -c .\ARPACK\SRC\cgetv0.f -o build\temp.win32-2.5\ARPACK\SRC\cgetv0.o" failed with exit status 1
---

I'm running Windows 2000 and compiling with cygwin. Please ask if any more information is required. Any help would be very much appreciated.

Thanks,
Adam

From matthieu.brucher at gmail.com Fri Mar 9 04:58:52 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Fri, 9 Mar 2007 10:58:52 +0100
Subject: [SciPy-dev] Proposal for more generic optimizers (posted before on scipy-user)
In-Reply-To: References: Message-ID: 

Here is my new proposal. The interface is separated into three modules:

- Criteria contains the convergence criteria:
  - MonotonyCriterion, constructed with an iteration limit and an error level
  - RelativeValueCriterion, constructed with an iteration limit and an error level
  - AbsoluteValueCriterion, constructed with an iteration limit and an error level

  I think the names are self-explanatory. The interface of these criteria is a simple __call__ method that takes the current number of iterations, the last values of the cost function and the corresponding points.

- Step contains the steps that the optimizer can take in the process.
  Their interface is simple: a __call__ method with a cost function as an argument, as well as the point at which the step must be computed.
  - GradientStep requires the cost function to implement the gradient method
  - NewtonStep requires the cost function to implement the gradient and the hessian methods

- Optimizer contains the optimizer skeleton as well as a standard optimizer:
  - Optimizer implements the optimize method, which calls iterate until the criterion is satisfied. Parameters are the cost function and the criterion; it can take a record argument that will be called on each iteration with the information of the step (point, value, iteration, step, ...), and it can take a stepSize parameter, a factor that will be multiplied by the step, useful for taking small steps in a steepest/gradient descent.
  - StandardOptimizer implements the standard optimizer, that is, the new point is the last point + the step. The additional arguments are an instance of a step (GradientStep or NewtonStep at this point) and... the starting point. Perhaps these arguments should be put in the Optimizer.

A cost function that must be optimized must/can have:
- a __call__ method with a point as argument
- a gradient method with the same argument, if needed
- a hessian method, same argument, if needed

Other steps I use in my research need additional methods, but for a simple proposal there is no need for them.

Some examples are provided in the Rosenbrock.py file: it implements the Rosenbrock function, with __call__, gradient and hessian methods. Then 6 optimizations are made; some converge to the real minimum, others don't because of the choice of the criterion (the MonotonyCriterion is not very useful here, but for a damped or a stochastic optimizer it is a pertinent choice). AppendList.py is an example for the "record" parameter; it saves only the points used in the optimization.

2007/3/8, Matthieu Brucher :
> > > For the etcetera argument, I suppose a **parameter is a good choice
> >
> > It is the obvious choice, but I am not sure what the best approach will be. Presumably implementation will be revealing.
>
> Some of the arguments cannot be decided beforehand. For instance, for a damped optimizer, which sets of parameters should be saved ? Everyone, including each that is tested for minimization of the function, or only the one at the end of the loop ?
>
> I'll make a simple proposal for the interface with the modifications I added since your email.
>
> Matthieu
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: optimizerProposal_01.tar.gz
Type: application/x-gzip
Size: 5502 bytes
Desc: not available
URL: 

From pgmdevlist at gmail.com Fri Mar 9 06:09:58 2007
From: pgmdevlist at gmail.com (Pierre GM)
Date: Fri, 9 Mar 2007 06:09:58 -0500
Subject: [SciPy-dev] Combining Numpy/C/fortran ?
Message-ID: <200703090609.58667.pgmdevlist@gmail.com>

Folks,
I'm trying to write an interface to numpy for a piece of mixed C/fortran code, and I'm getting nowhere fast.

Roughly, the code is organized as such: a first C file defines some base structures (involving long and double arrays) and initializes their members by calling Fortran routines, either directly or through a set of C functions defined in a second file. I need to get access to the initialized structures.
I first started to get rid of the C structures by redefining them in Python, so that I would only have to call functions with array arguments (and not structures) and use f2py for the interface. However, this is not exactly a bug-safe approach. The best would be to get an interface Python/C structure, so that I could send ndarrays as inputs and get ndarrays as outputs. What is the easiest way to write such an interface ? Thanks a lot for any idea... From john at curioussymbols.com Fri Mar 9 06:44:43 2007 From: john at curioussymbols.com (John Pye) Date: Fri, 09 Mar 2007 22:44:43 +1100 Subject: [SciPy-dev] Combining Numpy/C/fortran ? In-Reply-To: <200703090609.58667.pgmdevlist@gmail.com> References: <200703090609.58667.pgmdevlist@gmail.com> Message-ID: <45F148AB.3040707@curioussymbols.com> Maybe this is what you're looking for? http://www.scipy.org/Cookbook/SWIG_and_NumPy Cheers JP Pierre GM wrote: > Folks, > I'm trying to write an interface to numpy for a piece of mixed C/fortran code, > and I'm getting nowhere fast. > > Roughly, the code is organized as such: a first C file defines some base > structures (involving long and double arrays) and initializes their members > by calling Fortran routines, either directly or through a set of C functions > defined in a second file. I need to get access to the initialized structures. > > I first started to get rid of the C structures by redefining them in Python, > so that I would only have to call functions with array arguments (and not > structures) and use f2py for the interface. However, this is not exactly a > bug-safe approach. The best would be to get an interface Python/C structure, > so that I could send ndarrays as inputs and get ndarrays as outputs. > > What is the easiest way to write such an interface ? > > Thanks a lot for any idea... > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From pgmdevlist at gmail.com Fri Mar 9 07:24:45 2007 From: pgmdevlist at gmail.com (Pierre GM) Date: Fri, 9 Mar 2007 07:24:45 -0500 Subject: [SciPy-dev] Combining Numpy/C/fortran ? In-Reply-To: <45F148AB.3040707@curioussymbols.com> References: <200703090609.58667.pgmdevlist@gmail.com> <45F148AB.3040707@curioussymbols.com> Message-ID: <200703090724.45977.pgmdevlist@gmail.com> On Friday 09 March 2007 06:44:43 John Pye wrote: > Maybe this is what you're looking for? > http://www.scipy.org/Cookbook/SWIG_and_NumPy Almost. I guess that should do the trick as long as I stay on the C side. However, how should I write the setup.py so that the required fortran files are compiled properly ? I haven't been able to find any example... And as I'm mixing C and fortran, I have to declare the arrays with an "order='F'", right ? From john at curioussymbols.com Fri Mar 9 18:41:42 2007 From: john at curioussymbols.com (John Pye) Date: Sat, 10 Mar 2007 10:41:42 +1100 Subject: [SciPy-dev] Combining Numpy/C/fortran ? In-Reply-To: <200703090724.45977.pgmdevlist@gmail.com> References: <200703090609.58667.pgmdevlist@gmail.com> <45F148AB.3040707@curioussymbols.com> <200703090724.45977.pgmdevlist@gmail.com> Message-ID: <45F1F0B6.7000909@curioussymbols.com> If you find difficulty in getting setup.py to do what you want, then try SCons (http://www.scons.org). It is python based, and supports linking fortran, SWIG, C, C++, (and lots of other types of code) very naturally. 
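(A rough, untested sketch of the SConstruct preamble that the env.SharedLibrary one-liner below assumes; the Python include path and the SWIG flags are assumptions added here, not part of the original mail:

    # SConstruct -- minimal sketch
    import distutils.sysconfig

    env = Environment(SWIGFLAGS=['-python'],                           # ask SWIG for Python wrappers
                      CPPPATH=[distutils.sysconfig.get_python_inc()],  # find Python.h for the wrapper
                      SHLIBPREFIX='')   # so the target is _mymodule.so, not lib_mymodule.so

SCons detects installed SWIG and Fortran compilers by itself, so no extra tool setup is usually needed.)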
If you want to provide python access directly to a fortran function, the approach with SWIG would be to write a *C header file* that described the fortran function, then embed the header file in your SWIG .i file. Then something like the following should just work: env.SharedLibrary('_mymodule.so', ['myswig.i','myfortran.f']) This might be possible with setup.py; I'm not sure. Cheers JP Pierre GM wrote: > On Friday 09 March 2007 06:44:43 John Pye wrote: > >> Maybe this is what you're looking for? >> http://www.scipy.org/Cookbook/SWIG_and_NumPy >> > > Almost. I guess that should do the trick as long as I stay on the C side. > However, how should I write the setup.py so that the required fortran files > are compiled properly ? I haven't been able to find any example... > And as I'm mixing C and fortran, I have to declare the arrays with > an "order='F'", right ? > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From millman at berkeley.edu Fri Mar 9 20:29:43 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Fri, 9 Mar 2007 17:29:43 -0800 Subject: [SciPy-dev] Google Summer of Code 2007 Message-ID: Hey, I was wondering if Enthought would be interested in signing up as a mentor organization: http://groups.google.com/group/google-summer-of-code-announce/web/gsoc-mentor-organization-application-how-to It is a fair amount of work. Fernando Perez explained that PSF wants to focus on core Python development this year. More specifically after google tells them how many slots they get, the PSF plans to reserve a certain number of slots set aside specifically for core projects. Given the increase in interest in the summer of code, the PSF may not have many slots for scientific python. The reason this is important is that I have been talking with two students, David Cournapeau and Taylor Berg, about potential projects. They would both be related to machine learning and I think would be very useful contributions to SciPy. Fernando Perez and Brian Granger are also thinking about mentoring Min on an IPython project. I don't know if anyone else is thinking about other possible SciPy-related SoC projects. Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From fperez.net at gmail.com Sat Mar 10 02:56:26 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 10 Mar 2007 00:56:26 -0700 Subject: [SciPy-dev] Google Summer of Code 2007 In-Reply-To: References: Message-ID: On 3/9/07, Jarrod Millman wrote: > Hey, > > I was wondering if Enthought would be interested in signing up as a > mentor organization: > http://groups.google.com/group/google-summer-of-code-announce/web/gsoc-mentor-organization-application-how-to > It is a fair amount of work. > > Fernando Perez explained that PSF wants to focus on core Python > development this year. More specifically after google tells them how > many slots they get, the PSF plans to reserve a certain number of > slots set aside specifically for core projects. Given the increase in > interest in the summer of code, the PSF may not have many slots for > scientific python. As a side note, W. Stein just mentioned he has applied for SAGE to be a sponsor organization. If SAGE is accepted, at least there will be some guaranteed funding for python-related scientific projects through that. 
I still think it would probably be a good idea for Scipy to play as well, because I'm afraid that otherwise we'll be elbowed out from the PSF tent. And we can't wait to know what the situation will be (the applications for sponsor organizations are due very soon, since they'll reply with final results on the 14th).

I should mention that ipython /may/ be able to squeeze in under the PSF, given how (despite being hosted at scipy.org) it's a project with a wide enough audience that we may be able to get a PSF slot. But I doubt that 3 or 4 scipy projects would be available.

Obviously for this to work there would need to be:

1. Interest from the scipy core team
2. Sufficient mentors for various possible projects.

Regards,
f

From aisaac at american.edu Sun Mar 11 14:26:44 2007
From: aisaac at american.edu (Alan G Isaac)
Date: Sun, 11 Mar 2007 14:26:44 -0400
Subject: [SciPy-dev] Proposal for more generic optimizers (posted before on scipy-user)
In-Reply-To: References: Message-ID: 

I don't really have time to look at this for the next week, but a couple quick comments.

1. Instead of::

    if 'stepSize' in kwargs:
        self.stepSize = kwargs['stepSize']
    else:
        self.stepSize = 1.

I prefer this idiom::

    self.stepSize = kwargs.get('stepSize',1)

2. All optimizers should have a maxiter attribute, even if you wish to set a large default. This needs corresponding changes in ``optimize``.

3. It seems like ``AppendList`` is an odd and specific object. I'd stick it in the example file.

4. I understand that you want an object that provides the function, gradient, and hessian. But when you make a class for these, it is full of (effectively) class functions, which suggests just using a module. I suspect there is a design issue to think about here. This might (??) go so far as to raise questions about the usefulness of the bundling.

Cheers,
Alan Isaac

From matthieu.brucher at gmail.com Sun Mar 11 14:43:41 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Sun, 11 Mar 2007 19:43:41 +0100
Subject: [SciPy-dev] Proposal for more generic optimizers (posted before on scipy-user)
In-Reply-To: References: Message-ID: 

2007/3/11, Alan G Isaac :
> I don't really have time to look at this for the next week, but a couple quick comments.

Thanks for the comments, I hope other people will help me with it :)

> 1. Instead of::
>
>     if 'stepSize' in kwargs:
>         self.stepSize = kwargs['stepSize']
>     else:
>         self.stepSize = 1.
>
> I prefer this idiom::
>
>     self.stepSize = kwargs.get('stepSize',1)

Yes, true, I'll make the changes.

> 2. All optimizers should have a maxiter attribute, even if you wish to set a large default. This needs corresponding changes in ``optimize``.

OK, it can be done; in fact in the C++ implementation I use, the maxiter is a variable of the optimizer, not of the criterion.

> 3. It seems like ``AppendList`` is an odd and specific object. I'd stick it in the example file.

Yes, it can be put there, it was there for modularization.

> 4. I understand that you want an object that provides the function, gradient, and hessian. But when you make a class for these, it is full of (effectively) class functions, which suggests just using a module.

It's not only a module, it is a real class, with a state. For instance, an approximation function may need a set of points that will be stored in the class, and a module is not enough to describe it (a simple linear approximation with a robust cost function, for instance).

> I suspect there is a design issue to think about here. This might (??)
> go so far as to raise questions about the usefulness of the bundling.

Perhaps a more precise example of the usefulness is needed?

Matthieu
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From aisaac at american.edu Sun Mar 11 16:07:11 2007
From: aisaac at american.edu (Alan G Isaac)
Date: Sun, 11 Mar 2007 16:07:11 -0400
Subject: [SciPy-dev] Proposal for more generic optimizers (posted before on scipy-user)
In-Reply-To: References: Message-ID: 

> 4. I understand that you want an object that provides the function, gradient, and hessian. But when you make a class for these, it is full of (effectively) class functions, which suggests just using a module.

On Sun, 11 Mar 2007, Matthieu Brucher apparently wrote:
> It's not only a module, it is a real class, with a state. For instance, an approximation function may need a set of points that will be stored in the class, and a module is not enough to describe it (a simple linear approximation with a robust cost function, for instance).

This seems to be a different question? One question is the question of optimizer design: should it take as an argument an object that provides a certain mix of services, or should it take instead as arguments the functions providing those services. I am not sure, just exploring it.

I am used to optimizers that take a function, gradient procedure, and hessian procedure as arguments. I am just asking whether *requiring* these to be bundled is the right thing to do. This design would not mean that I cannot pass as arguments methods of some object. (I think I am responding to your objection here.)

Note that requiring bundling imposes an interface requirement on the bundling object. This is not true if I just provide the functions/methods as arguments.

> Perhaps a more precise example of the usefulness is needed?

Perhaps so.

Cheers,
Alan Isaac

From jstrunk at enthought.com Tue Mar 13 23:57:48 2007
From: jstrunk at enthought.com (Jeff Strunk)
Date: Tue, 13 Mar 2007 22:57:48 -0500
Subject: [SciPy-dev] network outage
Message-ID: <200703132257.48848.jstrunk@enthought.com>

Good evening,

Earlier this evening we had a network outage due to a network equipment malfunction. This outage prevented access to the Enthought and SciPy servers from about 8:45-10pm CDT. I have fixed the problem and everything should be back to normal.

I apologize for the inconvenience.

Jeff Strunk
IT Administrator
Enthought, Inc.

From bratona at yahoo.co.uk Tue Mar 13 12:43:39 2007
From: bratona at yahoo.co.uk (Adam Malinowski)
Date: Tue, 13 Mar 2007 16:43:39 +0000 (UTC)
Subject: [SciPy-dev] problem compiling arpack library from the sandbox
References: Message-ID: 

My apologies if this was posted in the wrong list; I've posted the question with a little more detail in the user list.

Adam

From c.khroulev at gmail.com Wed Mar 14 03:49:09 2007
From: c.khroulev at gmail.com (Constantine Khroulev)
Date: Wed, 14 Mar 2007 07:49:09 +0000 (UTC)
Subject: [SciPy-dev] A typo in the sparse.csc_matrix docstring
Message-ID: 

Hello all,

I'm trying to build a CSC sparse matrix, and it seems like there is a typo in the following docstring:

---------- beginning of the docstring ----------
Compressed sparse column matrix

This can be instantiated in several ways:
- csc_matrix(d) with a dense matrix d
- csc_matrix(s) with another sparse matrix s (sugar for .tocsc())
- csc_matrix((M, N), [nzmax, dtype]) to construct a container, where (M, N) are dimensions and nzmax, dtype are optional, defaulting to nzmax=sparse.NZMAX and dtype='d'.
- csc_matrix((data, ij), [(M, N), nzmax]) where data, ij satisfy: a[ij[k, 0], ij[k, 1]] = data[k]
- csc_matrix((data, row, ptr), [(M, N)]) standard CSC representation
------------ end of the docstring --------------

According to sparse.py itself, the line "a[ij[k, 0], ij[k, 1]] = data[k]" should read "a[ij[0, k], ij[1, k]] = data[k]". (Which I kind of like, because it makes porting things from MATLAB a little easier -- no need to call zip()). :)

Thank you.

-- Constantine

From wnbell at gmail.com Wed Mar 14 04:30:00 2007
From: wnbell at gmail.com (Nathan Bell)
Date: Wed, 14 Mar 2007 02:30:00 -0600
Subject: [SciPy-dev] A typo in the sparse.csc_matrix docstring
In-Reply-To: References: Message-ID: 

On 3/14/07, Constantine Khroulev wrote:
> Hello all,
>
> I'm trying to build a CSC sparse matrix, and it seems like there is a typo in the following docstring:
>
> According to sparse.py itself, the line "a[ij[k, 0], ij[k, 1]] = data[k]" should read "a[ij[0, k], ij[1, k]] = data[k]".

Thanks for the report. It's been fixed in revision 2845.

-- 
Nathan Bell wnbell at gmail.com

From berthold at xn--hllmanns-n4a.de Mon Mar 12 17:34:14 2007
From: berthold at xn--hllmanns-n4a.de (Berthold Höllmann)
Date: Mon, 12 Mar 2007 22:34:14 +0100
Subject: [SciPy-dev] Combining Numpy/C/fortran ?
In-Reply-To: <200703090724.45977.pgmdevlist@gmail.com> (Pierre GM's message of "Fri, 9 Mar 2007 07:24:45 -0500") References: <200703090609.58667.pgmdevlist@gmail.com> <45F148AB.3040707@curioussymbols.com> <200703090724.45977.pgmdevlist@gmail.com> Message-ID: 

Pierre GM writes:
> On Friday 09 March 2007 06:44:43 John Pye wrote:
>> Maybe this is what you're looking for?
>> http://www.scipy.org/Cookbook/SWIG_and_NumPy
>
> Almost. I guess that should do the trick as long as I stay on the C side. However, how should I write the setup.py so that the required fortran files are compiled properly ? I haven't been able to find any example... And as I'm mixing C and fortran, I have to declare the arrays with an "order='F'", right ?

numpy.distutils supports compiling Fortran files from setup.py.
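(A minimal setup.py along those lines might look like this sketch; the package and file names here are made up, the point being only that numpy.distutils picks a Fortran compiler for the .f sources and links them into the extension together with the C sources:

    # setup.py -- sketch with hypothetical names
    from numpy.distutils.core import setup, Extension

    ext = Extension('mypkg._flib',
                    sources=['mypkg/flib_wrap.c',   # C wrapper code
                             'mypkg/routines.f'])   # plain Fortran sources

    setup(name='mypkg', version='0.1', ext_modules=[ext])
)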
From matthieu.brucher at gmail.com Wed Mar 14 09:32:29 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Wed, 14 Mar 2007 14:32:29 +0100
Subject: [SciPy-dev] Proposal for more generic optimizers (posted before on scipy-user)
In-Reply-To: References: Message-ID: 

Hi again

> This seems to be a different question? One question is the question of optimizer design: should it take as an argument an object that provides a certain mix of services, or should it take instead as arguments the functions providing those services. I am not sure, just exploring it.

From an object point of view, I think that the "thing" most capable of knowing how it should be optimized is the function object. It knows the context better than several functions/objects, each responsible for the value, gradient, ...

> I am used to optimizers that take a function, gradient procedure, and hessian procedure as arguments. I am just asking whether *requiring* these to be bundled is the right thing to do.

I find it far more convenient this way. Here there are only gradient and hessian, but one could imagine adding hessianp, which returns hessian*gradient, and if this method is available, it is automatically used. In my research I use a partial gradient (only a part of the gradient, for some parameters, is computed), and if I had to pass it to the optimizer each time, it would be very cumbersome, mainly because I use several optimizers sequentially. The code is also shorter: every argument passed to the constructor can be used by the methods, ...

> This design would not mean that I cannot pass as arguments methods of some object. (I think I am responding to your objection here.)

If you want to pass a method of another object to the optimizer, I suppose you have a design problem. How can a method of another object know better than a method of the object being optimized? But then, you could do this by assigning the gradient method to your custom object - although, as I said, it would be very awkward and not advisable from a strictly computer science point of view. I really would want to have a precise example as well :D

> Note that requiring bundling imposes an interface requirement on the bundling object. This is not true if I just provide the functions/methods as arguments.

Yes, that's true, but for functions you still have to provide them; the only extra requirement of the bundled solution is that every gradient or hessian must have the same name. Not much, knowing that you get a more concise module.

> > Perhaps a more precise example of the usefulness is needed?
>
> Perhaps so.

Enclosed is a simple example, only with a gradient; deriving the hessian was not useful for the example. It is a class that allows estimating the parameters (a, b) in y = ax + b, knowing y and x. The cost function is based on the Geman-McClure "distance", so it has an additional parameter (and it does not converge efficiently). If I had to pass the gradient as a parameter, how could I efficiently pass y, x and the Geman-McClure parameter to the function? (It really is a question, BTW.) Here, I can't forget a parameter: they are all needed at construction, so it is less error prone, more robust as every call is made with the same parameters, and more modular as my cost function is really one block.

But then, I know that it is not something we, as scientists, are accustomed to; as Python is object oriented, we could use this. It was not possible with Matlab, and I suppose that this impacted a lot how we want to use optimizers: something new is always more difficult to apprehend and use.

Matthieu

P.S.: I did not change the code yet, but I'm thinking about the most efficient way to do it :)
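(The LineEstimation.py attachment is scrubbed below, so here is a rough reconstruction of the kind of class being described; the class name, the exact Geman-McClure form rho(r) = r**2/(sigma**2 + r**2) and the sigma handling are guesses, not the original code:

    import numpy

    class LineEstimationCost(object):
        """Hypothetical sketch: fit y = a*x + b with a Geman-McClure penalty."""
        def __init__(self, x, y, sigma=1.):
            self.x = numpy.asarray(x)
            self.y = numpy.asarray(y)
            self.sigma2 = sigma * sigma   # the extra parameter, fixed at construction

        def residuals(self, p):
            a, b = p
            return a * self.x + b - self.y

        def __call__(self, p):
            r2 = self.residuals(p) ** 2
            return numpy.sum(r2 / (self.sigma2 + r2))

        def gradient(self, p):
            r = self.residuals(p)
            # d/dr of r**2/(sigma**2 + r**2) is 2*r*sigma**2/(sigma**2 + r**2)**2
            drho = 2. * r * self.sigma2 / (self.sigma2 + r * r) ** 2
            return numpy.array([numpy.sum(drho * self.x), numpy.sum(drho)])

    # with the proposed interface one would then write something like
    # (argument names are guesses at the API sketched in this thread):
    #   opt = StandardOptimizer(function=cost, step=GradientStep(), ...)
    #   opt.optimize()
)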
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: LineEstimation.py
Type: text/x-python
Size: 1970 bytes
Desc: not available
URL: 

From nwagner at iam.uni-stuttgart.de Thu Mar 15 04:17:33 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Thu, 15 Mar 2007 09:17:33 +0100
Subject: [SciPy-dev] ERROR: test_explicit (scipy.tests.test_odr.test_odr)
Message-ID: <45F9011D.2070901@iam.uni-stuttgart.de>

Hi all,

scipy.test(1) results in 5 errors wrt the odr package. It seems to be a python2.5 problem since I cannot reproduce these errors with python2.4 and

>>> scipy.__version__
'0.5.3.dev2848'

Anyway, can someone reproduce these errors?

ERROR: test_explicit (scipy.tests.test_odr.test_odr)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib64/python2.5/site-packages/scipy/odr/tests/test_odr.py", line 46, in test_explicit
    out = explicit_odr.run()
  File "/usr/local/lib64/python2.5/site-packages/scipy/odr/odrpack.py", line 1049, in run
    self.output = Output(apply(odr, args, kwds))
  File "/usr/local/lib64/python2.5/site-packages/scipy/odr/odrpack.py", line 576, in __init__
    self.stopreason = report_error(self.info)
  File "/usr/local/lib64/python2.5/site-packages/scipy/odr/odrpack.py", line 143, in report_error
    'Iteration limit reached')[info % 10]
IndexError: tuple index out of range

Nils

From berthold.hoellmann at gl-group.com Fri Mar 16 12:51:07 2007
From: berthold.hoellmann at gl-group.com (Berthold Höllmann)
Date: Fri, 16 Mar 2007 17:51:07 +0100
Subject: [SciPy-dev] inconsistent use of tabs and spaces in indentation in numpy\distutils\log.py
Message-ID: 

Calling my setup script with the -tt option to python I get:

Traceback (most recent call last):
  File "setup.py", line 24, in ?
    from numpy.distutils.core import setup
  File "C:\Python24\lib\site-packages\numpy\distutils\__init__.py", line 5, in ?
    import ccompiler
  File "C:\Python24\lib\site-packages\numpy\distutils\ccompiler.py", line 11, in ?
    import log
  File "C:\Python24\lib\site-packages\numpy\distutils\log.py", line 7, in ?
    from misc_util import red_text, yellow_text, cyan_text, is_sequence, is_string
  File "C:\Python24\lib\site-packages\numpy\distutils\misc_util.py", line 975
    import numpy.numarray.util as nnu
    ^
TabError: inconsistent use of tabs and spaces in indentation

I would report this as a bug, but numpy.scipy.org shows only the fedora apache default screen. This is numpy 1.0.1.

Kind regards
Berthold Höllmann

-- 
Germanischer Lloyd AG
CAE Development
Vorsetzen 35
20459 Hamburg
Phone: +49(0)40 36149-7374
Fax: +49(0)40 36149-7320
e-mail: berthold.hoellmann at gl-group.com
Internet: http://www.gl-group.com

From jstrunk at enthought.com Fri Mar 16 12:59:53 2007
From: jstrunk at enthought.com (Jeff Strunk)
Date: Fri, 16 Mar 2007 11:59:53 -0500
Subject: [SciPy-dev] inconsistent use of tabs and spaces in indentation in numpy\distutils\log.py
In-Reply-To: References: Message-ID: <200703161159.53846.jstrunk@enthought.com>

I'll see what virtualmin has done.

Thanks for letting me know,
Jeff

On Friday 16 March 2007 11:51 am, Berthold Höllmann wrote:
> numpy.scipy.org shows only the fedora apache default screen.

From jstrunk at enthought.com Fri Mar 16 13:02:33 2007
From: jstrunk at enthought.com (Jeff Strunk)
Date: Fri, 16 Mar 2007 12:02:33 -0500
Subject: [SciPy-dev] inconsistent use of tabs and spaces in indentation in numpy\distutils\log.py
In-Reply-To: <200703161159.53846.jstrunk@enthought.com> References: <200703161159.53846.jstrunk@enthought.com> Message-ID: <200703161202.33894.jstrunk@enthought.com>

It is back to normal now.

Thanks,
Jeff

On Friday 16 March 2007 11:59 am, Jeff Strunk wrote:
> I'll see what virtualmin has done.
>
> Thanks for letting me know,
> Jeff
>
> On Friday 16 March 2007 11:51 am, Berthold Höllmann wrote:
> > numpy.scipy.org shows only the fedora apache default screen.

From berthold.hoellmann at gl-group.com Mon Mar 19 12:02:36 2007
From: berthold.hoellmann at gl-group.com (Berthold Höllmann)
Date: Mon, 19 Mar 2007 17:02:36 +0100
Subject: [SciPy-dev] inconsistent use of tabs and spaces in indentation in numpy\distutils\log.py
In-Reply-To: (Berthold Höllmann's message of "Fri, 16 Mar 2007 17:51:07 +0100") References: Message-ID: 

Berthold Höllmann writes:
> Calling my setup script with the -tt option to python I get:

I now filed a bug report, but find it a bit hard to find the trac page. There is no link on numpy.scipy.org; I had to go to the tracker at sourceforge.net, find the note with the numarray tracker and follow the link to . It could be easier.
Kind regards
Berthold Höllmann

-- 
Germanischer Lloyd AG
CAE Development
Vorsetzen 35
20459 Hamburg
Phone: +49(0)40 36149-7374
Fax: +49(0)40 36149-7320
e-mail: berthold.hoellmann at gl-group.com
Internet: http://www.gl-group.com

From jstrunk at enthought.com Mon Mar 19 13:54:51 2007
From: jstrunk at enthought.com (Jeff Strunk)
Date: Mon, 19 Mar 2007 12:54:51 -0500
Subject: [SciPy-dev] New Trac feature: TracReSTMacro
Message-ID: <200703191254.52001.jstrunk@enthought.com>

Good afternoon,

By request, I have installed the TracReSTMacro on the numpy, scipy, and scikits tracs. This plugin allows you to display ReST formatted text directly from svn. For example, http://projects.scipy.org/neuroimaging/ni/wiki/ReadMe in its entirety is:

[[ReST(/ni/trunk/README)]]

Thank you,
Jeff Strunk
IT Administrator
Enthought, Inc.

From david at ar.media.kyoto-u.ac.jp Tue Mar 20 02:47:08 2007
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 20 Mar 2007 15:47:08 +0900
Subject: [SciPy-dev] Google Summer of Code 2007
In-Reply-To: References: Message-ID: <45FF836C.2000909@ar.media.kyoto-u.ac.jp>

Fernando Perez wrote:
> On 3/9/07, Jarrod Millman wrote:
>> Hey,
>>
>> I was wondering if Enthought would be interested in signing up as a mentor organization: http://groups.google.com/group/google-summer-of-code-announce/web/gsoc-mentor-organization-application-how-to
>> It is a fair amount of work.
>>
>> Fernando Perez explained that PSF wants to focus on core Python development this year. More specifically after google tells them how many slots they get, the PSF plans to reserve a certain number of slots set aside specifically for core projects. Given the increase in interest in the summer of code, the PSF may not have many slots for scientific python.
>
> As a side note, W. Stein just mentioned he has applied for SAGE to be a sponsor organization. If SAGE is accepted, at least there will be some guaranteed funding for python-related scientific projects through that. I still think it would probably be a good idea for Scipy to play as well, because I'm afraid that otherwise we'll be elbowed out from the PSF tent.
>
> I should mention that ipython /may/ be able to squeeze in under the PSF, given how (despite being hosted at scipy.org) it's a project with a wide enough audience that we may be able to get a PSF slot. But I doubt that 3 or 4 scipy projects would be available.
>
> Obviously for this to work there would need to be:
>
> 1. Interest from the scipy core team
> 2. Sufficient mentors for various possible projects.
>

It looks like neither scipy nor SAGE is listed on the list of possible projects for this year. Does that mean there is no Summer of Code related to numpy/scipy this year?

cheers,
David

From fperez.net at gmail.com Tue Mar 20 02:54:04 2007
From: fperez.net at gmail.com (Fernando Perez)
Date: Tue, 20 Mar 2007 00:54:04 -0600
Subject: [SciPy-dev] Google Summer of Code 2007
In-Reply-To: <45FF836C.2000909@ar.media.kyoto-u.ac.jp> References: <45FF836C.2000909@ar.media.kyoto-u.ac.jp> Message-ID: 

On 3/20/07, David Cournapeau wrote:
> It looks like neither scipy nor SAGE is listed on the list of possible projects for this year. Does that mean there is no Summer of Code related to numpy/scipy this year?

Not specifically, but you could still try under the general Python projects umbrella. As long as you have a mentor, in principle it could work.

cheers,
f

From perry at stsci.edu Tue Mar 20 14:21:19 2007
From: perry at stsci.edu (Perry Greenfield)
Date: Tue, 20 Mar 2007 14:21:19 -0400
Subject: [SciPy-dev] Google Summer of Code 2007
In-Reply-To: References: <45FF836C.2000909@ar.media.kyoto-u.ac.jp> Message-ID: <4CDD6657-AEDD-4A55-A58A-50A2711E045F@stsci.edu>

STScI has been accepted as a Google Summer of Code 2007 mentoring organization, so we could host scientific python projects and mentors, particularly those related to numpy/scipy/matplotlib as well as more astronomically oriented ones.

On Mar 20, 2007, at 2:54 AM, Fernando Perez wrote:
> On 3/20/07, David Cournapeau wrote:
>> It looks like neither scipy nor SAGE is listed on the list of possible projects for this year. Does that mean there is no Summer of Code related to numpy/scipy this year?
>
> Not specifically, but you could still try under the general Python projects umbrella. As long as you have a mentor, in principle it could work.
>
> cheers,
>
> f
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev

From perry at stsci.edu Tue Mar 20 16:51:48 2007
From: perry at stsci.edu (Perry Greenfield)
Date: Tue, 20 Mar 2007 16:51:48 -0400
Subject: [SciPy-dev] Google Summer of Code 2007
In-Reply-To: <4CDD6657-AEDD-4A55-A58A-50A2711E045F@stsci.edu> References: <45FF836C.2000909@ar.media.kyoto-u.ac.jp> <4CDD6657-AEDD-4A55-A58A-50A2711E045F@stsci.edu> Message-ID: <734CADA7-A71E-429B-A309-EF2C243E9835@stsci.edu>

On Mar 20, 2007, at 2:21 PM, Perry Greenfield wrote:
> STScI has been accepted as a Google Summer of Code 2007 mentoring organization, so we could host scientific python projects and mentors, particularly those related to numpy/scipy/matplotlib as well as more astronomically oriented ones.

I just noticed that the ideas page for STScI on the Google Summer of Code lists numarray as a potential project. I'm sorry if this gives a misleading impression that we are continuing work on this. I was asked to give a list of open source projects we *had* worked on and this somehow became the list of projects to work on for Summer of Code. I've asked for that page to be changed.
Perry From graeme.okeefe at petnm.unimelb.edu.au Wed Mar 21 01:16:36 2007 From: graeme.okeefe at petnm.unimelb.edu.au (Graeme O'Keefe) Date: Wed, 21 Mar 2007 16:16:36 +1100 Subject: [SciPy-dev] confidence intervals on multi-parameter minimisation Message-ID: <09A22939-8C86-42A8-9333-C9252C04D93C@petnm.unimelb.edu.au> I have been using fmin_l_bfgs_b to perform bounded non-linear multi- parameter estimation. Is there a way to determine the confidence intervals on the resultant parameter estimates. I've seen reference to scipy.odr as a better alternative to scipy.optimize and have checked out the latest svn and currently trying to build (OS X 10.4.9) which is always fun. However, I can't find any documentation on what scipy.odr is. Can someone point me somewhere. regards, Graeme From graeme.okeefe at petnm.unimelb.edu.au Wed Mar 21 02:36:01 2007 From: graeme.okeefe at petnm.unimelb.edu.au (Graeme O'Keefe) Date: Wed, 21 Mar 2007 17:36:01 +1100 Subject: [SciPy-dev] confidence intervals on multi-parameter minimisation In-Reply-To: <09A22939-8C86-42A8-9333-C9252C04D93C@petnm.unimelb.edu.au> References: <09A22939-8C86-42A8-9333-C9252C04D93C@petnm.unimelb.edu.au> Message-ID: <4F3FF326-1083-42F5-AE24-D5F557FCCABF@petnm.unimelb.edu.au> I should have mentioned, I currently have scipy.__version__ = 0.5.2.dev2095 ppc. I retrieved numpy/scipy from svn (1.0.2.dev3583, 0.5.2.dev2850) I completed the build for numpy/scipy but first I had to remove "-lcc_dynamic" from lib/python2.4/site- packages/numpy/distutils/fcompiler/gnu.py, otherwise I got: /usr/bin/ld: can't locate file for: -lcc_dynamic collect2: ld returned 1 exit status /usr/bin/ld: can't locate file for: -lcc_dynamic collect2: ld returned 1 exit status error: Command "/opt/local/bin/g77 -g -Wall -undefined dynamic_lookup -bundle build/temp.darwin-8.9.0-Power_Macintosh-2.4/build/ src.darwin-8.9.0-Power_Macintosh-2.4/Lib/fftpack/_fftpackmodule.o build/temp.darwin-8.9.0-Power_Macintosh-2.4/Lib/fftpack/src/zfft.o build/temp.darwin-8.9.0-Power_Macintosh-2.4/Lib/fftpack/src/drfft.o build/temp.darwin-8.9.0-Power_Macintosh-2.4/Lib/fftpack/src/zrfft.o build/temp.darwin-8.9.0-Power_Macintosh-2.4/Lib/fftpack/src/zfftnd.o build/temp.darwin-8.9.0-Power_Macintosh-2.4/build/src.darwin-8.9.0- Power_Macintosh-2.4/fortranobject.o -L/opt/local/lib -L/opt/local/ bin/../lib/gcc/powerpc-apple-darwin7.9.0/3.4.4 -Lbuild/ temp.darwin-8.9.0-Power_Macintosh-2.4 -ldfftpack -lfftw3 -lg2c - lcc_dynamic -o build/lib.darwin-8.9.0-Power_Macintosh-2.4/scipy/ fftpack/_fftpack.so" failed with exit status 1 Removing -cc_dynamic fixed that, something I've come across before, but then when I finished and installed the build, I got the following: >>> import scipy >>> import scipy.optimize Traceback (most recent call last): File "", line 1, in ? File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy/optimize/__init__.py", line 7, in ? from optimize import * File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy/optimize/optimize.py", line 26, in ? import linesearch File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy/optimize/linesearch.py", line 3, in ? 
import minpack2 ImportError: Failure linking new module: /Library/Frameworks/ Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/ optimize/minpack2.so: Symbol not found: _sprintf$LDBLStub Referenced from: /Library/Frameworks/Python.framework/Versions/2.4/ lib/python2.4/site-packages/scipy/optimize/minpack2.so Expected in: dynamic lookup >>> import scipy.odr Traceback (most recent call last): File "", line 1, in ? File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy/odr/__init__.py", line 11, in ? import odrpack File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy/odr/odrpack.py", line 103, in ? from scipy.odr import __odrpack ImportError: Failure linking new module: /Library/Frameworks/ Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/odr/ __odrpack.so: Symbol not found: _printf$LDBLStub Referenced from: /Library/Frameworks/Python.framework/Versions/2.4/ lib/python2.4/site-packages/scipy/odr/__odrpack.so Expected in: dynamic lookup >>> I have /opt/local/lib in DYLD_LIBRARY_PATH, LD_LIBRARY_PATH. gcc --version powerpc-apple-darwin8-gcc-4.0.1 (GCC) 4.0.1 (Apple Computer, Inc. build 5363) So I am now stumped, can anyone advise? regards, Graeme On 21/03/2007, at 4:16 PM, Graeme O'Keefe wrote: > > I have been using fmin_l_bfgs_b to perform bounded non-linear multi- > parameter estimation. > > Is there a way to determine the confidence intervals on the resultant > parameter estimates. > > I've seen reference to scipy.odr as a better alternative to > scipy.optimize and have checked out the latest svn and currently > trying to build (OS X 10.4.9) which is always fun. > > However, I can't find any documentation on what scipy.odr is. > > Can someone point me somewhere. > > regards, > > Graeme > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From graeme.okeefe at petnm.unimelb.edu.au Wed Mar 21 02:52:37 2007 From: graeme.okeefe at petnm.unimelb.edu.au (Graeme O'Keefe) Date: Wed, 21 Mar 2007 17:52:37 +1100 Subject: [SciPy-dev] confidence intervals on multi-parameter minimisation In-Reply-To: <4F3FF326-1083-42F5-AE24-D5F557FCCABF@petnm.unimelb.edu.au> References: <09A22939-8C86-42A8-9333-C9252C04D93C@petnm.unimelb.edu.au> <4F3FF326-1083-42F5-AE24-D5F557FCCABF@petnm.unimelb.edu.au> Message-ID: I'm back. Just compiled the numpy/scipy scv's on OS-X intel, no changes required to gnu.py Builds fine and import scipy.odr is okay. My old scipy.optimize code works fine. So, I clearly have a problem with shared libraries to sort out on my ppc workstation. Sorry to bother people. regards, Graeme On 21/03/2007, at 5:36 PM, Graeme O'Keefe wrote: > I should have mentioned, I currently have scipy.__version__ = > 0.5.2.dev2095 ppc. 
> > I retrieved numpy/scipy from svn (1.0.2.dev3583, 0.5.2.dev2850) > > I completed the build for numpy/scipy > > but first I had to remove "-lcc_dynamic" from lib/python2.4/site- > packages/numpy/distutils/fcompiler/gnu.py, otherwise I got: > > /usr/bin/ld: can't locate file for: -lcc_dynamic > collect2: ld returned 1 exit status > /usr/bin/ld: can't locate file for: -lcc_dynamic > collect2: ld returned 1 exit status > error: Command "/opt/local/bin/g77 -g -Wall -undefined dynamic_lookup > -bundle build/temp.darwin-8.9.0-Power_Macintosh-2.4/build/ > src.darwin-8.9.0-Power_Macintosh-2.4/Lib/fftpack/_fftpackmodule.o > build/temp.darwin-8.9.0-Power_Macintosh-2.4/Lib/fftpack/src/zfft.o > build/temp.darwin-8.9.0-Power_Macintosh-2.4/Lib/fftpack/src/drfft.o > build/temp.darwin-8.9.0-Power_Macintosh-2.4/Lib/fftpack/src/zrfft.o > build/temp.darwin-8.9.0-Power_Macintosh-2.4/Lib/fftpack/src/zfftnd.o > build/temp.darwin-8.9.0-Power_Macintosh-2.4/build/src.darwin-8.9.0- > Power_Macintosh-2.4/fortranobject.o -L/opt/local/lib -L/opt/local/ > bin/../lib/gcc/powerpc-apple-darwin7.9.0/3.4.4 -Lbuild/ > temp.darwin-8.9.0-Power_Macintosh-2.4 -ldfftpack -lfftw3 -lg2c - > lcc_dynamic -o build/lib.darwin-8.9.0-Power_Macintosh-2.4/scipy/ > fftpack/_fftpack.so" failed with exit status 1 > > Removing -cc_dynamic fixed that, something I've come across before, > but then when I finished and installed the build, I got the following: > >>>> import scipy >>>> import scipy.optimize > Traceback (most recent call last): > File "", line 1, in ? > File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ > python2.4/site-packages/scipy/optimize/__init__.py", line 7, in ? > from optimize import * > File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ > python2.4/site-packages/scipy/optimize/optimize.py", line 26, in ? > import linesearch > File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ > python2.4/site-packages/scipy/optimize/linesearch.py", line 3, in ? > import minpack2 > ImportError: Failure linking new module: /Library/Frameworks/ > Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/ > optimize/minpack2.so: Symbol not found: _sprintf$LDBLStub > Referenced from: /Library/Frameworks/Python.framework/Versions/2.4/ > lib/python2.4/site-packages/scipy/optimize/minpack2.so > Expected in: dynamic lookup > >>>> import scipy.odr > Traceback (most recent call last): > File "", line 1, in ? > File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ > python2.4/site-packages/scipy/odr/__init__.py", line 11, in ? > import odrpack > File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ > python2.4/site-packages/scipy/odr/odrpack.py", line 103, in ? > from scipy.odr import __odrpack > ImportError: Failure linking new module: /Library/Frameworks/ > Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/odr/ > __odrpack.so: Symbol not found: _printf$LDBLStub > Referenced from: /Library/Frameworks/Python.framework/Versions/2.4/ > lib/python2.4/site-packages/scipy/odr/__odrpack.so > Expected in: dynamic lookup > >>>> > > I have /opt/local/lib in DYLD_LIBRARY_PATH, LD_LIBRARY_PATH. > > gcc --version > powerpc-apple-darwin8-gcc-4.0.1 (GCC) 4.0.1 (Apple Computer, Inc. > build 5363) > > So I am now stumped, can anyone advise? > > regards, > > Graeme > > > > On 21/03/2007, at 4:16 PM, Graeme O'Keefe wrote: > >> >> I have been using fmin_l_bfgs_b to perform bounded non-linear multi- >> parameter estimation. 
>>
>> Is there a way to determine the confidence intervals on the resultant
>> parameter estimates.
>>
>> I've seen reference to scipy.odr as a better alternative to
>> scipy.optimize and have checked out the latest svn and currently
>> trying to build (OS X 10.4.9) which is always fun.
>>
>> However, I can't find any documentation on what scipy.odr is.
>>
>> Can someone point me somewhere.
>>
>> regards,
>>
>> Graeme
>>
>> _______________________________________________
>> Scipy-dev mailing list
>> Scipy-dev at scipy.org
>> http://projects.scipy.org/mailman/listinfo/scipy-dev
>>
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev
>

From cookedm at physics.mcmaster.ca Wed Mar 21 03:06:35 2007
From: cookedm at physics.mcmaster.ca (David M. Cooke)
Date: Wed, 21 Mar 2007 03:06:35 -0400
Subject: [SciPy-dev] confidence intervals on multi-parameter minimisation
In-Reply-To: <09A22939-8C86-42A8-9333-C9252C04D93C@petnm.unimelb.edu.au>
References: <09A22939-8C86-42A8-9333-C9252C04D93C@petnm.unimelb.edu.au>
Message-ID: <20070321070635.GA32035@arbutus.physics.mcmaster.ca>

On Wed, Mar 21, 2007 at 04:16:36PM +1100, Graeme O'Keefe wrote:
>
> I have been using fmin_l_bfgs_b to perform bounded non-linear multi-
> parameter estimation.
>
> Is there a way to determine the confidence intervals on the resultant
> parameter estimates.

Not that I know of.

> I've seen reference to scipy.odr as a better alternative to
> scipy.optimize and have checked out the latest svn and currently
> trying to build (OS X 10.4.9) which is always fun.
>
> However, I can't find any documentation on what scipy.odr is.

scipy.odr is a wrapper of ODRPACK: http://www.netlib.org/odrpack/
Check out the docstring for scipy.odr.odrpack for how to use it.

--
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke                      http://arbutus.physics.mcmaster.ca/dmc/
|cookedm at physics.mcmaster.ca

From graeme.okeefe at petnm.unimelb.edu.au Wed Mar 21 18:26:55 2007
From: graeme.okeefe at petnm.unimelb.edu.au (Graeme O'Keefe)
Date: Thu, 22 Mar 2007 09:26:55 +1100
Subject: [SciPy-dev] confidence intervals on multi-parameter minimisation
In-Reply-To: <20070321070635.GA32035@arbutus.physics.mcmaster.ca>
References: <09A22939-8C86-42A8-9333-C9252C04D93C@petnm.unimelb.edu.au>
	<20070321070635.GA32035@arbutus.physics.mcmaster.ca>
Message-ID: <05B305F2-668E-4967-8DF0-7FFD6C0A6151@petnm.unimelb.edu.au>

thanks,

I've gone through the docstrings and test_odr.py, quite
straightforward, well packaged.

I still run fmin_l_bfgs_b to bound the solution and then I use that
as a starting point.

Of course, now I know my model is really bad from the parameter sd_beta.

regards,

Graeme

On 21/03/2007, at 6:06 PM, David M. Cooke wrote:

> On Wed, Mar 21, 2007 at 04:16:36PM +1100, Graeme O'Keefe wrote:
>>
>> I have been using fmin_l_bfgs_b to perform bounded non-linear multi-
>> parameter estimation.
>>
>> Is there a way to determine the confidence intervals on the resultant
>> parameter estimates.
>
> Not that I know of.
>
>> I've seen reference to scipy.odr as a better alternative to
>> scipy.optimize and have checked out the latest svn and currently
>> trying to build (OS X 10.4.9) which is always fun.
>>
>> However, I can't find any documentation on what scipy.odr is.
>
> scipy.odr is a wrapper of ODRPACK: http://www.netlib.org/odrpack/
> Check out the docstring for scipy.odr.odrpack for how to use it.
>
> --
> |>|\/|<
> /--------------------------------------------------------------------------\
> |David M. Cooke              http://arbutus.physics.mcmaster.ca/dmc/
> |cookedm at physics.mcmaster.ca
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev
>

From robert.kern at gmail.com Wed Mar 21 18:45:54 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 21 Mar 2007 17:45:54 -0500
Subject: [SciPy-dev] confidence intervals on multi-parameter minimisation
In-Reply-To: <05B305F2-668E-4967-8DF0-7FFD6C0A6151@petnm.unimelb.edu.au>
References: <09A22939-8C86-42A8-9333-C9252C04D93C@petnm.unimelb.edu.au>
	<20070321070635.GA32035@arbutus.physics.mcmaster.ca>
	<05B305F2-668E-4967-8DF0-7FFD6C0A6151@petnm.unimelb.edu.au>
Message-ID: <4601B5A2.3090806@gmail.com>

Graeme O'Keefe wrote:
> thanks,
>
> I've gone through the docstrings and test_odr.py, quite
> straightforward, well packaged.

Thank you!

> I still run fmin_l_bfgs_b to bound the solution and then I use that
> as a starting point.
>
> Of course, now I know my model is really bad from the parameter sd_beta.

Take those uncertainty estimates with a grain of salt. They are based on
a linearization (quadraticization, really) of the loss function around
the optimal parameters. So if the loss function is quite flat around the
optimal value, but goes up more sharply than the paraboloid found by the
Hessian matrix around the optimal parameters, then the uncertainties
could be much larger than are really warranted.

I always recommend doing a little Monte Carlo post-mortem to verify the
estimates. Generate parameter values with
numpy.random.multivariate_normal() with the optimal parameter values as
the mean and cov_beta as the covariance matrix. Throw away the values
outside of your bounds. Then put the good parameters into your function
and plot all of them against your data.
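A minimal sketch of that Monte Carlo post-mortem, assuming a
user-supplied model function f(beta, x) plus the fitted beta, cov_beta,
and box bounds from the estimation (those names are illustrative; only
numpy.random.multivariate_normal() and scipy.odr's Output.cov_beta are
real APIs):

import numpy

def monte_carlo_postmortem(f, x, beta, cov_beta, lower, upper, n=1000):
    # Draw candidate parameter vectors around the optimum, using the
    # fitted covariance matrix (e.g. Output.cov_beta from scipy.odr).
    draws = numpy.random.multivariate_normal(beta, cov_beta, n)
    # Throw away draws that violate the box bounds.
    ok = numpy.all((draws >= lower) & (draws <= upper), axis=1)
    # Evaluate the model for each surviving parameter vector; plotting
    # these curves over the data shows how much the fit really wobbles.
    return numpy.array([f(b, x) for b in draws[ok]])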
If you are doing ordinary least squares, it's also quite easy to evaluate the loss function, too, and plot its distribution. It's more difficult to do that with orthogonal distance regression because the functionality that finds the orthogonal distances is not exposed by itself. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From graeme.okeefe at petnm.unimelb.edu.au Wed Mar 21 19:40:33 2007 From: graeme.okeefe at petnm.unimelb.edu.au (Graeme O'Keefe) Date: Thu, 22 Mar 2007 10:40:33 +1100 Subject: [SciPy-dev] confidence intervals on multi-parameter minimisation In-Reply-To: References: <09A22939-8C86-42A8-9333-C9252C04D93C@petnm.unimelb.edu.au> <4F3FF326-1083-42F5-AE24-D5F557FCCABF@petnm.unimelb.edu.au> Message-ID: Found my ppc compile problem with -lcc_dynamic. Probably known by OS-X users here, but just in case. The issue is: need to use g77 with gcc3.3 but gfortran with gcc4.x http://nxg.me.uk/note/2004/restFP/ You can download gfortran-ppc-bin.tar.gz from here http://www.macresearch.org/xcode_gfortran_contest_winner_damien_bobillot Or the xcode plugin here. http://www.macresearch.org/xcode_gfortran_plugin_update regards, Graeme On 21/03/2007, at 5:52 PM, Graeme O'Keefe wrote: > I'm back. > > Just compiled the numpy/scipy scv's on OS-X intel, no changes > required to gnu.py > > Builds fine and import scipy.odr is okay. > > My old scipy.optimize code works fine. > > So, I clearly have a problem with shared libraries to sort out on my > ppc workstation. > > Sorry to bother people. > > regards, > > Graeme > > > > On 21/03/2007, at 5:36 PM, Graeme O'Keefe wrote: > >> I should have mentioned, I currently have scipy.__version__ = >> 0.5.2.dev2095 ppc. >> >> I retrieved numpy/scipy from svn (1.0.2.dev3583, 0.5.2.dev2850) >> >> I completed the build for numpy/scipy >> >> but first I had to remove "-lcc_dynamic" from lib/python2.4/site- >> packages/numpy/distutils/fcompiler/gnu.py, otherwise I got: >> >> /usr/bin/ld: can't locate file for: -lcc_dynamic >> collect2: ld returned 1 exit status >> /usr/bin/ld: can't locate file for: -lcc_dynamic >> collect2: ld returned 1 exit status >> error: Command "/opt/local/bin/g77 -g -Wall -undefined dynamic_lookup >> -bundle build/temp.darwin-8.9.0-Power_Macintosh-2.4/build/ >> src.darwin-8.9.0-Power_Macintosh-2.4/Lib/fftpack/_fftpackmodule.o >> build/temp.darwin-8.9.0-Power_Macintosh-2.4/Lib/fftpack/src/zfft.o >> build/temp.darwin-8.9.0-Power_Macintosh-2.4/Lib/fftpack/src/drfft.o >> build/temp.darwin-8.9.0-Power_Macintosh-2.4/Lib/fftpack/src/zrfft.o >> build/temp.darwin-8.9.0-Power_Macintosh-2.4/Lib/fftpack/src/zfftnd.o >> build/temp.darwin-8.9.0-Power_Macintosh-2.4/build/src.darwin-8.9.0- >> Power_Macintosh-2.4/fortranobject.o -L/opt/local/lib -L/opt/local/ >> bin/../lib/gcc/powerpc-apple-darwin7.9.0/3.4.4 -Lbuild/ >> temp.darwin-8.9.0-Power_Macintosh-2.4 -ldfftpack -lfftw3 -lg2c - >> lcc_dynamic -o build/lib.darwin-8.9.0-Power_Macintosh-2.4/scipy/ >> fftpack/_fftpack.so" failed with exit status 1 >> >> Removing -cc_dynamic fixed that, something I've come across before, >> but then when I finished and installed the build, I got the >> following: >> >>>>> import scipy >>>>> import scipy.optimize >> Traceback (most recent call last): >> File "", line 1, in ? >> File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ >> python2.4/site-packages/scipy/optimize/__init__.py", line 7, in ? 
>> from optimize import * >> File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ >> python2.4/site-packages/scipy/optimize/optimize.py", line 26, in ? >> import linesearch >> File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ >> python2.4/site-packages/scipy/optimize/linesearch.py", line 3, in ? >> import minpack2 >> ImportError: Failure linking new module: /Library/Frameworks/ >> Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/ >> optimize/minpack2.so: Symbol not found: _sprintf$LDBLStub >> Referenced from: /Library/Frameworks/Python.framework/Versions/ >> 2.4/ >> lib/python2.4/site-packages/scipy/optimize/minpack2.so >> Expected in: dynamic lookup >> >>>>> import scipy.odr >> Traceback (most recent call last): >> File "", line 1, in ? >> File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ >> python2.4/site-packages/scipy/odr/__init__.py", line 11, in ? >> import odrpack >> File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ >> python2.4/site-packages/scipy/odr/odrpack.py", line 103, in ? >> from scipy.odr import __odrpack >> ImportError: Failure linking new module: /Library/Frameworks/ >> Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/odr/ >> __odrpack.so: Symbol not found: _printf$LDBLStub >> Referenced from: /Library/Frameworks/Python.framework/Versions/ >> 2.4/ >> lib/python2.4/site-packages/scipy/odr/__odrpack.so >> Expected in: dynamic lookup >> >>>>> >> >> I have /opt/local/lib in DYLD_LIBRARY_PATH, LD_LIBRARY_PATH. >> >> gcc --version >> powerpc-apple-darwin8-gcc-4.0.1 (GCC) 4.0.1 (Apple Computer, Inc. >> build 5363) >> >> So I am now stumped, can anyone advise? >> >> regards, >> >> Graeme >> >> >> >> On 21/03/2007, at 4:16 PM, Graeme O'Keefe wrote: >> >>> >>> I have been using fmin_l_bfgs_b to perform bounded non-linear multi- >>> parameter estimation. >>> >>> Is there a way to determine the confidence intervals on the >>> resultant >>> parameter estimates. >>> >>> I've seen reference to scipy.odr as a better alternative to >>> scipy.optimize and have checked out the latest svn and currently >>> trying to build (OS X 10.4.9) which is always fun. >>> >>> However, I can't find any documentation on what scipy.odr is. >>> >>> Can someone point me somewhere. >>> >>> regards, >>> >>> Graeme >>> >>> _______________________________________________ >>> Scipy-dev mailing list >>> Scipy-dev at scipy.org >>> http://projects.scipy.org/mailman/listinfo/scipy-dev >>> >> >> _______________________________________________ >> Scipy-dev mailing list >> Scipy-dev at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-dev >> > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From graeme.okeefe at petnm.unimelb.edu.au Wed Mar 21 19:55:17 2007 From: graeme.okeefe at petnm.unimelb.edu.au (Graeme O'Keefe) Date: Thu, 22 Mar 2007 10:55:17 +1100 Subject: [SciPy-dev] confidence intervals on multi-parameter minimisation In-Reply-To: <4601B5A2.3090806@gmail.com> References: <09A22939-8C86-42A8-9333-C9252C04D93C@petnm.unimelb.edu.au> <20070321070635.GA32035@arbutus.physics.mcmaster.ca> <05B305F2-668E-4967-8DF0-7FFD6C0A6151@petnm.unimelb.edu.au> <4601B5A2.3090806@gmail.com> Message-ID: <5E6E08A6-03AA-4721-89F1-60A7A25DA366@petnm.unimelb.edu.au> I've manually plugged in parameters with "mean + sd_beta" and "mean - sd_beta" and the sd_beta values make sense from that point of view. But I will follow up with your suggestion of a monte-carlo analysis to look at the (residual**2).sum() vs "mean + random()" thanks, Graeme On 22/03/2007, at 9:45 AM, Robert Kern wrote: > Graeme O'Keefe wrote: >> thanks, >> >> I've gone through the docstrings and test_odr.py, quite >> straightforward, well packaged. > > Thank you! > >> I still run fmin_l_bfgs_b to bound the solution and then I use that >> as a starting point. >> >> Of course, now I know my model is really bad from the parameter >> sd_beta. > > Take those uncertainty estimates with a grain of salt. They are > based on a > linearization (quadraticization, really) of the loss function > around the optimal > parameters. So if the loss function is quite flat around the > optimal value, but > goes up more sharply than the paraboloid found by the Hessian > matrix around the > optimal parameters, then the uncertainties could be much larger > than are really > warranted. > > I always recommend doing a little Monte Carlo post-mortem to verify > the > estimates. Generate parameter values with > numpy.random.multivariate_normal() > with the optimal parameter values as the mean and cov_beta as the > covariance > matrix. Throw away the values outside of your bounds. Then put the > good > parameters into your function and plot all of them against your > data. If you are > doing ordinary least squares, it's also quite easy to evaluate the > loss > function, too, and plot its distribution. It's more difficult to do > that with > orthogonal distance regression because the functionality that finds > the > orthogonal distances is not exposed by itself. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a > harmless enigma > that is made terrible by our own mad attempt to interpret it as > though it had > an underlying truth." > -- Umberto Eco > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From cimrman3 at ntc.zcu.cz Thu Mar 22 03:18:49 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 22 Mar 2007 08:18:49 +0100 Subject: [SciPy-dev] [Fwd: Re: wrapping BLOPEX for SciPy] Message-ID: <46022DD9.10007@ntc.zcu.cz> Hi all, FYI: I have asked Andrew Knyazev, the author of BLOPEX (sparse) eigenvalue solver, some questions related to making it usable from SciPy (which I plan to do, unless somebody else did/does it already). Some interesting points have appeared in the discussion, see below remarks of Julien Langou, one of the LAPACK developers (the discussion leading to those remarks is below below :). r. 
-------- Original Message --------
Subject: Re: wrapping BLOPEX for SciPy
Date: Wed, 21 Mar 2007 14:28:24 -0600 (MDT)
From: Julien Langou
To: Andrew Knyazev
CC: Robert Cimrman

Hello,

Some points from the top of my head for the SciPy developers.

* While Matlab's eig correctly calls SYEV (symmetric eigenvalue solver)
or GEEV (nonsymmetric eigenvalue solver) from LAPACK depending on whether
the matrix is symmetric or not, Matlab's eigs calls ARPACK whether the
matrix is symmetric or not. I always thought this was a big liability for
such software. A symmetric eigensolver would be way faster and more
stable; the eigenvectors would have guaranteed orthogonality and the
eigenvalues would be real. Below is a Matlab script showing that when
there is a cluster of eigenvalues, eigs (ARPACK) returns the correct
invariant subspace but the eigenvectors are not orthogonalized (logical
for a nonsymmetric eigensolver ...):

>> clear
>> n=10; V=orth(randn(n)); D=1:n; D(n-1)=n; A=V*diag(D)*V';
>> opts.disp=0;
>> [Veigs,Deigs] = eigs(A,2,'LM',opts);
>> Deigs

Deigs =

    10     0
     0    10

>> Veigs'*Veigs

ans =

    1.0000    0.9545
    0.9545    1.0000

Conclusion: having a true symmetric eigensolver (e.g. lobpcg) for the
SciPy eigs would clearly be a big plus with respect to Matlab.

* There are four LAPACK symmetric eigensolvers: SYEV (symmetric QR),
SYEVD (divide and conquer), SYEVX (bisection and inverse iteration) and
SYEVR (MRRR). Matlab uses SYEV: the most robust, but by far the slowest.
We have worked on all those algorithms and they are now all robust. For
MRRR, this is new: the version in 3.1 is really stable, the one in 3.0,
well .... In terms of performance, there is no comparison: SYEVR and
SYEVX are way faster than SYEV. The performance of SYEVX is really
matrix-dependent. Comparing SYEVD and SYEVR is hard.

* SYEVX and SYEVR support subset computation (same thing: subset
computation in SYEVR is from 3.1). So in some sense, if you really want
to compute only 10 of the eigenvalues of a matrix with LAPACK, you can
save a good bunch of FLOPS. This option is not available from Matlab.
Note that the cost is still O(n^3) (due to the tridiagonalization).

* The LAPACK-3.1 nonsymmetric eigensolver is about 3-5 times faster than
the LAPACK-3.0 one. Matlab has still not upgraded to 3.1, as far as I
know.

* On the question of whether to use eigs or eig, or equivalently when to
use ARPACK or LAPACK: this depends on the size of the matrix, the
sparsity of the matrix and the number of eigenvalues desired. Just to
check, attached is a comparison plot of eig and eigs for dense matrices
going from n=100:100:1000, for k=[1 10]. Conclusion: eigs behaves as an
N^2 algorithm as expected, eig behaves as an N^3 algorithm as expected.
On my machine, if you want one eigenvalue, you are better off with LAPACK
for n<150. If you want 10 eigenvalues, you are better off with LAPACK for
n<450. Note that the scale is logarithmic, so when n=1000 the n^3 factor
completely kills LAPACK and it is about 5 times slower than eigs.

-j
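Julien's Matlab demonstration above translates almost line for line to
NumPy. As a companion, here is a rough sketch of the guarantee he is
pointing at: a symmetric LAPACK driver (numpy.linalg.eigh) returns
orthonormal eigenvectors even when the extreme eigenvalue is repeated,
which is exactly what an ARPACK-style nonsymmetric path cannot promise:

import numpy

n = 10
# Random orthogonal V and a spectrum whose largest eigenvalue appears twice.
V, _ = numpy.linalg.qr(numpy.random.randn(n, n))
d = numpy.arange(1.0, n + 1)
d[n - 2] = n
A = numpy.dot(V, numpy.dot(numpy.diag(d), V.T))

# eigh uses a symmetric driver (the SYEV family), so the eigenvectors of
# the clustered eigenvalue come back orthonormal: this prints ~identity.
w, U = numpy.linalg.eigh(A)
print(numpy.dot(U[:, -2:].T, U[:, -2:]))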
On Wed, 21 Mar 2007, Andrew Knyazev wrote:

> Robert Cimrman wrote:
>> Andrew Knyazev wrote:
>>> LAPACK is indeed for full matrices only. If you want all
>>> eigenvectors, the matrix of all eigenvectors is usually full even
>>> if your original matrix is sparse. Since you lose the sparsity
>>> at the end anyway, the standard procedure is to convert your
>>> sparse input matrix into full and plug it into LAPACK. Or maybe
>>> you are trying to do something unorthodox that I do not know about.
>>
>> OK. What SciPy needs is an equivalent of Matlab's eigs. I have
>> briefly looked at it, and it switches to eig( full( A ) ) only if
>> all eigenvalues are requested. I would say that if one wants e.g.
>> k = 999 eigenvalues out of n = 1000, making a full matrix would still
>> be preferable. Is there any recommendation on for which k to switch
>> to the full matrix mode, especially when using BLOPEX?
>>
>> Thank you for your time and answers!
>>
>> r.
>
> I never did a comparison between MATLAB's eig (which is an interface
> to LAPACK) and eigs (which is an interface to ARPACK). I would guess
> that under normal circumstances, e.g. having enough memory, eig
> should be faster than eigs if >50% of the eigenvectors are requested,
> maybe less. With BLOPEX, I would guess that eig is faster for >10% (if
> no preconditioning is used). The main advantage of BLOPEX is that it
> allows using preconditioning to accelerate the convergence. The
> preconditioning must be provided by the user, though.
>
> I copy Julien Langou, one of the LAPACK developers, who may be able
> to add some more to this.
>

From matthieu.brucher at gmail.com Thu Mar 22 04:49:57 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Thu, 22 Mar 2007 09:49:57 +0100
Subject: [SciPy-dev] Proposal for more generic optimizers (posted before
	on scipy-user)
In-Reply-To:
References:
Message-ID:

Hi,

I didn't have the time to make the changes Alan proposed, but I would
like some other advice...

The goal of my proposal is to have something better than the Matlab
Optimization Toolbox, at least for the simplest optimizations - a domain
where Matlab does not follow the literature; for instance, its
conjugate-gradient method does not seem to use the Wolfe conditions for
convergence. And the structure of an optimizer is designed to be more
modular, so implementing a new "optimizer" does not imply writing
everything from scratch. Everything cannot be thought of in advance, but
some things can.

Matthieu
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From aisaac at american.edu Thu Mar 22 09:45:07 2007
From: aisaac at american.edu (Alan Isaac)
Date: Thu, 22 Mar 2007 08:45:07 -0500
Subject: [SciPy-dev] Proposal for more generic optimizers (posted before
	on scipy-user)
In-Reply-To:
References:
Message-ID:

On Thu, 22 Mar 2007, Matthieu Brucher wrote:
> the structure of an optimizer is designed to be more
> modular, so implementing a new "optimizer" does not imply
> writing everything from scratch. Everything cannot be
> thought of in advance, but some things can.

We certainly agree on this. My main expressed concern was about
interface. Now I have another possibly crazy thought. Right now you
require an object that implements methods with certain names, which is
OK but I think not perfect. Here is a possibly crazy thought, just to
explore. How about making all arguments optional, and allowing passing
either such an object or the needed components? Finally, to accommodate
existing objects with a different name structure, allow passing a
dictionary of attribute names.

fwiw,
Alan

From openopt at ukr.net Thu Mar 22 09:33:04 2007
From: openopt at ukr.net (dmitrey)
Date: Thu, 22 Mar 2007 15:33:04 +0200
Subject: [SciPy-dev] Proposal for more generic optimizers (posted before
	on scipy-user)
In-Reply-To:
References:
Message-ID: <46028590.1010506@ukr.net>

Hello Matthieu Brucher, Alan Isaac and other developers,

(excuse my bad English)

I am a final-year postgraduate at the Institute of Cybernetics of the
Ukraine National Academy of Sciences, optimization department. I'm
interested in the way you intend to continue the development of
optimization routines in Python, and I have been following the thread.
Although I'm a member of all 3 scipy/numpy-related mailing lists, my
messages somehow come back as "waiting for moderator approval" and
nothing gets published. That's why, to be safe, I decided to add your
addresses to the recipients.

I have 3 years of experience with optimization in MATLAB, and some
experience with TOMLAB (tomopt.com). I wrote the toolbox "OpenOpt",
which runs in both MATLAB and Octave:

http://www.mathworks.com/matlabcentral/fileexchange/loadFile.do?objectId=13115&objectType=file

There are 6 solvers currently: 2 local nonsmooth ones (from our
department) and 4 global ones (connected from other GPL sources). There
is also nonSmoothSolve - an fsolve equivalent for nonsmooth funcs (no
guarantee for non-convex funcs) - and an example of a comparison is
included.

The key feature is a TOMLAB-like interface; it looks like this:

prob = ooAssign(objFun, x0, .....)
r = ooRun(prob, solver)

In TOMLAB you have to write the arguments in a strict order, like

prob = nlpAssign(objFun, x0, [],[],[],A,b,Aeq,beq,[],[],[],[],[],f0,[],...)

so I decided to replace that with string assignment:

prob = nlpAssign(objFun, x0, 'A', A, 'beq', beq, 'Aeq', Aeq)

or

prob = nlpAssign(objFun, x0, 'A=[1 2 3]; b=2; TolFun=1e-5; TolCon=1e-4; doPlot=1')

etc. Then parameters may be assigned directly:

prob.parallel.df=1; %use parallel calculation of the numerical
                    %(sub)gradient via MATLAB dfeval()
prob.doPlot = false;
prob.fPattern = ...
prob.cPattern = ... %patterns of dependence of the i-th constraint on x(j)
prob.hPattern = ...
prob.check.dc=1;    %check the user-supplied gradient

etc. Some time later I encountered things that don't allow effective
further development (first of all passing by copy, not by reference),
and now I'm rewriting it in Python (about 20-25% is done for now). I've
gained some experience, and things in the Python version will be
organized in a better way, for example:

prob = NLP(myObjFun, x0, TolFun=1e-5, TolCon=1e-4, TolGrad=1e-6,
           TolX=1e-4, MaxIter=1e4, MaxFunEvals=1e5, MaxTime=1e8,
           MaxCPUTime=1e8, IterPrint=1, ...)
# or prob = LP(...), prob = NSP(...) - nonsmooth problem, prob = QP(...) etc
prob.run() # or maybe r = prob.run()

I intend to connect some unconstrained solvers from scipy.optimize.

All people in my department are open-source supporters; we have a lot of
optimization-related software (most of it for nonsmooth funcs and
network problems - we have researched them since 1965), but almost all
of it is Fortran-written. I intend to apply for GSoC support, but the
only scipy-related person I found in
http://wiki.python.org/moin/SummerOfCode/Mentors is Jarrod Millman. And
as far as I understood from conversations with some people from the PSF,
this year in GSoC they are interested first of all in the Python core,
so the chances of getting support are very low. However, if you can help
me in any way, please let me know. There are also some chances of
getting direct Google support, but last year only 15 students succeeded.

But I will continue my work anyway.

WBR, Dmitrey.
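A rough Python sketch of the keyword-based problem class described
above. The names here - NLP, run, the option defaults - only illustrate
the proposed interface; none of this is an existing package:

class NLP(object):
    # Defaults for the stopping tolerances; any of them can be
    # overridden by keyword when the problem is constructed.
    defaults = dict(TolFun=1e-6, TolCon=1e-6, TolX=1e-6,
                    MaxIter=10000, MaxFunEvals=100000, IterPrint=0)

    def __init__(self, objfun, x0, **kwargs):
        self.objfun, self.x0 = objfun, x0
        params = dict(self.defaults)
        for name, value in kwargs.items():
            if name not in params:
                raise TypeError('unknown option: %s' % name)
            params[name] = value
        # Options become plain attributes, so prob.TolFun = 1e-5 works too.
        self.__dict__.update(params)

    def run(self, solver=None):
        # A real implementation would dispatch to a solver here, e.g.
        # one of the scipy.optimize routines; this sketch only fixes
        # the calling convention.
        raise NotImplementedError

# Usage, mirroring the message above:
#   prob = NLP(myObjFun, x0, TolFun=1e-5, TolCon=1e-4)
#   prob.TolX = 1e-4
#   r = prob.run()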
Matthieu Brucher wrote:
> Hi,
>
> I didn't have the time to make the changes Alan proposed, but I would
> like some other advice...
>
> The goal of my proposal is to have something better than the Matlab
> Optimization Toolbox, at least for the simplest optimizations - a
> domain where Matlab does not follow the literature; for instance, its
> conjugate-gradient method does not seem to use the Wolfe conditions
> for convergence. And the structure of an optimizer is designed to be
> more modular, so implementing a new "optimizer" does not imply writing
> everything from scratch. Everything cannot be thought of in advance,
> but some things can.
>
> Matthieu
> ------------------------------------------------------------------------
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev
>

From nmarais at sun.ac.za Thu Mar 22 10:34:12 2007
From: nmarais at sun.ac.za (Neilen Marais)
Date: Thu, 22 Mar 2007 16:34:12 +0200
Subject: [SciPy-dev] linsolve.factorized was: Re: Using umfpack to
	calculate an incomplete LU factorisation (ILU)
References: <45EE899D.9040304@ntc.zcu.cz>
	<45F00D7A.4060603@iam.uni-stuttgart.de> <45F03884.3000101@ntc.zcu.cz>
	<45F04386.6010105@ntc.zcu.cz>
Message-ID:

Hi Robert!

On Thu, 08 Mar 2007 18:10:30 +0100, Robert Cimrman wrote:

> Robert Cimrman wrote:
> Well, I did it since I am going to need this, too :-)
>
> In [3]:scipy.linsolve.factorized?
> ...
> Definition: scipy.linsolve.factorized(A)
> Docstring:
> Return a function for solving a linear system, with A pre-factorized.
>
> Example:
> solve = factorized( A ) # Makes LU decomposition.
> x1 = solve( rhs1 ) # Uses the LU factors.
> x2 = solve( rhs2 ) # Uses again the LU factors.
>
> This uses UMFPACK if available.

This is a useful improvement, thanks. But why not just extend
linsolve.splu to use umfpack so we can present a consistent interface?
The essential difference between factorized and splu is that you get to
explicitly control the storage of the LU factorisation and get some
additional info (i.e. the number of nonzeros), whereas factorized only
gives you a solve function. The actual library used to do the sparse LU
is just an implementation detail that should be abstracted wherever
possible, no?

If nobody complains about the idea I'm willing to implement it.

Thanks
Neilen

--
you know its kind of tragic
we live in the new world
but we've lost the magic
-- Battery 9 (www.battery9.co.za)

From cimrman3 at ntc.zcu.cz Thu Mar 22 10:46:07 2007
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Thu, 22 Mar 2007 15:46:07 +0100
Subject: [SciPy-dev] [SciPy-user] linsolve.factorized was: Re: Using
	umfpack to calculate an incomplete LU factorisation (ILU)
In-Reply-To:
References: <45EE899D.9040304@ntc.zcu.cz>
	<45F00D7A.4060603@iam.uni-stuttgart.de> <45F03884.3000101@ntc.zcu.cz>
	<45F04386.6010105@ntc.zcu.cz>
Message-ID: <460296AF.4000405@ntc.zcu.cz>

Neilen Marais wrote:
>> Robert Cimrman wrote:
>> Well, I did it since I am going to need this, too :-)
>>
>> In [3]:scipy.linsolve.factorized?
>> ...
>> Definition: scipy.linsolve.factorized(A)
>> Docstring:
>> Return a function for solving a linear system, with A pre-factorized.
>>
>> Example:
>> solve = factorized( A ) # Makes LU decomposition.
>> x1 = solve( rhs1 ) # Uses the LU factors.
>> x2 = solve( rhs2 ) # Uses again the LU factors.
>>
>> This uses UMFPACK if available.
>
> This is a useful improvement, thanks. But why not just extend
> linsolve.splu to use umfpack so we can present a consistent interface?
> The essential difference between factorized and splu is that you get to
> explicitly control the storage of the LU factorisation and get some
> additional info (i.e. the number of nonzeros), whereas factorized only
> gives you a solve function. The actual library used to do the sparse LU
> is just an implementation detail that should be abstracted wherever
> possible, no?
>
> If nobody complains about the idea I'm willing to implement it.

Sure, splu is an exception; every effort to make it consistent is
welcome. But note that umfpack always gives you complete LU factors,
there is no ILU (drop-off) support - how would you tackle this? Maybe
change its name to get_superlu_obj or something like that, use
use_solver( useUmfpack = False ) at its beginning, and restore the
use_solver setting at the end?

r.

From robert.kern at gmail.com Thu Mar 22 13:27:12 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 22 Mar 2007 12:27:12 -0500
Subject: [SciPy-dev] Proposal for more generic optimizers (posted before
	on scipy-user)
In-Reply-To: <46028590.1010506@ukr.net>
References: <46028590.1010506@ukr.net>
Message-ID: <4602BC70.1000907@gmail.com>

dmitrey wrote:
> All people in my department are open-source supporters; we have a lot
> of optimization-related software (most of it for nonsmooth funcs and
> network problems - we have researched them since 1965), but almost all
> of it is Fortran-written. I intend to apply for GSoC support, but the
> only scipy-related person I found in
> http://wiki.python.org/moin/SummerOfCode/Mentors is Jarrod Millman.
> And as far as I understood from conversations with some people from
> the PSF, this year in GSoC they are interested first of all in the
> Python core, so the chances of getting support are very low. However,
> if you can help me in any way, please let me know.

The Space Telescope Science Institute is also a mentoring organization
this year, and they have offered to take some scipy projects, too.

http://code.google.com/soc/stsci/about.html
http://www-int.stsci.edu/~aconti/GSoC.htm

I look forward to seeing what you come up with.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
 -- Umberto Eco

From matthieu.brucher at gmail.com Thu Mar 22 13:37:59 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Thu, 22 Mar 2007 18:37:59 +0100
Subject: [SciPy-dev] Proposal for more generic optimizers (posted before
	on scipy-user)
In-Reply-To:
References:
Message-ID:

> Right now you require an object that implements methods with
> certain names, which is OK but I think not perfect. Here is
> a possibly crazy thought, just to explore. How about making
> all arguments optional, and allowing passing either such an
> object or the needed components? Finally, to accommodate
> existing objects with a different name structure, allow
> passing a dictionary of attribute names.

Indeed, that could be a solution. The only question remaining is how to
use the dictionary - perhaps creating a fake object with the correct
interface?

Matthieu
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From otto at tronarp.se Thu Mar 22 14:11:46 2007
From: otto at tronarp.se (Otto Tronarp)
Date: Thu, 22 Mar 2007 19:11:46 +0100
Subject: [SciPy-dev] Have divide by zero handling changed lately?
Message-ID: <20070322191146.r8vd37r774ocooks@mathcore.kicks-ass.org>

Hello,

Has numpy's handling of divide by zero changed lately?

import numpy
numpy.seterr(divide='warn')
v = numpy.asarray([0, 1], dtype='d')
print 1/numpy.asarray([0], dtype='d')
print 1/v
print 1.0/v[0]

In the code above, 1.0/v[0] doesn't generate a warning; that can't be
right? While writing this I actually tried some more things:
1.0/numpy.float32(0) has the correct behavior (generates a warning),
whereas 1.0/numpy.float64(0) silently gives us an inf.

This is with

In [95]: numpy.__version__
Out[95]: '1.0.1'

on an i686 AMD Sempron(tm) 2600+ AuthenticAMD GNU/Linux.

Regards,
Otto

From pgmdevlist at gmail.com Thu Mar 22 14:49:10 2007
From: pgmdevlist at gmail.com (Pierre GM)
Date: Thu, 22 Mar 2007 14:49:10 -0400
Subject: [SciPy-dev] Proposal for more generic optimizers (posted before
	on scipy-user)
In-Reply-To:
References:
Message-ID: <200703221449.10138.pgmdevlist@gmail.com>

On Thursday 22 March 2007 13:37:59 Matthieu Brucher wrote:
> > Right now you require an object that implements methods with
> > certain names, which is OK but I think not perfect. Here is
> > a possibly crazy thought, just to explore. How about making
> > all arguments optional, and allowing passing either such an
> > object or the needed components? Finally, to accommodate
> > existing objects with a different name structure, allow
> > passing a dictionary of attribute names.
>
> Indeed, that could be a solution. The only question remaining is how
> to use the dictionary - perhaps creating a fake object with the
> correct interface?

Just 2c:
I just ported the loess package to numpy (the package is available in
the scipy SVN sandbox as 'pyloess'). The loess part has a C-based
structure that looks like what you're trying to reproduce.

Basically, the package implements a new class, loess. The attributes of
a loess object are themselves classes: one for inputs, one for outputs,
and several others controlling different aspects of the estimation.

The __init__ method of a loess object requires two mandatory arguments
for the independent variables (x) and the observed response (y). These
arguments are used to initialize the inputs subclass. The __init__
method also accepts optional parameters that modify the values of the
different parameters. However, one can also modify specific arguments
directly.

The outputs section is created when a new loess object is instantiated,
but with empty arrays. The arrays are filled when the .fit() method of
the loess object is called.

Maybe you would like to check the tests/test_pyloess.py file to get a
better idea. I'm currently updating the docs and writing a small
example.

From matthieu.brucher at gmail.com Thu Mar 22 15:19:55 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Thu, 22 Mar 2007 20:19:55 +0100
Subject: [SciPy-dev] Proposal for more generic optimizers (posted before
	on scipy-user)
In-Reply-To: <200703221449.10138.pgmdevlist@gmail.com>
References: <200703221449.10138.pgmdevlist@gmail.com>
Message-ID:

Thanks for this intel ;)

I think that what I want to do is more general than what LOESS proposes,
as it is not only about modelling. But there surely is stuff to use :) -
and on another topic, there are some good ideas in LOESS that I could
use in my PhD thesis -

Matthieu

2007/3/22, Pierre GM :
> On Thursday 22 March 2007 13:37:59 Matthieu Brucher wrote:
> > > Right now you require an object that implements methods with
> > > certain names, which is OK but I think not perfect. Here is
> > > a possibly crazy thought, just to explore. How about making
> > > all arguments optional, and allowing passing either such an
> > > object or the needed components? Finally, to accommodate
> > > existing objects with a different name structure, allow
> > > passing a dictionary of attribute names.
> >
> > Indeed, that could be a solution. The only question remaining is how
> > to use the dictionary - perhaps creating a fake object with the
> > correct interface?
>
> Just 2c:
> I just ported the loess package to numpy (the package is available in
> the scipy SVN sandbox as 'pyloess'). The loess part has a C-based
> structure that looks like what you're trying to reproduce.
>
> Basically, the package implements a new class, loess. The attributes
> of a loess object are themselves classes: one for inputs, one for
> outputs, and several others controlling different aspects of the
> estimation.
>
> The __init__ method of a loess object requires two mandatory arguments
> for the independent variables (x) and the observed response (y). These
> arguments are used to initialize the inputs subclass. The __init__
> method also accepts optional parameters that modify the values of the
> different parameters. However, one can also modify specific arguments
> directly.
>
> The outputs section is created when a new loess object is
> instantiated, but with empty arrays. The arrays are filled when the
> .fit() method of the loess object is called.
>
> Maybe you would like to check the tests/test_pyloess.py file to get a
> better idea. I'm currently updating the docs and writing a small
> example.
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pgmdevlist at gmail.com Thu Mar 22 15:42:03 2007
From: pgmdevlist at gmail.com (Pierre GM)
Date: Thu, 22 Mar 2007 15:42:03 -0400
Subject: [SciPy-dev] Proposal for more generic optimizers (posted before
	on scipy-user)
In-Reply-To:
References: <200703221449.10138.pgmdevlist@gmail.com>
Message-ID: <200703221542.03788.pgmdevlist@gmail.com>

On Thursday 22 March 2007 15:19:55 Matthieu Brucher wrote:
> Thanks for this intel ;)
> I think that what I want to do is more general than what LOESS
> proposes, as it is not only about modelling. But there surely is stuff
> to use :)

No doubt about that. There was a need for lowess/stl/loess at one point,
and I already have parts done. I just had to patch the rest. The info
was given just as an example of implementation. The python part was
strongly dependent on the underlying C code, hence the multiplication of
subclasses. I considered simplifying the implementation, but realized it
already had everything I needed in a pretty neat form.

From russel at appliedminds.net Thu Mar 22 16:55:35 2007
From: russel at appliedminds.net (Russel Howe)
Date: Thu, 22 Mar 2007 13:55:35 -0700
Subject: [SciPy-dev] Unusual crash
Message-ID: <4DD3A745-D4D7-4D1A-8868-EF27D0D3B550@appliedminds.net>

I am using today's svn scipy on OS X 10.4 intel with MacPorts python2.4.

$ pythonw -c "import scipy.signal"
Floating point exception
$

Looking a little more closely, it is the module linalg.iterative that
causes the crash. A gdb backtrace is below, has anybody else run into
this before? Any ideas? I rebuilt python and scipy with no change.
$ gdb /opt/local/Library/Frameworks/Python.framework/Versions/2.4/ Resources/Python.app/Contents/MacOS/Python (gdb) run -c "import scipy.linalg.iterative" Program received signal EXC_ARITHMETIC, Arithmetic exception. 0x900dfb26 in strtod_l () (gdb) bt #0 0x900dfb26 in strtod_l () #1 0x90030a66 in strtod () #2 0x002ac1be in PyOS_ascii_strtod (nptr=0xbfff8680 "1.", '0' , "1e-05", endptr=0x0) at Python/pystrtod.c:159 #3 0x002ac469 in PyOS_ascii_atof (nptr=0xbfff8680 "1.", '0' , "1e-05") at Python/pystrtod.c:257 #4 0x002a04c7 in r_object (p=0xbfffcd50) at Python/marshal.c:484 #5 0x002a0782 in r_object (p=0xbfffcd50) at Python/marshal.c:581 #6 0x002a053d in r_object (p=0xbfffcd50) at Python/marshal.c:651 #7 0x002a0da1 in PyMarshal_ReadLastObjectFromFile (fp=0xa000be50) at Python/marshal.c:804 #8 0x0029d403 in load_source_module (name=0xbfffd6f7 "scipy.linalg.iterative", pathname=0xbfffd257 "/opt/local/lib/ python2.4/site-packages/scipy/linalg/iterative.py", fp=0xa000bdf8) at Python/import.c:723 #9 0x0029e233 in import_submodule (mod=0x121bf70, subname=0xbfffd704 "iterative", fullname=0xbfffd6f7 "scipy.linalg.iterative") at Python/ import.c:2266 #10 0x0029e465 in load_next (mod=0x121bf70, altmod=0x2e0160, p_name=0xbfffd704, buf=0xbfffd6f7 "scipy.linalg.iterative", p_buflen=0xbfffdaf8) at Python/import.c:2086 #11 0x0029e919 in PyImport_ImportModuleEx (name=0x12f72b4 "iterative", globals=0x57420, locals=0x57420, fromlist=0x1214d30) at Python/import.c:1921 #12 0x00271adf in builtin___import__ (self=0x0, args=0x50810) at Python/bltinmodule.c:45 #13 0x0020e213 in PyObject_Call (func=0xf260, arg=0x50810, kw=0x0) at Objects/abstract.c:1795 #14 0x00279a8a in PyEval_CallObjectWithKeywords (func=0xf260, arg=0x50810, kw=0x0) at Python/ceval.c:3430 #15 0x0027c63f in PyEval_EvalFrame (f=0x634f30) at Python/ceval.c:2020 #16 0x00280c66 in PyEval_EvalCodeEx (co=0x12fa720, globals=0x57420, locals=0x57420, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2736 #17 0x00280ea6 in PyEval_EvalCode (co=0x12fa720, globals=0x57420, locals=0x57420) at Python/ceval.c:484 #18 0x0029ce94 in PyImport_ExecCodeModuleEx (name=0xbfffecb7 "scipy.linalg", co=0x12fa720, pathname=0xbfffdf6f "/opt/local/lib/ python2.4/site-packages/scipy/linalg/__init__.pyc") at Python/ import.c:631 #19 0x0029d24a in load_source_module (name=0xbfffecb7 "scipy.linalg", pathname=0xbfffdf6f "/opt/local/lib/python2.4/site-packages/scipy/ linalg/__init__.pyc", fp=0xa000bda0) at Python/import.c:909 #20 0x0029dd57 in load_package (name=0xbfffecb7 "scipy.linalg", pathname=0x121fac0 "\003") at Python/import.c:965 #21 0x0029e233 in import_submodule (mod=0x55d70, subname=0xbfffecbd "linalg", fullname=0xbfffecb7 "scipy.linalg") at Python/import.c:2266 #22 0x0029e465 in load_next (mod=0x55d70, altmod=0x55d70, p_name=0xbfffecbd, buf=0xbfffecb7 "scipy.linalg", p_buflen=0xbffff0b8) at Python/import.c:2086 #23 0x0029e95d in PyImport_ImportModuleEx (name=0x506c4 "scipy.linalg.iterative", globals=0x20a50, locals=0x20a50, fromlist=0x2e0160) at Python/import.c:1928 #24 0x00271adf in builtin___import__ (self=0x0, args=0x1fae0) at Python/bltinmodule.c:45 #25 0x0020e213 in PyObject_Call (func=0xf260, arg=0x1fae0, kw=0x0) at Objects/abstract.c:1795 #26 0x00279a8a in PyEval_CallObjectWithKeywords (func=0xf260, arg=0x1fae0, kw=0x0) at Python/ceval.c:3430 #27 0x0027c63f in PyEval_EvalFrame (f=0x608890) at Python/ceval.c:2020 #28 0x00280c66 in PyEval_EvalCodeEx (co=0x4ee20, globals=0x20a50, locals=0x20a50, args=0x0, argcount=0, 
kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2736 #29 0x00280ea6 in PyEval_EvalCode (co=0x4ee20, globals=0x20a50, locals=0x20a50) at Python/ceval.c:484 #30 0x002a80a0 in PyRun_SimpleStringFlags (command=0x600320 "import scipy.linalg.iterative\n", flags=0xbffff618) at Python/pythonrun.c:1265 #31 0x002b1493 in Py_Main (argc=1, argv=0xbffff6a0) at Modules/main.c: 481 #32 0x000018ee in ?? () #33 0x00001815 in ?? () (gdb) From robert.kern at gmail.com Thu Mar 22 17:07:13 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 22 Mar 2007 16:07:13 -0500 Subject: [SciPy-dev] Unusual crash In-Reply-To: <4DD3A745-D4D7-4D1A-8868-EF27D0D3B550@appliedminds.net> References: <4DD3A745-D4D7-4D1A-8868-EF27D0D3B550@appliedminds.net> Message-ID: <4602F001.7000603@gmail.com> Russel Howe wrote: > I am using today's svn scipy on OS X 10.4 intel with MacPorts python2.4. > > > $ pythonw -c "import scipy.signal" > Floating point exception > $ > > Looking a little more closely, it is the module linalg.iterative that > causes the crash. A gdb backtrace is below, has anybody else run > into this before? Any ideas? I rebuilt python and scipy with no > change. > > $ gdb /opt/local/Library/Frameworks/Python.framework/Versions/2.4/ > Resources/Python.app/Contents/MacOS/Python > > > (gdb) run -c "import scipy.linalg.iterative" > > > > Program received signal EXC_ARITHMETIC, Arithmetic exception. > 0x900dfb26 in strtod_l () > (gdb) bt > #0 0x900dfb26 in strtod_l () > #1 0x90030a66 in strtod () > #2 0x002ac1be in PyOS_ascii_strtod (nptr=0xbfff8680 "1.", '0' > , "1e-05", endptr=0x0) at Python/pystrtod.c:159 > #3 0x002ac469 in PyOS_ascii_atof (nptr=0xbfff8680 "1.", '0' 15 times>, "1e-05") at Python/pystrtod.c:257 > #4 0x002a04c7 in r_object (p=0xbfffcd50) at Python/marshal.c:484 > #5 0x002a0782 in r_object (p=0xbfffcd50) at Python/marshal.c:581 > #6 0x002a053d in r_object (p=0xbfffcd50) at Python/marshal.c:651 > #7 0x002a0da1 in PyMarshal_ReadLastObjectFromFile (fp=0xa000be50) at > Python/marshal.c:804 > #8 0x0029d403 in load_source_module (name=0xbfffd6f7 > "scipy.linalg.iterative", pathname=0xbfffd257 "/opt/local/lib/ > python2.4/site-packages/scipy/linalg/iterative.py", fp=0xa000bdf8) at > Python/import.c:723 That is indeed very unusual. Try deleting its .pyc file and then importing it. Then try importing it after the new .pyc is built. That will probably give you the same result (since you have rebuilt both Python and scipy), but it's the only thing I can think of. I do recommend using the official Python binary from python.org instead of MacPorts' Python. It seems to be the most robust distribution of Python for Macs. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From russel at appliedminds.net Fri Mar 23 19:33:53 2007 From: russel at appliedminds.net (Russel Howe) Date: Fri, 23 Mar 2007 16:33:53 -0700 Subject: [SciPy-dev] Unusual crash In-Reply-To: <4602F001.7000603@gmail.com> References: <4DD3A745-D4D7-4D1A-8868-EF27D0D3B550@appliedminds.net> <4602F001.7000603@gmail.com> Message-ID: <901111FD-5162-4E1F-8170-E8E52CEFBD32@appliedminds.net> Found it! See here, known bug in gfortran 4.1 on OS-X Intel http://gcc.gnu.org/ml/gcc-bugs/2006-03/msg01680.html Now trying to find the right fortran compiler... 
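For anyone chasing similar build mysteries: numpy.distutils can report
which Fortran compilers it finds, which helps confirm whether g77 or a
particular gfortran build (such as the buggy 4.1 above) is about to be
used. A quick check along these lines (show_fcompilers is part of
numpy.distutils; the rest is just one way you might invoke it):

# List the Fortran compilers numpy.distutils can find on this machine.
from numpy.distutils.fcompiler import show_fcompilers
show_fcompilers()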
From robert.kern at gmail.com Fri Mar 23 19:56:24 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 23 Mar 2007 18:56:24 -0500 Subject: [SciPy-dev] Unusual crash In-Reply-To: <901111FD-5162-4E1F-8170-E8E52CEFBD32@appliedminds.net> References: <4DD3A745-D4D7-4D1A-8868-EF27D0D3B550@appliedminds.net> <4602F001.7000603@gmail.com> <901111FD-5162-4E1F-8170-E8E52CEFBD32@appliedminds.net> Message-ID: <46046928.9060708@gmail.com> Russel Howe wrote: > Found it! > > See here, known bug in gfortran 4.1 on OS-X Intel > > http://gcc.gnu.org/ml/gcc-bugs/2006-03/msg01680.html > > Now trying to find the right fortran compiler... I'm not sure how that relates unless if loading the gfortran libraries somehow replaced the standard number parsing functions. The exception that you got was in the unmarshalling code when loading a .pyc. Anyways, I use the gfortran 4.3 binaries from here: http://hpc.sourceforge.net/ It looks like he's updated them again. I have the one from January. I'll try the March build and see how it goes. I wish he would put dates on the archives and keep old ones around. I believe only the gfortran binaries are necessary; you shouldn't have to install the gcc there, too. Ignore the g77 binaries: they don't work, near as I can tell. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From oliphant at ee.byu.edu Fri Mar 23 20:02:57 2007 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 23 Mar 2007 17:02:57 -0700 Subject: [SciPy-dev] New Operators in Python Message-ID: <46046AB1.2070809@ee.byu.edu> Every so often the idea of new operators comes up because of the need to do both "matrix-multiplication" and element-by-element multiplication. I think this is one area where the current Python approach is not as nice because we have a limited set of operators to work with. One thing I wonder is if we are being vocal enough with the Python 3000 crowd to try and get additional operators into the language itself. What if we could get a few new operators into the language to help us. If we don't ask for it, it certainly won't happen. My experience is that the difficulty of using the '*' operator for both matrix multiplication and element-by-element multiplication depending on the class of the object is not especially robust. It makes it harder to write generic code, and we still haven't gotten everything completely right. It is somewhat workable as it stands, but I think it would be nicer if we could have some "meta" operator that allowed an alternative definition of major operators. Something like @* for example (just picking a character that is already used for decorators). I wonder if we should propose such a thing for Python 3000. -Travis From steve at shrogers.com Fri Mar 23 20:13:26 2007 From: steve at shrogers.com (Steven H. Rogers) Date: Fri, 23 Mar 2007 18:13:26 -0600 Subject: [SciPy-dev] New Operators in Python In-Reply-To: <46046AB1.2070809@ee.byu.edu> References: <46046AB1.2070809@ee.byu.edu> Message-ID: <46046D26.3070105@shrogers.com> Travis Oliphant wrote: > > It is somewhat workable as it stands, but I think it would be nicer if > we could have some "meta" operator that allowed an alternative > definition of major operators. Something like @* for example (just > picking a character that is already used for decorators). > > I wonder if we should propose such a thing for Python 3000. > It wouldn't hurt. 
I think 'characters' in Py3K will be Unicode by default, so there should be plenty of symbols available. # Steve From russel at appliedminds.net Fri Mar 23 20:15:47 2007 From: russel at appliedminds.net (Russel Howe) Date: Fri, 23 Mar 2007 17:15:47 -0700 Subject: [SciPy-dev] Unusual crash In-Reply-To: <46046928.9060708@gmail.com> References: <4DD3A745-D4D7-4D1A-8868-EF27D0D3B550@appliedminds.net> <4602F001.7000603@gmail.com> <901111FD-5162-4E1F-8170-E8E52CEFBD32@appliedminds.net> <46046928.9060708@gmail.com> Message-ID: Yeah, I am not sure of that either. I tried and tried to build a simple python-only test case. I finally realized it only happened after importing a fortran-compiled extension. I think the change in flag settings is somehow persistent and blows up later. Russel On Mar 23, 2007, at 4:56 PM, Robert Kern wrote: > Russel Howe wrote: >> Found it! >> >> See here, known bug in gfortran 4.1 on OS-X Intel >> >> http://gcc.gnu.org/ml/gcc-bugs/2006-03/msg01680.html >> >> Now trying to find the right fortran compiler... > > I'm not sure how that relates unless if loading the gfortran > libraries somehow > replaced the standard number parsing functions. The exception that > you got was > in the unmarshalling code when loading a .pyc. > > Anyways, I use the gfortran 4.3 binaries from here: > > http://hpc.sourceforge.net/ > > It looks like he's updated them again. I have the one from January. > I'll try the > March build and see how it goes. I wish he would put dates on the > archives and > keep old ones around. > > I believe only the gfortran binaries are necessary; you shouldn't > have to > install the gcc there, too. Ignore the g77 binaries: they don't > work, near as I > can tell. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a > harmless enigma > that is made terrible by our own mad attempt to interpret it as > though it had > an underlying truth." > -- Umberto Eco > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev From robert.kern at gmail.com Fri Mar 23 20:17:23 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 23 Mar 2007 19:17:23 -0500 Subject: [SciPy-dev] New Operators in Python In-Reply-To: <46046D26.3070105@shrogers.com> References: <46046AB1.2070809@ee.byu.edu> <46046D26.3070105@shrogers.com> Message-ID: <46046E13.5060306@gmail.com> Steven H. Rogers wrote: > Travis Oliphant wrote: >> It is somewhat workable as it stands, but I think it would be nicer if >> we could have some "meta" operator that allowed an alternative >> definition of major operators. Something like @* for example (just >> picking a character that is already used for decorators). >> >> I wonder if we should propose such a thing for Python 3000. > > It wouldn't hurt. I think 'characters' in Py3K will be Unicode by > default, so there should be plenty of symbols available. However, the characters available in Python's syntax (outside of strings) will still be restricted to ASCII. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco

From robert.kern at gmail.com Fri Mar 23 20:18:12 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 23 Mar 2007 19:18:12 -0500
Subject: [SciPy-dev] Unusual crash
In-Reply-To:
References: <4DD3A745-D4D7-4D1A-8868-EF27D0D3B550@appliedminds.net>
	<4602F001.7000603@gmail.com>
	<901111FD-5162-4E1F-8170-E8E52CEFBD32@appliedminds.net>
	<46046928.9060708@gmail.com>
Message-ID: <46046E44.8060405@gmail.com>

Russel Howe wrote:
> Yeah, I am not sure of that either. I tried and tried to build a
> simple python-only test case. I finally realized it only happened
> after importing a fortran-compiled extension. I think the change in
> flag settings is somehow persistent and blows up later.

That's quite possibly true. Let us know what happens with a newer
gfortran.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
 -- Umberto Eco

From wbaxter at gmail.com Fri Mar 23 20:54:35 2007
From: wbaxter at gmail.com (Bill Baxter)
Date: Sat, 24 Mar 2007 09:54:35 +0900
Subject: [SciPy-dev] New Operators in Python
In-Reply-To: <46046AB1.2070809@ee.byu.edu>
References: <46046AB1.2070809@ee.byu.edu>
Message-ID:

On 3/24/07, Travis Oliphant wrote:
>
> Every so often the idea of new operators comes up because of the need
> to do both "matrix-multiplication" and element-by-element
> multiplication.
>
> I think this is one area where the current Python approach is not as
> nice because we have a limited set of operators to work with.

I would probably use such operators if they existed. But I think it will
be an uphill battle. Where else is there a pressing need for more
operators besides Numpy?

I agree that the current situation makes it very tedious to get things
right when there's a mix of matrix and array in the code. It basically
makes it so you can't use * at all, since you can't be sure which
version you'll get. Or you have to asarray() or mat() everything just to
be sure. But at that point just doing dot(a,b) to force matrix multiply
(or multiply() to get elementwise) looks not much worse.

I don't mind the occasional dot(a,b) here and there, actually. That's
pretty readable. But as soon as you have 3 or more terms it gets ugly.
dot(a,dot(b,c)) is just not as easy to read as (a * b * c).

I think it would be nice if dot could take more than one argument.
dot(a,b,c) wouldn't be bad. I seem to recall someone looked into that,
though, and the fact that dot will take a 1-d array and treat it as
either a row or a column made the result ambiguous in some cases. It
seems though like that shouldn't be a show stopper. Just say that the
interpretation is done greedily from left to right, so that dot(a,b,c,d)
has the same meaning as dot(dot(dot(a,b),c),d). As a further extension,
to control ordering, dot could also accept plain sequences, so
dot(a,(b,c)) would compute dot(b,c) first.

For me, those sorts of changes would go a long way towards making life
without extra operators bearable.

--bb
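A sketch of the multi-argument dot Bill describes - greedy left-to-right
reduction, with nested sequences to control grouping. mdot is an
illustrative name here, not an existing numpy function:

import numpy

def mdot(*args):
    # A nested tuple/list is reduced with mdot before being used, so
    # mdot(a, (b, c)) computes dot(b, c) first; otherwise the reduction
    # proceeds greedily from left to right.
    def resolve(arg):
        if isinstance(arg, (tuple, list)):
            return mdot(*arg)
        return arg
    result = resolve(args[0])
    for arg in args[1:]:
        result = numpy.dot(result, resolve(arg))
    return result

# mdot(a, b, c, d)   == dot(dot(dot(a, b), c), d)
# mdot(a, (b, c), d) == dot(dot(a, dot(b, c)), d)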
From robert.kern at gmail.com Fri Mar 23 20:56:16 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 23 Mar 2007 19:56:16 -0500
Subject: [SciPy-dev] New Operators in Python
In-Reply-To: <46046AB1.2070809@ee.byu.edu>
References: <46046AB1.2070809@ee.byu.edu>
Message-ID: <46047730.4000000@gmail.com>

Travis Oliphant wrote:

> I wonder if we should propose such a thing for Python 3000.

I think it's counter-productive to propose it for 3.0. The goal of that
particular point release is to remove stuff and clean it up such that new
stuff like this can be added in 3.1+. I wouldn't try to resurrect the
currently outstanding PEPs until 3.0 is out the door and people start
tackling 3.1.

http://www.python.org/dev/peps/pep-0211/
http://www.python.org/dev/peps/pep-0225/

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
 -- Umberto Eco

From steve at shrogers.com Fri Mar 23 21:50:51 2007
From: steve at shrogers.com (Steven H. Rogers)
Date: Fri, 23 Mar 2007 19:50:51 -0600
Subject: [SciPy-dev] New Operators in Python
In-Reply-To: <46046E13.5060306@gmail.com>
References: <46046AB1.2070809@ee.byu.edu> <46046D26.3070105@shrogers.com>
	<46046E13.5060306@gmail.com>
Message-ID: <460483FB.2040402@shrogers.com>

Robert Kern wrote:
> Steven H. Rogers wrote:
>> Travis Oliphant wrote:
>>> It is somewhat workable as it stands, but I think it would be nicer if
>>> we could have some "meta" operator that allowed an alternative
>>> definition of major operators.  Something like @* for example (just
>>> picking a character that is already used for decorators).
>>>
>>> I wonder if we should propose such a thing for Python 3000.
>> It wouldn't hurt.  I think 'characters' in Py3K will be Unicode by
>> default, so there should be plenty of symbols available.
>
> However, the characters available in Python's syntax (outside of strings)
> will still be restricted to ASCII.
>
Still, Unicode could be used in a Mathematica-notebook-like front end and
translated to an ASCII representation for Python to compile.  Of course, if
you're doing that, there would be no need to add operators to Python itself.

# Steve

From dahl.joachim at gmail.com Sat Mar 24 04:57:50 2007
From: dahl.joachim at gmail.com (Joachim Dahl)
Date: Sat, 24 Mar 2007 09:57:50 +0100
Subject: [SciPy-dev] New Operators in Python
In-Reply-To: <46046AB1.2070809@ee.byu.edu>
References: <46046AB1.2070809@ee.byu.edu>
Message-ID: <47347f490703240157t555a5343g31f94a5bfdb710a5@mail.gmail.com>

That's a great idea!  Having equivalents to Matlab's A\b and A.*B would
make numerical codes simpler, and even though it may not sound terribly
important to Python purists, it would help to make Python a more intuitive
alternative to Matlab also.

Asking the Python developers how they feel about user-defined dyadic
operators can't hurt, and it doesn't sound like a large change that
developers will object to.

On 3/24/07, Travis Oliphant wrote:
>
> Every so often the idea of new operators comes up because of the need to
> do both "matrix-multiplication" and element-by-element multiplication.
>
> I think this is one area where the current Python approach is not as
> nice because we have a limited set of operators to work with.
>
> One thing I wonder is if we are being vocal enough with the Python 3000
> crowd to try and get additional operators into the language itself.
>
> What if we could get a few new operators into the language to help us.
> If we don't ask for it, it certainly won't happen.
>
> My experience is that the difficulty of using the '*' operator for both
> matrix multiplication and element-by-element multiplication depending on
> the class of the object is not especially robust.  It makes it harder to
> write generic code, and we still haven't gotten everything completely
> right.
>
> It is somewhat workable as it stands, but I think it would be nicer if
> we could have some "meta" operator that allowed an alternative
> definition of major operators.  Something like @* for example (just
> picking a character that is already used for decorators).
>
> I wonder if we should propose such a thing for Python 3000.
>
> -Travis
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From openopt at ukr.net Sat Mar 24 05:05:52 2007
From: openopt at ukr.net (dmitrey)
Date: Sat, 24 Mar 2007 11:05:52 +0200
Subject: [SciPy-dev] New Operators in Python
In-Reply-To: <47347f490703240157t555a5343g31f94a5bfdb710a5@mail.gmail.com>
References: <46046AB1.2070809@ee.byu.edu>
	<47347f490703240157t555a5343g31f94a5bfdb710a5@mail.gmail.com>
Message-ID: <4604E9F0.1000405@ukr.net>

But are you able to redefine the '\' operator? Afaik it's implemented in
the Python core as the line-continuation character.
As for me, I would make '.x' operators available as a first step. Then,
probably, I would implement doubled operators for matrices, like **, ^^,
//, etc. I would probably also raise a warning or error on "old-style"
uses of the current operators, and some years later make them errors.
WBR, D.

Joachim Dahl wrote:
> That's a great idea!  Having equivalents to Matlab's A\b and A.*B
> would make numerical codes
> simpler, and even though it may not sound terribly important to
> Python purists, it would
> help to make Python a more intuitive alternative to Matlab also.
>
> Asking the Python developers how they feel about user-defined dyadic
> operators can't hurt, and
> it doesn't sound like a large change that developers will object to.
>
>
> On 3/24/07, *Travis Oliphant*
> wrote:
>
>
>     Every so often the idea of new operators comes up because of the
>     need to
>     do both "matrix-multiplication" and element-by-element multiplication.
>
>     I think this is one area where the current Python approach is not as
>     nice because we have a limited set of operators to work with.
>
>     One thing I wonder is if we are being vocal enough with the Python
>     3000
>     crowd to try and get additional operators into the language itself.
>
>     What if we could get a few new operators into the language to help us.
>     If we don't ask for it, it certainly won't happen.
>
>     My experience is that the difficulty of using the '*' operator for
>     both
>     matrix multiplication and element-by-element multiplication
>     depending on
>     the class of the object is not especially robust.  It makes it
>     harder to
>     write generic code, and we still haven't gotten everything completely
>     right.
>
>     It is somewhat workable as it stands, but I think it would be nicer if
>     we could have some "meta" operator that allowed an alternative
>     definition of major operators.  Something like @* for example (just
>     picking a character that is already used for decorators).
>
>     I wonder if we should propose such a thing for Python 3000.
>
>     -Travis
>
>     _______________________________________________
>     Scipy-dev mailing list
>     Scipy-dev at scipy.org
>     http://projects.scipy.org/mailman/listinfo/scipy-dev
>
>
> ------------------------------------------------------------------------
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev
>

From perry at stsci.edu Sat Mar 24 14:47:19 2007
From: perry at stsci.edu (Perry Greenfield)
Date: Sat, 24 Mar 2007 14:47:19 -0400
Subject: [SciPy-dev] New Operators in Python
In-Reply-To: <46047730.4000000@gmail.com>
References: <46046AB1.2070809@ee.byu.edu> <46047730.4000000@gmail.com>
Message-ID:

On Mar 23, 2007, at 8:56 PM, Robert Kern wrote:

> Travis Oliphant wrote:
>
>> I wonder if we should propose such a thing for Python 3000.
>
> I think it's counter-productive to propose it for 3.0. The goal of
> that particular point release is to remove stuff and clean it up such
> that new stuff like this can be added in 3.1+. I wouldn't try to
> resurrect the currently outstanding PEPs until 3.0 is out the door
> and people start tackling 3.1.
>
> http://www.python.org/dev/peps/pep-0211/
> http://www.python.org/dev/peps/pep-0225/
>
Maybe you are both right. I don't think it would hurt to let the Python
3000 crowd know about our need, while also letting them know that it
doesn't have to be in the first release. By bringing it up now we avoid
the response of "why didn't you raise it before we came out with 3.0?"
It should be left to them to decide when the appropriate time is for
consideration and implementation. Who knows, they may prefer to deal with
it earlier rather than later. I think this is probably the appropriate
time to at least give them a heads up that it is important to us.

No other solution is as good as this one, not even close. They may
consider it of limited use, but it's not the first time they have
accommodated numeric needs that are fairly narrow (e.g., complex, rich
comparisons). No one else has to use it if they don't want to, and it
doesn't conflict with any other current usage (or even proposed as far as
I know).

So I'm all for asking now.

Perry

From robert.kern at gmail.com Sat Mar 24 17:34:58 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 24 Mar 2007 16:34:58 -0500
Subject: [SciPy-dev] New Operators in Python
In-Reply-To:
References: <46046AB1.2070809@ee.byu.edu> <46047730.4000000@gmail.com>
Message-ID: <46059982.4080400@gmail.com>

Perry Greenfield wrote:

> No other solution is as good as this one, not even close. They may
> consider it of limited use, but it's not the first time they have
> accommodated numeric needs that are fairly narrow (e.g., complex,
> rich comparisons). No one else has to use it if they don't want to,
> and it doesn't conflict with any other current usage (or even
> proposed as far as I know).
>
> So I'm all for asking now.

I also think it's critical that we don't *ask* for anything. Instead, we
need to *offer* an implementation. If we are the only people who are going
to use it, we will have to be the ones who pony up with the implementation.

Using EasyExtend to experiment with the grammar is probably a good idea to
ensure feasibility before going full-bore with a patch against the
interpreter.
http://www.fiber-space.de/EasyExtend/doc/EE.html -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From perry at stsci.edu Sat Mar 24 19:06:06 2007 From: perry at stsci.edu (Perry Greenfield) Date: Sat, 24 Mar 2007 19:06:06 -0400 Subject: [SciPy-dev] New Operators in Python In-Reply-To: <46059982.4080400@gmail.com> References: <46046AB1.2070809@ee.byu.edu> <46047730.4000000@gmail.com> <46059982.4080400@gmail.com> Message-ID: On Mar 24, 2007, at 5:34 PM, Robert Kern wrote: > Perry Greenfield wrote: > >> No other solution is as good as this one, not even close. They may >> consider it of limited use, but it's not the first time they have >> accommodated numeric needs that are fairly narrow (e.g., complex, >> rich comparisons). No one else has to use it if they don't want to, >> and it doesn't conflict with any other current usage (or even >> proposed as far as I know). >> >> So I'm all for asking now. > > I also think it's critical that we don't *ask* for anything. > Instead, we need to > *offer* an implementation. If we are the only people who are going > to use it, we > will have to be the ones who pony up with the implementation. > > Using EasyExtend to experiment with the grammar is probably a good > idea to > ensure feasibility before going full-bore with a patch against the > interpreter. > > http://www.fiber-space.de/EasyExtend/doc/EE.html I don't disagree. It would be good to get some feedback from them before investing much work on this to see if they would even consider including the work. If they say: "No way!" then the work is primarily political. If it is: "sure, we can consider it if you give us an implementation." Then it becomes technical work. But my impressions of how this has been received in the past looked more like "Don't think so unless we are persuaded that there is a real need for it", not that it wasn't worth their effort or wasn't technically possible. Perry From robert.kern at gmail.com Sat Mar 24 19:12:34 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 24 Mar 2007 18:12:34 -0500 Subject: [SciPy-dev] New Operators in Python In-Reply-To: References: <46046AB1.2070809@ee.byu.edu> <46047730.4000000@gmail.com> <46059982.4080400@gmail.com> Message-ID: <4605B062.2090602@gmail.com> Perry Greenfield wrote: > On Mar 24, 2007, at 5:34 PM, Robert Kern wrote: > >> Perry Greenfield wrote: >> >>> No other solution is as good as this one, not even close. They may >>> consider it of limited use, but it's not the first time they have >>> accommodated numeric needs that are fairly narrow (e.g., complex, >>> rich comparisons). No one else has to use it if they don't want to, >>> and it doesn't conflict with any other current usage (or even >>> proposed as far as I know). >>> >>> So I'm all for asking now. >> I also think it's critical that we don't *ask* for anything. >> Instead, we need to >> *offer* an implementation. If we are the only people who are going >> to use it, we >> will have to be the ones who pony up with the implementation. >> >> Using EasyExtend to experiment with the grammar is probably a good >> idea to >> ensure feasibility before going full-bore with a patch against the >> interpreter. >> >> http://www.fiber-space.de/EasyExtend/doc/EE.html > > I don't disagree. 
It would be good to get some feedback from them > before investing much work on this to see if they would even consider > including the work. If they say: "No way!" then the work is primarily > political. If it is: "sure, we can consider it if you give us an > implementation." Then it becomes technical work. But my impressions > of how this has been received in the past looked more like "Don't > think so unless we are persuaded that there is a real need for it", > not that it wasn't worth their effort or wasn't technically possible. Right now, though, the py3k list is entirely devoted to implementation for the 3.0 release. Ideas without implementations are shunted off to python-ideas, which Guido doesn't read. Language feature proposals without code are pretty much dead in the water at the moment. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From russel at appliedminds.net Sat Mar 24 21:08:25 2007 From: russel at appliedminds.net (Russel Howe) Date: Sat, 24 Mar 2007 18:08:25 -0700 Subject: [SciPy-dev] Unusual crash In-Reply-To: <46046E44.8060405@gmail.com> References: <4DD3A745-D4D7-4D1A-8868-EF27D0D3B550@appliedminds.net> <4602F001.7000603@gmail.com> <901111FD-5162-4E1F-8170-E8E52CEFBD32@appliedminds.net> <46046928.9060708@gmail.com> <46046E44.8060405@gmail.com> Message-ID: <4E6C9FF8-691B-40D3-BA5C-020069AED41E@appliedminds.net> Both DarwinPorts gcc43 gfortran and the hpc binary worked. hpc: GNU Fortran (GCC) 4.3.0 20070316 (experimental) DarwinPorts: GNU Fortran (GCC) 4.3.0 20070309 (experimental) However, both of these have dropped the "95" from the version string, so I had to make this change to get either to work: Index: numpy/distutils/fcompiler/gnu.py =================================================================== --- numpy/distutils/fcompiler/gnu.py (revision 3593) +++ numpy/distutils/fcompiler/gnu.py (working copy) @@ -250,7 +250,7 @@ class Gnu95FCompiler(GnuFCompiler): compiler_type = 'gnu95' - version_match = simple_version_match(start='GNU Fortran 95') + version_match = simple_version_match(start='GNU Fortran (95|\(GCC \))') # 'gfortran --version' results: # Debian: GNU Fortran 95 (GCC 4.0.3 20051023 (prerelease) (Debian 4.0.2-3)) From robert.kern at gmail.com Sun Mar 25 17:58:37 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 25 Mar 2007 16:58:37 -0500 Subject: [SciPy-dev] Unusual crash In-Reply-To: <4E6C9FF8-691B-40D3-BA5C-020069AED41E@appliedminds.net> References: <4DD3A745-D4D7-4D1A-8868-EF27D0D3B550@appliedminds.net> <4602F001.7000603@gmail.com> <901111FD-5162-4E1F-8170-E8E52CEFBD32@appliedminds.net> <46046928.9060708@gmail.com> <46046E44.8060405@gmail.com> <4E6C9FF8-691B-40D3-BA5C-020069AED41E@appliedminds.net> Message-ID: <4606F08D.8070909@gmail.com> Russel Howe wrote: > Both DarwinPorts gcc43 gfortran and the hpc binary worked. > > hpc: GNU Fortran (GCC) 4.3.0 20070316 (experimental) > DarwinPorts: GNU Fortran (GCC) 4.3.0 20070309 (experimental) > > However, both of these have dropped the "95" from the version string, > so I had to make this change to get either to work: I'll check it in when scipy.org comes back up. Thank you! -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco

From aisaac at american.edu Mon Mar 26 10:40:08 2007
From: aisaac at american.edu (Alan G Isaac)
Date: Mon, 26 Mar 2007 10:40:08 -0400
Subject: [SciPy-dev] Proposal for more generic optimizers (posted before on scipy-user)
In-Reply-To:
References:
Message-ID:

> Alan wrote:
>> Right now you require an object that implements methods with
>> certain names, which is ok but I think not perfect.  Here is
>> a possibly crazy thought, just to explore.  How about making
>> all arguments optional, and allowing passing either such an
>> object or the needed components?  Finally, to accommodate
>> existing objects with a different name structure, allow
>> passing a dictionary of attribute names.

On Thu, 22 Mar 2007, Matthieu Brucher apparently wrote:
> Indeed, that could be a solution.  The only question
> remaining is how to use the dictionary, perhaps creating
> a fake object with the correct interface ?

OK, I see why you want that approach.
(So that you can still pass a single object around in your
optimizer module.)  Yes, that seems right...

This seems to bundle naturally with a specific optimizer?
If so, the class definition should reside in the
StandardOptimizer module.

Cheers,
Alan Isaac

PS For readability, I think Optimizer should define
a "virtual" iterate method.  E.g.,
    def iterate(self):
        return NotImplemented

From matthieu.brucher at gmail.com Mon Mar 26 11:13:48 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Mon, 26 Mar 2007 16:13:48 +0100
Subject: [SciPy-dev] Proposal for more generic optimizers (posted before on scipy-user)
In-Reply-To:
References:
Message-ID:

>
> OK, I see why you want that approach.
> (So that you can still pass a single object around in your
> optimizer module.)  Yes, that seems right...

Exactly :)

This seems to bundle naturally with a specific optimizer?

I'm not an expert in optimization, but I attended several classes/seminars
on the subject, and at least the usual simple optimizers - the standard
optimizer, all damped approaches, and all the others that use a step and a
criterion test - use this interface, with a lot of the usual steps -
gradient, every conjugate-gradient variant, (quasi-)Newton - or criteria.
I even suppose it can do very well in semi-quadratic optimization, with
very little change, but I have to finish some work before I can read some
books on the subject to begin implementing it in Python.

If so, the class definition should reside in the StandardOptimizer module.
>
> Cheers,
> Alan Isaac
>
> PS For readability, I think Optimizer should define
> a "virtual" iterate method.  E.g.,
>     def iterate(self):
>         return NotImplemented

Yes, it seems better. Thanks for the opinion !

Matthieu
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cimrman3 at ntc.zcu.cz Tue Mar 27 13:27:32 2007
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Tue, 27 Mar 2007 19:27:32 +0200
Subject: [SciPy-dev] [Numpy-discussion] Tuning sparse stuff in NumPy
In-Reply-To:
References: <4607E40E.5000300@ntc.zcu.cz> <4607EDCD.5020001@ntc.zcu.cz>
Message-ID: <46095404.4020106@ntc.zcu.cz>

David Koch wrote:
> Ok,
>
> I did and the results are:
> csc * csc: 372.601957083
> csc * csc: 3.90811300278

a typo here? which one is csr?

> csr * csc: 15.3202679157
> csr * csr: 3.84498214722
>
> Mhm, quite insightful. Note, that in an operation X.transpose() * X, where X
> is csc_matrix, then X.transpose() is automatically cast to csr_matrix. A
> re-cast to csc makes the whole operation faster. It's still about 1000 times
> slower than Matlab but 4 times faster than before.

ok. now which version of scipy (scipy.__version__) do you use (you may have
posted it, but I missed it)? Not so long ago, there was an effort by Nathan
Bell and others reimplementing sparsetools + scipy.sparse to get better
usability and performance. My (almost latest) version is 0.5.3.dev2860.

r.
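A minimal sketch of the effect discussed in this thread: with X stored as a
csc_matrix, X.transpose() comes back in CSR format, and re-casting it to CSC
before the product avoids the slower mixed csr * csc path. The matrix shapes
and the dense random fill are illustrative assumptions only:

    import numpy as np
    from scipy import sparse

    X = sparse.csc_matrix(np.random.rand(200, 100))
    Xt = X.transpose()        # transpose of a csc_matrix is a csr_matrix
    G_mixed = Xt * X          # csr * csc: the slower mixed-format product
    G_csc = Xt.tocsc() * X    # csc * csc: re-cast first, as suggested above
    assert abs(G_mixed.todense() - G_csc.todense()).max() < 1e-10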
From nwagner at iam.uni-stuttgart.de Thu Mar 29 03:08:25 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Thu, 29 Mar 2007 09:08:25 +0200
Subject: [SciPy-dev] Can't install scipy from svn (caused by ndimage)
Message-ID: <460B65E9.3020806@iam.uni-stuttgart.de>

Hi all,

I cannot build/install scipy from svn. Here is the output of

python setup.py install
...
Lib/ndimage/src/ni_morphology.c: In function 'NI_BinaryErosion':
Lib/ndimage/src/ni_morphology.c:101: warning: implicit declaration of function 'NA_OFFSETDATA'
Lib/ndimage/src/ni_morphology.c:101: warning: cast to pointer from integer of different size
Lib/ndimage/src/ni_morphology.c:110: warning: assignment makes pointer from integer without a cast
Lib/ndimage/src/ni_morphology.c:128: warning: assignment makes pointer from integer without a cast
Lib/ndimage/src/ni_morphology.c:129: warning: assignment makes pointer from integer without a cast
Lib/ndimage/src/ni_morphology.c: In function 'NI_BinaryErosion2':
Lib/ndimage/src/ni_morphology.c:313: warning: cast to pointer from integer of different size
Lib/ndimage/src/ni_morphology.c:340: warning: assignment makes pointer from integer without a cast
Lib/ndimage/src/ni_morphology.c:354: warning: assignment makes pointer from integer without a cast
Lib/ndimage/src/ni_morphology.c:370: warning: assignment makes pointer from integer without a cast
Lib/ndimage/src/ni_morphology.c:467: warning: assignment makes pointer from integer without a cast
Lib/ndimage/src/ni_morphology.c:468: warning: assignment makes pointer from integer without a cast
Lib/ndimage/src/ni_morphology.c: In function 'NI_DistanceTransformBruteForce':
Lib/ndimage/src/ni_morphology.c:513: warning: pointer/integer type mismatch in conditional expression
Lib/ndimage/src/ni_morphology.c:517: warning: assignment makes pointer from integer without a cast
Lib/ndimage/src/ni_morphology.c:523: warning: assignment makes pointer from integer without a cast
Lib/ndimage/src/ni_morphology.c:531: warning: assignment makes pointer from integer without a cast
Lib/ndimage/src/ni_morphology.c:554: warning: assignment makes pointer from integer without a cast
Lib/ndimage/src/ni_morphology.c: In function 'NI_DistanceTransformOnePass':
Lib/ndimage/src/ni_morphology.c:681: warning: cast to pointer from integer of different size
Lib/ndimage/src/ni_morphology.c:691: warning: assignment makes pointer from integer without a cast
Lib/ndimage/src/ni_morphology.c:709: warning: assignment makes pointer from integer without a cast
Lib/ndimage/src/ni_morphology.c: In function 'NI_EuclideanFeatureTransform':
Lib/ndimage/src/ni_morphology.c:921: warning: pointer/integer type mismatch in conditional expression
Lib/ndimage/src/ni_morphology.c:923: warning: assignment makes pointer from integer without a cast
Lib/ndimage/src/ni_morphology.c:924: warning: assignment makes pointer from integer without a cast
Lib/ndimage/src/ni_morphology.c: At top level:
/usr/lib64/python2.4/site-packages/numpy/core/include/numpy/__multiarray_api.h:944: warning: '_import_array' defined but not used
gcc: Lib/ndimage/src/ni_support.c
Lib/ndimage/src/ni_support.c: In function 'NI_InitLineBuffer':
Lib/ndimage/src/ni_support.c:147: warning: implicit declaration of function 'NA_OFFSETDATA'
Lib/ndimage/src/ni_support.c:147: warning: assignment makes pointer from integer without a cast
Lib/ndimage/src/ni_support.c: At top level:
/usr/lib64/python2.4/site-packages/numpy/core/include/numpy/__multiarray_api.h:944: warning: '_import_array' defined but not used
gcc: Lib/ndimage/src/ni_fourier.c
Lib/ndimage/src/ni_fourier.c: In function 'NI_FourierFilter':
Lib/ndimage/src/ni_fourier.c:190: warning: implicit declaration of function 'NA_OFFSETDATA'
Lib/ndimage/src/ni_fourier.c:190: warning: initialization makes pointer from integer without a cast
Lib/ndimage/src/ni_fourier.c:315: warning: assignment makes pointer from integer without a cast
Lib/ndimage/src/ni_fourier.c:316: warning: assignment makes pointer from integer without a cast
Lib/ndimage/src/ni_fourier.c:366: error: 'npy_complex32' undeclared (first use in this function)
Lib/ndimage/src/ni_fourier.c:366: error: (Each undeclared identifier is reported only once
Lib/ndimage/src/ni_fourier.c:366: error: for each function it appears in.)
Lib/ndimage/src/ni_fourier.c:366: error: syntax error before ')' token
Lib/ndimage/src/ni_fourier.c:366: error: syntax error before ')' token
Lib/ndimage/src/ni_fourier.c:367: error: 'struct ' has no member named 'r'
Lib/ndimage/src/ni_fourier.c:367: error: 'struct ' has no member named 'i'
Lib/ndimage/src/ni_fourier.c:373: error: syntax error before ')' token
Lib/ndimage/src/ni_fourier.c:373: error: syntax error before ')' token
Lib/ndimage/src/ni_fourier.c:374: error: 'struct ' has no member named 'r'
Lib/ndimage/src/ni_fourier.c:374: error: 'struct ' has no member named 'i'
Lib/ndimage/src/ni_fourier.c:401: error: syntax error before ')' token
Lib/ndimage/src/ni_fourier.c:401: error: syntax error before ')' token
Lib/ndimage/src/ni_fourier.c:402: error: 'struct ' has no member named 'r'
Lib/ndimage/src/ni_fourier.c:402: error: 'struct ' has no member named 'i'
Lib/ndimage/src/ni_fourier.c: In function 'NI_FourierShift':
Lib/ndimage/src/ni_fourier.c:441: warning: initialization makes pointer from integer without a cast
Lib/ndimage/src/ni_fourier.c:497: warning: assignment makes pointer from integer without a cast
Lib/ndimage/src/ni_fourier.c:498: warning: assignment makes pointer from integer without a cast
Lib/ndimage/src/ni_fourier.c:524: error: 'npy_complex32' undeclared (first use in this function)
Lib/ndimage/src/ni_fourier.c:524: error: syntax error before ')' token
Lib/ndimage/src/ni_fourier.c:524: error: syntax error before ')' token
Lib/ndimage/src/ni_fourier.c:524: error: syntax error before ')' token
Lib/ndimage/src/ni_fourier.c:524: error: syntax error before ')' token
Lib/ndimage/src/ni_fourier.c:525: error: 'struct ' has no member named 'r'
Lib/ndimage/src/ni_fourier.c:525: error: 'struct ' has no member named 'i'
Lib/ndimage/src/ni_fourier.c:525: error: 'struct ' has no member named 'r'
Lib/ndimage/src/ni_fourier.c:525: error: 'struct ' has no member named 'i'
Lib/ndimage/src/ni_fourier.c:531: error: syntax error before ')' token
Lib/ndimage/src/ni_fourier.c:531: error: syntax error before ')' token
Lib/ndimage/src/ni_fourier.c:532: error: 'struct ' has no member named 'r'
Lib/ndimage/src/ni_fourier.c:532: error: 'struct ' has no member named 'i'
error: Command "gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g -fPIC -ILib/ndimage/src -I/usr/lib64/python2.4/site-packages/numpy/core/include -I/usr/lib64/python2.4/site-packages/numpy/core/include -I/usr/include/python2.4 -c Lib/ndimage/src/ni_fourier.c -o build/temp.linux-x86_64-2.4/Lib/ndimage/src/ni_fourier.o" failed with exit status 1

Nils

From stefan at sun.ac.za Thu Mar 29 04:47:17 2007
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Thu, 29 Mar 2007 10:47:17 +0200
Subject: [SciPy-dev] Can't install scipy from svn (caused by ndimage)
In-Reply-To: <460B65E9.3020806@iam.uni-stuttgart.de>
References: <460B65E9.3020806@iam.uni-stuttgart.de>
Message-ID: <20070329084717.GB18196@mentat.za.net>

On Thu, Mar 29, 2007 at 09:08:25AM +0200, Nils Wagner wrote:
> I cannot build/install scipy from svn. Here is the output of
>
> python setup.py install

Sorry, Nils, I'm just moving around parts of ndimage.  I'll be done
soon.

Cheers
Stéfan

From nwagner at iam.uni-stuttgart.de Thu Mar 29 07:16:13 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Thu, 29 Mar 2007 13:16:13 +0200
Subject: [SciPy-dev] Can't install scipy from svn (caused by ndimage)
In-Reply-To: <20070329084717.GB18196@mentat.za.net>
References: <460B65E9.3020806@iam.uni-stuttgart.de>
	<20070329084717.GB18196@mentat.za.net>
Message-ID: <460B9FFD.5090007@iam.uni-stuttgart.de>

Stefan van der Walt wrote:
> On Thu, Mar 29, 2007 at 09:08:25AM +0200, Nils Wagner wrote:
>
>> I cannot build/install scipy from svn. Here is the output of
>>
>> python setup.py install
>>
>
> Sorry, Nils, I'm just moving around parts of ndimage.  I'll be done
> soon.
>
> Cheers
> Stéfan
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev
>
Hi Stéfan,

Now python setup.py install works fine for me, but I get a segfault
running scipy.test(1) on a 64-bit machine using

>>> scipy.__version__
'0.5.3.dev2888'

Here is a backtrace

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 46912509653888 (LWP 6350)]
PyArray_NewFromDescr (subtype=0x2aaaabf1dee0, descr=0x2aaaabf20140, nd=1,
    dims=0x2, strides=0x0, data=0x0, flags=0, obj=0x0) at arrayobject.c:5318
5318            if (dims[i] == 0) continue;
(gdb) bt
#0  PyArray_NewFromDescr (subtype=0x2aaaabf1dee0, descr=0x2aaaabf20140, nd=1,
    dims=0x2, strides=0x0, data=0x0, flags=0, obj=0x0) at arrayobject.c:5318
#1  0x00002aaaae12aea2 in NA_NewArray (buffer=0xa499e0, type=, ndim=1,
    shape=0x2) at nd_image.h:212
#2  0x00002aaaae12bf7c in Py_FilterFunc (buffer=, filter_size=,
    output=0x7fffff88b9d8, data=) at nd_image.c:351
#3  0x00002aaaae1320a9 in NI_GenericFilter (input=0xc68290,
    function=0x2aaaae12bf40 , data=0x7fffff88ba80, footprint=,
    output=0x95a070, mode=, cvalue=0, origins=0xa88eb0) at ni_filters.c:858
#4  0x00002aaaae12c263 in Py_GenericFilter (obj=, args=) at nd_image.c:411

Cheers,
Nils

From stefan at sun.ac.za Thu Mar 29 09:00:29 2007
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Thu, 29 Mar 2007 15:00:29 +0200
Subject: [SciPy-dev] Can't install scipy from svn (caused by ndimage)
In-Reply-To: <460B9FFD.5090007@iam.uni-stuttgart.de>
References: <460B65E9.3020806@iam.uni-stuttgart.de>
	<20070329084717.GB18196@mentat.za.net>
	<460B9FFD.5090007@iam.uni-stuttgart.de>
Message-ID: <20070329130029.GD18196@mentat.za.net>

On Thu, Mar 29, 2007 at 01:16:13PM +0200, Nils Wagner wrote:
> Stefan van der Walt wrote:
> > On Thu, Mar 29, 2007 at 09:08:25AM +0200, Nils Wagner wrote:
> >
> >> I cannot build/install scipy from svn. Here is the output of
> >>
> >> python setup.py install
> >
> > Sorry, Nils, I'm just moving around parts of ndimage.  I'll be done
> > soon.
>
> Now python setup.py install works fine for me, but I get a segfault
> running scipy.test(1) on a 64-bit machine using
> >>> scipy.__version__
> '0.5.3.dev2888'

Like I said, I'm working on it.  Already fixed the memory error in r2889.

Cheers
Stéfan

From a.schmolck at gmx.net Wed Mar 28 23:13:28 2007
From: a.schmolck at gmx.net (Alexander Schmolck)
Date: 29 Mar 2007 04:13:28 +0100
Subject: [SciPy-dev] mlabwrap scikit [Was: Re: scikits project]
Message-ID:

[Second attempt to move this to scipy-dev, this time using the right email
account; for those who have no idea what mlabwrap is about: it's a python to
matlab bridge currently on the move from sourceforge to scipy's emerging
scikits infrastructure]

Hi,

First, it's great that mlabwrap has found a new home at scipy.org, big
thanks to everyone involved. I'm afraid that I'm still not quite up to speed
on all infrastructural issues -- I've not used subversion before and I
haven't completely grokked the scipy infrastructure yet (although I've been
trying to read up, especially on svn) and I'm a bit puzzled by some things
(see below), so please excuse if some of what follows sounds a bit clueless.

In order to bring the mlabwrap project forward, there are three things I'd
like to achieve in the immediate future (by end of next week):

1. Resolve remaining infrastructural questions
2. Bring out mlabwrap-1.0 final
3. Discuss a roadmap and feature requirements for mlabwrap 2.0

So let's start out with 1. :
----------------------------

"Jarrod Millman" writes:

> Hey,
>
> Jeff got everything setup and I tested it out.  Everything looks good
> (Thanks Jeff and Robert!).
>
> I checked in mlabwrap v1.0b:
> https://projects.scipy.org/scipy/scikits/browser/trunk/mlabwrap
> And then since Jeff installed the rest macro; I added
> [[ReST(/trunk/mlabwrap/README.txt)]]
> which Trac renders like this:
> https://projects.scipy.org/scipy/scikits/wiki/MlabWrap
> I also moved the discussion about proxy objects from the NIPY site:
> https://projects.scipy.org/scipy/scikits/wiki/MlabProxyObjects
>
> Hopefully everyone has access to the scikits subversion repository
> now.  If not, please try and get access ASAP.

Shouldn't having a svn account mean that I'd be able to successfully log in
to the trac interface at using the same account? I can use neither the
AlexanderSchmolck login I created on scipy.org, nor the POP3/SVN login
a.schmolck at gmail.com[1] I received an email from webmin at scipy.org on
Friday[3]; however I *can* commit to the repository using svn (I've sent an
email to webmin at scipy.org, but haven't heard anything back).

Another thing I've been wondering about is whether it would be easy to
import the existing CVS repository into svn? I've fiddled around a little
bit with cvs2svn and on first sight it looks like it might work OK. Losing
the project history wouldn't be too tragic (and I can just try to get the
sf staff to upload the old CVS repository on the sf page, so it's publicly
available somewhere), but if we can just import the existing repository
without too much of a hassle, I think this might be the best option. Before
doing anything with the CVS repository, I'd first manually adapt the
directory structure, though, bringing us to

1.a. source code and svn organization
'''''''''''''''''''''''''''''''''''''

In an email some time ago Robert Kern proposed the following directory
structure for scikits:

    branches/
    tags/
    trunk/
        mlabwrap/
            setup.py
            scikits/
                __init__.py
                mlabwrap/
                    __init__.py

The current directory structure misses the scikits/ dir and the second
mlabwrap/ dir; instead it has mlabwrap.py at top-level (as well as the
utility libraries awmstools.py and awmsmeta.py and the c++-extension
mlabraw.cpp). Independent of scikits conventions, it seems desirable to
make mlabwrap a package and stuff everything it needs inside (i.e. the
utility libs and mlabraw) rather than polluting the toplevel
module-namespace.

I assume this question reflects my svn-newbie status, but why doesn't the
scikits structure look something like this, given that as I understand it
scikits are meant to be essentially independent (and hence independently
versioned) projects under a common namespace?

    mlabwrap/
        branches/
        tags/
        trunk/
            setup.py
            scikits/
                __init__.py
                mlabwrap/
                    __init__.py
    some_other_scikit/
        branches/
        ...

It somehow seems to me that tags and branches should apply to individual
projects and that one could still do convenient releases and svn-checkouts
of scikits/ as a whole by using e.g. svn:externals to the individual
projects. As I said, I'm not really knowledgeable about svn, but I'd like
to understand the logic a bit better, because I'm also trying to work out
what to do about the awmsmeta.py and awmstools.py stuff, which isn't really
a part of mlabwrap as such. I see three ways of dealing with the dependency
on the utility modules:

D1. Drop it and copy-and-paste the needed bits into mlabwrap.py (certainly
    possible, only a few functions are needed; OTOH it seems a bit ugly)
D2. Rely on cheeseshop (but I don't have the impression that it is equally
    acceptable and painless for a python module to download and install
    dependencies as it is for a CTAN module)
D3. Stuff copies into mlabwrap/ and do a relative import

D3. seems most attractive to me at the moment but one would still have to
figure out where the VC of awmstools.py and awmsmeta.py would "live"; I
thought it would be cleanest if they were included as svn:externals and not
directly part of the mlabwrap svn tree, so I thought maybe something like
this (ignoring the question of where branches/ trunk/ and tags/ go exactly):

    mlabwrap/
        setup.py
        scikits/
            __init__.py
            README.txt # etc.
            mlabraw.py # dummy importing mlabwrap/mlabraw, for backwards comp.
            mlabwrap/
                __init__.py
                -> awmstools.py
                -> awmsmeta.py
                mlabwraw.cpp
                test/
                    test_mlabwrap.py # etc.

I also assume we want a branch for the 1.0.x version, a tag for the 1.0
release (when it's ready) and that development of the next version will
continue in the trunk.

Comments?

Also are there any strong feelings/established procedures concerning
__init__.py? The generic name is somewhat inconvenient in various contexts
(e.g. buffer switching in emacs) so I thought about making __init__.py a
dummy importing ./_mlabwrap.py or so.

OK, sorry this is getting so long, on to

2. bringing out mlabwrap-1.0 final:
------------------------------------

Apart from the question what the directory structure should look like, the
only issue that I'm currently aware of is that there appears to be a problem
with `setup.py` automatically finding out about the matlab version under
windows; if no one steps up to do some windows testing I'm also willing to
apply some fix I assume will work and release as is (windows users might
have to set MATLAB_VERSION etc. in setup.py by hand if it doesn't, which
isn't *that* terrible). Actually, one more thing: distutils vs. scipy
distutils vs. setuptools -- which one should mlabwrap-1.0 final use? I'm
only really fully familiar with the first one.

Finally point 3, the road-map:
------------------------------

I'd like to get some idea what is felt as needed for the next version of
mlabwrap (let's say 2), by when (e.g. because it would be really handy for
NIPY; how important is e.g. that 'proxy.a.b = c' works as expected?) and
who is interested in working on specific parts (like the move to ctypes).
Also what python and matlab versions should mlabwrap 2 have to support? How
should changes affecting backwards compatibility best be handled (maybe
allowing getting v1 default behavior with something like `from mlabwrap.v1
import mlab`?)? I've put up some notes concerning mlabwrap development and
design issues at , corrections, additions and feedback (using either the
wiki page itself or scipy-dev) are highly welcome.

Sorry for the relatively long silence -- apart from hacking on the next
version, writing up my thoughts and wikifying took *far* longer than I
would have thought, especially since it's only personal note-quality rather
than something polished. I hope it's at least partly intelligible and
useful.

cheers,

'as

Footnotes:
[1] Are those two account logins meant to be different or is this just
    because I was too dumb to notify Jeff Strunk of my newly created
    scipy.org account in time? BTW unless there's some convention that the
    svn login should be an email address, I'd rather just have a.schmolck
    (sans gmail.com), if it's something that's easy to change.
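A sketch of the two __init__.py files implied by the layout above. The
declare_namespace call is one setuptools-era convention for namespace
packages (a later reply in this thread suggests simply leaving
scikits/__init__.py empty), and _mlabwrap.py is the hypothetical internal
module name floated in the message above:

    # scikits/__init__.py -- declare the shared namespace package
    # (one setuptools convention of the time; an empty file plus a
    # namespace_packages entry in setup.py was the alternative).
    __import__('pkg_resources').declare_namespace(__name__)

    # scikits/mlabwrap/__init__.py -- the "dummy importing ./_mlabwrap.py"
    # idea: keep the implementation in _mlabwrap.py and re-export it here.
    from scikits.mlabwrap._mlabwrap import *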
From stefan at sun.ac.za Thu Mar 29 09:12:36 2007
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Thu, 29 Mar 2007 15:12:36 +0200
Subject: [SciPy-dev] mlabwrap scikit [Was: Re: scikits project]
In-Reply-To:
References:
Message-ID: <20070329131236.GF18196@mentat.za.net>

On Thu, Mar 29, 2007 at 04:13:28AM +0100, Alexander Schmolck wrote:
> In an email some time ago Robert Kern proposed the following directory
> structure for scikits:
>
>     branches/
>     tags/
>     trunk/
>         mlabwrap/
>             setup.py
>             scikits/
>                 __init__.py
>                 mlabwrap/
>                     __init__.py

Shouldn't that be

    branches/
    tags/
    trunk/
        scikits/
            setup.py
            __init__.py
            mlabwrap/
                __init__.py
            gpc/
                __init__.py

etc.?

Cheers
Stéfan

From nwagner at iam.uni-stuttgart.de Thu Mar 29 09:24:29 2007
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Thu, 29 Mar 2007 15:24:29 +0200
Subject: [SciPy-dev] Can't install scipy from svn (caused by ndimage)
In-Reply-To: <20070329130029.GD18196@mentat.za.net>
References: <460B65E9.3020806@iam.uni-stuttgart.de>
	<20070329084717.GB18196@mentat.za.net>
	<460B9FFD.5090007@iam.uni-stuttgart.de>
	<20070329130029.GD18196@mentat.za.net>
Message-ID: <460BBE0D.8040408@iam.uni-stuttgart.de>

Stefan van der Walt wrote:
> On Thu, Mar 29, 2007 at 01:16:13PM +0200, Nils Wagner wrote:
>
>> Stefan van der Walt wrote:
>>
>>> On Thu, Mar 29, 2007 at 09:08:25AM +0200, Nils Wagner wrote:
>>>
>>>> I cannot build/install scipy from svn. Here is the output of
>>>>
>>>> python setup.py install
>>>>
>>> Sorry, Nils, I'm just moving around parts of ndimage.  I'll be done
>>> soon.
>>>
>> Now python setup.py install works fine for me, but I get a segfault
>> running scipy.test(1) on a 64-bit machine using
>>
>>>>> scipy.__version__
>>>>>
>> '0.5.3.dev2888'
>>
>
> Like I said, I'm working on it.  Already fixed the memory error in r2889.
>
> Cheers
> Stéfan
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev
>
Thank you very much !
Cheers, Nils With python2.4 I get ====================================================================== FAIL: test_explicit (scipy.tests.test_odr.test_odr) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib64/python2.4/site-packages/scipy/odr/tests/test_odr.py", line 49, in test_explicit np.array([ 1.2646548050648876e+03, -5.4018409956678255e+01, File "/usr/lib64/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib64/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 1.26462971e+03, -5.42545890e+01, -8.64250389e-02]) y: array([ 1.26465481e+03, -5.40184100e+01, -8.78497122e-02]) ====================================================================== FAIL: test_multi (scipy.tests.test_odr.test_odr) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib64/python2.4/site-packages/scipy/odr/tests/test_odr.py", line 190, in test_multi np.array([ 4.3799880305938963, 2.4333057577497703, 8.0028845899503978, File "/usr/lib64/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib64/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 4.31272063, 2.44289312, 7.76215871, 0.55995622, 0.46423343]) y: array([ 4.37998803, 2.43330576, 8.00288459, 0.51011472, 0.51739023]) ---------------------------------------------------------------------- Ran 1629 tests in 3.383s FAILED (failures=2) With python2.5 I get ====================================================================== ERROR: test_explicit (scipy.tests.test_odr.test_odr) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib64/python2.5/site-packages/scipy/odr/tests/test_odr.py", line 46, in test_explicit out = explicit_odr.run() File "/usr/local/lib64/python2.5/site-packages/scipy/odr/odrpack.py", line 1049, in run self.output = Output(apply(odr, args, kwds)) File "/usr/local/lib64/python2.5/site-packages/scipy/odr/odrpack.py", line 576, in __init__ self.stopreason = report_error(self.info) File "/usr/local/lib64/python2.5/site-packages/scipy/odr/odrpack.py", line 143, in report_error 'Iteration limit reached')[info % 10] IndexError: tuple index out of range ====================================================================== ERROR: test_lorentz (scipy.tests.test_odr.test_odr) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib64/python2.5/site-packages/scipy/odr/tests/test_odr.py", line 291, in test_lorentz out = l_odr.run() File "/usr/local/lib64/python2.5/site-packages/scipy/odr/odrpack.py", line 1049, in run self.output = Output(apply(odr, args, kwds)) File "/usr/local/lib64/python2.5/site-packages/scipy/odr/odrpack.py", line 576, in __init__ self.stopreason = report_error(self.info) File "/usr/local/lib64/python2.5/site-packages/scipy/odr/odrpack.py", line 143, in report_error 'Iteration limit reached')[info % 10] IndexError: tuple index out of range ====================================================================== ERROR: test_multi 
(scipy.tests.test_odr.test_odr)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib64/python2.5/site-packages/scipy/odr/tests/test_odr.py", line 187, in test_multi
    out = multi_odr.run()
  File "/usr/local/lib64/python2.5/site-packages/scipy/odr/odrpack.py", line 1049, in run
    self.output = Output(apply(odr, args, kwds))
  File "/usr/local/lib64/python2.5/site-packages/scipy/odr/odrpack.py", line 576, in __init__
    self.stopreason = report_error(self.info)
  File "/usr/local/lib64/python2.5/site-packages/scipy/odr/odrpack.py", line 143, in report_error
    'Iteration limit reached')[info % 10]
IndexError: tuple index out of range

======================================================================
ERROR: test_pearson (scipy.tests.test_odr.test_odr)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib64/python2.5/site-packages/scipy/odr/tests/test_odr.py", line 235, in test_pearson
    out = p_odr.run()
  File "/usr/local/lib64/python2.5/site-packages/scipy/odr/odrpack.py", line 1049, in run
    self.output = Output(apply(odr, args, kwds))
  File "/usr/local/lib64/python2.5/site-packages/scipy/odr/odrpack.py", line 576, in __init__
    self.stopreason = report_error(self.info)
  File "/usr/local/lib64/python2.5/site-packages/scipy/odr/odrpack.py", line 143, in report_error
    'Iteration limit reached')[info % 10]
IndexError: tuple index out of range

From schaouette at free.fr Thu Mar 29 09:25:12 2007
From: schaouette at free.fr (Gilles G.)
Date: Thu, 29 Mar 2007 15:25:12 +0200
Subject: [SciPy-dev] Roadmap for SciPy/Numpy
In-Reply-To: <460B9FFD.5090007@iam.uni-stuttgart.de>
References: <460B65E9.3020806@iam.uni-stuttgart.de>
	<20070329084717.GB18196@mentat.za.net>
	<460B9FFD.5090007@iam.uni-stuttgart.de>
Message-ID: <200703291525.14100.schaouette@free.fr>

Hello,
I would like to know if there is some kind of roadmap for the development
of SciPy/Numpy; I can't find it on the web site, nor on this mailing list.
For example, I would like to know:
- When will be the next stable version of SciPy/Numpy?
- What will be the new features in the next version of SciPy
- When is SciPy 1.0 planned?
I am ready to begin a roadmap page on scipy.org wiki (as long as this
information is available).
Thanks a lot for your answers and for this great piece of software!

--
Gilles

From robert.kern at gmail.com Thu Mar 29 09:36:01 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 29 Mar 2007 08:36:01 -0500
Subject: [SciPy-dev] mlabwrap scikit [Was: Re: scikits project]
In-Reply-To: <20070329131236.GF18196@mentat.za.net>
References: <20070329131236.GF18196@mentat.za.net>
Message-ID: <460BC0C1.7050103@gmail.com>

Stefan van der Walt wrote:
> On Thu, Mar 29, 2007 at 04:13:28AM +0100, Alexander Schmolck wrote:
>> In an email some time ago Robert Kern proposed the following directory
>> structure for scikits:
>>
>>     branches/
>>     tags/
>>     trunk/
>>         mlabwrap/
>>             setup.py
>>             scikits/
>>                 __init__.py
>>                 mlabwrap/
>>                     __init__.py
>
> Shouldn't that be
>
>     branches/
>     tags/
>     trunk/
>         scikits/
>             setup.py
>             __init__.py
>             mlabwrap/
>                 __init__.py
>             gpc/
>                 __init__.py
>
> etc.?

No. The point of scikits is to keep each subpackage independent of the
others. It is very difficult to build each subpackage separately from the
others if they are in that kind of structure.
--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
 -- Umberto Eco

From robert.kern at gmail.com Thu Mar 29 09:58:21 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 29 Mar 2007 08:58:21 -0500
Subject: [SciPy-dev] mlabwrap scikit [Was: Re: scikits project]
In-Reply-To:
References:
Message-ID: <460BC5FD.4050602@gmail.com>

Alexander Schmolck wrote:
> 1.a. source code and svn organization
> '''''''''''''''''''''''''''''''''''''
>
> In an email some time ago Robert Kern proposed the following directory
> structure for scikits:
>
>     branches/
>     tags/
>     trunk/
>         mlabwrap/
>             setup.py
>             scikits/
>                 __init__.py
>                 mlabwrap/
>                     __init__.py
>
> The current directory structure misses the scikits/ dir and the second
> mlabwrap/ dir; instead it has mlabwrap.py at top-level (as well as the
> utility libraries awmstools.py and awmsmeta.py and the c++-extension
> mlabraw.cpp). Independent of scikits conventions, it seems desirable to
> make mlabwrap a package and stuff everything it needs inside (i.e. the
> utility libs and mlabraw) rather than polluting the toplevel
> module-namespace.
>
> I assume this question reflects my svn-newbie status, but why doesn't the
> scikits structure look something like this, given that as I understand it
> scikits are meant to be essentially independent (and hence independently
> versioned) projects under a common namespace?
>
>     mlabwrap/
>         branches/
>         tags/
>         trunk/
>             setup.py
>             scikits/
>                 __init__.py
>                 mlabwrap/
>                     __init__.py
>     some_other_scikit/
>         branches/
>         ...
>
> It somehow seems to me that tags and branches should apply to individual
> projects and that one could still do convenient releases and svn-checkouts
> of scikits/ as a whole by using e.g. svn:externals to the individual
> projects.

branches/ and tags/ directories can be shared between projects. There's
nothing special about a branch or a tag in SVN; they are just copies. Just
make sure you name your branches and tags appropriately (mlabwrap-1.0,
etc.). I would like all of the scikits to be under one trunk, though, for
conveniently grabbing the current source of all of the scikits without also
grabbing every branch and tag. Branches and tags take little space on the
server, but in checkouts, there would be a tremendous amount of duplication.

> As I said, I'm not really knowledgeable about svn, but I'd like to
> understand the logic a bit better, because I'm also trying to work out
> what to do about the awmsmeta.py and awmstools.py stuff, which isn't
> really a part of mlabwrap as such. I see three ways of dealing with the
> dependency on the utility modules:
>
> D1. Drop it and copy-and-paste the needed bits into mlabwrap.py (certainly
>     possible, only a few functions are needed; OTOH it seems a bit ugly)
> D2. Rely on cheeseshop (but I don't have the impression that it is equally
>     acceptable and painless for a python module to download and install
>     dependencies as it is for a CTAN module)
> D3. Stuff copies into mlabwrap/ and do a relative import
>
> D3. seems most attractive to me at the moment but one would still have to
> figure out where the VC of awmstools.py and awmsmeta.py would "live"; I
> thought it would be cleanest if they were included as svn:externals and not
> directly part of the mlabwrap svn tree, so I thought maybe something like
> this (ignoring the question of where branches/ trunk/ and tags/ go exactly):
>
>     mlabwrap/
>         setup.py
>         scikits/
>             __init__.py
>             README.txt # etc.

This should be one level up.

>             mlabraw.py # dummy importing mlabwrap/mlabraw, for backwards comp.

This shouldn't be here. I'd just put it into the toplevel mlabwrap/ directory
and not install it. Just have it there for people to use if they need
backwards compatibility.

>             mlabwrap/
>                 __init__.py
>                 -> awmstools.py
>                 -> awmsmeta.py

This seems appropriate.

>                 mlabwraw.cpp
>                 test/
>                     test_mlabwrap.py # etc.

This directory should move into mlabwrap/scikits/mlabwrap/ or even mlabwrap/.

> I also assume we want a branch for the 1.0.x version, a tag for the 1.0
> release (when it's ready) and that development of the next version will
> continue in the trunk.
>
> Comments?
>
> Also are there any strong feelings/established procedures concerning
> __init__.py? The generic name is somewhat inconvenient in various contexts
> (e.g. buffer switching in emacs) so I thought about making __init__.py a
> dummy importing ./_mlabwrap.py or so.

Leave scikits/__init__.py empty. Do what you like with
scikits/mlabwrap/__init__.py . For a package as small as yours, importing
everything from mlabwrap.py is reasonable. Then people will only have to
import scikits.mlabwrap instead of scikits.mlabwrap.mlabwrap .

> OK, sorry this is getting so long, on to
>
> 2. bringing out mlabwrap-1.0 final:
> ------------------------------------
>
> Apart from the question what the directory structure should look like, the
> only issue that I'm currently aware of is that there appears to be a problem
> with `setup.py` automatically finding out about the matlab version under
> windows; if no one steps up to do some windows testing I'm also willing to
> apply some fix I assume will work and release as is (windows users might
> have to set MATLAB_VERSION etc. in setup.py by hand if it doesn't, which
> isn't *that* terrible). Actually, one more thing: distutils vs. scipy
> distutils vs. setuptools -- which one should mlabwrap-1.0 final use? I'm
> only really fully familiar with the first one.

We need setuptools to handle the scikits namespace package. Does your
extension module use numpy? If so it should also use numpy.distutils. Just
make sure to import setuptools first.

import setuptools
from numpy.distutils.core import setup
...

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
 -- Umberto Eco

From stefan at sun.ac.za Thu Mar 29 10:02:18 2007
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Thu, 29 Mar 2007 16:02:18 +0200
Subject: [SciPy-dev] mlabwrap scikit [Was: Re: scikits project]
In-Reply-To: <460BC0C1.7050103@gmail.com>
References: <20070329131236.GF18196@mentat.za.net> <460BC0C1.7050103@gmail.com>
Message-ID: <20070329140218.GG18196@mentat.za.net>

On Thu, Mar 29, 2007 at 08:36:01AM -0500, Robert Kern wrote:
> > Shouldn't that be
> >
> >     branches/
> >     tags/
> >     trunk/
> >         scikits/
> >             setup.py
> >             __init__.py
> >             mlabwrap/
> >                 __init__.py
> >             gpc/
> >                 __init__.py
> >
> > etc.?
>
> No. The point of scikits is to keep each subpackage independent of the
> others. It is very difficult to build each subpackage separately from the
> others if they are in that kind of structure.
From stefan at sun.ac.za Thu Mar 29 10:02:18 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu, 29 Mar 2007 16:02:18 +0200 Subject: [SciPy-dev] mlabwrap scikit [Was: Re: scikits project] In-Reply-To: <460BC0C1.7050103@gmail.com> References: <20070329131236.GF18196@mentat.za.net> <460BC0C1.7050103@gmail.com> Message-ID: <20070329140218.GG18196@mentat.za.net> On Thu, Mar 29, 2007 at 08:36:01AM -0500, Robert Kern wrote:
> > Shouldn't that be
> >
> >   branches/
> >   tags/
> >   trunk/
> >     scikits/
> >       setup.py
> >       __init__.py
> >       mlabwrap/
> >         __init__.py
> >       gpc/
> >         __init__.py
> >
> > etc.?
>
> No. The point of scikits is to keep each subpackage independent of the others. > It is very difficult to build each subpackage separately from the others if they > are in that kind of structure.

So what is the use of having a scikits subdirectory in each module? I'm just trying to understand the structure you propose. Cheers Stéfan From robert.kern at gmail.com Thu Mar 29 10:17:05 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 29 Mar 2007 09:17:05 -0500 Subject: [SciPy-dev] mlabwrap scikit [Was: Re: scikits project] In-Reply-To: <20070329140218.GG18196@mentat.za.net> References: <20070329131236.GF18196@mentat.za.net> <460BC0C1.7050103@gmail.com> <20070329140218.GG18196@mentat.za.net> Message-ID: <460BCA61.9070008@gmail.com> Stefan van der Walt wrote: > On Thu, Mar 29, 2007 at 08:36:01AM -0500, Robert Kern wrote:
>>> Shouldn't that be
>>>
>>>   branches/
>>>   tags/
>>>   trunk/
>>>     scikits/
>>>       setup.py
>>>       __init__.py
>>>       mlabwrap/
>>>         __init__.py
>>>       gpc/
>>>         __init__.py
>>>
>>> etc.?
>> No. The point of scikits is to keep each subpackage independent of the others. >> It is very difficult to build each subpackage separately from the others if they >> are in that kind of structure. > > So what is the use of having a scikits subdirectory in each module? > I'm just trying to understand the structure you propose. scikits is a namespace package. While each subpackage will be built separately, they will still be scikits.mlabwrap, scikits.gpc, etc. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From jstrunk at enthought.com Thu Mar 29 10:33:09 2007 From: jstrunk at enthought.com (Jeff Strunk) Date: Thu, 29 Mar 2007 09:33:09 -0500 Subject: [SciPy-dev] mlabwrap scikit [Was: Re: scikits project] In-Reply-To: References: Message-ID: <200703290933.09675.jstrunk@enthought.com> On Wednesday 28 March 2007 10:13 pm, Alexander Schmolck wrote: > Shouldn't having an svn account mean that I'd be able to successfully log in > to the trac interface at > using the > same account? I can't use either the AlexanderSchmolck login I created on > scipy.org or the POP3/SVN login a.schmolck at gmail.com[1]. I received an > email from webmin at scipy.org on Friday[3]; however I *can* commit to the > repository using svn (I've sent an email to webmin at scipy.org, but haven't > heard anything back). Trac accounts, moin accounts, and SVN accounts are all separate. You'll need to register and fill in your email address in the settings section for each Trac you use. If you had a Scipy Trac account, it was copied over to the Scikits trac when it was created. However, the settings section was not synchronized. If you want to get email about tickets, you must enter your email address in the settings section of each trac you use. I will start to receive email sent to webmin at scipy.org now. Thanks, Jeff Strunk IT Administrator Enthought, Inc.
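Concretely, the namespace package Robert describes works like this: every scikit distribution ships its own copy of scikits/__init__.py, and the setuptools convention of the time is for that file to contain nothing but the declare_namespace idiom. A sketch only -- Robert suggests leaving the file empty, and whether an empty file suffices depends on how setuptools installs the package, so treat this as illustrative rather than authoritative:

  # scikits/__init__.py -- identical stub shipped by every scikit;
  # registers 'scikits' as a namespace package with pkg_resources.
  __import__('pkg_resources').declare_namespace(__name__)

Once two such independently built distributions are installed, both resolve under the single namespace, e.g. ``from scikits import mlabwrap`` and ``from scikits import gpc``.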
From stefan at sun.ac.za Thu Mar 29 10:37:43 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu, 29 Mar 2007 16:37:43 +0200 Subject: [SciPy-dev] mlabwrap scikit [Was: Re: scikits project] In-Reply-To: <460BCA61.9070008@gmail.com> References: <20070329131236.GF18196@mentat.za.net> <460BC0C1.7050103@gmail.com> <20070329140218.GG18196@mentat.za.net> <460BCA61.9070008@gmail.com> Message-ID: <20070329143743.GH18196@mentat.za.net> On Thu, Mar 29, 2007 at 09:17:05AM -0500, Robert Kern wrote: > > So what is the use of having a scikits subdirectory in each module? > > I'm just trying to understand the structure you propose. > > scikits is a namespace package. While each subpackage will be built separately, > they will still be scikits.mlabwrap, scikits.gpc, etc. OK, I see. I wrote a ctypes wrapper for Birchfield's KLT tracker, and I'd like to see Polygon (a gpc wrapper) maintained somewhere. I also have a numpy-ported NURBS wrapper. Then there's also some FFMpeg wrapper code to read/write avi files. Are any of these suitable candidates for scikits? Cheers Stéfan From jh at physics.ucf.edu Thu Mar 29 11:25:10 2007 From: jh at physics.ucf.edu (Joe Harrington) Date: Thu, 29 Mar 2007 11:25:10 -0400 (EDT) Subject: [SciPy-dev] Roadmap for SciPy/Numpy In-Reply-To: (scipy-dev-request@scipy.org) References: Message-ID: <20070329152510.5D360BA0DDD@phaser.physics.ucf.edu> > I would like to know if there is some kind of roadmap for the development > of SciPy/Numpy; I can't find it on the web site, nor on this mailing > list. There is a roadmap for everything except the software itself (docs, releases, etc.) in http://scipy.org/Developer_Zone (scroll to the bottom). The Developer_Zone page is the right place to list people and to put a more formal roadmap/timetable. But, a timeline can't happen until people step up to lead the component efforts. It seems the community is in the perpetual state of having a *lot* of ready workers, but only a few committed leaders. We need:

Someone to lead the creation of binary installers for all platforms, especially the various Linux distros.

Someone to write a tutorial user manual that is not field-specific and that has plenty of worked examples. It must be presented as (at least) a single PDF plus example zip file.

Someone to lead the scipy release team, as Travis does for numpy, and push actively toward a 1.0 release. This may be progressing quietly, below the radar, or may be stalled; I don't know. There was activity as recently as last December.

We worked out the priorities and even some of the broad parameters of these projects years ago, but only a few of the identified goals have had any progress (notably the unification of numarray and Numeric, and ongoing maintenance of numpy, thank you Travis!). A road map is a good thing, but without more willing leaders, we will continue to have little progress in these key areas. On the positive side, the number of willing workers is high, so when someone does step up to lead, they will have a lot of help. I expect that a good leader who can delegate and manage effectively will put in no more time than one of the workers, say 4-10 hours a week. The situation with the lagging binary installers is the saddest, and is the main thing that keeps people away from this software. Almost everyone I know in physics, and many people in astronomy, know about SciPy.
However, they're all still waiting for it to "stabilize", which to them means they can install it with yum or synaptic or whatever (and it works every time when they do), and they see regular releases, and they see all the packages and platforms and OS versions supported for each release right from the start. Seeing a tarball for the current release, and just an outdated binary for *their* platform for some ancient releases, is insulting to them. It also indicates a project in danger of not having the critical mass to stay afloat for the long haul, so people rightly hesitate in committing their projects to python. We know better, but the only way to communicate that (and to get the large number of new users that would result) is to make binary packages in the traditional way, and keep them current. Since we've called for people to step forward and take charge of these areas before, do folks have ideas on ways to bring a financial incentive to bear? I've talked to more than a few people who could throw $10k at the problem, as could I, but there isn't an easy way to ensure long-term stability of a hired position. Would Enthought be willing to sell "support" for $5k a contract, and hire someone on a contract basis for, say, $50k to build binaries and write docs that happen also to go on the web site and get made part of the project? The supported entities would get to decide which platforms to do first, and would have some say in how it was done. Thoughts? --jh-- From schaouette at free.fr Thu Mar 29 12:33:26 2007 From: schaouette at free.fr (Gilles G.) Date: Thu, 29 Mar 2007 18:33:26 +0200 Subject: [SciPy-dev] Roadmap for SciPy/Numpy In-Reply-To: <20070329152510.5D360BA0DDD@phaser.physics.ucf.edu> References: <20070329152510.5D360BA0DDD@phaser.physics.ucf.edu> Message-ID: <200703291833.27313.schaouette@free.fr> Le Jeudi 29 Mars 2007 17:25, Joe Harrington a écrit : > > I would like to know if there is some kind of roadmap for the > > development of SciPy/Numpy; I can't find it on the web site, nor on > > this mailing list. > > There is a roadmap for everything except the software itself (docs, > releases, etc.) in http://scipy.org/Developer_Zone (scroll to the > bottom). The Developer_Zone page is the right place to list people > and to put a more formal roadmap/timetable. But, a timeline can't > happen until people step up to lead the component efforts. I know about this page; unfortunately, a few links seem to be outdated. For example, the link "roadmap" doesn't work. After some investigation, I think I found the right link, i.e.: http://projects.scipy.org/pipermail/scipy-dev/2004-October/002419.html Could you please tell me if it is the right one so that I can correct it on the wiki? Moreover, the link "proposal for wiki workflow" is obviously wrong, as it is directed to the "junkfilter" mailing-list on sourceforge.net. I could not find the correct link, and googling did not help. Anyway, thank you for all the details you gave. It helps a lot!
-- Gilles From jh at physics.ucf.edu Thu Mar 29 13:41:07 2007 From: jh at physics.ucf.edu (Joe Harrington) Date: Thu, 29 Mar 2007 13:41:07 -0400 Subject: [SciPy-dev] Roadmap for SciPy/Numpy In-Reply-To: <200703291833.27313.schaouette@free.fr> References: <20070329152510.5D360BA0DDD@phaser.physics.ucf.edu> <200703291833.27313.schaouette@free.fr> Message-ID: <200703291741.l2THf7kB010900@glup.physics.ucf.edu> >> > I would like to know if there is some kind of roadmap for the >> > development of SciPy/Numpy; I can't find it on the web site, nor on >> > this mailing list. > >> There is a roadmap for everything except the software itself (docs, >> releases, etc.) in http://scipy.org/Developer_Zone (scroll to the >> bottom). The Developer_Zone page is the right place to list people >> and to put a more formal roadmap/timetable. But, a timeline can't >> happen until people step up to lead the component efforts. > I know about this page; unfortunately, a few links seem to be outdated. For > example, the link "roadmap" doesn't work. After some investigation, I think > I found the right link, i.e.: > http://projects.scipy.org/pipermail/scipy-dev/2004-October/002419.html > Could you please tell me if it is the right one so that I can correct it on > the wiki? > Moreover, the link "proposal for wiki workflow" is obviously wrong, as it is > directed to the "junkfilter" mailing-list on sourceforge.net. I could not > find the correct link, and googling did not help. Yes, that is the right link. I found the other one and fixed both, and added comments to help us find them again when they break next time. Thanks for letting me know about them! --jh-- From a.schmolck at gmx.net Thu Mar 29 21:43:09 2007 From: a.schmolck at gmx.net (Alexander Schmolck) Date: 30 Mar 2007 02:43:09 +0100 Subject: [SciPy-dev] mlabwrap scikit [Was: Re: scikits project] In-Reply-To: <460BC5FD.4050602@gmail.com> References: <460BC5FD.4050602@gmail.com> Message-ID: Robert Kern writes: Thanks for the feedback! > > I assume this question reflects my svn-newbie status, but why doesn't the > > scikits structure look something like this, given that as I understand it > > scikits are meant to be essentially independent (and hence independently > > versioned) projects under a common namespace?
> >
> >   mlabwrap/
> >     branches/
> >     tags/
> >     trunk/
> >       setup.py
> >       scikits/
> >         __init__.py
> >         mlabwrap/
> >           __init__.py
> >   some_other_scikit/
> >     branches/
> >     ...
> >
> > It somehow seems to me that tags and branches should apply to individual > > projects and that one could still do convenient releases and svn-checkouts of > > scikits/ as a whole by using e.g. svn:externals to the individual projects. > > branches/ and tags/ directories can be shared between projects. There's nothing > special about a branch or a tag in SVN; they are just copies. Just make sure you > name your branches and tags appropriately (mlabwrap-1.0, etc.). I would like all > of the scikits to be under one trunk, though, for conveniently grabbing the > current source of all of the scikits without also grabbing every branch and > tag. This is actually exactly what I thought svn:externals would solve (but maybe I don't understand the problems associated with that solution correctly):

  mlabwrap/
    branches/
    tags/
    trunk/
      ...
  some_other_scikit/
    branches/
    ...

  scikits/
    trunk/
      scikits/
        -> mlabwrap/trunk/ # -> = external
        -> some_other_scikit/trunk/
        ...
    tags/
      ...

Thus, if you want all scikits, you just check out the scikits/ sub-dir above (I presumably haven't got the directory structure quite right, but I hope it doesn't matter for getting the point across); a sketch of the kind of svn:externals definition this would rely on follows below.
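For such a scikits/trunk/scikits directory, the svn:externals property might contain something like the following sketch (the repository URL is a made-up placeholder, and the two-column "local-dir URL" layout is the pre-1.5 Subversion externals syntax):

  mlabwrap            http://svn.example.org/scikits/mlabwrap/trunk
  some_other_scikit   http://svn.example.org/scikits/some_other_scikit/trunk

One would attach it with something like ``svn propset svn:externals -F externals.txt .`` run inside that directory, and then commit; a subsequent checkout of scikits/trunk/scikits pulls in every listed project's trunk.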
On the other hand, if the tags/ and branches/ directories are shared between all projects, you need to prefix all tags/branches with the project name (which would otherwise not be necessary) and have potentially quite a lot of dirs to look at that don't interest you in the least (e.g. when browsing tags/ or branches/ with trac), and, AFAICT, no particularly easy way to just check out everything related to your project. I assume this could also be fixed by 'symlinking' via externals, but you'd need many, many more (per project: one for the trunk and one for each branch and tag, vs. one for the trunk of each project -- and possibly the odd 'whole' scikits release tag). Maybe the structure I proposed would also make importing and exporting of projects slightly easier (because it mirrors the typical layout of an individual svn project). Speaking of which -- is there something I can do with the existing CVS so that it can be easily imported into the scikits svn (in which case we can get rid of what's already checked in), or would importing the CVS involve a hassle in any case, because then I'll just archive it on sourceforge. The other thing I've been wondering is if such a setup couldn't also be made to accommodate something like Stefan van der Walt's layout proposal, which as far as I can see would allow for the most convenient way possible to grab all scikits and build them:

  setup.py
  scikits/
    __init__.py
    -> mlabwrap/
         mlabwrap_setup.py
         __init__.py
         awmstools.py
         ...
    -> some_other_scikit/
         some_other_scikit_setup.py
         ...

Couldn't one have a toplevel setup.py that just runs all scikits/DIRNAME/DIRNAME_setup.py's it can find, passing through the command line options (or something along those lines[1])? I.e. the distribution of each scikit would contain the same ``scikits/setup.py`` which just looks for subdirs with *setup.py's which it then calls (the XXX_setup.py's could also move one dir up). To install any subset of scikits from svn one could then just do something like:

  svn co ...trunk/scikits/{setup.py,scikits{_mlabwrap,some_other_scikit,...}}
  cd scikits; python setup.py install

Just an idea (possibly a bad one :).

> >   mlabwrap/
> >     setup.py
> >     scikits/
> >       __init__.py
> >       README.txt # etc.
>
> This should be one level up.

OK

> >       mlabraw.py # dummy importing mlabwrap/mlabraw, for backwards comp.
>
> This shouldn't be here. I'd just put it into the toplevel mlabwrap/ directory > and not install it. Just have it there for people to use if they need backwards > compatibility.

OK

> >       mlabwrap/
> >         __init__.py
> >         -> awmstools.py
> >         -> awmsmeta.py
>
> This seems appropriate.

Good.

> >         mlabraw.cpp
> >       test/
> >         test_mlabwrap.py # etc.
>
> This directory should move into mlabwrap/scikits/mlabwrap/ or even > mlabwrap/.

OK, mlabwrap/test it is then.

> Leave scikits/__init__.py empty. Do what you like with > scikits/mlabwrap/__init__.py . For a package as small as yours, importing > everything from mlabwrap.py is reasonable. Then people will only have to import > scikits.mlabwrap instead of scikits.mlabwrap.mlabwrap . OK, I think I'll have __init__.py just do an import * from ./_mlabwrap.py then.
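A sketch of the dummy __init__.py Alexander has in mind here (the module name _mlabwrap is his proposal above, nothing official):

  # scikits/mlabwrap/__init__.py
  # Re-export the real implementation module so that a plain
  # 'import scikits.mlabwrap' gives users the whole API.
  from scikits.mlabwrap._mlabwrap import *

(On the Python of the day, the implicit relative form ``from _mlabwrap import *`` would work as well; the absolute spelling is just less ambiguous.)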
> > Actually, one more thing: distutils vs. scipy distutils vs. > > setuptools -- which one should mlabwrap-1.0 final use? I'm only really > > fully familiar with the first one. > > We need setuptools to handle the scikits namespace package. Does your extension > module use numpy? Yes, but optionally -- it also still works with Numeric (before someone grumbles, Numeric support will be ripped out as soon as 1.0 is released). > If so it should also use numpy.distutils. Just make sure to import > setuptools first.
>
>   import setuptools
>   from numpy.distutils.core import setup
>   ...

Is there a recipe/template for this somewhere? Googling "scipy setuptools" comes up with as the first hit, which seems to indicate that setuptools is still a bit alpha and the docs can't be trusted if one wants something that actually works. I've currently got a distutils setup.py that in common scenarios builds out of the box with ``python setup.py install`` (it automatically detects if numpy or Numeric is installed and which matlab version it needs to build for). What I'd hope setuptools will add (apart from scikits namespace support) is the ability to easy_install mlabwrap from PyPI, also downloading numpy if required. I'd assume the same applies to most other projects that will live under scikits, so it would be good to establish a standard way to do this and document it somewhere, if there isn't anything around already. Should people think that setuptools is still a bit problematic, I'll just release 1.0 final on sourceforge and distutils-based (and outside the scikits hierarchy), reserving the scikits treatment for subsequent versions. thanks, 'as Footnotes: [1] Obviously this would need to be fleshed out a bit and maybe it's more hassle than it's worth. From a.schmolck at gmx.net Thu Mar 29 21:52:23 2007 From: a.schmolck at gmx.net (Alexander Schmolck) Date: 30 Mar 2007 02:52:23 +0100 Subject: [SciPy-dev] mlabwrap scikit [Was: Re: scikits project] In-Reply-To: <200703290933.09675.jstrunk@enthought.com> References: <200703290933.09675.jstrunk@enthought.com> Message-ID: Jeff Strunk writes: > On Wednesday 28 March 2007 10:13 pm, Alexander Schmolck wrote: > > Shouldn't having an svn account mean that I'd be able to successfully log in > > to the trac interface at > > using the > > same account? I can't use either the AlexanderSchmolck login I created on > > scipy.org or the POP3/SVN login a.schmolck at gmail.com[1]. I received an > > email from webmin at scipy.org on Friday[3]; however I *can* commit to the > > repository using svn (I've sent an email to webmin at scipy.org, but haven't > > heard anything back). > > Trac accounts, moin accounts, and SVN accounts are all separate. You'll need > to register and fill in your email address in the settings section for each > Trac you use. I see, thanks -- but is it customary/acceptable to use the same login (and possibly password, unless some systems are less secure than others) for all these accounts? Doing so would seem convenient... > If you had a Scipy Trac account, it was copied over to the Scikits trac when > it was created. No, I just had a moin account (AlexanderSchmolck) > If you want to get email about tickets, you must enter your email address in > the settings section of each trac you use. OK, I'll create a trac login as soon as I've figured out what the login name ought to be :) cheers, 'as From robert.kern at gmail.com Thu Mar 29 22:05:36 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 29 Mar 2007 21:05:36 -0500 Subject: [SciPy-dev] mlabwrap scikit [Was: Re: scikits project] In-Reply-To: References: <460BC5FD.4050602@gmail.com> Message-ID: <460C7070.2000904@gmail.com> Alexander Schmolck wrote: > Robert Kern writes: > > Thanks for the feedback!
> >>> I assume this question reflects my svn-newbie status, but why doesn't the >>> scikits structure look something like this, given that as I understand it >>> scikits are meant to be essentially independent (and hence independently >>> versioned) projects under a common namespace?
>>>
>>>   mlabwrap/
>>>     branches/
>>>     tags/
>>>     trunk/
>>>       setup.py
>>>       scikits/
>>>         __init__.py
>>>         mlabwrap/
>>>           __init__.py
>>>   some_other_scikit/
>>>     branches/
>>>     ...
>>>
>>> It somehow seems to me that tags and branches should apply to individual >>> projects and that one could still do convenient releases and svn-checkouts of >>> scikits/ as a whole by using e.g. svn:externals to the individual projects. >> branches/ and tags/ directories can be shared between projects. There's nothing >> special about a branch or a tag in SVN; they are just copies. Just make sure you >> name your branches and tags appropriately (mlabwrap-1.0, etc.). I would like all >> of the scikits to be under one trunk, though, for conveniently grabbing the >> current source of all of the scikits without also grabbing every branch and >> tag. > > This is actually exactly what I thought svn:externals would solve (but maybe I > don't understand the problems associated with that solution correctly):
>
>   mlabwrap/
>     branches/
>     tags/
>     trunk/
>       ...
>   some_other_scikit/
>     branches/
>     ...
>
>   scikits/
>     trunk/
>       scikits/
>         -> mlabwrap/trunk/ # -> = external
>         -> some_other_scikit/trunk/
>         ...
>     tags/
>       ...
>
> Thus, if you want all scikits, you just check out the scikits/ sub-dir above. (I > presumably haven't got the directory structure quite right, but I hope it > doesn't matter for getting the point across) > On the other hand, if the tags/ and branches/ directories are shared between > all projects, you need to prefix all tags/branches with the project name > (which would otherwise not be necessary) and have potentially quite a lot of > dirs to look at that don't interest you in the least (e.g. when browsing > tags/ or branches/ with trac), and, AFAICT, no particularly easy way to just > check out everything related to your project. I assume this could also be > fixed by 'symlinking' via externals, but you'd need many, many more (per > project: one for the trunk and one for each branch and tag, vs. one for the > trunk of each project -- and possibly the odd 'whole' scikits release tag). By and large, it simply doesn't matter to "get everything related to your project". Believe me, you don't want all of branches/ and tags/ in a checkout. On the other hand, "getting all of the packages in scikits" does matter. IMO, the inconvenience of prefixing your branches and tags is secondary to the performance problems of svn:externals, which slow down all checkouts and updates for everyone (although I'll have to double-check that claim for svn:externals links within the same repository). Also, it appears that Trac doesn't like svn:externals to other repositories; I'm not sure if that extends to svn:externals within the same repository. http://trac.edgewall.org/wiki/TracFaq#DoesTracsupportsvn:externalsubversionrepositories > Maybe the structure I proposed would also make importing and exporting of > projects slightly easier (because it mirrors the typical layout of an > individual svn project).
Speaking of which -- is there something I can do with > the existing CVS so that it can be easily imported into the scikits svn (in > which case we can get rid of what's already checked in), or would importing > the CVS involve a hassle in any case, because then I'll just archive it on > sourceforge. I don't know. You'll have to read the cvs2svn documentation. > The other thing I've been wondering is if such a setup couldn't also be made > to accommodate something like Stefan van der Walt's layout proposal, which as > far as I can see would allow for the most convenient way possible to grab all > scikits and build them:
>
>   setup.py
>   scikits/
>     __init__.py
>     -> mlabwrap/
>          mlabwrap_setup.py
>          __init__.py
>          awmstools.py
>          ...
>     -> some_other_scikit/
>          some_other_scikit_setup.py

Having two ways to install something is just begging for trouble.

>          ...
> Couldn't one have a toplevel setup.py that just runs all > scikits/DIRNAME/DIRNAME_setup.py's it can find, passing through the command line > options (or something along those lines[1])?

That's unworkable. I've tried.

>> If so it should also use numpy.distutils. Just make sure to import >> setuptools first.
>>
>>   import setuptools
>>   from numpy.distutils.core import setup
>>   ...
>
> Is there a recipe/template for this somewhere? Googling "scipy setuptools" > comes up with > > as the first hit, which seems to indicate that setuptools is still a bit alpha > and the docs can't be trusted if one wants something that actually works.

Fernando's a curmudgeon, and that page is old. Ignore him. :-) Like I said, just import setuptools before you import numpy.distutils. Then use numpy.distutils as normal to handle all of the building and stuff. setuptools adds some keywords to setup() that you should also provide, namely

  namespace_packages=['scikits'],

That's all that's necessary. There's no particular magic to combining setuptools and numpy.distutils. Read the setuptools documentation for other keyword arguments to setup() that you might want to use for the extra features it provides, like install_requires. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
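Putting the pieces of Robert's recipe together, the earlier hypothetical setup.py sketch would grow to roughly the following (names remain placeholders; install_requires is what would give Alexander the easy_install-from-PyPI behaviour he asks about above):

  # Hypothetical sketch: setuptools keywords layered on numpy.distutils.
  import setuptools
  from numpy.distutils.core import setup

  setup(name='scikits.mlabwrap',          # placeholder
        namespace_packages=['scikits'],   # the one required extra keyword
        install_requires=['numpy'],       # lets easy_install fetch deps
        packages=['scikits', 'scikits.mlabwrap'])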
From fperez.net at gmail.com Thu Mar 29 22:32:33 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 29 Mar 2007 20:32:33 -0600 Subject: [SciPy-dev] mlabwrap scikit [Was: Re: scikits project] In-Reply-To: <460C7070.2000904@gmail.com> References: <460BC5FD.4050602@gmail.com> <460C7070.2000904@gmail.com> Message-ID: On 3/29/07, Robert Kern wrote: > > Is there a recipe/template for this somewhere? Googling "scipy setuptools" > > comes up with > > as the first hit, which seems to indicate that setuptools is still a bit alpha > > and the docs can't be trusted if one wants something that actually works. > > Fernando's a curmudgeon, and that page is old. Ignore him. :-) I certainly won't dispute the curmudgeon part :) And that page /is/ indeed old: if the problems that led me to develop my personal flavor of setuptools-allergy have all been solved, then my strategy will have worked out like a charm: wait things out, let a few high-profile projects start using setuptools and run into all these problems, and let that force the technical reality of setuptools catch up to the marketing. I'm sure the system has lots of good features, and it's even likely we may end up using it a lot in ipython for 1.0. By now things are probably a lot better than 2 years ago. Cheers, f From schaouette at free.fr Fri Mar 30 04:48:32 2007 From: schaouette at free.fr (Gilles G.) Date: Fri, 30 Mar 2007 10:48:32 +0200 Subject: [SciPy-dev] Roadmap for SciPy/Numpy In-Reply-To: <200703291741.l2THf7kB010900@glup.physics.ucf.edu> References: <200703291833.27313.schaouette@free.fr> <200703291741.l2THf7kB010900@glup.physics.ucf.edu> Message-ID: <200703301048.33664.schaouette@free.fr> > Yes, that is the right link. I found the other one and fixed both, > and added comments to help us find them again when they break next > time. Thanks for letting me know about them! > > --jh-- Thanks a lot, these links are really helpful to understand the way you want things to be done in SciPy. Now, from what I understood, there must be a "private area" aimed at contributors only (I reckon it's the developer zone), where all development of code and documentation should happen, and the main site should be a "public area" where only stable and clean features and documentation are presented. For example, if I want to add some screenshots to the "ScreenShots" page of the wiki, I must:
- edit the ScreenShots page on the wiki (this page is only accessible from the MigratingFromPlone page, i.e. the private area)
- ask on this mailing list if these screenshots are good, or if I must change anything.
- Once everyone is happy with the screenshot page, I can put a link to this ScreenShots page on the public area (for example on the main page).
Did I understand correctly? I just don't want to edit the wiki the wrong way... -- Gilles From david at ar.media.kyoto-u.ac.jp Fri Mar 30 04:56:31 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 30 Mar 2007 17:56:31 +0900 Subject: [SciPy-dev] Roadmap for SciPy/Numpy In-Reply-To: <20070329152510.5D360BA0DDD@phaser.physics.ucf.edu> References: <20070329152510.5D360BA0DDD@phaser.physics.ucf.edu> Message-ID: <460CD0BF.8000604@ar.media.kyoto-u.ac.jp> Joe Harrington wrote: >> I would like to know if there is some kind of roadmap for the development >> of SciPy/Numpy; I can't find it on the web site, nor on this mailing >> list. > > There is a roadmap for everything except the software itself (docs, > releases, etc.) in http://scipy.org/Developer_Zone (scroll to the > bottom). The Developer_Zone page is the right place to list people > and to put a more formal roadmap/timetable. But, a timeline can't > happen until people step up to lead the component efforts. > > It seems the community is in the perpetual state of having a *lot* of > ready workers, but only a few committed leaders. We need: > > Someone to lead the creation of binary installers for all platforms, > especially the various Linux distros. Concerning this one, my understanding is that official debs are available for Debian etch and the next Ubuntu, to be released in a few weeks. Maybe we could incorporate their effort into the numpy sources, a bit like this is done for PyTables, so that providing binaries for every new release would be much easier. Also, the main problem for installing numpy and scipy is to fulfill the dependencies, right (ATLAS, etc.)?
I've started to take a look at the build system from openSUSE, which may be useful towards the goal of providing binaries (for Linux, obviously; I don't know anything about the difficulties of providing Windows or Mac OS X binaries): http://en.opensuse.org/Build_Service I hope to be able to provide preliminary binary builds (e.g. RPMs) of ATLAS usable by numpy/scipy really soon, actually (I just have to familiarize myself a bit more with RPM; I am much more familiar with deb packaging). cheers, David From strawman at astraw.com Fri Mar 30 06:00:20 2007 From: strawman at astraw.com (Andrew Straw) Date: Fri, 30 Mar 2007 03:00:20 -0700 Subject: [SciPy-dev] Roadmap for SciPy/Numpy In-Reply-To: <200703301048.33664.schaouette@free.fr> References: <200703291833.27313.schaouette@free.fr> <200703291741.l2THf7kB010900@glup.physics.ucf.edu> <200703301048.33664.schaouette@free.fr> Message-ID: <460CDFB4.70400@astraw.com> Gilles G. wrote: >> Yes, that is the right link. I found the other one and fixed both, >> and added comments to help us find them again when they break next >> time. Thanks for letting me know about them! >> >> --jh-- >> > Thanks a lot, these links are really helpful to understand the way you want > things to be done in SciPy. > Now, from what I understood, there must be a "private area" aimed at > contributors only (I reckon it's the developer zone), where all > development of code and documentation should happen, and the main site > should be a "public area" where only stable and clean features and > documentation are presented. > For example, if I want to add some screenshots to the "ScreenShots" page of the > wiki, I must:
> - edit the ScreenShots page on the wiki (this page is only accessible from the > MigratingFromPlone page, i.e. the private area)
> - ask on this mailing list if these screenshots are good, or if I must change > anything.
> - Once everyone is happy with the screenshot page, I can put a link to this > ScreenShots page on the public area (for example on the main page).
> Did I understand correctly? > I just don't want to edit the wiki the wrong way... > > Dear Gilles, That roadmap document pre-dates the current scipy wiki(s) and is therefore slightly out of date in that regard. As you note, there are two wikis, but the distinction is not public/private but rather user/developer. Basically, the developer site (the Trac instance) is for reporting and searching bugs, browsing the source code, and discussing the future development of scipy and numpy. The "user" wiki http://scipy.org is the place where we want to have the best user-oriented documentation, download links, news, screenshots, "how-to-help" pages, and so on. As for uploading screenshots, please upload them directly on the ScreenShots page. We're big fans of editorial-decisions-by-wiki around here. If someone doesn't like it, it's up to them to get rid of it or modify it. And if you need permissions to do something you can't do, ask someone on the EditorsGroup page to add you to that page. My feeling is that as long as your information is accurate and your screenshots look half decent, people will appreciate that you are spending the time to contribute.
From jstrunk at enthought.com Fri Mar 30 09:47:48 2007 From: jstrunk at enthought.com (Jeff Strunk) Date: Fri, 30 Mar 2007 08:47:48 -0500 Subject: [SciPy-dev] mlabwrap scikit [Was: Re: scikits project] In-Reply-To: References: <200703290933.09675.jstrunk@enthought.com> Message-ID: <200703300847.48416.jstrunk@enthought.com> On Thursday 29 March 2007 8:52 pm, Alexander Schmolck wrote: > Jeff Strunk writes: ... > > > > Trac accounts, moin accounts, and SVN accounts are all separate. You'll > > need to register and fill in your email address in the settings section > > for each Trac you use. > > I see, thanks -- but is it customary/acceptable to use the same login (and > possibly password, unless some systems are less secure than others) for all > these accounts? Doing so would seem convenient... > The only account new users can't create for themselves is an svn account. SSL is not in wide use. You can use it to access svn and trac, but not moin. I don't know that very many people are using SSL for either. You can use whatever username and password you desire on any of them. Thanks, Jeff From jh at physics.ucf.edu Fri Mar 30 10:49:51 2007 From: jh at physics.ucf.edu (Joe Harrington) Date: Fri, 30 Mar 2007 10:49:51 -0400 Subject: [SciPy-dev] Roadmap for SciPy/Numpy In-Reply-To: (scipy-dev-request@scipy.org) References: Message-ID: <200703301449.l2UEnp5O013938@glup.physics.ucf.edu> > > Yes, that is the right link. I found the other one and fixed both, > > and added comments to help us find them again when they break next > > time. Thanks for letting me know about them! > > > --jh-- > Thanks a lot, these links are really helpful to understand the way you want > things to be done in SciPy. > Now, from what I understood, there must be a "private area" aimed at > contributors only (I reckon it's the developer zone), where all > development of code and documentation should happen, and the main site > should be a "public area" where only stable and clean features and > documentation are presented. > For example, if I want to add some screenshots to the "ScreenShots" page of the > wiki, I must:
> - edit the ScreenShots page on the wiki (this page is only accessible from the > MigratingFromPlone page, i.e. the private area)
> - ask on this mailing list if these screenshots are good, or if I must change > anything.
> - Once everyone is happy with the screenshot page, I can put a link to this > ScreenShots page on the public area (for example on the main page).
> Did I understand correctly? > I just don't want to edit the wiki the wrong way... > -- > Gilles Yes, that's the idea for a completely new area of the site. If you're tweaking an existing area that's relatively mature, just be sure whoever maintains it is aware of what you're doing. If you're overhauling, talk to the community. Discussion might include where and how to add the page (e.g., on the main site, or under Documentation). We have to depend on people's good judgement, so please err on the side of asking. Since ScreenShots existed in the old plone site and nobody's migrated it yet, doing so would be a service. Thanks in advance for your contributions! --jh--