From schofield at ftw.at Mon May 1 06:37:00 2006
From: schofield at ftw.at (Ed Schofield)
Date: Mon, 1 May 2006 12:37:00 +0200
Subject: [SciPy-dev] scipy.optimize
In-Reply-To: <4452299B.9040105@ntc.zcu.cz>
References: <444F97F3.6030100@ntc.zcu.cz> <444FA745.1010608@ieee.org> <4450945A.5070008@ntc.zcu.cz> <445104E7.8020000@ftw.at> <4451DC6C.1070103@ntc.zcu.cz> <4451E425.70408@ftw.at> <4452299B.9040105@ntc.zcu.cz>
Message-ID: 

On 28/04/2006, at 4:41 PM, Robert Cimrman wrote:

> Ed Schofield wrote:
>> Robert Cimrman wrote:
>>
>>> Yes, fmin_l_bfgs_b uses a wrapped fortran module, so I can think only
>>> about rewriting its high-level logic (the main iteration loop?) in
>>> Python, using the fortran subroutines directly. But all functions in
>>> optimize.py could use the common interface without pain.
>>>
>>> The two main possibilities are:
>>> 1) call back each iteration only (as I do in my fmin_sd)
>>> 2) call back via the wrap_function macro, so that function/gradient calls
>>> e.g. in line search functions are not missed - the callback could have
>>> one arg saying from where it was called, so that in postprocessing you
>>> could plot e.g. just the data from the main loop iterations.
>>>
>>> Now as I have written them down, I would vote for 2) in some form.
>>
>> Yes, (2) would be good :)
>
> OK, I am willing to do a prototype in the sandbox. Unfortunately I will
> be available only after 9 May :(, so be patient.

Okay ;) I've been thinking more about the callback functions, and I now think a context parameter might be unnecessary. The reason is that the function and gradient (and perhaps Hessian) are provided by the user, and these can easily be written to perform any necessary logging or reliability analysis on each evaluation. All that's missing is the ability to run code each iteration (excluding line searches).

>> It looks like a good start. I suggest you copy the whole scipy.optimize
>> package into the sandbox (e.g. as 'newoptimize') and check your code in
>> there...
>
> It's there and I removed dependencies on my other codes, so that you can
> try it - run test_optimize.py, or try_log.py

Great. I'll help you work on it when I get some time (in a few weeks).

-- Ed

From jonathan.taylor at utoronto.ca Mon May 1 10:45:47 2006
From: jonathan.taylor at utoronto.ca (Jonathan Taylor)
Date: Mon, 1 May 2006 10:45:47 -0400
Subject: [SciPy-dev] scipy.optimize
In-Reply-To: <4450945A.5070008@ntc.zcu.cz>
References: <444F97F3.6030100@ntc.zcu.cz> <444FA745.1010608@ieee.org> <4450945A.5070008@ntc.zcu.cz>
Message-ID: <463e11f90605010745v634c11d8v411abebdd3946bc5@mail.gmail.com>

>> Yes. Logging is requested particularly often. As for
>> configuration parameters, I suspect that it is best done by making a
>> nice object-based interface to the optimization routines. I'm not sure
>> what the benefit is of passing in a configuration object over simply
>> using keyword arguments.
>
> Well, I have nothing against keyword arguments. But with a configuration
> object, the function argument list would be smaller, all fmin_*
> functions could have the same syntax, and new configuration options would
> not influence the argument list.

Maybe I misunderstood, but you can just use **dict to send config options:

d = {}
d["opt1"] = 'foo'
d["opt2"] = 'bar'
d["num"] = 3
func(**d)

J.
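For illustration, here is a minimal sketch of option (2) discussed in the thread above - wrapping the user's function so that every evaluation triggers a callback. The names are hypothetical, not an existing scipy.optimize API:

def wrap_function(func, callback):
    ncalls = [0]
    def wrapper(x, *args):
        ncalls[0] += 1            # count every evaluation
        fx = func(x, *args)
        callback(x, fx)           # e.g. log or plot each evaluation
        return fx
    return wrapper, ncalls

An optimizer can then simply be run on the wrapped function, so evaluation-level logging needs no hook inside the optimizer itself.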
From chanley at stsci.edu Mon May 1 13:14:26 2006
From: chanley at stsci.edu (Christopher Hanley)
Date: Mon, 01 May 2006 13:14:26 -0400
Subject: [SciPy-dev] PyFITS 1.1 "alpha 2" release
Message-ID: <445641F2.8040509@stsci.edu>

------------------
| PYFITS Release |
------------------

Space Telescope Science Institute is pleased to announce the "alpha 2" release of PyFITS 1.1. This release includes support for both the NUMPY and NUMARRAY array packages. This software can be downloaded at:

http://www.stsci.edu/resources/software_hardware/pyfits/Download

The NUMPY support in PyFITS is not nearly as well tested as the NUMARRAY support. We expect that you will encounter bugs. Please send bug reports to "help at stsci.edu".

We intend to support NUMARRAY and NUMPY simultaneously for a transition period of no less than 1 year. Eventually, however, support for NUMARRAY will disappear. During this period, it is likely that new features will appear only for NUMPY. The support for NUMARRAY will primarily be to fix serious bugs and handle platform updates.

-----------
| Version |
-----------

Version 1.1a2; May 1, 2006

-----------------------------
| Major Changes since v1.1a1 |
-----------------------------

* Corrected a bug in which the format for a Column object was not being parsed properly because of format repeats.

* Due to a change in NUMPY, recarray field access via attributes will no longer clobber pre-existing class methods or attributes. This behavior was causing PyFITS to fail when attempting to read tables that contained column names equivalent to existing class attribute or method names, such as "field". This change in NUMPY was made in revision 2288.

-------------------------
| Software Requirements |
-------------------------

PyFITS Version 1.1a2 REQUIRES:

* Python 2.3 or later
* NUMPY or NUMARRAY

---------------------
| Installing PyFITS |
---------------------

PyFITS 1.1a2 is distributed as a Python distutils module. Installation simply involves unpacking the package and executing

% python setup.py install

to install it in Python's site-packages directory. Alternatively, the command

% python setup.py install --local="/destination/directory/"

will install PyFITS in an arbitrary directory, which should be placed on PYTHONPATH. Once NUMARRAY or NUMPY has been installed, PyFITS should be available for use under Python.

-----------------
| Download Site |
-----------------

http://www.stsci.edu/resources/software_hardware/pyfits/Download

---------
| Usage |
---------

Users will issue an "import pyfits" command as in the past. However, whether the NUMPY or NUMARRAY version of PyFITS is used is controlled by an environment variable called NUMERIX. Set NUMERIX to 'numarray' for the NUMARRAY version of PyFITS, or to 'numpy' for the NUMPY version. If only one array package is installed, that package's version of PyFITS will be imported. If both packages are installed, the NUMERIX value is used to decide which version to import. If no NUMERIX value is set, the NUMARRAY version of PyFITS will be imported. Anything else will raise an exception upon import.
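In practice, the backend selection described above amounts to setting the environment variable before the import. A minimal illustration (assuming both array packages are installed):

import os
os.environ['NUMERIX'] = 'numpy'   # or 'numarray'
import pyfits                     # the backend is chosen at import time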
---------------
| Bug Reports |
---------------

Please send all PyFITS bug reports to help at stsci.edu

------------------
| Advanced Users |
------------------

Users who would like the "bleeding edge" of PyFITS can retrieve the software from our SUBVERSION repository hosted at:

http://astropy.scipy.org/svn/pyfits/trunk

We also provide a Trac site at:

http://projects.scipy.org/astropy/pyfits/wiki

--
Christopher Hanley
Systems Software Engineer
Space Telescope Science Institute
3700 San Martin Drive
Baltimore, MD 21218
(410) 338-4338

From david at ar.media.kyoto-u.ac.jp Tue May 2 06:19:45 2006
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 02 May 2006 19:19:45 +0900
Subject: [SciPy-dev] [Scipy-dev] Some code for Gaussian Mixture Model
Message-ID: <44573241.4000204@ar.media.kyoto-u.ac.jp>

Hi,

I recently ported some of my matlab code, itself inspired by code found in the h2m toolbox (http://www.tsi.enst.fr/~cappe/h2m/), for learning Gaussian Mixture Models (diagonal and full covariance matrices) with the Expectation Maximization algorithm. If there is some interest, I would like to give the code to scipy, but there are some things I am not sure about:

- how to properly "submit" the code?
- license: must the license be BSD-like? (I want to be sure about the license before publishing the code)

Many things are missing, such as the possibility to add priors, tricks to keep the likelihood from converging towards a singular covariance, online EM, etc. Also, I am quite new to numpy/scipy, so the code is still influenced by its matlab origin. Still, the code works (hopefully), can initialize a GMM with kmeans, and has functions to generate artificial GMMs, as well as plotting helper functions. A picture is worth 1000 words:

http://www.ar.media.kyoto-u.ac.jp/members/david/gmm.png

David

From aisaac at american.edu Tue May 2 11:37:27 2006
From: aisaac at american.edu (Alan G Isaac)
Date: Tue, 2 May 2006 11:37:27 -0400
Subject: [SciPy-dev] [Scipy-dev] Some code for Gaussian Mixture Model
In-Reply-To: <44573241.4000204@ar.media.kyoto-u.ac.jp>
References: <44573241.4000204@ar.media.kyoto-u.ac.jp>
Message-ID: 

On Tue, 02 May 2006, David Cournapeau apparently wrote:
> license: must the license be BSD-like? (I want to be sure
> about the license before publishing the code)

My understanding is that if you hope that code will become part of SciPy, then the BSD license is preferred. (Other acceptable licenses seem to be MIT and Python.) So the basic answer to your question appears to be: yes. Note that as long as you are the only one working on it, you can always release your code under other licenses later. (Of course you cannot take away privileges you have already granted.)

Cheers,
Alan Isaac

From robert.kern at gmail.com Tue May 2 12:17:43 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 02 May 2006 11:17:43 -0500
Subject: [SciPy-dev] [Scipy-dev] Some code for Gaussian Mixture Model
In-Reply-To: <44573241.4000204@ar.media.kyoto-u.ac.jp>
References: <44573241.4000204@ar.media.kyoto-u.ac.jp>
Message-ID: <44578627.4010000@gmail.com>

David Cournapeau wrote:
> Hi,
>
> I recently ported some of my matlab code, itself inspired by code
> found in the h2m toolbox (http://www.tsi.enst.fr/~cappe/h2m/), for learning
> Gaussian Mixture Models (diagonal and full covariance matrices) with
> the Expectation Maximization algorithm.
> If there is some interest, I would like to give the code to scipy,
> but there are some things I am not sure about:
>
> - how to properly "submit" the code?

Is it large? If so, could you put a tarball on a website somewhere? If it's just one file, then attaching it to a ticket on the Trac works just fine:

http://projects.scipy.org/scipy/scipy

You will need to register first before you can create a ticket. We had spam problems.

> - license: must the license be BSD-like? (I want to be sure about
> the license before publishing the code)

Yes. For convenience, we prefer that you use the license contained in LICENSE.txt in the scipy source distribution. You can just switch out the copyright statement at the top. You are a "contributor" in the text of the license, so you can leave it as is, if you like. As far as I am concerned, though, you can just state that the terms are those given in scipy's LICENSE.txt. IANAL. TINLA. But this does make it easiest for us.

Thank you very much for your contribution! I look forward to using it.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco

From schofield at ftw.at Tue May 2 13:31:51 2006
From: schofield at ftw.at (Ed Schofield)
Date: Tue, 02 May 2006 19:31:51 +0200
Subject: [SciPy-dev] Some code for Gaussian Mixture Model
In-Reply-To: <44578627.4010000@gmail.com>
References: <44573241.4000204@ar.media.kyoto-u.ac.jp> <44578627.4010000@gmail.com>
Message-ID: <44579787.2060206@ftw.at>

> David Cournapeau wrote:
>> Hi,
>>
>> I recently ported some of my matlab code, itself inspired by code
>> found in the h2m toolbox (http://www.tsi.enst.fr/~cappe/h2m/), for learning
>> Gaussian Mixture Models (diagonal and full covariance matrices) with
>> the Expectation Maximization algorithm.
>> If there is some interest, I would like to give the code to scipy ...

Yes, that'd be great! This would be useful for several pattern-recognition applications.

Is the EM code general enough for fitting other models too? If so, perhaps it should go in the optimize package...

-- Ed

From david.huard at gmail.com Tue May 2 16:30:43 2006
From: david.huard at gmail.com (David Huard)
Date: Tue, 2 May 2006 16:30:43 -0400
Subject: [SciPy-dev] A Fast Kernel Density Estimator for Scipy
Message-ID: <91cf711d0605021330h3c732c99vc83d76c78f82dd7d@mail.gmail.com>

Hi,

I added a ticket with the code attached for a fast kernel density estimation class. It's a nonparametric method to evaluate the pdf given a sample. It has pretty much the same shape as stats.gaussian_kde, except that the dataset is clustered at class instantiation. The startup time is a lot longer, but once you get to the evaluation part, it is an order of magnitude faster. The computational bottleneck is the clustering of the sample. The implementation is pretty rustic, so if anyone has a clustering method hidden somewhere, it could speed things up considerably.

Cheers,
David

From nwagner at iam.uni-stuttgart.de Wed May 3 15:30:19 2006
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 03 May 2006 21:30:19 +0200
Subject: [SciPy-dev] ImportError: cannot import name isspmatrix_csc
Message-ID: 

Python 2.4 (#1, Mar 22 2005, 21:42:42)
[GCC 3.3.5 20050117 (prerelease) (SUSE Linux)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
import numpy
import scipy
import linsolve.umfpack -> failed: 'module' object has no attribute 'test'
numpy.__version__
'0.9.7.2474'
scipy.__version__
'0.4.9.1893'
from scipy import *
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/usr/lib/python2.4/site-packages/scipy/sparse/__init__.py", line 5, in ?
    from sparse import *
  File "/usr/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 14, in ?
    from bisect import bisect_left
  File "/usr/lib/python2.4/bisect.py", line 1, in ?
    """Bisection algorithms."""
  File "/usr/lib/python2.4/site-packages/scipy/linsolve/__init__.py", line 5, in ?
    from linsolve import *
  File "/usr/lib/python2.4/site-packages/scipy/linsolve/linsolve.py", line 1, in ?
    from scipy.sparse import isspmatrix_csc, isspmatrix_csr, isspmatrix, spdiags
ImportError: cannot import name isspmatrix_csc

From nwagner at iam.uni-stuttgart.de Wed May 3 15:33:37 2006
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 03 May 2006 21:33:37 +0200
Subject: [SciPy-dev] Possible bug in linalg.lu
Message-ID: 

Hi all,

Can someone reproduce the results? python falk.py results in

import linsolve.umfpack -> failed: 'module' object has no attribute 'test'
Residual 2.50586483749e-16
Residual should be zero but is 3.76445362717

I have attached a short script. linalg.lu doesn't work for the second shift L_r. Am I missing something?

Nils

-------------- next part --------------
A non-text attachment was scrubbed...
Name: falk.py
Type: text/x-python
Size: 1077 bytes
Desc: not available
URL: 

From arnd.baecker at web.de Thu May 4 02:10:26 2006
From: arnd.baecker at web.de (Arnd Baecker)
Date: Thu, 4 May 2006 08:10:26 +0200 (CEST)
Subject: [SciPy-dev] ndimage import error on 64Bit
Message-ID: 

Hi,

I just innocently did a `from scipy import *` on a 64-bit machine (this morning's svn, if that matters) and get the following message:

In [1]: from scipy import *
---------------------------------------------------------------------------
exceptions.ImportError                  Traceback (most recent call last)

/home/abaecker/BUILDS3/
/home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/ndimage/__init__.py
     31 import numpy
     32 if numpy.int_ == numpy.int64:
---> 33     raise ImportError, "ndimage does not yet support 64-bit systems"
     34 from filters import *
     35 from fourier import *

ImportError: ndimage does not yet support 64-bit systems

Would it be possible to convert the ImportError into a Warning?

Best, Arnd

From nwagner at iam.uni-stuttgart.de Thu May 4 02:42:36 2006
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Thu, 04 May 2006 08:42:36 +0200
Subject: [SciPy-dev] ImportError: cannot import name isspmatrix_csc
In-Reply-To: 
References: 
Message-ID: <4459A25C.9080602@iam.uni-stuttgart.de>

Nils Wagner wrote:
> Python 2.4 (#1, Mar 22 2005, 21:42:42)
> [GCC 3.3.5 20050117 (prerelease) (SUSE Linux)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
>
> import numpy
> import scipy
> import linsolve.umfpack -> failed: 'module' object has no attribute 'test'
>
> numpy.__version__
> '0.9.7.2474'
> scipy.__version__
> '0.4.9.1893'
>
> from scipy import *
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
>   File "/usr/lib/python2.4/site-packages/scipy/sparse/__init__.py", line 5, in ?
>     from sparse import *
>   File "/usr/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 14, in ?
>     from bisect import bisect_left
>   File "/usr/lib/python2.4/bisect.py", line 1, in ?
>     """Bisection algorithms."""
>   File "/usr/lib/python2.4/site-packages/scipy/linsolve/__init__.py", line 5, in ?
>     from linsolve import *
>   File "/usr/lib/python2.4/site-packages/scipy/linsolve/linsolve.py", line 1, in ?
>     from scipy.sparse import isspmatrix_csc, isspmatrix_csr, isspmatrix, spdiags
> ImportError: cannot import name isspmatrix_csc
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-dev

Sorry for the noise. I had written a script called bisect.py and saved it in my home directory, so it was shadowing /usr/lib64/python2.4/bisect.py.

Nils

From david at ar.media.kyoto-u.ac.jp Thu May 4 04:19:37 2006
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Thu, 04 May 2006 17:19:37 +0900
Subject: [SciPy-dev] [Scipy-dev] Some code for Gaussian Mixture Model
In-Reply-To: <44578627.4010000@gmail.com>
References: <44573241.4000204@ar.media.kyoto-u.ac.jp> <44578627.4010000@gmail.com>
Message-ID: <4459B919.1050909@ar.media.kyoto-u.ac.jp>

Robert Kern wrote:
> David Cournapeau wrote:
>> Hi,
>>
>> I recently ported some of my matlab code, itself inspired by code
>> found in the h2m toolbox (http://www.tsi.enst.fr/~cappe/h2m/), for learning
>> Gaussian Mixture Models (diagonal and full covariance matrices) with
>> the Expectation Maximization algorithm.
>> If there is some interest, I would like to give the code to scipy,
>> but there are some things I am not sure about:
>>
>> - how to properly "submit" the code?

Sorry for the delay, but I had problems accessing the web server of my university... Anyway, I put the license of scipy in a small tar.gz available here:

http://www.ar.media.kyoto-u.ac.jp/members/david/gmm.tar.gz

There are basically two files: one for computing multivariate gaussian densities, and one for GMM. Both are required. There is no separate doc, but I tried to document all relevant functions. To get a picture similar to http://www.ar.media.kyoto-u.ac.jp/members/david/gmm.png, you just have to execute the script gmm.py (you need matplotlib for that; the helper functions for drawing do not use any plotting package, though, so using xplt instead should be trivial).

I started implementing some tests, but I haven't checked how to do it properly, and some are just usable by me to check that I get consistent results compared to my matlab implementation. I tried to test it for general cases, but I don't use it myself in high dimension, so I don't guarantee anything! I am quite new to scipy, so there may be strange ways of doing things; I tried to be consistent, though. For the EM algorithm initialization, I reimplemented the kmeans algorithm, which does not seem to be available in scipy.

David

From david at ar.media.kyoto-u.ac.jp Thu May 4 04:34:35 2006
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Thu, 04 May 2006 17:34:35 +0900
Subject: [SciPy-dev] Some code for Gaussian Mixture Model
In-Reply-To: <44579787.2060206@ftw.at>
References: <44573241.4000204@ar.media.kyoto-u.ac.jp> <44578627.4010000@gmail.com> <44579787.2060206@ftw.at>
Message-ID: <4459BC9B.3080309@ar.media.kyoto-u.ac.jp>

Ed Schofield wrote:
> Yes, that'd be great! This would be useful for several
> pattern-recognition applications.
>
> Is the EM code general enough for fitting other models too? If so,
> perhaps it should go in the optimize package...
Well, EM being more a meta-algorithm than an algorithm by itself, the implementation is quite different for each model; right now, the EM implementation is just split into the E and M steps. The E step is easily converted for other simple mixture models (multinomial, for example); the M step is totally different. The code still carries an obvious matlab past (only functions, no use of lists, etc.); it would make more sense to have a class for each mixture model, and implement EM for mixtures this way instead of having all those parameters (it would make the package more robust to user-input errors, too).

Things I intend to add in the near future:

- online EM for Gaussians
- several methods to keep the estimate from converging to a singular component (any bibliography pointers would be appreciated here)
- other initialization methods

David

From berthold.hoellmann at gl-group.com Thu May 4 07:40:50 2006
From: berthold.hoellmann at gl-group.com (Berthold Höllmann)
Date: Thu, 04 May 2006 13:40:50 +0200
Subject: [SciPy-dev] Option --f90exec not working for numpy.distutils?
Message-ID: 

Hello,

Trying to build some code of ours using numpy instead of Numeric / scipy_distutils, I get the following error message:

python setup.py config_fc --fcompiler=intel --f90exec=ifort81 --f77exec=ifort81 --f77flags="-assume byterecl" --f90flags="-assume byterecl" build

running config_fc
running build
running build_py
running build_scripts
running config_fc
running build
running build_src
building py_modules sources
building library "py_brush" sources
building library "engforcegen" sources
building library "priefas" sources
building library "f_dummy" sources
building library "fortmisc" sources
building extension "SXFPyBase" sources
f2py options: []
adding 'build/src/fortranobject.c' to sources.
adding 'build/src' to include_dirs.
building extension "numfort" sources
building extension "_Metis_Wrapper" sources
building extension "BasePyBrush" sources
building extension "StiffOnPlate" sources
building extension "glMiscMod" sources
f2py options: []
adding 'build/src/fortranobject.c' to sources.
adding 'build/src' to include_dirs.
building extension "iuw" sources
building extension "_spsolve" sources
building extension "sgah" sources
f2py options: []
adding 'build/src/fortranobject.c' to sources.
adding 'build/src' to include_dirs.
building extension "engforcelib" sources
f2py options: []
adding 'build/src/fortranobject.c' to sources.
adding 'build/src' to include_dirs.
building extension "priefaslib" sources
f2py options: []
adding 'build/src/fortranobject.c' to sources.
building extension "AnsysInterface" sources
f2py options: []
adding 'build/src/fortranobject.c' to sources.
adding 'build/src' to include_dirs.
running build_py
running build_clib
customize UnixCCompiler
customize UnixCCompiler using build_clib
Could not locate executable ifort
Could not locate executable ifc
Could not locate executable ifort
Could not locate executable efort
Could not locate executable efc
customize IntelFCompiler
customize IntelFCompiler using build_clib
running build_ext
customize UnixCCompiler
customize UnixCCompiler using build_ext
warning: build_ext: fcompiler=intel is not available.
building 'SXFPyBase' extension
compiling C sources
gcc options: '-pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC'
compile options: '-DDEBUG -DC_COMPILER_ACC -Dlinux=1 -Ilib/SXFPyBase -I/usr/local/gltools/include -Ibuild/src -I/usr/local/gltools/python/Python-2.5/linux/lib/python2.5/site-packages/numpy/core/include -I/usr/local/gltools/python/Python-2.5/include/python2.5 -I/usr/local/gltools/python/Python-2.5/linux/include/python2.5 -c'
Traceback (most recent call last):
  File "setup.py", line 206, in <module>
    ext_modules=extInfo.exts)
  File "/usr/local/gltools/python/Python-2.5/linux/lib/python2.5/site-packages/numpy/distutils/core.py", line 85, in setup
    return old_setup(**new_attr)
  File "/usr/local/gltools/python/Python-2.5/lib/python2.5/distutils/core.py", line 151, in setup
    dist.run_commands()
  File "/usr/local/gltools/python/Python-2.5/lib/python2.5/distutils/dist.py", line 974, in run_commands
    self.run_command(cmd)
  File "/usr/local/gltools/python/Python-2.5/lib/python2.5/distutils/dist.py", line 994, in run_command
    cmd_obj.run()
  File "/usr/local/gltools/python/Python-2.5/lib/python2.5/distutils/command/build.py", line 112, in run
    self.run_command(cmd_name)
  File "/usr/local/gltools/python/Python-2.5/lib/python2.5/distutils/cmd.py", line 333, in run_command
    self.distribution.run_command(command)
  File "/usr/local/gltools/python/Python-2.5/lib/python2.5/distutils/dist.py", line 994, in run_command
    cmd_obj.run()
  File "/usr/local/gltools/python/Python-2.5/linux/lib/python2.5/site-packages/numpy/distutils/command/build_ext.py", line 109, in run
    self.build_extensions()
  File "/usr/local/gltools/python/Python-2.5/lib/python2.5/distutils/command/build_ext.py", line 407, in build_extensions
    self.build_extension(ext)
  File "/usr/local/gltools/python/Python-2.5/linux/lib/python2.5/site-packages/numpy/distutils/command/build_ext.py", line 301, in build_extension
    link = self.fcompiler.link_shared_object
AttributeError: 'NoneType' object has no attribute 'link_shared_object'
make[1]: *** [build] Fehler 1
make[1]: Leaving directory `/data/tmp/hoel/GLPy/test'
make: *** [build] Fehler 2

Kind regards
Berthold Höllmann

--
Germanischer Lloyd AG
CAE Development
Vorsetzen 35
20459 Hamburg
Phone: +49(0)40 36149-7374
Fax: +49(0)40 36149-7320
e-mail: berthold.hoellmann at gl-group.com
Internet: http://www.gl-group.com

This e-mail and any attachment thereto may contain confidential information and/or information protected by intellectual property rights for the exclusive attention of the intended addressees named above. Any access of third parties to this e-mail is unauthorised. Any use of this e-mail by unintended recipients such as total or partial copying, distribution, disclosure etc. is prohibited and may be unlawful. When addressed to our clients the content of this e-mail is subject to the General Terms and Conditions of GL's Group of Companies applicable at the date of this e-mail. If you have received this e-mail in error, please notify the sender either by telephone or by e-mail and delete the material from any computer. GL's Group of Companies does not warrant and/or guarantee that this message at the moment of receipt is authentic, correct and its communication free of errors, interruption etc.
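A hedged aside on the failure above: assuming a recent numpy.distutils, the list of Fortran compilers it can actually locate can be printed with

python setup.py config_fc --help-fcompiler

which helps distinguish a missing compiler executable from a broken --f90exec override.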
From bsouthey at gmail.com Thu May 4 11:37:30 2006
From: bsouthey at gmail.com (Bruce Southey)
Date: Thu, 4 May 2006 10:37:30 -0500
Subject: [SciPy-dev] some statistical models / formulas
In-Reply-To: <4446C4C1.2060003@stanford.edu>
References: <4446C4C1.2060003@stanford.edu>
Message-ID: 

Hi,

John and I have written some code for statistical analysis, biased towards our area. I have included the general linear model part and an example data set at:

https://netfiles.uiuc.edu/southey/www/glm.tar.gz

Basically, the features of our code include:

1) Reading the data from a file using format codes to indicate whether the data columns are numeric or alphanumeric. This is really along the lines of SAS's $ formatting, but these are treated as 'class' variables.
2) We also allow the data to have:
   a) comment codes, so as to ignore selected lines of data
   b) missing value codes such as * or NA
3) The data is stored in a masked array (due to the missing values), and any alphanumeric values are recoded to numeric, as well as storing the unique levels.
4) A summary method, similar to SAS's proc means, that can be done for a class variable (similar to the by statement in SAS).
5) A univariate general linear model similar to SAS's proc glm or R's lm (?). In theory this should fit both class variables and covariates, as well as interactions and nested terms (although the interaction of a class term and a covariate may not be correct). The current outcome is an ANOVA table similar to SAS's proc glm, where there is the model fit and Type 1 and 3 sums of squares parts for each term. There are still some bugs in this one, especially in the Type 3 SS calculations (which should be done differently).
6) Directly forming X'X and X'Y from the data, rather than matrix multiplication based on the selected model.

Regards
Bruce Southey

On 4/19/06, Jonathan Taylor wrote:
> i have made a numpy/scipy package for some linear statistical models
>
> http://www-stat.stanford.edu/~jtaylo/scipy_stats_models-0.01a.tar.gz
>
> i was hoping that it might someday get into scipy.stats, maybe as
> scipy.stats.models?
>
> anyways, i am sure the code needs work and more docs with examples, but
> right now there is basic functionality for the following (the tests give
> some examples):
>
> - model formulae as in R (to some extent)
> - OLS (ordinary least square regression)
> - WLS (weighted least square regression)
> - AR1 regression (non-diagonal covariance -- right now just AR1 but easy
> to extend to ARp)
> - generalized linear models (all of R's links and variance functions but
> extensible as well -- not everything has been rigorously tested but
> logistic agrees with R, for instance)
> - robust linear models using M estimators (with a number of standard
> default robust norms as in R's rlm)
> - robust scale estimates (MAD, Huber's proposal 2).
>
> it would be nice to add a few things over time, too, like:
>
> - mixed effects models
> - generalized additive models (gam), generalized estimating equations
> (gee)....
> - nonlinear regression (i have some quasi working code for this, too,
> but it is not yet included).
>
> + anything else people want to add.
>
> -- jonathan
>
> --
> ------------------------------------------------------------------------
> I'm part of the Team in Training: please support our efforts for the
> Leukemia and Lymphoma Society!
>
> http://www.active.com/donate/tntsvmb/tntsvmbJTaylor
>
> GO TEAM !!!
>
> ------------------------------------------------------------------------
> Jonathan Taylor          Tel: 650.723.9230
> Dept. of Statistics      Fax: 650.725.8977
> Sequoia Hall, 137        www-stat.stanford.edu/~jtaylo
> 390 Serra Mall
> Stanford, CA 94305

From fullung at gmail.com Mon May 8 08:23:47 2006
From: fullung at gmail.com (Albert Strasheim)
Date: Mon, 8 May 2006 14:23:47 +0200
Subject: [SciPy-dev] Summer of Code proposal: Support Vector Machines
Message-ID: <00f001c6729a$41424530$0502010a@dsp.sun.ac.za>

Hello all

I'd like to add my proposal for adding support for Support Vector Machines to SciPy to the list of Summer of Code projects. Details here:

http://students.ee.sun.ac.za/~albert/pylibsvm/

Executive summary: I've wrapped libsvm (BSD licensed) with ctypes and I've been able to get great performance with massive dense problems (currently working on thousands of 13,000-dimensional features, going up to 100,000 dimensions soon).

Along the way I found quite a few bugs in NumPy, many of which have been fixed. In that respect, this has already been a useful effort. Other work that has come out of my experiments with ctypes is the page on the wiki on using NumPy arrays with ctypes:

http://www.scipy.org/Cookbook/Ctypes

For the Summer of Code I want to extend my code to handle all the problem types supported by libsvm, improve support for precomputed kernels and do some memory footprint and performance optimization. This work is directly related to my thesis, where I'm focusing on linear kernels for the moment.

I think this would be a great piece of code to add to SciPy.

I'll submit my application to Google in a few hours, so if you have any comments before then, I'll incorporate them into the proposal.

Is anybody who registered as a mentor willing to take on this project? You don't have to be an expert on support vector machines. Neither am I, but it's pretty easy to pick up if you know a bit about linear algebra. If you're interested in performance optimization in general, this is the project for you! :-)

Regards,

Albert

From gnchen at cortechs.net Mon May 8 09:31:48 2006
From: gnchen at cortechs.net (Gennan Chen)
Date: Mon, 8 May 2006 06:31:48 -0700
Subject: [SciPy-dev] Summer of Code proposal: Support Vector Machines
In-Reply-To: <00f001c6729a$41424530$0502010a@dsp.sun.ac.za>
References: <00f001c6729a$41424530$0502010a@dsp.sun.ac.za>
Message-ID: <527744C9-8F17-4959-BE7D-BED38FC6272C@cortechs.net>

Albert,

When are you going to add libsvm to SciPy? Some of my stuff needs that. BTW, ctypes vs DIY: any speed issue there?

Gen-Nan Chen, PhD
Chief Scientist
Research and Development Group
CorTechs Labs Inc (www.cortechs.net)
1020 Prospect St., #304, La Jolla, CA, 92037
Tel: 1-858-459-9700 ext 16
Fax: 1-858-459-9705
Email: gnchen at cortechs.net

On May 8, 2006, at 5:23 AM, Albert Strasheim wrote:
> Hello all
>
> I'd like to add my proposal for adding support for Support Vector Machines
> to SciPy to the list of Summer of Code projects.
>
> Details here:
>
> http://students.ee.sun.ac.za/~albert/pylibsvm/
>
> Executive summary: I've wrapped libsvm (BSD licensed) with ctypes and I've
> been able to get great performance with massive dense problems (currently
> working on thousands of 13,000-dimensional features, going up to 100,000
> dimensions soon).
>
> Along the way I found quite a few bugs in NumPy, many of which have been
> fixed. In that respect, this has already been a useful effort.
> Other work that has come out of my experiments with ctypes is the page on
> the wiki on using NumPy arrays with ctypes:
>
> http://www.scipy.org/Cookbook/Ctypes
>
> For the Summer of Code I want to extend my code to handle all the problem
> types supported by libsvm, improve support for precomputed kernels and do
> some memory footprint and performance optimization.
>
> This work is directly related to my thesis, where I'm focusing on linear
> kernels for the moment.
>
> I think this would be a great piece of code to add to SciPy.
>
> I'll submit my application to Google in a few hours, so if you have any
> comments before then, I'll incorporate them into the proposal.
>
> Is anybody who registered as a mentor willing to take on this project? You
> don't have to be an expert on support vector machines. Neither am I, but
> it's pretty easy to pick up if you know a bit about linear algebra. If
> you're interested in performance optimization in general, this is the
> project for you! :-)
>
> Regards,
>
> Albert

From fullung at gmail.com Mon May 8 09:51:44 2006
From: fullung at gmail.com (Albert Strasheim)
Date: Mon, 8 May 2006 15:51:44 +0200
Subject: [SciPy-dev] Summer of Code proposal: Support Vector Machines
In-Reply-To: <527744C9-8F17-4959-BE7D-BED38FC6272C@cortechs.net>
Message-ID: <011501c672a6$8aa93380$0502010a@dsp.sun.ac.za>

Hello all

> -----Original Message-----
> From: scipy-dev-bounces at scipy.net [mailto:scipy-dev-bounces at scipy.net] On
> Behalf Of Gennan Chen
> Sent: 08 May 2006 15:32
> To: SciPy Developers List
> Subject: Re: [SciPy-dev] Summer of Code proposal: Support Vector Machines
>
> Albert,
>
> When are you going to add libsvm to SciPy? Some of my stuff needs that.
> BTW, ctypes vs DIY: any speed issue there?

I'm hoping to add everything as part of this SoC project.

If you're eager to get going, you can try my code here:

http://students.ee.sun.ac.za/~albert/pylibsvm

If you're doing sparse problems, it should just work like the old libsvm Python wrappers, although I haven't tested this extensively. You want to look at SparseProblem in problem.py and the tests at the end.

If you have dense data that isn't too big, i.e. you can afford to have a copy of it in memory in the libsvm svm_node format, look at ArrayProblem.

If you can't afford a copy, you can put your data in an array with dtype=svm_node_dtype and use SvmNodeArrayProblem, or put all your data in multiple arrays or lists and use SvmNodeListProblem. If you need to do this, let me know, and I'll write up some docs on how this works exactly (or look at the tests). The reason this hasn't been extensively documented yet is that I only finished this code yesterday. (The code checks that your matrices are in the right format, so that should help you if you want to explore this option.)

As for ctypes vs DIY: before trying ctypes, I spent a week trying to wrap libsvm with SWIG. After that experience, I wouldn't recommend that approach to anyone for wrapping C code that has to work nicely with NumPy arrays.

The beauty of wrapping with ctypes is that there is no C code involved (if you do things right). Sure, you can still make things segfault, but most of the time it just works brilliantly.
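As a hedged sketch of what this looks like in practice - loading a shared library with ctypes and declaring one function's signature, with no C glue code. 'libfoo.so' and 'foo_sum' are hypothetical stand-ins, not part of libsvm's API:

import numpy as np
from ctypes import CDLL, POINTER, c_double, c_int

lib = CDLL('libfoo.so')
lib.foo_sum.restype = c_double
lib.foo_sum.argtypes = [POINTER(c_double), c_int]

x = np.ascontiguousarray([1.0, 2.0, 3.0])
# pass the array's data pointer straight through; no copy is made
total = lib.foo_sum(x.ctypes.data_as(POINTER(c_double)), len(x))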
To give you an idea, going from reading the ctypes tutorial to having a completely wrapped libsvm (about 10 functions, lots of pointers, a few intricate structs) took me 4 hours, while it took me another few hours to figure out all the goodies that you'll see on the ctypes page on the SciPy wiki:

http://www.scipy.org/Cookbook/Ctypes

If anybody wants to mentor this project, please let me know.

Cheers,

Albert

From nwagner at iam.uni-stuttgart.de Thu May 11 03:49:21 2006
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Thu, 11 May 2006 09:49:21 +0200
Subject: [SciPy-dev] The future of scipy.linalg.flapack
Message-ID: <4462EC81.4060605@iam.uni-stuttgart.de>

Hi all,

I would like to know if scipy.linalg.flapack will be completed one day. For example, http://www.netlib.org/lapack/double/dsytrf.f is missing. Any comments?

Nils

From fperez.net at gmail.com Thu May 11 16:00:09 2006
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 11 May 2006 14:00:09 -0600
Subject: [SciPy-dev] Discussion with Guido: an idea for scipy'06
Message-ID: 

Hi all,

I think the presence of Guido as keynote speaker for scipy'06 is something we should try to benefit from as much as possible. I figured a good way to do that would be to set aside a one-hour period, at the end of the first day (his keynote is that day), to hold a discussion with him on all aspects of Python which are relevant to us as a group of users, and which he may contribute feedback to, incorporate into future versions, etc.

I floated the idea privately by some people and got no negative (and some positive) feedback, so I'm now putting it out on the lists. If you all think it's a waste of time, it's easy to just kill the thing.

Since I know that many on this list may not be able to attend the conference, but may still have valuable ideas to contribute, I thought the best way to proceed (assuming people want to do this) would be to prepare the key points for discussion on a public forum, true to the spirit of open source collaboration. For this purpose, I've just created a stub page in a hurry:

http://scipy.org/SciPy06DiscussionWithGuido

Feel free to contribute to it. Hopefully there (and on-list) we can sort out interesting questions, and we can contact Guido a few days before the conference so he has a chance to read it in advance.

Cheers,

f

ps - I didn't link to this page from anywhere else on the wiki, so outside of this message it won't be easy to find. I just didn't feel comfortable touching the more 'visible' pages, but if this idea floats, we should make it easier to find by linking to it on one of the conference pages.

From ndbecker2 at gmail.com Tue May 16 08:07:54 2006
From: ndbecker2 at gmail.com (Neal Becker)
Date: Tue, 16 May 2006 08:07:54 -0400
Subject: [SciPy-dev] segfault in leastsq
Message-ID: 

from scipy.optimize import leastsq
from math import exp

def f (x):
    print x
    return (exp (x) - exp (0.5))

print leastsq (f, 0.1)

Python 2.4.2 (#1, Feb 12 2006, 03:45:41)
[GCC 4.1.0 20060210 (Red Hat 4.1.0-0.24)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> ## working on region in file /usr/tmp/python-CzAVPU.py...
[ 0.1]
[ 0.1]

Process Python segmentation fault

scipy-0.4.8-4
numpy-0.9.6-1.fc5

From nwagner at iam.uni-stuttgart.de Tue May 16 08:13:19 2006
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 16 May 2006 14:13:19 +0200
Subject: [SciPy-dev] segfault in leastsq
In-Reply-To: 
References: 
Message-ID: <4469C1DF.20303@iam.uni-stuttgart.de>

Neal Becker wrote:
> from scipy.optimize import leastsq
> from math import exp
>
> def f (x):
>     print x
>     return (exp (x) - exp (0.5))
>
> print leastsq (f, 0.1)
>
> Python 2.4.2 (#1, Feb 12 2006, 03:45:41)
> [GCC 4.1.0 20060210 (Red Hat 4.1.0-0.24)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
>>>> ## working on region in file /usr/tmp/python-CzAVPU.py...
> [ 0.1]
> [ 0.1]
>
> Process Python segmentation fault
> scipy-0.4.8-4
> numpy-0.9.6-1.fc5
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-dev

Here is the output of gdb. I am using
Numpy version 0.9.7.2502
Scipy version 0.4.9.1897

Starting program: /usr/bin/python neal.py
[Thread debugging using libthread_db enabled]
[New Thread 16384 (LWP 17117)]
[ 0.1]
[ 0.1]

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 16384 (LWP 17117)]
0x00002aaab01be80f in minpack_lmdif (dummy=, args=) at __minpack.h:452
452             m = ap_fvec->dimensions[0];
(gdb) bt
#0  0x00002aaab01be80f in minpack_lmdif (dummy=, args=) at __minpack.h:452
#1  0x00002aaaaac5496a in PyEval_EvalFrame (f=0x7d6fe0) at ceval.c:3547
#2  0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaade3de30, globals=, locals=, args=0xb, argcount=2, kws=0x559740, kwcount=0, defs=0x2aaaad868d58, defcount=11, closure=0x0) at ceval.c:2730
#3  0x00002aaaaac53aba in PyEval_EvalFrame (f=0x5595a0) at ceval.c:3640
#4  0x00002aaaaac55404 in PyEval_EvalCodeEx (co=0x2aaaaab10f80, globals=, locals=, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2730
#5  0x00002aaaaac556d2 in PyEval_EvalCode (co=, globals=, locals=) at ceval.c:484
#6  0x00002aaaaac70719 in run_node (n=, filename=, globals=0x503db0, locals=0x503db0, flags=) at pythonrun.c:1265
#7  0x00002aaaaac71843 in PyRun_SimpleFileExFlags (fp=, filename=0x7fffffdb9adf "neal.py", closeit=1, flags=0x7fffffdb9320) at pythonrun.c:860
#8  0x00002aaaaac77b25 in Py_Main (argc=, argv=0x7fffffdb93c8) at main.c:484
#9  0x00002aaaab603ced in __libc_start_main () from /lib64/libc.so.6
#10 0x00000000004006ea in _start () at start.S:113
#11 0x00007fffffdb93b8 in ?? ()
#12 0x00002aaaaabc19c0 in rtld_errno () from /lib64/ld-linux-x86-64.so.2

From ndbecker2 at gmail.com Tue May 16 08:33:53 2006
From: ndbecker2 at gmail.com (Neal Becker)
Date: Tue, 16 May 2006 08:33:53 -0400
Subject: [SciPy-dev] segfault in leastsq
References: <4469C1DF.20303@iam.uni-stuttgart.de>
Message-ID: 

Nils Wagner wrote:
> Neal Becker wrote:
>> from scipy.optimize import leastsq
>> from math import exp
>>
>> def f (x):
>>     print x
>>     return (exp (x) - exp (0.5))
>>
>> print leastsq (f, 0.1)
>>
>> Python 2.4.2 (#1, Feb 12 2006, 03:45:41)
>> [GCC 4.1.0 20060210 (Red Hat 4.1.0-0.24)] on linux2
>> Type "help", "copyright", "credits" or "license" for more information.
>>>>> ## working on region in file /usr/tmp/python-CzAVPU.py...
>> [ 0.1]
>> [ 0.1]
>>
>> Process Python segmentation fault
>> scipy-0.4.8-4
>> numpy-0.9.6-1.fc5
>>
>> _______________________________________________
>> Scipy-dev mailing list
>> Scipy-dev at scipy.net
>> http://www.scipy.net/mailman/listinfo/scipy-dev
>>
> Here is the output of gdb. I am using
> Numpy version 0.9.7.2502
> Scipy version 0.4.9.1897
>
> Starting program: /usr/bin/python neal.py
> [Thread debugging using libthread_db enabled]
> [New Thread 16384 (LWP 17117)]
> [ 0.1]
> [ 0.1]
>
> Program received signal SIGSEGV, Segmentation fault.
> [Switching to Thread 16384 (LWP 17117)]
> 0x00002aaab01be80f in minpack_lmdif (dummy=, args=) at __minpack.h:452
> 452             m = ap_fvec->dimensions[0];

Yes, same here.

From stefan at sun.ac.za Tue May 16 08:37:13 2006
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Tue, 16 May 2006 14:37:13 +0200
Subject: [SciPy-dev] segfault in leastsq
In-Reply-To: 
References: 
Message-ID: <20060516123713.GA17208@mentat.za.net>

Running this code through Valgrind (on numpy r2503) reveals two problems. The first, in __minpack.h, is the one that causes your segfault:

==17580== Invalid read of size 4
==17580==    at 0x653A2D2: minpack_lmdif (__minpack.h:452)
==17580==    by 0x80B62C6: PyEval_EvalFrame (in /usr/bin/python2.4)
==17580==    by 0x80B771E: PyEval_EvalCodeEx (in /usr/bin/python2.4)
==17580==    by 0x80B6F92: PyEval_EvalFrame (in /usr/bin/python2.4)
==17580==    by 0x80B771E: PyEval_EvalCodeEx (in /usr/bin/python2.4)
==17580==    by 0x80B7964: PyEval_EvalCode (in /usr/bin/python2.4)
==17580==    by 0x80D94CB: PyRun_FileExFlags (in /usr/bin/python2.4)
==17580==    by 0x80D976B: PyRun_SimpleFileExFlags (in /usr/bin/python2.4)
==17580==    by 0x8055B32: Py_Main (in /usr/bin/python2.4)
==17580==    by 0x4080EA1: __libc_start_main (in /lib/tls/i686/cmov/libc-2.3.6.so)
==17580==  Address 0x0 is not stack'd, malloc'd or (recently) free'd
==17580==
==17580== Process terminating with default action of signal 11 (SIGSEGV)

But then I also see the following, probably unrelated problem, which is caused by numpy's handling of scalars:

==17293== 48 (40 direct, 8 indirect) bytes in 1 blocks are definitely lost in loss record 14 of 125
==17293==    at 0x401C422: malloc (vg_replace_malloc.c:149)
==17293==    by 0x48E4D6A: array_alloc (arrayobject.c:5582)
==17293==    by 0x48EEA73: PyArray_NewFromDescr (arrayobject.c:4393)
==17293==    by 0x48FFC45: PyArray_FromArray (arrayobject.c:6414)
==17293==    by 0x490710A: PyArray_FromAny (arrayobject.c:6843)
==17293==    by 0x65961E9: minpack_lmdif (__minpack.h:438)
==17293==    by 0x80B62C6: PyEval_EvalFrame (in /usr/bin/python2.4)
==17293==    by 0x80B771E: PyEval_EvalCodeEx (in /usr/bin/python2.4)
==17293==    by 0x80B6F92: PyEval_EvalFrame (in /usr/bin/python2.4)
==17293==    by 0x80B771E: PyEval_EvalCodeEx (in /usr/bin/python2.4)
==17293==
==17293==
==17293== 296 bytes in 23 blocks are definitely lost in loss record 92 of 125
==17293==    at 0x401C422: malloc (vg_replace_malloc.c:149)
==17293==    by 0x48D74EA: gentype_alloc (scalartypes.inc:339)
==17293==    by 0x48FA8ED: PyArray_ScalarFromObject (scalartypes.inc:293)
==17293==    by 0x4F54AFF: _int_convert_to_ctype (scalarmathmodule.c:2025)
==17293==    by 0x4F646F2: int_richcompare (scalarmathmodule.c:2038)
==17293==    by 0x807CE1E: (within /usr/bin/python2.4)
==17293==    by 0x807E450: PyObject_RichCompare (in /usr/bin/python2.4)
==17293==    by 0x80B2109: PyEval_EvalFrame (in /usr/bin/python2.4)
==17293==    by 0x80B771E: PyEval_EvalCodeEx (in /usr/bin/python2.4)
==17293==    by 0x80FC03C: (within /usr/bin/python2.4)

Hope this helps pin it down.
Regards
Stéfan

On Tue, May 16, 2006 at 08:07:54AM -0400, Neal Becker wrote:
> from scipy.optimize import leastsq
> from math import exp
>
> def f (x):
>     print x
>     return (exp (x) - exp (0.5))
>
> print leastsq (f, 0.1)
>
> Python 2.4.2 (#1, Feb 12 2006, 03:45:41)
> [GCC 4.1.0 20060210 (Red Hat 4.1.0-0.24)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> >>> ## working on region in file /usr/tmp/python-CzAVPU.py...
> [ 0.1]
> [ 0.1]
>
> Process Python segmentation fault
> scipy-0.4.8-4
> numpy-0.9.6-1.fc5

From Norbert.Nemec.list at gmx.de Wed May 17 09:32:22 2006
From: Norbert.Nemec.list at gmx.de (Norbert Nemec)
Date: Wed, 17 May 2006 15:32:22 +0200
Subject: [SciPy-dev] Support for numpy scalars in weave?
Message-ID: <446B25E6.2040807@gmx.de>

Hi there,

it seems that weave does not yet correctly handle variables of the numpy scalar types. Consider the following code:

---------------------------------
#!/usr/bin/env python

from numpy import *
import scipy.weave
import scipy.weave.converters

pycpx = 1.0j
npcpx = array([1j])[0]

scipy.weave.inline(r"""
std::complex<double> a = pycpx * 1.0;
std::complex<double> b = npcpx * 1.0;
""",
arg_names=["pycpx","npcpx"],
type_converters=scipy.weave.converters.blitz,
)
-----------------------------------

running it (with up-to-date CVS versions of numpy and scipy installed) gives an error compiling the produced C++ code. Closer analysis shows the critical lines in the produced C++ file:

-------------
py_pycpx = get_variable("pycpx",raw_locals,raw_globals);
std::complex<double> pycpx = convert_to_complex(py_pycpx,"pycpx");
py_npcpx = get_variable("npcpx",raw_locals,raw_globals);
py::object npcpx = convert_to_catchall(py_npcpx,"npcpx");
-------------

Obviously, the "complex128scalar" type of numpy is unknown to weave. I think this should be fairly easy to fix for anyone who knows the weave internals a bit better than me.

Greetings,
Norbert

From jstrunk at enthought.com Wed May 17 15:33:42 2006
From: jstrunk at enthought.com (Jeff Strunk)
Date: Wed, 17 May 2006 14:33:42 -0500
Subject: [SciPy-dev] Enthought Network Maintenance Saturday 5/20/2006
Message-ID: <200605171433.42566.jstrunk@enthought.com>

Good Afternoon,

We need to put in some new switches and a router at Enthought this Saturday starting at 11am Central. This shouldn't take long, but there may be short periods of no connectivity over the course of a few hours.

Please forward this along to other people who may be affected by it.

Thank you,
Jeff Strunk
IT Administrator
Enthought Inc.

From jonathan.taylor at utoronto.ca Thu May 18 16:03:47 2006
From: jonathan.taylor at utoronto.ca (Jonathan Taylor)
Date: Thu, 18 May 2006 16:03:47 -0400
Subject: [SciPy-dev] Summer of Code proposal: Support Vector Machines
In-Reply-To: <011501c672a6$8aa93380$0502010a@dsp.sun.ac.za>
References: <527744C9-8F17-4959-BE7D-BED38FC6272C@cortechs.net> <011501c672a6$8aa93380$0502010a@dsp.sun.ac.za>
Message-ID: <463e11f90605181303v35dbf112v8c020119ff33abbd@mail.gmail.com>

I suggest you post this to the numpy list too.

On 5/8/06, Albert Strasheim wrote:
> Hello all
>
> > -----Original Message-----
> > From: scipy-dev-bounces at scipy.net [mailto:scipy-dev-bounces at scipy.net] On
> > Behalf Of Gennan Chen
> > Sent: 08 May 2006 15:32
> > To: SciPy Developers List
> > Subject: Re: [SciPy-dev] Summer of Code proposal: Support Vector Machines
> >
> > Albert,
> >
> > When are you going to add libsvm to SciPy? Some of my stuff needs that.
> > BTW, ctypes vs DIY: any speed issue there?
>
> I'm hoping to add everything as part of this SoC project.
>
> If you're eager to get going, you can try my code here:
>
> http://students.ee.sun.ac.za/~albert/pylibsvm
>
> If you're doing sparse problems, it should just work like the old libsvm
> Python wrappers, although I haven't tested this extensively. You want to
> look at SparseProblem in problem.py and the tests at the end.
>
> If you have dense data that isn't too big, i.e. you can afford to have a
> copy of it in memory in the libsvm svm_node format, look at ArrayProblem.
>
> If you can't afford a copy, you can put your data in an array with
> dtype=svm_node_dtype and use SvmNodeArrayProblem, or put all your data in
> multiple arrays or lists and use SvmNodeListProblem. If you need to do this,
> let me know, and I'll write up some docs on how this works exactly (or look
> at the tests). The reason this hasn't been extensively documented yet is
> that I only finished this code yesterday. (The code checks that your
> matrices are in the right format, so that should help you if you want to
> explore this option.)
>
> As for ctypes vs DIY: before trying ctypes, I spent a week trying to wrap
> libsvm with SWIG. After that experience, I wouldn't recommend that approach
> to anyone for wrapping C code that has to work nicely with NumPy arrays.
>
> The beauty of wrapping with ctypes is that there is no C code involved (if
> you do things right). Sure, you can still make things segfault, but most of
> the time it just works brilliantly. To give you an idea, going from reading
> the ctypes tutorial to having a completely wrapped libsvm (about 10
> functions, lots of pointers, a few intricate structs) took me 4 hours, while
> it took me another few hours to figure out all the goodies that you'll see
> on the ctypes page on the SciPy wiki:
>
> http://www.scipy.org/Cookbook/Ctypes
>
> If anybody wants to mentor this project, please let me know.
>
> Cheers,
>
> Albert

From loredo at astro.cornell.edu Thu May 18 17:19:30 2006
From: loredo at astro.cornell.edu (Tom Loredo)
Date: Thu, 18 May 2006 17:19:30 -0400
Subject: [SciPy-dev] Using randomkit in extensions
Message-ID: <1147987170.446ce4e240ee8@astrosun2.astro.cornell.edu>

Hi folks,

There's been talk here in the last few months about the need to link to some numpy/scipy libraries in users' extensions. As far as I can follow the discussion, no solution to this problem has been found yet. Clue me in if I'm wrong! The recommended workaround has been to just duplicate code. It's inefficient, but it works! That's what I've been doing. But it's a problematic solution for extensions that use random numbers. Code should produce reproducible results. Robert added the ability to save/restore state to numpy.random specifically to support reproducibility. Simply copying randomkit.[ch] or mtrand.c breaks this.

So far my extensions have needed a deterministic number of uniform or normal samples, so I've written them to accept a numpy array of samples as input (the Python wrapper around the extension code hides this). But now I'm writing an extension that internally uses a rejection technique, with the expected # of rejections varying widely from case to case. So it's not possible to send it an array with the needed samples.

A possible workaround would be to grab mtrand's state in Python and communicate it to my extension's copy of *rand*.
The Python wrapper around the extension call could get the state back after the call and update numpy's mtrand state appropriately. Does this sound sensible? Is there a better way? If this would work, how should I proceed (I've never used Pyrex---will it be necessary)? I can write the extension in Fortran, C or C++. Thanks, Tom ------------------------------------------------- This mail sent through IMP: http://horde.org/imp/ From robert.kern at gmail.com Thu May 18 17:48:09 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 18 May 2006 16:48:09 -0500 Subject: [SciPy-dev] Using randomkit in extensions In-Reply-To: <1147987170.446ce4e240ee8@astrosun2.astro.cornell.edu> References: <1147987170.446ce4e240ee8@astrosun2.astro.cornell.edu> Message-ID: <446CEB99.6070902@gmail.com> Tom Loredo wrote: > Hi folks, > > There's been talk here in the last few months about the need > to link to some numpy/scipy libraries in user's extensions. > As far as I can follow the discussion, no solution to this > problem has been found yet. Clue me in if I'm wrong! Some time ago, I started working on a code generator script that would create a C include file that accessed a CObject in the numpy.random.mtrand module to expose the function pointers of all of the randomkit and distribution API much like the PyArray API is exported from numpy. It would also have to generate Pyrex code to add to mtrand.pyx to create that CObject. I have no time to work on it at the moment. It would probably take a bit of numpy.distutils jiggery-pokery to get it to build reliably. I can check in what I have if other people want to work on it. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From schofield at ftw.at Fri May 19 07:16:06 2006 From: schofield at ftw.at (Ed Schofield) Date: Fri, 19 May 2006 13:16:06 +0200 Subject: [SciPy-dev] Next SciPy release Message-ID: <446DA8F6.3040502@ftw.at> Hi all, Now that NumPy 0.9.8 is out, I'd like to make a new SciPy release (0.4.9) early next week. Now's the time for any last-minute fixes or documentation improvements (hint, hint!) -- Ed From david.huard at gmail.com Fri May 19 12:01:01 2006 From: david.huard at gmail.com (David Huard) Date: Fri, 19 May 2006 12:01:01 -0400 Subject: [SciPy-dev] Next SciPy release In-Reply-To: <446DA8F6.3040502@ftw.at> References: <446DA8F6.3040502@ftw.at> Message-ID: <91cf711d0605190901x101eb187h2e71d8ee8db40701@mail.gmail.com> Hi all, I'm trying to fix a bug with the integrate method of the UnivariateSpline class in the interpolate package (fitpack2.py), and I'd need some help. Here is what I understood so far. The fitpack library defines a bunch of function and subroutines with similar inputs, for instance *subroutine* splev(t,n,c,k,x,y,m,ier) integer n,k,m,ier *real*8* t(n),c(n),x(m),y(m) *real*8* *function* splint(t,n,c,k,a,b,wrk) *real*8* a,b integer n,k *real*8* t(n),c(n),wrk(n) Notice that for both, c is a length n vector. However, in the interface file fitpack.pyf, we have : for splev : real*8 dimension(n-k-1),depend(n,k),check(len(c)==n-k-1),intent(in) :: c and for splint: real*8 dimension(n),depend(n),check(len(c)==n) :: c So for splev, c has length (n-k-1) instead of n. However, when splev gets called from the UnivariateSpline class (with a c of length n-k-1 stored in _eval_args), it looks like it works. 
I filed a ticket (201) with an example that generates this error.

From what I understand, changing the interface for c in splint to
copy splev would solve the problem, but I don't understand why splev
works in the first place, since the Fortran subroutine asks for a
length-n vector.

Thanks,
David

2006/5/19, Ed Schofield :
>
> Hi all,
>
> Now that NumPy 0.9.8 is out, I'd like to make a new SciPy release
> (0.4.9) early next week.
>
> Now's the time for any last-minute fixes or documentation
> improvements (hint, hint!)
>
> -- Ed
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cookedm at physics.mcmaster.ca Fri May 19 16:23:52 2006
From: cookedm at physics.mcmaster.ca (David M. Cooke)
Date: Fri, 19 May 2006 16:23:52 -0400
Subject: [SciPy-dev] Sandbox setup change
Message-ID: 

For those working with packages in scipy.sandbox, I've added a
feature to the setup.py that means you won't have to edit it (and run
the risk of committing your changes). Simply make a file
Lib/sandbox/enabled_packages.txt with the subpackages you want to
enable, one per line. Since that file doesn't exist in svn, you won't
check it in.
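For example (the package names here are just for illustration), a
Lib/sandbox/enabled_packages.txt containing the two lines

    svm
    newoptimize

would enable only those two sandbox packages.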
cheers!

-- 
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke                      http://arbutus.physics.mcmaster.ca/dmc/
|cookedm at physics.mcmaster.ca

From ndbecker2 at gmail.com Sat May 20 07:08:04 2006
From: ndbecker2 at gmail.com (Neal Becker)
Date: Sat, 20 May 2006 07:08:04 -0400
Subject: [SciPy-dev] Next SciPy release
References: <446DA8F6.3040502@ftw.at> <91cf711d0605190901x101eb187h2e71d8ee8db40701@mail.gmail.com>
Message-ID: 

Has anyone fixed the bug I reported, 'segfault in leastsq'? I see a
lot of progress was made to track it down - but I don't think I know
enough about the code to try to fix it.

From jstrunk at enthought.com Sat May 20 16:52:13 2006
From: jstrunk at enthought.com (Jeff Strunk)
Date: Sat, 20 May 2006 15:52:13 -0500
Subject: [SciPy-dev] Network maintenance complete
Message-ID: <200605201552.13247.jstrunk@enthought.com>

Good afternoon.

Thank you for allowing today's maintenance. I apologize for it taking
longer than expected. Everything should be in order now.

If you notice that something no longer works, please let me know and
I will take care of it as soon as possible.

Thank you for your patience,
Jeff Strunk
IT Administrator
Enthought Inc.

From nwagner at iam.uni-stuttgart.de Wed May 24 03:10:05 2006
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 24 May 2006 09:10:05 +0200
Subject: [SciPy-dev] Ticket #128
Message-ID: <447406CD.2050900@iam.uni-stuttgart.de>

Ticket #128 is already fixed in svn, isn't it?

http://projects.scipy.org/scipy/numpy/ticket/128

Nils

From fullung at gmail.com Wed May 24 03:46:48 2006
From: fullung at gmail.com (Albert Strasheim)
Date: Wed, 24 May 2006 09:46:48 +0200
Subject: [SciPy-dev] Summer of Code: pylibsvm accepted!
Message-ID: <002501c67f06$36380490$0a84a8c0@dsp.sun.ac.za>

Hello all

I just received confirmation from Google that my proposal for adding
Support Vector Machine support to SciPy has been accepted. :-)

Thanks very much to Robert Kern and Dave Kammeyer for their help so
far. Dave will also be mentoring the project (thanks also to the
other volunteers).

The original proposal is here:

http://students.ee.sun.ac.za/~albert/pylibsvm/

I've done quite a bit of work on the code since posting the proposal,
so the timeline might change slightly, but I am still planning to
have a first cut of the code in the SciPy sandbox by the end of June,
with the final code done by the middle of August.

If you're interested to see how this project progresses, check the
URL mentioned above, starting in a day or two.

Regards,

Albert

From lazycoding at gmail.com Wed May 24 04:45:13 2006
From: lazycoding at gmail.com (Wenjie He)
Date: Wed, 24 May 2006 16:45:13 +0800
Subject: [SciPy-dev] Fail in building numpy(svn).
In-Reply-To: 
References: 
Message-ID: 

I just followed the tutorial at
http://www.scipy.org/Installing_SciPy/Windows to build numpy step by
step, and everything went OK except the numpy build itself. I use
Cygwin with gcc to compile everything, with the precompiled Python
2.4 on Windows XP SP2.

I tried building from both the native Windows command line and Cygwin
with:

python.exe setup.py config --compiler=mingw32 build --compiler=mingw32 bdist_wininst

The result is the same error both times, attached below. This is my
first time building a Python module, and I don't know where to start
fixing the error. Can anyone show me the way, please?

Wenjie

-- 
I'm lazy, I'm coding.
http://my.donews.com/henotii
-------------- next part --------------
A non-text attachment was scrubbed...
Name: error.log
Type: application/octet-stream
Size: 10181 bytes
Desc: not available
URL: 

From schofield at ftw.at Wed May 24 05:27:43 2006
From: schofield at ftw.at (Ed Schofield)
Date: Wed, 24 May 2006 11:27:43 +0200
Subject: [SciPy-dev] Next SciPy release
In-Reply-To: 
References: <446DA8F6.3040502@ftw.at> <91cf711d0605190901x101eb187h2e71d8ee8db40701@mail.gmail.com>
Message-ID: <4474270F.20804@ftw.at>

Neal Becker wrote:
> Has anyone fixed the bug I reported, 'segfault in leastsq'? I see a
> lot of progress was made to track it down - but I don't think I know
> enough about the code to try to fix it.
>
Yes, I think Travis fixed this in SVN with r1898. So it looks like
we're ready to go ...

-- Ed

From st at sigmasquared.net Wed May 24 06:05:18 2006
From: st at sigmasquared.net (Stephan Tolksdorf)
Date: Wed, 24 May 2006 12:05:18 +0200
Subject: [SciPy-dev] Fail in building numpy(svn).
In-Reply-To: 
References: 
Message-ID: <44742FDE.5020709@sigmasquared.net>

Hi Wenjie,

Building Numpy in Cygwin is currently not supported, at least not
until somebody applies the patch from ticket #114. This fact was
pointed out in the very first line of
scipy.org/Installing_SciPy/Windows

However, even if you use MinGW (the compilers from mingw.org, NOT the
mingw compilers in Cygwin) or apply the patch yourself, Numpy
currently does not compile (ticket #129). A quick fix is to wait
until your build fails and then replace each backslash in the first
line of src.win32-2.4/numpy/core/src/umathmodule.c with two
backslashes ("\\").
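If you'd rather not edit the file by hand, a small Python snippet
along these lines should do it (untested; adjust the path to wherever
your build put the file):

    # One-shot fix for ticket #129: escape the backslashes in the
    # first line of the generated file. Run it only once, or the
    # backslashes get doubled again.
    path = 'src.win32-2.4/numpy/core/src/umathmodule.c'
    lines = open(path).readlines()
    lines[0] = lines[0].replace('\\', '\\\\')
    open(path, 'w').writelines(lines)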
Regards,
Stephan

From jonathan.taylor at utoronto.ca Wed May 24 11:30:23 2006
From: jonathan.taylor at utoronto.ca (Jonathan Taylor)
Date: Wed, 24 May 2006 11:30:23 -0400
Subject: [SciPy-dev] Summer of Code: pylibsvm accepted!
In-Reply-To: <002501c67f06$36380490$0a84a8c0@dsp.sun.ac.za>
References: <002501c67f06$36380490$0a84a8c0@dsp.sun.ac.za>
Message-ID: <463e11f90605240830s325a78faubb1079a977cddbd4@mail.gmail.com>

Congratulations. SVM support in Python will be very useful in
attracting people to use numpy/scipy.

J.

On 5/24/06, Albert Strasheim wrote:
> Hello all
>
> I just received confirmation from Google that my proposal for adding
> Support Vector Machine support to SciPy has been accepted. :-)
>
> Thanks very much to Robert Kern and Dave Kammeyer for their help so
> far. Dave will also be mentoring the project (thanks also to the
> other volunteers).
>
> The original proposal is here:
>
> http://students.ee.sun.ac.za/~albert/pylibsvm/
>
> I've done quite a bit of work on the code since posting the proposal,
> so the timeline might change slightly, but I am still planning to
> have a first cut of the code in the SciPy sandbox by the end of June,
> with the final code done by the middle of August.
>
> If you're interested to see how this project progresses, check the
> URL mentioned above, starting in a day or two.
>
> Regards,
>
> Albert
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-dev
>

From gruben at bigpond.net.au Sat May 27 20:55:41 2006
From: gruben at bigpond.net.au (Gary Ruben)
Date: Sun, 28 May 2006 10:55:41 +1000
Subject: [SciPy-dev] neural nets
Message-ID: <4478F50D.4020804@bigpond.net.au>

I was just looking at which Summer of Code projects got up, and saw
the Neural Nets in SciPy one by Frederic Mailhot, mentored by Robert
Kern. Congratulations.

I just wanted to draw their attention to an ANN library which they
may already know about, but which may be useful for reference
purposes:

http://fann.sourceforge.net/fann.htm

It already has Python bindings, but is LGPL.

Gary R.

From Robin.K.Friedrich at usa-spaceops.com Tue May 30 17:27:00 2006
From: Robin.K.Friedrich at usa-spaceops.com (Friedrich, Robin K)
Date: Tue, 30 May 2006 16:27:00 -0500
Subject: [SciPy-dev] ATLAS vs Lapack
Message-ID: 

Hi,

I'm having a little trouble building the scipy package on our Linux
system (RH EL 3). We have ATLAS 3.6.0 installed, and it appears that
the setup found it and used it to build. However, at runtime the
resulting lapack_lite module could not find a symbol. The symbol is
defined in blas_lite.c, but maybe the ATLAS build messed up the
lapack_lite stuff???

From the setup run:

creating build/temp.linux-i686-2.4/scipy/corelib/lapack_lite
compile options: '-DATLAS_INFO="\"3.6.0\"" -I/ips/fd/cm/cmg/rel/csw/oss/include/ATLAS -Iscipy/base/include -Ibuild/src/scipy/base -Iscipy/base/src -I/ips/fd/cm/cmg/rel/csw/oss/include/python2.4 -c'
gcc: scipy/corelib/lapack_lite/lapack_litemodule.c
gcc -pthread -shared build/temp.linux-i686-2.4/scipy/corelib/lapack_lite/lapack_litemodule.o -L/ips/fd/cm/cmg/rel/csw/oss/lib/ATLAS -llapack -lf77blas -lcblas -latlas -o build/lib.linux-i686-2.4/scipy/lib/lapack_lite.so

From the test attempt:

...lib/python2.4/site-packages/scipy $ python setup.py test
Traceback (most recent call last):
  File "setup.py", line 26, in ?
    from scipy.distutils.core import setup
  File "/ips/fd/sw/csw/oss/lib/python2.4/site-packages/scipy/__init__.py", line 35, in ?
    import scipy.basic.linalg as linalg
  File "/ips/fd/sw/csw/oss/lib/python2.4/site-packages/scipy/basic/linalg.py", line 1, in ?
    from basic_lite import *
  File "/ips/fd/sw/csw/oss/lib/python2.4/site-packages/scipy/basic/basic_lite.py", line 7, in ?
    import scipy.lib.lapack_lite as lapack_lite
ImportError: /ips/fd/sw/csw/oss/lib/python2.4/site-packages/scipy/lib/lapack_lite.so: undefined symbol: s_wsfe
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From robert.kern at gmail.com Tue May 30 17:33:16 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 30 May 2006 16:33:16 -0500
Subject: [SciPy-dev] ATLAS vs Lapack
In-Reply-To: 
References: 
Message-ID: <447CBA1C.6090801@gmail.com>

Friedrich, Robin K wrote:
> Hi,
>
> I'm having a little trouble building the scipy package on our Linux
> system (RH EL 3). We have ATLAS 3.6.0 installed, and it appears that
> the setup found it and used it to build. However, at runtime the
> resulting lapack_lite module could not find a symbol. The symbol is
> defined in blas_lite.c, but maybe the ATLAS build messed up the
> lapack_lite stuff???
>
> From the setup run:
>
> creating build/temp.linux-i686-2.4/scipy/corelib/lapack_lite
> compile options: '-DATLAS_INFO="\"3.6.0\"" -I/ips/fd/cm/cmg/rel/csw/oss/include/ATLAS -Iscipy/base/include -Ibuild/src/scipy/base -Iscipy/base/src -I/ips/fd/cm/cmg/rel/csw/oss/include/python2.4 -c'
> gcc: scipy/corelib/lapack_lite/lapack_litemodule.c
> gcc -pthread -shared build/temp.linux-i686-2.4/scipy/corelib/lapack_lite/lapack_litemodule.o -L/ips/fd/cm/cmg/rel/csw/oss/lib/ATLAS -llapack -lf77blas -lcblas -latlas -o build/lib.linux-i686-2.4/scipy/lib/lapack_lite.so
>
> From the test attempt:
>
> ...lib/python2.4/site-packages/scipy $ python setup.py test
> Traceback (most recent call last):
>   File "setup.py", line 26, in ?
>     from scipy.distutils.core import setup
>   File "/ips/fd/sw/csw/oss/lib/python2.4/site-packages/scipy/__init__.py", line 35, in ?
>     import scipy.basic.linalg as linalg
>   File "/ips/fd/sw/csw/oss/lib/python2.4/site-packages/scipy/basic/linalg.py", line 1, in ?
>     from basic_lite import *
>   File "/ips/fd/sw/csw/oss/lib/python2.4/site-packages/scipy/basic/basic_lite.py", line 7, in ?
>     import scipy.lib.lapack_lite as lapack_lite
> ImportError: /ips/fd/sw/csw/oss/lib/python2.4/site-packages/scipy/lib/lapack_lite.so: undefined symbol: s_wsfe

This part of an old FAQ is still relevant:

http://cens.ioc.ee/~pearu/scipy/INSTALL.html#using-non-gnu-fortran-compiler

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From arnd.baecker at web.de Wed May 31 16:58:27 2006
From: arnd.baecker at web.de (Arnd Baecker)
Date: Wed, 31 May 2006 22:58:27 +0200 (CEST)
Subject: [SciPy-dev] Wrapping band matrices
Message-ID: 

Hi,

I would like to contribute f2py-generated wrappers for the LAPACK
routines dgbrfs, dgbtrf, dgbtrs, dsbev, dsbevd and dsbevx, which deal
with general and symmetric band matrices. Originally they were
developed with Numeric and the old f2py.

At this point I have two questions:

a) Before, the function signature looked like:

     dsbev - Function signature:
       w,z,info = dsbev(ab,[compute_v,lower,ldab,overwrite_ab])

   but now it is:

     dsbev - Function signature:
       w,z,info = dsbev(ab,[compute_v,lower,ldab])

   I.e. `overwrite_ab` is missing. What has to be done to get this
   again?
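   My guess is that the .pyf needs an intent(copy) (or
   intent(overwrite)) attribute on ab, so that f2py generates the
   overwrite_ab flag again -- something along these lines (untested,
   and the dimension spec is from memory):

     real*8 dimension(ldab,n), intent(in,copy) :: ab

   but I am not sure this is the whole story.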
b) What would be the best approach for the integration into scipy?

The present status is at

  http://www.physik.tu-dresden.de/~baecker/python/band.zip

The routines work "stand-alone":

  python setup.py build
  python tests/test_band.py

and (after more clean-up/documentation) _might_ be a good
example/framework for others to add further LAPACK routines. In
particular I think it is easier to test new stuff separately than to
directly integrate it into scipy/Lib/linalg/generic_flapack.pyf.

Would it make sense to add this to the scipy sandbox at some point?
(I would prefer to leave the final integration into
scipy/Lib/linalg/* to a real expert...)

In any case, before this, the next steps are:

- do the sbev stuff for all routines in generic_band.pyf
- address FIXMEs in generic_band.pyf
- add unit tests for all routines (only one example is given at the
  moment)

Many thanks,

Arnd