From david at ar.media.kyoto-u.ac.jp Sat Dec 1 02:45:23 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 01 Dec 2007 16:45:23 +0900 Subject: [SciPy-dev] what the heck is an egg ;-) In-Reply-To: <1196453502.6047.400.camel@glup.physics.ucf.edu> References: <1196453502.6047.400.camel@glup.physics.ucf.edu> Message-ID: <47511113.3030906@ar.media.kyoto-u.ac.jp> Joe Harrington wrote: > A lot of the add-on software for numpy/scipy is distributed using novel > Python install processes (eggs, setup.py) setup.py and using distutils is not novel: this is the standard way to distribute python packages to be built for years (distutils was included in python in 2000, 2001 ?). It offers a common set of rules to build python packages, including compiled extensions, on a wide range of platforms. Eggs are a different matter. Think of setup.py as the configure script for python packages. > rather than tarballs or the > preferred OS-native installers (dpkg, rpm, etc.). Building binary packages is totally different from distributing source tarballs. Work on that is always welcome, but packaging is not a fun job; the rewards mostly consist of complaints when it does not work :) Several people develop rpm and deb packages (Andrew Straw has debian packages, for example, and I started developing rpms for FC and openSUSE using the openSUSE build system), but ideally, this should be included upstream in the different distributions (this is the case for debian packages at least, I think: it is in debian and ubuntu). > I'm sure they are > described, perhaps even well, in other places, but, since scipy.org is > our portal, I think it would be good to have a few-line description of > each method on the Download page and a link to more detailed > descriptions elsewhere (or on subsidiary pages). An example of > installing a package many will want, like mayavi2, would be great.
> > In particular, many sysadmins (who might be considering a user's request > for an install and know nothing about python) get nervous when package > managers other than the native one for the OS start mucking around in > the system directories, and are hesitant to use something like eggs. > Some statements describing what they do and where they put stuff would > be good (like, a guarantee that they only tread in certain > directories). > IMHO, eggs are a bad idea and a mess; I avoid them and setuptools like the plague. I don't like them, but they solve real problems for many people. If you have complaints about them, bug reports and ideas on how to improve, the right place is the distutils ML, though. But except for windows, I don't think eggs are distributed for numpy/scipy, at least not officially. Where do you see eggs on scipy.org ? > How to update and how to completely remove a package > would be good. Is there a way to have them check periodically for > updates? Of course, a statement near the top of why these methods are > used rather than the native OS installers would help a lot. Some of the things eggs are supposed to provide are:

- handle dependencies
- handle several versions

Many OS "native" packaging systems do not offer these features (handling dependencies is not really in the culture of Mac OS X and windows, for example; I don't know any linux packaging system which enables several versions to be installed at the same time; at least deb and rpm do not make that possible, and this covers maybe 90% of linux users). The Mac OS X packaging system does not support uninstalling either (at least officially); I don't know if eggs are uninstallable, though. I think the only real solution is to push for binary packages to be included upstream for linux distributions, and some people are working on that now. For other platforms, the one thing really missing is maybe binary packages for mac os X, but there is discussion now to solve this.
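[Editor's note: David's description of setup.py as the "configure script" for python packages can be sketched with a minimal, hypothetical setup.py. The package and module names below are invented for illustration; they are not from this thread.]

```python
# A minimal, hypothetical setup.py of the kind discussed here.
# With this one file, the same commands work on any platform
# where Python runs, including for packages with C extensions.
from distutils.core import setup

setup(
    name="examplepkg",             # hypothetical package name
    version="0.1",
    description="Example package built with distutils",
    py_modules=["examplemodule"],  # pure-python module to install
)
```

With such a file in place, `python setup.py build` followed by `python setup.py install` plays roughly the role that `./configure && make && make install` plays for a C project.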
Again, this is a manpower problem: producing good binary packages is hard and takes time, is not so funny, and most people able to do it do not need them (installing from source being easier). David From jh at physics.ucf.edu Sat Dec 1 03:37:36 2007 From: jh at physics.ucf.edu (Joe Harrington) Date: Sat, 01 Dec 2007 03:37:36 -0500 Subject: [SciPy-dev] what the heck is an egg ;-) Message-ID: <1196498256.6047.590.camel@glup.physics.ucf.edu> > setup.py and using distutils is not novel: this is the standard way to > distribute python packages to be built for years (distutils was included > in python in 2000, 2001 ?). Well, it's a matter of perspective, and the perspective I take is that of a new user to Python, or even a non-Python-using system manager who is installing numpy and friends for others to use. To them, *anything* that is not a tar.gz, RPM, or Deb is novel, and most would not dare to use even an un-novel tar.gz in their OS directories. Then we say, here, execute this setup.py as root, it's code for an interpreted language and you have no idea what it will do. Well, that's pretty terrifying, especially to the security-conscious. I know almost nothing about eggs. I see them being used for all the Enthought code, which provides the de facto standard 3D environment, mayavi2. What's a numerical package without a 3D environment? While that's not on scipy.org, it's darn close, and necessary for an environment that competes with IDL or Matlab. I agree that the correct path is to push everything into binary installs, even the experimental stuff. I love the OS installers, and I thank the packagers from the bottom of my heart! If only there were more of them, and if only they could handle more of these packages. The OS installers may not deal with multiple package version on Linux, but I have never wanted more than one version. Someone who does is probably a developer and can handle the tar installs, eggs, or whatever, and direct Python to find the results. 
I believe that we would double our community's size if all our major packages were available in binary installs for all major platforms. But, a plethora of packagers is not our situation. It would help the inexperienced (including the aforementioned system manager, who will never be experienced in Python) to have some plain talk about what these Python-specific installers do, and how to use them to install, uninstall, and automatically update. It can probably be knocked off in about a page per installer, but it has to be done by someone who knows them well. --jh-- From david at ar.media.kyoto-u.ac.jp Sat Dec 1 03:54:37 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sat, 01 Dec 2007 17:54:37 +0900 Subject: [SciPy-dev] what the heck is an egg ;-) In-Reply-To: <1196498256.6047.590.camel@glup.physics.ucf.edu> References: <1196498256.6047.590.camel@glup.physics.ucf.edu> Message-ID: <4751214D.9050904@ar.media.kyoto-u.ac.jp> Joe Harrington wrote: >> setup.py and using distutils is not novel: this is the standard way to >> distribute python packages to be built for years (distutils was included >> in python in 2000, 2001 ?). >> > > Well, it's a matter of perspective, and the perspective I take is that > of a new user to Python, or even a non-Python-using system manager who > is installing numpy and friends for others to use. To them, *anything* > that is not a tar.gz, RPM, or Deb is novel, and most would not dare to > use even an un-novel tar.gz in their OS directories. Then we say, here, > execute this setup.py as root, it's code for an interpreted language and > you have no idea what it will do. Well, that's pretty terrifying, > especially to the security-conscious. > If you have any strong concerns about security, you don't use random source tarballs downloaded from the internet. You can also install as a non-root user (I do this all the time, I don't have root access on my workstation at my lab, for example).
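[Editor's note: the non-root install David mentions can be sketched like this. The ~/usr prefix and directory layout are arbitrary examples, not prescribed anywhere in the thread.]

```shell
# Create a hypothetical per-user prefix; nothing here touches system directories.
mkdir -p "$HOME/usr/lib/python" "$HOME/usr/bin"

# Tell Python and the shell to look there (add to ~/.bash_profile to persist).
export PYTHONPATH="$HOME/usr/lib/python:$PYTHONPATH"
export PATH="$HOME/usr/bin:$PATH"

# Then, from an unpacked source tarball, install without root, e.g.:
#   python setup.py install --prefix="$HOME/usr"
```

The install command itself is left as a comment since it needs a real unpacked source tree to run.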
I don't see how this is different from any other source package (using autoconf, for example). If you have a better idea, please speak up, but you will have a hard time selling something that works as well as distutils, on as many platforms, and which is less novel. As an aside, I would say that someone who is not capable of installing a basic python package after 10 minutes hardly deserves to be called an administrator: there is a README, you can find the info in one minute on google. Now, numpy + scipy is certainly more difficult to install, because of dependencies and so on, but then this is no different from pure C tarballs. > I know almost nothing about eggs. I see them being used for all the > Enthought code, which provides the de facto standard 3D environment, > mayavi2. What's a numerical package without a 3D environment? I don't know, I have been using numpy and scipy for more than a year without even knowing what mayavi is. I have used matlab for years without 3d either. So this is a matter of perspective. > > I agree that the correct path is to push everything into binary > installs, even the experimental stuff. I love the OS installers, and I > thank the packagers from the bottom of my heart! If only there were > more of them, and if only they could handle more of these packages. The > OS installers may not deal with multiple package version on Linux, but I > have never wanted more than one version. Maybe not you, but for some use cases, it seems to be extremely useful. Again, I really don't like eggs. But some people like them, for this exact reason. > Someone who does is probably a > developer and can handle the tar installs, eggs, or whatever, and direct > Python to find the results. I believe that we would double our > community's size if all our major packages were available in binary > installs for all major platforms. > Sure. So do you offer some man-hours to work on that ?
:) I agree 100% with you, binary packages are important, and could be improved for numpy/scipy, but this is a lot of boring work (in particular, one problem is the dependencies on linux, which means even more boring work because it involves other packages; although, even once you have a buildable rpm/debian, having it integrated upstream is a lot of bureaucratic work). > But, a plethora of packagers is not our situation. It would help the > inexperienced (including the aforementioned system manager, who will > never be experienced in Python) to have some plain talk about what these > Python-specific installers do, and how to use them to install, > uninstall, and automatically update. It can probably be knocked off in > about a page per installer, but it has to be done by someone who knows > them well. > You mean http://www.scipy.org/Installing_SciPy ? I agree the page is getting a bit messy, particularly for linux, but you can add/modify anything you find appropriate, this is a wiki, after all. AFAIK, you cannot uninstall python packages (most of the data are in one directory you can simply remove, though), except when using binary installers. If you care about easy installation, though, why not just use numpy/scipy packaged for your distribution (I understand you are using linux) ?
It is officially available for ubuntu and debian, and binary packages are available for FC 6 and 7, as well as for open SUSE: http://www.scipy.org/Installing_SciPy/Linux (ashigabou repository) David From prabhu at aero.iitb.ac.in Sat Dec 1 12:37:25 2007 From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran) Date: Sat, 01 Dec 2007 23:07:25 +0530 Subject: [SciPy-dev] what the heck is an egg ;-) In-Reply-To: <1196498256.6047.590.camel@glup.physics.ucf.edu> References: <1196498256.6047.590.camel@glup.physics.ucf.edu> Message-ID: <47519BD5.50908@aero.iitb.ac.in> Joe Harrington wrote: >> setup.py and using distutils is not novel: this is the standard way to >> distribute python packages to be built for years (distutils was included >> in python in 2000, 2001 ?). > > Well, it's a matter of perspective, and the perspective I take is that > of a new user to Python, or even a non-Python-using system manager who > is installing numpy and friends for others to use. To them, *anything* > that is not a tar.gz, RPM, or Deb is novel, and most would not dare to > use even an un-novel tar.gz in their OS directories. Then we say, here, > execute this setup.py as root, it's code for an interpreted language and > you have no idea what it will do. Well, that's pretty terrifying, > especially to the security-conscious. [...] > But, a plethora of packagers is not our situation. It would help the > inexperienced (including the aforementioned system manager, who will > never be experienced in Python) to have some plain talk about what these > Python-specific installers do, and how to use them to install, > uninstall, and automatically update. It can probably be knocked off in > about a page per installer, but it has to be done by someone who knows > them well. Here is what I do, maybe it will help. I'll skip the blah about eggs and keep this focussed. 
The nice thing about easy_install is that it will find packages for you, deal with dependencies, optionally build them if needed, and install them for you. How best you use it depends on what you want to do. I'll cover the case for an ubuntu gutsy user who does not want a system wide install but wants to get a Python package installed. Let's say you want to install nosetests on ubuntu gutsy and don't want the one shipped, or let's imagine said package isn't shipped. Here is how I have things set up for *myself*. Tell easy_install where you want stuff installed:

$ cat ~/.pydistutils.cfg
[install]
install_lib = /home/prabhu/usr/lib/python
install_scripts = /home/prabhu/usr/bin

I have ~/usr where I put all packages I need that don't get installed on the system. If you use this you may want to do this:

$ mkdir -p ~/usr/bin
$ mkdir -p ~/usr/lib/python

Next, tell Python about the directory:

$ export PYTHONPATH=$PYTHONPATH:~/usr/lib/python
$ export PATH=~/usr/bin:$PATH # Add this to your .bash_profile or equivalent.

You need to have easy_install installed. So you can install setuptools via apt using:

# apt-get install python-setuptools

Now if you have a direct connection to the internet you are set:

$ easy_install nose

This should check PyPI, grab nose, and install it along with any dependencies. If you are behind a proxy do this:

$ export http_proxy=http://my.proxy.edu:80

It should work. Let's say you want to install enthought.traits from the ETS-2.6.0b1. Here you need to tell easy_install where to get the stuff from. Binary eggs are available for ubuntu gutsy so this is really easy to do, like so:

easy_install -f http://code.enthought.com/enstaller/eggs/ubuntu/gusty/ enthought.traits

That's it. For mayavi2 things aren't so easy since we don't package VTK as eggs (yet); enthought has something to do this but it isn't baked enough yet AFAIK. So you need to install the dependencies from the ubuntu universe repo.
I think this will do:

# apt-get install python-wxgtk2.6 python-numpy python-vtk

Now you are all set to try mayavi2:

easy_install -f http://code.enthought.com/enstaller/eggs/ubuntu/gusty/ enthought.mayavi

That's it! Now, let's say you want to remove the packages. Do this:

easy_install -m enthought.mayavi

This doesn't delete anything but "disables" the egg. It does not purge all its dependencies. But removing them isn't a big deal, since all you really need to do is:

cd ~/usr/lib/python
rm -rf enthought*
edit easy-install.pth # Remove all the packages listed in the text file.

Or easier:

cd ~/usr/lib/python
easy_install -m enthought.*
rm -rf enthought.*

That will remove all the 15-odd enthought eggs. If you want a system wide install on Debian, it's easy. Let's say you want nosetests:

easy_install --prefix=/usr/local nose

Likewise for the others. I think this should get you started. It reflects the way I use it, but it seems to work OK for me. Remember that this is easiest when binary eggs are available for your platform. For pure python packages it is unlikely there will be problems. For more complex stuff like scipy/tvtk/kiva etc. it isn't always as simple. Not everything is perfect yet, but it is quite nice and convenient when it does work. HTH. cheers, prabhu From eads at soe.ucsc.edu Sat Dec 1 16:20:36 2007 From: eads at soe.ucsc.edu (Damian Eads) Date: Sat, 01 Dec 2007 14:20:36 -0700 Subject: [SciPy-dev] what the heck is an egg ;-) In-Reply-To: <1196498256.6047.590.camel@glup.physics.ucf.edu> References: <1196498256.6047.590.camel@glup.physics.ucf.edu> Message-ID: <4751D024.4080005@soe.ucsc.edu> Joe Harrington wrote: >> setup.py and using distutils is not novel: this is the standard way to >> distribute python packages to be built for years (distutils was included >> in python in 2000, 2001 ?).
> > Well, it's a matter of perspective, and the perspective I take is that > of a new user to Python, or even a non-Python-using system manager who > is installing numpy and friends for others to use. To them, *anything* > that is not a tar.gz, RPM, or Deb is novel, and most would not dare to > use even an un-novel tar.gz in their OS directories. Then we say, here, > execute this setup.py as root, it's code for an interpreted language and > you have no idea what it will do. Well, that's pretty terrifying, > especially to the security-conscious. It takes enough time to write a package, write the build scripts, document everything nicely, polish the documentation, write tests to cover an adequate number of cases, test the code, and maintain the code while not breaking the tests. Meanwhile, many of us need to get some science done in the middle of all of it. The fact that some people haven't gotten around to debbing or RPM'ing their packages is understandable. It's all a matter of whether the author has time or interest in doing packaging. If someone is reluctant to use a package because the author has yet to get it packaged up, then how is it the problem of the author or community? This does not mean we don't want to have everything nicely packaged up in RPM or dpkg format. > I know almost nothing about eggs. I see them being used for all the > Enthought code, which provides the de facto standard 3D environment, > mayavi2. What's a numerical package without a 3D environment? While > that's not on scipy.org, it's darn close, and necessary for an > environment that competes with IDL or Matlab. I've been using various numerical environments for years and have never had a need for a 3D environment. There are many people who need one but certainly not everyone. I'm not sure what you mean by competition because it's hard to compete with something that's free and easy-to-use, especially when it does what you want. 
Granted there are algorithms or tools in IDL and MATLAB that aren't in Scipy (the same could be said the other way around). The difference is that you don't pay for Scipy. You could certainly offer to pay someone in the community to write the thing you need, or you could do it yourself. I made the choice to avoid proprietary software in the development of my scientific code base so that I could reduce the costs of the science I do and make it more enjoyable. I'm no longer locked into expensive software upgrades and I can now spread my simulation across many machines without having to deal with license issues. I wrote on the order of about 15,000 lines of MATLAB code in my lifetime. It took a big investment of time to rewrite a lot of the most useful code in Python, and throw the rest in the trash can. For me, MATLAB made a lot of things harder. It made it harder to: collaborate with other people (some people outright refused to write code in MATLAB so they could interface with my code), run code at home due to license issues, distribute code to others in some sensible way, and run code on many machines. MATLAB also lacks decent object-orientation, which Python handles very nicely. The ability to pass by reference in Python has made it much easier to write memory-intensive and highly manipulative code. I realized from the start that I might need something that was unavailable in the Scipy framework, but I chose to take the risk so that I had the freedom to not deal with so many problems that make software development, simulation runs, and science not enjoyable. > I agree that the correct path is to push everything into binary > installs, even the experimental stuff. I love the OS installers, and I > thank the packagers from the bottom of my heart! If only there were > more of them, and if only they could handle more of these packages. The > OS installers may not deal with multiple package version on Linux, but I > have never wanted more than one version. 
Someone who does is probably a > developer and can handle the tar installs, eggs, or whatever, and direct > Python to find the results. I believe that we would double our > community's size if all our major packages were available in binary > installs for all major platforms. > > But, a plethora of packagers is not our situation. It would help the > inexperienced (including the aforementioned system manager, who will > never be experienced in Python) to have some plain talk about what these > Python-specific installers do, and how to use them to install, > uninstall, and automatically update. It can probably be knocked off in > about a page per installer, but it has to be done by someone who knows > them well. Even if we did package RPMs for all the add-ons, that should not leave a sysadmin complacent. Just because a file is an RPM does not make it safe to install as root. The real issues are whether the file came from a trusted source, whether it was generated by trusted people, and whether this can be verified. However, these are separate issues from the one you've raised--that because we use what is perceived by some, at least, as an "ad hoc" build process, this will cause sysadmins to question the security of the packages. I see that as a problem of ignorance on the part of the sysadmin. Would the sysadmin be less suspicious if the more universal autoconf and automake were used? Would it be worth the effort to use something more standard, even if it took much more time to set up and maintain, just to assuage the sysadmin? Most languages have their own build tool, and some people choose to use them over a generic build tool because it is too frustrating, hassle-some, and time-consuming to get the generic one working right. When I was first exposed to python and saw this setup.py, I thought to myself what the heck is this non-standard thing and why should I learn how it works? Then one day, I was at a Border's cafe, and read about distutils in David Beazley's book.
He made it seem so easy that I just had to try it when I got home, and within 30 minutes, I had several internal packages of mine building with Python's distutils. I was sold because it handles Python's idiosyncrasies in a much more fail-proof way than I could ever achieve with my makefiles. I could build source distributions, RPM spec files, windows installers, etc. with a few keystrokes. Mind you it is much more involved to ensure, with some confidence, that an RPM will universally work on any machine with the same platform on which it was generated but distutils at least simplifies the build process. Your point is well taken that there should be a one-liner somewhere easy-to-find that Python's distutils and setuptools are used for building various Scipy packages. But distutils is so standard that it comes built into Python, which is installed by default in most Linux distributions. About the paranoia of some sysadmins out there in internet land when non-standard build tools are used, I don't know what to do about that. But I hope many of them aren't thinking that just because a file is an RPM, it is safe, or more safe than building the same package from source with either a standard or non-standard build tool. Damian From ondrej at certik.cz Sat Dec 1 16:40:46 2007 From: ondrej at certik.cz (Ondrej Certik) Date: Sat, 1 Dec 2007 22:40:46 +0100 Subject: [SciPy-dev] scipy 0.6.0 tests kill python interpreter Message-ID: <85b5c3130712011340t3cbde164r12f316ffe3993ff8@mail.gmail.com> Hi, if I install scipy 0.6.0 on Debian and run tests, it kills the interpreter with Illegal instruction. More details here: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=452991 Is anyone able to reproduce it on their system, or is it just related to Debian? Could you please send me the correct run of tests, so that I can discover where exactly it differs? 
Thanks a lot, Ondrej From robert.kern at gmail.com Sat Dec 1 16:44:49 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 01 Dec 2007 15:44:49 -0600 Subject: [SciPy-dev] scipy 0.6.0 tests kill python interpreter In-Reply-To: <85b5c3130712011340t3cbde164r12f316ffe3993ff8@mail.gmail.com> References: <85b5c3130712011340t3cbde164r12f316ffe3993ff8@mail.gmail.com> Message-ID: <4751D5D1.1070904@gmail.com> Ondrej Certik wrote: > Hi, > > if I install scipy 0.6.0 on Debian and run tests, it kills the > interpreter with Illegal instruction. > > More details here: > > http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=452991 > > Is anyone able to reproduce it on their system, or is it just related > to Debian? Could you please send me > the correct run of tests, so that I can discover where exactly it differs? Try scipy.test(verbosity=1). This will print the name of the test before it runs the test. Also, try running Python under gdb so we can get a backtrace. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ondrej at certik.cz Sat Dec 1 16:54:37 2007 From: ondrej at certik.cz (Ondrej Certik) Date: Sat, 1 Dec 2007 22:54:37 +0100 Subject: [SciPy-dev] scipy 0.6.0 tests kill python interpreter In-Reply-To: <4751D5D1.1070904@gmail.com> References: <85b5c3130712011340t3cbde164r12f316ffe3993ff8@mail.gmail.com> <4751D5D1.1070904@gmail.com> Message-ID: <85b5c3130712011354u1361396av66a995ea5b188b3@mail.gmail.com> On Dec 1, 2007 10:44 PM, Robert Kern wrote: > > Ondrej Certik wrote: > > Hi, > > > > if I install scipy 0.6.0 on Debian and run tests, it kills the > > interpreter with Illegal instruction. > > > > More details here: > > > > http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=452991 > > > > Is anyone able to reproduce it on their system, or is it just related > > to Debian? 
Could you please send me > > the correct run of tests, so that I can discover where exactly it differs? > > Try scipy.test(verbosity=1). This will print the name of the test before it runs > the test. Also, try running Python under gdb so we can get a backtrace. It doesn't seem to help much: >>> import scipy s>>> scipy.test(verbosity=1) Found 9 tests for scipy.cluster.vq Found 18 tests for scipy.fftpack.basic Found 4 tests for scipy.fftpack.helper Found 20 tests for scipy.fftpack.pseudo_diffs Found 1 tests for scipy.integrate Found 10 tests for scipy.integrate.quadpack Found 3 tests for scipy.integrate.quadrature Found 6 tests for scipy.interpolate Found 6 tests for scipy.interpolate.fitpack Found 4 tests for scipy.io.array_import Found 28 tests for scipy.io.mio Found 13 tests for scipy.io.mmio Found 5 tests for scipy.io.npfile Found 4 tests for scipy.io.recaster Found 16 tests for scipy.lib.blas Found 128 tests for scipy.lib.blas.fblas **************************************************************** WARNING: clapack module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses flapack instead of clapack. 
**************************************************************** Found 42 tests for scipy.lib.lapack Found 41 tests for scipy.linalg.basic Found 16 tests for scipy.linalg.blas Found 72 tests for scipy.linalg.decomp Found 128 tests for scipy.linalg.fblas Found 6 tests for scipy.linalg.iterative Found 4 tests for scipy.linalg.lapack Found 7 tests for scipy.linalg.matfuncs Found 9 tests for scipy.linsolve.umfpack Found 2 tests for scipy.maxentropy Found 3 tests for scipy.misc.pilutil Found 399 tests for scipy.ndimage Found 5 tests for scipy.odr Found 8 tests for scipy.optimize Found 1 tests for scipy.optimize.cobyla Found 10 tests for scipy.optimize.nonlin Found 4 tests for scipy.optimize.zeros Found 5 tests for scipy.signal.signaltools Found 4 tests for scipy.signal.wavelets Found 152 tests for scipy.sparse Found 342 tests for scipy.special.basic Found 3 tests for scipy.special.spfun_stats Found 107 tests for scipy.stats Found 73 tests for scipy.stats.distributions Found 10 tests for scipy.stats.morestats Found 0 tests for __main__ .../usr/lib/python2.4/site-packages/scipy/cluster/vq.py:477: UserWarning: One of the clusters is empty. Re-run kmean with a different initialization. warnings.warn("One of the clusters is empty. " exception raised as expected: One of the clusters is empty. Re-run kmean with a different initialization. ................................................Residual: 1.05006950608e-07 ..................../usr/lib/python2.4/site-packages/scipy/interpolate/fitpack2.py:458: UserWarning: The coefficients of the spline returned have been computed as the minimal norm least-squares solution of a (numerically) rank deficient system (deficiency=7). If deficiency is large, the results may be inaccurate. Deficiency may strongly depend on the value of eps. warnings.warn(message) ...... Don't worry about a warning regarding the number of bytes read. Warning: 1000000 bytes requested, 20 bytes read. 
.........................................................................caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 ..................................................................................... **************************************************************** WARNING: cblas module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses fblas instead of cblas. **************************************************************** ...........................................................................................caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 .......... **************************************************************** WARNING: clapack module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses flapack instead of clapack. **************************************************************** ...Result may be inaccurate, approximate err = 2.90112626413e-09 ...Result may be inaccurate, approximate err = 7.27595761418e-12 ......Use minimum degree ordering on A'+A. ..Use minimum degree ordering on A'+A. ...Use minimum degree ordering on A'+A. 
............................................................................................................./usr/lib/python2.4/site-packages/scipy/ndimage/interpolation.py:41: UserWarning: Mode "reflect" may yield incorrect results on boundaries. Please use "mirror" instead. warnings.warn('Mode "reflect" may yield incorrect results on ' .............................................................................................Illegal instruction But when running through gdb, it says: warnings.warn('Mode "reflect" may yield incorrect results on ' ............................................................................................. Program received signal SIGILL, Illegal instruction. [Switching to Thread 0xb7e278c0 (LWP 28932)] 0xb6b83d43 in ?? () from /usr/lib/python2.4/site-packages/scipy/ndimage/_nd_image.so (gdb) bt #0 0xb6b83d43 in ?? () from /usr/lib/python2.4/site-packages/scipy/ndimage/_nd_image.so #1 0xbf99bdd8 in ?? () #2 0xb6b87211 in NI_GenericFilter () from /usr/lib/python2.4/site-packages/scipy/ndimage/_nd_image.so Backtrace stopped: frame did not save the PC (gdb) I don't have time to dig into this and fix it myself, but if you tell me what else to try, I will. Thanks a lot, Ondrej From robert.kern at gmail.com Sat Dec 1 17:06:07 2007 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 01 Dec 2007 16:06:07 -0600 Subject: [SciPy-dev] scipy 0.6.0 tests kill python interpreter In-Reply-To: <85b5c3130712011354u1361396av66a995ea5b188b3@mail.gmail.com> References: <85b5c3130712011340t3cbde164r12f316ffe3993ff8@mail.gmail.com> <4751D5D1.1070904@gmail.com> <85b5c3130712011354u1361396av66a995ea5b188b3@mail.gmail.com> Message-ID: <4751DACF.4090505@gmail.com> Ondrej Certik wrote: > On Dec 1, 2007 10:44 PM, Robert Kern wrote: >> Ondrej Certik wrote: >>> Hi, >>> >>> if I install scipy 0.6.0 on Debian and run tests, it kills the >>> interpreter with Illegal instruction. 
>>> >>> More details here: >>> >>> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=452991 >>> >>> Is anyone able to reproduce it on their system, or is it just related >>> to Debian? Could you please send me >>> the correct run of tests, so that I can discover where exactly it differs? >> Try scipy.test(verbosity=1). This will print the name of the test before it runs >> the test. Also, try running Python under gdb so we can get a backtrace. > > It doesn't seem to help much: > >>>> import scipy > s>>> scipy.test(verbosity=1) My apologies. scipy.test(verbosity=2) > But when running through gdb, it says: > > warnings.warn('Mode "reflect" may yield incorrect results on ' > ............................................................................................. > Program received signal SIGILL, Illegal instruction. > [Switching to Thread 0xb7e278c0 (LWP 28932)] > 0xb6b83d43 in ?? () > from /usr/lib/python2.4/site-packages/scipy/ndimage/_nd_image.so > (gdb) bt > #0 0xb6b83d43 in ?? () > from /usr/lib/python2.4/site-packages/scipy/ndimage/_nd_image.so > #1 0xbf99bdd8 in ?? () > #2 0xb6b87211 in NI_GenericFilter () > from /usr/lib/python2.4/site-packages/scipy/ndimage/_nd_image.so > Backtrace stopped: frame did not save the PC > (gdb) > > > I don't have time to dig into this and fix it myself, but if you tell > me what else to try, I will. Building scipy with debug symbols might help make the gdb backtrace more helpful, but let's wait for the results of verbosity=2. python setup.py build_ext -g build -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From ondrej at certik.cz Sat Dec 1 18:00:11 2007 From: ondrej at certik.cz (Ondrej Certik) Date: Sun, 2 Dec 2007 00:00:11 +0100 Subject: [SciPy-dev] scipy 0.6.0 tests kill python interpreter In-Reply-To: <4751DACF.4090505@gmail.com> References: <85b5c3130712011340t3cbde164r12f316ffe3993ff8@mail.gmail.com> <4751D5D1.1070904@gmail.com> <85b5c3130712011354u1361396av66a995ea5b188b3@mail.gmail.com> <4751DACF.4090505@gmail.com> Message-ID: <85b5c3130712011500w3fa64fcv90c6cb3defdc75eb@mail.gmail.com> On Dec 1, 2007 11:06 PM, Robert Kern wrote: > Ondrej Certik wrote: > > On Dec 1, 2007 10:44 PM, Robert Kern wrote: > >> Ondrej Certik wrote: > >>> Hi, > >>> > >>> if I install scipy 0.6.0 on Debian and run tests, it kills the > >>> interpreter with Illegal instruction. > >>> > >>> More details here: > >>> > >>> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=452991 > >>> > >>> Is anyone able to reproduce it on their system, or is it just related > >>> to Debian? Could you please send me > >>> the correct run of tests, so that I can discover where exactly it differs? > >> Try scipy.test(verbosity=1). This will print the name of the test before it runs > >> the test. Also, try running Python under gdb so we can get a backtrace. > > > > It doesn't seem to help much: > > > >>>> import scipy > > s>>> scipy.test(verbosity=1) > > My apologies. > > scipy.test(verbosity=2) boundary modes/usr/lib/python2.4/site-packages/scipy/ndimage/interpolation.py:41: UserWarning: Mode "reflect" may yield incorrect results on boundaries. Please use "mirror" instead. warnings.warn('Mode "reflect" may yield incorrect results on ' ... ok center of mass 1 ... ok center of mass 2 ... ok center of mass 3 ... ok center of mass 4 ... ok center of mass 5 ... ok center of mass 6 ... ok center of mass 7 ... ok center of mass 8 ... ok center of mass 9 ... ok correlation 1 ... ok correlation 2 ... ok correlation 3 ... ok correlation 4 ... ok correlation 5 ... ok correlation 6 ... 
ok correlation 7 ... ok correlation 8 ... ok correlation 9 ... ok correlation 10 ... ok correlation 11 ... ok correlation 12 ... ok correlation 13 ... ok correlation 14 ... ok correlation 15 ... ok correlation 16 ... ok correlation 17 ... ok correlation 18 ... ok correlation 19 ... ok correlation 20 ... ok correlation 21 ... ok correlation 22 ... ok correlation 23 ... ok correlation 24 ... ok correlation 25 ... ok brute force distance transform 1 ... ok brute force distance transform 2 ... ok brute force distance transform 3 ... ok brute force distance transform 4 ... ok brute force distance transform 5 ... ok brute force distance transform 6 ... ok chamfer type distance transform 1 ... ok chamfer type distance transform 2 ... ok chamfer type distance transform 3 ... ok euclidean distance transform 1 ... ok euclidean distance transform 2 ... ok euclidean distance transform 3 ... ok euclidean distance transform 4 ... ok line extension 1 ... ok line extension 2 ... ok line extension 3 ... ok line extension 4 ... ok line extension 5 ... ok line extension 6 ... ok line extension 7 ... ok line extension 8 ... ok line extension 9 ... ok line extension 10 ... ok extrema 1 ... ok extrema 2 ... ok extrema 3 ... ok extrema 4 ... ok find_objects 1 ... ok find_objects 2 ... ok find_objects 3 ... ok find_objects 4 ... ok find_objects 5 ... ok find_objects 6 ... ok find_objects 7 ... ok find_objects 8 ... ok find_objects 9 ... ok ellipsoid fourier filter for complex transforms 1 ... ok ellipsoid fourier filter for real transforms 1 ... ok gaussian fourier filter for complex transforms 1 ... ok gaussian fourier filter for real transforms 1 ... ok shift filter for complex transforms 1 ... ok shift filter for real transforms 1 ... ok uniform fourier filter for complex transforms 1 ... ok uniform fourier filter for real transforms 1 ... ok gaussian filter 1 ... ok gaussian filter 2 ... ok gaussian filter 3 ... ok gaussian filter 4 ... ok gaussian filter 5 ... 
ok gaussian filter 6 ... ok gaussian gradient magnitude filter 1 ... ok gaussian gradient magnitude filter 2 ... ok gaussian laplace filter 1 ... ok gaussian laplace filter 2 ... ok generation of a binary structure 1 ... ok generation of a binary structure 2 ... ok generation of a binary structure 3 ... ok generation of a binary structure 4 ... ok generic filter 1Illegal instruction > > > But when running through gdb, it says: > > > > warnings.warn('Mode "reflect" may yield incorrect results on ' > > ............................................................................................. > > Program received signal SIGILL, Illegal instruction. > > [Switching to Thread 0xb7e278c0 (LWP 28932)] > > 0xb6b83d43 in ?? () > > from /usr/lib/python2.4/site-packages/scipy/ndimage/_nd_image.so > > (gdb) bt > > #0 0xb6b83d43 in ?? () > > from /usr/lib/python2.4/site-packages/scipy/ndimage/_nd_image.so > > #1 0xbf99bdd8 in ?? () > > #2 0xb6b87211 in NI_GenericFilter () > > from /usr/lib/python2.4/site-packages/scipy/ndimage/_nd_image.so > > Backtrace stopped: frame did not save the PC > > (gdb) > > > > > > I don't have time to dig into this and fix it myself, but if you tell > > me what else to try, I will. > > Building scipy with debug symbols might help make the gdb backtrace more > helpful, but let's wait for the results of verbosity=2. > > python setup.py build_ext -g build Maybe you already know where the problem is. If not, I'll try to build scipy with debug symbols and try it again. Ondrej From david at ar.media.kyoto-u.ac.jp Sun Dec 2 05:09:57 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Sun, 02 Dec 2007 19:09:57 +0900 Subject: [SciPy-dev] How to regenerate swig generated files ? Message-ID: <47528475.6010701@ar.media.kyoto-u.ac.jp> Hi, I encounter a relatively trivial build bug in sparsetools (ticket 549), which requires regenerating the sources by swig. I would like to know which version should be used, and which options ? 
It would be good to have those options noted somewhere, I think. As an aside, it would be good also if the source file could be split into several compilation units: a 3 Mb C++ source file is extremely heavy (compiling it with g++ and -O2 takes up to 400 Mb of ram, this is not pleasant when compiling scipy on virtual machines). cheers, David From wnbell at gmail.com Sun Dec 2 09:23:30 2007 From: wnbell at gmail.com (Nathan Bell) Date: Sun, 2 Dec 2007 08:23:30 -0600 Subject: [SciPy-dev] How to regenerate swig generated files ? In-Reply-To: <47528475.6010701@ar.media.kyoto-u.ac.jp> References: <47528475.6010701@ar.media.kyoto-u.ac.jp> Message-ID: On Dec 2, 2007 4:09 AM, David Cournapeau wrote: > Hi, > > I encounter a relatively trivial build bug in sparsetools (ticket > 549), which requires regenerating the sources by swig. I would like to > know which version should be used, and which options ? It would be good > to have those options noted somewhere, I think. Added http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/sparse/sparsetools/README.txt?rev=3605 > As an aside, it would be > good also if the source file could be split into several compilation > units: a 3 Mb C++ source file is extremely heavy (compiling it with g++ > and -O2 takes up to 400 Mb of ram, this is not pleasant when compiling > scipy on virtual machines). Agreed. If I can't figure out a nice way to do this soon, I'll probably drop support for 64bit index types. This will cut the compile time in half and it is only necessary for matrices with >= 2^31 rows or columns.
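The 2**31 threshold has a direct memory implication: with 64-bit indices, the indptr array of a CSR matrix with that many rows alone occupies about 16 GiB before any values are stored. A quick back-of-envelope check (my own illustration, not code from the thread):

```python
# With 64-bit indices, a CSR matrix with 2**31 rows needs an indptr array
# of 2**31 + 1 entries -- roughly 16 GiB before a single nonzero is stored.
n_rows = 2 ** 31
indptr_bytes = (n_rows + 1) * 8      # 8 bytes per int64 entry
gib = indptr_bytes / 2 ** 30
print(round(gib))                    # 16
```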
Since that would require at least 16GB of memory I think we can do without for now :) -- Nathan Bell wnbell at gmail.com From ondrej at certik.cz Mon Dec 3 10:12:33 2007 From: ondrej at certik.cz (Ondrej Certik) Date: Mon, 3 Dec 2007 16:12:33 +0100 Subject: [SciPy-dev] scipy 0.6.0 tests kill python interpreter In-Reply-To: <85b5c3130712011500w3fa64fcv90c6cb3defdc75eb@mail.gmail.com> References: <85b5c3130712011340t3cbde164r12f316ffe3993ff8@mail.gmail.com> <4751D5D1.1070904@gmail.com> <85b5c3130712011354u1361396av66a995ea5b188b3@mail.gmail.com> <4751DACF.4090505@gmail.com> <85b5c3130712011500w3fa64fcv90c6cb3defdc75eb@mail.gmail.com> Message-ID: <85b5c3130712030712u6057fedy178bd217a2e05e1b@mail.gmail.com> > Maybe you already know where the problem is. If not, I'll try to build > scipy with debug symbols and try it again. Paul Metcalfe has sent me this patch: http://projects.scipy.org/scipy/scipy/changeset/3450 that fixes the problem. A new package will be uploaded to Debian soon. Thanks, Ondrej From david at ar.media.kyoto-u.ac.jp Tue Dec 4 00:19:51 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 04 Dec 2007 14:19:51 +0900 Subject: [SciPy-dev] How to regenerate swig generated files ? In-Reply-To: References: <47528475.6010701@ar.media.kyoto-u.ac.jp> Message-ID: <4754E377.5060901@ar.media.kyoto-u.ac.jp> Nathan Bell wrote: > On Dec 2, 2007 4:09 AM, David Cournapeau wrote: > >> Hi, >> >> I encounter a relatively trivial build bug in sparsetools (ticket >> 549), which requires regenerating the sources by swig. I would like to >> know which version should be used, and which options ? It would be good >> to have those options noted somewhere, I think. >> > > Added > http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/sparse/sparsetools/README.txt?rev=3605 > Great, thanks. I just found out that I could do the change without touching the generated file, but still, this info is useful.
cheers, David From nwagner at iam.uni-stuttgart.de Tue Dec 4 03:05:11 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 04 Dec 2007 09:05:11 +0100 Subject: [SciPy-dev] test_mio.py Message-ID: Hi all, python test_mio.py yields

Traceback (most recent call last):
  File "/data/home/nwagner/local/lib64/python2.3/site-packages/scipy/io/tests/test_mio.py", line 25, in ?
    class TestMIOArray(NumpyTestCase):
  File "/data/home/nwagner/local/lib64/python2.3/site-packages/scipy/io/tests/test_mio.py", line 235, in TestMIOArray
    format = case in case_table4 and '4' or '5'
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

I am using python2.3 and the latest svn. Nils From david at ar.media.kyoto-u.ac.jp Tue Dec 4 06:21:39 2007 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Tue, 04 Dec 2007 20:21:39 +0900 Subject: [SciPy-dev] scipy.scons branch: building numpy and scipy with scons Message-ID: <47553843.7020709@ar.media.kyoto-u.ac.jp> Hi, I've just reached a first usable scipy.scons branch, so that scipy can be built entirely with scons (assuming you build numpy with scons too). You can get it from http://svn.scipy.org/svn/scipy/branches/scipy.scons. To build it, you just need to use numpy.scons branch instead of the trunk, and use setupscons.py instead of setup.py. Again, I would be happy to hear about failures, success (please report a ticket in this case), etc... Some of the most interesting things I can think of which work with scons: - you can control fortran and C flags from the command line: CFLAGS and FFLAGS won't override necessary flags, only optimization flags, so you can easily play with warning, optimization flags. For example: CFLAGS='-W -Wall -Wextra -DDEBUG' FFLAGS='-DDEBUG -W -Wall -Wextra' python setupscons build for debugging will work. No need to care about -fPIC and co, all this is handled automatically.
- dependencies are handled correctly thanks to scons: for example, if you change a library (e.g. by using MKL=None to disable mkl), only the link step will be redone.

platforms known to work
-----------------------
- linux with gcc/g77 or gcc/gfortran (both atlas and mkl 9 were tested).
- linux with intel compilers (intel and gnu compilers can also be mixed, AFAIK).
- solaris with sun compilers with sunperf, only tested on indiana.

Notable non-working things:
---------------------------
- using netlib BLAS and LAPACK is not supported (only optimized ones are available: sunperf, atlas, mkl, and vecLib/Accelerate).
- parallel build does NOT work (AFAICS, this is because f2py does some things which are not thread-safe, but I have not yet found the exact problem).
- I have not yet implemented umfpack checker, and as such umfpack cannot be built yet
- I have not yet tweaked fortran compiler configurations for optimizations except for gnu compilers
- c++ compilers configurations are not handled either.

cheers, David From matthieu.brucher at gmail.com Wed Dec 5 12:04:09 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 5 Dec 2007 18:04:09 +0100 Subject: [SciPy-dev] Scikit for manifold learning techniques Message-ID: Hi, I'd like to create a new scikit (I know I didn't put much effort in the optimizers, but it will change when I will have more time) for manifold learning. At first, I'd like to implement some usual techniques like Isomap, LLE (some are in neuroimaging I heard) with different levels of interaction. I do this in my PhD thesis, so it is almost available like a scikit. It would be a twin-like of the Dimensionality Reduction toolbox for MatLab but with a different interaction : directly call the right global function (like isomap, mds, nlm or gedodesicNLM ATM) or give directly to an optimizer the cost function you want with a distance matrix (it will use my own optimizers).
Eigenmaps will be available shortly (I have a referee that want it, so I will implement it), it will use scipy.sparse, and I hope I'll be able to propose two interfaces as well. If everything goes smoothly, I'll propose my own dimensionality reduction technique in the scikit as well. Comments ? Matthieu -- French PhD student Website : http://miles.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Wed Dec 5 14:41:17 2007 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 05 Dec 2007 13:41:17 -0600 Subject: [SciPy-dev] Scikit for manifold learning techniques In-Reply-To: References: Message-ID: <4756FEDD.6010601@gmail.com> Matthieu Brucher wrote: > Hi, > > I'd like to create a new scikit (I know I didn't put much effort in the > optimizers, but it will change when I will have more time) for manifold > learning. At first, I'd like to implement some usual techniques like > Isomap, LLE (some are in neuroimaging I heard) with different levels of > interaction. I do this in my PhD thesis, so it is almost available like > a scikit. It would be a twin-like of the Dimensionality Reduction > toolbox for MatLab but with a different interaction : directly call the > right global function (like isomap, mds, nlm or gedodesicNLM ATM) or > give directly to an optimizer the cost function you want with a distance > matrix (it will use my own optimizers). > Eigenmaps will be available shortly (I have a referee that want it, so I > will implement it), it will use scipy.sparse, and I hope I'll be able to > propose two interfaces as well. > > If everything goes smoothly, I'll propose my own dimensionality > reduction technique in the scikit as well. > > Comments ? I'd love to have them! 
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From zpincus at stanford.edu Thu Dec 6 11:53:11 2007 From: zpincus at stanford.edu (Zachary Pincus) Date: Thu, 6 Dec 2007 11:53:11 -0500 Subject: [SciPy-dev] Scikit for manifold learning techniques In-Reply-To: References: Message-ID: > I'd like to create a new scikit (I know I didn't put much effort in > the optimizers, but it will change when I will have more time) for > manifold learning. At first, I'd like to implement some usual > techniques like Isomap, LLE (some are in neuroimaging I heard) with > different levels of interaction. I do this in my PhD thesis, so it > is almost available like a scikit. It would be a twin-like of the > Dimensionality Reduction toolbox for MatLab but with a different > interaction : directly call the right global function (like isomap, > mds, nlm or gedodesicNLM ATM) or give directly to an optimizer the > cost function you want with a distance matrix (it will use my own > optimizers). > Eigenmaps will be available shortly (I have a referee that want it, > so I will implement it), it will use scipy.sparse, and I hope I'll > be able to propose two interfaces as well. > > If everything goes smoothly, I'll propose my own dimensionality > reduction technique in the scikit as well. Oh, this would be most fantastic. If desired, I can donate a PCA implementation, which would be a good "baseline" method to have in a dimensionality-reduction kit. Zach (PS. Yes, PCA is easy to implement, but it is also easy to get subtly wrong -- I've seen several such -- or to implement in a way that is a lot slower than it needs to be. I've spent a while making my implementation correct and as fast as possible for both n_data >> n_dims and vice-versa. If anyone wants, I'll send the code.) 
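The point about a PCA that stays fast for both n_data >> n_dims and the transposed case can be illustrated with a thin SVD of the centered data, which avoids forming the covariance matrix entirely. This is a generic sketch of that idea (my own illustration, not the pca.py attachment posted later in the thread):

```python
import numpy as np

def pca(data, n_components=None):
    # data is (n_samples, n_features); a thin SVD of the centered matrix
    # gives the principal axes without forming the covariance matrix,
    # which stays efficient whether n_samples or n_features is larger.
    mean = data.mean(axis=0)
    centered = data - mean
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    if n_components is not None:
        s, Vt = s[:n_components], Vt[:n_components]
    variances = s ** 2 / (data.shape[0] - 1)   # variance along each axis
    scores = centered @ Vt.T                   # data projected onto the axes
    return scores, Vt, variances, mean

# quick self-check on random data
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
scores, axes, variances, mean = pca(X)
assert np.allclose(scores @ axes + mean, X)    # lossless when keeping all axes
```

When n_features is much larger than n_samples, the thin SVD here costs about the same as the Gram-matrix trick, which is why a single code path covers both regimes.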
From matthieu.brucher at gmail.com Thu Dec 6 12:02:31 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 6 Dec 2007 18:02:31 +0100 Subject: [SciPy-dev] Scikit for manifold learning techniques In-Reply-To: References: Message-ID: > > (PS. Yes, PCA is easy to implement, but it is also easy to get subtly > wrong -- I've seen several such -- or to implement in a way that is a > lot slower than it needs to be. I've spent a while making my > implementation correct and as fast as possible for both n_data >> > n_dims and vice-versa. If anyone wants, I'll send the code.) > I already have one with the Fukunaga modification, but I'll gladly compare both versions to use the best one. Matthieu -- French PhD student Website : http://miles.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Thu Dec 6 12:32:43 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 6 Dec 2007 10:32:43 -0700 Subject: [SciPy-dev] Scikit for manifold learning techniques In-Reply-To: References: Message-ID: On Dec 5, 2007 10:04 AM, Matthieu Brucher wrote: > Hi, > > I'd like to create a new scikit (I know I didn't put much effort in the > optimizers, but it will change when I will have more time) for manifold > learning. At first, I'd like to implement some usual techniques like Isomap, > LLE (some are in neuroimaging I heard) with different levels of interaction. > I do this in my PhD thesis, so it is almost available like a scikit. It > would be a twin-like of the Dimensionality Reduction toolbox for MatLab but > with a different interaction : directly call the right global function (like > isomap, mds, nlm or gedodesicNLM ATM) or give directly to an optimizer the > cost function you want with a distance matrix (it will use my own > optimizers).
> Eigenmaps will be available shortly (I have a referee that want it, so I > will implement it), it will use scipy.sparse, and I hope I'll be able to > propose two interfaces as well. > > If everything goes smoothly, I'll propose my own dimensionality reduction > technique in the scikit as well. > > Comments ? Another enthusiastic +1 from the peanut gallery! Cheers, f From zpincus at stanford.edu Thu Dec 6 12:52:49 2007 From: zpincus at stanford.edu (Zachary Pincus) Date: Thu, 6 Dec 2007 12:52:49 -0500 Subject: [SciPy-dev] Scikit for manifold learning techniques In-Reply-To: References: Message-ID: <22B9B919-7A93-4ED8-B185-04A9AFCFF86A@stanford.edu> Attached is a relatively "full featured" implementation of basic PCA that should be reasonably fast. Perhaps it will be of use to someone. Zach (I hereby put this code in the public domain.) -------------- next part -------------- A non-text attachment was scrubbed... Name: pca.py Type: text/x-python-script Size: 4131 bytes Desc: not available URL: -------------- next part -------------- On Dec 6, 2007, at 12:02 PM, Matthieu Brucher wrote: > (PS. Yes, PCA is easy to implement, but it is also easy to get subtly > wrong -- I've seen several such -- or to implement in a way that is a > lot slower than it needs to be. I've spent a while making my > implementation correct and as fast as possible for both n_data >> > n_dims and vice-versa. If anyone wants, I'll send the code.) > > I already have one with the Fukunaga modification, but I'll gladly > compare both version to use the best one. > > Matthieu > -- > French PhD student > Website : http://miles.developpez.com/ > Blogs : http://matt.eifelle.com and http://blog.developpez.com/?
> blog=92 > LinkedIn : http://www.linkedin.com/in/matthieubrucher > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev From matthieu.brucher at gmail.com Thu Dec 6 14:26:43 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 6 Dec 2007 20:26:43 +0100 Subject: [SciPy-dev] Scikit for manifold learning techniques In-Reply-To: References: Message-ID: Three voices in favor of the scikit, no voice against. Other opinions ? I'd like to call it manifold_learning (obviously learn is not a good option). I think that the goal of learn is somewhat different from this scikit :
- learn is more about classification for the moment
- usually, a manifold learning technique is used before the classification (and so the two scikits could be complementary)
If you read this, David, can you give an opinion on this ? Matthieu 2007/12/6, Fernando Perez : > > On Dec 5, 2007 10:04 AM, Matthieu Brucher > wrote: > > Hi, > > > > I'd like to create a new scikit (I know I didn't put much effort in the > > optimizers, but it will change when I will have more time) for manifold > > learning. At first, I'd like to implement some usual techniques like > Isomap, > > LLE (some are in neuroimaging I heard) with different levels of > interaction. > > I do this in my PhD thesis, so it is almost available like a scikit. It > > would be a twin-like of the Dimensionality Reduction toolbox for MatLab > but > > with a different interaction : directly call the right global function > (like > > isomap, mds, nlm or gedodesicNLM ATM) or give directly to an optimizer > the > > cost function you want with a distance matrix (it will use my own > > optimizers). > > Eigenmaps will be available shortly (I have a referee that want it, so I > > will implement it), it will use scipy.sparse, and I hope I'll be able to > > propose two interfaces as well.
> > > > If everything goes smoothly, I'll propose my own dimensionality > reduction > > technique in the scikit as well. > > > > Comments ? > > Another enthusiastic +1 from the peanut gallery! > > Cheers, > > f > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -- French PhD student Website : http://miles.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.brucher at gmail.com Thu Dec 6 14:28:09 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 6 Dec 2007 20:28:09 +0100 Subject: [SciPy-dev] Scikit for manifold learning techniques In-Reply-To: <22B9B919-7A93-4ED8-B185-04A9AFCFF86A@stanford.edu> References: <22B9B919-7A93-4ED8-B185-04A9AFCFF86A@stanford.edu> Message-ID: Cute, but I might adapt the interface to the one I use for the moment (as there are two different techniques you propose). Matthieu 2007/12/6, Zachary Pincus : > > Attached is a relatively "full featured" implementation of basic PCA > that should be reasonably fast. Perhaps it will be of sue to someone. > > Zach > (I hereby put this code in the public domain.) > > > > > > On Dec 6, 2007, at 12:02 PM, Matthieu Brucher wrote: > > > (PS. Yes, PCA is easy to implement, but it is also easy to get subtly > > wrong -- I've seen several such -- or to implement in a way that is a > > lot slower than it needs to be. I've spent a while making my > > implementation correct and as fast as possible for both n_data >> > > n_dims and vice-versa. If anyone wants, I'll send the code.) > > > > I already have one with the Fukunaga modification, but I'll gladly > > compare both version to use the best one. 
> > > > Matthieu > > -- > > French PhD student > > Website : http://miles.developpez.com/ > > Blogs : http://matt.eifelle.com and http://blog.developpez.com/? > > blog=92 > > LinkedIn : http://www.linkedin.com/in/matthieubrucher > > _______________________________________________ > > Scipy-dev mailing list > > Scipy-dev at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > -- French PhD student Website : http://miles.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From wnbell at gmail.com Fri Dec 7 22:24:01 2007 From: wnbell at gmail.com (Nathan Bell) Date: Fri, 7 Dec 2007 21:24:01 -0600 Subject: [SciPy-dev] conversion among sparse formats Message-ID: I've added a small benchmark to the scipy.sparse unittests (enabled with level 5) that compares the cost of converting among the various sparse formats. The matrix is 900x900 with approximately 5 nonzeros per row (5-point finite difference stencil). 
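The benchmark matrix itself isn't shown in the post; a matrix with the stated shape and sparsity can be built roughly like this (a hypothetical reconstruction using scipy.sparse helpers, not the actual unittest code), and then converted through the same formats the benchmark compares:

```python
import scipy.sparse as sp

# 5-point finite difference Laplacian on a 30x30 grid: a 900x900 matrix
# with roughly 5 nonzeros per row, matching the benchmark's description.
n = 30
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
I = sp.identity(n)
A = (sp.kron(I, T) + sp.kron(T, I)).tocsr()   # 2-D 5-point stencil

assert A.shape == (900, 900)
assert 4 < A.nnz / A.shape[0] <= 5            # ~4.9 nonzeros per row

# round-trip through the formats compared in the benchmark
for convert in (A.tocsc, A.tocoo, A.tolil, A.todok):
    B = convert()
    assert not (B.tocsr() - A).toarray().any()  # same matrix in every format
```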
Using the latest SVN, you can produce the table below with the following command: python -c 'from scipy.sparse import test; test(5)'

Sparse Matrix Conversion
====================================================================
format | tocsr()  | tocsc()  | tocoo()  | tolil()  | todok()
--------------------------------------------------------------------
csr    | 0.000002 | 0.000292 | 0.000356 | 3.460000 | n/a
csc    | 0.000312 | 0.000002 | 0.000358 | 3.450000 | n/a
coo    | 0.000389 | 0.000361 | 0.000002 | 3.460000 | n/a
lil    | 0.024000 | 0.024000 | 0.022000 | 3.540000 | n/a
dok    | 0.027500 | 0.060000 | 0.025000 | 3.510000 | n/a

The numbers are the time (in seconds) to convert a matrix in the format on the left to each of the formats using the to___() methods. Some observations:
1) None of the formats supports todok(). Does anyone want/need this?
2) Something is definitely wrong with tolil() since the reverse conversions (e.g. lil_matrix.tocsr()) are 100x faster.
3) Conversion among CSR/CSC/COO is relatively cheap, so it should be leveraged when going between the 'slow' but flexible formats (lil and dok) and the 'fast' but inflexible formats (csr/csc/coo). Specifically, when converting any 'fast' format to a 'slow' format, you should first convert to the most convenient 'fast' format. Conversely, when converting from a 'slow' to a 'fast' format, using another 'fast' format as an intermediate incurs no noticeable overhead.
4) lil_matrix and dok_matrix are the only construction-oriented formats we have, so it's important that they be reasonably quick.

-- Nathan Bell wnbell at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From wnbell at gmail.com Sat Dec 8 12:37:10 2007 From: wnbell at gmail.com (Nathan Bell) Date: Sat, 8 Dec 2007 11:37:10 -0600 Subject: [SciPy-dev] conversion among sparse formats In-Reply-To: References: Message-ID: http://projects.scipy.org/scipy/scipy/changeset/3622 I made some improvements to address the tolil() problem.
The lil_matrix() constructor was also simplified by using csr_matrix() to handle the conversions. Here are the new results:

Sparse Matrix Conversion
====================================================================
format | tocsr()  | tocsc()  | tocoo()  | tolil()  | todok()
--------------------------------------------------------------------
csr    | 0.000002 | 0.000293 | 0.000350 | 0.006667 | n/a
csc    | 0.000295 | 0.000002 | 0.000391 | 0.007143 | n/a
coo    | 0.000382 | 0.000375 | 0.000002 | 0.007143 | n/a
lil    | 0.025000 | 0.024000 | 0.022000 | 0.000002 | n/a
dok    | 0.025000 | 0.070000 | 0.025000 | 0.030000 | n/a

For comparison, here are the original figures:

Sparse Matrix Conversion
====================================================================
format | tocsr()  | tocsc()  | tocoo()  | tolil()  | todok()
--------------------------------------------------------------------
csr    | 0.000002 | 0.000292 | 0.000356 | 3.460000 | n/a
csc    | 0.000312 | 0.000002 | 0.000358 | 3.450000 | n/a
coo    | 0.000389 | 0.000361 | 0.000002 | 3.460000 | n/a
lil    | 0.024000 | 0.024000 | 0.022000 | 3.540000 | n/a
dok    | 0.027500 | 0.060000 | 0.025000 | 3.510000 | n/a

-- Nathan Bell wnbell at gmail.com -------------- next part -------------- An HTML attachment was scrubbed...
URL: From wnbell at gmail.com Sat Dec 8 17:15:50 2007 From: wnbell at gmail.com (Nathan Bell) Date: Sat, 8 Dec 2007 16:15:50 -0600 Subject: [SciPy-dev] conversion among sparse formats In-Reply-To: References: Message-ID: I added a todok() method and improved lil_matrix.tocsr() with the following results:

Sparse Matrix Conversion
====================================================================
format | tocsr()   | tocsc()   | tocoo()   | tolil()   | todok()
--------------------------------------------------------------------
csr    | 0.0000023 | 0.0002976 | 0.0003663 | 0.0066667 | 0.0078571
csc    | 0.0003049 | 0.0000021 | 0.0003484 | 0.0073333 | 0.0083333
coo    | 0.0004074 | 0.0003717 | 0.0000022 | 0.0071429 | 0.0076923
lil    | 0.0016129 | 0.0021739 | 0.0021277 | 0.0000013 | 0.0090909
dok    | 0.0019608 | 0.0019231 | 0.0015152 | 0.0090909 | 0.0000016

Conversions from dense matrices to lil_matrix and dok_matrix are much faster also. -- Nathan Bell wnbell at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From robfalck at gmail.com Tue Dec 11 22:41:09 2007 From: robfalck at gmail.com (Rob Falck) Date: Tue, 11 Dec 2007 22:41:09 -0500 Subject: [SciPy-dev] SLSQP Constrained Optimizer Status Message-ID: I'm currently implementing the Sequential Least Squares Quadratic Programming (SLSQP) optimizer, by Dieter Kraft, for use in Scipy. The Fortran code being wrapped with F2PY is here: http://www.netlib.org/toms/733 (its use within Scipy has been cleared) SLSQP provides for bounds on the independent variables, as well as equality and inequality constraint functions, which is a capability that doesn't exist in scipy.optimize. Currently the code works, although the constraint normals are being generated by approximation. I'm working on a way to pass these in. I think the most elegant way will be a single function that returns the matrix of constraint normals.
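The "single function that returns the matrix of constraint normals" idea can be sketched with a hypothetical pair of constraints (the constraints and names below are my own illustration, not Rob's code): for m constraints over n variables, the function returns the m x n Jacobian, with equality rows first.

```python
import numpy as np

def constraint_normals(x):
    # Hypothetical constraints over x = (x0, x1):
    #   equality:   g0(x) = x0 - x1           == 0
    #   inequality: g1(x) = 1 - x0**2 - x1**2 >= 0
    # Each row is the gradient of one constraint (equalities first),
    # giving the m x n matrix of constraint normals.
    return np.array([
        [1.0, -1.0],                    # dg0/dx0, dg0/dx1
        [-2.0 * x[0], -2.0 * x[1]],     # dg1/dx0, dg1/dx1
    ])

J = constraint_normals(np.array([0.5, 0.25]))
print(J.shape)   # (2, 2)
```

Passing exact normals like these spares the optimizer one finite-difference sweep per constraint per iterate, which is where the speedup from providing derivatives comes from.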
For a demonstration of what the code can do, here is an optimization of

    f(x,y) = 2xy + 2x - x**2 - 2y**2

Example 14.2 in Chapra & Canale gives the maximum as x=2.0, y=1.0. The unbounded optimization tests find this solution. As expected, it's faster when derivatives are provided rather than approximated.

Unbounded optimization. Derivatives approximated.
Elapsed time: 1.45792961121 ms
Results
[[1.9999999515712266, 0.99999996181577444], -1.9999999999999984, 4, 0, 'Optimization terminated successfully.']

Unbounded optimization. Derivatives provided.
Elapsed time: 1.03211402893 ms
Results
[[1.9999999515712266, 0.99999996181577444], -1.9999999999999984, 4, 0, 'Optimization terminated successfully.']

The following example uses an equality constraint to find the optimum where x=y.

Bound optimization. Derivatives approximated.
Elapsed time: 1.384973526 ms
Results
[[0.99999996845920858, 0.99999996845920858], -0.99999999999999889, 4, 0, 'Optimization terminated successfully.']

I've tried to conform to the syntax used by the other optimizers in scipy.optimize. The function definition and docstring are below. If anyone is interested in testing it out, let me know.

def fmin_slsqp(func, x0, eqcons=[], f_eqcons=None, ieqcons=[], f_ieqcons=None,
               bounds=[], fprime=None, fprime_cons=None, args=(), iter=100,
               acc=1.0E-6, iprint=1, full_output=0, epsilon=_epsilon):
    """
    Minimize a function using Sequential Least SQuares Programming

    Description:
        Python interface function for the SLSQP Optimization subroutine
        originally implemented by Dieter Kraft.

    Inputs:
        func     - Objective function (in the form func(x, *args))
        x0       - Initial guess for the independent variable(s).
        eqcons   - A list of functions of length n such that
                   eqcons[j](x0,*args) == 0.0 in a successfully
                   optimized problem.
        f_eqcons - A function of the form f_eqcons(x, *args) that
                   returns an array in which each element must equal
                   0.0 in a successfully optimized problem.  If
                   f_eqcons is specified, eqcons is ignored.
        ieqcons   - A list of functions of length n such that
                    ieqcons[j](x0,*args) >= 0.0 in a successfully
                    optimized problem.
        f_ieqcons - A function of the form f_ieqcons(x0, *args) that
                    returns an array in which each element must be
                    greater than or equal to 0.0 in a successfully
                    optimized problem.  If f_ieqcons is specified,
                    ieqcons is ignored.
        bounds    - A list of tuples specifying the lower and upper
                    bound for each independent variable
                    [(xl0, xu0), (xl1, xu1), ...]
        fprime    - A function that evaluates the partial derivatives
                    of func.
        fprime_cons - A function of the form f(x, *args) that returns
                    the m by n array of constraint normals.  If not
                    provided, the normals will be approximated.
                    Equality constraint normals precede inequality
                    constraint normals.
        args      - A sequence of additional arguments passed to func
                    and fprime.
        iter      - The maximum number of iterations (int).
        acc       - Requested accuracy (float).
        iprint    - The verbosity of fmin_slsqp:
                    iprint <= 0 : Silent operation
                    iprint == 1 : Print summary upon completion (default)
                    iprint >= 2 : Print status of each iterate and summary
        full_output - If 0, return only the minimizer of func (default).
                    Otherwise, output final objective function and
                    summary information.
        epsilon   - The step size for finite-difference derivative
                    estimates.

    Outputs: ( x, { fx, gnorm, its, imode, smode } )
        x     - The final minimizer of func.
        fx    - The final value of the objective function.
        its   - The number of iterations.
        imode - The exit mode from the optimizer, as an integer.
        smode - A string describing the exit mode from the optimizer.

    Exit modes are defined as follows:
        -1 : Gradient evaluation required (g & a)
         0 : Optimization terminated successfully.
         1 : Function evaluation required (f & c)
         2 : Number of equality constraints larger than the number of
             independent variables
         3 : More than 3*n iterations in LSQ subproblem
         4 : Inequality constraints incompatible
         5 : Singular matrix E in LSQ subproblem
         6 : Singular matrix C in LSQ subproblem
         7 : Rank-deficient equality constraint subproblem HFTI
         8 : Positive directional derivative for linesearch
         9 : Iteration limit exceeded
    """

-- - Rob Falck -------------- next part -------------- An HTML attachment was scrubbed... URL: From wnbell at gmail.com Wed Dec 12 01:59:53 2007 From: wnbell at gmail.com (Nathan Bell) Date: Wed, 12 Dec 2007 00:59:53 -0600 Subject: [SciPy-dev] feedback on scipy.sparse Message-ID: I wanted to gather some feedback on the state of scipy.sparse and possible changes and improvements.

===== Constructors =====

Here are the current constructors for the various sparse classes:

csr_matrix and csc_matrix
    def __init__(self, arg1, dims=None, dtype=None, copy=False):

dok_matrix and lil_matrix
    def __init__(self, A=None, shape=None, dtype=None, copy=False):

coo_matrix
    def __init__(self, arg1, dims=None, dtype=None):

Empty matrices can now be constructed with xxx_matrix( (M,N) ) for all formats.

1) Should we prefer 'dims' over 'shape' or vice versa? IMO 'shape' is arguably more natural, since all the types have a .shape attribute.

2) It would be nice if xxx_matrix( A ) always worked when A is a sparse or dense matrix. Does anyone object to this? The functionality is already present (through the various .toxxx() methods).

3) When the user defines the dims (or shape) argument but the data violates these bounds, what should happen? IMO this merits an exception, as opposed to expanding the dimensions to accommodate the data.

===== sparse.py and sparse functions =====

sparse.py currently weighs in at nearly 3000 lines and will continue growing. I propose that we move the functions (e.g. spidentity(), spdiags(), spkron(), etc.) to a separate file.
Any comments or proposals for the name of this file? Would it be prudent to move the classes into separate files also?

Also, these functions always return a specific sparse format. For example, spidentity() always returns a csc_matrix, spkron() always returns a coo_matrix, etc. Currently, a user who wanted the identity matrix in CSR format would have to do a CSC->CSR conversion on the result of spidentity(). This is somewhat wasteful, since spidentity() could easily have generated the CSR format instead. It would be better to allow the user to specify the desired return type in the function call. For example,

    spidentity(n, dtype='d', format='csr')

instead of

    spidentity(n, dtype='d').tocsr()

Sometimes a given function has a very natural return type. For instance, when we have a dia_matrix() implementation (I'm working on one), spdiags() would naturally use this format. If the user specified another type, spdiags( ..., format='csr'), then spdiags() would, at worst, create the matrix in DIA format first and then convert to CSR (with dia_matrix.tocsr()). I like this approach because it allows the implementation to be clever when cleverness is possible, but also doesn't place an undue burden on the programmer when implementing a new method. Furthermore, it shields the user from internal implementation changes that might change the default return format.

I propose the following policy. Functions get a new parameter 'format' which defaults to None. The default implies that the function will return the matrix in whatever format is most natural (and subject to change). For example,

    spidentity(n, dtype='d', format=None)

might return a dia_matrix(), or a special identity matrix format in the future. At a minimum, the valid values of 'format' will include the three-letter abbreviations of the currently supported sparse matrix types (i.e. 'csr', 'csc', 'coo', 'lil', etc.). Comments?
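The format-keyword policy above can be sketched in plain Python. The following is a toy illustration of the convert-on-request idea only, not scipy code: the triplet representation and the 'dense' converter are invented stand-ins for the real sparse classes.

```python
# Toy sketch of the proposed format=None policy: build the result in
# its "natural" representation first, and convert only if the caller
# asked for a different one.  All names here are illustrative stand-ins.

def identity_triplets(n):
    # Natural construction: COO-style (row, col, value) triplets.
    return [(i, i, 1) for i in range(n)]

def triplets_to_dense(triplets, n):
    out = [[0] * n for _ in range(n)]
    for i, j, v in triplets:
        out[i][j] = v
    return out

def spidentity(n, format=None):
    triplets = identity_triplets(n)            # cheapest form first
    if format in (None, 'coo'):
        return triplets                        # natural format: no conversion
    if format == 'dense':
        return triplets_to_dense(triplets, n)  # convert only on request
    raise ValueError('unknown format: %r' % format)

print(spidentity(2, format='dense'))  # [[1, 0], [0, 1]]
```

The real spidentity() would dispatch to tocsr(), tocsc(), etc. in the same way, so only the unavoidable conversions are ever performed.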
Also, feel free to respond with any other comments related to scipy.sparse -- Nathan Bell wnbell at gmail.com From stefan at sun.ac.za Wed Dec 12 03:28:45 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 12 Dec 2007 10:28:45 +0200 Subject: [SciPy-dev] feedback on scipy.sparse In-Reply-To: References: Message-ID: <20071212082845.GF5390@mentat.za.net> Hi Nathan On Wed, Dec 12, 2007 at 12:59:53AM -0600, Nathan Bell wrote: > I wanted to gather some feedback on the state of scipy.sparse and > possible changes and improvements. > > > ===== Constructors ===== > > Here are the current constructors for the various sparse classes: > > csr_matrix and csc_matrix > def __init__(self, arg1, dims=None, dtype=None, copy=False): > > dok_matrix and lil_matrix > def __init__(self, A=None, shape=None, dtype=None, copy=False): > > coo_matrix > def __init__(self, arg1, dims=None, dtype=None): > > Empty matrices can now be constructed with xxx_matrix( (M,N) ) for > all formats. > > 1) Should we prefer 'dims' over 'shape' or vice versa? IMO 'shape' > is arguably more natural since all the types have a .shape attribute +1 > 3) When the user defines the dim (or shape) argument but the data > violates these bounds, what should happen? IMO this merits an > exception, as opposed to expanding the dimensions to accommodate the > data. I agree -- silently modifying the shape to allow such assignments is a recipe for disaster. > sparse.py currently weighs in at nearly 3000 lines and will continue > growing. I propose that we move the functions (e.g. spidentity(), > spdiags(), spkron(), etc. ) to a separate file. Any comments or > proposals for the name of this file? Would it be prudent to move the > classes into separate files also? I'd like to see the separate classes moving into their own files. Eye, diags etc. make use of specific properties of each array type, so I wonder whether those operations shouldn't be implemented as static class methods? 
> Also, these functions always return a specific sparse format. For > example spidentity() always returns a csc_matrix, spkron() always > returns a coo_matrix, etc. Currently, a user who wanted the identity > matrix in CSR format would have to do a CSC->CSR conversion on the > result of spidentity(). This is somewhat wasteful since the > spidentity() could easily have generated the CSR format instead. It > would be better to allow the user to specify the desired return type > in the function call. For example, > spidentity(n, dtype='d',format='csr') > instead of > spidentity(n, dtype='d').tocsr() > Sometimes a given function has a very natural return type. For > instance, when we have a dia_matrix() implementation (I'm working on > one) then spdiags() would naturally use this format. If the user > specified another type, spdiags( ..., format='csr') then spdiags() > would, at worst, create the matrix in DIA format first and then > convert to CSR (with dia_matrix.tocsr() ). I like this approach > because it allows the implementation to be clever when cleverness is > possible, but also doesn't place an undue burden on the programmer > when implementing a new method. Furthermore, it shields the user from > internal implementation changes that might change the default return > format. This sounds like the right approach: in my mind, any operation should take the least amount of time possible. I.e. if a function needs to convert a sparse array to csr internally, then don't bother converting it back to the original type when returning. A user should string together a whole bunch of operations, and only at the end do a single conversion to the required array type. > I propose the following policy. Functions get a new parameter > 'format' which defaults to None. The default implies that the > function will return the matrix in whatever format is most natural > (and subject to change). 
For example: > spidentity(n, dtype='d',format=None) > might return a dia_matrix(), or a special identity matrix format in > the future. At a minimum, the valid values of 'format' will include > the three-letter abbreviations of the currently supported sparse > matrix types (i.e. 'csr', 'csc', 'coo', 'lil', etc). Comments? Sounds good! > Also, feel free to respond with any other comments related to > scipy.sparse At the moment, IIRC, functionality for the different kinds of sparse arrays is located in the same classes, separated with if's. I would like to see the different classes pulled completely apart, so that the only overlap is in common functionality. I'd also like to discuss the in-place memory assignment policy. When do we copy on write, and when do we return views? For example, taking a slice out of a lil_matrix returns a new sparse array. It is *possible* to create a view, but it gets a bit tricky. If each array had an "origin" property, such views could be trivially constructed, but it still does not cater for slices like x[::2]. Regards, Stéfan From dmitrey.kroshko at scipy.org Wed Dec 12 03:42:31 2007 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Wed, 12 Dec 2007 10:42:31 +0200 Subject: [SciPy-dev] SLSQP Constrained Optimizer Status In-Reply-To: References: Message-ID: <475F9EF7.4010509@scipy.org> Rob Falck wrote: > I'm currently implementing the Sequential Least Squares Quadratic > Programming (SLSQP) optimizer, by Dieter Kraft, for use in Scipy. > The Fortran code being wrapped with F2PY is here: > http://www.netlib.org/toms/733 (its use within Scipy has been cleared) > > If anyone is interested in testing it out, let me know. > Hi Rob, could you commit your changes to svn? I intend to provide a connection to OpenOpt and try to benchmark the solver against the other available ones. Also, having an outputfcn would be nice; does the Fortran code provide that possibility? Regards, D.
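Rob's demo optimum earlier in the thread (x=2.0, y=1.0) can be sanity-checked without the Fortran wrapper at all. The sketch below is plain-Python gradient ascent on the same unconstrained problem; it only verifies the test case, and is not the SLSQP algorithm.

```python
# Check the unconstrained maximum of f(x, y) = 2xy + 2x - x**2 - 2y**2
# (expected optimum: x = 2, y = 1) by simple gradient ascent.

def grad(x, y):
    # Analytic partials: df/dx = 2y + 2 - 2x,  df/dy = 2x - 4y
    return 2*y + 2 - 2*x, 2*x - 4*y

def maximize(x=0.0, y=0.0, step=0.1, iters=2000):
    # f is concave (negative-definite Hessian), so a small fixed step
    # converges to the unique maximum.
    for _ in range(iters):
        gx, gy = grad(x, y)
        x, y = x + step * gx, y + step * gy
    return x, y

x, y = maximize()
print(round(x, 6), round(y, 6))  # converges to 2.0 1.0
```

Setting both partials to zero gives 2y + 2 - 2x = 0 and 2x - 4y = 0, i.e. x = 2y and y = 1, matching the Chapra & Canale result quoted above.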
From matthieu.brucher at gmail.com Wed Dec 12 03:46:16 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 12 Dec 2007 09:46:16 +0100 Subject: [SciPy-dev] feedback on scipy.sparse In-Reply-To: References: Message-ID: > > Also, feel free to respond with any other comments related to scipy.sparse > I'd like to know if some fancy indexing will be available soon. Let me explain: I need to populate a sparse matrix with some weights for each row. I'd like to do s[i, indices] = weights, but it does not seem to work. I could use a loop, but it would be slower, and this is not acceptable (as it is possible to do so in Matlab). Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Wed Dec 12 05:03:04 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 12 Dec 2007 12:03:04 +0200 Subject: [SciPy-dev] feedback on scipy.sparse In-Reply-To: References: Message-ID: <20071212100304.GG5390@mentat.za.net> Hi Matthieu On Wed, Dec 12, 2007 at 09:46:16AM +0100, Matthieu Brucher wrote: > Also, feel free to respond with any other comments related to scipy.sparse > > > I'd like to know if some fancy indexing will be available soon. Let me > explain: I need to populate a sparse matrix with some weights for each row. I'd > like to do s[i, indices] = weights but it does not seem to work. I could use a > loop, but it would be slower and this is not acceptable (as it is possible to > do so in Matlab).
The lil_matrix is used to construct matrices like that:

In [1]: import scipy.sparse as S

In [2]: x = S.lil_matrix((3,3))

In [3]: x[0,:] = 4

In [4]: x
Out[4]:
<3x3 sparse matrix of type ''
        with 3 stored elements in LInked List format>

In [5]: x.todense()
Out[5]:
matrix([[ 4.,  4.,  4.],
        [ 0.,  0.,  0.],
        [ 0.,  0.,  0.]])

Regards, Stéfan

From robfalck at gmail.com Wed Dec 12 10:15:57 2007 From: robfalck at gmail.com (Rob Falck) Date: Wed, 12 Dec 2007 10:15:57 -0500 Subject: [SciPy-dev] SLSQP Constrained Optimizer Status In-Reply-To: <475F9EF7.4010509@scipy.org> References: <475F9EF7.4010509@scipy.org> Message-ID: outputfcn would allow for graphical progress of the optimizer, correct? I don't see any reason why that wouldn't be possible. I'll need to read up on properly incorporating it into scipy svn (this is the first commit I've done for the project); I'll try to have it up in the next day or three. I think its speed can be increased substantially with the addition of a constraint normals argument, but I'll add it as is, with approximations for now. On Dec 12, 2007 3:42 AM, dmitrey wrote: > Rob Falck wrote: > > I'm currently implementing the Sequential Least Squares Quadratic > > Programming (SLSQP) optimizer, by Dieter Kraft, for use in Scipy. > > The Fortran code being wrapped with F2PY is here: > > http://www.netlib.org/toms/733 (its use within Scipy has been cleared) > > > > > If anyone is interested in testing it out, let me know. > > > Hi Rob, > could you commit your changes to svn? > I intend to provide connection to OpenOpt and try to bench the solver > with other ones available. > Also, having outputfcn would be nice, does the Fortran code provide the > possibility? > Regards, D. > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -- - Rob Falck -------------- next part -------------- An HTML attachment was scrubbed...
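The LInked-List layout behind the lil_matrix session above can be modelled in a few lines of plain Python. This is only a toy sketch of the storage scheme (per-row lists of column indices and values), not scipy's actual implementation.

```python
# Toy model of LIL storage: one list of column indices plus a parallel
# list of values per row.  Illustrative only -- scipy's lil_matrix
# uses the same idea, with sorted rows and much more machinery.

class ToyLil:
    def __init__(self, shape):
        self.shape = shape
        self.rows = [[] for _ in range(shape[0])]   # column indices per row
        self.data = [[] for _ in range(shape[0])]   # matching values per row

    def __setitem__(self, key, value):
        i, j = key
        cols, vals = self.rows[i], self.data[i]
        if j in cols:
            vals[cols.index(j)] = value   # overwrite an existing entry
        else:
            cols.append(j)                # insert a new entry
            vals.append(value)

    def todense(self):
        out = [[0.0] * self.shape[1] for _ in range(self.shape[0])]
        for i in range(self.shape[0]):
            for j, v in zip(self.rows[i], self.data[i]):
                out[i][j] = v
        return out

x = ToyLil((3, 3))
for j in range(3):        # emulate  x[0, :] = 4
    x[0, j] = 4.0
print(x.todense())
```

Row assignment is cheap here because only one row's two short lists are touched, which is why LIL is the format of choice for incremental construction.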
URL: From dmitrey.kroshko at scipy.org Wed Dec 12 10:22:40 2007 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Wed, 12 Dec 2007 17:22:40 +0200 Subject: [SciPy-dev] SLSQP Constrained Optimizer Status In-Reply-To: References: <475F9EF7.4010509@scipy.org> Message-ID: <475FFCC0.8040005@scipy.org> Could you provide at least the possibility of a user-supplied gradient for the objective function? This feature would make it possible to use p.connectIterFcn('df') (as is done for ALGENCAN and some other solvers), so that openopt graphic output would be enabled. Regards, D. Rob Falck wrote: > outputfcn would allow for graphical progress of the optimizer, > correct? I don't see any reason why that wouldn't be possible. > I'll need to read up on properly incorporating it into scipy svn > (this is the first commit I've done for the project), I'll try to have > it up in the next day or three. > I think its speed can be increased substantially with the addition of > a constraint normals argument, but I'll add it as is, with > approximations for now. > > On Dec 12, 2007 3:42 AM, dmitrey < dmitrey.kroshko at scipy.org > > wrote: > > Rob Falck wrote: > > I'm currently implementing the Sequential Least Squares Quadratic > > Programming (SLSQP) optimizer, by Dieter Kraft, for use in Scipy. > > The Fortran code being wrapped with F2PY is here: > > http://www.netlib.org/toms/733 (its use within Scipy has been > cleared) > > > > > If anyone is interested in testing it out, let me know. > > > Hi Rob, > could you commit your changes to svn? > I intend to provide connection to OpenOpt and try to bench the solver > with other ones available. > Also, having outputfcn would be nice, does the Fortran code > provide the > possibility? > Regards, D.
> > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > > > -- > - Rob Falck > ------------------------------------------------------------------------ > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From robfalck at gmail.com Wed Dec 12 11:16:35 2007 From: robfalck at gmail.com (Rob Falck) Date: Wed, 12 Dec 2007 11:16:35 -0500 Subject: [SciPy-dev] SLSQP Constrained Optimizer Status In-Reply-To: <475FFCC0.8040005@scipy.org> References: <475F9EF7.4010509@scipy.org> <475FFCC0.8040005@scipy.org> Message-ID: Yes, that's already implemented. On Dec 12, 2007 10:22 AM, dmitrey wrote: > Could you provide at least possibility for user-supplied gradient of > objective function? > This feature would provide possibility of using > p.connectIterFcn('df') > (as well as it is done for ALGENCAN and some other solvers) > so openopt graphic output would be enabled. > > Regards, D. > > Rob Falck wrote: > > outputfcn would allow for graphical progress of the optimizer, > > correct? I don't see any reason why that wouldn't be possible. > > I'll need to read up on properly incorporating it into scipy svn > > (this is the first commit I've done for the project), I'll try to have > > it up in the next day or three. > > I think its speed can be increased substantially with the addition of > > a constraint normals argument, but I'll add it as is, with > > approximations for now. > > > > On Dec 12, 2007 3:42 AM, dmitrey < dmitrey.kroshko at scipy.org > > > wrote: > > > > Rob Falck wrote: > > > I'm currently implementing the Sequential Least Squares Quadratic
> > > The Fortran code being wrapped with F2PY is here: > > > http://www.netlib.org/toms/733 (its use within Scipy has been > > cleared) > > > > > > > > If anyone is interested in testing it out, let me know. > > > > > Hi Rob, > > could you commit your changes to svn? > > I intend to provide connection to OpenOpt and try to bench the > solver > > with other ones available. > > Also, having outputfcn would be nice, does the Fortran code > > provide the > > possibility? > > Regards, D. > > > > _______________________________________________ > > Scipy-dev mailing list > > Scipy-dev at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > > > > > > > > -- > > - Rob Falck > > ------------------------------------------------------------------------ > > > > _______________________________________________ > > Scipy-dev mailing list > > Scipy-dev at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -- - Rob Falck -------------- next part -------------- An HTML attachment was scrubbed... URL: From wnbell at gmail.com Wed Dec 12 19:40:18 2007 From: wnbell at gmail.com (Nathan Bell) Date: Wed, 12 Dec 2007 18:40:18 -0600 Subject: [SciPy-dev] feedback on scipy.sparse In-Reply-To: <20071212100304.GG5390@mentat.za.net> References: <20071212100304.GG5390@mentat.za.net> Message-ID: On Dec 12, 2007 4:03 AM, Stefan van der Walt wrote: > > I'd like to know if some fancy indexing will be available soon. I explain > > myself. I need to populate a sparse matrix with some weights for each line. I'd > > like to do s[i, indices] = weights but it does not seem to work. I could use a > > loop, but it would be slower and this is not acceptable (as it is possible to > > do so in Matlab). 
> > The lil_matrix is used to construct matrices like that:

Matthieu, if you know all the row and column indices and their corresponding values, then you can use the coo_matrix format like this:

In [1]: from scipy import *

In [2]: from scipy.sparse import *

In [3]: row = array([0,1,2,2,1])

In [4]: col = array([1,2,0,1,1])

In [5]: data = array([1,2,3,4,5])

In [6]: A = coo_matrix((data,(row,col)),dims=(3,3))

In [7]: A.todense()
Out[7]:
matrix([[0, 1, 0],
        [0, 5, 2],
        [3, 4, 0]])

Using COO in this manner should be 10x-100x faster than LIL. LIL and DOK are the only options that allow efficient insertion into a sparse matrix; however, if you know all the entries a priori, then COO is much faster.

-- Nathan Bell wnbell at gmail.com

From wnbell at gmail.com Wed Dec 12 20:14:49 2007 From: wnbell at gmail.com (Nathan Bell) Date: Wed, 12 Dec 2007 19:14:49 -0600 Subject: [SciPy-dev] feedback on scipy.sparse In-Reply-To: <20071212082845.GF5390@mentat.za.net> References: <20071212082845.GF5390@mentat.za.net> Message-ID: On Dec 12, 2007 2:28 AM, Stefan van der Walt wrote: > I'd like to see the separate classes moving into their own files. > > Eye, diags etc. make use of specific properties of each array type, so > I wonder whether those operations shouldn't be implemented as static > class methods? That's a possibility. If we adopt the solution below, you could simply define (in spmatrix):

class spmatrix:
    def eye(self, n):
        return spidentity(n, format=self.format)

I'd prefer to hold off on this idea until it's clear that people want it. I fear that adding too many static methods would clutter the classes. > > I propose the following policy. Functions get a new parameter > 'format' which defaults to None. The default implies that the > function will return the matrix in whatever format is most natural > (and subject to change). For example: > spidentity(n, dtype='d',format=None) > might return a dia_matrix(), or a special identity matrix format in > the future.
At a minimum, the valid values of 'format' will include > > the three-letter abbreviations of the currently supported sparse > > matrix types (i.e. 'csr', 'csc', 'coo', 'lil', etc). Comments? > > Sounds good! Great. I'll go ahead with this idea unless someone else weighs in. > > Also, feel free to respond with any other comments related to > > scipy.sparse > > At the moment, IIRC, functionality for different kinds of sparse > arrays are located in the same classes, separated with if's. I would > like to see the different classes pulled completely apart, so the only > overlap is in common functionality. Do you mean the use of _cs_matrix() to abstract the common parts of csr_matrix and csc_matrix? If so, I recently removed the ifs from the constructor and replaced them with a better solution. I think the present implementation is a reasonable compromise between readability and redundancy. In the past the two classes were completely separate, each consisting of a few hundred lines of code, and had a tendency to drift apart since edits to one didn't always make it into the other. Tim's refactoring fixed this without complicating the implementation substantially. > I'd also like to discuss the in-place memory assignment policy. When > do we copy on write, and when do we return views? For example, taking > a slice out of a lil_matrix returns a new sparse array. It is > *possible* to create a view, but it gets a bit tricky. If each array > had an "origin" property, such views could be trivially constructed, > but it still does not cater for slices like x[::2]. That is a hard problem. Can you think of specific uses of this kind of functionality that merit the complexity of implementing it? For slices like x[::2] you could introduce a stride tuple in the views, but that could get ugly fast. 
-- Nathan Bell wnbell at gmail.com From cimrman3 at ntc.zcu.cz Thu Dec 13 04:21:58 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 13 Dec 2007 10:21:58 +0100 Subject: [SciPy-dev] feedback on scipy.sparse In-Reply-To: References: Message-ID: <4760F9B6.1060008@ntc.zcu.cz> Hi Nathan, thanks for pushing scipy.sparse forwards! Nathan Bell wrote: > ===== Constructors ===== > > Here are the current constructors for the various sparse classes: > > csr_matrix and csc_matrix > def __init__(self, arg1, dims=None, dtype=None, copy=False): > > dok_matrix and lil_matrix > def __init__(self, A=None, shape=None, dtype=None, copy=False): > > coo_matrix > def __init__(self, arg1, dims=None, dtype=None): > > Empty matrices can now be constructed with xxx_matrix( (M,N) ) for > all formats. > > 1) Should we prefer 'dims' over 'shape' or vice versa? IMO 'shape' > is arguably more natural since all the types have a .shape attribute Yes, please. > 2) It would be nice if xxx_matrix( A ) always worked when A is a > sparse or dense matrix. Does anyone object to this? The > functionality is already present (through the various .toxxx() methods) +1. > 3) When the user defines the dims (or shape) argument but the data > violates these bounds, what should happen? IMO this merits an > exception, as opposed to expanding the dimensions to accommodate the > data. IMHO scipy.sparse should not assume anything the user has not asked for explicitly -> I am for an exception. > ===== sparse.py and sparse functions ===== > > sparse.py currently weighs in at nearly 3000 lines and will continue > growing. I propose that we move the functions (e.g. spidentity(), > spdiags(), spkron(), etc. ) to a separate file. Any comments or > proposals for the name of this file? Would it be prudent to move the > classes into separate files also? sputils? Splitting into class files sounds good. > Also, these functions always return a specific sparse format.
For > example spidentity() always returns a csc_matrix, spkron() always > returns a coo_matrix, etc. Currently, a user who wanted the identity > matrix in CSR format would have to do a CSC->CSR conversion on the > result of spidentity(). This is somewhat wasteful since the > spidentity() could easily have generated the CSR format instead. It > would be better to allow the user to specify the desired return type > in the function call. For example, > spidentity(n, dtype='d',format='csr') > instead of > spidentity(n, dtype='d').tocsr() > Sometimes a given function has a very natural return type. For > instance, when we have a dia_matrix() implementation (I'm working on > one) then spdiags() would naturally use this format. If the user > specified another type, spdiags( ..., format='csr') then spdiags() > would, at worst, create the matrix in DIA format first and then > convert to CSR (with dia_matrix.tocsr() ). I like this approach > because it allows the implementation to be clever when cleverness is > possible, but also doesn't place an undue burden on the programmer > when implementing a new method. Furthermore, it shields the user from > internal implementation changes that might change the default return > format. Good idea! Concerning Stefan's idea of static methods for spidentity etc., we could use only one method for all of them, e.g.

class spmatrix:
    def special(self, name, format=None):
        if name == 'identity':
            return spidentity(n, format=format)
        ...

to prevent the cluttering of the class you mentioned. best, r. From matthieu.brucher at gmail.com Thu Dec 13 04:40:23 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 13 Dec 2007 10:40:23 +0100 Subject: [SciPy-dev] feedback on scipy.sparse In-Reply-To: References: <20071212100304.GG5390@mentat.za.net> Message-ID: I thought I would use csr or csc, as every row and column will have some values, but not the same ones each time, so I don't think that coo is what I need.
But I will try lil when I have some time. Matthieu 2007/12/13, Nathan Bell : > > On Dec 12, 2007 4:03 AM, Stefan van der Walt wrote: > > > I'd like to know if some fancy indexing will be available soon. I > explain > > > myself. I need to populate a sparse matrix with some weights for each > line. I'd > > > like to do s[i, indices] = weights but it does not seem to work. I > could use a > > > loop, but it would be slower and this is not acceptable (as it is > possible to > > > do so in Matlab). > > > > The lil_matrix is used to construct matrices like that: > > Matthieu, if you know all the row and column indices and their > corresponding values then you can use the coo_matrix format like this: > > In [1]: from scipy import * > In [2]: from scipy.sparse import * > In [3]: row = array([0,1,2,2,1]) > In [4]: col = array([1,2,0,1,1]) > In [5]: data = array([1,2,3,4,5]) > In [6]: A = coo_matrix((data,(row,col)),dims=(3,3)) > In [7]: A.todense() > Out[7]: > matrix([[0, 1, 0], > [0, 5, 2], > [3, 4, 0]]) > > > Using COO in this manner should be 10x-100x faster than LIL. LIL and > DOK are the only options that allow efficient insertion into a sparse > matrix, however if you know all the entries a priori then COO is much > faster. > > -- > Nathan Bell wnbell at gmail.com > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... 
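Nathan's coo_matrix example can be mimicked in plain Python to see exactly what the (data, (row, col)) triplets mean. This is a toy dense conversion, summing duplicate entries as coo_matrix does on conversion; the function name is invented for illustration.

```python
# Toy COO -> dense conversion using the triplets from Nathan's example.
# Duplicate (row, col) pairs accumulate, matching coo_matrix semantics.

def coo_todense(data, row, col, shape):
    out = [[0] * shape[1] for _ in range(shape[0])]
    for v, i, j in zip(data, row, col):
        out[i][j] += v      # += so repeated indices sum up
    return out

row  = [0, 1, 2, 2, 1]
col  = [1, 2, 0, 1, 1]
data = [1, 2, 3, 4, 5]
print(coo_todense(data, row, col, (3, 3)))
# [[0, 1, 0], [0, 5, 2], [3, 4, 0]] -- the same result as A.todense()
```

Because construction is a single pass over three flat arrays, assembling a COO matrix is far cheaper than repeated item insertion, which is the source of the 10x-100x figure quoted above.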
URL: From cimrman3 at ntc.zcu.cz Thu Dec 13 05:51:56 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 13 Dec 2007 11:51:56 +0100 Subject: [SciPy-dev] feedback on scipy.sparse In-Reply-To: References: <20071212100304.GG5390@mentat.za.net> Message-ID: <47610ECC.1040101@ntc.zcu.cz> Hi Matthieu, Matthieu Brucher wrote: > I thought I would use csr or csc as every row and column will have some > values, but not the same each time, so I don't think that coo is what I > need. But I will try lil when I have some time. Suppose you have:

row, column, value
  0,     10,  1.0
  0,     11,  2.0
  1,     50,  1.0
  1,     55,  3.0
  1,    100,  4.0

- the values are not the same each time - there are two rows, each with its own nonzero columns and values. Then, as Nathan wrote:

>> In [1]: from scipy import *
>> In [2]: from scipy.sparse import *
>> In [3]: row = array([0,0,1,1,1])
>> In [4]: col = array([10,11,50,55,100])
>> In [5]: data = array([1.,2.,1.,3.,4.])
>> In [6]: A = coo_matrix((data,(row,col)),dims=(3,3))

should construct such a matrix, no? r.
>>> The lil_matrix is used to construct matrices like that: >> Matthieu, if you know all the row and column indices and their >> corresponding values then you can use the coo_matrix format like this: >> >> In [1]: from scipy import * >> In [2]: from scipy.sparse import * >> In [3]: row = array([0,1,2,2,1]) >> In [4]: col = array([1,2,0,1,1]) >> In [5]: data = array([1,2,3,4,5]) >> In [6]: A = coo_matrix((data,(row,col)),dims=(3,3)) >> In [7]: A.todense() >> Out[7]: >> matrix([[0, 1, 0], >> [0, 5, 2], >> [3, 4, 0]]) >> From matthieu.brucher at gmail.com Thu Dec 13 05:57:43 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 13 Dec 2007 11:57:43 +0100 Subject: [SciPy-dev] feedback on scipy.sparse In-Reply-To: <47610ECC.1040101@ntc.zcu.cz> References: <20071212100304.GG5390@mentat.za.net> <47610ECC.1040101@ntc.zcu.cz> Message-ID: 2007/12/13, Robert Cimrman : > > Hi Matthieu, > > Matthieu Brucher wrote: > > I thought I would use csr or csc as every row and column will have some > > values, but not the same each time, so I don't think that coo is what I > > need. But I will try lil when I have some time. > > Suppose you have: > > row, column, value > 0, 10, 1.0 > 0, 11, 2.0 > 1, 50, 1.0 > 1, 55, 3.0 > 1, 100, 4.0 > - the values are not the same each time - there are two rows, each with > own nonzero columns and values. > > then, as Nathan wrote: > >> In [1]: from scipy import * > >> In [2]: from scipy.sparse import * > >> In [3]: row = array([0,0,1,1,1]) > >> In [4]: col = array([10,11,50,55,100]) > >> In [5]: data = array([1.,2.,1.,3.,4.]) > >> In [6]: A = coo_matrix((data,(row,col)),dims=(3,3)) > > should construct such a matrix, no? > r. Yes, it should, I think I should go back to sleep... Is there a way to create a sparse matrix from a list of lists directly and a separate data array ? Just to know if I have to transform my list of lists to the format (row, col). 
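An editorial aside on the snippet Robert quotes above: the `dims=(3,3)` argument was carried over from the earlier 3x3 example and cannot hold a column index of 100 (Robert corrects this in his next mail). Omitting the argument lets COO infer the extent from the largest indices; a sketch against modern SciPy, where the keyword is now spelled `shape`:

```python
import numpy as np
from scipy.sparse import coo_matrix

row = np.array([0, 0, 1, 1, 1])
col = np.array([10, 11, 50, 55, 100])
data = np.array([1., 2., 1., 3., 4.])

# With no shape given, COO sizes itself from the index maxima.
A = coo_matrix((data, (row, col)))
print(A.shape)  # (2, 101)
```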
Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From cimrman3 at ntc.zcu.cz Thu Dec 13 06:01:45 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 13 Dec 2007 12:01:45 +0100 Subject: [SciPy-dev] feedback on scipy.sparse In-Reply-To: References: <20071212100304.GG5390@mentat.za.net> <47610ECC.1040101@ntc.zcu.cz> Message-ID: <47611119.4060807@ntc.zcu.cz> Matthieu Brucher wrote: > 2007/12/13, Robert Cimrman : >> Hi Matthieu, >> >> Matthieu Brucher wrote: >>> I thought I would use csr or csc as every row and column will have some >>> values, but not the same each time, so I don't think that coo is what I >>> need. But I will try lil when I have some time. >> Suppose you have: >> >> row, column, value >> 0, 10, 1.0 >> 0, 11, 2.0 >> 1, 50, 1.0 >> 1, 55, 3.0 >> 1, 100, 4.0 >> - the values are not the same each time - there are two rows, each with >> own nonzero columns and values. >> >> then, as Nathan wrote: >>>> In [1]: from scipy import * >>>> In [2]: from scipy.sparse import * >>>> In [3]: row = array([0,0,1,1,1]) >>>> In [4]: col = array([10,11,50,55,100]) >>>> In [5]: data = array([1.,2.,1.,3.,4.]) >>>> In [6]: A = coo_matrix((data,(row,col)),dims=(3,3)) >> should construct such a matrix, no? >> r. Just remove/correct the dims argument - I left there the original one... > Yes, it should, I think I should go back to sleep... > Is there a way to create a sparse matrix from a list of lists directly and a > separate data array ? Just to know if I have to transform my list of lists > to the format (row, col). > > Matthieu You mean this? 
In [3]: row = [0,0,1,1,1] In [4]: col = [10,11,50,55,100] In [5]: rc = [row, col] In [6]: rc Out[6]: [[0, 0, 1, 1, 1], [10, 11, 50, 55, 100]] In [7]: data = array([1.,2.,1.,3.,4.]) In [9]: A = sp.coo_matrix((data,rc)) In [10]: A Out[10]: <2x101 sparse matrix of type '' with 5 stored elements in COOrdinate format> In [11]: print A (0, 10) 1.0 (0, 11) 2.0 (1, 50) 1.0 (1, 55) 3.0 (1, 100) 4.0 cheers, r. From matthieu.brucher at gmail.com Thu Dec 13 07:31:27 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Thu, 13 Dec 2007 13:31:27 +0100 Subject: [SciPy-dev] feedback on scipy.sparse In-Reply-To: <47611119.4060807@ntc.zcu.cz> References: <20071212100304.GG5390@mentat.za.net> <47610ECC.1040101@ntc.zcu.cz> <47611119.4060807@ntc.zcu.cz> Message-ID: > > > Yes, it should, I think I should go back to sleep... > > Is there a way to create a sparse matrix from a list of lists directly > and a > > separate data array ? Just to know if I have to transform my list of > lists > > to the format (row, col). > > > > Matthieu > > You mean this? > > In [3]: row = [0,0,1,1,1] > In [4]: col = [10,11,50,55,100] > In [5]: rc = [row, col] > In [6]: rc > Out[6]: [[0, 0, 1, 1, 1], [10, 11, 50, 55, 100]] > In [7]: data = array([1.,2.,1.,3.,4.]) > In [9]: A = sp.coo_matrix((data,rc)) > In [10]: A > Out[10]: > <2x101 sparse matrix of type '' > with 5 stored elements in COOrdinate format> > > In [11]: print A > (0, 10) 1.0 > (0, 11) 2.0 > (1, 50) 1.0 > (1, 55) 3.0 > (1, 100) 4.0 Not exactly. I have something like : a = [[0, 2, 5], [3, 4], [4, 2]] and then some data : data = [1, 2, 3, 4, 5, 6, 7] or [[1, 2, 3], [4, 5], [6, 7]]and then the matrix would be : [[1, 0, 2, 0, 0, 0] [0, 0, 0, 3, 4, 0] [0, 0, 7, 0, 6, 0]] Is there something like this ? 
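No such constructor exists in scipy.sparse (as Robert confirms below), but the conversion Matthieu asks about is only a few lines. `lists_to_coo` here is a hypothetical helper, not a library function; note that the values land wherever the per-row index lists put them, so the hand-written target matrix in the mail above contains a couple of slips:

```python
import numpy as np
from scipy.sparse import coo_matrix

def lists_to_coo(cols, data, shape):
    """Build a COO matrix from per-row column lists and per-row values.

    `cols[i]` holds the column indices of row i, `data[i]` the values.
    Illustrative helper only, not part of scipy.sparse.
    """
    # Repeat each row index once per entry in that row, then flatten.
    row = np.repeat(np.arange(len(cols)), [len(c) for c in cols])
    col = np.concatenate([np.asarray(c) for c in cols])
    val = np.concatenate([np.asarray(d) for d in data])
    # Modern SciPy spells the keyword `shape`; the 2007 API used `dims`.
    return coo_matrix((val, (row, col)), shape=shape)

A = lists_to_coo([[0, 2, 5], [3, 4], [4, 2]],
                 [[1, 2, 3], [4, 5], [6, 7]], shape=(3, 6))
```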
Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From cimrman3 at ntc.zcu.cz Thu Dec 13 07:59:49 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 13 Dec 2007 13:59:49 +0100 Subject: [SciPy-dev] feedback on scipy.sparse In-Reply-To: References: <20071212100304.GG5390@mentat.za.net> <47610ECC.1040101@ntc.zcu.cz> <47611119.4060807@ntc.zcu.cz> Message-ID: <47612CC5.2060506@ntc.zcu.cz> Matthieu Brucher wrote: > Not exactly. > I have something like : > a = [[0, 2, 5], [3, 4], [4, 2]] > and then some data : > data = [1, 2, 3, 4, 5, 6, 7] or [[1, 2, 3], [4, 5], [6, 7]]and then > the matrix would be : > > [[1, 0, 2, 0, 0, 0] > [0, 0, 0, 3, 4, 0] > [0, 0, 7, 0, 6, 0]] > > Is there something like this ? I see, I do not think there is a function like this. r. From wnbell at gmail.com Thu Dec 13 11:23:23 2007 From: wnbell at gmail.com (Nathan Bell) Date: Thu, 13 Dec 2007 10:23:23 -0600 Subject: [SciPy-dev] feedback on scipy.sparse In-Reply-To: References: <20071212100304.GG5390@mentat.za.net> <47610ECC.1040101@ntc.zcu.cz> <47611119.4060807@ntc.zcu.cz> Message-ID: On Dec 13, 2007 6:31 AM, Matthieu Brucher wrote: > Not exactly. > I have something like : > a = [[0, 2, 5], [3, 4], [4, 2]] > and then some data : > data = [1, 2, 3, 4, 5, 6, 7] or [[1, 2, 3], [4, 5], [6, 7]]and then > the matrix would be : > > [[1, 0, 2, 0, 0, 0] > [0, 0, 0, 3, 4, 0] > [0, 0, 7, 0, 6, 0]] Your list of lists nearly matches the lil_matrix format (which is an array of lists). Below is the code for lil_matrix.tocsr() which is the most efficient way I've found to convert that format to CSR: http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/sparse/sparse.py 2555 def tocsr(self): 2556 """ Return Compressed Sparse Row format arrays for this matrix. 
2557 """ 2558 2559 indptr = asarray([len(x) for x in self.rows], dtype=intc) 2560 indptr = concatenate( ( array([0],dtype=intc), cumsum(indptr) ) ) 2561 2562 nnz = indptr[-1] 2563 2564 indices = [] 2565 for x in self.rows: 2566 indices.extend(x) 2567 indices = asarray(indices,dtype=intc) 2568 2569 data = [] 2570 for x in self.data: 2571 data.extend(x) 2572 data = asarray(data,dtype=self.dtype) 2573 2574 return csr_matrix((data, indices, indptr), dims=self.shape) 2575 Essentially, it computes the row pointer first and then flattens the lists. If you find something faster let me know. -- Nathan Bell wnbell at gmail.com From wnbell at gmail.com Thu Dec 13 14:17:22 2007 From: wnbell at gmail.com (Nathan Bell) Date: Thu, 13 Dec 2007 13:17:22 -0600 Subject: [SciPy-dev] feedback on scipy.sparse In-Reply-To: <4760F9B6.1060008@ntc.zcu.cz> References: <4760F9B6.1060008@ntc.zcu.cz> Message-ID: On Dec 13, 2007 3:21 AM, Robert Cimrman wrote: > > 1) Should we prefer 'dims' over 'shape' or vice versa? IMO 'shape' > > is arguably more natural since all the types have a .shape attribute > > Yes, please. I'll deprecate 'dims' then. > Concerning Stefan's idea of static methods for spidentity etc., we > could use only one method for all of them, e.g. > > class spmatrix: > > def special( name, format = ... ): > if name == 'identity': > return spidentity(n,format=format) > ... > > to prevent cluttering of the class you mentioned. Good idea, I hadn't considered that option. -- Nathan Bell wnbell at gmail.com From stefan at sun.ac.za Fri Dec 14 06:59:23 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Fri, 14 Dec 2007 13:59:23 +0200 Subject: [SciPy-dev] feedback on scipy.sparse In-Reply-To: References: <4760F9B6.1060008@ntc.zcu.cz> Message-ID: <20071214115923.GK21897@mentat.za.net> On Thu, Dec 13, 2007 at 01:17:22PM -0600, Nathan Bell wrote: > > Concerning Stefan's idea of static methods for spidentity etc., we > > could use only one method for all of them, e.g.
> > > > class spmatrix: > > > > def special( name, format = ... ): > > if name = 'identity': > > return spidentity(n,format=format) > > ... > > > > to prevent cluttering od the class you mentioned. > > Good idea, I hadn't considered that option. If we do this, we are still merging functionality belonging in different classes together. Say we'd like to refactor lil_matrix into another package, for some reason. If we use "special", then extracting the functionality pertaining only to lil_matrix becomes a problem. I think functionality that requires insight into the entrails of an object belongs with that object. Furthermore, LilMatrix.special('eye',(3,4)) looks very verbose, compared to LilMatrix.eye([3,4]) How many of these functions do we have? If we have in the order of 5 to 10, then cluttering isn't really a problem. Also, not all of these function need to be re-implemented from spmatrix (can we rename this to SparseMatrix for consistency?). For example, LilMatrix inherits .eye, which can simply defined in terms of .diag. Regards St?fan From cimrman3 at ntc.zcu.cz Fri Dec 14 07:21:58 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Fri, 14 Dec 2007 13:21:58 +0100 Subject: [SciPy-dev] feedback on scipy.sparse In-Reply-To: <20071214115923.GK21897@mentat.za.net> References: <4760F9B6.1060008@ntc.zcu.cz> <20071214115923.GK21897@mentat.za.net> Message-ID: <47627566.3020508@ntc.zcu.cz> Stefan van der Walt wrote: > On Thu, Dec 13, 2007 at 01:17:22PM -0600, Nathan Bell wrote: >>> Concerning the Stefan's idea of static methods for spidentity etc., we >>> could use only one method for all of them, e.g. >>> >>> class spmatrix: >>> >>> def special( name, format = ... ): >>> if name = 'identity': >>> return spidentity(n,format=format) >>> ... >>> >>> to prevent cluttering od the class you mentioned. >> Good idea, I hadn't considered that option. > > If we do this, we are still merging functionality belonging in > different classes together. 
Say we'd like to refactor lil_matrix into > another package, for some reason. If we use "special", then > extracting the functionality pertaining only to lil_matrix becomes a > problem. I think functionality that requires insight into the > entrails of an object belongs with that object. > > Furthermore, > > LilMatrix.special('eye',(3,4)) looks very verbose, compared to > > LilMatrix.eye([3,4]) > > How many of these functions do we have? If we have in the order of 5 > to 10, then cluttering isn't really a problem. Well, I like the second way more, too. 'special' has nothing special to offer besides some clever hasattr/getattr magic to tell what functions are available for a different kinds of matrices etc. I proposed this only to address the potential cluttering problem, which is not present if there are only several such functions. regards, r. From stefan at sun.ac.za Fri Dec 14 07:31:03 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Fri, 14 Dec 2007 14:31:03 +0200 Subject: [SciPy-dev] feedback on scipy.sparse In-Reply-To: References: <20071212082845.GF5390@mentat.za.net> Message-ID: <20071214123103.GL21897@mentat.za.net> Hi Nathan On Wed, Dec 12, 2007 at 07:14:49PM -0600, Nathan Bell wrote: > On Dec 12, 2007 2:28 AM, Stefan van der Walt wrote: > > > Also, feel free to respond with any other comments related to > > > scipy.sparse > > > > At the moment, IIRC, functionality for different kinds of sparse > > arrays are located in the same classes, separated with if's. I would > > like to see the different classes pulled completely apart, so the only > > overlap is in common functionality. > > Do you mean the use of _cs_matrix() to abstract the common parts of > csr_matrix and csc_matrix? If so, I recently removed the ifs from the > constructor and replaced them with a better solution. I think the > present implementation is a reasonable compromise between readability > and redundancy. 
In the past the two classes were completely separate, > each consisting of a few hundred lines of code, and had a tendency to > drift apart since edits to one didn't always make it into the other. > Tim's refactoring fixed this without complicating the implementation > substantially. I think _cs_matrix is a good idea: the two classes share similar storage. Having 'if' statements inside _cs_matrix to check which of the two formats you are working with, however, would not be a good idea (but I don't see any of those). > > I'd also like to discuss the in-place memory assignment policy. When > > do we copy on write, and when do we return views? For example, taking > > a slice out of a lil_matrix returns a new sparse array. It is > > *possible* to create a view, but it gets a bit tricky. If each array > > had an "origin" property, such views could be trivially constructed, > > but it still does not cater for slices like x[::2]. > > That is a hard problem. Can you think of specific uses of this kind > of functionality that merit the complexity of implementing it? For > slices like x[::2] you could introduce a stride tuple in the views, > but that could get ugly fast. Say a user wants to examine the first 500 rows of his sparse matrix: x = build_sparse_matrix() print x[:500] It seems like a waste of time to make a new allocation (there may not even be enough memory to do so). Which reminds me, print x[:500] will yield some description of the sparse matrix. Do we have a way to print the elements of the sparse matrix? Are we aiming to support striding on assignments? I.e. x[::2] = 5 I suspect that will not be worth the trouble, since a for loop can be used to assign all the elements.
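The copy-versus-view question Stefan raises can be checked empirically. In current SciPy (an assumption; the 2007 code may behave differently), slicing a sparse matrix allocates a fresh matrix rather than aliasing the original storage, and `print` on a sparse matrix lists its nonzero triplets:

```python
from scipy.sparse import lil_matrix

x = lil_matrix((1000, 4))
x[0, 0] = 1.0

top = x[:500]      # a new sparse matrix, not a view
top[0, 0] = 99.0   # mutate the slice...

print(x[0, 0])     # ...the original is untouched: 1.0
print(top)         # prints "(0, 0)  99.0"-style nonzero triplets
```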
Regards Stéfan From cimrman3 at ntc.zcu.cz Fri Dec 14 10:00:42 2007 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Fri, 14 Dec 2007 16:00:42 +0100 Subject: [SciPy-dev] ANN: SFE-00.35.01 Message-ID: <47629A9A.8030708@ntc.zcu.cz> Let me announce SFE-00.35.01, bringing per term integration - now each term can use its own quadrature points. This is a major change at the heart of the code - some parts may not work as all terms were not migrated yet to the new framework. All test examples work, though, as well as acoustic band gaps. See http://ui505p06-mbs.ntc.zcu.cz/sfe . SFE is a finite element analysis software written almost entirely in Python. The code is released under BSD license. best regards, r. From oliphant at enthought.com Fri Dec 14 14:48:16 2007 From: oliphant at enthought.com (Travis E. Oliphant) Date: Fri, 14 Dec 2007 13:48:16 -0600 Subject: [SciPy-dev] We are sprinting at Berkeley Message-ID: <4762DE00.1040503@enthought.com> Hey everybody, A group of us (Robert K, Travis O, Fernando P, Jarrod M, Chris, and Tom) are at Berkeley today sprinting on SciPy. Hopefully we will be able to make some great progress towards 0.7.0 and some useful clean-up. If anybody would like to join us we are on (and will be in) irc.freenode.net on channel scipy throughout the next 3 days (until Sunday). Come and dive in if you have some time to help (or even if you don't --- make the time ;-) ) Best regards, -Travis O. From oliphant at enthought.com Fri Dec 14 14:55:36 2007 From: oliphant at enthought.com (Travis E. Oliphant) Date: Fri, 14 Dec 2007 13:55:36 -0600 Subject: [SciPy-dev] Gmail chat Message-ID: <4762DFB8.8010200@enthought.com> If anybody wants to join the gmail group chat, send your gmail account information to fernando perez (or me): -Travis O. From oliphant at enthought.com Sat Dec 15 14:15:38 2007 From: oliphant at enthought.com (Travis E.
Oliphant) Date: Sat, 15 Dec 2007 13:15:38 -0600 Subject: [SciPy-dev] Hanging out in IRC today Message-ID: <476427DA.2070101@enthought.com> We are all hanging out in irc.freenode.net today during the Sprint. The room is scipy. Come join us if you would like to coordinate activity. -Travis O. From eads at soe.ucsc.edu Sat Dec 15 14:27:23 2007 From: eads at soe.ucsc.edu (Damian Eads) Date: Sat, 15 Dec 2007 12:27:23 -0700 Subject: [SciPy-dev] Hanging out in IRC today In-Reply-To: <476427DA.2070101@enthought.com> References: <476427DA.2070101@enthought.com> Message-ID: <47642A9B.10009@soe.ucsc.edu> Travis E. Oliphant wrote: > We are all hanging out in irc.freenode.net today during the Sprint. The > room is scipy. > > Come join us if you would like to coordinate activity. > > -Travis O. After connecting to irc.freenode.net, I tried joining the room with the command "/join scipy" but it did not work. Please advise. Damian ----------------------------------------------------- Damian Eads Graduate Student Jack Baskin School of Engineering, UCSC E2-381 1156 High Street Santa Cruz, CA 95064 http://www.soe.ucsc.edu/~eads From eads at soe.ucsc.edu Sat Dec 15 14:44:00 2007 From: eads at soe.ucsc.edu (Damian Eads) Date: Sat, 15 Dec 2007 12:44:00 -0700 Subject: [SciPy-dev] Hanging out in IRC today In-Reply-To: <47642A9B.10009@soe.ucsc.edu> References: <476427DA.2070101@enthought.com> <47642A9B.10009@soe.ucsc.edu> Message-ID: <47642E80.8090503@soe.ucsc.edu> Nevermind! I realized I should have typed /join #scipy . Damian Damian Eads wrote: > Travis E. Oliphant wrote: >> We are all hanging out in irc.freenode.net today during the Sprint. The >> room is scipy. >> >> Come join us if you would like to coordinate activity. >> >> -Travis O. > > After connecting to irc.freenode.net, I tried joining the room with the > command "/join scipy" but it did not work. > > Please advise. 
> > Damian From robfalck at gmail.com Sun Dec 16 22:29:39 2007 From: robfalck at gmail.com (Rob Falck) Date: Sun, 16 Dec 2007 22:29:39 -0500 Subject: [SciPy-dev] SLSQP Constrained Optimizer Status In-Reply-To: <475F9EF7.4010509@scipy.org> References: <475F9EF7.4010509@scipy.org> Message-ID: The source files for SLSQP have been submitted to Trac with Ticket #566 (I lack svn commit privileges), along with a short test script as an example. Feel free to test it and let me know what you think. > > > Hi Rob, > could you commit your changes to svn? > I intend to provide connection to OpenOpt and try to bench the solver > with other ones available. > Also, having outputfcn would be nice, does the Fortran code provide the > possibility? > Regards, D. > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From millman at berkeley.edu Mon Dec 17 01:07:10 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Sun, 16 Dec 2007 22:07:10 -0800 Subject: [SciPy-dev] Changes to the SciPy's developer trac site Message-ID: Hello, It looks like the spammers have figured out how to use our trac site again. I locked down the trac site a bit to make it more difficult for them. Basically, I removed some of the default permissions that all users get. I don't think this will affect many people; but if you have a trac account and aren't able to do something because you don't have permission just send me an email.
Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From nwagner at iam.uni-stuttgart.de Mon Dec 17 02:40:35 2007 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 17 Dec 2007 08:40:35 +0100 Subject: [SciPy-dev] tests in sandbox package multigrid Message-ID: Hi Nathan, nearly all tests of your sandbox package multigrid work for me. Cheers, Nils python -i svn/scipy/scipy/sandbox/multigrid/tests/test_sa.py Found 6/6 tests for __main__ .E.... ====================================================================== ERROR: check_sa_constant_interpolation (__main__.TestSA) ---------------------------------------------------------------------- Traceback (most recent call last): File "svn/scipy/scipy/sandbox/multigrid/tests/test_sa.py", line 63, in check_sa_constant_interpolation S_expected = reference_sa_constant_interpolation(A,epsilon) File "svn/scipy/scipy/sandbox/multigrid/tests/test_sa.py", line 267, in reference_sa_constant_interpolation R = set(range(n)) NameError: global name 'set' is not defined ---------------------------------------------------------------------- Ran 6 tests in 5.965s FAILED (errors=1) From dmitrey.kroshko at scipy.org Mon Dec 17 04:25:36 2007 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Mon, 17 Dec 2007 11:25:36 +0200 Subject: [SciPy-dev] Changes to the SciPy's developer trac site In-Reply-To: References: Message-ID: <47664090.6010105@scipy.org> Now I can't edit my own pages related to OpenOpt (there are no buttons "edit" no longer) and replace/attach new files in download page (http://scipy.org/scipy/scikits/wiki/OpenOptInstall) Could you fix the problem? Regards, D. Jarrod Millman wrote: > Hello, > > It looks like the spammers have figured out how to use our trac site > again. I locked down the trac site a bit to make it more difficult > for them. Basically, I removed some of the default permissions that > all users get. 
I don't think this will effect many people; but if you > have a trac acount and aren't able to do something because you don't > have permission just send me an email. > > Thanks, > > From millman at berkeley.edu Mon Dec 17 05:08:47 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Mon, 17 Dec 2007 02:08:47 -0800 Subject: [SciPy-dev] Changes to the SciPy's developer trac site In-Reply-To: <47664090.6010105@scipy.org> References: <47664090.6010105@scipy.org> Message-ID: On Dec 17, 2007 1:25 AM, dmitrey wrote: > Now I can't edit my own pages related to OpenOpt (there are no buttons > "edit" no longer) and replace/attach new files in download page > (http://scipy.org/scipy/scikits/wiki/OpenOptInstall) > Could you fix the problem? > Regards, D. Try it now and let me know if you have any more problems. -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From stefan at sun.ac.za Mon Dec 17 05:11:20 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Mon, 17 Dec 2007 12:11:20 +0200 Subject: [SciPy-dev] Trac server error: no space left on device Message-ID: <20071217101120.GK21448@mentat.za.net> To whom it may concern, The machine on which the trac server is hosted has run out of hard-disk space. OSError: [Errno 28] No space left on device: '/home/scipy/trac/numpy/attachments/ticket/630' Regards St?fan From wnbell at gmail.com Mon Dec 17 11:01:49 2007 From: wnbell at gmail.com (Nathan Bell) Date: Mon, 17 Dec 2007 10:01:49 -0600 Subject: [SciPy-dev] tests in sandbox package multigrid In-Reply-To: References: Message-ID: On Dec 17, 2007 1:40 AM, Nils Wagner wrote: > Hi Nathan, > > nearly all tests of your sandbox package multigrid work > for me. Oops. Try the most recent SVN. Is there a standard workaround for both forwards and backwards compatibility with 'set'? Will 'Sets' exist forever? 
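The usual idiom of the era for Nathan's question was a guarded import: `set` became a builtin in Python 2.4, while the `sets` module (with its `Set` class) shipped from 2.3 until its removal in Python 3.0, so the fallback only ever fires on old interpreters. A sketch:

```python
# On Python >= 2.4 the builtin exists and this is a no-op;
# on 2.3 the NameError triggers the fallback to the sets module.
try:
    set
except NameError:
    from sets import Set as set

R = set(range(3))
print(sorted(R))  # [0, 1, 2]
```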
-- Nathan Bell wnbell at gmail.com From millman at berkeley.edu Mon Dec 17 17:46:53 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Mon, 17 Dec 2007 14:46:53 -0800 Subject: [SciPy-dev] Fwd: Changes to the SciPy's developer trac site In-Reply-To: References: <47664090.6010105@scipy.org> Message-ID: Hey Dmitrey, I took care of this last night and sent a reply to the list (see below). But Fernando just told me that he never got it, so I am worried that you never got it either. Please let me know if you got my first message and whether I fixed your problem or not. Thanks, Jarrod ---------- Forwarded message ---------- From: Jarrod Millman Date: Dec 17, 2007 2:08 AM Subject: Re: [SciPy-dev] Changes to the SciPy's developer trac site To: SciPy Developers List On Dec 17, 2007 1:25 AM, dmitrey wrote: > Now I can't edit my own pages related to OpenOpt (there are no buttons > "edit" no longer) and replace/attach new files in download page > (http://scipy.org/scipy/scikits/wiki/OpenOptInstall) > Could you fix the problem? > Regards, D. Try it now and let me know if you have any more problems. -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From matthieu.brucher at gmail.com Tue Dec 18 03:52:38 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 18 Dec 2007 09:52:38 +0100 Subject: [SciPy-dev] Changes to the SciPy's developer trac site In-Reply-To: <47664090.6010105@scipy.org> References: <47664090.6010105@scipy.org> Message-ID: Hi Jarrod, I'd like to have editing privileges as well for the scikits trac (the rights are identical between the scipy and the scikit trac IIRC). thanks !
Matthieu 2007/12/17, dmitrey : > > Now I can't edit my own pages related to OpenOpt (there are no buttons > "edit" no longer) and replace/attach new files in download page > (http://scipy.org/scipy/scikits/wiki/OpenOptInstall) > Could you fix the problem? > Regards, D. > > Jarrod Millman wrote: > > Hello, > > > > It looks like the spammers have figured out how to use our trac site > > again. I locked down the trac site a bit to make it more difficult > > for them. Basically, I removed some of the default permissions that > > all users get. I don't think this will effect many people; but if you > > have a trac acount and aren't able to do something because you don't > > have permission just send me an email. > > > > Thanks, > > > > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmitrey.kroshko at scipy.org Tue Dec 18 15:44:12 2007 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Tue, 18 Dec 2007 22:44:12 +0200 Subject: [SciPy-dev] SLSQP Constrained Optimizer Status In-Reply-To: References: <475F9EF7.4010509@scipy.org> Message-ID: <4768311C.7030906@scipy.org> Hi Rob, could you check the line 224 (slsqp.py) a = numpy.concatenate( (fprime_cons(x),zeros([la,1])),1) I found the problem is here (lines 163-165): meq = len(eqcons) # meq = The number of equality constraints m = meq + len(ieqcons) # m = The total number of constraints la = array([1,m]).max() # la = So when user pass for example f_eqcons instead of eqcons, there is no appropriate handling of the situation in the code above these lines, so it produces la = 1 and error in concatenate. 
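The failure mode dmitrey reports can be reproduced in miniature (a standalone sketch with invented sizes and names, not the actual slsqp.py code): when the constraints arrive through a vector function rather than the `eqcons`/`ieqcons` lists, the list-based count leaves `la` at 1, and concatenating the constraint Jacobian with an `la`-row zero column fails on mismatched row counts.

```python
import numpy as np

eqcons, ieqcons = [], []   # empty: the caller used f_eqcons instead
meq = len(eqcons)          # 0 -- the vector function is never counted
m = meq + len(ieqcons)     # 0
la = max(1, m)             # stays 1, although the Jacobian has 3 rows

fprime_cons_x = np.ones((3, 2))   # stand-in constraint Jacobian, 3 rows
err = None
try:
    a = np.concatenate((fprime_cons_x, np.zeros((la, 1))), axis=1)
except ValueError as exc:
    err = exc
print(err)  # (3, 2) and (1, 1) disagree on the row count
```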
I will try to continue connecting slsqp to OpenOpt after you'll inform about fixing the bug, ok? Regards, D. From dmitrey.kroshko at scipy.org Wed Dec 19 06:58:55 2007 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Wed, 19 Dec 2007 13:58:55 +0200 Subject: [SciPy-dev] SLSQP Constrained Optimizer Status Message-ID: <4769077F.7010807@scipy.org> hi all, Excuse me for repeating this message once again, my 1st one that I had sent yesterday seems to be lost because of scipy.org server down for several hours -------------------------------------------------- Hi Rob, could you check the line 224 (slsqp.py) a = numpy.concatenate( (fprime_cons(x),zeros([la,1])),1) I found the problem is here (lines 163-165): meq = len(eqcons) # meq = The number of equality constraints m = meq + len(ieqcons) # m = The total number of constraints la = array([1,m]).max() # la = So when user pass for example f_eqcons instead of eqcons, there is no appropriate handling of the situation in the code above these lines, so it produces la = 1 and error in concatenate. I'll try to continue checking slsqp connecting it to OpenOpt after you'll inform about fixing the bug, ok? Regards, D. From oliphant at enthought.com Wed Dec 19 13:52:35 2007 From: oliphant at enthought.com (Travis E. Oliphant) Date: Wed, 19 Dec 2007 12:52:35 -0600 Subject: [SciPy-dev] SciPy Sprint results Message-ID: <47696873.1060008@enthought.com> Hi all, We had a great Sprint at Berkeley over last weekend. Jarrod deserves a huge hand for organizing it and Fernando should also be congratulated for making the Sprint a productive communication session with a lot of different people. Going forward, there will be a relatively informal SciPy board whose purpose is to keep SciPy (and NumPy) moving forward.
Currently, this board consists of (alphabetically) Eric Jones Robert Kern Jarrod Millman Travis Oliphant Our goal is to clean up SciPy and get it ready for 1.0 release over the next year or so (which will need lots of help from the community). If anybody else is interested in serving on this board, just send me email. As part of this goal, we will be having regular "sprints" as well virtual "bug-days" and "doc-days" where people who want to participate using IRC can join in and coordinate efforts. There will be at least one bug-day or doc-day every month over the next year (on the last Friday of the month). The first one is a "doc-day" which will be held Friday on December 28, 2007 (getting started on New Year's resolutions early). This doc-day will be virtual where anyone with an internet connection can join in on the scipy channel on irc.freenode.net. At least one board member will be available at each "doc-day" or "bug-day" (even if we have to recruit board members to make it happen :-) ) The recent Sprint was very helpful. Jarrod is putting together some material from the Sprint. But, I wanted to provide over-view information for those who may be interested in what happend. Summary: A lot of great discussion took place (and some fine actual coding by a few) which resulted in the following plans: Schedule ------------ * NumPy 1.0.5 in mid January * SciPy 0.7.0 in mid March to April * NumPy 1.1 by August 2008 (may slip a bit depending on what is wanted to be included) The plans below are for NumPy 1.0.5 and SciPy 0.7.0 unless otherwise noted. IO ---- * scipy.io will be gutted and what functionality remains will be placed in numpy.io. * scipy.io will be a place for file readers and writers for various data formats (data, audio, video, images, matlab, excel, etc.) * NumPy will get a standard binary file format (.npy/.npz) for arrays/groups_of_arrays. * NumPy will be trying to incorporate some of matplotlib's csv2rec and rec2csv functionality. 
* Pickling arrays will be discouraged (you will still be able to do it, we will just try to not make it seem that it is the "default" way to save arrays). Testing --------- * scipy unit-testing will be "nose-compliant" and therefore nose will be required to run the SciPy tests. * NumPy will still use the current testing framework but will support SciPy's desire to be nose-compliant. NumPy 1.1 tests may move to just being "nose-compliant" * The goal is to make tests easier for contributors to write. Weave --------- * weave will not move into NumPy yet, but possibly at NumPy 1.1, there could be a separate package containing all the "wrapping" support code for NumPy in a more unified fashion (if somebody is interested in this, it is a great time to jump in). Sandbox ----------- * the scipy sandbox is disappearing (except for user playgrounds) and useful code in it will be placed in other areas. Python versions -------------------- * SciPy 0.7.0 will require Python 2.4 (we can now use decorators for SciPy). * NumPy will still be useful with Python 2.3 until at least 1.1 Other discussions ---------------------- * numpy-scons will be a separate package for now for building extensions with scons (we need experience to figure out what to do with it). * fixes to repr for numpy float scalars were put in place * Thanks to Rob Falck scipy.optimize grew slsqp (sequential least-squares programming) method (allows for equality and inequality constraints). The code by Dieter Kraft was wrapped. * We will be working to coordinate efforts with William Stein (of SAGE fame) in the future. Sage developers will be coming to Austin at the end of February to do some cooperative sprinting. * Brian Granger is working on a parallel version of NumPy that is very interesting. Deprecation approaches ------------------------------- Functions in SciPy that are disappearing will be "deprecated" with appendages to the docstring to explain how to do it differently. 
The deprecation will issue a warning when the function is run. In the next version, the function will disappear. Once SciPy hits 1.0, the deprecation paradigm will be a bit more conservative.

A lot of fabulous things are happening with SciPy. It is an exciting time to be a part of it. There are a lot of ways to jump in and participate, so feel free. If there is something you think needs addressing, then please share it. We may have a simple PEP process in the future, but for now the rule is basically "working code."

Best regards,

-Travis O.

From oliphant at enthought.com Wed Dec 19 14:44:49 2007 From: oliphant at enthought.com (Travis E. Oliphant) Date: Wed, 19 Dec 2007 13:44:49 -0600 Subject: [SciPy-dev] Test Message-ID: <476974B1.6060208@enthought.com> Is this list working? -Travis O. From oliphant at enthought.com Wed Dec 19 19:25:25 2007 From: oliphant at enthought.com (Travis E. Oliphant) Date: Wed, 19 Dec 2007 18:25:25 -0600 Subject: [SciPy-dev] Another test Message-ID: <4769B675.4030703@enthought.com> Is this getting through? From oliphant at enthought.com Wed Dec 19 20:26:35 2007 From: oliphant at enthought.com (Travis E. Oliphant) Date: Wed, 19 Dec 2007 19:26:35 -0600 Subject: [SciPy-dev] Mailing list was not sending out new mails for a while. Message-ID: <4769C4CB.7080609@enthought.com> Hi all, The postfix service on the server hosting several of the mailing lists had been down since Monday. Mails to the list were preserved and archived but were not being distributed to subscribers. We restarted the postfix service and messages should now be going out. Apologies for the problem. -Travis O. From aisaac at american.edu Wed Dec 19 20:31:53 2007 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 19 Dec 2007 20:31:53 -0500 Subject: [SciPy-dev] Another test In-Reply-To: <4769B675.4030703@enthought.com> References: <4769B675.4030703@enthought.com> Message-ID: On Wed, 19 Dec 2007, "Travis E. Oliphant" apparently wrote: > Is this getting through? I saw both messages.
fwiw, Alan Isaac From steve at shrogers.com Wed Dec 19 20:32:37 2007 From: steve at shrogers.com (Steven H. Rogers) Date: Wed, 19 Dec 2007 18:32:37 -0700 Subject: [SciPy-dev] Another test In-Reply-To: <4769B675.4030703@enthought.com> References: <4769B675.4030703@enthought.com> Message-ID: <4769C635.1070107@shrogers.com> Travis E. Oliphant wrote: > Is this getting through? > Yes. From ryanlists at gmail.com Wed Dec 19 20:39:08 2007 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 19 Dec 2007 19:39:08 -0600 Subject: [SciPy-dev] SciPy Sprint results In-Reply-To: <47696873.1060008@enthought.com> References: <47696873.1060008@enthought.com> Message-ID: Thanks to all. This sounds great. Ryan On Dec 19, 2007 12:52 PM, Travis E. Oliphant wrote: > [full quote of the Sprint-results announcement snipped] From robfalck at gmail.com Wed Dec 19 20:49:08 2007 From: robfalck at gmail.com (Rob Falck) Date: Wed, 19 Dec 2007 20:49:08 -0500 Subject: [SciPy-dev] SLSQP Constrained Optimizer Status In-Reply-To: <4768311C.7030906@scipy.org> References: <475F9EF7.4010509@scipy.org> <4768311C.7030906@scipy.org> Message-ID: Thanks for the heads up. Passing the constraints via f_eqcons and f_ieqcons needs more work. I'll get on that asap. On Dec 18, 2007 3:44 PM, dmitrey wrote: > Hi Rob, could you check line 224 (slsqp.py): > > a = numpy.concatenate( (fprime_cons(x),zeros([la,1])),1) > > I found the problem is here (lines 163-165): > meq = len(eqcons) # meq = The number of equality constraints > m = meq + len(ieqcons) # m = The total number of constraints > la = array([1,m]).max() # la = > > So when a user passes, for example, f_eqcons instead of eqcons, there is no > appropriate handling of that situation in the code above these lines, so > it produces la = 1 and an error in concatenate. > > I will try to continue connecting slsqp to OpenOpt after you let me know > the bug is fixed, ok? > > Regards, D. > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -- - Rob Falck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Thu Dec 20 00:58:43 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 19 Dec 2007 22:58:43 -0700 Subject: [SciPy-dev] SciPy Sprint results In-Reply-To: References: <47696873.1060008@enthought.com> Message-ID: On Dec 19, 2007 6:39 PM, Ryan Krauss wrote: > Thanks to all. This sounds great.
> > Ryan > > On Dec 19, 2007 12:52 PM, Travis E. Oliphant > wrote: > > > > Hi all, > > > > We had a great Sprint at Berkeley over last weekend. Jarrod deserves a > > huge hand for organizing it and Fernando should also be congratulated > > for making the Sprint a productive communication session with a lot of > > different people. > > > > * NumPy will get a standard binary file format (.npy/.npz) for > > arrays/groups_of_arrays. > Will this new binary format contain endianness/type data? I am a bit concerned that we don't yet have a reliable way to distinguish extended precision floats from the coming quad precision as they both tend to use 128 bits on 64 bit machines. Perhaps extended precision should just be dropped at some point, especially as it is not particularly portable between architectures/compilers. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu Dec 20 01:28:20 2007 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 20 Dec 2007 00:28:20 -0600 Subject: [SciPy-dev] SciPy Sprint results In-Reply-To: References: <47696873.1060008@enthought.com> Message-ID: <476A0B84.1010408@gmail.com> Charles R Harris wrote: > On Dec 19, 2007 12:52 PM, Travis E. Oliphant > wrote: > > * NumPy will get a standard binary file format (.npy/.npz) for > > arrays/groups_of_arrays. > > Will this new binary format contain endianness/type data? I am a bit > concerned that we don't yet have a reliable way to distinguish extended > precision floats from the coming quad precision as they both tend to > use 128 bits on 64 bit machines. Perhaps extended precision should just > be dropped at some point, especially as it is not particularly portable > between architectures/compilers. It uses the dtype.descr to describe the dtype of the array. You can see the implementation here: http://svn.scipy.org/svn/numpy/branches/lib_for_io/format.py If it has holes, I would like to fix them.
Can you point me to some documentation on the different quad precision formats? Doesn't IEEE-854 standardize this? There has been some discussion about whether to continue with this format or attempt to read and write a very tiny subset of HDF5, so don't get too attached to the format until it hits the trunk. I'll drop some warnings into the code to that effect. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From stefan at sun.ac.za Thu Dec 20 02:07:28 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu, 20 Dec 2007 09:07:28 +0200 Subject: [SciPy-dev] SciPy Sprint results In-Reply-To: <47696873.1060008@enthought.com> References: <47696873.1060008@enthought.com> Message-ID: <20071220070728.GD7721@mentat.za.net> Hi Travis, During the sprint I also merged Pierre's MaskedArray code into the maskedarray branch. That is nearly done, with only a few unit tests still failing -- ones brought over from the old numpy.ma. This is mainly due to some changes in the API, for example put and putmask now behave like their ndarray counterparts. Pierre, would you remind us of any other such changes, and why they were made? What is the road forward with this code? We will probably only merge API changes into 1.4. Do we use svnmerge to keep the branch up to date until then? The branch can be found at http://svn.scipy.org/svn/numpy/branches/maskedarray Regards Stéfan On Wed, Dec 19, 2007 at 12:52:35PM -0600, Travis E. Oliphant wrote: > Hi all, > > We had a great Sprint at Berkeley over last weekend. Jarrod deserves a > huge hand for organizing it and Fernando should also be congratulated > for making the Sprint a productive communication session with a lot of > different people.
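As a concrete note on the .npy format discussed earlier in this thread: because the header records the array's dtype via dtype.descr, byte order is part of the file and survives a save/load round trip. A minimal sketch with a modern NumPy (the format was still in flux at the time of these messages, so treat this as illustrative, not a description of the 2007 branch):

```python
import io
import numpy as np

# '>f8' is an explicitly big-endian 64-bit float, so the byte order
# is visible in the dtype rather than defaulting to the native one.
a = np.array([0.0, 1.0, 2.0, 3.0], dtype='>f8')

# Write a .npy "file" to an in-memory buffer and read it back.
buf = io.BytesIO()
np.save(buf, a)
buf.seek(0)
b = np.load(buf)

# The stored dtype string (including the '>' byte-order flag) and the
# values come back unchanged.
print(b.dtype.str, b.tolist())
```

Writing to a BytesIO buffer instead of a real file keeps the example self-contained; np.save/np.load accept any file-like object.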
From dmitrey.kroshko at scipy.org Thu Dec 20 02:30:03 2007 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Thu, 20 Dec 2007 09:30:03 +0200 Subject: [SciPy-dev] Fwd: Changes to the SciPy's developer trac site In-Reply-To: References: <47664090.6010105@scipy.org> Message-ID: <476A19FB.4020003@scipy.org> hi Jarrod, thank you, it works ok for me D. Jarrod Millman wrote: > Hey Dmitrey, > > I took care of this last night and sent a reply to the list (see > below). But Fernando just told me that he never got it, so I am > worried that you never got it either. Please let me know if you got > my first message and whether I fixed your problem or not. > > Thanks, > Jarrod > > > ---------- Forwarded message ---------- > From: Jarrod Millman > Date: Dec 17, 2007 2:08 AM > Subject: Re: [SciPy-dev] Changes to the SciPy's developer trac site > To: SciPy Developers List > > > On Dec 17, 2007 1:25 AM, dmitrey wrote: > >> Now I can't edit my own pages related to OpenOpt (there are no buttons >> "edit" no longer) and replace/attach new files in download page >> (http://scipy.org/scipy/scikits/wiki/OpenOptInstall) >> Could you fix the problem? >> Regards, D. >> > > Try it now and let me know if you have any more problems. > > > -- > Jarrod Millman > Computational Infrastructure for Research Labs > 10 Giannini Hall, UC Berkeley > phone: 510.643.4014 > http://cirl.berkeley.edu/ > > > > From brian.lee.hawthorne at gmail.com Thu Dec 20 03:14:38 2007 From: brian.lee.hawthorne at gmail.com (Brian Hawthorne) Date: Thu, 20 Dec 2007 00:14:38 -0800 Subject: [SciPy-dev] Test In-Reply-To: <476974B1.6060208@enthought.com> References: <476974B1.6060208@enthought.com> Message-ID: <796269930712200014u7c395ccet65cc817e6c543d12@mail.gmail.com> working for me On Dec 19, 2007 11:44 AM, Travis E. Oliphant wrote: > > Is this list working? > > -Travis O. 
> > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From millman at berkeley.edu Thu Dec 20 17:15:54 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Thu, 20 Dec 2007 14:15:54 -0800 Subject: [SciPy-dev] new P3 binaries for NumPy 1.0.4 and SciPy 0.6.0 Message-ID: Hey, If you are having problems with NumPy and SciPy on Pentium III machines running Windows, please try the newly released binaries: numpy-1.0.4.win32-p3-py2.3.exe numpy-1.0.4.win32-p3-py2.4.exe numpy-1.0.4.win32-p3-py2.5.exe numpy-1.0.4.win32-p3-py2.5.msi scipy-0.6.0.win32-p3-py2.3.exe scipy-0.6.0.win32-p3-py2.4.exe scipy-0.6.0.win32-p3-py2.5.exe scipy-0.6.0.win32-p3-py2.5.msi Enjoy, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From robfalck at gmail.com Thu Dec 20 21:15:42 2007 From: robfalck at gmail.com (Rob Falck) Date: Thu, 20 Dec 2007 21:15:42 -0500 Subject: [SciPy-dev] SLSQP Constrained Optimizer Status In-Reply-To: <4768311C.7030906@scipy.org> References: <475F9EF7.4010509@scipy.org> <4768311C.7030906@scipy.org> Message-ID: I've submitted ticket #570 that includes an updated version of slsqp.py. The function fmin_slsqp should now properly accept constraints via the callable arguments f_eqcons and f_ieqcons. In addition, callable arguments that provide the constraint Jacobians are also working now. I also added a function to slsqp.py for approximating the Jacobians if fprime_eqcons or fprime_ieqcons are not provided. Perhaps this routine would be better off in scipy.common, since it's very similar to approx_fprime. I also attached a new version of slsqp_test.py to the ticket that now shows examples of how to use fmin_slsqp with the new constraint arguments.
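For readers following this thread today, the constraint interface being described can be sketched against the modern scipy.optimize.fmin_slsqp signature; the toy problem below is made up for illustration and is not the test case from the ticket:

```python
import numpy as np
from scipy.optimize import fmin_slsqp

# Minimize (x0 - 1)^2 + (x1 - 2)^2
# subject to the equality  x0 + x1 == 2
# and the inequality       x0 - x1 + 1 >= 0
# (f_ieqcons returns values that must be non-negative at a feasible point).
objective = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

x = fmin_slsqp(
    objective,
    np.array([0.0, 0.0]),                            # initial guess
    f_eqcons=lambda x: np.array([x[0] + x[1] - 2.0]),
    f_ieqcons=lambda x: np.array([x[0] - x[1] + 1.0]),
    iprint=0,                                        # silent operation
)
```

By the Lagrange conditions the constrained minimum is at (0.5, 1.5), where the inequality is exactly active.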
Please let me know if you have any questions or uncover any more issues. http://scipy.org/scipy/scipy/ticket/570 - Rob Falck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Fri Dec 21 03:20:28 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 21 Dec 2007 01:20:28 -0700 Subject: [SciPy-dev] SciPy Sprint results In-Reply-To: <476A0B84.1010408@gmail.com> References: <47696873.1060008@enthought.com> <476A0B84.1010408@gmail.com> Message-ID: On Dec 19, 2007 11:28 PM, Robert Kern wrote: > Charles R Harris wrote: > > On Dec 19, 2007 12:52 PM, Travis E. Oliphant > > wrote: > > > > * NumPy will get a standard binary file format (.npy/.npz) for > > > arrays/groups_of_arrays. > > > > Will this new binary format contain endianness/type data? I am a bit > > concerned that we don't yet have a reliable way to distinguish extended > > precision floats from the coming quad precision as they both tend to > > use 128 bits on 64 bit machines. Perhaps extended precision should just > > be dropped at some point, especially as it is not particularly portable > > between architectures/compilers. > > It uses the dtype.descr to describe the dtype of the array. You can see > the > implementation here: > > http://svn.scipy.org/svn/numpy/branches/lib_for_io/format.py > > If it has holes, I would like to fix them. Can you point me to some > documentation on the different quad precision formats? Doesn't IEEE-854 > standardize this? > > There has been some discussion about whether to continue with this format > or > attempt to read and write a very tiny subset of HDF5, so don't get too > attached > to the format until it hits the trunk. I'll drop some warnings into the > code to > that effect. Here is a PDF version of DRAFT Standard for Floating-Point Arithmetic IEEE P754. Table 2 on page 16 gives a good summary of the proposed formats.
I believe P754 is the latest step in the revision of the 754 standard as 754r concluded last year. Note that the proposed standard also adds decimal floats, with decimal digits packed 3 in 10 bits. Here is a bit more from Intel, who are apparently working on implementations. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmitrey.kroshko at scipy.org Fri Dec 21 03:25:57 2007 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Fri, 21 Dec 2007 10:25:57 +0200 Subject: [SciPy-dev] SLSQP Constrained Optimizer Status Message-ID: <476B7895.3000908@scipy.org> Hi Rob, 1. You should replace ( len(eqcons) + len(ieqcons), len(x0) ) by ( len(eqcons), len(x0) ) and ( len(ieqcons), len(x0) ) in the slsqp docstring: fprime_eqcons : callable f(x,*args) A function of the form f(x, *args) that returns the m by n array of equality constraint normals. If not provided, the normals will be approximated. The array returned by fprime_cons should be sized as ( len(eqcons) + len(ieqcons), len(x0) ). fprime_ieqcons : callable f(x,*args) A function of the form f(x, *args) that returns the m by n array of inequality constraint normals. If not provided, the normals will be approximated. The array returned by fprime_cons should be sized as ( len(eqcons) + len(ieqcons), len(x0) ). 2. I have written the connection to OO, but it still fails to solve any of my primitive tests: I constantly get the stop case "Singular matrix C in LSQ subproblem". So I have tried to call slsqp directly, without the OO interface. See the file below; it would be nice if you could make the example work, and then I'll continue checking slsqp. Currently it fails at line 206, meq = len(f_eqcons(x)), where f_eqcons(x) = array(133.40163659577431). Note also that I passed 2 equality constraints but this somehow has only 1 value (I noticed you replaced f_eqcons with something at line 159; maybe that's the cause?) Regards, D.
from numpy import * from scipy.optimize.slsqp import fmin_slsqp N = 10 M = 5 ff = lambda x: (abs(x-M) ** 1.5).sum() x0 = cos(arange(N)) c = lambda x: [2* x[0] **4-32, x[1]**2+x[2]**2 - 8] h1 = lambda x: 1e1*(x[-1]-1)**4 h2 = lambda x: (x[-2]-1.5)**4 #TODO: pass bounds to fmin_slsqp when all other will work ok ##bounds = [(-6,6)]*10 ##bounds[3] = (5.5, 6) ##bounds[4] = (6, 4.5) diffInt = 1e-8 x = fmin_slsqp( ff, x0 , f_eqcons=lambda x: asfarray(h1(x), h2(x)), f_ieqcons=c, bounds = [], fprime = None, fprime_eqcons=None, fprime_ieqcons=None, args = (), iter = 100, acc = 1.0E-6, iprint = -1, full_output = 0, epsilon = diffInt) From dmitrey.kroshko at scipy.org Fri Dec 21 03:40:57 2007 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Fri, 21 Dec 2007 10:40:57 +0200 Subject: [SciPy-dev] SLSQP Constrained Optimizer Status In-Reply-To: References: <475F9EF7.4010509@scipy.org> <4768311C.7030906@scipy.org> Message-ID: <476B7C19.1020003@scipy.org> P.S. I have tried passing ieqcons, eqcons as both Python lists and both numpy arrays; it doesn't work either (same error as I mentioned). (In the code I sent, the 1st is Python lists, the 2nd is an array.) Regards, D. From robfalck at gmail.com Fri Dec 21 09:04:05 2007 From: robfalck at gmail.com (Rob Falck) Date: Fri, 21 Dec 2007 09:04:05 -0500 Subject: [SciPy-dev] SLSQP Constrained Optimizer Status In-Reply-To: <476B7C19.1020003@scipy.org> References: <475F9EF7.4010509@scipy.org> <4768311C.7030906@scipy.org> <476B7C19.1020003@scipy.org> Message-ID: Thanks for the catch in the docstring. I've fixed that on the version of slsqp.py attached to the ticket. I'm not positive this is the correct minimum, but it definitely is trending towards this result when that error is encountered. This appears to be a case where SLSQP just can't quite zero in on the minimum. I know it's a kludge, but lowering the requested accuracy to 1.0E-5 yields a converged solution, within 3 or 4 decimal places of what appears to be the true minimum.
Does this appear to be yielding proper results? from numpy import * from scipy.optimize.slsqp import fmin_slsqp N = 10 M = 5 ff = lambda x: (abs(x-M) ** 1.5).sum() x0 = cos(arange(N)) print "Initial guess", x0 c = lambda x: asfarray([2* x[0] **4-32, x[1]**2+x[2]**2 - 8]) h1 = lambda x: 1e1*(x[-1]-1)**4 h2 = lambda x: (x[-2]-1.5)**4 #TODO: pass bounds to fmin_slsqp when all other will work ok bounds = [(-6,6)]*10 bounds[3] = (5.5, 6) #bounds[4] = (6, 4.5) diffInt = 1e-8 # Unconstrained print "Unconstrained" x = fmin_slsqp( ff, x0, epsilon = diffInt, bounds=bounds, iprint=1, acc= 1.0E-12) print x print "\n" # Inequality constraints print "Inequality constraints" x = fmin_slsqp( ff, x0, epsilon = diffInt, iprint=1, acc=1.0E-12, f_ieqcons = c, bounds=bounds ) print x print "\n" h1 = lambda x: 1e1*(x[-1]-1)**4 h2 = lambda x: (x[-2]-1.5)**4 # Inequality and Equality constraints print "Inequality and Equality constraints" x = fmin_slsqp( ff, x0, epsilon = diffInt, iprint=1, acc=1.0E-12, f_eqcons=lambda x: asfarray([h1(x), h2(x)]), f_ieqcons = c, bounds=bounds ) print "Solution vector", x print "Equality constraint values:",h1(x), h2(x) print "Inequality constraint values:", c(x) print "\n" # Inequality and Equality constraints - lower requested accuracy print "Inequality and Equality constraints - lower requested accuracy" x = fmin_slsqp( ff, x0, epsilon = diffInt, iprint=1, acc=1.0E-5, f_eqcons=lambda x: asfarray([h1(x), h2(x)]), f_ieqcons = c, bounds=bounds ) print "Solution vector", x print "Equality constraint values:",h1(x), h2(x) print "Inequality constraint values:", c(x) -- - Rob Falck -------------- next part -------------- An HTML attachment was scrubbed...
URL: From robfalck at gmail.com Fri Dec 21 10:31:27 2007 From: robfalck at gmail.com (Rob Falck) Date: Fri, 21 Dec 2007 10:31:27 -0500 Subject: [SciPy-dev] SLSQP Constrained Optimizer Status In-Reply-To: References: <475F9EF7.4010509@scipy.org> <4768311C.7030906@scipy.org> <476B7C19.1020003@scipy.org> Message-ID: Also, your bounds[4] entry is backwards. This results in a nasty glibc error on my system. I'll add the logic to test for this later today. -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Fri Dec 21 11:03:18 2007 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 21 Dec 2007 09:03:18 -0700 Subject: [SciPy-dev] SciPy Sprint results In-Reply-To: References: <47696873.1060008@enthought.com> <476A0B84.1010408@gmail.com> Message-ID: On Dec 21, 2007 1:20 AM, Charles R Harris wrote: > > > On Dec 19, 2007 11:28 PM, Robert Kern wrote: > > > Charles R Harris wrote: > > > On Dec 19, 2007 12:52 PM, Travis E. Oliphant < > > oliphant at enthought.com > > > > wrote: > > > > > > * NumPy will get a standard binary file format (.npy/.npz) for > > > > arrays/groups_of_arrays. > > > > > > Will this new binary format contain endianness/type data? I am a bit > > > concerned that we don't yet have a reliable way to distinguish > > extended > > > precision floats from the coming quad precision as they both tend to > > > use 128 bits on 64 bit machines. Perhaps extended precision should > > just > > > be dropped at some point, especially as it is not particularly > > portable > > > between architectures/compilers. > > > > It uses the dtype.descr to describe the dtype of the array. You can see > > the > > implementation here: > > > > http://svn.scipy.org/svn/numpy/branches/lib_for_io/format.py > > > > If it has holes, I would like to fix them. Can you point me to some > > documentation on the different quad precision formats? Doesn't IEEE-854 > > standardize this?
> > > > There has been some discussion about whether to continue with this > > format or > > attempt to read and write a very tiny subset of HDF5, so don't get too > > attached > > to the format until it hits the trunk. I'll drop some warnings into the > > code to > > that effect. > > > Here is a PDF version of DRAFT Standard for Floating-Point Arithmetic IEEE > P754. Table > 2 on page 16 gives a good summary of the proposed formats. I believe P754 is > the latest step in the revision of the 754 standard as 754r concluded last > year. Note that the proposed standard also adds decimal floats, with decimal > digits packed 3 in 10 bits. Here is a bit more from Intel, > who are apparently working on implementations. > The actual _use_ of the floating types will depend on the C compilers. Quads will probably show up as long doubles, displacing extended precision doubles. What will happen to BLAS, LAPACK, and all those bits? Quad precision support will likely be added soon enough; there are already quad versions of BLAS out there. Here is an interesting presentation somewhat related to such things. Hey, maybe we should rewrite numpy in FORTRAN ;) Anyway, the current identification of numbers by bit width works fine for stepping/slicing through data and is probably influenced by that implementation detail, but as far as numerics go I think we need to know what is actually *in* those bits and it would be nice to have that method in place early on. Maybe a short UTF-8 header with enough room to actually be descriptive would do the trick, or we could hash longer names to get a short identifier, although hashed values would have to be recognized, not decoded. I think something wordy and descriptive, i.e., "big endian IEEE binary128", would be a good starting point. Chuck -------------- next part -------------- An HTML attachment was scrubbed...
URL: From aisaac at american.edu Fri Dec 21 14:51:12 2007 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 21 Dec 2007 14:51:12 -0500 Subject: [SciPy-dev] new P3 binaries for NumPy 1.0.4 and SciPy 0.6.0 In-Reply-To: References: Message-ID: On Thu, 20 Dec 2007, Jarrod Millman apparently wrote: > If you are having problems with NumPy and SciPy on Pentium III > machines running Windows, please try the newly released binaries: I used the Python 2.5 binaries (.exe) on my home P3 and all seems well in use. I got no failures of numpy.test() but some warnings and failures from scipy.test(). The failures are listed below. Comment: finding the SciPy binaries is not obvious, since they do not show here Thank you! Alan Isaac ====================================================================== FAIL: check_syevr (scipy.lib.tests.test_lapack.test_flapack_float) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Python25\lib\site-packages\scipy\lib\lapack\tests\esv_tests.py", line 41, in check_syevr assert_array_almost_equal(w,exact_w) File "C:\Python25\Lib\site-packages\numpy\testing\utils.py", line 232, in assert_array_almost_equal header='Arrays are not almost equal') File "C:\Python25\Lib\site-packages\numpy\testing\utils.py", line 217, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 33.3333333333%) x: array([-0.66992444, 0.48769444, 9.18222618], dtype=float32) y: array([-0.66992434, 0.48769389, 9.18223045]) ====================================================================== FAIL: check_syevr_irange (scipy.lib.tests.test_lapack.test_flapack_float) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Python25\lib\site-packages\scipy\lib\lapack\tests\esv_tests.py", line 66, in check_syevr_irange assert_array_almost_equal(w,exact_w[rslice]) File
"C:\Python25\Lib\site-packages\numpy\testing\utils.py", line 232, in assert_array_almost_equal header='Arrays are not almost equal') File "C:\Python25\Lib\site-packages\numpy\testing\utils.py", line 217, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 33.3333333333%) x: array([-0.66992444, 0.48769444, 9.18222618], dtype=float32) y: array([-0.66992434, 0.48769389, 9.18223045]) ====================================================================== FAIL: check_bicg (scipy.linalg.tests.test_iterative.test_iterative_solvers) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Python25\lib\site-packages\scipy\linalg\tests\test_iterative.py", line 57, in check_bicg assert norm(dot(self.A, x) - self.b) < 5*self.tol AssertionError ====================================================================== FAIL: check_bicgstab (scipy.linalg.tests.test_iterative.test_iterative_solvers) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Python25\lib\site-packages\scipy\linalg\tests\test_iterative.py", line 69, in check_bicgstab assert norm(dot(self.A, x) - self.b) < 5*self.tol AssertionError ====================================================================== FAIL: check_cgs (scipy.linalg.tests.test_iterative.test_iterative_solvers) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Python25\lib\site-packages\scipy\linalg\tests\test_iterative.py", line 63, in check_cgs assert norm(dot(self.A, x) - self.b) < 5*self.tol AssertionError ====================================================================== FAIL: test_fromimage (scipy.misc.tests.test_pilutil.test_pilutil) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Python25\lib\site-packages\scipy\misc\tests\test_pilutil.py",
line 34, in test_fromimage assert img.min() >= imin AssertionError ====================================================================== FAIL: test_imresize (scipy.misc.tests.test_pilutil.test_pilutil) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Python25\lib\site-packages\scipy\misc\tests\test_pilutil.py", line 18, in test_imresize assert_equal(im1.shape,(11,22)) File "C:\Python25\Lib\site-packages\numpy\testing\utils.py", line 137, in assert_equal assert_equal(len(actual),len(desired),err_msg,verbose) File "C:\Python25\Lib\site-packages\numpy\testing\utils.py", line 145, in assert_equal assert desired == actual, msg AssertionError: Items are not equal: ACTUAL: 0 DESIRED: 2 ---------------------------------------------------------------------- Ran 1719 tests in 73.656s FAILED (failures=7) From dmitrey.kroshko at scipy.org Sun Dec 23 06:24:36 2007 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Sun, 23 Dec 2007 13:24:36 +0200 Subject: [SciPy-dev] scipy.org is down again Message-ID: <476E4574.30709@scipy.org> scipy.org is down again, at least for some hours Hence, I suspect, mail list as well maybe, it has sense to split mail list server and scipy.org webserver? (elseware it's hard for users to inform about scipy.org webserver down). Regards, D. From cookedm at physics.mcmaster.ca Mon Dec 24 01:06:38 2007 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Mon, 24 Dec 2007 01:06:38 -0500 Subject: [SciPy-dev] Moving numexpr to its own project Message-ID: <6F89103E-9007-49C9-8B21-F0B4528EE9A3@physics.mcmaster.ca> Hi, With the pending closing of the scipy.sandbox (anybody want to cough up more details on that, btw? When? How?), I want to move numexpr to its own project, likely hosted on Google Code. It's not something I think is "sciencey" enough for a scikit (plus, I don't like sharing a bug database with other, separate, projects), and it's more like weave (but simpler). 
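[The syevr mismatches in Alan's report above come down to float32 vs. float64 rounding being caught by the decimal tolerance of numpy.testing.assert_array_almost_equal. A minimal sketch of that behavior, with invented values of a comparable magnitude (not the actual eigenvalues from the report):]

```python
import numpy as np
from numpy.testing import assert_array_almost_equal

# Invented values: 10.1 is not exactly representable in float32, and at
# magnitude ~10 its single-precision rounding error (~4e-7) is larger
# than the threshold that decimal=7 implies (1.5e-7).
x64 = np.array([0.1, 10.1])
x32 = x64.astype(np.float32)

# Looser tolerance: abs(desired - actual) must be < 1.5 * 10**-6 -> passes.
assert_array_almost_equal(x32, x64, decimal=6)

# One decimal tighter -> raises, much like the float32 syevr results above.
try:
    assert_array_almost_equal(x32, x64, decimal=7)
    tight_check_passed = True
except AssertionError:
    tight_check_passed = False
```

The reported eigenvalues differ by only about 4e-6 at magnitude ~9, which is consistent with single-precision LAPACK rounding rather than a genuine numerical bug.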
Tim H., Francesc, Ivan?  You're the main contributors; does this work for
you?

-- 
|>|\/|<
/------------------------------------------------------------------\
|David M. Cooke              http://arbutus.physics.mcmaster.ca/dmc/
|cookedm at physics.mcmaster.ca

From millman at berkeley.edu Mon Dec 24 01:59:52 2007
From: millman at berkeley.edu (Jarrod Millman)
Date: Sun, 23 Dec 2007 22:59:52 -0800
Subject: [SciPy-dev] Moving numexpr to its own project
In-Reply-To: <6F89103E-9007-49C9-8B21-F0B4528EE9A3@physics.mcmaster.ca>
References: <6F89103E-9007-49C9-8B21-F0B4528EE9A3@physics.mcmaster.ca>
Message-ID:

On Dec 23, 2007 10:06 PM, David M. Cooke wrote:
> With the pending closing of the scipy.sandbox (anybody want to cough
> up more details on that, btw? When? How?), I want to move numexpr to
> its own project, likely hosted on Google Code. It's not something I
> think is "sciencey" enough for a scikit (plus, I don't like sharing a
> bug database with other, separate, projects), and it's more like weave
> (but simpler).

I will send out more details later tonight (I will write it up now); but
the short answer is that we would like to remove the sandbox for the 0.7.0
release.  Most of the code will move to scikits, but some will move
directly into scipy.

Thanks,

-- 
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/

From faltet at carabos.com Mon Dec 24 02:56:08 2007
From: faltet at carabos.com (Francesc Altet)
Date: Mon, 24 Dec 2007 08:56:08 +0100
Subject: [SciPy-dev] Moving numexpr to its own project
In-Reply-To: <6F89103E-9007-49C9-8B21-F0B4528EE9A3@physics.mcmaster.ca>
References: <6F89103E-9007-49C9-8B21-F0B4528EE9A3@physics.mcmaster.ca>
Message-ID: <200712240856.08804.faltet@carabos.com>

A Monday 24 December 2007, David M. Cooke escrigué:
> Hi,
>
> With the pending closing of the scipy.sandbox (anybody want to cough
> up more details on that, btw? When?
> How?), I want to move numexpr to
> its own project, likely hosted on Google Code. It's not something I
> think is "sciencey" enough for a scikit (plus, I don't like sharing a
> bug database with other, separate, projects), and it's more like
> weave (but simpler).
>
> Tim H., Francesc, Ivan? You're the main contributors; does this work
> for you?

Sounds good here.  I'd like to help fix some things, like those listed in
ticket #529:

* Implement more functions, like exp, ln, log10 and others.

* The setup.py is too terse, and seems to support only the gcc compiler.
More work should be done to support other compilers, most especially
MSVC.  Also, MSVC should be directed to compile with optimization level 1
only, in order to get reasonable compile times.

But that can wait until numexpr is in Google Code.  It's nice to see
numexpr taking flight as a stand-alone project :)

Cheers,

-- 
>0,0<   Francesc Altet     http://www.carabos.com/
 V V   Cárabos Coop. V.   Enjoy Data
  "-"

From jre at enthought.com Mon Dec 24 05:53:05 2007
From: jre at enthought.com (J. Ryan Earl)
Date: Mon, 24 Dec 2007 04:53:05 -0600
Subject: [SciPy-dev] scipy.org is down again
In-Reply-To: <476E4574.30709@scipy.org>
References: <476E4574.30709@scipy.org>
Message-ID: <476F8F91.5040401@enthought.com>

Our building lost power for 5 hours and there were unfortunate
complications that kept DNS and some other services out of commission.
Technically, the SciPy server was never down, but no one could resolve
its DNS. =(

Please let me know if anyone notices anything else fishy.  I believe all
the external services have been restored.

Regards,
-ryan

dmitrey wrote:
> scipy.org is down again, at least for some hours.
> Hence, I suspect, the mailing list is down as well.
>
> Maybe it makes sense to split the mailing list server and the scipy.org
> webserver?  (Otherwise it's hard for users to report that the scipy.org
> webserver is down.)
>
> Regards, D.
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev
>

From dmitrey.kroshko at scipy.org Mon Dec 24 11:53:08 2007
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Mon, 24 Dec 2007 18:53:08 +0200
Subject: [SciPy-dev] SLSQP Constrained Optimizer Status
In-Reply-To:
References: <475F9EF7.4010509@scipy.org> <4768311C.7030906@scipy.org> <476B7C19.1020003@scipy.org>
Message-ID: <476FE3F4.3000802@scipy.org>

hi Rob,
seems like all is working correctly for now.
Do you mind if I commit your changes from the ticket to the svn repository?
Regards, D.

From robfalck at gmail.com Mon Dec 24 17:47:22 2007
From: robfalck at gmail.com (Rob Falck)
Date: Mon, 24 Dec 2007 17:47:22 -0500
Subject: [SciPy-dev] SLSQP Constrained Optimizer Status
In-Reply-To: <476FE3F4.3000802@scipy.org>
References: <475F9EF7.4010509@scipy.org> <4768311C.7030906@scipy.org> <476B7C19.1020003@scipy.org> <476FE3F4.3000802@scipy.org>
Message-ID:

I was granted commit privileges, and have already submitted these changes.
I'm glad things are working well for you.  If you have any comparisons
with other OpenOpt solvers that you could share, I would appreciate seeing
them.

On Dec 24, 2007 11:53 AM, dmitrey wrote:
> hi Rob,
> seems like all is working correctly for now.
> Do you mind if I commit your changes from the ticket to the svn repository?
> Regards, D.
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev
>

-- 
- Rob Falck
-------------- next part --------------
An HTML attachment was scrubbed...
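[For readers following the SLSQP thread: the wrapper under discussion is the SLSQP solver that later shipped in scipy.optimize. A hedged sketch of exercising it through the present-day scipy.optimize.minimize interface — this API postdates the thread, and the problem below is the standard small documentation example, not dmitrey's bench1/bench2:]

```python
import numpy as np
from scipy.optimize import minimize

# Minimize (x0 - 1)^2 + (x1 - 2.5)^2 subject to three linear
# inequality constraints and non-negativity bounds.
def objective(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

constraints = [
    {"type": "ineq", "fun": lambda x:  x[0] - 2 * x[1] + 2},
    {"type": "ineq", "fun": lambda x: -x[0] - 2 * x[1] + 6},
    {"type": "ineq", "fun": lambda x: -x[0] + 2 * x[1] + 2},
]
bounds = [(0, None), (0, None)]

res = minimize(objective, x0=[2.0, 0.0], method="SLSQP",
               bounds=bounds, constraints=constraints)
```

A "Singular matrix C in LSQ subproblem" message like the one dmitrey reports later in the thread is emitted by the underlying Fortran routine when its least-squares subproblem becomes degenerate, which can happen near a tightly constrained optimum.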
URL:

From dmitrey.kroshko at scipy.org Tue Dec 25 04:01:04 2007
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Tue, 25 Dec 2007 11:01:04 +0200
Subject: [SciPy-dev] SLSQP Constrained Optimizer Status
In-Reply-To:
References: <475F9EF7.4010509@scipy.org> <4768311C.7030906@scipy.org> <476B7C19.1020003@scipy.org> <476FE3F4.3000802@scipy.org>
Message-ID: <4770C6D0.9050409@scipy.org>

The problem is that it doesn't pass the bench1 & bench2 problems I had
used (though it passes simpler examples).  Also, it sometimes stops not
far from the minimum and says "Singular matrix C in LSQ subproblem".

You can try it yourself (on the problems you deal with); I have already
committed the related changes to OO.

Regards, D.

Rob Falck wrote:
> I was granted commit privileges, and have already submitted these
> changes. I'm glad things are working well for you. If you have any
> comparisons with other OpenOpt solvers that you could share, I would
> appreciate seeing them.
>
> On Dec 24, 2007 11:53 AM, dmitrey wrote:
>
> hi Rob,
> seems like all is working correctly for now.
> Do you mind if I commit your changes from the ticket to the svn
> repository?
> Regards, D.
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev
>
>
> --
> - Rob Falck
> ------------------------------------------------------------------------
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev
>

From dmitrey.kroshko at scipy.org Tue Dec 25 14:28:27 2007
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Tue, 25 Dec 2007 21:28:27 +0200
Subject: [SciPy-dev] new google initiative
Message-ID: <477159DB.6070202@scipy.org>

Hi all,
could anyone explain in a few words what is mentioned here in the Python
mailing list message and related links?  Is the google initiative some
kind of GSoC equivalent?  Does it have any financial support?  I would
read it myself, but there is too much difficult English text for me.

http://groups.google.com/group/comp.lang.python.announce/browse_thread/thread/ec706f7ed34a8951/f7d7223e9b66cd33#f7d7223e9b66cd33

Regards, D.

From matthieu.brucher at gmail.com Tue Dec 25 14:33:27 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Tue, 25 Dec 2007 20:33:27 +0100
Subject: [SciPy-dev] new google initiative
In-Reply-To: <477159DB.6070202@scipy.org>
References: <477159DB.6070202@scipy.org>
Message-ID:

Hi,

It has financial support, but it is aimed at young developers (before
they go to university).  It thus consists of simple tasks (rewriting
documentation, adding some tests, ...) so that they can get a feel for
Open Source software.

Matthieu

2007/12/25, dmitrey :
>
> Hi all,
> could anyone explain in a few words what is mentioned here in the Python
> mailing list message and related links?
> Is the google initiative some kind of GSoC equivalent? Does it have any
> financial support?
> I would read it myself, but there is too much difficult English text for me.
>
> http://groups.google.com/group/comp.lang.python.announce/browse_thread/thread/ec706f7ed34a8951/f7d7223e9b66cd33#f7d7223e9b66cd33
>
> Regards, D.
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev
>

-- 
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From robert.kern at gmail.com Tue Dec 25 14:35:01 2007
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 25 Dec 2007 14:35:01 -0500
Subject: [SciPy-dev] new google initiative
In-Reply-To: <477159DB.6070202@scipy.org>
References: <477159DB.6070202@scipy.org>
Message-ID: <47715B65.2020000@gmail.com>

dmitrey wrote:
> Hi all,
> could anyone explain in a few words what is mentioned here in the Python
> mailing list message and related links?

Short answer: the program is only for students who have not started
university studies, so I do not believe that you are eligible.

http://code.google.com/opensource/ghop/2007-8/

It is not really like GSoC.  It is a contest where you *might* win
prizes; you do not get steady financial support.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From millman at berkeley.edu Thu Dec 27 16:10:29 2007
From: millman at berkeley.edu (Jarrod Millman)
Date: Thu, 27 Dec 2007 13:10:29 -0800
Subject: [SciPy-dev] The future of the scipy.sandbox and a reminder of upcoming doc-day
Message-ID:

Hello,

I started a blog, which has two posts about the NumPy/SciPy code
sprint/strategic planning meeting early this month.  It doesn't have
very much content yet, but I thought it would be worth letting everyone
know where it is: http://jarrodmillman.blogspot.com/

The most recent entry addresses some of the reasons we decided to retire
the sandbox.  I also created a ticket for removing the sandbox:
http://projects.scipy.org/scipy/scipy/ticket/573

I will work on further documenting what went on at the sprint and some
other things tomorrow during the first NumPy/SciPy doc-day.  As a
reminder, Friday's doc-day will be virtual: anyone with an internet
connection can join in on the scipy channel on irc.freenode.net.
Thanks,

-- 
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/

From matthieu.brucher at gmail.com Thu Dec 27 17:01:39 2007
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Thu, 27 Dec 2007 23:01:39 +0100
Subject: [SciPy-dev] The future of the scipy.sandbox and a reminder of upcoming doc-day
In-Reply-To:
References:
Message-ID:

Hi,

Some of the packages already are in a scikit, like pyem and svm.

I'd like to see arpack in the sparse folder (?) very soon, as some of my
code needs a sparse solver (I proposed that it could be moved into a
scikit, but it makes sense to keep it in scipy so that sparse solvers
are available in scipy).

BTW, for the sparse solvers in linsolve, the module name does not show
that it is for sparse arrays.

Matthieu

2007/12/27, Jarrod Millman :
>
> Hello,
>
> I started a blog, which has two posts about the NumPy/SciPy code
> sprint/strategic planning meeting early this month. It doesn't have
> very much content yet, but I thought it would be worth letting
> everyone know where it is: http://jarrodmillman.blogspot.com/
>
> The most recent entry addresses some of the reasons we decided to
> retire the sandbox. I also created a ticket for removing the sandbox:
> http://projects.scipy.org/scipy/scipy/ticket/573
>
> I will work on further documenting what went on at the sprint and some
> other things tomorrow during the first NumPy/SciPy doc-day. As a
> reminder, Friday's doc-day will be virtual: anyone with an
> internet connection can join in on the scipy channel on
> irc.freenode.net.
>
> Thanks,
>
> --
> Jarrod Millman
> Computational Infrastructure for Research Labs
> 10 Giannini Hall, UC Berkeley
> phone: 510.643.4014
> http://cirl.berkeley.edu/
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev
>

-- 
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From fperez.net at gmail.com Thu Dec 27 17:35:44 2007
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 27 Dec 2007 15:35:44 -0700
Subject: [SciPy-dev] The future of the scipy.sandbox and a reminder of upcoming doc-day
In-Reply-To:
References:
Message-ID:

On Dec 27, 2007 2:10 PM, Jarrod Millman wrote:
> Hello,
>
> I started a blog, which has two posts about the NumPy/SciPy code
> sprint/strategic planning meeting early this month. It doesn't have
> very much content yet, but I thought it would be worth letting
> everyone know where it is: http://jarrodmillman.blogspot.com/

Any objections if I merge Min's work into the trunk?  I'm afraid if we
don't do it soon, with Matthew's work on testing, it will get out of
date and hard to merge.  All of that is confined to weave alone, and
was pure bugfixing.  I'm running the tests right now and will do it in
two stages:

1. update the weave cleanup branch to current trunk status, so there's
a clear commit of the differences.
2. merge the weave changes back into the trunk and then commit that.

I'm not an svn merge expert, so if anyone spots a huge
blunder-in-the-making, by all means speak up...
Cheers,

f

From fperez.net at gmail.com Thu Dec 27 18:04:33 2007
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 27 Dec 2007 16:04:33 -0700
Subject: [SciPy-dev] [Numpy-discussion] SciPy Sprint results
In-Reply-To: <47696873.1060008@enthought.com>
References: <47696873.1060008@enthought.com>
Message-ID:

On Dec 19, 2007 11:52 AM, Travis E. Oliphant wrote:
> Testing
> ---------
> * scipy unit-testing will be "nose-compliant" and therefore nose will
> be required to run the SciPy tests.
> * NumPy will still use the current testing framework but will support
> SciPy's desire to be nose-compliant. NumPy 1.1 tests may move to just
> being "nose-compliant"
> * The goal is to make tests easier for contributors to write.

This hadn't been committed yet, but last night I put it in, and Matthew
Brett is planning today on updating the rest of the codebase to the nose
tests, which do make a LOT of things vastly nicer and simpler to write,
at the cost of a very small new dependency (only needed to run the
actual test suite, not to use scipy).

At some point in the next few days I imagine this will get merged back
into trunk, though that merge probably needs to be done with a bit of
coordination, since the changes will touch pretty much every scipy
module (at least their test/ directory).

> Weave
> ---------
> * weave will not move into NumPy yet, but possibly at NumPy 1.1, there
> could be a separate package containing all the "wrapping" support code
> for NumPy in a more unified fashion (if somebody is interested in this,
> it is a great time to jump in).

It's worth crediting Min (Benjamin Ragan-Kelley, ipython developer) for
spending the whole day deep in the bowels of weave cleaning up tens of
test failures that had been in for a long time.  The weave tests aren't
getting correctly picked up by scipy.test(), hence they hadn't been
reported, but you can see them with weave.test(10).
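[The "nose-compliant" style referred to above boils down to: any function named test_* containing bare assert statements counts as a test, with no TestCase boilerplate. A rough stand-in for that collection model in plain Python — no nose dependency, all names invented for illustration:]

```python
# Two nose-style tests: ordinary functions, ordinary asserts.
def test_addition():
    assert 1 + 1 == 2

def test_list_growth():
    xs = []
    xs.append(3)
    assert xs == [3]

# A toy collector/runner in the spirit of nose's discovery: walk a
# namespace, treat every callable named test_* as a test, and record
# whether its asserts held.
def run_tests(namespace):
    results = {}
    for name in sorted(namespace):
        obj = namespace[name]
        if name.startswith("test_") and callable(obj):
            try:
                obj()
                results[name] = "ok"
            except AssertionError:
                results[name] = "FAIL"
    return results

outcomes = run_tests(dict(globals()))
```

The real framework adds fixtures, generators, and reporting on top, but the low ceremony shown here is the reason the sprint notes call these tests "easier for contributors to write."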
To make sure that this work doesn't fall too far out of sync with trunk,
I just committed it all back into trunk as two separate changesets (one
to update the branch itself in case anyone is going to work further on
it, one to apply the changes to the trunk itself).

Many thanks to Min for the spelunking work!

Cheers,

f

From millman at berkeley.edu Thu Dec 27 21:59:56 2007
From: millman at berkeley.edu (Jarrod Millman)
Date: Thu, 27 Dec 2007 18:59:56 -0800
Subject: [SciPy-dev] The future of the scipy.sandbox and a reminder of upcoming doc-day
In-Reply-To:
References:
Message-ID:

On Dec 27, 2007 2:35 PM, Fernando Perez wrote:
> Any objections if I merge Min's work into the trunk? I'm afraid if we
> don't do it soon, with Matthew's work on testing, it will get out of
> date and hard to merge. All of that is confined to weave alone, and
> was pure bugfixing.

+1

> I'm running the tests right now and will do it in
> two stages:
>
> 1. update the weave cleanup branch to current trunk status, so there's
> a clear commit of the differences.
> 2. merge the weave changes back into the trunk and then commit that.
>
> I'm not an svn merge expert, so if anyone spots a huge
> blunder-in-the-making, by all means speak up...

That sounds right to me.

Thanks,

-- 
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/

From oliphant at enthought.com Thu Dec 27 22:00:34 2007
From: oliphant at enthought.com (Travis E. Oliphant)
Date: Thu, 27 Dec 2007 21:00:34 -0600
Subject: [SciPy-dev] The future of the scipy.sandbox and a reminder of upcoming doc-day
In-Reply-To:
References:
Message-ID: <477466D2.9010406@enthought.com>

Fernando Perez wrote:
> On Dec 27, 2007 2:10 PM, Jarrod Millman wrote:
>
>> Hello,
>>
>> I started a blog, which has two posts about the NumPy/SciPy code
>> sprint/strategic planning meeting early this month.
>> It doesn't have
>> very much content yet, but I thought it would be worth letting
>> everyone know where it is: http://jarrodmillman.blogspot.com/
>>
>
> Any objections if I merge Min's work into the trunk? I'm afraid if we
> don't do it soon, with Matthew's work on testing, it will get out of
> date and hard to merge. All of that is confined to weave alone, and
> was pure bugfixing. I'm running the tests right now and will do it in
> two stages:
>
> 1. update the weave cleanup branch to current trunk status, so there's
> a clear commit of the differences.
> 2. merge the weave changes back into the trunk and then commit that.
>
> I'm not an svn merge expert, so if anyone spots a huge
> blunder-in-the-making, by all means speak up...
>

Go ahead and do it.  I'm never an expert on svn merge until *after* I've
done the merge, either, but that's why we have version control :-)

-Travis

From millman at berkeley.edu Thu Dec 27 22:11:19 2007
From: millman at berkeley.edu (Jarrod Millman)
Date: Thu, 27 Dec 2007 19:11:19 -0800
Subject: [SciPy-dev] The future of the scipy.sandbox and a reminder of upcoming doc-day
In-Reply-To:
References:
Message-ID:

On Dec 27, 2007 2:01 PM, Matthieu Brucher wrote:
> Some of the packages already are in a scikit, like pyem and svm.

Several sandbox packages have been removed: ann, pyem, svm, arraysetops,
cow, gplt, maskedarray, plt, wavelets, xplt, stats, and oliphant; see
http://projects.scipy.org/scipy/scipy/ticket/573

> I'd like to see arpack in the sparse folder (?) very soon, as some of my
> code needs a sparse solver (I proposed that it could be moved into a
> scikit, but it makes sense to keep it in scipy so that sparse solvers
> are available in scipy).

Yes, arpack should go into the sparse package.  If you have the time, it
would be great if you could help get it moved over.  Ideally, we can get
it moved into scipy.sparse before the 0.7 release around the end of March.
Thanks,

-- 
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/

From oliphant at enthought.com Thu Dec 27 22:34:02 2007
From: oliphant at enthought.com (Travis E. Oliphant)
Date: Thu, 27 Dec 2007 21:34:02 -0600
Subject: [SciPy-dev] Scikits and stuff
Message-ID: <47746EAA.9000402@enthought.com>

Hey everyone,

In preparation for doc-day tomorrow, I've been thinking about scikits and
its relationship to scipy.  There seem to be three distinct purposes of
scikits:

1) A place for specialized tool boxes that build on numpy and/or scipy
that live under a common name-space, but are too specialized to live in
scipy itself.
2) A place for GPL or similarly-licensed tools so that people can
understand what they are getting and appropriate use can be made.
3) A place for modularly-installed scipy tools.

Given these three purposes, it seems that we should have three
name-spaces:

1) scikits
2) ??? perhaps scigpl
3) scipy (why not use the same namespace --- there is a technological
hurdle that may prevent egg distribution of these until we fix it.  But
I think we can fix it, and would rather not invent another name-space
for something that should be scipy).

This idea occurred to me when I was thinking about "tagging" modules in
scikits so users could understand the tool.  But at that point it seemed
obvious that we should just have different name-spaces, which would
promote the needed clarity.

Ideas?

-Travis O.

From oliphant at enthought.com Thu Dec 27 22:42:56 2007
From: oliphant at enthought.com (Travis E. Oliphant)
Date: Thu, 27 Dec 2007 21:42:56 -0600
Subject: [SciPy-dev] Doc-day
Message-ID: <477470C0.3030602@enthought.com>

Doc-day will start tomorrow (in about 12 hours).  It will be Friday for
much of America and be moving into Saturday for Europe and Asia.  Join
in on irc.freenode.net (channel scipy) to coordinate effort.  I imagine
people will be in and out.
I plan on being available in IRC from about 9:30 am CST to 6:00 pm CST
and then possibly later.

If you are available at different times in different parts of the world,
jump in and pick something to work on.  Jarrod and/or I will put out a
list of priorities in the next few hours.

-Travis O.

From fperez.net at gmail.com Thu Dec 27 23:01:02 2007
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 27 Dec 2007 21:01:02 -0700
Subject: [SciPy-dev] Scikits and stuff
In-Reply-To: <47746EAA.9000402@enthought.com>
References: <47746EAA.9000402@enthought.com>
Message-ID:

On Dec 27, 2007 8:34 PM, Travis E. Oliphant wrote:
>
> Hey everyone,
>
> In preparation for doc-day tomorrow, I've been thinking about scikits
> and its relationship to scipy. There seem to be three distinct
> purposes of scikits:
>
> 1) A place for specialized tool boxes that build on numpy and/or scipy
> that live under a common name-space, but are too specialized to live in
> scipy itself.
> 2) A place for GPL or similarly-licensed tools so that people can
> understand what they are getting and appropriate use can be made.
> 3) A place for modularly-installed scipy tools.
>
> Given these three purposes, it seems that we should have three name-spaces:
>
> 1) scikits
> 2) ??? perhaps scigpl

Loses points for ugly :)

> 3) scipy (why not use the same namespace --- there is a technological
> hurdle that may prevent egg distribution of these until we fix it. But
> I think we can fix it, and would rather not invent another name-space
> for something that should be scipy).

I don't like the idea of the base scipy package being a moving target.
Much like the standard library (for any given python version) is a known
quantity, I'd like scipy to be the same.

Rather than rehash it, I'm just going to copy here for the public
discussion the reply I sent in our private chat on this topic.  I think
it states my view clearly, and others can then disagree.
I'm pasting it in full so it reads normally, even if you've already
addressed the domain-specific aspects above.

Cheers,

f.

##########

What about domain-specific functionality, for example?  I think it's
important that 'scipy version x.y' is a known, fixed quantity, so that
installing it means having a well-defined set of tools.  But over time I
can foresee lots of domain-specific functionality that is scipy-based
being developed, and I simply don't think it's realistic (for many
reasons) to pull all of it into scipy itself.

Much like Matlab has 'toolboxes' and Mathematica has a similar concept,
I think there's value for the users in having a well-defined location
where they can find lots of extra tools that are related to scipy, but
somewhat independent of it in their development.  The scikits would all
honor similar naming conventions and documentation, we could have a
centralized page listing them so they are easy to find, and users could
add (via namespace packages) their own scikits without necessarily
having write privileges over the central scipy directory.

Basically, in my mind the distinction is not "we did a poor job
modularising scipy" but rather:

- scipy: core library with large amounts of Fortran (as much of netlib
as is reasonable) and functionality that can reasonably be considered to
be of wide appeal.  All of it BSD-compatible.

- scikits: toolkits under a single umbrella namespace, easy to find (we
can provide tools for this), with unified naming, coding, documentation
and example conventions.  Domain-specific codes go here, as well as GPL
or patent-encumbered codes (but still open source).  [Edit: I'm not sure
if *any* patent-encumbered code is really a good idea, so perhaps this
last sentence should be removed.]

In addition, scikits could be the staging area for new projects to be
developed until they mature a bit, for eventual inclusion into scipy
itself.
This would give us a monitoring mechanism to ensure that a contributor
is developing a package according to the scipy standards of naming,
quality, documentation, etc., while allowing the developer to proceed at
his own pace without locking into the scipy release schedule.
Eventually, if a project turns out to work very well and is deemed of
full general interest, it can be folded into scipy itself (like what
happened to ElementTree or optik in the stdlib, for example).  This way
developers can also get users to follow their own release schedule,
without the problems we have today with the sandbox (scikits should be
available via eggs, so users can easily grab and update the scikits
they're interested in).

For the above scipy.foo discussion, if foo==clustering, it probably
belongs in scipy itself (people in all disciplines use that), but a DNA
sequence analysis tool that finds clustering patterns directly operating
on standard bioinformatics formats should probably be a scikit.

I don't know about the others, but I find the above distinction
reasonably clear and useful in practice.  But perhaps I'm totally
missing the mark.

Cheers,

f

From wnbell at gmail.com Thu Dec 27 23:28:05 2007
From: wnbell at gmail.com (Nathan Bell)
Date: Thu, 27 Dec 2007 22:28:05 -0600
Subject: [SciPy-dev] The future of the scipy.sandbox and a reminder of upcoming doc-day
In-Reply-To:
References:
Message-ID:

On Dec 27, 2007 9:11 PM, Jarrod Millman wrote:
> > I'd like to see arpack in the sparse folder (?) very soon, as some of my
> > code needs a sparse solver (I proposed that it could be moved into a
> > scikit, but it makes sense to keep it in scipy so that sparse solvers
> > are available in scipy).
>
> Yes, arpack should go into the sparse package. If you have the time,
> it would be great if you could help get it moved over. Ideally, we
> can get it moved into scipy.sparse before the 0.7 release around the
> end of March.

How do you see sparse being structured?
Currently sparse contains only the sparse matrix classes and a handful
of creation functions (e.g. spdiags), while the iterative solvers live
in scipy.linalg.iterative.  It would be strange to put an eigensolver
under sparse and iterative methods for linear systems under linalg.
Also, lobpcg should live alongside arpack wherever they end up.

I could imagine a structure like:
scipy.iterative.linear (for cg/gmres etc.)
scipy.iterative.eigen (for arpack/lobpcg etc.)

-- 
Nathan Bell wnbell at gmail.com

From oliphant at enthought.com Fri Dec 28 00:00:44 2007
From: oliphant at enthought.com (Travis E. Oliphant)
Date: Thu, 27 Dec 2007 23:00:44 -0600
Subject: [SciPy-dev] Scikits and stuff
In-Reply-To:
References: <47746EAA.9000402@enthought.com>
Message-ID: <477482FC.6060202@enthought.com>

Fernando Perez wrote:
> On Dec 27, 2007 8:34 PM, Travis E. Oliphant wrote:
>
>> Given these three purposes, it seems that we should have three name-spaces:
>>
>> 1) scikits
>> 2) ??? perhaps scigpl
>>
>
> Loses points for ugly :)
>

True.  Are there any other names?

How about:

sci_restrict
sci_no_bsd
sci_gpl
scifree (as a nod to the Free Software Foundation -- although there may
be an unintentional negative double entendre)
scifsf

The point is that it is very useful for users to be able to know that
scikits, scipy and scipydev have a BSD or similar license, but "scifree"
is GPL-like and creates possible encumbrances for people who use it in
their code bases.

>> 3) scipy (why not use the same namespace --- there is a technological
>> hurdle that may prevent egg distribution of these until we fix it. But
>> I think we can fix it, and would rather not invent another name-space
>> for something that should be scipy).
>>
>
> I don't like the idea of the base scipy package being a moving target.
> Much like the standard library (for any given python version) is a
> known quantity, I'd like scipy to be the same.
>

I'm fine with calling #3 something like scipydev.
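[Nathan's arpack/lobpcg home eventually landed as scipy.sparse.linalg rather than a scipy.iterative package. A hedged sketch of the kind of ARPACK-backed eigensolve he describes, using the modern interface (an API newer than this thread):]

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# A 100x100 sparse diagonal matrix whose eigenvalues are known
# by construction to be 1, 2, ..., 100.
A = diags(np.arange(1.0, 101.0))

# Ask the ARPACK-backed solver for the three largest-algebraic
# eigenvalues without forming a dense matrix.
vals = eigsh(A, k=3, which="LA", return_eigenvectors=False)
```

The design point Nathan raises still shows through in the final layout: the iterative linear solvers (cg, gmres, ...) and the iterative eigensolvers (eigsh/ARPACK, lobpcg) ended up side by side under one sparse linear-algebra namespace.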
The point is that it would be good if there were some way for a
developer of a Scientific ToolKit to indicate their intention when they
develop it, and for others to do so as well.

-Travis

From fperez.net at gmail.com Fri Dec 28 00:14:09 2007
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 27 Dec 2007 22:14:09 -0700
Subject: [SciPy-dev] Scikits and stuff
In-Reply-To: <477482FC.6060202@enthought.com>
References: <47746EAA.9000402@enthought.com> <477482FC.6060202@enthought.com>
Message-ID:

On Dec 27, 2007 10:00 PM, Travis E. Oliphant wrote:
> Fernando Perez wrote:
> > On Dec 27, 2007 8:34 PM, Travis E. Oliphant wrote:
> >
> >> Given these three purposes, it seems that we should have three name-spaces:
> >>
> >> 1) scikits
> >> 2) ??? perhaps scigpl
> >>
> >
> > Loses points for ugly :)
> >
>
> True. Are there any other names?
>
> How about:
>
> sci_restrict
> sci_no_bsd
> sci_gpl
> scifree (as a nod to the Free Software Foundation -- although there may
> be an unintentional negative double entendre)
> scifsf

gscipy?

> The point is that it is very useful for users to be able to know that
> scikits, scipy and scipydev have a BSD or similar license, but "scifree"
> is GPL-like and creates possible encumbrances for people who use it in
> their code bases.

I certainly agree on the value of making the bsd/gpl distinction very
clear to any new user.

> >> 3) scipy (why not use the same namespace --- there is a technological
> >> hurdle that may prevent egg distribution of these until we fix it. But
> >> I think we can fix it, and would rather not invent another name-space
> >> for something that should be scipy).
> >>
> >
> > I don't like the idea of the base scipy package being a moving target.
> > Much like the standard library (for any given python version) is a
> > known quantity, I'd like scipy to be the same.
> >
> I'm fine with calling #3 something like scipydev.
The point is that it > would be good if there were some way for a developer of a Scientific > ToolKit to indicate their intention when they develop it and for others > to do so as well. What bothers you about using scikits for standalone packages, even if some of them might eventually become part of scipy proper at some point in the future? I'm not sure I see it... To summarize my take on it at this point, after your input, I'd have this layout: - scipy: fixed package (NOT namespace). All BSD. All components should have fairly broad appeal to a wide audience of scientific users, and code should be reasonably mature (we could argue about how true that is today, but let's not :) - gscipy: extensions to scipy that carry GPL restrictions. This would allow us to better integrate things like GSL, GMP, etc. - scikits: domain-specific toolkits and other self-contained packages that might be at some point candidates for scipy, but aren't yet mature enough to be included in the core. License can be BSD or GPL, per-package. It's a namespace package, so users can install only the components they want and update each independently. Seems clean enough for me... Cheers, f From peridot.faceted at gmail.com Fri Dec 28 00:23:12 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Fri, 28 Dec 2007 00:23:12 -0500 Subject: [SciPy-dev] Scikits and stuff In-Reply-To: References: <47746EAA.9000402@enthought.com> <477482FC.6060202@enthought.com> Message-ID: On 28/12/2007, Fernando Perez wrote: > - scikits: domain-specific toolkits and other self-contained packages > that might be at some point candidates for scipy, but aren't yet > mature enough to be included in the core. License can be BSD or GPL, > per-package. It's a namespace package, so users can install only the > components they want and update each independently. 
I think I agree with this - not that I'm a developer, but I hope you won't mind a user's opinion: it's really hard to tell whether a package will eventually become part of scipy or not. Many packages might start out domain-specific but as they mature and flesh out they might generalize and come to be recognized as generally useful. If domain-specific and immature packages are lumped together, no one is forced to make the decision on whether some package will eventually be of general applicability when it will someday be finished. I suppose alternatively, all immature packages could go in one namespace, to be moved to either scipy proper or a domain-specific namespace when they mature. But how often do open-source packages really reach a stable state? Many packages are useful for a sufficiently specific domain even when quite immature; the maturation process usually includes some generalization. Anne From millman at berkeley.edu Fri Dec 28 00:27:09 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Thu, 27 Dec 2007 21:27:09 -0800 Subject: [SciPy-dev] Doc-day In-Reply-To: <477470C0.3030602@enthought.com> References: <477470C0.3030602@enthought.com> Message-ID: On Dec 27, 2007 7:42 PM, Travis E. Oliphant wrote: > Doc-day will start tomorrow (in about 12 hours). It will be Friday for > much of America and be moving into Saturday for Europe and Asia. Join > in on the irc.freenode.net (channel scipy) to coordinate effort. I > imaging people will be in an out. I plan on being available in IRC from > about 9:30 am CST to 6:00 pm CST and then possibly later. > > If you are available at different times in different parts of the > world, jump in and pick something to work on. Since this is our first doc-day, it will be fairly informal. Travis is going to be trying to get some estimate of which packages need the most work. But if there is some area of NumPy or SciPy you are familiar with, please go ahead and pitch in. 
Here is the current NumPy/ SciPy coding standard including docstring standards: http://projects.scipy.org/scipy/numpy/wiki/CodingStyleGuidelines I will be working on making the roadmaps more detailed and better documenting the discussions from the coding sprint. Travis O. will be mostly working on NumPy docstrings and possibly deprecation warnings for scipy.io functions. Matthew B. will be working on converting SciPy tests to use nose per Fernando's email. If you are familiar with nose and want to help, please make sure to check with Matthew or Fernando first. I hope to see several of you on IRC tomorrow. Happy holidays, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From fperez.net at gmail.com Fri Dec 28 00:41:17 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 27 Dec 2007 22:41:17 -0700 Subject: [SciPy-dev] [Numpy-discussion] Doc-day In-Reply-To: References: <477470C0.3030602@enthought.com> Message-ID: On Dec 27, 2007 10:27 PM, Jarrod Millman wrote: > Since this is our first doc-day, it will be fairly informal. Travis > is going to be trying to get some estimate of which packages need the > most work. But if there is some area of NumPy or SciPy you are > familiar with, please go ahead and pitch in. Here is the current > NumPy/ SciPy coding standard including docstring standards: > http://projects.scipy.org/scipy/numpy/wiki/CodingStyleGuidelines Care to make the Example section mandatory, instead of optional? I really think it should be mandatory. We may not do a good job of it initially, but at least we should express that it's of critical importance that every function contains at least one small example, whenever feasible. I also think that the above wiki page should have a minimal, self-contained example of a proper docstring with all 8 sections implemented. 
I'm honestly not sure at this point what the actual changes to epydoc are (in ipython we use regular epydoc with reST), and I think for many it would be much easier to get started by reading a small example rather than trying to abstract out what the exact markup should be from reading the description and the various documents linked to (doctest, reST, epydoc...). With such a guiding example, tomorrow people will be able to get up and going quickly... > I will be working on making the roadmaps more detailed and better > documenting the discussions from the coding sprint. > > Travis O. will be mostly working on NumPy docstrings and possibly > deprecation warnings for scipy.io functions. > > Matthew B. will be working on converting SciPy tests to use nose per > Fernando's email. If you are familiar with nose and want to help, > please make sure to check with Matthew or Fernando first. I'm afraid I won't be able to participate tomorrow, but one thing to remember is that with nose, any and all doctest examples should be automatically picked up (with the appropriate flags). So a *very easy* way for anyone to contribute is to simply add doctest examples to the codebase. Those automatically serve two purposes: they are small tests for each function, and they make the library vastly easier to use, since any function is just one foo? away from an example. As a reminder, those of you using ipython >= 0.8.2 can use this feature:

In [1]: %doctest_mode
*** Pasting of code with ">>>" or "..." has been enabled.
Exception reporting mode: Plain
Doctest mode is: ON
>>> for i in range(10):
...     print i,
...
0 1 2 3 4 5 6 7 8 9
>>>
>>> for i in range(10):
... ...     print i,
...
0 1 2 3 4 5 6 7 8 9
>>> %doctest_mode
Exception reporting mode: Context
Doctest mode is: OFF

########

The %doctest_mode magic switches the ipython prompt to >>> so you can continue using ipython but get the proper prompts for making pasteable doctests, and it also allows you to paste input that begins with '>>>' for execution, so you can try your doctests again. HTH, f From millman at berkeley.edu Fri Dec 28 00:50:29 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Thu, 27 Dec 2007 21:50:29 -0800 Subject: [SciPy-dev] [Numpy-discussion] Doc-day In-Reply-To: References: <477470C0.3030602@enthought.com> Message-ID: On Dec 27, 2007 9:41 PM, Fernando Perez wrote: > Care to make the Example section mandatory, instead of optional? I > really think it should be mandatory. We may not do a good job of it > initially, but at least we should express that it's of critical > importance that every function contains at least one small example, > whenever feasible. +1 > I also think that the above wiki page should have a minimal, > self-contained example of a proper docstring with all 8 sections > implemented. I'm honestly not sure at this point what the actual > changes to epydoc are (in ipython we use regular epydoc with reST), > and I think for many it would be much easier to get started by reading > a small example rather than trying to abstract out what the exact > markup should be from reading the description and the various > documents linked to (doctest, reST, epydoc...). I think we already have that. Take a look at the example at the bottom: http://projects.scipy.org/scipy/numpy/wiki/CodingStyleGuidelines#example It points to a plain text example: http://svn.scipy.org/svn/numpy/trunk/numpy/doc/example.py and what it looks like rendered: http://www.scipy.org/doc/example It also explains how to check the example code out of svn and generate the html using epydoc. 
-- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From fperez.net at gmail.com Fri Dec 28 00:57:14 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 27 Dec 2007 22:57:14 -0700 Subject: [SciPy-dev] [Numpy-discussion] Doc-day In-Reply-To: References: <477470C0.3030602@enthought.com> Message-ID: On Dec 27, 2007 10:50 PM, Jarrod Millman wrote: > > I also think that the above wiki page should have a minimal, > > self-contained example of a proper docstring with all 8 sections > > implemented. I'm honestly not sure at this point what the actual > > changes to epydoc are (in ipython we use regular epydoc with reST), > > and I think for many it would be much easier to get started by reading > > a small example rather than trying to abstract out what the exact > > markup should be from reading the description and the various > > documents linked to (doctest, reST, epydoc...). > > I think we already have that. Take a look at the example at the bottom: > http://projects.scipy.org/scipy/numpy/wiki/CodingStyleGuidelines#example Oops, sorry. I read the page quickly and my brain was scanning for a big block of text in {{{ }}}, so I didn't look at that section carefully enough. My mistake. Cheers, f From oliphant at enthought.com Fri Dec 28 01:12:10 2007 From: oliphant at enthought.com (Travis E. Oliphant) Date: Fri, 28 Dec 2007 00:12:10 -0600 Subject: [SciPy-dev] Scikits and stuff In-Reply-To: References: <47746EAA.9000402@enthought.com> <477482FC.6060202@enthought.com> Message-ID: <477493BA.9020605@enthought.com> Fernando Perez wrote: > On Dec 27, 2007 10:00 PM, Travis E. Oliphant wrote: > >> Fernando Perez wrote: >> >>> On Dec 27, 2007 8:34 PM, Travis E. Oliphant wrote: >>> >>> >>>> Given these three purposes. It seems that we should have three name-spaces: >>>> >>>> 1) scikits >>>> 2) ??? perhaps scigpl >>>> >>>> >>> Loses points for ugly :) >>> >>> >>> >> True. 
Are there any other names: >> >> How about: >> >> sci_restrict >> sci_no_bsd >> sci_gpl >> scifree (as a nod to Free Software Foundation -- although there may be >> unintentional negative double entendre). >> scifsf >> > > gscipy? > +1 > >> The point is that it is very useful for users to be able to know that >> scikits and scipy and scipydev have a BSD or similar license, but >> "scifree" is GPL-like and creates possible encumbrances for people who >> use it in their code bases. >> > > I certainly agree on the value of making the bsd/gpl distinction very > clear to any new user. > > >> >> I'm fine with calling #3 something like scipydev. The point is that it >> would be good if there were some way for a developer of a Scientific >> ToolKit to indicate their intention when they develop it and for others >> to do so as well. >> > > What bothers you about using scikits for standalone packages, even if > some of them might eventually become part of scipy proper at some > point in the future? I'm not sure I see it... To summarize my take > on it at this point, after your input, I'd have this layout: > I think the problem is one of "getting lost" in the scikits. I'd like there to be a way for a developer of a scikit to signal their intention from the start. When looking at the sandbox, I could not tell in many cases what the intent was. I'd rather have the developer of the project be clear about it from the get go. I can see there being hundreds of scikits, and trying to coordinate effort between developers trying to get something into scipy at somepoint is difficult if there is not a way to signal the intention up front. Also, having a name like scipydev tells everybody what the purpose of the project is. Right now, for example, I have no idea why delaunay, openopt, audiolab, and learn are scikits. They do not seem domain specific to me. But, then again, perhaps the developers don't want to put their packages into scipy. 
If that is the case, then I'd like that to be clear up front and use that to help fix whatever issues are causing scipy to be "unattractive" to a developer of a module with obvious wide-spread appeal. > - scipy: fixed package (NOT namespace). All BSD. All components > should have fairly broad appeal to a wide audience of scientific > users, and code should be reasonably mature (we could argue about how > true that is today, but let's not :) > > - gscipy: extensions to scipy that carry GPL restrictions. This > would allow us to better integrate things like GSL, GMP, etc. > > - scikits: domain-specific toolkits and other self-contained packages > that might be at some point candidates for scipy, but aren't yet > mature enough to be included in the core. License can be BSD or GPL, > per-package. It's a namespace package, so users can install only the > components they want and update each independently. > > Seems clean enough for me... > Hmm... It looks like we have a subtle difference about what should be in scikits. I would not put any GPL code there once there is a scipy and gscipy. If you want scikits to be a place for the "maturation" process (which is not unreasonable), then there should be gscikits as well so that it is always clear. If I understand correctly, you are arguing that scikits should be both domain specific and a "staging" area and that these roles don't need to be decided on upfront. I'm concerned that if the roles are not decided upon, nothing will ever move into scipy and there will be a whole bunch of disconnected scikits that do very much the same kinds of things (optimization, interpolation, loading files of various formats, etc.) and really should be in scipy, but with no incentive or push to actually get them there, because moving from scikits to scipy offers no benefit to the developer. 
If instead we restrict scikits to "domain specific" tools and target more general purpose tools for scipy but allow them to be staged and developed at their own pace using the scipydev name-space, then the tools that really should go into scipy will be named that way from the beginning, and the developer incentive will be to get the scipydev off their name as well as getting into the scipy package. It will also make it easier for SciPy developers to understand the intent of "abandoned" projects if things that are being developed are not lost in things that will never be included in SciPy. I think my concern stems from what is there now (in the sandbox) and why much of it has not moved into scipy already. I don't think just moving it all to scikits will fix those things and will still make me and others developing SciPy have to sift through potentially hundreds of scikits to determine the intent. -Travis O. From fperez.net at gmail.com Fri Dec 28 01:12:31 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 27 Dec 2007 23:12:31 -0700 Subject: [SciPy-dev] Scikits and stuff In-Reply-To: References: <47746EAA.9000402@enthought.com> <477482FC.6060202@enthought.com> Message-ID: On Dec 27, 2007 10:23 PM, Anne Archibald wrote: > On 28/12/2007, Fernando Perez wrote: > > - scikits: domain-specific toolkits and other self-contained packages > > that might be at some point candidates for scipy, but aren't yet > > mature enough to be included in the core. License can be BSD or GPL, > > per-package. It's a namespace package, so users can install only the > > components they want and update each independently. > > I think I agree with this - not that I'm a developer, but I hope you > won't mind a user's opinion: it's really hard to tell whether a Thanks for your input, which I think provides a useful perspective on the above. But I'd like to make sure that something is clear: - user opinions are *always* welcome and encouraged on these lists. 
I myself hardly count as a numpy/scipy developer, but that has never stopped me from opening my big mouth, and it shouldn't stop anyone else. This is a community where the lines between users and developers are deliberately blurry and fluid: we expect anyone using these tools to one day be able to contribute something as a developer, and the next day to need help on the lists (Robert Kern excepted :). If at any point a comment of mine made it appear otherwise, I apologize, as it was certainly not my intent. The whole point of working in this type of environment is that contributions are accepted because of their intrinsic value, not because of whose name is behind them. *Credit* (in commit logs, credit files, etc) goes with names as is the standard tradition in academia, but hopefully we'll always acknowledge a good idea regardless of where it comes from. - Having said the above, there are users who have earned massive amounts of good karma due to stellar contributions on these mailing lists, and you are certainly at the very top of that group. Even more reason to always voice your opinion! Cheers, f From matthieu.brucher at gmail.com Fri Dec 28 01:13:51 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 28 Dec 2007 07:13:51 +0100 Subject: [SciPy-dev] [Numpy-discussion] SciPy Sprint results In-Reply-To: References: <47696873.1060008@enthought.com> Message-ID: > > > Weave > > --------- > > * weave will not move into NumPy yet, but possibly at NumPy 1.1, there > > could be a separate package containing all the "wrapping" support code > > for NumPy in a more unified fashion (if somebody is interested in this, > > it is a great time to jump in). > > It's worth crediting Min (Benjamin Ragan-Kelley, ipython developer) > for spending the whole day deep in the bowels of weave cleaning up > tens of test failures that had been in for a long time. The weave > tests aren't getting correctly picked up by scipy.test(), hence they > hadn't been reported, but you can see them with weave.test(10). > This is good news :) Will other compilers be supported by the Blitz converters, like ICL or Visual Studio? Last time I tried to use ICL, some headers were missing :| Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthieu.brucher at gmail.com Fri Dec 28 01:18:26 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 28 Dec 2007 07:18:26 +0100 Subject: [SciPy-dev] The future of the scipy.sandbox and a reminder of upcoming doc-day In-Reply-To: References: Message-ID: > > > I'd like to see arpack in the sparse folder (?) very fast as some of my > code > > would need a sparse solver (I proposed that it could be moved in a > scikit > > but it makes sense to keep it in scipy so that sparse solvers are > available > > in scipy). > > Yes, arpack should go into the sparse package. If you have the time, > it would be great if you could help get it moved over. Ideally, we > can get it moved into scipy.sparse before the 0.7 release around the > end of March. I can help, but not before mid-January (I have to live without it because of some deadlines). Then, I will catch up with what was done. Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fperez.net at gmail.com Fri Dec 28 01:28:56 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 27 Dec 2007 23:28:56 -0700 Subject: [SciPy-dev] Scikits and stuff In-Reply-To: <477493BA.9020605@enthought.com> References: <47746EAA.9000402@enthought.com> <477482FC.6060202@enthought.com> <477493BA.9020605@enthought.com> Message-ID: On Dec 27, 2007 11:12 PM, Travis E. Oliphant wrote: > Fernando Perez wrote: > > gscipy? > > > +1 OK. Settled? > > What bothers you about using scikits for standalone packages, even if > > some of them might eventually become part of scipy proper at some > > point in the future? I'm not sure I see it... To summarize my take > > on it at this point, after your input, I'd have this layout: > > > > I think the problem is one of "getting lost" in the scikits. I'd like > there to be a way for a developer of a scikit to signal their intention > from the start. When looking at the sandbox, I could not tell in many > cases what the intent was. I'd rather have the developer of the project > be clear about it from the get go. > > I can see there being hundreds of scikits, and trying to coordinate > effort between developers trying to get something into scipy at > somepoint is difficult if there is not a way to signal the intention up > front. > > Also, having a name like scipydev tells everybody what the purpose of > the project is. Right now, for example, I have no idea why delaunay, > openopt, audiolab, and learn are scikits. They do not seem domain > specific to me. But, then again, perhaps the developers don't want to > put their packages into scipy. If that is the case, then I'd like that > to be clear up front and use that to help fix whatever issues are > causing scipy to be "unattractive" to a developer of a module with > obvious wide-spread appeal. I think I'd answer to your concern here with Anne's recent post. 
I very much like her argument as to why certain tools may naturally evolve out from a domain-specific one into something more general over time. I realize that you are frustrated by the mess the sandbox became, but I think we shouldn't let that influence our decisions right now. I view that mess more as a historical accident due to lack of guided project management, than an intrinsic flaw of the naming conventions. I think that if we have clear guidelines we agree on, the problem will be naturally avoided. > Hmm... It looks like we have a subtle difference about what should be > in scikits. I would not put any GPL code there once there is a scipy > and gscipy. If you want scikits to be a place for the "maturation" > process (which is not unreasonable), then there should be gscikits as > well so that it is always clear. > > If I understand correctly, you are arguing that scikits should be both > domain specific and a "staging" area and that these roles don't need to > be decided on upfront. Correct (cf Anne's post for more on that view). > I'm concerned that if the roles are not decided upon, nothing will ever > move into scipy and there will be a whole bunch of disconnected scikits > that do very much the same kinds of things (optimization, interpolation, > loading files of various formats, etc.) and really should be in scipy, > but with no incentive or push to actually get them there, because moving > from scikits to scipy offers no benefit to the developer. > > If instead we restrict scikits to "domain specific" tools and target > more general purpose tools for scipy but allow them to be staged and > developed at their own pace using the scipydev name-space, then the > tools that really should go into scipy will be named that way from the > beginning, and the developer incentive will be to get the scipydev off > their name as well as getting into the scipy package. 
> > It will also make it easier for SciPy developers to understand the > intent of "abandoned" projects if things that are being developed are > not lost in things that will never be included in SciPy. > > I think my concern stems from what is there now (in the sandbox) and why > much of it has not moved into scipy already. I don't think just moving > it all to scikits will fix those things and will still make me and > others developing SciPy have to sift through potentially hundreds of > scikits to determine the intent. Honestly I think the sandbox problem won't reoccur (at least not as badly). I also think that asking developers to commit to 'core scipy' from the get-go may be too much in the beginning, while the suggestion "make a scikit out of it, and if it works after a while and it makes sense, it can be moved into the core where its release cycle will get locked into the rest" may be a bit less intimidating. I also happen not to like a whole lot the idea of yet another namespace: scipy, gscipy and scikits seems enough to me, and a fourth scipydev (and possibly a fifth gscikits) really feels like overkill to me. I think I've stated where I differ from you on this one, so I won't belabor it too much further. I'm not really trying to force the issue, and ultimately if you really prefer the extra namespace I can live with it. Perhaps others can provide their perspective as well... Cheers, f From fperez.net at gmail.com Fri Dec 28 01:30:25 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Thu, 27 Dec 2007 23:30:25 -0700 Subject: [SciPy-dev] [Numpy-discussion] SciPy Sprint results In-Reply-To: References: <47696873.1060008@enthought.com> Message-ID: On Dec 27, 2007 11:13 PM, Matthieu Brucher wrote: > > It's worth crediting Min (Benjamin Ragan-Kelley, ipython developer) > > for spending the whole day deep in the bowels of weave clenaing up > > tens of test failures that had been in for a long time. 
The weave > > tests aren't getting correctly picked up by scipy.test(), hence they > > hadn't been reported, but you can see them with weave.test(10). > > > > This is good news :) > Will other compilers be supported by the Blits converters, like ICL or > Visual Studio ? Last time I tried ty use ICL, some headers were missing :| Tom Waite, from Berkeley, spent a lot of time getting the intel compilers to work on win32, but I don't think that code is committed. You may want to ping the Berkeley guys tomorrow on irc for details on the status of that (I'm not there right now). Cheers, f From oliphant at enthought.com Fri Dec 28 02:13:40 2007 From: oliphant at enthought.com (Travis E. Oliphant) Date: Fri, 28 Dec 2007 01:13:40 -0600 Subject: [SciPy-dev] Scikits and stuff In-Reply-To: References: <47746EAA.9000402@enthought.com> <477482FC.6060202@enthought.com> Message-ID: <4774A224.90907@enthought.com> Anne Archibald wrote: > On 28/12/2007, Fernando Perez wrote: > >> - scikits: domain-specific toolkits and other self-contained packages >> that might be at some point candidates for scipy, but aren't yet >> mature enough to be included in the core. License can be BSD or GPL, >> per-package. It's a namespace package, so users can install only the >> components they want and update each independently. >> > > I think I agree with this - not that I'm a developer, but I hope you > won't mind a user's opinion: it's really hard to tell whether a > package will eventually become part of scipy or not. I certainly love users opinions as I think developers usually get things wrong because it is easy to forget what being a user is like. For certain cases, it is true that whether or not something is general purpose changes over time. But right now it is already the case that we have scikits that should be going into scipy and my big question is why they are not already there. 
Nothing I have heard alleviates the problem that namespace clarity is designed to address: * for users it will be much saner if common things go into scipy so that we don't end up with more and more ways to do common things like optimization * however, for developers there is no real incentive to move things from scikits into scipy if everything is just lumped together into scikits * there is also no simple way for an outside user to understand whether something in scikits is really slated for inclusion in SciPy or not There really is a difference between the kinds of things that Fernando is lumping into scikits. What prompted me to ask for new namespaces is precisely because I was thinking of proposing "tags" to go along with the packages. But, namespaces seems like a much better idea than adding a new layer called "tags" for the same namespace. -Travis From matthieu.brucher at gmail.com Fri Dec 28 02:14:04 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 28 Dec 2007 08:14:04 +0100 Subject: [SciPy-dev] Scikits and stuff In-Reply-To: <477493BA.9020605@enthought.com> References: <47746EAA.9000402@enthought.com> <477482FC.6060202@enthought.com> <477493BA.9020605@enthought.com> Message-ID: > > Also, having a name like scipydev tells everybody what the purpose of > the project is. Right now, for example, I have no idea why delaunay, > openopt, audiolab, and learn are scikits. They do not seem domain > specific to me. But, then again, perhaps the developers don't want to > put their packages into scipy. If that is the case, then I'd like that > to be clear up front and use that to help fix whatever issues are > causing scipy to be "unattractive" to a developer of a module with > obvious wide-spread appeal. > I can speak about openopt and learn (but in the future for the latter). For once, I don't think my code is good enough to be in scipy. 
For openopt, I don't know how it fits in scipy, it depends on how well dmitrey did the branches on the additional external solvers. Although my code is not domain specific (generic optimizers), it is not as easy to use as a simple call to a function (but far more powerful IMHO). Besides it might use an additional matrix library in the future for some modified decomposition. As for learn, I may put some of my manifold learning stuff in it, and it uses ctypes or SWIG intensively as well as a matrix library (a lot is still done in C++) and it depends on my generic optimizers. So perhaps all that is not my code would fit in Scipy, but I'd like some additional thoughts about it ;). Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From twaite at berkeley.edu Fri Dec 28 02:34:23 2007 From: twaite at berkeley.edu (Tom Waite) Date: Thu, 27 Dec 2007 23:34:23 -0800 Subject: [SciPy-dev] [Numpy-discussion] SciPy Sprint results In-Reply-To: References: <47696873.1060008@enthought.com> Message-ID: I got the Intel (Windows) compiler to work, but had problems with the link. The Intel linker (xilink6) has a problem (multi-file optimization error) and I noted with my Visual Studio running with Intel, the Intel linker actually is running the MS linker. This is not completed as of now as I still need to resolve the minor changes I am having to make to the inherited distutils scripts in my IntelW.py. Tom On 12/27/07, Fernando Perez wrote: > > On Dec 27, 2007 11:13 PM, Matthieu Brucher > wrote: > > > > It's worth crediting Min (Benjamin Ragan-Kelley, ipython developer) > > > for spending the whole day deep in the bowels of weave cleaning up > > > tens of test failures that had been in for a long time. 
The weave > > > tests aren't getting correctly picked up by scipy.test(), hence they > > > hadn't been reported, but you can see them with weave.test(10). > > > > > > > This is good news :) > > Will other compilers be supported by the Blitz converters, like ICL or > > Visual Studio ? Last time I tried to use ICL, some headers were missing :| > > Tom Waite, from Berkeley, spent a lot of time getting the intel > compilers to work on win32, but I don't think that code is committed. > You may want to ping the Berkeley guys tomorrow on irc for details on > the status of that (I'm not there right now). > > Cheers, > > f > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant at enthought.com Fri Dec 28 02:42:05 2007 From: oliphant at enthought.com (Travis E. Oliphant) Date: Fri, 28 Dec 2007 01:42:05 -0600 Subject: [SciPy-dev] Scikits and stuff In-Reply-To: References: <47746EAA.9000402@enthought.com> <477482FC.6060202@enthought.com> <477493BA.9020605@enthought.com> Message-ID: <4774A8CD.90508@enthought.com> Matthieu Brucher wrote: > > Also, having a name like scipydev tells everybody what the purpose of > the project is. Right now, for example, I have no idea why delaunay, > openopt, audiolab, and learn are scikits. They do not seem domain > specific to me. But, then again, perhaps the developers don't want to > put their packages into scipy. If that is the case, then I'd like > that > to be clear up front and use that to help fix whatever issues are > causing scipy to be "unattractive" to a developer of a module with > obvious wide-spread appeal. > > > I can speak about openopt and learn (but in the future for the latter). > For one, I don't think my code is good enough to be in scipy.
> For openopt, I don't know how it fits in scipy, it depends on how well > dmitrey did the branches on the additional external solvers. Although > my code is not domain specific (generic optimizers), it is not as easy > to use as a simple call to a function (but far more powerful IMHO). > Besides, it might use an additional matrix library in the future for > some modified decomposition. > As for learn, I may put some of my manifold learning stuff in it, and > it uses ctypes or SWIG intensively as well as a matrix library (a lot > is still done in C++) and it depends on my generic optimizers. Thanks for the input. This is the kind of information I was looking for. Here's my current proposal (which is very close to Fernando's, I think, with one nuance):

scipy --- core facilities.

gscikits --- For GPL-encumbered packages regardless of origin or destiny.

scikits --- For BSD third-party packages. These may be packages with wide-spread appeal with a different calling convention than scipy, or packages that the developers are not done with or just want to keep their own release cycles. Code may come out of here for inclusion into scipy, but it will do so using:

scipy-somepackage (imports as scipy.somepackage but is distributed separately) --- Packages that will soon be released with scipy but for now are being distributed alone because they need a faster release cycle. These packages involve the input of SciPy developers more than a scikits package might.

> > So perhaps all that is not my code would fit in Scipy, but I'd like > some additional thoughts about it ;). It sounds like your code would live in scikits and then if parts should be taken into scipy then they would be through the scipy-somepackage approach.
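[Editorial sketch: one way the "imports as scipy.somepackage but is distributed separately" idea could work mechanically is the stdlib pkgutil.extend_path hook. The snippet below is an illustration under that assumption — the package and module names are made up for demonstration, and this is not an agreed implementation.]

```python
import os
import sys
import tempfile

# Build two independently installed packages that share one namespace,
# mimicking scipy plus a separately distributed scipy-somepackage.
# "nspkg", "core", and "somepackage" are purely illustrative names.
def make_part(root, submodule):
    pkg = os.path.join(root, "nspkg")
    os.makedirs(pkg)
    with open(os.path.join(pkg, "__init__.py"), "w") as f:
        # extend_path scans sys.path for other directories named
        # "nspkg" and merges them into this package's __path__.
        f.write("from pkgutil import extend_path\n"
                "__path__ = extend_path(__path__, __name__)\n")
    with open(os.path.join(pkg, submodule + ".py"), "w") as f:
        f.write("NAME = %r\n" % submodule)

base = tempfile.mkdtemp()
part_a = os.path.join(base, "a")   # stands in for the scipy install
part_b = os.path.join(base, "b")   # stands in for scipy-somepackage
make_part(part_a, "core")
make_part(part_b, "somepackage")
sys.path[:0] = [part_a, part_b]

import nspkg.core
import nspkg.somepackage            # found via the extended __path__
print(nspkg.core.NAME, nspkg.somepackage.NAME)
```

Each part ships the same two-line __init__.py, so whichever install is found first still picks up submodules from the other — which is roughly the behavior a separately shipped scipy.somepackage would need.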
-Travis From robert.kern at gmail.com Fri Dec 28 02:56:52 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 28 Dec 2007 02:56:52 -0500 Subject: [SciPy-dev] Scikits and stuff In-Reply-To: <4774A8CD.90508@enthought.com> References: <47746EAA.9000402@enthought.com> <477482FC.6060202@enthought.com> <477493BA.9020605@enthought.com> <4774A8CD.90508@enthought.com> Message-ID: <4774AC44.8070305@gmail.com> Travis E. Oliphant wrote: > gscikits --- For GPL encumbered packages regardless of origin or destiny. I think this is a misnomer. There are also LGPL-, MPL-, CPL-, CeCILL-, OPL-, etc-encumbered packages, too. I don't see a good reason to not let these packages use the scikits namespace. Every package has its own license; scikits is not a package. It's just not that confusing. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From fperez.net at gmail.com Fri Dec 28 03:44:16 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 28 Dec 2007 01:44:16 -0700 Subject: [SciPy-dev] [Numpy-discussion] Doc-day In-Reply-To: References: <477470C0.3030602@enthought.com> Message-ID: Since I won't be able to participate tomorrow, here's a small contribution. I just added this to ipython: http://projects.scipy.org/ipython/ipython/browser/ipython/trunk/IPython/dtutils.py You can load it in your startup file or interactively via from IPython.dtutils import idoctest Type idoctest? for calling details. This will enable you to do things like: In [96]: idoctest -------> idoctest() # at this point, you start typing/pasting your doctest. Hit Ctrl-D or two blanks to end: >>> for i in range(10): ... print i, ... 0 1 2 3 4 5 6 7 8 9 1 items passed all tests: 1 tests in interactive doctest 1 tests in 1 items. 1 passed and 0 failed. Test passed. 
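[Editorial sketch: under the hood this is the standard library's doctest machinery. Here is a minimal stand-alone version of the same round-trip, independent of ipython — the parser/runner pairing is an assumption about what idoctest wraps, not a description of its internals.]

```python
import doctest

# Build a doctest from a pasted string and run it, much as the
# interactive session above does.
src = """
>>> [i * i for i in range(5)]
[0, 1, 4, 9, 16]
"""
parser = doctest.DocTestParser()
test = parser.get_doctest(src, {}, "interactive doctest", None, 0)
runner = doctest.DocTestRunner(verbose=False)
runner.run(test)
print(runner.failures, "failed,", runner.tries, "tried")
```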
If you have a failing doctest that you'd like to debug, you can use 'eraise=True' and errors will be immediately raised, so you can then call %debug to debug them. This should come in handy for all those doctests that you're all going to write tomorrow. If you aren't running off SVN ipython, you can simply copy the above file somewhere and load it interactively once ipython is running, it doesn't depend on any other changes to ipython. It probably still needs a bit of work to make it more convenient, so I'm open to feedback. But it should make life easier when writing frequent doctests. Cheers, f From wnbell at gmail.com Fri Dec 28 04:33:24 2007 From: wnbell at gmail.com (Nathan Bell) Date: Fri, 28 Dec 2007 03:33:24 -0600 Subject: [SciPy-dev] Scikits and stuff In-Reply-To: <4774AC44.8070305@gmail.com> References: <47746EAA.9000402@enthought.com> <477482FC.6060202@enthought.com> <477493BA.9020605@enthought.com> <4774A8CD.90508@enthought.com> <4774AC44.8070305@gmail.com> Message-ID: On Dec 28, 2007 1:56 AM, Robert Kern wrote: > > gscikits --- For GPL encumbered packages regardless of origin or destiny. > > I think this is a misnomer. There are also LGPL-, MPL-, CPL-, CeCILL-, OPL-, > etc-encumbered packages, too. I don't see a good reason to not let these > packages use the scikits namespace. Every package has its own license; scikits > is not a package. It's just not that confusing. Personally, I don't see why different licenses would necessitate different namespaces either. IMO separating BSD scikits from everything else would needlessly confuse new users and diminish the overall 'scikit' mind share. Is it not fair to say that the distinctions among the various licenses are completely unimportant to the vast majority of the audience scikits is supposed to address? Furthermore, does anyone want to police this sort of policy should 100s of scikits be developed? Travis E. 
Oliphant wrote: > The point is that it is very useful for users to be able to know that > scikits and scipy and scipydev have a BSD or similar license, but > "scifree" is GPL-like and creates possible encumbrances for people who > use it in their code bases. I doubt that anyone who's legitimately concerned about such matters is going to trust that scikit.foo is actually BSD code without verifying it personally. -- Nathan Bell wnbell at gmail.com From oliphant at enthought.com Fri Dec 28 04:56:17 2007 From: oliphant at enthought.com (Travis E. Oliphant) Date: Fri, 28 Dec 2007 03:56:17 -0600 Subject: [SciPy-dev] Scikits and stuff In-Reply-To: References: <47746EAA.9000402@enthought.com> <477482FC.6060202@enthought.com> <477493BA.9020605@enthought.com> <4774A8CD.90508@enthought.com> <4774AC44.8070305@gmail.com> Message-ID: <4774C841.6050607@enthought.com> Nathan Bell wrote: > On Dec 28, 2007 1:56 AM, Robert Kern wrote: > >>> gscikits --- For GPL encumbered packages regardless of origin or destiny. >>> >> I think this is a misnomer. There are also LGPL-, MPL-, CPL-, CeCILL-, OPL-, >> etc-encumbered packages, too. I don't see a good reason to not let these >> packages use the scikits namespace. Every package has its own license; scikits >> is not a package. It's just not that confusing. >> Think of it as GPL-inspired scikits, then. I did not mean for gscikits to only include the GPL itself. Sure, scikits is not a package, but it is a "namespace" and that has meaning. The point is what "meaning" do you want it to have. I see great value in a clear separation between GPL-inspired licenses and other licenses. If these are all awash in the scikits namespace, then it is going to be more difficult for people who would like to be able to use scikits packages but cannot use the GPL to know what they can and can't use. > > Personally, I don't see why different licenses would necessitate > different namespaces either. 
IMO separating BSD scikits from > everything else would needlessly confuse new users and diminish the > overall 'scikit' mind share. > It is the 'mind share' I'm interested in as well. Right now it is pretty clear that scipy is BSD (or similarly) licensed. It would be productive if that same kind of advertising were available for scikits. > Is it not fair to say that the distinctions among the various licenses > are completely unimportant to the vast majority of the audience > scikits is supposed to address? All I'm saying is that the distinction between the licenses that impose restrictions on what you do with your own code that depends on them and licenses that don't do that is important enough to warrant a name-space division. > Furthermore, does anyone want to > police this sort of policy should 100s of scikits be developed? > It is much easier to "police" if all you have to do is change the name of the package it gets installed in. > Travis E. Oliphant wrote: > >> The point is that it is very useful for users to be able to know that >> scikits and scipy and scipydev have a BSD or similar license, but >> "scifree" is GPL-like and creates possible encumbrances for people who >> use it in their code bases. >> > > I doubt that anyone who's legitimately concerned about such matters is > going to trust that scikit.foo is actually BSD code without verifying > it personally. > Sure, but that same person is more likely not going to touch *any* of scikits if there is GPL code released in its name-space (even if they are "separate" packages and especially if they are actually hosted on the same svn tree). It gets too murky for people who need to care to spend the time figuring it out --- they'll just go buy the off-the-shelf solution even if it isn't as good in some technical sense.
-Travis From dmitrey.kroshko at scipy.org Fri Dec 28 05:21:57 2007 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Fri, 28 Dec 2007 12:21:57 +0200 Subject: [SciPy-dev] Scikits and stuff In-Reply-To: <4774A8CD.90508@enthought.com> References: <47746EAA.9000402@enthought.com> <477482FC.6060202@enthought.com> <477493BA.9020605@enthought.com> <4774A8CD.90508@enthought.com> Message-ID: <4774CE45.6000504@scipy.org> I would like to propose asking the user (after scipy installation) whether they want to install some more scikits, for example:

>>> you can select some scikits packages you want to install:

            ver   date        license  description
   1. learn 0.1   2007-12-01  GPL      machine learning
   2. pylab 0.4b  2007-11-01  LGPL     matlab-Python bridge
   ... etc

>>> enter your choice (comma- or space-separated)

It would require a working internet connection. Alternatively (and/or optionally), the user could specify a directory containing scikits files, transferred as tar.gz files or from an svn repository, without a working internet connection. Also, a user may be interested in installing a scikit directly from an svn repository (latest snapshot), so organizing that possibility should also be taken into account. I guess only a few scikits should be proposed (those which are already rather good, or for which no good alternative is available for now). When some scikit falls out of support because a newer, more powerful one has appeared, or support is dropped, or for any other reason, its name should be excluded from the list of propositions. Regards, D.
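[Editorial sketch: a rough illustration of what such a post-install chooser could look like. The package table and the comma-or-space parsing rule are placeholders taken from the proposal above — nothing like this exists in scipy's setup.py.]

```python
# Hypothetical post-install scikits chooser; names, versions, and
# descriptions below are placeholders, not a real package index.
AVAILABLE = [
    ("learn",   "0.1", "machine learning tools"),
    ("openopt", "0.5", "numerical optimization framework"),
]

def choose(reply):
    """Parse a comma- or space-separated reply like '1 2' or 'learn'."""
    names = [name for name, _, _ in AVAILABLE]
    picked = []
    for token in reply.replace(",", " ").split():
        if token.isdigit() and 1 <= int(token) <= len(AVAILABLE):
            picked.append(AVAILABLE[int(token) - 1][0])
        elif token in names:
            picked.append(token)
    return picked

# Print the menu, then parse a sample reply (in a real installer the
# reply would come from input()).
for i, (name, ver, desc) in enumerate(AVAILABLE, 1):
    print("%d. %-8s %-5s %s" % (i, name, ver, desc))
print(choose("1, openopt"))
```

Either the index or the package name selects an entry, so `choose("1, openopt")` returns `['learn', 'openopt']`.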
From ondrej at certik.cz Fri Dec 28 05:37:30 2007 From: ondrej at certik.cz (Ondrej Certik) Date: Fri, 28 Dec 2007 11:37:30 +0100 Subject: [SciPy-dev] Scikits and stuff In-Reply-To: <4774CE45.6000504@scipy.org> References: <47746EAA.9000402@enthought.com> <477482FC.6060202@enthought.com> <477493BA.9020605@enthought.com> <4774A8CD.90508@enthought.com> <4774CE45.6000504@scipy.org> Message-ID: <85b5c3130712280237qf372782kd1985df8cf5f9afb@mail.gmail.com> On Dec 28, 2007 11:21 AM, dmitrey wrote: > I would like to propose to ask the user (after scipy installation) > whether they want to install some more scikits, for example > > >>> you can select some scikits packages you want to install: > > ver date license description > > 1. learn 0.1 2007-12-01 GPL machine learning > 2. pylab 0.4b 2007-11-01 LGPL matlab-Python bridge > ... > etc > > >>> enter your choice (comma- or space-separated) > > It will require a working internet connection. Alternatively (and/or > optionally) the user can specify a directory containing scikits files, > transferred as tar.gz files or from an svn repository w/o a working internet > connection. This should not be turned on by default. ./setup.py install just needs to install scipy and that's it. No user interaction, no internet connection. Ondrej From stefan at sun.ac.za Fri Dec 28 05:56:19 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Fri, 28 Dec 2007 12:56:19 +0200 Subject: [SciPy-dev] Scikits and stuff In-Reply-To: <4774A8CD.90508@enthought.com> References: <47746EAA.9000402@enthought.com> <477482FC.6060202@enthought.com> <477493BA.9020605@enthought.com> <4774A8CD.90508@enthought.com> Message-ID: <20071228105619.GA9051@mentat.za.net> On Fri, Dec 28, 2007 at 01:42:05AM -0600, Travis E. Oliphant wrote: > scikits --- For BSD third-party packages. These may be packages with > wide-spread appeal with a different calling convention than scipy or > packages that the developers are not done with or just want to keep > their own release cycles.
Code may come out of here for inclusion into > scipy, but it will do so using: > > scipy-somepackage (imports as scipy.somepackage but is distributed > separately) --- Packages that will soon be released with scipy but for > now are being distributed alone because they need a faster release > cycle. These packages involve the input of SciPy developers more than a > scikits package might. I think this may cause some confusion: how would you explain to users that, while they have the latest scipy installed, they cannot import a sub-package under the root scipy namespace? If the package was located under scipy.scikit.somepackage, that would be a bit better, but I recall that Robert said it would be troublesome to implement. As for the scikits, I'd like to see them all under one roof, regardless of their licenses (they were created to support non-BSD licensing, IIRC). Regards Stéfan From stefan at sun.ac.za Fri Dec 28 06:08:41 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Fri, 28 Dec 2007 13:08:41 +0200 Subject: [SciPy-dev] [Numpy-discussion] Doc-day In-Reply-To: References: <477470C0.3030602@enthought.com> Message-ID: <20071228110840.GB9051@mentat.za.net> On Thu, Dec 27, 2007 at 10:41:17PM -0700, Fernando Perez wrote: > I'm afraid I won't be able to participate tomorrow, but one thing to > remember is that with nose, any and all doctest examples should be > automatically picked up (with the appropriate flags). So a *very > easy* way for anyone to contribute is to simply add doctest examples > to the codebase. Those serve automatically two purposes: they are > small tests for each function, and they make the library vastly easier > to use I am not familiar with the nosetest machinery, but may I suggest that numpy is made available to doctests as 'N' or 'np'? As an example I attach a hackish script I use to exercise all the doctests in numpy. Cheers Stéfan -------------- next part -------------- A non-text attachment was scrubbed...
Name: numpy_doctest.py Type: text/x-python Size: 1041 bytes Desc: not available URL: From dmitrey.kroshko at scipy.org Fri Dec 28 08:10:54 2007 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Fri, 28 Dec 2007 15:10:54 +0200 Subject: [SciPy-dev] Scikits and stuff In-Reply-To: <4774AC44.8070305@gmail.com> References: <47746EAA.9000402@enthought.com> <477482FC.6060202@enthought.com> <477493BA.9020605@enthought.com> <4774A8CD.90508@enthought.com> <4774AC44.8070305@gmail.com> Message-ID: <4774F5DE.2090102@scipy.org> There are lots of scikit attributes important to users:

- license
- current maturity status
- development rate

So yielding lots of *sci* classes only to separate scikits by license is not a good idea (+taking into account lots of different license classes). BTW, some scikits are BSD-licensed. I think it should be something like a database, like PyPI. scipy should have a function (like "aptitude install pylab") to list all available scikits (maybe via some kind of search, specifying parameters of interest - isOpenSource, isOSIApproved, including some keywords, etc), and extract and install required scikits from the internet. Maybe it makes sense to start the function after scipy installation (or to yield a text message about the function after scipy installation finishes). Regards, D. From matthieu.brucher at gmail.com Fri Dec 28 08:19:51 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 28 Dec 2007 14:19:51 +0100 Subject: [SciPy-dev] [Numpy-discussion] SciPy Sprint results In-Reply-To: References: <47696873.1060008@enthought.com> Message-ID: My worries are for the Intel compiler and Linux (this way, the compiler is free for my purposes). Last time I tried to use weave, the compilation failed because some headers were missing in Scipy. I don't know if this was fixed, as no additional folder is present in the trunk. Matthieu 2007/12/28, Tom Waite : > > I got the Intel (Windows) compiler to work, but had problems with the > link.
The Intel linker (xilink6) has a problem (multi-file optimization > error) and I noted with my Visual Studio running with Intel, the Intel > linker actually is running the MS linker. This is not completed as of now as > I still need to resolve the minor changes I am having to make to the > inherited distutils scripts in my IntelW.py. > > Tom > > > On 12/27/07, Fernando Perez wrote: > > > > On Dec 27, 2007 11:13 PM, Matthieu Brucher > > wrote: > > > > > > It's worth crediting Min (Benjamin Ragan-Kelley, ipython developer) > > > > for spending the whole day deep in the bowels of weave cleaning up > > > > tens of test failures that had been in for a long time. The weave > > > > tests aren't getting correctly picked up by scipy.test(), hence they > > > > hadn't been reported, but you can see them with weave.test(10). > > > > > > > > > > This is good news :) > > > Will other compilers be supported by the Blitz converters, like ICL or > > > Visual Studio ? Last time I tried to use ICL, some headers were > > missing :| > > > > Tom Waite, from Berkeley, spent a lot of time getting the intel > > compilers to work on win32, but I don't think that code is committed. > > You may want to ping the Berkeley guys tomorrow on irc for details on > > the status of that (I'm not there right now). > > > > Cheers, > > > > f > > _______________________________________________ > > Scipy-dev mailing list > > Scipy-dev at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-dev > > > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > > -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed...
URL: From oliphant at enthought.com Fri Dec 28 12:32:03 2007 From: oliphant at enthought.com (Travis E. Oliphant) Date: Fri, 28 Dec 2007 11:32:03 -0600 Subject: [SciPy-dev] [Numpy-discussion] Doc-day In-Reply-To: <20071228112617.GC9051@mentat.za.net> References: <477470C0.3030602@enthought.com> <20071228112617.GC9051@mentat.za.net> Message-ID: <47753313.2010006@enthought.com> Stefan van der Walt wrote: > On Thu, Dec 27, 2007 at 09:27:09PM -0800, Jarrod Millman wrote: > >> On Dec 27, 2007 7:42 PM, Travis E. Oliphant wrote: >> >>> Doc-day will start tomorrow (in about 12 hours). It will be Friday for >>> much of America and be moving into Saturday for Europe and Asia. Join >>> in on the irc.freenode.net (channel scipy) to coordinate effort. I >>> imagine people will be in and out. I plan on being available in IRC from >>> about 9:30 am CST to 6:00 pm CST and then possibly later. >>> >>> If you are available at different times in different parts of the >>> world, jump in and pick something to work on. >>> >> Since this is our first doc-day, it will be fairly informal. Travis >> is going to be trying to get some estimate of which packages need the >> most work. But if there is some area of NumPy or SciPy you are >> familiar with, please go ahead and pitch in. Here is the current >> NumPy/ SciPy coding standard including docstring standards: >> http://projects.scipy.org/scipy/numpy/wiki/CodingStyleGuidelines >> > > I have some questions regarding Travis' latest modifications to the > documentation guidelines: > > The following section was removed, why? > > """ > A reST-documented module should define:: > > __docformat__ = 'restructuredtext en' > > at the top level in accordance with `PEP 258 > `__. Note that the > ``__docformat__`` variable in a package's ``__init__.py`` file does > not apply to objects defined in subpackages and submodules.
> """ > I don't see the point in every file having a __docformat__ line, when our documentaion formatting tool should already know the standard we are using is. It's just more cruft. Besides the PEP was rejected, so I don't know why we should be following it. We obviously need a pre-processor to map our files to epydoc, so let's do that instead of contorting docstrings into a format demanded by the tool. The tool is a "tool" for us to use not be chained by. > We had a long discussion on the mailing list on the pros and cons of > "*Parameters*:" vs. "Parameters:". I see now that it has been changed > to > > Parameters > ---------- > > Is this still recognised as a list? > > I noted that the examples are now no longer indented: does ReST allow this? > > > Note that building the example documentation, `epydoc example.py`, now > warns: > > File /tmp/example.py, line 19, in example.foo > Warning: Line 24: Wrong underline character for heading. > Warning: Lines 27, 30, 32, 37, 39, 41, 43, 48, 50: Improper paragraph indentation. > > > While I understand the rationale behind > > "The guiding principle is that human readers of the text itself are > given precedence over contorting the docstring so that epydoc_ > produces nice output." > > I think that it would be impractical to break compatibility with the > only documentation builder we currently have (unless there are others > I am unaware of?). > > In my mind, the rationale trumps the "impracticality" especially since epydoc still produces readable output. It is only the formatting that is not pretty-printed (although it doesn't look bad there either). So, AFAIK, nothing is actually broken (except a little HTML formatting on output of epydoc). But, now the docstrings don't look crufty to my eyes. Basically, the problem is that I *really* don't like the *Parameters*: thing and all the indentation that takes place inside of docstrings to satisfy epydoc. 
It is also inconsistent with the docstring standard that Enthought uses for the ETS tools suite. It was really hard for me to start writing docstrings that followed that pattern. It seems better to have docstrings than to have them fit into an epydoc_-defined pattern. I'm sorry if I made changes unilaterally. I didn't think there would be much concern as long as docstrings were moving forward. -Travis From ellisonbg.net at gmail.com Fri Dec 28 13:03:45 2007 From: ellisonbg.net at gmail.com (Brian Granger) Date: Fri, 28 Dec 2007 11:03:45 -0700 Subject: [SciPy-dev] Scikits and stuff In-Reply-To: <47746EAA.9000402@enthought.com> References: <47746EAA.9000402@enthought.com> Message-ID: <6ce0ac130712281003o52e6edb4x701b5a0ef87a781a@mail.gmail.com> > In preparation for doc-day tomorrow, I've been thinking about scikits > and its relationship to scipy. There seem to be three distinct > purposes of scikits: > > 1) A place for specialized tool boxes that build on numpy and/or scipy > that live under a common name-space, but are too specialized to live in > scipy itself. > 2) A place for GPL or similarly-licensed tools so that people can > understand what they are getting and appropriate use can be made. > 3) A place for modularly-installed scipy tools. > > Given these three purposes. It seems that we should have three name-spaces: >From working with lots of end users, I get the feeling that just having numpy and scipy is complicated enough. I completely understand why numpy and scipy are separate packages, but from a users perspective it _is_ complicated. I get lots of questions from users who can't find such and such functions in numpy - when it is in scipy. The addition of scikits complicates the landscape even further - especially because most users don't care about the seemingly subtle differences between BSD/GPL licenses. I fear that having multiple namespaces under scikits just complicates things even further. 
Even if scikits is a single download, a user would potentially have to search through 5 top-level namespaces (numpy, scipy, scikits, gscipy) to find something. This is made worse, given the fact that the names don't really reflect the actual content of the packages. Having two packages whose sole difference is the licenses used by subpackages is a horrible situation. Things like that are "implementation details" from a user's perspective and we shouldn't expose those things as part of our "public API." Because of these complexities, I think in many cases people will keep things out of scikits and just release things as standalone projects. For scikits to be a success, I think its purpose has to be dead clear and its organization has to be dead simple and focused on what users will experience when they sit down to use it. I think a single top-level namespace works best for this. Brian > 1) scikits > 2) ??? perhaps scigpl > 3) scipy (why not use the same namespace --- there is a technological > hurdle that may prevent egg distribution of these until we fix it. But, > I think we can fix it and would rather not invent another name-space for > something that should be scipy). > > This idea occurred to me, when I was thinking about "tagging" modules in > scikits so users could understand the tool. But, at that point it > seemed obvious that we should just have different name-spaces which > would promote the needed clarity. > > Ideas. > > -Travis O.
> > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > From wnbell at gmail.com Fri Dec 28 13:09:43 2007 From: wnbell at gmail.com (Nathan Bell) Date: Fri, 28 Dec 2007 12:09:43 -0600 Subject: [SciPy-dev] Scikits and stuff In-Reply-To: <4774C841.6050607@enthought.com> References: <47746EAA.9000402@enthought.com> <477482FC.6060202@enthought.com> <477493BA.9020605@enthought.com> <4774A8CD.90508@enthought.com> <4774AC44.8070305@gmail.com> <4774C841.6050607@enthought.com> Message-ID: On Dec 28, 2007 3:56 AM, Travis E. Oliphant wrote: > do you want it to have. I see great value in a clear separation > between GPL-inspired licenses and other licenses. So does Richard M. Stallman :) Most users don't care about OSS politics. > If these are all awash in the scikits namespace, then it is going to be > more difficult for people who would like to be able to use scikits > packages but cannot use the GPL to know what they can and can't use. And therefore a majority of users should be inconvenienced for a small minority? > It is the 'mind share' I'm interested in as well. Right now it is > pretty clear that scipy is BSD (or similarly) licensed. It would be > productive if that same kind of advertising were available for scikits. To some, not many. > All I'm saying that the distinction between the licenses that impose > restrictions on what you do with your own code that depends on them and > licenses that don't do that is important enough to warrant a name-space > division. This simply isn't true. I don't use 'gapt-get' or 'gsourceforge'. I'd be a bit irritated if I had to fire up gapt-get for X but had to use apt-get for Y. Not to mention the potential problem of collisions (scikits.foo and gscikits.foo). > > Furthermore, does anyone want to > > police this sort of policy should 100s of scikits be developed? 
> > It is much easier to "police" if all you have to do is change the name > of the package it gets installed in. So you are going to audit the contents of scikits periodically? Even if scikits has hundreds/thousands of packages and contributors? > Sure, but that same person is more likely not going to touch *any* of > scikits if there is GPL code released in its name-space (even if they > are "separate" packages and especially if they are actually hosted on > the same svn tree). It gets too murky for people who need to care to > spend the time figuring it out --- they'll just go buy the off-the-shelf > solution even if it isn't as good in some technical sense. Who are these people? I can't imagine a business taking 'our word' for it and blindly using a scikit in their product. The first time GPL code slips into a scikit the distinction will become meaningless and untrustworthy. The problem I have with 'gscikits' is that if the scikit idea actually takes off then it will require an army of people to keep scikits free of encumbered code. We cannot trust casual contributors to do their homework and keep GPL out of their scikits. The distinction is only as good as the certifier. -- Nathan Bell wnbell at gmail.com From oliphant at enthought.com Fri Dec 28 13:48:00 2007 From: oliphant at enthought.com (Travis E. Oliphant) Date: Fri, 28 Dec 2007 12:48:00 -0600 Subject: [SciPy-dev] Scikits and stuff In-Reply-To: <6ce0ac130712281003o52e6edb4x701b5a0ef87a781a@mail.gmail.com> References: <47746EAA.9000402@enthought.com> <6ce0ac130712281003o52e6edb4x701b5a0ef87a781a@mail.gmail.com> Message-ID: <477544E0.9020804@enthought.com> Brian Granger wrote: >> In preparation for doc-day tomorrow, I've been thinking about scikits >> and its relationship to scipy. There seem to be three distinct >> purposes of scikits: >> >> 1) A place for specialized tool boxes that build on numpy and/or scipy >> that live under a common name-space, but are too specialized to live in >> scipy itself.
>> 2) A place for GPL or similarly-licensed tools so that people can >> understand what they are getting and appropriate use can be made. >> 3) A place for modularly-installed scipy tools. >> >> Given these three purposes, it seems that we should have three name-spaces: >> > > From working with lots of end users, I get the feeling that just > having numpy and scipy is complicated enough. I completely understand > why numpy and scipy are separate packages, but from a user's > perspective it _is_ complicated. I get lots of questions from users > who can't find such and such functions in numpy - when it is in scipy. > Very good. This I can agree with. I would love to see fewer namespaces. Originally, I thought scikits was just supposed to be a place for GPL-like packages to go. But, it would appear that others see other purposes for it. Apparently nobody agrees that we need to keep the scikits namespace from being a mine-field of GPL-like code. I don't care enough about it to continue arguing. I don't buy the arguments that allowing a scipy-somepackage to be distributed separately is going to cause unwarranted confusion. In fact, I would like it to cause a little bit of confusion as to why it doesn't come installed with scipy already (although I suspect that in many distributions it will be delivered), because that puts pressure to get scipy-somepackage into scipy itself when its user base grows. So, I'm going to encourage that by moving some of the sandbox packages to that style of install and not into the scikits name-space. Jarrod has a document listing the packages that are targeted for inclusion into scipy. These should be distributed as scipy-somepackage (or scipy_somepackage).
-Travis From fperez.net at gmail.com Fri Dec 28 15:32:00 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 28 Dec 2007 13:32:00 -0700 Subject: [SciPy-dev] [Numpy-discussion] SciPy Sprint results In-Reply-To: References: <47696873.1060008@enthought.com> Message-ID: On Dec 28, 2007 6:19 AM, Matthieu Brucher wrote: > My worries are for the Intel compiler and Linux (this way, the compiler is > free for my purposes). Are you sure it's free for your purposes? Class use as a student is OK, but research is not, unfortunately: http://www.intel.com/cd/software/products/asmo-na/eng/219771.htm Note that academic use of the products does not qualify for a non-commercial license. Intel offers heavily discounted licenses to academic developers through our Academic Developer Program. They have more details here: http://www.intel.com/cd/software/products/asmo-na/eng/219692.htm I'm pretty sure at some point intel changed this, because I'm almost certain that years ago academic research fell under their free license conditions. Not anymore, in a pretty explicit fashion (except for pure student-doing-unpaid-homework use). Cheers, f From matthieu.brucher at gmail.com Fri Dec 28 15:43:45 2007 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 28 Dec 2007 21:43:45 +0100 Subject: [SciPy-dev] [Numpy-discussion] SciPy Sprint results In-Reply-To: References: <47696873.1060008@enthought.com> Message-ID: Yes, I'm aware of this, but I can test it at home to see what warnings or errors it raises ;) (although I would like to use it in my research, but no budget for this...) Matthieu 2007/12/28, Fernando Perez : > > On Dec 28, 2007 6:19 AM, Matthieu Brucher > wrote: > > My worries are for the Intel compiler and Linux (this way, the compiler > is > > free for my purposes). > > Are you sure it's free for your purposes? 
Class use as a student is > OK, but research is not, unfortunately: > > http://www.intel.com/cd/software/products/asmo-na/eng/219771.htm > > Note that academic use of the products does not qualify for a > non-commercial license. Intel offers heavily discounted licenses to > academic developers through our Academic Developer Program. > > They have more details here: > > http://www.intel.com/cd/software/products/asmo-na/eng/219692.htm > > I'm pretty sure at some point intel changed this, because I'm almost > certain that years ago academic research fell under their free license > conditions. Not anymore, in a pretty explicit fashion (except for > pure student-doing-unpaid-homework use). > > Cheers, > > f > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From fperez.net at gmail.com Fri Dec 28 18:26:27 2007 From: fperez.net at gmail.com (Fernando Perez) Date: Fri, 28 Dec 2007 16:26:27 -0700 Subject: [SciPy-dev] [Numpy-discussion] SciPy Sprint results In-Reply-To: References: <47696873.1060008@enthought.com> Message-ID: On Dec 28, 2007 1:43 PM, Matthieu Brucher wrote: > Yes, I'm aware of this, but I can test it at home to see what warnings or > errors it raises ;) (although I would like to use it in my research, but no > budget for this...) I just figured I'd mention it, because it caught me by surprise last time I went to update from an old version and saw their new faq with such explicit provisions against academic free use... 
cheers, f From peridot.faceted at gmail.com Fri Dec 28 18:42:35 2007 From: peridot.faceted at gmail.com (Anne Archibald) Date: Fri, 28 Dec 2007 18:42:35 -0500 Subject: [SciPy-dev] Scikits and stuff In-Reply-To: <4774C841.6050607@enthought.com> References: <47746EAA.9000402@enthought.com> <477482FC.6060202@enthought.com> <477493BA.9020605@enthought.com> <4774A8CD.90508@enthought.com> <4774AC44.8070305@gmail.com> <4774C841.6050607@enthought.com> Message-ID: On 28/12/2007, Travis E. Oliphant wrote: > All I'm saying is that the distinction between the licenses that impose > restrictions on what you do with your own code that depends on them and > licenses that don't do that is important enough to warrant a name-space > division. I think this is the key point. You think there are lots of these people, and they are important. Others think there are not many of these people and making them work harder is fine. I wonder if the difference of opinions is largely a difference of ideas on what non-BSD licenses allow. In particular, you talk about "restrictions on what you do with your own code". My interpretation is that if I am writing some scientific code and I want to work with numpy/scipy/scikits/what have you, I may do one of two things: * Write python code that simply imports some packages and uses functions/classes from them. * Extract and modify source code from the library to produce a version of numpy/scipy/the scikit that can do more. As I understand the notion of "derived work", the latter is a derived work of the library and so the GPL (for example) forces me to release my modifications under the GPL. But (as I understand it), the former is *not* a derived work of the scikit, and so my code can be under any license I wish. Is this correct? I realize there are packaging issues - if I want to make one tidy executable that includes my code plus python plus all libraries I use, I may need to provide some source code. This does not seem unduly troublesome. 
Anne From robert.kern at gmail.com Fri Dec 28 18:49:07 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 28 Dec 2007 18:49:07 -0500 Subject: [SciPy-dev] Scikits and stuff In-Reply-To: References: <47746EAA.9000402@enthought.com> <477482FC.6060202@enthought.com> <477493BA.9020605@enthought.com> <4774A8CD.90508@enthought.com> <4774AC44.8070305@gmail.com> <4774C841.6050607@enthought.com> Message-ID: <47758B73.2030407@gmail.com> Anne Archibald wrote: > On 28/12/2007, Travis E. Oliphant wrote: >> All I'm saying is that the distinction between the licenses that impose >> restrictions on what you do with your own code that depends on them and >> licenses that don't do that is important enough to warrant a name-space >> division. > > I think this is the key point. You think there are lots of these > people, and they are important. Others think there are not many of > these people and making them work harder is fine. > > I wonder if the difference of opinions is largely a difference of > ideas on what non-BSD licenses allow. > > In particular, you talk about "restrictions on what you do with your own code". > > My interpretation is that if I am writing some scientific code and I > want to work with numpy/scipy/scikits/what have you, I may do one of > two things: > > * Write python code that simply imports some packages and uses > functions/classes from them. > > * Extract and modify source code from the library to produce a version > of numpy/scipy/the scikit that can do more. > > As I understand the notion of "derived work", the latter is a derived > work of the library and so the GPL (for example) forces me to release > my modifications under the GPL. But (as I understand it), the former is > *not* a derived work of the scikit, and so my code can be under any > license I wish. Is this correct? There is no case law to decide this. The law itself is unclear. 
However, the FSF considers both instances an area where you must release the whole code under a GPL-compatible license. Since the FSF is the author of the GPL, most programmers who release their code under the GPL follow this interpretation as well. The programmer's interpretation is the most important one (since at least US law holds as most important the shared understanding of the license by the licensor and the licensee); the FSF's interpretation is only important because it is the most common. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Fri Dec 28 18:53:04 2007 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 28 Dec 2007 18:53:04 -0500 Subject: [SciPy-dev] Scikits and stuff In-Reply-To: <4774C841.6050607@enthought.com> References: <47746EAA.9000402@enthought.com> <477482FC.6060202@enthought.com> <477493BA.9020605@enthought.com> <4774A8CD.90508@enthought.com> <4774AC44.8070305@gmail.com> <4774C841.6050607@enthought.com> Message-ID: <47758C60.7010908@gmail.com> Travis E. Oliphant wrote: > All I'm saying is that the distinction between the licenses that impose > restrictions on what you do with your own code that depends on them and > licenses that don't do that is important enough to warrant a name-space > division. I think it's very important. However, I think that a different namespace package is entirely the wrong tool to use. "License: BSD" and "License: GPL" is the correct tool to use. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From stefan at sun.ac.za Fri Dec 28 19:25:35 2007 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sat, 29 Dec 2007 02:25:35 +0200 Subject: [SciPy-dev] [Numpy-discussion] Doc-day In-Reply-To: <47753313.2010006@enthought.com> References: <477470C0.3030602@enthought.com> <20071228112617.GC9051@mentat.za.net> <47753313.2010006@enthought.com> Message-ID: <20071229002535.GF9051@mentat.za.net> On Fri, Dec 28, 2007 at 11:32:03AM -0600, Travis E. Oliphant wrote: > I don't see the point in every file having a __docformat__ line, when > our documentation formatting tool should already know the standard we are > using. It's just more cruft. Besides, the PEP was rejected, so I > don't know why we should be following it. Sorry, I didn't see the PEP was rejected. Moot point, then. > We obviously need a pre-processor to map our files to epydoc, so let's > do that instead of contorting docstrings into a format demanded by the > tool. That's a good idea, thanks. I'll take a look. > It seems better to have docstrings than to have them fit into an > epydoc-defined pattern. Certainly. Thanks for the feedback. Regards Stéfan From millman at berkeley.edu Mon Dec 31 17:43:37 2007 From: millman at berkeley.edu (Jarrod Millman) Date: Mon, 31 Dec 2007 14:43:37 -0800 Subject: [SciPy-dev] planet.scipy.org Message-ID: Hey, I just wanted to announce that we now have a NumPy/SciPy blog aggregator thanks to Gaël Varoquaux: http://planet.scipy.org/ Feel free to contact me if you have a blog that you would like included. Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/