From david.huard at gmail.com  Wed Nov  1 09:56:57 2006
From: david.huard at gmail.com (David Huard)
Date: Wed, 1 Nov 2006 09:56:57 -0500
Subject: [SciPy-dev] Ebuilds and Attribute error
In-Reply-To: <200610311927.31883.martin.hoefling@gmx.de>
References: <200610311927.31883.martin.hoefling@gmx.de>
Message-ID: <91cf711d0611010656i2312a000tc32b4dcaeca7127d@mail.gmail.com>

There's a typo; it should be

    x,y = mgrid[-1:1:20j,-1:1:20j]

I couldn't find where this is on the wiki. Can you fix it?

David

2006/10/31, Martin Höfling:
>
> Hi there,
>
> first of all, I've created subversion ebuilds for numpy and scipy, so
> if anyone is interested in them just tell me.
> It seems as if there's something wrong with numpy:
>
> I tried from the tutorial in the documentation.
>
> In [1]: from scipy import *
> In [2]: x,y = mgrid[-1:1:20j,-1:1,20j]
>
> ---------------------------------------------------------------------------
> exceptions.AttributeError        Traceback (most recent call last)
>
> /home/martin/
>
> /usr/lib/python2.4/site-packages/numpy/lib/index_tricks.py in
> __getitem__(self, key)
>     129             typ = int
>     130             for k in range(len(key)):
> --> 131                 step = key[k].step
>     132                 start = key[k].start
>     133                 if start is None: start=0
>
> AttributeError: 'complex' object has no attribute 'step'
>
> I also tried reverting to numpy 1.0; no change.
>
> Best wishes,
> Martin
> --
> HTML *always* increases the information content of a posting by a few
> unflattering pieces of information about its author.
> (Thore Tams, de.soc.netzkultur, 17.5.1999)

From martin.hoefling at gmx.de  Wed Nov  1 11:38:59 2006
From: martin.hoefling at gmx.de (Martin Höfling)
Date: Wed, 1 Nov 2006 17:38:59 +0100
Subject: [SciPy-dev] Ebuilds and Attribute error
In-Reply-To: <91cf711d0611010656i2312a000tc32b4dcaeca7127d@mail.gmail.com>
References: <200610311927.31883.martin.hoefling@gmx.de>
	<91cf711d0611010656i2312a000tc32b4dcaeca7127d@mail.gmail.com>
Message-ID: <200611011738.59843.martin.hoefling@gmx.de>

On Wednesday, 1 November 2006 at 15:56, David Huard wrote:

> There's a typo; it should be
> x,y = mgrid[-1:1:20j,-1:1:20j]
>
> I couldn't find where this is on the wiki. Can you fix it?

I have it from an old PDF, probably describing an "ancient" scipy
version. That's why I compiled scipy from svn: I tried to use some
packages from the "sandbox". Thanks for your tip; at least I can now
use the example...

Regards
Martin
--
If the answer seems too vague to you, it may be the fault of the
question. (Daniel Fass in de.org.ccc)

From wbaxter at gmail.com  Wed Nov  1 20:18:26 2006
From: wbaxter at gmail.com (Bill Baxter)
Date: Thu, 2 Nov 2006 10:18:26 +0900
Subject: [SciPy-dev] PIL and sparse matrices?

From what I understand, PIL images are basically stored as a list of
pointers to rows of data. It's a good way to do it if you want to
manipulate images so big that you are unlikely to be able to allocate
that much contiguous free space.

It seems to me that CSR sparse matrices are pretty similar in
structure, just with the addition of a list of indices per row. Perhaps
the same machinery could be dumbed down a bit to allow PIL image data
to be handled directly in SciPy. Perhaps a "UDR" format (uncompressed
dense row)?

Just a thought.
--bb
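For readers who want to see the row layout Bill is describing, here is
a minimal sketch of the three arrays behind scipy's CSR format
(illustrative only; the matrix values are made up). A "UDR" variant
would keep the row pointers but drop the per-row column indices:

    import numpy as np
    from scipy import sparse

    a = np.array([[1., 0., 2.],
                  [0., 0., 3.],
                  [4., 5., 6.]])
    m = sparse.csr_matrix(a)

    print(m.data)     # nonzero values, row by row: [1. 2. 3. 4. 5. 6.]
    print(m.indices)  # column index of each value:  [0 2 2 0 1 2]
    print(m.indptr)   # where each row starts/ends:  [0 2 3 6]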
From cimrman3 at ntc.zcu.cz  Thu Nov  2 06:16:56 2006
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Thu, 02 Nov 2006 12:16:56 +0100
Subject: [SciPy-dev] PIL and sparse matrices?
Message-ID: <4549D3A8.6080905@ntc.zcu.cz>

Bill Baxter wrote:
> From what I understand, PIL images are basically stored as a list of
> pointers to rows of data. It's a good way to do it if you want to
> manipulate images so big that you are unlikely to be able to allocate
> that much contiguous free space.
> It seems to me that CSR sparse matrices are pretty similar in
> structure, just with the addition of a list of indices per row.
> Perhaps the same machinery could be dumbed down a bit to allow PIL
> image data to be handled directly in SciPy. Perhaps a "UDR" format
> (uncompressed dense row)?
>
> Just a thought.
> --bb

There is no problem in adding a new sparse matrix type, but what would
you like to use it for? As it is now, the sparse matrix module is good
for solving large sparse systems of linear equations. The other modules
of SciPy/NumPy do not work with sparse matrices (they do not understand
their format; spmatrix does not inherit from ndarray, it is a separate
composite class), and so to use them you would have to convert the
image to a regular dense array anyway (IMHO).

r.

From nwagner at iam.uni-stuttgart.de  Thu Nov  2 07:37:38 2006
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Thu, 02 Nov 2006 13:37:38 +0100
Subject: [SciPy-dev] Synonym for the Modified Bessel Function of the Third Kind
Message-ID: <4549E692.1020403@iam.uni-stuttgart.de>

Hi all,

Using ipython, "info special" yields:

Bessel Functions

  jn       -- Bessel function of integer order and real argument.
  jv       -- Bessel function of real-valued order and complex argument.
  jve      -- Exponentially scaled Bessel function.
  yn       -- Bessel function of second kind (integer order).
  yv       -- Bessel function of the second kind (real-valued order).
  yve      -- Exponentially scaled Bessel function of the second kind.
  kn       -- Modified Bessel function of the third kind (integer order).
  kv       -- Modified Bessel function of the third kind (real order).
  kve      -- Exponentially scaled modified Bessel function of the third kind.
  iv       -- Modified Bessel function.
  ive      -- Exponentially scaled modified Bessel function.
  hankel1  -- Hankel function of the first kind.
  hankel1e -- Exponentially scaled Hankel function of the first kind.
  hankel2  -- Hankel function of the second kind.
  hankel2e -- Exponentially scaled Hankel function of the second kind.
  lmbda    -- Sequence of lambda functions with arbitrary order v.

kn denotes the modified Bessel function of the third kind. However, the
term "modified Bessel function of the *second* kind" seems to be more
common:

http://mathworld.wolfram.com/ModifiedBesselFunctionoftheThirdKind.html

It would be nice if "*second*" could be added.

Nils
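For reference, the function in question is what is usually written
K_v(x); a quick check against the half-integer closed form
K_{1/2}(x) = sqrt(pi/(2x)) * exp(-x) shows that scipy.special.kv is
indeed this function (an illustrative check, not from the thread):

    import numpy as np
    from scipy import special

    x = 2.0
    print(special.kv(0.5, x))                     # 0.1199377...
    print(np.sqrt(np.pi / (2 * x)) * np.exp(-x))  # same value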
From myeates at jpl.nasa.gov  Thu Nov  2 13:56:48 2006
From: myeates at jpl.nasa.gov (Mathew Yeates)
Date: Thu, 02 Nov 2006 10:56:48 -0800
Subject: [SciPy-dev] 0.5.2 Windows binary?
Message-ID: <454A3F70.60503@jpl.nasa.gov>

Anybody got a 0.5.2 (or 0.5.1 recompiled with numpy 1.0) Windows binary?

Mathew

From nwagner at iam.uni-stuttgart.de  Mon Nov  6 10:17:58 2006
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Mon, 06 Nov 2006 16:17:58 +0100
Subject: [SciPy-dev] How to speed up the computation of triple integrals
Message-ID: <454F5226.9010007@iam.uni-stuttgart.de>

Hi all,

Is there a way to speed up the computation of triple integrals?

Attached is a small script illustrating the task. Any pointer would be
appreciated.

Nils
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 8node.py
Type: text/x-python
Size: 3204 bytes
Desc: not available

From david.huard at gmail.com  Mon Nov  6 11:28:20 2006
From: david.huard at gmail.com (David Huard)
Date: Mon, 6 Nov 2006 11:28:20 -0500
Subject: [SciPy-dev] How to speed up the computation of triple integrals
In-Reply-To: <454F5226.9010007@iam.uni-stuttgart.de>
References: <454F5226.9010007@iam.uni-stuttgart.de>
Message-ID: <91cf711d0611060828m4da64a8cj9add1dd693a6a5ae@mail.gmail.com>

Have you tried vectorizing? You could define

    sign_r = array([1,-1,-1,1,1,-1,-1,1])
    sign_s = ...
    sign_t = ...

    def h(r,s,t):
        return .125 * (1 + sign_r * r) * (1 + sign_s * s) * (1 + sign_t * t)

and similarly for hp. I don't know how much speedup you'd get, but I
guess it's worth a try.

David

As I understand it, this is not a topic for scipy-dev, but rather for
scipy-user or numpy-discussion.

2006/11/6, Nils Wagner:
> Hi all,
>
> Is there a way to speed up the computation of triple integrals?
> [...]
>
> Nils
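For concreteness, a self-contained version of the vectorization David
sketches. sign_r is his; sign_s and sign_t here are assumptions taken
from a standard 8-node brick corner numbering, so the attached 8node.py
may order the corners differently:

    from numpy import array, newaxis

    sign_r = array([1, -1, -1, 1, 1, -1, -1, 1])
    sign_s = array([1, 1, -1, -1, 1, 1, -1, -1])   # assumed ordering
    sign_t = array([1, 1, 1, 1, -1, -1, -1, -1])   # assumed ordering

    def h(r, s, t):
        # all 8 trilinear shape functions at once; r, s, t may be
        # scalars or arrays of quadrature points (broadcast over signs)
        return 0.125 * (1 + sign_r * r) * (1 + sign_s * s) * (1 + sign_t * t)

    vals = h(0.25, -0.5, 0.1)   # shape (8,), one call per point set
    # a whole batch of quadrature points at once:
    pts = array([[0.25, -0.5, 0.1], [0.0, 0.0, 0.0]])
    vals = h(pts[:, 0, newaxis], pts[:, 1, newaxis], pts[:, 2, newaxis])  # (2, 8)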
From fullung at gmail.com  Mon Nov  6 16:01:46 2006
From: fullung at gmail.com (Albert Strasheim)
Date: Mon, 6 Nov 2006 23:01:46 +0200
Subject: [SciPy-dev] scipy.org serving up Internal Server Errors?
Message-ID: <013201c701e6$c56a4390$0a83a8c0@ratbert>

Hello all,

scipy.org seems to be serving up Internal Server Error pages
intermittently. Alternatively, trying to access something like
http://projects.scipy.org/mpi4py/timeline simply hangs after the
connection is established.

Enthought folks?

Cheers,
Albert

From fperez.net at gmail.com  Mon Nov  6 16:35:44 2006
From: fperez.net at gmail.com (Fernando Perez)
Date: Mon, 6 Nov 2006 14:35:44 -0700
Subject: [SciPy-dev] scipy.org serving up Internal Server Errors?
In-Reply-To: <013201c701e6$c56a4390$0a83a8c0@ratbert>
References: <013201c701e6$c56a4390$0a83a8c0@ratbert>

On 11/6/06, Albert Strasheim <fullung at gmail.com> wrote:
> scipy.org seems to be serving up Internal Server Error pages
> intermittently. [...]

At this moment, I can access the mpi4py timeline just fine from here
(colorado.edu domain). Just a data point.

Cheers,
f

From ellisonbg.net at gmail.com  Mon Nov  6 17:04:34 2006
From: ellisonbg.net at gmail.com (Brian Granger)
Date: Mon, 6 Nov 2006 15:04:34 -0700
Subject: [SciPy-dev] scipy.org serving up Internal Server Errors?
In-Reply-To: <013201c701e6$c56a4390$0a83a8c0@ratbert>
References: <013201c701e6$c56a4390$0a83a8c0@ratbert>
Message-ID: <6ce0ac130611061404o5fd49e6cl92747c73d71bdc3b@mail.gmail.com>

Jeff Strunk said that apache needed to be restarted and that things
should be fine now.

Brian

On 11/6/06, Albert Strasheim <fullung at gmail.com> wrote:
> scipy.org seems to be serving up Internal Server Error pages
> intermittently. [...]

From nwagner at iam.uni-stuttgart.de  Tue Nov  7 03:25:25 2006
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 07 Nov 2006 09:25:25 +0100
Subject: [SciPy-dev] How to speed up the computation of triple integrals
In-Reply-To: <91cf711d0611060828m4da64a8cj9add1dd693a6a5ae@mail.gmail.com>
References: <454F5226.9010007@iam.uni-stuttgart.de>
	<91cf711d0611060828m4da64a8cj9add1dd693a6a5ae@mail.gmail.com>
Message-ID: <455042F5.6010506@iam.uni-stuttgart.de>

David Huard wrote:
> Have you tried vectorizing? You could define
> [...]
> I don't know how much speedup you'd get, but I guess it's worth a try.

Thank you for your hint. I will give it a try.

Nils

From jonathan.taylor at stanford.edu  Thu Nov  9 16:37:55 2006
From: jonathan.taylor at stanford.edu (Jonathan Taylor)
Date: Thu, 09 Nov 2006 16:37:55 -0500
Subject: [SciPy-dev] banded generalized eigenvalue problems
Message-ID: <45539FB3.7040608@stanford.edu>

Earlier message bounced: wrong email address:
------------------

Hi,

I need to solve a banded generalized eigenvalue problem and was going
to try to mimic the code in generic_flapack.pyf to generate a wrapper
for the appropriate LAPACK function, dsbgv (and its name variants).

Is this the recommended way of using extra LAPACK functionality that is
not presently in scipy? If not, any other suggestions?

Thanks,
Jonathan

From robert.kern at gmail.com  Thu Nov  9 17:11:23 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 09 Nov 2006 16:11:23 -0600
Subject: [SciPy-dev] banded generalized eigenvalue problems
In-Reply-To: <45539FB3.7040608@stanford.edu>
References: <45539FB3.7040608@stanford.edu>
Message-ID: <4553A78B.5080206@gmail.com>

Jonathan Taylor wrote:
> I need to solve a banded generalized eigenvalue problem and was going
> to try to mimic the code in generic_flapack.pyf to generate a wrapper
> for the appropriate LAPACK function, dsbgv (and its name variants).
>
> Is this the recommended way of using extra LAPACK functionality that
> is not presently in scipy? If not, any other suggestions?
I would say that's a pretty good approach, not least because your
wrapper can then immediately become a contribution to scipy.linalg.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
 -- Umberto Eco

From a.u.r.e.l.i.a.n at gmx.net  Fri Nov 10 03:01:30 2006
From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert)
Date: Fri, 10 Nov 2006 09:01:30 +0100
Subject: [SciPy-dev] banded generalized eigenvalue problems
In-Reply-To: <4553A78B.5080206@gmail.com>
References: <45539FB3.7040608@stanford.edu>
	<4553A78B.5080206@gmail.com>
Message-ID: <200611100901.30754.a.u.r.e.l.i.a.n@gmx.net>

On Thursday 09 November 2006 23:11, Robert Kern wrote:
> Jonathan Taylor wrote:
>> I need to solve a banded generalized eigenvalue problem and was going
>> to try to mimic the code in generic_flapack.pyf to generate a wrapper
>> for the appropriate LAPACK function, dsbgv (and its name variants).
>> [...]
>
> I would say that's a pretty good approach, not least because your
> wrapper can then immediately become a contribution to scipy.linalg.

Note, however, that you cannot pass a wrapper like the ones in
generic_flapack.pyf to f2py directly. Some preprocessing is done to
avoid having to write similar wrappers for each type (s, d, c, z)
separately. Looking at generic_flapack.pyf, you will note template
terms like <...>, <...>, etc., which are expanded accordingly. Be
careful: you probably cannot put the real and complex routines
together.

The preprocessing is done by $SCIPY_DIR/Lib/linalg/interface_gen.py.
You will need this if you want to test your wrapper before inclusion
in scipy.

- Johannes

From nwagner at iam.uni-stuttgart.de  Mon Nov 13 13:10:49 2006
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Mon, 13 Nov 2006 19:10:49 +0100
Subject: [SciPy-dev] LAPACK 3.1

FWIW: http://www.netlib.org/lapack/lapack-3.1.0.changes

Nils

From bertle at smoerz.org  Wed Nov 15 06:35:39 2006
From: bertle at smoerz.org (Roman Bertle)
Date: Wed, 15 Nov 2006 12:35:39 +0100
Subject: [SciPy-dev] scipy.stats.sem is wrong
Message-ID: <20061115113539.GA8738@smoerz.org>

Hello,

I think scipy.stats.sem is wrong. It gives the same result as
scipy.stats.stderr (using N-1 and not N), whereas scipy.stats.tsem uses
N and gives the correct result. I have attached a patch correcting this.

Related to this, I wonder why there are so many related functions in
scipy.stats doing the same thing, but in slightly different ways. E.g.
there are nanstd, std and tstd; some use numpy.std, some do not; some
take an axis argument, some do not. And there is samplestd and
samplevar, but "sampleerr" is called sem instead. Shouldn't these
functions be unified somehow?
Regards,

Roman

-------------------------
diff -rud python-scipy-0.5.1/Lib/stats/stats.py python-scipy-0.5.1-new/Lib/stats/stats.py
--- python-scipy-0.5.1/Lib/stats/stats.py       2006-08-29 11:58:37.000000000 +0200
+++ python-scipy-0.5.1-new/Lib/stats/stats.py   2006-11-15 12:18:23.000000000 +0100
@@ -1166,9 +1166,7 @@
     integer (the axis over which to operate)
 """
     a, axis = _chk_asarray(a, axis)
-    n = a.shape[axis]
-    s = samplestd(a,axis) / sqrt(n-1)
-    return s
+    return samplestd(a,axis) / float(sqrt(a.shape[axis]))


 def z(a, score):
diff -rud python-scipy-0.5.1/Lib/stats/tests/test_stats.py python-scipy-0.5.1-new/Lib/stats/tests/test_stats.py
--- python-scipy-0.5.1/Lib/stats/tests/test_stats.py    2006-08-29 11:58:37.000000000 +0200
+++ python-scipy-0.5.1-new/Lib/stats/tests/test_stats.py        2006-11-15 12:11:29.000000000 +0100
@@ -740,15 +740,16 @@
 ##        assert_approx_equal(y,0.775177399)
         y = scipy.stats.stderr(self.testcase)
         assert_approx_equal(y,0.6454972244)
+
     def check_sem(self):
         """
         this is not in R, so used
-        sqrt(var(testcase)*3/4)/sqrt(3)
+        sqrt(samplevar(testcase))/sqrt(4)
         """
         #y = scipy.stats.sem(self.shoes[0])
         #assert_approx_equal(y,0.775177399)
         y = scipy.stats.sem(self.testcase)
-        assert_approx_equal(y,0.6454972244)
+        assert_approx_equal(y,0.5590169944)

     def check_z(self):
         """
-------------------------
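To make the test values concrete: for the test case [1, 2, 3, 4] the
two conventions give exactly the numbers in the patched test, as a
quick check in plain numpy (independent of scipy.stats) shows:

    import numpy as np

    x = np.array([1., 2., 3., 4.])
    n = len(x)
    ss = ((x - x.mean())**2).sum()   # sum of squared deviations = 5.0

    stderr_like = np.sqrt(ss / (n - 1)) / np.sqrt(n)   # 0.6454972244
    sem_patched = np.sqrt(ss / n) / np.sqrt(n)         # 0.5590169944
    # the old sem, samplestd/sqrt(n-1), equals stderr_like:
    old_sem = np.sqrt(ss / n) / np.sqrt(n - 1)         # 0.6454972244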
From david.huard at gmail.com  Wed Nov 15 09:21:56 2006
From: david.huard at gmail.com (David Huard)
Date: Wed, 15 Nov 2006 09:21:56 -0500
Subject: [SciPy-dev] scipy.stats.sem is wrong
In-Reply-To: <20061115113539.GA8738@smoerz.org>
References: <20061115113539.GA8738@smoerz.org>
Message-ID: <91cf711d0611150621r489c792bna418af44a19c5a94@mail.gmail.com>

Roman,

A couple of months ago was Statistical Review Month, when users and
devs were asked to look at functions from stats, weed out the
duplicates, add docstrings, etc. If I remember correctly, at the end of
the month, unreviewed functions were to be moved to the sandbox (a good
incentive, if you ask me). The work is started (thanks to Robert), but
it's not over. If you want to have a go at it, look at the scipy Trac
site; there are dozens of open tickets for statistical functions.
That's also the place to submit patches:

http://projects.scipy.org/scipy/scipy/report/8

Regards,
David

2006/11/15, Roman Bertle <bertle at smoerz.org>:
> I think scipy.stats.sem is wrong. It gives the same result as
> scipy.stats.stderr (using N-1 and not N), whereas scipy.stats.tsem
> uses N and gives the correct result. I have attached a patch
> correcting this.
> [...]

From berthold.hoellmann at gl-group.com  Wed Nov 15 11:53:13 2006
From: berthold.hoellmann at gl-group.com (Berthold Höllmann)
Date: Wed, 15 Nov 2006 17:53:13 +0100
Subject: [SciPy-dev] Problem converting from scipy_distutils to numpy.distutils

Hello,

I maintain a larger project that has used scipy_distutils for quite
some time. Now I have found the time to convert the project from
Numeric to numpy. I want to be able to use different versions of the
Intel Fortran compiler under Linux easily, so in my path I only have
scripts named like ifort90 or ifort91 that call the correct compiler.
To compile my project I use a command line like:

    python setup.py config_fc --fcompiler=intel --f90exec=ifort91 --f77exec=ifort91 build

It used to work with scipy_distutils, but now I get:

    ...
    customize UnixCCompiler
    using build_ext
    warning: build_ext: fcompiler=intel is not available.
    ...
    Traceback (most recent call last):
      File "setup.py", line 201, in <module>
        ext_modules=extInfo.exts)
      File "/usr/software/gltools/python/Python-2.5/lib/python2.5/site-packages/numpy-1.0-py2.5-linux-i686.egg/numpy/distutils/core.py", line 174, in setup
        return old_setup(**new_attr)
      File "/usr/local/gltools/python/Python-2.5/lib/python2.5/distutils/core.py", line 151, in setup
        dist.run_commands()
      File "/usr/local/gltools/python/Python-2.5/lib/python2.5/distutils/dist.py", line 974, in run_commands
        self.run_command(cmd)
      File "/usr/local/gltools/python/Python-2.5/lib/python2.5/distutils/dist.py", line 994, in run_command
        cmd_obj.run()
      File "/usr/local/gltools/python/Python-2.5/lib/python2.5/distutils/command/build.py", line 112, in run
        self.run_command(cmd_name)
      File "/usr/local/gltools/python/Python-2.5/lib/python2.5/distutils/cmd.py", line 333, in run_command
        self.distribution.run_command(command)
      File "/usr/local/gltools/python/Python-2.5/lib/python2.5/distutils/dist.py", line 994, in run_command
        cmd_obj.run()
      File "/usr/software/gltools/python/Python-2.5/lib/python2.5/site-packages/numpy-1.0-py2.5-linux-i686.egg/numpy/distutils/command/build_ext.py", line 121, in run
        self.build_extensions()
      File "/usr/local/gltools/python/Python-2.5/lib/python2.5/distutils/command/build_ext.py", line 407, in build_extensions
        self.build_extension(ext)
      File "/usr/software/gltools/python/Python-2.5/lib/python2.5/site-packages/numpy-1.0-py2.5-linux-i686.egg/numpy/distutils/command/build_ext.py", line 312, in build_extension
        link = self.fcompiler.link_shared_object
    AttributeError: 'NoneType' object has no attribute 'link_shared_object'

It seems I need a command line switch for the linker as well?

Kind regards,
Berthold Höllmann
--
Germanischer Lloyd AG
CAE Development
Vorsetzen 35
20459 Hamburg
Phone: +49(0)40 36149-7374
Fax: +49(0)40 36149-7320
e-mail: berthold.hoellmann at gl-group.com
Internet: http://www.gl-group.com

From david at ar.media.kyoto-u.ac.jp  Thu Nov 16 06:33:06 2006
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Thu, 16 Nov 2006 20:33:06 +0900
Subject: [SciPy-dev] PyEM problem
In-Reply-To: <455C470A.8070208@ai.rug.nl>
References: <455C470A.8070208@ai.rug.nl>
Message-ID: <455C4C72.2090109@ar.media.kyoto-u.ac.jp>

Axel Brink wrote:
> Dear David,
>
> I run into a problem when trying your PyEM example "Creating, sampling
> and plotting a mixture":
>
> Python 2.4.3 (#1, Jun 13 2006, 11:46:08)
> [GCC 4.1.1 20060525 (Red Hat 4.1.1-1)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import numpy as N
> >>> import pylab as P
> >>> from scipy.sandbox.pyem import GM
> using scipy.cluster.vq
> >>>
> >>> #------------------------------
> ... # Hyper parameters:
> ... #   - K: number of clusters
> ... #   - d: dimension
> ... k = 3
> >>> d = 2
> >>>
> >>> #-------------------------------------------------------
> ... # Values for weights, mean and (diagonal) variances
> ... #  - the weights are an array of rank 1
> ... #  - mean is expected to be rank 2 with one row for one component
> ... #  - variances are also expected to be rank 2. For diagonal, one
> ... #    row is one diagonal; for full, the first d rows are the first
> ... #    variance, etc... In this case, the variance matrix should be
> ... #    k*d rows and d columns
> ... w = N.array([0.2, 0.45, 0.35])
> >>> mu = N.array([[4.1, 3], [1, 5], [-2, -3]])
> >>> va = N.array([[1, 1.5], [3, 4], [2, 3.5]])
> >>>
> >>> #-----------------------------------------
> ... # First method: directly from parameters.
> ... # Both methods are equivalent.
> ... gm = GM.fromvalues(w, mu, va)
> >>>
> >>> #-------------------------------------
> ... # Second method to build a GM instance:
> ... gm = GM(k, d, mode = 'diag')
> >>> # set_params checks that w, mu, and va correspond to k, d and m
> ... gm.set_params(w, mu, va)
> Traceback (most recent call last):
>   File "<stdin>", line 2, in ?
> AttributeError: GM instance has no attribute 'set_params'
>
> I also tried 'set_param', but this doesn't help either:
>
> >>> gm.set_param(w, mu, va)
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
>   File "/usr/lib/python2.4/site-packages/scipy/sandbox/pyem/gauss_mix.py",
>     line 84, in set_param
>     raise GmParamError("Number of given components is %d, expected %d"
> NameError: global name 'shape' is not defined
>
> Numpy version: 1.0.1.dev3436
> Scipy version: 0.5.2.dev2319
>
> Can you help me out or point me to someone else who can? Thanks in
> advance.

Thank you for your interest in PyEM. First, may I request that next
time you file a ticket on the scipy tracker? It is easier to track
things, and everybody has access to it.

Now, there are actually three problems in the code! A syntax error, as
you spotted; the example is also wrong because I inverted d and k:
GM.__init__ requires first the number of dimensions, then the number of
components... That error would have been obvious if there weren't a
third one, namely an error in the raised exception string! I will
update the code and doc accordingly; everything should be available in
svn in a few minutes.

cheers,

David
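Putting David's description together, the corrected construction from
the example would presumably read as follows (a sketch based on his
explanation, using the values from the broken session):

    import numpy as N
    from scipy.sandbox.pyem import GM

    w  = N.array([0.2, 0.45, 0.35])
    mu = N.array([[4.1, 3], [1, 5], [-2, -3]])
    va = N.array([[1, 1.5], [3, 4], [2, 3.5]])

    d, k = 2, 3                  # dimension first, then components
    gm = GM(d, k, mode='diag')   # was GM(k, d, ...) in the broken example
    gm.set_param(w, mu, va)      # the method is set_param, not set_params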
From nmarais at sun.ac.za  Fri Nov 17 21:19:32 2006
From: nmarais at sun.ac.za (Neilen Marais)
Date: Sat, 18 Nov 2006 04:19:32 +0200
Subject: [SciPy-dev] ARPACK wrapper

Hi,

I've committed the beginnings of an ARPACK wrapper, partially
addressing http://projects.scipy.org/scipy/scipy/ticket/231, in r2323.

I based it on the arpack-0.10.tar.bz2 source that Aric attached to
#231. I added an ARPACK driver that demonstrates how to use ARPACK's
shift-invert mode to solve a generalised eigensystem. Unfortunately I
haven't generated any compact test data for that yet (I was using
matrices created by my own FEM app), but will soon.

As things stand now, a lot of work still remains to be done. I can
think of (off the top of my head):

1) Merging Aric's and my drivers
2) Handling more cases with an easy-to-use interface
3) Using the symmetric/complex/etc. modes
4) Many more, no doubt ;)

I'm away for the rest of the weekend, but I look forward to comments on
Monday :)

Regards
Neilen

--
you know its kind of tragic
we live in the new world
but we've lost the magic
-- Battery 9 (www.battery9.co.za)

From nwagner at iam.uni-stuttgart.de  Mon Nov 20 03:45:47 2006
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Mon, 20 Nov 2006 09:45:47 +0100
Subject: [SciPy-dev] ARPACK wrapper
Message-ID: <45616B3B.2060702@iam.uni-stuttgart.de>

Neilen Marais wrote:
> I've committed the beginnings of an ARPACK wrapper, partially
> addressing http://projects.scipy.org/scipy/scipy/ticket/231, in r2323.
> [...]

Hi Neilen,

Thank you very much for your hard work on the ARPACK wrapper. Just now
I have installed the sandbox package and tried to solve a random
eigenvalue problem. In order to compare the results with the workhorse
eig, I have used a very small order n. The number of desired eigenpairs
is k=4 in my example, but the shape of the array of eigenvectors is
(n,k+1), and for the eigenvalues it is (k+1,). The eigenvectors
returned by arpack.eigen are zero.

Nils
-------------- next part --------------
A non-text attachment was scrubbed...
Name: test_eigs.py
Type: text/x-python
Size: 133 bytes
Desc: not available

From nwagner at iam.uni-stuttgart.de  Mon Nov 20 08:07:03 2006
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Mon, 20 Nov 2006 14:07:03 +0100
Subject: [SciPy-dev] Current status of arpack.speigs.eigvals
Message-ID: <4561A877.9050005@iam.uni-stuttgart.de>

Hi all,

I am wondering if arpack.speigs.eigvals is ready to compute
eigenvalues:

    a = random.rand(10,10)
    matvec = lambda x: dot(a,x)
    nev = 4
    ncv = 2*nev
    w1,v1 = arpack.speigs.eigvals(matvec, a.shape[0], nev=nev, ncv=ncv)

    Traceback (most recent call last):
      File "test_eigs.py", line 34, in ?
        w1,v1 = arpack.speigs.eigvals(matvec,a.shape[0],nev=nev,ncv=ncv)
      File "/usr/lib64/python2.4/site-packages/scipy/sandbox/arpack/speigs.py",
        line 71, in eigvals
        if info != 0: raise "Hell" # Indicates some error during the Arnouldi iterations
    Hell

Am I missing something?

Nils

From hagberg at lanl.gov  Mon Nov 20 09:33:26 2006
From: hagberg at lanl.gov (Aric Hagberg)
Date: Mon, 20 Nov 2006 07:33:26 -0700
Subject: [SciPy-dev] ARPACK wrapper
In-Reply-To: <45616B3B.2060702@iam.uni-stuttgart.de>
References: <45616B3B.2060702@iam.uni-stuttgart.de>
Message-ID: <20061120143326.GD21335@t7.lanl.gov>

On Mon, Nov 20, 2006 at 09:45:47AM +0100, Nils Wagner wrote:
> Hi Neilen,
>
> In order to compare the results with the workhorse eig, I have used a
> very small order n. The number of desired eigenpairs is k=4 in my
> example, but the shape of the array of eigenvectors is (n,k+1), and
> for the eigenvalues it is (k+1,). The eigenvectors returned by
> arpack.eigen are zero.
>
> Nils

Hi Nils,

The size of the return arrays is intentional (k+1). This is the way
ARPACK returns eigenvalues and eigenvectors for nonsymmetric matrices.
I think the idea is that the k'th eigenvalue (largest, smallest, etc.)
might be part of a complex conjugate pair, and then you might want k+1
(the conjugate). Otherwise, if the k'th eigenvalue is real, that entry
is zero.

I can run your test example successfully. Do the tests in
arpack/tests/test_arpack.py work for you?

Aric

From nwagner at iam.uni-stuttgart.de  Mon Nov 20 09:47:54 2006
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Mon, 20 Nov 2006 15:47:54 +0100
Subject: [SciPy-dev] ARPACK wrapper
In-Reply-To: <20061120143326.GD21335@t7.lanl.gov>
References: <45616B3B.2060702@iam.uni-stuttgart.de>
	<20061120143326.GD21335@t7.lanl.gov>
Message-ID: <4561C01A.9000109@iam.uni-stuttgart.de>

Aric Hagberg wrote:
> The size of the return arrays is intentional (k+1). This is the way
> ARPACK returns eigenvalues and eigenvectors for nonsymmetric matrices.
> [...]

help(arpack.eigen) yields:

    eigen(A, k=6, M=None, ncv=None, which='LM', maxiter=None, tol=0,
          return_eigenvectors=True)

    Return k eigenvalues and eigenvectors of the matrix A.

    Solves A * x[i] = w[i] * x[i], the standard eigenvalue problem for
    w[i] eigenvalues with corresponding eigenvectors x[i].

    Inputs:

    A -- A matrix, array or an object with matvec(x) method to perform
         the matrix vector product A * x. The sparse matrix formats
         in scipy.sparse are appropriate for A.
    k -- The number of eigenvalues/eigenvectors desired
    M -- (Not implemented)
         A symmetric positive-definite matrix for the generalized
         eigenvalue problem A * x = w * M * x

    Outputs:

    w -- An array of k eigenvalues
    v -- An array of k eigenvectors; v[i] is the eigenvector
         corresponding to the eigenvalue w[i]

This info doesn't match your explanation.
Concerning the tests, I get:

/usr/bin/python /usr/lib64/python2.4/site-packages/scipy/sandbox/arpack/tests/test_speigs.py
  Found 1 tests for __main__

 _naupd: Number of update iterations taken
 -----------------------------------------
    1 -    1:        17

 _naupd: Number of wanted "converged" Ritz values
 ------------------------------------------------
    1 -    1:         4

 _naupd: Real part of the final Ritz values
 ------------------------------------------
    1 -    4:   1.033E+00   7.746E-01   5.164E-01   2.582E-01

 _naupd: Imaginary part of the final Ritz values
 -----------------------------------------------
    1 -    4:   0.000E+00   0.000E+00   0.000E+00   0.000E+00

 _naupd: Associated Ritz estimates
 ---------------------------------
    1 -    4:   4.508E-17   7.450E-22   7.087E-26   4.834E-29

 =============================================
 = Nonsymmetric implicit Arnoldi update code =
 = Version Number: 2.4                       =
 = Version Date:   07/31/96                  =
 =============================================
 = Summary of timing statistics              =
 =============================================

 Total number update iterations             =    17
 Total number of OP*x operations            =    59
 Total number of B*x operations             =     0
 Total number of reorthogonalization steps  =    58
 Total number of iterative refinement steps =     0
 Total number of restart steps              =     0
 Total time in user OP*x operation          =     0.004000
 Total time in user B*x operation           =     0.000000
 Total time in Arnoldi update routine       =     0.008000
 Total time in naup2 routine                =     0.008000
 Total time in basic Arnoldi iteration loop =     0.004000
 Total time in reorthogonalization phase    =     0.000000
 Total time in (re)start vector generation  =     0.000000
 Total time in Hessenberg eig. subproblem   =     0.000000
 Total time in getting the shifts           =     0.000000
 Total time in applying the shifts          =     0.000000
 Total time in convergence testing          =     0.000000
 Total time in computing final Ritz vectors =     0.000000

.
----------------------------------------------------------------------
Ran 1 test in 0.107s

OK

/usr/bin/python /usr/lib64/python2.4/site-packages/scipy/sandbox/arpack/tests/test_arpack.py
  Found 5 tests for __main__
.....
----------------------------------------------------------------------
Ran 5 tests in 0.039s

OK

Nils

From nwagner at iam.uni-stuttgart.de  Mon Nov 20 09:57:37 2006
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Mon, 20 Nov 2006 15:57:37 +0100
Subject: [SciPy-dev] ARPACK wrapper
In-Reply-To: <4561C01A.9000109@iam.uni-stuttgart.de>
References: <45616B3B.2060702@iam.uni-stuttgart.de>
	<20061120143326.GD21335@t7.lanl.gov>
	<4561C01A.9000109@iam.uni-stuttgart.de>
Message-ID: <4561C261.90606@iam.uni-stuttgart.de>

Nils Wagner wrote:
> Aric Hagberg wrote:
>> The size of the return arrays is intentional (k+1). This is the way
>> ARPACK returns eigenvalues and eigenvectors for nonsymmetric
>> matrices. [...]
>> I can run your test example successfully. Do the tests in
>> arpack/tests/test_arpack.py work for you?
>>
>> Aric
>
> [...]
Hi Aric,

Please can you run the following? I get something like:

 _naupd: Number of update iterations taken
 -----------------------------------------
    1 -    1:        61

 _naupd: Number of wanted "converged" Ritz values
 ------------------------------------------------
    1 -    1:         0

 _naupd: Real part of the final Ritz values
 ------------------------------------------

 _naupd: Imaginary part of the final Ritz values
 -----------------------------------------------

 _naupd: Associated Ritz estimates
 ---------------------------------

 =============================================
 = Nonsymmetric implicit Arnoldi update code =
 = Version Number: 2.4                       =
 = Version Date:   07/31/96                  =
 =============================================
 = Summary of timing statistics              =
 =============================================

 Total number update iterations             =    61
 Total number of OP*x operations            =   225
 Total number of B*x operations             =     0
 Total number of reorthogonalization steps  =   118
 Total number of iterative refinement steps =     0
 Total number of restart steps              =     0
 Total time in user OP*x operation          =     0.008002
 Total time in user B*x operation           =     0.000000
 Total time in Arnoldi update routine       =     0.020002
 Total time in naup2 routine                =     0.020002
 Total time in basic Arnoldi iteration loop =     0.008002
 Total time in reorthogonalization phase    =     0.000000
 Total time in (re)start vector generation  =     0.000000
 Total time in Hessenberg eig. subproblem   =     0.008000
 Total time in getting the shifts           =     0.000000
 Total time in applying the shifts          =     0.004000
 Total time in convergence testing          =     0.000000
 Total time in computing final Ritz vectors =     0.000000

Traceback (most recent call last):
  File "eigs1.py", line 11, in ?
    w1,v1 = arpack.speigs.eigvals(matvec,a.shape[0],nev=nev,ncv=ncv)
  File "/usr/lib64/python2.4/site-packages/scipy/sandbox/arpack/speigs.py",
    line 71, in eigvals
    if info != 0: raise "Hell" # Indicates some error during the Arnouldi iterations
Hell
-------------- next part --------------
A non-text attachment was scrubbed...
Name: eigs1.py
Type: text/x-python
Size: 193 bytes
Desc: not available

From hagberg at lanl.gov  Mon Nov 20 10:07:03 2006
From: hagberg at lanl.gov (Aric Hagberg)
Date: Mon, 20 Nov 2006 08:07:03 -0700
Subject: [SciPy-dev] ARPACK wrapper
In-Reply-To: <4561C261.90606@iam.uni-stuttgart.de>
References: <45616B3B.2060702@iam.uni-stuttgart.de>
	<20061120143326.GD21335@t7.lanl.gov>
	<4561C01A.9000109@iam.uni-stuttgart.de>
	<4561C261.90606@iam.uni-stuttgart.de>
Message-ID: <20061120150703.GA23328@t7.lanl.gov>

On Mon, Nov 20, 2006 at 03:57:37PM +0100, Nils Wagner wrote:
>
> Please can you run the following? I get something like:
>
>  _naupd: Number of update iterations taken
>  -----------------------------------------
>     1 -    1:        61

[snip]

> Traceback (most recent call last):
>   File "eigs1.py", line 11, in ?
>     w1,v1 = arpack.speigs.eigvals(matvec,a.shape[0],nev=nev,ncv=ncv)
>   File "/usr/lib64/python2.4/site-packages/scipy/sandbox/arpack/speigs.py",
>     line 71, in eigvals
>     if info != 0: raise "Hell" # Indicates some error during the Arnouldi iterations
> Hell

Yes, I get the same. We'll wait for Neilen to reply here since he wrote
that bit.
Aric

From nmarais at sun.ac.za  Mon Nov 20 11:29:15 2006
From: nmarais at sun.ac.za (Neilen Marais)
Date: Mon, 20 Nov 2006 18:29:15 +0200
Subject: [SciPy-dev] ARPACK wrapper

Hi,

On Mon, 20 Nov 2006 08:07:03 -0700, Aric Hagberg wrote:
> On Mon, Nov 20, 2006 at 03:57:37PM +0100, Nils Wagner wrote:
>> Please can you run the following? I get something like:
>> [...]
>
> Yes, I get the same. We'll wait for Neilen to reply here since he
> wrote that bit.

I'm actually busy playing with this right now. Part of the problem is
that there are a number of tunables. I'm also fixing the exception to
be more descriptive than the "raise Hell" pun I forgot in there ;P
I'll be committing to svn soon.

Regards
Neilen

From nmarais at sun.ac.za  Mon Nov 20 11:35:37 2006
From: nmarais at sun.ac.za (Neilen Marais)
Date: Mon, 20 Nov 2006 18:35:37 +0200
Subject: [SciPy-dev] Current status of arpack.speigs.eigvals

You'll find it works better for big matrices, since that's what the
ARPACK "tunables" were chosen for. Anyway, no, it's far from ready for
primetime. I've used it successfully to solve my problems, though.
Patches appreciated (though wait for my update later today).

Regards
Neilen

On Mon, 20 Nov 2006 14:07:03 +0100, Nils Wagner wrote:
> Hi all,
>
> I am wondering if arpack.speigs.eigvals is ready to compute
> eigenvalues.
> [...]

From nmarais at sun.ac.za  Mon Nov 20 15:17:37 2006
From: nmarais at sun.ac.za (Neilen Marais)
Date: Mon, 20 Nov 2006 22:17:37 +0200
Subject: [SciPy-dev] ARPACK Wrapper update

Hi,

I spent some time looking at Nils's problem (I'll follow up on his
post), and also refactoring the wrappers a bit. First off, I'd like to
get some opinions.

I think we should structure the modules such that a user doesn't know
they are dealing with ARPACK, but merely with a sparse eigensolver,
unless they want to do advanced things. This should be in a module
called speigs. It should present an interface such as Aric's code in
arpack.py.
Then there should be a medium-level interface that takes care of
ARPACK's Fortran nastiness for you but lets you use more advanced
features. This is what is currently in the file called speigs.py. The
user-level code should call the medium-level ARPACK interface to get
the work done. And of course the raw Fortran wrappers should be
available to control freaks.

The way stuff is now (which is purely a coincidence and not really by
design) is backwards: my code that is now in speigs.py should be in
arpack.py, and vice versa. I refactored the code in speigs.py so that
the generalised and ordinary solvers don't duplicate code. I also
renamed the functions to start with ARPACK_ to make it clear that they
are tied to ARPACK and aren't general user routines.

So, I propose to:

1) rename speigs.py to arpack.py, and vice versa
2) make speigs and arpack separate modules
3) modify the code in the proposed speigs.py to call the code in the
   proposed arpack.py to do the actual ARPACK work

Does this sound good?

At the moment the ARPACK_ functions can handle real, general,
double-precision matrices, and use the following ARPACK modes:

a) Generalised eigenproblems with spectrum shift
b) Standard eigenproblems with no spectrum shift

It should also support:

c) Generalised eigenproblems with no spectrum shift
d) Standard eigenproblems with spectrum shift

Adding c) and d) should not be too much work. Once that is done, we
should look for a reasonable way to support complex matrices and other
numerical types. I think Aric's wrappers already address that to some
extent, so I'll take a look there.

Thoughts?

Regards
Neilen

From nmarais at sun.ac.za  Mon Nov 20 16:04:08 2006
From: nmarais at sun.ac.za (Neilen Marais)
Date: Mon, 20 Nov 2006 23:04:08 +0200
Subject: [SciPy-dev] ARPACK wrapper

Nils,

On Mon, 20 Nov 2006 09:45:47 +0100, Nils Wagner wrote:

ARPACK uses iterative techniques to find eigenvalues. If it converges,
the eigenvalues are always right, but part of the spectrum may be
missed in some cases. ARPACK is particularly good when you want a small
number of eigenvalues of a big system.

The problem you're trying to solve is therefore very badly suited to
ARPACK. Also, using a random matrix may on the odd occasion result in a
matrix with a very badly conditioned eigenspace (the matrix made up of
all the eigenvectors) and thereby cause large numerical errors. It also
needed more iterations than the maximum I had set previously (based on
the size of the problem); by adding an absolute minimum, that was
solved.

Solving very big matrices generated by my code, I've yet to miss any
eigenvalues (I'm comparing to analytical results), so it does seem to
be fairly reliable. Convergence is affected by, amongst others, the
number of Arnoldi vectors ARPACK uses (the ncv parameter).
Let me try to demonstrate this a bit:

> from scipy.sandbox import arpack
> from scipy import *
> a = random.rand(10,10)
> k = 4
> ws, vs = arpack.eigen(a,k)
>
> wd, vd = linalg.eig(a)

I've changed the function name, but it works as before:

import numpy as N
from scipy.sandbox import arpack
from scipy import *
a = random.rand(10,10)
k = 4

# sparse, requesting the k eigenvalues with the smallest magnitude (the default)
ws, vs = arpack.speigs.ARPACK_eigs(get_matvec(a), a.shape[0], k)
sort_ind = N.abs(ws).argsort()
ws = ws[sort_ind]
vs = vs[:,sort_ind]

# dense
wd, vd = linalg.eig(a)
sort_ind = N.abs(wd).argsort()
wd = wd[sort_ind]
vd = vd[:,sort_ind]

Here I sorted both the sparse and dense results by the absolute value
of the eigenvalues. The outputs are:

In [41]: wd
Out[41]:
array([-0.17472186+0.j        ,  0.19316651+0.12446598j,
        0.19316651-0.12446598j,  0.02219263+0.3638865j ,
        0.02219263-0.3638865j , -0.53332444+0.j        ,
       -0.73274754+0.j        ,  0.82145598+0.28728599j,
        0.82145598-0.28728599j,  5.25910771+0.j        ])

In [42]: ws
Out[42]:
array([-0.17472186+0.j        ,  0.19316651+0.12446598j,
        0.02219263+0.3638865j ,  0.02219263-0.3638865j ])

Note that we got 4 eigenvalues, but they aren't the ones with the
smallest magnitude; a couple were missed.

In [43]: N.abs(wd[[0, 1, 3, 4]] - ws)
Out[43]:
array([  0.00000000e+00,   2.58261119e-15,   2.77555756e-17,
         2.77555756e-17])

This shows that all the eigenvalues computed by the ARPACK code are
valid and agree with the dense calculation. Which one is more accurate
is open to speculation. You can see how to determine this by looking at
the test_speigs.py file, where a P*D*P^-1 factorisation is used to
construct a matrix with known eigenvalues and eigenvectors. In my
experience there is no clear winner.

Now let's try solving the same matrix using 8 Arnoldi vectors:

# sparse
ws, vs = arpack.speigs.ARPACK_eigs(get_matvec(a), a.shape[0], k, ncv=8)
sort_ind = N.abs(ws).argsort()
ws = ws[sort_ind]
vs = vs[:,sort_ind]

In [50]: wd
Out[50]:
array([-0.17472186+0.j        ,  0.19316651+0.12446598j,
        0.19316651-0.12446598j,  0.02219263+0.3638865j ,
        0.02219263-0.3638865j , -0.53332444+0.j        ,
       -0.73274754+0.j        ,  0.82145598+0.28728599j,
        0.82145598-0.28728599j,  5.25910771+0.j        ])

In [51]: ws
Out[51]:
array([-0.17472186+0.j        ,  0.19316651+0.12446598j,
        0.19316651-0.12446598j,  0.02219263+0.3638865j ])

In [52]: N.abs(wd[0:4]-ws)
Out[52]:
array([  3.60822483e-16,   8.69330150e-16,   8.69330150e-16,
         4.33555951e-16])

Now you can see none of the eigenvalues are missed. Interestingly, the
ncv specified here is less than the default (which should be
k*2+1 == 9). Anyway, this is likely to change on every run since you're
using random matrices.

If I remember and understood the ARPACK manual correctly, it is better
at finding eigenvalues of large magnitude than small. The symmetric
case should also be easier to solve.

Regards
Neilen

From nwagner at iam.uni-stuttgart.de  Tue Nov 21 03:55:53 2006
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 21 Nov 2006 09:55:53 +0100
Subject: [SciPy-dev] ARPACK wrapper
References: <45616B3B.2060702@iam.uni-stuttgart.de>
Message-ID: <4562BF19.5030807@iam.uni-stuttgart.de>

Neilen Marais wrote:
> ARPACK uses iterative techniques to find eigenvalues. If it converges,
> the eigenvalues are always right, but part of the spectrum may be
> missed in some cases. ARPACK is particularly good when you want a
> small number of eigenvalues of a big system.
> [...]
From nwagner at iam.uni-stuttgart.de  Tue Nov 21 04:54:05 2006
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 21 Nov 2006 10:54:05 +0100
Subject: [SciPy-dev] ARPACK_gen_eigs
Message-ID: <4562CCBD.3040100@iam.uni-stuttgart.de>

Hi all,

The attached script yields

python -i eigs1.py
Use minimum degree ordering on A'+A.
*** glibc detected *** free(): invalid next size (fast): 0x0000000000da8bf0 ***
Abort

Can someone reproduce this behaviour ?
If I change sigma=1 to sigma=1.0 it works fine.
And, why is the shift restricted to real values ?

Nils
-------------- next part --------------
A non-text attachment was scrubbed...
Name: eigs1.py
Type: text/x-python
Size: 682 bytes
Desc: not available
URL: 

From jonathan.taylor at stanford.edu  Tue Nov 21 14:46:55 2006
From: jonathan.taylor at stanford.edu (Jonathan Taylor)
Date: Tue, 21 Nov 2006 14:46:55 -0500
Subject: [SciPy-dev] cholesky decomposition for banded matrices
Message-ID: <456357AF.1070500@stanford.edu>

A week or so ago, I asked about generalized eigenvalue problems for banded
matrices -- turns out all I needed was a Cholesky decomposition.

I added support for banded cholesky decomposition and solution of banded
linear systems with Hermitian or symmetric matrices in scipy.linalg with
some tests. The tests are not as exhaustive as they should be....

Patch is attached.

-- Jonathan
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: chol_banded.patch
URL: 
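The patch itself is scrubbed above, so its interface is an assumption here.
LAPACK's banded Hermitian routines store only the band in diagonal-ordered
form, and a patch wrapping them presumably follows the same convention; a
sketch of the layout, with the banded call left commented because its exact
name and signature come from the patch:

import numpy as N
from scipy import linalg

# symmetric positive definite matrix with bandwidth one
a = N.array([[4., 1., 0., 0.],
             [1., 4., 1., 0.],
             [0., 1., 4., 1.],
             [0., 0., 1., 4.]])

# LAPACK upper banded storage with u superdiagonals: ab[u+i-j, j] == a[i, j]
ab = N.array([[0., 1., 1., 1.],    # superdiagonal, first entry unused
              [4., 4., 4., 4.]])   # main diagonal

# c_band = linalg.cholesky_banded(ab)   # assumed name/signature from the patch
c_dense = linalg.cholesky(a)            # dense Cholesky factor for reference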
From nwagner at iam.uni-stuttgart.de  Wed Nov 22 10:46:20 2006
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 22 Nov 2006 16:46:20 +0100
Subject: [SciPy-dev] Segmentation fault using linsolve
Message-ID: <456470CC.4080303@iam.uni-stuttgart.de>

Hi all,

Can someone reproduce the segfault by running the following test

from scipy import *

n = 15
A = sparse.lil_matrix((n,n))
for i in arange(0,n):
    A[i,:n] = random.rand(n)
B = 2.*sparse.speye(n,n)
sigma = 1.0
sigma_solve = linsolve.splu(A - sigma*B).solve

Nils

I am using the latest svn versions of numpy/scipy.
This is the output of gdb

GNU gdb 6.3
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain
conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "x86_64-suse-linux"...(no debugging symbols
found)
Using host libthread_db library "/lib64/tls/libthread_db.so.1".

(gdb) run test_linsolve.py
Starting program: /usr/bin/python test_linsolve.py
(no debugging symbols found)
(no debugging symbols found)
[Thread debugging using libthread_db enabled]
[New Thread 46912509653888 (LWP 6613)]
(no debugging symbols found)
(no debugging symbols found)
(no debugging symbols found)
(no debugging symbols found)
(no debugging symbols found)
(no debugging symbols found)
(no debugging symbols found)
(no debugging symbols found)
(no debugging symbols found)
Use minimum degree ordering on A'+A.

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 46912509653888 (LWP 6613)]
0x00002aaab0e6e9a1 in genmmd_ (neqns=0x7fffffa847ac, xadj=0x8d6d60,
    adjncy=0x998550, invp=0x8e19ec, perm=0x88cb20, delta=0x7fffffa847a4,
    dhead=0x8e196c, qsize=0x8e6040, llist=0x92bcbc, marker=0x94c78c,
    maxint=0x7fffffa847a0, nofsub=0x7fffffa8479c) at mmd.c:162
162             perm[nextmd] = -mdeg;

From tim.leslie at gmail.com  Wed Nov 22 11:26:51 2006
From: tim.leslie at gmail.com (Tim Leslie)
Date: Thu, 23 Nov 2006 03:26:51 +1100
Subject: [SciPy-dev] Segmentation fault using linsolve
In-Reply-To: <456470CC.4080303@iam.uni-stuttgart.de>
References: <456470CC.4080303@iam.uni-stuttgart.de>
Message-ID: 

On 11/23/06, Nils Wagner wrote:
> Hi all,
>
> Can someone reproduce the segfault by running the following test

I can confirm a segfault using:

Python 2.4.4c1 (#2, Oct 11 2006, 20:00:03)
[GCC 4.1.2 20060928 (prerelease) (Ubuntu 4.1.1-13ubuntu5)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import scipy, numpy
>>> scipy.__version__, numpy.__version__
('0.5.2.dev2322', '1.0.1.dev3435')

HTH

Tim

>
> from scipy import *
>
> n = 15
> A = sparse.lil_matrix((n,n))
> for i in arange(0,n):
>     A[i,:n] = random.rand(n)
> B = 2.*sparse.speye(n,n)
> sigma = 1.0
> sigma_solve = linsolve.splu(A - sigma*B).solve
>
>
> Nils
>
> I am using the latest svn versions of numpy/scipy.
> This is the output of gdb
>
> GNU gdb 6.3
> Copyright 2004 Free Software Foundation, Inc.
> GDB is free software, covered by the GNU General Public License, and you are
> welcome to change it and/or distribute copies of it under certain
> conditions.
> Type "show copying" to see the conditions.
> There is absolutely no warranty for GDB.  Type "show warranty" for details.
> This GDB was configured as "x86_64-suse-linux"...(no debugging symbols
> found)
> Using host libthread_db library "/lib64/tls/libthread_db.so.1".
>
> (gdb) run test_linsolve.py
> Starting program: /usr/bin/python test_linsolve.py
> (no debugging symbols found)
> (no debugging symbols found)
> [Thread debugging using libthread_db enabled]
> [New Thread 46912509653888 (LWP 6613)]
> (no debugging symbols found)
> (no debugging symbols found)
> (no debugging symbols found)
> (no debugging symbols found)
> (no debugging symbols found)
> (no debugging symbols found)
> (no debugging symbols found)
> (no debugging symbols found)
> (no debugging symbols found)
> Use minimum degree ordering on A'+A.
>
> Program received signal SIGSEGV, Segmentation fault.
> [Switching to Thread 46912509653888 (LWP 6613)]
> 0x00002aaab0e6e9a1 in genmmd_ (neqns=0x7fffffa847ac, xadj=0x8d6d60,
>     adjncy=0x998550, invp=0x8e19ec, perm=0x88cb20, delta=0x7fffffa847a4,
>     dhead=0x8e196c, qsize=0x8e6040, llist=0x92bcbc, marker=0x94c78c,
>     maxint=0x7fffffa847a0, nofsub=0x7fffffa8479c) at mmd.c:162
> 162             perm[nextmd] = -mdeg;
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev
>

From nwagner at iam.uni-stuttgart.de  Wed Nov 22 11:37:12 2006
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 22 Nov 2006 17:37:12 +0100
Subject: [SciPy-dev] Kronecker sum
Message-ID: <45647CB8.1000005@iam.uni-stuttgart.de>

Hi all,

I have written a small function to compute the Kronecker sum of two
matrices.
Could this be added to basic.py as an addition to linalg.kron ?

Nils
-------------- next part --------------
A non-text attachment was scrubbed...
Name: kronsum.py
Type: text/x-python
Size: 696 bytes
Desc: not available
URL: 
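The kronsum.py attachment is scrubbed, but the standard definition of the
Kronecker sum makes a plausible reconstruction a few lines long; a sketch
(not Nils's actual code):

import numpy as N
from scipy import linalg

def kronsum(a, b):
    # Kronecker sum of square matrices a (n x n) and b (m x m):
    # kron(a, I_m) + kron(I_n, b); its eigenvalues are all pairwise
    # sums of the eigenvalues of a and b
    n, m = a.shape[0], b.shape[0]
    return linalg.kron(a, N.eye(m)) + linalg.kron(N.eye(n), b)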
From nwagner at iam.uni-stuttgart.de  Wed Nov 22 11:45:51 2006
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 22 Nov 2006 17:45:51 +0100
Subject: [SciPy-dev] Segmentation fault using linsolve
In-Reply-To: 
References: <456470CC.4080303@iam.uni-stuttgart.de>
Message-ID: <45647EBF.8050107@iam.uni-stuttgart.de>

Tim Leslie wrote:
> On 11/23/06, Nils Wagner wrote:
>
>> Hi all,
>>
>> Can someone reproduce the segfault by running the following test
>>
>
> I can confirm a segfault using:
>
> Python 2.4.4c1 (#2, Oct 11 2006, 20:00:03)
> [GCC 4.1.2 20060928 (prerelease) (Ubuntu 4.1.1-13ubuntu5)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
>
>>>> import scipy, numpy
>>>> scipy.__version__, numpy.__version__
> ('0.5.2.dev2322', '1.0.1.dev3435')
>
> HTH
>
> Tim
>
>
Thank you Tim. Meanwhile I have added a line A = A.tocsc(), see below:

from scipy import *

n = 15
A = sparse.lil_matrix((n,n))
for i in arange(0,n):
    A[i,:n] = random.rand(n)
A = A.tocsc()
B = 2.*sparse.speye(n,n)
sigma = 1.0
sigma_solve = linsolve.splu(A - sigma*B).solve
print sigma_solve

If I run the script in interactive mode I get

python -i test_linsolve.py
Use minimum degree ordering on A'+A.
*** glibc detected *** free(): invalid next size (fast): 0x00000000008d3c30 ***
Abort

If I run the script without -i I get

python test_linsolve.py
Use minimum degree ordering on A'+A.

Any idea ?

Nils

>> from scipy import *
>>
>> n = 15
>> A = sparse.lil_matrix((n,n))
>> for i in arange(0,n):
>>     A[i,:n] = random.rand(n)
>> B = 2.*sparse.speye(n,n)
>> sigma = 1.0
>> sigma_solve = linsolve.splu(A - sigma*B).solve
>>
>>
>> Nils
>>
>> I am using the latest svn versions of numpy/scipy.
>> This is the output of gdb
>>
>> GNU gdb 6.3
>> Copyright 2004 Free Software Foundation, Inc.
>> GDB is free software, covered by the GNU General Public License, and you are
>> welcome to change it and/or distribute copies of it under certain
>> conditions.
>> Type "show copying" to see the conditions.
>> There is absolutely no warranty for GDB.  Type "show warranty" for details.
>> This GDB was configured as "x86_64-suse-linux"...(no debugging symbols
>> found)
>> Using host libthread_db library "/lib64/tls/libthread_db.so.1".
>> (gdb) run test_linsolve.py
>> Starting program: /usr/bin/python test_linsolve.py
>> (no debugging symbols found)
>> (no debugging symbols found)
>> [Thread debugging using libthread_db enabled]
>> [New Thread 46912509653888 (LWP 6613)]
>> (no debugging symbols found)
>> (no debugging symbols found)
>> (no debugging symbols found)
>> (no debugging symbols found)
>> (no debugging symbols found)
>> (no debugging symbols found)
>> (no debugging symbols found)
>> (no debugging symbols found)
>> (no debugging symbols found)
>> Use minimum degree ordering on A'+A.
>>
>> Program received signal SIGSEGV, Segmentation fault.
>> [Switching to Thread 46912509653888 (LWP 6613)]
>> 0x00002aaab0e6e9a1 in genmmd_ (neqns=0x7fffffa847ac, xadj=0x8d6d60,
>>     adjncy=0x998550, invp=0x8e19ec, perm=0x88cb20, delta=0x7fffffa847a4,
>>     dhead=0x8e196c, qsize=0x8e6040, llist=0x92bcbc, marker=0x94c78c,
>>     maxint=0x7fffffa847a0, nofsub=0x7fffffa8479c) at mmd.c:162
>> 162             perm[nextmd] = -mdeg;
>>
>> _______________________________________________
>> Scipy-dev mailing list
>> Scipy-dev at scipy.org
>> http://projects.scipy.org/mailman/listinfo/scipy-dev
>>
>
> _______________________________________________
> Scipy-dev mailing list
> Scipy-dev at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-dev
>

From pgmdevlist at gmail.com  Wed Nov 22 12:02:15 2006
From: pgmdevlist at gmail.com (Pierre GM)
Date: Wed, 22 Nov 2006 12:02:15 -0500
Subject: [SciPy-dev] scikits ? [was Re: Kronecker sum]
In-Reply-To: <45647CB8.1000005@iam.uni-stuttgart.de>
References: <45647CB8.1000005@iam.uni-stuttgart.de>
Message-ID: <200611221202.15627.pgmdevlist@gmail.com>

On Wednesday 22 November 2006 11:37, Nils Wagner wrote:
> I have written a small function to compute the Kronecker sum of two
> matrices.
> Could this be added to basic.py as an addition to linalg.kron ?

I hope Nils won't mind my hijacking his thread:
What's the state of those famous scikits that had been suggested (where you
could install only the packages you want from scipy without getting the whole
shebang) ?

Corollary:
Is there a proper (most preferred) way to post small functions, classes or
whole modules on scipy.org ? Like a cheeseshop ?

From robert.kern at gmail.com  Wed Nov 22 12:21:56 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 22 Nov 2006 11:21:56 -0600
Subject: [SciPy-dev] scikits ? [was Re: Kronecker sum]
In-Reply-To: <200611221202.15627.pgmdevlist@gmail.com>
References: <45647CB8.1000005@iam.uni-stuttgart.de>
	<200611221202.15627.pgmdevlist@gmail.com>
Message-ID: <45648734.70103@gmail.com>

Pierre GM wrote:
> What's the state of those famous scikits that had been suggested (where you
> could install only the packages you want from scipy without getting the whole
> shebang) ?

That's not what scikits is intended to be. scikits would be an entirely
separate package. Allowing subpackages of scipy to be separately installable
is another effort, and one that is stalled. The approach I took didn't pan out.

> Corollary:
> Is there a proper (most preferred) way to post small functions, classes or
> whole modules on scipy.org ? Like a cheeseshop ?

Write up a wiki page about it and attach the file to the page. If there's more
than one file (say you want a README or a LICENSE or even a test suite), go
ahead and use the Python Package Index and write up a wiki page on scipy.org .

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco
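For the Package Index route Robert mentions, a minimal distutils setup.py is
enough; a sketch, where the module name "mymodule" is a placeholder and not a
real package:

from distutils.core import setup

setup(name='mymodule',
      version='0.1',
      description='A small scipy-dependent add-on module',
      py_modules=['mymodule'],
      )

Running "python setup.py sdist" then builds the source tarball to upload to
the cheeseshop.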
From pgmdevlist at gmail.com  Wed Nov 22 13:40:39 2006
From: pgmdevlist at gmail.com (Pierre GM)
Date: Wed, 22 Nov 2006 13:40:39 -0500
Subject: [SciPy-dev] scikits ? [was Re: Kronecker sum]
In-Reply-To: <45648734.70103@gmail.com>
References: <45647CB8.1000005@iam.uni-stuttgart.de>
	<200611221202.15627.pgmdevlist@gmail.com> <45648734.70103@gmail.com>
Message-ID: <200611221340.39757.pgmdevlist@gmail.com>

> That's not what scikits is intended to be. scikits would be an entirely
> separate package.
*looks for more info in the list archive*
Oh, OK. Sorry for the misunderstanding.

> Allowing subpackages of scipy to be separately
> installable is another effort, and one that is stalled. The approach I took
> didn't pan out.

What went wrong, if I may ask ?

>>> Corollary:
>>> Is there a proper (most preferred) way to post small functions, classes or
>>> whole modules on scipy.org ? Like a cheeseshop ?
>>
>> Write up a wiki page about it and attach the file to the page. If there's
>> more than one file (say you want a README or a LICENSE or even a test
>> suite), go ahead and use the Python Package Index and write up a wiki page
>> on scipy.org .

OK. Is there any scipy specific template that should be followed ?

From robert.kern at gmail.com  Wed Nov 22 14:38:10 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 22 Nov 2006 13:38:10 -0600
Subject: [SciPy-dev] scikits ? [was Re: Kronecker sum]
In-Reply-To: <200611221340.39757.pgmdevlist@gmail.com>
References: <45647CB8.1000005@iam.uni-stuttgart.de>
	<200611221202.15627.pgmdevlist@gmail.com> <45648734.70103@gmail.com>
	<200611221340.39757.pgmdevlist@gmail.com>
Message-ID: <4564A722.3030004@gmail.com>

Pierre GM wrote:
>> That's not what scikits is intended to be. scikits would be an entirely
>> separate package.
> *looks for more info in the list archive*
> Oh, OK. Sorry for the misunderstanding.
>
>> Allowing subpackages of scipy to be separately
>> installable is another effort, and one that is stalled. The approach I took
>> didn't pan out.
>
> What went wrong, if I may ask ?

I'll explain later when I have more time.

>>> Corollary:
>>> Is there a proper (most preferred) way to post small functions, classes or
>>> whole modules on scipy.org ? Like a cheeseshop ?
>> Write up a wiki page about it and attach the file to the page. If there's
>> more than one file (say you want a README or a LICENSE or even a test
>> suite), go ahead and use the Python Package Index and write up a wiki page
>> on scipy.org .
>
> OK. Is there any scipy specific template that should be followed ?

For which approach? For putting a module on the Package Index, just follow the
standard guidelines for any Python module; just don't try to call it
scipy.mymodule or something similar. Although your module may depend on scipy,
there's no way for it to "integrate" into the scipy packaging as yet except by
being explicitly added to the SVN repository.

For dropping something on the wiki, look at the various Cookbook pages for
examples:

http://www.scipy.org/Cookbook

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco
From jettlogic at gmail.com  Thu Nov 23 09:17:04 2006
From: jettlogic at gmail.com (Jett Logic)
Date: Thu, 23 Nov 2006 16:17:04 +0200
Subject: [SciPy-dev] Failure under Solaris 9, 64-bit
Message-ID: <4565AD60.9020502@gmail.com>

I have to use Sun C 5.7 and Sun Fortran 95 8.1, producing ELF64's with
F77='f90 -xcode=pic32 -xarch=v9' (also tried f77, no difference) and
CC='cc -mt -xcode=pic32 -xarch=v9'

I compiled Python 2.5 (--enable-shared) then built blas and lapack following
http://www.scipy.org/Installing_SciPy/BuildingGeneral. I passed -L and -R to
ld for each library directory. I had to pass -G to ld as well for the Fortran
linking steps to prevent an error about a missing "main" symbol in crt1.o.

However, "from scipy import special" fails as below because of the _cephes
module which has a bunch of Fortran constants in it:

{{
Python 2.5 (r25:51908, Nov 20 2006, 02:58:57) [C] on sunos5
Type "help", "copyright", "credits" or "license" for more information.
>>> from scipy import special
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/export/home/medscan/local64/lib/python2.5/site-packages/scipy/special/__init__.py",
line 8, in <module>
    from basic import *
  File "/export/home/medscan/local64/lib/python2.5/site-packages/scipy/special/basic.py",
line 8, in <module>
    from _cephes import *
ImportError: ld.so.1: python: fatal: relocation error: R_SPARC_H44: file
/export/home/medscan/local64/lib/python2.5/site-packages/scipy/special/_cephes.so:
symbol __f90_default_input_unit: value 0x3fffffffde7 does not fit
}}

(note that scipy compiles and runs fine using the 32-bit gcc toolchain, but I
need 64-bit)

Should I file this as a bug? Is there a workaround?

From david at ar.media.kyoto-u.ac.jp  Tue Nov 28 02:10:44 2006
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 28 Nov 2006 16:10:44 +0900
Subject: [SciPy-dev] Does it worth the trouble to support non contiguous array in C extensions ?
Message-ID: <456BE0F4.1030601@ar.media.kyoto-u.ac.jp>

Hi,

    I am about to push in SVN the first version of a small package to
compute LPC coefficients and the LPC residual. I spent most of the time
trying to understand how to handle non contiguous arrays in various
parts of the C code...
    Now, I am wondering: is it really worth the trouble ? I noticed
that most of the time, it is even faster (and obviously much easier to
code/debug/test, and much more reliable) to just copy the data in a
contiguous new array before processing with a C function expecting
contiguous arrays...
    Is there a general policy regarding those issues for scipy ? Is it
enough to write simple C extensions expecting contiguous arrays, and
converting input to contiguous layout if necessary ?

    Cheers,

    David
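The copy-before-calling strategy David describes looks like this on the
Python side; a sketch in which a plain numpy function stands in for the
compiled routine (nothing here is his actual package):

import numpy as N

def call_contiguous(c_func, x):
    # hand the compiled routine an aligned, C-contiguous, native-order
    # array; ascontiguousarray copies only when the input needs it
    x = N.ascontiguousarray(x, dtype=N.float64)
    return c_func(x)

# N.sum stands in for a C extension that assumes contiguous data:
print call_contiguous(N.sum, N.arange(12.).reshape(3, 4)[:, ::2])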
From cimrman3 at ntc.zcu.cz  Tue Nov 28 04:39:55 2006
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Tue, 28 Nov 2006 10:39:55 +0100
Subject: [SciPy-dev] Does it worth the trouble to support non contiguous array in C extensions ?
In-Reply-To: <456BE0F4.1030601@ar.media.kyoto-u.ac.jp>
References: <456BE0F4.1030601@ar.media.kyoto-u.ac.jp>
Message-ID: <456C03EB.3010108@ntc.zcu.cz>

David Cournapeau wrote:
> I am about to push in SVN the first version of a small package to
> compute LPC coefficients and the LPC residual. I spent most of the time
> trying to understand how to handle non contiguous arrays in various
> parts of the C code...
> Now, I am wondering: is it really worth the trouble ? I noticed
> that most of the time, it is even faster (and obviously much easier to
> code/debug/test, and much more reliable) to just copy the data in a
> contiguous new array before processing with a C function expecting
> contiguous arrays...
> Is there a general policy regarding those issues for scipy ? Is it
> enough to write simple C extensions expecting contiguous arrays, and
> converting input to contiguous layout if necessary ?

Well, I do it the simple way - you usually lose more time figuring out the
c-code for general arrays than ensuring contiguous arrays in Python.
Premature optimization is bad, so unless your code consumes too much memory
or is too slow (in reality), there is no need to wrestle with complex C
extensions.

just my 2 cents and imho,
r.

ps: not sure if there is a general policy - people's cases do differ.

From nwagner at iam.uni-stuttgart.de  Tue Nov 28 10:24:57 2006
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 28 Nov 2006 16:24:57 +0100
Subject: [SciPy-dev] UMFPACK support in scipy
Message-ID: <456C54C9.8090308@iam.uni-stuttgart.de>

Hi all,

which UMFPACK versions are supported in scipy ?
help (linsolve) lists version 4.4 and version 5.0 of UMFPACK.
The current version is 5.0.1
http://www.cise.ufl.edu/research/sparse/umfpack/

Nils

From robert.kern at gmail.com  Tue Nov 28 12:45:31 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 28 Nov 2006 11:45:31 -0600
Subject: [SciPy-dev] Does it worth the trouble to support non contiguous array in C extensions ?
In-Reply-To: <456BE0F4.1030601@ar.media.kyoto-u.ac.jp>
References: <456BE0F4.1030601@ar.media.kyoto-u.ac.jp>
Message-ID: <456C75BB.9090303@gmail.com>

David Cournapeau wrote:
> Is there a general policy regarding those issues for scipy ? Is it
> enough to write simple C extensions expecting contiguous arrays, and
> converting input to contiguous layout if necessary ?

That's what I've always done when hand-writing C code. That's essentially what
most f2py wrappers do, too, since FORTRAN-77 will *only* deal with contiguous
arrays.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco

From robert.kern at gmail.com  Tue Nov 28 12:47:52 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 28 Nov 2006 11:47:52 -0600
Subject: [SciPy-dev] UMFPACK support in scipy
In-Reply-To: <456C54C9.8090308@iam.uni-stuttgart.de>
References: <456C54C9.8090308@iam.uni-stuttgart.de>
Message-ID: <456C7648.6020804@gmail.com>

Nils Wagner wrote:
> Hi all,
>
> which UMFPACK versions are supported in scipy ?
> help (linsolve) lists version 4.4 and version 5.0 of UMFPACK.

Yes. That gives you your answer. If you try another version and it works for
you, go ahead and tell us, and we'll update that docstring. If that docstring
doesn't get updated, that's because no one has told us that any other version
works.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco
From oliphant at ee.byu.edu  Tue Nov 28 19:18:03 2006
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Tue, 28 Nov 2006 17:18:03 -0700
Subject: [SciPy-dev] Does it worth the trouble to support non contiguous array in C extensions ?
In-Reply-To: <456BE0F4.1030601@ar.media.kyoto-u.ac.jp>
References: <456BE0F4.1030601@ar.media.kyoto-u.ac.jp>
Message-ID: <456CD1BB.1030704@ee.byu.edu>

David Cournapeau wrote:

>Hi,
>
>    I am about to push in SVN the first version of a small package to
>compute LPC coefficients and the LPC residual. I spent most of the time
>trying to understand how to handle non contiguous arrays in various
>parts of the C code...
>    Now, I am wondering: is it really worth the trouble ? I noticed
>that most of the time, it is even faster (and obviously much easier to
>code/debug/test, and much more reliable) to just copy the data in a
>contiguous new array before processing with a C function expecting
>contiguous arrays...
>    Is there a general policy regarding those issues for scipy ? Is it
>enough to write simple C extensions expecting contiguous arrays, and
>converting input to contiguous layout if necessary ?
>
>
>
There is no policy.  I'm of the opinion that strided arrays are often
handled very straightforwardly (except in Fortran) and so try to
implement the algorithm on strided arrays when possible.  But, if it
takes me too much time to think about it, then I just force a
contiguous array.

There is also the problem of byte-order and alignment that can occur
for a general NumPy array.  Dealing with these always involves a copy
(at least of a chunk at a time).  Therefore, at the very least you
should request an "aligned" array of a native data-type.  Most just
extend that request to a CONTIGUOUS array as well (either FORTRAN or
C-order).

I don't see it as a problem except for in memory-limited situations.
Lots of code in NumPy itself forces a contiguous array in order to do
the processing.

-Travis
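The aligned/native-byte-order/contiguous distinctions Travis describes can be
inspected from Python; a small pure-numpy sketch, not taken from the thread:

import numpy as N

a = N.arange(12.).reshape(3, 4)
b = a[:, ::2]                    # strided view: aligned, but not contiguous
swapped = a.astype('>f8')        # non-native byte order on little-endian boxes

print b.flags['CONTIGUOUS']      # False
print swapped.dtype.isnative     # False on most PCs
# forcing what a simple extension wants costs (at most) one copy:
ready = N.ascontiguousarray(swapped, dtype=N.float64)
print ready.flags['CONTIGUOUS'], ready.dtype.isnative   # True True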
From nwagner at iam.uni-stuttgart.de  Wed Nov 29 08:57:16 2006
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 29 Nov 2006 14:57:16 +0100
Subject: [SciPy-dev] UMFPACK support in scipy
In-Reply-To: <456C7648.6020804@gmail.com>
References: <456C54C9.8090308@iam.uni-stuttgart.de> <456C7648.6020804@gmail.com>
Message-ID: <456D91BC.5040800@iam.uni-stuttgart.de>

Robert Kern wrote:
> Nils Wagner wrote:
>
>> Hi all,
>>
>> which UMFPACK versions are supported in scipy ?
>> help (linsolve) lists version 4.4 and version 5.0 of UMFPACK.
>>
>
> Yes. That gives you your answer. If you try another version and it works for
> you, go ahead and tell us, and we'll update that docstring. If that docstring
> doesn't get updated, that's because no one has told us that any other version
> works.
>
>
Hi Robert,

This is to let you know that I have tried to use UMFPACK version 5.0.1.
Here are my findings:

scipy.test(1) results in

Warning: FAILURE importing tests for
/usr/local/lib64/python2.4/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py:17:
AttributeError: 'module' object has no attribute 'umfpack' (in ?)
Warning: FAILURE importing tests for
/usr/local/lib64/python2.4/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py:17:
AttributeError: 'module' object has no attribute 'umfpack' (in ?)

scipy.show_config() yields

amd_info:
    libraries = ['amd']
    library_dirs = ['/usr/local/src/UMFPACKv5.0.1/AMD/Lib']
    define_macros = [('SCIPY_AMD_H', None)]
    swig_opts = ['-I/usr/local/src/UMFPACKv5.0.1/AMD/Include']
    include_dirs = ['/usr/local/src/UMFPACKv5.0.1/AMD/Include']

umfpack_info:
    libraries = ['umfpack', 'amd']
    library_dirs = ['/usr/local/src/UMFPACKv5.0.1/UMFPACK/Lib',
'/usr/local/src/UMFPACKv5.0.1/AMD/Lib']
    define_macros = [('SCIPY_UMFPACK_H', None), ('SCIPY_AMD_H', None)]
    swig_opts = ['-I/usr/local/src/UMFPACKv5.0.1/UMFPACK/Include',
'-I/usr/local/src/UMFPACKv5.0.1/AMD/Include']
    include_dirs = ['/usr/local/src/UMFPACKv5.0.1/UMFPACK/Include',
'/usr/local/src/UMFPACKv5.0.1/AMD/Include']

How can I fix this problem ?

Nils

From a.u.r.e.l.i.a.n at gmx.net  Thu Nov 30 10:02:21 2006
From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert)
Date: Thu, 30 Nov 2006 16:02:21 +0100
Subject: [SciPy-dev] "fancy index" assignment
Message-ID: <200611301602.22063.a.u.r.e.l.i.a.n@gmx.net>

Hi,

I have a problem with fancy assignment. Even though the left and right side
of the assignment have the same shape, an exception occurs. numpy was freshly
built 10 minutes ago.

Minimal example:

####################################################
import numpy
print numpy.__version__ # --> 1.0.1.dev3462
array = numpy.array

m = \
array([[[111, 112, 113, 114, 115, 116],
        [121, 122, 123, 124, 125, 126],
        [131, 132, 133, 134, 135, 136],
        [141, 142, 143, 144, 145, 146],
        [151, 152, 153, 154, 155, 156],
        [161, 162, 163, 164, 165, 166]],
       [[211, 212, 213, 214, 215, 216],
        [221, 222, 223, 224, 225, 226],
        [231, 232, 233, 234, 235, 236],
        [241, 242, 243, 244, 245, 246],
        [251, 252, 253, 254, 255, 256],
        [261, 262, 263, 264, 265, 266]]])

f = \
array([[[10111, 10112],
        [10121, 10122],
        [10131, 10132],
        [10141, 10142],
        [10151, 10152],
        [10161, 10162]],
       [[10211, 10212],
        [10221, 10222],
        [10231, 10232],
        [10241, 10242],
        [10251, 10252],
        [10261, 10262]]])

print m[:,:,(2,4)].shape  # --> (2,6,2)
print f.shape             # --> (2,6,2)

m[:,:,(2,4)] = f
##################################################
# error message:
---------------------------------------------------------------------------
exceptions.ValueError    Traceback (most recent call last)

/home/jloehnert/

ValueError: array is not broadcastable to correct shape
####################################################

With a 2D array this kind of operation works fine. Is this a bug?

Johannes
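One workaround sketch (assuming m and f as defined in the example above; not
from the thread itself) is to assign one index at a time along the last axis,
which sidesteps the fancy-index assignment that raises the ValueError:

import numpy

# equivalent in effect to m[:,:,(2,4)] = f
for j, col in enumerate((2, 4)):
    m[:, :, col] = f[:, :, j]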