From ckkart at hoc.net Tue Nov 1 03:59:59 2005 From: ckkart at hoc.net (Christian Kristukat) Date: Tue, 01 Nov 2005 09:59:59 +0100 Subject: [SciPy-user] newcore arrayobject.h Message-ID: <43672E8F.2020400@hoc.net> Hi, I remeber there was a discussion about where to put the scipy header files. It seems that at the current place weave.inline (from newcore) is not able to find them: /home/ck/.python23_compiled/sc_90c028da8eb690aebc73d1ab952ee6742.cpp:17:31: scipy/arrayobject.h: No such file or directory /home/ck/.python23_compiled/sc_90c028da8eb690aebc73d1ab952ee6742.cpp:445: error: `PyArrayObject' was not declared in this scope Christian From rkern at ucsd.edu Tue Nov 1 04:11:50 2005 From: rkern at ucsd.edu (Robert Kern) Date: Tue, 01 Nov 2005 01:11:50 -0800 Subject: [SciPy-user] newcore arrayobject.h In-Reply-To: <43672E8F.2020400@hoc.net> References: <43672E8F.2020400@hoc.net> Message-ID: <43673156.9070104@ucsd.edu> Christian Kristukat wrote: > Hi, > I remeber there was a discussion about where to put the scipy header files. It > seems that at the current place weave.inline (from newcore) is not able to find > them: weave never got properly updated to use scipy.distutils instead of scipy_distutils. Everything should just work when using scipy.distutils. I've made these changes in the latest SVN; there are surely other problems with weave, but please try this and see if it works. -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From ckkart at hoc.net Tue Nov 1 04:19:12 2005 From: ckkart at hoc.net (Christian Kristukat) Date: Tue, 01 Nov 2005 10:19:12 +0100 Subject: [SciPy-user] newcore arrayobject.h In-Reply-To: <43673156.9070104@ucsd.edu> References: <43672E8F.2020400@hoc.net> <43673156.9070104@ucsd.edu> Message-ID: <43673310.3080306@hoc.net> Robert Kern wrote: > Christian Kristukat wrote: > >>Hi, >>I remeber there was a discussion about where to put the scipy header files. It >>seems that at the current place weave.inline (from newcore) is not able to find >>them: > > > weave never got properly updated to use scipy.distutils instead of > scipy_distutils. Everything should just work when using scipy.distutils. > I've made these changes in the latest SVN; there are surely other > problems with weave, but please try this and see if it works. > Thanks. numarray.h is found but now a new problem showed up: /home/ck/.python23_compiled/sc_90c028da8eb690aebc73d1ab952ee6744.cpp: In function `PyObject* compiled_func(PyObject*, PyObject*)': /home/ck/.python23_compiled/sc_90c028da8eb690aebc73d1ab952ee6744.cpp:823: error: ISO C++ says that `typename blitz::SliceInfo::T_slice blitz::Array::operator()(T1, T2) const [with T1 = long int, T2 = int, P_numtype = double, int N_rank = 2]' and `P_numtype& __restrict blitz::Array::operator()(int, int) [with P_numtype = double, int N_rank = 2]' are ambiguous even though the worst conversion for the former is better than the worst conversion for the latter /home/ck/.python23_compiled/sc_90c028da8eb690aebc73d1ab952ee6744.cpp:823: error: cannot convert `blitz::Array' to `double' for argument `2' to `double dist(double, double, int)' Christian From gerard.vermeulen at grenoble.cnrs.fr Tue Nov 1 05:00:37 2005 From: gerard.vermeulen at grenoble.cnrs.fr (Gerard Vermeulen) Date: Tue, 1 Nov 2005 11:00:37 +0100 Subject: [SciPy-user] Does scipy-newcore break the sequence protocol? 
Message-ID: <20051101110037.6f4bdeae.gerard.vermeulen@grenoble.cnrs.fr> The following code using the sequence protocol works with Numeric-23 and numarray, but fails with scipy-newcore (checked out a few days ago): ... float v[3]; printf("SbColor gets a %s.\n", a0->ob_type->tp_name); if ((PySequence_Check(a0)) && 3 == PySequence_Length(a0)) { PyObject *item; int ok = 1; // Does scipy-newcore break the sequence protocol? for (int i=0; i<3; i++) { item = PySequence_GetItem(a0, i); printf("item %d is a %s.\n", i, item->ob_type->tp_name); if (PyFloat_CheckExact(item)) { v[i] = float(PyFloat_AsDouble(item)); } else if (PyInt_CheckExact(item)) { v[i] = float(PyInt_AsLong(item)); } else { ok = 0; } Py_DECREF(item); } if (!ok) { PyErr_SetString( PyExc_TypeError, "The sequence may only contain integers and floats."); return 0; } else { sipCpp = new SbColor(v); } } else { ... This interpreter session shows what happens if I create a new SbColor using a newcore scipy array as argument instead of a tuple: >>> from scipy import * Importing io to scipy Importing interpolate to scipy Importing fftpack to scipy Importing special to scipy Importing cluster to scipy Importing sparse to scipy Importing signal to scipy Failed to import signal cannot import name comb Importing utils to scipy Importing lib to scipy Importing integrate to scipy Importing optimize to scipy Importing linalg to scipy Importing stats to scipy >>> import iv >>> c = iv.SbColor((0, 0, 0.5)) SbColor gets a tuple. item 0 is a int. item 1 is a int. item 2 is a float. >>> c = iv.SbColor(array((0, 0, 0.5))) SbColor gets a scipy.ndarray. item 0 is a float64_arrtype. item 1 is a float64_arrtype. item 2 is a float64_arrtype. Traceback (most recent call last): File "", line 1, in ? TypeError: The sequence may only contain integers and floats. >>> Gerard PS: why is scipy so noisy? I also get all those 'Importing..' messages when my binary module only calls import_array()? From gerard.vermeulen at grenoble.cnrs.fr Tue Nov 1 05:26:30 2005 From: gerard.vermeulen at grenoble.cnrs.fr (Gerard Vermeulen) Date: Tue, 1 Nov 2005 11:26:30 +0100 Subject: [SciPy-user] Does scipy-newcore break the sequence protocol? In-Reply-To: <20051101110037.6f4bdeae.gerard.vermeulen@grenoble.cnrs.fr> References: <20051101110037.6f4bdeae.gerard.vermeulen@grenoble.cnrs.fr> Message-ID: <20051101112630.4882ceb0.gerard.vermeulen@grenoble.cnrs.fr> Sorry for the noise: replacing PyFloat_CheckExact by PyFloat_Check, etc.. solved my problem. Gerard From bgoli at sun.ac.za Wed Nov 2 08:14:53 2005 From: bgoli at sun.ac.za (Brett Olivier) Date: Wed, 2 Nov 2005 15:14:53 +0200 Subject: [SciPy-user] newscipy distutils question Message-ID: <200511021514.54431.bgoli@sun.ac.za> Hi I use scipy_distutils for building Fortran extension libraries. I used to use: from scipy_distutils.core import setup, Extension With newScipy core I've found: from scipy.distutils.core import setup but does "Extension" still exist or is there a better way of doing things? Thanks in advance Brett -- Brett G. Olivier Postdoctoral Fellow Triple-J Group for Molecular Cell Physiology Stellenbosch University bgoli at sun dot ac dot za http://pysces.sourceforge.net Tel +27-21-8082704 Fax +27-21-8085863 Mobile +27-82-7329306 PENGUINICITY!! 
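For what it is worth, here is a minimal setup.py along the old scipy_distutils lines for the Fortran-extension question above. It is only a sketch, written on the assumption that scipy.distutils.core still exports Extension next to setup the way scipy_distutils.core did; the module and source-file names are placeholders.

# setup.py -- assumes scipy.distutils.core still provides Extension,
# as scipy_distutils.core did; "flib" and "flib.f" are placeholder names.
from scipy.distutils.core import setup, Extension

ext = Extension(name='flib', sources=['flib.f'])

setup(name='flib',
      version='0.1',
      description='Fortran extension built through scipy.distutils',
      ext_modules=[ext])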
From jens_brandenburg at gmx.net Wed Nov 2 10:59:18 2005 From: jens_brandenburg at gmx.net (Jens Brandenburg) Date: Wed, 2 Nov 2005 16:59:18 +0100 (MET) Subject: [SciPy-user] building scipy Message-ID: <15017.1130947158@www66.gmx.net> hello everybody, I'm trying to build scipy on my SUSE Linux 10.0 OSS using the online documentation... But I didn't get it running. During the "make install" process the following error occurs: Lib/stats/ranlib_all.c:2158: error: previous implicit declaration of ?inrgcm? was here error: Command "gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC -I/usr/local/include/python2.3 -c Lib/stats/ranlib_all.c -o build/temp.linux-i686-2.3/Lib/stats/ranlib_all.o" failed with exit status 1 Okay, I hope someone has seen this before and knows how to solve the problem ;-(( Maybe some words about the system: Python-2.3.5, IPython-0.6.15, F2Py-latest; build BLAS, LAPACK and ATLAS libraries following the online documentation... cheers Jens Brandenburg -- Jens Brandenburg 10 GB Mailbox, 100 FreeSMS/Monat http://www.gmx.net/de/go/topmail +++ GMX - die erste Adresse f?r Mail, Message, More +++ From oliphant at ee.byu.edu Wed Nov 2 12:49:35 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 02 Nov 2005 10:49:35 -0700 Subject: [SciPy-user] building scipy In-Reply-To: <15017.1130947158@www66.gmx.net> References: <15017.1130947158@www66.gmx.net> Message-ID: <4368FC2F.3010608@ee.byu.edu> Jens Brandenburg wrote: >hello everybody, > >I'm trying to build scipy on my SUSE Linux 10.0 OSS using the online >documentation... But I didn't get it running. During the "make install" >process the following error occurs: > > I presume you are running the older version of scipy, but you should make that clear. I'm not sure what the "make install" process is. Are you talking about python setup.py install? >Lib/stats/ranlib_all.c:2158: error: previous implicit declaration of >?inrgcm? was here >error: Command "gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall >-Wstrict-prototypes -fPIC -I/usr/local/include/python2.3 -c >Lib/stats/ranlib_all.c -o >build/temp.linux-i686-2.3/Lib/stats/ranlib_all.o" failed with exit status 1 > > I've never seen this problem before, but are you showing the entire error log? Which version of gcc are you using? -Travis From pau.gargallo at gmail.com Thu Nov 3 04:43:44 2005 From: pau.gargallo at gmail.com (Pau Gargallo) Date: Thu, 3 Nov 2005 09:43:44 +0000 Subject: [SciPy-user] about weave performance evaluation In-Reply-To: <17254.27500.620505.812975@vulcan.linux.in> References: <6ef8f3380510250545j1617fb0ekf6b48cc5b60c403f@mail.gmail.com> <87hdb5lhzl.fsf@uwo.ca> <6ef8f3380510251040x30210ebbyfa2f10d9a4017449@mail.gmail.com> <87u0f4b7tw.fsf@uwo.ca> <17254.27500.620505.812975@vulcan.linux.in> Message-ID: <6ef8f3380511030143v228cd5a7j3010eae160732b61@mail.gmail.com> that was really clear, thank you guys, pau On 10/31/05, Prabhu Ramachandran wrote: > >>>>> "Dan" == Dan Christensen writes: > > >> 1- i don't understand the 25 fold increase of the last sentence > >> of the cited text. Shouldn't we conclude that the use of > >> numeric speeds up by a factor 100? > > Dan> I think 25 is correct. When the numeric method gets err < > Dan> eps, it will be farther from the correct solution than when > Dan> the pure python method gets err < eps, since it is in effect > Dan> taking smaller steps. To get as accurate an answer, you'd > Dan> have to adjust eps to make the code run approximately twice > Dan> as long. 
> > Just to explain this a little more clearly, the issue is that the pure > Python code will converge (in terms of number of iterations taken) > twice as fast as the Numeric code. I.e. if the pure Python code takes > x iterations to converge, the Numeric code will take 2*x iterations > (because it uses temporaries). Therefore, if you want to compare the > time taken for convergence, then you will end up with only a 25 fold > speed increase instead of the 50 fold increase. If OTOH, you merely > want to compare the time taken for 1 single iteration then you get a > 50 fold speed increase by using Numeric arrays. > > >> 2- in the final comparison the time used by numeric is > >> 29.3s. If we want to compare the performance between the > >> different implementations, should we divide this time by 2 ? > > It depends. If the timing criterion is convergence to a particular > error then divide by 2. If it is just to measure pure performance of > 100 iterations on a 500x500 problem then the number should be left as > such. My intention was to just show pure performance so readers get a > feel for the kind of speed improvement they get with different options > for a similar calculation. > > cheers, > prabhu > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From jens_brandenburg at gmx.net Thu Nov 3 06:36:48 2005 From: jens_brandenburg at gmx.net (Jens Brandenburg) Date: Thu, 3 Nov 2005 12:36:48 +0100 (MET) Subject: [SciPy-user] building scipy References: <4368FC2F.3010608@ee.byu.edu> Message-ID: <10555.1131017808@www83.gmx.net> hello again, >I presume you are running the older version of scipy, but you should >make that clear. I've downloaded the "Complete SciPy 0.3.2 Source Tarball" from http://scipy.org/download/ and I hope this is the newest version somehow... >Are you talking about "python setup.py install"? Yes, I do. Sorry, for having been unclear... (Somehow the Python command came to my mind... ;-) >Which version of gcc are you using? I use gcc-4.0.2_20050901-3, F2Py-2.45.241_1926, Python-2.3.5, IPython-0.6.15... Well, I hope this will take us a little further. cheers Jens PS.: Maybe someone would like to explain to me what this scipy_core is all about?! Do I still need NumPy? -- Jens Brandenburg Telefonieren Sie schon oder sparen Sie noch? NEU: GMX Phone_Flat http://www.gmx.net/de/go/telefonie From fonnesbeck at gmail.com Thu Nov 3 10:32:20 2005 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Thu, 3 Nov 2005 10:32:20 -0500 Subject: [SciPy-user] newcore module import inconsistency Message-ID: <723eb6930511030732p5df0b57fnee3e9d0300e44ce0@mail.gmail.com> I notice that, under the latest newcore svn build, you cannot import sub-modules as you would expect: from scipy.stats import exponential gives an error. Instead, I need to write: from scipy import stats exponential = stats.exponential I hoping this wasnt designed that way; it seems rather un-pythonic. Am I doing something wrong? 
Thanks, -- Chris Fonnesbeck Atlanta, GA From oliphant at ee.byu.edu Thu Nov 3 10:53:02 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 03 Nov 2005 08:53:02 -0700 Subject: [SciPy-user] newcore module import inconsistency In-Reply-To: <723eb6930511030732p5df0b57fnee3e9d0300e44ce0@mail.gmail.com> References: <723eb6930511030732p5df0b57fnee3e9d0300e44ce0@mail.gmail.com> Message-ID: <436A325E.8060506@ee.byu.edu> Chris Fonnesbeck wrote: >I notice that, under the latest newcore svn build, you cannot import >sub-modules as you would expect: > >from scipy.stats import exponential > >gives an error. Instead, I need to write: > >from scipy import stats >exponential = stats.exponential > > > stats is not part of newcore. There is an exponential in scipy.basic.random however. The only issue is that some variables get converted to a higher namespace when scipy is imported. They are always available in their package, however. Thus, from scipy.basic.random import exponential works from scipy import random exponential = random.exponential also works Improvements From oliphant at ee.byu.edu Thu Nov 3 10:55:56 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 03 Nov 2005 08:55:56 -0700 Subject: [SciPy-user] building scipy In-Reply-To: <10555.1131017808@www83.gmx.net> References: <4368FC2F.3010608@ee.byu.edu> <10555.1131017808@www83.gmx.net> Message-ID: <436A330C.4060601@ee.byu.edu> Jens Brandenburg wrote: >he > >ns > >PS.: Maybe someone would like to explain to me what this scipy_core is all >about?! Do I still need NumPy? > > Scipy core is the replacement for NumPy (it is Numeric Next Generation). Thus, you no longer need NumPy to use SciPy. Go to http://numeric.scipy.org for more information. Best, -Travis From oliphant at ee.byu.edu Thu Nov 3 10:58:02 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 03 Nov 2005 08:58:02 -0700 Subject: [SciPy-user] building scipy In-Reply-To: <10555.1131017808@www83.gmx.net> References: <4368FC2F.3010608@ee.byu.edu> <10555.1131017808@www83.gmx.net> Message-ID: <436A338A.6000208@ee.byu.edu> Jens Brandenburg wrote: >hello again, > > > >I use gcc-4.0.2_20050901-3, F2Py-2.45.241_1926, Python-2.3.5, >IPython-0.6.15... Well, I hope this will take us a little further. > > I've heard of problems with gcc 4.0 and scipy 0.3.2. You might want to use a different gcc. >cheers > >Jens > >PS.: Maybe someone would like to explain to me what this scipy_core is all >about?! Do I still need NumPy? > > > I should be more clear. You still need NumPy to use the older version of scipy. The new scipy_core works with the new scipy (which is only available in a subversion server right now). -Travis From fonnesbeck at gmail.com Thu Nov 3 10:58:00 2005 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Thu, 3 Nov 2005 10:58:00 -0500 Subject: [SciPy-user] unexpected array behaviour in newcore Message-ID: <723eb6930511030758r2b9ed79emeebcc89116ed18fe@mail.gmail.com> I am trying to cast data into arrays within functions to allow for consistent treatment of vector and scalar variables without having to use switch statements. However, casting a scalar to an array under newcore yields some unexpected behaviour: >>> a = array(5) >>> [i for i in a] [] Why would an empty list be returned here? 
Thanks, -- Chris Fonnesbeck Atlanta, GA From fonnesbeck at gmail.com Thu Nov 3 11:06:12 2005 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Thu, 3 Nov 2005 11:06:12 -0500 Subject: [SciPy-user] newcore module import inconsistency In-Reply-To: <436A325E.8060506@ee.byu.edu> References: <723eb6930511030732p5df0b57fnee3e9d0300e44ce0@mail.gmail.com> <436A325E.8060506@ee.byu.edu> Message-ID: <723eb6930511030806q57380346yffa63949b12ba48a@mail.gmail.com> On 11/3/05, Travis Oliphant wrote: > Chris Fonnesbeck wrote: > > >I notice that, under the latest newcore svn build, you cannot import > >sub-modules as you would expect: > > > >from scipy.stats import exponential > > > >gives an error. Instead, I need to write: > > > >from scipy import stats > >exponential = stats.exponential > > > > > > > stats is not part of newcore. > > There is an exponential in scipy.basic.random however. > > The only issue is that some variables get converted to a higher > namespace when scipy is imported. They are always available in their > package, however. > > Thus, > > from scipy.basic.random import exponential works > > from scipy import random > exponential = random.exponential > > also works > > Improvements > Sorry, I meant random, not stats. From oliphant at ee.byu.edu Thu Nov 3 11:13:56 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 03 Nov 2005 09:13:56 -0700 Subject: [SciPy-user] unexpected array behaviour in newcore In-Reply-To: <723eb6930511030758r2b9ed79emeebcc89116ed18fe@mail.gmail.com> References: <723eb6930511030758r2b9ed79emeebcc89116ed18fe@mail.gmail.com> Message-ID: <436A3744.6070108@ee.byu.edu> Chris Fonnesbeck wrote: >I am trying to cast data into arrays within functions to allow for >consistent treatment of vector and scalar variables without having to >use switch statements. However, casting a scalar to an array under >newcore yields some unexpected behaviour: > > > >>>>a = array(5) >>>>[i for i in a] >>>> >>>> >[] > >Why would an empty list be returned here? > > Because a 0-dimensional array is not a sequence (its a scalar). You could claim this should raise an error, which it should. We are using the default iterator for arrays, which apparently does not raise the error. There are functions like atleast_1d(a) which will convert scalars to 1-d vectors if that is what you are trying to do. Using scalars as sequences in Numeric was a hack. -Travis From vbalko at gmail.com Thu Nov 3 14:49:33 2005 From: vbalko at gmail.com (Vlado Balko) Date: Thu, 03 Nov 2005 20:49:33 +0100 Subject: [SciPy-user] nntp Message-ID: <436A69CD.5020500@gmail.com> hello, can I find this mailing list somewhere on the usenet? And are you sure, that searching on this site http://www.scipy.org/mailinglists/ is functional. Always returning no results. balky From rkern at ucsd.edu Thu Nov 3 15:12:26 2005 From: rkern at ucsd.edu (Robert Kern) Date: Thu, 03 Nov 2005 12:12:26 -0800 Subject: [SciPy-user] nntp In-Reply-To: <436A69CD.5020500@gmail.com> References: <436A69CD.5020500@gmail.com> Message-ID: <436A6F2A.2090604@ucsd.edu> Vlado Balko wrote: > hello, > > can I find this mailing list somewhere on the usenet? gmane.comp.python.scientific.user gmane.comp.python.scientific.devel http://gmane.org/ > And are you sure, > that searching on this site http://www.scipy.org/mailinglists/ is > functional. Always returning no results. Much of the website is broken, yes. -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." 
-- Richard Harter From vbalko at gmail.com Thu Nov 3 18:04:02 2005 From: vbalko at gmail.com (balky) Date: Fri, 04 Nov 2005 00:04:02 +0100 Subject: [SciPy-user] nntp In-Reply-To: <436A6F2A.2090604@ucsd.edu> References: <436A69CD.5020500@gmail.com> <436A6F2A.2090604@ucsd.edu> Message-ID: thank you balky Robert Kern wrote: > Vlado Balko wrote: > >>hello, >> >>can I find this mailing list somewhere on the usenet? > > > gmane.comp.python.scientific.user > gmane.comp.python.scientific.devel > > http://gmane.org/ > > >>And are you sure, >>that searching on this site http://www.scipy.org/mailinglists/ is >>functional. Always returning no results. > > > Much of the website is broken, yes. > From fonnesbeck at gmail.com Fri Nov 4 09:12:41 2005 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Fri, 4 Nov 2005 09:12:41 -0500 Subject: [SciPy-user] MemoryError in scipy_core Message-ID: <723eb6930511040612j68b9e8a4m1b792198ee785c04@mail.gmail.com> In the course of moving PyMC from Numeric to scipy_core, I am running into some pretty serious memory issues. For those of you unfamilair with PyMC, it is simply a Bayesian simulation module that estimates model paramters by iteratively sampling from the joint posterior distribution of the model, and saving each sample to an array. Under Numeric, I could safely run several hundered thousand iterations of pretty complex models (i.e. lots of paramters) without trouble. Under scipy_core, PyMC hogs most of the system resources (you really cant do anything else while its running), and crashes after just over 10K iterations, under a pretty simple model. Here is the end of the output: Iteration 14000 at 21696.1090701 Iteration 14100 at 21901.323216 Iteration 14200 at 22108.8079159 Iteration 14300 at 22316.150553 python(3910,0xa000ed68) malloc: *** vm_allocate(size=1069056) failed (error code=3) python(3910,0xa000ed68) malloc: *** error: can't allocate region python(3910,0xa000ed68) malloc: *** set a breakpoint in szone_error to debug python(3910,0xa000ed68) malloc: *** vm_allocate(size=1069056) failed (error code=3) python(3910,0xa000ed68) malloc: *** error: can't allocate region python(3910,0xa000ed68) malloc: *** set a breakpoint in szone_error to debug python(3910,0xa000ed68) malloc: *** vm_allocate(size=1069056) failed (error code=3) python(3910,0xa000ed68) malloc: *** error: can't allocate region python(3910,0xa000ed68) malloc: *** set a breakpoint in szone_error to debug python(3910,0xa000ed68) malloc: *** vm_allocate(size=1069056) failed (error code=3) python(3910,0xa000ed68) malloc: *** error: can't allocate region python(3910,0xa000ed68) malloc: *** set a breakpoint in szone_error to debug python(3910,0xa000ed68) malloc: *** vm_allocate(size=1069056) failed (error code=3) python(3910,0xa000ed68) malloc: *** error: can't allocate region python(3910,0xa000ed68) malloc: *** set a breakpoint in szone_error to debug python(3910,0xa000ed68) malloc: *** vm_allocate(size=1069056) failed (error code=3) python(3910,0xa000ed68) malloc: *** error: can't allocate region python(3910,0xa000ed68) malloc: *** set a breakpoint in szone_error to debug python(3910,0xa000ed68) malloc: *** vm_allocate(size=1069056) failed (error code=3) python(3910,0xa000ed68) malloc: *** error: can't allocate region python(3910,0xa000ed68) malloc: *** set a breakpoint in szone_error to debug python(3910,0xa000ed68) malloc: *** vm_allocate(size=1069056) failed (error code=3) python(3910,0xa000ed68) malloc: *** error: can't allocate region python(3910,0xa000ed68) malloc: *** set a breakpoint 
in szone_error to debug python(3910,0xa000ed68) malloc: *** vm_allocate(size=1069056) failed (error code=3) python(3910,0xa000ed68) malloc: *** error: can't allocate region python(3910,0xa000ed68) malloc: *** set a breakpoint in szone_error to debug python(3910,0xa000ed68) malloc: *** vm_allocate(size=1069056) failed (error code=3) python(3910,0xa000ed68) malloc: *** error: can't allocate region python(3910,0xa000ed68) malloc: *** set a breakpoint in szone_error to debug python(3910,0xa000ed68) malloc: *** vm_allocate(size=1069056) failed (error code=3) python(3910,0xa000ed68) malloc: *** error: can't allocate region python(3910,0xa000ed68) malloc: *** set a breakpoint in szone_error to debug python(3910,0xa000ed68) malloc: *** vm_allocate(size=1069056) failed (error code=3) python(3910,0xa000ed68) malloc: *** error: can't allocate region python(3910,0xa000ed68) malloc: *** set a breakpoint in szone_error to debug Traceback (most recent call last): File "regs.py", line 1075, in ? results = sampler.sample(iterations,burn=burn) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/PyMC/MCMC.py", line 2143, in sample parameter.propose(debug) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/PyMC/MCMC.py", line 664, in propose if not self._sampler.test(): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/PyMC/MCMC.py", line 2060, in test like = self.calculate_likelihood() File "regs.py", line 703, in calculate_likelihood like += self.poisson_like(int(c),exp(theta)) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/PyMC/MCMC.py", line 1192, in poisson_like if sum(mu<=0): raise LikelihoodError File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/base/oldnumeric.py", line 312, in sum return asarray(x).sum(axis, rtype) MemoryError Not sure if this is a leak, or what. Any assistance most welcome. -- Chris Fonnesbeck Atlanta, GA From fonnesbeck at gmail.com Fri Nov 4 10:03:12 2005 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Fri, 4 Nov 2005 10:03:12 -0500 Subject: [SciPy-user] f2py in scipy_core with arrays Message-ID: <723eb6930511040703v65932514t9c4efd0a49b7ddb8@mail.gmail.com> I have a number of f2py extensions that I am trying to use with the f2py that is distributed with scipy_core. However, I run into a problem using these compiled extensions with either Numeric or numarray arrays. The error I get every time is: -> 1208 return _poisson(x, mu) global _poisson = x = array([4, 5, 4, 0, 1, 4, 3, 4, 0, 6, 3, 3, 4, 0, 2, 6, 3, 3, 5, 4, 5, 3, 1, 4, 4, 1, 5, 5, 3, 4, 2, 5, 2, 2, 3, 4, 2, 1, 3, 2, 2, 1, 1, 1, 1, 3, 0, 0, 1, 0]) mu = array([ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]) 1209 1210 def gamma_like(self, x, alpha, beta, name='gamma'): TypeError: array cannot be safely cast to required type Not sure why this should happen. Thanks, -- Chris Fonnesbeck Atlanta, GA From fonnesbeck at gmail.com Fri Nov 4 10:14:32 2005 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Fri, 4 Nov 2005 10:14:32 -0500 Subject: [SciPy-user] scipy.f2py issues Message-ID: <723eb6930511040714h3b697bd7v2a7c95a240950b06@mail.gmail.com> Again, not sure if this belongs on this list or the f2py list, but here goes. 
Delving deeper into the f2py array handling issue from my previous post, I get a very strange result using a Numeric array with an f2py module. The function _poisson() is simply an f2py function that returns the log-likelihood of a poisson distribution for particular passed values. Here is what happens: (Pdb) _poisson([3.],[4.]) Out[2]: -1.6328763961791992 (Pdb) _poisson(array([3.]),[4.]) Out[2]: -19775746048.0 Why should a list and an array return different values? Ten orders of magnitude different! Thanks, -- Chris Fonnesbeck Atlanta, GA From oliphant at ee.byu.edu Fri Nov 4 13:38:05 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 04 Nov 2005 11:38:05 -0700 Subject: [SciPy-user] MemoryError in scipy_core In-Reply-To: <723eb6930511040612j68b9e8a4m1b792198ee785c04@mail.gmail.com> References: <723eb6930511040612j68b9e8a4m1b792198ee785c04@mail.gmail.com> Message-ID: <436BAA8D.6010302@ee.byu.edu> Chris Fonnesbeck wrote: >In the course of moving PyMC from Numeric to scipy_core, I am running >into some pretty serious memory issues. For those of you unfamilair >with PyMC, it is simply a Bayesian simulation module that estimates >model paramters by iteratively sampling from the joint posterior >distribution of the model, and saving each sample to an array. Under >Numeric, I could safely run several hundered thousand iterations of >pretty complex models (i.e. lots of paramters) without trouble. Under >scipy_core, PyMC hogs most of the system resources (you really cant do >anything else while its running), and crashes after just over 10K >iterations, under a pretty simple model. Here is the end of the > > >output: > > It's possible that it's a memory leak. I would love to close it if it is. Is there someway I could run the model you are using? Also, which version of scipy core are you using? All arrays are created and deleted from the same section of code and so it wouldn't be too difficult to track down if certain arrays that were created were not getting deleted. Can you monitor memory usage somehow and report. I will also look more closely at memory usage on some simple tests and try to figure this one out. Thanks for your report and assistance. -Travis From oliphant at ee.byu.edu Fri Nov 4 13:39:34 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 04 Nov 2005 11:39:34 -0700 Subject: [SciPy-user] scipy.f2py issues In-Reply-To: <723eb6930511040714h3b697bd7v2a7c95a240950b06@mail.gmail.com> References: <723eb6930511040714h3b697bd7v2a7c95a240950b06@mail.gmail.com> Message-ID: <436BAAE6.30903@ee.byu.edu> Chris Fonnesbeck wrote: >Again, not sure if this belongs on this list or the f2py list, but here goes. > >Delving deeper into the f2py array handling issue from my previous >post, I get a very strange result using a Numeric array with an f2py >module. The function _poisson() is simply an f2py function that >returns the log-likelihood of a poisson distribution for particular >passed values. Here is what happens: > >(Pdb) _poisson([3.],[4.]) >Out[2]: -1.6328763961791992 >(Pdb) _poisson(array([3.]),[4.]) >Out[2]: -19775746048.0 > >Why should a list and an array return different values? Ten orders of >magnitude different! > > Good question. They should both be converted to the same thing internally (an ndarray). Apparently they are not. We'll look into this. 
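In the meantime, one way to sidestep both the unsafe-cast error and the list-versus-array discrepancy is to coerce the inputs to the dtype the wrapper expects before calling it. This is only a sketch, assuming the Fortran routine wants double precision; _poisson is the poster's own f2py extension, so its call is only indicated in a comment.

from scipy import asarray, float64

# Coerce both arguments to double-precision ndarrays up front, so the
# f2py wrapper never has to guess the type or perform an implicit cast.
x  = asarray([4, 5, 4, 0, 1], dtype=float64)
mu = asarray([1.0, 1.0, 1.0, 1.0, 1.0], dtype=float64)

# like = _poisson(x, mu)   # hypothetical call to the user's f2py routine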
-Travis From haase at msg.ucsf.edu Fri Nov 4 14:58:42 2005 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 4 Nov 2005 12:58:42 -0700 Subject: [SciPy-user] arr.type() becomes arr.dtype - BOTH in scipyCore AND numarray !? Message-ID: <200511041158.43129.haase@msg.ucsf.edu> Hi, I'm really excited about the progress of scipy - great work, Travis ! Also I always need to thank Todd et al. for numarray which we are using for about 4 years now. I was following - I thought - all the postings here, but I don't remember when and what the reason was when a.type() changed to a.dtype (also there is a "dtypecode" somewhere !?). Any reference or explanation would be great. I have to say that the (old) parenthesis where always quite "annoying" ! ;-) Question: does the way allow assignments like "a.dtype = Float32". What does it do ? If not, is it raising an error (I had 2 different people yesterday who tried to assign to a.type here in our lab ...) Also is this now completely supported/tested and suggested for numarray ? (For the time numarray is still separate) Thanks again to everybody, Sebastian Haase UCSF, Sedat Lab From oliphant at ee.byu.edu Fri Nov 4 15:21:43 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 04 Nov 2005 13:21:43 -0700 Subject: [SciPy-user] arr.type() becomes arr.dtype - BOTH in scipyCore AND numarray !? In-Reply-To: <200511041158.43129.haase@msg.ucsf.edu> References: <200511041158.43129.haase@msg.ucsf.edu> Message-ID: <436BC2D7.4050003@ee.byu.edu> Sebastian Haase wrote: >Hi, >I'm really excited about the progress of scipy - great work, Travis ! >Also I always need to thank Todd et al. for numarray which we are using for >about 4 years now. > >I was following - I thought - all the postings here, but I don't remember when >and what the reason was when a.type() changed to a.dtype (also there is a >"dtypecode" somewhere !?). Any reference or explanation would be great. >I have to say that the (old) parenthesis where always quite "annoying" ! ;-) > > > There is a dtypechar that gives a character string. a.dtypechar The problem is that it can get very confusing to distinguish between Python's use of the word type and an array's use of type. So, we changed the array's use of the word type to "dtype" for data-type. All ndarray's have the same type (scipy.ndarray), but they can have different data types. >Question: does the way allow assignments like "a.dtype = Float32". > > Yes, you can do that (if the itemsizes are compatible). It doesn't change any data, but just changes the array information header so that a now thinks the data is float32. Attributes are intrinsic parts of the array. Setting them changes the array directly. You can also get a float32 view of the array without changing the original as b = a.view(float32) The old capitalized names are supported, but the lower-case names are now standard. The module scipy.base.convertcode has functions that can automatically change your source code to be compatible with any new naming conventions. You can look at the file to see all the changes. 
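A short interactive sketch of those idioms, assuming the lower-case scalar type names (float32, int64) are importable from the top-level scipy namespace as described:

from scipy import array, float32, int64

a = array([1.0, 2.0, 3.0])     # float64 by default
print a.dtype, a.dtypechar     # the data type and its one-character code

b = a.view(float32)            # a float32 view of the same buffer;
                               # the original array a is left untouched

a.dtype = int64                # in-place reinterpretation (same 8-byte
                               # itemsize, so the assignment is allowed);
                               # no data is converted, only the header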
These changes are also documented in Chapter 2 of my book which is available for free at http://www.tramy.us/guidetoscipy.html -Travis >Sebastian Haase >UCSF, Sedat Lab > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > From oliphant at ee.byu.edu Fri Nov 4 15:28:37 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 04 Nov 2005 13:28:37 -0700 Subject: [SciPy-user] [Numpy-discussion] How to install SciPy Core In-Reply-To: <2fd52c3d0511041216w237b0d92v2e0e6cbeb488d1da@mail.gmail.com> References: <2fd52c3d0511041216w237b0d92v2e0e6cbeb488d1da@mail.gmail.com> Message-ID: <436BC475.6060605@ee.byu.edu> Shu Li wrote: > Hi, > > I am a little confused about how to install SciPy Core. I am new to > SciPy so I think starting with the new core is a good idea. When I > installed the core package first and then scipy, in the process of > installing scipy, it showed it also installed a lot of stuff to the > core directory which made me worry that part of the new core is > overwritten by old core. And when I installed scipy first, then some > words on the web saying some "__init__.py" file will "break the scipy" > certainly worried me because I don't know what the break here means > and why people are not fixing it if it is avoidable. The new scipy core is completely different from Numeric. It is a replacement for Numeric. In the past you had to install Numeric scipy_core ---| scipy ---| these were sometimes installed together as scipy_complete or just scipy. Now, you need to install scipy_core (newcore) scipy (newscipy) You can install both Numeric and newcore if you want (say for compatibility with old 3rd-party code that has not caught up with the changes yet). But, old scipy will not work with newcore because they use the same directory and the __init__ file has changed. So, just use newcore and newscipy and you will be fine and cutting edge. > > Also even when new core(0.4.2 beta) is installed the scipy still > complained not seeing "Numeric", which added to the confusion. Could > anybody offer some explanation? Thanks a lot! You cannot use old scipy (i.e. version 0.3.2) with the new core. You need to use the new scipy with the new core. Right now, the new scipy is available by check out from the svn server: svn co http://svn.scipy.org/svn/scipy/branches/newscipy svn co http://svn.scipy.org/svn/scipy_core/branches/newcore gets you the latest scipy_core as well --- the svn repository is usually in a state where it can compile, since most developers run off of it directly. -Travis From oliphant at ee.byu.edu Fri Nov 4 16:22:40 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 04 Nov 2005 14:22:40 -0700 Subject: [SciPy-user] MemoryError in scipy_core In-Reply-To: <723eb6930511040612j68b9e8a4m1b792198ee785c04@mail.gmail.com> References: <723eb6930511040612j68b9e8a4m1b792198ee785c04@mail.gmail.com> Message-ID: <436BD120.4060302@ee.byu.edu> Chris Fonnesbeck wrote: >In the course of moving PyMC from Numeric to scipy_core, I am running >into some pretty serious memory issues. For those of you unfamilair >with PyMC, it is simply a Bayesian simulation module that estimates >model paramters by iteratively sampling from the joint posterior >distribution of the model, and saving each sample to an array. Under >Numeric, I could safely run several hundered thousand iterations of >pretty complex models (i.e. lots of paramters) without trouble. 
Under >scipy_core, PyMC hogs most of the system resources (you really cant do >anything else while its running), and crashes after just over 10K >iterations, under a pretty simple model. Here is the end of the >output: > > I just ran scipy's testing suite through valgrind. I found a couple of small items. I don't think they are related to what is happening here. I'll keep looking. I'm still interested to know what version of scipy core you are trying.. -Travis From ryanlists at gmail.com Fri Nov 4 17:00:01 2005 From: ryanlists at gmail.com (Ryan Krauss) Date: Fri, 4 Nov 2005 17:00:01 -0500 Subject: [SciPy-user] changing what is printed to the command prompt Message-ID: This is really more of a pure python question but here goes.... I have a user defined class and when I type the variable name at the command prompt I get: In [29]: temp Out[29]: Can I define a method that would make this print something more useful? I guess I am really asking, is there a Python method that is calls for each class when just the variable name is typed at the command prompt. I tried a __str__ method, but that didn't work. Thanks, Ryan From Fernando.Perez at colorado.edu Fri Nov 4 17:05:23 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Fri, 04 Nov 2005 15:05:23 -0700 Subject: [SciPy-user] changing what is printed to the command prompt In-Reply-To: References: Message-ID: <436BDB23.5000800@colorado.edu> Ryan Krauss wrote: > This is really more of a pure python question but here goes.... > > I have a user defined class and when I type the variable name at the > command prompt I get: > > In [29]: temp > Out[29]: > > Can I define a method that would make this print something more > useful? I guess I am really asking, is there a Python method that is > calls for each class when just the variable name is typed at the > command prompt. I tried a __str__ method, but that didn't work. __repr__ Cheers, f From wjdandreta at att.net Fri Nov 4 17:17:59 2005 From: wjdandreta at att.net (Bill Dandreta) Date: Fri, 04 Nov 2005 17:17:59 -0500 Subject: [SciPy-user] changing what is printed to the command prompt In-Reply-To: References: Message-ID: <436BDE17.6000106@att.net> An HTML attachment was scrubbed... URL: From ryanlists at gmail.com Fri Nov 4 17:24:24 2005 From: ryanlists at gmail.com (Ryan Krauss) Date: Fri, 4 Nov 2005 17:24:24 -0500 Subject: [SciPy-user] changing what is printed to the command prompt In-Reply-To: <436BDB23.5000800@colorado.edu> References: <436BDB23.5000800@colorado.edu> Message-ID: Thanks Fernando. On 11/4/05, Fernando Perez wrote: > Ryan Krauss wrote: > > This is really more of a pure python question but here goes.... > > > > I have a user defined class and when I type the variable name at the > > command prompt I get: > > > > In [29]: temp > > Out[29]: > > > > Can I define a method that would make this print something more > > useful? I guess I am really asking, is there a Python method that is > > calls for each class when just the variable name is typed at the > > command prompt. I tried a __str__ method, but that didn't work. > > __repr__ > > Cheers, > > f > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From jmiller at stsci.edu Fri Nov 4 17:57:14 2005 From: jmiller at stsci.edu (Todd Miller) Date: Fri, 04 Nov 2005 17:57:14 -0500 Subject: [SciPy-user] arr.type() becomes arr.dtype - BOTH in scipyCore AND numarray !? 
In-Reply-To: <200511041158.43129.haase@msg.ucsf.edu> References: <200511041158.43129.haase@msg.ucsf.edu> Message-ID: <436BE74A.7080109@stsci.edu> Sebastian Haase wrote: >Also I always need to thank Todd et al. for numarray which we are using for >about 4 years now. > > I'm glad you found numarray useful. >I was following - I thought - all the postings here, but I don't remember when >and what the reason was when a.type() changed to a.dtype (also there is a >"dtypecode" somewhere !?). Any reference or explanation would be great. >I have to say that the (old) parenthesis where always quite "annoying" ! ;-) > >Question: does the way allow assignments like "a.dtype = Float32". >What does it do ? If not, is it raising an error (I had 2 different people >yesterday who tried to assign to a.type here in our lab ...) > >Also is this now completely supported/tested and suggested for numarray ? (For >the time numarray is still separate) > > I'm adding support for some of newcore's new interface features out of desire to make it easier to migrate. Our intent is to make it possible to write newcore code and run it on numarray now as newcore matures. Not every newcore feature is going to be supported, but we'll make an effort to support those which are easy to implement. Let me know is there's some newcore idiom you want to use that numarray doesn't have yet. Regards, Todd From vbalko at gmail.com Sat Nov 5 07:30:45 2005 From: vbalko at gmail.com (Vlado Balko) Date: Sat, 05 Nov 2005 13:30:45 +0100 Subject: [SciPy-user] weave Message-ID: <436CA5F5.8090800@gmail.com> Hello, I`m trying to learn weave to speed up my program but at the example code >>> import weave >>> a = 1 >>> weave.inline('printf("%d\\n",a);',['a']) I get the error Traceback (most recent call last): File "", line 1, in ? File "E:\Program Files\Python\Lib\site-packages\weave\inline_tools.py", line 3 22, in inline results = attempt_function_call(code,local_dict,global_dict) File "E:\Program Files\Python\Lib\site-packages\weave\inline_tools.py", line 3 72, in attempt_function_call function_list = function_catalog.get_functions(code,module_dir) File "E:\Program Files\Python\Lib\site-packages\weave\catalog.py", line 568, i n get_functions function_list = self.get_cataloged_functions(code) File "E:\Program Files\Python\Lib\site-packages\weave\catalog.py", line 488, i n get_cataloged_functions cat = get_catalog(path,mode) File "E:\Program Files\Python\Lib\site-packages\weave\catalog.py", line 262, i n get_catalog if (dumb and os.path.exists(catalog_file+'.dat')) \ File "E:\Program Files\Python\lib\ntpath.py", line 256, in exists st = os.stat(path) TypeError: coercing to Unicode: need string or buffer, NoneType found >>> I have Scipy 0.3.2 installed where is the problem? balky From fonnesbeck at gmail.com Sat Nov 5 14:33:57 2005 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Sat, 5 Nov 2005 14:33:57 -0500 Subject: [SciPy-user] scipy.f2py issues In-Reply-To: <436BAAE6.30903@ee.byu.edu> References: <723eb6930511040714h3b697bd7v2a7c95a240950b06@mail.gmail.com> <436BAAE6.30903@ee.byu.edu> Message-ID: <723eb6930511051133u5e3db87dw3b11de57d7b3a4f5@mail.gmail.com> On 11/4/05, Travis Oliphant wrote: > Chris Fonnesbeck wrote: > > >Again, not sure if this belongs on this list or the f2py list, but here goes. > > > >Delving deeper into the f2py array handling issue from my previous > >post, I get a very strange result using a Numeric array with an f2py > >module. 
The function _poisson() is simply an f2py function that > >returns the log-likelihood of a poisson distribution for particular > >passed values. Here is what happens: > > > >(Pdb) _poisson([3.],[4.]) > >Out[2]: -1.6328763961791992 > >(Pdb) _poisson(array([3.]),[4.]) > >Out[2]: -19775746048.0 > > > >Why should a list and an array return different values? Ten orders of > >magnitude different! > > > > > Good question. They should both be converted to the same thing > internally (an ndarray). Apparently they are not. We'll look into this. > Travis, The PyMC package that I sent you regarding the possible memory leak in scipy_core also contains the _poisson f2py function (called from poisson_likelihood), if you want to check it out. Cheers, C. -- Chris Fonnesbeck Atlanta, GA From mcantor at stanford.edu Sat Nov 5 20:27:00 2005 From: mcantor at stanford.edu (mike cantor) Date: Sat, 05 Nov 2005 17:27:00 -0800 Subject: [SciPy-user] parameter bounds using leastsq In-Reply-To: <723eb6930511051133u5e3db87dw3b11de57d7b3a4f5@mail.gmail.co m> References: <723eb6930511040714h3b697bd7v2a7c95a240950b06@mail.gmail.com> <436BAAE6.30903@ee.byu.edu> <723eb6930511051133u5e3db87dw3b11de57d7b3a4f5@mail.gmail.com> Message-ID: <6.0.1.1.2.20051105172446.01e29c38@mcantor.pobox.stanford.edu> Is there any way to enforce upper and/or lower bounds on parameters (x0) optimized by leastsq? If not can anyone tell me where I might look to hack this? Thanks, -mike From sransom at nrao.edu Sat Nov 5 20:51:10 2005 From: sransom at nrao.edu (Scott Ransom) Date: Sat, 5 Nov 2005 20:51:10 -0500 Subject: [SciPy-user] parameter bounds using leastsq In-Reply-To: <6.0.1.1.2.20051105172446.01e29c38@mcantor.pobox.stanford.edu> References: <723eb6930511040714h3b697bd7v2a7c95a240950b06@mail.gmail.com> <436BAAE6.30903@ee.byu.edu> <723eb6930511051133u5e3db87dw3b11de57d7b3a4f5@mail.gmail.com> <6.0.1.1.2.20051105172446.01e29c38@mcantor.pobox.stanford.edu> Message-ID: <20051106015110.GA16522@ssh.cv.nrao.edu> Just to remind people, there is a very full featured non-linear least squares fitter that is written in pure python (but based on MINPACK) available here: http://cars9.uchicago.edu/software/python/mpfit.html It allows for constraints on paramters and gives error estimates (computed from the covariance matrix) on the fitted parameters in a convenient format. Scott On Sat, Nov 05, 2005 at 05:27:00PM -0800, mike cantor wrote: > Is there any way to enforce upper and/or lower bounds on parameters (x0) > optimized by leastsq? If not can anyone tell me where I might look to hack > this? > > Thanks, > -mike > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user -- -- Scott M. Ransom Address: NRAO Phone: (434) 296-0320 520 Edgemont Rd. 
email: sransom at nrao.edu Charlottesville, VA 22903 USA GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 From mcantor at stanford.edu Sat Nov 5 21:07:56 2005 From: mcantor at stanford.edu (mike cantor) Date: Sat, 05 Nov 2005 18:07:56 -0800 Subject: [SciPy-user] parameter bounds using leastsq In-Reply-To: <20051106015110.GA16522@ssh.cv.nrao.edu> References: <723eb6930511040714h3b697bd7v2a7c95a240950b06@mail.gmail.com> <436BAAE6.30903@ee.byu.edu> <723eb6930511051133u5e3db87dw3b11de57d7b3a4f5@mail.gmail.com> <6.0.1.1.2.20051105172446.01e29c38@mcantor.pobox.stanford.edu> <20051106015110.GA16522@ssh.cv.nrao.edu> Message-ID: <6.0.1.1.2.20051105180535.01e2f7c0@mcantor.pobox.stanford.edu> Thanks Scott, How do you think it compares in terms of speed with the SciPy implementation that wraps C code? Speed is a big concern for me. -mike At 05:51 PM 11/5/2005, you wrote: >Just to remind people, there is a very full featured non-linear >least squares fitter that is written in pure python (but based >on MINPACK) available here: > >http://cars9.uchicago.edu/software/python/mpfit.html > >It allows for constraints on paramters and gives error >estimates (computed from the covariance matrix) on the fitted >parameters in a convenient format. > >Scott > > >On Sat, Nov 05, 2005 at 05:27:00PM -0800, mike cantor wrote: > > Is there any way to enforce upper and/or lower bounds on parameters (x0) > > optimized by leastsq? If not can anyone tell me where I might look to > hack > > this? > > > > Thanks, > > -mike > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.net > > http://www.scipy.net/mailman/listinfo/scipy-user > >-- >-- >Scott M. Ransom Address: NRAO >Phone: (434) 296-0320 520 Edgemont Rd. >email: sransom at nrao.edu Charlottesville, VA 22903 USA >GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user From sransom at nrao.edu Sat Nov 5 21:37:03 2005 From: sransom at nrao.edu (Scott Ransom) Date: Sat, 5 Nov 2005 21:37:03 -0500 Subject: [SciPy-user] parameter bounds using leastsq In-Reply-To: <6.0.1.1.2.20051105180535.01e2f7c0@mcantor.pobox.stanford.edu> References: <723eb6930511040714h3b697bd7v2a7c95a240950b06@mail.gmail.com> <436BAAE6.30903@ee.byu.edu> <723eb6930511051133u5e3db87dw3b11de57d7b3a4f5@mail.gmail.com> <6.0.1.1.2.20051105172446.01e29c38@mcantor.pobox.stanford.edu> <20051106015110.GA16522@ssh.cv.nrao.edu> <6.0.1.1.2.20051105180535.01e2f7c0@mcantor.pobox.stanford.edu> Message-ID: <20051106023703.GA21935@ssh.cv.nrao.edu> On Sat, Nov 05, 2005 at 06:07:56PM -0800, mike cantor wrote: > Thanks Scott, > > How do you think it compares in terms of speed with the SciPy > implementation that wraps C code? Speed is a big concern for me. Speed is its one downfall. It is definitely much slower than the SciPy leastsq (due to being completely done in C). I guess there probably wouldn't be such a huge hit if you had a complex function to optimize which dominated the run time, but if that is not the case, figuring out a workaround for leastsq will probably be your best bet. 
Scott > At 05:51 PM 11/5/2005, you wrote: > >Just to remind people, there is a very full featured non-linear > >least squares fitter that is written in pure python (but based > >on MINPACK) available here: > > > >http://cars9.uchicago.edu/software/python/mpfit.html > > > >It allows for constraints on paramters and gives error > >estimates (computed from the covariance matrix) on the fitted > >parameters in a convenient format. > > > >Scott > > > > > >On Sat, Nov 05, 2005 at 05:27:00PM -0800, mike cantor wrote: > > > Is there any way to enforce upper and/or lower bounds on parameters (x0) > > > optimized by leastsq? If not can anyone tell me where I might look to > > hack > > > this? > > > > > > Thanks, > > > -mike > > > > > > _______________________________________________ > > > SciPy-user mailing list > > > SciPy-user at scipy.net > > > http://www.scipy.net/mailman/listinfo/scipy-user > > > >-- > >-- > >Scott M. Ransom Address: NRAO > >Phone: (434) 296-0320 520 Edgemont Rd. > >email: sransom at nrao.edu Charlottesville, VA 22903 USA > >GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 > > > >_______________________________________________ > >SciPy-user mailing list > >SciPy-user at scipy.net > >http://www.scipy.net/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user -- -- Scott M. Ransom Address: NRAO Phone: (434) 296-0320 520 Edgemont Rd. email: sransom at nrao.edu Charlottesville, VA 22903 USA GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 From rkern at ucsd.edu Sat Nov 5 22:47:52 2005 From: rkern at ucsd.edu (Robert Kern) Date: Sat, 05 Nov 2005 19:47:52 -0800 Subject: [SciPy-user] parameter bounds using leastsq In-Reply-To: <6.0.1.1.2.20051105172446.01e29c38@mcantor.pobox.stanford.edu> References: <723eb6930511040714h3b697bd7v2a7c95a240950b06@mail.gmail.com> <436BAAE6.30903@ee.byu.edu> <723eb6930511051133u5e3db87dw3b11de57d7b3a4f5@mail.gmail.com> <6.0.1.1.2.20051105172446.01e29c38@mcantor.pobox.stanford.edu> Message-ID: <436D7CE8.4010707@ucsd.edu> mike cantor wrote: > Is there any way to enforce upper and/or lower bounds on parameters (x0) > optimized by leastsq? If not can anyone tell me where I might look to hack > this? If you just need the optimizal value and not some (dubious) estimate of the uncertainty, then you can use one of the constrained minimizers. You simply have to make an appropriate misfit function: def f(beta, x): # compute values y given parameters beta at points x def misfit(beta, x, y): diff = y - f(beta, x) return scipy.sum(diff*diff) beta_opt = scipy.optimize.fmin_cobyla(f, beta0, constraints, (x, y)) -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From brendansimons at yahoo.ca Sat Nov 5 23:58:48 2005 From: brendansimons at yahoo.ca (Brendan Simons) Date: Sat, 5 Nov 2005 23:58:48 -0500 Subject: [SciPy-user] One step closer to a scipy matlab/mathematica/mathcad replacement? In-Reply-To: References: Message-ID: Just came across this project to add symbolic processing to python, for anyone interested: http://swiginac.berlios.de/ One day I hope to have the skills to put Ginac, Scipy, iPython, and matplotlib together into a viable matlab replacement. 
Brendan -- Brendan Simons, Project Engineer Stern Laboratories, Hamilton Ontario www.sternlab.com __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com From samrobertsmith at gmail.com Sun Nov 6 07:46:56 2005 From: samrobertsmith at gmail.com (Shi Mu) Date: Sun, 6 Nov 2005 04:46:56 -0800 Subject: [SciPy-user] CVS Message-ID: <1d987df30511060446y153c2dc4l2388f1fc5df57b17@mail.gmail.com> why it did not see anthing when checking http://www.scipy.org/cvs/viewcvs/ From rkern at ucsd.edu Sun Nov 6 08:00:13 2005 From: rkern at ucsd.edu (Robert Kern) Date: Sun, 06 Nov 2005 05:00:13 -0800 Subject: [SciPy-user] CVS In-Reply-To: <1d987df30511060446y153c2dc4l2388f1fc5df57b17@mail.gmail.com> References: <1d987df30511060446y153c2dc4l2388f1fc5df57b17@mail.gmail.com> Message-ID: <436DFE5D.8050508@ucsd.edu> Shi Mu wrote: > why it did not see anthing when checking http://www.scipy.org/cvs/viewcvs/ We are no longer using CVS. Unfortunately, the www.scipy.org website has not been updated to reflect this. Current development is taking place on two branches in a Subversion repository. http://svn.scipy.org/svn/scipy_core/branches/newcore/ http://svn.scipy.org/svn/scipy/branches/newscipy/ -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From ckkart at hoc.net Sun Nov 6 09:03:24 2005 From: ckkart at hoc.net (Christian Kristukat) Date: Sun, 06 Nov 2005 15:03:24 +0100 Subject: [SciPy-user] parameter bounds using leastsq In-Reply-To: <6.0.1.1.2.20051105172446.01e29c38@mcantor.pobox.stanford.edu> References: <723eb6930511040714h3b697bd7v2a7c95a240950b06@mail.gmail.com> <436BAAE6.30903@ee.byu.edu> <723eb6930511051133u5e3db87dw3b11de57d7b3a4f5@mail.gmail.com> <6.0.1.1.2.20051105172446.01e29c38@mcantor.pobox.stanford.edu> Message-ID: <436E0D2C.404@hoc.net> mike cantor wrote: > Is there any way to enforce upper and/or lower bounds on parameters (x0) > optimized by leastsq? If not can anyone tell me where I might look to hack > this? One way that seems to work quite good is to cheat the optimizer by externally mapping +-inf to the upper/lower bounds of the parameters: If a is the parameter with upper/lower bounds amax/amin, then newa will be the parameter which you should use to evaluate your function: newa = (amin+amax)/2.0+(amax-amin)/2.0*a/(abs(a+1)) Like this when the optimizer is pushing the parameters to +-inf they will in fact slowly approach amax and amin. I'm not sure if this works with any function but up to now I did not have any problems. Btw. I succesfully use this method in a leastsq curve fitting application called peak-o-mat (http://lorentz.sf.net). Regards, Christian From stephen.walton at csun.edu Sun Nov 6 10:36:25 2005 From: stephen.walton at csun.edu (Stephen Walton) Date: Sun, 06 Nov 2005 07:36:25 -0800 Subject: [SciPy-user] weave In-Reply-To: <436CA5F5.8090800@gmail.com> References: <436CA5F5.8090800@gmail.com> Message-ID: <436E22F9.4000802@csun.edu> Vlado Balko wrote: >Hello, > >I`m trying to learn weave to speed up my program but at the example code > > >>> import weave > >>> a = 1 > >>> weave.inline('printf("%d\\n",a);',['a']) > > >I get the error > > Can anyone else help here? This example works fine on newscipy SVN. 
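Picking up Christian Kristukat's bound-mapping suggestion for leastsq from a few messages back, here is a minimal self-contained sketch of the idea. The toy exponential model and the (0.1, 5.0) bounds are invented for illustration, and the mapping uses u/(abs(u)+1), a smooth variant of the expression he posted: the optimizer varies an unconstrained internal parameter, while the model only ever sees a value inside the bounds.

from scipy import arange, exp
from scipy.optimize import leastsq

# Map an unconstrained internal parameter u onto the open interval
# (lo, hi): u -> -inf gives lo, u -> +inf gives hi.
def to_bounded(u, lo, hi):
    return 0.5*(lo + hi) + 0.5*(hi - lo)*u/(abs(u) + 1.0)

# Toy data for y = A*exp(-k*x); the decay rate k must stay in (0.1, 5.0).
x = arange(0.0, 4.0, 0.1)
y = 2.0*exp(-1.3*x)

def residuals(p, x, y):
    A = p[0]
    k = to_bounded(p[1], 0.1, 5.0)    # leastsq varies p[1] freely,
    return y - A*exp(-k*x)            # the model only sees a bounded k

p, ier = leastsq(residuals, [1.0, 0.0], args=(x, y))
print p[0], to_bounded(p[1], 0.1, 5.0)   # A and the bounded estimate of k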
From fonnesbeck at gmail.com Sun Nov 6 11:50:51 2005 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Sun, 6 Nov 2005 11:50:51 -0500 Subject: [SciPy-user] dot() fails in scipy_core Message-ID: <723eb6930511060850g3053626as54583822dfdcea85@mail.gmail.com> Regardless of what arguments are passed to dot(), it causes a bus error. Anyone else get this? Running recent svn build on OSX 10.4.3. C. -- Chris Fonnesbeck Atlanta, GA From stephen.walton at csun.edu Sun Nov 6 21:30:23 2005 From: stephen.walton at csun.edu (Stephen Walton) Date: Sun, 06 Nov 2005 18:30:23 -0800 Subject: [SciPy-user] dot() fails in scipy_core In-Reply-To: <723eb6930511060850g3053626as54583822dfdcea85@mail.gmail.com> References: <723eb6930511060850g3053626as54583822dfdcea85@mail.gmail.com> Message-ID: <436EBC3F.8090702@csun.edu> Chris Fonnesbeck wrote: >Regardless of what arguments are passed to dot(), it causes a bus >error. Anyone else get this? > I'm not. newscipy is passing all tests here under FC4. From rkern at ucsd.edu Sun Nov 6 21:58:09 2005 From: rkern at ucsd.edu (Robert Kern) Date: Sun, 06 Nov 2005 18:58:09 -0800 Subject: [SciPy-user] dot() fails in scipy_core In-Reply-To: <723eb6930511060850g3053626as54583822dfdcea85@mail.gmail.com> References: <723eb6930511060850g3053626as54583822dfdcea85@mail.gmail.com> Message-ID: <436EC2C1.4060204@ucsd.edu> Chris Fonnesbeck wrote: > Regardless of what arguments are passed to dot(), it causes a bus > error. Anyone else get this? Running recent svn build on OSX 10.4.3. It works fine for me on OS X 10.4.3 with revision 1440 of scipy_core against Accelerate.framework. Some things changed in the past few days that changed the structure layouts, so you may want to clear out build/ entirely and rebuild cleanly. -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From simon at arrowtheory.com Mon Nov 7 12:28:34 2005 From: simon at arrowtheory.com (Simon Burton) Date: Mon, 7 Nov 2005 17:28:34 +0000 Subject: [SciPy-user] One step closer to a scipy matlab/mathematica/mathcad replacement? In-Reply-To: References: Message-ID: <20051107172834.275b9659.simon@arrowtheory.com> On Sat, 5 Nov 2005 23:58:48 -0500 Brendan Simons wrote: > Just came across this project to add symbolic processing to python, > for anyone interested: > http://swiginac.berlios.de/ downloading now... > One day I hope to have the skills to put Ginac, Scipy, iPython, and > matplotlib together into a viable matlab replacement. err, matlab does not do symbolic computation.. ? Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 02 6249 6940 http://arrowtheory.com From rkern at ucsd.edu Mon Nov 7 01:40:09 2005 From: rkern at ucsd.edu (Robert Kern) Date: Sun, 06 Nov 2005 22:40:09 -0800 Subject: [SciPy-user] One step closer to a scipy matlab/mathematica/mathcad replacement? In-Reply-To: <20051107172834.275b9659.simon@arrowtheory.com> References: <20051107172834.275b9659.simon@arrowtheory.com> Message-ID: <436EF6C9.5090900@ucsd.edu> Simon Burton wrote: > On Sat, 5 Nov 2005 23:58:48 -0500 > Brendan Simons wrote: >>One day I hope to have the skills to put Ginac, Scipy, iPython, and >>matplotlib together into a viable matlab replacement. > > err, matlab does not do symbolic computation.. ? I can if you can afford it. http://www.mathworks.com/products/symbolic/ -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." 
-- Richard Harter From rkern at ucsd.edu Mon Nov 7 01:41:37 2005 From: rkern at ucsd.edu (Robert Kern) Date: Sun, 06 Nov 2005 22:41:37 -0800 Subject: [SciPy-user] One step closer to a scipy matlab/mathematica/mathcad replacement? In-Reply-To: <436EF6C9.5090900@ucsd.edu> References: <20051107172834.275b9659.simon@arrowtheory.com> <436EF6C9.5090900@ucsd.edu> Message-ID: <436EF721.5080204@ucsd.edu> Robert Kern wrote: > Simon Burton wrote: > >>On Sat, 5 Nov 2005 23:58:48 -0500 >>Brendan Simons wrote: > >>>One day I hope to have the skills to put Ginac, Scipy, iPython, and >>>matplotlib together into a viable matlab replacement. >> >>err, matlab does not do symbolic computation.. ? > > I can if you can afford it. *It* can ... -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From seth.olsen at gmail.com Mon Nov 7 03:28:39 2005 From: seth.olsen at gmail.com (Dr. Seth Olsen) Date: Mon, 7 Nov 2005 18:28:39 +1000 Subject: [SciPy-user] Build problems Message-ID: Hi Sci-py users, I'm trying to build Sci-py on a Pentium M machine running Fedora core 4 with gcc v.4.0.1 and python 2.4.1. I have already installed atlas, atlas_lapack, fftw, f2py etc. in their latest incarnations as visible on the Fedora.orgrpm site via yum. When I execute 'python setup.py build', the building stops with an error. The problem seems to be with the dfitpack module. The failed command in the build gives this standard output: building 'scipy.interpolate.dfitpack' extension compiling C sources gcc options: '-pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -m32 -march=i386 -mtune=pentium4 -fasynchronous-unwind-tables -D_GNU_SOURCE -fPIC -fPIC' creating build/temp.linux-i686-2.4/build/src/Lib/interpolate compile options: '-Ibuild/src -I/usr/include/python2.4 -c' gcc: build/src/Lib/interpolate/dfitpackmodule.c building 'scipy.interpolate.dfitpack' extension compiling C sources gcc options: '-pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -m32 -march=i386 -mtune=pentium4 -fasynchronous-unwind-tables -D_GNU_SOURCE -fPIC -fPIC' creating build/temp.linux-i686-2.4/build/src/Lib/interpolate compile options: '-Ibuild/src -I/usr/include/python2.4 -c' gcc: build/src/Lib/interpolate/dfitpackmodule.c build/src/Lib/interpolate/dfitpackmodule.c: In function $-1??f2py_rout_dfitpack_surfit_smth??: build/src/Lib/interpolate/dfitpackmodule.c:2446: error: invalid storage class for function $-1??calc_lwrk1?? build/src/Lib/interpolate/dfitpackmodule.c:2458: error: invalid storage class for function $-1??calc_lwrk2?? build/src/Lib/interpolate/dfitpackmodule.c: In function $-1??f2py_rout_dfitpack_surfit_lsq??: build/src/Lib/interpolate/dfitpackmodule.c:2882: error: invalid storage class for function $-1??calc_lwrk1?? build/src/Lib/interpolate/dfitpackmodule.c:2894: error: invalid storage class for function $-1??calc_lwrk2?? build/src/Lib/interpolate/dfitpackmodule.c: In function $-1??f2py_rout_dfitpack_surfit_smth??: build/src/Lib/interpolate/dfitpackmodule.c:2446: error: invalid storage class for function $-1??calc_lwrk1?? build/src/Lib/interpolate/dfitpackmodule.c:2458: error: invalid storage class for function $-1??calc_lwrk2?? build/src/Lib/interpolate/dfitpackmodule.c: In function $-1??f2py_rout_dfitpack_surfit_lsq??: build/src/Lib/interpolate/dfitpackmodule.c:2882: error: invalid storage class for function $-1??calc_lwrk1?? 
build/src/Lib/interpolate/dfitpackmodule.c:2894: error: invalid storage class for function $-1??calc_lwrk2?? error: Command "gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -m32 -march=i386 -mtune=pentium4 -fasynchronous-unwind-tables -D_GNU_SOURCE -fPIC -fPIC -Ibuild/src -I/usr/include/python2.4 -c build/src/Lib/interpolate/dfitpackmodule.c -o build/temp.linux-i686-2.4/build/src/Lib/interpolate/dfitpackmodule.o" failed with exit status 1 In the above, please replace the nonsense $-1?? with the backward quote ` and 2?? with the forward quote '. Has anybody seen this sort of thing before? Does anyone know how I can get around it (and, preferably, still keep the dfitpack functionality)? Cheers, Seth -- ccmsccmsccmsccmsccmsccmsccmsccmsccmsccmsccmsccmsccmsccmsccmsccms Dr Seth Olsen, PhD Postdoctoral Fellow, Biomolecular Modeling Group Centre for Computational Molecular Science Chemistry Building, The University of Queensland Qld 4072, Brisbane, Australia tel (617) 33653732 fax (617) 33654623 email: s.olsen1 at uq.edu.au Web: www.ccms.uq.edu.au ccmsccmsccmsccmsccmsccmsccmsccmsccmsccmsccmsccmsccmsccmsccmsccms -------------- next part -------------- An HTML attachment was scrubbed... URL: From giovanni.samaey at cs.kuleuven.ac.be Mon Nov 7 13:23:38 2005 From: giovanni.samaey at cs.kuleuven.ac.be (Giovanni Samaey) Date: Mon, 07 Nov 2005 19:23:38 +0100 Subject: [SciPy-user] scipy and MPI: how I solved "import scipy" hanging Message-ID: <436F9BAA.5030300@cs.kuleuven.ac.be> Hi all, I have been struggling to get newscipy working on a cluster, in which each node is trying to import scipy. The problem was that the import is hanging, on a weird place, namely when in the file machar.py from scipy.base, an object of the class MachAr(object) is being created. The snippet of source code that is hanging is displayed below (with line numbers) 246 import math 247 self.precision = int(-math.log10(float_to_float(self.eps))) 248 ten = two + two + two + two + two 249 resolution = ten ** (-self.precision) 250 self.resolution = float_to_float(resolution) 251 self._str_resolution = float_to_str(resolution) Now here comes the funny thing: line 249 hangs when executed using pyMPI (from the pyMPI project) and also using mpipython (from ScientificPython). Apparently, "ten" is equal to [10.] here (I presume this is a list). When changing the line to 249 resolution = ten[0] ** (-self.precision) I get past this. I don't understand why, and I would be glad to find out why, but the message here is simply for sharing this with other people who might find the same things... Best, Giovanni Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From zollars at caltech.edu Mon Nov 7 13:44:40 2005 From: zollars at caltech.edu (Eric Zollars) Date: Mon, 07 Nov 2005 10:44:40 -0800 Subject: [SciPy-user] parameter bounds using leastsq In-Reply-To: <436D7CE8.4010707@ucsd.edu> References: <723eb6930511040714h3b697bd7v2a7c95a240950b06@mail.gmail.com> <436BAAE6.30903@ee.byu.edu> <723eb6930511051133u5e3db87dw3b11de57d7b3a4f5@mail.gmail.com> <6.0.1.1.2.20051105172446.01e29c38@mcantor.pobox.stanford.edu> <436D7CE8.4010707@ucsd.edu> Message-ID: <436FA098.7040103@caltech.edu> Robert- Could you flesh out this answer some more? I've had to do this in the past and was sure I was missing something. In the simplest case a set of: y1 = (b0 + b1*x1 + b2*x2)1 . . yn = (b0 + b1*x1 + b2*x2)n In fortran I would pass x as a 2d matrix to the function misfit. 
What do you do in scipy if x is a vector at each point y? Eric Robert Kern wrote: > mike cantor wrote: > >>Is there any way to enforce upper and/or lower bounds on parameters (x0) >>optimized by leastsq? If not can anyone tell me where I might look to hack >>this? > > > If you just need the optimizal value and not some (dubious) estimate of > the uncertainty, then you can use one of the constrained minimizers. You > simply have to make an appropriate misfit function: > > def f(beta, x): > # compute values y given parameters beta at points x > > def misfit(beta, x, y): > diff = y - f(beta, x) > return scipy.sum(diff*diff) > > beta_opt = scipy.optimize.fmin_cobyla(f, beta0, constraints, (x, y)) > From lanceboyle at bluebottle.com Tue Nov 8 04:51:58 2005 From: lanceboyle at bluebottle.com (lanceboyle at bluebottle.com) Date: Tue, 8 Nov 2005 02:51:58 -0700 Subject: [SciPy-user] One step closer to a scipy matlab/mathematica/mathcad replacement? In-Reply-To: References: Message-ID: This would be a way noble project. Didn't someone else just post here a few days ago about combining a symbolic package and TeX with Python? If that wasn't you, are you guys communicating? The Matlab symbolics module is Maple, apparently Maple 8. (Maple itself is currently at v10). www.maplesoft.com A more appropriate (IMO) target is Maple itself, since it (I suppose) more elegantly integrates symbolics, numerics, and document preparation into a single, _very_ deep program. And its programming language is more like a real language than Matlab's pathos. (All these remarks by a Maple newbie.) But the sadness of Maple is that it now runs its cross-platform interface on Java. As as Mac human interface Nazis, that's bad news to me. But Maple is an awesome program and in terms of its current interface (Java warts notwithstanding) it makes Mathematica look like 1986 (which it is) in terms of interface. There are two programs that bear attention from someone making progress in this area. Both are real, "grown-up" programs, both have been around since the late 1980s, both are still available and actively marketed, both are very reasonably priced, and both have fully-functioning demos, I think. LiveMath Maker was formerly known as Theorist. Its interface lets you drag parts of expressions around and does all the symbolics and plotting that most people could want. For some odd reason it is marketed as a tool for high school and college students, but I've used it for years for some very extensive stuff. www.livemath.com This company also makes a killer equation setter that for 15 years has made most others look like pikers. Their programs are a little rusty around the edges (no Quartz drawing for Macintosh LiveMath Maker, for example). The other program is Igor Pro www.wavemetrics.com. It is strictly numeric but has a stunningly capable interface for organizing huge projects and manipulating data as well as document production. I would love to see the best features of Maple, LiveMath Maker, and Igor Pro combined into one coherent package with Python underpinnings. It seems like all of the pieces are already available as open source and are awaiting someone to take on the large task of making them play together. BTW, what is the state of affairs on Chaco? 
Lance On Nov 5, 2005, at 9:58 PM, Brendan Simons wrote: > Just came across this project to add symbolic processing to python, > for anyone interested: > http://swiginac.berlios.de/ > One day I hope to have the skills to put Ginac, Scipy, iPython, and > matplotlib together into a viable matlab replacement. > > Brendan > From fonnesbeck at gmail.com Tue Nov 8 11:20:44 2005 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Tue, 8 Nov 2005 11:20:44 -0500 Subject: [SciPy-user] random number sampling performance in newcore Message-ID: <723eb6930511080820x3f78d336uc33bd78982420986@mail.gmail.com> I was surprised to find that the random number sampler in newcore is significantly slower than RandomArray in Numeric (at least for binomial sampling). A quick comparison showed the scipy.basic.random.binomial sampler to be over 8 times slower than RandomArray.binomial. I was surprised, since the newcore stuff is Pyrex-based (isnt it?). Am I the only one observing such differences, or am I using the wrong method? If not, is the performance expected to improve significantly. Thanks, C. -- Chris Fonnesbeck Atlanta, GA From stephen.walton at csun.edu Tue Nov 8 11:44:49 2005 From: stephen.walton at csun.edu (Stephen Walton) Date: Tue, 08 Nov 2005 08:44:49 -0800 Subject: [SciPy-user] Build problems In-Reply-To: References: Message-ID: <4370D601.8000603@csun.edu> Dr. Seth Olsen wrote: > I'm trying to build Sci-py on a Pentium M machine running Fedora core > 4 with gcc v.4.0.1 and python 2.4.1. I have already installed atlas, > atlas_lapack, fftw, f2py etc. in their latest incarnations as visible > on the Fedora.org rpm site via yum. At a wild guess, I'd suspect that the f2py being distributed at fedora.org is old. Are you building "old" or "new" Scipy? From stephen.walton at csun.edu Tue Nov 8 11:53:12 2005 From: stephen.walton at csun.edu (Stephen Walton) Date: Tue, 08 Nov 2005 08:53:12 -0800 Subject: [SciPy-user] parameter bounds using leastsq In-Reply-To: <436FA098.7040103@caltech.edu> References: <723eb6930511040714h3b697bd7v2a7c95a240950b06@mail.gmail.com> <436BAAE6.30903@ee.byu.edu> <723eb6930511051133u5e3db87dw3b11de57d7b3a4f5@mail.gmail.com> <6.0.1.1.2.20051105172446.01e29c38@mcantor.pobox.stanford.edu> <436D7CE8.4010707@ucsd.edu> <436FA098.7040103@caltech.edu> Message-ID: <4370D7F8.6080704@csun.edu> Eric Zollars wrote: >Robert- > Could you flesh out this answer some more? I've had to do this in the >past and was sure I was missing something. In the simplest case a set of: >y1 = (b0 + b1*x1 + b2*x2)1 >. >. >yn = (b0 + b1*x1 + b2*x2)n > >In fortran I would pass x as a 2d matrix to the function misfit. What >do you do in scipy if x is a vector at each point y? > > You would write your function f accordingly. Because Scipy is vector oriented, a simple y=b0+b1*x[0]+b2*x[1] would work, assuming x is a Scipy/Numeric array of shape (2,n). By the way, I humbly suggest there was a typo in Robert's original post: >>def f(beta, x): >> # compute values y given parameters beta at points x >> >>def misfit(beta, x, y): >> diff = y - f(beta, x) >> return scipy.sum(diff*diff) >> >>beta_opt = scipy.optimize.fmin_cobyla(f, beta0, constraints, (x, y)) >> >> I think the "f" argument in the call to fmin_cobyla needs to be "misfit" else you're not fitting to the data. 
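Putting Robert's snippet and that correction together, a self-contained sketch of the constrained fit could look like the following. The model, toy data and the bound on b1 are purely illustrative; fmin_cobyla constraints are functions that must stay >= 0, and they are written with a trailing *args so they work whether or not the extra (x, y) arguments are forwarded to them.

import scipy
from scipy.optimize import fmin_cobyla

def f(beta, x):
    # model evaluated at the points x; here x has shape (2, n)
    return beta[0] + beta[1]*x[0] + beta[2]*x[1]

def misfit(beta, x, y):
    diff = y - f(beta, x)
    return scipy.sum(diff*diff)

x = scipy.array([[0., 1., 2., 3., 4.],
                 [1., 0., 1., 0., 1.]])
y = 1.0 + 2.0*x[0] + 0.5*x[1]

# example bounds 0 <= b1 <= 10, expressed as functions that must remain >= 0
constraints = [lambda beta, *args: beta[1],
               lambda beta, *args: 10.0 - beta[1]]

beta0 = [0.0, 1.0, 0.0]
beta_opt = fmin_cobyla(misfit, beta0, constraints, args=(x, y))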
Steve Walton From ryanlists at gmail.com Tue Nov 8 15:34:06 2005 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 8 Nov 2005 15:34:06 -0500 Subject: [SciPy-user] transcendental root finding Message-ID: Does anyone out there have a robust algorithm for finding all of the roots of a transcendental equation within a certain range of the independent variable. I wrote one myself that takes a vector of guesses that are used in optimize.newton. I am trying to use this algoritm as part of a root locus finding tool. Each initial guess would represent a branch of the locus and I was hoping to stay on the branch as a gain is slowly increased. For at least one of my branches this isn't going very well and newton converges to another nearby solution on some occassions. Any thoughts? Ryan From nwagner at mecha.uni-stuttgart.de Tue Nov 8 15:45:50 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 08 Nov 2005 21:45:50 +0100 Subject: [SciPy-user] transcendental root finding In-Reply-To: References: Message-ID: On Tue, 8 Nov 2005 15:34:06 -0500 Ryan Krauss wrote: > Does anyone out there have a robust algorithm for >finding all of the > roots of a transcendental equation within a certain >range of the > independent variable. I wrote one myself that takes a >vector of > guesses that are used in optimize.newton. I am trying >to use this > algoritm as part of a root locus finding tool. Each >initial guess > would represent a branch of the locus and I was hoping >to stay on the > branch as a gain is slowly increased. For at least one >of my branches > this isn't going very well and newton converges to >another nearby > solution on some occassions. > > Any thoughts? > Have you tried a homotopy approach H(x,t) = (1-t) g(x) + t f(x) = 0 t \in [0,1] f(x) is your transcendental equation g(x) is a simpler function with known zeros. You start with t=0 and increase t until t=1. For t=1 you will hopefully find a solution of f(x)=0. Nils > Ryan > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user From ryanlists at gmail.com Tue Nov 8 15:58:35 2005 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 8 Nov 2005 15:58:35 -0500 Subject: [SciPy-user] transcendental root finding In-Reply-To: References: Message-ID: I have not tried that. I will think about that one. It sounds interesting. Ryan On 11/8/05, Nils Wagner wrote: > On Tue, 8 Nov 2005 15:34:06 -0500 > Ryan Krauss wrote: > > Does anyone out there have a robust algorithm for > >finding all of the > > roots of a transcendental equation within a certain > >range of the > > independent variable. I wrote one myself that takes a > >vector of > > guesses that are used in optimize.newton. I am trying > >to use this > > algoritm as part of a root locus finding tool. Each > >initial guess > > would represent a branch of the locus and I was hoping > >to stay on the > > branch as a gain is slowly increased. For at least one > >of my branches > > this isn't going very well and newton converges to > >another nearby > > solution on some occassions. > > > > Any thoughts? > > > Have you tried a homotopy approach > > H(x,t) = (1-t) g(x) + t f(x) = 0 > t \in [0,1] > > f(x) is your transcendental equation > g(x) is a simpler function with known zeros. > You start with t=0 and increase t until t=1. > > For t=1 you will hopefully find a solution of f(x)=0. 
> > Nils > > > Ryan > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.net > > http://www.scipy.net/mailman/listinfo/scipy-user > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From rkern at ucsd.edu Tue Nov 8 19:11:01 2005 From: rkern at ucsd.edu (Robert Kern) Date: Tue, 08 Nov 2005 16:11:01 -0800 Subject: [SciPy-user] random number sampling performance in newcore In-Reply-To: <723eb6930511080820x3f78d336uc33bd78982420986@mail.gmail.com> References: <723eb6930511080820x3f78d336uc33bd78982420986@mail.gmail.com> Message-ID: <43713E95.4030804@ucsd.edu> Chris Fonnesbeck wrote: > I was surprised to find that the random number sampler in newcore is > significantly slower than RandomArray in Numeric (at least for > binomial sampling). A quick comparison showed the > scipy.basic.random.binomial sampler to be over 8 times slower than > RandomArray.binomial. I was surprised, since the newcore stuff is > Pyrex-based (isnt it?). It's a Pyrex wrapper to pure C code. See distributions.c for the actual implementations of the distribution algorithms. > Am I the only one observing such differences, > or am I using the wrong method? If not, is the performance expected to > improve significantly. Can you show us the code you're using? Particularly, the parameters p and n. I use two different algorithms for the binomial distribution, a waiting time algorithm when p*n <= 30 (or (1-p)*n <= 30 if p > 0.5) and the BTPE algorithm (the same algorithm that's in RANLIB, but reimplemented) when p*n > 30 (or (1-p)*n ...). In [10]: tra = Timer('x=RA.binomial(n,p,size)', 'import RandomArray as RA; n=200; p=0.25; size=10000') In [11]: tra.repeat(3,100) Out[11]: [1.5208730697631836, 1.5047860145568848, 1.5198800563812256] In [12]: tra = Timer('x=RA.binomial(n,p,size)', 'import RandomArray as RA; n=20; p=0.25; size=10000') In [13]: tra.repeat(3,100) Out[13]: [0.89270305633544922, 0.90227413177490234, 1.183967113494873] In [14]: tmt = Timer('x=random.binomial(n,p,size)', 'from scipy import random; n=200; p=0.25; size=10000') In [15]: tmt.repeat(3,100) Out[15]: [1.922713041305542, 2.0582780838012695, 1.906635046005249] In [16]: tmt = Timer('x=random.binomial(n,p,size)', 'from scipy import random; n=20; p=0.25; size=10000') In [17]: tmt.repeat(3,100) Out[17]: [3.1456100940704346, 2.9999620914459229, 2.9954609870910645] It looks like my implementation of BTPE is 30% slower than RANLIB and the waiting time algorithm is 3 times slower than RANLIB's BTPE on my machine. Compare to normal(): In [21]: tra = Timer('x=RA.normal(0.0, 1.0, size)', 'import RandomArray as RA; size=10000') In [22]: tra.repeat(3,100) Out[22]: [0.70471906661987305, 0.76926708221435547, 0.7034919261932373] In [23]: tmt = Timer('x=random.normal(size=size)', 'from scipy import random; size=10000') In [24]: tmt.repeat(3,100) Out[24]: [0.56431293487548828, 0.52272319793701172, 0.79108381271362305] So the evidence suggests that the slowness is constrained to individual distributions, one of them being binomial. I'll futz with it. -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From pebarrett at gmail.com Wed Nov 9 09:07:04 2005 From: pebarrett at gmail.com (Paul Barrett) Date: Wed, 9 Nov 2005 09:07:04 -0500 Subject: [SciPy-user] One step closer to a scipy matlab/mathematica/mathcad replacement? 
In-Reply-To: <20051107172834.275b9659.simon@arrowtheory.com> References: <20051107172834.275b9659.simon@arrowtheory.com> Message-ID: <40e64fa20511090607o4fd613fcx59898fcb84560362@mail.gmail.com> On 11/7/05, Simon Burton wrote: > > On Sat, 5 Nov 2005 23:58:48 -0500 > Brendan Simons wrote: > > > Just came across this project to add symbolic processing to python, > > for anyone interested: > > http://swiginac.berlios.de/ > > downloading now... > > > One day I hope to have the skills to put Ginac, Scipy, iPython, and > > matplotlib together into a viable matlab replacement. > > err, matlab does not do symbolic computation.. ? > What about Pythonica for symbolic computation? It still needs work, but its been around for 6 or 7 years now. The URL is here: http://www.tildesoft.com/Pythonica.html -- Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From fonnesbeck at gmail.com Wed Nov 9 09:11:28 2005 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Wed, 9 Nov 2005 09:11:28 -0500 Subject: [SciPy-user] random number sampling performance in newcore In-Reply-To: <43713E95.4030804@ucsd.edu> References: <723eb6930511080820x3f78d336uc33bd78982420986@mail.gmail.com> <43713E95.4030804@ucsd.edu> Message-ID: <723eb6930511090611i59909364o8528453afc78b734@mail.gmail.com> On 11/8/05, Robert Kern wrote: > Chris Fonnesbeck wrote: > > I was surprised to find that the random number sampler in newcore is > > significantly slower than RandomArray in Numeric (at least for > > binomial sampling). A quick comparison showed the > > scipy.basic.random.binomial sampler to be over 8 times slower than > > RandomArray.binomial. I was surprised, since the newcore stuff is > > Pyrex-based (isnt it?). > > It's a Pyrex wrapper to pure C code. See distributions.c for the actual > implementations of the distribution algorithms. > > > Am I the only one observing such differences, > > or am I using the wrong method? If not, is the performance expected to > > improve significantly. > > Can you show us the code you're using? Particularly, the parameters p > and n. I use two different algorithms for the binomial distribution, a > waiting time algorithm when p*n <= 30 (or (1-p)*n <= 30 if p > 0.5) and > the BTPE algorithm (the same algorithm that's in RANLIB, but > reimplemented) when p*n > 30 (or (1-p)*n ...). > My tests were pretty simple and informal -- just 100K identically distributed samples: In [1]: from scipy import random as rnums In [2]: from time import time In [3]: import RandomArray as ra In [4]: start=time();foo=ra.binomial(100,0.3,100000);print time()-start 0.21244597435 In [5]: start=time();foo=rnums.binomial(100,0.3,100000);print time()-start 1.73217105865 Its also faster in RandomArray for gamma, but not as dramatic: In [7]: start=time();foo=rnums.gamma(3,5,1000000);print time()-start 0.938844919205 In [8]: start=time();foo=ra.gamma(3,5,1000000);print time()-start 0.690024852753 -- Chris Fonnesbeck Atlanta, GA From ryanlists at gmail.com Wed Nov 9 09:52:54 2005 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 9 Nov 2005 09:52:54 -0500 Subject: [SciPy-user] transcendental root finding In-Reply-To: References: Message-ID: Any other ideas? Ryan On 11/8/05, Ryan Krauss wrote: > I have not tried that. I will think about that one. It sounds interesting. 
> > Ryan > > On 11/8/05, Nils Wagner wrote: > > On Tue, 8 Nov 2005 15:34:06 -0500 > > Ryan Krauss wrote: > > > Does anyone out there have a robust algorithm for > > >finding all of the > > > roots of a transcendental equation within a certain > > >range of the > > > independent variable. I wrote one myself that takes a > > >vector of > > > guesses that are used in optimize.newton. I am trying > > >to use this > > > algoritm as part of a root locus finding tool. Each > > >initial guess > > > would represent a branch of the locus and I was hoping > > >to stay on the > > > branch as a gain is slowly increased. For at least one > > >of my branches > > > this isn't going very well and newton converges to > > >another nearby > > > solution on some occassions. > > > > > > Any thoughts? > > > > > Have you tried a homotopy approach > > > > H(x,t) = (1-t) g(x) + t f(x) = 0 > > t \in [0,1] > > > > f(x) is your transcendental equation > > g(x) is a simpler function with known zeros. > > You start with t=0 and increase t until t=1. > > > > For t=1 you will hopefully find a solution of f(x)=0. > > > > Nils > > > > > Ryan > > > > > > _______________________________________________ > > > SciPy-user mailing list > > > SciPy-user at scipy.net > > > http://www.scipy.net/mailman/listinfo/scipy-user > > > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.net > > http://www.scipy.net/mailman/listinfo/scipy-user > > > From paul.cristini at univ-pau.fr Wed Nov 9 10:29:45 2005 From: paul.cristini at univ-pau.fr (Paul Cristini) Date: Wed, 09 Nov 2005 16:29:45 +0100 Subject: [SciPy-user] transcendental root finding In-Reply-To: References: Message-ID: <20051109162945.5oda3o2u0ckocs4w@www.univ-pau.fr> There is also a method based on winding number integrals Classical algorithms for finding roots are not that efficient because you often need to have good initial guesses. The best way is to use Davies algorithm. You need to define a contour in the complex plane and then you can find in a first step the numer of roots which are inside your contour and then the roots. B. Davies 1986, Locating the zeros of an analytic function, J. Comput. Phys. 66 36-49. I experienced myself this algorithm and it is really great. In that way you are really sure that you are not missing any root. The only problem is that you have to go the complex plane and take care of possible branch cuts. Paul Quoting Ryan Krauss : > Any other ideas? > > Ryan > > On 11/8/05, Ryan Krauss wrote: >> I have not tried that. I will think about that one. It sounds interesting. >> >> Ryan >> >> On 11/8/05, Nils Wagner wrote: >> > On Tue, 8 Nov 2005 15:34:06 -0500 >> > Ryan Krauss wrote: >> > > Does anyone out there have a robust algorithm for >> > >finding all of the >> > > roots of a transcendental equation within a certain >> > >range of the >> > > independent variable. I wrote one myself that takes a >> > >vector of >> > > guesses that are used in optimize.newton. I am trying >> > >to use this >> > > algoritm as part of a root locus finding tool. Each >> > >initial guess >> > > would represent a branch of the locus and I was hoping >> > >to stay on the >> > > branch as a gain is slowly increased. For at least one >> > >of my branches >> > > this isn't going very well and newton converges to >> > >another nearby >> > > solution on some occassions. >> > > >> > > Any thoughts? 
>> > > >> > Have you tried a homotopy approach >> > >> > H(x,t) = (1-t) g(x) + t f(x) = 0 >> > t \in [0,1] >> > >> > f(x) is your transcendental equation >> > g(x) is a simpler function with known zeros. >> > You start with t=0 and increase t until t=1. >> > >> > For t=1 you will hopefully find a solution of f(x)=0. >> > >> > Nils >> > >> > > Ryan >> > > >> > > _______________________________________________ >> > > SciPy-user mailing list >> > > SciPy-user at scipy.net >> > > http://www.scipy.net/mailman/listinfo/scipy-user >> > >> > >> > >> > _______________________________________________ >> > SciPy-user mailing list >> > SciPy-user at scipy.net >> > http://www.scipy.net/mailman/listinfo/scipy-user >> > >> > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From nwagner at mecha.uni-stuttgart.de Wed Nov 9 10:42:39 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 09 Nov 2005 16:42:39 +0100 Subject: [SciPy-user] transcendental root finding In-Reply-To: <20051109162945.5oda3o2u0ckocs4w@www.univ-pau.fr> References: <20051109162945.5oda3o2u0ckocs4w@www.univ-pau.fr> Message-ID: <437218EF.9060003@mecha.uni-stuttgart.de> Paul Cristini wrote: >There is also a method based on winding number integrals >Classical algorithms for finding roots are not that efficient because you >often need to have good initial guesses. The best way is to use Davies >algorithm. You need to define a contour in the complex plane and then you >can find in a first step the numer of roots which are inside your contour and >then the roots. >B. Davies 1986, Locating the zeros of an analytic function, J. Comput. Phys. >66 36-49. >I experienced myself this algorithm and it is really great. In that way you >are really sure that you are not missing any root. >The only problem is that you have to go the complex plane and take care of >possible branch cuts. > >Paul > >Quoting Ryan Krauss : > > >>Any other ideas? >> >>Ryan >> >>On 11/8/05, Ryan Krauss wrote: >> >>>I have not tried that. I will think about that one. It sounds interesting. >>> >>>Ryan >>> >>>On 11/8/05, Nils Wagner wrote: >>> >>>>On Tue, 8 Nov 2005 15:34:06 -0500 >>>> Ryan Krauss wrote: >>>> >>>>>Does anyone out there have a robust algorithm for >>>>>finding all of the >>>>>roots of a transcendental equation within a certain >>>>>range of the >>>>>independent variable. I wrote one myself that takes a >>>>>vector of >>>>>guesses that are used in optimize.newton. I am trying >>>>>to use this >>>>>algoritm as part of a root locus finding tool. Each >>>>>initial guess >>>>>would represent a branch of the locus and I was hoping >>>>>to stay on the >>>>>branch as a gain is slowly increased. For at least one >>>>>of my branches >>>>>this isn't going very well and newton converges to >>>>>another nearby >>>>>solution on some occassions. >>>>> >>>>>Any thoughts? >>>>> >>>>> >>>>Have you tried a homotopy approach >>>> >>>>H(x,t) = (1-t) g(x) + t f(x) = 0 >>>>t \in [0,1] >>>> >>>>f(x) is your transcendental equation >>>>g(x) is a simpler function with known zeros. >>>>You start with t=0 and increase t until t=1. >>>> >>>>For t=1 you will hopefully find a solution of f(x)=0. 
>>>> >>>>Nils >>>> >>>> >>>>>Ryan >>>>> >>>>>_______________________________________________ >>>>>SciPy-user mailing list >>>>>SciPy-user at scipy.net >>>>>http://www.scipy.net/mailman/listinfo/scipy-user >>>>> >>>> >>>>_______________________________________________ >>>>SciPy-user mailing list >>>>SciPy-user at scipy.net >>>>http://www.scipy.net/mailman/listinfo/scipy-user >>>> >>>> >>_______________________________________________ >>SciPy-user mailing list >>SciPy-user at scipy.net >>http://www.scipy.net/mailman/listinfo/scipy-user >> >> > > > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > Here are more references Kravanja, Sakurai, Van Barel On locating clusters of zeros of analytic functions BIT 1999 Vol. 39 No. 4 646--682 Kravanja, Van Barel A derivative-free algorithm for computing zeros of analytic functions Computing 1999 Vol. 63 69--91 Nils From fonnesbeck at gmail.com Wed Nov 9 11:23:59 2005 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Wed, 9 Nov 2005 11:23:59 -0500 Subject: [SciPy-user] scipy.f2py and Numeric arrays Message-ID: <723eb6930511090823v7ef83aafobdcaead2a84adaeb@mail.gmail.com> It appears that passing Numeric arrays to extensions built with scipy.f2py do not work. (Pdb) _poisson(x,mu) <==== These arguments are Numeric arrays Out[2]: nan (Pdb) _poisson(scipy.array(x),scipy.array(mu)) Out[2]: -146.43360900878906 (Pdb) _poisson(x.tolist(),mu.tolist()) Out[2]: -146.43360900878906 Is this intended behaviour? I still need to be able to use Numeric arrays until scipy_core becomes more stable. -- Chris Fonnesbeck Atlanta, GA From oliphant at ee.byu.edu Wed Nov 9 14:15:02 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 09 Nov 2005 12:15:02 -0700 Subject: [SciPy-user] scipy.f2py and Numeric arrays In-Reply-To: <723eb6930511090823v7ef83aafobdcaead2a84adaeb@mail.gmail.com> References: <723eb6930511090823v7ef83aafobdcaead2a84adaeb@mail.gmail.com> Message-ID: <43724AB6.1080209@ee.byu.edu> Chris Fonnesbeck wrote: >It appears that passing Numeric arrays to extensions built with >scipy.f2py do not work. > >(Pdb) _poisson(x,mu) <==== These arguments are Numeric arrays >Out[2]: nan >(Pdb) _poisson(scipy.array(x),scipy.array(mu)) >Out[2]: -146.43360900878906 >(Pdb) _poisson(x.tolist(),mu.tolist()) >Out[2]: -146.43360900878906 > >Is this intended behaviour? I still need to be able to use Numeric >arrays until scipy_core becomes more stable. > > > Yes, scipy.f2py does not support Numeric anymore, directly. However, a Numeric array should still be converted properly to a scipy array under the covers. So, this looks like a bug. -Travis From giovanni.samaey at cs.kuleuven.ac.be Wed Nov 9 16:24:21 2005 From: giovanni.samaey at cs.kuleuven.ac.be (Giovanni Samaey) Date: Wed, 09 Nov 2005 22:24:21 +0100 Subject: [SciPy-user] scipy.ndarray as Numeric array? Message-ID: <43726905.5050604@cs.kuleuven.ac.be> Hi all, is it possible in some way to pass a scipy.ndarray as a Numeric array to other functions? I am using scipy together with mpipython and the mpi communication protocols there assume that what is being communicated is an array. How would I be able to perform a cast? (An unrelated question -- will there be integration of parallel programming tools in scipy in some future -- near or far? ) Best, and thanks beforehand. 
Giovanni Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From strawman at astraw.com Wed Nov 9 16:34:14 2005 From: strawman at astraw.com (Andrew Straw) Date: Wed, 09 Nov 2005 13:34:14 -0800 Subject: [SciPy-user] ANN: seppo - simple embarrassingly parallel python Message-ID: <43726B56.8040800@astraw.com> I'd like to announce "seppo" - simple embarrassingly parallel python. This should be considered a very alpha version, and was released to the public to gauge interest/reaction. Overview ======== The map function is well-known in Python, allowing a single function to be called on each member of an iterable sequence: map( function, [1,2,3,4] ) The seppo module allows the same functionality, but distributed over several processes: seppo.map_parallel( some_module.function, [1,2,3,4] ) In this case, each iteration may evaluate the function in a different process, possibly in a different computer. The idea is a simple concept and is hopefully natural transition for Python programmers to use the power of multi-processor computers and clusters. For more information, or to download ==================================== Please see: http://www.its.caltech.edu/~astraw/seppo.html From oliphant at ee.byu.edu Wed Nov 9 16:47:52 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 09 Nov 2005 14:47:52 -0700 Subject: [SciPy-user] scipy.ndarray as Numeric array? In-Reply-To: <43726905.5050604@cs.kuleuven.ac.be> References: <43726905.5050604@cs.kuleuven.ac.be> Message-ID: <43726E88.3080902@ee.byu.edu> Giovanni Samaey wrote: >Hi all, > >is it possible in some way to pass a scipy.ndarray as a Numeric array to >other functions? >I am using scipy together with mpipython and the mpi communication >protocols there >assume that what is being communicated is an array. >How would I be able to perform a cast? > > Yes, if you use Numeric 24 or greater (Numeric will construct a Numeric array from the scipy array using the array interface). The CVS version of Numeric is the best code so far. There will be a Numeric 24.2 in a couple of days to fix some 64-bit bugs in Numeric 24.1 >(An unrelated question -- will there be integration of parallel >programming tools in scipy in some >future -- near or far? ) > > Definitely in full scipy. It is definitely a topic of interest. I saw some good talks at SciPy 2005 on it. There were some tools like cow in scipy before the switch. The cow subpackage has been moved to the sandbox area because it has not received attention for a while. There are some newer tools emerging to replace cow. I'd like to see them in scipy before long. They are avaialble as separate packages already. I don't recall the names immediately. Go to the SciPy 2005 presentations and you will see a few talks of relevance. -Travis From oliphant at ee.byu.edu Wed Nov 9 20:27:05 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 09 Nov 2005 18:27:05 -0700 Subject: [SciPy-user] scipy.f2py and Numeric arrays In-Reply-To: <723eb6930511090823v7ef83aafobdcaead2a84adaeb@mail.gmail.com> References: <723eb6930511090823v7ef83aafobdcaead2a84adaeb@mail.gmail.com> Message-ID: <4372A1E9.4000002@ee.byu.edu> Chris Fonnesbeck wrote: >It appears that passing Numeric arrays to extensions built with >scipy.f2py do not work. > >(Pdb) _poisson(x,mu) <==== These arguments are Numeric arrays >Out[2]: nan >(Pdb) _poisson(scipy.array(x),scipy.array(mu)) >Out[2]: -146.43360900878906 >(Pdb) _poisson(x.tolist(),mu.tolist()) >Out[2]: -146.43360900878906 > >Is this intended behaviour? 
I still need to be able to use Numeric >arrays until scipy_core becomes more stable. > > This was a bug in the new array_struct array interface that was just recently added. It has been fixed now in SciPy SVN. -Travis From ckkart at hoc.net Thu Nov 10 05:06:44 2005 From: ckkart at hoc.net (Christian Kristukat) Date: Thu, 10 Nov 2005 11:06:44 +0100 Subject: [SciPy-user] indexing arrays with 'long int' doesn't work Message-ID: <43731BB4.60908@hoc.net> Hi, with old scipy (current svn checkout) I encountered problems when using a long int array (python array module) to index a scipy array. 'take' works but looping does not: In [1]:import scipy_base In [2]:import array as ar In [3]:a = ar.array('L') In [4]:a.fromlist([0,1,2]) In [5]:b = array([4,6,3,4,]) In [6]:take(b,a) Out[6]:array([4, 6, 3]) In [8]:for i in a: ...: print b[i] ...: --------------------------------------------------------------------------- exceptions.IndexError Traceback (most recent call last) /home/ck/ IndexError: index must be either an int or a sequence Christian From giovanni.samaey at cs.kuleuven.ac.be Thu Nov 10 09:29:04 2005 From: giovanni.samaey at cs.kuleuven.ac.be (Giovanni Samaey) Date: Thu, 10 Nov 2005 15:29:04 +0100 Subject: [SciPy-user] scipy.ndarray as Numeric array? In-Reply-To: <43726905.5050604@cs.kuleuven.ac.be> References: <43726905.5050604@cs.kuleuven.ac.be> Message-ID: <43735930.20406@cs.kuleuven.ac.be> Giovanni Samaey wrote: Thanks for the help. I have constructed a minimal example for people who want to do the same as I did -- I made an error initially and solved it. import scipy import Numeric import Scientific.MPI as MPI mpi = MPI.world.duplicate() glob = Numeric.zeros((mpi.size,1),Numeric.Int) local = scipy.ones((1,),scipy.Int) mpi.share(Numeric.array(local),glob) print glob When I run this, I get (on every processor ;-) ): File "test.py", line 10, in ? mpi.share(Numeric.array(local),glob) Scientific.MPI.core.MPIError: send and receive arrays are not compatible This is avoided by putting: mpi.share(Numeric.array(local,*Numeric.Int*),glob) > > Hi all, > > is it possible in some way to pass a scipy.ndarray as a Numeric array to > other functions? > I am using scipy together with mpipython and the mpi communication > protocols there > assume that what is being communicated is an array. > How would I be able to perform a cast? > > (An unrelated question -- will there be integration of parallel > programming tools in scipy in some > future -- near or far? ) > > Best, and thanks beforehand. > > Giovanni Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From fonnesbeck at gmail.com Thu Nov 10 14:35:53 2005 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Thu, 10 Nov 2005 14:35:53 -0500 Subject: [SciPy-user] scipy.f2py and Numeric arrays In-Reply-To: <4372A1E9.4000002@ee.byu.edu> References: <723eb6930511090823v7ef83aafobdcaead2a84adaeb@mail.gmail.com> <4372A1E9.4000002@ee.byu.edu> Message-ID: <723eb6930511101135o51ffb380xdafea9949e0a0828@mail.gmail.com> On 11/9/05, Travis Oliphant wrote: > Chris Fonnesbeck wrote: > > >It appears that passing Numeric arrays to extensions built with > >scipy.f2py do not work. > > > >(Pdb) _poisson(x,mu) <==== These arguments are Numeric arrays > >Out[2]: nan > >(Pdb) _poisson(scipy.array(x),scipy.array(mu)) > >Out[2]: -146.43360900878906 > >(Pdb) _poisson(x.tolist(),mu.tolist()) > >Out[2]: -146.43360900878906 > > > >Is this intended behaviour? I still need to be able to use Numeric > >arrays until scipy_core becomes more stable. 
> > > > > This was a bug in the new array_struct array interface that was just > recently added. It has been fixed now in SciPy SVN. > Just updated from svn. When I try and pass Numeric arrays I get a type error: TypeError: array cannot be safely cast to required type Thanks, C. -- Chris Fonnesbeck Atlanta, GA From dd55 at cornell.edu Thu Nov 10 14:36:57 2005 From: dd55 at cornell.edu (Darren Dale) Date: Thu, 10 Nov 2005 14:36:57 -0500 Subject: [SciPy-user] efficiently importing ascii data Message-ID: <200511101436.58074.dd55@cornell.edu> I'm reading arrays of data from an ascii file and converting to appropriate numerical types. The data files can get pretty big, I was wondering if someone here might have a suggestion on how to speed things up. The following illustrates two bottlenecks: the list comprehension step and the conversion of the resulting list to an array: from time import clock from scipy import array # simulate some data: s='1e-7,' ascii_data=(s*1000000)[:-1] # convert it to an array: t0=clock() temp=[float(i) for i in ascii_data.split(',')] dt=clock()-t0 print dt data=array(temp) print clock()-(t0+dt) On my system, the lc takes 1.8s, while creating the array takes 2.9s. Could anyone suggest how I might speed things up? I considered using map(), it is about 25% faster than the list comprehension, but I've read that map will go away in python3000. Thanks, Darren P.S. I should mention that my data files are somewhat complex, so I cant use python's csv module, or scipy's load. Here's a small example to show the complexity of the formatting: #S 1 growthtime MCS 0.01 3000 MCA 0 5000 #L Seconds monitor Bicron I0 I1 I3 temperature pressure vfc_mon Ti Mn Epoch #@MCA %16C 30 4.56151e+06 44184 5.15098e+06 6.97912e+06 34466 22737.3 1.68483e+07 29984 2529 882 1079258492.6 #C MCS pass 0 @AMCS 18 13 20 18 19 20 16 13 15 14 13 16 15 13 20 7\ 8 14 14 19 12 7 17 16 13 23 21 12 17 13 11 12\ 19 15 17 13 12 14 15 21 11 12 16 11 17 13 18 20\ [This continues on at great length...] #C MCA data @AMCA 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\ [so does this] 30 4.56151e+06 44184 5.15098e+06 6.97912e+06 34466 22737.3 1.68483e+07 29984 2529 882 1079258492.6 #C MCS pass 1 @AMCS 18 13 20 18 19 20 16 13 15 14 13 16 15 13 20 7\ 8 14 14 19 12 7 17 16 13 23 21 12 17 13 11 12\ 19 15 17 13 12 14 15 21 11 12 16 11 17 13 18 20\ [...] #C MCA data @AMCA 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\ [...] [etc.] From oliphant at ee.byu.edu Thu Nov 10 15:02:10 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 10 Nov 2005 13:02:10 -0700 Subject: [SciPy-user] scipy.f2py and Numeric arrays In-Reply-To: <723eb6930511101135o51ffb380xdafea9949e0a0828@mail.gmail.com> References: <723eb6930511090823v7ef83aafobdcaead2a84adaeb@mail.gmail.com> <4372A1E9.4000002@ee.byu.edu> <723eb6930511101135o51ffb380xdafea9949e0a0828@mail.gmail.com> Message-ID: <4373A742.6060106@ee.byu.edu> Chris Fonnesbeck wrote: >On 11/9/05, Travis Oliphant wrote: > > >>Chris Fonnesbeck wrote: >> >> >> >>>It appears that passing Numeric arrays to extensions built with >>>scipy.f2py do not work. >>> >>>(Pdb) _poisson(x,mu) <==== These arguments are Numeric arrays >>>Out[2]: nan >>>(Pdb) _poisson(scipy.array(x),scipy.array(mu)) >>>Out[2]: -146.43360900878906 >>>(Pdb) _poisson(x.tolist(),mu.tolist()) >>>Out[2]: -146.43360900878906 >>> >>>Is this intended behaviour? I still need to be able to use Numeric >>>arrays until scipy_core becomes more stable. 
>>> >>> >>> >>> >>This was a bug in the new array_struct array interface that was just >>recently added. It has been fixed now in SciPy SVN. >> >> >> > >Just updated from svn. When I try and pass Numeric arrays I get a type error: > >TypeError: array cannot be safely cast to required type > > > And you didn't get this with Numeric? That would be surprising, unless f2py used to FORCE CAST the results --- I can look into that. It looks like your generated Fortran code is expecting float32. If you try and pass in float64 you get the type error as the cast tries to get made. The behavior of f2py should not have changed, but looking at the old code seems to indicate that you would have received this error previously as well. -Travis From fonnesbeck at gmail.com Thu Nov 10 15:05:29 2005 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Thu, 10 Nov 2005 15:05:29 -0500 Subject: [SciPy-user] scipy.f2py and Numeric arrays In-Reply-To: <4373A742.6060106@ee.byu.edu> References: <723eb6930511090823v7ef83aafobdcaead2a84adaeb@mail.gmail.com> <4372A1E9.4000002@ee.byu.edu> <723eb6930511101135o51ffb380xdafea9949e0a0828@mail.gmail.com> <4373A742.6060106@ee.byu.edu> Message-ID: <723eb6930511101205g26bf971fg3f3e657bc4e4708b@mail.gmail.com> On 11/10/05, Travis Oliphant wrote: > Chris Fonnesbeck wrote: > > >On 11/9/05, Travis Oliphant wrote: > > > > > >>Chris Fonnesbeck wrote: > >> > >> > >> > >>>It appears that passing Numeric arrays to extensions built with > >>>scipy.f2py do not work. > >>> > >>>(Pdb) _poisson(x,mu) <==== These arguments are Numeric arrays > >>>Out[2]: nan > >>>(Pdb) _poisson(scipy.array(x),scipy.array(mu)) > >>>Out[2]: -146.43360900878906 > >>>(Pdb) _poisson(x.tolist(),mu.tolist()) > >>>Out[2]: -146.43360900878906 > >>> > >>>Is this intended behaviour? I still need to be able to use Numeric > >>>arrays until scipy_core becomes more stable. > >>> > >>> > >>> > >>> > >>This was a bug in the new array_struct array interface that was just > >>recently added. It has been fixed now in SciPy SVN. > >> > >> > >> > > > >Just updated from svn. When I try and pass Numeric arrays I get a type error: > > > >TypeError: array cannot be safely cast to required type > > > > > > > And you didn't get this with Numeric? That would be surprising, > unless f2py used to FORCE CAST the results --- I can look into that. > > It looks like your generated Fortran code is expecting float32. If you > try and pass in float64 you get the type error as the cast tries to get > made. > > The behavior of f2py should not have changed, but looking at the old > code seems to indicate that you would have received this error > previously as well. > This absolutely worked before. PyMC was originally based around Numeric, and f2py statistical likelihood functions received Numeric array arguments regularly. 
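For what it is worth: if the generated wrapper really does declare its arguments as real*4, one interim way around the "cannot be safely cast" check is to downcast explicitly before the call, so that no unsafe cast is ever requested. Sketch only, reusing x, mu and _poisson from the messages above; whether single precision is acceptable for the likelihood is a separate question.

import Numeric

x32 = x.astype(Numeric.Float32)
mu32 = mu.astype(Numeric.Float32)
print _poisson(x32, mu32)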
-- Chris Fonnesbeck Atlanta, GA From jens_brandenburg at gmx.net Thu Nov 10 15:14:46 2005 From: jens_brandenburg at gmx.net (Jens Brandenburg) Date: Thu, 10 Nov 2005 21:14:46 +0100 Subject: [SciPy-user] building scipy Message-ID: <4373AA36.2070101@gmx.net> hello again, after successful building and installing (thank's a lot Ken) SciPy on SUSE 10 OSS (gcc-4.0.2*) a simple ">>>t=scipy.test()" gives me the following: ERROR: check_sh_legendre (scipy.special.basic.test_basic.test_sh_legendre) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.3/site-packages/scipy/special/tests/test_basic.py", line 1806, in check_sh_legendre Ps1 = sh_legendre(1) File "/usr/local/lib/python2.3/site-packages/scipy/special/orthogonal.py", line 593, in sh_legendre x,w,mu0 = ps_roots(n,mu=1) File "/usr/local/lib/python2.3/site-packages/scipy/special/orthogonal.py", line 584, in ps_roots return js_roots(n,1.0,1.0,mu=mu) File "/usr/local/lib/python2.3/site-packages/scipy/special/orthogonal.py", line 205, in js_roots val = gen_roots_and_weights(n,an_Js,sbn_Js,mu0) File "/usr/local/lib/python2.3/site-packages/scipy/special/orthogonal.py", line 121, in gen_roots_and_weights eig = get_eig_func() File "/usr/local/lib/python2.3/site-packages/scipy/special/orthogonal.py", line 91, in get_eig_func eig = scipy.linalg.eig AttributeError: 'module' object has no attribute 'eig' ---------------------------------------------------------------------- Ran 747 tests in 1.391s FAILED (errors=15) Looks like something's wrong with orthogonal.py... but I'm not too sure about this AttributeError. 'eig' is a function, isn't it? Any suggestions? Thank's so far Jens From fonnesbeck at gmail.com Thu Nov 10 15:29:07 2005 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Thu, 10 Nov 2005 15:29:07 -0500 Subject: [SciPy-user] strange ValueError in scipy_core Message-ID: <723eb6930511101229y74ea39cfjb1ab1973989816d2@mail.gmail.com> I am converting some code over to scipy_core which involves populating a relatively sparse matrix, so I am generating an array of zeros, and substituting the non-zero elements. However, this substitution does not seem to be allowed. Here is the relevant section of code. # Initialize projection matrix matrix = zeros((10,10),typecode='d') # Populate with non-zero elements matrix[0,4] = 0.5*s1 matrix[1,0] = s2 matrix[2,1] = s3 matrix[3,2] = s4*(1.-g4) matrix[3,3] = sP*(1.-gP) matrix[4,2] = s4*g4 matrix[4,3] = sP*gP matrix[4,5] = sA*gB matrix[5,4] = sA matrix[5,5] = sA*(1.-gB) matrix[6,4] = 0.5*s1 matrix[7,6] = s2 matrix[8,7] = s3 matrix[9,8] = s4 matrix[9,9] = sA But I get the following: --> 777 matrix[0,4] = 0.5*s1 matrix = [[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,] [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,] [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,] [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,] [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,] [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,] [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,] [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,] [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,] [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,]] s1 = 0.78583498304255861 778 matrix[1,0] = s2 779 matrix[2,1] = s3 ValueError: cannot handle misaligned or not writeable arrays. 
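A stripped-down version that appears to trigger the same error (assuming, as the follow-ups below suggest, that the projection matrix is a Numeric array while the s* and g* values come out of scipy-core arrays as array scalars) would be:

import Numeric
from scipy import array

matrix = Numeric.zeros((10, 10), 'd')    # Numeric target array
s1 = array([0.78583498304255861])[0]     # scipy-core float64 array scalar
matrix[0, 4] = 0.5*s1                    # raises the ValueError described above

The element value is just a placeholder taken from the traceback.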
-- Chris Fonnesbeck Atlanta, GA From oliphant at ee.byu.edu Thu Nov 10 17:41:22 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 10 Nov 2005 15:41:22 -0700 Subject: [SciPy-user] strange ValueError in scipy_core In-Reply-To: <723eb6930511101229y74ea39cfjb1ab1973989816d2@mail.gmail.com> References: <723eb6930511101229y74ea39cfjb1ab1973989816d2@mail.gmail.com> Message-ID: <4373CC92.2030908@ee.byu.edu> Chris Fonnesbeck wrote: >I am converting some code over to scipy_core which involves populating >a relatively sparse matrix, so I am generating an array of zeros, and >substituting the non-zero elements. However, this substitution does >not seem to be allowed. Here is the relevant section of code. > > > # Initialize projection matrix > matrix = zeros((10,10),typecode='d') > > # Populate with non-zero elements > matrix[0,4] = 0.5*s1 > matrix[1,0] = s2 > matrix[2,1] = s3 > matrix[3,2] = s4*(1.-g4) > matrix[3,3] = sP*(1.-gP) > matrix[4,2] = s4*g4 > matrix[4,3] = sP*gP > matrix[4,5] = sA*gB > matrix[5,4] = sA > matrix[5,5] = sA*(1.-gB) > matrix[6,4] = 0.5*s1 > matrix[7,6] = s2 > matrix[8,7] = s3 > matrix[9,8] = s4 > matrix[9,9] = sA > >But I get the following: > >--> 777 matrix[0,4] = 0.5*s1 > matrix = [[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,] > [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,] > [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,] > [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,] > [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,] > [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,] > [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,] > [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,] > [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,] > [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,]] > s1 = 0.78583498304255861 > 778 matrix[1,0] = s2 > 779 matrix[2,1] = s3 > >ValueError: cannot handle misaligned or not writeable arrays. > > > > I'm pretty sure this error is coming from the Numeric module (not scipy). It could be a problem with the __array_struct__ interface, though. Or it could be a problem with one of your arrays having it's writeable and/or aligned flags set improperly for some reason. Basically, new scipy arrays can be misaligned and read-only in memory. This is not supported for Numeric arrays, so if you try to convert a scipy array with those advanced features to a Numeric array you get this error (from Numeric). In sum --- we need more information about what kind of arrays (Numeric or scipy) all these things are. You are definitely mixing scipy with Numeric. This should work, but it is less well tested. We also need to see why some of your arrays are getting misaligned or are not writeable. The code you posted is not enough to discern this. If needed, the flags for a scipy array can be had as .flags which returns a dictionary of Boolean values. -Travis From oliphant at ee.byu.edu Thu Nov 10 18:33:09 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 10 Nov 2005 16:33:09 -0700 Subject: [SciPy-user] strange ValueError in scipy_core In-Reply-To: <723eb6930511101229y74ea39cfjb1ab1973989816d2@mail.gmail.com> References: <723eb6930511101229y74ea39cfjb1ab1973989816d2@mail.gmail.com> Message-ID: <4373D8B5.7020404@ee.byu.edu> Chris Fonnesbeck wrote: >I am converting some code over to scipy_core which involves populating >a relatively sparse matrix, so I am generating an array of zeros, and >substituting the non-zero elements. However, this substitution does >not seem to be allowed. Here is the relevant section of code. > > I think I've recreated what you are doing. 
Matrix is a Numeric array and apparently the other items are scipy core arrays. The multiplication result actually produces an array scalar. The problem was with the __array_struct__ interface for the array scalar. Array scalars are only readable so the WRITEABLE flag was getting reset thus raising the error in Numeric. (But, setting the flag was unnecessary because the data from the scalar was copied anyway). Thanks for bringing this one to light. It's fixed in SVN scipy core. -Travis From oliphant at ee.byu.edu Thu Nov 10 19:29:04 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 10 Nov 2005 17:29:04 -0700 Subject: [SciPy-user] SciPy SVN altered so that newscipy and newcore are now the trunk. Message-ID: <4373E5D0.5050305@ee.byu.edu> I've finished altering the subversion repository of SciPy so that the new development is taking place on the trunk of both scipy and scipy_core. The old versions are under branches named oldcore and oldscipy. Get the new repositor(y,ies) using: *Core*: svn co http://svn.scipy.org/svn/scipy_core/trunk core *Full SciPy*: svn co http://svn.scipy.org/svn/scipy/trunk scipy Doing both will place two directories named core and scipy in your current directory containing the current state of both repositories. python setup.py install should work in each directory. The Freeze is now over. I want to track down the bug that Christopher Hanley noted and another f2py-related bug before making a release, which I expect to happen by the weekend. -Travis From fonnesbeck at gmail.com Thu Nov 10 19:33:27 2005 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Thu, 10 Nov 2005 19:33:27 -0500 Subject: [SciPy-user] strange ValueError in scipy_core In-Reply-To: <4373D8B5.7020404@ee.byu.edu> References: <723eb6930511101229y74ea39cfjb1ab1973989816d2@mail.gmail.com> <4373D8B5.7020404@ee.byu.edu> Message-ID: <723eb6930511101633o6f1aa8caj1ca58666ff5bcb2@mail.gmail.com> On 11/10/05, Travis Oliphant wrote: > Chris Fonnesbeck wrote: > > >I am converting some code over to scipy_core which involves populating > >a relatively sparse matrix, so I am generating an array of zeros, and > >substituting the non-zero elements. However, this substitution does > >not seem to be allowed. Here is the relevant section of code. > > > > > I think I've recreated what you are doing. Matrix is a Numeric array > and apparently the other items are scipy core arrays. The > multiplication result actually produces an array scalar. The problem > was with the __array_struct__ interface for the array scalar. > > Array scalars are only readable so the WRITEABLE flag was getting reset > thus raising the error in Numeric. (But, setting the flag was > unnecessary because the data from the scalar was copied anyway). > > Thanks for bringing this one to light. It's fixed in SVN scipy core. > Thanks. Will go break something else now. C. -- Chris Fonnesbeck Atlanta, GA From cookedm at physics.mcmaster.ca Thu Nov 10 22:21:53 2005 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Thu, 10 Nov 2005 22:21:53 -0500 Subject: [SciPy-user] [Numpy-discussion] SciPy SVN altered so that newscipy and newcore are now the trunk. In-Reply-To: <4373E5D0.5050305@ee.byu.edu> (Travis Oliphant's message of "Thu, 10 Nov 2005 17:29:04 -0700") References: <4373E5D0.5050305@ee.byu.edu> Message-ID: Travis Oliphant writes: > I've finished altering the subversion repository of SciPy so that the > new development is taking place on the trunk of both scipy and scipy_core. 
> > The old versions are under branches named oldcore and oldscipy. > > Get the new repositor(y,ies) using: > > *Core*: > svn co http://svn.scipy.org/svn/scipy_core/trunk core > > *Full SciPy*: > svn co http://svn.scipy.org/svn/scipy/trunk scipy > > Doing both will place two directories named core and scipy in your > current directory containing the current state of both repositories. Alternatively, instead of doing a full checkout, you can switch your working copies: Within your (old) newcore directory: svn sw http://svn.scipy.org/svn/scipy_core/trunk and same for full scipy. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From saliola at gmail.com Fri Nov 11 04:06:21 2005 From: saliola at gmail.com (Franco Saliola) Date: Fri, 11 Nov 2005 04:06:21 -0500 Subject: [SciPy-user] Building on 64 bit machines Message-ID: Hello. I am having some trouble building SciPy on a 64 bit machine. Actually, after changing a bunch of flags I got it to build, but I am getting an errors for scipy.test(level=1). The test returns, AttributeError: 'module' object has no attribute 'eig' Is there documentation somewhere about how to build a 64 bit version of SciPy? The machine is an Intel dual processor. Any help would be greatly appreciated. Franco -- From rkern at ucsd.edu Fri Nov 11 04:15:20 2005 From: rkern at ucsd.edu (Robert Kern) Date: Fri, 11 Nov 2005 01:15:20 -0800 Subject: [SciPy-user] random number sampling performance in newcore In-Reply-To: <723eb6930511090611i59909364o8528453afc78b734@mail.gmail.com> References: <723eb6930511080820x3f78d336uc33bd78982420986@mail.gmail.com> <43713E95.4030804@ucsd.edu> <723eb6930511090611i59909364o8528453afc78b734@mail.gmail.com> Message-ID: <43746128.2080306@ucsd.edu> Chris Fonnesbeck wrote: > My tests were pretty simple and informal -- just 100K identically > distributed samples: > > In [1]: from scipy import random as rnums > > In [2]: from time import time > > In [3]: import RandomArray as ra > > In [4]: start=time();foo=ra.binomial(100,0.3,100000);print time()-start > 0.21244597435 > > In [5]: start=time();foo=rnums.binomial(100,0.3,100000);print time()-start > 1.73217105865 It turns out the algorithm I was using for the small-mean case wasn't the greatest. This is fixed, now. > Its also faster in RandomArray for gamma, but not as dramatic: > > In [7]: start=time();foo=rnums.gamma(3,5,1000000);print time()-start > 0.938844919205 > > In [8]: start=time();foo=ra.gamma(3,5,1000000);print time()-start > 0.690024852753 On my computer, it's even less dramatic. In [15]: Timer('x=RA.gamma(3,1.0,1000000)', 'import RandomArray as RA').repeat(3,10) Out[15]: [12.582072019577026, 12.48114800453186, 12.486257076263428] In [16]: Timer('x=random.gamma(3,size=1000000)', 'from scipy import random').repeat(3,10) Out[16]: [14.418467044830322, 14.438578128814697, 14.841632127761841] RANLIB does use a somewhat more state-of-the-art generator for gamma than I do, but one that is also longer and rather more complicated. I happen to count those criteria at least as important as speed (especially as I'm the one doing the implementing, here). If you see any more glaring performance problems, let me know, and I'll investigate them. -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." 
-- Richard Harter From oliphant at ee.byu.edu Fri Nov 11 04:16:48 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 11 Nov 2005 02:16:48 -0700 Subject: [SciPy-user] Building on 64 bit machines In-Reply-To: References: Message-ID: <43746180.1000301@ee.byu.edu> Franco Saliola wrote: >Hello. > >I am having some trouble building SciPy on a 64 bit machine. Actually, >after changing a bunch of flags I got it to build, but I am getting an >errors for scipy.test(level=1). The test returns, > > Which SciPy are you talking about? The new scipy or the old scipy. To be honest, I don't know if the old scipy ever worked on 64-bit machines. The new scipy should build fine on 64-bit machines. -Travis From arnd.baecker at web.de Fri Nov 11 06:17:05 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Fri, 11 Nov 2005 12:17:05 +0100 (CET) Subject: [SciPy-user] Building on 64 bit machines In-Reply-To: <43746180.1000301@ee.byu.edu> References: <43746180.1000301@ee.byu.edu> Message-ID: On Fri, 11 Nov 2005, Travis Oliphant wrote: > Franco Saliola wrote: > > >Hello. > > > >I am having some trouble building SciPy on a 64 bit machine. Actually, > >after changing a bunch of flags I got it to build, but I am getting an > >errors for scipy.test(level=1). The test returns, > > > > > Which SciPy are you talking about? The new scipy or the old scipy. > > To be honest, I don't know if the old scipy ever worked on 64-bit machines. I got it working - with quite a bit of help here. See http://www.scipy.org/mailinglists/mailman?fn=scipy-dev/2005-November/003949.html for my notes on this (note that the subject concerns newcore, but my notes are really for "old" scipy. Best, Arnd From rkern at ucsd.edu Fri Nov 11 06:23:19 2005 From: rkern at ucsd.edu (Robert Kern) Date: Fri, 11 Nov 2005 03:23:19 -0800 Subject: [SciPy-user] random number sampling performance in newcore In-Reply-To: <43746128.2080306@ucsd.edu> References: <723eb6930511080820x3f78d336uc33bd78982420986@mail.gmail.com> <43713E95.4030804@ucsd.edu> <723eb6930511090611i59909364o8528453afc78b734@mail.gmail.com> <43746128.2080306@ucsd.edu> Message-ID: <43747F27.6000807@ucsd.edu> Robert Kern wrote: > RANLIB does use a somewhat more state-of-the-art generator for gamma > than I do, but one that is also longer and rather more complicated. I > happen to count those criteria at least as important as speed > (especially as I'm the one doing the implementing, here). Never mind. I found an even faster but incredibly small and simple algorithm for gamma variates. scipy.random.gamma() should now be about as fast or faster than RandomArray.gamma() depending on the parameters. -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." 
-- Richard Harter From jens_brandenburg at gmx.net Fri Nov 11 07:05:30 2005 From: jens_brandenburg at gmx.net (Jens Brandenburg) Date: Fri, 11 Nov 2005 13:05:30 +0100 (MET) Subject: [SciPy-user] building scipy References: <4373AA36.2070101@gmx.net> Message-ID: <3522.1131710730@www35.gmx.net> hello everybody, to give you some more information about the problem I've tried the following under IPython: In [3]: from scipy.linalg import eig And this is the result: exceptions.ImportError: /usr/local/lib/python2.3/site-packages/scipy/linalg/_fli nalg.so: undefined symbol: _gfortran_compare_string --------------------------------------------------------------------------- scipy_base.ppimport.PPImportError Traceback (most re cent call last) /home/jens-ifw/ /usr/local/lib/python2.3/site-packages/scipy_base/ppimport.py in __getattr__(self, name) 301 module = self.__dict__['_ppimport_module'] 302 except KeyError: --> 303 module = self._ppimport_importer() 304 return getattr(module, name) 305 /usr/local/lib/python2.3/site-packages/scipy_base/ppimport.py in _ppimport_importer(self) 260 exc_info = self.__dict__.get('_ppimport_exc_info') 261 if exc_info is not None: --> 262 raise PPImportError,\ 263 ''.join(traceback.format_exception(*exc_info)) 264 else: PPImportError: Traceback (most recent call last): File "/usr/local/lib/python2.3/site-packages/scipy_base/ppimport.py", line 273, in _ppimport_importer module = __import__(name,None,None,['*']) File "/usr/local/lib/python2.3/site-packages/scipy/linalg/__init__.py", line 8, in ? from basic import * File "/usr/local/lib/python2.3/site-packages/scipy/linalg/basic.py", line 223, in ? import decomp File "/usr/local/lib/python2.3/site-packages/scipy/linalg/decomp.py", line 16, in ? from blas import get_blas_funcs File "/usr/local/lib/python2.3/site-packages/scipy/linalg/blas.py", line 13, in ? import fblas ImportError: /usr/local/lib/python2.3/site-packages/scipy/linalg/fblas.so: undefined symbol: _gfortran_runtime_error I hope someone has an idea or at least reply to my request... ;-(( By the way - do I need the compat-readline* package to run IPython properly on SUSE 10.0 OSS??? Thank's a lot cheers Jens -- Jens Brandenburg 10 GB Mailbox, 100 FreeSMS/Monat http://www.gmx.net/de/go/topmail +++ GMX - die erste Adresse f?r Mail, Message, More +++ From svetosch at gmx.net Fri Nov 11 09:58:28 2005 From: svetosch at gmx.net (Sven Schreiber) Date: Fri, 11 Nov 2005 15:58:28 +0100 Subject: [SciPy-user] qz decomposition how to? Message-ID: <4374B194.3070306@gmx.net> Hi, total Scipy newbie here -- I have tried to find an answer on the web and user guides, but I'm a little lost in different versions etc. So here goes: Is it possible to perform a QZ (alias generalized Schur) decomposition with some pre-built Scipy version? (preferably on Windows, but I'd be also glad to hear about other options) Relatively low-level Lapack accessing would be ok; btw as far as I understand that would be lapack functions xGGES. Currently I have windows Enthought Python 2.3.5 installed (uninstalled previously used Python 2.4 to hopefully have all the Scipy tools fitting together). Thanks much for your time and help, -sven From ckkart at hoc.net Fri Nov 11 10:05:14 2005 From: ckkart at hoc.net (Christian Kristukat) Date: Fri, 11 Nov 2005 16:05:14 +0100 Subject: [SciPy-user] fmin_tnc arguments Message-ID: <4374B32A.7000101@hoc.net> Hi, fmin_tnc requires that both the input vector and the evaluated gradient are python lists. I propose to change fmin_tnc to accept numeric arrays/ndarrays, too. 
I'd be glad if someone more experienced in writing C extensions than me could do that. If not I'll try it myself. Btw, a general question concerning conversion between lists and arrays: the memory consumption will double temporarily during the conversion and decrease again when one of the objects is deleted, right? Is there way to do that 'inplace'? Regards, Christian From nwagner at mecha.uni-stuttgart.de Fri Nov 11 10:04:45 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Fri, 11 Nov 2005 16:04:45 +0100 Subject: [SciPy-user] qz decomposition how to? In-Reply-To: <4374B194.3070306@gmx.net> References: <4374B194.3070306@gmx.net> Message-ID: <4374B30D.7070303@mecha.uni-stuttgart.de> Sven Schreiber wrote: >Hi, >total Scipy newbie here -- I have tried to find an answer on the web and user guides, but I'm a >little lost in different versions etc. So here goes: > >Is it possible to perform a QZ (alias generalized Schur) decomposition with some pre-built Scipy >version? (preferably on Windows, but I'd be also glad to hear about other options) Relatively >low-level Lapack accessing would be ok; btw as far as I understand that would be lapack functions xGGES. > >Currently I have windows Enthought Python 2.3.5 installed (uninstalled previously used Python 2.4 to >hopefully have all the Scipy tools fitting together). > >Thanks much for your time and help, >-sven > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > Use help (linalg.flapack) t,sdim,w,vs,work,info = cgees(cselect,a,compute_v=1,sort_t=0,lwork=3*n,cselect_extra_args=(),overwrite_a=0) t,sdim,w,vs,work,info = zgees(zselect,a,compute_v=1,sort_t=0,lwork=3*n,zselect_extra_args=(),overwrite_a=0) t,sdim,wr,wi,vs,work,info = dgees(dselect,a,compute_v=1,sort_t=0,lwork=3*n,dselect_extra_args=(),overwrite_a=0) t,sdim,wr,wi,vs,work,info = sgees(sselect,a,compute_v=1,sort_t=0,lwork=3*n,sselect_extra_args=(),overwrite_a=0) alphar,alphai,beta,vl,vr,work,info = sggev(a,b,compute_vl=1,compute_vr=1,lwork=8*n,overwrite_a=0,overwrite_b=0) alphar,alphai,beta,vl,vr,work,info = dggev(a,b,compute_vl=1,compute_vr=1,lwork=8*n,overwrite_a=0,overwrite_b=0) alpha,beta,vl,vr,work,info = cggev(a,b,compute_vl=1,compute_vr=1,lwork=8*n,overwrite_a=0,overwrite_b=0) alpha,beta,vl,vr,work,info = zggev(a,b,compute_vl=1,compute_vr=1,lwork=8*n,overwrite_a=0,overwrite_b=0) See also linalg.schur, linalg.rsf2csf and linalg.eig Nils From svetosch at gmx.net Fri Nov 11 11:10:41 2005 From: svetosch at gmx.net (Sven Schreiber) Date: Fri, 11 Nov 2005 17:10:41 +0100 Subject: [SciPy-user] qz decomposition how to? 
In-Reply-To: <4374B30D.7070303@mecha.uni-stuttgart.de> References: <4374B194.3070306@gmx.net> <4374B30D.7070303@mecha.uni-stuttgart.de> Message-ID: <4374C281.1020007@gmx.net> Nils Wagner schrieb: > > Use > > help (linalg.flapack) > > t,sdim,w,vs,work,info = > cgees(cselect,a,compute_v=1,sort_t=0,lwork=3*n,cselect_extra_args=(),overwrite_a=0) > t,sdim,w,vs,work,info = > zgees(zselect,a,compute_v=1,sort_t=0,lwork=3*n,zselect_extra_args=(),overwrite_a=0) > t,sdim,wr,wi,vs,work,info = > dgees(dselect,a,compute_v=1,sort_t=0,lwork=3*n,dselect_extra_args=(),overwrite_a=0) > t,sdim,wr,wi,vs,work,info = > sgees(sselect,a,compute_v=1,sort_t=0,lwork=3*n,sselect_extra_args=(),overwrite_a=0) > alphar,alphai,beta,vl,vr,work,info = > sggev(a,b,compute_vl=1,compute_vr=1,lwork=8*n,overwrite_a=0,overwrite_b=0) > alphar,alphai,beta,vl,vr,work,info = > dggev(a,b,compute_vl=1,compute_vr=1,lwork=8*n,overwrite_a=0,overwrite_b=0) > alpha,beta,vl,vr,work,info = > cggev(a,b,compute_vl=1,compute_vr=1,lwork=8*n,overwrite_a=0,overwrite_b=0) > alpha,beta,vl,vr,work,info = > zggev(a,b,compute_vl=1,compute_vr=1,lwork=8*n,overwrite_a=0,overwrite_b=0) > > See also linalg.schur, linalg.rsf2csf and linalg.eig > > Nils > > That was a quick reply, and it seems to work. Thank you! > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > > From drewdemento at yahoo.com Fri Nov 11 12:31:08 2005 From: drewdemento at yahoo.com (Andrew D) Date: Fri, 11 Nov 2005 09:31:08 -0800 (PST) Subject: [SciPy-user] Finding complex roots of complex polynomials in newscipy Message-ID: <20051111173108.18525.qmail@web50709.mail.yahoo.com> Hello, I'm using an up-to-date build of newcore and newscipy (newcore 1467, newscipy 1431) on Red Hat Enterprise Linux WS release 3 (Taroon Update 6) running on a Pentium 4. Running the "roots" command with complex arguments seems to fail, producing incorrect results even in the simplest cases: ******************************************************************** Python 2.4.2 (#1, Nov 7 2005, 13:09:14) [GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-53)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from scipy import * Importing io to scipy Importing fftpack to scipy Importing special to scipy Importing cluster to scipy Importing sparse to scipy Importing utils to scipy Importing interpolate to scipy Importing lib to scipy Importing integrate to scipy Importing signal to scipy Importing optimize to scipy Importing linalg to scipy Importing stats to scipy >>> roots([1+2.j,1]) array([-1.+0.j]) >>> roots([2.j,1]) Traceback (most recent call last): File "", line 1, in ? 
File ".../lib/python2.4/site-packages/scipy/base/polynomial.py", line 112, in roots roots = _eigvals(A) File ".../lib/python2.4/site-packages/scipy/base/polynomial.py", line 30, in _eigvals return eigvals(arg) File ".../lib/python2.4/site-packages/scipy/linalg/decomp.py", line 172, in eigvals return eig(a,b=b,left=0,right=0,overwrite_a=overwrite_a) File ".../lib/python2.4/site-packages/scipy/linalg/decomp.py", line 114, in eig a1 = asarray_chkfinite(a) File ".../lib/python2.4/site-packages/scipy/base/function_base.py", line 211, in asarray_chkfinite raise ValueError, "array must not contain infs or NaNs" ValueError: array must not contain infs or NaNs ******************************************************************** SciPy v0.32 get's it right: >>> roots([1.+2.j,1]) array([-0.2+0.4j]) >>> roots([2.j,1]) array([-0.+0.5j]) Is this a known problem? The culprit seems to be the lines 103-105 in polynomial.py: # casting: if incoming array isn't floating point, make it floating point. if not isinstance(p.dtype, (NX.floating, NX.complexfloating)): p = p.astype(float) which force p to be real, if it is complex, even though it appears that they should leave things be if they are complex. Andrew __________________________________ Yahoo! Mail - PC Magazine Editors' Choice 2005 http://mail.yahoo.com From fonnesbeck at gmail.com Fri Nov 11 12:34:11 2005 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Fri, 11 Nov 2005 12:34:11 -0500 Subject: [SciPy-user] updated OSX install guide Message-ID: <723eb6930511110934j7418a589j61546f1930a4dbdd@mail.gmail.com> I have updated the install tutorial for SciPy on OSX: http://www.scipy.org/documentation/Members/fonnesbeck/osx_build.txt It now contains fixes for a number of installation problems that folks have emailed me with. Once things have settled down a bit, I will create a similar document for scipy_core. C. -- Chris Fonnesbeck Atlanta, GA From Fernando.Perez at colorado.edu Fri Nov 11 13:44:31 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Fri, 11 Nov 2005 11:44:31 -0700 Subject: [SciPy-user] building scipy In-Reply-To: <3522.1131710730@www35.gmx.net> References: <4373AA36.2070101@gmx.net> <3522.1131710730@www35.gmx.net> Message-ID: <4374E68F.6040803@colorado.edu> Jens Brandenburg wrote: > hello everybody, > > to give you some more information about the problem I've tried the following > under IPython: > > In [3]: from scipy.linalg import eig > > And this is the result: > > exceptions.ImportError: > /usr/local/lib/python2.3/site-packages/scipy/linalg/_fli > nalg.so: undefined symbol: _gfortran_compare_string > I hope someone has an idea or at least reply to my request... ;-(( By the > way - do I need the compat-readline* package to run IPython properly on SUSE > 10.0 OSS??? I don't use SUSE, so I can't answer specifically, but basically as long as Python was built with readline support under suse, you shouldn't need anything else at all to use ipython. If the up/down arrow keys work for you as expected, you're all set. As for your scipy problem, the gfortran message indicates you are using gcc4 (which replaced g77 with gfortran). I know in some platforms people have had a hard time with gcc4, but I don't know enough specifics to say much that can be useful (I'm still using gcc3.4/g77 or the Intel compilers). 
Cheers, f From rmuller at sandia.gov Sat Nov 12 08:42:28 2005 From: rmuller at sandia.gov (Rick Muller) Date: Sat, 12 Nov 2005 06:42:28 -0700 Subject: [SciPy-user] Scipy, FC4 and g77 Message-ID: <207E29F9-3E44-44B3-A452-FDD01DA22365@sandia.gov> I'm increasingly frustrated by the difficulties in building Scipy on FC4. I've gotten a lot of very useful help from members of this list, but I still do not have a working version of the entire program. This has kept me from moving my software (PyQuante, a python quantum chemistry suite) from Numpy to Scipy, despite all of the nice goodies that Scipy offers. Is is possible to build g77 on top of gcc4? gfortran doesn't compile a lot of the projects here at Sandia that used to work with g77, and I'm concerned about waiting for gfortran to get up to speed. If the g77 code just needs an official maintainer/bug fixer, I'm sure I can find someone to do this. However, if there is a more substantial reason why the code can't be built, I'd be interested in hearing it. Apologies for asking this question here, rather than on a GCC list. This is a relatively friendly forum, and since I figure people here are struggling with the same issues, I hope that I'd get a more useful answer here than on some GCC list. R. Rick Muller rmuller at sandia.gov From yaroslavvb at gmail.com Sat Nov 12 14:29:11 2005 From: yaroslavvb at gmail.com (Yaroslav Bulatov) Date: Sat, 12 Nov 2005 11:29:11 -0800 Subject: [SciPy-user] vectorize bug/feature Message-ID: Suppose my function f returns 0, .1, .2 for values 0,1,2. When I do ff=vectorize(f); ff([1,2,3]), I get [0,0,0] which is counter-intuitive. Apparently it sees that f(0) is a long, so it makes the array of longs, and .1,.2 turn into 0. From aisaac at american.edu Sat Nov 12 17:37:23 2005 From: aisaac at american.edu (Alan G Isaac) Date: Sat, 12 Nov 2005 17:37:23 -0500 Subject: [SciPy-user] efficiently importing ascii data In-Reply-To: <200511101436.58074.dd55@cornell.edu> References: <200511101436.58074.dd55@cornell.edu> Message-ID: On Thu, 10 Nov 2005, Darren Dale apparently wrote: > I'm reading arrays of data from an ascii file and > converting to appropriate numerical types. The data files > can get pretty big, I was wondering if someone here might > have a suggestion on how to speed things up. This is a common request on the SciPy Users list. I asked Mike Miller to consider releasing TableIO http://php.iupui.edu/~mmiller3/python/ under a more Pythonic license so that SciPy could use it. He initially sounded willing, but he never actually sent a message releasing the code under another license. Nor did he say that he was ultimately unwilling to do so. If you can work with GPL'd code, you might try TableIO. I'll bcc: him on this to see if he has decided. fwiw, Alan Isaac From oliphant at ee.byu.edu Sat Nov 12 22:19:07 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Sat, 12 Nov 2005 20:19:07 -0700 Subject: [SciPy-user] Scipy, FC4 and g77 In-Reply-To: <207E29F9-3E44-44B3-A452-FDD01DA22365@sandia.gov> References: <207E29F9-3E44-44B3-A452-FDD01DA22365@sandia.gov> Message-ID: <4376B0AB.7010406@ee.byu.edu> Rick Muller wrote: >I'm increasingly frustrated by the difficulties in building Scipy on >FC4. I've gotten a lot of very useful help from members of this list, >but I still do not have a working version of the entire program. This >has kept me from moving my software (PyQuante, a python quantum >chemistry suite) from Numpy to Scipy, despite all of the nice goodies >that Scipy offers. 
> > You realize you don't need a fortran compiler to install scipy_core, right? You should be able to convert from NumPy to ScipyCore without the fortran compiler issue. Now, if you want the additional code of full scipy, then the fortran compiler is needed. I think gfortran is not capable enough yet, and so you need to use g77. Can you install gcc 3.3 on FC4 and just use that to compile g77? I know, for example, that some people on MacOSX use gcc 3.3 instead of 4.0 because of some issues with additional features the new compiler is trying to support but not quite getting. -Travis From oliphant at ee.byu.edu Sat Nov 12 22:20:27 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Sat, 12 Nov 2005 20:20:27 -0700 Subject: [SciPy-user] vectorize bug/feature In-Reply-To: References: Message-ID: <4376B0FB.4040401@ee.byu.edu> Yaroslav Bulatov wrote: >Suppose my function f returns 0, .1, .2 for values 0,1,2. >When I do ff=vectorize(f); ff([1,2,3]), I get [0,0,0] which is >counter-intuitive. Apparently it sees that f(0) is a long, so it makes >the array of longs, and .1,.2 turn into 0. > > If you don't declare the output type (using the otypes keyword to vectorize), then the code simply uses the first argument to try and discern the output type. You should pass in a fixed output type and bypass the guess. -Travis From Fernando.Perez at colorado.edu Sun Nov 13 02:48:03 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Sun, 13 Nov 2005 00:48:03 -0700 Subject: [SciPy-user] Finding complex roots of complex polynomials in newscipy In-Reply-To: <20051111173108.18525.qmail@web50709.mail.yahoo.com> References: <20051111173108.18525.qmail@web50709.mail.yahoo.com> Message-ID: <4376EFB3.50909@colorado.edu> Andrew D wrote: > Hello, > I'm using an up-to-date build of newcore and newscipy (newcore > 1467, newscipy 1431) on Red Hat Enterprise Linux WS release 3 (Taroon > Update 6) running on a Pentium 4. > Is this a known problem? The culprit seems to be the lines 103-105 in > polynomial.py: > # casting: if incoming array isn't floating point, make it floating > point. > if not isinstance(p.dtype, (NX.floating, NX.complexfloating)): > p = p.astype(float) > > which force p to be real, if it is complex, even though it appears > that they should leave things be if they are complex. I tracked this down to the following behavior in the scipy type hierarchy. This looks to me like a bug, unless I'm missing something: In [14]: zarr = scipy.array([1j,2j]) In [15]: zarr.dtype Out[15]: In [16]: isinstance(zarr.dtype,scipy.complexfloating) Out[16]: False I don't know the type inheritance code at all, and a bit of grepping didn't point me quite in the right direction. I'll keep looking, but I want to narrow down the bug so that perhaps someone who knows this code well can fix this one quickly. I should mention that at least the hunt for this problem led me to fixing a bug in ipython related to embedding, so the work wasn't in vain :) Cheers, f From rkern at ucsd.edu Sun Nov 13 03:03:57 2005 From: rkern at ucsd.edu (Robert Kern) Date: Sun, 13 Nov 2005 00:03:57 -0800 Subject: [SciPy-user] Finding complex roots of complex polynomials in newscipy In-Reply-To: <4376EFB3.50909@colorado.edu> References: <20051111173108.18525.qmail@web50709.mail.yahoo.com> <4376EFB3.50909@colorado.edu> Message-ID: <4376F36D.5020108@ucsd.edu> Fernando Perez wrote: > I tracked this down to the following behavior in the scipy type hierarchy. 
> This looks to me like a bug, unless I'm missing something: > > In [14]: zarr = scipy.array([1j,2j]) > > In [15]: zarr.dtype > Out[15]: > > In [16]: isinstance(zarr.dtype,scipy.complexfloating) > Out[16]: False You tested the wrong predicate, I think. In [66]: issubclass(zarr.dtype, complexfloating) Out[66]: True -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From Fernando.Perez at colorado.edu Sun Nov 13 03:09:27 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Sun, 13 Nov 2005 01:09:27 -0700 Subject: [SciPy-user] Finding complex roots of complex polynomials in newscipy In-Reply-To: <4376EFB3.50909@colorado.edu> References: <20051111173108.18525.qmail@web50709.mail.yahoo.com> <4376EFB3.50909@colorado.edu> Message-ID: <4376F4B7.40406@colorado.edu> Fernando Perez wrote: > I tracked this down to the following behavior in the scipy type hierarchy. > This looks to me like a bug, unless I'm missing something: > > In [14]: zarr = scipy.array([1j,2j]) > > In [15]: zarr.dtype > Out[15]: > > In [16]: isinstance(zarr.dtype,scipy.complexfloating) > Out[16]: False OK, a bit more info. I found that: In [17]: isinstance(zarr.dtype(),scipy.complexfloating) Out[17]: True [Note the call on dtype] This is in fact how regular python works: In [26]: isinstance(int,int) Out[26]: False In [27]: isinstance(int(),int) Out[27]: True However, where scipy differs from python is here: In [28]: iarr = scipy.arange(2) In [29]: isinstance(iarr,scipy.integer) Out[29]: False So in order to do typechecking via isinstance(), we seem to be forced to make a function call (expensive): In [30]: isinstance(iarr.dtype,scipy.integer) Out[30]: False In [31]: isinstance(iarr.dtype(),scipy.integer) Out[31]: True I'm not exactly sure what Travis' intent was in here. But it would seem to me that [29] above should have come out True, if arrays are to be as similar to standard python objects as possible in terms of type handling. But as I said, I don't know the structure of the type hierarchy code well enough to say whether this is feasible or not. Cheers, f From Fernando.Perez at colorado.edu Sun Nov 13 03:17:13 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Sun, 13 Nov 2005 01:17:13 -0700 Subject: [SciPy-user] Finding complex roots of complex polynomials in newscipy In-Reply-To: <4376F36D.5020108@ucsd.edu> References: <20051111173108.18525.qmail@web50709.mail.yahoo.com> <4376EFB3.50909@colorado.edu> <4376F36D.5020108@ucsd.edu> Message-ID: <4376F689.2070509@colorado.edu> Robert Kern wrote: > Fernando Perez wrote: > > >>I tracked this down to the following behavior in the scipy type hierarchy. >>This looks to me like a bug, unless I'm missing something: >> >>In [14]: zarr = scipy.array([1j,2j]) >> >>In [15]: zarr.dtype >>Out[15]: >> >>In [16]: isinstance(zarr.dtype,scipy.complexfloating) >>Out[16]: False > > > You tested the wrong predicate, I think. > > In [66]: issubclass(zarr.dtype, complexfloating) > Out[66]: True OK, in that case the roots() call in scipy.polynomial needs fixing, and perhaps a fair bit of code out there that used isinstance is going to break. See my other message (which I sent as yours came in) for further details on this. Before we change any code, I'd like to understand (and hopefully this should be clearly documented) how types should be handled with the new scipy. In fact, we're all so used to isinstance that this is probably going to bite a lot of people. 
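To spell out the trap for anyone grepping their own code, here is the same check written both ways -- nothing assumed beyond the predicates already shown above:

import scipy

zarr = scipy.array([1j, 2j])

# the old habit -- isinstance() against the dtype -- now answers False,
# so code like roots() silently takes the "not complex" branch and casts to float
print isinstance(zarr.dtype, scipy.complexfloating)   # False

# the predicate that does reflect the new type hierarchy
print issubclass(zarr.dtype, scipy.complexfloating)   # True

The nasty part is that the first form doesn't raise, it just quietly does the wrong thing, which is exactly what bit roots().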
A quick look at my install of the old scipy: abdul[zipy-old]> egrep -r isinstance * | wc -l 161 While most of those are not using the numeric type hierarchy, a few are. I wonder if we have more broken code like the roots() case due to this, and worry about how much _user_ code is going to go poof for the same reason. Cheers, f From rkern at ucsd.edu Sun Nov 13 03:36:38 2005 From: rkern at ucsd.edu (Robert Kern) Date: Sun, 13 Nov 2005 00:36:38 -0800 Subject: [SciPy-user] Finding complex roots of complex polynomials in newscipy In-Reply-To: <4376F4B7.40406@colorado.edu> References: <20051111173108.18525.qmail@web50709.mail.yahoo.com> <4376EFB3.50909@colorado.edu> <4376F4B7.40406@colorado.edu> Message-ID: <4376FB16.1020601@ucsd.edu> Fernando Perez wrote: > This is in fact how regular python works: > > In [26]: isinstance(int,int) > Out[26]: False > > In [27]: isinstance(int(),int) > Out[27]: True > > However, where scipy differs from python is here: > > In [28]: iarr = scipy.arange(2) > > In [29]: isinstance(iarr,scipy.integer) > Out[29]: False The scipy.{integer,complexfloating,...} are the scalar type objects. Full arrays are instances of scipy.ndarray, not the scalar type objects. However, the arrays have the dtype attribute which should be one of these scalar type objects. You can check the inheritance of the scalar type objects using issubclass(), not isinstance(). [From other email:] > OK, in that case the roots() call in scipy.polynomial needs fixing, It is now. > and perhaps a fair bit of code out there that used isinstance is > going to break. See my other message (which I sent as yours came in) > for further details on this. Before we change any code, I'd like to > understand (and hopefully this should be clearly documented) how types > should be handled with the new scipy. How's that? isinstance(Numeric.ones(5), Numeric.Int) never worked. On general arrays isinstance() can only tell if it's a scipy.ndarray or something else. Nothing's changed so far. isinstance() works as expected on the scalar types although issubclass(x.dtype, whatever) works, too. Take a closer look at the actual uses of isinstance() in the old scipy. They're all testing for ArrayType, a class, or one of the standard types. That all works the same with scipy_core. -- Robert Kern rkern at ucsd.edu "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From Fernando.Perez at colorado.edu Sun Nov 13 03:52:41 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Sun, 13 Nov 2005 01:52:41 -0700 Subject: [SciPy-user] Finding complex roots of complex polynomials in newscipy In-Reply-To: <4376FB16.1020601@ucsd.edu> References: <20051111173108.18525.qmail@web50709.mail.yahoo.com> <4376EFB3.50909@colorado.edu> <4376F4B7.40406@colorado.edu> <4376FB16.1020601@ucsd.edu> Message-ID: <4376FED9.1020806@colorado.edu> Robert Kern wrote: > The scipy.{integer,complexfloating,...} are the scalar type objects. > Full arrays are instances of scipy.ndarray, not the scalar type objects. > However, the arrays have the dtype attribute which should be one of > these scalar type objects. You can check the inheritance of the scalar > type objects using issubclass(), not isinstance(). > > [From other email:] > > >>OK, in that case the roots() call in scipy.polynomial needs fixing, > > > It is now. OK, thanks for the fix and the types clarifications. As you can tell, I'm just starting to get up to speed with newcore/newscipy and I'm still a bit confused. 
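For the record, here is how I've understood the rules so far (just restating the examples above, so somebody please yell if this is still wrong):

import scipy

a = scipy.arange(2)

# arrays themselves are instances of the array type, not of the scalar types
print isinstance(a, scipy.ndarray)            # True
print isinstance(a, scipy.integer)            # False

# the dtype attribute is a scalar *type* object, so inheritance is checked
# with issubclass(); instantiating it gives something isinstance() understands
print issubclass(a.dtype, scipy.integer)      # True
print isinstance(a.dtype(), scipy.integer)    # True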
I'll get there, promise :) Cheers, f From rmuller at sandia.gov Sun Nov 13 07:53:01 2005 From: rmuller at sandia.gov (Rick Muller) Date: Sun, 13 Nov 2005 05:53:01 -0700 Subject: [SciPy-user] SciPy-user Digest, Vol 27, Issue 13 In-Reply-To: References: Message-ID: <419E4D26-5043-47B7-9AAA-176A3DFC1885@sandia.gov> On Nov 13, 2005, at 12:36 AM, scipy-user-request at scipy.net wrote: > You realize you don't need a fortran compiler to install scipy_core, > right? You should be able to convert from NumPy to ScipyCore without > the fortran compiler issue. Right. I'd actually like to use a few other toys in Scipy (spherical harmonics, some of the optimization stuff). > > Now, if you want the additional code of full scipy, then the fortran > compiler is needed. I think gfortran is not capable enough yet, > and so > you need to use g77. Can you install gcc 3.3 on FC4 and just use that > to compile g77? > > I know, for example, that some people on MacOSX use gcc 3.3 instead of > 4.0 because of some issues with additional features the new > compiler is > trying to support but not quite getting. The Macs are really nice, since you can automagically select which gcc compiler you want. Something like this on linux would be really nice. I guess I'm just puzzled why the gcc project would remove something that was already working. Absoft did the same thing, force their f77 users to start using their f9x product. This is probably a good way to detect bugs in f9x product, but why would you take away something that was already working? I guess I need to ask this on one of the gcc lists. Rick From gerard.vermeulen at grenoble.cnrs.fr Sun Nov 13 10:33:22 2005 From: gerard.vermeulen at grenoble.cnrs.fr (Gerard Vermeulen) Date: Sun, 13 Nov 2005 16:33:22 +0100 Subject: [SciPy-user] SciPy-user Digest, Vol 27, Issue 13 In-Reply-To: <419E4D26-5043-47B7-9AAA-176A3DFC1885@sandia.gov> References: <419E4D26-5043-47B7-9AAA-176A3DFC1885@sandia.gov> Message-ID: <20051113163322.57c2c091.gerard.vermeulen@grenoble.cnrs.fr> On Sun, 13 Nov 2005 05:53:01 -0700 "Rick Muller" wrote: [ .. ] > > The Macs are really nice, since you can automagically select which > gcc compiler you want. Something like this on linux would be really > nice. > > I guess I'm just puzzled why the gcc project would remove something > that was already working. Absoft did the same thing, force their f77 > users to start using their f9x product. This is probably a good way > to detect bugs in f9x product, but why would you take away something > that was already working? I guess I need to ask this on one of the > gcc lists. > gcc-4.0 is a new (and less mature than gcc-3.4). The g77 project leader stepped down: http://world.std.com/~burley/g77.html#how Try to find out if your Linux distro supports also an older compiler set (sometimes some distros do). Gerard From dd55 at cornell.edu Sun Nov 13 10:59:36 2005 From: dd55 at cornell.edu (Darren Dale) Date: Sun, 13 Nov 2005 10:59:36 -0500 Subject: [SciPy-user] efficiently importing ascii data In-Reply-To: References: <200511101436.58074.dd55@cornell.edu> Message-ID: <200511131059.37107.dd55@cornell.edu> Hi Alan, On Saturday 12 November 2005 5:37 pm, Alan G Isaac wrote: > On Thu, 10 Nov 2005, Darren Dale apparently wrote: > > I'm reading arrays of data from an ascii file and > > converting to appropriate numerical types. The data files > > can get pretty big, I was wondering if someone here might > > have a suggestion on how to speed things up. > > This is a common request on the SciPy Users list. 
Now that you mention it, I think I may have asked once before. > I asked Mike Miller to consider releasing TableIO > http://php.iupui.edu/~mmiller3/python/ > under a more Pythonic license so that SciPy could use it. > He initially sounded willing, but he never actually sent > a message releasing the code under another license. Nor did > he say that he was ultimately unwilling to do so. > If you can work with GPL'd code, you might try TableIO. > I'll bcc: him on this to see if he has decided. Thank you for the suggestion. I looked at TableIO, but I havent been able to get it working properly. I tried to read a file that had '1e7,' repeated a million times, and it gave me an array that looked like array([[ 10000000.], [ 0.], [ 10000000.], [ 0.], [ 10000000.], [ 0.], [ 10000000.], [ 0.], [ 10000000.], [ 0.], [ 10000000.], [ 0.], [ 10000000.], [ 0.], [ 10000000.], [ 0.], [ 10000000.], [ 0.], [ 10000000.], [ 0.]]) I am considering using scipy's fromfile function, which gives a big speed boost over io.read_array, but I don't understand what this docstring is trying to tell me: WARNING: This function should be used sparingly, as it is not a robust method of persistence. But it can be useful to read in simply-formatted or binary data quickly. From sransom at nrao.edu Sun Nov 13 12:05:34 2005 From: sransom at nrao.edu (Scott Ransom) Date: Sun, 13 Nov 2005 12:05:34 -0500 Subject: [SciPy-user] efficiently importing ascii data In-Reply-To: <200511131059.37107.dd55@cornell.edu> References: <200511101436.58074.dd55@cornell.edu> <200511131059.37107.dd55@cornell.edu> Message-ID: <20051113170534.GD25555@ssh.cv.nrao.edu> Hmmm. Your results with TableIO seem very strange. I use it all the time and it works like a charm. For a single column of values, you should probably use something like the following: myarr = TableIO.readColumns("myfile.txt", "#")[0] And that will give you the column of numbers as a single 1-D array (if you omit the [0] at the end, you will get a 1 element list where the element is your array -- if there are more columns, each list element is another column). Scott On Sun, Nov 13, 2005 at 10:59:36AM -0500, Darren Dale wrote: > Hi Alan, > > On Saturday 12 November 2005 5:37 pm, Alan G Isaac wrote: > > On Thu, 10 Nov 2005, Darren Dale apparently wrote: > > > I'm reading arrays of data from an ascii file and > > > converting to appropriate numerical types. The data files > > > can get pretty big, I was wondering if someone here might > > > have a suggestion on how to speed things up. > > > > This is a common request on the SciPy Users list. > > Now that you mention it, I think I may have asked once before. > > > I asked Mike Miller to consider releasing TableIO > > http://php.iupui.edu/~mmiller3/python/ > > under a more Pythonic license so that SciPy could use it. > > He initially sounded willing, but he never actually sent > > a message releasing the code under another license. Nor did > > he say that he was ultimately unwilling to do so. > > If you can work with GPL'd code, you might try TableIO. > > I'll bcc: him on this to see if he has decided. > > Thank you for the suggestion. I looked at TableIO, but I havent been able to > get it working properly. 
I tried to read a file that had '1e7,' repeated a > million times, and it gave me an array that looked like > > array([[ 10000000.], > [ 0.], > [ 10000000.], > [ 0.], > [ 10000000.], > [ 0.], > [ 10000000.], > [ 0.], > [ 10000000.], > [ 0.], > [ 10000000.], > [ 0.], > [ 10000000.], > [ 0.], > [ 10000000.], > [ 0.], > [ 10000000.], > [ 0.], > [ 10000000.], > [ 0.]]) > > I am considering using scipy's fromfile function, which gives a big speed > boost over io.read_array, but I don't understand what this docstring is > trying to tell me: > > WARNING: This function should be used sparingly, as it is not > a robust method of persistence. But it can be useful to > read in simply-formatted or binary data quickly. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user -- -- Scott M. Ransom Address: NRAO Phone: (434) 296-0320 520 Edgemont Rd. email: sransom at nrao.edu Charlottesville, VA 22903 USA GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 From nadavh at visionsense.com Sun Nov 13 13:16:33 2005 From: nadavh at visionsense.com (Nadav Horesh) Date: Sun, 13 Nov 2005 20:16:33 +0200 Subject: [SciPy-user] Problem compiling scipy fromsvprepository Message-ID: <07C6A61102C94148B8104D42DE95F7E86DF06E@exchange2k.envision.co.il> I installed scipy_core (ver 0.6.1) and after scipy from the svn repository. The installation went smoothly but when I tried to "import scipy" it complains that clapack_sgesv is missing. "nm clapack.so" discovered that all the function of the form clapack_xxx are undefined. Any ideas? System: gentoo linux with blas+lapack+atlas, gcc3.3.6 Nadav. From m.moghimi at gmail.com Sun Nov 13 13:17:51 2005 From: m.moghimi at gmail.com (Mohammad Moghimi) Date: Sun, 13 Nov 2005 21:47:51 +0330 Subject: [SciPy-user] scipy installation Message-ID: Hi I have tried to install scipy on python2.4.1 and I got these error massages: -------------------------------------------------------------------------------------------------------------------------------------------------------------------- fftw_info: NOT AVAILABLE dfftw_info: NOT AVAILABLE FFTW (http://www.fftw.org/) libraries not found. Directories to search for the libraries can be specified in the scipy_distutils/site.cfg file (section [fftw]) or by setting the FFTW environment variable. djbfft_info: NOT AVAILABLE DJBFFT (http://cr.yp.to/djbfft.html) libraries not found. Directories to search for the libraries can be specified in the scipy_distutils/site.cfg file (section [djbfft]) or by setting the DJBFFT environment variable. blas_opt_info: atlas_blas_threads_info: scipy_distutils.system_info.atlas_blas_threads_info NOT AVAILABLE atlas_blas_info: scipy_distutils.system_info.atlas_blas_info NOT AVAILABLE scipy_core\scipy_distutils\system_info.py:982: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the scipy_distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. warnings.warn(AtlasNotFoundError.__doc__) blas_info: NOT AVAILABLE scipy_core\scipy_distutils\system_info.py:991: UserWarning: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the scipy_distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. 
warnings.warn(BlasNotFoundError.__doc__) blas_src_info: NOT AVAILABLE scipy_core\scipy_distutils\system_info.py:994: UserWarning: Blas (http://www.netlib.org/blas/) sources not found. Directories to search for the sources can be specified in the scipy_distutils/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable. warnings.warn(BlasSrcNotFoundError.__doc__) NOT AVAILABLE Traceback (most recent call last): File "setup.py", line 111, in ? setup_package(ignore_packages) File "setup.py", line 85, in setup_package ignore_packages = ignore_packages) File "scipy_core\scipy_distutils\misc_util.py", line 475, in get_subpackages config = setup_module.configuration(*args) File "E:\pythonproject\SciPy_complete-0.3.2\Lib\integrate\setup_integrate.py ", line 22, in configuration raise NotFoundError,'no blas resources found' scipy_distutils.system_info.NotFoundError: no blas resources found -------------------------------------------------------------------------------------------------------------------------------------------------------------------- can you help me, what exact packages I should install to use scipy? -- Mohammad do you Python?!! -------------- next part -------------- An HTML attachment was scrubbed... URL: From nadavh at visionsense.com Sun Nov 13 13:28:51 2005 From: nadavh at visionsense.com (Nadav Horesh) Date: Sun, 13 Nov 2005 20:28:51 +0200 Subject: [SciPy-user] scipy installation Message-ID: <07C6A61102C94148B8104D42DE95F7E86DF06F@exchange2k.envision.co.il> Which OS? Do you have blas/lapack/atlas/fftw libraries installed? Nadav. From m.moghimi at gmail.com Sun Nov 13 14:49:55 2005 From: m.moghimi at gmail.com (Mohammad Moghimi) Date: Sun, 13 Nov 2005 23:19:55 +0330 Subject: [SciPy-user] scipy installation In-Reply-To: <07C6A61102C94148B8104D42DE95F7E86DF06F@exchange2k.envision.co.il> References: <07C6A61102C94148B8104D42DE95F7E86DF06F@exchange2k.envision.co.il> Message-ID: Hi Windows XP Prof, Or Linux SuSE 9.3 Prof. I couldn't install it on neither of two operating systems. What are these packages? I mean blas/lapack/atlas/fftw. can you tell me where can I download them? for example I have tried to get fftw from www.fftw.org but I can't find any python version there? On 11/13/05, Nadav Horesh wrote: > > Which OS? Do you have blas/lapack/atlas/fftw libraries installed? > > Nadav. > > > -- -- Mohammad do you Python?!! -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryanlists at gmail.com Sun Nov 13 15:44:11 2005 From: ryanlists at gmail.com (Ryan Krauss) Date: Sun, 13 Nov 2005 15:44:11 -0500 Subject: [SciPy-user] scipy installation In-Reply-To: References: <07C6A61102C94148B8104D42DE95F7E86DF06F@exchange2k.envision.co.il> Message-ID: atlas/blas etc. are compiled programs and libraries, not python programs. They are included in the scipy installer for windows but require python 2.3. They should be available through Suse's package manager. Ryan On 11/13/05, Mohammad Moghimi wrote: > Hi > Windows XP Prof, Or Linux SuSE 9.3 Prof. I couldn't install it on neither > of two operating systems. > What are these packages? I mean blas/lapack/atlas/fftw. > can you tell me where can I download them? > for example I have tried to get fftw from www.fftw.org but I can't find any > python version there? > > > On 11/13/05, Nadav Horesh wrote: > > Which OS? Do you have blas/lapack/atlas/fftw libraries installed? > > > > Nadav. > > > > > > > > > > -- > -- Mohammad > do you Python?!! 
> _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > > > From dickrp at wckn.com Sun Nov 13 17:59:08 2005 From: dickrp at wckn.com (Robert Dick) Date: Sun, 13 Nov 2005 16:59:08 -0600 Subject: [SciPy-user] SciPy in svn can't find "comb" or "factorial", is noisy, and doesn't find AMD math lib Message-ID: <200511131659.08309.dickrp@wckn.com> The latest release is a year old so I built SciPy_core and SciPy from svn. Bugs: Attempt by Lib/signal/filter_design.py to import 'comb' from scipy fails. Attempt by Lib/signal/signaltools.py to import 'factorial' from scipy fails. Fix by importing from scipy.utils, instead. Odd default: SciPy imports are quite noisy, indicating every package imported. For normal users, a reasonable default in _import_tools.py would be try: # print 'Importing',package_name,'to',parent_name exec ('import '+package_name, frame.f_globals,frame.f_locals Those interested in using AMD's fast math libraries may find this site.cfg useful. [blas] # for overriding the names of the atlas libraries blas_libs = acml language = f77 [lapack] lapack_libs = acml language = f77 -Robert Dick- From pearu at scipy.org Sun Nov 13 17:27:48 2005 From: pearu at scipy.org (Pearu Peterson) Date: Sun, 13 Nov 2005 16:27:48 -0600 (CST) Subject: [SciPy-user] SciPy in svn can't find "comb" or "factorial", is noisy, and doesn't find AMD math lib In-Reply-To: <200511131659.08309.dickrp@wckn.com> References: <200511131659.08309.dickrp@wckn.com> Message-ID: On Sun, 13 Nov 2005, Robert Dick wrote: > The latest release is a year old so I built SciPy_core and SciPy from svn. > > Bugs: Attempt by Lib/signal/filter_design.py to import 'comb' from scipy > fails. Attempt by Lib/signal/signaltools.py to import 'factorial' from scipy > fails. Fix by importing from scipy.utils, instead. Thanks, fixed in svn. > Odd default: SciPy imports are quite noisy, indicating every package imported. > For normal users, a reasonable default in _import_tools.py would be > try: > # print 'Importing',package_name,'to',parent_name > exec ('import '+package_name, frame.f_globals,frame.f_locals It's not default. It's a work-in-progress noise. Please ignore it, someday it will be gone. Pearu From dickrp at wckn.com Sun Nov 13 21:59:28 2005 From: dickrp at wckn.com (Robert Dick) Date: Sun, 13 Nov 2005 20:59:28 -0600 Subject: [SciPy-user] scipy.linalg.eig() returns transposed eigenvector matrix Message-ID: <200511132059.28334.dickrp@wckn.com> scipy.linalg.eig() returns transposed eigenvector matrix Results with old Numeric: >>> import LinearAlgebra as la >>> from Numeric import * >>> la.eigenvectors(array([[1.0, 1.0], [1.0, 1.0]])) (array([ 2., 0.]), array([[ 0.70710678, 0.70710678], [-0.70710678, 0.70710678]])) Results with svn current SciPy linked against AMD ACML BLAS/LAPACK. >>> import scipy.linalg as la >>> from scipy import * >>> la.eig(array([[1.0, 1.0], [1.0, 1.0]])) (array([ 2.+0.j, 0.+0.j]), array([[ 0.70710678, -0.70710678], [ 0.70710678, 0.70710678]])) >>> la.eig(array([[1.0, 1.0], [1.0, 1.0]]))[1].transpose() array([[ 0.70710678, 0.70710678], [-0.70710678, 0.70710678]]) Can somebody else reproduce this? This result contradicts the documentation. I'm linking against the AMD ACML BLAS/LAPACK library and indicating that f77 was used. That library has vanilla f77 routines as well as C wrappers. The C wrappers assume column major matrices (f77-style). 
However, that shouldn't have mattered because the f77 routines should have been used. -Robert Dick- From oliphant at ee.byu.edu Sun Nov 13 22:46:39 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Sun, 13 Nov 2005 20:46:39 -0700 Subject: [SciPy-user] efficiently importing ascii data In-Reply-To: <200511131059.37107.dd55@cornell.edu> References: <200511101436.58074.dd55@cornell.edu> <200511131059.37107.dd55@cornell.edu> Message-ID: <4378089F.8000902@ee.byu.edu> Darren Dale wrote: >I am considering using scipy's fromfile function, which gives a big speed >boost over io.read_array, but I don't understand what this docstring is >trying to tell me: > > WARNING: This function should be used sparingly, as it is not > a robust method of persistence. But it can be useful to > read in simply-formatted or binary data quickly. > > It's simply trying to advertise that fromfile and tofile are very raw functions. They should work fine as far as they go, but there may be easier solutions. I don't expect the capability of these to increase. But, for example, something like a TableIO could take advantage of them. -Travis From oliphant at ee.byu.edu Sun Nov 13 22:51:29 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Sun, 13 Nov 2005 20:51:29 -0700 Subject: [SciPy-user] Binaries for scipy created Message-ID: <437809C1.5020909@ee.byu.edu> I've used the scipy sourceforge site to place binaries for a "release" of full scipy (built on the new core). The version is 0.4.3. There is an rpm and windows binaries, as well as a full tar ball. I know there are people out there who would like to try scipy but don't want to wrestle with the install. The rpms and/or windows binaries might help. This is the first time, I've made binaries for other people to use. Hopefully they work fine, but errors may be reported. -Travis From dd55 at cornell.edu Sun Nov 13 23:31:28 2005 From: dd55 at cornell.edu (Darren Dale) Date: Sun, 13 Nov 2005 23:31:28 -0500 Subject: [SciPy-user] efficiently importing ascii data In-Reply-To: <4378089F.8000902@ee.byu.edu> References: <200511101436.58074.dd55@cornell.edu> <200511131059.37107.dd55@cornell.edu> <4378089F.8000902@ee.byu.edu> Message-ID: <200511132331.28415.dd55@cornell.edu> On Sunday 13 November 2005 10:46 pm, you wrote: > Darren Dale wrote: > >I am considering using scipy's fromfile function, which gives a big speed > >boost over io.read_array, but I don't understand what this docstring is > >trying to tell me: > > > > ? ?WARNING: This function should be used sparingly, as it is not > > ? ?a robust method of persistence. ?But it can be useful to > > ? ?read in simply-formatted or binary data quickly. > > It's simply trying to advertise that fromfile and tofile are very raw > functions. ? They should work fine as far as they go, but there may be > easier solutions. ?I don't expect the capability of these to increase. > But, for example, something like a TableIO could take advantage of them. I was wondering if the fromstring function could be expanded to include ascii strings. Right now, I'm considering reading my messy file, creating a neatly formatted string, writing that string back into a temporary file, and then using the fromfile function. It would be so much nicer to just create the array from the string itself. 
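For now the detour I have in mind looks roughly like this -- only a sketch, the helper name is made up, and it assumes fromfile will happily parse ascii from a real open file when given the sep keyword:

import os, tempfile
from scipy import fromfile

def array_from_ascii(text, sep=','):
    # round-trip the cleaned-up string through a scratch file and
    # let fromfile do the actual parsing
    name = tempfile.mktemp()
    f = open(name, 'w')
    f.write(text)
    f.close()
    f = open(name)
    try:
        return fromfile(f, sep=sep)
    finally:
        f.close()
        os.remove(name)

print array_from_ascii('1e7,150,1e-5')

Clumsy, but at least the parsing loop stays out of Python.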
Darren From dickrp at wckn.com Mon Nov 14 00:59:42 2005 From: dickrp at wckn.com (Robert Dick) Date: Sun, 13 Nov 2005 23:59:42 -0600 Subject: [SciPy-user] eig() segfaults on SuSE 9.3 with ACML, Numeric's eigenvectors works Message-ID: <200511132359.43082.dickrp@wckn.com> SuSE 9.3 comes with libblas and liblapack. However, they don't define srotmg. Therefore, SciPy doesn't work with them. AMD provides an optimized BLAS/LAPACK library called ACML. After installing that and directing SciPy to it with site.cfg, SciPy builds. Unfortunately, linalg.eig(), produces results that do not conform to the documentation (the matrix holding the eigenvectors is transposed). I transpose this matrix in my own code but find that eig() also intermittantly segfaults for matrices of significant size. My machine is otherwise quite stable: this is probably not caused by a hardware problem. Has anybody been able to get SciPy working reliably on an Athlon SuSE 9.3 machine? Does the following code work without segfaulting for anybody else? It runs fine as long as I use Numeric but segfaults if I use SciPy. By the way, I'm linking against AMD's ACML BLAS/LAPACK library from both Numeric and SciPy. ---- import Numeric as n1 import LinearAlgebra as la1 import scipy as n2 import scipy.linalg as la2 while 1: mat2 = n2.stats.rand(5, 5) mat1 = n1.array(mat2) print la1.eigenvectors(mat1) # print la2.eig(mat2) ---- Does the following code work reliably for anybody else? ---- import scipy.linalg as la1 import scipy as n1 la1.eig(n1.array([[1.0, 1.0], [1.0, 1.0]])) ---- Should be (from Numeric's LinearAlgebra) (array([ 2., 0.]), array([[ 0.70710678, 0.70710678], [-0.70710678, 0.70710678]])) Unfortunately, it is transposed (array([ 2.+0.j, 0.+0.j]), array([[ 0.70710678, -0.70710678], [ 0.70710678, 0.70710678]])) I have some PCA and non-linear multi-layer neural network backprop code I would be happy to give to SciPy. However, I can't properly test this code with SciPy unless I have a stable eig(). Thanks for any suggestions, -Robert Dick- From NadavH at VisionSense.com Mon Nov 14 01:13:14 2005 From: NadavH at VisionSense.com (Nadav Horesh) Date: Mon, 14 Nov 2005 08:13:14 +0200 Subject: [SciPy-user] scipy installation In-Reply-To: References: <07C6A61102C94148B8104D42DE95F7E86DF06F@exchange2k.envision.co.il> Message-ID: <43782AFA.8050406@VisionSense.com> An HTML attachment was scrubbed... URL: From arnd.baecker at web.de Mon Nov 14 02:00:29 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Mon, 14 Nov 2005 08:00:29 +0100 (CET) Subject: [SciPy-user] SciPy in svn can't find "comb" or "factorial", is noisy, and doesn't find AMD math lib In-Reply-To: <200511131659.08309.dickrp@wckn.com> References: <200511131659.08309.dickrp@wckn.com> Message-ID: On Sun, 13 Nov 2005, Robert Dick wrote: [...] > Those interested in using AMD's fast math libraries may find this site.cfg > useful. > > [blas] > # for overriding the names of the atlas libraries > blas_libs = acml > language = f77 > > [lapack] > lapack_libs = acml > language = f77 I always wanted to try this out - thanx! How much faster is ACML in comparison with ATLAS? 
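Even a throwaway timing under both builds would be interesting -- something along these lines (sizes picked arbitrarily, and I'm assuming the top-level dot and stats.rand spellings used elsewhere in this thread):

from time import time
from scipy import stats, dot

a = stats.rand(1000, 1000)
b = stats.rand(1000, 1000)
start = time()
c = dot(a, b)
print 'dot of two 1000x1000 arrays:', time() - start, 'seconds'
# linalg.eig would be the other interesting number, once the segfault is sorted out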
Best, Arnd From arnd.baecker at web.de Mon Nov 14 02:48:27 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Mon, 14 Nov 2005 08:48:27 +0100 (CET) Subject: [SciPy-user] eig() segfaults on SuSE 9.3 with ACML, Numeric's eigenvectors works In-Reply-To: <200511132359.43082.dickrp@wckn.com> References: <200511132359.43082.dickrp@wckn.com> Message-ID: On Sun, 13 Nov 2005, Robert Dick wrote: > SuSE 9.3 comes with libblas and liblapack. However, they don't define srotmg. > Therefore, SciPy doesn't work with them. > > AMD provides an optimized BLAS/LAPACK library called ACML. After installing > that and directing SciPy to it with site.cfg, SciPy builds. Unfortunately, > linalg.eig(), produces results that do not conform to the documentation (the > matrix holding the eigenvectors is transposed). I transpose this matrix in > my own code but find that eig() also intermittantly segfaults for matrices of > significant size. My machine is otherwise quite stable: this is probably not > caused by a hardware problem. > > Has anybody been able to get SciPy working reliably on an Athlon SuSE 9.3 > machine? > > Does the following code work without segfaulting for anybody else? It runs > fine as long as I use Numeric but segfaults if I use SciPy. By the way, I'm > linking against AMD's ACML BLAS/LAPACK library from both Numeric and SciPy. > ---- > import Numeric as n1 > import LinearAlgebra as la1 > import scipy as n2 > import scipy.linalg as la2 > > while 1: > mat2 = n2.stats.rand(5, 5) > mat1 = n1.array(mat2) > print la1.eigenvectors(mat1) > # print la2.eig(mat2) > ---- I also get a segfault for import scipy as n2 import scipy.linalg as la2 mat2 = n2.stats.rand(5, 5) print la2.eig(mat2) This is on SUSE 10.0, 64Bit Opteron, In [3]: scipy.__core_version__ Out[3]: '0.6.2.1482' In [4]: scipy.__scipy_version__ Out[4]: '0.4.2_1442' I am using ATLAS, so I don't think it is an ACML-ATLAS issue. Some more details: gdb (gdb) file /scr/python/bin/python (gdb) run Starting program: /scr/python/bin/python [Thread debugging using libthread_db enabled] [New Thread 46912507335168 (LWP 31397)] Python 2.4.2 (#1, Oct 4 2005, 10:10:47) [GCC 3.4.4] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import scipy as n2 [...] >>> import scipy.linalg as la2 >>> mat2 = n2.stats.rand(5, 5) >>> print la2.eig(mat2) Program received signal SIGSEGV, Segmentation fault. 
[Switching to Thread 46912507335168 (LWP 31397)] 0x00002aaaab3a2cdb in memmove () from /lib64/tls/libc.so.6 (gdb) bt #0 0x00002aaaab3a2cdb in memmove () from /lib64/tls/libc.so.6 #1 0x00002aaaab96246c in PyArray_CopyInto (dest=0x2aaab55609e0, src=0x8) at arrayobject.c:658 #2 0x00002aaaab96735b in array_imag_set (self=0x2aaab55608f0, val=0x83000d690097dd63) at arrayobject.c:4260 #3 0x00000000004c21ea in getset_set (descr=0x2aaaaab37ea8, obj=0x2aaab55608f0, value=0x2aaab5560990) at descrobject.c:200 #4 0x000000000043cc05 in PyObject_GenericSetAttr (obj=0x2aaab55608f0, name=0x2aaaaab08ba0, value=0x2aaab5560990) at object.c:1378 #5 0x000000000043c690 in PyObject_SetAttr (v=0x2aaab55608f0, name=0x2aaaaab08ba0, value=0x2aaab5560990) at object.c:1128 #6 0x0000000000475f53 in PyEval_EvalFrame (f=0x8a7b10) at ceval.c:1761 #7 0x0000000000479fb1 in PyEval_EvalFrame (f=0x742340) at ceval.c:3640 #8 0x000000000047ad2f in PyEval_EvalCodeEx (co=0x2aaab2610b20, globals=0x8c71b1, locals=0x83000d690097dd6b, args=0x742340, argcount=1, kws=0x747678, kwcount=0, defs=0x2aaab260fb48, defcount=5, closure=0x0) at ceval.c:2736 #9 0x00000000004788f7 in PyEval_EvalFrame (f=0x7474e0) at ceval.c:3650 #10 0x000000000047ad2f in PyEval_EvalCodeEx (co=0x2aaab24dcb90, globals=0x8c71b1, locals=0x83000d690097dd6b, args=0x7474e0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2736 #11 0x000000000047af72 in PyEval_EvalCode (co=0x83000d690097dd63, globals=0x8c71b1, locals=0x83000d690097dd6b) at ceval.c:484 #12 0x00000000004a1c72 in PyRun_InteractiveOneFlags (fp=0x2aaaaab13a30, filename=0x4cbf24 "", flags=0x7fffffd28f2c) at pythonrun.c:1265 #13 0x00000000004a1e04 in PyRun_InteractiveLoopFlags (fp=0x2aaaab556b00, filename=0x4cbf24 "", flags=0x7fffffd28f2c) at pythonrun.c:695 #14 0x00000000004a2350 in PyRun_AnyFileExFlags (fp=0x2aaaab556b00, filename=0x8c71b1 "\f~.\\$\034\a?^\206W6X\237?", closeit=0, flags=0x7fffffd28f2c) at pythonrun.c:658 #15 0x0000000000410788 in Py_Main (argc=0, argv=0x7fffffd2a9ce) at main.c:484 #16 0x00002aaaab34d5aa in __libc_start_main () from /lib64/tls/libc.so.6 #17 0x000000000040fdfa in _start () at start.S:113 #18 0x00007fffffd29028 in ?? () #19 0x00002aaaaabc19c0 in rtld_errno () from /lib64/ld-linux-x86-64.so.2 Travis, does this already help you enough to see where the problem comes from, or do you need further input? Concerning the transpose: If I am not mistaken, already "old" scipy leads to a transposed matrix. Best, Arnd From oliphant at ee.byu.edu Mon Nov 14 06:37:08 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 14 Nov 2005 04:37:08 -0700 Subject: [SciPy-user] New scipy core requires rebuild of extension modules Message-ID: <437876E4.30309@ee.byu.edu> Version 1484 (current SVN) of scipy core requires a complete rebuild because of changes to the C-API. Sorry about that. I've changed the version number to 0.7 to reflect the need to rebuild extension modules. I'm almost done with the C-API documentation and so hopefully shouldn't have to change things much more. The new call is multi = PyArray_MultIterNew(int n, ...) which returns a PyArrayMultiIterObject * for easier access to array broadcasting. Simply call the function with your Python Objects as arguments. 
In return you get an object that can be used like: i=PyArray_MultiIter_SIZE(multi); while(i--) { /* deal with PyArray_MultiIter_DATA(multi, n) which is a pointer to the data to use */ PyArray_MultiIter_NEXT(multi); } -Travis From wjdandreta at att.net Mon Nov 14 08:10:48 2005 From: wjdandreta at att.net (Bill Dandreta) Date: Mon, 14 Nov 2005 08:10:48 -0500 Subject: [SciPy-user] efficiently importing ascii data In-Reply-To: <200511132331.28415.dd55@cornell.edu> References: <200511101436.58074.dd55@cornell.edu> <200511131059.37107.dd55@cornell.edu> <4378089F.8000902@ee.byu.edu> <200511132331.28415.dd55@cornell.edu> Message-ID: <43788CD8.2070201@att.net> Darren Dale wrote: >I was wondering if the fromstring function could be expanded to include ascii >strings. Right now, I'm considering reading my messy file, creating a neatly >formatted string, writing that string back into a temporary file, and then >using the fromfile function. It would be so much nicer to just create the >array from the string itself. > >Darren > You might be able to use an 'in memory file' using StringIO from the StringIO module. Bill From dd55 at cornell.edu Mon Nov 14 09:47:44 2005 From: dd55 at cornell.edu (Darren Dale) Date: Mon, 14 Nov 2005 09:47:44 -0500 Subject: [SciPy-user] efficiently importing ascii data In-Reply-To: <43788CD8.2070201@att.net> References: <200511101436.58074.dd55@cornell.edu> <200511132331.28415.dd55@cornell.edu> <43788CD8.2070201@att.net> Message-ID: <200511140947.44555.dd55@cornell.edu> On Monday 14 November 2005 08:10 am, Bill Dandreta wrote: > Darren Dale wrote: > >I was wondering if the fromstring function could be expanded to include > > ascii strings. Right now, I'm considering reading my messy file, creating > > a neatly formatted string, writing that string back into a temporary > > file, and then using the fromfile function. It would be so much nicer to > > just create the array from the string itself. > > > >Darren > > You might be able to use an 'in memory file' using StringIO from the > StringIO module. I don't think fromfile plays nice with StringIO: from scipy import * from StringIO import StringIO f=StringIO('1e7,150,1e-5') # at this point, the file-like object f should be ready for reading x=fromfile(f,sep=',') Traceback (most recent call last): File "test.py", line 5, in ? x=fromfile(f,sep=',') IOError: first argument must be an open file Darren From alopez at imim.es Mon Nov 14 09:51:54 2005 From: alopez at imim.es (LOPEZ GARCIA DE LOMANA, ADRIAN) Date: Mon, 14 Nov 2005 15:51:54 +0100 Subject: [SciPy-user] Minimizing functions of two variables with fmin_bfgs Message-ID: <66373AD054447F47851FCC5EB49B361101490A85@basquet.imim.es> Hi all, I have a problem using the optimization modules. I'm using fmin_bfgs. It works very well for minimizing functions of just one parameter: import Numeric import scipy from scipy.optimize import fmin_bfgs def fitness(p): return p**2 def fitness_der(p): return 2 * p p = [158.0] popt = fmin_bfgs(fitness, p, fprime = fitness_der) print popt but while I pretend to expand it to a multiparameter function using a vector, import Numeric import scipy from scipy.optimize import fmin_bfgs def fitness(p): return p[0]**2 + p[1] def fitness_der(p): return [2 * p[0] + 1, 1] p = [158.0, 314.0] popt = fmin_bfgs(fitness, p, fprime = fitness_der) print popt it crashes: Traceback (most recent call last): File "hybrid.py", line 12, in ? 
popt = fmin_bfgs(fitness, p, fprime = fitness_der) File "/usr/local/lib/python2.4/site-packages/scipy/optimize/optimize.py", line 675, in fmin_bfgs yk = gfkp1 - gfk TypeError: unsupported operand type(s) for -: 'list' and 'list' Some ideas? How can I minimize a function of several parameters? "fprime = fitness_der" does no understand that the partial derivatives are a list of same dimension of "p"? Thanks a lot for your comments in advance, Adri?n. From nwagner at mecha.uni-stuttgart.de Mon Nov 14 10:02:58 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Mon, 14 Nov 2005 16:02:58 +0100 Subject: [SciPy-user] Minimizing functions of two variables with fmin_bfgs In-Reply-To: <66373AD054447F47851FCC5EB49B361101490A85@basquet.imim.es> References: <66373AD054447F47851FCC5EB49B361101490A85@basquet.imim.es> Message-ID: <4378A722.6010605@mecha.uni-stuttgart.de> LOPEZ GARCIA DE LOMANA, ADRIAN wrote: >Hi all, > >I have a problem using the optimization modules. I'm using fmin_bfgs. It works very well for minimizing functions of just one parameter: > >import Numeric >import scipy >from scipy.optimize import fmin_bfgs > >def fitness(p): > return p**2 > >def fitness_der(p): > return 2 * p > >p = [158.0] >popt = fmin_bfgs(fitness, p, fprime = fitness_der) >print popt > >but while I pretend to expand it to a multiparameter function using a vector, > >import Numeric >import scipy >from scipy.optimize import fmin_bfgs > >def fitness(p): > return p[0]**2 + p[1] > >def fitness_der(p): > return [2 * p[0] + 1, 1] > >p = [158.0, 314.0] >popt = fmin_bfgs(fitness, p, fprime = fitness_der) >print popt > >it crashes: > >Traceback (most recent call last): > File "hybrid.py", line 12, in ? > popt = fmin_bfgs(fitness, p, fprime = fitness_der) > File "/usr/local/lib/python2.4/site-packages/scipy/optimize/optimize.py", line 675, in fmin_bfgs > yk = gfkp1 - gfk >TypeError: unsupported operand type(s) for -: 'list' and 'list' > >Some ideas? How can I minimize a function of several parameters? "fprime = fitness_der" does no understand that the partial derivatives are a list of same dimension of "p"? > >Thanks a lot for your comments in advance, > >Adri?n. > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > Try import scipy from scipy.optimize import fmin_bfgs def fitness(p): return p[0]**2 + p[1]**2 def fitness_der(p): return scipy.array(([2 * p[0] , 2.0*p[1]] )) p = [1.0, 1.0] popt = fmin_bfgs(fitness, p, fprime = fitness_der) print popt From ckkart at hoc.net Mon Nov 14 10:11:43 2005 From: ckkart at hoc.net (Christian Kristukat) Date: Mon, 14 Nov 2005 16:11:43 +0100 Subject: [SciPy-user] Minimizing functions of two variables with fmin_bfgs In-Reply-To: <66373AD054447F47851FCC5EB49B361101490A85@basquet.imim.es> References: <66373AD054447F47851FCC5EB49B361101490A85@basquet.imim.es> Message-ID: <4378A92F.5090303@hoc.net> LOPEZ GARCIA DE LOMANA, ADRIAN wrote: > Hi all, > > I have a problem using the optimization modules. I'm using fmin_bfgs. 
It works very well for minimizing functions of just one parameter: > > import Numeric > import scipy > from scipy.optimize import fmin_bfgs > > def fitness(p): > return p**2 > > def fitness_der(p): > return 2 * p > > p = [158.0] > popt = fmin_bfgs(fitness, p, fprime = fitness_der) > print popt > > but while I pretend to expand it to a multiparameter function using a vector, > > import Numeric > import scipy > from scipy.optimize import fmin_bfgs > > def fitness(p): > return p[0]**2 + p[1] > > def fitness_der(p): > return [2 * p[0] + 1, 1] > > p = [158.0, 314.0] > popt = fmin_bfgs(fitness, p, fprime = fitness_der) > print popt > Unfortunately the minimizers in scipy.optimize use different input types. fmin_bfgs needs that the derivatives are returned as a scipy.array, however e.g. fmin_tnc insists on recieving a python list. Btw. the minimizer will fail with your function as it doesn't have a global minimum. Regards, Christian From alopez at imim.es Mon Nov 14 10:12:08 2005 From: alopez at imim.es (LOPEZ GARCIA DE LOMANA, ADRIAN) Date: Mon, 14 Nov 2005 16:12:08 +0100 Subject: [SciPy-user] Minimizing functions of two variables withfmin_bfgs Message-ID: <66373AD054447F47851FCC5EB49B361101490A86@basquet.imim.es> THANKS A LOT !!!! It works fine !!! Adri?n. -----Original Message----- From: scipy-user-bounces at scipy.net on behalf of Nils Wagner Sent: Mon 14/11/2005 15:02 To: SciPy Users List Subject: Re: [SciPy-user] Minimizing functions of two variables withfmin_bfgs LOPEZ GARCIA DE LOMANA, ADRIAN wrote: >Hi all, > >I have a problem using the optimization modules. I'm using fmin_bfgs. It works very well for minimizing functions of just one parameter: > >import Numeric >import scipy >from scipy.optimize import fmin_bfgs > >def fitness(p): > return p**2 > >def fitness_der(p): > return 2 * p > >p = [158.0] >popt = fmin_bfgs(fitness, p, fprime = fitness_der) >print popt > >but while I pretend to expand it to a multiparameter function using a vector, > >import Numeric >import scipy >from scipy.optimize import fmin_bfgs > >def fitness(p): > return p[0]**2 + p[1] > >def fitness_der(p): > return [2 * p[0] + 1, 1] > >p = [158.0, 314.0] >popt = fmin_bfgs(fitness, p, fprime = fitness_der) >print popt > >it crashes: > >Traceback (most recent call last): > File "hybrid.py", line 12, in ? > popt = fmin_bfgs(fitness, p, fprime = fitness_der) > File "/usr/local/lib/python2.4/site-packages/scipy/optimize/optimize.py", line 675, in fmin_bfgs > yk = gfkp1 - gfk >TypeError: unsupported operand type(s) for -: 'list' and 'list' > >Some ideas? How can I minimize a function of several parameters? "fprime = fitness_der" does no understand that the partial derivatives are a list of same dimension of "p"? > >Thanks a lot for your comments in advance, > >Adri?n. 
> >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > Try import scipy from scipy.optimize import fmin_bfgs def fitness(p): return p[0]**2 + p[1]**2 def fitness_der(p): return scipy.array(([2 * p[0] , 2.0*p[1]] )) p = [1.0, 1.0] popt = fmin_bfgs(fitness, p, fprime = fitness_der) print popt _______________________________________________ SciPy-user mailing list SciPy-user at scipy.net http://www.scipy.net/mailman/listinfo/scipy-user From aisaac at american.edu Mon Nov 14 10:26:35 2005 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 14 Nov 2005 10:26:35 -0500 Subject: [SciPy-user] =?utf-8?q?Minimizing_functions_of_two_variables_with?= =?utf-8?q?=09fmin_bfgs?= In-Reply-To: <4378A92F.5090303@hoc.net> References: <66373AD054447F47851FCC5EB49B361101490A85@basquet.imim.es><4378A92F.5090303@hoc.net> Message-ID: On Mon, 14 Nov 2005, Christian Kristukat apparently wrote: > Unfortunately the minimizers in scipy.optimize use > different input types. fmin_bfgs needs that the > derivatives are returned as a scipy.array, however e.g. > fmin_tnc insists on recieving a python list. Btw. the > minimizer will fail with your function as it doesn't have > a global minimum. Is this just historical accident? On the face of it, any sequence should do. Thanks, Alan From ckkart at hoc.net Mon Nov 14 10:34:02 2005 From: ckkart at hoc.net (Christian Kristukat) Date: Mon, 14 Nov 2005 16:34:02 +0100 Subject: [SciPy-user] Minimizing functions of two variables with fmin bfgs In-Reply-To: References: <66373AD054447F47851FCC5EB49B361101490A85@basquet.imim.es><4378A92F.5090303@hoc.net> Message-ID: <4378AE6A.1070902@hoc.net> Alan G Isaac wrote: > On Mon, 14 Nov 2005, Christian Kristukat apparently wrote: > >>Unfortunately the minimizers in scipy.optimize use >>different input types. fmin_bfgs needs that the >>derivatives are returned as a scipy.array, however e.g. >>fmin_tnc insists on recieving a python list. Btw. the >>minimizer will fail with your function as it doesn't have >>a global minimum. > > > Is this just historical accident? > On the face of it, any sequence should do. I agree. But currently that's not the case. Especially considering the temporary memory consumption when converting array<->list this should be taken care of. Regards, Christian From stephen.walton at csun.edu Mon Nov 14 11:32:32 2005 From: stephen.walton at csun.edu (Stephen Walton) Date: Mon, 14 Nov 2005 08:32:32 -0800 Subject: [SciPy-user] Scipy, FC4 and g77 In-Reply-To: <207E29F9-3E44-44B3-A452-FDD01DA22365@sandia.gov> References: <207E29F9-3E44-44B3-A452-FDD01DA22365@sandia.gov> Message-ID: <4378BC20.2050406@csun.edu> Rick Muller wrote: >I'm increasingly frustrated by the difficulties in building Scipy on >FC4. I've gotten a lot of very useful help from members of this list, >but I still do not have a working version of the entire program. > I have Scipy working on FC4 but with some work. To summarize the key points: I don't use the ATLAS libraries which are part of Fedora Extras, but build my own; they live in /usr/lib/atlas. I use g77 to build Scipy on some of my systems, and Absoft Fortran on others. For the g77 systems, I do have one unsupported hack, which is to symbolically link libg2c.a and libg2c.so from the gcc 3.2.3 library directory to the gcc 4.0.1 directory so that a link step which uses gcc4 can find them. 
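If scipy's build does not pick up /usr/lib/atlas on its own, pointing it there with a site.cfg next to setup.py is the usual route; this is the same mechanism used for ACML and for the Windows ATLAS binaries elsewhere in this archive, and the path below is only illustrative:

[atlas]
library_dirs = /usr/lib/atlas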
Stephen Walton From rmuller at sandia.gov Mon Nov 14 13:43:31 2005 From: rmuller at sandia.gov (Rick Muller) Date: Mon, 14 Nov 2005 11:43:31 -0700 Subject: [SciPy-user] Problems calling scipy.special.sph_harm Message-ID: I'm having trouble calling the spherical harmonics function in scipy.special.sph_harm. When I do something like: In [1]: from scipy.special import * In [2]: sph_harm(-2,2,0,0.1) Python(1363,0xa000ed68) malloc: *** error for object 0x3beed0: incorrect checksum for freed object - object was probably modified after being freed, break at szone_error to debug Python(1363,0xa000ed68) malloc: *** set a breakpoint in szone_error to debug Segmentation fault Any hints what I'm doing wrong? Rick Muller rmuller at sandia.gov From dickrp at wckn.com Tue Nov 15 01:46:19 2005 From: dickrp at wckn.com (Robert Dick) Date: Tue, 15 Nov 2005 00:46:19 -0600 Subject: [SciPy-user] [SciPy-dev] scipy.linalg.eig() returns transposed eigenvector matrix Message-ID: <200511150046.19597.dickrp@wckn.com> Fernando Perez wrote: >Travis Oliphant wrote: >>Pearu Peterson wrote: >>>This is a matter of definition. scipy.linalg.eig and >>>scipy.basic.linalg.eig return correct results according to their >>>documentation. Just scipy.linalg.eig assumes that eigenvectors are >>>returned column-wise, i.e. >>Thanks for the clarification, Pearu. I'm glad things are actually >>working as advertised. >If I may suggest, I think these two should be unified, though. It will be >seriously disconcerting for new users to find that > If the two are to be unified, I think scipy.basic should change. But, > that leads to a problem because of compatibility with Numeric. > > So, what to do? > > We could change scipy.basic.linalg.eig and keep > scipy.basic.linalg.eigenvectors as the old Numeric behavior. If you need to decide which one to change, identify the common case. Is it more common to access all dimensions of one eigenvector or access one dimension of many eigenvectors? The common case should be the easiest to express, i.e., if one wants the first eigenvector, should la.eig(m)[1][0] or la.eig(m)[1][:, 0] be used? The first convention (Numeric-style) maintains compatibility with Numeric and conforms with the documentation in "Guide to SciPy: Core System": The second element of the return tuple contains the eigenvectors in the rows (x[i] is the ith eigenvector). The second convention (current scipy.eig) is more MATLAB-like. Other comments: It might be good to change "Building New SciPy" to "Building SciPy" and change "Building SciPy" to "Building Old Versions of SciPy" at http://www.scipy.org/documentation/ When I decided to try SciPy I read the first, and apparently default, document. It indicates that Numeric is necessary for SciPy. As a consqeuence, I wasted a lot of time rewriting significant portions of Numeric in order to get it to build with ACML. Only after later getting SciPy from svn did I realize that Numeric is no longer required. The downloads page might best direct folks to the svn repository. Svn is easy but, when trying a new package, I generally go with tarball releases under the assumption that they are more stable. However, scipy's tarball is a year old. I would have gone straight to svn if the page suggested it and that would have resulted in a better first impression of SciPy. In case this sounds too negative, I should state that SciPy has great potential and I intend to keep using it and sending bug reports (or checking in, after establishing history). 
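For anyone unsure which layout a particular build actually returns, checking the eigen-equation on a small example settles it; a sketch using the same 2x2 matrix that appears earlier in the thread (whichever slice leaves a near-zero residual is the convention in force):

import scipy
import scipy.linalg as la

A = scipy.array([[1.0, 1.0], [1.0, 1.0]])
w, V = la.eig(A)
col = V[:, 0]    # column convention (current scipy.linalg.eig)
row = V[0]       # row convention (Numeric's eigenvectors)
print scipy.dot(A, col) - w[0] * col    # ~0 if eigenvectors sit in columns
print scipy.dot(A, row) - w[0] * row    # ~0 if eigenvectors sit in rows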
Best Regards, -Robert Dick- From oliphant at ee.byu.edu Tue Nov 15 02:03:28 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 15 Nov 2005 00:03:28 -0700 Subject: [SciPy-user] [SciPy-dev] scipy.linalg.eig() returns transposed eigenvector matrix In-Reply-To: <200511150046.19597.dickrp@wckn.com> References: <200511150046.19597.dickrp@wckn.com> Message-ID: <43798840.6080601@ee.byu.edu> Robert Dick wrote: >Other comments: > > >In case this sounds too negative, I should state that SciPy has great >potential and I intend to keep using it and sending bug reports (or checking >in, after establishing history). > > Don't worry about sounding negative. Good comments are always welcome. Most people who encounter the same problems wouldn't comment but just move on --- and we'd never know the real problem. So, Thank you. The website is definitely stale and needs fixing. From nwagner at mecha.uni-stuttgart.de Tue Nov 15 03:35:33 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 15 Nov 2005 09:35:33 +0100 Subject: [SciPy-user] Difficulties with integrate.odeint Message-ID: <43799DD5.2050902@mecha.uni-stuttgart.de> Hi all, I am going to integrate a set of ordinary differential equations \dot(z} = P(t) z , z(t=0) = z_0 with integrate.odeint. The matrix P(t) varies periodically with period T, P(t) = P(t+T). def v(t,omega_0,v_0): return v_0*sin(omega_0*t) def vp(t,omega_0,v_0): return v_0*omega_0*cos(omega_0*t) def P(t): # # Periodically varying system matrix # P(t+T) = P(T) # tmp=zeros((2*(n-1),2*(n-1)),Float) tmp[:n-1,n-1:] = identity(n-1) tmp[n-1:,:n-1] = -vp(t,omega_0,v_0)*MinvK_1-v(t,omega_0,v_0)**2*MinvK_2-MinvK_3-MinvK_4 tmp[n-1:,n-1:] = -v(t,omega_0,v_0)*MinvG return tmp def F(z,t): return dot(P(t),z) z,infodict = integrate.odeint(F, z0, t,full_output=1) lsoda-- at current t (=r1), mxstep (=i1) steps taken on this call before reaching tout in above message, i1 = 500 in above message, r1 = 0.2849206026085E-01 Excess work done on this call (perhaps wrong Dfun type). Run with full_output = 1 to get quantitative information. 
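One knob worth trying when lsoda stops with this message is mxstep, which odeint passes through to the integrator and whose default corresponds to the 500 internal steps reported above; raising it is only a sketch of a workaround, and it assumes the system is merely stiff or rapidly oscillating rather than mis-specified:

z, infodict = integrate.odeint(F, z0, t, full_output=1, mxstep=5000)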
The output of infodict is >>> infodict {'nfe': array([ 756, 0, 0, 0, 0, 1087514384, 1177877182, -1058152689, 1835251574, 1090228948, -534120349, -1057872112, 1177877180, 1087233807, 1842801181, -1058126336, -2070079459, 1089984242, 80, 80]), 'nje': array([ 24, 0, 0, 0, 0, 1079201176, -952557537, 1081298328, 1948672715, 1023763292, -952557537, -1066185320, -952557538, -1068282472, -1370358758, 1081140197, -952557538, 1079201176, 0, 49]), 'tolsf': array([ 2.41898232e+001, 0.00000000e+000, 4.33528912e-005, -6.11764773e+004, 1.12941277e+005, -7.43530547e+004, 1.52941193e+004, -6.20000134e+004, 9.76471574e+004, 1.69759663e-312, 3.39519327e-313, 8.55589578e-314, 2.23445150e-312, 9.91680933e+000, 0.00000000e+000, 4.99646000e+005, -1.28205663e+005, 5.11999116e+005, -7.15518276e+005, 1.69759663e-312]), 'nqu': array([ 5, 0, 0, 0, 0, 1076090224, 32, 105, 138113336, 136892392, 48, 8, 1076466816, 81, 1076090288, 1076090288, -1742606915, -1064103817, 1076283023, 1081461165]), 'lenrw': 116, 'tcur': array([ 2.84920603e-002, 0.00000000e+000, -4.25338500e+005, 1.70135427e+006, -2.83559045e+006, 1.70135427e+006, -4.25338567e+005, 1.70135427e+006, -2.41025188e+006, 1.69759663e-312, 2.40966029e+006, -1.70342483e+006, 4.25930155e+005, -1.69898792e+006, 2.83559045e+006, -1.70372062e+006, 4.24746979e+005, -1.69928371e+006, 2.41084347e+006, 1.69759663e-312]), 'hu': array([ 6.05099624e-005, 0.00000000e+000, 2.41898193e+001, 2.78290221e+002, 8.55842102e-015, -2.78290221e+002, -6.95725552e+001, 2.43503943e+002, 6.95725552e+001, 3.24665356e-312, 9.91790963e+000, 6.77647660e+004, -1.69411915e+004, 6.77647660e+004, -1.12941277e+005, 6.77647660e+004, -1.69411915e+004, 6.77647660e+004, -9.60000852e+004, 1.55437997e-312]), 'imxer': 1077570368, 'leniw': 26, 'tsw': array([ 1.46187823e-003, 0.00000000e+000, 0.00000000e+000, -6.06373313e+004, 1.12044275e+005, -7.38157991e+004, 1.51593328e+004, -6.14609855e+004, 9.68849425e+004, 1.69759663e-312, -7.19575108e+005, 5.01006985e+005, -1.25045833e+005, 5.13361798e+005, -8.44620941e+005, 5.00183330e+005, -1.28340450e+005, 5.12538144e+005, -7.16280491e+005, 1.69759663e-312]), 'message': 'Excess work done on this call (perhaps wrong Dfun type).', 'nst': array([ 500, 0, 0, 0, 0, 1161229464, 1162010880, 1076222336, 1076222048, 1162010944, 1162000288, 1162000328, 1162010976, 1076623264, 1162011008, 1162000368, 1162011040, 1076506496, 1162000408, 1162011072]), 'mused': array([ 2, 0, 0, 0, 0, 1087505788, -1712838703, -1058169942, 1740837064, 1090214596, -920046736, -1057880708, -1712838705, 1087216554, -1989227584, -1058143585, 344329165, 1089972047, 80, 80])} Any idea how to resolve the problem ? Thanks in advance Nils infodict -- a dictionary of optional outputs: 'hu' : a vector of step sizes successfully used for each time step. 'tcur' : a vector with the value of t reached for each time step. (will always be at least as large as the input times). 'tolsf' : a vector of tolerance scale factors, greater than 1.0, computed when a request for too much accuracy was detected. 'tsw' : the value of t at the time of the last method switch (given for each time step). 'nst' : the cumulative number of time steps. 'nfe' : the cumulative number of function evaluations for eadh time step. 'nje' : the cumulative number of jacobian evaluations for each time step. 'nqu' : a vector of method orders for each successful step. 'imxer' : index of the component of largest magnitude in the weighted local error vector (e / ewt) on an error return. 'lenrw' : the length of the double work array required. 
'leniw' : the length of integer work array required. 'mused' : a vector of method indicators for each successful time step: 1 -- adams (nonstiff) 2 -- bdf (stiff) From svetosch at gmx.net Tue Nov 15 04:41:42 2005 From: svetosch at gmx.net (Sven Schreiber) Date: Tue, 15 Nov 2005 10:41:42 +0100 Subject: [SciPy-user] Binaries for scipy created In-Reply-To: <437809C1.5020909@ee.byu.edu> References: <437809C1.5020909@ee.byu.edu> Message-ID: <4379AD56.3000606@gmx.net> Travis Oliphant schrieb: > I've used the scipy sourceforge site to place binaries for a "release" > of full scipy (built on the new core). The version is 0.4.3. There is > an rpm and windows binaries, as well as a full tar ball. Thanks very much from a newbie here, the previous lack of (windows) binaries for python 2.4 almost deterred me from seriously trying out scipy! Too bad only that I replaced my 2.4-installation with the enthought-2.3 one day before you posted those binaries... One question though: does "full scipy" mean that it has all of the (lapack) functionality I get from having the enthought-2.3-bundled version installed? Thanks for clarification. -sven From jens_brandenburg at gmx.net Tue Nov 15 10:26:41 2005 From: jens_brandenburg at gmx.net (Jens Brandenburg) Date: Tue, 15 Nov 2005 16:26:41 +0100 (MET) Subject: [SciPy-user] building scipy Message-ID: <17945.1132068401@www58.gmx.net> Hi SciPy users, Because I have to rebuild Python and SciPy I decided to build the new SciPy... But some (maybe stupid) questions came up: Where can I get current SciPy and SciPy_core packages? This "svn co http://svn.scipy.org/svn/scipy_core/branches/newcore" and "svn co http://svn.scipy.org/svn/scipy/branches/newscipy" doesn't work for me at all... don't know why - respectively if I look to http://svn.scipy.org/svn/scipy/branches/ I just find v0_3_2 wich seems to indicate that SciPy-0.3.2 is the current version?! So, I loaded SciPy and Scipy_core down from http://www.scipy.org/download/ but "python setup.py install" in the newcore directory fails returning: numpy_info: NOT AVIALABLE But after the last answer of Travis I thought that I don't need the Numeric package anymore... Okay, you see I'm really confused now... Help! ;-)) Cheers Jens -- Jens Brandenburg Highspeed-Freiheit. Bei GMX superg?nstig, z.B. GMX DSL_Cityflat, DSL-Flatrate f?r nur 4,99 Euro/Monat* http://www.gmx.net/de/go/dsl From giovanni.samaey at cs.kuleuven.ac.be Tue Nov 15 10:33:32 2005 From: giovanni.samaey at cs.kuleuven.ac.be (Giovanni Samaey) Date: Tue, 15 Nov 2005 16:33:32 +0100 Subject: [SciPy-user] building scipy In-Reply-To: <17945.1132068401@www58.gmx.net> References: <17945.1132068401@www58.gmx.net> Message-ID: <4379FFCC.7080004@cs.kuleuven.ac.be> The repositories have moved to trunk -- the only information about this has passed on this list. >From Travis' mail: Get the new repositor(y,ies) using: *Core*: svn co http://svn.scipy.org/svn/scipy_core/trunk core *Full SciPy*: svn co http://svn.scipy.org/svn/scipy/trunk scipy Doing both will place two directories named core and scipy in your current directory containing the current state of both repositories. python setup.py install should work in each directory. Best, Giovanni Jens Brandenburg wrote: >Hi SciPy users, > > >Because I have to rebuild Python and SciPy I decided to build the new >SciPy... But some (maybe stupid) questions came up: Where can I get current >SciPy and SciPy_core packages? 
This "svn co >http://svn.scipy.org/svn/scipy_core/branches/newcore" and "svn co >http://svn.scipy.org/svn/scipy/branches/newscipy" doesn't work for me at >all... don't know why - respectively if I look to >http://svn.scipy.org/svn/scipy/branches/ I just find v0_3_2 wich seems to >indicate that SciPy-0.3.2 is the current version?! So, I loaded SciPy and >Scipy_core down from http://www.scipy.org/download/ but "python setup.py >install" in the newcore directory fails returning: > >numpy_info: NOT AVIALABLE > >But after the last answer of Travis I thought that I don't need the Numeric >package anymore... Okay, you see I'm really confused now... Help! ;-)) > >Cheers > >Jens > > > >------------------------------------------------------------------------ > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From rmuller at sandia.gov Tue Nov 15 12:51:05 2005 From: rmuller at sandia.gov (Rick Muller) Date: Tue, 15 Nov 2005 10:51:05 -0700 Subject: [SciPy-user] More on problems calling scipy.special.sph_harm Message-ID: Did a little detective work today. It looks like the crash is happening in the scipy.special.lpmn function. BTW, I'm running on a Macintosh (OS X 10.4), python 2.3.5, scipy 0.3.2. In [1]: from scipy.special import lpmn In [4]: for l in range(4): ...: for m in range(l+1): ...: print l,m ...: plm = lpmn(m,l,0.1) ...: 0 0 1 0 1 1 2 0 2 1 2 2 Python(4406) malloc: *** error for object 0x355020: incorrect checksum for freed object - object was probably modified after being freed, break at szone_error to debug Python(4406) malloc: *** set a breakpoint in szone_error to debug 3 0 Segmentation fault From jens_brandenburg at gmx.net Tue Nov 15 17:21:57 2005 From: jens_brandenburg at gmx.net (Jens Brandenburg) Date: Tue, 15 Nov 2005 23:21:57 +0100 Subject: [SciPy-user] building scipy In-Reply-To: <4379FFCC.7080004@cs.kuleuven.ac.be> References: <17945.1132068401@www58.gmx.net> <4379FFCC.7080004@cs.kuleuven.ac.be> Message-ID: <437A5F85.8020703@gmx.net> hello again, Giovanni Samaey schrieb: >The repositories have moved to trunk -- the only information about this has >passed on this list. > Now I got them... ;-) Thank's! >python setup.py install should work in each directory. > > Sadly building SciPy doesn't work - as far as I see... ".../newscipy>python setup.py install" returns ... building 'superlu_src' library compiling C sources gcc options: '-pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC' error: file 'SuperLU/SRC/*.c' does not exist Still helpless Jens From dickrp at wckn.com Tue Nov 15 18:39:04 2005 From: dickrp at wckn.com (Robert Dick) Date: Tue, 15 Nov 2005 17:39:04 -0600 Subject: [SciPy-user] [SciPy-dev] scipy.linalg.eig() returns transposed eigenvector matrix Message-ID: <200511151739.04845.dickrp@wckn.com> >> If you need to decide which one to change, identify the common case. Is it >> more common to access all dimensions of one eigenvector or access one >> dimension of many eigenvectors? The common case should be the easiest to >> express, i.e., if one wants the first eigenvector, should >> la.eig(m)[1][0] >> or >> la.eig(m)[1][:, 0] >> be used? Pearu Peterson: > I am not convinced that getting eigenvectors one-by-one is the most common > case of la.eig usage. 
Sometimes one needs to operate with the whole matrix > of eigenvectors and then the mathematically "correct" representation of > the eigenmatrix would be more convinient. Then let's go with the eig() convention (an eigenvector in each column). It's not very dangerous because the function names are different. Anybody implementing a backward-compatible scipy.Numeric will just need to transpose the results of eig() in an eigenvectors() wrapper. The only real problem now is the conflict between the book and the code. -Robert Dick- From jeff at taupro.com Wed Nov 16 01:43:25 2005 From: jeff at taupro.com (Jeff Rush) Date: Wed, 16 Nov 2005 00:43:25 -0600 Subject: [SciPy-user] Call for a Scientific/Engineering Tutorial or Two at PyCon 2006 Message-ID: <437AD50D.1070107@taupro.com> Greetings. I'm working with the Python Conference planning group, for a conference to be held in Dallas in 2006. As you probably know from postings to python-announce, this year, for the first time, we're having a full day (Feb 23, 2006) of trainer-compensated tutorials, structured as either half-day or full-day sessions. From the feedback we received at last year's PyCon, there was a strong interest in scientific/engineering uses of Python, which has a rich set of tools and frameworks. We hope this year to dedicate one room on Tutorial Day to such topics. While I was looking over the submissions for the tutorials tonight, I noticed there were none related to science/engineering. It would be cool to have a half-day class on one of the underlying packages used across disciplines. Note that these are to be professionally-run half or full day classes, suggested with handouts or notebooks. And the trainer will receive monetary compensation, depending upon the number of attendees that register for specific tutorials. If you or someone on your staff is interested, there is more information at: http://us.pycon.org/TX2006/CallForTutorials While the submission deadline expires tonight, we have some flexibility. Jeff Rush From stephen.walton at csun.edu Wed Nov 16 11:31:12 2005 From: stephen.walton at csun.edu (Stephen Walton) Date: Wed, 16 Nov 2005 08:31:12 -0800 Subject: [SciPy-user] building scipy In-Reply-To: <437A5F85.8020703@gmx.net> References: <17945.1132068401@www58.gmx.net> <4379FFCC.7080004@cs.kuleuven.ac.be> <437A5F85.8020703@gmx.net> Message-ID: <437B5ED0.50907@csun.edu> Jens Brandenburg wrote: >Sadly building SciPy doesn't work - as far as I see... > >".../newscipy>python setup.py install" returns > >... >building 'superlu_src' library >compiling C sources >gcc options: '-pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall >-Wstrict-prototypes -fPIC' >error: file 'SuperLU/SRC/*.c' does not exist > > Stupid question: did you do "python setup.py install" in core first? scipy_core has to be installed before you can build full Scipy. From oliphant at ee.byu.edu Wed Nov 16 15:28:32 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 16 Nov 2005 13:28:32 -0700 Subject: [SciPy-user] Problems with scipy_core-0.4.2.win32-py2.41.exe -- tofile In-Reply-To: <437B2171.4060207@xs4all.nl> References: <437B0112.5050700@xs4all.nl> <437B0AAC.8010705@ee.byu.edu> <437B2171.4060207@xs4all.nl> Message-ID: <437B9670.1050104@ee.byu.edu> Paul Hofstra wrote: > > Hi Travis, > > First of all, I made a mistake on the version. I have this problem > with the latest version 0.6, not with 0.4.2. 
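A quick way to confirm that the freshly installed core is the one Python will pick up, before kicking off the full build, is to check the version attribute shown earlier in this archive (attribute name as reported there; the exact output is illustrative):

import scipy
print scipy.__core_version__    # version of the installed scipy core
print scipy.__file__            # should point into site-packages, not the source tree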
> > I tried x.tofile("C:/test.dat") and it fails in the same way (Python > crashes) > > Best regards, > Paul Thanks for clarifying and for pointing this out. This is an issue with the binary. I'm using mingw32 to compile the binary which was not linking against the right library for Python 2.4. It works right for Python 2.3, however. Hopefully, the issue has now been fixed in the SVN version of scipy_core (along with a fromfile bug too..) Compiling on windows is not too difficult. If you want to try it, I can guide you along. Otherwise, you can use the Python 2.3 version (but fromfile may not work right) or wait for another release -- probably 2-3 weeks. Best, -Travis From travis.brady at gmail.com Wed Nov 16 16:54:32 2005 From: travis.brady at gmail.com (Travis Brady) Date: Wed, 16 Nov 2005 13:54:32 -0800 Subject: [SciPy-user] Problems with scipy_core-0.4.2.win32-py2.41.exe -- tofile In-Reply-To: <437B9670.1050104@ee.byu.edu> References: <437B0112.5050700@xs4all.nl> <437B0AAC.8010705@ee.byu.edu> <437B2171.4060207@xs4all.nl> <437B9670.1050104@ee.byu.edu> Message-ID: Hi, I'd actually like to try compiling for 2.4 too, is this possible w/o Visual Studio (I am using Python 2.4.1 compiled with VS)? Maybe with MingW? Also, has anyone ever attempted to implement quantile regression in scipy/numeric? I could do it with RPy but that's rather slow and I'd like to maybe take a crack at a Python implementation just for the heck of it. Travis On 11/16/05, Travis Oliphant wrote: > > Paul Hofstra wrote: > > > > > Hi Travis, > > > > First of all, I made a mistake on the version. I have this problem > > with the latest version 0.6, not with 0.4.2. > > > > I tried x.tofile("C:/test.dat") and it fails in the same way (Python > > crashes) > > > > Best regards, > > Paul > > Thanks for clarifying and for pointing this out. This is an issue with > the binary. I'm using mingw32 to compile the binary which was not > linking against the right library for Python 2.4. > > It works right for Python 2.3, however. Hopefully, the issue has now > been fixed in the SVN version of scipy_core (along with a fromfile bug > too..) Compiling on windows is not too difficult. If you want to try > it, I can guide you along. Otherwise, you can use the Python 2.3 > version (but fromfile may not work right) or wait for another release -- > probably 2-3 weeks. > > Best, > > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant at ee.byu.edu Wed Nov 16 17:07:25 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 16 Nov 2005 15:07:25 -0700 Subject: [SciPy-user] Building scipy core on windows In-Reply-To: References: <437B0112.5050700@xs4all.nl> <437B0AAC.8010705@ee.byu.edu> <437B2171.4060207@xs4all.nl> <437B9670.1050104@ee.byu.edu> Message-ID: <437BAD9D.3080409@ee.byu.edu> Travis Brady wrote: > Hi, > > I'd actually like to try compiling for 2.4 too, is this possible w/o > Visual Studio (I am using Python 2.4.1 compiled with VS)? > Maybe with MingW? Yes, that's what I use. Get MSYS and MinGW. Download a binary version of ATLAS if you want fast linear algebra. If you don't care, then don't worry about that part --- the code should still build. Then, check out the latest SVN tree (Tortoise SVN is an excellent windows SVN client that makes it easy). 
The URL is http://svn.scipy.org/svn/scipy_core/trunk You should be able to go into the directory where you placed the tree and type python setup.py config --compiler=mingw32 build --compiler=mingw32 install or python setup.py config --compiler=mingw32 build --compiler=mingw32 bdist_wininst to get an installable executable. Alternatively, to avoid all the --compiler=xxxxx noise you can create (or modify if you already have one) a distutils configuration file for your version of Python. The file name is \Lib\distutils\distutils.cfg and the contents should contain [build] compiler = mingw32 [config] compiler = mingw32 On my system C:\Python24\Lib\distutils\distutils.cfg is where it is located. -Travis From travis.brady at gmail.com Wed Nov 16 19:25:16 2005 From: travis.brady at gmail.com (Travis Brady) Date: Wed, 16 Nov 2005 16:25:16 -0800 Subject: [SciPy-user] Building scipy core on windows In-Reply-To: <437BAD9D.3080409@ee.byu.edu> References: <437B0112.5050700@xs4all.nl> <437B0AAC.8010705@ee.byu.edu> <437B2171.4060207@xs4all.nl> <437B9670.1050104@ee.byu.edu> <437BAD9D.3080409@ee.byu.edu> Message-ID: Where can I download an ATLAS binary? Everywhere I look I see mentions of building it with Cygwin or MingW but no binary executables for Win32. Travis On 11/16/05, Travis Oliphant wrote: > > Travis Brady wrote: > > > Hi, > > > > I'd actually like to try compiling for 2.4 too, is this possible w/o > > Visual Studio (I am using Python 2.4.1 compiled with VS)? > > Maybe with MingW? > > Yes, that's what I use. Get MSYS and MinGW. Download a binary version > of ATLAS if you want fast linear algebra. If you don't care, then don't > worry about that part --- the code should still build. > > Then, check out the latest SVN tree (Tortoise SVN is an excellent > windows SVN client that makes it easy). The URL is > http://svn.scipy.org/svn/scipy_core/trunk > > You should be able to go into the directory where you placed the tree > and type > > python setup.py config --compiler=mingw32 build --compiler=mingw32 install > > or > > python setup.py config --compiler=mingw32 build --compiler=mingw32 > bdist_wininst > > to get an installable executable. > > > Alternatively, to avoid all the --compiler=xxxxx noise you can create > (or modify if you already have one) a distutils configuration file for > your version of Python. The file name is > > \Lib\distutils\distutils.cfg > > and the contents should contain > > [build] > compiler = mingw32 > > [config] > compiler = mingw32 > > > On my system > > C:\Python24\Lib\distutils\distutils.cfg > > is where it is located. > > > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Wed Nov 16 19:36:01 2005 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 16 Nov 2005 16:36:01 -0800 Subject: [SciPy-user] Building scipy core on windows In-Reply-To: References: <437B0112.5050700@xs4all.nl> <437B0AAC.8010705@ee.byu.edu> <437B2171.4060207@xs4all.nl> <437B9670.1050104@ee.byu.edu> <437BAD9D.3080409@ee.byu.edu> Message-ID: <437BD071.9090501@gmail.com> Travis Brady wrote: > Where can I download an ATLAS binary? Everywhere I look I see mentions > of building it with Cygwin or MingW but no binary executables for Win32. 
http://www.scipy.org/download/atlasbinaries/ -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From haase at msg.ucsf.edu Wed Nov 16 19:37:48 2005 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 16 Nov 2005 16:37:48 -0800 Subject: [SciPy-user] new scipyCore features in numarray - indexing... In-Reply-To: <436BE74A.7080109@stsci.edu> References: <200511041158.43129.haase@msg.ucsf.edu> <436BE74A.7080109@stsci.edu> Message-ID: <200511161637.48268.haase@msg.ucsf.edu> Todd, I'm just thinking of a nice feature that I think is now part of new scipyCore: Mixing index ranges in one axis with index lists in another. Example: I have index 4,7,9 that I'm intrested in: use a[ [4,7,9] ] If I want all section I obviously just say a[ : ] But what do I do in the 2d case where I want 4,7,9 in one axis and all in the other ? I understood that the new scipyCore allows a[:, [4,7,9]] whereas numarray gives an error !? Thanks, Sebastian Haase On Friday 04 November 2005 14:57, Todd Miller wrote: > Sebastian Haase wrote: > >Also I always need to thank Todd et al. for numarray which we are using > > for about 4 years now. > > I'm glad you found numarray useful. > > >I was following - I thought - all the postings here, but I don't remember > > when and what the reason was when a.type() changed to a.dtype (also > > there is a "dtypecode" somewhere !?). Any reference or explanation would > > be great. I have to say that the (old) parenthesis where always quite > > "annoying" ! ;-) > > > >Question: does the way allow assignments like "a.dtype = Float32". > >What does it do ? If not, is it raising an error (I had 2 different people > >yesterday who tried to assign to a.type here in our lab ...) > > > >Also is this now completely supported/tested and suggested for numarray ? > > (For the time numarray is still separate) > > I'm adding support for some of newcore's new interface features out of > desire to make it easier to migrate. Our intent is to make it possible > to write newcore code and run it on numarray now as newcore matures. > Not every newcore feature is going to be supported, but we'll make an > effort to support those which are easy to implement. Let me know is > there's some newcore idiom you want to use that numarray doesn't have yet. > > Regards, > Todd > From travis.brady at gmail.com Wed Nov 16 22:12:59 2005 From: travis.brady at gmail.com (Travis Brady) Date: Wed, 16 Nov 2005 19:12:59 -0800 Subject: [SciPy-user] Building scipy core on windows In-Reply-To: <437BD071.9090501@gmail.com> References: <437B0112.5050700@xs4all.nl> <437B0AAC.8010705@ee.byu.edu> <437B2171.4060207@xs4all.nl> <437B9670.1050104@ee.byu.edu> <437BAD9D.3080409@ee.byu.edu> <437BD071.9090501@gmail.com> Message-ID: Hmmm.... Got the binaries but can't seem to get around setup.py not being able to find various modules all of which are there. Do I need to set something beforehand so it knows, for instance, that distutils is not the scipy.distutils and all that. I've never successfully installed something as large as Scipy on my windows machine w/o a binary. thanks everyone for your help so far. Travis On 11/16/05, Robert Kern wrote: > > Travis Brady wrote: > > Where can I download an ATLAS binary? Everywhere I look I see mentions > > of building it with Cygwin or MingW but no binary executables for Win32. 
> > http://www.scipy.org/download/atlasbinaries/ > > -- > Robert Kern > robert.kern at gmail.com > > "In the fields of hell where the grass grows high > Are the graves of dreams allowed to die." > -- Richard Harter > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant at ee.byu.edu Wed Nov 16 22:17:42 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 16 Nov 2005 20:17:42 -0700 Subject: [SciPy-user] Building scipy core on windows In-Reply-To: References: <437B0112.5050700@xs4all.nl> <437B0AAC.8010705@ee.byu.edu> <437B2171.4060207@xs4all.nl> <437B9670.1050104@ee.byu.edu> <437BAD9D.3080409@ee.byu.edu> <437BD071.9090501@gmail.com> Message-ID: <437BF656.3090109@ee.byu.edu> Travis Brady wrote: > Hmmm.... > Got the binaries but can't seem to get around setup.py not being able > to find various modules all of which are there. Perhaps you could show us what the errors are and somebody could help. From nwagner at mecha.uni-stuttgart.de Thu Nov 17 05:29:29 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 17 Nov 2005 11:29:29 +0100 Subject: [SciPy-user] I/O features in scipy Message-ID: <437C5B89.4070309@mecha.uni-stuttgart.de> Hi all, I am curious about the interest of users to have an interface to binary/ascii files generated by NASTRAN or ANSYS (two well-known finite element packages). Scipy is able to read/write matrices in MatrixMarket format and Matlab files. BTW, which MATLAB versions are currently supported by io.loadmat io.savemat ? There exists some projects which might be useful in this context. NASTRAN: http://savannah.nongnu.org/cgi-bin/viewcvs/tops/tops/usr/extra/op4tools/ ANSYS: http://evgenii.rudnyi.ru/soft/readAnsys/ Any comment would be appreciated. Nils From jens_brandenburg at gmx.net Thu Nov 17 07:11:53 2005 From: jens_brandenburg at gmx.net (Jens Brandenburg) Date: Thu, 17 Nov 2005 13:11:53 +0100 Subject: [SciPy-user] building scipy In-Reply-To: <437B5ED0.50907@csun.edu> References: <17945.1132068401@www58.gmx.net> <4379FFCC.7080004@cs.kuleuven.ac.be> <437A5F85.8020703@gmx.net> <437B5ED0.50907@csun.edu> Message-ID: <437C7389.9090900@gmx.net> hi, Stephen Walton schrieb: >Stupid question: did you do "python setup.py install" in core first? > ... of course! It looks like this: x21lx-jensb:/usr/local/lib/python2.4/site-packages/scipy # ls . __init__.py base distutils setup.py .. __init__.pyc basic doc setup.pyc __core_config__.py _import_tools.py core_version.py f2py test __core_config__.pyc _import_tools.pyc core_version.pyc lib weave Inbetween another question raised: "How could I make my own ATLAS (build from source) libraries available to SciPy?" 
I 've set the following environment variables: export BUILD_DIR=/usr/local/builds export BLAS_SRC=$BUILD_DIR/src/BLAS export BLAS=$BLAS_SRC/libfblas.a export LAPACK_SRC=$BUILD_DIR/src/LAPACK export LAPACK=$LAPACK_SRC/lapack_LINUX.a export ATLAS_SRC=$BUILD_DIR/src/ATLAS export ATLAS=$ATLAS_SRC/lib/Linux_i686centrino Thank 's in advance Jens From pearu at scipy.org Thu Nov 17 06:27:53 2005 From: pearu at scipy.org (Pearu Peterson) Date: Thu, 17 Nov 2005 05:27:53 -0600 (CST) Subject: [SciPy-user] building scipy In-Reply-To: <437A5F85.8020703@gmx.net> References: <17945.1132068401@www58.gmx.net> <4379FFCC.7080004@cs.kuleuven.ac.be> <437A5F85.8020703@gmx.net> Message-ID: On Tue, 15 Nov 2005, Jens Brandenburg wrote: > hello again, > > Giovanni Samaey schrieb: > >> The repositories have moved to trunk -- the only information about this has >> passed on this list. >> > Now I got them... ;-) Thank's! > >> python setup.py install should work in each directory. >> >> > Sadly building SciPy doesn't work - as far as I see... > > ".../newscipy>python setup.py install" returns ^^^^^^^^^^^^?? > ... > building 'superlu_src' library > compiling C sources > gcc options: '-pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall > -Wstrict-prototypes -fPIC' > error: file 'SuperLU/SRC/*.c' does not exist Could you try getting core scipy and scipy sources from SVN repositories again? You obviously have incomplete sources. Pearu From jens_brandenburg at gmx.net Thu Nov 17 11:21:50 2005 From: jens_brandenburg at gmx.net (Jens Brandenburg) Date: Thu, 17 Nov 2005 17:21:50 +0100 Subject: [SciPy-user] building scipy In-Reply-To: References: <17945.1132068401@www58.gmx.net> <4379FFCC.7080004@cs.kuleuven.ac.be> <437A5F85.8020703@gmx.net> Message-ID: <437CAE1E.3000707@gmx.net> hi Pearu, > Could you try getting core scipy and scipy sources from SVN repositories again? You obviously have incomplete sources. I got 1499 and 1443... Now it works! ;-) What has happend to scipy.xplt? Are there some alternativs? cheers Jens From nwagner at mecha.uni-stuttgart.de Thu Nov 17 11:23:46 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 17 Nov 2005 17:23:46 +0100 Subject: [SciPy-user] building scipy In-Reply-To: <437CAE1E.3000707@gmx.net> References: <17945.1132068401@www58.gmx.net> <4379FFCC.7080004@cs.kuleuven.ac.be> <437A5F85.8020703@gmx.net> <437CAE1E.3000707@gmx.net> Message-ID: <437CAE92.30203@mecha.uni-stuttgart.de> Jens Brandenburg wrote: >hi Pearu, > > > Could you try getting core scipy and scipy sources from SVN >repositories again? You obviously have incomplete sources. > >I got 1499 and 1443... Now it works! ;-) > >What has happend to scipy.xplt? Are there some alternativs? >matplotlib http://matplotlib.sourceforge.net/installing.html > Nils >cheers > >Jens > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > From ryanlists at gmail.com Thu Nov 17 13:37:54 2005 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 17 Nov 2005 13:37:54 -0500 Subject: [SciPy-user] distutils help Message-ID: I am trying to package my Python+Maxima+Latex symbolic stuff and I am having trouble understanding the distutils documentation. I want to include a README.txt file that will be there when you unzip the source distribution. As I understand the distutils documentation this should be a data_file. 
Here is my setup.py script that isn't working: #!/usr/bin/env python from distutils.core import setup setup(name='textfiles', version='0.1', description='Utilities for processing textfiles', author='Ryan Krauss', author_email='ryanlists at gmail.com', url='http://www.imdl.gatech.edu/ryan', package_dir = {'': '/home/ryan/rwkpython'}, packages=['textfiles', 'textfiles.latexlist'], py_modules = ['rwkmisc','rwklist'], package_data={'textfiles.latexlist': ['example.tex','example.py']}, data_files=[('', ['README.txt']),], ) setup.py is in a different directory than where the modules are actually located. The README is in the same directory as setup.py. The README doesn't show up in the manifest and is no where to be found when I unpack and install (I actually copied it to all the directories I thought might be possible and it still didn't work. I also tried putting it in a doc folder relative to setup.py and changing to data_files=[('',['doc/README.txt']),],) What am I doing wrong? Ryan From Jim.Vickroy at noaa.gov Thu Nov 17 13:56:08 2005 From: Jim.Vickroy at noaa.gov (Jim Vickroy) Date: Thu, 17 Nov 2005 11:56:08 -0700 Subject: [SciPy-user] distutils help In-Reply-To: References: Message-ID: <437CD248.2060200@noaa.gov> Hello Ryan, I just looked at one of my working files, and the only difference I can see is that I explicitly specify a target directory name for the files in the first element of the tuple. For example: data_files=[('docs',['doc/README.txt']),] You may wish to give that a try until someone more knowledgeable responds. -- jv Ryan Krauss wrote: >I am trying to package my Python+Maxima+Latex symbolic stuff and I am >having trouble understanding the distutils documentation. I want to >include a README.txt file that will be there when you unzip the source >distribution. As I understand the distutils documentation this should >be a data_file. Here is my setup.py script that isn't working: > >#!/usr/bin/env python > >from distutils.core import setup > >setup(name='textfiles', > version='0.1', > description='Utilities for processing textfiles', > author='Ryan Krauss', > author_email='ryanlists at gmail.com', > url='http://www.imdl.gatech.edu/ryan', > package_dir = {'': '/home/ryan/rwkpython'}, > packages=['textfiles', 'textfiles.latexlist'], > py_modules = ['rwkmisc','rwklist'], > package_data={'textfiles.latexlist': ['example.tex','example.py']}, > data_files=[('', ['README.txt']),], > ) > >setup.py is in a different directory than where the modules are >actually located. The README is in the same directory as setup.py. >The README doesn't show up in the manifest and is no where to be found >when I unpack and install (I actually copied it to all the directories >I thought might be possible and it still didn't work. I also tried >putting it in a doc folder relative to setup.py and changing to >data_files=[('',['doc/README.txt']),],) > >What am I doing wrong? > >Ryan > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > From travis.brady at gmail.com Thu Nov 17 14:09:54 2005 From: travis.brady at gmail.com (Travis Brady) Date: Thu, 17 Nov 2005 11:09:54 -0800 Subject: [SciPy-user] Building scipy core on windows In-Reply-To: <437BF656.3090109@ee.byu.edu> References: <437B0AAC.8010705@ee.byu.edu> <437B2171.4060207@xs4all.nl> <437B9670.1050104@ee.byu.edu> <437BAD9D.3080409@ee.byu.edu> <437BD071.9090501@gmail.com> <437BF656.3090109@ee.byu.edu> Message-ID: Wow. That site.cfg file is critical. 
I'd been running into all sorts of errors, it telling me that Python was built with MSVC and that I didn't have it so it wouldn't work despite my having specified --compiler=mingw32. When it worked I had installed MSYS 1.0 with MingW in the msys\1.0\mingw directory, downloaded ATLAS binaries from scipy.org and a site.cfg file created with just [atlas] library_dirs = C:\usr\src\ATLAS in it and then I just did: python setup.py config --compiler=mingw32 build --compiler=mingw32 install And that worked. thanks, Travis, turned out to be much less painful than I'd expected. Travis On 11/16/05, Travis Oliphant wrote: > > Travis Brady wrote: > > > Hmmm.... > > Got the binaries but can't seem to get around setup.py not being able > > to find various modules all of which are there. > > Perhaps you could show us what the errors are and somebody could help. > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Fernando.Perez at colorado.edu Thu Nov 17 14:19:16 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Thu, 17 Nov 2005 12:19:16 -0700 Subject: [SciPy-user] distutils help In-Reply-To: References: Message-ID: <437CD7B4.3020606@colorado.edu> Ryan Krauss wrote: > I am trying to package my Python+Maxima+Latex symbolic stuff and I am > having trouble understanding the distutils documentation. I want to > include a README.txt file that will be there when you unzip the source > distribution. As I understand the distutils documentation this should > be a data_file. Here is my setup.py script that isn't working: The easiest solution is to use a proper MANIFEST.in file which lists what you want/don't want included, and make sure that your setup always wipes out the actual MANIFEST file (otherwise distutils doesn't update correctly). The scipy setup is extremely complex and not the best to learn from for simple tasks (its complexity is necessary, given what it's trying to do). I'd recommend that you have a look at ipython's setup file, which while being non-trivial is a lot easier to understand, and it handles things like what you are asking. I'll be happy to answer questions you may have about it. Cheers, f From ryanlists at gmail.com Thu Nov 17 14:48:02 2005 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 17 Nov 2005 14:48:02 -0500 Subject: [SciPy-user] distutils help In-Reply-To: <437CD7B4.3020606@colorado.edu> References: <437CD7B4.3020606@colorado.edu> Message-ID: The MANIFEST seems crucial, but is not discussed in the docs for python 2.4. I started looking at the ipython setup.py, but it is quite a step up in complexity from the distutils examples. Fernando, do you have a MANIFEST.in file that goes along with the ipython setup.py? I can't seem to find where the README gets included from setup.py alone. Thanks, Ryan On 11/17/05, Fernando Perez wrote: > Ryan Krauss wrote: > > I am trying to package my Python+Maxima+Latex symbolic stuff and I am > > having trouble understanding the distutils documentation. I want to > > include a README.txt file that will be there when you unzip the source > > distribution. As I understand the distutils documentation this should > > be a data_file. 
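For just getting a README.txt into the source distribution, a MANIFEST.in as small as the following is usually enough (a sketch; distutils expects the file next to setup.py, and remember to delete any stale MANIFEST so it gets regenerated, as noted above):

include README.txt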
Here is my setup.py script that isn't working: > > The easiest solution is to use a proper MANIFEST.in file which lists what you > want/don't want included, and make sure that your setup always wipes out the > actual MANIFEST file (otherwise distutils doesn't update correctly). The > scipy setup is extremely complex and not the best to learn from for simple > tasks (its complexity is necessary, given what it's trying to do). I'd > recommend that you have a look at ipython's setup file, which while being > non-trivial is a lot easier to understand, and it handles things like what you > are asking. I'll be happy to answer questions you may have about it. > > Cheers, > > f > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From Fernando.Perez at colorado.edu Thu Nov 17 15:55:48 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Thu, 17 Nov 2005 13:55:48 -0700 Subject: [SciPy-user] distutils help In-Reply-To: References: <437CD7B4.3020606@colorado.edu> Message-ID: <437CEE54.8030802@colorado.edu> Ryan Krauss wrote: > The MANIFEST seems crucial, but is not discussed in the docs for python 2.4. > > I started looking at the ipython setup.py, but it is quite a step up > in complexity from the distutils examples. Yes, but still a lot less complex than the scipy one. The problem with the distutils examples (IMO) is that they are _too_ simple, and it takes a lot of trial and error to figure out how to do more complicated thingz. distutils has a horrible learning curve: it's really easy to do what their examples say, but anything beyond that requires digging through the (mostly uncommented) source code. > Fernando, do you have a MANIFEST.in file that goes along with the > ipython setup.py? I can't seem to find where the README gets included > from setup.py alone. Here goes. If you do an svn checkout of ipython you'll get all of that. Cheers, f -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: MANIFEST.in URL: From Fernando.Perez at colorado.edu Thu Nov 17 15:58:39 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Thu, 17 Nov 2005 13:58:39 -0700 Subject: [SciPy-user] distutils help In-Reply-To: <437CEE54.8030802@colorado.edu> References: <437CD7B4.3020606@colorado.edu> <437CEE54.8030802@colorado.edu> Message-ID: <437CEEFF.4010103@colorado.edu> Fernando Perez wrote: > trial and error to figure out how to do more complicated thingz. distutils things typo, not trying to sound like a 14 year old. sorry. f From ryanlists at gmail.com Thu Nov 17 16:03:00 2005 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 17 Nov 2005 16:03:00 -0500 Subject: [SciPy-user] python+maxima+latex for symbolic stuff Message-ID: Sorry this took so long, but I now have a package for symbolic stuff available. It can be found at: http://www.imdl.gatech.edu/ryan/python/ This is my first attempt at distutils and sharing my code in this way, so some patience may be required. I am admittedly a hack. This package assumes that LaTeX and Maxima are already installed on your system (and are somewhere on you path so that latex and maxima are both valid commands at a system prompt.) There is an example included and I think it should be fairly straight forward and painless to try out. I have tested it on Linux and windows. Let me know if anything is unclear or if you have any problems. 
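As background on the style of coupling described here, a rough, purely illustrative sketch of driving Maxima from Python over its command line. This is not Ryan's implementation: the run_maxima name is made up, and the assumptions that a bare 'maxima' on the PATH will read commands from stdin and that the raw output is usable as-is may need adjusting for your Maxima version.

# Illustrative only: hand a command to an external maxima process and
# read back whatever it prints (banner and prompts included).
import subprocess

def run_maxima(command):
    p = subprocess.Popen(['maxima'], stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE)
    out, _ = p.communicate(command + ';\nquit();\n')
    return out

print run_maxima('eq1: x + y = 2')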
I will try to stay on top of my email over the next few days in case this turns out to be really buggy. Let me know what you think, Ryan On 11/17/05, daniel malik wrote: > > I see your post at Sci py-user list, an very interessed with this package. > Can you send the package ? > very thank's > Daniel > -- > Daniel MALIK From Fernando.Perez at colorado.edu Thu Nov 17 16:08:06 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Thu, 17 Nov 2005 14:08:06 -0700 Subject: [SciPy-user] python+maxima+latex for symbolic stuff In-Reply-To: References: Message-ID: <437CF136.7070806@colorado.edu> Ryan Krauss wrote: > Sorry this took so long, but I now have a package for symbolic stuff > available. It can be found at: > http://www.imdl.gatech.edu/ryan/python/ Once you flesh out the (inevitable) bugs with a bit of public use, please don't forget to make an entry on the topical software wiki. This is filling a much-needed gap in python-for-science. BTW, you don't specify a license at all. If I may suggest using BSD, so that integration with the rest of scipy/matplotlib/ipython is not an issue in the future, that would be great. The decision is yours to make, of course. Regards, f From ryanlists at gmail.com Thu Nov 17 16:13:28 2005 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 17 Nov 2005 16:13:28 -0500 Subject: [SciPy-user] python+maxima+latex for symbolic stuff In-Reply-To: <437CF136.7070806@colorado.edu> References: <437CF136.7070806@colorado.edu> Message-ID: Sorry, yes I meant to use a BSD license. I don't know completely how this will work though because Maxima is GPL. I don't link to Maxima in anyway, I just call it from the command line. Does that pose a problem for comercial use? Can you bundle GPL software with your own proprietary stuff if you don't link to it? Ryan On 11/17/05, Fernando Perez wrote: > Ryan Krauss wrote: > > Sorry this took so long, but I now have a package for symbolic stuff > > available. It can be found at: > > http://www.imdl.gatech.edu/ryan/python/ > > Once you flesh out the (inevitable) bugs with a bit of public use, please > don't forget to make an entry on the topical software wiki. This is filling a > much-needed gap in python-for-science. > > BTW, you don't specify a license at all. If I may suggest using BSD, so that > integration with the rest of scipy/matplotlib/ipython is not an issue in the > future, that would be great. The decision is yours to make, of course. > > Regards, > > f > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From simon at arrowtheory.com Thu Nov 17 16:46:47 2005 From: simon at arrowtheory.com (Simon Burton) Date: Fri, 18 Nov 2005 08:46:47 +1100 Subject: [SciPy-user] python+maxima+latex for symbolic stuff In-Reply-To: References: Message-ID: <20051118084647.0dbe92c3.simon@arrowtheory.com> On Thu, 17 Nov 2005 16:03:00 -0500 Ryan Krauss wrote: > > Sorry this took so long, but I now have a package for symbolic stuff > available. It can be found at: > http://www.imdl.gatech.edu/ryan/python/ > > This is my first attempt at distutils and sharing my code in this way, > so some patience may be required. I am admittedly a hack. > > This package assumes that LaTeX and Maxima where can i download maxima ? Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 
61 02 6249 6940 http://arrowtheory.com From ryanlists at gmail.com Thu Nov 17 16:58:28 2005 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 17 Nov 2005 16:58:28 -0500 Subject: [SciPy-user] python+maxima+latex for symbolic stuff In-Reply-To: <20051118084647.0dbe92c3.simon@arrowtheory.com> References: <20051118084647.0dbe92c3.simon@arrowtheory.com> Message-ID: Maxima homepage: http://maxima.sourceforge.net/ which has Maxima docs and links to the download page: http://sourceforge.net/project/showfiles.php?group_id=4933 (it is also available in many linux package managers - ubuntu for sure. For ubuntu, make sure you also install the xmaxima package.) On 11/17/05, Simon Burton wrote: > On Thu, 17 Nov 2005 16:03:00 -0500 > Ryan Krauss wrote: > > > > > Sorry this took so long, but I now have a package for symbolic stuff > > available. It can be found at: > > http://www.imdl.gatech.edu/ryan/python/ > > > > This is my first attempt at distutils and sharing my code in this way, > > so some patience may be required. I am admittedly a hack. > > > > This package assumes that LaTeX and Maxima > > where can i download maxima ? > > Simon. > > -- > Simon Burton, B.Sc. > Licensed PO Box 8066 > ANU Canberra 2601 > Australia > Ph. 61 02 6249 6940 > http://arrowtheory.com > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From stefan at sun.ac.za Thu Nov 17 18:26:54 2005 From: stefan at sun.ac.za (Stefan van der Walt) Date: Fri, 18 Nov 2005 01:26:54 +0200 Subject: [SciPy-user] python+maxima+latex for symbolic stuff In-Reply-To: References: <437CF136.7070806@colorado.edu> Message-ID: <20051117232654.GB32661@alpha> On Thu, Nov 17, 2005 at 04:13:28PM -0500, Ryan Krauss wrote: > Sorry, yes I meant to use a BSD license. I don't know completely how > this will work though because Maxima is GPL. I don't link to Maxima > in anyway, I just call it from the command line. Does that pose a > problem for comercial use? Can you bundle GPL software with your own > proprietary stuff if you don't link to it? If you don't link to GPL'd code, that license doesn't pose a problem. You are allowed to bundle GPL software with commercial packages, as long as you distribute the source as well. In fact, you are allowed to sell GPL software -- but finding buyers might prove difficult :) St?fan From saliola at gmail.com Thu Nov 17 19:20:05 2005 From: saliola at gmail.com (Franco Saliola) Date: Thu, 17 Nov 2005 19:20:05 -0500 Subject: [SciPy-user] Building on 64 bit machines In-Reply-To: <43746180.1000301@ee.byu.edu> References: <43746180.1000301@ee.byu.edu> Message-ID: On 11/11/05, Travis Oliphant wrote: > Franco Saliola wrote: > > >Hello. > > > >I am having some trouble building SciPy on a 64 bit machine. Actually, > >after changing a bunch of flags I got it to build, but I am getting an > >errors for scipy.test(level=1). The test returns, > > Which SciPy are you talking about? The new scipy or the old scipy. > > To be honest, I don't know if the old scipy ever worked on 64-bit machines. > > The new scipy should build fine on 64-bit machines. Thanks for the email Travis. But I can't get new scipy to work. In following the instructions on the Build New SciPy website, http://www.scipy.org/documentation/buildnewscipy.txt I get an error message when I try to download the newcore branch of scipy_core, URL 'http://svn.scipy.org/svn/scipy_core/branches/newcore' doesn't exist. I'm not sure which instruction I should be following. 
Thanks for you help, Franco -- From oliphant at ee.byu.edu Thu Nov 17 19:26:58 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 17 Nov 2005 17:26:58 -0700 Subject: [SciPy-user] Building on 64 bit machines In-Reply-To: References: <43746180.1000301@ee.byu.edu> Message-ID: <437D1FD2.6070603@ee.byu.edu> Franco Saliola wrote: >On 11/11/05, Travis Oliphant wrote: > > >>Franco Saliola wrote: >> >> >> >>>Hello. >>> >>>I am having some trouble building SciPy on a 64 bit machine. Actually, >>>after changing a bunch of flags I got it to build, but I am getting an >>>errors for scipy.test(level=1). The test returns, >>> >>> >>Which SciPy are you talking about? The new scipy or the old scipy. >> >>To be honest, I don't know if the old scipy ever worked on 64-bit machines. >> >>The new scipy should build fine on 64-bit machines. >> >> > >Thanks for the email Travis. But I can't get new scipy to work. In >following the instructions on the Build New SciPy website, > >http://www.scipy.org/documentation/buildnewscipy.txt > >I get an error message when I try to download the newcore branch of scipy_core, > > URL 'http://svn.scipy.org/svn/scipy_core/branches/newcore' doesn't exist. > > > It's now the main trunk: http://svn.scipy.org/svn/scipy_core/trunk -Travis From ryanlists at gmail.com Thu Nov 17 20:15:41 2005 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 17 Nov 2005 20:15:41 -0500 Subject: [SciPy-user] python+maxima+latex for symbolic stuff In-Reply-To: References: <437CF136.7070806@colorado.edu> <20051117232654.GB32661@alpha> Message-ID: Actually, what I am doing is exacly covered by "command-line arguments are communication mechanisms normally used between two separate programs". So, I will attach a BSD license soon. Ryan On 11/17/05, Ryan Krauss wrote: > I found this under the FAQ at gnu.org and it would appear that Stephan > is right (but the gnu folks don't want to say that for sure): > > ============================ > What is the difference between "mere aggregation" and "combining two > modules into one program"? > Mere aggregation of two programs means putting them side by side > on the same CD-ROM or hard disk. We use this term in the case where > they are separate programs, not parts of a single program. In this > case, if one of the programs is covered by the GPL, it has no effect > on the other program. > > Combining two modules means connecting them together so that they > form a single larger program. If either part is covered by the GPL, > the whole combination must also be released under the GPL--if you > can't, or won't, do that, you may not combine them. > > What constitutes combining two parts into one program? This is a > legal question, which ultimately judges will decide. We believe that a > proper criterion depends both on the mechanism of communication (exec, > pipes, rpc, function calls within a shared address space, etc.) and > the semantics of the communication (what kinds of information are > interchanged). > > If the modules are included in the same executable file, they are > definitely combined in one program. If modules are designed to run > linked together in a shared address space, that almost surely means > combining them into one program. > > By contrast, pipes, sockets and command-line arguments are > communication mechanisms normally used between two separate programs. > So when they are used for communication, the modules normally are > separate programs. 
But if the semantics of the communication are > intimate enough, exchanging complex internal data structures, that too > could be a basis to consider the two parts as combined into a larger > program. > ========================= > > I think what I have done is a "mere aggregation" so that it could be > distributed on the same CD rom as Maxima and not need to be GPL - i.e. > that Maxima's GPL license would not affect anything else being > distributed with it. My input/output sharing is less sophisticated > that pipes. > > Ryan > > On 11/17/05, Stefan van der Walt wrote: > > On Thu, Nov 17, 2005 at 04:13:28PM -0500, Ryan Krauss wrote: > > > Sorry, yes I meant to use a BSD license. I don't know completely how > > > this will work though because Maxima is GPL. I don't link to Maxima > > > in anyway, I just call it from the command line. Does that pose a > > > problem for comercial use? Can you bundle GPL software with your own > > > proprietary stuff if you don't link to it? > > > > If you don't link to GPL'd code, that license doesn't pose a problem. > > You are allowed to bundle GPL software with commercial packages, as > > long as you distribute the source as well. > > > > In fact, you are allowed to sell GPL software -- but finding buyers > > might prove difficult :) > > > > St?fan > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.net > > http://www.scipy.net/mailman/listinfo/scipy-user > > > From ryanlists at gmail.com Thu Nov 17 20:14:08 2005 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 17 Nov 2005 20:14:08 -0500 Subject: [SciPy-user] python+maxima+latex for symbolic stuff In-Reply-To: <20051117232654.GB32661@alpha> References: <437CF136.7070806@colorado.edu> <20051117232654.GB32661@alpha> Message-ID: I found this under the FAQ at gnu.org and it would appear that Stephan is right (but the gnu folks don't want to say that for sure): ============================ What is the difference between "mere aggregation" and "combining two modules into one program"? Mere aggregation of two programs means putting them side by side on the same CD-ROM or hard disk. We use this term in the case where they are separate programs, not parts of a single program. In this case, if one of the programs is covered by the GPL, it has no effect on the other program. Combining two modules means connecting them together so that they form a single larger program. If either part is covered by the GPL, the whole combination must also be released under the GPL--if you can't, or won't, do that, you may not combine them. What constitutes combining two parts into one program? This is a legal question, which ultimately judges will decide. We believe that a proper criterion depends both on the mechanism of communication (exec, pipes, rpc, function calls within a shared address space, etc.) and the semantics of the communication (what kinds of information are interchanged). If the modules are included in the same executable file, they are definitely combined in one program. If modules are designed to run linked together in a shared address space, that almost surely means combining them into one program. By contrast, pipes, sockets and command-line arguments are communication mechanisms normally used between two separate programs. So when they are used for communication, the modules normally are separate programs. 
But if the semantics of the communication are intimate enough, exchanging complex internal data structures, that too could be a basis to consider the two parts as combined into a larger program. ========================= I think what I have done is a "mere aggregation" so that it could be distributed on the same CD rom as Maxima and not need to be GPL - i.e. that Maxima's GPL license would not affect anything else being distributed with it. My input/output sharing is less sophisticated that pipes. Ryan On 11/17/05, Stefan van der Walt wrote: > On Thu, Nov 17, 2005 at 04:13:28PM -0500, Ryan Krauss wrote: > > Sorry, yes I meant to use a BSD license. I don't know completely how > > this will work though because Maxima is GPL. I don't link to Maxima > > in anyway, I just call it from the command line. Does that pose a > > problem for comercial use? Can you bundle GPL software with your own > > proprietary stuff if you don't link to it? > > If you don't link to GPL'd code, that license doesn't pose a problem. > You are allowed to bundle GPL software with commercial packages, as > long as you distribute the source as well. > > In fact, you are allowed to sell GPL software -- but finding buyers > might prove difficult :) > > St?fan > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From ted.horst at earthlink.net Thu Nov 17 11:32:35 2005 From: ted.horst at earthlink.net (Ted Horst) Date: Thu, 17 Nov 2005 10:32:35 -0600 Subject: [SciPy-user] scipy_core bug in where Message-ID: I have discovered a bug in the where function in scipy_core. >>> from scipy import array, int16, where >>> a = array([1,2,3,-32766,4,5,-32766,6], dtype = int16) >>> where(a != -32766, a, a.max()) array([ 65538, 229378, 262149, 6, 65538, 229378, 6, -2147352570]) First, I was a little surprised that a.max() was type int32 rather than int16 (which causes the result to have dtype int32), but then a is clearly be misinterpreted. It can be worked around like this: >>> where(a != -32766, a, a.max().astype(int16)) array([1, 2, 3, 6, 4, 5, 6, 6], dtype=int16) From ryanlists at gmail.com Thu Nov 17 23:27:26 2005 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 17 Nov 2005 23:27:26 -0500 Subject: [SciPy-user] good python books Message-ID: My parents are asking for Christmas gift ideas. Can anyone recommend python books that are worth owning - especially technical/scientific ones or advanced ones. Thanks, Ryan From oliphant at ee.byu.edu Thu Nov 17 23:42:06 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 17 Nov 2005 21:42:06 -0700 Subject: [SciPy-user] scipy_core bug in where In-Reply-To: References: Message-ID: <437D5B9E.7030405@ee.byu.edu> Ted Horst wrote: >I have discovered a bug in the where function in scipy_core. > > >>> from scipy import array, int16, where > >>> a = array([1,2,3,-32766,4,5,-32766,6], dtype = int16) > >>> where(a != -32766, a, a.max()) >array([ 65538, 229378, 262149, 6, 65538, > 229378, 6, -2147352570]) > > >First, I was a little surprised that a.max() was type int32 rather than >int16 (which causes the result to have dtype int32), but then a is >clearly be misinterpreted. > > Thanks for catching the bug. We'll look into it. By the way, you can reduce in the type int16 if you need to using a.max(rtype=int16) -- This was recently changed from that default to avoid a different kind of surprise (wrap around effects). This usage makes me regret somewhat the switch. 
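Put together as one runnable snippet, the workarounds from this exchange look like the following (the rtype keyword is as described just above; exact behavior may differ between scipy_core revisions):

# Workarounds discussed in this thread for the int16 where() problem.
from scipy import array, int16, where

a = array([1, 2, 3, -32766, 4, 5, -32766, 6], dtype=int16)

# Reduce in int16 so the fill value matches a's dtype.
b = where(a != -32766, a, a.max(rtype=int16))
print b

# Or skip where() entirely and assign in place.
a[a == -32766] = a.max()
print a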
>It can be worked around like this: > > > Also. where( a != -32766, a, a.max(rtype=int16)) Notice in this case you could also do a[a==-32766] = a.max() -Travis From Fernando.Perez at colorado.edu Fri Nov 18 00:12:05 2005 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Thu, 17 Nov 2005 22:12:05 -0700 Subject: [SciPy-user] scipy_core bug in where In-Reply-To: <437D5B9E.7030405@ee.byu.edu> References: <437D5B9E.7030405@ee.byu.edu> Message-ID: <437D62A5.5080001@colorado.edu> Travis Oliphant wrote: >>First, I was a little surprised that a.max() was type int32 rather than >>int16 (which causes the result to have dtype int32), but then a is >>clearly be misinterpreted. >> >> > > Thanks for catching the bug. We'll look into it. > > By the way, you can reduce in the type int16 if you > need to using a.max(rtype=int16) -- This was recently changed from > that default to avoid a different kind of surprise (wrap around effects). > This usage makes me regret somewhat the switch. I still think that for accumulators (things like sum/prod), getting rid of the wrap around bug is the right decision. But things like min/max can, by definition, never overflow. Are you using the same internal machinery for both, so that distinguishing between 'things that can overflow' and things that can't forces an extra code path/slowdown? Would it be possible to resolve this at the top level code, so that no if statements need to be introduced in every evaluation? Sorry for being a little vague, but I just don't know that code well enough to word things more precisely. Still, I think that giving users a reliable a.sum() was a good thing. Cheers, f From oliphant at ee.byu.edu Thu Nov 17 23:47:24 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 17 Nov 2005 21:47:24 -0700 Subject: [SciPy-user] scipy_core bug in where In-Reply-To: References: Message-ID: <437D5CDC.6040407@ee.byu.edu> Ted Horst wrote: >I have discovered a bug in the where function in scipy_core. > > >>> from scipy import array, int16, where > >>> a = array([1,2,3,-32766,4,5,-32766,6], dtype = int16) > >>> where(a != -32766, a, a.max()) >array([ 65538, 229378, 262149, 6, 65538, > 229378, 6, -2147352570]) > > >First, I was a little surprised that a.max() was type int32 rather than >int16 (which causes the result to have dtype int32), but then a is >clearly be misinterpreted. > > Actually a.max() should return int16. It is an oversight and will be fixed. -travis From mayer at acm.org Fri Nov 18 00:28:53 2005 From: mayer at acm.org (Robert George Mayer) Date: Thu, 17 Nov 2005 22:28:53 -0700 Subject: [SciPy-user] good python books In-Reply-To: References: Message-ID: <200511172228.54123.mayer@acm.org> On Thursday 17 November 2005 09:27 pm, Ryan Krauss wrote: > My parents are asking for Christmas gift ideas. Can anyone recommend > python books that are worth owning - especially technical/scientific > ones or advanced ones. > > Thanks, > > Ryan Python Scripting for Computational Science, ISBN 3-540-43508-5. For prices look at: http://www3.addall.com/New/submitNew.cgi?query=3-540-43508-5&type=ISBN&location=10000&state=&dispCurr=USD A good book, although the author thinks that Tkinter is a reasonable GUI. 
- BOB From mantha at chem.unr.edu Fri Nov 18 01:24:43 2005 From: mantha at chem.unr.edu (Jordan Mantha) Date: Thu, 17 Nov 2005 22:24:43 -0800 Subject: [SciPy-user] good python books In-Reply-To: <200511172228.54123.mayer@acm.org> References: <200511172228.54123.mayer@acm.org> Message-ID: <437D73AB.1090901@chem.unr.edu> Robert George Mayer wrote: > On Thursday 17 November 2005 09:27 pm, Ryan Krauss wrote: >> My parents are asking for Christmas gift ideas. Can anyone recommend >> python books that are worth owning - especially technical/scientific >> ones or advanced ones. >> >> Thanks, >> >> Ryan > > Python Scripting for Computational Science, ISBN 3-540-43508-5. For prices > look at: > > http://www3.addall.com/New/submitNew.cgi?query=3-540-43508-5&type=ISBN&location=10000&state=&dispCurr=USD > > A good book, although the author thinks that Tkinter is a reasonable GUI. > > - BOB > I would like to second Bob's recommendation. I have been using this book for learning Python. I get a lot more out of it than a lot of other python books because it is scientifically oriented. It seems a lot of Python books I pick up at the library are more for web or programming applications. It has a fair amount of info on using numarray (or scipy or Numeric) to speed up computationally intensive code. I also find the examples are much easier for me to learn by because I understand more of what the code is trying to achieve. That said, I agree that the Gui part isn't very interesting to me. I would rather try wxpython or pygtk. It is also nice to have gone through a quick tutorial online or something because it does assume some basic programming knowledge. Anyway, that's my $0.02 -Jordn Mantha From robert.kern at gmail.com Fri Nov 18 02:42:14 2005 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 17 Nov 2005 23:42:14 -0800 Subject: [SciPy-user] good python books In-Reply-To: References: Message-ID: <437D85D6.90202@gmail.com> Ryan Krauss wrote: > My parents are asking for Christmas gift ideas. Can anyone recommend > python books that are worth owning - especially technical/scientific > ones or advanced ones. _Guide to SciPy: Core System_ http://www.tramy.us/ -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From saliola at gmail.com Fri Nov 18 04:46:32 2005 From: saliola at gmail.com (Franco Saliola) Date: Fri, 18 Nov 2005 04:46:32 -0500 Subject: [SciPy-user] Building on 64 bit machines In-Reply-To: References: <43746180.1000301@ee.byu.edu> Message-ID: Hello Arnd, On 11/11/05, Arnd Baecker wrote: > On Fri, 11 Nov 2005, Travis Oliphant wrote: > > To be honest, I don't know if the old scipy ever worked on 64-bit machines. > > I got it working - with quite a bit of help here. Thanks for your notes on the installation. But you should be aware that these notes are no longer valid as the latest trunk in scipy_core doesn't contain a scipy_distutils directory. And it is not clear -- to me at least -- how to install scipy_distutils. 
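As the reply below explains, scipy_distutils now lives inside scipy_core as scipy.distutils, so an old setup script changes only its import line. A hypothetical minimal example, assuming the setup entry point is exposed under the new name just as before (the package name here is made up):

# Hypothetical sketch: the old 'from scipy_distutils.core import setup'
# becomes the following once scipy_core is installed.
from scipy.distutils.core import setup

setup(name='example_pkg',
      version='0.1',
      packages=['example_pkg'])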
Franco -- From oliphant at ee.byu.edu Fri Nov 18 05:42:04 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 18 Nov 2005 03:42:04 -0700 Subject: [SciPy-user] Building on 64 bit machines In-Reply-To: References: <43746180.1000301@ee.byu.edu> Message-ID: <437DAFFC.10503@ee.byu.edu> Franco Saliola wrote: >Hello Arnd, > >On 11/11/05, Arnd Baecker wrote: > > >>On Fri, 11 Nov 2005, Travis Oliphant wrote: >> >> >>>To be honest, I don't know if the old scipy ever worked on 64-bit machines. >>> >>> >>I got it working - with quite a bit of help here. >> >> > >Thanks for your notes on the installation. But you should be aware >that these notes are no longer valid as the latest trunk in scipy_core >doesn't contain a scipy_distutils directory. And it is not clear -- to >me at least -- how to install scipy_distutils. > > > It's now called scipy.distutils and it comes with scipy_core. -Travis From jens_brandenburg at gmx.net Fri Nov 18 05:52:18 2005 From: jens_brandenburg at gmx.net (Jens Brandenburg) Date: Fri, 18 Nov 2005 11:52:18 +0100 Subject: [SciPy-user] building scipy In-Reply-To: <437CAE92.30203@mecha.uni-stuttgart.de> References: <17945.1132068401@www58.gmx.net> <4379FFCC.7080004@cs.kuleuven.ac.be> <437A5F85.8020703@gmx.net> <437CAE1E.3000707@gmx.net> <437CAE92.30203@mecha.uni-stuttgart.de> Message-ID: <437DB262.3010106@gmx.net> hi Nils, Can I use matplotlib without Numeric or numarray 'cause I installed newscipy with scipy_core instead... cheers Jens From dd55 at cornell.edu Fri Nov 18 06:27:26 2005 From: dd55 at cornell.edu (Darren Dale) Date: Fri, 18 Nov 2005 06:27:26 -0500 Subject: [SciPy-user] building scipy In-Reply-To: <437DB262.3010106@gmx.net> References: <17945.1132068401@www58.gmx.net> <437CAE92.30203@mecha.uni-stuttgart.de> <437DB262.3010106@gmx.net> Message-ID: <200511180627.26781.dd55@cornell.edu> On Friday 18 November 2005 5:52 am, Jens Brandenburg wrote: > hi Nils, > > Can I use matplotlib without Numeric or numarray 'cause I installed > newscipy with scipy_core instead... You can use matplotlib with scipy_core, but you need to have Numeric 24.x installed to do so. Darren From jens_brandenburg at gmx.net Fri Nov 18 06:51:28 2005 From: jens_brandenburg at gmx.net (Jens Brandenburg) Date: Fri, 18 Nov 2005 12:51:28 +0100 Subject: [SciPy-user] building scipy In-Reply-To: <200511180627.26781.dd55@cornell.edu> References: <17945.1132068401@www58.gmx.net> <437CAE92.30203@mecha.uni-stuttgart.de> <437DB262.3010106@gmx.net> <200511180627.26781.dd55@cornell.edu> Message-ID: <437DC040.80902@gmx.net> hello, Isn't it possible to use something like scipy.base instead of Numeric? Because if I understand it right scipy_core should replace Numeric... Again a more general question raises: "What was before Numeric is now...?" --> To use old python programs I have to replace "from Numeric import */ import Numeric" by what? Thanks Jens From nwagner at mecha.uni-stuttgart.de Fri Nov 18 06:58:27 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Fri, 18 Nov 2005 12:58:27 +0100 Subject: [SciPy-user] building scipy In-Reply-To: <437DC040.80902@gmx.net> References: <17945.1132068401@www58.gmx.net> <437CAE92.30203@mecha.uni-stuttgart.de> <437DB262.3010106@gmx.net> <200511180627.26781.dd55@cornell.edu> <437DC040.80902@gmx.net> Message-ID: <437DC1E3.60106@mecha.uni-stuttgart.de> Jens Brandenburg wrote: >hello, > >Isn't it possible to use something like scipy.base instead of Numeric? >Because if I understand it right scipy_core should replace Numeric... 
> >Again a more general question raises: "What was before Numeric is >now...?" --> To use old python programs I have to replace "from Numeric >import */ import Numeric" by what? > >Thanks > >Jens > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > Please read http://numeric.scipy.org/ Nils From robert.kern at gmail.com Fri Nov 18 07:11:57 2005 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 18 Nov 2005 04:11:57 -0800 Subject: [SciPy-user] building scipy In-Reply-To: <437DC040.80902@gmx.net> References: <17945.1132068401@www58.gmx.net> <437CAE92.30203@mecha.uni-stuttgart.de> <437DB262.3010106@gmx.net> <200511180627.26781.dd55@cornell.edu> <437DC040.80902@gmx.net> Message-ID: <437DC50D.7030301@gmail.com> Jens Brandenburg wrote: > hello, > > Isn't it possible to use something like scipy.base instead of Numeric? > Because if I understand it right scipy_core should replace Numeric... Certainly. Someone simply has to do the work on matplotlib to do this. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From a.h.jaffe at gmail.com Fri Nov 18 07:20:19 2005 From: a.h.jaffe at gmail.com (Andrew Jaffe) Date: Fri, 18 Nov 2005 12:20:19 +0000 Subject: [SciPy-user] Problems with linalg.cholesky? Message-ID: Hi all, In the newest incarnation of scipy_core, I am having trouble with the cholesky(a) routine: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/linalg/basic_lite.py in cholesky_decomposition(a) 115 else: 116 lapack_routine = lapack_lite.dpotrf --> 117 results = lapack_routine('L', n, a, m, 0) 118 if results['info'] > 0: 119 raise LinAlgError, 'Matrix is not positive definite - Cholesky decomposition cannot be computed' LapackError: Parameter a is not contiguous in lapack_lite.dpotrf But this isn't true; I get this error even when I pass trivial and contiguous matrices such as the output of identity(). Other linalg routines (included complicated ones like singular_value_decomp) seem to work fine. Any ideas? Andrew From jens_brandenburg at gmx.net Fri Nov 18 08:43:08 2005 From: jens_brandenburg at gmx.net (Jens Brandenburg) Date: Fri, 18 Nov 2005 14:43:08 +0100 Subject: [SciPy-user] building scipy In-Reply-To: <437DC50D.7030301@gmail.com> References: <17945.1132068401@www58.gmx.net> <437CAE92.30203@mecha.uni-stuttgart.de> <437DB262.3010106@gmx.net> <200511180627.26781.dd55@cornell.edu> <437DC040.80902@gmx.net> <437DC50D.7030301@gmail.com> Message-ID: <437DDA6C.3000703@gmx.net> hi, Robert Kern wrote: >Certainly. Someone simply has to do the work on matplotlib to do this. > Any volunteers??? I could have look on it but I need some more information... for instance: ?scipy.base==Numeric? Well, looking forward... Jens From jh at oobleck.astro.cornell.edu Fri Nov 18 12:12:10 2005 From: jh at oobleck.astro.cornell.edu (Joe Harrington) Date: Fri, 18 Nov 2005 12:12:10 -0500 Subject: [SciPy-user] python+maxima+latex for symbolic stuff In-Reply-To: (scipy-user-request@scipy.net) References: Message-ID: <200511181712.jAIHCAQT029481@oobleck.astro.cornell.edu> I agree that as long as you're just using the command line, you're safe, though I wonder how complex the command lines have to be before GNU regards them as integrated. That's said... 1. I think (IANAL, etc.) that you're personally safe in that you don't actually distribute maxima yourself. 
If a derived work is created at all, someone else is doing it. 2. If a distributor wanted to be paranoid, they could put all their GPL software in one CD set, all the rest in another, distribute them separately, and say, "if you get both CDs and install them as advertized, you get the following added functionality". That way, the USER is creating any derived works. Since the user didn't distribute a copy of the derived work she created, she's safe. 2. Distribute under BOTH licenses. Let the user decide which to use, PERL style. The GPL doesn't say that software that is combined with it may not *also* be licensed under terms of other licenses, when not combined with it. It just says that it must be licensed under GPL when used with other GPL software. Symbolics.com still sells a commercial version of MACSYMA. So, someone commercial wanting to sell it binary-only and being nervous about maxima's GPL could contribute an interface to MACSYMA and bundle that (after dealing with Symbolics' lawyers), or could (try to) cut a deal for a different license with the authors of maxima, or could (hah!) develop their own MACSYMA clone. Your option for BSD-style licensing would then already be available to them. 3. Anyone packaging the combined program would have to put in the source code to maxima, under its GNU license. In that case, they would probably not care whether they distributed the source to your program as well. --jh-- From wjdandreta at att.net Fri Nov 18 12:14:00 2005 From: wjdandreta at att.net (Bill Dandreta) Date: Fri, 18 Nov 2005 12:14:00 -0500 Subject: [SciPy-user] good python books In-Reply-To: <437D73AB.1090901@chem.unr.edu> References: <200511172228.54123.mayer@acm.org> <437D73AB.1090901@chem.unr.edu> Message-ID: <437E0BD8.1080602@att.net> Jordan Mantha wrote: >I would like to second Bob's recommendation. I have been using this book >for learning Python. I get a lot more out of it than a lot of other >python books because it is scientifically oriented. It seems a lot of >Python books I pick up at the library are more for web or programming >applications. It has a fair amount of info on using numarray (or scipy >or Numeric) to speed up computationally intensive code. I also find the >examples are much easier for me to learn by because I understand more of >what the code is trying to achieve. >That said, I agree that the Gui part isn't very interesting to me. I >would rather try wxpython or pygtk. It is also nice to have gone through >a quick tutorial online or something because it does assume some basic >programming knowledge. Anyway, that's my $0.02 > Here's a good resource for Python tutorials (over 200 listed). http://www.awaretek.com/tutorials.html Bill From saliola at gmail.com Fri Nov 18 16:17:17 2005 From: saliola at gmail.com (Franco Saliola) Date: Fri, 18 Nov 2005 16:17:17 -0500 Subject: [SciPy-user] Building on 64 bit machines In-Reply-To: <437D1FD2.6070603@ee.byu.edu> References: <43746180.1000301@ee.byu.edu> <437D1FD2.6070603@ee.byu.edu> Message-ID: > >But I can't get new scipy to work. In > >following the instructions on the Build New SciPy website, > > > >http://www.scipy.org/documentation/buildnewscipy.txt Almost there. I am getting an error during the test. 
====================================================================== ERROR: test_asmatrix (scipy.base.matrix.test_matrix.test_properties) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/Python-2.4.2-newSciPy/lib/python2.4/site-packages/scipy/base/ tests/test_matrix.py", line 73, in test_asmatrix mA = asmatrix(A) NameError: global name 'asmatrix' is not defined ====================================================================== FAIL: test_basic (scipy.base.matrix.test_matrix.test_autocasting) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/Python-2.4.2-newSciPy/lib/python2.4/site-packages/scipy/base/ tests/test_matrix.py", line 87, in test_basic assert mB.dtype == float64 AssertionError ---------------------------------------------------------------------- Ran 1370 tests in 107.806s Suggestions? Thanks, Franco -- From oliphant at ee.byu.edu Fri Nov 18 16:20:08 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 18 Nov 2005 14:20:08 -0700 Subject: [SciPy-user] Building on 64 bit machines In-Reply-To: References: <43746180.1000301@ee.byu.edu> <437D1FD2.6070603@ee.byu.edu> Message-ID: <437E4588.7090804@ee.byu.edu> Franco Saliola wrote: >>>But I can't get new scipy to work. In >>>following the instructions on the Build New SciPy website, >>> >>>http://www.scipy.org/documentation/buildnewscipy.txt >>> >>> > >Almost there. I am getting an error during the test. > >====================================================================== >ERROR: test_asmatrix (scipy.base.matrix.test_matrix.test_properties) >---------------------------------------------------------------------- >Traceback (most recent call last): > File "/usr/local/Python-2.4.2-newSciPy/lib/python2.4/site-packages/scipy/base/ >tests/test_matrix.py", line 73, in test_asmatrix > mA = asmatrix(A) >NameError: global name 'asmatrix' is not defined > >====================================================================== >FAIL: test_basic (scipy.base.matrix.test_matrix.test_autocasting) >---------------------------------------------------------------------- >Traceback (most recent call last): > File "/usr/local/Python-2.4.2-newSciPy/lib/python2.4/site-packages/scipy/base/ >tests/test_matrix.py", line 87, in test_basic > assert mB.dtype == float64 >AssertionError > >---------------------------------------------------------------------- >Ran 1370 tests in 107.806s > >Suggestions? > > Update your tree and try again. You caught it in the middle of changes. -Travis From saliola at gmail.com Fri Nov 18 16:24:54 2005 From: saliola at gmail.com (Franco Saliola) Date: Fri, 18 Nov 2005 16:24:54 -0500 Subject: [SciPy-user] Building on 64 bit machines In-Reply-To: <437E4588.7090804@ee.byu.edu> References: <43746180.1000301@ee.byu.edu> <437D1FD2.6070603@ee.byu.edu> <437E4588.7090804@ee.byu.edu> Message-ID: Travis suggested: > Update your tree and try again. You caught it in the middle of changes. That's funny. :-) -- From saliola at gmail.com Fri Nov 18 16:37:10 2005 From: saliola at gmail.com (Franco Saliola) Date: Fri, 18 Nov 2005 16:37:10 -0500 Subject: [SciPy-user] Building on 64 bit machines In-Reply-To: References: <43746180.1000301@ee.byu.edu> <437D1FD2.6070603@ee.byu.edu> <437E4588.7090804@ee.byu.edu> Message-ID: On 11/18/05, Franco Saliola wrote: > Travis suggested: > > > Update your tree and try again. You caught it in the middle of changes. > > That's funny. :-) And that did it. 
I can now install the new scipy on a 64 bit machine. I even got the OLD scipy to install with Arnd Baecker's instructions and Travis's note. Now on to installing matplotlib... . Thanks. Franco -- From oliphant at ee.byu.edu Fri Nov 18 16:48:30 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 18 Nov 2005 14:48:30 -0700 Subject: [SciPy-user] Building on 64 bit machines In-Reply-To: References: <43746180.1000301@ee.byu.edu> <437D1FD2.6070603@ee.byu.edu> <437E4588.7090804@ee.byu.edu> Message-ID: <437E4C2E.5010709@ee.byu.edu> Franco Saliola wrote: >On 11/18/05, Franco Saliola wrote: > > >>Travis suggested: >> >> >> >>>Update your tree and try again. You caught it in the middle of changes. >>> >>> >>That's funny. :-) >> >> > >And that did it. I can now install the new scipy on a 64 bit machine. > >I even got the OLD scipy to install with Arnd Baecker's instructions >and Travis's note. > >Now on to installing matplotlib... . > > > Be sure to install Numeric 24.2 for now, as well, as matplotlib does not yet recognize scipy core arrays natively. Anybody want a nice, helpful project. Convert matplotlib to work with scipy core directly (no need for the numerix layer). -Travis From lanceboyle at bluebottle.com Fri Nov 18 23:39:29 2005 From: lanceboyle at bluebottle.com (lanceboyle at bluebottle.com) Date: Fri, 18 Nov 2005 21:39:29 -0700 Subject: [SciPy-user] python+maxima+latex for symbolic stuff In-Reply-To: <200511181712.jAIHCAQT029481@oobleck.astro.cornell.edu> References: <200511181712.jAIHCAQT029481@oobleck.astro.cornell.edu> Message-ID: <61C98DAB-814C-4C29-B5F3-DEFC065581B0@bluebottle.com> Could someone provide a couple screen shots of this set-up and a description of what it does and so on? I'd like to know a bit more before I install everything. Jerry From stefan at sun.ac.za Sat Nov 19 05:13:34 2005 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sat, 19 Nov 2005 12:13:34 +0200 Subject: [SciPy-user] calculating factorial Message-ID: <20051119101334.GJ9916@alpha> I couldn't find "factorial", "combination", "permutation" etc. in SciPy. Luckily, they're simple -- only problem is that my implementation doesn't work :) See the code listing below. I thought that the two methods for calculating factorials should work equally well. I get an integer overflow, no matter what typecode I use. I realise that this behaviour might have changed with the new scipy-base, so I'd appreciate if someone could test it and see. from scipy import prod, arange, __version__ print "Scipy version", __version__ def factorial1(n): f = 1 while (n >= 2): f, n = f * n, n - 1 return f def factorial2(n): return prod(arange(2, n+1, typecode='u')) print "\nFactorial method 1", factorial1(100) print "\nFactorial method 2", factorial2(100) Scipy version 0.3.3_309.4626 Factorial method 1 93326215443944152681699238856266700490715968264381621468592963895217599993229915608941463976156518286253697920827223758251185210916864000000000000000000000000 Factorial method 2 Traceback (most recent call last): File "./fact.py", line 17, in ? print "\nFactorial method 2", factorial2(100) File "./fact.py", line 14, in factorial2 return prod(arange(2, n+1, typecode='u')) File "/home/stefan/lib/python/scipy_base/function_base.py", line 208, in prod return _no_axis_is_all(multiply.reduce, m, axis) File "/home/stefan/lib/python/scipy_base/function_base.py", line 164, in _no_axis_is_all r = function(m, axis) ArithmeticError: Integer overflow in multiply. 
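One way around the overflow, sketched as a function: accumulate in ordinary Python integers by using an object-dtype array, so multiply.reduce never hits a fixed-width type. This assumes a new scipy_core build where the dtype=object spelling is available (it is not the scipy_base 0.3.3 API used above); the plain-Python loop in factorial1 is the always-safe fallback.

# Sketch: keep the product in Python's arbitrary-precision integers.
from scipy import array, multiply

def factorial_obj(n):
    a = array(range(2, n + 1), dtype=object)   # elements stay Python ints
    return multiply.reduce(a)

print factorial_obj(100)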
St?fan From robert.kern at gmail.com Sat Nov 19 09:45:01 2005 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 19 Nov 2005 06:45:01 -0800 Subject: [SciPy-user] calculating factorial In-Reply-To: <20051119101334.GJ9916@alpha> References: <20051119101334.GJ9916@alpha> Message-ID: <437F3A6D.7040103@gmail.com> Stefan van der Walt wrote: > I couldn't find "factorial", "combination", "permutation" etc. in > SciPy. Luckily, they're simple -- only problem is that my > implementation doesn't work :) > > See the code listing below. I thought that the two methods for > calculating factorials should work equally well. I get an integer > overflow, no matter what typecode I use. 100! simply isn't representable by any of the numerical datatypes in Numeric or scipy_core. It's too large. However, you can use arrays of Python objects to utilize Python's long integers. In scipy_core: In [11]: a = array(range(2,101), dtype=object) In [12]: multiply.reduce(a) Out[12]: 93326215443944152681699238856266700490715968264381621468592963895217599993229915608941463976156518286253697920827223758251185210916864000000000000000000000000L -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From ryanlists at gmail.com Sat Nov 19 13:07:25 2005 From: ryanlists at gmail.com (Ryan Krauss) Date: Sat, 19 Nov 2005 13:07:25 -0500 Subject: [SciPy-user] python+maxima+latex for symbolic stuff In-Reply-To: <61C98DAB-814C-4C29-B5F3-DEFC065581B0@bluebottle.com> References: <200511181712.jAIHCAQT029481@oobleck.astro.cornell.edu> <61C98DAB-814C-4C29-B5F3-DEFC065581B0@bluebottle.com> Message-ID: I could work on screen shots but I don't know if they would be terribly interesting. What is does primarily is allow very clean LaTeX-symbolic integration. I really like including explanations in my derivations and I like looking at pretty LaTeX output because it helps me spot errors when I am looking at well formatted output instead of the kind of command line output that is typical of computer algebra programs. Maybe I should have made clearer that it doesn't actually provide any computer algebra functionality that isn't already in Maxima. It just changes the interface so that instead of working with Maxima or Xmaxima directly, you write a LaTeX-like file so that you can do something like: This is some explanation of my equation \begin{maxima} eq1:x+y=2 \end{maxima} This is some comment after my equation. So it is sort of a literate programming approach to LaTeX and symbolic stuff. Python takes what is in the \begin{maxima}..\end{maxima} and passes it to Maxima, Python then replaces the maxima environment with This is some explanation of my equation \begin{equation} output for Maxima \end{equation}. This is some comment after my equation. Perhaps not rocket science, but I really like my derivations to be done in LaTeX and I don't like the Maxima user interface very much. So, if I had screen shots, they would show a LaTeX input file, a Python script that goes between LaTeX and Maxima and then a pretty LaTeX output. It really should be painless to download and install and it is very small (80kb). This was my first distutils experience. Any comments on whether it installed as expected? Ryan On 11/18/05, lanceboyle at bluebottle.com wrote: > Could someone provide a couple screen shots of this set-up and a > description of what it does and so on? I'd like to know a bit more > before I install everything. 
> > Jerry > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From lanceboyle at bluebottle.com Sat Nov 19 18:34:30 2005 From: lanceboyle at bluebottle.com (lanceboyle at bluebottle.com) Date: Sat, 19 Nov 2005 16:34:30 -0700 Subject: [SciPy-user] python+maxima+latex for symbolic stuff In-Reply-To: References: <200511181712.jAIHCAQT029481@oobleck.astro.cornell.edu> <61C98DAB-814C-4C29-B5F3-DEFC065581B0@bluebottle.com> Message-ID: <6D572829-6533-40F1-8842-E39D9085175A@bluebottle.com> Thanks--that helps. Jerry On Nov 19, 2005, at 11:07 AM, Ryan Krauss wrote: > I could work on screen shots but I don't know if they would be > terribly interesting. What is does primarily is allow very clean > LaTeX-symbolic integration. I really like including explanations in > my derivations and I like looking at pretty LaTeX output because it > helps me spot errors when I am looking at well formatted output > instead of the kind of command line output that is typical of computer > algebra programs. > > Maybe I should have made clearer that it doesn't actually provide any > computer algebra functionality that isn't already in Maxima. It just > changes the interface so that instead of working with Maxima or > Xmaxima directly, you write a LaTeX-like file so that you can do > something like: > > This is some explanation of my equation > \begin{maxima} > eq1:x+y=2 > \end{maxima} > This is some comment after my equation. > > So it is sort of a literate programming approach to LaTeX and symbolic > stuff. Python takes what is in the \begin{maxima}..\end{maxima} and > passes it to Maxima, Python then replaces the maxima environment with > > This is some explanation of my equation > \begin{equation} > output for Maxima > \end{equation}. > This is some comment after my equation. > > Perhaps not rocket science, but I really like my derivations to be > done in LaTeX and I don't like the Maxima user interface very much. > > So, if I had screen shots, they would show a LaTeX input file, a > Python script that goes between LaTeX and Maxima and then a pretty > LaTeX output. > > It really should be painless to download and install and it is very > small (80kb). This was my first distutils experience. Any comments > on whether it installed as expected? > > Ryan > > On 11/18/05, lanceboyle at bluebottle.com > wrote: > >> Could someone provide a couple screen shots of this set-up and a >> description of what it does and so on? I'd like to know a bit more >> before I install everything. >> >> Jerry >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.net >> http://www.scipy.net/mailman/listinfo/scipy-user >> >> > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From oliphant at ee.byu.edu Sat Nov 19 23:47:59 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Sat, 19 Nov 2005 21:47:59 -0700 Subject: [SciPy-user] calculating factorial In-Reply-To: <437F3A6D.7040103@gmail.com> References: <20051119101334.GJ9916@alpha> <437F3A6D.7040103@gmail.com> Message-ID: <437FFFFF.2060004@ee.byu.edu> Robert Kern wrote: >Stefan van der Walt wrote: > > >>I couldn't find "factorial", "combination", "permutation" etc. in >>SciPy. 
Luckily, they're simple -- only problem is that my >>implementation doesn't work :) >> >> Look in scipy.utils -Travis From stefan at sun.ac.za Sun Nov 20 00:13:03 2005 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sun, 20 Nov 2005 07:13:03 +0200 Subject: [SciPy-user] calculating factorial In-Reply-To: <437FFFFF.2060004@ee.byu.edu> References: <20051119101334.GJ9916@alpha> <437F3A6D.7040103@gmail.com> <437FFFFF.2060004@ee.byu.edu> Message-ID: <20051120051303.GB13393@alpha> On Sat, Nov 19, 2005 at 09:47:59PM -0700, Travis Oliphant wrote: > >>I couldn't find "factorial", "combination", "permutation" etc. in > >>SciPy. Luckily, they're simple -- only problem is that my > >>implementation doesn't work :) > >> > >> > Look in scipy.utils I found it in 0.3.3_309.4626 now -- it's under the root scipy namespace. Thanks. St?fan From stefan at sun.ac.za Sun Nov 20 00:09:16 2005 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sun, 20 Nov 2005 07:09:16 +0200 Subject: [SciPy-user] calculating factorial In-Reply-To: <437FFFFF.2060004@ee.byu.edu> References: <20051119101334.GJ9916@alpha> <437F3A6D.7040103@gmail.com> <437FFFFF.2060004@ee.byu.edu> Message-ID: <20051120050916.GA13393@alpha> On Sat, Nov 19, 2005 at 09:47:59PM -0700, Travis Oliphant wrote: > >>I couldn't find "factorial", "combination", "permutation" etc. in > >>SciPy. Luckily, they're simple -- only problem is that my > >>implementation doesn't work :) > >> > >> > Look in scipy.utils I am still using 0.3.3_309.4626, but thanks for the note -- I'll keep it in mind when I upgrade. St?fan From travis at enthought.com Mon Nov 21 09:45:01 2005 From: travis at enthought.com (Travis N. Vaught) Date: Mon, 21 Nov 2005 08:45:01 -0600 Subject: [SciPy-user] OT: Enthought IT Admin Job Posting Message-ID: <4381DD6D.9030202@enthought.com> All, I'm posting this here in the hopes that someone that is passionate about python/numeric/scipy will "answer the call": `Enthought, Inc. `__ (Austin, TX, USA) ==================================================================== **Job Description**: Position: IT Administrator --------------------------- Enthought is looking for an exceptional IT Administrator to manage the IT infrastructure for it's Austin, TX offices. This person will have a passion for supporting software development tools and environments. The target server platforms include Linux (RedHat and Fedora), Windows XP, and Solaris. Workstations are a mix of Windows and Linux. We're looking for an Admin that focuses on supporting software developers. Desired Skills and Capabilities: * B.S. in Computer Science of other related field (preferably not MIS) * 5+ Years Experience in Enterprise IT Administration Role * Ability to program or script in a programming language or shell - (Python, Perl and small C programs) * Ability to solve problems quickly and completely * Strong inter-personal and communication skills as well as a team player in a group of highly talented software developers * Willingness to pitch in wherever needed * Interested in making developer's lives easier Duties: * Perform installation, patching and other server maintenance to ensure security and stability of server infrastructure. 
* Maintain core infrastructure technologies such as Firewall, VPN, apache web server, mail server, NIS, NFS, DNS, DHCP, SSH, network-wide backups, RAID and Samba * Identify routine tasks and automate through shell/Python scripting * Perform on-call duties and out of hours maintenance as necessary * Configure Nortel phone system * Build RPMs for RH/FC Linux * Using development tools on Linux such as gcc/g++, make, autoconf, etc. * Working with Python and building & installing Python packages using distutils * Support developer tools such as SVN repositories, bug trackers, Software Project Management utilities Company: -------- Enthought is a scientific computing company located in downtown Austin, Texas. Founded in 2001, it has grown nearly 100% per year both in staff and revenue to become a stable and profitable technology company. This growth has been possible because of Enthought?s talented team and because of our commitment to developing quality software. We strive to combine advanced algorithm development with modern software practices, such as component based architectures, application scripting, and intuitive user interface design. We take a holistic approach to software development, in which architects and developers team with technical writers, human factors specialists, and project managers (always highly technical individuals) to develop a complete solution for our customers. Much of our work is based on the Python programming language, and we are actively engaged in open source development (www.scipy.org ). We?re lucky enough to work on interesting problems and are looking for talented people to join us. Some of our current efforts are in the areas of geophysics, electromagnetics, fluid dynamics, micro-rheology, CAD, 2-D and 3-D visualization, and others. All of these tools are developed as plug-ins into our Envisage "Scientific IDE" framework. * **Contact**: Travis N. Vaught, CEO * **E-mail contact**: jobs at enthought.com * **Web**: http://www.enthought.com/careers.htm -- ........................ Travis N. Vaught CEO Enthought, Inc. http://www.enthought.com ........................ From pajer at iname.com Mon Nov 21 11:58:56 2005 From: pajer at iname.com (Gary) Date: Mon, 21 Nov 2005 11:58:56 -0500 Subject: [SciPy-user] Multiple versions of functions ? In-Reply-To: <4381DD6D.9030202@enthought.com> References: <4381DD6D.9030202@enthought.com> Message-ID: <4381FCD0.4020602@iname.com> Just starting to explore new scipy core. I'm seeing different versions of the same function. I've seen this before, but I never bothered about it given that I had Numeric, and then scipy duplicating some of the functionality of Numeric. But now I'm wondering what's up. example1: scipy.exp and scipy.special.exp appear to be different. At least their representations and doc strings are. No big deal, but why two versions? example2: scipy.ifft returns a ndarray but scipy.fftpack.ifft returns an old fashioned Numeric array (if I'm not mistaken). Is there some reason? or will all this kind of stuff eventually go away? is one version preferable? (since I want ndarray objects, I don't see any use for scipy.fftpack.ifft) I suppose the "non-preferred" versions might be deprecated, or used behind the scenes somewhere. if so, is there some way of figuring out which is the preferred version (other than trying it out. In other words, does the hierarchical structrure tell me somehow which is the prefered function?) 
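One quick way to check this directly, mirroring the id() comparison in the reply that follows: names bound to the same function object compare identical with 'is'.

import scipy
import scipy.special, scipy.fftpack

# True here means the two names are aliases for one function object;
# a difference usually points at a stale or mixed installation.
print scipy.exp is scipy.special.exp
print scipy.ifft is scipy.fftpack.ifft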
-gary From oliphant at ee.byu.edu Mon Nov 21 13:00:35 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 21 Nov 2005 11:00:35 -0700 Subject: [SciPy-user] Multiple versions of functions ? In-Reply-To: <4381FCD0.4020602@iname.com> References: <4381DD6D.9030202@enthought.com> <4381FCD0.4020602@iname.com> Message-ID: <43820B43.4070008@ee.byu.edu> Gary wrote: >Just starting to explore new scipy core. > >I'm seeing different versions of the same function. I've seen this >before, but I never bothered about it given that I had Numeric, and then >scipy duplicating some of the functionality of Numeric. But now I'm >wondering what's up. > >example1: >scipy.exp and scipy.special.exp appear to be different. At least their >representations and doc strings are. No big deal, but why two versions? > > > These are the same on my system. >>> id(scipy.exp) 136079568 >>> id(scipy.special.exp) 136079568 >example2: >scipy.ifft returns a ndarray but >scipy.fftpack.ifft returns an old fashioned Numeric array (if I'm not >mistaken). > > These are also the same on my system. >>> id(scipy.ifft) 1129116820 >>> id(scipy.fftpack.ifft) 1129116820 Perhaps you may have an installation issue. -Travis From rowen at cesmail.net Mon Nov 21 16:41:07 2005 From: rowen at cesmail.net (Russell E. Owen) Date: Mon, 21 Nov 2005 13:41:07 -0800 Subject: [SciPy-user] good python books References: Message-ID: In article , Ryan Krauss wrote: > My parents are asking for Christmas gift ideas. Can anyone recommend > python books that are worth owning - especially technical/scientific > ones or advanced ones. Python Cookbook, 2nd edition is really nice. It ranges from basic to advanced and I've learned a lot from it. For daily reference I use Python Essential Reference (unfortunately not very up to date) and the on-line manuals. For certain advanced topics I look at Python in a Nutshell. Also, consider books on design patterns and object-oriented design. I'm not familiar with the range of material available, but in the second category I've been enjoying "Applying UML and Patterns", by Craig Larman. -- Russell From pajer at iname.com Mon Nov 21 18:11:20 2005 From: pajer at iname.com (Gary) Date: Mon, 21 Nov 2005 18:11:20 -0500 Subject: [SciPy-user] Multiple versions of functions ? In-Reply-To: <43820B43.4070008@ee.byu.edu> References: <4381DD6D.9030202@enthought.com> <4381FCD0.4020602@iname.com> <43820B43.4070008@ee.byu.edu> Message-ID: <43825418.8030004@iname.com> Travis Oliphant wrote: >Gary wrote: > > > >>Just starting to explore new scipy core. >> >>I'm seeing different versions of the same function. I've seen this >>before, but I never bothered about it given that I had Numeric, and then >>scipy duplicating some of the functionality of Numeric. But now I'm >>wondering what's up. >> >>example1: >>scipy.exp and scipy.special.exp appear to be different. At least their >>representations and doc strings are. No big deal, but why two versions? >> >> >> >> >> >These are the same on my system. > > >>> id(scipy.exp) >136079568 > >>> id(scipy.special.exp) >136079568 > > > >>example2: >>scipy.ifft returns a ndarray but >>scipy.fftpack.ifft returns an old fashioned Numeric array (if I'm not >>mistaken). >> >> >> >> >These are also the same on my system. > > >>> id(scipy.ifft) >1129116820 > >>> id(scipy.fftpack.ifft) >1129116820 > >Perhaps you may have an installation issue. > > Yes, that's it. I tried to install from ground zero, and I begin to realize I don't quite understand the organization ... WinXP. 
I wiped out my site-packages/scipy, but left scipy_base, scipy_distutils, scipy_test. (are they still valid in the "new scipy" ?) Grabbed the current SVN, and compiled using mingw (without ATLAS) using python setup.py config --compiler=mingw32 build --compiler=mingw32 install No errors, and scipy.fft and scipy.fftpack.fft point to the same function. Progress. scipy.exp exists, but there doesn't seem to be scipy.special. (i.e., import scipy.special returns "no such module") This must come from some other part of scipy. So two questions, a specific and a general. The specific: what do I do to get the rest of scipy (assuming my interpretation is correct)? The general: help me out with the general organization. Here's my understanding: Part of "new scipy" replaces Numeric. What exactly is this part called? A different part of "new scipy" are the features built on top of the Numeric replacement. (apparantly scipy.special is one of these, but scipy.fftpack is not) Are these distinguished by their location in the heirarchy? Where are they found? How are they installed? Am I making sense? And apologies for very basic questions. -gary >-Travis > > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > > From pajer at iname.com Mon Nov 21 18:37:39 2005 From: pajer at iname.com (Gary) Date: Mon, 21 Nov 2005 18:37:39 -0500 Subject: [SciPy-user] Multiple versions of functions ? In-Reply-To: <43825418.8030004@iname.com> References: <4381DD6D.9030202@enthought.com> <4381FCD0.4020602@iname.com> <43820B43.4070008@ee.byu.edu> <43825418.8030004@iname.com> Message-ID: <43825A43.4020707@iname.com> Gary wrote: >Travis Oliphant wrote: > > > >>Gary wrote: >> >> >> >> >> >>>Just starting to explore new scipy core. >>> >>>I'm seeing different versions of the same function. I've seen this >>> [...] >>Perhaps you may have an installation issue. >> >> >> >> >Yes, that's it. I tried to install from ground zero, and I begin to >realize I don't quite understand the organization ... > >WinXP. I wiped out my site-packages/scipy, but left scipy_base, >scipy_distutils, scipy_test. (are they still valid in the "new scipy" ?) >Grabbed the current SVN, and compiled using mingw (without ATLAS) using > >python setup.py config --compiler=mingw32 build --compiler=mingw32 install > >No errors, and scipy.fft and scipy.fftpack.fft point to the same >function. Progress. > >scipy.exp exists, but there doesn't seem to be scipy.special. (i.e., >import scipy.special returns "no such module") >This must come from some other part of scipy. > >So two questions, a specific and a general. > >The specific: what do I do to get the rest of scipy (assuming my >interpretation is correct)? > >The general: help me out with the general organization. Here's my >understanding: Part of "new scipy" replaces Numeric. What exactly is >this part called? >A different part of "new scipy" are the features built on top of the >Numeric replacement. (apparantly scipy.special is one of these, but >scipy.fftpack is not) Are these distinguished by their location in the >heirarchy? Where are they found? How are they installed? Am I making >sense? > >And apologies for very basic questions. > >-gary > > I finally took a peek at scipy.org, and found a little enlightenment. Sorry about the noise I wiped out everything scipy from site-packages (renamed, actually) Installed scipy core from the Windows installer on sf.net Then compiled/installed scipy from svn as above. 
Upon import scipy, python crashes with a "do you want to send an error report" window. I'm still using python 2.3. Could that be it? -g From oliphant at ee.byu.edu Mon Nov 21 18:53:42 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 21 Nov 2005 16:53:42 -0700 Subject: [SciPy-user] Multiple versions of functions ? In-Reply-To: <43825418.8030004@iname.com> References: <4381DD6D.9030202@enthought.com> <4381FCD0.4020602@iname.com> <43820B43.4070008@ee.byu.edu> <43825418.8030004@iname.com> Message-ID: <43825E06.1030105@ee.byu.edu> Gary wrote: >Yes, that's it. I tried to install from ground zero, and I begin to >realize I don't quite understand the organization ... > > Your feedback is essential. We want to minimize confusion. There are a lot of people we are trying to bring together and they all have their "way to view" the situation. No doubt there are still confusions. >WinXP. I wiped out my site-packages/scipy, but left scipy_base, >scipy_distutils, scipy_test. (are they still valid in the "new scipy" ?) > > No. These are useless for the new scipy (they won't even be looked at). They have become scipy.base scipy.distutils scipy.test >Grabbed the current SVN, and compiled using mingw (without ATLAS) using > >python setup.py config --compiler=mingw32 build --compiler=mingw32 install > >No errors, and scipy.fft and scipy.fftpack.fft point to the same >function. Progress. > > Great.... What you have now is officially called scipy_core (that's it's package name). Sometimes it's referred to as SciPy Core or just Core between the initiated. SciPy Core is a replacement for all of Numeric (and for most of numarray. There are a few numarray packages that are not included in scipy core --- not sure if these will ever be part of the core or just add-ons --- that will have to be decided. Numeric included some basic fft, linear algebra, and random number routines. These (and a bit more) are all included in scipy core and can be referenced as scipy.fftpack scipy.linalg scipy.random If you install full scipy then scipy.fftpack and scipy.linalg become the expanded versions with expanded and better functionality based on FORTRAN interfaces. The basic versions are always available under scipy.basic.linalg and scipy.basic.fftpack if you want to only ever use the scipy core version. This approach allows you to use scipy.linalg.inv and have it map to the scipy_core version if that's all the user has installed or the full scipy version if that's what the user has installed. -Travis From oliphant at ee.byu.edu Mon Nov 21 18:55:31 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 21 Nov 2005 16:55:31 -0700 Subject: [SciPy-user] Multiple versions of functions ? In-Reply-To: <43825A43.4020707@iname.com> References: <4381DD6D.9030202@enthought.com> <4381FCD0.4020602@iname.com> <43820B43.4070008@ee.byu.edu> <43825418.8030004@iname.com> <43825A43.4020707@iname.com> Message-ID: <43825E73.4010305@ee.byu.edu> >I finally took a peek at scipy.org, and found a little enlightenment. >Sorry about the noise >I wiped out everything scipy from site-packages (renamed, actually) >Installed scipy core from the Windows installer on sf.net >Then compiled/installed scipy from svn as above. > >Upon import scipy, python crashes with a "do you want to send an error >report" window. >I'm still using python 2.3. Could that be it? > > O.K. just a minute. Previously you said you installed scipy_core from svn. Now, you say you download a binary version of scipy_core and then install the same thing again? 
Python 2.3 should work fine. -Travis From pajer at iname.com Mon Nov 21 22:17:49 2005 From: pajer at iname.com (Gary) Date: Mon, 21 Nov 2005 22:17:49 -0500 Subject: [SciPy-user] Multiple versions of functions ? In-Reply-To: <43825E73.4010305@ee.byu.edu> References: <4381DD6D.9030202@enthought.com> <4381FCD0.4020602@iname.com> <43820B43.4070008@ee.byu.edu> <43825418.8030004@iname.com> <43825A43.4020707@iname.com> <43825E73.4010305@ee.byu.edu> Message-ID: <43828DDD.606@iname.com> Travis Oliphant wrote: >>I finally took a peek at scipy.org, and found a little enlightenment. >>Sorry about the noise >>I wiped out everything scipy from site-packages (renamed, actually) >>Installed scipy core from the Windows installer on sf.net >>Then compiled/installed scipy from svn as above. >> >>Upon import scipy, python crashes with a "do you want to send an error >>report" window. >>I'm still using python 2.3. Could that be it? >> >> >> >> > >O.K. just a minute. Previously you said you installed scipy_core from >svn. Now, you say you download a binary version of scipy_core and then >install the same thing again? > > Is that what I did? I looked at the www.scipy.org front page. It says "*This site does not reflect the rapid growth of scipy that has taken place while a new core scipy array object has been built to replace Numeric.* This link provides more information on where to get the new scipy core. A version of scipy that works on newcore is available for anyonymous check out from a subversion repostitory at *http://svn.scipy.org/svn/scipy/trunk*." Where "this link" points to sf.net. One can interpret this as: The new SciPy Core is on sf.net. If you want a full version of SciPy that works with it, then get that part from svn.../scipy/trunk. So are you saying that sf.net and svn both contain only SciPy Core (but different versions)? No, that can't be right. When I installed *only* the svn version, there was no scipy.special. When I subsequently installed the sf.net version, scipy.special was installed. But wait ... maybe I should install the older sf.net first, then the newer svn version afterwards...let me try that. No. Still crashes, with the words "Importing cluster to scipy". I must have something else wrong, a mising prereq or something. Recall that the svn version works fine all by itself. Let me try the sf.net version *only* (wiping site-packages and starting clean).... Hmmm, in this case "import scipy" reports "no module scipy". scipy is in the file system. There is no __init__.py in the scipy directory. I'm using the Windows installer scipy-0.4.3.win32-py2.3.exe Another thought ... when I say "svn" I mean *http://svn.scipy.org/svn/scipy/trunk . Is there another component available via svn that I'm not aware of? thanks for your patience, gary * >Python 2.3 should work fine. > >-Travis > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > > From oliphant at ee.byu.edu Mon Nov 21 22:34:33 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 21 Nov 2005 20:34:33 -0700 Subject: [SciPy-user] Multiple versions of functions ? 
In-Reply-To: <43828DDD.606@iname.com> References: <4381DD6D.9030202@enthought.com> <4381FCD0.4020602@iname.com> <43820B43.4070008@ee.byu.edu> <43825418.8030004@iname.com> <43825A43.4020707@iname.com> <43825E73.4010305@ee.byu.edu> <43828DDD.606@iname.com> Message-ID: <438291C9.8060407@ee.byu.edu> The binary version of scipy will not work with the svn version of scipy_core (new version of C-API for scipy-core). So, I would not try it with anything but the binary version of scipy core available. First wipe everything clean. Either download and install the version of scipy core on the numpy sourceforge project and then the binary version of scipy from the scipy sourceforge project or get the SVN versions of both and install them one after the other. scipy_core is http://svn.scipy.org/svn/scipy_core/trunk full scipy is http://svn.scipy.org/svn/scipy/trunk Good luck. -Travis From sgarcia at olfac.univ-lyon1.fr Tue Nov 22 05:28:42 2005 From: sgarcia at olfac.univ-lyon1.fr (Samuel GARCIA) Date: Tue, 22 Nov 2005 11:28:42 +0100 Subject: [SciPy-user] conv2 Message-ID: <4382F2DA.8030708@olfac.univ-lyon1.fr> Is there a function for 2D convolution ? conv2 in matlab thanks samuel -- Samuel GARCIA CNRS - UMR5020 Universite Claude Bernard LYON 1 Laboratoire des Neurosciences et Systemes Sensoriels 50, avenue Tony Garnier 69366 LYON Cedex 07 04 37 28 74 64 From oliphant at ee.byu.edu Tue Nov 22 06:46:33 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 22 Nov 2005 04:46:33 -0700 Subject: [SciPy-user] conv2 In-Reply-To: <4382F2DA.8030708@olfac.univ-lyon1.fr> References: <4382F2DA.8030708@olfac.univ-lyon1.fr> Message-ID: <43830519.20400@ee.byu.edu> Samuel GARCIA wrote: >Is there a function for 2D convolution ? >conv2 in matlab > >thanks > >samuel > > > Yes, but only in full scipy. scipy.signal.convolve --- N-dimensional convolution scipy.signal.convolve2d -- 2-d convolution with a few extra features. -Travis From vbalko at gmail.com Tue Nov 22 07:06:09 2005 From: vbalko at gmail.com (balky) Date: Tue, 22 Nov 2005 13:06:09 +0100 Subject: [SciPy-user] newscipy Message-ID: <438309B1.3050405@gmail.com> hello, i want to try install newscipy on my win machine. First is needed to install scipy_core 0.6.2, then scipy 0.4.3. But what about numpy - is numpy 24-2 needed to working with scipy correctly? thanks From arnd.baecker at web.de Tue Nov 22 07:06:02 2005 From: arnd.baecker at web.de (Arnd Baecker) Date: Tue, 22 Nov 2005 13:06:02 +0100 (CET) Subject: [SciPy-user] newscipy In-Reply-To: <438309B1.3050405@gmail.com> References: <438309B1.3050405@gmail.com> Message-ID: On Tue, 22 Nov 2005, balky wrote: > hello, > > i want to try install newscipy on my win machine. First is needed to > install scipy_core 0.6.2, then scipy 0.4.3. But what about numpy - is > numpy 24-2 needed to working with scipy correctly? No, scipy_core replaces Numeric. Good luck, Arnd From pjrandew at sun.ac.za Tue Nov 22 10:13:38 2005 From: pjrandew at sun.ac.za (Randewijk P-J ) Date: Tue, 22 Nov 2005 17:13:38 +0200 Subject: [SciPy-user] Binaries for scipy created Message-ID: Thank-you for the binaries, Just a request for 0.4.4... 
Could you make them "quiet" please, so that we don't get: >>> from scipy import * Importing io to scipy Importing fftpack to scipy Importing special to scipy Importing utils to scipy Importing cluster to scipy Importing sparse to scipy Importing interpolate to scipy Importing lib to scipy Importing integrate to scipy Importing signal to scipy Importing optimize to scipy Importing linalg to scipy Importing stats to scipy >>> But for the mean time, 0.4.3 is Great...!! (as I don't know how to make binaries myself...) Thanks for scipy and thanks for the new scipy_core Peter-Jan > -----Original Message----- > From: scipy-user-bounces at scipy.net > [mailto:scipy-user-bounces at scipy.net] On Behalf Of Travis Oliphant > Sent: 14 November 2005 05:51 > To: SciPy Developers List; SciPy Users List > Subject: [SciPy-user] Binaries for scipy created > > > I've used the scipy sourceforge site to place binaries for a > "release" > of full scipy (built on the new core). The version is 0.4.3. > There is > an rpm and windows binaries, as well as a full tar ball. > > I know there are people out there who would like to try scipy > but don't > want to wrestle with the install. The rpms and/or windows binaries > might help. This is the first time, I've made binaries for > other people > to use. Hopefully they work fine, but errors may be reported. > > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net http://www.scipy.net/mailman/listinfo/scipy-user > From worknit at gmail.com Tue Nov 22 16:55:31 2005 From: worknit at gmail.com (Jon Savian) Date: Tue, 22 Nov 2005 13:55:31 -0800 Subject: [SciPy-user] installing scipy 0.3.2 with python 2.4.2 Message-ID: <8d9f49590511221355o464b5a6cwa5043934ca0c2c4f@mail.gmail.com> Hi Is there any drawbacks to installing scipy 0.3.2 with python 2.4.2 as opposed to 2.3.3? Or is scipy fully compatible with 2.4.2? Jon From Michael_OKeefe at nrel.gov Tue Nov 22 18:02:49 2005 From: Michael_OKeefe at nrel.gov (O'Keefe, Michael) Date: Tue, 22 Nov 2005 16:02:49 -0700 Subject: [SciPy-user] question about logical "and" and ndarray in Scipy Message-ID: Hello All: I have a question of the logical 'and' as it relates to ndarray s and whether what I'm seeing is a bug or not. I just upgraded to the latest scipy (as shown below) on a WindowsXP system and found a piece of code that worked previously (essentially line 5 below) but now gives an error. After a little digging, I found that using an '==' would be equivalent to the 'and' I had wanted to use. However, just curious if the syntax in line 5 should have worked as well. [1]>>> import scipy [2]>>> scipy.__scipy_version__ '0.4.3' [3]>>> scipy.__core_version__ '0.6.1' [4]>>> a=scipy.array([1,2,3,4,5,6],'d') [5]>>> idx = scipy.where( (a>2) and (a<5), 1, 0 ) Traceback (most recent call last): File "", line 1, in -toplevel- idx = scipy.where( (a>2) and (a<5), 1, 0 ) ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() [6]>>> idx = scipy.where( (a>2)==(a<5), 1, 0 ) [7]>>> idx array([0, 0, 1, 1, 0, 0]) Thanks in advance, Michael From aisaac at american.edu Tue Nov 22 21:30:43 2005 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 22 Nov 2005 21:30:43 -0500 Subject: [SciPy-user] question about logical "and" and ndarray in Scipy In-Reply-To: References: Message-ID: On Tue, 22 Nov 2005, "O'Keefe, Michael" apparently wrote: > After a little digging, I found that using an '==' would > be equivalent to the 'and' I had wanted to use. 
However, > just curious if the syntax in line 5 should have worked as > well. > [1]>>> import scipy > [2]>>> scipy.__scipy_version__ > '0.4.3' > [3]>>> scipy.__core_version__ > '0.6.1' > [4]>>> a=scipy.array([1,2,3,4,5,6],'d') > [5]>>> idx = scipy.where( (a>2) and (a<5), 1, 0 ) > Traceback (most recent call last): > File "", line 1, in -toplevel- > idx = scipy.where( (a>2) and (a<5), 1, 0 ) > ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() > [6]>>> idx = scipy.where( (a>2)==(a<5), 1, 0 ) > [7]>>> idx > array([0, 0, 1, 1, 0, 0]) If I understand your intent, your code illustrates the reason for the change: (a>2) and (a<5) would just return (a<5). The change is to preclude such code. Use &. hth, Alan Isaac From oliphant at ee.byu.edu Tue Nov 22 21:45:56 2005 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 22 Nov 2005 19:45:56 -0700 Subject: [SciPy-user] Using ctypes and scipy Message-ID: <4383D7E4.9000606@ee.byu.edu> I'm including a (small) section in my book on the use of ctypes with scipy. But, I wanted to let users know a little bit of how nifty it can be for calling out to an external compiled-library. Ctypes is an additional Python module downloaded separately ( see http://starship.python.net/crew/theller/ctypes/ ) Be warned, however, that calling C-code directly like this can easily result in segfaults if you make a mistake in calling the external library. But it is a quick-and-dirty way to access arbitrary library code. Consider the C-code in a file called simple.c: int addone(double *ptr, int size) { int k; for (k=0; k where # is the extension for shared libraries on your system. # Create c-types arguments to pass cN = c_int(N) # This is a pointer to the actual location of the data ptr = c_void_p(int(a.__array_data__[0], 0)) #Now call the routines num = lib.makeone(ptr, cN) num = lib.addone(ptr, cN) print a[:10] Output is: [ 2. 2. 2. 2. 2. 2. 2. 2. 2. 2.] Kind of nifty, no? -Travis From aisaac at american.edu Tue Nov 22 21:57:12 2005 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 22 Nov 2005 21:57:12 -0500 Subject: [SciPy-user] question about logical "and" and ndarray in Scipy In-Reply-To: References: Message-ID: On Tue, 22 Nov 2005, "O'Keefe, Michael" apparently wrote: > (a>2) and (a<5) This reminds me: the difference in SciPy between (a>2)&(a<5) and (a>2 & a<5) should I think be listed as a "gotcha". (The operator precedence is surprising.) fwiw, Alan Isaac From prabhu_r at users.sf.net Wed Nov 23 00:23:50 2005 From: prabhu_r at users.sf.net (Prabhu Ramachandran) Date: Wed, 23 Nov 2005 10:53:50 +0530 Subject: [SciPy-user] Using ctypes and scipy In-Reply-To: <4383D7E4.9000606@ee.byu.edu> References: <4383D7E4.9000606@ee.byu.edu> Message-ID: <17283.64742.566505.19840@enthought.cfl.aero.iitm.ernet.in> >>>>> "Travis" == Travis Oliphant writes: Travis> I'm including a (small) section in my book on the use of Travis> ctypes with scipy. But, I wanted to let users know a Travis> little bit of how nifty it can be for calling out to an Travis> external compiled-library. Ctypes is an additional Python Travis> module downloaded separately ( see Travis> http://starship.python.net/crew/theller/ctypes/ ) [...] Travis> Kind of nifty, no? Definitely! Thanks for the tip! While I think using scipy.weave is a lot cleaner, this is certainly useful to know! In fact, this is an interesting way to write a pure Python wrapper for a C library. 
cheers, prabhu From elcorto at gmx.net Wed Nov 23 07:18:48 2005 From: elcorto at gmx.net (Steve Schmerler) Date: Wed, 23 Nov 2005 13:18:48 +0100 Subject: [SciPy-user] python+maxima+latex for symbolic stuff In-Reply-To: References: <200511181712.jAIHCAQT029481@oobleck.astro.cornell.edu> <61C98DAB-814C-4C29-B5F3-DEFC065581B0@bluebottle.com> Message-ID: <43845E28.10704@gmx.net> Hi What's the reason that you don't use the GNU TeXmacs interface? (http://maxima.sourceforge.net/screenshots/maximatexmacs.png) I haven't tried it but it looks pretty nice. cheers, steve Ryan Krauss wrote: > I could work on screen shots but I don't know if they would be > terribly interesting. What is does primarily is allow very clean > LaTeX-symbolic integration. I really like including explanations in > my derivations and I like looking at pretty LaTeX output because it > helps me spot errors when I am looking at well formatted output > instead of the kind of command line output that is typical of computer > algebra programs. [...] -- grep -ir fuck /usr/src/linux From d.howey at imperial.ac.uk Wed Nov 23 07:41:30 2005 From: d.howey at imperial.ac.uk (Howey, David A) Date: Wed, 23 Nov 2005 12:41:30 -0000 Subject: [SciPy-user] good python books Message-ID: <056D32E9B2D93B49B01256A88B3EB2187670AE@icex2.ic.ac.uk> "Learning Python" by Lutz and Ascher (O'Reilly) is great. I use it all the time.. Dave -----Original Message----- From: scipy-user-bounces at scipy.net [mailto:scipy-user-bounces at scipy.net] On Behalf Of Russell E. Owen Sent: 21 November 2005 21:41 To: scipy-user at scipy.org Subject: Re: [SciPy-user] good python books In article , Ryan Krauss wrote: > My parents are asking for Christmas gift ideas. Can anyone recommend > python books that are worth owning - especially technical/scientific > ones or advanced ones. Python Cookbook, 2nd edition is really nice. It ranges from basic to advanced and I've learned a lot from it. For daily reference I use Python Essential Reference (unfortunately not very up to date) and the on-line manuals. For certain advanced topics I look at Python in a Nutshell. Also, consider books on design patterns and object-oriented design. I'm not familiar with the range of material available, but in the second category I've been enjoying "Applying UML and Patterns", by Craig Larman. -- Russell _______________________________________________ SciPy-user mailing list SciPy-user at scipy.net http://www.scipy.net/mailman/listinfo/scipy-user From ryanlists at gmail.com Wed Nov 23 08:22:56 2005 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 23 Nov 2005 08:22:56 -0500 Subject: [SciPy-user] python+maxima+latex for symbolic stuff In-Reply-To: <43845E28.10704@gmx.net> References: <200511181712.jAIHCAQT029481@oobleck.astro.cornell.edu> <61C98DAB-814C-4C29-B5F3-DEFC065581B0@bluebottle.com> <43845E28.10704@gmx.net> Message-ID: It does look pretty nice. I like wxMaxima as well for interactive use. The primary motivation for what I have done is LaTeX integration. I found that everytime I learned a new algebra program, I had to learn how to output to a text file and make a decent LaTeX file. When I do something symbolically, I want to be able to add explanations and then I want to be able to include the derivation in my thesis or a paper. I had something working fairly well using the symbolic toolbox of Matlab, but I found that my matlab files contained more LaTeX code than matlab code and it was tedious to have to use special commands to send all the LaTeX code to my output file. 
So, when I decided I wanted to get away from Matlab and use an open-source algebra program, I decided I would use LaTeX as my input language. Ryan On 11/23/05, Steve Schmerler wrote: > Hi > > What's the reason that you don't use the GNU TeXmacs interface? > (http://maxima.sourceforge.net/screenshots/maximatexmacs.png) > > I haven't tried it but it looks pretty nice. > > cheers, > steve > > > Ryan Krauss wrote: > > I could work on screen shots but I don't know if they would be > > terribly interesting. What is does primarily is allow very clean > > LaTeX-symbolic integration. I really like including explanations in > > my derivations and I like looking at pretty LaTeX output because it > > helps me spot errors when I am looking at well formatted output > > instead of the kind of command line output that is typical of computer > > algebra programs. > [...] > > -- > grep -ir fuck /usr/src/linux > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From ryanlists at gmail.com Wed Nov 23 16:37:23 2005 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 23 Nov 2005 16:37:23 -0500 Subject: [SciPy-user] evaluating a really long, multiline expression Message-ID: I have a really long symbolic expression that is being generated by Maxima and output to a text file. I need to turn this into a python function that I will use in optimize.newton and optimize.fmin. It is almost 9000 lines and the first 10 lines look like this : bv3 = (s^5*(abeam^2*betabeam^2*c3beam *(ml2*(rl2*(Kact*kj2 *(ml1 *((ml0 *(2*kbase*Lbeam*Ll0*rl0 -2*kbase*Lbeam*rl0^2) -2*Izl0*kbase*Lbeam-2*cbase*cj1*Lbeam) *rl1^2 +Ll1 *((ml0 *(2*kbase*Lbeam*rl0^2 -2*kbase*Lbeam*Ll0*rl0) +2*Izl0*kbase*Lbeam+2*cbase*cj1*Lbeam) Aside from replacing the ^ with ** and needing to get rid of the un-python-ish spacing at the front, any thoughts on how to make python evalute this expression quickly (assuming that all the variables are assigned numeric values by the function). I am a little nervous that such a long expression will take a really long time to evaluate. But I haven't tried it yet. Any thoughts on parsing this thing? am I better off leaving it a 9000 line statement or making one really long line (I think about 300,000 characters)? Is there any speed trade off with multiline commands? Ryan From cookedm at physics.mcmaster.ca Wed Nov 23 17:48:32 2005 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 23 Nov 2005 17:48:32 -0500 Subject: [SciPy-user] evaluating a really long, multiline expression In-Reply-To: (Ryan Krauss's message of "Wed, 23 Nov 2005 16:37:23 -0500") References: Message-ID: Ryan Krauss writes: > I have a really long symbolic expression that is being generated by > Maxima and output to a text file. I need to turn this into a python > function that I will use in optimize.newton and optimize.fmin. It is > almost 9000 lines and the first 10 lines look like this : > > bv3 = (s^5*(abeam^2*betabeam^2*c3beam > *(ml2*(rl2*(Kact*kj2 > *(ml1 > *((ml0 > *(2*kbase*Lbeam*Ll0*rl0 > -2*kbase*Lbeam*rl0^2) > -2*Izl0*kbase*Lbeam-2*cbase*cj1*Lbeam) > *rl1^2 > +Ll1 > *((ml0 > *(2*kbase*Lbeam*rl0^2 > -2*kbase*Lbeam*Ll0*rl0) > +2*Izl0*kbase*Lbeam+2*cbase*cj1*Lbeam) > > Aside from replacing the ^ with ** and needing to get rid of the > un-python-ish spacing at the front, any thoughts on how to make python > evalute this expression quickly (assuming that all the variables are > assigned numeric values by the function). 
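The mechanical clean-up I was planning to do with something like this (untested sketch -- the file name is just wherever I dump the Maxima output):

def maxima_to_python(fname):
    """Join the wrapped Maxima output into one string and fix the power operator."""
    lines = open(fname).readlines()
    expr = ''.join([line.strip() for line in lines])
    return expr.replace('^', '**')

so that part I'm not too worried about.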
I am a little nervous that > such a long expression will take a really long time to evaluate. But > I haven't tried it yet. Any thoughts on parsing this thing? am I > better off leaving it a 9000 line statement or making one really long > line (I think about 300,000 characters)? Is there any speed trade off > with multiline commands? There might be a slight difference in time in loading the file between multiline and one line, but there would be no time difference in evaluating the expression. Editors deal better with things on multiple lines, though. Does Maxima have some method of rewriting expressions in terms of a sequence of expressions, pulling out common subcomputations? Maple has this: > ex := expand((x+y)^5); 5 4 3 2 2 3 4 5 ex := x + 5 x y + 10 x y + 10 x y + 5 x y + y > codegen[optimize](ex); 2 2 2 2 t1 = x , t2 = t1 , t7 = y , t13 = t7 , t17 = t2 x + 5 t2 y + 10 t1 x t7 + 10 t1 t7 y + 5 x t13 + t13 y Something like that would *really* help. Also, speedwise, you might think about putting it into a Pyrex file, and compiling it as a C module. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From robert.kern at gmail.com Wed Nov 23 18:01:46 2005 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 23 Nov 2005 15:01:46 -0800 Subject: [SciPy-user] evaluating a really long, multiline expression In-Reply-To: References: Message-ID: <4384F4DA.2000503@gmail.com> David M. Cooke wrote: > Does Maxima have some method of rewriting expressions in terms of a > sequence of expressions, pulling out common subcomputations? Maple has > this: > >>ex := expand((x+y)^5); > > 5 4 3 2 2 3 4 5 > ex := x + 5 x y + 10 x y + 10 x y + 5 x y + y > >>codegen[optimize](ex); > > 2 2 2 2 > t1 = x , t2 = t1 , t7 = y , t13 = t7 , > > t17 = t2 x + 5 t2 y + 10 t1 x t7 + 10 t1 t7 y + 5 x t13 + t13 y > > Something like that would *really* help. Also, speedwise, you might > think about putting it into a Pyrex file, and compiling it as a C module. I don't think Maxima has common subexpression elimination. At least, not two years ago when I last checked. However, it does have FORTRAN output, and cobbling up a small FORTRAN subroutine from that and a .pyf is quite easy. FORTRAN compilers generally *do* have common subexpression elimination. Here's the c.l.py thread where I asked a similar question: http://mail.python.org/pipermail/python-list/2003-October/189345.html -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From ryanlists at gmail.com Wed Nov 23 18:13:23 2005 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 23 Nov 2005 18:13:23 -0500 Subject: [SciPy-user] evaluating a really long, multiline expression In-Reply-To: <4384F4DA.2000503@gmail.com> References: <4384F4DA.2000503@gmail.com> Message-ID: Robert, Do you remember how to get the output of maxima's fortran command to dump to a file? I can only get it to print to the screen, and that isn't going to work for me. Ryan On 11/23/05, Robert Kern wrote: > David M. Cooke wrote: > > > Does Maxima have some method of rewriting expressions in terms of a > > sequence of expressions, pulling out common subcomputations? 
Maple has > > this: > > > >>ex := expand((x+y)^5); > > > > 5 4 3 2 2 3 4 5 > > ex := x + 5 x y + 10 x y + 10 x y + 5 x y + y > > > >>codegen[optimize](ex); > > > > 2 2 2 2 > > t1 = x , t2 = t1 , t7 = y , t13 = t7 , > > > > t17 = t2 x + 5 t2 y + 10 t1 x t7 + 10 t1 t7 y + 5 x t13 + t13 y > > > > Something like that would *really* help. Also, speedwise, you might > > think about putting it into a Pyrex file, and compiling it as a C module. > > I don't think Maxima has common subexpression elimination. At least, not > two years ago when I last checked. However, it does have FORTRAN output, > and cobbling up a small FORTRAN subroutine from that and a .pyf is quite > easy. FORTRAN compilers generally *do* have common subexpression > elimination. > > Here's the c.l.py thread where I asked a similar question: > > http://mail.python.org/pipermail/python-list/2003-October/189345.html > > -- > Robert Kern > robert.kern at gmail.com > > "In the fields of hell where the grass grows high > Are the graves of dreams allowed to die." > -- Richard Harter > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From robert.kern at gmail.com Wed Nov 23 18:27:26 2005 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 23 Nov 2005 15:27:26 -0800 Subject: [SciPy-user] evaluating a really long, multiline expression In-Reply-To: References: <4384F4DA.2000503@gmail.com> Message-ID: <4384FADE.1010703@gmail.com> Ryan Krauss wrote: > Robert, > > Do you remember how to get the output of maxima's fortran command to > dump to a file? I can only get it to print to the screen, and that > isn't going to work for me. >From memory, no, but googling suggests that you should try something like this: with_stdout ("myprogram.f", my_fortran_output_stuff (a, b, c)); -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From emsellem at obs.univ-lyon1.fr Thu Nov 24 03:22:26 2005 From: emsellem at obs.univ-lyon1.fr (Eric Emsellem) Date: Thu, 24 Nov 2005 09:22:26 +0100 Subject: [SciPy-user] installing scipy with python2.4 (Suse10) Message-ID: <43857842.8040004@obs.univ-lyon1.fr> Hi, I am desesperately trying to install scipy. I tried many things but always hit a wall. The last obstacle now is that I get the error: ####################################### gcc options: '-pthread -fno-strict-aliasing -DNDEBUG -O2 -march=i586 -mtune=i686 -fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g -fPIC' compile options: '-Ibuild/src -I/usr/include/python2.4 -c' gcc: build/src/Lib/interpolate/dfitpackmodule.c build/src/Lib/interpolate/dfitpackmodule.c: In function ?f2py_rout_dfitpack_surfit_smth?: build/src/Lib/interpolate/dfitpackmodule.c:2528: error: invalid storage class for function ?calc_lwrk1? build/src/Lib/interpolate/dfitpackmodule.c:2540: error: invalid storage class for function ?calc_lwrk2? build/src/Lib/interpolate/dfitpackmodule.c: In function ?f2py_rout_dfitpack_surfit_lsq?: build/src/Lib/interpolate/dfitpackmodule.c:2976: error: invalid storage class for function ?calc_lwrk1? build/src/Lib/interpolate/dfitpackmodule.c:2988: error: invalid storage class for function ?calc_lwrk2? build/src/Lib/interpolate/dfitpackmodule.c: In function ?f2py_rout_dfitpack_surfit_smth?: build/src/Lib/interpolate/dfitpackmodule.c:2528: error: invalid storage class for function ?calc_lwrk1? 
build/src/Lib/interpolate/dfitpackmodule.c:2540: error: invalid storage class for function ?calc_lwrk2? build/src/Lib/interpolate/dfitpackmodule.c: In function ?f2py_rout_dfitpack_surfit_lsq?: build/src/Lib/interpolate/dfitpackmodule.c:2976: error: invalid storage class for function ?calc_lwrk1? build/src/Lib/interpolate/dfitpackmodule.c:2988: error: invalid storage class for function ?calc_lwrk2? error: Command "gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -march=i586 -mtune=i686 -fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g -fPIC -Ibuild/src -I/usr/include/python2.4 -c build/src/Lib/interpolate/dfitpackmodule.c -o build/temp.linux-i686-2.4/build/src/Lib/interpolate/dfitpackmodule.o" failed with exit status 1 ####################################### which I see was also posted in the past, but I cannot find the cure. I tried the cvs version: cvs -d :pserver:anonymous at scipy.org:/home/cvsroot login cvs -d :pserver:anonymous at scipy.org:/home/cvsroot checkout scipy but I cannot connect.. (note that I installed f2py from the cvs version: cvs -d :pserver:anonymous at cens.ioc.ee:/home/cvs login cvs -z6 -d :pserver:anonymous at cens.ioc.ee:/home/cvs checkout f2py2e cd f2py2e make install) So my question now is: HOW can I finally install scipy!!! (I have a Dell D600 with SuSe10.0, python2.4) Thanks for your help, Eric From nwagner at mecha.uni-stuttgart.de Thu Nov 24 03:27:05 2005 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Thu, 24 Nov 2005 09:27:05 +0100 Subject: [SciPy-user] installing scipy with python2.4 (Suse10) In-Reply-To: <43857842.8040004@obs.univ-lyon1.fr> References: <43857842.8040004@obs.univ-lyon1.fr> Message-ID: <43857959.5020101@mecha.uni-stuttgart.de> Eric Emsellem wrote: >Hi, >I am desesperately trying to install scipy. I tried many things but >always hit a wall. The last obstacle now is that I get the error: >####################################### >gcc options: '-pthread -fno-strict-aliasing -DNDEBUG -O2 -march=i586 >-mtune=i686 -fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g -fPIC' >compile options: '-Ibuild/src -I/usr/include/python2.4 -c' >gcc: build/src/Lib/interpolate/dfitpackmodule.c >build/src/Lib/interpolate/dfitpackmodule.c: In function >?f2py_rout_dfitpack_surfit_smth?: >build/src/Lib/interpolate/dfitpackmodule.c:2528: error: invalid storage >class for function ?calc_lwrk1? >build/src/Lib/interpolate/dfitpackmodule.c:2540: error: invalid storage >class for function ?calc_lwrk2? >build/src/Lib/interpolate/dfitpackmodule.c: In function >?f2py_rout_dfitpack_surfit_lsq?: >build/src/Lib/interpolate/dfitpackmodule.c:2976: error: invalid storage >class for function ?calc_lwrk1? >build/src/Lib/interpolate/dfitpackmodule.c:2988: error: invalid storage >class for function ?calc_lwrk2? >build/src/Lib/interpolate/dfitpackmodule.c: In function >?f2py_rout_dfitpack_surfit_smth?: >build/src/Lib/interpolate/dfitpackmodule.c:2528: error: invalid storage >class for function ?calc_lwrk1? >build/src/Lib/interpolate/dfitpackmodule.c:2540: error: invalid storage >class for function ?calc_lwrk2? >build/src/Lib/interpolate/dfitpackmodule.c: In function >?f2py_rout_dfitpack_surfit_lsq?: >build/src/Lib/interpolate/dfitpackmodule.c:2976: error: invalid storage >class for function ?calc_lwrk1? >build/src/Lib/interpolate/dfitpackmodule.c:2988: error: invalid storage >class for function ?calc_lwrk2? 
>error: Command "gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 >-march=i586 -mtune=i686 -fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g >-fPIC -Ibuild/src -I/usr/include/python2.4 -c >build/src/Lib/interpolate/dfitpackmodule.c -o >build/temp.linux-i686-2.4/build/src/Lib/interpolate/dfitpackmodule.o" >failed with exit status 1 >####################################### >which I see was also posted in the past, but I cannot find the cure. > >I tried the cvs version: >cvs -d :pserver:anonymous at scipy.org:/home/cvsroot login >cvs -d :pserver:anonymous at scipy.org:/home/cvsroot checkout scipy > >but I cannot connect.. > >(note that I installed f2py from the cvs version: >cvs -d :pserver:anonymous at cens.ioc.ee:/home/cvs login >cvs -z6 -d :pserver:anonymous at cens.ioc.ee:/home/cvs checkout f2py2e >cd f2py2e >make install) > >So my question now is: HOW can I finally install scipy!!! >(I have a Dell D600 with SuSe10.0, python2.4) > >Thanks for your help, > >Eric > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > The cvs version is obsolete. Please use svn co http://svn.scipy.org/svn/scipy_core/trunk core svn co http://svn.scipy.org/svn/scipy/trunk scipy to get the latest version of scipy. Nils From noel.oboyle2 at mail.dcu.ie Thu Nov 24 04:20:32 2005 From: noel.oboyle2 at mail.dcu.ie (Noel O'Boyle) Date: Thu, 24 Nov 2005 09:20:32 +0000 Subject: [SciPy-user] evaluating a really long, multiline expression In-Reply-To: References: Message-ID: <1132824032.4380.2.camel@sandwi.ch.cam.ac.uk> On Wed, 2005-11-23 at 16:37 -0500, Ryan Krauss wrote: > I have a really long symbolic expression that is being generated by > Maxima and output to a text file. I need to turn this into a python > function that I will use in optimize.newton and optimize.fmin. It is > almost 9000 lines and the first 10 lines look like this : > > bv3 = (s^5*(abeam^2*betabeam^2*c3beam > *(ml2*(rl2*(Kact*kj2 > *(ml1 > *((ml0 > *(2*kbase*Lbeam*Ll0*rl0 > -2*kbase*Lbeam*rl0^2) > -2*Izl0*kbase*Lbeam-2*cbase*cj1*Lbeam) > *rl1^2 > +Ll1 > *((ml0 > *(2*kbase*Lbeam*rl0^2 > -2*kbase*Lbeam*Ll0*rl0) > +2*Izl0*kbase*Lbeam+2*cbase*cj1*Lbeam) > > Aside from replacing the ^ with ** and needing to get rid of the > un-python-ish spacing at the front, I don't think you need to get rid of the spacing, as it's all within a bracket (implicit continuation of lines -- only the start of the expression needs to be indented correctly). > any thoughts on how to make python > evalute this expression quickly (assuming that all the variables are > assigned numeric values by the function). I am a little nervous that > such a long expression will take a really long time to evaluate. But > I haven't tried it yet. Any thoughts on parsing this thing? am I > better off leaving it a 9000 line statement or making one really long > line (I think about 300,000 characters)? Is there any speed trade off > with multiline commands? > > Ryan > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user From basvandijk at home.nl Thu Nov 24 10:38:12 2005 From: basvandijk at home.nl (basvandijk at home.nl) Date: Thu, 24 Nov 2005 16:38:12 +0100 Subject: [SciPy-user] Chaco & wxWindows Message-ID: <5640612.1132846692299.JavaMail.root@webmail1.groni1> Hi, I have an existing wxWindows application that uses scipy.plt for plotting. Now I would like to use Chaco for plotting. 
My application has a: "class PlotPanel(wxPanel)" in which I call: "self.plt = scipy.plt.wxplt.plot_window(self, -1)" to create a plotting window and "self.plt.plot(...)" to plot my data. What is the best way to convert my application to Chaco? Greetings, Bas van Dijk. From massimo.sandal at unibo.it Fri Nov 25 10:18:37 2005 From: massimo.sandal at unibo.it (massimo sandal) Date: Fri, 25 Nov 2005 16:18:37 +0100 Subject: [SciPy-user] is scipy OK for my needs? Message-ID: <43872B4D.4020508@unibo.it> Hi, I'm planning to rewrite from scratch a big Matlab spaghetti-code mess we use for data analysis. I'd like to use Python and SciPy, for various reasons (familiarity and love with Python, free-as-in-freedom license, etc.). I'm currently looking at the SciPy documentation, but I still have some doubt. Shortly, my application needs these main capabilities: 1) Simple interactive GUI (just a column of buttons should work well),this should be provided by WxPython or the like, isn't it? 2) Non-linear, fast curve fitting. It must work quite fast on as much data points as 1000-2000 (although I can trick it to use less data points without losing much precision, I guess). 3) Interaction with plots (I must be able to click with the mouse two or more points I have to choose visually, and fit the data in between, and not elsewhere) 4) Plot export in various graphic formats (SVG or other vectorial would be the best). I'm doing all this on Debian GNU/Linux. Can you tell me if SciPy is a good and reliable package for my needs? Thanks a lot, Massimo -- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... Name: massimo.sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From ryanlists at gmail.com Fri Nov 25 12:41:44 2005 From: ryanlists at gmail.com (Ryan Krauss) Date: Fri, 25 Nov 2005 12:41:44 -0500 Subject: [SciPy-user] is scipy OK for my needs? In-Reply-To: <43872B4D.4020508@unibo.it> References: <43872B4D.4020508@unibo.it> Message-ID: I think all of this can work. wxPython is a good plan for gui stuff. For the most part, plotting is best handled by matplotlib: http://matplotlib.sourceforge.net/ I don't know if your specific needs about clicking on data points is possible out-of-the-box, but you could probably add this capability by creating your own wxPython graph panel. Matplotlib has a very active mailing list, you should ask the graph related part of your question there. Nonlinear curve fitting is handled by the scipy.optimize module: Base Class: String Form: Namespace: Interactive File: /usr/lib/python2.4/site-packages/scipy/optimize/__init__.py Docstring: Optimization Tools ================== A collection of general-purpose optimization routines. fmin -- Nelder-Mead Simplex algorithm (uses only function calls) fmin_powell -- Powell's (modified) level set method (uses only function calls) fmin_cg -- Non-linear (Polak-Rubiere) conjugate gradient algorithm (can use function and gradient). fmin_bfgs -- Quasi-Newton method (can use function and gradient) fmin_ncg -- Line-search Newton Conjugate Gradient (can use function, gradient and hessian). leastsq -- Minimize the sum of squares of M equations in N unknowns given a starting estimate. 
fmin_l_bfgs_b -- Zhu, Byrd, and Nocedal's L-BFGS-B constrained optimizer (if you use this please quote their papers -- see help) fmin_tnc -- Truncated Newton Code originally written by Stephen Nash and adapted to C by Jean-Sebastien Roy. fmin_cobyla -- Contrained Optimization BY Linear Approximation Global Optimizers anneal -- Simulated Annealing brute -- Brute Force searching Optimizer Scalar function minimizers fminbound -- Bounded minimization of a scalar function. brent -- 1-D function minimization using Brent method. golden -- 1-D function minimization using Golden Section method bracket -- Bracket a minimum (given two starting points) golden -- 1-D function minimization using Golden Section method bracket -- Bracket a minimum (given two starting points) Also a collection of general_purpose root-finding routines. fsolve -- Non-linear multi-variable equation solver. Scalar function solvers brentq -- quadratic interpolation Brent method brenth -- Brent method (modified by Harris with hyperbolic extrapolation) ridder -- Ridder's method bisect -- Bisection method newton -- Secant method or Newton's method fixed_point -- Single-variable fixed-point solver. Utility Functions line_search -- Return a step that satisfies the strong Wolfe conditions. check_grad -- Check the supplied derivative using finite difference techniques. Hope this helps. I you already love Python, make the switch, you won't regret it. Ryan On 11/25/05, massimo sandal wrote: > Hi, > > I'm planning to rewrite from scratch a big Matlab spaghetti-code mess we > use for data analysis. I'd like to use Python and SciPy, for various > reasons (familiarity and love with Python, free-as-in-freedom license, > etc.). I'm currently looking at the SciPy documentation, but I still > have some doubt. > > Shortly, my application needs these main capabilities: > > 1) Simple interactive GUI (just a column of buttons should work > well),this should be provided by WxPython or the like, isn't it? > > 2) Non-linear, fast curve fitting. It must work quite fast on as much > data points as 1000-2000 (although I can trick it to use less data > points without losing much precision, I guess). > > 3) Interaction with plots (I must be able to click with the mouse two or > more points I have to choose visually, and fit the data in between, and > not elsewhere) > > 4) Plot export in various graphic formats (SVG or other vectorial would > be the best). > > I'm doing all this on Debian GNU/Linux. > > Can you tell me if SciPy is a good and reliable package for my needs? > Thanks a lot, > > Massimo > -- > Massimo Sandal > University of Bologna > Department of Biochemistry "G.Moruzzi" > > snail mail: > Via Irnerio 48, 40126 Bologna, Italy > > email: > massimo.sandal at unibo.it > > tel: +39-051-2094388 > fax: +39-051-2094387 > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > > > > From fonnesbeck at gmail.com Mon Nov 28 11:15:14 2005 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Mon, 28 Nov 2005 11:15:14 -0500 Subject: [SciPy-user] updated OSX installation instructions Message-ID: <723eb6930511280815pa81329cqb5d21d142b1bde2e@mail.gmail.com> I have gone ahead and updated the OSX build instructions to use the new scipy/scipy_core: http://www.scipy.org/Members/fonnesbeck/osx_build.txt If anyone has difficulty installing on OSX using these instructions, or notice errors or inconsistencies, please let me know. Thanks, C. 
-- Chris Fonnesbeck Atlanta, GA From massimo.sandal at unibo.it Mon Nov 28 11:25:37 2005 From: massimo.sandal at unibo.it (massimo sandal) Date: Mon, 28 Nov 2005 17:25:37 +0100 Subject: [SciPy-user] is scipy OK for my needs? In-Reply-To: References: <43872B4D.4020508@unibo.it> Message-ID: <438B2F81.90207@unibo.it> Ryan Krauss wrote: > I think all of this can work. Thank you for your answer. I was also thinking about non linear filters. I saw there's something about them in the documentation, I'd like to know how good and user friendly they are. I also noticed (and tested) that scipy has basic .mat compatibility, that would be great for interoperability with my old data :). > For the most part, plotting is best handled by matplotlib: > http://matplotlib.sourceforge.net/ > I don't know if your specific needs about clicking on data points is > possible out-of-the-box, but you could probably add this capability by > creating your own wxPython graph panel. Matplotlib has a very active > mailing list, you should ask the graph related part of your question > there. Matplotlib seems good. I'm struggling to find a debian package for it... -- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... Name: massimo.sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From fonnesbeck at gmail.com Mon Nov 28 11:31:35 2005 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Mon, 28 Nov 2005 11:31:35 -0500 Subject: [SciPy-user] is scipy OK for my needs? In-Reply-To: <438B2F81.90207@unibo.it> References: <43872B4D.4020508@unibo.it> <438B2F81.90207@unibo.it> Message-ID: <723eb6930511280831m273143cavb38fa761290f3534@mail.gmail.com> On 11/28/05, massimo sandal wrote: > Ryan Krauss wrote: > > I think all of this can work. > > Thank you for your answer. I was also thinking about non linear filters. > I saw there's something about them in the documentation, I'd like to > know how good and user friendly they are. I have a package, called PyKF, that does non-linear (unscented) Kalman filtering. I was thinking of someday adding it to SciPy, but I have not moved forward with that yet. Let me know if you would like to try it. C. -- Chris Fonnesbeck Atlanta, GA From ryanlists at gmail.com Mon Nov 28 11:47:57 2005 From: ryanlists at gmail.com (Ryan Krauss) Date: Mon, 28 Nov 2005 11:47:57 -0500 Subject: [SciPy-user] is scipy OK for my needs? In-Reply-To: <438B2F81.90207@unibo.it> References: <43872B4D.4020508@unibo.it> <438B2F81.90207@unibo.it> Message-ID: The author of matplotlib is a ubuntu users, which is a debian derivative. His name is John Hunter and he had mentioned the following in a previous scipy post: =================================== For the standard matplotlib package, add these lines to your /etc/apt/sources.list: deb http://anakonda.altervista.org/debian packages/ deb-src http://anakonda.altervista.org/debian sources/ and then run: > sudo apt-get update > sudo apt-get install python-matplotlib python-matplotlib-doc These are all professional debian packages and most of them are a little out of date. If you like to live on the bleeding edge, you can use my poor man's Ubuntu Hoary packages (I've been told they also work for Breezy) which have matplotlib 0.84, ipython-0.6.16.cvs and scipy 3.3.304.4617. 
You need to enable universe and multiverse in /etc/apt/sources.list and then add deb http://peds-pc311.bsd.uchicago.edu binary/ ========================================= If you already have scipy working, installing matplotlib from source is pretty simple. Download the tarball from the matplotlib page, unzip it and do: sudo python setup.py install The only catch is that it depends on many of the gui development packages to make the different gui renders. So, you need to install gnome-dev, GTK2-dev and Tk Tcl dev and so on. Ryan On 11/28/05, massimo sandal wrote: > Ryan Krauss wrote: > > I think all of this can work. > > Thank you for your answer. I was also thinking about non linear filters. > I saw there's something about them in the documentation, I'd like to > know how good and user friendly they are. > > I also noticed (and tested) that scipy has basic .mat compatibility, > that would be great for interoperability with my old data :). > > > For the most part, plotting is best handled by matplotlib: > > http://matplotlib.sourceforge.net/ > > I don't know if your specific needs about clicking on data points is > > possible out-of-the-box, but you could probably add this capability by > > creating your own wxPython graph panel. Matplotlib has a very active > > mailing list, you should ask the graph related part of your question > > there. > > Matplotlib seems good. > I'm struggling to find a debian package for it... > > -- > Massimo Sandal > University of Bologna > Department of Biochemistry "G.Moruzzi" > > snail mail: > Via Irnerio 48, 40126 Bologna, Italy > > email: > massimo.sandal at unibo.it > > tel: +39-051-2094388 > fax: +39-051-2094387 > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > > > > From massimo.sandal at unibo.it Mon Nov 28 12:04:44 2005 From: massimo.sandal at unibo.it (massimo sandal) Date: Mon, 28 Nov 2005 18:04:44 +0100 Subject: [SciPy-user] is scipy OK for my needs? In-Reply-To: References: <43872B4D.4020508@unibo.it> <438B2F81.90207@unibo.it> Message-ID: <438B38AC.8070300@unibo.it> Ryan Krauss wrote: > The author of matplotlib is a ubuntu users, which is a debian > derivative. His name is John Hunter and he had mentioned the > following in a previous scipy post: > =================================== > For the standard matplotlib package, add these lines to your > /etc/apt/sources.list: > > deb http://anakonda.altervista.org/debian packages/ > deb-src http://anakonda.altervista.org/debian sources/ Thank you. Last time I checked it seemed these URLs where not reachable, so I woed, but it was a synaptic problem (adding the sources by hand worked perfectly). > These are all professional debian packages and most of them are a > little out of date. If I don't lose too much, I'd prefer to stick with package management. I have experience in installing from source on Linux, but I don't like to do it unless absolutely necessary because I feel the system becomes quite a mess, afterwards. -- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: massimo.sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From mcantor at stanford.edu Mon Nov 28 12:19:34 2005 From: mcantor at stanford.edu (Michael Nathaniel Cantor) Date: Mon, 28 Nov 2005 09:19:34 -0800 Subject: [SciPy-user] is scipy OK for my needs? In-Reply-To: References: <43872B4D.4020508@unibo.it> <438B2F81.90207@unibo.it> Message-ID: <1133198374.438b3c26778e9@webmail.stanford.edu> Somewhere in this thread someone references using Matplotlib from within wxPython. I have done this successfully by embedding a plot in a wxDialog, so I can open and close multiple plot windows as desired. If you'd like the code, e-mail me at mcantor at stanford.edu (I check the mailing list posts only sporadically). Cheers, -mike > I don't know if your specific needs about clicking on data points is > possible out-of-the-box, but you could probably add this capability by > creating your own wxPython graph panel. Matplotlib has a very active > mailing list, you should ask the graph related part of your question > there. Quoting Ryan Krauss : > The author of matplotlib is a ubuntu users, which is a debian > derivative. His name is John Hunter and he had mentioned the > following in a previous scipy post: > =================================== > For the standard matplotlib package, add these lines to your > /etc/apt/sources.list: > > deb http://anakonda.altervista.org/debian packages/ > deb-src http://anakonda.altervista.org/debian sources/ > > and then run: > > > > sudo apt-get update > > sudo apt-get install python-matplotlib python-matplotlib-doc > > These are all professional debian packages and most of them are a > little out of date. > > If you like to live on the bleeding edge, you can use my poor man's > Ubuntu Hoary packages (I've been told they also work for Breezy) which > have matplotlib 0.84, ipython-0.6.16.cvs and scipy 3.3.304.4617. You > need to enable universe and multiverse in /etc/apt/sources.list and > then add > > deb http://peds-pc311.bsd.uchicago.edu binary/ > ========================================= > > If you already have scipy working, installing matplotlib from source > is pretty simple. Download the tarball from the matplotlib page, > unzip it and do: > sudo python setup.py install > > The only catch is that it depends on many of the gui development > packages to make the different gui renders. So, you need to install > gnome-dev, GTK2-dev and Tk Tcl dev and so on. > > Ryan > > > On 11/28/05, massimo sandal wrote: > > Ryan Krauss wrote: > > > I think all of this can work. > > > > Thank you for your answer. I was also thinking about non linear > filters. > > I saw there's something about them in the documentation, I'd like to > > know how good and user friendly they are. > > > > I also noticed (and tested) that scipy has basic .mat compatibility, > > that would be great for interoperability with my old data :). > > > > > For the most part, plotting is best handled by matplotlib: > > > http://matplotlib.sourceforge.net/ > > > I don't know if your specific needs about clicking on data points is > > > possible out-of-the-box, but you could probably add this capability > by > > > creating your own wxPython graph panel. Matplotlib has a very active > > > mailing list, you should ask the graph related part of your question > > > there. > > > > Matplotlib seems good. > > I'm struggling to find a debian package for it... 
> > > > -- > > Massimo Sandal > > University of Bologna > > Department of Biochemistry "G.Moruzzi" > > > > snail mail: > > Via Irnerio 48, 40126 Bologna, Italy > > > > email: > > massimo.sandal at unibo.it > > > > tel: +39-051-2094388 > > fax: +39-051-2094387 > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.net > > http://www.scipy.net/mailman/listinfo/scipy-user > > > > > > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From managan at llnl.gov Mon Nov 28 13:11:28 2005 From: managan at llnl.gov (Rob Managan) Date: Mon, 28 Nov 2005 10:11:28 -0800 Subject: [SciPy-user] FFTPack problems on OSX Message-ID: After updating to the latest svn sources (core revision 1534, scipy revision 1460) I am getting failures in the fftpack section (10 in all). Is anyone else seeing this? The first failure reported is given here. FAIL: check_definition (scipy.fftpack.pseudo_diffs.test_pseudo_diffs.test_diff) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/managan/Documents/local/lib/python2.4/site-packages/scipy/fftpack/tests/test_pseudo_diffs.py", line 89, in check_definition assert_array_almost_equal(diff(sin(x),2),direct_diff(sin(x),2)) File "/Users/managan/Documents/local/lib/python2.4/site-packages/scipy/test/testing.py", line 758, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 87.5%): Array 1: [ -6.3519652e-15 -3.8268343e-01 -7.0710678e-01 -9.2387953e-01 -1.0000000e+00 -9.2387953e-01 -7.0710678e-01 -3.82... Array 2: [ -7.3854931e-15 6.5259351e-15 -2.4942634e-15 -7.5636114e-17 1.4745663e-15 -1.9133685e-15 2.2804788e-16 8.70... -- *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*- Rob Managan email managan at llnl.gov LLNL phone: 925-423-0903 P.O. Box 808, L-095 FAX: 925-422-3389 Livermore, CA 94551-0808 From wjdandreta at att.net Mon Nov 28 14:21:22 2005 From: wjdandreta at att.net (Bill Dandreta) Date: Mon, 28 Nov 2005 14:21:22 -0500 Subject: [SciPy-user] is scipy OK for my needs? In-Reply-To: <438B38AC.8070300@unibo.it> References: <43872B4D.4020508@unibo.it> <438B2F81.90207@unibo.it> <438B38AC.8070300@unibo.it> Message-ID: <438B58B2.9010007@att.net> massimo sandal wrote: > > If I don't lose too much, I'd prefer to stick with package management. > I have experience in installing from source on Linux, but I don't > like to do it unless absolutely necessary because I feel the system > becomes quite a mess, afterwards. Check out checkinstall (http://asic-linux.com.mx/~izto/checkinstall/ ) It monitors installation from source and creates your choice of Debian Slackware or rpm package for you. It makes it very easy to uninstall programs that you install from source (or binaries). Bill From pwang at enthought.com Mon Nov 28 18:50:09 2005 From: pwang at enthought.com (Peter Wang) Date: Mon, 28 Nov 2005 17:50:09 -0600 Subject: [SciPy-user] Chaco & wxWindows In-Reply-To: <5640612.1132846692299.JavaMail.root@webmail1.groni1> References: <5640612.1132846692299.JavaMail.root@webmail1.groni1> Message-ID: <438B97B1.8060106@enthought.com> basvandijk at home.nl wrote: > My application has a: "class PlotPanel(wxPanel)" in which I call: > "self.plt = scipy.plt.wxplt.plot_window(self, -1)" to create a plotting window and "self.plt.plot(...)" to plot my data. > > What is the best way to convert my application to Chaco? 
Hi Bas, It is relatively easy to embed a Chaco plot into a WX panel: ----------------------- import wx from Numeric import arange, sin from enthought.enable.wx import Window from enthought.chaco.wx.plot import PlotComponent, PlotCanvas, PlotValue class MyFrame ( wx.Frame ): def __init__ ( self, *args, **kw ): wx.Frame.__init__( *(self,) + args, **kw ) data = sin(arange(-5.0, 5.0, 0.3)) plot = PlotValue(data, type="scatter,line") plot_window = Window(self, component=PlotComponent(plot)) sizer = wx.BoxSizer(wx.HORIZONTAL) sizer.Add(plot_window.control, 1, wx.EXPAND) self.SetSizer(sizer) self.SetAutoLayout( True ) self.Show( True ) ----------------------- Note that this is using the latest stuff from the enthought lib, and not the old version of Chaco that was bundled with Enthon. If you don't have it, you will need to grab it and do a quick build per the instructions here: http://www.enthought.com/enthought/wiki/GrabbingAndBuilding You can add a title, legend, etc. by create PlotLabels and PlotTitles, and adding them to a PlotCanvas. You can look in chaco/demo/ for demos and code samples. Is there anything you want the user to be able to change about the plot, once it's displayed? -Peter From fonnesbeck at gmail.com Mon Nov 28 20:23:39 2005 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Mon, 28 Nov 2005 20:23:39 -0500 Subject: [SciPy-user] speeding up arrays in scipy_core? Message-ID: <723eb6930511281723y6fd5a4a6p2f476192c32e340b@mail.gmail.com> I notice, while profiling some code using scipy_core arrays, that some of the bottlenecks include numeric.asarray() and several oldnumeric methods, including resize. Is there a strategy for speeding up arrays in scipy_core, in general, that we did not do with Numeric? Just curious. -- Chris Fonnesbeck Atlanta, GA From fonnesbeck at gmail.com Mon Nov 28 21:06:31 2005 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Mon, 28 Nov 2005 21:06:31 -0500 Subject: [SciPy-user] object_arrtype addition error Message-ID: <723eb6930511281806q1ea9b176vfb73a2532afcf44@mail.gmail.com> Under scipy_core, I have a dictionary that contains what I thought were floating point numbers, but are actually 'object_arrtype': (Pdb) aic_quantiles Out[3]: {2.5: 345.38217627260781, 25: 346.73173987124062, 50: 347.50728308413125, 75: 349.61766516420937, 97.5: 361.87424170229531} (Pdb) type(aic_quantiles[50]) Out[3]: When I try and add a scalar to any of these, I get a TypeError: (Pdb) aic_quantiles[50] + 4 *** TypeError: unsupported operand type(s) for +: 'object_arrtype' and 'int' (Pdb) aic_quantiles[50] + 4. *** TypeError: unsupported operand type(s) for +: 'object_arrtype' and 'float' Surely this is not intended behaviour? If so, I need some clarification. Thanks, Chris -- Chris Fonnesbeck Atlanta, GA From oliphant.travis at ieee.org Mon Nov 28 22:39:45 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 28 Nov 2005 20:39:45 -0700 Subject: [SciPy-user] object_arrtype addition error In-Reply-To: <723eb6930511281806q1ea9b176vfb73a2532afcf44@mail.gmail.com> References: <723eb6930511281806q1ea9b176vfb73a2532afcf44@mail.gmail.com> Message-ID: <438BCD81.3040909@ieee.org> Chris Fonnesbeck wrote: >Under scipy_core, I have a dictionary that contains what I thought >were floating point numbers, but are actually 'object_arrtype': > > > One question is why these are object array scalars (entries in an object array), instead of floating point numbers in the first place. Perhaps there is something going on there. 
>(Pdb) aic_quantiles >Out[3]: >{2.5: 345.38217627260781, > 25: 346.73173987124062, > 50: 347.50728308413125, > 75: 349.61766516420937, > 97.5: 361.87424170229531} >(Pdb) type(aic_quantiles[50]) >Out[3]: > > > >When I try and add a scalar to any of these, I get a TypeError: > >(Pdb) aic_quantiles[50] + 4 >*** TypeError: unsupported operand type(s) for +: 'object_arrtype' and 'int' >(Pdb) aic_quantiles[50] + 4. >*** TypeError: unsupported operand type(s) for +: 'object_arrtype' and 'float' > > No, this is not intended. It looks like it is a side-effect from the recent changes to allow object arrays to punt to objects that defined rop operations. It looks like the punting is pre-mature. -Travis From schofield at ftw.at Tue Nov 29 02:48:46 2005 From: schofield at ftw.at (Ed Schofield) Date: Tue, 29 Nov 2005 07:48:46 +0000 Subject: [SciPy-user] Maximum entropy module Message-ID: <3BECC3A1-3FCE-45A8-A0B4-C0F0B384D130@ftw.at> Hi all, I've been working on getting my maximum entropy module into shape for scipy. Unless there are any objections I'll upload it into the sandbox in the next few days. It's quite well documented, but I'd be happy to answer any questions on [SciPy-user] or [SciPy-dev] about how to use it and interpret the results. I'd also welcome any feedback on improving it further. -- Ed From fonnesbeck at gmail.com Tue Nov 29 07:34:49 2005 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Tue, 29 Nov 2005 07:34:49 -0500 Subject: [SciPy-user] object_arrtype addition error In-Reply-To: <438BCD81.3040909@ieee.org> References: <723eb6930511281806q1ea9b176vfb73a2532afcf44@mail.gmail.com> <438BCD81.3040909@ieee.org> Message-ID: <723eb6930511290434u60029be4q666b2067c988ed69@mail.gmail.com> On 11/28/05, Travis Oliphant wrote: > Chris Fonnesbeck wrote: > > >Under scipy_core, I have a dictionary that contains what I thought > >were floating point numbers, but are actually 'object_arrtype': > > > > > > > One question is why these are object array scalars (entries in an object > array), instead of floating point numbers in the first place. Perhaps > there is something going on there. > Its just my PyMC code calculating the quantiles of the trace (the big array of sampled values), and throwing those values into a dictionary: def quantiles(self, qlist=[2.5, 25, 50, 75, 97.5], start=0, end=None): """Returns a dictionary of requested quantiles""" # Make a copy of trace trace = self.get_trace(start, end) # For multivariate node if rank(trace)>1: # Transpose first, then sort, then transpose back trace = t(sort(t(trace))) else: # Sort univariate node trace = sort(trace) try: # Generate specified quantiles quants = [trace[int(len(trace)*q/100.0)] for q in qlist] return dict(zip(qlist, quants)) except IndexError: print "Too few elements for quantile calculation" -- Chris Fonnesbeck Atlanta, GA From nmarais at sun.ac.za Tue Nov 29 09:00:26 2005 From: nmarais at sun.ac.za (Neilen Marais) Date: Tue, 29 Nov 2005 16:00:26 +0200 Subject: [SciPy-user] Efficiently scaling matrix rows Message-ID: <1133272827.8967.8.camel@localhost.localdomain> Hi I have a an n x 3 matrix (i.e. each row contains three elements). I want to scale each row by the element of a 1-d array. I.o.w, if I have arr = array([1., 2., 3., ....]), I want to scale the first row of the matrix by arr[0], the second by arr[1], etc. 
ATM I'm doing this: coefs = reshape(repeat(arr, 3), (-1, 3)) mat = coefs*mat Is there a more efficent (and potentially easier to read) way that does not require redundant copies of arr, while avoiding a slow python loop? Thanks Neilen From ckkart at hoc.net Tue Nov 29 09:33:48 2005 From: ckkart at hoc.net (Christian Kristukat) Date: Tue, 29 Nov 2005 15:33:48 +0100 Subject: [SciPy-user] Efficiently scaling matrix rows In-Reply-To: <1133272827.8967.8.camel@localhost.localdomain> References: <1133272827.8967.8.camel@localhost.localdomain> Message-ID: <438C66CC.1090100@hoc.net> Neilen Marais wrote: > Hi > > I have a an n x 3 matrix (i.e. each row contains three elements). I want > to scale each row by the element of a 1-d array. I.o.w, if I have > > arr = array([1., 2., 3., ....]), > > I want to scale the first row of the matrix by arr[0], the second by > arr[1], etc. ATM I'm doing this: > > coefs = reshape(repeat(arr, 3), (-1, 3)) > mat = coefs*mat > > Is there a more efficent (and potentially easier to read) way that does > not require redundant copies of arr, while avoiding a slow python loop? mat = mat*arr[:,NewAxis] should do the same. Regards, Christian From fonnesbeck at gmail.com Tue Nov 29 10:02:09 2005 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Tue, 29 Nov 2005 10:02:09 -0500 Subject: [SciPy-user] object_arrtype addition error In-Reply-To: <723eb6930511290434u60029be4q666b2067c988ed69@mail.gmail.com> References: <723eb6930511281806q1ea9b176vfb73a2532afcf44@mail.gmail.com> <438BCD81.3040909@ieee.org> <723eb6930511290434u60029be4q666b2067c988ed69@mail.gmail.com> Message-ID: <723eb6930511290702q27a6eb81g9ae98ea451f5e9c8@mail.gmail.com> On 11/29/05, Chris Fonnesbeck wrote: > On 11/28/05, Travis Oliphant wrote: > > Chris Fonnesbeck wrote: > > > > >Under scipy_core, I have a dictionary that contains what I thought > > >were floating point numbers, but are actually 'object_arrtype': > > > > > > > > > > > One question is why these are object array scalars (entries in an object > > array), instead of floating point numbers in the first place. Perhaps > > there is something going on there. > > > > Its just my PyMC code calculating the quantiles of the trace (the big > array of sampled values), and throwing those values into a dictionary: > I figured it out. The trace array is intitialized as: self._trace = array([None]*size) Which, of course, is of type object_arrtype. When each element is replaced by a proper value, the type does not change. I suppose I have to initialize to float values, though I liked using None's under numeric because if for some reason the whole array was not populated, I would get an error if I tried to do anything useful with the array. C. 
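A minimal sketch of that float-initialization approach (assuming scipy_core exports zeros, nan, isnan and sometrue under those names, as later releases do; the variable names are only illustrative). A NaN fill value still poisons any downstream arithmetic if the trace is never fully populated, much like the old None entries did, but the trace stays a plain float array instead of an object array:

from scipy import zeros, isnan, nan, sometrue

size = 1000
trace = zeros(size, float) + nan      # NaN marks "not yet sampled"

# ... fill trace[i] = value as samples arrive ...

if sometrue(isnan(trace)):
    raise ValueError("trace was not fully populated")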
-- Chris Fonnesbeck Atlanta, GA From fonnesbeck at gmail.com Tue Nov 29 10:38:43 2005 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Tue, 29 Nov 2005 10:38:43 -0500 Subject: [SciPy-user] object_arrtype addition error In-Reply-To: <723eb6930511290702q27a6eb81g9ae98ea451f5e9c8@mail.gmail.com> References: <723eb6930511281806q1ea9b176vfb73a2532afcf44@mail.gmail.com> <438BCD81.3040909@ieee.org> <723eb6930511290434u60029be4q666b2067c988ed69@mail.gmail.com> <723eb6930511290702q27a6eb81g9ae98ea451f5e9c8@mail.gmail.com> Message-ID: <723eb6930511290738r2fbbb8afj86748f09acf6e307@mail.gmail.com> On 11/29/05, Chris Fonnesbeck wrote: > On 11/29/05, Chris Fonnesbeck wrote: > > On 11/28/05, Travis Oliphant wrote: > > > Chris Fonnesbeck wrote: > > > > > > >Under scipy_core, I have a dictionary that contains what I thought > > > >were floating point numbers, but are actually 'object_arrtype': > > > > > > > > > > > > > > > One question is why these are object array scalars (entries in an object > > > array), instead of floating point numbers in the first place. Perhaps > > > there is something going on there. > > > > > > > Its just my PyMC code calculating the quantiles of the trace (the big > > array of sampled values), and throwing those values into a dictionary: > > > > I figured it out. The trace array is intitialized as: > > self._trace = array([None]*size) > > Which, of course, is of type object_arrtype. When each element is > replaced by a proper value, the type does not change. I suppose I have > to initialize to float values, though I liked using None's under > numeric because if for some reason the whole array was not populated, > I would get an error if I tried to do anything useful with the array. > This brings up a question, then, about how to initialize an array when you dont know ahead of time what type it will contain. I like to initialize a large array at the beginning and populate it, rather than iteratively append to a growing array (that's the Pythonic way, right?). C. -- Chris Fonnesbeck Atlanta, GA From emsellem at obs.univ-lyon1.fr Tue Nov 29 10:52:27 2005 From: emsellem at obs.univ-lyon1.fr (Eric Emsellem) Date: Tue, 29 Nov 2005 16:52:27 +0100 Subject: [SciPy-user] what type of data for scipy.stats.mannwhitneyu ? Message-ID: <438C793B.3030009@obs.univ-lyon1.fr> Hi, I am trying to compare two datasets with scipy.stats.mannwhitneyu(x,y). HOwever I cannot make it work. It always gives me: TypeError: data type not understood or TypeError: Not a numeric type So the question is: what type should "x" and "y" be? (I tried arrays, tuples, etc)... If someone can provide me with a specific example (explicit x, and y for which this functions works), that would help. Thanks! Eric From fonnesbeck at gmail.com Tue Nov 29 10:55:57 2005 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Tue, 29 Nov 2005 10:55:57 -0500 Subject: [SciPy-user] problems installing f2py extensions in scipy_core Message-ID: <723eb6930511290755g1b92854fwe40dcc696a92e542@mail.gmail.com> I am running into a new problem building a scipy_core based module that includes f2py extrensions. When I build the project using "python setup.py build", the build proceeds without error (including the f2py extensions). 
But then when I try to install with "sudo python setup.py install", it fails: compiling C sources gcc options: '-fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -fPIC -fno-common -dynamic -DNDEBUG -g -O3 -Wall -Wstrict-prototypes' compile options: '-Ibuild/src -I/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/base/include -I/Library/Frameworks/Python.framework/Versions/2.4/include/python2.4 -c' gcc: build/src/PyMC/flibmodule.c compiling Fortran sources g77(f77) options: '-Wall -fno-second-underscore -fPIC -O3 -funroll-loops -mcpu=7450 -mtune=7450' error: PyMC/flib.f: Invalid argument Curiously, it tries to rebuild the f2py extensions, even though they have already been built and have not changed. This is where it fails. I am not doing anything different than usual, except that I am using a recent SVN build of scipy_core. Thanks, C. -- Chris Fonnesbeck Atlanta, GA From ryanlists at gmail.com Tue Nov 29 11:41:41 2005 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 29 Nov 2005 11:41:41 -0500 Subject: [SciPy-user] object_arrtype addition error In-Reply-To: <723eb6930511290738r2fbbb8afj86748f09acf6e307@mail.gmail.com> References: <723eb6930511281806q1ea9b176vfb73a2532afcf44@mail.gmail.com> <438BCD81.3040909@ieee.org> <723eb6930511290434u60029be4q666b2067c988ed69@mail.gmail.com> <723eb6930511290702q27a6eb81g9ae98ea451f5e9c8@mail.gmail.com> <723eb6930511290738r2fbbb8afj86748f09acf6e307@mail.gmail.com> Message-ID: But how can Python know how much space to save for the array if it doesn't know the data type? On 11/29/05, Chris Fonnesbeck wrote: > On 11/29/05, Chris Fonnesbeck wrote: > > On 11/29/05, Chris Fonnesbeck wrote: > > > On 11/28/05, Travis Oliphant wrote: > > > > Chris Fonnesbeck wrote: > > > > > > > > >Under scipy_core, I have a dictionary that contains what I thought > > > > >were floating point numbers, but are actually 'object_arrtype': > > > > > > > > > > > > > > > > > > > One question is why these are object array scalars (entries in an object > > > > array), instead of floating point numbers in the first place. Perhaps > > > > there is something going on there. > > > > > > > > > > Its just my PyMC code calculating the quantiles of the trace (the big > > > array of sampled values), and throwing those values into a dictionary: > > > > > > > I figured it out. The trace array is intitialized as: > > > > self._trace = array([None]*size) > > > > Which, of course, is of type object_arrtype. When each element is > > replaced by a proper value, the type does not change. I suppose I have > > to initialize to float values, though I liked using None's under > > numeric because if for some reason the whole array was not populated, > > I would get an error if I tried to do anything useful with the array. > > > > This brings up a question, then, about how to initialize an array when > you dont know ahead of time what type it will contain. I like to > initialize a large array at the beginning and populate it, rather than > iteratively append to a growing array (that's the Pythonic way, > right?). > > C. 
> > -- > Chris Fonnesbeck > Atlanta, GA > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From alopez at imim.es Tue Nov 29 11:42:25 2005 From: alopez at imim.es (LOPEZ GARCIA DE LOMANA, ADRIAN) Date: Tue, 29 Nov 2005 17:42:25 +0100 Subject: [SciPy-user] Building SciPy on OS X 10.4.3 Message-ID: <66373AD054447F47851FCC5EB49B361101490AAA@basquet.imim.es> Hi all, first of all I want to thank Chris Fonnesbeck who help me to build SciPy under Mac OS X these days. Still I've got some errors at the scipy.test() that I would like to publish. My OS X is 10.4.3, and the Python is 2.4.2. I install all the requirements with (apparently) no errors. I've download the sources from the SourceForge page and edit the file Lib/fftpack/setup.py as it is said at http://www.scipy.org/documentation/Members/fonnesbeck/osx_build.txt The building and the intalling steps finished cleanly. But while running the scipy.test(), I've got some errors: Macintosh-3:~/Desktop/scipy-0.4.3 adrianlopez$ python ActivePython 2.4.2 Build 248 (ActiveState Corp.) based on Python 2.4.2 (#1, Oct 3 2005, 09:39:46) [GCC 3.3 20030304 (Apple Computer, Inc. build 1666)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import scipy Importing test to scipy Importing base to scipy Importing basic to scipy Importing io to scipy Importing fftpack to scipy Importing special to scipy Importing cluster to scipy Importing sparse to scipy Importing utils to scipy Importing interpolate to scipy Importing lib to scipy Importing integrate to scipy Importing signal to scipy Importing optimize to scipy Importing linalg to scipy Importing stats to scipy >>> scipy.test() Found 4 tests for scipy.io.array_import Found 128 tests for scipy.linalg.fblas Found 2 tests for scipy.base.umath Found 92 tests for scipy.stats.stats Found 9 tests for scipy.base.twodim_base Found 36 tests for scipy.linalg.decomp Found 49 tests for scipy.sparse.sparse Found 20 tests for scipy.fftpack.pseudo_diffs Found 6 tests for scipy.optimize.optimize Found 5 tests for scipy.interpolate.fitpack Found 4 tests for scipy.base.index_tricks Found 12 tests for scipy.io.mmio Found 1 tests for scipy.integrate Found 4 tests for scipy.linalg.lapack Found 24 tests for scipy.base.function_base Found 18 tests for scipy.fftpack.basic Found 1 tests for scipy.optimize.zeros Found 92 tests for scipy.stats Found 6 tests for scipy.base.matrix **************************************************************** WARNING: clapack module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by scipy/system_info.py, then scipy uses flapack instead of clapack. 
**************************************************************** Found 42 tests for scipy.lib.lapack Found 42 tests for scipy.base.type_check Found 339 tests for scipy.special.basic Found 7 tests for scipy.linalg.matfuncs Found 41 tests for scipy.linalg.basic Found 1 tests for scipy.optimize.cobyla Found 10 tests for scipy.stats.morestats Found 14 tests for scipy.linalg.blas Found 17 tests for scipy.base.ma Found 3 tests for scipy.base.getlimits Found 70 tests for scipy.stats.distributions Found 6 tests for scipy.optimize Found 49 tests for scipy.sparse Found 4 tests for scipy.fftpack.helper Found 44 tests for scipy.base.shape_base Found 3 tests for scipy.basic.helper Found 3 tests for scipy.signal.signaltools Found 0 tests for __main__ Don't worry about a warning regarding the number of bytes read. Warning: 1000000 bytes requested, 20 bytes read. .......caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 ...............................................................................................................................................E....E...........E....E...........E....E........F.........F.....F.F......./Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/interpolate/fitpack2.py:410: UserWarning: The coefficients of the spline returned have been computed as the minimal norm least-squares solution of a (numerically) rank deficient system (deficiency=7). If deficiency is large, the results may be inaccurate. Deficiency may strongly depend on the value of eps. warnings.warn(message) .....................Residual: 1.05006926991e-07 ... **************************************************************** WARNING: clapack module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by scipy/system_info.py, then scipy uses flapack instead of clapack. 
**************************************************************** ..........................F...F..F..FF.F....TESTING CONVERGENCE zero should be 1 function f2 cc.bisect : 1.0000000000001952 cc.ridder : 1.0000000000004661 cc.brenth : 0.9999999999999997 cc.brentq : 0.9999999999999577 function f3 cc.bisect : 1.0000000000001952 cc.ridder : 1.0000000000000000 cc.brenth : 1.0000000000000009 cc.brentq : 1.0000000000000011 function f4 cc.bisect : 1.0000000000001952 cc.ridder : 1.0000000000001454 cc.brenth : 0.9999999999993339 cc.brentq : 0.9999999999993339 function f5 cc.bisect : 1.0000000000001952 cc.ridder : 1.0000000000004574 cc.brenth : 0.9999999999991444 cc.brentq : 0.9999999999991444 function f6 cc.bisect : 1.0000000000001952 cc.ridder : 0.9999999999998167 cc.brenth : 0.9999999999997568 cc.brentq : 1.0000000000011442 ..........................................................................................................................................................................................................................................................................................................................................................................................................................Gegenbauer, a = 0.87957454011 .....................................................................................F...........................Result may be inaccurate, approximate err = 3.05848163994e-08 ...Result may be inaccurate, approximate err = 4.54747350886e-13 ............................................Result: [ 4.95535778 0.6666553 ] (exact result = 4.955356249106168, 0.666666666666666) ...Ties preclude use of exact statistic. ..Ties preclude use of exact statistic. ...... **************************************************************** WARNING: cblas module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by scipy/system_info.py, then scipy uses fblas instead of cblas. **************************************************************** ...............................................................................................................E....E...........E....E...........E....E............................................................... 
====================================================================== ERROR: Check whether adding a dense matrix to a sparse matrix works ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 148, in check_add_dense sum1 = self.dat + self.datsp File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 502, in __radd__ raise NotImplementedError, 'adding a scalar to a CSC matrix is ' \ NotImplementedError: adding a scalar to a CSC matrix is not yet supported ====================================================================== ERROR: check_matmat (scipy.sparse.sparse.test_sparse.test_csc) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 84, in check_matmat assert_array_almost_equal((a*bsp).todense(), dot(a,b)) AttributeError: 'scipy.ndarray' object has no attribute 'todense' ====================================================================== ERROR: Check whether adding a dense matrix to a sparse matrix works ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 148, in check_add_dense sum1 = self.dat + self.datsp File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 173, in __radd__ return csc.__radd__(other) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 502, in __radd__ raise NotImplementedError, 'adding a scalar to a CSC matrix is ' \ NotImplementedError: adding a scalar to a CSC matrix is not yet supported ====================================================================== ERROR: check_matmat (scipy.sparse.sparse.test_sparse.test_csr) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 84, in check_matmat assert_array_almost_equal((a*bsp).todense(), dot(a,b)) AttributeError: 'scipy.ndarray' object has no attribute 'todense' ====================================================================== ERROR: Check whether adding a dense matrix to a sparse matrix works ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 149, in check_add_dense assert_array_equal(sum1, 2*self.dat) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/test/testing.py", line 724, in assert_array_equal reduced = ravel(equal(x,y)) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 146, in __cmp__ raise TypeError, "comparison of sparse matrices not implemented" TypeError: comparison of sparse matrices not implemented ====================================================================== ERROR: check_matmat (scipy.sparse.sparse.test_sparse.test_dok) 
---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 84, in check_matmat assert_array_almost_equal((a*bsp).todense(), dot(a,b)) AttributeError: 'scipy.ndarray' object has no attribute 'todense' ====================================================================== ERROR: Check whether adding a dense matrix to a sparse matrix works ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 148, in check_add_dense sum1 = self.dat + self.datsp File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 502, in __radd__ raise NotImplementedError, 'adding a scalar to a CSC matrix is ' \ NotImplementedError: adding a scalar to a CSC matrix is not yet supported ====================================================================== ERROR: check_matmat (scipy.sparse.test_sparse.test_csc) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 84, in check_matmat assert_array_almost_equal((a*bsp).todense(), dot(a,b)) AttributeError: 'scipy.ndarray' object has no attribute 'todense' ====================================================================== ERROR: Check whether adding a dense matrix to a sparse matrix works ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 148, in check_add_dense sum1 = self.dat + self.datsp File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 173, in __radd__ return csc.__radd__(other) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 502, in __radd__ raise NotImplementedError, 'adding a scalar to a CSC matrix is ' \ NotImplementedError: adding a scalar to a CSC matrix is not yet supported ====================================================================== ERROR: check_matmat (scipy.sparse.test_sparse.test_csr) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 84, in check_matmat assert_array_almost_equal((a*bsp).todense(), dot(a,b)) AttributeError: 'scipy.ndarray' object has no attribute 'todense' ====================================================================== ERROR: Check whether adding a dense matrix to a sparse matrix works ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 149, in check_add_dense assert_array_equal(sum1, 2*self.dat) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/test/testing.py", line 724, in assert_array_equal reduced = ravel(equal(x,y)) File 
"/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/sparse/sparse.py", line 146, in __cmp__ raise TypeError, "comparison of sparse matrices not implemented" TypeError: comparison of sparse matrices not implemented ====================================================================== ERROR: check_matmat (scipy.sparse.test_sparse.test_dok) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/sparse/tests/test_sparse.py", line 84, in check_matmat assert_array_almost_equal((a*bsp).todense(), dot(a,b)) AttributeError: 'scipy.ndarray' object has no attribute 'todense' ====================================================================== FAIL: check_definition (scipy.fftpack.pseudo_diffs.test_pseudo_diffs.test_diff) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/fftpack/tests/test_pseudo_diffs.py", line 89, in check_definition assert_array_almost_equal(diff(sin(x),2),direct_diff(sin(x),2)) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/test/testing.py", line 758, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 87.5%): Array 1: [ -6.3519652e-15 -3.8268343e-01 -7.0710678e-01 -9.2387953e-01 -1.0000000e+00 -9.2387953e-01 -7.0710678e-01 -3.82... Array 2: [ -7.3854931e-15 6.5259351e-15 -2.4942634e-15 -7.5636114e-17 1.4745663e-15 -1.9133685e-15 2.2804788e-16 8.70... ====================================================================== FAIL: check_random_even (scipy.fftpack.pseudo_diffs.test_pseudo_diffs.test_hilbert) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/fftpack/tests/test_pseudo_diffs.py", line 335, in check_random_even assert_array_almost_equal(direct_hilbert(direct_ihilbert(f)),f) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/test/testing.py", line 758, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [ 0.+0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.... Array 2: [-0.0511398 0.4170731 0.1837093 -0.2308072 -0.2344555 0.4470189 -0.3737561 -0.2798173 -0.3871084 0.2063998 -0.38172... ====================================================================== FAIL: check_definition (scipy.fftpack.pseudo_diffs.test_pseudo_diffs.test_shift) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/fftpack/tests/test_pseudo_diffs.py", line 393, in check_definition assert_array_almost_equal(shift(sin(x),a),direct_shift(sin(x),a)) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/test/testing.py", line 758, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 88.8888888889%): Array 1: [ 0.0998334 0.4341242 0.7160532 0.9116156 0.9972237 0.9625519 0.8117822 0.5630995 0.2464987 -0.0998334 -0.43412... 
Array 2: [ 0.0998334 0.0938127 0.0764768 0.0499167 0.0173359 -0.0173359 -0.0499167 -0.0764768 -0.0938127 -0.0998334 -0.09381... ====================================================================== FAIL: check_random_even (scipy.fftpack.pseudo_diffs.test_pseudo_diffs.test_tilbert) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/fftpack/tests/test_pseudo_diffs.py", line 243, in check_random_even assert_array_almost_equal(direct_tilbert(direct_itilbert(f,h),h),f) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/test/testing.py", line 758, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [ 0.+0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.... Array 2: [ 0.1989969 0.4450684 0.1778266 -0.1660265 -0.096574 -0.2737429 -0.4183166 0.4751433 0.2629773 0.0808432 -0.26623... ====================================================================== FAIL: check_definition (scipy.fftpack.basic.test_basic.test_fft) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/fftpack/tests/test_basic.py", line 102, in check_definition assert_array_almost_equal(y,y1) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/test/testing.py", line 758, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [ 20.+0.j 0.+0.j -4.+4.j 0.+0.j -4.+0.j 0.-0.j -4.-4.j 0.-0.j] Array 2: [ 20. +3.j -0.7071068+0.7071068j -7. +4.j -0.7071068-0.7071068j -4. -3.j 0.707106... ====================================================================== FAIL: check_definition (scipy.fftpack.basic.test_basic.test_fftn) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/fftpack/tests/test_basic.py", line 429, in check_definition assert_array_almost_equal(y,direct_dftn(x)) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/test/testing.py", line 758, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 22.2222222222%): Array 1: [[ 45. +0.j -4.5+2.5980762j -4.5-2.5980762j] [-13.5+7.7942286j 0. +0.j 0. +0.j ] [-13.5-7.79... Array 2: [[ 45. +0.j -4.5+2.5980762j -4.5-2.5980762j] [-13.5+0.j -0. +0.j -0. -0.j ] [-13.5+0.j ... ====================================================================== FAIL: check_definition (scipy.fftpack.basic.test_basic.test_ifft) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/fftpack/tests/test_basic.py", line 187, in check_definition assert_array_almost_equal(y,y1) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/test/testing.py", line 758, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [ 2.5+0.j 0. -0.j -0.5-0.5j 0. -0.j -0.5+0.j 0. +0.j -0.5+0.5j 0. 
+0.j ] Array 2: [ 2.5 +0.375j 0.0883883+0.0883883j -0.125 -0.5j 0.0883883-0.0883883j -0.5 -0.375j -0.0883883-0.0... ====================================================================== FAIL: check_random_real (scipy.fftpack.basic.test_basic.test_ifft) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/fftpack/tests/test_basic.py", line 221, in check_random_real assert_array_almost_equal (ifft(fft(x)),x) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/test/testing.py", line 758, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 98.0392156863%): Array 1: [ 0.4127615 +0.0000000e+00j 0.5061932 +3.2653618e-18j 0.2375916 -6.4218783e-17j 0.479863 +2.0291934e-17j 0.851459... Array 2: [ 0.4127615 0.6425217 0.3923591 0.8931939 0.8700022 0.6040551 0.4261222 0.792201 0.887018 0.8708383 0.35789... ====================================================================== FAIL: check_definition (scipy.fftpack.basic.test_basic.test_ifftn) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/fftpack/tests/test_basic.py", line 598, in check_definition assert_array_almost_equal(y,direct_idftn(x)) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/test/testing.py", line 758, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 22.2222222222%): Array 1: [[ 5. +0.j -0.5-0.2886751j -0.5+0.2886751j] [-1.5-0.8660254j 0. +0.j 0. +0.j ] [-1.5+0.8660254j ... Array 2: [[ 5. +0.j -0.5-0.2886751j -0.5+0.2886751j] [-1.5+0.j -0. -0.j -0. +0.j ] [-1.5+0.j ... 
====================================================================== FAIL: check_definition (scipy.fftpack.basic.test_basic.test_irfft) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/fftpack/tests/test_basic.py", line 345, in check_definition assert_array_almost_equal(y,ifft(x1)) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/test/testing.py", line 758, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 50.0%): Array 1: [ 2.625 -1.6856602 -0.375 -1.1856602 0.625 0.4356602 -0.375 0.9356602] Array 2: [ 2.625+0.j -0.375-0.j -0.375-0.j -0.375-0.j 0.625+0.j -0.375+0.j -0.375+0.j -0.375+0.j] ====================================================================== FAIL: check_round (scipy.special.basic.test_basic.test_round) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/special/tests/test_basic.py", line 1793, in check_round assert_array_equal(rnd,rndrl) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/test/testing.py", line 733, in assert_array_equal assert cond,\ AssertionError: Arrays are not equal (mismatch 25.0%): Array 1: [10 10 10 11] Array 2: [10 10 11 11] ---------------------------------------------------------------------- Ran 1210 tests in 7.888s FAILED (failures=11, errors=12) >>> Are they serious errors or can I use SciPy confidently? Thanks for your help, Adri?n. PS: Before this, I've also tried checking the files via svn and I've got errors with the fftw libraries: > > > Macintosh-3:~/Desktop/scipy adrianlopez$ python > > > ActivePython 2.4.2 Build 248 (ActiveState Corp.) based on > > > Python 2.4.2 (#1, Oct 3 2005, 09:39:46) > > > [GCC 3.3 20030304 (Apple Computer, Inc. build 1666)] on darwin > > > Type "help", "copyright", "credits" or "license" for more information. > > > >>> import scipy > > > Importing test to scipy > > > Importing base to scipy > > > Importing basic to scipy > > > Importing io to scipy > > > Importing fftpack to scipy > > > Failed to import fftpack > > > Failure linking new module: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/fftpack/_fftpack.so: Symbol not found: _fprintf$LDBLStub > > > Referenced from: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/fftpack/_fftpack.so > > > Expected in: dynamic lookup > > > > > > Importing special to scipy > > > Importing cluster to scipy > > > Importing sparse to scipy > > > Importing utils to scipy > > > Importing interpolate to scipy > > > Importing lib to scipy > > > Importing integrate to scipy > > > Importing signal to scipy > > > Failed to import signal > > > Failure linking new module: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/fftpack/_fftpack.so: Symbol not found: _fprintf$LDBLStub > > > Referenced from: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/fftpack/_fftpack.so > > > Expected in: dynamic lookup > > > > > > Importing optimize to scipy > > > Importing linalg to scipy > > > Importing stats to scipy I could give a more detailed description about this if someone is interested. Thanks again, Adri?n. 
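When nearly all of the failures cluster in one subpackage, as they do here with fftpack, it can be quicker to re-run just that subpackage's checks while experimenting with compiler or fftw settings. A minimal sketch, assuming each scipy subpackage exposes the same test() helper that the full scipy.test() run is built on:

>>> import scipy.fftpack
>>> scipy.fftpack.test()        # runs only the fftpack test suite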
From fonnesbeck at gmail.com Tue Nov 29 12:07:16 2005 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Tue, 29 Nov 2005 12:07:16 -0500 Subject: [SciPy-user] object_arrtype addition error In-Reply-To: References: <723eb6930511281806q1ea9b176vfb73a2532afcf44@mail.gmail.com> <438BCD81.3040909@ieee.org> <723eb6930511290434u60029be4q666b2067c988ed69@mail.gmail.com> <723eb6930511290702q27a6eb81g9ae98ea451f5e9c8@mail.gmail.com> <723eb6930511290738r2fbbb8afj86748f09acf6e307@mail.gmail.com> Message-ID: <723eb6930511290907k764a0a28i93c301e4c5c3731d@mail.gmail.com> On 11/29/05, Ryan Krauss wrote: > But how can Python know how much space to save for the array if it > doesn't know the data type? > I wouldnt argue with you except that I was able to do this in Numeric. I could initialize an array to: foo = array([None]*1000) Then populate it with whatever I like afterwards. C. -- Chris Fonnesbeck Atlanta, GA From managan at llnl.gov Tue Nov 29 12:12:15 2005 From: managan at llnl.gov (Rob Managan) Date: Tue, 29 Nov 2005 09:12:15 -0800 Subject: [SciPy-user] Building SciPy on OS X 10.4.3 In-Reply-To: <66373AD054447F47851FCC5EB49B361101490AAA@basquet.imim.es> References: <66373AD054447F47851FCC5EB49B361101490AAA@basquet.imim.es> Message-ID: I just want to add that I am seeing the same errors on OSX 10.3.9, python 2.4.1 using the latest svn revisions. I did not use to see these errors back around end of October or early November (except for the check_round failure) At 5:42 PM +0100 11/29/05, LOPEZ GARCIA DE LOMANA, ADRIAN wrote: >Hi all, > >first of all I want to thank Chris Fonnesbeck who help me to build >SciPy under Mac OS X these days. >Still I've got some errors at the scipy.test() that I would like to publish. > >My OS X is 10.4.3, and the Python is 2.4.2. I install all the >requirements with (apparently) no errors. > >I've download the sources from the SourceForge page and edit the >file Lib/fftpack/setup.py as it is said at >http://www.scipy.org/documentation/Members/fonnesbeck/osx_build.txt > -- *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*- Rob Managan email managan at llnl.gov LLNL phone: 925-423-0903 P.O. Box 808, L-095 FAX: 925-422-3389 Livermore, CA 94551-0808 From fonnesbeck at gmail.com Tue Nov 29 12:38:41 2005 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Tue, 29 Nov 2005 12:38:41 -0500 Subject: [SciPy-user] object_arrtype addition error In-Reply-To: <723eb6930511290907k764a0a28i93c301e4c5c3731d@mail.gmail.com> References: <723eb6930511281806q1ea9b176vfb73a2532afcf44@mail.gmail.com> <438BCD81.3040909@ieee.org> <723eb6930511290434u60029be4q666b2067c988ed69@mail.gmail.com> <723eb6930511290702q27a6eb81g9ae98ea451f5e9c8@mail.gmail.com> <723eb6930511290738r2fbbb8afj86748f09acf6e307@mail.gmail.com> <723eb6930511290907k764a0a28i93c301e4c5c3731d@mail.gmail.com> Message-ID: <723eb6930511290938l2e3a5bdi78d6d1ffa16690e2@mail.gmail.com> On 11/29/05, Chris Fonnesbeck wrote: > On 11/29/05, Ryan Krauss wrote: > > But how can Python know how much space to save for the array if it > > doesn't know the data type? > > > > I wouldnt argue with you except that I was able to do this in Numeric. > I could initialize an array to: > > foo = array([None]*1000) > > Then populate it with whatever I like afterwards. > Here is another problem (perhaps I should have started a new thread for this, but its related): I have an array that I built up of lists of floats. 
Here is a small, trivial example: (Pdb) foo Out[3]: array([[ 1.27826531 0.43086964], [ 1.27826531 0.43086964]], dtype=object) Try calling ravel(): (Pdb) foo.ravel() Out[3]: array([[ 1.27826531 0.43086964], [ 1.27826531 0.43086964]], dtype=object) I see *why* this is happening, but I dont think that it should. I expect an array of shape (4,) when I call ravel() on this array. C. -- Chris Fonnesbeck Atlanta, GA From basvandijk at home.nl Tue Nov 29 13:29:15 2005 From: basvandijk at home.nl (Bas van Dijk) Date: Tue, 29 Nov 2005 19:29:15 +0100 Subject: [SciPy-user] Chaco & wxWindows In-Reply-To: <438B97B1.8060106@enthought.com> References: <5640612.1132846692299.JavaMail.root@webmail1.groni1> <438B97B1.8060106@enthought.com> Message-ID: <200511291929.15364.basvandijk@home.nl> > It is relatively easy to embed a Chaco plot into a WX panel: > ... Thanks Peter, that looks easy indeed! > Is there anything you want the user to be able to change about the plot, > once it's displayed? The user should be able to zoom in and out and scroll over the plot (Preferably by using the +,- and arrow keys). It would also be nice if the user can change other things like the scale of the axes and maybe also the axes titles. But I already read in the excellent Chaco tutorial how to do that. Another question I have is if the Chaco plots can be used in an interactive manner? The application I developed is a logging program. It connects to a scientific instrument and starts plotting the values it receives. The reason I want to use another plotting package (I'm now using scipy.plt) is that scipy.plt doesn't do double buffering. So when a new value comes in, the screen flickers when it is plotted. Do you know if Chaco can solve this problem? Bas. From eric at enthought.com Tue Nov 29 13:40:46 2005 From: eric at enthought.com (eric jones) Date: Tue, 29 Nov 2005 12:40:46 -0600 Subject: [SciPy-user] Efficiently scaling matrix rows In-Reply-To: <1133272827.8967.8.camel@localhost.localdomain> References: <1133272827.8967.8.camel@localhost.localdomain> Message-ID: <438CA0AE.4070601@enthought.com> Here is a snippet that does what you want. The key is to use "broadcasting." Here we take the 1D scale, and turn it into a Nx1 array. Travis has some discussion of broadcasting here: http://www.tramy.us/scipybooksample.pdf I also wrote a paper for pyzine a while back about broadcasting, but I don't have a copy available on this machine. I'll try to scare it up and pass it on to you. eric from scipy import * N = 5.0 a = ones((N,3),typecode=Float32) print a scale = arange(N) print scale b = a * scale[:,NewAxis] print b Neilen Marais wrote: >Hi > >I have a an n x 3 matrix (i.e. each row contains three elements). I want >to scale each row by the element of a 1-d array. I.o.w, if I have > >arr = array([1., 2., 3., ....]), > >I want to scale the first row of the matrix by arr[0], the second by >arr[1], etc. ATM I'm doing this: > >coefs = reshape(repeat(arr, 3), (-1, 3)) >mat = coefs*mat > >Is there a more efficent (and potentially easier to read) way that does >not require redundant copies of arr, while avoiding a slow python loop? 
> >Thanks >Neilen > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > From oliphant.travis at ieee.org Tue Nov 29 14:06:30 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 29 Nov 2005 12:06:30 -0700 Subject: [SciPy-user] object_arrtype addition error In-Reply-To: <723eb6930511290907k764a0a28i93c301e4c5c3731d@mail.gmail.com> References: <723eb6930511281806q1ea9b176vfb73a2532afcf44@mail.gmail.com> <438BCD81.3040909@ieee.org> <723eb6930511290434u60029be4q666b2067c988ed69@mail.gmail.com> <723eb6930511290702q27a6eb81g9ae98ea451f5e9c8@mail.gmail.com> <723eb6930511290738r2fbbb8afj86748f09acf6e307@mail.gmail.com> <723eb6930511290907k764a0a28i93c301e4c5c3731d@mail.gmail.com> Message-ID: <438CA6B6.2080805@ieee.org> Chris Fonnesbeck wrote: >On 11/29/05, Ryan Krauss wrote: > > >>But how can Python know how much space to save for the array if it >>doesn't know the data type? >> >> >> > >I wouldnt argue with you except that I was able to do this in Numeric. >I could initialize an array to: > >foo = array([None]*1000) > >Then populate it with whatever I like afterwards. > > > Yes, you could do this, just as you can with scipy_core. But, Numeric was always dealing with an object array. It's just that the object returned was a Python float. Now, selection from object arrays returns a "wrapped" object (that's what an object scalar is). In your case it's a "wrapped" Python float that allows the scalar to have all the methods and attributes of arrays. So, you have always been using object arrays. That's fine, if you recognize that object arrays can be a little slower and consume more memory. It all depends on what you do with them. The only thing that's changed is what gets returned from an object array. The reason for the change is to provide (method and attribute) consistency between scalar elements of an array and the array itself. -Travis From oliphant.travis at ieee.org Tue Nov 29 14:08:34 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 29 Nov 2005 12:08:34 -0700 Subject: [SciPy-user] [SciPy-dev] 12 errors using 0.7.3.1538 0.4.2_1461 In-Reply-To: <438C0547.6060902@mecha.uni-stuttgart.de> References: <438C0547.6060902@mecha.uni-stuttgart.de> Message-ID: <438CA732.9080003@ieee.org> Nils Wagner wrote: There have been reports of sparse matrix errors after the recent change to scipy_core to allow other object array math to work correctly. I'm not getting these errors. Please make sure you've got an updated version of Lib/sparse/sparse.py In particular, make sure that the spmatrix class defines the __array_priority__ attribute. The hand-off to the rop methods only occurs if the object has this attribute. -Travis From managan at llnl.gov Tue Nov 29 14:53:17 2005 From: managan at llnl.gov (Rob Managan) Date: Tue, 29 Nov 2005 11:53:17 -0800 Subject: [SciPy-user] [SciPy-dev] 12 errors using 0.7.3.1538 0.4.2_1461 In-Reply-To: <438CA732.9080003@ieee.org> References: <438C0547.6060902@mecha.uni-stuttgart.de> <438CA732.9080003@ieee.org> Message-ID: At 12:08 PM -0700 11/29/05, Travis Oliphant wrote: >Nils Wagner wrote: > >There have been reports of sparse matrix errors after the recent change >to scipy_core to allow other object array math to work correctly. > >I'm not getting these errors. > >Please make sure you've got an updated version of Lib/sparse/sparse.py > >In particular, make sure that the spmatrix class defines the >__array_priority__ attribute. 
The hand-off to the rop methods only >occurs if the object has this attribute. > >-Travis > The current sparse.py in svn (revision 1461) does not appear to have this attribute. I just renamed my copy and when svn restored the original it has no string "priority" in it. Hope this helps sort out where the problem is! -- *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*- Rob Managan email managan at llnl.gov LLNL phone: 925-423-0903 P.O. Box 808, L-095 FAX: 925-422-3389 Livermore, CA 94551-0808 From robert.kern at gmail.com Tue Nov 29 15:18:01 2005 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 29 Nov 2005 12:18:01 -0800 Subject: [SciPy-user] what type of data for scipy.stats.mannwhitneyu ? In-Reply-To: <438C793B.3030009@obs.univ-lyon1.fr> References: <438C793B.3030009@obs.univ-lyon1.fr> Message-ID: <438CB779.2010905@gmail.com> Eric Emsellem wrote: > Hi, > I am trying to compare two datasets with scipy.stats.mannwhitneyu(x,y). > HOwever I cannot make it work. It always gives me: > > TypeError: data type not understood > or > TypeError: Not a numeric type You should always post at least the part of the traceback that shows the line that's throwing the exception. In this case: x,y = asarray(x, y) I don't understand why anyone ever thought that would work, but it's fixed now in SVN. x and y should both be rank-1 arrays, possibly of different lengths. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From pwang at enthought.com Tue Nov 29 16:49:21 2005 From: pwang at enthought.com (Peter Wang) Date: Tue, 29 Nov 2005 15:49:21 -0600 Subject: [SciPy-user] Chaco & wxWindows In-Reply-To: <200511291929.15364.basvandijk@home.nl> References: <5640612.1132846692299.JavaMail.root@webmail1.groni1> <438B97B1.8060106@enthought.com> <200511291929.15364.basvandijk@home.nl> Message-ID: <438CCCE1.4090501@enthought.com> (I'm moving this thread to the scipy-chaco list, and CCing this message to the general scipy-users list.) Bas van Dijk wrote: > The user should be able to zoom in and out and scroll over the plot > (Preferably by using the +,- and arrow keys). There is a very flexible interaction mechanism to allow composing these sorts of behaviors. The zoom and pan interactions are already implemented, and others can be derived easily from existing ones. (The pan interaction is actually still on a branch and not the trunk, but it's easy enough to move that over.) You just need to modify the previous code a little bit to add zooming: ... from enthought.chaco.interaction.zoom_interaction import ZoomInteraction class MyFrame ( wx.Frame ): def __init__ ( self, *args, **kw ): wx.Frame.__init__( *(self,) + args, **kw ) data = sin(arange(-5.0, 5.0, 0.3)) zoom = ZoomInteraction(reset_on_complete=True) x_axis = PlotAxis(title="X axis") y_axis = PlotAxis(title="Y axis") plot = PlotValue(data, type="scatter,line", axis_index = x_axis, axis = y_axis, interaction=zoom) plot_window = Window(self, component=PlotComponent(plot)) ... There's a quirk in the zoom interaction that requires you to create the axes explicitly, instead of letting the canvas create it automatically. Also, there is no limit to the amount that a user can try to zoom, so you need to do that yourself, e.g. by checking on the bound_low and bound_high of the axis objects. As written, the zoom interaction allows the user to select a rectangular region with the left mouse button, and a single right-click will unzoom. 
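To make that bounds check concrete, here is a rough sketch of the kind of clamp one could apply
once a zoom completes. Only bound_low, bound_high and the 'auto' sentinel come from the discussion
above; the function name, the limit values and the place it gets called from are placeholders to
adapt to your own plot:

def clamp_axis(axis, data_min, data_max):
    # Keep the visible range inside the data range after a zoom.
    # bound_low/bound_high may still hold the string 'auto' when
    # auto-scaling is on, so skip the clamp in that case.
    if axis.bound_low != 'auto' and axis.bound_low < data_min:
        axis.bound_low = data_min
    if axis.bound_high != 'auto' and axis.bound_high > data_max:
        axis.bound_high = data_max

clamp_axis(x_axis, 0.0, 33.0)   # placeholder limits -- substitute your data range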
You can look at the code inside chaco/interactions/zoom_interaction to see how to make your own interaction that behaves somewhat differently. It's pretty trivial to adapt the existing interactions to use keypresses. I've added a keyboard_pan_demo.py to chaco/demo/. Here is what the panning interaction looks like: class KeyboardPan(PlotInteraction): stepsize = 10 # number of pixels to move left or right accepts_focus = True def _key_changed(self, event): # event is an enable.events.KeyEvent; character is a simple char # or the name of a control char (see enable.wx.window.key_map). if event.character == "Right": delta = self.stepsize elif event.character == "Left": delta = -self.stepsize else: return handled_axes = [] for plotvalue in self.canvas.values: x_axis = plotvalue.axis_index # Check for auto-scaling and turn it off. if 'auto' in (x_axis.bound_low, x_axis.bound_high): axis_info = plotvalue._axis_info_index x_axis.bound_low = axis_info.min x_axis.bound_high = axis_info.max xscale = plotvalue._plot_sxdx / (x_axis.bound_high - x_axis.bound_low) if x_axis not in handled_axes: x_axis.bound_low += delta/xscale x_axis.bound_high += delta/xscale handled_axes.append(x_axis) self.canvas.update() return Note that as you pan, the axis scale labels on the bottom change, and so the plot resizes a bit. This behavior can be mitigated by creating the axis with a set format: x_axis = PlotAxis(title="X axis", label_format="%2d") Also note that you have to click on the plot first before it starts receiving keyboard events. This is a known issue that's on my list of things to fix. > It would also be nice if the user can change other things like the scale of > the axes and maybe also the axes titles. But I already read in the excellent > Chaco tutorial how to do that. Great! But keep in mind that the tutorial is a little out of date with what's in the SVN. :) > Another question I have is if the Chaco plots can be used in an interactive > manner? The application I developed is a logging program. It connects to a > scientific instrument and starts plotting the values it receives. Yes, after you've updated data, just call canvas.update() to schedule a redraw. > The reason I want to use another plotting package (I'm now using scipy.plt) is > that scipy.plt doesn't do double buffering. So when a new value comes in, the > screen flickers when it is plotted. > Do you know if Chaco can solve this problem? Well, yes and no. Chaco does do double buffering, because the draw process is done to an internal bitmap that then gets blitted by WX. However, if you update too quickly, then flicker simply gets translated into "sluggishness", i.e. things don't update in a responsive fashion. At what rate is your data coming in? Perhaps you need to be buffering data, and not the graphics drawing? -Peter From Glen.Mabey at swri.org Tue Nov 29 17:29:54 2005 From: Glen.Mabey at swri.org (Glen W. Mabey) Date: Tue, 29 Nov 2005 16:29:54 -0600 Subject: [SciPy-user] is scipy OK for my needs? In-Reply-To: <723eb6930511280831m273143cavb38fa761290f3534@mail.gmail.com> References: <43872B4D.4020508@unibo.it> <438B2F81.90207@unibo.it> <723eb6930511280831m273143cavb38fa761290f3534@mail.gmail.com> Message-ID: <20051129222954.GC4555@bams.swri.edu> On Mon, Nov 28, 2005 at 11:31:35AM -0500, Chris Fonnesbeck wrote: > I have a package, called PyKF, that does non-linear (unscented) Kalman > filtering. I was thinking of someday adding it to SciPy, but I have > not moved forward with that yet. Let me know if you would like to try > it. 
While I have no immediate need for a Kalman filter, I would like to voice my desire so see whatever you've got included with scipy. Glen Mabey From gnchen at cortechs.net Tue Nov 29 19:13:08 2005 From: gnchen at cortechs.net (Gennan Chen) Date: Tue, 29 Nov 2005 16:13:08 -0800 Subject: [SciPy-user] Building SciPy on OS X 10.4.3 In-Reply-To: <66373AD054447F47851FCC5EB49B361101490AAA@basquet.imim.es> Message-ID: I used FINK's python 2.4 and I stuck at the same place. Hope it is alright to use though. Anyone can confirm that will be great!! Gen On 11/29/05 8:42 AM, "LOPEZ GARCIA DE LOMANA, ADRIAN" wrote: > > Hi all, > > first of all I want to thank Chris Fonnesbeck who help me to build SciPy under > Mac OS X these days. > Still I've got some errors at the scipy.test() that I would like to publish. > > My OS X is 10.4.3, and the Python is 2.4.2. I install all the requirements > with (apparently) no errors. > > I've download the sources from the SourceForge page and edit the file > Lib/fftpack/setup.py as it is said at > http://www.scipy.org/documentation/Members/fonnesbeck/osx_build.txt > > The building and the intalling steps finished cleanly. But while running the > scipy.test(), I've got some errors: > > Macintosh-3:~/Desktop/scipy-0.4.3 adrianlopez$ python > ActivePython 2.4.2 Build 248 (ActiveState Corp.) based on > Python 2.4.2 (#1, Oct 3 2005, 09:39:46) > [GCC 3.3 20030304 (Apple Computer, Inc. build 1666)] on darwin > Type "help", "copyright", "credits" or "license" for more information. >>>> import scipy > Importing test to scipy > Importing base to scipy > Importing basic to scipy > Importing io to scipy > Importing fftpack to scipy > Importing special to scipy > Importing cluster to scipy > Importing sparse to scipy > Importing utils to scipy > Importing interpolate to scipy > Importing lib to scipy > Importing integrate to scipy > Importing signal to scipy > Importing optimize to scipy > Importing linalg to scipy > Importing stats to scipy >>>> scipy.test() > Found 4 tests for scipy.io.array_import > Found 128 tests for scipy.linalg.fblas > Found 2 tests for scipy.base.umath > Found 92 tests for scipy.stats.stats > Found 9 tests for scipy.base.twodim_base > Found 36 tests for scipy.linalg.decomp > Found 49 tests for scipy.sparse.sparse > Found 20 tests for scipy.fftpack.pseudo_diffs > Found 6 tests for scipy.optimize.optimize > Found 5 tests for scipy.interpolate.fitpack > Found 4 tests for scipy.base.index_tricks > Found 12 tests for scipy.io.mmio > Found 1 tests for scipy.integrate > Found 4 tests for scipy.linalg.lapack > Found 24 tests for scipy.base.function_base > Found 18 tests for scipy.fftpack.basic > Found 1 tests for scipy.optimize.zeros > Found 92 tests for scipy.stats > Found 6 tests for scipy.base.matrix > > **************************************************************** > WARNING: clapack module is empty > ----------- > See scipy/INSTALL.txt for troubleshooting. > Notes: > * If atlas library is not found by scipy/system_info.py, > then scipy uses flapack instead of clapack. 
> **************************************************************** > > Found 42 tests for scipy.lib.lapack > Found 42 tests for scipy.base.type_check > Found 339 tests for scipy.special.basic > Found 7 tests for scipy.linalg.matfuncs > Found 41 tests for scipy.linalg.basic > Found 1 tests for scipy.optimize.cobyla > Found 10 tests for scipy.stats.morestats > Found 14 tests for scipy.linalg.blas > Found 17 tests for scipy.base.ma > Found 3 tests for scipy.base.getlimits > Found 70 tests for scipy.stats.distributions > Found 6 tests for scipy.optimize > Found 49 tests for scipy.sparse > Found 4 tests for scipy.fftpack.helper > Found 44 tests for scipy.base.shape_base > Found 3 tests for scipy.basic.helper > Found 3 tests for scipy.signal.signaltools > Found 0 tests for __main__ > > Don't worry about a warning regarding the number of bytes read. > Warning: 1000000 bytes requested, 20 bytes read. > .......caxpy:n=4 > ..caxpy:n=3 > ....ccopy:n=4 > ..ccopy:n=3 > .............cscal:n=4 > ....cswap:n=4 > ..cswap:n=3 > .....daxpy:n=4 > ..daxpy:n=3 > ....dcopy:n=4 > ..dcopy:n=3 > .............dscal:n=4 > ....dswap:n=4 > ..dswap:n=3 > .....saxpy:n=4 > ..saxpy:n=3 > ....scopy:n=4 > ..scopy:n=3 > .............sscal:n=4 > ....sswap:n=4 > ..sswap:n=3 > .....zaxpy:n=4 > ..zaxpy:n=3 > ....zcopy:n=4 > ..zcopy:n=3 > .............zscal:n=4 > ....zswap:n=4 > ..zswap:n=3 > .............................................................................. > .................................................................E....E....... > ....E....E...........E....E........F.........F.....F.F......./Library/Framewor > ks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/interpolate > /fitpack2.py:410: UserWarning: > The coefficients of the spline returned have been computed as the > minimal norm least-squares solution of a (numerically) rank deficient > system (deficiency=7). If deficiency is large, the results may be > inaccurate. Deficiency may strongly depend on the value of eps. > warnings.warn(message) > .....................Residual: 1.05006926991e-07 > ... > **************************************************************** > WARNING: clapack module is empty > ----------- > See scipy/INSTALL.txt for troubleshooting. > Notes: > * If atlas library is not found by scipy/system_info.py, > then scipy uses flapack instead of clapack. > **************************************************************** > > ..........................F...F..F..FF.F....TESTING CONVERGENCE > > zero should be 1 > > function f2 > > cc.bisect : 1.0000000000001952 > cc.ridder : 1.0000000000004661 > cc.brenth : 0.9999999999999997 > cc.brentq : 0.9999999999999577 > > > > function f3 > > cc.bisect : 1.0000000000001952 > cc.ridder : 1.0000000000000000 > cc.brenth : 1.0000000000000009 > cc.brentq : 1.0000000000000011 > > > > function f4 > > cc.bisect : 1.0000000000001952 > cc.ridder : 1.0000000000001454 > cc.brenth : 0.9999999999993339 > cc.brentq : 0.9999999999993339 > > > > function f5 > > cc.bisect : 1.0000000000001952 > cc.ridder : 1.0000000000004574 > cc.brenth : 0.9999999999991444 > cc.brentq : 0.9999999999991444 > > > > function f6 > > cc.bisect : 1.0000000000001952 > cc.ridder : 0.9999999999998167 > cc.brenth : 0.9999999999997568 > cc.brentq : 1.0000000000011442 > > > > .............................................................................. > .............................................................................. > .............................................................................. 
> .............................................................................. > .............................................................................. > ....................Gegenbauer, a = 0.87957454011 > .............................................................................. > .......F...........................Result may be inaccurate, approximate err = > 3.05848163994e-08 > ...Result may be inaccurate, approximate err = 4.54747350886e-13 > ............................................Result: [ 4.95535778 0.6666553 ] > (exact result = 4.955356249106168, 0.666666666666666) > ...Ties preclude use of exact statistic. > ..Ties preclude use of exact statistic. > ...... > **************************************************************** > WARNING: cblas module is empty > ----------- > See scipy/INSTALL.txt for troubleshooting. > Notes: > * If atlas library is not found by scipy/system_info.py, > then scipy uses fblas instead of cblas. > **************************************************************** > > .............................................................................. > .................................E....E...........E....E...........E....E..... > .......................................................... > ====================================================================== > ERROR: Check whether adding a dense matrix to a sparse matrix works > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/sparse/tests/test_sparse.py", line 148, in check_add_dense > sum1 = self.dat + self.datsp > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/sparse/sparse.py", line 502, in __radd__ > raise NotImplementedError, 'adding a scalar to a CSC matrix is ' \ > NotImplementedError: adding a scalar to a CSC matrix is not yet supported > > ====================================================================== > ERROR: check_matmat (scipy.sparse.sparse.test_sparse.test_csc) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/sparse/tests/test_sparse.py", line 84, in check_matmat > assert_array_almost_equal((a*bsp).todense(), dot(a,b)) > AttributeError: 'scipy.ndarray' object has no attribute 'todense' > > ====================================================================== > ERROR: Check whether adding a dense matrix to a sparse matrix works > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/sparse/tests/test_sparse.py", line 148, in check_add_dense > sum1 = self.dat + self.datsp > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/sparse/sparse.py", line 173, in __radd__ > return csc.__radd__(other) > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/sparse/sparse.py", line 502, in __radd__ > raise NotImplementedError, 'adding a scalar to a CSC matrix is ' \ > NotImplementedError: adding a scalar to a CSC matrix is not yet supported > > ====================================================================== > ERROR: check_matmat (scipy.sparse.sparse.test_sparse.test_csr) > 
---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/sparse/tests/test_sparse.py", line 84, in check_matmat > assert_array_almost_equal((a*bsp).todense(), dot(a,b)) > AttributeError: 'scipy.ndarray' object has no attribute 'todense' > > ====================================================================== > ERROR: Check whether adding a dense matrix to a sparse matrix works > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/sparse/tests/test_sparse.py", line 149, in check_add_dense > assert_array_equal(sum1, 2*self.dat) > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/test/testing.py", line 724, in assert_array_equal > reduced = ravel(equal(x,y)) > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/sparse/sparse.py", line 146, in __cmp__ > raise TypeError, "comparison of sparse matrices not implemented" > TypeError: comparison of sparse matrices not implemented > > ====================================================================== > ERROR: check_matmat (scipy.sparse.sparse.test_sparse.test_dok) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/sparse/tests/test_sparse.py", line 84, in check_matmat > assert_array_almost_equal((a*bsp).todense(), dot(a,b)) > AttributeError: 'scipy.ndarray' object has no attribute 'todense' > > ====================================================================== > ERROR: Check whether adding a dense matrix to a sparse matrix works > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/sparse/tests/test_sparse.py", line 148, in check_add_dense > sum1 = self.dat + self.datsp > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/sparse/sparse.py", line 502, in __radd__ > raise NotImplementedError, 'adding a scalar to a CSC matrix is ' \ > NotImplementedError: adding a scalar to a CSC matrix is not yet supported > > ====================================================================== > ERROR: check_matmat (scipy.sparse.test_sparse.test_csc) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/sparse/tests/test_sparse.py", line 84, in check_matmat > assert_array_almost_equal((a*bsp).todense(), dot(a,b)) > AttributeError: 'scipy.ndarray' object has no attribute 'todense' > > ====================================================================== > ERROR: Check whether adding a dense matrix to a sparse matrix works > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/sparse/tests/test_sparse.py", line 148, in check_add_dense > sum1 = self.dat + self.datsp > File > 
"/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/sparse/sparse.py", line 173, in __radd__ > return csc.__radd__(other) > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/sparse/sparse.py", line 502, in __radd__ > raise NotImplementedError, 'adding a scalar to a CSC matrix is ' \ > NotImplementedError: adding a scalar to a CSC matrix is not yet supported > > ====================================================================== > ERROR: check_matmat (scipy.sparse.test_sparse.test_csr) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/sparse/tests/test_sparse.py", line 84, in check_matmat > assert_array_almost_equal((a*bsp).todense(), dot(a,b)) > AttributeError: 'scipy.ndarray' object has no attribute 'todense' > > ====================================================================== > ERROR: Check whether adding a dense matrix to a sparse matrix works > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/sparse/tests/test_sparse.py", line 149, in check_add_dense > assert_array_equal(sum1, 2*self.dat) > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/test/testing.py", line 724, in assert_array_equal > reduced = ravel(equal(x,y)) > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/sparse/sparse.py", line 146, in __cmp__ > raise TypeError, "comparison of sparse matrices not implemented" > TypeError: comparison of sparse matrices not implemented > > ====================================================================== > ERROR: check_matmat (scipy.sparse.test_sparse.test_dok) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/sparse/tests/test_sparse.py", line 84, in check_matmat > assert_array_almost_equal((a*bsp).todense(), dot(a,b)) > AttributeError: 'scipy.ndarray' object has no attribute 'todense' > > ====================================================================== > FAIL: check_definition > (scipy.fftpack.pseudo_diffs.test_pseudo_diffs.test_diff) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/fftpack/tests/test_pseudo_diffs.py", line 89, in check_definition > assert_array_almost_equal(diff(sin(x),2),direct_diff(sin(x),2)) > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/test/testing.py", line 758, in assert_array_almost_equal > assert cond,\ > AssertionError: > Arrays are not almost equal (mismatch 87.5%): > Array 1: [ -6.3519652e-15 -3.8268343e-01 -7.0710678e-01 > -9.2387953e-01 > -1.0000000e+00 -9.2387953e-01 -7.0710678e-01 -3.82... > Array 2: [ -7.3854931e-15 6.5259351e-15 -2.4942634e-15 > -7.5636114e-17 > 1.4745663e-15 -1.9133685e-15 2.2804788e-16 8.70... 
> > > ====================================================================== > FAIL: check_random_even > (scipy.fftpack.pseudo_diffs.test_pseudo_diffs.test_hilbert) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/fftpack/tests/test_pseudo_diffs.py", line 335, in check_random_even > assert_array_almost_equal(direct_hilbert(direct_ihilbert(f)),f) > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/test/testing.py", line 758, in assert_array_almost_equal > assert cond,\ > AssertionError: > Arrays are not almost equal (mismatch 100.0%): > Array 1: [ 0.+0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j > 0.-0.j 0.-0.j > 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.... > Array 2: [-0.0511398 0.4170731 0.1837093 -0.2308072 -0.2344555 > 0.4470189 > -0.3737561 -0.2798173 -0.3871084 0.2063998 -0.38172... > > > ====================================================================== > FAIL: check_definition > (scipy.fftpack.pseudo_diffs.test_pseudo_diffs.test_shift) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/fftpack/tests/test_pseudo_diffs.py", line 393, in check_definition > assert_array_almost_equal(shift(sin(x),a),direct_shift(sin(x),a)) > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/test/testing.py", line 758, in assert_array_almost_equal > assert cond,\ > AssertionError: > Arrays are not almost equal (mismatch 88.8888888889%): > Array 1: [ 0.0998334 0.4341242 0.7160532 0.9116156 0.9972237 > 0.9625519 > 0.8117822 0.5630995 0.2464987 -0.0998334 -0.43412... > Array 2: [ 0.0998334 0.0938127 0.0764768 0.0499167 0.0173359 > -0.0173359 > -0.0499167 -0.0764768 -0.0938127 -0.0998334 -0.09381... > > > ====================================================================== > FAIL: check_random_even > (scipy.fftpack.pseudo_diffs.test_pseudo_diffs.test_tilbert) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/fftpack/tests/test_pseudo_diffs.py", line 243, in check_random_even > assert_array_almost_equal(direct_tilbert(direct_itilbert(f,h),h),f) > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/test/testing.py", line 758, in assert_array_almost_equal > assert cond,\ > AssertionError: > Arrays are not almost equal (mismatch 100.0%): > Array 1: [ 0.+0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j > 0.-0.j 0.-0.j > 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.... > Array 2: [ 0.1989969 0.4450684 0.1778266 -0.1660265 -0.096574 > -0.2737429 > -0.4183166 0.4751433 0.2629773 0.0808432 -0.26623... 
> > > ====================================================================== > FAIL: check_definition (scipy.fftpack.basic.test_basic.test_fft) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/fftpack/tests/test_basic.py", line 102, in check_definition > assert_array_almost_equal(y,y1) > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/test/testing.py", line 758, in assert_array_almost_equal > assert cond,\ > AssertionError: > Arrays are not almost equal (mismatch 100.0%): > Array 1: [ 20.+0.j 0.+0.j -4.+4.j 0.+0.j -4.+0.j 0.-0.j > -4.-4.j 0.-0.j] > Array 2: [ 20. +3.j -0.7071068+0.7071068j -7. > +4.j > -0.7071068-0.7071068j -4. -3.j 0.707106... > > > ====================================================================== > FAIL: check_definition (scipy.fftpack.basic.test_basic.test_fftn) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/fftpack/tests/test_basic.py", line 429, in check_definition > assert_array_almost_equal(y,direct_dftn(x)) > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/test/testing.py", line 758, in assert_array_almost_equal > assert cond,\ > AssertionError: > Arrays are not almost equal (mismatch 22.2222222222%): > Array 1: [[ 45. +0.j -4.5+2.5980762j -4.5-2.5980762j] > [-13.5+7.7942286j 0. +0.j 0. +0.j ] > [-13.5-7.79... > Array 2: [[ 45. +0.j -4.5+2.5980762j -4.5-2.5980762j] > [-13.5+0.j -0. +0.j -0. -0.j ] > [-13.5+0.j ... > > > ====================================================================== > FAIL: check_definition (scipy.fftpack.basic.test_basic.test_ifft) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/fftpack/tests/test_basic.py", line 187, in check_definition > assert_array_almost_equal(y,y1) > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/test/testing.py", line 758, in assert_array_almost_equal > assert cond,\ > AssertionError: > Arrays are not almost equal (mismatch 100.0%): > Array 1: [ 2.5+0.j 0. -0.j -0.5-0.5j 0. -0.j -0.5+0.j 0. +0.j > -0.5+0.5j > 0. +0.j ] > Array 2: [ 2.5 +0.375j 0.0883883+0.0883883j -0.125 -0.5j > 0.0883883-0.0883883j -0.5 -0.375j -0.0883883-0.0... > > > ====================================================================== > FAIL: check_random_real (scipy.fftpack.basic.test_basic.test_ifft) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/fftpack/tests/test_basic.py", line 221, in check_random_real > assert_array_almost_equal (ifft(fft(x)),x) > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/test/testing.py", line 758, in assert_array_almost_equal > assert cond,\ > AssertionError: > Arrays are not almost equal (mismatch 98.0392156863%): > Array 1: [ 0.4127615 +0.0000000e+00j 0.5061932 +3.2653618e-18j > 0.2375916 -6.4218783e-17j 0.479863 +2.0291934e-17j > 0.851459... 
> Array 2: [ 0.4127615 0.6425217 0.3923591 0.8931939 0.8700022 > 0.6040551 > 0.4261222 0.792201 0.887018 0.8708383 0.35789... > > > ====================================================================== > FAIL: check_definition (scipy.fftpack.basic.test_basic.test_ifftn) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/fftpack/tests/test_basic.py", line 598, in check_definition > assert_array_almost_equal(y,direct_idftn(x)) > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/test/testing.py", line 758, in assert_array_almost_equal > assert cond,\ > AssertionError: > Arrays are not almost equal (mismatch 22.2222222222%): > Array 1: [[ 5. +0.j -0.5-0.2886751j -0.5+0.2886751j] > [-1.5-0.8660254j 0. +0.j 0. +0.j ] > [-1.5+0.8660254j ... > Array 2: [[ 5. +0.j -0.5-0.2886751j -0.5+0.2886751j] > [-1.5+0.j -0. -0.j -0. +0.j ] > [-1.5+0.j ... > > > ====================================================================== > FAIL: check_definition (scipy.fftpack.basic.test_basic.test_irfft) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/fftpack/tests/test_basic.py", line 345, in check_definition > assert_array_almost_equal(y,ifft(x1)) > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/test/testing.py", line 758, in assert_array_almost_equal > assert cond,\ > AssertionError: > Arrays are not almost equal (mismatch 50.0%): > Array 1: [ 2.625 -1.6856602 -0.375 -1.1856602 0.625 > 0.4356602 -0.375 > 0.9356602] > Array 2: [ 2.625+0.j -0.375-0.j -0.375-0.j -0.375-0.j 0.625+0.j > -0.375+0.j > -0.375+0.j -0.375+0.j] > > > ====================================================================== > FAIL: check_round (scipy.special.basic.test_basic.test_round) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/special/tests/test_basic.py", line 1793, in check_round > assert_array_equal(rnd,rndrl) > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages > /scipy/test/testing.py", line 733, in assert_array_equal > assert cond,\ > AssertionError: > Arrays are not equal (mismatch 25.0%): > Array 1: [10 10 10 11] > Array 2: [10 10 11 11] > > > ---------------------------------------------------------------------- > Ran 1210 tests in 7.888s > > FAILED (failures=11, errors=12) > >>>> > > Are they serious errors or can I use SciPy confidently? > > Thanks for your help, > > Adri?n. > > PS: Before this, I've also tried checking the files via svn and I've got > errors with the fftw libraries: > >>>> Macintosh-3:~/Desktop/scipy adrianlopez$ python >>>> ActivePython 2.4.2 Build 248 (ActiveState Corp.) based on >>>> Python 2.4.2 (#1, Oct 3 2005, 09:39:46) >>>> [GCC 3.3 20030304 (Apple Computer, Inc. build 1666)] on darwin >>>> Type "help", "copyright", "credits" or "license" for more information. 
>>>>>>> import scipy >>>> Importing test to scipy >>>> Importing base to scipy >>>> Importing basic to scipy >>>> Importing io to scipy >>>> Importing fftpack to scipy >>>> Failed to import fftpack >>>> Failure linking new module: >>>> /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packag >>>> es/scipy/fftpack/_fftpack.so: Symbol not found: _fprintf$LDBLStub >>>> Referenced from: >>>> /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packag >>>> es/scipy/fftpack/_fftpack.so >>>> Expected in: dynamic lookup >>>> >>>> Importing special to scipy >>>> Importing cluster to scipy >>>> Importing sparse to scipy >>>> Importing utils to scipy >>>> Importing interpolate to scipy >>>> Importing lib to scipy >>>> Importing integrate to scipy >>>> Importing signal to scipy >>>> Failed to import signal >>>> Failure linking new module: >>>> /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packag >>>> es/scipy/fftpack/_fftpack.so: Symbol not found: _fprintf$LDBLStub >>>> Referenced from: >>>> /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packag >>>> es/scipy/fftpack/_fftpack.so >>>> Expected in: dynamic lookup >>>> >>>> Importing optimize to scipy >>>> Importing linalg to scipy >>>> Importing stats to scipy > > I could give a more detailed description about this if someone is interested. > > Thanks again, > > Adri?n. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user From dd55 at cornell.edu Tue Nov 29 20:29:22 2005 From: dd55 at cornell.edu (Darren Dale) Date: Tue, 29 Nov 2005 20:29:22 -0500 Subject: [SciPy-user] recognizing djbfft Message-ID: <200511292029.23404.dd55@cornell.edu> Could someone tell me if svn scipy will recognize djbfft? I have djbfft-0.76 installed, in /usr/lib/ and /usr/include, but scipy does not find it according to the output of system_info.py. Thanks, Darren From fonnesbeck at gmail.com Tue Nov 29 22:36:52 2005 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Tue, 29 Nov 2005 22:36:52 -0500 Subject: [SciPy-user] f2py (scipy_core) array argument issues Message-ID: <723eb6930511291936t15701968g4d1406255417edcc@mail.gmail.com> I have some f2py extensions that take array arguments: SUBROUTINE poisson(x,mu,n,like) c Poisson log-likelihood function cf2py integer dimension(n),intent(in) :: x cf2py real dimension(n),intent(in) :: mu cf2py real intent(out) :: like cf2py integer intent(hide),depend(x) :: n=len(x) However, when I try and pass a scipy_core array to the function, it fails: -> 1200 return _poisson(x, mu) global _poisson = x = array([4, 5, 4, 0, 1, 4, 3, 4, 0, 6, 3, 3, 4, 0, 2, 6, 3, 3, 5, 4, 5, 3, 1, 4, 4, 1, 5, 5, 3, 4, 2, 5, 2, 2, 3, 4, 2, 1, 3, 2, 2, 1, 1, 1, 1, 3, 0, 0, 1, 0]) mu = array([ 1.]) 1201 error: failed in converting 2nd argument `mu' of flib.poisson to C/Fortran array Any ideas? 
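For reference, a minimal call that matches the shapes declared above: the cf2py lines give mu
dimension(n), so it must be the same length as x. This is only a sketch -- flib.poisson is the
wrapper name taken from the traceback, and expanding a scalar mu to a full array is just one way
to satisfy that signature:

from scipy import array, ones
import flib                      # the f2py-built extension from the Fortran above

x = array([4, 5, 4, 0, 1, 4, 3, 4])
mu = ones(len(x)) * 1.0          # one mu per observation, as dimension(n) requires
like = flib.poisson(x, mu)       # n is intent(hide) and taken from len(x)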
-- Chris Fonnesbeck Atlanta, GA From oliphant.travis at ieee.org Tue Nov 29 22:49:54 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 29 Nov 2005 20:49:54 -0700 Subject: [SciPy-user] f2py (scipy_core) array argument issues In-Reply-To: <723eb6930511291936t15701968g4d1406255417edcc@mail.gmail.com> References: <723eb6930511291936t15701968g4d1406255417edcc@mail.gmail.com> Message-ID: <438D2162.7060809@ieee.org> Chris Fonnesbeck wrote: >I have some f2py extensions that take array arguments: > > SUBROUTINE poisson(x,mu,n,like) > >c Poisson log-likelihood function > >cf2py integer dimension(n),intent(in) :: x >cf2py real dimension(n),intent(in) :: mu >cf2py real intent(out) :: like >cf2py integer intent(hide),depend(x) :: n=len(x) > >However, when I try and pass a scipy_core array to the function, it fails: > >-> 1200 return _poisson(x, mu) > global _poisson = > x = array([4, 5, 4, 0, 1, 4, 3, 4, 0, 6, 3, 3, 4, 0, 2, 6, 3, >3, 5, 4, 5, 3, 1, > 4, 4, 1, 5, 5, 3, 4, 2, 5, 2, 2, 3, 4, 2, 1, 3, 2, 2, 1, 1, 1, 1, 3, > 0, 0, 1, 0]) > mu = array([ 1.]) > 1201 > >error: failed in converting 2nd argument `mu' of flib.poisson to C/Fortran array > >Any ideas? >-- > > It looks like mu is the wrong length. From your subroutine description both x and mu should have the same length n. But, mu only has length 1. -Travis From oliphant.travis at ieee.org Mon Nov 28 23:25:01 2005 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 28 Nov 2005 21:25:01 -0700 Subject: [SciPy-user] [SciPy-dev] C API In-Reply-To: References: Message-ID: <438BD81D.3080503@ieee.org> Nariman Habili wrote: >Travis, I have purchased a copy of your "Guide to SciPy: Core System", some >sections seem to be missing, for example section 13.2, "Using the Array >iterator in C". I would appreciate it if you could explain to me how I can >use the array iterator. > > The iterator object makes it easy to walk through an entire ndarray or move to a particular location. It is general purpose and so possibly slower on 2-d images than one of the other macros I'll explain next Suppose array is your PyArrayObject * variable. PyArrayIterObject *iter; iter = (PyArrayIterObject *)PyArray_IterNew(array); while (iter->index < iter->size) { /* iter->dataptr points to the current element of the array -- recast it to the appropriate type if you want */ PyArray_ITER_NEXT(iter); } You can also use PyArray_ITER_GOTO(iter, ind) where ind is an array of intp integers giving the index into the array (available in the latest SVN version of scipy core). If you just want to access a particular i,j location in a 2-d array, then use ptr = ( *)PyArray_GETPTR2(obj, i, j); This could be used in a dual for-loop to process the data in an image: for (i=0; i References: <723eb6930511281806q1ea9b176vfb73a2532afcf44@mail.gmail.com> Message-ID: <438BD5A5.4050101@ieee.org> Chris Fonnesbeck wrote: >Under scipy_core, I have a dictionary that contains what I thought >were floating point numbers, but are actually 'object_arrtype': > >(Pdb) aic_quantiles >Out[3]: >{2.5: 345.38217627260781, > 25: 346.73173987124062, > 50: 347.50728308413125, > 75: 349.61766516420937, > 97.5: 361.87424170229531} >(Pdb) type(aic_quantiles[50]) >Out[3]: > >When I try and add a scalar to any of these, I get a TypeError: > >(Pdb) aic_quantiles[50] + 4 >*** TypeError: unsupported operand type(s) for +: 'object_arrtype' and 'int' >(Pdb) aic_quantiles[50] + 4. >*** TypeError: unsupported operand type(s) for +: 'object_arrtype' and 'float' > >Surely this is not intended behaviour? 
If so, I need some clarification. > > > This should now be fixed in SVN. -Travis From aisaac at american.edu Wed Nov 30 02:21:21 2005 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 30 Nov 2005 02:21:21 -0500 Subject: [SciPy-user] is scipy OK for my needs? In-Reply-To: <723eb6930511280831m273143cavb38fa761290f3534@mail.gmail.com> References: <43872B4D.4020508@unibo.it><438B2F81.90207@unibo.it><723eb6930511280831m273143cavb38fa761290f3534@mail.gmail.com> Message-ID: On Mon, 28 Nov 2005, Chris Fonnesbeck apparently wrote: > I have a package, called PyKF, that does non-linear > (unscented) Kalman filtering. I was thinking of someday > adding it to SciPy, but I have not moved forward with that > yet. Let me know if you would like to try it. Is it up to date here? http://sourceforge.net/projects/pykf/ Cheers, Alan Isaac From wjdandreta at att.net Wed Nov 30 10:51:19 2005 From: wjdandreta at att.net (Bill Dandreta) Date: Wed, 30 Nov 2005 10:51:19 -0500 Subject: [SciPy-user] (OT) Grouping of data points. Message-ID: <438DCA77.8070303@att.net> This is probably not the best place to ask this question but maybe someone can point me in the right direction. I have a set of 2-D data points. Is there an algorithm for selecting subsets of data points that have nearly equal spacing in one dimension. Here is a simple example: 05, 19 09, 12 25, 77 37, 25 43, 46 47, 12 51, 22 65, 18 Points 1,3,5 and 8 are a subset with spacing 20 (25-5), 18 (46-25), and 22(65-43) in the 1st dimension. Another sub-group is 1,3,6 and 8 with spacing 20 (25-5), 22 (47-25), and 18 (65-47). Bill From stephen.walton at csun.edu Wed Nov 30 16:10:03 2005 From: stephen.walton at csun.edu (Stephen Walton) Date: Wed, 30 Nov 2005 13:10:03 -0800 Subject: [SciPy-user] (OT) Grouping of data points. In-Reply-To: <438DCA77.8070303@att.net> References: <438DCA77.8070303@att.net> Message-ID: <438E152B.6050804@csun.edu> Bill Dandreta wrote: >This is probably not the best place to ask this question but maybe >someone can point me in the right direction. > >I have a set of 2-D data points. Is there an algorithm for selecting >subsets of data points that have nearly equal spacing in one dimension. > > You need graph theory. I just did a similar project (in MATLAB, but anyway) having to do with sunspots. I needed to put sunspots into groups by finding sunspots which were "close" in latitude and longitude. The eventual algorithm looked like this: 1. For each point i, compute its distance from point j. (This part of the algorithm is n**2, unfortunately.) Set A[i,j] and A[j,i] to 1 if the distance is less than some value, and 0 if it is greater. A is in fact an adjacency graph in MATLAB-ese, where A[i,j] is nonzero if vertex i of a graph is connected to vertex j by an edge. "Distance" here can have any definition of course; it doesn't have to be Cartesian. 2. Find the components in the resulting graph, defined as those sets of vertices which are in fact connected. The algorithm for doing this is at http://www.ececs.uc.edu/~gpurdy/lec20.html, in section 20.2. I also recommend the graph theory tutorial at http://www.utm.edu/departments/math/graph. Stephen Walton
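A small Python sketch of the two steps Stephen describes: an O(n**2) pairwise pass that records
which points are "close", followed by a connected-components sweep over the resulting adjacency
structure. The Euclidean distance and the threshold of 20 are placeholder choices (any distance
definition works, as Stephen notes), and the data are the points from Bill's message -- though
note this groups by closeness, as in the sunspot case, rather than by equal spacing directly:

def group_points(points, threshold):
    n = len(points)
    # Step 1: build an adjacency list -- mark pairs closer than the threshold.
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            dx = points[i][0] - points[j][0]
            dy = points[i][1] - points[j][1]
            if (dx * dx + dy * dy) ** 0.5 <= threshold:
                adj[i].append(j)
                adj[j].append(i)
    # Step 2: find connected components by depth-first search.
    seen = [False] * n
    groups = []
    for start in range(n):
        if seen[start]:
            continue
        seen[start] = True
        stack, component = [start], []
        while stack:
            v = stack.pop()
            component.append(v)
            for w in adj[v]:
                if not seen[w]:
                    seen[w] = True
                    stack.append(w)
        groups.append(component)
    return groups

pts = [(5, 19), (9, 12), (25, 77), (37, 25), (43, 46), (47, 12), (51, 22), (65, 18)]
print group_points(pts, 20.0)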