From flyingdeckchair at googlemail.com Wed Oct 1 04:59:46 2008 From: flyingdeckchair at googlemail.com (peter websdell) Date: Wed, 1 Oct 2008 09:59:46 +0100 Subject: [SciPy-user] unusual fitting problem In-Reply-To: <200809301240.54583.pgmdevlist@gmail.com> References: <200809301240.54583.pgmdevlist@gmail.com> Message-ID: Hello, Thanks for the reply Pierre. I may be misunderstanding you, but it seems to me that if I interpolate between two values, the relationship will be linear. What I need is for it to appear linear when plotted on a loglog scale. I'm sure I'm not explaining this very well, owing to this not really being my field, so I've attached an image of the kind of curve I'm attempting to fit. Please let me know if I'm just being a dunce. Thanks again, Pete. 2008/9/30 Pierre GM > On Tuesday 30 September 2008 12:36:00 peter websdell wrote: > > Howdy gang, > > I have an unusual fitting problem that has me totally stumped. > > Have you thought about modifying your data ? > you want log(y) = a + b*log(x) ? > Use Y=log(y) and X=log(X), fit a straight-line the standard way and you > should > get your parameters a & b. > That should be easier than trying to fit y=exp(a+b*log(x)) = exp(a) * x**b > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: curve1.JPG Type: image/jpeg Size: 70310 bytes Desc: not available URL: From massimo.sandal at unibo.it Wed Oct 1 05:28:03 2008 From: massimo.sandal at unibo.it (massimo sandal) Date: Wed, 01 Oct 2008 11:28:03 +0200 Subject: [SciPy-user] unusual fitting problem In-Reply-To: References: <200809301240.54583.pgmdevlist@gmail.com> Message-ID: <48E342A3.1080701@unibo.it> peter websdell wrote: > Hello, > > Thanks for the reply Pierre. > > I may be misunderstanding you, but it seems to me that if I interpolate > between two values, the relationship will be linear. What I need is for > it to appear linear when plotted on a loglog scale. If I understand correctly: The relationship will be linear, but between the *logarithms*. By using Y=log(y) and X=log(x) mathematically it's just having log/log paper. You are just linearizing the equations. m. -- Massimo Sandal , Ph.D. University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it web: http://www.biocfarm.unibo.it/samori/people/sandal.html tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... Name: massimo_sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From flyingdeckchair at googlemail.com Wed Oct 1 05:36:01 2008 From: flyingdeckchair at googlemail.com (peter websdell) Date: Wed, 1 Oct 2008 10:36:01 +0100 Subject: [SciPy-user] unusual fitting problem In-Reply-To: <48E342A3.1080701@unibo.it> References: <200809301240.54583.pgmdevlist@gmail.com> <48E342A3.1080701@unibo.it> Message-ID: Yes! Of course. Apologies for the stupid question. Cheers, Pete. 2008/10/1 massimo sandal > peter websdell wrote: > >> Hello, >> >> Thanks for the reply Pierre. >> >> I may be misunderstanding you, but it seems to me that if I interpolate >> between two values, the relationship will be linear. What I need is for it >> to appear linear when plotted on a loglog scale. 
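To make Pierre's recipe concrete: a minimal sketch of the linearized
power-law fit, assuming the data sit in 1-d numpy arrays x and y with
positive entries -- the array names are illustrative, and polyfit is
just one standard way to fit the straight line:

    import numpy as np

    # log(y) = a + b*log(x): fit a straight line in log-log space
    b, a = np.polyfit(np.log(x), np.log(y), 1)
    # back-transform: y = exp(a) * x**b
    print 'prefactor =', np.exp(a), 'exponent =', b

np.polyfit returns the slope first, so b is the power-law exponent and
exp(a) recovers the prefactor.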
>> > > If I understand correctly: The relationship will be linear, but between the > *logarithms*. > > By using Y=log(y) and X=log(x) mathematically it's just having log/log > paper. You are just linearizing the equations. > > m. > -- > Massimo Sandal , Ph.D. > University of Bologna > Department of Biochemistry "G.Moruzzi" > > snail mail: > Via Irnerio 48, 40126 Bologna, Italy > > email: > massimo.sandal at unibo.it > > web: > http://www.biocfarm.unibo.it/samori/people/sandal.html > > tel: +39-051-2094388 > fax: +39-051-2094387 > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Wed Oct 1 06:06:48 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Wed, 1 Oct 2008 12:06:48 +0200 Subject: [SciPy-user] sparse matrices -- slicing and fancy indexing for assignment In-Reply-To: References: <4FBBE3B4-041E-4593-8ADA-1318BF43A853@mit.edu> Message-ID: <9457e7c80810010306j433fcc2cyb5f78b45bee9e150@mail.gmail.com> 2008/9/30 Nathan Bell : > This is probably all you'll see in 0.7.0 > > Note that assigning to a CSR/CSC matrix is in general a bad idea. Any > change to the sparsity structure of these formats requires O(nnz) > operations, which basically means reconstructing the matrix from > scratch. > > OTOH MATLAB lets you do it, so we will probably support it someday > too. If you submit a patch soon we might be able to integrate it by > 0.7.0. I was about to document this, but Nathan has already done the job: http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/sparse/info.py Regards St?fan From philbinj at gmail.com Wed Oct 1 08:59:44 2008 From: philbinj at gmail.com (James Philbin) Date: Wed, 1 Oct 2008 13:59:44 +0100 Subject: [SciPy-user] r4763 won't compile, missing lsodar.pyf Message-ID: <2b1c8c4f0810010559v6defd4bbsd510798cfd47a55c@mail.gmail.com> Hi, "python setup.py build" errors with: "target build/src.linux-x86_64-2.5/lsodarmodule.c does not exist: Assuming lsodarmodule.c was generated with "build_src --inplace" command." And indeed, scipy/integrate/setup.py has: "config.add_extension('lsodar', sources=['lsodar.pyf'], libraries=libs, **newblas)" But scipy/integrate doesn't contain lsodar.pyf? Thanks, James From robince at gmail.com Wed Oct 1 09:50:09 2008 From: robince at gmail.com (Robin) Date: Wed, 1 Oct 2008 14:50:09 +0100 Subject: [SciPy-user] broadcasting elementwise on sparse matrix Message-ID: Hi, Is there any way to get broadcasted element wise operations on sparse matrices? I would like to do something like A * x if A was (n,n) array and x was a (n,) array. I saw the sparse matrices have a .multiply method, but it doesn't seem to broadcast in the same way as the standard multiplication, ie As.multiply(x) gives an "inconsistent shapes" error. The best I have is As.multiply(tile(x,(n,1))) although this has the memory overhead that is normally avoided with broadcasting. Is there an alternative? Thanks Robin From travis at enthought.com Wed Oct 1 10:36:54 2008 From: travis at enthought.com (Travis Vaught) Date: Wed, 1 Oct 2008 09:36:54 -0500 Subject: [SciPy-user] Texas Python Regional Unconference Reminders Message-ID: <0107962E-D762-497B-BCEC-24CBD78B381B@enthought.com> Greetings, The Texas Python Regional Unconference is coming up this weekend (October 4-5) and I wanted to send out some more details of the meeting. 
The web page for the meeting is here: http://www.scipy.org/TXUncon2008 The meeting is _absolutely free_, so please add yourself to the Attendees page if you're able to make it. Also, if you're planning to attend, please send me the following information (to travis at enthought.com ) so I can request wireless access for you during the meeting: - Full Name - Phone or email - Address - Affiliation There are still opportunities to present your pet projects at the meeting, so feel free to sign up on the presentation schedule here: http://www.scipy.org/TXUncon2008Schedule For those who are in town Friday evening, we're planning to get together for a casual dinner in downtown Austin that night. We'll meet at Enthought offices (http://www.enthought.com/contact/map-directions.php ) and walk to a casual restaurant nearby. Show up as early as 5:30pm and you can hang out and tour the Enthought offices--we'll head out to eat at 7:00pm sharp. Best, Travis From stefan at sun.ac.za Wed Oct 1 10:58:38 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Wed, 1 Oct 2008 16:58:38 +0200 Subject: [SciPy-user] broadcasting elementwise on sparse matrix In-Reply-To: References: Message-ID: <9457e7c80810010758w4973a9e9v2fe1f238b5d85ccd@mail.gmail.com> Hi Robin 2008/10/1 Robin : > Is there any way to get broadcasted element wise operations on sparse matrices? > > I would like to do something like A * x if A was (n,n) array and x was > a (n,) array. I saw the sparse matrices have a .multiply method, but > it doesn't seem to broadcast in the same way as the standard > multiplication, ie As.multiply(x) gives an "inconsistent shapes" > error. > > The best I have is As.multiply(tile(x,(n,1))) although this has the memory > overhead that is normally avoided with broadcasting. Is there an alternative? You can broadcast over the indices, and then use them to individually address the elements of the sparse matrix. Not super-fast, but better than nothing: In [4]: a, b = np.broadcast_arrays([1,2,3], [[1,2,3], [4,5,6]]) In [5]: a Out[5]: array([[1, 2, 3], [1, 2, 3]]) In [6]: b Out[6]: array([[1, 2, 3], [4, 5, 6]]) Cheers St?fan From wnbell at gmail.com Wed Oct 1 11:52:32 2008 From: wnbell at gmail.com (Nathan Bell) Date: Wed, 1 Oct 2008 11:52:32 -0400 Subject: [SciPy-user] broadcasting elementwise on sparse matrix In-Reply-To: References: Message-ID: On Wed, Oct 1, 2008 at 9:50 AM, Robin wrote: > > Is there any way to get broadcasted element wise operations on sparse matrices? > > I would like to do something like A * x if A was (n,n) array and x was > a (n,) array. I saw the sparse matrices have a .multiply method, but > it doesn't seem to broadcast in the same way as the standard > multiplication, ie As.multiply(x) gives an "inconsistent shapes" > error. > > The best I have is As.multiply(tile(x,(n,1))) although this has the memory > overhead that is normally avoided with broadcasting. Is there an alternative? > That particular operation is just scaling of the columns, right? 
If so, you can use:

>>> As * spdiags([x],[0], n,n,)

If As is already a CSR matrix, then you can cheat a little and do the
operation in place:

>>> As.data *= x[As.indices]

--
Nathan Bell wnbell at gmail.com
http://graphics.cs.uiuc.edu/~wnbell/

From robince at gmail.com  Wed Oct 1 12:11:12 2008
From: robince at gmail.com (Robin)
Date: Wed, 1 Oct 2008 17:11:12 +0100
Subject: [SciPy-user] broadcasting elementwise on sparse matrix
In-Reply-To:
References:
Message-ID:

On Wed, Oct 1, 2008 at 4:52 PM, Nathan Bell wrote:
> On Wed, Oct 1, 2008 at 9:50 AM, Robin wrote:
> That particular operation is just scaling of the columns, right?  If
> so, you can use:
>>>> As * spdiags([x],[0], n,n,)

Yes - I realised that and was using spdiags, which is a lot quicker.

> If As is already a CSR matrix, then you can cheat a little and do the
> operation in place:
>>>> As.data *= x[As.indices]

That's quite tricky... :)
It is a CSC matrix - not sure why, but I am saving and loading it from
Matlab files, and when I first started (a year or so ago) it seemed
CSC was the most reliable (or at least I always got a CSC from .mat
files).
Presumably the same thing will work though.

Thanks very much,

Robin

From wnbell at gmail.com  Wed Oct 1 12:17:36 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Wed, 1 Oct 2008 12:17:36 -0400
Subject: [SciPy-user] broadcasting elementwise on sparse matrix
In-Reply-To:
References:
Message-ID:

On Wed, Oct 1, 2008 at 12:11 PM, Robin wrote:
>
> That's quite tricky... :)
> It is a CSC matrix - not sure why, but I am saving and loading it from
> Matlab files, and when I first started (a year or so ago) it seemed
> CSC was the most reliable (or at least I always got a CSC from .mat
> files).
> Presumably the same thing will work though.
>

Yep, except it would scale rows.  I'd stick with spdiags() for safety.

--
Nathan Bell wnbell at gmail.com
http://graphics.cs.uiuc.edu/~wnbell/

From robince at gmail.com  Wed Oct 1 12:20:19 2008
From: robince at gmail.com (Robin)
Date: Wed, 1 Oct 2008 17:20:19 +0100
Subject: [SciPy-user] dot behaviour with sparse matrix
Message-ID:

Not sure if this is a bug or not but it was certainly unexpected
behaviour for me.

I found that dot(dense,dense) works as expected
(i.e. dimensions (8,255),(255,8) -> (8,8))
and that dot(sparse,sparse) also works that way, producing a sparse result.

But dot(dense,sparse) gives an output with the same dimensions as the
dense input. Not sure why this is happening and I found it a bit
confusing. Is this a bug or intended behaviour?
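As Nathan's reply below spells out, the sparse formats support
sparse * sparse, sparse * dense and dense * sparse through the standard
infix operator, while numpy's dot() does not understand sparse
operands. A minimal sketch, with illustrative names and shapes:

    import numpy as np
    from scipy import sparse

    A = np.random.rand(3, 5)
    B = sparse.csc_matrix(np.random.rand(5, 3))

    # the infix operator dispatches to the sparse matrix-multiply code
    C = A * B    # dense * sparse -> matrix product, shape (3, 3)
    D = B * A    # sparse * dense -> matrix product, shape (5, 5)
    print C.shape, D.shape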
Actually I was just going through an example and I notice that dot(csc_matrix, dense_matrix) raises a NotImplemented type - so perhaps the other cases should also: With A, B arrays: In [29]: A.shape Out[29]: (3, 5) In [30]: B.shape Out[30]: (5, 3) In [31]: dot(A,B).shape Out[31]: (3, 3) In [32]: dot(sparse.csc_matrix(A),sparse.csc_matrix(B)).shape Out[32]: (3, 3) In [33]: dot(A,sparse.csc_matrix(B)).shape Out[33]: (3, 5) In [34]: dot(sparse.csc_matrix(A),B).shape Out[34]: (5, 3) with A,B dense matrices: In [42]: A.shape Out[42]: (3, 5) In [43]: B.shape Out[43]: (5, 3) In [44]: dot(A,B).shape Out[44]: (3, 3) In [45]: dot(sparse.csc_matrix(A),sparse.csc_matrix(B)).shape Out[45]: (3, 3) In [46]: dot(A,sparse.csc_matrix(B)).shape Out[46]: (3, 5) In [47]: dot(sparse.csc_matrix(A),B).shape --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) /Users/robince/ in () AttributeError: 'NotImplementedType' object has no attribute 'shape' Cheers Robin From wnbell at gmail.com Wed Oct 1 12:29:59 2008 From: wnbell at gmail.com (Nathan Bell) Date: Wed, 1 Oct 2008 12:29:59 -0400 Subject: [SciPy-user] dot behaviour with sparse matrix In-Reply-To: References: Message-ID: On Wed, Oct 1, 2008 at 12:20 PM, Robin wrote: > Not sure if this is a bug or not but it was certainly unexpected > behaviour for me. > > I found that dot(dense,dense) works as expected > (ie dimensions (8,255),(255,8) -> (8,8)) > and that dot(sparse,sparse) also works that way, producing a sparse result. > > But dot(dense,sparse) gives an output with the same dimensions as the > dense input. Not sure why this is happening and I found it a bit > confusing. Is this a bug or inteded behaviour? > > Actually I was just going through an example and I notice that > dot(csc_matrix, dense_matrix) raises a NotImplemented type - so > perhaps the other cases should also: > The sparse formats do support sparse * sparse, sparse * dense, and dense * sparse using the standard infix operator. I'm surprised that dot() works at all with sparse arguments. If you can get to the bottom of this I'd be happy to fix it. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From robert.kern at gmail.com Wed Oct 1 15:39:04 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 1 Oct 2008 14:39:04 -0500 Subject: [SciPy-user] r4763 won't compile, missing lsodar.pyf In-Reply-To: <2b1c8c4f0810010559v6defd4bbsd510798cfd47a55c@mail.gmail.com> References: <2b1c8c4f0810010559v6defd4bbsd510798cfd47a55c@mail.gmail.com> Message-ID: <3d375d730810011239l10cbeb1cr2f123685b1e1e304@mail.gmail.com> On Wed, Oct 1, 2008 at 07:59, James Philbin wrote: > Hi, > > "python setup.py build" > errors with: > "target build/src.linux-x86_64-2.5/lsodarmodule.c does not exist: > Assuming lsodarmodule.c was generated with "build_src --inplace" command." > > And indeed, scipy/integrate/setup.py has: > "config.add_extension('lsodar', > sources=['lsodar.pyf'], > libraries=libs, > **newblas)" > > But scipy/integrate doesn't contain lsodar.pyf? It was a mistaken commit. Fixed in r4764. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco

From wesmckinn at gmail.com  Thu Oct 2 14:18:45 2008
From: wesmckinn at gmail.com (Wes McKinney)
Date: Thu, 2 Oct 2008 14:18:45 -0400
Subject: [SciPy-user] Bug in scipy.stats skew, kurtosis
Message-ID: <6c476c8a0810021118w654cf45s50194b22e7c1f2cb@mail.gmail.com>

Right now scipy.stats.skew (normal) and scipy.stats.kurtosis (fisher =
False) applied to a 1-dimensional array return an ndarray of the single
value with no dimensions.

In [13]: scipy.stats.skew([1,2,3,4,5])
Out[13]: array(0.0)

I am using Scipy 0.6, but I looked at the latest SVN and this behavior has
not changed. It's being caused by these lines:

m2 = moment(a, 2, axis)
m3 = moment(a, 3, axis)
zero = (m2 == 0)
vals = np.where(zero, 0, m3 / m2**1.5)

when the condition is not an array in numpy.where, the result is this 0-dim
array. Not sure if this is a SciPy issue or a numpy issue. If it's a SciPy
issue I can put in a ticket for this. I am using a workaround for now (check
if the result of the calc is an array and has no shape, if so cast to float)
but this seems like a bug.

Thanks,
Wes

From josef.pktd at gmail.com  Thu Oct 2 16:37:20 2008
From: josef.pktd at gmail.com (joep)
Date: Thu, 2 Oct 2008 13:37:20 -0700 (PDT)
Subject: [SciPy-user] Bug in scipy.stats skew, kurtosis
In-Reply-To: <6c476c8a0810021118w654cf45s50194b22e7c1f2cb@mail.gmail.com>
References: <6c476c8a0810021118w654cf45s50194b22e7c1f2cb@mail.gmail.com>
Message-ID: <3d5c2fc1-c1f8-4514-8680-962344e258cc@25g2000hsk.googlegroups.com>

On Oct 2, 2:18 pm, "Wes McKinney" wrote:
> Right now scipy.stats.skew (normal) and scipy.stats.kurtosis (fisher =
> False) applied to a 1-dimensional array return an ndarray of the single
> value with no dimensions.
>
> In [13]: scipy.stats.skew([1,2,3,4,5])
> Out[13]: array(0.0)
>
> I am using Scipy 0.6, but I looked at the latest SVN and this behavior has
> not changed. It's being caused by these lines:
>
> m2 = moment(a, 2, axis)
> m3 = moment(a, 3, axis)
> zero = (m2 == 0)
> vals = np.where(zero, 0, m3 / m2**1.5)
>
> when the condition is not an array in numpy.where, the result is this 0-dim
> array. Not sure if this is a SciPy issue or a numpy issue. If it's a SciPy
> issue I can put in a ticket for this. I am using a workaround for now (check
> if the result of the calc is an array and has no shape, if so cast to float)
> but this seems like a bug.
>
> Thanks,
> Wes
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

In Changeset 4752, this has been changed for pdf, cdf, and so on,
however other methods such as stats still have the same property:

>>> stats.norm.cdf(0.4)
0.65542174161032418
>>> stats.norm.stats(moments='s')
array(0.0)
>>> stats.norm.stats(moments='s')[()]
0.0
>>> stats.norm.stats(moments='s').shape
()

For consistency, it might be better to apply this change to all
methods in scipy.stats that return 0-dim array.
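In the meantime, a minimal sketch of the guard Wes describes -- the
helper name is made up for illustration:

    import numpy as np
    from scipy import stats

    def as_scalar(result):
        # np.where() with scalar inputs returns a 0-dim ndarray;
        # unwrap it with the same [()] indexing used above
        result = np.asarray(result)
        if result.ndim == 0:
            return result[()]
        return result

    print as_scalar(stats.skew([1, 2, 3, 4, 5]))   # 0.0, not array(0.0)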
Josef From nwagner at iam.uni-stuttgart.de Fri Oct 3 02:35:16 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 03 Oct 2008 08:35:16 +0200 Subject: [SciPy-user] Python Optimization Modeling Objects (Pyomo) Message-ID: FWIW, http://www.optimization-online.org/DB_HTML/2008/09/2095.html Nils From dmitrey.kroshko at scipy.org Fri Oct 3 03:15:21 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Fri, 03 Oct 2008 10:15:21 +0300 Subject: [SciPy-user] Python Optimization Modeling Objects (Pyomo) In-Reply-To: References: Message-ID: <48E5C689.5020508@scipy.org> Thank you Nils, I was wondering why my daily visitors number this week jumped from ~ 50 to ~ 80-90, and now I know the answer: in the article they mentioned "However, Coopr Opt is not as mature as the OpenOpt package" :) Unfortunately, some days ago I committed a nasty bug and now I'm hunting for the one because my latest svn doesn't work properly with NL problems. D. Nils Wagner wrote: > FWIW, > > http://www.optimization-online.org/DB_HTML/2008/09/2095.html > > Nils > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > From nwagner at iam.uni-stuttgart.de Fri Oct 3 04:40:59 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 03 Oct 2008 10:40:59 +0200 Subject: [SciPy-user] Python Optimization Modeling Objects (Pyomo) In-Reply-To: <48E5C689.5020508@scipy.org> References: <48E5C689.5020508@scipy.org> Message-ID: On Fri, 03 Oct 2008 10:15:21 +0300 dmitrey wrote: > Thank you Nils, I was wondering why my daily visitors >number this week > jumped from ~ 50 to ~ 80-90, and now I know the answer: >in the article > they mentioned > "However, Coopr Opt is not as mature as the OpenOpt >package" > :) > > Unfortunately, some days ago I committed a nasty bug and >now I'm hunting > for the one because my latest svn doesn't work properly >with NL problems. > > D. > > Nils Wagner wrote: >> FWIW, >> >> http://www.optimization-online.org/DB_HTML/2008/09/2095.html >> >> Nils >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> >> >> > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user Dmitrey, You might be interested in http://www.optimization-online.org/DB_HTML/2008/09/2101.html (recent paper on pswarm) as well. Cheers, Nils From jeremy at jeremysanders.net Fri Oct 3 04:49:43 2008 From: jeremy at jeremysanders.net (Jeremy Sanders) Date: Fri, 03 Oct 2008 09:49:43 +0100 Subject: [SciPy-user] ANN: Veusz 1.1 Message-ID: Veusz 1.1 --------- Velvet Ember Under Sky Zenith ----------------------------- http://home.gna.org/veusz/ Veusz is Copyright (C) 2003-2008 Jeremy Sanders Licenced under the GPL (version 2 or greater). Veusz is a scientific plotting package written in Python, using PyQt4 for display and user-interfaces, and numpy for handling the numeric data. Veusz is designed to produce publication-ready Postscript/PDF output. The user interface aims to be simple, consistent and powerful. Veusz provides a GUI, command line, embedding and scripting interface (based on Python) to its plotting facilities. It also allows for manipulation and editing of datasets. 
Feature changes from 1.0: * Axes autoscale when plotting functions * Labels can be dragged around on plots * More marker symbols * SVG export of plots * The point plotting and axis range code has been rewritten. * Includes quite a few minor bugfixes Features of package: * X-Y plots (with errorbars) * Line and function plots * Contour plots * Images (with colour mappings and colorbars) * Stepped plots (for histograms) * Fitting functions to data * Stacked plots and arrays of plots * Plot keys * Plot labels * LaTeX-like formatting for text * EPS/PDF/PNG export * Scripting interface * Dataset creation/manipulation * Embed Veusz within other programs * Text, CSV and FITS importing Requirements: Python (2.3 or greater required) http://www.python.org/ Qt >= 4.3 (free edition) http://www.trolltech.com/products/qt/ PyQt >= 4.3 (SIP is required to be installed first) http://www.riverbankcomputing.co.uk/pyqt/ http://www.riverbankcomputing.co.uk/sip/ numpy >= 1.0 http://numpy.scipy.org/ Microsoft Core Fonts (recommended for nice output) http://corefonts.sourceforge.net/ PyFITS >= 1.1 (optional for FITS import) http://www.stsci.edu/resources/software_hardware/pyfits For documentation on using Veusz, see the "Documents" directory. The manual is in pdf, html and text format (generated from docbook). Issues: * Can be very slow to plot large datasets if antialiasing is enabled. Right click on graph and disable antialias to speed up output. * The embedding interface appears to crash on exiting. If you enjoy using Veusz, I would love to hear from you. Please join the mailing lists at https://gna.org/mail/?group=veusz to discuss new features or if you'd like to contribute code. The latest code can always be found in the SVN repository. Jeremy Sanders From jrs65 at cam.ac.uk Fri Oct 3 13:06:23 2008 From: jrs65 at cam.ac.uk (Richard Shaw) Date: Fri, 03 Oct 2008 18:06:23 +0100 Subject: [SciPy-user] Python 2.6 compilation error Message-ID: <48E6510F.8060808@cam.ac.uk> Hello, I'm trying to compile Scipy 0.6 for Python 2.6 and I'm suffering from a compilation error in scipy/sparse/linalg/dsolve/SuperLU/SRC/scomplex.h with conflicting types for ?_Py_c_abs?. In fact the same error as this bug: http://scipy.org/scipy/scipy/ticket/735 Is there a workaround I could use in the meantime, rather than downgrading to Python 2.5? Anything would be appreciated. Thanks, Richard From dhsu2 at u.washington.edu Fri Oct 3 17:27:53 2008 From: dhsu2 at u.washington.edu (David Hsu) Date: Fri, 3 Oct 2008 21:27:53 +0000 (UTC) Subject: [SciPy-user] =?utf-8?q?Installation_from_source_on_OS_X=3A_=27Non?= =?utf-8?q?eType=27=09object_has_no_attribute_=27link=5Fshared=5Fob?= =?utf-8?q?ject=27?= References: <4B321E75-C644-48DD-96ED-71AEF9FE9CB3@astro.ox.ac.uk> <3d375d730809231039t239263e1wcf67da11c650c9b9@mail.gmail.com> <3d375d730809231153p43290a4bu10cdf119908c8830@mail.gmail.com> <20080926225534.GA6402@astro.ox.ac.uk> <3d375d730809261614j7696ced5je18889036ea67fa1@mail.gmail.com> <5B3D66A5-B141-4DE7-99B3-A53F5EEE00E8@astro.ox.ac.uk> Message-ID: Michael Williams astro.ox.ac.uk> writes: > > > On 27 Sep 2008, at 00:14, Robert Kern wrote: > > I very much recommend against using the binaries from HPC. They > > release binaries for buggy bleeding-edge versions of gfortran, and > > don't keep previous versions around. > > Thanks for the heads-up. I'm not experienced with Fortran. 
> > > I have had *much* more luck with > > the binaries over here: > > > > http://r.research.att.com/tools/ > > > > Since you don't have admin privileges, you need to do a little command > > line work instead of just being able to use the installer. Mount > > gfortran-4.2.3.dmg. At the terminal: > > That procedure to install the compiler worked fine, and scipy itself > seems to have built cleanly using it. > > Thanks very much! > Hi, I'm having the exact same problem and error message, but for some reason I can't get past this step. I've tried the suggestions here: http://www.scipy.org/Installing_SciPy/Mac_OS_X At the step where it requires building the fftw library, I get: DRIVE:fftw-3.1.2 dhsu$ make make all-recursive Making all in support make[2]: *** No rule to make target `all'. Stop. make[1]: *** [all-recursive] Error 1 make: *** [all] Error 2 and "sudo make install" has the same result. Then, when I go to build SciPy itself, I get almost the exact same errors as in the thread above. What am I doing wrong? Thanks, David Specifications: MacBook Pro Core Duo Python 2.5.1 Numpy 1.0.4 SciPy 0.6.0 GNU Fortran 4.2.3 (as suggested) From oldcanine at yahoo.com Sat Oct 4 07:56:04 2008 From: oldcanine at yahoo.com (Barry Olddog) Date: Sat, 4 Oct 2008 04:56:04 -0700 (PDT) Subject: [SciPy-user] undefined symbols when trying to import packages with linalg Message-ID: <127988.64002.qm@web59605.mail.ac4.yahoo.com> I've been struggling with building scipy into python2.5 for a couple days on a new Centos 5.2, 64-bit machine. I finally got everything built. At first I installed the blas and lapack packages, and then numpy, which seems ok, and finally scipy. The first problem was getting scipy to find the blas and lapack libraries, and then various build errors. I removed the packages, and built Atlas myself with lapack. Now scipy builds, and importing just scipy alone is ok, but it balks at importing some of the packages, including stats, linalg, optimize. Here's the complete import error: >>> import scipy.stats Traceback (most recent call last): File "", line 1, in File "/usr/local/lib/python2.5/site-packages/scipy/stats/__init__.py", line 7, in from stats import * File "/usr/local/lib/python2.5/site-packages/scipy/stats/stats.py", line 192, in import scipy.linalg as linalg File "/usr/local/lib/python2.5/site-packages/scipy/linalg/__init__.py", line 8 , in from basic import * File "/usr/local/lib/python2.5/site-packages/scipy/linalg/basic.py", line 17, in from lapack import get_lapack_funcs File "/usr/local/lib/python2.5/site-packages/scipy/linalg/lapack.py", line 17, in from scipy.linalg import flapack ImportError: /usr/local/lib/python2.5/site-packages/scipy/linalg/flapack.so: und efined symbol: _gfortran_st_write_done I had thought that I was using g77 consistently. Earlier, I was able to find references to conflicts betwen g77 and gfortran, which is why I built my own atlas, etc. So the _gfortran_st_write_done error seems odd. Any suggestions? I just did this on a similar machine without any problem whatsoever. I'm not sure but I maybe installed the atlas rpm on the earlier one. Could that have been what did it? 
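One quick check before rebuilding is to print the configuration that
numpy and scipy recorded at build time; it lists the BLAS/LAPACK
libraries and directories that were actually picked up:

    import numpy, scipy
    numpy.__config__.show()
    scipy.__config__.show()

Robert Kern's reply below shows how to chase the runtime side of the
linkage with ldd.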
Thanks From josef.pktd at gmail.com Sat Oct 4 09:51:12 2008 From: josef.pktd at gmail.com (joep) Date: Sat, 4 Oct 2008 06:51:12 -0700 (PDT) Subject: [SciPy-user] call to vectorize works only after several calls Message-ID: I get some strange behavior, when vectorized function is called without valid arguments: The first time I call it, it throws an exception, if I call it again, after I called a with a non-empyty argument list, then it works. I don't see why the underlying state should have changed. Any clues? This makes testing and debugging pretty difficult Josef ------------- file: test_ppf_indexerror.py ---------------------- from scipy import stats print '\nfirst try' try: print stats.foldcauchy.ppf(0,0.2) except Exception,e: print 'error occured on first try' print e print '\ncalling with vec arguments' print stats.foldcauchy.ppf([-1,0.0, 0.1,0.9,1.0,2],0.2) print '\nsecond try' print stats.foldcauchy.ppf(0,0.2) print ' second try worked' ------------------------------------------------ when called from command line, this produces (with trunk and scipy 0.6.0, numpy 1.2rc2): {{{ >python test_ppf_indexerror.py first try error occured on first try invalid index calling with vec arguments [ NaN 0. 0.16455889 6.31992596 Inf NaN] second try 0.0 second try worked }}} Here is the traceback if I call it just once on the command line: {{{ python -c "from scipy import stats;print stats.foldcauchy.ppf(0,0.2)" Traceback (most recent call last): File "", line 1, in File "C:\Josef\_progs\virtualpy25\envscipy\lib\site-packages\scipy \stats\distr ibutions.py", line 587, in ppf place(output,cond,self._ppf(*goodargs)*scale + loc) File "C:\Josef\_progs\virtualpy25\envscipy\lib\site-packages\scipy \stats\distr ibutions.py", line 399, in _ppf return self.vecfunc(q,*args) File "C:\Programs\Python25\Lib\site-packages\numpy\lib \function_base.py", line 1648, in __call__ newargs.append(asarray(arg).flat[0]) IndexError: invalid index }}} From wnbell at gmail.com Sat Oct 4 18:39:08 2008 From: wnbell at gmail.com (Nathan Bell) Date: Sat, 4 Oct 2008 18:39:08 -0400 Subject: [SciPy-user] Python 2.6 compilation error In-Reply-To: <48E6510F.8060808@cam.ac.uk> References: <48E6510F.8060808@cam.ac.uk> Message-ID: On Fri, Oct 3, 2008 at 1:06 PM, Richard Shaw wrote: > > I'm trying to compile Scipy 0.6 for Python 2.6 and I'm suffering from a > compilation error in > scipy/sparse/linalg/dsolve/SuperLU/SRC/scomplex.h with conflicting types > for '_Py_c_abs'. In fact the same error as this bug: > > http://scipy.org/scipy/scipy/ticket/735 > > Is there a workaround I could use in the meantime, rather than > downgrading to Python 2.5? Anything would be appreciated. > Hi Richard, thanks for the report. I've committed a workaround to scipy r4767 in SVN so this issue doesn't appear in the coming SciPy 0.7 release. I'd ask you to try using this version to see if the bug is indeed fixed. If you'd like to apply the same workaround to 0.6 you can use the following script: sparse/linalg/dsolve/SuperLU/SRC$ for f in *; do sed 's/c_abs/slu_c_abs/g' < $f > temp; mv temp $f; done; It simply renames c_abs() and c_abs1() to slu_c_abs() and slu_c_abs1() respectively. We really need to update our SuperLU code to a more recent release. 
-- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From robert.kern at gmail.com Sat Oct 4 19:34:38 2008 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 4 Oct 2008 18:34:38 -0500 Subject: [SciPy-user] undefined symbols when trying to import packages with linalg In-Reply-To: <127988.64002.qm@web59605.mail.ac4.yahoo.com> References: <127988.64002.qm@web59605.mail.ac4.yahoo.com> Message-ID: <3d375d730810041634o282334dao7fe8dac3a972ad8@mail.gmail.com> On Sat, Oct 4, 2008 at 06:56, Barry Olddog wrote: > I've been struggling with building scipy into python2.5 for a couple > days on a new Centos 5.2, 64-bit machine. I finally got everything > built. At first I installed the blas and lapack packages, and then > numpy, which seems ok, and finally scipy. > > The first problem > was getting scipy to find the blas and lapack libraries, and then > various build errors. I removed the packages, and built Atlas myself > with lapack. Now scipy builds, and importing just scipy alone is ok, > but it balks at importing some of the packages, including stats, > linalg, optimize. Here's the complete import error: > >>>> import scipy.stats > Traceback (most recent call last): > File "", line 1, in > File "/usr/local/lib/python2.5/site-packages/scipy/stats/__init__.py", line 7, > in > from stats import * > File "/usr/local/lib/python2.5/site-packages/scipy/stats/stats.py", line 192, > in > import scipy.linalg as linalg > File "/usr/local/lib/python2.5/site-packages/scipy/linalg/__init__.py", line 8 > , in > from basic import * > File "/usr/local/lib/python2.5/site-packages/scipy/linalg/basic.py", line 17, > in > from lapack import get_lapack_funcs > File "/usr/local/lib/python2.5/site-packages/scipy/linalg/lapack.py", line 17, > in > from scipy.linalg import flapack > ImportError: /usr/local/lib/python2.5/site-packages/scipy/linalg/flapack.so: und > efined symbol: _gfortran_st_write_done > > I > had thought that I was using g77 consistently. Earlier, I was able to > find references to conflicts betwen g77 and gfortran, which is why I > built my own atlas, etc. So the _gfortran_st_write_done error seems odd. > > Any > suggestions? Use ldd on scipy/linalg/flapack.so to see what shared libraries it is trying to link with. It should also show you exactly which .so files it manages to find for the given library. You will probably see that scipy/linalg/flapack.so is looking for libgfortran but not finding it. If not, then keep using ldd on the found shared libraries until you find the culprit. That is the library that was accidentally built with gfortran. If it's scipy/linalg/flapack.so, then look over your build log again (and rebuild if necessary). Look near the beginning where numpy.distutils is telling you what Fortran compilers it is finding. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco

From cournape at gmail.com  Sun Oct 5 01:59:08 2008
From: cournape at gmail.com (David Cournapeau)
Date: Sun, 5 Oct 2008 14:59:08 +0900
Subject: [SciPy-user] Installation from source on OS X: 'NoneType'
	object has no attribute 'link_shared_object'
In-Reply-To:
References: <4B321E75-C644-48DD-96ED-71AEF9FE9CB3@astro.ox.ac.uk>
	<3d375d730809231039t239263e1wcf67da11c650c9b9@mail.gmail.com>
	<3d375d730809231153p43290a4bu10cdf119908c8830@mail.gmail.com>
	<20080926225534.GA6402@astro.ox.ac.uk>
	<3d375d730809261614j7696ced5je18889036ea67fa1@mail.gmail.com>
	<5B3D66A5-B141-4DE7-99B3-A53F5EEE00E8@astro.ox.ac.uk>
Message-ID: <5b8d13220810042259g69c87cf9s28a48fcea18a0597@mail.gmail.com>

On Sat, Oct 4, 2008 at 6:27 AM, David Hsu wrote:
>
> Hi, I'm having the exact same problem and error message, but for some reason I
> can't get past this step. I've tried the suggestions here:
>
> http://www.scipy.org/Installing_SciPy/Mac_OS_X

It does not really answer your question, but fftw is an optional
dependency for scipy. You don't need to install it to have a working
scipy install.

cheers,

David

From nwagner at iam.uni-stuttgart.de  Sun Oct 5 05:29:52 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Sun, 05 Oct 2008 11:29:52 +0200
Subject: [SciPy-user] Installing nose on Windows XP
Message-ID:

Hi all,

How do I install nose on Windows?
This is my first attempt to run numpy/scipy on Windows.
The problem is that I cannot run numpy.test() on Windows.

Nils

From david at ar.media.kyoto-u.ac.jp  Sun Oct 5 05:19:44 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Sun, 05 Oct 2008 18:19:44 +0900
Subject: [SciPy-user] Installing nose on Windows XP
In-Reply-To:
References:
Message-ID: <48E886B0.4040302@ar.media.kyoto-u.ac.jp>

Nils Wagner wrote:
> Hi all,
>
> How do I install nose on Windows?
> This is my first attempt to run numpy/scipy on Windows.
> The problem is that I cannot run numpy.test() on Windows.

You can use easy_install:

easy_install nose

You have to use the -U flag if you want to upgrade a version you
already have installed:

easy_install -U nose

You can also simply install it from sources (it is pure python, with
no dependency, so just a matter of python setup.py install)

http://somethingaboutorange.com/mrl/projects/nose/

From fredmfp at gmail.com  Sun Oct 5 08:37:17 2008
From: fredmfp at gmail.com (fred)
Date: Sun, 05 Oct 2008 14:37:17 +0200
Subject: [SciPy-user] [numpy distutils] cpu detected...
Message-ID: <48E8B4FD.10503@gmail.com>

Hi,

Looking at numpy/distutils/cpuinfo.py (numpy 1.1.0)
I wonder why Core2 is only detected for 64 bits arch, and not for 32 bits.

I have a Core2 and fortran codes run faster when -xT flag is specified
for 32 bits arch.

Any hint ?

TIA.

Cheers,

--
Fred

From fredmfp at gmail.com  Sun Oct 5 09:25:57 2008
From: fredmfp at gmail.com (fred)
Date: Sun, 05 Oct 2008 15:25:57 +0200
Subject: [SciPy-user] [numpy distutils] cpu detected...
In-Reply-To: <48E8B4FD.10503@gmail.com>
References: <48E8B4FD.10503@gmail.com>
Message-ID: <48E8C065.7070703@gmail.com>

fred wrote:
> Hi,
>
> Looking at numpy/distutils/cpuinfo.py (numpy 1.1.0)
> I wonder why Core2 is only detected for 64 bits arch, and not for 32 bits.
>
> I have a Core2 and fortran codes run faster when -xT flag is specified
> for 32 bits arch.
>
> Any hint ?
In fact, I really don't understand.

Given that the IntelEM64TFCompiler class inherits from IntelFCompiler
in numpy/distutils/fcompiler/intel.py, its get_flags_arch() (line 165)
redefines IntelFCompiler's method.
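One way to see what the detection code reports on a given box is to
query numpy.distutils.cpuinfo interactively -- a minimal sketch,
assuming the _is_Core2/_is_64bit predicates in cpuinfo.py are reachable
without the leading underscore through the class's __getattr__
forwarding:

    from numpy.distutils import cpuinfo

    cpu = cpuinfo.cpu
    print cpu.is_Core2(), cpu.is_64bit()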
So my Core2 can't be detected on my 64 bits arch and thus IntelEM64TFCompiler's get_flags_arch() returns an empty list. Do I misunderstand something trivial ??? Cheers, -- Fred From oldcanine at yahoo.com Sun Oct 5 11:19:25 2008 From: oldcanine at yahoo.com (Barry Olddog) Date: Sun, 5 Oct 2008 08:19:25 -0700 (PDT) Subject: [SciPy-user] undefined symbols when trying to import packages with linalg Message-ID: <664056.52116.qm@web59611.mail.ac4.yahoo.com> ----- Original Message ---- > From: Robert Kern > To: SciPy Users List > Sent: Saturday, October 4, 2008 7:34:38 PM > Subject: Re: [SciPy-user] undefined symbols when trying to import packages with linalg > > On Sat, Oct 4, 2008 at 06:56, Barry Olddog wrote: > > I've been struggling with building scipy into python2.5 for a couple > > days on a new Centos 5.2, 64-bit machine. I finally got everything > > built. At first I installed the blas and lapack packages, and then > > numpy, which seems ok, and finally scipy. > > > > The first problem > > was getting scipy to find the blas and lapack libraries, and then > > various build errors. I removed the packages, and built Atlas myself > > with lapack. Now scipy builds, and importing just scipy alone is ok, > > but it balks at importing some of the packages, including stats, > > linalg, optimize. Here's the complete import error: > > > >>>> import scipy.stats > > Traceback (most recent call last): > > File "", line 1, in > > File "/usr/local/lib/python2.5/site-packages/scipy/stats/__init__.py", line > 7, > > in > > from stats import * > > File "/usr/local/lib/python2.5/site-packages/scipy/stats/stats.py", line 192, > > in > > import scipy.linalg as linalg > > File "/usr/local/lib/python2.5/site-packages/scipy/linalg/__init__.py", line > 8 > > , in > > from basic import * > > File "/usr/local/lib/python2.5/site-packages/scipy/linalg/basic.py", line 17, > > in > > from lapack import get_lapack_funcs > > File "/usr/local/lib/python2.5/site-packages/scipy/linalg/lapack.py", line > 17, > > in > > from scipy.linalg import flapack > > ImportError: /usr/local/lib/python2.5/site-packages/scipy/linalg/flapack.so: > und > > efined symbol: _gfortran_st_write_done > > > > I > > had thought that I was using g77 consistently. Earlier, I was able to > > find references to conflicts betwen g77 and gfortran, which is why I > > built my own atlas, etc. So the _gfortran_st_write_done error seems odd. > > > > Any > > suggestions? > > Use ldd on scipy/linalg/flapack.so to see what shared libraries it is > trying to link with. It should also show you exactly which .so files > it manages to find for the given library. You will probably see that > scipy/linalg/flapack.so is looking for libgfortran but not finding it. > If not, then keep using ldd on the found shared libraries until you > find the culprit. That is the library that was accidentally built with > gfortran. > > If it's scipy/linalg/flapack.so, then look over your build log again > (and rebuild if necessary). Look near the beginning where > numpy.distutils is telling you what Fortran compilers it is finding. > > -- > Robert Kern > Thanks for the reply. It pointed me to a way out. I tried to track down the reference with ldd but never did. But I had compiled the full lapack library as a static library, and my only guess is that somewhere in there was the offending symbol. The config in lapack seemed to me to be clear about g77, but I took a drastic measure and solved the problem. 
It may be inelegant, but I moved gfortran out of /usr/bin, leaving g77 as the only choice, and then rebuilt everything -- lapack with blas, atlas and finally scipy Barry From jrs65 at cam.ac.uk Sun Oct 5 11:28:37 2008 From: jrs65 at cam.ac.uk (Richard Shaw) Date: Sun, 05 Oct 2008 16:28:37 +0100 Subject: [SciPy-user] Python 2.6 compilation error In-Reply-To: References: <48E6510F.8060808@cam.ac.uk> Message-ID: <48E8DD25.5060400@cam.ac.uk> Nathan Bell wrote: > Hi Richard, thanks for the report. I've committed a workaround to > scipy r4767 in SVN so this issue doesn't appear in the coming SciPy > 0.7 release. Thanks Nathan that works a treat, it's all compiled and installed. I do get an error when running scipy.test(), though it seems to be wioth the testing code, scipy itself seems to run fine. I've attached the message and traceback below. Thanks again, Richard -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: test_error.txt URL: From xavier.gnata at gmail.com Sun Oct 5 15:01:55 2008 From: xavier.gnata at gmail.com (Xavier Gnata) Date: Sun, 5 Oct 2008 21:01:55 +0200 Subject: [SciPy-user] Install scipy on ubnutu 8.10 Message-ID: <2a1f8a930810051201w24e83e04k1341884441c51aa@mail.gmail.com> Hi, I have installed an ubuntu 8.10 within a kvm image. The goal is to write an as simple as possible procedure to compile scipy on this distribution. I would like to be able to compile scipy only after having installed some packages. No tricks. Good news, g77 is not needed anymore :) I have one issue: scipy/sparse/linalg/dsolve/umfpack/umfpack.i:192: Error: Unable to find 'umfpack.h' I have installed libsuitesparse-dev providing /usr/include/suitesparse/umfpack.h Of course, I can solve this with a symlink but it is very ugly. It looks like there is something to fix in scipy auto-detection because this "usr/include/suitesparse/umfpack.h" should be detected and used. Once again, my goal is to end up with something like: 1) apt-get install "the correct list of packages" 2) compile/install scipy svn *wthout any extra config*. 3) you have a nice scipy installed and you can even check that using scipy.test() (ok ok it is an svn version so if you get some errors please report them ;)) Xavier From wnbell at gmail.com Sun Oct 5 15:35:19 2008 From: wnbell at gmail.com (Nathan Bell) Date: Sun, 5 Oct 2008 15:35:19 -0400 Subject: [SciPy-user] Install scipy on ubnutu 8.10 In-Reply-To: <2a1f8a930810051201w24e83e04k1341884441c51aa@mail.gmail.com> References: <2a1f8a930810051201w24e83e04k1341884441c51aa@mail.gmail.com> Message-ID: On Sun, Oct 5, 2008 at 3:01 PM, Xavier Gnata wrote: > > I have installed an ubuntu 8.10 within a kvm image. > The goal is to write an as simple as possible procedure to compile > scipy on this distribution. > I would like to be able to compile scipy only after having installed > some packages. No tricks. Hi Xavier, I think this is a great idea. I have sold other people on numpy/scipy, but myriad BLAS/LAPACK/fortran combinations makes the installation process unpleasant for new users. > I have one issue: > scipy/sparse/linalg/dsolve/umfpack/umfpack.i:192: Error: Unable to > find 'umfpack.h' > I have installed libsuitesparse-dev providing /usr/include/suitesparse/umfpack.h > Of course, I can solve this with a symlink but it is very ugly. > It looks like there is something to fix in scipy auto-detection > because this "usr/include/suitesparse/umfpack.h" should be detected > and used. 
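A site.cfg entry (described next) is the usual stop-gap. A minimal
sketch for the suitesparse layout quoted above -- the section and key
names follow numpy's site.cfg.example and should be double-checked
against your numpy version:

    [amd]
    library_dirs = /usr/lib
    include_dirs = /usr/include/suitesparse
    amd_libs = amd

    [umfpack]
    library_dirs = /usr/lib
    include_dirs = /usr/include/suitesparse
    umfpack_libs = umfpack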
The "right" way to do this is to put a [umfpack] section in site.cfg http://projects.scipy.org/pipermail/scipy-dev/2007-April/006910.html Ideally, auto-detection would always include knowledge about where the major distributions put these files. Keep in mind that scipy's support for UMFPACK is deprecated in 0.7 and has been moved to a scikit. > > Once again, my goal is to end up with something like: > 1) apt-get install "the correct list of packages" > 2) compile/install scipy svn *wthout any extra config*. > 3) you have a nice scipy installed and you can even check that using > scipy.test() (ok ok it is an svn version so if you get some errors > please report them ;)) > For 2) you can either improve the auto-detection (I don't know how myself) or simply not use UMFPACK. The danger is that someone might already have libsuitesparse-dev installed, so fixing the auto-detection is probably a better approach. Anyway, I'm eager to see this information become available. Do you plan to provide instructions for both 32-bit and 64-bit versions? In 8.04 I believe there were some differences in the availability of BLAS libraries between the two. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From xavier.gnata at gmail.com Sun Oct 5 16:15:35 2008 From: xavier.gnata at gmail.com (Xavier Gnata) Date: Sun, 05 Oct 2008 22:15:35 +0200 Subject: [SciPy-user] Install scipy on ubnutu 8.10 In-Reply-To: References: <2a1f8a930810051201w24e83e04k1341884441c51aa@mail.gmail.com> Message-ID: <48E92067.3080908@gmail.com> Nathan Bell wrote: > On Sun, Oct 5, 2008 at 3:01 PM, Xavier Gnata wrote: > >> I have installed an ubuntu 8.10 within a kvm image. >> The goal is to write an as simple as possible procedure to compile >> scipy on this distribution. >> I would like to be able to compile scipy only after having installed >> some packages. No tricks. >> > > Hi Xavier, I think this is a great idea. I have sold other people on > numpy/scipy, but myriad BLAS/LAPACK/fortran combinations makes the > installation process unpleasant for new users. > > same here :( >> I have one issue: >> scipy/sparse/linalg/dsolve/umfpack/umfpack.i:192: Error: Unable to >> find 'umfpack.h' >> I have installed libsuitesparse-dev providing /usr/include/suitesparse/umfpack.h >> Of course, I can solve this with a symlink but it is very ugly. >> It looks like there is something to fix in scipy auto-detection >> because this "usr/include/suitesparse/umfpack.h" should be detected >> and used. >> > > The "right" way to do this is to put a [umfpack] section in site.cfg > http://projects.scipy.org/pipermail/scipy-dev/2007-April/006910.html > > Ideally, auto-detection would always include knowledge about where the > major distributions put these files. Keep in mind that scipy's > support for UMFPACK is deprecated in 0.7 and has been moved to a > scikit. > > Well I would like to have the feedback of the developers on the best way to fix the auto-detection. site.cfg should not be needed here. >> Once again, my goal is to end up with something like: >> 1) apt-get install "the correct list of packages" >> 2) compile/install scipy svn *wthout any extra config*. >> 3) you have a nice scipy installed and you can even check that using >> scipy.test() (ok ok it is an svn version so if you get some errors >> please report them ;)) >> >> > > For 2) you can either improve the auto-detection (I don't know how > myself) or simply not use UMFPACK. 
The danger is that someone might > already have libsuitesparse-dev installed, so fixing the > auto-detection is probably a better approach. > > Anyway, I'm eager to see this information become available. Do you > plan to provide instructions for both 32-bit and 64-bit versions? In > 8.04 I believe there were some differences in the availability of BLAS > libraries between the two. > > I do not need umfpack but my goal is to perform a scipy installation with all the libs users could need. I should be the same for 32 and 64 using 8.10 but I have to check that (with kvm it should be easy) My plan is to provide an installation procedure *without an hand compiled ATLAS*. Ubuntu is going to release 8.10 end of october. It would be nice to have a standard way to compile the svn on this "intrepid ibex". Of course, the simplest way to having something working is apt-get install python-scipy. The drawback with these nice packages it that you cannot ask for patch or test a new feature. If someone wants to write a procedure to compile a full optimized scipy using ATLAS on 8.10 , he's really welcome :) Xavier From wnbell at gmail.com Sun Oct 5 16:22:59 2008 From: wnbell at gmail.com (Nathan Bell) Date: Sun, 5 Oct 2008 16:22:59 -0400 Subject: [SciPy-user] Install scipy on ubnutu 8.10 In-Reply-To: <48E92067.3080908@gmail.com> References: <2a1f8a930810051201w24e83e04k1341884441c51aa@mail.gmail.com> <48E92067.3080908@gmail.com> Message-ID: On Sun, Oct 5, 2008 at 4:15 PM, Xavier Gnata wrote: > > I should be the same for 32 and 64 using 8.10 but I have to check that > (with kvm it should be easy) > My plan is to provide an installation procedure *without an hand > compiled ATLAS*. > The prepackaged ATLAS libraries should be sufficient for most users. My understanding is that these support SSE2, which is the most important SIMD advancement. > Ubuntu is going to release 8.10 end of october. It would be nice to have > a standard way to compile the svn on this "intrepid ibex". > Of course, the simplest way to having something working is apt-get > install python-scipy. The drawback with these nice packages it that you > cannot ask for patch or test a new feature. Yep, we should have both. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From robert.kern at gmail.com Sun Oct 5 17:08:41 2008 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 5 Oct 2008 16:08:41 -0500 Subject: [SciPy-user] Python 2.6 compilation error In-Reply-To: <48E8DD25.5060400@cam.ac.uk> References: <48E6510F.8060808@cam.ac.uk> <48E8DD25.5060400@cam.ac.uk> Message-ID: <3d375d730810051408q2b7e428cq3c1e3aba2c91c337@mail.gmail.com> On Sun, Oct 5, 2008 at 10:28, Richard Shaw wrote: > Nathan Bell wrote: > >> Hi Richard, thanks for the report. I've committed a workaround to >> scipy r4767 in SVN so this issue doesn't appear in the coming SciPy >> 0.7 release. > > Thanks Nathan that works a treat, it's all compiled and installed. > > I do get an error when running scipy.test(), though it seems to be wioth the > testing code, scipy itself seems to run fine. I've attached the message and > traceback below. nose 0.10.3 isn't compatible with Python 2.6. Upgrade to 0.10.4. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From bblais at bryant.edu Sun Oct 5 21:46:25 2008 From: bblais at bryant.edu (Brian Blais) Date: Sun, 5 Oct 2008 21:46:25 -0400 Subject: [SciPy-user] where to post this tool? Message-ID: <2D72079F-9E8A-4DE1-AF85-A23581B5E97A@bryant.edu> Hello, I am working on a tool which is a wrapper around odeint, but allows a more convenient input of the equations, and was wondering where would be a good place to post such a thing (when it is a bit more stable/ mature)? Right now, I can do things like: sim=Simulation() sim.add("p'=a*p*(1-p/K)",100,plot=True) sim.params(a=1.5,K=300) sim.run(0,50) for logistic growth, but also things like the lorenz model: s=Simulation() # lorenz model s.add("C' = sig*(L-C)",13,plot=1) s.add("L' = r*C-L-C*M",8.1) s.add("M' = C*L-b*M",45,plot=2) s.params(r=28.0,sig=10.0,b=8.0/3.0) s.run(0,30) I can do some higher order equations, and I hope to be able to do vector equations in the same way. Anyway, I think there might be some who would be interested in this tool, and was wondering if there is a repository for such things. It isn't so big to be a scikit, but too large to be a simple example in a cookbook I think, but the scipy cookbook looks like it might be close to the right place. in matlab, there was a matlab repository for user-contributed m-files. is there such a thing for python/scipy/ numpy? thanks, Brian Blais -- Brian Blais bblais at bryant.edu http://web.bryant.edu/~bblais -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Mon Oct 6 08:28:42 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Mon, 06 Oct 2008 21:28:42 +0900 Subject: [SciPy-user] Install scipy on ubnutu 8.10 In-Reply-To: <2a1f8a930810051201w24e83e04k1341884441c51aa@mail.gmail.com> References: <2a1f8a930810051201w24e83e04k1341884441c51aa@mail.gmail.com> Message-ID: <48EA047A.2060504@ar.media.kyoto-u.ac.jp> Xavier Gnata wrote: > Hi, > > I have installed an ubuntu 8.10 within a kvm image. > The goal is to write an as simple as possible procedure to compile > scipy on this distribution. > I would like to be able to compile scipy only after having installed > some packages. No tricks. > Good news, g77 is not needed anymore :) It was already the case for 8.04, but it was a bit confusing: both g77 and gfortran libraries were available. Is it better with 8.10 ? > > I have one issue: > scipy/sparse/linalg/dsolve/umfpack/umfpack.i:192: Error: Unable to > find 'umfpack.h' > I have installed libsuitesparse-dev providing /usr/include/suitesparse/umfpack.h > Of course, I can solve this with a symlink but it is very ugly. > It looks like there is something to fix in scipy auto-detection > because this "usr/include/suitesparse/umfpack.h" should be detected > and used. That's really the debian packagers' fault. Why do they think it is a good idea to change the header path of the library is beyond me; it breaks every single package which depends on it. That's stupid. We could get around it; but I though umfpack was being deprecated in scipy (that is, we would do a scikit from it, but scipy would not depend on it anymore). > > Once again, my goal is to end up with something like: > 1) apt-get install "the correct list of packages" > 2) compile/install scipy svn *wthout any extra config*. 
> 3) you have a nice scipy installed and you can even check that using > scipy.test() (ok ok it is an svn version so if you get some errors > please report them ;)) It has been the case for a long time :) On old ubuntu: sudo apt-get install g77 gcc python-dev atlas3-base-dev On more recent ones: sudo apt-get install gcc gfortran python-dev libatlas3gf-sse2-dev The real solution would be to prived our own deb, though. cheers, David From cdcasey at gmail.com Mon Oct 6 12:19:13 2008 From: cdcasey at gmail.com (chris) Date: Mon, 6 Oct 2008 11:19:13 -0500 Subject: [SciPy-user] Some failing tests Message-ID: when i run scipy.test() on RedHat 32-bit 3 & 4, I get the following feedback. Are these things I need to worry about? It's running scipy 0.6.0. Not sure if these are fixed in the next version, or if there's a way I can fix them now.... >>> import scipy >>> scipy.test() Failed importing scipy.linsolve.umfpack: 'module' object has no attribute 'umfpack' Found 9/9 tests for scipy.cluster.tests.test_vq Found 20/20 tests for scipy.fftpack.tests.test_pseudo_diffs Found 4/4 tests for scipy.fftpack.tests.test_helper Found 18/18 tests for scipy.fftpack.tests.test_basic Found 3/3 tests for scipy.integrate.tests.test_quadrature Found 1/1 tests for scipy.integrate.tests.test_integrate Found 10/10 tests for scipy.integrate.tests.test_quadpack Found 6/6 tests for scipy.tests.test_interpolate Found 6/6 tests for scipy.tests.test_fitpack Found 13/13 tests for scipy.io.tests.test_mmio Found 4/4 tests for scipy.io.tests.test_recaster Found 4/4 tests for scipy.io.tests.test_array_import Found 5/5 tests for scipy.io.tests.test_npfile Found 28/28 tests for scipy.io.tests.test_mio Found 128/128 tests for scipy.lib.blas.tests.test_fblas Found 16/16 tests for scipy.lib.blas.tests.test_blas **************************************************************** WARNING: clapack module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses flapack instead of clapack. 
Found 42/42 tests for scipy.lib.lapack.tests.test_lapack
Found 6/6 tests for scipy.linalg.tests.test_iterative
Found 41/41 tests for scipy.linalg.tests.test_basic
Found 128/128 tests for scipy.linalg.tests.test_fblas
Found 72/72 tests for scipy.linalg.tests.test_decomp
Found 4/4 tests for scipy.linalg.tests.test_lapack
Found 16/16 tests for scipy.linalg.tests.test_blas
Found 7/7 tests for scipy.linalg.tests.test_matfuncs
Failed importing /home/ccasey/epd/lib/python2.5/site-packages/scipy-0.6.0.0006_s-py2.5-linux-i686.egg/scipy/linsolve/umfpack/tests/test_umfpack.py: 'module' object has no attribute 'umfpack'
Found 2/2 tests for scipy.maxentropy.tests.test_maxentropy
Found 3/3 tests for scipy.tests.test_pilutil
Found 395/395 tests for scipy.ndimage.tests.test_ndimage
Found 5/5 tests for scipy.odr.tests.test_odr
Found 8/8 tests for scipy.optimize.tests.test_optimize
Found 1/1 tests for scipy.optimize.tests.test_cobyla
Found 10/10 tests for scipy.optimize.tests.test_nonlin
Found 4/4 tests for scipy.optimize.tests.test_zeros
Found 5/5 tests for scipy.signal.tests.test_signaltools
Found 4/4 tests for scipy.signal.tests.test_wavelets
Found 152/152 tests for scipy.sparse.tests.test_sparse
Found 342/342 tests for scipy.special.tests.test_basic
Found 3/3 tests for scipy.special.tests.test_spfun_stats
Found 10/10 tests for scipy.stats.tests.test_morestats
Found 107/107 tests for scipy.stats.tests.test_stats
Found 73/73 tests for scipy.stats.tests.test_distributions
building extensions here: /home/ccasey/.python25_compiled/m10
Found 1/1 tests for scipy.weave.tests.test_ext_tools
Found 0/0 tests for scipy.weave.tests.test_c_spec
Found 74/74 tests for scipy.weave.tests.test_size_check
Found 2/2 tests for scipy.weave.tests.test_blitz_tools
Found 0/0 tests for scipy.weave.tests.test_scxx_object
Found 0/0 tests for scipy.weave.tests.test_scxx_dict
Found 0/0 tests for scipy.weave.tests.test_scxx_sequence
Found 26/26 tests for scipy.weave.tests.test_catalog
Found 1/1 tests for scipy.weave.tests.test_ast_tools
Found 0/0 tests for scipy.weave.tests.test_inline_tools
Found 3/3 tests for scipy.weave.tests.test_standard_array_spec
Failed importing /home/ccasey/epd/lib/python2.5/site-packages/scipy-0.6.0.0006_s-py2.5-linux-i686.egg/scipy/weave/tests/test_wx_spec.py: Could not locate wxPython base directory.
Found 16/16 tests for scipy.weave.tests.test_slice_handler
Found 9/9 tests for scipy.weave.tests.test_build_tools
.../home/ccasey/epd/lib/python2.5/site-packages/scipy-0.6.0.0006_s-py2.5-linux-i686.egg/scipy/cluster/vq.py:477: UserWarning: One of the clusters is empty. Re-run kmean with a different initialization.
  warnings.warn("One of the clusters is empty. "
exception raised as expected: One of the clusters is empty. Re-run kmean with a different initialization.
...................................................Residual: 1.05006950608e-07
................./home/ccasey/epd/lib/python2.5/site-packages/scipy-0.6.0.0006_s-py2.5-linux-i686.egg/scipy/interpolate/fitpack2.py:458: UserWarning:
The coefficients of the spline returned have been computed as the
minimal norm least-squares solution of a (numerically) rank deficient
system (deficiency=7). If deficiency is large, the results may be
inaccurate. Deficiency may strongly depend on the value of eps.
  warnings.warn(message)
.......................
Don't worry about a warning regarding the number of bytes read.
Warning: 1000000 bytes requested, 20 bytes read.
........................................caxpy:n=4
..caxpy:n=3
....ccopy:n=4
..ccopy:n=3
.............cscal:n=4
....cswap:n=4
..cswap:n=3
.....daxpy:n=4
..daxpy:n=3
....dcopy:n=4
..dcopy:n=3
.............dscal:n=4
....dswap:n=4
..dswap:n=3
.....saxpy:n=4
..saxpy:n=3
....scopy:n=4
..scopy:n=3
.............sscal:n=4
....sswap:n=4
..sswap:n=3
.....zaxpy:n=4
..zaxpy:n=3
....zcopy:n=4
..zcopy:n=3
.............zscal:n=4
....zswap:n=4
..zswap:n=3
............................................FF................................................................caxpy:n=4
..caxpy:n=3
....ccopy:n=4
..ccopy:n=3
.............cscal:n=4
....cswap:n=4
..cswap:n=3
.....daxpy:n=4
..daxpy:n=3
....dcopy:n=4
..dcopy:n=3
.............dscal:n=4
....dswap:n=4
..dswap:n=3
.....saxpy:n=4
..saxpy:n=3
....scopy:n=4
..scopy:n=3
.............sscal:n=4
....sswap:n=4
..sswap:n=3
.....zaxpy:n=4
..zaxpy:n=3
....zcopy:n=4
..zcopy:n=3
.............zscal:n=4
....zswap:n=4
..zswap:n=3
............................................................................
****************************************************************
WARNING: clapack module is empty
-----------
See scipy/INSTALL.txt for troubleshooting.
Notes:
* If atlas library is not found by numpy/distutils/system_info.py,
  then scipy uses flapack instead of clapack.
****************************************************************
..
****************************************************************
WARNING: cblas module is empty
-----------
See scipy/INSTALL.txt for troubleshooting.
Notes:
* If atlas library is not found by numpy/distutils/system_info.py,
  then scipy uses fblas instead of cblas.
****************************************************************
.................Result may be inaccurate, approximate err = 5.71185593749e-09
................................................................................................................../home/ccasey/epd/lib/python2.5/site-packages/scipy-0.6.0.0006_s-py2.5-linux-i686.egg/scipy/ndimage/interpolation.py:41: UserWarning: Mode "reflect" may yield incorrect results on boundaries. Please use "mirror" instead.
  warnings.warn('Mode "reflect" may yield incorrect results on '
.................................................................................................................................................................................................................................................................................................................................................................Use minimum degree ordering on A'+A.
.....................................Use minimum degree ordering on A'+A.
.....................................Use minimum degree ordering on A'+A.
................................Use minimum degree ordering on A'+A.
....................................................................................................................................................................................................................................................................................................................................................0.2
0.2
0.2
......0.2
..0.2
0.2
0.2
0.2
0.2
.....................Ties preclude use of exact statistic.
..Ties preclude use of exact statistic.
......................./home/ccasey/epd/lib/python2.5/site-packages/numpy-1.1.1.0001-py2.5-linux-i686.egg/numpy/lib/function_base.py:166: FutureWarning:
The semantics of histogram will be modified in release 1.2 to improve outlier handling.
The new behavior can be obtained using new=True. Note that the new
version accepts/returns the bin edges instead of the left bin edges.
Please read the docstring for more information.
  Please read the docstring for more information.""", FutureWarning)
/home/ccasey/epd/lib/python2.5/site-packages/numpy-1.1.1.0001-py2.5-linux-i686.egg/numpy/lib/function_base.py:181: FutureWarning:
Outliers handling will change in version 1.2.
Please read the docstring for details.
  Please read the docstring for details.""", FutureWarning)
..............................................................................................................................................................................................................................................................................................warning: specified build_dir '_bad_path_' does not exist or is not writable. Trying default locations
...warning: specified build_dir '..' does not exist or is not writable. Trying default locations
..warning: specified build_dir '_bad_path_' does not exist or is not writable. Trying default locations
...warning: specified build_dir '..' does not exist or is not writable. Trying default locations
.
======================================================================
FAIL: check_syevr (scipy.lib.lapack.tests.test_lapack.test_flapack_float)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/ccasey/epd/lib/python2.5/site-packages/scipy-0.6.0.0006_s-py2.5-linux-i686.egg/scipy/lib/lapack/tests/esv_tests.py", line 41, in check_syevr
    assert_array_almost_equal(w,exact_w)
  File "/home/ccasey/epd/lib/python2.5/site-packages/numpy-1.1.1.0001-py2.5-linux-i686.egg/numpy/testing/utils.py", line 255, in assert_array_almost_equal
    header='Arrays are not almost equal')
  File "/home/ccasey/epd/lib/python2.5/site-packages/numpy-1.1.1.0001-py2.5-linux-i686.egg/numpy/testing/utils.py", line 240, in assert_array_compare
    assert cond, msg
AssertionError:
Arrays are not almost equal

(mismatch 33.3333333333%)
 x: array([-0.66992444,  0.48769444,  9.18222618], dtype=float32)
 y: array([-0.66992434,  0.48769389,  9.18223045])

======================================================================
FAIL: check_syevr_irange (scipy.lib.lapack.tests.test_lapack.test_flapack_float)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/ccasey/epd/lib/python2.5/site-packages/scipy-0.6.0.0006_s-py2.5-linux-i686.egg/scipy/lib/lapack/tests/esv_tests.py", line 66, in check_syevr_irange
    assert_array_almost_equal(w,exact_w[rslice])
  File "/home/ccasey/epd/lib/python2.5/site-packages/numpy-1.1.1.0001-py2.5-linux-i686.egg/numpy/testing/utils.py", line 255, in assert_array_almost_equal
    header='Arrays are not almost equal')
  File "/home/ccasey/epd/lib/python2.5/site-packages/numpy-1.1.1.0001-py2.5-linux-i686.egg/numpy/testing/utils.py", line 240, in assert_array_compare
    assert cond, msg
AssertionError:
Arrays are not almost equal

(mismatch 33.3333333333%)
 x: array([-0.66992444,  0.48769444,  9.18222618], dtype=float32)
 y: array([-0.66992434,  0.48769389,  9.18223045])

----------------------------------------------------------------------
Ran 1847 tests in 6.389s

FAILED (failures=2)
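[Editorial note: the two failures above are tolerance failures, not wrong
results. The float32 eigenvalues agree with the float64 references to about
six significant digits, which is roughly all single precision can deliver,
while assert_array_almost_equal's default decimal=6 demands ~5e-7 absolute
agreement. Using the very arrays from the report:]

    import numpy as np
    from numpy.testing import assert_array_almost_equal

    w32 = np.array([-0.66992444, 0.48769444, 9.18222618], dtype=np.float32)
    w64 = np.array([-0.66992434, 0.48769389, 9.18223045])

    assert_array_almost_equal(w32, w64, decimal=4)  # passes: float32-level agreement
    assert_array_almost_equal(w32, w64)             # default decimal=6: raises, as above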
From cdcasey at gmail.com Mon Oct 6 12:26:25 2008
From: cdcasey at gmail.com (chris)
Date: Mon, 6 Oct 2008 11:26:25 -0500
Subject: [SciPy-user] Some failing tests
In-Reply-To:
References:
Message-ID:

Sorry. That should have read RedHat 4 32-bit (not 3 & 4). Also, using
numpy 1.1.1.

On Mon, Oct 6, 2008 at 11:19 AM, chris wrote:
> [original report with the full test output quoted verbatim; snipped]
From stefan at sun.ac.za Mon Oct 6 12:28:21 2008
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Mon, 6 Oct 2008 18:28:21 +0200
Subject: [SciPy-user] Inconsistent standard deviation and variance implementation in scipy vs. scipy.stats
In-Reply-To: <5b8d13220809252119ya7b7b28p400d914792b129a8@mail.gmail.com>
References: <200809241605.01575.jr@sun.ac.za> <200809241205.02637.pgmdevlist@gmail.com> <48DA9519.1000203@american.edu> <5b8d13220809252119ya7b7b28p400d914792b129a8@mail.gmail.com>
Message-ID: <9457e7c80810060928i239f2a7eg5dee5121617317b5@mail.gmail.com>

2008/9/26 David Cournapeau :
> Yes, it would be nice. What do other people think about deprecating
> all the numpy re-exports in scipy? It would be nice to do for 0.7
> (e.g. in 0.7, deprecated, in 0.8, removed).

There were no objections to this, so may we go ahead?

Stéfan
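[Editorial note: mechanically, the "deprecate in 0.7, remove in 0.8" plan
would amount to replacing each plain re-export with a warning wrapper. The
following is an illustrative sketch, not scipy's actual code:]

    import warnings
    import numpy as np

    def _deprecated_reexport(name):
        func = getattr(np, name)
        def wrapper(*args, **kwargs):
            warnings.warn("scipy.%s is deprecated; use numpy.%s instead"
                          % (name, name), DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        wrapper.__name__ = name
        wrapper.__doc__ = func.__doc__
        return wrapper

    # e.g. in scipy/__init__.py, for each name that is only re-exported:
    cumsum = _deprecated_reexport("cumsum")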
From nwagner at iam.uni-stuttgart.de Mon Oct 6 12:40:05 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Mon, 06 Oct 2008 18:40:05 +0200
Subject: [SciPy-user] Some failing tests
In-Reply-To:
References:
Message-ID:

On Mon, 6 Oct 2008 11:26:25 -0500 chris wrote:
> [previous message quoted in full; snipped]
This is a known issue. See
http://scipy.org/scipy/scipy/ticket/375

Nils

From cdcasey at gmail.com Mon Oct 6 12:55:30 2008
From: cdcasey at gmail.com (chris)
Date: Mon, 6 Oct 2008 11:55:30 -0500
Subject: [SciPy-user] Some failing tests
In-Reply-To:
References:
Message-ID:

Thanks!

On Mon, Oct 6, 2008 at 11:40 AM, Nils Wagner wrote:
> [previous message quoted in full; snipped]
does not exist or >>>is not >>> writable. Trying default locations >>> . >>> ====================================================================== >>> FAIL: check_syevr >>>(scipy.lib.lapack.tests.test_lapack.test_flapack_float) >>> ---------------------------------------------------------------------- >>> Traceback (most recent call last): >>> File >>>"/home/ccasey/epd/lib/python2.5/site-packages/scipy-0.6.0.0006_s-py2.5-linux-i686.egg/scipy/lib/lapack/tests/esv_tests.py", >>> line 41, in check_syevr >>> assert_array_almost_equal(w,exact_w) >>> File >>>"/home/ccasey/epd/lib/python2.5/site-packages/numpy-1.1.1.0001-py2.5-linux-i686.egg/numpy/testing/utils.py", >>> line 255, in assert_array_almost_equal >>> header='Arrays are not almost equal') >>> File >>>"/home/ccasey/epd/lib/python2.5/site-packages/numpy-1.1.1.0001-py2.5-linux-i686.egg/numpy/testing/utils.py", >>> line 240, in assert_array_compare >>> assert cond, msg >>> AssertionError: >>> Arrays are not almost equal >>> >>> (mismatch 33.3333333333%) >>> x: array([-0.66992444, 0.48769444, 9.18222618], >>>dtype=float32) >>> y: array([-0.66992434, 0.48769389, 9.18223045]) >>> >>> ====================================================================== >>> FAIL: check_syevr_irange >>> (scipy.lib.lapack.tests.test_lapack.test_flapack_float)---------------------------------------------------------------------- >>> Traceback (most recent call last): >>> File >>>"/home/ccasey/epd/lib/python2.5/site-packages/scipy-0.6.0.0006_s-py2.5-linux-i686.egg/scipy/lib/lapack/tests/esv_tests.py", >>> line 66, in check_syevr_irange >>> assert_array_almost_equal(w,exact_w[rslice]) >>> File >>>"/home/ccasey/epd/lib/python2.5/site-packages/numpy-1.1.1.0001-py2.5-linux-i686.egg/numpy/testing/utils.py", >>> line 255, in assert_array_almost_equal >>> header='Arrays are not almost equal') >>> File >>>"/home/ccasey/epd/lib/python2.5/site-packages/numpy-1.1.1.0001-py2.5-linux-i686.egg/numpy/testing/utils.py", >>> line 240, in assert_array_compare >>> assert cond, msg >>> AssertionError: >>> Arrays are not almost equal >>> >>> (mismatch 33.3333333333%) >>> x: array([-0.66992444, 0.48769444, 9.18222618], >>>dtype=float32) >>> y: array([-0.66992434, 0.48769389, 9.18223045]) >>> >>> ---------------------------------------------------------------------- >>> Ran 1847 tests in 6.389s >>> >>> FAILED (failures=2) >>> >>> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user > > > This is a known issue > See > http://scipy.org/scipy/scipy/ticket/375 > > > Nils > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From jah.mailinglist at gmail.com Mon Oct 6 13:47:06 2008 From: jah.mailinglist at gmail.com (jah) Date: Mon, 6 Oct 2008 10:47:06 -0700 Subject: [SciPy-user] Ubuntu Libraries Message-ID: Hi, I was hoping someone could clarify the ubuntu packages and what is actually needed by scipy. 1) I keep seeing posts about g77 not being required anymore and gfortran preferred. What is the "official" statement? In INSTALL.txt, only g77 is mentioned for Ubuntu packages via: """Debian/Ubuntu packages (g77): atlas3-base atlas3-base-dev""". The only reference to gfortran is with Mac OS X. Also, in "Optional Packages" debian packages 'gcc g++ g77' are recommened. 2) The install notes also say that a complete version of LAPACK is required for scipy. 
From peridot.faceted at gmail.com Mon Oct 6 14:34:19 2008
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Mon, 6 Oct 2008 14:34:19 -0400
Subject: [SciPy-user] Inconsistent standard deviation and variance implementation in scipy vs. scipy.stats
In-Reply-To: <9457e7c80810060928i239f2a7eg5dee5121617317b5@mail.gmail.com>
References: <200809241605.01575.jr@sun.ac.za> <200809241205.02637.pgmdevlist@gmail.com> <48DA9519.1000203@american.edu> <5b8d13220809252119ya7b7b28p400d914792b129a8@mail.gmail.com> <9457e7c80810060928i239f2a7eg5dee5121617317b5@mail.gmail.com>
Message-ID:

2008/10/6 Stéfan van der Walt :
> [...]
> There were no objections to this, so may we go ahead?

My only concern is possible user confusion: some functions (e.g. sqrt)
are provided as "enhanced" versions in scipy, while others are simply
re-exported. If we remove the re-exports, users can't simply use
scipy.whatever to get the best-available version of each function; they
have to know whether an enhanced version exists. Of course, since the
enhanced versions exist because their APIs differ in important and
possibly surprising ways (e.g., sqrt(-1) has a different return type
from sqrt(1)), this may be a good thing.

Anne
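[Editorial note: Anne's sqrt example, concretely. As far as I can tell, the
"enhanced" scipy version she refers to is numpy.lib.scimath.sqrt, so the
difference can be demonstrated with numpy alone:]

    import numpy as np
    from numpy.lib import scimath

    print np.sqrt(-1)       # nan: plain numpy stays in the reals
    print scimath.sqrt(-1)  # 1j: the enhanced version switches to complex
    print scimath.sqrt(1)   # 1.0: so the return type depends on the value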
From haase at msg.ucsf.edu Mon Oct 6 14:59:39 2008
From: haase at msg.ucsf.edu (Sebastian Haase)
Date: Mon, 6 Oct 2008 20:59:39 +0200
Subject: [SciPy-user] Inconsistent standard deviation and variance implementation in scipy vs. scipy.stats
In-Reply-To:
References: <200809241605.01575.jr@sun.ac.za> <200809241205.02637.pgmdevlist@gmail.com> <48DA9519.1000203@american.edu> <5b8d13220809252119ya7b7b28p400d914792b129a8@mail.gmail.com> <9457e7c80810060928i239f2a7eg5dee5121617317b5@mail.gmail.com>
Message-ID:

On Mon, Oct 6, 2008 at 8:34 PM, Anne Archibald wrote:
> [previous message quoted in full; snipped]

Arguing that SciPy is below 1.0, I think the re-exporting should be
minimized as much as possible. I don't think that some people's
preference for "from scipy import *" (without a preceding "from numpy
import *") should be a deciding point. It would be nice if it were
clear enough (in general) whether a given function should be expected
to be part of numpy or part of scipy. Left-over uncertainties should
get documented in a short form - a list, maybe.

My two cents....
-Sebastian Haase

From rmay31 at gmail.com Mon Oct 6 15:06:17 2008
From: rmay31 at gmail.com (Ryan May)
Date: Mon, 06 Oct 2008 14:06:17 -0500
Subject: [SciPy-user] Inconsistent standard deviation and variance implementation in scipy vs. scipy.stats
In-Reply-To: <9457e7c80810060928i239f2a7eg5dee5121617317b5@mail.gmail.com>
References: <200809241605.01575.jr@sun.ac.za> <200809241205.02637.pgmdevlist@gmail.com> <48DA9519.1000203@american.edu> <5b8d13220809252119ya7b7b28p400d914792b129a8@mail.gmail.com> <9457e7c80810060928i239f2a7eg5dee5121617317b5@mail.gmail.com>
Message-ID: <48EA61A9.6040305@gmail.com>

Stéfan van der Walt wrote:
> [...]
> There were no objections to this, so may we go ahead?

+1

(My 0.02)

Ryan

--
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma

From chiefmurph at hotmail.com Mon Oct 6 16:23:42 2008
From: chiefmurph at hotmail.com (Dan Murphy)
Date: Mon, 6 Oct 2008 13:23:42 -0700
Subject: [SciPy-user] Gaussian quadrature error
In-Reply-To:
References: <0F8253EA348F49F6A3C3A0AEFE044866@GatewayLaptop>
Message-ID:

Thanks for the clear explanation, Anne. If I understand the second
take-home lesson -- feed integrators unity-order quantities -- then
instead of calling quadrature as I did:

    integrate.quadrature(f,-1000.0,0.0)

I should scale my function f so that my call looks more like

    integrate.quadrature(f,-1.0,0.0)

Was that what you meant? Also, when you say "scipy.optimize.quad is a
little more general-purpose" did you mean "scipy.integrate.quad"?
Indeed, the call integrate.quad(f,-1000.0,0.0) worked great!

Thanks, and sorry for the late followup.

Dan Murphy

> Date: Wed, 17 Sep 2008 12:00:09 -0400
> From: peridot.faceted at gmail.com
> To: scipy-user at scipy.org
> Subject: Re: [SciPy-user] Gaussian quadrature error
>
> 2008/9/14 Dan Murphy :
>> I am trying out the integrate.quadrature function on the function
>> f(x)=e**x to the left of the y-axis. If the lower bound is not too
>> negative, I get a reasonable answer, but if the lower bound is too
>> negative, I get 0.0 as the value of the integral. Here is the code:
>>
>> from scipy import *
>>
>> def f(x):
>>     return e**x
>>
>> integrate.quadrature(f,-10.0,0.0)  # answer is (0.999954600065, 3.14148596026e-010)
>>
>> but
>>
>> integrate.quadrature(f,-1000.0,0.0)  # yields (8.35116510531e-090, 8.35116510531e-090)
>>
>> Note that 'val' and 'err' are equal. Is this a bug in quadrature?
>
> No, unfortunately. It is a limitation of numerical quadrature in
> general. Specifically, no matter how adaptive the algorithm is, it can
> only base its result on a finite number of sampled points of the
> function. If these points are all zero to numerical accuracy, then the
> answer must be zero. So if you imagine those samples are taken at the
> midpoints of 10 intervals evenly spaced between -1000 and 0, then the
> rightmost one returns a value of e**(-50), which is as close to zero
> as makes no nevermind. You might be all right if this were an adaptive
> scheme and if it used the endpoints, since one endpoint is guaranteed
> to give you one. But not using the endpoints is a design feature of
> some numerical integration schemes.
>
> The take-home lesson is that you can't just use numerical quadrature
> systems blindly; you have to know the features and limitations of the
> particular one you're using. Gaussian quadrature can be very accurate
> for smooth functions, but it has a very specific domain of
> applicability. scipy.optimize.quad is a little more general-purpose by
> intent (and necessarily a little less efficient when Gaussian
> quadrature will do) but it can be tricked too.
>
> A more specific take-home lesson is to try to normalize your problem
> as much as possible, so that all quantities you feed your integrator
> are of order unity. Yes, it's a pain to have to handle scale factors
> yourself, particularly in the normal case when you're solving a family
> of related problems. But you'll get much more reliable performance.
>
> Anne
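[Editorial note: to make the scaling advice concrete with the thread's own
integrand. The exact value of the integral is 1 - exp(-1000), i.e. 1 for
all practical purposes; truncating the interval to where exp(x) is
non-negligible keeps the quantities fed to the integrator of order unity,
and integrate.quad copes with the original interval adaptively. The
commented values are the ones reported in this thread:]

    import numpy as np
    from scipy import integrate

    def f(x):
        return np.exp(x)

    print integrate.quadrature(f, -1000.0, 0.0)  # ~8.35e-90: badly scaled, wrong
    print integrate.quadrature(f, -10.0, 0.0)    # (0.999954600065, 3.14e-10)
    # 0.9999546... is exactly 1 - exp(-10), i.e. only the truncation error
    print integrate.quad(f, -1000.0, 0.0)        # ~1.0 with a small error estimate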
> So if you imagine those samples are taken at the
> midpoints of 10 intervals evenly spaced between -1000 and 0, then the
> rightmost one returns a value of e**(-50), which is as close to zero
> as makes no nevermind. You might be all right if this were an adaptive
> scheme and if it used the endpoints, since one endpoint is guaranteed
> to give you one. But not using the endpoints is a design feature of
> some numerical integration schemes.
>
> The take-home lesson is that you can't just use numerical quadrature
> systems blindly; you have to know the features and limitations of the
> particular one you're using. Gaussian quadrature can be very accurate
> for smooth functions, but it has a very specific domain of
> applicability. scipy.optimize.quad is a little more general-purpose by
> intent (and necessarily a little less efficient when Gaussian
> quadrature will do) but it can be tricked too.
>
> A more specific take-home lesson is to try to normalize your problem
> as much as possible, so that all quantities you feed your integrator
> are of order unity. Yes, it's a pain to have to handle scale factors
> yourself, particularly in the normal case when you're solving a family
> of related problems. But you'll get much more reliable performance.
>
> Anne
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

_________________________________________________________________
Stay up to date on your PC, the Web, and your mobile phone with Windows Live.
http://clk.atdmt.com/MRT/go/msnnkwxp1020093185mrt/direct/01/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From xavier.gnata at gmail.com  Mon Oct  6 18:59:15 2008
From: xavier.gnata at gmail.com (Xavier Gnata)
Date: Tue, 07 Oct 2008 00:59:15 +0200
Subject: [SciPy-user] Install scipy on ubnutu 8.10
In-Reply-To: <48EA047A.2060504@ar.media.kyoto-u.ac.jp>
References: <2a1f8a930810051201w24e83e04k1341884441c51aa@mail.gmail.com> <48EA047A.2060504@ar.media.kyoto-u.ac.jp>
Message-ID: <48EA9843.5000505@gmail.com>

>> Hi,
>>
>> I have installed an ubuntu 8.10 within a kvm image.
>> The goal is to write an as simple as possible procedure to compile
>> scipy on this distribution.
>> I would like to be able to compile scipy only after having installed
>> some packages. No tricks.
>> Good news, g77 is not needed anymore :)
>>
>
> It was already the case for 8.04, but it was a bit confusing: both g77
> and gfortran libraries were available. Is it better with 8.10 ?
>

looks like it is better.

>> I have one issue:
>> scipy/sparse/linalg/dsolve/umfpack/umfpack.i:192: Error: Unable to
>> find 'umfpack.h'
>> I have installed libsuitesparse-dev providing /usr/include/suitesparse/umfpack.h
>> Of course, I can solve this with a symlink but it is very ugly.
>> It looks like there is something to fix in scipy auto-detection
>> because this "/usr/include/suitesparse/umfpack.h" should be detected
>> and used.
>>
>
> That's really the debian packagers' fault. Why do they think it is a
> good idea to change the header path of the library is beyond me; it
> breaks every single package which depends on it. That's stupid.
>

ok. I'm going to write a bug report. w&s.

> We could get around it; but I thought umfpack was being deprecated in
> scipy (that is, we would do a scikit from it, but scipy would not
> depend on it anymore).

Well it is good news :) but even at revision 4786, I get this:

umfpack_info:
  libraries umfpack not found in /usr/lib
/usr/lib/python2.5/site-packages/numpy/distutils/system_info.py:414: UserWarning:
    UMFPACK sparse solver (http://www.cise.ufl.edu/research/sparse/umfpack/)
    not found. Directories to search for the libraries can be specified in the
    numpy/distutils/site.cfg file (section [umfpack]) or by setting
    the UMFPACK environment variable.
  warnings.warn(self.notfounderror.__doc__)
  NOT AVAILABLE

on another 8.04 (and not 8.10) ubuntu.

>> Once again, my goal is to end up with something like:
>> 1) apt-get install "the correct list of packages"
>> 2) compile/install scipy svn *without any extra config*.
>> 3) you have a nice scipy installed and you can even check that using
>> scipy.test() (ok ok it is an svn version so if you get some errors
>> please report them ;))
>>
>
> It has been the case for a long time :) On old ubuntu:
>
> sudo apt-get install g77 gcc python-dev atlas3-base-dev
>
> On more recent ones:
>
> sudo apt-get install gcc gfortran python-dev libatlas-sse2-dev
>
> Except if you have installed libsuitesparse 3.1.0

You also need g++, don't you?
On my current install of 8.10, I cannot install libatlas3gf-sse2 but only
libatlas3gf-base, but it is only a beta.

scipy is a pretty uncommon piece of software, it is a scientific one :)
All the guys using scipy I know are using the svn version because they
*need* *this* very nice feature. Quite often, they also need to look at
the scipy source to see how good an optimization algorithm can be.
Sometimes they even have to hack the code to fit their needs (I did it...)
Scipy is so nice because it is powerful and open source.
All the users I know (except maybe one ;)) want to be able to compile the
svn but they don't really like to play with system administration.

That is why I think we should have a very easy and accurate procedure to
install scipy on ubuntu 8.10 (I could test on SUSE if I have time).

Xavier

> cheers,
>
> David
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>
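In the meantime, a workaround for the relocated headers is to tell
numpy.distutils where Debian puts them, as the UserWarning above suggests.
A sketch of a site.cfg, assuming the stock libsuitesparse-dev layout
(section and key names as in numpy's site.cfg.example):

[amd]
library_dirs = /usr/lib
include_dirs = /usr/include/suitesparse
amd_libs = amd

[umfpack]
library_dirs = /usr/lib
include_dirs = /usr/include/suitesparse
umfpack_libs = umfpack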
From xavier.gnata at gmail.com  Mon Oct  6 19:02:19 2008
From: xavier.gnata at gmail.com (Xavier Gnata)
Date: Tue, 07 Oct 2008 01:02:19 +0200
Subject: [SciPy-user] Ubuntu Libraries
In-Reply-To:
References:
Message-ID: <48EA98FB.6010703@gmail.com>

jah wrote:
> Hi,
>
> I was hoping someone could clarify the ubuntu packages and what is
> actually needed by scipy.
>
> 1) I keep seeing posts about g77 not being required anymore and
> gfortran preferred. What is the "official" statement? In
> INSTALL.txt, only g77 is mentioned for Ubuntu packages via:
> """Debian/Ubuntu packages (g77): atlas3-base atlas3-base-dev""". The
> only reference to gfortran is with Mac OS X. Also, in "Optional
> Packages" debian packages 'gcc g++ g77' are recommended.
>
> 2) The install notes also say that a complete version of LAPACK is
> required for scipy. All the ubuntu descriptions say that only a
> subset of routines from LAPACK are included with ATLAS. For example,
> http://packages.ubuntu.com/hardy/libatlas-sse2-dev. Is this a problem?
>
> 3) It looks like there are two packages: atlas3-sse2 and
> libatlas-sse2-dev. Are both required? It seems that they refer to
> different versions of ATLAS.
> > http://packages.ubuntu.com/hardy/atlas3-sse2 > http://packages.ubuntu.com/hardy/libatlas-sse2-dev > > On reading http://www.scipy.org/Installing_SciPy/Linux, it seems like > I only should need libatlas-sse2-dev. Correct? > > Thanks! Well it really means that this documentation is not up to date or maybe not as accurate as it should be. I'm trying to clarify that for the ubuntu 8.10 users (release end of october). Xavier From ondrej at certik.cz Mon Oct 6 21:40:12 2008 From: ondrej at certik.cz (Ondrej Certik) Date: Tue, 7 Oct 2008 03:40:12 +0200 Subject: [SciPy-user] Install scipy on ubnutu 8.10 In-Reply-To: <48EA047A.2060504@ar.media.kyoto-u.ac.jp> References: <2a1f8a930810051201w24e83e04k1341884441c51aa@mail.gmail.com> <48EA047A.2060504@ar.media.kyoto-u.ac.jp> Message-ID: <85b5c3130810061840w333b4790k103adc8a07b1c6e9@mail.gmail.com> On Mon, Oct 6, 2008 at 2:28 PM, David Cournapeau wrote: > Xavier Gnata wrote: >> Hi, >> >> I have installed an ubuntu 8.10 within a kvm image. >> The goal is to write an as simple as possible procedure to compile >> scipy on this distribution. >> I would like to be able to compile scipy only after having installed >> some packages. No tricks. >> Good news, g77 is not needed anymore :) > > It was already the case for 8.04, but it was a bit confusing: both g77 > and gfortran libraries were available. Is it better with 8.10 ? > >> >> I have one issue: >> scipy/sparse/linalg/dsolve/umfpack/umfpack.i:192: Error: Unable to >> find 'umfpack.h' >> I have installed libsuitesparse-dev providing /usr/include/suitesparse/umfpack.h >> Of course, I can solve this with a symlink but it is very ugly. >> It looks like there is something to fix in scipy auto-detection >> because this "usr/include/suitesparse/umfpack.h" should be detected >> and used. > > That's really the debian packagers' fault. Why do they think it is a > good idea to change the header path of the library is beyond me; it > breaks every single package which depends on it. That's stupid. > > We could get around it; but I though umfpack was being deprecated in > scipy (that is, we would do a scikit from it, but scipy would not > depend on it anymore). The way forward is to get involved with Debian packaging and get this fixed. I did that for the scipy package and fixed that by applying a simple patch to scipy. As to the default umfpack location, I also wondered just like you, but it's useful to ask on the Debian list itself and ask the people who do the packaging. :) So you can read the rationale here (read the whole thread): http://lists.alioth.debian.org/pipermail/pkg-scicomp-devel/2008-September/003133.html I.e. citing: " we had a discussion with the author of suitesparse, the author of the suitesparse interface in Octave and the person in charge of the fedora octave/suitesparse package and we agreed in using /usr/include/suitesparse as the place of the headers. " So if you (or anyone) have opinions on this, please join our teams in Debian: http://wiki.debian.org/Teams/DebianScientificComputingTeam http://wiki.debian.org/Teams/PythonModulesTeam and let's get things fixed/discussed/moving. As to building the official scipy/numpy packages in Debian or Ubuntu, if something isn't working, it's my fault, as I did the uploads for the last couple (a lot) revisions. Please report a bug in that case. On Tue, Oct 7, 2008 at 12:59 AM, Xavier Gnata wrote: >> >> That's really the debian packagers' fault. 
Why do they think it is a
>> good idea to change the header path of the library is beyond me; it
>> breaks every single package which depends on it. That's stupid.
>>
> ok. I'm going to write a bug report. w&s.

Before you do, please read the thread I posted above.

Thanks,
Ondrej

From ondrej at certik.cz  Mon Oct  6 21:47:54 2008
From: ondrej at certik.cz (Ondrej Certik)
Date: Tue, 7 Oct 2008 03:47:54 +0200
Subject: [SciPy-user] Ubuntu Libraries
In-Reply-To:
References:
Message-ID: <85b5c3130810061847s5ffdc55agf3061e85ce09556c@mail.gmail.com>

On Mon, Oct 6, 2008 at 7:47 PM, jah wrote:
> Hi,
>
> I was hoping someone could clarify the ubuntu packages and what is actually
> needed by scipy.
>
> 1) I keep seeing posts about g77 not being required anymore and gfortran
> preferred. What is the "official" statement? In INSTALL.txt, only g77 is
> mentioned for Ubuntu packages via: """Debian/Ubuntu packages (g77):
> atlas3-base atlas3-base-dev""". The only reference to gfortran is with Mac
> OS X. Also, in "Optional Packages" debian packages 'gcc g++ g77' are
> recommended.

Debian (and thus Ubuntu as well) has transitioned from g77 to gfortran,
so only gfortran is needed.

>
> 2) The install notes also say that a complete version of LAPACK is required
> for scipy. All the ubuntu descriptions say that only a subset of routines
> from LAPACK are included with ATLAS. For example,
> http://packages.ubuntu.com/hardy/libatlas-sse2-dev. Is this a problem?

As far as I know, you can use atlas instead interchangeably. You can find
some (fixed) Debian bug reports about that in the python-numpy package.

>
> 3) It looks like there are two packages: atlas3-sse2 and
> libatlas-sse2-dev. Are both required? It seems that they refer to
> different versions of ATLAS.
>
> http://packages.ubuntu.com/hardy/atlas3-sse2
> http://packages.ubuntu.com/hardy/libatlas-sse2-dev
>
> On reading http://www.scipy.org/Installing_SciPy/Linux, it seems like I
> should only need libatlas-sse2-dev. Correct?

The easiest way to determine the build dependencies is to do:

$ apt-get source python-scipy
$ cd python-scipy-0.6.0/
$ cat debian/control
Source: python-scipy
Section: python
Priority: extra
Maintainer: Debian Python Modules Team
Uploaders: Alexandre Fayolle , Marco Presi (Zufus) , Ondrej Certik
Build-Depends: debhelper (>= 5.0.37.2), dpkg-dev (>= 1.13.19), quilt,
 python-all-dev, python-central (>= 0.5), python-numpy (>= 1:1.0.2),
 gfortran, sharutils, swig, libsuitesparse-dev (>= 3.1.0-3), libnetcdf-dev,
 libx11-dev, libblas-dev | libatlas-base-dev, liblapack-dev | libatlas-base-dev,
 libfftw3-dev
XS-Python-Version: all
Standards-Version: 3.8.0
Homepage: http://www.scipy.org/
Vcs-Svn: svn://svn.debian.org/python-modules/packages/scipy/trunk
Vcs-Browser: http://svn.debian.org/wsvn/python-modules/packages/scipy/trunk/?op=log
XS-DM-Upload-Allowed: yes

And you can read the Build-Depends right away.

Btw, is there any reason why the official Debian/Ubuntu packages are
not sufficient for you? I.e. is it because you need the svn version of
scipy? I haven't checked out if the svn scipy builds using the same
build dependencies as 0.6.0, that's true.

Ondrej
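If deb-src entries are enabled in sources.list, a shorter route is to let
apt resolve the Build-Depends shown above in one step:

$ sudo apt-get build-dep python-scipy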
From peridot.faceted at gmail.com  Mon Oct  6 22:23:32 2008
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Mon, 6 Oct 2008 22:23:32 -0400
Subject: [SciPy-user] Gaussian quadrature error
In-Reply-To:
References: <0F8253EA348F49F6A3C3A0AEFE044866@GatewayLaptop>
Message-ID:

2008/10/6 Dan Murphy :
> Thanks for the clear explanation, Anne. If I understand the second take-home
> lesson -- feed integrators unity-order quantities -- then instead of calling
> quadrature as I did:
> integrate.quadrature(f,-1000.0,0.0)
> I should scale my function f so that my call looks more like
> integrate.quadrature(f,-1.0,0.0)
> Was that what you meant?

Yes, though with the particular function you used - exp(x) - you then
have the problem of exceedingly rapid changes within the region of
integration. If what you actually wanted was to integrate from -infinity
to zero, then you might do better to rescale your x coordinate
nonlinearly, say by x' = 1/(1-x) so that the integration becomes one
from zero to one without a sharp edge near one side.

A further take-home lesson is that Gaussian quadrature is extremely
efficient for functions that behave like high-order polynomials - which
exp(x) does on small intervals but not on large intervals. However, you
can use customized Gaussian quadrature based on a different set of
polynomials to integrate functions that look like a polynomial times a
weight factor.

> Also, when you say "scipy.optimize.quad is a little more general-purpose"
> did you mean "scipy.integrate.quad"? Indeed, the call
> integrate.quad(f,-1000.0,0.0)
> worked great!

Oops. Yes. It's still possible to trick it, but scipy.integrate.quad is
based on old, well-tested code that is designed to be fairly robust
against peculiar functions. It will be a little slower than Gaussian
quadrature for functions that are extremely smooth, but then Gaussian
quadrature suffers very badly when handed functions like abs(x).

Anne
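To make the suggested change of variables concrete, a sketch (the
substitution is the x' = 1/(1-x) mentioned above; the target integral of
exp(x) over (-inf, 0] is exactly 1):

import numpy as np
from scipy import integrate

f = lambda x: np.exp(x)
val, err = integrate.quadrature(f, -1000.0, 0.0)
# val is ~8e-90: at the low orders tried first, every Gauss node on
# [-1000, 0] evaluates to numerical zero, so the iteration stops early
# with the wrong answer

# substitute t = 1/(1 - x), i.e. x = 1 - 1/t and dx = dt/t**2,
# which maps (-inf, 0] onto (0, 1] without a sharp edge
g = lambda t: np.exp(1.0 - 1.0/t) / t**2
val, err = integrate.quadrature(g, 0.0, 1.0)
# val should now come out close to the exact answer of 1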
From peridot.faceted at gmail.com  Mon Oct  6 22:30:02 2008
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Mon, 6 Oct 2008 22:30:02 -0400
Subject: [SciPy-user] Inconsistent standard deviation and variance implementation in scipy vs. scipy.stats
In-Reply-To:
References: <200809241605.01575.jr@sun.ac.za> <200809241205.02637.pgmdevlist@gmail.com> <48DA9519.1000203@american.edu> <5b8d13220809252119ya7b7b28p400d914792b129a8@mail.gmail.com> <9457e7c80810060928i239f2a7eg5dee5121617317b5@mail.gmail.com>
Message-ID:

2008/10/6 Sebastian Haase :
> On Mon, Oct 6, 2008 at 8:34 PM, Anne Archibald
> wrote:
>> 2008/10/6 Stéfan van der Walt :
>>> 2008/9/26 David Cournapeau :
>>>> Yes, it would be nice. What do other people think about deprecating
>>>> all the numpy re-export in scipy ? It would be nice to do for 0.7
>>>> (e.g. in 0.7, deprecated, in 0.8, removed).
>>>
>>> There were no objections to this, so may we go ahead?
>>
>> My only concern is possible user confusion: some functions (e.g. sqrt)
>> are provided as "enhanced" versions in scipy, while others are simply
>> reexported. If we remove the reexports, users can't simply use
>> scipy.whatever to get the best-available version of each function,
>> they have to know whether an enhanced version exists. Of course, since
>> the enhanced versions exist because their APIs differ in important and
>> possibly surprising ways (e.g., sqrt(-1) has a different return type
>> from sqrt(1)) this may be a good thing.
>>
> Arguing that SciPy is below 1.0, I think the reexporting should be
> minimized as much as possible.
> I don't think that some people's preference for
> "from scipy import *"
> (without a preceding "from numpy import *")
> should be a deciding point.

The case I was concerned about was

import scipy as sp
x = sp.cos(2*sp.arccos(y))

If this is changed to

import numpy as np
x = np.cos(2*np.arccos(y))

it suddenly stops working for values y>1. To keep the same behaviour
it needs to be

import scipy as sp
import numpy as np
x = np.cos(2*sp.arccos(y))

This is perhaps all right, but it does mean that users need to pay
attention.

Anne

From robert.kern at gmail.com  Mon Oct  6 22:40:38 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 6 Oct 2008 21:40:38 -0500
Subject: [SciPy-user] Inconsistent standard deviation and variance implementation in scipy vs. scipy.stats
In-Reply-To:
References: <200809241605.01575.jr@sun.ac.za> <200809241205.02637.pgmdevlist@gmail.com> <48DA9519.1000203@american.edu> <5b8d13220809252119ya7b7b28p400d914792b129a8@mail.gmail.com> <9457e7c80810060928i239f2a7eg5dee5121617317b5@mail.gmail.com>
Message-ID: <3d375d730810061940q2d358482n80341e2367f8fd5c@mail.gmail.com>

On Mon, Oct 6, 2008 at 21:30, Anne Archibald wrote:
> The case I was concerned about was
>
> import scipy as sp
> x = sp.cos(2*sp.arccos(y))
>
> If this is changed to
>
> import numpy as np
> x = np.cos(2*np.arccos(y))
>
> it suddenly stops working for values y>1. To keep the same behaviour
> it needs to be
>
> import scipy as sp
> import numpy as np
> x = np.cos(2*sp.arccos(y))
>
> This is perhaps all right, but it does mean that users need to pay
> attention.

Well, if we remove some names from scipy/__init__.py, we should remove
them all. The actual definitions of those extended-domain functions are
actually in numpy.lib.scimath, not scipy. We can make a convenient
module inside numpy that basically does this:

from numpy import *
from numpy.lib.scimath import *

Then the transition for people using "import scipy" becomes quite easy.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From david at ar.media.kyoto-u.ac.jp  Mon Oct  6 23:49:55 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 07 Oct 2008 12:49:55 +0900
Subject: [SciPy-user] Install scipy on ubnutu 8.10
In-Reply-To: <85b5c3130810061840w333b4790k103adc8a07b1c6e9@mail.gmail.com>
References: <2a1f8a930810051201w24e83e04k1341884441c51aa@mail.gmail.com> <48EA047A.2060504@ar.media.kyoto-u.ac.jp> <85b5c3130810061840w333b4790k103adc8a07b1c6e9@mail.gmail.com>
Message-ID: <48EADC63.7020107@ar.media.kyoto-u.ac.jp>

Ondrej Certik wrote:
>
> The way forward is to get involved with Debian packaging and get this fixed.
>
> I did that for the scipy package and fixed that by applying a simple
> patch to scipy. As to the default umfpack location, I also wondered
> just like you, but it's useful to ask on the Debian list itself and
> ask the people who do the packaging. :)

Oh, I know why they did that (avoiding cluttering /usr/include). I don't
think there is a point in discussing, they won't change it now. We have
to handle this in our umfpack detection scheme.

cheers,

David

From david at ar.media.kyoto-u.ac.jp  Mon Oct  6 23:53:38 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 07 Oct 2008 12:53:38 +0900
Subject: [SciPy-user] Ubuntu Libraries
In-Reply-To:
References:
Message-ID: <48EADD42.9080608@ar.media.kyoto-u.ac.jp>

jah wrote:
> Hi,
>
> I was hoping someone could clarify the ubuntu packages and what is
> actually needed by scipy.
>
> 1) I keep seeing posts about g77 not being required anymore and
> gfortran preferred. What is the "official" statement? In
> INSTALL.txt, only g77 is mentioned for Ubuntu packages via:
> """Debian/Ubuntu packages (g77): atlas3-base atlas3-base-dev""".
> The only reference to gfortran is with Mac OS X. Also, in "Optional
> Packages" debian packages 'gcc g++ g77' are recommended.

It depends on the version. g77 and gfortran are not ABI compatible: you
can't mix them in your binary, including shared libraries. So the advice
was and still is to use the compiler of your distribution's ABI. Ubuntu
8.04 transitioned from g77 to gfortran ABI (both ABIs were available in
e.g. atlas: atlas3-base-dev for the g77 ABI vs libatlas-sse2-dev for the
gfortran ABI).

>
> 2) The install notes also say that a complete version of LAPACK is
> required for scipy. All the ubuntu descriptions say that only a
> subset of routines from LAPACK are included with ATLAS. For example,
> http://packages.ubuntu.com/hardy/libatlas-sse2-dev. Is this a problem?

Ubuntu (and debian) do include the full LAPACK in atlas: you have
nothing to do with them on Ubuntu.

>
> 3) It looks like there are two packages: atlas3-sse2 and
> libatlas-sse2-dev. Are both required? It seems that they refer to
> different versions of ATLAS.
>
> http://packages.ubuntu.com/hardy/atlas3-sse2
> http://packages.ubuntu.com/hardy/libatlas-sse2-dev
>
> On reading http://www.scipy.org/Installing_SciPy/Linux, it seems like
> I should only need libatlas-sse2-dev. Correct?

Yep, if you use gfortran. You should install libatlas-sse2-dev if you
build everything (numpy and scipy) with gfortran.

David
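A quick way to check which Fortran runtime an existing build links
against (the path and extension module are only an example; adjust them
for your Python version and layout):

$ ldd /usr/lib/python2.5/site-packages/scipy/linalg/flapack.so | grep -i -E 'g2c|gfortran'

libg2c is the g77 runtime and libgfortran the gfortran one; seeing both
pulled into one process is exactly the mixing to avoid.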
From jah.mailinglist at gmail.com  Tue Oct  7 03:23:11 2008
From: jah.mailinglist at gmail.com (jah)
Date: Tue, 7 Oct 2008 00:23:11 -0700
Subject: [SciPy-user] Ubuntu Libraries
In-Reply-To: <85b5c3130810061847s5ffdc55agf3061e85ce09556c@mail.gmail.com>
References: <85b5c3130810061847s5ffdc55agf3061e85ce09556c@mail.gmail.com>
Message-ID:

On Mon, Oct 6, 2008 at 6:47 PM, Ondrej Certik wrote:
>
> Btw, is there any reason why the official Debian/Ubuntu packages are
> not sufficient for you? I.e. is it because you need the svn version of
> scipy?

That is exactly why. Also, started from scratch with Python 2.6, hoping
to learn something in the process.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From xavier.gnata at gmail.com  Tue Oct  7 07:43:17 2008
From: xavier.gnata at gmail.com (Xavier Gnata)
Date: Tue, 7 Oct 2008 13:43:17 +0200
Subject: [SciPy-user] Install scipy on ubnutu 8.10
In-Reply-To: <48EADC63.7020107@ar.media.kyoto-u.ac.jp>
References: <2a1f8a930810051201w24e83e04k1341884441c51aa@mail.gmail.com> <48EA047A.2060504@ar.media.kyoto-u.ac.jp> <85b5c3130810061840w333b4790k103adc8a07b1c6e9@mail.gmail.com> <48EADC63.7020107@ar.media.kyoto-u.ac.jp>
Message-ID: <2a1f8a930810070443s70c45f69o4c98cdad5f63d473@mail.gmail.com>

On Tue, Oct 7, 2008 at 5:49 AM, David Cournapeau <david at ar.media.kyoto-u.ac.jp> wrote:
> Ondrej Certik wrote:
>>
>> The way forward is to get involved with Debian packaging and get this fixed.
>>
>> I did that for the scipy package and fixed that by applying a simple
>> patch to scipy. As to the default umfpack location, I also wondered
>> just like you, but it's useful to ask on the Debian list itself and
>> ask the people who do the packaging. :)
>
> Oh, I know why they did that (avoiding cluttering /usr/include). I don't
> think there is a point in discussing, they won't change it now. We have
> to handle this in our umfpack detection scheme.

Thanks! Indeed it is fully pointless to try to change that. libsuitesparse
must have something special as a library but anyway... if it is fixed by
adding /usr/include/suitesparse to the scipy search path it is perfect :)

Ondrej: Maybe I should try to have a script tool able to take the sources
of the scipy package, replace scipy sources by the current svn ones and
try to rebuild the package every week or so.

Xavier
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ondrej at certik.cz  Tue Oct  7 08:14:05 2008
From: ondrej at certik.cz (Ondrej Certik)
Date: Tue, 7 Oct 2008 14:14:05 +0200
Subject: [SciPy-user] Install scipy on ubnutu 8.10
In-Reply-To: <2a1f8a930810070443s70c45f69o4c98cdad5f63d473@mail.gmail.com>
References: <2a1f8a930810051201w24e83e04k1341884441c51aa@mail.gmail.com> <48EA047A.2060504@ar.media.kyoto-u.ac.jp> <85b5c3130810061840w333b4790k103adc8a07b1c6e9@mail.gmail.com> <48EADC63.7020107@ar.media.kyoto-u.ac.jp> <2a1f8a930810070443s70c45f69o4c98cdad5f63d473@mail.gmail.com>
Message-ID: <85b5c3130810070514q56cd8971l93bb170dc611c46d@mail.gmail.com>

On Tue, Oct 7, 2008 at 1:43 PM, Xavier Gnata wrote:
>
> On Tue, Oct 7, 2008 at 5:49 AM, David Cournapeau
> wrote:
>>
>> Ondrej Certik wrote:
>>>
>>> The way forward is to get involved with Debian packaging and get this
>>> fixed.
>>>
>>> I did that for the scipy package and fixed that by applying a simple
>>> patch to scipy. As to the default umfpack location, I also wondered
>>> just like you, but it's useful to ask on the Debian list itself and
>>> ask the people who do the packaging. :)
>>
>> Oh, I know why they did that (avoiding cluttering /usr/include). I don't
>> think there is a point in discussing, they won't change it now. We have
>> to handle this in our umfpack detection scheme.
>>
> Thanks! Indeed it is fully pointless to try to change that. libsuitesparse
> must have something special as a library but anyway... if it is fixed by
> adding /usr/include/suitesparse to the scipy search path it is perfect :)

Well, I thought it should change, but apparently there are reasons for
the current way. And for me, I don't really care where it is, as long
as it works. So imho adding the /usr/include/suitesparse in the search
path is the way to go.

>
> Ondrej: Maybe I should try to have a script tool able to take the sources
> of the scipy package, replace scipy sources by the current svn ones and
> try to rebuild the package every week or so.

Indeed, that'd be useful. Basically you just need the debian dir, that's
it. Let me know if you need anything fixed.

As I said, if you have time, we'd be happy if you join the Debian Python
Modules Team and help us maintain the scipy/numpy packages in
Debian/Ubuntu.

Ondrej

From nmarais at sun.ac.za  Tue Oct  7 15:06:54 2008
From: nmarais at sun.ac.za (Neilen Marais)
Date: Tue, 7 Oct 2008 19:06:54 +0000 (UTC)
Subject: [SciPy-user] Solving complex RHS for real matrix using umfpack and scipy.sparse.linalg.dsolve.factorized
Message-ID:

Hi,

I have a real sparse matrix that I factorized using
scipy.sparse.linalg.dsolve.factorized(). When I solve it with a complex
RHS, I always get a real return. Do I need to set the matrix type as
complex in this case, or is there a better way?
Thanks
Neilen

From peridot.faceted at gmail.com  Tue Oct  7 16:12:37 2008
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Tue, 7 Oct 2008 16:12:37 -0400
Subject: [SciPy-user] Solving complex RHS for real matrix using umfpack and scipy.sparse.linalg.dsolve.factorized
In-Reply-To:
References:
Message-ID:

2008/10/7 Neilen Marais :
> I have a real sparse matrix that I factorized using
> scipy.sparse.linalg.dsolve.factorized(). When I solve it with a complex
> RHS, I always get a real return. Do I need to set the matrix type as
> complex in this case, or is there a better way?

If all you're doing is solving y = A*x, then you can simply solve for
the real and imaginary parts separately, since a real matrix won't mix
them and the problem is linear.

Anne
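A small sketch of that approach, reusing one real factorization for both
parts (the matrix and right-hand side here are made up for illustration):

import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg.dsolve import factorized

A = csc_matrix(np.array([[4.0, 1.0],
                         [1.0, 3.0]]))
solve = factorized(A)    # factor the real matrix once

b = np.array([1.0 + 2.0j, 0.5 - 1.0j])
# a real matrix never mixes the two parts, so solve each and recombine
x = solve(b.real) + 1j * solve(b.imag)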
From nwagner at iam.uni-stuttgart.de  Wed Oct  8 10:39:51 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 08 Oct 2008 16:39:51 +0200
Subject: [SciPy-user] timeseries documentation via sphinx
Message-ID:

Hi all,

I tried to build the documentation of timeseries

make latex PAPER=a4
mkdir -p build/latex build/doctrees
/data/home/nwagner/local/bin/sphinx-build -b latex -d build/doctrees -D latex_paper_size=a4 source build/latex
/data/home/nwagner/local/lib/python2.5/site-packages/matplotlib/__init__.py:367: UserWarning:
matplotlibrc text.usetex can not be used with *Agg backend unless dvipng-1.5 or later is
installed on your system
  warnings.warn( 'matplotlibrc text.usetex can not be used with *Agg '
Sphinx v0.4.3, building latex
trying to load pickled env... not found
Exception occurred:
  File "/data/home/nwagner/svn/timeseries/scikits/timeseries/doc/source/../ext/numpydoc.py", line 361, in monkeypatch_sphinx_ext_autodoc
    if autodoc.RstGenerator.format_signature is our_format_signature:
AttributeError: 'module' object has no attribute 'RstGenerator'
The full traceback has been saved in /tmp/sphinx-err-2jMNGx.log, if you want
to report the issue to the author.
Please also report this if it was a user error, so that a better error
message can be provided next time.
Send reports to sphinx-dev at googlegroups.com. Thanks!
make: *** [latex] Fehler 1

How can I resolve this problem?

Nils

From david at ar.media.kyoto-u.ac.jp  Wed Oct  8 10:28:17 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Wed, 08 Oct 2008 23:28:17 +0900
Subject: [SciPy-user] timeseries documentation via sphinx
In-Reply-To:
References:
Message-ID: <48ECC381.6040403@ar.media.kyoto-u.ac.jp>

Nils Wagner wrote:
> Hi all,
>
> I tried to build the documentation of timeseries
>

You need the development version of sphinx (0.5dev)

David

From philbinj at gmail.com  Wed Oct  8 11:41:48 2008
From: philbinj at gmail.com (James Philbin)
Date: Wed, 8 Oct 2008 16:41:48 +0100
Subject: [SciPy-user] Left hand sparse matrix multiplication
Message-ID: <2b1c8c4f0810080841l4df18c1fre07ee259a6164f74@mail.gmail.com>

Hi,

I'm trying to compute x*A where x is a dense row vector and A is a
sparse CSC matrix. A.rmatvec seems to do what I want but is wasteful
as it computes:
self.transpose().matvec( other )
i.e. it computes A^T * x^T.

It seems there should be a much more efficient overload for csc's
rmatvec which doesn't involve computing the transpose. I hope I'm
understanding things correctly.

Thanks,
James

On Tue, Oct 7, 2008 at 9:12 PM, Anne Archibald wrote:
> 2008/10/7 Neilen Marais :
>> I have a real sparse matrix that I factorized using
>> scipy.sparse.linalg.dsolve.factorized(). When I solve it with a complex
>> RHS, I always get a real return. Do I need to set the matrix type as
>> complex in this case, or is there a better way?
>
> If all you're doing is solving y = A*x, then you can simply solve for
> the real and imaginary parts separately, since a real matrix won't mix
> them and the problem is linear.
>
> Anne
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From nwagner at iam.uni-stuttgart.de  Wed Oct  8 12:32:14 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 08 Oct 2008 18:32:14 +0200
Subject: [SciPy-user] timeseries documentation via sphinx
In-Reply-To: <48ECC381.6040403@ar.media.kyoto-u.ac.jp>
References: <48ECC381.6040403@ar.media.kyoto-u.ac.jp>
Message-ID:

On Wed, 08 Oct 2008 23:28:17 +0900
David Cournapeau wrote:
> Nils Wagner wrote:
>> Hi all,
>>
>> I tried to build the documentation of timeseries
>>
>
> You need the development version of sphinx (0.5dev)
>
> David

Hi David,

Do I need more than

svn co http://svn.python.org/projects/doctools/trunk/sphinx sphinx

Nils

From mattknox.ca at gmail.com  Wed Oct  8 12:33:14 2008
From: mattknox.ca at gmail.com (Matt Knox)
Date: Wed, 8 Oct 2008 16:33:14 +0000 (UTC)
Subject: [SciPy-user] timeseries documentation via sphinx
References:
Message-ID:

> Hi all,
>
> I tried to build the documentation of timeseries
> ......
>
> How can I resolve this problem?
>
> Nils
>

Also note that there are pre-built docs at http://pytseries.sourceforge.net/

If you have trouble building them after switching to the development
version of sphinx, let me know.

- Matt

From dominique.orban at gmail.com  Wed Oct  8 12:36:10 2008
From: dominique.orban at gmail.com (Dominique Orban)
Date: Wed, 8 Oct 2008 12:36:10 -0400
Subject: [SciPy-user] Left hand sparse matrix multiplication
In-Reply-To: <2b1c8c4f0810080841l4df18c1fre07ee259a6164f74@mail.gmail.com>
References: <2b1c8c4f0810080841l4df18c1fre07ee259a6164f74@mail.gmail.com>
Message-ID: <8793ae6e0810080936m3a2b65f5vcce043968f5f8f8e@mail.gmail.com>

On Wed, Oct 8, 2008 at 11:41 AM, James Philbin wrote:
> Hi,
>
> I'm trying to compute x*A where x is a dense row vector and A is a
> sparse CSC matrix. A.rmatvec seems to do what I want but is wasteful
> as it computes:
> self.transpose().matvec( other )
> i.e. it computes A^T * x^T.
>
> It seems there should be a much more efficient overload for csc's
> rmatvec which doesn't involve computing the transpose. I hope I'm
> understanding things correctly.

Do you only need matrix-vector products with A in this form, i.e., x*A,
or do you also need A*x? If you only need x*A you're probably better off
storing B=A^T in CSR format and computing B*x' where x' is the column
vector x (always stored as a column vector.)

Dominique

> On Tue, Oct 7, 2008 at 9:12 PM, Anne Archibald
> wrote:
>> 2008/10/7 Neilen Marais :
>>
>>> I have a real sparse matrix that I factorized using
>>> scipy.sparse.linalg.dsolve.factorized(). When I solve it with a complex
>>> RHS, I always get a real return. Do I need to set the matrix type as
>>> complex in this case, or is there a better way?
>>
>> If all you're doing is solving y = A*x, then you can simply solve for
>> the real and imaginary parts separately, since a real matrix won't mix
>> them and the problem is linear.
>> >> Anne >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From nwagner at iam.uni-stuttgart.de Wed Oct 8 13:20:03 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 08 Oct 2008 19:20:03 +0200 Subject: [SciPy-user] timeseries documentation via sphinx In-Reply-To: <48ECC381.6040403@ar.media.kyoto-u.ac.jp> References: <48ECC381.6040403@ar.media.kyoto-u.ac.jp> Message-ID: On Wed, 08 Oct 2008 23:28:17 +0900 David Cournapeau wrote: > Nils Wagner wrote: >> Hi all, >> >> I tried to build the documentation of timeseries >> > > You need the developement version of sphinx (0.5dev) > > David I have updated to the development version of sphinx ... make latex mkdir -p build/latex build/doctrees sphinx-build -b latex -d build/doctrees -D latex_paper_size=a4 source build/latex Sphinx v0.5, building latex loading pickled environment... not found building [latex]: all documents updating environment: 14 added, 0 changed, 0 removed reading sources... core/Date core/DateArray core/TimeSeries core/index index installing intro lib/database lib/filtering lib/index lib/interpolation lib/plotting DEBUG: current directory: /home/nwagner/svn/timeseries/scikits/timeseries/doc DEBUG: fullpath:/home/nwagner/svn/timeseries/scikits/timeseries/doc/source/lib/plotting/yahoo.py DEBUG: outdirnm:/home/nwagner/svn/timeseries/scikits/timeseries/doc/build/plots/yahoo.py /home/nwagner/svn/timeseries/scikits/timeseries/doc/build/plots/yahoo.png building /home/nwagner/svn/timeseries/scikits/timeseries/doc/source/lib/plotting/yahoo.py DEBUG: current directory: /home/nwagner/svn/timeseries/scikits/timeseries/doc DEBUG: fullpath:/home/nwagner/svn/timeseries/scikits/timeseries/doc/source/lib/plotting/expmave.py DEBUG: outdirnm:/home/nwagner/svn/timeseries/scikits/timeseries/doc/build/plots/expmave.py /home/nwagner/svn/timeseries/scikits/timeseries/doc/build/plots/expmave.png building /home/nwagner/svn/timeseries/scikits/timeseries/doc/source/lib/plotting/expmave.py DEBUG: current directory: /home/nwagner/svn/timeseries/scikits/timeseries/doc DEBUG: fullpath:/home/nwagner/svn/timeseries/scikits/timeseries/doc/source/lib/plotting/sepaxis.py DEBUG: outdirnm:/home/nwagner/svn/timeseries/scikits/timeseries/doc/build/plots/sepaxis.py /home/nwagner/svn/timeseries/scikits/timeseries/doc/build/plots/sepaxis.png building /home/nwagner/svn/timeseries/scikits/timeseries/doc/source/lib/plotting/sepaxis.py DEBUG: current directory: /home/nwagner/svn/timeseries/scikits/timeseries/doc DEBUG: fullpath:/home/nwagner/svn/timeseries/scikits/timeseries/doc/source/lib/plotting/zoom1.py DEBUG: outdirnm:/home/nwagner/svn/timeseries/scikits/timeseries/doc/build/plots/zoom1.py /home/nwagner/svn/timeseries/scikits/timeseries/doc/build/plots/zoom1.png building /home/nwagner/svn/timeseries/scikits/timeseries/doc/source/lib/plotting/zoom1.py DEBUG: current directory: /home/nwagner/svn/timeseries/scikits/timeseries/doc DEBUG: fullpath:/home/nwagner/svn/timeseries/scikits/timeseries/doc/source/lib/plotting/zoom2.py DEBUG: outdirnm:/home/nwagner/svn/timeseries/scikits/timeseries/doc/build/plots/zoom2.py /home/nwagner/svn/timeseries/scikits/timeseries/doc/build/plots/zoom2.png building /home/nwagner/svn/timeseries/scikits/timeseries/doc/source/lib/plotting/zoom2.py DEBUG: current 
directory: /home/nwagner/svn/timeseries/scikits/timeseries/doc DEBUG: fullpath:/home/nwagner/svn/timeseries/scikits/timeseries/doc/source/lib/plotting/zoom3.py DEBUG: outdirnm:/home/nwagner/svn/timeseries/scikits/timeseries/doc/build/plots/zoom3.py /home/nwagner/svn/timeseries/scikits/timeseries/doc/build/plots/zoom3.png building /home/nwagner/svn/timeseries/scikits/timeseries/doc/source/lib/plotting/zoom3.py DEBUG: current directory: /home/nwagner/svn/timeseries/scikits/timeseries/doc DEBUG: fullpath:/home/nwagner/svn/timeseries/scikits/timeseries/doc/source/lib/plotting/zoom4.py DEBUG: outdirnm:/home/nwagner/svn/timeseries/scikits/timeseries/doc/build/plots/zoom4.py /home/nwagner/svn/timeseries/scikits/timeseries/doc/build/plots/zoom4.png building /home/nwagner/svn/timeseries/scikits/timeseries/doc/source/lib/plotting/zoom4.py lib/report license WARNING: /home/nwagner/svn/timeseries/scikits/timeseries/doc/source/lib/database.rst:132: (WARNING/2) autodoc can't import/find module 'scikits.timeseries.lib.tstables', it reported error: "No module named tables", please check your spelling and sys.path pickling environment... done checking consistency... done processing TimeSeries.tex... index intro license installing core/index core/Date core/DateArray core/TimeSeries lib/index lib/interpolation lib/filtering lib/plotting lib/report lib/database resolving references... writing... Exception occurred: File "/usr/local/lib64/python2.5/site-packages/Sphinx-0.5dev_20081008-py2.5.egg/sphinx/latexwriter.py", line 580, in visit_entry raise NotImplementedError('Column or row spanning cells are ' NotImplementedError: Column or row spanning cells are not implemented. The full traceback has been saved in /tmp/sphinx-err-33YnJ3.log, if you want to report the issue to the author. Please also report this if it was a user error, so that a better error message can be provided next time. Send reports to sphinx-dev at googlegroups.com. Thanks! make: *** [latex] Error 1 Nils From pgmdevlist at gmail.com Wed Oct 8 13:20:35 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 8 Oct 2008 13:20:35 -0400 Subject: [SciPy-user] timeseries documentation via sphinx In-Reply-To: References: <48ECC381.6040403@ar.media.kyoto-u.ac.jp> Message-ID: <200810081320.35661.pgmdevlist@gmail.com> Oh, looks like somebody committed some fixes without removing the DEBUG statements. Sorry about that. Anyhow: yes, there's a problem with the tables. I'll try to find a workaround. In the meantime, the html docs should work OK. From nwagner at iam.uni-stuttgart.de Wed Oct 8 13:33:35 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 08 Oct 2008 19:33:35 +0200 Subject: [SciPy-user] timeseries documentation via sphinx In-Reply-To: <200810081320.35661.pgmdevlist@gmail.com> References: <48ECC381.6040403@ar.media.kyoto-u.ac.jp> <200810081320.35661.pgmdevlist@gmail.com> Message-ID: On Wed, 8 Oct 2008 13:20:35 -0400 Pierre GM wrote: > Oh, looks like somebody committed some fixes without >removing the DEBUG > statements. Sorry about that. > Anyhow: yes, there's a problem with the tables. I'll try >to find a workaround. > In the meantime, the html docs should work OK. Yes, works for me ! Thank you very much ! make html mkdir -p build/html build/doctrees sphinx-build -b html -d build/doctrees -D latex_paper_size=a4 source build/html Sphinx v0.5, building html loading pickled environment... done building [html]: targets for 14 source files that are out of date updating environment: 0 added, 1 changed, 0 removed reading sources... 
core/TimeSeries
pickling environment... done
checking consistency... done
preparing documents... done
writing output... core/Date core/DateArray core/TimeSeries core/index index installing intro lib/database lib/filtering lib/index lib/interpolation lib/plotting lib/report license
writing additional files... genindex search
copying images...
/home/nwagner/svn/timeseries/scikits/timeseries/doc/build/plots/zoom3.png
/home/nwagner/svn/timeseries/scikits/timeseries/doc/build/plots/zoom2.png
/home/nwagner/svn/timeseries/scikits/timeseries/doc/build/plots/expmave.png
/home/nwagner/svn/timeseries/scikits/timeseries/doc/build/plots/zoom1.pdf
/home/nwagner/svn/timeseries/scikits/timeseries/doc/build/plots/yahoo.pdf
/home/nwagner/svn/timeseries/scikits/timeseries/doc/build/plots/zoom4.png
/home/nwagner/svn/timeseries/scikits/timeseries/doc/build/plots/zoom1.png
/home/nwagner/svn/timeseries/scikits/timeseries/doc/build/plots/sepaxis.png
/home/nwagner/svn/timeseries/scikits/timeseries/doc/build/plots/sepaxis.pdf
/home/nwagner/svn/timeseries/scikits/timeseries/doc/build/plots/zoom4.pdf
/home/nwagner/svn/timeseries/scikits/timeseries/doc/build/plots/zoom2.pdf
/home/nwagner/svn/timeseries/scikits/timeseries/doc/build/plots/yahoo.png
/home/nwagner/svn/timeseries/scikits/timeseries/doc/build/plots/expmave.pdf
/home/nwagner/svn/timeseries/scikits/timeseries/doc/build/plots/zoom3.pdf
copying static files... done
dumping search index... done
dumping object inventory... done
build succeeded.

Build finished. The HTML pages are in build/html.

From wnbell at gmail.com  Wed Oct  8 15:28:05 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Wed, 8 Oct 2008 15:28:05 -0400
Subject: [SciPy-user] Left hand sparse matrix multiplication
In-Reply-To: <2b1c8c4f0810080841l4df18c1fre07ee259a6164f74@mail.gmail.com>
References: <2b1c8c4f0810080841l4df18c1fre07ee259a6164f74@mail.gmail.com>
Message-ID:

On Wed, Oct 8, 2008 at 11:41 AM, James Philbin wrote:
>
> I'm trying to compute x*A where x is a dense row vector and A is a
> sparse CSC matrix. A.rmatvec seems to do what I want but is wasteful
> as it computes:
> self.transpose().matvec( other )
> i.e. it computes A^T * x^T.
>
> It seems there should be a much more efficient overload for csc's
> rmatvec which doesn't involve computing the transpose.
>

CSR.T and CSC.T are constant time operations, they just return the
matrix in the "opposite" format. In your case, A.T is equivalent to
csr_matrix((A.data,A.indices,A.indptr), shape=(A.shape[1],A.shape[0])),
which simply reinterprets the CSC format of A as the CSR format of A.T.

This does not hold for other sparse formats so there *is* some room
for improvement.

--
Nathan Bell wnbell at gmail.com
http://graphics.cs.uiuc.edu/~wnbell/
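In other words, the row-vector product can lean on the free transpose. A
short sketch with a toy matrix:

import numpy as np
from scipy.sparse import csc_matrix

A = csc_matrix(np.array([[1.0, 0.0],
                         [2.0, 3.0]]))
x = np.array([1.0, 1.0])

# A.T costs O(1) here: the CSC arrays are simply relabelled as CSR,
# so x*A can be computed as A.T*x without materializing a new matrix
y = A.T * x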
From Dharhas.Pothina at twdb.state.tx.us  Wed Oct  8 15:54:30 2008
From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina)
Date: Wed, 08 Oct 2008 14:54:30 -0500
Subject: [SciPy-user] Calculating daily, monthly and seasonal averages of hourly time series data.
Message-ID: <48ECC9A6.63BA.009B.0@twdb.state.tx.us>

Hi,

I'm trying to analyze hourly salinity data. I was wondering if there is a
simple way of calculating daily, monthly and seasonal averages of hourly
time series data.

So assuming I have two arrays that contain several years of hourly (or
every 15min) salinity data: a datetime array called 'fielddates' & a data
array called 'salinity'

How would I go about getting the various averages? The seasonal averages
would be defined as, say, May through September, etc.

I had a look at scikits.timeseries but it looks like it would require
upgrading numpy to install and there isn't enough high level documentation
on how to use it for me to be confident in picking it up in the time frame
I'm looking at. I'm also not completely clear if it can handle stuff that
happens on a scale smaller than a day. If anyone can point me to any usage
examples for it that would be appreciated.

Thanks,

- dharhas

From lroubeyrie at limair.asso.fr  Thu Oct  9 04:51:27 2008
From: lroubeyrie at limair.asso.fr (Lionel Roubeyrie)
Date: Thu, 09 Oct 2008 10:51:27 +0200
Subject: [SciPy-user] Calculating daily, monthly and seasonal averages of hourly time series data.
In-Reply-To: <48ECC9A6.63BA.009B.0@twdb.state.tx.us>
References: <48ECC9A6.63BA.009B.0@twdb.state.tx.us>
Message-ID: <1223542287.26062.20.camel@poste5>

Hi Dharhas,
scikits.timeseries is perfect for what you want in a very usable way:

###############################
In [29]: import scikits.timeseries as ts

In [30]: sdate=ts.Date('H', '2007-01-01 00:00')

In [31]: fielddates=ts.date_array(start_date=sdate, freq='H', length=365*24*2)

In [32]: salinity=random(365*24*2)*100

In [33]: mes=ts.time_series(data=salinity, dates=fielddates)

In [34]: mes
Out[34]:
timeseries([ 23.84116045  49.51437251  89.29221711 ...,  37.00510947  41.12589836
 78.65572656],
   dates = [01-jan-2007 00:00 ... 30-déc-2008 23:00],
   freq  = H)

In [35]: mes_avmonth=mes.convert(freq='M', func=mean)

In [36]: mes_avmonth
Out[36]:
timeseries([ 49.29718906  50.64688937  49.88193999  48.97144253  49.5788259
  50.41340038  50.15047009  51.70933261  50.5635153   51.15084406
  51.15362514  51.51443468  49.17556599  49.26877667  50.21416724
  49.37037657  51.00724033  49.43337134  49.60398056  50.24470761
  50.62350109  51.15572702  51.37652011  49.24193747],
   dates = [jan-2007 ... déc-2008],
   freq  = M)

In [37]: mes_avyear=mes.convert(freq='Y', func=mean)

In [38]: mes_avyear
Out[38]:
timeseries([ 50.41903159  50.06468157],
   dates = [2007 2008],
   freq  = A-DEC)

In [39]: mes_avseason=mes[(mes.month>=5) & (mes.month<=9)].mean()

In [40]: mes_avseason
Out[40]: 50,33380690600049
###############################

Le mercredi 08 octobre 2008 à 14:54 -0500, Dharhas Pothina a écrit :
> Hi,
>
> I'm trying to analyze hourly salinity data. I was wondering if there is a
> simple way of calculating daily, monthly and seasonal averages of hourly
> time series data.
>
> So assuming I have two arrays that contain several years of hourly (or
> every 15min) salinity data: a datetime array called 'fielddates' & a data
> array called 'salinity'
>
> How would I go about getting the various averages? The seasonal averages
> would be defined as, say, May through September, etc.
>
> I had a look at scikits.timeseries but it looks like it would require
> upgrading numpy to install and there isn't enough high level documentation
> on how to use it for me to be confident in picking it up in the time frame
> I'm looking at. I'm also not completely clear if it can handle stuff that
> happens on a scale smaller than a day. If anyone can point me to any usage
> examples for it that would be appreciated.
>
> Thanks,
>
> - dharhas
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

--
Lionel Roubeyrie
chargé
d'études
LIMAIR - La Surveillance de l'Air en Limousin
http://www.limair.asso.fr

From icy.flame.gm at gmail.com  Thu Oct  9 07:44:16 2008
From: icy.flame.gm at gmail.com (iCy-fLaME)
Date: Thu, 9 Oct 2008 12:44:16 +0100
Subject: [SciPy-user] Extrema finding
Message-ID:

I am trying to find a list of all maxima and minima, each for a given
(1D) numpy array. Anyone know of a quick way to do it?

Ideally the function will return the extrema values and their
positions. Relatively simple function to implement in Python, but that
would be painfully slow. The typical data array I am looking at has
approximately 500k elements of double precision float.

Any thoughts and suggestions are much appreciated. Thanks!

From ndbecker2 at gmail.com  Thu Oct  9 08:06:06 2008
From: ndbecker2 at gmail.com (Neal Becker)
Date: Thu, 09 Oct 2008 08:06:06 -0400
Subject: [SciPy-user] Extrema finding
References:
Message-ID:

iCy-fLaME wrote:

> I am trying to find a list of all maxima and minima, each for a given
> (1D) numpy array. Anyone know of a quick way to do it?
>
> Ideally the function will return the extrema values and their
> positions. Relatively simple function to implement in Python, but that
> would be painfully slow. The typical data array I am looking at has
> approximately 500k elements of double precision float.
>
> Any thoughts and suggestions are much appreciated. Thanks!

I have code for that in c++, which can be used with pyublas and
boost::python. :)

From jdh2358 at gmail.com  Thu Oct  9 08:21:47 2008
From: jdh2358 at gmail.com (John Hunter)
Date: Thu, 9 Oct 2008 07:21:47 -0500
Subject: [SciPy-user] Extrema finding
In-Reply-To:
References:
Message-ID: <88e473830810090521qe247f41l9649f3c460e95d38@mail.gmail.com>

On Thu, Oct 9, 2008 at 6:44 AM, iCy-fLaME wrote:
> I am trying to find a list of all maxima and minima, each for a given
> (1D) numpy array. Anyone know of a quick way to do it?
>
> Ideally the function will return the extrema values and their
> positions. Relatively simple function to implement in Python, but that
> would be painfully slow. The typical data array I am looking at has
> approximately 500k elements of double precision float.

I use the following to find the indices of extrema:

def extrema_sampling(x, withend=False):
    """
    return the indices into the local maxima or minima of x
    if withend, include the 0 and end points in the sampling
    """
    d = np.diff(x)
    d2 = d[:-1]*d[1:]
    ind = []
    if withend:
        ind.append(0)
    ind.extend(np.nonzero(d2<0)[0]+1)
    if withend and ind[-1]!=len(x)-1:
        ind.append(len(x)-1)
    return np.array(ind)
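A quick usage sketch for the function above, on a made-up signal (assumes
numpy is imported as np):

x = np.sin(np.linspace(0, 4*np.pi, 200))
ind = extrema_sampling(x)
vals = x[ind]    # ind holds the positions, vals the extreme values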
From dan.collins at uchsc.edu  Thu Oct  9 10:33:19 2008
From: dan.collins at uchsc.edu (dan collins uchsc)
Date: Thu, 9 Oct 2008 08:33:19 -0600
Subject: [SciPy-user] Extrema finding
Message-ID:

I use the following technique to find minima and maxima.

from numpy import delete, append, sign, where

# x is a 1d numpy array
y = delete(x, [0], axis=0)         # x shifted left by one sample
y = append(y, y[-1])
xy = sign(x - y)                   # +1 where x is falling, -1 where rising
a = append([0], xy[:-1] - xy[1:])  # +2 at a minimum, -2 at a maximum
minima = where(a == 2)
maxima = where(a == -2)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lroubeyrie at limair.asso.fr  Thu Oct  9 10:39:49 2008
From: lroubeyrie at limair.asso.fr (Lionel Roubeyrie)
Date: Thu, 09 Oct 2008 16:39:49 +0200
Subject: [SciPy-user] Calculating daily, monthly and seasonal averages of hourly time series data.
In-Reply-To: <48EDB6AF.63BA.009B.0@twdb.state.tx.us>
References: <48ECC9A6.63BA.009B.0@twdb.state.tx.us> <1223542287.26062.20.camel@poste5> <48EDB6AF.63BA.009B.0@twdb.state.tx.us>
Message-ID: <1223563189.30896.47.camel@poste5>

> Ok I think I understood your example below. Can you give me an example
> of how to deal with missing data?

If you take my last example, you have 3 separate arrays:
mes.data : your values
mes.dates : the date array
mes.mask : like the maskedarray module (timeseries is based on it).

Trying to mask the first 10 values:
##################################
In [16]: mask=zeros_like(mes.data)

In [17]: mask[0:10]=True

In [18]: mask
Out[18]: array([ 1.,  1.,  1., ...,  0.,  0.,  0.])

In [19]: mes2=ts.time_series(mes, mask=mask)

In [20]: mes2
Out[20]:
timeseries([-- -- -- ..., 17.9699245692 66.8968405206 24.7117965045],
   dates = [01-jan-2007 00:00 ... 30-déc-2008 23:00],
   freq  = H)
##################################
A timeseries can be constructed based on another timeseries, like I do
here with mes2. Note that just the values are masked (missing), not the
dates, because all fields have a value (masked or not).

> Does this general technique work for data that is on a 15 minute frequency

Yes, but no :-)
The timeseries module doesn't directly handle a QH frequency, but it does
handle minute frequency (freq='T'). Look at this:
#################################
In [28]: fielddates=ts.date_array(['2007-01-01 00:00', '2007-01-01 00:15', '2007-01-01 00:30', '2007-01-01 00:45'], freq='T')

In [29]: salinity=random(4)*100

In [30]: mes=ts.time_series(data=salinity, dates=fielddates)

In [31]: mes.has_missing_dates()
Out[31]: True
#################################
There's no native QH frequency, so there are some missing dates (you can
also look for duplicated dates, very convenient!). But you can fill these
missing dates:
###############################
In [36]: mes2=mes.fill_missing_dates()

In [37]: mes2.has_missing_dates()
Out[37]: False

In [38]: mes2
Out[38]:
timeseries([2.33824586442 -- -- -- -- -- -- -- -- -- -- -- -- -- --
 36.180901427 -- -- -- -- -- -- -- -- -- -- -- -- -- -- 39.0648471531 --
 -- -- -- -- -- -- -- -- -- -- -- -- -- 55.4226606997],
   dates = [01-jan-2007 00:00 ... 01-jan-2007 00:45],
   freq  = T)
###############################
Or the module can handle directly these missing dates when you convert
the timeseries to a lower frequency:
###############################
In [39]: mes.convert(freq='H', func=mean)
Out[39]:
timeseries([ 33.25166379],
   dates = [01-jan-2007 00:00],
   freq  = H)
###############################
You can try with func=None, it will just fill the missing dates with
missing values :-p

> or datasets where the frequency is
> variable (ie some months we have 10 readings other months we may have
> 30?

As you see, just pass your data with the correct dates and it rocks, but
don't mix minute frequency with hour frequency!
Here I take 3 daily samples in January, and one in October:
#################################
In [41]: fielddates=ts.date_array(['2007-01-01', '2007-01-02', '2007-01-03', '2007-10-15'], freq='D')

In [42]: salinity=random(4)*100

In [43]: mes=ts.time_series(data=salinity, dates=fielddates)

In [44]: mes
Out[44]:
timeseries([ 59.63468614  38.60721076  64.52554805  66.17637291],
   dates = [01-jan-2007 02-jan-2007 03-jan-2007 15-oct-2007],
   freq  = D)

In [45]: mes.convert(freq='M', func=mean)
Out[45]:
timeseries([54.2558149823 -- -- -- -- -- -- -- -- 66.1763729106],
   dates = [jan-2007 ... oct-2007],
   freq  = M)
###################################
Computing the monthly average goes fine; the module fills the missing
months with masked values.

> Also how stable is the scikits.timeseries? Is it reasonably usable?

Yes, we use it intensively on large projects, and Pierre G.M. has made a
very good tool.

Cordially

> thanks,
>
> - dharhas
>
> >>> Lionel Roubeyrie  10/9/2008 3:51 AM >>>
> Hi Dharhas,
> scikits.timeseries is perfect for what you want in a very usable way:
>
> ###############################
> In [29]: import scikits.timeseries as ts
>
> In [30]: sdate=ts.Date('H', '2007-01-01 00:00')
>
> In [31]: fielddates=ts.date_array(start_date=sdate, freq='H',
> length=365*24*2)
>
> In [32]: salinity=random(365*24*2)*100
>
> In [33]: mes=ts.time_series(data=salinity, dates=fielddates)
>
> In [34]: mes
> Out[34]:
> timeseries([ 23.84116045  49.51437251  89.29221711 ...,  37.00510947
> 41.12589836
>  78.65572656],
>    dates = [01-jan-2007 00:00 ... 30-déc-2008 23:00],
>    freq  = H)
>
> In [35]: mes_avmonth=mes.convert(freq='M', func=mean)
>
> In [36]: mes_avmonth
> Out[36]:
> timeseries([ 49.29718906  50.64688937  49.88193999  48.97144253
> 49.5788259
>  50.41340038  50.15047009  51.70933261  50.5635153   51.15084406
>  51.15362514  51.51443468  49.17556599  49.26877667  50.21416724
>  49.37037657  51.00724033  49.43337134  49.60398056  50.24470761
>  50.62350109  51.15572702  51.37652011  49.24193747],
>    dates = [jan-2007 ... déc-2008],
>    freq  = M)
>
> In [37]: mes_avyear=mes.convert(freq='Y', func=mean)
>
> In [38]: mes_avyear
> Out[38]:
> timeseries([ 50.41903159  50.06468157],
>    dates = [2007 2008],
>    freq  = A-DEC)
>
> In [39]: mes_avseason=mes[(mes.month>=5) & (mes.month<=9)].mean()
>
> In [40]: mes_avseason
> Out[40]: 50,33380690600049
> ###############################
>
> Le mercredi 08 octobre 2008 à 14:54 -0500, Dharhas Pothina a écrit :
> > Hi,
> >
> > I'm trying to analyze hourly salinity data. I was wondering if there
> > is a simple way of calculating daily, monthly and seasonal averages of
> > hourly time series data.
> >
> > So assuming I have two arrays that contain several years of hourly
> > (or every 15min) salinity data: a datetime array called 'fielddates' & a
> > data array called 'salinity'
> >
> > How would I go about getting the various averages? The seasonal
> > averages would be defined as, say, May through September, etc.
> >
> > I had a look at scikits.timeseries but it looks like it would require
> > upgrading numpy to install and there isn't enough high level
> > documentation on how to use it for me to be confident in picking it up
> > in the time frame I'm looking at. I'm also not completely clear if it
> > can handle stuff that happens on a scale smaller than a day. If anyone
> > can point me to any usage examples for it that would be appreciated.
> > > > Thanks,
> > > >
> > > > - dharhas
> >
> > _______________________________________________
> > SciPy-user mailing list
> > SciPy-user at scipy.org
> > http://projects.scipy.org/mailman/listinfo/scipy-user

--
Lionel Roubeyrie
chargé d'études
LIMAIR - La Surveillance de l'Air en Limousin
http://www.limair.asso.fr

From icy.flame.gm at gmail.com  Thu Oct  9 12:20:08 2008
From: icy.flame.gm at gmail.com (iCy-fLaME)
Date: Thu, 9 Oct 2008 17:20:08 +0100
Subject: [SciPy-user] Extrema finding
In-Reply-To: <88e473830810090521qe247f41l9649f3c460e95d38@mail.gmail.com>
References: <88e473830810090521qe247f41l9649f3c460e95d38@mail.gmail.com>
Message-ID:

On Thu, Oct 9, 2008 at 1:21 PM, John Hunter wrote:
> On Thu, Oct 9, 2008 at 6:44 AM, iCy-fLaME wrote:
>> I am trying to find a list of all maxima and minima, each for a given
>> (1D) numpy array. Anyone know of a quick way to do it?
>>
>> Ideally the function will return the extrema values and their
>> positions. Relatively simple function to implement in Python, but that
>> would be painfully slow. The typical data array I am looking at has
>> approximately 500k elements of double precision float.
>
> I use the following to find the indices of extrema:
>
> def extrema_sampling(x, withend=False):
>     """
>     return the indices into the local maxima or minima of x
>     if withend, include the 0 and end points in the sampling
>     """
>     d = np.diff(x)
>     d2 = d[:-1]*d[1:]
>     ind = []
>     if withend:
>         ind.append(0)
>     ind.extend(np.nonzero(d2<0)[0]+1)
>     if withend and ind[-1]!=len(x)-1:
>         ind.append(len(x)-1)
>     return np.array(ind)

This is an excellent way of indexing extrema! Although it doesn't return
exactly what I wanted, and it won't catch gradient changes to zero.

Inspired by the above, I have come up with the following; hope it can be
of some use to others:

def extrema(x, max = True, min = True, strict = False, withend = False):
    """
    This function will index the extrema of a given array x.

    Options:
        max         If true, will index maxima
        min         If true, will index minima
        strict      If true, will not index changes to zero gradient
        withend     If true, always include x[0] and x[-1]

    This function will return a tuple of extrema indices and values
    """
    from numpy import zeros, diff, sign, nonzero

    # This is the gradient
    dx = zeros(len(x))
    dx[1:] = diff(x)
    dx[0] = dx[1]

    # Clean up the gradient in order to pick out any change of sign
    dx = sign(dx)

    # Define the threshold for whether to pick out changes to zero gradient
    threshold = 0
    if strict:
        threshold = 1

    # Second order diff to pick out the spikes
    d2x = diff(dx)

    if max and min:
        d2x = abs(d2x)
    elif max:
        d2x = -d2x

    # Take care of the two ends
    if withend:
        d2x[0] = 2
        d2x[-1] = 2

    # Sift out the list of extrema
    ind = nonzero(d2x > threshold)[0]

    return ind, x[ind]

From philbinj at gmail.com  Thu Oct  9 12:30:52 2008
From: philbinj at gmail.com (James Philbin)
Date: Thu, 9 Oct 2008 17:30:52 +0100
Subject: [SciPy-user] Left hand sparse matrix multiplication
In-Reply-To: <2b1c8c4f0810080841l4df18c1fre07ee259a6164f74@mail.gmail.com>
References: <2b1c8c4f0810080841l4df18c1fre07ee259a6164f74@mail.gmail.com>
Message-ID: <2b1c8c4f0810090930o36d96401o5f6b2b74f6fd2e8f@mail.gmail.com>

>> It seems there should be a much more efficient overload for csc's
>> rmatvec which doesn't involve computing the transpose. I hope i'm
>> understanding things correctly.
>
> CSR.T and CSC.T are constant time operations, they just return the
> matrix in the "opposite" format.

Aha, great. It already does what I want.
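For instance, a quick way to see this (only a sketch; it assumes
scipy.sparse and numpy are available, and the constant-time behaviour is
the one described above):
###############################
import numpy as np
from scipy import sparse

A = sparse.csr_matrix(np.eye(3))
B = A.T
# The transpose is just the same data reinterpreted in the
# "opposite" format, so this prints a CSC matrix type:
print type(B)
###############################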
Thanks,
James

From Dharhas.Pothina at twdb.state.tx.us  Thu Oct  9 13:02:10 2008
From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina)
Date: Thu, 09 Oct 2008 12:02:10 -0500
Subject: [SciPy-user] Calculating daily, monthly and seasonal averages of
	hourly time series data.
In-Reply-To: <1223563189.30896.47.camel@poste5>
References: <48ECC9A6.63BA.009B.0@twdb.state.tx.us>
	<1223542287.26062.20.camel@poste5>
	<48EDB6AF.63BA.009B.0@twdb.state.tx.us>
	<1223563189.30896.47.camel@poste5>
Message-ID: <48EDF2C2.63BA.009B.0@twdb.state.tx.us>

This sounds great. I'm going to have to see how complicated it is to get
the recent versions of numpy/scipy/matplotlib installed on Fedora 8 so I
can try the timeseries scikit out. From what I could tell there are no
repositories or rpms available for matplotlib 0.98 on Fedora 8 or 9.

thanks for your help.

- dharhas

>>> Lionel Roubeyrie  10/9/2008 9:39 AM >>>
> Ok I think I understood your example below. Can you give me an example
> of how to deal with missing data?

If you take my last example, you have three separate arrays:
mes.data  : your values
mes.dates : the date array
mes.mask  : the mask, like in the maskedarray module (timeseries is
based on it).

Trying to mask the first 10 values:
##################################
In [16]: mask=zeros_like(mes.data)

In [17]: mask[0:10]=True

In [18]: mask
Out[18]: array([ 1.,  1.,  1., ...,  0.,  0.,  0.])

In [19]: mes2=ts.time_series(mes, mask=mask)

In [20]: mes2
Out[20]:
timeseries([-- -- -- ..., 17.9699245692 66.8968405206 24.7117965045],
   dates = [01-jan-2007 00:00 ... 30-déc-2008 23:00],
   freq  = H)
##################################
A time series can be constructed from another time series, like I do here
with mes2. Note that just the values are masked (missing), not the dates,
because all fields have a value (masked or not).

> Does this general technique work for data that is on a 15 minute
> frequency

Yes, but no :-) The timeseries module doesn't handle the QH frequency
directly, but it does handle minute frequency (freq='T'). Look at that:
#################################
In [28]: fielddates=ts.date_array(['2007-01-01 00:00', '2007-01-01 00:15',
'2007-01-01 00:30', '2007-01-01 00:45'], freq='T')

In [29]: salinity=random(4)*100

In [30]: mes=ts.time_series(data=salinity, dates=fielddates)

In [31]: mes.has_missing_dates()
Out[31]: True
#################################
There is no native QH frequency, so there are some missing dates (you can
also look for duplicated dates, very convenient!). But you can fill in
these missing dates:
###############################
In [36]: mes2=mes.fill_missing_dates()

In [37]: mes2.has_missing_dates()
Out[37]: False

In [38]: mes2
Out[38]:
timeseries([2.33824586442 -- -- -- -- -- -- -- -- -- -- -- -- -- --
 36.180901427 -- -- -- -- -- -- -- -- -- -- -- -- -- -- 39.0648471531 --
 -- -- -- -- -- -- -- -- -- -- -- -- -- 55.4226606997],
   dates = [01-jan-2007 00:00 ... 01-jan-2007 00:45],
   freq  = T)
###############################
Or the module can handle these missing dates directly when you convert the
time series to a lower frequency:
###############################
In [39]: mes.convert(freq='H', func=mean)
Out[39]:
timeseries([ 33.25166379],
   dates = [01-jan-2007 00:00],
   freq  = H)
###############################
You can try with func=None, it will just fill the missing dates with
missing values :-p

> or datasets where the frequency is
> variable (ie some months we have 10 readings other months we may have
> 30?

Like you see, just pass your data with the correct dates, and it rocks,
but don't mix minute frequency with hour frequency!
Here I take 3 daily samples in January, and one in October:
#################################
In [41]: fielddates=ts.date_array(['2007-01-01', '2007-01-02',
'2007-01-03', '2007-10-15'], freq='D')

In [42]: salinity=random(4)*100

In [43]: mes=ts.time_series(data=salinity, dates=fielddates)

In [44]: mes
Out[44]:
timeseries([ 59.63468614  38.60721076  64.52554805  66.17637291],
   dates = [01-jan-2007 02-jan-2007 03-jan-2007 15-oct-2007],
   freq  = D)

In [45]: mes.convert(freq='M', func=mean)
Out[45]:
timeseries([54.2558149823 -- -- -- -- -- -- -- -- 66.1763729106],
   dates = [jan-2007 ... oct-2007],
   freq  = M)
###################################
Computing the monthly average goes fine; the module fills the missing
months with masked values.

> > Also how stable is the scikits.timeseries? Is it reasonably usable?

Yes, we use it intensively on large projects and Pierre G.M. has made a
very good tool.
Cordially

> > thanks,
> >
> > - dharhas
>
> >>> Lionel Roubeyrie  10/9/2008 3:51 AM >>>
> Hi Dharhas,
> scikits.timeseries is perfect for what you want in a very usable way:
>
> ###############################
> In [29]: import scikits.timeseries as ts
>
> In [30]: sdate=ts.Date('H', '2007-01-01 00:00')
>
> In [31]: fielddates=ts.date_array(start_date=sdate, freq='H',
> length=365*24*2)
>
> In [32]: salinity=random(365*24*2)*100
>
> In [33]: mes=ts.time_series(data=salinity, dates=fielddates)
>
> In [34]: mes
> Out[34]:
> timeseries([ 23.84116045  49.51437251  89.29221711 ...,  37.00510947
>  41.12589836
>  78.65572656],
>    dates = [01-jan-2007 00:00 ... 30-déc-2008 23:00],
>    freq  = H)
>
> In [35]: mes_avmonth=mes.convert(freq='M', func=mean)
>
> In [36]: mes_avmonth
> Out[36]:
> timeseries([ 49.29718906  50.64688937  49.88193999  48.97144253
>  49.5788259
>  50.41340038  50.15047009  51.70933261  50.5635153   51.15084406
>  51.15362514  51.51443468  49.17556599  49.26877667  50.21416724
>  49.37037657  51.00724033  49.43337134  49.60398056  50.24470761
>  50.62350109  51.15572702  51.37652011  49.24193747],
>    dates = [jan-2007 ... déc-2008],
>    freq  = M)
>
> In [37]: mes_avyear=mes.convert(freq='Y', func=mean)
>
> In [38]: mes_avyear
> Out[38]:
> timeseries([ 50.41903159  50.06468157],
>    dates = [2007 2008],
>    freq  = A-DEC)
>
> In [39]: mes_avseason=mes[(mes.month>=5) & (mes.month<=9)].mean()
>
> In [40]: mes_avseason
> Out[40]: 50,33380690600049
> ###############################
>
> On Wednesday 08 October 2008 at 14:54 -0500, Dharhas Pothina wrote:
> > Hi,
> >
> > I'm trying to analyze hourly salinity data. I was wondering if there
> is a simple way of calculating daily, monthly and seasonal averages of
> hourly time series data.
> >
> > So assuming I have two arrays that contain several years of hourly
> (or every 15min) salinity data: a datetime array called 'fielddates' & a
> data array called 'salinity'
> >
> > How would I go about getting the various averages. The seasonal
> averages would be say defined as May through September etc.
> >
> > I had a look at scikits.timeseries but it looks like it would require
> upgrading numpy to install and there isn't enough high level
> documentation on how to use it for me to be confident in picking it up
> in the time frame I'm looking at. I'm also not completely clear if it
> can handle stuff that happens on a scale smaller than a day. If anyone
> can point me to any usage examples for it that would be appreciated.
> > > > Thanks,
> > > >
> > > > - dharhas
> >
> > _______________________________________________
> > SciPy-user mailing list
> > SciPy-user at scipy.org
> > http://projects.scipy.org/mailman/listinfo/scipy-user

--
Lionel Roubeyrie
chargé d'études
LIMAIR - La Surveillance de l'Air en Limousin
http://www.limair.asso.fr

_______________________________________________
SciPy-user mailing list
SciPy-user at scipy.org
http://projects.scipy.org/mailman/listinfo/scipy-user

From pgmdevlist at gmail.com  Thu Oct  9 13:17:28 2008
From: pgmdevlist at gmail.com (Pierre GM)
Date: Thu, 9 Oct 2008 13:17:28 -0400
Subject: [SciPy-user] Calculating daily, monthly and seasonal averages of
	hourly time series data.
In-Reply-To: <48EDF2C2.63BA.009B.0@twdb.state.tx.us>
References: <48ECC9A6.63BA.009B.0@twdb.state.tx.us>
	<1223563189.30896.47.camel@poste5>
	<48EDF2C2.63BA.009B.0@twdb.state.tx.us>
Message-ID: <200810091317.29158.pgmdevlist@gmail.com>

Dharhas,
What you need for timeseries is a recent numpy (>=1.2.0) and
scipy (>=0.7svn...). If you can compile the latest sources from SVN,
you're good to go.
The documentation is here:
http://pytseries.sourceforge.net/
If it doesn't cover areas you need, let us know and help us by writing
some examples/tutorial/whatever.

As an extra comment on Lionel's answers:
* Once you have a time series, you can directly mask some data without
having to recreate a series:

>>> series = ts.time_series(np.random.rand(365), start_date=ts.now('D'))
>>> # Mask the first 10 elements
>>> series[:10] = ma.masked
>>> # Mask the data in August
>>> series[series.month==8] = ma.masked

From nwagner at iam.uni-stuttgart.de  Thu Oct  9 13:35:32 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Thu, 09 Oct 2008 19:35:32 +0200
Subject: [SciPy-user] Wrapping mebdfdae.f
Message-ID:

Hi all,

I tried to wrap mebdfdae.f
http://www.ma.ic.ac.uk/~jcash/IVP_software_BSD/mebdfdae.f

It failed with

f2py -c -m mebdfdae mebdfdae.f
running build
running config_cc
unifing config_cc, config, build_clib, build_ext, build commands --compiler options
running config_fc
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
running build_src
building extension "mebdfdae" sources
f2py options: []
f2py:> /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c
creating /tmp/tmp9spUor
creating /tmp/tmp9spUor/src.linux-i686-2.4
Reading fortran codes...
        Reading file 'mebdfdae.f' (format:fix,strict)
Post-processing...
        Block: mebdfdae
        Block: mebdf
        Block: ovdriv
        Block: interp
        Block: coset
        Block: pset
        Block: pderv
        Block: f
        Block: dec
        Block: sol
        Block: dgbfa
        Block: daxpy
        Block: dscal
        Block: idamax
        Block: dgbsl
        Block: ddot
        Block: errors
        Block: prdict
        Block: f
        Block: itrat2
        Block: f
        Block: stiff
        Block: f
        Block: mas
        Block: rscale
        Block: cpyary
        Block: hchose
        Block: dlamch
        Block: dlamc1
        Block: dlamc2
        Block: dlamc3
        Block: dlamc4
        Block: dlamc5
        Block: lsame
Post-processing (stage 2)...
Building modules...
Constructing call-back function "cb_f_in_pset__user__routines" getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' def f(t,y,wrkspc,ipar,rpar,ierr,[n]): return Constructing call-back function "cb_pderv_in_pset__user__routines" getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' def pderv(t,y,pwcopy,ipar,rpar,ierr,[n,n]): return Constructing call-back function "cb_f_in_prdict__user__routines" getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' def f(t,y,yprime,ipar,rpar,ierr,[n]): return Constructing call-back function "cb_f_in_itrat2__user__routines" getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' def f(t,save1,save2,ipar,rpar,ierr,[n]): return Constructing call-back function "cb_f_in_stiff__user__routines" getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' def f(t,y,save1,ipar,rpar,ierr,[n]): return Constructing call-back function "cb_mas_in_stiff__user__routines" getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' def mas(n,am,ldmas,ipar,rpar,ierr): return Building module "mebdfdae"... Constructing wrapper function "mebdf"... routsign2map: Confused: function mebdf has externals ['f', 'pderv', 'mas'] but no "use" statement. getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' sign2map: Confused: external f is not in lcb_map[]. append_needs: unknown need 'f' append_needs: unknown need 'f' sign2map: Confused: external pderv is not in lcb_map[]. append_needs: unknown need 'pderv' append_needs: unknown need 'pderv' sign2map: Confused: external mas is not in lcb_map[]. append_needs: unknown need 'mas' append_needs: unknown need 'mas' mebdf(t0,ho,y0,tout,tend,mf,idid,lout,work,iwork,mbnd,masbnd,maxder,itol,rtol,atol,rpar,ipar,f,pderv,mas,ierr,[n,lwork,liwork,f_extra_args,pderv_extra_args,mas_extra_args]) Constructing wrapper function "ovdriv"... routsign2map: Confused: function ovdriv has externals ['f', 'pderv', 'mas'] but no "use" statement. getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' sign2map: Confused: external f is not in lcb_map[]. append_needs: unknown need 'f' append_needs: unknown need 'f' sign2map: Confused: external pderv is not in lcb_map[]. append_needs: unknown need 'pderv' append_needs: unknown need 'pderv' sign2map: Confused: external mas is not in lcb_map[]. 
append_needs: unknown need 'mas' append_needs: unknown need 'mas' ovdriv(t0,ho,y0,tout,tend,mf,idid,lout,y,yhold,ynhold,ymax,errors,save1,save2,scale,arh,pw,pwcopy,am,ipiv,mbnd,masbnd,nind1,nind2,nind3,maxder,itol,rtol,atol,rpar,ipar,f,pderv,mas,nqused,nstep,nfail,nfe,nje,ndec,nbsol,npset,ncoset,maxord,maxstp,uround,hused,epsjac,ierr,[n,f_extra_args,pderv_extra_args,mas_extra_args]) Constructing wrapper function "interp"... interp(jstart,h,t,y,tout,y0,[n]) Constructing wrapper function "coset"... coset(nq,el,elst,tq,ncoset,maxord) Constructing wrapper function "pset"... sign2map: Confused: external mas is not in lcb_map['pderv', 'f']. append_needs: unknown need 'mas' append_needs: unknown need 'mas' getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' pset(y,h,t,uround,epsjac,con,miter,mbnd,masbnd,nind1,nind2,nind3,ier,f,pderv,mas,nrenew,ymax,save1,save2,pw,pwcopy,am,wrkspc,ipiv,itol,rtol,atol,npset,nje,nfe,ndec,ipar,rpar,ierr,[n,f_extra_args,pderv_extra_args,mas_extra_args]) Constructing wrapper function "dec"... dec(a,ip,ier,[n,ndim]) Constructing wrapper function "sol"... sol(a,b,ip,[n,ndim]) Constructing wrapper function "dgbfa"... dgbfa(abd,n,ml,mu,ipvt,info,[lda]) Constructing wrapper function "daxpy"... daxpy(n,da,dx,incx,dy,incy) Constructing wrapper function "dscal"... dscal(n,da,dx,incx) Creating wrapper for Fortran function "idamax"("idamax")... Constructing wrapper function "idamax"... idamax = idamax(n,dx,incx) Constructing wrapper function "dgbsl"... getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' dgbsl(abd,n,ml,mu,ipvt,b,job,[lda]) Creating wrapper for Fortran function "ddot"("ddot")... Constructing wrapper function "ddot"... ddot = ddot(n,dx,incx,dy,incy) Constructing wrapper function "errors"... errors(n,tq,edn,e,eup,bnd,eddn) Constructing wrapper function "prdict"... getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' prdict(t,h,y,l,yprime,nfe,ipar,rpar,f,ierr,[n,f_extra_args]) Constructing wrapper function "itrat2"... getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' itrat2(qqq,y,t,hbeta,errbnd,arh,crate,tcrate,m,worked,ymax,error,save1,save2,scale,pw,mf,mbnd,am,masbnd,nind1,nind2,nind3,ipiv,lmb,itol,rtol,atol,ipar,rpar,hused,nbsol,nfe,nqused,f,ierr,[n,f_extra_args]) Constructing wrapper function "stiff"... getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' getarrdims:warning: assumed shape array, using 0 instead of '*' sign2map: Confused: external pderv is not in lcb_map['mas', 'f']. 
append_needs: unknown need 'pderv' append_needs: unknown need 'pderv' stiff(h,hmax,hmin,jstart,kflag,mf,mbnd,masbnd,nind1,nind2,nind3,t,tout,tend,y,ymax,error,save1,save2,scale,pw,pwcopy,am,yhold,ynhold,arh,ipiv,lout,maxder,itol,rtol,atol,rpar,ipar,f,pderv,mas,nqused,nstep,nfail,nfe,nje,ndec,nbsol,npset,ncoset,maxord,maxstp,uround,epsjac,hused,ierr,[n,f_extra_args,pderv_extra_args,mas_extra_args]) Constructing wrapper function "rscale"... rscale(l,rh,y,[n]) Constructing wrapper function "cpyary"... cpyary(source,target,[nelem]) Constructing wrapper function "hchose"... hchose(rh,h,ovride) Creating wrapper for Fortran function "dlamch"("dlamch")... Constructing wrapper function "dlamch"... dlamch = dlamch(cmach) Constructing wrapper function "dlamc1"... dlamc1(beta,t,rnd,ieee1) Constructing wrapper function "dlamc2"... dlamc2(beta,t,rnd,eps,emin,rmin,emax,rmax) Creating wrapper for Fortran function "dlamc3"("dlamc3")... Constructing wrapper function "dlamc3"... dlamc3 = dlamc3(a,b) Constructing wrapper function "dlamc4"... dlamc4(emin,start,base) Constructing wrapper function "dlamc5"... dlamc5(beta,p,emin,ieee,emax,rmax) Creating wrapper for Fortran function "lsame"("lsame")... Constructing wrapper function "lsame"... lsame = lsame(ca,cb) Constructing COMMON block support for "stpsze"... hstpsz Wrote C/API module "mebdfdae" to file "/tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c" Fortran 77 wrappers are saved to "/tmp/tmp9spUor/src.linux-i686-2.4/mebdfdae-f2pywrappers.f" adding '/tmp/tmp9spUor/src.linux-i686-2.4/fortranobject.c' to sources. adding '/tmp/tmp9spUor/src.linux-i686-2.4' to include_dirs. copying /usr/lib/python2.4/site-packages/numpy/f2py/src/fortranobject.c -> /tmp/tmp9spUor/src.linux-i686-2.4 copying /usr/lib/python2.4/site-packages/numpy/f2py/src/fortranobject.h -> /tmp/tmp9spUor/src.linux-i686-2.4 adding '/tmp/tmp9spUor/src.linux-i686-2.4/mebdfdae-f2pywrappers.f' to sources. 
running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext customize GnuFCompiler Found executable /usr/bin/g77 gnu: no Fortran 90 compiler found gnu: no Fortran 90 compiler found customize GnuFCompiler gnu: no Fortran 90 compiler found gnu: no Fortran 90 compiler found customize GnuFCompiler using build_ext building 'mebdfdae' extension compiling C sources C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -march=i586 -mcpu=i686 -fmessage-length=0 -Wall -g -fPIC creating /tmp/tmp9spUor/tmp creating /tmp/tmp9spUor/tmp/tmp9spUor creating /tmp/tmp9spUor/tmp/tmp9spUor/src.linux-i686-2.4 compile options: '-I/tmp/tmp9spUor/src.linux-i686-2.4 -I/usr/lib/python2.4/site-packages/numpy/core/include -I/usr/include/python2.4 -c' gcc: /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:599: error: redefinition of `n_cb_capi' /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:599: error: `n_cb_capi' previously declared here /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c: In function `cb_pderv_in_pset__user__routines': /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:599: error: redeclaration of `n_cb_capi' /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:599: error: `n_cb_capi' previously declared here /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:608: error: redeclaration of `n' /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:607: error: `n' previously declared here /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:607: warning: unused variable `n' /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c: In function `f2py_rout_mebdfdae_mebdf': /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1467: error: `f_typedef' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1467: error: (Each undeclared identifier is reported only once /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1467: error: for each function it appears in.) 
/tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1467: error: syntax error before "f_cptr" /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1472: error: `pderv_typedef' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1472: error: syntax error before "pderv_cptr" /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1477: error: `mas_typedef' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1477: error: syntax error before "mas_cptr" /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1555: error: `pderv_cptr' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1557: error: `pderv' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1560: error: `pderv_nofargs' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1561: error: syntax error at '#' token /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1561: error: syntax error at '#' token /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1561: error: `maxnofargs' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1561: error: syntax error at '#' token /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1561: error: syntax error at '#' token /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1561: error: `nofoptargs' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1572: error: `mas_cptr' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1574: error: `mas' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1577: error: `mas_nofargs' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1578: error: syntax error at '#' token /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1578: error: syntax error at '#' token /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1578: error: syntax error at '#' token /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1578: error: syntax error at '#' token /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1612: error: `f_cptr' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1614: error: `f' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1617: error: `f_nofargs' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1618: error: syntax error at '#' token /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1618: error: syntax error at '#' token /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1618: error: syntax error at '#' token /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1618: error: syntax error at '#' token /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c: In function `f2py_rout_mebdfdae_ovdriv': /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2020: error: `f_typedef' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2020: error: syntax error before "f_cptr" /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2025: error: `pderv_typedef' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2025: error: syntax error before "pderv_cptr" /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2030: error: `mas_typedef' undeclared (first use in this function) 
/tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2030: error: syntax error before "mas_cptr" /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2164: error: `pderv_cptr' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2166: error: `pderv' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2169: error: `pderv_nofargs' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2170: error: syntax error at '#' token /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2170: error: syntax error at '#' token /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2170: error: `maxnofargs' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2170: error: syntax error at '#' token /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2170: error: syntax error at '#' token /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2170: error: `nofoptargs' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2184: error: `mas_cptr' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2186: error: `mas' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2189: error: `mas_nofargs' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2190: error: syntax error at '#' token /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2190: error: syntax error at '#' token /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2190: error: syntax error at '#' token /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2190: error: syntax error at '#' token /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2201: error: `f_cptr' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2203: error: `f' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2206: error: `f_nofargs' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2207: error: syntax error at '#' token /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2207: error: syntax error at '#' token /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2207: error: syntax error at '#' token /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2207: error: syntax error at '#' token /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c: In function `f2py_rout_mebdfdae_pset': /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:3027: error: `mas_typedef' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:3027: error: syntax error before "mas_cptr" /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:3228: error: `mas_cptr' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:3230: error: `mas' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:3233: error: `mas_nofargs' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:3234: error: syntax error at '#' token /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:3234: error: syntax error at '#' token /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:3234: error: `maxnofargs' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:3234: error: syntax error at '#' token /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:3234: error: syntax error at 
'#' token /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:3234: error: `nofoptargs' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c: In function `f2py_rout_mebdfdae_stiff': /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:5829: error: `pderv_typedef' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:5829: error: syntax error before "pderv_cptr" /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:5920: error: `pderv_cptr' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:5922: error: `pderv' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:5925: error: `pderv_nofargs' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:5926: error: syntax error at '#' token /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:5926: error: syntax error at '#' token /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:5926: error: `maxnofargs' undeclared (first use in this function) /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:5926: error: syntax error at '#' token /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:5926: error: syntax error at '#' token /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:5926: error: `nofoptargs' undeclared (first use in this function)
error: Command "gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -march=i586 -mcpu=i686 -fmessage-length=0 -Wall -g -fPIC -I/tmp/tmp9spUor/src.linux-i686-2.4 -I/usr/lib/python2.4/site-packages/numpy/core/include -I/usr/include/python2.4 -c /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c -o /tmp/tmp9spUor/tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.o" failed with exit status 1

How can I fix this problem ?

Nils

From matthew.brett at gmail.com  Thu Oct  9 14:13:03 2008
From: matthew.brett at gmail.com (Matthew Brett)
Date: Thu, 9 Oct 2008 11:13:03 -0700
Subject: [SciPy-user] Reference to algorithm for matrix rank
Message-ID: <1e2af89e0810091113o6af11cb7ue23a792e85be7f42@mail.gmail.com>

Hi,

I wanted to write a generic matrix rank algorithm. The general form
seems to be standard:

def matrix_rank(M, tol):
    S = svd(M, compute_uv=False)
    return np.sum(S > tol)

but what I can't find is some citable reference for a general way to
choose 'tol'. Does anyone know of the right source for this?

Thanks a lot,

Matthew

From bblais at bryant.edu  Thu Oct  9 14:26:50 2008
From: bblais at bryant.edu (Brian Blais)
Date: Thu, 9 Oct 2008 14:26:50 -0400
Subject: [SciPy-user] why does linalg.sqrtm return an array?
Message-ID: <7F046513-107E-4F6D-95F3-A160CF26A800@bryant.edu>

Hello,

I was just bitten by the fact that sqrtm returns an array, not a matrix.
Every time I tried to test it with Q*Q, or Q.T*Q, I got very strange
results (I was just about to post that it was seriously broken). Then I
found out that it returns an array! Why is that? It would seem that if
you are doing a matrix square root, you are working with matrices mostly
in that calculation, so a matrix is the consistent thing to return (and
perhaps the least surprising thing to return). Is there a reason for
this?

thanks,

Brian Blais

--
Brian Blais
bblais at bryant.edu
http://web.bryant.edu/~bblais

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nwagner at iam.uni-stuttgart.de  Thu Oct  9 14:30:40 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Thu, 09 Oct 2008 20:30:40 +0200
Subject: [SciPy-user] why does linalg.sqrtm return an array?
In-Reply-To: <7F046513-107E-4F6D-95F3-A160CF26A800@bryant.edu>
References: <7F046513-107E-4F6D-95F3-A160CF26A800@bryant.edu>
Message-ID:

On Thu, 9 Oct 2008 14:26:50 -0400 Brian Blais wrote:
> Hello,
>
> I was just bitten by the fact that sqrtm returns an array, not a
> matrix. Every time I tried to test it with Q*Q, or Q.T*Q, I got very
> strange results (I was just about to post that it was seriously
> broken). Then I found out that it returns an array! Why is that? It
> would seem that if you are doing a matrix square root, you are working
> with matrices mostly in that calculation, so a matrix is the
> consistent thing to return (and perhaps the least surprising thing to
> return). Is there a reason for this?
>
> thanks,
>
> Brian Blais
>
> --
> Brian Blais
> bblais at bryant.edu
> http://web.bryant.edu/~bblais

This is a known problem. See
http://projects.scipy.org/scipy/scipy/ticket/585

Nils

From Dharhas.Pothina at twdb.state.tx.us  Thu Oct  9 14:42:49 2008
From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina)
Date: Thu, 09 Oct 2008 13:42:49 -0500
Subject: [SciPy-user] Calculating daily, monthly and seasonal averages of
	hourly time series data.
In-Reply-To: <200810091317.29158.pgmdevlist@gmail.com>
References: <48ECC9A6.63BA.009B.0@twdb.state.tx.us>
	<1223563189.30896.47.camel@poste5>
	<48EDF2C2.63BA.009B.0@twdb.state.tx.us>
	<200810091317.29158.pgmdevlist@gmail.com>
Message-ID: <48EE0A59.63BA.009B.0@twdb.state.tx.us>

Pierre,

I did already find the documentation you linked to, and it looks like an
excellent reference for the individual functions and classes. What I felt
was missing was an overview of how to use the main features of the
package in the form of examples, tutorials etc.

I've never tried compiling the scipy sources from svn. If I can manage
it, I'll try the timeseries toolkit out. Once I start using it, I'd be
happy to document my experiences and see if I can come up with some
examples/tutorials based on what I am doing.

fyi the website says Numpy 1.2.1 or later.

- dharhas

>>> Pierre GM  10/9/2008 12:17 PM >>>
Dharhas,
What you need for timeseries is a recent numpy (>=1.2.0) and
scipy (>=0.7svn...). If you can compile the latest sources from SVN,
you're good to go.
The documentation is here:
http://pytseries.sourceforge.net/
If it doesn't cover areas you need, let us know and help us by writing
some examples/tutorial/whatever.

As an extra comment on Lionel's answers:
* Once you have a time series, you can directly mask some data without
having to recreate a series:

>>> series = ts.time_series(np.random.rand(365), start_date=ts.now('D'))
>>> # Mask the first 10 elements
>>> series[:10] = ma.masked
>>> # Mask the data in August
>>> series[series.month==8] = ma.masked

_______________________________________________
SciPy-user mailing list
SciPy-user at scipy.org
http://projects.scipy.org/mailman/listinfo/scipy-user

From bolme1234 at comcast.net  Thu Oct  9 15:06:33 2008
From: bolme1234 at comcast.net (David Bolme)
Date: Thu, 9 Oct 2008 13:06:33 -0600
Subject: [SciPy-user] Extrema finding
In-Reply-To:
References: <88e473830810090521qe247f41l9649f3c460e95d38@mail.gmail.com>
Message-ID:

This is a version for finding extrema in a 2D array. It requires the
ndimage maximum/minimum filters. For the one dimensional case, substitute
size=[3] or use maximum_filter1d. I keep writing this code over and over.
I am surprised that there is not a general purpose extrema finding
routine in scipy.
from numpy import nonzero
from scipy.ndimage import maximum_filter, minimum_filter

def localMax(mat):
    mx = maximum_filter(mat, size=[3,3])
    mn = minimum_filter(mat, size=[3,3])
    # (mat == mx) true if pixel is equal to the local max
    # The next computation suppresses responses where
    # the function is flat.
    local_maxima = ((mat == mx) & (mat != mn))
    # Get the indices of the maxima
    extrema = nonzero(local_maxima)
    return extrema

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Dharhas.Pothina at twdb.state.tx.us  Thu Oct  9 15:27:28 2008
From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina)
Date: Thu, 09 Oct 2008 14:27:28 -0500
Subject: [SciPy-user] numpy/scipy svn installation question.
Message-ID: <48EE14CF.63BA.009B.0@twdb.state.tx.us>

Hi,

I want to install the svn versions of numpy/scipy (and the 0.98 version
of matplotlib) so that I can install the scikits.timeseries package. I
have an existing installation of older versions of numpy/scipy/matplotlib
installed through the Fedora package manager.

From what I understand from the website, I should be able to build
numpy & scipy by using

python setup.py install

Is there a way to install the svn versions alongside my existing
installations? Assuming I can do that, how do I choose which version to
use in a script?

thanks

- dharhas

From robert.kern at gmail.com  Thu Oct  9 19:34:40 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 9 Oct 2008 18:34:40 -0500
Subject: [SciPy-user] why does linalg.sqrtm return an array?
In-Reply-To: <7F046513-107E-4F6D-95F3-A160CF26A800@bryant.edu>
References: <7F046513-107E-4F6D-95F3-A160CF26A800@bryant.edu>
Message-ID: <3d375d730810091634v3e751fb5k722735ded08a601c@mail.gmail.com>

On Thu, Oct 9, 2008 at 13:26, Brian Blais wrote:
> Hello,
> I was just bitten by the fact that sqrtm returns an array, not a matrix.
> Every time I tried to test it with Q*Q, or Q.T*Q I got very strange results
> (I was just about to post that it was seriously broken). Then I found out
> that it returns an array! Why is that? It would seem that if you are doing
> a matrix square root, you are working with matrices mostly in that
> calculation, so a matrix is the consistent thing to return (and perhaps the
> least surprising thing to return). Is there a reason for this?

Arguably, it should return a matrix object if given a matrix object, but
it should never return a matrix object if given a pure ndarray. Just
because one is doing matrix operations doesn't mean one wants to use
matrix objects.

But the fact that it returns an ndarray when given a matrix object is a
bug, and you're welcome to fix it. It's not always easy to do, though, so
you will find many such functions which do not preserve the type of the
input(s).

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From robert.kern at gmail.com  Thu Oct  9 19:40:06 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 9 Oct 2008 18:40:06 -0500
Subject: [SciPy-user] Reference to algorithm for matrix rank
In-Reply-To: <1e2af89e0810091113o6af11cb7ue23a792e85be7f42@mail.gmail.com>
References: <1e2af89e0810091113o6af11cb7ue23a792e85be7f42@mail.gmail.com>
Message-ID: <3d375d730810091640i414d0725scd4374f0037b0499@mail.gmail.com>

On Thu, Oct 9, 2008 at 13:13, Matthew Brett wrote:
> Hi,
>
> I wanted to write a generic matrix rank algorithm.
> The general form seems to be standard:
>
> def matrix_rank(M, tol):
>     S = svd(M, compute_uv=False)
>     return np.sum(S > tol)
>
> but what I can't find is some citable reference for a general way to
> choose 'tol'. Does anyone know of the right source for this?

You should get the book _Matrix Computations_ by Golub and van Loan.

You actually want tol to be relative to S.max(), not an absolute
tolerance. I like this:

  np.sum(S > S.max() * np.finfo(M.dtype).eps)

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From nwagner at iam.uni-stuttgart.de  Fri Oct 10 02:25:31 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Fri, 10 Oct 2008 08:25:31 +0200
Subject: [SciPy-user] Trouble with f2py
Message-ID:

Hi all,

If I run f2py I get

f2py
Traceback (most recent call last):
  File "/data/home/nwagner/local/bin/f2py", line 20, in ?
    from numpy.f2py import main
  File "/data/home/nwagner/local/lib/python2.5/site-packages/numpy/__init__.py", line 125, in ?
    import add_newdocs
  File "/data/home/nwagner/local/lib/python2.5/site-packages/numpy/add_newdocs.py", line 9, in ?
    from lib import add_newdoc
  File "/data/home/nwagner/local/lib/python2.5/site-packages/numpy/lib/__init__.py", line 4, in ?
    from type_check import *
  File "/data/home/nwagner/local/lib/python2.5/site-packages/numpy/lib/type_check.py", line 8, in ?
    import numpy.core.numeric as _nx
  File "/data/home/nwagner/local/lib/python2.5/site-packages/numpy/core/__init__.py", line 5, in ?
    import multiarray
ImportError: /data/home/nwagner/local/lib/python2.5/site-packages/numpy/core/multiarray.so: undefined symbol: PyUnicodeUCS2_FromUnicode

How can I resolve this problem ?

Nils

From robert.kern at gmail.com  Fri Oct 10 02:29:40 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 10 Oct 2008 01:29:40 -0500
Subject: [SciPy-user] Trouble with f2py
In-Reply-To:
References:
Message-ID: <3d375d730810092329nf441f9l483e6181fb02aeeb@mail.gmail.com>

On Fri, Oct 10, 2008 at 01:25, Nils Wagner wrote:
> Hi all,
>
> If I run f2py I get
>
> f2py
> Traceback (most recent call last):
>   File "/data/home/nwagner/local/bin/f2py", line 20, in ?
>     from numpy.f2py import main
>   File "/data/home/nwagner/local/lib/python2.5/site-packages/numpy/__init__.py", line 125, in ?
>     import add_newdocs
>   File "/data/home/nwagner/local/lib/python2.5/site-packages/numpy/add_newdocs.py", line 9, in ?
>     from lib import add_newdoc
>   File "/data/home/nwagner/local/lib/python2.5/site-packages/numpy/lib/__init__.py", line 4, in ?
>     from type_check import *
>   File "/data/home/nwagner/local/lib/python2.5/site-packages/numpy/lib/type_check.py", line 8, in ?
>     import numpy.core.numeric as _nx
>   File "/data/home/nwagner/local/lib/python2.5/site-packages/numpy/core/__init__.py", line 5, in ?
>     import multiarray
> ImportError: /data/home/nwagner/local/lib/python2.5/site-packages/numpy/core/multiarray.so: undefined symbol: PyUnicodeUCS2_FromUnicode
>
> How can I resolve this problem ?

It looks like you installed a numpy binary built with a Python built
with UCS2 Unicode support while your Python binary was built with UCS4
Unicode support. Where did you get your Python? Where did you get your
numpy?

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From nwagner at iam.uni-stuttgart.de  Fri Oct 10 02:46:37 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Fri, 10 Oct 2008 08:46:37 +0200
Subject: [SciPy-user] Trouble with f2py
In-Reply-To: <3d375d730810092329nf441f9l483e6181fb02aeeb@mail.gmail.com>
References: <3d375d730810092329nf441f9l483e6181fb02aeeb@mail.gmail.com>
Message-ID:

On Fri, 10 Oct 2008 01:29:40 -0500 "Robert Kern" wrote:
> On Fri, Oct 10, 2008 at 01:25, Nils Wagner wrote:
>> Hi all,
>>
>> If I run f2py I get
>>
>> f2py
>> Traceback (most recent call last):
>>   File "/data/home/nwagner/local/bin/f2py", line 20, in ?
>>     from numpy.f2py import main
>>   File "/data/home/nwagner/local/lib/python2.5/site-packages/numpy/__init__.py", line 125, in ?
>>     import add_newdocs
>>   File "/data/home/nwagner/local/lib/python2.5/site-packages/numpy/add_newdocs.py", line 9, in ?
>>     from lib import add_newdoc
>>   File "/data/home/nwagner/local/lib/python2.5/site-packages/numpy/lib/__init__.py", line 4, in ?
>>     from type_check import *
>>   File "/data/home/nwagner/local/lib/python2.5/site-packages/numpy/lib/type_check.py", line 8, in ?
>>     import numpy.core.numeric as _nx
>>   File "/data/home/nwagner/local/lib/python2.5/site-packages/numpy/core/__init__.py", line 5, in ?
>>     import multiarray
>> ImportError: /data/home/nwagner/local/lib/python2.5/site-packages/numpy/core/multiarray.so: undefined symbol: PyUnicodeUCS2_FromUnicode
>>
>> How can I resolve this problem ?
>
> It looks like you installed a numpy binary built with a Python built
> with UCS2 Unicode support while your Python binary was built with UCS4
> Unicode support. Where did you get your Python? Where did you get your
> numpy?

Hi Robert,

I have modified the first line in f2py from

#!/usr/bin/env python

to

#!/usr/bin/env /data/home/nwagner/local/bin/python

Now it works. Sorry for the noise.

BTW, can you reproduce the problem concerning mebdfdae.f and f2py (my
previous mail on the list)?

Nils

From washakie at gmail.com  Fri Oct 10 08:45:02 2008
From: washakie at gmail.com (John [H2O])
Date: Fri, 10 Oct 2008 05:45:02 -0700 (PDT)
Subject: [SciPy-user] scipy sclicing
Message-ID: <19917625.post@talk.nabble.com>

Could someone explain what I'm doing wrong here?

>>> i = array(range(140,149))
>>> j = array(range(5,20))
>>> i
array([140, 141, 142, 143, 144, 145, 146, 147, 148])
>>> j
array([ 5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19])
>>> a = acc[i,j]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: shape mismatch: objects cannot be broadcast to a single shape
>>> a = acc[140:148,5:19]
>>>

I'm just following the indexing arrays usage on this page:
http://pages.physics.cornell.edu/~myers/teaching/ComputationalMethods/python/arrays.html

How come I can't use arrays to index my array?

Thanks!
--
View this message in context: http://www.nabble.com/scipy-sclicing-tp19917625p19917625.html
Sent from the Scipy-User mailing list archive at Nabble.com.

From stefan at sun.ac.za  Fri Oct 10 09:28:24 2008
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Fri, 10 Oct 2008 15:28:24 +0200
Subject: [SciPy-user] scipy sclicing
In-Reply-To: <19917625.post@talk.nabble.com>
References: <19917625.post@talk.nabble.com>
Message-ID: <9457e7c80810100628x246291e4o23f5ae0bfb7885ed@mail.gmail.com>

2008/10/10 John [H2O] :
>
> Could someone explain what I'm doing wrong here?
>
>>>> i = array(range(140,149))
>>>> j = array(range(5,20))
>>>> i
> array([140, 141, 142, 143, 144, 145, 146, 147, 148])
>>>> j
> array([ 5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19])
>>>> a = acc[i,j]
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> ValueError: shape mismatch: objects cannot be broadcast to a single shape

The shapes of your indices,

In [5]: a.shape
Out[5]: (9,)

In [6]: b.shape
Out[6]: (15,)

cannot be broadcast to a single shape. Either give the same number of
indices in i and j, or use i[:,None] and j.

Cheers
Stéfan

From pearu at cens.ioc.ee Fri Oct 10 09:03:49 2008
From: pearu at cens.ioc.ee (Pearu Peterson)
Date: Fri, 10 Oct 2008 16:03:49 +0300 (EEST)
Subject: [SciPy-user] Wrapping mebdfdae.f
In-Reply-To: 
References: 
Message-ID: <59715.172.17.0.4.1223643829.squirrel@cens.ioc.ee>

Hi,

The problem is that some Fortran functions in mebdfdae.f take an
external pderv argument whose signature f2py cannot determine
automatically. If you really need to call such Fortran functions then
you need to create a pyf file and define the pderv signature there.
Otherwise, just wrap only those functions that you need to access from
Python:

  f2py -c -m mebdfdae mebdfdae.f only: <names of the required functions> :

HTH,
Pearu

On Thu, October 9, 2008 8:35 pm, Nils Wagner wrote:
> Hi all,
>
> I tried to wrap mebdfdae.f
> http://www.ma.ic.ac.uk/~jcash/IVP_software_BSD/mebdfdae.f
>
> It failed with
>
> f2py -c -m mebdfdae mebdfdae.f
> running build
> running config_cc
> unifing config_cc, config, build_clib, build_ext, build commands --compiler options
> running config_fc
> unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
> running build_src
> building extension "mebdfdae" sources
> f2py options: []
> f2py:> /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c
> creating /tmp/tmp9spUor
> creating /tmp/tmp9spUor/src.linux-i686-2.4
> Reading fortran codes...
> Reading file 'mebdfdae.f' (format:fix,strict)
> Post-processing...
> Block: mebdfdae
> Block: mebdf
> Block: ovdriv
> Block: interp
> Block: coset
> Block: pset
> Block: pderv
> Block: f
> Block: dec
> Block: sol
> Block: dgbfa
> Block: daxpy
> Block: dscal
> Block: idamax
> Block: dgbsl
> Block: ddot
> Block: errors
> Block: prdict
> Block: f
> Block: itrat2
> Block: f
> Block: stiff
> Block: f
> Block: mas
> Block: rscale
> Block: cpyary
> Block: hchose
> Block: dlamch
> Block: dlamc1
> Block: dlamc2
> Block: dlamc3
> Block: dlamc4
> Block: dlamc5
> Block: lsame
> Post-processing (stage 2)...
> Building modules...
> Constructing call-back function > "cb_f_in_pset__user__routines" > getarrdims:warning: assumed shape array, using 0 instead > of '*' > getarrdims:warning: assumed shape array, using 0 instead > of '*' > def f(t,y,wrkspc,ipar,rpar,ierr,[n]): return > Constructing call-back function > "cb_pderv_in_pset__user__routines" > getarrdims:warning: assumed shape array, using 0 instead > of '*' > getarrdims:warning: assumed shape array, using 0 instead > of '*' > getarrdims:warning: assumed shape array, using 0 instead > of '*' > def pderv(t,y,pwcopy,ipar,rpar,ierr,[n,n]): > return > Constructing call-back function > "cb_f_in_prdict__user__routines" > getarrdims:warning: assumed shape array, using 0 instead > of '*' > getarrdims:warning: assumed shape array, using 0 instead > of '*' > def f(t,y,yprime,ipar,rpar,ierr,[n]): return > Constructing call-back function > "cb_f_in_itrat2__user__routines" > getarrdims:warning: assumed shape array, using 0 instead > of '*' > getarrdims:warning: assumed shape array, using 0 instead > of '*' > def f(t,save1,save2,ipar,rpar,ierr,[n]): return > Constructing call-back function > "cb_f_in_stiff__user__routines" > getarrdims:warning: assumed shape array, using 0 instead > of '*' > getarrdims:warning: assumed shape array, using 0 instead > of '*' > def f(t,y,save1,ipar,rpar,ierr,[n]): return > Constructing call-back function > "cb_mas_in_stiff__user__routines" > getarrdims:warning: assumed shape array, using 0 instead > of '*' > getarrdims:warning: assumed shape array, using 0 instead > of '*' > getarrdims:warning: assumed shape array, using 0 instead > of '*' > def mas(n,am,ldmas,ipar,rpar,ierr): return > Building module "mebdfdae"... > Constructing wrapper function "mebdf"... > routsign2map: Confused: function mebdf has externals ['f', > 'pderv', 'mas'] but no "use" statement. > getarrdims:warning: assumed shape array, using 0 instead > of '*' > getarrdims:warning: assumed shape array, using 0 instead > of '*' > sign2map: Confused: external f is not in lcb_map[]. > append_needs: unknown need 'f' > append_needs: unknown need 'f' > sign2map: Confused: external pderv is not in lcb_map[]. > append_needs: unknown need 'pderv' > append_needs: unknown need 'pderv' > sign2map: Confused: external mas is not in lcb_map[]. > append_needs: unknown need 'mas' > append_needs: unknown need 'mas' > mebdf(t0,ho,y0,tout,tend,mf,idid,lout,work,iwork,mbnd,masbnd,maxder,itol,rtol,atol,rpar,ipar,f,pderv,mas,ierr,[n,lwork,liwork,f_extra_args,pderv_extra_args,mas_extra_args]) > Constructing wrapper function "ovdriv"... > routsign2map: Confused: function ovdriv has externals > ['f', 'pderv', 'mas'] but no "use" statement. > getarrdims:warning: assumed shape array, using 0 instead > of '*' > getarrdims:warning: assumed shape array, using 0 instead > of '*' > getarrdims:warning: assumed shape array, using 0 instead > of '*' > getarrdims:warning: assumed shape array, using 0 instead > of '*' > getarrdims:warning: assumed shape array, using 0 instead > of '*' > sign2map: Confused: external f is not in lcb_map[]. > append_needs: unknown need 'f' > append_needs: unknown need 'f' > sign2map: Confused: external pderv is not in lcb_map[]. > append_needs: unknown need 'pderv' > append_needs: unknown need 'pderv' > sign2map: Confused: external mas is not in lcb_map[]. 
> append_needs: unknown need 'mas' > append_needs: unknown need 'mas' > ovdriv(t0,ho,y0,tout,tend,mf,idid,lout,y,yhold,ynhold,ymax,errors,save1,save2,scale,arh,pw,pwcopy,am,ipiv,mbnd,masbnd,nind1,nind2,nind3,maxder,itol,rtol,atol,rpar,ipar,f,pderv,mas,nqused,nstep,nfail,nfe,nje,ndec,nbsol,npset,ncoset,maxord,maxstp,uround,hused,epsjac,ierr,[n,f_extra_args,pderv_extra_args,mas_extra_args]) > Constructing wrapper function "interp"... > interp(jstart,h,t,y,tout,y0,[n]) > Constructing wrapper function "coset"... > coset(nq,el,elst,tq,ncoset,maxord) > Constructing wrapper function "pset"... > sign2map: Confused: external mas is not in > lcb_map['pderv', 'f']. > append_needs: unknown need 'mas' > append_needs: unknown need 'mas' > getarrdims:warning: assumed shape array, using 0 instead > of '*' > getarrdims:warning: assumed shape array, using 0 instead > of '*' > getarrdims:warning: assumed shape array, using 0 instead > of '*' > getarrdims:warning: assumed shape array, using 0 instead > of '*' > pset(y,h,t,uround,epsjac,con,miter,mbnd,masbnd,nind1,nind2,nind3,ier,f,pderv,mas,nrenew,ymax,save1,save2,pw,pwcopy,am,wrkspc,ipiv,itol,rtol,atol,npset,nje,nfe,ndec,ipar,rpar,ierr,[n,f_extra_args,pderv_extra_args,mas_extra_args]) > Constructing wrapper function "dec"... > dec(a,ip,ier,[n,ndim]) > Constructing wrapper function "sol"... > sol(a,b,ip,[n,ndim]) > Constructing wrapper function "dgbfa"... > dgbfa(abd,n,ml,mu,ipvt,info,[lda]) > Constructing wrapper function "daxpy"... > daxpy(n,da,dx,incx,dy,incy) > Constructing wrapper function "dscal"... > dscal(n,da,dx,incx) > Creating wrapper for Fortran function > "idamax"("idamax")... > Constructing wrapper function "idamax"... > idamax = idamax(n,dx,incx) > Constructing wrapper function "dgbsl"... > getarrdims:warning: assumed shape array, using 0 instead > of '*' > getarrdims:warning: assumed shape array, using 0 instead > of '*' > getarrdims:warning: assumed shape array, using 0 instead > of '*' > dgbsl(abd,n,ml,mu,ipvt,b,job,[lda]) > Creating wrapper for Fortran function > "ddot"("ddot")... > Constructing wrapper function "ddot"... > ddot = ddot(n,dx,incx,dy,incy) > Constructing wrapper function "errors"... > errors(n,tq,edn,e,eup,bnd,eddn) > Constructing wrapper function "prdict"... > getarrdims:warning: assumed shape array, using 0 instead > of '*' > getarrdims:warning: assumed shape array, using 0 instead > of '*' > prdict(t,h,y,l,yprime,nfe,ipar,rpar,f,ierr,[n,f_extra_args]) > Constructing wrapper function "itrat2"... > getarrdims:warning: assumed shape array, using 0 instead > of '*' > getarrdims:warning: assumed shape array, using 0 instead > of '*' > getarrdims:warning: assumed shape array, using 0 instead > of '*' > itrat2(qqq,y,t,hbeta,errbnd,arh,crate,tcrate,m,worked,ymax,error,save1,save2,scale,pw,mf,mbnd,am,masbnd,nind1,nind2,nind3,ipiv,lmb,itol,rtol,atol,ipar,rpar,hused,nbsol,nfe,nqused,f,ierr,[n,f_extra_args]) > Constructing wrapper function "stiff"... > getarrdims:warning: assumed shape array, using 0 instead > of '*' > getarrdims:warning: assumed shape array, using 0 instead > of '*' > getarrdims:warning: assumed shape array, using 0 instead > of '*' > getarrdims:warning: assumed shape array, using 0 instead > of '*' > getarrdims:warning: assumed shape array, using 0 instead > of '*' > sign2map: Confused: external pderv is not in > lcb_map['mas', 'f']. 
> append_needs: unknown need 'pderv' > append_needs: unknown need 'pderv' > stiff(h,hmax,hmin,jstart,kflag,mf,mbnd,masbnd,nind1,nind2,nind3,t,tout,tend,y,ymax,error,save1,save2,scale,pw,pwcopy,am,yhold,ynhold,arh,ipiv,lout,maxder,itol,rtol,atol,rpar,ipar,f,pderv,mas,nqused,nstep,nfail,nfe,nje,ndec,nbsol,npset,ncoset,maxord,maxstp,uround,epsjac,hused,ierr,[n,f_extra_args,pderv_extra_args,mas_extra_args]) > Constructing wrapper function "rscale"... > rscale(l,rh,y,[n]) > Constructing wrapper function "cpyary"... > cpyary(source,target,[nelem]) > Constructing wrapper function "hchose"... > hchose(rh,h,ovride) > Creating wrapper for Fortran function > "dlamch"("dlamch")... > Constructing wrapper function "dlamch"... > dlamch = dlamch(cmach) > Constructing wrapper function "dlamc1"... > dlamc1(beta,t,rnd,ieee1) > Constructing wrapper function "dlamc2"... > dlamc2(beta,t,rnd,eps,emin,rmin,emax,rmax) > Creating wrapper for Fortran function > "dlamc3"("dlamc3")... > Constructing wrapper function "dlamc3"... > dlamc3 = dlamc3(a,b) > Constructing wrapper function "dlamc4"... > dlamc4(emin,start,base) > Constructing wrapper function "dlamc5"... > dlamc5(beta,p,emin,ieee,emax,rmax) > Creating wrapper for Fortran function > "lsame"("lsame")... > Constructing wrapper function "lsame"... > lsame = lsame(ca,cb) > Constructing COMMON block support for > "stpsze"... > hstpsz > Wrote C/API module "mebdfdae" to file > "/tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c" > Fortran 77 wrappers are saved to > "/tmp/tmp9spUor/src.linux-i686-2.4/mebdfdae-f2pywrappers.f" > adding > '/tmp/tmp9spUor/src.linux-i686-2.4/fortranobject.c' to > sources. > adding '/tmp/tmp9spUor/src.linux-i686-2.4' to > include_dirs. > copying > /usr/lib/python2.4/site-packages/numpy/f2py/src/fortranobject.c > -> /tmp/tmp9spUor/src.linux-i686-2.4 > copying > /usr/lib/python2.4/site-packages/numpy/f2py/src/fortranobject.h > -> /tmp/tmp9spUor/src.linux-i686-2.4 > adding > '/tmp/tmp9spUor/src.linux-i686-2.4/mebdfdae-f2pywrappers.f' > to sources. 
> running build_ext > customize UnixCCompiler > customize UnixCCompiler using build_ext > customize GnuFCompiler > Found executable /usr/bin/g77 > gnu: no Fortran 90 compiler found > gnu: no Fortran 90 compiler found > customize GnuFCompiler > gnu: no Fortran 90 compiler found > gnu: no Fortran 90 compiler found > customize GnuFCompiler using build_ext > building 'mebdfdae' extension > compiling C sources > C compiler: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 > -march=i586 -mcpu=i686 -fmessage-length=0 -Wall -g -fPIC > > creating /tmp/tmp9spUor/tmp > creating /tmp/tmp9spUor/tmp/tmp9spUor > creating /tmp/tmp9spUor/tmp/tmp9spUor/src.linux-i686-2.4 > compile options: '-I/tmp/tmp9spUor/src.linux-i686-2.4 > -I/usr/lib/python2.4/site-packages/numpy/core/include > -I/usr/include/python2.4 -c' > gcc: /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:599: > error: redefinition of `n_cb_capi' > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:599: > error: `n_cb_capi' previously declared here > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c: In > function `cb_pderv_in_pset__user__routines': > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:599: > error: redeclaration of `n_cb_capi' > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:599: > error: `n_cb_capi' previously declared here > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:608: > error: redeclaration of `n' > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:607: > error: `n' previously declared here > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:607: > warning: unused variable `n' > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c: In > function `f2py_rout_mebdfdae_mebdf': > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1467: > error: `f_typedef' undeclared (first use in this function) > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1467: > error: (Each undeclared identifier is reported only once > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1467: > error: for each function it appears in.) 
> /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1467: > error: syntax error before "f_cptr" > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1472: > error: `pderv_typedef' undeclared (first use in this > function) > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1472: > error: syntax error before "pderv_cptr" > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1477: > error: `mas_typedef' undeclared (first use in this > function) > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1477: > error: syntax error before "mas_cptr" > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1555: > error: `pderv_cptr' undeclared (first use in this > function) > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1557: > error: `pderv' undeclared (first use in this function) > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1560: > error: `pderv_nofargs' undeclared (first use in this > function) > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1561: > error: syntax error at '#' token > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1561: > error: syntax error at '#' token > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1561: > error: `maxnofargs' undeclared (first use in this > function) > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1561: > error: syntax error at '#' token > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1561: > error: syntax error at '#' token > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1561: > error: `nofoptargs' undeclared (first use in this > function) > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1572: > error: `mas_cptr' undeclared (first use in this function) > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1574: > error: `mas' undeclared (first use in this function) > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1577: > error: `mas_nofargs' undeclared (first use in this > function) > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1578: > error: syntax error at '#' token > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1578: > error: syntax error at '#' token > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1578: > error: syntax error at '#' token > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1578: > error: syntax error at '#' token > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1612: > error: `f_cptr' undeclared (first use in this function) > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1614: > error: `f' undeclared (first use in this function) > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1617: > error: `f_nofargs' undeclared (first use in this function) > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1618: > error: syntax error at '#' token > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1618: > error: syntax error at '#' token > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1618: > error: syntax error at '#' token > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:1618: > error: syntax error at '#' token > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c: In > function `f2py_rout_mebdfdae_ovdriv': > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2020: > error: `f_typedef' undeclared (first use in this function) > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2020: > error: syntax error before "f_cptr" > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2025: > error: `pderv_typedef' undeclared (first use in this > function) > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2025: > error: syntax error before "pderv_cptr" > 
/tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2030: > error: `mas_typedef' undeclared (first use in this > function) > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2030: > error: syntax error before "mas_cptr" > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2164: > error: `pderv_cptr' undeclared (first use in this > function) > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2166: > error: `pderv' undeclared (first use in this function) > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2169: > error: `pderv_nofargs' undeclared (first use in this > function) > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2170: > error: syntax error at '#' token > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2170: > error: syntax error at '#' token > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2170: > error: `maxnofargs' undeclared (first use in this > function) > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2170: > error: syntax error at '#' token > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2170: > error: syntax error at '#' token > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2170: > error: `nofoptargs' undeclared (first use in this > function) > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2184: > error: `mas_cptr' undeclared (first use in this function) > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2186: > error: `mas' undeclared (first use in this function) > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2189: > error: `mas_nofargs' undeclared (first use in this > function) > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2190: > error: syntax error at '#' token > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2190: > error: syntax error at '#' token > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2190: > error: syntax error at '#' token > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2190: > error: syntax error at '#' token > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2201: > error: `f_cptr' undeclared (first use in this function) > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2203: > error: `f' undeclared (first use in this function) > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2206: > error: `f_nofargs' undeclared (first use in this function) > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2207: > error: syntax error at '#' token > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2207: > error: syntax error at '#' token > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2207: > error: syntax error at '#' token > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:2207: > error: syntax error at '#' token > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c: In > function `f2py_rout_mebdfdae_pset': > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:3027: > error: `mas_typedef' undeclared (first use in this > function) > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:3027: > error: syntax error before "mas_cptr" > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:3228: > error: `mas_cptr' undeclared (first use in this function) > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:3230: > error: `mas' undeclared (first use in this function) > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:3233: > error: `mas_nofargs' undeclared (first use in this > function) > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:3234: > error: syntax error at '#' token > /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:3234: > error: syntax error at '#' token > 
/tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:3234: error: `maxnofargs' undeclared (first use in this function)
> /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:3234: error: syntax error at '#' token
> /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:3234: error: syntax error at '#' token
> /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:3234: error: `nofoptargs' undeclared (first use in this function)
> /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c: In function `f2py_rout_mebdfdae_stiff':
> /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:5829: error: `pderv_typedef' undeclared (first use in this function)
> /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:5829: error: syntax error before "pderv_cptr"
> /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:5920: error: `pderv_cptr' undeclared (first use in this function)
> /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:5922: error: `pderv' undeclared (first use in this function)
> /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:5925: error: `pderv_nofargs' undeclared (first use in this function)
> /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:5926: error: syntax error at '#' token
> /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:5926: error: syntax error at '#' token
> /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:5926: error: `maxnofargs' undeclared (first use in this function)
> /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:5926: error: syntax error at '#' token
> /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:5926: error: syntax error at '#' token
> /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c:5926: error: `nofoptargs' undeclared (first use in this function)
> error: Command "gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -march=i586 -mcpu=i686 -fmessage-length=0 -Wall -g -fPIC -I/tmp/tmp9spUor/src.linux-i686-2.4 -I/usr/lib/python2.4/site-packages/numpy/core/include -I/usr/include/python2.4 -c /tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.c -o /tmp/tmp9spUor/tmp/tmp9spUor/src.linux-i686-2.4/mebdfdaemodule.o" failed with exit status 1
>
> How can I fix this problem?
>
> Nils
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From washakie at gmail.com Fri Oct 10 09:49:07 2008
From: washakie at gmail.com (John [H2O])
Date: Fri, 10 Oct 2008 06:49:07 -0700 (PDT)
Subject: [SciPy-user] scipy sclicing
In-Reply-To: <19917625.post@talk.nabble.com>
References: <19917625.post@talk.nabble.com>
Message-ID: <19918233.post@talk.nabble.com>

This seems to work:

def slize(X,i,j):
    X = X[i,:]
    X = X[:,j]
    return X

Problems with the approach?
-- 
View this message in context: http://www.nabble.com/scipy-sclicing-tp19917625p19918233.html
Sent from the Scipy-User mailing list archive at Nabble.com.

From pav at iki.fi Fri Oct 10 09:58:40 2008
From: pav at iki.fi (Pauli Virtanen)
Date: Fri, 10 Oct 2008 13:58:40 +0000 (UTC)
Subject: [SciPy-user] scipy sclicing
References: <19917625.post@talk.nabble.com>
Message-ID: 

Fri, 10 Oct 2008 05:45:02 -0700, John [H2O] wrote:
> Could someone explain what I'm doing wrong here?
>
>>>> i = array(range(140,149))
>>>> j = array(range(5,20))
>>>> i
> array([140, 141, 142, 143, 144, 145, 146, 147, 148])
>>>> j
> array([ 5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19])
>>>> a = acc[i,j]
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> ValueError: shape mismatch: objects cannot be broadcast to a single shape
[clip]
> How come I can't use arrays to index my array?

See also the reference documentation; chapter 3.4 in Guide to Numpy [1].

.. [1] http://www.tramy.us/numpybook.pdf

-- 
Pauli Virtanen

From peridot.faceted at gmail.com Fri Oct 10 09:59:21 2008
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Fri, 10 Oct 2008 09:59:21 -0400
Subject: [SciPy-user] scipy sclicing
In-Reply-To: <19918233.post@talk.nabble.com>
References: <19917625.post@talk.nabble.com> <19918233.post@talk.nabble.com>
Message-ID: 

2008/10/10 John [H2O] <washakie at gmail.com>:
>
> This seems to work:
>
> def slize(X,i,j):
>     X = X[i,:]
>     X = X[:,j]
>     return X
>
> Problems with the approach?

No problem, per se, but it's a bit inefficient. I think the problem is
that multidimensional slicing with arrays doesn't work quite the way
you think it does.

Let's say I have a 10 by 10 array X and I want rows 1, 3, and 5, and
columns 2 and 4. I can't write

X[ np.array([1,3,5]), np.array([2,4]) ]

because that's not how numpy's fancy indexing works. When you supply
arrays of indices like this (as opposed to slices), the idea is that
you're picking out arbitrary collections of elements, not just
rectangular hunks. For example, if I want elements (1,2), (3,4), and
(5,0) I can write:

X[ np.array([1,3,5]), np.array([2,4,0]) ]

But what if you want a rectangular slice? Naively, you would have to
construct two big arrays:

X[ np.array([[1,1],[3,3],[5,5]]), np.array([[2,4],[2,4],[2,4]]) ]

This will give you what you want, but building those index arrays is
annoying. Fortunately numpy's broadcasting can do it for you,
repeating each array along a new axis:

X[ np.array([1,3,5])[:,np.newaxis], np.array([2,4])[np.newaxis,:] ]

Anne
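numpy also ships a helper, np.ix_, that builds exactly these
broadcastable index arrays. A minimal illustration of the same
selection (a generic sketch, not taken from the thread):

    import numpy as np

    X = np.arange(100).reshape(10, 10)

    # np.ix_ turns the row and column lists into a (3,1) and a (1,2)
    # array, which broadcast to select the rectangular 3x2 block:
    sub = X[np.ix_([1, 3, 5], [2, 4])]
    print sub.shape   # (3, 2), same result as the newaxis trick above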
From pgmdevlist at gmail.com Fri Oct 10 11:21:50 2008
From: pgmdevlist at gmail.com (Pierre GM)
Date: Fri, 10 Oct 2008 11:21:50 -0400
Subject: [SciPy-user] Calculating daily, monthly and seasonal averages of hourly time series data.
In-Reply-To: <48EE0A59.63BA.009B.0@twdb.state.tx.us>
References: <48ECC9A6.63BA.009B.0@twdb.state.tx.us> <200810091317.29158.pgmdevlist@gmail.com> <48EE0A59.63BA.009B.0@twdb.state.tx.us>
Message-ID: <200810101121.50966.pgmdevlist@gmail.com>

Dharhas,
I agree that the documentation might be lacking a tutorial. Feel free
to start one and we will add it on the site.

> fyi the website says Numpy 1.2.1 or later.

1.2.1 will have some bug fixes in numpy.ma that timeseries should rely on.

From philbinj at gmail.com Fri Oct 10 11:34:40 2008
From: philbinj at gmail.com (James Philbin)
Date: Fri, 10 Oct 2008 16:34:40 +0100
Subject: [SciPy-user] Sparse sub-matrix indexing
Message-ID: <2b1c8c4f0810100834w2f2023b0jd78241d70bae377c@mail.gmail.com>

Hi,

I've come across this inconsistency with indexing sparse matrices:

In [3]: A = np.arange(100).reshape(10,10)
In [4]: B = spsp.csr_matrix(A)
In [5]: A[np.array([2]),:][:,np.array([2])]
Out[5]: array([[22]])
In [6]: B[np.array([2]),:][:,np.array([2])]
---------------------------------------------------------------------------
<type 'exceptions.TypeError'>             Traceback (most recent call last)

/home/james/<ipython console> in <module>()

/usr/lib/python2.5/site-packages/scipy/sparse/csr.py in __getitem__(self, key)
    226             #[1:2,??]
    227             if isintlike(col) or isinstance(col, slice):
--> 228                 return self._get_submatrix(row, col) #[1:2,j]
    229             else:
    230                 P = extractor(col,self.shape[1]).T #[1:2,[1,2]]

/usr/lib/python2.5/site-packages/scipy/sparse/csr.py in _get_submatrix(self, row_slice, col_slice)
    355
    356         i0, i1 = process_slice( row_slice, M )
--> 357         j0, j1 = process_slice( col_slice, N )
    358         check_bounds( i0, i1, M )
    359         check_bounds( j0, j1, N )

/usr/lib/python2.5/site-packages/scipy/sparse/csr.py in process_slice(sl, num)
    346
    347         else:
--> 348             raise TypeError('expected slice or scalar')
    349
    350     def check_bounds( i0, i1, num ):

<type 'exceptions.TypeError'>: expected slice or scalar

However, this works fine:

In [7]: B[np.array([2,3]),:][:,np.array([2,3])]
Out[7]:
<2x2 sparse matrix of type ''
        with 4 stored elements in Compressed Sparse Row format>

Also, this works fine:

In [8]: B[[2],:][:,[2]]
Out[8]:
<1x1 sparse matrix of type ''
        with 1 stored elements in Compressed Sparse Row format>

Thanks,
James

From philbinj at gmail.com Fri Oct 10 12:17:51 2008
From: philbinj at gmail.com (James Philbin)
Date: Fri, 10 Oct 2008 17:17:51 +0100
Subject: [SciPy-user] scipy.sparse.linalg.eigen inconsistency
Message-ID: <2b1c8c4f0810100917t2c903082w5c9ad156043daab4@mail.gmail.com>

Hi,

This doesn't work:

In [19]: spspla.eigen(np.array([[1.0]]),k=1)
---------------------------------------------------------------------------
<type 'exceptions.ValueError'>            Traceback (most recent call last)

/home/james/<ipython console> in <module>()

/usr/lib/python2.5/site-packages/scipy/sparse/linalg/eigen/arpack/arpack.py in eigen(A, k, M, sigma, which, v0, ncv, maxiter, tol, return_eigenvectors)
    163         raise ValueError("k must be positive, k=%d"%k)
    164     if k == n:
--> 165         raise ValueError("k must be less than rank(A), k=%d"%k)
    166     if maxiter <= 0:
    167         raise ValueError("maxiter must be positive, maxiter=%d"%maxiter)

<type 'exceptions.ValueError'>: k must be less than rank(A), k=1

It should find the eigenvector [[1.0]] with eigenvalue 1.0.

James

From washakie at gmail.com Fri Oct 10 12:20:56 2008
From: washakie at gmail.com (John [H2O])
Date: Fri, 10 Oct 2008 09:20:56 -0700 (PDT)
Subject: [SciPy-user] scipy sclicing
In-Reply-To: 
References: <19917625.post@talk.nabble.com> <19918233.post@talk.nabble.com>
Message-ID: <19921526.post@talk.nabble.com>

> Fortunately numpy's broadcasting can do it for you,
> repeating each array along a new axis:
>
> X[ np.array([1,3,5])[:,np.newaxis], np.array([2,4])[np.newaxis,:] ]
>
> Anne

So perhaps the approach should be:

def slize(X,i,j):
    from numpy import array, newaxis
    X = X[array(i)[:,newaxis], array(j)[newaxis,:]]
    return X

Seems a little easier for me to remember!
-- 
View this message in context: http://www.nabble.com/scipy-sclicing-tp19917625p19921526.html
Sent from the Scipy-User mailing list archive at Nabble.com.

From dmitrey.kroshko at scipy.org Fri Oct 10 13:25:56 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Fri, 10 Oct 2008 20:25:56 +0300
Subject: [SciPy-user] [optimization] 10X speedup example for scipy.optimize solver, via using oofun
Message-ID: <48EF9024.4000007@scipy.org>

Hi all,
if anyone is interested, here's a 10X speedup example for scipy
fmin_ncg using the openopt oofun. Other solvers (from or beyond
scipy.optimize) can yield similar speedups.

http://openopt.blogspot.com/2008/10/example-of-10x-speedup-for-nlp-via.html

Regards, D.
From wnbell at gmail.com Fri Oct 10 13:38:50 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Fri, 10 Oct 2008 13:38:50 -0400
Subject: [SciPy-user] Sparse sub-matrix indexing
In-Reply-To: <2b1c8c4f0810100834w2f2023b0jd78241d70bae377c@mail.gmail.com>
References: <2b1c8c4f0810100834w2f2023b0jd78241d70bae377c@mail.gmail.com>
Message-ID: 

On Fri, Oct 10, 2008 at 11:34 AM, James Philbin wrote:
> Hi,
>
> I've come across this inconsistency with indexing sparse matrices:
> In [3]: A = np.arange(100).reshape(10,10)
> In [4]: B = spsp.csr_matrix(A)
> In [5]: A[np.array([2]),:][:,np.array([2])]
> Out[5]: array([[22]])
> In [6]: B[np.array([2]),:][:,np.array([2])]

Thanks for the report. It should be fixed in r4792.

btw, you should get a trac account so future reports don't get lost:
http://projects.scipy.org/scipy/scipy

-- 
Nathan Bell wnbell at gmail.com
http://graphics.cs.uiuc.edu/~wnbell/
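To confirm the fix once r4792 is installed, a check along the lines of
James's report might look like this (a sketch using only the values
from the report above):

    import numpy as np
    import scipy.sparse as spsp

    A = np.arange(100).reshape(10, 10)
    B = spsp.csr_matrix(A)

    # single-element array indexing should now match the dense result
    sub = B[np.array([2]), :][:, np.array([2])]
    assert sub.todense()[0, 0] == A[2, 2]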
From contact at pythonxy.com Fri Oct 10 16:12:33 2008
From: contact at pythonxy.com (Pierre Raybaut)
Date: Fri, 10 Oct 2008 22:12:33 +0200
Subject: [SciPy-user] [ Python(x,y) ] New release : 2.1.2
Message-ID: <48EFB731.10900@pythonxy.com>

Hi all,

As you may already know, Python(x,y) is a free scientific-oriented
Python Distribution based on Qt and Eclipse providing a self-consistent
scientific development environment. Release 2.1.2 is now available on
http://www.pythonxy.com (Full Edition, Basic Edition, Light Edition,
Custom Edition and Update).

Changes history

Version 2.1.2 (10-09-2008)

* Added:
  o mercurial 1.0.2: Revision control system
  o MercurialEclipse 1.1.867: Mercurial Eclipse plugin
  o docutils 0.5.0: Text processing system for processing plaintext
    documentation into useful formats, such as HTML or LaTeX (includes
    reStructuredText)
  o jinja 1.2: Sandboxed template engine (provides a Django-like
    non-XML syntax and compiles templates into executable python code)
  o pygments 0.11.1: Generic syntax highlighter for general use in all
    kinds of software
* Updated:
  o Pydev 1.3.22
  o MinGW 3.4.5.4: added GDB
  o xy 1.0.7: Python html help is automatically generated from .chm file
* Corrected:
  o SciPy 0.6.0.1: tiny update to remove deprecation warnings following
    the latest NumPy update
  o IPython 0.9.1.2: post-install script was not executed entirely
  o Issue 25 (PATH environment variable could be corrupted):
    Python(x,y) (main installer), Console2, MinGW, Notepad++, SWIG,
    GDAL, GDCM, OpenCV, PyQt4 and VTK

Regards,
Pierre Raybaut

From matthew.brett at gmail.com Fri Oct 10 16:13:42 2008
From: matthew.brett at gmail.com (Matthew Brett)
Date: Fri, 10 Oct 2008 13:13:42 -0700
Subject: [SciPy-user] Reference to algorithm for matrix rank
In-Reply-To: <3d375d730810091640i414d0725scd4374f0037b0499@mail.gmail.com>
References: <1e2af89e0810091113o6af11cb7ue23a792e85be7f42@mail.gmail.com> <3d375d730810091640i414d0725scd4374f0037b0499@mail.gmail.com>
Message-ID: <1e2af89e0810101313v4692da27k62601d2c3fcaaab2@mail.gmail.com>

Hi,

> You should get the book _Matrix Computations_ by Golub and van Loan.
> You actually want tol to be relative to S.max(), not an absolute
> tolerance. I like this:
>
> np.sum(S > (S.max() * np.finfo(M.dtype).eps))

Thanks a lot - I'll have a look.

I saw that matlab does something like this:

eps = np.finfo(S.dtype).eps
tol = max(M.shape)*eps*S[0]

but I didn't know why the max(M.shape)...

Best,

Matthew

From matthieu.brucher at gmail.com Fri Oct 10 16:16:05 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Fri, 10 Oct 2008 22:16:05 +0200
Subject: [SciPy-user] Reference to algorithm for matrix rank
In-Reply-To: <1e2af89e0810101313v4692da27k62601d2c3fcaaab2@mail.gmail.com>
References: <1e2af89e0810091113o6af11cb7ue23a792e85be7f42@mail.gmail.com> <3d375d730810091640i414d0725scd4374f0037b0499@mail.gmail.com> <1e2af89e0810101313v4692da27k62601d2c3fcaaab2@mail.gmail.com>
Message-ID: 

2008/10/10 Matthew Brett <matthew.brett at gmail.com>:
[clip]
> I saw that matlab does something like this:
>
> eps = np.finfo(S.dtype).eps
> tol = max(M.shape)*eps*S[0]
>
> but I didn't know why the max(M.shape)...

The bigger the matrix, the bigger the numerical errors are?

Matthieu
-- 
Information System Engineer, PhD
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher

From robert.kern at gmail.com Fri Oct 10 16:45:32 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 10 Oct 2008 15:45:32 -0500
Subject: [SciPy-user] Reference to algorithm for matrix rank
In-Reply-To: <1e2af89e0810101313v4692da27k62601d2c3fcaaab2@mail.gmail.com>
References: <1e2af89e0810091113o6af11cb7ue23a792e85be7f42@mail.gmail.com> <3d375d730810091640i414d0725scd4374f0037b0499@mail.gmail.com> <1e2af89e0810101313v4692da27k62601d2c3fcaaab2@mail.gmail.com>
Message-ID: <3d375d730810101345q7569a685t1806de466169211e@mail.gmail.com>

On Fri, Oct 10, 2008 at 15:13, Matthew Brett wrote:
[clip]
> I saw that matlab does something like this:
>
> eps = np.finfo(S.dtype).eps
> tol = max(M.shape)*eps*S[0]
>
> but I didn't know why the max(M.shape)...

Neither do I. Golub and van Loan define "numerical rank deficiency" as
using tol=eps*S[0] (note that S[0] is the maximum singular value and
thus the 2-norm of the matrix). The thing is, there really isn't one
definition, much like there isn't a single definition of the norm of a
matrix. For example, if your data come from uncertain measurements with
uncertainties greater than floating point epsilon, choosing a tolerance
of about the uncertainty is probably a better idea (the tolerance may
even be absolute, if the uncertainties are absolute rather than
relative). When floating point roundoff is your concern, then
"numerical rank deficiency" is a better concept, but exactly what the
relevant measure of the tolerance is depends on the operations you
intend to do with your matrix. Possibly, the Matlab implementors had
some set of operations in mind. Unfortunately, they don't cite anything
relevant to that point, and my imagination fails to come up with one.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
 -- Umberto Eco
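Putting the thread's pieces together, a rank routine with a
Matlab-style default tolerance might look like the sketch below (not an
established scipy function; the default combines the relative scaling
suggested earlier with Matlab's max(M.shape) factor):

    import numpy as np

    def matrix_rank(M, tol=None):
        """Rank = number of singular values above `tol`."""
        S = np.linalg.svd(M, compute_uv=False)
        if tol is None:
            # Matlab-style default: relative to the largest singular
            # value, scaled by matrix size and the dtype's epsilon
            tol = S.max() * max(M.shape) * np.finfo(S.dtype).eps
        return int(np.sum(S > tol))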
From alan at ajackson.org Fri Oct 10 21:14:49 2008
From: alan at ajackson.org (Alan Jackson)
Date: Fri, 10 Oct 2008 20:14:49 -0500
Subject: [SciPy-user] Extrema finding
In-Reply-To: 
References: <88e473830810090521qe247f41l9649f3c460e95d38@mail.gmail.com>
Message-ID: <20081010201449.23c1644d@ajackson.org>

On Thu, 9 Oct 2008 13:06:33 -0600 David Bolme wrote:

> This is a version for finding extrema in a 2D array. It requires the
> ndimage maximum/minimum filters. For the one dimensional case,
> substitute size=[3] or use maximum_filter1d. I keep writing this code
> over and over. I am surprised that there is not a general purpose
> extrema finding routine in scipy.
>
> from numpy import nonzero
> from scipy.ndimage import maximum_filter, minimum_filter
>
> def localMax(mat):
>     mx = maximum_filter(mat, size=[3,3])
>     mn = minimum_filter(mat, size=[3,3])
>
>     # (mat == mx) is true if the pixel is equal to the local max.
>     # The next computation suppresses responses where
>     # the function is flat.
>     local_maxima = ((mat == mx) & (mat != mn))
>
>     # Get the indices of the maxima.
>     extrema = nonzero(local_maxima)
>     return extrema

Here's one I wrote a few months back...

import numpy as np

def extrema(trace):
    a = np.sign(np.diff(trace))
    zerolocs = np.transpose(np.where(a[1:] + a[0:-1] == 0.)).flatten() + 1
    if zerolocs[0] < 1:
        zerolocs = zerolocs[1:]
    if zerolocs[-1] > len(a) - 2:
        zerolocs = zerolocs[0:-1]
    return zerolocs

-- 
-----------------------------------------------------------------------
| Alan K. Jackson            | To see a World in a Grain of Sand      |
| alan at ajackson.org          | And a Heaven in a Wild Flower,         |
| www.ajackson.org           | Hold Infinity in the palm of your hand |
| Houston, Texas             | And Eternity in an hour. - Blake       |
-----------------------------------------------------------------------
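A quick sanity check of the 1-D helper above (a hypothetical session; a
finely sampled sine wave should yield indices at its turning points):

    import numpy as np

    t = np.sin(np.linspace(0, 4 * np.pi, 1000))
    ext = extrema(t)     # indices where the slope changes sign
    print t[ext]         # values all close to +1 or -1 (peaks/troughs)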
to run the example, use iprint = -1(line 23) and to check the time elapsed add the lines after r=p.solve(...): print r.elapsed['solver_time'] print r.elapsed['solver_cputime'] Regards, D. Nils Wagner wrote: > On Fri, 10 Oct 2008 20:25:56 +0300 > dmitrey wrote: > >> Hi all, >> if anyone is interested here's 10X speedup example for >> scipy fmin_ncg >> using openopt oofun. Other solvers (from or beyond >> scipy.optimize) can >> yield similar speedup. >> >> http://openopt.blogspot.com/2008/10/example-of-10x-speedup-for-nlp-via.html >> >> Regards, D. >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> > > > python -i > svn/openopt/scikits/openopt/examples/oofun/speedup.py > Traceback (most recent call last): > File > "svn/openopt/scikits/openopt/examples/oofun/speedup.py", > line 25, in ? > from scikits.openopt import NLP, oofun, oovar > File > "/usr/lib/python2.4/site-packages/scikits/openopt/__init__.py", > line 6, in ? > from oo import LP, NLP, NSP, MILP, QP, NLSP, LSP, > GLP, LLSP, MMP, LLAVP > File > "/usr/lib/python2.4/site-packages/scikits/openopt/oo.py", > line 6, in ? > from Kernel.BaseProblem import * > File > "/usr/lib/python2.4/site-packages/scikits/openopt/Kernel/BaseProblem.py", > line 11, in ? > from ooIterPrint import ooTextOutput > File > "/usr/lib/python2.4/site-packages/scikits/openopt/Kernel/ooIterPrint.py", > line 6 > 'isFeasible': lambda p: ('+' if p.rk '-') > ^ > SyntaxError: invalid syntax > > Nils > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > From dmitrey.kroshko at scipy.org Sat Oct 11 07:23:41 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Sat, 11 Oct 2008 14:23:41 +0300 Subject: [SciPy-user] [optimization] 10X speedup example for scipy.optimize solver, via using oofun In-Reply-To: <48F08AE8.6000006@scipy.org> References: <48EF9024.4000007@scipy.org> <48F08AE8.6000006@scipy.org> Message-ID: <48F08CBD.60405@scipy.org> I have committed the iterprint changes to svn, you could try now. Regards, D. dmitrey wrote: > Nils Wagner wrote: > >> On Fri, 10 Oct 2008 20:25:56 +0300 >> dmitrey wrote: >> >> >>> Hi all, >>> if anyone is interested here's 10X speedup example for >>> scipy fmin_ncg >>> using openopt oofun. Other solvers (from or beyond >>> scipy.optimize) can >>> yield similar speedup. >>> >>> http://openopt.blogspot.com/2008/10/example-of-10x-speedup-for-nlp-via.html >>> >>> Regards, D. >>> _______________________________________________ >>> SciPy-user mailing list >>> SciPy-user at scipy.org >>> http://projects.scipy.org/mailman/listinfo/scipy-user >>> >>> >> python -i >> svn/openopt/scikits/openopt/examples/oofun/speedup.py >> Traceback (most recent call last): >> File >> "svn/openopt/scikits/openopt/examples/oofun/speedup.py", >> line 25, in ? >> from scikits.openopt import NLP, oofun, oovar >> File >> "/usr/lib/python2.4/site-packages/scikits/openopt/__init__.py", >> line 6, in ? >> from oo import LP, NLP, NSP, MILP, QP, NLSP, LSP, >> GLP, LLSP, MMP, LLAVP >> File >> "/usr/lib/python2.4/site-packages/scikits/openopt/oo.py", >> line 6, in ? >> from Kernel.BaseProblem import * >> File >> "/usr/lib/python2.4/site-packages/scikits/openopt/Kernel/BaseProblem.py", >> line 11, in ? 
From dmitrey.kroshko at scipy.org Sat Oct 11 07:23:41 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Sat, 11 Oct 2008 14:23:41 +0300
Subject: [SciPy-user] [optimization] 10X speedup example for scipy.optimize solver, via using oofun
In-Reply-To: <48F08AE8.6000006@scipy.org>
References: <48EF9024.4000007@scipy.org> <48F08AE8.6000006@scipy.org>
Message-ID: <48F08CBD.60405@scipy.org>

I have committed the iterprint changes to svn, you could try now.

Regards, D.

dmitrey wrote:
> So now Python 2.5 is required (as is stated on the oo install page).
[clip]

From nwagner at iam.uni-stuttgart.de Sat Oct 11 11:46:31 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Sat, 11 Oct 2008 17:46:31 +0200
Subject: [SciPy-user] openopt ImportError: cannot import name oovar
Message-ID: 

python -i svn/openopt/scikits/openopt/examples/oofun/speedup.py
Traceback (most recent call last):
  File "svn/openopt/scikits/openopt/examples/oofun/speedup.py", line 28, in <module>
    from scikits.openopt import NLP, oofun, oovar
ImportError: cannot import name oovar

From dmitrey.kroshko at scipy.org Sat Oct 11 15:17:33 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Sat, 11 Oct 2008 22:17:33 +0300
Subject: [SciPy-user] openopt ImportError: cannot import name oovar
In-Reply-To: 
References: 
Message-ID: <48F0FBCD.3080109@scipy.org>

I checked out openopt from svn and it works for me.
Does anyone else have the problem?

D.

Nils Wagner wrote:
> python -i svn/openopt/scikits/openopt/examples/oofun/speedup.py
> Traceback (most recent call last):
>   File "svn/openopt/scikits/openopt/examples/oofun/speedup.py", line 28, in <module>
>     from scikits.openopt import NLP, oofun, oovar
> ImportError: cannot import name oovar

From nwagner at iam.uni-stuttgart.de Sun Oct 12 04:55:39 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Sun, 12 Oct 2008 10:55:39 +0200
Subject: [SciPy-user] openopt ImportError: cannot import name oovar
In-Reply-To: <48F0FBCD.3080109@scipy.org>
References: <48F0FBCD.3080109@scipy.org>
Message-ID: 

On Sat, 11 Oct 2008 22:17:33 +0300 dmitrey wrote:
> I checked out openopt from svn and it works for me.
> Does anyone else have the problem?
>
> D.
From dmitrey.kroshko at scipy.org Sun Oct 12 07:10:43 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Sun, 12 Oct 2008 14:10:43 +0300
Subject: [SciPy-user] openopt ImportError: cannot import name oovar
In-Reply-To:
References: <48F0FBCD.3080109@scipy.org>
Message-ID: <48F1DB33.7030707@scipy.org>

Hi Nils,
try now.

As for the importing issue, maybe you updated svn from svn/openopt/scikits/openopt/ while it has to be from svn/openopt/

I'm not fond of the folder structure, but it was demanded for all scikits (I don't know why a less deeply nested layout was not chosen; it brings lots of inconveniences).

Regards, D.

Nils Wagner wrote:
> On Sat, 11 Oct 2008 22:17:33 +0300
>  dmitrey wrote:
>
>> I checked out openopt from svn and it works for me.
>> Does anyone else have the problem?
>>
>> D.
>>
>> Nils Wagner wrote:
>>
>>> python -i svn/openopt/scikits/openopt/examples/oofun/speedup.py
>>> Traceback (most recent call last):
>>>   File "svn/openopt/scikits/openopt/examples/oofun/speedup.py", line 28, in
>>>     from scikits.openopt import NLP, oofun, oovar
>>> ImportError: cannot import name oovar
>>>
>
> Strange.
> I can import oovar on my old laptop (python2.4).
>
> Python 2.4 (#1, Oct 13 2006, 17:13:31)
> [GCC 3.3.5 20050117 (prerelease) (SUSE Linux)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
>
>>>> from scikits.openopt import oovar
>>>>
>
> Another python2.4 issue is present:
>
> python -i svn/openopt/scikits/openopt/examples/oofun/speedup.py
> Traceback (most recent call last):
> File
> "svn/openopt/scikits/openopt/examples/oofun/speedup.py",
> line 38, in ?
> r = p.solve(solver) > File > "/usr/lib/python2.4/site-packages/scikits/openopt/Kernel/BaseProblem.py", > line 201, in solve > return runProbSolver(self, *args, **kwargs) > File > "/usr/lib/python2.4/site-packages/scikits/openopt/Kernel/runProbSolver.py", > line 102, in runProbSolver > p.__prepare__() > File > "/usr/lib/python2.4/site-packages/scikits/openopt/Kernel/BaseProblem.py", > line 385, in __prepare__ > self.__construct_x_from_ooVars__() > File > "/usr/lib/python2.4/site-packages/scikits/openopt/Kernel/BaseProblem.py", > line 370, in __construct_x_from_ooVars__ > var.__initialize__(self) > File > "/usr/lib/python2.4/site-packages/scikits/openopt/Kernel/ooVar.py", > line 63, in __initialize__ > if any(self.lb > self.ub): > NameError: global name 'any' is not defined > > Nils > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > From nwagner at iam.uni-stuttgart.de Sun Oct 12 07:30:51 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sun, 12 Oct 2008 13:30:51 +0200 Subject: [SciPy-user] openopt ImportError: cannot import name oovar In-Reply-To: <48F1DB33.7030707@scipy.org> References: <48F0FBCD.3080109@scipy.org> <48F1DB33.7030707@scipy.org> Message-ID: On Sun, 12 Oct 2008 14:10:43 +0300 dmitrey wrote: > Hi Nils, > try now. > > as for the importing issue, maybe you updated svn from > > svn/openopt/scikits/openopt/ > > while it is has to be from svn/openopt/ > > I'm not fond of the folders structure but it was >demanded from all > scikits (I don't know why less deeply structured folders >were not > organized, it brings lots of inconveniences). > > Regards,D. Hi Dmitrey, I use svn co http://svn.scipy.org/svn/scikits/trunk/openopt openopt in $HOME/svn to checkout openopt. Nils From nwagner at iam.uni-stuttgart.de Sun Oct 12 07:37:05 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sun, 12 Oct 2008 13:37:05 +0200 Subject: [SciPy-user] openopt ImportError: cannot import name oovar In-Reply-To: <48F1DB33.7030707@scipy.org> References: <48F0FBCD.3080109@scipy.org> <48F1DB33.7030707@scipy.org> Message-ID: Hi Dmitrey, The example speedup.py works with r1537. Thank you. python -i svn/openopt/scikits/openopt/examples/oofun/speedup.py ----------------------------------------------------- solver: scipy_ncg problem: unnamed goal: minimum iter objFunVal 0 2.228e+06 5 1.142e+06 10 6.139e+02 15 4.909e+02 20 4.888e+02 25 4.869e+02 28 4.862e+02 istop: 1000 Solver: Time Elapsed = 5.95 CPU Time Elapsed = 5.91 objFunValue: 486.19658 evals f: 38562 evals of costly func g: 567 ----------------------------------------------------- solver: scipy_ncg problem: unnamed goal: minimum iter objFunVal 0 2.228e+06 5 4.982e+02 10 4.938e+02 15 4.912e+02 20 4.891e+02 25 4.871e+02 29 4.861e+02 istop: 1000 Solver: Time Elapsed = 42.5 CPU Time Elapsed = 40.3 objFunValue: 486.11448 evals f: 24235 evals of costly func g: 24235 Cheers, Nils From kwmsmith at gmail.com Sun Oct 12 14:45:57 2008 From: kwmsmith at gmail.com (Kurt Smith) Date: Sun, 12 Oct 2008 18:45:57 +0000 Subject: [SciPy-user] 64 bit and 32 bit on Ubuntu Message-ID: Hi list, I've recently acquired a new Ubuntu system, and unfortunately some 3rd party software I'd like to use won't work under the 64 bit version of the OS, yet. It's not a deal breaker, however, and I'm willing to work around it as long as there are compelling reasons to use 64 bit for numerical work, specifically under scipy, etc. 
I'm not well versed in the low-level pros and cons of 64 bit vs 32 bit, esp. as to how it affects scipy performance. Can you enlighten me? I'm on an Intel Core 2 Duo, 3.16 GHz, E8500, 4 GB RAM. Will using 64 bit yield great benefits running scipy/numpy code automatically, or will it require fine-tuning on my part? Will 64 bit yield benefits besides having a larger address space (besides the fact that my 4 GB RAM is still addressable by 32 bits, anyway)?

Thanks for your input,

Kurt

From wnbell at gmail.com Sun Oct 12 15:14:18 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Sun, 12 Oct 2008 15:14:18 -0400
Subject: [SciPy-user] 64 bit and 32 bit on Ubuntu
In-Reply-To:
References:
Message-ID:

On Sun, Oct 12, 2008 at 2:45 PM, Kurt Smith wrote:
>
> for numerical work, specifically under scipy, etc. I'm not well
> versed in the low-level pros and cons of 64 bit vs 32 bit, esp. as to
> how it affects scipy performance. Can you enlighten me? I'm on an
> Intel Core 2 Duo, 3.16 GHz, E8500, 4 GB RAM. Will using 64 bit yield
> great benefits running scipy/numpy code automatically, or will it
> require fine-tuning on my part? Will 64 bit yield benefits besides
> having a larger address space (besides the fact that my 4 GB RAM is
> still addressable by 32 bits, anyway)?
>

I wouldn't expect any significant performance difference. The x86-64 architecture does bring a few improvements such as more registers and (guaranteed?) SSE2 support, but the benefit of these depends on the nature of the application.
http://en.wikipedia.org/wiki/X86-64#Architectural_features

Something in the neighborhood of 20% faster for general purpose codes should be typical. OTOH I've found that some pure Python code that makes heavy use of dictionaries runs slower, possibly owing to the fact that 64-bit pointers reduce the effective CPU cache size.

Also, you wouldn't be able to address *all* 4GB RAM in a 32-bit OS. Both Windows and Linux reserve some part of the upper addresses for the OS, so a process will be limited to something less (3GB to 3.5GB IIRC).

Keep in mind that 64-bit Linux is approximately 2x cooler than 32-bit Linux, so the choice is easy IMO.

-- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/

From warren.weckesser at gmail.com Sun Oct 12 16:02:48 2008
From: warren.weckesser at gmail.com (Warren Weckesser)
Date: Sun, 12 Oct 2008 15:02:48 -0500
Subject: [SciPy-user] f2py: problem with PUBLIC access spec of a derived type
Message-ID: <114880320810121302w5b970ba7n4f8bab188af1a1d3@mail.gmail.com>

Hi,

I am learning about f2py, with the goal of wrapping a delay differential equation solver that is written in Fortran 90. The solver is Shampine and Thompson's DDE_SOLVER. Here is what I get when I run f2py:

------------
$ f2py dde_solver_m_unix.f90 -h tmp.pyf
Reading fortran codes...
Reading file 'dde_solver_m_unix.f90' (format:free)
Line #316 in dde_solver_m_unix.f90:" TYPE, PUBLIC :: DDE_SOL"
analyzeline: No name/args pattern found for line.
Line #331 in dde_solver_m_unix.f90:" TYPE, PUBLIC :: DDE_OPTS"
analyzeline: No name/args pattern found for line.
Line #340 in dde_solver_m_unix.f90:" TYPE, PUBLIC :: DDE_INT"
analyzeline: No name/args pattern found for line.
rmbadname1: Replacing "index" with "index_bn".
rmbadname1: Replacing "index" with "index_bn".
rmbadname1: Replacing "index" with "index_bn".
------------

Why is f2py complaining about the PUBLIC access spec for the derived types?

Regards,

Warren

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From kwmsmith at gmail.com Sun Oct 12 17:04:34 2008 From: kwmsmith at gmail.com (Kurt Smith) Date: Sun, 12 Oct 2008 21:04:34 +0000 Subject: [SciPy-user] 64 bit and 32 bit on Ubuntu In-Reply-To: References: Message-ID: On Sun, Oct 12, 2008 at 7:14 PM, Nathan Bell wrote: > On Sun, Oct 12, 2008 at 2:45 PM, Kurt Smith wrote: >> >> for numerical work, specifically under scipy, etc. I'm not well >> versed in the low-level pros and cons of 64 bit vs 32 bit, esp. as to >> how it affects scipy performance. Can you enlighten me? I"m on an >> Intel Core 2 Duo, 3.16 GHz, E8500, 4 GB RAM. Will using 64 bit yield >> great benefits running scipy/numpy code automatically, or will it >> require fine-tuning on my part? Will 64 bit yield benefits besides >> having a larger address space (besides the fact that my 4 GB RAM is >> still addressable by 32 bits, anyway)? >> > > I wouldn't expect any significant performance difference. The x86-64 > architecture does bring a few improvements such as more registers and > (guaranteed?) SSE2 support, but the benefit of these depends on the > nature of the application. > http://en.wikipedia.org/wiki/X86-64#Architectural_features > > Something in the neighborhood of 20% faster for general purpose codes > should be typical. OTOH I've found that some pure Python code that > makes heavy use of dictionaries runs slower, possibly owing to the > fact that 64-bit pointers reduce the effective CPU cache size. Is this 20% from your own experience, or are there timings out there? I've looked around, but the comparisons are pretty old and made when very few 64 bit apps were around, so 32 had the day. Anyone aware of comparisons between 64 and 32 bit performance for numerical codes? > Also, you wouldn't be able to address *all* 4GB RAM in a 32-bit OS. > Both Windows and Linux reserve some part of the upper addresses for > the OS, so a process will be limited to something less (3GB to 3.5GB > IIRC). > > Keep in mind that 64-bit Linux is approximately 2x cooler than 32-bit > Linux, so the choice is easy IMO. This temperature difference is really surprising to me, and I'm not able to find anything that talks about it in detail. Anything more you can say as to why this would be the case? Thanks, Kurt From warren.weckesser at gmail.com Sun Oct 12 17:16:20 2008 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Sun, 12 Oct 2008 16:16:20 -0500 Subject: [SciPy-user] f2py: problem with PUBLIC access spec of a derived type In-Reply-To: <114880320810121302w5b970ba7n4f8bab188af1a1d3@mail.gmail.com> References: <114880320810121302w5b970ba7n4f8bab188af1a1d3@mail.gmail.com> Message-ID: <114880320810121416y567efd4eg7aebc93cec9cc9f3@mail.gmail.com> Hi again, On Sun, Oct 12, 2008 at 3:02 PM, Warren Weckesser < warren.weckesser at gmail.com> wrote: > Hi, > > I am learning about f2py, with the goal of wrapping a delay differential > equation solver that is written in Fortran 90. The solver is Shampine and > Thompson's DDE_SOLVER. Here is what I get when I run f2py: > > ------------ > $ f2py dde_solver_m_unix.f90 -h tmp.pyf > Reading fortran codes... > Reading file 'dde_solver_m_unix.f90' (format:free) > Line #316 in dde_solver_m_unix.f90:" TYPE, PUBLIC :: DDE_SOL" > analyzeline: No name/args pattern found for line. > Line #331 in dde_solver_m_unix.f90:" TYPE, PUBLIC :: DDE_OPTS" > analyzeline: No name/args pattern found for line. > Line #340 in dde_solver_m_unix.f90:" TYPE, PUBLIC :: DDE_INT" > analyzeline: No name/args pattern found for line. 
> rmbadname1: Replacing "index" with "index_bn". > rmbadname1: Replacing "index" with "index_bn". > rmbadname1: Replacing "index" with "index_bn". > > ------------ > > Why is f2py complaining about the PUBLIC access spec for the derived types? > I should have added that the problem appears to be the PUBLIC spec because if I remove the PUBLIC spec, so, for example, line 316 is " TYPE DDE_SOL", then I don't get the "analyzeline" errors. Warren -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at gmail.com Sun Oct 12 19:39:39 2008 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Sun, 12 Oct 2008 18:39:39 -0500 Subject: [SciPy-user] f2py: problem with PUBLIC access spec of a derived type In-Reply-To: <114880320810121416y567efd4eg7aebc93cec9cc9f3@mail.gmail.com> References: <114880320810121302w5b970ba7n4f8bab188af1a1d3@mail.gmail.com> <114880320810121416y567efd4eg7aebc93cec9cc9f3@mail.gmail.com> Message-ID: <114880320810121639j39220781k536f860e2787418@mail.gmail.com> Answering my own question: according to the f2py FAQ at http://cens.ioc.ee/projects/f2py2e/FAQ.html, derived types are not supported in F90 code. Warren On Sun, Oct 12, 2008 at 4:16 PM, Warren Weckesser < warren.weckesser at gmail.com> wrote: > Hi again, > > On Sun, Oct 12, 2008 at 3:02 PM, Warren Weckesser < > warren.weckesser at gmail.com> wrote: > >> Hi, >> >> I am learning about f2py, with the goal of wrapping a delay differential >> equation solver that is written in Fortran 90. The solver is Shampine and >> Thompson's DDE_SOLVER. Here is what I get when I run f2py: >> >> ------------ >> $ f2py dde_solver_m_unix.f90 -h tmp.pyf >> Reading fortran codes... >> Reading file 'dde_solver_m_unix.f90' (format:free) >> Line #316 in dde_solver_m_unix.f90:" TYPE, PUBLIC :: DDE_SOL" >> analyzeline: No name/args pattern found for line. >> Line #331 in dde_solver_m_unix.f90:" TYPE, PUBLIC :: DDE_OPTS" >> analyzeline: No name/args pattern found for line. >> Line #340 in dde_solver_m_unix.f90:" TYPE, PUBLIC :: DDE_INT" >> analyzeline: No name/args pattern found for line. >> rmbadname1: Replacing "index" with "index_bn". >> rmbadname1: Replacing "index" with "index_bn". >> rmbadname1: Replacing "index" with "index_bn". >> >> ------------ >> >> Why is f2py complaining about the PUBLIC access spec for the derived >> types? >> > > I should have added that the problem appears to be the PUBLIC spec because > if I remove the PUBLIC spec, so, for example, line 316 is " TYPE DDE_SOL", > then I don't get the "analyzeline" errors. > > Warren > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Mon Oct 13 01:53:19 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 13 Oct 2008 00:53:19 -0500 Subject: [SciPy-user] 64 bit and 32 bit on Ubuntu In-Reply-To: References: Message-ID: <3d375d730810122253v7480cbbdwd89fae8c01e394d4@mail.gmail.com> On Sun, Oct 12, 2008 at 16:04, Kurt Smith wrote: > On Sun, Oct 12, 2008 at 7:14 PM, Nathan Bell wrote: >> Keep in mind that 64-bit Linux is approximately 2x cooler than 32-bit >> Linux, so the choice is easy IMO. > > This temperature difference is really surprising to me, and I'm not > able to find anything that talks about it in detail. Anything more > you can say as to why this would be the case? He didn't mean "cooler" as in, "Yesterday was cooler than today," but rather "cooler" as in, "James Dean was cooler than I will ever be." 
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From philbinj at gmail.com Mon Oct 13 10:04:32 2008 From: philbinj at gmail.com (James Philbin) Date: Mon, 13 Oct 2008 15:04:32 +0100 Subject: [SciPy-user] scipy.sparse: coo_matrix ignores sum_duplicates=False Message-ID: <2b1c8c4f0810130704n7fb317bcr1ee613c1a5975e29@mail.gmail.com> I've filed this as trac #754, repeated here for visibility. --- Running scipy version 0.7.0.dev4763 coo_matrix.tocsr + tocsc both ignore the sum_duplicates parameter: In [1]: from numpy import * In [2]: from scipy.sparse import * In [3]: data = array([1,1,1,1,1,1,1]) In [4]: row = array([0,0,1,3,1,0,0]) In [5]: col = array([0,2,1,3,1,0,0]) In [6]: A = coo_matrix( (data,(row,col)), shape=(4,4)) In [8]: A.tocsr(sum_duplicates=False).todense() Out[8]: matrix([[3, 0, 1, 0], [0, 2, 0, 0], [0, 0, 0, 0], [0, 0, 0, 1]]) In [9]: A.tocsc(sum_duplicates=False).todense() Out[9]: matrix([[3, 0, 1, 0], [0, 2, 0, 0], [0, 0, 0, 0], [0, 0, 0, 1]]) --- Thanks, James From kwmsmith at gmail.com Mon Oct 13 10:33:59 2008 From: kwmsmith at gmail.com (Kurt Smith) Date: Mon, 13 Oct 2008 09:33:59 -0500 Subject: [SciPy-user] 64 bit and 32 bit on Ubuntu In-Reply-To: <3d375d730810122253v7480cbbdwd89fae8c01e394d4@mail.gmail.com> References: <3d375d730810122253v7480cbbdwd89fae8c01e394d4@mail.gmail.com> Message-ID: On Mon, Oct 13, 2008 at 12:53 AM, Robert Kern wrote: > On Sun, Oct 12, 2008 at 16:04, Kurt Smith wrote: >> On Sun, Oct 12, 2008 at 7:14 PM, Nathan Bell wrote: > >>> Keep in mind that 64-bit Linux is approximately 2x cooler than 32-bit >>> Linux, so the choice is easy IMO. >> >> This temperature difference is really surprising to me, and I'm not >> able to find anything that talks about it in detail. Anything more >> you can say as to why this would be the case? > > He didn't mean "cooler" as in, "Yesterday was cooler than today," but > rather "cooler" as in, "James Dean was cooler than I will ever be. Thanks -- you saved me from a rather long wild goose chase. From wnbell at gmail.com Mon Oct 13 10:35:23 2008 From: wnbell at gmail.com (Nathan Bell) Date: Mon, 13 Oct 2008 10:35:23 -0400 Subject: [SciPy-user] scipy.sparse: coo_matrix ignores sum_duplicates=False In-Reply-To: <2b1c8c4f0810130704n7fb317bcr1ee613c1a5975e29@mail.gmail.com> References: <2b1c8c4f0810130704n7fb317bcr1ee613c1a5975e29@mail.gmail.com> Message-ID: On Mon, Oct 13, 2008 at 10:04 AM, James Philbin wrote: > I've filed this as trac #754, repeated here for visibility. > > --- > Running scipy version 0.7.0.dev4763 > > coo_matrix.tocsr + tocsc both ignore the sum_duplicates parameter: > > In [1]: from numpy import * > In [2]: from scipy.sparse import * > In [3]: data = array([1,1,1,1,1,1,1]) > In [4]: row = array([0,0,1,3,1,0,0]) > In [5]: col = array([0,2,1,3,1,0,0]) > In [6]: A = coo_matrix( (data,(row,col)), shape=(4,4)) > In [8]: A.tocsr(sum_duplicates=False).todense() > Out[8]: > matrix([[3, 0, 1, 0], > [0, 2, 0, 0], > [0, 0, 0, 0], > [0, 0, 0, 1]]) > In [9]: A.tocsc(sum_duplicates=False).todense() > Out[9]: > matrix([[3, 0, 1, 0], > [0, 2, 0, 0], > [0, 0, 0, 0], > [0, 0, 0, 1]]) Hi James, Note that CSR.todense() implicitly sums duplicate entries (it's essentially zeros((N,M)) += A). 
You should find that the CSR representation *does* contain the duplicate entries:

In [1]: from numpy import *
In [2]: from scipy.sparse import *
In [3]: data = array([1,1,1,1,1,1,1])
In [4]: row = array([0,0,1,3,1,0,0])
In [5]: col = array([0,2,1,3,1,0,0])
In [6]: A = coo_matrix( (data,(row,col)), shape=(4,4))
In [7]: B = A.tocsr(sum_duplicates=False)
In [8]: B.indptr
Out[8]: array([0, 4, 6, 6, 7], dtype=int32)
In [9]: B.indices
Out[9]: array([0, 2, 0, 0, 1, 1, 3], dtype=int32)
In [10]: B.data
Out[10]: array([1, 1, 1, 1, 1, 1, 1])
In [11]: B
Out[11]: <4x4 sparse matrix of type '<type 'numpy.int32'>'
        with 7 stored elements in Compressed Sparse Row format>

-- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/

From wnbell at gmail.com Mon Oct 13 10:41:31 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Mon, 13 Oct 2008 10:41:31 -0400
Subject: [SciPy-user] 64 bit and 32 bit on Ubuntu
In-Reply-To:
References: <3d375d730810122253v7480cbbdwd89fae8c01e394d4@mail.gmail.com>
Message-ID:

On Mon, Oct 13, 2008 at 10:33 AM, Kurt Smith wrote:
>>
>> He didn't mean "cooler" as in, "Yesterday was cooler than today," but
>> rather "cooler" as in, "James Dean was cooler than I will ever be."
>
> Thanks -- you saved me from a rather long wild goose chase.

Yeah, sorry about that. I should have terminated that statement with an emoticon :)

-- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/

From wnbell at gmail.com Mon Oct 13 10:43:53 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Mon, 13 Oct 2008 10:43:53 -0400
Subject: [SciPy-user] 64 bit and 32 bit on Ubuntu
In-Reply-To:
References:
Message-ID:

On Sun, Oct 12, 2008 at 5:04 PM, Kurt Smith wrote:
>
> Is this 20% from your own experience, or are there timings out there?
> I've looked around, but the comparisons are pretty old and made when
> very few 64 bit apps were around, so 32 had the day. Anyone aware of
> comparisons between 64 and 32 bit performance for numerical codes?
>

I've never done a proper study of the matter, so that's just my take from benchmarks and anecdotes I've read.

-- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/

From philbinj at gmail.com Mon Oct 13 10:52:51 2008
From: philbinj at gmail.com (James Philbin)
Date: Mon, 13 Oct 2008 15:52:51 +0100
Subject: [SciPy-user] scipy.sparse: coo_matrix ignores sum_duplicates=False
In-Reply-To:
References: <2b1c8c4f0810130704n7fb317bcr1ee613c1a5975e29@mail.gmail.com>
Message-ID: <2b1c8c4f0810130752w138837fan9ef20c59204a1eb2@mail.gmail.com>

Hi,

> Note that CSR.todense() implicitly sums duplicate entries (it's
> essentially zeros((N,M)) += A). You should find that the CSR
> representation *does* contain the duplicate entries:

Hmm, I see. This is quite subtle (+ surprising) as to all intents and purposes the csr_matrix behaves as if the duplicates had been summed whether or not sum_duplicates=True or False. The parameter name probably needs to be changed and/or something said in the docstring. What I was actually looking for was a way for duplicates to be ignored, which I've found with dok_matrix.

Thanks a lot,
James

From peridot.faceted at gmail.com Mon Oct 13 11:25:15 2008
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Mon, 13 Oct 2008 11:25:15 -0400
Subject: [SciPy-user] 64 bit and 32 bit on Ubuntu
In-Reply-To:
References:
Message-ID:

2008/10/12 Kurt Smith :
> I've recently acquired a new Ubuntu system, and unfortunately some 3rd
> party software I'd like to use won't work under the 64 bit version of
> the OS, yet.
It's not a deal breaker, however, and I'm willing to > work around it as long as there are compelling reasons to use 64 bit > for numerical work, specifically under scipy, etc. I'm not well > versed in the low-level pros and cons of 64 bit vs 32 bit, esp. as to > how it affects scipy performance. Can you enlighten me? I"m on an > Intel Core 2 Duo, 3.16 GHz, E8500, 4 GB RAM. Will using 64 bit yield > great benefits running scipy/numpy code automatically, or will it > require fine-tuning on my part? Will 64 bit yield benefits besides > having a larger address space (besides the fact that my 4 GB RAM is > still addressable by 32 bits, anyway)? I think memory is the only major reason to prefer 64 bits. The speed difference will probably be minimal. But having a 64-bit address space means that if your applications get too big for RAM they just slow down, as opposed to crashing and forcing you to rewrite them. You will also find, I think, that running in 32-bit mode you cannot access the last gigabyte or so of physical RAM. This can make a big difference too. Anne From mhearne at usgs.gov Mon Oct 13 13:12:00 2008 From: mhearne at usgs.gov (Michael Hearne) Date: Mon, 13 Oct 2008 11:12:00 -0600 Subject: [SciPy-user] Results of squeeze function on a [1,1] size array In-Reply-To: <48E14401.7030600@ru.nl> References: <0191F372-8348-42DC-AB33-61A2833157A3@gmail.com> <48E14401.7030600@ru.nl> Message-ID: <48F38160.4090705@usgs.gov> Using numpy version 1.1.0.dev5077 Using scipy version 0.7.0.dev4174 on Mac OS X If I do the following: x = zeros([1,1]) x[0] = 3.4756 y = squeeze(x) What is y? When I print or do arithmetic with y, it seems like it's a scalar. However, the type() function seems to indicate that it's a numpy.ndarray, but with 0 dimensionality. Questions: How do I detect when I'm in this state? How can I convert this from a (sort of) scalar into an array with length of one? --Mike -- ------------------------------------------------------ Michael Hearne mhearne at usgs.gov (303) 273-8620 USGS National Earthquake Information Center 1711 Illinois St. Golden CO 80401 Senior Software Engineer Synergetics, Inc. ------------------------------------------------------ From robert.kern at gmail.com Mon Oct 13 13:16:10 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 13 Oct 2008 12:16:10 -0500 Subject: [SciPy-user] Results of squeeze function on a [1,1] size array In-Reply-To: <48F38160.4090705@usgs.gov> References: <0191F372-8348-42DC-AB33-61A2833157A3@gmail.com> <48E14401.7030600@ru.nl> <48F38160.4090705@usgs.gov> Message-ID: <3d375d730810131016h1296a069id1672ea1e2bfce72@mail.gmail.com> On Mon, Oct 13, 2008 at 12:12, Michael Hearne wrote: > Using numpy version 1.1.0.dev5077 > Using scipy version 0.7.0.dev4174 > > on Mac OS X > > If I do the following: > x = zeros([1,1]) > x[0] = 3.4756 > y = squeeze(x) > > What is y? When I print or do arithmetic with y, it seems like it's a > scalar. Why do you say that? It looks like a 0-dim array to me. In [1]: from numpy import * In [2]: x = zeros([1,1]) In [3]: x Out[3]: array([[ 0.]]) In [4]: x[0] = 3.4756 In [5]: x Out[5]: array([[ 3.4756]]) In [6]: y = squeeze(x) In [7]: y Out[7]: array(3.4756) In [8]: y.shape Out[8]: () > However, the type() function seems to indicate that it's a > numpy.ndarray, but with 0 dimensionality. Yup. > Questions: > How do I detect when I'm in this state? isinstance(y, numpy.ndarray) and y.shape == () > How can I convert this from a (sort of) scalar into an array with length > of one? 
atleast_1d(y) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cdcasey at gmail.com Mon Oct 13 14:09:27 2008 From: cdcasey at gmail.com (chris) Date: Mon, 13 Oct 2008 13:09:27 -0500 Subject: [SciPy-user] scipy 0.6.0 build failing Message-ID: I'm trying to build scipy 0.6.0 on RHEL 3, and am getting the following failure: g77:f77: scipy/fftpack/dfftpack/zfftf1.f /tmp/cceGs6VT.s: Assembler messages: /tmp/cceGs6VT.s:598: Error: suffix or operands invalid for `movd' /tmp/cceGs6VT.s:2994: Error: suffix or operands invalid for `movd' /tmp/cceGs6VT.s: Assembler messages: /tmp/cceGs6VT.s:598: Error: suffix or operands invalid for `movd' /tmp/cceGs6VT.s:2994: Error: suffix or operands invalid for `movd' error: Command "/usr/bin/g77 -g -Wall -fno-second-underscore -fPIC -O2 -funroll-loops -march=i686 -mmmx -msse2 -msse -fomit-frame-pointer -malign-double -c -c scipy/fftpack/dfftpack/zfftf1.f -o build/temp.linux-i686-2.5/scipy/fftpack/dfftpack/zfftf1.o" failed with exit status 1 g77 version 3.2.3 gcc version 3.2.3 as version 2.14.90.0.4 From mhearne at usgs.gov Mon Oct 13 14:31:11 2008 From: mhearne at usgs.gov (Michael Hearne) Date: Mon, 13 Oct 2008 12:31:11 -0600 Subject: [SciPy-user] Results of squeeze function on a [1,1] size array In-Reply-To: <3d375d730810131016h1296a069id1672ea1e2bfce72@mail.gmail.com> References: <0191F372-8348-42DC-AB33-61A2833157A3@gmail.com> <48E14401.7030600@ru.nl> <48F38160.4090705@usgs.gov> <3d375d730810131016h1296a069id1672ea1e2bfce72@mail.gmail.com> Message-ID: <48F393EF.5020708@usgs.gov> Didn't know there was such a thing as a 0-dim array. It seems to _behave_ like a scalar! Problem solved... Thanks, Mike Robert Kern wrote: > On Mon, Oct 13, 2008 at 12:12, Michael Hearne wrote: > >> Using numpy version 1.1.0.dev5077 >> Using scipy version 0.7.0.dev4174 >> >> on Mac OS X >> >> If I do the following: >> x = zeros([1,1]) >> x[0] = 3.4756 >> y = squeeze(x) >> >> What is y? When I print or do arithmetic with y, it seems like it's a >> scalar. >> > > Why do you say that? It looks like a 0-dim array to me. > > In [1]: from numpy import * > > In [2]: x = zeros([1,1]) > > In [3]: x > Out[3]: array([[ 0.]]) > > In [4]: x[0] = 3.4756 > > In [5]: x > Out[5]: array([[ 3.4756]]) > > In [6]: y = squeeze(x) > > In [7]: y > Out[7]: array(3.4756) > > In [8]: y.shape > Out[8]: () > > >> However, the type() function seems to indicate that it's a >> numpy.ndarray, but with 0 dimensionality. >> > > Yup. > > >> Questions: >> How do I detect when I'm in this state? >> > > isinstance(y, numpy.ndarray) and y.shape == () > > >> How can I convert this from a (sort of) scalar into an array with length >> of one? >> > > atleast_1d(y) > > -- ------------------------------------------------------ Michael Hearne mhearne at usgs.gov (303) 273-8620 USGS National Earthquake Information Center 1711 Illinois St. Golden CO 80401 Senior Software Engineer Synergetics, Inc. 
------------------------------------------------------

From wnbell at gmail.com Mon Oct 13 15:30:29 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Mon, 13 Oct 2008 15:30:29 -0400
Subject: [SciPy-user] scipy.sparse: coo_matrix ignores sum_duplicates=False
In-Reply-To: <2b1c8c4f0810130752w138837fan9ef20c59204a1eb2@mail.gmail.com>
References: <2b1c8c4f0810130704n7fb317bcr1ee613c1a5975e29@mail.gmail.com> <2b1c8c4f0810130752w138837fan9ef20c59204a1eb2@mail.gmail.com>
Message-ID:

On Mon, Oct 13, 2008 at 10:52 AM, James Philbin wrote:
>
> Hmm, I see. This is quite subtle (+ surprising) as to all intents and
> purposes the csr_matrix behaves as if the duplicates had been summed
> whether or not sum_duplicates=True or False. The parameter name
> probably needs to be changed and/or something said in the docstring.
> What I was actually looking for was a way for duplicates to be
> ignored, which I've found with dok_matrix.
>

By "ignored" do you mean that you want only the first or last value to be used?

Summing duplicates when converting COO->CSR is fairly common (e.g. UMFPACK does it) and quite useful if you're assembling FEM matrices. Furthermore, regarding duplicate entries as parts of a sum is necessary if one wants to maintain consistency with matrix-vector multiplication (i.e. A*x == A.tocsr() * x). In theory you could change this as well, but it would be *very* costly.

FYI, others have expressed an interest in more general accumulation methods:
http://thread.gmane.org/gmane.comp.python.scientific.devel/7667

I'll think about how to implement this. It should be straightforward to do in pure numpy, but I'd want it to be fast for the common cases Viral listed in the message above.

-- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/

From philbinj at gmail.com Mon Oct 13 18:33:45 2008
From: philbinj at gmail.com (James Philbin)
Date: Mon, 13 Oct 2008 23:33:45 +0100
Subject: [SciPy-user] scipy.sparse: coo_matrix ignores sum_duplicates=False
In-Reply-To:
References: <2b1c8c4f0810130704n7fb317bcr1ee613c1a5975e29@mail.gmail.com> <2b1c8c4f0810130752w138837fan9ef20c59204a1eb2@mail.gmail.com>
Message-ID: <2b1c8c4f0810131533x430b6254x140691ce2a934942@mail.gmail.com>

> By "ignored" do you mean that you want only the first or last value to be used?

My use case is perhaps a bit non-standard. I'm approximately computing a large pairwise similarity matrix distributed across multiple processes. The algorithm will sometimes output the same pairwise distance more than once, so all the subsequent values will be the same. I think dok_matrix is fine for my needs. BTW, I've found that __setitem__ is very slow for dok_matrix. Is this just because of the checks which are made? Using dict.__setitem__(mat, (r,c), val) is about an order of magnitude faster.

> Summing duplicates when converting COO->CSR is fairly common (e.g.
> UMFPACK does it) and quite useful if you're assembling FEM matrices.
> Furthermore, regarding duplicate entries as parts of a sum is
> necessary if one wants to maintain consistency with matrix-vector
> multiplication (i.e. A*x == A.tocsr() * x). In theory you could change
> this as well, but it would be *very* costly.

I'm not arguing that summing duplicate entries is not desirable. I'm just arguing that a function which reads .tocsr(sum_duplicates=False) and then sums the duplicates implicitly is misnamed.

> FYI, others have expressed an interest in more general accumulation methods:
> http://thread.gmane.org/gmane.comp.python.scientific.devel/7667

This is never something I've needed, but I agree it could be useful.

Thanks,
James
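A minimal sketch of the dict.__setitem__ shortcut James describes above, assuming a scipy version in which dok_matrix subclasses dict; bypassing the checks is only safe when the index is an in-range (row, col) pair and the value a scalar:

    from scipy.sparse import dok_matrix

    mat = dok_matrix((1000, 1000))

    # normal path: dok_matrix.__setitem__ validates indices and values
    mat[10, 20] = 1.5

    # fast path: skip the validation entirely
    dict.__setitem__(mat, (30, 40), 2.5)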
From mgb11 at cornell.edu Mon Oct 13 20:22:26 2008
From: mgb11 at cornell.edu (Marc Berthoud)
Date: Mon, 13 Oct 2008 19:22:26 -0500
Subject: [SciPy-user] Record Array: How to add a column?
Message-ID: <48F3E642.4040001@cornell.edu>

What is the best way to add a column to a record array?

Example:
Add to this array
  recarray( [ ( 'alpha' , 1 ) , ( 'beta ' , 2 ) , ( 'gamma' , 3 ) ] ,
            dtype = [ ( 'NAMES' , '|S5' ) , ( 'NR' , '>i2' ) ] )
the following column
  array( [ 3.1 , 3.5 , 8.1 ] )
to make that array
  recarray( [ ( 'alpha' , 1 , 3.1 ) , ( 'beta ' , 2 , 3.5 ) , ( 'gamma' , 3 , 8.1 ) ] ,
            dtype = [ ( 'NAMES' , '|S5' ) , ( 'NR' , '>i2' ) , ( 'VAL' , '<f8' ) ] )
any ideas?

At this point the nicest way I know to do this is as follows:
  names = list( oldarray.dtype.names )   # get the record names
  names.append( 'VAL' )                  # add the name for the new column
  olddescr = oldarray.dtype.descr        # get the old type descriptor
  formats = [ olddescr[0][1] ]           # initialize formats list, enter first format
  arrays = [ oldarray[ olddescr[0][0] ] ]  # initialize arrays list, enter first array
  for i in range( 1 , len(olddescr) ) :  # loop over remaining columns
      formats.append( olddescr[i][1] )   # add format of column
      arrays.append( oldarray[ olddescr[i][0] ] )  # add array of column
  formats.append( '<f8' )                # add format of new column
  arrays.append( array( [3.1,3.5,8.1] ) )  # add array of new column
  newarray = fromarrays( arrays , names = names , formats = formats )  # make new record array
but that is very ugly programming.
Any Ideas?

On Mon, Oct 13, 2008 at 19:22, Marc Berthoud wrote:
> What is the best way to add a column to a record array?

This is somewhat more straightforward:

http://projects.scipy.org/pipermail/numpy-discussion/2007-September/029357.html

-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
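The post behind that link is not reproduced in this archive; a sketch of the kind of empty-array-and-copy approach it points to, with the 'VAL' field name and '<f8' dtype taken from Marc's example (illustrative helper code, not a library function):

    import numpy as np

    old = np.rec.fromrecords([('alpha', 1), ('beta ', 2), ('gamma', 3)],
                             dtype=[('NAMES', '|S5'), ('NR', '>i2')])
    col = np.array([3.1, 3.5, 8.1])

    # allocate the target array with the old fields plus the new one
    new = np.empty(old.shape, dtype=old.dtype.descr + [('VAL', '<f8')])
    for name in old.dtype.names:   # copy the existing columns field by field
        new[name] = old[name]
    new['VAL'] = col
    new = new.view(np.recarray)    # restore attribute-style access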
From wnbell at gmail.com Mon Oct 13 22:57:15 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Mon, 13 Oct 2008 22:57:15 -0400
Subject: [SciPy-user] scipy.sparse: coo_matrix ignores sum_duplicates=False
In-Reply-To: <2b1c8c4f0810131533x430b6254x140691ce2a934942@mail.gmail.com>
References: <2b1c8c4f0810130704n7fb317bcr1ee613c1a5975e29@mail.gmail.com> <2b1c8c4f0810130752w138837fan9ef20c59204a1eb2@mail.gmail.com> <2b1c8c4f0810131533x430b6254x140691ce2a934942@mail.gmail.com>
Message-ID:

On Mon, Oct 13, 2008 at 6:33 PM, James Philbin wrote:
> same. I think dok_matrix is fine for my needs. BTW, I've found that
> __setitem__ is very slow for dok_matrix. Is this just because of the
> checks which are made? Using dict.__setitem__(mat, (r,c), val) is
> about an order of magnitude faster.

I don't use dok_matrix, so I don't know why it would be that much slower. If you can speed it up and submit a patch I'd happily apply it.

> I'm not arguing that summing duplicate entries is not desirable. I'm
> just arguing that a function which reads .tocsr(sum_duplicates=False)
> and then sums the duplicates implicitly is misnamed.

Please understand, it *does not* sum the duplicates. As I illustrated before, the duplicates are carried over to the CSR format. It's just that CSR->dense *does* sum duplicates.

I agree that sum_duplicates=False is somewhat ambiguous, do you have a suggestion for how this could be made more clear? For instance, would an interface like:
coo_matrix.tocsr(duplicates='sum')
coo_matrix.tocsr(duplicates='last')
coo_matrix.tocsr(duplicates='max')
be preferred? If I understand correctly, you'd want to use .tocsr(duplicates='last').

Another question is whether we want to put this in the COO->CSR (and CSC) conversions. At this point, I think COO->CSR should *always* sum duplicates together and we should instead provide a separate function or member function of coo_matrix that provides additional options, like 'last', 'max', etc. In general, any binary operator (T,T) -> T could be used as an accumulator, but we would provide the most common options.

-- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/
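One possible shape for such an accumulator in pure numpy; merge_duplicates is a hypothetical helper, not an existing scipy function, and op must be a ufunc (numpy.add, numpy.maximum, ...), so 'first'/'last' would need a different trick:

    import numpy as np
    from scipy.sparse import coo_matrix

    def merge_duplicates(A, op=np.add):
        """Collapse duplicate (row, col) entries of COO matrix A with ufunc op."""
        order = np.lexsort((A.col, A.row))       # sort by row, then column
        row, col, data = A.row[order], A.col[order], A.data[order]
        starts = np.ones(len(row), dtype=bool)   # flag the first entry of each run
        starts[1:] = (row[1:] != row[:-1]) | (col[1:] != col[:-1])
        idx = np.flatnonzero(starts)
        data = op.reduceat(data, idx)            # reduce each run of duplicates
        return coo_matrix((data, (row[idx], col[idx])), shape=A.shape)

For James's use case the duplicate values are all equal, so merge_duplicates(A, np.maximum) would behave like 'last'.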
From kwmsmith at gmail.com Mon Oct 13 23:07:35 2008
From: kwmsmith at gmail.com (Kurt Smith)
Date: Mon, 13 Oct 2008 22:07:35 -0500
Subject: [SciPy-user] 64 bit and 32 bit on Ubuntu
In-Reply-To:
References:
Message-ID:

On Mon, Oct 13, 2008 at 10:25 AM, Anne Archibald wrote:
> 2008/10/12 Kurt Smith :
>
>> I've recently acquired a new Ubuntu system, and unfortunately some 3rd
>> party software I'd like to use won't work under the 64 bit version of
>> the OS, yet. It's not a deal breaker, however, and I'm willing to
>> work around it as long as there are compelling reasons to use 64 bit
>> for numerical work, specifically under scipy, etc.
>
> I think memory is the only major reason to prefer 64 bits. The speed
> difference will probably be minimal. But having a 64-bit address space
> means that if your applications get too big for RAM they just slow
> down, as opposed to crashing and forcing you to rewrite them. You will
> also find, I think, that running in 32-bit mode you cannot access the
> last gigabyte or so of physical RAM. This can make a big difference
> too.

Thanks to all for your answers; since posting I've worked out some more issues and 64 bit looks better and better. Indeed, it is at least 2x "cooler." ;-) I'll do more benchmarks of my own, but so far things look pretty good.

Kurt

From dcday137 at gmail.com Mon Oct 13 23:36:16 2008
From: dcday137 at gmail.com (Collin Day)
Date: Mon, 13 Oct 2008 21:36:16 -0600
Subject: [SciPy-user] Difference between ffts?
Message-ID: <20081013213616.667c6106@Krypton.homenet>

Hi all,

I have looked around, but can't seem to find an answer. I have been trying the following (according to the Getting started page - http://www.scipy.org/Getting_Started)

from scipy import *
a=zeros(1000)
a[:100]=1
b=fft(a)

plot(abs(b))

and I get what you would expect - the abs. value of a sinc function

rect(x) ->F-> sinc(Frequency)

now, if I try scipy.fftpack

import scipy.fftpack as S

c=S.fft(a)

figure()

plot(abs(c))

I get something I would expect if I did an FFT on a sine function (kind of like dual spikes equally spaced)

I do see there is a difference in the packing or how the fft output is represented. How do you plot the data from fftpack so that it looks correct? Should I even bother?

Thanks for any help!

-Collin

From wnbell at gmail.com Mon Oct 13 23:47:15 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Mon, 13 Oct 2008 23:47:15 -0400
Subject: [SciPy-user] 64 bit and 32 bit on Ubuntu
In-Reply-To:
References:
Message-ID:

On Mon, Oct 13, 2008 at 11:07 PM, Kurt Smith wrote:
>
> Thanks to all for your answers; since posting I've worked out some
> more issues and 64 bit looks better and better. Indeed, it is at
> least 2x "cooler." ;-) I'll do more benchmarks of my own, but so far
> things look pretty good.
>

Good to hear! Let us know what you find. I'm sure there are others in the same situation that would like to know what to expect.

-- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/

From robert.kern at gmail.com Mon Oct 13 23:48:41 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 13 Oct 2008 22:48:41 -0500
Subject: [SciPy-user] Difference between ffts?
In-Reply-To: <20081013213616.667c6106@Krypton.homenet>
References: <20081013213616.667c6106@Krypton.homenet>
Message-ID: <3d375d730810132048k3c14eaa8oa71f9dc9b12772c1@mail.gmail.com>

On Mon, Oct 13, 2008 at 22:36, Collin Day wrote:
> Hi all,
>
> I have looked around, but can't seem to find an answer. I have been
> trying the following (according to the Getting started page -
> http://www.scipy.org/Getting_Started)
>
> from scipy import *
> a=zeros(1000)
> a[:100]=1
> b=fft(a)
>
> plot(abs(b))
>
> and I get what you would expect - the abs. value of a sinc function
>
> rect(x) ->F-> sinc(Frequency)
>
> now, if I try scipy.fftpack
>
> import scipy.fftpack as S
>
> c=S.fft(a)
>
> figure()
>
> plot(abs(c))
>
> I get something I would expect if I did an FFT on a sine function (kind
> of like dual spikes equally spaced)

Can you show us the plots? I don't see a difference.

> I do see there is a difference in the packing or how the fft output is
> represented. How do you plot the data from fftpack so that it looks
> correct? Should I even bother?

Typically, I will use fftfreq() to get the "X" values in packed form and then use fftshift() on both X and Y to "unpack" the arrays.

-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From philbinj at gmail.com Tue Oct 14 04:56:42 2008
From: philbinj at gmail.com (James Philbin)
Date: Tue, 14 Oct 2008 09:56:42 +0100
Subject: [SciPy-user] scipy.sparse: coo_matrix ignores sum_duplicates=False
In-Reply-To:
References: <2b1c8c4f0810130704n7fb317bcr1ee613c1a5975e29@mail.gmail.com> <2b1c8c4f0810130752w138837fan9ef20c59204a1eb2@mail.gmail.com> <2b1c8c4f0810131533x430b6254x140691ce2a934942@mail.gmail.com>
Message-ID: <2b1c8c4f0810140156n64a963f8v1ccc4d8dfdfc74c3@mail.gmail.com>

> Please understand, it *does not* sum the duplicates.
As I illustrated > before, the duplicates are carried over to the CSR format. It's just > that CSR->dense *does* sum duplicates. No, I do understand. This is why I said 'implicitly'. The CSR keeps the duplicates, but then always behaves as if they'd been summed. To the user, therefore, the effect is the same. > I agree that sum_duplicates=False is somewhat ambiguous, do you have a > suggestion for how this could be made more clear? For instance, would > an interface like: > coo_matrix.tocsr(duplicates='sum') > coo_matrix.tocsr(duplicates='last') > coo_matrix.tocsr(duplicates='max') > be preferred? If I understand correctly, you'd want to use > .tocsr(duplicates='last'). I'm not sure it's worth you having to implement something which i'm not sure that many people really need. I don't want scipy.sparse to get feature-itis. I'd be happy if the sum_duplicates parameter was removed altogether, with the standard behaviour being the one for sum_duplicates=True. Then just state clearly in the docstring what that behaviour is. > Another question is whether we want to put this in the COO->CSR (and > CSC) conversions. At this point, I think COO->CSR should *always* sum > duplicates together and we should instead provide a separate function > or member function of coo_matrix that provides additional options, > like 'last', 'max', etc. In general, any binary operator (T,T) -> T > could be used as an accumulator, but we would provide the most common > options. This seems fine, but I don't in general like modal options as they tend to be bug-prone. Maybe a separate member of coo_matrix called 'merge_duplicates' which would apply some operation in-place on coo_matrix where the user could specify 'sum', 'max', 'first', 'last', etc. Thanks, James From jdh2358 at gmail.com Tue Oct 14 06:35:06 2008 From: jdh2358 at gmail.com (John Hunter) Date: Tue, 14 Oct 2008 05:35:06 -0500 Subject: [SciPy-user] Record Array: How to add a column? In-Reply-To: <3d375d730810131741w1643b60br88623f418c8305d3@mail.gmail.com> References: <48F3E642.4040001@cornell.edu> <3d375d730810131741w1643b60br88623f418c8305d3@mail.gmail.com> Message-ID: <88e473830810140335y7686d87m9254587f55aaac84@mail.gmail.com> On Mon, Oct 13, 2008 at 7:41 PM, Robert Kern wrote: > This is somewhat more straightforward: > > http://projects.scipy.org/pipermail/numpy-discussion/2007-September/029357.html I took Robert's suggestion from the link above and added rec_append_fields to matplotlib.mlab -- I think it may have been called rec_append_field in 0.98.3, but we altered it in svn HEAD to support multiple column adds. There are a number of nice helper functions for recarrays there * rec2txt : pretty print a record array * rec2csv : store record array in CSV file * csv2rec : import record array from CSV file with type inspection * rec_append_fields: adds field(s)/array(s) to record array * rec_drop_fields : drop fields from record array * rec_join : join two record arrays on sequence of fields * rec_groupby : summarize data by groups (similar to SQL GROUP BY) * rec_summarize : helper code to filter rec array fields into new fields rec_join is really nice -- supports inner and outer joins with default fill values and customizable postfixing of column names when joining two record arrays with identically named fields. 
Here is an example showing many of these functions in action

"""
Illustrate the rec array utility functions by loading prices from a
csv file, computing the daily returns, appending the results to the
record arrays, joining on date
"""
import urllib
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab

# grab the price data off yahoo
u1 = urllib.urlretrieve('http://ichart.finance.yahoo.com/table.csv?s=AAPL&d=9&e=14&f=2008&g=d&a=8&b=7&c=1984&ignore=.csv')
u2 = urllib.urlretrieve('http://ichart.finance.yahoo.com/table.csv?s=GOOG&d=9&e=14&f=2008&g=d&a=8&b=7&c=1984&ignore=.csv')

# load the CSV files into record arrays
r1 = mlab.csv2rec(file(u1[0]))
r2 = mlab.csv2rec(file(u2[0]))

# compute the daily returns and add these columns to the arrays
gains1 = np.zeros_like(r1.adj_close)
gains2 = np.zeros_like(r2.adj_close)
gains1[1:] = np.diff(r1.adj_close)/r1.adj_close[:-1]
gains2[1:] = np.diff(r2.adj_close)/r2.adj_close[:-1]
r1 = mlab.rec_append_fields(r1, 'gains', gains1)
r2 = mlab.rec_append_fields(r2, 'gains', gains2)

# now join them by date; the default postfixes are 1 and 2
r = mlab.rec_join('date', r1, r2)

# long appl, short goog
g = r.gains1-r.gains2
tr = (1+g).cumprod()  # the total return

# plot the return
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(r.date, tr)
ax.set_title('total return: long appl, short goog')
ax.grid()
fig.autofmt_xdate()
plt.show()

From hwchen.mailman at gmail.com Tue Oct 14 11:02:09 2008
From: hwchen.mailman at gmail.com (Huang-Wen Chen)
Date: Tue, 14 Oct 2008 11:02:09 -0400
Subject: [SciPy-user] Record Array: How to add a column?
In-Reply-To: <88e473830810140335y7686d87m9254587f55aaac84@mail.gmail.com>
References: <48F3E642.4040001@cornell.edu> <3d375d730810131741w1643b60br88623f418c8305d3@mail.gmail.com> <88e473830810140335y7686d87m9254587f55aaac84@mail.gmail.com>
Message-ID: <368e8c230810140802x4a973009l749c1da77a79c57d@mail.gmail.com>

On Tue, Oct 14, 2008 at 6:35 AM, John Hunter wrote:
> I took Robert's suggestion from the link above and added
> rec_append_fields to matplotlib.mlab -- I think it may have been
> called rec_append_field in 0.98.3, but we altered it in svn HEAD to
> support multiple column adds. There are a number of nice helper
> functions for recarrays there
>
> Here is an example showing many of these functions in action

The example is amazing. It's so simple and does an elegant job, but I think it requires some modification because the size of the two datasets is different:

In [68]: r1.shape
Out[68]: (6080,)

In [69]: r2.shape
Out[69]: (1046,)

Maybe we need to do some trimming in the example such as:

l = min(r1.shape[0], r2.shape[0])
r1,r2=r1[:l],r2[:l]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From fredmfp at gmail.com Tue Oct 14 11:40:59 2008
From: fredmfp at gmail.com (fred)
Date: Tue, 14 Oct 2008 17:40:59 +0200
Subject: [SciPy-user] 64 bit and 32 bit on Ubuntu
In-Reply-To:
References:
Message-ID: <48F4BD8B.8070408@gmail.com>

Anne Archibald a écrit :

> I think memory is the only major reason to prefer 64 bits. The speed
> difference will probably be minimal. But having a 64-bit address space
> means that if your applications get too big for RAM they just slow
> down, as opposed to crashing and forcing you to rewrite them. You will
> also find, I think, that running in 32-bit mode you cannot access the
> last gigabyte or so of physical RAM. This can make a big difference
> too.

I have computed interpolation on 3D data (50x50x50) using the kriging method.
On the same computer (Core2 Duo), the same OS (debian lenny), same configuration (same packages), same compiler (f2py with ifort 10.1), same options (-xT on both arch), I get:

on i686 arch   : 3358 seconds
on x86_64 arch : 2130 seconds

I don't say this is relevant, or anything like that. I only say: that's it ;-)

However, I would be happy to hear some comment/objection about this result.

Only my 2 cts.

Cheers,

-- Fred

From jdh2358 at gmail.com Tue Oct 14 11:53:08 2008
From: jdh2358 at gmail.com (John Hunter)
Date: Tue, 14 Oct 2008 10:53:08 -0500
Subject: [SciPy-user] Record Array: How to add a column?
In-Reply-To: <368e8c230810140802x4a973009l749c1da77a79c57d@mail.gmail.com>
References: <48F3E642.4040001@cornell.edu> <3d375d730810131741w1643b60br88623f418c8305d3@mail.gmail.com> <88e473830810140335y7686d87m9254587f55aaac84@mail.gmail.com> <368e8c230810140802x4a973009l749c1da77a79c57d@mail.gmail.com>
Message-ID: <88e473830810140853o5e6d5e99g5cbf04a0989febf5@mail.gmail.com>

On Tue, Oct 14, 2008 at 10:02 AM, Huang-Wen Chen wrote:
> The example is amazing. It's so simple and does an elegant job, but I think it
> requires some modification because the size of the two datasets is
> different:
>
> In [68]: r1.shape
> Out[68]: (6080,)
>
> In [69]: r2.shape
> Out[69]: (1046,)

No, that is one of the main points of the example in the call to rec_join -- it does an inner join (intersection) aligned by date. Since it is an inner join, dates in one that are not in the other are dropped. It can also do an outer join (union) using the "jointype" keyword arg to rec_join.

But thanks for the kind words on the example -- I agree. Record arrays are really powerful data structures and with some of the functions in mlab, which hopefully will end up in some form in numpy eventually, you have much of the flexibility of sql tables. See for example 'group by' using record arrays in rec_groupby_demo.py at

http://matplotlib.svn.sourceforge.net/viewvc/matplotlib/trunk/matplotlib/examples/misc/

JDH

From cmac at mit.edu Tue Oct 14 12:19:21 2008
From: cmac at mit.edu (Christopher MacMinn)
Date: Tue, 14 Oct 2008 12:19:21 -0400
Subject: [SciPy-user] ImportError: cannot import name Tester
Message-ID: <05AECBC4-87D0-470E-A9C6-5FE9DB851E0C@mit.edu>

Hi folks -

I just installed Scipy 0.7.0 build 4797, and import is failing:

# ---------------------------------------------------------

In [3]: import scipy
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)

/Library/Python/2.5/site-packages/<ipython console> in <module>()

/Library/Python/2.5/site-packages/scipy/__init__.py in <module>()
     86 __doc__ += pkgload.get_pkgdocs()
     87
---> 88 from numpy.testing import Tester
     89 test = Tester().test
     90 bench = Tester().bench

ImportError: cannot import name Tester

In [4]:

# ---------------------------------------------------------

I imagine I can fix this by commenting out the troublesome lines in __init__.py, but perhaps it is a bug in this build? I did not have this problem with build 4786.

I'm running Python 2.5 and numpy 1.2.0 on Mac OS 10.5.

Best,
Chris MacMinn
From robert.kern at gmail.com Tue Oct 14 12:23:17 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 14 Oct 2008 11:23:17 -0500
Subject: [SciPy-user] ImportError: cannot import name Tester
In-Reply-To: <05AECBC4-87D0-470E-A9C6-5FE9DB851E0C@mit.edu>
References: <05AECBC4-87D0-470E-A9C6-5FE9DB851E0C@mit.edu>
Message-ID: <3d375d730810140923k4f715d15o6fab2d0f419b0af3@mail.gmail.com>

On Tue, Oct 14, 2008 at 11:19, Christopher MacMinn wrote:
> Hi folks -
>
> I just installed Scipy 0.7.0 build 4797, and import is failing:
>
> In [3]: import scipy
> ---------------------------------------------------------------------------
> ImportError                               Traceback (most recent call last)
>
> /Library/Python/2.5/site-packages/<ipython console> in <module>()
>
> /Library/Python/2.5/site-packages/scipy/__init__.py in <module>()
>      86 __doc__ += pkgload.get_pkgdocs()
>      87
> ---> 88 from numpy.testing import Tester
>      89 test = Tester().test
>      90 bench = Tester().bench
>
> ImportError: cannot import name Tester
>
> I imagine I can fix this by commenting out the troublesome lines in
> __init__.py, but perhaps it is a bug in this build? I did not have
> this problem with build 4786.
>
> I'm running Python 2.5 and numpy 1.2.0 on Mac OS 10.5.

Hmm, numpy 1.2.0 should have numpy.testing.Tester. Can you double-check that that is the numpy which is actually getting picked up?

-- Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
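A quick way to answer Robert's question from a python prompt; these attributes exist on any numpy installation:

    import numpy
    print numpy.__version__   # which release is being imported
    print numpy.__file__      # which installation it comes from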
In-Reply-To: <200810141228.42850.pgmdevlist@gmail.com>
References: <48F3E642.4040001@cornell.edu>
	<368e8c230810140802x4a973009l749c1da77a79c57d@mail.gmail.com>
	<88e473830810140853o5e6d5e99g5cbf04a0989febf5@mail.gmail.com>
	<200810141228.42850.pgmdevlist@gmail.com>
Message-ID: <88e473830810140955u6f5d830cueb24a3f4bc980458@mail.gmail.com>

On Tue, Oct 14, 2008 at 11:28 AM, Pierre GM wrote:
> John,
> Do you plan to have your modifications part of numpy.records ? In any case,
> I'll try to check whether it is easy to add support to missing data:
> MaskedArrays should now support flexible-types.

I do not have concrete plans, but I have spoken with Jarrod about
moving some of these over, making some of them record array methods,
others available in the np.rec namespace.  I think the consensus is
that these are useful and belong in numpy, but we are awaiting someone
to do the port.

On the subject of masked record arrays.  We added masked array support
to mlab.csv2rec some time ago and it has caused no shortage of
headaches because of differences in the interface for objects for
masked record arrays and regular recarrays.  The following example
shows a record array with a 'date' column which is an O4 python object
type.  Here is the behavior of the recarray

In [212]: !cat test1.csv
date,age,name
2008-01-01,10,'tom'
2008-01-02,11,'dick'
2008-01-03,12,'harry'

In [213]: r1 = mlab.csv2rec('test1.csv')

In [214]: type(r1)
Out[214]: <class 'numpy.core.records.recarray'>

In [215]: r1.dtype
Out[215]: dtype([('date', '|O4'), ('age', '<i4'), ('name', '|S7')])

In the next example, the data file has a missing value on the last row
in the 'age' column, so we return a masked record array

In [217]: !cat test2.csv
date,age,name
2008-01-01,10,'tom'
2008-01-02,11,'dick'
2008-01-03,,'harry'

In [218]: type(r2)
Out[218]: <class 'numpy.ma.mrecords.MaskedRecords'>

In [219]: print r2.dtype
[('date', '|O4'), ('age', '<i4'), ('name', '|S7')]

In [220]: r2[0].date.year
------------------------------------------------------------
Traceback (most recent call last):
  File "<ipython console>", line 1, in ?
AttributeError: 'MaskedArray' object has no attribute 'year'

It would help us a lot in this regard if we could access the
underlying object.  Is there a reason why the masked array behaves
differently when it comes to accessing the underlying object methods
and is there a sensible way to make them compatible?

Thanks,
JDH

From pgmdevlist at gmail.com Tue Oct 14 13:14:22 2008
From: pgmdevlist at gmail.com (Pierre GM)
Date: Tue, 14 Oct 2008 13:14:22 -0400
Subject: [SciPy-user] Record Array: How to add a column?
In-Reply-To: <88e473830810140955u6f5d830cueb24a3f4bc980458@mail.gmail.com>
References: <48F3E642.4040001@cornell.edu>
	<200810141228.42850.pgmdevlist@gmail.com>
	<88e473830810140955u6f5d830cueb24a3f4bc980458@mail.gmail.com>
Message-ID: <200810141314.22369.pgmdevlist@gmail.com>

John,
MaskedRecords have always been the poor lost child of MaskedArrays. In 1.3.x
and 1.2.1, I tried to improve the support of flexible-type, and MaskedRecords
should follow. Of course, I can only check whether it works in the cases I
need. Could you send me a self-contained example showing what you expect
(example + class definitions...) ?

My guess here is that when you query the field date on a standard 'recarray',
you get a date object, which has a 'year' attribute. However, when you query
it as a MaskedRecords, the 'date' column is transformed into a MaskedArray
(in order to handle missing data), and then you can't directly access the
'year' attribute: you'd need to access 'date._data.year' instead. Once again,
that's a very first guess, I'd need to do some testing.

On a side note, TimeSeries objects support flexible type as well, if you
catch my drift...
P.
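P.S. A tiny, untested sketch of what I mean by going through '_data' (the
datetime.date objects here just stand in for whatever csv2rec produces;
this mimics only the object-dtype column, not the full MaskedRecords
machinery):

import datetime
import numpy.ma as ma

dates = ma.array([datetime.date(2008, 1, 1), datetime.date(2008, 1, 2)],
                 mask=[False, True], dtype=object)
# If an element comes back wrapped in a MaskedArray (as in your traceback),
# the attribute lookup fails; the underlying ndarray sidesteps the wrapper:
print dates._data[0].year   # -> 2008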
> In the next example, the data file has a missing value on the last row
> in the 'age' column, so we return a masked record array
>
> In [217]: !cat test2.csv
> date,age,name
> 2008-01-01,10,'tom'
> 2008-01-02,11,'dick'
> 2008-01-03,,'harry'
>
> In [218]: type(r2)
> Out[218]: <class 'numpy.ma.mrecords.MaskedRecords'>
>
> In [219]: print r2.dtype
> [('date', '|O4'), ('age', '<i4'), ('name', '|S7')]
>
> In [220]: r2[0].date.year
> ------------------------------------------------------------
> Traceback (most recent call last):
>   File "<ipython console>", line 1, in ?
> AttributeError: 'MaskedArray' object has no attribute 'year'
>
> It would help us a lot in this regard if we could access the
> underlying object.  Is there a reason why the masked array behaves
> differently when it comes to accessing the underlying object methods
> and is there a sensible way to make them compatible?
>
> Thanks,
> JDH
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From oliphant at enthought.com Tue Oct 14 18:55:44 2008
From: oliphant at enthought.com (Travis E. Oliphant)
Date: Tue, 14 Oct 2008 17:55:44 -0500
Subject: [SciPy-user] Record Array: How to add a column?
In-Reply-To: <88e473830810140853o5e6d5e99g5cbf04a0989febf5@mail.gmail.com>
References: <48F3E642.4040001@cornell.edu>
	<3d375d730810131741w1643b60br88623f418c8305d3@mail.gmail.com>
	<88e473830810140335y7686d87m9254587f55aaac84@mail.gmail.com>
	<368e8c230810140802x4a973009l749c1da77a79c57d@mail.gmail.com>
	<88e473830810140853o5e6d5e99g5cbf04a0989febf5@mail.gmail.com>
Message-ID: <48F52370.80609@enthought.com>

John Hunter wrote:
>
> No, that is one of the main points of the example in the call to
> rec_join -- it does an inner join (intersection) aligned by date.
> Since it is an inner join, dates in one that are not in the other are
> dropped.  It can also do an outer join (union) using the "jointype"
> keyword arg to rec_join.
>
> But thanks for the kind words on the example -- I agree.  Record
> arrays are really powerful data structures and with some of the
> functions in mlab, which hopefully will end up in some form in numpy
>
I hope so too.   These are good examples, John.   Thanks for the
continual high-quality code.

-Travis

From dcday137 at gmail.com Tue Oct 14 19:46:22 2008
From: dcday137 at gmail.com (Collin Day)
Date: Tue, 14 Oct 2008 17:46:22 -0600
Subject: [SciPy-user] Difference between ffts? - plots of what i am getting.
Message-ID: <20081014174622.128f368b@Krypton.homenet>

No problem.  Sorry for not replying directly - I get the digests so my
mail box remains uncluttered.  Anyway, I have attached two files.
Following are the lines of code I used:

import scipy as S
import scipy.fftpack as SF
a=zeros(1000)
b=S.fft(a)
plot(abs(b))
c=SF.fft(a)
figure()
plot(c)

Hope this helps.  Thanks for looking at it!

-C
-------------- next part --------------
A non-text attachment was scrubbed...
Name: S_fft_b.png
Type: image/png
Size: 25934 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: SF_fft_c.png
Type: image/png
Size: 15114 bytes
Desc: not available
URL: 

From dcday137 at gmail.com Tue Oct 14 19:51:34 2008
From: dcday137 at gmail.com (Collin Day)
Date: Tue, 14 Oct 2008 17:51:34 -0600
Subject: [SciPy-user] Correction Difference between ffts? - plots of what i am getting.
Message-ID: <20081014175134.3875433d@Krypton.homenet>

Oops - I meant plot(abs(c)) for the code on that last plot statement.

Thanks again.
-C

From robert.kern at gmail.com Tue Oct 14 20:53:13 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 14 Oct 2008 19:53:13 -0500
Subject: [SciPy-user] Difference between ffts? - plots of what i am getting.
In-Reply-To: <20081014174622.128f368b@Krypton.homenet>
References: <20081014174622.128f368b@Krypton.homenet>
Message-ID: <3d375d730810141753y7ace1e0i34bad7eb4e055c60@mail.gmail.com>

On Tue, Oct 14, 2008 at 18:46, Collin Day wrote:
> No problem.  Sorry for not replying directly - I get the digests so my
> mail box remains uncluttered.  Anyway, I have attached two files.
> Following are the lines of code I used:

Hmm. Did you build scipy with any special FFT libraries?

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From david at ar.media.kyoto-u.ac.jp Tue Oct 14 21:55:14 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Wed, 15 Oct 2008 10:55:14 +0900
Subject: [SciPy-user] Difference between ffts? - plots of what i am getting.
In-Reply-To: <20081014174622.128f368b@Krypton.homenet>
References: <20081014174622.128f368b@Krypton.homenet>
Message-ID: <48F54D82.5000005@ar.media.kyoto-u.ac.jp>

Collin Day wrote:
> No problem.  Sorry for not replying directly - I get the digests so my
> mail box remains uncluttered.  Anyway, I have attached two files.
> Following are the lines of code I used:
>

Hi Collin,

    Would it be possible to provide the exact script which produces the
graph ? One graph looks like a log graph, the other linear, and I can't
see how it would be possible for both graphs to be produced by the above
script, even assuming bugs in scipy ?

    Also, to know which fft was used, could you execute the following
command and give us the result:

import scipy
scipy.show_config()

thanks,

David

From dcday137 at gmail.com Tue Oct 14 23:38:30 2008
From: dcday137 at gmail.com (Collin Day)
Date: Wed, 15 Oct 2008 03:38:30 +0000 (UTC)
Subject: [SciPy-user] =?utf-8?q?Difference_between_ffts=3F_-_plots_of_what?=
	=?utf-8?q?_i_am=09getting=2E?=
References: <20081014174622.128f368b@Krypton.homenet>
	<48F54D82.5000005@ar.media.kyoto-u.ac.jp>
Message-ID: 

David Cournapeau ar.media.kyoto-u.ac.jp> writes:

> 
> Collin Day wrote:
> > No problem.  Sorry for not replying directly - I get the digests so my
> > mail box remains uncluttered.  Anyway, I have attached two files.
> > Following are the lines of code I used:
> >
> 
> Hi Collin,
> 
>     Would it be possible to provide the exact script which produces the
> graph ? One graph looks like a log graph, the other linear, and I can't
> see how it would be possible for both graphs to be produced by the above
> script, even assuming bugs in scipy ?
> 
>     Also, to know which fft was used, could you execute the following
> command and give us the result:
> 
> import scipy
> scipy.show_config()
> 
> thanks,
> 
> David
> 

Sure.  The following script will produce the plots:

#!/usr/bin/python

from pylab import *
import scipy as S
import scipy.fftpack as SF

a=zeros(1000)
a[:100]=1

b=S.fft(a)
c=SF.fft(a)

figure(1)
plot(abs(b))
show(1)

figure(2)
plot(abs(c))
show(2)

and scipy was compiled using the FFTW library.  The only thing I can
think of would be to recompile not using it.  Hope that helps.  I really
hope I have not done something stupid and missed it.  Anyway, thanks
again everyone!
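One more thought: printing a direct numerical comparison from the same
script might separate a real difference from a plotting artifact (just a
sketch, with b and c as defined above):

print abs(b - c).max()   # ~machine precision if the two ffts agree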
-C From dcday137 at gmail.com Tue Oct 14 23:44:55 2008 From: dcday137 at gmail.com (Collin Day) Date: Wed, 15 Oct 2008 03:44:55 +0000 (UTC) Subject: [SciPy-user] =?utf-8?q?Difference_between_ffts=3F_-_plots_of_what?= =?utf-8?q?_i_am=09getting=2E?= References: <20081014174622.128f368b@Krypton.homenet> <48F54D82.5000005@ar.media.kyoto-u.ac.jp> Message-ID: David Cournapeau ar.media.kyoto-u.ac.jp> writes: > > Collin Day wrote: > > No problem. Sorry for not replying directly - I get the digests so my > > mail box remains uncluttered. Anyway, I have attached two files. > > Following are the lines of code I used: > > > > Hi Collin, > > Would it be possible to provide the exact script which produces the > graph ? One graph looks like a log graph, the other linear, and I can't > see how it would be possible for both graphs to be produced by the above > script, even assuming bugs in scipy ? > > Also, to know which fft was used, could you execute the following > command and give us the result: > > import scipy > scipy.show_config() > > thanks, > > David > Sorry - forgot to post the following: lapack_info: libraries = ['lapack'] library_dirs = ['/usr/lib'] language = f77 lapack_opt_info: libraries = ['lapack', 'blas'] library_dirs = ['/usr/lib'] language = f77 define_macros = [('NO_ATLAS_INFO', 1)] umfpack_info: libraries = ['umfpack', 'amd'] library_dirs = ['/usr/lib'] define_macros = [('SCIPY_UMFPACK_H', None), ('SCIPY_AMD_H', None)] swig_opts = ['-I/usr/include', '-I/usr/include'] include_dirs = ['/usr/include'] blas_info: libraries = ['blas'] library_dirs = ['/usr/lib'] language = f77 atlas_threads_info: NOT AVAILABLE fftw2_info: libraries = ['rfftw', 'fftw'] library_dirs = ['/usr/lib'] define_macros = [('SCIPY_FFTW_H', None)] include_dirs = ['/usr/include'] atlas_blas_info: NOT AVAILABLE djbfft_info: NOT AVAILABLE atlas_blas_threads_info: NOT AVAILABLE fftw3_info: NOT AVAILABLE blas_mkl_info: NOT AVAILABLE amd_info: libraries = ['amd'] library_dirs = ['/usr/lib'] define_macros = [('SCIPY_AMD_H', None)] swig_opts = ['-I/usr/include'] include_dirs = ['/usr/include'] blas_opt_info: libraries = ['blas'] library_dirs = ['/usr/lib'] language = f77 define_macros = [('NO_ATLAS_INFO', 1)] atlas_info: NOT AVAILABLE lapack_mkl_info: NOT AVAILABLE mkl_info: NOT AVAILABLE From sahar at cmt.co.il Wed Oct 15 04:49:32 2008 From: sahar at cmt.co.il (Sahar Vilan) Date: Wed, 15 Oct 2008 10:49:32 +0200 Subject: [SciPy-user] saving raw image In-Reply-To: Message-ID: I used scipy for basic image processing of raw images, and I can't save these images in the same format: To open image I use: s = file(path, 'rb').read() raw = fromstring(s, uint16).astype(int) I try to save this matrix as raw image again but I get some garbage when I open it in some viewer (Image-J, for instance). Can anyone help me with this? Thanks, Sahar ******************************************************************************************************* This e-mail message may contain confidential,and privileged information or data that constitute proprietary information of CMT Medical Ltd. Any review or distribution by others is strictly prohibited. If you are not the intended recipient you are hereby notified that any use of this information or data by any other person is absolutely prohibited. If you are not the intended recipient, please delete all copies. Thank You. 
http://www.cmt.co.il ******************************************************************************************************** ************************************************************************************ This footnote confirms that this email message has been scanned by PineApp Mail-SeCure for the presence of malicious code, vandals & computer viruses. ************************************************************************************ From zachary.pincus at yale.edu Wed Oct 15 10:20:44 2008 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Wed, 15 Oct 2008 10:20:44 -0400 Subject: [SciPy-user] saving raw image In-Reply-To: References: Message-ID: <68B3EEDF-FB62-40B5-944A-D85B4E28A0B5@yale.edu> Hi Sahar, Please send a minimal example of loading and saving an array that produces "garbage" output, if you could... (If you could send a small input file that would be helpful too.) Also, does the garbage you see in ImageJ have some structure -- does it look streaky, like you can see rows of pixels that should be together, but the rows don't line up right? One possible problem is that the array you are saving has been promoted to a different dtype (e.g. by participating in signed/float arithmetic), so the pixels you save are no longer uint16s. Unless you have an 'astype(uint16)' in your save code, this could be the issue, since the loading code you showed does immediately convert the array from uint16s. (This is why sending a complete example of the failure is useful...) Another possible problem has to do with the order the pixels are read/ written out. Typically, images on disk are stored as rows of pixels next to one another -- this is "column major" or "fortran" order (going from one memory location to the next typically increments the x- value, except at row boundaries where the y-value is incremented, so it is said that the x-value "varies the fastest"). Typically, numpy arrays are created in row major or "C" order, where the y-value varies the fastest. When loading and manipulating images, you need either to make sure that the images are loaded in fortran-order, or reverse the x/y coordinates and shape. So, e.g. when loading a 200x300 image: s = file(path, 'rb').read() raw = fromstring(s, uint16).astype(int) image = raw.reshape((200,300), order='F') now, image[30,40] gives the same pixel as coordinate (30,40) in ImageJ. If you did: image = raw.reshape((300,200), order='C') then image[30,40] would give the same pixel as coordinate (40,30) in ImageJ. Note that the 'C' order is default. So: image = raw.reshape((200,300)) will give garbage, with the rows of pixels broken up along the wrong boundaries, giving rise to the "streaky" images I mentioned. Finally, the tostring() method also takes an order option, so for saving images, you need: raw = image.tostring(order='F') (assuming that you reshaped the image as order 'F') Zach Pincus On Oct 15, 2008, at 4:49 AM, Sahar Vilan wrote: > I used scipy for basic image processing of raw images, and I can't > save > these images in the same format: > To open image I use: > s = file(path, 'rb').read() > raw = fromstring(s, uint16).astype(int) > > I try to save this matrix as raw image again but I get some garbage > when I > open it in some viewer (Image-J, for instance). > Can anyone help me with this? > > Thanks, > Sahar From elcorto at gmx.net Wed Oct 15 11:12:03 2008 From: elcorto at gmx.net (Steve Schmerler) Date: Wed, 15 Oct 2008 17:12:03 +0200 Subject: [SciPy-user] npfile deprecated? 
Message-ID: <20081015151203.GA7878@ramrod.starsheriffs.de>

Hi all

Since I'm currently in the need to store and read binary data a lot,
I chose npfile. Since [1] didn't mention it, I attempted to
update the doc here. I added scipy.io.npfile, numpy.load, numpy.save
and numpy.savez. At work, I still use 0.6.0. However, I found that in
the current svn HEAD, the scipy.io.npfile docstring says that it is
deprecated and I should use ndarray.tofile() and numpy.fromfile()
instead. But with the latter, I can't control byteorder, row- or
column-major order etc. That's a bit puzzling.

I admit that I haven't followed the development of the new numpy file
format very closely (since npfile worked nicely so far). So, is the
.npy/.npz way the current standard to store platform-independent binary
arrays with all metainfo?

[1] http://scipy.org/Cookbook/InputOutput

best,
steve

From scotta_2002 at yahoo.com Wed Oct 15 12:28:16 2008
From: scotta_2002 at yahoo.com (Scott Askey)
Date: Wed, 15 Oct 2008 09:28:16 -0700 (PDT)
Subject: [SciPy-user] does scipy.signal.lti support mimo ?
Message-ID: <14420.43490.qm@web36504.mail.mud.yahoo.com>

Does scipy.signal.lti.ss2tf support multi-input/multi-output?  When I use
ss2tf(A,B,C,D) it provides a proper transfer function if and only if A is
N*N, B is N*1, C is 1*N and D is 1*1.

Is there any better documentation or a tutorial for the LTI suite?

V/R

Scott

From wesmckinn at gmail.com Wed Oct 15 13:45:20 2008
From: wesmckinn at gmail.com (Wes McKinney)
Date: Wed, 15 Oct 2008 13:45:20 -0400
Subject: [SciPy-user] Matlab can interfere with Python?
Message-ID: <6c476c8a0810151045wddcef6fgab1b98d9b4f8578a@mail.gmail.com>

I have experienced this bizarre problem on two of my coworkers' machines,
both of whom run Matlab (7.6.0 rev 2008a) and whom I have been teaching to
use Python/Numpy/Scipy. Occasionally Matlab will somehow prevent Python
from linking with various DLLs or severely inhibit the execution of code.
I am not really sure how this could be (other than the Borg at the
MathWorks not wanting anyone to use open source software), but has anyone
else experienced this? As soon as I killed Matlab the problem disappeared.

Here is the linkage output from a Cython dll that was killed by this:

C:\MinGW\bin\gcc.exe -mno-cygwin -shared -s
build\temp.win32-2.5\Release\tseries.o
build\temp.win32-2.5\Release\tseries.def -LC:\Python25\libs
-LC:\Python25\PCBuild -lpython25 -lmsvcr71 -o tseries.pyd

Maybe it's locking up the Windows C runtime msvcr71?

Any help here would be appreciated.

Thanks,
Wes
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nwagner at iam.uni-stuttgart.de Wed Oct 15 13:52:20 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 15 Oct 2008 19:52:20 +0200
Subject: [SciPy-user] linalg.lstsq
Message-ID: 

Hi all,

I have a question concerning linalg.lstsq.
AFAIK, the residues of linalg.lstsq should be real in
case of complex matrices.
Am I missing something ?
from scipy.linalg import lstsq from scipy import rand m = 9 n = 5 A = rand(m,n)+1j*rand(m,n) b = rand(m) x,residues,rank,s = lstsq(A,b) print 'residues',residues # should be real m = 9 n = 9 A = rand(m,n)+1j*rand(m,n) b = rand(m) x,residues,rank,s = lstsq(A,b) print 'residues',residues # should be real order of machine precision Nils From 302302 at centrum.cz Wed Oct 15 14:21:19 2008 From: 302302 at centrum.cz (302302) Date: Wed, 15 Oct 2008 20:21:19 +0200 Subject: [SciPy-user] Time series graph In-Reply-To: <200810152021.7089@centrum.cz> References: <200810152017.29054@centrum.cz> <200810152018.13925@centrum.cz> <200810152019.29624@centrum.cz> <200810152020.8881@centrum.cz> <200810152021.7089@centrum.cz> Message-ID: <200810152021.6780@centrum.cz> Hi, I need to plot a time series (the data in the database has around 10000 rows) and I need dynamically zoom and move with them. Something like http://finance.google.com/finance?q=INDEXDJX:.DJI%20INDEXNASDAQ:.IXIC%20INDEXSP:.INX Is there any efficient way how to do it with scipy or matplotlib? Thanks Cenek From robert.kern at gmail.com Wed Oct 15 15:07:27 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 15 Oct 2008 14:07:27 -0500 Subject: [SciPy-user] npfile deprecated? In-Reply-To: <20081015151203.GA7878@ramrod.starsheriffs.de> References: <20081015151203.GA7878@ramrod.starsheriffs.de> Message-ID: <3d375d730810151207i6efe0f6bwed94e65eeeae79ec@mail.gmail.com> On Wed, Oct 15, 2008 at 10:12, Steve Schmerler wrote: > Hi all > > Since I'm currently in the need to store and read binary data a lot, > I choose npfile. Since [1] didn't mention it, I attempted to > update the doc here. I added scipy.io.npfile, numpy.load, numpy.save > and numpy.savez. At work, I still use 0.6.0. However, I found that in > the current svn HEAD, the scipy.io.npfile docstring says that it is > deprecated and I should use ndarray.tofile() and numpy.fromfile() > instead. But with the latter, I can't control byteorder, row- or > column-major order etc. That's a bit puzzling. > > I admitt that I haven't followed the development of the new numpy file > format very closely (since npfile worked nicely so far). So, is the > .npy/.npz way the current standard to store platform-independent binary > arrays with all metainfo? Yup. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From pgmdevlist at gmail.com Wed Oct 15 16:04:20 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Wed, 15 Oct 2008 16:04:20 -0400 Subject: [SciPy-user] Time series graph In-Reply-To: <200810152021.6780@centrum.cz> References: <200810152017.29054@centrum.cz> <200810152021.7089@centrum.cz> <200810152021.6780@centrum.cz> Message-ID: <200810151604.20704.pgmdevlist@gmail.com> Cenek, Try the scikits.timeseries package (http://pytseries.sourceforge.net/installing.html). One of its module (sciktis.timeseries.lib.plotlib) implements plotting capacities on top of matplotlib, and supports an update of ticks depending on the zoom level. On Wednesday 15 October 2008 14:21:19 302302 wrote: > Hi, > > I need to plot a time series (the data in the database has around 10000 > rows) and I need dynamically zoom and move with them. Something like > http://finance.google.com/finance?q=INDEXDJX:.DJI%20INDEXNASDAQ:.IXIC%20IND >EXSP:.INX Is there any efficient way how to do it with scipy or matplotlib? 
> > Thanks Cenek > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From strawman at astraw.com Wed Oct 15 16:49:28 2008 From: strawman at astraw.com (Andrew Straw) Date: Wed, 15 Oct 2008 13:49:28 -0700 Subject: [SciPy-user] Matlab can interfere with Python? In-Reply-To: <6c476c8a0810151045wddcef6fgab1b98d9b4f8578a@mail.gmail.com> References: <6c476c8a0810151045wddcef6fgab1b98d9b4f8578a@mail.gmail.com> Message-ID: <48F65758.4000302@astraw.com> Just to be clear, it seems like you have experienced problems with 2 things: 1) While doing the link step to create a .dll (well, .pyd file) when running MinGW's GCC on a .c file, the linking fails due to an error. (The .c file being autogenerated by Cython when compiling a .pyx file.) If this is correct, what do you mean by "prevent Python from linking" -- Python is not executing in this case. What is the exact gcc error message? Does gcc not find the python25.dll? Does gcc fail to link other files that have nothing to do with python (i.e. they don't use symbols from python25.dll)? 2) "severly inhibit execution of code" -- are the symptoms any different than the CPU being busy or the RAM being full? Have you checked those things? Still just trying to understand the nature of the problem, Andrew Wes McKinney wrote: > I have experienced this bizarre problem on two my coworker's machine > who run Matlab (7.6.0 rev 2008a) and I have been teaching to use > Python/Numpy/Scipy. Occasionally Matlab will somehow prevent Python > from linking with various DLLs or severely inhibit the execution of > code. I am not really sure how this could be (other than the Borg at > the MathWorks not wanting anyone to use open source software), but has > anyone else experienced this? As soon as I killed Matlab the problem > disappeared. > > Here is the linkage output from a Cython dll that was killed by this: > > C:\MinGW\bin\gcc.exe -mno-cygwin -shared -s > build\temp.win32-2.5\Release\tseries.o > build\temp.win32-2.5\Release\tseries.def -LC:\Python25\libs -LC:\P > ython25\PCBuild -lpython25 -lmsvcr71 -o tseries.pyd > > Maybe it's locking up the windows C runtime msvcr71? > > Any help here would be appreciated. > > Thanks, > Wes > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From wesmckinn at gmail.com Wed Oct 15 18:21:27 2008 From: wesmckinn at gmail.com (Wes McKinney) Date: Wed, 15 Oct 2008 18:21:27 -0400 Subject: [SciPy-user] Matlab can interfere with Python? In-Reply-To: <48F65758.4000302@astraw.com> References: <6c476c8a0810151045wddcef6fgab1b98d9b4f8578a@mail.gmail.com> <48F65758.4000302@astraw.com> Message-ID: <6c476c8a0810151521p3bca0c04m6ec32a45bb561110@mail.gmail.com> There's nothing wrong with the DLL build =) I was experiencing problems with unit-tested production code being ground to a halt when MATLAB is running in the background. In other words, I would execute a simple line of code in the Python shell and it would hang or take a very long time to execute. The instant MATLAB was closed (matlab not executing any code, just idling), Python stopped hanging and code would run as usual. (Seriously!) 
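To give a sense of what I mean by a "simple line of code", it was nothing
heavier than something like this (a made-up placeholder for illustration,
not our actual production code):

import numpy as np
x = np.random.random((1000, 1000))
print np.dot(x, x).sum()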
I listed the gcc linkage since I thought Matlab might be interfering somehow with the use of the Microsoft C runtime, but that doesn't make that much sense either because Matlab has it's own C runtime DLL. Thanks, Wes On Wed, Oct 15, 2008 at 4:49 PM, Andrew Straw wrote: > Just to be clear, it seems like you have experienced problems with 2 > things: > > 1) While doing the link step to create a .dll (well, .pyd file) when > running MinGW's GCC on a .c file, the linking fails due to an error. > (The .c file being autogenerated by Cython when compiling a .pyx file.) > If this is correct, what do you mean by "prevent Python from linking" -- > Python is not executing in this case. What is the exact gcc error > message? Does gcc not find the python25.dll? Does gcc fail to link other > files that have nothing to do with python (i.e. they don't use symbols > from python25.dll)? > > 2) "severly inhibit execution of code" -- are the symptoms any different > than the CPU being busy or the RAM being full? Have you checked those > things? > > Still just trying to understand the nature of the problem, > Andrew > > Wes McKinney wrote: > > I have experienced this bizarre problem on two my coworker's machine > > who run Matlab (7.6.0 rev 2008a) and I have been teaching to use > > Python/Numpy/Scipy. Occasionally Matlab will somehow prevent Python > > from linking with various DLLs or severely inhibit the execution of > > code. I am not really sure how this could be (other than the Borg at > > the MathWorks not wanting anyone to use open source software), but has > > anyone else experienced this? As soon as I killed Matlab the problem > > disappeared. > > > > Here is the linkage output from a Cython dll that was killed by this: > > > > C:\MinGW\bin\gcc.exe -mno-cygwin -shared -s > > build\temp.win32-2.5\Release\tseries.o > > build\temp.win32-2.5\Release\tseries.def -LC:\Python25\libs -LC:\P > > ython25\PCBuild -lpython25 -lmsvcr71 -o tseries.pyd > > > > Maybe it's locking up the windows C runtime msvcr71? > > > > Any help here would be appreciated. > > > > Thanks, > > Wes > > ------------------------------------------------------------------------ > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From elcorto at gmx.net Wed Oct 15 18:37:31 2008 From: elcorto at gmx.net (Steve Schmerler) Date: Thu, 16 Oct 2008 00:37:31 +0200 Subject: [SciPy-user] npfile deprecated? In-Reply-To: <3d375d730810151207i6efe0f6bwed94e65eeeae79ec@mail.gmail.com> References: <20081015151203.GA7878@ramrod.starsheriffs.de> <3d375d730810151207i6efe0f6bwed94e65eeeae79ec@mail.gmail.com> Message-ID: <20081015223731.GA8661@ramrod.starsheriffs.de> On Oct 15 14:07 -0500, Robert Kern wrote: > On Wed, Oct 15, 2008 at 10:12, Steve Schmerler wrote: [...] > > > > However, I found that in > > the current svn HEAD, the scipy.io.npfile docstring says that it is > > deprecated and I should use ndarray.tofile() and numpy.fromfile() > > instead. But with the latter, I can't control byteorder, row- or > > column-major order etc. That's a bit puzzling. > > > > I admitt that I haven't followed the development of the new numpy file > > format very closely (since npfile worked nicely so far). 
So, is the > > .npy/.npz way the current standard to store platform-independent binary > > arrays with all metainfo? > > Yup. > Ah OK, thanks. Regarding that docstrings, what about removing the lines "You can achieve the same effect as using npfile, using ndarray.tofile and numpy.fromfile." from the np.deprecate_with_doc() text? The tofile/fromfile route doesn't provide the same functionallity. It's confusing, since their docstrings even discourage their usage for cross-platform case. best, steve From robert.kern at gmail.com Wed Oct 15 18:49:43 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 15 Oct 2008 17:49:43 -0500 Subject: [SciPy-user] npfile deprecated? In-Reply-To: <20081015223731.GA8661@ramrod.starsheriffs.de> References: <20081015151203.GA7878@ramrod.starsheriffs.de> <3d375d730810151207i6efe0f6bwed94e65eeeae79ec@mail.gmail.com> <20081015223731.GA8661@ramrod.starsheriffs.de> Message-ID: <3d375d730810151549q525db796g848f29be0bd614f6@mail.gmail.com> On Wed, Oct 15, 2008 at 17:37, Steve Schmerler wrote: > On Oct 15 14:07 -0500, Robert Kern wrote: >> On Wed, Oct 15, 2008 at 10:12, Steve Schmerler wrote: > > [...] >> > >> > However, I found that in >> > the current svn HEAD, the scipy.io.npfile docstring says that it is >> > deprecated and I should use ndarray.tofile() and numpy.fromfile() >> > instead. But with the latter, I can't control byteorder, row- or >> > column-major order etc. That's a bit puzzling. >> > >> > I admitt that I haven't followed the development of the new numpy file >> > format very closely (since npfile worked nicely so far). So, is the >> > .npy/.npz way the current standard to store platform-independent binary >> > arrays with all metainfo? >> >> Yup. >> > > Ah OK, thanks. > > Regarding that docstrings, what about removing the lines > > "You can achieve the same effect as using npfile, using ndarray.tofile > and numpy.fromfile." > > from the np.deprecate_with_doc() text? The tofile/fromfile route doesn't > provide the same functionallity. It's confusing, since their docstrings > even discourage their usage for cross-platform case. What functionality do you think that npfile provided which tofile()/fromfile() doesn't? npfile never wrote out header information. The files were always just the raw platform-specific binary bytes. It's API is somewhat more convenient (you can specify the shape in the method call rather than modifying it later), but that's about it. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From strawman at astraw.com Wed Oct 15 23:22:15 2008 From: strawman at astraw.com (Andrew Straw) Date: Wed, 15 Oct 2008 20:22:15 -0700 Subject: [SciPy-user] Matlab can interfere with Python? In-Reply-To: <6c476c8a0810151521p3bca0c04m6ec32a45bb561110@mail.gmail.com> References: <6c476c8a0810151045wddcef6fgab1b98d9b4f8578a@mail.gmail.com> <48F65758.4000302@astraw.com> <6c476c8a0810151521p3bca0c04m6ec32a45bb561110@mail.gmail.com> Message-ID: <48F6B367.4080703@astraw.com> Hmm, the only thing I can think of is that MATLAB has hold of some resources and therefore the OS won't let Python use them. Is there anything like lsof or strace one can use on Windows? Even putting print statements all over your Python process and seeing what statement is blocking execution would be a start. I'm afraid my Windows knowledge is pretty minimal, so I can't be of much help here. 
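Even a crude timing wrapper around each suspect statement would help
localize it -- a minimal sketch, where do_suspect_work() is a hypothetical
placeholder for whatever call seems to stall:

import time
t0 = time.time()
result = do_suspect_work()
print 'elapsed: %.3f s' % (time.time() - t0)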
-Andrew Wes McKinney wrote: > There's nothing wrong with the DLL build =) I was experiencing > problems with unit-tested production code being ground to a halt when > MATLAB is running in the background. In other words, I would execute a > simple line of code in the Python shell and it would hang or take a > very long time to execute. The instant MATLAB was closed (matlab not > executing any code, just idling), Python stopped hanging and code > would run as usual. (Seriously!) > > I listed the gcc linkage since I thought Matlab might be interfering > somehow with the use of the Microsoft C runtime, but that doesn't make > that much sense either because Matlab has it's own C runtime DLL. > > Thanks, > Wes > > On Wed, Oct 15, 2008 at 4:49 PM, Andrew Straw > wrote: > > Just to be clear, it seems like you have experienced problems with > 2 things: > > 1) While doing the link step to create a .dll (well, .pyd file) when > running MinGW's GCC on a .c file, the linking fails due to an error. > (The .c file being autogenerated by Cython when compiling a .pyx > file.) > If this is correct, what do you mean by "prevent Python from > linking" -- > Python is not executing in this case. What is the exact gcc error > message? Does gcc not find the python25.dll? Does gcc fail to link > other > files that have nothing to do with python (i.e. they don't use symbols > from python25.dll)? > > 2) "severly inhibit execution of code" -- are the symptoms any > different > than the CPU being busy or the RAM being full? Have you checked those > things? > > Still just trying to understand the nature of the problem, > Andrew > > Wes McKinney wrote: > > I have experienced this bizarre problem on two my coworker's machine > > who run Matlab (7.6.0 rev 2008a) and I have been teaching to use > > Python/Numpy/Scipy. Occasionally Matlab will somehow prevent Python > > from linking with various DLLs or severely inhibit the execution of > > code. I am not really sure how this could be (other than the Borg at > > the MathWorks not wanting anyone to use open source software), > but has > > anyone else experienced this? As soon as I killed Matlab the problem > > disappeared. > > > > Here is the linkage output from a Cython dll that was killed by > this: > > > > C:\MinGW\bin\gcc.exe -mno-cygwin -shared -s > > build\temp.win32-2.5\Release\tseries.o > > build\temp.win32-2.5\Release\tseries.def -LC:\Python25\libs -LC:\P > > ython25\PCBuild -lpython25 -lmsvcr71 -o tseries.pyd > > > > Maybe it's locking up the windows C runtime msvcr71? > > > > Any help here would be appreciated. > > > > Thanks, > > Wes > > > ------------------------------------------------------------------------ > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From sahar at cmt.co.il Thu Oct 16 02:53:08 2008 From: sahar at cmt.co.il (Sahar Vilan) Date: Thu, 16 Oct 2008 08:53:08 +0200 Subject: [SciPy-user] saving raw image In-Reply-To: <68B3EEDF-FB62-40B5-944A-D85B4E28A0B5@yale.edu> Message-ID: Hi Zach, Thanks for your help. 
I used your advices and got an image I can view. However, its size doubles and I have to open it as 32 bit while the original looks fine as 16 bit. Here is the code I used: # ************************************************ from scipy import * # read image f0 = file('Image0.raw', 'rb').read() Raw = fromstring(f0, uint16).astype(int) Im = Raw.reshape([1024, 1024], order='F') # save image Im_str = Im.tostring(order='F') f1 = file( 'Image1.raw', 'wb').write(Im_str) # ************************************************ Thanks again, Sahar -----Original Message----- From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org]On Behalf Of Zachary Pincus Sent: Wed, October 15, 2008 4:21 PM To: SciPy Users List Subject: Re: [SciPy-user] saving raw image Hi Sahar, Please send a minimal example of loading and saving an array that produces "garbage" output, if you could... (If you could send a small input file that would be helpful too.) Also, does the garbage you see in ImageJ have some structure -- does it look streaky, like you can see rows of pixels that should be together, but the rows don't line up right? One possible problem is that the array you are saving has been promoted to a different dtype (e.g. by participating in signed/float arithmetic), so the pixels you save are no longer uint16s. Unless you have an 'astype(uint16)' in your save code, this could be the issue, since the loading code you showed does immediately convert the array from uint16s. (This is why sending a complete example of the failure is useful...) Another possible problem has to do with the order the pixels are read/ written out. Typically, images on disk are stored as rows of pixels next to one another -- this is "column major" or "fortran" order (going from one memory location to the next typically increments the x- value, except at row boundaries where the y-value is incremented, so it is said that the x-value "varies the fastest"). Typically, numpy arrays are created in row major or "C" order, where the y-value varies the fastest. When loading and manipulating images, you need either to make sure that the images are loaded in fortran-order, or reverse the x/y coordinates and shape. So, e.g. when loading a 200x300 image: s = file(path, 'rb').read() raw = fromstring(s, uint16).astype(int) image = raw.reshape((200,300), order='F') now, image[30,40] gives the same pixel as coordinate (30,40) in ImageJ. If you did: image = raw.reshape((300,200), order='C') then image[30,40] would give the same pixel as coordinate (40,30) in ImageJ. Note that the 'C' order is default. So: image = raw.reshape((200,300)) will give garbage, with the rows of pixels broken up along the wrong boundaries, giving rise to the "streaky" images I mentioned. Finally, the tostring() method also takes an order option, so for saving images, you need: raw = image.tostring(order='F') (assuming that you reshaped the image as order 'F') Zach Pincus On Oct 15, 2008, at 4:49 AM, Sahar Vilan wrote: > I used scipy for basic image processing of raw images, and I can't > save > these images in the same format: > To open image I use: > s = file(path, 'rb').read() > raw = fromstring(s, uint16).astype(int) > > I try to save this matrix as raw image again but I get some garbage > when I > open it in some viewer (Image-J, for instance). > Can anyone help me with this? 
>
> Thanks,
> Sahar

_______________________________________________
SciPy-user mailing list
SciPy-user at scipy.org
http://projects.scipy.org/mailman/listinfo/scipy-user

From wbaxter at gmail.com Thu Oct 16 03:37:12 2008
From: wbaxter at gmail.com (Bill Baxter)
Date: Thu, 16 Oct 2008 16:37:12 +0900
Subject: [SciPy-user] CHOLMOD via scipy
Message-ID: 

Is CHOLMOD accessible through scipy somehow?
Or if not, is it available to python via some other easy-to-install package?

--bb

From cournape at gmail.com Thu Oct 16 03:42:41 2008
From: cournape at gmail.com (David Cournapeau)
Date: Thu, 16 Oct 2008 16:42:41 +0900
Subject: [SciPy-user] Matlab can interfere with Python?
In-Reply-To: <6c476c8a0810151521p3bca0c04m6ec32a45bb561110@mail.gmail.com>
References: <6c476c8a0810151045wddcef6fgab1b98d9b4f8578a@mail.gmail.com>
	<48F65758.4000302@astraw.com>
	<6c476c8a0810151521p3bca0c04m6ec32a45bb561110@mail.gmail.com>
Message-ID: <5b8d13220810160042s19a5acd1yaf4ea034edc2516b@mail.gmail.com>

On Thu, Oct 16, 2008 at 7:21 AM, Wes McKinney wrote:
> There's nothing wrong with the DLL build =) I was experiencing problems with
> unit-tested production code being ground to a halt when MATLAB is running in
> the background. In other words, I would execute a simple line of code in the
> Python shell and it would hang or take a very long time to execute.
The > instant MATLAB was closed (matlab not executing any code, just idling), > Python stopped hanging and code would run as usual. (Seriously!) > > I listed the gcc linkage since I thought Matlab might be interfering somehow > with the use of the Microsoft C runtime, but that doesn't make that much > sense either because Matlab has it's own C runtime DLL. I sincerely doubt matlab has its own C runtime on windows. But even if it is shared, if two processes using the same dll could interfere each other: it would be an extremely serious bug in the core OS. Highly unlikely. Does it happen only with python ? Can you reproduce the hanging with a small script ? The easiest way would be to be able to run the python interpreter in the VS debugger, to see where it hangs. cheers, David From schut at sarvision.nl Thu Oct 16 04:39:26 2008 From: schut at sarvision.nl (Vincent Schut) Date: Thu, 16 Oct 2008 10:39:26 +0200 Subject: [SciPy-user] saving raw image In-Reply-To: References: <68B3EEDF-FB62-40B5-944A-D85B4E28A0B5@yale.edu> Message-ID: Numpy int's are int32 by default, so in your '.astype(int)' you're casting to int32. If you want a different type, you need to specify the number of bits. As a side note, you're reading as *unsigned* integer (uint16) and then converting to *signed* int32. Try "Raw = fromstring(f0, uint16).astype(uint16)" to keep the original (unsigned int16) datatype. Vincent. Sahar Vilan wrote: > Hi Zach, > Thanks for your help. > > I used your advices and got an image I can view. However, its size doubles > and I have to open it as 32 bit while the original looks fine as 16 bit. > Here is the code I used: > # ************************************************ > from scipy import * > > # read image > f0 = file('Image0.raw', 'rb').read() > Raw = fromstring(f0, uint16).astype(int) > Im = Raw.reshape([1024, 1024], order='F') > > # save image > Im_str = Im.tostring(order='F') > f1 = file( 'Image1.raw', 'wb').write(Im_str) > > # ************************************************ > Thanks again, > Sahar > > -----Original Message----- > From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org]On > Behalf Of Zachary Pincus > Sent: Wed, October 15, 2008 4:21 PM > To: SciPy Users List > Subject: Re: [SciPy-user] saving raw image > > > Hi Sahar, > > Please send a minimal example of loading and saving an array that > produces "garbage" output, if you could... (If you could send a small > input file that would be helpful too.) > > Also, does the garbage you see in ImageJ have some structure -- does > it look streaky, like you can see rows of pixels that should be > together, but the rows don't line up right? > > One possible problem is that the array you are saving has been > promoted to a different dtype (e.g. by participating in signed/float > arithmetic), so the pixels you save are no longer uint16s. Unless you > have an 'astype(uint16)' in your save code, this could be the issue, > since the loading code you showed does immediately convert the array > from uint16s. (This is why sending a complete example of the failure > is useful...) > > Another possible problem has to do with the order the pixels are read/ > written out. Typically, images on disk are stored as rows of pixels > next to one another -- this is "column major" or "fortran" order > (going from one memory location to the next typically increments the x- > value, except at row boundaries where the y-value is incremented, so > it is said that the x-value "varies the fastest"). 
Typically, numpy > arrays are created in row major or "C" order, where the y-value varies > the fastest. When loading and manipulating images, you need either to > make sure that the images are loaded in fortran-order, or reverse the > x/y coordinates and shape. > > So, e.g. when loading a 200x300 image: > > > s = file(path, 'rb').read() > raw = fromstring(s, uint16).astype(int) > image = raw.reshape((200,300), order='F') > > now, image[30,40] gives the same pixel as coordinate (30,40) in ImageJ. > > If you did: > image = raw.reshape((300,200), order='C') > then image[30,40] would give the same pixel as coordinate (40,30) in > ImageJ. > > Note that the 'C' order is default. So: > image = raw.reshape((200,300)) > will give garbage, with the rows of pixels broken up along the wrong > boundaries, giving rise to the "streaky" images I mentioned. > > Finally, the tostring() method also takes an order option, so for > saving images, you need: > raw = image.tostring(order='F') > (assuming that you reshaped the image as order 'F') > > Zach Pincus > > > > On Oct 15, 2008, at 4:49 AM, Sahar Vilan wrote: > >> I used scipy for basic image processing of raw images, and I can't >> save >> these images in the same format: >> To open image I use: >> s = file(path, 'rb').read() >> raw = fromstring(s, uint16).astype(int) >> >> I try to save this matrix as raw image again but I get some garbage >> when I >> open it in some viewer (Image-J, for instance). >> Can anyone help me with this? >> >> Thanks, >> Sahar > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > **************************************************************************** > *************************** This e-mail message may contain confidential,and > privileged information or data that constitute proprietary information of > CMT Medical Ltd. Any review or distribution by others is strictly > prohibited. If you are not the intended recipient you are hereby notified > that any use of this information or data by any other person is absolutely > prohibited. If you are not the intended recipient, please delete all copies. > Thank You. http://www.cmt.co.il > **************************************************************************** > **************************** > > > > > > > > **************************************************************************** > ******** > This footnote confirms that this email message has been scanned by > PineApp Mail-SeCure for the presence of malicious code, vandals & computer > viruses. > **************************************************************************** > ******** > > > > > No virus found in this incoming message. > Checked by AVG - http://www.avg.com > Version: 8.0.173 / Virus Database: 270.8.0/1725 - Release Date: 14/10/2008 > 21:25 > > > > > ******************************************************************************************************* > This e-mail message may contain confidential,and privileged information or data that constitute proprietary information of CMT Medical Ltd. Any review or distribution by others is strictly prohibited. If you are not the intended recipient you are hereby notified that any use of this information or data by any other person is absolutely prohibited. If you are not the intended recipient, please delete all copies. Thank You. 
http://www.cmt.co.il > ******************************************************************************************************** > > > > > > > > ************************************************************************************ > This footnote confirms that this email message has been scanned by > PineApp Mail-SeCure for the presence of malicious code, vandals & computer viruses. > ************************************************************************************ From tmspriy at nus.edu.sg Thu Oct 16 04:53:42 2008 From: tmspriy at nus.edu.sg (priya_tmsi) Date: Thu, 16 Oct 2008 01:53:42 -0700 (PDT) Subject: [SciPy-user] signal.medfilt Message-ID: <20009033.post@talk.nabble.com> I am trying to write a code for median filter using scipy the following is my code from numpy import * import ImageFilter from scipy import * import scipy.signal as signal img = file('girl.png','rb').read() imgDest=signal.medfilt(img) f1 = file( 'pygirl.png').write(imgDest) I get the following error on running this code Traceback (most recent call last): File "medfilt.py", line 6, in ? imgDest=signal.medfilt(img) File "/usr/lib/python2.4/site-packages/scipy/signal/signaltools.py", line 232, in medfilt return sigtools._order_filterND(volume,domain,order) ValueError: data type must provide an itemsize Can any one help please...:,(:,( -- View this message in context: http://www.nabble.com/signal.medfilt-tp20009033p20009033.html Sent from the Scipy-User mailing list archive at Nabble.com. From nwagner at iam.uni-stuttgart.de Thu Oct 16 06:08:15 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 16 Oct 2008 12:08:15 +0200 Subject: [SciPy-user] CHOLMOD via scipy In-Reply-To: References: Message-ID: On Thu, 16 Oct 2008 16:37:12 +0900 "Bill Baxter" wrote: > Is CHOLMOD accessible through scipy somehow? > Or if not, is it available to python via some other >easy-to-install package? > > --bb > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user Hi Bill, It is on the "Wishlist" ... http://projects.scipy.org/scipy/scipy/ticket/261 You may want to use cvxopt in the meantime See http://abel.ee.ucla.edu/cvxopt/documentation/users-guide/node41.html Cheers, Nils From zachary.pincus at yale.edu Thu Oct 16 08:14:54 2008 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Thu, 16 Oct 2008 08:14:54 -0400 Subject: [SciPy-user] saving raw image In-Reply-To: References: <68B3EEDF-FB62-40B5-944A-D85B4E28A0B5@yale.edu> Message-ID: > Raw = fromstring(f0, uint16).astype(uint16) Note that this is a tad unnecessary -- fromstring produces arrays with the requested dtype: In : numpy.fromstring('abcd', dtype=numpy.uint16) Out: array([25185, 25699], dtype=uint16) However, it is easy to accidentally promote the output of math operations to different dtypes in math operations with signed int types or floats. You best bet is to make sure that you convert the image back to 16-bit unsigned ints before saving: # save image Im_str = Im.astype(numpy.uint16).tostring(order='F') f1 = file( 'Image1.raw', 'wb').write(Im_str) Note that this conversion works like C casting, so if you've got negative values you will have wraparound. Zach From elcorto at gmx.net Thu Oct 16 08:15:07 2008 From: elcorto at gmx.net (Steve Schmerler) Date: Thu, 16 Oct 2008 14:15:07 +0200 Subject: [SciPy-user] npfile deprecated? 
In-Reply-To: <3d375d730810151549q525db796g848f29be0bd614f6@mail.gmail.com>
References: <20081015151203.GA7878@ramrod.starsheriffs.de>
	<3d375d730810151207i6efe0f6bwed94e65eeeae79ec@mail.gmail.com>
	<20081015223731.GA8661@ramrod.starsheriffs.de>
	<3d375d730810151549q525db796g848f29be0bd614f6@mail.gmail.com>
Message-ID: <20081016121507.GA13628@ramrod.starsheriffs.de>

On Oct 15 17:49 -0500, Robert Kern wrote:

> What functionality do you think that npfile provided which
> tofile()/fromfile() doesn't? npfile never wrote out header
> information. The files were always just the raw platform-specific
> binary bytes. It's API is somewhat more convenient (you can specify
> the shape in the method call rather than modifying it later), but
> that's about it.
>

Well ok, it's mostly only the API rather than functionality. I suppose
that with tofile/fromfile, I could use approximately this instead:

>>> sys.byteorder
'little'
>>> sh = (3,5)
>>> ord = 'F'
>>> a = asarray(np.random.random(sh), order=ord)
>>> dt = a.dtype
>>> a.tofile('test.dat')

and on another machine

>>> a = fromfile('test.dat', dtype=dt).reshape(sh)
>>> a = asarray(a, order='F')
>>> if sys.byteorder == 'big': a = a.byteswap()

Here, and in the npfile case, one would use wrappers writing and reading
metainfos (dtype, order, byteorder and shape) to and from an extra file
(at least, I did this). The only functionality difference is that files
on disk would always be in C-order.

Anyway, god bless numpy.save() :-)

best,
steve

From zachary.pincus at yale.edu Thu Oct 16 08:22:14 2008
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Thu, 16 Oct 2008 08:22:14 -0400
Subject: [SciPy-user] signal.medfilt
In-Reply-To: <20009033.post@talk.nabble.com>
References: <20009033.post@talk.nabble.com>
Message-ID: 

Hi Priya,

I think the problem may be that you aren't properly reading in the
images... unless you've got some other code that is overriding the
builtin python 'file' function, then the following simply reads in the
bits from a png file into a string, without decoding/decompressing/etc
the file into a numpy array:

> img = file('girl.png','rb').read()

As such, it's no surprise that a function which works on numpy arrays
is choking when fed a string.

You'll need to get some sort of image IO library to read PNGs. My
suggestion would be to look into PIL, the Python Imaging Library, which
can read images and be used to produce numpy arrays from them. Once you
are able to read in images, I also suggest looking at scipy.ndimage,
which provides useful image manipulation tools.

Zach

On Oct 16, 2008, at 4:53 AM, priya_tmsi wrote:

>
> I am trying to write a code for median filter using scipy
> the following is my code
>
> from numpy import *
> import ImageFilter
> from scipy import *
> import scipy.signal as signal
> img = file('girl.png','rb').read()
> imgDest=signal.medfilt(img)
> f1 = file( 'pygirl.png').write(imgDest)
>
> I get the following error on running this code
>
> Traceback (most recent call last):
>   File "medfilt.py", line 6, in ?
>     imgDest=signal.medfilt(img)
>   File "/usr/lib/python2.4/site-packages/scipy/signal/signaltools.py", line
> 232, in medfilt
>     return sigtools._order_filterND(volume,domain,order)
> ValueError: data type must provide an itemsize
>
>
> Can any one help please...:,(:,(
>
> --
> View this message in context: http://www.nabble.com/signal.medfilt-tp20009033p20009033.html
> Sent from the Scipy-User mailing list archive at Nabble.com.
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From cohen at slac.stanford.edu Thu Oct 16 12:35:42 2008
From: cohen at slac.stanford.edu (Johann Cohen-Tanugi)
Date: Thu, 16 Oct 2008 18:35:42 +0200
Subject: [SciPy-user] scipy 0.6.0 build failing
In-Reply-To: 
References: 
Message-ID: <48F76D5E.9050708@slac.stanford.edu>

I am not a specialist, but the first thing I would do is delete the
local build directory, and then retry using the --fcompiler=gnu95
argument to python setup.py build, so that gfortran is picked up
instead of g77.
HTH,
Johann

chris wrote:
> I'm trying to build scipy 0.6.0 on RHEL 3, and am getting the following failure:
>
>
> g77:f77: scipy/fftpack/dfftpack/zfftf1.f
> /tmp/cceGs6VT.s: Assembler messages:
> /tmp/cceGs6VT.s:598: Error: suffix or operands invalid for `movd'
> /tmp/cceGs6VT.s:2994: Error: suffix or operands invalid for `movd'
> /tmp/cceGs6VT.s: Assembler messages:
> /tmp/cceGs6VT.s:598: Error: suffix or operands invalid for `movd'
> /tmp/cceGs6VT.s:2994: Error: suffix or operands invalid for `movd'
> error: Command "/usr/bin/g77 -g -Wall -fno-second-underscore -fPIC -O2
> -funroll-loops -march=i686 -mmmx -msse2 -msse -fomit-frame-pointer
> -malign-double -c -c scipy/fftpack/dfftpack/zfftf1.f -o
> build/temp.linux-i686-2.5/scipy/fftpack/dfftpack/zfftf1.o" failed with
> exit status 1
>
>
> g77 version 3.2.3
> gcc version 3.2.3
> as version 2.14.90.0.4
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From cournape at gmail.com Fri Oct 17 01:16:33 2008
From: cournape at gmail.com (David Cournapeau)
Date: Fri, 17 Oct 2008 14:16:33 +0900
Subject: [SciPy-user] scipy 0.6.0 build failing
In-Reply-To: 
References: 
Message-ID: <5b8d13220810162216x96b9c8ek3648fed2724fe999@mail.gmail.com>

On Tue, Oct 14, 2008 at 3:09 AM, chris wrote:
> I'm trying to build scipy 0.6.0 on RHEL 3, and am getting the following failure:
>
>
> g77:f77: scipy/fftpack/dfftpack/zfftf1.f
> /tmp/cceGs6VT.s: Assembler messages:
> /tmp/cceGs6VT.s:598: Error: suffix or operands invalid for `movd'

Your version of g77 has a bug, and generates invalid machine
instructions when the -msse option is used. You should update your g77
if possible,

David

From travis at enthought.com Fri Oct 17 12:35:03 2008
From: travis at enthought.com (Travis Vaught)
Date: Fri, 17 Oct 2008 11:35:03 -0500
Subject: [SciPy-user] Time series graph
In-Reply-To: <200810151604.20704.pgmdevlist@gmail.com>
References: <200810152017.29054@centrum.cz> <200810152021.7089@centrum.cz>
	<200810152021.6780@centrum.cz>
	<200810151604.20704.pgmdevlist@gmail.com>
Message-ID: 

On Oct 15, 2008, at 3:04 PM, Pierre GM wrote:

> Cenek,
> Try the scikits.timeseries package
> (http://pytseries.sourceforge.net/installing.html).
> One of its modules (scikits.timeseries.lib.plotlib) implements
> plotting capabilities on top of matplotlib, and supports an update of
> ticks depending on the zoom level.
>
>
> On Wednesday 15 October 2008 14:21:19 302302 wrote:
>> Hi,
>>
>> I need to plot a time series (the data in the database has around
>> 10000
>> rows) and I need to zoom and pan through it dynamically. Something like
>> http://finance.google.com/finance?q=INDEXDJX:.DJI%20INDEXNASDAQ:.IXIC%20IND
>> EXSP:.INX Is there any efficient way to do it with scipy or
>> matplotlib?
>> >> Thanks Cenek >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Hey Cenek, There's a chaco example of this here: https://svn.enthought.com/enthought/browser/Chaco/trunk/examples/financial_plot_dates.py It gives you zoomability and very nice tick handling of dates: -------------- next part -------------- A non-text attachment was scrubbed... Name: pastedGraphic.png Type: image/png Size: 30249 bytes Desc: not available URL: -------------- next part -------------- Zooming in quite a bit gives tick labels at the hour or minute scales (with reasonable scales in between): -------------- next part -------------- A non-text attachment was scrubbed... Name: pastedGraphic.png Type: image/png Size: 25954 bytes Desc: not available URL: -------------- next part -------------- HTH, Travis From olli.sipila at helsinki.fi Sat Oct 18 08:06:41 2008 From: olli.sipila at helsinki.fi (Olli =?utf-8?b?U2lwaWzDpA==?=) Date: Sat, 18 Oct 2008 15:06:41 +0300 Subject: [SciPy-user] scipy 0.6 build error Message-ID: <20081018150641.17457hwrt91wafkh@webmail.helsinki.fi> I'm having a problem installing scipy on OS X Tiger (10.4.11). To be more specific, I'm trying to install the package manually according to the instructions on the scipy.org webpage; the building process goes smoothly until I get the following error message: compile options: '-DNO_ATLAS_INFO=3 -DUSE_VENDOR_BLAS=1 -I/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/core/include -I/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -c' extra options: '-faltivec' gcc: scipy/linsolve/_superluobject.c In file included from scipy/linsolve/_superluobject.h:8, from scipy/linsolve/_superluobject.c:5: scipy/linsolve/SuperLU/SRC/scomplex.h:60: error: conflicting types for '_Py_c_abs' /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/complexobject.h:30: error: previous declaration of '_Py_c_abs' was here In file included from scipy/linsolve/_superluobject.h:8, from scipy/linsolve/_superluobject.c:5: scipy/linsolve/SuperLU/SRC/scomplex.h:60: error: conflicting types for '_Py_c_abs' /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/complexobject.h:30: error: previous declaration of '_Py_c_abs' was here lipo: can't figure out the architecture type of: /var/tmp//ccFsb10y.out In file included from scipy/linsolve/_superluobject.h:8, from scipy/linsolve/_superluobject.c:5: scipy/linsolve/SuperLU/SRC/scomplex.h:60: error: conflicting types for '_Py_c_abs' /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/complexobject.h:30: error: previous declaration of '_Py_c_abs' was here In file included from scipy/linsolve/_superluobject.h:8, from scipy/linsolve/_superluobject.c:5: scipy/linsolve/SuperLU/SRC/scomplex.h:60: error: conflicting types for '_Py_c_abs' /Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/complexobject.h:30: error: previous declaration of '_Py_c_abs' was here lipo: can't figure out the architecture type of: /var/tmp//ccFsb10y.out error: Command "gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -O3 -DNO_ATLAS_INFO=3 -DUSE_VENDOR_BLAS=1 
-I/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/numpy/core/include
-I/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -c
scipy/linsolve/_superluobject.c -o
build/temp.macosx-10.4-fat-2.6/scipy/linsolve/_superluobject.o
-faltivec" failed with exit status 1

If it is indeed the -faltivec option that is causing this error, is
there any way to disable it? My configuration at the moment is

Mac OS X Tiger (10.4.11)
gcc version 4.0.1
gfortran version 4.2.3 (obtained from the link at scipy.org)

The version of scipy I'm trying to build is 0.6. If someone has any
thoughts on how to fix this, it would be very much appreciated.

- Olli Sipilä

From contact at pythonxy.com Sun Oct 19 09:35:31 2008
From: contact at pythonxy.com (Pierre Raybaut)
Date: Sun, 19 Oct 2008 15:35:31 +0200
Subject: [SciPy-user] Reinteract
Message-ID: <48FB37A3.2010304@pythonxy.com>

Hi all,

A few months ago, I discovered Reinteract (http://www.reinteract.org),
a promising project providing an interactive Python environment based
on worksheets (it reminds me somehow of Mathematica's).

Now Reinteract has a lot more features, and even if the developer
claims that it's still an early-stage development release (0.4.0), it's
clearly worth a try. Look at this screencast, it's very impressive:
http://www.gnome.org/~otaylor/reinteract-demo.html

So, if you're interested in trying Reinteract on Windows, I invite you
to download the Reinteract and PyGTK (Reinteract's backend) Python(x,y)
plugins:
http://www.pythonxy.com/additional.php#third

Note that even if a Python(x,y) install is recommended (e.g. for a
better help integration and update management), you may install a
Python(x,y) plugin over a standard Python 2.5 install (i.e. if you have
installed Python 2.5 with the official .msi installer).

Cheers,
Pierre

From pgmdevlist at gmail.com Sun Oct 19 19:43:16 2008
From: pgmdevlist at gmail.com (Pierre GM)
Date: Sun, 19 Oct 2008 19:43:16 -0400
Subject: [SciPy-user] Record Array: How to add a column?
In-Reply-To: <88e473830810140955u6f5d830cueb24a3f4bc980458@mail.gmail.com>
References: <48F3E642.4040001@cornell.edu>
	<200810141228.42850.pgmdevlist@gmail.com>
	<88e473830810140955u6f5d830cueb24a3f4bc980458@mail.gmail.com>
Message-ID: <200810191943.16946.pgmdevlist@gmail.com>

John,
Could you give the SVN>=r5949 a try? That fixes one of the problems you
were describing in a previous email:
r2[0].date.year

The reason why it didn't work before was that a masked array was
systematically returned in __getattribute__. Now, you'll get a masked
array if the output has a shape (ie, not a single record), or if it's a
single record with some masked fields.
Note that I'm not guaranteeing that you won't run into some problems
again at one point or another...

From cournape at gmail.com Sun Oct 19 22:00:08 2008
From: cournape at gmail.com (David Cournapeau)
Date: Mon, 20 Oct 2008 11:00:08 +0900
Subject: [SciPy-user] scipy 0.6 build error
In-Reply-To: <20081018150641.17457hwrt91wafkh@webmail.helsinki.fi>
References: <20081018150641.17457hwrt91wafkh@webmail.helsinki.fi>
Message-ID: <5b8d13220810191900i7d511b25vc6bb678ef821a97a@mail.gmail.com>

On Sat, Oct 18, 2008 at 9:06 PM, Olli Sipilä wrote:
>
> The version of scipy I'm trying to build is 0.6. If someone has any
> thoughts on how to fix this, it would be very much appreciated.

You should not use python 2.6 for the time being. Numpy and scipy
cannot (yet) be considered 2.6 compatible.
cheers,

David

From olli.sipila at helsinki.fi Mon Oct 20 02:51:06 2008
From: olli.sipila at helsinki.fi (Olli =?utf-8?b?U2lwaWzDpA==?=)
Date: Mon, 20 Oct 2008 09:51:06 +0300
Subject: [SciPy-user] scipy 0.6 build error
In-Reply-To: <5b8d13220810191900i7d511b25vc6bb678ef821a97a@mail.gmail.com>
References: <20081018150641.17457hwrt91wafkh@webmail.helsinki.fi>
	<5b8d13220810191900i7d511b25vc6bb678ef821a97a@mail.gmail.com>
Message-ID: <20081020095106.58557anc9o1evwwq@webmail.helsinki.fi>

Hm ok, I'll give it a go. I recall 2.5 also giving me an error message
(this I think is the reason why I wanted to try 2.6), but I'll post
that error here as well if 2.5 doesn't work either.

- Olli

Quoting "David Cournapeau" :

> On Sat, Oct 18, 2008 at 9:06 PM, Olli Sipilä wrote:
>>
>> The version of scipy I'm trying to build is 0.6. If someone has any
>> thoughts on how to fix this, it would be very much appreciated.
>
> You should not use python 2.6 for the time being. Numpy and scipy
> cannot (yet) be considered 2.6 compatible.
>
> cheers,
>
> David
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From lionel at gamr7.com Mon Oct 20 05:37:06 2008
From: lionel at gamr7.com (Lionel Barret De Nazaris)
Date: Mon, 20 Oct 2008 11:37:06 +0200
Subject: [SciPy-user] Arc-length reparametrization. (newbie question ?)
Message-ID: <48FC5142.508@gamr7.com>

Hello all,

I've just inherited a bunch of not-so-good code about cubic splines,
and as a relative scipy newbie, I was wondering what the right way to
do this would be.

I need to do arc-length reparametrization. (
http://www.math.hmc.edu/~gu/math142/mellon/Differential_Geometry/Geometry_of_curves/Parametric_Curves_and_arc.html
)

In the current code, I recognize the use of Newton's method to converge
on a good estimate, but this leaves me nonplussed. The heavy use of
integration to compute the curve length without keeping the
intermediary results is really weird.

So how would you do it, using the best of what scipy has to offer?

Note : I've looked at the splines in scipy, but they seem more focused
on finding the spline that fits the samples than on this kind of
manipulation.
Here we have the spline and its control points.

This is the code (see http://pastebin.com/m7e276652 if indentation is
not correct) :


def derivated_curvilign_abscissa(t, points):
    return Vector(vectorial_derivated_cubic_interpolator(t, points)).magnitude()

def curvilign_abscissa(t, points):
    """
    input :
        points = spline control points [init_point, init_virtual_point,
            end_virtual_point, end_point]
        t = a float between 0 and 1
    output :
        the arc-length between f(t) and the start of the curve (aka
        curvilign_abscissa)
    """
    return Vector(points[0]).magnitude() + quad(lambda x:
        derivated_curvilign_abscissa(x, points), 0., t)[0]

def create_curvy_grid(points, samples):
    """
    input :
        points = spline control points [init_point, init_virtual_point,
            end_virtual_point, end_point]
        samples = list of equally spaced t (float) between 0 and 1
    output :
        a list of floats t where moving from tn to tn+1 means advancing
        along the curve by an equal length s.
    """
    curve_length = curvilign_abscissa(1., points) - curvilign_abscissa(0., points)
    # ==
    def function_to_solve(x, translation) :
        return - translation + curvilign_abscissa(x, points) - curvilign_abscissa(0., points)
    # ==
    return [newton(lambda x : function_to_solve(x, curve_length * step_grid),
            1 - PRECISION) for step_grid in samples]
    # ==

From david at ar.media.kyoto-u.ac.jp Mon Oct 20 07:48:55 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Mon, 20 Oct 2008 20:48:55 +0900
Subject: [SciPy-user] scipy 0.6 build error
In-Reply-To: <20081020095106.58557anc9o1evwwq@webmail.helsinki.fi>
References: <20081018150641.17457hwrt91wafkh@webmail.helsinki.fi>
	<5b8d13220810191900i7d511b25vc6bb678ef821a97a@mail.gmail.com>
	<20081020095106.58557anc9o1evwwq@webmail.helsinki.fi>
Message-ID: <48FC7027.9010502@ar.media.kyoto-u.ac.jp>

Olli Sipilä wrote:
> Hm ok, I'll give it a go. I recall 2.5 also giving me an error message
> (this I think is the reason why I wanted to try 2.6), but I'll post
> that error here as well if 2.5 doesn't work either.
>

Yes, please report any error on 2.5. Note that even if the error is not
specific to 2.6, you should not (yet) use numpy and/or scipy with 2.6.

cheers,

David

From josegomez at gmx.net Mon Oct 20 10:05:40 2008
From: josegomez at gmx.net (Jose Luis Gomez Dans)
Date: Mon, 20 Oct 2008 16:05:40 +0200
Subject: [SciPy-user] Best way to test several values
Message-ID: <20081020140540.5360@gmx.net>

Hi,
I have a 2D array of numbers, and I want to use "where" to effectively mask
bits of the array out. Essentially, if any element in this 2D array of
numbers belongs to a given list (stored as a 1D array), the condition should
be True.

Up to now, I have used numpy.any ( data==QA ), where data is a single element
of my 2D array, and QA is my acceptance values list. However, I need to loop
through the elements, and it is taking a long time.

Any hints? numpy/scipy seems to have a plethora of clever functions to do this
sort of thing efficiently!

Thanks
Jose

From robert.kern at gmail.com Mon Oct 20 10:15:06 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 20 Oct 2008 09:15:06 -0500
Subject: [SciPy-user] Best way to test several values
In-Reply-To: <20081020140540.5360@gmx.net>
References: <20081020140540.5360@gmx.net>
Message-ID: <3d375d730810200715y31cf878erf1a3a88302dc768d@mail.gmail.com>

On Mon, Oct 20, 2008 at 09:05, Jose Luis Gomez Dans wrote:
> Hi,
> I have a 2D array of numbers, and I want to use "where" to effectively mask
> bits of the array out. Essentially, if any element in this 2D array of
> numbers belongs to a given list (stored as a 1D array), the condition should
> be True.
>
> Up to now, I have used numpy.any ( data==QA ), where data is a single element
> of my 2D array, and QA is my acceptance values list. However, I need to loop
> through the elements, and it is taking a long time.
>
> Any hints? numpy/scipy seems to have a plethora of clever functions to do this
> sort of thing efficiently!
In [12]: a = random.randint(0, 10, (10,10)) In [13]: b = reshape(a, a.shape + (1,)) In [14]: QA = array([2,4,6]) In [15]: QA2 = reshape(QA, (1,1,-1)) In [16]: (b == QA2).any(axis=-1) Out[16]: array([[ True, True, False, False, True, False, False, False, False, True], [ True, False, False, False, True, True, True, False, False, False], [ True, True, False, False, True, True, False, True, True, False], [ True, False, False, False, False, False, True, False, True, False], [False, True, False, False, True, False, False, False, True, True], [False, False, True, False, False, False, False, True, False, False], [False, False, True, True, True, True, False, False, False, False], [ True, False, True, False, False, False, False, False, False, False], [False, False, False, False, False, False, False, False, False, True], [ True, False, False, False, False, False, False, False, True, True]], dtype=bool) In [17]: a[Out[16]] Out[17]: array([6, 2, 6, 4, 6, 4, 6, 2, 2, 4, 2, 6, 2, 2, 6, 4, 2, 6, 4, 2, 6, 4, 6, 2, 6, 2, 6, 4, 4, 2, 2, 4, 4]) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From aisaac at american.edu Mon Oct 20 20:54:40 2008 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 20 Oct 2008 20:54:40 -0400 Subject: [SciPy-user] where is models.py? Message-ID: <48FD2850.5090308@american.edu> models.py seems to have been moved? Alan Isaac From millman at berkeley.edu Mon Oct 20 23:02:38 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Mon, 20 Oct 2008 20:02:38 -0700 Subject: [SciPy-user] where is models.py? In-Reply-To: <48FD2850.5090308@american.edu> References: <48FD2850.5090308@american.edu> Message-ID: On Mon, Oct 20, 2008 at 5:54 PM, Alan G Isaac wrote: > models.py seems to have been moved? Assuming you are referring to the models package, I removed it in August: http://projects.scipy.org/scipy/scipy/changeset/4639 (If not, sorry for the confusion.) I sent an email to the list before hand explaining that scipy.stats.models wasn't ready for release and proposed removing it so that we could move forward on the 0.7 release. I haven't looked at the code since then, but as I recall there were several broken tests and some open questions about what the API should be. I spoke with Jonathan Taylor (he wrote the package) and he said he wasn't going to have time in the near future to work on it. This is very unfortunate since the statistical models package provides some really useful functionality and I would love to see it in the 0.7 release. For the time being, it is living (and hopefully developing) in the nipy project: https://launchpad.net/nipy Once we get it working and clean up the API, we will ask for a code review before contributing it to scipy. Clearly, we should have done this in the first place, but live and learn I guess. Our hope is that we will be able to get it in for the scipy 0.8 release. We would love to have other contributors. If anyone is interested in helping, please let me know. Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From wesmckinn at gmail.com Tue Oct 21 00:34:48 2008 From: wesmckinn at gmail.com (Wes McKinney) Date: Tue, 21 Oct 2008 00:34:48 -0400 Subject: [SciPy-user] where is models.py? 
In-Reply-To: References: <48FD2850.5090308@american.edu> Message-ID: <30911767-95EC-4418-94CF-38E7434D194A@gmail.com> What is the timeline for the 0.7 release? I use the models package extensively and would personally be interested in developing it sooner rather than later if it could potentially be a part of the new release. Thanks, Wes On Oct 20, 2008, at 11:02 PM, Jarrod Millman wrote: > On Mon, Oct 20, 2008 at 5:54 PM, Alan G Isaac > wrote: >> models.py seems to have been moved? > > Assuming you are referring to the models package, I removed it in > August: > http://projects.scipy.org/scipy/scipy/changeset/4639 > (If not, sorry for the confusion.) > > I sent an email to the list before hand explaining that > scipy.stats.models wasn't ready for release and proposed removing it > so that we could move forward on the 0.7 release. I haven't looked at > the code since then, but as I recall there were several broken tests > and some open questions about what the API should be. I spoke with > Jonathan Taylor (he wrote the package) and he said he wasn't going to > have time in the near future to work on it. > > This is very unfortunate since the statistical models package provides > some really useful functionality and I would love to see it in the 0.7 > release. For the time being, it is living (and hopefully developing) > in the nipy project: https://launchpad.net/nipy > Once we get it working and clean up the API, we will ask for a code > review before contributing it to scipy. Clearly, we should have done > this in the first place, but live and learn I guess. Our hope is that > we will be able to get it in for the scipy 0.8 release. > > We would love to have other contributors. If anyone is interested in > helping, please let me know. > > Thanks, > > -- > Jarrod Millman > Computational Infrastructure for Research Labs > 10 Giannini Hall, UC Berkeley > phone: 510.643.4014 > http://cirl.berkeley.edu/ > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From millman at berkeley.edu Tue Oct 21 02:26:16 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Mon, 20 Oct 2008 23:26:16 -0700 Subject: [SciPy-user] where is models.py? In-Reply-To: <30911767-95EC-4418-94CF-38E7434D194A@gmail.com> References: <48FD2850.5090308@american.edu> <30911767-95EC-4418-94CF-38E7434D194A@gmail.com> Message-ID: On Mon, Oct 20, 2008 at 9:34 PM, Wes McKinney wrote: > What is the timeline for the 0.7 release? I use the models package > extensively and would personally be interested in developing it > sooner rather than later if it could potentially be a part of the new > release. I would like to have released 0.7, but there are still too many failing tests. I just checked everything out and ran the test suite on a Fedora Linux box and got several test failures (KNOWNFAIL=6, SKIP=7, errors=6, failures=10; see the attached log for details). I am sure that the problems are even worse on the other platforms. Before we get the release out we are going to have to focus on bugs squashing for a bit. So if you are interested in helping get models back into the trunk, you can try and squash a few bugs as a warm up ;) Once we get 0.7 released, I will be happy to discuss putting scipy.models back in the trunk. (If you are interested in working on models, but can't help out on other scipy bugs, maybe we could see who else is interested in working on the models code and try to organize a sprint at Berkeley. 
Please let me know what your interest level/area is and I can see if I
can help.)

I would also like to get build scripts for Windows and Mac finished.
David Cournapeau has been working on the Windows build scripts:
http://projects.scipy.org/scipy/scipy/browser/trunk/tools/win32/build_scripts
I need to check with him to see what the status is (his last check-in
was less than an hour ago).  I believe that the Mac build works fine
and I just need to check them in.  (I am out of town currently, but I
can look into this over the weekend.)

Once I get back to Berkeley, I will try to do a more concerted push to
get everyone to start killing bugs so we can release 0.7 soon.

-- 
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/

From david at ar.media.kyoto-u.ac.jp Tue Oct 21 02:19:58 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 21 Oct 2008 15:19:58 +0900
Subject: [SciPy-user] where is models.py?
In-Reply-To: <30911767-95EC-4418-94CF-38E7434D194A@gmail.com>
References: <48FD2850.5090308@american.edu>
	<30911767-95EC-4418-94CF-38E7434D194A@gmail.com>
Message-ID: <48FD748E.1080508@ar.media.kyoto-u.ac.jp>

Wes McKinney wrote:
> What is the timeline for the 0.7 release? I use the models package
> extensively and would personally be interested in developing it
> sooner rather than later if it could potentially be a part of the new
> release.
>
>

IIRC, the main problem of the models package was its dependency on
weave. I am not very familiar with weave, but I think the consensus was
that it was not well suited to scipy packages: it causes some build
issues (since weave itself is in scipy, there is a bootstrap problem,
for one). So if you rewrote the part which depends on weave, it could
be reincluded in scipy.

But to be honest, I think we should not accept more code in scipy:
scipy is already quite big, and there is not enough manpower. More code
means more maintenance problems. Would it be a big problem for you if
the code was put in a scikit?

In any case, I am strongly against any new code for scipy 0.7. We are
already really late, there are tons of bugs, and not many people fixing
them. Adding code just makes it worse at this point.

cheers,

David

From david at ar.media.kyoto-u.ac.jp Tue Oct 21 02:32:03 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 21 Oct 2008 15:32:03 +0900
Subject: [SciPy-user] where is models.py?
In-Reply-To: 
References: <48FD2850.5090308@american.edu>
	<30911767-95EC-4418-94CF-38E7434D194A@gmail.com>
Message-ID: <48FD7763.7090608@ar.media.kyoto-u.ac.jp>

Jarrod Millman wrote:
>
> I would also like to get build scripts for Windows and Mac finished.
> David Cournapeau has been working on the Windows build scripts:
> http://projects.scipy.org/scipy/scipy/browser/trunk/tools/win32/build_scripts
> I need to check with him to see what the status is (his last check-in
> was less than an hour ago).

I am hoping to get it done for today. The idea was to have it ready so
that we can quickly generate alpha and beta versions for testing on
windows.
There is still the problem that the installer is big (around 40 Mb), but I don't see any easy solution in the short term for this, David From millman at berkeley.edu Tue Oct 21 03:28:55 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 21 Oct 2008 00:28:55 -0700 Subject: [SciPy-user] where is models.py? In-Reply-To: <48FD748E.1080508@ar.media.kyoto-u.ac.jp> References: <48FD2850.5090308@american.edu> <30911767-95EC-4418-94CF-38E7434D194A@gmail.com> <48FD748E.1080508@ar.media.kyoto-u.ac.jp> Message-ID: On Mon, Oct 20, 2008 at 11:19 PM, David Cournapeau wrote: > IIRC, the main problem of the models package was its dependency on > weave. I am not familiar with weave much, but I think the consensus was > it was not adapted for scipy packages: it causes some build issues > (since weave itself is in scipy, there is a bootstrap problem for once). > So if you rewrote the part which depends on weave, it could be > reincluded in scipy. Chris Burns and Tom Waite rewrote the weave bit just before I decided to pull it: http://projects.scipy.org/scipy/scipy/changeset/4602 http://projects.scipy.org/scipy/scipy/changeset/4630 http://projects.scipy.org/scipy/scipy/changeset/4631 http://projects.scipy.org/scipy/scipy/changeset/4632 While they were removing the weave dependency, it became apparent that the packages wasn't ready for release. > But to be honest, I think we should not accept more code in scipy: scipy > is already quite big, and there is not enough man power. More code means > more maintenance problems. Would it be a big problem for you if the code > was put in a scikit ? I don't think anyone in my group is going to have time to make it into a scikit. Unless there is some serious interest and people willing to work on it, I expect that we will work on the models code in the nipy trunk going forward. > In any case, I am strongly against any new code for scipy 0.7. We are > already really late, there are tons of bugs, and not many people fixing > them. Adding code just make it worse at this point. +1. Any new code or functionality should be worked on outside of trunk at least until 0.7 is released. -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From millman at berkeley.edu Tue Oct 21 03:30:01 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Tue, 21 Oct 2008 00:30:01 -0700 Subject: [SciPy-user] where is models.py? In-Reply-To: <48FD7763.7090608@ar.media.kyoto-u.ac.jp> References: <48FD2850.5090308@american.edu> <30911767-95EC-4418-94CF-38E7434D194A@gmail.com> <48FD7763.7090608@ar.media.kyoto-u.ac.jp> Message-ID: On Mon, Oct 20, 2008 at 11:32 PM, David Cournapeau wrote: > I am hoping to get it done for today. The idea was to have it ready so > that we can quickly generate alpha and beta versions for testing on > windows. There is still the problem that the installer is big (around 40 > Mb), but I don't see any easy solution in the short term for this, Excellent. I really wouldn't worry about the size. There are a lot more important things for you to devote your time to at this point. Thanks for doing this. 
-- 
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/

From olli.sipila at helsinki.fi Tue Oct 21 05:33:12 2008
From: olli.sipila at helsinki.fi (Olli =?utf-8?b?U2lwaWzDpA==?=)
Date: Tue, 21 Oct 2008 12:33:12 +0300
Subject: [SciPy-user] scipy 0.6 build error
In-Reply-To: <48FC7027.9010502@ar.media.kyoto-u.ac.jp>
References: <20081018150641.17457hwrt91wafkh@webmail.helsinki.fi>
	<5b8d13220810191900i7d511b25vc6bb678ef821a97a@mail.gmail.com>
	<20081020095106.58557anc9o1evwwq@webmail.helsinki.fi>
	<48FC7027.9010502@ar.media.kyoto-u.ac.jp>
Message-ID: <20081021123312.52972pjc1sc1x0ig@webmail.helsinki.fi>

I reverted back to 2.5 and managed to build it now. I originally had
trouble with 2.5, but it seems it had more to do with my account
permissions than with python itself. Thanks for pointing out the
problem with 2.6!

- Olli

Quoting "David Cournapeau" :

> Olli Sipilä wrote:
>> Hm ok, I'll give it a go. I recall 2.5 also giving me an error message
>> (this I think is the reason why I wanted to try 2.6), but I'll post
>> that error here as well if 2.5 doesn't work either.
>>
>
> Yes, please report any error on 2.5. Note that even if the error is not
> specific to 2.6, you should not (yet) use numpy and/or scipy with 2.6.
>
> cheers,
>
> David
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From elmico.filos at gmail.com Tue Oct 21 05:52:04 2008
From: elmico.filos at gmail.com (=?ISO-8859-1?Q?Mico_Fil=F3s?=)
Date: Tue, 21 Oct 2008 11:52:04 +0200
Subject: [SciPy-user] Simple combinatorics with Numpy
Message-ID: 

Hi,

I would like to use NumPy/SciPy to do some basic combinatorics on small
(size<6) 1D arrays of integers.

Imagine I have an array x=([1,3,5,8]) from which I draw, with
replacement, a sample of, say, 3 numbers. The ordering of the sample is
unimportant. I want to calculate the histogram of the averages of the
samples (i.e., avg(1,1,1)=1, avg(1,3,5)=3, etc). For that, I need to
*enumerate* all the possible unordered samples of size 3 that can be
drawn from x, i.e.,

(1,1,1), (1,1,3), (1,1,5), (1,1,8), (1,3,3), (1,5,5), ...
[there are comb(4+3-1,3)=20 different samples]

and then, I need to compute, for each of them, the weight c/(4**3),
where c is a multinomial factor that takes into account possible
repetitions of elements in the sample.

Although I think I could come up with some dirty, inefficient lines of
code doing the job, I wonder whether there is any NumPy/SciPy function /
trick that could come in handy. I have seen that in Mathematica there is
a function called 'Compositions(n,n)' that returns a list with all the
possible arrangements of n nonnegative integers that sum up to n (e.g.,
Compositions(2,2) = {(0,2),(1,1),(2,0)}). There is also a function
'KSubset' that returns all the subsets of a given size that can be
formed from a larger set. These functions are useful to generate all the
possible samples. Is there anything similar in NumPy/SciPy?

Efficiency is not really an issue here, since the sets are always small.
I am aware that this is not the type of job NumPy is originally designed
for, but since the problem is simple and the implementation seems
feasible (in principle), I wanted to hear the opinion/hints of the
experts on that. My apologies if the answer turns out to be trivial.

Thanks for your attention.
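(For concreteness, a brute-force sketch of the enumeration described
above -- an illustration only, not an existing NumPy function; x and the
sample size are the ones from the example:)

import numpy as np

x = np.array([1, 3, 5, 8])
k = 3

idx = np.indices((len(x),) * k).reshape(k, -1)   # all 4**3 ordered samples, as indices into x
means = x[idx].mean(axis=0)                      # the mean of each ordered sample

hist = {}
for m in means:
    hist[m] = hist.get(m, 0) + 1.0 / len(means)  # each ordered sample has weight 1/4**3

for m in sorted(hist):
    print m, hist[m]

Since every unordered sample is visited once per ordering, the
multinomial weight c/(4**3) appears automatically and never has to be
computed explicitly.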
From mnandris at blueyonder.co.uk Tue Oct 21 14:18:52 2008
From: mnandris at blueyonder.co.uk (Michael)
Date: Tue, 21 Oct 2008 19:18:52 +0100
Subject: [SciPy-user] Simple combinatorics with Numpy
In-Reply-To: 
References: 
Message-ID: <1224613132.7437.4.camel@mik>

On Tue, 2008-10-21 at 12:00 -0500, scipy-user-request at scipy.org wrote:
> For that, I need to *enumerate* all the possible unordered
> samples of size 3 that can be drawn from x, i.e.,
>
> (1,1,1), (1,1,3), (1,1,5), (1,1,8), (1,3,3), (1,5,5), ...
> [there are comb(4+3-1,3)=20 different samples]

>>> from scipy.misc import comb
>>> comb(6,3)
array(20.000000000000014)

To get the actual samples, there is a recipe in the Python Cookbook,
2nd edition, p725

-- 
"When you think of the long and gloomy history of man, you will find
far more hideous crimes have been committed in the name of obedience
than have been committed in the name of rebellion".
C.P.Snow, "Either-Or" (1961)
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 189 bytes
Desc: This is a digitally signed message part
URL: 

From tjhnson at gmail.com Tue Oct 21 17:25:49 2008
From: tjhnson at gmail.com (T J)
Date: Tue, 21 Oct 2008 14:25:49 -0700
Subject: [SciPy-user] Simple combinatorics with Numpy
In-Reply-To: 
References: 
Message-ID: 

On Tue, Oct 21, 2008 at 2:52 AM, M F wrote:
>
> Imagine I have an array x=([1,3,5,8]) from which I draw, with replacement, a
> sample of, say, 3 numbers. The ordering of the sample is unimportant. I want to
> calculate the histogram of the averages of the samples (i.e., avg(1,1,1)=1,
> avg(1,3,5)=3, etc). For that, I need to *enumerate* all the possible unordered
> samples of size 3 that can be drawn from x, i.e.,

There was a post in numpy-discussion a couple of days ago which
provided a possible solution:

http://projects.scipy.org/pipermail/numpy-discussion/2008-October/038178.html

def boxings(n, k):
    seq, i = [n]*k + [0], k
    while i:
        yield tuple(seq[i] - seq[i+1] for i in xrange(k))
        i = seq.index(0) - 1
        seq[i:k] = [seq[i] - 1] * (k-i)

This enumerates the ways to place n indistinguishable tokens into k
distinguishable boxes (order of boxes is not of concern) when there is
no maximum on the occupancy of any box.  You want to make j unordered
selections from m distinguishable items when the items are replaced
after each selection.

   n <-> j
   k <-> m

In the example you suggested, boxings(3,4) does the trick, but this
gives you the number of times each item was selected.  There is
probably a simple way to directly generate the samples, but it is
possible to do it using the above function too.

def samples_ur(items, k):
    """Yields unordered samples of size k (with replacement) from items."""
    n = len(items)
    for sample in boxings(k, n):
        selections = [[items[i]]*count for i,count in enumerate(sample)]
        yield tuple([x for sel in selections for x in sel])

From there, you should be able to do what you want (eg average).

> and then, I need to compute, for each of them, the weight c/(4**3), where c is a
> multinomial factor that takes into account possible repetitions of elements in
> the sample.

What exactly are you looking for with c?  Do you want to know the
number of permutations for each enumerated item?  This isn't the
prettiest code, but if so, the attached file does what you want.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: samples.py Type: text/x-python Size: 1139 bytes Desc: not available URL: From tjhnson at gmail.com Tue Oct 21 17:26:21 2008 From: tjhnson at gmail.com (T J) Date: Tue, 21 Oct 2008 14:26:21 -0700 Subject: [SciPy-user] Simple combinatorics with Numpy In-Reply-To: <1224613132.7437.4.camel@mik> References: <1224613132.7437.4.camel@mik> Message-ID: On Tue, Oct 21, 2008 at 11:18 AM, Michael wrote: > On Tue, 2008-10-21 at 12:00 -0500, scipy-user-request at scipy.org wrote: > >> For that, I need to *enumerate* all the possible unordered >> samples of size 3 that can be drawn from x, i.e., >> >> (1,1,1), (1,1,3), (1,1,5), (1,1,8), (1,3,3), (1,5,5), ... >> [there are comb(4+3-1,3)=20 different samples] > >>>> from scipy.misc import comb >>>> comb(6,3) > array(20.000000000000014) > > To get the actual samples, there is a recipe in the Python Cookbook, 2nd > edition, p725 > I could be wrong, but I don't think the recipe provides what the OP is requesting. The corresponding docstring would be: ''' take n (not necessarily distinct) items, order is irrelevant ''' which is, noticeably, absent from page 725 (http://books.google.com/books?id=Q0s6Vgb98CQC&pg=PA725) From dineshbvadhia at hotmail.com Tue Oct 21 17:42:20 2008 From: dineshbvadhia at hotmail.com (Dinesh B Vadhia) Date: Tue, 21 Oct 2008 14:42:20 -0700 Subject: [SciPy-user] Calling Scipy Sparse from C++ Message-ID: Hi! Is it possible to call the Scipy Sparse library (or a Python module that includes Scipy Sparse code) from a C++ program and if so, where can I find some reference information with examples? Thank-you. Dinesh -------------- next part -------------- An HTML attachment was scrubbed... URL: From travis at enthought.com Tue Oct 21 17:53:51 2008 From: travis at enthought.com (Travis Vaught) Date: Tue, 21 Oct 2008 16:53:51 -0500 Subject: [SciPy-user] ANN: Enthought Python Distribution - New Release Message-ID: <02DFC911-B86B-4F92-A83F-1FB7AADAB98B@enthought.com> Greetings, Enthought, Inc. is very pleased to announce the newest release of the Enthought Python Distribution (EPD) Py2.5 v4.0.30002: http://www.enthought.com/epd This release contains updates to many of EPD's packages, including NumPy, IPython, matplotlib, VTK, etc. This is also the first release to include a 3.x version of the Enthought Tool Suite (http://code.enthought.com/ ). The release notes for this release, including the list of included packages, may be found here: https://svn.enthought.com/epd/wiki/Python2.5.2/4.0.300/GA Many thanks to the EPD team for putting this release together, and to the community of folks who have provided all of the valuable tools bundled here. Best Regards, Travis --------- About EPD --------- The Enthought Python Distribution (EPD) is a "kitchen-sink-included" distribution of the Python? Programming Language, including over 80 additional tools and libraries. The EPD bundle includes NumPy, SciPy, IPython, 2D and 3D visualization, database adapters, and a lot of other tools right out of the box. http://www.enthought.com/products/epd.php It is currently available as an easy, single-click installer for Windows XP (x86), Mac OS X (a universal binary for Intel 10.4 and above) and RedHat EL3 (x86 and amd64). EPD is free for 30-day trial use and for use in degree-granting academic institutions. 
An annual Subscription and installation support are available for
commercial use (http://www.enthought.com/products/epddownload.php )
including an Enterprise Subscription with support for particular
deployment environments (http://www.enthought.com/products/enterprise.php
).

From wesmckinn at gmail.com Tue Oct 21 17:57:18 2008
From: wesmckinn at gmail.com (Wes McKinney)
Date: Tue, 21 Oct 2008 17:57:18 -0400
Subject: [SciPy-user] Calling Scipy Sparse from C++
In-Reply-To: 
References: 
Message-ID: <6c476c8a0810211457v2c20387cq142dcf30b90fd753@mail.gmail.com>

Look here: http://www.python.org/doc/2.5.2/ext/ext.html

I've been successful embedding the Python interpreter in a C++
application -- the best way I found was to create a function that calls
a "bridge" module and translates the output for your C++ program (the
output has to be a PyObject *), or alternatively you could create a
NumPy array inside C++ and pass that (though I haven't tried this). If
you create a generic interface you can then hot-swap Python code making
use of numpy / scipy, much better than coding C++!

- Wes

On Tue, Oct 21, 2008 at 5:42 PM, Dinesh B Vadhia wrote:
> Hi!  Is it possible to call the Scipy Sparse library (or a Python module
> that includes Scipy Sparse code) from a C++ program and if so, where can I
> find some reference information with examples?  Thank-you.
>
> Dinesh
>
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dcday137 at gmail.com Wed Oct 22 00:42:36 2008
From: dcday137 at gmail.com (Collin Day)
Date: Wed, 22 Oct 2008 04:42:36 +0000 (UTC)
Subject: [SciPy-user] Difference between ffts? - plots of what i am
	getting.
References: <20081014174622.128f368b@Krypton.homenet>
	<48F54D82.5000005@ar.media.kyoto-u.ac.jp>
Message-ID: 

Sorry - just wondering if any of this stuff makes sense? I suppose I
could just not use the fftpack.fft - it is the one that appears to be
"incorrect"

From lionel at gamr7.com Wed Oct 22 05:18:15 2008
From: lionel at gamr7.com (Lionel Barret De Nazaris)
Date: Wed, 22 Oct 2008 11:18:15 +0200
Subject: [SciPy-user] Arc-length reparametrization. (newbie question ?)
In-Reply-To: <48FC5142.508@gamr7.com>
References: <48FC5142.508@gamr7.com>
Message-ID: <48FEEFD7.8050303@gamr7.com>

Hi,

I'm answering my own question here, in case someone googles this in
the future. My solution follows.
This is suboptimal and looks like a naive translation, but it does work
and is short (i.e. manageable).

# cp holds the curve control points: a 2x4 array (xs in row 0, ys in row 1).
# Needs poly1d, array, sqrt (numpy), quad (scipy.integrate) and
# newton (scipy.optimize) in scope.

# Bernstein polynomials
# (see http://en.wikipedia.org/wiki/Bernstein_blending_function)
t = poly1d([1, 0])  # t
s = poly1d([-1, 1]) # 1-t

# Bezier curve as combination of Bernstein polynomials
# ( see http://en.wikipedia.org/wiki/Bezier_curve)
ps = [s*s*s, 3*t*s*s, 3*s*t*t, t*t*t]          # position
ps_prime = map(lambda p: p.deriv(), ps)        # first derivative
ps_second = map(lambda p: p.deriv(m = 2), ps)  # second derivative

def Y(t, cp):
    """ the position on the curve at time t """
    ts = array( [p(t) for p in ps] ) # calculating the t coeffs
    i = ts * cp                      # applying the t coeffs to the control points
    x = i[0,:].sum()                 # selecting the xs and summing them
    y = i[1,:].sum()                 # selecting the ys and summing them
    return x, y

def DY(t, cp):
    """ the velocity vector on the curve at time t """
    ts = array( [p(t) for p in ps_prime] )
    i = ts * cp
    x = i[0,:].sum()
    y = i[1,:].sum()
    return x, y

def Speed(t, cp):
    """ the speed (i.e. velocity magnitude) on the curve at time t """
    x, y = DY(t, cp)
    return sqrt(x*x+y*y)

def ArcLength(t, cp, tmin=0):
    """ the curve length corresponding to time t """
    return quad(Speed, tmin, t, args=(cp,))[0]

def getCurveParameter(s, cp, max_tries, epsilon):
    """ the time t corresponding to the curve length s """
    L = ArcLength(1, cp)
    t = 0 + s * (1-0)/L
    for i in range(max_tries):
        F = ArcLength(t, cp) - s
        if abs(F) < epsilon:
            return t
        else:
            DF = Speed(t, cp)
            t -= F/DF
    return t

def getCurveParameter2(s, cp):
    """ the time t corresponding to the curve length s """
    L = ArcLength(1, cp)
    t = 0 + s * (1-0)/L
    f = lambda t : ArcLength(t, cp) - s
    return newton(f, t)


Lionel Barret De Nazaris wrote:
> Hello all,
>
> I've just inherited a bunch of not-so-good code about cubic splines,
> and as a relative scipy newbie, I was wondering what the right way to
> do this would be.
>
> I need to do arc-length reparametrization. (
> http://www.math.hmc.edu/~gu/math142/mellon/Differential_Geometry/Geometry_of_curves/Parametric_Curves_and_arc.html
> )
>
> In the current code, I recognize the use of Newton's method to converge
> on a good estimate, but this leaves me nonplussed. The heavy use of
> integration to compute the curve length without keeping the
> intermediary results is really weird.
>
> So how would you do it, using the best of what scipy has to offer?
>
> Note : I've looked at the splines in scipy, but they seem more focused
> on finding the spline that fits the samples than on this kind of
> manipulation.
> Here we have the spline and its control points.
>
> This is the code (see http://pastebin.com/m7e276652 if indentation is
> not correct) :
>
>
> def derivated_curvilign_abscissa(t, points):
>     return Vector(vectorial_derivated_cubic_interpolator(t, points)).magnitude()
>
> def curvilign_abscissa(t, points):
>     """
>     input :
>         points = spline control points [init_point, init_virtual_point,
>             end_virtual_point, end_point]
>         t = a float between 0 and 1
>     output :
>         the arc-length between f(t) and the start of the curve (aka
>         curvilign_abscissa)
>     """
>     return Vector(points[0]).magnitude() + quad(lambda x:
>         derivated_curvilign_abscissa(x, points), 0., t)[0]
>
> def create_curvy_grid(points, samples):
>     """
>     input :
>         points = spline control points [init_point, init_virtual_point,
>             end_virtual_point, end_point]
>         samples = list of equally spaced t (float) between 0 and 1
>     output :
>         a list of floats t where moving from tn to tn+1 means advancing
>         along the curve by an equal length s.
>     """
>     curve_length = curvilign_abscissa(1., points) - curvilign_abscissa(0., points)
>     # ==
>     def function_to_solve(x, translation) :
>         return - translation + curvilign_abscissa(x, points) - curvilign_abscissa(0., points)
>     # ==
>     return [newton(lambda x : function_to_solve(x, curve_length * step_grid),
>             1 - PRECISION) for step_grid in samples]
>     # ==
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

-- 
Best Regards,
lionel Barret de Nazaris,
Gamr7 Founder & CTO
=========================
Gamr7 : Cities for Games
http://www.gamr7.com

From opossumnano at gmail.com Wed Oct 22 05:04:23 2008
From: opossumnano at gmail.com (Tiziano Zito)
Date: Wed, 22 Oct 2008 11:04:23 +0200
Subject: [SciPy-user] Modular toolkit for Data Processing 2.4 released!
Message-ID: <20081022090423.GA18890@localhost>

We are glad to announce release 2.4 of the Modular toolkit for Data
Processing (MDP).

MDP is a Python library of widely used data processing algorithms that
can be combined according to a pipeline analogy to build more complex
data processing software. The base of available algorithms includes, to
name but the most common, Principal Component Analysis (PCA and
NIPALS), several Independent Component Analysis algorithms (CuBICA,
FastICA, TDSEP, and JADE), Slow Feature Analysis, Restricted Boltzmann
Machine, and Locally Linear Embedding.

What's new in version 2.4?
--------------------------------------

- The new version introduces a new parallel package to execute the MDP
  algorithms on multiple processors or machines. The package also
  offers an interface to develop customized schedulers and parallel
  algorithms. Old MDP scripts can be turned into their parallelized
  equivalent with one simple command.

- The number of available algorithms is increased with the Locally
  Linear Embedding and Hessian eigenmaps algorithms to perform
  dimensionality reduction and manifold learning (many thanks to Jake
  VanderPlas for his contribution!)
- Some more bug fixes, useful features, and code migration towards Python 3.0 Resources --------- Download: http://sourceforge.net/project/showfiles.php?group_id=116959 Homepage: http://mdp-toolkit.sourceforge.net Mailing list: http://sourceforge.net/mail/?group_id=116959 -- Pietro Berkes Volen Center for Complex Systems Brandeis University Waltham, MA, USA Niko Wilbert Institute for Theoretical Biology Humboldt-University Berlin, Germany Tiziano Zito Bernstein Center for Computational Neuroscience Humboldt-University Berlin, Germany From nwagner at iam.uni-stuttgart.de Wed Oct 22 05:56:09 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 22 Oct 2008 11:56:09 +0200 Subject: [SciPy-user] Fourier series Message-ID: Hi all, Is there a function in scipy to compute the Fourier coefficients a_0, a_1, b_1, a_2, b_2 of a periodic function f(t)=f(t+T) http://en.wikipedia.org/wiki/Fourier_series An example would be appreciated. Nils From aarchiba at physics.mcgill.ca Wed Oct 22 06:44:01 2008 From: aarchiba at physics.mcgill.ca (Anne Archibald) Date: Wed, 22 Oct 2008 06:44:01 -0400 Subject: [SciPy-user] Fourier series In-Reply-To: References: Message-ID: 2008/10/22 Nils Wagner : > Is there a function in scipy to compute the Fourier > coefficients > a_0, a_1, b_1, a_2, b_2 of a periodic function f(t)=f(t+T) > > http://en.wikipedia.org/wiki/Fourier_series > > An example would be appreciated. Not exactly. The usual way to do such a thing would be to evaluate the function on a grid and use a fast fourier transform: n = 1024 xs = np.arange(n)*T/float(n) ys = f(xs) ft = np.fft.rfft(ys) You will need to adjust the normalizations somewhat. You will also need to choose a large enough n to capture all features of interest - I recommend running it with at least 2 and 4 times the number of Fourier coefficients you need and comparing. (The extra ones are to avoid missing features that fall between grid points.) This method assumes your computational cost would be dominated by the Fourier transform. If instead your computational cost is dominated by the function evaluations, you'll have to come up with something more clever that does adaptive sampling to find all points of interest in the function. You might try something like using scipy.integrate.quad and storing the results of all evaluations. You'd then use these points to evaluate your Fourier transform. Anne From david at ar.media.kyoto-u.ac.jp Wed Oct 22 06:36:04 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Wed, 22 Oct 2008 19:36:04 +0900 Subject: [SciPy-user] scipy super pack installer for win32: please test Message-ID: <48FF0214.4080302@ar.media.kyoto-u.ac.jp> Hi, A quick note to mention I have generated a "superpack" installer for scipy, for testing purposes. This is similar to numpy superpack installer: the installer detects your CPU at installation time and install the right scipy: http://www.ar.media.kyoto-u.ac.jp/members/david/archives/numpy/scipy-0.7.0.dev4826-win32-superpack-python2.5.exe How to test: ------------ Inside python: import scipy scipy.test() If python does not crash, this is OK. Test failures are OK, since those will be fixed within the 0.7.0 release. 
Please report any problem on the scipy-dev ML,

cheers,

David

From Joris.DeRidder at ster.kuleuven.be Wed Oct 22 06:54:43 2008
From: Joris.DeRidder at ster.kuleuven.be (Joris De Ridder)
Date: Wed, 22 Oct 2008 12:54:43 +0200
Subject: [SciPy-user] Fourier series
In-Reply-To: 
References: 
Message-ID: <9947772F-BD46-4ADA-AD18-BB601AFDD61F@ster.kuleuven.be>

On 22 Oct 2008, at 11:56 , Nils Wagner wrote:

> Hi all,
>
> Is there a function in scipy to compute the Fourier
> coefficients
> a_0, a_1, b_1, a_2, b_2 of a periodic function f(t)=f(t+T)
>
> http://en.wikipedia.org/wiki/Fourier_series
>
> An example would be appreciated.
>
> Nils

I guess you mean numerical, not analytical? As far as I know it's only
possible if you can set an upper limit to its bandwidth. Suppose you
know for certain that no frequencies higher than 100 Hz occur in your
signal, then you can uniformly discretize the signal with spacing
delta t = 1/(2*100) = 0.005, and then use FFT
(http://www.scipy.org/Numpy_Example_List ) to compute the Fourier
coefficients.

Cheers,
Joris

Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm

From elmico.filos at gmail.com Wed Oct 22 06:57:11 2008
From: elmico.filos at gmail.com (=?ISO-8859-1?Q?Mico_Fil=F3s?=)
Date: Wed, 22 Oct 2008 12:57:11 +0200
Subject: [SciPy-user] Simple combinatorics with Numpy
In-Reply-To: 
References: 
Message-ID: 

Thanks a lot for your answers. The functions in samples.py do the job
perfectly.

The motivation for this unordered sampling with replacement was perhaps
a little unclear in my first post. I need to count the number of ways
that each possible mean of a sample of size 3 can occur. The probability
of a given value for the sample mean depends on how many *ordered*
samples give rise to that given value. For instance,

{1,1,1} = [(1,1,1)] -> 1
{1,1,3} = [(1,1,3), (1,3,1), (3,1,1)] -> 5/3
{1,3,5} = [(1,3,5), (3,1,5), (1,5,3), (5,1,3), (5,3,1), (3,5,1)] -> 3
{1,3,3} = [(1,3,3), (3,1,3), (3,3,1)] -> 7/3
{3,3,3} = [(3,3,3)] -> 3
...

where {} and () denote the unordered and ordered samples, respectively.
The number of different orderings associated with a particular sample
is given by a multinomial coefficient: if there are 3 places and 4
different numbers repeated k1, k2, k3, k4 times, there are
c = 3!/(k1!k2!k3!k4!) ordered samples.

Of course, I could also have proceeded the brute force way, enumerating
all 4^3 ordered samples, computing their average, and counting the
number of times each different value occurs.

Sorry for being so wordy.

From david at ar.media.kyoto-u.ac.jp Wed Oct 22 07:59:57 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Wed, 22 Oct 2008 20:59:57 +0900
Subject: [SciPy-user] Fourier series
In-Reply-To: <9947772F-BD46-4ADA-AD18-BB601AFDD61F@ster.kuleuven.be>
References: <9947772F-BD46-4ADA-AD18-BB601AFDD61F@ster.kuleuven.be>
Message-ID: <48FF15BD.1040402@ar.media.kyoto-u.ac.jp>

Joris De Ridder wrote:
>
> I guess you mean numerical, not analytical? As far as I know it's only
> possible if you can set an upper limit to its bandwidth.

Concretely, by having only some samples of your function, there is an
implied bandwidth for your signal (otherwise, the sampling process
would have destroyed your signal through aliasing). Then, FFT
coefficients can approach the Fourier coefficients by refining the
frequency "sampling" (considering the FFT as a frequency sampling of
the Fourier coefficients).
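As an illustration, the a_n and b_n can be read off the FFT directly.
A small sketch, using the convention
f(t) = a_0/2 + sum_n (a_n*cos(2*pi*n*t/T) + b_n*sin(2*pi*n*t/T))
from the Wikipedia page linked above (f, T and nmax are placeholders,
and nsamples must be large enough for the implied bandwidth, as
discussed in this thread):

import numpy as np

def fourier_coefficients(f, T, nmax, nsamples=1024):
    """Approximate a_0 and a_n, b_n (n = 1..nmax) of a T-periodic f."""
    t = np.arange(nsamples) * T / float(nsamples)  # one period, endpoint excluded
    F = np.fft.rfft(f(t))
    a0 = 2.0 * F[0].real / nsamples
    an = 2.0 * F[1:nmax + 1].real / nsamples
    bn = -2.0 * F[1:nmax + 1].imag / nsamples
    return a0, an, bn

For example, f = lambda t: 1 + 3*np.cos(2*np.pi*t) - 2*np.sin(4*np.pi*t)
with T = 1 gives a0 ~ 2, an[0] ~ 3 and bn[1] ~ -2, as expected.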
Also, numerically speaking, you can consider that any real signal has
finite bandwidth, or more exactly that the coefficients are negligible
at high frequency: the Fourier coefficients decrease toward 0 for
smooth functions. More precisely, the Fourier coefficients F(n) "behave
as" 1/n^k, where k is bigger when the function is more regular: the sum
of (1+n^2)^k * |F(n)|^2 is finite for a function which is k-times
differentiable. Incidentally, one mathematical object for the study of
smooth functions is the Sobolev space, whose definition is based on
this property.

cheers,

David

From ivo.maljevic at gmail.com Wed Oct 22 10:05:53 2008
From: ivo.maljevic at gmail.com (Ivo Maljevic)
Date: Wed, 22 Oct 2008 10:05:53 -0400
Subject: [SciPy-user] Fourier series
In-Reply-To: 
References: 
Message-ID: <826c64da0810220705g228156d8s89cf4bf2a7c56b8a@mail.gmail.com>

I think if you spend some time deriving the formulas it should be
possible to find it:

1) Since the function f(t) is periodic, its spectrum is discrete
anyway, so the FFT will do the job, provided that you sample the
frequency at multiples of the harmonics.

2) DFT coefficients are complex, but similar to the complex
coefficients c_n of the Fourier series, there is a relationship:

c_n = a_n - j*b_n, n > 0
c_n = a_n + j*b_n, n < 0

So, for positive m: a_m = 0.5*(c_m + c_{-m}), and a similar approach
goes for b_m.

Sorry I cannot give you more info at this time. Hopefully this sketchy
approach will give you some ideas.

Ivo

2008/10/22 Nils Wagner 

> Hi all,
>
> Is there a function in scipy to compute the Fourier
> coefficients
> a_0, a_1, b_1, a_2, b_2 of a periodic function f(t)=f(t+T)
>
> http://en.wikipedia.org/wiki/Fourier_series
>
> An example would be appreciated.
>
> Nils
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From josef.pktd at gmail.com Wed Oct 22 11:22:06 2008
From: josef.pktd at gmail.com (joep)
Date: Wed, 22 Oct 2008 08:22:06 -0700 (PDT)
Subject: [SciPy-user] scipy super pack installer for win32: please test
In-Reply-To: <48FF0214.4080302@ar.media.kyoto-u.ac.jp>
References: <48FF0214.4080302@ar.media.kyoto-u.ac.jp>
Message-ID: <5ed2aa23-d992-40d8-90c5-eb34968ffa2a@k37g2000hsf.googlegroups.com>

> http://www.ar.media.kyoto-u.ac.jp/members/david/archives/numpy/scipy-0.7.0.dev4826-win32-superpack-python2.5.exe

installs without problems on windowsXP, sse2
The only problem I had was that I had a trunk version of scipy (on a
non site-packages directory) in easy-install.pth, which I needed to
comment out. Initially it loaded this version instead of the one
installed
in site-packages by the superpack.

One weird thing I found: when I run scipy.test() in a command shell
or an ipython shell, I get a large output with DeprecationWarnings
and test output. When I run scipy.test() in an Idle shell, I don't get
any DeprecationWarnings or test output. I have no idea why the
output should be different.

Another observation, which is not clear but also not very important:
when I compiled the trunk version a few weeks ago (with MingW) and I
run scipy.test(), I don't get the 2 test failures in
test_lapack.test_all_lapack.
(my lapack and atlas files were downloaded a long time ago from the scipy.org install description page) Josef Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import scipy >>> scipy.__file__ 'C:\\Programs\\Python25\\lib\\site-packages\\scipy\\__init__.pyc' >>> scipy.test() Running unit tests for scipy NumPy version 1.2.0rc2 NumPy is installed in C:\Programs\Python25\lib\site-packages\numpy SciPy version 0.7.0.dev4826 SciPy is installed in C:\Programs\Python25\lib\site-packages\scipy Python version 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Int el)] nose version 0.10.4 ... ---------------------------------------------------------------------- Ran 2317 tests in 46.969s FAILED (KNOWNFAIL=2, SKIP=14, failures=2) >>> >>> scipy.show_config() umfpack_info: NOT AVAILABLE dfftw_info: NOT AVAILABLE blas_opt_info: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['C:\\local\\lib\\yop\\sse2'] define_macros = [('ATLAS_INFO', '"\\"?.?.?\\""')] language = c mkl_info: NOT AVAILABLE djbfft_info: NOT AVAILABLE atlas_blas_threads_info: NOT AVAILABLE lapack_opt_info: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['C:\\local\\lib\\yop\\sse2'] define_macros = [('ATLAS_INFO', '"\\"?.?.?\\""')] language = f77 fftw2_info: NOT AVAILABLE fftw3_info: NOT AVAILABLE atlas_info: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['C:\\local\\lib\\yop\\sse2'] language = f77 lapack_mkl_info: NOT AVAILABLE blas_mkl_info: NOT AVAILABLE atlas_blas_info: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['C:\\local\\lib\\yop\\sse2'] language = c atlas_threads_info: NOT AVAILABLE >>> 2 failures are ====================================================================== FAIL: test_lapack.test_all_lapack ---------------------------------------------------------------------- Traceback (most recent call last): File "c:\programs\python25\lib\site-packages\nose-0.10.4-py2.5.egg \nose\case.p y", line 182, in runTest self.test(*self.arg) File "C:\Programs\Python25\Lib\site-packages\scipy\lib\lapack\tests \esv_tests. py", line 41, in check_syevr assert_array_almost_equal(w,exact_w) File "C:\Programs\Python25\Lib\site-packages\numpy\testing \utils.py", line 311 , in assert_array_almost_equal header='Arrays are not almost equal') File "C:\Programs\Python25\Lib\site-packages\numpy\testing \utils.py", line 296 , in assert_array_compare raise AssertionError(msg) AssertionError: Arrays are not almost equal (mismatch 33.3333333333%) x: array([-0.66992444, 0.48769444, 9.18222618], dtype=float32) y: array([-0.66992434, 0.48769389, 9.18223045]) ====================================================================== FAIL: test_lapack.test_all_lapack ---------------------------------------------------------------------- Traceback (most recent call last): File "c:\programs\python25\lib\site-packages\nose-0.10.4-py2.5.egg \nose\case.p y", line 182, in runTest self.test(*self.arg) File "C:\Programs\Python25\Lib\site-packages\scipy\lib\lapack\tests \esv_tests. 
py", line 66, in check_syevr_irange assert_array_almost_equal(w,exact_w[rslice]) File "C:\Programs\Python25\Lib\site-packages\numpy\testing \utils.py", line 311 , in assert_array_almost_equal header='Arrays are not almost equal') File "C:\Programs\Python25\Lib\site-packages\numpy\testing \utils.py", line 296 , in assert_array_compare raise AssertionError(msg) AssertionError: Arrays are not almost equal (mismatch 33.3333333333%) x: array([-0.66992444, 0.48769444, 9.18222618], dtype=float32) y: array([-0.66992434, 0.48769389, 9.18223045]) From david at ar.media.kyoto-u.ac.jp Wed Oct 22 11:27:11 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 23 Oct 2008 00:27:11 +0900 Subject: [SciPy-user] scipy super pack installer for win32: please test In-Reply-To: <5ed2aa23-d992-40d8-90c5-eb34968ffa2a@k37g2000hsf.googlegroups.com> References: <48FF0214.4080302@ar.media.kyoto-u.ac.jp> <5ed2aa23-d992-40d8-90c5-eb34968ffa2a@k37g2000hsf.googlegroups.com> Message-ID: <48FF464F.9090906@ar.media.kyoto-u.ac.jp> joep wrote: > > installs without problems on windowsXP, sse2 > The only problem I had was that I had a trunk version of scipy (on a > non site-packages directory) in easy-install.pth, which I needed to > comment out. Initially it loaded this version instead of the one > installed > in site-packages by the superpack. Unfortunately, there is not much I can do here, since this is a setuptools problem. In my experience, the easy-install.pth is fragile, and when you start using it, you should be ready to edit it by hand occasionally. > > One weird thing I found, when I run scipy.test() in a command shell, > or ipython shell, then I get a large output with Depreciation Warnings > and test output. When I run scipy.test() in an Idle shell, I don't get > any Depreciation Warnings or test output. I have no idea why the > output should be different. Maybe something linked to the Idle shell: warning and deprecation can be controlled by the python interpreter. I find it a bit strange to ignore warnings by default, though. > > Another observation which is not clear but is not very important is > that, when I compiled the trunk version a few weeks ago (with MingW) > and I run scipy.test(), then I don't get the 2 test failures in > test_lapack.test_all_lapack. (my lapack and atlas files were > downloaded a long time ago from the scipy.org install description > page) Different flags/compilers/atlas sources can cause some differences in the results. The errors in the tests are small but not negligeable; I don't know if they are significant. The superpack uses ATLAS 3.8.2, and it does not produce those errors on linux; so this may be caused by relatively old compilers on windows (mingw gcc is really old if you use the released ones). To complicate the matter, a given ATLAS build is not reproducible, making those issues even more difficult to track. Thanks for testing, David From ivo.maljevic at gmail.com Wed Oct 22 13:32:32 2008 From: ivo.maljevic at gmail.com (Ivo Maljevic) Date: Wed, 22 Oct 2008 13:32:32 -0400 Subject: [SciPy-user] Fourier series In-Reply-To: References: Message-ID: <826c64da0810221032mee82227k8e4296fcc2ce448e@mail.gmail.com> Maybe this simple matlab code can help you to develop ideas further: clear N = 1000; %time covers only one period, excluding the last point, as that % is the begining of the next period t=(0:N-1)/N; dt = t(2) - t(1); % set the sampling frequency to some reasonable value f=1/dt/5 % uncomment if you want only one cos function. 
From reckoner at gmail.com Wed Oct 22 14:36:43 2008
From: reckoner at gmail.com (Reckoner)
Date: Wed, 22 Oct 2008 11:36:43 -0700
Subject: [SciPy-user] 2008 scipy conference review
Message-ID:

the following link is a review of the 2008 scipy conference in
Pasadena:

https://www.osc.edu/cms/sip/node/4

From borreguero at gmail.com Wed Oct 22 17:22:34 2008
From: borreguero at gmail.com (Jose Borreguero)
Date: Wed, 22 Oct 2008 17:22:34 -0400
Subject: [SciPy-user] How to write to a NetCDF file with 64-bit offset mode?
Message-ID: <7cced4ed0810221422w12960981pd9a0b991eaefbe36@mail.gmail.com> Dear scipy users, I open a NetCDF file for writing with: object=NetCDFFile(outfile, 'w') I need to create a file with the 64-bit offset mode on. Is there a flag like NC_64BIT_OFFSET that I can pass to the constructor? If so, then how? regards, -- Jose M. Borreguero Postdoctoral Associate Oak Ridge National Laboratory P.O. Box 2008, M.S. 6164 Oak Ridge, TN 37831 phone: 865-241-3071 fax: 865-576-5491 Email: borreguerojm at ornl.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Wed Oct 22 18:04:42 2008 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 22 Oct 2008 18:04:42 -0400 Subject: [SciPy-user] scipy super pack installer for win32: please test In-Reply-To: <5ed2aa23-d992-40d8-90c5-eb34968ffa2a@k37g2000hsf.googlegroups.com> References: <48FF0214.4080302@ar.media.kyoto-u.ac.jp> <5ed2aa23-d992-40d8-90c5-eb34968ffa2a@k37g2000hsf.googlegroups.com> Message-ID: <48FFA37A.4050804@american.edu> >> http://www.ar.media.kyoto-u.ac.jp/members/david/archives/numpy/scipy-0.7.0.dev4826-win32-superpack-python2.5.exe joep wrote: > when I run scipy.test() in a command shell, > or ipython shell, then I get a large output with Depreciation Warnings > and test output. Similar experience. Three failures. (I know these are not a current concern.) Output below. Alan Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import sys, scipy >>> sys.stderr = open('c:/temp/temp.out','w') >>> scipy.test() Running unit tests for scipy NumPy version 1.2.0 NumPy is installed in C:\Python25\lib\site-packages\numpy SciPy version 0.7.0.dev4826 SciPy is installed in C:\Python25\lib\site-packages\scipy Python version 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Int el)] nose version 0.10.1 Warning: 1000000 bytes requested, 20 bytes read. caxpy:n=4 caxpy:n=3 ccopy:n=4 ccopy:n=3 cscal:n=4 cswap:n=4 cswap:n=3 daxpy:n=4 daxpy:n=3 dcopy:n=4 dcopy:n=3 dscal:n=4 dswap:n=4 dswap:n=3 saxpy:n=4 saxpy:n=3 scopy:n=4 scopy:n=3 sscal:n=4 sswap:n=4 sswap:n=3 zaxpy:n=4 zaxpy:n=3 zcopy:n=4 zcopy:n=3 zscal:n=4 zswap:n=4 zswap:n=3 ATLAS version 3.8.2 built by david on Tue Aug 5 13:01:25 TST 2008: UNAME : CYGWIN_NT-5.1 donau-win 1.5.25(0.156/4/2) 2008-06-12 19:34 i686 Cy gwin INSTFLG : -1 0 -a 1 ARCHDEFS : -DATL_OS_WinNT -DATL_ARCH_P4 -DATL_CPUMHZ=3200 -DGCCWIN -DUseClock -DATL_SSE2 -DATL_SSE1 -DATL_GAS_x8632 F2CDEFS : -DAdd__ -DF77_INTEGER=int -DStringSunStyle CACHEEDGE: 3145728 F77 : g77, version GNU Fortran (GCC) 3.4.4 (cygming special, gdc 0.12, u sing dmd 0.125) F77FLAGS : -O -m32 SMC : gcc, version gcc (GCC) 3.4.4 (cygming special, gdc 0.12, using dmd 0.125) SMCFLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -m32 SKC : gcc, version gcc (GCC) 3.4.4 (cygming special, gdc 0.12, using dmd 0.125) SKCFLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -m32 caxpy:n=4 caxpy:n=3 ccopy:n=4 ccopy:n=3 cscal:n=4 cswap:n=4 cswap:n=3 daxpy:n=4 daxpy:n=3 dcopy:n=4 dcopy:n=3 dscal:n=4 dswap:n=4 dswap:n=3 saxpy:n=4 saxpy:n=3 scopy:n=4 scopy:n=3 sscal:n=4 sswap:n=4 sswap:n=3 zaxpy:n=4 zaxpy:n=3 zcopy:n=4 zcopy:n=3 zscal:n=4 zswap:n=4 zswap:n=3 Result may be inaccurate, approximate err = 9.82300567882e-009 Result may be inaccurate, approximate err = 1.87387729512e-010 warning: specified build_dir '_bad_path_' does not exist or is not writable. 
Try ing default locations warning: specified build_dir '_bad_path_' does not exist or is not writable. Try ing default locations error removing c:\docume~1\alanis~1\locals~1\temp\tmpkbpafhcat_test: c:\docume~1 \alanis~1\locals~1\temp\tmpkbpafhcat_test: The directory is not empty building extensions here: c:\docume~1\alanis~1\locals~1\temp\Alan Isaac\python25 _compiled\m4 >>> And here is the error log: C:\Python25\lib\site-packages\scipy\linsolve\__init__.py:4: DeprecationWarning: scipy.linsolve has moved to scipy.sparse.linalg.dsolve warn('scipy.linsolve has moved to scipy.sparse.linalg.dsolve', DeprecationWarning) .................................................................................................................................C:\Python25\lib\site-packages\scipy\cluster\vq.py:570: UserWarning: One of the clusters is empty. Re-run kmean with a different initialization. warnings.warn("One of the clusters is empty. " ...................................................C:\Python25\lib\site-packages\scipy\integrate\odepack.py:144: DeprecationWarning: PyArray_FromDims: use PyArray_SimpleNew. ixpr, mxstep, mxhnil, mxordn, mxords) C:\Python25\lib\site-packages\scipy\integrate\odepack.py:144: DeprecationWarning: PyArray_FromDimsAndDataAndDescr: use PyArray_NewFromDescr. ixpr, mxstep, mxhnil, mxordn, mxords) .C:\Python25\lib\site-packages\scipy\integrate\quadpack.py:304: DeprecationWarning: PyArray_FromDims: use PyArray_SimpleNew. return _quadpack._qawse(func,a,b,wvar,integr,args,full_output,epsabs,epsrel,limit) C:\Python25\lib\site-packages\scipy\integrate\quadpack.py:304: DeprecationWarning: PyArray_FromDimsAndDataAndDescr: use PyArray_NewFromDescr. return _quadpack._qawse(func,a,b,wvar,integr,args,full_output,epsabs,epsrel,limit) .C:\Python25\lib\site-packages\scipy\integrate\quadpack.py:306: DeprecationWarning: PyArray_FromDims: use PyArray_SimpleNew. return _quadpack._qawce(func,a,b,wvar,args,full_output,epsabs,epsrel,limit) C:\Python25\lib\site-packages\scipy\integrate\quadpack.py:306: DeprecationWarning: PyArray_FromDimsAndDataAndDescr: use PyArray_NewFromDescr. return _quadpack._qawce(func,a,b,wvar,args,full_output,epsabs,epsrel,limit) .C:\Python25\lib\site-packages\scipy\integrate\quadpack.py:295: DeprecationWarning: PyArray_FromDims: use PyArray_SimpleNew. return _quadpack._qawfe(thefunc,-b,wvar,integr,args,full_output,epsabs,limlst,limit,maxp1) C:\Python25\lib\site-packages\scipy\integrate\quadpack.py:295: DeprecationWarning: PyArray_FromDimsAndDataAndDescr: use PyArray_NewFromDescr. return _quadpack._qawfe(thefunc,-b,wvar,integr,args,full_output,epsabs,limlst,limit,maxp1) .C:\Python25\lib\site-packages\scipy\integrate\quadpack.py:249: DeprecationWarning: PyArray_FromDims: use PyArray_SimpleNew. return _quadpack._qagse(func,a,b,args,full_output,epsabs,epsrel,limit) C:\Python25\lib\site-packages\scipy\integrate\quadpack.py:249: DeprecationWarning: PyArray_FromDimsAndDataAndDescr: use PyArray_NewFromDescr. return _quadpack._qagse(func,a,b,args,full_output,epsabs,epsrel,limit) .C:\Python25\lib\site-packages\scipy\integrate\quadpack.py:251: DeprecationWarning: PyArray_FromDims: use PyArray_SimpleNew. return _quadpack._qagie(func,bound,infbounds,args,full_output,epsabs,epsrel,limit) C:\Python25\lib\site-packages\scipy\integrate\quadpack.py:251: DeprecationWarning: PyArray_FromDimsAndDataAndDescr: use PyArray_NewFromDescr. 
return _quadpack._qagie(func,bound,infbounds,args,full_output,epsabs,epsrel,limit) .C:\Python25\lib\site-packages\scipy\integrate\quadpack.py:273: DeprecationWarning: PyArray_FromDims: use PyArray_SimpleNew. return _quadpack._qawoe(func,a,b,wvar,integr,args,full_output,epsabs,epsrel,limit,maxp1,1) C:\Python25\lib\site-packages\scipy\integrate\quadpack.py:273: DeprecationWarning: PyArray_FromDimsAndDataAndDescr: use PyArray_NewFromDescr. return _quadpack._qawoe(func,a,b,wvar,integr,args,full_output,epsabs,epsrel,limit,maxp1,1) .C:\Python25\lib\site-packages\scipy\integrate\quadpack.py:280: DeprecationWarning: PyArray_FromDims: use PyArray_SimpleNew. return _quadpack._qawfe(func,a,wvar,integr,args,full_output,epsabs,limlst,limit,maxp1) C:\Python25\lib\site-packages\scipy\integrate\quadpack.py:280: DeprecationWarning: PyArray_FromDimsAndDataAndDescr: use PyArray_NewFromDescr. return _quadpack._qawfe(func,a,wvar,integr,args,full_output,epsabs,limlst,limit,maxp1) .C:\Python25\lib\site-packages\scipy\integrate\quadpack.py:259: DeprecationWarning: PyArray_FromDims: use PyArray_SimpleNew. return _quadpack._qagpe(func,a,b,the_points,args,full_output,epsabs,epsrel,limit) C:\Python25\lib\site-packages\scipy\integrate\quadpack.py:259: DeprecationWarning: PyArray_FromDimsAndDataAndDescr: use PyArray_NewFromDescr. return _quadpack._qagpe(func,a,b,the_points,args,full_output,epsabs,epsrel,limit) ......C:\Python25\lib\site-packages\scipy\interpolate\fitpack2.py:479: UserWarning: The coefficients of the spline returned have been computed as the minimal norm least-squares solution of a (numerically) rank deficient system (deficiency=7). If deficiency is large, the results may be inaccurate. Deficiency may strongly depend on the value of eps. warnings.warn(message) ....C:\Python25\lib\site-packages\scipy\interpolate\fitpack2.py:420: UserWarning: The required storage space exceeds the available storage space: nxest or nyest too small, or s too small. The weighted least-squares spline corresponds to the current set of knots. warnings.warn(message) ...........C:\Python25\lib\site-packages\scipy\interpolate\fitpack.py:760: DeprecationWarning: PyArray_FromDims: use PyArray_SimpleNew. tx,ty,nxest,nyest,wrk,lwrk1,lwrk2) C:\Python25\lib\site-packages\scipy\interpolate\fitpack.py:760: DeprecationWarning: PyArray_FromDimsAndDataAndDescr: use PyArray_NewFromDescr. tx,ty,nxest,nyest,wrk,lwrk1,lwrk2) C:\Python25\lib\site-packages\scipy\interpolate\fitpack.py:837: DeprecationWarning: PyArray_FromDims: use PyArray_SimpleNew. z,ier=_fitpack._bispev(tx,ty,c,kx,ky,x,y,dx,dy) C:\Python25\lib\site-packages\scipy\interpolate\fitpack.py:837: DeprecationWarning: PyArray_FromDimsAndDataAndDescr: use PyArray_NewFromDescr. z,ier=_fitpack._bispev(tx,ty,c,kx,ky,x,y,dx,dy) ...............................C:\Python25\lib\site-packages\scipy\interpolate\fitpack.py:485: DeprecationWarning: PyArray_FromDims: use PyArray_SimpleNew. y,ier=_fitpack._spl_(x,der,t,c,k) C:\Python25\lib\site-packages\scipy\interpolate\fitpack.py:485: DeprecationWarning: PyArray_FromDimsAndDataAndDescr: use PyArray_NewFromDescr. y,ier=_fitpack._spl_(x,der,t,c,k) ..................KK..C:\Python25\lib\site-packages\scipy\io\tests\test_array_import.py:29: DeprecationWarning: PyArray_FromDims: use PyArray_SimpleNew. b = numpyio.fread(fid,1000000,N.Int16,N.Int) C:\Python25\lib\site-packages\scipy\io\tests\test_array_import.py:29: DeprecationWarning: PyArray_FromDimsAndDataAndDescr: use PyArray_NewFromDescr. 
b = numpyio.fread(fid,1000000,N.Int16,N.Int) .C:\Python25\lib\site-packages\numpy\lib\utils.py:106: DeprecationWarning: write_array is deprecated warnings.warn(str1, DeprecationWarning) C:\Python25\lib\site-packages\numpy\lib\utils.py:106: DeprecationWarning: read_array is deprecated warnings.warn(str1, DeprecationWarning) ......................C:\Python25\lib\site-packages\numpy\lib\utils.py:106: DeprecationWarning: npfile is deprecated warnings.warn(str1, DeprecationWarning) ....................................................................................................................................................................................FF................................................................................................................................................................................................................................................................................................F...........................................................................................................................................................................................................................................................................................................................................................................................................C:\Python25\lib\site-packages\scipy\odr\odrpack.py:1055: DeprecationWarning: PyArray_FromDims: use PyArray_SimpleNew. self.output = Output(apply(odr, args, kwds)) C:\Python25\lib\site-packages\scipy\odr\odrpack.py:1055: DeprecationWarning: PyArray_FromDimsAndDataAndDescr: use PyArray_NewFromDescr. self.output = Output(apply(odr, args, kwds)) .................C:\Python25\lib\site-packages\scipy\optimize\minpack.py:270: DeprecationWarning: PyArray_FromDims: use PyArray_SimpleNew. retval = _minpack._lmdif(func,x0,args,full_output,ftol,xtol,gtol,maxfev,epsfcn,factor,diag) C:\Python25\lib\site-packages\scipy\optimize\minpack.py:270: DeprecationWarning: PyArray_FromDimsAndDataAndDescr: use PyArray_NewFromDescr. retval = _minpack._lmdif(func,x0,args,full_output,ftol,xtol,gtol,maxfev,epsfcn,factor,diag) .................................SSSSSSSSSSS...............................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................C:\Python25\lib\site-packages\scipy\stats\stats.py:413: DeprecationWarning: scipy.stats.mean is deprecated; please update your code to use numpy.mean. Please note that: - numpy.mean axis argument defaults to None, not 0 - numpy.mean has a ddof argument to replace bias in a more general manner. scipy.stats.mean(a, bias=True) can be replaced by numpy.mean(x, axis=0, ddof=1). 
axis=0, ddof=1).""", DeprecationWarning) .C:\Python25\lib\site-packages\scipy\stats\stats.py:1237: DeprecationWarning: scipy.stats.std is deprecated; please update your code to use numpy.std. Please note that: - numpy.std axis argument defaults to None, not 0 - numpy.std has a ddof argument to replace bias in a more general manner. scipy.stats.std(a, bias=True) can be replaced by numpy.std(x, axis=0, ddof=1). axis=0, ddof=1).""", DeprecationWarning) C:\Python25\lib\site-packages\scipy\stats\stats.py:1214: DeprecationWarning: scipy.stats.var is deprecated; please update your code to use numpy.var. Please note that: - numpy.var axis argument defaults to None, not 0 - numpy.var has a ddof argument to replace bias in a more general manner. scipy.stats.var(a, bias=True) can be replaced by numpy.var(x, axis=0, ddof=1). axis=0, ddof=1).""", DeprecationWarning) .C:\Python25\lib\site-packages\scipy\stats\morestats.py:618: UserWarning: Ties preclude use of exact statistic. warnings.warn("Ties preclude use of exact statistic.") ......C:\Python25\lib\site-packages\scipy\stats\stats.py:491: DeprecationWarning: scipy.stats.median is deprecated; please update your code to use numpy.median. Please note that: - numpy.median axis argument defaults to None, not 0 - numpy.median has a ddof argument to replace bias in a more general manner. scipy.stats.median(a, bias=True) can be replaced by numpy.median(x, axis=0, ddof=1). axis=0, ddof=1).""", DeprecationWarning) ......................................................C:\Python25\lib\site-packages\numpy\lib\function_base.py:343: Warning: The semantics of histogram has been modified in the current release to fix long-standing issues with outliers handling. The main changes concern 1. the definition of the bin edges, now including the rightmost edge, and 2. the handling of upper outliers, now ignored rather than tallied in the rightmost bin. The previous behaviour is still accessible using `new=False`, but is scheduled to be deprecated in the next release (1.3). *This warning will not printed in the 1.3 release.* Use `new=True` to bypass this warning. Please read the docstring for more information. """, Warning) .................................................................................................................................................................................................................................... 
====================================================================== FAIL: test_lapack.test_all_lapack ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Python25\lib\site-packages\nose\case.py", line 203, in runTest self.test(*self.arg) File "C:\Python25\Lib\site-packages\scipy\lib\lapack\tests\esv_tests.py", line 41, in check_syevr assert_array_almost_equal(w,exact_w) File "C:\Python25\Lib\site-packages\numpy\testing\utils.py", line 311, in assert_array_almost_equal header='Arrays are not almost equal') File "C:\Python25\Lib\site-packages\numpy\testing\utils.py", line 296, in assert_array_compare raise AssertionError(msg) AssertionError: Arrays are not almost equal (mismatch 33.3333333333%) x: array([-0.66992444, 0.48769444, 9.18222618], dtype=float32) y: array([-0.66992434, 0.48769389, 9.18223045]) ====================================================================== FAIL: test_lapack.test_all_lapack ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Python25\lib\site-packages\nose\case.py", line 203, in runTest self.test(*self.arg) File "C:\Python25\Lib\site-packages\scipy\lib\lapack\tests\esv_tests.py", line 66, in check_syevr_irange assert_array_almost_equal(w,exact_w[rslice]) File "C:\Python25\Lib\site-packages\numpy\testing\utils.py", line 311, in assert_array_almost_equal header='Arrays are not almost equal') File "C:\Python25\Lib\site-packages\numpy\testing\utils.py", line 296, in assert_array_compare raise AssertionError(msg) AssertionError: Arrays are not almost equal (mismatch 33.3333333333%) x: array([-0.66992444, 0.48769444, 9.18222618], dtype=float32) y: array([-0.66992434, 0.48769389, 9.18223045]) ====================================================================== FAIL: test_imresize (test_pilutil.TestPILUtil) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Python25\Lib\site-packages\numpy\testing\decorators.py", line 82, in skipper return f(*args, **kwargs) File "C:\Python25\Lib\site-packages\scipy\misc\tests\test_pilutil.py", line 24, in test_imresize assert_equal(im1.shape,(11,22)) File "C:\Python25\Lib\site-packages\numpy\testing\utils.py", line 174, in assert_equal assert_equal(len(actual),len(desired),err_msg,verbose) File "C:\Python25\Lib\site-packages\numpy\testing\utils.py", line 183, in assert_equal raise AssertionError(msg) AssertionError: Items are not equal: ACTUAL: 0 DESIRED: 2 ====================================================================== KNOWNFAIL: test_mio.test_load ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Python25\lib\site-packages\nose\case.py", line 203, in runTest self.test(*self.arg) File "C:\Python25\Lib\site-packages\numpy\testing\decorators.py", line 119, in skipper raise KnownFailureTest, msg KnownFailureTest: Test skipped due to known failure ====================================================================== KNOWNFAIL: test_mio.test_round_trip ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Python25\lib\site-packages\nose\case.py", line 203, in runTest self.test(*self.arg) File "C:\Python25\Lib\site-packages\numpy\testing\decorators.py", line 119, in skipper raise KnownFailureTest, msg KnownFailureTest: Test skipped due to known failure 
====================================================================== SKIP: Getting factors of complex matrix ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Python25\Lib\site-packages\numpy\testing\decorators.py", line 80, in skipper raise nose.SkipTest, msg SkipTest: UMFPACK appears not to be compiled ====================================================================== SKIP: Getting factors of real matrix ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Python25\Lib\site-packages\numpy\testing\decorators.py", line 80, in skipper raise nose.SkipTest, msg SkipTest: UMFPACK appears not to be compiled ====================================================================== SKIP: Getting factors of complex matrix ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Python25\Lib\site-packages\numpy\testing\decorators.py", line 80, in skipper raise nose.SkipTest, msg SkipTest: UMFPACK appears not to be compiled ====================================================================== SKIP: Getting factors of real matrix ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Python25\Lib\site-packages\numpy\testing\decorators.py", line 80, in skipper raise nose.SkipTest, msg SkipTest: UMFPACK appears not to be compiled ====================================================================== SKIP: Prefactorize (with UMFPACK) matrix for solving with multiple rhs ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Python25\Lib\site-packages\numpy\testing\decorators.py", line 80, in skipper raise nose.SkipTest, msg SkipTest: UMFPACK appears not to be compiled ====================================================================== SKIP: Prefactorize matrix for solving with multiple rhs ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Python25\Lib\site-packages\numpy\testing\decorators.py", line 80, in skipper raise nose.SkipTest, msg SkipTest: UMFPACK appears not to be compiled ====================================================================== SKIP: Solve with UMFPACK: double precision complex ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Python25\Lib\site-packages\numpy\testing\decorators.py", line 80, in skipper raise nose.SkipTest, msg SkipTest: UMFPACK appears not to be compiled ====================================================================== SKIP: Solve: single precision complex ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Python25\Lib\site-packages\numpy\testing\decorators.py", line 80, in skipper raise nose.SkipTest, msg SkipTest: UMFPACK appears not to be compiled ====================================================================== SKIP: Solve with UMFPACK: double precision, sparse rhs ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Python25\Lib\site-packages\numpy\testing\decorators.py", line 80, in skipper raise nose.SkipTest, msg SkipTest: UMFPACK appears not to be compiled ====================================================================== SKIP: Solve with UMFPACK: double precision 
---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Python25\Lib\site-packages\numpy\testing\decorators.py", line 80, in skipper raise nose.SkipTest, msg SkipTest: UMFPACK appears not to be compiled ====================================================================== SKIP: Solve: single precision ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Python25\Lib\site-packages\numpy\testing\decorators.py", line 80, in skipper raise nose.SkipTest, msg SkipTest: UMFPACK appears not to be compiled ---------------------------------------------------------------------- Ran 2317 tests in 70.461s FAILED (failures=3) From aisaac at american.edu Wed Oct 22 20:07:58 2008 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 22 Oct 2008 20:07:58 -0400 Subject: [SciPy-user] scipy super pack installer for win32: please test In-Reply-To: <48FFA37A.4050804@american.edu> References: <48FF0214.4080302@ar.media.kyoto-u.ac.jp> <5ed2aa23-d992-40d8-90c5-eb34968ffa2a@k37g2000hsf.googlegroups.com> <48FFA37A.4050804@american.edu> Message-ID: <48FFC05E.5000208@american.edu> >>> http://www.ar.media.kyoto-u.ac.jp/members/david/archives/numpy/scipy-0.7.0.dev4826-win32-superpack-python2.5.exe > joep wrote: >> when I run scipy.test() in a command shell, >> or ipython shell, then I get a large output with Depreciation Warnings >> and test output. On 10/22/2008 6:04 PM Alan G Isaac apparently wrote: > Similar experience. Three failures. (I know these are not > a current concern.) Output below. Hmm, on a different machine (Intel T2600) I get only 2 errors. Don't know if that's useful info... Alan From dwf at cs.toronto.edu Thu Oct 23 03:20:30 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Thu, 23 Oct 2008 03:20:30 -0400 Subject: [SciPy-user] ndimage starting points Message-ID: Hi all, Lately I've been looking at ndimage for replacing some of the functionality in the Matlab Image Processing Toolbox and elsewhere, but am running into some documentation holes. The parts of ndimage that I can figure out how to use work brilliantly and as advertised (the filters module for example), but a lot of the functions in some submodules don't say much about what form of input they take. I'm hoping that someone more familiar with the codebase can point me in the right direction, and I'll be happy to clean up whatever comes of this thread (and anything else I discover) so that we can put it into docstrings or the cookbook. I'm mainly focusing on ndimage.measurements for now. So, here's the list. Any help or clarification is appreciated. - I've figured out, without much help from the docstrings, that the labels= argument to many of the functions is an integer array (that can be) produced using the very handy label() function. This probably deserves a mention in the module docstring (which I am happy to write). - label() takes an optional "structure" argument - what exactly is this, what form does it take, how does one create it, and in what circumstances should it be used? Also, is it intended to be used with thresholded images? - The same question about 'structure' goes for watershed_ift, as well as what form the 'markers' argument takes (I'm assuming an array with markers.shape == input.shape). dtype is... anything numeric I guess? It says negatives are treated differently than positives, but nothing else. 
- center_of_mass() - this may be a dumb question, but this produces an
"index" (which is not integer valued) in ndim(input) space, where
higher values in position (i_1, i_2, ... i_n) produce more "pull" on
the center of mass than a lower value in the same position would?

- Is there a reason that find_objects() takes a "max_label" argument
whereas every other function takes a scalar or sequence "index"
argument? It seems inconsistent, though there may be some good
algorithmic reason for it.

- On a similar note, find_objects computes a "bounding box" of some
sort when generating slices, I'm guessing? (or are slices far more
general than I had thought?)

- histogram()'s documentation seems incomplete to me. Just to be clear,
does it always produce a one-dimensional object, regardless of the
dimensionality of the input?

- Some things like variance() don't immediately seem to add anything to
the standard numpy functions, I am assuming that the ability to mask by
label is their key advantage. Can someone confirm or correct this?

That's about all I've got for now.

Thanks in advance,

David

From stefan at sun.ac.za Thu Oct 23 03:39:28 2008
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Thu, 23 Oct 2008 09:39:28 +0200
Subject: [SciPy-user] ndimage starting points
In-Reply-To:
References:
Message-ID: <9457e7c80810230039l16749388mbcfd72abab05e054@mail.gmail.com>

Hi David

2008/10/23 David Warde-Farley :
> Lately I've been looking at ndimage for replacing some of the
> functionality in the Matlab Image Processing Toolbox and elsewhere,
> but am running into some documentation holes. The parts of ndimage
> that I can figure out how to use work brilliantly and as advertised
> (the filters module for example), but a lot of the functions in some
> submodules don't say much about what form of input they take.

The ndimage module comes from Numeric. Unfortunately, the version we
included came from a different source and did not have all the Numeric
documentation. It would be great if you could write a patch to bring
some of the docs over -- they may also answer your questions below.

> - Some things like variance() don't immediately seem to add anything
> to the standard numpy functions, I am assuming that the ability to
> mask by label is their key advantage. Can someone confirm or correct
> this?

Much of the ndimage functionality was implemented before NumPy. I
think we can already replace a big part of its functionality using
Python + Numpy, without going down to the C level (and I think this
would benefit the library in general).

Cheers
Stéfan

From millman at berkeley.edu Thu Oct 23 03:41:51 2008
From: millman at berkeley.edu (Jarrod Millman)
Date: Thu, 23 Oct 2008 00:41:51 -0700
Subject: [SciPy-user] ndimage starting points
In-Reply-To:
References:
Message-ID:

Since you are offering to help out with this, I would like to see
someone do the following:

Take the existing numarray.ndimage docs:
http://stsdas.stsci.edu/numarray/numarray-1.5.html/module-numarray.ndimage.html
and the cookbook stuff:
http://www.scipy.org/SciPyPackages/Ndimage
and merge what you can into the docstrings and convert the rest into
restructured text and commit it to the scipy trunk. That way we can
start working on generating sphinx documentation for scipy.
Thanks,
--
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/

From dwf at cs.toronto.edu Thu Oct 23 04:02:21 2008
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Thu, 23 Oct 2008 04:02:21 -0400
Subject: [SciPy-user] ndimage starting points
In-Reply-To:
References:
Message-ID: <72FAC71F-2575-407B-96B2-4869BB320ED5@cs.toronto.edu>

On 23-Oct-08, at 3:41 AM, Jarrod Millman wrote:

> Since you are offering to help out with this, I would like to see
> someone do the following:
>
> Take the existing numarray.ndimage docs:
> http://stsdas.stsci.edu/numarray/numarray-1.5.html/module-numarray.ndimage.html
> and the cookbook stuff:
> http://www.scipy.org/SciPyPackages/Ndimage
> and merge what you can into the docstrings and convert the rest into
> restructured text and commit it to the scipy trunk. That way we can
> start working on generating sphinx documentation for scipy.

Thanks for the link. I don't know how I missed the numarray stuff. I'd
be happy to bring as much of this as possible into docstrings and other
SciPy docs - I'll keep a few editor windows open as I sort through
reading them.

David

From dwf at cs.toronto.edu Thu Oct 23 04:08:35 2008
From: dwf at cs.toronto.edu (David Warde-Farley)
Date: Thu, 23 Oct 2008 04:08:35 -0400
Subject: [SciPy-user] ndimage starting points
In-Reply-To: <9457e7c80810230039l16749388mbcfd72abab05e054@mail.gmail.com>
References: <9457e7c80810230039l16749388mbcfd72abab05e054@mail.gmail.com>
Message-ID: <75B1A618-5A46-4170-B01E-E21E15AF5662@cs.toronto.edu>

Hi Stefan,

Thanks for the quick reply!

On 23-Oct-08, at 3:39 AM, Stéfan van der Walt wrote:

> Hi David
>
> 2008/10/23 David Warde-Farley :
>
> The ndimage module comes from Numeric. Unfortunately, the version we
> included came from a different source and did not have all the Numeric
> documentation. It would be great if you could write a patch to bring
> some of the docs over -- they may also answer your questions below.

Did you mean to say numarray? Jarrod pointed me at the docs for a
numarray nd_image module. Hopefully there isn't also a module from
Numeric as well... Come to think of it, I do know why I missed those
docs -- Google isn't smart enough to return results for "nd_image" when
you search for ndimage :)

>> - Some things like variance() don't immediately seem to add anything
>> to the standard numpy functions, I am assuming that the ability to
>> mask by label is their key advantage. Can someone confirm or correct
>> this?
>
> Much of the ndimage functionality was implemented before NumPy. I
> think we can already replace a big part of its functionality using
> Python + Numpy, without going down to the C level (and I think this
> would benefit the library in general).

So, the one thing that, for example, variance() does is take a 'labels'
parameter and an 'index' parameter, which seems to suggest to me that
the purpose of this is to compute variance within a labeled object (or
all labeled objects, disregarding the background). I imagine this would
be easily reimplemented in pure Python, though, you're right.

For now I'll see about getting this documentation up to snuff using the
magical tome Jarrod pointed me toward.
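To make that concrete, something like this (an untested sketch; the toy
image is made up) is how I read the labels/index pair:

import numpy as np
from scipy import ndimage

# Toy image: two bright blobs on a zero background.
img = np.zeros((8, 8))
img[1:3, 1:3] = 1 + np.random.rand(2, 2)
img[5:8, 5:8] = 3 + np.random.rand(3, 3)

labels, n = ndimage.label(img > 0.5)    # integer labels 1..n
# One variance per labeled object; background (label 0) is ignored.
print ndimage.variance(img, labels, range(1, n + 1))

If that's right, it's essentially a per-label masked variance.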
Cheers,

David

From stefan at sun.ac.za Thu Oct 23 04:53:44 2008
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Thu, 23 Oct 2008 10:53:44 +0200
Subject: [SciPy-user] ndimage starting points
In-Reply-To: <75B1A618-5A46-4170-B01E-E21E15AF5662@cs.toronto.edu>
References: <9457e7c80810230039l16749388mbcfd72abab05e054@mail.gmail.com> <75B1A618-5A46-4170-B01E-E21E15AF5662@cs.toronto.edu>
Message-ID: <9457e7c80810230153q48bd61a9sdeb92ae8a38d8911@mail.gmail.com>

2008/10/23 David Warde-Farley :
> Did you mean to say numarray? Jarrod pointed me at the docs for a

Yup, sorry.

> For now I'll see about getting this documentation up to snuff using
> the magical tome Jarrod pointed me toward.

Thanks, that'd be much appreciated!

Cheers
Stéfan

From macrozhu at gmail.com Thu Oct 23 08:34:42 2008
From: macrozhu at gmail.com (Macro Zhu)
Date: Thu, 23 Oct 2008 14:34:42 +0200
Subject: [SciPy-user] longfloat print out problem in Windows XP
Message-ID: <11b97ec0810230534w546e6034q422a13530f12fdb1@mail.gmail.com>

Hi,

I ran into this printing problem with the longfloat type:

> print scipy.longfloat(10)
-1.49166814624e-154

> print float(scipy.longfloat(10))
10.0

> print scipy.array(10, dtype='longfloat')
-1.49166814624e-154

The same problem happens with Numpy as well. How can I get longfloat
values to print correctly, other than by converting them back to the
float type?

I am using scipy 0.6.0 and numpy 1.2.0 with python 2.4 on Windows XP
SP3.

Thanks!
-Mac
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From walter at aims.ac.za Thu Oct 23 08:28:40 2008
From: walter at aims.ac.za (Walter Mudzimbabwe)
Date: Thu, 23 Oct 2008 14:28:40 +0200 (SAST)
Subject: [SciPy-user] prime numbers
Message-ID: <60401.196.11.235.119.1224764920.squirrel@webmail.aims.ac.za>

Can anybody help me figure out why the following program cannot produce
the primes up to 10?
--------------------------------------------------
from scipy import *

def isdivisible(n,listt):
   for i in range(len(listt)):
       if (n%listt[i]==0):
           return 1
       else:
           return 0

def primes_upto(m):
   u=[1,2]
   for i in range(3,m+1):
        if (isdivisible(i,u[1:])==0):
           u.append(i)
   return u

print primes_upto(10)
-----------------------------------------------------
Its output is:

[1, 2, 3, 5, 7, 9]

--
Walter Mudzimbabwe (Formerly with AIMS)
University of Western Cape.
Mathematics Dept,
Private Bag X17,
7535 Bellville,
RSA

Contact: +27 78 5188402
         mudzmudz at gmail.com

"Those of many tricks take them to the grave".......waltermudz20008

From david at ar.media.kyoto-u.ac.jp Thu Oct 23 09:10:34 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Thu, 23 Oct 2008 22:10:34 +0900
Subject: [SciPy-user] longfloat print out problem in Windows XP
In-Reply-To: <11b97ec0810230534w546e6034q422a13530f12fdb1@mail.gmail.com>
References: <11b97ec0810230534w546e6034q422a13530f12fdb1@mail.gmail.com>
Message-ID: <490077CA.2040802@ar.media.kyoto-u.ac.jp>

Macro Zhu wrote:
> Hi,
>
> I ran into this printing problem with the longfloat type:
>
> > print scipy.longfloat(10)
> -1.49166814624e-154
>
> > print float(scipy.longfloat(10))
> 10.0
>
> > print scipy.array(10, dtype='longfloat')
> -1.49166814624e-154
>
> The same problem happens with Numpy as well. How can I get longfloat
> values to print correctly, other than by converting them back to the
> float type?

You can't. I won't bore you with details, but basically, the Microsoft
C runtime (printf) does not handle long double. There is a solution,
but it was too involved when 1.2.0 was about to be released.
I hope to solve the problem for 1.3.0. Note that the problem is only in
the print statement. For example:

a = scipy.longfloat(10)
b = scipy.longfloat(10)
print a                           # garbage
print a.astype(scipy.float)       # 10
print (a+b).astype(scipy.float)   # 20

cheers,

David

From gael.varoquaux at normalesup.org Thu Oct 23 09:34:11 2008
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Thu, 23 Oct 2008 15:34:11 +0200
Subject: [SciPy-user] prime numbers
In-Reply-To: <60401.196.11.235.119.1224764920.squirrel@webmail.aims.ac.za>
References: <60401.196.11.235.119.1224764920.squirrel@webmail.aims.ac.za>
Message-ID: <20081023133411.GA10344@phare.normalesup.org>

On Thu, Oct 23, 2008 at 02:28:40PM +0200, Walter Mudzimbabwe wrote:
> Can anybody help me figure out why the following program cannot
> produce the primes up to 10?

I haven't spent more than a few seconds looking at that (little time),
but my guess is:

> --------------------------------------------------
> from scipy import *
> def isdivisible(n,listt):
>    for i in range(len(listt)):
>        if (n%listt[i]==0):
>            return 1
>        else:
>            return 0

"else: ..." should be indented as the "for", not as the "if".

Gaël

From josef.pktd at gmail.com Thu Oct 23 09:51:14 2008
From: josef.pktd at gmail.com (joep)
Date: Thu, 23 Oct 2008 06:51:14 -0700 (PDT)
Subject: [SciPy-user] Problems with Google groups
Message-ID: <53ee39e7-4d94-40e9-8575-dbe56fe2c1bd@b1g2000hsg.googlegroups.com>

Just an observation:

I tried to reply to the prime numbers question through Google groups,
but the message is declared as deleted or expired. It seems that Google
groups is not very reliable anymore. Numpy-discussion disappeared from
Google groups a few days ago.

Time for me to look for an alternative.

Josef

From josef.pktd at gmail.com Thu Oct 23 09:53:24 2008
From: josef.pktd at gmail.com (joep)
Date: Thu, 23 Oct 2008 06:53:24 -0700 (PDT)
Subject: [SciPy-user] prime numbers
In-Reply-To: <60401.196.11.235.119.1224764920.squirrel@webmail.aims.ac.za>
References: <60401.196.11.235.119.1224764920.squirrel@webmail.aims.ac.za>
Message-ID: <1644ab8c-38ff-40b6-87af-cd3ad09d74ac@b1g2000hsg.googlegroups.com>

On Oct 23, 8:28 am, "Walter Mudzimbabwe" wrote:
> Can anybody help me figure out why the following program cannot
> produce the primes up to 10?
> --------------------------------------------------
> from scipy import *
>
> def isdivisible(n,listt):
>    for i in range(len(listt)):
>        if (n%listt[i]==0):
>            return 1
>        else:
>            return 0
>
> def primes_upto(m):
>    u=[1,2]
>    for i in range(3,m+1):
>         if (isdivisible(i,u[1:])==0):
>            u.append(i)
>    return u
>
> print primes_upto(10)
> -----------------------------------------------------
> Its output is:
>
> [1, 2, 3, 5, 7, 9]
>
> --
> Walter Mudzimbabwe (Formerly with AIMS)
> University of Western Cape.
> Mathematics Dept,
> Private Bag X17,
> 7535 Bellville,
> RSA
>
> Contact: +27 78 5188402
>          mudzm... at gmail.com
>
> "Those of many tricks take them to the grave".......waltermudz20008
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-u... at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
return 0 at the end of the for loop:

def isdivisible(n,listt):
    for i in range(len(listt)):
        if (n%listt[i]==0):
            return 1
    return 0

From et.gaudrain at free.fr Thu Oct 23 09:40:53 2008
From: et.gaudrain at free.fr (Etienne Gaudrain)
Date: Thu, 23 Oct 2008 14:40:53 +0100
Subject: [SciPy-user] prime numbers
In-Reply-To: <60401.196.11.235.119.1224764920.squirrel@webmail.aims.ac.za>
References: <60401.196.11.235.119.1224764920.squirrel@webmail.aims.ac.za>
Message-ID: <49007EE5.5090902@free.fr>

Hi,

This is probably not the best place to post your question... However,
this version should work:

--------------------------------------------------
from scipy import *

def isdivisible(n,listt):
    for i in range(len(listt)):
        if (n%listt[i]==0):
            return 1
    return 0

def primes_upto(m):
    u=[1,2]
    for i in range(3,m+1):
        if (isdivisible(i,u[1:])==0):
            u.append(i)
    return u

print primes_upto(10)
-----------------------------------------------------

Note that this is not the best algorithm. Google "primes sieve".

-Etienne

Walter Mudzimbabwe wrote:
> Can anybody help me figure out why the following program cannot
> produce the primes up to 10?
> --------------------------------------------------
> from scipy import *
>
> def isdivisible(n,listt):
>    for i in range(len(listt)):
>        if (n%listt[i]==0):
>            return 1
>        else:
>            return 0
>
> def primes_upto(m):
>    u=[1,2]
>    for i in range(3,m+1):
>         if (isdivisible(i,u[1:])==0):
>            u.append(i)
>    return u
>
> print primes_upto(10)
> -----------------------------------------------------
> Its output is:
>
> [1, 2, 3, 5, 7, 9]

--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Etienne Gaudrain
Centre for the Neural Basis of Hearing
Department of Physiology, Development and Neuroscience
University of Cambridge
Downing Street
Cambridge CB2 3EG
UK
Phone: +44 (1223) 333 859 office
Fax: +44 (1223) 333 840 department
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

From ajvogel at tuks.co.za Thu Oct 23 09:57:53 2008
From: ajvogel at tuks.co.za (Adolph J. Vogel)
Date: Thu, 23 Oct 2008 15:57:53 +0200
Subject: [SciPy-user] prime numbers
In-Reply-To: <60401.196.11.235.119.1224764920.squirrel@webmail.aims.ac.za>
References: <60401.196.11.235.119.1224764920.squirrel@webmail.aims.ac.za>
Message-ID: <200810231557.53357.ajvogel@tuks.co.za>

On Thursday 23 October 2008 14:28:40 Walter Mudzimbabwe wrote:
> Can anybody help me figure out why the following program cannot
> produce the primes up to 10?
> --------------------------------------------------
> from scipy import *
>
> def isdivisible(n,listt):
>    for i in range(len(listt)):
>        if (n%listt[i]==0):
>            return 1
>        else:
>            return 0
>
> def primes_upto(m):
>    u=[1,2]
>    for i in range(3,m+1):

At this point you're only passing the last element of u, which in this
case is two. So in isdivisible() you're only checking whether the
number is divisible by two. You need to check whether or not it's
divisible by divisors other than two as well.

Your code is also very complicated for such a simple task:

def isdivisible(p):
    # start at 2: every integer is divisible by 1
    for d in range(2, p):
        if p % d == 0:
            return False
    return True

primes = []
def prime_upto(m):
    for i in range(3, m+1, 2):
        if isdivisible(i):
            primes.append(i)

--
Adolph J. Vogel BEng(Mech)
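As an aside, the "primes sieve" Etienne suggests googling takes only a
few lines of numpy -- a sketch:

import numpy as np

def primes_upto(m):
    # Sieve of Eratosthenes: one boolean flag per candidate.
    sieve = np.ones(m + 1, dtype=bool)
    sieve[:2] = False                 # 0 and 1 are not prime
    for p in xrange(2, int(m**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = False     # cross off multiples of p
    return np.nonzero(sieve)[0]

print primes_upto(10)                 # -> [2 3 5 7]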
From josef.pktd at gmail.com Thu Oct 23 10:02:13 2008
From: josef.pktd at gmail.com (joep)
Date: Thu, 23 Oct 2008 07:02:13 -0700 (PDT)
Subject: [SciPy-user] Problems with Google groups
In-Reply-To: <53ee39e7-4d94-40e9-8575-dbe56fe2c1bd@b1g2000hsg.googlegroups.com>
References: <53ee39e7-4d94-40e9-8575-dbe56fe2c1bd@b1g2000hsg.googlegroups.com>
Message-ID:

Looks like it was just a temporary problem with the prime numbers
thread.

On Oct 23, 9:51 am, joep wrote:
> Just an observation:
>
> I tried to reply to the prime numbers question through Google groups,
> but the message is declared as deleted or expired. It seems that
> Google groups is not very reliable anymore. Numpy-discussion
> disappeared from Google groups a few days ago.
>
> Time for me to look for an alternative.
>
> Josef
> _______________________________________________
> SciPy-user mailing list
> SciPy-u... at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From nwagner at iam.uni-stuttgart.de Thu Oct 23 10:02:25 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Thu, 23 Oct 2008 16:02:25 +0200
Subject: [SciPy-user] array manipulation
Message-ID:

Hi all,

>>> M
array([[1025338,       1,       1,       1],
       [1036103,       1,       1,       1],
       [2008297,       1,       1,       1],
       [2086888,       0,       0,       1],
       [2127079,       1,       0,       0],
       [2157100,       0,       1,       0],
       [2157969,       1,       1,       1],
       [2222852,       1,       0,       1]])
>>> 1025338 in M[:,0]
True
>>> 2157100 in M[:,0]
True

How can I obtain the row index of numbers that belong to M?

Nils

From robert.kern at gmail.com Thu Oct 23 10:24:55 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 23 Oct 2008 09:24:55 -0500
Subject: [SciPy-user] array manipulation
In-Reply-To:
References:
Message-ID: <3d375d730810230724i7c64efcci4998f8509a73bf5e@mail.gmail.com>

On Thu, Oct 23, 2008 at 09:02, Nils Wagner wrote:
> Hi all,
>
> >>> M
> array([[1025338,       1,       1,       1],
>        [1036103,       1,       1,       1],
>        [2008297,       1,       1,       1],
>        [2086888,       0,       0,       1],
>        [2127079,       1,       0,       0],
>        [2157100,       0,       1,       0],
>        [2157969,       1,       1,       1],
>        [2222852,       1,       0,       1]])
> >>> 1025338 in M[:,0]
> True
> >>> 2157100 in M[:,0]
> True
>
> How can I obtain the row index of numbers that belong to M?

searchsorted() if you can guarantee that the column is always sorted.
Otherwise nonzero(x == M[:,0])[0].

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From aisaac at american.edu Thu Oct 23 10:37:43 2008
From: aisaac at american.edu (Alan G Isaac)
Date: Thu, 23 Oct 2008 10:37:43 -0400
Subject: [SciPy-user] prime numbers
In-Reply-To: <60401.196.11.235.119.1224764920.squirrel@webmail.aims.ac.za>
References: <60401.196.11.235.119.1224764920.squirrel@webmail.aims.ac.za>
Message-ID: <49008C37.8080100@american.edu>

Walter Mudzimbabwe wrote:
> Can anybody help me figure out why the following program cannot
> produce the primes up to 10?

I think this has nothing to do with SciPy? Try comp.lang.python
instead. But you can try:

def isdivisible(n, listt):
    return not all(n%d for d in listt)

This will short circuit appropriately.

Alan Isaac

From daniele at grinta.net Thu Oct 23 13:24:24 2008
From: daniele at grinta.net (Daniele Nicolodi)
Date: Thu, 23 Oct 2008 19:24:24 +0200
Subject: [SciPy-user] Suggestion about algorithm
Message-ID: <4900B348.9050304@grinta.net>

Hello, I'm going to ask something not strictly related to scipy.
Forgive me if this is not appropriate on the mailing list, but I don't
know where else I can seek help; any suggestion is appreciated.

I'm measuring the quality factor Q of a mechanical oscillator. I use
the ring-down technique: I excite the oscillator to a big oscillation
amplitude, so that my read-out noise is negligible, and then I observe
the decay of the oscillation amplitude over time.

The evolution of the amplitude A(t) in time can be described,
neglecting any external perturbation, as:

A(t) = A0 * exp(-Beta*t)

where Q = w0 / (2*Beta) and w0 is the oscillator's natural frequency.

I usually analyze my data by extracting the amplitude of each
oscillation and then computing:

Beta = - (dA(t)/dt) / A(t)

where dA(t)/dt is the first derivative of the amplitude, computed as
the difference between the amplitude of the current cycle and the
previous cycle, divided by the period of oscillation.

The problem arises because my oscillator has a very long period (about
500 seconds) and a very high Q (about 600000). This means that the
observation time is much shorter than the characteristic time of the
system, and that the value of Beta I want to resolve is very small. In
this situation my uncertainty on Beta is too big to resolve Q.

Does someone have a suggestion for a better technique to analyze my
data? Is there any smarter thing I can do?

Thanks. Bye.
--
Daniele

From robert.kern at gmail.com Thu Oct 23 14:20:47 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 23 Oct 2008 13:20:47 -0500
Subject: [SciPy-user] Suggestion about algorithm
In-Reply-To: <4900B348.9050304@grinta.net>
References: <4900B348.9050304@grinta.net>
Message-ID: <3d375d730810231120n97f301cs7b3200904f11a615@mail.gmail.com>

On Thu, Oct 23, 2008 at 12:24, Daniele Nicolodi wrote:
> Hello, I'm going to ask something not strictly related to scipy.
> Forgive me if this is not appropriate on the mailing list, but I don't
> know where else I can seek help; any suggestion is appreciated.
>
> I'm measuring the quality factor Q of a mechanical oscillator. I use
> the ring-down technique: I excite the oscillator to a big oscillation
> amplitude, so that my read-out noise is negligible, and then I observe
> the decay of the oscillation amplitude over time.
>
> The evolution of the amplitude A(t) in time can be described,
> neglecting any external perturbation, as:
>
> A(t) = A0 * exp(-Beta*t)
>
> where Q = w0 / (2*Beta) and w0 is the oscillator's natural frequency.
>
> I usually analyze my data by extracting the amplitude of each
> oscillation and then computing:
>
> Beta = - (dA(t)/dt) / A(t)
>
> where dA(t)/dt is the first derivative of the amplitude, computed as
> the difference between the amplitude of the current cycle and the
> previous cycle, divided by the period of oscillation.
>
> The problem arises because my oscillator has a very long period (about
> 500 seconds) and a very high Q (about 600000). This means that the
> observation time is much shorter than the characteristic time of the
> system, and that the value of Beta I want to resolve is very small. In
> this situation my uncertainty on Beta is too big to resolve Q.
>
> Does someone have a suggestion for a better technique to analyze my
> data? Is there any smarter thing I can do?

Can you just get the oscillating curve itself rather than extracting
the peaks? It might be easiest just to fit the decaying oscillator
function to the curve. Your uncertainty may still be large, but
probably better than what you currently have.
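For example, a rough sketch of such a fit with scipy.optimize.leastsq
(synthetic data stands in for yours; the model form and every number
here are assumptions):

import numpy as np
from scipy import optimize

# Synthetic ring-down: period ~500 s, Q ~ 600000, small read-out noise.
w0 = 2*np.pi/500.0
beta = w0/(2*6e5)
t = np.linspace(0, 20000, 2000)
y = np.exp(-beta*t)*np.cos(w0*t) + 1e-4*np.random.randn(len(t))

def residuals(p, t, y):
    A0, b, w, phi = p
    return y - A0*np.exp(-b*t)*np.cos(w*t + phi)

p0 = [1.0, 0.0, w0, 0.0]     # crude initial guess
p, ier = optimize.leastsq(residuals, p0, args=(t, y))
print "Q estimate:", p[2]/(2*p[1])

Whether the fit can actually resolve Beta at Q ~ 600000 over a short
record is, of course, exactly the difficulty you describe; but fitting
the whole curve uses all of the data at once instead of pairwise
amplitude differences.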
From millman at berkeley.edu Thu Oct 23 15:50:21 2008
From: millman at berkeley.edu (Jarrod Millman)
Date: Thu, 23 Oct 2008 12:50:21 -0700
Subject: Re: [SciPy-user] ndimage starting points
In-Reply-To: <72FAC71F-2575-407B-96B2-4869BB320ED5@cs.toronto.edu>
References: <72FAC71F-2575-407B-96B2-4869BB320ED5@cs.toronto.edu>
Message-ID: 

On Thu, Oct 23, 2008 at 1:02 AM, David Warde-Farley wrote:
> Thanks for the link. I don't know how I missed the numarray stuff.

The nd_image documentation was written in LaTeX. I don't have time to
look for it right now, but later this week I should be able to get you
the LaTeX sources (unless someone beats me to it). You could use pandoc
to try to auto-convert them to restructured text for the first pass.
You will probably need to clean it up a bit, but it will get you most
of the way there.

http://johnmacfarlane.net/pandoc/

Thanks,

-- 
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/

From spmcinerney at hotmail.com Thu Oct 23 16:53:59 2008
From: spmcinerney at hotmail.com (Stephen McInerney)
Date: Thu, 23 Oct 2008 13:53:59 -0700
Subject: Re: [SciPy-user] prime numbers
In-Reply-To: 
References: 
Message-ID: 

Walter,

The culprit is the "else: return 0" clause in your isdivisible() fn.
You return 0 immediately if n is not divisible by the first entry
listt[0], i.e. 2. That's why you see all the odd numbers.

> def isdivisible(n,listt):
>     for i in range(len(listt)):
>         if (n%listt[i]==0):
>             return 1
>         else:
>             return 0

but you want this, where the 'return 0' is a fallthrough reached only
when all iterations of the for loop have failed to find any divisor:

>     for i in range(len(listt)):
>         if (n%listt[i]==0):
>             return 1
>
>     return 0   # this is a fallthrough

PS If you leave 1 out of the list to begin with, then you don't need to
say listt[1:].

PPS Another refinement is that you only need to test divisors up to
sqrt(n) rounded down, i.e. int(math.sqrt(n)). Thus e.g. when you
consider the primeness of 31 you don't need to test whether it's
divisible by all primes up to 29, only up to 5. This will speed things
up a lot.

Regards,
Stephen

> Message: 9
> Date: Thu, 23 Oct 2008 14:28:40 +0200 (SAST)
> From: "Walter Mudzimbabwe"
> Subject: [SciPy-user] prime numbers
> To: scipy-user at scipy.org
> Message-ID:
> <60401.196.11.235.119.1224764920.squirrel at webmail.aims.ac.za>
> Content-Type: text/plain;charset=iso-8859-1
>
> can anybody help me figure out why the following program cannot produce
> primes upto 10.
> --------------------------------------------------
> from scipy import *
>
> def isdivisible(n,listt):
>     for i in range(len(listt)):
>         if (n%listt[i]==0):
>             return 1
>         else:
>             return 0
>
> def primes_upto(m):
>     u=[1,2]
>     for i in range(3,m+1):
>         if (isdivisible(i,u[1:])==0):
>             u.append(i)
>     return u
>
> print primes_upto(10)
> -----------------------------------------------------
> its output is:
>
> [1, 2, 3, 5, 7, 9]
>
> --
> Walter Mudzimbabwe (Formerly with AIMS)
> University of Western Cape.
> Mathematics Dept,
> Private Bag X17,
> 7535 Bellville,
> RSA
>
> Contact :+27 78 5188402
> mudzmudz at gmail.com
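Putting Stephen's two fixes together -- the fallthrough return and the
sqrt(n) bound -- and leaving 1 out of the list from the start (1 is not
prime), one corrected sketch of the program looks like this:

    from math import sqrt

    def isdivisible(n, primes):
        limit = int(sqrt(n))
        for p in primes:
            if p > limit:
                break          # no divisor up to sqrt(n): n is prime
            if n % p == 0:
                return 1
        return 0               # the fallthrough described above

    def primes_upto(m):
        u = [2]
        for i in range(3, m + 1):
            if isdivisible(i, u) == 0:
                u.append(i)
        return u

    print primes_upto(10)      # [2, 3, 5, 7]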
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From stefan at sun.ac.za Thu Oct 23 17:22:41 2008
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Thu, 23 Oct 2008 23:22:41 +0200
Subject: [SciPy-user] SciPy Sprint Weekend: 1 and 2 November
Message-ID: <9457e7c80810231422m6cab2e75wbcb343472449312@mail.gmail.com>

Event: SciPy Sprint
Date: 1, 2 November 2007
Find us on: irc.freenode.net in channel #scipy

Hi all,

As industry and academia slow down towards the end of the year, we may
be able to tap some developer resources for a final sprint or two
towards SciPy 0.7. There has never been so much low-hanging fruit
before: the bug tracker is filled with tickets just waiting to be
triaged and fixed!

I would like to hold a SciPy sprint on the 1st and 2nd of November. At
the moment, I can only volunteer my own time, but I am confident that
many of our students and colleagues would be able to lend a hand.

If some of you are willing to donate 2 hours of your time, this would
be a great opportunity to assist us in getting ready for the next Big
Release of SciPy.

We shall start working on Saturday morning here in South Africa, and
soon thereafter America should wake up to the smell of freshly triaged
tickets. I look forward to seeing you there, and to the 0.7 release of
SciPy!

Kind regards,
Stéfan

P.S. I sent this to the scipy-user list, since I firmly believe that
many of our users have the potential to make valuable contributions as
developers. Don't be shy -- we appreciate help on all levels.

P.P.S. If you are in South Africa, I'll provide free drinks and
internet. That's almost reason enough to come!

From pav at iki.fi Thu Oct 23 17:54:17 2008
From: pav at iki.fi (Pauli Virtanen)
Date: Thu, 23 Oct 2008 21:54:17 +0000 (UTC)
Subject: Re: [SciPy-user] ndimage starting points
References: 
Message-ID: 

Thu, 23 Oct 2008 00:41:51 -0700, Jarrod Millman wrote:

> Since you are offering to help out with this, I would like to see
> someone do the following:
>
> Take the existing numarray.ndimage docs:
> http://stsdas.stsci.edu/numarray/numarray-1.5.html/module-numarray.ndimage.html
>
> and the cookbook stuff:
>
> http://www.scipy.org/SciPyPackages/Ndimage
>
> and merge what you can into the docstrings and convert the rest into
> restructured text and commit it to the scipy trunk. That way we can
> start working on generating sphinx documentation for scipy.

I started working on Sphinx stuff for Scipy last week, and got this far:

http://www.iki.fi/pav/tmp/scipy-refguide.tar.gz (source)
http://www.iki.fi/pav/tmp/scipy-refguide/

It's a Sphinx framework similar to the Numpy reference guide we started
working on this summer. There's in principle also a BZR branch for this
in Launchpad,

https://code.launchpad.net/~pauli-virtanen/scipy/scipy-refguide

which you could track using Bazaar (http://bazaar-vcs.org/), but for
some reason Launchpad decided to dislike me today, so it doesn't work
now. (BTW, should I put this to Scipy SVN, and if yes, where? What about
the corresponding Numpy documentation?)

There's already a very simple bare-bones version of the ndimage module
page there, but it only lists the functions and their docstrings which
Sphinx extracts from scipy.ndimage.
-- 
Pauli Virtanen

From sebastian.rooks at free.fr Thu Oct 23 18:13:43 2008
From: sebastian.rooks at free.fr (Sebastian Rooks)
Date: Thu, 23 Oct 2008 22:13:43 +0000 (UTC)
Subject: Re: [SciPy-user] Suggestion about algorithm
References: <4900B348.9050304@grinta.net>
Message-ID: 

Daniele Nicolodi grinta.net> writes:
>
> Hello, I'm going to ask something not strictly related to scipy. Forgive
> me if this is not appropriate on the mailing list, but I don't know
> where else I can seek help; any suggestion is appreciated.
>
> I'm measuring the quality factor Q of a mechanical oscillator. I use the
> ring-down technique: I excite the oscillator to a big oscillation
> amplitude so that my read-out noise is negligible and then I observe the
> decay of the oscillation amplitude over time.

What about Harminv?

http://ab-initio.mit.edu/wiki/index.php/Harminv

Regards,

Seb

From stefan at sun.ac.za Fri Oct 24 03:55:49 2008
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Fri, 24 Oct 2008 09:55:49 +0200
Subject: Re: [SciPy-user] SciPy Sprint Weekend: 1 and 2 November
In-Reply-To: <9457e7c80810240054o3f00cc5fp7528a65af10cc2c7@mail.gmail.com>
References: <9457e7c80810231422m6cab2e75wbcb343472449312@mail.gmail.com>
	<4C4FE880-8830-4153-83CF-7565E5EA4646@cs.toronto.edu>
	<9457e7c80810240054o3f00cc5fp7528a65af10cc2c7@mail.gmail.com>
Message-ID: <9457e7c80810240055m4f53ae56yb4621b5c786f2f22@mail.gmail.com>

Sorry, wrong list.

---------- Forwarded message ----------
From: Stéfan van der Walt
Date: 2008/10/24
Subject: Re: [SciPy-user] SciPy Sprint Weekend: 1 and 2 November
To: SciPy Developers List

2008/10/24 David Warde-Farley:
> I think you mean 2008? ;)

Yes, of course!

Event: SciPy Sprint
Date: 1, 2 November 2008
Find us on: irc.freenode.net in channel #scipy

I hope that all of you can join us. Nathan has put a lot of work into
sparse matrices, David has been working on build systems, other David
improved special functions, we have a brand new spatial module by Anne,
other other David is documenting ndimage, Damian implemented
hierarchical clustering and Tiziano is contributing generalised
eigenproblems.

This is going to be a very useful release -- we must just get it out
the door!

Cheers
Stéfan

From stefan at sun.ac.za Fri Oct 24 03:59:45 2008
From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=)
Date: Fri, 24 Oct 2008 09:59:45 +0200
Subject: Re: [SciPy-user] ndimage starting points
In-Reply-To: 
References: 
Message-ID: <9457e7c80810240059m1accd206pd3ece9689cec8d67@mail.gmail.com>

Hi Pauli,

2008/10/23 Pauli Virtanen:
> I started working on Sphinx stuff for Scipy last week, and got this far:
>
> http://www.iki.fi/pav/tmp/scipy-refguide.tar.gz (source)
> http://www.iki.fi/pav/tmp/scipy-refguide/

This already looks great! SciPy is a *big* library, and I think this
reference guide will be extremely useful.

Did you see the post about an improved NumpyExt on the Sphinx mailing
list? Maybe we can incorporate those changes back into our code.
Cheers
Stéfan

From millman at berkeley.edu Fri Oct 24 05:12:32 2008
From: millman at berkeley.edu (Jarrod Millman)
Date: Fri, 24 Oct 2008 02:12:32 -0700
Subject: Re: [SciPy-user] SciPy Sprint Weekend: 1 and 2 November
In-Reply-To: <9457e7c80810231422m6cab2e75wbcb343472449312@mail.gmail.com>
References: <9457e7c80810231422m6cab2e75wbcb343472449312@mail.gmail.com>
Message-ID: 

On Thu, Oct 23, 2008 at 2:22 PM, Stéfan van der Walt wrote:
> Event: SciPy Sprint
> Date: 1, 2 November 2008
> Find us on: irc.freenode.net in channel #scipy

This is an absolutely excellent idea! I will organize a sprint at UC
Berkeley for the 1st and 2nd of November and try to rope in as many
people as I am able. If we start when you stop (and vice versa), we may
be able to pull off 48 consecutive hours of test fixing, code clean-up,
and documentation. I am looking forward to releasing SciPy 0.7 very
soon.

Thanks,

-- 
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/

From daniele at grinta.net Fri Oct 24 06:32:07 2008
From: daniele at grinta.net (Daniele Nicolodi)
Date: Fri, 24 Oct 2008 12:32:07 +0200
Subject: Re: [SciPy-user] Suggestion about algorithm
In-Reply-To: <3d375d730810231120n97f301cs7b3200904f11a615@mail.gmail.com>
References: <4900B348.9050304@grinta.net>
	<3d375d730810231120n97f301cs7b3200904f11a615@mail.gmail.com>
Message-ID: <4901A427.8010504@grinta.net>

Robert Kern wrote:
> Can you just get the oscillating curve itself rather than extracting
> the peaks? It might be easiest just to fit the decaying oscillator
> function to the curve. Your uncertainty may still be large, but
> probably better than what you currently have.

The point in demodulating the oscillation and taking the derivative of
the amplitude is that this way every oscillation becomes an independent
estimate of the decay constant Beta.

I'm convinced that a fitting procedure is not statistically correct,
because each point in my time series is strongly correlated with the
previous ones (the characteristic time of the system is on the order of
a couple of years). In other words, an external disturbance acting on
the oscillator (think of it as a kick) influences only a single data
point in my estimation of Beta with the derivative. I then average all
the independent data points. With a fit, a kick would instead affect
the estimate very badly.

Ciao
-- 
Daniele

From lopmart at gmail.com Fri Oct 24 09:03:43 2008
From: lopmart at gmail.com (Jose Lopez)
Date: Fri, 24 Oct 2008 06:03:43 -0700
Subject: [SciPy-user] error at sparse
Message-ID: <4eeef9d40810240603i5c0c0897y2fdf15b9d9052682@mail.gmail.com>

Hi, I have worked with scipy before, and I used the line of code
'Matrix=sparse.lil_matrix((100,100),float)' without any problem, but
now Python gives me the error 'NameError: name 'sparse' is not
defined'. Does anybody know why?

Thanks

PS: I have Python 2.5.2 and SciPy 0.6.0.1
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From robert.kern at gmail.com Fri Oct 24 09:06:05 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 24 Oct 2008 08:06:05 -0500
Subject: Re: [SciPy-user] ndimage starting points
In-Reply-To: 
References: 
Message-ID: <3d375d730810240606q53b2c05fx873f923101600fb6@mail.gmail.com>

On Thu, Oct 23, 2008 at 16:54, Pauli Virtanen wrote:
> Thu, 23 Oct 2008 00:41:51 -0700, Jarrod Millman wrote:
>
>> Since you are offering to help out with this, I would like to see
>> someone do the following:
>>
>> Take the existing numarray.ndimage docs:
>> http://stsdas.stsci.edu/numarray/numarray-1.5.html/module-numarray.ndimage.html
>>
>> and the cookbook stuff:
>>
>> http://www.scipy.org/SciPyPackages/Ndimage
>>
>> and merge what you can into the docstrings and convert the rest into
>> restructured text and commit it to the scipy trunk. That way we can
>> start working on generating sphinx documentation for scipy.
>
> I started working on Sphinx stuff for Scipy last week, and got this far:
>
> http://www.iki.fi/pav/tmp/scipy-refguide.tar.gz (source)
> http://www.iki.fi/pav/tmp/scipy-refguide/
>
> It's a Sphinx framework similar to the Numpy reference guide we started
> working on this summer. There's in principle also a BZR branch for this
> in Launchpad,
>
> https://code.launchpad.net/~pauli-virtanen/scipy/scipy-refguide
>
> which you could track using Bazaar (http://bazaar-vcs.org/), but for
> some reason Launchpad decided to dislike me today, so it doesn't work
> now. (BTW, should I put this to Scipy SVN, and if yes, where? What
> about the corresponding Numpy documentation?)

If you would like to, sure. If you want to stick with Bazaar, that's
fine with me, too, but I would like to see official links from
somewhere (like the Doc Marathon wiki page) to what you consider to be
the trunk for each.

If you do want to move it over to SVN, go ahead and make
numpy-refguide/ and scipy-refguide/ directories all the way at the root
as siblings to trunk/. Inside them, you can either make branches/ tags/
trunk/ or you can reuse the top-level branches/ and tags/. Probably the
former.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From robert.kern at gmail.com Fri Oct 24 09:17:34 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 24 Oct 2008 08:17:34 -0500
Subject: Re: [SciPy-user] error at sparse
In-Reply-To: <4eeef9d40810240603i5c0c0897y2fdf15b9d9052682@mail.gmail.com>
References: <4eeef9d40810240603i5c0c0897y2fdf15b9d9052682@mail.gmail.com>
Message-ID: <3d375d730810240617u232fc119mbf0eddf43cd4486c@mail.gmail.com>

On Fri, Oct 24, 2008 at 08:03, Jose Lopez wrote:
> Hi, I have worked with scipy before, and I used the line of code
> 'Matrix=sparse.lil_matrix((100,100),float)' without any problem, but
> now Python gives me the error 'NameError: name 'sparse' is not
> defined'. Does anybody know why?

Most likely you are missing an import of the sparse package. If not,
can you show us a complete example that fails?

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From lopmart at gmail.com Fri Oct 24 09:42:25 2008
From: lopmart at gmail.com (Jose Lopez)
Date: Fri, 24 Oct 2008 06:42:25 -0700
Subject: Re: [SciPy-user] error at sparse
In-Reply-To: <3d375d730810240617u232fc119mbf0eddf43cd4486c@mail.gmail.com>
References: <4eeef9d40810240603i5c0c0897y2fdf15b9d9052682@mail.gmail.com>
	<3d375d730810240617u232fc119mbf0eddf43cd4486c@mail.gmail.com>
Message-ID: <4eeef9d40810240642u28450542g37f64e864fa32ffa@mail.gmail.com>

My headers are

from pylab import *
from scipy import *

JL

On Fri, Oct 24, 2008 at 6:17 AM, Robert Kern wrote:
> On Fri, Oct 24, 2008 at 08:03, Jose Lopez wrote:
> > Hi, I have worked with scipy before, and I used the line of code
> > 'Matrix=sparse.lil_matrix((100,100),float)' without any problem, but
> > now Python gives me the error 'NameError: name 'sparse' is not
> > defined'. Does anybody know why?
>
> Most likely you are missing an import of the sparse package. If not,
> can you show us a complete example that fails?
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
> -- Umberto Eco
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From robert.kern at gmail.com Fri Oct 24 10:36:34 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 24 Oct 2008 09:36:34 -0500
Subject: Re: [SciPy-user] error at sparse
In-Reply-To: <4eeef9d40810240642u28450542g37f64e864fa32ffa@mail.gmail.com>
References: <4eeef9d40810240603i5c0c0897y2fdf15b9d9052682@mail.gmail.com>
	<3d375d730810240617u232fc119mbf0eddf43cd4486c@mail.gmail.com>
	<4eeef9d40810240642u28450542g37f64e864fa32ffa@mail.gmail.com>
Message-ID: <3d375d730810240736l25ac95acqf38663a20fed59fc@mail.gmail.com>

On Fri, Oct 24, 2008 at 08:42, Jose Lopez wrote:
> My headers are
>
> from pylab import *
> from scipy import *

Right. The subpackages of scipy do not get imported with "from scipy
import *". You need to do

from scipy import sparse

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco
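A complete minimal example along the lines of the original post, with the
missing import added (the dtype is passed by keyword here, since a
positional argument after the shape may be interpreted differently
depending on the scipy version):

    from scipy import sparse

    A = sparse.lil_matrix((100, 100), dtype=float)
    A[0, :10] = 1.0      # LIL is cheap to fill element by element
    B = A.tocsr()        # convert before doing heavy arithmetic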
From dlrt2 at ast.cam.ac.uk Fri Oct 24 10:26:19 2008
From: dlrt2 at ast.cam.ac.uk (David Trethewey)
Date: Fri, 24 Oct 2008 15:26:19 +0100
Subject: [SciPy-user] optimize.leastsq - Value Error: shape mismatch
Message-ID: <4901DB0B.3060704@ast.cam.ac.uk>

I get the following error:

Traceback (most recent call last):
  File "M31FeHfit_total.py", line 86, in <module>
    p1,success = optimize.leastsq(errfunc, p0, args = (X, Y))
  File "/usr/lib/python2.5/site-packages/scipy/optimize/minpack.py", line 264, in leastsq
    m = check_func(func,x0,args,n)[0]
  File "/usr/lib/python2.5/site-packages/scipy/optimize/minpack.py", line 11, in check_func
    res = atleast_1d(thefunc(*((x0[:numinputs],)+args)))
  File "M31FeHfit_total.py", line 81, in <lambda>
    errfunc = lambda p, x, y: fitfunc(p,x) - y # Distance to the target function
ValueError: shape mismatch: objects cannot be broadcast to a single shape

when running the following code:

hista = hist(FeH_sub_range,bins)
print "hista[1] = ",hista[1]
print "hista[0] = ",hista[0]
X = numpy.array(hista[1])
Y = numpy.array(hista[0])
print X
# fit gaussian
fitfunc = lambda p, x: (p[0]**2)*exp(-(x-p[1])**2/(2*p[2]**2)) # Target function
errfunc = lambda p, x, y: fitfunc(p,x) - y # Distance to the target function
doublegauss = lambda q,x: (q[0]**2)*exp(-(x-q[1])**2/(2*q[2]**2)) + (q[3]**2)*exp(-(x-q[4])**2/(2*q[5]**2))
doublegausserr = lambda q,x,y: doublegauss(q,x) - y
p0 = numpy.array([10.0,-2,0.5])
p1,success = optimize.leastsq(errfunc, p0, args = (X, Y))

As I am passing the arguments as numpy arrays, I don't see why there is
a problem. Additionally, this problem has only arisen since changing
the version of Python.
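One plausible culprit -- an assumption, not a confirmed diagnosis: newer
hist()/histogram() implementations return the bin *edges*, so that
len(hista[1]) == len(hista[0]) + 1 and X and Y no longer broadcast against
each other. Trimming to bin centres restores matching lengths:

    Y = numpy.array(hista[0])
    edges = numpy.array(hista[1])
    if len(edges) == len(Y) + 1:
        X = 0.5 * (edges[:-1] + edges[1:])   # bin centres
    else:
        X = edges                            # old behaviour: left edges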
David Trethewey

From jdh2358 at gmail.com Sat Oct 25 09:50:31 2008
From: jdh2358 at gmail.com (John Hunter)
Date: Sat, 25 Oct 2008 08:50:31 -0500
Subject: [SciPy-user] decimate
Message-ID: <88e473830810250650i9100ef0tf52acc3ec1787b8a@mail.gmail.com>

One of the functions I used to use a lot in matlab was decimate, which
downsamples data after doing a low pass filter to prevent aliasing.
It would be nice to have an analogous simple function in scipy which
does the same -- I found the example implementation linked below on
google. Does this look like something suitable for inclusion in scipy,
or is there an equally easy way to do it with existing scipy tools?
The context of this question is that we are teaching python for
scientific computing this weekend to some students in Claremont who are
trying to wean themselves from the matlab teat.

http://snippets.dzone.com/posts/show/1209

From josef.pktd at gmail.com Sat Oct 25 14:03:38 2008
From: josef.pktd at gmail.com (joep)
Date: Sat, 25 Oct 2008 11:03:38 -0700 (PDT)
Subject: [SciPy-user] return type of inverse cdf of discrete distribution?
Message-ID: <3aa03dc4-73ba-4198-bdb6-222f9048c433@u57g2000hsf.googlegroups.com>

What should be the return type of the inverse cdf (and inverse survival
function) of a discrete distribution? The problem is the handling of inf
for the boundary and nans for invalid input. The options are:

* return floating point (double), with inf and nans returned as for the
  continuous distributions, or
* return integer, and throw an exception if return values are inf or
  nans (or restrict to the open interval (0,1)).

Currently, scipy.stats returns integers (long), but the treatment is
not consistent, e.g. instead of nans, zeros are returned for invalid
input, and inf on the boundary throws a casting error.

I just checked in R:

continuous distribution: the inverse cdf returns nans and infs, e.g.

> qnorm(c(0.5,1.0,2.0), 0, 25)
[1]   0 Inf NaN

discrete distribution in VGAM: only accepts values in (0,1), e.g.

> qpospois(c(0.5,1.0,2.0), 25)
Error in qpospois(c(0.5, 1, 2), 25) : bad input for argument "p"
> qpospois(1.0, 25)
Error in qpospois(1, 25) : bad input for argument "p"
> qpospois(0.0, 25)
Error in qpospois(0, 25) : bad input for argument "p"
> qpospois(c(0.0000001,0.5,0.999999999), 25)
[1]  4 25 60

however, in the stats package in R: no domain checking, and nans and
inf are returned

> aa=qpois(c(0.5,1.0,2.0), 25)
Warning message:
In qpois(p, lambda, lower.tail, log.p) : NaNs produced
> aa
[1]  25 Inf NaN
> typeof(aa)
[1] "double"
> aap=qpospois(c(0.5,0.99), 25)
> typeof(aap)
[1] "double"
> aa1=qpois(0.5, 25)
> typeof(aa1)
[1] "double"

Both VGAM and stats in R return double.

Changing the return type of the scipy.stats discrete distributions to
double would be a break in the API; I don't know if this is relevant or
if anybody cares. An alternative would be to choose the return type
depending on the presence of nans or infs, but that might not be very
reliable for applications.

Josef

From oliphant at enthought.com Sat Oct 25 15:42:57 2008
From: oliphant at enthought.com (Travis E. Oliphant)
Date: Sat, 25 Oct 2008 14:42:57 -0500
Subject: Re: [SciPy-user] decimate
In-Reply-To: <88e473830810250650i9100ef0tf52acc3ec1787b8a@mail.gmail.com>
References: <88e473830810250650i9100ef0tf52acc3ec1787b8a@mail.gmail.com>
Message-ID: <490376C1.4050006@enthought.com>

John Hunter wrote:
> One of the functions I used to use a lot in matlab was decimate, which
> downsamples data after doing a low pass filter to prevent aliasing.

You can use the function resample in scipy.signal.
It uses a Fourier method for the low-pass filter. resample(x, len(x)/q) should give a similar result as decimate(x, q) -Travis From wbaxter at gmail.com Sat Oct 25 15:53:10 2008 From: wbaxter at gmail.com (Bill Baxter) Date: Sun, 26 Oct 2008 04:53:10 +0900 Subject: [SciPy-user] decimate In-Reply-To: <490376C1.4050006@enthought.com> References: <88e473830810250650i9100ef0tf52acc3ec1787b8a@mail.gmail.com> <490376C1.4050006@enthought.com> Message-ID: On Sun, Oct 26, 2008 at 4:42 AM, Travis E. Oliphant wrote: > John Hunter wrote: >> One of the functions I used to use a lot in matlab was decimate, which >> downsamples data after doing a low pass filter to prevent aliasing. >> > > You can use the function resample in scipy.signal. It uses a Fourier > method for the low-pass filter. > > resample(x, len(x)/q) > > should give a similar result as decimate(x, q) Ooh, wiki fodder! Pasted this tidbit here: http://www.scipy.org/NumPy_for_Matlab_Users --bb From nono.231 at gmail.com Sat Oct 25 16:14:46 2008 From: nono.231 at gmail.com (I. Soumpasis) Date: Sat, 25 Oct 2008 21:14:46 +0100 Subject: [SciPy-user] ANN: Python programs for epidemic modelling Message-ID: <3ff92a550810251314m18d0596fxffb0a09658a21260@mail.gmail.com> Dear lists, DeductiveThinking.com now provides the Python programs for the book of M. Keeling & P. Rohani "Modeling Infectious Diseases in Humans and Animals", Princeton University Press, 2008. The book has on-line material which includes programs for different models in various programming languages and mathematical tools such as, "C++, FORTRAN and Matlab, while some are also coded in the web-based Java programming language to allow readers to quickly experiment with these types of models", as it is stated at the website. The Python version of the programs were written long ago and submitted to the book's on line material website (available soon). The Python programs with the basic equations modelled and the results in figures were now uploaded on a special wiki page of DeductiveThinking.com. Since, the programs are heavily using numpy, scipy and matplotlib libraries, I send this announcement to all the three lists and the main python-list; sorry for double-posting. The announcement with the related links is uploaded here http://blog.deductivethinking.com/?p=29. The programs are at http://wiki.deductivethinking.com/wiki/Python_Programs_for_Modelling_Infectious_Diseases_book. For those who are interested on modelling and epidemiology, they can take a look at the main site (http://deductivethinking.com) or the main page of the wiki (http://wiki.deductivethinking.com) and follow the epidemiology links. The website is in its beginning, so limited information is uploaded up to now. Thanks for your time and I hope it will be useful for some people, Best Regards, Ilias Soumpasis -------------- next part -------------- An HTML attachment was scrubbed... URL: From haase at msg.ucsf.edu Sat Oct 25 16:21:50 2008 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Sat, 25 Oct 2008 22:21:50 +0200 Subject: [SciPy-user] decimate In-Reply-To: References: <88e473830810250650i9100ef0tf52acc3ec1787b8a@mail.gmail.com> <490376C1.4050006@enthought.com> Message-ID: On Sat, Oct 25, 2008 at 9:53 PM, Bill Baxter wrote: > On Sun, Oct 26, 2008 at 4:42 AM, Travis E. Oliphant > wrote: >> John Hunter wrote: >>> One of the functions I used to use a lot in matlab was decimate, which >>> downsamples data after doing a low pass filter to prevent aliasing. >>> >> >> You can use the function resample in scipy.signal. 
It uses a Fourier >> method for the low-pass filter. >> >> resample(x, len(x)/q) >> >> should give a similar result as decimate(x, q) > > Ooh, wiki fodder! > Pasted this tidbit here: http://www.scipy.org/NumPy_for_Matlab_Users Should it read Sci.signal.resample !? -Sebastian Haase From wbaxter at gmail.com Sat Oct 25 16:27:13 2008 From: wbaxter at gmail.com (Bill Baxter) Date: Sun, 26 Oct 2008 05:27:13 +0900 Subject: [SciPy-user] decimate In-Reply-To: References: <88e473830810250650i9100ef0tf52acc3ec1787b8a@mail.gmail.com> <490376C1.4050006@enthought.com> Message-ID: On Sun, Oct 26, 2008 at 5:21 AM, Sebastian Haase wrote: > On Sat, Oct 25, 2008 at 9:53 PM, Bill Baxter wrote: >> On Sun, Oct 26, 2008 at 4:42 AM, Travis E. Oliphant >> wrote: >>> John Hunter wrote: >>>> One of the functions I used to use a lot in matlab was decimate, which >>>> downsamples data after doing a low pass filter to prevent aliasing. >>>> >>> >>> You can use the function resample in scipy.signal. It uses a Fourier >>> method for the low-pass filter. >>> >>> resample(x, len(x)/q) >>> >>> should give a similar result as decimate(x, q) >> >> Ooh, wiki fodder! >> Pasted this tidbit here: http://www.scipy.org/NumPy_for_Matlab_Users > Should it read Sci.signal.resample !? Oops. I guess so. I missed the mention of the package there. --bb From aisaac at american.edu Sat Oct 25 17:46:25 2008 From: aisaac at american.edu (Alan G Isaac) Date: Sat, 25 Oct 2008 17:46:25 -0400 Subject: [SciPy-user] ANN: Python programs for epidemic modelling In-Reply-To: <3ff92a550810251314m18d0596fxffb0a09658a21260@mail.gmail.com> References: <3ff92a550810251314m18d0596fxffb0a09658a21260@mail.gmail.com> Message-ID: <490393B1.7030904@american.edu> On 10/25/2008 4:14 PM I. Soumpasis apparently wrote: > http://blog.deductivethinking.com/?p=29 This is cool. But I do not see a license. May I hope this is released under the new BSD license, like the packages it depends on? Thanks, Alan Isaac From nono.231 at gmail.com Sat Oct 25 18:07:53 2008 From: nono.231 at gmail.com (I. Soumpasis) Date: Sat, 25 Oct 2008 23:07:53 +0100 Subject: [SciPy-user] [Numpy-discussion] ANN: Python programs for epidemic modelling In-Reply-To: <490393B1.7030904@american.edu> References: <3ff92a550810251314m18d0596fxffb0a09658a21260@mail.gmail.com> <490393B1.7030904@american.edu> Message-ID: <3ff92a550810251507q3696aa15x46f5b96684ff7f3c@mail.gmail.com> 2008/10/25 Alan G Isaac > On 10/25/2008 4:14 PM I. Soumpasis apparently wrote: > > http://blog.deductivethinking.com/?p=29 > > This is cool. > But I do not see a license. > May I hope this is released under the new BSD license, > like the packages it depends on? > > The programs are GPL licensed. More info on the section of copyrights http://wiki.deductivethinking.com/wiki/Deductive_Thinking:Copyrights. I hope it is ok, Ilias -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Sat Oct 25 23:55:43 2008 From: aisaac at american.edu (Alan G Isaac) Date: Sat, 25 Oct 2008 23:55:43 -0400 Subject: [SciPy-user] [Numpy-discussion] ANN: Python programs for epidemic modelling In-Reply-To: <3ff92a550810251507q3696aa15x46f5b96684ff7f3c@mail.gmail.com> References: <3ff92a550810251314m18d0596fxffb0a09658a21260@mail.gmail.com> <490393B1.7030904@american.edu> <3ff92a550810251507q3696aa15x46f5b96684ff7f3c@mail.gmail.com> Message-ID: <4903EA3F.2070504@american.edu> On 10/25/2008 6:07 PM I. Soumpasis wrote: > The programs are GPL licensed. 
More info on the section of copyrights
> http://wiki.deductivethinking.com/wiki/Deductive_Thinking:Copyrights.
> I hope it is ok,

Well, that depends what you mean by "ok". Obviously, the author picks
the license s/he prefers. But a GPL license means that some people will
avoid your code, so you may wish to make sure you have thought through
the licensing of this code carefully. As a point of comparison, note
that all your package dependencies have a new BSD license.

Alan Isaac

From pwdavenport at gmail.com Sat Oct 25 08:40:43 2008
From: pwdavenport at gmail.com (Peter Davenport)
Date: Sat, 25 Oct 2008 12:40:43 +0000 (UTC)
Subject: [SciPy-user] array indices, column names
Message-ID: 

I want to read in a tab separated data table (from excel), with row
and column names, then remove or add sets of columns to this table as
required. The key here is to be able to read in a list of columns that
I want to remove, rather than doing it one at a time.

I'm a python novice who's dabbled in tcl and perl; I'm trying python
and scipy since they apparently handle matrices better.
I can't find a way of doing this in excel with data filters - hence my
attempt at python and scipy.

I hope this post is clear, pls tell me if otherwise.

As far as I can see from the scipy cookbook and documentation, my best
bet seems to be as follows:

To read my table into python from a tab separated txt file and set it
as an array.

A = array([[columnnames, a, b, c, d]
           [row1, 1, 2, 3, 4]
           [row2, 5, 6, 7, 8]
           [row3, 9, 10, 11, 12]])

I can then get specific columns using

b=A[,0:2]

and presumably it won't be too complex to create a new array without
these columns, in a manner such as:

badcolumns = [1:2:4]
c=A[,badcolumns]

My issue is then: how do I turn a list of column names into a list of
column indices and then remove them from an array?

i.e. starting with a list of column names:

badcnames = [a,b,d]

how do I get a list of indices:

badcolumns = [1:2:4]

so that I can then remove them from the array.

Any advice greatly appreciated,

Pete

From tom.denniston at gmail.com Sun Oct 26 11:05:38 2008
From: tom.denniston at gmail.com (Tom Denniston)
Date: Sun, 26 Oct 2008 10:05:38 -0500
Subject: Re: [SciPy-user] array indices, column names
In-Reply-To: 
References: 
Message-ID: 

There are a number of things to say here:

First, if you want to read directly from .xls files you might want to
try pyexcelerator.

As I understand it, what you are trying to do is find what index
corresponds to the column names you care about in the larger list of
all the column names. If this is the case, use numpy.searchsorted.

Finally, you might find it easier to read your info into a recarray
using numpy.fromiter, and then you will be able to access columns by
name.

Hope this helps

--Tom

On Oct 25, 2008, at 7:40 AM, Peter Davenport wrote:
> I want to read in a tab separated data table (from excel), with row
> and column names, then remove or add sets of columns to this table as
> required. The key here is to be able to read in a list of columns that
> I want to remove, rather than doing it one at a time.
>
> I'm a python novice who's dabbled in tcl and perl; I'm trying python
> and scipy since they apparently handle matrices better.
> I can't find a way of doing this in excel with data filters - hence my
> attempt at python and scipy.
>
> I hope this post is clear, pls tell me if otherwise.
>
> As far as I can see from the scipy cookbook and documentation, my best
> bet seems to be as follows:
>
> To read my table into python from a tab separated txt file and set it
> as an array.
>
> A = array([[columnnames, a, b, c, d]
>            [row1, 1, 2, 3, 4]
>            [row2, 5, 6, 7, 8]
>            [row3, 9, 10, 11, 12]])
>
> I can then get specific columns using
>
> b=A[,0:2]
>
> and presumably it won't be too complex to create a new array without
> these columns, in a manner such as:
>
> badcolumns = [1:2:4]
> c=A[,badcolumns]
>
> My issue is then: how do I turn a list of column names into a list of
> column indices and then remove them from an array?
>
> i.e. starting with a list of column names:
>
> badcnames = [a,b,d]
>
> how do I get a list of indices:
>
> badcolumns = [1:2:4]
>
> so that I can then remove them from the array.
>
> Any advice greatly appreciated,
>
> Pete
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From voodoochild2006 at gmail.com Sun Oct 26 12:50:16 2008
From: voodoochild2006 at gmail.com (Ritchie)
Date: Sun, 26 Oct 2008 09:50:16 -0700
Subject: [SciPy-user] scipy 0.6 installation problem on os x 10.5.5
Message-ID: 

I'm trying to install scipy 0.6 on my leopard MBP. After a

python setup.py build

as root, I got the following error message:

compiler: gcc -fno-strict-aliasing -Wno-long-double -no-cpp-precomp
-mno-fused-madd -fno-common -dynamic -DNDEBUG -g -Os -Wall
-Wstrict-prototypes -DMACOSX -I/usr/include/ffi -DENABLE_DTRACE -arch i386
-arch ppc -pipe
creating build/temp.macosx-10.5-i386-2.5/scipy/optimize/Zeros
compile options: '-c'
gcc: scipy/optimize/Zeros/brenth.c
cc1: error: unrecognized command line option "-Wno-long-double"
cc1: error: unrecognized command line option "-Wno-long-double"
lipo: can't open input file: /var/tmp//ccaG7ngI.out (No such file or directory)
cc1: error: unrecognized command line option "-Wno-long-double"
cc1: error: unrecognized command line option "-Wno-long-double"
lipo: can't open input file: /var/tmp//ccaG7ngI.out (No such file or directory)
error: Command "gcc -fno-strict-aliasing -Wno-long-double -no-cpp-precomp
-mno-fused-madd -fno-common -dynamic -DNDEBUG -g -Os -Wall
-Wstrict-prototypes -DMACOSX -I/usr/include/ffi -DENABLE_DTRACE -arch i386
-arch ppc -pipe -c scipy/optimize/Zeros/brenth.c -o
build/temp.macosx-10.5-i386-2.5/scipy/optimize/Zeros/brenth.o" failed with
exit status 1

I googled for a while, could not find any solution. Anybody has any idea?
I'm using xcode 3.1.1, gcc-4.2.1, gfortran-4.2.3.

Another problem is that if I do a sudo python setup.py build as a normal
user, I will have the following error message:

adding 'scipy/sparse/linalg/dsolve/umfpack/umfpack.i' to sources.
swig: scipy/sparse/linalg/dsolve/umfpack/umfpack.i swig -python -o build/src.macosx-10.5-i386-2.5/scipy/sparse/linalg/dsolve/umfpack/_umfpack_wrap.c -outdir build/src.macosx-10.5-i386-2.5/scipy/sparse/linalg/dsolve/umfpack scipy/sparse/linalg/dsolve/umfpack/umfpack.i scipy/sparse/linalg/dsolve/umfpack/umfpack.i:192: Error: Unable to find 'umfpack.h' scipy/sparse/linalg/dsolve/umfpack/umfpack.i:193: Error: Unable to find 'umfpack_solve.h' scipy/sparse/linalg/dsolve/umfpack/umfpack.i:194: Error: Unable to find 'umfpack_defaults.h' scipy/sparse/linalg/dsolve/umfpack/umfpack.i:195: Error: Unable to find 'umfpack_triplet_to_col.h' scipy/sparse/linalg/dsolve/umfpack/umfpack.i:196: Error: Unable to find 'umfpack_col_to_triplet.h' scipy/sparse/linalg/dsolve/umfpack/umfpack.i:197: Error: Unable to find 'umfpack_transpose.h' scipy/sparse/linalg/dsolve/umfpack/umfpack.i:198: Error: Unable to find 'umfpack_scale.h' scipy/sparse/linalg/dsolve/umfpack/umfpack.i:200: Error: Unable to find 'umfpack_report_symbolic.h' scipy/sparse/linalg/dsolve/umfpack/umfpack.i:201: Error: Unable to find 'umfpack_report_numeric.h' scipy/sparse/linalg/dsolve/umfpack/umfpack.i:202: Error: Unable to find 'umfpack_report_info.h' scipy/sparse/linalg/dsolve/umfpack/umfpack.i:203: Error: Unable to find 'umfpack_report_control.h' scipy/sparse/linalg/dsolve/umfpack/umfpack.i:215: Error: Unable to find 'umfpack_symbolic.h' scipy/sparse/linalg/dsolve/umfpack/umfpack.i:216: Error: Unable to find 'umfpack_numeric.h' scipy/sparse/linalg/dsolve/umfpack/umfpack.i:225: Error: Unable to find 'umfpack_free_symbolic.h' scipy/sparse/linalg/dsolve/umfpack/umfpack.i:226: Error: Unable to find 'umfpack_free_numeric.h' scipy/sparse/linalg/dsolve/umfpack/umfpack.i:248: Error: Unable to find 'umfpack_get_lunz.h' scipy/sparse/linalg/dsolve/umfpack/umfpack.i:272: Error: Unable to find 'umfpack_get_numeric.h' error: command 'swig' failed with exit status 1 even though I updated and export my $UMFPACK variable. However root does not have this problem. Thank you -------------- next part -------------- An HTML attachment was scrubbed... URL: From keflavich at gmail.com Sun Oct 26 15:33:40 2008 From: keflavich at gmail.com (Keflavich) Date: Sun, 26 Oct 2008 12:33:40 -0700 (PDT) Subject: [SciPy-user] Event handling, API programming Message-ID: <96169678-2695-4e01-8d66-af24cca835bd@w24g2000prd.googlegroups.com> Hi, I'm trying to make myself a set of widgets for the first time. I've gotten to the point that I can draw rectangles and lines and make them do the right things when re-drawing figures, zooming, etc., but I'm still a little lost on some points, and I haven't found any really good documentation. So, first question: Where should I go for documentation first? I've been using examples, e.g. widgets.py, and the pygtk event handling page, http://www.pygtk.org/pygtk2tutorial/sec-EventHandling.html. This page was a useful explanation of the stuff in widgets.py: http://www.nabble.com/some-API-documentation-td16204232.html. Second question: I have two subplots of different data with the same dimensions. I'd like to zoom in to the same region on both figures when I use zoom-to-box on either one. How can I do this? 
(I'm using tkAgg) Thanks, Adam From robert.kern at gmail.com Sun Oct 26 15:43:36 2008 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 26 Oct 2008 14:43:36 -0500 Subject: [SciPy-user] Event handling, API programming In-Reply-To: <96169678-2695-4e01-8d66-af24cca835bd@w24g2000prd.googlegroups.com> References: <96169678-2695-4e01-8d66-af24cca835bd@w24g2000prd.googlegroups.com> Message-ID: <3d375d730810261243y60c898e8q88fa3cbd8dfc5697@mail.gmail.com> On Sun, Oct 26, 2008 at 14:33, Keflavich wrote: > Hi, I'm trying to make myself a set of widgets for the first time. > I've gotten to the point that I can draw rectangles and lines and make > them do the right things when re-drawing figures, zooming, etc., but > I'm still a little lost on some points, and I haven't found any really > good documentation. > > So, first question: Where should I go for documentation first? > > I've been using examples, e.g. widgets.py, and the pygtk event > handling page, http://www.pygtk.org/pygtk2tutorial/sec-EventHandling.html. > This page was a useful explanation of the stuff in widgets.py: > http://www.nabble.com/some-API-documentation-td16204232.html. > > Second question: I have two subplots of different data with the same > dimensions. I'd like to zoom in to the same region on both figures > when I use zoom-to-box on either one. How can I do this? (I'm using > tkAgg) You will want to ask on the matplotlib list. https://lists.sourceforge.net/lists/listinfo/matplotlib-users -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Sun Oct 26 15:45:30 2008 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 26 Oct 2008 14:45:30 -0500 Subject: [SciPy-user] scipy 0.6 installation problem on os x 10.5.5 In-Reply-To: References: Message-ID: <3d375d730810261245k2e472ec9t94b1527f14a45f26@mail.gmail.com> On Sun, Oct 26, 2008 at 11:50, Ritchie wrote: > I'm trying to install scipy 0.6 on my leopard MBP, after a > python setup.py build > as root, I got the following error message : > compiler: gcc -fno-strict-aliasing -Wno-long-double -no-cpp-precomp > -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -Os -Wall > -Wstrict-prototypes -DMACOSX -I/usr/include/ffi -DENABLE_DTRACE -arch i386 > -arch ppc -pipe > creating build/temp.macosx-10.5-i386-2.5/scipy/optimize/Zeros > compile options: '-c' > gcc: scipy/optimize/Zeros/brenth.c > cc1: error: unrecognized command line option "-Wno-long-double" > cc1: error: unrecognized command line option "-Wno-long-double" > lipo: can't open input file: /var/tmp//ccaG7ngI.out (No such file or > directory) > cc1: error: unrecognized command line option "-Wno-long-double" > cc1: error: unrecognized command line option "-Wno-long-double" > lipo: can't open input file: /var/tmp//ccaG7ngI.out (No such file or > directory) > error: Command "gcc -fno-strict-aliasing -Wno-long-double -no-cpp-precomp > -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -Os -Wall > -Wstrict-prototypes -DMACOSX -I/usr/include/ffi -DENABLE_DTRACE -arch i386 > -arch ppc -pipe -c scipy/optimize/Zeros/brenth.c -o > build/temp.macosx-10.5-i386-2.5/scipy/optimize/Zeros/brenth.o" failed with > exit status 1 > > I googled for a while, could not find any solution. Anybody has any idea? > I'm using xcode 3.1.1, gcc-4.2.1, gfortran-4.2.3. That's your problem. You should use the same version of gcc (4.0.1) that compiled your Python executable. 
It looks like gcc dropped a flag
in the intervening period, but Python tries to use the same flags it
was built with to build extensions. I haven't had any problem mixing
gcc 4.0.1 with gfortran 4.2.x, at least for building scipy.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From voodoochild2006 at gmail.com Sun Oct 26 17:44:32 2008
From: voodoochild2006 at gmail.com (Ritchie Cai)
Date: Sun, 26 Oct 2008 14:44:32 -0700
Subject: Re: [SciPy-user] scipy 0.6 installation problem on os x 10.5.5
In-Reply-To: <3d375d730810261245k2e472ec9t94b1527f14a45f26@mail.gmail.com>
References: <3d375d730810261245k2e472ec9t94b1527f14a45f26@mail.gmail.com>
Message-ID: 

Thanks, I just got it to work. Actually I tried 4.0 before, but I was
just doing a simple easy_install scipy, and it gave some other error
messages. I guess I should have compiled from source like this:

export MACOSX_DEPLOYMENT_TARGET=10.5
cd ../scipy
python setup.py build_src build_clib --fcompiler=gnu95 build_ext --fcompiler=gnu95 build
sudo python setup.py install

On Oct 26, 2008, at 12:45 PM, Robert Kern wrote:

> On Sun, Oct 26, 2008 at 11:50, Ritchie wrote:
>> I'm trying to install scipy 0.6 on my leopard MBP. After a
>> python setup.py build
>> as root, I got the following error message:
>> compiler: gcc -fno-strict-aliasing -Wno-long-double -no-cpp-precomp
>> -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -Os -Wall
>> -Wstrict-prototypes -DMACOSX -I/usr/include/ffi -DENABLE_DTRACE -arch i386
>> -arch ppc -pipe
>> creating build/temp.macosx-10.5-i386-2.5/scipy/optimize/Zeros
>> compile options: '-c'
>> gcc: scipy/optimize/Zeros/brenth.c
>> cc1: error: unrecognized command line option "-Wno-long-double"
>> cc1: error: unrecognized command line option "-Wno-long-double"
>> lipo: can't open input file: /var/tmp//ccaG7ngI.out (No such file or
>> directory)
>> cc1: error: unrecognized command line option "-Wno-long-double"
>> cc1: error: unrecognized command line option "-Wno-long-double"
>> lipo: can't open input file: /var/tmp//ccaG7ngI.out (No such file or
>> directory)
>> error: Command "gcc -fno-strict-aliasing -Wno-long-double -no-cpp-precomp
>> -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -Os -Wall
>> -Wstrict-prototypes -DMACOSX -I/usr/include/ffi -DENABLE_DTRACE -arch i386
>> -arch ppc -pipe -c scipy/optimize/Zeros/brenth.c -o
>> build/temp.macosx-10.5-i386-2.5/scipy/optimize/Zeros/brenth.o" failed
>> with exit status 1
>>
>> I googled for a while, could not find any solution. Anybody has any
>> idea? I'm using xcode 3.1.1, gcc-4.2.1, gfortran-4.2.3.
>
> That's your problem. You should use the same version of gcc (4.0.1)
> that compiled your Python executable. It looks like gcc dropped a flag
> in the intervening period, but Python tries to use the same flags it
> was built with to build extensions. I haven't had any problem mixing
> gcc 4.0.1 with gfortran 4.2.x, at least for building scipy.
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> enigma that is made terrible by our own mad attempt to interpret it as
> though it had an underlying truth."
> -- Umberto Eco
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From markbak at gmail.com Mon Oct 27 11:00:17 2008
From: markbak at gmail.com (Mark Bakker)
Date: Mon, 27 Oct 2008 16:00:17 +0100
Subject: [SciPy-user] what happened to Numpy-Discussion on Google?
Message-ID: <6946b9500810270800v2679ffe2hde1e057b093db44f@mail.gmail.com>

Does anybody know what happened to the Numpy-Discussion group on Google?

It was quite helpful (and very quick with replies).

Thanks, Mark
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From robert.kern at gmail.com Mon Oct 27 12:35:08 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 27 Oct 2008 11:35:08 -0500
Subject: Re: [SciPy-user] what happened to Numpy-Discussion on Google?
In-Reply-To: <6946b9500810270800v2679ffe2hde1e057b093db44f@mail.gmail.com>
References: <6946b9500810270800v2679ffe2hde1e057b093db44f@mail.gmail.com>
Message-ID: <3d375d730810270935u1ea1cc84g7f20aec5880a7b6e@mail.gmail.com>

On Mon, Oct 27, 2008 at 10:00, Mark Bakker wrote:
> Does anybody know what happened to the Numpy-Discussion group on Google?
>
> It was quite helpful (and very quick with replies).

Apparently a number of groups (possibly just ones which are gatewayed
to other mailing lists, like Numpy-Discussion is gatewayed to
numpy-discussion at scipy.org) simply disappeared from Google recently.
You can subscribe to the actual mailing list here:

http://www.scipy.org/Mailing_Lists

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From rcsqtc at iiqab.csic.es Mon Oct 27 13:11:11 2008
From: rcsqtc at iiqab.csic.es (Ramon Crehuet)
Date: Mon, 27 Oct 2008 18:11:11 +0100
Subject: [SciPy-user] Problem with numeric array bounds
Message-ID: <4905F62F.8090407@iiqab.csic.es>

Dear all,
I know Numeric is no longer supported, but we are using an application
(nMoldyn) that uses it, and we are encountering an unusual problem.
I am using the Numeric 23.8.2 version to create arrays, but there is
something wrong when it comes to creating a subarray:
----------------------------------------------------------------------
Python 2.5 (r25:51908, Jan 10 2008, 18:01:52)
[GCC 4.1.2 20061115 (prerelease) (SUSE Linux)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import Numeric
>>> a=Numeric.array([7,5,1])
>>> a
array([7, 5, 1])
>>> a[0:]
array([7, 5])
----------------------------------------------------------------------

The subarray ends at the penultimate element when using ':'. On
the other hand, there is no problem when defining sublists. Could
something be wrong with the python and/or Numeric installation?
Thanks in advance,
Ramon

From robert.kern at gmail.com Mon Oct 27 13:38:18 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 27 Oct 2008 12:38:18 -0500
Subject: Re: [SciPy-user] Problem with numeric array bounds
In-Reply-To: <4905F62F.8090407@iiqab.csic.es>
References: <4905F62F.8090407@iiqab.csic.es>
Message-ID: <3d375d730810271038o6a96d774ibd7d369ea7e26915@mail.gmail.com>

On Mon, Oct 27, 2008 at 12:11, Ramon Crehuet wrote:
> Dear all,
> I know Numeric is no longer supported, but we are using an application
> (nMoldyn) that uses it, and we are encountering an unusual problem.
> I am using the Numeric 23.8.2 version to create arrays, but there is
> something wrong when it comes to creating a subarray:
> ----------------------------------------------------------------------
> Python 2.5 (r25:51908, Jan 10 2008, 18:01:52)
> [GCC 4.1.2 20061115 (prerelease) (SUSE Linux)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
>>>> import Numeric
>>>> a=Numeric.array([7,5,1])
>>>> a
> array([7, 5, 1])
>>>> a[0:]
> array([7, 5])
> ----------------------------------------------------------------------
>
> The subarray ends at the penultimate element when using ':'. On
> the other hand, there is no problem when defining sublists. Could
> something be wrong with the python and/or Numeric installation?

Try Numeric 24.2. On my machine, it gives the correct results:

>>> import Numeric
>>> a = Numeric.array([7,5,1])
>>> a[0:]
array([7, 5, 1])

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco

From robert.kern at gmail.com Mon Oct 27 16:14:59 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 27 Oct 2008 15:14:59 -0500
Subject: Re: [SciPy-user] Some help with chisquare
In-Reply-To: <3d381e170810221055q262d308ah1377a0f0f0fc6c7d@mail.gmail.com>
References: <3d381e170810221055q262d308ah1377a0f0f0fc6c7d@mail.gmail.com>
Message-ID: <3d375d730810271314v271bf2edna316c54025c2e395@mail.gmail.com>

On Wed, Oct 22, 2008 at 12:55, Erik Wickstrom wrote:
> Hi,
>
> I'm trying to port an application to python, and want to use scipy to
> handle the statistics.
>
> The app takes several tests and uses chi-square to determine which has
> the highest success rate with a confidence of 95% or better (critical
> values/degrees of freedom).
>
> For example:
> Test a:
> Total trials = 100
> Total successes = 60
>
> Test b:
> Total trials = 105
> Total successes = 46
>
> Test c:
> Total trials = 98
> Total successes = 52
>
> It then puts the data through some sort of chi-square formula (or so
> the comments say) and produces a chi-square value that can be compared
> against the critical values for 95% confidence.
>
> Trouble is, I'm not sure which of the many scipy chi-square functions
> to use, and what data I need to feed into them....

scipy.stats.chisquare() is probably what you want. Pass it arrays of
the actual and expected frequencies for each Test. It will return to
you a Chi^2 value and the associated p-value. If the p-value is < 0.05,
then the Chi^2 value is greater than the critical value for the 95%
confidence region.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco
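To make the call concrete, a small sketch for Test a above (the 50/50
expected split is only an assumption for illustration; the appropriate
expected frequencies depend on the hypothesis being tested):

    from scipy import stats

    observed = [60, 40]              # successes, failures in 100 trials
    expected = [50, 50]              # assumed null expectation
    chi2, p = stats.chisquare(observed, f_exp=expected)
    if p < 0.05:
        print "exceeds the 95% critical value"

For these numbers chi2 is 4.0 and p is about 0.046, so the difference would
just clear the 95% threshold.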
> I am using the Numeric 23.8.2 version to create arrays, but there is > something wrong when it comes to create a subarray: > ---------------------------------------------------------------------- > Python 2.5 (r25:51908, Jan 10 2008, 18:01:52) > [GCC 4.1.2 20061115 (prerelease) (SUSE Linux)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. >>>> import Numeric >>>> a=Numeric.array([7,5,1]) >>>> a > array([7, 5, 1]) >>>> a[0:] > array([7, 5]) > ---------------------------------------------------------------------- > > The subarray ends at the penultimate element when using ':' On > the other hand, there is no problem when defining sublists. Can be > something wrong with python and/or Numeric installation? Try Numeric 24.2. On my machine, it gives the correct results: >>> import Numeric >>> a = Numeric.array([7,5,1]) >>> a[0:] array([7, 5, 1]) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Mon Oct 27 16:14:59 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 27 Oct 2008 15:14:59 -0500 Subject: [SciPy-user] Some help with chisquare In-Reply-To: <3d381e170810221055q262d308ah1377a0f0f0fc6c7d@mail.gmail.com> References: <3d381e170810221055q262d308ah1377a0f0f0fc6c7d@mail.gmail.com> Message-ID: <3d375d730810271314v271bf2edna316c54025c2e395@mail.gmail.com> On Wed, Oct 22, 2008 at 12:55, Erik Wickstrom wrote: > Hi, > > I'm trying to port an application to python, and want to use scipy to handle > the statistics. > > The app takes several tests and uses chi-square to determines which has the > highest success rate with a confidence of 95% or better (critical > values/degrees of freedom). > > For example: > Test a: > Total trials = 100 > Total successes = 60 > > Test b: > Total trials = 105 > Total successes = 46 > > Test c: > Total trials = 98 > Total successes = 52 > > It then puts the data through some sort of chi-square formula (or so the > comments say) and produces a chi-square value that can be compared against > the critical values for 95% confidence. > > Trouble is, I'm not sure which of the many scipy chi-square functions to > use, and what data I need to feed into them.... scipy.stats.chisquare() is probably what you want. Pass it arrays of the actual and expected frequencies for each Test. It will return to you a Chi^2 value and the associated p-value. If the p-value is < 0.05, then the Chi^2 value is greater than the critical value for the 95% confidence region. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From bsouthey at gmail.com Mon Oct 27 17:12:21 2008 From: bsouthey at gmail.com (Bruce Southey) Date: Mon, 27 Oct 2008 16:12:21 -0500 Subject: [SciPy-user] Some help with chisquare In-Reply-To: <3d375d730810271314v271bf2edna316c54025c2e395@mail.gmail.com> References: <3d381e170810221055q262d308ah1377a0f0f0fc6c7d@mail.gmail.com> <3d375d730810271314v271bf2edna316c54025c2e395@mail.gmail.com> Message-ID: <49062EB5.30004@gmail.com> Robert Kern wrote: > On Wed, Oct 22, 2008 at 12:55, Erik Wickstrom wrote: > >> Hi, >> >> I'm trying to port an application to python, and want to use scipy to handle >> the statistics. 
>> >> The app takes several tests and uses chi-square to determines which has the >> highest success rate with a confidence of 95% or better (critical >> values/degrees of freedom). >> >> For example: >> Test a: >> Total trials = 100 >> Total successes = 60 >> >> Test b: >> Total trials = 105 >> Total successes = 46 >> >> Test c: >> Total trials = 98 >> Total successes = 52 >> >> It then puts the data through some sort of chi-square formula (or so the >> comments say) and produces a chi-square value that can be compared against >> the critical values for 95% confidence. >> >> Trouble is, I'm not sure which of the many scipy chi-square functions to >> use, and what data I need to feed into them.... >> > > scipy.stats.chisquare() is probably what you want. Pass it arrays of > the actual and expected frequencies for each Test. It will return to > you a Chi^2 value and the associated p-value. If the p-value is < > 0.05, then the Chi^2 value is greater than the critical value for the > 95% confidence region. > > I think there is insufficient information here because the description is rather unclear. I think this sounds like the Cochran-Mantel-Haenszel test (http://en.wikipedia.org/wiki/Cochran_test). A formula to calculate the chi-square value and degrees of freedom would be clearer as well as the actual value and p-value returned for the above example. Bruce From 302302 at centrum.cz Mon Oct 27 21:06:04 2008 From: 302302 at centrum.cz (302302) Date: Tue, 28 Oct 2008 02:06:04 +0100 Subject: [SciPy-user] Matplotlib redraw axis In-Reply-To: <200810280139.2758@centrum.cz> References: <200810280132.21533@centrum.cz> <200810280133.21644@centrum.cz> <200810280134.9435@centrum.cz> <200810280135.21986@centrum.cz> <200810280136.19448@centrum.cz> <200810280137.19596@centrum.cz> <200810280138.19689@centrum.cz> <200810280139.2758@centrum.cz> Message-ID: <200810280206.2166@centrum.cz> Hi, I'm dealing with problem how to redraw just label ticks in one certain subplot with in matplotlib. If I change description of axis in the subplot (by .set_yticks() and .set_yticklabels()) I have to redraw whole figure (figure.canvas.draw()) to see the change. But I need to redraw either just the one subplot with axis description or just the descriptions. Is it possible to use there something like "blit" technique? Thanks for any advice. Czenek From jdh2358 at gmail.com Mon Oct 27 21:14:12 2008 From: jdh2358 at gmail.com (John Hunter) Date: Mon, 27 Oct 2008 20:14:12 -0500 Subject: [SciPy-user] Matplotlib redraw axis In-Reply-To: <200810280206.2166@centrum.cz> References: <200810280132.21533@centrum.cz> <200810280133.21644@centrum.cz> <200810280134.9435@centrum.cz> <200810280135.21986@centrum.cz> <200810280136.19448@centrum.cz> <200810280137.19596@centrum.cz> <200810280138.19689@centrum.cz> <200810280139.2758@centrum.cz> <200810280206.2166@centrum.cz> Message-ID: <88e473830810271814x3b3ad789sb9a43e9733ce6cea@mail.gmail.com> On Mon, Oct 27, 2008 at 8:06 PM, 302302 <302302 at centrum.cz> wrote: > Hi, > I'm dealing with problem how to redraw just label ticks in one certain subplot with in matplotlib. If I change description of axis in the subplot (by .set_yticks() and .set_yticklabels()) I have to redraw whole figure (figure.canvas.draw()) to see the change. > But I need to redraw either just the one subplot with axis description or just the descriptions. > Is it possible to use there something like "blit" technique? 
matplotlib questions should be addressed to the matplotlib-users list at:

  https://lists.sourceforge.net/lists/listinfo/matplotlib-users

we'll be happy to answer your questions over there.

JDH

From 302302 at centrum.cz  Mon Oct 27 21:34:41 2008
From: 302302 at centrum.cz (302302)
Date: Tue, 28 Oct 2008 02:34:41 +0100
Subject: [SciPy-user] Matplotlib redraw axis
In-Reply-To: <88e473830810271814x3b3ad789sb9a43e9733ce6cea@mail.gmail.com>
References: <200810280132.21533@centrum.cz> <200810280133.21644@centrum.cz> <200810280206.2166@centrum.cz> <88e473830810271814x3b3ad789sb9a43e9733ce6cea@mail.gmail.com>
Message-ID: <200810280234.1338@centrum.cz>

Ok, thank you for directing me to a more appropriate forum.
Czenek
______________________________________________________________
> Od: jdh2358 at gmail.com
> Komu: "SciPy Users List" <scipy-user at scipy.org>
> Datum: 28.10.2008 02:14
> Předmět: Re: [SciPy-user] Matplotlib redraw axis
>
>On Mon, Oct 27, 2008 at 8:06 PM, 302302 <302302 at centrum.cz> wrote:
>> Hi,
>> I'm dealing with problem how to redraw just label ticks in one certain subplot with in matplotlib. If I change description of axis in the subplot (by .set_yticks() and .set_yticklabels()) I have to redraw whole figure (figure.canvas.draw()) to see the change.
>> But I need to redraw either just the one subplot with axis description or just the descriptions.
>> Is it possible to use there something like "blit" technique?
>
>matplotlib questions should be addressed to the matplotlib-users list at:
>
> https://lists.sourceforge.net/lists/listinfo/matplotlib-users
>
>we'll be happy to answer your questions over there.
>
>JDH
>_______________________________________________
>SciPy-user mailing list
>SciPy-user at scipy.org
>http://projects.scipy.org/mailman/listinfo/scipy-user
>

From mwojc at p.lodz.pl  Tue Oct 28 06:38:32 2008
From: mwojc at p.lodz.pl (Marek Wojciechowski)
Date: Tue, 28 Oct 2008 11:38:32 +0100
Subject: [SciPy-user] combinatorics - all set partitions
Message-ID: 

Hi!
I'm trying to find an algorithm (and possibly the python code) implementing
the problem of finding all possible partitions of the set.

Example of all partitions for the set { 1, 2, 3 } is:
{ {1}, {2}, {3} }
{ {1, 2}, {3} }
{ {1, 3}, {2} }
{ {1}, {2, 3} }
{ {1, 2, 3} }
but I need a general partitioning tool.

I thought maybe someone from the group knows the solution...

Greetings,
--
Marek Wojciechowski

From wesmckinn at gmail.com  Tue Oct 28 09:46:24 2008
From: wesmckinn at gmail.com (Wes McKinney)
Date: Tue, 28 Oct 2008 09:46:24 -0400
Subject: [SciPy-user] combinatorics - all set partitions
In-Reply-To: 
References: 
Message-ID: <6c476c8a0810280646o694b561bgbf54115de4d23b49@mail.gmail.com>

Have you tried looking in Sage (www.sagemath.org)? I think it has
everything you need:

http://www.sagemath.org/doc/ref/module-sage.combinat.set-partition.html

On Tue, Oct 28, 2008 at 6:38 AM, Marek Wojciechowski wrote:

> Hi!
> I'm trying to find an algorithm (and possibly the python code) implementing
> the problem of finding all possible partitions of the set.
>
> Example of all partitions for the set { 1, 2, 3 } is:
> { {1}, {2}, {3} }
> { {1, 2}, {3} }
> { {1, 3}, {2} }
> { {1}, {2, 3} }
> { {1, 2, 3} }
> but I need a general partitioning tool.
>
> I thought maybe someone from the group knows the solution...
> > Greetings, > -- > Marek Wojciechowski > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reckoner at gmail.com Tue Oct 28 12:37:35 2008 From: reckoner at gmail.com (Reckoner) Date: Tue, 28 Oct 2008 12:37:35 -0400 Subject: [SciPy-user] combinatorics - all set partitions In-Reply-To: References: Message-ID: see chooser.py http://code.activestate.com/recipes/302478/ http://mail.python.org/pipermail/python-list/2006-May/383412.html and more at http://www.diigo.com/user/reckoner/combinatorics?tab=250 On 10/28/08, Marek Wojciechowski wrote: > Hi! > I'm trying to find an algorithm (and possibly the python code) implementing > the problem of finding all possible partitions of the set. > > Example of all partitions for the set { 1, 2, 3 } is: > { {1}, {2}, {3} } > { {1, 2}, {3} } > { {1, 3}, {2} } > { {1}, {2, 3} } > { {1, 2, 3} } > but i need general partitioning tool. > > I thought maybe someone from the group knows the solution... > > Greetings, > > -- > Marek Wojciechowski > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From rcsqtc at iiqab.csic.es Tue Oct 28 13:15:19 2008 From: rcsqtc at iiqab.csic.es (Ramon Crehuet) Date: Tue, 28 Oct 2008 18:15:19 +0100 Subject: [SciPy-user] Problem with numeric array bounds In-Reply-To: References: Message-ID: <490748A7.9080303@iiqab.csic.es> Hi, Unfortunately I have to stick to Numeric 23.8 because the application is not fully compatible with Numeric 24. In fact, we have another machine where Numeric 23.8 is working flowlessly so it must be a compilation problem. However we have updated gcc (to 4.1.3) but the problem remains. Any suggestion? Cheers, Ramon > Message: 2 > Date: Mon, 27 Oct 2008 12:38:18 -0500 > From: "Robert Kern" > Subject: Re: [SciPy-user] Problem with numeric array bounds > To: "SciPy Users List" > Message-ID: > <3d375d730810271038o6a96d774ibd7d369ea7e26915 at mail.gmail.com> > Content-Type: text/plain; charset=UTF-8 > > On Mon, Oct 27, 2008 at 12:11, Ramon Crehuet wrote: >> Dear all, >> I know Numeric is no longer supported, but we are using an application >> (nMoldyn) that uses it, and we are encoutering an unusual problem. >> I am using the Numeric 23.8.2 version to create arrays, but there is >> something wrong when it comes to create a subarray: >> ---------------------------------------------------------------------- >> Python 2.5 (r25:51908, Jan 10 2008, 18:01:52) >> [GCC 4.1.2 20061115 (prerelease) (SUSE Linux)] on linux2 >> Type "help", "copyright", "credits" or "license" for more information. >>>>> import Numeric >>>>> a=Numeric.array([7,5,1]) >>>>> a >> array([7, 5, 1]) >>>>> a[0:] >> array([7, 5]) >> ---------------------------------------------------------------------- >> >> The subarray ends at the penultimate element when using ':' On >> the other hand, there is no problem when defining sublists. Can be >> something wrong with python and/or Numeric installation? > > Try Numeric 24.2. 
On my machine, it gives the correct results: > >>>> import Numeric >>>> a = Numeric.array([7,5,1]) >>>> a[0:] > array([7, 5, 1]) > From robert.kern at gmail.com Tue Oct 28 13:58:44 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 28 Oct 2008 12:58:44 -0500 Subject: [SciPy-user] Problem with numeric array bounds In-Reply-To: <490748A7.9080303@iiqab.csic.es> References: <490748A7.9080303@iiqab.csic.es> Message-ID: <3d375d730810281058q46edd660yf8b06e61eb60712b@mail.gmail.com> On Tue, Oct 28, 2008 at 12:15, Ramon Crehuet wrote: > Hi, > Unfortunately I have to stick to Numeric 23.8 because the application is > not fully compatible with Numeric 24. In fact, we have another machine > where Numeric 23.8 is working flowlessly so it must be a compilation > problem. However we have updated gcc (to 4.1.3) but the problem remains. > Any suggestion? Huh. Not really. What kind of machines does it fail and work on, respectively? -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From mwojc at p.lodz.pl Tue Oct 28 15:11:22 2008 From: mwojc at p.lodz.pl (Marek Wojciechowski) Date: Tue, 28 Oct 2008 20:11:22 +0100 Subject: [SciPy-user] combinatorics - all set partitions References: Message-ID: Marek Wojciechowski wrote: > Hi! > I'm trying to find an algorithm (and possibly the python code) > implementing the problem of finding all possible partitions of the set. > > Example of all partitions for the set { 1, 2, 3 } is: > { {1}, {2}, {3} } > { {1, 2}, {3} } > { {1, 3}, {2} } > { {1}, {2, 3} } > { {1, 2, 3} } > but i need general partitioning tool. > > I thought maybe someone from the group knows the solution... > > Greetings, Thanks for the answers.. However I've ended up finally with writing the generator by myself: from copy import deepcopy def addelement(partlist, e): newpartlist = [] for part in partlist: npart = part + [[e]] newpartlist += [npart] for i in xrange(len(part)): npart = deepcopy(part) npart[i] += [e] newpartlist += [npart] return newpartlist def partition(n): if n == 0: return [] partlist = [[[1]]] for i in xrange(2, n+1): partlist = addelement(partlist, i) return partlist print partition(4) This seems to give good results and is enough for me. Thanks again! -- Marek Wojciechowski From argriffi at ncsu.edu Tue Oct 28 15:57:40 2008 From: argriffi at ncsu.edu (alex) Date: Tue, 28 Oct 2008 15:57:40 -0400 Subject: [SciPy-user] combinatorics - all set partitions In-Reply-To: References: Message-ID: <49076EB4.2020309@ncsu.edu> Marek Wojciechowski wrote: > Marek Wojciechowski wrote: > > >> Hi! >> I'm trying to find an algorithm (and possibly the python code) >> implementing the problem of finding all possible partitions of the set. >> >> Example of all partitions for the set { 1, 2, 3 } is: >> { {1}, {2}, {3} } >> { {1, 2}, {3} } >> { {1, 3}, {2} } >> { {1}, {2, 3} } >> { {1, 2, 3} } >> but i need general partitioning tool. >> >> I thought maybe someone from the group knows the solution... >> >> Greetings, >> > > Thanks for the answers.. 
However I've ended up finally with writing the > generator by myself: > > from copy import deepcopy > def addelement(partlist, e): > newpartlist = [] > for part in partlist: > npart = part + [[e]] > newpartlist += [npart] > for i in xrange(len(part)): > npart = deepcopy(part) > npart[i] += [e] > newpartlist += [npart] > return newpartlist > > def partition(n): > if n == 0: return [] > partlist = [[[1]]] > for i in xrange(2, n+1): > partlist = addelement(partlist, i) > return partlist > > print partition(4) > > This seems to give good results and is enough for me. > > Thanks again! > You might want to call it something other than partition to avoid conflicts with the builtin function. From jmiller at stsci.edu Tue Oct 28 16:15:22 2008 From: jmiller at stsci.edu (Todd Miller) Date: Tue, 28 Oct 2008 16:15:22 -0400 Subject: [SciPy-user] ANN: Cyrano v0.1 demo tool Message-ID: <490772DA.7000807@stsci.edu> Cyrano is a simple Tk GUI that helps a speaker interactively demonstrate Python code. Cyrano steps through a demonstration script by "typing" it into a shell window while you focus on talking to the audience. Cyrano's shell window is based on IPython so a speaker can interject spontaneous commands at any time during his demo. Cyrano supports customization of fonts and colors using an RC file. Cyrano is open source software with a BSD-style license. The first public release of Cyrano, v0.1, is downloadable here: http://stsdas.stsci.edu/cyrano Send questions or comments to jmiller at stsci.edu. Cheers, Todd Miller From xavier.gnata at gmail.com Wed Oct 29 08:07:03 2008 From: xavier.gnata at gmail.com (Xavier Gnata) Date: Wed, 29 Oct 2008 13:07:03 +0100 Subject: [SciPy-user] run=2339 errors=0 failures=3 on ibex Message-ID: <490851E7.1020706@gmail.com> Hi, I have compiled scipy svn on ubuntu ibex. 
It looks fine but I have 3 small failures inn scipy.test() : ====================================================================== FAIL: test_lapack.test_all_lapack ---------------------------------------------------------------------- Traceback (most recent call last): File "/var/lib/python-support/python2.5/nose/case.py", line 182, in runTest self.test(*self.arg) File "/usr/lib/python2.5/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 41, in check_syevr assert_array_almost_equal(w,exact_w) File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line 310, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line 295, in assert_array_compare raise AssertionError(msg) AssertionError: Arrays are not almost equal (mismatch 33.3333333333%) x: array([-0.66992444, 0.48769462, 9.18222713], dtype=float32) y: array([-0.66992434, 0.48769389, 9.18223045]) ====================================================================== FAIL: test_lapack.test_all_lapack ---------------------------------------------------------------------- Traceback (most recent call last): File "/var/lib/python-support/python2.5/nose/case.py", line 182, in runTest self.test(*self.arg) File "/usr/lib/python2.5/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 66, in check_syevr_irange assert_array_almost_equal(w,exact_w[rslice]) File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line 310, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line 295, in assert_array_compare raise AssertionError(msg) AssertionError: Arrays are not almost equal (mismatch 33.3333333333%) x: array([-0.66992444, 0.48769462, 9.18222713], dtype=float32) y: array([-0.66992434, 0.48769389, 9.18223045]) ====================================================================== FAIL: test_pbdv (test_basic.TestCephes) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/scipy/special/tests/test_basic.py", line 368, in test_pbdv assert_equal(cephes.pbdv(1,0),(0.0,0.0)) File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line 176, in assert_equal assert_equal(actual[k], desired[k], 'item=%r\n%s' % (k,err_msg), verbose) File "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", line 183, in assert_equal raise AssertionError(msg) AssertionError: Items are not equal: item=1 ACTUAL: 1.0 DESIRED: 0.0 I haven't started to have a deep look into this test code so far. Any ideas? Xavier From nwagner at iam.uni-stuttgart.de Wed Oct 29 08:16:30 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 29 Oct 2008 13:16:30 +0100 Subject: [SciPy-user] run=2339 errors=0 failures=3 on ibex In-Reply-To: <490851E7.1020706@gmail.com> References: <490851E7.1020706@gmail.com> Message-ID: On Wed, 29 Oct 2008 13:07:03 +0100 Xavier Gnata wrote: > Hi, > > I have compiled scipy svn on ubuntu ibex. 
> It looks fine but I have 3 small failures inn >scipy.test() : > > ====================================================================== > > >FAIL: > test_lapack.test_all_lapack > > ---------------------------------------------------------------------- > > > Traceback (most recent call > last): > File "/var/lib/python-support/python2.5/nose/case.py", >line 182, in > runTest > > self.test(*self.arg) > > File > "/usr/lib/python2.5/site-packages/scipy/lib/lapack/tests/esv_tests.py", > line 41, in > check_syevr > > > assert_array_almost_equal(w,exact_w) > > File >"/usr/lib/python2.5/site-packages/numpy/testing/utils.py", >line > 310, in > assert_array_almost_equal > > > header='Arrays are not almost > equal') > File >"/usr/lib/python2.5/site-packages/numpy/testing/utils.py", >line > 295, in > assert_array_compare > > > > raise > AssertionError(msg) > > AssertionError: > > > > Arrays are not almost > equal > > > (mismatch 33.3333333333%) > x: array([-0.66992444, 0.48769462, 9.18222713], >dtype=float32) > y: array([-0.66992434, 0.48769389, 9.18223045]) > > > ====================================================================== >FAIL: test_lapack.test_all_lapack > > ---------------------------------------------------------------------- > Traceback (most recent call last): > > File "/var/lib/python-support/python2.5/nose/case.py", >line 182, in > runTest > > self.test(*self.arg) > > File > "/usr/lib/python2.5/site-packages/scipy/lib/lapack/tests/esv_tests.py", > line 66, in > check_syevr_irange > > > assert_array_almost_equal(w,exact_w[rslice]) > > File >"/usr/lib/python2.5/site-packages/numpy/testing/utils.py", >line > 310, in > assert_array_almost_equal > > > header='Arrays are not almost > equal') > File >"/usr/lib/python2.5/site-packages/numpy/testing/utils.py", >line > 295, in > assert_array_compare > > > > raise > AssertionError(msg) > > AssertionError: > > > > Arrays are not almost > equal > > > (mismatch 33.3333333333%) > x: array([-0.66992444, 0.48769462, 9.18222713], >dtype=float32) > y: array([-0.66992434, 0.48769389, 9.18223045]) > > This is a known failure http://projects.scipy.org/scipy/scipy/ticket/375 However I cannot reproduce the next failure (test_pbdv) Nils ====================================================================== >FAIL: test_pbdv (test_basic.TestCephes) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/lib/python2.5/site-packages/scipy/special/tests/test_basic.py", > line 368, in test_pbdv > assert_equal(cephes.pbdv(1,0),(0.0,0.0)) > File >"/usr/lib/python2.5/site-packages/numpy/testing/utils.py", >line > 176, in assert_equal > assert_equal(actual[k], desired[k], 'item=%r\n%s' % >(k,err_msg), > verbose) > File >"/usr/lib/python2.5/site-packages/numpy/testing/utils.py", >line > 183, in assert_equal > raise AssertionError(msg) > AssertionError: > Items are not equal: > item=1 > > ACTUAL: 1.0 > DESIRED: 0.0 > > I haven't started to have a deep look into this test >code so far. > Any ideas? > > Xavier > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From millman at berkeley.edu Wed Oct 29 10:53:11 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Wed, 29 Oct 2008 07:53:11 -0700 Subject: [SciPy-user] ANN: NumPy 1.2.1 In-Reply-To: References: Message-ID: I'm pleased to announce the release of NumPy 1.2.1. 
NumPy is the fundamental package needed for scientific computing with
Python. It contains:

* a powerful N-dimensional array object
* sophisticated (broadcasting) functions
* basic linear algebra functions
* basic Fourier transforms
* sophisticated random number capabilities
* tools for integrating Fortran code.

Besides its obvious scientific uses, NumPy can also be used as an
efficient multi-dimensional container of generic data. Arbitrary
data-types can be defined. This allows NumPy to seamlessly and speedily
integrate with a wide variety of databases.

This bugfix release comes almost one month after the 1.2.0 release.
Please note that NumPy 1.2.1 requires Python 2.4 or greater.

For information, please see the release notes:
https://sourceforge.net/project/shownotes.php?release_id=636728&group_id=1369

You can download the release from here:
https://sourceforge.net/project/showfiles.php?group_id=1369

Thank you to everybody who contributed to this release.

Enjoy,

--
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/

From xavier.gnata at gmail.com  Wed Oct 29 12:49:35 2008
From: xavier.gnata at gmail.com (Xavier Gnata)
Date: Wed, 29 Oct 2008 17:49:35 +0100
Subject: [SciPy-user] run=2339 errors=0 failures=3 on ibex
In-Reply-To: 
References: <490851E7.1020706@gmail.com> 
Message-ID: <4908941F.6000901@gmail.com>

Ok so there is a simple fix which is fully correct:

-assert_array_almost_equal(x, y, decimal=6, err_msg='', verbose=True)

+assert_array_almost_equal(x, y, decimal=5, err_msg='', verbose=True)

This fix was proposed a long time ago (07/04/07). What is preventing it
from being merged?

Nils is not able to reproduce the last failure. I have to try to
understand why...

Xavier

>> Hi,
>>
>> I have compiled scipy svn on ubuntu ibex. 
>> It looks fine but I have 3 small failures inn >> scipy.test() : >> >> ====================================================================== >> >> >> FAIL: >> test_lapack.test_all_lapack >> >> ---------------------------------------------------------------------- >> >> >> Traceback (most recent call >> last): >> File "/var/lib/python-support/python2.5/nose/case.py", >> line 182, in >> runTest >> >> self.test(*self.arg) >> >> File >> "/usr/lib/python2.5/site-packages/scipy/lib/lapack/tests/esv_tests.py", >> line 41, in >> check_syevr >> >> >> assert_array_almost_equal(w,exact_w) >> >> File >> "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", >> line >> 310, in >> assert_array_almost_equal >> >> >> header='Arrays are not almost >> equal') >> File >> "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", >> line >> 295, in >> assert_array_compare >> >> >> >> raise >> AssertionError(msg) >> >> AssertionError: >> >> >> >> Arrays are not almost >> equal >> >> >> (mismatch 33.3333333333%) >> x: array([-0.66992444, 0.48769462, 9.18222713], >> dtype=float32) >> y: array([-0.66992434, 0.48769389, 9.18223045]) >> >> >> ====================================================================== >> FAIL: test_lapack.test_all_lapack >> >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> >> File "/var/lib/python-support/python2.5/nose/case.py", >> line 182, in >> runTest >> >> self.test(*self.arg) >> >> File >> "/usr/lib/python2.5/site-packages/scipy/lib/lapack/tests/esv_tests.py", >> line 66, in >> check_syevr_irange >> >> >> assert_array_almost_equal(w,exact_w[rslice]) >> >> File >> "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", >> line >> 310, in >> assert_array_almost_equal >> >> >> header='Arrays are not almost >> equal') >> File >> "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", >> line >> 295, in >> assert_array_compare >> >> >> >> raise >> AssertionError(msg) >> >> AssertionError: >> >> >> >> Arrays are not almost >> equal >> >> >> (mismatch 33.3333333333%) >> x: array([-0.66992444, 0.48769462, 9.18222713], >> dtype=float32) >> y: array([-0.66992434, 0.48769389, 9.18223045]) >> >> >> > > This is a known failure > > http://projects.scipy.org/scipy/scipy/ticket/375 > > However I cannot reproduce the next failure (test_pbdv) > > Nils > > ====================================================================== > >> FAIL: test_pbdv (test_basic.TestCephes) >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File >> "/usr/lib/python2.5/site-packages/scipy/special/tests/test_basic.py", >> line 368, in test_pbdv >> assert_equal(cephes.pbdv(1,0),(0.0,0.0)) >> File >> "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", >> line >> 176, in assert_equal >> assert_equal(actual[k], desired[k], 'item=%r\n%s' % >> (k,err_msg), >> verbose) >> File >> "/usr/lib/python2.5/site-packages/numpy/testing/utils.py", >> line >> 183, in assert_equal >> raise AssertionError(msg) >> AssertionError: >> Items are not equal: >> item=1 >> >> ACTUAL: 1.0 >> DESIRED: 0.0 >> >> I haven't started to have a deep look into this test >> code so far. >> Any ideas? 
>> >> Xavier >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From dineshbvadhia at hotmail.com Wed Oct 29 13:42:18 2008 From: dineshbvadhia at hotmail.com (Dinesh B Vadhia) Date: Wed, 29 Oct 2008 10:42:18 -0700 Subject: [SciPy-user] Sparse matrices and memory usage Message-ID: This question is primarily for Nathan: We want size the amount of memory required for our application. Assume a sparse matrix A that consumes N mb of memory. During a matrix-vector multiplication ie. Ax, how much additional memory is used (for temporary arrays and vectors)? Dinesh -------------- next part -------------- An HTML attachment was scrubbed... URL: From wnbell at gmail.com Wed Oct 29 13:53:40 2008 From: wnbell at gmail.com (Nathan Bell) Date: Wed, 29 Oct 2008 13:53:40 -0400 Subject: [SciPy-user] Sparse matrices and memory usage In-Reply-To: References: Message-ID: On Wed, Oct 29, 2008 at 1:42 PM, Dinesh B Vadhia wrote: > This question is primarily for Nathan: > > We want size the amount of memory required for our application. Assume a > sparse matrix A that consumes N mb of memory. During a matrix-vector > multiplication ie. Ax, how much additional memory is used (for temporary > arrays and vectors)? > For CSR and CSC matrices (and a few others), usually only the output vector (y in y=A*x) needs to be allocated. However, other formats that do not provide a matrix-vector method get converted to one that does. Also, if the data types of the matrix and vector are not the same, then one or both are upcast. For instance, multiplying a csr_matrix with dtype=int8 by a float64 vector will cause the data array of the csr_matrix to be upcast to float64 first. In the future (i.e. SciPy 0.8) we might support mixed types, which would avoid the upcast. For now, you should ensure that types match if you're worried about memory consumption. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From rowen at u.washington.edu Wed Oct 29 17:09:32 2008 From: rowen at u.washington.edu (Russell E. Owen) Date: Wed, 29 Oct 2008 14:09:32 -0700 Subject: [SciPy-user] Is the Mac binary of numpy 1.2.1 usable under MacOS X 10.4 or 10.3.9? Message-ID: The Mac binary of numpy 1.2.1 claims to be for MacOS X 10.5. Can it be used with older versions of the operating system? I'm at 10.4 and have a few users still stuck back at 10.3.9. I can't seem to find this info on the numpy web site, nor in the ReadMe of the installer, nor on google. Note: I use the python.org Mac Python 2.5, not the built-in python. -- Russell From wesmckinn at gmail.com Wed Oct 29 18:23:29 2008 From: wesmckinn at gmail.com (Wes McKinney) Date: Wed, 29 Oct 2008 18:23:29 -0400 Subject: [SciPy-user] Enabling NaN-usage in F77 code on Windows Message-ID: <6c476c8a0810291523n2845000bxdc5486656e91967e@mail.gmail.com> I'm having some trouble getting NaN's to return from f77 code running under latest f2py in both g77 and gfortran. I would prefer to use gfortran but whenever I set a result value = NAN, it comes back to Python as 0. Has anyone tackled this issue? I am new to using f2py, have been moving along fine with everything else but ran into this. 
Here is a sample function for the rolling mean of a series having this behavior: SUBROUTINE ROLLMEAN(DATA,WINDOW,N,AVE,T) INTEGER*8 N, WINDOW REAL*8 AVE,DATA(N),T(N) INTEGER*8 J REAL*8 P,S,S1 C CF2PY REAL*8 INTENT(IN, COPY) DATA CF2PY INTEGER*8 INTENT(IN) WINDOW CF2PY INTEGER*8 INTENT(HIDE), DEPEND(DATA), CHECK(N>=2) :: N = SHAPE(DATA, 0) CF2PY REAL*8 INTENT(OUT, COPY), DEPEND(N), DIMENSION(N) :: T CF2PY REAL*8 INTENT(HIDE) :: AVE C S=0. S1=0. DO J=1,WINDOW P=DATA(J) S1=S1+P T(J)=NAN ENDDO AVE=S1/WINDOW T(WINDOW)=AVE DO J=WINDOW+1,N P=DATA(J) S=DATA(J-WINDOW) S1=S1+P-S AVE=S1/WINDOW T(J)=AVE ENDDO RETURN END Thanks, Wes -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Wed Oct 29 18:37:12 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 29 Oct 2008 17:37:12 -0500 Subject: [SciPy-user] Enabling NaN-usage in F77 code on Windows In-Reply-To: <6c476c8a0810291523n2845000bxdc5486656e91967e@mail.gmail.com> References: <6c476c8a0810291523n2845000bxdc5486656e91967e@mail.gmail.com> Message-ID: <3d375d730810291537w6ce0a64byd58a46ec4b96bfae@mail.gmail.com> On Wed, Oct 29, 2008 at 17:23, Wes McKinney wrote: > I'm having some trouble getting NaN's to return from f77 code running under > latest f2py in both g77 and gfortran. I would prefer to use gfortran but > whenever I set a result value = NAN, it comes back to Python as 0. Has > anyone tackled this issue? I am new to using f2py, have been moving along > fine with everything else but ran into this. Is NAN a builtin symbol in FORTRAN-77? I don't think it is. I think what's happening is that the compiler sees you using the name and implicitly creates a variable for it and initializes it to 0. You will have to make your own NAN. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From wesmckinn at gmail.com Wed Oct 29 18:56:07 2008 From: wesmckinn at gmail.com (Wes McKinney) Date: Wed, 29 Oct 2008 18:56:07 -0400 Subject: [SciPy-user] Enabling NaN-usage in F77 code on Windows In-Reply-To: <3d375d730810291537w6ce0a64byd58a46ec4b96bfae@mail.gmail.com> References: <6c476c8a0810291523n2845000bxdc5486656e91967e@mail.gmail.com> <3d375d730810291537w6ce0a64byd58a46ec4b96bfae@mail.gmail.com> Message-ID: <6c476c8a0810291556q61ff147bqa2a76db13b5de3c0@mail.gmail.com> You would be right about the NAN not being a built-in symbol...any suggestions for making a NAN? I don't know if there's a a generic way that will make it back to Python correctly. On Wed, Oct 29, 2008 at 6:37 PM, Robert Kern wrote: > On Wed, Oct 29, 2008 at 17:23, Wes McKinney wrote: > > I'm having some trouble getting NaN's to return from f77 code running > under > > latest f2py in both g77 and gfortran. I would prefer to use gfortran but > > whenever I set a result value = NAN, it comes back to Python as 0. Has > > anyone tackled this issue? I am new to using f2py, have been moving along > > fine with everything else but ran into this. > > Is NAN a builtin symbol in FORTRAN-77? I don't think it is. I think > what's happening is that the compiler sees you using the name and > implicitly creates a variable for it and initializes it to 0. You will > have to make your own NAN. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." 
> -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Wed Oct 29 18:57:38 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 29 Oct 2008 17:57:38 -0500 Subject: [SciPy-user] Enabling NaN-usage in F77 code on Windows In-Reply-To: <6c476c8a0810291556q61ff147bqa2a76db13b5de3c0@mail.gmail.com> References: <6c476c8a0810291523n2845000bxdc5486656e91967e@mail.gmail.com> <3d375d730810291537w6ce0a64byd58a46ec4b96bfae@mail.gmail.com> <6c476c8a0810291556q61ff147bqa2a76db13b5de3c0@mail.gmail.com> Message-ID: <3d375d730810291557j23491aech644e07496a71b73@mail.gmail.com> On Wed, Oct 29, 2008 at 17:56, Wes McKinney wrote: > You would be right about the NAN not being a built-in symbol...any > suggestions for making a NAN? I don't know if there's a a generic way that > will make it back to Python correctly. inf = 1e200 * 1e200 nan = inf * 0 Most likely. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From millman at berkeley.edu Thu Oct 30 00:20:46 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Wed, 29 Oct 2008 21:20:46 -0700 Subject: [SciPy-user] Is the Mac binary of numpy 1.2.1 usable under MacOS X 10.4 or 10.3.9? In-Reply-To: References: Message-ID: On Wed, Oct 29, 2008 at 2:09 PM, Russell E. Owen wrote: > The Mac binary of numpy 1.2.1 claims to be for MacOS X 10.5. Can it be > used with older versions of the operating system? I'm at 10.4 and have a > few users still stuck back at 10.3.9. I built the dmg and believe that it should work with 10.3.9 or greater, but I haven't tested it. If you have a test system, try it and run the tests. Please report back with the results. If it works, I will update the release notes. If not, we can try and track down the problem and build new binaries. > Note: I use the python.org Mac Python 2.5, not the built-in python. Good, the NumPy binary requires that you use the python.org binary. -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From tgrav at mac.com Thu Oct 30 00:36:37 2008 From: tgrav at mac.com (Tommy Grav) Date: Thu, 30 Oct 2008 00:36:37 -0400 Subject: [SciPy-user] Is the Mac binary of numpy 1.2.1 usable under MacOS X 10.4 or 10.3.9? In-Reply-To: References: Message-ID: <7445DA88-4EB3-4934-A41B-82E312B28812@mac.com> On Oct 30, 2008, at 12:20 AM, Jarrod Millman wrote: > > Good, the NumPy binary requires that you use the python.org binary. It also works fine with ActivePython 2.5.2.2 (ActiveState Software Inc.) based on Python 2.5.2 (r252:60911, Mar 27 2008, 17:40:23) [GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin Type "help", "copyright", "credits" or "license" for more information. on a Intel Macbook running 10.5.5. All test pass except one knownfail. 
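For reference, the whole check amounts to the following (numpy.test() in
the 1.2 series needs the nose package installed):

import numpy
print numpy.__version__   # should report 1.2.1
numpy.test()              # runs the nose-based suite and reports
                          # failures, errors and known failures
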
Cheers
Tommy

From david at ar.media.kyoto-u.ac.jp  Thu Oct 30 00:35:56 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Thu, 30 Oct 2008 13:35:56 +0900
Subject: [SciPy-user] run=2339 errors=0 failures=3 on ibex
In-Reply-To: <4908941F.6000901@gmail.com>
References: <490851E7.1020706@gmail.com> <4908941F.6000901@gmail.com>
Message-ID: <490939AC.9030808@ar.media.kyoto-u.ac.jp>

Xavier Gnata wrote:
> Ok so there is a simple fix which is fully correct:
>
> -assert_array_almost_equal(x, y, decimal=6, err_msg='', verbose=True)
>
> +assert_array_almost_equal(x, y, decimal=5, err_msg='', verbose=True)
>

The obvious question is: does that fix the issue, or does it only hide
the problem? 5 decimals is relatively poor, but float32 can only be
expected to have 7 decimals anyway. And assuming the failure is caused
by different BLAS/LAPACK implementations, it is not rare to see
relatively significant differences between two given BLAS/LAPACK builds
(even the same version but different compilers; I have never seen a
BLAS/LAPACK passing all the netlib LAPACK tests, for example: g77 and
gfortran break them at different places).

Did you use g77 before intrepid (intrepid finally uses gfortran as the
default ABI for Fortran)?

cheers,

David

From fredmfp at gmail.com  Thu Oct 30 04:47:32 2008
From: fredmfp at gmail.com (fred)
Date: Thu, 30 Oct 2008 09:47:32 +0100
Subject: [SciPy-user] Enabling NaN-usage in F77 code on Windows
In-Reply-To: <3d375d730810291557j23491aech644e07496a71b73@mail.gmail.com>
References: <6c476c8a0810291523n2845000bxdc5486656e91967e@mail.gmail.com> <3d375d730810291537w6ce0a64byd58a46ec4b96bfae@mail.gmail.com> <6c476c8a0810291556q61ff147bqa2a76db13b5de3c0@mail.gmail.com> <3d375d730810291557j23491aech644e07496a71b73@mail.gmail.com>
Message-ID: <490974A4.3040902@gmail.com>

Robert Kern wrote:
> On Wed, Oct 29, 2008 at 17:56, Wes McKinney wrote:
>> You would be right about the NAN not being a built-in symbol...any
>> suggestions for making a NAN? I don't know if there's a a generic way that
>> will make it back to Python correctly.
>
> inf = 1e200 * 1e200
> nan = inf * 0
>
> Most likely.

I use:

foo = -1
nan = sqrt(foo)

Cheers,

--
Fred

From rowen at u.washington.edu  Thu Oct 30 12:51:36 2008
From: rowen at u.washington.edu (Russell E. Owen)
Date: Thu, 30 Oct 2008 09:51:36 -0700
Subject: [SciPy-user] Is the Mac binary of numpy 1.2.1 usable under MacOS X 10.4 or 10.3.9?
References: 
Message-ID: 

In article ,
 "Jarrod Millman" wrote:

> On Wed, Oct 29, 2008 at 2:09 PM, Russell E. Owen
> wrote:
> > The Mac binary of numpy 1.2.1 claims to be for MacOS X 10.5. Can it be
> > used with older versions of the operating system? I'm at 10.4 and have a
> > few users still stuck back at 10.3.9.
>
> I built the dmg and believe that it should work with 10.3.9 or
> greater, but I haven't tested it. If you have a test system, try it
> and run the tests. Please report back with the results. If it works,
> I will update the release notes. If not, we can try and track down
> the problem and build new binaries.
>
> > Note: I use the python.org Mac Python 2.5, not the built-in python.
>
> Good, the NumPy binary requires that you use the python.org binary.

On MacOS X 10.4.11 (I don't have access to 10.3.9) and after installing
"nose", numpy.test() runs fine for a while, then prints the appended
text and exits python.

Two puzzles:
- What do the initial set of messages mean?
- Why does it crash out of Python if it cannot find a fortran compiler? 
I can install one if necessary, but I'm not very keen to have one around just to run tests, especially since the fortran world is so terribly fragmented right now. This is with Python 2.5.2 from python.org. -- Russell Not implemented: Defined_Binary_Op Not implemented: Defined_Binary_Op Defined_Operator not defined used by Generic_Spec Needs match implementation: Allocate_Stmt Needs match implementation: Associate_Construct Needs match implementation: Backspace_Stmt Needs match implementation: Block_Data Needs match implementation: Case_Construct Needs match implementation: Case_Selector Needs match implementation: Case_Stmt Needs match implementation: Control_Edit_Desc Needs match implementation: Data_Component_Def_Stmt Needs match implementation: Data_Implied_Do Needs match implementation: Data_Stmt Needs match implementation: Data_Stmt_Set Needs match implementation: Deallocate_Stmt Needs match implementation: Derived_Type_Def Needs match implementation: Endfile_Stmt Needs match implementation: Entry_Stmt Needs match implementation: Enum_Def Needs match implementation: Flush_Stmt Needs match implementation: Forall_Construct Needs match implementation: Forall_Header Needs match implementation: Forall_Triplet_Spec Needs match implementation: Format_Item Needs match implementation: Function_Stmt Needs match implementation: Generic_Binding Needs match implementation: Generic_Spec Needs match implementation: Implicit_Part Needs match implementation: Inquire_Stmt Needs match implementation: Interface_Block Needs match implementation: Interface_Body Needs match implementation: Interface_Stmt Needs match implementation: Internal_Subprogram_Part Needs match implementation: Io_Implied_Do Needs match implementation: Io_Implied_Do_Control Needs match implementation: Main_Program Needs match implementation: Module Needs match implementation: Module_Subprogram_Part Needs match implementation: Namelist_Stmt Needs match implementation: Pointer_Assignment_Stmt Needs match implementation: Position_Edit_Desc Needs match implementation: Proc_Attr_Spec Needs match implementation: Proc_Component_Def_Stmt Needs match implementation: Procedure_Declaration_Stmt Needs match implementation: Procedure_Stmt Needs match implementation: Read_Stmt Needs match implementation: Rewind_Stmt Needs match implementation: Select_Type_Construct Needs match implementation: Select_Type_Stmt Needs match implementation: Specific_Binding Needs match implementation: Target_Stmt Needs match implementation: Type_Bound_Procedure_Part Needs match implementation: Where_Construct ----- Nof match implementation needs: 51 out of 224 Nof tests needs: 224 out of 224 Total number of classes: 529 ----- No module named test_derived_scalar_ext , recompiling test_derived_scalar_ext. Parsing '/tmp/tmpYXkcLK.f90'.. 
Generating interface for test_derived_scalar_ext Subroutine: f2py_test_derived_scalar_ext_foo Generating interface for test_derived_scalar_ext.myt: f2py_type_myt_32 Generating interface for Integer: npy_int32 Generating interface for test_derived_scalar_ext Subroutine: f2py_test_derived_scalar_ext_f2pywrap_foo2 setup arguments: ' build_ext --build-temp tmp/ext_temp --build-lib tmp build_clib --build-temp tmp/clib_temp --build-clib tmp/clib_clib' running build_ext running build_src building library "test_derived_scalar_ext_fortran_f2py" sources building library "test_derived_scalar_ext_f_wrappers_f2py" sources building extension "test_derived_scalar_ext" sources running build_clib customize UnixCCompiler customize UnixCCompiler using build_clib customize NAGFCompiler Could not locate executable f95 customize AbsoftFCompiler Could not locate executable f90 Could not locate executable f77 customize IBMFCompiler Could not locate executable xlf90 Could not locate executable xlf customize IntelFCompiler Could not locate executable ifort Could not locate executable ifc customize GnuFCompiler Could not locate executable g77 customize Gnu95FCompiler Could not locate executable gfortran customize G95FCompiler Could not locate executable g95 don't know how to compile Fortran code on platform 'posix' building 'test_derived_scalar_ext_fortran_f2py' library error: library test_derived_scalar_ext_fortran_f2py has Fortran sources but no Fortran compiler found From soren.skou.nielsen at gmail.com Thu Oct 30 14:24:02 2008 From: soren.skou.nielsen at gmail.com (=?ISO-8859-1?Q?S=F8ren_Nielsen?=) Date: Thu, 30 Oct 2008 19:24:02 +0100 Subject: [SciPy-user] Weave extension with lots of input and returns Message-ID: Hi, I'm trying to make a weave python extension to use in my program. I already did it in inline, but that doesn't work with py2exe (needs compiler), so I'm creating extensions instead. Heres the problem. Inline took care of everything, ext_tools obviously doesn't. how do I make a function that accepts like 10 inputs of numpy arrays and different constants, and returns 3 arrays after processing? All examples in the package are very simple and mostly deal with a single Int or a single PyObject.. Should I declare them all as PyObjects? So that: PyObject MyFunction(PyObject* a, PyObject* b, etc etc.., PyObject* k) { C processing code } and how do I then return three arrays? Any help is appreciated! Thanks, Soren -------------- next part -------------- An HTML attachment was scrubbed... URL: From xavier.gnata at gmail.com Thu Oct 30 16:12:57 2008 From: xavier.gnata at gmail.com (Xavier Gnata) Date: Thu, 30 Oct 2008 21:12:57 +0100 Subject: [SciPy-user] run=2339 errors=0 failures=3 on ibex In-Reply-To: <490939AC.9030808@ar.media.kyoto-u.ac.jp> References: <490851E7.1020706@gmail.com> <4908941F.6000901@gmail.com> <490939AC.9030808@ar.media.kyoto-u.ac.jp> Message-ID: <490A1549.2020109@gmail.com> David Cournapeau wrote: > Xavier Gnata wrote: > >> Ok so there is a simple fix which is fully correct: >> >> -assert_array_almost_equal(x, y, decimal=6, err_msg='', verbose=True) >> >> +assert_array_almost_equal(x, y, decimal=5, err_msg='', verbose=True) >> >> > > The obvious question is does that fix the issue or does it only hide the > problem ? 5 decimals is relatively poor, but float32 can only be > expected to have 7 decimals anyway. 
And assuming the failure is caused > by different BLAS/LAPACK implementations, it is not rare to see > relatively significant differences between two given BLAS/LAPACK (even > same version but different compilers; I have never seen a BLAS/LAPACK > passing all the netlib LAPACK tests, for example: g77 and gfortran break > them at different places, for example). > > Ok my fault. It was not a good idea to "fix" it that way. So first I can try to implement the same computation in C/whatever using float32 and see how many correct digits we get. If it "breaks" on some systems, IMHO it means that the check is too stringent. > Did you use g77 before intrepid (intrepid finally use gfortran as the > default ABI for fortran) ? > > Full gfortran. No g77. So what should be the way forward? Should we involve gfortran/lapack guys asking for how many digits we should get? Should we move from an error to a warning (because the result may only be not as accurate as expected). As you know, I really would like to see scipy compiling and running the test suite smoothly :) It is very important to "sell" scipy to people (people I know at least ;)). Cheers, Xavier > cheers, > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From cdcasey at gmail.com Thu Oct 30 19:48:37 2008 From: cdcasey at gmail.com (chris) Date: Thu, 30 Oct 2008 18:48:37 -0500 Subject: [SciPy-user] scipy 0.6.0 build failing In-Reply-To: <5b8d13220810162216x96b9c8ek3648fed2724fe999@mail.gmail.com> References: <5b8d13220810162216x96b9c8ek3648fed2724fe999@mail.gmail.com> Message-ID: I'm using g77 3.2.3 on RedHat 3. Is there a particular version of g77/gcc I should updrage to? A fresh compile of 3.2.3 gave the same error. -Chris On Fri, Oct 17, 2008 at 12:16 AM, David Cournapeau wrote: > On Tue, Oct 14, 2008 at 3:09 AM, chris wrote: >> I'm trying to build scipy 0.6.0 on RHEL 3, and am getting the following failure: >> >> >> g77:f77: scipy/fftpack/dfftpack/zfftf1.f >> /tmp/cceGs6VT.s: Assembler messages: >> /tmp/cceGs6VT.s:598: Error: suffix or operands invalid for `movd' > > Your version of g77 has a bug, and generate invalid machine > instructions when the -msse option is used. You should update your g77 > if possible, > > David > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From cdcasey at gmail.com Thu Oct 30 20:33:54 2008 From: cdcasey at gmail.com (chris) Date: Thu, 30 Oct 2008 19:33:54 -0500 Subject: [SciPy-user] scipy 0.6.0 build failing In-Reply-To: References: <5b8d13220810162216x96b9c8ek3648fed2724fe999@mail.gmail.com> Message-ID: And maybe more importantly, will I have problems if I use g77 3.4.x and gcc 3.2.x? On Thu, Oct 30, 2008 at 6:48 PM, chris wrote: > I'm using g77 3.2.3 on RedHat 3. Is there a particular version of > g77/gcc I should updrage to? A fresh compile of 3.2.3 gave the same > error. > > -Chris > > On Fri, Oct 17, 2008 at 12:16 AM, David Cournapeau wrote: >> On Tue, Oct 14, 2008 at 3:09 AM, chris wrote: >>> I'm trying to build scipy 0.6.0 on RHEL 3, and am getting the following failure: >>> >>> >>> g77:f77: scipy/fftpack/dfftpack/zfftf1.f >>> /tmp/cceGs6VT.s: Assembler messages: >>> /tmp/cceGs6VT.s:598: Error: suffix or operands invalid for `movd' >> >> Your version of g77 has a bug, and generate invalid machine >> instructions when the -msse option is used. 
You should update your g77 >> if possible, >> >> David >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> > From tjhnson at gmail.com Thu Oct 30 20:43:45 2008 From: tjhnson at gmail.com (T J) Date: Thu, 30 Oct 2008 17:43:45 -0700 Subject: [SciPy-user] swig c++ class storing pointer to array Message-ID: Hi, I'm new to using swig and am struggling with arrays and python. I have created a C++ class which stores a pointer to an array as a member. Using the numpy.i interface file, I would like to be able to pass in arrays from Python (without copying) to the constructor. The end goal is to be able to pass in an numpy array, use the C++ class to do some operations on the array, create some new arrays, and then be able to access those arrays (as they are updated) from within Python. To get started, I've created a very basic file with implements the storing of the arrays, and here is how I used it: >>> import testme >>> x = testme.MyClass([1,2,3]) >>> x.printme() 1.5865e-268 1.08557e-269 1.08557e-269 >>> a = numpy.array([1,2,3.]) >>> x = testme.MyClass(a) >>> x.printme() 4.18205e-62 3.60756e-313 1.60847e-268 So I am getting uninitialized values. When I look at _wrap_new_MyClass in the wrapper file created by swig, I can insert: result->printme() before "return resultobj", and the proper values are printed. So I'm not understanding why this isn't working...or what pointer is being stored in the class. I've attached all the necessary files. Curiously, the following does seem to work: >>> x = numpy.array([.5,.3,2.]) >>> y = numpy.array(x) >>> z = testme.MyClass(y) >>> z.printme() 0.5 0.3 2 Can someone explain this? -------------- next part -------------- A non-text attachment was scrubbed... Name: testme.i Type: application/octet-stream Size: 224 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: numpy.i Type: application/octet-stream Size: 56154 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: testme.h Type: text/x-chdr Size: 163 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: testme.cpp Type: text/x-c++src Size: 249 bytes Desc: not available URL: From david at ar.media.kyoto-u.ac.jp Fri Oct 31 00:28:43 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 31 Oct 2008 13:28:43 +0900 Subject: [SciPy-user] scipy 0.6.0 build failing In-Reply-To: References: <5b8d13220810162216x96b9c8ek3648fed2724fe999@mail.gmail.com> Message-ID: <490A897B.9050001@ar.media.kyoto-u.ac.jp> chris wrote: > I'm using g77 3.2.3 on RedHat 3. Is there a particular version of > g77/gcc I should updrage to? A fresh compile of 3.2.3 gave the same > error. > Yep, that's expected: the upstream version of 3.2.3 is the culprit if that was not clear. The exact problem is the use of the -msse2 flag; I believe that if you remove this flag from the build, it should build correctly. That would certainly be easier to try than compiling your own compiler :) Now, unfortunately, there is no easy way to modify those compiler flags without modifying the source code. 
You could remove lines 268-270 (the exact numbers may be difference depending on the version of numpy you are using), in the file numpy/distutils/fcompiler/gnu.py: if gnu_ver > '3.2.2': if cpu.has_sse2(): opt.append('-msse2') if cpu.has_sse(): opt.append('-msse') After the modification, remove the build directory of numpy entirely, and start the build from scratch. cheers, David From kartita at gmail.com Fri Oct 31 00:52:20 2008 From: kartita at gmail.com (Kimberly Artita) Date: Thu, 30 Oct 2008 23:52:20 -0500 Subject: [SciPy-user] f2py "Segmentation fault"-revisited, please help Message-ID: Hi, Can someone please tell me why I keep getting a segmentation fault? my fortran script (gfortran_test.f90): subroutine readin_test implicit none character(len=4) :: title (60) character (len=13) :: bigsub, sbsout, rchout, rsvout, lwqout, wtrout open (2,file="gfortran.txt", delim='none') print *, "title" read (2,5100) title print *, title read (2,5000) bigsub, sbsout, rchout, rsvout, lwqout, wtrout print *, "bigsub, sbsout, rchout, rsvout, lwqout, wtrout" print *, bigsub, sbsout, rchout, rsvout, lwqout, wtrout close(2) 5100 format (20a4) 5000 format (6a) end subroutine readin_test my python script (gfortran_test.py): import gfortran_test gfortran_test.readin_test() my text file (gfortran.txt): General Input/Output section (file.cio): Thu Mar 13 17:32:19 2008 AVSWAT2000 - SWAT interface MDL basins.bsb basins.sbs basins.rch basins.rsv basins.lqo basins.wtr using this version of gfortran: i686-pc-linux-gnu-4.1.2 with either numpy-1.0.4-r2 or numpy-1.2.0 I can compile gfortran_test.f90 as a standalone program and it works! BUT, when I call it as a subroutine from python using f2py, it fails! I type: f2py --fcompiler=gfortran -c -m gfortran_test gfortran_test.f90 Why?????? -- Kimberly S. Artita PhD Intern, CDM Graduate Student, Engineering Science Southern Illinois University Carbondale Carbondale, Illinois 62901-6603 (618)-528-0349 e-mail: kartita at gmail.com, kartita at siu.edu web: http://civil.engr.siu.edu/GraduateStudents/artita/index.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Fri Oct 31 00:52:59 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 31 Oct 2008 13:52:59 +0900 Subject: [SciPy-user] f2py "Segmentation fault"-revisited, please help In-Reply-To: References: Message-ID: <490A8F2B.9070403@ar.media.kyoto-u.ac.jp> Kimberly Artita wrote: > Hi, > > Can someone please tell me why I keep getting a segmentation fault? > > my fortran script (gfortran_test.f90): > subroutine readin_test > > implicit none > > character(len=4) :: title (60) > character (len=13) :: bigsub, sbsout, rchout, rsvout, lwqout, wtrout > open (2,file="gfortran.txt", delim='none') > print *, "title" > read (2,5100) title > print *, title > read (2,5000) bigsub, sbsout, rchout, rsvout, lwqout, wtrout > > print *, "bigsub, sbsout, rchout, rsvout, lwqout, wtrout" > print *, bigsub, sbsout, rchout, rsvout, lwqout, wtrout > close(2) > > 5100 format (20a4) > 5000 format (6a) > I don't know about the exact problem, but C/Fortran mixing is already quite error prone and has many warts, and file IO is even worse (because the C runtime and the fortran runtimes must cooperate in the same process, and they generally don't cooperate well). You should avoid it if you can. > I can compile gfortran_test.f90 as a standalone program and it works! Can you try with a main written in C (numpy is in C) ? 
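In the meantime, a workaround in line with the "avoid it if you can"
advice above is to keep all of the file IO on the Python side and hand
only the parsed strings to Fortran. A minimal, untested sketch: it
assumes the gfortran.txt layout quoted above, and the variant of
readin_test that takes the strings as dummy arguments instead of
opening the file itself is hypothetical:

# Do the file IO in Python so the C and Fortran runtimes never have to
# share a file unit.
f = open("gfortran.txt")
lines = f.read().splitlines()
f.close()

title = lines[0]               # "General Input/Output section (file.cio): ..."
file_names = lines[4].split()  # ['basins.bsb', 'basins.sbs', ..., 'basins.wtr']
                               # adjust the index if the blank lines differ

print title
print file_names

# Hypothetical: pass the parsed strings into the refactored routine.
# gfortran_test.readin_test2(title, file_names)
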
I would not be surprised if that's the issue, cheers, David From markbak at gmail.com Fri Oct 31 05:49:08 2008 From: markbak at gmail.com (Mark Bakker) Date: Fri, 31 Oct 2008 10:49:08 +0100 Subject: [SciPy-user] overlapping polygons Message-ID: <6946b9500810310249i10b8d62fy3ace317f2921391b@mail.gmail.com> Hello list - I am now using the shapely Python package to compute the intersection or difference of two closed polygons. Does scipy (or numpy) have similar abilities? Any other suggestions? Not that I don't like shapely, it just means one additional dependency for my code. Thanks, Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From gnurser at googlemail.com Fri Oct 31 06:20:07 2008 From: gnurser at googlemail.com (George Nurser) Date: Fri, 31 Oct 2008 10:20:07 +0000 Subject: [SciPy-user] f2py "Segmentation fault"-revisited, please help In-Reply-To: References: Message-ID: <1d1e6ea70810310320y108d0edr1af274637f7ae774@mail.gmail.com> Hi, 2008/10/31 Kimberly Artita : > Hi, > > Can someone please tell me why I keep getting a segmentation fault? [cut] You need to compile with fcompiler=gnu95 > > I type: f2py --fcompiler=gfortran -c -m gfortran_test gfortran_test.f90 Do f2py --fcompiler=gnu95 -c -m gfortran_test gfortran_test.f90 It worked fine for me (gfortran 4.3.2, Numpy 1.3.0.dev5867, Mac OS X) HTH, George Nurser. From stefan at sun.ac.za Fri Oct 31 08:18:24 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Fri, 31 Oct 2008 14:18:24 +0200 Subject: [SciPy-user] overlapping polygons In-Reply-To: <6946b9500810310249i10b8d62fy3ace317f2921391b@mail.gmail.com> References: <6946b9500810310249i10b8d62fy3ace317f2921391b@mail.gmail.com> Message-ID: <9457e7c80810310518h7a57e425j9ee9beb80ba64b97@mail.gmail.com> Hi Mark These capabilities are not available in SciPy or NumPy. I have BSD polygon clipping code for the case where you intersect one rectangle and another convex polygon. If you need to calculate the intersection between two arbitrary polygons, you can also use Joerg Raedler's gpc wrapper (it's a very light-weight dependency): http://bazaar.launchpad.net/%7Estefanv/supreme/main/files/180?file_id=Polygon1.16-20060502184912-b1a90b5a4870e7ba That's one ugly URL. Sorry. Cheers St?fan 2008/10/31 Mark Bakker : > Hello list - > > I am now using the shapely Python package to compute the intersection or > difference of two closed polygons. > > Does scipy (or numpy) have similar abilities? > > Any other suggestions? Not that I don't like shapely, it just means one > additional dependency for my code. > > Thanks, Mark From soren.skou.nielsen at gmail.com Fri Oct 31 10:43:14 2008 From: soren.skou.nielsen at gmail.com (=?ISO-8859-1?Q?S=F8ren_Nielsen?=) Date: Fri, 31 Oct 2008 15:43:14 +0100 Subject: [SciPy-user] Blitz with ext_tools Message-ID: Hi, Is there a way I can use blitz with ext_tools? so that I can refer to arrays like a(x,y) in the c code?? Soren -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdcasey at gmail.com Fri Oct 31 12:40:24 2008 From: cdcasey at gmail.com (chris) Date: Fri, 31 Oct 2008 11:40:24 -0500 Subject: [SciPy-user] scipy 0.6.0 build failing In-Reply-To: <490A897B.9050001@ar.media.kyoto-u.ac.jp> References: <5b8d13220810162216x96b9c8ek3648fed2724fe999@mail.gmail.com> <490A897B.9050001@ar.media.kyoto-u.ac.jp> Message-ID: On Thu, Oct 30, 2008 at 11:28 PM, David Cournapeau wrote: > chris wrote: >> I'm using g77 3.2.3 on RedHat 3. 
Is there a particular version of >> g77/gcc I should updrage to? A fresh compile of 3.2.3 gave the same >> error. >> > > Yep, that's expected: the upstream version of 3.2.3 is the culprit if > that was not clear. The exact problem is the use of the -msse2 flag; I > believe that if you remove this flag from the build, it should build > correctly. That would certainly be easier to try than compiling your own > compiler :) > > Now, unfortunately, there is no easy way to modify those compiler flags > without modifying the source code. You could remove lines 268-270 (the > exact numbers may be difference depending on the version of numpy you > are using), in the file numpy/distutils/fcompiler/gnu.py: > > if gnu_ver > '3.2.2': > if cpu.has_sse2(): opt.append('-msse2') > if cpu.has_sse(): opt.append('-msse') > > After the modification, remove the build directory of numpy entirely, > and start the build from scratch. > If only the -msse2 flag is causing the problem, would it be sufficient to remove line 269? Or is there a reason they both have to be removed? -Chris From kwmsmith at gmail.com Fri Oct 31 13:09:59 2008 From: kwmsmith at gmail.com (Kurt Smith) Date: Fri, 31 Oct 2008 12:09:59 -0500 Subject: [SciPy-user] Explanation of different edge modes in scipy.ndimage Message-ID: Hello, I'm doing some gaussian filtering of periodic 2D arrays using scipy.ndimage.gaussian_filter. There is a 'mode' argument that is set to 'reflect' by default. In _ni_support.py:34 there is a conversion function, '_extend_mode_to_code' that gives the different modes available. For periodic data I believe I should use 'wrap', but I'm interested to know what the other modes mean, esp the difference between 'reflect' and 'mirror'. For the record, the modes defined are 'nearest', 'wrap', 'reflect', 'mirror', and 'constant'. For future reference, is there a place where these arguments are documented? Thanks, Kurt -------------- next part -------------- An HTML attachment was scrubbed... URL: From w.kejia at gmail.com Fri Oct 31 13:21:38 2008 From: w.kejia at gmail.com (Wu, Kejia) Date: Fri, 31 Oct 2008 10:21:38 -0700 Subject: [SciPy-user] About Random Number Generation Message-ID: <1225473698.7737.2.camel@localhost> Hi all, I tried the example code here: http://numpy.scipy.org/numpydoc/numpy-20.html#71863 But failed: -------------------------------------- rng.py, line 5, in import RNG ImportError: No module named RNG -------------------------------------- Any suggestion? Thanks at first. Also, can any body tell me whether the random number algorithm in RNG package is a pseudorandom one or a real-random one? And is there an available implementation for Monte Carlo method in NumPy? Thanks a lot for any reply. Regards, Kejia From kartita at gmail.com Fri Oct 31 13:34:53 2008 From: kartita at gmail.com (Kimberly Artita) Date: Fri, 31 Oct 2008 12:34:53 -0500 Subject: [SciPy-user] f2py "Segmentation fault"-revisited, please help In-Reply-To: References: Message-ID: Tried it on a different machine (numpy-1.2.0 and gcc-4.3.2 on linux) Either way (--fcompiler=gnu95 or gfortran) gives a segfault The output says "General", then segfaults. It is reading the space as a delimiter, even though I specify delim='none' My laptop and the desktop used above run gentoo. A third desktop using ubuntu with gcc-4.3.2 and numpy-1.2.0 works fine. What gives? Kim On Fri, Oct 31, 2008 at 5:20 AM, George Nurser wrote: > Hi, > > 2008/10/31 Kimberly Artita : > > Hi, > > > > Can someone please tell me why I keep getting a segmentation fault? 
> [cut] > > You need to compile with fcompiler=gnu95 > > > > > I type: f2py --fcompiler=gfortran -c -m gfortran_test gfortran_test.f90 > > Do > f2py --fcompiler=gnu95 -c -m gfortran_test gfortran_test.f90 > > It worked fine for me (gfortran 4.3.2, Numpy 1.3.0.dev5867, Mac OS X) > > HTH, George Nurser. > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > On Thu, Oct 30, 2008 at 11:52 PM, Kimberly Artita wrote: > Hi, > > Can someone please tell me why I keep getting a segmentation fault? > > my fortran script (gfortran_test.f90): > subroutine readin_test > > implicit none > > character(len=4) :: title (60) > character (len=13) :: bigsub, sbsout, rchout, rsvout, lwqout, wtrout > open (2,file="gfortran.txt", delim='none') > print *, "title" > read (2,5100) title > print *, title > read (2,5000) bigsub, sbsout, rchout, rsvout, lwqout, wtrout > > print *, "bigsub, sbsout, rchout, rsvout, lwqout, wtrout" > print *, bigsub, sbsout, rchout, rsvout, lwqout, wtrout > close(2) > > 5100 format (20a4) > 5000 format (6a) > > end subroutine readin_test > > my python script (gfortran_test.py): > import gfortran_test > > gfortran_test.readin_test() > > my text file (gfortran.txt): > General Input/Output section (file.cio): Thu Mar 13 17:32:19 2008 > AVSWAT2000 - SWAT interface MDL > > > basins.bsb basins.sbs basins.rch basins.rsv basins.lqo > basins.wtr > > > using this version of gfortran: i686-pc-linux-gnu-4.1.2 with either > numpy-1.0.4-r2 or numpy-1.2.0 > > I can compile gfortran_test.f90 as a standalone program and it works! > > BUT, when I call it as a subroutine from python using f2py, it fails! > I type: f2py --fcompiler=gfortran -c -m gfortran_test gfortran_test.f90 > > > Why?????? > > -- > Kimberly S. Artita > PhD Intern, CDM > Graduate Student, Engineering Science > Southern Illinois University Carbondale > Carbondale, Illinois 62901-6603 > (618)-528-0349 > e-mail: kartita at gmail.com, kartita at siu.edu > web: http://civil.engr.siu.edu/GraduateStudents/artita/index.html > -- Kimberly S. Artita PhD Intern, CDM Graduate Student, Engineering Science Southern Illinois University Carbondale Carbondale, Illinois 62901-6603 (618)-528-0349 e-mail: kartita at gmail.com, kartita at siu.edu web: http://civil.engr.siu.edu/GraduateStudents/artita/index.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From dwf at cs.toronto.edu Fri Oct 31 14:31:55 2008 From: dwf at cs.toronto.edu (David Warde-Farley) Date: Fri, 31 Oct 2008 14:31:55 -0400 Subject: [SciPy-user] About Random Number Generation In-Reply-To: <1225473698.7737.2.camel@localhost> References: <1225473698.7737.2.camel@localhost> Message-ID: <2A0F5F54-E5E3-40CB-B25C-DE93EB26B872@cs.toronto.edu> On 31-Oct-08, at 1:21 PM, Wu, Kejia wrote: > Also, can any body tell me whether the random number algorithm in RNG > package is a pseudorandom one or a real-random one? You can't generate real-random numbers in software alone. Real random number generation relies on sampling some random physical process. Google "real random number" and you'll find a number of online sources of genuine random numbers, including random.org (which uses atmospheric noise) and hotbits (which uses radioactive decay). > And is there an > available implementation for Monte Carlo method in NumPy? Try http://code.google.com/p/pymc/ David
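One more pointer on the first question: the RNG module shipped with the
old Numeric package, which is why the import fails against a plain numpy
install; its role is now played by numpy.random, which is also a
pseudorandom generator (the Mersenne Twister). For simple Monte Carlo
work numpy.random alone often suffices; a small sketch (the seed and
sample size here are arbitrary):

import numpy as np

np.random.seed(12345)    # make the pseudorandom stream reproducible

# Toy Monte Carlo calculation: estimate pi by sampling the unit square
# and counting the points that fall inside the quarter circle.
n = 1000000
x = np.random.uniform(0.0, 1.0, n)
y = np.random.uniform(0.0, 1.0, n)
inside = (x * x + y * y) < 1.0
print 4.0 * inside.mean()    # prints something close to 3.14159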