From schofield at ftw.at Sun Jan 1 21:36:08 2006 From: schofield at ftw.at (Ed Schofield) Date: Mon, 2 Jan 2006 02:36:08 +0000 Subject: [SciPy-user] default dtype In-Reply-To: <43B586F9.1020904@ieee.org> References: <43B586F9.1020904@ieee.org> Message-ID: <9E95E32C-373A-4830-A8B3-FF16605B7181@ftw.at> On 30/12/2005, at 7:14 PM, Travis Oliphant wrote: > Alan G Isaac wrote: > >> Is an integer data type the obvious >> default for 'empty'? I expected float. >> >> >> > This question comes up occasionally. The reason for int is largely > historical --- that's what was decided long ago when Numeric came out. > Changing this in some places would break a lot of code, I'm afraid. > And the default for empty is done for consistency. I felt it > better to > have one default rather than many. > > The default can be changed in one place in the C-code if we did decide > to change it. Now's the time because version 1.0 is approaching > in the > next couple of months. Version 0.9 will be the first-of-the-year > release. +5 on changing the default to float. I think we'd look back on this decision in several years as difficult but right. Here are some ideas on how we could ease the transition: (1) We could provide new functions intzeros(), intones(), and intempty () with the same behaviour as the current functions. That is, integer types would be the default, but this could be overridden by a dtype keyword argument. Then converting any old Numeric / numarray code would just require another global string substitution in convertcode.py. (2) We could provide two sets of functions, intzeros() etc. and floatzeros() etc., and remove the default interpretation altogether from the standard zeros() functions. This is not ideal long term, but could be a useful temporary measure during a transition for shaking out bugs from the scicore and scipy trees. (3) The default type could be chosen by the user as a package-level global variable. I think this would be the best solution. Then the old integer default could be turned on with one line of Python code. I suppose that Python functions using this default, given the static evaluation of default argument values, would need the "dtype=None" idiom in function headers followed by dtype=global_dtype in the function body. -- Ed > -Travis > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user From agn at noc.soton.ac.uk Mon Jan 2 07:37:24 2006 From: agn at noc.soton.ac.uk (George Nurser) Date: Mon, 2 Jan 2006 12:37:24 +0000 Subject: [SciPy-user] Import probs with rev 1735 svn scipy core In-Reply-To: <43B5BAD5.4010107@ieee.org> References: <86522b1a0512291603l44c5873mcd070fe312da346b@mail.gmail.com> <20051230011947.51030.qmail@web25806.mail.ukl.yahoo.com> <20051230172434.GA93502@cutter.rexx.com> <429CBCA7-E71A-41B1-8A63-497130A7EEFC@noc.soton.ac.uk> <43B5BAD5.4010107@ieee.org> Message-ID: On 30 Dec 2005, at 22:55, Travis Oliphant wrote: > George Nurser wrote: > >> Just tried to get rev 1735 svn scipy core going on an Opteron. >> >> Non root install of Scipy, site.cfg to use acml libraries for lapack >> and blas. >> LD_LIBRARY_PATH includes acml directory. >> >> Otherwise default. >> >> python setup.py install --home=... seems to work fine. 
>> >> But when I try >> python >>>>> import scipy >> Importing ScipyTest of testing to scipy >> Failed to import base >> cannot import name ccompiler >> Importing fft of corefft to scipy >> Importing ifft of corefft to scipy >> Failed to import random >> 'module' object has no attribute 'dtypedescr' >> >> Puzzled about ccompiler message, as ccompile.py seems to be in >> distutils directory, not base subdirectory. >> >> > > Is this a fresh install on the system or are their older > installations. Enough has changed in the package loading that I > would > delete any old install and the build directory (if you had a previous > check-out) and install again. > > Otherwise, I'm not sure what is going on. > > -Travis Problem was that I had prepended the scipy install directory onto my sys.path. Hence it was trying scipy.distutils routines before the python distutils routines. Putting the scipy install directory at the end of sys.path solved the problem. Works fine now. No test failures. -George Nurser. From schofield at ftw.at Mon Jan 2 08:53:17 2006 From: schofield at ftw.at (Ed Schofield) Date: Mon, 02 Jan 2006 13:53:17 +0000 Subject: [SciPy-user] [SciPy-dev] default dtype In-Reply-To: <200601021247.49311.faltet@carabos.com> References: <9E95E32C-373A-4830-A8B3-FF16605B7181@ftw.at> <2698AB6B-03E0-484B-A2CC-305FC2D55134@ftw.at> <200601021247.49311.faltet@carabos.com> Message-ID: <43B9304D.1040701@ftw.at> Francesc Altet wrote: >A Dilluns 02 Gener 2006 03:38, Ed Schofield va escriure: > > >>+5 on changing the default to float. I think we'd look back on this >>decision in several years as difficult but right. >> >> >Sorry, but I don't agree. If we want Python to include the container >for array objects, making the default be double seems to stress the >fact that this object is meant primarily for scientific use (which is >true to some extent). However, for the sake of stablishing a *real* >standard to keep datasets, I'd advocate the default to remain int. In >addition, there are a lot of uses for integer arrays (indices, >images...). IMO, making the double the default would discourage the >use of the object between people not used to write >scientific/technical apps. > >The only issue is the possible confusion in users when they will >receive Int32 arrays in 32-bit platforms and Int64 arrays in 64-bit >ones. I don't know, but perhaps this single reason is strong enough to >change the default to Float64. > > Interesting point. Then what do you think about an integer default that is redefinable by the user? For example: >>> import scicore / whatever >>> scicore.default_dtype = float64 Then zeros(), empty(), and ones() (any others?) would use the new default. I think the flexibility would be nice, and it should be feasible ... Perhaps the most compelling argument against an integer default is the current behaviour with unsafe casting: >>> a = zeros(10) >>> a[0] = 1.159262 >>> a[0] 1 and this argument would evaporate if unsafe casts were required to be more explicit. I promised to provide a patch and run some timings for this, but I haven't done this yet. 
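To make the redefinable-default idea concrete, here is a rough pure-Python sketch -- the names are invented for illustration (using 'scicore' as a stand-in for whatever the package ends up being called), and the real change would of course live in the C constructors:

# illustration only: a module-level default consulted at call time
default_dtype = int            # keep the historical behaviour unless the user changes it

def zeros(shape, dtype=None):
    # 'dtype=None' in the signature, resolved in the body, so that a later
    # 'scicore.default_dtype = float64' is honoured by subsequent calls
    if dtype is None:
        dtype = default_dtype
    return [dtype(0)] * shape  # 1-D list stand-in for the real array allocation

With something like this, turning the old integer default back on (or off) really would be one line of user code.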
-- Ed From agn at noc.soton.ac.uk Mon Jan 2 09:07:53 2006 From: agn at noc.soton.ac.uk (George Nurser) Date: Mon, 2 Jan 2006 14:07:53 +0000 Subject: [SciPy-user] default dtype for integer arrays In-Reply-To: <43B9304D.1040701@ftw.at> References: <9E95E32C-373A-4830-A8B3-FF16605B7181@ftw.at> <2698AB6B-03E0-484B-A2CC-305FC2D55134@ftw.at> <200601021247.49311.faltet@carabos.com> <43B9304D.1040701@ftw.at> Message-ID: <602ADCF4-2798-4EAD-AEC2-F02D1121D6CA@noc.soton.ac.uk> On 2 Jan 2006, at 13:53, Ed Schofield wrote: > Francesc Altet wrote: > >> A Dilluns 02 Gener 2006 03:38, Ed Schofield va escriure: >> >> >>> +5 on changing the default to float. I think we'd look back on this >>> decision in several years as difficult but right. >>> >>> >> Sorry, but I don't agree. If we want Python to include the container >> for array objects, making the default be double seems to stress the >> fact that this object is meant primarily for scientific use (which is >> true to some extent). However, for the sake of stablishing a *real* >> standard to keep datasets, I'd advocate the default to remain int. In >> addition, there are a lot of uses for integer arrays (indices, >> images...). IMO, making the double the default would discourage the >> use of the object between people not used to write >> scientific/technical apps. >> >> The only issue is the possible confusion in users when they will >> receive Int32 arrays in 32-bit platforms and Int64 arrays in 64-bit >> ones. Excuse my putting in a (probably ignorant) point here. We use 32-bit integers for our data, even on the 64-bit machines. If the default for integer arrays remained at 32 bits on 64 bit machines, or was user definable, this would be very helpful. Regards, George Nurser. From Fernando.Perez at colorado.edu Mon Jan 2 10:01:41 2006 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Mon, 02 Jan 2006 08:01:41 -0700 Subject: [SciPy-user] [SciPy-dev] default dtype In-Reply-To: <200601021556.18401.faltet@carabos.com> References: <9E95E32C-373A-4830-A8B3-FF16605B7181@ftw.at> <200601021247.49311.faltet@carabos.com> <43B9304D.1040701@ftw.at> <200601021556.18401.faltet@carabos.com> Message-ID: <43B94055.9000109@colorado.edu> Francesc Altet wrote: > A Dilluns 02 Gener 2006 14:53, Ed Schofield va escriure: > >>Then what do you think about an integer default that is redefinable by >> >>the user? For example: >> >>> import scicore / whatever >> >>> scicore.default_dtype = float64 >> >>Then zeros(), empty(), and ones() (any others?) would use the new >>default. I think the flexibility would be nice, and it should be >>feasible ... > > > Yes. I think this would be really nice to have. That way, the > 64/32-bit dichotomy would disappear. Is this supposed to be a fully global setting? If so, what about import numerix numerix.default_dtype = 'something' import somemodule somemodule.foo() boom! Now, the foo() call either: 1. blows up, because it had unqualified zeros() calls whose dtype has now changed, or 2. resets numerix.default_type back, and your code blows up next. I think this can be handled with a _call_ (numerix.set_default_dtype()), but it requires special stack-handling code so that numerix can know to apply the new default only to calls made from the same module. 
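To sketch what I mean by stack handling (illustration only, with invented names and CPython's sys._getframe -- the real implementation would have to sit next to the C constructors):

import sys

_module_defaults = {}   # module name -> that module's preferred default dtype
_global_default = int   # historical fallback

def set_default_dtype(dtype):
    # remember the new default only for the module that made this call
    caller = sys._getframe(1).f_globals.get('__name__', '__main__')
    _module_defaults[caller] = dtype

def _default_for_caller():
    # two frames up: this helper -> zeros()/ones()/empty() -> the user's module
    caller = sys._getframe(2).f_globals.get('__name__', '__main__')
    return _module_defaults.get(caller, _global_default)

zeros() and friends, when called with dtype=None, would then ask _default_for_caller() instead of a single process-wide variable, so somemodule.foo() keeps seeing the default it was written against.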
Cheers, f From aisaac at american.edu Mon Jan 2 11:54:33 2006 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 2 Jan 2006 11:54:33 -0500 Subject: [SciPy-user] [SciPy-dev] default dtype In-Reply-To: <43B9304D.1040701@ftw.at> References: <9E95E32C-373A-4830-A8B3-FF16605B7181@ftw.at><2698AB6B-03E0-484B-A2CC-305FC2D55134@ftw.at><200601021247.49311.faltet@carabos.com><43B9304D.1040701@ftw.at> Message-ID: On Mon, 02 Jan 2006, Ed Schofield apparently wrote: > Perhaps the most compelling argument against an integer > default is the current behaviour with unsafe casting: This is an important argument, but the most compelling argument I believe is that the integer arrays are - unlikely to be expected by the inexperienced (or those with experience in say GAUSS or Matlab) - less often wanted (or so I believe; is this wrong?) - likely to be set explicitly in any case by those who really want them So the principle of least surprise suggests a float default, and I believe the most common use case does as well. Following on the problem of surprise, even if unsafe casting is eliminated (and it may be viewed as a feature by some!), new users will still be unpleasantly surprised. I.e., new users will be surprised even if there is an error message instead of >>> x=ones((2,2)) >>> print 3.14*x [[ 3.14, 3.14,] [ 3.14, 3.14,]] >>> x[0,0]=x[0,0]*3.14 >>> print x [[3,1,] [1,1,]] Cheers, Alan Isaac From managan at llnl.gov Tue Jan 3 13:29:57 2006 From: managan at llnl.gov (Rob Managan) Date: Tue, 3 Jan 2006 10:29:57 -0800 Subject: [SciPy-user] Error importing Message-ID: I just updated to the latest svn and after doing a full rebuild and install I get this error. I probably did something stupid but a pointer would be helpful. Thanks!! [mangrove:~/Documents/devel/scipy] managan% python Python 2.4.1 (#2, Mar 31 2005, 00:05:10) [GCC 3.3 20030304 (Apple Computer, Inc. build 1666)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import scipy Traceback (most recent call last): File "", line 1, in ? File "/Users/managan/Documents/local/lib/python2.4/site-packages/scipy/__init__.py", line 335, in ? pkgload(verbose=SCIPY_IMPORT_VERBOSE,postpone=True) File "/Users/managan/Documents/local/lib/python2.4/site-packages/scipy/__init__.py", line 216, in __call__ self.warn('Overwriting %s=%s (was %s)' \ AttributeError: PackageLoader instance has no attribute '_obj2str' >>> -- *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*- Rob Managan email managan at llnl.gov LLNL phone: 925-423-0903 P.O. Box 808, L-095 FAX: 925-422-3389 Livermore, CA 94551-0808 From managan at llnl.gov Tue Jan 3 14:22:09 2006 From: managan at llnl.gov (Rob Managan) Date: Tue, 3 Jan 2006 11:22:09 -0800 Subject: [SciPy-user] Error importing In-Reply-To: References: Message-ID: It seems I needed to delete the scipy directory in site-packages. Now i see that the import functionality has been updated since i now tells that misc failed to import Image. Nice addition!! >I just updated to the latest svn and after doing a full rebuild and >install I get this error. I probably did something stupid but a >pointer would be helpful. > >Thanks!! > >[mangrove:~/Documents/devel/scipy] managan% python >Python 2.4.1 (#2, Mar 31 2005, 00:05:10) >[GCC 3.3 20030304 (Apple Computer, Inc. build 1666)] on darwin >Type "help", "copyright", "credits" or "license" for more information. > >>> import scipy >... 
>AttributeError: PackageLoader instance has no attribute '_obj2str' >>>> >-- -- *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*- Rob Managan email managan at llnl.gov LLNL phone: 925-423-0903 P.O. Box 808, L-095 FAX: 925-422-3389 Livermore, CA 94551-0808 From Chris.Fonnesbeck at MyFWC.com Tue Jan 3 15:05:37 2006 From: Chris.Fonnesbeck at MyFWC.com (Fonnesbeck, Chris) Date: Tue, 3 Jan 2006 15:05:37 -0500 Subject: [SciPy-user] new full scipy binary for windows? Message-ID: I am wondering if there is a relatively recent windows installer for the new full scipy. I am having trouble finding all the build prerequisites for getting it done myself on my old windows laptop (mainly because fftw.org seems to be down). If anyone has a binary sitting on a ftp site somewhere, please send me a pointer -- I did not see one on the sourceforge site. Thanks, C. -- Christopher J. Fonnesbeck Population Ecologist, Marine Mammal Section Fish & Wildlife Research Institute (FWC) St. Petersburg, FL Adjunct Assistant Professor Warnell School of Forest Resources University of Georgia Athens, GA T: 727.235.5570 E: Chris.Fonnesbeck at MyFWC.com -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 2131 bytes Desc: not available URL: From oliphant.travis at ieee.org Tue Jan 3 20:08:44 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 03 Jan 2006 18:08:44 -0700 Subject: [SciPy-user] Renaming scipy_core: here we go again Message-ID: <43BB201C.7050801@ieee.org> Please give feedback on the following names: 1. psipi 2. scicore 3. muscle 4. numstar We have to move from numerix due to trademark concerns (I don't want to have any that I can obviously avoid). -Travis From rshepard at appl-ecosys.com Tue Jan 3 21:59:15 2006 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Tue, 3 Jan 2006 18:59:15 -0800 (PST) Subject: [SciPy-user] Renaming scipy_core: here we go again In-Reply-To: <43BB201C.7050801@ieee.org> References: <43BB201C.7050801@ieee.org> Message-ID: On Tue, 3 Jan 2006, Travis Oliphant wrote: > Please give feedback on the following names: > > 1. psipi > 2. scicore > 3. muscle > 4. numstar Travis, My vote is for #2. Names should be descriptive as much as possible. Rich -- Richard B. Shepard, Ph.D. | Author of "Quantifying Environmental Applied Ecosystem Services, Inc. (TM) | Impact Assessments Using Fuzzy Logic" Voice: 503-667-4517 Fax: 503-667-8863 From j.merritt at pgrad.unimelb.edu.au Tue Jan 3 22:17:05 2006 From: j.merritt at pgrad.unimelb.edu.au (Jonathan Merritt) Date: Wed, 04 Jan 2006 14:17:05 +1100 Subject: [SciPy-user] Renaming scipy_core: here we go again In-Reply-To: References: <43BB201C.7050801@ieee.org> Message-ID: <43BB3E31.6080500@pgrad.unimelb.edu.au> Rich Shepard wrote: >On Tue, 3 Jan 2006, Travis Oliphant wrote: > > >>Please give feedback on the following names: >> >>1. psipi >>2. scicore >>3. muscle >>4. numstar >> >> > >Travis, > > My vote is for #2. Names should be descriptive as much as possible. > >Rich > > I'm a scipy user rather than a contributor, but I'd definitely choose #2 as well. Please don't choose #3, since you might eventually confuse someone who's using SciPy for biomechanics work (like me!). (I'm just joking of course, but I thought it was a funny option. :-) Jonathan Merritt. 
From dalembertian at yahoo.com Tue Jan 3 22:50:19 2006 From: dalembertian at yahoo.com (Frank) Date: Tue, 3 Jan 2006 21:50:19 -0600 Subject: [SciPy-user] Renaming scipy_core: here we go again In-Reply-To: <43BB3E31.6080500@pgrad.unimelb.edu.au> References: <43BB201C.7050801@ieee.org> <43BB3E31.6080500@pgrad.unimelb.edu.au> Message-ID: Dear All, I vote for "scicore". Anything to make the division between scipy core and numeric more clear. Thanks, Frank From pajer at iname.com Tue Jan 3 23:08:19 2006 From: pajer at iname.com (Gary) Date: Tue, 03 Jan 2006 23:08:19 -0500 Subject: [SciPy-user] Renaming scipy_core: here we go again In-Reply-To: References: <43BB201C.7050801@ieee.org> <43BB3E31.6080500@pgrad.unimelb.edu.au> Message-ID: <43BB4A33.2080100@iname.com> Frank wrote: >Dear All, > >I vote for "scicore". Anything to make the division between scipy >core and numeric more clear. > > ... as well as the distinction between scipy core and scipy. scicore gets my vote from that list, but IMO a more application-neutral name would be preferable. Of course, I don't have any suggestions. Too bad "numeric" and "numpy" are taken. I'll sleep on it and see if I can come up with a winner :) -gary >Thanks, >Frank > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > > From lanceboyle at qwest.net Wed Jan 4 00:29:49 2006 From: lanceboyle at qwest.net (lanceboyle at qwest.net) Date: Tue, 3 Jan 2006 22:29:49 -0700 Subject: [SciPy-user] Renaming scipy_core: here we go again In-Reply-To: <43BB201C.7050801@ieee.org> References: <43BB201C.7050801@ieee.org> Message-ID: <90975D05-B1AF-4E10-9B72-BCEA93FBAD9E@qwest.net> On Jan 3, 2006, at 6:08 PM, Travis Oliphant wrote: > > Please give feedback on the following names: > > 1. psipi > 2. scicore > 3. muscle > 4. numstar > > We have to move from numerix due to trademark concerns (I don't > want to > have any that I can obviously avoid). > > -Travis > All of these names are obscure. _Must_ that be the way of open source projects? Would you give any of these names to a product that you wished to sell and thus wished to be descriptive? First, consider that non-scientists might be interested in the software; for example, an engineer might find it useful. Second, consider your audience. To whom outside this group does "scicore" (SKI-cor) mean diddly? You are suffering from the "Airport Sign Syndrome" wherein the people who make signs for airports already know where to go; it's the folks from out of town who get into fender benders trying to decipher bad signs. Jerry From Fernando.Perez at colorado.edu Wed Jan 4 00:35:44 2006 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Tue, 03 Jan 2006 22:35:44 -0700 Subject: [SciPy-user] Renaming scipy_core: here we go again In-Reply-To: <90975D05-B1AF-4E10-9B72-BCEA93FBAD9E@qwest.net> References: <43BB201C.7050801@ieee.org> <90975D05-B1AF-4E10-9B72-BCEA93FBAD9E@qwest.net> Message-ID: <43BB5EB0.3090506@colorado.edu> lanceboyle at qwest.net wrote: > All of these names are obscure. _Must_ that be the way of open source > projects? Would you give any of these names to a product that you > wished to sell and thus wished to be descriptive? > > First, consider that non-scientists might be interested in the > software; for example, an engineer might find it useful. Second, > consider your audience. To whom outside this group does > "scicore" (SKI-cor) mean diddly? 
You are suffering from the "Airport > Sign Syndrome" wherein the people who make signs for airports already > know where to go; it's the folks from out of town who get into fender > benders trying to decipher bad signs. Several of us have already admitted we suck at coming up with good names. And we've been burning time on this issue, when Travis could be renaming code and setting up the new repo. So how about, instead of criticizing on something we already know, you simply make useful suggestions OF BETTER NAMES? We already admitted we don't know how to do better, so by all means help. Best, f From dalembertian at yahoo.com Wed Jan 4 01:08:23 2006 From: dalembertian at yahoo.com (Frank) Date: Wed, 4 Jan 2006 00:08:23 -0600 Subject: [SciPy-user] Renaming scipy_core: here we go again In-Reply-To: <43BB5EB0.3090506@colorado.edu> References: <43BB201C.7050801@ieee.org> <90975D05-B1AF-4E10-9B72-BCEA93FBAD9E@qwest.net> <43BB5EB0.3090506@colorado.edu> Message-ID: Dear All, OK, I have a naive question: Why should there be two separate packages? That is, why not just maintain the full SciPy package and skip offering the separate core. Thanks, Frank From Fernando.Perez at colorado.edu Wed Jan 4 01:11:52 2006 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Tue, 03 Jan 2006 23:11:52 -0700 Subject: [SciPy-user] Renaming scipy_core: here we go again In-Reply-To: References: <43BB201C.7050801@ieee.org> <90975D05-B1AF-4E10-9B72-BCEA93FBAD9E@qwest.net> <43BB5EB0.3090506@colorado.edu> Message-ID: <43BB6728.3020201@colorado.edu> Frank wrote: > Dear All, > > OK, I have a naive question: Why should there be two separate > packages? That is, why not just maintain the full SciPy package and > skip offering the separate core. google('scipy-dev renaming scipy_core') for the details. cheers, f From gerard.vermeulen at grenoble.cnrs.fr Wed Jan 4 02:36:45 2006 From: gerard.vermeulen at grenoble.cnrs.fr (Gerard Vermeulen) Date: Wed, 4 Jan 2006 08:36:45 +0100 Subject: [SciPy-user] Renaming scipy_core: here we go again In-Reply-To: <43BB5EB0.3090506@colorado.edu> References: <43BB201C.7050801@ieee.org> <90975D05-B1AF-4E10-9B72-BCEA93FBAD9E@qwest.net> <43BB5EB0.3090506@colorado.edu> Message-ID: <20060104083645.23af4d63.gerard.vermeulen@grenoble.cnrs.fr> On Tue, 03 Jan 2006 22:35:44 -0700 Fernando Perez wrote: > lanceboyle at qwest.net wrote: > > > All of these names are obscure. _Must_ that be the way of open source > > projects? Would you give any of these names to a product that you > > wished to sell and thus wished to be descriptive? > > > > First, consider that non-scientists might be interested in the > > software; for example, an engineer might find it useful. Second, > > consider your audience. To whom outside this group does > > "scicore" (SKI-cor) mean diddly? You are suffering from the "Airport > > Sign Syndrome" wherein the people who make signs for airports already > > know where to go; it's the folks from out of town who get into fender > > benders trying to decipher bad signs. > > Several of us have already admitted we suck at coming up with good names. And > we've been burning time on this issue, when Travis could be renaming code and > setting up the new repo. > > So how about, instead of criticizing on something we already know, you simply > make useful suggestions OF BETTER NAMES? We already admitted we don't know > how to do better, so by all means help. > Why not scipy (was scipy-core) and scipy++ (was scipy-svn)? 
Those names are less obscure than the ones proposed and people may guess there is a base package and an extension package. Gerard From Fernando.Perez at colorado.edu Wed Jan 4 02:39:38 2006 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Wed, 04 Jan 2006 00:39:38 -0700 Subject: [SciPy-user] Renaming scipy_core: here we go again In-Reply-To: <20060104083645.23af4d63.gerard.vermeulen@grenoble.cnrs.fr> References: <43BB201C.7050801@ieee.org> <90975D05-B1AF-4E10-9B72-BCEA93FBAD9E@qwest.net> <43BB5EB0.3090506@colorado.edu> <20060104083645.23af4d63.gerard.vermeulen@grenoble.cnrs.fr> Message-ID: <43BB7BBA.1080003@colorado.edu> Gerard Vermeulen wrote: > Why not > scipy (was scipy-core) > and > scipy++ (was scipy-svn)? > > > Those names are less obscure than the ones proposed and people may guess > there is a base package and an extension package. import scipy++ is not valid python. Cheers, f From pjrandew at sun.ac.za Wed Jan 4 02:44:13 2006 From: pjrandew at sun.ac.za (Randewijk P-J ) Date: Wed, 4 Jan 2006 09:44:13 +0200 Subject: [SciPy-user] Renaming scipy_core: here we go again Message-ID: What about a name to compliment python, i.e. a name of some sort of snake... but that also sounds somewhat mathematical... My first choice would by, our deadly: 1) pufadder thought sounding not "mathematical", but still a snake, 2) mamba 3) cobra could also be considered... Also with the chief author's surname sounding very... South African... a deadly (South) African snake name would be imho very appropriate... but maybe I'm biased... PJR > -----Original Message----- > From: scipy-user-bounces at scipy.net > [mailto:scipy-user-bounces at scipy.net] On Behalf Of Fernando Perez > Sent: 04 January 2006 07:36 > To: SciPy Users List > Subject: Re: [SciPy-user] Renaming scipy_core: here we go again ... > Several of us have already admitted we suck at coming up with > good names. And > we've been burning time on this issue, when Travis could be > renaming code and > setting up the new repo. > > So how about, instead of criticizing on something we already > know, you simply > make useful suggestions OF BETTER NAMES? We already admitted > we don't know > how to do better, so by all means help. > > Best, > > f > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net http://www.scipy.net/mailman/listinfo/scipy-user > From meesters at uni-mainz.de Wed Jan 4 02:58:12 2006 From: meesters at uni-mainz.de (Christian Meesters) Date: Wed, 4 Jan 2006 08:58:12 +0100 Subject: [SciPy-user] Renaming scipy_core: here we go again In-Reply-To: <43BB7BBA.1080003@colorado.edu> References: <43BB201C.7050801@ieee.org> <20060104083645.23af4d63.gerard.vermeulen@grenoble.cnrs.fr> <43BB7BBA.1080003@colorado.edu> Message-ID: <200601040858.12681.meesters@uni-mainz.de> On Wednesday 04 January 2006 08:39, Fernando Perez wrote: > Gerard Vermeulen wrote: > > Why not > > scipy (was scipy-core) > > and > > scipy++ (was scipy-svn)? > > > > > > Those names are less obscure than the ones proposed and people may guess > > there is a base package and an extension package. > > import scipy++ > > is not valid python. > > Cheers, > > f Good point. But 'import scipyplus' or 'import scipy_plus' would be valid python. At least in MHO Gerard's suggestion is finally something creative and descriptive. (Whereas my solution is not that good anymore, but perhaps someone of you still knows a way out?) @Travis: Despite of the naming confusions. Thank you and keep up the good work! 
Cheers Christian From gerard.vermeulen at grenoble.cnrs.fr Wed Jan 4 03:20:32 2006 From: gerard.vermeulen at grenoble.cnrs.fr (Gerard Vermeulen) Date: Wed, 4 Jan 2006 09:20:32 +0100 Subject: [SciPy-user] Renaming scipy_core: here we go again In-Reply-To: <43BB7BBA.1080003@colorado.edu> References: <43BB201C.7050801@ieee.org> <90975D05-B1AF-4E10-9B72-BCEA93FBAD9E@qwest.net> <43BB5EB0.3090506@colorado.edu> <20060104083645.23af4d63.gerard.vermeulen@grenoble.cnrs.fr> <43BB7BBA.1080003@colorado.edu> Message-ID: <20060104092032.437c74bd.gerard.vermeulen@grenoble.cnrs.fr> On Wed, 04 Jan 2006 00:39:38 -0700 Fernando Perez wrote: > Gerard Vermeulen wrote: > > > Why not > > scipy (was scipy-core) > > and > > scipy++ (was scipy-svn)? > > > > > > Those names are less obscure than the ones proposed and people may guess > > there is a base package and an extension package. > > import scipy++ > > is not valid python. > True, but I proposed this on the assumption that all scipy++ stuff gets installed in the scipy directory (I may have missed a discussion on installing scipy-core and scipy-full in two separate directories). If my assumption is still valid, users would never have to do import scipy++ Gerard PS: the svn repository can be called something else than scipy++, isn't it? From ted.horst at earthlink.net Wed Jan 4 03:36:34 2006 From: ted.horst at earthlink.net (Ted Horst) Date: Wed, 4 Jan 2006 02:36:34 -0600 Subject: [SciPy-user] Renaming scipy_core: here we go again In-Reply-To: <43BB4A33.2080100@iname.com> References: <43BB201C.7050801@ieee.org> <43BB3E31.6080500@pgrad.unimelb.edu.au> <43BB4A33.2080100@iname.com> Message-ID: <29fd1972cad577ee682419b321523aa4@earthlink.net> My (limited) understanding is that scipy_core is primarily the multidimensional array class and supporting infrastructure. It is not all that scientific except that linear algebra and ffts have been bundled with it. So perhaps the name should be something along the lines of: ndarray multiarray bigarray Ted On Jan 3, 2006, at 22:08, Gary wrote: > Frank wrote: > >> Dear All, >> >> I vote for "scicore". Anything to make the division between scipy >> core and numeric more clear. >> >> > ... as well as the distinction between scipy core and scipy. > > scicore gets my vote from that list, but IMO a more application-neutral > name would be preferable. Of course, I don't have any suggestions. > Too > bad "numeric" and "numpy" are taken. I'll sleep on it and see if I can > come up with a winner :) > > -gary > >> Thanks, >> Frank >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.net >> http://www.scipy.net/mailman/listinfo/scipy-user >> >> >> > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From peter at mapledesign.co.uk Wed Jan 4 05:07:57 2006 From: peter at mapledesign.co.uk (Peter Bowyer) Date: Wed, 04 Jan 2006 10:07:57 +0000 Subject: [SciPy-user] Renaming scipy_core: here we go again In-Reply-To: <90975D05-B1AF-4E10-9B72-BCEA93FBAD9E@qwest.net> References: <43BB201C.7050801@ieee.org> <90975D05-B1AF-4E10-9B72-BCEA93FBAD9E@qwest.net> Message-ID: <6.2.3.4.0.20060104100530.03b63c00@127.0.0.1> At 05:29 04/01/2006, lanceboyle at qwest.net wrote: >All of these names are obscure. _Must_ that be the way of open source >projects? Would you give any of these names to a product that you >wished to sell and thus wished to be descriptive? Completely agree. 
When I first came to Python it threw me that I was having to find randomly named packages to do what I needed - and still does. Not wishing to start a flamewar, but a CPAN style structure is a lot easier to find your way around... As to suggestions, I've still not grasped what's in scipy_core, and googling as someone suggested turned up a thread with people agreeing to rename scipy_core to numpy, which makes sense. Peter -- Maple Design - quality web design and programming http://www.mapledesign.co.uk From agn at noc.soton.ac.uk Wed Jan 4 05:18:25 2006 From: agn at noc.soton.ac.uk (George Nurser) Date: Wed, 4 Jan 2006 10:18:25 +0000 Subject: [SciPy-user] Renaming scipy_core: here we go again In-Reply-To: <29fd1972cad577ee682419b321523aa4@earthlink.net> References: <43BB201C.7050801@ieee.org> <43BB3E31.6080500@pgrad.unimelb.edu.au> <43BB4A33.2080100@iname.com> <29fd1972cad577ee682419b321523aa4@earthlink.net> Message-ID: <5FBFBB0B-3903-49B3-975A-37E6BE30D62E@noc.soton.ac.uk> numpy sounds good George N. From pajer at iname.com Wed Jan 4 06:43:09 2006 From: pajer at iname.com (Gary) Date: Wed, 04 Jan 2006 06:43:09 -0500 Subject: [SciPy-user] Renaming scipy_core: here we go again In-Reply-To: <43BB201C.7050801@ieee.org> References: <43BB201C.7050801@ieee.org> Message-ID: <43BBB4CD.2060202@iname.com> Travis Oliphant wrote: >Please give feedback on the following names: > >1. psipi >2. scicore >3. muscle >4. numstar > >We have to move from numerix due to trademark concerns (I don't want to >have any that I can obviously avoid). > >-Travis > > > How about scipy core --> monty scipy --> full monty :) Ok, I'll get serious. Good idea to brainstorm for names here. Let me suggest thinking along the lines of reptiles and/or Monty Python. Too many two-name associations come to mind: spanish inquisition, african swallow, comfy chair, ... > > > > > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > > From gruben at bigpond.net.au Wed Jan 4 07:02:43 2006 From: gruben at bigpond.net.au (Gary Ruben) Date: Wed, 04 Jan 2006 23:02:43 +1100 Subject: [SciPy-user] Renaming scipy_core: here we go again In-Reply-To: <43BBB4CD.2060202@iname.com> References: <43BB201C.7050801@ieee.org> <43BBB4CD.2060202@iname.com> Message-ID: <43BBB963.3010702@bigpond.net.au> I think there is a consensus forming on the dev list to go with numpy. In fact, in the interests of homing in on something quickly, Travis's last post there was to ask for reasons 'against' going with numpy. I'm cross-posting this fact here so everyone gets a voice. My opinion, voiced on the dev list, was to go with numpy. My reading is that unless someone has a really good reason against it, or comes up with a new killer name, it'll be numpy. Gary R. From pajer at iname.com Wed Jan 4 08:39:56 2006 From: pajer at iname.com (Gary) Date: Wed, 04 Jan 2006 08:39:56 -0500 Subject: [SciPy-user] Renaming scipy_core: here we go again In-Reply-To: <43BBB963.3010702@bigpond.net.au> References: <43BB201C.7050801@ieee.org> <43BBB4CD.2060202@iname.com> <43BBB963.3010702@bigpond.net.au> Message-ID: <43BBD02C.2030403@iname.com> Gary Ruben wrote: >I think there is a consensus forming on the dev list to go with numpy. >In fact, in the interests of homing in on something quickly, Travis's >last post there was to ask for reasons 'against' going with numpy. I'm >cross-posting this fact here so everyone gets a voice. My opinion, >voiced on the dev list, was to go with numpy. 
My reading is that unless >someone has a really good reason against it, or comes up with a new >killer name, it'll be numpy. > >Gary R. > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > > small point: the new name ought to be googleable. Numpy already has an existance, so, at least for a while, googling numpy will return lots of obsolete information. Scicore, for example, wouldn't suffer from this problem, although I'm not crazy about scicore. Names like cobra aren't very good in this regard. I sometimes use a TeX macro package called ConTeXt. Google is useless. ATC, I like numpy at the moment, with reservations. -gary p. From arnd.baecker at web.de Wed Jan 4 08:53:41 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Wed, 4 Jan 2006 14:53:41 +0100 (CET) Subject: [SciPy-user] Renaming scipy_core: here we go again In-Reply-To: <43BBD02C.2030403@iname.com> References: <43BB201C.7050801@ieee.org> <43BBB4CD.2060202@iname.com> <43BBB963.3010702@bigpond.net.au> <43BBD02C.2030403@iname.com> Message-ID: On Wed, 4 Jan 2006, Gary wrote: > Gary Ruben wrote: > > >I think there is a consensus forming on the dev list to go with numpy. > >In fact, in the interests of homing in on something quickly, Travis's > >last post there was to ask for reasons 'against' going with numpy. I'm > >cross-posting this fact here so everyone gets a voice. My opinion, > >voiced on the dev list, was to go with numpy. My reading is that unless > >someone has a really good reason against it, or comes up with a new > >killer name, it'll be numpy. > > > >Gary R. > > > >_______________________________________________ > >SciPy-user mailing list > >SciPy-user at scipy.net > >http://www.scipy.net/mailman/listinfo/scipy-user > > > > > > > small point: the new name ought to be googleable. Numpy already has an > existance, so, at least for a while, googling numpy will return lots of > obsolete information. Scicore, for example, wouldn't suffer from this > problem, although I'm not crazy about scicore. Names like cobra aren't > very good in this regard. But the first hits for numpy are not that bad: - for the first hit (http://numeric.scipy.org/) Travis is in charge - 2nd hit: http://www.numpy.org/ points (with some history information) to http://sourceforge.net/projects/numpy - 3d hit: http://www.pfdubois.com/numpy/ the same - 4th hit: http://www.pfdubois.com/numpy/html2/numpy.html obsoleted by now - 5th hit: http://sourceforge.net/projects/numpy So in 4/5 cases you get to the right place. Seems ok to me. > I sometimes use a TeX macro package called ConTeXt. Google is useless. > > ATC, I like numpy at the moment, with reservations. > > -gary p. Best, Arnd From sransom at nrao.edu Wed Jan 4 09:01:41 2006 From: sransom at nrao.edu (Scott Ransom) Date: Wed, 4 Jan 2006 09:01:41 -0500 Subject: [SciPy-user] Renaming scipy_core: here we go again In-Reply-To: References: <43BB201C.7050801@ieee.org> <43BBB4CD.2060202@iname.com> <43BBB963.3010702@bigpond.net.au> <43BBD02C.2030403@iname.com> Message-ID: <20060104140141.GB11478@ssh.cv.nrao.edu> I'm for numpy as well. I think the search engine issues are not too important right now as the engines will adjust based on usage. I've always thought that moving from Numeric/numpy -> numarray -> scipy was a mess when all the packages do effectively the same (or very similar) things. Scott -- Scott M. Ransom Address: NRAO Phone: (434) 296-0320 520 Edgemont Rd. 
email: sransom at nrao.edu Charlottesville, VA 22903 USA GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 From Michael_OKeefe at nrel.gov Wed Jan 4 09:11:30 2006 From: Michael_OKeefe at nrel.gov (O'Keefe, Michael) Date: Wed, 4 Jan 2006 07:11:30 -0700 Subject: [SciPy-user] Renaming scipy_core: here we go again Message-ID: I'd like to cast my vote for #2 "scicore" though I personally wouldn't mind "scipycore" either. -Michael -----Original Message----- From: scipy-user-bounces at scipy.net [mailto:scipy-user-bounces at scipy.net] On Behalf Of Travis Oliphant Sent: Tuesday, January 03, 2006 18:09 To: SciPy Developers List; SciPy Users List Subject: [SciPy-user] Renaming scipy_core: here we go again Please give feedback on the following names: 1. psipi 2. scicore 3. muscle 4. numstar We have to move from numerix due to trademark concerns (I don't want to have any that I can obviously avoid). -Travis From Chris.Fonnesbeck at MyFWC.com Wed Jan 4 09:20:38 2006 From: Chris.Fonnesbeck at MyFWC.com (Fonnesbeck, Chris) Date: Wed, 4 Jan 2006 09:20:38 -0500 Subject: [SciPy-user] Renaming scipy_core: here we go again In-Reply-To: <20060104140141.GB11478@ssh.cv.nrao.edu> Message-ID: On 1/4/06 9:01 AM, "Scott Ransom" wrote: > I'm for numpy as well. I think the search engine issues are > not too important right now as the engines will adjust based on > usage. I second this. I think numpy is a good combination of information and name recognition/tradition. C. -- Christopher J. Fonnesbeck Population Ecologist, Marine Mammal Section Fish & Wildlife Research Institute (FWC) St. Petersburg, FL Adjunct Assistant Professor Warnell School of Forest Resources University of Georgia Athens, GA T: 727.235.5570 E: Chris.Fonnesbeck at MyFWC.com -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 2131 bytes Desc: not available URL: From aisaac at american.edu Wed Jan 4 09:59:07 2006 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 4 Jan 2006 09:59:07 -0500 Subject: [SciPy-user] Renaming scipy_core: here we go again In-Reply-To: <43BB5EB0.3090506@colorado.edu> References: <43BB201C.7050801@ieee.org><90975D05-B1AF-4E10-9B72-BCEA93FBAD9E@qwest.net><43BB5EB0.3090506@colorado.edu> Message-ID: On Tue, 03 Jan 2006, Fernando Perez apparently wrote: > so by all means help numnuts (as in nuts and bolts) half serious ... Cheers, Alan Isaac From pajer at iname.com Wed Jan 4 10:06:40 2006 From: pajer at iname.com (Gary) Date: Wed, 04 Jan 2006 10:06:40 -0500 Subject: [SciPy-user] Renaming scipy_core: here we go again In-Reply-To: References: <43BB201C.7050801@ieee.org><90975D05-B1AF-4E10-9B72-BCEA93FBAD9E@qwest.net><43BB5EB0.3090506@colorado.edu> Message-ID: <43BBE480.90302@iname.com> Alan G Isaac wrote: >On Tue, 03 Jan 2006, Fernando Perez apparently wrote: > > >>so by all means help >> >> > >numnuts >(as in nuts and bolts) > >half serious ... > > > numbnuts ? From perry at stsci.edu Wed Jan 4 10:45:58 2006 From: perry at stsci.edu (Perry Greenfield) Date: Wed, 4 Jan 2006 10:45:58 -0500 Subject: [SciPy-user] Renaming scipy_core: here we go again In-Reply-To: <43BBE480.90302@iname.com> References: <43BB201C.7050801@ieee.org><90975D05-B1AF-4E10-9B72-BCEA93FBAD9E@qwest.net><43BB5EB0.3090506@colorado.edu> <43BBE480.90302@iname.com> Message-ID: I don't see the google issue for numpy as a big one. 
For a while older, confusing links may be intermixed, but that will eventually go away and the top hit should rapidly become the right one. There seems to be sufficient support for it that Travis should work on that basis. Perry From jaonary at free.fr Wed Jan 4 10:54:31 2006 From: jaonary at free.fr (jaonary at free.fr) Date: Wed, 04 Jan 2006 16:54:31 +0100 Subject: [SciPy-user] How to built scipy on windows Message-ID: <1136390071.43bbefb7361c6@imp1-g19.free.fr> Hi all, I'd like to build the latest scipy and the core package on a Windows XP machine. I have installed on it:
python2.4.2
mkl (the latest version from Intel)
intel fortran compiler (evaluation version from Intel)
After retrieving the latest scipy/trunk with svn, I did, as usual, in the core directory:
python setup.py build
Here I got a build error: mainly, distutils couldn't find my Fortran compiler, the MKL lib and the python24.lib file. The last message I got is
ERROR: Failed to test configuration.
I think there are some environment variables that are not set correctly. If someone could give me more explanation it would be helpful. Cheers, Jaonary Rabarisoa From pajer at iname.com Wed Jan 4 11:29:16 2006 From: pajer at iname.com (Gary) Date: Wed, 04 Jan 2006 11:29:16 -0500 Subject: [SciPy-user] How to built scipy on windows In-Reply-To: <1136390071.43bbefb7361c6@imp1-g19.free.fr> References: <1136390071.43bbefb7361c6@imp1-g19.free.fr> Message-ID: <43BBF7DC.7000005@iname.com> jaonary at free.fr wrote: > Hi all, > I'd like to build the latest scipy and the core package on a windows > xp machine. > I have installed on it : > python2.4.2 > mkl (the latest version from intel) > intel fortran compiler (evaluation version from intel) > > After retrieving the last scipy/trunk with svn, I did, as usual, in > the core > directory > python setup.py build Not sure how to fix, but I *can* tell you that the svn versions of scipy_core and scipy build extremely smoothly on WinXP if you use the MinGW compiler. The hard part is finding the right thing to download from MinGW. Building scipy is a lot easier than it sounds.
Start here: http://sourceforge.net/projects/mingw/
Click on "Download MinGW" (in the green box)
Click on "Previous" (non-intuitive!)
Click on "MinGW" (down the page a bit)
Download MinGW-3.1.0-1.exe (version 4.1.0 doesn't work for me) and install.
Here are Travis's build instructions from a previous post: [I'll add, since it might not be clear, that scipy is in a separate tree: http://svn.scipy.org/svn/scipy/trunk -- set up two directories, one for scipy_core, the other for scipy, and build and install scipy_core first.]
--------------------------------------------
Then, check out the latest SVN tree (TortoiseSVN is an excellent Windows SVN client that makes it easy). The URL is http://svn.scipy.org/svn/scipy_core/trunk
You should be able to go into the directory where you placed the tree and type
python setup.py config --compiler=mingw32 build --compiler=mingw32 install
or
python setup.py config --compiler=mingw32 build --compiler=mingw32 bdist_wininst
to get an installable executable. Alternatively, to avoid all the --compiler=xxxxx noise you can create (or modify if you already have one) a distutils configuration file for your version of Python. The file name is \Lib\distutils\distutils.cfg and the contents should contain
[build]
compiler = mingw32
[config]
compiler = mingw32
On my system C:\Python24\Lib\distutils\distutils.cfg is where it is located.
------------------------------------------------------------ scipy_core compiles as-is. scipy needs atlas (scipy_core doesn't) get an atlas binary here: http://www.scipy.org/download/atlasbinaries/winnt/ and install. set an environment variable set ATLAS=c:\path\to\atlas (or set via the Windows control panel) Then scipy compiles hth, gary >Here, I got a build error. Mainly, distutil couldn't find my fortran copiler, >mkl lib and the python24.lib file. The last message I got is > >ERROR: Failed to test configuration. > >I think that there's some environment variable that are not set correctly. I >someone could give me more explanation it would be helpufull. > >Cheers, > >Jaonary Rabarisoa > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > > From dalembertian at yahoo.com Wed Jan 4 11:54:51 2006 From: dalembertian at yahoo.com (Frank) Date: Wed, 4 Jan 2006 10:54:51 -0600 Subject: [SciPy-user] Renaming scipy_core: here we go again In-Reply-To: References: <43BB201C.7050801@ieee.org><90975D05-B1AF-4E10-9B72-BCEA93FBAD9E@qwest.net><43BB5EB0.3090506@colorado.edu> <43BBE480.90302@iname.com> Message-ID: <975EBC94-C6D8-4D96-BA6A-9065228E536D@yahoo.com> Hi All, I followed Fernando's suggestion to google "scipy-dev renaming scipy_core" to find out why there were separate names in the first place. Although I am still not clear as to why there MUST be different names, one thing I saw in following the "More Bugs Fixed" thread was that it seems some folks were having issues with subpackage placement. For example, where does "stats" belong. I only mention this as a caution. Sometimes when folks create named categories (e.g., num or sci) a subpackage will come along that code- wise should be in one but English-description-wise should be in the other. Thus lending to confusion. Again, if there must be two packages, I vote for the more general and nondescript "scicore." Whatever the case, I offer a sincere "thank you" to all the developers. SciPy is awesome. Frank From gpajer at rider.edu Wed Jan 4 12:16:51 2006 From: gpajer at rider.edu (Gary Pajer) Date: Wed, 04 Jan 2006 12:16:51 -0500 Subject: [SciPy-user] Error importing In-Reply-To: References: Message-ID: <43BC0303.6040000@rider.edu> Rob Managan wrote: >It seems I needed to delete the scipy directory in site-packages. > >Now i see that the import functionality has been updated since i now >tells that misc failed to import Image. > I noticed that, too. Does it indicate a bug? If not, what ? -g > Nice addition!! > > > >>I just updated to the latest svn and after doing a full rebuild and >> >> From strawman at astraw.com Wed Jan 4 13:40:57 2006 From: strawman at astraw.com (Andrew Straw) Date: Wed, 04 Jan 2006 10:40:57 -0800 Subject: [SciPy-user] How to built scipy on windows In-Reply-To: <43BBF7DC.7000005@iname.com> References: <1136390071.43bbefb7361c6@imp1-g19.free.fr> <43BBF7DC.7000005@iname.com> Message-ID: <43BC16B9.3060308@astraw.com> I just posted Gary's great build instructions: http://new.scipy.org/Wiki/Installing_SciPy From evemariedevaliere at yahoo.fr Wed Jan 4 13:56:06 2006 From: evemariedevaliere at yahoo.fr (=?iso-8859-1?q?Devali=E8re=20Eve-Marie?=) Date: Wed, 4 Jan 2006 19:56:06 +0100 (CET) Subject: [SciPy-user] newbie...info doesn't work... In-Reply-To: <20051230014323.GA9340@localhost.localdomain> Message-ID: <20060104185606.20532.qmail@web25814.mail.ukl.yahoo.com> Hi folks! Thanks Evan and Dave! 
The info for the mailing list helps but I still can't access the info command... so for now I am trying to use numarray instead... my PYTHONPATH is the following:
echo $PYTHONPATH
/usr/lib/python2.4:/swan/download/python/SciPy_complete-0.3.2/scipy_core:/usr/include/python2.4/numarray:/usr/lib/python2.4/lib-tk:/swan/download/python/Numeric-24.2:/swan/download/python/Numeric-24.2/Demo:/swan/download/python/SciPy_complete-0.3.2:/swan/download/python/Python-2.4.2
I didn't find any scipy directory under user lib but when I import it it works fine... Am I missing something? (I had installed scipy core only at first and it wasn't working -- well, the same thing: info wasn't working but import was...) so then I installed scipy_complete... without any change... Also, is it worth it to buy the scipy doc? Thanks a lot! Cheers, Eve --- Evan Monroig wrote: > On Dec.30 02h19, Devalière Eve-Marie wrote: > > I feel kind of stupid but I have just installed > > python and stuff for scipy and just my 'info' function > > doesn't work... other functions seem to work but > > seeing how weak the scipy doc on the web is I am > > worrying... > > Hi, > I'm not sure how you use scipy, but there are two cases. > If you use the command 'import scipy' to import scipy, then the info function will be available as scipy.info > On the other hand, if you use the command 'from scipy import *' to import scipy, then info should be available directly. (and if it is not, I don't know what to do..) > > Would anyone have an idea on what's happening? > > Also, is there any working search among scipy > > threads... the search mailing list doesn't give me any > > result even typing something like 'matrix'... so I went > > into the archive of each month but that's not handy... > I don't really know about scipy's website, but another solution is to use google and enter this in the box when you look for 'matrix': > site:www.scipy.org/mailinglists matrix > Hope this helped ^^ > Evan > ps: when you ask a question on the mailing-list you should write an entirely new mail, not just reply to a mail and change the subject ;) > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From jaonary at free.fr Wed Jan 4 14:25:09 2006 From: jaonary at free.fr (Jaonary Rabarisoa) Date: Wed, 4 Jan 2006 20:25:09 +0100 Subject: [SciPy-user] How to built scipy on windows In-Reply-To: <43BBF7DC.7000005@iname.com> References: <1136390071.43bbefb7361c6@imp1-g19.free.fr> <43BBF7DC.7000005@iname.com> Message-ID: <53A91485-A8D2-4816-9AA0-AB2F9D8D86EB@free.fr> > > Not sure how to fix, but I *can* tell you that the svn versions of > scipy_core and scipy build extremely smoothly on WinXP if you use the > MinGW compiler. The hard part is finding the right thing to download > from MinGW. Building scipy is a lot easier than it sounds. > > OK, I'll try with MinGW. But I'd also like to use MKL instead of ATLAS and the Intel compiler instead of gcc. During configuration, distutils tries to find these libraries and compilers, so I think it's not impossible to do that.
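I guess the way to point the build at MKL is a site.cfg next to setup.py, like the one George mentioned for ACML earlier on this list. Something like the following is what I have in mind, but I have not confirmed the section or key names (they should be checked against the system_info.py in scipy core's distutils), and the paths and library names will differ with the MKL version:

[blas]
library_dirs = C:\Program Files\Intel\MKL\8.0\ia32\lib
include_dirs = C:\Program Files\Intel\MKL\8.0\include
libraries = mkl_c, libguide

[lapack]
library_dirs = C:\Program Files\Intel\MKL\8.0\ia32\lib
libraries = mkl_lapack, mkl_c, libguide

Can anyone confirm whether system_info picks something like this up on Windows, and how to tell the build to use the Intel Fortran compiler instead of g77?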
From rainman.rocks at gmail.com Wed Jan 4 15:41:15 2006 From: rainman.rocks at gmail.com (rainman) Date: Wed, 4 Jan 2006 23:41:15 +0300 Subject: [SciPy-user] Renaming scipy_core: here we go again Message-ID: <561204042.20060104234115@gmail.com> Hello scipy-user, pythematics? ;] -- Best regards, rainman mailto:rainman.rocks at gmail.com From lanceboyle at qwest.net Wed Jan 4 22:31:34 2006 From: lanceboyle at qwest.net (lanceboyle at qwest.net) Date: Wed, 4 Jan 2006 20:31:34 -0700 Subject: [SciPy-user] Renaming scipy_core: here we go again In-Reply-To: <43BB201C.7050801@ieee.org> References: <43BB201C.7050801@ieee.org> Message-ID: <014E09D2-EFCF-4DC6-A81E-180CB7DB7797@qwest.net> basic_python_numerics extended_python_numerics python_numerics_core python_numerics_extras numerics_basic numerics_extended numerics_core numerics_extras Too much typing? Eat a bigger breakfast. 8^) From alexander.borghgraef.rma at gmail.com Thu Jan 5 04:32:54 2006 From: alexander.borghgraef.rma at gmail.com (Alexander Borghgraef) Date: Thu, 5 Jan 2006 10:32:54 +0100 Subject: [SciPy-user] Renaming scipy_core: here we go again In-Reply-To: <29fd1972cad577ee682419b321523aa4@earthlink.net> References: <43BB201C.7050801@ieee.org> <43BB3E31.6080500@pgrad.unimelb.edu.au> <43BB4A33.2080100@iname.com> <29fd1972cad577ee682419b321523aa4@earthlink.net> Message-ID: <9e8c52a20601050132g4b9d8220j98d626c7103461f5@mail.gmail.com> On 1/4/06, Ted Horst wrote: > > My (limited) understanding is that scipy_core is primarily the > multidimensional array class and supporting infrastructure. It is not > all that scientific except that linear algebra and ffts have been > bundled with it. So perhaps the name should be something along the > lines of: > > ndarray > multiarray > bigarray > Seconded. The name should be descriptive of what it is, not something vague like 'scicore', which may sound relevant to developers, but not to anyone new to the library trying to solve a problem with it. The name should either refer to multidimensional arrays or linear algebra, since those are the concepts the lib is all about. A reference to tensors would be nice too, since they are basically n-D arrays, but that may be a bit obscure for some. My list of names, including the one of Ted's list I like, in preferred order: pytensor pylinalg multiarray tensorlib Also, is there a possibility of using capitals in the name, or does your naming conventions preclude that? I like PyTensor and PyLinAlg a lot better than pytensor and pylinalg, but that may be a matter of personal preference. -- Alex Borghgraef -------------- next part -------------- An HTML attachment was scrubbed... URL: From vel.accel at gmail.com Thu Jan 5 04:50:41 2006 From: vel.accel at gmail.com (Deiter Hering) Date: Thu, 05 Jan 2006 04:50:41 -0500 Subject: [SciPy-user] Renaming scipy_core: here we go again In-Reply-To: <9e8c52a20601050132g4b9d8220j98d626c7103461f5@mail.gmail.com> References: <43BB201C.7050801@ieee.org> <43BB3E31.6080500@pgrad.unimelb.edu.au> <43BB4A33.2080100@iname.com> <29fd1972cad577ee682419b321523aa4@earthlink.net> <9e8c52a20601050132g4b9d8220j98d626c7103461f5@mail.gmail.com> Message-ID: <43BCEBF1.9080801@gmail.com> Alexander Borghgraef wrote: > On 1/4/06, *Ted Horst* > wrote: > > My (limited) understanding is that scipy_core is primarily the > multidimensional array class and supporting infrastructure. It is not > all that scientific except that linear algebra and ffts have been > bundled with it. 
So perhaps the name should be something along the > lines of: > > ndarray > multiarray > bigarray > > > Seconded. The name should be descriptive of what it is, not something > vague like 'scicore', which may sound relevant > to developers, but not to anyone new to the library trying to solve a > problem with it. The name should either refer to > multidimensional arrays or linear algebra, since those are the > concepts the lib is all about. A reference to tensors would > be nice too, since they are basically n-D arrays, but that may be a > bit obscure for some. My list of names, including the > one of Ted's list I like, in preferred order: > > pytensor > pylinalg > multiarray > tensorlib > > Also, is there a possibility of using capitals in the name, or does > your naming conventions preclude that? I like PyTensor > and PyLinAlg a lot better than pytensor and pylinalg, but that may be > a matter of personal preference. > > -- > Alex Borghgraef > >------------------------------------------------------------------------ > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > Just to let the Scipy-user list know that the name is now: numpy. Dieter -------------- next part -------------- An HTML attachment was scrubbed... URL: From evan at monroig.net Thu Jan 5 05:11:06 2006 From: evan at monroig.net (Evan Monroig) Date: Thu, 5 Jan 2006 19:11:06 +0900 Subject: [SciPy-user] [solved] errors using weave with with blitz on Ubuntu/Breezy Message-ID: On 12/31/05, Fernando Perez wrote: > Evan Monroig wrote: > > Hi, > > > > I am trying to use weave to speed up my code with inline c++ code, but > > I can't find any working sample. > > > > I found a very simple one [1] to return the trace of a matrix, but > > I can't find how to incorporate the c++ code. I attached the compile > > errors. I also tried to remove the "type_converters" parameter in inline > > and change the array parentheses () into brackets [], then it compiles > > but gives wrong results... > > > > I am running on Ubuntu with > > python2.4-scipy-core=0.3.2-2ubuntu1 > > python2.4-scipy=0.3.2-3ubuntu2 > > python2.4-numeric=23.8-4 > > python2.4-numeric-ext=23.8-4 > > blitz++=1:0.8-4 (just in case) > > > > gcc version is 4.0.2 > ^^^^^^^^^^^^^^^^^^^^ > > This is your problem: only blitz 0.9 compiles with gcc4. You need to download > the newer blitz, and put it by hand in the > > /usr/lib/python2.4/site-packages/weave/blitz-20001213/blitz > > directory. Thanks again for the help. Now it works. Just in case someone has the same problem, here is how to solve it Warning: don't forget I have Ubuntu/Breezy and I use the scipy packages that come in the universe repository First download blitz from sourceforge [1] and put it for example in /usr/src/ Then replace the old blitz directory by the new as Fernando Perez explained. I did with the following commands (don't forget to save the old version just in case something goes wrong !) ---- cd /usr/lib/python2.4/site-packages/weave/blitz-20001213 sudo tar cvjf blitz_old.tar.bz2 blitz/ sudo rm -rf /usr/lib/python2.4/site-packages/weave/blitz-20001213/blitz cd /usr/src tar xvzf blitz-0.9.tar.gz cd blitz-0.9 sudo cp -a blitz/ /usr/lib/python2.4/site-packages/weave/blitz-20001213/ ---- Now you can test weave with a sample file [2]. 
Evan [1] http://sourceforge.net/project/showfiles.php?group_id=63961 [2] the sample file: (the code is not mine, and you can find the original file here http://amath.colorado.edu/faculty/fperez/python/weave_examples.html) --- #!/usr/bin/python # -*- coding: utf-8 -*- import scipy from weave import converters, inline def trace(mat): """Return the trace of a matrix. """ nrow,ncol = mat.shape code = \ """ double tr=0.0; for(int i=0;i Hi, I've built one of the latest revision of scipy from svn. Things go right and the installation was succesfull. I'd also like to use gnuplot for graphic visualisation. I see that it's in the sandbox package. But impossibile to do something with it. I can do import scipy.sandbox as sb but after that nothing! even a litle sb.gplt Any idea ? Another issue that I run into is with the very latest revision of scipy. I don't understand why it requires numpy now. The revision before 01/03/2006 don't need numpy at all and the today's (01/05/2006), during the build stage, complains that he can't find numpy.distutils... Best regards, Jaonary From oliphant.travis at ieee.org Thu Jan 5 05:31:52 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 05 Jan 2006 03:31:52 -0700 Subject: [SciPy-user] Where are the packages (modules) under sandbox In-Reply-To: <1136456072.43bcf188a9d8c@imp4-g19.free.fr> References: <1136456072.43bcf188a9d8c@imp4-g19.free.fr> Message-ID: <43BCF598.5050002@ieee.org> jaonary at free.fr wrote: >Hi, >I've built one of the latest revision of scipy from svn. Things go right and the >installation was succesfull. I'd also like to use gnuplot for graphic >visualisation. I see that it's in the sandbox package. But impossibile to do >something with it. >I can do > import scipy.sandbox as sb >but after that nothing! even a litle > sb.gplt > > You have to enable the building of the sandbox. See Lib/sandbox/setup.py You need to uncomment the packages you want to build.... -Travis From arnd.baecker at web.de Thu Jan 5 05:32:29 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Thu, 5 Jan 2006 11:32:29 +0100 (CET) Subject: [SciPy-user] Where are the packages (modules) under sandbox In-Reply-To: <1136456072.43bcf188a9d8c@imp4-g19.free.fr> References: <1136456072.43bcf188a9d8c@imp4-g19.free.fr> Message-ID: On Thu, 5 Jan 2006 jaonary at free.fr wrote: > Hi, > I've built one of the latest revision of scipy from svn. Things go right and the > installation was succesfull. I'd also like to use gnuplot for graphic > visualisation. I see that it's in the sandbox package. But impossibile to do > something with it. > I can do > import scipy.sandbox as sb > but after that nothing! even a litle > sb.gplt > > Any idea ? > Before installation, edit `Lib/sandbox/setup.py` to have the line config.add_subpackage('gplt') I haven't tried this for a while, but that's how it should work ... > Another issue that I run into is with the very latest revision of scipy. I don't > understand why it requires numpy now. The revision before 01/03/2006 don't need > numpy at all and the today's (01/05/2006), during the build stage, complains > that he can't find numpy.distutils... It was decided (for various reasons) that the new `scipy core` is now called `numpy`. So you will have to install `numpy` (the successore of Numeric/numarray/...) first and then scipy. The restructuring was done last night - so things are really fresh (but not test failures have been reported so far!) 
Commands to check out: svn co http://svn.scipy.org/svn/numpy/trunk numpy svn co http://svn.scipy.org/svn/scipy/trunk scipy If you need further information, don't hesitate to ask. HTH, Arnd From oliphant.travis at ieee.org Thu Jan 5 05:33:02 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 05 Jan 2006 03:33:02 -0700 Subject: [SciPy-user] Where are the packages (modules) under sandbox In-Reply-To: <1136456072.43bcf188a9d8c@imp4-g19.free.fr> References: <1136456072.43bcf188a9d8c@imp4-g19.free.fr> Message-ID: <43BCF5DE.2040402@ieee.org> jaonary at free.fr wrote: >Hi, >I've built one of the latest revision of scipy from svn. Things go right and the >installation was succesfull. I'd also like to use gnuplot for graphic >visualisation. I see that it's in the sandbox package. But impossibile to do >something with it. >I can do > import scipy.sandbox as sb >but after that nothing! even a litle > sb.gplt > >Any idea ? > >Another issue that I run into is with the very latest revision of scipy. I don't >understand why it requires numpy now. The revision before 01/03/2006 don't need >numpy at all and the today's (01/05/2006), during the build stage, complains >that he can't find numpy.distutils... > > You always needed scipy_core. All that happened is that scipy_core is now called numpy and numpy and scipy have their own namespaces (previously scipy_core was using the scipy namespace as well). -Travis From oliphant.travis at ieee.org Thu Jan 5 05:39:05 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 05 Jan 2006 03:39:05 -0700 Subject: [SciPy-user] New name for Scipy Core is NumPy Message-ID: <43BCF749.9060902@ieee.org> There has been a fast-paced discussion on the scipy-dev list which resulted in the renaming of the scipy_core package to numpy in preparation for the 0.9. 2 (nearing stable 1.0 release). You now need to get numpy out of svn: svn co http://svn.scipy.org/svn/numpy/trunk numpy and build it before building scipy (numpy replaces scipy_core) Also, you must explicitly call scipy.pkgload() if you want to load the scipy namespace with the numpy names (and the scipy sub-package names). Otherwise, scipy acts more like a library (you have to explicitly import scipy.linalg) This new packaging structure makes things much nicer for .eggs and should mean a more stable platform in the long run. The only big porting effort is to use replace scipy.base with numpy and/or import scipy scipy.pkgload() An actual release of numpy and scipy will follow. Best regards, -Travis From dd55 at cornell.edu Thu Jan 5 08:55:09 2006 From: dd55 at cornell.edu (Darren Dale) Date: Thu, 5 Jan 2006 08:55:09 -0500 Subject: [SciPy-user] recognizing djbfft In-Reply-To: <200512111440.48878.dd55@cornell.edu> References: <200511292029.23404.dd55@cornell.edu> <200512111440.48878.dd55@cornell.edu> Message-ID: <200601050855.09919.dd55@cornell.edu> I see from the new numpy checkout that distutils.system_info still does not recognize libdjbfft.so, only libdjbfft.a. I suggested the following change a while back, but got no response. Would someone with commit rights make the change, if it is acceptable? Thanks, Darren On Sunday 11 December 2005 14:40, Darren Dale wrote: > On Tuesday 29 November 2005 8:29 pm, Darren Dale wrote: > > Could someone tell me if svn scipy will recognize djbfft? I have > > djbfft-0.76 installed, in /usr/lib/ and /usr/include, but scipy does not > > find it according to the output of system_info.py. 
> > I would like to suggest that line 593 in distutils/system_info.py be > changed to include the libdjbfft.so: > > p = self.combine_paths (d,['libdjbfft.a', 'libdjbfft.so']) > > > Darren > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user -- Darren S. Dale, Ph.D. Cornell High Energy Synchrotron Source Cornell University 200L Wilson Lab Rt. 366 & Pine Tree Road Ithaca, NY 14853 dd55 at cornell.edu office: (607) 255-9894 fax: (607) 255-9001 From pearu at scipy.org Thu Jan 5 08:21:49 2006 From: pearu at scipy.org (Pearu Peterson) Date: Thu, 5 Jan 2006 07:21:49 -0600 (CST) Subject: [SciPy-user] recognizing djbfft In-Reply-To: <200601050855.09919.dd55@cornell.edu> References: <200511292029.23404.dd55@cornell.edu> <200512111440.48878.dd55@cornell.edu> <200601050855.09919.dd55@cornell.edu> Message-ID: Thanks for the patch, it is in svn now. Pearu On Thu, 5 Jan 2006, Darren Dale wrote: > I see from the new numpy checkout that distutils.system_info still does not > recognize libdjbfft.so, only libdjbfft.a. I suggested the following change a > while back, but got no response. Would someone with commit rights make the > change, if it is acceptable? > > Thanks, > Darren > > On Sunday 11 December 2005 14:40, Darren Dale wrote: >> On Tuesday 29 November 2005 8:29 pm, Darren Dale wrote: >>> Could someone tell me if svn scipy will recognize djbfft? I have >>> djbfft-0.76 installed, in /usr/lib/ and /usr/include, but scipy does not >>> find it according to the output of system_info.py. >> >> I would like to suggest that line 593 in distutils/system_info.py be >> changed to include the libdjbfft.so: >> >> p = self.combine_paths (d,['libdjbfft.a', 'libdjbfft.so']) >> >> >> Darren >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.net >> http://www.scipy.net/mailman/listinfo/scipy-user > > From mfmorss at aep.com Thu Jan 5 09:29:13 2006 From: mfmorss at aep.com (mfmorss at aep.com) Date: Thu, 5 Jan 2006 09:29:13 -0500 Subject: [SciPy-user] Installing SciPy on AIX 5.2 Message-ID: Here at AEP we have recently installed Python 2.4.2 from source on an AIX 5.2 machine; running testall.py revealed only inconsequential problems. We are now attempting to install SciPy, which we also have to do from source. Is it correct that the installation guidance at http://www.scipy.org is obsolete? That it refers to SciPy 0.3.2 would seem to case doubt on all the information provided. The most recent source tarball is, of course, 0.8.2. In particular, at http://www.scipy.org/documentation/buildscipy.txt, it says that Python must be installed with the zlib module enabled. Is this true? Why would SciPy depend on a file compression module? Mark F. 
Morss Principal Analyst, Market Risk American Electric Power From gpajer at rider.edu Thu Jan 5 10:12:18 2006 From: gpajer at rider.edu (Gary Pajer) Date: Thu, 05 Jan 2006 10:12:18 -0500 Subject: [SciPy-user] How to built scipy on windows : revised In-Reply-To: <43BC16B9.3060308@astraw.com> References: <1136390071.43bbefb7361c6@imp1-g19.free.fr> <43BBF7DC.7000005@iname.com> <43BC16B9.3060308@astraw.com> Message-ID: <43BD3752.4060301@rider.edu> Andrew Straw wrote: >I just posted Gary's great build instructions: > >http://new.scipy.org/Wiki/Installing_SciPy > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > > Less than 24 hours later, it's obsolete. I've updated it to reflect the numpy/scipy reorg. I also cleaned it up a bit ... and in the process it got a little longer. Maybe it's too long now. Feel free to edit at will. I didn't think it rated a wiki entry, but, well, there you go. Thanks, Andrew. This is the first time I touched a wiki, and I have a formatting question. The formatting hints tell me that three single quotes makes boldface. I put in triple quotes, but got no boldface. What did I miss? -gap From vincefn at users.sourceforge.net Thu Jan 5 10:34:01 2006 From: vincefn at users.sourceforge.net (Favre-Nicolin Vincent) Date: Thu, 5 Jan 2006 16:34:01 +0100 Subject: [SciPy-user] How to built scipy on windows : revised In-Reply-To: <43BD3752.4060301@rider.edu> References: <1136390071.43bbefb7361c6@imp1-g19.free.fr> <43BC16B9.3060308@astraw.com> <43BD3752.4060301@rider.edu> Message-ID: <200601051634.01348.vincefn@users.sourceforge.net> On Jeudi 05 Janvier 2006 16:12, Gary Pajer wrote: > This is the first time I touched a wiki, and I have a formatting > question. The formatting hints tell me that three single quotes makes > boldface. I put in triple quotes, but got no boldface. What did I miss? That's because that part of the text is within a "code display block", i.e. between {{{ }}}. No formatting can be used there - format is a pure monospace font. Vincent -- Vincent Favre-Nicolin Universit? Joseph Fourier http://v.favrenicolin.free.fr ObjCryst & Fox : http://objcryst.sourceforge.net From jh at oobleck.astro.cornell.edu Thu Jan 5 11:25:44 2006 From: jh at oobleck.astro.cornell.edu (Joe Harrington) Date: Thu, 5 Jan 2006 11:25:44 -0500 Subject: [SciPy-user] [SciPy-dev] Long live to numpy (and its list as well) In-Reply-To: (scipy-dev-request@scipy.net) References: Message-ID: <200601051625.k05GPixf021407@oobleck.astro.cornell.edu> Francesc Altet on scipy-dev: > Now that numpy is dead and we all want a long life to numpy, what > about moving discussions in this list into the venerable: > "numpy-discussion " We're one community (or want to be). Could we just have one mailing list per type of discussion (dev and user)? All the cross-posting and cross-monitoring is tedious. I don't care whether we keep scipy-* or numpy-* lists, but is there a functional reason to have four lists? Consider that soon we may have *-doc, *-newbie, *-announce, and others as well, if this takes off like we all hope. If the developers want separate lists because some are only working on one of the two packages, I can see that argument (in the future if not now). I don't see a need for two user lists, unless perhaps sorted by high and low traffic. 
I propose we either drop the numpy-* lists (if subscribers there agree), or leave them for ongoing discussion of the legacy packages, and discourage discussion of the new numpy/scipy there. Ok, flame me. --jh-- From strawman at astraw.com Thu Jan 5 12:49:38 2006 From: strawman at astraw.com (Andrew Straw) Date: Thu, 05 Jan 2006 09:49:38 -0800 Subject: [SciPy-user] How to built scipy on windows : revised In-Reply-To: <43BD3752.4060301@rider.edu> References: <1136390071.43bbefb7361c6@imp1-g19.free.fr> <43BBF7DC.7000005@iname.com> <43BC16B9.3060308@astraw.com> <43BD3752.4060301@rider.edu> Message-ID: <43BD5C32.6070504@astraw.com> Thanks for keeping the page up-to-the-minute. I removed the "preformatting" from the quote block (I initially was in a hurry, so I wanted to preserve your ASCII formatting) and cleaned up the formatting a little. Rather than use '''bold''', I made your sections start with == a level-2 header == which will be used to automatically generate a table of contents at some later date. From Fernando.Perez at colorado.edu Thu Jan 5 12:57:14 2006 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Thu, 05 Jan 2006 10:57:14 -0700 Subject: [SciPy-user] [SciPy-dev] Long live to numpy (and its list as well) In-Reply-To: <200601051625.k05GPixf021407@oobleck.astro.cornell.edu> References: <200601051625.k05GPixf021407@oobleck.astro.cornell.edu> Message-ID: <43BD5DFA.20906@colorado.edu> Joe Harrington wrote: > Francesc Altet on scipy-dev: > > >>Now that numpy is dead and we all want a long life to numpy, what >>about moving discussions in this list into the venerable: > > >>"numpy-discussion " > > > We're one community (or want to be). Could we just have one mailing > list per type of discussion (dev and user)? All the cross-posting and > cross-monitoring is tedious. I don't care whether we keep scipy-* or > numpy-* lists, but is there a functional reason to have four lists? > Consider that soon we may have *-doc, *-newbie, *-announce, and others > as well, if this takes off like we all hope. If the developers want > separate lists because some are only working on one of the two > packages, I can see that argument (in the future if not now). I don't > see a need for two user lists, unless perhaps sorted by high and low > traffic. > > I propose we either drop the numpy-* lists (if subscribers there > agree), or leave them for ongoing discussion of the legacy packages, > and discourage discussion of the new numpy/scipy there. > > Ok, flame me. Uh, no. I'm actually with you on this one: I just don't think we are a large enough group to warrant the existence of separate numpy- and scipy- lists, especially when the overlap in topics is so high. Every scipy user is, by necessity, a numpy user as well. I think that, IF in the future: 1. the traffic on the scipy- lists becomes enormous, AND 2. a significant portion of that traffic is for users of numpy as a pure array library with no scientific concerns (if it really becomes a popular 'data exchange' system for Python-and-C libraries), THEN we can consider resuscitating the numpy lists. For now, I vote on leaving them dormant, and moving all numeric(abandoned), numarray(maintenance-transition) and numpy/scipy (new development) discussion to the scipy-* lists. I don't think the occasional post about Numeric or numarray is a major concern (at least it doesn't bother me). 
It is an issue also of friendliness to newbies: I'd like to tell newcomers "for information and discussion, just join scipy-user and matplotlib-user, and you should be set on all numerics and plotting in python". Telling them to subscribe to, or monitor via gmane, 8 different lists is annoying. Cheers, f From chris at trichech.us Thu Jan 5 13:01:58 2006 From: chris at trichech.us (Christopher Fonnesbeck) Date: Thu, 5 Jan 2006 13:01:58 -0500 Subject: [SciPy-user] OS X installers for numpy/scipy Message-ID: For interested Mac users, I have built new numpy and scipy installers for OS X 10.4 using python 2.4.2. and have made them available at: http://trichech.us Let me know if there are any problems installing. C. -- Christopher J. Fonnesbeck Population Ecologist, Marine Mammal Section Fish & Wildlife Research Institute (FWC) St. Petersburg, FL Adjunct Assistant Professor Warnell School of Forest Resources University of Georgia Athens, GA T: 727.235.5570 E: chris at trichech.us -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 2417 bytes Desc: not available URL: From rowen at cesmail.net Thu Jan 5 13:31:45 2006 From: rowen at cesmail.net (Russell E. Owen) Date: Thu, 05 Jan 2006 10:31:45 -0800 Subject: [SciPy-user] New name for Scipy Core is NumPy References: <43BCF749.9060902@ieee.org> Message-ID: In article <43BCF749.9060902 at ieee.org>, Travis Oliphant wrote: > There has been a fast-paced discussion on the scipy-dev list which > resulted in the renaming of the scipy_core package to numpy in > preparation for the 0.9. 2 (nearing stable 1.0 release). > > You now need to get numpy out of svn: > > svn co http://svn.scipy.org/svn/numpy/trunk numpy > > and build it before building scipy (numpy replaces scipy_core) > > Also, you must explicitly call scipy.pkgload() if you want to load the > scipy namespace with the numpy names (and the scipy sub-package names). > Otherwise, scipy acts more like a library (you have to explicitly import > scipy.linalg) > > This new packaging structure makes things much nicer for .eggs and > should mean a more stable platform in the long run. The only big > porting effort is to use replace scipy.base with numpy and/or > > import scipy > scipy.pkgload() > > An actual release of numpy and scipy will follow. > > Best regards, > > -Travis Thank you. This seems like a very nice improvement over scipy.core. It may seem like a small thing, but I suspect it will greatly increase the rate of adoption (since it now looks more like a standalone package). It certainly will make me more likely to try it out. One minor suggestion: please settle on numpy or NumPy and use it for everything. That way one doesn't have to guess on the correct import statement (unlike the original NumPy which was via "import Numeric"; let's not recreate *that* mess). -- Russell From chris at trichech.us Thu Jan 5 13:54:03 2006 From: chris at trichech.us (Christopher Fonnesbeck) Date: Thu, 5 Jan 2006 13:54:03 -0500 Subject: [SciPy-user] New name for Scipy Core is NumPy In-Reply-To: <43BCF749.9060902@ieee.org> References: <43BCF749.9060902@ieee.org> Message-ID: <7078E7A3-66A4-49F5-AFCE-B92DAE31AD07@trichech.us> On Jan 5, 2006, at 5:39 AM, Travis Oliphant wrote: > This new packaging structure makes things much nicer for .eggs and > should mean a more stable platform in the long run. By the way, how does one get a numpy egg? 
On OSX, when I call "python setup.py bdist_mpkg", it builds eggs for some packages (eg matplotlib) and not for others (eg. numpy). Probably the wrong place to post this, but others may be interested. C. -- Christopher J. Fonnesbeck Population Ecologist, Marine Mammal Section Fish & Wildlife Research Institute (FWC) St. Petersburg, FL Adjunct Assistant Professor Warnell School of Forest Resources University of Georgia Athens, GA T: 727.235.5570 E: chris at trichech.us -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 2417 bytes Desc: not available URL: From strawman at astraw.com Thu Jan 5 14:12:30 2006 From: strawman at astraw.com (Andrew Straw) Date: Thu, 05 Jan 2006 11:12:30 -0800 Subject: [SciPy-user] New name for Scipy Core is NumPy In-Reply-To: <7078E7A3-66A4-49F5-AFCE-B92DAE31AD07@trichech.us> References: <43BCF749.9060902@ieee.org> <7078E7A3-66A4-49F5-AFCE-B92DAE31AD07@trichech.us> Message-ID: <43BD6F9E.4090003@astraw.com> That's because matplotlib does a "import setuptools" in the setup.py script (inside a try/except clause). The same behavior for other packages that doen't explicitly ask for setuptools if you do, at the command line, "python -c "import setuptools; execfile('setup.py')" your_setup.py_arguments_here". Cheers! Andrew Christopher Fonnesbeck wrote: > On Jan 5, 2006, at 5:39 AM, Travis Oliphant wrote: > >> This new packaging structure makes things much nicer for .eggs and >> should mean a more stable platform in the long run. > > > By the way, how does one get a numpy egg? On OSX, when I call "python > setup.py bdist_mpkg", it builds eggs for some packages (eg > matplotlib) and not for others (eg. numpy). > > Probably the wrong place to post this, but others may be interested. > > C. > > -- > Christopher J. Fonnesbeck > > Population Ecologist, Marine Mammal Section > Fish & Wildlife Research Institute (FWC) > St. Petersburg, FL > > Adjunct Assistant Professor > Warnell School of Forest Resources > University of Georgia > Athens, GA > > T: 727.235.5570 > E: chris at trichech.us > > >------------------------------------------------------------------------ > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > From oliphant.travis at ieee.org Thu Jan 5 14:31:07 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 05 Jan 2006 12:31:07 -0700 Subject: [SciPy-user] Installing SciPy on AIX 5.2 In-Reply-To: References: Message-ID: <43BD73FB.30405@ieee.org> mfmorss at aep.com wrote: >Here at AEP we have recently installed Python 2.4.2 from source on an AIX >5.2 machine; running testall.py revealed only inconsequential problems. We >are now attempting to install SciPy, which we also have to do from source. > >Is it correct that the installation guidance at http://www.scipy.org is >obsolete? That it refers to SciPy 0.3.2 would seem to case doubt on all >the information provided. The most recent source tarball is, of course, > > >0.8.2. > > SciPy 0.3.2 is (old) scipy and builds on top of Numeric. You may go that route, but I'm not sure how much response you will receive if you need help. The SVN version of scipy is (new) scipy (will be release 0.4.4) and builds on top of the new numpy package. I don't think much testing has been done on AIX, but presumably numpy should build there. 
The only issue will be the IEEE floating-point support (in other-words the error-modes will probably not be supported on AIX). If you get the latest numpy (look for a release in a few hours) and start there, you will probably get more help... >In particular, at http://www.scipy.org/documentation/buildscipy.txt, it >says that Python must be installed with the zlib module enabled. Is this >true? Why would SciPy depend on a file compression module? > > I don't know the answer to that one. I think the array_import function can make use of it, but I don't think it *has* to be installed. -Travis From managan at llnl.gov Thu Jan 5 15:21:00 2006 From: managan at llnl.gov (Rob Managan) Date: Thu, 5 Jan 2006 12:21:00 -0800 Subject: [SciPy-user] scipy.basic to numpy Message-ID: Am I right that with the change of scipy_core to numpy that where I used to use "scipy.basic.fft" I now have to use "numpy.fft"? That works but it also means that I have to add an "import numpy" at the top of the script. This comes from testing whether the bugs in the fftpack stuff for Mac OSX has been fixed yet. (It has not!) -- *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*- Rob Managan email managan at llnl.gov LLNL phone: 925-423-0903 P.O. Box 808, L-095 FAX: 925-422-3389 Livermore, CA 94551-0808 From oliphant.travis at ieee.org Thu Jan 5 16:03:24 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 05 Jan 2006 14:03:24 -0700 Subject: [SciPy-user] scipy.basic to numpy In-Reply-To: References: Message-ID: <43BD899C.4060403@ieee.org> Rob Managan wrote: >Am I right that with the change of scipy_core to numpy that where I used to use >"scipy.basic.fft" I now have to use "numpy.fft"? That works but it >also means that I have to add an "import numpy" at the top of the >script. > > Yes, but numpy.dft is the correct name of the sub-package. numpy.fft is an actual fft function. Yes, you need to use import numpy. Also, there is no playing with imports so that if you need a sub-package you have to import it. import numpy.dft There is a pkgload function in full scipy, that can auto-load sub-packages on request, but this is not done by default. >This comes from testing whether the bugs in the fftpack stuff for Mac >OSX has been fixed yet. (It has not!) > > Are you using the fftpack provided in numpy or are you trying to use fftw? Please re-send any bug reports regarding fft on OSX. Thanks. -Travis From chris at trichech.us Thu Jan 5 16:09:35 2006 From: chris at trichech.us (Christopher Fonnesbeck) Date: Thu, 5 Jan 2006 16:09:35 -0500 Subject: [SciPy-user] scipy.basic to numpy In-Reply-To: <43BD899C.4060403@ieee.org> References: <43BD899C.4060403@ieee.org> Message-ID: <3EE37000-1673-4B3A-97DB-470221C9E051@trichech.us> On Jan 5, 2006, at 4:03 PM, Travis Oliphant wrote: > Please re-send any bug reports regarding fft on OSX. 
Here is mine again, from earlier: ====================================================================== FAIL: bench_random (scipy.fftpack.pseudo_diffs.test_pseudo_diffs.test_diff) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy-0.4.4.1525-py2.4-macosx-10.4-ppc.egg/ scipy/fftpack/tests/test_pseudo_diffs.py", line 210, in bench_random assert_array_almost_equal(diff(f,1),direct_diff(f,1)) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/numpy-0.9.2.1826-py2.4-macosx-10.4-ppc.egg/ numpy/testing/utils.py", line 182, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 96.0%): Array 1: [ 4.0000000e+00 4.4583823e+00 4.6585202e+00 4.5314873e+00 4.0154475e+00 3.0782890e+00 1.7457348e+00 1.22... Array 2: [ 4.0000000e+00 3.9029585e+00 3.6083502e+00 3.1083616e+00 2.3987172e+00 1.4909128e+00 4.2567472e-01 -7.18... ====================================================================== FAIL: check_definition (scipy.fftpack.pseudo_diffs.test_pseudo_diffs.test_diff) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy-0.4.4.1525-py2.4-macosx-10.4-ppc.egg/ scipy/fftpack/tests/test_pseudo_diffs.py", line 86, in check_definition assert_array_almost_equal(diff(sin(x),2),direct_diff(sin(x),2)) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/numpy-0.9.2.1826-py2.4-macosx-10.4-ppc.egg/ numpy/testing/utils.py", line 182, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 87.5%): Array 1: [ -6.3519652e-15 -3.8268343e-01 -7.0710678e-01 -9.2387953e-01 -1.0000000e+00 -9.2387953e-01 -7.0710678e-01 -3.82... Array 2: [ -7.3854931e-15 6.5259351e-15 -2.4942634e-15 -7.5636114e-17 1.4745663e-15 -1.9133685e-15 2.2804788e-16 8.70... ====================================================================== FAIL: bench_random (scipy.fftpack.pseudo_diffs.test_pseudo_diffs.test_hilbert) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy-0.4.4.1525-py2.4-macosx-10.4-ppc.egg/ scipy/fftpack/tests/test_pseudo_diffs.py", line 356, in bench_random assert_array_almost_equal(hilbert(f),direct_hilbert(f)) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/numpy-0.9.2.1826-py2.4-macosx-10.4-ppc.egg/ numpy/testing/utils.py", line 182, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 96.0%): Array 1: [ -1.0865211e+00 -1.1538025e+00 -1.1519434e+00 -1.0725580e+00 -9.1204050e-01 -6.7405401e-01 -3.7184092e-01 -2.91... Array 2: [ -1.0865211e+00 +0.0000000e+00j -1.0575653e+00 -5.3966502e-17j -9.7152085e-01 -3.8615213e-17j -8.3113061e-01 -1.544... 
====================================================================== FAIL: check_random_even (scipy.fftpack.pseudo_diffs.test_pseudo_diffs.test_hilbert) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy-0.4.4.1525-py2.4-macosx-10.4-ppc.egg/ scipy/fftpack/tests/test_pseudo_diffs.py", line 332, in check_random_even assert_array_almost_equal(direct_hilbert(direct_ihilbert(f)),f) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/numpy-0.9.2.1826-py2.4-macosx-10.4-ppc.egg/ numpy/testing/utils.py", line 182, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [ 3.4694470e-18+0.j 0.0000000e+00-0.j 1.3276999e-18-0.j 0.0000000e+00-0.j -2.4532695e-18-0.j 0.0000000e+00-0.... Array 2: [ 0.3593213 -0.1507557 0.1843634 0.2185266 -0.1491027 0.260701 0.3475645 0.3236272 -0.0056732 -0.2636082 -0.261637... ====================================================================== FAIL: bench_random (scipy.fftpack.pseudo_diffs.test_pseudo_diffs.test_shift) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy-0.4.4.1525-py2.4-macosx-10.4-ppc.egg/ scipy/fftpack/tests/test_pseudo_diffs.py", line 424, in bench_random assert_array_almost_equal(direct_shift(f,1),sf) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/numpy-0.9.2.1826-py2.4-macosx-10.4-ppc.egg/ numpy/testing/utils.py", line 182, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 98.0%): Array 1: [ 0.6015407 0.6432035 0.7632704 0.947396 1.1731603 1.4125187 1.635358 1.8137685 1.9262065 1.9605388 1.91521... Array 2: [ 0.6015407 0.5654719 0.6040595 0.7013589 0.8369433 0.9885575 1.1344713 1.2554716 1.336468 1.3676865 1.34542... ====================================================================== FAIL: check_definition (scipy.fftpack.pseudo_diffs.test_pseudo_diffs.test_shift) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy-0.4.4.1525-py2.4-macosx-10.4-ppc.egg/ scipy/fftpack/tests/test_pseudo_diffs.py", line 390, in check_definition assert_array_almost_equal(shift(sin(x),a),direct_shift(sin(x),a)) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/numpy-0.9.2.1826-py2.4-macosx-10.4-ppc.egg/ numpy/testing/utils.py", line 182, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 88.8888888889%): Array 1: [ 0.0998334 0.4341242 0.7160532 0.9116156 0.9972237 0.9625519 0.8117822 0.5630995 0.2464987 -0.0998334 -0.43412... Array 2: [ 0.0998334 0.0938127 0.0764768 0.0499167 0.0173359 -0.0173359 -0.0499167 -0.0764768 -0.0938127 -0.0998334 -0.09381... 
====================================================================== FAIL: bench_random (scipy.fftpack.pseudo_diffs.test_pseudo_diffs.test_tilbert) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy-0.4.4.1525-py2.4-macosx-10.4-ppc.egg/ scipy/fftpack/tests/test_pseudo_diffs.py", line 273, in bench_random assert_array_almost_equal(tilbert(f,1),direct_tilbert(f,1)) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/numpy-0.9.2.1826-py2.4-macosx-10.4-ppc.egg/ numpy/testing/utils.py", line 182, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 96.0%): Array 1: [ -1.0896991e+00 -1.1569240e+00 -1.1548950e+00 -1.0752326e+00 -9.1434140e-01 -6.7589846e-01 -3.7316325e-01 -2.99... Array 2: [ -1.0896991e+00 +0.0000000e+00j -1.0606855e+00 -6.1149403e-17j -9.7447017e-01 -6.0785235e-17j -8.3380221e-01 +1.609... ====================================================================== FAIL: check_random_even (scipy.fftpack.pseudo_diffs.test_pseudo_diffs.test_tilbert) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy-0.4.4.1525-py2.4-macosx-10.4-ppc.egg/ scipy/fftpack/tests/test_pseudo_diffs.py", line 240, in check_random_even assert_array_almost_equal(direct_tilbert(direct_itilbert(f,h),h),f) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/numpy-0.9.2.1826-py2.4-macosx-10.4-ppc.egg/ numpy/testing/utils.py", line 182, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [ 0.+0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.j 0.-0.... Array 2: [-0.4887394 0.1249265 -0.103235 -0.4062288 0.2585714 0.1095033 -0.0283535 0.4611639 0.0624227 0.1781299 0.43917... ====================================================================== FAIL: bench_random (scipy.fftpack.basic.test_basic.test_fft) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy-0.4.4.1525-py2.4-macosx-10.4-ppc.egg/ scipy/fftpack/tests/test_basic.py", line 162, in bench_random assert_array_almost_equal(fft(x),y) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/numpy-0.9.2.1826-py2.4-macosx-10.4-ppc.egg/ numpy/testing/utils.py", line 182, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [ 5.3861007e+01+0.j 1.7218330e+00+1.2361686j 3.0878512e-01+1.2132911j -7.9501199e-01-2.7068478j -3.05832... Array 2: [ 5.3861007e+01 +4.9356911e+01j 1.1092897e+00 +3.3469226e+00j 3.0029669e+00 +5.0301050e-01j -4.4234200e+00 -3.517... 
====================================================================== FAIL: check_definition (scipy.fftpack.basic.test_basic.test_fft) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy-0.4.4.1525-py2.4-macosx-10.4-ppc.egg/ scipy/fftpack/tests/test_basic.py", line 98, in check_definition assert_array_almost_equal(y,y1) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/numpy-0.9.2.1826-py2.4-macosx-10.4-ppc.egg/ numpy/testing/utils.py", line 182, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [ 20.+0.j 0.+0.j -4.+4.j 0.+0.j -4.+0.j 0.-0.j -4.-4.j 0.-0.j] Array 2: [ 20. +3.j -0.7071068+0.7071068j -7. +4.j -0.7071068-0.7071068j -4. -3.j 0.707106... ====================================================================== FAIL: check_definition (scipy.fftpack.basic.test_basic.test_fftn) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy-0.4.4.1525-py2.4-macosx-10.4-ppc.egg/ scipy/fftpack/tests/test_basic.py", line 425, in check_definition assert_array_almost_equal(y,direct_dftn(x)) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/numpy-0.9.2.1826-py2.4-macosx-10.4-ppc.egg/ numpy/testing/utils.py", line 182, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 22.2222222222%): Array 1: [[ 45. +0.j -4.5+2.5980762j -4.5-2.5980762j] [-13.5+7.7942286j 0. +0.j 0. +0.j ] [-13.5-7.79... Array 2: [[ 45. +0.j -4.5+2.5980762j -4.5-2.5980762j] [-13.5+0.j 0. +0.j 0. -0.j ] [-13.5+0.j ... ====================================================================== FAIL: bench_random (scipy.fftpack.basic.test_basic.test_ifft) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy-0.4.4.1525-py2.4-macosx-10.4-ppc.egg/ scipy/fftpack/tests/test_basic.py", line 250, in bench_random assert_array_almost_equal(ifft(x),y) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/numpy-0.9.2.1826-py2.4-macosx-10.4-ppc.egg/ numpy/testing/utils.py", line 182, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [ 0.5381809 +0.0000000e+00j -0.0333 -1.8584613e-03j 0.0097213 -9.6840755e-03j -0.0184339 -2.1795412e-02j -0.033912... Array 2: [ 5.3818089e-01 +4.8791729e-01j -2.2206355e-02 -1.7093318e-02j 2.2545471e-02 -4.1871617e-02j -1.3166584e-02 +3.814... 
====================================================================== FAIL: check_definition (scipy.fftpack.basic.test_basic.test_ifft) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy-0.4.4.1525-py2.4-macosx-10.4-ppc.egg/ scipy/fftpack/tests/test_basic.py", line 183, in check_definition assert_array_almost_equal(y,y1) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/numpy-0.9.2.1826-py2.4-macosx-10.4-ppc.egg/ numpy/testing/utils.py", line 182, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [ 2.5+0.j 0. -0.j -0.5-0.5j 0. -0.j -0.5+0.j 0. +0.j -0.5+0.5j 0. +0.j ] Array 2: [ 2.5 +0.375j 0.0883883+0.0883883j -0.125 -0.5j 0.0883883-0.0883883j -0.5 -0.375j -0.0883883-0.0... ====================================================================== FAIL: check_random_real (scipy.fftpack.basic.test_basic.test_ifft) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy-0.4.4.1525-py2.4-macosx-10.4-ppc.egg/ scipy/fftpack/tests/test_basic.py", line 217, in check_random_real assert_array_almost_equal (ifft(fft(x)),x) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/numpy-0.9.2.1826-py2.4-macosx-10.4-ppc.egg/ numpy/testing/utils.py", line 182, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 98.0392156863%): Array 1: [ 0.1560985 +0.0000000e+00j 0.6894917 -6.5307237e-18j 0.4770869 +0.0000000e+00j 0.399722 -4.9826377e-17j 0.579505... Array 2: [ 1.5609845e-01 8.9684084e-01 3.7733582e-01 6.8525305e-01 7.1896685e-01 6.3541571e-01 3.6191352e-01 1.78... ====================================================================== FAIL: check_definition (scipy.fftpack.basic.test_basic.test_ifftn) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy-0.4.4.1525-py2.4-macosx-10.4-ppc.egg/ scipy/fftpack/tests/test_basic.py", line 594, in check_definition assert_array_almost_equal(y,direct_idftn(x)) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/numpy-0.9.2.1826-py2.4-macosx-10.4-ppc.egg/ numpy/testing/utils.py", line 182, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 22.2222222222%): Array 1: [[ 5. +0.j -0.5-0.2886751j -0.5+0.2886751j] [-1.5-0.8660254j 0. +0.j 0. +0.j ] [-1.5+0.8660254j ... Array 2: [[ 5. +0.j -0.5-0.2886751j -0.5+0.2886751j] [-1.5+0.j 0. -0.j 0. +0.j ] [-1.5+0.j ... 
====================================================================== FAIL: check_definition (scipy.fftpack.basic.test_basic.test_irfft) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy-0.4.4.1525-py2.4-macosx-10.4-ppc.egg/ scipy/fftpack/tests/test_basic.py", line 341, in check_definition assert_array_almost_equal(y,ifft(x1)) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/numpy-0.9.2.1826-py2.4-macosx-10.4-ppc.egg/ numpy/testing/utils.py", line 182, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 50.0%): Array 1: [ 2.625 -1.6856602 -0.375 -1.1856602 0.625 0.4356602 -0.375 0.9356602] Array 2: [ 2.625+0.j -0.375-0.j -0.375-0.j -0.375-0.j 0.625 +0.j -0.375+0.j -0.375+0.j -0.375+0.j] ---------------------------------------------------------------------- Ran 945 tests in 124.005s FAILED (failures=16) -- Christopher J. Fonnesbeck Population Ecologist, Marine Mammal Section Fish & Wildlife Research Institute (FWC) St. Petersburg, FL Adjunct Assistant Professor Warnell School of Forest Resources University of Georgia Athens, GA T: 727.235.5570 E: chris at trichech.us -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 2417 bytes Desc: not available URL: From oliphant.travis at ieee.org Thu Jan 5 16:16:35 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 05 Jan 2006 14:16:35 -0700 Subject: [SciPy-user] scipy.basic to numpy In-Reply-To: <3EE37000-1673-4B3A-97DB-470221C9E051@trichech.us> References: <43BD899C.4060403@ieee.org> <3EE37000-1673-4B3A-97DB-470221C9E051@trichech.us> Message-ID: <43BD8CB3.80508@ieee.org> Christopher Fonnesbeck wrote: > On Jan 5, 2006, at 4:03 PM, Travis Oliphant wrote: > >> Please re-send any bug reports regarding fft on OSX. > > > Here is mine again, from earlier: > Thanks, but these are tests from full scipy, right? Important, but right now my pressing concern is numpy. Rob seemed to be saying that he was getting fft failures in numpy. Perhaps this makes the case for a separate numpy list.... -Travis From chris at trichech.us Thu Jan 5 16:19:16 2006 From: chris at trichech.us (Christopher Fonnesbeck) Date: Thu, 5 Jan 2006 16:19:16 -0500 Subject: [SciPy-user] scipy.basic to numpy In-Reply-To: <43BD8CB3.80508@ieee.org> References: <43BD899C.4060403@ieee.org> <3EE37000-1673-4B3A-97DB-470221C9E051@trichech.us> <43BD8CB3.80508@ieee.org> Message-ID: <2E13CE73-0BAA-4CCB-8136-96CC66D067D8@trichech.us> On Jan 5, 2006, at 4:16 PM, Travis Oliphant wrote: > Perhaps this makes the case for a separate numpy list.... Sorry, my mistake. numpy runs clean. C. -- Christopher J. Fonnesbeck Population Ecologist, Marine Mammal Section Fish & Wildlife Research Institute (FWC) St. Petersburg, FL Adjunct Assistant Professor Warnell School of Forest Resources University of Georgia Athens, GA T: 727.235.5570 E: chris at trichech.us -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 2417 bytes Desc: not available URL: From pearu at scipy.org Thu Jan 5 15:22:26 2006 From: pearu at scipy.org (Pearu Peterson) Date: Thu, 5 Jan 2006 14:22:26 -0600 (CST) Subject: [SciPy-user] scipy.basic to numpy In-Reply-To: <3EE37000-1673-4B3A-97DB-470221C9E051@trichech.us> References: <43BD899C.4060403@ieee.org> <3EE37000-1673-4B3A-97DB-470221C9E051@trichech.us> Message-ID: On Thu, 5 Jan 2006, Christopher Fonnesbeck wrote: > On Jan 5, 2006, at 4:03 PM, Travis Oliphant wrote: > >> Please re-send any bug reports regarding fft on OSX. > > Here is mine again, from earlier: > > ====================================================================== > FAIL: bench_random (scipy.fftpack.pseudo_diffs.test_pseudo_diffs.test_diff) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy-0.4.4.1525-py2.4-macosx-10.4-ppc.egg/scipy/fftpack/tests/test_pseudo_diffs.py", > line 210, in bench_random > assert_array_almost_equal(diff(f,1),direct_diff(f,1)) > File > "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy-0.9.2.1826-py2.4-macosx-10.4-ppc.egg/numpy/testing/utils.py", > line 182, in assert_array_almost_equal > assert cond,\ > AssertionError: > Arrays are not almost equal (mismatch 96.0%): > Array 1: [ 4.0000000e+00 4.4583823e+00 4.6585202e+00 > 4.5314873e+00 > 4.0154475e+00 3.0782890e+00 1.7457348e+00 1.22... > Array 2: [ 4.0000000e+00 3.9029585e+00 3.6083502e+00 > 3.1083616e+00 > 2.3987172e+00 1.4909128e+00 4.2567472e-01 -7.18... What is the output of scipy.show_config() ? Pearu From chris at trichech.us Thu Jan 5 16:26:21 2006 From: chris at trichech.us (Christopher Fonnesbeck) Date: Thu, 5 Jan 2006 16:26:21 -0500 Subject: [SciPy-user] scipy.basic to numpy In-Reply-To: References: <43BD899C.4060403@ieee.org> <3EE37000-1673-4B3A-97DB-470221C9E051@trichech.us> Message-ID: <2CC79183-E7A6-428A-AF2A-07578A511326@trichech.us> On Jan 5, 2006, at 3:22 PM, Pearu Peterson wrote: > What is the output of > > scipy.show_config() > > ? lapack_opt_info: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] extra_compile_args = ['-faltivec'] define_macros = [('NO_ATLAS_INFO', 3)] blas_opt_info: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] extra_compile_args = ['-faltivec', '-I/System/Library/Frameworks/ vecLib.framework/Headers'] define_macros = [('NO_ATLAS_INFO', 3)] djbfft_info: NOT AVAILABLE fftw_info: libraries = ['rfftw', 'fftw'] library_dirs = ['/usr/local/lib'] define_macros = [('SCIPY_FFTW_H', None)] include_dirs = ['/usr/local/include'] C. -- Christopher J. Fonnesbeck Population Ecologist, Marine Mammal Section Fish & Wildlife Research Institute (FWC) St. Petersburg, FL Adjunct Assistant Professor Warnell School of Forest Resources University of Georgia Athens, GA T: 727.235.5570 E: chris at trichech.us -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 2417 bytes Desc: not available URL: From oliphant.travis at ieee.org Thu Jan 5 15:58:45 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 05 Jan 2006 13:58:45 -0700 Subject: [SciPy-user] Just saw this Message-ID: <43BD8885.5080300@ieee.org> http://heim.ifi.uio.no/~kent-and/software/Instant/doc/Instant.html I'd like to fix weave so it can be used *more easily* in this way (creating an extension module on-the-fly). 
-Travis From managan at llnl.gov Thu Jan 5 17:10:27 2006 From: managan at llnl.gov (Rob Managan) Date: Thu, 5 Jan 2006 14:10:27 -0800 Subject: [SciPy-user] scipy.basic to numpy In-Reply-To: <43BD8CB3.80508@ieee.org> References: <43BD899C.4060403@ieee.org> <3EE37000-1673-4B3A-97DB-470221C9E051@trichech.us> <43BD8CB3.80508@ieee.org> Message-ID: At 2:16 PM -0700 1/5/06, Travis Oliphant wrote: >Christopher Fonnesbeck wrote: > >> On Jan 5, 2006, at 4:03 PM, Travis Oliphant wrote: >> >>> Please re-send any bug reports regarding fft on OSX. >> >> >> Here is mine again, from earlier: >> >Thanks, but these are tests from full scipy, right? Important, but >right now my pressing concern is numpy. Rob seemed to be saying that he >was getting fft failures in numpy. > Sorry for being unclear. I was referencing the numpy.dft.fft function to show that it works but the fftpack version in scipy does not. The problem is in scipy not numpy. -- *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*- Rob Managan email managan at llnl.gov LLNL phone: 925-423-0903 P.O. Box 808, L-095 FAX: 925-422-3389 Livermore, CA 94551-0808 From managan at llnl.gov Thu Jan 5 17:19:45 2006 From: managan at llnl.gov (Rob Managan) Date: Thu, 5 Jan 2006 14:19:45 -0800 Subject: [SciPy-user] scipy.basic to numpy In-Reply-To: References: <43BD899C.4060403@ieee.org> <3EE37000-1673-4B3A-97DB-470221C9E051@trichech.us> Message-ID: At 2:22 PM -0600 1/5/06, Pearu Peterson wrote: > > On Jan 5, 2006, at 4:03 PM, Travis Oliphant wrote: >> > >> Please re-send any bug reports regarding fft on OSX. >> > >What is the output of > > scipy.show_config() > >? > >Pearu [mangrove:~/Documents/devel/scipy] managan% python Python 2.4.1 (#2, Mar 31 2005, 00:05:10) [GCC 3.3 20030304 (Apple Computer, Inc. build 1666)] on darwin Type "help", "copyright", "credits" or "license" for more information. 
>>> import scipy >>> scipy.show_config() lapack_opt_info: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] extra_compile_args = ['-faltivec'] define_macros = [('NO_ATLAS_INFO', 3)] blas_opt_info: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] extra_compile_args = ['-faltivec', '-I/System/Library/Frameworks/vecLib.framework/Headers'] define_macros = [('NO_ATLAS_INFO', 3)] djbfft_info: NOT AVAILABLE fftw_info: libraries = ['rfftw', 'fftw'] library_dirs = ['/Users/managan/Documents/local/lib'] define_macros = [('SCIPY_FFTW_H', None)] include_dirs = ['/Users/managan/Documents/local/include'] ****** Here is the basic error I see ****** >>> import numpy >>> x = numpy.arange(16)*2*numpy.pi/16.0 >>> sx = numpy.sin(x) >>> sx array([ 0.00000000e+00, 3.82683432e-01, 7.07106781e-01, 9.23879533e-01, 1.00000000e+00, 9.23879533e-01, 7.07106781e-01, 3.82683432e-01, 1.22464680e-16, -3.82683432e-01, -7.07106781e-01, -9.23879533e-01, -1.00000000e+00, -9.23879533e-01, -7.07106781e-01, -3.82683432e-01]) >>> scipy.fft(sx) array([ 1.77975831e-16 +0.00000000e+00j, -1.36807534e-15 -8.00000000e+00j, -3.87815369e-16 -8.43346956e-16j, 1.91553812e-16 -1.08399892e-15j, 1.14423775e-17 -4.99600361e-16j, 1.91553812e-16 -1.95820501e-16j, 6.32744729e-16 -1.77213142e-16j, 4.95108994e-16 +0.00000000e+00j, 2.88998134e-16 +0.00000000e+00j, 4.95108994e-16 -0.00000000e+00j, 6.32744729e-16 +1.77213142e-16j, 1.91553812e-16 +1.95820501e-16j, 1.14423775e-17 +4.99600361e-16j, 1.91553812e-16 +1.08399892e-15j, -3.87815369e-16 +8.43346956e-16j, -1.36807534e-15 +8.00000000e+00j]) >>> scipy.ifft(scipy.fft(sx)) array([ 2.46519033e-32+0.j, -3.12314248e-16-0.j, -8.32667268e-17-0.j, -5.85949503e-18-0.j, 0.00000000e+00-0.j, 1.72392949e-16-0.j, 1.38777878e-16-0.j, 1.18025219e-16-0.j, 1.22464680e-16+0.j, 1.18025219e-16+0.j, 1.38777878e-16+0.j, 1.72392949e-16+0.j, 0.00000000e+00+0.j, -5.85949503e-18+0.j, -8.32667268e-17+0.j, -3.12314248e-16+0.j]) >>> numpy.ifft(numpy.fft(sx)) array([ 6.16297582e-32 +0.00000000e+00j, 3.82683432e-01 +0.00000000e+00j, 7.07106781e-01 -5.55111512e-17j, 9.23879533e-01 +1.23259516e-32j, 1.00000000e+00 -2.46519033e-32j, 9.23879533e-01 +0.00000000e+00j, 7.07106781e-01 -5.55111512e-17j, 3.82683432e-01 -1.23259516e-32j, 1.22464680e-16 +0.00000000e+00j, -3.82683432e-01 +0.00000000e+00j, -7.07106781e-01 +5.55111512e-17j, -9.23879533e-01 +1.23259516e-32j, -1.00000000e+00 +2.46519033e-32j, -9.23879533e-01 +0.00000000e+00j, -7.07106781e-01 +5.55111512e-17j, -3.82683432e-01 -1.23259516e-32j]) The real parts of the last two should agree but scipy gives zeros to within roundoff. -- *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*- Rob Managan email managan at llnl.gov LLNL phone: 925-423-0903 P.O. Box 808, L-095 FAX: 925-422-3389 Livermore, CA 94551-0808 From pearu at scipy.org Thu Jan 5 16:24:31 2006 From: pearu at scipy.org (Pearu Peterson) Date: Thu, 5 Jan 2006 15:24:31 -0600 (CST) Subject: [SciPy-user] scipy.basic to numpy In-Reply-To: References: <43BD899C.4060403@ieee.org><43BD8CB3.80508@ieee.org> Message-ID: On Thu, 5 Jan 2006, Rob Managan wrote: > Sorry for being unclear. I was referencing the numpy.dft.fft function > to show that it works but the fftpack version in scipy does not. > > The problem is in scipy not numpy. I tried building scipy.fftpack against fftw-2.1.3, fftw-3.0.1, and fortran fftpack. In all cases scipy.fftpack.test() finishes without failures on debian box, both on 32 and 64-bit boxes. Could you try building scipy without fftw support? For that set export FFTW=None and rebuild scipy. 
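(Roughly, assuming a bash shell and the from-source checkout described earlier -- the prefix is just whatever was used for the previous install, and clearing the build directory is a precaution so that no objects built against fftw get reused:)

  export FFTW=None
  cd scipy
  rm -rf build                              # drop objects built against fftw
  python setup.py install --prefix=$HOME/Documents/local   # or your usual prefix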
Pearu From managan at llnl.gov Thu Jan 5 18:03:33 2006 From: managan at llnl.gov (Rob Managan) Date: Thu, 5 Jan 2006 15:03:33 -0800 Subject: [SciPy-user] scipy.basic to numpy In-Reply-To: References: <43BD899C.4060403@ieee.org><43BD8CB3.80508@ieee.org> Message-ID: At 3:24 PM -0600 1/5/06, Pearu Peterson wrote: >On Thu, 5 Jan 2006, Rob Managan wrote: > >> Sorry for being unclear. I was referencing the numpy.dft.fft function >> to show that it works but the fftpack version in scipy does not. >> >> The problem is in scipy not numpy. > >I tried building scipy.fftpack against fftw-2.1.3, fftw-3.0.1, and fortran >fftpack. In all cases scipy.fftpack.test() finishes without failures on >debian box, both on 32 and 64-bit boxes. > >Could you try building scipy without fftw support? For that >set > > export FFTW=None > >and rebuild scipy. > here are the results with the first two failures. [mangrove:~/Documents/devel/scipy] managan% python Python 2.4.1 (#2, Mar 31 2005, 00:05:10) [GCC 3.3 20030304 (Apple Computer, Inc. build 1666)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import scipy >>> scipy.show_config() dfftw_info: NOT AVAILABLE blas_opt_info: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] extra_compile_args = ['-faltivec', '-I/System/Library/Frameworks/vecLib.framework/Head ers'] define_macros = [('NO_ATLAS_INFO', 3)] djbfft_info: NOT AVAILABLE lapack_opt_info: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] extra_compile_args = ['-faltivec'] define_macros = [('NO_ATLAS_INFO', 3)] fftw_info: NOT AVAILABLE >>> scipy.fftpack.test() Found 18 tests for scipy.fftpack.basic Found 4 tests for scipy.fftpack.helper Found 20 tests for scipy.fftpack.pseudo_diffs Found 0 tests for __main__ F...F..F..FF.F........F.........F.....F.F. ====================================================================== FAIL: check_definition (scipy.fftpack.basic.test_basic.test_fft) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/managan/Documents/local/lib/python2.4/site-packages/scipy/fftpack/tests/tes t_basic.py", line 98, in check_definition assert_array_almost_equal(y,y1) File "/Users/managan/Documents/local/lib/python2.4/site-packages/numpy/testing/utils.py" , line 182, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 100.0%): Array 1: [ 20.+0.j 0.+0.j -4.+4.j 0.+0.j -4.+0.j 0.-0.j -4.-4.j 0.-0.j] Array 2: [ 20. +3.j -0.7071068+0.7071068j -7. +4.j -0.7071068-0.7071068j -4. -3.j 0.707106... ====================================================================== FAIL: check_definition (scipy.fftpack.basic.test_basic.test_fftn) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/managan/Documents/local/lib/python2.4/site-packages/scipy/fftpack/tests/tes t_basic.py", line 425, in check_definition assert_array_almost_equal(y,direct_dftn(x)) File "/Users/managan/Documents/local/lib/python2.4/site-packages/numpy/testing/utils.py" , line 182, in assert_array_almost_equal assert cond,\ AssertionError: Arrays are not almost equal (mismatch 22.2222222222%): Array 1: [[ 45. +0.j -4.5+2.5980762j -4.5-2.5980762j] [-13.5+7.7942286j 0. +0.j 0. +0.j ] [-13.5-7.79... Array 2: [[ 45. +0.j -4.5+2.5980762j -4.5-2.5980762j] [-13.5+0.j 0. +0.j 0. -0.j ] [-13.5+0.j ... 
My simple test import numpy import scipy x = numpy.arange(16)*2*numpy.pi/16.0 sx = numpy.sin(x) print 'sx' print sx print 'scipy.ifft(scipy.fft(sx)).real' print scipy.ifft(scipy.fft(sx)).real print 'numpy.ifft(numpy.fft(sx)).real' print numpy.ifft(numpy.fft(sx)).real gives [mangrove:~/Documents/devel/scipy] managan% python fft3.py sx [ 0.00000000e+00 3.82683432e-01 7.07106781e-01 9.23879533e-01 1.00000000e+00 9.23879533e-01 7.07106781e-01 3.82683432e-01 1.22464680e-16 -3.82683432e-01 -7.07106781e-01 -9.23879533e-01 -1.00000000e+00 -9.23879533e-01 -7.07106781e-01 -3.82683432e-01] scipy.ifft(scipy.fft(sx)).real [ 0.00000000e+00 -3.11858849e-16 -1.13434883e-16 5.44139747e-17 0.00000000e+00 1.12119479e-16 1.68946034e-16 1.17569819e-16 1.22464680e-16 1.17569819e-16 1.68946034e-16 1.12119479e-16 0.00000000e+00 5.44139747e-17 -1.13434883e-16 -3.11858849e-16] numpy.ifft(numpy.fft(sx)).real [ 6.16297582e-32 3.82683432e-01 7.07106781e-01 9.23879533e-01 1.00000000e+00 9.23879533e-01 7.07106781e-01 3.82683432e-01 1.22464680e-16 -3.82683432e-01 -7.07106781e-01 -9.23879533e-01 -1.00000000e+00 -9.23879533e-01 -7.07106781e-01 -3.82683432e-01] -- *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*- Rob Managan email managan at llnl.gov LLNL phone: 925-423-0903 P.O. Box 808, L-095 FAX: 925-422-3389 Livermore, CA 94551-0808 From oliphant.travis at ieee.org Thu Jan 5 18:25:07 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 05 Jan 2006 16:25:07 -0700 Subject: [SciPy-user] ANN: Release of NumPy 0.9.2 Message-ID: <43BDAAD3.6070705@ieee.org> Numpy 0.9.2 is the successor to both Numeric and Numarray and builds and uses code from both. This release marks the first release using the new (but historical) Numpy name. The release notes are included below: Best regards, -Travis Oliphant Release Notes ================== NumPy 0.9.2 marks the first release of the new array package under its new name. This new name should reflect that the new package is a hybrid of the Numeric and Numarray packages. This release adds many more features and speed-enhancements from Numarray. Changes from (SciPy Core) 0.8.4: - Namespace and Python package name is now "numpy" and "numpy" instead of "scipy" and "scipy_core" respectively. This should help packagers and egg-builders. - The NumPy arrayobject now both exports and consumes the full array_descr protocol (including field information). - Removed NOTSWAPPED flag. The byteswapping information is handled by the data-type descriptor. - The faster sorting functions were brought over from numarray leading to a factor of 2-3 speed increase in sorting. Also changed .sort() method to be in-place like numarray and lists. - Polynomial division has been fixed. - basic.fft, basic.linalg, basic.random have been moved to dft, linalg, and random respectively (no extra basic sub-package layer). - Introduced numpy.dual to allow calling of functions that are both in SciPy and NumPy when it is desired to use the SciPy function if the user has it but otherwise use the NumPy function. - The "rtype" keyword used in a couple of places has been changed to "dtype" for consistency. - Fixes so that the standard array constructor can be used to construct record-arrays with fields. - Changed method .toscalar() to .item() (added to convertcode.py) - Added numpy.lib.mlab to be fully compatible with old MLab including the addition of a kaiser window even when full SciPy is not installed. - Arrays of nested records should behave better. - Fixed masked arrays buglets. 
- Added code so that strings can be converted to numbers using .astype() - Added a lexsort (lexigraphic) function so that sorting on multiple keys can be done -- very useful for record-arrays - Speed ups and bug-fixes for 1-d "fancy" indexing by going through the flattened array iterator when possible. - Added the ability to add docstrings to builtin objects "on-the-fly". Allows adding docstrings without re-compiling C-code. - Moved the weave subpackage to SciPy. - Changed the fields attribute of the dtypedescr object to return a "read-only" dictionary when accessed from Python. - Added a typeNA dictionary for the numarray types and added a compare function for dtypedescr objects so that equivalent types can be detected. Please not that all modules are imported using lower-case letters (so don't let the NumPy marketing name confuse you, the package to import is "numpy"). From chris at trichech.us Thu Jan 5 19:55:11 2006 From: chris at trichech.us (Christopher Fonnesbeck) Date: Thu, 5 Jan 2006 19:55:11 -0500 Subject: [SciPy-user] Updated OS X build instructions Message-ID: <05961580-3BBA-4F58-86A6-28F2888CA30D@trichech.us> Following today's release of Numpy, I have updated the SciPy installation instructions for OS X, and have added them to the installation page of the Wiki: http://new.scipy.org/Wiki/Installing_SciPy This made the page rather long, so I split it into a couple of pages. C. -- Christopher J. Fonnesbeck Population Ecologist, Marine Mammal Section Fish & Wildlife Research Institute (FWC) St. Petersburg, FL Adjunct Assistant Professor Warnell School of Forest Resources University of Georgia Athens, GA T: 727.235.5570 E: chris at trichech.us -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 2417 bytes Desc: not available URL: From hgamboa at gmail.com Thu Jan 5 21:40:42 2006 From: hgamboa at gmail.com (Hugo Gamboa) Date: Fri, 6 Jan 2006 02:40:42 +0000 Subject: [SciPy-user] Question about trick index functions. Message-ID: <86522b1a0601051840r4c25ed58nf33bedb836a75fd4@mail.gmail.com> I would like start to congratulate the community for the brave rename and quick release of NumPy! Following the discussions here, have been a very instructive experience on how a quality project is run in open and honest discussion. Thank you all. An now a question: When using the r_ and c_ functions from numpy: r_[1,2,3] gives the same result than c_[1,2,3] I tried to follow the code and tried the concatenate function: concatenate( ([1,2],[5,6]), axis=0) gives the same result as concatenate( ([1,2],[5,6]), axis=1) Is this the correct behaviour? When I want to produce a column vector I need to do something like: r_[1,2,3].reshape((3,1)) -- Hugo Gamboa, Phd student Communication Theory and Pattern Recognition Group Instituto de Telecomunica??es, Instituto Superior T?cnico, Portugal From oliphant.travis at ieee.org Thu Jan 5 21:48:07 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 05 Jan 2006 19:48:07 -0700 Subject: [SciPy-user] Question about trick index functions. In-Reply-To: <86522b1a0601051840r4c25ed58nf33bedb836a75fd4@mail.gmail.com> References: <86522b1a0601051840r4c25ed58nf33bedb836a75fd4@mail.gmail.com> Message-ID: <43BDDA67.6070508@ieee.org> Hugo Gamboa wrote: >I would like start to congratulate the community for the brave rename >and quick release of NumPy! 
>Following the discussions here, have been a very instructive >experience on how a quality project is run in open and honest >discussion. >Thank you all. > >An now a question: > >When using the r_ and c_ functions from numpy: > >r_[1,2,3] > >gives the same result than > >c_[1,2,3] > >I tried to follow the code and tried the concatenate function: > >concatenate( ([1,2],[5,6]), axis=0 > > >gives the same result as > >concatenate( ([1,2],[5,6]), axis=1) > > > >Is this the correct behaviour? > > Yes, although perhaps an error could be raised in the second case. The NumPy concatenate comes straight from Numeric The problem is that [1,2] and [5,6] are both 1-d arrays so that they only have one axis to concatenate along. >When I want to produce a column vector I need to do something like: > >r_[1,2,3].reshape((3,1)) > > Yes, or r_[1,2,3,'c'] which produces a 3x1 matrix. -Travis From hgamboa at gmail.com Thu Jan 5 21:55:11 2006 From: hgamboa at gmail.com (Hugo Gamboa) Date: Fri, 6 Jan 2006 02:55:11 +0000 Subject: [SciPy-user] Question about trick index functions. In-Reply-To: <43BDDA67.6070508@ieee.org> References: <86522b1a0601051840r4c25ed58nf33bedb836a75fd4@mail.gmail.com> <43BDDA67.6070508@ieee.org> Message-ID: <86522b1a0601051855o16a0d786hfbdaa365dfdd2a1b@mail.gmail.com> So what is the difference between r_ and c_ ? Hugo Gamboa From oliphant.travis at ieee.org Thu Jan 5 22:05:53 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 05 Jan 2006 20:05:53 -0700 Subject: [SciPy-user] Question about trick index functions. In-Reply-To: <86522b1a0601051855o16a0d786hfbdaa365dfdd2a1b@mail.gmail.com> References: <86522b1a0601051840r4c25ed58nf33bedb836a75fd4@mail.gmail.com> <43BDDA67.6070508@ieee.org> <86522b1a0601051855o16a0d786hfbdaa365dfdd2a1b@mail.gmail.com> Message-ID: <43BDDE91.1050509@ieee.org> Hugo Gamboa wrote: >So what is the difference between r_ and c_ ? > > > c_ is deprecated (it's there only for compatibility) :-) For 1-d arrays there was never any difference. For 2-d arrays c_ and r_ stacked along different dimensions. Now, the r_ constructor can stack along any dimension by using a string integer as the last element, but note this has the same limitation as concatenate: the arrays stacked together must actually have the dimension to stack along.... Compare the output of a = arange(6).reshape(2,3) r_[a,a] with r_[a,a,'-1'] c_[a,a] # not recommended for use anymore... The real use of r_[] is to quickly concatenate arrays together to build up complicated arrays. It was developed when I was using SciPy to teach a signal processing course and the student lab-manuals had Matlab exercises where they used matlab to build up compilcated arrays quickly using bracket notation: -Travis From prabhu_r at users.sf.net Thu Jan 5 21:42:15 2006 From: prabhu_r at users.sf.net (Prabhu Ramachandran) Date: Fri, 6 Jan 2006 08:12:15 +0530 Subject: [SciPy-user] [SciPy-dev] Just saw this In-Reply-To: <43BD8885.5080300@ieee.org> References: <43BD8885.5080300@ieee.org> Message-ID: <17341.55559.291887.235802@monster.iitb.ac.in> >>>>> "Travis" == Travis Oliphant writes: Travis> http://heim.ifi.uio.no/~kent-and/software/Instant/doc/Instant.html Travis> I'd like to fix weave so it can be used *more easily* in Travis> this way (creating an extension module on-the-fly). It is quite easy to do this with weave but with 2-3 lines more code. Look at weave/examples/ramp2.py, fibonacci.py or increment_example.py. 
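A minimal sketch of that pattern, assuming the usual weave.inline() conventions (array arguments exposed by name, Na[] holding their dimensions, return_val carrying the result back to Python); the bundled examples named above remain the authoritative reference:

from scipy import weave

def inline_sum(a):
    # Sum a 1-d double array in C; weave builds and caches the
    # extension module on the fly the first time this runs.
    code = """
           double total = 0.0;
           for (int i = 0; i < Na[0]; ++i) {
               total += a[i];
           }
           return_val = total;
           """
    return weave.inline(code, ['a'])

The first call pays the compile cost; later calls reuse the cached module, which is what makes the two or three extra lines worthwhile.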
cheers, prabhu From hgamboa at gmail.com Thu Jan 5 22:25:58 2006 From: hgamboa at gmail.com (Hugo Gamboa) Date: Fri, 6 Jan 2006 03:25:58 +0000 Subject: [SciPy-user] Question about trick index functions. In-Reply-To: <43BDDE91.1050509@ieee.org> References: <86522b1a0601051840r4c25ed58nf33bedb836a75fd4@mail.gmail.com> <43BDDA67.6070508@ieee.org> <86522b1a0601051855o16a0d786hfbdaa365dfdd2a1b@mail.gmail.com> <43BDDE91.1050509@ieee.org> Message-ID: <86522b1a0601051925g162b31a6w46a287942685fdb1@mail.gmail.com> I got the idea, and since I'm still working my transition from matlab I thought that the c_ for one dimension would do a row vector, for those quick matrix build ups like: >r_[1,2,3]+r_[1,2,3,'c'] matrix([[2, 3, 4], [3, 4, 5], [4, 5, 6]]) that is even better than in matlab via the broadcasting functionality. I thought that for using that kind of broadcasting I could use a very quick code like: r_[1,2,3]+c_[1,2,3]. Thanks for the quick answers. Hugo Gamboa On 1/6/06, Travis Oliphant wrote: > Hugo Gamboa wrote: > > >So what is the difference between r_ and c_ ? > > > > > > > c_ is deprecated (it's there only for compatibility) :-) > > For 1-d arrays there was never any difference. > > For 2-d arrays c_ and r_ stacked along different dimensions. > > Now, the r_ constructor can stack along any dimension by using a string > integer as the last element, but note this has the same limitation as > concatenate: the arrays stacked together must actually have the > dimension to stack along.... > > Compare the output of > > a = arange(6).reshape(2,3) > > r_[a,a] > > with > > r_[a,a,'-1'] > > c_[a,a] # not recommended for use anymore... > > > The real use of r_[] is to quickly concatenate arrays together to build > up complicated arrays. It was developed when I was using SciPy to teach > a signal processing course and the student lab-manuals had Matlab > exercises where they used matlab to build up compilcated arrays quickly > using bracket notation: > > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From brendansimons at yahoo.ca Thu Jan 5 22:38:22 2006 From: brendansimons at yahoo.ca (Brendan Simons) Date: Thu, 5 Jan 2006 22:38:22 -0500 Subject: [SciPy-user] NumpyArray.resize(0) In-Reply-To: References: Message-ID: <3736A6D9-90F7-460A-B6EC-EEE6027F4406@yahoo.ca> Congrats Travis, Pearu and all. I'm very excited that we'll soon have a standard for Python. I apologize if the following has been discussed before: When I try the following code: a = numpy.ones(5) a.resize(0) I get: ValueError: newsize is zero; cannot delete an array in this way Numeric gave this result too, while Numarray hapily returns a 0d array. I prefer the latter because it simplifies my current requirement: a data structure, based on numpy, which holds an arbitrary number of data points, and which can be extended or truncated at will. Is there a good reason for Numeric/Numpy's behaviour, as opposed to Numarray's? Brendan -- Brendan Simons, Project Engineer Stern Laboratories, Hamilton Ontario __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! 
Mail has the best spam protection around http://mail.yahoo.com From oliphant.travis at ieee.org Thu Jan 5 23:51:37 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 05 Jan 2006 21:51:37 -0700 Subject: [SciPy-user] NumpyArray.resize(0) In-Reply-To: <3736A6D9-90F7-460A-B6EC-EEE6027F4406@yahoo.ca> References: <3736A6D9-90F7-460A-B6EC-EEE6027F4406@yahoo.ca> Message-ID: <43BDF759.3090002@ieee.org> Brendan Simons wrote: >Congrats Travis, Pearu and all. I'm very excited that we'll soon >have a standard for Python. >I apologize if the following has been discussed before: > >When I try the following code: > >a = numpy.ones(5) >a.resize(0) > >I get: > >ValueError: newsize is zero; cannot delete an array in this wa0 > > >Numeric gave this result too, while Numarray hapily returns a 0d >array. I prefer the latter because it simplifies my current >requirement: a data structure, based on numpy, which holds an >arbitrary number of data points, and which can be extended or >truncated at will. Is there a good reason for Numeric/Numpy's >behaviour, as opposed to Numarray's? > > I can't think of a good reason except that's the way Numeric did it. I don't think a change in this behavior would hurt anybody. But let's be clear. A 0d array and a size-0 array are two different things. A 0d array actually has room for one element while a size-0 array has one of the dimensions as 0. I just checked, and numarray returns a size-0 array. I can change this. But, note also that if another object is referencing a you can't do a resize like this: i.e. a = numpy.ones(5) b = a a.resize(...) will give an error because the object a has more than one reference to it. I don't see anyway around this problem short of not using system malloc to construct memory (but instead going through a separate memory object like numarray does). This will have a performance impact for small arrays -- I'm not sure how much of a one, though. The point of resize is to be able to modify the size of the memory pointer for the array. You can't do this is another array is using the same memory pointer (which it might be if this array has more than one reference). If you really need the numarray ability to swap out the memory address of the array for another one, then you might as well use the resize function: a = resize(a,(0,)) # which we will have to fix to make work right ;-) -Travis From arnd.baecker at web.de Fri Jan 6 02:36:16 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Fri, 6 Jan 2006 08:36:16 +0100 (CET) Subject: [SciPy-user] scipy.basic to numpy In-Reply-To: References: Message-ID: Hi Pearu, On Thu, 5 Jan 2006, Pearu Peterson wrote: > On Thu, 5 Jan 2006, Rob Managan wrote: > > > Sorry for being unclear. I was referencing the numpy.dft.fft function > > to show that it works but the fftpack version in scipy does not. > > > > The problem is in scipy not numpy. > > I tried building scipy.fftpack against fftw-2.1.3, fftw-3.0.1, and fortran > fftpack. In all cases scipy.fftpack.test() finishes without failures on > debian box, both on 32 and 64-bit boxes. Do you also observe a very poor performance of fftw-3.0.1 for (in particular for complex Arrays)? Best, Arnd (some) Details: In [11]: numpy.__version__ Out[11]: '0.9.3.1837' In [12]: scipy.__version__ Out[12]: '0.4.4.1526' This is on an Opteron, but we also see similar results on other machines ... 
Fast Fourier Transform ================================================= | real input | complex input ------------------------------------------------- size | scipy | Numeric | scipy | Numeric ------------------------------------------------- 100 | 0.05 | 0.06 | 0.88 | 0.05 (secs for 7000 calls) 1000 | 0.04 | 0.08 | 0.51 | 0.08 (secs for 2000 calls) 256 | 0.11 | 0.10 | 1.43 | 0.11 (secs for 10000 calls) 512 | 0.17 | 0.19 | 1.68 | 0.19 (secs for 10000 calls) 1024 | 0.02 | 0.04 | 0.23 | 0.03 (secs for 1000 calls) 2048 | 0.04 | 0.07 | 0.34 | 0.07 (secs for 1000 calls) 4096 | 0.05 | 0.11 | 0.29 | 0.11 (secs for 500 calls) 8192 | 0.11 | 0.48 | 0.65 | 0.48 (secs for 500 calls) .... Multi-dimensional Fast Fourier Transform =================================================== | real input | complex input --------------------------------------------------- size | scipy | Numeric | scipy | Numeric --------------------------------------------------- 100x100 | 0.06 | 0.06 | 0.05 | 0.07 (secs for 100 calls) 1000x100 | 0.05 | 0.11 | 0.06 | 0.10 (secs for 7 calls) 256x256 | 0.11 | 0.10 | 0.12 | 0.11 (secs for 10 calls) 512x512 | 0.34 | 0.20 | 0.32 | 0.20 (secs for 3 calls) ..... Inverse Fast Fourier Transform =============================================== | real input | complex input ----------------------------------------------- size | scipy | Numeric | scipy | Numeric ----------------------------------------------- 100 | 0.05 | 0.15 | 0.92 | 0.14 (secs for 7000 calls) 1000 | 0.06 | 0.17 | 0.54 | 0.18 (secs for 2000 calls) 256 | 0.11 | 0.27 | 1.49 | 0.28 (secs for 10000 calls) 512 | 0.17 | 0.43 | 1.76 | 0.45 (secs for 10000 calls) 1024 | 0.02 | 0.07 | 0.24 | 0.08 (secs for 1000 calls) 2048 | 0.05 | 0.14 | 0.35 | 0.14 (secs for 1000 calls) 4096 | 0.05 | 0.18 | 0.30 | 0.20 (secs for 500 calls) 8192 | 0.10 | 0.70 | 0.67 | 0.73 (secs for 500 calls) ! ldd /home/abaecker/BUILDS3/BuildDir/inst_numpy/lib/python2.4/site-packages/scipy/fftpack/_fftpack.so libfftw3.so.3 => /scr/python/lib/libfftw3.so.3 (0x00002aaaaabb6000) libg2c.so.0 => /scr/python/lib64/libg2c.so.0 (0x00002aaaaad66000) libm.so.6 => /lib64/tls/libm.so.6 (0x00002aaaaaebc000) libgcc_s.so.1 => /scr/python/lib64/libgcc_s.so.1 (0x00002aaaab014000) libc.so.6 => /lib64/tls/libc.so.6 (0x00002aaaab11f000) /lib64/ld-linux-x86-64.so.2 (0x0000555555554000) From pearu at scipy.org Fri Jan 6 02:34:24 2006 From: pearu at scipy.org (Pearu Peterson) Date: Fri, 6 Jan 2006 01:34:24 -0600 (CST) Subject: [SciPy-user] scipy.fftpack performance (was Re: scipy.basic to numpy) In-Reply-To: References: Message-ID: This is what I get on a Opteron box: # Use fftw-2.1.3: pearu at opt:~/svn/scipy/Lib/fftpack$ FFTW3=None python setup.py build pearu at opt:~/svn/scipy/Lib/fftpack$ python tests/test_basic.py -l 10 Found 23 tests for __main__ Fast Fourier Transform ================================================= | real input | complex input ------------------------------------------------- size | scipy | Numeric | scipy | Numeric ------------------------------------------------- 100 | 0.07 | 0.07 | 0.07 | 0.08 (secs for 7000 calls) 1000 | 0.07 | 0.11 | 0.09 | 0.11 (secs for 2000 calls) 256 | 0.13 | 0.16 | 0.15 | 0.15 (secs for 10000 calls) 512 | 0.19 | 0.28 | 0.22 | 0.29 (secs for 10000 calls) 1024 | 0.03 | 0.06 | 0.04 | 0.06 (secs for 1000 calls) 2048 | 0.06 | 0.10 | 0.09 | 0.10 (secs for 1000 calls) 4096 | 0.06 | 0.15 | 0.09 | 0.16 (secs for 500 calls) 8192 | 0.15 | 0.68 | 0.37 | 0.70 (secs for 500 calls) ... 
---------------------------------------------------------------------- Ran 23 tests in 26.286s # Use fftw-3.0.1: pearu at opt:~/svn/scipy/Lib/fftpack$ FFTW2=None python setup.py build pearu at opt:~/svn/scipy/Lib/fftpack$ python tests/test_basic.py -l 10 Found 23 tests for __main__ Fast Fourier Transform ================================================= | real input | complex input ------------------------------------------------- size | scipy | Numeric | scipy | Numeric ------------------------------------------------- 100 | 0.07 | 0.08 | 0.43 | 0.09 (secs for 7000 calls) 1000 | 0.07 | 0.12 | 0.61 | 0.12 (secs for 2000 calls) 256 | 0.15 | 0.16 | 0.99 | 0.16 (secs for 10000 calls) 512 | 0.22 | 0.29 | 1.53 | 0.29 (secs for 10000 calls) 1024 | 0.04 | 0.06 | 0.26 | 0.06 (secs for 1000 calls) 2048 | 0.06 | 0.10 | 0.48 | 0.10 (secs for 1000 calls) 4096 | 0.06 | 0.15 | 0.48 | 0.16 (secs for 500 calls) 8192 | 0.15 | 0.68 | 1.11 | 0.69 (secs for 500 calls) .... ---------------------------------------------------------------------- Ran 23 tests in 38.188s On Fri, 6 Jan 2006, Arnd Baecker wrote: > Hi Pearu, > > Do you also observe a very poor performance of fftw-3.0.1 for > (in particular for complex Arrays)? So, yes. I get similar behavior on a 32-bit box. So, it's probably not a 64-bit issue. A reason for using fftw3 is slower than when using fftw2 could be due to the fact that fftw2 wrappers use cache while fftw3 wrappers don't, if I recall correctly. I'll look into later.. Could you create a ticket in http://projects.scipy.org/scipy/scipy/wiki about this issue? Pearu From arnd.baecker at web.de Fri Jan 6 04:01:55 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Fri, 6 Jan 2006 10:01:55 +0100 (CET) Subject: [SciPy-user] scipy.fftpack performance (was Re: scipy.basic to numpy) In-Reply-To: References: Message-ID: On Fri, 6 Jan 2006, Pearu Peterson wrote: [...] > On Fri, 6 Jan 2006, Arnd Baecker wrote: > > > Hi Pearu, > > > > Do you also observe a very poor performance of fftw-3.0.1 for > > (in particular for complex Arrays)? > > So, yes. I get similar behavior on a 32-bit box. So, it's probably not a > 64-bit issue. > > A reason for using fftw3 is slower than when using fftw2 could be due to > the fact that fftw2 wrappers use cache while fftw3 wrappers don't, if I > recall correctly. I'll look into later.. That would be fantastic! > Could you create a ticket in > > http://projects.scipy.org/scipy/scipy/wiki > > about this issue? Sure. I submitted that anonymously (it seems I have either forgotten my login/password or I never had one for that ;-) There it is: http://projects.scipy.org/scipy/scipy/ticket/1 (does this honestly mean this is the first ticket for scipy? If I had known that I would written a nicer text ;-). Best, Arnd From brendansimons at yahoo.ca Fri Jan 6 08:22:48 2006 From: brendansimons at yahoo.ca (Brendan Simons) Date: Fri, 6 Jan 2006 08:22:48 -0500 Subject: [SciPy-user] SciPy-user Digest, Vol 29, Issue 11 In-Reply-To: References: Message-ID: On 6-Jan-06, at 3:01 AM, Travis wrote: > I can't think of a good reason except that's the way Numeric did it. > I don't think a change in this behavior would hurt anybody. > > But let's be clear. A 0d array and a size-0 array are two different > things. > > A 0d array actually has room for one element while a size-0 array has > one of the dimensions as 0. I just checked, and numarray returns a > size-0 array. Right, my mistake. I meant a size 0 array. > > I can change this. Great! That would be very helpful to me. 
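To make the 0-d versus size-0 distinction above concrete, a tiny sketch (illustrative values only):

import numpy

zero_d = numpy.array(5)      # 0-d array: shape (), room for exactly one element
size_0 = numpy.zeros((0,))   # size-0 array: shape (0,), no elements at all

print zero_d.shape, zero_d.size   # () 1
print size_0.shape, size_0.size   # (0,) 0

The change being discussed is about letting .resize() produce the second kind, an array with no elements, in place.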
> > But, note also that if another object is referencing a you can't do a > resize like this: > > i.e. > > a = numpy.ones(5) > b = a > a.resize(...) will give an error because the object a has more > than one > reference to it. Oh, I hadn't realized this. I can work around that limitation, but it might not be apparent to future users. I can't comment about the relative merits of malloc approaches, but I certainly don't want to sacrifice the performance of numpy for this use case. > > The point of resize is to be able to modify the size of the memory > pointer for the array. You can't do this is another array is using > the > same memory pointer (which it might be if this array has more than one > reference). So the memory address belongs to the reference, and not the array itself? *Looks embarassed for not understanding better how things work* > > If you really need the numarray ability to swap out the memory address > of the array for another one, then you might as well use the resize > function: > > a = resize(a,(0,)) # which we will have to fix to make work > right ;-) > The global resize function will change the memory address of all references to the array then? If so, that sounds like an adequate solution, thanks. Brendan -- Brendan Simons, Project Engineer Stern Laboratories, Hamilton Ontario -------------- next part -------------- An HTML attachment was scrubbed... URL: From pajer at iname.com Fri Jan 6 09:41:48 2006 From: pajer at iname.com (Gary) Date: Fri, 06 Jan 2006 09:41:48 -0500 Subject: [SciPy-user] Why are there two FFTs? Message-ID: <43BE81AC.209@iname.com> scipy.fftpack.* and numpy.dft.* are they different under the hood? From haase at msg.ucsf.edu Fri Jan 6 12:07:52 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 6 Jan 2006 09:07:52 -0800 Subject: [SciPy-user] Question about trick index functions. In-Reply-To: <43BDDE91.1050509@ieee.org> References: <86522b1a0601051840r4c25ed58nf33bedb836a75fd4@mail.gmail.com> <86522b1a0601051855o16a0d786hfbdaa365dfdd2a1b@mail.gmail.com> <43BDDE91.1050509@ieee.org> Message-ID: <200601060907.52789.haase@msg.ucsf.edu> On Thursday 05 January 2006 19:05, Travis Oliphant wrote: > Hugo Gamboa wrote: > >So what is the difference between r_ and c_ ? > > c_ is deprecated (it's there only for compatibility) :-) > First off, congratulation from me too - I looks great and I'm excited ;-) Is it really true that c_ is deprecated ?? It sounds quite useful to me: r_ for row and c_ for column ! And maybe it should even turn a 1D into a (expected) 2D column (instead off just being a synonym for r_ ) - Sebastian > For 1-d arrays there was never any difference. > > For 2-d arrays c_ and r_ stacked along different dimensions. > > Now, the r_ constructor can stack along any dimension by using a string > integer as the last element, but note this has the same limitation as > concatenate: the arrays stacked together must actually have the > dimension to stack along.... > > Compare the output of > > a = arange(6).reshape(2,3) > > r_[a,a] > > with > > r_[a,a,'-1'] > > c_[a,a] # not recommended for use anymore... > > > The real use of r_[] is to quickly concatenate arrays together to build > up complicated arrays. 
It was developed when I was using SciPy to teach > a signal processing course and the student lab-manuals had Matlab > exercises where they used matlab to build up compilcated arrays quickly > using bracket notation: > > -Travis From elcorto at gmx.net Fri Jan 6 12:09:27 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Fri, 06 Jan 2006 18:09:27 +0100 Subject: [SciPy-user] numpy svn Message-ID: <43BEA447.8060701@gmx.net> Hi Travis said the svn location of numpy is http://svn.scipy.org/svn/numpy/trunk numpy but the numpy/README.txt says " The current version is always available from a Subversion repostiory: http://svn.numpy.org/svn/numpy_core/trunk" What's the right way? cheers, steve -- "People like Blood Sausage too. People are Morons!" -- Phil Connors, Groundhog Day From robert.kern at gmail.com Fri Jan 6 12:15:30 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 06 Jan 2006 11:15:30 -0600 Subject: [SciPy-user] numpy svn In-Reply-To: <43BEA447.8060701@gmx.net> References: <43BEA447.8060701@gmx.net> Message-ID: <43BEA5B2.6070804@gmail.com> Steve Schmerler wrote: > Hi > > Travis said the svn location of numpy is > > http://svn.scipy.org/svn/numpy/trunk numpy > > but the numpy/README.txt says > > " The current version is always available from a Subversion repostiory: > > http://svn.numpy.org/svn/numpy_core/trunk" > > What's the right way? The first. The latter seems to be a victim of overzealous search-and-replace. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From joseph.a.crider at Boeing.com Fri Jan 6 12:53:02 2006 From: joseph.a.crider at Boeing.com (Crider, Joseph A) Date: Fri, 6 Jan 2006 11:53:02 -0600 Subject: [SciPy-user] Problems compiling SciPy in Cygwin Message-ID: I have unsuccessfully been attempting to compile SciPy under Cygwin on a Windows XP system, with the latest versions of all available programs installed on Cygwin. I have tried both SciPy 0.3.2 and the current version from subversion and encounter similar problems although in different locations. The last messages when compiling SciPy 0.3.2 are as follows: building 'scipy.stats.rand' extension compiling C sources gcc options: '-fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes' compile options: '-I/usr/include/python2.4 -c' gcc -shared -Wl,--enable-auto-image-base build/temp.cygwin-1.5.18-i686-2.4/Lib/stats/randmodule.o build/temp.cygwin-1.5.18-i686-2.4/Lib/stats/ranlib_all.o -L/usr/lib/python2.4/config -Lbuild/temp.cygwin-1.5.18-i686-2.4 -lpython2.4 -o build/lib.cygwin-1.5.18-i686-2.4/scipy/stats/rand.dll build/temp.cygwin-1.5.18-i686-2.4/Lib/stats/ranlib_all.o: In function `ignpoi': /cygdrive/c/Temp/Python/SciPy_complete-0.3.2/Lib/stats/ranlib_all.c:1254 : undefined reference to `_inrgcm' collect2: ld returned 1 exit status build/temp.cygwin-1.5.18-i686-2.4/Lib/stats/ranlib_all.o: In function `ignpoi': /cygdrive/c/Temp/Python/SciPy_complete-0.3.2/Lib/stats/ranlib_all.c:1254 : undefined reference to `_inrgcm' collect2: ld returned 1 exit status With the version from subversion I get a similar error while building numpy.core.umath. I did some testing that seemed to suggest that the problem is the -O3 in the gcc options, but I was unable to figure out how to change it. Any suggestions? J. Allen Crider (256)461-2699 J. 
Allen Crider (256)461-2699 From elcorto at gmx.net Fri Jan 6 13:24:41 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Fri, 06 Jan 2006 19:24:41 +0100 Subject: [SciPy-user] help/import question In-Reply-To: <43BEA5B2.6070804@gmail.com> References: <43BEA447.8060701@gmx.net> <43BEA5B2.6070804@gmail.com> Message-ID: <43BEB5E9.10508@gmx.net> Hi Some things I discoverd while playing arround with the recent numpy/scipy svn: ################################################################################################################# In [9]: ?scipy [...] Available subpackages --------------------- stats --- Statistical Functions sparse --- Sparse matrix [*] lib --- Python wrappers to external libraries linalg --- Linear algebra routines signal --- Signal Processing Tools [*] misc --- Various utilities that don't have another home. interpolate --- Interpolation Tools [*] optimize --- Optimization Tools cluster --- Vector Quantization / Kmeans [*] fftpack --- Discrete Fourier Transform algorithms io --- Data input and output [*] integrate --- Integration routines [*] lib.lapack --- Wrappers to LAPACK library special --- Special Functions lib.blas --- Wrappers to BLAS library [*] [*] - using a package requires explicit import ################################################################################################################# The *-marking of subpackages tells me that I can do import scipy ?scipy.stats but to get help on sparse I have to import scipy.signal ?scipy.signal Is this desired? If so, why? The scipy.sparse help contains nothing: In [5]: import scipy.sparse In [6]: ?scipy.sparse Type: module Base Class: String Form: Namespace: Interactive File: /usr/lib/python2.3/site-packages/scipy/sparse/__init__.py Docstring: Sparse matrix ============= In [7]: cheers, steve -- "People like Blood Sausage too. People are Morons!" -- Phil Connors, Groundhog Day From elcorto at gmx.net Fri Jan 6 13:27:46 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Fri, 06 Jan 2006 19:27:46 +0100 Subject: [SciPy-user] help/import question In-Reply-To: <43BEB5E9.10508@gmx.net> References: <43BEA447.8060701@gmx.net> <43BEA5B2.6070804@gmail.com> <43BEB5E9.10508@gmx.net> Message-ID: <43BEB6A2.7070202@gmx.net> Steve Schmerler wrote: > Hi > > Some things I discoverd while playing arround with the recent > numpy/scipy svn: > > ################################################################################################################# > > In [9]: ?scipy > > [...] > > Available subpackages > --------------------- > stats --- Statistical Functions > sparse --- Sparse matrix [*] > lib --- Python wrappers to external libraries > linalg --- Linear algebra routines > signal --- Signal Processing Tools [*] > misc --- Various utilities that don't have another home. 
> interpolate --- Interpolation Tools [*] > optimize --- Optimization Tools > cluster --- Vector Quantization / Kmeans [*] > fftpack --- Discrete Fourier Transform algorithms > io --- Data input and output [*] > integrate --- Integration routines [*] > lib.lapack --- Wrappers to LAPACK library > special --- Special Functions > lib.blas --- Wrappers to BLAS library [*] > [*] - using a package requires explicit import > > > ################################################################################################################# > > The *-marking of subpackages tells me that I can do > > import scipy > ?scipy.stats > > but to get help on sparse I have to > > import scipy.signal > ?scipy.signal > Of course I ment import scipy.sparse ?scipy.sparse :) > Is this desired? If so, why? > > > > The scipy.sparse help contains nothing: > > In [5]: import scipy.sparse > > In [6]: ?scipy.sparse > Type: module > Base Class: > String Form: '/usr/lib/python2.3/site-packages/scipy/sparse/__init__.pyc'> > Namespace: Interactive > File: /usr/lib/python2.3/site-packages/scipy/sparse/__init__.py > Docstring: > Sparse matrix > ============= > > > In [7]: > > > cheers, > steve > -- "People like Blood Sausage too. People are Morons!" -- Phil Connors, Groundhog Day From oliphant.travis at ieee.org Fri Jan 6 14:09:53 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 06 Jan 2006 12:09:53 -0700 Subject: [SciPy-user] Why are there two FFTs? In-Reply-To: <43BE81AC.209@iname.com> References: <43BE81AC.209@iname.com> Message-ID: <43BEC081.50502@ieee.org> Gary wrote: >scipy.fftpack.* and numpy.dft.* > >are they different under the hood > Yes. scipy.fftpack can link to fftw and/or djbfft for example which can be faster (but also have more install difficulties...). Also the extension module is f2py generated. numpy.dft always uses an f2c'd version of fftpack and a hand-written extension module. -teo From haase at msg.ucsf.edu Fri Jan 6 15:14:09 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Fri, 6 Jan 2006 12:14:09 -0800 Subject: [SciPy-user] Why are there two FFTs? In-Reply-To: <43BEC081.50502@ieee.org> References: <43BE81AC.209@iname.com> <43BEC081.50502@ieee.org> Message-ID: <200601061214.09442.haase@msg.ucsf.edu> On Friday 06 January 2006 11:09, Travis Oliphant wrote: > Gary wrote: > >scipy.fftpack.* and numpy.dft.* > > > >are they different under the hood > > Yes. scipy.fftpack can link to fftw and/or djbfft for example which can > be faster (but also have more install difficulties...). Also the > extension module is f2py generated. > > numpy.dft always uses an f2c'd version of fftpack and a hand-written > extension module. > > -teo Is there any support for "single precision" (float32,complex32) fft ? We consider this quite important since our 3D image can already be on the order of 500MB ... - Sebastian From joseph.a.crider at Boeing.com Fri Jan 6 16:55:39 2006 From: joseph.a.crider at Boeing.com (Crider, Joseph A) Date: Fri, 6 Jan 2006 15:55:39 -0600 Subject: [SciPy-user] Problems compiling SciPy in Cygwin Message-ID: After some more digging through documentation on Python.org and experimentation, I think I may have come up with a satisfactory answer for my needs at least. I had been just using the command python setup.py build to build SciPy. Changing the command to python setup.py build --compiler=cygwin resulted in an error similar to that below in the xplt module. Since I don't plan to use xplt, I placed that in the list ignore_packages in setup.py and got a successful build. J. 
Allen Crider (256)461-2699 -----Original Message----- From: Crider, Joseph A Sent: Friday, January 06, 2006 11:53 AM To: scipy-user at scipy.net Subject: [SciPy-user] Problems compiling SciPy in Cygwin I have unsuccessfully been attempting to compile SciPy under Cygwin on a Windows XP system, with the latest versions of all available programs installed on Cygwin. I have tried both SciPy 0.3.2 and the current version from subversion and encounter similar problems although in different locations. The last messages when compiling SciPy 0.3.2 are as follows: building 'scipy.stats.rand' extension compiling C sources gcc options: '-fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes' compile options: '-I/usr/include/python2.4 -c' gcc -shared -Wl,--enable-auto-image-base build/temp.cygwin-1.5.18-i686-2.4/Lib/stats/randmodule.o build/temp.cygwin-1.5.18-i686-2.4/Lib/stats/ranlib_all.o -L/usr/lib/python2.4/config -Lbuild/temp.cygwin-1.5.18-i686-2.4 -lpython2.4 -o build/lib.cygwin-1.5.18-i686-2.4/scipy/stats/rand.dll build/temp.cygwin-1.5.18-i686-2.4/Lib/stats/ranlib_all.o: In function `ignpoi': /cygdrive/c/Temp/Python/SciPy_complete-0.3.2/Lib/stats/ranlib_all.c:1254 : undefined reference to `_inrgcm' collect2: ld returned 1 exit status build/temp.cygwin-1.5.18-i686-2.4/Lib/stats/ranlib_all.o: In function `ignpoi': /cygdrive/c/Temp/Python/SciPy_complete-0.3.2/Lib/stats/ranlib_all.c:1254 : undefined reference to `_inrgcm' collect2: ld returned 1 exit status With the version from subversion I get a similar error while building numpy.core.umath. I did some testing that seemed to suggest that the problem is the -O3 in the gcc options, but I was unable to figure out how to change it. Any suggestions? J. Allen Crider (256)461-2699 J. Allen Crider (256)461-2699 _______________________________________________ SciPy-user mailing list SciPy-user at scipy.net http://www.scipy.net/mailman/listinfo/scipy-user From oliphant.travis at ieee.org Fri Jan 6 17:01:43 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 06 Jan 2006 15:01:43 -0700 Subject: [SciPy-user] Problems compiling SciPy in Cygwin In-Reply-To: References: Message-ID: <43BEE8C7.1040905@ieee.org> Crider, Joseph A wrote: >After some more digging through documentation on Python.org and >experimentation, I think I may have come up with a satisfactory answer >for my needs at least. I had been just using the command > python setup.py build >to build SciPy. Changing the command to > python setup.py build --compiler=cygwin >resulted in an error similar to that below in the xplt module. Since I >don't plan to use xplt, I placed that in the list ignore_packages in >setup.py and got a successful build. > > > > Thanks for reporting your solution. I'm pretty sure this is with SciPy 0.3.2 and Numeric as a base. The new SciPy 0.4.X requires NumPy as a base. But, similar instructions are probably in order there as well.. -Travis From oliphant.travis at ieee.org Fri Jan 6 15:32:23 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 06 Jan 2006 13:32:23 -0700 Subject: [SciPy-user] SciPy-user Digest, Vol 29, Issue 11 In-Reply-To: References: Message-ID: <43BED3D7.7020507@ieee.org> Brendan Simons wrote: > > Great! That would be very helpful to me. It's done in SVN. Also resize(a,0) works too. >> >> But, note also that if another object is referencing a you can't do a >> >> resize like this: >> >> >> i.e. >> >> >> a = numpy.ones(5) >> >> b = a >> >> a.resize(...) 
will give an error because the object a has more than one >> >> reference to it. >> > > Oh, I hadn't realized this. I can work around that limitation, but it > might not be apparent to future users. > I can't comment about the relative merits of malloc approaches, but I > certainly don't want to sacrifice the performance of numpy for this > use case. Ultimately this is because you can't tell whether you simply have b = a # We really could resize a because b is just another name for a... or this b = a[1:5:2,0:3:2] # now b is a *view* of a's memory. If we resize a, then b is really going to be messed up and give us problems. We could distinguish these two using another array flag like (SHAREDATA) or something. > So the memory address belongs to the reference, and not the array > itself? *Looks embarassed for not understanding better how things work* The memory address belongs to whichever array actually owns the data (which one allocated it). Check a.flags.owndata to see if the array owns it or not. Obviously you can't resize data you don't own. But, you also can't resize if another array. If the array does not own it's own data, then .base will point to the object that it got it's memory from (which might not own its own data --- you could recurse until you find it though...) > >> >> If you really need the numarray ability to swap out the memory address >> >> of the array for another one, then you might as well use the resize >> >> function: >> >> >> a = resize(a,(0,)) # which we will have to fix to make work right ;-) >> >> > > The global resize function will change the memory address of all > references to the array then? If so, that sounds like an adequate > solution, thanks. No, it has no way to do that, there is no information kept about which objects have references to the array (and indeed would be quite involved to keep track of that). All that happens is that you've just changed a to point to another array which is of size 0. > > Brendan > -- > Brendan Simons, Project Engineer > Stern Laboratories, Hamilton Ontario > >------------------------------------------------------------------------ > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > From oliphant.travis at ieee.org Fri Jan 6 16:11:38 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 06 Jan 2006 14:11:38 -0700 Subject: [SciPy-user] Why are there two FFTs? In-Reply-To: <200601061214.09442.haase@msg.ucsf.edu> References: <43BE81AC.209@iname.com> <43BEC081.50502@ieee.org> <200601061214.09442.haase@msg.ucsf.edu> Message-ID: <43BEDD0A.5040505@ieee.org> Sebastian Haase wrote: >On Friday 06 January 2006 11:09, Travis Oliphant wrote: > > >>Gary wrote: >> >> >>>scipy.fftpack.* and numpy.dft.* >>> >>>are they different under the hood >>> >>> >>Yes. scipy.fftpack can link to fftw and/or djbfft for example which can >>be faster (but also have more install difficulties...). Also the >>extension module is f2py generated. >> >>numpy.dft always uses an f2c'd version of fftpack and a hand-written >>extension module. >> >>-teo >> >> > >Is there any support for "single precision" (float32,complex32) fft ? >We consider this quite important since our 3D image can already be on the >order of 500MB ... > > > No, I don't think this is supported yet. But, it could be added for scipy.fftpack. I don't think we will add it to numpy. Otherwise, you will need to call out to the library yourself (use numpy.f2py to help...) 
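A rough sketch of why this matters for large volumes; numpy.dft is the name used earlier in this thread, and the dtype spellings and upcasting behaviour shown here are assumptions of the example rather than documented guarantees:

import numpy

x = numpy.zeros((1024, 1024), dtype=numpy.float32)  # about 4 MB of single-precision samples
X = numpy.dft.fft(x)                                # double-precision complex result, about 16 MB
print x.itemsize, X.itemsize                        # 4 versus 16 bytes per element

A double-precision complex result takes four times the storage of the float32 input, and twice what a single-precision complex transform would need, which is exactly the concern with 500 MB images.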
-Travis From joseph.a.crider at Boeing.com Fri Jan 6 17:09:48 2006 From: joseph.a.crider at Boeing.com (Crider, Joseph A) Date: Fri, 6 Jan 2006 16:09:48 -0600 Subject: [SciPy-user] Problems compiling SciPy in Cygwin Message-ID: Yes, it was SciPy 0.3.2. I am still unable to build numpy.core.umath in the copy of numpy I downloaded this morning, but I don't have time to dig any further into why. J. Allen Crider (256)461-2699 -----Original Message----- From: Travis Oliphant [mailto:oliphant.travis at ieee.org] Sent: Friday, January 06, 2006 4:02 PM To: SciPy Users List Subject: Re: [SciPy-user] Problems compiling SciPy in Cygwin Crider, Joseph A wrote: >After some more digging through documentation on Python.org and >experimentation, I think I may have come up with a satisfactory answer >for my needs at least. I had been just using the command > python setup.py build >to build SciPy. Changing the command to > python setup.py build --compiler=cygwin resulted in an error similar >to that below in the xplt module. Since I don't plan to use xplt, I >placed that in the list ignore_packages in setup.py and got a >successful build. > > > > Thanks for reporting your solution. I'm pretty sure this is with SciPy 0.3.2 and Numeric as a base. The new SciPy 0.4.X requires NumPy as a base. But, similar instructions are probably in order there as well.. -Travis _______________________________________________ SciPy-user mailing list SciPy-user at scipy.net http://www.scipy.net/mailman/listinfo/scipy-user From Fernando.Perez at colorado.edu Fri Jan 6 17:19:13 2006 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Fri, 06 Jan 2006 15:19:13 -0700 Subject: [SciPy-user] Problems compiling SciPy in Cygwin In-Reply-To: <43BEE8C7.1040905@ieee.org> References: <43BEE8C7.1040905@ieee.org> Message-ID: <43BEECE1.8000707@colorado.edu> Travis Oliphant wrote: > Crider, Joseph A wrote: > > >>After some more digging through documentation on Python.org and >>experimentation, I think I may have come up with a satisfactory answer >>for my needs at least. I had been just using the command >> python setup.py build >>to build SciPy. Changing the command to >> python setup.py build --compiler=cygwin >>resulted in an error similar to that below in the xplt module. Since I >>don't plan to use xplt, I placed that in the list ignore_packages in >>setup.py and got a successful build. >> >> >> >> > > Thanks for reporting your solution. I'm pretty sure this is with SciPy > 0.3.2 and Numeric as a base. > > The new SciPy 0.4.X requires NumPy as a base. But, similar > instructions are probably in order there as well.. Except that xplt is off by default in scipy (it's only in the sandbox), so hopefully less problems will arise. Still, the --compiler=cygwin nugget is a good one for win32 users. Cheers, f From Fernando.Perez at colorado.edu Fri Jan 6 17:39:23 2006 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Fri, 06 Jan 2006 15:39:23 -0700 Subject: [SciPy-user] SciPy-user Digest, Vol 29, Issue 11 In-Reply-To: <43BED3D7.7020507@ieee.org> References: <43BED3D7.7020507@ieee.org> Message-ID: <43BEF19B.6070307@colorado.edu> Travis Oliphant wrote: > Ultimately this is because you can't tell whether you simply have > > b = a # We really could resize a because b is just another name for a... > > or this > > b = a[1:5:2,0:3:2] # now b is a *view* of a's memory. If we resize a, > then b is really going to be messed up and give us problems. > > We could distinguish these two using another array flag like (SHAREDATA) > or something. 
I'm not sure that can work in python: In [3]: a = N.arange(10) In [4]: b = a[::2] a.SHAREDATA is now true In [5]: del b a.SHAREDATA is still true, but shouldn't. There is no way, to my knowledge, of enforcing this correctly. In a sense, SHAREDATA would be a reference counting mechanism for numpy arrays, but Python doesn't really give you enough low-level access to do this reliably (remember that __del__ methods of objects are NOT guaranteed to run, so even that is not an option). Cheers, f From oliphant.travis at ieee.org Fri Jan 6 18:06:16 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 06 Jan 2006 16:06:16 -0700 Subject: [SciPy-user] SciPy-user Digest, Vol 29, Issue 11 In-Reply-To: <43BEF19B.6070307@colorado.edu> References: <43BED3D7.7020507@ieee.org> <43BEF19B.6070307@colorado.edu> Message-ID: <43BEF7E8.5000806@ieee.org> Fernando Perez wrote: >Travis Oliphant wrote: > > > >>Ultimately this is because you can't tell whether you simply have >> >>b = a # We really could resize a because b is just another name for a... >> >>or this >> >>b = a[1:5:2,0:3:2] # now b is a *view* of a's memory. If we resize a, >>then b is really going to be messed up and give us problems. >> >>We could distinguish these two using another array flag like (SHAREDATA) >>or something. >> >> > >I'm not sure that can work in python: > >In [3]: a = N.arange(10) > >In [4]: b = a[::2] > >a.SHAREDATA is now true > >In [5]: del b > >a.SHAREDATA is still true, but shouldn't. > > You are right. We would also have to have a variable storing the number of shares, and then objects that share the data would have to decrement it correctly. So, it's a lot harder than a simple flag. In fact, I don't think we should pursue it... -Travis From Fernando.Perez at colorado.edu Fri Jan 6 18:53:59 2006 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Fri, 06 Jan 2006 16:53:59 -0700 Subject: [SciPy-user] SciPy-user Digest, Vol 29, Issue 11 In-Reply-To: <43BEF7E8.5000806@ieee.org> References: <43BED3D7.7020507@ieee.org> <43BEF19B.6070307@colorado.edu> <43BEF7E8.5000806@ieee.org> Message-ID: <43BF0317.8000305@colorado.edu> Travis Oliphant wrote: > You are right. We would also have to have a variable storing the number > of shares, and then objects that share the data would have to decrement > it correctly. So, it's a lot harder than a simple flag. In fact, I > don't think we should pursue it... As I said, it's reference counting, in a language with automatic memory management. You'd have to impose that any object which refers to another MUST always call a .close() method, much like is done with files (python doesn't guarantee that it closes files when they go out of scope, only when the interpreter shuts down). I don't think any of us wants to have to write; b = a[::2] c = z[1::3] ... b.close(); c.close();... return This is slow, error prone and annoying. So let's not :) Cheers, f From oliphant.travis at ieee.org Fri Jan 6 18:17:01 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 06 Jan 2006 16:17:01 -0700 Subject: [SciPy-user] numpy lists In-Reply-To: References: <200601051625.k05GPixf021407@oobleck.astro.cornell.edu> <43BD5DFA.20906@colorado.edu> <43BD6109.9030402@sympatico.ca> <43BD7A99.8070002@noaa.gov> Message-ID: <43BEFA6D.7090305@ieee.org> Bruce Southey wrote: > I am also +1 to keep numpy for the same reasons as Colin and Chris. So > far there has been nothing in alpha SciPy versions that offer any > advantage over Numarray for what I use or develop. 
There are two new mailing lists for NumPy numpy-devel at lists.sourceforge.net numpy-user at lists.sourceforge.net These are for developers and users to talk about only NumPy The SciPy lists can be for SciPy itself. Two packages deserve separate lists. -Travis From strawman at astraw.com Fri Jan 6 20:09:58 2006 From: strawman at astraw.com (Andrew Straw) Date: Fri, 06 Jan 2006 17:09:58 -0800 Subject: [SciPy-user] numpy lists In-Reply-To: <43BEFA6D.7090305@ieee.org> References: <200601051625.k05GPixf021407@oobleck.astro.cornell.edu> <43BD5DFA.20906@colorado.edu> <43BD6109.9030402@sympatico.ca> <43BD7A99.8070002@noaa.gov> <43BEFA6D.7090305@ieee.org> Message-ID: <43BF14E6.8070507@astraw.com> Travis Oliphant wrote: >There are two new mailing lists for NumPy > >numpy-devel at lists.sourceforge.net >numpy-user at lists.sourceforge.net > >These are for developers and users to talk about only NumPy > > You can subscribe to these lists from http://sourceforge.net/mail/?group_id=1369 From ryanlists at gmail.com Sat Jan 7 14:02:33 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Sat, 7 Jan 2006 14:02:33 -0500 Subject: [SciPy-user] vector subtraction error Message-ID: I am still using old scipy, so maybe this is no longer an issue in the new NumPy, but I seem to do this to myself a fair ammount. I think I have to 1-d vectors and I need to subtract them, but some how there shapes are (n,) and (n,1) and when I subtract them I get something that is shape (n,n): (Pdb) shape(cb.dBmag()) Out[3]: (4250,) (Pdb) shape(curb.dBmag()) Out[3]: (4250, 1) (Pdb) temp=cb.dBmag()-curb.dBmag() (Pdb) shape(temp) Out[3]: (4250, 4250) Would there be a terrible performance cost to check for this when array subtraction is called? Would this be different in the new NumPy? Thanks, Ryan From pebarrett at gmail.com Sat Jan 7 15:54:59 2006 From: pebarrett at gmail.com (Paul Barrett) Date: Sat, 7 Jan 2006 15:54:59 -0500 Subject: [SciPy-user] vector subtraction error In-Reply-To: References: Message-ID: <40e64fa20601071254w4f5bef9bsdce7711ac8ac1dc3@mail.gmail.com> On 1/7/06, Ryan Krauss wrote: > > I am still using old scipy, so maybe this is no longer an issue in the > new NumPy, but I seem to do this to myself a fair ammount. I think I > have to 1-d vectors and I need to subtract them, but some how there > shapes are (n,) and (n,1) and when I subtract them I get something > that is shape (n,n): > > (Pdb) shape(cb.dBmag()) > Out[3]: (4250,) > (Pdb) shape(curb.dBmag()) > Out[3]: (4250, 1) > (Pdb) temp=cb.dBmag()-curb.dBmag() > (Pdb) shape(temp) > Out[3]: (4250, 4250) > > Would there be a terrible performance cost to check for this when > array subtraction is called? Would this be different in the new > NumPy? > You are seeing the array broadcasting behavior of Numeric/Numarray/Numpy, which behaves like an outer product when operating on row and column vectors. The output array that you are seeing is the result of this behavior, since you are subtracting a column vector from a row vector. You probably want to reshape the column vector into a row vector and then subtract. Note that this behaviour will never change. It is a feature of Numpy. -- Paul -- Paul Barrett, PhD Johns Hopkins University Assoc. Research Scientist Dept of Physics and Astronomy Phone: 410-516-5190 Baltimore, MD 21218 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From robert.kern at gmail.com Sat Jan 7 16:14:13 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 07 Jan 2006 15:14:13 -0600 Subject: [SciPy-user] vector subtraction error In-Reply-To: References: Message-ID: <43C02F25.2050500@gmail.com> Ryan Krauss wrote: > I am still using old scipy, so maybe this is no longer an issue in the > new NumPy, but I seem to do this to myself a fair ammount. I think I > have to 1-d vectors and I need to subtract them, but some how there > shapes are (n,) and (n,1) and when I subtract them I get something > that is shape (n,n): > > (Pdb) shape(cb.dBmag()) > Out[3]: (4250,) > (Pdb) shape(curb.dBmag()) > Out[3]: (4250, 1) > (Pdb) temp=cb.dBmag()-curb.dBmag() > (Pdb) shape(temp) > Out[3]: (4250, 4250) > > Would there be a terrible performance cost to check for this when > array subtraction is called? Would this be different in the new > NumPy? This is deliberate behavior for broadcasting. Travis' book (which I expect to be retitled shortly) describes it in "2.4 Universal Functions for arrays". The numarray and Numeric manuals should also describe the broadcasting rules. In particular the rule being applied here is that when arrays have different numbers of dimensions, the smaller dimensions (in this case (n,)) get prepended with 1s until they get are the same number as the larger. So here, you are essentially subtracting a (n,1) array from a (1,n) array. You should also review your .dBmag() method to find out why it is returning arrays of different dimensions when you want to treat them as the same number of dimensions. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From ryanlists at gmail.com Sat Jan 7 16:15:53 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Sat, 7 Jan 2006 16:15:53 -0500 Subject: [SciPy-user] vector subtraction error In-Reply-To: <40e64fa20601071254w4f5bef9bsdce7711ac8ac1dc3@mail.gmail.com> References: <40e64fa20601071254w4f5bef9bsdce7711ac8ac1dc3@mail.gmail.com> Message-ID: I understand what you are saying, but as far as I can tell, a vector with the shape of (n,) is neither a row nor a column. It can't be transposed from one to the other: In [9]: shape(temp2) Out[9]: (9,) In [10]: temp=arange(1,10) In [11]: shape(temp) Out[11]: (9,) In [12]: temp2=transpose(temp) In [13]: shape(temp2) Out[13]: (9,) This is entirely different from a 2d array: In [16]: temp3=atleast_2d(temp) In [17]: shape(temp3) Out[17]: (1, 9) In [18]: temp4=transpose(temp3) In [19]: shape(temp4) Out[19]: (9, 1) So, is a 1D array always a row vector? And if so, shouldn't subtracting a 1xn from and nx1 raise an error, since it makes no sense from a linear algebra stand point? Ryan On 1/7/06, Paul Barrett wrote: > On 1/7/06, Ryan Krauss wrote: > > I am still using old scipy, so maybe this is no longer an issue in the > > new NumPy, but I seem to do this to myself a fair ammount. I think I > > have to 1-d vectors and I need to subtract them, but some how there > > shapes are (n,) and (n,1) and when I subtract them I get something > > that is shape (n,n): > > > > (Pdb) shape(cb.dBmag()) > > Out[3]: (4250,) > > (Pdb) shape(curb.dBmag()) > > Out[3]: (4250, 1) > > (Pdb) temp=cb.dBmag()-curb.dBmag() > > (Pdb) shape(temp) > > Out[3]: (4250, 4250) > > > > Would there be a terrible performance cost to check for this when > > array subtraction is called? Would this be different in the new > > NumPy? 
> > > > You are seeing the array broadcasting behavior of Numeric/Numarray/Numpy, > which behaves like an outer product when operating on row and column > vectors. The output array that you are seeing is the result of this > behavior, since you are subtracting a column vector from a row vector. You > probably want to reshape the column vector into a row vector and then > subtract. > > Note that this behaviour will never change. It is a feature of Numpy. > > -- Paul > > -- > Paul Barrett, PhD Johns Hopkins University > Assoc. Research Scientist Dept of Physics and Astronomy > Phone: 410-516-5190 Baltimore, MD 21218 > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > > > From ryanlists at gmail.com Sat Jan 7 16:21:34 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Sat, 7 Jan 2006 16:21:34 -0500 Subject: [SciPy-user] vector subtraction error In-Reply-To: <43C02F25.2050500@gmail.com> References: <43C02F25.2050500@gmail.com> Message-ID: In response to Robert's question, I will admit that I am somewhat responsible for my own problem. I do not believe it is the dBmag function, but the inputs to it that are different. The reason is most likely that buried somewhere in my code I had a check for a problem like this that got applied to experimental data and not to the model I am trying to fit to it. I have written a function called colwise as a solution to these kinds of problems that makes sure that the largest dimension of any matrix is the # of rows. I just don't want to have to sprinkle these colwise transformations all over (I believe it also calls atleast_2d and that may not be appropriate all the time). Ryan On 1/7/06, Robert Kern wrote: > Ryan Krauss wrote: > > I am still using old scipy, so maybe this is no longer an issue in the > > new NumPy, but I seem to do this to myself a fair ammount. I think I > > have to 1-d vectors and I need to subtract them, but some how there > > shapes are (n,) and (n,1) and when I subtract them I get something > > that is shape (n,n): > > > > (Pdb) shape(cb.dBmag()) > > Out[3]: (4250,) > > (Pdb) shape(curb.dBmag()) > > Out[3]: (4250, 1) > > (Pdb) temp=cb.dBmag()-curb.dBmag() > > (Pdb) shape(temp) > > Out[3]: (4250, 4250) > > > > Would there be a terrible performance cost to check for this when > > array subtraction is called? Would this be different in the new > > NumPy? > > This is deliberate behavior for broadcasting. Travis' book (which I expect to be > retitled shortly) describes it in "2.4 Universal Functions for arrays". The > numarray and Numeric manuals should also describe the broadcasting rules. In > particular the rule being applied here is that when arrays have different > numbers of dimensions, the smaller dimensions (in this case (n,)) get prepended > with 1s until they get are the same number as the larger. So here, you are > essentially subtracting a (n,1) array from a (1,n) array. > > You should also review your .dBmag() method to find out why it is returning > arrays of different dimensions when you want to treat them as the same number of > dimensions. > > -- > Robert Kern > robert.kern at gmail.com > > "In the fields of hell where the grass grows high > Are the graves of dreams allowed to die." 
> -- Richard Harter > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From evan.monroig at gmail.com Sat Jan 7 19:32:08 2006 From: evan.monroig at gmail.com (Evan Monroig) Date: Sun, 8 Jan 2006 09:32:08 +0900 Subject: [SciPy-user] vector subtraction error In-Reply-To: References: <40e64fa20601071254w4f5bef9bsdce7711ac8ac1dc3@mail.gmail.com> Message-ID: <20060108003208.GA10850@localhost.localdomain> On Jan.07 16h15, Ryan Krauss wrote : > I understand what you are saying, but as far as I can tell, a vector > with the shape of (n,) is neither a row nor a column. It can't be > transposed from one to the other: > In [11]: shape(temp) > Out[11]: (9,) > > In [12]: temp2=transpose(temp) > > In [13]: shape(temp2) > Out[13]: (9,) if temp is a (n,) array, you can add a new dimension to give it shape (n,1) or (1,n) : >>> temp = rand(9) >>> temp.shape (9,) >>> temp[NewAxis,:].shape (1, 9) >>> temp[:,NewAxis].shape (9, 1) Or you can reduce your (n,1) array to a 1-dimensional array: >>> temp2.shape (10, 1) >>> squeeze(temp2).shape (10,) Evan -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: Digital signature URL: From pearu at scipy.org Sun Jan 8 04:36:20 2006 From: pearu at scipy.org (Pearu Peterson) Date: Sun, 8 Jan 2006 03:36:20 -0600 (CST) Subject: [SciPy-user] help/import question In-Reply-To: <43BEB5E9.10508@gmx.net> References: <43BEA447.8060701@gmx.net> <43BEA5B2.6070804@gmail.com> <43BEB5E9.10508@gmx.net> Message-ID: On Fri, 6 Jan 2006, Steve Schmerler wrote: > Hi > > Some things I discoverd while playing arround with the recent > numpy/scipy svn: > > ################################################################################################################# > > In [9]: ?scipy > > [...] > > Available subpackages > --------------------- > stats --- Statistical Functions > sparse --- Sparse matrix [*] > lib --- Python wrappers to external libraries > linalg --- Linear algebra routines > signal --- Signal Processing Tools [*] > misc --- Various utilities that don't have another home. > interpolate --- Interpolation Tools [*] > optimize --- Optimization Tools > cluster --- Vector Quantization / Kmeans [*] > fftpack --- Discrete Fourier Transform algorithms > io --- Data input and output [*] > integrate --- Integration routines [*] > lib.lapack --- Wrappers to LAPACK library > special --- Special Functions > lib.blas --- Wrappers to BLAS library [*] > [*] - using a package requires explicit import > > > ################################################################################################################# > > The *-marking of subpackages tells me that I can do > > import scipy > ?scipy.stats > > but to get help on sparse I have to > > import scipy.signal > ?scipy.signal > > Is this desired? If so, why? Yes. Users seldomly need to use all of scipy packages in their programs, so, by default, not all scipy packages are imported while importing scipy, this reduces import time and consumes memory. If you wish to load all scipy packages, then do import scipy scipy.pkgload() and then simple ?scipy.signal also works. 
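Spelled out, the two import styles described above look like this; only scipy.pkgload() and the explicit subpackage imports already mentioned are assumed.

import scipy

# load just the starred subpackages you actually need ...
import scipy.signal
import scipy.sparse

# ... or pull everything in at once, at the cost of import time and memory:
scipy.pkgload()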
Pearu From elcorto at gmx.net Sun Jan 8 08:45:52 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Sun, 08 Jan 2006 14:45:52 +0100 Subject: [SciPy-user] help/import question In-Reply-To: References: <43BEA447.8060701@gmx.net> <43BEA5B2.6070804@gmail.com> <43BEB5E9.10508@gmx.net> Message-ID: <43C11790.8010306@gmx.net> Pearu Peterson wrote: > > On Fri, 6 Jan 2006, Steve Schmerler wrote: > > >>Hi >> >>Some things I discoverd while playing arround with the recent >>numpy/scipy svn: >> >>################################################################################################################# >> >>In [9]: ?scipy >> >>[...] >> >> Available subpackages >> --------------------- >> stats --- Statistical Functions >> sparse --- Sparse matrix [*] >> lib --- Python wrappers to external libraries >> linalg --- Linear algebra routines >> signal --- Signal Processing Tools [*] >> misc --- Various utilities that don't have another home. >> interpolate --- Interpolation Tools [*] >> optimize --- Optimization Tools >> cluster --- Vector Quantization / Kmeans [*] >> fftpack --- Discrete Fourier Transform algorithms >> io --- Data input and output [*] >> integrate --- Integration routines [*] >> lib.lapack --- Wrappers to LAPACK library >> special --- Special Functions >> lib.blas --- Wrappers to BLAS library [*] >> [*] - using a package requires explicit import >> >> >>################################################################################################################# >> >>The *-marking of subpackages tells me that I can do >> >>import scipy >>?scipy.stats >> >>but to get help on sparse I have to >> >>import scipy.signal >>?scipy.signal >> >>Is this desired? If so, why? > > > Yes. Users seldomly need to use all of scipy packages in their programs, > so, by default, not all scipy packages are imported while importing > scipy, this reduces import time and consumes memory. > > If you wish to load all scipy packages, then do > > import scipy > scipy.pkgload() > > and then simple > > ?scipy.signal > > also works. > > Pearu > So the decision to import stats, lib, linalg etc. (but not signal, interpolate, ...) is a pure memory/time thing. What I described is only an issue in interactive use anyway, i.e. if one wants to study the on-line help. Maybe scipy.pkgload() should be mentioned somewhere in the ?scipy help. cheers, steve -- "People like Blood Sausage too. People are Morons!" -- Phil Connors, Groundhog Day From vincefn at users.sourceforge.net Mon Jan 9 14:38:13 2006 From: vincefn at users.sourceforge.net (Vincent Favre-Nicolin) Date: Mon, 9 Jan 2006 20:38:13 +0100 Subject: [SciPy-user] [Matplotlib-users] Postscript driver problem? In-Reply-To: <17345.51298.739946.636293@owl.eos.ubc.ca> References: <1136747955.8883.12.camel@localhost.localdomain> <407C53E2-633D-4167-988B-59AAC3479EF4@iit.edu> <17345.51298.739946.636293@owl.eos.ubc.ca> Message-ID: <200601092038.13694.vincefn@users.sourceforge.net> On Lundi 09 Janvier 2006 03:20, Philip Austin wrote: > This behavior has to be idiosyncratic to my system (Fedora Core 3, > HP LaserJet 4050N), > but I'd appreciate any suggestions about how to debug it. [...] 
> On the off chance that anyone has a clue about what could be going on, > I've put file1.ps, file1_ps2pdf.pdf and file1_roundtrip.ps in > http://clouds.eos.ubc.ca/~phil/matplotlib_postscript I was going to blame hp, but my brother printer (HL5170DN) does not print it either - all I get is a postscript error: ERROR NAME; invalidfont COMMAND; stringwidth OPERAND STACK --stringtype-- The ps is probably sent to you printer, which is -apparently- not configured to report postscript interpretation errors, so you don't see anything. Sorry I can't be of more help, but at least it does not look like it's the fault of your printer driver. Vincent -- Vincent Favre-Nicolin Universit? Joseph Fourier http://v.favrenicolin.free.fr ObjCryst & Fox : http://objcryst.sourceforge.net From Fernando.Perez at colorado.edu Tue Jan 10 04:00:27 2006 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Tue, 10 Jan 2006 02:00:27 -0700 Subject: [SciPy-user] [ANN] IPython 0.7.0 is out. Message-ID: <43C377AB.4040901@colorado.edu> [ Sorry for the cross-post, but I know that a number of scipy members are also ipython users, and not necessarily on the ipython lists. This release has a number of significant enhancements, so hopefully this will be of interest. Please send any ipython-related requests directly to the ipython lists, to keep scipy clear of such traffic. ] Hi all, After a long hiatus (0.6.15 was out in June of 2005), I'm glad to announce the release of IPython 0.7.0, with lots of new features (the _code_ diff from the previous release is almost 9000 lines long, so there's quite a bit in here). IPython's homepage is at: http://ipython.scipy.org and downloads are at: http://ipython.scipy.org/dist I've provided: - source downloads (.tar.gz) - RPMs (for Python 2.3 and 2.4, built under Fedora Core 3). - Python Eggs (http://peak.telecommunity.com/DevCenter/PythonEggs). - a native win32 installer for both Python 2.3 and 2.4. Fedora users should note that IPython is now officially part of the Extras repository, so they can get the update from there as well (though it may lag by a few days). Debian, Fink and BSD packages for this version should be coming soon, as the respective maintainers (many thanks to Jack Moffit, Andrea Riciputi and Dryice Liu) have the time to follow their packaging procedures. A lot of new features have gone into this release, the bulk of which were driven by user feedback and requests, and more importantly by patches from IPython users. I greatly appreciate these contributions, and hope they will continue in the future. In particular, thanks to Vivian de Smedt, Jorgen Stenarsson and Ville Vainio, who contributed large patches with much of the new significant functionality. I've tried to provide credit in the notes below and the project's ChangeLog, please let me know if I've accidentally ommitted you. Many thanks to Enthought for their continued hosting support for IPython. Release notes ------------- *** WARNING: compatibility changes *** - IPython now requires at least Python 2.3. If you can't upgrade from 2.2, you'll need to continue using IPython 0.6.15. *** End warning. As always, the NEWS file can be found at http://ipython.scipy.org/NEWS, and the full ChangeLog at http://ipython.scipy.org/ChangeLog. The highlights of this release follow. - Wildcard patterns in searches, supported by the %psearch magic, as well as the '?' operator. Type psearch? for the full details. Extremely useful, thanks to J?rgen Stenarson. - Major improvements to the pdb mode. 
It now has tab-completion, syntax highlighting and better stack handling. Thanks to Vivian De Smedt for this work (double-points given that pdb has a well-deserved reputation for being very unpleasant to work with). - Support for input with empty lines. If you have auto-indent on, this means that you need to either hit enter _twice_, or add/remove a space to your last blank line, to indicate you're done entering input. These changes also allow us to provide copy/paste of code with blank lines. - Support for pasting multiline input even with autoindent on. The code will look wrong on screen, but it will be stored and executed correctly internally. - TAB on an otherwise empty line actually inserts a tab. Convenient for indenting (for those who don't use autoindent). - Significant improvements for all multithreaded versions of ipython. Now, if your threaded code raises exceptions, instead of seeing a crash report, a normal (colored, verbose, etc.) exception is printed. Additionally, if you have pdb on, it will activate in your threaded code. Very nice for interactively debugging GUI programs. - Many fixes to embedded ipython, including proper handling of globals and tab completion. - New -t and -o options to %logstart, to respectively put timestamps in your logs, and to also log all output (tagged as #[Out]#). The default log name is now ipython_log.py, to better reflect that logs remain valid Python source. - Lightweight persistence mechanism via %store. IPython had always had %save, to write out a group of input lines directly to a file. Now, its %store companion stores persistently (associated with your profile, and auto-loaded at startup) not just source, but any python variable which can be pickled. Thanks to Matt Wilkie for the request, and ville for the patches. - Macros (created with %macro) can now be edited with %edit (just say '%edit macroname'). This, coupled with the ability to store them persistently, makes the macro system much more useful. - New guarantee that, if you disable autocalling, ipython will never call getattr() on your objects. This solves problems with code that has side-effects on attribute access. Note that TAB-completion inevitably does call getattr(), so not all forms of side-effects can be eliminated. - Unicode support for prompts. - Improvements to path handling under win32. Thanks to Ville and Jorgen for the patches. - Improvements to pager under win32. Contributed by Alexander Belchenko. - Demo class for interactive demos using ipython. - %pycat magic for showing syntax-highlighted python sources - support for download_url in setup.py, so PyPI (and setuptools) work transparently with ipython. - New exit/quit magics to exit, conditionally asking (%Exit/%Quit don't) - Automatically reopen the editor if your file has a syntax error in it (when using the %edit system). - New notation N-M for indicating the range of lines N,...,M (including both endpoints), in magic commands such as %macro, %save and %edit. - The IPython instance has a new attribute, .meta, which is an empty namespace (an instance of 'class Bunch:pass'). This is meant to provide extension writers with a safe namespace to store metadata of any kind, without the risk of name clashes with IPython's internals. - Added tab-completion support for objects with Traits, a sophisticated type definition system for Python: http://code.enthought.com/traits. - Several patches related to Emacs support. Thanks to Alex Schmolck and John Barnard. 
- New 'smart' autocall mode, which avoids autocalling if a function with no arguments is the input. The old 'full' mode can be obtained by setting the autocall parameter in the ipythonrc to 2, or via the %autocall magic. - A large amount of internal reorganization and cleanup, to allow the code to be more readily moved over to the chainsaw branch (see below). - Many other small fixes and enhancements. The changelog has full details. Enjoy, and as usual please report any problems. Regards, Fernando. From Fernando.Perez at colorado.edu Tue Jan 10 04:01:32 2006 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Tue, 10 Jan 2006 02:01:32 -0700 Subject: [SciPy-user] [ANN] IPython 0.7.0 is out. Message-ID: <43C377EC.5030504@colorado.edu> [ Sorry for the cross-post, but I know that a number of matplotlib members are also ipython users, and not necessarily on the ipython lists. This release has a number of significant enhancements, so hopefully this will be of interest. Please send any ipython-related requests directly to the ipython lists, to keep the matplotlib ones clear of such traffic. ] Hi all, After a long hiatus (0.6.15 was out in June of 2005), I'm glad to announce the release of IPython 0.7.0, with lots of new features (the _code_ diff from the previous release is almost 9000 lines long, so there's quite a bit in here). IPython's homepage is at: http://ipython.scipy.org and downloads are at: http://ipython.scipy.org/dist I've provided: - source downloads (.tar.gz) - RPMs (for Python 2.3 and 2.4, built under Fedora Core 3). - Python Eggs (http://peak.telecommunity.com/DevCenter/PythonEggs). - a native win32 installer for both Python 2.3 and 2.4. Fedora users should note that IPython is now officially part of the Extras repository, so they can get the update from there as well (though it may lag by a few days). Debian, Fink and BSD packages for this version should be coming soon, as the respective maintainers (many thanks to Jack Moffit, Andrea Riciputi and Dryice Liu) have the time to follow their packaging procedures. A lot of new features have gone into this release, the bulk of which were driven by user feedback and requests, and more importantly by patches from IPython users. I greatly appreciate these contributions, and hope they will continue in the future. In particular, thanks to Vivian de Smedt, Jorgen Stenarsson and Ville Vainio, who contributed large patches with much of the new significant functionality. I've tried to provide credit in the notes below and the project's ChangeLog, please let me know if I've accidentally ommitted you. Many thanks to Enthought for their continued hosting support for IPython. Release notes ------------- *** WARNING: compatibility changes *** - IPython now requires at least Python 2.3. If you can't upgrade from 2.2, you'll need to continue using IPython 0.6.15. *** End warning. As always, the NEWS file can be found at http://ipython.scipy.org/NEWS, and the full ChangeLog at http://ipython.scipy.org/ChangeLog. The highlights of this release follow. - Wildcard patterns in searches, supported by the %psearch magic, as well as the '?' operator. Type psearch? for the full details. Extremely useful, thanks to J?rgen Stenarson. - Major improvements to the pdb mode. It now has tab-completion, syntax highlighting and better stack handling. Thanks to Vivian De Smedt for this work (double-points given that pdb has a well-deserved reputation for being very unpleasant to work with). - Support for input with empty lines. 
If you have auto-indent on, this means that you need to either hit enter _twice_, or add/remove a space to your last blank line, to indicate you're done entering input. These changes also allow us to provide copy/paste of code with blank lines. - Support for pasting multiline input even with autoindent on. The code will look wrong on screen, but it will be stored and executed correctly internally. - TAB on an otherwise empty line actually inserts a tab. Convenient for indenting (for those who don't use autoindent). - Significant improvements for all multithreaded versions of ipython. Now, if your threaded code raises exceptions, instead of seeing a crash report, a normal (colored, verbose, etc.) exception is printed. Additionally, if you have pdb on, it will activate in your threaded code. Very nice for interactively debugging GUI programs. - Many fixes to embedded ipython, including proper handling of globals and tab completion. - New -t and -o options to %logstart, to respectively put timestamps in your logs, and to also log all output (tagged as #[Out]#). The default log name is now ipython_log.py, to better reflect that logs remain valid Python source. - Lightweight persistence mechanism via %store. IPython had always had %save, to write out a group of input lines directly to a file. Now, its %store companion stores persistently (associated with your profile, and auto-loaded at startup) not just source, but any python variable which can be pickled. Thanks to Matt Wilkie for the request, and ville for the patches. - Macros (created with %macro) can now be edited with %edit (just say '%edit macroname'). This, coupled with the ability to store them persistently, makes the macro system much more useful. - New guarantee that, if you disable autocalling, ipython will never call getattr() on your objects. This solves problems with code that has side-effects on attribute access. Note that TAB-completion inevitably does call getattr(), so not all forms of side-effects can be eliminated. - Unicode support for prompts. - Improvements to path handling under win32. Thanks to Ville and Jorgen for the patches. - Improvements to pager under win32. Contributed by Alexander Belchenko. - Demo class for interactive demos using ipython. - %pycat magic for showing syntax-highlighted python sources - support for download_url in setup.py, so PyPI (and setuptools) work transparently with ipython. - New exit/quit magics to exit, conditionally asking (%Exit/%Quit don't) - Automatically reopen the editor if your file has a syntax error in it (when using the %edit system). - New notation N-M for indicating the range of lines N,...,M (including both endpoints), in magic commands such as %macro, %save and %edit. - The IPython instance has a new attribute, .meta, which is an empty namespace (an instance of 'class Bunch:pass'). This is meant to provide extension writers with a safe namespace to store metadata of any kind, without the risk of name clashes with IPython's internals. - Added tab-completion support for objects with Traits, a sophisticated type definition system for Python: http://code.enthought.com/traits. - Several patches related to Emacs support. Thanks to Alex Schmolck and John Barnard. - New 'smart' autocall mode, which avoids autocalling if a function with no arguments is the input. The old 'full' mode can be obtained by setting the autocall parameter in the ipythonrc to 2, or via the %autocall magic. 
- A large amount of internal reorganization and cleanup, to allow the code to be more readily moved over to the chainsaw branch (see below). - Many other small fixes and enhancements. The changelog has full details. Enjoy, and as usual please report any problems. Regards, Fernando. From Fernando.Perez at colorado.edu Tue Jan 10 04:04:43 2006 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Tue, 10 Jan 2006 02:04:43 -0700 Subject: [SciPy-user] [ANN] IPython 0.7.0 is out. In-Reply-To: <43C377EC.5030504@colorado.edu> References: <43C377EC.5030504@colorado.edu> Message-ID: <43C378AB.7090306@colorado.edu> Fernando Perez wrote: > [ Sorry for the cross-post, but I know that a number of matplotlib members are > also ipython users, and not necessarily on the ipython lists. This release > has a number of significant enhancements, so hopefully this will be of > interest. Please send any ipython-related requests directly to the ipython > lists, to keep the matplotlib ones clear of such traffic. ] Argh! Apologies, hit send when trying to change the header for the mpl list. Sorry. f From giovanni.samaey at cs.kuleuven.ac.be Tue Jan 10 04:35:55 2006 From: giovanni.samaey at cs.kuleuven.ac.be (Giovanni Samaey) Date: Tue, 10 Jan 2006 10:35:55 +0100 Subject: [SciPy-user] numpy compatibility with third-party packages? Message-ID: <43C37FFB.6010105@cs.kuleuven.ac.be> Hi all, I have been making quite intensive use of scipy over the last year(s), and something that has been a pain for me with the new scipy is the combined use of e.g. scipy and mpipython or scipy and pytables etc. The issue is that mpipython and pytables assume that you are feeding them numarray or Numeric arrays, and they don't know the new numpy objects. Sometimes using Numeric.array(...) before sending and scipy.array(...) after getting them returns wrong results without me noticing what happened. Also, this is not really handy for programming. Are there efforts to make such packages use new numpy, or efforts to let arrays present them in the "expected" way (this is probably silly). Or are there plans to make scipy such a fantastic collection of packages that we do not need the "outside world" anymore? Do other people have this problem? Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From falted at pytables.org Tue Jan 10 06:46:30 2006 From: falted at pytables.org (Francesc Altet) Date: Tue, 10 Jan 2006 12:46:30 +0100 Subject: [SciPy-user] numpy compatibility with third-party packages? In-Reply-To: <43C37FFB.6010105@cs.kuleuven.ac.be> References: <43C37FFB.6010105@cs.kuleuven.ac.be> Message-ID: <200601101246.31491.falted@pytables.org> Hi Giovanni, I'm working to provide an interface to the new numpy for PyTables. However, I can't make promises about the time-frame of a public release (we plan to include some other features before doing this). So, don't worry too much about that ;-) A Dimarts 10 Gener 2006 10:35, Giovanni Samaey va escriure: > Hi all, > > I have been making quite intensive use of scipy over the last year(s), > and something that has been a pain for me with the new scipy is the > combined use > of e.g. scipy and mpipython or scipy and pytables etc. > > The issue is that mpipython and pytables assume that you are feeding > them numarray > or Numeric arrays, and they don't know the new numpy objects. Sometimes > using > Numeric.array(...) before sending and scipy.array(...) after getting > them returns wrong > results without me noticing what happened. Also, this is not really > handy for programming. 
> > Are there efforts to make such packages use new numpy, or efforts to let > arrays present them > in the "expected" way (this is probably silly). Or are there plans to > make scipy such a fantastic > collection of packages that we do not need the "outside world" anymore? > > Do other people have this problem? > > Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user -- >0,0< Francesc Altet ? ? http://www.carabos.com/ V V C?rabos Coop. V. ??Enjoy Data "-" From schofield at ftw.at Tue Jan 10 07:03:48 2006 From: schofield at ftw.at (Ed Schofield) Date: Tue, 10 Jan 2006 12:03:48 +0000 Subject: [SciPy-user] numpy compatibility with third-party packages? In-Reply-To: <43C37FFB.6010105@cs.kuleuven.ac.be> References: <43C37FFB.6010105@cs.kuleuven.ac.be> Message-ID: <49957B97-672E-45CB-B086-2E9A0A86AC5F@ftw.at> On 10/01/2006, at 9:35 AM, Giovanni Samaey wrote: > Hi all, > > The issue is that mpipython and pytables assume that you are feeding > them numarray > or Numeric arrays, and they don't know the new numpy objects. > Sometimes > using > Numeric.array(...) before sending and scipy.array(...) after getting > them returns wrong > results without me noticing what happened. Also, this is not really > handy for programming. > > Are there efforts to make such packages use new numpy, or efforts > to let > arrays present them > in the "expected" way (this is probably silly). Yes, the latest versions of Numeric and numarray should make this easy. I've started a Wiki page on this topic at http://new.scipy.org/Wiki/Array_Interface Could someone with a spare moment to answer this question please add to that page, instead of posting here? -- Ed From oliphant.travis at ieee.org Tue Jan 10 10:54:05 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 10 Jan 2006 08:54:05 -0700 Subject: [SciPy-user] [Numpy-discussion] initialise an array In-Reply-To: <43C3D080.7060507@yahoo.fr> References: <43C3D080.7060507@yahoo.fr> Message-ID: <43C3D89D.9010903@ieee.org> Humufr wrote: > Hello, > > I have a function like this: > > def test(x,xall=None): > if xall == None: xall =x > > I obtain this error message when I'm doing this with numpy (I don't > have this problem with numarray and Numeric). > > ValueError: The truth value of an array with more than one element is > ambiguous. Use a.any() or a.all You probably want to do if xall is None: xall = x This will not only be faster but will also not be ambiguous. -Travis From H.FANGOHR at soton.ac.uk Tue Jan 10 11:19:23 2006 From: H.FANGOHR at soton.ac.uk (Hans Fangohr) Date: Tue, 10 Jan 2006 16:19:23 +0000 (GMT) Subject: [SciPy-user] transition problems Numeric 24.1 and Numpy(=scipy core) In-Reply-To: <439C6E2D.6070707@gmail.com> References: <439C4AAB.70308@hoc.net> <439C6E2D.6070707@gmail.com> Message-ID: Hi, I noticed a problem with the newish scipy 0.4.3.1490 (scipy core: 0.8.4) interacting with Numeric 24.1 (on a fink install of python2.4) even though I have read before that it is believed this should work for Numerix version >=24. I summarise what I think the problem is, attach the source code and list the error message below: The program calls y = scipy.integrate.odeint(rhs_function, y0, t) where y0 is a Numeric array. The type of the return object y is now a scipy.ndarray. In earlier versions of scipy (for example 0.3.3_303.4573) the returned object was of type (Numeric) array. 
Subsequently, the code wants to manipulate y using Numeric functions such as 'Numeric.concatenate' which fails. However, I thought this should work. Any suggestions? Finally, I'd like to add some explanation: I am well aware that this example will work by using only scipy's ndarray, and I can also make this work by converting the returned value y (from odeint) into a Numeric array. However, this code is a shortened version from teaching materials which (historically) are based on Numeric, and later scipy is introduced to provide, for example, odeint. This all works with the oldish versions of Numeric and scipy we have installed. It would be a great relief to know that the code will still work for the students if our python software is upgraded (or if they install a newer version at home). I have attached the source code of the problematic program and list the error messages below. Looking forward to receiving any advice, Hans Error messages and standard output: jarjar:~/tmp fangohr$ python2.4 ode1.py Failed to import fftpack No module named fftpack Failed to import signal No module named fftpack Numeric version: 24.1 scipy version: 0.4.3.1490 scipy core version: 0.8.4 Traceback (most recent call last): File "ode1.py", line 18, in rhs dydt = N.concatenate( [dvdt,drdt] ) #this fails File "/sw/lib/python2.4/site-packages/Numeric/Numeric.py", line 236, in concatenate return multiarray.concatenate(a) ValueError: Invalid type for array odepack.error: Error occured while calling the Python function named rhs Traceback (most recent call last): File "ode1.py", line 18, in rhs dydt = N.concatenate( [dvdt,drdt] ) #this fails File "/sw/lib/python2.4/site-packages/Numeric/Numeric.py", line 236, in concatenate return multiarray.concatenate(a) ValueError: Invalid type for array odepack.error: Error occured while calling the Python function named rhs coming back from scipy's oddeint position r = ( 0.000000, 5.000000, 0.000000) jarjar:~/tmp fangohr$ -------------- next part -------------- import Numeric as N import scipy def rhs( y, t ): v = y[0:3] r = y[3:6] mass = 1.0 #kg g = 9.81 #N/kg F_grav = N.array([0,-g, 0]) dvdt = F_grav/mass #this is a Numeric array drdt = v #this is a scipy.array print type(dvdt),type(drdt) dydt = N.concatenate( [dvdt,drdt] ) #this fails return dydt #just for the record print "Numeric version:",N.__version__ print "scipy version:",scipy.__scipy_version__ print "scipy core version:",scipy.__core_version__ r = N.array([0,5,0]) #creation of initial values v = N.array([0,0,0]) t = 0 dt = 1/30.0 n = 1 #number of time steps for i in range(n): y = N.concatenate((v,r)) #combine v and r into state vector y y = scipy.integrate.odeint( rhs, y, N.array([t,t+dt]) ) print "coming back from scipy's oddeint",type(y) t = t + dt v = y[-1,0:3] r = y[-1,3:6] if r[1] < 0: #if below base plate v[1] = -v[1] #then reverse velocity (elastic bounce) print "position r = (%10f, %10f, %10f)" % (r[0],r[1],r[2]) From hgamboa at gmail.com Tue Jan 10 12:03:47 2006 From: hgamboa at gmail.com (Hugo Gamboa) Date: Tue, 10 Jan 2006 17:03:47 +0000 Subject: [SciPy-user] Bug on r_ Message-ID: <86522b1a0601100903p6884639dnd0f7826d47e9681@mail.gmail.com> This manage to create a segmentation fault: r_[0,arange(10,dtype='f')] How to track a bug when a segmentation fault appears in python? Hugo Gamboa From aisaac at american.edu Tue Jan 10 12:10:28 2006 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 10 Jan 2006 12:10:28 -0500 Subject: [SciPy-user] numpy's math library? 
Message-ID: It was recently claimed on the Gnumeric list http://mail.gnome.org/archives/gnumeric-list/2006-January/msg00006.html that if libc is not available (I assume this means at compile time) then numpy uses a fallback library that is numerically naive. (See the post for a specific example.) I suppose this would affect only Windows users, but I am one. Can someone tell me how this actually works? Thank you, Alan Isaac From nwagner at mecha.uni-stuttgart.de Tue Jan 10 12:18:28 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Tue, 10 Jan 2006 18:18:28 +0100 Subject: [SciPy-user] Bug on r_ In-Reply-To: <86522b1a0601100903p6884639dnd0f7826d47e9681@mail.gmail.com> References: <86522b1a0601100903p6884639dnd0f7826d47e9681@mail.gmail.com> Message-ID: On Tue, 10 Jan 2006 17:03:47 +0000 Hugo Gamboa wrote: > This manage to create a segmentation fault: > > r_[0,arange(10,dtype='f')] > > How to track a bug when a segmentation fault appears in >python? > > Hugo Gamboa > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user You may use gdb gdb python run gamboa.py bt Starting program: /usr/local/bin/python gamboa.py [Thread debugging using libthread_db enabled] [New Thread 1076175008 (LWP 7519)] Overwriting fft= from scipy.fftpack.basic (was from numpy.dft.fftpack) Overwriting ifft= from scipy.fftpack.basic (was from numpy.dft.fftpack) Program received signal SIGSEGV, Segmentation fault. [Switching to Thread 1076175008 (LWP 7519)] PyArray_Concatenate (op=0x438e434c, axis=0) at arrayobject.c:178 warning: Source file is more recent than executable. 178 if (PyArray_CheckExact(obj)) (gdb) bt #0 PyArray_Concatenate (op=0x438e434c, axis=0) at arrayobject.c:178 #1 0x4036495c in array_concatenate (dummy=0x0, args=0x409f9c6c, kwds=0x438de9bc) at multiarraymodule.c:4736 #2 0x0811eb56 in PyCFunction_Call (func=0x402bd18c, arg=0x409f9c6c, kw=0x438de9bc) at methodobject.c:93 #3 0x0805935e in PyObject_Call (func=0x402bd18c, arg=0x409f9c6c, kw=0x438de9bc) at abstract.c:1756 #4 0x080c4c5e in PyEval_EvalFrame (f=0x817dda4) at ceval.c:3766 #5 0x080c8bb4 in PyEval_EvalCodeEx (co=0x409db4e0, globals=0x409e00b4, locals=0x0, args=0x438e42d8, argcount=2, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2736 #6 0x0811dda2 in function_call (func=0x409e6fb4, arg=0x438e42cc, kw=0x0) at funcobject.c:548 #7 0x0805935e in PyObject_Call (func=0x409e6fb4, arg=0x438e42cc, kw=0x0) at abstract.c:1756 #8 0x08064bd4 in instancemethod_call (func=0x0, arg=0x438e42cc, kw=0x0) at classobject.c:2447 #9 0x0805935e in PyObject_Call (func=0x438ba0f4, arg=0x402a26cc, kw=0x0) at abstract.c:1756 #10 0x080989e0 in call_method (o=0x409e10cc, name=0x812b48e "__getitem__", nameobj=0x815d788, format=0x81287f4 "(O)") at typeobject.c:923 #11 0x08099082 in slot_mp_subscript (self=0x409e10cc, arg1=0x43686ccc) at typeobject.c:4221 #12 0x0805da36 in PyObject_GetItem (o=0x409e10cc, key=0x43686ccc) at abstract.c:94 #13 0x080c3186 in PyEval_EvalFrame (f=0x816d8f4) at ceval.c:1169 #14 0x080c8bb4 in PyEval_EvalCodeEx (co=0x4029af60, globals=0x4026c824, locals=0x4026c824, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ceval.c:2736 #15 0x080c8de5 in PyEval_EvalCode (co=0x4029af60, globals=0x4026c824, locals=0x4026c824) at ceval.c:484 #16 0x080f77f8 in PyRun_SimpleFileExFlags (fp=0x8160008, filename=0xbffff019 "gamboa.py", closeit=1, flags=0xbfffecf4) at pythonrun.c:1265 #17 0x08055917 in Py_Main 
(argc=1, argv=0xbfffedc4) at main.c:484 #18 0x08054fc8 in main (argc=2, argv=0xbfffedc4) at python.c:23 Nils From Doug.LATORNELL at mdsinc.com Tue Jan 10 16:12:23 2006 From: Doug.LATORNELL at mdsinc.com (LATORNELL, Doug) Date: Tue, 10 Jan 2006 13:12:23 -0800 Subject: [SciPy-user] NumPy On OpenBSD Message-ID: <34090E25C2327C4AA5D276799005DDE0E34B82@SMDMX0501.mds.mdsinc.com> I'm having a go a building NumPy on OpenBSD 3.8 installed on an P4 box running Python 2.4.1. I think I'm almost there, but I get the message "floating point flags not supported on this platform" twice when I import numpy, and about 30,000 times when I run numpy.test(10). If I edit out that message though, the tests seem to have all been passed. I tracked the "floating point flags not supported on this platform" message down to numpy/core/include/numpy/ufuncobject.h It looks like OpenBSD falls through the ifdefined structure that defines UFUNC_CHECK_STATUS(ret). Looking at the various ways that the IEEE flags are tested for different platforms, I think OpenBSD might fall into the same block as SunOS since I have an ieeefp.h and an fpgetsticky() function. Does this make sense, or am I barking up the wrong tree completely? The problem is that I don't know what magic word I should add as an OR clause to #elif defined(sun) to test my guess. Where are the various platform names that are checked in unfuncobject.h defined? Any advice is appreciated... Doug Latornell MDS Nordion Vancouver, BC, Canada This email and any files transmitted with it may contain privileged or confidential information and may be read or used only by the intended recipient. If you are not the intended recipient of the email or any of its attachments, please be advised that you have received this email in error and any use, dissemination, distribution, forwarding, printing or copying of this email or any attached files is strictly prohibited. If you have received this email in error, please immediately purge it and all attachments and notify the sender by reply email or contact the sender at the number listed. From oliphant at ee.byu.edu Tue Jan 10 17:22:04 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 10 Jan 2006 15:22:04 -0700 Subject: [SciPy-user] Bug on r_ In-Reply-To: <86522b1a0601100903p6884639dnd0f7826d47e9681@mail.gmail.com> References: <86522b1a0601100903p6884639dnd0f7826d47e9681@mail.gmail.com> Message-ID: <43C4338C.4010200@ee.byu.edu> Hugo Gamboa wrote: >This manage to create a segmentation fault: > >r_[0,arange(10,dtype='f')] > >How to track a bug when a segmentation fault appears in python? > > > I use gdb, run the code and then get a traceback to find out where the problem is... Thanks for the report. I can reproduce the problem and it looks like an error is not being caught. -Travis From oliphant at ee.byu.edu Tue Jan 10 17:27:42 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 10 Jan 2006 15:27:42 -0700 Subject: [SciPy-user] Bug on r_ In-Reply-To: <43C4338C.4010200@ee.byu.edu> References: <86522b1a0601100903p6884639dnd0f7826d47e9681@mail.gmail.com> <43C4338C.4010200@ee.byu.edu> Message-ID: <43C434DE.50003@ee.byu.edu> Travis Oliphant wrote: >Hugo Gamboa wrote: > > > >>This manage to create a segmentation fault: >> >>r_[0,arange(10,dtype='f')] >> >>How to track a bug when a segmentation fault appears in python? >> >> >> >> >> >I use gdb, run the code and then get a traceback to find out where the >problem is... > >Thanks for the report. 
I can reproduce the problem and it looks like an >error is not being caught. > > I found the problem. An error wasn't being caught. I fixed that problem, but am looking into why an error was raised in the first place (I don't think it should have been). I'll check that now. -Travis From oliphant at ee.byu.edu Tue Jan 10 17:43:06 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 10 Jan 2006 15:43:06 -0700 Subject: [SciPy-user] NumPy On OpenBSD In-Reply-To: <34090E25C2327C4AA5D276799005DDE0E34B82@SMDMX0501.mds.mdsinc.com> References: <34090E25C2327C4AA5D276799005DDE0E34B82@SMDMX0501.mds.mdsinc.com> Message-ID: <43C4387A.1010506@ee.byu.edu> LATORNELL, Doug wrote: >I'm having a go a building NumPy on OpenBSD 3.8 installed on an P4 box >running Python 2.4.1. I think I'm almost there, but I get the message >"floating point flags not supported on this platform" twice when I >import numpy, and about 30,000 times when I run numpy.test(10). If I >edit out that message though, the tests seem to have all been passed. > > This message should probably be a warning because it is not a critical error it just means that certain functionality won't be available... >I tracked the "floating point flags not supported on this platform" >message down to numpy/core/include/numpy/ufuncobject.h It looks like >OpenBSD falls through the ifdefined structure that defines >UFUNC_CHECK_STATUS(ret). Looking at the various ways that the IEEE >flags are tested for different platforms, I think OpenBSD might fall >into the same block as SunOS since I have an ieeefp.h and an >fpgetsticky() function. Does this make sense, or am I barking up the >wrong tree completely? The problem is that I don't know what magic word >I should add as an OR clause to > >#elif defined(sun) > >to test my guess. Where are the various platform names that are checked >in unfuncobject.h defined? > > That's a great question. I would search on the net for what defines OpenBSD ensures.... You are right that all you need to do is determine how IEEE flags are handled on the platform and make the appropriate defines. -Travis From oliphant.travis at ieee.org Tue Jan 10 19:40:26 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 10 Jan 2006 17:40:26 -0700 Subject: [SciPy-user] NumPy On OpenBSD In-Reply-To: <34090E25C2327C4AA5D276799005DDE0E34B82@SMDMX0501.mds.mdsinc.com> References: <34090E25C2327C4AA5D276799005DDE0E34B82@SMDMX0501.mds.mdsinc.com> Message-ID: LATORNELL, Doug wrote: > I'm having a go a building NumPy on OpenBSD 3.8 installed on an P4 box > running Python 2.4.1. I think I'm almost there, but I get the message > "floating point flags not supported on this platform" twice when I > import numpy, and about 30,000 times when I run numpy.test(10). If I > edit out that message though, the tests seem to have all been passed. > > I tracked the "floating point flags not supported on this platform" > message down to numpy/core/include/numpy/ufuncobject.h It looks like > OpenBSD falls through the ifdefined structure that defines > UFUNC_CHECK_STATUS(ret). Looking at the various ways that the IEEE > flags are tested for different platforms, I think OpenBSD might fall > into the same block as SunOS since I have an ieeefp.h and an > fpgetsticky() function. Does this make sense, or am I barking up the > wrong tree completely? The problem is that I don't know what magic word > I should add as an OR clause to > > #elif defined(sun) > > to test my guess. 
Where are the various platform names that are checked > in unfuncobject.h defined? > > Any advice is appreciated... > You might try to add defined(NETBSD) || defined(FREEBSD) || defined(OPENBSD) I saw those in another project. -Travis From oliphant.travis at ieee.org Tue Jan 10 19:42:04 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 10 Jan 2006 17:42:04 -0700 Subject: [SciPy-user] NumPy On OpenBSD In-Reply-To: <34090E25C2327C4AA5D276799005DDE0E34B82@SMDMX0501.mds.mdsinc.com> References: <34090E25C2327C4AA5D276799005DDE0E34B82@SMDMX0501.mds.mdsinc.com> Message-ID: LATORNELL, Doug wrote: > wrong tree completely? The problem is that I don't know what magic word > I should add as an OR clause to > > #elif defined(sun) > > to test my guess. Where are the various platform names that are checked > in unfuncobject.h defined? > I've also seen __FreeBSD__ and __OpenBSD__ used. You could try those as well. -Travis From Doug.LATORNELL at mdsinc.com Tue Jan 10 20:02:28 2006 From: Doug.LATORNELL at mdsinc.com (LATORNELL, Doug) Date: Tue, 10 Jan 2006 17:02:28 -0800 Subject: [SciPy-user] NumPy On OpenBSD Message-ID: <34090E25C2327C4AA5D276799005DDE0E34B90@SMDMX0501.mds.mdsinc.com> Thanks, Travis. I've found BSD and OpenBSD #defines in my sys/param.h file. Or-ing one of those in seems like it should work, but I think fpgetsticky() is returning 0. I'm trying to dig into that now and understand it. BTW, the OpenBSD porting guide page (http://www.openbsd.org/porting.html#Generic) is kind of adamant about *not* using __OpenBSD__. Their take is test for features, not specific OSes... I'll keep you posted... Doug > -----Original Message----- > From: scipy-user-bounces at scipy.net > [mailto:scipy-user-bounces at scipy.net] On Behalf Of Travis Oliphant > Sent: January 10, 2006 16:42 > To: scipy-user at scipy.org > Subject: Re: [SciPy-user] NumPy On OpenBSD > > LATORNELL, Doug wrote: > > wrong tree completely? The problem is that I don't know what magic > > word I should add as an OR clause to > > > > #elif defined(sun) > > > > to test my guess. Where are the various platform names that are > > checked in unfuncobject.h defined? > > > > I've also seen __FreeBSD__ and __OpenBSD__ used. You could > try those as well. > > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > This email and any files transmitted with it may contain privileged or confidential information and may be read or used only by the intended recipient. If you are not the intended recipient of the email or any of its attachments, please be advised that you have received this email in error and any use, dissemination, distribution, forwarding, printing or copying of this email or any attached files is strictly prohibited. If you have received this email in error, please immediately purge it and all attachments and notify the sender by reply email or contact the sender at the number listed. From oliphant at ee.byu.edu Tue Jan 10 20:22:00 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 10 Jan 2006 18:22:00 -0700 Subject: [SciPy-user] NumPy On OpenBSD In-Reply-To: <34090E25C2327C4AA5D276799005DDE0E34B90@SMDMX0501.mds.mdsinc.com> References: <34090E25C2327C4AA5D276799005DDE0E34B90@SMDMX0501.mds.mdsinc.com> Message-ID: <43C45DB8.1050203@ee.byu.edu> LATORNELL, Doug wrote: >Thanks, Travis. > >I've found BSD and OpenBSD #defines in my sys/param.h file. 
Or-ing one >of those in seems like it should work, but I think fpgetsticky() is >returning 0. I'm trying to dig into that now and understand it. > >BTW, the OpenBSD porting guide page >(http://www.openbsd.org/porting.html#Generic) is kind of adamant about >*not* using __OpenBSD__. Their take is test for features, not specific >OSes... > > Yes, that is probably a better way to do it. I followed numarray's lead in how they were handling the IEEE floating-point stuff platform-to-platform. But, we could take advantage of the decent configuration system already present where we detect quite a few things at build time in the setup.py file. Tests for the existence of different IEEE functions could be defined there as well. The presence of specific functions would tell us which method is being used and we could then define a UFUNC_IEEE_FPMETHOD variable that specified which type was in use. The only other problem is that different platforms need different header files for this capability, so we would probably need to define another variable to pick-up the correct headers which would complicate matters. -Travis From Doug.LATORNELL at mdsinc.com Tue Jan 10 20:57:13 2006 From: Doug.LATORNELL at mdsinc.com (LATORNELL, Doug) Date: Tue, 10 Jan 2006 17:57:13 -0800 Subject: [SciPy-user] NumPy On OpenBSD Message-ID: <34090E25C2327C4AA5D276799005DDE0E34B92@SMDMX0501.mds.mdsinc.com> #elif defined(sun) || defined(__OpenBSD__) is the ticket! Using defined (OpenBSD) doesn't work IsoInfoCompute:doug$ python Python 2.4.1 (#1, Sep 3 2005, 13:08:59) [GCC 3.3.5 (propolice)] on openbsd3 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> numpy.test(10) Found 3 tests for numpy.distutils.misc_util Found 2 tests for numpy.core.umath Found 3 tests for numpy.dft.helper Found 8 tests for numpy.lib.arraysetops Found 42 tests for numpy.lib.type_check Found 9 tests for numpy.lib.twodim_base Found 11 tests for numpy.core.multiarray Found 4 tests for numpy.lib.getlimits Found 21 tests for numpy.core.ma Found 6 tests for numpy.core.defmatrix Found 33 tests for numpy.lib.function_base Found 6 tests for numpy.core.records Found 4 tests for numpy.lib.index_tricks Found 44 tests for numpy.lib.shape_base Found 0 tests for __main__ ........................................................................ ........................................................................ ...................................................... ---------------------------------------------------------------------- Ran 198 tests in 1.596s OK >>> Thanks for the help, Travis! Doug > -----Original Message----- > From: scipy-user-bounces at scipy.net > [mailto:scipy-user-bounces at scipy.net] On Behalf Of Travis Oliphant > Sent: January 10, 2006 16:42 > To: scipy-user at scipy.org > Subject: Re: [SciPy-user] NumPy On OpenBSD > > LATORNELL, Doug wrote: > > wrong tree completely? The problem is that I don't know what magic > > word I should add as an OR clause to > > > > #elif defined(sun) > > > > to test my guess. Where are the various platform names that are > > checked in unfuncobject.h defined? > > > > I've also seen __FreeBSD__ and __OpenBSD__ used. You could > try those as well. > > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > This email and any files transmitted with it may contain privileged or confidential information and may be read or used only by the intended recipient. 
If you are not the intended recipient of the email or any of its attachments, please be advised that you have received this email in error and any use, dissemination, distribution, forwarding, printing or copying of this email or any attached files is strictly prohibited. If you have received this email in error, please immediately purge it and all attachments and notify the sender by reply email or contact the sender at the number listed. From oliphant.travis at ieee.org Wed Jan 11 04:58:57 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 11 Jan 2006 02:58:57 -0700 Subject: [SciPy-user] Trial run of object arrays returning the objects and *not* object array scalars Message-ID: <43C4D6E1.2090004@ieee.org> In SVN there is a version of numpy that does not return object array scalars when using object arrays. The machinery for object array scalars is still there for now in case we decide that it is actually useful (it's a simple change to go back to them). The down side is that object arrays now do not follow the rule that items selected from them have the methods and attributes of arrays. The up-side is that now you don't have to use .item() to get to the actually object you stored in the object array. Waiting to hear which side complains the loudest.... -Travis From cookedm at physics.mcmaster.ca Wed Jan 11 06:59:20 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 11 Jan 2006 06:59:20 -0500 Subject: [SciPy-user] numpy's math library? In-Reply-To: References: Message-ID: <6B2C5019-4EB3-4636-974C-74521749DDBE@physics.mcmaster.ca> On Jan 10, 2006, at 12:10 , Alan G Isaac wrote: > It was recently claimed on the Gnumeric list > http://mail.gnome.org/archives/gnumeric-list/2006-January/ > msg00006.html > that if libc is not available (I assume this means at > compile time) then numpy uses a fallback library that is > numerically naive. (See the post for a specific example.) > > I suppose this would affect only Windows users, > but I am one. Can someone tell me how this actually works? Oh yeah, pick on one of the few functions defined like that :-) If the inverse hyperbolic functions are not found, replacements are used for asinh, acosh, and atanh are used. This was added about 4 years ago into Numeric. I know for Numeric that on win32 the setup.py would set HAVE_INVERSE_HYPERBOLIC to 0, but with numpy, it actually checks for asinh. Have a look at build/src/numpy/core/config.h to see if it's been defined or not. If necessary, we could use the definitions from fdlibm (http:// www.netlib.org/fdlibm/). -- |>|\/|< /------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From aisaac at american.edu Wed Jan 11 12:41:45 2006 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 11 Jan 2006 12:41:45 -0500 Subject: [SciPy-user] numpy's math library? In-Reply-To: <6B2C5019-4EB3-4636-974C-74521749DDBE@physics.mcmaster.ca> References: <6B2C5019-4EB3-4636-974C-74521749DDBE@physics.mcmaster.ca> Message-ID: On Wed, 11 Jan 2006, "David M. Cooke" apparently wrote: > If necessary, we could use the definitions from fdlibm > (http://www.netlib.org/fdlibm/). It seems like a good idea to replace them with something better in any case, if for no other reason than to avoid having such doubts arise. If the fdlibm (attribution only) license looks fine, these look good. 
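As a concrete picture of what a "numerically naive" inverse hyperbolic fallback can get wrong, here is an illustration of the failure mode only; it is not claimed to be the replacement code scipy_core actually ships, and the careful variant leans on math.log1p from a current Python.

import math

def asinh_naive(x):
    # direct transcription of asinh(x) = log(x + sqrt(x**2 + 1));
    # for tiny |x| the argument of log() collapses to 1.0 and the result to 0.0
    return math.log(x + math.sqrt(x * x + 1.0))

def asinh_careful(x):
    # the same identity rearranged around log1p so small |x| keeps full precision
    t = abs(x)
    r = math.log1p(t + t * t / (1.0 + math.sqrt(t * t + 1.0)))
    if x < 0:
        return -r
    return r

print(asinh_naive(1e-20))    # 0.0, the small argument is lost entirely
print(asinh_careful(1e-20))  # 1e-20, as expected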
Otherwise there are also liberal license implementations that while not as good as fdlibm are still better than the current implementation: Public Domain: http://www.digitalmars.com/d/archives/digitalmars/D/28555.html BSD: http://savannah.nongnu.org/projects/avr-libc/ Probably public domain (I can ask): http://www.plunk.org/~hatch/rightway.php Alan Isaac From robert.kern at gmail.com Wed Jan 11 12:42:38 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 11 Jan 2006 11:42:38 -0600 Subject: [SciPy-user] numpy's math library? In-Reply-To: References: <6B2C5019-4EB3-4636-974C-74521749DDBE@physics.mcmaster.ca> Message-ID: <43C5438E.5090203@gmail.com> Alan G Isaac wrote: > On Wed, 11 Jan 2006, "David M. Cooke" apparently wrote: > >>If necessary, we could use the definitions from fdlibm >>(http://www.netlib.org/fdlibm/). > > It seems like a good idea to replace them with something > better in any case, if for no other reason than to avoid > having such doubts arise. If the fdlibm (attribution only) > license looks fine, these look good. It is. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From mfmorss at aep.com Wed Jan 11 13:37:02 2006 From: mfmorss at aep.com (mfmorss at aep.com) Date: Wed, 11 Jan 2006 13:37:02 -0500 Subject: [SciPy-user] NumPy On OpenBSD In-Reply-To: <34090E25C2327C4AA5D276799005DDE0E34B92@SMDMX0501.mds.mdsinc.co m> Message-ID: I had this same symptom (many message that floating point flags aren't supported on "this platform") installing scipy core (now numpy) from source on AIX 5.2. I perhaps am confused, but it's not obvious to me, from reading the messages on this topic, in which file the suggested edit is to be implemented. Would someone please say? Mark F. Morss Principal Analyst, Market Risk American Electric Power "LATORNELL, Doug" To Sent by: "SciPy Users List" scipy-user-bounce s at scipy.net cc Subject 01/10/2006 08:57 Re: [SciPy-user] NumPy On OpenBSD PM Please respond to SciPy Users List #elif defined(sun) || defined(__OpenBSD__) is the ticket! Using defined (OpenBSD) doesn't work IsoInfoCompute:doug$ python Python 2.4.1 (#1, Sep 3 2005, 13:08:59) [GCC 3.3.5 (propolice)] on openbsd3 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy >>> numpy.test(10) Found 3 tests for numpy.distutils.misc_util Found 2 tests for numpy.core.umath Found 3 tests for numpy.dft.helper Found 8 tests for numpy.lib.arraysetops Found 42 tests for numpy.lib.type_check Found 9 tests for numpy.lib.twodim_base Found 11 tests for numpy.core.multiarray Found 4 tests for numpy.lib.getlimits Found 21 tests for numpy.core.ma Found 6 tests for numpy.core.defmatrix Found 33 tests for numpy.lib.function_base Found 6 tests for numpy.core.records Found 4 tests for numpy.lib.index_tricks Found 44 tests for numpy.lib.shape_base Found 0 tests for __main__ ........................................................................ ........................................................................ ...................................................... ---------------------------------------------------------------------- Ran 198 tests in 1.596s OK >>> Thanks for the help, Travis! 
Doug > -----Original Message----- > From: scipy-user-bounces at scipy.net > [mailto:scipy-user-bounces at scipy.net] On Behalf Of Travis Oliphant > Sent: January 10, 2006 16:42 > To: scipy-user at scipy.org > Subject: Re: [SciPy-user] NumPy On OpenBSD > > LATORNELL, Doug wrote: > > wrong tree completely? The problem is that I don't know what magic > > word I should add as an OR clause to > > > > #elif defined(sun) > > > > to test my guess. Where are the various platform names that are > > checked in unfuncobject.h defined? > > > > I've also seen __FreeBSD__ and __OpenBSD__ used. You could > try those as well. > > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > This email and any files transmitted with it may contain privileged or confidential information and may be read or used only by the intended recipient. If you are not the intended recipient of the email or any of its attachments, please be advised that you have received this email in error and any use, dissemination, distribution, forwarding, printing or copying of this email or any attached files is strictly prohibited. If you have received this email in error, please immediately purge it and all attachments and notify the sender by reply email or contact the sender at the number listed. _______________________________________________ SciPy-user mailing list SciPy-user at scipy.net http://www.scipy.net/mailman/listinfo/scipy-user From Fernando.Perez at colorado.edu Wed Jan 11 13:37:18 2006 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Wed, 11 Jan 2006 11:37:18 -0700 Subject: [SciPy-user] Trial run of object arrays returning the objects and *not* object array scalars In-Reply-To: <43C4D6E1.2090004@ieee.org> References: <43C4D6E1.2090004@ieee.org> Message-ID: <43C5505E.20700@colorado.edu> Travis Oliphant wrote: > In SVN there is a version of numpy that does not return object array > scalars when using object arrays. The machinery for object array > scalars is still there for now in case we decide that it is actually > useful (it's a simple change to go back to them). > > The down side is that object arrays now do not follow the rule that > items selected from them have the methods and attributes of arrays. The > up-side is that now you don't have to use .item() to get to the actually > object you stored in the object array. > > Waiting to hear which side complains the loudest.... Thanks. I do realize that this is a tricky balancing act between two opposing constraints, with good arguments to be made either way. I just hope I don't find myself arguing for the opposite three months from now, because of having to special-case scalar access for 'O' arrays :) Ah, I wish we could find some magical solution that would address all the issues in one clean shot. This is certainly worth getting more feedback and testing from others; I am the first to admit I may only be seeing a small part of the issue. 
Cheers, f From Doug.LATORNELL at mdsinc.com Wed Jan 11 13:46:33 2006 From: Doug.LATORNELL at mdsinc.com (LATORNELL, Doug) Date: Wed, 11 Jan 2006 10:46:33 -0800 Subject: [SciPy-user] NumPy On OpenBSD Message-ID: <34090E25C2327C4AA5D276799005DDE0E34BA6@SMDMX0501.mds.mdsinc.com> The file to edit is numpy/core/include/numpy/ufuncobject.h You have to figure out which of the #elif defined() blocks is appropriate for your platform, based on the header file(s) that are included in the block, and the name of the function that is called to set fpstatus. I see that there is an #elif defined(AIX) block already there, but perhaps you are running into version differences? Doug > -----Original Message----- > From: scipy-user-bounces at scipy.net > [mailto:scipy-user-bounces at scipy.net] On Behalf Of mfmorss at aep.com > Sent: January 11, 2006 10:37 > To: SciPy Users List > Cc: SciPy Users List; scipy-user-bounces at scipy.net > Subject: Re: [SciPy-user] NumPy On OpenBSD > > I had this same symptom (many message that floating point > flags aren't supported on "this platform") installing scipy > core (now numpy) from source on AIX 5.2. I perhaps am > confused, but it's not obvious to me, from reading the > messages on this topic, in which file the suggested edit is > to be implemented. Would someone please say? > > Mark F. Morss > Principal Analyst, Market Risk > American Electric Power > > > > > "LATORNELL, Doug" > > > dsinc.com> > To > Sent by: "SciPy Users List" > > scipy-user-bounce > > s at scipy.net > cc > > > > Subject > 01/10/2006 08:57 Re: [SciPy-user] NumPy > On OpenBSD > PM > > > > > > Please respond to > > SciPy Users List > > > .net> > > > > > > > > > > #elif defined(sun) || defined(__OpenBSD__) > > is the ticket! Using defined (OpenBSD) doesn't work > > IsoInfoCompute:doug$ python > Python 2.4.1 (#1, Sep 3 2005, 13:08:59) [GCC 3.3.5 > (propolice)] on openbsd3 Type "help", "copyright", "credits" > or "license" for more information. > >>> import numpy > >>> numpy.test(10) > Found 3 tests for numpy.distutils.misc_util > Found 2 tests for numpy.core.umath > Found 3 tests for numpy.dft.helper > Found 8 tests for numpy.lib.arraysetops > Found 42 tests for numpy.lib.type_check > Found 9 tests for numpy.lib.twodim_base > Found 11 tests for numpy.core.multiarray > Found 4 tests for numpy.lib.getlimits > Found 21 tests for numpy.core.ma > Found 6 tests for numpy.core.defmatrix > Found 33 tests for numpy.lib.function_base > Found 6 tests for numpy.core.records > Found 4 tests for numpy.lib.index_tricks > Found 44 tests for numpy.lib.shape_base > Found 0 tests for __main__ > .............................................................. > .......... > .............................................................. > .......... > ...................................................... > ---------------------------------------------------------------------- > Ran 198 tests in 1.596s > > OK > > >>> > > Thanks for the help, Travis! > > Doug > > > > -----Original Message----- > > From: scipy-user-bounces at scipy.net > > [mailto:scipy-user-bounces at scipy.net] On Behalf Of Travis Oliphant > > Sent: January 10, 2006 16:42 > > To: scipy-user at scipy.org > > Subject: Re: [SciPy-user] NumPy On OpenBSD > > > > LATORNELL, Doug wrote: > > > wrong tree completely? The problem is that I don't know > what magic > > > word I should add as an OR clause to > > > > > > #elif defined(sun) > > > > > > to test my guess. Where are the various platform names that are > > > checked in unfuncobject.h defined? 
> > > > > > > I've also seen __FreeBSD__ and __OpenBSD__ used. You could > try those > > as well. > > > > -Travis > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.net > > http://www.scipy.net/mailman/listinfo/scipy-user > > > > > This email and any files transmitted with it may contain > privileged or confidential information and may be read or > used only by the intended recipient. If you are not the > intended recipient of the email or any of its attachments, > please be advised that you have received this email in error > and any use, dissemination, distribution, forwarding, > printing or copying of this email or any attached files is > strictly prohibited. If you have received this email in > error, please immediately purge it and all attachments and > notify the sender by reply email or contact the sender at the > number listed. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > This email and any files transmitted with it may contain privileged or confidential information and may be read or used only by the intended recipient. If you are not the intended recipient of the email or any of its attachments, please be advised that you have received this email in error and any use, dissemination, distribution, forwarding, printing or copying of this email or any attached files is strictly prohibited. If you have received this email in error, please immediately purge it and all attachments and notify the sender by reply email or contact the sender at the number listed. From ndarray at mac.com Wed Jan 11 13:49:39 2006 From: ndarray at mac.com (Sasha) Date: Wed, 11 Jan 2006 13:49:39 -0500 Subject: [SciPy-user] Who will use numpy.ma? Message-ID: MA is intended to be a drop-in replacement for Numeric arrays that can explicitely handle missing observations. With the recent improvements to the array object in NumPy, the MA library has fallen behind. There are more than 50 methods in the ndarray object that are not present in ma.array. I would like to hear from people who work with datasets with missing observations? Do you use MA? Do you think with the support for nan's and replaceable mathematical operations, should missing observations be handled in numpy using special values rather than an array of masks? Thanks. -- sasha From aisaac at american.edu Wed Jan 11 14:07:27 2006 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 11 Jan 2006 14:07:27 -0500 Subject: [SciPy-user] Who will use numpy.ma? In-Reply-To: References: Message-ID: On Wed, 11 Jan 2006, Sasha apparently wrote: > should missing observations be handled in numpy using > special values rather than an array of masks? That's the behavior I am used to from other environments (e.g., GAUSS), but I will use whatever facility is provided. Cheers, Alan Isaac From haase at msg.ucsf.edu Wed Jan 11 19:51:30 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 11 Jan 2006 16:51:30 -0800 Subject: [SciPy-user] traits - where to get the current source Message-ID: <200601111651.31128.haase@msg.ucsf.edu> Hi, Over Christmas I was evaluation my options for generating (simplest) user interfaces (GUIs for scientific software). 
So I came across Enthought's traits package (which I heard already about at SciPy'04 ;-) How and where can I get the traits package source ? I want to use this with Python2.4 (and numarray, if that's related to this at all !?) I was looking for CVS, but that seems to be gone, right !? Thanks, Sebastian Haase From robert.kern at gmail.com Wed Jan 11 20:00:37 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 11 Jan 2006 19:00:37 -0600 Subject: [SciPy-user] traits - where to get the current source In-Reply-To: <200601111651.31128.haase@msg.ucsf.edu> References: <200601111651.31128.haase@msg.ucsf.edu> Message-ID: <43C5AA35.7010400@gmail.com> Sebastian Haase wrote: > Hi, > Over Christmas I was evaluation my options for generating (simplest) user > interfaces (GUIs for scientific software). > So I came across Enthought's traits package (which I heard already about at > SciPy'04 ;-) > > How and where can I get the traits package source ? http://svn.enthought.com/enthought -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From cookedm at physics.mcmaster.ca Wed Jan 11 22:09:29 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 11 Jan 2006 22:09:29 -0500 Subject: [SciPy-user] numpy's math library? In-Reply-To: (Alan G. Isaac's message of "Wed, 11 Jan 2006 12:41:45 -0500") References: <6B2C5019-4EB3-4636-974C-74521749DDBE@physics.mcmaster.ca> Message-ID: Alan G Isaac writes: > On Wed, 11 Jan 2006, "David M. Cooke" apparently wrote: >> If necessary, we could use the definitions from fdlibm >> (http://www.netlib.org/fdlibm/). > > It seems like a good idea to replace them with something > better in any case, if for no other reason than to avoid > having such doubts arise. If the fdlibm (attribution only) > license looks fine, these look good. Otherwise there are > also liberal license implementations that while not as good > as fdlibm are still better than the current implementation: > > Public Domain: > http://www.digitalmars.com/d/archives/digitalmars/D/28555.html > BSD: > http://savannah.nongnu.org/projects/avr-libc/ > Probably public domain (I can ask): > http://www.plunk.org/~hatch/rightway.php > > Alan Isaac Using ideas from the first and third links above (verified by me), and some digging for Kahan's version for arccosh, I've reimplemented the replacement inverse hyperbolic functions in http://projects.scipy.org/scipy/numpy/changeset/1884 (and a minor fix in 1885). I've also exposed log1p(x) = log(1+x) and expm1(x) = exp(x)-1 as ufuncs, since those are quite useful if you're worrying about cancellation errors. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From robert.kern at gmail.com Wed Jan 11 22:19:09 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 11 Jan 2006 21:19:09 -0600 Subject: [SciPy-user] numpy's math library? In-Reply-To: References: <6B2C5019-4EB3-4636-974C-74521749DDBE@physics.mcmaster.ca> Message-ID: <43C5CAAD.2010900@gmail.com> David M. Cooke wrote: > I've also exposed log1p(x) = log(1+x) and expm1(x) = exp(x)-1 as > ufuncs, since those are quite useful if you're worrying about > cancellation errors. Of course, they're useful, but they're also in scipy.special. Let's try not to migrate more things from scipy to numpy than we strictly have to. So, I'm -1 on exposing log1p() and expm1() in numpy. 
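For reference, the cancellation problem David mentions is easy to see at the prompt. The sketch below assumes an SVN build with the newly exposed log1p/expm1 ufuncs; with a released numpy the same pair is available as scipy.special.log1p and scipy.special.expm1:

import numpy

x = 1e-18
print numpy.log(1 + x)     # 0.0 -- the addition 1 + x already rounded x away
print numpy.log1p(x)       # 1e-18, correct to full precision
print numpy.exp(x) - 1     # 0.0, same cancellation
print numpy.expm1(x)       # 1e-18
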
-- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From cookedm at physics.mcmaster.ca Wed Jan 11 22:36:32 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Wed, 11 Jan 2006 22:36:32 -0500 Subject: [SciPy-user] numpy's math library? In-Reply-To: <43C5CAAD.2010900@gmail.com> (Robert Kern's message of "Wed, 11 Jan 2006 21:19:09 -0600") References: <6B2C5019-4EB3-4636-974C-74521749DDBE@physics.mcmaster.ca> <43C5CAAD.2010900@gmail.com> Message-ID: Robert Kern writes: > David M. Cooke wrote: > >> I've also exposed log1p(x) = log(1+x) and expm1(x) = exp(x)-1 as >> ufuncs, since those are quite useful if you're worrying about >> cancellation errors. > > Of course, they're useful, but they're also in scipy.special. Let's try not to > migrate more things from scipy to numpy than we strictly have to. So, I'm -1 on > exposing log1p() and expm1() in numpy. They're also part of the C99 standard, so I'd say there is some argument for making them part of numpy: exposing functions already defined by the C library. Mind you, we're also missing exp10, pow10, exp2, log2, cbrt, erf, erfc, lgamma, tgamma, and a few others. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From oliphant.travis at ieee.org Wed Jan 11 22:38:28 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 11 Jan 2006 20:38:28 -0700 Subject: [SciPy-user] numpy's math library? In-Reply-To: References: <6B2C5019-4EB3-4636-974C-74521749DDBE@physics.mcmaster.ca> <43C5CAAD.2010900@gmail.com> Message-ID: <43C5CF34.8070604@ieee.org> David M. Cooke wrote: >Robert Kern writes: > > > >>David M. Cooke wrote: >> >> >> >>>I've also exposed log1p(x) = log(1+x) and expm1(x) = exp(x)-1 as >>>ufuncs, since those are quite useful if you're worrying about >>>cancellation errors. >>> >>> >>Of course, they're useful, but they're also in scipy.special. Let's try not to >>migrate more things from scipy to numpy than we strictly have to. So, I'm -1 on >>exposing log1p() and expm1() in numpy. >> >> > >They're also part of the C99 standard, so I'd say there is some >argument for making them part of numpy: exposing functions already >defined by the C library. Mind you, we're also missing exp10, pow10, >exp2, log2, cbrt, erf, erfc, lgamma, tgamma, and a few others. > > > I think we should bring as much as we can from the C99 standard into numpy. But, I'm not in a hurry to do it. -Travis From robert.kern at gmail.com Wed Jan 11 22:40:59 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 11 Jan 2006 21:40:59 -0600 Subject: [SciPy-user] numpy's math library? In-Reply-To: References: <6B2C5019-4EB3-4636-974C-74521749DDBE@physics.mcmaster.ca> <43C5CAAD.2010900@gmail.com> Message-ID: <43C5CFCB.5060401@gmail.com> David M. Cooke wrote: > Robert Kern writes: > >>David M. Cooke wrote: >> >>>I've also exposed log1p(x) = log(1+x) and expm1(x) = exp(x)-1 as >>>ufuncs, since those are quite useful if you're worrying about >>>cancellation errors. >> >>Of course, they're useful, but they're also in scipy.special. Let's try not to >>migrate more things from scipy to numpy than we strictly have to. So, I'm -1 on >>exposing log1p() and expm1() in numpy. > > They're also part of the C99 standard, so I'd say there is some > argument for making them part of numpy: exposing functions already > defined by the C library. 
Mind you, we're also missing exp10, pow10, > exp2, log2, cbrt, erf, erfc, lgamma, tgamma, and a few others. Okay, that changes my mind. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From arnd.baecker at web.de Thu Jan 12 02:17:57 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Thu, 12 Jan 2006 08:17:57 +0100 (CET) Subject: [SciPy-user] numpy's math library? In-Reply-To: References: <6B2C5019-4EB3-4636-974C-74521749DDBE@physics.mcmaster.ca> <43C5CAAD.2010900@gmail.com> Message-ID: On Wed, 11 Jan 2006, David M. Cooke wrote: > Robert Kern writes: > > > David M. Cooke wrote: > > > >> I've also exposed log1p(x) = log(1+x) and expm1(x) = exp(x)-1 as > >> ufuncs, since those are quite useful if you're worrying about > >> cancellation errors. > > > > Of course, they're useful, but they're also in scipy.special. Let's try not to > > migrate more things from scipy to numpy than we strictly have to. So, I'm -1 on > > exposing log1p() and expm1() in numpy. > > They're also part of the C99 standard, so I'd say there is some > argument for making them part of numpy: exposing functions already > defined by the C library. Mind you, we're also missing exp10, pow10, > exp2, log2, cbrt, erf, erfc, lgamma, tgamma, and a few others. Thanks a lot for spending thought on this - it can be very important in some cases! BTW: This brings me to something which I wanted to post anyway: In [1]: import numpy In [2]: x=2**numpy.arange(5) In [3]: y=0*x In [4]: print numpy.log2(x) [ 0. 1. 2. 3. 4.] # fine so far In [5]: print numpy.log2(x,y) [0 0 1 2 2] # looks surprsing ;-) In [6]: print y [0 0 1 2 2] The reason is clear when looking at the code for log2: def log2(x, y=None): """Returns the base 2 logarithm of x If y is an array, the result replaces the contents of y. """ x = asarray(x) if y is None: y = umath.log(x) else: umath.log(x, y) y /= _log2 return y A natively implemented umath.log2 would cure the above surprise, I'd guess? Best, Arnd From travis.brady at gmail.com Thu Jan 12 13:39:46 2006 From: travis.brady at gmail.com (Travis Brady) Date: Thu, 12 Jan 2006 10:39:46 -0800 Subject: [SciPy-user] Failed Build and Multiplication Question Message-ID: I just updated from SVN and the NumPy build (on win2k with MingW using Gary's instructions from the Wiki) failed with the following error: error: Command "gcc -O2 -Wall -Wstrict-prototypes -Ibuild\src\numpy\core\src -Inumpy\core\include -Ibuild\src\numpy\core -Inumpy\core\src -Inumpy\lib\..\core\include -IC:\Python24\include -IC:\Python24\PC -c build\src\numpy\core\src\umathmodule.c -o build\temp.win32-2.4\Release\build\src\numpy\core\src\umathmodule.o" failed with exit status 1 Also, I have a 3x2x4 array that I'd like to be able to multiple elementwise with a 3x1 column vector, I can't currently do this w/o a loop of some kind, but I'm thinking there must be a better way. In [36]: x = numpy.arange(24) In [37]: x.shape = 3,2,4 In [38]: x Out[38]: array([[[ 0, 1, 2, 3], [ 4, 5, 6, 7]], [[ 8, 9, 10, 11], [12, 13, 14, 15]], [[16, 17, 18, 19], [20, 21, 22, 23]]]) In [43]: b = scipy.r_[:3] In [44]: b.shape = 3,1 In [45]: x*b --------------------------------------------------------------------------- ValueError: index objects are not broadcastable to a single shape. Thanks for any help with either of these. Travis -------------- next part -------------- An HTML attachment was scrubbed... 
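Regarding the multiplication question just above: one way to make the shapes broadcast (essentially the fix Travis Oliphant gives in his reply further down) is to give b a length-1 axis for each trailing dimension of x:

import numpy

x = numpy.arange(24)
x.shape = (3, 2, 4)

b = numpy.arange(3)                                      # keep b 1-d first
print (x * b[:, numpy.newaxis, numpy.newaxis]).shape     # (3, 2, 4)

# equivalently, reshape b in place:
b.shape = (3, 1, 1)
print (x * b).shape      # (3, 2, 4); each 2x4 slab of x is scaled by b[i]
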
URL: From cookedm at physics.mcmaster.ca Thu Jan 12 15:26:52 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Thu, 12 Jan 2006 15:26:52 -0500 Subject: [SciPy-user] Failed Build and Multiplication Question In-Reply-To: References: Message-ID: <8FD4BB0F-F93C-476D-B511-A2E76CE1B8D3@physics.mcmaster.ca> On Jan 12, 2006, at 13:39 , Travis Brady wrote: > I just updated from SVN and the NumPy build (on win2k with MingW > using Gary's instructions from the Wiki) failed with the following > error: > > error: Command "gcc -O2 -Wall -Wstrict-prototypes > -Ibuild\src\numpy\core\src -Inumpy\core\include > -Ibuild\src\numpy\core > -Inumpy\core\src > -Inumpy\lib\..\core\include > -IC:\Python24\include > -IC:\Python24\PC > -c build\src\numpy\core\src\umathmodule.c > -o build\temp.win32- 2.4\Release\build\src\numpy\core\src > \umathmodule.o" failed with exit status 1 I have a feeling that was probably me who broke that. What is the error message though? It should be before these one. -- |>|\/|< /------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From oliphant.travis at ieee.org Thu Jan 12 15:50:20 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 12 Jan 2006 13:50:20 -0700 Subject: [SciPy-user] Failed Build and Multiplication Question In-Reply-To: References: Message-ID: <43C6C10C.3020105@ieee.org> Travis Brady wrote: > I just updated from SVN and the NumPy build (on win2k with MingW using > Gary's instructions from the Wiki) failed with the following error: > > error: Command "gcc -O2 -Wall -Wstrict-prototypes > -Ibuild\src\numpy\core\src -Inumpy\core\include > -Ibuild\src\numpy\core > -Inumpy\core\src > -Inumpy\lib\..\core\include > -IC:\Python24\include > -IC:\Python24\PC > -c build\src\numpy\core\src\umathmodule.c > -o build\temp.win32- > 2.4\Release\build\src\numpy\core\src\umathmodule.o" failed with exit > status 1 Thanks for not posting the entire build log, but please show the actual error message that caused this failure. > > Also, I have a 3x2x4 array that I'd like to be able to multiple > elementwise with a 3x1 column vector, I can't currently do this w/o a > loop of some kind, but I'm thinking there must be a better way. Read the section on broadcasting in the sample chapters of the book online. x*b[:,newaxis,newaxis] should work (do it before you change the shape of b) or b.shape = (3,1,1) x*b should also work. -Travis >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > From mfmorss at aep.com Thu Jan 12 17:28:30 2006 From: mfmorss at aep.com (mfmorss at aep.com) Date: Thu, 12 Jan 2006 17:28:30 -0500 Subject: [SciPy-user] NumPy On OpenBSD In-Reply-To: <34090E25C2327C4AA5D276799005DDE0E34BA6@SMDMX0501.mds.mdsinc.co m> Message-ID: Well, by changing "defined(AIX)" to "defined(_AIX)" in ufuncobject.h, I was able to get past the "floating point flags not supported" problem, but unfortunately, the resulting function UFUNC_CHECK_STATUS(ret) does not work, to put it mildly. Later on in the install, I get this: cc_r: build/src/numpy/core/src/umathmodule.c "numpy/core/include/numpy/ufuncobject.h", line 292.9: 1506-273 (E) Missing type in declaration of ret. "numpy/core/include/numpy/ufuncobject.h", line 292.34: 1506-045 (S) Undeclared identifier fpstatus. "numpy/core/include/numpy/ufuncobject.h", line 292.13: 1506-221 (S) Initializer must be a valid constant expression. 
"numpy/core/include/numpy/ufuncobject.h", line 296.22: 1506-046 (S) Syntax error. "numpy/core/include/numpy/ufuncobject.h", line 296.22: 1506-172 (S) Parameter type list for function fp_clr_flag contains parameters without identifiers. "numpy/core/include/numpy/ufuncobject.h", line 296.9: 1506-343 (S) Redeclaration of fp_clr_flag differs from previous declaration on line 98 of "/usr/include/fpxcp.h". "numpy/core/include/numpy/ufuncobject.h", line 296.9: 1506-050 (I) Return type "int" in redeclaration is not compatible with the previous return type "void". "build/src/numpy/core/src/umathmodule.c", line 741.19: 1506-045 (S) Undeclared identifier nc_1f. "build/src/numpy/core/src/umathmodule.c", line 758.19: 1506-045 (S) Undeclared identifier nc_1f. "build/src/numpy/core/src/umathmodule.c", line 775.19: 1506-045 (S) Undeclared identifier nc_1f. "build/src/numpy/core/src/umathmodule.c", line 794.18: 1506-045 (S) Undeclared identifier nc_1f. "build/src/numpy/core/src/umathmodule.c", line 824.19: 1506-045 (S) Undeclared identifier nc_1f. "build/src/numpy/core/src/umathmodule.c", line 8155.32: 1506-280 (W) Function argument assignment between types "long double*" and "double*" is not allowed. "numpy/core/src/ufuncobject.c", line 523.1: 1506-046 (S) Syntax error. "numpy/core/src/ufuncobject.c", line 550.1: 1506-046 (S) Syntax error. "numpy/core/src/ufuncobject.c", line 551.19: 1506-045 (S) Undeclared identifier intype. "numpy/core/src/ufuncobject.c", line 582.1: 1506-046 (S) Syntax error. "numpy/core/src/ufuncobject.c", line 583.17: 1506-045 (S) Undeclared identifier thistype. "numpy/core/src/ufuncobject.c", line 583.32: 1506-045 (S) Undeclared identifier neededtype. "numpy/core/src/ufuncobject.c", line 583.49: 1506-045 (S) Undeclared identifier scalar. "numpy/core/src/ufuncobject.c", line 607.1: 1506-046 (S) Syntax error. "numpy/core/src/ufuncobject.c", line 608.29: 1506-045 (S) Undeclared identifier self. "numpy/core/src/ufuncobject.c", line 608.40: 1506-045 (S) Undeclared identifier arg_types. "numpy/core/src/ufuncobject.c", line 609.38: 1506-045 (S) Undeclared identifier function. "numpy/core/src/ufuncobject.c", line 609.55: 1506-045 (S) Undeclared identifier data. "numpy/core/src/ufuncobject.c", line 610.20: 1506-045 (S) Undeclared identifier scalars. "numpy/core/include/numpy/ufuncobject.h", line 292.9: 1506-273 (E) Missing type in declaration of ret. "numpy/core/include/numpy/ufuncobject.h", line 292.34: 1506-045 (S) Undeclared identifier fpstatus. "numpy/core/include/numpy/ufuncobject.h", line 292.13: 1506-221 (S) Initializer must be a valid constant expression. "numpy/core/include/numpy/ufuncobject.h", line 296.22: 1506-046 (S) Syntax error. "numpy/core/include/numpy/ufuncobject.h", line 296.22: 1506-172 (S) Parameter type list for function fp_clr_flag contains parameters without identifiers. "numpy/core/include/numpy/ufuncobject.h", line 296.9: 1506-343 (S) Redeclaration of fp_clr_flag differs from previous declaration on line 98 of "/usr/include/fpxcp.h". "numpy/core/include/numpy/ufuncobject.h", line 296.9: 1506-050 (I) Return type "int" in redeclaration is not compatible with the previous return type "void". "build/src/numpy/core/src/umathmodule.c", line 741.19: 1506-045 (S) Undeclared identifier nc_1f. "build/src/numpy/core/src/umathmodule.c", line 758.19: 1506-045 (S) Undeclared identifier nc_1f. "build/src/numpy/core/src/umathmodule.c", line 775.19: 1506-045 (S) Undeclared identifier nc_1f. 
"build/src/numpy/core/src/umathmodule.c", line 794.18: 1506-045 (S) Undeclared identifier nc_1f. "build/src/numpy/core/src/umathmodule.c", line 824.19: 1506-045 (S) Undeclared identifier nc_1f. "build/src/numpy/core/src/umathmodule.c", line 8155.32: 1506-280 (W) Function argument assignment between types "long double*" and "double*" is not allowed. "numpy/core/src/ufuncobject.c", line 523.1: 1506-046 (S) Syntax error. "numpy/core/src/ufuncobject.c", line 550.1: 1506-046 (S) Syntax error. "numpy/core/src/ufuncobject.c", line 551.19: 1506-045 (S) Undeclared identifier intype. "numpy/core/src/ufuncobject.c", line 582.1: 1506-046 (S) Syntax error. "numpy/core/src/ufuncobject.c", line 583.17: 1506-045 (S) Undeclared identifier thistype. "numpy/core/src/ufuncobject.c", line 583.32: 1506-045 (S) Undeclared identifier neededtype. "numpy/core/src/ufuncobject.c", line 583.49: 1506-045 (S) Undeclared identifier scalar. "numpy/core/src/ufuncobject.c", line 607.1: 1506-046 (S) Syntax error. "numpy/core/src/ufuncobject.c", line 608.29: 1506-045 (S) Undeclared identifier self. "numpy/core/src/ufuncobject.c", line 608.40: 1506-045 (S) Undeclared identifier arg_types. "numpy/core/src/ufuncobject.c", line 609.38: 1506-045 (S) Undeclared identifier function. "numpy/core/src/ufuncobject.c", line 609.55: 1506-045 (S) Undeclared identifier data. "numpy/core/src/ufuncobject.c", line 610.20: 1506-045 (S) Undeclared identifier scalars. Mark F. Morss Principal Analyst, Market Risk American Electric Power "LATORNELL, Doug" To Sent by: "SciPy Users List" scipy-user-bounce s at scipy.net cc Subject 01/11/2006 01:46 Re: [SciPy-user] NumPy On OpenBSD PM Please respond to SciPy Users List The file to edit is numpy/core/include/numpy/ufuncobject.h You have to figure out which of the #elif defined() blocks is appropriate for your platform, based on the header file(s) that are included in the block, and the name of the function that is called to set fpstatus. I see that there is an #elif defined(AIX) block already there, but perhaps you are running into version differences? Doug > -----Original Message----- > From: scipy-user-bounces at scipy.net > [mailto:scipy-user-bounces at scipy.net] On Behalf Of mfmorss at aep.com > Sent: January 11, 2006 10:37 > To: SciPy Users List > Cc: SciPy Users List; scipy-user-bounces at scipy.net > Subject: Re: [SciPy-user] NumPy On OpenBSD > > I had this same symptom (many message that floating point > flags aren't supported on "this platform") installing scipy > core (now numpy) from source on AIX 5.2. I perhaps am > confused, but it's not obvious to me, from reading the > messages on this topic, in which file the suggested edit is > to be implemented. Would someone please say? > > Mark F. Morss > Principal Analyst, Market Risk > American Electric Power > > > > > "LATORNELL, Doug" > > > dsinc.com> > To > Sent by: "SciPy Users List" > > scipy-user-bounce > > s at scipy.net > cc > > > > Subject > 01/10/2006 08:57 Re: [SciPy-user] NumPy > On OpenBSD > PM > > > > > > Please respond to > > SciPy Users List > > > .net> > > > > > > > > > > #elif defined(sun) || defined(__OpenBSD__) > > is the ticket! Using defined (OpenBSD) doesn't work > > IsoInfoCompute:doug$ python > Python 2.4.1 (#1, Sep 3 2005, 13:08:59) [GCC 3.3.5 > (propolice)] on openbsd3 Type "help", "copyright", "credits" > or "license" for more information. 
> >>> import numpy > >>> numpy.test(10) > Found 3 tests for numpy.distutils.misc_util > Found 2 tests for numpy.core.umath > Found 3 tests for numpy.dft.helper > Found 8 tests for numpy.lib.arraysetops > Found 42 tests for numpy.lib.type_check > Found 9 tests for numpy.lib.twodim_base > Found 11 tests for numpy.core.multiarray > Found 4 tests for numpy.lib.getlimits > Found 21 tests for numpy.core.ma > Found 6 tests for numpy.core.defmatrix > Found 33 tests for numpy.lib.function_base > Found 6 tests for numpy.core.records > Found 4 tests for numpy.lib.index_tricks > Found 44 tests for numpy.lib.shape_base > Found 0 tests for __main__ > .............................................................. > .......... > .............................................................. > .......... > ...................................................... > ---------------------------------------------------------------------- > Ran 198 tests in 1.596s > > OK > > >>> > > Thanks for the help, Travis! > > Doug > > > > -----Original Message----- > > From: scipy-user-bounces at scipy.net > > [mailto:scipy-user-bounces at scipy.net] On Behalf Of Travis Oliphant > > Sent: January 10, 2006 16:42 > > To: scipy-user at scipy.org > > Subject: Re: [SciPy-user] NumPy On OpenBSD > > > > LATORNELL, Doug wrote: > > > wrong tree completely? The problem is that I don't know > what magic > > > word I should add as an OR clause to > > > > > > #elif defined(sun) > > > > > > to test my guess. Where are the various platform names that are > > > checked in unfuncobject.h defined? > > > > > > > I've also seen __FreeBSD__ and __OpenBSD__ used. You could > try those > > as well. > > > > -Travis > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.net > > http://www.scipy.net/mailman/listinfo/scipy-user > > > > > This email and any files transmitted with it may contain > privileged or confidential information and may be read or > used only by the intended recipient. If you are not the > intended recipient of the email or any of its attachments, > please be advised that you have received this email in error > and any use, dissemination, distribution, forwarding, > printing or copying of this email or any attached files is > strictly prohibited. If you have received this email in > error, please immediately purge it and all attachments and > notify the sender by reply email or contact the sender at the > number listed. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > This email and any files transmitted with it may contain privileged or confidential information and may be read or used only by the intended recipient. If you are not the intended recipient of the email or any of its attachments, please be advised that you have received this email in error and any use, dissemination, distribution, forwarding, printing or copying of this email or any attached files is strictly prohibited. If you have received this email in error, please immediately purge it and all attachments and notify the sender by reply email or contact the sender at the number listed. 
_______________________________________________ SciPy-user mailing list SciPy-user at scipy.net http://www.scipy.net/mailman/listinfo/scipy-user From oliphant.travis at ieee.org Thu Jan 12 17:38:52 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 12 Jan 2006 15:38:52 -0700 Subject: [SciPy-user] NumPy On OpenBSD In-Reply-To: References: Message-ID: <43C6DA7C.6040200@ieee.org> mfmorss at aep.com wrote: >Well, by changing "defined(AIX)" to "defined(_AIX)" in ufuncobject.h, I was >able to get past the "floating point flags not supported" problem, but >unfortunately, the resulting function UFUNC_CHECK_STATUS(ret) does not >work, to put it mildly. Later on in the install, I get this: > > This is the first post I'm aware of trying to build on the AIX platform, so it is not that surprising you are having some difficulties. It looks to me like there are multiple issues here. Did the code compile before you changed defined(AIX) to defined(_AIX)? If you are willing to help, we can try to get it to work. -Travis From oliphant.travis at ieee.org Thu Jan 12 17:45:52 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 12 Jan 2006 15:45:52 -0700 Subject: [SciPy-user] NumPy On OpenBSD In-Reply-To: References: Message-ID: <43C6DC20.6030102@ieee.org> mfmorss at aep.com wrote: >Well, by changing "defined(AIX)" to "defined(_AIX)" in ufuncobject.h, I was >able to get past the "floating point flags not supported" problem, but >unfortunately, the resulting function UFUNC_CHECK_STATUS(ret) does not >work, to put it mildly. Later on in the install, I get this: > > I found a problem with the header file in that section of code. There are two missing '\' characters. They need to be on the end of every line. There are two lines between #elif defined(_AIX) and #else that need to have a '\' placed on the end of them. -Travis From zollars at caltech.edu Thu Jan 12 20:14:28 2006 From: zollars at caltech.edu (Eric Zollars) Date: Thu, 12 Jan 2006 17:14:28 -0800 Subject: [SciPy-user] NumPy on ppc linux In-Reply-To: <43C6DC20.6030102@ieee.org> References: <43C6DC20.6030102@ieee.org> Message-ID: <43C6FEF4.6010901@caltech.edu> I cannot get setup.py to detect the xlf compilers that exist on this system. The correct executables are in disutils/fcompiler/ibm.py but setup config_fc --help-fcompiler still has it as unavailable. What else has to be done? Is there an current installation guide for new numpy? Eric Travis Oliphant wrote: > mfmorss at aep.com wrote: > > >>Well, by changing "defined(AIX)" to "defined(_AIX)" in ufuncobject.h, I was >>able to get past the "floating point flags not supported" problem, but >>unfortunately, the resulting function UFUNC_CHECK_STATUS(ret) does not >>work, to put it mildly. Later on in the install, I get this: >> >> > > > I found a problem with the header file in that section of code. There > are two missing '\' > characters. They need to be on the end of every line. There are two > lines between > #elif defined(_AIX) > > and > > #else > > that need to have a '\' placed on the end of them. 
> > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user From travis.brady at gmail.com Thu Jan 12 19:31:49 2006 From: travis.brady at gmail.com (Travis Brady) Date: Thu, 12 Jan 2006 16:31:49 -0800 Subject: [SciPy-user] Failed Build and Multiplication Question In-Reply-To: <8FD4BB0F-F93C-476D-B511-A2E76CE1B8D3@physics.mcmaster.ca> References: <8FD4BB0F-F93C-476D-B511-A2E76CE1B8D3@physics.mcmaster.ca> Message-ID: Here is all of the output: Running from numpy source directory. Assuming default configuration (numpy\distutils\command/{setup_command,setup}.py was not found) Appending numpy.distutils.command configuration to numpy.distutils Assuming default configuration (numpy\distutils\fcompiler/{setup_fcompiler,setup }.py was not found) Appending numpy.distutils.fcompiler configuration to numpy.distutils Appending numpy.distutils configuration to numpy Appending numpy.testing configuration to numpy No module named __svn_version__ Creating numpy\f2py\__svn_version__.py (version='1886') F2PY Version 2_1886 Appending numpy.f2py configuration to numpy blas_opt_info: blas_mkl_info: NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS NOT AVAILABLE atlas_blas_info: FOUND: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['C:\\usr\\src\\ATLAS'] language = c running build_src building extension "atlas_version" sources adding 'build\src\atlas_version_-0x56c203ea.c' to sources. running build_ext customize Mingw32CCompiler customize Mingw32CCompiler using build_ext FOUND: libraries = ['f77blas', 'cblas', 'atlas'] library_dirs = ['C:\\usr\\src\\ATLAS'] language = c define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')] Creating numpy\core\__svn_version__.py (version='1886') Appending numpy.core configuration to numpy Appending numpy.lib configuration to numpy Appending numpy.dft configuration to numpy lapack_opt_info: lapack_mkl_info: mkl_info: NOT AVAILABLE NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS numpy.distutils.system_info.atlas_threads_info NOT AVAILABLE atlas_info: numpy.distutils.system_info.atlas_info FOUND: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['C:\\usr\\src\\ATLAS'] language = f77 running build_src building extension "atlas_version" sources adding 'build\src\atlas_version_0x7ca12f75.c' to sources. running build_ext customize Mingw32CCompiler customize Mingw32CCompiler using build_ext 0 Could not locate executable f77 Could not locate executable gfortran Could not locate executable f95 customize GnuFCompiler customize GnuFCompiler customize GnuFCompiler using build_ext FOUND: libraries = ['lapack', 'f77blas', 'cblas', 'atlas'] library_dirs = ['C:\\usr\\src\\ATLAS'] language = f77 define_macros = [('ATLAS_INFO', '"\\"3.6.0\\""')] Appending numpy.linalg configuration to numpy Appending numpy.random configuration to numpy Appending numpy configuration to Inheriting attribute 'version' from '?' numpy version 0.9.3.1886 running config running build running config_fc running build_src building extension "numpy.distutils.__config__" sources adding 'build\src\numpy\distutils\numpy\distutils\__config__.py' to sources. building extension "numpy.core.multiarray" sources adding 'build\src\numpy\core\config.h' to sources. adding 'build\src\numpy\core\__multiarray_api.h' to sources. adding 'build\src\numpy\core\src' to include_dirs. 
numpy.core - nothing done with h_files= ['build\\src\\numpy\\core\\src\\scalarty pes.inc', 'build\\src\\numpy\\core\\src\\arraytypes.inc', 'build\\src\\numpy\\co re\\config.h', 'build\\src\\numpy\\core\\__multiarray_api.h'] building extension "numpy.core.umath" sources adding 'build\src\numpy\core\config.h' to sources. adding 'build\src\numpy\core\__ufunc_api.h' to sources. adding 'build\src\numpy\core\src' to include_dirs. numpy.core - nothing done with h_files= ['build\\src\\numpy\\core\\src\\scalarty pes.inc', 'build\\src\\numpy\\core\\src\\arraytypes.inc', 'build\\src\\numpy\\co re\\config.h', 'build\\src\\numpy\\core\\__ufunc_api.h'] building extension "numpy.core._sort" sources adding 'build\src\numpy\core\config.h' to sources. adding 'build\src\numpy\core\__multiarray_api.h' to sources. numpy.core - nothing done with h_files= ['build\\src\\numpy\\core\\config.h', 'b uild\\src\\numpy\\core\\__multiarray_api.h'] building extension "numpy.core._dotblas" sources adding 'numpy\core\blasdot\_dotblas.c' to sources. building extension "numpy.lib._compiled_base" sources building extension "numpy.dft.fftpack_lite" sources building extension "numpy.linalg.lapack_lite" sources adding 'numpy\linalg\lapack_litemodule.c' to sources. building extension "numpy.random.mtrand" sources building extension "numpy.__config__" sources adding 'build\src\numpy\numpy\__config__.py' to sources. running build_py copying build\src\numpy\numpy\__config__.py -> build\lib.win32-2.4\numpy copying build\src\numpy\distutils\numpy\distutils\__config__.py -> build\lib.win 32-2.4\numpy\distutils copying numpy\f2py\__svn_version__.py -> build\lib.win32-2.4\numpy\f2py copying numpy\core\__svn_version__.py -> build\lib.win32-2.4\numpy\core running build_ext customize Mingw32CCompiler customize Mingw32CCompiler using build_ext customize GnuFCompiler customize GnuFCompiler customize GnuFCompiler using build_ext building 'numpy.core.umath' extension compiling C sources gcc options: '-O2 -Wall -Wstrict-prototypes' compile options: '-Ibuild\src\numpy\core\src -Inumpy\core\include -Ibuild\src\nu mpy\core -Inumpy\core\src -Inumpy\lib\..\core\include -IC:\Python24\include -IC: \Python24\PC -c' gcc -O2 -Wall -Wstrict-prototypes -Ibuild\src\numpy\core\src -Inumpy\core\includ e -Ibuild\src\numpy\core -Inumpy\core\src -Inumpy\lib\..\core\include -IC:\Pytho n24\include -IC:\Python24\PC -c build\src\numpy\core\src\umathmodule.c -o build\ temp.win32-2.4\Release\build\src\numpy\core\src\umathmodule.o In file included from build/src/numpy/core/src/umathmodule.c:8440: build/src/numpy/core/__umath_generated.c:117: `expm1f' undeclared here (not in a function) build/src/numpy/core/__umath_generated.c:117: initializer element is not constan t build/src/numpy/core/__umath_generated.c:117: (near initialization for `expm1_da ta[0]') build/src/numpy/core/__umath_generated.c:117: `expm1l' undeclared here (not in a function) build/src/numpy/core/__umath_generated.c:117: initializer element is not constan t build/src/numpy/core/__umath_generated.c:117: (near initialization for `expm1_da ta[2]') error: Command "gcc -O2 -Wall -Wstrict-prototypes -Ibuild\src\numpy\core\src -In umpy\core\include -Ibuild\src\numpy\core -Inumpy\core\src -Inumpy\lib\..\core\in clude -IC:\Python24\include -IC:\Python24\PC -c build\src\numpy\core\src\umathmo dule.c -o build\temp.win32- 2.4\Release\build\src\numpy\core\src\umathmodule.o" f ailed with exit status 1 removed numpy\core\__svn_version__.py removed numpy\core\__svn_version__.pyc removed numpy\f2py\__svn_version__.py 
removed numpy\f2py\__svn_version__.pyc On 1/12/06, David M. Cooke wrote: > > On Jan 12, 2006, at 13:39 , Travis Brady wrote: > > > I just updated from SVN and the NumPy build (on win2k with MingW > > using Gary's instructions from the Wiki) failed with the following > > error: > > > > error: Command "gcc -O2 -Wall -Wstrict-prototypes > > -Ibuild\src\numpy\core\src -Inumpy\core\include > > -Ibuild\src\numpy\core > > -Inumpy\core\src > > -Inumpy\lib\..\core\include > > -IC:\Python24\include > > -IC:\Python24\PC > > -c build\src\numpy\core\src\umathmodule.c > > -o build\temp.win32- 2.4\Release\build\src\numpy\core\src > > \umathmodule.o" failed with exit status 1 > > I have a feeling that was probably me who broke that. What is the > error message though? It should be before these one. > > -- > |>|\/|< > /------------------------------------------------------------------\ > |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ > |cookedm at physics.mcmaster.ca > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant.travis at ieee.org Thu Jan 12 21:50:57 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 12 Jan 2006 19:50:57 -0700 Subject: [SciPy-user] ***[Possible UCE]*** Re: Failed Build and Multiplication Question In-Reply-To: References: <8FD4BB0F-F93C-476D-B511-A2E76CE1B8D3@physics.mcmaster.ca> Message-ID: <43C71591.1090107@ieee.org> Travis Brady wrote: > Here is all of the output: > This is fixed in SVN. There was a mistake in when the expm1f, expm1l, logp1f, and logp1l functions should be defined... -Travis From cookedm at physics.mcmaster.ca Thu Jan 12 22:52:26 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Thu, 12 Jan 2006 22:52:26 -0500 Subject: [SciPy-user] ***[Possible UCE]*** Re: Failed Build and Multiplication Question In-Reply-To: <43C71591.1090107@ieee.org> References: <8FD4BB0F-F93C-476D-B511-A2E76CE1B8D3@physics.mcmaster.ca> <43C71591.1090107@ieee.org> Message-ID: On Jan 12, 2006, at 21:50 , Travis Oliphant wrote: > Travis Brady wrote: > >> Here is all of the output: >> > This is fixed in SVN. There was a mistake in when the expm1f, > expm1l, > logp1f, and logp1l functions should be defined... I had added log1p and exp1m to the same area that handles atanf (and others) not existing, so they were. The error from his compile indicates that he has expm1f and expm1l, it's just that they're done using a #define instead of functions (so the address can't be taken of them). Your change will (probably) still fail; we'll have to do something like #if defined(expm1f) #undef expm1f #endif -- |>|\/|< /------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From oliphant.travis at ieee.org Thu Jan 12 23:07:16 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 12 Jan 2006 21:07:16 -0700 Subject: [SciPy-user] ***[Possible UCE]*** Re: Failed Build and Multiplication Question In-Reply-To: References: <8FD4BB0F-F93C-476D-B511-A2E76CE1B8D3@physics.mcmaster.ca> <43C71591.1090107@ieee.org> Message-ID: <43C72774.4010307@ieee.org> David M. Cooke wrote: >On Jan 12, 2006, at 21:50 , Travis Oliphant wrote: > > > >>Travis Brady wrote: >> >> >> >>>Here is all of the output: >>> >>> >>> >>This is fixed in SVN. 
There was a mistake in when the expm1f, >>expm1l, >>logp1f, and logp1l functions should be defined... >> >> > >I had added log1p and exp1m to the same area that handles atanf (and >others) not existing, so they were. > > Yes, I saw that, but the problem was that on my windows platform (compiled using mingw32) the config.h showed HAVE_LOG1P but no HAVE_EXPM1 but also HAVE_LONGDOUBLE_FUNCS and HAVE_FLOAT_FUNCS were defined. So, my system did not have expm1f nor expm1l even though it had expf and expl. Thus, the single HAVE_FLOAT_FUNCS check wasn't enough for these extra C99 functions. >The error from his compile indicates that he has expm1f and expm1l, >it's just that they're done using a #define instead of functions (so >the address can't be taken of them). > > I guess I didn't look closely, I just got a similar error regarding exmp1f and exm1l not being defined and started trying to fix it. Thanks for looking at this, -Travis From mfmorss at aep.com Fri Jan 13 09:31:28 2006 From: mfmorss at aep.com (mfmorss at aep.com) Date: Fri, 13 Jan 2006 09:31:28 -0500 Subject: [SciPy-user] Fw: Changes for ufuncobject.h Message-ID: (* Forgot to sent to everyone *) Mark F. Morss Principal Analyst, Market Risk American Electric Power ----- Forwarded by Mark F Morss/OR3/AEPIN on 01/13/2006 09:30 AM ----- Mark F Morss/OR3/AEPIN To 01/13/2006 09:18 Travis Oliphant AM cc Subject Re: Changes for ufuncobject.h (Document link: Mark F. Morss) Thank you, Travis! With that, I now have numpy working -- albeit without LAPACK or BLAS, which are absent from my system. I would appreciate if anyone could comment on the significance of a couple of messages that I received during the compilation of umathmodule.c: "build/src/numpy/core/src/umathmodule.c", line 8155.32: 1506-280 (W) Function argument assignment between types "long double*" and "double*" is not allowed. 1500-030: (I) INFORMATION: InitOperators: Additional optimization may be attained by recompiling and specifying MAXMEM option with a value greater than 2048. I wonder if I will encounter any difficulties due to this double/long-double assignment issue. Also, I apologize for my ignorance, but I wonder if it would be advisable to reset MAXMEM, and how to do so. I searched the download for a possible answer to the latter question, and couldn't find any. Mark F. Morss Principal Analyst, Market Risk American Electric Power Travis Oliphant To mfmorss at aep.com 01/12/2006 05:47 cc PM Subject Changes for ufuncobject.h There are some syntax errors in that section of code. Here is a patch showing the changes... You'll notice that a '\' character is needed at the end of two lines... Index: ufuncobject.h =================================================================== --- ufuncobject.h (revision 1887) +++ ufuncobject.h (working copy) @@ -280,7 +280,7 @@ #define generate_divbyzero_error() feraiseexcept(FE_DIVBYZERO) #define generate_overflow_error() feraiseexcept(FE_OVERFLOW) -#elif defined(AIX) +#elif defined(_AIX) #include #include @@ -288,11 +288,11 @@ #define UFUNC_CHECK_STATUS(ret) { \ fpflag_t fpstatus; \ \ - fpstatus = fp_read_flag(); + fpstatus = fp_read_flag(); \ ret = ((FP_DIV_BY_ZERO & fpstatus) ? UFUNC_FPE_DIVIDEBYZERO : 0) \ | ((FP_OVERFLOW & fpstatus) ? UFUNC_FPE_OVERFLOW : 0) \ | ((FP_UNDERFLOW & fpstatus) ? UFUNC_FPE_UNDERFLOW : 0) \ - | ((FP_INVALID & fpstatus) ? UFUNC_FPE_INVALID : 0); + | ((FP_INVALID & fpstatus) ? 
UFUNC_FPE_INVALID : 0); \ fp_clr_flag( FP_DIV_BY_ZERO | FP_OVERFLOW | FP_UNDERFLOW | FP_INVALID); \ } From zollars at caltech.edu Fri Jan 13 14:34:51 2006 From: zollars at caltech.edu (Eric Zollars) Date: Fri, 13 Jan 2006 11:34:51 -0800 Subject: [SciPy-user] numpy and fcompiler Message-ID: <43C800DB.6000707@caltech.edu> I cannot get numpy-0.9.2 to build with the correct compiler. Starting with a clean directory the command: python setup.py config_fc --fcompiler=ibm --f77exec=/opt/ibmcmp/xlf/8.1/bin/xlf --f90exec=/opt/ibmcmp/xlf/8.1/bin/xlf90 --f77flags=-qextname --f90flags=-qextname build still leads to the error warning: build_ext: fcompiler=ibm is not available. Anything I can try to figure this out? Eric From pearu at scipy.org Fri Jan 13 14:22:31 2006 From: pearu at scipy.org (Pearu Peterson) Date: Fri, 13 Jan 2006 13:22:31 -0600 (CST) Subject: [SciPy-user] numpy and fcompiler In-Reply-To: <43C800DB.6000707@caltech.edu> References: <43C800DB.6000707@caltech.edu> Message-ID: On Fri, 13 Jan 2006, Eric Zollars wrote: > I cannot get numpy-0.9.2 to build with the correct compiler. Starting > with a clean directory the command: > python setup.py config_fc --fcompiler=ibm > --f77exec=/opt/ibmcmp/xlf/8.1/bin/xlf > --f90exec=/opt/ibmcmp/xlf/8.1/bin/xlf90 --f77flags=-qextname > --f90flags=-qextname build > > still leads to the error > warning: build_ext: fcompiler=ibm is not available. > > Anything I can try to figure this out? What is the output of xlf ? If this does not contain xlf version information, then it is assumed that the version info can be obtained from the listing of /etc/opt/ibmcmp/xlf/. See numpy/distutils/fcompiler/ibm.py for more information. I can fix this if you send me the output of `xlf` and confirm that /opt/ibmcmp/xlf/8.1/etc/xlf.cfg exists (or send me the fullpath to xlf.cfg). Pearu From zollars at caltech.edu Fri Jan 13 16:19:26 2006 From: zollars at caltech.edu (Eric Zollars) Date: Fri, 13 Jan 2006 13:19:26 -0800 Subject: [SciPy-user] numpy and fcompiler In-Reply-To: References: <43C800DB.6000707@caltech.edu> Message-ID: <43C8195E.2030003@caltech.edu> Pearu- The command xlf gives compiler options, no version information. I have already checked ibm.py and the xlf.cfg is where it is supposed to be: /etc/opt/ibmcmp/xlf/8.1/xlf.cfg Eric Pearu Peterson wrote: > > On Fri, 13 Jan 2006, Eric Zollars wrote: > > >>I cannot get numpy-0.9.2 to build with the correct compiler. Starting >>with a clean directory the command: >>python setup.py config_fc --fcompiler=ibm >>--f77exec=/opt/ibmcmp/xlf/8.1/bin/xlf >>--f90exec=/opt/ibmcmp/xlf/8.1/bin/xlf90 --f77flags=-qextname >>--f90flags=-qextname build >> >>still leads to the error >>warning: build_ext: fcompiler=ibm is not available. >> >>Anything I can try to figure this out? > > > What is the output of > > xlf > > ? If this does not contain xlf version information, then it is assumed > that the version info can be obtained from the listing of > /etc/opt/ibmcmp/xlf/. See numpy/distutils/fcompiler/ibm.py for more > information. I can fix this if you send me the output of `xlf` and confirm > that /opt/ibmcmp/xlf/8.1/etc/xlf.cfg exists (or send me the fullpath to > xlf.cfg). 
> > Pearu > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user From pearu at scipy.org Fri Jan 13 15:49:52 2006 From: pearu at scipy.org (Pearu Peterson) Date: Fri, 13 Jan 2006 14:49:52 -0600 (CST) Subject: [SciPy-user] numpy and fcompiler In-Reply-To: <43C8195E.2030003@caltech.edu> References: <43C800DB.6000707@caltech.edu> <43C8195E.2030003@caltech.edu> Message-ID: On Fri, 13 Jan 2006, Eric Zollars wrote: > Pearu- > The command xlf gives compiler options, no version information. I have > already checked ibm.py and the xlf.cfg is where it is supposed to be: > /etc/opt/ibmcmp/xlf/8.1/xlf.cfg Have you tried to fix ibm.py? For debugging, you can run python ibm.py --verbose in numpy/distutils/fcompiler/ that should show compiler version number if xlf is detected. Pearu From mfmorss at aep.com Fri Jan 13 16:55:10 2006 From: mfmorss at aep.com (mfmorss at aep.com) Date: Fri, 13 Jan 2006 16:55:10 -0500 Subject: [SciPy-user] numpy and fcompiler In-Reply-To: <43C8195E.2030003@caltech.edu> Message-ID: May ask what system you're running on? I could not tell from your post. I have numpy running on AIX 5.2, but at present, we don't have Fortran. Mark F. Morss Principal Analyst, Market Risk American Electric Power Eric Zollars To Sent by: SciPy Users List scipy-user-bounce s at scipy.net cc Subject 01/13/2006 04:19 Re: [SciPy-user] numpy and PM fcompiler Please respond to SciPy Users List Pearu- The command xlf gives compiler options, no version information. I have already checked ibm.py and the xlf.cfg is where it is supposed to be: /etc/opt/ibmcmp/xlf/8.1/xlf.cfg Eric Pearu Peterson wrote: > > On Fri, 13 Jan 2006, Eric Zollars wrote: > > >>I cannot get numpy-0.9.2 to build with the correct compiler. Starting >>with a clean directory the command: >>python setup.py config_fc --fcompiler=ibm >>--f77exec=/opt/ibmcmp/xlf/8.1/bin/xlf >>--f90exec=/opt/ibmcmp/xlf/8.1/bin/xlf90 --f77flags=-qextname >>--f90flags=-qextname build >> >>still leads to the error >>warning: build_ext: fcompiler=ibm is not available. >> >>Anything I can try to figure this out? > > > What is the output of > > xlf > > ? If this does not contain xlf version information, then it is assumed > that the version info can be obtained from the listing of > /etc/opt/ibmcmp/xlf/. See numpy/distutils/fcompiler/ibm.py for more > information. I can fix this if you send me the output of `xlf` and confirm > that /opt/ibmcmp/xlf/8.1/etc/xlf.cfg exists (or send me the fullpath to > xlf.cfg). > > Pearu > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user _______________________________________________ SciPy-user mailing list SciPy-user at scipy.net http://www.scipy.net/mailman/listinfo/scipy-user From zollars at caltech.edu Fri Jan 13 18:35:00 2006 From: zollars at caltech.edu (Eric Zollars) Date: Fri, 13 Jan 2006 15:35:00 -0800 Subject: [SciPy-user] numpy and fcompiler In-Reply-To: References: Message-ID: <43C83924.1050509@caltech.edu> Mark- This is a SUSE 9.? system. Eric mfmorss at aep.com wrote: > May ask what system you're running on? I could not tell from your post. I > have numpy running on AIX 5.2, but at present, we don't have Fortran. > > Mark F. 
Morss > Principal Analyst, Market Risk > American Electric Power > > > > Eric Zollars > edu> To > Sent by: SciPy Users List > scipy-user-bounce > s at scipy.net cc > > Subject > 01/13/2006 04:19 Re: [SciPy-user] numpy and > PM fcompiler > > > Please respond to > SciPy Users List > .net> > > > > > > > Pearu- > The command xlf gives compiler options, no version > information. I have > already checked ibm.py and the xlf.cfg is where it is supposed to be: > /etc/opt/ibmcmp/xlf/8.1/xlf.cfg > > Eric > > Pearu Peterson wrote: > >>On Fri, 13 Jan 2006, Eric Zollars wrote: >> >> >> >>>I cannot get numpy-0.9.2 to build with the correct compiler. Starting >>>with a clean directory the command: >>>python setup.py config_fc --fcompiler=ibm >>>--f77exec=/opt/ibmcmp/xlf/8.1/bin/xlf >>>--f90exec=/opt/ibmcmp/xlf/8.1/bin/xlf90 --f77flags=-qextname >>>--f90flags=-qextname build >>> >>>still leads to the error >>>warning: build_ext: fcompiler=ibm is not available. >>> >>>Anything I can try to figure this out? >> >> >>What is the output of >> >> xlf >> >>? If this does not contain xlf version information, then it is assumed >>that the version info can be obtained from the listing of >>/etc/opt/ibmcmp/xlf/. See numpy/distutils/fcompiler/ibm.py for more >>information. I can fix this if you send me the output of `xlf` and > > confirm > >>that /opt/ibmcmp/xlf/8.1/etc/xlf.cfg exists (or send me the fullpath to >>xlf.cfg). >> >>Pearu >> >>_______________________________________________ >>SciPy-user mailing list >>SciPy-user at scipy.net >>http://www.scipy.net/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user From zollars at caltech.edu Fri Jan 13 19:02:47 2006 From: zollars at caltech.edu (Eric Zollars) Date: Fri, 13 Jan 2006 16:02:47 -0800 Subject: [SciPy-user] numpy and fcompiler In-Reply-To: References: <43C800DB.6000707@caltech.edu> <43C8195E.2030003@caltech.edu> Message-ID: <43C83FA7.10107@caltech.edu> Pearu Peterson wrote: > > Have you tried to fix ibm.py? For debugging, you can run > > python ibm.py --verbose > > in numpy/distutils/fcompiler/ that should show compiler version number if > xlf is detected. > > Pearu After it prints out IbmFCompiler instance properties:, it prints "None". The compiler.customize() line does not appear to getting any of the changes I make to the class IbmFCompiler in ibm.py. Eric From pearu at scipy.org Sat Jan 14 00:23:02 2006 From: pearu at scipy.org (Pearu Peterson) Date: Fri, 13 Jan 2006 23:23:02 -0600 (CST) Subject: [SciPy-user] numpy and fcompiler In-Reply-To: <43C83FA7.10107@caltech.edu> References: <43C800DB.6000707@caltech.edu> <43C83FA7.10107@caltech.edu> Message-ID: On Fri, 13 Jan 2006, Eric Zollars wrote: > Pearu Peterson wrote: >> >> Have you tried to fix ibm.py? For debugging, you can run >> >> python ibm.py --verbose >> >> in numpy/distutils/fcompiler/ that should show compiler version number if >> xlf is detected. >> >> Pearu > > After it prints out IbmFCompiler instance properties:, it prints "None". > The compiler.customize() line does not appear to getting any of the > changes I make to the class IbmFCompiler in ibm.py. Change the line compiler = new_fcompiler(compiler='ibm') to compiler = IbmFCompiler() Also, what are the values os.name sys.platform in your system? 
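(A quick way to report those two values, for anyone following along:)

import os, sys
print os.name, sys.platform    # e.g. 'posix' 'linux2' on a SUSE 9.x box
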
Currently ibm compiler is enabled only for aix platforms, see _default_compilers dictionary in numpy/distutils/fcompiler/__init__.py. Pearu From lists at spicynoodles.net Sat Jan 14 18:33:55 2006 From: lists at spicynoodles.net (Andre Radke) Date: Sun, 15 Jan 2006 00:33:55 +0100 Subject: [SciPy-user] Inverting Complex64 array fails on OS X Message-ID: I'm new to SciPy and as my first project, I chose to port some linear algebra code over from MatLab. I ran into some unexpected results while inverting an array of type Complex64. Here's a simple test case: jannu:~ andre$ /usr/local/bin/python ActivePython 2.4.2 Build 248 (ActiveState Corp.) based on Python 2.4.2 (#1, Oct 3 2005, 09:39:46) [GCC 3.3 20030304 (Apple Computer, Inc. build 1666)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> from scipy import * >>> a = array([[1, 1], [-1, -1j]], dtype=Complex64) >>> a_inv = linalg.inv(a) >>> a array([[ 1.+0.j, 1.+0.j], [-1.+0.j, 0.-1.j]], dtype=complex128) >>> a_inv array([[ 0., -1.], [ 1., 1.]]) >>> dot (a, a_inv) array([[ 1.+0.j, 0.+0.j], [ 0.-1.j, 1.-1.j]], dtype=complex128) Inverting this Complex64 matrix surprisingly returns a real matrix. The dot product of the matrix and its supposed inverse does not yield the unit matrix, but rather a complex matrix whose real part happens to be the unit matrix. It seems like linalg.inv() completely ignored the imaginary part of the input matrix. Switching the data type of the array to Complex32 and repeating the calculations yields the correct results: >>> b = array([[1, 1], [-1, -1j]], dtype=Complex32) >>> b_inv = linalg.inv(b) >>> b array([[ 1.+0.j, 1.+0.j], [-1.+0.j, 0.-1.j]], dtype=complex64) >>> b_inv array([[ 0.5-0.5j, -0.5-0.5j], [ 0.5+0.5j, 0.5+0.5j]], dtype=complex64) >>> dot(b, b_inv) array([[ 1.+0.j, 0.+0.j], [ 0.+0.j, 1.+0.j]], dtype=complex64) Am I right in assuming that linalg.inv() is supposed to be able to deal with Complex64 arrays? If so, can anybody reproduce the results I got? This is on an Apple PowerBook G3 (pre-Altivec) running Mac OS X 10.3.9 with NumPy and SciPy compiled from current svn source (0.9.3.1903 and 0.4.4.1550) per Chris' latest instructions on new.scipy.org. I also tried to obtain the ATLAS version and path info as per step 5 of the troubleshooting hints in the INSTALL.txt file, but ran into the following warning and error while running setup_atlas_version.py: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/distutils/misc_util.py:886: UserWarning: Use Configuration('linalg','',top_path=None) instead of deprecated default_config_dict('linalg','',None) warnings.warn('Use Configuration(%s,%s,top_path=%s) instead of '\ Traceback (most recent call last): File "setup_atlas_version.py", line 29, in ? setup(**configuration()) File "setup_atlas_version.py", line 13, in configuration del config['fortran_libraries'] KeyError: 'fortran_libraries' When I comment out the offending line in setup_atlas_version.py and run it again, the script raises the AtlasNotFoundError exception. I haven't figured out yet whether this means that the build process indeed failed to locate the Atlas libraries on my system and if so, what replacement (if any) it used instead. Any hints on how to do this would be appreciated. 
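A quick, backend-independent sanity check for this kind of failure (only a sketch, assuming the usual helpers dot, allclose and identity are available from scipy's top-level namespace as in the session above) is to compare the product of the matrix and its supposed inverse against the identity:

>>> from scipy import dot, allclose, identity
>>> allclose(dot(a, a_inv), identity(2))
False        # a correctly computed inverse should give True here

with a and a_inv as in the Complex64 session above; the Complex32 case passes the same check.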
Thanks in advance, -Andre -- Andre Radke + mailto:lists at spicynoodles.net + http://spicynoodles.net/ From oliphant.travis at ieee.org Sat Jan 14 22:45:55 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sat, 14 Jan 2006 20:45:55 -0700 Subject: [SciPy-user] Changed attributes .dtypedescr, .dtypechar, .dtypestr and .dtype Message-ID: <43C9C573.2020809@ieee.org> There was some cruft left over from the change to making data-type descriptors real Python objects. This left lots of .dtype related attributes on the array object --- too many as Francesc Altet graciously pointed out. In the latest SVN, I've cleaned things up (thanks to a nice patch from Francesc to get it started). Basically, there is now only one attribute on the array object dealing with the data-type (arr.dtype). This attribute returns the data-type descriptor object for the array. This object itself has the attributes .char, .str, and .type (among others). I think this will lead to less confusion long term. The cruft was due to the fact that my understanding of the data-type descriptor came in December while seriously looking at records module. This will have some backward-compatibility issues (we are still pre-1.0 and early enough that I hope this is not too difficult to deal with). The compatibility to numpy-0.9.2 issues I can see are: 1) Replacing attributes that are now gone: .dtypechar --> .dtype.char .dtypestr --> .dtype.str .dtypedescr --> .dtype 2) Changing old .dtype -> .dtype.type This is only necessary if you were using a.dtype as a *typeobject* as in issubclass(a.dtype, ) If you were using .dtype as a parameter to dtype= then that usage will still work great (in fact a little faster) because now .dtype returns a "descriptor object" 3) The dtypedescr constructor is now called dtype. This change should have gone into the 0.9.2 release, but things got too hectic with all the name changes. I will quickly release 0.9.4 with these changes unless I hear strong disagreements within the next few days. -Travis P.S. SciPy SVN has been updated and fixed with the changes. Numeric compatibility now implies that .typecode() --> .dtype.char although if .typecode() was used as an argument to a function, then .dtype will very likely work. -Travis From Paul.Ray at nrl.navy.mil Sun Jan 15 09:50:28 2006 From: Paul.Ray at nrl.navy.mil (Paul Ray) Date: Sun, 15 Jan 2006 09:50:28 -0500 Subject: [SciPy-user] Setting the real part of a complex array element Message-ID: Hi, I have some old code that tries to write to the real part of a complex array element, but the syntax now seems to fail in the new NumPy. Looking in the new NumPy book (section 3.1.3) , it appears that the .real attribute should be writable, but it does not seem to work. Does anyone know how to do this? In [1]: import numpy In [2]: numpy.__version__ Out[2]: '0.9.2' In [4]: c = numpy.zeros(10,dtype=numpy.complex64) In [5]: c[0].real = 1.0 ------------------------------------------------------------------------ --- exceptions.TypeError Traceback (most recent call last) /Users/paulr/ TypeError: attribute 'real' of 'generic_arrtype' objects is not writable In [11]: a = c[0] In [12]: a? Type: complex64_arrtype Base Class: String Form: 0j Namespace: Interactive Thanks for any help, -- Paul -- Dr. Paul S. 
Ray E-mail: Paul.Ray at nrl.navy.mil Naval Research Laboratory WWW : http://xweb.nrl.navy.mil/ personnel/paulr/ Code 7655 Phone : (202) 404-1619 Washington, DC 20375 AIM : NRLPSR From lists at spicynoodles.net Sun Jan 15 14:28:49 2006 From: lists at spicynoodles.net (Andre Radke) Date: Sun, 15 Jan 2006 20:28:49 +0100 Subject: [SciPy-user] Inverting Complex64 array fails on OS X In-Reply-To: References: Message-ID: At 0:33 +0100 01/15/2006, Andre Radke wrote: >I'm new to SciPy and as my first project, I chose to port some linear >algebra code over from MatLab. I ran into some unexpected results >while inverting an array of type Complex64. [...] >This is on an Apple PowerBook G3 (pre-Altivec) running Mac OS X >10.3.9 with NumPy and SciPy compiled from current svn source >(0.9.3.1903 and 0.4.4.1550) per Chris' latest instructions on >new.scipy.org. [...] >When I comment out the offending line in setup_atlas_version.py and >run it again, the script raises the AtlasNotFoundError exception. I >haven't figured out yet whether this means that the build process >indeed failed to locate the Atlas libraries on my system and if so, >what replacement (if any) it used instead. Any hints on how to do >this would be appreciated. Since posting the above, I have made some progress: 1. Running scipy.test(1) completes without errors (1033 tests in 21.373 seconds) but shows some warnings about empty clapack and cblas modules. 2. The implementation of inv() in scipy/linalg/basic.py uses the flapack branches and not the clapack branches. 3. The otool command line tool tells me that the shared libraries (*.so) in scipy/linalg are all linked with Apple's Accelerate framework. I assume this means that the build process used the Fortran version of LAPACK provided as part of Mac OS X. Is this the way it's supposed to work? If so, any ideas what else could cause scipy.linalg.inv() to return incorrect results for a matrix of type Complex64 on my machine? Still digging... -Andre -- Andre Radke + mailto:lists at spicynoodles.net + http://spicynoodles.net/ From robert.kern at gmail.com Sun Jan 15 14:54:53 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 15 Jan 2006 13:54:53 -0600 Subject: [SciPy-user] Inverting Complex64 array fails on OS X In-Reply-To: References: Message-ID: <43CAA88D.2060805@gmail.com> Andre Radke wrote: > 3. The otool command line tool tells me that the shared libraries > (*.so) in scipy/linalg are all linked with Apple's Accelerate > framework. > > I assume this means that the build process used the Fortran version > of LAPACK provided as part of Mac OS X. Is this the way it's supposed > to work? Accelerate.framework only contains the FORTRAN version of LAPACK, yes. I do believe it contains CBLAS (given the existence of cblas_* symbols in the dylib), though. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From lists at spicynoodles.net Sun Jan 15 16:53:56 2006 From: lists at spicynoodles.net (Andre Radke) Date: Sun, 15 Jan 2006 22:53:56 +0100 Subject: [SciPy-user] Inverting Complex64 array fails on OS X In-Reply-To: <43CAA88D.2060805@gmail.com> References: <43CAA88D.2060805@gmail.com> Message-ID: Robert Kern wrote: >Accelerate.framework only contains the FORTRAN version of LAPACK, yes. I do >believe it contains CBLAS (given the existence of cblas_* symbols in >the dylib), though. Okay, thanks. 
In the meantime, I think I have figured out why inverting a matrix of type Complex64 failed for me: The implementation of linalg.inv() in scipy/linalg/basic.py uses get_lapack_funcs() in scipy/lib/lapack/__init__.py to obtain the reference to the underlying Fortran functions for performing the matrix inversion. get_lapack_funcs() examines the dtypechar attribute of the provided matrix to determine whether to use the single/double precision and real/complex version of the Fortran functions. The dtypechar attribute of my Complex64 matrix was 'G'. This wasn't one of the type code chars expected by get_lapack_funcs(), so it defaulted to the version of the Fortran functions that take double precision real arguments, i.e. dgetrf and dgetri. Consequently, linalg.inv() returned only the inverse of input's matrix real part. I suspect my Complex64 matrix would instead have required using the zgetrf and zgetri Fortran functions (for a double precision complex argument) which would have happened if the dtypechar of the matrix had been 'D' instead of 'G'. Is this actually a bug in get_lapack_funcs() or are my assumptions about how this should work simply incorrect? Also, is Complex64 the preferred way to specify a double precision complex dtype when constructing a matrix? TIA, -Andre -- Andre Radke + mailto:lists at spicynoodles.net + http://www.spicynoodles.net/ From oliver.tomic at matforsk.no Sun Jan 15 17:45:58 2006 From: oliver.tomic at matforsk.no (oliver.tomic at matforsk.no) Date: Sun, 15 Jan 2006 23:45:58 +0100 Subject: [SciPy-user] problems with stats.py Message-ID: Hi there, I am developing an application that amongst others uses stats.py from Gary Strangman (version 0.6, May 10, 2002). Since we've been encouraged to switch over to Numpy I tried to do so. Now however I am running into problems. Previously I used Numeric 23.8 and everything worked fine. Then I installed Numpy and replaced 'Numeric' with 'Numpy' everywhere in the code. Now the following occurs: Traceback (most recent call last): File "C:\Python24\pmse_Plot.py", line 175, in pmsePlotter ANOVAresults = lF_oneway(preANOVA[(assessor, attribute)]) File "C:\Python24\stats.py", in line 1534, in lF_oneway means = map(amean,tmp) NameError: global name 'amean' is not defined I guess that this is probably a minor thing, but right now I don't see how to work around this (my brain doesn't work anymore that late at night ... past bedtime actually :-). Any help appreciated Oliver From oliphant.travis at ieee.org Sun Jan 15 18:01:08 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun, 15 Jan 2006 16:01:08 -0700 Subject: [SciPy-user] Setting the real part of a complex array element In-Reply-To: References: Message-ID: <43CAD434.2040201@ieee.org> Paul Ray wrote: >Hi, > >I have some old code that tries to write to the real part of a >complex array element, but the syntax now seems to fail in the new >NumPy. Looking in the new NumPy book (section 3.1.3) , it appears >that the .real attribute should be writable, but it does not seem to >work. Does anyone know how to do this? > >In [1]: import numpy > >In [2]: numpy.__version__ >Out[2]: '0.9.2' > >In [4]: c = numpy.zeros(10,dtype=numpy.complex64) > > > >In [5]: c[0].real = 1.0 > > c.real[0] = 1.0 The problem is that c[0] is not an array it is a scalar. This would not have worked with typecode='D' with Numeric either... c.real[0] = 1.0 will work. 
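Put as a session (illustrative only; exact scalar reprs vary between versions, and the same view trick works for .imag):

>>> import numpy
>>> c = numpy.zeros(10, dtype=numpy.complex64)
>>> c[0].real = 1.0      # fails: c[0] is a scalar copy of the element
TypeError: attribute 'real' of 'generic_arrtype' objects is not writable
>>> c.real[0] = 1.0      # works: .real is a view into c itself
>>> c.imag[0] = -2.0     # likewise for the imaginary part
>>> c[0]
(1-2j)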
-Travis From oliphant.travis at ieee.org Sun Jan 15 18:05:05 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun, 15 Jan 2006 16:05:05 -0700 Subject: [SciPy-user] Inverting Complex64 array fails on OS X In-Reply-To: References: <43CAA88D.2060805@gmail.com> Message-ID: <43CAD521.9070006@ieee.org> Andre Radke wrote: >Robert Kern wrote: > > >>Accelerate.framework only contains the FORTRAN version of LAPACK, yes. I do >>believe it contains CBLAS (given the existence of cblas_* symbols in >>the dylib), though. >> >> > >Okay, thanks. In the meantime, I think I have figured out why >inverting a matrix of type Complex64 failed for me: > >The implementation of linalg.inv() in scipy/linalg/basic.py uses >get_lapack_funcs() in scipy/lib/lapack/__init__.py to obtain the >reference to the underlying Fortran functions for performing the >matrix inversion. get_lapack_funcs() examines the dtypechar attribute >of the provided matrix to determine whether to use the single/double >precision and real/complex version of the Fortran functions. > >The dtypechar attribute of my Complex64 matrix was 'G'. > This is definitely the problem. It should be 'D'. 'G' is a complex number with long doubles. How did you specify the matrix again? Could you show us some of the attributes of the matrix you created. I'm shocked that 'G' was the dtypechar... >This wasn't >one of the type code chars expected by get_lapack_funcs(), so it >defaulted to the version of the Fortran functions that take double >precision real arguments, i.e. dgetrf and dgetri. Consequently, >linalg.inv() returned only the inverse of input's matrix real part. > >I suspect my Complex64 matrix would instead have required using the >zgetrf and zgetri Fortran functions (for a double precision complex >argument) which would have happened if the dtypechar of the matrix >had been 'D' instead of 'G'. > >Is this actually a bug in get_lapack_funcs() or are my assumptions >about how this should work simply incorrect? > >Also, is Complex64 the preferred way to specify a double precision >complex dtype when constructing a matrix? > > No. Use numpy.complex128 or 'D' or just simply complex. -Travis From sransom at nrao.edu Sun Jan 15 18:11:42 2006 From: sransom at nrao.edu (Scott Ransom) Date: Sun, 15 Jan 2006 18:11:42 -0500 Subject: [SciPy-user] Setting the real part of a complex array element In-Reply-To: <43CAD434.2040201@ieee.org> References: <43CAD434.2040201@ieee.org> Message-ID: <20060115231142.GA23137@ssh.cv.nrao.edu> Hi Travis, Hmmm. Very strange. The code that Paul was mentioning (which was written by me, BTW), used to run on 4-byte floats (i.e. typecode 'F'). And it worked fine. But you are right that it doesn't work for old typecode 'D': In [1]:Numeric.__version__ Out[1]:'24.2' In [2]:c = Numeric.zeros(10,'F') In [3]:c[0].real = 1.0 In [4]:c Out[4]: array([ 1.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j,0.+0.j, 0.+0.j, 0.+0.j],'F') In [5]:c = Numeric.zeros(10,'D') In [6]:c[0].real = 1.0 --------------------------------------------------------------------------- exceptions.TypeError Traceback (most recent call last) /home/sransom/ TypeError: 'complex' object has only read-only attributes (assign to .real) Very strange.... Scott On Sun, Jan 15, 2006 at 04:01:08PM -0700, Travis Oliphant wrote: > Paul Ray wrote: > > >Hi, > > > >I have some old code that tries to write to the real part of a > >complex array element, but the syntax now seems to fail in the new > >NumPy. 
Looking in the new NumPy book (section 3.1.3) , it appears > >that the .real attribute should be writable, but it does not seem to > >work. Does anyone know how to do this? > > > >In [1]: import numpy > > > >In [2]: numpy.__version__ > >Out[2]: '0.9.2' > > > >In [4]: c = numpy.zeros(10,dtype=numpy.complex64) > > > > > > > >In [5]: c[0].real = 1.0 > > > > > > c.real[0] = 1.0 > > The problem is that > > c[0] is not an array it is a scalar. This would not have worked with > typecode='D' with Numeric either... > > c.real[0] = 1.0 will work. > > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user -- -- Scott M. Ransom Address: NRAO Phone: (434) 296-0320 520 Edgemont Rd. email: sransom at nrao.edu Charlottesville, VA 22903 USA GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 From aisaac at american.edu Sun Jan 15 18:36:34 2006 From: aisaac at american.edu (Alan G Isaac) Date: Sun, 15 Jan 2006 18:36:34 -0500 Subject: [SciPy-user] problems with stats.py In-Reply-To: References: Message-ID: On Sun, 15 Jan 2006, oliver.tomic at matforsk.no apparently wrote: > Previously I used Numeric 23.8 and everything worked fine. Then I installed > Numpy and replaced 'Numeric' with 'Numpy' everywhere in the code. Now the > following occurs: > Traceback (most recent call last): > File "C:\Python24\pmse_Plot.py", line 175, in pmsePlotter > ANOVAresults = lF_oneway(preANOVA[(assessor, attribute)]) > File "C:\Python24\stats.py", in line 1534, in lF_oneway > means = map(amean,tmp) > NameError: global name 'amean' is not defined Numeric did not define amean either. This is defined in stats.py Are you making the replacements in Strangman's code? Don't forget his stats.py uses pstat.py. It might turn on how you are importing the module functions. fwiw, Alan Isaac PS Your directory structure looks very odd to me, by the way. Isn't Lib/site-packages a more common place to install such modules? From Paul.Ray at nrl.navy.mil Sun Jan 15 21:45:04 2006 From: Paul.Ray at nrl.navy.mil (Paul Ray) Date: Sun, 15 Jan 2006 21:45:04 -0500 Subject: [SciPy-user] Setting the real part of a complex array element In-Reply-To: <43CAD434.2040201@ieee.org> References: <43CAD434.2040201@ieee.org> Message-ID: <0CA8F11D-4B48-4EAA-97E5-802A458F85DD@nrl.navy.mil> On Jan 15, 2006, at 6:01 PM, Travis Oliphant wrote: > c.real[0] = 1.0 > > The problem is that > > c[0] is not an array it is a scalar. This would not have worked > with > typecode='D' with Numeric either... > > c.real[0] = 1.0 will work. Travis, Thanks for the quick answer. That does indeed fix the problem. It is a very strange syntax, however. Is there some reason why the real part of a scalar can't be set with that syntax (c[0].real = 2.3)? Is there any big inefficiency with c.real[0]? It seems like converting a whole array to real, then grabbing the 0th element, which is counter-intuitive for an operation where I want to grab the 0th element and set its real part to some number. Cheers, -- Paul -- Dr. Paul S. 
Ray E-mail: Paul.Ray at nrl.navy.mil Naval Research Laboratory WWW : http://xweb.nrl.navy.mil/ personnel/paulr/ Code 7655 Phone : (202) 404-1619 Washington, DC 20375 AIM : NRLPSR From oliphant.travis at ieee.org Mon Jan 16 03:14:33 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 16 Jan 2006 01:14:33 -0700 Subject: [SciPy-user] Setting the real part of a complex array element In-Reply-To: <0CA8F11D-4B48-4EAA-97E5-802A458F85DD@nrl.navy.mil> References: <43CAD434.2040201@ieee.org> <0CA8F11D-4B48-4EAA-97E5-802A458F85DD@nrl.navy.mil> Message-ID: <43CB55E9.4080305@ieee.org> Paul Ray wrote: >On Jan 15, 2006, at 6:01 PM, Travis Oliphant wrote: > > > >>c.real[0] = 1.0 >> >>The problem is that >> >>c[0] is not an array it is a scalar. This would not have worked >>with >>typecode='D' with Numeric either... >> >>c.real[0] = 1.0 will work. >> >> > >Travis, > >Thanks for the quick answer. That does indeed fix the problem. > >It is a very strange syntax, however. Is there some reason why the >real part of a scalar can't be set with that syntax (c[0].real = 2.3)? > > Yes, because c[0] actually copies an element from the array into a scalar (in exactly the same way the Numeric did for only some data-types). NumPy is more consistent and does it for all data-types (well void scalars are an exception (to support records) but that's another matter). It used to work before in this particular case because certain versions of Numeric would return 0-d arrays for certain data-types (like old typecode 'F'). But, this was really an anomaly of Numeric because as I pointed out, the syntax would not have worked before if you'd used typecode 'D' scalars. >Is there any big inefficiency with c.real[0]? > No, not particularly. What you get with c.real is actually a view of the data with strides adjusted so that you skip over the imaginary parts. Then, on this new array you set the particular item to what you want. The difference is that Python allows defining separate behavior for "(obj)[index] = " and "(obj)[index]". The syntax "(obj)[index].attribute = " is always interpreted as "a = (obj)[index]" followed by "a.attribute = " >It seems like >converting a whole array to real, then grabbing the 0th element, >which is counter-intuitive for an operation where I want to grab the >0th element and set its real part to some number. > > Isn't that a matter of perspective? I see it has selecting a view of the real-part of the array and setting the 0th element of that view to a specific value, which seems perfectly natural to me. The issue is that you can't "grab" the 0th element (which by the operation of "grabbing" actually copies the data out into another object), and then set the corresponding object to anything that will affect the underlying array it came from. That's the concept that needs to be understood. In C, something like that would work because you are just dealing with memory pointers and C-structures, but in Python that won't work. -Travis From oliphant.travis at ieee.org Mon Jan 16 03:17:39 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 16 Jan 2006 01:17:39 -0700 Subject: [SciPy-user] problems with stats.py In-Reply-To: References: Message-ID: <43CB56A3.5090007@ieee.org> oliver.tomic at matforsk.no wrote: >Hi there, > >I am developing an application that amongst others uses stats.py from Gary >Strangman (version 0.6, May 10, 2002). Since we've been encouraged to >switch over to Numpy I tried to do so. Now however I am running into >problems. 
> >Previously I used Numeric 23.8 and everything worked fine. Then I installed >Numpy and replaced 'Numeric' with 'Numpy' everywhere in the code. Now the >following occurs: > > Did you try using "numpy.lib.convertcode" ? Did you convert stats.py as well? There is a version of stats.py in (full) SciPy that seems to work fine, as well. -Travis From schofield at ftw.at Mon Jan 16 05:16:46 2006 From: schofield at ftw.at (Ed Schofield) Date: Mon, 16 Jan 2006 11:16:46 +0100 Subject: [SciPy-user] ANN: SciPy 0.4.4 released Message-ID: <43CB728E.1020400@ftw.at> =========================== SciPy 0.4.4 Scientific tools for Python =========================== I'm pleased to announce the release of SciPy 0.4.4. This is the first release to support the new NumPy package (version 0.9.2). This release reflects 14 months of development since SciPy 0.3.1. It provides much new functionality, many bug fixes, and a simpler installation procedure. It is available for download from http://new.scipy.org/Wiki/Download as a source tarball for Linux/Solaris/OS X/BSD/Windows (64-bit and 32-bit) and as an executable installer for Win32. More information on SciPy is available at http://new.scipy.org/Wiki/ =========================== SciPy is an Open Source library of scientific tools for Python. It contains a variety of high-level science and engineering modules, including modules for statistics, optimization, integration, linear algebra, Fourier transforms, signal and image processing, genetic algorithms, ODE solvers, special functions, and more. From e.tadeu at gmail.com Mon Jan 16 07:29:25 2006 From: e.tadeu at gmail.com (Edson Tadeu) Date: Mon, 16 Jan 2006 10:29:25 -0200 Subject: [SciPy-user] ANN: SciPy 0.4.4 released In-Reply-To: <43CB728E.1020400@ftw.at> References: <43CB728E.1020400@ftw.at> Message-ID: Hi, Thanks for this release, SciPy is a very useful tool/framework, and now we can officially use SciPy in Python 2.4 too :) I think xplt is missing in SciPy 0.4.4 binaries. I installed "SciPy 0.4.4for Python 2.4 and Pentium 3" binary, but site-packages/scipy/xplt only has gistdemomovie.py and pyc! On 1/16/06, Ed Schofield wrote: > > =========================== > SciPy 0.4.4 > Scientific tools for Python > =========================== > > I'm pleased to announce the release of SciPy 0.4.4. This is the first > release to support the new NumPy package (version 0.9.2). This release > reflects 14 months of development since SciPy 0.3.1. It provides much > new functionality, many bug fixes, and a simpler installation procedure. > > It is available for download from > > http://new.scipy.org/Wiki/Download > > as a source tarball for Linux/Solaris/OS X/BSD/Windows (64-bit and > 32-bit) and as an executable installer for Win32. > > More information on SciPy is available at > > http://new.scipy.org/Wiki/ > > =========================== > > SciPy is an Open Source library of scientific tools for Python. It > contains a variety of high-level science and engineering modules, > including modules for statistics, optimization, integration, linear > algebra, Fourier transforms, signal and image processing, genetic > algorithms, ODE solvers, special functions, and more. > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From d.howey at imperial.ac.uk Mon Jan 16 08:38:39 2006 From: d.howey at imperial.ac.uk (Howey, David A) Date: Mon, 16 Jan 2006 13:38:39 -0000 Subject: [SciPy-user] Suggested Linux Distribution? Message-ID: <056D32E9B2D93B49B01256A88B3EB21801160861@icex2.ic.ac.uk> I've heard good things about Ubuntu - look for messages from Ryan Krauss (or email him) on this newslist Dave ________________________________ From: scipy-user-bounces at scipy.net [mailto:scipy-user-bounces at scipy.net] On Behalf Of Bill Dandreta Sent: 17 August 2005 23:13 To: SciPy Users List Subject: Re: [SciPy-user] Suggested Linux Distribution? >> Can people recommend a Linux distribution that "just works" for scientific computing? << I have it working under Gentoo but it took some work because I had to tweak the ebuild file. Also installing Gentoo is a royal PITA, it has no installer so you have to do everything from the command line. The documentation is excellent and easy to follow but it takes a couple of days to get the system and applications compiled. The benefit is that once installed (and in the case of scipy, which is not officially supported, tweaking the unofficial ebuild) emerge scipy is all you have to do to get software installed, it gets scipy and all its dependencies from a Gentoo mirror, compiles and installs them.. You might want to check out The Quantian Scientific Computing Environmen t, it has scipy plus many other scientific applications. You can boot and run it from your cdrom. Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Mon Jan 16 09:16:57 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 16 Jan 2006 08:16:57 -0600 Subject: [SciPy-user] ANN: SciPy 0.4.4 released In-Reply-To: References: <43CB728E.1020400@ftw.at> Message-ID: <43CBAAD9.7010603@gmail.com> Edson Tadeu wrote: > Hi, > > Thanks for this release, SciPy is a very useful tool/framework, and now > we can officially use SciPy in Python 2.4 too :) > > I think xplt is missing in SciPy 0.4.4 binaries. I installed "SciPy > 0.4.4 for Python 2.4 and Pentium 3" binary, but site-packages/scipy/xplt > only has gistdemomovie.py and pyc! xplt, gplt and plt have been removed from scipy. They currently live in the scipy.sandbox namespace, but that is probably not built for official releases. They will not be returning to scipy. There has been some stated interest in taking up the maintenance of xplt as a separate package, but no one has announced a release or a new home for the code. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From mfmorss at aep.com Mon Jan 16 09:30:48 2006 From: mfmorss at aep.com (mfmorss at aep.com) Date: Mon, 16 Jan 2006 09:30:48 -0500 Subject: [SciPy-user] Numpy and Pytables Message-ID: Is anyone aware whether these two are compatible? The Pytables documentation says that, while it's primarily designed to work with Numarray, it will also work with Numeric. My limited understanding is that Numpy is consistent with Numeric; hence my question. Mark F. 
Morss Principal Analyst, Market Risk American Electric Power From schofield at ftw.at Mon Jan 16 09:43:46 2006 From: schofield at ftw.at (Ed Schofield) Date: Mon, 16 Jan 2006 15:43:46 +0100 Subject: [SciPy-user] ANN: SciPy 0.4.4 released In-Reply-To: References: <43CB728E.1020400@ftw.at> Message-ID: <43CBB122.9040308@ftw.at> Edson Tadeu wrote: > Hi, > > Thanks for this release, SciPy is a very useful tool/framework, and > now we can officially use SciPy in Python 2.4 too :) > > I think xplt is missing in SciPy 0.4.4 binaries. I installed "SciPy > 0.4.4 for Python 2.4 and Pentium 3" binary, but > site-packages/scipy/xplt only has gistdemomovie.py and pyc! Hi Edson, xplt is no longer included in scipy, as Robert mentioned. If you really need it, you could try building it from the Subversion repository, where it's living in the "sandbox". This would require some patience, though, because it probably hasn't been tested since the NumPy transition -- but if you're willing to do some hacking we could point you in the right direction on this mailing list. Otherwise, I suggest you think about migrating to another Python plotting package like matplotlib ... -- Ed From massimo.sandal at unibo.it Mon Jan 16 09:52:57 2006 From: massimo.sandal at unibo.it (massimo sandal) Date: Mon, 16 Jan 2006 15:52:57 +0100 Subject: [SciPy-user] ANN: SciPy 0.4.4 released In-Reply-To: <43CBAAD9.7010603@gmail.com> References: <43CB728E.1020400@ftw.at> <43CBAAD9.7010603@gmail.com> Message-ID: <43CBB349.2010605@unibo.it> > xplt, gplt and plt have been removed from scipy. They currently live in the > scipy.sandbox namespace, but that is probably not built for official releases. What about integration / relationships with matplotlib? m. -- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... Name: massimo.sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From ryanlists at gmail.com Mon Jan 16 09:56:08 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Mon, 16 Jan 2006 09:56:08 -0500 Subject: [SciPy-user] Suggested Linux Distribution? In-Reply-To: <056D32E9B2D93B49B01256A88B3EB21801160861@icex2.ic.ac.uk> References: <056D32E9B2D93B49B01256A88B3EB21801160861@icex2.ic.ac.uk> Message-ID: I have been really happy with Ubuntu, but this is one of those loaded questions among Linux users. Installing all the prerequisites for SciPy was fairly straight forward (I think almost all of them are in the package system). Ryan On 1/16/06, Howey, David A wrote: > > I've heard good things about Ubuntu - look for messages from Ryan Krauss (or > email him) on this newslist > > Dave > > ________________________________ > From: scipy-user-bounces at scipy.net [mailto:scipy-user-bounces at scipy.net] On > Behalf Of Bill Dandreta > Sent: 17 August 2005 23:13 > To: SciPy Users List > Subject: Re: [SciPy-user] Suggested Linux Distribution? > > > >> Can people recommend a Linux distribution that "just works" for > scientific computing? << > > I have it working under Gentoo but it took some work because I had to tweak > the ebuild file. Also installing Gentoo is a royal PITA, it has no installer > so you have to do everything from the command line. The documentation is > excellent and easy to follow but it takes a couple of days to get the system > and applications compiled. 
The benefit is that once installed (and in the > case of scipy, which is not officially supported, tweaking the unofficial > ebuild) > > emerge scipy > > is all you have to do to get software installed, it gets scipy and all its > dependencies from a Gentoo mirror, compiles and installs them.. > > You might want to check out The Quantian Scientific Computing Environment, > it has scipy plus many other scientific applications. You can boot and run > it from your cdrom. > > Bill > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > > > From aisaac at american.edu Mon Jan 16 10:01:36 2006 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 16 Jan 2006 10:01:36 -0500 Subject: [SciPy-user] ANN: SciPy 0.4.4 released In-Reply-To: <43CB728E.1020400@ftw.at> References: <43CB728E.1020400@ftw.at> Message-ID: On Mon, 16 Jan 2006, Ed Schofield apparently wrote: > http://new.scipy.org/Wiki/Download > as a source tarball for Linux/Solaris/OS X/BSD/Windows > (64-bit and 32-bit) and as an executable installer for > Win32. Wonderful! Can you please clarify one thing: the relationship established between numpy and SciPy. Specifically, after SciPy is installed, will I still be able to import numpy and see just numpy (nothing from scipy)? I did not understand the final resolution of this issue, which I believe got some discussion. Thank you, Alan Isaac From aisaac at american.edu Mon Jan 16 10:01:38 2006 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 16 Jan 2006 10:01:38 -0500 Subject: [SciPy-user] ANN: SciPy 0.4.4 released In-Reply-To: <43CBAAD9.7010603@gmail.com> References: <43CB728E.1020400@ftw.at><43CBAAD9.7010603@gmail.com> Message-ID: On Mon, 16 Jan 2006, Robert Kern apparently wrote: > xplt, gplt and plt have been removed from scipy. They > currently live in the scipy.sandbox namespace, but that is > probably not built for official releases. They will not > be returning to scipy. There has been some stated interest > in taking up the maintenance of xplt as a separate > package, but no one has announced a release or a new home > for the code. Might it be worth announcing the need for a new home for xplt on comp.lang.python? And what about the Yorick maintainers: might they be interested? fwiw, Alan Isaac From madhadron at gmail.com Mon Jan 16 09:58:11 2006 From: madhadron at gmail.com (Frederick Ross) Date: Mon, 16 Jan 2006 09:58:11 -0500 Subject: [SciPy-user] FFT Problem Message-ID: I'm running the SVN builds from http://homepage.mac.com/fonnesbeck/mac/ on MacOS 10.4.4 on a 2005 PowerBook G4, and have the following peculiar problem: >>> import numpy >>> a = numpy.array([1, 3, 5, 4]) >>> print numpy.ifft(numpy.fft(a)) [ 1.+0.j 3.+0.j 5.+0.j 4.+0.j] >>> import scipy Overwriting fft= from scipy.fftpack.basic (was from numpy.dft.fftpack) Overwriting ifft= from scipy.fftpack.basic (was from numpy.dft.fftpack) >>> print scipy.ifft(scipy.fft(a)) [ 1. +0.j 3.5-0.j 5. +0.j 3.5+0.j] Any suggestions on what might be causing this would be much appreciated. -- Frederick Ross Graduate Fellow The Rockefeller University From mfmorss at aep.com Mon Jan 16 10:08:25 2006 From: mfmorss at aep.com (mfmorss at aep.com) Date: Mon, 16 Jan 2006 10:08:25 -0500 Subject: [SciPy-user] Suggested Linux Distribution? 
In-Reply-To: <056D32E9B2D93B49B01256A88B3EB21801160861@icex2.ic.ac.uk> Message-ID: For scientific computing I would strongly recommend a distribution that compiles from source; that's the only way to exploit the full power of your machine (most binary distros, for universality's sake, are compiled for the i386). The major such distribution is Gentoo. A very good one, which I have used for a few years with great satisfaction, is Lunar (http:http://www.lunar-linux.org/). Also with such distributions, you don't get a boatload of crap installed on your machine, only the packages you want. I don't run KDE or Gnome, for example but rather a very austere window manager; I would rather not spend computational resources on pretty icons and buttons. Lunar admin software is written in bash, that of Gentoo in Python, which makes Lunar somewhat more accessible for most people. But for people on this list, I would think that Gentoo would be a natural. Both distros very nicely automate the delivery and maintenance of many packages, but Gentoo supports many more than Lunar. It's usually easy to install a package not supported by your distro, but maintaining many unsupported packages over a period of time can be troublesome. Under both Gentoo and Lunar you can write package admin softare yourself, and make your installation take care of your personal packages just as if they were supported by the distribution at large. I doubt the accuracy of the statement below that Gentoo installation proceeds wholly by command-line interractions. Lunar installs from a pretty nice ncurses interface, and my impression was that Gentoo also had an automated installer. If you go with a compiled distribution, you do accept a much larger burden of system administration. That seems to be the price of having a fast, lean system. Besides a compiled distribution, if you are running on an i686 you could also consider Arch Linux. Mark F. Morss Principal Analyst, Market Risk American Electric Power "Howey, David A" To Sent by: "SciPy Users List" scipy-user-bounce s at scipy.net cc Subject 01/16/2006 08:38 Re: [SciPy-user] Suggested Linux AM Distribution? Please respond to SciPy Users List I've heard good things about Ubuntu - look for messages from Ryan Krauss (or email him) on this newslist Dave From: scipy-user-bounces at scipy.net [mailto:scipy-user-bounces at scipy.net] On Behalf Of Bill Dandreta Sent: 17 August 2005 23:13 To: SciPy Users List Subject: Re: [SciPy-user] Suggested Linux Distribution? >> Can people recommend a Linux distribution that "just works" for scientific computing? << I have it working under Gentoo but it took some work because I had to tweak the ebuild file. Also installing Gentoo is a royal PITA, it has no installer so you have to do everything from the command line. The documentation is excellent and easy to follow but it takes a couple of days to get the system and applications compiled. The benefit is that once installed (and in the case of scipy, which is not officially supported, tweaking the unofficial ebuild) emerge scipy is all you have to do to get software installed, it gets scipy and all its dependencies from a Gentoo mirror, compiles and installs them.. You might want to check out The Quantian Scientific Computing Environment, it has scipy plus many other scientific applications. You can boot and run it from your cdrom. 
Bill _______________________________________________ SciPy-user mailing list SciPy-user at scipy.net http://www.scipy.net/mailman/listinfo/scipy-user From nwagner at mecha.uni-stuttgart.de Mon Jan 16 10:14:58 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Mon, 16 Jan 2006 16:14:58 +0100 Subject: [SciPy-user] FFT Problem In-Reply-To: References: Message-ID: <43CBB872.4030801@mecha.uni-stuttgart.de> Frederick Ross wrote: >I'm running the SVN builds from >http://homepage.mac.com/fonnesbeck/mac/ on MacOS 10.4.4 on a 2005 >PowerBook G4, and have the following peculiar problem: > > >>>>import numpy >>>>a = numpy.array([1, 3, 5, 4]) >>>>print numpy.ifft(numpy.fft(a)) >>>> >[ 1.+0.j 3.+0.j 5.+0.j 4.+0.j] > >>>>import scipy >>>> >Overwriting fft= from scipy.fftpack.basic >(was from numpy.dft.fftpack) >Overwriting ifft= from scipy.fftpack.basic >(was from numpy.dft.fftpack) > >>>>print scipy.ifft(scipy.fft(a)) >>>> >[ 1. +0.j 3.5-0.j 5. +0.j 3.5+0.j] > >Any suggestions on what might be causing this would be much appreciated. > >-- >Frederick Ross >Graduate Fellow >The Rockefeller University > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > I cannot reproduce your results. Note that there is no message starting with "Overwriting" on my system >>> import numpy >>> a = numpy.array([1, 3, 5, 4]) >>> print numpy.ifft(numpy.fft(a)) [ 1.+0.j 3.+0.j 5.+0.j 4.+0.j] >>> import scipy >>> print scipy.ifft(scipy.fft(a)) [ 1.+0.j 3.+0.j 5.+0.j 4.+0.j] >>> scipy.__version__ '0.4.4.1556' >>> numpy.__version__ '0.9.4.1914' Nils From dd55 at cornell.edu Mon Jan 16 10:18:48 2006 From: dd55 at cornell.edu (Darren Dale) Date: Mon, 16 Jan 2006 10:18:48 -0500 Subject: [SciPy-user] Suggested Linux Distribution? In-Reply-To: References: <056D32E9B2D93B49B01256A88B3EB21801160861@icex2.ic.ac.uk> Message-ID: <200601161018.48557.dd55@cornell.edu> On Monday 16 January 2006 09:56, Ryan Krauss wrote: > I have been really happy with Ubuntu, but this is one of those loaded > questions among Linux users. Installing all the prerequisites for > SciPy was fairly straight forward (I think almost all of them are in > the package system). For what its worth: I am a longtime gentoo user. I was recently considering switching to ubuntu, mainly because when I tell everyone how great Linux is, I can't recommend gentoo due to how difficult it is to install. Last week I tried installing kubuntu, and had a hard time understanding the package manager, how to upgrade to the most recent kernel, how do I find the full lapack installation required to build scipy, etc. Compared to gentoo, the package system seemed a bit messy (universe, multiverse, etc) and not as current. I guess I am sticking with gentoo and hoping for the gentoo installer project to mature, so I dont have to waste two days setting up my next computer. Darren From rshepard at appl-ecosys.com Mon Jan 16 10:20:23 2006 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Mon, 16 Jan 2006 07:20:23 -0800 (PST) Subject: [SciPy-user] Suggested Linux Distribution? In-Reply-To: References: Message-ID: On Mon, 16 Jan 2006, mfmorss at aep.com wrote: > For scientific computing I would strongly recommend a distribution that > compiles from source; that's the only way to exploit the full power of your > machine (most binary distros, for universality's sake, are compiled for the > i386). With any distribution you can build a custom kernel; as a matter of fact, you should. 
That's where you get the fine-tuning of your installation. I ran Red Hat for six years, then switched to Slackware about 2.5 years ago to get away from the bleeding edge and upgrade dependency hell. It's one of the most secure, stable, reliable distributions available. And, you don't need to build it all from source. If you want to spend a lot of time doing so, then gentoo is probably a good choice. However, if you want a clean, easy installation of a distribution that's solid as a rock and more than a decade old, take a look at . I call it the "quiet distribution." Rich -- Richard B. Shepard, Ph.D. | Author of "Quantifying Environmental Applied Ecosystem Services, Inc. (TM) | Impact Assessments Using Fuzzy Logic" Voice: 503-667-4517 Fax: 503-667-8863 From madhadron at gmail.com Mon Jan 16 10:21:36 2006 From: madhadron at gmail.com (Frederick Ross) Date: Mon, 16 Jan 2006 10:21:36 -0500 Subject: [SciPy-user] FFT Problem In-Reply-To: <43CBB872.4030801@mecha.uni-stuttgart.de> References: <43CBB872.4030801@mecha.uni-stuttgart.de> Message-ID: Then it's probably the details of my setup. I'll try building from scratch. Thanks. On 1/16/06, Nils Wagner wrote: > I cannot reproduce your results. Note that there is no message starting > with "Overwriting" on my system > > >>> import numpy > >>> a = numpy.array([1, 3, 5, 4]) > >>> print numpy.ifft(numpy.fft(a)) > [ 1.+0.j 3.+0.j 5.+0.j 4.+0.j] > >>> import scipy > >>> print scipy.ifft(scipy.fft(a)) > [ 1.+0.j 3.+0.j 5.+0.j 4.+0.j] > >>> scipy.__version__ > '0.4.4.1556' > >>> numpy.__version__ > '0.9.4.1914' > > Nils > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -- Frederick Ross I AM NOT FRED CROSS From Jonathan.Peirce at nottingham.ac.uk Mon Jan 16 10:35:04 2006 From: Jonathan.Peirce at nottingham.ac.uk (Jon Peirce) Date: Mon, 16 Jan 2006 15:35:04 +0000 Subject: [SciPy-user] plotting packages [was ANN: SciPy 0.4.4 released] In-Reply-To: References: Message-ID: <43CBBD28.5080605@psychology.nottingham.ac.uk> An HTML attachment was scrubbed... URL: From a.h.jaffe at gmail.com Mon Jan 16 10:35:33 2006 From: a.h.jaffe at gmail.com (Andrew Jaffe) Date: Mon, 16 Jan 2006 15:35:33 +0000 Subject: [SciPy-user] numpy.dft.real+fft problem? Message-ID: <43CBBD45.4050306@gmail.com> Hi All, [This has already appeared on python-scientific-devel; apologies to those of you seeing it twice...] There seems to be a problem with the real_fft routine; starting with a length n array, it should give a length n/2+1 array with real numbers in the first and last positions (for n even). However, note that the last entry on the first row has a real part, as opposed to the complex routine fft, which has it correct. The weirdness of the real part seems to imply it's due to uninitialized memory. The problem persists in 2d, as well. 
In [6]:numpy.__version__ Out[6]:'0.9.4.1915' In [53]:a = numpy.randn(16) In [54]:fa = numpy.dft.real_fft(a) In [55]:fa Out[55]: array([ 0.4453+0.0000e+000j, -3.5009-3.4618e+000j, -0.1276+3.5740e-001j, -1.043+2.3504e+000j, -1.2028+1.6121e-001j, -7.1041-1.1182e+000j, 0.09-1.6013e-001j, 4.2079-1.0106e-001j, -4.8295+2.3091e+251j]) **wrong** ^^^^^^^^^^^^^ In [56]:f2a = numpy.dft.fft(a) In [57]:f2a Out[57]: array([ 0.4453+0.j , -3.5009-3.4618j, -0.1276+0.3574j, -1.043+2.3504j, -1.2028+0.1612j, -7.1041-1.1182j, 0.09-0.1601j, 4.2079-0.1011j, -4.8295+0.j , **correct** ^^^ 4.2079+0.1011j, 0.09+0.1601j, -7.1041+1.1182j, -1.2028-0.1612j, -1.043-2.3504j,-0.1276-0.3574j, -3.5009+3.4618j]) -Andrew From mfmorss at aep.com Mon Jan 16 10:46:54 2006 From: mfmorss at aep.com (mfmorss at aep.com) Date: Mon, 16 Jan 2006 10:46:54 -0500 Subject: [SciPy-user] Suggested Linux Distribution? In-Reply-To: Message-ID: >With any distribution you can build a custom kernel; as a matter of fact, >you should. That's where you get the fine-tuning of your installation. It is, of course, not only the kernel but all the packages that one runs, e.g. gcc itself, Python, R, netCDF, whatever, that is locally compiled when running a compiled distribution. In each such case, therefore, there is significantly greater exploitation of any given machine's computational power. This may not matter much to most people, but the question was about building a system for scientific applications. If you want things to run fast, compile them from source, optimized for your own machine. The compiled distributions greatly facilitate this. Mark F. Morss Principal Analyst, Market Risk American Electric Power From Jonathan.Peirce at nottingham.ac.uk Mon Jan 16 10:50:26 2006 From: Jonathan.Peirce at nottingham.ac.uk (Jon Peirce) Date: Mon, 16 Jan 2006 15:50:26 +0000 Subject: [SciPy-user] plotting packages [was ANN: SciPy 0.4.4 released] In-Reply-To: <43CBBD28.5080605@psychology.nottingham.ac.uk> References: <43CBBD28.5080605@psychology.nottingham.ac.uk> Message-ID: <43CBC0C2.1060108@psychology.nottingham.ac.uk> Jon Peirce wrote: >> On Mon, 16 Jan 2006, Robert Kern apparently wrote: >>> > xplt, gplt and plt have been removed from scipy. They >>> > currently live in the scipy.sandbox namespace, but that is >>> > probably not built for official releases. They will not >>> > be returning to scipy. There has been some stated interest >>> > in taking up the maintenance of xplt as a separate >>> > package, but no one has announced a release or a new home >>> > for the code. >>> >> Might it be worth announcing the need for a new home for >> xplt on comp.lang.python? And what about the Yorick >> maintainers: might they be interested? >> >> fwiw, >> Alan Isaac >> > Maybe instead we could let xplt die a peaceful death? For years the > python community suffered with a large number of half-finished > products for plotting (sorry to sound so melodramatic - admittedly i > dont think anyone actually died from the problem of the inadequate > plotting package!). Then John Hunter came along (hail the conquering > hero) and did wonders in making Matplotlib the central plotting > package that could accomplish an enormous variety of very high quality > plots but remain extremely usable. > > Announcing a new home for xplt (or the need for one) makes it sound > like this is also a suitable alternative to matplotlib, but for most > people that really isn't true. 
IMHO it would be better for everyone to > point to Matplotlib, and if there is some feature it missing, let's > prioritise getting them in there rather than maintianing two packages. > (e.g. mesh plots?) > > just my $0.02 > Jon > sorry - my signature file has a URL, and apparently that's bad! Jon This message has been checked for viruses but the contents of an attachment may still contain software viruses, which could damage your computer system: you are advised to perform your own checks. Email communications with the University of Nottingham may be monitored as permitted by UK legislation. From massimo.sandal at unibo.it Mon Jan 16 10:57:56 2006 From: massimo.sandal at unibo.it (massimo sandal) Date: Mon, 16 Jan 2006 16:57:56 +0100 Subject: [SciPy-user] Suggested Linux Distribution? In-Reply-To: References: Message-ID: <43CBC284.1080809@unibo.it> Before all, a friendly advice if you never used Linux/*BSD. No matter what Linux distro will you install, before doing it *read some documentation first*. You can find a lot of tutorials etc. on the internet. Do it and do it *first*, before even thinking of installing anything. Linux is generally not difficult (no more than Windows), but it's quite different, and you have to know its basics both to not stumble upon problems and fully appreciate its power. Switching to Linux without some knowledge of its basics is like being parachuted in an unknown city without a map and trying to find the post office. Remember again: *read it first*. When I first installed Linux two years ago I took the time to download and read a bunch of tutorials, introductions etc. and it saved me a lot of grief. I immediately felt at /home ;). mfmorss at aep.com wrote: > For scientific computing I would strongly recommend a distribution that > compiles from source; that's the only way to exploit the full power of your > machine (most binary distros, for universality's sake, are compiled for the > i386). The major such distribution is Gentoo. I am a Gentoo user myself and I would recommend Gentoo too for the same reasons. If you never used Linux it can be a bit harsh introduction (but, hey, we're scientists! we don't fear the unknown!), but you surely will learn a lot of things in the install process and you will appreciate the flexibility of a computing environment you can fully tailor around your own needs. Moreover documentation is truly excellent, and both the forums and the gentoo-user mailing list are wonderful helps, among the best in the Linux community. > I doubt the accuracy of the statement below that Gentoo installation > proceeds wholly by command-line interractions. Ehm, no. Gentoo installation is still mainly command line driven. But the documentation holds your hands quite easily. Keep another PC to connect to the internet if you need help during the process is highly advisable. > If you go with a compiled distribution, you do > accept a much larger burden of system administration. That seems to be the > price of having a fast, lean system. Not really more than with other distros, after the first configurations. -- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: massimo.sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From giovanni.samaey at cs.kuleuven.ac.be Mon Jan 16 11:05:58 2006 From: giovanni.samaey at cs.kuleuven.ac.be (Giovanni Samaey) Date: Mon, 16 Jan 2006 17:05:58 +0100 Subject: [SciPy-user] Numpy and Pytables In-Reply-To: References: Message-ID: <43CBC466.80103@cs.kuleuven.ac.be> No, but they are working on it -- see a question of mine from last week. My current work-around is to copy element by element from a numarray array into a scipy array. (I agree this looks stupid when you read it, but I get all sorts of strange errors when I attempt other things.) Giovanni mfmorss at aep.com wrote: >Is anyone aware whether these two are compatible? The Pytables >documentation says that, while it's primarily designed to work with >Numarray, it will also work with Numeric. My limited understanding is that >Numpy is consistent with Numeric; hence my question. > >Mark F. Morss >Principal Analyst, Market Risk >American Electric Power > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From chris at trichech.us Mon Jan 16 11:11:38 2006 From: chris at trichech.us (Christopher Fonnesbeck) Date: Mon, 16 Jan 2006 11:11:38 -0500 Subject: [SciPy-user] SciPy import error in linalg Message-ID: <4210AA89-BA90-48C4-B233-468492B29E86@trichech.us> From today's svn, I get the following import error from linalg: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site- packages/scipy-0.4.4.1557-py2.4-macosx-10.4-ppc.egg/scipy/linalg/ __init__.py 6 from linalg_version import linalg_version as __version__ 7 ----> 8 from basic import * global basic = undefined 9 from decomp import * 10 from matfuncs import * /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site- packages/scipy-0.4.4.1557-py2.4-macosx-10.4-ppc.egg/scipy/linalg/ basic.py 15 #from blas import get_blas_funcs 16 #from lapack import get_lapack_funcs ---> 17 from flinalg import get_flinalg_funcs global flinalg = undefined get_flinalg_funcs = undefined 18 from scipy.lib.lapack import get_lapack_funcs 19 from numpy import asarray,zeros,sum,NewAxis,greater_equal,subtract,arange,\ /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site- packages/scipy-0.4.4.1557-py2.4-macosx-10.4-ppc.egg/scipy/linalg/ flinalg.py 13 import _flinalg 14 except ImportError: ---> 15 from numpy.distutils.misc_util import PostponedException global numpy.distutils.misc_util = undefined PostponedException = undefined 16 _flinalg = PostponedException() 17 print _flinalg.__doc__ ImportError: cannot import name PostponedException -- Christopher J. Fonnesbeck Population Ecologist, Marine Mammal Section Fish & Wildlife Research Institute (FWC) St. Petersburg, FL Adjunct Assistant Professor Warnell School of Forest Resources University of Georgia Athens, GA T: 727.235.5570 E: chris at trichech.us -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 2417 bytes Desc: not available URL: From philou at philou.ch Mon Jan 16 11:42:39 2006 From: philou at philou.ch (Philippe Strauss) Date: Mon, 16 Jan 2006 17:42:39 +0100 Subject: [SciPy-user] scipy 0.4.4 and weave Message-ID: <20060116164239.GA15245@philou.ch> Hello, Using the weave package from scipy 0.4.4, the examples script shipped does not work! 
The namespace has changed but not the examples. Any news from the weave package maintainer?

--
Philippe Strauss
av. de Beaulieu 25
1004 Lausanne
http://www.maitre-toilier.ch/

From wjdandreta at att.net  Mon Jan 16 13:07:58 2006
From: wjdandreta at att.net (Bill Dandreta)
Date: Mon, 16 Jan 2006 13:07:58 -0500
Subject: [SciPy-user] Suggested Linux Distribution?
In-Reply-To:
References:
Message-ID: <43CBE0FE.1000502@att.net>

Rich Shepard wrote:

> I ran Red Hat for six years, then switched to Slackware about 2.5 years ago
>to get away from the bleeding edge and upgrade dependency hell. It's one of
>the most secure, stable, reliable distributions available. And, you don't
>need to build it all from source. If you want to spend a lot of time doing
>so, then gentoo is probably a good choice. However, if you want a clean, easy
>installation of a distribution that's solid as a rock and more than a decade
>old, take a look at . I call it the "quiet
>distribution."

I used Slackware for several years and liked it very much. One major problem is that there is no 64-bit Slackware.

I switched to Gentoo over a year ago and find it very easy to use. Installation is time consuming (it takes a couple of days if you compile everything from source). It has the best software administration tool (Portage).

One of the biggest administration headaches with other distros is that your configuration files get clobbered when you upgrade software. Gentoo archives the config files so you can roll back to earlier versions if you want to.

If you like to use the latest version of software, Gentoo is very quick at releasing stable ebuilds. Most distros run at least 1-2 releases behind what is available from developers. Gentoo has stable ebuilds available usually within a few days of a new release (I've seen it take as long as a month for more complex or obscure software). The latest kernel is also made available quickly.

There is a Scientific Gentoo Project with its own mailing list for questions about scientific software.

Bill

From dkuhlman at cutter.rexx.com  Mon Jan 16 13:17:56 2006
From: dkuhlman at cutter.rexx.com (Dave Kuhlman)
Date: Mon, 16 Jan 2006 10:17:56 -0800
Subject: [SciPy-user] Numpy and Pytables
In-Reply-To: <43CBC466.80103@cs.kuleuven.ac.be>
References: <43CBC466.80103@cs.kuleuven.ac.be>
Message-ID: <20060116181755.GA36500@cutter.rexx.com>

On Mon, Jan 16, 2006 at 05:05:58PM +0100, Giovanni Samaey wrote:
> No, but they are working on it -- see a question of mine from last week.
> My current work-around is to copy element by element from a numarray array
> into a scipy array. (I agree this looks stupid when you read it, but I
> get all sorts of strange errors when I attempt other things.)

This was discussed several weeks ago. I believe that the solution offered was to use asarray():

    numarray_array = numarray.asarray(scipy_array)

And:

    scipy_array = scipy.asarray(numarray_array)

If you get strange errors when using these, perhaps you have found a bug.
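For example, a minimal sketch of the round trip (untested on my side, and assuming both packages are importable in the same interpreter; exact spellings may differ between versions):

    import numarray
    import scipy

    na = numarray.array([1.0, 2.0, 3.0])   # a plain numarray array
    sp = scipy.asarray(na)                 # numarray -> scipy/numpy, no element-by-element loop
    back = numarray.asarray(sp)            # and back the other way

If something like this still produces the strange errors you describe, that is probably worth reporting.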
Dave -- Dave Kuhlman http://www.rexx.com/~dkuhlman From zollars at caltech.edu Mon Jan 16 13:32:15 2006 From: zollars at caltech.edu (Eric Zollars) Date: Mon, 16 Jan 2006 10:32:15 -0800 Subject: [SciPy-user] numpy and fcompiler In-Reply-To: References: <43C800DB.6000707@caltech.edu> <43C83FA7.10107@caltech.edu> Message-ID: <43CBE6AF.8000001@caltech.edu> > Change the line > > compiler = new_fcompiler(compiler='ibm') > > to > > compiler = IbmFCompiler() > > Also, what are the values > > os.name > sys.platform > > in your system? Currently ibm compiler is enabled only for aix platforms, > see _default_compilers dictionary in > numpy/distutils/fcompiler/__init__.py. > > Pearu os.name = posix sys.platform = linux2 I added ibm to _default_compilers for linux.* in __init__.py. But I was still getting 'None' for compiler.get_version(). In the def get_version() in ibm.py there is this code: l = [d for d in l if os.path.isfile(os.path.join(xlf_dir,d,'xlf.cfg'))] if not l: from distutils.version import LooseVersion self.version = version = LooseVersion(l[0]) However 'l' is defined in this case ['8.1'], so I added the else statement else: version = l[0] Now the ibm compiler is detected correctly. However the build is failing with cannot find 'bundle1.o'. This appears to be added to the xlf.cfg in ibm.py in the get_flags_linker_so() function. I am not sure what this does. Let me know if this is AIX specific. Also, in the same function '-bshared' is added as an option for the linker. As far as I can tell there is no such option for the 8.1 (or 9.1) versions of the compiler for Linux. Continuing.. Eric From mantha at chem.unr.edu Mon Jan 16 13:41:33 2006 From: mantha at chem.unr.edu (Jordan Mantha) Date: Mon, 16 Jan 2006 10:41:33 -0800 Subject: [SciPy-user] Suggested Linux Distribution? In-Reply-To: <200601161018.48557.dd55@cornell.edu> References: <056D32E9B2D93B49B01256A88B3EB21801160861@icex2.ic.ac.uk> <200601161018.48557.dd55@cornell.edu> Message-ID: <43CBE8DD.3060203@chem.unr.edu> Darren Dale wrote: > On Monday 16 January 2006 09:56, Ryan Krauss wrote: > >>I have been really happy with Ubuntu, but this is one of those loaded >>questions among Linux users. Installing all the prerequisites for >>SciPy was fairly straight forward (I think almost all of them are in >>the package system). > > > For what its worth: > > I am a longtime gentoo user. I was recently considering switching to ubuntu, > mainly because when I tell everyone how great Linux is, I can't recommend > gentoo due to how difficult it is to install. Last week I tried installing > kubuntu, and had a hard time understanding the package manager, how to > upgrade to the most recent kernel, how do I find the full lapack installation > required to build scipy, etc. Compared to gentoo, the package system seemed a > bit messy (universe, multiverse, etc) and not as current. I was a Gentoo user for ~ 3 years before I switched over to Ubuntu. I was sort of tired of the compiling and tweaking and Ubuntu seemed to offer a lot of packages (since it comes from Debian) but more geared towards desktop use. The universe, multiverse, etc. repository thing does take a bit of getting used to. Certainly, if you want to optimize your distro to the maximum Gentoo is a great choice. If you want an easy desktop for general work in the sciences, I think Ubuntu is a great choice. One nice thing about Ubuntu in the context of this list is that it is quite python-centric so python packages tend to get worked on more than other distros. 
As far as how current Ubuntu is it really depends on where in the 6-month release cycle you are and what release your using. Having come from Gentoo I was not used to "releases" but if you install Ubuntu 6.04 (to be released April) when it is first released you will have quite current software. You can also run the development release if your brave (I have since it started) and then you have the same (if not newer) versions as Debian unstable. So I think Ubuntu is a great choice for desktop scientific use, although I'm a bit partial since I work on the Universe science maintainer team. > I guess I am sticking with gentoo and hoping for the gentoo installer project > to mature, so I dont have to waste two days setting up my next computer. Is Vidalinux (or I guess it's VLOS) around still? I thought it had some potential for getting people up and running with Gentoo. -Jordan Mantha > Darren > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user From lists at spicynoodles.net Mon Jan 16 14:47:08 2006 From: lists at spicynoodles.net (Andre Radke) Date: Mon, 16 Jan 2006 20:47:08 +0100 Subject: [SciPy-user] Inverting Complex64 array fails on OS X In-Reply-To: <43CAD521.9070006@ieee.org> References: <43CAA88D.2060805@gmail.com> <43CAD521.9070006@ieee.org> Message-ID: Travis Oliphant wrote: >Andre Radke wrote: > >The dtypechar attribute of my Complex64 matrix was 'G'. >> >This is definitely the problem. It should be 'D'. 'G' is a complex >number with long doubles. > >How did you specify the matrix again? Could you show us some of the >attributes of the matrix you created. I'm shocked that 'G' was the >dtypechar... jannu:/Volumes/Data/Diploma/code/py-smatrix andre$ /usr/local/bin/python ActivePython 2.4.2 Build 248 (ActiveState Corp.) based on Python 2.4.2 (#1, Oct 3 2005, 09:39:46) [GCC 3.3 20030304 (Apple Computer, Inc. build 1666)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> from scipy import * >>> a = array([[1, 1], [-1, -1j]], dtype=Complex64) >>> a array([[ 1.+0.j, 1.+0.j], [-1.+0.j, 0.-1.j]], dtype=complex128) >>> a_inv = linalg.inv(a) >>> a_inv array([[ 0., -1.], [ 1., 1.]]) >>> dot(a, a_inv) array([[ 1.+0.j, 0.+0.j], [ 0.-1.j, 1.-1.j]], dtype=complex128) >>> a.shape (2, 2) >>> a.real array([[ 1., 1.], [-1., 0.]], dtype=float64) >>> a.imag array([[ 0., 0.], [ 0., -1.]], dtype=float64) >>> a.dtypechar 'G' >>> a.dtypedescr dtypedescr('>c16') >>> a.dtype >>> Complex64 'G' This is on an Apple PowerBook G3 (pre-Altivec) running Mac OS X 10.3.9 with ActivePython 2.4.2, NumPy 0.9.3.1903 and SciPy 0.4.4.1550 compiled per Chris Fonnesbeck's instructions on new.scipy.org. If you think I may have run into a bug and there's something else you would like me to try, just let me know... > >Also, is Complex64 the preferred way to specify a double precision > >complex dtype when constructing a matrix? > >No. Use numpy.complex128 or 'D' or just simply complex. Thanks, using complex solved my actual problem and got my project unstuck. 
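In case it helps anyone else, here is a rough sketch of the change on my side (from memory, so treat it as illustrative rather than exact):

    from scipy import *

    a = array([[1, 1], [-1, -1j]], dtype=complex)   # instead of dtype=Complex64
    a_inv = linalg.inv(a)
    print a.dtypechar       # 'D' here, so the inverse keeps its imaginary part
    print dot(a, a_inv)     # close to the identity matrix, as expected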
-Andre From oliphant.travis at ieee.org Mon Jan 16 15:19:22 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 16 Jan 2006 13:19:22 -0700 Subject: [SciPy-user] scipy 0.4.4 and weave In-Reply-To: <20060116164239.GA15245@philou.ch> References: <20060116164239.GA15245@philou.ch> Message-ID: <43CBFFCA.7020100@ieee.org> Philippe Strauss wrote: >Hello, > >Using the weave package from scipy 0.4.4, the examples script shipped >does not work! the namespace as changed but not the examples. >Any news from the weave package maintainer? > > That one got past us (for a while in fact seeing how numerix was never a part of numpy...) Change numpy.numerix to numpy for a good start... -Travis From oliphant.travis at ieee.org Mon Jan 16 16:35:30 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 16 Jan 2006 14:35:30 -0700 Subject: [SciPy-user] Inverting Complex64 array fails on OS X In-Reply-To: References: <43CAA88D.2060805@gmail.com> <43CAD521.9070006@ieee.org> Message-ID: <43CC11A2.9090402@ieee.org> Andre Radke wrote: >Travis Oliphant wrote: > > >>Andre Radke wrote: >> >The dtypechar attribute of my Complex64 matrix was 'G'. >> >> >>This is definitely the problem. It should be 'D'. 'G' is a complex >>number with long doubles. >> >>How did you specify the matrix again? Could you show us some of the >>attributes of the matrix you created. I'm shocked that 'G' was the >>dtypechar... >> >> > >jannu:/Volumes/Data/Diploma/code/py-smatrix andre$ /usr/local/bin/python >ActivePython 2.4.2 Build 248 (ActiveState Corp.) based on >Python 2.4.2 (#1, Oct 3 2005, 09:39:46) >[GCC 3.3 20030304 (Apple Computer, Inc. build 1666)] on darwin >Type "help", "copyright", "credits" or "license" for more information. > > >>>> from scipy import * >>>> a = array([[1, 1], [-1, -1j]], dtype=Complex64) >>>> a >>>> >>>> I think I understand the problem. On your system, longdouble is the same as double. However, not all of the code recognizes the equivalency of 'D' and 'G' on your system which causes the problem. I've patched things up so that Complex64 returns 'D' as expected even if 'G' is the same type. Perhaps the SVN version will now work correctly. -Travis From cournape at atr.jp Mon Jan 16 21:24:32 2006 From: cournape at atr.jp (Cournapeau David) Date: Tue, 17 Jan 2006 11:24:32 +0900 Subject: [SciPy-user] Suggested Linux Distribution? In-Reply-To: References: Message-ID: <1137464672.5139.14.camel@localhost.localdomain> On Mon, 2006-01-16 at 10:46 -0500, mfmorss at aep.com wrote: > >With any distribution you can build a custom kernel; as a matter of fact, > >you should. That's where you get the fine-tuning of your installation. > > It is, of course, not only the kernel but all the packages that one runs, > e.g. gcc itself, Python, R, netCDF, whatever, that is locally compiled when > running a compiled distribution. In each such case, therefore, there is > significantly greater exploitation of any given machine's computational > power. This may not matter much to most people, but the question was about > building a system for scientific applications. If you want things to run > fast, compile them from source, optimized for your own machine. The > compiled distributions greatly facilitate this. I've read thoses informations many times, but I think they are far from accurate. Honestly, compiling for i386 or i585 won't give you many differences, I would even think it is not measurable. Plus, you can recompile your own packages with Red Hat based and debian based (which includes Ubuntu), not only gentoo. 
Compiling your own kernel is also useless in most cases: it will definitely not give you any speed improvements (except if the kernel is a difference version of course), and you will not be in sync with the package manager anymore for the drivers (at least in the debian and fedora core based distributions). Compiling from source is something to avoid as much as possible if you are a beginner, it is the best way to screw things up if you do not know what you are doing (library path problems, bad installation path, overwriting base components). My experience is that linux does not just work, it is not really the point. There will be some painful adaptations, but once it is done, going back to something else will be even more painful :) My advice is simple: Just take one of the big distributions to easily find support on the internet, and post your problems on the relevant forums. The distribution does not matter that much; from my experience, ubuntu has a good installer and a fixed release cycle (every 6 months), gentoo is more configurable, debian is rock solid, fedora core is widely used and contains a lot of packages, etc... David From ryanlists at gmail.com Mon Jan 16 23:16:48 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Mon, 16 Jan 2006 23:16:48 -0500 Subject: [SciPy-user] embedding Python code in LaTeX Message-ID: Can anyone recommend a good way to embed python code in a LaTeX document? Preferably a LaTeX package or Python module that converts code to LaTeX. I would like to have the code snippets look pretty in my thesis. Thanks, Ryan From Fernando.Perez at colorado.edu Mon Jan 16 23:23:51 2006 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Mon, 16 Jan 2006 21:23:51 -0700 Subject: [SciPy-user] embedding Python code in LaTeX In-Reply-To: References: Message-ID: <43CC7157.9000304@colorado.edu> Ryan Krauss wrote: > Can anyone recommend a good way to embed python code in a LaTeX > document? Preferably a LaTeX package or Python module that converts > code to LaTeX. I would like to have the code snippets look pretty in > my thesis. The listings latex package, combined with this, gives me results I'm quite happy with (colors optimized so they read well on print, even if a b/w printer is used): \usepackage{color} \definecolor{orange}{cmyk}{0,0.4,0.8,0.2} % Use and configure listings package for nicely formatted code \usepackage{listings} \lstset{ language=Python, basicstyle=\small\ttfamily, commentstyle=\ttfamily\color{blue}, stringstyle=\ttfamily\color{orange}, showstringspaces=false, breaklines=true, postbreak = \space\dots } hth, f From dd55 at cornell.edu Mon Jan 16 23:58:19 2006 From: dd55 at cornell.edu (Darren Dale) Date: Mon, 16 Jan 2006 23:58:19 -0500 Subject: [SciPy-user] embedding Python code in LaTeX In-Reply-To: References: Message-ID: <200601162358.20471.dd55@cornell.edu> On Monday 16 January 2006 11:16 pm, Ryan Krauss wrote: > Can anyone recommend a good way to embed python code in a LaTeX > document? Preferably a LaTeX package or Python module that converts > code to LaTeX. I would like to have the code snippets look pretty in > my thesis. I think py2tex is just what you are looking for: http://www.sollunae.net/py2tex/ -- Darren S. Dale, Ph.D. 
dd55 at cornell.edu From prabhu_r at users.sf.net Tue Jan 17 00:01:02 2006 From: prabhu_r at users.sf.net (Prabhu Ramachandran) Date: Tue, 17 Jan 2006 10:31:02 +0530 Subject: [SciPy-user] embedding Python code in LaTeX In-Reply-To: <43CC7157.9000304@colorado.edu> References: <43CC7157.9000304@colorado.edu> Message-ID: <17356.31246.328827.553187@prpc.aero.iitb.ac.in> >>>>> "Fernando" == Fernando Perez writes: Fernando> Ryan Krauss wrote: >> Can anyone recommend a good way to embed python code in a LaTeX >> document? Preferably a LaTeX package or Python module that >> converts code to LaTeX. I would like to have the code snippets >> look pretty in my thesis. Fernando> The listings latex package, combined with this, gives me Fernando> results I'm quite happy with (colors optimized so they Fernando> read well on print, even if a b/w printer is used): Fernando> \usepackage{listings} \lstset{ Yes, I strongly recommend listings as well, fwiw, this is what I use: \usepackage{listings} \lstset{language=Python, commentstyle=\color{red}\itshape, stringstyle=\color{darkgreen}, showstringspaces=false, keywordstyle=\color{blue}\bfseries} cheers, prabhu From arnd.baecker at web.de Tue Jan 17 02:18:35 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Tue, 17 Jan 2006 08:18:35 +0100 (CET) Subject: [SciPy-user] embedding Python code in LaTeX In-Reply-To: <17356.31246.328827.553187@prpc.aero.iitb.ac.in> References: <17356.31246.328827.553187@prpc.aero.iitb.ac.in> Message-ID: Hi, On Tue, 17 Jan 2006, Prabhu Ramachandran wrote: > >>>>> "Fernando" == Fernando Perez writes: > > Fernando> Ryan Krauss wrote: > >> Can anyone recommend a good way to embed python code in a LaTeX > >> document? Preferably a LaTeX package or Python module that > >> converts code to LaTeX. I would like to have the code snippets > >> look pretty in my thesis. > > Fernando> The listings latex package, combined with this, gives me > Fernando> results I'm quite happy with (colors optimized so they > Fernando> read well on print, even if a b/w printer is used): > > Fernando> \usepackage{listings} \lstset{ > > Yes, I strongly recommend listings as well, fwiw, this is what I use: > > \usepackage{listings} > \lstset{language=Python, > commentstyle=\color{red}\itshape, > stringstyle=\color{darkgreen}, > showstringspaces=false, > keywordstyle=\color{blue}\bfseries} I usually use fancyvrb: \usepackage{fancyvrb} % include a .py file \VerbatimInput[frame=lines,fontshape=sl,fontsize=\footnotesize]{spanish.py} Documentation: http://www.tug.org/tetex/tetex-texmfdist/doc/latex/fancyvrb/fancyvrb.ps Interestingly, the listings package has an interface to fancyvrb (sec 4.15. in the manual of listings v1.2). As fancyvrb does not do pretty printing, the combination of listings and fancyvrb might give you maximum flexibility ... Best, Arnd From oliver.tomic at matforsk.no Tue Jan 17 06:07:12 2006 From: oliver.tomic at matforsk.no (oliver.tomic at matforsk.no) Date: Tue, 17 Jan 2006 12:07:12 +0100 Subject: [SciPy-user] problems with stats.py In-Reply-To: Message-ID: Hi Alan and Travis, Thanks for your comments scipy-user-bounces at scipy.net wrote on 16.01.2006 00:36:34: > On Sun, 15 Jan 2006, oliver.tomic at matforsk.no apparently wrote: > > Previously I used Numeric 23.8 and everything worked fine. Then I installed > > Numpy and replaced 'Numeric' with 'Numpy' everywhere in the code. 
Now the > > following occurs: > > Traceback (most recent call last): > > File "C:\Python24\pmse_Plot.py", line 175, in pmsePlotter > > ANOVAresults = lF_oneway(preANOVA[(assessor, attribute)]) > > File "C:\Python24\stats.py", in line 1534, in lF_oneway > > means = map(amean,tmp) > > NameError: global name 'amean' is not defined > > Numeric did not define amean either. > This is defined in stats.py I was aware of that, but I have no idea why it didn't work with Numpy. > Are you making the replacements in Strangman's code? > Don't forget his stats.py uses pstat.py. > It might turn on how you are importing the module functions. Yes, I made replacements in Strangman's code. I checked both, stats.py and pstat.py. I'll have to look into the importing. > PS Your directory structure looks very odd to me, by the > way. Isn't Lib/site-packages a more common place to install > such modules? As my application progressed (from one version to another) I had to make slight changes in stats.py to make everything work for my application. Every new version I keep in another directory with the corresponding modified stats.py. Maybe not the way a more experienced programmer would do it but for me it felt right and it worked. :-) > Did you try using "numpy.lib.convertcode" ? Did you convert stats.py > as well? No, I wasn't really aware of it. Are there any examples how to use it? > There is a version of stats.py in (full) SciPy that seems to work fine, > as well. I wasn't aware of that either. Since Numeric 23.8 and Strangmans modules did the job for me I didn't spent much time looking into Scipy, unfortunately. That kept things for me simple. I must admit that I was rather confused by all the different packages out there and their history. A while ago I kept finding comments on the web about Scipy 0.4.x, yet on www.scipy.org one can only download scipy 0.3.3. Numeric, Numarray, old Numpy, Scipy, Scipy_core, new Numpy ... I simply got lost. Here at work we have Matlab available. We pay around 15 000$ (12 500 Euros) each year for 4 floating licenses. This is a LOT of money for my institute. Personally, I haven't used Matlab so much yet as some of my colleagues who have turned into real Matlab-slaves. I don't want to end up like them and I wanted to do something about that. Since I have used Python/wxPython for quite a while in differenent projects Numeric/Numarray was an obvious choice for me. However, it was somewhat difficult to understand/decide what packages to use, since there is so much different stuff out there. Because of my affection to Python I wasn't willing to let my frustration take over and choose something else (non-pythonic) for my scientific computations. A few days ago I purchased Travis Oliphant's very good "Guide to NumPy" and after reading the intro things were clearer. I am very happy to see that Travis and the people around him try to unite the different fragmentetd groups. The new website really looks great and is a very good start for Numpy. Finally!! This is what is needed to make Python an even more convinient choice for researchers and scientists. I will do my part and use only Numpy from now on (as soon as I have succeeded to migrate my application form Numeric to Numpy). Maybe I'll even succeed with convincing some of my Matlab-slave colleagues to switch over to Python. Sorry about the long mail, but I just had to let it out. ;-) best regards Oliver ================= Oliver Tomic, Ph.D. 
MATFORSK - Norwegian Food Research Institute Osloveien 1 1430 ?s Norway Tel.: 0047 6497 0252 Fax: 0047 6497 0333 Mob.: 0047 9574 6167 ================== http://www.matforsk.no From ryanlists at gmail.com Tue Jan 17 09:21:29 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 17 Jan 2006 09:21:29 -0500 Subject: [SciPy-user] embedding Python code in LaTeX In-Reply-To: References: <17356.31246.328827.553187@prpc.aero.iitb.ac.in> Message-ID: Thanks to everyone who replied. All three of these are good solutions. The listings package seems pretty flexible as is giving me nice looking results without too much effort with Fernando's and Prabhu's customization suggestions. Ryan On 1/17/06, Arnd Baecker wrote: > Hi, > > On Tue, 17 Jan 2006, Prabhu Ramachandran wrote: > > > >>>>> "Fernando" == Fernando Perez writes: > > > > Fernando> Ryan Krauss wrote: > > >> Can anyone recommend a good way to embed python code in a LaTeX > > >> document? Preferably a LaTeX package or Python module that > > >> converts code to LaTeX. I would like to have the code snippets > > >> look pretty in my thesis. > > > > Fernando> The listings latex package, combined with this, gives me > > Fernando> results I'm quite happy with (colors optimized so they > > Fernando> read well on print, even if a b/w printer is used): > > > > Fernando> \usepackage{listings} \lstset{ > > > > Yes, I strongly recommend listings as well, fwiw, this is what I use: > > > > \usepackage{listings} > > \lstset{language=Python, > > commentstyle=\color{red}\itshape, > > stringstyle=\color{darkgreen}, > > showstringspaces=false, > > keywordstyle=\color{blue}\bfseries} > > I usually use fancyvrb: > \usepackage{fancyvrb} > > % include a .py file > \VerbatimInput[frame=lines,fontshape=sl,fontsize=\footnotesize]{spanish.py} > > Documentation: > http://www.tug.org/tetex/tetex-texmfdist/doc/latex/fancyvrb/fancyvrb.ps > > Interestingly, the listings package has an interface to fancyvrb > (sec 4.15. in the manual of listings v1.2). > As fancyvrb does not do pretty printing, > the combination of listings and fancyvrb might give you maximum > flexibility ... > > Best, Arnd > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From brombo at comcast.net Tue Jan 17 09:32:30 2006 From: brombo at comcast.net (Alan Bromborsky) Date: Tue, 17 Jan 2006 09:32:30 -0500 Subject: [SciPy-user] GetDP: a General Environment for the Treatment of Discrete Problems Message-ID: <43CCFFFE.4020604@comcast.net> First the question, what is the best linux ditribution for installing scipy on a 64-bit operating system (in my case a dual opteron)? Secondly, Travis do you know about the finite element program "getdp" (see link below). Would it be reasonable to incorporate this program into scipy. It has it's own scripting language which could be replaced by python? http://www.geuz.org/getdp/ From massimo.sandal at unibo.it Tue Jan 17 10:17:23 2006 From: massimo.sandal at unibo.it (massimo sandal) Date: Tue, 17 Jan 2006 16:17:23 +0100 Subject: [SciPy-user] GetDP: a General Environment for the Treatment of Discrete Problems In-Reply-To: <43CCFFFE.4020604@comcast.net> References: <43CCFFFE.4020604@comcast.net> Message-ID: <43CD0A83.2080909@unibo.it> Alan Bromborsky wrote: > First the question, what is the best linux ditribution for installing > scipy on a 64-bit operating system (in my case a dual opteron)? AFAIK at least Gentoo, SuSE and Debian offer x86_64 versions. 
I'd personally take advantage of Gentoo. -- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... Name: massimo.sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From robert.kern at gmail.com Tue Jan 17 10:54:00 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 17 Jan 2006 09:54:00 -0600 Subject: [SciPy-user] GetDP: a General Environment for the Treatment of Discrete Problems In-Reply-To: <43CCFFFE.4020604@comcast.net> References: <43CCFFFE.4020604@comcast.net> Message-ID: <43CD1318.7000804@gmail.com> Alan Bromborsky wrote: > Secondly, Travis do you know about the finite element program "getdp" > (see link below). Would it be reasonable to incorporate this program > into scipy. It has it's own scripting language which could be replaced > by python? > > http://www.geuz.org/getdp/ GetDP is released under the GPL, which is more restrictive than Scipy's license (BSD). Unlike FFTW, which is GPLed, too, GetDP is not simply an alternative, removable kernel for a standard operation like FFTs. We are not including more GPLed code into Scipy. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From lists at spicynoodles.net Tue Jan 17 14:08:32 2006 From: lists at spicynoodles.net (Andre Radke) Date: Tue, 17 Jan 2006 20:08:32 +0100 Subject: [SciPy-user] Inverting Complex64 array fails on OS X In-Reply-To: <43CC11A2.9090402@ieee.org> References: <43CAA88D.2060805@gmail.com> <43CAD521.9070006@ieee.org> <43CC11A2.9090402@ieee.org> Message-ID: Travis Oliphant wrote: >I think I understand the problem. On your system, longdouble is the >same as double. However, not all of the code recognizes the >equivalency of 'D' and 'G' on your system which causes the problem. >I've patched things up so that Complex64 returns 'D' as expected even if >'G' is the same type. > >Perhaps the SVN version will now work correctly. I tried my previous test case with NumPy 0.9.4.1923 and SciPy 0.4.4.1558. Indeed, it does work correctly on my machine now. 
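For the record, the check was essentially the same construction as before:

    from scipy import *

    a = array([[1, 1], [-1, -1j]], dtype=Complex64)
    print a.dtypechar             # now reports 'D' rather than 'G'
    print dot(a, linalg.inv(a))   # close to the identity matrix again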
Thanks for the quick fix, -Andre From travis.brady at gmail.com Tue Jan 17 17:46:37 2006 From: travis.brady at gmail.com (Travis Brady) Date: Tue, 17 Jan 2006 14:46:37 -0800 Subject: [SciPy-user] Failed build w/ mingw Message-ID: With SVN updated 5 minutes ago I receive (with win2k, msys, mingw and Python 2.4): building 'scipy.montecarlo.intsampler' extension compiling C sources gcc options: '-O2 -Wall -Wstrict-prototypes' compile options: '-IC:\Python24\lib\site-packages\numpy\core\include -IC:\Python 24\lib\site-packages\numpy\core\include -IC:\Python24\include -IC:\Python24\PC - c' g++ -shared build\temp.win32- 2.4\Release\lib\montecarlo\src\intsamplermodule.o b uild\temp.win32-2.4\Release\lib\montecarlo\src\sampler5tbl.o-LC:\Python24\libs -LC:\Python24\PCBuild -Lbuild\temp.win32-2.4 -lpython24 -lmsvcr71 -o build\lib.w in32-2.4\scipy\montecarlo\intsampler.pyd build\temp.win32-2.4\Release\lib\montecarlo\src\intsamplermodule.o (.text+0x255): intsamplermodule.c: undefined reference to `srand48' build\temp.win32-2.4\Release\lib\montecarlo\src\sampler5tbl.o (.text+0x104):sampl er5tbl.c: undefined reference to `srand48' build\temp.win32-2.4\Release\lib\montecarlo\src\sampler5tbl.o (.text+0x109):sampl er5tbl.c: undefined reference to `lrand48' error: Command "g++ -shared build\temp.win32- 2.4\Release\lib\montecarlo\src\ints amplermodule.o build\temp.win32-2.4\Release\lib\montecarlo\src\sampler5tbl.o-LC :\Python24\libs -LC:\Python24\PCBuild -Lbuild\temp.win32-2.4 -lpython24 -lmsvcr7 1 -o build\lib.win32-2.4\scipy\montecarlo\intsampler.pyd" failed with exit status 1 removed Lib\__svn_version__.py removed Lib\__svn_version__.pyc I think I don't currently need (and haven't heard of) this monte carlo package, is there an easy switch to turn off its build? thank you Travis -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant.travis at ieee.org Tue Jan 17 19:13:08 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 17 Jan 2006 17:13:08 -0700 Subject: [SciPy-user] Failed build w/ mingw In-Reply-To: References: Message-ID: <43CD8814.6000409@ieee.org> Travis Brady wrote: > With SVN updated 5 minutes ago I receive (with win2k, msys, mingw and > Python 2.4): > > I think I don't currently need (and haven't heard of) this monte carlo > package, is there an easy switch to turn off its build? It was recently added. Yes, you can turn it off easily. Go to scipy/Lib/setup.py file Comment out the config.add_subpackage(...) lines corresponding to the packages you don't want or need. -Travis From travis.brady at gmail.com Tue Jan 17 20:54:52 2006 From: travis.brady at gmail.com (Travis Brady) Date: Tue, 17 Jan 2006 17:54:52 -0800 Subject: [SciPy-user] Failed build w/ mingw In-Reply-To: <43CD8814.6000409@ieee.org> References: <43CD8814.6000409@ieee.org> Message-ID: Thanks, Travis. The build succeeds after commenting out the montecarlo bit, but I receive the following when trying to run the tests. >>> import scipy import utils -> failed: No module named base >>> scipy.test() Overwriting lib= from C:\Python24\lib\site-packages\scipy\lib\__init__.pyc (was from C:\Python24\lib\site-packages\numpy\lib\__init__.pyc) Fatal Python error: can't initialize module specfun (failed to import scipy.base ) This application has requested the Runtime to terminate it in an unusual way. Please contact the application's support team for more information. Anybody else had this happen? 
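For what it's worth, this is the kind of quick check I am running to see whether something stale is being picked up (just a diagnostic sketch; the directory names below are guesses at the usual leftovers):

    import sys, os
    import numpy, scipy

    print numpy.__version__, numpy.__file__
    print scipy.__version__, scipy.__file__

    # leftovers from older installs (a standalone f2py, old scipy_base/scipy_distutils)
    # can shadow the new packages and headers
    for d in sys.path:
        for name in ('f2py2e', 'scipy_base', 'scipy_distutils'):
            p = os.path.join(d, name)
            if os.path.isdir(p):
                print 'possible stale install:', p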
On 1/17/06, Travis Oliphant wrote: > > Travis Brady wrote: > > > With SVN updated 5 minutes ago I receive (with win2k, msys, mingw and > > Python 2.4): > > > > I think I don't currently need (and haven't heard of) this monte carlo > > package, is there an easy switch to turn off its build? > > It was recently added. > > Yes, you can turn it off easily. Go to scipy/Lib/setup.py file > > Comment out the config.add_subpackage(...) lines corresponding to the > packages you don't want or need. > > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From strawman at astraw.com Tue Jan 17 20:59:43 2006 From: strawman at astraw.com (Andrew Straw) Date: Tue, 17 Jan 2006 17:59:43 -0800 Subject: [SciPy-user] Failed build w/ mingw In-Reply-To: References: <43CD8814.6000409@ieee.org> Message-ID: <43CDA10F.4090907@astraw.com> Travis Brady wrote: > Thanks, Travis. > The build succeeds after commenting out the montecarlo bit, but I > receive the following when trying to run the tests. > > >>> import scipy > import utils -> failed: No module named base > >>> scipy.test() > Overwriting lib= 'C:\Python24\lib\site-packages\scipy\li > b\__init__.pyc'> from > C:\Python24\lib\site-packages\scipy\lib\__init__.pyc (was > 'C:\Python24\lib\site-packages\numpy\lib\__init__.pyc'> > from C:\Python24\lib\site-packages\numpy\lib\__init__.pyc) > Fatal Python error: can't initialize module specfun (failed to import > scipy.base > ) > I'm seeing it now on a 32-bit linux box... Actually, the following: In [1]:import scipy import utils -> failed: No module named base In [2]:scipy.__file__ Out[2]:'/usr/lib/python2.3/site-packages/scipy-0.4.4.1558-py2.3-linux-i686.egg/scipy/__init__.pyc' In [3]:scipy.pkgload() Overwriting lib= from /usr/lib/python2.3/site-packages/scipy-0.4.4.1558-py2.3-linux-i686.egg/scipy/lib/__init__.pyc (was from /usr/lib/python2.3/site-packages/numpy-0.9.4.1926-py2.3-linux-i686.egg/numpy/lib/__init__.pyc) from utils import info -> failed: cannot import name info from utils import factorial -> failed: cannot import name factorial from utils import factorial2 -> failed: cannot import name factorial2 from utils import factorialk -> failed: cannot import name factorialk from utils import comb -> failed: cannot import name comb from utils import who -> failed: cannot import name who from utils import lena -> failed: cannot import name lena from utils import central_diff_weights -> failed: cannot import name central_diff_weights from utils import derivative -> failed: cannot import name derivative from utils import pade -> failed: cannot import name pade Fatal Python error: can't initialize module specfun (failed to import scipy.base) Aborted From robert.kern at gmail.com Tue Jan 17 21:15:55 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 17 Jan 2006 20:15:55 -0600 Subject: [SciPy-user] Failed build w/ mingw In-Reply-To: References: <43CD8814.6000409@ieee.org> Message-ID: <43CDA4DB.3060208@gmail.com> Travis Brady wrote: > Thanks, Travis. > The build succeeds after commenting out the montecarlo bit, but I > receive the following when trying to run the tests. 
> >>>> import scipy > import utils -> failed: No module named base >>>> scipy.test() > Overwriting lib= 'C:\Python24\lib\site-packages\scipy\li > b\__init__.pyc'> from > C:\Python24\lib\site-packages\scipy\lib\__init__.pyc (was > 'C:\Python24\lib\site-packages\numpy\lib\__init__.pyc'> > from C:\Python24\lib\site-packages\numpy\lib\__init__.pyc) > Fatal Python error: can't initialize module specfun (failed to import > scipy.base > ) > > This application has requested the Runtime to terminate it in an unusual > way. > Please contact the application's support team for more information. > > Anybody else had this happen? I don't see this with the current SVN on OS X. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From travis.brady at gmail.com Tue Jan 17 21:22:07 2006 From: travis.brady at gmail.com (Travis Brady) Date: Tue, 17 Jan 2006 18:22:07 -0800 Subject: [SciPy-user] Failed build w/ mingw In-Reply-To: <43CDA4DB.3060208@gmail.com> References: <43CD8814.6000409@ieee.org> <43CDA4DB.3060208@gmail.com> Message-ID: Hmmm... Incidentally a recent test of NumPy yields the same error: >>> import numpy >>> numpy.test() Found 9 tests for numpy.core.umath Found 8 tests for numpy.lib.arraysetops Found 42 tests for numpy.lib.type_check Found 9 tests for numpy.lib.twodim_base Found 3 tests for numpy.lib.getlimits Found 3 tests for numpy.distutils.misc_util Found 25 tests for numpy.core.ma Found 6 tests for numpy.core.defmatrix Found 33 tests for numpy.lib.function_base Found 3 tests for numpy.dft.helper Found 14 tests for numpy.core.multiarray Found 6 tests for numpy.core.records Found 4 tests for numpy.lib.index_tricks Found 44 tests for numpy.lib.shape_base Found 0 tests for __main__ ................................................................................ ...................import utils -> failed: No module named base Fatal Python error: can't initialize module _flinalg (failed to import scipy.bas e) This application has requested the Runtime to terminate it in an unusual way. Please contact the application's support team for more information. On 1/17/06, Robert Kern wrote: > > Travis Brady wrote: > > Thanks, Travis. > > The build succeeds after commenting out the montecarlo bit, but I > > receive the following when trying to run the tests. > > > >>>> import scipy > > import utils -> failed: No module named base > >>>> scipy.test() > > Overwriting lib= > 'C:\Python24\lib\site-packages\scipy\li > > b\__init__.pyc'> from > > C:\Python24\lib\site-packages\scipy\lib\__init__.pyc (was > > > 'C:\Python24\lib\site-packages\numpy\lib\__init__.pyc'> > > from C:\Python24\lib\site-packages\numpy\lib\__init__.pyc) > > Fatal Python error: can't initialize module specfun (failed to import > > scipy.base > > ) > > > > This application has requested the Runtime to terminate it in an unusual > > way. > > Please contact the application's support team for more information. > > > > Anybody else had this happen? > > I don't see this with the current SVN on OS X. > > -- > Robert Kern > robert.kern at gmail.com > > "In the fields of hell where the grass grows high > Are the graves of dreams allowed to die." > -- Richard Harter > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From strawman at astraw.com Tue Jan 17 21:25:45 2006 From: strawman at astraw.com (Andrew Straw) Date: Tue, 17 Jan 2006 18:25:45 -0800 Subject: [SciPy-user] Failed build w/ mingw In-Reply-To: <43CDA10F.4090907@astraw.com> References: <43CD8814.6000409@ieee.org> <43CDA10F.4090907@astraw.com> Message-ID: <43CDA729.9050200@astraw.com> Hmm, I've detected an old version of f2py on my system... I see if I get the same errors after re-building scipy without it. From strawman at astraw.com Tue Jan 17 21:37:38 2006 From: strawman at astraw.com (Andrew Straw) Date: Tue, 17 Jan 2006 18:37:38 -0800 Subject: [SciPy-user] Failed build w/ mingw In-Reply-To: <43CDA729.9050200@astraw.com> References: <43CD8814.6000409@ieee.org> <43CDA10F.4090907@astraw.com> <43CDA729.9050200@astraw.com> Message-ID: <43CDA9F2.4030704@astraw.com> Ahh, yes, getting rid of that f2py did it! Another benefit of .eggs: old top-level modules/packages are installed in the .egg, so you can't forget to delete them. (I know f2py has now been moved into the numpy namespace, but, for example, pytz and pylab are all within my matplotlib .egg). Cheers! Andrew From oliphant.travis at ieee.org Tue Jan 17 21:04:58 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 17 Jan 2006 19:04:58 -0700 Subject: [SciPy-user] Failed build w/ mingw In-Reply-To: References: <43CD8814.6000409@ieee.org> Message-ID: <43CDA24A.6080806@ieee.org> Travis Brady wrote: > Thanks, Travis. > The build succeeds after commenting out the montecarlo bit, but I > receive the following when trying to run the tests. > > >>> import scipy > import utils -> failed: No module named base > >>> scipy.test() > Overwriting lib= 'C:\Python24\lib\site-packages\scipy\li > b\__init__.pyc'> from > C:\Python24\lib\site-packages\scipy\lib\__init__.pyc (was > 'C:\Python24\lib\site-packages\numpy\lib\__init__.pyc'> > from C:\Python24\lib\site-packages\numpy\lib\__init__.pyc) > Fatal Python error: can't initialize module specfun (failed to import > scipy.base > ) Hmm... This is odd. It looks like you are picking up an old header file from somewhere. This failure is due to import_array() in the initialization and it's apparently trying to load scipy.base. This used to be what was loaded to get the C-API. But, now it's numpy.core. Make sure you are not picking up old versions of the header when you compile scipy... -Travis From schofield at ftw.at Wed Jan 18 06:08:32 2006 From: schofield at ftw.at (Ed Schofield) Date: Wed, 18 Jan 2006 12:08:32 +0100 Subject: [SciPy-user] [SciPy-dev] New maximum entropy and Monte Carlo packages Message-ID: <43CE21B0.40903@ftw.at> Hi all, I recently moved two new packages, maxent and montecarlo, from the sandbox into the main SciPy tree. I've now moved them back to the sandbox pending further discussion. I'll introduce them here and ask for feedback on whether they should be included in the main tree. The maxent package is for fitting maximum entropy models subject to expectation constraints. Maximum entropy models represent the 'least biased' models subject to given constraints. When the constraints are on the expectations of functionals -- the usual formulation -- maximum entropy models take the form of a generalized exponential family. A normal distribution, for example, is a maximum entropy distribution subject to mean and variance constraints. The maxent package contains one main module and one module with utility functions. Both are entirely in Python. (I have now removed the F2Py dependency.) 
The main module supports fitting models on either small or large sample spaces, where 'large' means continuous or otherwise too large to iterate over. Maxent models on 'small' sample spaces are common in natural language processing; models on 'large' sample spaces are useful for channel modelling in mobile communications, spectrum and chirp analysis, and (I believe) fluid turbulence. Some simple examples are in the examples/ directory. The simplest use is to define a list of functions f, an array of desired expectations K, and a sample space, and use the commands

>>> model = maxent.model(f, samplespace)
>>> model.fit(K)

You can then retrieve the fitted parameters directly or analyze the model in other ways.

I've been developing the maxent algorithms and code for about 4 years. The code is very well commented and should be straightforward to maintain.

The montecarlo package currently does only one thing. It generates discrete variates from a given distribution. It does this FAST. On my P4 it generates over 10^7 variates per second, even for a sample space with 10^6 elements. The algorithm is the compact 5-table lookup sampler of Marsaglia. The main module, called 'intsampler', is written in C. There is also a simple Python wrapper class around this called 'dictsampler' that provides a nicer interface, allowing sampling from a distribution with arbitrary hashable objects (e.g. strings) as labels instead of {0,1,2,...}. dictsampler has slightly more overhead than intsampler, but is also very fast (around 10^6 per second for me with a sample space of 10^6 elements labelled with strings). An example of using it to sample from this discrete distribution:

    x      'a'       'b'       'c'
    p(x)   10/180    150/180   20/180

is:

>>> table = {'a':10, 'b':150, 'c':20}
>>> sampler = dictsampler(table)
>>> sampler.sample(10**4)
array([b, b, a, ..., b, b, c], dtype=object)

The montecarlo package is very small (and not nearly as impressive as Christopher Fonnesbeck's PyMC package), but the functionality that is there would be an efficient foundation for many discrete Monte Carlo algorithms.

I'm aware of the build issue Travis Brady reported with MinGW not defining lrand48(). I can't remember why I used this, but I'll adapt it to use lrand() instead and report back.

Would these packages be useful? Are there any objections to including them?

-- Ed

From loic.calvino at gmail.com  Wed Jan 18 09:19:59 2006
From: loic.calvino at gmail.com (Loïc)
Date: Wed, 18 Jan 2006 15:19:59 +0100
Subject: [SciPy-user] using py2exe with scipy
Message-ID: <2c6d22220601180619v5fd4264u@mail.gmail.com>

Hi,

I'm trying to make an executable file containing scipy components but it doesn't work.
The creation of the exe works but when I want to use scipy.gplt I have this error: Traceback (most recent call last): File "gui.py", line 671, in plotting File "plot.pyc", line 129, in main File "scipy\gplt\interface.pyc", line 106, in plot File "scipy\gplt\pyPlot.pyc", line 132, in plot File "scipy\gplt\pyPlot.pyc", line 702, in _init_plot File "scipy\gplt\pyPlot.pyc", line 819, in _send IOError: [Errno 22] Invalid argument I want to know if I have to include the executable file of gnuplot (wgnuplot.exe) and how to include it in the setup process. Thanks, Loic -------------- next part -------------- An HTML attachment was scrubbed... URL: From aisaac at american.edu Wed Jan 18 10:49:58 2006 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 18 Jan 2006 10:49:58 -0500 Subject: [SciPy-user] maxent Message-ID: On Wed, 18 Jan 2006, Ed Schofield apparently wrote: > Would these packages be useful? Yes. Cheers, Alan Isaac From travis.brady at gmail.com Wed Jan 18 15:28:14 2006 From: travis.brady at gmail.com (Travis Brady) Date: Wed, 18 Jan 2006 12:28:14 -0800 Subject: [SciPy-user] Failed build w/ mingw In-Reply-To: <43CDA24A.6080806@ieee.org> References: <43CD8814.6000409@ieee.org> <43CDA24A.6080806@ieee.org> Message-ID: I went and deleted my Numpy and Scipy svn dirs and checked out from SVN and tried again. The build failed on the monte carlo stuff again (which was in the sandbox setup.py), so I commented that bit out and it builds and tests fine. Thanks for everybody's help. Travis On 1/17/06, Travis Oliphant wrote: > > Travis Brady wrote: > > > Thanks, Travis. > > The build succeeds after commenting out the montecarlo bit, but I > > receive the following when trying to run the tests. > > > > >>> import scipy > > import utils -> failed: No module named base > > >>> scipy.test() > > Overwriting lib= > 'C:\Python24\lib\site-packages\scipy\li > > b\__init__.pyc'> from > > C:\Python24\lib\site-packages\scipy\lib\__init__.pyc (was > > > 'C:\Python24\lib\site-packages\numpy\lib\__init__.pyc'> > > from C:\Python24\lib\site-packages\numpy\lib\__init__.pyc) > > Fatal Python error: can't initialize module specfun (failed to import > > scipy.base > > ) > > > Hmm... This is odd. It looks like you are picking up an old header > file from somewhere. This failure is due to import_array() in the > initialization and it's apparently trying to load scipy.base. This used > to be what was loaded to get the C-API. But, now it's numpy.core. > > Make sure you are not picking up old versions of the header when you > compile scipy... > > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.huard at gmail.com Wed Jan 18 19:55:53 2006 From: david.huard at gmail.com (David Huard) Date: Wed, 18 Jan 2006 19:55:53 -0500 Subject: [SciPy-user] [SciPy-dev] New maximum entropy and Monte Carlo packages In-Reply-To: <43CE21B0.40903@ftw.at> References: <43CE21B0.40903@ftw.at> Message-ID: <91cf711d0601181655k7e9e529cp@mail.gmail.com> I think that maxent methods are sufficiently useful to deserve a place in scipy. On a related topic, is there a politic about what should or should not be included on scipy ? What is the 'grand design' intended for scipy ? A select compilation of the best software or the largest possible collection of routines ? 
David 2006/1/18, Ed Schofield : > > Hi all, > > I recently moved two new packages, maxent and montecarlo, from the > sandbox into the main SciPy tree. I've now moved them back to the > sandbox pending further discussion. I'll introduce them here and ask > for feedback on whether they should be included in the main tree. > > The maxent package is for fitting maximum entropy models subject to > expectation constraints. Maximum entropy models represent the 'least > biased' models subject to given constraints. When the constraints are > on the expectations of functionals -- the usual formulation -- maximum > entropy models take the form of a generalized exponential family. A > normal distribution, for example, is a maximum entropy distribution > subject to mean and variance constraints. > > The maxent package contains one main module and one module with utility > functions. Both are entirely in Python. (I have now removed the F2Py > dependency.) The main module supports fitting models on either small or > large sample spaces, where 'large' means continuous or otherwise too > large to iterate over. Maxent models on 'small' sample spaces are > common in natural language processing; models on 'large' sample spaces > are useful for channel modelling in mobile communications, spectrum and > chirp analysis, and (I believe) fluid turbulence. Some simple examples > are in the examples/ directory. The simplest use is to define a list of > functions f, an array of desired expectations K, and a sample space, and > use the commands > > > >>>>>> model = maxent.model(f, samplespace) > >>>>>> model.fit(K) > >>> > >>> > > You can then retrieve the fitted parameters directly or analyze the > model in other ways. > > I've been developing the maxent algorithms and code for about 4 years. > The code is very well commented and should be straightforward to maintain. > > > The montecarlo package currently does only one thing. It generates > discrete variates from a given distribution. It does this FAST. On my > P4 it generates over 107 variates per second, even for a sample space > with 106 elements. The algorithm is the compact 5-table lookup sampler > of Marsaglia. The main module, called 'intsampler', is written in C. > There is also a simple Python wrapper class around this called > 'dictsampler' that provides a nicer interface, allowing sampling from a > distribution with arbitrary hashable objects > (e.g. strings) as labels instead of {0,1,2,...}. dictsampler has > slightly more overhead than intsampler, but is also very fast (around > 106 per second for me with a sample space of 106 elements labelled > with strings). An example of using it to sample from this discrete > distribution: > > x 'a' 'b' 'c' > p(x) 10/180 150/180 20/180 > > is: > > > >>>>>> table = {'a':10, 'b':150, 'c':20} > >>>>>> sampler = dictsampler(table) > >>>>>> sampler.sample(10**4) > >>> > >>> > array([b, b, a, ..., b, b, c], dtype=object) > > The montecarlo package is very small (and not nearly as impressive as > Christopher Fonnesbeck's PyMC package), but the functionality that is > there would be an efficient foundation for many discrete Monte Carlo > algorithms. > > I'm aware of the build issue Travis Brady reported with MinGW not > defining lrand48(). I can't remember why I used this, but I'll adapt it > to use lrand() instead and report back. > > > Would these packages be useful? Are there any objections to including > them? 
>
> -- Ed
>

From matthew at sel.cam.ac.uk  Thu Jan 19 09:02:27 2006
From: matthew at sel.cam.ac.uk (Matthew Vernon)
Date: Thu, 19 Jan 2006 14:02:27 +0000
Subject: [SciPy-user] Scipy builds OK on OSX, but doesn't actually work!
Message-ID: <880F6323-3F58-4224-BD46-AC4494BECFA0@sel.cam.ac.uk>

Hi,

I followed the instructions for installing scipy on OSX, and so built the SVN versions. All seemed to go fine, until I actually try and use scipy:

kublai:~ matthew$ python
Python 2.3.5 (#1, Mar 20 2005, 20:38:20)
[GCC 3.3 20030304 (Apple Computer, Inc. build 1809)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import scipy
Traceback (most recent call last):
  File "", line 1, in ?
  File "/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/site-packages/scipy/__init__.py", line 51, in ?
    pkgload(verbose=SCIPY_IMPORT_VERBOSE,postpone=True)
  File "/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/site-packages/numpy/_import_tools.py", line 196, in __call__
    self.warn('Overwriting %s=%s (was %s)' \
AttributeError: PackageLoader instance has no attribute '_obj2str'

This is using the apple-supplied python. Any ideas?

Thanks,

Matthew

--
Matthew Vernon MA VetMB LGSM MRCVS
Farm Animal Epidemiology and Informatics Unit
Department of Veterinary Medicine, University of Cambridge
http://www.cus.cam.ac.uk/~mcv21/

From josemaria.alkala at gmail.com  Thu Jan 19 13:50:08 2006
From: josemaria.alkala at gmail.com (José María)
Date: Thu, 19 Jan 2006 19:50:08 +0100
Subject: [SciPy-user] SciPy 0.4.4: fromstring
Message-ID: <200601191950.08371.josemaria.garcia.perez@ya.com>

Hello,

I have changed to SciPy 0.4.4, and now I have a problem with the following code for reading audio files:

-------------------------------------------------
def ReadRawSound(_filename):
    _fSound = open(_filename, 'r')
    _SoundAsChar = _fSound.read()
    _fSound.close()
    _SoundAsFloat = scipy.fromstring(_SoundAsChar, dtype='f', count=-1)
    return _SoundAsFloat
--------------------------------------------------

With the previous version I used:

    _SoundAsFloat = scipy.fromstring(_SoundAsChar, typecode='f', count=-1)

The problem is that now this function seems VERY slow.

===============================

I'm using Gentoo and I'm really new to python and scipy. There is still no ebuild, so I installed by doing:

    python setup.py build
    python setup.py install

Is this a problem with 'fromstring' or with my installation?

Thanks a lot in advance!

José María (Spain)

From pawel.wielgus at gmail.com  Thu Jan 19 22:28:21 2006
From: pawel.wielgus at gmail.com (Pawel Wielgus)
Date: Thu, 19 Jan 2006 21:28:21 -0600
Subject: [SciPy-user] Test failure
Message-ID: <43D058D5.6070703@gmail.com>

Dear SciPy Users,

I've just installed SciPy for the first time in my life (with a lot of problems when using svn core and scipy, but smoothly and successfully when using Scipy_core-0.3.2.tar.gz and SciPy-0.3.2.tar.gz +FFTW +ATLAS binaries -DJBFFT [for DJBFFT I had some compilation problems, but honestly I did not dig into it deeper]). Anyway, I finally ran some tests.
After t=scipy.test() I have: Ran 972 tests in 2.042s OK But after scipy.test(level=10,verbosity=2) I receive an error: ====================================================================== FAIL: check_normal (scipy.stats.morestats.test_morestats.test_anderson) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.3/site-packages/scipy/stats/tests/test_morestats.py", line 47, in check_normal assert_array_less(crit[:-1], A) File "/usr/local/lib/python2.3/site-packages/scipy_test/testing.py", line 708, in assert_array_less assert cond,\ AssertionError: Arrays are not less-ordered (mismatch 25.0%): Array 1: [ 0.538 0.613 0.736 0.858] Array 2: 0.816053551457 ---------------------------------------------------------------------- Ran 986 tests in 113.680s FAILED (failures=1) Is there any particular reason for that? What does it mean? Thanks in advance Goosi From lists at spicynoodles.net Thu Jan 19 14:53:01 2006 From: lists at spicynoodles.net (Andre Radke) Date: Thu, 19 Jan 2006 20:53:01 +0100 Subject: [SciPy-user] multiplying scalar with complex matrix segfaults Message-ID: When I try to multiply a float value with a complex matrix object, I get unexpected results or even a segfault, e.g.: jannu:~ andre$ /usr/local/bin/python ActivePython 2.4.2 Build 248 (ActiveState Corp.) based on Python 2.4.2 (#1, Oct 3 2005, 09:39:46) [GCC 3.3 20030304 (Apple Computer, Inc. build 1666)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> from scipy import * >>> 1.0 * matrix(eye(3, dtype=complex)) matrix([[ 1.+0.j, 0.+0.j, 0.+0.j], [ 0.+0.j, 0.+0.j, 0.+0.j], [ 0.+0.j, 0.+0.j, 1.+0.j]]) >>> 1.0 * matrix(eye(41, dtype=complex)) Segmentation fault I'm aware that the matrix class overrides the * operator with matrix multiplication and that's why I want to use it. However, I still expected a scalar multiplied with a matrix to yield a matrix where each entry has been multiplied with that scalar. In other words, I would expect both examples shown above to return the identity matrix of the appropriate shape. Obviously, that's not the case, at least not on my machine. Can anybody else reproduce this? This is using NumPy 0.9.4.1923, SciPy 0.4.4.1558, and ActivePython 2.4.2 on Mac OS X 10.3.9. Thanks, -Andre P.S. Here's the relevant portion of the crash log. 
It looks like the segfault occurs in the ATLAS zaxpy function: Exception: EXC_BAD_ACCESS (0x0001) Codes: KERN_INVALID_ADDRESS (0x0001) at 0x031090c0 Thread 0 Crashed: 0 libBLAS.dylib 0x9550f6e4 ATL_zaxpy_xp0yp0aXbX + 0x1c 1 _dotblas.so 0x0069beb8 dotblas_matrixproduct + 0x8d4 (_dotblas.c:341) 2 org.activestate.ActivePython24 0x1007c4a0 call_function + 0x3bc (ceval.c:3558) 3 org.activestate.ActivePython24 0x10079f4c PyEval_EvalFrame + 0x22b8 (ceval.c:2163) 4 org.activestate.ActivePython24 0x1007b0b4 PyEval_EvalCodeEx + 0x868 (ceval.c:2736) 5 org.activestate.ActivePython24 0x10025a48 function_call + 0x158 (funcobject.c:548) 6 org.activestate.ActivePython24 0x1000badc PyObject_Call + 0x30 (abstract.c:1757) 7 org.activestate.ActivePython24 0x1001526c instancemethod_call + 0x31c (classobject.c:2448) 8 org.activestate.ActivePython24 0x1000badc PyObject_Call + 0x30 (abstract.c:1757) 9 org.activestate.ActivePython24 0x10052a80 call_maybe + 0x128 (typeobject.c:963) 10 org.activestate.ActivePython24 0x1000bd00 binary_op1 + 0x180 (abstract.c:378) 11 org.activestate.ActivePython24 0x1000a518 PyNumber_Multiply + 0x28 (abstract.c:673) 12 org.activestate.ActivePython24 0x100784ec PyEval_EvalFrame + 0x858 (ceval.c:1049) 13 org.activestate.ActivePython24 0x1007b0b4 PyEval_EvalCodeEx + 0x868 (ceval.c:2736) 14 org.activestate.ActivePython24 0x1007e52c PyEval_EvalCode + 0x30 (ceval.c:484) 15 org.activestate.ActivePython24 0x100b2ebc run_node + 0x4c (pythonrun.c:1265) 16 org.activestate.ActivePython24 0x100b238c PyRun_InteractiveOneFlags + 0x200 (pythonrun.c:763) 17 org.activestate.ActivePython24 0x100b216c PyRun_InteractiveLoopFlags + 0x10c (pythonrun.c:699) 18 org.activestate.ActivePython24 0x100b3c10 PyRun_AnyFileExFlags + 0xa4 (pythonrun.c:659) 19 org.activestate.ActivePython24 0x100bf6e0 Py_Main + 0xa2c (main.c:487) 20 python 0x000018d0 _start + 0x188 (crt.c:267) 21 dyld 0x8fe1a278 _dyld_start + 0x64 -- Andre Radke + mailto:lists at spicynoodles.net + http://spicynoodles.net/ From Doug.LATORNELL at mdsinc.com Thu Jan 19 16:30:58 2006 From: Doug.LATORNELL at mdsinc.com (LATORNELL, Doug) Date: Thu, 19 Jan 2006 13:30:58 -0800 Subject: [SciPy-user] multiplying scalar with complex matrix segfaults Message-ID: <34090E25C2327C4AA5D276799005DDE0E34C96@SMDMX0501.mds.mdsinc.com> Works okay for me, Andre IsoInfoCompute:doug$ python Python 2.4.1 (#1, Sep 3 2005, 13:08:59) [GCC 3.3.5 (propolice)] on openbsd3 Type "help", "copyright", "credits" or "license" for more information. 
>>> from scipy import * Overwriting fft= from scipy.fftpack.basic (was from numpy.dft.fftpack) Overwriting ifft= from scipy.fftpack.basic (was from numpy.dft.fftpack) >>> 1.0 * matrix(eye(3, dtype=complex)) matrix([[ 1.+0.j, 0.+0.j, 0.+0.j], [ 0.+0.j, 1.+0.j, 0.+0.j], [ 0.+0.j, 0.+0.j, 1.+0.j]]) >>> 1.0 * matrix(eye(41, dtype=complex)) matrix([[ 1.+0.j, 0.+0.j, 0.+0.j, ..., 0.+0.j, 0.+0.j, 0.+0.j], [ 0.+0.j, 1.+0.j, 0.+0.j, ..., 0.+0.j, 0.+0.j, 0.+0.j], [ 0.+0.j, 0.+0.j, 1.+0.j, ..., 0.+0.j, 0.+0.j, 0.+0.j], ..., [ 0.+0.j, 0.+0.j, 0.+0.j, ..., 1.+0.j, 0.+0.j, 0.+0.j], [ 0.+0.j, 0.+0.j, 0.+0.j, ..., 0.+0.j, 1.+0.j, 0.+0.j], [ 0.+0.j, 0.+0.j, 0.+0.j, ..., 0.+0.j, 0.+0.j, 1.+0.j]]) >>> import numpy >>> numpy.__version__ '0.9.3.1868' >>> import scipy >>> scipy.__version__ '0.4.4.1549' >>> import os >>> os.uname() ('OpenBSD', 'IsoInfoCompute.mds.mdsinc.com', '3.8', 'GENERIC#0', 'i386') >>> Doug > -----Original Message----- > From: scipy-user-bounces at scipy.net > [mailto:scipy-user-bounces at scipy.net] On Behalf Of Andre Radke > Sent: January 19, 2006 11:53 > To: SciPy Users List > Subject: [SciPy-user] multiplying scalar with complex matrix segfaults > > > When I try to multiply a float value with a complex matrix > object, I get unexpected results or even a segfault, e.g.: > > jannu:~ andre$ /usr/local/bin/python > ActivePython 2.4.2 Build 248 (ActiveState Corp.) based on > Python 2.4.2 (#1, Oct 3 2005, 09:39:46) [GCC 3.3 20030304 > (Apple Computer, Inc. build 1666)] on darwin Type "help", > "copyright", "credits" or "license" for more information. > >>> from scipy import * > >>> 1.0 * matrix(eye(3, dtype=complex)) > matrix([[ 1.+0.j, 0.+0.j, 0.+0.j], > [ 0.+0.j, 0.+0.j, 0.+0.j], > [ 0.+0.j, 0.+0.j, 1.+0.j]]) > >>> 1.0 * matrix(eye(41, dtype=complex)) > Segmentation fault > > I'm aware that the matrix class overrides the * operator with > matrix multiplication and that's why I want to use it. > However, I still expected a scalar multiplied with a matrix > to yield a matrix where each entry has been multiplied with > that scalar. > > In other words, I would expect both examples shown above to > return the identity matrix of the appropriate shape. > Obviously, that's not the case, at least not on my machine. > > Can anybody else reproduce this? > > This is using NumPy 0.9.4.1923, SciPy 0.4.4.1558, and > ActivePython 2.4.2 on Mac OS X 10.3.9. > > Thanks, > > -Andre > > > > > > P.S. Here's the relevant portion of the crash log. 
It looks > like the segfault occurs in the ATLAS zaxpy function: > > Exception: EXC_BAD_ACCESS (0x0001) > Codes: KERN_INVALID_ADDRESS (0x0001) at 0x031090c0 > > Thread 0 Crashed: > 0 libBLAS.dylib 0x9550f6e4 > ATL_zaxpy_xp0yp0aXbX + 0x1c > 1 _dotblas.so 0x0069beb8 > dotblas_matrixproduct + 0x8d4 (_dotblas.c:341) > 2 org.activestate.ActivePython24 0x1007c4a0 > call_function + 0x3bc (ceval.c:3558) > 3 org.activestate.ActivePython24 0x10079f4c > PyEval_EvalFrame + 0x22b8 (ceval.c:2163) > 4 org.activestate.ActivePython24 0x1007b0b4 > PyEval_EvalCodeEx + 0x868 (ceval.c:2736) > 5 org.activestate.ActivePython24 0x10025a48 > function_call + 0x158 (funcobject.c:548) > 6 org.activestate.ActivePython24 0x1000badc > PyObject_Call + 0x30 (abstract.c:1757) > 7 org.activestate.ActivePython24 0x1001526c > instancemethod_call + 0x31c (classobject.c:2448) > 8 org.activestate.ActivePython24 0x1000badc > PyObject_Call + 0x30 (abstract.c:1757) > 9 org.activestate.ActivePython24 0x10052a80 call_maybe + > 0x128 (typeobject.c:963) > 10 org.activestate.ActivePython24 0x1000bd00 binary_op1 + > 0x180 (abstract.c:378) > 11 org.activestate.ActivePython24 0x1000a518 > PyNumber_Multiply + 0x28 (abstract.c:673) > 12 org.activestate.ActivePython24 0x100784ec > PyEval_EvalFrame + 0x858 (ceval.c:1049) > 13 org.activestate.ActivePython24 0x1007b0b4 > PyEval_EvalCodeEx + 0x868 (ceval.c:2736) > 14 org.activestate.ActivePython24 0x1007e52c > PyEval_EvalCode + 0x30 (ceval.c:484) > 15 org.activestate.ActivePython24 0x100b2ebc run_node + > 0x4c (pythonrun.c:1265) > 16 org.activestate.ActivePython24 0x100b238c > PyRun_InteractiveOneFlags + 0x200 (pythonrun.c:763) > 17 org.activestate.ActivePython24 0x100b216c > PyRun_InteractiveLoopFlags + 0x10c (pythonrun.c:699) > 18 org.activestate.ActivePython24 0x100b3c10 > PyRun_AnyFileExFlags + 0xa4 (pythonrun.c:659) > 19 org.activestate.ActivePython24 0x100bf6e0 Py_Main + > 0xa2c (main.c:487) > 20 python 0x000018d0 _start + > 0x188 (crt.c:267) > 21 dyld 0x8fe1a278 _dyld_start + 0x64 > > > > > > -- > Andre Radke + mailto:lists at spicynoodles.net + http://spicynoodles.net/ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > This email and any files transmitted with it may contain privileged or confidential information and may be read or used only by the intended recipient. If you are not the intended recipient of the email or any of its attachments, please be advised that you have received this email in error and any use, dissemination, distribution, forwarding, printing or copying of this email or any attached files is strictly prohibited. If you have received this email in error, please immediately purge it and all attachments and notify the sender by reply email or contact the sender at the number listed. From jeremy at jeremysanders.net Thu Jan 19 16:50:43 2006 From: jeremy at jeremysanders.net (Jeremy Sanders) Date: Thu, 19 Jan 2006 21:50:43 +0000 (GMT) Subject: [SciPy-user] ANN: Veusz 0.9 released Message-ID: Veusz 0.9 --------- Velvet Ember Under Sky Zenith ----------------------------- http://home.gna.org/veusz/ Veusz is Copyright (C) 2003-2006 Jeremy Sanders Licenced under the GPL (version 2 or greater) Veusz is a scientific plotting package written in Python. It uses PyQt for display and user-interfaces, and numarray for handling the numeric data. Veusz is designed to produce publication-ready Postscript output. 
Veusz provides a GUI, command line and scripting interface (based on Python) to its plotting facilities. The plots are built using an object-based system to provide a consistent interface.

Changes from 0.8:
Please refer to ChangeLog for all the changes. Highlights include:
 * Contour support (thanks to the code of the matplotlib guys!)
 * Undo/redo
 * Rubber band axis zooming

Features of package:
 * X-Y plots (with errorbars)
 * Contour plots
 * Images (with colour mappings)
 * Stepped plots (for histograms)
 * Line plots
 * Function plots
 * Fitting functions to data
 * Stacked plots and arrays of plots
 * Plot keys
 * Plot labels
 * LaTeX-like formatting for text
 * EPS output
 * Simple data importing
 * Scripting interface
 * Save/Load plots
 * Dataset manipulation
 * Embed Veusz within other programs

To be done:
 * UI improvements
 * Import filters (for qdp and other plotting packages, fits, csv)

Requirements:
 Python (probably 2.3 or greater required) http://www.python.org/
 Qt (free edition) http://www.trolltech.com/products/qt/
 PyQt (SIP is required to be installed first) http://www.riverbankcomputing.co.uk/pyqt/ http://www.riverbankcomputing.co.uk/sip/
 numarray http://www.stsci.edu/resources/software_hardware/numarray
 Microsoft Core Fonts (recommended) http://corefonts.sourceforge.net/
 PyFITS (optional) http://www.stsci.edu/resources/software_hardware/pyfits

For documentation on using Veusz, see the "Documents" directory. The manual is in pdf, html and text format (generated from docbook). If you enjoy using Veusz, I would love to hear from you. Please join the mailing lists at https://gna.org/mail/?group=veusz to discuss new features or if you'd like to contribute code. The newest code can always be found in CVS. If non-GPL projects are interested in using Veusz code, please contact me. I am happy to consider relicensing code for other free projects, if I am legally allowed to do so. Cheers Jeremy From hgamboa at gmail.com Thu Jan 19 18:06:02 2006 From: hgamboa at gmail.com (Hugo Gamboa) Date: Thu, 19 Jan 2006 23:06:02 +0000 Subject: [SciPy-user] SciPy 0.4.4: fromstring In-Reply-To: <200601191950.08371.josemaria.garcia.perez@ya.com> References: <200601191950.08371.josemaria.garcia.perez@ya.com> Message-ID: <86522b1a0601191506h48a24d76i86e3d37b9e493ee@mail.gmail.com> I'm using the method fromfile, which reads the file directly and works impressively fast. Do you really need to read the file as a string and then convert it? The following code shows the timing of fromstring and fromfile on an 80 MB file (the difference is only visible with large files):

from scipy import *
import timing
import os

def tic():
    timing.start()

def tac(s=""):
    timing.finish()
    print(str(timing.milli())+'ms '+s)

tic()
a=randn(10000,1000)
tac("Generate data")

tic()
f=open("tofile.mat","w")
a.tofile(f)
tac("Write tofile")

tic()
f=open("tofile.mat","r")
d=fromfile(f,"d")
tac("Read fromfile")

tic()
f=open("tofile.mat","r")
l=f.read()
d=fromstring(l,"d")
tac("read fromstring")

os.remove("tofile.mat")

Hugo Gamboa On 1/19/06, Jos?
Mar?a wrote: > > Hello, > I have changed to SciPy 0.4.4 but now, the next code for reading > audiofiles: > ------------------------------------------------- > def ReadRawSound (_filename): > _fSound = open( _filename , 'r' ) > _SoundAsChar = _fSound.read() > _fSound.close() > _SoundAsFloat = scipy.fromstring( _SoundAsChar , dtype='f' , > count=-1) > > return _SoundAsFloat > -------------------------------------------------- > > With the previous version I used: > _SoundAsFloat =scipy.fromstring( _SoundAsChar , typecode='f' , count=-1) > > The problem is that now this function seems VERY slow. > > =============================== > I'm using Gentoo and I'm really new to python and scipy. There still no > exist > and ebuild so I installed by doing: > python setup.py build > python setup.py install > > > Is this a problem with 'fromstring' or with my installation? > > Thanks a lot in advance? > Jos? Mar?a (Spain) > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From evan.monroig at gmail.com Thu Jan 19 19:45:51 2006 From: evan.monroig at gmail.com (Evan Monroig) Date: Fri, 20 Jan 2006 09:45:51 +0900 Subject: [SciPy-user] multiplying scalar with complex matrix segfaults In-Reply-To: References: Message-ID: <20060120004551.GA20239@localhost.localdomain> On Jan.19 20h53, Andre Radke wrote : > > When I try to multiply a float value with a complex matrix object, I get unexpected results or even a segfault, e.g.: > > jannu:~ andre$ /usr/local/bin/python > ActivePython 2.4.2 Build 248 (ActiveState Corp.) based on > Python 2.4.2 (#1, Oct 3 2005, 09:39:46) > [GCC 3.3 20030304 (Apple Computer, Inc. build 1666)] on darwin > Type "help", "copyright", "credits" or "license" for more information. > >>> from scipy import * > >>> 1.0 * matrix(eye(3, dtype=complex)) > matrix([[ 1.+0.j, 0.+0.j, 0.+0.j], > [ 0.+0.j, 0.+0.j, 0.+0.j], > [ 0.+0.j, 0.+0.j, 1.+0.j]]) > >>> 1.0 * matrix(eye(41, dtype=complex)) > Segmentation fault > > Can anybody else reproduce this? > > This is using NumPy 0.9.4.1923, SciPy 0.4.4.1558, and ActivePython > 2.4.2 on Mac OS X 10.3.9. Works okay for me (but I have to do 'from numpy import *' instead of scipy) NumPy 0.9.3.1836, Scipy 0.4.4.1526 on MacPython 2.4.1 and Mac OS X 10.4 Evan -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: Digital signature URL: From Paul.Ray at nrl.navy.mil Thu Jan 19 22:11:17 2006 From: Paul.Ray at nrl.navy.mil (Paul Ray) Date: Thu, 19 Jan 2006 22:11:17 -0500 Subject: [SciPy-user] Bug in stats? Message-ID: Hi, I'm getting nan when I try to calculate the stats on any distribution In [1]: import scipy Overwriting fft= from scipy.fftpack.basic (was from numpy.dft.fftpack) Overwriting ifft= from scipy.fftpack.basic (was from numpy.dft.fftpack) In [2]: scipy.stats.norm.stats() Out[2]: (nan, nan) In [3]: scipy.__version__ Out[3]: '0.4.4' I get nan's for all distributions I have tried and the loc and scale parameters don't seem to help. Is this a bug or am I using it wrong? Thanks, -- Paul -- Dr. Paul S. 
Ray E-mail: Paul.Ray at nrl.navy.mil Naval Research Laboratory WWW : http://xweb.nrl.navy.mil/personnel/paulr/ Code 7655 Phone : (202) 404-1619 Washington, DC 20375 AIM : NRLPSR From arnd.baecker at web.de Fri Jan 20 03:46:30 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Fri, 20 Jan 2006 09:46:30 +0100 (CET) Subject: [SciPy-user] Test failure In-Reply-To: <43D058D5.6070703@gmail.com> References: <43D058D5.6070703@gmail.com> Message-ID: On Thu, 19 Jan 2006, Pawel Wielgus wrote: > Dear SciPy Users, > > I've just installed SciPy for the first time in my life (with a lot of > problems when using svn core and scipy, have you tried a recent numpy/scipy svn, i.e.:

svn co http://svn.scipy.org/svn/numpy/trunk numpy
svn co http://svn.scipy.org/svn/scipy/trunk scipy

(see also http://new.scipy.org/Wiki for more information - under development) Concerning the naming (scipy core etc.): have a look at http://numeric.scipy.org/ which explains this. To summarize: it is really numpy+scipy that you want. ;-) It would be great if you could post the details of your problems. I am sure that these can be solved! (Actually, with the most recent numpy/scipy it should work fine...) > but smoothly and successfully when > using Scipy_core-0.3.2.tar.gz and SciPy-0.3.2.tar.gz +FFTW +ATLAS > binaries -DJBFFT [for DJBFFT I had some compilation problems, but > honestly I did not dig into it deeper]). Anyway, finally I ran some tests. > After t=scipy.test() I have: > Ran 972 tests in 2.042s > OK > > But after scipy.test(level=10,verbosity=2) I receive an error: > ====================================================================== > FAIL: check_normal (scipy.stats.morestats.test_morestats.test_anderson) > ---------------------------------------------------------------------- [...] > Is there any particular reason for that? What does it mean? This is a statistical test, so it may fail from time to time. If you rerun `scipy.test(10)` this test will usually pass. It has been discussed on scipy-dev to run statistical tests a couple of times and to raise an error only if they fail every time (not yet implemented). Best, Arnd From hgamboa at gmail.com Fri Jan 20 07:16:17 2006 From: hgamboa at gmail.com (Hugo Gamboa) Date: Fri, 20 Jan 2006 12:16:17 +0000 Subject: [SciPy-user] Question about slicing Message-ID: <86522b1a0601200416q52d2af8csf18c2676f2d224d7@mail.gmail.com> In the code below the commented line makes numpy complain. The intended result is produced if I use where() to get the indexes. Am I asking too much of slicing, or should it simply work? Thanks in advance. Hugo Gamboa

from numpy import *
a=arange(10)
b=rand(10,10)
i=a>3
r=b[i]     # works fine, but only in the first axis
#b[:,i]    <- IndexError: arrays used as indices must be of integer type
v=b[:,where(i)]

From josemaria.alkala at gmail.com Fri Jan 20 10:39:46 2006 From: josemaria.alkala at gmail.com (=?iso-8859-15?q?Jos=E9_Mar=EDa?=) Date: Fri, 20 Jan 2006 16:39:46 +0100 Subject: [SciPy-user] SciPy 0.4.4: fromstring In-Reply-To: <86522b1a0601191506h48a24d76i86e3d37b9e493ee@mail.gmail.com> References: <200601191950.08371.josemaria.garcia.perez@ya.com> <86522b1a0601191506h48a24d76i86e3d37b9e493ee@mail.gmail.com> Message-ID: <200601201639.47058.josemaria.garcia.perez@ya.com> Sorry, I was wrong. The function "fromstring" wasn't the origin of the problem. But it was good to learn about "fromfile". Thanks a lot.
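In case it is useful to anyone searching the archives, this is roughly what the reader looks like using fromfile (an untested sketch; it assumes the same raw 32-bit float format as before):

import scipy

def ReadRawSound(filename):
    # Read 32-bit floats straight from the file, skipping the
    # intermediate string and the fromstring() call entirely.
    f = open(filename, 'rb')
    data = scipy.fromfile(f, 'f')
    f.close()
    return data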
Kind regards, Jose Mar?a On Friday 20 January 2006 00:06, Hugo Gamboa wrote: > I'm using the method fromfile that reads the file directly, and works > impressively fast. > > To you really need to read the file as a string, and then convert it? > > The following code shows the timing of fromstring and fromfile in a file > with 80mb (the difference is only visible with large files) > > from scipy import *import timingimport os*def* tic(): > timing.start() > *def* tac(s=""): > timing.finish() > *print*(str(timing.milli())+'ms '+s) > > > tic() > a=randn(10000,1000) > tac("Generate data") > > tic() > f=open("tofile.mat","w")a.tofile(f) > tac("Write tofile") > > tic() > f=open("tofile.mat","r") > d=fromfile(f,"d") > tac("Read fromfile") > > tic() > f=open("tofile.mat","r") > l=f.read() > d=fromstring(l,"d") > tac("read fromstring") > os.remove("tofile.mar") > > > > Hugo Gamboa > > On 1/19/06, Jos? Mar?a wrote: > > Hello, > > I have changed to SciPy 0.4.4 but now, the next code for reading > > audiofiles: > > ------------------------------------------------- > > def ReadRawSound (_filename): > > _fSound = open( _filename , 'r' ) > > _SoundAsChar = _fSound.read() > > _fSound.close() > > _SoundAsFloat = scipy.fromstring( _SoundAsChar , dtype='f' , > > count=-1) > > > > return _SoundAsFloat > > -------------------------------------------------- > > > > With the previous version I used: > > _SoundAsFloat =scipy.fromstring( _SoundAsChar , typecode='f' , count=-1) > > > > The problem is that now this function seems VERY slow. > > > > =============================== > > I'm using Gentoo and I'm really new to python and scipy. There still no > > exist > > and ebuild so I installed by doing: > > python setup.py build > > python setup.py install > > > > > > Is this a problem with 'fromstring' or with my installation? > > > > Thanks a lot in advance? > > Jos? Mar?a (Spain) > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.net > > http://www.scipy.net/mailman/listinfo/scipy-user From humufr at yahoo.fr Fri Jan 20 12:26:23 2006 From: humufr at yahoo.fr (Humufr) Date: Fri, 20 Jan 2006 12:26:23 -0500 Subject: [SciPy-user] new scipy Message-ID: <43D11D3F.5080603@yahoo.fr> Hello I'm trying to install the new scipy + numpy. It's seems to compile correctly but when I want to import import scipy.signal I have this error message when I'm doing like it's write in the INSTALL file import clapack _use_force_clapack = 1 ImportError: /scratch/usr/local/lib/python2.4/site-packages/scipy/lib/lapack/clapack.so: undefined symbol: clapack_sgesv I thought that the lapack library is not complete (event if the error message is a little bit different) so I follow the instruction wrote in the file but I obtain always this error. My system is a Suse 10 and I can't do nothing on the system so I have to install it in my own directory. Thanks, Nicolas From bldrake at adaptcs.com Fri Jan 20 14:16:33 2006 From: bldrake at adaptcs.com (Barry Drake) Date: Fri, 20 Jan 2006 11:16:33 -0800 (PST) Subject: [SciPy-user] sparse matrices and umfpack Message-ID: <20060120191633.7313.qmail@web215.biz.mail.re2.yahoo.com> I'm trying to use UMFSparse, but there is no module umfpack which should be imported at the beginning of UMFSparse.py: """UMFSparse: A minimal sparse-matrix class for Python, useful for interfacing with the UMFPACK library for solving large linear systems, This file defines a class: UMFmatrix that defines a sparse-matrix class. 
It is designed so that the attribute data points to the non-zero values in the matrix and index is a list of arrays which detail how to place the non-zero values in the array. Currently only a compressed sparse row storage is implemented. """ import scipy_base import scipy.io as io import types import struct import umfpack ... Where is umfpack? I'm using the latest version of Enthought's Python distro on a Win XP Pro system. I've searched my hard drive, the message archives, googled, and gmaned. So far nothing that helps. Anyone know where the umfpack module is? Thanks. Barry Drake From lists at spicynoodles.net Fri Jan 20 15:10:16 2006 From: lists at spicynoodles.net (Andre Radke) Date: Fri, 20 Jan 2006 21:10:16 +0100 Subject: [SciPy-user] multiplying scalar with complex matrix segfaults In-Reply-To: <34090E25C2327C4AA5D276799005DDE0E34C96@SMDMX0501.mds.mdsinc.com> References: <34090E25C2327C4AA5D276799005DDE0E34C96@SMDMX0501.mds.mdsinc.com> Message-ID: LATORNELL, Doug wrote: >Works okay for me, Andre Okay, thanks. I updated from the svn repository again, re-built numpy and scipy, and now scalar multiplication of a complex matrix works properly for me. I don't know why though. Maybe I screwed up during the previous build and ended up with a corrupt installation of scipy... -Andre From oliphant at ee.byu.edu Fri Jan 20 15:58:18 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 20 Jan 2006 13:58:18 -0700 Subject: [SciPy-user] multiplying scalar with complex matrix segfaults In-Reply-To: References: <34090E25C2327C4AA5D276799005DDE0E34C96@SMDMX0501.mds.mdsinc.com> Message-ID: <43D14EEA.5010302@ee.byu.edu> Andre Radke wrote: >LATORNELL, Doug wrote: > > >>Works okay for me, Andre >> >> > >Okay, thanks. I updated from the svn repository again, re-built numpy >and scipy, and now scalar multiplication of a complex matrix works >properly for me. I don't know why though. Maybe I screwed up during >the previous build and ended up with a corrupt installation of >scipy... > > > No you didn't. There was a bug that you alerted me to, that I introduced while trying to improve matrix multiplies using a transpose. I just fixed it. -Travis From Paul.Ray at nrl.navy.mil Fri Jan 20 17:47:06 2006 From: Paul.Ray at nrl.navy.mil (Paul Ray) Date: Fri, 20 Jan 2006 17:47:06 -0500 Subject: [SciPy-user] Type error in signaltools.py? Message-ID: Hi, I'm running scipy on Python2.4.2 and I found the following issue: Line 1353 of signal/signaltools.py is currently: ret = reshape(newdata,tdshape) This crashes with a TypeError: *** TypeError: Array can not be safely cast to required type I think the line should correctly be: ret = reshape(newdata,tuple(tdshape)) Is this a change that happened between Python2.3 and Python2.4? I found this problem in scipy 0.3.2, but the source line is the same in the current SVN version of scipy. Cheers, -- Paul -- Dr. Paul S. Ray E-mail: Paul.Ray at nrl.navy.mil Naval Research Laboratory WWW : http://xweb.nrl.navy.mil/ personnel/paulr/ Code 7655 Phone : (202) 404-1619 Washington, DC 20375 AIM : NRLPSR From oliphant at ee.byu.edu Fri Jan 20 18:18:15 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Fri, 20 Jan 2006 16:18:15 -0700 Subject: [SciPy-user] Type error in signaltools.py? 
In-Reply-To: References: Message-ID: <43D16FB7.8000607@ee.byu.edu> Paul Ray wrote: >Hi, > >I'm running scipy on Python2.4.2 and I found the following issue: > >Line 1353 of signal/signaltools.py is currently: > ret = reshape(newdata,tdshape) >This crashes with a TypeError: >*** TypeError: Array can not be safely cast to required type > >I think the line should correctly be: > ret = reshape(newdata,tuple(tdshape)) > >Is this a change that happened between Python2.3 and Python2.4? > >I found this problem in scipy 0.3.2, but the source line is the same >in the current SVN version of scipy. > > > Thanks. Fixed in SVN. But we should figure out why tdshape could not be interpreted as PyArray_INTP -Travis From hetland at tamu.edu Fri Jan 20 18:26:37 2006 From: hetland at tamu.edu (Robert Hetland) Date: Fri, 20 Jan 2006 17:26:37 -0600 Subject: [SciPy-user] Installing SciPy on Linux (with enhancements) Message-ID: In case anyone is interested, here is a link to unofficial directions for installing SciPy on Linux (slightly more detailed than on the SciPy Install page, but more specific for Linux), written by Steve Baum (http://stommel.tamu.edu/~baum): http://pong.tamu.edu/tiki/tiki-view_blog_post.php?blogId=6&postId=97 -Rob. ----- Rob Hetland, Assistant Professor Dept of Oceanography, Texas A&M University p: 979-458-0096, f: 979-845-6331 e: hetland at tamu.edu, w: http://pong.tamu.edu From dd55 at cornell.edu Sat Jan 21 16:12:02 2006 From: dd55 at cornell.edu (Darren Dale) Date: Sat, 21 Jan 2006 16:12:02 -0500 Subject: [SciPy-user] use of blas routines in scipy 0.4.4 Message-ID: <200601211612.02641.dd55@cornell.edu> I'm having a bit of trouble using a blas function in scipy-0.4.4. I've tested the following script with version 0.3.2 and Numeric, which works, and 0.4.4 and numpy which gives me an error. # For scipy-0.4.4 with numpy-0.9.4.1971 from scipy.lib.blas import get_blas_funcs from numpy import array # For scipy-0.3.2, with Numeric-24.2: ##from Numeric import array ##from scipy.linalg.blas import get_blas_funcs a = array([[1,1,1]]) b=array([[1],[1],[1]]) dot, = get_blas_funcs(['gemm',], (a,b), debug=1) res = dot(1, a, b) print res Here's the error: dgemm:n=1 Traceback (most recent call last): File "blasdot.py", line 12, in ? res = dot(1, a, b) fblas.error: (trans_b?kb==k:ldb==k) failed for hidden n Is there a bug, or am I not aware of some important change? Thanks, Darren From evan.monroig at gmail.com Sun Jan 22 01:28:18 2006 From: evan.monroig at gmail.com (Evan Monroig) Date: Sun, 22 Jan 2006 15:28:18 +0900 Subject: [SciPy-user] scipy.special.erf randomly returns NANs on OSX Message-ID: Hi, I have this strange problem that the special erf function returns NANs when the input is an array: >>> import scipy,numpy >>> x = numpy.arange(0., 2., 0.1) >>> x array([ 0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1. , 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9]) >>> scipy.special.erf(x) array([ 0. , 0.11246292, 0.22270259, 0.32862676, 0.42839236, 0.52049988, 0.60385609, 0.67780119, 0.74210096, 0.79690821, nan, 0.88020507, 0.91031398, 0.93400794, 0.95228512, 0.96610515, 0.97634838, 0.98379046, 0.9890905 , 0.99279043]) >>> x = numpy.arange(1., 2., 0.1) >>> scipy.special.erf(x) array([ 0.84270079, 0.88020507, 0.91031398, 0.93400794, 0.95228512, 0.96610515, 0.97634838, 0.98379046, 0.9890905 , 0.99279043]) scipy is 0.4.4.1526, numpy is 0.9.3.1836, I am running on Mac OS X with MacPython 2.4.1 Can anyone reproduce the problem? 
Evan From vbalko at gmail.com Sun Jan 22 07:41:44 2006 From: vbalko at gmail.com (balky) Date: Sun, 22 Jan 2006 13:41:44 +0100 Subject: [SciPy-user] confused In-Reply-To: References: <43C02F25.2050500@gmail.com> Message-ID: <43D37D88.3020802@gmail.com> hello, I`m little bit confused now from this rename. What exactly is numpy and scipy now? What is relation between this two packages? Which modules contains one and other doesn`t? And most important - what should I do to run programs written for old scipy? And one last question if I can - how can I make an array of python classes? How can I tell to scipy that each cell (record) in array should be my own class? Some examples please :) thank you for taking me out of confusion :) balky From schofield at ftw.at Sun Jan 22 08:11:18 2006 From: schofield at ftw.at (Ed Schofield) Date: Sun, 22 Jan 2006 14:11:18 +0100 Subject: [SciPy-user] confused In-Reply-To: <43D37D88.3020802@gmail.com> References: <43C02F25.2050500@gmail.com> <43D37D88.3020802@gmail.com> Message-ID: <43D38476.7080307@ftw.at> balky wrote: >hello, > >I`m little bit confused now from this rename. > >What exactly is numpy and scipy now? What is relation between this two >packages? Which modules contains one and other doesn`t? > > See: http://new.scipy.org/Wiki/SciPy >And most important - what should I do to run programs written for old scipy? > > Hmm ... I don't know if we have any documentation on this. You can start by changing all references to scipy_base to, I think, scipy, and running the convertcode.py script provided with numpy to convert Numeric-specific code to use numpy instead. Post any other questions here (as specific as possible) and we can help you out. -- Ed From strawman at astraw.com Sun Jan 22 11:35:35 2006 From: strawman at astraw.com (Andrew Straw) Date: Sun, 22 Jan 2006 08:35:35 -0800 Subject: [SciPy-user] Installing SciPy on Linux (with enhancements) In-Reply-To: References: Message-ID: <43D3B457.7080707@astraw.com> Thanks. I made a link from the wiki: http://new.scipy.org/Wiki/Installing_SciPy/Linux Robert Hetland wrote: >In case anyone is interested, here is a link to unofficial directions >for installing SciPy on Linux (slightly more detailed than on the >SciPy Install page, but more specific for Linux), written by Steve >Baum (http://stommel.tamu.edu/~baum): > >http://pong.tamu.edu/tiki/tiki-view_blog_post.php?blogId=6&postId=97 > >-Rob. > >----- >Rob Hetland, Assistant Professor >Dept of Oceanography, Texas A&M University >p: 979-458-0096, f: 979-845-6331 >e: hetland at tamu.edu, w: http://pong.tamu.edu > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > From robert.kern at gmail.com Sun Jan 22 14:47:51 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 22 Jan 2006 13:47:51 -0600 Subject: [SciPy-user] confused In-Reply-To: <43D37D88.3020802@gmail.com> References: <43C02F25.2050500@gmail.com> <43D37D88.3020802@gmail.com> Message-ID: <43D3E167.7090602@gmail.com> balky wrote: > And one last question if I can - how can I make an array of python > classes? Use object arrays. 
In [1]: from numpy import array In [2]: class A(object): ...: def __init__(self, x): ...: self.x = x ...: In [3]: some_As = array([A(1), A(2), A(3)], object) In [4]: some_As Out[4]: array([<__main__.A object at 0x212af70>, <__main__.A object at 0x212ae30>, <__main__.A object at 0x212ac90>], dtype=object) > How can I tell to scipy that each cell (record) in array should > be my own class? Some examples please :) You can't. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From cournape at atr.jp Sun Jan 22 20:58:19 2006 From: cournape at atr.jp (Cournapeau David) Date: Mon, 23 Jan 2006 10:58:19 +0900 Subject: [SciPy-user] FFT Problem In-Reply-To: References: <43CBB872.4030801@mecha.uni-stuttgart.de> Message-ID: <1137981499.5572.3.camel@localhost.localdomain> On Mon, 2006-01-16 at 10:21 -0500, Frederick Ross wrote: > Then it's probably the details of my setup. I'll try building from scratch. > > Thanks. I have exactly the same problem, having build from sources last WE. THe problem appears only on one of my computers, which is the minimac (on linux, though, I don't use Mac OS X). I would think this is a powerpc related problem. David From evan.monroig at gmail.com Sun Jan 22 21:01:18 2006 From: evan.monroig at gmail.com (Evan Monroig) Date: Mon, 23 Jan 2006 11:01:18 +0900 Subject: [SciPy-user] scipy.special.erf randomly returns NANs on OSX In-Reply-To: References: Message-ID: On 1/22/06, Evan Monroig wrote: > Hi, > > I have this strange problem that the special erf function returns NANs > when the input is an array: [snip] To respond to myself, I could not reproduce the problem on Ubuntu Breezy (5.10). So I will stick with linux for a while :). Evan From robert.kern at gmail.com Sun Jan 22 21:15:50 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 22 Jan 2006 20:15:50 -0600 Subject: [SciPy-user] scipy.special.erf randomly returns NANs on OSX In-Reply-To: References: Message-ID: <43D43C56.7010608@gmail.com> Evan Monroig wrote: > Hi, > > I have this strange problem that the special erf function returns NANs > when the input is an array: > Can anyone reproduce the problem? Yes, also on OS X, and only with the almost-but-not-quite-1.0 that you get by adding 0.1 to 0.0 ten times. In [9]: x = 0.0 In [10]: for i in range(10): ....: x += 0.1 ....: In [11]: x Out[11]: 0.99999999999999989 In [12]: scipy.special.erf(x) Out[12]: nan It's probably a platform-specific bug in Cephes. I'll try to see if it shows up in C, too. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From robert.kern at gmail.com Sun Jan 22 21:18:45 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 22 Jan 2006 20:18:45 -0600 Subject: [SciPy-user] scipy.special.erf randomly returns NANs on OSX In-Reply-To: References: Message-ID: <43D43D05.9020707@gmail.com> Evan Monroig wrote: > Hi, > > I have this strange problem that the special erf function returns NANs > when the input is an array: > scipy is 0.4.4.1526, numpy is 0.9.3.1836, I am running on Mac OS X > with MacPython 2.4.1 > > Can anyone reproduce the problem? Actually, on OS X, /usr/include/math.h defines an erf() function that is probably getting picked up instead of Cephes's version. Bad Apple! -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." 
-- Richard Harter From cookedm at physics.mcmaster.ca Sun Jan 22 22:32:07 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Sun, 22 Jan 2006 22:32:07 -0500 Subject: [SciPy-user] scipy.special.erf randomly returns NANs on OSX In-Reply-To: <43D43D05.9020707@gmail.com> (Robert Kern's message of "Sun, 22 Jan 2006 20:18:45 -0600") References: <43D43D05.9020707@gmail.com> Message-ID: Robert Kern writes: > Evan Monroig wrote: >> Hi, >> >> I have this strange problem that the special erf function returns NANs >> when the input is an array: > >> scipy is 0.4.4.1526, numpy is 0.9.3.1836, I am running on Mac OS X >> with MacPython 2.4.1 >> >> Can anyone reproduce the problem? > > Actually, on OS X, /usr/include/math.h defines an erf() function that is > probably getting picked up instead of Cephes's version. Bad Apple! Hmm, we've seen this type of problem before, with round(). Cephes really should use a namespace (cephes_erf() instead of plain erf(), for instance). The least-intrusive patch would be a header with things like #define erf cephes_erf then when linking the cephes version will be pulled in. I've also just noticed that we're using an older version of Cephes (release 2.3, Jan. 1995) compared with one on netlib (release 2.9, Nov. 2000). I'll have a look at making a header, and then updating to the latest. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From ariciputi at pito.com Mon Jan 23 03:20:42 2006 From: ariciputi at pito.com (Andrea Riciputi) Date: Mon, 23 Jan 2006 09:20:42 +0100 Subject: [SciPy-user] scipy.special.erf randomly returns NANs on OSX In-Reply-To: <43D43C56.7010608@gmail.com> References: <43D43C56.7010608@gmail.com> Message-ID: OS X here too (10.4.4 with gcc 4.0.0), but sorry I've not installed numpy yet. However, I've tried to reproduce the problem using pure C. Using the Apple math.h the strange behaviour showed by Evan Monroig doesn't show up. The returned results are as expected: > Totila:~/Documents/Dottorato/Codice/erf andrea$ ./erf1 > erf(0.000000) = 0.000000e+00 > erf(0.100000) = 1.124629e-01 > erf(0.200000) = 2.227026e-01 > erf(0.300000) = 3.286268e-01 > erf(0.400000) = 4.283924e-01 > erf(0.500000) = 5.204999e-01 > erf(0.600000) = 6.038561e-01 > erf(0.700000) = 6.778012e-01 > erf(0.800000) = 7.421010e-01 > erf(0.900000) = 7.969082e-01 > erf(1.000000) = 8.427008e-01 > erf(1.100000) = 8.802051e-01 > erf(1.200000) = 9.103140e-01 > erf(1.300000) = 9.340079e-01 > erf(1.400000) = 9.522851e-01 > erf(1.500000) = 9.661051e-01 > erf(1.600000) = 9.763484e-01 > erf(1.700000) = 9.837905e-01 > erf(1.800000) = 9.890905e-01 > erf(1.900000) = 9.927904e-01 Even the code proposed by Robert Kern (when translated in C) works correctly: > Totila:~/Documents/Dottorato/Codice/prova andrea$ ./erf2 > erf(1.900000) = 9.927904e-01 So I don't think it's a problem with Apple erf implementation. Which OS X version are you running? Which compiler did you used to compile numpy and scipy? HTH, Andrea On Jan 23, 2006, at 03:15 , Robert Kern wrote: > Evan Monroig wrote: >> Hi, >> >> I have this strange problem that the special erf function returns >> NANs >> when the input is an array: > >> Can anyone reproduce the problem? > > Yes, also on OS X, and only with the almost-but-not-quite-1.0 that > you get by > adding 0.1 to 0.0 ten times. 
> > In [9]: x = 0.0 > > In [10]: for i in range(10): > ....: x += 0.1 > ....: > > In [11]: x > Out[11]: 0.99999999999999989 > > In [12]: scipy.special.erf(x) > Out[12]: nan > > It's probably a platform-specific bug in Cephes. I'll try to see if > it shows up > in C, too. > > -- > Robert Kern > robert.kern at gmail.com > > "In the fields of hell where the grass grows high > Are the graves of dreams allowed to die." > -- Richard Harter > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user From cimrman3 at ntc.zcu.cz Mon Jan 23 04:27:08 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 23 Jan 2006 10:27:08 +0100 Subject: [SciPy-user] sparse matrices and umfpack In-Reply-To: <20060120191633.7313.qmail@web215.biz.mail.re2.yahoo.com> References: <20060120191633.7313.qmail@web215.biz.mail.re2.yahoo.com> Message-ID: <43D4A16C.6070603@ntc.zcu.cz> Barry Drake wrote: > I'm trying to use UMFSparse, but there is no module > umfpack which should be imported at the beginning of > UMFSparse.py: > > """UMFSparse: A minimal sparse-matrix class for > Python, useful for interfacing > with the UMFPACK library for solving large linear > systems, > > Where is umfpack? I'm using the latest version of > Enthought's Python distro on a Win XP Pro system. > I've searched my hard drive, the message archives, > googled, and gmaned. So far nothing that helps. > > Anyone know where the umfpack module is? I may have the idea :) UMFSparse.py is an obsolete file. Try to get the new scipy from the subversion directory: (see e.g. http://new.scipy.org/Wiki/Installing_SciPy/Linux) - it contains sparse matrix support under 'scipy.sparse'. If you are interested specifically in the umfpack solver, there are stand-alone wrappers in 'scipy/Lib/sandbox/umfpack' directory (shame on me it's not yet merged into scipy.sparse...). However in order to use it, you must dowload and install umfpack (it is not included in scipy) from http://www.cise.ufl.edu/research/sparse/umfpack/ - the wrappers work with the version 4.4. r. From evan.monroig at gmail.com Mon Jan 23 08:24:07 2006 From: evan.monroig at gmail.com (Evan Monroig) Date: Mon, 23 Jan 2006 22:24:07 +0900 Subject: [SciPy-user] scipy.special.erf randomly returns NANs on OSX In-Reply-To: References: <43D43C56.7010608@gmail.com> Message-ID: <20060123132407.GA18834@localhost.localdomain> On Jan.23 09h20, Andrea Riciputi wrote : > > Even the code proposed by Robert Kern (when translated in C) works > correctly: > > > Totila:~/Documents/Dottorato/Codice/prova andrea$ ./erf2 > > erf(1.900000) = 9.927904e-01 Did you really try the *almost 1.0 but not 1.0* number ? It should be something like the following: double x = 0.0; for (int i = 0; i < 10; ++i) { x += 0.1; } result = erf(x); ... > So I don't think it's a problem with Apple erf implementation. Which > OS X version are you running? Which compiler did you used to compile > numpy and scipy? Sorry I don't have OSX at hand, but I know it is Tiger, and I am sure I used gcc 3. Evan From hetland at tamu.edu Mon Jan 23 09:46:53 2006 From: hetland at tamu.edu (Robert Hetland) Date: Mon, 23 Jan 2006 08:46:53 -0600 Subject: [SciPy-user] scipy.special.erf randomly returns NANs on OSX In-Reply-To: <20060123132407.GA18834@localhost.localdomain> References: <43D43C56.7010608@gmail.com> <20060123132407.GA18834@localhost.localdomain> Message-ID: I get the same errors: OS X 10.4, NumPy 0.9.2.1831, SciPy 0.4.4.1526. 
It is clearly an almost 1.0 problem... vis, x = arange(0.,2.,0.1) >>> x array([ 0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1. , 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9]) >>> special.erf(x) array([ 0. , 0.11246292, 0.22270259, 0.32862676, 0.42839236, 0.52049988, 0.60385609, 0.67780119, 0.74210096, 0.79690821, nan, 0.88020507, 0.91031398, 0.93400794, 0.95228512, 0.96610515, 0.97634838, 0.98379046, 0.9890905 , 0.99279043]) >>> x[10] 0.99999999999999989 >>> special.erf(x[10]) nan But, look.. >>> x[10]+1.0-1.0 1.0 >>> special.erf(x[10]+1.0-1.0) 0.84270079294971478 -Rob. On Jan 23, 2006, at 7:24 AM, Evan Monroig wrote: > On Jan.23 09h20, Andrea Riciputi wrote : >> >> Even the code proposed by Robert Kern (when translated in C) works >> correctly: >> >>> Totila:~/Documents/Dottorato/Codice/prova andrea$ ./erf2 >>> erf(1.900000) = 9.927904e-01 > > Did you really try the *almost 1.0 but not 1.0* number ? It should be > something like the following: > > double x = 0.0; > for (int i = 0; i < 10; ++i) { > x += 0.1; > } > result = erf(x); > ... > >> So I don't think it's a problem with Apple erf implementation. Which >> OS X version are you running? Which compiler did you used to compile >> numpy and scipy? > > Sorry I don't have OSX at hand, but I know it is Tiger, and I am sure > I used gcc 3. > > Evan > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user ----- Rob Hetland, Assistant Professor Dept of Oceanography, Texas A&M University p: 979-458-0096, f: 979-845-6331 e: hetland at tamu.edu, w: http://pong.tamu.edu From aanskis at yahoo.com Mon Jan 23 10:56:59 2006 From: aanskis at yahoo.com (Aurimas Anskaitis) Date: Mon, 23 Jan 2006 07:56:59 -0800 (PST) Subject: [SciPy-user] problems numpy 0.9.4 and matplotlib Message-ID: <20060123155659.63899.qmail@web36201.mail.mud.yahoo.com> I have some problems with numpy-0.9.4 and matplotlib 0.86.1. When I try to launch very simple script like this: import pylab from scipy import * y = array([1, 2, 3]) pylab.plot(y) pylab.show() I get a long error message ending with line "ValueError: arrays must have same number of dimensions". May somebody explain me what's happening? Everything works just fine using numpy 0.9.2. --------------------------------- Yahoo! Photos Ring in the New Year with Photo Calendars. Add photos, events, holidays, whatever. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jdhunter at ace.bsd.uchicago.edu Mon Jan 23 10:53:43 2006 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Mon, 23 Jan 2006 09:53:43 -0600 Subject: [SciPy-user] problems numpy 0.9.4 and matplotlib In-Reply-To: <20060123155659.63899.qmail@web36201.mail.mud.yahoo.com> (Aurimas Anskaitis's message of "Mon, 23 Jan 2006 07:56:59 -0800 (PST)") References: <20060123155659.63899.qmail@web36201.mail.mud.yahoo.com> Message-ID: <87oe23q6t4.fsf@peds-pc311.bsd.uchicago.edu> >>>>> "Aurimas" == Aurimas Anskaitis writes: Aurimas> I have some problems with numpy-0.9.4 and matplotlib Aurimas> 0.86.1. When I try to launch very simple script like Aurimas> this: import pylab from scipy import * y = array([1, 2, Aurimas> 3]) pylab.plot(y) pylab.show() Aurimas> I get a long error message ending with line "ValueError: Aurimas> arrays must have same number of dimensions". May somebody Aurimas> explain me what's happening? Everything works just fine Aurimas> using numpy 0.9.2. 
You need to make sure that matplotlib knows about which array type you are using by setting the "numerix" setting in you matplotlib rc file, typically found in ~/.matplotlib/matplotlibrc but you can find the definitive location by running your script with --verbose-helpful which the rc file location as well as which array package you are using. You will want to make sure your numerix setting is numpy. FYI, from pylab import * together with from scipy import * is a recipe for confusion, since both packages import a lot of names with a overlapping names. Hope this helps, JDH From nwagner at mecha.uni-stuttgart.de Mon Jan 23 11:09:38 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Mon, 23 Jan 2006 17:09:38 +0100 Subject: [SciPy-user] Compiling djbfft Message-ID: <43D4FFC2.8000406@mecha.uni-stuttgart.de> Hi all, I saw the comments wrt djbfft. http://pong.tamu.edu/tiki/tiki-view_blog_post.php?blogId=6&postId=97 The compilation problem can be fixed by adding a line #include to error.h. The default installation directory is /usr/local/djbfft. However djbfft_info: NOT AVAILABLE Pearu, How should I modify system_info.py in ~/numpy/numpy/distutils such that the library djbfft.a in /usr/local/djbfft/lib/ will be found. Nils From aisaac at american.edu Mon Jan 23 11:38:06 2006 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 23 Jan 2006 11:38:06 -0500 Subject: [SciPy-user] problems numpy 0.9.4 and matplotlib In-Reply-To: <87oe23q6t4.fsf@peds-pc311.bsd.uchicago.edu> References: <20060123155659.63899.qmail@web36201.mail.mud.yahoo.com><87oe23q6t4.fsf@peds-pc311.bsd.uchicago.edu> Message-ID: On Mon, 23 Jan 2006, John Hunter apparently wrote: > You need to make sure that matplotlib knows about which array type you > are using by setting the "numerix" setting in you matplotlib rc file, I see the problem reported by Aurimas. Details below. Alan Isaac Python 2.4.1 (#65, Mar 30 2005, 09:13:57) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy as N >>> import pylab as P >>> import matplotlib >>> N.__version__ '0.9.4' >>> matplotlib.__version__ '0.86.1' >>> matplotlib.rcParams['numerix'] 'numpy' >>> P.plot(N.array([1,2,3])) Traceback (most recent call last): File "", line 1, in ? 
File "C:\Python24\Lib\site-packages\matplotlib\pylab.py", line 2055, in plot b = ishold() File "C:\Python24\Lib\site-packages\matplotlib\pylab.py", line 939, in ishold return gca().ishold() File "C:\Python24\Lib\site-packages\matplotlib\pylab.py", line 890, in gca ax = gcf().gca(**kwargs) File "C:\Python24\Lib\site-packages\matplotlib\figure.py", line 615, in gca return self.add_subplot(111, **kwargs) File "C:\Python24\Lib\site-packages\matplotlib\figure.py", line 465, in add_su bplot a = Subplot(self, *args, **kwargs) File "C:\Python24\Lib\site-packages\matplotlib\axes.py", line 3974, in __init_ _ Axes.__init__(self, fig, [self.figLeft, self.figBottom, File "C:\Python24\Lib\site-packages\matplotlib\axes.py", line 331, in __init__ self._init_axis() File "C:\Python24\Lib\site-packages\matplotlib\axes.py", line 360, in _init_ax is self.xaxis = XAxis(self) File "C:\Python24\Lib\site-packages\matplotlib\axis.py", line 501, in __init__ self.cla() File "C:\Python24\Lib\site-packages\matplotlib\axis.py", line 524, in cla self.majorTicks.extend([self._get_tick(major=True) for i in range(1)]) File "C:\Python24\Lib\site-packages\matplotlib\axis.py", line 834, in _get_tic k return XTick(self.axes, 0, '', major=major) File "C:\Python24\Lib\site-packages\matplotlib\axis.py", line 100, in __init__ self.tick1line = self._get_tick1line(loc) File "C:\Python24\Lib\site-packages\matplotlib\axis.py", line 276, in _get_tic k1line markersize=self._size, File "C:\Python24\Lib\site-packages\matplotlib\lines.py", line 211, in __init_ _ self.set_data(xdata, ydata) File "C:\Python24\Lib\site-packages\matplotlib\lines.py", line 282, in set_dat a self._segments = unmasked_index_ranges(mask) File "C:\Python24\Lib\site-packages\matplotlib\lines.py", line 69, in unmasked _index_ranges m = concatenate(((1,), mask, (1,))) ValueError: arrays must have same number of dimensions From oliphant at ee.byu.edu Mon Jan 23 12:21:20 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 23 Jan 2006 10:21:20 -0700 Subject: [SciPy-user] problems numpy 0.9.4 and matplotlib In-Reply-To: References: <20060123155659.63899.qmail@web36201.mail.mud.yahoo.com><87oe23q6t4.fsf@peds-pc311.bsd.uchicago.edu> Message-ID: <43D51090.30105@ee.byu.edu> Alan G Isaac wrote: >On Mon, 23 Jan 2006, John Hunter apparently wrote: > > >>You need to make sure that matplotlib knows about which array type you >>are using by setting the "numerix" setting in you matplotlib rc file, >> >> > >I see the problem reported by Aurimas. >Details below. >Alan Isaac > > You need to get matplotlib out of CVS. It has the most recent changes to work with numpy. Masked Arrays underwent some facelifts and that is causing the problem you are seeing. The release schedule of matplotlib is not tied to numpy at this point... -Travis From robert.kern at gmail.com Mon Jan 23 12:31:23 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 23 Jan 2006 11:31:23 -0600 Subject: [SciPy-user] scipy.special.erf randomly returns NANs on OSX In-Reply-To: <20060123132407.GA18834@localhost.localdomain> References: <43D43C56.7010608@gmail.com> <20060123132407.GA18834@localhost.localdomain> Message-ID: <43D512EB.6010907@gmail.com> Evan Monroig wrote: > On Jan.23 09h20, Andrea Riciputi wrote : > >>Even the code proposed by Robert Kern (when translated in C) works >>correctly: >> >>>Totila:~/Documents/Dottorato/Codice/prova andrea$ ./erf2 >>>erf(1.900000) = 9.927904e-01 > > Did you really try the *almost 1.0 but not 1.0* number ? 
It should be > something like the following: > > double x = 0.0; > for (int i = 0; i < 10; ++i) { > x += 0.1; > } > result = erf(x); > ... I tried that and could *not* reproduce the bug with Apple's erf(). I will try with Cephes' later. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From aisaac at american.edu Mon Jan 23 14:18:52 2006 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 23 Jan 2006 14:18:52 -0500 Subject: [SciPy-user] problems numpy 0.9.4 and matplotlib In-Reply-To: <43D51090.30105@ee.byu.edu> References: <20060123155659.63899.qmail@web36201.mail.mud.yahoo.com><87oe23q6t4.fsf@peds-pc311.bsd.uchicago.edu> <43D51090.30105@ee.byu.edu> Message-ID: On Mon, 23 Jan 2006, Travis Oliphant apparently wrote: > You need to get matplotlib out of CVS. It has the most > recent changes to work with numpy. Masked Arrays > underwent some facelifts and that is causing the problem > you are seeing. Thanks! Alan From dd55 at cornell.edu Mon Jan 23 16:15:46 2006 From: dd55 at cornell.edu (Darren Dale) Date: Mon, 23 Jan 2006 16:15:46 -0500 Subject: [SciPy-user] use of blas routines in scipy 0.4.4 In-Reply-To: <200601211612.02641.dd55@cornell.edu> References: <200601211612.02641.dd55@cornell.edu> Message-ID: <200601231615.46569.dd55@cornell.edu> On Saturday 21 January 2006 16:12, Darren Dale wrote: > I'm having a bit of trouble using a blas function in scipy-0.4.4. I've > tested the following script with version 0.3.2 and Numeric, which works, > and 0.4.4 and numpy which gives me an error. Does anyone else see this behavior with the following script? > # For scipy-0.4.4 with numpy-0.9.4.1971 > from scipy.lib.blas import get_blas_funcs > from numpy import array > # For scipy-0.3.2, with Numeric-24.2: > ##from Numeric import array > ##from scipy.linalg.blas import get_blas_funcs > > a = array([[1,1,1]]) > b=array([[1],[1],[1]]) > > dot, = get_blas_funcs(['gemm',], (a,b), debug=1) > res = dot(1, a, b) > print res > > > Here's the error: > > dgemm:n=1 > Traceback (most recent call last): > File "blasdot.py", line 12, in ? > res = dot(1, a, b) > fblas.error: (trans_b?kb==k:ldb==k) failed for hidden n > > Is there a bug, or am I not aware of some important change? From oliphant at ee.byu.edu Mon Jan 23 16:21:59 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 23 Jan 2006 14:21:59 -0700 Subject: [SciPy-user] use of blas routines in scipy 0.4.4 In-Reply-To: <200601231615.46569.dd55@cornell.edu> References: <200601211612.02641.dd55@cornell.edu> <200601231615.46569.dd55@cornell.edu> Message-ID: <43D548F7.1090002@ee.byu.edu> Darren Dale wrote: >On Saturday 21 January 2006 16:12, Darren Dale wrote: > > >>I'm having a bit of trouble using a blas function in scipy-0.4.4. I've >>tested the following script with version 0.3.2 and Numeric, which works, >>and 0.4.4 and numpy which gives me an error. >> >> > >Does anyone else see this behavior with the following script? > > This should be fixed with the latest SVN (and released numpy-0.9.4). I think it was an f2py issue on two-dimensional arrays. 
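For what it's worth, a quick check along these lines (just a sketch; it assumes numpy >= 0.9.4 and a current scipy SVN build) should now give the same answer as numpy's dot:

from numpy import array, dot
from scipy.lib.blas import get_blas_funcs

a = array([[1., 1., 1.]])
b = array([[1.], [1.], [1.]])

gemm, = get_blas_funcs(['gemm'], (a, b))
print gemm(1.0, a, b)   # expect [[ 3.]]
print dot(a, b)         # numpy's dot, for comparison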
-Travis From dd55 at cornell.edu Mon Jan 23 17:57:44 2006 From: dd55 at cornell.edu (Darren Dale) Date: Mon, 23 Jan 2006 17:57:44 -0500 Subject: [SciPy-user] use of blas routines in scipy 0.4.4 In-Reply-To: <43D548F7.1090002@ee.byu.edu> References: <200601211612.02641.dd55@cornell.edu> <200601231615.46569.dd55@cornell.edu> <43D548F7.1090002@ee.byu.edu> Message-ID: <200601231757.44378.dd55@cornell.edu> On Monday 23 January 2006 16:21, Travis Oliphant wrote: > Darren Dale wrote: > >On Saturday 21 January 2006 16:12, Darren Dale wrote: > >>I'm having a bit of trouble using a blas function in scipy-0.4.4. I've > >>tested the following script with version 0.3.2 and Numeric, which works, > >>and 0.4.4 and numpy which gives me an error. > > > >Does anyone else see this behavior with the following script? > > This should be fixed with the latest SVN (and released numpy-0.9.4). > > I think it was an f2py issue on two-dimensional arrays. Yes, it is fixed. Thanks! Darren From daishi at egcrc.net Mon Jan 23 19:26:06 2006 From: daishi at egcrc.net (daishi at egcrc.net) Date: Mon, 23 Jan 2006 16:26:06 -0800 Subject: [SciPy-user] multinomial-like randint Message-ID: <604a8d93a679717aeb252f0f556ae0af@egcrc.net> hi, i'm wondering if i'm missing a function which does something along the lines of the following: --- import scipy def randomint(f): """Sample from [0, len(f)-1] according to f, which doesn't have to be normalized. """ n = len(f) s = scipy.sum(f) if n == 0 or s == 0.0: raise ValueError('Input to randomint must be nonzero') r = scipy.random.random() for i in xrange(n): x = f[i]/s if r < x: return i r -= x raise RuntimeError('Error in randomint, fell off of end') --- i've found scipy.random.randint which is uniform, and scipy.random.multinomial, which returns multi-sample counts which i don't need. i realize the above is pretty simple, but i imagine that this must be a relatively common need, and i'd rather not loop in python, so i feel like i must not be looking in the right place. tia,d From Fernando.Perez at colorado.edu Tue Jan 24 02:00:04 2006 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Tue, 24 Jan 2006 00:00:04 -0700 Subject: [SciPy-user] IPython 0.7.1 is out. Message-ID: <43D5D074.7060503@colorado.edu> [ As usual, sorry for the cross-post. Given that many of you use ipython and aren't on the lists, I hope you'll find this useful. But let me know if this is considered spam, and I'll stop ] Hi all, I have released IPython 0.7.1, which is mainly a bugfix release over 0.7.0. As expected in that release, given the large changes made, some problems inevitably appeared. I believe all regressions and known bugs have been fixed, along with some useful new features. This release marks the end of my tenure on the stable branch of ipython. Ville Vainio will continue as maintainer of the ipython trunk/, while I will continue working on the development branch of IPython (chainsaw). I will remain available on the list as usual, both to assist Ville as needed, and to discuss withe anyone interested. But I'll try to limit my time and effort spent on trunk to a minimum, so we can really advance the (fairly ambitious) chainsaw project. Where to get it --------------- IPython's homepage is at: http://ipython.scipy.org and downloads are at: http://ipython.scipy.org/dist I've provided: - source downloads (.tar.gz) - RPMs (for Python 2.3 and 2.4, built under Fedora Core 3). - Python Eggs (http://peak.telecommunity.com/DevCenter/PythonEggs). - a native win32 installer for both Python 2.3 and 2.4. 
Fedora users should note that IPython is now officially part of the Extras repository, so they can get the update from there as well (though it may lag by a few days). Debian, Fink and BSD packages for this version should be coming soon, as the respective maintainers have the time to follow their packaging procedures. Many thanks to Jack Moffit, Norbert Tretkowski, Andrea Riciputi and Dryice Liu). The eggs included now are lighter, as they don't include documentation and other ancillary data. If you want a full ipython installation, use the source tarball or your distribution's favorite system. Many thanks to Enthought for their continued hosting support for IPython. Release notes ------------- As always, the full ChangeLog is at http://ipython.scipy.org/ChangeLog. The highlights of this release follow. - *FIXED: this wasn't really working right in 0.7.0* Support for input with empty lines. If you have auto-indent on, this means that you need to either hit enter _twice_, or add/remove a space to your last blank line, to indicate you're done entering input. These changes also allow us to provide copy/paste of code with blank lines. - *FIXED: this wasn't really working right in 0.7.0* Support for pasting multiline input even with autoindent on. The code will look wrong on screen, but it will be stored and executed correctly internally. Note that if you have blank lines in your code, these need to be indented like their surroundings (pure empty lines will make ipython assume you are finished with your input). This is a limitation also of the plain python interpreter, and can't really be fixed in a line-oriented system like ipython. - Fixed bug where macros were not working correctly in threaded shells. - Catch exceptions which can be triggered asynchronously by signal handlers. This fixes a rare and obscure problem, but which could crash ipython; reported by Colin Kingsley . - Added new '-r' option to %hist, to see the raw input history (without conversions like %ls -> ipmagic("ls")). - Fix problems with GTK under win32 (excessive keyboard lag and cpu usage). - Added ipython.py script to root directory of download. This allows you to unpack ipython and execute it in-place, without needing to install it at all. - Improved handling of backslashes (\) in magics. This improves the usability of ipython especially under win32, which uses \ as a path separator (though it also benefits Unix for escaping characters in shell usage). - More autocall fixes (rare, but critical). - New IPython.ipapi module to begin exposing 'officially' IPython's public API. This should ease the task of those building systems on top of ipython. - New IPython.platutils module, exposing various platform-dependent utilities (such as terminal title control). - Implemented exception-based 'chain of command' for IPython's hooks. - Added Jason Orendorff's "path" module to IPython tree, http://www.jorendorff.com/articles/python/path/. You can get path objects conveniently through %sc, and !!, e.g.: sc files=ls for p in files.paths: # or files.p print p,p.mtime - Fix '%run -d' with Python 2.4 (pdb changed in Python 2.4). - Various other small fixes and enhancements. Enjoy, and as usual please report any problems. Regards, and thanks for all this time using IPython! Fernando. 
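A plain-Python illustration of the path objects mentioned in the release notes above (a sketch; it assumes the bundled module is importable as `path`, the name used by Jason Orendorff's original module, and the exact import location inside the IPython tree may differ):

from path import path   # import location is an assumption, see note above

# Roughly what the %sc example in the release notes does: list the files in
# the current directory and print each one's modification time.
for p in path('.').files():
    print p, p.mtime
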
From ariciputi at pito.com Tue Jan 24 04:28:09 2006 From: ariciputi at pito.com (Andrea Riciputi) Date: Tue, 24 Jan 2006 10:28:09 +0100 Subject: [SciPy-user] scipy.special.erf randomly returns NANs on OSX In-Reply-To: <20060123132407.GA18834@localhost.localdomain> References: <43D43C56.7010608@gmail.com> <20060123132407.GA18834@localhost.localdomain> Message-ID: Yes, I really tried it exactly as shown in the code you wrote. I also tried to something like: double x = 1.0; for(i = 0; i < 10000; i++) { x -= i * DBL_EPSILON; y = erf(x); print("erf(%.20f) = %e\n", x, y); } and the results are absolutely stable and correct. All the tests I made passed both with gcc 3.3 and 4.0. HTH, Andrea On Jan 23, 2006, at 14:24 , Evan Monroig wrote: > Did you really try the *almost 1.0 but not 1.0* number ? It should be > something like the following: > > double x = 0.0; > for (int i = 0; i < 10; ++i) { > x += 0.1; > } > result = erf(x); > ... From elcorto at gmx.net Tue Jan 24 05:29:51 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Tue, 24 Jan 2006 11:29:51 +0100 Subject: [SciPy-user] numpy.asarray and Numeric.asarray Message-ID: <43D6019F.901@gmx.net> Hi Using optimize.fmin_powell from a recent svn build with the initial guess x0 being a scalar I get this error: Traceback (most recent call last): File "trans.py", line 151, in ? tau = fmin_powell(func, tau_start) File "/usr/lib/python2.3/site-packages/scipy/optimize/optimize.py", line 1453, in fmin_powell N = len(x) TypeError: len() of unsized object where x = asarray(x0). Comparing numpy's and Numeric's asarray I found =================================================================================== In [8]: import numpy as nu In [9]: import Numeric as NU In [10]: x = nu.asarray([1,2]); len(x) Out[10]: 2 In [11]: y = NU.asarray([1,2]); len(y) Out[11]: 2 In [12]: x = nu.asarray(3); len(x) --------------------------------------------------------------------------- exceptions.TypeError Traceback (most recent call last) /home/elcorto/ TypeError: len() of unsized object In [13]: y = NU.asarray(3); len(y) Out[13]: 1 =================================================================================== cheers, steve -- "People like Blood Sausage too. People are Morons!" -- Phil Connors, Groundhog Day From elcorto at gmx.net Tue Jan 24 07:23:10 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Tue, 24 Jan 2006 13:23:10 +0100 Subject: [SciPy-user] numpy.asarray Numeric.asarray, fmin_* return type In-Reply-To: <43D6019F.901@gmx.net> References: <43D6019F.901@gmx.net> Message-ID: <43D61C2E.5050002@gmx.net> Steve Schmerler wrote: > Hi > > Using optimize.fmin_powell from a recent svn build with the initial > guess x0 being a scalar I get this error: > > Traceback (most recent call last): > File "trans.py", line 151, in ? > tau = fmin_powell(func, tau_start) > File "/usr/lib/python2.3/site-packages/scipy/optimize/optimize.py", > line 1453, in fmin_powell > N = len(x) > TypeError: len() of unsized object > > where x = asarray(x0). 
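The difference comes from numpy returning a zero-dimensional array for asarray(scalar), and len() is undefined for 0-d arrays. A defensive sketch of the kind of fix the optimizers could use, assuming numpy.atleast_1d is available (as it is in current numpy):

import numpy as np

def as_1d(x0):
    # atleast_1d turns a scalar (or 0-d array) into shape (1,) so len() works
    return np.atleast_1d(np.asarray(x0))

print len(as_1d(3))        # 1
print len(as_1d([1, 2]))   # 2
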
Comparing numpy's and Numeric's asarray I found > > =================================================================================== > In [8]: import numpy as nu > > In [9]: import Numeric as NU > > In [10]: x = nu.asarray([1,2]); len(x) > Out[10]: 2 > > In [11]: y = NU.asarray([1,2]); len(y) > Out[11]: 2 > > In [12]: x = nu.asarray(3); len(x) > --------------------------------------------------------------------------- > exceptions.TypeError Traceback (most > recent call last) > > /home/elcorto/ > > TypeError: len() of unsized object > > In [13]: y = NU.asarray(3); len(y) > Out[13]: 1 > > =================================================================================== > OK an easy workarround is to provide "scalar" start values as x = array([x0]) instead of just x0, then len(x) = 1. (providing x0 as it is (scalar) worked for me with scipy 0.3.2) But another thing: fmin, fmin_cg and fmin_bfgs (I haven't tried the others) return when the initial guess is of the same type (i.e. array([x0]) which seems reasonable. Unfortunately fmin_powell returns in this case. cheers, steve From schofield at ftw.at Tue Jan 24 10:10:34 2006 From: schofield at ftw.at (Ed Schofield) Date: Tue, 24 Jan 2006 16:10:34 +0100 Subject: [SciPy-user] multinomial-like randint In-Reply-To: <604a8d93a679717aeb252f0f556ae0af@egcrc.net> References: <604a8d93a679717aeb252f0f556ae0af@egcrc.net> Message-ID: <43D6436A.5020008@ftw.at> daishi at egcrc.net wrote: >hi, > >i'm wondering if i'm missing a function >which does something along the lines of >the following: > > >i've found scipy.random.randint which is uniform, >and scipy.random.multinomial, which returns >multi-sample counts which i don't need. >i realize the above is pretty simple, but i >imagine that this must be a relatively common >need, and i'd rather not loop in python, so >i feel like i must not be looking in the right >place. > > Yes, this is exactly what the 'intsampler' class in the new Monte Carlo package does. I described it in the thread [New maximum entropy and Monte Carlo packages] a few days ago. The code is still in the sandbox of the SVN tree. I had to make some changes to get it to build with MinGW. I haven't yet had access to a Windows machine to test it, but once I've tested it I'll move it into the main tree, hopefully in a few days. If you'd like to use it now, you can check out the latest SciPy from SVN and uncomment the line #config.add_subpackage('montecarlo') in scipy/Lib/sandbox/setup.py before building SciPy. -- Ed From david.huard at gmail.com Tue Jan 24 10:16:41 2006 From: david.huard at gmail.com (David Huard) Date: Tue, 24 Jan 2006 10:16:41 -0500 Subject: [SciPy-user] Bug in stats? In-Reply-To: References: Message-ID: <91cf711d0601240716q6b4ded95k@mail.gmail.com> Hi, I get the same problem, and had it with previous versions of scipy too. Also, I just found out that the fit method of the genextreme distribution returns [1.,0.,1.] systematically, no matter what the data set is. For example, >>> z = genextreme.rvs(.15, loc=3.8, scale=.17, size=30) >>> z array([3.57489347, 3.88036819, 3.78173901,...]) >>> genextreme.fit(z) array([ 1., 0., 1.]) Is it a bug or the stats package is not completed yet ? 
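One quick diagnostic is to evaluate the likelihood at the starting point fit() appears to use (a sketch; the [1., 0., 1.] result above suggests shape=1, loc=0, scale=1 as the default start):

import numpy as np
from scipy.stats import genextreme

z = genextreme.rvs(.15, loc=3.8, scale=.17, size=30)

# Log-likelihood at the apparent default start (shape=1, loc=0, scale=1).
# For shape c=1 the support is x <= 1, so data near 3.8 has pdf = 0 and the
# sum is -inf, leaving the optimizer nothing to improve on.
ll = np.sum(np.log(genextreme.pdf(z, 1.0, loc=0.0, scale=1.0)))
print ll
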
Thanks David 2006/1/19, Paul Ray : > > Hi, > > I'm getting nan when I try to calculate the stats on any distribution > In [1]: import scipy > Overwriting fft= from scipy.fftpack.basic > (was from numpy.dft.fftpack) > Overwriting ifft= from > scipy.fftpack.basic (was from > numpy.dft.fftpack) > > In [2]: scipy.stats.norm.stats() > Out[2]: (nan, nan) > > In [3]: scipy.__version__ > Out[3]: '0.4.4' > > I get nan's for all distributions I have tried and the loc and scale > parameters don't seem to help. > > Is this a bug or am I using it wrong? > > Thanks, > > -- Paul > > -- > Dr. Paul S. Ray E-mail: Paul.Ray at nrl.navy.mil > Naval Research Laboratory WWW : http://xweb.nrl.navy.mil/ > personnel/paulr/ > Code 7655 Phone : (202) 404-1619 > Washington, DC 20375 AIM : NRLPSR > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.huard at gmail.com Tue Jan 24 10:45:59 2006 From: david.huard at gmail.com (David Huard) Date: Tue, 24 Jan 2006 10:45:59 -0500 Subject: [SciPy-user] Bug in stats? In-Reply-To: <91cf711d0601240716q6b4ded95k@mail.gmail.com> References: <91cf711d0601240716q6b4ded95k@mail.gmail.com> Message-ID: <91cf711d0601240745j53f9a541y@mail.gmail.com> I answered my own question about genextreme.fit. The problem was that for the data set, the default initial value gave an -inf log-likelihood. I assume that this perplexes the fmin routine who simply returns the initial value. David 2006/1/24, David Huard : > > Hi, > > I get the same problem, and had it with previous versions of scipy too. > Also, I just found out that the fit method of the genextreme distribution > returns [1.,0.,1.] systematically, no matter what the data set is. > > For example, > >>> z = genextreme.rvs(.15, loc=3.8, scale=.17, size=30) > >>> z > array([3.57489347, 3.88036819, 3.78173901,...]) > >>> genextreme.fit(z) > array([ 1., 0., 1.]) > > Is it a bug or the stats package is not completed yet ? > > Thanks > > David > > 2006/1/19, Paul Ray : > > > > Hi, > > > > I'm getting nan when I try to calculate the stats on any distribution > > In [1]: import scipy > > Overwriting fft= from scipy.fftpack.basic > > (was from numpy.dft.fftpack) > > Overwriting ifft= from > > scipy.fftpack.basic (was from > > numpy.dft.fftpack) > > > > In [2]: scipy.stats.norm.stats() > > Out[2]: (nan, nan) > > > > In [3]: scipy.__version__ > > Out[3]: '0.4.4' > > > > I get nan's for all distributions I have tried and the loc and scale > > parameters don't seem to help. > > > > Is this a bug or am I using it wrong? > > > > Thanks, > > > > -- Paul > > > > -- > > Dr. Paul S. Ray E-mail: Paul.Ray at nrl.navy.mil > > Naval Research Laboratory WWW : http://xweb.nrl.navy.mil/ > > personnel/paulr/ > > Code 7655 Phone : (202) 404-1619 > > Washington, DC 20375 AIM : NRLPSR > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.net > > http://www.scipy.net/mailman/listinfo/scipy-user > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.huard at gmail.com Tue Jan 24 10:53:05 2006 From: david.huard at gmail.com (David Huard) Date: Tue, 24 Jan 2006 10:53:05 -0500 Subject: [SciPy-user] Bug in stats? 
In-Reply-To: <91cf711d0601240745j53f9a541y@mail.gmail.com> References: <91cf711d0601240716q6b4ded95k@mail.gmail.com> <91cf711d0601240745j53f9a541y@mail.gmail.com> Message-ID: <91cf711d0601240753q3c83b5c4j@mail.gmail.com> To use the stats method, try giving it the parameters explicitely, as in >>> gamma.stats(1.5, loc=4., scale =5.) (array(11.5), array(37.5)) David 2006/1/24, David Huard : > > I answered my own question about genextreme.fit. The problem was that for > the data set, the default initial value gave an -inf log-likelihood. I > assume that this perplexes the fmin routine who simply returns the initial > value. > > David > > 2006/1/24, David Huard : > > > > Hi, > > > > I get the same problem, and had it with previous versions of scipy too. > > Also, I just found out that the fit method of the genextreme distribution > > returns [1.,0.,1.] systematically, no matter what the data set is. > > > > For example, > > >>> z = genextreme.rvs(.15, loc=3.8, scale=.17, size=30) > > >>> z > > array([3.57489347, 3.88036819, 3.78173901,...]) > > >>> genextreme.fit(z) > > array([ 1., 0., 1.]) > > > > Is it a bug or the stats package is not completed yet ? > > > > Thanks > > > > David > > > > 2006/1/19, Paul Ray : > > > > > > Hi, > > > > > > I'm getting nan when I try to calculate the stats on any distribution > > > In [1]: import scipy > > > Overwriting fft= from scipy.fftpack.basic > > > (was from numpy.dft.fftpack) > > > Overwriting ifft= from > > > scipy.fftpack.basic (was from > > > numpy.dft.fftpack) > > > > > > In [2]: scipy.stats.norm.stats() > > > Out[2]: (nan, nan) > > > > > > In [3]: scipy.__version__ > > > Out[3]: '0.4.4' > > > > > > I get nan's for all distributions I have tried and the loc and scale > > > parameters don't seem to help. > > > > > > Is this a bug or am I using it wrong? > > > > > > Thanks, > > > > > > -- Paul > > > > > > -- > > > Dr. Paul S. Ray E-mail: Paul.Ray at nrl.navy.mil > > > Naval Research Laboratory WWW : http://xweb.nrl.navy.mil/ > > > personnel/paulr/ > > > Code 7655 Phone : (202) 404-1619 > > > Washington, DC 20375 AIM : NRLPSR > > > > > > > > > _______________________________________________ > > > SciPy-user mailing list > > > SciPy-user at scipy.net > > > http://www.scipy.net/mailman/listinfo/scipy-user > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Paul.Ray at nrl.navy.mil Tue Jan 24 11:07:55 2006 From: Paul.Ray at nrl.navy.mil (Paul Ray) Date: Tue, 24 Jan 2006 11:07:55 -0500 Subject: [SciPy-user] Bug in stats? In-Reply-To: <91cf711d0601240753q3c83b5c4j@mail.gmail.com> References: <91cf711d0601240716q6b4ded95k@mail.gmail.com> <91cf711d0601240745j53f9a541y@mail.gmail.com> <91cf711d0601240753q3c83b5c4j@mail.gmail.com> Message-ID: <3B848C20-6D12-41AB-9075-CACE8A46D5D7@nrl.navy.mil> On Jan 24, 2006, at 10:53 AM, David Huard wrote: > To use the stats method, try giving it the parameters explicitely, > as in > >>> gamma.stats(1.5, loc=4., scale =5.) > (array(11.5), array(37.5)) It seems to be broken for me... In [1]: import scipy Overwriting fft= from scipy.fftpack.basic (was from numpy.dft.fftpack) Overwriting ifft= from scipy.fftpack.basic (was from numpy.dft.fftpack) In [2]: scipy.stats.norm.stats() Out[2]: (nan, nan) In [3]: scipy.stats.norm.stats(loc=4.0,scale=5.0) Out[3]: (nan, nan) In [4]: scipy.stats.norm.stats(1.5, loc=4.0,scale=5.0) Out[4]: (nan, nan) In [5]: scipy.stats.gamma.stats(1.5, loc=4., scale =5.) 
Out[5]: (nan, nan) In [6]: scipy.__version__ Out[6]: '0.4.4' In [8]: os.uname() Out[8]: ('Darwin', 'mcxr2.nrl.navy.mil', '8.4.0', 'Darwin Kernel Version 8.4.0: Tue Jan 3 18:22:10 PST 2006; root:xnu-792.6.56.obj~1/RELEASE_PPC', 'Power Macintosh') Curiously, it passes the tests, so they must not tickle this bug: In [9]: scipy.stats.test(10) Found 92 tests for scipy.stats.stats Found 70 tests for scipy.stats.distributions Found 10 tests for scipy.stats.morestats Found 92 tests for scipy.stats Found 0 tests for __main__ ........................................................................ ........................................................................ ....................Ties preclude use of exact statistic. ..Ties preclude use of exact statistic. ........................................................................ .......................... ---------------------------------------------------------------------- Ran 264 tests in 1.635s OK Out[9]: However, my old version of scipy on a linux box works OK: >>> import scipy >>> scipy.stats.norm.stats() (0.0, 1.0) >>> scipy.stats.gamma.stats(1.5, loc=4., scale =5.) (11.5, 37.5) >>> scipy.__version__ '0.3.2' Cheers, -- Paul -- Dr. Paul S. Ray E-mail: Paul.Ray at nrl.navy.mil Naval Research Laboratory WWW : http://xweb.nrl.navy.mil/ personnel/paulr/ Code 7655 Phone : (202) 404-1619 Washington, DC 20375 AIM : NRLPSR From schofield at ftw.at Tue Jan 24 11:28:51 2006 From: schofield at ftw.at (Ed Schofield) Date: Tue, 24 Jan 2006 17:28:51 +0100 Subject: [SciPy-user] Monte Carlo package Message-ID: <43D655C3.8030109@ftw.at> Hi all, The Monte Carlo package now builds and runs fine for me on MinGW. I'd now like to ask for some help with testing on other platforms. (Are there any that don't define the rand() function? ;) If there are no cries of anguish I'll move it to the main tree on Friday. -- Ed From robert.kern at gmail.com Tue Jan 24 11:30:29 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 24 Jan 2006 10:30:29 -0600 Subject: [SciPy-user] Monte Carlo package In-Reply-To: <43D655C3.8030109@ftw.at> References: <43D655C3.8030109@ftw.at> Message-ID: <43D65625.4070707@gmail.com> Ed Schofield wrote: > Hi all, > > The Monte Carlo package now builds and runs fine for me on MinGW. I'd > now like to ask for some help with testing on other platforms. (Are > there any that don't define the rand() function? ;) I really would prefer that it not use rand() but use numpy.random instead. What do you need to make this happen? -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From robert.kern at gmail.com Tue Jan 24 11:55:41 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 24 Jan 2006 10:55:41 -0600 Subject: [SciPy-user] Bug in stats? In-Reply-To: <3B848C20-6D12-41AB-9075-CACE8A46D5D7@nrl.navy.mil> References: <91cf711d0601240716q6b4ded95k@mail.gmail.com> <91cf711d0601240745j53f9a541y@mail.gmail.com> <91cf711d0601240753q3c83b5c4j@mail.gmail.com> <3B848C20-6D12-41AB-9075-CACE8A46D5D7@nrl.navy.mil> Message-ID: <43D65C0D.3080306@gmail.com> Paul Ray wrote: > It seems to be broken for me... > > In [1]: import scipy > Overwriting fft= from scipy.fftpack.basic > (was from numpy.dft.fftpack) > Overwriting ifft= from > scipy.fftpack.basic (was from > numpy.dft.fftpack) > > In [2]: scipy.stats.norm.stats() > Out[2]: (nan, nan) Can you try the most recent checkouts of numpy and scipy? 
I get In [3]: from scipy import stats In [4]: stats.norm.stats() Out[4]: (array(0.0), array(1.0)) -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From yann.ledu at noos.fr Tue Jan 24 12:19:35 2006 From: yann.ledu at noos.fr (Yann Le Du) Date: Tue, 24 Jan 2006 18:19:35 +0100 (CET) Subject: [SciPy-user] Behavior of string.join differs from standard python Message-ID: Hello, Just a small problem I stumbled upon. I'd like to know why the behavior of string.join in scipy is different from that of standard python ? In scipy : # string.join("",["a","b","c"]) and in python string module : # string.join(["a","vb",",cf"],"") And also, the documentation of string.join is not clear, because if you do what's written there, there's an error. -- Yann Le Du http://yledu.free.fr From Paul.Ray at nrl.navy.mil Tue Jan 24 12:38:54 2006 From: Paul.Ray at nrl.navy.mil (Paul Ray) Date: Tue, 24 Jan 2006 12:38:54 -0500 Subject: [SciPy-user] Bug in stats? In-Reply-To: <43D65C0D.3080306@gmail.com> References: <91cf711d0601240716q6b4ded95k@mail.gmail.com> <91cf711d0601240745j53f9a541y@mail.gmail.com> <91cf711d0601240753q3c83b5c4j@mail.gmail.com> <3B848C20-6D12-41AB-9075-CACE8A46D5D7@nrl.navy.mil> <43D65C0D.3080306@gmail.com> Message-ID: <0D19E7E3-E512-4F62-9D0A-A336D140AEE1@nrl.navy.mil> On Jan 24, 2006, at 11:55 AM, Robert Kern wrote: > Can you try the most recent checkouts of numpy and scipy? I get > > In [3]: from scipy import stats > > In [4]: stats.norm.stats() > Out[4]: (array(0.0), array(1.0)) Yep, it is now fixed! Thanks! And, no more strange errors about overwriting the fft function when I import. In [1]: import scipy In [3]: import scipy.stats In [4]: scipy.stats.norm.stats() Out[4]: (array(0.0), array(1.0)) In [5]: scipy.stats.norm.stats(loc=4.0,scale=5.0) Out[5]: (array(4.0), array(25.0)) In [6]: scipy.stats.gamma.stats(1.5, loc=4., scale =5.) Out[6]: (array(11.5), array(37.5)) In [7]: scipy.__version__ Out[7]: '0.4.5.1570' Cheers, -- Paul -- Dr. Paul S. Ray E-mail: Paul.Ray at nrl.navy.mil Naval Research Laboratory WWW : http://xweb.nrl.navy.mil/ personnel/paulr/ Code 7655 Phone : (202) 404-1619 Washington, DC 20375 AIM : NRLPSR From Paul.Ray at nrl.navy.mil Tue Jan 24 12:51:33 2006 From: Paul.Ray at nrl.navy.mil (Paul Ray) Date: Tue, 24 Jan 2006 12:51:33 -0500 Subject: [SciPy-user] fftpack test failures Message-ID: <41717FD0-92EA-4537-A8E0-1E73273C0461@nrl.navy.mil> Hi, Since I just did an svn update for numpy and scipy to fix the stats bug, so I did a scipy.test(10), and found there are now test failures in fftpack: In [12]: scipy.fftpack.test(10) Found 24 tests for scipy.fftpack.pseudo_diffs Found 23 tests for scipy.fftpack.basic Found 4 tests for scipy.fftpack.helper Found 0 tests for __main__ Differentiation of periodic functions ===================================== size | convolve | naive ------------------------------------- 100 | 0.08 | 1.61 (secs for 1500 calls) 1000 | 0.07 | 1.94 (secs for 300 calls) 256 | 0.11 | 2.50 (secs for 1500 calls) 512 | 0.13 | 2.71 (secs for 1000 calls) 1024 | 0.09 | 3.10 (secs for 500 calls) 2048 | 0.11 | 2.57 (secs for 200 calls) 4096 | 0.12 | 2.68 (secs for 100 calls) 8192 | 0.20 | 2.82.......... 
(secs for 50 calls) Hilbert transform of periodic functions ========================================= size | optimized | naive ----------------------------------------- 100 | 0.08 | 1.37 (secs for 1500 calls) 1000 | 0.07 | 1.19 (secs for 300 calls) 256 | 0.10 | 1.66 (secs for 1500 calls) 512 | 0.11 | 1.70 (secs for 1000 calls) 1024 | 0.09 | 1.86 (secs for 500 calls) 2048 | 0.11 | 1.47 (secs for 200 calls) 4096 | 0.11 | 1.38 (secs for 100 calls) 8192 | 0.12 | 1.38........ (secs for 50 calls) Shifting periodic functions ============================== size | optimized | naive ------------------------------ 100 | 0.08 | 1.35 (secs for 1500 calls) 1000 | 0.06 | 1.59 (secs for 300 calls) 256 | 0.11 | 2.03 (secs for 1500 calls) 512 | 0.11 | 2.14 (secs for 1000 calls) 1024 | 0.10 | 2.57 (secs for 500 calls) 2048 | 0.10 | 1.87 (secs for 200 calls) 4096 | 0.10 | 1.76 (secs for 100 calls) 8192 | 0.13 | 1.77.. (secs for 50 calls) Tilbert transform of periodic functions ========================================= size | optimized | naive ----------------------------------------- 100 | 0.08 | 1.52 (secs for 1500 calls) 1000 | 0.05 | 1.42 (secs for 300 calls) 256 | 0.11 | 2.18 (secs for 1500 calls) 512 | 0.11 | 2.21 (secs for 1000 calls) 1024 | 0.09 | 3.00 (secs for 500 calls) 2048 | 0.20 | 2.78 (secs for 200 calls) 4096 | 0.15 | 2.57 (secs for 100 calls) 8192 | 0.12 | 1.72.... (secs for 50 calls) Fast Fourier Transform ================================================= | real input | complex input ------------------------------------------------- size | scipy | Numeric | scipy | Numeric ------------------------------------------------- 100 | 0.28 | 0.31 | 2.43 | 0.32 (secs for 7000 calls) 1000 | 0.50 | 0.76 | 3.32 | 0.82 (secs for 2000 calls) 256 | 0.68 | 0.65 | 4.76 | 0.63 (secs for 10000 calls) 512 | 0.92 | 1.35 | 8.03 | 1.27 (secs for 10000 calls) 1024 | 0.30 | 0.40 | 1.38 | 0.41 (secs for 1000 calls) 2048 | 0.62 | 0.82 | 2.56 | 0.76 (secs for 1000 calls) 4096 | 0.56 | 0.75 | 2.43 | 0.77 (secs for 500 calls) 8192 | 1.25 | 2.03 | 11.35 | 4.08...confused: size=2, arr_size=8, rank=1, effrank=2, arr.nd=2, dims=[ 2 ], arr.dims=[ 2 4 ] E (secs for 500 calls) Multi-dimensional Fast Fourier Transform =================================================== | real input | complex input --------------------------------------------------- size | scipy | Numeric | scipy | Numeric --------------------------------------------------- 100x100confused: size=100, arr_size=10000, rank=1, effrank=2, arr.nd=2, dims=[ 100 ], arr.dims=[ 100 100 ] Econfused: size=3, arr_size=27, rank=1, effrank=3, arr.nd=3, dims= [ 3 ], arr.dims=[ 3 3 3 ] Econfused: size=3, arr_size=9, rank=1, effrank=2, arr.nd=2, dims= [ 3 ], arr.dims=[ 3 3 ] Econfused: size=4, arr_size=16, rank=1, effrank=2, arr.nd=2, dims= [ 4 ], arr.dims=[ 4 4 ] Econfused: size=4, arr_size=16, rank=1, effrank=2, arr.nd=2, dims= [ 4 ], arr.dims=[ 4 4 ] E Inverse Fast Fourier Transform =============================================== | real input | complex input ----------------------------------------------- size | scipy | Numeric | scipy | Numeric ----------------------------------------------- 100 | 0.70 | 1.46 | 4.39 | 1.09 (secs for 7000 calls) 1000 | 0.57 | 2.31 | 3.40 | 2.36 (secs for 2000 calls) 256 | 0.66 | 2.27 | 5.91 | 2.58 (secs for 10000 calls) 512 | 1.09 | 4.78 | 7.63 | 4.10 (secs for 10000 calls) 1024 | 0.38 | 1.74 | 2.40 | 1.23 (secs for 1000 calls) 2048 | 0.65 | 3.74 | 3.25 | 2.11 (secs for 1000 calls) 4096 | 0.64 | 2.22 | 2.60 | 2.00 (secs for 500 calls) 8192 | 
1.34 | 4.77 | 6.15 | 7.22.....confused: size=3, arr_size=9, rank=1, effrank=2, arr.nd=2, dims=[ 3 ], arr.dims=[ 3 3 ] Econfused: size=2, arr_size=4, rank=1, effrank=2, arr.nd=2, dims= [ 2 ], arr.dims=[ 2 2 ] E (secs for 500 calls) Inverse Fast Fourier Transform (real data) ================================== size | scipy | Numeric ---------------------------------- 100 | 0.57 | 1.33 (secs for 7000 calls) 1000 | 0.28 | 1.46 (secs for 2000 calls) 256 | 0.75 | 1.96 (secs for 10000 calls) 512 | 0.81 | 1.96 (secs for 10000 calls) 1024 | 0.13 | 0.51 (secs for 1000 calls) 2048 | 0.62 | 2.14 (secs for 1000 calls) 4096 | 0.71 | 1.15 (secs for 500 calls) 8192 | 1.31 | 2.52.... (secs for 500 calls) Fast Fourier Transform (real data) ================================== size | scipy | Numeric ---------------------------------- 100 | 0.40 | 0.38 (secs for 7000 calls) 1000 | 0.24 | 0.31 (secs for 2000 calls) 256 | 0.60 | 0.65 (secs for 10000 calls) 512 | 0.87 | 0.94 (secs for 10000 calls) 1024 | 0.12 | 0.14 (secs for 1000 calls) 2048 | 0.55 | 0.60 (secs for 1000 calls) 4096 | 0.51 | 0.52 (secs for 500 calls) 8192 | 0.96 | 1.09....... ====================================================================== ERROR: check_n_argument_real (scipy.fftpack.basic.test_basic.test_fft) ---------------------------------------------------------------------- Traceback (most recent call last): File "/sw/lib/python2.4/site-packages/scipy/fftpack/tests/ test_basic.py", line 105, in check_n_argument_real y = fft([x1,x2],n=4) File "/sw/lib/python2.4/site-packages/scipy/fftpack/basic.py", line 101, in fft return work_function(tmp,n,1,0,overwrite_x) error: failed in converting 1st argument `x' of _fftpack.zrfft to C/ Fortran array ====================================================================== ERROR: bench_random (scipy.fftpack.basic.test_basic.test_fftn) ---------------------------------------------------------------------- Traceback (most recent call last): File "/sw/lib/python2.4/site-packages/scipy/fftpack/tests/ test_basic.py", line 571, in bench_random y = fftn(x) File "/sw/lib/python2.4/site-packages/scipy/fftpack/basic.py", line 302, in fftn return _raw_fftnd(tmp,shape,axes,1,overwrite_x,work_function) File "/sw/lib/python2.4/site-packages/scipy/fftpack/basic.py", line 238, in _raw_fftnd return work_function(x,s,direction,overwrite_x=overwrite_x) error: failed in converting 1st argument `x' of _fftpack.zfftnd to C/ Fortran array ====================================================================== ERROR: check_axes_argument (scipy.fftpack.basic.test_basic.test_fftn) ---------------------------------------------------------------------- Traceback (most recent call last): File "/sw/lib/python2.4/site-packages/scipy/fftpack/tests/ test_basic.py", line 458, in check_axes_argument assert_array_almost_equal(fftn(x),fftn(x,axes=(-3,-2,-1))) # kji_space File "/sw/lib/python2.4/site-packages/scipy/fftpack/basic.py", line 302, in fftn return _raw_fftnd(tmp,shape,axes,1,overwrite_x,work_function) File "/sw/lib/python2.4/site-packages/scipy/fftpack/basic.py", line 238, in _raw_fftnd return work_function(x,s,direction,overwrite_x=overwrite_x) error: failed in converting 1st argument `x' of _fftpack.zfftnd to C/ Fortran array ====================================================================== ERROR: check_definition (scipy.fftpack.basic.test_basic.test_fftn) ---------------------------------------------------------------------- Traceback (most recent call last): File "/sw/lib/python2.4/site-packages/scipy/fftpack/tests/ 
test_basic.py", line 424, in check_definition y = fftn(x) File "/sw/lib/python2.4/site-packages/scipy/fftpack/basic.py", line 302, in fftn return _raw_fftnd(tmp,shape,axes,1,overwrite_x,work_function) File "/sw/lib/python2.4/site-packages/scipy/fftpack/basic.py", line 238, in _raw_fftnd return work_function(x,s,direction,overwrite_x=overwrite_x) error: failed in converting 1st argument `x' of _fftpack.zfftnd to C/ Fortran array ====================================================================== ERROR: check_shape_argument (scipy.fftpack.basic.test_basic.test_fftn) ---------------------------------------------------------------------- Traceback (most recent call last): File "/sw/lib/python2.4/site-packages/scipy/fftpack/tests/ test_basic.py", line 524, in check_shape_argument y = fftn(small_x,shape=(4,4)) File "/sw/lib/python2.4/site-packages/scipy/fftpack/basic.py", line 302, in fftn return _raw_fftnd(tmp,shape,axes,1,overwrite_x,work_function) File "/sw/lib/python2.4/site-packages/scipy/fftpack/basic.py", line 238, in _raw_fftnd return work_function(x,s,direction,overwrite_x=overwrite_x) error: failed in converting 1st argument `x' of _fftpack.zfftnd to C/ Fortran array ====================================================================== ERROR: check_shape_axes_argument (scipy.fftpack.basic.test_basic.test_fftn) ---------------------------------------------------------------------- Traceback (most recent call last): File "/sw/lib/python2.4/site-packages/scipy/fftpack/tests/ test_basic.py", line 535, in check_shape_axes_argument y = fftn(small_x,shape=(4,4),axes=(-1,)) File "/sw/lib/python2.4/site-packages/scipy/fftpack/basic.py", line 302, in fftn return _raw_fftnd(tmp,shape,axes,1,overwrite_x,work_function) File "/sw/lib/python2.4/site-packages/scipy/fftpack/basic.py", line 253, in _raw_fftnd r = work_function(x,s,direction,overwrite_x=overwrite_x) error: failed in converting 1st argument `x' of _fftpack.zfftnd to C/ Fortran array ====================================================================== ERROR: check_definition (scipy.fftpack.basic.test_basic.test_ifftn) ---------------------------------------------------------------------- Traceback (most recent call last): File "/sw/lib/python2.4/site-packages/scipy/fftpack/tests/ test_basic.py", line 593, in check_definition y = ifftn(x) File "/sw/lib/python2.4/site-packages/scipy/fftpack/basic.py", line 331, in ifftn return _raw_fftnd(tmp,shape,axes,-1,overwrite_x,work_function) File "/sw/lib/python2.4/site-packages/scipy/fftpack/basic.py", line 238, in _raw_fftnd return work_function(x,s,direction,overwrite_x=overwrite_x) error: failed in converting 1st argument `x' of _fftpack.zfftnd to C/ Fortran array ====================================================================== ERROR: check_random_complex (scipy.fftpack.basic.test_basic.test_ifftn) ---------------------------------------------------------------------- Traceback (most recent call last): File "/sw/lib/python2.4/site-packages/scipy/fftpack/tests/ test_basic.py", line 603, in check_random_complex assert_array_almost_equal (ifftn(fftn(x)),x) File "/sw/lib/python2.4/site-packages/scipy/fftpack/basic.py", line 302, in fftn return _raw_fftnd(tmp,shape,axes,1,overwrite_x,work_function) File "/sw/lib/python2.4/site-packages/scipy/fftpack/basic.py", line 238, in _raw_fftnd return work_function(x,s,direction,overwrite_x=overwrite_x) error: failed in converting 1st argument `x' of _fftpack.zfftnd to C/ Fortran array 
---------------------------------------------------------------------- Ran 51 tests in 250.187s FAILED (errors=8) (secs for 500 calls) Out[12]: In [13]: scipy.__version__ Out[13]: '0.4.5.1570' In [14]: import os In [15]: os.uname() Out[15]: ('Darwin', 'mcxr2.nrl.navy.mil', '8.4.0', 'Darwin Kernel Version 8.4.0: Tue Jan 3 18:22:10 PST 2006; root:xnu-792.6.56.obj~1/RELEASE_PPC', 'Power Macintosh') In [16]: scipy.show_config() lapack_opt_info: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] extra_compile_args = ['-faltivec'] define_macros = [('NO_ATLAS_INFO', 3)] fft_opt_info: libraries = ['fftw3'] library_dirs = ['/usr/local/lib'] define_macros = [('SCIPY_FFTW3_H', None)] include_dirs = ['/usr/local/include'] djbfft_info: NOT AVAILABLE fftw3_info: libraries = ['fftw3'] library_dirs = ['/usr/local/lib'] define_macros = [('SCIPY_FFTW3_H', None)] include_dirs = ['/usr/local/include'] blas_opt_info: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] extra_compile_args = ['-faltivec', '-I/System/Library/Frameworks/ vecLib.framework/Headers'] define_macros = [('NO_ATLAS_INFO', 3)] Finally, if it matters, here are the FFTW2 and FFTW3 libraries on my system: mcxr2 : 160>ls /usr/local/lib/*fft* /usr/local/lib/libfftw3.a /usr/local/lib/libsfftw.a /usr/local/lib/libfftw3.la* /usr/local/lib/libsfftw.la* /usr/local/lib/libfftw3_threads.a /usr/local/lib/ libsfftw_threads.a /usr/local/lib/libfftw3_threads.la* /usr/local/lib/ libsfftw_threads.la* /usr/local/lib/libfftw3f.a /usr/local/lib/libsrfftw.a /usr/local/lib/libfftw3f.la* /usr/local/lib/libsrfftw.la* /usr/local/lib/libfftw3f_threads.a /usr/local/lib/ libsrfftw_threads.a /usr/local/lib/libfftw3f_threads.la* /usr/local/lib/ libsrfftw_threads.la* Cheers, -- Paul -- Dr. Paul S. Ray E-mail: Paul.Ray at nrl.navy.mil Naval Research Laboratory WWW : http://xweb.nrl.navy.mil/ personnel/paulr/ Code 7655 Phone : (202) 404-1619 Washington, DC 20375 AIM : NRLPSR From oliphant.travis at ieee.org Tue Jan 24 13:06:20 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 24 Jan 2006 11:06:20 -0700 Subject: [SciPy-user] Behavior of string.join differs from standard python In-Reply-To: References: Message-ID: <43D66C9C.9050104@ieee.org> Yann Le Du wrote: >Hello, > >Just a small problem I stumbled upon. > >I'd like to know why the behavior of string.join in scipy is different >from that of standard python ? > >In scipy : > ># string.join("",["a","b","c"]) > > Look at type(numpy.string) and type(string) for part of the answer. >And also, the documentation of string.join is not clear, because if you do >what's written there, there's an error. > > The doc-string (please call it that not "the documentation" which usually refers to off-line docs), is exactly the same as str.join because the string object in numpy inherits from the str object in Python. So, numpy.string.join is an unbound method. Thus, the first argument must be a "joining" string. However, the string module has a different calling syntax. My understanding is that the string module is deprecated, by the way since most of the functionality are now methods on strings. 
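The difference is easiest to see side by side (plain Python, nothing numpy-specific):

import string

print string.join(["a", "b", "c"], "")   # module function: sequence first, then separator
print str.join("", ["a", "b", "c"])      # unbound str method: separator first, then sequence
print "".join(["a", "b", "c"])           # the usual spelling; all three print 'abc'
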
-Travis From oliphant.travis at ieee.org Tue Jan 24 14:32:46 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 24 Jan 2006 12:32:46 -0700 Subject: [SciPy-user] fftpack test failures In-Reply-To: <41717FD0-92EA-4537-A8E0-1E73273C0461@nrl.navy.mil> References: <41717FD0-92EA-4537-A8E0-1E73273C0461@nrl.navy.mil> Message-ID: <43D680DE.80407@ieee.org> Paul Ray wrote: >Hi, > >Since I just did an svn update for numpy and scipy to fix the stats >bug, so I did a scipy.test(10), and found there are now test failures >in fftpack: > > > This is an f2py problem. Specifically, the thing I recently changed to fix a problem with dgemm is breaking these functions. So, Pearu, I need help on this one. Apparently sometimes check_and_fix_dimensions shoul be swapping the dimensions and sometimes not. I'm not sure when to swicth the order and when not to. I've changes it back (thus causing the dgemm example posted earlier to fail). Update your numpy distribution and remove build/src/fortranobject.c and build/src/fortranobject.h to get the modules to re-link against the *old* behavior. -Travis From pearu at scipy.org Tue Jan 24 14:02:51 2006 From: pearu at scipy.org (Pearu Peterson) Date: Tue, 24 Jan 2006 13:02:51 -0600 (CST) Subject: [SciPy-user] fftpack test failures In-Reply-To: <43D680DE.80407@ieee.org> References: <41717FD0-92EA-4537-A8E0-1E73273C0461@nrl.navy.mil> <43D680DE.80407@ieee.org> Message-ID: On Tue, 24 Jan 2006, Travis Oliphant wrote: > Paul Ray wrote: > >> Hi, >> >> Since I just did an svn update for numpy and scipy to fix the stats >> bug, so I did a scipy.test(10), and found there are now test failures >> in fftpack: >> >> >> > > This is an f2py problem. Specifically, the thing I recently changed to > fix a problem with dgemm is breaking these functions. So, Pearu, I need > help on this one. Apparently sometimes check_and_fix_dimensions shoul > be swapping the dimensions and sometimes not. Ok, I'll take a look at this.. Pearu From aisaac at american.edu Tue Jan 24 15:36:29 2006 From: aisaac at american.edu (Alan G Isaac) Date: Tue, 24 Jan 2006 15:36:29 -0500 Subject: [SciPy-user] =?utf-8?q?Behavior_of_string=2Ejoin_differs_from_sta?= =?utf-8?q?ndard=09python?= In-Reply-To: <43D66C9C.9050104@ieee.org> References: <43D66C9C.9050104@ieee.org> Message-ID: On Tue, 24 Jan 2006, Travis Oliphant apparently wrote: > My understanding is that the string module is deprecated, by the way > since most of the functionality are now methods on strings. Except for capwords and maketrans, that is correct: http://www.python.org/doc/current/lib/node111.html Cheers, Alan Isaac From zollars at caltech.edu Tue Jan 24 16:02:09 2006 From: zollars at caltech.edu (Eric Zollars) Date: Tue, 24 Jan 2006 13:02:09 -0800 Subject: [SciPy-user] numpy and fcompiler In-Reply-To: References: <41717FD0-92EA-4537-A8E0-1E73273C0461@nrl.navy.mil> <43D680DE.80407@ieee.org> Message-ID: <43D695D1.1010100@caltech.edu> Pearu- What is bundle1.o that is added to xlf.cfg in ibm.py in the get_flags_linker_so() function? Eric > Pearu Peterson wrote: > > Change the line > > compiler = new_fcompiler(compiler='ibm') > > to > > compiler = IbmFCompiler() > > Also, what are the values > > os.name > sys.platform > > in your system? Currently ibm compiler is enabled only for aix platforms, see _default_compilers dictionary in numpy/distutils/fcompiler/__init__.py. > > Pearu os.name = posix sys.platform = linux2 I added ibm to _default_compilers for linux.* in __init__.py. But I was still getting 'None' for compiler.get_version(). 
In the def get_version() in ibm.py there is this code: l = [d for d in l if os.path.isfile(os.path.join(xlf_dir,d,'xlf.cfg'))] if not l: from distutils.version import LooseVersion self.version = version = LooseVersion(l[0]) However 'l' is defined in this case ['8.1'], so I added the else statement else: version = l[0] Now the ibm compiler is detected correctly. However the build is failing with cannot find 'bundle1.o'. This appears to be added to the xlf.cfg in ibm.py in the get_flags_linker_so() function. I am not sure what this does. Let me know if this is AIX specific. Also, in the same function '-bshared' is added as an option for the linker. As far as I can tell there is no such option for the 8.1 (or 9.1) versions of the compiler for Linux. Continuing.. Eric From robert.kern at gmail.com Tue Jan 24 16:25:11 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 24 Jan 2006 15:25:11 -0600 Subject: [SciPy-user] numpy and fcompiler In-Reply-To: <43D695D1.1010100@caltech.edu> References: <41717FD0-92EA-4537-A8E0-1E73273C0461@nrl.navy.mil> <43D680DE.80407@ieee.org> <43D695D1.1010100@caltech.edu> Message-ID: <43D69B37.4090108@gmail.com> Eric Zollars wrote: > Pearu- > What is bundle1.o that is added to xlf.cfg in ibm.py in the > get_flags_linker_so() function? It's an object file in /usr/lib on OS X. Mac users were the first to try using xlf with f2py, so the code got written to support Macs. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From travis.brady at gmail.com Tue Jan 24 17:17:59 2006 From: travis.brady at gmail.com (Travis Brady) Date: Tue, 24 Jan 2006 14:17:59 -0800 Subject: [SciPy-user] Strange behavior with scipy.test() Message-ID: I've just installed Numpy-0.9.2 and Scipy-0.4.4 from the windows binaries on another box I have access to and Numpy.test() works as expected by scipy.test() yields the following strange message and kicks me out of ipython (0.7.1): In [1]: import scipy In [2]: scipy.test() Overwriting lib= from C:\Python24\lib\site-packages\scipy\lib\__init__.pyc (was from C:\Python24\lib\site-packages\numpy\lib\__init__.pyc) Found 128 tests for scipy.linalg.fblas Found 10 tests for scipy.stats.morestats Found 92 tests for scipy.stats.stats Found 36 tests for scipy.linalg.decomp Found 6 tests for scipy.optimize.optimize Found 4 tests for scipy.linalg.lapack Found 1 tests for scipy.optimize.zeros Found 92 tests for scipy.stats Found 41 tests for scipy.linalg.basic Found 339 tests for scipy.special.basic Found 128 tests for scipy.lib.blas.fblas Found 7 tests for scipy.linalg.matfuncs Found 42 tests for scipy.lib.lapack Found 1 tests for scipy.optimize.cobyla Found 14 tests for scipy.lib.blas Found 14 tests for scipy.linalg.blas Found 70 tests for scipy.stats.distributions Found 6 tests for scipy.optimize Found 0 tests for __main__ EEEEEEEEEEEEE C:\Python24\Scripts> -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.huard at gmail.com Wed Jan 25 10:08:19 2006 From: david.huard at gmail.com (David Huard) Date: Wed, 25 Jan 2006 10:08:19 -0500 Subject: [SciPy-user] Monte Carlo package In-Reply-To: <43D65625.4070707@gmail.com> References: <43D655C3.8030109@ftw.at> <43D65625.4070707@gmail.com> Message-ID: <91cf711d0601250708u49fd339bt@mail.gmail.com> It builds fine on linux Ubuntu and the example you gave in the mail works fine, however, I'm not sure I understand your question about rand(). 
Here is what I get : >>> from scipy.sandbox.montecarlo import * >>> rand() NameError: name 'rand' is not defined If I may ask a question, is there a routine in scipy to generate random samples from a continuous distribution ? Thanks, David 2006/1/24, Robert Kern : > > Ed Schofield wrote: > > Hi all, > > > > The Monte Carlo package now builds and runs fine for me on MinGW. I'd > > now like to ask for some help with testing on other platforms. (Are > > there any that don't define the rand() function? ;) > > I really would prefer that it not use rand() but use numpy.random instead. > What > do you need to make this happen? > > -- > Robert Kern > robert.kern at gmail.com > > "In the fields of hell where the grass grows high > Are the graves of dreams allowed to die." > -- Richard Harter > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nkottens at nd.edu Wed Jan 25 10:53:39 2006 From: nkottens at nd.edu (Nicholas Kottenstette) Date: Wed, 25 Jan 2006 10:53:39 -0500 Subject: [SciPy-user] building scipy Message-ID: <43D79F03.70609@nd.edu> Hey guys, I'm still a bit confused as to whether I need to install Numeric or not. Following the building scipy routines for unix http://www.scipy.org/documentation/buildscipy.txt I went and built scipy which did not generate any of the linear algebra routines. ie. unable to import LinearAlgebra So looking at the Downloads documentation, I saw that Numeric should be installed. Looking at the old Numeric documentation this would definetly give me the linear algebra interface, which I did and it works. Is this a correct conclusion: 1. In order to use scipy and utilize linear algebra features such as the ability to calculate a matrix inverse and perform a matrixmultiply I need to: a) first build and install Numeric, I used version 24.2 from http://sourceforge.net/project/showfiles.php?group_id=1369&package_id=1351 b) then build and install scipy as described in http://www.scipy.org/documentation/buildscipy.txt Looking at Travis's documentation on NumPy p. 14 and p. 15 leaves me even further confused. This tends to imply that I should build and install numpy instead of Numeric in order to get the linalg package. But is NumPy already part of scipy? Sincerely, Nicholas Kottenstette From nwagner at mecha.uni-stuttgart.de Wed Jan 25 11:19:58 2006 From: nwagner at mecha.uni-stuttgart.de (Nils Wagner) Date: Wed, 25 Jan 2006 17:19:58 +0100 Subject: [SciPy-user] building scipy In-Reply-To: <43D79F03.70609@nd.edu> References: <43D79F03.70609@nd.edu> Message-ID: <43D7A52E.5060701@mecha.uni-stuttgart.de> Nicholas Kottenstette wrote: >Hey guys, > >I'm still a bit confused as to whether I need to install Numeric or not. > >Following the building scipy routines for unix >http://www.scipy.org/documentation/buildscipy.txt >I went and built scipy which did not generate any of the linear algebra >routines. ie. unable to import LinearAlgebra > >So looking at the Downloads documentation, I saw that Numeric should be >installed. Looking at the old Numeric documentation this would >definetly give me the linear algebra interface, which I did and it works. > >Is this a correct conclusion: >1. 
In order to use scipy and utilize linear algebra features such as the >ability to calculate a matrix inverse and perform a matrixmultiply I >need to: >a) first build and install Numeric, I used version 24.2 from >http://sourceforge.net/project/showfiles.php?group_id=1369&package_id=1351 >b) then build and install scipy as described in >http://www.scipy.org/documentation/buildscipy.txt > >Looking at Travis's documentation on NumPy p. 14 and p. 15 leaves me >even further confused. This tends to imply that I should build and >install numpy instead of Numeric in order to get the linalg package. >But is NumPy already part of scipy? > >Sincerely, > >Nicholas Kottenstette > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > Please follow the instructions available via http://new.scipy.org/Wiki/Installing_SciPy Nils From robert.kern at gmail.com Wed Jan 25 11:24:59 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 25 Jan 2006 10:24:59 -0600 Subject: [SciPy-user] Monte Carlo package In-Reply-To: <91cf711d0601250708u49fd339bt@mail.gmail.com> References: <43D655C3.8030109@ftw.at> <43D65625.4070707@gmail.com> <91cf711d0601250708u49fd339bt@mail.gmail.com> Message-ID: <43D7A65B.9090309@gmail.com> David Huard wrote: > It builds fine on linux Ubuntu and the example you gave in the mail > works fine, however, I'm not sure I understand your question about rand(). > Here is what I get : > >>>> from scipy.sandbox.montecarlo import * >>>> rand() > NameError: name 'rand' is not defined I am referring to the POSIX rand() function in C, which the current version of the montecarlo package uses to seed a faster generator. > If I may ask a question, is there a routine in scipy to generate random > samples from a continuous distribution ? Which distribution? numpy.random has plenty of standard continuous distribution generators. scipy.stats.distributions defines a whole slew of distribution objects, each of which has an .rvs() method which samples from the given distribution. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From david.huard at gmail.com Wed Jan 25 14:07:41 2006 From: david.huard at gmail.com (David Huard) Date: Wed, 25 Jan 2006 14:07:41 -0500 Subject: [SciPy-user] Monte Carlo package In-Reply-To: <43D7A65B.9090309@gmail.com> References: <43D655C3.8030109@ftw.at> <43D65625.4070707@gmail.com> <91cf711d0601250708u49fd339bt@mail.gmail.com> <43D7A65B.9090309@gmail.com> Message-ID: <91cf711d0601251107y1b7d96fdr@mail.gmail.com> I mean an arbitrary user-defined distribution. I want to generate samples using a Gibbs sampler, and one of the conditional distribution has a weird shape involving a sum of logs... Anyway, there is no chance that I can find an analytical expression for this cdf and I wondered if there was a routine somewhere that would let me define the function and would return samples drawn from this distribution. I thought about that since the intsampler of montecarlo does something similar for discrete distributions. I realize that if it was so simple, we wouldn't need MCMC algorithms... Anyway, I think I'll try to tweak PyMC to mix Gibbs sampling with Metropolis jumps. I guess that's possible, isn't it ? 
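For the awkward conditional, a single Metropolis update is only a few lines. This is a generic sketch, assuming a user-supplied unnormalized log-density logp for that conditional; it is not tied to any particular PyMC interface:

import numpy as np

def metropolis_step(x, logp, scale=0.1):
    """One random-walk Metropolis update for a scalar parameter."""
    proposal = x + scale * np.random.standard_normal()
    if np.log(np.random.uniform()) < logp(proposal) - logp(x):
        return proposal        # accept
    return x                   # reject and keep the current value
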
Thanks, David 2006/1/25, Robert Kern : > > David Huard wrote: > > It builds fine on linux Ubuntu and the example you gave in the mail > > works fine, however, I'm not sure I understand your question about > rand(). > > Here is what I get : > > > >>>> from scipy.sandbox.montecarlo import * > >>>> rand() > > NameError: name 'rand' is not defined > > I am referring to the POSIX rand() function in C, which the current > version of > the montecarlo package uses to seed a faster generator. > > > If I may ask a question, is there a routine in scipy to generate random > > samples from a continuous distribution ? > > Which distribution? numpy.random has plenty of standard continuous > distribution > generators. scipy.stats.distributions defines a whole slew of distribution > objects, each of which has an .rvs() method which samples from the given > distribution. > > -- > Robert Kern > robert.kern at gmail.com > > "In the fields of hell where the grass grows high > Are the graves of dreams allowed to die." > -- Richard Harter > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Wed Jan 25 14:38:49 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 25 Jan 2006 13:38:49 -0600 Subject: [SciPy-user] Monte Carlo package In-Reply-To: <91cf711d0601251107y1b7d96fdr@mail.gmail.com> References: <43D655C3.8030109@ftw.at> <43D65625.4070707@gmail.com> <91cf711d0601250708u49fd339bt@mail.gmail.com> <43D7A65B.9090309@gmail.com> <91cf711d0601251107y1b7d96fdr@mail.gmail.com> Message-ID: <43D7D3C9.30804@gmail.com> David Huard wrote: > I mean an arbitrary user-defined distribution. I want to generate > samples using a Gibbs sampler, and one of the conditional distribution > has a weird shape involving a sum of logs... Anyway, there is no chance > that I can find an analytical expression for this cdf and I wondered if > there was a routine somewhere that would let me define the function and > would return samples drawn from this distribution. I thought about that > since the intsampler of montecarlo does something similar for discrete > distributions. Well, Gibbs sampling is only useful when the conditional distributions can be sampled quickly. Otherwise, I would recommend using a full Metropolis-Hastings algorithm with Gibbs-type jumps for the conditionals that you can sample quickly. Now, there are semi-black-box algorithms for sampling nearly-arbitrary univariate distributions, and I do intend to get around to implementing them for scipy someday. For your problem, I'm not sure they would be more useful than simply doing M-H since the parameters of your conditional will be changing each time. _Automatic Nonuniform Random Variate Generation_ is an excellent book which covers these algorithms: http://statistik.wu-wien.ac.at/projects/arvag/monograph/index.html > I realize that if it was so simple, we wouldn't need MCMC algorithms... > Anyway, I think I'll try to tweak PyMC to mix Gibbs sampling with > Metropolis jumps. I guess that's possible, isn't it ? Gibbs sampling is simply a special case of M-H sampling with an acceptance of 1, so yes, I think so. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." 
-- Richard Harter From oliphant at ee.byu.edu Wed Jan 25 14:44:15 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 25 Jan 2006 12:44:15 -0700 Subject: [SciPy-user] Monte Carlo package In-Reply-To: <91cf711d0601251107y1b7d96fdr@mail.gmail.com> References: <43D655C3.8030109@ftw.at> <43D65625.4070707@gmail.com> <91cf711d0601250708u49fd339bt@mail.gmail.com> <43D7A65B.9090309@gmail.com> <91cf711d0601251107y1b7d96fdr@mail.gmail.com> Message-ID: <43D7D50F.70404@ee.byu.edu> David Huard wrote: > I mean an arbitrary user-defined distribution. I want to generate > samples using a Gibbs sampler, and one of the conditional distribution > has a weird shape involving a sum of logs... Anyway, there is no > chance that I can find an analytical expression for this cdf and I > wondered if there was a routine somewhere that would let me define the > function and would return samples drawn from this distribution. I > thought about that since the intsampler of montecarlo does something > similar for discrete distributions. > Well in SciPy stats, there is some (limited) support for this. If you can define the pdf, then it will first numerically integrate that to get the CDF and then invert that to generate samples from the distribution. It will be slow and the convergence of the two piggy-backed numerical solutions might give you headaches, but... You might try using the rejection algorithm. There is no simple interface to such a thing though. It might be a good thing to add (i.e. specify a known density and a constant so that cg(x) bounds and then your pdf and then have it generate samples using the rejection method. -Travis From pearu at scipy.org Wed Jan 25 14:16:58 2006 From: pearu at scipy.org (Pearu Peterson) Date: Wed, 25 Jan 2006 13:16:58 -0600 (CST) Subject: [SciPy-user] fftpack test failures In-Reply-To: <43D680DE.80407@ieee.org> References: <41717FD0-92EA-4537-A8E0-1E73273C0461@nrl.navy.mil> <43D680DE.80407@ieee.org> Message-ID: On Tue, 24 Jan 2006, Travis Oliphant wrote: > This is an f2py problem. Specifically, the thing I recently changed to > fix a problem with dgemm is breaking these functions. So, Pearu, I need > help on this one. Apparently sometimes check_and_fix_dimensions shoul > be swapping the dimensions and sometimes not. This issue is now fixed in svn. Pearu From oliphant at ee.byu.edu Wed Jan 25 15:57:28 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 25 Jan 2006 13:57:28 -0700 Subject: [SciPy-user] fftpack test failures In-Reply-To: References: <41717FD0-92EA-4537-A8E0-1E73273C0461@nrl.navy.mil> <43D680DE.80407@ieee.org> Message-ID: <43D7E638.5040402@ee.byu.edu> Pearu Peterson wrote: >On Tue, 24 Jan 2006, Travis Oliphant wrote: > > > >>This is an f2py problem. Specifically, the thing I recently changed to >>fix a problem with dgemm is breaking these functions. So, Pearu, I need >>help on this one. Apparently sometimes check_and_fix_dimensions shoul >>be swapping the dimensions and sometimes not. >> >> > >This issue is now fixed in svn. > >Pearu > > Pearu is a genius!! Thank you very much.... 
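Picking up the rejection-method suggestion from the Monte Carlo thread above: a minimal sketch, assuming the target density f is known up to a constant and the user has chosen a proposal g (sampler g_rvs, density g_pdf) and a constant c with f(x) <= c * g_pdf(x):

import numpy as np

def rejection_sample(f, g_rvs, g_pdf, c, n):
    """Draw n samples from f by rejection, using proposal g and bound c."""
    out = []
    while len(out) < n:
        x = g_rvs()
        if np.random.uniform() * c * g_pdf(x) < f(x):
            out.append(x)
    return np.array(out)
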
-Travis From david.huard at gmail.com Wed Jan 25 17:26:09 2006 From: david.huard at gmail.com (David Huard) Date: Wed, 25 Jan 2006 17:26:09 -0500 Subject: [SciPy-user] Monte Carlo package In-Reply-To: <43D7D50F.70404@ee.byu.edu> References: <43D655C3.8030109@ftw.at> <43D65625.4070707@gmail.com> <91cf711d0601250708u49fd339bt@mail.gmail.com> <43D7A65B.9090309@gmail.com> <91cf711d0601251107y1b7d96fdr@mail.gmail.com> <43D7D50F.70404@ee.byu.edu> Message-ID: <91cf711d0601251426i20854209x@mail.gmail.com> Thanks for the advice, Indeed, the parameters of the distribution will change for each sample, so the numerical integration and inversion is probably not a good choice. I think I'll use PyMC mixed with Gibbs sampling using the stats.rvs routines. I'll let you know how it goes. By the way, I'd like to thank you for all the work you're putting into scipy and numpy. The gaussian_kde class along with PyMC make Bayesian analyses a breeze. It's easy to be productive when you work with such good tools. David 2006/1/25, Travis Oliphant : > > David Huard wrote: > > > I mean an arbitrary user-defined distribution. I want to generate > > samples using a Gibbs sampler, and one of the conditional distribution > > has a weird shape involving a sum of logs... Anyway, there is no > > chance that I can find an analytical expression for this cdf and I > > wondered if there was a routine somewhere that would let me define the > > function and would return samples drawn from this distribution. I > > thought about that since the intsampler of montecarlo does something > > similar for discrete distributions. > > > Well in SciPy stats, there is some (limited) support for this. If you > can define the pdf, then it will first numerically integrate that to get > the CDF and then invert that to generate samples from the distribution. > It will be slow and the convergence of the two piggy-backed numerical > solutions might give you headaches, but... > > You might try using the rejection algorithm. There is no simple > interface to such a thing though. It might be a good thing to add (i.e. > specify a known density and a constant so that cg(x) bounds and then > your pdf and then have it generate samples using the rejection method. > > -Travis > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Wed Jan 25 17:34:53 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 25 Jan 2006 16:34:53 -0600 Subject: [SciPy-user] Monte Carlo package In-Reply-To: <91cf711d0601251426i20854209x@mail.gmail.com> References: <43D655C3.8030109@ftw.at> <43D65625.4070707@gmail.com> <91cf711d0601250708u49fd339bt@mail.gmail.com> <43D7A65B.9090309@gmail.com> <91cf711d0601251107y1b7d96fdr@mail.gmail.com> <43D7D50F.70404@ee.byu.edu> <91cf711d0601251426i20854209x@mail.gmail.com> Message-ID: <43D7FD0D.6090500@gmail.com> David Huard wrote: > By the way, I'd like to thank you for all the work you're putting into > scipy and numpy. You're welcome. :-) > The gaussian_kde class along with PyMC make Bayesian > analyses a breeze. It's easy to be productive when you work with such > good tools. Are you just trying to sample from the KDE? You can do it *much* easier and faster than MCMC using the .resample() method. 
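A short sketch of the gaussian_kde workflow being discussed: build the kernel density estimate from MCMC draws, evaluate it on a grid, and resample from it directly. The data below are a stand-in, and the code assumes a present-day scipy.stats rather than the 2006 builds in this thread.

-----------------
import numpy as np
from scipy.stats import gaussian_kde

# Stand-in for draws coming out of an MCMC run.
posterior_draws = np.random.normal(loc=1.0, scale=0.3, size=2000)

kde = gaussian_kde(posterior_draws)   # kernel density estimate of the posterior
grid = np.linspace(0.0, 2.0, 101)
density = kde(grid)                   # evaluate the estimated pdf on a grid
new_draws = kde.resample(size=500)    # draw fresh samples directly from the KDE
-----------------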
-- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From nkottens at nd.edu Wed Jan 25 18:29:13 2006 From: nkottens at nd.edu (Nicholas Kottenstette) Date: Wed, 25 Jan 2006 18:29:13 -0500 Subject: [SciPy-user] building scipy In-Reply-To: <43D7A52E.5060701@mecha.uni-stuttgart.de> References: <43D79F03.70609@nd.edu> <43D7A52E.5060701@mecha.uni-stuttgart.de> Message-ID: <43D809C9.6040302@nd.edu> Nils Wagner wrote: >Nicholas Kottenstette wrote: > > >>Hey guys, >> >>I'm still a bit confused as to whether I need to install Numeric or not. >> >>Following the building scipy routines for unix >>http://www.scipy.org/documentation/buildscipy.txt >>I went and built scipy which did not generate any of the linear algebra >>routines. ie. unable to import LinearAlgebra >> >>So looking at the Downloads documentation, I saw that Numeric should be >>installed. Looking at the old Numeric documentation this would >>definetly give me the linear algebra interface, which I did and it works. >> >>Is this a correct conclusion: >>1. In order to use scipy and utilize linear algebra features such as the >>ability to calculate a matrix inverse and perform a matrixmultiply I >>need to: >>a) first build and install Numeric, I used version 24.2 from >>http://sourceforge.net/project/showfiles.php?group_id=1369&package_id=1351 >>b) then build and install scipy as described in >>http://www.scipy.org/documentation/buildscipy.txt >> >>Looking at Travis's documentation on NumPy p. 14 and p. 15 leaves me >>even further confused. This tends to imply that I should build and >>install numpy instead of Numeric in order to get the linalg package. >>But is NumPy already part of scipy? >> >>Sincerely, >> >>Nicholas Kottenstette >> >>_______________________________________________ >>SciPy-user mailing list >>SciPy-user at scipy.net >>http://www.scipy.net/mailman/listinfo/scipy-user >> >> >> > > >Please follow the instructions available via > >http://new.scipy.org/Wiki/Installing_SciPy > >Nils > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > Nils, Thanks for the reply. This clarifies things a bit. Reading the wiki: http://pong.tamu.edu/tiki/tiki-view_blog_post.php?blogId=6&postId=97 which has me working from the repository for numpy and scipy what is the correct way to reflect updates as they occur? ie. cd numpy svn up python setup.py clean --all python setup.py install #It seems to me in order to get this stuff totally removed #from my python libraries I actually need to #remove /lib/python2.4/site-packages/numpy in order #to eliminate any version of previously installed stuff Sincerely, Nicholas From david.huard at gmail.com Thu Jan 26 10:18:26 2006 From: david.huard at gmail.com (David Huard) Date: Thu, 26 Jan 2006 10:18:26 -0500 Subject: [SciPy-user] Monte Carlo package In-Reply-To: <43D7FD0D.6090500@gmail.com> References: <43D655C3.8030109@ftw.at> <43D65625.4070707@gmail.com> <91cf711d0601250708u49fd339bt@mail.gmail.com> <43D7A65B.9090309@gmail.com> <91cf711d0601251107y1b7d96fdr@mail.gmail.com> <43D7D50F.70404@ee.byu.edu> <91cf711d0601251426i20854209x@mail.gmail.com> <43D7FD0D.6090500@gmail.com> Message-ID: <91cf711d0601260718o426634b4q@mail.gmail.com> No, no, i'd be glad if it was that simple ! I'm using PyMC to generate samples from a posterior. 
I then use gaussian_kde to build the pdf, and if need be, resample from it to evaluate the uncertainty on predictions using the resample method. The next step is to use a Gibbs sampler instead of M-H to estimate the posterior pdf of the parameters of a peak over threshold problem. Out of the three parameters, two have a Gamma conditional probability, and the third has a non-descript shape. Hence, the solution proposed in "Bayesian Data Analysis" is to draw the first two parameters from a Gamma and make a M-H jump on the third. In PyMC, if I'm correct, that would imply defining the two first parameters as "nodes", and the third one as a "parameter" in the MetropolisHastings class. I would "compute" the nodes by drawing them using gamma.rvs, and then compute the likelihood for the third parameter and let PyMC take care of the rest. If it works, (and it should!), I'll let Mr. Fonnesbeck know about it. David 2006/1/25, Robert Kern : > > David Huard wrote: > > By the way, I'd like to thank you for all the work you're putting into > > scipy and numpy. > > You're welcome. :-) > > > The gaussian_kde class along with PyMC make Bayesian > > analyses a breeze. It's easy to be productive when you work with such > > good tools. > > Are you just trying to sample from the KDE? You can do it *much* easier > and > faster than MCMC using the .resample() method. > > -- > Robert Kern > robert.kern at gmail.com > > "In the fields of hell where the grass grows high > Are the graves of dreams allowed to die." > -- Richard Harter > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.abreu at iac.es Fri Jan 27 06:30:19 2006 From: david.abreu at iac.es (David Abreu Rodriguez) Date: Fri, 27 Jan 2006 11:30:19 +0000 Subject: [SciPy-user] percentile Message-ID: <43DA044B.9010303@iac.es> I do not understand this result: >>> scipy.stats.scoreatpercentile([1,2,3,4],50) 2.0750000000000002 >>> scipy.stats.percentileofscore([1,2,3,4],2.5) 50.0 I suppose that percentileofscore is the inverse function of scoreatpercentile. what is wrong? thanks From ckkart at hoc.net Fri Jan 27 08:04:47 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Fri, 27 Jan 2006 14:04:47 +0100 Subject: [SciPy-user] scipy 0.4.4 problems with converters.blitz Message-ID: <43DA1A6F.3050103@hoc.net> Hi, I've got problems getting inline code to work after having updated from 0.3.2 to 0.4.4. 
Automatic argument conversion with converters.blitz fails (try the array3d.py example) with this error message: In file included from /usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/compiler.h:31, from /usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/blitz.h:47, from /usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/array-impl.h:41, from /usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/array.h:32, from /home/ck/.python23_compiled/sc_9824b12a96792c10b5fdb725f9caa3c414.cpp:5: /usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/bzconfig.h:39:32: blitz/gnu/bzconfig.h: No such file or directory In file included from /usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/blitz.h:47, from /usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/array-impl.h:41, from /usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/array.h:32, from /home/ck/.python23_compiled/sc_9824b12a96792c10b5fdb725f9caa3c414.cpp:5: /usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/compiler.h:38:6: #error In : A working template implementation is required by Blitz++ (you may need to rerun the compiler/bzconfig script) I've tried to install Blitz 0.9, so that at least weave can find the header file "gnu/bzconfig.h" but it didn't really help. Any ideas what is wrong? Regards, Christian From Fernando.Perez at colorado.edu Fri Jan 27 11:42:54 2006 From: Fernando.Perez at colorado.edu (Fernando.Perez at colorado.edu) Date: Fri, 27 Jan 2006 09:42:54 -0700 Subject: [SciPy-user] scipy 0.4.4 problems with converters.blitz In-Reply-To: <43DA1A6F.3050103@hoc.net> References: <43DA1A6F.3050103@hoc.net> Message-ID: <1138380174.43da4d8ed9ae6@webmail.colorado.edu> Quoting Christian Kristukat : > Hi, > I've got problems getting inline code to work after having updated from 0.3.2 > to > 0.4.4. Automatic argument conversion with converters.blitz fails (try the > array3d.py example) with this error message: > > In file included from > /usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/compiler.h:31, > from > /usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/blitz.h:47, > from > /usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/array-impl.h:41, > from > /usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/array.h:32, > from > /home/ck/.python23_compiled/sc_9824b12a96792c10b5fdb725f9caa3c414.cpp:5: > /usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/bzconfig.h:39:32: > blitz/gnu/bzconfig.h: No such file or directory > In file included from > /usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/blitz.h:47, > from > /usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/array-impl.h:41, > from > /usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/array.h:32, > from > /home/ck/.python23_compiled/sc_9824b12a96792c10b5fdb725f9caa3c414.cpp:5: > /usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/compiler.h:38:6: > #error > In : A working template implementation is required by Blitz++ > (you may need to rerun the compiler/bzconfig script) > > I've tried to install Blitz 0.9, so that at least weave can find the header > file > "gnu/bzconfig.h" but it didn't really help. > > Any ideas what is wrong? What _exact_ gcc version are you using? Not only are there issues with blitz and weave, there may also be gcc version problems here. gcc4 has a neverending stream of major problems. 
Cheers, f From ckkart at hoc.net Fri Jan 27 12:10:23 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Fri, 27 Jan 2006 18:10:23 +0100 Subject: [SciPy-user] scipy 0.4.4 problems with converters.blitz In-Reply-To: <1138380174.43da4d8ed9ae6@webmail.colorado.edu> References: <43DA1A6F.3050103@hoc.net> <1138380174.43da4d8ed9ae6@webmail.colorado.edu> Message-ID: <43DA53FF.7090909@hoc.net> Fernando.Perez at colorado.edu wrote: > Quoting Christian Kristukat : > > >>Hi, >>I've got problems getting inline code to work after having updated from 0.3.2 >>to >>0.4.4. Automatic argument conversion with converters.blitz fails (try the >>array3d.py example) with this error message: >> >>In file included from >>/usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/compiler.h:31, >> from >>/usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/blitz.h:47, >> from >>/usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/array-impl.h:41, >> from >>/usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/array.h:32, >> from >>/home/ck/.python23_compiled/sc_9824b12a96792c10b5fdb725f9caa3c414.cpp:5: >>/usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/bzconfig.h:39:32: >>blitz/gnu/bzconfig.h: No such file or directory >>In file included from >>/usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/blitz.h:47, >> from >>/usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/array-impl.h:41, >> from >>/usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/array.h:32, >> from >>/home/ck/.python23_compiled/sc_9824b12a96792c10b5fdb725f9caa3c414.cpp:5: >>/usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/compiler.h:38:6: >>#error >>In : A working template implementation is required by Blitz++ >>(you may need to rerun the compiler/bzconfig script) >> >>I've tried to install Blitz 0.9, so that at least weave can find the header >>file >> "gnu/bzconfig.h" but it didn't really help. >> >>Any ideas what is wrong? > > > What _exact_ gcc version are you using? Not only are there issues with blitz > and weave, there may also be gcc version problems here. gcc4 has a neverending > stream of major problems. > gcc 3.3.4 Regards, Christian From Fernando.Perez at colorado.edu Fri Jan 27 12:32:44 2006 From: Fernando.Perez at colorado.edu (Fernando Perez) Date: Fri, 27 Jan 2006 10:32:44 -0700 Subject: [SciPy-user] scipy 0.4.4 problems with converters.blitz In-Reply-To: <43DA53FF.7090909@hoc.net> References: <43DA1A6F.3050103@hoc.net> <1138380174.43da4d8ed9ae6@webmail.colorado.edu> <43DA53FF.7090909@hoc.net> Message-ID: <43DA593C.3050503@colorado.edu> Christian Kristukat wrote: >>>Any ideas what is wrong? >> >> >>What _exact_ gcc version are you using? Not only are there issues with blitz >>and weave, there may also be gcc version problems here. gcc4 has a neverending >>stream of major problems. >> > > > gcc 3.3.4 Mmh. Then, no clue from my part right now, I'm afraid. Sorry, f From oliphant.travis at ieee.org Fri Jan 27 13:35:08 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Fri, 27 Jan 2006 11:35:08 -0700 Subject: [SciPy-user] scipy 0.4.4 problems with converters.blitz In-Reply-To: <1138380174.43da4d8ed9ae6@webmail.colorado.edu> References: <43DA1A6F.3050103@hoc.net> <1138380174.43da4d8ed9ae6@webmail.colorado.edu> Message-ID: Fernando.Perez at colorado.edu wrote: > Quoting Christian Kristukat : > > >>Hi, >>I've got problems getting inline code to work after having updated from 0.3.2 >>to >>0.4.4. 
Automatic argument conversion with converters.blitz fails (try the >>array3d.py example) with this error message: >> >>In file included from >>/usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/compiler.h:31, >> from >>/usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/blitz.h:47, >> from >>/usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/array-impl.h:41, >> from >>/usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/array.h:32, >> from >>/home/ck/.python23_compiled/sc_9824b12a96792c10b5fdb725f9caa3c414.cpp:5: >>/usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/bzconfig.h:39:32: >>blitz/gnu/bzconfig.h: No such file or directory >>In file included from >>/usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/blitz.h:47, >> from >>/usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/array-impl.h:41, >> from >>/usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/array.h:32, >> from >>/home/ck/.python23_compiled/sc_9824b12a96792c10b5fdb725f9caa3c414.cpp:5: >>/usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/compiler.h:38:6: >>#error >>In : A working template implementation is required by Blitz++ >>(you may need to rerun the compiler/bzconfig script) >> >>I've tried to install Blitz 0.9, so that at least weave can find the header >>file >> "gnu/bzconfig.h" but it didn't really help. >> >>Any ideas what is wrong? Try moving the gnu/bzconfig.h file from the installation directory to the weave directory /usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/gnu Then try again. We may be missing some files when the upgrade in blitz occurred. -Travis From pearu at scipy.org Fri Jan 27 13:19:02 2006 From: pearu at scipy.org (Pearu Peterson) Date: Fri, 27 Jan 2006 12:19:02 -0600 (CST) Subject: [SciPy-user] scipy 0.4.4 problems with converters.blitz In-Reply-To: References: <43DA1A6F.3050103@hoc.net> <1138380174.43da4d8ed9ae6@webmail.colorado.edu> Message-ID: On Fri, 27 Jan 2006, Travis Oliphant wrote: >>> /home/ck/.python23_compiled/sc_9824b12a96792c10b5fdb725f9caa3c414.cpp:5: >>> /usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/compiler.h:38:6: >>> #error >>> In : A working template implementation is required by Blitz++ >>> (you may need to rerun the compiler/bzconfig script) >>> >>> I've tried to install Blitz 0.9, so that at least weave can find the header >>> file >>> "gnu/bzconfig.h" but it didn't really help. >>> >>> Any ideas what is wrong? > > Try moving the gnu/bzconfig.h file from the installation directory to > the weave directory > > /usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/gnu > > Then try again. > > We may be missing some files when the upgrade in blitz occurred. Indeed, scipy/Lib/weave/blitz/blitz/gnu is an empty directory in scipy SVN. Pearu From cookedm at physics.mcmaster.ca Fri Jan 27 16:08:33 2006 From: cookedm at physics.mcmaster.ca (David M. 
Cooke) Date: Fri, 27 Jan 2006 16:08:33 -0500 Subject: [SciPy-user] scipy 0.4.4 problems with converters.blitz In-Reply-To: (Pearu Peterson's message of "Fri, 27 Jan 2006 12:19:02 -0600 (CST)") References: <43DA1A6F.3050103@hoc.net> <1138380174.43da4d8ed9ae6@webmail.colorado.edu> Message-ID: Pearu Peterson writes: > On Fri, 27 Jan 2006, Travis Oliphant wrote: > >>>> /home/ck/.python23_compiled/sc_9824b12a96792c10b5fdb725f9caa3c414.cpp:5: >>>> /usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/compiler.h:38:6: >>>> #error >>>> In : A working template implementation is required by Blitz++ >>>> (you may need to rerun the compiler/bzconfig script) >>>> >>>> I've tried to install Blitz 0.9, so that at least weave can find the header >>>> file >>>> "gnu/bzconfig.h" but it didn't really help. >>>> >>>> Any ideas what is wrong? >> >> Try moving the gnu/bzconfig.h file from the installation directory to >> the weave directory >> >> /usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/gnu >> >> Then try again. >> >> We may be missing some files when the upgrade in blitz occurred. > > Indeed, scipy/Lib/weave/blitz/blitz/gnu is an empty directory in scipy > SVN. Fixed. I added the bzconfig.h that I had used when I updated it last. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From bgranger at scu.edu Sat Jan 28 15:28:41 2006 From: bgranger at scu.edu (Brian Granger) Date: Sat, 28 Jan 2006 12:28:41 -0800 Subject: [SciPy-user] Confusing BLAS/LAPACK situation on Mac OS X Message-ID: Hello all, Thanks so much for all the work that has been going on with scipy/numpy. I have some questions/confusions about BLAS/LAPACK on Mac OS X. Everyting seems to build find. Apple's Accelerate framwork is picked up by numpy/scipy. But when I do a scipy.test I see things like: **************************************************************** WARNING: cblas module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses fblas instead of cblas. **************************************************************** and **************************************************************** WARNING: clapack module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses flapack instead of clapack. **************************************************************** Thus, It looks like the Apple supplied cblas and clapack libraries are never used, even though they are found at build time. What is the current status of using Apple's cblas and clapack? Can I use them from within scipy/numpy? How? Thanks Brian Brian Granger, Ph.D. Assistant Professor of Physics Santa Clara University bgranger at scu.edu Phone: 408-551-1891 Fax: 408-554-6965 This message scanned for viruses and SPAM at SCU (MGW2) From robert.kern at gmail.com Sat Jan 28 16:06:02 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 28 Jan 2006 15:06:02 -0600 Subject: [SciPy-user] Confusing BLAS/LAPACK situation on Mac OS X In-Reply-To: References: Message-ID: <43DBDCBA.3010408@gmail.com> Brian Granger wrote: > Thus, It looks like the Apple supplied cblas and clapack libraries are never used, even though they are found at build time. What is the current status of using Apple's cblas and clapack? Can I use them from within scipy/numpy? How? 
AFAICT, Accelerate.framework does not include CLAPACK versions of their LAPACK functions, although they do include CBLAS versions of their BLAS functions. I've never looked into how scipy determines the existince of the CBLAS routines because I've always used ATLAS since Accelerate.framework does not contain the CLAPACK versions. However, it looks like the culprit is line 82 in scipy/Lib/linalg/setup.py. ... 82 if name[0]=='c' and atlas_version is None and newer(__file__,target): f = open(target,'w') f.write('python module '+name+'\n') f.write('usercode void empty_module(void) {}\n') ... I don't think atlas_version gets set when the ATLAS is Accelerate.framework. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From strawman at astraw.com Sat Jan 28 16:18:29 2006 From: strawman at astraw.com (Andrew Straw) Date: Sat, 28 Jan 2006 13:18:29 -0800 Subject: [SciPy-user] a virtual work party: help migrate scipy.org to new wiki Message-ID: <43DBDFA5.7010404@astraw.com> As many of you know, a number of us have been busy migrating the scipy.org website to a new, more user-friendly wiki system. The new wiki is up-and-running, is more-up-to-date, and prettier-to-look at than the old site. We are working to make scipy.org a portal website for all scientific software written in Python, and not just for the packages numpy and scipy, which happen to have their natural homes there. This wiki structure allows peer-review-by-wiki, so please do not feel bashful about updating any aspect of the wiki to suit your opinions. The current address of the new site is http://new.scipy.org/Wiki Once we go "live", we'll point http://scipy.org to this wiki. Before we make the switch, however, there are a few things that need to be done and a few things that should be done. This is a request for two things: 1) Review the page http://new.scipy.org/Wiki/MigratingFromPlone and make sure you can live with what it says. If you can't edit it. 2) Please undertake any of the tasks listed at that page. Update the page that you're working on the task, and again when you're done. 3) (Optional) Bask in the glory of our shiny new website which will allow NumPy/ SciPy/ Matplotlib/ IPython/ whateverelse to rule the scientific computing world! I'd like to ask that we try and complete these tasks as quickly as possible. I know I'm asking a busy group of people to pitch in, but I hope you'll agree the results will be worth it. Thank you, Andrew Straw PS Sorry for the cross postings. I thought it would be important to get as many hands involved as possible. From bgranger at scu.edu Sat Jan 28 16:23:42 2006 From: bgranger at scu.edu (Brian Granger) Date: Sat, 28 Jan 2006 13:23:42 -0800 Subject: [SciPy-user] Confusing BLAS/LAPACK situation on Mac OS X Message-ID: On Jan 28, 2006, at 1:06 PM, Robert Kern wrote: Brian Granger wrote: Thus, It looks like the Apple supplied cblas and clapack libraries are never used, even though they are found at build time. What is the current status of using Apple's cblas and clapack? Can I use them from within scipy/numpy? How? AFAICT, Accelerate.framework does not include CLAPACK versions of their LAPACK functions, although they do include CBLAS versions of their BLAS functions. I've never looked into how scipy determines the existince of the CBLAS routines because I've always used ATLAS since Accelerate.framework does not contain the CLAPACK versions. 
What about: /System/Library/Frameworks/Accelerate.framework/Frameworks/vecLib.framework/ Versions/Current/libLAPACK.dylib and Headers/clapack.h Also, Apple's docs on Accelerate claim that they include clapack. Are these incomplete/unusable for some reason? It seems silly to build ATLAS if it is already there. However, it looks like the culprit is line 82 in scipy/Lib/linalg/setup.py. ... 82 if name[0]=='c' and atlas_version is None and newer(__file__,target): f = open(target,'w') f.write('python module '+name+'\n') f.write('usercode void empty_module(void) {}\n') ... I don't think atlas_version gets set when the ATLAS is Accelerate.framework. Maybe I will play around with this, but I am not very familiar with how libalg is setup, so I don't know if I will get anywhere. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter _______________________________________________ SciPy-user mailing list SciPy-user at scipy.net http://www.scipy.net/mailman/listinfo/scipy-user Brian Granger, Ph.D. Assistant Professor of Physics Santa Clara University bgranger at scu.edu Phone: 408-551-1891 Fax: 408-554-6965 This message scanned for viruses and SPAM by GWGuardian at SCU (MGW1) From robert.kern at gmail.com Sat Jan 28 16:43:48 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 28 Jan 2006 15:43:48 -0600 Subject: [SciPy-user] Confusing BLAS/LAPACK situation on Mac OS X In-Reply-To: References: Message-ID: <43DBE594.50301@gmail.com> Brian Granger wrote: > What about: > > /System/Library/Frameworks/Accelerate.framework/Frameworks/vecLib.framework/ > > Versions/Current/libLAPACK.dylib > > and > > Headers/clapack.h > > Also, Apple's docs on Accelerate claim that they include clapack. > > Are these incomplete/unusable for some reason? It seems silly to build ATLAS if it is already there. Look at the symbols in libBLAS.dylib and libLAPACK.dylib. You can see the CBLAS versions in libBLAS.dylib, but there are only the FORTRAN versions in libLAPACK.dylib. You are right that the docs claim (correctly) that Accelerate.framework contains CLAPACK. However, I goofed and referred to things incorrectly. CLAPACK is simply an implementation of FORTRAN LAPACK in C. It uses FORTRAN column-major arrays. The flapack module links against these. However, ATLAS also provides row-major versions of some(?) LAPACK functions to match the row-major CBLAS routines. The clapack module wraps these. These are not included in Accelerate.framework, although the row-major CBLAS versions are. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From byrnes at bu.edu Sat Jan 28 17:19:45 2006 From: byrnes at bu.edu (John Byrnes) Date: Sat, 28 Jan 2006 17:19:45 -0500 Subject: [SciPy-user] Linux Installation Instructions Message-ID: <20060128221945.GB6662@localhost.localdomain> Is anyone else unable to access the Linux install instructions at http://pong.tamu.edu/tiki/tiki-view_blog_post.php?blogId=6&postId=97 I get a 403 - Forbidden Page. Regards, John -- Well done is better than well said. -- Benjamin Franklin From strawman at astraw.com Sat Jan 28 17:53:49 2006 From: strawman at astraw.com (Andrew Straw) Date: Sat, 28 Jan 2006 14:53:49 -0800 Subject: [SciPy-user] cubic splines in scipy Message-ID: <43DBF5FD.9050308@astraw.com> Hi, (Warning: non math guru alert.) 
I'm looking to do 1D interpolation using cubic splines: http://mathworld.wolfram.com/CubicSpline.html That page has a 2D picture, but the math is for 1D. The important bit for me is that interpolated curve passes through the control points and the whole thing is continuously differentiable. I can see a few hints toward what I want to do in scipy: scipy.signal.cspline1d and various spline routines in scipy.interpolate. Unfortunately, cspline1d doesn't seem to be tied to anything I can use to interpolate with and the things in scipy.interpolate don't seem to be cubic splines -- I desire my interpolated trajectory to pass through the control points. Can anyone point me to anything in scipy that allows cubic spline interpolation, including the interpolated values passing through the control points? According to the MathWorld page cited above, I thought the attached code would work (not optimized for speed, obviously), but it doesn't interpolate well. I guess from this that scipy.signal.cspline1d isn't doing exactly what I think it's doing. Or perhaps my code is wrong? (For what it's worth, I need to interpolate discretely sampled data from a discrete-time simulation that I then want to use as forcing functions for use in a scipy ODE solver. So, my discretely sampled data contain no noise and I don't want to smooth them, I'm just looking for intermediate values that are continuously differentiable.) Thanks in advance for any help, Andrew -------------- next part -------------- A non-text attachment was scrubbed... Name: test_cubic.py Type: text/x-python Size: 1197 bytes Desc: not available URL: From ryanlists at gmail.com Sat Jan 28 17:54:18 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Sat, 28 Jan 2006 17:54:18 -0500 Subject: [SciPy-user] Linux Installation Instructions In-Reply-To: <20060128221945.GB6662@localhost.localdomain> References: <20060128221945.GB6662@localhost.localdomain> Message-ID: I can't access it either right now. On 1/28/06, John Byrnes wrote: > Is anyone else unable to access the Linux install instructions at > http://pong.tamu.edu/tiki/tiki-view_blog_post.php?blogId=6&postId=97 > > I get a 403 - Forbidden Page. > > Regards, > John > > -- > Well done is better than well said. > -- Benjamin Franklin > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From bgranger at scu.edu Sat Jan 28 17:54:40 2006 From: bgranger at scu.edu (Brian Granger) Date: Sat, 28 Jan 2006 14:54:40 -0800 Subject: [SciPy-user] Confusing BLAS/LAPACK situation on Mac OS X Message-ID: Brian Granger, Ph.D. Assistant Professor of Physics Santa Clara University bgranger at scu.edu Phone: 408-551-1891 Fax: 408-554-6965 >>> robert.kern at gmail.com 01/28/06 1:43 PM >>> Brian Granger wrote: > What about: > > /System/Library/Frameworks/Accelerate.framework/Frameworks/vecLib.framework/ > > Versions/Current/libLAPACK.dylib > > and > > Headers/clapack.h > > Also, Apple's docs on Accelerate claim that they include clapack. > > Are these incomplete/unusable for some reason? It seems silly to build ATLAS if it is already there. Look at the symbols in libBLAS.dylib and libLAPACK.dylib. You can see the CBLAS versions in libBLAS.dylib, but there are only the FORTRAN versions in libLAPACK.dylib. You are right that the docs claim (correctly) that Accelerate.framework contains CLAPACK. However, I goofed and referred to things incorrectly. CLAPACK is simply an implementation of FORTRAN LAPACK in C. 
It uses FORTRAN column-major arrays. The flapack module links against these. However, ATLAS also provides row-major versions of some(?) LAPACK functions to match the row-major CBLAS routines. The clapack module wraps these. These are not included in Accelerate.framework, although the row-major CBLAS versions are. I have been reading the header files in the Accelerate framework to get a better understanding of all this. Apple's implementation of BLAS is the usual CBLAS, which can work with matrices in either row or column major storage. The storage order is determined at runtime by an argument passed to the functions. Are these wrapped and available in scipy? If so, then where? The LAPACK implmentation, as you assert, includes only the fortran style functions that require columnn major arrays. So even though these are called "CLAPACK" by Apple, scipy refers to them as "FLAPACK" because they are not the row major versions that ATLAS exposes. Is this correct? Thanks Brian -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter _______________________________________________ SciPy-user mailing list SciPy-user at scipy.net http://www.scipy.net/mailman/listinfo/scipy-user This message scanned for viruses and SPAM at SCU (MGW2) From arnd.baecker at web.de Sat Jan 28 18:00:04 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Sun, 29 Jan 2006 00:00:04 +0100 (CET) Subject: [SciPy-user] Confusing BLAS/LAPACK situation on Mac OS X In-Reply-To: <43DBE594.50301@gmail.com> References: <43DBE594.50301@gmail.com> Message-ID: On Sat, 28 Jan 2006, Robert Kern wrote: > Brian Granger wrote: > > > What about: > > > > /System/Library/Frameworks/Accelerate.framework/Frameworks/vecLib.framework/ > > > > Versions/Current/libLAPACK.dylib > > > > and > > > > Headers/clapack.h > > > > Also, Apple's docs on Accelerate claim that they include clapack. > > > > Are these incomplete/unusable for some reason? > > It seems silly to build ATLAS if it is already there. It might also be a bad choice, as ATLAS can be slower than other optimized LAPACK routines, e.g. MKL or ACML. Do any benchmarks of `Accelerate` vs. ATLAS exist? > Look at the symbols in libBLAS.dylib and libLAPACK.dylib. You can see the CBLAS > versions in libBLAS.dylib, but there are only the FORTRAN versions in > libLAPACK.dylib. You are right that the docs claim (correctly) that > Accelerate.framework contains CLAPACK. However, I goofed and referred to things > incorrectly. CLAPACK is simply an implementation of FORTRAN LAPACK in C. It uses > FORTRAN column-major arrays. The flapack module links against these. However, > ATLAS also provides row-major versions of some(?) LAPACK functions to match the > row-major CBLAS routines. The clapack module wraps these. These are not included > in Accelerate.framework, although the row-major CBLAS versions are. If I understand things correctly, creating a matrix with (e.g) mat = zeros((Ny, Nx), "d") gives row-major (i.e. C style) ordering. Calling (for example) scipy.linalg.eig() will determine the ordering via `scipy.linalg.basic.get_lapack_funcs` and return the "Fortran code for leading array with column major order" (i.e. flapack) and in all other cases clapack is preferred over flapack. Now, clapack is not available, e.g. when ATLAS is replaced by Accelerate or MKL (presumably ACML as well), or some other Fortran only Lapack implementation. 
In such a case a copy of the original array will be made while calling `eig` on the level of f2py (at least that's my experience so far - Pearu, please correct me if this statement is too general). Therefore, depending on the available LAPACK implementation, a copy of the array will be made or not. I am not sure if this is optimal, or if one should start straight away with fortran type column major order. For Numeric I usually used the trick mat = transpose(zeros((Nx, Ny), "d")) - note the change order for Nx and Ny - and filled the matrix elements just via mat[ny,nx] = ... With numpy mat = zeros((Ny, Nx), fortran=1) should do the job (I have not tested this yet). If all the above is correct, then the solution I would use for myself is to set `fortran=1` for all arrays which will be used by some LAPACK routine. Then no unnecessary copies of (presumably large) arrays will take place on any machine and one could stop worrying about flapack vs. clapack ;-). Not sure if that is the solution for everyone - so I am happy to learn about any drawbacks.... Just my 2cent, Arnd From oliphant.travis at ieee.org Sat Jan 28 18:23:57 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sat, 28 Jan 2006 16:23:57 -0700 Subject: [SciPy-user] Confusing BLAS/LAPACK situation on Mac OS X In-Reply-To: References: <43DBE594.50301@gmail.com> Message-ID: <43DBFD0D.1060808@ieee.org> Arnd Baecker wrote: >With numpy > mat = zeros((Ny, Nx), fortran=1) >should do the job (I have not tested this yet). > >If all the above is correct, then the solution I would use >for myself is to set `fortran=1` for all arrays which >will be used by some LAPACK routine. >Then no unnecessary copies of (presumably large) arrays will take >place on any machine and one could stop worrying about flapack >vs. clapack ;-). > >Not sure if that is the solution for everyone - so I am happy >to learn about any drawbacks.... > > This is the main purpose of the Fortran-order arrays in NumPy --- optimize interfaces to Fortran-written packages. Right now, C-contiguous arrays still have a "special place" because several algorithms require C-contiguous arrays in order to work and will make copies of Fortran-order arrays as needed. There may still be some issues with these Fortran-order arrays especially regarding un-needed copying. With Numeric, f2py did an intelligent job of deciding whether or not to copy. Most of this is unneeded now because the FORTRAN flag on the NumPy array is kept up-to-date with the striding information on the NumPy array so that you just need to look at that flag to determine if the array is in Fortran-order or not. -Travis From bgranger at scu.edu Sat Jan 28 18:47:39 2006 From: bgranger at scu.edu (Brian Granger) Date: Sat, 28 Jan 2006 15:47:39 -0800 Subject: [SciPy-user] Confusing BLAS/LAPACK situation on Mac OS X Message-ID: Brian Granger, Ph.D. Assistant Professor of Physics Santa Clara University bgranger at scu.edu Phone: 408-551-1891 Fax: 408-554-6965 >>> oliphant.travis at ieee.org 01/28/06 3:23 PM >>> Arnd Baecker wrote: >With numpy > mat = zeros((Ny, Nx), fortran=1) >should do the job (I have not tested this yet). > >If all the above is correct, then the solution I would use >for myself is to set `fortran=1` for all arrays which >will be used by some LAPACK routine. >Then no unnecessary copies of (presumably large) arrays will take >place on any machine and one could stop worrying about flapack >vs. clapack ;-). 
> >Not sure if that is the solution for everyone - so I am happy >to learn about any drawbacks.... > > This is the main purpose of the Fortran-order arrays in NumPy --- optimize interfaces to Fortran-written packages. Right now, C-contiguous arrays still have a "special place" because several algorithms require C-contiguous arrays in order to work and will make copies of Fortran-order arrays as needed. There may still be some issues with these Fortran-order arrays especially regarding un-needed copying. With Numeric, f2py did an intelligent job of deciding whether or not to copy. Most of this is unneeded now because the FORTRAN flag on the NumPy array is kept up-to-date with the striding information on the NumPy array so that you just need to look at that flag to determine if the array is in Fortran-order or not. -Travis So it sounds like on Mac OS X, I should use fortran=1 in my numpy arrays if I will be calling things like linalg.eig. That way no copies are made when they are passed to the Accelerate framework's versions of LAPACK which take column major arrays. It is really nice to be able to pick the storage format like this. Brian _______________________________________________ SciPy-user mailing list SciPy-user at scipy.net http://www.scipy.net/mailman/listinfo/scipy-user This message scanned for viruses and SPAM at SCU (MGW2) From oliphant.travis at ieee.org Sat Jan 28 19:06:21 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sat, 28 Jan 2006 17:06:21 -0700 Subject: [SciPy-user] cubic splines in scipy In-Reply-To: <43DBF5FD.9050308@astraw.com> References: <43DBF5FD.9050308@astraw.com> Message-ID: <43DC06FD.9010709@ieee.org> Andrew Straw wrote: >Hi, > >(Warning: non math guru alert.) > >I'm looking to do 1D interpolation using cubic splines: >http://mathworld.wolfram.com/CubicSpline.html That page has a 2D >picture, but the math is for 1D. The important bit for me is that >interpolated curve passes through the control points and the whole thing >is continuously differentiable. > > There are two incarnations of splines in SciPy. The first is from fitpack. It is the traditional way to view cubic splines as it allows for control-points to be placed in arbitrary ways. The specialized cubic-spline algorithms in scipy.signal make use of speed enhancements that are possible when your control points are *equally-spaced* and you are willing to assume specific kinds of boundary conditions (Michael Unser has written extensively about this use of cubic splines and his papers form the basis for what is in scipy.signal). The big win is that by making these assumptions your matrix-to-be inverted is circulant and the inversion can be accomplished very quickly. >(For what it's worth, I need to interpolate discretely sampled data from >a discrete-time simulation that I then want to use as forcing functions >for use in a scipy ODE solver. So, my discretely sampled data contain no >noise and I don't want to smooth them, I'm just looking for intermediate >values that are continuously differentiable.) > > > Look in the old scipy tutorial for discussion on the spline functions. Both, the interpolate in fitpack and the B-spline approach are discussed. If you have equally-spaced points (it looks like you do), you can use the B-spline approach. Do not use the Mathworld discussion for this approach (it's related but the formulas there won't necessarily work). What to do instead? Well, the bsplines need some more helper functions to make what you want to do easy. 
Instead, use the fact that the acutal interpolating function is: y(x) = sum(c_j*beta_3(x/deltax - j)) where beta_3 is a cubic-spline interpolator function (a symmetric kernel) which scipy.signal.bspline(x/deltax-j,3) will evaluate for you, and cj are the (spline coefficients) returned from cspline1d(y). In order to get the edges to come out right you will need to extend the returned coefficients using mirror symmetric boundary conditions and include those extra knots in your sum near the edges. We really need a faster way to compute the interpolation function (besides evaluating beta_3(x/deltax-j) at every new x and j point and summing the results because most of them are zero). This would be a good addition and a quick-way to get 2-d interpolation working quickly. -Travis From robert.kern at gmail.com Sat Jan 28 19:15:09 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 28 Jan 2006 18:15:09 -0600 Subject: [SciPy-user] Confusing BLAS/LAPACK situation on Mac OS X In-Reply-To: References: <43DBE594.50301@gmail.com> Message-ID: <43DC090D.8030105@gmail.com> Arnd Baecker wrote: > It might also be a bad choice, as ATLAS can be slower than > other optimized LAPACK routines, e.g. MKL or ACML. > Do any benchmarks of `Accelerate` vs. ATLAS exist? The BLAS in Accelerate.framework *is* ATLAS. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From oliphant.travis at ieee.org Sat Jan 28 19:42:03 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sat, 28 Jan 2006 17:42:03 -0700 Subject: [SciPy-user] cubic splines in scipy In-Reply-To: <43DC06FD.9010709@ieee.org> References: <43DBF5FD.9050308@astraw.com> <43DC06FD.9010709@ieee.org> Message-ID: <43DC0F5B.4040105@ieee.org> Travis Oliphant wrote: >Instead, use the fact that the acutal interpolating function is: > >y(x) = sum(c_j*beta_3(x/deltax - j)) > >where beta_3 is a cubic-spline interpolator function (a symmetric >kernel) which scipy.signal.bspline(x/deltax-j,3) will evaluate for >you, and cj are the (spline coefficients) returned from cspline1d(y). > Incidentally, the scipy.signal.bspline function is probably not the most intelligently written code (and it's generic). It is a "vectorized" function so that it works like a ufunc. But, you may find something that works better. The "closed-form" expression for the bspline interpolator is beta_n(x) = Delta^(n+1){x**n u(x)} / n! where u(x) is the step-function that is 0 when u(x)<0 and Delta^(n+1){f(x)} is equivalent to Delta^n{Delta{f(x)}} and Delta{f(x)} == f(x+1/2)-f(x-1/2). Thus: beta_3(x) = [(x+2)**3 u(x+2) - 4(x+1)**3 u(x+1) +6 x**3 u(x) - 4(x-1)**3 u(x-1) + (x-2)**3 u(x-2)]/6 Breaking this up into intervals we can show: beta_3(x): |x| > 2 0 1 < |x| <= 2 [(2-|x|)**3] / 6 0 <= |x| <= 1 [(2-|x|)**3 - 4(1-|x|)**3] / 6 Proving this to yourself from the general formula takes a bit of calculation... So, for any given point we are trying to find the interpolated value for we need at most use the spline coefficients strictly within two knot-distances away from the current point. We should definitely write up an interpolation class using these splines and include it with the bspline material, so this kind of calculation doesn't have to be refigured all the time. 
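The description above translates fairly directly into code. The following is only a rough sketch of such a helper (the cspline1d_eval function posted in the next message is the polished version now in SVN): it evaluates y(x) = sum_j c_j beta_3((x - x0)/dx - j) using the piecewise kernel given above, keeps only the four knots within two spacings of each point, and extends the coefficients with a simple mirror-symmetric rule. Written against a present-day numpy/scipy.

-----------------
import numpy as np
from scipy.signal import cspline1d

def beta3(x):
    # Piecewise cubic B-spline kernel, exactly the intervals given above.
    ax = np.abs(x)
    out = np.zeros_like(ax)
    mid = ax <= 1
    edge = (ax > 1) & (ax <= 2)
    out[mid] = ((2 - ax[mid]) ** 3 - 4 * (1 - ax[mid]) ** 3) / 6.0
    out[edge] = (2 - ax[edge]) ** 3 / 6.0
    return out

def eval_cubic_spline(cj, newx, dx=1.0, x0=0.0):
    # y(x) = sum_j cj[j] * beta3((x - x0)/dx - j); only the four knots within
    # two spacings of x contribute, and the coefficients are extended with
    # mirror symmetry about both ends.
    n = len(cj)
    t = (np.asarray(newx, dtype=float) - x0) / dx
    result = np.zeros_like(t)
    lower = np.floor(t).astype(int) - 1
    for offset in range(4):
        j = lower + offset
        jm = np.abs(j)                                    # mirror about j = 0
        jm = np.where(jm > n - 1, 2 * (n - 1) - jm, jm)   # mirror about j = n - 1
        result += cj[jm] * beta3(t - j)
    return result

x = np.arange(10.0)
y = np.sin(x)
cj = cspline1d(y)
newy = eval_cubic_spline(cj, np.linspace(0, 9, 91))   # reproduces y at the knots
-----------------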
-Travis From oliphant.travis at ieee.org Sun Jan 29 02:44:02 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sun, 29 Jan 2006 00:44:02 -0700 Subject: [SciPy-user] Example of how to use B-splines for interpolation Message-ID: <43DC7242.5050301@ieee.org> This example needs the SVN version of scipy (or you need to get the cspline1d_eval function out of SVN): I sent a smaller image hoping it would make it to the list... Example; from numpy import r_, sin from scipy.signal import cspline1d, cspline1d_eval x = r_[0:10] dx = x[1]-x[0] newx = r_[-3:13:0.1] # notice outside the original domain y = sin(x) cj = cspline1d(y) newy = cspline1d_eval(cj, newx, dx=dx,x0=x[0]) from pylab import plot plot(newx, newy, x, y, 'o') Have fun, -Travis -------------- next part -------------- A non-text attachment was scrubbed... Name: figure1_small.png Type: image/png Size: 27554 bytes Desc: not available URL: From arnd.baecker at web.de Sun Jan 29 05:18:22 2006 From: arnd.baecker at web.de (Arnd Baecker) Date: Sun, 29 Jan 2006 11:18:22 +0100 (CET) Subject: [SciPy-user] Confusing BLAS/LAPACK situation on Mac OS X In-Reply-To: <43DC090D.8030105@gmail.com> References: <43DBE594.50301@gmail.com> <43DC090D.8030105@gmail.com> Message-ID: On Sat, 28 Jan 2006, Robert Kern wrote: > Arnd Baecker wrote: > > > It might also be a bad choice, as ATLAS can be slower than > > other optimized LAPACK routines, e.g. MKL or ACML. > > Do any benchmarks of `Accelerate` vs. ATLAS exist? > > The BLAS in Accelerate.framework *is* ATLAS. Thanks for the clarification! ((I did a quick google on Accelerate.framework before which lead me to http://www.cocoabuilder.com/archive/message/cocoa/2004/10/17/119581 """ If you're really paranoid about speed then you might want to look at ATLAS . I'm not sure what the state of play is nowadays, but there was a point when it produced a faster BLAS implementation than the one in Apple's Accelerate framework. """ )) Best, Arnd From ckkart at hoc.net Sun Jan 29 09:29:03 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Sun, 29 Jan 2006 15:29:03 +0100 Subject: [SciPy-user] scipy 0.4.4 problems with converters.blitz In-Reply-To: References: <43DA1A6F.3050103@hoc.net> <1138380174.43da4d8ed9ae6@webmail.colorado.edu> Message-ID: <43DCD12F.6070204@hoc.net> David M. Cooke wrote: > Pearu Peterson writes: > >> On Fri, 27 Jan 2006, Travis Oliphant wrote: >> >>>>> /home/ck/.python23_compiled/sc_9824b12a96792c10b5fdb725f9caa3c414.cpp:5: >>>>> /usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/compiler.h:38:6: >>>>> #error >>>>> In : A working template implementation is required by Blitz++ >>>>> (you may need to rerun the compiler/bzconfig script) >>>>> >>>>> I've tried to install Blitz 0.9, so that at least weave can find the header >>>>> file >>>>> "gnu/bzconfig.h" but it didn't really help. >>>>> >>>>> Any ideas what is wrong? >>> Try moving the gnu/bzconfig.h file from the installation directory to >>> the weave directory >>> >>> /usr/lib/python2.3/site-packages/scipy/weave/blitz/blitz/gnu >>> >>> Then try again. >>> >>> We may be missing some files when the upgrade in blitz occurred. >> Indeed, scipy/Lib/weave/blitz/blitz/gnu is an empty directory in scipy >> SVN. > > Fixed. I added the bzconfig.h that I had used when I updated it last. > Thanks! That works. Unfortunately the inline code that used to work before still doesn't compile. It seems to be related to the support code I'm using. In the error message below dist(double, double, int, double) is a function of the support code. 
Have there been changes to weave that I'm not aware of? /home/ck/.python24_compiled/sc_9824b12a96792c10b5fdb725f9caa3c44.cpp: In function ?PyObject* compiled_func(PyObject*, PyObject*)?: /home/ck/.python24_compiled/sc_9824b12a96792c10b5fdb725f9caa3c44.cpp:837: error: ISO C++ says that these are ambiguous, even though the worst conversion for the first is better than the worst conversion for the second: /usr/lib/python2.4/site-packages/scipy/weave/blitz/blitz/array-impl.h:1910: note: candidate 1: typename blitz::SliceInfo::T_slice blitz::Array::operator()(T1, T2) const [with T1 = long int, T2 = int, P_numtype = double, int N_rank = 2] /usr/lib/python2.4/site-packages/scipy/weave/blitz/blitz/array-impl.h:1637: note: candidate 2: P_numtype& __restrict__ blitz::Array::operator()(int, int) [with P_numtype = double, int N_rank = 2] /home/ck/.python24_compiled/sc_9824b12a96792c10b5fdb725f9caa3c44.cpp:837: error: cannot convert ?blitz::Array? to ?double? for argument ?2? to ?double dist(double, double, int, double)? Regards, Christian From icy.flame.gm at gmail.com Sun Jan 29 11:52:04 2006 From: icy.flame.gm at gmail.com (iCy-fLaME) Date: Sun, 29 Jan 2006 16:52:04 +0000 Subject: [SciPy-user] cubic splines in scipy In-Reply-To: <43DC0F5B.4040105@ieee.org> References: <43DBF5FD.9050308@astraw.com> <43DC06FD.9010709@ieee.org> <43DC0F5B.4040105@ieee.org> Message-ID: This is what I do with my data, might not be the best way to do things, but it worked with the old SciPy, seems to work on the new one too, but not fully tested. ============================ from scipy import std from scipy.interpolate import UnivariateSpline from Numeric import ones # ary = [] 1D array length = len(ary) weight = ones(length) # In case you want to do smoothing, then # you can supply the std of the noise. # # In this example I take the first 200 # data point to calculate the noise level. # Which is pure noise in my measurement # # weight = ones(length) / std(ary[:200]) b = UnivariateSpline(range(length), ary, w = weight) # # Now b is a function, anypoint on the # function can be call using: InterpolatedValue = b.__call__(j) ============================ -- iCy-fLaME The body maybe wounded, but it is the mind that hurts. From stefan at sun.ac.za Sun Jan 29 13:00:48 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sun, 29 Jan 2006 20:00:48 +0200 Subject: [SciPy-user] scipy 0.4.4 problems with converters.blitz In-Reply-To: <43DCD12F.6070204@hoc.net> References: <43DA1A6F.3050103@hoc.net> <1138380174.43da4d8ed9ae6@webmail.colorado.edu> <43DCD12F.6070204@hoc.net> Message-ID: <20060129180048.GB4326@alpha> On Sun, Jan 29, 2006 at 03:29:03PM +0100, Christian Kristukat wrote: > Thanks! That works. > Unfortunately the inline code that used to work before still doesn't compile. It > seems to be related to the support code I'm using. In the error message below > dist(double, double, int, double) is a function of the support code. Have there > been changes to weave that I'm not aware of? 
> > /home/ck/.python24_compiled/sc_9824b12a96792c10b5fdb725f9caa3c44.cpp: In > function ?PyObject* compiled_func(PyObject*, PyObject*)?: > /home/ck/.python24_compiled/sc_9824b12a96792c10b5fdb725f9caa3c44.cpp:837: error: > ISO C++ says that these are ambiguous, even though the worst conversion for the > first is better than the worst conversion for the second: > /usr/lib/python2.4/site-packages/scipy/weave/blitz/blitz/array-impl.h:1910: > note: candidate 1: typename blitz::SliceInfo blitz::nilArraySection, blitz::nilArraySection, blitz::nilArraySection, > blitz::nilArraySection, blitz::nilArraySection, blitz::nilArraySection, > blitz::nilArraySection, blitz::nilArraySection, blitz::nilArraySection>::T_slice > blitz::Array::operator()(T1, T2) const [with T1 = long int, T2 = int, > P_numtype = double, int N_rank = 2] > /usr/lib/python2.4/site-packages/scipy/weave/blitz/blitz/array-impl.h:1637: > note: candidate 2: P_numtype& __restrict__ blitz::Array::operator()(int, > int) [with P_numtype = double, int N_rank = 2] > /home/ck/.python24_compiled/sc_9824b12a96792c10b5fdb725f9caa3c44.cpp:837: error: > cannot convert ?blitz::Array? to ?double? for argument ?2? to ?double > dist(double, double, int, double)? Don't you just love C++'s clear and descriptive compiler error's? This error also popped up in my code after a compiler upgrade (to GCC4, IIRC). Seems that GCC4 is a lot more pedantic than 3.2. You should be able to work around the problem by casting the value before passing it to your function, i.e. dist(static_cast(whatever_you_had_here), ...) I had the problem with a function being called like dist(3, 4, 1, 1); which would only compile if I did dist(3.0, 4.0, 1, 1.0); in order to explicitly make the parameters doubles. Or maybe I'm totally off track, but it was worth a try! Regards St?fan From strawman at astraw.com Sun Jan 29 14:23:00 2006 From: strawman at astraw.com (Andrew Straw) Date: Sun, 29 Jan 2006 11:23:00 -0800 Subject: [SciPy-user] cubic splines in scipy In-Reply-To: References: <43DBF5FD.9050308@astraw.com> <43DC06FD.9010709@ieee.org> <43DC0F5B.4040105@ieee.org> Message-ID: <43DD1614.1020407@astraw.com> Thanks, but this does not pass through the control points, which is one of my requirements. Also, you can shorten InterpolatedValue = b.__call__(j) to InterpolatedValue = b(j) From strawman at astraw.com Sun Jan 29 16:42:07 2006 From: strawman at astraw.com (Andrew Straw) Date: Sun, 29 Jan 2006 13:42:07 -0800 Subject: [SciPy-user] Example of how to use B-splines for interpolation In-Reply-To: <43DC7242.5050301@ieee.org> References: <43DC7242.5050301@ieee.org> Message-ID: <43DD36AF.8000605@astraw.com> Hi Travis, Thanks for your work on this--it's very useful to me. I found 2 issues. I'm including a test and a potential fix for the first issue, which seems to be an end-point problem. Under some circumstances, the endpoints aren't properly detected. I didn't attempt to comprehend everything going on in this function, but the patch I made apparently works. Please review it and apply it if it's acceptable. The second issue is that the x array cannot be integers (a TypeError gets raised). There doesn't seem to be any good reason for this (why can't splines exist over integers?), so I submit that it's also a bug. Unfortunately, that didn't look as easy for me to fix, so I leave it for now. Cheers! 
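Pending a proper fix for the integer-x issue reported above, casting the evaluation points (and x0) to float before calling cspline1d_eval should sidestep the TypeError; a small sketch using the SVN-era signature shown earlier in the thread:

-----------------
import numpy as np
from scipy.signal import cspline1d, cspline1d_eval

x = np.arange(0, 10)          # integer knot positions
y = np.sin(x)
cj = cspline1d(y)

newx = np.arange(-3, 13)                          # integer evaluation points...
newy = cspline1d_eval(cj, newx.astype(float),     # ...cast to float before the call
                      dx=1.0, x0=float(x[0]))
-----------------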
Andrew Travis Oliphant wrote: > This example needs the SVN version of scipy (or you need to get the > cspline1d_eval function out of SVN): > > I sent a smaller image hoping it would make it to the list... > > Example; > > from numpy import r_, sin > from scipy.signal import cspline1d, cspline1d_eval > > x = r_[0:10] > dx = x[1]-x[0] > newx = r_[-3:13:0.1] # notice outside the original domain > y = sin(x) > cj = cspline1d(y) > newy = cspline1d_eval(cj, newx, dx=dx,x0=x[0]) > > from pylab import plot > plot(newx, newy, x, y, 'o') > > > Have fun, > > -Travis > -------------- next part -------------- A non-text attachment was scrubbed... Name: spline.patch Type: text/x-patch Size: 1568 bytes Desc: not available URL: From strawman at astraw.com Sun Jan 29 23:36:18 2006 From: strawman at astraw.com (Andrew Straw) Date: Sun, 29 Jan 2006 20:36:18 -0800 Subject: [SciPy-user] cubic splines in scipy In-Reply-To: <43DC06FD.9010709@ieee.org> References: <43DBF5FD.9050308@astraw.com> <43DC06FD.9010709@ieee.org> Message-ID: <43DD97C2.3030306@astraw.com> Travis Oliphant wrote: >The specialized cubic-spline algorithms in scipy.signal make use of >speed enhancements that are possible when your control points are >*equally-spaced* and you are willing to assume specific kinds of >boundary conditions (Michael Unser has written extensively about this >use of cubic splines and his papers form the basis for what is in >scipy.signal). The big win is that by making these assumptions your >matrix-to-be inverted is circulant and the inversion can be accomplished >very quickly. > > Just to let people know, here is a very nice review article by Michael Unser about splines: http://bigwww.epfl.ch/publications/unser9902.html It would be good to include a link to that article within the source code for those wishing to delve deeper. Cheers! Andrew From ckkart at hoc.net Mon Jan 30 05:08:33 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Mon, 30 Jan 2006 11:08:33 +0100 Subject: [SciPy-user] scipy 0.4.4 problems with converters.blitz In-Reply-To: <20060129180048.GB4326@alpha> References: <43DA1A6F.3050103@hoc.net> <1138380174.43da4d8ed9ae6@webmail.colorado.edu> <43DCD12F.6070204@hoc.net> <20060129180048.GB4326@alpha> Message-ID: <43DDE5A1.8030308@hoc.net> Stefan van der Walt wrote: > On Sun, Jan 29, 2006 at 03:29:03PM +0100, Christian Kristukat wrote: > >>Thanks! That works. >>Unfortunately the inline code that used to work before still doesn't compile. It >>seems to be related to the support code I'm using. In the error message below >>dist(double, double, int, double) is a function of the support code. Have there >>been changes to weave that I'm not aware of? 
>> >>/home/ck/.python24_compiled/sc_9824b12a96792c10b5fdb725f9caa3c44.cpp: In >>function ?PyObject* compiled_func(PyObject*, PyObject*)?: >>/home/ck/.python24_compiled/sc_9824b12a96792c10b5fdb725f9caa3c44.cpp:837: error: >>ISO C++ says that these are ambiguous, even though the worst conversion for the >>first is better than the worst conversion for the second: >>/usr/lib/python2.4/site-packages/scipy/weave/blitz/blitz/array-impl.h:1910: >>note: candidate 1: typename blitz::SliceInfo>blitz::nilArraySection, blitz::nilArraySection, blitz::nilArraySection, >>blitz::nilArraySection, blitz::nilArraySection, blitz::nilArraySection, >>blitz::nilArraySection, blitz::nilArraySection, blitz::nilArraySection>::T_slice >>blitz::Array::operator()(T1, T2) const [with T1 = long int, T2 = int, >>P_numtype = double, int N_rank = 2] >>/usr/lib/python2.4/site-packages/scipy/weave/blitz/blitz/array-impl.h:1637: >>note: candidate 2: P_numtype& __restrict__ blitz::Array::operator()(int, >>int) [with P_numtype = double, int N_rank = 2] >>/home/ck/.python24_compiled/sc_9824b12a96792c10b5fdb725f9caa3c44.cpp:837: error: >>cannot convert ?blitz::Array? to ?double? for argument ?2? to ?double >>dist(double, double, int, double)? > > > Don't you just love C++'s clear and descriptive compiler error's? > This error also popped up in my code after a compiler upgrade (to > GCC4, IIRC). Seems that GCC4 is a lot more pedantic than 3.2. You > should be able to work around the problem by casting the value before > passing it to your function, i.e. > > dist(static_cast(whatever_you_had_here), ...) > > I had the problem with a function being called like > > dist(3, 4, 1, 1); > > which would only compile if I did > > dist(3.0, 4.0, 1, 1.0); > > in order to explicitly make the parameters doubles. > > Or maybe I'm totally off track, but it was worth a try! Good idea! But it seems like I have to cast to a different type. Can you help me with that error message? I've no idea of C++. For me it looks like he's trying to convert a 0-dim array (=scalar ?) to a double, right? error: invalid static_cast from type `blitz::Array' to type `double' Regards, Christian From a.u.r.e.l.i.a.n at gmx.net Mon Jan 30 11:50:48 2006 From: a.u.r.e.l.i.a.n at gmx.net (=?ISO-8859-1?Q?=22Johannes_L=F6hnert=22?=) Date: Mon, 30 Jan 2006 17:50:48 +0100 (MET) Subject: [SciPy-user] Strange behaviour of dot function Message-ID: <19508.1138639848@www036.gmx.net> Hi, I just found out that the dot function which multiplies matrices gives strange results for a 3-dimensional array. Consider the following example: ----------------- from Numeric import * #from numpy import * a=(arange(-1,3)[:, NewAxis, NewAxis] +arange(1,3)[NewAxis, :, NewAxis] +arange(3,6)[NewAxis, NewAxis, :]) print a n=a.shape[1] print 'sum:\n', sum(a, axis=1) print 'dot1:\n', dot(ones(n), a) print 'dot2:\n', dot(swapaxes(a,1,2), ones(n)) ---------------- My expectation would be that all three of the last lines give the same result. However, only 'sum' and 'dot2' are equal. (using Numeric 23.8) As you probably guessed I also tried it with a recent numpy version (numpy.__version__ = 0.9.5.1983). In this case, both dot1 and dot2 give the wrong result. Question: Is this behaviour intended? If yes, how do you get to the 'wrong' result? Johannes Loehnert -- Telefonieren Sie schon oder sparen Sie noch? 
NEU: GMX Phone_Flat http://www.gmx.net/de/go/telefonie From joe at enthought.com Mon Jan 30 16:39:02 2006 From: joe at enthought.com (Joe Cooper) Date: Mon, 30 Jan 2006 15:39:02 -0600 Subject: [SciPy-user] SciPy migration underway Message-ID: <43DE8776.5050808@enthought.com> Hi all, As many of you know, there's been a lot of work going on on various aspects of the SciPy website--Andrew Straw and Travis Oliphant and many others have been working hard on the new Moin-based website, and it's looking really great. It has already proven to be a success in all of the ways that the Plone site has always failed (i.e. making it easy for people to get involved and contribute), and everyone concerned would now like it to be the official First Contact for anyone coming to SciPy. To that end tonight I will be making the final changes that moves the website, mailing lists and old archives and user lists, and everything else to the new server and the front page of scipy.org will be the new Moin wiki (http://new.scipy.org/Wiki for those who haven't seen it, but really want to see it even before the move takes place this afternoon). I'm putting forth an effort not to break anything in the process, but that's hoping for a lot when so many services and complicated redirects, proxy rules, and assorted stuff is involved. So, if you find any SciPy.org service is broken between now and tomorrow morning, don't be too surprised. If anything is still broken in the morning, please let me know about it, as it probably means I didn't notice. Anyway, just a heads up about the impending intermittent downtime of some or all of our services on SciPy.org during the next few hours. From williams at thphys.ox.ac.uk Mon Jan 30 16:48:23 2006 From: williams at thphys.ox.ac.uk (Michael Williams) Date: Mon, 30 Jan 2006 21:48:23 +0000 Subject: [SciPy-user] SciPy migration underway In-Reply-To: <43DE8776.5050808@enthought.com> References: <43DE8776.5050808@enthought.com> Message-ID: <2AD2CD18-5AF2-48C5-9F28-5F5187C11D91@thphys.ox.ac.uk> On 30 Jan 2006, at 21:39, Joe Cooper wrote: > Anyway, just a heads up about the impending intermittent downtime of > some or all of our services on SciPy.org during the next few hours. Good luck, and thanks for all the hard work! -- Mike From brianc at temple.edu Mon Jan 30 17:04:15 2006 From: brianc at temple.edu (Brian Cole) Date: Mon, 30 Jan 2006 17:04:15 -0500 Subject: [SciPy-user] Centroid Calculation Message-ID: <54b165660601301404ka075ecfre0b4a554ea0dc17d@mail.gmail.com> I'm a NumPy newbie. What is the NumPy way of doing this? n=0 centroid_x=0 centroid_y=0 centroid_z=0 for x, y, z in catesian_coords: centroid_x+=x centroid_y+=y centroid_z+=z n+=1 centroid_x/=n centroid_y/=n centroid_z/=n Or am I entirely missing the point of NumPy? Thanks, Brian From jdhunter at ace.bsd.uchicago.edu Mon Jan 30 16:57:49 2006 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Mon, 30 Jan 2006 15:57:49 -0600 Subject: [SciPy-user] Centroid Calculation In-Reply-To: <54b165660601301404ka075ecfre0b4a554ea0dc17d@mail.gmail.com> (Brian Cole's message of "Mon, 30 Jan 2006 17:04:15 -0500") References: <54b165660601301404ka075ecfre0b4a554ea0dc17d@mail.gmail.com> Message-ID: <87u0blz8de.fsf@peds-pc311.bsd.uchicago.edu> >>>>> "Brian" == Brian Cole writes: Brian> I'm a NumPy newbie. What is the NumPy way of doing this? 
Brian> n=0 centroid_x=0 centroid_y=0 centroid_z=0 for x, y, z in Brian> catesian_coords: centroid_x+=x centroid_y+=y centroid_z+=z Brian> n+=1 centroid_x/=n centroid_y/=n centroid_z/=n where X is a nx3 array of x,y,z,coords In [11]: import numpy as nx In [12]: X = nx.rand(1000,3) # make up some random data In [13]: centroid = nx.mean(X) In [14]: print centroid [ 0.48319355 0.49741983 0.49024469] JDH From stefan at sun.ac.za Mon Jan 30 17:22:16 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Tue, 31 Jan 2006 00:22:16 +0200 Subject: [SciPy-user] scipy 0.4.4 problems with converters.blitz In-Reply-To: <43DDE5A1.8030308@hoc.net> References: <43DA1A6F.3050103@hoc.net> <1138380174.43da4d8ed9ae6@webmail.colorado.edu> <43DCD12F.6070204@hoc.net> <20060129180048.GB4326@alpha> <43DDE5A1.8030308@hoc.net> Message-ID: <20060130222216.GC19454@alpha> On Mon, Jan 30, 2006 at 11:08:33AM +0100, Christian Kristukat wrote: > Good idea! But it seems like I have to cast to a different type. Can you help me > with that error message? I've no idea of C++. For me it looks like he's trying > to convert a 0-dim array (=scalar ?) to a double, right? > > error: invalid static_cast from type `blitz::Array' to type `double' Array is still an array -- not a double. I assume you obtained it by doing something similar to blitz::Array x(3); x = 1,2,3; dist(x(blitz::Range(2,2)), ...) Try specifying that you need element 0 of that cut, i.e. dist(x(blitz::Range(2,2))(0), ...) If that does not help, please post the offending piece of code. Cheers St?fan From ryanlists at gmail.com Mon Jan 30 18:45:56 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Mon, 30 Jan 2006 18:45:56 -0500 Subject: [SciPy-user] numpy and f2py Message-ID: Is there a secret to making f2py and the new numpy play together? Here is the begining of a fortran file that worked great with old scipy. I just tried installing numpy/new scipy and it no longer works - the output is a vector of nan's (I have not re-installed f2py since installing new scipy). 
subroutine bodevect(svect,ucv,outvect,n1,n2) Cf2py integer intent(hide),depend(svect) :: n1 = len(svect) Cf2py integer intent(hide),depend(ucv) :: n2 = len(ucv) Cf2py intent(in) svect, ucv Cf2py intent(out) outvect integer n1, n2, i double complex svect(n1),outvect(n1), bode double precision ucv(n2) DO i=1,n1 outvect(i) = bode(svect(i),ucv,n1) ENDDO END double complex function zcosh(z) double complex z zcosh = 0.5*(exp(z)+exp(-z)) RETURN END double complex function zsinh(z) double complex z zsinh = 0.5*(exp(z)-exp(-z)) RETURN END double complex function bode(s,ucv,n) double complex s double complex zsinh, zcosh Cf2py intent(in) s, ucv Cf2py intent(out) bode Cf2py integer intent(hide),depend(ucv) :: n = len(ucv) integer n double precision ucv(n) double precision kbase, cbase, mubeam, EIbeam, Lbeam, rl0, Ll0, @ ml0, Il0, kj1, cj1, rl1, Ll1, ml1, Il1, Kact, tauact, kj2, @ cj2, rl2, Ll2, ml2, Il2, kj3, cj3, rl36, Ll36, ml36, Il36, @ gainbode1, abeam, gainbode0 double complex c1beam, c2beam, c3beam, c4beam, betabeam, a_1, a_2, @ a_3, a_4, a_5, a_6, a_7, a_8, a_9, a_10, a_11, a_12, a_13, @ a_14, a_15, a_16, a_17, a_18, a_19, a_20, a_21, a_22, a_23, @ a_24, a_25, a_26, a_27, a_28, a_29, a_30, a_31, a_32, a_33, @ a_34, a_35, a_36, a_37, a_38, a_39, a_40, a_41, a_42, a_43, @ a_44, a_45, a_46, a_47, a_48, a_49, a_50, a_51, a_52, a_53, @ a_54, a_55, a_56, a_57, a_58, a_59, a_60, a_61, a_62, a_63, @ a_64, a_65, a_66, a_67, a_68, a_69, a_70, a_71, a_72, a_73, @ a_74, a_75, a_76, a_77, a_78, a_79, a_80, a_81, a_82, a_83, @ a_84, a_85, a_86, a_87, a_88, a_89, a_90, a_91 kbase=ucv(1) Thanks, Ryan From robert.kern at gmail.com Mon Jan 30 18:51:19 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 30 Jan 2006 17:51:19 -0600 Subject: [SciPy-user] numpy and f2py In-Reply-To: References: Message-ID: <43DEA677.4080604@gmail.com> Ryan Krauss wrote: > Is there a secret to making f2py and the new numpy play together? Are you sure you are using the f2py that is part of numpy and not the older, separate release that does not work with numpy? -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From pgmdevlist at mailcan.com Mon Jan 30 19:41:34 2006 From: pgmdevlist at mailcan.com (pgmdevlist at mailcan.com) Date: Mon, 30 Jan 2006 19:41:34 -0500 Subject: [SciPy-user] numpy and f2py In-Reply-To: <43DEA677.4080604@gmail.com> References: <43DEA677.4080604@gmail.com> Message-ID: <200601301941.35197.pgmdevlist@mailcan.com> On Monday 30 January 2006 18:51, Robert Kern wrote: > Ryan Krauss wrote: > > Is there a secret to making f2py and the new numpy play together? > > Are you sure you are using the f2py that is part of numpy and not the > older, separate release that does not work with numpy? In other terms, uninstall f2py 2.45.xxx and scipy_distutils, and reinstall numpy.f2py. Note that there are some tweakings to do if you're running on an 64b machine and want to use the g95 or intel compilers. From madhadron at gmail.com Mon Jan 30 22:04:55 2006 From: madhadron at gmail.com (Frederick Ross) Date: Mon, 30 Jan 2006 22:04:55 -0500 Subject: [SciPy-user] What happened to Numeric Python EM Project? Message-ID: On the website there's a link to the Numeric Python EM Project, which looks really interesting from the description...but it appears to have vanished from the face of the earth. www.pythonemproject.com isn't responding, and Rob Lytle's home page doesn't appear to have a link to it. Does anyone know where it went? 
-- Frederick Ross Graduate Fellow The Rockefeller University From ryanlists at gmail.com Mon Jan 30 22:57:49 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Mon, 30 Jan 2006 22:57:49 -0500 Subject: [SciPy-user] numpy and f2py In-Reply-To: <200601301941.35197.pgmdevlist@mailcan.com> References: <43DEA677.4080604@gmail.com> <200601301941.35197.pgmdevlist@mailcan.com> Message-ID: Is there documentation on this? Do I uninstall the command line executable f2py as well or just the python package? Thanks, Ryan On 1/30/06, pgmdevlist at mailcan.com wrote: > On Monday 30 January 2006 18:51, Robert Kern wrote: > > Ryan Krauss wrote: > > > Is there a secret to making f2py and the new numpy play together? > > > > Are you sure you are using the f2py that is part of numpy and not the > > older, separate release that does not work with numpy? > > In other terms, uninstall f2py 2.45.xxx and scipy_distutils, and reinstall > numpy.f2py. > Note that there are some tweakings to do if you're running on an 64b machine > and want to use the g95 or intel compilers. > From robert.kern at gmail.com Tue Jan 31 00:10:14 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 30 Jan 2006 23:10:14 -0600 Subject: [SciPy-user] numpy and f2py In-Reply-To: References: <43DEA677.4080604@gmail.com> <200601301941.35197.pgmdevlist@mailcan.com> Message-ID: <43DEF136.6050801@gmail.com> Ryan Krauss wrote: > Is there documentation on this? Not as such, no. The documentation inside the package is somewhat out-of-date especially with regards to installation of f2py. > Do I uninstall the command line > executable f2py as well or just the python package? Yes. The script won't be of any use to you once you remove the package. But have you determined that you were actually using the old f2py with numpy (which doesn't work, but I'm not sure if it would fail the way you said)? If you could give a small, self-contained example that exhibits the problem, I'd be happy to try to find what's wrong. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From novak at ucolick.org Tue Jan 31 00:30:23 2006 From: novak at ucolick.org (Gregory Novak) Date: Mon, 30 Jan 2006 21:30:23 -0800 Subject: [SciPy-user] Matrix Multiply for > 2 dim matricies? Message-ID: I'm somewhat mystified by the behavior of the matrix multiply routine for > 2d matricies. The back story: at every point on a 3d grid, I have a vector and I would like to apply a matrix transformation to all of them. I had thought that by feeding the matrix and the whole 4d array to matrixmultiply, I'd be able to quickly do this. However, I had trouble figuring out how to correctly interpret the output from matrixmultiply. After exhaustively feeding 2x2x2 arrays to matrix multiply to see if I could tease out the correct behavior, I found that A = matrixmultiply(B,C) means that: A_ij0k = B_ijn C_0nk But when the first index of C is 1, I don't see what's going on. For example, it seems that: A_0010 = B_000 * C_010 + B_001 * C_100 Ie, for either: B = zeros((2,2,2)); C = zeros((2,2,2)) B[0,0,0] = 1 C[0,1,0] = 1 A = matrixmultiply(B,C) or: B = zeros((2,2,2)); C = zeros((2,2,2)) B[0,0,1] = 1 C[1,0,0] = 1 A = matrixmultiply(B,C) A ends up being the same, namely: [[[[0,0,] [1,0,]] [[0,0,] [0,0,]]] [[[0,0,] [0,0,]] [[0,0,] [0,0,]]]] So, I know that matrixmultiply multiplies a bunch of things and adds them together, but I'm having trouble deducing exactly _what_ it does in this case. 
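For the concrete use case described above (one 3x3 transform applied to a vector at every grid point), the ambiguity can be sidestepped without relying on dot's behaviour for >2-d arrays at all: collapse the grid to a plain list of vectors, do an ordinary 2-d matrix multiply, and reshape back. A rough sketch with made-up array names and shapes:

from numpy import zeros, dot, reshape, transpose

nx, ny, nz = 4, 5, 6
V = zeros((nx, ny, nz, 3), float)    # a vector at every grid point
M = zeros((3, 3), float)             # the 3x3 transformation matrix

flat = reshape(V, (-1, 3))           # (nx*ny*nz, 3) list of vectors
out = dot(flat, transpose(M))        # each row is M applied to one vector
out = reshape(out, (nx, ny, nz, 3))  # restore the original grid shape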
Thanks, Greg From ryanlists at gmail.com Tue Jan 31 00:41:37 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 31 Jan 2006 00:41:37 -0500 Subject: [SciPy-user] numpy and f2py In-Reply-To: <43DEF136.6050801@gmail.com> References: <43DEA677.4080604@gmail.com> <200601301941.35197.pgmdevlist@mailcan.com> <43DEF136.6050801@gmail.com> Message-ID: Thanks for your willingness to help Robert, but I think I have it all working. It was actually pretty straightforward, I just happen to have an old .so file in my test directory that I didn't realize my script was picking up that was causing me trouble. I basically deleted the f2py2e directory from my site-packages dir and deleted the f2py executable from /usr/bin and then reinstalled numpy and everything is working well. I probably didn't need to manually delete the executable as it was probably being over written. I checked my version and didn't know if this ryan at ubuntu:~$ f2py -v 2_2025 was old or new. But apparently it is new (when I deleted it and then reinstalled, it came back with this same version). So, near as I can tell, everything is working fine for me and I am now running the new scipy/numpy with weave and f2py both happy. Thanks for your help, Ryan On 1/31/06, Robert Kern wrote: > Ryan Krauss wrote: > > Is there documentation on this? > > Not as such, no. The documentation inside the package is somewhat out-of-date > especially with regards to installation of f2py. > > > Do I uninstall the command line > > executable f2py as well or just the python package? > > Yes. The script won't be of any use to you once you remove the package. > > But have you determined that you were actually using the old f2py with numpy > (which doesn't work, but I'm not sure if it would fail the way you said)? If you > could give a small, self-contained example that exhibits the problem, I'd be > happy to try to find what's wrong. > > -- > Robert Kern > robert.kern at gmail.com > > "In the fields of hell where the grass grows high > Are the graves of dreams allowed to die." > -- Richard Harter > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From a.u.r.e.l.i.a.n at gmx.net Tue Jan 31 02:56:25 2006 From: a.u.r.e.l.i.a.n at gmx.net (aurelian) Date: Tue, 31 Jan 2006 08:56:25 +0100 Subject: [SciPy-user] Matrix Multiply for > 2 dim matricies? In-Reply-To: References: Message-ID: <43DF1829.8050400@gmx.net> Hi, I think it is a bug, I stumbled upon it just yesterday. It occurs in both Numeric and numpy. As long as it is not fixed, you can try to put the 2d matrix in the 2nd place like this: A=matrixmultiply(swapaxes(C, 3, 4), transpose(B)). Maybe this will yield the correct result (see my mail from yesterday 16:50 GMT). Best regards, Johannes Loehnert From ckkart at hoc.net Tue Jan 31 05:02:52 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Tue, 31 Jan 2006 11:02:52 +0100 Subject: [SciPy-user] numpy/scipy rpms Message-ID: <43DF35CC.5000007@hoc.net> Hi, is anybody interested in SuSE 10.0 rpms of numpy/scipy built with BLAS/LAPACK on a P4? Maybe someone could put them somewhere on the new web page. Christian From joe at enthought.com Tue Jan 31 06:18:27 2006 From: joe at enthought.com (Joe Cooper) Date: Tue, 31 Jan 2006 05:18:27 -0600 Subject: [SciPy-user] The new SciPy.org Message-ID: <43DF4783.50007@enthought.com> Hi all, As threatened, the new Moin wiki has taken over the SciPy.org site. 
The migration has proven even more complicated than anticipated (and I anticipated a lot of complexity), so some services remain on the old server as of this morning. This will be wrapped up later today, after I get some sleep. Some of the things that have gone well: - The wiki is great. Andrew and Oliphant and lots of others did a fantastic job on it. - Mailing list archive links have an accurate Moved Temporarily redirect (i.e. google links to scipy archives will continue to work). I need to hash out with Travis how he'd like to handle the mailing list archives going forward, as he wrote the scripts that integrated them into Plone, and obviously those aren't going to work in Moin. I wrote something in Perl in the meantime, but I imagine that'll go away as soon as Travis becomes aware of it. - It will be possible to redirect most "sections" of the old plone site to a similar section on the Moin site...I just need to figure out which sections from Plone apply to what in the Wiki. This will be a time-consuming thing, but is probably worth the trouble in order to keep some google links working reasonably well some of the time. Some things that remain outstanding: - Mail and list migration. I don't expect trouble with this, it just takes a long time, and requires attention to a lot of boring details. - Takeover of the old IP, so that DNS service for the "scipy.org" domain is taken over by the new server. This one is moderately tricky, since we have a half-dozen delegated domains for the other projects hosted on the server (ipython, neuroimaging, etc.). Those have to be merged back into the scipy zone. I'm sure there are problems somewhere...But it's running relatively well, with at least working content everywhere I thought to poke. Holler if you see anything glaringly stupid caused by the change. Thanks! From meesters at uni-mainz.de Tue Jan 31 08:17:28 2006 From: meesters at uni-mainz.de (Christian Meesters) Date: Tue, 31 Jan 2006 14:17:28 +0100 Subject: [SciPy-user] where to get version 0.3.2? Message-ID: <200601311417.29222.meesters@uni-mainz.de> Hi, I just wanted to install one of my packages for a student of mine. This package requires the old scipy. Now, the new web page looks great, but I was unable to find a download link for the "old" scipy. And screening sourceforge did get me no further. Is the old version still available? Many thanks, Christian From schofield at ftw.at Tue Jan 31 09:47:14 2006 From: schofield at ftw.at (Ed Schofield) Date: Tue, 31 Jan 2006 15:47:14 +0100 Subject: [SciPy-user] where to get version 0.3.2? In-Reply-To: <200601311417.29222.meesters@uni-mainz.de> References: <200601311417.29222.meesters@uni-mainz.de> Message-ID: <43DF7872.3050009@ftw.at> Christian Meesters wrote: >Hi, > >I just wanted to install one of my packages for a student of mine. This >package requires the old scipy. > >Now, the new web page looks great, but I was unable to find a download link >for the "old" scipy. And screening sourceforge did get me no further. Is the >old version still available? > > Doh! ;) I've now uploaded the generic 0.3.2 tarball to SourceForge and added a link from the Wiki download page: http://www.scipy.org/Wiki/Download I still had this file, but I don't have any old binaries lying around. Can you compile it yourself? 
-- Ed From oliphant.travis at ieee.org Tue Jan 31 09:55:17 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 31 Jan 2006 07:55:17 -0700 Subject: [SciPy-user] Strange behaviour of dot function In-Reply-To: <19508.1138639848@www036.gmx.net> References: <19508.1138639848@www036.gmx.net> Message-ID: <43DF7A55.3040602@ieee.org> Johannes L?hnert wrote: >Hi, > >I just found out that the dot function which multiplies matrices gives >strange results for a 3-dimensional array. Consider the following example: > > You just found two bugs in numpy.dot one of which is also there in Numeric. I just committed a fix to both bugs by using the ever-useful N-d array iterator (it sure makes it easier to write algorithms for strided arrays...). All three of your tests now produce the same answer. Thank you for finding this problem. -Travis From schofield at ftw.at Tue Jan 31 10:04:58 2006 From: schofield at ftw.at (Ed Schofield) Date: Tue, 31 Jan 2006 16:04:58 +0100 Subject: [SciPy-user] The new SciPy.org In-Reply-To: <43DF4783.50007@enthought.com> References: <43DF4783.50007@enthought.com> Message-ID: <43DF7C9A.20403@ftw.at> Joe Cooper wrote: >Hi all, > >As threatened, the new Moin wiki has taken over the SciPy.org site. > > > > Congratulations and thank you for all your hard work! -- Ed From Paul.Ray at nrl.navy.mil Tue Jan 31 10:19:49 2006 From: Paul.Ray at nrl.navy.mil (Paul Ray) Date: Tue, 31 Jan 2006 10:19:49 -0500 Subject: [SciPy-user] The new SciPy.org In-Reply-To: <43DF7C9A.20403@ftw.at> References: <43DF4783.50007@enthought.com> <43DF7C9A.20403@ftw.at> Message-ID: <9E4C75A6-04FD-494F-ADE1-B30ED28D3D16@nrl.navy.mil> On Jan 31, 2006, at 10:04 AM, Ed Schofield wrote: > Joe Cooper wrote: > >> As threatened, the new Moin wiki has taken over the SciPy.org site. I think it looks great, and I'm happy that the new SciPy and NumPy have an "official" home, so I can start having all my developers install consistent releases and get past the Numeric/numarray confusion that has been causing trouble. I'm sure the transition to NumPy will take some work, but so far I haven't had many problems. (BTW, does SciPy 0.4.4 require NumPy 0.9.2, or will NumPy 0.9.4 work? If not, when will a SciPy release be made that supports NumPy 0.9.4?) However, one problem with the pretty front page image is that it doesn't work with the new SciPy! xplt has been removed from the default build, and relegated to the sandbox. This is fine, as matplotlib is the preferred plotting interface. Thus, I think the image should be changed to something that will actually work. (I realize this may be tricky since SciPy no longer does graphics, and graphics are pretty, and you probably don't want to have the SciPy front page highlight something that is really in matplotlib, not SciPy.) I'm not sure what the best solution is, but it brings up a question I have. What is the currently favored way to make a simple 3-d surface plot like the one of the bessel function on the front page? I didn't see that capability in matplotlib, but perhaps I missed it. If they had it, you would think there would be a 3-d plot screenshot, but there isn't: http://matplotlib.sourceforge.net/screenshots.html Any suggestions for easy 3-d surface plots of numpy arrays? Cheers, -- Paul -- Dr. Paul S. 
Ray E-mail: Paul.Ray at nrl.navy.mil Naval Research Laboratory WWW : http://xweb.nrl.navy.mil/ personnel/paulr/ Code 7655 Phone : (202) 404-1619 Washington, DC 20375 AIM : NRLPSR From ryanlists at gmail.com Tue Jan 31 10:23:53 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 31 Jan 2006 10:23:53 -0500 Subject: [SciPy-user] truth testing new numpy arrays Message-ID: I have a style question. With the old scipy/Numeric, when I had a vector I wanted to be an optional variable in a function I would set is default to "myvect=[]". Then when I wanted to test whether or not the variable had been specified, I just checked "if myvect:". With numpy arrays, this produces a message about ambiguous results for testing arrays and says I should use myvect.any(). The problem is that empty lists don't have an "any" method. I could set the optional arrays explicitly to None or I could check if it" ==[]", but both of these are slightly more fragile (if I accidentally pass an empty list instead of None it would cause problems). I could also set the optional array to "array([])" and always test "any()", but that is more typing. Is there a best way to handle optional arrays in function specifications? (I am slightly addicted to Python's boolean testing for empty objects.) Ryan From meesters at uni-mainz.de Tue Jan 31 10:44:31 2006 From: meesters at uni-mainz.de (Christian Meesters) Date: Tue, 31 Jan 2006 16:44:31 +0100 Subject: [SciPy-user] where to get version 0.3.2? In-Reply-To: <43DF7872.3050009@ftw.at> References: <200601311417.29222.meesters@uni-mainz.de> <43DF7872.3050009@ftw.at> Message-ID: <200601311644.31853.meesters@uni-mainz.de> On Tuesday 31 January 2006 15:47, Ed Schofield wrote: > Christian Meesters wrote: > >Hi, > > > >I just wanted to install one of my packages for a student of mine. This > >package requires the old scipy. > > > >Now, the new web page looks great, but I was unable to find a download > > link for the "old" scipy. And screening sourceforge did get me no > > further. Is the old version still available? > > Doh! ;) > > I've now uploaded the generic 0.3.2 tarball to SourceForge and added a > link from the Wiki download page: > > http://www.scipy.org/Wiki/Download > > I still had this file, but I don't have any old binaries lying around. > Can you compile it yourself? > > -- Ed Thanks, but no, I can't easily compile it myself: I wrote the package on my own computer, but want to install it on (a) computer(s) running Windows, where I have limited rights. So, an installer where everything is already wrapped up would be great. :-) (Just that everybody understands: I'm talking of machines placed in our institute belonging to our university, maintained by the university's support people. I might run almost every program, but installing a compiler is something different, and making them to install packages is hopeless.) Perhaps somebody has the installer saved and could send it to me? Christian PS The "0.3.2-link" on http://www.scipy.org/Wiki/Download lead to http://www.scipy.org/Wiki/Installing_SciPy some hours ago From travis at enthought.com Tue Jan 31 11:01:09 2006 From: travis at enthought.com (Travis N. Vaught) Date: Tue, 31 Jan 2006 10:01:09 -0600 Subject: [SciPy-user] where to get version 0.3.2? In-Reply-To: <200601311644.31853.meesters@uni-mainz.de> References: <200601311417.29222.meesters@uni-mainz.de> <43DF7872.3050009@ftw.at> <200601311644.31853.meesters@uni-mainz.de> Message-ID: <43DF89C5.5080801@enthought.com> Christian Meesters wrote: > ... 
> > Perhaps somebody has the installer saved and could send it to me? > > I'm uploading some of the scipy 0.3.2 binaries to the sourceforge site now--I'll repost when they're available (and update the wiki page). Travis > Christian > > PS The "0.3.2-link" on http://www.scipy.org/Wiki/Download lead to > http://www.scipy.org/Wiki/Installing_SciPy some hours ago > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -- ........................ Travis N. Vaught CEO Enthought, Inc. http://www.enthought.com ........................ From travis at enthought.com Tue Jan 31 12:22:57 2006 From: travis at enthought.com (Travis N. Vaught) Date: Tue, 31 Jan 2006 11:22:57 -0600 Subject: [SciPy-user] where to get version 0.3.2? In-Reply-To: <43DF89C5.5080801@enthought.com> References: <200601311417.29222.meesters@uni-mainz.de> <43DF7872.3050009@ftw.at> <200601311644.31853.meesters@uni-mainz.de> <43DF89C5.5080801@enthought.com> Message-ID: <43DF9CF1.5050003@enthought.com> Travis N. Vaught wrote: > Christian Meesters wrote: > >> ... >> >> Perhaps somebody has the installer saved and could send it to me? >> >> >> > I'm uploading some of the scipy 0.3.2 binaries to the sourceforge site > now--I'll repost when they're available (and update the wiki page). > > Travis > [Apologies for the cross-post] I've finished uploading the "legacy" binaries to the sourceforge site. Let me know if there are any errors or omissions. I've also disabled (from non-developer view) the forums and trackers at the sourceforge site--there are folks who have been posting there, but no one is monitoring these tools. Please use the mailing lists for questions: scipy-user at scipy.org -- subscribe at http://www.scipy.net/mailman/listinfo/scipy-user scipy-dev at scipy.org -- subscribe at http://www.scipy.net/mailman/listinfo/scipy-dev and use the Tracker for issue/bugs/feature requests: http://projects.scipy.org/scipy/scipy Thanks, Travis p.s. Developers, please help move any relevant issues from sourceforge over to the developer tracker.) From oliphant.travis at ieee.org Tue Jan 31 10:43:38 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 31 Jan 2006 08:43:38 -0700 Subject: [SciPy-user] truth testing new numpy arrays In-Reply-To: References: Message-ID: <43DF85AA.5030507@ieee.org> Ryan Krauss wrote: >I have a style question. With the old scipy/Numeric, when I had a >vector I wanted to be an optional variable in a function I would set >is default to "myvect=[]". Then when I wanted to test whether or not >the variable had been specified, I just checked "if myvect:". With >numpy arrays, this produces a message about ambiguous results for >testing arrays and says I should use myvect.any(). The problem is >that empty lists don't have an "any" method. I could set the optional >arrays explicitly to None or I could check if it" ==[]", but both of >these are slightly more fragile (if I accidentally pass an empty list >instead of None it would cause problems). I could also set the >optional array to "array([])" and always test "any()", but that is >more typing. > > I typically use None for optional array arguments. But, you could do what your doing and test if any(myvect): else: But, I think the None test is less prone to error. -Travis From meesters at uni-mainz.de Tue Jan 31 13:05:56 2006 From: meesters at uni-mainz.de (Christian Meesters) Date: Tue, 31 Jan 2006 19:05:56 +0100 Subject: [SciPy-user] where to get version 0.3.2? 
In-Reply-To: <43DF9CF1.5050003@enthought.com> References: <200601311417.29222.meesters@uni-mainz.de> <43DF89C5.5080801@enthought.com> <43DF9CF1.5050003@enthought.com> Message-ID: <200601311905.56290.meesters@uni-mainz.de> Thanks a lot. I'm sorry: Apparently I've been all too impatient. I know that you guys have put quite some work in the transition. The new web page, the wiki and everything just looks great. So, once more: Thanks for all that work! Christian On Tuesday 31 January 2006 18:22, Travis N. Vaught wrote: > Travis N. Vaught wrote: > > Christian Meesters wrote: > >> ... > >> > >> Perhaps somebody has the installer saved and could send it to me? > > > > I'm uploading some of the scipy 0.3.2 binaries to the sourceforge site > > now--I'll repost when they're available (and update the wiki page). > > > > Travis > From Paul.Ray at nrl.navy.mil Tue Jan 31 14:10:03 2006 From: Paul.Ray at nrl.navy.mil (Paul Ray) Date: Tue, 31 Jan 2006 14:10:03 -0500 Subject: [SciPy-user] truth testing new numpy arrays In-Reply-To: <43DF85AA.5030507@ieee.org> References: <43DF85AA.5030507@ieee.org> Message-ID: <7D98BBD1-0DA2-47D7-8F7C-D4882B4B69AA@nrl.navy.mil> On Jan 31, 2006, at 10:43 AM, Travis Oliphant wrote: > I typically use None for optional array arguments. > > But, you could do what your doing and test > > if any(myvect): > > else: > > > But, I think the None test is less prone to error. Assuming you use None for unset optional arguments, how do you do the test? if myvect: doesn't work because it is ambiguous if myvect isn't none. if myvect.any(): doesn't work because None doesn't have an any() method So, do you use: if any(myvect): or, if myvect is not None: ? -- Paul -- Dr. Paul S. Ray E-mail: Paul.Ray at nrl.navy.mil Naval Research Laboratory WWW : http://xweb.nrl.navy.mil/ personnel/paulr/ Code 7655 Phone : (202) 404-1619 Washington, DC 20375 AIM : NRLPSR From robert.kern at gmail.com Tue Jan 31 14:50:50 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 31 Jan 2006 13:50:50 -0600 Subject: [SciPy-user] truth testing new numpy arrays In-Reply-To: <7D98BBD1-0DA2-47D7-8F7C-D4882B4B69AA@nrl.navy.mil> References: <43DF85AA.5030507@ieee.org> <7D98BBD1-0DA2-47D7-8F7C-D4882B4B69AA@nrl.navy.mil> Message-ID: <43DFBF9A.6020106@gmail.com> Paul Ray wrote: > Assuming you use None for unset optional arguments, how do you do > the test? > if myvect is not None: > > ? This is *the* canonical, Pythonic approach. -- Robert Kern robert.kern at gmail.com "In the fields of hell where the grass grows high Are the graves of dreams allowed to die." -- Richard Harter From oliphant.travis at ieee.org Tue Jan 31 15:18:04 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 31 Jan 2006 13:18:04 -0700 Subject: [SciPy-user] Example of how to use B-splines for interpolation In-Reply-To: <43DD36AF.8000605@astraw.com> References: <43DC7242.5050301@ieee.org> <43DD36AF.8000605@astraw.com> Message-ID: <43DFC5FC.6050507@ieee.org> Andrew Straw wrote: >Hi Travis, > >Thanks for your work on this--it's very useful to me. > >I found 2 issues. I'm including a test and a potential fix for the first >issue, which seems to be an end-point problem. Under some circumstances, >the endpoints aren't properly detected. I didn't attempt to comprehend >everything going on in this function, but the patch I made apparently >works. Please review it and apply it if it's acceptable. > > > Thanks for the patch. I'm doing something a little simpler now (the clip method). 
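For anyone curious, the "clip" idea amounts to clamping the interval index so evaluation points at (or beyond) the ends of the data reuse the boundary interval rather than triggering a special case. A rough illustration only, not the actual scipy.signal code:

from numpy import asarray, clip, floor

def interval_index(newx, x0, dx, nknots):
    # index of the knot interval containing each evaluation point,
    # clipped into [0, nknots - 1] so out-of-range points cannot
    # produce an out-of-range index
    xi = (asarray(newx, dtype=float) - x0) / dx
    ind = floor(xi).astype(int)
    return clip(ind, 0, nknots - 1)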
>The second issue is that the x array cannot be integers (a TypeError >gets raised). There doesn't seem to be any good reason for this (why >can't splines exist over integers?), so I submit that it's also a bug. >Unfortunately, that didn't look as easy for me to fix, so I leave it for >now. > > I'm not getting any errors for x being integers in the code you gave, perhaps you mean that when the new array to evaluate over is an array of integers we get errors, which is true and has been fixed. -Travis From ryanlists at gmail.com Tue Jan 31 21:00:05 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 31 Jan 2006 21:00:05 -0500 Subject: [SciPy-user] io.loadmat Message-ID: I am having trouble loading Matlab .mat files that loaded just fine under the old scipy. Here is the error message: In [1]: test=scipy.io.loadmat('figure5') --------------------------------------------------------------------------- exceptions.TypeError Traceback (most recent call last) /home/ryan/thesis/actuator_modeling/ /usr/lib/python2.4/site-packages/scipy/io/mio.py in loadmat(name, dict, appendmat, basename) 745 if not (0 in test_vals): # MATLAB version 5 format 746 fid.rewind() --> 747 thisdict = _loadv5(fid,basename) 748 if dict is not None: 749 dict.update(thisdict) /usr/lib/python2.4/site-packages/scipy/io/mio.py in _loadv5(fid, basename) 682 try: 683 var = var + 1 --> 684 el, varname = _get_element(fid) 685 if varname is None: 686 varname = '%s_%04d' % (basename,var) /usr/lib/python2.4/site-packages/scipy/io/mio.py in _get_element(fid) 642 fid.rewind(1) 643 # get the data tag --> 644 raw_tag = fid.read(1,'I') 645 646 # check for compressed /usr/lib/python2.4/site-packages/scipy/io/mio.py in read(self, count, stype, rtype, bs, c_is_b) 283 if count == 0: 284 return zeros(0,rtype) --> 285 retval = numpyio.fread(self, count, stype, rtype, bs) 286 if len(retval) == 1: 287 retval = retval[0] TypeError: argument 3 must be char, not type I attached the message in a previous message, but it was sent to a moderator because it is 600kb. I thought that was a reasonable attachment size, but I guess that message limit is 100kb. If anyone wants the file to try and help me with this, I will gladly send it off list or post it on my website. Thanks, Ryan From prabhu_r at users.sf.net Tue Jan 31 23:50:13 2006 From: prabhu_r at users.sf.net (Prabhu Ramachandran) Date: Wed, 1 Feb 2006 10:20:13 +0530 Subject: [SciPy-user] The new SciPy.org In-Reply-To: <9E4C75A6-04FD-494F-ADE1-B30ED28D3D16@nrl.navy.mil> References: <43DF4783.50007@enthought.com> <43DF7C9A.20403@ftw.at> <9E4C75A6-04FD-494F-ADE1-B30ED28D3D16@nrl.navy.mil> Message-ID: <17376.15877.541492.633333@prpc.aero.iitb.ac.in> >>>>> "Paul" == Paul Ray writes: Paul> On Jan 31, 2006, at 10:04 AM, Ed Schofield wrote: Paul> I'm not sure what the best solution is, but it brings up a Paul> question I have. What is the currently favored way to make Paul> a simple 3-d surface plot like the one of the bessel Paul> function on the front page? I didn't see that capability in Paul> matplotlib, but perhaps I missed it. If they had it, you Paul> would think there would be a 3-d plot screenshot, but there Paul> isn't: http://matplotlib.sourceforge.net/screenshots.html Paul> Any suggestions for easy 3-d surface plots of numpy arrays? 
You might find this of use: http://www.enthought.com/enthought/wiki/TVTK For specific examples of the kind seen on the front page, see here: http://www.enthought.com/enthought/wiki/TVTKIntroduction#tools-mlab Read the example source code here: http://www.enthought.com/enthought/browser/trunk/src/lib/enthought/tvtk/tools/mlab.py#L1195 TVTK itself works with numpy arrays, but getting the rest of the enthought tool suite to build cleanly is problematic since it relies on scipy_distutils. If you do choose to check out the SVN tree, you might want to look at Pearu's lib_numpy branch here: http://www.enthought.com/enthought/browser/branches/pearu/lib_numpy HTH. cheers, prabhu
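Whichever 3-d backend is used in the end, the data for a surface like the Bessel-function image on the front page can be built with plain numpy plus scipy.special; only the final rendering call depends on the plotting package (see the mlab.py source linked above for what the tvtk tools expect). A small sketch of the data side, assuming scipy.special is available:

from numpy import linspace, sqrt, newaxis
from scipy.special import j0

# sample j0(r) on a regular grid; z is the 2-d array you would hand
# to the surface-plotting routine of whatever package you settle on
x = linspace(-10.0, 10.0, 101)
y = linspace(-10.0, 10.0, 101)
r = sqrt(x[:, newaxis]**2 + y[newaxis, :]**2)
z = j0(r)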