From schofield at ftw.at Tue Aug 1 05:05:25 2006 From: schofield at ftw.at (Ed Schofield) Date: Tue, 01 Aug 2006 11:05:25 +0200 Subject: [SciPy-user] 2 more sparse questions In-Reply-To: References: Message-ID: <44CF1955.1080900@ftw.at> [CC to list] William Hunter wrote: > Ed; > > I'm still chipping away at this, ... > Thought I'd respond off-list as it might be too specific, but I'll add > it to the scipy wiki once I've figured it out. > > System: {U} = [K]{F} > Q1) For a sparse symmetric banded K, and an extremely sparse F (one to > five entries in a vector of length > 1200), what is the best solver to > use? > I don't know. SciPy only has wrappers for SuperLU and UMFPACK, so I suggest you read the documentation for those (e.g. http://www.cise.ufl.edu/research/sparse/umfpack/v5.0/UMFPACK/Doc/UserGuide.pdf) to find out what cases the routines are designed to handle. If you need something more specialized, try Roman Geus's PySparse. It's not yet been ported from Numeric to NumPy, although I did port the basic matrix objects a while ago (in the SciPy sandbox), and that went fine. And there's Tim Davis's CSparse (http://www.cise.ufl.edu/research/sparse/CSparse/), which could be wrapped with ctypes ... Perhaps Robert Cimrman can give you more specific advice ... he wrote the UMFPACK wrapper and knows more about sparse solvers than I do. > Q2) At the moment I'm just using linsolve.spsolve, but I also see that > there is a linsolve.splu. No docstring, obviously LU factorisation, but > what does one use then? > > I'll create an example 3 demonstrating the use. > Yeah, we need a docstring for this. I've never used it before either. But it's a wrapper for the SuperLU factorization functions like dgstrf(), and you could figure it out by reading the source and the SuperLU docs. If you could contribute a docstring I'd be very happy to commit it :) -- Ed From cimrman3 at ntc.zcu.cz Tue Aug 1 05:32:30 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 01 Aug 2006 11:32:30 +0200 Subject: [SciPy-user] 2 more sparse questions In-Reply-To: <44CF1955.1080900@ftw.at> References: <44CF1955.1080900@ftw.at> Message-ID: <44CF1FAE.2090101@ntc.zcu.cz> Ed Schofield wrote: > [CC to list] > > William Hunter wrote: >> Ed; >> >> I'm still chipping away at this, ... >> Thought I'd respond off-list as it might be too specific, but I'll add >> it to the scipy wiki once I've figured it out. >> >> System: {U} = [K]{F} >> Q1) For a sparse symmetric banded K, and an extremely sparse F (one to >> five entries in a vector of length > 1200), what is the best solver to >> use? >> > I don't know. SciPy only has wrappers for SuperLU and UMFPACK, so I > suggest you read the documentation for those (e.g. > http://www.cise.ufl.edu/research/sparse/umfpack/v5.0/UMFPACK/Doc/UserGuide.pdf) > to find out what cases the routines are designed to handle. If you need > something more specialized, try Roman Geus's PySparse. It's not yet been > ported from Numeric to NumPy, although I did port the basic matrix > objects a while ago (in the SciPy sandbox), and that went fine. And > there's Tim Davis's CSparse > (http://www.cise.ufl.edu/research/sparse/CSparse/), which could be > wrapped with ctypes ... Just note that UMFPACK is also written by Tim Davis (a part of UFsparse package), it is imho more heavy-weight than the sparse solvers in CSparse. Otherwise CSparse contains lots of good stuff that I would like to see wrapped. > Perhaps Robert Cimrman can give you more specific advice ... 
he wrote > the UMFPACK wrapper and knows more about sparse solvers than I do. I am also just a user of sparse solvers. And I am not aware of a solver using information on the sparsity of the right-hand side. Google is silent. Maybe we should all read Tim's 'Direct Methods for Sparse Linear Systems' :) >> Q2) At the moment I'm just using linsolve.spsolve, but I also see that >> there is a linsolve.splu. No docstring, obviously LU factorisation, but >> what does one use then? >> >> I'll create an example 3 demonstrating the use. >> > > Yeah, we need a docstring for this. I've never used it before either. > But it's a wrapper for the SuperLU factorization functions like > dgstrf(), and you could figure it out by reading the source and the > SuperLU docs. If you could contribute a docstring I'd be very happy to > commit it :) you can use the umfpack module directly (import scipy.linsolve.umfpack as um) to do LU, and than solve e.g. more right-hand sides - see scipy/linsolve/umfpack/umfpack.py docstring. hmm, help( um ) does not show it - just look into the source file. The wrapper is not complete for now, there are functions in the UMFPACK library to get the L, U factors as sparse matrices etc. - I will update it if there is enough interest. r. From willemjagter at gmail.com Tue Aug 1 10:45:17 2006 From: willemjagter at gmail.com (William Hunter) Date: Tue, 1 Aug 2006 16:45:17 +0200 Subject: [SciPy-user] 2 more sparse questions In-Reply-To: <44CF1FAE.2090101@ntc.zcu.cz> References: <44CF1955.1080900@ftw.at> <44CF1FAE.2090101@ntc.zcu.cz> Message-ID: <8b3894bc0608010745s30e6f58do95f56cc02596a2e@mail.gmail.com> On 01/08/06, Robert Cimrman wrote: > Ed Schofield wrote: > > [CC to list] > > > > William Hunter wrote: > >> Ed; > >> > >> I'm still chipping away at this, ... > >> Thought I'd respond off-list as it might be too specific, but I'll add > >> it to the scipy wiki once I've figured it out. > >> > >> System: {U} = [K]{F} > >> Q1) For a sparse symmetric banded K, and an extremely sparse F (one to > >> five entries in a vector of length > 1200), what is the best solver to > >> use? > >> > > I don't know. SciPy only has wrappers for SuperLU and UMFPACK, so I > > suggest you read the documentation for those (e.g. > > http://www.cise.ufl.edu/research/sparse/umfpack/v5.0/UMFPACK/Doc/UserGuide.pdf) > > to find out what cases the routines are designed to handle. If you need > > something more specialized, try Roman Geus's PySparse. It's not yet been > > ported from Numeric to NumPy, although I did port the basic matrix > > objects a while ago (in the SciPy sandbox), and that went fine. And > > there's Tim Davis's CSparse > > (http://www.cise.ufl.edu/research/sparse/CSparse/), which could be > > wrapped with ctypes ... > > Just note that UMFPACK is also written by Tim Davis (a part of UFsparse > package), it is imho more heavy-weight than the sparse solvers in > CSparse. Otherwise CSparse contains lots of good stuff that I would like > to see wrapped. > > > Perhaps Robert Cimrman can give you more specific advice ... he wrote > > the UMFPACK wrapper and knows more about sparse solvers than I do. > > I am also just a user of sparse solvers. And I am not aware of a solver > using information on the sparsity of the right-hand side. Google is > silent. Maybe we should all read Tim's 'Direct Methods for Sparse Linear > Systems' :) > > >> Q2) At the moment I'm just using linsolve.spsolve, but I also see that > >> there is a linsolve.splu. 
No docstring, obviously LU factorisation, but > >> what does one use then? > >> > >> I'll create an example 3 demonstrating the use. > >> > > > > Yeah, we need a docstring for this. I've never used it before either. > > But it's a wrapper for the SuperLU factorization functions like > > dgstrf(), and you could figure it out by reading the source and the > > SuperLU docs. If you could contribute a docstring I'd be very happy to > > commit it :) > > you can use the umfpack module directly (import scipy.linsolve.umfpack > as um) to do LU, and than solve e.g. more right-hand sides - see > scipy/linsolve/umfpack/umfpack.py docstring. > > hmm, help( um ) does not show it - just look into the source file. > > The wrapper is not complete for now, there are functions in the UMFPACK > library to get the L, U factors as sparse matrices etc. - I will update > it if there is enough interest. > > r. > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Thanks again, I have some reading to do... As an aside, I was pondering whether it would be worthwhile to create a Tentative Index for the Scipy_Tutorial, because at the moment it seems (for newcomers - like myself) as though it's only dealing with sparse matrices? I also think that the numpy and scipy wiki should look the same, for newcomers it may seem a bit strange that they're different, unless they know how wikis work. However, I might annoy some users if I go and change there wikis for the sake of looks... -- Regards, WH From nwagner at iam.uni-stuttgart.de Wed Aug 2 03:39:59 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 02 Aug 2006 09:39:59 +0200 Subject: [SciPy-user] Call for testing Message-ID: <44D056CF.6040605@iam.uni-stuttgart.de> Dear Scipy users, It's not my intention to annoy you, but is someone able to confirm the output (32bit.png) of test_det.py on a 32-bit machine ? The script is available via http://projects.scipy.org/scipy/scipy/ticket/223 I have no idea why I get different results on different architectures. Thanks in advance for testing. Nils From cimrman3 at ntc.zcu.cz Wed Aug 2 04:19:51 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 02 Aug 2006 10:19:51 +0200 Subject: [SciPy-user] Call for testing In-Reply-To: <44D056CF.6040605@iam.uni-stuttgart.de> References: <44D056CF.6040605@iam.uni-stuttgart.de> Message-ID: <44D06027.7060701@ntc.zcu.cz> Nils Wagner wrote: > Dear Scipy users, > > It's not my intention to annoy you, but > is someone able to confirm the output (32bit.png) of test_det.py on a > 32-bit machine ? > The script is available via > > http://projects.scipy.org/scipy/scipy/ticket/223 > > I have no idea why I get different results on different architectures. > > Thanks in advance for testing. I have a 32 bit machine, but the results look like yours 64bit.png, with scipy 0.5.0.2142 and numpy 1.0b2.dev2942 (latest SVN). r. 
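[A minimal sketch of the two usage patterns discussed in the sparse-solver thread above, pending William's "example 3". It is written against the scipy.linsolve names the thread mentions (spsolve, splu); the solve() method on the object returned by splu is an assumption (splu has no docstring, per the thread -- check the source and the SuperLU docs), and whether sparse.csc_matrix accepts a dense array in a given scipy version is also worth verifying.]

import numpy
from scipy import sparse, linsolve

# Small stand-in for a sparse, symmetric, banded stiffness matrix K.
K_dense = numpy.array([[ 4., -1.,  0.,  0.],
                       [-1.,  4., -1.,  0.],
                       [ 0., -1.,  4., -1.],
                       [ 0.,  0., -1.,  4.]])
K = sparse.csc_matrix(K_dense)

# Extremely sparse right-hand side: a single nonzero entry.
F = numpy.zeros(4)
F[1] = 1.0

# One-shot solve of K u = F, as in William's current code:
u = linsolve.spsolve(K, F)

# Factor once with splu, then reuse the factorisation for further
# right-hand sides (assumed interface: the returned object exposes .solve()):
lu = linsolve.splu(K)
u1 = lu.solve(F)

F2 = numpy.zeros(4)
F2[3] = -2.0
u2 = lu.solve(F2)
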
From wollez at gmx.net Wed Aug 2 04:18:15 2006 From: wollez at gmx.net (Wolfgang) Date: Wed, 02 Aug 2006 10:18:15 +0200 Subject: [SciPy-user] Call for testing In-Reply-To: <44D056CF.6040605@iam.uni-stuttgart.de> References: <44D056CF.6040605@iam.uni-stuttgart.de> Message-ID: Hi Nils, I've tried but my sandbox is missing delaunay from scipy.sandbox.delaunay import * ImportError: No module named delaunay Cheers Wolfgang Nils Wagner schrieb: > Dear Scipy users, > > It's not my intention to annoy you, but > is someone able to confirm the output (32bit.png) of test_det.py on a > 32-bit machine ? > The script is available via > > http://projects.scipy.org/scipy/scipy/ticket/223 > > I have no idea why I get different results on different architectures. > > Thanks in advance for testing. > > Nils From nwagner at iam.uni-stuttgart.de Wed Aug 2 04:40:02 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 02 Aug 2006 10:40:02 +0200 Subject: [SciPy-user] Call for testing In-Reply-To: References: <44D056CF.6040605@iam.uni-stuttgart.de> Message-ID: <44D064E2.8010408@iam.uni-stuttgart.de> Wolfgang wrote: > Hi Nils, > > I've tried but my sandbox is missing delaunay > > from scipy.sandbox.delaunay import * > ImportError: No module named delaunay > > > Cheers > Wolfgang > Wolfgang, Sorry delaunay is useless in the reduced scipt. So please remove it or use #. Thank you. Nils > Nils Wagner schrieb: > >> Dear Scipy users, >> >> It's not my intention to annoy you, but >> is someone able to confirm the output (32bit.png) of test_det.py on a >> 32-bit machine ? >> The script is available via >> >> http://projects.scipy.org/scipy/scipy/ticket/223 >> >> I have no idea why I get different results on different architectures. >> >> Thanks in advance for testing. >> >> Nils >> > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From elcorto at gmx.net Wed Aug 2 05:43:27 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Wed, 02 Aug 2006 11:43:27 +0200 Subject: [SciPy-user] Call for testing In-Reply-To: <44D056CF.6040605@iam.uni-stuttgart.de> References: <44D056CF.6040605@iam.uni-stuttgart.de> Message-ID: <44D073BF.50105@gmx.net> Nils Wagner wrote: > Dear Scipy users, > > It's not my intention to annoy you, but > is someone able to confirm the output (32bit.png) of test_det.py on a > 32-bit machine ? > The script is available via > > http://projects.scipy.org/scipy/scipy/ticket/223 > > I have no idea why I get different results on different architectures. > > Thanks in advance for testing. > > Nils > Same result as your 64 bit. My system: AMD Athlon(TM) XP 2400+ atlas3-base 3.6.0-20.2 (nothing special, from debian testing) numpy: 0.9.9.2612 scipy: 0.5.0.1949 cheers, steve -- Random number generation is the art of producing pure gibberish as quickly as possible. From jr at sun.ac.za Wed Aug 2 05:58:16 2006 From: jr at sun.ac.za (Johann Rohwer) Date: Wed, 2 Aug 2006 11:58:16 +0200 Subject: [SciPy-user] Call for testing In-Reply-To: <44D06027.7060701@ntc.zcu.cz> References: <44D056CF.6040605@iam.uni-stuttgart.de> <44D06027.7060701@ntc.zcu.cz> Message-ID: <200608021158.17079.jr@sun.ac.za> On Wednesday, 2 August 2006 10:19, Robert Cimrman wrote: > Nils Wagner wrote: > > Dear Scipy users, > > > > It's not my intention to annoy you, but > > is someone able to confirm the output (32bit.png) of test_det.py > > on a 32-bit machine ? 
> > The script is available via > > > > http://projects.scipy.org/scipy/scipy/ticket/223 > > > > I have no idea why I get different results on different > > architectures. > > > > Thanks in advance for testing. > > I have a 32 bit machine, but the results look like yours 64bit.png, > with scipy 0.5.0.2142 and numpy 1.0b2.dev2942 (latest SVN). I have a 32bit machine and get Nils's results in 32bit.png (scattered), i.e. the "buggy" version. However, I'm not running the latest numpy and scipy. Processor: Intel(R) Pentium(R) 4 CPU 2.60GHz ATLAS: 3.7.11 (self compiled with SSE2 support) In [1]: scipy.__version__ Out[1]: '0.4.9.1851' In [4]: numpy.__version__ Out[4]: '0.9.7.2339' OS: Mandriva 2006.0 Linux (2.6.12 kernel) HTH, Johann From nwagner at iam.uni-stuttgart.de Wed Aug 2 06:19:51 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 02 Aug 2006 12:19:51 +0200 Subject: [SciPy-user] Call for testing In-Reply-To: <200608021158.17079.jr@sun.ac.za> References: <44D056CF.6040605@iam.uni-stuttgart.de> <44D06027.7060701@ntc.zcu.cz> <200608021158.17079.jr@sun.ac.za> Message-ID: <44D07C47.3040303@iam.uni-stuttgart.de> Johann Rohwer wrote: > On Wednesday, 2 August 2006 10:19, Robert Cimrman wrote: > >> Nils Wagner wrote: >> >>> Dear Scipy users, >>> >>> It's not my intention to annoy you, but >>> is someone able to confirm the output (32bit.png) of test_det.py >>> on a 32-bit machine ? >>> The script is available via >>> >>> http://projects.scipy.org/scipy/scipy/ticket/223 >>> >>> I have no idea why I get different results on different >>> architectures. >>> >>> Thanks in advance for testing. >>> >> I have a 32 bit machine, but the results look like yours 64bit.png, >> with scipy 0.5.0.2142 and numpy 1.0b2.dev2942 (latest SVN). >> > > I have a 32bit machine and get Nils's results in 32bit.png > (scattered), i.e. the "buggy" version. > > Great, I am not out on a limb. > However, I'm not running the latest numpy and scipy. > I don't think that it is related to the version. Maybe an ATLAS issue ? I am looking forward to see more "exceptions" ;-) Nils > Processor: > Intel(R) Pentium(R) 4 CPU 2.60GHz > > ATLAS: 3.7.11 > (self compiled with SSE2 support) > > In [1]: scipy.__version__ > Out[1]: '0.4.9.1851' > > In [4]: numpy.__version__ > Out[4]: '0.9.7.2339' > > OS: Mandriva 2006.0 Linux (2.6.12 kernel) > > HTH, > Johann > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From cimrman3 at ntc.zcu.cz Wed Aug 2 06:40:21 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Wed, 02 Aug 2006 12:40:21 +0200 Subject: [SciPy-user] Call for testing In-Reply-To: <44D07C47.3040303@iam.uni-stuttgart.de> References: <44D056CF.6040605@iam.uni-stuttgart.de> <44D06027.7060701@ntc.zcu.cz> <200608021158.17079.jr@sun.ac.za> <44D07C47.3040303@iam.uni-stuttgart.de> Message-ID: <44D08115.5090004@ntc.zcu.cz> Nils Wagner wrote: > Great, I am not out on a limb. >> However, I'm not running the latest numpy and scipy. >> > I don't think that it is related to the version. Maybe an ATLAS issue ? > > I am looking forward to see more "exceptions" ;-) ok, my ATLAS (working well on Gentoo Linux) is lapack-atlas-3.6.0 r. 
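[For anyone else reporting back on this thread, a short snippet that prints the information being compared here -- Python/numpy/scipy versions plus the BLAS/LAPACK (ATLAS) build configuration. Only standard introspection calls are used; nothing below is specific to the ticket #223 script itself.]

import sys
import numpy
import scipy

print "Python :", sys.version.replace("\n", " ")
print "numpy  :", numpy.__version__
print "scipy  :", scipy.__version__

# show_config() prints the lapack_opt_info / blas_opt_info blocks, which
# reveal whether (and which) ATLAS was picked up at build time.
numpy.show_config()
scipy.show_config()
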
From lists.steve at arachnedesign.net Wed Aug 2 07:20:30 2006 From: lists.steve at arachnedesign.net (Steve Lianoglou) Date: Wed, 2 Aug 2006 07:20:30 -0400 Subject: [SciPy-user] Call for testing In-Reply-To: <44D056CF.6040605@iam.uni-stuttgart.de> References: <44D056CF.6040605@iam.uni-stuttgart.de> Message-ID: <34E2F6D9-C7F6-4D2D-AF13-7D712E0124C7@arachnedesign.net> Hi Nils et. al, > is someone able to confirm the output (32bit.png) of test_det.py on a > 32-bit machine ? > The script is available via > > http://projects.scipy.org/scipy/scipy/ticket/223 I just ran it on the newest scipy and almost newest numpy from svn and my output is the correct one (the 64bit.png) and I'm running on a 32bit machine (Intel Core Duo (MacBook Pro)). No atlas as such, tho I was under the impression I did install atlas ... there's even a libatlas.a in my /usr/local/lib ... hmm .. Anyway, scipy/numpy info is pasted below. -steve In [5]: scipy.__version__ Out[5]: '0.5.0.2142' In [6]: scipy.show_config() umfpack_info: NOT AVAILABLE dfftw_info: libraries = ['drfftw', 'dfftw'] library_dirs = ['/opt/local/lib'] define_macros = [('SCIPY_DFFTW_H', None)] include_dirs = ['/opt/local/include'] fft_opt_info: libraries = ['drfftw', 'dfftw'] library_dirs = ['/opt/local/lib'] define_macros = [('SCIPY_DFFTW_H', None)] include_dirs = ['/opt/local/include'] djbfft_info: NOT AVAILABLE lapack_opt_info: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] extra_compile_args = ['-msse3'] define_macros = [('NO_ATLAS_INFO', 3)] fftw2_info: NOT AVAILABLE fftw3_info: NOT AVAILABLE blas_opt_info: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] extra_compile_args = ['-msse3', '-I/System/Library/Frameworks/ vecLib.framework/Headers'] define_macros = [('NO_ATLAS_INFO', 3)] ============================= In [9]: numpy.__version__ Out[9]: '1.0b2.dev2940' In [10]: numpy.show_config() lapack_opt_info: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] extra_compile_args = ['-msse3'] define_macros = [('NO_ATLAS_INFO', 3)] blas_opt_info: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] extra_compile_args = ['-msse3', '-I/System/Library/Frameworks/ vecLib.framework/Headers'] define_macros = [('NO_ATLAS_INFO', 3)] From humufr at yahoo.fr Wed Aug 2 08:31:12 2006 From: humufr at yahoo.fr (humufr at yahoo.fr) Date: Wed, 2 Aug 2006 08:31:12 -0400 Subject: [SciPy-user] Problem with AstroLib - coords-0.2 In-Reply-To: References: Message-ID: <200608020831.13443.humufr@yahoo.fr> Hi, I look a little bit on this problem and I saw that the answer is not true but not false... It's true modulo 360 degree but it's false because (for what I know) right ascension is always positive. 
That can be a big problem if you want to transform it in hms (the answer is different): In [1]: import coords as C In [2]: ob=C.Position('12:34:45.34 -23:42:32.6') In [3]: ob.dd() Out[3]: (188.68891666666667, -23.709055555555555) In [4]: ob.j2000() Out[4]: (188.68891666666667, -23.709055555555555) In [5]: ob1=C.Position((188.68891666666667, -23.709055555555555)) In [6]: ob1.hmsdms() Out[6]: '12:34:45.340 -23:42:32.600' In [7]: ob1=C.Position((-171.31108333333333, -23.709055555555555)) In [8]: ob1.hmsdms() Out[8]: '11:25:14.660 -23:42:32.600' So like I did in the ticket: http://projects.scipy.org/astropy/astrolib/ticket/8 I suggest to change a little bit the function j2000 in position.py (and all function with the same problem) but perhaps I'm wrong, I don't think so ot in other hand that means that the hmsdms must be change to take care of this problem because it's j2000 who are false or hmsdms. I hope that help a little bit. N. Le mercredi 12 juillet 2006 21:25, Tara Murphy a ?crit?: > Hi, > I've just started playing around with coords-0.2 and am having a problem > reproducing the example shown on the wiki: > > http://www.scipy.org/AstroLibCoordsSnapshot > > I am using Python2.4 and I've pasted my session below. When I try ob.dd() > > and ob.j2000() the answers are not the same: > >>> import coords as C > >>> ob=C.Position('12:34:45.34 -23:42:32.6') > >>> ob.hmsdms() > > '12:34:45.340 -23:42:32.600' > > >>> ob.dd() > > (188.68891666666667, -23.709055555555555) > > >>> ob.j2000() > > (-171.31108333333333, -23.709055555555555) > > > Is this something I'm misinterpreting, or is it a bug? > > thanks, > Tara > > ----- > Dr. Tara Murphy > ARC Postdoctoral Fellow > > School of IT | School of Physics > Room 448 | Room 565 > University of Sydney | University of Sydney > P: +61 2 9351 4723 | P: +61 2 9351 3041 > E: tm at it.usyd.edu.au | E: tara at physics.usyd.edu.au > > http://www.it.usyd.edu.au/~info1903 > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From nwagner at iam.uni-stuttgart.de Wed Aug 2 13:34:58 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 02 Aug 2006 19:34:58 +0200 Subject: [SciPy-user] Call for testing In-Reply-To: <34E2F6D9-C7F6-4D2D-AF13-7D712E0124C7@arachnedesign.net> References: <44D056CF.6040605@iam.uni-stuttgart.de> <34E2F6D9-C7F6-4D2D-AF13-7D712E0124C7@arachnedesign.net> Message-ID: On Wed, 2 Aug 2006 07:20:30 -0400 Steve Lianoglou wrote: > Hi Nils et. al, > >> is someone able to confirm the output (32bit.png) of >>test_det.py on a >> 32-bit machine ? >> The script is available via >> >> http://projects.scipy.org/scipy/scipy/ticket/223 > > I just ran it on the newest scipy and almost newest >numpy from svn and my output is the correct one (the >64bit.png) and I'm running on a 32bit machine (Intel >Core Duo (MacBook Pro)). > > > No atlas as such, tho I was under the impression I did >install atlas ... there's even a libatlas.a in my >/usr/local/lib ... hmm .. > > Anyway, scipy/numpy info is pasted below. 
> > -steve > > > In [5]: scipy.__version__ > Out[5]: '0.5.0.2142' > > In [6]: scipy.show_config() > umfpack_info: > NOT AVAILABLE > > dfftw_info: > libraries = ['drfftw', 'dfftw'] > library_dirs = ['/opt/local/lib'] > define_macros = [('SCIPY_DFFTW_H', None)] > include_dirs = ['/opt/local/include'] > > fft_opt_info: > libraries = ['drfftw', 'dfftw'] > library_dirs = ['/opt/local/lib'] > define_macros = [('SCIPY_DFFTW_H', None)] > include_dirs = ['/opt/local/include'] > > djbfft_info: > NOT AVAILABLE > > lapack_opt_info: > extra_link_args = ['-Wl,-framework', >'-Wl,Accelerate'] > extra_compile_args = ['-msse3'] > define_macros = [('NO_ATLAS_INFO', 3)] > > fftw2_info: > NOT AVAILABLE > > fftw3_info: > NOT AVAILABLE > > blas_opt_info: > extra_link_args = ['-Wl,-framework', >'-Wl,Accelerate'] > extra_compile_args = ['-msse3', >'-I/System/Library/Frameworks/ > vecLib.framework/Headers'] > define_macros = [('NO_ATLAS_INFO', 3)] > > ============================= > > > In [9]: numpy.__version__ > Out[9]: '1.0b2.dev2940' > > In [10]: numpy.show_config() > lapack_opt_info: > extra_link_args = ['-Wl,-framework', >'-Wl,Accelerate'] > extra_compile_args = ['-msse3'] > define_macros = [('NO_ATLAS_INFO', 3)] > > blas_opt_info: > extra_link_args = ['-Wl,-framework', >'-Wl,Accelerate'] > extra_compile_args = ['-msse3', >'-I/System/Library/Frameworks/ > vecLib.framework/Headers'] > define_macros = [('NO_ATLAS_INFO', 3)] > Thank you all for running the test ! Finally I have disabled ATLAS by export ATLAS=None Good news is that I get the correct result. So I guess it's definitely an ATLAS issue. Bad news is that scipy.test(1) results in FAILED (failures=17, errors=4) More details below. Can someone reproduce these failures (No ATLAS) ? >>> numpy.__version__ '1.0b2.dev2943' >>> scipy.__version__ '0.5.0.2142' >>> ====================================================================== ERROR: Compare eigenvalues of eigvals_banded with those of linalg.eig. 
---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/linalg/tests/test_decomp.py", line 270, in check_eigvals_banded select='i', select_range=(ind1, ind2) ) File "/usr/lib/python2.4/site-packages/scipy/linalg/decomp.py", line 381, in eigvals_banded select_range=select_range) File "/usr/lib/python2.4/site-packages/scipy/linalg/decomp.py", line 362, in eig_banded if info>0: raise LinAlgError,"eig algorithm did not converge" LinAlgError: eig algorithm did not converge ====================================================================== ERROR: check_zero (scipy.linalg.tests.test_matfuncs.test_expm) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/linalg/tests/test_matfuncs.py", line 105, in check_zero assert_array_almost_equal(expm2(a),[[1,0],[0,1]]) File "/usr/lib/python2.4/site-packages/scipy/linalg/matfuncs.py", line 71, in expm2 vri = inv(vr) File "/usr/lib/python2.4/site-packages/scipy/linalg/basic.py", line 180, in inv a1 = asarray_chkfinite(a) File "/usr/lib/python2.4/site-packages/numpy/lib/function_base.py", line 181, in asarray_chkfinite raise ValueError, "array must not contain infs or NaNs" ValueError: array must not contain infs or NaNs ====================================================================== ERROR: check_defective1 (scipy.linalg.tests.test_matfuncs.test_signm) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/linalg/tests/test_matfuncs.py", line 46, in check_defective1 r = signm(a) File "/usr/lib/python2.4/site-packages/scipy/linalg/matfuncs.py", line 274, in signm iS0 = inv(S0) File "/usr/lib/python2.4/site-packages/scipy/linalg/basic.py", line 180, in inv a1 = asarray_chkfinite(a) File "/usr/lib/python2.4/site-packages/numpy/lib/function_base.py", line 181, in asarray_chkfinite raise ValueError, "array must not contain infs or NaNs" ValueError: array must not contain infs or NaNs ====================================================================== ERROR: check_defective3 (scipy.linalg.tests.test_matfuncs.test_signm) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/linalg/tests/test_matfuncs.py", line 67, in check_defective3 r = signm(a) File "/usr/lib/python2.4/site-packages/scipy/linalg/matfuncs.py", line 274, in signm iS0 = inv(S0) File "/usr/lib/python2.4/site-packages/scipy/linalg/basic.py", line 180, in inv a1 = asarray_chkfinite(a) File "/usr/lib/python2.4/site-packages/numpy/lib/function_base.py", line 181, in asarray_chkfinite raise ValueError, "array must not contain infs or NaNs" ValueError: array must not contain infs or NaNs ====================================================================== FAIL: Compare eigenvalues and eigenvectors of eig_banded ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/linalg/tests/test_decomp.py", line 314, in check_eig_banded self.w_sym_lin[ind1:ind2+1]) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 199, in assert_array_compare assert cond, msg AssertionError: 
Arrays are not almost equal (shapes (0,), (5,) mismatch) x: array([], dtype=float64) y: array([-0.52994106, -0.43023792, 0.86217766, 1.66090423, 2.84350424]) ====================================================================== FAIL: check_simple (scipy.linalg.tests.test_decomp.test_svdvals) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/linalg/tests/test_decomp.py", line 508, in check_simple assert s[0]>=s[1]>=s[2] AssertionError ====================================================================== FAIL: check_simple_complex (scipy.linalg.tests.test_decomp.test_svdvals) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/linalg/tests/test_decomp.py", line 526, in check_simple_complex assert s[0]>=s[1]>=s[2] AssertionError ====================================================================== FAIL: check_syevr_irange_high (scipy.lib.tests.test_lapack.test_flapack_double) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 74, in check_syevr_irange_high def check_syevr_irange_high(self): self.check_syevr_irange(irange=[1,2]) File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 66, in check_syevr_irange assert_array_almost_equal(w,exact_w[rslice]) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 0., 0.]) y: array([ 0.48769389, 9.18223045]) ====================================================================== FAIL: check_syevr_irange_low (scipy.lib.tests.test_lapack.test_flapack_double) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 70, in check_syevr_irange_low def check_syevr_irange_low(self): self.check_syevr_irange(irange=[0,1]) File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 66, in check_syevr_irange assert_array_almost_equal(w,exact_w[rslice]) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 0., 0.]) y: array([-0.66992434, 0.48769389]) ====================================================================== FAIL: check_syevr_irange_mid (scipy.lib.tests.test_lapack.test_flapack_double) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 72, in check_syevr_irange_mid def check_syevr_irange_mid(self): self.check_syevr_irange(irange=[1,1]) File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 66, in check_syevr_irange assert_array_almost_equal(w,exact_w[rslice]) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in 
assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 0.]) y: array([ 0.48769389]) ====================================================================== FAIL: check_syevr_vrange (scipy.lib.tests.test_lapack.test_flapack_double) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 91, in check_syevr_vrange assert_array_almost_equal(w,ew) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 4.5, 4.5, 4.5]) y: array([-0.66992434, 0.48769389, 9.18223045]) ====================================================================== FAIL: check_syevr_vrange_high (scipy.lib.tests.test_lapack.test_flapack_double) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 100, in check_syevr_vrange_high def check_syevr_vrange_high(self): self.check_syevr_vrange(vrange=[1,10]) File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 91, in check_syevr_vrange assert_array_almost_equal(w,ew) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 5.5]) y: array([ 9.18223045]) ====================================================================== FAIL: check_syevr_vrange_low (scipy.lib.tests.test_lapack.test_flapack_double) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 96, in check_syevr_vrange_low def check_syevr_vrange_low(self): self.check_syevr_vrange(vrange=[-1,1]) File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 91, in check_syevr_vrange assert_array_almost_equal(w,ew) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 0., 0.]) y: array([-0.66992434, 0.48769389]) ====================================================================== FAIL: check_syevr_vrange_mid (scipy.lib.tests.test_lapack.test_flapack_double) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 98, in check_syevr_vrange_mid def check_syevr_vrange_mid(self): self.check_syevr_vrange(vrange=[0,1]) File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 91, in check_syevr_vrange assert_array_almost_equal(w,ew) File 
"/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 0.5]) y: array([ 0.48769389]) ====================================================================== FAIL: check_syevr_irange_high (scipy.lib.tests.test_lapack.test_flapack_float) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 74, in check_syevr_irange_high def check_syevr_irange_high(self): self.check_syevr_irange(irange=[1,2]) File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 66, in check_syevr_irange assert_array_almost_equal(w,exact_w[rslice]) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 0., 0.], dtype=float32) y: array([ 0.48769389, 9.18223045]) ====================================================================== FAIL: check_syevr_irange_low (scipy.lib.tests.test_lapack.test_flapack_float) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 70, in check_syevr_irange_low def check_syevr_irange_low(self): self.check_syevr_irange(irange=[0,1]) File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 66, in check_syevr_irange assert_array_almost_equal(w,exact_w[rslice]) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 0., 0.], dtype=float32) y: array([-0.66992434, 0.48769389]) ====================================================================== FAIL: check_syevr_irange_mid (scipy.lib.tests.test_lapack.test_flapack_float) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 72, in check_syevr_irange_mid def check_syevr_irange_mid(self): self.check_syevr_irange(irange=[1,1]) File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 66, in check_syevr_irange assert_array_almost_equal(w,exact_w[rslice]) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 0.], dtype=float32) y: array([ 0.48769389]) ====================================================================== FAIL: check_syevr_vrange (scipy.lib.tests.test_lapack.test_flapack_float) ---------------------------------------------------------------------- Traceback (most recent call last): File 
"/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 91, in check_syevr_vrange assert_array_almost_equal(w,ew) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 4.5, 4.5, 4.5], dtype=float32) y: array([-0.66992434, 0.48769389, 9.18223045]) ====================================================================== FAIL: check_syevr_vrange_high (scipy.lib.tests.test_lapack.test_flapack_float) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 100, in check_syevr_vrange_high def check_syevr_vrange_high(self): self.check_syevr_vrange(vrange=[1,10]) File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 91, in check_syevr_vrange assert_array_almost_equal(w,ew) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 5.5], dtype=float32) y: array([ 9.18223045]) ====================================================================== FAIL: check_syevr_vrange_low (scipy.lib.tests.test_lapack.test_flapack_float) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 96, in check_syevr_vrange_low def check_syevr_vrange_low(self): self.check_syevr_vrange(vrange=[-1,1]) File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 91, in check_syevr_vrange assert_array_almost_equal(w,ew) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 0., 0.], dtype=float32) y: array([-0.66992434, 0.48769389]) ====================================================================== FAIL: check_syevr_vrange_mid (scipy.lib.tests.test_lapack.test_flapack_float) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 98, in check_syevr_vrange_mid def check_syevr_vrange_mid(self): self.check_syevr_vrange(vrange=[0,1]) File "/usr/lib/python2.4/site-packages/scipy/lib/lapack/tests/esv_tests.py", line 91, in check_syevr_vrange assert_array_almost_equal(w,ew) File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 230, in assert_array_almost_equal header='Arrays are not almost equal') File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 215, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 100.0%) x: array([ 0.5], dtype=float32) y: array([ 0.48769389]) ---------------------------------------------------------------------- Ran 1552 tests in 3.979s From bhendrix at enthought.com Wed 
Aug 2 13:46:12 2006 From: bhendrix at enthought.com (Bryce Hendrix) Date: Wed, 02 Aug 2006 12:46:12 -0500 Subject: [SciPy-user] ANN: Python Enthought Edition 1.0.0 Released Message-ID: <44D0E4E4.4020304@enthought.com> Enthought is pleased to announce the release of Python Enthought Edition Version 1.0.0 (http://code.enthought.com/enthon/) -- a python distribution for Windows. About Python Enthought Edition: ------------------------------- Python 2.4.3, Enthought Edition is a kitchen-sink-included Python distribution for Windows including the following packages out of the box: Numpy SciPy IPython Enthought Tool Suite wxPython PIL mingw MayaVi Scientific Python VTK and many more... More information is available about all Open Source code written and released by Enthought, Inc. at http://code.enthought.com 1.0.0 Release Notes ------------------------- A lot of work has gone into testing this release, and it is our most stable release to date, but there are a couple of caveats: * The generated documentation index entries are missing. The full text search does work and the table of contents is complete, so this feature will be pushed to version 1.1.0. * IPython may cause problems when starting the first time if a previous version of IPython was ran. If you see "WARNING: could not import user config", either follow the directions which follow the warning. * Some users are reporting that older matplotlibrc files are not compatible with the version of matplotlib installed with this release. Please refer to the matplotlib mailing list (http://sourceforge.net/mail/?group_id=80706) for further help. We are grateful to everyone who has helped test this release. If you'd like to contribute or report a bug, you can do so at https://svn.enthought.com/enthought. From wollez at gmx.net Wed Aug 2 14:13:28 2006 From: wollez at gmx.net (Wolfgang) Date: Wed, 02 Aug 2006 20:13:28 +0200 Subject: [SciPy-user] ImportError: cannot import name Int8 Message-ID: Hi, I've tried to run the test file from two threads above but I only get an error: File "test_det.py", line 7, in ? from pylab import plot, show, subplot, legend, xlim, ylim, xlabel, ylabel, scatter, figure, semilogy, title, pcolor, contour, cm, fill, contourf, cm, colorbar, savefig File "C:\Python24\Lib\site-packages\pylab.py", line 1, in ? from matplotlib.pylab import * File "C:\Python24\Lib\site-packages\matplotlib\pylab.py", line 196, in ? import cm File "C:\Python24\Lib\site-packages\matplotlib\cm.py", line 5, in ? import colors File "C:\Python24\Lib\site-packages\matplotlib\colors.py", line 33, in ? from numerix import array, arange, take, put, Float, Int, where, \ File "C:\Python24\Lib\site-packages\matplotlib\numerix\__init__.py", line 68, in ? from _sp_imports import nx, infinity File "C:\Python24\Lib\site-packages\matplotlib\numerix\_sp_imports.py", line 1, in ? 
from numpy import Int8, UInt8, \ ImportError: cannot import name Int8 Installed are the binaries from sourceforge: Python 2.4.3 matplotlib 0.87.3 numarray 1.5.1 numeric 24.2 numpy 1.0b1 scipy 0.5.0 debug annoying output: C:\Python24\lib\site-packages\matplotlib\__init__.py:947: UserWarning: Bad val "free" on line #200 "image.aspect : free # free | preserve" in file "C:\Documents and Settings\s0167070\.matplotlib\matplotlibrc" not a valid aspect specification warnings.warn('Bad val "%s" on line #%d\n\t"%s"\n\tin file "%s"\n\t%s' % ( loaded rc file C:\Documents and Settings\s0167070\.matplotlib\matplotlibrc matplotlib version 0.87.3 verbose.level debug-annoying interactive is False platform is win32 loaded modules: ['numpy.distutils.misc_util', 'scipy.linalg.lapack', '_bisect', 'scipy.io.numpyio', 'numpy.core.info', 'scipy.cluster.info', 'numpy.distutils.re', 'scipy.linalg.new', 'random', 'numpy.core.defchararray', 'numpy.distutils.numpy', '__future__', 'scipy.linalg.info', 'matplotlib.tempfile', 'numpy.lib.getlimits', 'bisect', 'scipy.special.basic', 'scipy.io.cPickle', 'scipy.linsolve.umfpack', 'numpy.core.numerictypes', 'scipy.misc.inspect', 'numpy.random.mtrand', 'scipy.linsolve', 'matplotlib.shutil', 'scipy.io.info', 'distutils', 'scipy.io.scipy', 'numpy.random.info', 'tempfile', 'mmap', 'md5', 'numpy.linalg', 'numpy.core.os', 'numpy.testing.types', 'numpy.testing', 'collections', 'scipy.special.__future__', 'numpy.core.umath', 'numpy.distutils.ccompiler', 'distutils.types', 'codecs', 'numpy.distutils.exec_command', 'scipy.weave.info', 'numpy.core.scalarmath', 'zipimport', 'string', 'numpy.testing.os', 'numpy.lib.arraysetops', 'numpy.testing.unittest', 'numpy.lib.math', 'repr', 'matplotlib.__future__', 'numpy.distutils.tempfile', 'scipy', 'numpy.testing.re', 'itertools', 'numpy.version', 'numpy.lib.re', 'distutils.re', 'numpy.distutils.info', 'scipy.io.mio', 'numpy.testing.operator', 'numpy.lib.type_check', 'scipy.linalg.scipy', 'scipy.misc.common', 'distutils.dir_util', 'numpy.distutils.distutils', 'scipy.integrate.info', 'signal', 'numpy.lib.types', 'numpy.numpy', 'shelve', 'pydoc', 'numpy.distutils.copy', 'numpy.oldnumeric.olddefaults', 'scipy.signal.info', 'scipy.linsolve._zsuperlu', 'numpy.core.multiarray', 'matplotlib.pytz', 'scipy.linalg', 'distutils.log', 'distutils.copy', 'dis', 'distutils.version', 'cStringIO', 'scipy.linsolve._ssuperlu', 'scipy.linalg.matfuncs', 'scipy.linsolve._superlu', 'numpy.distutils', 'locale', 'numpy.add_newdocs', 'scipy.special._cephes', 'distutils.sysconfig', 'scipy.linalg.flapack', 'scipy.linsolve.umfpack.re', 'numpy.oldnumeric.sys', 'matplotlib.dateutil', 'scipy.version', 'numpy.lib.sys', 'encodings', 'scipy.stats.info', 'numpy.oldnumeric', 'shutil', 'numpy.oldnumeric.cPickle', 'numpy.distutils.__version__', 'dateutil', 'scipy.io.cStringIO', 'scipy.linalg._iterative', 'scipy.misc.pydoc', 'numpy.imp', 'scipy.misc.info', 'scipy.misc.numpy', 'token', 'numpy.lib.numpy', 'numpy.dft.helper', 're', 'numpy.lib._compiled_base', 'scipy.io.data_store', 'ntpath', 'scipy.sparse.bisect', 'numpy.core.mmap', 'new', 'math', 'scipy.io.zlib', 'scipy.linsolve.linsolve', 'scipy.sparse.sparsetools', 'numpy.core.ma', 'scipy.misc.helpmod', 'UserDict', 'scipy.sparse.copy', 'inspect', 'distutils.os', 'scipy.numpy', 'scipy.special.orthogonal', 'exceptions', 'numpy.lib.info', 'distutils.dep_util', 'numpy.lib.warnings', 'scipy.linalg.decomp', 'numpy.core.sys', 'numpy.core._sort', 'scipy.lib.lapack.info', 'numpy.os', 'matplotlib.re', 'pytz.bisect', 'distutils.ccompiler', 
'numpy.distutils.string', 'pylab', '_locale', 'scipy.io.array_import', 'thread', 'sre', 'numpy.core.memmap', 'traceback', 'scipy.interpolate.info', 'scipy.linsolve.umfpack.imp', 'numpy.core._internal', 'distutils.spawn', 'opcode', 'scipy.io', 'numpy.testing.imp', 'numpy.linalg.lapack_lite', 'scipy.io.dumb_shelve', 'distutils.sys', 'os', 'scipy.io.re', 'numpy.dft.info', 'numpy.distutils.new', 'scipy.special.scipy', 'numpy.dft.types', 'numpy.core.string', 'numpy.dft', '_sre', 'unittest', 'numpy.testing.sys', 'numpy.random', 'numpy.linalg.numpy', '__builtin__', 'scipy.linsolve.numpy', 'numpy.lib.twodim_base', 'scipy.linalg.blas', 'numpy.core.cPickle', 'scipy.sparse.operator', 'scipy.linalg._flinalg', 'operator', 'distutils.util', 'sre_constants', 'distutils.string', 'scipy.linsolve._dsuperlu', 'matplotlib.datetime', 'scipy.special.types', 'numpy.distutils.unixccompiler', 'posixpath', 'imp', 'numpy.oldnumeric.numpy', 'errno', 'numpy.distutils.imp', 'binascii', 'scipy.optimize.info', 'scipy.linalg.fblas', 'numpy.core.arrayprint', 'datetime', 'matplotlib.md5', 'types', 'scipy.linsolve.scipy', 'scipy.special.info', 'numpy.core.numpy', 'numpy', 'sets', 'numpy.dft.fftpack', 'numpy.core.defmatrix', 'scipy.sparse.numpy', 'cPickle', 'scipy.linsolve.umfpack.scipy', 'pytz.sys', 'scipy.linsolve._csuperlu', 'scipy.sparse.itertools', 'scipy.sparse', '_codecs', 'tokenize', 'scipy.lib.blas.info', 'matplotlib.sys', 'encodings.cp1252', 'scipy.special.numpy', 'pytz', 'scipy.io.sys', 'numpy.testing.utils', 'scipy.io.mmio', 'scipy.linsolve.umfpack.umfpack', 'numpy.distutils.glob', 'numpy.lib.ufunclike', 'copy', 'numpy.core.re', 'struct', 'numpy.core.fromnumeric', 'scipy.linalg.warnings', 'numpy.core.copy_reg', 'numpy.lib.scimath', 'scipy.special.specfun', 'numpy.lib', 'numpy.random.numpy', 'scipy.io.types', 'scipy.linsolve.umfpack.info', 'numpy.dft.fftpack_lite', 'scipy.sparse.info', 'encodings.aliases', 'numpy.lib.function_base', 'fnmatch', 'sre_parse', 'scipy.os', 'scipy.linalg.basic', 'scipy.linalg.linalg_version', 'numpy.distutils.__config__', 'distutils.distutils', 'scipy.linalg.flinalg', 'copy_reg', 'sre_compile', '_random', 'numpy.testing.glob', 'scipy.io.numpy', 'site', 'numpy.lib.polynomial', 'numpy._import_tools', 'numpy.glob', 'scipy.ndimage.info', 'numpy.lib.time', 'scipy.linalg.cblas', '__main__', 'numpy.dft.numpy', 'numpy.core.records', 'numpy.core._dotblas', 'numpy.sys', 'scipy.special', 'numpy.oldnumeric.types', 'matplotlib.os', 'numpy.testing.traceback', 'strop', 'numpy.testing.numpytest', 'numpy.core.numeric', 'numpy.linalg.info', 'distutils.unixccompiler', 'encodings.codecs', 'pytz.datetime', 'scipy.maxentropy.info', 'numpy.core', 'numpy.testing.info', 'scipy.linalg.iterative', 'numpy.distutils.sys', 'encodings.exceptions', 'numpy.distutils.log', 'nt', 'pytz.sets', 'scipy.io.os', 'stat', 'scipy.linalg.clapack', 'numpy.lib.utils', 'numpy.lib.index_tricks', 'numpy.distutils.os', 'warnings', 'encodings.types', 'glob', 'numpy.lib.shape_base', 'numpy.dual', 'numpy.core.types', 'sys', 'scipy.sparse.sparse', 'numpy.core.warnings', 'zlib', 'numpy.core.__builtin__', 'distutils.file_util', 'scipy.fftpack.info', 'numpy.lib.os', 'numpy.__config__', 'scipy.misc.types', 'os.path', 'matplotlib', 'scipy.linsolve.umfpack.numpy', 'scipy.linalg.calc_lwork', 'numpy.oldnumeric.compat', 'scipy.misc', 'scipy.misc.sys', 'matplotlib.distutils', 'numpy.core.cStringIO', 'pytz.tzinfo', 'distutils.errors', 'scipy.lib.info', 'scipy.linalg.numpy', 'linecache', 'scipy.io.struct', 'scipy.linsolve.info', 'time', 
'scipy.io.shelve', 'scipy.io.pickler', 'numpy.lib.machar', 'scipy.misc.limits', 'numpy.linalg.linalg', 'matplotlib.warnings', 'scipy.__config__', 'numpy.testing.numpy'] Traceback (most recent call last): File "test_det.py", line 7, in ? from pylab import plot, show, subplot, legend, xlim, ylim, xlabel, ylabel, scatter, figure, semilogy, title, pcolor, contour, cm, fill, contourf, cm, colorbar, savefig File "C:\Python24\Lib\site-packages\pylab.py", line 1, in ? from matplotlib.pylab import * File "C:\Python24\Lib\site-packages\matplotlib\pylab.py", line 196, in ? import cm File "C:\Python24\Lib\site-packages\matplotlib\cm.py", line 5, in ? import colors File "C:\Python24\Lib\site-packages\matplotlib\colors.py", line 33, in ? from numerix import array, arange, take, put, Float, Int, where, \ File "C:\Python24\Lib\site-packages\matplotlib\numerix\__init__.py", line 68, in ? from _sp_imports import nx, infinity File "C:\Python24\Lib\site-packages\matplotlib\numerix\_sp_imports.py", line 1, in ? from numpy import Int8, UInt8, \ ImportError: cannot import name Int8 >Exit code: 1 Time: 1.174 From wollez at gmx.net Wed Aug 2 14:25:47 2006 From: wollez at gmx.net (Wolfgang) Date: Wed, 02 Aug 2006 20:25:47 +0200 Subject: [SciPy-user] ImportError: cannot import name Int8 GJKRS Message-ID: Hi, I've tried to run the test file from two threads above but I only get an error: File "test_det.py", line 7, in ? from pylab import plot, show, subplot, legend, xlim, ylim, xlabel, ylabel, scatter, figure, semilogy, title, pcolor, contour, cm, fill, contourf, cm, colorbar, savefig File "C:\Python24\Lib\site-packages\pylab.py", line 1, in ? from matplotlib.pylab import * File "C:\Python24\Lib\site-packages\matplotlib\pylab.py", line 196, in ? import cm File "C:\Python24\Lib\site-packages\matplotlib\cm.py", line 5, in ? import colors File "C:\Python24\Lib\site-packages\matplotlib\colors.py", line 33, in ? from numerix import array, arange, take, put, Float, Int, where, \ File "C:\Python24\Lib\site-packages\matplotlib\numerix\__init__.py", line 68, in ? from _sp_imports import nx, infinity File "C:\Python24\Lib\site-packages\matplotlib\numerix\_sp_imports.py", line 1, in ? 
from numpy import Int8, UInt8, \ ImportError: cannot import name Int8 Installed are the binaries from sourceforge: Python 2.4.3 matplotlib 0.87.3 numarray 1.5.1 numeric 24.2 numpy 1.0b1 scipy 0.5.0 debug annoying output: C:\Python24\lib\site-packages\matplotlib\__init__.py:947: UserWarning: Bad val "free" on line #200 "image.aspect : free # free | preserve" in file "C:\Documents and Settings\s0167070\.matplotlib\matplotlibrc" not a valid aspect specification warnings.warn('Bad val "%s" on line #%d\n\t"%s"\n\tin file "%s"\n\t%s' % ( loaded rc file C:\Documents and Settings\s0167070\.matplotlib\matplotlibrc matplotlib version 0.87.3 verbose.level debug-annoying interactive is False platform is win32 loaded modules: ['numpy.distutils.misc_util', 'scipy.linalg.lapack', '_bisect', 'scipy.io.numpyio', 'numpy.core.info', 'scipy.cluster.info', 'numpy.distutils.re', 'scipy.linalg.new', 'random', 'numpy.core.defchararray', 'numpy.distutils.numpy', '__future__', 'scipy.linalg.info', 'matplotlib.tempfile', 'numpy.lib.getlimits', 'bisect', 'scipy.special.basic', 'scipy.io.cPickle', 'scipy.linsolve.umfpack', 'numpy.core.numerictypes', 'scipy.misc.inspect', 'numpy.random.mtrand', 'scipy.linsolve', 'matplotlib.shutil', 'scipy.io.info', 'distutils', 'scipy.io.scipy', 'numpy.random.info', 'tempfile', 'mmap', 'md5', 'numpy.linalg', 'numpy.core.os', 'numpy.testing.types', 'numpy.testing', 'collections', 'scipy.special.__future__', 'numpy.core.umath', 'numpy.distutils.ccompiler', 'distutils.types', 'codecs', 'numpy.distutils.exec_command', 'scipy.weave.info', 'numpy.core.scalarmath', 'zipimport', 'string', 'numpy.testing.os', 'numpy.lib.arraysetops', 'numpy.testing.unittest', 'numpy.lib.math', 'repr', 'matplotlib.__future__', 'numpy.distutils.tempfile', 'scipy', 'numpy.testing.re', 'itertools', 'numpy.version', 'numpy.lib.re', 'distutils.re', 'numpy.distutils.info', 'scipy.io.mio', 'numpy.testing.operator', 'numpy.lib.type_check', 'scipy.linalg.scipy', 'scipy.misc.common', 'distutils.dir_util', 'numpy.distutils.distutils', 'scipy.integrate.info', 'signal', 'numpy.lib.types', 'numpy.numpy', 'shelve', 'pydoc', 'numpy.distutils.copy', 'numpy.oldnumeric.olddefaults', 'scipy.signal.info', 'scipy.linsolve._zsuperlu', 'numpy.core.multiarray', 'matplotlib.pytz', 'scipy.linalg', 'distutils.log', 'distutils.copy', 'dis', 'distutils.version', 'cStringIO', 'scipy.linsolve._ssuperlu', 'scipy.linalg.matfuncs', 'scipy.linsolve._superlu', 'numpy.distutils', 'locale', 'numpy.add_newdocs', 'scipy.special._cephes', 'distutils.sysconfig', 'scipy.linalg.flapack', 'scipy.linsolve.umfpack.re', 'numpy.oldnumeric.sys', 'matplotlib.dateutil', 'scipy.version', 'numpy.lib.sys', 'encodings', 'scipy.stats.info', 'numpy.oldnumeric', 'shutil', 'numpy.oldnumeric.cPickle', 'numpy.distutils.__version__', 'dateutil', 'scipy.io.cStringIO', 'scipy.linalg._iterative', 'scipy.misc.pydoc', 'numpy.imp', 'scipy.misc.info', 'scipy.misc.numpy', 'token', 'numpy.lib.numpy', 'numpy.dft.helper', 're', 'numpy.lib._compiled_base', 'scipy.io.data_store', 'ntpath', 'scipy.sparse.bisect', 'numpy.core.mmap', 'new', 'math', 'scipy.io.zlib', 'scipy.linsolve.linsolve', 'scipy.sparse.sparsetools', 'numpy.core.ma', 'scipy.misc.helpmod', 'UserDict', 'scipy.sparse.copy', 'inspect', 'distutils.os', 'scipy.numpy', 'scipy.special.orthogonal', 'exceptions', 'numpy.lib.info', 'distutils.dep_util', 'numpy.lib.warnings', 'scipy.linalg.decomp', 'numpy.core.sys', 'numpy.core._sort', 'scipy.lib.lapack.info', 'numpy.os', 'matplotlib.re', 'pytz.bisect', 'distutils.ccompiler', 
'numpy.distutils.string', 'pylab', '_locale', 'scipy.io.array_import', 'thread', 'sre', 'numpy.core.memmap', 'traceback', 'scipy.interpolate.info', 'scipy.linsolve.umfpack.imp', 'numpy.core._internal', 'distutils.spawn', 'opcode', 'scipy.io', 'numpy.testing.imp', 'numpy.linalg.lapack_lite', 'scipy.io.dumb_shelve', 'distutils.sys', 'os', 'scipy.io.re', 'numpy.dft.info', 'numpy.distutils.new', 'scipy.special.scipy', 'numpy.dft.types', 'numpy.core.string', 'numpy.dft', '_sre', 'unittest', 'numpy.testing.sys', 'numpy.random', 'numpy.linalg.numpy', '__builtin__', 'scipy.linsolve.numpy', 'numpy.lib.twodim_base', 'scipy.linalg.blas', 'numpy.core.cPickle', 'scipy.sparse.operator', 'scipy.linalg._flinalg', 'operator', 'distutils.util', 'sre_constants', 'distutils.string', 'scipy.linsolve._dsuperlu', 'matplotlib.datetime', 'scipy.special.types', 'numpy.distutils.unixccompiler', 'posixpath', 'imp', 'numpy.oldnumeric.numpy', 'errno', 'numpy.distutils.imp', 'binascii', 'scipy.optimize.info', 'scipy.linalg.fblas', 'numpy.core.arrayprint', 'datetime', 'matplotlib.md5', 'types', 'scipy.linsolve.scipy', 'scipy.special.info', 'numpy.core.numpy', 'numpy', 'sets', 'numpy.dft.fftpack', 'numpy.core.defmatrix', 'scipy.sparse.numpy', 'cPickle', 'scipy.linsolve.umfpack.scipy', 'pytz.sys', 'scipy.linsolve._csuperlu', 'scipy.sparse.itertools', 'scipy.sparse', '_codecs', 'tokenize', 'scipy.lib.blas.info', 'matplotlib.sys', 'encodings.cp1252', 'scipy.special.numpy', 'pytz', 'scipy.io.sys', 'numpy.testing.utils', 'scipy.io.mmio', 'scipy.linsolve.umfpack.umfpack', 'numpy.distutils.glob', 'numpy.lib.ufunclike', 'copy', 'numpy.core.re', 'struct', 'numpy.core.fromnumeric', 'scipy.linalg.warnings', 'numpy.core.copy_reg', 'numpy.lib.scimath', 'scipy.special.specfun', 'numpy.lib', 'numpy.random.numpy', 'scipy.io.types', 'scipy.linsolve.umfpack.info', 'numpy.dft.fftpack_lite', 'scipy.sparse.info', 'encodings.aliases', 'numpy.lib.function_base', 'fnmatch', 'sre_parse', 'scipy.os', 'scipy.linalg.basic', 'scipy.linalg.linalg_version', 'numpy.distutils.__config__', 'distutils.distutils', 'scipy.linalg.flinalg', 'copy_reg', 'sre_compile', '_random', 'numpy.testing.glob', 'scipy.io.numpy', 'site', 'numpy.lib.polynomial', 'numpy._import_tools', 'numpy.glob', 'scipy.ndimage.info', 'numpy.lib.time', 'scipy.linalg.cblas', '__main__', 'numpy.dft.numpy', 'numpy.core.records', 'numpy.core._dotblas', 'numpy.sys', 'scipy.special', 'numpy.oldnumeric.types', 'matplotlib.os', 'numpy.testing.traceback', 'strop', 'numpy.testing.numpytest', 'numpy.core.numeric', 'numpy.linalg.info', 'distutils.unixccompiler', 'encodings.codecs', 'pytz.datetime', 'scipy.maxentropy.info', 'numpy.core', 'numpy.testing.info', 'scipy.linalg.iterative', 'numpy.distutils.sys', 'encodings.exceptions', 'numpy.distutils.log', 'nt', 'pytz.sets', 'scipy.io.os', 'stat', 'scipy.linalg.clapack', 'numpy.lib.utils', 'numpy.lib.index_tricks', 'numpy.distutils.os', 'warnings', 'encodings.types', 'glob', 'numpy.lib.shape_base', 'numpy.dual', 'numpy.core.types', 'sys', 'scipy.sparse.sparse', 'numpy.core.warnings', 'zlib', 'numpy.core.__builtin__', 'distutils.file_util', 'scipy.fftpack.info', 'numpy.lib.os', 'numpy.__config__', 'scipy.misc.types', 'os.path', 'matplotlib', 'scipy.linsolve.umfpack.numpy', 'scipy.linalg.calc_lwork', 'numpy.oldnumeric.compat', 'scipy.misc', 'scipy.misc.sys', 'matplotlib.distutils', 'numpy.core.cStringIO', 'pytz.tzinfo', 'distutils.errors', 'scipy.lib.info', 'scipy.linalg.numpy', 'linecache', 'scipy.io.struct', 'scipy.linsolve.info', 'time', 
'scipy.io.shelve', 'scipy.io.pickler', 'numpy.lib.machar', 'scipy.misc.limits', 'numpy.linalg.linalg', 'matplotlib.warnings', 'scipy.__config__', 'numpy.testing.numpy'] Traceback (most recent call last): File "test_det.py", line 7, in ? from pylab import plot, show, subplot, legend, xlim, ylim, xlabel, ylabel, scatter, figure, semilogy, title, pcolor, contour, cm, fill, contourf, cm, colorbar, savefig File "C:\Python24\Lib\site-packages\pylab.py", line 1, in ? from matplotlib.pylab import * File "C:\Python24\Lib\site-packages\matplotlib\pylab.py", line 196, in ? import cm File "C:\Python24\Lib\site-packages\matplotlib\cm.py", line 5, in ? import colors File "C:\Python24\Lib\site-packages\matplotlib\colors.py", line 33, in ? from numerix import array, arange, take, put, Float, Int, where, \ File "C:\Python24\Lib\site-packages\matplotlib\numerix\__init__.py", line 68, in ? from _sp_imports import nx, infinity File "C:\Python24\Lib\site-packages\matplotlib\numerix\_sp_imports.py", line 1, in ? from numpy import Int8, UInt8, \ ImportError: cannot import name Int8 >Exit code: 1 Time: 1.174 From robert.kern at gmail.com Wed Aug 2 14:29:48 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 02 Aug 2006 13:29:48 -0500 Subject: [SciPy-user] ImportError: cannot import name Int8 In-Reply-To: References: Message-ID: <44D0EF1C.3080405@gmail.com> Wolfgang wrote: > Hi, > > I've tried to run the test file from two threads above but I only get an > error: > File > "C:\Python24\Lib\site-packages\matplotlib\numerix\_sp_imports.py", line > 1, in ? > from numpy import Int8, UInt8, \ > ImportError: cannot import name Int8 Please read http://www.scipy.org/ReleaseNotes/NumPy_1.0 You will have to update matplotlib to match numpy 1.0b1 -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From wollez at gmx.net Wed Aug 2 14:43:58 2006 From: wollez at gmx.net (Wolfgang) Date: Wed, 02 Aug 2006 20:43:58 +0200 Subject: [SciPy-user] ImportError: cannot import name Int8 In-Reply-To: <44D0EF1C.3080405@gmail.com> References: <44D0EF1C.3080405@gmail.com> Message-ID: Hi, Updating matplotlib to 0.87.4 results in: Traceback (most recent call last): File "test_det.py", line 7, in ? from pylab import plot, show, subplot, legend, xlim, ylim, xlabel, ylabel, scatter, figure, semilogy, title, pcolor, contour, cm, fill, contourf, cm, colorbar, savefig File "C:\Python24\Lib\site-packages\pylab.py", line 1, in ? from matplotlib.pylab import * File "C:\Python24\Lib\site-packages\matplotlib\pylab.py", line 200, in ? from axes import Axes, PolarAxes File "C:\Python24\Lib\site-packages\matplotlib\axes.py", line 23, in ? from contour import ContourSet File "C:\Python24\Lib\site-packages\matplotlib\contour.py", line 18, in ? import _contour File "C:\Python24\Lib\site-packages\matplotlib\_contour.py", line 17, in ? from matplotlib._ns_cntr import * RuntimeError: module compiled against version 90709 of C-API but this version of numpy is 1000000 >Exit code: 1 Time: 6.503 Robert Kern schrieb: > Wolfgang wrote: >> Hi, >> >> I've tried to run the test file from two threads above but I only get an >> error: > >> File >> "C:\Python24\Lib\site-packages\matplotlib\numerix\_sp_imports.py", line >> 1, in ? 
>> from numpy import Int8, UInt8, \ >> ImportError: cannot import name Int8 > > Please read > > http://www.scipy.org/ReleaseNotes/NumPy_1.0 > > You will have to update matplotlib to match numpy 1.0b1 > From wollez at gmx.net Wed Aug 2 14:47:11 2006 From: wollez at gmx.net (Wolfgang) Date: Wed, 02 Aug 2006 20:47:11 +0200 Subject: [SciPy-user] ImportError: cannot import name Int8 GJKRS In-Reply-To: References: Message-ID: Sorry for the double post! But I've got confused by a mail from spam assassin. Wolfgang From lists.steve at arachnedesign.net Wed Aug 2 14:56:02 2006 From: lists.steve at arachnedesign.net (Steve Lianoglou) Date: Wed, 2 Aug 2006 14:56:02 -0400 Subject: [SciPy-user] Call for testing In-Reply-To: References: <44D056CF.6040605@iam.uni-stuttgart.de> <34E2F6D9-C7F6-4D2D-AF13-7D712E0124C7@arachnedesign.net> Message-ID: <9ADBF898-D18F-4957-BA41-9CDC09D597EC@arachnedesign.net> > Thank you all for running the test ! > > Finally I have disabled ATLAS by > export ATLAS=None > > Good news is that I get the correct result. > So I guess it's definitely an ATLAS issue. > > Bad news is that > scipy.test(1) results in > > FAILED (failures=17, errors=4) > More details below. Hi again, scipy.test(1) only results in two failures for me: ====================================================================== FAIL: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/local/lib/python2.4/site-packages/scipy/lib/blas/tests/ test_blas.py", line 76, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/opt/local/lib/python2.4/site-packages/numpy/testing/ utils.py", line 156, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: ACTUAL: 3.6954854766660471e-37j DESIRED: (-9+2j) ====================================================================== FAIL: check_dot (scipy.linalg.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/local/lib/python2.4/site-packages/scipy/linalg/tests/ test_blas.py", line 75, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/opt/local/lib/python2.4/site-packages/numpy/testing/ utils.py", line 156, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: ACTUAL: 3.6954890639901157e-37j DESIRED: (-9+2j) ---------------------------------------------------------------------- Ran 1552 tests in 3.975s FAILED (failures=2) -steve From hasslerjc at adelphia.net Wed Aug 2 14:59:41 2006 From: hasslerjc at adelphia.net (John Hassler) Date: Wed, 02 Aug 2006 14:59:41 -0400 Subject: [SciPy-user] ImportError: cannot import name Int8 In-Reply-To: References: <44D0EF1C.3080405@gmail.com> Message-ID: <44D0F61D.8020807@adelphia.net> I ran into the same problems. I don't know whether one must use the version 7 compiler for Windows or not - if so, I couldn't compile Matplotlib anyway. What I did was to get numarray: http://sourceforge.net/project/showfiles.php?group_id=1369&package_id=32367 and then edit: .... site-packages\matplotlib\mpl-data\matplotlibrc to select numarray instead of numpy. Then it worked. Wolfgang wrote: > Hi, > Updating matplotlib to 0.87.4 results in: > > Traceback (most recent call last): > File "test_det.py", line 7, in ? 
> from pylab import plot, show, subplot, legend, xlim, ylim, xlabel, > ylabel, scatter, figure, semilogy, title, pcolor, contour, cm, fill, > contourf, cm, colorbar, savefig > File "C:\Python24\Lib\site-packages\pylab.py", line 1, in ? > from matplotlib.pylab import * > File "C:\Python24\Lib\site-packages\matplotlib\pylab.py", line 200, in ? > from axes import Axes, PolarAxes > File "C:\Python24\Lib\site-packages\matplotlib\axes.py", line 23, in ? > from contour import ContourSet > File "C:\Python24\Lib\site-packages\matplotlib\contour.py", line 18, in ? > import _contour > File "C:\Python24\Lib\site-packages\matplotlib\_contour.py", line 17, > in ? > from matplotlib._ns_cntr import * > RuntimeError: module compiled against version 90709 of C-API but this > version of numpy is 1000000 > >Exit code: 1 Time: 6.503 > > From robert.kern at gmail.com Wed Aug 2 15:01:21 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 02 Aug 2006 14:01:21 -0500 Subject: [SciPy-user] ImportError: cannot import name Int8 GJKRS In-Reply-To: References: Message-ID: <44D0F681.1060208@gmail.com> Wolfgang wrote: > Sorry for the double post! But I've got confused by a mail from spam > assassin. It's okay. You're not the first. I've asked the person running that filter to turn off the challenge-response script for this and the other scipy/numpy lists. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From oliphant at ee.byu.edu Wed Aug 2 15:03:34 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 02 Aug 2006 13:03:34 -0600 Subject: [SciPy-user] ImportError: cannot import name Int8 In-Reply-To: References: <44D0EF1C.3080405@gmail.com> Message-ID: <44D0F706.1010800@ee.byu.edu> Wolfgang wrote: >Hi, >Updating matplotlib to 0.87.4 results in: > >Traceback (most recent call last): > File "test_det.py", line 7, in ? > from pylab import plot, show, subplot, legend, xlim, ylim, xlabel, >ylabel, scatter, figure, semilogy, title, pcolor, contour, cm, fill, >contourf, cm, colorbar, savefig > File "C:\Python24\Lib\site-packages\pylab.py", line 1, in ? > from matplotlib.pylab import * > File "C:\Python24\Lib\site-packages\matplotlib\pylab.py", line 200, in ? > from axes import Axes, PolarAxes > File "C:\Python24\Lib\site-packages\matplotlib\axes.py", line 23, in ? > from contour import ContourSet > File "C:\Python24\Lib\site-packages\matplotlib\contour.py", line 18, in ? > import _contour > File "C:\Python24\Lib\site-packages\matplotlib\_contour.py", line 17, >in ? > from matplotlib._ns_cntr import * >RuntimeError: module compiled against version 90709 of C-API but this >version of numpy is 1000000 > > Is this error not clear enough? Suggested wording appreciated. When the C-API of NumPy changes, extension modules which use the NumPy C-API (like those in matplotlib) must be re-compiled. You cannot just download a binary version of matplotlib and expect it to work with all releases of NumPy. A new binary of matplotlib needs to be made for version 1.0 of NumPy. -Travis From hasslerjc at adelphia.net Wed Aug 2 15:12:19 2006 From: hasslerjc at adelphia.net (John Hassler) Date: Wed, 02 Aug 2006 15:12:19 -0400 Subject: [SciPy-user] ImportError: cannot import name Int8 In-Reply-To: <44D0F706.1010800@ee.byu.edu> References: <44D0EF1C.3080405@gmail.com> <44D0F706.1010800@ee.byu.edu> Message-ID: <44D0F913.40301@adelphia.net> An HTML attachment was scrubbed... 
URL: From dd55 at cornell.edu Wed Aug 2 15:17:40 2006 From: dd55 at cornell.edu (Darren Dale) Date: Wed, 2 Aug 2006 15:17:40 -0400 Subject: [SciPy-user] ImportError: cannot import name Int8 In-Reply-To: <44D0F706.1010800@ee.byu.edu> References: <44D0F706.1010800@ee.byu.edu> Message-ID: <200608021517.41052.dd55@cornell.edu> On Wednesday 02 August 2006 15:03, Travis Oliphant wrote: > A new binary of matplotlib needs to be made for version 1.0 of NumPy. Is the C API expected to change in the future, now that numpy is in beta? From oliphant at ee.byu.edu Wed Aug 2 15:17:13 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 02 Aug 2006 13:17:13 -0600 Subject: [SciPy-user] ImportError: cannot import name Int8 In-Reply-To: <44D0F913.40301@adelphia.net> References: <44D0EF1C.3080405@gmail.com> <44D0F706.1010800@ee.byu.edu> <44D0F913.40301@adelphia.net> Message-ID: <44D0FA39.8000405@ee.byu.edu> John Hassler wrote: > The release notes for matplotlib-0.87.4 contain the entry: > > Add compatibility for NumPy 1.0 - TEO 2006-06-29 > > So this refers to something other than compilation? Yes. What version of NumPy, matplotlib is compiled against is an unrelated question. No change has to be made to matplotlib, it just needs to be re-compiled. -Travis From wollez at gmx.net Wed Aug 2 15:16:51 2006 From: wollez at gmx.net (Wolfgang) Date: Wed, 02 Aug 2006 21:16:51 +0200 Subject: [SciPy-user] ImportError: cannot import name Int8 In-Reply-To: <44D0F706.1010800@ee.byu.edu> References: <44D0EF1C.3080405@gmail.com> <44D0F706.1010800@ee.byu.edu> Message-ID: Hi Travis, I'm quite new to python, so I don't have much knowledge of all dependencies between scipy, mumpy, mumarray, ... and I don't think that I'm able to compile full python on my own. Compilling from source seems much easier on linux than on win. Probably it is the best to change to enthought python than picking up all packages on my own. Wolfgang Wolfgang Travis Oliphant schrieb: > Wolfgang wrote: > >> Hi, >> Updating matplotlib to 0.87.4 results in: >> >> Traceback (most recent call last): >> File "test_det.py", line 7, in ? >> from pylab import plot, show, subplot, legend, xlim, ylim, xlabel, >> ylabel, scatter, figure, semilogy, title, pcolor, contour, cm, fill, >> contourf, cm, colorbar, savefig >> File "C:\Python24\Lib\site-packages\pylab.py", line 1, in ? >> from matplotlib.pylab import * >> File "C:\Python24\Lib\site-packages\matplotlib\pylab.py", line 200, in ? >> from axes import Axes, PolarAxes >> File "C:\Python24\Lib\site-packages\matplotlib\axes.py", line 23, in ? >> from contour import ContourSet >> File "C:\Python24\Lib\site-packages\matplotlib\contour.py", line 18, in ? >> import _contour >> File "C:\Python24\Lib\site-packages\matplotlib\_contour.py", line 17, >> in ? >> from matplotlib._ns_cntr import * >> RuntimeError: module compiled against version 90709 of C-API but this >> version of numpy is 1000000 >> >> > > Is this error not clear enough? Suggested wording appreciated. > > When the C-API of NumPy changes, extension modules which use the NumPy > C-API (like those in matplotlib) must be re-compiled. You cannot just > download a binary version of matplotlib and expect it to work with all > releases of NumPy. > > A new binary of matplotlib needs to be made for version 1.0 of NumPy. 
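The mismatch described above can be confirmed from a Python prompt. A minimal diagnostic sketch, assuming only that numpy and matplotlib are both installed (nothing here is specific to the reporter's test_det.py): if the two packages were built against different NumPy C-API versions, the import itself fails with the ImportError or RuntimeError quoted in this thread, and the printed version numbers show which side needs rebuilding.

import numpy
print "numpy", numpy.__version__
try:
    import matplotlib
    import matplotlib.pylab
    print "matplotlib", matplotlib.__version__, "imported cleanly"
except (ImportError, RuntimeError), err:
    # a binary built against a different NumPy C-API typically fails here;
    # rebuilding matplotlib against the installed numpy is the fix suggested above
    print "possible numpy/matplotlib binary mismatch:", err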
> > -Travis From oliphant at ee.byu.edu Wed Aug 2 15:19:38 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 02 Aug 2006 13:19:38 -0600 Subject: [SciPy-user] ImportError: cannot import name Int8 In-Reply-To: <44D0F913.40301@adelphia.net> References: <44D0EF1C.3080405@gmail.com> <44D0F706.1010800@ee.byu.edu> <44D0F913.40301@adelphia.net> Message-ID: <44D0FACA.3080006@ee.byu.edu> John Hassler wrote: > And _does_ matplotlib have to be compiled with the same compiler as > Python? > john No, you just have to set the --compiler= flag to the compiler you want to use. On Windows I install the mingw compiler and then use --compiler=mingw32 for the config and build distutils command: python setup.py config --compiler=mingw32 build --compiler=mingw32 install You can replace the last install with bdist_wininst to get a windows installer If you want to avoid setting the compiler every time you compile you can add [config] compiler=mingw32 [build] compiler=mingw32 to the distutils.cfg (I think that's the name but I might be wrong) file in the site-packages/distutils directory of your Python installation -Travis From oliphant at ee.byu.edu Wed Aug 2 15:21:31 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 02 Aug 2006 13:21:31 -0600 Subject: [SciPy-user] ImportError: cannot import name Int8 In-Reply-To: <200608021517.41052.dd55@cornell.edu> References: <44D0F706.1010800@ee.byu.edu> <200608021517.41052.dd55@cornell.edu> Message-ID: <44D0FB3B.8090103@ee.byu.edu> Darren Dale wrote: >On Wednesday 02 August 2006 15:03, Travis Oliphant wrote: > > >>A new binary of matplotlib needs to be made for version 1.0 of NumPy. >> >> > >Is the C API expected to change in the future, now that numpy is in beta? > > Very unlikely (the only exception is if a major issue comes out during the beta release phase). Once 1.0 is released there will be no C-API changes until 1.1 -Travis From fredantispam at free.fr Wed Aug 2 17:11:40 2006 From: fredantispam at free.fr (fred) Date: Wed, 02 Aug 2006 23:11:40 +0200 Subject: [SciPy-user] jn & lpmn or sph_jn... In-Reply-To: <91cf711d0607280551k295bebccw9f22a9d09944c97f@mail.gmail.com> References: <44C931A6.5060608@free.fr> <91cf711d0607280551k295bebccw9f22a9d09944c97f@mail.gmail.com> Message-ID: <44D1150C.3090501@free.fr> David Huard a ?crit : > I hope someone else will find a better answer to your question, but here > is something you could try if you are really hungry for speed. [snip] Ok, thanks for the hint, I'm working on it. However, I would like to see how jn(n,x) is written. I did not find nothing about it (no "def jn:"), except the fact that jn(n,x) is written in C, in cephes/. But I did not find out how the interface C/python is written, as in fortran (source specfun.f -> specfun.pyf -> basic.py). I'm going on the bad way ? Any suggestion are welcome. Cheers, -- Fred. From robert.kern at gmail.com Wed Aug 2 17:17:54 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 02 Aug 2006 16:17:54 -0500 Subject: [SciPy-user] jn & lpmn or sph_jn... In-Reply-To: <44D1150C.3090501@free.fr> References: <44C931A6.5060608@free.fr> <91cf711d0607280551k295bebccw9f22a9d09944c97f@mail.gmail.com> <44D1150C.3090501@free.fr> Message-ID: <44D11682.5070009@gmail.com> fred wrote: > David Huard a ?crit : >> I hope someone else will find a better answer to your question, but here >> is something you could try if you are really hungry for speed. > > [snip] > > Ok, thanks for the hint, I'm working on it. > > However, I would like to see how jn(n,x) is written. 
> I did not find nothing about it (no "def jn:"), except the fact that > jn(n,x) is written in C, in cephes/. > But I did not find out how the interface C/python is written, as in > fortran (source specfun.f -> specfun.pyf -> basic.py). > I'm going on the bad way ? It is a ufunc, so it is created by calling PyUFunc_FromFuncAndData in Lib/special/_cephesmodule.c . The actual implementation of the math-bits is in Lib/special/cephes/jn.c . -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From fredantispam at free.fr Wed Aug 2 17:35:01 2006 From: fredantispam at free.fr (fred) Date: Wed, 02 Aug 2006 23:35:01 +0200 Subject: [SciPy-user] jn & lpmn or sph_jn... In-Reply-To: <44D11682.5070009@gmail.com> References: <44C931A6.5060608@free.fr> <91cf711d0607280551k295bebccw9f22a9d09944c97f@mail.gmail.com> <44D1150C.3090501@free.fr> <44D11682.5070009@gmail.com> Message-ID: <44D11A85.2050608@free.fr> Robert Kern a ?crit : > It is a ufunc, so it is created by calling PyUFunc_FromFuncAndData in > Lib/special/_cephesmodule.c . Hmm, it's a little too obscure for me :-( Does it mean that the solution proposed by David is the only one ? I don't like it because you have to pass the dimension of x as arg and because of the loop (thanks anyway, David, you showed me an interesting way ;-) Cheers, -- Fred. From nicolas at limare.net Wed Aug 2 17:43:50 2006 From: nicolas at limare.net (Nicolas Limare) Date: Wed, 02 Aug 2006 23:43:50 +0200 Subject: [SciPy-user] scipy and freeze - missing modules Message-ID: <44D11C96.2000701@limare.net> Hi, I tried to use freeze[1] do build a stand-alone executable from my python program. It computes with warnings about missing numpy modules, and the binary fails to run, with this error message: 8<----------8<----------8<----------8<----------8<----------8<---------- No scipy-style subpackage 'core' found in numpy. Ignoring: cannot import name typeinfo No scipy-style subpackage 'lib' found in numpy. Ignoring: cannot import name typeinfo No scipy-style subpackage 'linalg' found in numpy. Ignoring: cannot import name typeinfo No scipy-style subpackage 'dft' found in numpy. Ignoring: cannot import name typeinfo No scipy-style subpackage 'random' found in numpy. Ignoring: No module named info Traceback (most recent call last): File "mesh.py", line 24, in ? from numpy import empty, ones, zeros, concatenate, array, arange, \ File "/usr/lib/python2.4/site-packages/numpy/__init__.py", line 49, in ? import add_newdocs File "/usr/lib/python2.4/site-packages/numpy/add_newdocs.py", line 2, in ? from lib import add_newdoc File "/usr/lib/python2.4/site-packages/numpy/lib/__init__.py", line 5, in ? from type_check import * File "/usr/lib/python2.4/site-packages/numpy/lib/type_check.py", line 8, in ? import numpy.core.numeric as _nx File "/usr/lib/python2.4/site-packages/numpy/core/__init__.py", line 7, in ? import numerictypes as nt File "/usr/lib/python2.4/site-packages/numpy/core/numerictypes.py", line 82, in ? from multiarray import typeinfo, ndarray, array, empty ImportError: cannot import name typeinfo 8<----------8<----------8<----------8<----------8<----------8<---------- I use scipy 0.4.8 and numpy 0.9.6, compiled from the official tgz, with Python 2.4.2 on Ubuntu 5.10 (Breezy). 
My scipy-related module imports are: 8<----------8<----------8<----------8<----------8<----------8<---------- from numpy import empty, ones, zeros, concatenate, array, arange, \ nonzero, byte, float32, cumsum from scipy.sparse import csr_matrix from scipy.ndimage.filters import median_filter from scipy.misc.pilutil import fromimage from scipy.weave import inline, converter 8<----------8<----------8<----------8<----------8<----------8<---------- Do you use freeze with numpy/scipy? Do you have some idea about where the problem lies? Thanks for your help. [1]http://wiki.python.org/moin/Freeze PS : I can provide the full freeze compilation logs and a sample of my program. -- Nicolas LIMARE From robert.kern at gmail.com Wed Aug 2 17:50:58 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 02 Aug 2006 16:50:58 -0500 Subject: [SciPy-user] jn & lpmn or sph_jn... In-Reply-To: <44D11A85.2050608@free.fr> References: <44C931A6.5060608@free.fr> <91cf711d0607280551k295bebccw9f22a9d09944c97f@mail.gmail.com> <44D1150C.3090501@free.fr> <44D11682.5070009@gmail.com> <44D11A85.2050608@free.fr> Message-ID: <44D11E42.8090402@gmail.com> fred wrote: > Robert Kern a ?crit : > >> It is a ufunc, so it is created by calling PyUFunc_FromFuncAndData in >> Lib/special/_cephesmodule.c . > Hmm, it's a little too obscure for me :-( > > Does it mean that the solution proposed by David is the only one ? > I don't like it because you have to pass the dimension of x as arg and > because of the loop (thanks anyway, David, you showed me an interesting > way ;-) It's probably the easiest one. There is a reason that the current forms of the spherical harmonic functions are not ufuncs like jn is: they can't be shoved into the ufunc framework since they return arrays for scalar inputs. Note that in the f2py interface that David wrote, you won't have to pass in the dimension of x as an argument from Python; you just have to write the underlying FORTRAN subroutine to do so. The f2py interface takes care of inferring the dimensions of your inputs for you. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From fredantispam at free.fr Wed Aug 2 18:10:37 2006 From: fredantispam at free.fr (fred) Date: Thu, 03 Aug 2006 00:10:37 +0200 Subject: [SciPy-user] jn & lpmn or sph_jn... In-Reply-To: <44D11E42.8090402@gmail.com> References: <44C931A6.5060608@free.fr> <91cf711d0607280551k295bebccw9f22a9d09944c97f@mail.gmail.com> <44D1150C.3090501@free.fr> <44D11682.5070009@gmail.com> <44D11A85.2050608@free.fr> <44D11E42.8090402@gmail.com> Message-ID: <44D122DD.1060408@free.fr> Robert Kern a ?crit : > fred wrote: > >>Robert Kern a ??crit : >> >> >>>It is a ufunc, so it is created by calling PyUFunc_FromFuncAndData in >>>Lib/special/_cephesmodule.c . >> >>Hmm, it's a little too obscure for me :-( >> >>Does it mean that the solution proposed by David is the only one ? >>I don't like it because you have to pass the dimension of x as arg and >>because of the loop (thanks anyway, David, you showed me an interesting >>way ;-) > > > It's probably the easiest one. There is a reason that the current forms of the > spherical harmonic functions are not ufuncs like jn is: they can't be shoved > into the ufunc framework since they return arrays for scalar inputs. 
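A short sketch of the distinction just made, assuming scipy.special is importable: jn is a ufunc, so it broadcasts over an array argument, while lpmn only accepts a scalar and hands back (m+1, n+1) arrays of values and derivatives.

import numpy
from scipy import special

x = numpy.linspace(0.1, 0.9, 5)
print special.jn(2, x)            # ufunc: evaluated elementwise over the array

P, dP = special.lpmn(2, 2, 0.5)   # not a ufunc: scalar in, (3, 3) arrays out
print P[2, 2]                     # the order-2, degree-2 value at z = 0.5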
Yes, this is the reason I wrote mylpmn for instance, which returns - only the Legendre function and not its derivative ; - only for order m and degree n. > Note that in the f2py interface that David wrote, you won't have to pass in the > dimension of x as an argument from Python; you just have to write the underlying > FORTRAN subroutine to do so. The f2py interface takes care of inferring the > dimensions of your inputs for you. Ok, I did not really read it completely. Thanks to have pointed me on that. Cheers, -- Fred. From fredantispam at free.fr Thu Aug 3 06:26:37 2006 From: fredantispam at free.fr (fred) Date: Thu, 03 Aug 2006 12:26:37 +0200 Subject: [SciPy-user] jn & lpmn or sph_jn... In-Reply-To: <44D11E42.8090402@gmail.com> References: <44C931A6.5060608@free.fr> <91cf711d0607280551k295bebccw9f22a9d09944c97f@mail.gmail.com> <44D1150C.3090501@free.fr> <44D11682.5070009@gmail.com> <44D11A85.2050608@free.fr> <44D11E42.8090402@gmail.com> Message-ID: <44D1CF5D.3050104@free.fr> Robert Kern a ?crit : [snip] > It's probably the easiest one. There is a reason that the current forms of the > spherical harmonic functions are not ufuncs like jn is: they can't be shoved > into the ufunc framework since they return arrays for scalar inputs. marsu[pts/0]:~/{5}/> ipython Python 2.4.3 (#1, Jul 25 2006, 18:55:45) Type "copyright", "credits" or "license" for more information. IPython 0.7.2 -- An enhanced Interactive Python. ? -> Introduction to IPython's features. %magic -> Information about IPython's 'magic' % functions. help -> Python's own help system. object? -> Details about 'object'. ?object also works, ?? prints more. In [1]: from scipy.special import * In [2]: help(lpmn) Help on function lpmn in module scipy.special.basic: lpmn(m, n, z) Associated Legendre functions of the first kind, Pmn(z) and its derivative, Pmn'(z) of order m and degree n. Returns two arrays of size (m+1,n+1) containing Pmn(z) and Pmn'(z) for all orders from 0..m and degrees from 0..n. z can be complex. In [3]: help(jn) Help on ufunc object: jn = class ufunc(__builtin__.object) | Optimized functions make it possible to implement arithmetic with arrays efficiently | | Methods defined here: | | __call__(...) | x.__call__(...) <==> x(...) | | __repr__(...) | x.__repr__() <==> repr(x) | | __str__(...) | x.__str__() <==> str(x) | | accumulate(...) | | outer(...) | | reduce(...) | | reduceat(...) .../... How can I get info about jn functions ?? Cheers, -- Fred. From nwagner at iam.uni-stuttgart.de Thu Aug 3 06:37:32 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 03 Aug 2006 12:37:32 +0200 Subject: [SciPy-user] jn & lpmn or sph_jn... In-Reply-To: <44D1CF5D.3050104@free.fr> References: <44C931A6.5060608@free.fr> <91cf711d0607280551k295bebccw9f22a9d09944c97f@mail.gmail.com> <44D1150C.3090501@free.fr> <44D11682.5070009@gmail.com> <44D11A85.2050608@free.fr> <44D11E42.8090402@gmail.com> <44D1CF5D.3050104@free.fr> Message-ID: <44D1D1EC.6030208@iam.uni-stuttgart.de> fred wrote: > Robert Kern a ?crit : > > [snip] > > >> It's probably the easiest one. There is a reason that the current forms of the >> spherical harmonic functions are not ufuncs like jn is: they can't be shoved >> into the ufunc framework since they return arrays for scalar inputs. >> > > marsu[pts/0]:~/{5}/> ipython > Python 2.4.3 (#1, Jul 25 2006, 18:55:45) > Type "copyright", "credits" or "license" for more information. > > IPython 0.7.2 -- An enhanced Interactive Python. > ? -> Introduction to IPython's features. 
> %magic -> Information about IPython's 'magic' % functions. > help -> Python's own help system. > object? -> Details about 'object'. ?object also works, ?? prints more. > > In [1]: from scipy.special import * > > In [2]: help(lpmn) > Help on function lpmn in module scipy.special.basic: > > lpmn(m, n, z) > Associated Legendre functions of the first kind, Pmn(z) and its > derivative, Pmn'(z) of order m and degree n. Returns two > arrays of size (m+1,n+1) containing Pmn(z) and Pmn'(z) for > all orders from 0..m and degrees from 0..n. > > z can be complex. > > In [3]: help(jn) > Help on ufunc object: > > jn = class ufunc(__builtin__.object) > | Optimized functions make it possible to implement arithmetic with > arrays efficiently > | > | Methods defined here: > | > | __call__(...) > | x.__call__(...) <==> x(...) > | > | __repr__(...) > | x.__repr__() <==> repr(x) > | > | __str__(...) > | x.__str__() <==> str(x) > | > | accumulate(...) > | > | outer(...) > | > | reduce(...) > | > | reduceat(...) > .../... > > How can I get info about jn functions ?? > > Cheers, > > You can try ipython Python 2.4.1 (#1, Sep 12 2005, 23:33:18) Type "copyright", "credits" or "license" for more information. IPython 0.7.3.svn -- An enhanced Interactive Python. ? -> Introduction to IPython's features. %magic -> Information about IPython's 'magic' % functions. help -> Python's own help system. object? -> Details about 'object'. ?object also works, ?? prints more. In [1]: from scipy import * In [2]: special.jn? Type: ufunc Base Class: String Form: Namespace: Interactive Docstring: y = jn(x1,x2) y=jn(n,x) returns the Bessel function of integer order n at x. In [3]: info(special.jn) y = jn(x1,x2) y=jn(n,x) returns the Bessel function of integer order n at x. Nils From fredantispam at free.fr Thu Aug 3 07:24:11 2006 From: fredantispam at free.fr (fred) Date: Thu, 03 Aug 2006 13:24:11 +0200 Subject: [SciPy-user] jn & lpmn or sph_jn... In-Reply-To: <44D1D1EC.6030208@iam.uni-stuttgart.de> References: <44C931A6.5060608@free.fr> <91cf711d0607280551k295bebccw9f22a9d09944c97f@mail.gmail.com> <44D1150C.3090501@free.fr> <44D11682.5070009@gmail.com> <44D11A85.2050608@free.fr> <44D11E42.8090402@gmail.com> <44D1CF5D.3050104@free.fr> <44D1D1EC.6030208@iam.uni-stuttgart.de> Message-ID: <44D1DCDB.6020100@free.fr> Nils Wagner a ?crit : > You can try [snip] Ok, but why help() does not work on jn() ? Because it's a ufunc ? Cheers, -- Fred. From fredantispam at free.fr Thu Aug 3 07:28:21 2006 From: fredantispam at free.fr (fred) Date: Thu, 03 Aug 2006 13:28:21 +0200 Subject: [SciPy-user] scipy 0.4.9/numpy 0.9.8 & jn_zeros() In-Reply-To: <17609.8589.168511.499003@prpc.aero.iitb.ac.in> References: <44C7A0BA.6020708@free.fr> <44C9168D.3030003@free.fr> <17609.8589.168511.499003@prpc.aero.iitb.ac.in> Message-ID: <44D1DDD5.20802@free.fr> Prabhu Ramachandran a ?crit : >>>>>>"fred" == fred writes: > > > fred> fred a ?crit : > >> Hi all, > >> > >> Since the new release, I get this error message : > >> > >> Python 2.4.3 (#1, Jul 25 2006, 18:55:45) [GCC 3.3.5 (Debian > >> 1:3.3.5-13)] on linux2 Type "help", "copyright", "credits" or > >> "license" for more information. > >> > >>>>> from scipy.special.basic import jn_zeros print jn_zeros(0,1) > >> Floating exception > >> > >> What am I doing wrong ? > fred> Nobody has any suggestion ? :-( > > See if these articles help: [snip] Ok, I recompiled/installed libc6 with patch. But it still fails. I also have rebuilt scipy, but not python2.4. Should I have to rebuild it ? -- Fred. 
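Returning to the jn & lpmn discussion above: short of wrapping the FORTRAN routine with f2py, a plain Python loop is the simplest way to evaluate a single order and degree of lpmn over an array of points. A minimal sketch; the helper name lpmn_at and the choice to return only the function value (not its derivative) are illustrative:

import numpy
from scipy import special

def lpmn_at(m, n, xs):
    # evaluate the order-m, degree-n associated Legendre function at each
    # entry of the 1-D array xs; the loop is needed because lpmn itself
    # only accepts scalar arguments
    out = numpy.empty(len(xs), dtype=float)
    for i, x in enumerate(xs):
        P, dP = special.lpmn(m, n, x)
        out[i] = P[m, n]
    return out

print lpmn_at(2, 3, numpy.linspace(-0.9, 0.9, 7))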
From stephenemslie at gmail.com Thu Aug 3 08:34:51 2006 From: stephenemslie at gmail.com (stephen emslie) Date: Thu, 3 Aug 2006 13:34:51 +0100 Subject: [SciPy-user] hough transform Message-ID: <51f97e530608030534k4ef26457s741c7d048d1f3ac4@mail.gmail.com> I'd like to perform a hough transform (http://en.wikipedia.org/wiki/Hough_transform) on an image as part of a template matching algorithm. I've looked around scipy and numpy and I couldn't find any sign of the Hough transform, but it seems to be a fairly routine operation in image processing so I was wondering if anyone had written an implementation (or perhaps if it is already in scipy and I'm just missing it). Any other information on template matching with scipy would be greatly appreciated. Thanks! Stephen Emslie -------------- next part -------------- An HTML attachment was scrubbed... URL: From jelle.feringa at ezct.net Thu Aug 3 08:50:42 2006 From: jelle.feringa at ezct.net (Jelle Feringa / EZCT Architecture & Design Research) Date: Thu, 3 Aug 2006 14:50:42 +0200 Subject: [SciPy-user] hough transform In-Reply-To: <51f97e530608030534k4ef26457s741c7d048d1f3ac4@mail.gmail.com> Message-ID: <006101c6b6fb$6e2a72c0$2701a8c0@OneArchitecture.local> http://www.intel.com/technology/computing/opencv/ _____ From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On Behalf Of stephen emslie Sent: Thursday, August 03, 2006 2:35 PM To: SciPy Users List Subject: [SciPy-user] hough transform I'd like to perform a hough transform (http://en.wikipedia.org/wiki/Hough_transform) on an image as part of a template matching algorithm. I've looked around scipy and numpy and I couldn't find any sign of the Hough transform, but it seems to be a fairly routine operation in image processing so I was wondering if anyone had written an implementation (or perhaps if it is already in scipy and I'm just missing it). Any other information on template matching with scipy would be greatly appreciated. Thanks! Stephen Emslie -------------- next part -------------- An HTML attachment was scrubbed... URL: From basvandijk at home.nl Thu Aug 3 09:12:09 2006 From: basvandijk at home.nl (basvandijk at home.nl) Date: Thu, 3 Aug 2006 15:12:09 +0200 Subject: [SciPy-user] segfault in scipy.optimize.leastsq Message-ID: <982796.1154610729010.JavaMail.root@webmail1.groni1> Hello, When running the following example from http://www.tau.ac.il/~kineret/amit/scipy_tutorial I get a segmentation fault in leastsq:
--------------------------
from scipy import *
from numpy import *

x = arange(0,6e-2,6e-2/30)
A,k,theta = 10, 1.0/3e-2, pi/6
y_true = A*sin(2*pi*k*x+theta)
y_meas = y_true + 2*randn(len(x))

def residuals(p, y, x):
    A,k,theta = p
    err = y-A*sin(2*pi*k*x+theta)
    return err

def peval(x, p):
    return p[0]*sin(2*pi*p[1]*x+p[2])

p0 = [8, 1/2.3e-2, pi/3]
print array(p0)

from scipy.optimize import leastsq
plsq = leastsq(residuals, p0, args=(y_meas, x))
print plsq[0]

print array([A, k, theta])
--------------------------
Note that C:\Python24\Lib\site-packages\scipy\optimize\tests\test_optimize.py also gives an error. I'm running Python 2.4.3, Scipy 0.5.0 and numpy 1.0b1. What can be the problem?
Greetings, Bas van Dijk From nwagner at iam.uni-stuttgart.de Thu Aug 3 09:18:53 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 03 Aug 2006 15:18:53 +0200 Subject: [SciPy-user] segfault in scipy.optimize.leastsq In-Reply-To: <982796.1154610729010.JavaMail.root@webmail1.groni1> References: <982796.1154610729010.JavaMail.root@webmail1.groni1> Message-ID: <44D1F7BD.2070308@iam.uni-stuttgart.de> basvandijk at home.nl wrote: > Hello, > > When running the following example from http://www.tau.ac.il/~kineret/amit/scipy_tutorial I get an segmentationfault in leastsq: > > -------------------------- > from scipy import * > from numpy import * > > x = arange(0,6e-2,6e-2/30) > A,k,theta = 10, 1.0/3e-2, pi/6s > y_true = A*sin(2*pi*k*x+theta) > y_meas = y_true + 2*randn(len(x)) > > def residuals(p, y, x): > A,k,theta = p > err = y-A*sin(2*pi*k*x+theta) > return err > > def peval(x, p): > return p[0]*sin(2*pi*p[1]*x+p[2]) > > p0 = [8, 1/2.3e-2, pi/3] > print array(p0) > > from scipy.optimize import leastsq > plsq = leastsq(residuals, p0, args=(y_meas, x)) > print plsq[0] > > print array([A, k, theta]) > -------------------------- > > Note that C:\Python24\Lib\site-packages\scipy\optimize\tests\test_optimize.py also gives an error. > > I'm running Python 2.4.3, Scipy 0.5.0 and numpy 1.0b1. > > What can be the problem? > > Greetings, > > Bas van Dijk > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > I cannot confirm your problems on Linux. python -i /usr/lib64/python2.4/site-packages/scipy/optimize/tests/test_optimize.py Found 6 tests for __main__ ...... ---------------------------------------------------------------------- Ran 6 tests in 0.028s OK The example results in [ 8. 43.47826087 1.04719755] [-10.43031241 33.94233618 3.55891069] [ 10. 33.33333333 0.52359878] 1.0b2.dev2946 0.5.0.2142 Nils From jelle.feringa at ezct.net Thu Aug 3 09:39:24 2006 From: jelle.feringa at ezct.net (Jelle Feringa / EZCT Architecture & Design Research) Date: Thu, 3 Aug 2006 15:39:24 +0200 Subject: [SciPy-user] segfault in scipy.optimize.leastsq In-Reply-To: <982796.1154610729010.JavaMail.root@webmail1.groni1> Message-ID: <007201c6b702$3c55c770$2701a8c0@OneArchitecture.local> Any chance you have an older AMD processor perhaps not supporting SSE(3?)? -----Original Message----- From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On Behalf Of basvandijk at home.nl Sent: Thursday, August 03, 2006 3:12 PM To: scipy-user at scipy.org Subject: [SciPy-user] segfault in scipy.optimize.leastsq Hello, When running the following example from http://www.tau.ac.il/~kineret/amit/scipy_tutorial I get an segmentationfault in leastsq: -------------------------- from scipy import * from numpy import * x = arange(0,6e-2,6e-2/30) A,k,theta = 10, 1.0/3e-2, pi/6s y_true = A*sin(2*pi*k*x+theta) y_meas = y_true + 2*randn(len(x)) def residuals(p, y, x): A,k,theta = p err = y-A*sin(2*pi*k*x+theta) return err def peval(x, p): return p[0]*sin(2*pi*p[1]*x+p[2]) p0 = [8, 1/2.3e-2, pi/3] print array(p0) from scipy.optimize import leastsq plsq = leastsq(residuals, p0, args=(y_meas, x)) print plsq[0] print array([A, k, theta]) -------------------------- Note that C:\Python24\Lib\site-packages\scipy\optimize\tests\test_optimize.py also gives an error. I'm running Python 2.4.3, Scipy 0.5.0 and numpy 1.0b1. What can be the problem? 
Greetings, Bas van Dijk _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user From basvandijk at home.nl Thu Aug 3 09:32:54 2006 From: basvandijk at home.nl (basvandijk at home.nl) Date: Thu, 3 Aug 2006 15:32:54 +0200 Subject: [SciPy-user] segfault in scipy.optimize.leastsq Message-ID: <3182876.1154611974457.JavaMail.root@webmail1.groni1> When running test_optimize.py on a newer PC with an older scipy I don't get the error. Specs of the newer PC: CPU: Pentium 4 2.8Ghz Memory: 512MB OS: Windows XP SP2 Scipy: 0.4.9 Numpy: 0.9.8 Specs of the older PC: CPU: Pentium 2 450Mhz Memory: 128MB OS: Windows XP SP2 Scipy: 0.5.0 Numpy: 1.0b1 I'm going to install a newer version of scipy on the new PC and try again. Thanks, Bas. ---- Nils Wagner schrijft: > ... > I cannot confirm your problems on Linux. > python -i > /usr/lib64/python2.4/site-packages/scipy/optimize/tests/test_optimize.py > Found 6 tests for __main__ > ...... > ---------------------------------------------------------------------- > Ran 6 tests in 0.028s > > OK > > The example results in > [ 8. 43.47826087 1.04719755] > [-10.43031241 33.94233618 3.55891069] > [ 10. 33.33333333 0.52359878] > > > 1.0b2.dev2946 > 0.5.0.2142 > > Nils From hasslerjc at adelphia.net Thu Aug 3 09:33:18 2006 From: hasslerjc at adelphia.net (John Hassler) Date: Thu, 03 Aug 2006 09:33:18 -0400 Subject: [SciPy-user] segfault in scipy.optimize.leastsq In-Reply-To: <982796.1154610729010.JavaMail.root@webmail1.groni1> References: <982796.1154610729010.JavaMail.root@webmail1.groni1> Message-ID: <44D1FB1E.5000009@adelphia.net> Are you running Windows on an Athlon? john basvandijk at home.nl wrote: > Hello, > > When running the following example from http://www.tau.ac.il/~kineret/amit/scipy_tutorial I get an segmentationfault in leastsq: > > > Note that C:\Python24\Lib\site-packages\scipy\optimize\tests\test_optimize.py also gives an error. > > I'm running Python 2.4.3, Scipy 0.5.0 and numpy 1.0b1. > > What can be the problem? > > Greetings, > > Bas van Dijk > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From nwagner at iam.uni-stuttgart.de Thu Aug 3 09:34:42 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 03 Aug 2006 15:34:42 +0200 Subject: [SciPy-user] segfault in scipy.optimize.leastsq In-Reply-To: <007201c6b702$3c55c770$2701a8c0@OneArchitecture.local> References: <007201c6b702$3c55c770$2701a8c0@OneArchitecture.local> Message-ID: <44D1FB72.10007@iam.uni-stuttgart.de> Jelle Feringa / EZCT Architecture & Design Research wrote: > Any chance you have an older AMD processor perhaps not supporting SSE(3?)? > > In case of a segfault I suggest to use gdb to track down the problem. 
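For reference, the sort of backtrace meant here can be captured on a Linux-style setup with gdb installed; test_det.py stands in for whichever script triggers the crash:

gdb python
(gdb) run test_det.py
... crash ...
(gdb) bt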
Nils > > -----Original Message----- > From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On > Behalf Of basvandijk at home.nl > Sent: Thursday, August 03, 2006 3:12 PM > To: scipy-user at scipy.org > Subject: [SciPy-user] segfault in scipy.optimize.leastsq > > Hello, > > When running the following example from > http://www.tau.ac.il/~kineret/amit/scipy_tutorial I get an segmentationfault > in leastsq: > > -------------------------- > from scipy import * > from numpy import * > > x = arange(0,6e-2,6e-2/30) > A,k,theta = 10, 1.0/3e-2, pi/6s > y_true = A*sin(2*pi*k*x+theta) > y_meas = y_true + 2*randn(len(x)) > > def residuals(p, y, x): > A,k,theta = p > err = y-A*sin(2*pi*k*x+theta) > return err > > def peval(x, p): > return p[0]*sin(2*pi*p[1]*x+p[2]) > > p0 = [8, 1/2.3e-2, pi/3] > print array(p0) > > from scipy.optimize import leastsq > plsq = leastsq(residuals, p0, args=(y_meas, x)) > print plsq[0] > > print array([A, k, theta]) > -------------------------- > > Note that > C:\Python24\Lib\site-packages\scipy\optimize\tests\test_optimize.py also > gives an error. > > I'm running Python 2.4.3, Scipy 0.5.0 and numpy 1.0b1. > > What can be the problem? > > Greetings, > > Bas van Dijk > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From basvandijk at home.nl Thu Aug 3 09:42:30 2006 From: basvandijk at home.nl (basvandijk at home.nl) Date: Thu, 3 Aug 2006 15:42:30 +0200 Subject: [SciPy-user] segfault in scipy.optimize.leastsq Message-ID: <26215754.1154612550450.JavaMail.root@webmail1.groni1> Jelle Feringa / EZCT Architecture & Design Research wrote: > Any chance you have an older AMD processor perhaps not supporting SSE(3?)? I have a Pentium 2 450Mhz CPU ---- Nils Wagner schrijft: > In case of a segfault I suggest to use gdb to track down the problem. I don't know for sure if I get segmentation error. It can also be something else. From jelle.feringa at ezct.net Thu Aug 3 09:58:49 2006 From: jelle.feringa at ezct.net (Jelle Feringa / EZCT Architecture & Design Research) Date: Thu, 3 Aug 2006 15:58:49 +0200 Subject: [SciPy-user] segfault in scipy.optimize.leastsq In-Reply-To: <26215754.1154612550450.JavaMail.root@webmail1.groni1> Message-ID: <007301c6b704$f29bb240$2701a8c0@OneArchitecture.local> http://en.wikipedia.org/wiki/Streaming_SIMD_Extensions So the odds are fair your running into the SSE issue... Probably it would be a good idea to get a topic on this matter on the scipy.org wiki? -----Original Message----- From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On Behalf Of basvandijk at home.nl Sent: Thursday, August 03, 2006 3:43 PM To: SciPy Users List Subject: Re: [SciPy-user] segfault in scipy.optimize.leastsq Jelle Feringa / EZCT Architecture & Design Research wrote: > Any chance you have an older AMD processor perhaps not supporting SSE(3?)? I have a Pentium 2 450Mhz CPU ---- Nils Wagner schrijft: > In case of a segfault I suggest to use gdb to track down the problem. I don't know for sure if I get segmentation error. It can also be something else. 
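Whether the CPU advertises SSE at all is easy to check on Linux by reading /proc/cpuinfo; a rough, Linux-only sketch (the Windows machines discussed here would need a different tool):

# crude SSE capability check: look at the 'flags' line of /proc/cpuinfo
flags = []
for line in open('/proc/cpuinfo'):
    if line.startswith('flags'):
        flags = line.split(':', 1)[1].split()
        break
print 'sse' in flags, 'sse2' in flags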
_______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user From basvandijk at home.nl Thu Aug 3 10:05:20 2006 From: basvandijk at home.nl (basvandijk at home.nl) Date: Thu, 3 Aug 2006 16:05:20 +0200 Subject: [SciPy-user] segfault in scipy.optimize.leastsq Message-ID: <33165064.1154613920868.JavaMail.root@webmail2.tilbu1> ---- Jelle Feringa / EZCT Architecture & Design Research schrijft: > So the odds are fair your running into the SSE issue... I think so. It would be a good idea to put the minimal system requirements on the scipy download page. Is there any chance of using scipy on a Pentium II? Bas. From hgk at et.uni-magdeburg.de Thu Aug 3 10:07:17 2006 From: hgk at et.uni-magdeburg.de (Dr. Hans Georg Krauthaeuser) Date: Thu, 03 Aug 2006 16:07:17 +0200 Subject: [SciPy-user] segfault in scipy.optimize.leastsq In-Reply-To: <982796.1154610729010.JavaMail.root@webmail1.groni1> References: <982796.1154610729010.JavaMail.root@webmail1.groni1> Message-ID: basvandijk at home.nl wrote: > Hello, > > When running the following example from http://www.tau.ac.il/~kineret/amit/scipy_tutorial I get an segmentationfault in leastsq: > > -------------------------- > from scipy import * > from numpy import * > > x = arange(0,6e-2,6e-2/30) > A,k,theta = 10, 1.0/3e-2, pi/6s > y_true = A*sin(2*pi*k*x+theta) > y_meas = y_true + 2*randn(len(x)) > > def residuals(p, y, x): > A,k,theta = p > err = y-A*sin(2*pi*k*x+theta) > return err > > def peval(x, p): > return p[0]*sin(2*pi*p[1]*x+p[2]) > > p0 = [8, 1/2.3e-2, pi/3] > print array(p0) > > from scipy.optimize import leastsq > plsq = leastsq(residuals, p0, args=(y_meas, x)) > print plsq[0] > > print array([A, k, theta]) > -------------------------- > > Note that C:\Python24\Lib\site-packages\scipy\optimize\tests\test_optimize.py also gives an error. > > I'm running Python 2.4.3, Scipy 0.5.0 and numpy 1.0b1. > > What can be the problem? > > Greetings, > > Bas van Dijk Sounds familar to me. I had a similar problem with older scipy and numpy versions. For me it worked after using ATLAS without SSE support. But that was an ATHLON and not an Pentium 2. You can find the old thred here: http://thread.gmane.org/gmane.comp.python.scientific.user/8139/focus=8139 Hans Georg From hasslerjc at adelphia.net Thu Aug 3 10:14:55 2006 From: hasslerjc at adelphia.net (John Hassler) Date: Thu, 03 Aug 2006 10:14:55 -0400 Subject: [SciPy-user] segfault in scipy.optimize.leastsq In-Reply-To: <33165064.1154613920868.JavaMail.root@webmail2.tilbu1> References: <33165064.1154613920868.JavaMail.root@webmail2.tilbu1> Message-ID: <44D204DF.8020609@adelphia.net> An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Thu Aug 3 10:14:20 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 03 Aug 2006 16:14:20 +0200 Subject: [SciPy-user] segfault in scipy.optimize.leastsq In-Reply-To: References: <982796.1154610729010.JavaMail.root@webmail1.groni1> Message-ID: <44D204BC.3050500@iam.uni-stuttgart.de> Dr. 
Hans Georg Krauthaeuser wrote: > basvandijk at home.nl wrote: > >> Hello, >> >> When running the following example from http://www.tau.ac.il/~kineret/amit/scipy_tutorial I get an segmentationfault in leastsq: >> >> -------------------------- >> from scipy import * >> from numpy import * >> >> x = arange(0,6e-2,6e-2/30) >> A,k,theta = 10, 1.0/3e-2, pi/6s >> y_true = A*sin(2*pi*k*x+theta) >> y_meas = y_true + 2*randn(len(x)) >> >> def residuals(p, y, x): >> A,k,theta = p >> err = y-A*sin(2*pi*k*x+theta) >> return err >> >> def peval(x, p): >> return p[0]*sin(2*pi*p[1]*x+p[2]) >> >> p0 = [8, 1/2.3e-2, pi/3] >> print array(p0) >> >> from scipy.optimize import leastsq >> plsq = leastsq(residuals, p0, args=(y_meas, x)) >> print plsq[0] >> >> print array([A, k, theta]) >> -------------------------- >> >> Note that C:\Python24\Lib\site-packages\scipy\optimize\tests\test_optimize.py also gives an error. >> >> I'm running Python 2.4.3, Scipy 0.5.0 and numpy 1.0b1. >> >> What can be the problem? >> >> Greetings, >> >> Bas van Dijk >> > Sounds familar to me. I had a similar problem with older scipy and numpy > versions. For me it worked after using ATLAS without SSE support. But > that was an ATHLON and not an Pentium 2. > > You can find the old thred here: > http://thread.gmane.org/gmane.comp.python.scientific.user/8139/focus=8139 > > Hans Georg > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Interesting. There seems to be more SSE related problems. How do I disable SSE support ? Nils From jstrunk at enthought.com Thu Aug 3 10:26:05 2006 From: jstrunk at enthought.com (Jeff Strunk) Date: Thu, 3 Aug 2006 09:26:05 -0500 Subject: [SciPy-user] New ISP Message-ID: <200608030926.05096.jstrunk@enthought.com> Good Morning, I am pleased to announce that Enthought is now using a new ISP. We have a 10 megabit internet connection from our in building provider now. That is 6.6 times faster than our old T1. Downloads of Enthon and Scipy should be much faster now. If you find that you are connecting at slow speeds, it may be that your ISP's DNS cache has not recognized our DNS changes yet. Most should have changed within 20 minutes of the switchover. Some ISPs may take a day, and very few will take a few weeks. Thanks, Jeff Strunk IT Administrator Enthought, Inc. From hgk at et.uni-magdeburg.de Thu Aug 3 10:32:16 2006 From: hgk at et.uni-magdeburg.de (Dr. Hans Georg Krauthaeuser) Date: Thu, 03 Aug 2006 16:32:16 +0200 Subject: [SciPy-user] segfault in scipy.optimize.leastsq In-Reply-To: <44D204BC.3050500@iam.uni-stuttgart.de> References: <982796.1154610729010.JavaMail.root@webmail1.groni1> <44D204BC.3050500@iam.uni-stuttgart.de> Message-ID: Nils Wagner wrote: > > Interesting. There seems to be more SSE related problems. How do I > disable SSE support ? > > Nils Nils, I compiled ATLAS by myself on that computer. ATLAS detects whether to use SSE or not during the build. Of course you have to compile/link numpy and scipy (or only numpy?) against this ATLAS. Ed Schofield uploaded precompiles ATLAS binaries to http://www.scipy.org/Installing_SciPy/Windows This may save some time... BTW, isn't there any way to dynamically link against the ATLAS library? 
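Which BLAS/LAPACK/ATLAS libraries a numpy build was actually linked against can be printed from Python, which answers at least the "which ATLAS am I using" half of the question; a minimal check:

import numpy
numpy.__config__.show()   # prints the blas/lapack/atlas information recorded at build time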
Hans Georg From alex.liberzon at gmail.com Thu Aug 3 10:47:23 2006 From: alex.liberzon at gmail.com (Alex Liberzon) Date: Thu, 3 Aug 2006 16:47:23 +0200 Subject: [SciPy-user] hough transform Message-ID: <775f17a80608030747p3c596853mc9b8ca4d8eb55a02@mail.gmail.com> http://www.amk.ca/files/unmaintained/hough.py Runs perfectly on my Enthought distro 1.0 (Scipy 0.5.0, Numpy 0.9.9, Win XP SP2) HIH, Alex From nwagner at iam.uni-stuttgart.de Thu Aug 3 11:09:07 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 03 Aug 2006 17:09:07 +0200 Subject: [SciPy-user] segfault in scipy.optimize.leastsq In-Reply-To: References: <982796.1154610729010.JavaMail.root@webmail1.groni1> <44D204BC.3050500@iam.uni-stuttgart.de> Message-ID: On Thu, 03 Aug 2006 16:32:16 +0200 "Dr. Hans Georg Krauthaeuser" wrote: > Nils Wagner wrote: >> >> Interesting. There seems to be more SSE related >>problems. How do I >> disable SSE support ? >> >> Nils > Nils, > > I compiled ATLAS by myself on that computer. ATLAS >detects whether to > use SSE or not during the build. But is there a way to enforce ATLAS not to use SSE ? Nils Of course you have to >compile/link > numpy and scipy (or only numpy?) against this ATLAS. > > Ed Schofield uploaded precompiles ATLAS binaries to > http://www.scipy.org/Installing_SciPy/Windows > > This may save some time... > > BTW, isn't there any way to dynamically link against the >ATLAS library? > > Hans Georg > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From jelle.feringa at ezct.net Thu Aug 3 11:25:36 2006 From: jelle.feringa at ezct.net (Jelle Feringa / EZCT Architecture & Design Research) Date: Thu, 3 Aug 2006 17:25:36 +0200 Subject: [SciPy-user] segfault in scipy.optimize.leastsq In-Reply-To: Message-ID: <009101c6b711$126ae440$2701a8c0@OneArchitecture.local> Why would you if you haven't got an explicit SSE related problem? But is there a way to enforce ATLAS not to use SSE ? Nils From oliphant at ee.byu.edu Thu Aug 3 12:29:04 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 03 Aug 2006 10:29:04 -0600 Subject: [SciPy-user] jn & lpmn or sph_jn... In-Reply-To: <44D1DCDB.6020100@free.fr> References: <44C931A6.5060608@free.fr> <91cf711d0607280551k295bebccw9f22a9d09944c97f@mail.gmail.com> <44D1150C.3090501@free.fr> <44D11682.5070009@gmail.com> <44D11A85.2050608@free.fr> <44D11E42.8090402@gmail.com> <44D1CF5D.3050104@free.fr> <44D1D1EC.6030208@iam.uni-stuttgart.de> <44D1DCDB.6020100@free.fr> Message-ID: <44D22450.2010605@ee.byu.edu> fred wrote: >Nils Wagner a écrit : > > > >>You can try >> >> > >[snip] > >Ok, but why help() does not work on jn() ? >Because it's a ufunc ? > > > For some reason Python's help does not print the docstring of an instance but only of its type (i.e. the general ufunc). I think this is broken behavior. IPython does the right thing. So does scipy's info function.
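A concrete illustration of the workaround, assuming scipy.special is importable: the per-ufunc docstring is present, plain help() just fails to show it, while printing __doc__ or calling scipy's info() does.

import scipy
from scipy import special

print special.jn.__doc__   # the ufunc's own docstring, which help() skips
scipy.info(special.jn)     # scipy's info() prints the same docstring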
-Travis From stefan at sun.ac.za Thu Aug 3 15:21:45 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu, 3 Aug 2006 21:21:45 +0200 Subject: [SciPy-user] hough transform In-Reply-To: <51f97e530608030534k4ef26457s741c7d048d1f3ac4@mail.gmail.com> References: <51f97e530608030534k4ef26457s741c7d048d1f3ac4@mail.gmail.com> Message-ID: <20060803192145.GG6682@mentat.za.net> Hi Stephen On Thu, Aug 03, 2006 at 01:34:51PM +0100, stephen emslie wrote: > I'd like to perform a hough transform (http://en.wikipedia.org/wiki/ > Hough_transform) on an image as part of a template matching algorithm. > > I've looked around scipy and numpy and I couldn't find any sign of the Hough > transform, but it seems to be a fairly routine operation in image processing so > I was wondering if anyone had written an implementation (or perhaps if it is > already in scipy and I'm just missing it). I attach code for the straight line Hough transform. I saw a post earlier which refers to another implementation. They are similar, but this one is vectorised, which should provide a speed improvement. Regards St?fan -------------- next part -------------- A non-text attachment was scrubbed... Name: houghtf.py Type: text/x-python Size: 2928 bytes Desc: not available URL: From gael.varoquaux at normalesup.org Thu Aug 3 15:49:41 2006 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 3 Aug 2006 21:49:41 +0200 Subject: [SciPy-user] pyreport Message-ID: <20060803194941.GA3408@clipper.ens.fr> The list gave me such positive feedback about the script for literate programming in python that I have decided to make something more usable out of it. I have worked a bit more on the code and in theory it does things a bit more "the right way". If you want to use it (and probably bug report) all the info is on http://gael-varoquaux.info/computers/pyreport/ , with a few examples a use cases. Ga?l From R.Springuel at umit.maine.edu Thu Aug 3 17:54:27 2006 From: R.Springuel at umit.maine.edu (R. Padraic Springuel) Date: Thu, 03 Aug 2006 17:54:27 -0400 Subject: [SciPy-user] Pycluster, Numeric, & numpy Message-ID: <44D27093.8080102@umit.maine.edu> I'm trying to do some cluster analysis and running into a small problem. The cluster package which is included in scipy only does k-means style clustering, and I need to do hierarchical clustering as well. So I downloaded Pycluster which has that ability. However, Pycluster uses Numeric, which I no longer have since scipy was upgraded to use numpy instead. Does anyone have any experience with updating packages to use numpy instead of Numeric? I tried replacing the "from Numeric import *" line in Pycluster with "from numpy import *" but this creates a program fault which crashes Python. -- R. Padraic Springuel Teaching Assistant Department of Physics and Astronomy University of Maine Bennett 214 Office Hours: By Appointment only during the Summer From R.Springuel at umit.maine.edu Thu Aug 3 18:20:29 2006 From: R.Springuel at umit.maine.edu (R. Padraic Springuel) Date: Thu, 03 Aug 2006 18:20:29 -0400 Subject: [SciPy-user] Pycluster, Numeric, & numpy In-Reply-To: References: <44D27093.8080102@umit.maine.edu> Message-ID: <44D276AD.4090602@umit.maine.edu> > Yes, I have. I took the easy way out: replacing references to Numeric with > numpy, and then fixing the few places where pycluster crashes. I did this > last summer so my memory isn't fresh. I'll try to figure it out again for > you. 
> > By the way, there was talk in the mailing list a couple months ago about > including pycluster into SciPy. The catch is that pycluster relies on a > Numerical Recipies random number generator. So there would have to be a > branch that rips it out of the C code and replaces it with something from > the numpy library. > > john Problem is that my interpreter crash is not giving me any traceback errors, so I can't figure out which line(s) is(are) causing the problem. -- R. Padraic Springuel Teaching Assistant Department of Physics and Astronomy University of Maine Bennett 214 Office Hours: By Appointment only during the Summer From rlratzel at enthought.com Thu Aug 3 18:32:07 2006 From: rlratzel at enthought.com (Rick Ratzel) Date: Thu, 3 Aug 2006 17:32:07 -0500 (CDT) Subject: [SciPy-user] ANN: Python 2.4.3 Enthought Edition 1.1.0 alpha1 Message-ID: <20060803223207.D9CF1536@mail.enthought.com> In keeping with the "release early, release often" mantra of the open source community, Enthought is pleased to announce Python Enthought Edition Version 1.1.0 alpha1. Information page: http://code.enthought.com/enthon/enstaller.shtml Download link: http://code.enthought.com/enstaller/enthought_setup-1.1.0.alpha.msi About Python Enthought Edition 1.1.0 alpha1: -------------------------------------------- This alpha release of 1.1.0 Python Enthought Edition is the first Enthought distribution (and the first Python distribution that we know of) to use individual egg packages (http://peak.telecommunity.com/DevCenter/PythonEggs) rather than a monolithic .exe installer. Python Enthought Edition 1.1.0 alpha1 introduces the "Enstaller" application, which manages selecting, downloading, and installing Python eggs. The initial installer program installs the standard Python software and runs a small script, which downloads, installs, and runs Enstaller. From within Enstaller, you can select individual egg packages from Enthought's egg repository, or select the "enthon-1.1.0" package, which installs everything listed in the 1.1.0 alpha1 distribution. Python 2.4.3, Enthought Edition 1.1.0 is available only for Windows for the alpha1 release. It includes eggs for the following packages: Numpy SciPy IPython wxPython PIL mingw MayaVi Scientific Python VTK and many more... After installing Python Enthought Edition 1.1.0 alpha1, you can launch Enstaller again later, to upgrade, uninstall, inspect, or add new packages to your Python installation. More information is available about this and all other open source software written or released by Enthought, Inc., at . A Note About the License ------------------------ The Enstaller application, released under the same BSD2 Open Source License as all other Enthought Open Source, is and will remain free. Access to Enthought's egg repository is free to all during the alpha and beta test phases, and will remain free for individual and academic use. After the official release of Enstaller, commercial and government users will be asked to pay a nominal, per-seat, annual fee if they'd like continued access to the egg repository (to help defray the costs of maintaining the repository). The Enstaller application may be used to access other egg repositories (i.e. the Python Cheese Shop--see release notes below), not just Enthought's repository. 1.1.0 alpha1 Release Notes -------------------------- This release is early alpha and has several known bugs, as well as a long list of features that have not yet been implemented. 
Some of the more obvious ones are listed below: * An egg of the standard documents that are typically included in Python Enthought Edition releases is not yet available. * An egg of the complete Enthought Tool Suite (ETS), a package typically available in Python Enthought Edition releases, is not yet available. However a subset of the ETS that includes Traits and TraitsUI is installed with Enstaller. * The uninstall feature does not properly uninstall a package if it is in use by any other process, including Enstaller. * In some cases, installing a package that is already installed causes errors. * The metadata about an installed package might not display in the Enstaller dialog box, in most cases because the installed package does not include metadata. This issue will improve as Enthought enhances the eggs available in the Enthought egg repository, * Packages that modify the system registry (typically, adding directories to the search PATH) require the user to log out and then log back in before the settings take effect. * Changing the "find_links" preference, which allows you to add additional repositories to search for eggs, often causes a traceback inspector to appear. You can safely close the inspector and add the new repository. * Some packages that launch post-install scripts can display error messages if they fail for any reason, which can require manual intervention. These errors are usually caused by inadequate permissions for creating or moving files. * Accessing the Cheese Shop repository takes a very long time, but we have several enhancements underway to vastly improve this which will be in future releases soon. At the moment however, a bug prevents Enstaller from downloading eggs from the Cheese Shop for install, and access to it with Enstaller is limited to browsing only. If you'd like to browse the Cheese Shop, enter "http://python.org/pypi/" (the final '/' is required) in the "find_links" field under Preferences. * IPython may have problems when starting for the first time if a previous version of IPython was run on the system. If you see "WARNING: could not import user config", either follow the directions that follow the warning, or delete ipy_user_conf.py from the IPython directory. * Some users report that older matplotlibrc files are not compatible with the version of matplotlib installed with this release. Please refer to the matplotlib mailing list for further help. We are grateful to everyone who has helped test this release. If you'd like to contribute suggestions or report a bug, you can do so at https://svn.enthought.com/enthought. -- Rick Ratzel - Enthought, Inc. 515 Congress Avenue, Suite 2100 - Austin, Texas 78701 512-536-1057 x229 - Fax: 512-536-1059 http://www.enthought.com From ckkart at hoc.net Thu Aug 3 19:09:17 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Fri, 04 Aug 2006 08:09:17 +0900 Subject: [SciPy-user] Pycluster, Numeric, & numpy In-Reply-To: <44D276AD.4090602@umit.maine.edu> References: <44D27093.8080102@umit.maine.edu> <44D276AD.4090602@umit.maine.edu> Message-ID: <44D2821D.7080401@hoc.net> R. Padraic Springuel wrote: >> Yes, I have. I took the easy way out: replacing references to Numeric with >> numpy, and then fixing the few places where pycluster crashes. I did this >> last summer so my memory isn't fresh. I'll try to figure it out again for >> you. >> >> By the way, there was talk in the mailing list a couple months ago about >> including pycluster into SciPy. 
The catch is that pycluster relies on a >> Numerical Recipies random number generator. So there would have to be a >> branch that rips it out of the C code and replaces it with something from >> the numpy library. >> >> john > > Problem is that my interpreter crash is not giving me any traceback > errors, so I can't figure out which line(s) is(are) causing the problem. With recent numpy you can try a from numpy.oldnumeric import * this will at least provide old-style names. And I believe to remember that I've heard about a translation script from Numeri to numpy but I don't remeber where. However as you don't get a traceback, this seems to be more severe than the approach above will be able to solve. Christian From oliphant at ee.byu.edu Thu Aug 3 19:15:09 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 03 Aug 2006 17:15:09 -0600 Subject: [SciPy-user] Pycluster, Numeric, & numpy In-Reply-To: <44D27093.8080102@umit.maine.edu> References: <44D27093.8080102@umit.maine.edu> Message-ID: <44D2837D.7060507@ee.byu.edu> R. Padraic Springuel wrote: >I'm trying to do some cluster analysis and running into a small problem. > The cluster package which is included in scipy only does k-means style >clustering, and I need to do hierarchical clustering as well. So I >downloaded Pycluster which has that ability. However, Pycluster uses >Numeric, which I no longer have since scipy was upgraded to use numpy >instead. Does anyone have any experience with updating packages to use >numpy instead of Numeric? I tried replacing the "from Numeric import *" >line in Pycluster with "from numpy import *" but this creates a program >fault which crashes Python. > > Are you using a binary version of PyCluster? This will most certainly not work. You will at least have to alter the extension module generated by PyCluster. It looks like PyCluster uses PyFort for the interface. Ideally, this should be converted to use f2py, but if the pyfort just generated an extension module, then this extension module should compile with NumPy just fine as long as the include files are changed to be #include numpy/arrayobject.h There may be some additional corrections that need to be made to the python side. Most of these can be done with the convertcode script. There is some talk about creating a fully backward compatible library so that you can replace Numeric with numpy.oldnumeric and have code work. I'm not sure if we can get 100% of the way there, but maybe. You will still need to re-compile extension modules, though. -Travis From hgk at et.uni-magdeburg.de Fri Aug 4 02:40:33 2006 From: hgk at et.uni-magdeburg.de (Dr. Hans Georg Krauthaeuser) Date: Fri, 04 Aug 2006 08:40:33 +0200 Subject: [SciPy-user] segfault in scipy.optimize.leastsq In-Reply-To: References: <982796.1154610729010.JavaMail.root@webmail1.groni1> <44D204BC.3050500@iam.uni-stuttgart.de> Message-ID: Nils Wagner wrote: > > But is there a way to enforce ATLAS not to use SSE ? > > Nils > I'm not an expert regarding ATLAS, but my impression was that the ATLAS build process will do 'The Right Thing' automatically. 
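To make the Numeric-to-numpy porting route that Christian and Travis describe above concrete, here is a minimal illustration of the compatibility import. It assumes a numpy release that still ships the numpy.oldnumeric layer (as the releases of that period did); compiled extensions such as Pycluster's C module still have to be rebuilt against the numpy headers (e.g. #include "numpy/arrayobject.h") as Travis notes.

# old Numeric-style code can often keep running by swapping the import:
#   from Numeric import *   ->   from numpy.oldnumeric import *
import numpy.oldnumeric as Numeric

a = Numeric.array([1, 2, 3], Numeric.Float)   # old-style typecode names still work
print(Numeric.sum(a * 2))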
Hans Georg From olivetti at itc.it Fri Aug 4 04:03:54 2006 From: olivetti at itc.it (Emanuele Olivetti) Date: Fri, 04 Aug 2006 10:03:54 +0200 Subject: [SciPy-user] Pycluster, Numeric, & numpy In-Reply-To: <44D27093.8080102@umit.maine.edu> References: <44D27093.8080102@umit.maine.edu> Message-ID: <44D2FF6A.8070600@itc.it> You can install Numeric too (so having numpy/scipy living together with Numeric): PyCluster will use Numeric internally and you'll use numpy/scipy in your code. Passing data from numpy/scipy to Pycluster/Numeric and results from Pycluster/Numeric to numpy/scipy is flawless: just passing a numpy array to Pycluster is ok and constructing a numpy array from Pycluster's results works fine. I'm using Pycluster in this way. It's not the best solution but it is fast. I talked to Michiel de Hoon (Pycluster's author) one month ago and he said that he will not port Pycluster to numpy in the short term. Cheers, Emanuele R. Padraic Springuel wrote: > I'm trying to do some cluster analysis and running into a small problem. > The cluster package which is included in scipy only does k-means style > clustering, and I need to do hierarchical clustering as well. So I > downloaded Pycluster which has that ability. However, Pycluster uses > Numeric, which I no longer have since scipy was upgraded to use numpy > instead. Does anyone have any experience with updating packages to use > numpy instead of Numeric? I tried replacing the "from Numeric import *" > line in Pycluster with "from numpy import *" but this creates a program > fault which crashes Python. From stephenemslie at gmail.com Fri Aug 4 06:00:48 2006 From: stephenemslie at gmail.com (stephen emslie) Date: Fri, 4 Aug 2006 11:00:48 +0100 Subject: [SciPy-user] hough transform In-Reply-To: <20060803192145.GG6682@mentat.za.net> References: <51f97e530608030534k4ef26457s741c7d048d1f3ac4@mail.gmail.com> <20060803192145.GG6682@mentat.za.net> Message-ID: <51f97e530608040300p6956f8feje881e7f870557e63@mail.gmail.com> Thanks very much Stefan, this is very useful to me! I wonder how easy/hard it would be to adapt this for the generalized hough transform? Stephen Emslie On 8/3/06, Stefan van der Walt wrote: > > Hi Stephen > > On Thu, Aug 03, 2006 at 01:34:51PM +0100, stephen emslie wrote: > > I'd like to perform a hough transform (http://en.wikipedia.org/wiki/ > > Hough_transform) on an image as part of a template matching algorithm. > > > > I've looked around scipy and numpy and I couldn't find any sign of the > Hough > > transform, but it seems to be a fairly routine operation in image > processing so > > I was wondering if anyone had written an implementation (or perhaps if > it is > > already in scipy and I'm just missing it). > > I attach code for the straight line Hough transform. I saw a post > earlier which refers to another implementation. They are similar, but > this one is vectorised, which should provide a speed improvement. > > Regards > St?fan > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From p.schnizer at gsi.de Fri Aug 4 07:05:59 2006 From: p.schnizer at gsi.de (Pierre SCHNIZER) Date: Fri, 04 Aug 2006 13:05:59 +0200 Subject: [SciPy-user] numpy 1.0b1: arccosh(0.0) In-Reply-To: <44A18C14.5020605@gmail.com> References: <44A0EBE5.20803@iam.uni-stuttgart.de> <44A18C14.5020605@gmail.com> Message-ID: <44D32A17.4080800@gsi.de> Hello, I don't know if that only happens on my machine.... For numpy 1.0 b1 I get In [241]:numpy.arccosh(0) Out[241]:nan In [242]:numpy.arccosh(0+0j) Out[242]:(-inf+0j) In [243]:numpy.__version__ Out[243]:'1.0b1' I think the real part should be 0. and the complex part pi/2 I get these results for In [244]:numarray.arccosh(0+0j) Out[244]:1.5707963267948966j In [245]:numarray.__version__ Out[245]:'1.5.1' In [247]:Numeric.arccosh(0+0j) Out[247]:1.5707963267948966j In [248]:Numeric.__version__ Out[248]:'24.2' I am running on Linux mtpc18 2.6.17.7 #13 SMP PREEMPT Thu Jul 27 16:18:46 CEST 2006 i686 GNU/Linux Debian stable I used the compiler gcc (GCC) 3.3.5 (Debian 1:3.3.5-13) Copyright (C) 2003 Free Software Foundation, Inc. to compile the package. I had the same problem with numpy 0.9.8. (Just discovered now) Is that a problem only occurring on my installation? Thanks for your help! Pierre From david at ar.media.kyoto-u.ac.jp Fri Aug 4 07:35:18 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 04 Aug 2006 20:35:18 +0900 Subject: [SciPy-user] numpy 1.0b1: arccosh(0.0) In-Reply-To: <44D32A17.4080800@gsi.de> References: <44A0EBE5.20803@iam.uni-stuttgart.de> <44A18C14.5020605@gmail.com> <44D32A17.4080800@gsi.de> Message-ID: <44D330F6.9060104@ar.media.kyoto-u.ac.jp> Pierre SCHNIZER wrote: > Hello, > > I don't know if that only happens on my machine.... > For numpy 1.0 b1 I get > > > In [241]:numpy.arccosh(0) > Out[241]:nan I get the same results, but this is logical... acosh is the inverse of cosh(z) = 0.5(exp(z) + exp(-z)), which cannot be smaller than 1 for z real, and never equal to 0, so arccosh(0) does not really make sense to me, cheers, David From hasslerjc at adelphia.net Fri Aug 4 07:40:07 2006 From: hasslerjc at adelphia.net (John Hassler) Date: Fri, 04 Aug 2006 07:40:07 -0400 Subject: [SciPy-user] numpy 1.0b1: arccosh(0.0) In-Reply-To: <44D330F6.9060104@ar.media.kyoto-u.ac.jp> References: <44A0EBE5.20803@iam.uni-stuttgart.de> <44A18C14.5020605@gmail.com> <44D32A17.4080800@gsi.de> <44D330F6.9060104@ar.media.kyoto-u.ac.jp> Message-ID: <44D33217.60906@adelphia.net> An HTML attachment was scrubbed... URL: From p.schnizer at gsi.de Fri Aug 4 08:29:31 2006 From: p.schnizer at gsi.de (Pierre SCHNIZER) Date: Fri, 04 Aug 2006 14:29:31 +0200 Subject: [SciPy-user] numpy 1.0b1: arccosh(0.0) In-Reply-To: <44D33217.60906@adelphia.net> References: <44A0EBE5.20803@iam.uni-stuttgart.de> <44A18C14.5020605@gmail.com> <44D32A17.4080800@gsi.de> <44D330F6.9060104@ar.media.kyoto-u.ac.jp> <44D33217.60906@adelphia.net> Message-ID: <44D33DAB.5030105@gsi.de> John Hassler wrote: > It does exist - but it's a multivalued function, so you need a branch > cut. See (halfway down the page): > http://mathworld.wolfram.com/InverseHyperbolicCosine.html > > john > > David Cournapeau wrote: > I agree. Perhaps that is the reason for the inf here? In [18]:numpy.arccosh(0+0j) Out[18]:(-inf+0j) As it looks to me, that's not correct either... I am using it for transforming between elliptic coordinates and rectangular ones, where this behaviour is not that desirable.
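For reference, the principal value that follows from the branch cut John points to (the MathWorld convention) can be cross-checked with the standard-library cmath module - a small sketch:

import cmath

z = 0 + 0j
print(cmath.acosh(z))   # 1.5707963267948966j, i.e. i*pi/2
# the same value from the usual principal-branch formula acosh(z) = log(z + sqrt(z+1)*sqrt(z-1))
print(cmath.log(z + cmath.sqrt(z + 1.0) * cmath.sqrt(z - 1.0)))

This agrees with the numarray and Numeric results quoted above, which supports Pierre's reading that the (-inf+0j) value is the odd one out.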
Sincerely yours Pierre >>Pierre SCHNIZER wrote: >> >> >>>Hello, >>> >>> I don't know if that only happens on my machine.... >>>For numpy 1.0 b1 I get >>> >>> >>>In [241]:numpy.arccosh(0) >>>Out[241]:nan >>> >>> >>I get the same results, but this is logical... acosh is defined by >>0.5(exp(z) + exp(-z)) for z complex, and this cannot be smaller than 1 >>for z real, and never equal to 0, so arccosh(0) does not really make >>sense to me, >> >>cheers, >> >>David >>_______________________________________________ >>SciPy-user mailing list >>SciPy-user at scipy.org >>http://projects.scipy.org/mailman/listinfo/scipy-user >> >> >> > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From R.Springuel at umit.maine.edu Fri Aug 4 11:31:53 2006 From: R.Springuel at umit.maine.edu (R. Padraic Springuel) Date: Fri, 04 Aug 2006 11:31:53 -0400 Subject: [SciPy-user] Pycluster, Numeric, & numpy In-Reply-To: References: Message-ID: <44D36869.4020508@umit.maine.edu> I am indeed using a version that used a binary installer. I'm on a Windows machine and have zero experience with compilers, so I always use binary installs. It's a habit I should probably get out of, especially since I'm hoping to switch to a dual boot system soon. For now, however, the alongside install of Numeric seems to work. Thanks. -- R. Padraic Springuel Teaching Assistant Department of Physics and Astronomy University of Maine Bennett 214 Office Hours: By Appointment only during the Summer From jstrunk at enthought.com Fri Aug 4 13:00:14 2006 From: jstrunk at enthought.com (Jeff Strunk) Date: Fri, 4 Aug 2006 12:00:14 -0500 Subject: [SciPy-user] scipy site will be back up real soon now Message-ID: <200608041200.14489.jstrunk@enthought.com> Hello, The automatic update for apache had different behavior than the previous version. I am compiling a fixed version, so it should work again soon. Sorry for the inconvenience. Jeff Strunk IT Administrator Enthought, Inc. From rene.donner at mac.com Fri Aug 4 17:24:03 2006 From: rene.donner at mac.com (=?ISO-8859-1?Q?Ren=E9_Donner?=) Date: Fri, 4 Aug 2006 23:24:03 +0200 Subject: [SciPy-user] Using Mayavi's tvtk/mlab within pyGTK? Message-ID: hello, i have found this post http://www.serpia.org/pygtk on how to display a matplotlib figure inside a pyGTK window. Is it somehow possible to do the same with the mlab module from the Mayavi / tvtk visualization toolkit? (http://scipy.org/Cookbook/ MayaVi/mlab) Even a hack to render mlab's output to a png and display this in the pygtk window would be great! Any link or snippet is highly appreciated! Thanks a lot, rene From fredantispam at free.fr Fri Aug 4 17:44:16 2006 From: fredantispam at free.fr (fred) Date: Fri, 04 Aug 2006 23:44:16 +0200 Subject: [SciPy-user] jn & lpmn or sph_jn... In-Reply-To: <44D11E42.8090402@gmail.com> References: <44C931A6.5060608@free.fr> <91cf711d0607280551k295bebccw9f22a9d09944c97f@mail.gmail.com> <44D1150C.3090501@free.fr> <44D11682.5070009@gmail.com> <44D11A85.2050608@free.fr> <44D11E42.8090402@gmail.com> Message-ID: <44D3BFB0.4040708@free.fr> Robert Kern a ?crit : [snip] > Note that in the f2py interface that David wrote, you won't have to pass in the > dimension of x as an argument from Python; you just have to write the underlying > FORTRAN subroutine to do so. The f2py interface takes care of inferring the > dimensions of your inputs for you. 
That works quite fine, thanks to all you have helped/answered to me : I wrote lpmn3D() and sphjn3D() functions that fit my needs. Thanks again. Cheers, -- Fred. From prabhu at aero.iitb.ac.in Sat Aug 5 13:36:38 2006 From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran) Date: Sat, 5 Aug 2006 23:06:38 +0530 Subject: [SciPy-user] Using Mayavi's tvtk/mlab within pyGTK? In-Reply-To: References: Message-ID: <17620.55078.270955.808268@prpc.aero.iitb.ac.in> >>>>> "Ren?" == Ren? Donner writes: Ren?> hello, i have found this post Ren?> http://www.serpia.org/pygtk Ren?> on how to display a matplotlib figure inside a pyGTK window. Ren?> Is it somehow possible to do the same with the mlab module Ren?> from the Mayavi / tvtk visualization toolkit? Ren?> (http://scipy.org/Cookbook/ MayaVi/mlab) There is no native pyGTK port of the enthought tool suite UI tools (yet). However, you could try and use gtk.Plug and gtk.Socket to "plug" the wxPython widget into your GTK widget. I haven't tried this but it might just work. cheers, prabhu From rene.donner at mac.com Sat Aug 5 16:09:47 2006 From: rene.donner at mac.com (=?ISO-8859-1?Q?Ren=E9_Donner?=) Date: Sat, 5 Aug 2006 22:09:47 +0200 Subject: [SciPy-user] Using Mayavi's tvtk/mlab within pyGTK? In-Reply-To: <17620.55078.270955.808268@prpc.aero.iitb.ac.in> References: <17620.55078.270955.808268@prpc.aero.iitb.ac.in> Message-ID: <1B202D36-9494-4CA3-B60B-8D506832011C@mac.com> hi prabhu, thanks for the hint! i will try this and report on my results :-) rene Am 05.08.2006 um 19:36 schrieb Prabhu Ramachandran: >>>>>> "Ren?" == Ren? Donner writes: > > Ren?> hello, i have found this post > > Ren?> http://www.serpia.org/pygtk > > Ren?> on how to display a matplotlib figure inside a pyGTK window. > > Ren?> Is it somehow possible to do the same with the mlab module > Ren?> from the Mayavi / tvtk visualization toolkit? > Ren?> (http://scipy.org/Cookbook/ MayaVi/mlab) > > There is no native pyGTK port of the enthought tool suite UI tools > (yet). However, you could try and use gtk.Plug and gtk.Socket to > "plug" the wxPython widget into your GTK widget. I haven't tried this > but it might just work. > > cheers, > prabhu > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From fullung at gmail.com Sat Aug 5 19:51:47 2006 From: fullung at gmail.com (Albert Strasheim) Date: Sun, 6 Aug 2006 01:51:47 +0200 Subject: [SciPy-user] Building on Windows with the Intel Visual Fortran Compiler Message-ID: Hello all I'm trying to compile SciPy from SVN with MSVC and the Intel Visual Fortran Compiler, version 9.1. In the root of the SciPy source tree I have my site.cfg which contains: [blas_src] src_dirs = C:\home\albert\work2\blas [lapack_src] src_dirs = C:\home\albert\work2\lapack Inside the "Build Environment for Fortran IA-32 applications" command prompt I run the following command: python setup.py config --compiler=msvc --fcompiler=intelv build --compiler=msvc bdist_wininst (the build part doesn't seem to accept a --fcompiler option). 
This runs for a while and then it prints: Fortran f77 compiler: f77 -g -Wall -fno-second-underscore -mno-cygwin /w /I:C:\Program Files\VNI\CTT6.0\include\IA32 /fpe:3 /nologo -O2 -funroll-loops -fomit-frame-pointer -malign-double Fortran f90 compiler: ifort /w /I:C:\Program Files\VNI\CTT6.0\include\IA32 /fpe:3 /nologo /w /I:C:\Program Files\VNI\CTT6.0\include\IA32 /fpe:3 /nologo -O2 -funroll-loops -fomit-frame-pointer -malign-double Fortran fix compiler: ifort /w /I:C:\Program Files\VNI\CTT6.0\include\IA32 /fpe:3 /nologo /w /I:C:\Program Files\VNI\CTT6.0\include\IA32 /fpe:3 /nologo -O2 -funroll-loops -fomit-frame-pointer -malign-double followed by creating build\temp.win32-2.4 creating build\temp.win32-2.4\Lib creating build\temp.win32-2.4\Lib\fftpack creating build\temp.win32-2.4\Lib\fftpack\dfftpack compile options: '-c' f77:f77: Lib\fftpack\dfftpack\dcosqb.f Could not locate executable f77 Executable f77 does not exist which obviously isn't going to work, since I'm not using the MinGW compiler, nor did I specify it anywhere. The IntelVisualFCompiler class in intel.py in numpy.distutils contains: fc_exe = 'ifl' ... executables = { ... 'compiler_f77' : [fc_exe,"-FI","-w90","-w95"], 'compiler_fix' : [fc_exe,"-FI","-4L72","-w"], 'compiler_f90' : [fc_exe], ... } which looks like it should work, but it doesn't. Any ideas? Regards, Albert From robert.kern at gmail.com Sat Aug 5 20:34:45 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 05 Aug 2006 19:34:45 -0500 Subject: [SciPy-user] Building on Windows with the Intel Visual Fortran Compiler In-Reply-To: References: Message-ID: <44D53925.3010605@gmail.com> Albert Strasheim wrote: > Hello all > > I'm trying to compile SciPy from SVN with MSVC and the Intel Visual Fortran > Compiler, version 9.1. > > In the root of the SciPy source tree I have my site.cfg which contains: > > [blas_src] > src_dirs = C:\home\albert\work2\blas > [lapack_src] > src_dirs = C:\home\albert\work2\lapack > > Inside the "Build Environment for Fortran IA-32 applications" command prompt > I run the following command: > > python setup.py config --compiler=msvc --fcompiler=intelv build > --compiler=msvc bdist_wininst > > (the build part doesn't seem to accept a --fcompiler option). It should be on build_clib and build_ext, not build. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From fullung at gmail.com Sat Aug 5 20:48:59 2006 From: fullung at gmail.com (Albert Strasheim) Date: Sun, 6 Aug 2006 02:48:59 +0200 Subject: [SciPy-user] Building on Windows with the Intel Visual Fortran Compiler In-Reply-To: <44D53925.3010605@gmail.com> Message-ID: Hello all > > (the build part doesn't seem to accept a --fcompiler option). > > It should be on build_clib and build_ext, not build. Ah, thanks. I'll update the wiki when I get it building. New error: Traceback (most recent call last): File "setup.py", line 55, in ? 
setup_package() File "setup.py", line 47, in setup_package configuration=configuration ) File "C:\Python24\Lib\site-packages\numpy\distutils\core.py", line 174, in setup return old_setup(**new_attr) File "C:\Python24\lib\distutils\core.py", line 149, in setup dist.run_commands() File "C:\Python24\lib\distutils\dist.py", line 946, in run_commands self.run_command(cmd) File "C:\Python24\lib\distutils\dist.py", line 966, in run_command cmd_obj.run() File "C:\Python24\Lib\site-packages\numpy\distutils\command\build_clib.py", line 72, in run force=self.force) File "C:\Python24\Lib\site-packages\numpy\distutils\fcompiler\__init__.py", line 654, in new_fcompiler __import__ (module_name) File "C:\Python24\Lib\site-packages\numpy\distutils\fcompiler\intel.py", line 123, in ? class IntelVisualFCompiler(FCompiler): File "C:\Python24\Lib\site-packages\numpy\distutils\fcompiler\intel.py", line 133, in IntelVisualFCompiler ar_exe = MSVCCompiler().lib AttributeError: MSVCCompiler instance has no attribute 'lib' Cheers, Albert From robert.kern at gmail.com Sat Aug 5 21:09:35 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 05 Aug 2006 20:09:35 -0500 Subject: [SciPy-user] Building on Windows with the Intel Visual Fortran Compiler In-Reply-To: References: Message-ID: <44D5414F.2020904@gmail.com> Albert Strasheim wrote: > Hello all > >>> (the build part doesn't seem to accept a --fcompiler option). >> It should be on build_clib and build_ext, not build. > > Ah, thanks. I'll update the wiki when I get it building. New error: > > Traceback (most recent call last): > File "C:\Python24\Lib\site-packages\numpy\distutils\fcompiler\intel.py", > line 133, in IntelVisualFCompiler > ar_exe = MSVCCompiler().lib > AttributeError: MSVCCompiler instance has no attribute 'lib' That usually means that that compiler never got configured. Check the messages distutils spits out when it's trying to find a C compiler. You will probably want to set --compiler on build_clib and build_ext rather than build, too. I think the passing of information between distutils commands is a little screwed up; I've never had problems when I explicitly set that information on build_clib and build_ext, though. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From fullung at gmail.com Sat Aug 5 21:22:59 2006 From: fullung at gmail.com (Albert Strasheim) Date: Sun, 6 Aug 2006 03:22:59 +0200 Subject: [SciPy-user] Building on Windows with the Intel Visual Fortran Compiler In-Reply-To: <44D5414F.2020904@gmail.com> Message-ID: Hello all > > AttributeError: MSVCCompiler instance has no attribute 'lib' > > That usually means that that compiler never got configured. Check the > messages > distutils spits out when it's trying to find a C compiler. You will > probably > want to set --compiler on build_clib and build_ext rather than build, too. > I > think the passing of information between distutils commands is a little > screwed > up; I've never had problems when I explicitly set that information on > build_clib > and build_ext, though. I was doing this already: python setup.py config --compiler=msvc --fcompiler=intelv build_clib --compiler=msvc --fcompiler=intelv build_ext --compiler=msvc --fcompiler=intelv bdist_wininst Pertinent distutils output: ... running config running build_clib No module named msvccompiler in numpy.distutils, trying from distutils.. 
customize MSVCCompiler customize MSVCCompiler using build_clib 0 1 Could not locate executable ifc Could not locate executable efort Could not locate executable efc Could not locate executable efort Could not locate executable efc Traceback (most recent call last): ... File "C:\Python24\Lib\site-packages\numpy\distutils\fcompiler\intel.py", line 133, in IntelVisualFCompiler ar_exe = MSVCCompiler().lib AttributeError: MSVCCompiler instance has no attribute 'lib' Cheers, Albert From fullung at gmail.com Sat Aug 5 21:33:59 2006 From: fullung at gmail.com (Albert Strasheim) Date: Sun, 6 Aug 2006 03:33:59 +0200 Subject: [SciPy-user] Building on Windows with the Intel Visual FortranCompiler In-Reply-To: Message-ID: I think intel.py is simply wrong. It does: ar_exe = 'lib.exe' fc_exe = 'ifl' if sys.platform=='win32': from distutils.msvccompiler import MSVCCompiler ar_exe = MSVCCompiler().lib ar_exe is set already, so why go look for a value for it in the MSVCCompiler instance? Cheers, Albert > I was doing this already: > > python setup.py config --compiler=msvc --fcompiler=intelv build_clib > --compiler=msvc --fcompiler=intelv build_ext --compiler=msvc > --fcompiler=intelv bdist_wininst From fullung at gmail.com Sat Aug 5 21:45:54 2006 From: fullung at gmail.com (Albert Strasheim) Date: Sun, 6 Aug 2006 03:45:54 +0200 Subject: [SciPy-user] Building on Windows with the Intel VisualFortranCompiler In-Reply-To: Message-ID: Hello all > -----Original Message----- > From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] > On Behalf Of Albert Strasheim > Sent: 06 August 2006 03:34 > To: 'SciPy Users List' > Subject: Re: [SciPy-user] Building on Windows with the Intel > VisualFortranCompiler > > I think intel.py is simply wrong. It does: > > ar_exe = 'lib.exe' > fc_exe = 'ifl' > if sys.platform=='win32': > from distutils.msvccompiler import MSVCCompiler > ar_exe = MSVCCompiler().lib > > ar_exe is set already, so why go look for a value for it in the > MSVCCompiler instance? Removing the code mentioned above seemed to fix the problem. Filed NumPy ticket #234 for this one. http://projects.scipy.org/scipy/numpy/ticket/234 Unfortunately, it seems the special/cephes/const.c code contains some constructs that MSVC doesn't accept. I reported these here some months ago: http://projects.scipy.org/scipy/scipy/ticket/12 Regards, Albert From gael.varoquaux at normalesup.org Sun Aug 6 16:40:39 2006 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sun, 6 Aug 2006 22:40:39 +0200 Subject: [SciPy-user] Accelerating the calculation of a function Message-ID: <20060806204039.GD24568@clipper.ens.fr> Hi list, I am integrating an ODE. I thus have a complicated function that I call a lot of times. What limits my execution times (currently too high) is the call to this function, which happens a great number of times. I therefore need to speed this up. If it where a simple function I could use weave.inline or weave.blitz but this is a rather complicated function involving a lot of python magic that I would be very happy to keep ( things such as : sum( [F(k) for k in lasers]) ). How can I speed this up without rendering my function unreadable ? Is there a compiler (Just in time, maybe) that could speed this up ? Is psyco of any use ? Is their a way to inline my functions (I would like to keep them as functions for the sake of readability) ? Any tricks or ideas appreciated. 
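To make the pattern in question concrete, here is a small sketch with hypothetical stand-ins for F and lasers (the names and shapes are invented for illustration); the replies below suggest the array form whenever the per-laser loop can be pushed into NumPy:

import numpy as np

lasers = np.array([1.0, 2.0, 3.0])       # one parameter per laser (made up)

def F_one(k):
    return k * np.eye(2)                 # called once per laser

def F_all(ks):
    # the same result for all lasers at once, shape (len(ks), 2, 2)
    return ks[:, None, None] * np.eye(2)

total = sum(F_one(k) for k in lasers)    # readable, but one Python-level call per laser
total_vec = F_all(lasers).sum(axis=0)    # single vectorised call, same result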
Cheers -- Gaël From simon at arrowtheory.com Sun Aug 6 20:57:56 2006 From: simon at arrowtheory.com (Simon Burton) Date: Mon, 7 Aug 2006 10:57:56 +1000 Subject: [SciPy-user] Accelerating the calculation of a function In-Reply-To: <20060806204039.GD24568@clipper.ens.fr> References: <20060806204039.GD24568@clipper.ens.fr> Message-ID: <20060807105756.0bae0871.simon@arrowtheory.com> On Sun, 6 Aug 2006 22:40:39 +0200 Gael Varoquaux wrote: > Hi list, > > I am integrating an ODE. I thus have a complicated function that > I call a lot of times. What limits my execution times (currently too > high) is the call to this function, which happens a great number of > times. I therefore need to speed this up. > > If it were a simple function I could use weave.inline or weave.blitz > but this is a rather complicated function involving a lot of python > magic that I would be very happy to keep ( things such as : > sum( [F(k) for k in lasers]) ). > > How can I speed this up without rendering my function unreadable ? Is > there a compiler (Just in time, maybe) that could speed this up ? Is > psyco of any use ? My tool of choice has been pyrex. (Note that it cannot handle list comprehensions.) Psyco does not understand numpy arrays (read: slow), but it does understand (ie. optimize for) the python array type. > > Is there a way to inline my functions (I would like to keep them as > functions for the sake of readability) ? Yes, I hear you. I recently found this bytecodehacks/psyco post: http://groups.google.com/group/comp.lang.python/browse_frm/thread/eab35852b7f1e98c/3174613b8b02f5d9 Simon. -- Simon Burton, B.Sc. Licensed PO Box 8066 ANU Canberra 2601 Australia Ph. 61 2 6249 6940 http://arrowtheory.com From elcorto at gmx.net Sun Aug 6 23:26:04 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Mon, 07 Aug 2006 05:26:04 +0200 Subject: [SciPy-user] Accelerating the calculation of a function In-Reply-To: <20060806204039.GD24568@clipper.ens.fr> References: <20060806204039.GD24568@clipper.ens.fr> Message-ID: <44D6B2CC.90304@gmx.net> Gael Varoquaux wrote: > If it were a simple function I could use weave.inline or weave.blitz > but this is a rather complicated function involving a lot of python > magic that I would be very happy to keep ( things such as : > sum( [F(k) for k in lasers]) ). You're most probably doing it already (I just wondered about the list comprehension): Can you do numpy.sum(F(lasers)) in your function (that is, is 'lasers' a simple array)? Note that numpy.sum (or a.sum()) is much faster than the builtin sum() for large arrays. Also use numpy.amin() instead of min() and stuff like that. HTH steve -- Random number generation is the art of producing pure gibberish as quickly as possible. From gael.varoquaux at normalesup.org Mon Aug 7 02:27:49 2006 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 7 Aug 2006 08:27:49 +0200 Subject: [SciPy-user] Accelerating the calculation of a function In-Reply-To: <44D6B2CC.90304@gmx.net> References: <20060806204039.GD24568@clipper.ens.fr> <44D6B2CC.90304@gmx.net> Message-ID: <20060807062749.GA6114@clipper.ens.fr> On Mon, Aug 07, 2006 at 05:26:04AM +0200, Steve Schmerler wrote: >( things such as : sum( [F(k) for k in lasers]) ). > You're most probably doing it already (I just wondered about the list > comprehension): Can you do numpy.sum(F(lasers)) in your function (that > is, is 'lasers' a simple array)? I have been turning this around in my head wondering whether to do it or not.
The problem is that F is already a 2D array, and that I would have to add an extra dimension to it to be able to sum it (and it has matrix and array products in its definition, so writing it starts getting really ugly, and my current problem is rather finding the errors in the different formulas). len(laser) = 6, so this shouldn't be to bad. Actually the best way to implement that sum would a macro (in another language then python). Hint for the numpy developpers: maybe a nice function to make code readable and fast would be to have a function that does this with arrays without having to write F compatible with arrays (ie no "frames are not aligned" problems). Something like: sumover(F,lasers,1) where F would be a callable, lasers an array, and the last argument (optional) the direction of the array in which the sum should happen. I do not know if this is possible but this would be useful. -- Ga?l From stephen.kelly2 at ucdconnect.ie Mon Aug 7 03:00:12 2006 From: stephen.kelly2 at ucdconnect.ie (Stephen Kelly) Date: Mon, 07 Aug 2006 09:00:12 +0200 Subject: [SciPy-user] Use of Scipy in a students final year mechanical engineering project. Message-ID: Hi, I sent this email to Eric Jones, but he recommended that I send it to this mailing list instead. I'm trying to see if scipy could fit into a final year project for my mechanical engineering course. I am a student of mechanical engineering in UCD Dublin , and part of my course involves doing a final year project. We do very little programming in my course, but as coding and python in particular is something that interests me, I'd like to have some involvement in python in my final year project. Because my course is not centered around coding, I think I'd be more likely to be able to do a project on applying a python application like scipy to a problem, or using python as glue in a scripting applicatin. I found scipy while searching for an idea for a project that I could do in python, while staying within the scope of mechanical engineering and my course. I'm still no closer to an idea, so I thought I'd write to you to see if you had any ideas of what I might be able to do as a mechanical engineer with python/scipy. Something that I think might be plausible might be a simulation or computational analysis of some kind, but I would probably need to justify using python for whatever I do rather than MatLab, which is the industry standard, and the program that my lecturers are already familiar with. I have never used MatLab myself, so I'm not sure what it offers. I would appreciate any thoughts you might have on this. I emailed one of my lecturers already, who asked me for more specific details and to give thought to the type of project I would like to do. Kind Regards, Stephen Kelly -------------- next part -------------- An HTML attachment was scrubbed... URL: From willemjagter at gmail.com Mon Aug 7 03:53:31 2006 From: willemjagter at gmail.com (William Hunter) Date: Mon, 7 Aug 2006 09:53:31 +0200 Subject: [SciPy-user] Use of Scipy in a students final year mechanical engineering project. In-Reply-To: References: Message-ID: <8b3894bc0608070053w25cc8828h9c02620e14bfd7ab@mail.gmail.com> Stephen; I'm a mechanical engineer; it's interesting to see that Matlab is also the default choice in your part of the world and that programming doesn't feature very well in your mechanical course, the same here... 1) Why don't you you do a small FEA program, assuming that you have knowledge of this? 
Something simple where you can define 2D geometry with AutoCAD or anything that can save in DXF format. You'll have to read in the DXF file, represent with Matplotlib, mesh it (perhaps using Delaunay), apply loads and constraints and solve the system. Or perhaps just some aspects of this? 2) There's a whole lot of heat and mass transfer problems I can think of (see book by Mills). To justify this simply state that as a practising engineer entering into your first job, you're boss is not going to buy Matlab for you just so that you can solve a few problems. He'll most likely expect you to use Excel (since everyone has that), but Python is free and can do the job... Regards, William Hunter On 07/08/06, Stephen Kelly wrote: > > > > Hi, > > I sent this email to Eric Jones, but he recommended that I send it to this > mailing list instead. I'm trying to see if scipy could fit into a final year > project for my mechanical engineering course. > > I am a student of mechanical engineering in UCD Dublin , and part of my > course involves doing a final year project. We do very little programming in > my course, but as coding and python in particular is something that > interests me, I'd like to have some involvement in python in my final year > project. > > Because my course is not centered around coding, I think I'd be more likely > to be able to do a project on applying a python application like scipy to a > problem, or using python as glue in a scripting applicatin. I found scipy > while searching for an idea for a project that I could do in python, while > staying within the scope of mechanical engineering and my course. I'm still > no closer to an idea, so I thought I'd write to you to see if you had any > ideas of what I might be able to do as a mechanical engineer with > python/scipy. > > Something that I think might be plausible might be a simulation or > computational analysis of some kind, but I would probably need to justify > using python for whatever I do rather than MatLab, which is the industry > standard, and the program that my lecturers are already familiar with. I > have never used MatLab myself, so I'm not sure what it offers. > > I would appreciate any thoughts you might have on this. I emailed one of my > lecturers already, who asked me for more specific details and to give > thought to the type of project I would like to do. > > Kind Regards, > > Stephen Kelly > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > -- Regards, WH From gael.varoquaux at normalesup.org Mon Aug 7 04:40:27 2006 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 7 Aug 2006 10:40:27 +0200 Subject: [SciPy-user] Use of Scipy in a students final year mechanical engineering project. In-Reply-To: <8b3894bc0608070053w25cc8828h9c02620e14bfd7ab@mail.gmail.com> References: <8b3894bc0608070053w25cc8828h9c02620e14bfd7ab@mail.gmail.com> Message-ID: <20060807084027.GB6114@clipper.ens.fr> On Mon, Aug 07, 2006 at 09:53:31AM +0200, William Hunter wrote: > 1) Why don't you you do a small FEA program, assuming that you have > knowledge of this? Something simple where you can define 2D geometry > with AutoCAD or anything that can save in DXF format. You'll have to > read in the DXF file, represent with Matplotlib, mesh it (perhaps > using Delaunay), apply loads and constraints and solve the system. Or > perhaps just some aspects of this? 
If something that reads DXF files and is able to plot them with matplotlib (and describe them with numpy arrays) come out with an open source licence, I think many people would be interested. I know the guys at enthought have some code that does something like this, so they may be able to help you with that project. However, they make a living selling this thing, so I don't know how much of their knowledge they will be willing to share (and that's quite understandable). -- Ga?l From arserlom at gmail.com Mon Aug 7 06:56:52 2006 From: arserlom at gmail.com (Armando Serrano Lombillo) Date: Mon, 7 Aug 2006 12:56:52 +0200 Subject: [SciPy-user] Use of Scipy in a students final year mechanical engineering project. In-Reply-To: References: Message-ID: On 8/7/06, Stephen Kelly wrote: > Something that I think might be plausible might be a simulation or > computational analysis of some kind, but I would probably need to justify > using python for whatever I do rather than MatLab, which is the industry > standard, and the program that my lecturers are already familiar with. I > have never used MatLab myself, so I'm not sure what it offers. I don't think you "need to justify using Python for whatever you do rather than MatLab", specially since you are doing this in the scope of a university project. I have used Python to solve some problems for my university courses and have had no problems at all (Civil engineering in UPV ). Just write an introductory paragraph with a general description of the language (see for example www.python.org) and of the packages you will be using so that the lecturers know what you are using. And if you do need to justify it's usage: it's more versatile than MatLab plus it's allway better to know Python and MatLab than just knowing MatLab. And it's free. Armando Serrano -------------- next part -------------- An HTML attachment was scrubbed... URL: From travis at enthought.com Mon Aug 7 07:00:19 2006 From: travis at enthought.com (Travis N. Vaught) Date: Mon, 07 Aug 2006 06:00:19 -0500 Subject: [SciPy-user] Use of Scipy in a students final year mechanical engineering project. In-Reply-To: <20060807084027.GB6114@clipper.ens.fr> References: <8b3894bc0608070053w25cc8828h9c02620e14bfd7ab@mail.gmail.com> <20060807084027.GB6114@clipper.ens.fr> Message-ID: <44D71D43.3010408@enthought.com> Gael Varoquaux wrote: > On Mon, Aug 07, 2006 at 09:53:31AM +0200, William Hunter wrote: > >> 1) Why don't you you do a small FEA program, assuming that you have >> knowledge of this? Something simple where you can define 2D geometry >> with AutoCAD or anything that can save in DXF format. You'll have to >> read in the DXF file, represent with Matplotlib, mesh it (perhaps >> using Delaunay), apply loads and constraints and solve the system. Or >> perhaps just some aspects of this? >> > > If something that reads DXF files and is able to plot them with > matplotlib (and describe them with numpy arrays) come out with an open > source licence, I think many people would be interested. > > I know the guys at enthought have some code that does something like > this, so they may be able to help you with that project. However, they > make a living selling this thing, so I don't know how much of their > knowledge they will be willing to share (and that's quite > understandable). > We actually read/write STL format files in our recent project and use tvtk for the visualization. 
Some of this _is_ necessarily closed, but we'd be glad to discuss what we've done and there's an off-chance there may be some things we could share. Best, Travis From fredantispam at free.fr Mon Aug 7 07:49:41 2006 From: fredantispam at free.fr (fred) Date: Mon, 07 Aug 2006 13:49:41 +0200 Subject: [SciPy-user] spherical harmonic Message-ID: <44D728D5.3050407@free.fr> Hi, I'm playing with spherical harmonic (special.sph_harm()) and I don't understand what I get. >From Wolfram (http://mathworld.wolfram.com/SphericalHarmonic.html), Re_Y11.png, Im_Y11.png & mod_Y11.png seems to be correct http://fredantispam.free.fr/Re_Y11.png http://fredantispam.free.fr/Im_Y11.png http://fredantispam.free.fr/mod_Y11.png when I use r = (sqrt((2*n+1)/4.0/pi)*exp(0.5*(gammaln(n-m+1)-gammaln(n+m+1)))*sin(phi)*exp(1j*m*theta))**2 But when I use special.sph_harm(m,n,theta,phi), I get this http://fredantispam.free.fr/Re_Y11_2.png http://fredantispam.free.fr/Im_Y11_2.png http://fredantispam.free.fr/mod_Y11_2.png Real & imaginary parts don't seem to be correct. What I'm doing wrong ? Cheers, Here the small code I have used. from sys import argv from scipy import pi, cos, sin, exp, sqrt, mgrid, real, imag from scipy.special import sph_harm, gammaln from enthought.tvtk.tools import mlab, ivtk def main(): m = int(argv[1]) n = int(argv[2]) gui = mlab.GUI() window = ivtk.IVTKWithCrustAndBrowser(size=(1175,773)) window.open() fig = mlab.Figure(window.scene) o = mlab.Outline() fig.add(o) dphi, dtheta = pi/180, pi/180 [phi, theta] = mgrid[0:pi+dphi:dphi, 0:2*pi+dtheta:dtheta] ### cf. Wolfram : theta <--> phi r = real(sph_harm(m,n,theta,phi))**2 # r = real(sqrt((2*n+1)/4.0/pi)*exp(0.5*(gammaln(n-m+1)-gammaln(n+m+1)))*sin(phi)*exp(1j*m*theta))**2 x = r*cos(theta)*sin(phi) y = r*sin(theta)*sin(phi) z = r*cos(phi) s = mlab.Surf(x, y, z, z) fig.add(s) window.scene.x_plus_view() window.scene.camera.parallel_projection = True window.scene.camera.azimuth(-62) window.scene.camera.elevation(19.5) gui.start_event_loop() if __name__ == '__main__': main() -- Fred. From alex.liberzon at gmail.com Mon Aug 7 08:13:47 2006 From: alex.liberzon at gmail.com (Alex Liberzon) Date: Mon, 7 Aug 2006 14:13:47 +0200 Subject: [SciPy-user] Use of Scipy in a students final year mechanical engineering project Message-ID: <775f17a80608070513y7f1bc3eeh8303e2db573a0378@mail.gmail.com> I'm from mechanical engineering field myself, from the fluid mechanical part of it and I'm working on a project that might be useful for you to see as a 'test case'. The project is about one of the experimental techniques in fluid mechanics, called Particle Image Velocimetry and it is the state-of-the-art method to measure spatial and temporal distribution of the flow velocity fields. You may find thousands of references about the method all around the web. The 'test case' I'm talking about is an algorithm used for analysing the images of particles in the flow. The original project, called URAPIV http://urapiv.wordpress.com was developed (proudly to say among the first) in Matlab, to allow various users to test their commercial software, to compare or to develop further different algorithms for this digital image correlation method. Recently we decided that Mathworks's licensing is too strict and we would like to let the users not only open source, but also a real open source project, one they can use without paying for 'beer'. So, we develop PyPIV, a clone of URAPIV in Python with Scipy/Numpy. 
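For readers who have not met the method, the core numerical step in such PIV codes is a 2-D cross-correlation of small interrogation windows taken from two successive images; a generic FFT-based sketch of that step (not PyPIV or URAPIV code, just the idea) could look like this:

import numpy as np

def window_displacement(win_a, win_b):
    # estimate the particle-pattern shift between two interrogation windows
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    # circular cross-correlation via the convolution theorem
    corr = np.fft.irfft2(np.conj(np.fft.rfft2(a)) * np.fft.rfft2(b), s=a.shape)
    corr = np.fft.fftshift(corr)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    dy, dx = np.array(peak) - np.array(corr.shape) // 2
    return dx, dy

Splitting both frames into, say, 32x32 windows and applying this to each pair of windows gives the raw velocity field that the algorithms listed below then refine.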
We already have one more developer from outside the group, from Australia, a Master student for Aeronautics, and we'd like to have more developers. You can download both packages and demo images and start playing with it. The further development which is necessary: 1) develop drivers and image acquisition to allow this project be complete package, allowing open source and free educational tool for high schools, universities, etc. 2) develop GUI in Python for easy installation, use and analysis 3) develop new algorithms, e.g. iterative, multi-resolution, window shifting, pattern matching, optical flow, etc. 4) develop it into a new package, for strain measurements, using the basic ability to do 2D correlation and followed by a gradients (strain, stress) measurements of tenciles, etc. 5) develop other techniques of the similar kind, e.g. particle tracking velocimetry in three dimensions and similar. ..... The list is actually very long and I'll skip the rest. If you're interested, you can contact us offline (urapiv at gmail dot com) Best regards and good luck with Python/Numpy/Scipy. It does not need the justification, it's great. Alex From hasslerjc at adelphia.net Mon Aug 7 09:43:38 2006 From: hasslerjc at adelphia.net (John Hassler) Date: Mon, 07 Aug 2006 09:43:38 -0400 Subject: [SciPy-user] Use of Scipy in a students final year mechanical engineering project. In-Reply-To: References: Message-ID: <44D7438A.8050803@adelphia.net> An HTML attachment was scrubbed... URL: From fredantispam at free.fr Mon Aug 7 09:45:02 2006 From: fredantispam at free.fr (fredantispam at free.fr) Date: Mon, 07 Aug 2006 15:45:02 +0200 Subject: [SciPy-user] symbolic math & scipy (& maxima)... Message-ID: <1154958302.44d743de279be@imp2-g19.free.fr> Hi, I need to find out the (first) derivative of associated Legendre polynomials defined in lpmn(m,n,x). The problem is that I want to derive vs theta where x=cos(theta). For instance, for m=1 & n=0, P_0^1(cos(theta)) = cos(theta) -> P_0^1'(cos(theta)) = -sin(theta) I know that scipy has not symbolic math module (and pythonica seems to be quite old & unmaintained). But I use maxima. So I wonder if it would be possible to run maxima, derive assoc_legendre_p(n,m,cos(theta)) vs. theta and "send" the result to python. How could I do that ? Cheers, From wdj at usna.edu Mon Aug 7 09:55:20 2006 From: wdj at usna.edu (David Joyner) Date: Mon, 07 Aug 2006 09:55:20 -0400 Subject: [SciPy-user] symbolic math & scipy (& maxima)... In-Reply-To: <1154958302.44d743de279be@imp2-g19.free.fr> References: <1154958302.44d743de279be@imp2-g19.free.fr> Message-ID: <44D74648.1090909@usna.edu> Have you looked into SAGE (sage.scipy.org)? SAGE is written in Python but includes maxima and has some special functions (wrapped from maxima and pari). I'm not excatly sure what your question is, but SAGE might help do what you want. +++++++++++++++++++++++++++++++++++++ fredantispam at free.fr wrote: > Hi, > > I need to find out the (first) derivative of associated Legendre polynomials > defined in lpmn(m,n,x). > > The problem is that I want to derive vs theta where x=cos(theta). > > For instance, for m=1 & n=0, > P_0^1(cos(theta)) = cos(theta) -> P_0^1'(cos(theta)) = -sin(theta) > > I know that scipy has not symbolic math module (and pythonica seems > to be quite old & unmaintained). > > But I use maxima. > > So I wonder if it would be possible to run maxima, derive > assoc_legendre_p(n,m,cos(theta)) vs. theta and "send" the result to python. > > > How could I do that ? 
> > Cheers, > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From willemjagter at gmail.com Mon Aug 7 10:10:13 2006 From: willemjagter at gmail.com (William Hunter) Date: Mon, 7 Aug 2006 16:10:13 +0200 Subject: [SciPy-user] Use of Scipy in a students final year mechanical engineering project. In-Reply-To: <44D7438A.8050803@adelphia.net> References: <44D7438A.8050803@adelphia.net> Message-ID: <8b3894bc0608070710v5f457541x7223e2256c83c033@mail.gmail.com> Another thought(s). Something you're likely to come across if you end up in the mechanical design field, is the use of codes... Why don't you write a program that can calculate the dimensions of a pressure vessel, according to ASME, for example? Or a gear design program which will output the 3D gear? A program to design offshore containers to DNV's CN 2.7-1? Go look at some of the add-ins available for programs like SolidWorks and Solid Edge, you might find some ideas here too. For programs like these the user (probably yourself one day) would typically want an intuitive UI, some form of customisation, be it by editing a text file with material spec's or a file specifying pipe and/or flange dimensions, and DEFINITELY output in the form of a CAD file (I would say DXF/IGES for 2D and StL/XML/VRML for 3D. You can attempt Parasolid, ACIS or STEP, but these are not as simple as StL and last mentioned is in wide use by the rapid prototyping industry). One would possibly also want to view (matplotlib) the pressure vessel before saving it as a DXF file for instance. Maybe add an option to enter "Company and Project Details" when saving the file as a DXF... There's a lot of options! Regards, William On 07/08/06, John Hassler wrote: > > You've gotten a lot of very good responses already. I'd just make a couple > of comments. > > I'm a former prof. of ChE (retired). When I started out (a long time ago), > industrial plants were built in several stages: beaker, bucket, barrel, > (maybe more), final plant. You would start with bench-scale experiments in > the laboratory, and work through progressively larger "pilot plants" until > you were confident that you understood the process well enough to commit to > building the full-scale plant. Now, design is done through computer > modeling and simulation, and pilot plants are rarely, if ever, used. The > same is true in all fields of engineering. For example, computer chips are > simulated in detail before being tried "in silicon." > > In spite of this, most engineers never learn to program at any level beyond > the most basic. I'm not familiar with your field, but in ChE, there are > "black box" programs (Aspen, ChemCad) which will do very sophisticated > modeling, without requiring much knowledge on the part of the operator. I > think this is unfortunate ... but then, I'm old fashioned. > > I'm a visiting prof. here at VaTech. The engineering college has > standardized on Matlab. I don't like it much, for two reasons. First, it's > outrageously expensive. (Scilab, as an example, is free, similar to Matlab > in basic capabilities, works under both Windows and Linux, and is certainly > sufficient for any but the most demanding computational problems.) Second, > Matlab makes it difficult to write well-structured programs of any > appreciable size. 
Python-Scipy, on the other hand, is a "real" programming > language, which can handle VERY large programs with "grace and beauty," and > can do small programs with convenience. The "immediate" calculational > abilities (IDLE) are more convenient than those of Matlab, since you can > define functions "on the fly" ... Matlab requires "m-files" for functions > ... and I've used Python plus Numeric to solve some reasonably large FEM > examples in acceptable times. > > So if you needed any more encouragement to look into a modeling problem with > Python, perhaps this will help. > > (As I re-read what I've written, I guess I could have just said, "Computing > .. important. Python... good." Professors are long winded ..... ) > > john > > > > Stephen Kelly wrote: > > > Hi, > > I sent this email to Eric Jones, but he recommended that I send it to this > mailing list instead. I'm trying to see if scipy could fit into a final year > project for my mechanical engineering course. > > I am a student of mechanical engineering in UCD Dublin , and part of my > course involves doing a final year project. We do very little programming in > my course, but as coding and python in particular is something that > interests me, I'd like to have some involvement in python in my final year > project. > > Because my course is not centered around coding, I think I'd be more likely > to be able to do a project on applying a python application like scipy to a > problem, or using python as glue in a scripting applicatin. I found scipy > while searching for an idea for a project that I could do in python, while > staying within the scope of mechanical engineering and my course. I'm still > no closer to an idea, so I thought I'd write to you to see if you had any > ideas of what I might be able to do as a mechanical engineer with > python/scipy. > > Something that I think might be plausible might be a simulation or > computational analysis of some kind, but I would probably need to justify > using python for whatever I do rather than MatLab, which is the industry > standard, and the program that my lecturers are already familiar with. I > have never used MatLab myself, so I'm not sure what it offers. > > I would appreciate any thoughts you might have on this. I emailed one of my > lecturers already, who asked me for more specific details and to give > thought to the type of project I would like to do. > > Kind Regards, > > Stephen Kelly > ________________________________ > > _______________________________________________ SciPy-user > mailing list SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > -- Regards, WH From gnata at obs.univ-lyon1.fr Mon Aug 7 10:32:17 2006 From: gnata at obs.univ-lyon1.fr (Xavier Gnata) Date: Mon, 07 Aug 2006 16:32:17 +0200 Subject: [SciPy-user] symbolic math & scipy (& maxima)... In-Reply-To: <1154958302.44d743de279be@imp2-g19.free.fr> References: <1154958302.44d743de279be@imp2-g19.free.fr> Message-ID: <44D74EF1.3040502@obs.univ-lyon1.fr> Hi, I perform the same kind of job with mathematica and python : 1) Compute what you want using mathematica/maxima/... 2) export the result as text in a syntax as close as possible as python once (mathematica can export to C code for instance) 3) Perform tiny modification on the resulting strings so that it corresponds with python syntax. 4) use exec() and that's it. 
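As a rough sketch of steps 3 and 4 above, assuming the CAS has already written
out an expression string (hard-coded below rather than read from real
mathematica/maxima output), eval() with an explicit namespace is usually
enough to turn it into a vectorised numpy computation:

import numpy

# pretend this string is what the CAS exported in step 2
expr = "-sin(theta)"

# step 3: map the CAS's function names onto numpy equivalents
namespace = {"sin": numpy.sin, "cos": numpy.cos}

# step 4: evaluate the expression on an array of angles
namespace["theta"] = numpy.linspace(0.0, numpy.pi, 5)
print eval(expr, namespace)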
Maybe it is uglly but it is powerfull and simple ;) Xavier. > Hi, > > I need to find out the (first) derivative of associated Legendre polynomials > defined in lpmn(m,n,x). > > The problem is that I want to derive vs theta where x=cos(theta). > > For instance, for m=1 & n=0, > P_0^1(cos(theta)) = cos(theta) -> P_0^1'(cos(theta)) = -sin(theta) > > I know that scipy has not symbolic math module (and pythonica seems > to be quite old & unmaintained). > > But I use maxima. > > So I wonder if it would be possible to run maxima, derive > assoc_legendre_p(n,m,cos(theta)) vs. theta and "send" the result to python. > > > How could I do that ? > > Cheers, > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -- ############################################ Xavier Gnata CRAL - Observatoire de Lyon 9, avenue Charles Andr? 69561 Saint Genis Laval cedex Phone: +33 4 78 86 85 28 Fax: +33 4 78 86 83 86 E-mail: gnata at obs.univ-lyon1.fr ############################################ From gael.varoquaux at normalesup.org Mon Aug 7 10:39:59 2006 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 7 Aug 2006 16:39:59 +0200 Subject: [SciPy-user] symbolic math & scipy (& maxima)... In-Reply-To: <44D74EF1.3040502@obs.univ-lyon1.fr> References: <1154958302.44d743de279be@imp2-g19.free.fr> <44D74EF1.3040502@obs.univ-lyon1.fr> Message-ID: <20060807143959.GB23067@clipper.ens.fr> On Mon, Aug 07, 2006 at 04:32:17PM +0200, Xavier Gnata wrote: > Hi, > I perform the same kind of job with mathematica and python : > 1) Compute what you want using mathematica/maxima/... > 2) export the result as text in a syntax as close as possible as python > once (mathematica can export to C code for instance) > 3) Perform tiny modification on the resulting strings so that it > corresponds with python syntax. > 4) use exec() and that's it. Better still: use weave.inline, and pure c code. -- Ga?l From jelle.feringa at ezct.net Mon Aug 7 11:53:55 2006 From: jelle.feringa at ezct.net (Jelle Feringa / EZCT Architecture & Design Research) Date: Mon, 7 Aug 2006 17:53:55 +0200 Subject: [SciPy-user] ga examples broken? Message-ID: <006b01c6ba39$b09d0320$2701a8c0@OneArchitecture.local> I just checked in the scipy.ga package from the svn server, but failed to run the example: c:\Python24\Lib\site-packages\scipy\ga>python examples.py Warning: The key "migrants" in not a valid setting. The valid settings are ['pop_size', 'p_replace', 'p_cross', 'p_mutate', 'p_devia tion', 'gens', 'rand_seed', 'rand_alg', 'dbase', 'update_rate'] Warning: The key "num_pops" in not a valid setting. The valid settings are ['pop_size', 'p_replace', 'p_cross', 'p_mutate', 'p_devia tion', 'gens', 'rand_seed', 'rand_alg', 'dbase', 'update_rate'] Traceback (most recent call last): File "examples.py", line 64, in ? ex1() File "examples.py", line 60, in ex1 galg.evolve() File "c:\Python24\lib\site-packages\scipy\ga\algorithm.py", line 127, in evolv e self.initialize() File "c:\Python24\lib\site-packages\scipy\ga\algorithm.py", line 60, in initia lize if reseed: rv.initialize(seed = sd, algorithm = alg) AttributeError: 'module' object has no attribute 'initialize' In [2]: sp.__version__ Out[2]: '0.4.9' Might it possible broken, or is something the matter with my setup? Thanks, Jelle -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From haase at msg.ucsf.edu Mon Aug 7 12:16:53 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Mon, 7 Aug 2006 09:16:53 -0700 Subject: [SciPy-user] weave: how to give code to someone w/o compiler Message-ID: <200608070916.54015.haase@msg.ucsf.edu> Hi, I just starting to use weave - it seems to be the best thing since sliced bread ... First I have two questions: a) Is it possible to distribute modules using weave to other people who might NOT have a C compiler installed ? b) when I (or someone who does not have a C compiler !) change parts of that module that should not require a recompiling of the C part - is weave smart enough to recognise this ? Thanks, Sebastian Haase From travis at enthought.com Mon Aug 7 13:24:33 2006 From: travis at enthought.com (Travis N. Vaught) Date: Mon, 07 Aug 2006 12:24:33 -0500 Subject: [SciPy-user] weave: how to give code to someone w/o compiler In-Reply-To: <200608070916.54015.haase@msg.ucsf.edu> References: <200608070916.54015.haase@msg.ucsf.edu> Message-ID: <44D77751.9040404@enthought.com> Sebastian Haase wrote: > Hi, > I just starting to use weave - it seems to be the best thing since sliced > bread ... > First I have two questions: > a) Is it possible to distribute modules using weave to other people who might > NOT have a C compiler installed ? > > b) when I (or someone who does not have a C compiler !) change parts of that > module that should not require a recompiling of the C part - is weave smart > enough to recognise this ? > > Thanks, > Sebastian Haase > It's my understanding that a re-compile is triggered by a mismatch to a MD5 generated on the C string that is to be compiled and the cached MD5 for the expression. This would mean that only changes to that string would force a re-compile. However, even formatting changes (even to whitespace) in the C string force a recompile. As I'm not that familiar, this could be just hear-say. Travis From robert.kern at gmail.com Mon Aug 7 13:36:03 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 07 Aug 2006 12:36:03 -0500 Subject: [SciPy-user] weave: how to give code to someone w/o compiler In-Reply-To: <44D77751.9040404@enthought.com> References: <200608070916.54015.haase@msg.ucsf.edu> <44D77751.9040404@enthought.com> Message-ID: <44D77A03.50401@gmail.com> Travis N. Vaught wrote: > Sebastian Haase wrote: >> Hi, >> I just starting to use weave - it seems to be the best thing since sliced >> bread ... >> First I have two questions: >> a) Is it possible to distribute modules using weave to other people who might >> NOT have a C compiler installed ? >> >> b) when I (or someone who does not have a C compiler !) change parts of that >> module that should not require a recompiling of the C part - is weave smart >> enough to recognise this ? >> >> Thanks, >> Sebastian Haase >> > It's my understanding that a re-compile is triggered by a mismatch to a > MD5 generated on the C string that is to be compiled and the cached MD5 > for the expression. This would mean that only changes to that string > would force a re-compile. However, even formatting changes (even to > whitespace) in the C string force a recompile. The types of the inputs are also taken into account. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From fredantispam at free.fr Mon Aug 7 13:56:30 2006 From: fredantispam at free.fr (fred) Date: Mon, 07 Aug 2006 19:56:30 +0200 Subject: [SciPy-user] symbolic math & scipy (& maxima)... In-Reply-To: <44D74648.1090909@usna.edu> References: <1154958302.44d743de279be@imp2-g19.free.fr> <44D74648.1090909@usna.edu> Message-ID: <44D77ECE.9000508@free.fr> David Joyner a ?crit : > Have you looked into SAGE (sage.scipy.org)? SAGE is written > in Python but includes maxima and has some special functions > (wrapped from maxima and pari). I'm not excatly sure what your > question is, but SAGE might help do what you want. Looks like interesting... But I don't think it can fit my needs (maybe I'm wrong). Thanks anyway. -- Fred. From fredantispam at free.fr Mon Aug 7 13:59:12 2006 From: fredantispam at free.fr (fred) Date: Mon, 07 Aug 2006 19:59:12 +0200 Subject: [SciPy-user] symbolic math & scipy (& maxima)... In-Reply-To: <20060807143959.GB23067@clipper.ens.fr> References: <1154958302.44d743de279be@imp2-g19.free.fr> <44D74EF1.3040502@obs.univ-lyon1.fr> <20060807143959.GB23067@clipper.ens.fr> Message-ID: <44D77F70.3090805@free.fr> Gael Varoquaux a ?crit : > On Mon, Aug 07, 2006 at 04:32:17PM +0200, Xavier Gnata wrote: > >>Hi, > > >>I perform the same kind of job with mathematica and python : >>1) Compute what you want using mathematica/maxima/... >>2) export the result as text in a syntax as close as possible as python >>once (mathematica can export to C code for instance) >>3) Perform tiny modification on the resulting strings so that it >>corresponds with python syntax. >>4) use exec() and that's it. > > > Better still: use weave.inline, and pure c code. I read a little about weave.inline. But I don't understand the use of "pure c code". What do you mean ? How can I get the derivative of the associated Legendre polynomial vs theta ? -- Fred. From fredantispam at free.fr Mon Aug 7 14:00:47 2006 From: fredantispam at free.fr (fred) Date: Mon, 07 Aug 2006 20:00:47 +0200 Subject: [SciPy-user] symbolic math & scipy (& maxima)... In-Reply-To: <44D74EF1.3040502@obs.univ-lyon1.fr> References: <1154958302.44d743de279be@imp2-g19.free.fr> <44D74EF1.3040502@obs.univ-lyon1.fr> Message-ID: <44D77FCF.4070001@free.fr> Xavier Gnata a ?crit : > Hi, > > I perform the same kind of job with mathematica and python : > 1) Compute what you want using mathematica/maxima/... > 2) export the result as text in a syntax as close as possible as python > once (mathematica can export to C code for instance) > 3) Perform tiny modification on the resulting strings so that it > corresponds with python syntax. > 4) use exec() and that's it. > > Maybe it is uglly but it is powerfull and simple ;) Hmm, yes, I usually proceed like this ;-) Tricky but it should work. -- Fred. From gael.varoquaux at normalesup.org Mon Aug 7 14:02:16 2006 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Mon, 7 Aug 2006 20:02:16 +0200 Subject: [SciPy-user] symbolic math & scipy (& maxima)... In-Reply-To: <44D77F70.3090805@free.fr> References: <1154958302.44d743de279be@imp2-g19.free.fr> <44D74EF1.3040502@obs.univ-lyon1.fr> <20060807143959.GB23067@clipper.ens.fr> <44D77F70.3090805@free.fr> Message-ID: <20060807180216.GC23067@clipper.ens.fr> On Mon, Aug 07, 2006 at 07:59:12PM +0200, fred wrote: > Gael Varoquaux a ?crit : > > On Mon, Aug 07, 2006 at 04:32:17PM +0200, Xavier Gnata wrote: > >>Hi, > >>I perform the same kind of job with mathematica and python : > >>1) Compute what you want using mathematica/maxima/... 
> >>2) export the result as text in a syntax as close as possible as python > >>once (mathematica can export to C code for instance) > >>3) Perform tiny modification on the resulting strings so that it > >>corresponds with python syntax. > >>4) use exec() and that's it. > > Better still: use weave.inline, and pure c code. > I read a little about weave.inline. > But I don't understand the use of "pure c code". I was talking about Xavier's idea: do step 1 and 2 as he says, and use weave.inline to use the c code generated by mathematica. -- Ga?l From jelle.feringa at ezct.net Mon Aug 7 14:14:54 2006 From: jelle.feringa at ezct.net (Jelle Feringa / EZCT Architecture & Design Research) Date: Mon, 7 Aug 2006 20:14:54 +0200 Subject: [SciPy-user] symbolic math & scipy (& maxima)... In-Reply-To: <20060807180216.GC23067@clipper.ens.fr> Message-ID: <001501c6ba4d$63be6cb0$2701a8c0@OneArchitecture.local> http://swiginac.berlios.de/ could be interesting for dealing with symbolic computation -----Original Message----- From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] On Behalf Of Gael Varoquaux Sent: Monday, August 07, 2006 8:02 PM To: SciPy Users List Subject: Re: [SciPy-user] symbolic math & scipy (& maxima)... On Mon, Aug 07, 2006 at 07:59:12PM +0200, fred wrote: > Gael Varoquaux a ?crit : > > On Mon, Aug 07, 2006 at 04:32:17PM +0200, Xavier Gnata wrote: > >>Hi, > >>I perform the same kind of job with mathematica and python : > >>1) Compute what you want using mathematica/maxima/... > >>2) export the result as text in a syntax as close as possible as python > >>once (mathematica can export to C code for instance) > >>3) Perform tiny modification on the resulting strings so that it > >>corresponds with python syntax. > >>4) use exec() and that's it. > > Better still: use weave.inline, and pure c code. > I read a little about weave.inline. > But I don't understand the use of "pure c code". I was talking about Xavier's idea: do step 1 and 2 as he says, and use weave.inline to use the c code generated by mathematica. -- Ga?l _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user From haase at msg.ucsf.edu Mon Aug 7 14:55:57 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Mon, 7 Aug 2006 11:55:57 -0700 Subject: [SciPy-user] weave: how to give code to someone w/o compiler In-Reply-To: <44D77A03.50401@gmail.com> References: <200608070916.54015.haase@msg.ucsf.edu> <44D77751.9040404@enthought.com> <44D77A03.50401@gmail.com> Message-ID: <200608071155.57351.haase@msg.ucsf.edu> On Monday 07 August 2006 10:36, Robert Kern wrote: > Travis N. Vaught wrote: > > Sebastian Haase wrote: > >> Hi, > >> I just starting to use weave - it seems to be the best thing since > >> sliced bread ... > >> First I have two questions: > >> a) Is it possible to distribute modules using weave to other people who > >> might NOT have a C compiler installed ? > >> > >> b) when I (or someone who does not have a C compiler !) change parts of > >> that module that should not require a recompiling of the C part - is > >> weave smart enough to recognise this ? > >> > >> Thanks, > >> Sebastian Haase > > > > It's my understanding that a re-compile is triggered by a mismatch to a > > MD5 generated on the C string that is to be compiled and the cached MD5 > > for the expression. This would mean that only changes to that string > > would force a re-compile. 
However, even formatting changes (even to > > whitespace) in the C string force a recompile. > > The types of the inputs are also taken into account. Thanks for the info - what file do I need to distribute along with .py file ? where do I find it ? and where does it need to go ? Can I put a compiled file into the same directory as my .py file !? I just discovered (on my linux box) /tmp/haase/python24_intermediate/compiler_cae5f1e251cd037a19f0234c558d6f0e !? Thanks, Sebastian Haase From prabhu at aero.iitb.ac.in Mon Aug 7 15:22:11 2006 From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran) Date: Tue, 8 Aug 2006 00:52:11 +0530 Subject: [SciPy-user] weave: how to give code to someone w/o compiler In-Reply-To: <200608071155.57351.haase@msg.ucsf.edu> References: <200608070916.54015.haase@msg.ucsf.edu> <44D77751.9040404@enthought.com> <44D77A03.50401@gmail.com> <200608071155.57351.haase@msg.ucsf.edu> Message-ID: <17623.37603.174760.184500@prpc.aero.iitb.ac.in> >>>>> "Sebastian" == Sebastian Haase writes: >> The types of the inputs are also taken into account. Sebastian> Thanks for the info - what file do I need to distribute Sebastian> along with .py file ? where do I find it ? and where Sebastian> does it need to go ? Can I put a compiled file into the Sebastian> same directory as my .py file !? Sebastian> I just discovered (on my linux box) Sebastian> /tmp/haase/python24_intermediate/compiler_cae5f1e251cd037a19f0234c558d6f0e Sebastian> !? You can make weave generate an extension module of your choosing. Look at examples/fibonacci.py which builds a fibonacci_ext.cpp/fibonacci_ext.so pair in the current directory. cheers, prabhu From fredantispam at free.fr Mon Aug 7 16:34:06 2006 From: fredantispam at free.fr (fred) Date: Mon, 07 Aug 2006 22:34:06 +0200 Subject: [SciPy-user] symbolic math & scipy (& maxima)... In-Reply-To: <20060807180216.GC23067@clipper.ens.fr> References: <1154958302.44d743de279be@imp2-g19.free.fr> <44D74EF1.3040502@obs.univ-lyon1.fr> <20060807143959.GB23067@clipper.ens.fr> <44D77F70.3090805@free.fr> <20060807180216.GC23067@clipper.ens.fr> Message-ID: <44D7A3BE.1020308@free.fr> Gael Varoquaux a ?crit : > I was talking about Xavier's idea: do step 1 and 2 as he says, and > use weave.inline to use the c code generated by mathematica. Ok, I understand better. But maxima only generates fortran code. A little tricky (quick'n'dirty) but it works fine now : def Lmnp(x,y,z,m,n): cmd = "maxima -r 'load(\"specfun\")$fortran(diff(assoc_legendre_p("+str(n)+','+str(m)+",cos(theta)),theta))$\n' | grep -v 'Maxima\|%' | tr [A-Z] [a-z] | sed 's!theta!theta(x,y,z)!g'" foo = getoutput(cmd) return(eval(foo)) Thanks all. -- Fred. From fredantispam at free.fr Mon Aug 7 16:35:13 2006 From: fredantispam at free.fr (fred) Date: Mon, 07 Aug 2006 22:35:13 +0200 Subject: [SciPy-user] symbolic math & scipy (& maxima)... In-Reply-To: <001501c6ba4d$63be6cb0$2701a8c0@OneArchitecture.local> References: <001501c6ba4d$63be6cb0$2701a8c0@OneArchitecture.local> Message-ID: <44D7A401.8000702@free.fr> Jelle Feringa / EZCT Architecture & Design Research a ?crit : > http://swiginac.berlios.de/ > > could be interesting for dealing with symbolic computation Ok, thanks. I'm looking for something more "clean" than the one I wrote ;-) Cheers, -- Fred. 
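For a slightly cleaner variant of the pipeline above, the maxima call can stay
exactly as in Lmnp() while the grep/tr/sed post-processing moves into Python.
A sketch only (the helper name is made up, there is no error handling, and it
assumes maxima exits at end-of-input just as it does in the shell version):

import subprocess

def maxima_dlegendre(n, m):
    # same maxima command as in Lmnp(), just without the shell pipeline
    cmd = ('load("specfun")$'
           'fortran(diff(assoc_legendre_p(%d,%d,cos(theta)),theta))$' % (n, m))
    p = subprocess.Popen(["maxima", "-r", cmd],
                         stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE)
    out, err = p.communicate()
    # drop the banner/prompt lines that the shell version removed with grep
    keep = [line.strip() for line in out.splitlines()
            if line.strip() and "Maxima" not in line and "%" not in line]
    return "".join(keep).lower()

print maxima_dlegendre(0, 1)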
From haase at msg.ucsf.edu Mon Aug 7 20:50:49 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Mon, 7 Aug 2006 17:50:49 -0700 Subject: [SciPy-user] weave and numpy Message-ID: <200608071750.49706.haase@msg.ucsf.edu> Hi, if I pass a numpy array 'arr' as argument a) how does the C code get arr.ndim ? b) how does the C code get arr.shape[0],... ? c) if the C code changes elements of arr, are those changes *on the original data* ? In other words, is weave.inline making a copy of arr ? I searched through http://projects.scipy.org/scipy/scipy/browser/trunk/Lib/weave/doc/tutorial.html?format=raw but did not find a definite answer. From the 'array3d.py' example in weave in looks like Narr would contain the shape !? Thanks, Sebastian Haase From haase at msg.ucsf.edu Mon Aug 7 20:54:52 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Mon, 7 Aug 2006 17:54:52 -0700 Subject: [SciPy-user] weave and numpy In-Reply-To: <200608071750.49706.haase@msg.ucsf.edu> References: <200608071750.49706.haase@msg.ucsf.edu> Message-ID: <200608071754.52253.haase@msg.ucsf.edu> On Monday 07 August 2006 17:50, Sebastian Haase wrote: > Hi, > if I pass a numpy array 'arr' as argument > a) how does the C code get arr.ndim ? > b) how does the C code get arr.shape[0],... ? > c) if the C code changes elements of arr, are those changes *on the > original data* ? In other words, is weave.inline making a copy of arr ? > > I searched through > http://projects.scipy.org/scipy/scipy/browser/trunk/Lib/weave/doc/tutorial. >html?format=raw but did not find a definite answer. From > the 'array3d.py' example in weave in looks like Narr would contain the > shape !? > Oh, and I forgot: How about non-contiguous arrays !? In the case that these are handled - does that slow things down for proper aligned arrays, too !? > Thanks, > Sebastian Haase From robert.kern at gmail.com Mon Aug 7 21:28:00 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 07 Aug 2006 20:28:00 -0500 Subject: [SciPy-user] weave and numpy In-Reply-To: <200608071754.52253.haase@msg.ucsf.edu> References: <200608071750.49706.haase@msg.ucsf.edu> <200608071754.52253.haase@msg.ucsf.edu> Message-ID: <44D7E8A0.4050409@gmail.com> Sebastian Haase wrote: > On Monday 07 August 2006 17:50, Sebastian Haase wrote: >> Hi, >> if I pass a numpy array 'arr' as argument >> a) how does the C code get arr.ndim ? >> b) how does the C code get arr.shape[0],... ? >> c) if the C code changes elements of arr, are those changes *on the >> original data* ? Yes. > In other words, is weave.inline making a copy of arr ? No. >> I searched through >> http://projects.scipy.org/scipy/scipy/browser/trunk/Lib/weave/doc/tutorial. >> html?format=raw but did not find a definite answer. From >> the 'array3d.py' example in weave in looks like Narr would contain the >> shape !? Yes. Specifically: arr_array is the actual PyArrayObject* corresponding to the Python object. Narr = arr_array->dimensions Sarr = arr_array->strides Darr = arr_array->nd arr = arr_array->data > Oh, and I forgot: How about non-contiguous arrays !? Passed straight on through, just like contiguous arrays. > In the case that these are handled - does that slow things down for proper > aligned arrays, too !? You will have to take the strides into account in your code. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
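To make the stride bookkeeping concrete, here is a minimal weave.inline sketch
that scales a 1-D float64 array in place, using only the arr_array / Narr /
Sarr variables described above (assumptions: dtype float64 and one dimension;
anything else would need extra checks):

from scipy import weave
import numpy

def double_in_place(arr):
    code = r"""
    // arr_array is the PyArrayObject*; Sarr[] holds the strides in bytes
    char *data = (char *) arr_array->data;
    for (int i = 0; i < Narr[0]; ++i) {
        double *elem = (double *) (data + i * Sarr[0]);
        *elem *= 2.0;            // modifies the caller's array, no copy involved
    }
    """
    weave.inline(code, ['arr'])

a = numpy.arange(10.0)
double_in_place(a[::2])          # a non-contiguous view works too
print a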
-- Umberto Eco From sgarcia at olfac.univ-lyon1.fr Tue Aug 8 06:06:41 2006 From: sgarcia at olfac.univ-lyon1.fr (Samuel GARCIA) Date: Tue, 08 Aug 2006 12:06:41 +0200 Subject: [SciPy-user] Install problem Message-ID: <44D86231.1000109@olfac.univ-lyon1.fr> Hi all, from the latest svn version of numpy (2975) and scipy(2151) I have this problem for scipy (numpy is OK) : Traceback (most recent call last): File "setup.py", line 55, in ? setup_package() File "setup.py", line 47, in setup_package configuration=configuration ) File "/usr/local/lib/python2.3/site-packages/numpy/distutils/core.py", line 144, in setup config = configuration() File "setup.py", line 19, in configuration config.add_subpackage('Lib') File "/usr/local/lib/python2.3/site-packages/numpy/distutils/misc_util.py", line 753, in add_subpackage caller_level = 2) File "/usr/local/lib/python2.3/site-packages/numpy/distutils/misc_util.py", line 736, in get_subpackage caller_level = caller_level + 1) File "/usr/local/lib/python2.3/site-packages/numpy/distutils/misc_util.py", line 683, in _get_configuration_from_setup_py config = setup_module.configuration(*args) File "./Lib/setup.py", line 22, in configuration config.add_subpackage('ndimage') File "/usr/local/lib/python2.3/site-packages/numpy/distutils/misc_util.py", line 753, in add_subpackage caller_level = 2) File "/usr/local/lib/python2.3/site-packages/numpy/distutils/misc_util.py", line 736, in get_subpackage caller_level = caller_level + 1) File "/usr/local/lib/python2.3/site-packages/numpy/distutils/misc_util.py", line 668, in _get_configuration_from_setup_py ('.py', 'U', 1)) File "Lib/ndimage/setup.py", line 3, in ? from numpy.numarray import get_numarray_include_dirs File "/usr/local/lib/python2.3/site-packages/numpy/numarray/__init__.py", line 3, in ? from functions import * AttributeError: 'module' object has no attribute 'concatenate' Any idea Thanks a lot Samuel -------------- next part -------------- An HTML attachment was scrubbed... URL: From fredantispam at free.fr Tue Aug 8 06:50:17 2006 From: fredantispam at free.fr (fred) Date: Tue, 08 Aug 2006 12:50:17 +0200 Subject: [SciPy-user] symbolic math & scipy (& swiginac) In-Reply-To: <001501c6ba4d$63be6cb0$2701a8c0@OneArchitecture.local> References: <001501c6ba4d$63be6cb0$2701a8c0@OneArchitecture.local> Message-ID: <44D86C69.8090807@free.fr> Jelle Feringa / EZCT Architecture & Design Research a ?crit : > http://swiginac.berlios.de/ > > could be interesting for dealing with symbolic computation Ok, sounds good... But I did not find anything about Legendre associated polynomials. :-( -- Fred. From fredantispam at free.fr Tue Aug 8 07:22:11 2006 From: fredantispam at free.fr (fred) Date: Tue, 08 Aug 2006 13:22:11 +0200 Subject: [SciPy-user] symbolic math & scipy (& swiginac) In-Reply-To: <44D86C69.8090807@free.fr> References: <001501c6ba4d$63be6cb0$2701a8c0@OneArchitecture.local> <44D86C69.8090807@free.fr> Message-ID: <44D873E3.8070905@free.fr> fred a ?crit : > Jelle Feringa / EZCT Architecture & Design Research a ?crit : > >>http://swiginac.berlios.de/ >> >>could be interesting for dealing with symbolic computation > > Ok, sounds good... > > But I did not find anything about Legendre associated polynomials. :-( One can try to define them : from scipy.special import * from swiginac import * x = symbol('x') def lpm(m,x): p = (x**2-1)**m return (1./(2**m*factorial(m))*p.diff(x,m)) def lpmn(m,n,x): p = lpm(n,x) return ((-1)**m*(1-x**2)**(m/2.)*p.diff(x,m)) print lpm(2,x) >>>-0.5+(1.5)*x**2 Ok. 
But : u = arange(0,1,0.1) print lpm(2,u) Traceback (most recent call last): File "./essai.py", line 20, in ? print lpm(2,u) File "./essai.py", line 11, in lpm return (1./(2**m*factorial(m))*p.diff(x,m)) AttributeError: 'numpy.ndarray' object has no attribute 'diff' Any suggestion ? Cheers, -- Fred. From gael.varoquaux at normalesup.org Tue Aug 8 07:26:21 2006 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Tue, 8 Aug 2006 13:26:21 +0200 Subject: [SciPy-user] symbolic math & scipy (& swiginac) In-Reply-To: <44D873E3.8070905@free.fr> References: <001501c6ba4d$63be6cb0$2701a8c0@OneArchitecture.local> <44D86C69.8090807@free.fr> <44D873E3.8070905@free.fr> Message-ID: <20060808112621.GB27448@clipper.ens.fr> On Tue, Aug 08, 2006 at 01:22:11PM +0200, fred wrote: > Any suggestion ? One of the author of swiginac suggests to have a look at his new package, sympy, not yet as extensive, but easier to extend. -- Ga?l From peter.row at gmail.com Tue Aug 8 09:08:31 2006 From: peter.row at gmail.com (Peter Row) Date: Tue, 8 Aug 2006 23:08:31 +1000 Subject: [SciPy-user] Use of Scipy in a students final year mechanical engineering project. In-Reply-To: <8b3894bc0608070710v5f457541x7223e2256c83c033@mail.gmail.com> References: <44D7438A.8050803@adelphia.net> <8b3894bc0608070710v5f457541x7223e2256c83c033@mail.gmail.com> Message-ID: <291306e90608080608w4af19bai691b477368655a92@mail.gmail.com> Why scipy: Say you have a inner loop like: X = X + F * dt In either Matlab or scipy, it will probably work as: temp1 = F * dt temp2 = X + tempF X = temp2 Which is slower than nessesary, because you have to assign those great hunks of memory to temp1 and temp2. If you use weave.inline (after doing a rapid prototype in pure python - because debugging segfaults when you make a mess of your indicies is never fun, or easy), you can get rid of the big temporary arrays. You can also do this in Matlab (using CXX), but you could simply plead that it is easier for you to do using inline. This is a pretty trivial example. There is no limit to the number of intensive operations that you can't find a good vectorized function for. Speaking of which, have you seen the "vectorize" command in scipy? It seems a little slow (next to inline), but it sure beats running an inner loop in pure Matlab or python. Good luck, Peter On 8/8/06, William Hunter wrote: > > Another thought(s). Something you're likely to come across if you end > up in the mechanical design field, is the use of codes... > > Why don't you write a program that can calculate the dimensions of a > pressure vessel, according to ASME, for example? Or a gear design > program which will output the 3D gear? A program to design offshore > containers to DNV's CN 2.7-1? > > Go look at some of the add-ins available for programs like SolidWorks > and Solid Edge, you might find some ideas here too. > > For programs like these the user (probably yourself one day) would > typically want an intuitive UI, some form of customisation, be it by > editing a text file with material spec's or a file specifying pipe > and/or flange dimensions, and DEFINITELY output in the form of a CAD > file (I would say DXF/IGES for 2D and StL/XML/VRML for 3D. You can > attempt Parasolid, ACIS or STEP, but these are not as simple as StL > and last mentioned is in wide use by the rapid prototyping industry). > One would possibly also want to view (matplotlib) the pressure vessel > before saving it as a DXF file for instance. 
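Peter's X = X + F * dt point above can be made concrete with a short
weave.inline sketch that updates X element by element, so no temporary arrays
are created. Assumptions: 1-D, contiguous float64 arrays, and weave's usual
NX / X_array naming for the converted variables:

from scipy import weave
import numpy

def euler_step(X, F, dt):
    code = r"""
    // contiguous double data assumed; NX[0] is the length of X
    double *x = (double *) X_array->data;
    double *f = (double *) F_array->data;
    for (int i = 0; i < NX[0]; ++i) {
        x[i] += f[i] * dt;       // in-place update, no temporaries
    }
    """
    weave.inline(code, ['X', 'F', 'dt'])

X = numpy.zeros(5)
F = numpy.ones(5)
euler_step(X, F, 0.1)
print X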
Maybe add an option to > enter "Company and Project Details" when saving the file as a DXF... > There's a lot of options! > > Regards, > William > > On 07/08/06, John Hassler wrote: > > > > You've gotten a lot of very good responses already. I'd just make a > couple > > of comments. > > > > I'm a former prof. of ChE (retired). When I started out (a long time > ago), > > industrial plants were built in several stages: beaker, bucket, barrel, > > (maybe more), final plant. You would start with bench-scale experiments > in > > the laboratory, and work through progressively larger "pilot plants" > until > > you were confident that you understood the process well enough to commit > to > > building the full-scale plant. Now, design is done through computer > > modeling and simulation, and pilot plants are rarely, if ever, > used. The > > same is true in all fields of engineering. For example, computer chips > are > > simulated in detail before being tried "in silicon." > > > > In spite of this, most engineers never learn to program at any level > beyond > > the most basic. I'm not familiar with your field, but in ChE, there are > > "black box" programs (Aspen, ChemCad) which will do very sophisticated > > modeling, without requiring much knowledge on the part of the > operator. I > > think this is unfortunate ... but then, I'm old fashioned. > > > > I'm a visiting prof. here at VaTech. The engineering college has > > standardized on Matlab. I don't like it much, for two reasons. First, > it's > > outrageously expensive. (Scilab, as an example, is free, similar to > Matlab > > in basic capabilities, works under both Windows and Linux, and is > certainly > > sufficient for any but the most demanding computational > problems.) Second, > > Matlab makes it difficult to write well-structured programs of any > > appreciable size. Python-Scipy, on the other hand, is a "real" > programming > > language, which can handle VERY large programs with "grace and beauty," > and > > can do small programs with convenience. The "immediate" calculational > > abilities (IDLE) are more convenient than those of Matlab, since you can > > define functions "on the fly" ... Matlab requires "m-files" for > functions > > ... and I've used Python plus Numeric to solve some reasonably large FEM > > examples in acceptable times. > > > > So if you needed any more encouragement to look into a modeling problem > with > > Python, perhaps this will help. > > > > (As I re-read what I've written, I guess I could have just said, > "Computing > > .. important. Python... good." Professors are long winded ..... ) > > > > john > > > > > > > > Stephen Kelly wrote: > > > > > > Hi, > > > > I sent this email to Eric Jones, but he recommended that I send it to > this > > mailing list instead. I'm trying to see if scipy could fit into a final > year > > project for my mechanical engineering course. > > > > I am a student of mechanical engineering in UCD Dublin , and part of my > > course involves doing a final year project. We do very little > programming in > > my course, but as coding and python in particular is something that > > interests me, I'd like to have some involvement in python in my final > year > > project. > > > > Because my course is not centered around coding, I think I'd be more > likely > > to be able to do a project on applying a python application like scipy > to a > > problem, or using python as glue in a scripting applicatin. 
I found > scipy > > while searching for an idea for a project that I could do in python, > while > > staying within the scope of mechanical engineering and my course. I'm > still > > no closer to an idea, so I thought I'd write to you to see if you had > any > > ideas of what I might be able to do as a mechanical engineer with > > python/scipy. > > > > Something that I think might be plausible might be a simulation or > > computational analysis of some kind, but I would probably need to > justify > > using python for whatever I do rather than MatLab, which is the industry > > standard, and the program that my lecturers are already familiar with. I > > have never used MatLab myself, so I'm not sure what it offers. > > > > I would appreciate any thoughts you might have on this. I emailed one of > my > > lecturers already, who asked me for more specific details and to give > > thought to the type of project I would like to do. > > > > Kind Regards, > > > > Stephen Kelly > > ________________________________ > > > > _______________________________________________ SciPy-user > > mailing list SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > -- > Regards, > WH > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant.travis at ieee.org Tue Aug 8 12:10:34 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 08 Aug 2006 10:10:34 -0600 Subject: [SciPy-user] Install problem In-Reply-To: <44D86231.1000109@olfac.univ-lyon1.fr> References: <44D86231.1000109@olfac.univ-lyon1.fr> Message-ID: <44D8B77A.90403@ieee.org> Samuel GARCIA wrote: > Hi all, > from the latest svn version of numpy (2975) and scipy(2151) I have > this problem for scipy (numpy is OK) : It's a numpy problem in the backward compatibility module, numpy.numarray which I was fixing up last night but got very tired and installed my changes to the wrong place so I missed the error during testing. It should be fixed now. -Travis From haase at msg.ucsf.edu Tue Aug 8 12:50:16 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Tue, 8 Aug 2006 09:50:16 -0700 Subject: [SciPy-user] weave: how to return values Message-ID: <200608080950.16916.haase@msg.ucsf.edu> Hi, I am surprised when reading this example in the documentation: >>> a = 1 >>> a = weave.inline("return_val = Py::new_reference_to(Py::Int(a+1));",['a']) >>> a 2 is it really true that I can not just write !? a = weave.inline("return_val = a+1;",['a']) I was under the impression that basic types were handled transparently; how about: a = weave.inline("char s[100] = "test"; return_val = s;") Thanks, Sebastian Haase From cookedm at physics.mcmaster.ca Tue Aug 8 17:11:21 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 8 Aug 2006 17:11:21 -0400 Subject: [SciPy-user] weave: how to give code to someone w/o compiler In-Reply-To: <44D77A03.50401@gmail.com> References: <200608070916.54015.haase@msg.ucsf.edu> <44D77751.9040404@enthought.com> <44D77A03.50401@gmail.com> Message-ID: <20060808171121.2b60ddb2@arbutus.physics.mcmaster.ca> On Mon, 07 Aug 2006 12:36:03 -0500 Robert Kern wrote: > Travis N. 
Vaught wrote: > > Sebastian Haase wrote: > >> Hi, > >> I just starting to use weave - it seems to be the best thing since > >> sliced bread ... > >> First I have two questions: > >> a) Is it possible to distribute modules using weave to other people who > >> might NOT have a C compiler installed ? > >> > >> b) when I (or someone who does not have a C compiler !) change parts of > >> that module that should not require a recompiling of the C part - is > >> weave smart enough to recognise this ? > >> > >> Thanks, > >> Sebastian Haase > >> > > It's my understanding that a re-compile is triggered by a mismatch to a > > MD5 generated on the C string that is to be compiled and the cached MD5 > > for the expression. This would mean that only changes to that string > > would force a re-compile. However, even formatting changes (even to > > whitespace) in the C string force a recompile. > > The types of the inputs are also taken into account. And (to be pedantic :) the version the numpy API. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From cookedm at physics.mcmaster.ca Tue Aug 8 17:45:41 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Tue, 8 Aug 2006 17:45:41 -0400 Subject: [SciPy-user] Building on Windows with the Intel VisualFortranCompiler In-Reply-To: References: Message-ID: <20060808174541.2c096a1e@arbutus.physics.mcmaster.ca> On Sun, 6 Aug 2006 03:45:54 +0200 "Albert Strasheim" wrote: > Hello all > > > -----Original Message----- > > From: scipy-user-bounces at scipy.org [mailto:scipy-user-bounces at scipy.org] > > On Behalf Of Albert Strasheim > > Sent: 06 August 2006 03:34 > > To: 'SciPy Users List' > > Subject: Re: [SciPy-user] Building on Windows with the Intel > > VisualFortranCompiler > > > > I think intel.py is simply wrong. It does: > > > > ar_exe = 'lib.exe' > > fc_exe = 'ifl' > > if sys.platform=='win32': > > from distutils.msvccompiler import MSVCCompiler > > ar_exe = MSVCCompiler().lib > > > > ar_exe is set already, so why go look for a value for it in the > > MSVCCompiler instance? > > Removing the code mentioned above seemed to fix the problem. Filed NumPy > ticket #234 for this one. > > http://projects.scipy.org/scipy/numpy/ticket/234 Fixed. > Unfortunately, it seems the special/cephes/const.c code contains some > constructs that MSVC doesn't accept. I reported these here some months ago: > > http://projects.scipy.org/scipy/scipy/ticket/12 Can you try the patch I just attached to that? -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From fullung at gmail.com Wed Aug 9 10:06:24 2006 From: fullung at gmail.com (Albert Strasheim) Date: Wed, 9 Aug 2006 16:06:24 +0200 Subject: [SciPy-user] Building on Windows with the Intel VisualFortranCompiler In-Reply-To: <20060808174541.2c096a1e@arbutus.physics.mcmaster.ca> Message-ID: Hello all > > > Removing the code mentioned above seemed to fix the problem. Filed NumPy > > ticket #234 for this one. > > > > http://projects.scipy.org/scipy/numpy/ticket/234 > > Fixed. Looks good. I think the Compaq Visual Fortran Compiler (the compiler was bought from Compaq by Intel) code has the same error. Maybe you want to fix that. > > Unfortunately, it seems the special/cephes/const.c code contains some > > constructs that MSVC doesn't accept. 
I reported these here some months > ago: > > > > http://projects.scipy.org/scipy/scipy/ticket/12 > > Can you try the patch I just attached to that? Patch worked. Next issue here: http://projects.scipy.org/scipy/scipy/ticket/242 MSVC doesn't like cephes redefining fabs, which the compiler considers to be an instrinsic function. Next issue after this: building with the following site.cfg fails on Windows. [blas_src] src_dirs = C:\home\albert\work2\blas [lapack_src] src_dirs = C:\home\albert\work2\lapack The reason this fails is that the step where the LAPACK files are linked into a static library generates command line arguments that are about 80000 characters long. The limit seems to be 32k: http://blogs.msdn.com/oldnewthing/archive/2003/12/10/56028.aspx The way to get around this is to use the Windows @ trick. You put the command line arguments in a temporary file (the tempfile module is probably your friend here) and execute the command as: lib.exe @tempfilecontainingcommandlinearguments This is what SCons does when linking large numbers of files on Windows. Regards, Albert From haase at msg.ucsf.edu Tue Aug 8 20:40:56 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Tue, 8 Aug 2006 17:40:56 -0700 Subject: [SciPy-user] weave: doesn't know about scalar type 'float32scalar' Message-ID: <200608081740.56585.haase@msg.ucsf.edu> Hi! I get this error File "../lib/python/scipy/weave/inline_tools.py", line 366, in attempt_function_call raise TypeError, msg TypeError: cannot convert value to double and I think it's referring to this variable: (Pdb) p type(ssubSqSum) Should weave treat this like a regular float !? Thanks, Sebastian Haase From robert.kern at gmail.com Wed Aug 9 12:57:07 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 09 Aug 2006 11:57:07 -0500 Subject: [SciPy-user] weave: doesn't know about scalar type 'float32scalar' In-Reply-To: <200608081740.56585.haase@msg.ucsf.edu> References: <200608081740.56585.haase@msg.ucsf.edu> Message-ID: <44DA13E3.806@gmail.com> Sebastian Haase wrote: > Hi! > I get this error > File "../lib/python/scipy/weave/inline_tools.py", line 366, in > attempt_function_call > raise TypeError, msg > TypeError: cannot convert value to double > > and I think it's referring to this variable: > (Pdb) p type(ssubSqSum) > > > Should weave treat this like a regular float !? Sure! Patches are welcome. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From stephen.kelly2 at ucdconnect.ie Wed Aug 9 13:23:36 2006 From: stephen.kelly2 at ucdconnect.ie (Stephen Kelly) Date: Wed, 09 Aug 2006 19:23:36 +0200 Subject: [SciPy-user] Use of Scipy in a students final year mechanical engineering project. In-Reply-To: <291306e90608080608w4af19bai691b477368655a92@mail.gmail.com> References: <44D7438A.8050803@adelphia.net> <8b3894bc0608070710v5f457541x7223e2256c83c033@mail.gmail.com> <291306e90608080608w4af19bai691b477368655a92@mail.gmail.com> Message-ID: John Hassler adelphia.net> wrote: > You've gotten a lot of very good responses already. I'd just make a couple of comments. Indeed I have. Thanks to all for the replies, there are certainly some good points here. I've been very busy and having difficulty finding the time to pursue this further. I will send my lecturer a link to this thread, and hopfully some suitable ideas can be investigated. 
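Until such a patch lands, a user-side workaround for the float32-scalar case
above is simply to hand weave a plain Python float (or cast the source array
to float64) before the call. A tiny sketch, with a made-up value standing in
for ssubSqSum:

import numpy
from scipy import weave

ssubSqSum = numpy.float32(3.5)      # stand-in for the value in the traceback above
ssubSqSum = float(ssubSqSum)        # a plain Python float converts cleanly to a C double
weave.inline('printf("%f\\n", ssubSqSum);', ['ssubSqSum'])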
William Hunter gmail.com> wrote: > 1) Why don't you you do a small FEA program, assuming that you have > knowledge of this? Something simple where you can define 2D geometry > with AutoCAD or anything that can save in DXF format. You'll have to > read in the DXF file, represent with Matplotlib, mesh it (perhaps > using Delaunay), apply loads and constraints and solve the system. Or > perhaps just some aspects of this? The urapiv project looks interesting, but might be beyond my abilities at this time, seeing as I do not have much academic programming experience. However, I could have more experience by next year to take on greater challenges in a Masters role. My current experience in python has mainly been exploratory. I've done some tutorials, and made a few one off programs to simplify repetitive tasks. The biggest project I've used python for was to make infoboxes on a mediawiki website using the pywikipediabot framework. That involved mainly string parsing of the source data, but I learned a lot about string and array objects. I haven't yet found the time to use scipy and see for myself what I can do with it, but I will hopefully get the chance to do so next week. I'll have a look around the wiki and mailing list and find some demos and tutorials. Even if a project I do this year is not centered around scipy, I will certainly try to use it for any computation that comes up. Thanks again for the responses and interest in this. Stephen Kelly. ----- Original Message ----- From: Peter Row Date: Tuesday, August 8, 2006 3:08 pm Subject: Re: [SciPy-user] Use of Scipy in a students final year mechanical engineering project. To: SciPy Users List > Why scipy: > > Say you have a inner loop like: > > X = X + F * dt > > In either Matlab or scipy, it will probably work as: > > temp1 = F * dt > temp2 = X + tempF > X = temp2 > > Which is slower than nessesary, because you have to assign those > great hunks > of memory to temp1 and temp2. > > If you use weave.inline (after doing a rapid prototype in pure > python - > because debugging segfaults when you make a mess of your indicies > is never > fun, or easy), you can get rid of the big temporary arrays. > > You can also do this in Matlab (using CXX), but you could simply > plead that > it is easier for you to do using inline. > > This is a pretty trivial example. There is no limit to the number of > intensive operations that you can't find a good vectorized function > for.Speaking of which, have you seen the "vectorize" command in > scipy? It seems > a little slow (next to inline), but it sure beats running an inner > loop in > pure Matlab or python. > > Good luck, > Peter > > > > On 8/8/06, William Hunter wrote: > > > > Another thought(s). Something you're likely to come across if you > end> up in the mechanical design field, is the use of codes... > > > > Why don't you write a program that can calculate the dimensions > of a > > pressure vessel, according to ASME, for example? Or a gear design > > program which will output the 3D gear? A program to design offshore > > containers to DNV's CN 2.7-1? > > > > Go look at some of the add-ins available for programs like > SolidWorks> and Solid Edge, you might find some ideas here too. 
> > > > For programs like these the user (probably yourself one day) would > > typically want an intuitive UI, some form of customisation, be it by > > editing a text file with material spec's or a file specifying pipe > > and/or flange dimensions, and DEFINITELY output in the form of a CAD > > file (I would say DXF/IGES for 2D and StL/XML/VRML for 3D. You can > > attempt Parasolid, ACIS or STEP, but these are not as simple as StL > > and last mentioned is in wide use by the rapid prototyping > industry).> One would possibly also want to view (matplotlib) the > pressure vessel > > before saving it as a DXF file for instance. Maybe add an option to > > enter "Company and Project Details" when saving the file as a DXF... > > There's a lot of options! > > > > Regards, > > William > > > > On 07/08/06, John Hassler wrote: > > > > > > You've gotten a lot of very good responses already. I'd just > make a > > couple > > > of comments. > > > > > > I'm a former prof. of ChE (retired). When I started out (a > long time > > ago), > > > industrial plants were built in several stages: beaker, bucket, > barrel,> > (maybe more), final plant. You would start with bench- > scale experiments > > in > > > the laboratory, and work through progressively larger "pilot > plants"> until > > > you were confident that you understood the process well enough > to commit > > to > > > building the full-scale plant. Now, design is done through > computer> > modeling and simulation, and pilot plants are rarely, > if ever, > > used. The > > > same is true in all fields of engineering. For example, > computer chips > > are > > > simulated in detail before being tried "in silicon." > > > > > > In spite of this, most engineers never learn to program at any > level> beyond > > > the most basic. I'm not familiar with your field, but in ChE, > there are > > > "black box" programs (Aspen, ChemCad) which will do very > sophisticated> > modeling, without requiring much knowledge on the > part of the > > operator. I > > > think this is unfortunate ... but then, I'm old fashioned. > > > > > > I'm a visiting prof. here at VaTech. The engineering college has > > > standardized on Matlab. I don't like it much, for two reasons. > First, > > it's > > > outrageously expensive. (Scilab, as an example, is free, > similar to > > Matlab > > > in basic capabilities, works under both Windows and Linux, and is > > certainly > > > sufficient for any but the most demanding computational > > problems.) Second, > > > Matlab makes it difficult to write well-structured programs of any > > > appreciable size. Python-Scipy, on the other hand, is a "real" > > programming > > > language, which can handle VERY large programs with "grace and > beauty,"> and > > > can do small programs with convenience. The "immediate" > calculational> > abilities (IDLE) are more convenient than those of > Matlab, since you can > > > define functions "on the fly" ... Matlab requires "m-files" for > > functions > > > ... and I've used Python plus Numeric to solve some reasonably > large FEM > > > examples in acceptable times. > > > > > > So if you needed any more encouragement to look into a modeling > problem> with > > > Python, perhaps this will help. > > > > > > (As I re-read what I've written, I guess I could have just said, > > "Computing > > > .. important. Python... good." Professors are long winded > ..... 
) > > > > > > john > > > > > > > > > > > > Stephen Kelly wrote: > > > > > > > > > Hi, > > > > > > I sent this email to Eric Jones, but he recommended that I send > it to > > this > > > mailing list instead. I'm trying to see if scipy could fit into > a final > > year > > > project for my mechanical engineering course. > > > > > > I am a student of mechanical engineering in UCD Dublin , and > part of my > > > course involves doing a final year project. We do very little > > programming in > > > my course, but as coding and python in particular is something > that> > interests me, I'd like to have some involvement in python > in my final > > year > > > project. > > > > > > Because my course is not centered around coding, I think I'd be > more> likely > > > to be able to do a project on applying a python application > like scipy > > to a > > > problem, or using python as glue in a scripting applicatin. I > found> scipy > > > while searching for an idea for a project that I could do in > python,> while > > > staying within the scope of mechanical engineering and my > course. I'm > > still > > > no closer to an idea, so I thought I'd write to you to see if > you had > > any > > > ideas of what I might be able to do as a mechanical engineer with > > > python/scipy. > > > > > > Something that I think might be plausible might be a simulation or > > > computational analysis of some kind, but I would probably need to > > justify > > > using python for whatever I do rather than MatLab, which is the > industry> > standard, and the program that my lecturers are already > familiar with. I > > > have never used MatLab myself, so I'm not sure what it offers. > > > > > > I would appreciate any thoughts you might have on this. I > emailed one of > > my > > > lecturers already, who asked me for more specific details and > to give > > > thought to the type of project I would like to do. > > > > > > Kind Regards, > > > > > > Stephen Kelly > > > ________________________________ > > > > > > _______________________________________________ SciPy-user > > > mailing list SciPy-user at scipy.org > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > _______________________________________________ > > > SciPy-user mailing list > > > SciPy-user at scipy.org > > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > > > > > > > > > -- > > Regards, > > WH > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > From hasslerjc at adelphia.net Wed Aug 9 13:52:09 2006 From: hasslerjc at adelphia.net (John Hassler) Date: Wed, 09 Aug 2006 13:52:09 -0400 Subject: [SciPy-user] Use of Scipy in a students final year mechanical engineering project. In-Reply-To: References: <44D7438A.8050803@adelphia.net> <8b3894bc0608070710v5f457541x7223e2256c83c033@mail.gmail.com> <291306e90608080608w4af19bai691b477368655a92@mail.gmail.com> Message-ID: <44DA20C9.30607@adelphia.net> Along the lines of "what can be done" with Python, go to: http://comptlsci.anu.edu.au/tutorials.html and look, in particular, at the Navier-Stokes module. (The example uses Python for scripting, and C for the heavy crunching. Although I haven't gotten around to trying it, I think that Python/Scipy would also be practical. Python/Numeric was certainly fast enough for some FEM work I did.) 
john Stephen Kelly wrote: > [snip] > > I haven't yet found the time to use scipy and see for myself what I can > do with it, but I will hopefully get the chance to do so next week. I'll > have a look around the wiki and mailing list and find some demos and > tutorials. Even if a project I do this year is not centered around > scipy, I will certainly try to use it for any computation that comes up. > > Thanks again for the responses and interest in this. > Stephen Kelly. > > From stephenemslie at gmail.com Wed Aug 9 14:07:12 2006 From: stephenemslie at gmail.com (stephen emslie) Date: Wed, 9 Aug 2006 19:07:12 +0100 Subject: [SciPy-user] hough transform In-Reply-To: <20060804101925.GF17650@mentat.za.net> References: <51f97e530608030534k4ef26457s741c7d048d1f3ac4@mail.gmail.com> <20060803192145.GG6682@mentat.za.net> <51f97e530608040300p6956f8feje881e7f870557e63@mail.gmail.com> <20060804101925.GF17650@mentat.za.net> Message-ID: <51f97e530608091107n654915d2ldaf8ef63e3bb50b8@mail.gmail.com> On 8/4/06, Stefan van der Walt wrote: > This example is optimised for straight lines. For a generalised > transform, one would probably have to do a search through the full > parameter space. Then the 'for'-loops might kill performance. I > guess the best would be to try and find an article that has an > algorithm for the specific case you are interested in. I've been giving this some thought and reading more about the hough transform. I'm new to image processing but I think I'm getting a better understanding of how it works. How about including a hough transform function in scipy? It seems like a useful function to have in the image processing toolbox, and the line detection seems fairly straightforward. Stefan's implementation already looks like a great start. Is this something that would be useful for others? (Stefan - sorry for the double mail) Stephen -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryanlists at gmail.com Wed Aug 9 21:14:56 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 9 Aug 2006 20:14:56 -0500 Subject: [SciPy-user] sorry (mailing list is a bit slow?) In-Reply-To: References: <44BDC30F.7000808@gmail.com> Message-ID: Sorry for such a late reply, I have been on pseudo-vacation without much internet connection and got way behind on the list. My filter looks for scipy-user in the to field. This seems to automatically handle me sending messages to scipy-user. I have never actually thought about it, so I don't know if I accidentally did something right. All my lists have similar filters, and when I click on the filter view I often see my email with no response (on some less responsive lists). Here is my filter setting: Matches: to:(scipy-user) Do this: Skip Inbox, Apply label "scipy" If that doesn't work for you, let me know. I would like to help everyone experience my gmail list joy. Ryan On 7/19/06, David Grant wrote: > I also have a flter and label for scipy. How do you get your sent message to > get labelled SciPy? > > Dave > > > On 7/19/06, Ryan Krauss < ryanlists at gmail.com> wrote: > > For what it's worth, I have a gmail account set up just for my email > > lists and I really like it. I have a label and filter for each list > > so that any mail to scipy-user gets labeled SciPy and is taken out of > > my inbox automatically. Once I do that, email I send to a list shows > > up in the SciPy label view. 
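On the Hough transform question above, a plain-numpy version for straight
lines is only a few lines and may help as a starting point. This is a rough,
unoptimised sketch (not Stefan's implementation), parameterising lines as
rho = x*cos(theta) + y*sin(theta):

import numpy

def hough_lines(img, n_angles=180):
    # accumulate votes over (rho, theta) for every nonzero pixel of a 2-D array
    thetas = numpy.linspace(0.0, numpy.pi, n_angles, endpoint=False)
    cos_t, sin_t = numpy.cos(thetas), numpy.sin(thetas)
    rho_max = int(numpy.ceil(numpy.hypot(*img.shape)))
    acc = numpy.zeros((2 * rho_max, n_angles), dtype=int)
    ys, xs = numpy.nonzero(img)
    cols = numpy.arange(n_angles)
    for x, y in zip(xs, ys):
        rho = numpy.round(x * cos_t + y * sin_t).astype(int) + rho_max
        acc[rho, cols] += 1      # one vote per angle for this pixel
    return acc, thetas

# tiny self-test: a diagonal line of points gives a single strong peak
acc, thetas = hough_lines(numpy.eye(50, dtype=int))
print numpy.unravel_index(acc.argmax(), acc.shape)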
> > > > Ryan > > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > From sebastian_busch at gmx.net Thu Aug 10 07:25:41 2006 From: sebastian_busch at gmx.net (Sebastian Busch) Date: Thu, 10 Aug 2006 13:25:41 +0200 Subject: [SciPy-user] SciPy installation problems on SUSE linux Message-ID: <44DB17B5.50804@gmx.net> Dear list, I do have problems installing SciPy on my computer. In short, when I think I would be finished and run " import scipy print "hallo" " it says: " Overwriting info= from scipy.misc (was from numpy.lib.utils) hallo " My computer is a > uname -a > Linux e13-v1 2.6.13-15.11-smp #1 SMP Mon Jul 17 09:43:01 UTC 2006 i686 i686 i386 GNU/Linux Unfortunately, I did so much stuff that I cannot reconstruct what I did. However, I tried to stick to http://pong.tamu.edu/tiki/tiki-view_blog_post.php?blogId=6&postId=97 Installing NumPy seemed to work, importing it in a script gives no error. I then installed blas and lapack (and now even atlas) and after that scipy via python setup.py build sudo python setup.py install Perhaps someone had a similar problem and can halp? I would highly appreciate that. Thanks in advance, Sebastian. From gruben at bigpond.net.au Thu Aug 10 07:39:18 2006 From: gruben at bigpond.net.au (Gary Ruben) Date: Thu, 10 Aug 2006 21:39:18 +1000 Subject: [SciPy-user] Ubuntu speed-up Message-ID: <44DB1AE6.4030200@bigpond.net.au> A friend sent me this link to a shell script which claims to speed up the standard Ubuntu Dapper Drake distro. I thought those here who quest for speed would be interested. Note; I haven't tried this yet but it looks safe: http://www.dylanknightrogers.com/2006/07/17/faster-dappersh/ Gary R From davidgrant at gmail.com Thu Aug 10 12:20:15 2006 From: davidgrant at gmail.com (David Grant) Date: Thu, 10 Aug 2006 09:20:15 -0700 Subject: [SciPy-user] sorry (mailing list is a bit slow?) In-Reply-To: References: <44BDC30F.7000808@gmail.com> Message-ID: I see... I have been doing subject-line filtering. I'll have to switch over to TO: based filtering... Thanks. Dave On 8/9/06, Ryan Krauss wrote: > > Sorry for such a late reply, I have been on pseudo-vacation without > much internet connection and got way behind on the list. > > My filter looks for scipy-user in the to field. This seems to > automatically handle me sending messages to scipy-user. I have never > actually thought about it, so I don't know if I accidentally did > something right. All my lists have similar filters, and when I click > on the filter view I often see my email with no response (on some less > responsive lists). > > Here is my filter setting: > Matches: to:(scipy-user) > Do this: Skip Inbox, Apply label "scipy" > > If that doesn't work for you, let me know. I would like to help > everyone experience my gmail list joy. > > Ryan > > On 7/19/06, David Grant wrote: > > I also have a flter and label for scipy. How do you get your sent > message to > > get labelled SciPy? > > > > Dave > > > > > > On 7/19/06, Ryan Krauss < ryanlists at gmail.com> wrote: > > > For what it's worth, I have a gmail account set up just for my email > > > lists and I really like it. I have a label and filter for each list > > > so that any mail to scipy-user gets labeled SciPy and is taken out of > > > my inbox automatically. Once I do that, email I send to a list shows > > > up in the SciPy label view. 
> > > > > > Ryan > > > > > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- David Grant http://www.davidgrant.ca -------------- next part -------------- An HTML attachment was scrubbed... URL: From bhendrix at enthought.com Thu Aug 10 12:53:54 2006 From: bhendrix at enthought.com (Bryce Hendrix) Date: Thu, 10 Aug 2006 11:53:54 -0500 Subject: [SciPy-user] SciPy 2006 LiveCD torrent is available Message-ID: <44DB64A2.60203@enthought.com> For those not able to make SciPy 2006 next week, or who would like to download the ISO a few days early, its available at http://code.enthought.com/downloads/scipy2006-i386.iso.torrent. We squashed a lot onto the CD, so I also had to trim > 100 MB of packages that ship with the standard Ubuntu CD. Here's what I was able to add: * SciPy build from svn (Wed, 12:00 CST) * NumPy built from svn (Wed, 12:00 CST) * Matplotlib built from svn (Wed, 12:00 CST) * IPython built from svn (Wed, 12:00 CST) * Enthought built from svn (Wed, 16:00 CST) * ctypes 1.0.0 * hdf5 1.6.5 * networkx 0.31 * Pyrex 0.9.4.1 * pytables 1.3.2 All of the svn checkouts are zipped in /src, if you'd like to build from a svn version newer than what was shipped, simple copy the compressed package to your home dir, uncompress it, run "svn upate", and built it. Please note: This ISO was built rather hastily, uses un-official code, and received very little testing. Please don't even consider using this in a production environment. Bryce From cookedm at physics.mcmaster.ca Thu Aug 10 13:14:16 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Thu, 10 Aug 2006 13:14:16 -0400 Subject: [SciPy-user] SciPy installation problems on SUSE linux In-Reply-To: <44DB17B5.50804@gmx.net> References: <44DB17B5.50804@gmx.net> Message-ID: <20060810171416.GA12640@arbutus.physics.mcmaster.ca> On Thu, Aug 10, 2006 at 01:25:41PM +0200, Sebastian Busch wrote: > Dear list, > > I do have problems installing SciPy on my computer. In short, when I > think I would be finished and run > > " > import scipy > print "hallo" > " > > it says: > > " > Overwriting info= from scipy.misc (was > from numpy.lib.utils) > hallo > " That's a harmless problem -- nothing's wrong; it's just printing more than it probably should. If you want to test scipy, do >>> import scipy >>> scipy.test() -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From emsellem at obs.univ-lyon1.fr Thu Aug 10 13:18:12 2006 From: emsellem at obs.univ-lyon1.fr (Eric Emsellem) Date: Thu, 10 Aug 2006 19:18:12 +0200 Subject: [SciPy-user] SciPy installation problems on SUSE linux In-Reply-To: References: Message-ID: <44DB6A54.3080406@obs.univ-lyon1.fr> Hi Sebastian, here is a (attached) file I used to install all the necessary packages; Things are much easier via svn of course. Hope this helps. Don't forget to remove the old stuff if you wish to reinstall scipy and numpy. Eric P.S.: I have Suse 10.1, and it installs fine on my laptop and desktop PC -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... 
Name: INSTALL_SUSE10.1_python_centrino URL: From willemjagter at gmail.com Fri Aug 11 10:04:23 2006 From: willemjagter at gmail.com (William Hunter) Date: Fri, 11 Aug 2006 16:04:23 +0200 Subject: [SciPy-user] Sparse matrices and weave.inline, f2py or Pyrex Message-ID: <8b3894bc0608110704u4c22b196kfc97d493b9d69e4e@mail.gmail.com> List; Has anyone ever attempted using sparse matrices in combination with either one of the following; weave.inline, f2py or Pyrex? What is the best option? Perhaps a case of personal preference? Reason I'm asking: I've gotten the solving part of my code to run satisfactorily, but building them with Python's for loops takes a heck of a long time, too long. Regards, William From b.p.s.thurin at city.ac.uk Fri Aug 11 15:19:42 2006 From: b.p.s.thurin at city.ac.uk (Brice Thurin) Date: Fri, 11 Aug 2006 20:19:42 +0100 Subject: [SciPy-user] scipy.fftpack.fft2 bug Message-ID: It seems there is a bug with the axes argument in the scipy.fftpack.fft2 function. It does not do what it is supose to do. In [1]: import scipy.fftpack In [2]: import numpy In [3]: x = ones((4,4,2)) In [4]: fx = scipy.fftpack.fft2(x,shape=(8,8),axes = (-3,-2)) In [5]: fx.shape Out[5]: (4, 8, 8) but it is working fine with numpy.dft.fft2 In [1]: import numpy In [2]: x = numpy.ones((4,4,2)) In [3]: fx = numpy.dft.fft2(x,s=(8,8),axes=(-3,-2)) In [4]: fx.shape Out[4]: (8, 8, 2) From davidgrant at gmail.com Fri Aug 11 17:24:08 2006 From: davidgrant at gmail.com (David Grant) Date: Fri, 11 Aug 2006 14:24:08 -0700 Subject: [SciPy-user] Sparse matrices and weave.inline, f2py or Pyrex In-Reply-To: <8b3894bc0608110704u4c22b196kfc97d493b9d69e4e@mail.gmail.com> References: <8b3894bc0608110704u4c22b196kfc97d493b9d69e4e@mail.gmail.com> Message-ID: On 8/11/06, William Hunter wrote: > > List; > > Has anyone ever attempted using sparse matrices in combination with > either one of the following; weave.inline, f2py or Pyrex? > > What is the best option? Perhaps a case of personal preference? > > Reason I'm asking: I've gotten the solving part of my code to run > satisfactorily, but building them with Python's for loops takes a heck > of a long time, too long. Have you tried using vector indexing with numpy first? See if you can remove some of those python for loops. -- David Grant http://www.davidgrant.ca -------------- next part -------------- An HTML attachment was scrubbed... URL: From peridot.faceted at gmail.com Sat Aug 12 02:55:24 2006 From: peridot.faceted at gmail.com (A. M. Archibald) Date: Sat, 12 Aug 2006 02:55:24 -0400 Subject: [SciPy-user] better control over odeint Message-ID: Hi, I'm writing a program that requires extensive use of scipy.integrate.odeint (tracing paths around a black hole). It mostly works well but has certain frustrating features. (1) It is apparently impossible to prevent it from printing to the standard output. This is wrong; if it must output error messages, they should go on standard error, so as not to contaminate the output of my program. It would also be nice to be able to turn them off! Or, more specifically, to signal the error conditions in a way I can reasonably test for. String matching on info["messages"] is not at all satisfactory. (2) It is very difficult to control what happens when an error occurs. When a function called from inside odeint throws an exception (undefined variable, overflow error, what have you), odeint catches it, prints a backtrace (to standard error, at least, thankfully), and continues trying more points. 
Since what has usually happened is that odeint has run into a singularity, it grinds away endlessly, spewing page after page of the same error message, before finally giving up. An error is then signaled by leaving info["mused"][n]==0. Ideal for my purposes would be for odeint simply not to catch exceptions, or to treat exceptions as an instruction to return early. Failure to integrate over the whole range of t could be signaled by raising an exception. This would allow easy catching of integration failures, and it would also (for example) allow me to signal that the ray had proceeded into a forbidden region by throwing an exception. I realize that nobody is very keen on rooting around inside the FORTRAN code for odeint, and I can't see directly how to make it stop immediately. It should be fine for the user function to simply return NaNs. Another option is, if the integration falls apart at some point, to simply stop and provide information about how far the integration successfully progressed. lsoda.f is definitely capable of this without modification. Thanks. A. M. Archibald From willemjagter at gmail.com Sat Aug 12 07:50:35 2006 From: willemjagter at gmail.com (William Hunter) Date: Sat, 12 Aug 2006 13:50:35 +0200 Subject: [SciPy-user] Sparse matrices and weave.inline, f2py or Pyrex In-Reply-To: References: <8b3894bc0608110704u4c22b196kfc97d493b9d69e4e@mail.gmail.com> Message-ID: <8b3894bc0608120450w7a6212b0ta5deb7005a154109@mail.gmail.com> Yes, well, my best attempt at it. You'll notice that the sparse matrices doesn't support fancy indexing, unfortunately. (I'm hoping someone can prove me wrong :-) Here's some code you can fiddle with (also attached - sparsefiddle.py), if you want. #!/usr/bin/env python from numpy import allclose, ix_, random from scipy import sparse Alil = sparse.lil_matrix((6,6)) A = Alil.todense() edof = [1,3,0,2] arb = random.rand(4,4) # Dense matrix for r in xrange(10): for c in xrange(10): A[ix_(edof,edof)] += r*c*arb # Sparse matrices for r in xrange(10): for c in xrange(10): # Alil[ix_(edof,edof)] += r*c*arb # Unfortunately doesn't work for idxR,valR in enumerate(edof): # This works, but slowly for idxC,valC in enumerate(edof): Alil[valR,valC] += r*c*arb[idxR,idxC] print allclose(A,Alil.todense()) # Should return a 'True' # EOF Regards, William On 11/08/06, David Grant wrote: > > > On 8/11/06, William Hunter wrote: > > List; > > > > Has anyone ever attempted using sparse matrices in combination with > > either one of the following; weave.inline, f2py or Pyrex? > > > > What is the best option? Perhaps a case of personal preference? > > > > Reason I'm asking: I've gotten the solving part of my code to run > > satisfactorily, but building them with Python's for loops takes a heck > > of a long time, too long. > > Have you tried using vector indexing with numpy first? See if you can remove > some of those python for loops. 
> > -- > David Grant > http://www.davidgrant.ca > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > -------------- next part -------------- #!/usr/bin/env python from numpy import allclose, ix_, random from scipy import sparse Alil = sparse.lil_matrix((6,6)) A = Alil.todense() edof = [1,3,0,2] arb = random.rand(4,4) # Dense matrix for r in xrange(10): for c in xrange(10): A[ix_(edof,edof)] += r*c*arb # Sparse matrices for r in xrange(10): for c in xrange(10): # Alil[ix_(edof,edof)] += r*c*arb # Unfortunately doesn't work for idxR,valR in enumerate(edof): # This works, but slowly for idxC,valC in enumerate(edof): Alil[valR,valC] += r*c*arb[idxR,idxC] print allclose(A,Alil.todense()) # Should return a 'True' From josh8912 at yahoo.com Sat Aug 12 09:20:40 2006 From: josh8912 at yahoo.com (JJ) Date: Sat, 12 Aug 2006 06:20:40 -0700 (PDT) Subject: [SciPy-user] ODE solvers and Scipy In-Reply-To: Message-ID: <20060812132040.9017.qmail@web51713.mail.yahoo.com> Hello. I am just about to start building a somewhat complex dynamic model that will incorporate a large number (~50-100) of differential equations and parameters to be estimated. This is my first project of this type, so I am on a steep learning curve. I could build it using Matlab's Simulink, but I would rather build it in Python. I have looked for good graphical interfaces to ODE solvers in Python but have not found any that seem suitable. I would not mind using the Scipy ODE solvers, but I have a couple of quick, simple questions before I invest a lot of time in that direction. Any thoughts would be appreciated. -- The (few) examples I have seen in using Scipy's ODE functions are only small, simple ones. Is there anything to prevent use of the solvers for larger, more complex systems of ODEs? -- Has anyone seen a good tutorial on use of Scipy's ODE solver for larger (or at least not tiny) problems? -- Are there other Open Source packages out there that might offer more options than Scipy's ODE solver? I dont want to make this job any harder than I need to. Ive looked but didnt see any. -- Simulink offers use of various model blocks to simplify model building. I can write my own, but is there any public code already available along these lines for use with Scipy's ODE solver? Please pardon the simplicity of my questions and thanks for your help. Would anyone be willing to allow me to write privately if I have a few troubles along the way? John __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com From davidlinke at tiscali.de Sat Aug 12 10:31:30 2006 From: davidlinke at tiscali.de (David Linke) Date: Sat, 12 Aug 2006 16:31:30 +0200 Subject: [SciPy-user] ODE solvers and Scipy In-Reply-To: <20060812132040.9017.qmail@web51713.mail.yahoo.com> References: <20060812132040.9017.qmail@web51713.mail.yahoo.com> Message-ID: <44DDE642.1070109@tiscali.de> Hi John, Here is a page that might be of interest (other ODE solver integrated via f2py, maybe a little outdated): http://moncs.cs.mcgill.ca/people/slacoste/research/report/SummerReport.html IIRC, another system for solving ODEs was developed at the same place. David ---- JJ wrote: > Hello. > I am just about to start building a somewhat complex > dynamic model that will incorporate a large number > (~50-100) of differential equations and parameters to > be estimated. 
This is my first project of this type, > so I am on a steep learning curve. I could build it > using Matlab's Simulink, but I would rather build it > in Python. I have looked for good graphical > interfaces to ODE solvers in Python but have not found > any that seem suitable. I would not mind using the > Scipy ODE solvers, but I have a couple of quick, > simple questions before I invest a lot of time in that > direction. Any thoughts would be appreciated. > > -- The (few) examples I have seen in using Scipy's ODE > functions are only small, simple ones. Is there > anything to prevent use of the solvers for larger, > more complex systems of ODEs? > > -- Has anyone seen a good tutorial on use of Scipy's > ODE solver for larger (or at least not tiny) problems? > > -- Are there other Open Source packages out there that > might offer more options than Scipy's ODE solver? I > dont want to make this job any harder than I need to. > Ive looked but didnt see any. > > -- Simulink offers use of various model blocks to > simplify model building. I can write my own, but is > there any public code already available along these > lines for use with Scipy's ODE solver? > > Please pardon the simplicity of my questions and > thanks for your help. Would anyone be willing to > allow me to write privately if I have a few troubles > along the way? > > John > > > __________________________________________________ > Do You Yahoo!? > Tired of spam? Yahoo! Mail has the best spam protection around > http://mail.yahoo.com > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From hasslerjc at adelphia.net Sat Aug 12 11:17:25 2006 From: hasslerjc at adelphia.net (John Hassler) Date: Sat, 12 Aug 2006 11:17:25 -0400 Subject: [SciPy-user] ODE solvers and Scipy In-Reply-To: <20060812132040.9017.qmail@web51713.mail.yahoo.com> References: <20060812132040.9017.qmail@web51713.mail.yahoo.com> Message-ID: <44DDF105.1050309@adelphia.net> Although it's not open source, Scilab is a free program with capabilities similar to those of Matlab, including a block diagram modeling language. It isn't as polished as Matlab, but it allows (IMHO) better program structure. http://www.scilab.org/ You might also be interested in: http://comptlsci.anu.edu.au/tutorials.html for examples in both Scilab and Python. That said, I'd still prefer to work in Python, even without a block diagram language. john JJ wrote: > Hello. > I am just about to start building a somewhat complex > dynamic model that will incorporate a large number > (~50-100) of differential equations and parameters to > be estimated. This is my first project of this type, > so I am on a steep learning curve. I could build it > using Matlab's Simulink, but I would rather build it > in Python. I have looked for good graphical > interfaces to ODE solvers in Python but have not found > any that seem suitable. I would not mind using the > Scipy ODE solvers, but I have a couple of quick, > simple questions before I invest a lot of time in that > direction. Any thoughts would be appreciated. > > -- The (few) examples I have seen in using Scipy's ODE > functions are only small, simple ones. Is there > anything to prevent use of the solvers for larger, > more complex systems of ODEs? > > -- Has anyone seen a good tutorial on use of Scipy's > ODE solver for larger (or at least not tiny) problems? 
> > -- Are there other Open Source packages out there that > might offer more options than Scipy's ODE solver? I > dont want to make this job any harder than I need to. > Ive looked but didnt see any. > > -- Simulink offers use of various model blocks to > simplify model building. I can write my own, but is > there any public code already available along these > lines for use with Scipy's ODE solver? > > Please pardon the simplicity of my questions and > thanks for your help. Would anyone be willing to > allow me to write privately if I have a few troubles > along the way? > > John > > > __________________________________________________ > Do You Yahoo!? > Tired of spam? Yahoo! Mail has the best spam protection around > http://mail.yahoo.com > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From josh8912 at yahoo.com Sat Aug 12 12:05:34 2006 From: josh8912 at yahoo.com (John) Date: Sat, 12 Aug 2006 16:05:34 +0000 (UTC) Subject: [SciPy-user] ODE solvers and Scipy Message-ID: Thanks John and David. It looks as if I will try to use python/scipy, unless someone has written wrappers for Sundials for python. Does anybody know if this has been done? John From peridot.faceted at gmail.com Sat Aug 12 12:51:05 2006 From: peridot.faceted at gmail.com (A. M. Archibald) Date: Sat, 12 Aug 2006 12:51:05 -0400 Subject: [SciPy-user] ODE solvers and Scipy In-Reply-To: <20060812132040.9017.qmail@web51713.mail.yahoo.com> References: <20060812132040.9017.qmail@web51713.mail.yahoo.com> Message-ID: On 12/08/06, JJ wrote: > I am just about to start building a somewhat complex > dynamic model that will incorporate a large number > (~50-100) of differential equations and parameters to > be estimated. This is my first project of this type, > so I am on a steep learning curve. I could build it > using Matlab's Simulink, but I would rather build it > in Python. I have looked for good graphical > interfaces to ODE solvers in Python but have not found > any that seem suitable. I would not mind using the > Scipy ODE solvers, but I have a couple of quick, > simple questions before I invest a lot of time in that > direction. Any thoughts would be appreciated. > > -- The (few) examples I have seen in using Scipy's ODE > functions are only small, simple ones. Is there > anything to prevent use of the solvers for larger, > more complex systems of ODEs? Scipy's ODE solver is based on ODEPACK's lsoda.f, which is robust, well-tested, and well-documented (see, for example, http://www.netlib.org/alliant/ode/prog/lsoda.f ). All it does is integrate a system of first-order ODEs over a user-specified range; if problems occur it carries the integration as far as it can and then stops. The Python wrapper, however, has a few serious flaws. In an attempt to provide Numeric-style calling, the ability to stop partway through an interval appears to have been lost, and any error or exception (including KeyboardInterrupt) is trapped and an error is printed to the standard output, rendering debugging extremely cumbersome. More seriously, if there are parts of the solution space that cause problems (the solution approaches a singularity, or the edge of a coordinate patch), there is no really effective way to bring the solution up to the edge and then stop. This is because lsoda doesn't do that - that feature is in lsodar.f ( http://www.netlib.org/alliant/ode/prog/lsodar.f ), which has not (yet?) been wrapped in python. 
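Until lsodar (or something like it) is wrapped, one workaround is to drive the integration in small steps with the scipy.integrate.ode class and test a stopping condition in Python after each step. The sketch below is not from the thread: the right-hand side y' = 1 + y**2 (whose solution blows up at t = pi/2), the step size dt and the cutoff 1e6 are assumed purely for illustration.

import numpy as np
from scipy.integrate import ode

def rhs(t, y):
    # Note the (t, y) argument order used by the ode class, which is the
    # opposite of odeint's (y, t).  The solution of this toy problem is
    # tan(t), so it diverges at t = pi/2.
    return [1.0 + y[0]**2]

def integrate_until(f, y0, t0, t1, dt, stop):
    # Step the VODE integrator manually; collect times and states until t1,
    # the first failed step, or the first state where stop() is true
    # (whichever comes first).
    r = ode(f).set_integrator('vode', method='adams', rtol=1e-8)
    r.set_initial_value(y0, t0)
    ts, ys = [t0], [np.array(y0, dtype=float)]
    while r.t < t1:
        r.integrate(min(r.t + dt, t1))
        if not r.successful():
            break
        ts.append(r.t)
        ys.append(r.y.copy())
        if stop(r.t, r.y):
            break
    return np.array(ts), np.array(ys)

ts, ys = integrate_until(rhs, [0.0], 0.0, 2.0, 0.01,
                         stop=lambda t, y: abs(y[0]) > 1e6)
print "stopped at t =", ts[-1]   # just short of pi/2 instead of grinding on

This costs one extra Python-level call per step on top of the usual right-hand-side callbacks, but it stops cleanly at the edge instead of spewing error messages past the singularity.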
I'm actually hoping to get a wrapper for lsodar written, but as I've no experience with FORTRAN, we'll see. A. M. Archibald From jdhunter at ace.bsd.uchicago.edu Sat Aug 12 12:42:54 2006 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Sat, 12 Aug 2006 11:42:54 -0500 Subject: [SciPy-user] fortran compiter, OS X 10.3 Message-ID: <873bc1j46p.fsf@peds-pc311.bsd.uchicago.edu> I am trying to compile scipy from svn 2156 on my powerbook OS X 10.3. I have gcc 3.3 installed but apparently no g77, g95 or f95. When I try and compile, I get the error since it can't find a fortran compiler. Before I go down a false path compiling a new gcc/g77, I was wondering if anybody had a recommendation for a binary g77 that I can get that is compatible with my gcc, or alternatively, what the path of least resistance is for me to get a compiler for this platform that can compile scipy. John-Hunters-Computer:~/python/svn/scipy> gcc --version gcc (GCC) 3.3 20030304 (Apple Computer, Inc. build 1495) From lists.steve at arachnedesign.net Sat Aug 12 13:27:53 2006 From: lists.steve at arachnedesign.net (Steve Lianoglou) Date: Sat, 12 Aug 2006 13:27:53 -0400 Subject: [SciPy-user] fortran compiter, OS X 10.3 In-Reply-To: <873bc1j46p.fsf@peds-pc311.bsd.uchicago.edu> References: <873bc1j46p.fsf@peds-pc311.bsd.uchicago.edu> Message-ID: Hi John, > Before I go down a false path compiling a new gcc/g77, I was wondering > if anybody had a recommendation for a binary g77 that I can get that > is compatible with my gcc, or alternatively, what the path of least > resistance is for me to get a compiler for this platform that can > compile scipy. A long time ago I stumbled upon this installation howto for macs ... it's quite handy. http://bmistudents.blogspot.com/2006/04/on-installation-of-proper- python.html The person there recommends this method to grab a fortran compiler for PPC macs: curl -O http://easynews.dl.sourceforge.net/sourceforge/hpc/g77v3.4- bin.tar.gz sudo tar -C / -xzf g77v3.4-bin.tar.gz HTH, -steve From rclewley at cam.cornell.edu Sat Aug 12 13:30:40 2006 From: rclewley at cam.cornell.edu (Robert Clewley) Date: Sat, 12 Aug 2006 13:30:40 -0400 (EDT) Subject: [SciPy-user] ODE solvers and Scipy In-Reply-To: <20060812132040.9017.qmail@web51713.mail.yahoo.com> References: <20060812132040.9017.qmail@web51713.mail.yahoo.com> Message-ID: On Sat, 12 Aug 2006, JJ wrote: > Hello. > I am just about to start building a somewhat complex > dynamic model that will incorporate a large number > (~50-100) of differential equations and parameters to > be estimated. This is my first project of this type, > so I am on a steep learning curve. I could build it > using Matlab's Simulink, but I would rather build it > in Python. I have looked for good graphical > interfaces to ODE solvers in Python but have not found > any that seem suitable. Have you even found any at all? I am not aware of any graphical interfaces to ODE solvers in Python. I think Simulink is great for learning with and can be quite easy to start out with, but it's horribly inefficient for serious projects like parameter estimation with 100 ODEs. And if you want to make broad systematic changes to your model (like changing many related parameters at once) you'd better learn how to edit the simulink script file directly! 50 DEs is already past my limit of what I'd consider dealing with graphically. > -- The (few) examples I have seen in using Scipy's ODE > functions are only small, simple ones. 
Is there > anything to prevent use of the solvers for larger, > more complex systems of ODEs? No, I don't think so. I use SciPy's VODE for large systems sometimes and it's OK. The main problem is the inefficiency of the integrators needing to call back to your python function that defines the right hand sides of your ODEs. > -- Has anyone seen a good tutorial on use of Scipy's > ODE solver for larger (or at least not tiny) problems? Other than sparseness I don't know what kind of issues arise with larger system size that would need a specific tutorial. > -- Are there other Open Source packages out there that > might offer more options than Scipy's ODE solver? I > dont want to make this job any harder than I need to. > Ive looked but didnt see any. My group provides the distinctly non-graphically interfaced PyDSTool OSS package built over old numarray/scipy array classes, and where we've wrapped faster and well-established C-based ODE stiff and non-stiff integrators (avoiding the slow python callback problem). There is somewhat improved error-handling and support for stopping at events compared to the wrapping of the Scipy integrators. Parameter estimation and handling the book-keeping of a large system is somewhat easier with our Python classes too, I would say. We also support DAEs (a feature you might be looking for in Sundials), hybrid systems and maps. > -- Simulink offers use of various model blocks to > simplify model building. I can write my own, but is > there any public code already available along these > lines for use with Scipy's ODE solver? PyDSTool provides a model building system (basically template classes for coupling simulink-link "blocks") which has not been filled out with examples such as those in Simulink. You build the models in scripts as you would in matlb. But it's easy to make your own "blocks" with these classes and there are some examples provided for other types of problem. HTH Rob From strawman at astraw.com Sat Aug 12 14:24:57 2006 From: strawman at astraw.com (Andrew Straw) Date: Sat, 12 Aug 2006 11:24:57 -0700 Subject: [SciPy-user] should "python setup.py sdist" provide the sandbox? Message-ID: <44DE1CF9.5010604@astraw.com> "python setup.py sdist" from the scipy svn tree doesn't grab the sandbox sources. I think it should. (People may want to play with the packages, for example.) A simple way to go about this would be to add all the files to MANIFEST.in. If folks think that's a good idea, I'll go ahead and generate a MANIFEST.in that adds all the files in svn scipy add submit the patch to Trac. Or is there some reason to exclude the source of the sandbox from "sdist" built distros? From cookedm at physics.mcmaster.ca Sat Aug 12 14:30:31 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Sat, 12 Aug 2006 14:30:31 -0400 Subject: [SciPy-user] fortran compiter, OS X 10.3 In-Reply-To: <873bc1j46p.fsf@peds-pc311.bsd.uchicago.edu> References: <873bc1j46p.fsf@peds-pc311.bsd.uchicago.edu> Message-ID: <20060812183031.GA951@arbutus.physics.mcmaster.ca> On Sat, Aug 12, 2006 at 11:42:54AM -0500, John Hunter wrote: > > I am trying to compile scipy from svn 2156 on my powerbook OS X 10.3. > I have gcc 3.3 installed but apparently no g77, g95 or f95. When I > try and compile, I get the error since it can't find a fortran > compiler. 
> > Before I go down a false path compiling a new gcc/g77, I was wondering > if anybody had a recommendation for a binary g77 that I can get that > is compatible with my gcc, or alternatively, what the path of least > resistance is for me to get a compiler for this platform that can > compile scipy. http://hpc.sourceforget.net/ has g77. Or, use darwinports or fink. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From cookedm at physics.mcmaster.ca Sat Aug 12 14:30:31 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Sat, 12 Aug 2006 14:30:31 -0400 Subject: [SciPy-user] fortran compiter, OS X 10.3 In-Reply-To: <873bc1j46p.fsf@peds-pc311.bsd.uchicago.edu> References: <873bc1j46p.fsf@peds-pc311.bsd.uchicago.edu> Message-ID: <20060812183031.GA951@arbutus.physics.mcmaster.ca> On Sat, Aug 12, 2006 at 11:42:54AM -0500, John Hunter wrote: > > I am trying to compile scipy from svn 2156 on my powerbook OS X 10.3. > I have gcc 3.3 installed but apparently no g77, g95 or f95. When I > try and compile, I get the error since it can't find a fortran > compiler. > > Before I go down a false path compiling a new gcc/g77, I was wondering > if anybody had a recommendation for a binary g77 that I can get that > is compatible with my gcc, or alternatively, what the path of least > resistance is for me to get a compiler for this platform that can > compile scipy. http://hpc.sourceforget.net/ has g77. Or, use darwinports or fink. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From jdhunter at ace.bsd.uchicago.edu Sat Aug 12 14:26:03 2006 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Sat, 12 Aug 2006 13:26:03 -0500 Subject: [SciPy-user] fortran compiter, OS X 10.3 In-Reply-To: <20060812183031.GA951@arbutus.physics.mcmaster.ca> ("David M. Cooke"'s message of "Sat, 12 Aug 2006 14:30:31 -0400") References: <873bc1j46p.fsf@peds-pc311.bsd.uchicago.edu> <20060812183031.GA951@arbutus.physics.mcmaster.ca> Message-ID: <873bc1zu84.fsf@peds-pc311.bsd.uchicago.edu> >>>>> "David" == David M Cooke writes: David> http://hpc.sourceforget.net/ has g77. I spent a little time at this site before posting -- it seems their g77 was compiled for 3.4 and Tiger. David> Or, use darwinports or fink. Been there, done that, not going back. Unfortunately I have a lot of stuff I've already compiled for my python install, so I'm not too keen on changing horses mid-stream with another compiler version or python distro. May end up having to do that eventually, but I'd like to avoid it. JDH From jdhunter at ace.bsd.uchicago.edu Sat Aug 12 14:26:03 2006 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Sat, 12 Aug 2006 13:26:03 -0500 Subject: [SciPy-user] fortran compiter, OS X 10.3 In-Reply-To: <20060812183031.GA951@arbutus.physics.mcmaster.ca> ("David M. Cooke"'s message of "Sat, 12 Aug 2006 14:30:31 -0400") References: <873bc1j46p.fsf@peds-pc311.bsd.uchicago.edu> <20060812183031.GA951@arbutus.physics.mcmaster.ca> Message-ID: <873bc1zu84.fsf@peds-pc311.bsd.uchicago.edu> >>>>> "David" == David M Cooke writes: David> http://hpc.sourceforget.net/ has g77. I spent a little time at this site before posting -- it seems their g77 was compiled for 3.4 and Tiger. David> Or, use darwinports or fink. Been there, done that, not going back. 
Unfortunately I have a lot of stuff I've already compiled for my python install, so I'm not too keen on changing horses mid-stream with another compiler version or python distro. May end up having to do that eventually, but I'd like to avoid it. JDH From robert.kern at gmail.com Sat Aug 12 14:52:00 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 12 Aug 2006 11:52:00 -0700 Subject: [SciPy-user] should "python setup.py sdist" provide the sandbox? In-Reply-To: <44DE1CF9.5010604@astraw.com> References: <44DE1CF9.5010604@astraw.com> Message-ID: <44DE2350.5070205@gmail.com> Andrew Straw wrote: > "python setup.py sdist" from the scipy svn tree doesn't grab the sandbox > sources. I think it should. (People may want to play with the packages, > for example.) A simple way to go about this would be to add all the > files to MANIFEST.in. If folks think that's a good idea, I'll go ahead > and generate a MANIFEST.in that adds all the files in svn scipy add > submit the patch to Trac. Sure. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Sat Aug 12 14:54:08 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 12 Aug 2006 11:54:08 -0700 Subject: [SciPy-user] fortran compiter, OS X 10.3 In-Reply-To: <873bc1zu84.fsf@peds-pc311.bsd.uchicago.edu> References: <873bc1j46p.fsf@peds-pc311.bsd.uchicago.edu> <20060812183031.GA951@arbutus.physics.mcmaster.ca> <873bc1zu84.fsf@peds-pc311.bsd.uchicago.edu> Message-ID: <44DE23D0.9060105@gmail.com> John Hunter wrote: >>>>>> "David" == David M Cooke writes: > > David> http://hpc.sourceforget.net/ has g77. > > I spent a little time at this site before posting -- it seems their > g77 was compiled for 3.4 and Tiger. Well, you do have to use g77 3.4 on OS X. 3.3 was hideously broken. It works fine with gcc 3.3. Finding a Panther-compatible build might be a little difficult. > David> Or, use darwinports or fink. > > Been there, done that, not going back. > > Unfortunately I have a lot of stuff I've already compiled for my > python install, so I'm not too keen on changing horses mid-stream with > another compiler version or python distro. May end up having to do > that eventually, but I'd like to avoid it. However, installing just g77 from darwinports wouldn't make you change horses for anything else. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cookedm at physics.mcmaster.ca Sat Aug 12 15:19:46 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Sat, 12 Aug 2006 15:19:46 -0400 Subject: [SciPy-user] fortran compiter, OS X 10.3 In-Reply-To: <873bc1zu84.fsf@peds-pc311.bsd.uchicago.edu> References: <873bc1j46p.fsf@peds-pc311.bsd.uchicago.edu> <20060812183031.GA951@arbutus.physics.mcmaster.ca> <873bc1zu84.fsf@peds-pc311.bsd.uchicago.edu> Message-ID: <20060812191946.GA1108@arbutus.physics.mcmaster.ca> On Sat, Aug 12, 2006 at 01:26:03PM -0500, John Hunter wrote: > >>>>> "David" == David M Cooke writes: > > David> http://hpc.sourceforget.net/ has g77. > > I spent a little time at this site before posting -- it seems their > g77 was compiled for 3.4 and Tiger. Hmm, the g77 3.4 there says it's for Panther/Tiger. Compiling your own isn't that hard, actually. 
3.4.6 takes about 2 hours to compile on my 800 MHz iBook (4.x takes about 6 hours :() # download from http://gcc.gnu.org/releases.html $ tar zxvf gcc-3.4.6.tar.gz $ mkdir build-gcc $ cd build-gcc $ ../gcc-3.4.6/configure --prefix=/somewhere --enable-languages=c,c++,f77,java,objc $ make # wait a while $ sudo make install -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From cookedm at physics.mcmaster.ca Sat Aug 12 15:26:09 2006 From: cookedm at physics.mcmaster.ca (David M. Cooke) Date: Sat, 12 Aug 2006 15:26:09 -0400 Subject: [SciPy-user] fortran compiter, OS X 10.3 In-Reply-To: <44DE23D0.9060105@gmail.com> References: <873bc1j46p.fsf@peds-pc311.bsd.uchicago.edu> <20060812183031.GA951@arbutus.physics.mcmaster.ca> <873bc1zu84.fsf@peds-pc311.bsd.uchicago.edu> <44DE23D0.9060105@gmail.com> Message-ID: <20060812192609.GA1171@arbutus.physics.mcmaster.ca> On Sat, Aug 12, 2006 at 11:54:08AM -0700, Robert Kern wrote: > John Hunter wrote: > >>>>>> "David" == David M Cooke writes: > > > > David> http://hpc.sourceforget.net/ has g77. > > > > I spent a little time at this site before posting -- it seems their > > g77 was compiled for 3.4 and Tiger. > > Well, you do have to use g77 3.4 on OS X. 3.3 was hideously broken. It works > fine with gcc 3.3. Finding a Panther-compatible build might be a little difficult. On PowerPC. For the Intel Macs, you're pretty much stuck with using the 4.2 prerelease of gfortran (from what I've read, and my own experiences with trying to get previous versions to compile). (I haven't tried g95 yet.) -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From oliphant.travis at ieee.org Sat Aug 12 15:58:57 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sat, 12 Aug 2006 13:58:57 -0600 Subject: [SciPy-user] fortran compiter, OS X 10.3 In-Reply-To: <20060812192609.GA1171@arbutus.physics.mcmaster.ca> References: <873bc1j46p.fsf@peds-pc311.bsd.uchicago.edu> <20060812183031.GA951@arbutus.physics.mcmaster.ca> <873bc1zu84.fsf@peds-pc311.bsd.uchicago.edu> <44DE23D0.9060105@gmail.com> <20060812192609.GA1171@arbutus.physics.mcmaster.ca> Message-ID: <44DE3301.9050108@ieee.org> David M. Cooke wrote: > On Sat, Aug 12, 2006 at 11:54:08AM -0700, Robert Kern wrote: > >> John Hunter wrote: >> >>>>>>>> "David" == David M Cooke writes: >>>>>>>> >>> David> http://hpc.sourceforget.net/ has g77. >>> >>> I spent a little time at this site before posting -- it seems their >>> g77 was compiled for 3.4 and Tiger. >>> >> Well, you do have to use g77 3.4 on OS X. 3.3 was hideously broken. It works >> fine with gcc 3.3. Finding a Panther-compatible build might be a little difficult. >> > > On PowerPC. For the Intel Macs, you're pretty much stuck with using > the 4.2 prerelease of gfortran (from what I've read, and my own > experiences with trying to get previous versions to compile). > > Does the 4.2 pre-release of gfortran work to compile SciPy on Intel Macs? The only way I've gotten things to work on the Intel Mac is to use g95. -Travis From cookedm at physics.mcmaster.ca Sat Aug 12 16:02:58 2006 From: cookedm at physics.mcmaster.ca (David M. 
Cooke) Date: Sat, 12 Aug 2006 16:02:58 -0400 Subject: [SciPy-user] fortran compiter, OS X 10.3 In-Reply-To: <44DE3301.9050108@ieee.org> References: <873bc1j46p.fsf@peds-pc311.bsd.uchicago.edu> <20060812183031.GA951@arbutus.physics.mcmaster.ca> <873bc1zu84.fsf@peds-pc311.bsd.uchicago.edu> <44DE23D0.9060105@gmail.com> <20060812192609.GA1171@arbutus.physics.mcmaster.ca> <44DE3301.9050108@ieee.org> Message-ID: <20060812200258.GA1261@arbutus.physics.mcmaster.ca> On Sat, Aug 12, 2006 at 01:58:57PM -0600, Travis Oliphant wrote: > David M. Cooke wrote: > > On Sat, Aug 12, 2006 at 11:54:08AM -0700, Robert Kern wrote: > > > >> John Hunter wrote: > >> > >>>>>>>> "David" == David M Cooke writes: > >>>>>>>> > >>> David> http://hpc.sourceforget.net/ has g77. > >>> > >>> I spent a little time at this site before posting -- it seems their > >>> g77 was compiled for 3.4 and Tiger. > >>> > >> Well, you do have to use g77 3.4 on OS X. 3.3 was hideously broken. It works > >> fine with gcc 3.3. Finding a Panther-compatible build might be a little difficult. > >> > > > > On PowerPC. For the Intel Macs, you're pretty much stuck with using > > the 4.2 prerelease of gfortran (from what I've read, and my own > > experiences with trying to get previous versions to compile). > > > > > Does the 4.2 pre-release of gfortran work to compile SciPy on Intel > Macs? The only way I've gotten things to work on the Intel Mac is to > use g95. Seems to work (passes the unit tests, at least). I used darwinports, but I had to update the Portfiles for gcc42 and odcctools to use more recent versions. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From jdhunter at ace.bsd.uchicago.edu Sat Aug 12 15:54:04 2006 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Sat, 12 Aug 2006 14:54:04 -0500 Subject: [SciPy-user] fortran compiter, OS X 10.3 In-Reply-To: <20060812191946.GA1108@arbutus.physics.mcmaster.ca> ("David M. Cooke"'s message of "Sat, 12 Aug 2006 15:19:46 -0400") References: <873bc1j46p.fsf@peds-pc311.bsd.uchicago.edu> <20060812183031.GA951@arbutus.physics.mcmaster.ca> <873bc1zu84.fsf@peds-pc311.bsd.uchicago.edu> <20060812191946.GA1108@arbutus.physics.mcmaster.ca> Message-ID: <87bqqpd92b.fsf@peds-pc311.bsd.uchicago.edu> >>>>> "David" == David M Cooke writes: David> Compiling your own isn't that hard, actually. 3.4.6 takes David> about 2 hours to compile on my 800 MHz iBook (4.x takes David> about 6 hours :() David> # download from http://gcc.gnu.org/releases.html $ tar zxvf David> gcc-3.4.6.tar.gz $ mkdir build-gcc $ cd build-gcc $ David> ../gcc-3.4.6/configure --prefix=/somewhere David> --enable-languages=c,c++,f77,java,objc $ make # wait a David> while $ sudo make install The binary g77 for 3.4 worked fine on my platform; as Robert pointed out it is compatible with gcc-3.3 and panther did not cause any problems either. Thanks for all the suggestions! JDH From fperez.net at gmail.com Sat Aug 12 16:41:52 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 12 Aug 2006 14:41:52 -0600 Subject: [SciPy-user] should "python setup.py sdist" provide the sandbox? In-Reply-To: <44DE2350.5070205@gmail.com> References: <44DE1CF9.5010604@astraw.com> <44DE2350.5070205@gmail.com> Message-ID: On 8/12/06, Robert Kern wrote: > Andrew Straw wrote: > > "python setup.py sdist" from the scipy svn tree doesn't grab the sandbox > > sources. I think it should. 
(People may want to play with the packages, > > for example.) A simple way to go about this would be to add all the > > files to MANIFEST.in. If folks think that's a good idea, I'll go ahead > > and generate a MANIFEST.in that adds all the files in svn scipy add > > submit the patch to Trac. > > Sure. Minor note: for some reason I don't understand, distutils will not remove MANIFEST if things change (MANIFEST.in changes, or directories it grafts change). In ipython's setup file, I've had for ages the following: # BEFORE importing distutils, remove MANIFEST. distutils doesn't properly # update it when the contents of directories change. if os.path.exists('MANIFEST'): os.remove('MANIFEST') It's never given me any problems and has prevented quite a few headaches and bizarrely incomplete builds/dists, so you may want to use it. cheers, f From strawman at astraw.com Sat Aug 12 19:48:08 2006 From: strawman at astraw.com (Andrew Straw) Date: Sat, 12 Aug 2006 16:48:08 -0700 Subject: [SciPy-user] should "python setup.py sdist" provide the sandbox? In-Reply-To: References: <44DE1CF9.5010604@astraw.com> <44DE2350.5070205@gmail.com> Message-ID: <44DE68B8.7070403@astraw.com> Submitted as http://projects.scipy.org/scipy/scipy/ticket/245 > Minor note: for some reason I don't understand, distutils will not > remove MANIFEST if things change (MANIFEST.in changes, or directories > it grafts change). In ipython's setup file, I've had for ages the > following: > > # BEFORE importing distutils, remove MANIFEST. distutils doesn't properly > # update it when the contents of directories change. > if os.path.exists('MANIFEST'): os.remove('MANIFEST') > > It's never given me any problems and has prevented quite a few > headaches and bizarrely incomplete builds/dists, so you may want to > use it. +1 (I always manually delete MANIFEST out of being bitten by this once or twice, long ago...) From strawman at astraw.com Sat Aug 12 19:53:40 2006 From: strawman at astraw.com (Andrew Straw) Date: Sat, 12 Aug 2006 16:53:40 -0700 Subject: [SciPy-user] should "python setup.py sdist" provide the sandbox? In-Reply-To: <44DE68B8.7070403@astraw.com> References: <44DE1CF9.5010604@astraw.com> <44DE2350.5070205@gmail.com> <44DE68B8.7070403@astraw.com> Message-ID: <44DE6A04.30209@astraw.com> Andrew Straw wrote: > Submitted as http://projects.scipy.org/scipy/scipy/ticket/245 > I uploaded the wrong patch initially, and a few minutes later uploaded the right one. The one there now is OK. From stephen.walton at csun.edu Sat Aug 12 20:27:28 2006 From: stephen.walton at csun.edu (Stephen Walton) Date: Sat, 12 Aug 2006 17:27:28 -0700 Subject: [SciPy-user] Ubuntu speed-up In-Reply-To: <44DB1AE6.4030200@bigpond.net.au> References: <44DB1AE6.4030200@bigpond.net.au> Message-ID: <44DE71F0.1010907@csun.edu> Gary Ruben wrote: > A friend sent me this link to a shell script which claims to speed up > the standard Ubuntu Dapper Drake distro. > http://www.dylanknightrogers.com/2006/07/17/faster-dappersh/ > Hmm. I just looked through the script, and I would guess that the main "felt" speedup results from the addition of prelinking. Interestingly, the script warns that enabling automatic prelink for newly installed packages, as the script does, will noticeably slow down software installs. I wonder if that's why everyone complains that yum/rpm on Fedora is slower than apt-get, because Fedora has always had prelink and always runs it on newly installed packages. 
From stephen.walton at csun.edu Sat Aug 12 21:02:04 2006 From: stephen.walton at csun.edu (Stephen Walton) Date: Sat, 12 Aug 2006 18:02:04 -0700 Subject: [SciPy-user] Call for testing In-Reply-To: <9ADBF898-D18F-4957-BA41-9CDC09D597EC@arachnedesign.net> References: <44D056CF.6040605@iam.uni-stuttgart.de> <34E2F6D9-C7F6-4D2D-AF13-7D712E0124C7@arachnedesign.net> <9ADBF898-D18F-4957-BA41-9CDC09D597EC@arachnedesign.net> Message-ID: <44DE7A0C.1060401@csun.edu> Steve Lianoglou wrote: >> Thank you all for running the test ! >> >> Finally I have disabled ATLAS by >> export ATLAS=None >> >> Good news is that I get the correct result. >> So I guess it's definitely an ATLAS issue. >> >> Yes, but I think the real difficulty is ATLAS 3.6.0 vs. 3.7.11. I would like to see some kind of official statement, I suppose, about which version of ATLAS scipy builds best with. 3.6.0 is stable but very old; Clint Whaley is recommending use of the 3.7 series. The good news is that he has a grant to help pay him for the time it will take to get a new stable release out. Nils, it might be worth trying ATLAS 3.7.13, which has a completely new (and much improved) configuration, but READ THE DOCS first. Building 3.7.13 is very different from building any previous versions. Frankly, I need stability more than the last iota of speed right now, so I'm using the Fedora Extras 3.6.0 ATLAS binaries with Scipy, which gives adequate performance and scipy.test(1) finishes with no errors. Steve From rclewley at cam.cornell.edu Sat Aug 12 21:02:01 2006 From: rclewley at cam.cornell.edu (Robert Clewley) Date: Sat, 12 Aug 2006 21:02:01 -0400 (EDT) Subject: [SciPy-user] ODE solvers and Scipy In-Reply-To: References: <20060812132040.9017.qmail@web51713.mail.yahoo.com> Message-ID: On Sat, 12 Aug 2006, A. M. Archibald wrote: > More seriously, if there are parts of the solution space that cause > problems (the solution approaches a singularity, or the edge of a > coordinate patch), there is no really effective way to bring the > solution up to the edge and then stop. This is because lsoda doesn't > do that - that feature is in lsodar.f ( > http://www.netlib.org/alliant/ode/prog/lsodar.f ), which has not > (yet?) been wrapped in python. I believe that lsodar *has* been wrapped as part of the sourceforge "SloppyCell" project for systems biology modelling. That package provides event detection for stopping integration under circumstances such as you mention, as does PyDSTool. Rob From nwagner at iam.uni-stuttgart.de Sun Aug 13 03:19:56 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sun, 13 Aug 2006 09:19:56 +0200 Subject: [SciPy-user] Call for testing In-Reply-To: <44DE7A0C.1060401@csun.edu> References: <44D056CF.6040605@iam.uni-stuttgart.de> <34E2F6D9-C7F6-4D2D-AF13-7D712E0124C7@arachnedesign.net> <9ADBF898-D18F-4957-BA41-9CDC09D597EC@arachnedesign.net> <44DE7A0C.1060401@csun.edu> Message-ID: On Sat, 12 Aug 2006 18:02:04 -0700 Stephen Walton wrote: > Steve Lianoglou wrote: > >>> Thank you all for running the test ! >>> >>> Finally I have disabled ATLAS by >>> export ATLAS=None >>> >>> Good news is that I get the correct result. >>> So I guess it's definitely an ATLAS issue. >>> >>> > Yes, but I think the real difficulty is ATLAS 3.6.0 vs. >3.7.11. I would > like to see some kind of official statement, I suppose, >about which > version of ATLAS scipy builds best with. 3.6.0 is >stable but very old; > Clint Whaley is recommending use of the 3.7 series. 
The >good news is > that he has a grant to help pay him for the time it will >take to get a > new stable release out. Nils, it might be worth trying >ATLAS 3.7.13, > which has a completely new (and much improved) >configuration, but READ > THE DOCS first. Building 3.7.13 is very different from >building any > previous versions. > >Frankly, I need stability more than the last iota of >speed right now, so > I'm using the Fedora Extras 3.6.0 ATLAS binaries with >Scipy, which gives > adequate performance and scipy.test(1) finishes with no >errors. > > Steve > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user Steve, I have used 3.7.11 and the problem is still open. Nils From nwagner at iam.uni-stuttgart.de Sun Aug 13 06:28:15 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sun, 13 Aug 2006 12:28:15 +0200 Subject: [SciPy-user] Call for testing In-Reply-To: References: <44D056CF.6040605@iam.uni-stuttgart.de> <34E2F6D9-C7F6-4D2D-AF13-7D712E0124C7@arachnedesign.net> <9ADBF898-D18F-4957-BA41-9CDC09D597EC@arachnedesign.net> <44DE7A0C.1060401@csun.edu> Message-ID: <44DEFEBF.7000505@iam.uni-stuttgart.de> Nils Wagner wrote: > On Sat, 12 Aug 2006 18:02:04 -0700 > Stephen Walton wrote: > >> Steve Lianoglou wrote: >> >> >>>> Thank you all for running the test ! >>>> >>>> Finally I have disabled ATLAS by >>>> export ATLAS=None >>>> >>>> Good news is that I get the correct result. >>>> So I guess it's definitely an ATLAS issue. >>>> >>>> >>>> >> Yes, but I think the real difficulty is ATLAS 3.6.0 vs. >> 3.7.11. I would >> like to see some kind of official statement, I suppose, >> about which >> version of ATLAS scipy builds best with. 3.6.0 is >> stable but very old; >> Clint Whaley is recommending use of the 3.7 series. The >> good news is >> that he has a grant to help pay him for the time it will >> take to get a >> new stable release out. Nils, it might be worth trying >> ATLAS 3.7.13, >> which has a completely new (and much improved) >> configuration, but READ >> THE DOCS first. Building 3.7.13 is very different from >> building any >> previous versions. >> >> Frankly, I need stability more than the last iota of >> speed right now, so >> I'm using the Fedora Extras 3.6.0 ATLAS binaries with >> Scipy, which gives >> adequate performance and scipy.test(1) finishes with no >> errors. >> >> Steve >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> > > Steve, > > I have used 3.7.11 and the problem is still open. > > Nils > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > Hi all, The different results disappeared misteriously ! Something must happened to numpy/scipy between ATLAS 3.6.0 >>> scipy.__version__ '0.5.0.2147' >>> numpy.__version__ '1.0b2.dev2951' >>> and ATLAS 3.6.0 >>> numpy.__version__ '1.0b2.dev3005' >>> scipy.__version__ '0.5.0.2158' >>> Note that ATLAS has been retained unchanged !!! 
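When two builds give different results like this, it can also help to record which BLAS/LAPACK/ATLAS libraries each numpy and scipy were actually linked against, not just the version strings. A small sketch, not from the thread, assuming the show_config() helpers generated by numpy.distutils are present in your builds (older builds expose the same information as numpy.__config__.show() and scipy.__config__.show()):

import numpy
import scipy

print "numpy", numpy.__version__
print "scipy", scipy.__version__

# Lists the atlas/blas/lapack libraries and the library/include directories
# that were picked up at build time, so two installations can be compared
# side by side.
numpy.show_config()
scipy.show_config()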
Nils From schofield at ftw.at Sun Aug 13 09:37:47 2006 From: schofield at ftw.at (Ed Schofield) Date: Sun, 13 Aug 2006 15:37:47 +0200 Subject: [SciPy-user] Sparse matrices and weave.inline, f2py or Pyrex In-Reply-To: <8b3894bc0608120450w7a6212b0ta5deb7005a154109@mail.gmail.com> References: <8b3894bc0608110704u4c22b196kfc97d493b9d69e4e@mail.gmail.com> <8b3894bc0608120450w7a6212b0ta5deb7005a154109@mail.gmail.com> Message-ID: > On 11/08/06, David Grant wrote: >> >> >> On 8/11/06, William Hunter wrote: >> > List; >> > >> > Has anyone ever attempted using sparse matrices in combination with >> > either one of the following; weave.inline, f2py or Pyrex? >> > >> > What is the best option? Perhaps a case of personal preference? >> > >> > Reason I'm asking: I've gotten the solving part of my code to run >> > satisfactorily, but building them with Python's for loops takes >> a heck >> > of a long time, too long. >> >> Have you tried using vector indexing with numpy first? See if you >> can remove >> some of those python for loops. >> On 12/08/2006, at 1:50 PM, William Hunter wrote: > Yes, well, my best attempt at it. You'll notice that the sparse > matrices doesn't support fancy indexing, unfortunately. (I'm hoping > someone can prove me wrong :-) > > Here's some code you can fiddle with (also attached - > sparsefiddle.py), if you want. > > [code snipped] > Yeah, sparse fancy indexing is still only partially supported. I don't think it'll ever be possible to make sparse support all of NumPy's indexing tricks. But I'd like to fill out the sparse indexing code so that more NumPy-like indexing operations are supported; this would at least improve usability. We could also definitely improve performance by a constant factor by taking the loops inside the methods such as __setitem__. You could try hacking the lil_matrix.__setitem__ method to accept index sequences of the form returned by ix_ to avoid all the extra method calls. (And I'd be happy to accept a patch!) It might be possible to speed up your code somewhat by doing the loops in a compiled language, but I expect this would be a complex endeavour and the gains would be small. To get speed gains of orders of magnitude you'd need to think about the optimal sparse data type and find an algorithm that's suitable for your matrix structure. I think constructing sparse matrices efficiently is inherently a tricky business ... -- Ed From nwagner at iam.uni-stuttgart.de Sun Aug 13 10:24:56 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sun, 13 Aug 2006 16:24:56 +0200 Subject: [SciPy-user] Shooting method in scipy Message-ID: <44DF3638.8080402@iam.uni-stuttgart.de> Hi all, Has someone implemented the shooting method in scipy ? Nils From hasslerjc at adelphia.net Sun Aug 13 12:00:49 2006 From: hasslerjc at adelphia.net (John Hassler) Date: Sun, 13 Aug 2006 12:00:49 -0400 Subject: [SciPy-user] Shooting method in scipy In-Reply-To: <44DF3638.8080402@iam.uni-stuttgart.de> References: <44DF3638.8080402@iam.uni-stuttgart.de> Message-ID: <44DF4CB1.1000203@adelphia.net> You could splice the ode-solver to the equation solver, although it might be slow. What sort of problem are you trying to solve? john Nils Wagner wrote: > Hi all, > > Has someone implemented the shooting method in scipy ? 
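No ready-made shooting routine seems to be in SciPy, but the pieces are there. Below is a rough sketch, not from the thread, of simple shooting for a two-point boundary value problem: odeint integrates the initial value problem and brentq adjusts the unknown initial slope. The particular problem y'' = -y with y(0) = 0, y(1) = 1, and the bracket [0.1, 10.0] for the slope, are assumed purely as an illustration.

from numpy import linspace
from scipy.integrate import odeint
from scipy.optimize import brentq

# Rewrite y'' = -y as a first-order system u = (y, y').
def rhs(u, t):
    return [u[1], -u[0]]

t = linspace(0.0, 1.0, 101)

def residual(slope):
    # Integrate the IVP with guessed initial slope y'(0) = slope and
    # return the miss distance at the far boundary, y(1) - 1.
    u = odeint(rhs, [0.0, slope], t)
    return u[-1, 0] - 1.0

# Adjust the free parameter until the boundary condition at t = 1 is met.
# The bracket must enclose a sign change of the residual.
slope = brentq(residual, 0.1, 10.0)
y = odeint(rhs, [0.0, slope], t)[:, 0]
print "y'(0) =", slope    # exact value is 1/sin(1), about 1.1884

For larger systems with several unknown parameters, the same idea applies with optimize.fsolve in place of brentq, at the cost of one full integration per residual evaluation.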
> > Nils > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From stefan at sun.ac.za Sun Aug 13 15:09:07 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Sun, 13 Aug 2006 21:09:07 +0200 Subject: [SciPy-user] Shooting method in scipy In-Reply-To: <44DF3638.8080402@iam.uni-stuttgart.de> References: <44DF3638.8080402@iam.uni-stuttgart.de> Message-ID: <20060813190907.GJ29149@mentat.za.net> On Sun, Aug 13, 2006 at 04:24:56PM +0200, Nils Wagner wrote: > Has someone implemented the shooting method in scipy ? Is this the infamous shooting(self,in_foot) method? Cheers St?fan From matthew.brett at gmail.com Sun Aug 13 15:22:44 2006 From: matthew.brett at gmail.com (Matthew Brett) Date: Sun, 13 Aug 2006 20:22:44 +0100 Subject: [SciPy-user] Shooting method in scipy In-Reply-To: <20060813190907.GJ29149@mentat.za.net> References: <44DF3638.8080402@iam.uni-stuttgart.de> <20060813190907.GJ29149@mentat.za.net> Message-ID: <1e2af89e0608131222p3a0917fm54960ff764540091@mail.gmail.com> On 8/13/06, Stefan van der Walt wrote: > On Sun, Aug 13, 2006 at 04:24:56PM +0200, Nils Wagner wrote: > > Has someone implemented the shooting method in scipy ? > > Is this the infamous shooting(self,in_foot) method? I doubt it, as that method is implemented in every large code-base! Matthew From nwagner at iam.uni-stuttgart.de Mon Aug 14 03:30:04 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 14 Aug 2006 09:30:04 +0200 Subject: [SciPy-user] Shooting method in scipy In-Reply-To: <1e2af89e0608131222p3a0917fm54960ff764540091@mail.gmail.com> References: <44DF3638.8080402@iam.uni-stuttgart.de> <20060813190907.GJ29149@mentat.za.net> <1e2af89e0608131222p3a0917fm54960ff764540091@mail.gmail.com> Message-ID: <44E0267C.70109@iam.uni-stuttgart.de> Matthew Brett wrote: > On 8/13/06, Stefan van der Walt wrote: > >> On Sun, Aug 13, 2006 at 04:24:56PM +0200, Nils Wagner wrote: >> >>> Has someone implemented the shooting method in scipy ? >>> >> Is this the infamous shooting(self,in_foot) method? >> > > I doubt it, as that method is implemented in every large code-base! > > Matthew > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > The main steps of the shooting method are - Transform the given boundary value problem into an initial value problem with estimated parameters - Adjust the parameters iteratively to reproduce the given boundary values A nice application of the shooting method is given in a recent paper by Coomer et al. "A non-linear eigenvalue problem associated with inextensible whirling strings" Journal of Sound and Vibration, Vol. 239 Issue 5 pp. 969-982 (2001) Nils From bhendrix at enthought.com Mon Aug 14 11:33:42 2006 From: bhendrix at enthought.com (Bryce Hendrix) Date: Mon, 14 Aug 2006 10:33:42 -0500 Subject: [SciPy-user] SciPy 2006 PPC ISO update Message-ID: <44E097D6.8050409@enthought.com> I was unable to master a new PPC ISO this weekend due to a number of problems. Its not impossible, but very difficult and time consuming with the hardware available (important note: Ubuntu LiveCD does not work on G5s). Instead of the Live CD, I've made available a tar of all the files which are not provided by Ubuntu packages. The download information can be found at http://www.scipy.org/SciPy2006/LiveCD. 
Bryce From willemjagter at gmail.com Tue Aug 15 04:09:12 2006 From: willemjagter at gmail.com (William Hunter) Date: Tue, 15 Aug 2006 10:09:12 +0200 Subject: [SciPy-user] Sparse matrices and weave.inline, f2py or Pyrex In-Reply-To: References: <8b3894bc0608110704u4c22b196kfc97d493b9d69e4e@mail.gmail.com> <8b3894bc0608120450w7a6212b0ta5deb7005a154109@mail.gmail.com> Message-ID: <8b3894bc0608150109m5ef0bbf5u1519c9915b4927b6@mail.gmail.com> OK, so that's it then. I'll give hacking lil_matrix.__setitem__ a shot, although it would be my first attempt at hacking. I suppose looking at the matrix.__setitem__ method is a good starting point, if there is such an ? Has there ever been an attempt at using dictionaries to work with sparse matrices? I'm exposing my obvious novice status here, but it seems possible if I look at the two intro texts on Python I have. One would have to make use of an iterative method like Jacobi or SOR. Robert Cimrman ones mentioned he'd like to see Tim Davies' CSparse wrapped. Me too. He mentioned using ctypes. Would that require less work than, say SWIG. WH On 13/08/06, Ed Schofield wrote: > > > On 11/08/06, David Grant wrote: > >> > >> > >> On 8/11/06, William Hunter wrote: > >> > List; > >> > > >> > Has anyone ever attempted using sparse matrices in combination with > >> > either one of the following; weave.inline, f2py or Pyrex? > >> > > >> > What is the best option? Perhaps a case of personal preference? > >> > > >> > Reason I'm asking: I've gotten the solving part of my code to run > >> > satisfactorily, but building them with Python's for loops takes > >> a heck > >> > of a long time, too long. > >> > >> Have you tried using vector indexing with numpy first? See if you > >> can remove > >> some of those python for loops. > >> > > On 12/08/2006, at 1:50 PM, William Hunter wrote: > > Yes, well, my best attempt at it. You'll notice that the sparse > > matrices doesn't support fancy indexing, unfortunately. (I'm hoping > > someone can prove me wrong :-) > > > > Here's some code you can fiddle with (also attached - > > sparsefiddle.py), if you want. > > > > [code snipped] > > > > > Yeah, sparse fancy indexing is still only partially supported. I > don't think it'll ever be possible to make sparse support all of > NumPy's indexing tricks. But I'd like to fill out the sparse indexing > code so that more NumPy-like indexing operations are supported; this > would at least improve usability. We could also definitely improve > performance by a constant factor by taking the loops inside the > methods such as __setitem__. You could try hacking the > lil_matrix.__setitem__ method to accept index sequences of the form > returned by ix_ to avoid all the extra method calls. (And I'd be > happy to accept a patch!) > > It might be possible to speed up your code somewhat by doing the > loops in a compiled language, but I expect this would be a complex > endeavour and the gains would be small. To get speed gains of orders > of magnitude you'd need to think about the optimal sparse data type > and find an algorithm that's suitable for your matrix structure. I > think constructing sparse matrices efficiently is inherently a tricky > business ... 
> > > -- Ed > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From sebastian_busch at gmx.net Tue Aug 15 11:05:21 2006 From: sebastian_busch at gmx.net (Sebastian Busch) Date: Tue, 15 Aug 2006 17:05:21 +0200 Subject: [SciPy-user] Still SciPy installation problems on SUSE linux Message-ID: <44E1E2B1.6040806@gmx.net> Dear list, I posted about this already some days ago -- thank you David and Eric for your advice. I tried to install SciPy the way Eric proposed. I didn't install pyfits, IPython, and F2Py. After fighting a long time with LAPACK and atlas, I decided to try lapack & blas provided by SUSE (which doesn't seem to change much, as far as I can see). Before installing again, I simply deleted the corresponding directories, e.g. /usr/lib/python2.4/site-packages/numpy/ numpy.test works fine (matplotlib is also working), however scipy.test gives an error about integers in arrays -- I attached the output. It comes from import scipy scipy.test() I hope you have again some ideas, thanks, Sebastian. -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: test.log URL: From nwagner at iam.uni-stuttgart.de Tue Aug 15 11:20:23 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 15 Aug 2006 17:20:23 +0200 Subject: [SciPy-user] Still SciPy installation problems on SUSE linux In-Reply-To: <44E1E2B1.6040806@gmx.net> References: <44E1E2B1.6040806@gmx.net> Message-ID: <44E1E637.1000006@iam.uni-stuttgart.de> Sebastian Busch wrote: > Dear list, > > I posted about this already some days ago -- thank you David and Eric > for your advice. > > I tried to install SciPy the way Eric proposed. I didn't install pyfits, > IPython, and F2Py. After fighting a long time with LAPACK and atlas, I > decided to try lapack & blas provided by SUSE (which doesn't seem to > change much, as far as I can see). > > Before installing again, I simply deleted the corresponding directories, > e.g. /usr/lib/python2.4/site-packages/numpy/ > > numpy.test works fine (matplotlib is also working), however scipy.test > gives an error about integers in arrays -- I attached the output. It > comes from > > import scipy > scipy.test() > > > I hope you have again some ideas, > thanks, > Sebastian. > > Please deinstall the rpm's provided by SUSE ! They are incomplete. It's very easy to compile both BLAS and LAPACK. See http://www.scipy.org/Installing_SciPy/BuildingGeneral Compiling ATLAS takes a while but it accelerates your computations afterwards. Note that it is important to build a complete library. http://pong.tamu.edu/tiki/tiki-view_blog_post.php?blogId=6&postId=97 Once you have installed these libraries you can install numpy/scipy/matplotlib. 
Nils > ------------------------------------------------------------------------ > > Overwriting info= from scipy.misc (was from numpy.lib.utils) > import signal -> failed: /usr/lib/libblas.so.3: undefined symbol: _gfortran_filename > import lib.lapack -> failed: /usr/lib/libblas.so.3: undefined symbol: _gfortran_filename > import cluster -> failed: /usr/lib/libblas.so.3: undefined symbol: _gfortran_filename > import linsolve.umfpack -> failed: /usr/lib/libblas.so.3: undefined symbol: _gfortran_filename > Overwriting fft= from scipy.fftpack.basic (was from /usr/lib/python2.4/site-packages/numpy/fft/__init__.pyc) > import linsolve -> failed: /usr/lib/libblas.so.3: undefined symbol: _gfortran_filename > import integrate -> failed: /usr/lib/libblas.so.3: undefined symbol: _gfortran_filename > import optimize -> failed: /usr/lib/libblas.so.3: undefined symbol: _gfortran_filename > import special -> failed: /usr/lib/libblas.so.3: undefined symbol: _gfortran_filename > import lib.blas -> failed: /usr/lib/libblas.so.3: undefined symbol: _gfortran_filename > import linalg -> failed: /usr/lib/libblas.so.3: undefined symbol: _gfortran_filename > import maxentropy -> failed: /usr/lib/libblas.so.3: undefined symbol: _gfortran_filename > import stats -> failed: /usr/lib/libblas.so.3: undefined symbol: _gfortran_filename > Found 4 tests for scipy.io.array_import > Found 397 tests for scipy.ndimage > Warning: FAILURE importing tests for > /usr/lib/python2.4/site-packages/scipy/linsolve/_superlu.py:1: ImportError: /usr/lib/libblas.so.3: undefined symbol: _gfortran_filename (in ?) > Found 20 tests for scipy.fftpack.pseudo_diffs > Warning: FAILURE importing tests for > /usr/lib/python2.4/site-packages/scipy/optimize/lbfgsb.py:30: ImportError: /usr/lib/libblas.so.3: undefined symbol: _gfortran_filename (in ?) > Found 5 tests for scipy.interpolate.fitpack > Found 1 tests for scipy.interpolate > Found 12 tests for scipy.io.mmio > Found 18 tests for scipy.fftpack.basic > Warning: FAILURE importing tests for > /usr/lib/python2.4/site-packages/scipy/linsolve/_superlu.py:1: ImportError: /usr/lib/libblas.so.3: undefined symbol: _gfortran_filename (in ?) > Warning: FAILURE importing tests for > /usr/lib/python2.4/site-packages/scipy/linsolve/_superlu.py:1: ImportError: /usr/lib/libblas.so.3: undefined symbol: _gfortran_filename (in ?) > Warning: FAILURE importing tests for > /usr/lib/python2.4/site-packages/scipy/optimize/lbfgsb.py:30: ImportError: /usr/lib/libblas.so.3: undefined symbol: _gfortran_filename (in ?) > Found 16 tests for scipy.io.mio > Warning: FAILURE importing tests for > /usr/lib/python2.4/site-packages/scipy/linalg/lapack.py:17: ImportError: /usr/lib/libblas.so.3: undefined symbol: _gfortran_filename (in ?) > Warning: FAILURE importing tests for > /usr/lib/python2.4/site-packages/scipy/linsolve/_superlu.py:1: ImportError: /usr/lib/libblas.so.3: undefined symbol: _gfortran_filename (in ?) > Found 4 tests for scipy.fftpack.helper > Found 0 tests for __main__ > Warning: 1000000 bytes requested, 20 bytes read. 
> ...E................................................................................................................................................................................................................................................................................................................................................................................................................................./usr/lib/python2.4/site-packages/scipy/interpolate/fitpack2.py:410: UserWarning: > The coefficients of the spline returned have been computed as the > minimal norm least-squares solution of a (numerically) rank deficient > system (deficiency=7). If deficiency is large, the results may be > inaccurate. Deficiency may strongly depend on the value of eps. > warnings.warn(message) > ........................................................ > ====================================================================== > ERROR: check_integer (scipy.io.tests.test_array_import.test_read_array) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python2.4/site-packages/scipy/io/tests/test_array_import.py", line 55, in check_integer > from scipy import stats > File "/usr/lib/python2.4/site-packages/scipy/stats/__init__.py", line 7, in ? > from stats import * > File "/usr/lib/python2.4/site-packages/scipy/stats/stats.py", line 191, in ? > import scipy.special as special > File "/usr/lib/python2.4/site-packages/scipy/special/__init__.py", line 10, in ? > import orthogonal > File "/usr/lib/python2.4/site-packages/scipy/special/orthogonal.py", line 66, in ? > from scipy.linalg import eig > File "/usr/lib/python2.4/site-packages/scipy/linalg/__init__.py", line 8, in ? > from basic import * > File "/usr/lib/python2.4/site-packages/scipy/linalg/basic.py", line 17, in ? > from lapack import get_lapack_funcs > File "/usr/lib/python2.4/site-packages/scipy/linalg/lapack.py", line 17, in ? > from scipy.linalg import flapack > ImportError: /usr/lib/libblas.so.3: undefined symbol: _gfortran_filename > > ---------------------------------------------------------------------- > Ran 477 tests in 8.052s > > FAILED (errors=1) > > Don't worry about a warning regarding the number of bytes read. > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From david.huard at gmail.com Tue Aug 15 11:56:31 2006 From: david.huard at gmail.com (David Huard) Date: Tue, 15 Aug 2006 11:56:31 -0400 Subject: [SciPy-user] Autocorrelation Message-ID: <91cf711d0608150856w6fe8ca68v3d0ed57a4b450c79@mail.gmail.com> Hi, I haven't seen in Scipy a function to compute the autocorrelation of a time series. Is there one I missed ? Thanks, David -------------- next part -------------- An HTML attachment was scrubbed... URL: From Pierre.Barbier_De_reuille at sophia.inria.fr Tue Aug 15 12:18:43 2006 From: Pierre.Barbier_De_reuille at sophia.inria.fr (Pierre Barbier de Reuille) Date: Tue, 15 Aug 2006 17:18:43 +0100 Subject: [SciPy-user] Autocorrelation In-Reply-To: <91cf711d0608150856w6fe8ca68v3d0ed57a4b450c79@mail.gmail.com> References: <91cf711d0608150856w6fe8ca68v3d0ed57a4b450c79@mail.gmail.com> Message-ID: <44E1F3E3.7080203@sophia.inria.fr> David Huard wrote: > Hi, > I haven't seen in Scipy a function to compute the autocorrelation of a > time series. Is there one I missed ? 
> Isn't numpy.correlate enough ? For autocorrelation you can do something like: r = numpy.correlate(x, x) no ? Pierre > Thanks, > > David > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From nwagner at iam.uni-stuttgart.de Tue Aug 15 12:03:07 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 15 Aug 2006 18:03:07 +0200 Subject: [SciPy-user] Autocorrelation In-Reply-To: <91cf711d0608150856w6fe8ca68v3d0ed57a4b450c79@mail.gmail.com> References: <91cf711d0608150856w6fe8ca68v3d0ed57a4b450c79@mail.gmail.com> Message-ID: <44E1F03B.5060400@iam.uni-stuttgart.de> David Huard wrote: > Hi, > I haven't seen in Scipy a function to compute the autocorrelation of a > time series. Is there one I missed ? > > Thanks, > > David > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > signal.correlate(in1,in1,full) ? Nils From sebastian_busch at gmx.net Tue Aug 15 12:53:20 2006 From: sebastian_busch at gmx.net (Sebastian Busch) Date: Tue, 15 Aug 2006 18:53:20 +0200 Subject: [SciPy-user] Still SciPy installation problems on SUSE linux In-Reply-To: <44E1E637.1000006@iam.uni-stuttgart.de> References: <44E1E2B1.6040806@gmx.net> <44E1E637.1000006@iam.uni-stuttgart.de> Message-ID: <44E1FC00.3000904@gmx.net> Nils Wagner wrote: > Sebastian Busch wrote: >> ... After fighting a long time with LAPACK and atlas, I >> decided to try lapack & blas provided by SUSE (which doesn't seem to >> change much, as far as I can see). >> ... > Please deinstall the rpm's provided by SUSE ! They are incomplete. Thanks. I did so and installed the precompiled ATLAS 3.6.0 library, then completed it with LAPACK. I also installed IPython & f2py. Now, things work fine. Before, I tried to work the ATLAS developer version, perhaps this was not a good idea. It still gives " Overwriting info= from scipy.misc (was from numpy.lib.utils) " but, according to David, this is not a problem. Thanks again, Sebastian. From david.huard at gmail.com Tue Aug 15 12:59:49 2006 From: david.huard at gmail.com (David Huard) Date: Tue, 15 Aug 2006 12:59:49 -0400 Subject: [SciPy-user] Autocorrelation In-Reply-To: <44E1F03B.5060400@iam.uni-stuttgart.de> References: <91cf711d0608150856w6fe8ca68v3d0ed57a4b450c79@mail.gmail.com> <44E1F03B.5060400@iam.uni-stuttgart.de> Message-ID: <91cf711d0608150959r70e89baft2eea260fd4432f7a@mail.gmail.com> > signal.correlate(in1,in1,full) ? > r = numpy.correlate(x, x) ? I should have been more precise, I meant autocorrelation of lag 1, in other words, the correlation between neighbours. David -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From strawman at astraw.com Tue Aug 15 13:26:42 2006 From: strawman at astraw.com (Andrew Straw) Date: Tue, 15 Aug 2006 10:26:42 -0700 Subject: [SciPy-user] Sparse matrices and weave.inline, f2py or Pyrex In-Reply-To: <8b3894bc0608150109m5ef0bbf5u1519c9915b4927b6@mail.gmail.com> References: <8b3894bc0608110704u4c22b196kfc97d493b9d69e4e@mail.gmail.com> <8b3894bc0608120450w7a6212b0ta5deb7005a154109@mail.gmail.com> <8b3894bc0608150109m5ef0bbf5u1519c9915b4927b6@mail.gmail.com> Message-ID: <44E203D2.4050907@astraw.com> William Hunter wrote: > Has there ever been an attempt at using dictionaries to work with > sparse matrices? See scipy.sparse.dok_matrix From david.huard at gmail.com Tue Aug 15 13:41:35 2006 From: david.huard at gmail.com (David Huard) Date: Tue, 15 Aug 2006 13:41:35 -0400 Subject: [SciPy-user] Autocorrelation In-Reply-To: <91cf711d0608150959r70e89baft2eea260fd4432f7a@mail.gmail.com> References: <91cf711d0608150856w6fe8ca68v3d0ed57a4b450c79@mail.gmail.com> <44E1F03B.5060400@iam.uni-stuttgart.de> <91cf711d0608150959r70e89baft2eea260fd4432f7a@mail.gmail.com> Message-ID: <91cf711d0608151041qdf181b7o9ea73b2a0f736135@mail.gmail.com> Turns out its quicker to write one, than to find one... def autocorr(x, lag=1): """Returns the autocorrelation x.""" x = squeeze(asarray(x)) mu = x.mean() s = x.std() return ((x[:-lag]-mu)*(x[lag:]-mu)).sum()/(s**2)/(len(x) - lag) David -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Tue Aug 15 13:44:24 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Tue, 15 Aug 2006 19:44:24 +0200 Subject: [SciPy-user] Autocorrelation In-Reply-To: <44E1F3E3.7080203@sophia.inria.fr> References: <91cf711d0608150856w6fe8ca68v3d0ed57a4b450c79@mail.gmail.com> <44E1F3E3.7080203@sophia.inria.fr> Message-ID: <20060815174424.GH7825@mentat.za.net> On Tue, Aug 15, 2006 at 05:18:43PM +0100, Pierre Barbier de Reuille wrote: > David Huard wrote: > > Hi, > > I haven't seen in Scipy a function to compute the autocorrelation of a > > time series. Is there one I missed ? > > > Isn't numpy.correlate enough ? For autocorrelation you can do something > like: > > r = numpy.correlate(x, x) These functions are useful when working with short 1-dimensional signals, but for larger images they are agonisingly slow. A way around the problem is to calculate the correlation using the FFT, i.e. def fft_correlate(A,B,*args,**kwargs): return S.signal.fftconvolve(A,B[::-1,::-1,...],*args,**kwargs) On my computer, I benchmarked these methods with different length signals. See the attached graph and script. You'll notice that the FFT execution time jumps around bit -- I know that it can be calculated especially quickly for lengths that are powers of two, maybe this has something to do with that. Can someone on the list enlighten me? Either way, your mileage may vary. Some say that this method is somewhat less accurate, and that it uses more memory. All I know is that it finishes calculating before the end of time. Regards St?fan -------------- next part -------------- A non-text attachment was scrubbed... Name: corr.py Type: text/x-python Size: 643 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: bench_corr.png Type: image/png Size: 45581 bytes Desc: not available URL: From ckkart at hoc.net Tue Aug 15 22:06:16 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Wed, 16 Aug 2006 11:06:16 +0900 Subject: [SciPy-user] odr weighted residuals Message-ID: <44E27D98.9060403@hoc.net> Hi, this is probably a question for Robert but other people might be interested in this as well, so here it goes: Does odr support weighted residuals such that data points which are far away from the bulk of points have small weights? I could think of doing that externally by running odr once, looking at the residuals, calculating new weigths and restarting the fit but of course I'd prefer to not to do it like this. Regards, Christian From robert.kern at gmail.com Wed Aug 16 00:22:27 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 15 Aug 2006 21:22:27 -0700 Subject: [SciPy-user] odr weighted residuals In-Reply-To: <44E27D98.9060403@hoc.net> References: <44E27D98.9060403@hoc.net> Message-ID: <44E29D83.5090309@gmail.com> Christian Kristukat wrote: > Hi, > this is probably a question for Robert but other people might be interested in > this as well, so here it goes: > > Does odr support weighted residuals such that data points which are far away > from the bulk of points have small weights? Nope. Whatever else ODR is, it's still least-squares. In order to do what you want, you will have to resort to iterated reweighting (Google for algorithms) or some other scheme that is built on top of least-squares. Or you can construct the appropriate non-linear optimization problem directly and use one of the fmin*() functions. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ckkart at hoc.net Wed Aug 16 04:12:53 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Wed, 16 Aug 2006 17:12:53 +0900 Subject: [SciPy-user] odr weighted residuals In-Reply-To: <44E29D83.5090309@gmail.com> References: <44E27D98.9060403@hoc.net> <44E29D83.5090309@gmail.com> Message-ID: <44E2D385.2090404@hoc.net> Robert Kern wrote: > Christian Kristukat wrote: >> Hi, >> this is probably a question for Robert but other people might be interested in >> this as well, so here it goes: >> >> Does odr support weighted residuals such that data points which are far away >> from the bulk of points have small weights? > > Nope. Whatever else ODR is, it's still least-squares. In order to do what you > want, you will have to resort to iterated reweighting (Google for algorithms) or > some other scheme that is built on top of least-squares. Or you can construct > the appropriate non-linear optimization problem directly and use one of the > fmin*() functions. Ok, thanks. I think I'll better remove the bad data points, but just for curiosity: when using optimize.leastsq you have to provide the residuals. Can't odr modified to calculate the residuals externally or would that break the algorithm? Christian From elcorto at gmx.net Wed Aug 16 05:08:21 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Wed, 16 Aug 2006 11:08:21 +0200 Subject: [SciPy-user] numpy test fails Message-ID: <44E2E085.4010206@gmx.net> I've already posted this on the numpy list (for numpy 1.0b2.dev3007) but the same problem is there in 1.0b3.dev3027 (I'm running Python 2.3.5 on Debian testing): #-----------------------------------------------------------------------# numpy.test(5,5) [.....] 
check_2D_array (numpy.lib.tests.test_shape_base.test_vsplit) ... ok check_0D_array (numpy.lib.tests.test_shape_base.test_vstack) ... ok check_1D_array (numpy.lib.tests.test_shape_base.test_vstack) ... ok check_2D_array (numpy.lib.tests.test_shape_base.test_vstack) ... ok check_2D_array2 (numpy.lib.tests.test_shape_base.test_vstack) ... ok ====================================================================== ERROR: check_ascii (numpy.core.tests.test_multiarray.test_fromstring) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.3/site-packages/numpy/core/tests/test_multiarray.py", line 120, in check_ascii a = fromstring('1 , 2 , 3 , 4',sep=',') ValueError: don't know how to read character strings for given array type ---------------------------------------------------------------------- Ran 481 tests in 1.703s FAILED (errors=1) #-----------------------------------------------------------------------# Full logfile is attached. The scipy test runs fine. BTW, I shouldn't worry about the "Warning: No test file found in ..." messages, right? -- cheers, steve Random number generation is the art of producing pure gibberish as quickly as possible. -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: numpylog.txt URL: From david.huard at gmail.com Wed Aug 16 09:50:12 2006 From: david.huard at gmail.com (David Huard) Date: Wed, 16 Aug 2006 09:50:12 -0400 Subject: [SciPy-user] Autocorrelation In-Reply-To: <20060815174424.GH7825@mentat.za.net> References: <91cf711d0608150856w6fe8ca68v3d0ed57a4b450c79@mail.gmail.com> <44E1F3E3.7080203@sophia.inria.fr> <20060815174424.GH7825@mentat.za.net> Message-ID: <91cf711d0608160650w5f27a8a4tab1982e7c854b54f@mail.gmail.com> Thanks Stefan for the tip, Fortunately, I got a pretty simple case so I don't need to rely on anything fancy. David 2006/8/15, Stefan van der Walt : > > On Tue, Aug 15, 2006 at 05:18:43PM +0100, Pierre Barbier de Reuille wrote: > > David Huard wrote: > > > Hi, > > > I haven't seen in Scipy a function to compute the autocorrelation of a > > > time series. Is there one I missed ? > > > > > Isn't numpy.correlate enough ? For autocorrelation you can do something > > like: > > > > r = numpy.correlate(x, x) > > These functions are useful when working with short 1-dimensional > signals, but for larger images they are agonisingly slow. A way > around the problem is to calculate the correlation using the FFT, i.e. > > def fft_correlate(A,B,*args,**kwargs): > return S.signal.fftconvolve(A,B[::-1,::-1,...],*args,**kwargs) > > On my computer, I benchmarked these methods with different length > signals. See the attached graph and script. You'll notice that the > FFT execution time jumps around bit -- I know that it can be > calculated especially quickly for lengths that are powers of two, > maybe this has something to do with that. Can someone on the list > enlighten me? > > Either way, your mileage may vary. Some say that this method is > somewhat less accurate, and that it uses more memory. All I know is > that it finishes calculating before the end of time. > > Regards > St?fan > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From robert.kern at gmail.com Wed Aug 16 12:05:35 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 16 Aug 2006 09:05:35 -0700 Subject: [SciPy-user] odr weighted residuals In-Reply-To: <44E2D385.2090404@hoc.net> References: <44E27D98.9060403@hoc.net> <44E29D83.5090309@gmail.com> <44E2D385.2090404@hoc.net> Message-ID: <44E3424F.4030708@gmail.com> Christian Kristukat wrote: > Robert Kern wrote: >> Christian Kristukat wrote: >>> Hi, >>> this is probably a question for Robert but other people might be interested in >>> this as well, so here it goes: >>> >>> Does odr support weighted residuals such that data points which are far away >>> from the bulk of points have small weights? >> Nope. Whatever else ODR is, it's still least-squares. In order to do what you >> want, you will have to resort to iterated reweighting (Google for algorithms) or >> some other scheme that is built on top of least-squares. Or you can construct >> the appropriate non-linear optimization problem directly and use one of the >> fmin*() functions. > > Ok, thanks. I think I'll better remove the bad data points, but just for > curiosity: when using optimize.leastsq you have to provide the residuals. Can't > odr modified to calculate the residuals externally or would that break the > algorithm? It sort of breaks the algorithm of optimize.leastsq(), too. And no, it can't. One of the things that ODR does for you is calculate the minimum orthogonal distance (residual) from your data points and the function. If you could calculate those orthogonal residuals, you wouldn't need ODRPACK; you could just use optimize.leastsq(). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From rlratzel at enthought.com Wed Aug 16 14:29:34 2006 From: rlratzel at enthought.com (Rick Ratzel) Date: Wed, 16 Aug 2006 13:29:34 -0500 (CDT) Subject: [SciPy-user] ANN: Python 2.4.3 Enthought Edition 1.1.0 alpha2 Message-ID: <20060816182934.555BD647@mail.enthought.com> In keeping with the "release early, release often" mantra of the open source community, Enthought is pleased to announce Python Enthought Edition Version 1.1.0 alpha2. What's New In Python 2.4.3 Enthought Edition 1.1.0 alpha2: ---------------------------------------------------------- This release focuses mainly on improving access to the Python Cheese Shop, as well as some bug fixes and enhancements to the GUI. To access the Python Cheese Shop, simply add 'http://python.org/pypi' to the 'find_links' preference under 'Preferences'. Then, click 'Install' to browse the list of packages available for installation. Changes include: * PyPI access speed greatly improved. Meta-data is also fetched "on-demand". * Bug preventing PyPI packages from downloading/installing fixed. * Packages can be "de-activated" and "re-activated" allowing users to easily select which versions, if any, are available for import. * Enstaller highlites packages which are already installed and prevents users from inadvertently changing an existing install. For a detailed list of changes made since the last release, please refer to the wiki page listed below. 
Links: ------ Python 2.4.3 Enthought Edition 1.1.0 download page: http://code.enthought.com/enthon/enstaller.shtml Enstaller wiki: http://enthought.com/enthought/wiki/Enstaller About Python Enthought Edition 1.1.0 alpha2: -------------------------------------------- This alpha release of 1.1.0 Python Enthought Edition is the first Enthought distribution (and the first Python distribution that we know of) to use individual egg packages (http://peak.telecommunity.com/DevCenter/PythonEggs) rather than a monolithic .exe installer. Python Enthought Edition 1.1.0 includes the Enstaller application, which manages selecting, downloading, and installing Python eggs. The initial installer program installs the standard Python software and runs a small script, which downloads, installs, and runs Enstaller. From within Enstaller, you can select individual egg packages from Enthought's egg repository, or select the "enthon-1.1.0" package, which installs everything listed in the 1.1.0 distribution. Python 2.4.3, Enthought Edition 1.1.0 is available only for Windows for the alpha2 release. It includes eggs for the following packages: Numpy SciPy IPython wxPython PIL mingw MayaVi Scientific Python VTK and many more... After installing Python Enthought Edition 1.1.0 alpha2, you can launch Enstaller again later, to upgrade, uninstall, inspect, or add new packages to your Python installation. More information is available about this and all other open source software written or released by Enthought, Inc., at http://code.enthought.com. A Note About the License ------------------------ The Enstaller application, released under the same BSD2 Open Source License as all other Enthought Open Source, is and will remain free. Access to Enthought's egg repository is free to all during the alpha and beta test phases, and will remain free for individual and academic use. After the official release of Enstaller, commercial and government users will be asked to pay a nominal, per-seat, annual fee if they'd like continued access to the egg repository (to help defray the costs of maintaining the repository). The Enstaller application may be used to access other egg repositories (e.g. the Python Cheese Shop), not just Enthought's repository. 1.1.0 alpha2 Release Notes -------------------------- This release is early alpha and has several known bugs, as well as a long list of features that have not yet been implemented. Some of the more obvious ones are listed below: * An egg of the standard documents that are typically included in Python Enthought Edition releases is not yet available. * An egg of the complete Enthought Tool Suite (ETS), a package typically available in Python Enthought Edition releases, is not yet available. However a subset of the ETS that includes Traits and TraitsUI is installed with Enstaller. * The uninstall feature does not properly uninstall a package if it is in use by any other process, including Enstaller. * The metadata about an installed package might not display in the Enstaller dialog box, in most cases because the installed package does not include metadata. This issue will improve as Enthought enhances the eggs available in the Enthought egg repository, * Packages that modify the system registry (typically, adding directories to the search PATH) require the user to log out and then log back in before the settings take effect. * Some packages that launch post-install scripts can display error messages if they fail for any reason, which can require manual intervention. 
These errors are usually caused by inadequate permissions for creating or moving files. * IPython may have problems when starting for the first time if a previous version of IPython was run on the system. If you see "WARNING: could not import user config", either follow the directions that follow the warning, or delete ipy_user_conf.py from the IPython directory. * Some users report that older matplotlibrc files are not compatible with the version of matplotlib installed with this release. Please refer to the matplotlib mailing list for further help. We are grateful to everyone who has helped test this release. If you'd like to contribute suggestions or report a bug, you can do so at https://svn.enthought.com/enthought. -- Rick Ratzel - Enthought, Inc. 515 Congress Avenue, Suite 2100 - Austin, Texas 78701 512-536-1057 x229 - Fax: 512-536-1059 http://www.enthought.com From fredantispam at free.fr Wed Aug 16 18:04:03 2006 From: fredantispam at free.fr (fred) Date: Thu, 17 Aug 2006 00:04:03 +0200 Subject: [SciPy-user] find roots for spherical bessel functions... Message-ID: <44E39653.4080608@free.fr> Hi all, I'm trying to find out the first n roots of the first m spherical bessel functions Jn(r) (and for the derivative of (r*Jn(r))), for 1 References: <44E39653.4080608@free.fr> Message-ID: On 16/08/06, fred wrote: > I'm trying to find out the first n roots of the first m spherical bessel > functions Jn(r) > (and for the derivative of (r*Jn(r))), for 1 References: <44E39653.4080608@free.fr> Message-ID: <44E41A17.10601@free.fr> A. M. Archibald a ?crit : > On 16/08/06, fred wrote: > > >>I'm trying to find out the first n roots of the first m spherical bessel >>functions Jn(r) >>(and for the derivative of (r*Jn(r))), for 1 > > This is only about 20,000 values, so I recommend precomputing a table > (which will be slow, but only necessary once). Probably the tables > exist somewhere, but whether they're freely available (can you > copyright those numbers?) in digital form... perhaps you should > publish them (and the code, so we can check!) when you succeed? Ok, only I succeed ;-) > Looking at Abramowitz and Stegun ( > http://www.math.sfu.ca/~cbm/aands/page_370.htm ), you can see that the > zeros of the m+1st interlace with the zeros of the mth. So if you can > find the first 200 zeros of the first spherical Bessel function, all > the rest fall immediately using (say) brentq. Ok. (PS: A & S online ? Great ! :-) > As for the first, well, you could probably eyeball it, see what the > shortest space between zeros is for the first two hundred, and just > take values at less than half that spacing. That should get you > brackets for all of them. (You could also use some clever estimates of > the sizes of first and second derivatives to make sure you didn't miss > any roots, if you wanted to do it more automatically.) Sturm theory > would probably also let you bracket roots between roots of > integer-weight Bessel functions, if you needed a random-access > function. Ok. I will look that. > > Your second function, r*Jn'(r)+Jn(r) will be more painful, but some > clever reasoning about the signs and zeros of Jn and Jn' (and Jn'', > which you can get from the differential equation) should let you > bracket the roots without undue difficulty. idem. > Good luck, and please do make the table available online, > A. M. Archibald > > P.S. scipy.special can compute spherical Bessel functions directly. Yes, I know that. 
The problem I have with this function (as associated Legendre polys, etc), is that its argument can only be a scalar. And I need to pass it arrays. So I have to define spherical Bessel func with jv(), which accepts arrays as arg. PS : with fsolve's octave routine, I have no such problem (find out x0, etc). So I was thinking it will be so easy with scipy. Thanks. -- Fred. From peridot.faceted at gmail.com Thu Aug 17 04:04:09 2006 From: peridot.faceted at gmail.com (A. M. Archibald) Date: Thu, 17 Aug 2006 04:04:09 -0400 Subject: [SciPy-user] find roots for spherical bessel functions... In-Reply-To: <44E41A17.10601@free.fr> References: <44E39653.4080608@free.fr> <44E41A17.10601@free.fr> Message-ID: On 17/08/06, fred wrote: > PS : with fsolve's octave routine, I have no such problem (find out x0, > etc). > So I was thinking it will be so easy with scipy. I am mystified by this. What initial values are you providing to fsolve, and how are you sure that they're giving you the right roots? (Si la langue est une probl?me, vous pouvez me contacter en Fran?ais hors la liste) A. M. Archibald From agn at noc.soton.ac.uk Thu Aug 17 07:02:24 2006 From: agn at noc.soton.ac.uk (George Nurser) Date: Thu, 17 Aug 2006 12:02:24 +0100 Subject: [SciPy-user] numpy netcdf reader In-Reply-To: References: Message-ID: <391FB6AF-CC98-4EB6-8D72-4F1AD130A20C@noc.soton.ac.uk> On 20 Jul 2006, at 17:36, Rob Hetland wrote: > > Konrad & scipy-users, > > I have made some changes to the scipy.sandbox.netcdf package so > that it installs correctly. (I am a total hack at distutils, but > it seems to do the right thing..). This could also be installed as > a stand alone package, since it does not actually depend on scipy > for anything. > > I have tested this module on my Mac, a Redhat distribution, and a > Win box (using cygwin netcdf libraries), and all seem to work fine. Thanks for doing this. It's very useful. It works fine for me on our Linux Redhat 64-bit setup, though haven't needed to write 1 character strings. George Nurser. From bpederse at gmail.com Thu Aug 17 16:34:44 2006 From: bpederse at gmail.com (Brent Pedersen) Date: Thu, 17 Aug 2006 13:34:44 -0700 Subject: [SciPy-user] hough transform again Message-ID: hi, i found this in recent archives and the script is useful. http://projects.scipy.org/pipermail/scipy-user/2006-August/008841.html has anyone written the code to go from the hough transform back to the image with the lines/edges enhanced or with non-lines removed? it's bending my mind a bit so if someone's already done it, i'd be glad of it--or any pointers. thanks. -brent [please include my email in the reply, i've subscribed to scipy-users, but not sure if it went through yet] -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephenemslie at gmail.com Thu Aug 17 16:51:44 2006 From: stephenemslie at gmail.com (stephen emslie) Date: Thu, 17 Aug 2006 21:51:44 +0100 Subject: [SciPy-user] hough transform again In-Reply-To: References: Message-ID: <51f97e530608171351q2462984av5bc767192e773479@mail.gmail.com> I'm busy doing just that at the moment (in fact literally right now), and I'll be happy to post any results here. My understanding is that you'll need to search the output of the hough transform for cells with a high count as those will be the most likely to correspond to lines in the main image. 
The output is a matrix relating combinations of rho and theta to the number of feature points that that line passes through - so the combinations of rho and theta with that go through the most feature points will be the strongest lines. Or something like that - I'm also really new to this stuff so I'd be happy to be corrected by someone that knows more. The second to last part of this document is a good read on the hough transform: http://homepages.inf.ed.ac.uk/rbf/BOOKS/VERNON/Chap006.pdf Stephen On 8/17/06, Brent Pedersen wrote: > > hi, i found this in recent archives and the script is useful. > http://projects.scipy.org/pipermail/scipy-user/2006-August/008841.html > > has anyone written the code to go from the hough transform back to the > image with the lines/edges enhanced or with non-lines removed? it's bending > my mind a bit so if someone's already done it, i'd be glad of it--or any > pointers. > thanks. > -brent > [please include my email in the reply, i've subscribed to scipy-users, but > not sure if it went through yet] > > > > > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bpederse at gmail.com Thu Aug 17 16:56:55 2006 From: bpederse at gmail.com (Brent Pedersen) Date: Thu, 17 Aug 2006 13:56:55 -0700 Subject: [SciPy-user] hough transform again In-Reply-To: <51f97e530608171351q2462984av5bc767192e773479@mail.gmail.com> References: <51f97e530608171351q2462984av5bc767192e773479@mail.gmail.com> Message-ID: right, extending the example at the bottom of the houghtf.py, i do: import scipy as S largevals = S.where(out + delta > max(out.flatten())); largevals = N.array(zip(ss[0],ss[1])) which gives an array of r,thetas that are within delta of the maximum. now, to find img coordinates that match those values... On 8/17/06, stephen emslie wrote: > > I'm busy doing just that at the moment (in fact literally right now), and > I'll be happy to post any results here. > > My understanding is that you'll need to search the output of the hough > transform for cells with a high count as those will be the most likely to > correspond to lines in the main image. The output is a matrix relating > combinations of rho and theta to the number of feature points that that line > passes through - so the combinations of rho and theta with that go through > the most feature points will be the strongest lines. > > Or something like that - I'm also really new to this stuff so I'd be happy > to be corrected by someone that knows more. > > The second to last part of this document is a good read on the hough > transform: http://homepages.inf.ed.ac.uk/rbf/BOOKS/VERNON/Chap006.pdf > > Stephen > > > On 8/17/06, Brent Pedersen wrote: > > > hi, i found this in recent archives and the script is useful. > http://projects.scipy.org/pipermail/scipy-user/2006-August/008841.html > > has anyone written the code to go from the hough transform back to the > image with the lines/edges enhanced or with non-lines removed? it's bending > my mind a bit so if someone's already done it, i'd be glad of it--or any > pointers. > thanks. 
> -brent > [please include my email in the reply, i've subscribed to scipy-users, but > not sure if it went through yet] > > > > > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant.travis at ieee.org Thu Aug 17 18:09:33 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 17 Aug 2006 15:09:33 -0700 Subject: [SciPy-user] numpy netcdf reader In-Reply-To: <391FB6AF-CC98-4EB6-8D72-4F1AD130A20C@noc.soton.ac.uk> References: <391FB6AF-CC98-4EB6-8D72-4F1AD130A20C@noc.soton.ac.uk> Message-ID: <44E4E91D.2030301@ieee.org> George Nurser wrote: > On 20 Jul 2006, at 17:36, Rob Hetland wrote: > > >> Konrad & scipy-users, >> >> I have made some changes to the scipy.sandbox.netcdf package so >> that it installs correctly. (I am a total hack at distutils, but >> it seems to do the right thing..). This could also be installed as >> a stand alone package, since it does not actually depend on scipy >> for anything. >> >> I have tested this module on my Mac, a Redhat distribution, and a >> Win box (using cygwin netcdf libraries), and all seem to work fine. >> > > > > Thanks for doing this. It's very useful. It works fine for me on our > Linux Redhat 64-bit setup, though haven't needed to write 1 character > strings. > The netcdf 1-character string support actually prompted better backward-compatible support for them in NumPy. There is a version of pynetcdf (0.6) at pylab.sourceforge.net that works with character strings and NumPy. The one in scipy.sandbox is going to replace pynetcdf (but probably be replaced by another netcdf package). -Travis > George Nurser. > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From kbreger at science.uva.nl Fri Aug 18 01:56:29 2006 From: kbreger at science.uva.nl (Kfir Breger) Date: Fri, 18 Aug 2006 07:56:29 +0200 Subject: [SciPy-user] hough transform again In-Reply-To: References: <51f97e530608171351q2462984av5bc767192e773479@mail.gmail.com> Message-ID: <1BB7E91E-131F-4EA6-8EF5-67BCBC8A3011@science.uva.nl> I am working on a voxel coloring implementation with scipy and have used (in the beginning) hough transformation to detect lines the algroithm i used is quite simple 1. get the PIL library, which is quite good imo 2. use the build in filter function for edge detection imagename.filter(ImageFilter.FIND_EDGES) 3. make a scipy array from the image 4. to eliminate noise do a closing operation on the edge image 5. set a threshold on edge value (i.e pixel value > t is a point on the edge). I used a double threshold whith 2 runs 6. for each of the pixels found in 6 calculate the p value found for a discreat number of tetas (i used 360 values between 0 and pi) as in x cos tet + y sin tet = p where x,y and tet are know. and store for each pair of tet and p how often they come out 7. set a threshhold for the value to determine if its a real line or not ( i u sed max value found /2) and all p and tetas tuple with a total value bigger then this are your lines. I eventually dropped this in favour of radon transform as the results are better imo. 
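Going back from an accumulator peak to image pixels (the part Brent asked about) is the equation from step 6 solved the other way round. A rough sketch, assuming a 2-D greyscale array img and one (p, tet) pair, called (rho, theta) below, that survived the threshold of step 7; draw_line is a made-up helper, not anything from scipy or PIL:

import numpy as N

def draw_line(img, rho, theta):
    # mark the pixels satisfying x*cos(theta) + y*sin(theta) = rho,
    # with x as the column index and y as the row index
    out = img.copy()
    h, w = out.shape
    if abs(N.sin(theta)) > 0.5:
        # closer to horizontal: step through the columns, solve for the row
        x = N.arange(w)
        y = N.round((rho - x * N.cos(theta)) / N.sin(theta)).astype(int)
    else:
        # closer to vertical: step through the rows, solve for the column
        y = N.arange(h)
        x = N.round((rho - y * N.sin(theta)) / N.cos(theta)).astype(int)
    inside = (x >= 0) & (x < w) & (y >= 0) & (y < h)
    out[y[inside], x[inside]] = out.max()
    return out

Looping this over all surviving (p, tet) pairs gives the "lines enhanced" image; masking the original with those pixels gives the "non-lines removed" one.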
Kfir Breger On Aug 17, 2006, at 10:56 PM, Brent Pedersen wrote: > right, extending the example at the bottom of the houghtf.py, > i do: > > import scipy as S > largevals = S.where(out + delta > max(out.flatten())); > largevals = N.array(zip(ss[0],ss[1])) > > which gives an array of r,thetas that are within delta of the > maximum. now, to find img coordinates that match those values... > > > On 8/17/06, stephen emslie wrote: > I'm busy doing just that at the moment (in fact literally right > now), and I'll be happy to post any results here. > > My understanding is that you'll need to search the output of the > hough transform for cells with a high count as those will be the > most likely to correspond to lines in the main image. The output is > a matrix relating combinations of rho and theta to the number of > feature points that that line passes through - so the combinations > of rho and theta with that go through the most feature points will > be the strongest lines. > > Or something like that - I'm also really new to this stuff so I'd > be happy to be corrected by someone that knows more. > > The second to last part of this document is a good read on the > hough transform: http://homepages.inf.ed.ac.uk/rbf/BOOKS/VERNON/ > Chap006.pdf > > Stephen > > > On 8/17/06, Brent Pedersen wrote: > hi, i found this in recent archives and the script is useful. > http://projects.scipy.org/pipermail/scipy-user/2006-August/008841.html > > has anyone written the code to go from the hough transform back to > the image with the lines/edges enhanced or with non-lines removed? > it's bending my mind a bit so if someone's already done it, i'd be > glad of it--or any pointers. > thanks. > -brent > [please include my email in the reply, i've subscribed to scipy- > users, but not sure if it went through yet] > > > > > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From josh.p.marshall at gmail.com Fri Aug 18 02:15:19 2006 From: josh.p.marshall at gmail.com (Josh Marshall) Date: Fri, 18 Aug 2006 16:15:19 +1000 Subject: [SciPy-user] hough transform again (stephen emslie) In-Reply-To: References: Message-ID: <6966B09A-4C02-463E-9BC1-115100376DD0@gmail.com> I have some code for working with Hough transforms and their inverses. I haven't been following this thread very closely, so hopefully this is useful to someone. It hasn't been tested in isolation, but it does work fine embedded in my application. Try: import * from Hough ht = HoughTransformFromFilename("my_test_image.jpg") and then: showPiccy(ht) Note that this is *not* the standard Hough transform that operates on binary thresholded images. Rather, this an alternate "Hough" transform as described in the Shapiro & Stockman book. It takes the grayscale image and calculates the gradient at every pixel, discards those with a small magnitude and adds the rest to the Hough transform array. However, the inverse functions should be easily modifiable to be used with the real standard Hough transform. I like this form better because I am working with noisy images, which give useless results with the standard version. Cheers, Josh -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Hough.py Type: text/x-python-script Size: 13435 bytes Desc: not available URL: -------------- next part -------------- From maik.troemel at maitro.net Fri Aug 18 04:57:29 2006 From: maik.troemel at maitro.net (=?ISO-8859-1?Q?Maik_Tr=F6mel?=) Date: Fri, 18 Aug 2006 10:57:29 +0200 Subject: [SciPy-user] Matrix Operation with index dependencies Message-ID: <44E580F9.7030402@maitro.net> Hello list! I've got a question concerning matrix operations. I'd like to fill a numpy-array (A) with values. for j in range(maxj): for i in range(maxi): y = i / 4 x = j / 4 index = (j - j / 4 * 4) * 4 + (i - i / 4 * 4) if index == xyz: C = C1 elif index == xyz +1: C = C2 elif ....... ....... A[j,i] = sum( sum( B[ x-r:x+2 , y-s:y+2 ] * C ) ) C, s, r are depending on index; x and y are depending on i and j. The problem is that it takes to long this way. Is it possible to do this without iterarting with numpy? Thanks for your help! Greetings Maik From stephenemslie at gmail.com Fri Aug 18 06:07:12 2006 From: stephenemslie at gmail.com (stephen emslie) Date: Fri, 18 Aug 2006 11:07:12 +0100 Subject: [SciPy-user] hough transform again (stephen emslie) In-Reply-To: <6966B09A-4C02-463E-9BC1-115100376DD0@gmail.com> References: <6966B09A-4C02-463E-9BC1-115100376DD0@gmail.com> Message-ID: <51f97e530608180307x278c59f4u6c25c696f41905d1@mail.gmail.com> > Note that this is *not* the standard Hough transform that operates on > binary thresholded images. Rather, this an alternate "Hough" > transform as described in the Shapiro & Stockman book. It takes the > grayscale image and calculates the gradient at every pixel, discards > those with a small magnitude and adds the rest to the Hough transform > array. Is that similar to the sobel filter? -------------- next part -------------- An HTML attachment was scrubbed... URL: From hetland at tamu.edu Fri Aug 18 09:52:10 2006 From: hetland at tamu.edu (Rob Hetland) Date: Fri, 18 Aug 2006 08:52:10 -0500 Subject: [SciPy-user] numpy netcdf reader In-Reply-To: <44E4E91D.2030301@ieee.org> References: <391FB6AF-CC98-4EB6-8D72-4F1AD130A20C@noc.soton.ac.uk> <44E4E91D.2030301@ieee.org> Message-ID: <9B700358-280A-4B5C-9C25-442B9F124156@tamu.edu> Actually, the netcdf package I use is by Jeff Whitaker, found here: http://www.cdc.noaa.gov/people/jeffrey.s.whitaker/python/netCDF4.html The reason I was looking for other options is that this one relies on netcdf4 (which is alpha), and hdf5. I was hoping there would be an easier tool to use that would work with people's pre-existing netcdf libraries (most likely netcdf3). Whitaker's package *does* work with netcdf3 files -- reading is transparent, and writing can be done in 'CLASSIC' mode. However, there are arguments to be made about the simplicity of installing a package that uses the libraries you have. Soon, the community will switch to netcdf4, since there are big advantages in doing so (like groups in a single netcdf file for ensemble model runs). I hope that this package can someday become part of the standard scipy install, but perhaps it is too soon now. What I would really like to see is a _completely_ unified API between Whitaker's package, for those who want it, and a different netcdf3 package, for those who don't. Then you could just change your import statement, and all of your programs would go on working. 
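The read side that most scripts depend on already looks the same under the Scientific.IO.NetCDF-style interface that Konrad's code (and hence the sandbox package) follows, as far as I can tell; something like the following, with the file and variable names invented here:

from Scientific.IO.NetCDF import NetCDFFile   # or the sandbox/pynetcdf equivalent of this import

f = NetCDFFile('model_output.nc', 'r')
temp = f.variables['temp']     # a netCDF variable object
data = temp[:]                 # read the whole variable into an array
print temp.dimensions          # names of the dimensions it is defined on
f.close()

If Whitaker's netcdf4 package and a netcdf3 package both exposed exactly this, switching between them really would be just the import line.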
-Rob On Aug 17, 2006, at 5:09 PM, Travis Oliphant wrote: > George Nurser wrote: >> On 20 Jul 2006, at 17:36, Rob Hetland wrote: >> >> >>> Konrad & scipy-users, >>> >>> I have made some changes to the scipy.sandbox.netcdf package so >>> that it installs correctly. (I am a total hack at distutils, but >>> it seems to do the right thing..). This could also be installed as >>> a stand alone package, since it does not actually depend on scipy >>> for anything. >>> >>> I have tested this module on my Mac, a Redhat distribution, and a >>> Win box (using cygwin netcdf libraries), and all seem to work fine. >>> >> >> >> >> Thanks for doing this. It's very useful. It works fine for me on our >> Linux Redhat 64-bit setup, though haven't needed to write 1 character >> strings. >> > The netcdf 1-character string support actually prompted better > backward-compatible support for them in NumPy. There is a version of > pynetcdf (0.6) at pylab.sourceforge.net that works with character > strings and NumPy. The one in scipy.sandbox is going to replace > pynetcdf (but probably be replaced by another netcdf package). > > > -Travis > > >> George Nurser. >> >> >> >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user ---- Rob Hetland, Assistant Professor Dept. of Oceanography, Texas A&M University http://pong.tamue.edu/~rob phone: 979-458-0096, fax: 979-845-6331 -------------- next part -------------- An HTML attachment was scrubbed... URL: From jelle.feringa at ezct.net Fri Aug 18 11:24:47 2006 From: jelle.feringa at ezct.net (Jelle Feringa / EZCT Architecture & Design Research) Date: Fri, 18 Aug 2006 17:24:47 +0200 Subject: [SciPy-user] scipy.odeint question Message-ID: <007001c6c2da$70f855e0$0201a8c0@JELLE> I have a question on how/if I could use scipy.odeint for the following; I have a function that approximates a given distance on a nurbs curve. The algorithm is rather crude, so I'd like to generalize and make it more efficient. Scipy.odeint could be a suitable function to do so, but I'm having a hard time implementing it. The function I'd like to integrate with scip.odeint is the following: while 1: print 'approxLengthPLUS',approxLength if abs(distance-approxLength) < tolerance: break if approxLength < distance: tangent = vec3(RS.PointCoordinates(pointIdPLUS)) + vec3(crv.tangent(pointIdPLUS)[2]).normalize() * (1.5*vectorScale) closestPointIdPLUS = crv.closestPoint(tangent) closestCoord = crv.evaluateCrv(closestPointIdPLUS) RS.PointCoordinates(pointIdPLUS, closestCoord) else : tangent = vec3(RS.PointCoordinates(pointIdPLUS)) + vec3(crv.tangent(pointIdMIN)[2]).normalize() * (-0.75*vectorScale) closestPointIdPLUS = crv.closestPoint(tangent) closestCoord = crv.evaluateCrv(closestPointIdPLUS) RS.PointCoordinates(pointIdPLUS, closestCoord) n+=1 approxLength = crv.length(subDomain=[crvParameter, closestPointIdPLUS]) vectorScale = approxLength*(tolerance/n) Which basically scans a nurbs curve back & forth until a point is located within the tolerance. The method crv.closestPoint(tangent) returns the closest parameter of the current point The method crv.length(subDomain=[crvParameter, closestPointIdPLUS]) returns the distance from the starting point, to the point currently evaluated. 
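Put differently, the loop is solving crv.length(subDomain=[crvParameter, t]) == distance for the unknown parameter t, so a scalar root finder may be a better fit than odeint. An untested sketch in terms of the methods above, where domainEnd stands for whatever the curve reports as its last parameter value (and assuming the curve is at least 'distance' long over that bracket, so the error changes sign):

from scipy.optimize import brentq

def lengthError(t):
    # arc length from the starting parameter up to t, minus the target distance
    return crv.length(subDomain=[crvParameter, t]) - distance

tTarget = brentq(lengthError, crvParameter, domainEnd)
closestCoord = crv.evaluateCrv(tTarget)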
Any pointers in how I could integrate such a function with scipy are greatly appreciated! Cheers, -jelle -------------- next part -------------- An HTML attachment was scrubbed... URL: From riccardocorradini at yahoo.it Mon Aug 21 04:47:30 2006 From: riccardocorradini at yahoo.it (Riccardo Corradini) Date: Mon, 21 Aug 2006 10:47:30 +0200 (CEST) Subject: [SciPy-user] p-values of simple regressions Message-ID: <20060821084730.76290.qmail@web25509.mail.ukl.yahoo.com> Dear scipy users, I would like to compute the p-values to accept of reject the null hypothesis of a simple ordinary least square regression. Which is the scipy's function for the t of student(and p-values)? Could somebody please give me an example? Thanks a lot in advance Respectfully Riccardo Chiacchiera con i tuoi amici in tempo reale! http://it.yahoo.com/mail_it/foot/*http://it.messenger.yahoo.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From cimrman3 at ntc.zcu.cz Mon Aug 21 06:31:42 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 21 Aug 2006 12:31:42 +0200 Subject: [SciPy-user] Sparse matrices and weave.inline, f2py or Pyrex In-Reply-To: <8b3894bc0608120450w7a6212b0ta5deb7005a154109@mail.gmail.com> References: <8b3894bc0608110704u4c22b196kfc97d493b9d69e4e@mail.gmail.com> <8b3894bc0608120450w7a6212b0ta5deb7005a154109@mail.gmail.com> Message-ID: <44E98B8E.6030708@ntc.zcu.cz> William Hunter wrote: > Yes, well, my best attempt at it. You'll notice that the sparse > matrices doesn't support fancy indexing, unfortunately. (I'm hoping > someone can prove me wrong :-) > Here's some code you can fiddle with (also attached - > sparsefiddle.py), if you want. Hi William, I assume you wish to assemble a large sparse matrix from small dense matrices (finite element method?) I use for this a function written in C, wrapped by SWIG. So if you are interested... r. From willemjagter at gmail.com Mon Aug 21 08:36:20 2006 From: willemjagter at gmail.com (William Hunter) Date: Mon, 21 Aug 2006 14:36:20 +0200 Subject: [SciPy-user] Sparse matrices and weave.inline, f2py or Pyrex In-Reply-To: <44E98B8E.6030708@ntc.zcu.cz> References: <8b3894bc0608110704u4c22b196kfc97d493b9d69e4e@mail.gmail.com> <8b3894bc0608120450w7a6212b0ta5deb7005a154109@mail.gmail.com> <44E98B8E.6030708@ntc.zcu.cz> Message-ID: <8b3894bc0608210536yd07a4c8pbcc1d8751fbbf870@mail.gmail.com> HI Robert; Your assumption is correct, that's exactly how I assemble the big sparse matrix, and yes, I am very definitely interested! Without a doubt. Thanks, William On 21/08/06, Robert Cimrman wrote: > William Hunter wrote: > > Yes, well, my best attempt at it. You'll notice that the sparse > > matrices doesn't support fancy indexing, unfortunately. (I'm hoping > > someone can prove me wrong :-) > > Here's some code you can fiddle with (also attached - > > sparsefiddle.py), if you want. > > Hi William, > > I assume you wish to assemble a large sparse matrix from small dense > matrices (finite element method?) I use for this a function written in > C, wrapped by SWIG. So if you are interested... > > r. 
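[A minimal sketch of the kind of assembly being discussed here: building one
large sparse matrix from many small dense element matrices by accumulating
coordinate (COO) triplets, rather than using the SWIG-wrapped C routine
Robert mentions. The 4x4 element matrix, the connectivity list and the
matrix size are invented for illustration, and the constructor call follows
later scipy.sparse releases, so details may differ in the 0.5-era module.]

    from numpy import array, ones
    from scipy import sparse

    n_dof = 8                                   # hypothetical problem size
    Ke = ones((4, 4))                           # stand-in 4x4 element matrix
    connectivity = [array([0, 1, 2, 3]),        # hypothetical element-to-dof maps
                    array([2, 3, 4, 5]),
                    array([4, 5, 6, 7])]

    rows, cols, vals = [], [], []
    for conn in connectivity:
        for i in range(4):
            for j in range(4):
                rows.append(conn[i])
                cols.append(conn[j])
                vals.append(Ke[i, j])

    # Duplicate (row, col) pairs are summed when converting to CSR,
    # which is exactly what finite element assembly needs.
    K = sparse.coo_matrix((vals, (rows, cols)), shape=(n_dof, n_dof)).tocsr()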
> _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From willits at wisc.edu Mon Aug 21 15:57:11 2006 From: willits at wisc.edu (Jon Willits) Date: Mon, 21 Aug 2006 14:57:11 -0500 Subject: [SciPy-user] import linsolve.umfpack failure Message-ID: I have installed scipy on an Intel Mac (running python 2.4.3) using the ScipySuperpack-Intel-10.4.tar. Everything seems to have installed ok, except that when I import scipy, I get this message: import linsolve.umfpack -> failed: Failure linking new module: / Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site- packages/scipy/sparse/sparsetools.so: Library not loaded: /usr/local/ lib/libgfortran.1.dylib Referenced from: /Library/Frameworks/Python.framework/Versions/2.4/ lib/python2.4/site-packages/scipy/sparse/sparsetools.so Reason: image not found I noticed an earlier posting about this, with a message that said something like "oops, bad linking" but without any advice on how to fix it. My programs run fine (they do not rely on the sparse libraries), but always produce that error message, which will not be good for all kinds of reasons. Is there a way to fix this, or at a minimum, to suppress this error message? Thanks, Jon Willits -------------- next part -------------- An HTML attachment was scrubbed... URL: From fredantispam at free.fr Mon Aug 21 19:08:06 2006 From: fredantispam at free.fr (fred) Date: Tue, 22 Aug 2006 01:08:06 +0200 Subject: [SciPy-user] find roots for spherical bessel functions... In-Reply-To: References: <44E39653.4080608@free.fr> Message-ID: <44EA3CD6.5050905@free.fr> A. M. Archibald a ?crit : > On 16/08/06, fred wrote: > > >>I'm trying to find out the first n roots of the first m spherical bessel >>functions Jn(r) >>(and for the derivative of (r*Jn(r))), for 1 > > This is only about 20,000 values, so I recommend precomputing a table > (which will be slow, but only necessary once). Probably the tables > exist somewhere, but whether they're freely available (can you > copyright those numbers?) in digital form... perhaps you should > publish them (and the code, so we can check!) when you succeed? Ok, so I have found a first solution, I call "recursive", because the intervals [a,b] of Jn depend of zeros of Jn-1 (thanks to you and A&S ;-) This is useful when you want to find out _all_ zeros for 1= 30 & n >= 15, one can see that the curves begin to be quite "noisy". I don't understand/know why. Any idea ? 2) I have defined Jn as spherical Bessel, because: a) You can't display it in an easy way, knowing that sph_jn() does not accept array as arg ; b) I can't get solving sph_jn = 0, since brentq/fsolve can only accept n as a second argument. But in sph_jn(), n is the first argument (this is what I have understood). Feedbacks & comments are welcome. 
from scipy import arange, pi, sqrt, zeros
from scipy.special import jv, jvp
from scipy.optimize import brentq
from sys import argv
from pylab import *

def Jn(r,n):
    return (sqrt(pi/(2*r))*jv(n+0.5,r))

def Jn_zeros(n,nt):
    zerosj = zeros((n+1, nt))
    zerosj[0] = arange(1,nt+1)*pi
    points = arange(1,nt+n+1)*pi
    racines = zeros(nt+n)
    for i in range(1,n+1):
        for j in range(nt+n-i):
            foo = brentq(Jn, points[j], points[j+1], (i,))
            racines[j] = foo
        points = racines
        zerosj[i][:nt] = racines[:nt]
    return (zerosj)

def rJnp(r,n):
    return (0.5*sqrt(pi/(2*r))*jv(n+0.5,r) + sqrt(pi*r/2)*jvp(n+0.5,r))

def rJnp_zeros(n,nt):
    zerosj = zeros((n+1, nt))
    zerosj[0] = (2.*arange(1,nt+1)-1)*pi/2
    points = (2.*arange(1,nt+n+1)-1)*pi/2
    racines = zeros(nt+n)
    for i in range(1,n+1):
        for j in range(nt+n-i):
            foo = brentq(rJnp, points[j], points[j+1], (i,))
            racines[j] = foo
        points = racines
        zerosj[i][:nt] = racines[:nt]
    return (zerosj)

n = int(argv[1])
nt = int(argv[2])

dr = 0.01

jnz = Jn_zeros(n,nt)[n]
r1 = arange(0,jnz[len(jnz)-1],dr)

jnzp = rJnp_zeros(n,nt)[n]
r2 = arange(0,jnzp[len(jnzp)-1],dr)

grid(True)
plot(r1,Jn(r1,n),'b', jnz,zeros(len(jnz)),'bo',
     r2,rJnp(r2,n),'r', jnzp,zeros(len(jnzp)),'rd')
legend((r'$j_{'+str(n)+'}(r)$','',
        r'$(rj_{'+str(n)+'}(r))\'$',''))
show()

Cheers,

-- 
Fred.

From ryanlists at gmail.com Mon Aug 21 22:10:23 2006
From: ryanlists at gmail.com (Ryan Krauss)
Date: Mon, 21 Aug 2006 21:10:23 -0500
Subject: [SciPy-user] vectorized atan
Message-ID: 

I know this is a dumb question, but it seems like scipy must have a
vectorized atan and atan2 somewhere. I can't seem to find it though
and am using list comprehensions on the functions from the math
module. Am I missing something?

Ryan

From wbaxter at gmail.com Mon Aug 21 23:08:33 2006
From: wbaxter at gmail.com (Bill Baxter)
Date: Tue, 22 Aug 2006 12:08:33 +0900
Subject: [SciPy-user] vectorized atan
In-Reply-To: 
References: 
Message-ID: 

numpy.arctan
numpy.arctan2

--bb

On 8/22/06, Ryan Krauss wrote:
> I know this is a dumb question, but it seems like scipy must have a
> vectorized atan and atan2 somewhere. I can't seem to find it though
> and am using list comprehensions on the functions from the math
> module. Am I missing something?
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ganesh.iitm at gmail.com Tue Aug 22 00:25:59 2006
From: ganesh.iitm at gmail.com (Ganesh V)
Date: Tue, 22 Aug 2006 09:55:59 +0530
Subject: [SciPy-user] Shooting method in scipy
In-Reply-To: <44E0267C.70109@iam.uni-stuttgart.de>
References: <44DF3638.8080402@iam.uni-stuttgart.de>
	<20060813190907.GJ29149@mentat.za.net>
	<1e2af89e0608131222p3a0917fm54960ff764540091@mail.gmail.com>
	<44E0267C.70109@iam.uni-stuttgart.de>
Message-ID: 

Hi!

The main steps of the shooting method are
>
> - Transform the given boundary value problem into an initial value
> problem with estimated parameters
> - Adjust the parameters iteratively to reproduce the given boundary values
>

I remember this..
http://www.cambridge.org/catalogue/catalogue.asp?isbn=0521852870&ss=res
This is the book.. Numerical Methods in Engineering with Python from the
Cambridge Univ Press. This book has a chapter on solving Two point boundary
value problems. But since the book is meant to teach Numerical methods, he
has developed all the necessary machinery including ODEint and Newton
Raphson iteration all by himself using Numpy alone and uses those functions
to write the code for solving BVP's. A similar thing will have to be done
using Scipy's packages now I guess..
or are we expecting a code fully written in C, C++, FORTRAN ?? I could help if it were the former..not the latter. I have been wanting this for quite some time now !! Too basic a feature to be missing in the Scipy !! Ganesh -- Ganesh V Undergraduate student, Aerospace Engineering, IIT Madras, Chennai-36. My homepage --> http://www.ae.iitm.ac.in/~ae03b007 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryanlists at gmail.com Tue Aug 22 09:50:03 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 22 Aug 2006 08:50:03 -0500 Subject: [SciPy-user] vectorized atan In-Reply-To: References: Message-ID: Thanks Bill. I was looking for atan. Somehow arc didn't occur to me. Ryan On 8/21/06, Bill Baxter wrote: > numpy.arctan > numpy.arctan2 > --bb > > > On 8/22/06, Ryan Krauss wrote: > > I know this is a dumb question, but it seems like scipy must have a > > vectorized atan and atan2 somewhere. I can't seem to find it though > > and am using list comprehensions on the functions from the math > > module. Am I missing something? > > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > From stephenemslie at gmail.com Tue Aug 22 10:57:35 2006 From: stephenemslie at gmail.com (stephen emslie) Date: Tue, 22 Aug 2006 15:57:35 +0100 Subject: [SciPy-user] hough transform again In-Reply-To: References: <51f97e530608171351q2462984av5bc767192e773479@mail.gmail.com> <51f97e530608211542g2d5c7db0vaf41b3c19187d96@mail.gmail.com> <51f97e530608220659r3a978c2r8d5776f7fd6adafd@mail.gmail.com> Message-ID: <51f97e530608220757h6d41aad6m8e9153f50aec3b20@mail.gmail.com> It could be that Stefan's hough is fast because it uses only one for loop, and lots of array operations, which I believe are fast. Speed is definitely going to be an issue for me, as I'm going to need to be able to process a video feed in real-time. But I'd like to try and get things working with scipy and radon to start, and then look at speeding things up. Thanks for the link. btw, hope you dont mind if I send this to the list too. Perhaps someone will be able to give some more insight into hough, radon, inverse radon, speed, etc. Stephen On 8/22/06, Brent Pedersen wrote: > > hi again, this page makes the inverse radon seem fairly do-able: > http://www.owlnet.rice.edu/~elec431/projects96/DSP/bpanalysis.html > i'm going to try to implement. > at least all the tools are in sci/num-py, just have to put them > together... > i still wonder why the stefan's hough is so much faster than scipy's > radon, i guess the imrotate in radon must be pretty expensive. > > -b > > > On 8/22/06, stephen emslie wrote: > > > > > > also, have you seen radon() in scipy.misc ? > > > if only there was an inverse_radon(). > > > > > > > > > I'm taking a look at it now. The Radeon transform looks like it might be > > more effective than the Hough transform. Even so, I think it would be nice > > to have a Hough transform in scipy. Of course there's still that problem of > > the inverse transform as you say. It would definitely be handy to have! > > > > Stephen > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gpajer at rider.edu Tue Aug 22 11:21:07 2006 From: gpajer at rider.edu (Gary Pajer) Date: Tue, 22 Aug 2006 11:21:07 -0400 Subject: [SciPy-user] OT(?): Lab hardware interface In-Reply-To: <44E3424F.4030708@gmail.com> References: <44E27D98.9060403@hoc.net> <44E29D83.5090309@gmail.com> <44E2D385.2090404@hoc.net> <44E3424F.4030708@gmail.com> Message-ID: <44EB20E3.9090902@rider.edu> Soon I'm going to need to interface my experiment to my computer. Can any one suggest any python tools that might exist? I've been googling for abour an hour, and haven't come up with anything. I can't afford LabView at the moment, and besides, I've never used it so it doesn't have the advantage of familiarity. I don't need a graphical environment. At first the equipment will be very simple: a couple of photodiodes, an A/D converter, perhaps a stepper motor controler. Any hints are welcome. If this strikes you as noise, please accept my apology. Thanks, gary From bhendrix at enthought.com Tue Aug 22 11:40:41 2006 From: bhendrix at enthought.com (Bryce Hendrix) Date: Tue, 22 Aug 2006 10:40:41 -0500 Subject: [SciPy-user] OT(?): Lab hardware interface In-Reply-To: <44EB20E3.9090902@rider.edu> References: <44E27D98.9060403@hoc.net> <44E29D83.5090309@gmail.com> <44E2D385.2090404@hoc.net> <44E3424F.4030708@gmail.com> <44EB20E3.9090902@rider.edu> Message-ID: <44EB2579.1080101@enthought.com> I'm not sure how much this helps- pyserial and pyparallel are packages for interfacing with hardware via the serial and parallel ports. If you use the Enthought disto of Python, it includes pyserial. Bryce Gary Pajer wrote: > Soon I'm going to need to interface my experiment to my computer. > > Can any one suggest any python tools that might exist? I've been > googling for abour an hour, and haven't come up with anything. > > I can't afford LabView at the moment, and besides, I've never used it > so it doesn't have the advantage of familiarity. I don't need a > graphical environment. At first the equipment will be very simple: a > couple of photodiodes, an A/D converter, perhaps a stepper motor controler. > > Any hints are welcome. > If this strikes you as noise, please accept my apology. > > Thanks, > gary > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From hasslerjc at adelphia.net Tue Aug 22 11:57:52 2006 From: hasslerjc at adelphia.net (John Hassler) Date: Tue, 22 Aug 2006 11:57:52 -0400 Subject: [SciPy-user] Shooting method in scipy In-Reply-To: References: <44DF3638.8080402@iam.uni-stuttgart.de> <20060813190907.GJ29149@mentat.za.net> <1e2af89e0608131222p3a0917fm54960ff764540091@mail.gmail.com> <44E0267C.70109@iam.uni-stuttgart.de> Message-ID: <44EB2980.9000901@adelphia.net> An HTML attachment was scrubbed... URL: From strawman at astraw.com Tue Aug 22 12:05:16 2006 From: strawman at astraw.com (Andrew Straw) Date: Tue, 22 Aug 2006 09:05:16 -0700 Subject: [SciPy-user] OT(?): Lab hardware interface In-Reply-To: <44EB20E3.9090902@rider.edu> References: <44E27D98.9060403@hoc.net> <44E29D83.5090309@gmail.com> <44E2D385.2090404@hoc.net> <44E3424F.4030708@gmail.com> <44EB20E3.9090902@rider.edu> Message-ID: <44EB2B3C.5010308@astraw.com> Hi Gary, There are a number of options, but they are hardware and/or OS dependent. On linux, the 700 pound gorilla is comedi. This supports lots of hardware and has a Python interface. 
(Last I checked, however, they did not support the nice and inexpensive Measurement Computing USB devices. I have some useful-for-me but not polished linux wrappers for these devices based on Warren Jasper's GPL-licensed C libraries.) On Windows, there are various Python wrappers for the manufacturer-provided C API for at least National Instruments (last I checked) and Measurement Computing (I "maintain" this one, called PyUniversalLibrary). Bryce is right that pyserial and pyparallel are great for serial and parallel, if that's all you need. Let me know if you need any more info. Cheers, Andrew Gary Pajer wrote: > Soon I'm going to need to interface my experiment to my computer. > > Can any one suggest any python tools that might exist? I've been > googling for abour an hour, and haven't come up with anything. > > I can't afford LabView at the moment, and besides, I've never used it > so it doesn't have the advantage of familiarity. I don't need a > graphical environment. At first the equipment will be very simple: a > couple of photodiodes, an A/D converter, perhaps a stepper motor controler. > > Any hints are welcome. > If this strikes you as noise, please accept my apology. > > Thanks, > gary > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From russel at appliedminds.net Tue Aug 22 12:06:21 2006 From: russel at appliedminds.net (Russel Howe) Date: Tue, 22 Aug 2006 09:06:21 -0700 Subject: [SciPy-user] OT(?): Lab hardware interface In-Reply-To: <44EB20E3.9090902@rider.edu> References: <44E27D98.9060403@hoc.net> <44E29D83.5090309@gmail.com> <44E2D385.2090404@hoc.net> <44E3424F.4030708@gmail.com> <44EB20E3.9090902@rider.edu> Message-ID: I have used these under OS X and linux (ctypes used to access their libraries). I have used them for UI mockups, so I am not sure how well they can be calibrated for laboratory use, but if you don't need super accuracy they would probably work. http://www.phidgets.com/ On Aug 22, 2006, at 8:21 AM, Gary Pajer wrote: > Soon I'm going to need to interface my experiment to my computer. > > Can any one suggest any python tools that might exist? I've been > googling for abour an hour, and haven't come up with anything. > > I can't afford LabView at the moment, and besides, I've never > used it > so it doesn't have the advantage of familiarity. I don't need a > graphical environment. At first the equipment will be very simple: a > couple of photodiodes, an A/D converter, perhaps a stepper motor > controler. > > Any hints are welcome. > If this strikes you as noise, please accept my apology. > > Thanks, > gary > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From hasslerjc at adelphia.net Tue Aug 22 12:07:25 2006 From: hasslerjc at adelphia.net (John Hassler) Date: Tue, 22 Aug 2006 12:07:25 -0400 Subject: [SciPy-user] OT(?): Lab hardware interface In-Reply-To: <44EB20E3.9090902@rider.edu> References: <44E27D98.9060403@hoc.net> <44E29D83.5090309@gmail.com> <44E2D385.2090404@hoc.net> <44E3424F.4030708@gmail.com> <44EB20E3.9090902@rider.edu> Message-ID: <44EB2BBD.1050809@adelphia.net> Do you already have the a/d board? If so, what is it? By some strange coincidence, I've just been working with exactly that problem - three times. 
In one, I used an ancient, obsolete ISA board on an ancient, obsolete computer with Linux, comedi, swig, and Python. For the other two, I had windows .dll files, and was able to use c_types with Python to interface directly. One was Ethernet, the other was USB. For the USB device (Measurement Computing USB1208) the first part looks like: from ctypes import * # Load the cbw32.DLL file cbw = windll.LoadLibrary('c:\\MCC\\cbw32.dll') # Define the appropriate c_types RevNum = c_float(0) VxDRevNum = c_float(0) BoardNum = c_int(0) .... and so on. Then: err = cbw.cbAIn (BoardNum, Chan, Gain, byref(DataValue)) if err != 0: print "a/d error ", DataValue, err ... and the like. Really very very easy. I love Python. I can give you more details, if you're interested. john Gary Pajer wrote: > Soon I'm going to need to interface my experiment to my computer. > > Can any one suggest any python tools that might exist? I've been > googling for abour an hour, and haven't come up with anything. > > I can't afford LabView at the moment, and besides, I've never used it > so it doesn't have the advantage of familiarity. I don't need a > graphical environment. At first the equipment will be very simple: a > couple of photodiodes, an A/D converter, perhaps a stepper motor controler. > > Any hints are welcome. > If this strikes you as noise, please accept my apology. > > Thanks, > gary > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From strawman at astraw.com Tue Aug 22 12:22:00 2006 From: strawman at astraw.com (Andrew Straw) Date: Tue, 22 Aug 2006 09:22:00 -0700 Subject: [SciPy-user] OT(?): Lab hardware interface In-Reply-To: <44EB2BBD.1050809@adelphia.net> References: <44E27D98.9060403@hoc.net> <44E29D83.5090309@gmail.com> <44E2D385.2090404@hoc.net> <44E3424F.4030708@gmail.com> <44EB20E3.9090902@rider.edu> <44EB2BBD.1050809@adelphia.net> Message-ID: <44EB2F28.9020408@astraw.com> John Hassler wrote: > For the USB device (Measurement Computing USB1208) the first part looks > like: > from ctypes import * > Dear John, I would like to make the next version of PyUniversalLibrary use ctypes (instead of pyrex). Due to time constraints, this is not on my immediate-to-do list, so I have no idea when, if ever, it will happen. If you ever get around to the attempt to make a complete API wrapper for UniversalLibrary, please contact me so we can hopefully collaborate. I will do the same. Cheers! Andrew From stefan at sun.ac.za Tue Aug 22 13:08:46 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Tue, 22 Aug 2006 19:08:46 +0200 Subject: [SciPy-user] hough transform again In-Reply-To: <51f97e530608220757h6d41aad6m8e9153f50aec3b20@mail.gmail.com> References: <51f97e530608171351q2462984av5bc767192e773479@mail.gmail.com> <51f97e530608211542g2d5c7db0vaf41b3c19187d96@mail.gmail.com> <51f97e530608220659r3a978c2r8d5776f7fd6adafd@mail.gmail.com> <51f97e530608220757h6d41aad6m8e9153f50aec3b20@mail.gmail.com> Message-ID: <20060822170845.GA31504@mentat.za.net> On Tue, Aug 22, 2006 at 03:57:35PM +0100, stephen emslie wrote: > It could be that Stefan's hough is fast because it uses only one for loop, and > lots of array operations, which I believe are fast. > > Speed is definitely going to be an issue for me, as I'm going to need to be > able to process a video feed in real-time. But I'd like to try and get things > working with scipy and radon to start, and then look at speeding things up. 
> > Thanks for the link. > btw, hope you dont mind if I send this to the list too. Perhaps someone will be > able to give some more insight into hough, radon, inverse radon, speed, etc. > > Stephen > > On 8/22/06, Brent Pedersen wrote: > > hi again, this page makes the inverse radon seem fairly do-able: > http://www.owlnet.rice.edu/~elec431/projects96/DSP/bpanalysis.html > i'm going to try to implement. > at least all the tools are in sci/num-py, just have to put them together... > i still wonder why the stefan's hough is so much faster than scipy's radon, > i guess the imrotate in radon must be pretty expensive. Certainly. Here is the code snippet from Lib/misc/pilutil.py: def radon(arr,theta=None): if theta is None: theta = mgrid[0:180] s = zeros((arr.shape[1],len(theta)), float) k = 0 for th in theta: im = imrotate(arr,-th) s[:,k] = sum(im,axis=0) k += 1 return s There are more accurate and faster ways of calculating the Radon transform (but I must add that this snippet demonstrates the idea behind the transform very well!). It can, for example, simply be done in the Fourier domain, as shown in B.R. Ramesh, N. Srinivasa, K. Rajgopal, "An Algorithm for Computing the Discrete Radon Transform With Some Applications", Proceedings of the Fourth IEEE Region 10 International Conference, TENCON '89, 1989. and, more recently, with the Fast Radon Transform as described in B. Kelley, V. K. Madisetti, "The Fast Discrete Radon Transform- I: Theory", IEEE Trans. on Image Processing, July 1993. Both papers are on the internet and can be found using scholar.google.com. Beylkin's original article is harder to find: Beylkin, G., "Discrete Radon Transform", IEEE Transactions on Acoustics, Speech and Signal Processing, vol 35, no. 2, 1987, pp 162 - 172. Or, if you *really* want to get to the root of things, read J. Radon, "Uber die Bestimmung von Funktionen durch ihre Integralwerte l?ngs gewisser Manigfaltigkeiten", Ber. Ver. S?chs. Akad. Wiss. Leipzig, Math-Phys. Kl., vol. 69, April 1917. Sure, you'd have to learn German first, but that will probably come in handy one day :) Regards St?fan From dd55 at cornell.edu Tue Aug 22 13:34:08 2006 From: dd55 at cornell.edu (Darren Dale) Date: Tue, 22 Aug 2006 13:34:08 -0400 Subject: [SciPy-user] OT(?): Lab hardware interface In-Reply-To: <44EB20E3.9090902@rider.edu> References: <44E27D98.9060403@hoc.net> <44E3424F.4030708@gmail.com> <44EB20E3.9090902@rider.edu> Message-ID: <200608221334.08311.dd55@cornell.edu> On Tuesday 22 August 2006 11:21, Gary Pajer wrote: > Soon I'm going to need to interface my experiment to my computer. > > Can any one suggest any python tools that might exist? I've been > googling for abour an hour, and haven't come up with anything. > > I can't afford LabView at the moment, and besides, I've never used it > so it doesn't have the advantage of familiarity. I don't need a > graphical environment. At first the equipment will be very simple: a > couple of photodiodes, an A/D converter, perhaps a stepper motor controler. I haven't worked with it, but you might look at TACO: http://www.esrf.fr/taco/ From fredantispam at free.fr Tue Aug 22 13:36:05 2006 From: fredantispam at free.fr (fred) Date: Tue, 22 Aug 2006 19:36:05 +0200 Subject: [SciPy-user] spherical harmonic In-Reply-To: <44D728D5.3050407@free.fr> References: <44D728D5.3050407@free.fr> Message-ID: <44EB4085.1010600@free.fr> fred a ?crit : > Hi, > > I'm playing with spherical harmonic (special.sph_harm()) and I > don't understand what I get. 
> >>From Wolfram (http://mathworld.wolfram.com/SphericalHarmonic.html), > Re_Y11.png, Im_Y11.png & mod_Y11.png seems to be correct > > http://fredantispam.free.fr/Re_Y11.png > http://fredantispam.free.fr/Im_Y11.png > http://fredantispam.free.fr/mod_Y11.png > > when I use > r = > (sqrt((2*n+1)/4.0/pi)*exp(0.5*(gammaln(n-m+1)-gammaln(n+m+1)))*sin(phi)*exp(1j*m*theta))**2 > > But when I use special.sph_harm(m,n,theta,phi), I get this > > http://fredantispam.free.fr/Re_Y11_2.png > http://fredantispam.free.fr/Im_Y11_2.png > http://fredantispam.free.fr/mod_Y11_2.png > > Real & imaginary parts don't seem to be correct. > > What I'm doing wrong ? Nobody has an answer or suggestion ??? -- Fred. From hasslerjc at adelphia.net Tue Aug 22 13:39:33 2006 From: hasslerjc at adelphia.net (John Hassler) Date: Tue, 22 Aug 2006 13:39:33 -0400 Subject: [SciPy-user] OT(?): Lab hardware interface In-Reply-To: <44EB2F28.9020408@astraw.com> References: <44E27D98.9060403@hoc.net> <44E29D83.5090309@gmail.com> <44E2D385.2090404@hoc.net> <44E3424F.4030708@gmail.com> <44EB20E3.9090902@rider.edu> <44EB2BBD.1050809@adelphia.net> <44EB2F28.9020408@astraw.com> Message-ID: <44EB4155.6000005@adelphia.net> An HTML attachment was scrubbed... URL: From strawman at astraw.com Tue Aug 22 14:58:52 2006 From: strawman at astraw.com (Andrew Straw) Date: Tue, 22 Aug 2006 11:58:52 -0700 Subject: [SciPy-user] OT(?): Lab hardware interface In-Reply-To: <44EB4155.6000005@adelphia.net> References: <44E27D98.9060403@hoc.net> <44E29D83.5090309@gmail.com> <44E2D385.2090404@hoc.net> <44E3424F.4030708@gmail.com> <44EB20E3.9090902@rider.edu> <44EB2BBD.1050809@adelphia.net> <44EB2F28.9020408@astraw.com> <44EB4155.6000005@adelphia.net> Message-ID: <44EB53EC.2070407@astraw.com> Dear John, I believe PyUL now has analog output. You must have grabbed an older version. I'm of a different opinion about no wrapper being needed. Although I want the Python API to be as close as possible to the C API, I don't think one should be forced to manually check for errors when calling from Python. Thus, the wrapper would essentially do nothing except call the C function and check for errors. Small, simple, and nearly trivial, yes. But still a wrapper, nevertheless. And like you say, ctypes makes this very easy and that's what the next version will be based on. John Hassler wrote: > I looked at your PyUL, and would have used it, if it had had analog > output. Since it didn't, I started looking deeper into the problem, > and figured out that you could just use c_types. There's no "wrapper" > needed, really ... once you define the .dll, then you just use the UL > calls, as-is. > > I'll be glad to show you what I've done, but really, there's nothing > complex about it. > john > > Andrew Straw wrote: >> John Hassler wrote: >> >>> For the USB device (Measurement Computing USB1208) the first part looks >>> like: >>> from ctypes import * >>> >>> >> Dear John, >> >> I would like to make the next version of PyUniversalLibrary use ctypes >> (instead of pyrex). Due to time constraints, this is not on my >> immediate-to-do list, so I have no idea when, if ever, it will happen. >> If you ever get around to the attempt to make a complete API wrapper for >> UniversalLibrary, please contact me so we can hopefully collaborate. I >> will do the same. >> >> Cheers! 
>> Andrew >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From gpajer at rider.edu Tue Aug 22 17:16:12 2006 From: gpajer at rider.edu (Gary Pajer) Date: Tue, 22 Aug 2006 17:16:12 -0400 Subject: [SciPy-user] OT(?): Lab hardware interface In-Reply-To: <44EB2BBD.1050809@adelphia.net> References: <44E27D98.9060403@hoc.net> <44E29D83.5090309@gmail.com> <44E2D385.2090404@hoc.net> <44E3424F.4030708@gmail.com> <44EB20E3.9090902@rider.edu> <44EB2BBD.1050809@adelphia.net> Message-ID: <44EB741C.1040300@rider.edu> John Hassler wrote: >Do you already have the a/d board? If so, what is it? > > Thanks to all who replied. Fabulously helpful. I placed an order for a NI USB-6009 just minutes before checking my e-mail. Not too late to cancel it if my "support group" is using Measurement Computing. (the choice was a toss-up) >By some strange coincidence, I've just been working with exactly that >problem - three times. In one, I used an ancient, obsolete ISA board on >an ancient, obsolete computer with Linux, comedi, swig, and Python. For >the other two, I had windows .dll files, and was able to use c_types >with Python to interface directly. One was Ethernet, the other was USB. > >For the USB device (Measurement Computing USB1208) the first part looks >like: >from ctypes import * > ># Load the cbw32.DLL file >cbw = windll.LoadLibrary('c:\\MCC\\cbw32.dll') > ># Define the appropriate c_types >RevNum = c_float(0) >VxDRevNum = c_float(0) >BoardNum = c_int(0) >.... and so on. Then: > err = cbw.cbAIn (BoardNum, Chan, Gain, byref(DataValue)) > if err != 0: print "a/d error ", DataValue, err >... and the like. Really very very easy. I love Python. > >I can give you more details, if you're interested. >john > >Gary Pajer wrote: > > >>Soon I'm going to need to interface my experiment to my computer. >> >>Can any one suggest any python tools that might exist? I've been >>googling for abour an hour, and haven't come up with anything. >> >> I can't afford LabView at the moment, and besides, I've never used it >>so it doesn't have the advantage of familiarity. I don't need a >>graphical environment. At first the equipment will be very simple: a >>couple of photodiodes, an A/D converter, perhaps a stepper motor controler. >> >>Any hints are welcome. >>If this strikes you as noise, please accept my apology. 
>> >>Thanks, >>gary >> >> >> >> >>_______________________________________________ >>SciPy-user mailing list >>SciPy-user at scipy.org >>http://projects.scipy.org/mailman/listinfo/scipy-user >> >> >> >> >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.org >http://projects.scipy.org/mailman/listinfo/scipy-user > > > From hasslerjc at adelphia.net Tue Aug 22 18:03:35 2006 From: hasslerjc at adelphia.net (John Hassler) Date: Tue, 22 Aug 2006 18:03:35 -0400 Subject: [SciPy-user] OT(?): Lab hardware interface In-Reply-To: <44EB741C.1040300@rider.edu> References: <44E27D98.9060403@hoc.net> <44E29D83.5090309@gmail.com> <44E2D385.2090404@hoc.net> <44E3424F.4030708@gmail.com> <44EB20E3.9090902@rider.edu> <44EB2BBD.1050809@adelphia.net> <44EB741C.1040300@rider.edu> Message-ID: <44EB7F37.9040002@adelphia.net> An HTML attachment was scrubbed... URL: From ckkart at hoc.net Tue Aug 22 18:47:38 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Wed, 23 Aug 2006 07:47:38 +0900 Subject: [SciPy-user] OT(?): Lab hardware interface In-Reply-To: <44EB20E3.9090902@rider.edu> References: <44E27D98.9060403@hoc.net> <44E29D83.5090309@gmail.com> <44E2D385.2090404@hoc.net> <44E3424F.4030708@gmail.com> <44EB20E3.9090902@rider.edu> Message-ID: <44EB898A.6050707@hoc.net> Gary Pajer wrote: > Soon I'm going to need to interface my experiment to my computer. > > Can any one suggest any python tools that might exist? I've been > googling for abour an hour, and haven't come up with anything. > > I can't afford LabView at the moment, and besides, I've never used it > so it doesn't have the advantage of familiarity. I don't need a > graphical environment. At first the equipment will be very simple: a > couple of photodiodes, an A/D converter, perhaps a stepper motor controler. I would recommend to use I2c for communication with the devices, which is an industry standard to interconnect different parts within electronic devices. There are many low cost modules available, such as DA/AD converters, stepper motor controler, digitial IO, etc. Those modules can be linked serially similar to IEEE and each module has its own ID. Using an adapter for the serial port you can control the devices using, e.g. pyserial. Christian From fredantispam at free.fr Tue Aug 22 19:43:16 2006 From: fredantispam at free.fr (fred) Date: Wed, 23 Aug 2006 01:43:16 +0200 Subject: [SciPy-user] find roots for spherical bessel functions... In-Reply-To: <44EA3CD6.5050905@free.fr> References: <44E39653.4080608@free.fr> <44EA3CD6.5050905@free.fr> Message-ID: <44EB9694.2080308@free.fr> fred a ?crit : > This is useful when you want to find out _all_ zeros for 1 example. > But when you only want the nth zero of the mth Jn, I think I can find an > easier algorithm (I'm currently working > on it) In fact, my second idea was to interpolate intevals [a,b] (which should contain zero) for a given Jn, and then for Jn for all n. Both should be linear interpolation. But I can't get it working fine because linear interpolation does not match very well [a,b] values :-( So... I don't know. Nevertheless, I have at least one solution to find out zeros of spherical Bessel, which works fine. Thanks to all. Cheers, -- Fred. 
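[A rough sketch of the bracketing idea from this thread, written against
scipy.special.sph_jn and scipy.optimize.brentq: walk up the orders and use
the zeros of j_{n-1} as brackets for the zeros of j_n. The small wrapper
also works around the argument-order issue fred mentions, since sph_jn takes
the order first while brentq passes extra arguments last. This is only an
illustration, not fred's exact algorithm.]

    from numpy import arange, pi, zeros
    from scipy.special import sph_jn
    from scipy.optimize import brentq

    def jn_scalar(r, n):
        # sph_jn(n, r) returns j_0(r)..j_n(r); pick out order n so that the
        # order can be handed to brentq as a trailing argument.
        return sph_jn(n, r)[0][n]

    def sph_jn_zeros(nmax, nt):
        # Zeros of j_0 are exactly k*pi; each zero of j_n lies between
        # consecutive zeros of j_{n-1}, so bracket there and refine with brentq.
        z = zeros((nmax + 1, nt + nmax))
        z[0] = arange(1, nt + nmax + 1) * pi
        for n in range(1, nmax + 1):
            for k in range(nt + nmax - n):
                z[n, k] = brentq(jn_scalar, z[n - 1, k], z[n - 1, k + 1], (n,))
        return z[:, :nt]

    print sph_jn_zeros(2, 5)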
From bldrake at adaptcs.com Tue Aug 22 20:26:16 2006 From: bldrake at adaptcs.com (Barry Drake) Date: Tue, 22 Aug 2006 17:26:16 -0700 (PDT) Subject: [SciPy-user] OT(?): Lab hardware interface In-Reply-To: <44EB898A.6050707@hoc.net> Message-ID: <20060823002616.64956.qmail@web208.biz.mail.re2.yahoo.com> Christian, This looks very interesting. I'm a long time LabVIEW (National Instruments) developer, now hooked on Python. Do you have a favorite vendor for hardware that uses I2C? Thanks. Barry Drake Adaptive Computing Systems Christian Kristukat wrote: Gary Pajer wrote: > Soon I'm going to need to interface my experiment to my computer. > > Can any one suggest any python tools that might exist? I've been > googling for abour an hour, and haven't come up with anything. > > I can't afford LabView at the moment, and besides, I've never used it > so it doesn't have the advantage of familiarity. I don't need a > graphical environment. At first the equipment will be very simple: a > couple of photodiodes, an A/D converter, perhaps a stepper motor controler. I would recommend to use I2c for communication with the devices, which is an industry standard to interconnect different parts within electronic devices. There are many low cost modules available, such as DA/AD converters, stepper motor controler, digitial IO, etc. Those modules can be linked serially similar to IEEE and each module has its own ID. Using an adapter for the serial port you can control the devices using, e.g. pyserial. Christian _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From gruben at bigpond.net.au Wed Aug 23 10:21:48 2006 From: gruben at bigpond.net.au (Gary Ruben) Date: Thu, 24 Aug 2006 00:21:48 +1000 Subject: [SciPy-user] OT(?): Lab hardware interface In-Reply-To: <44EB2579.1080101@enthought.com> References: <44E27D98.9060403@hoc.net> <44E29D83.5090309@gmail.com> <44E2D385.2090404@hoc.net> <44E3424F.4030708@gmail.com> <44EB20E3.9090902@rider.edu> <44EB2579.1080101@enthought.com> Message-ID: <44EC647C.8090704@bigpond.net.au> Was this mentioned? Gary R. Bryce Hendrix wrote: > I'm not sure how much this helps- pyserial and pyparallel are packages > for interfacing with hardware via the serial and parallel ports. If you > use the Enthought disto of Python, it includes pyserial. > > Bryce > > Gary Pajer wrote: >> Soon I'm going to need to interface my experiment to my computer. >> >> Can any one suggest any python tools that might exist? I've been >> googling for abour an hour, and haven't come up with anything. >> >> I can't afford LabView at the moment, and besides, I've never used it >> so it doesn't have the advantage of familiarity. I don't need a >> graphical environment. At first the equipment will be very simple: a >> couple of photodiodes, an A/D converter, perhaps a stepper motor controler. >> >> Any hints are welcome. >> If this strikes you as noise, please accept my apology. >> >> Thanks, >> gary From haase at msg.ucsf.edu Wed Aug 23 18:57:03 2006 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 23 Aug 2006 15:57:03 -0700 Subject: [SciPy-user] weave: C-float argument does not accept python int value Message-ID: <200608231557.03871.haase@msg.ucsf.edu> Hi, I'm really excited about weave - it's great fun ! 
Now I get this error message: Conversion Error:, received 'int' type instead of 'float' for variable 'pz' Apparently if I call my functions "f" like f(5) but f was defined in C as: void f(double pz) I get that error. Did I understand this correct ? And how could I specify a "converter" so that a int gets "automagically" converted into a float !? Thanks for weave, Sebastian Haase From ckkart at hoc.net Wed Aug 23 22:03:31 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Thu, 24 Aug 2006 11:03:31 +0900 Subject: [SciPy-user] OT(?): Lab hardware interface In-Reply-To: <20060823002616.64956.qmail@web208.biz.mail.re2.yahoo.com> References: <20060823002616.64956.qmail@web208.biz.mail.re2.yahoo.com> Message-ID: <44ED08F3.8050005@hoc.net> Barry Drake wrote: > Christian, > This looks very interesting. I'm a long time LabVIEW (National > Instruments) developer, now hooked on Python. Do you have a favorite > vendor for hardware that uses I2C? Once we bought an usb i2c interface at http://www.diolan.com/i2c/u2c12.html Other parts we got from a local (Germany) dealer. But it's easy to find parts and vendors on google. Christian From mark at mitre.org Thu Aug 24 12:51:12 2006 From: mark at mitre.org (Mark Heslep) Date: Thu, 24 Aug 2006 12:51:12 -0400 Subject: [SciPy-user] hough transform again In-Reply-To: <51f97e530608220757h6d41aad6m8e9153f50aec3b20@mail.gmail.com> References: <51f97e530608171351q2462984av5bc767192e773479@mail.gmail.com> <51f97e530608211542g2d5c7db0vaf41b3c19187d96@mail.gmail.com> <51f97e530608220659r3a978c2r8d5776f7fd6adafd@mail.gmail.com> <51f97e530608220757h6d41aad6m8e9153f50aec3b20@mail.gmail.com> Message-ID: <44EDD900.7030407@mitre.org> stephen emslie wrote: > It could be that Stefan's hough is fast because it uses only one for > loop, > and lots of array operations, which I believe are fast. > > Speed is definitely going to be an issue for me, as I'm going to need > to be > able to process a video feed in real-time. But I'd like to try and get > things working with scipy and radon to start, and then look at speeding > things up. > You might find the open source Hough code in OpenCv instructive / useful. It's mature (Cv started in '99) and highly optimized, supporting standard and probabilistic methods for line and circle finding. On x86 platforms you will find its speed unbeatable for packaged code, especially w/ Intel's IPP. Wiki documentation for HoughLines2 w/ a C example at the API level: http://opencvlibrary.sourceforge.net/CvReference#cv_imgproc_special To see the underlying code you want the cvhough.cpp module in ../cv/src/ from the download tarball. There's also a Python example in ../samples/python/ of the CVS version, using OpenCv's Swig - Python port. I'm working on a ctypes Py port. Mark From bpederse at gmail.com Sat Aug 26 17:27:53 2006 From: bpederse at gmail.com (Brent Pedersen) Date: Sat, 26 Aug 2006 14:27:53 -0700 Subject: [SciPy-user] numexpr Message-ID: hi, i have just been looking at numexpr in the scipy sandbox from svn. i see that there is mention of log, exp [in complex_functions.inc, but not in interpreter.c] but can not follow the code. can those be used in numexpr evaluate() ? and if so, how? is there a way i can figure out what expressions/functions are supported? i am unable to find any docs other than this page: http://www.scipy.org/SciPyPackages/NumExpr is there any further documentaion that i am missing? thanks, -brent -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gael.varoquaux at normalesup.org Sun Aug 27 06:53:16 2006 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sun, 27 Aug 2006 12:53:16 +0200 Subject: [SciPy-user] sparse (fwd) Message-ID: <20060827105316.GB19289@clipper.ens.fr> Hi Dwishen, I never ever used sparse matrices, so I am not exactly the right person to ask. On a general rule, alway ask the mailing list, and reply to the mailing list, that way every body can help you. Regards, Ga?l ----- Forwarded message from Dwishen Ramanah ----- From: Dwishen Ramanah To: gael.varoquaux at normalesup.org Subject: sparse Date: Thu, 24 Aug 2006 17:28:02 +1000 Hi, I was just wondering if it was possible to solve using sparse in scipy for a non-square A matrix. Ax = b dwishen ----- End forwarded message ----- From nmarais at sun.ac.za Mon Aug 28 14:44:56 2006 From: nmarais at sun.ac.za (Neilen Marais) Date: Mon, 28 Aug 2006 20:44:56 +0200 Subject: [SciPy-user] numpy.sum and generator expressions Message-ID: Hi I have a lot of code that uses numpy.sum with generator expressions: import numpy as N a = sum(i for i in xrange(3)) Of course the generator expressions are a little more involved than that. With numpy 0.9.8 this works as expected: In [1]: import numpy as N In [2]: N.sum(i for i in range(3)) Out[2]: 3 but with 1.0b4 I get: In [14]: N.sum(i for i in range(3)) Out[14]: Is this the (new?) expected bahaviour, or is it a bug? I hope for the latter, since I use generators like this quite a bit... Thanks Neilen -- you know its kind of tragic we live in the new world but we've lost the magic -- Battery 9 (www.battery9.co.za) From oliphant.travis at ieee.org Mon Aug 28 16:01:59 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 28 Aug 2006 14:01:59 -0600 Subject: [SciPy-user] numpy.sum and generator expressions In-Reply-To: References: Message-ID: <44F34BB7.6070806@ieee.org> Neilen Marais wrote: > Hi > > I have a lot of code that uses numpy.sum with generator expressions: > > import numpy as N > > a = sum(i for i in xrange(3)) > > Of course the generator expressions are a little more involved than that. > With numpy 0.9.8 this works as expected: > > In [1]: import numpy as N > In [2]: N.sum(i for i in range(3)) > Out[2]: 3 > > but with 1.0b4 I get: > > In [14]: N.sum(i for i in range(3)) > Out[14]: > > Is this the (new?) expected bahaviour, or is it a bug? I hope for the > latter, since I use generators like this quite a bit... > It's a bug. It was introduced when adding an optional output argument to sum. In the sum function in fromnumeric.py there needs to be a return res statement in the check for a generator type. This is fixed in SVN. -Travis From rshepard at appl-ecosys.com Mon Aug 28 16:14:46 2006 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Mon, 28 Aug 2006 13:14:46 -0700 (PDT) Subject: [SciPy-user] Intallation Tests Failed Message-ID: I just finished downloading and installing the latest NumPy and SciPy; the installation tests failed: Ran 1552 tests in 12.197s FAILED (errors=196) I followed the instructions on the wiki installation page: first NumPy, then BLAS (as a directory within /usr/local/scipy-0.5.0), then LAPACK (also within that directory), then SciPy. When compiling LAPACK I reran it without the quotation marks for the optimization option on the command line; hadn't seen options quoted before. Could that make a difference? Also, when LAPACK finished compiling it did not automatically return control to the command line; I had to kill the process with ctrl-c. 
What do you suggest I do to get a working installation? I can send the test results if that helps. Thanks, Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc.(TM) | Accelerator Voice: 503-667-4517 Fax: 503-667-8863 From ewald.zietsman at gmail.com Mon Aug 28 16:25:29 2006 From: ewald.zietsman at gmail.com (Ewald Zietsman) Date: Mon, 28 Aug 2006 22:25:29 +0200 Subject: [SciPy-user] Fourier Tranforms of irregularly sampled data Message-ID: Hi all, I want to calculate the periodogram of a series of data that have been measured at irregular intervals i.e. A quantity was measured every ten seconds one day and then maybe every 20 seconds a week later etc. Is there any way to do this easily in scipy? I have an algorithm to do this but it is dreadfully slow since it uses python loops and map(). Any help will be greatly appreciated. -Ewald -------------- next part -------------- An HTML attachment was scrubbed... URL: From conor.robinson at gmail.com Mon Aug 28 16:42:22 2006 From: conor.robinson at gmail.com (Conor Robinson) Date: Mon, 28 Aug 2006 13:42:22 -0700 Subject: [SciPy-user] Academic Question? Message-ID: Hello scipy users, If one is using 1ofC encoding, what does PCA applied to 1ofC result in? 1ofc is nice if you're trying to get the posterior probability distribution as output (sigmoid single output unit feedforward network), however would this still hold true after applying PCA to reduce input dimensionality? Does this even make "sense" for 1ofc? Furthermore, is there any solid literature reviewing encoding schemes for nnets etc? Thanks, Conor From oliphant.travis at ieee.org Mon Aug 28 16:59:45 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 28 Aug 2006 14:59:45 -0600 Subject: [SciPy-user] Intallation Tests Failed In-Reply-To: References: Message-ID: <44F35941.4000700@ieee.org> Rich Shepard wrote: > I just finished downloading and installing the latest NumPy and SciPy; the > installation tests failed: > > Ran 1552 tests in 12.197s > > FAILED (errors=196) > > I followed the instructions on the wiki installation page: first NumPy, > then BLAS (as a directory within /usr/local/scipy-0.5.0), then LAPACK (also > within that directory), then SciPy. > > When compiling LAPACK I reran it without the quotation marks for the > optimization option on the command line; hadn't seen options quoted before. > Could that make a difference? Also, when LAPACK finished compiling it did > not automatically return control to the command line; I had to kill the > process with ctrl-c. > > What do you suggest I do to get a working installation? I can send the > test results if that helps. > What is your platform? What does numpy.test() do by itself? What are the failing tests (most likely they are all related to a few points of failure so you probably don't have to provide all the failing tests. Have you tried with a pre-built ATLAS? -Travis From rshepard at appl-ecosys.com Mon Aug 28 17:22:03 2006 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Mon, 28 Aug 2006 14:22:03 -0700 (PDT) Subject: [SciPy-user] Intallation Tests Failed In-Reply-To: <44F35941.4000700@ieee.org> References: <44F35941.4000700@ieee.org> Message-ID: On Mon, 28 Aug 2006, Travis Oliphant wrote: > What is your platform? What does numpy.test() do by itself? My apologies, Travis. After sending the message I realized that I should have specified I'm running Slackware-10.2. I'm rebuilding numpy right now; the directory is /usr/local/numpy-1.0b4. 
> What are the failing tests (most likely they are all related to a few points > of failure so you probably don't have to provide all the failing tests. Let me rebuild everything (using OPT = "-O2" rather than OPT = -O2 for lapack) and see if it's strickly a user error here. Some of the failed tests are: ERROR: binary hit-or-miss transform (1-3) ERROR: iterating a structure (1-3) ERROR: label (1-13) ERROR: maximum (1, 2) ERROR: maximum filter (6-9) ERROR: maximum position (1, 3) ERROR: mean (1, 2) ERROR: minimum (1, 2) ERROR: minimum filter (6-9) and others in morphological, rank, median, standard deviation, sum, variance, watershed_ift, white tophat, and check_rvs. > Have you tried with a pre-built ATLAS? No. That was my next step. While running the numpy installation, 'python setup.py install >& install.log & tail -f install log,' I guess it finished at the line: copying build/src.linux-i686-2.4/numpy/core/ufunc_api.txt -> /usr/lib/python2.4/site-packages/numpy/core/include/numpy because the tail sat there for more than 10 minutes. I've attached the gzipped log file. I don't see the syntax to test only numpy. Thanks, Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc.(TM) | Accelerator Voice: 503-667-4517 Fax: 503-667-8863 -------------- next part -------------- A non-text attachment was scrubbed... Name: install.log.gz Type: application/x-gzip Size: 3418 bytes Desc: URL: From rshepard at appl-ecosys.com Mon Aug 28 17:43:35 2006 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Mon, 28 Aug 2006 14:43:35 -0700 (PDT) Subject: [SciPy-user] Intallation Tests Failed In-Reply-To: <44F35941.4000700@ieee.org> References: <44F35941.4000700@ieee.org> Message-ID: On Mon, 28 Aug 2006, Travis Oliphant wrote: > What does numpy.test() do by itself? Travis, Ran 481 tests in 2.744s OK So, on to the next step. I'm building Atlas as I write. Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc.(TM) | Accelerator Voice: 503-667-4517 Fax: 503-667-8863 From aarre at pair.com Mon Aug 28 17:55:22 2006 From: aarre at pair.com (Aarre Laakso) Date: Mon, 28 Aug 2006 17:55:22 -0400 Subject: [SciPy-user] Academic Question? In-Reply-To: References: Message-ID: <44F3664A.9080405@pair.com> Conor Robinson wrote: > If one is using 1ofC encoding, what does PCA > applied to 1ofC result in? 1ofc is nice if you're trying to get the > posterior probability distribution as output (sigmoid single output > unit feedforward network), > however would this still hold true after applying PCA to reduce input > dimensionality? Does this even make "sense" for 1ofc? Furthermore, > is there any solid literature reviewing encoding schemes for nnets > etc? >From what I've been told, it doesn't make sense to apply PCA to categorical variables, although it is a common practice. If you want the posterior probability distribution at the outputs, then I believe you want to use softmax. The neural nets FAQ http://www.faqs.org/faqs/ai-faq/neural-nets/ has some practical advice about encoding, with citations to the literature (part 2, "How should categories be encoded?"), as well as some recommendations on the literature in general (part 4). If you haven't read Bishop (1995), I highly recommend it. 
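[A small illustration of the two ingredients mentioned above, with made-up
shapes: 1-of-C (one-hot) coding of a categorical target, and a softmax
output stage that turns network activations into posterior class
probabilities. This only sketches the encoding itself; it says nothing about
whether PCA on such columns is meaningful.]

    import numpy as np

    def one_hot(labels, n_classes):
        # 1-of-C coding: row i is zero except for a 1 in column labels[i]
        coded = np.zeros((len(labels), n_classes))
        coded[np.arange(len(labels)), labels] = 1.0
        return coded

    def softmax(act):
        # subtract the row maximum for numerical stability, then normalise
        e = np.exp(act - act.max(axis=1)[:, np.newaxis])
        return e / e.sum(axis=1)[:, np.newaxis]

    labels = np.array([0, 2, 1, 2])      # hypothetical targets, C = 3 classes
    targets = one_hot(labels, 3)
    activations = np.random.randn(4, 3)  # pretend output-layer activations
    posteriors = softmax(activations)    # each row sums to 1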
Aarre -- Aarre Laakso http://www.laakshmi.com/aarre/ From robert.kern at gmail.com Mon Aug 28 19:32:33 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 28 Aug 2006 18:32:33 -0500 Subject: [SciPy-user] Fourier Tranforms of irregularly sampled data In-Reply-To: References: Message-ID: <44F37D11.9030902@gmail.com> Ewald Zietsman wrote: > Hi all, > > I want to calculate the periodogram of a series of data that have been > measured at irregular intervals i.e. A quantity was measured every ten > seconds one day and then maybe every 20 seconds a week later etc. > > Is there any way to do this easily in scipy? I have an algorithm to do > this but it is dreadfully slow since it uses python loops and map(). The Lomb-Scargle periodogram works fairly well here (Google for references). Numerical Recipes 13.8 describes an implementation that utilizes FFTs in a clever way to do speed up the calculation. http://library.lanl.gov/numerical/bookcpdf/c13-8.pdf If you do implement it, I would love to include it in scipy.signal . -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From rshepard at appl-ecosys.com Mon Aug 28 19:43:20 2006 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Mon, 28 Aug 2006 16:43:20 -0700 (PDT) Subject: [SciPy-user] Intallation Tests Failed -- UPDATE In-Reply-To: <44F35941.4000700@ieee.org> References: <44F35941.4000700@ieee.org> Message-ID: On Mon, 28 Aug 2006, Travis Oliphant wrote: > What does numpy.test() do by itself? Travis, I may have found part of the problem. Having rebuilt and re-installed in this order: atlas, blas, lapack, numpy, numpy.test() now reports 481 tests successfully completed. On to scipy rebuild/re-installation. Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc.(TM) | Accelerator Voice: 503-667-4517 Fax: 503-667-8863 From rshepard at appl-ecosys.com Mon Aug 28 19:50:35 2006 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Mon, 28 Aug 2006 16:50:35 -0700 (PDT) Subject: [SciPy-user] Intallation Tests Failed -- UPDATE 2 In-Reply-To: <44F35941.4000700@ieee.org> References: <44F35941.4000700@ieee.org> Message-ID: On Mon, 28 Aug 2006, Travis Oliphant wrote: > What are the failing tests (most likely they are all related to a few > points of failure so you probably don't have to provide all the failing > tests. SciPy rebuilt. Invoke ipython. Enter 'import numpy;' prompt for next line returned. Enter 'import scipy' and the following is returned: In [2]: import scipy Overwriting info= from scipy.misc.helpmod (was from numpy.lib.utils) Overwriting who= from scipy.misc.common (was from numpy.lib.utils) Overwriting source= from scipy.misc.helpmod (was from numpy.lib.utils) If instead I type from numpy import * from scipy import * I see: In [2]: from scipy import * Overwriting info= from scipy.misc.helpmod (was from numpy.lib.utils) Overwriting who= from scipy.misc.common (was from numpy.lib.utils) Overwriting source= from scipy.misc.helpmod (was from numpy.lib.utils) /usr/lib/python2.4/site-packages/numpy/dft/__init__.py:2: UserWarning: The dft subpackage will be removed by 1.0 final -- it is now called fft warnings.warn("The dft subpackage will be removed by 1.0 final -- it is now called fft") Running 'test(level=1, verbosity=2)' there are still failures: Ran 1552 tests in 11.576s FAILED (errors=196) What do you suggest? Thanks, Rich -- Richard B. 
Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc.(TM) | Accelerator Voice: 503-667-4517 Fax: 503-667-8863 From oliphant.travis at ieee.org Mon Aug 28 19:58:33 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 28 Aug 2006 17:58:33 -0600 Subject: [SciPy-user] Intallation Tests Failed In-Reply-To: References: Message-ID: <44F38329.20504@ieee.org> Rich Shepard wrote: > I just finished downloading and installing the latest NumPy and SciPy; the > installation tests failed: > > Wait a minute. By the "latest" SciPy what do you mean?? The SciPy release is behind the NumPy release. Only the SVN version of SciPy works with the latest NumPy. I suspect the errors are due to that. -Travis From fperez.net at gmail.com Mon Aug 28 20:07:37 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 28 Aug 2006 18:07:37 -0600 Subject: [SciPy-user] Fourier Tranforms of irregularly sampled data In-Reply-To: <44F37D11.9030902@gmail.com> References: <44F37D11.9030902@gmail.com> Message-ID: On 8/28/06, Robert Kern wrote: > Ewald Zietsman wrote: > > Hi all, > > > > I want to calculate the periodogram of a series of data that have been > > measured at irregular intervals i.e. A quantity was measured every ten > > seconds one day and then maybe every 20 seconds a week later etc. > > > > Is there any way to do this easily in scipy? I have an algorithm to do > > this but it is dreadfully slow since it uses python loops and map(). > > The Lomb-Scargle periodogram works fairly well here (Google for references). > Numerical Recipes 13.8 describes an implementation that utilizes FFTs in a > clever way to do speed up the calculation. It's also worth mentioning more recent work on the topic, which introduces significant improvements (at the cost of a more difficult implementation): On the Fast Fourier Transform of Functions With Singularities Gregory Beylkin Applied and Computational Harmonic Analysis, 2, pp. 363-381, 1995 http://amath.colorado.edu/pub/wavelets/papers/usfft.pdf Cheers, f From rshepard at appl-ecosys.com Mon Aug 28 20:10:57 2006 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Mon, 28 Aug 2006 17:10:57 -0700 (PDT) Subject: [SciPy-user] Intallation Tests Failed In-Reply-To: <44F38329.20504@ieee.org> References: <44F38329.20504@ieee.org> Message-ID: On Mon, 28 Aug 2006, Travis Oliphant wrote: > The SciPy release is behind the NumPy release. Only the SVN version of > SciPy works with the latest NumPy. > I suspect the errors are due to that. Travis, Aha! I downloaded numpy-1.0b4 and scipy-0.5.0 from the SciPy site. So, I'll grab the SVN version of SciPy and try again. Do I also need the SVN version of numpy? Oh, what the heck, I'll redo both from the trunk. Thanks, Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc.(TM) | Accelerator Voice: 503-667-4517 Fax: 503-667-8863 From oliphant.travis at ieee.org Mon Aug 28 20:15:42 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 28 Aug 2006 18:15:42 -0600 Subject: [SciPy-user] Intallation Tests Failed In-Reply-To: References: <44F38329.20504@ieee.org> Message-ID: <44F3872E.6090405@ieee.org> Rich Shepard wrote: > On Mon, 28 Aug 2006, Travis Oliphant wrote: > > >> The SciPy release is behind the NumPy release. Only the SVN version of >> SciPy works with the latest NumPy. >> > > >> I suspect the errors are due to that. >> > > Travis, > > Aha! I downloaded numpy-1.0b4 and scipy-0.5.0 from the SciPy site. 
So, > I'll grab the SVN version of SciPy and try again. Do I also need the SVN > version of numpy? Oh, what the heck, I'll redo both from the trunk. > > It's easy enough if you are building from sources to just grab the SVN of both. My focus is to keep both synchronized so they work together. Yes, you will need the trunk from both at this point. Sorry, I didn't catch that earlier. -Travis From rshepard at appl-ecosys.com Mon Aug 28 20:22:04 2006 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Mon, 28 Aug 2006 17:22:04 -0700 (PDT) Subject: [SciPy-user] Intallation Tests Failed In-Reply-To: <44F3872E.6090405@ieee.org> References: <44F38329.20504@ieee.org> <44F3872E.6090405@ieee.org> Message-ID: On Mon, 28 Aug 2006, Travis Oliphant wrote: > It's easy enough if you are building from sources to just grab the SVN of > both. My focus is to keep both synchronized so they work together. > > Yes, you will need the trunk from both at this point. Sorry, I didn't > catch that earlier. Travis, I should have remembered reading that, too. After dinner I'll start over again with both. Not many files were changed with the numpy checkout, but scipy revision 2180 brought down many files. Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc.(TM) | Accelerator Voice: 503-667-4517 Fax: 503-667-8863 From rshepard at appl-ecosys.com Mon Aug 28 21:00:17 2006 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Mon, 28 Aug 2006 18:00:17 -0700 (PDT) Subject: [SciPy-user] Intallation Tests Failed In-Reply-To: References: <44F38329.20504@ieee.org> <44F3872E.6090405@ieee.org> Message-ID: On Mon, 28 Aug 2006, Rich Shepard wrote: > After dinner I'll start over again with both. Well, I decided to do it before dinner. > Not many files were changed with the numpy checkout, but scipy revision > 2180 brought down many files. Closer. Many warnings scrolled by during the build/install. After invoking ipython, 'import scipy' showed: In [2]: import scipy import misc -> failed: cannot import name place Ran 'scipy.test(level=1)' and I'm down from 196 errors to 1: ====================================================================== ERROR: check_integer (scipy.io.tests.test_array_import.test_read_array) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/scipy/io/tests/test_array_import.py", line 55, in check_integer from scipy import stats File "/usr/lib/python2.4/site-packages/scipy/stats/__init__.py", line 7, in ? from stats import * File "/usr/lib/python2.4/site-packages/scipy/stats/stats.py", line 1693, in ? import distributions File "/usr/lib/python2.4/site-packages/scipy/stats/distributions.py", line 16, in ? from numpy import atleast_1d, polyval, angle, ceil, place, extract, \ ImportError: cannot import name place ---------------------------------------------------------------------- Ran 1388 tests in 11.423s FAILED (errors=1) Out[4]: Also, **************************************************************** WARNING: clapack module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses flapack instead of clapack. **************************************************************** ATLAS _is_ installed; perhaps in the wrong place? It was built within /usr/local/ATLAS, not as a scipy/ subdirectory. Rich -- Richard B. Shepard, Ph.D. 
| The Environmental Permitting Applied Ecosystem Services, Inc.(TM) | Accelerator Voice: 503-667-4517 Fax: 503-667-8863 From rshepard at appl-ecosys.com Mon Aug 28 21:09:24 2006 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Mon, 28 Aug 2006 18:09:24 -0700 (PDT) Subject: [SciPy-user] Intallation Tests Failed In-Reply-To: References: <44F38329.20504@ieee.org> <44F3872E.6090405@ieee.org> Message-ID: On Mon, 28 Aug 2006, Rich Shepard wrote: > Not many files were changed with the numpy checkout, On my notebook I removed the numpy directory and grabbed the trunk head from the svn server. After that built, I ran 'numpy.test(level=1)' and got three errors: check_ascii, check_both, and check_place. ====================================================================== ERROR: check_ascii (numpy.core.tests.test_multiarray.test_fromstring) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.3/site-packages/numpy/core/tests/test_multiarray.py", line 120, in check_ascii a = fromstring('1 , 2 , 3 , 4',sep=',') ValueError: don't know how to read character strings for given array type ====================================================================== ERROR: check_both (numpy.lib.tests.test_function_base.test_extins) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.3/site-packages/numpy/lib/tests/test_function_base.py", line 249, in check_both place(a,mask,0) NameError: global name 'place' is not defined ====================================================================== ERROR: check_place (numpy.lib.tests.test_function_base.test_extins) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.3/site-packages/numpy/lib/tests/test_function_base.py", line 242, in check_place place(a,[0,1,0,1,0,1,0],[2,4,6]) NameError: global name 'place' is not defined ---------------------------------------------------------------------- Ran 481 tests in 0.863s FAILED (errors=3) Out[5]: Something I did incorrectly? Thanks, Travis, Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc.(TM) | Accelerator Voice: 503-667-4517 Fax: 503-667-8863 From oliphant.travis at ieee.org Mon Aug 28 23:26:27 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 28 Aug 2006 21:26:27 -0600 Subject: [SciPy-user] Intallation Tests Failed In-Reply-To: References: <44F38329.20504@ieee.org> <44F3872E.6090405@ieee.org> Message-ID: <44F3B3E3.6040608@ieee.org> Rich Shepard wrote: > On Mon, 28 Aug 2006, Rich Shepard wrote: > > >> Not many files were changed with the numpy checkout, >> > > On my notebook I removed the numpy directory and grabbed the trunk head > from the svn server. After that built, I ran 'numpy.test(level=1)' and got > three errors: check_ascii, check_both, and check_place. > > ====================================================================== > ERROR: check_ascii (numpy.core.tests.test_multiarray.test_fromstring) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/local/lib/python2.3/site-packages/numpy/core/tests/test_multiarray.py", > line 120, in check_ascii > a = fromstring('1 , 2 , 3 , 4',sep=',') > ValueError: don't know how to read character strings for given array type > > This error is known on Python 2.3. I wouldn't worry about it. 
It's probably a missing function that Python added to it's C-API in Python 2.4 > ====================================================================== > ERROR: check_both (numpy.lib.tests.test_function_base.test_extins) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/local/lib/python2.3/site-packages/numpy/lib/tests/test_function_base.py", > line 249, in check_both > place(a,mask,0) > NameError: global name 'place' is not defined > > ====================================================================== > ERROR: check_place (numpy.lib.tests.test_function_base.test_extins) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/local/lib/python2.3/site-packages/numpy/lib/tests/test_function_base.py", > line 242, in check_place > place(a,[0,1,0,1,0,1,0],[2,4,6]) > NameError: global name 'place' is not defined > Both of these errors indicate some problem with the installation. Are you sure you have the right SVN tree? What is the output of svn info (run from the same directory you ran setup.py from) What about python -c "import numpy; print numpy.__version__" From nmarais at sun.ac.za Tue Aug 29 04:37:43 2006 From: nmarais at sun.ac.za (Neilen Marais) Date: Tue, 29 Aug 2006 10:37:43 +0200 Subject: [SciPy-user] numpy.sum and generator expressions References: <44F34BB7.6070806@ieee.org> Message-ID: On Mon, 28 Aug 2006 14:01:59 -0600, Travis Oliphant wrote: > Neilen Marais wrote: >> but with 1.0b4 I get: >> >> In [14]: N.sum(i for i in range(3)) >> Out[14]: >> >> Is this the (new?) expected bahaviour, or is it a bug? I hope for the >> latter, since I use generators like this quite a bit... >> > > It's a bug. It was introduced when adding an optional output argument > to sum. In the sum function in fromnumeric.py there needs to be a > return res statement in the check for a generator type. > > This is fixed in SVN. Great, thanks for the fix. Regards Neilen > > -Travis -- you know its kind of tragic we live in the new world but we've lost the magic -- Battery 9 (www.battery9.co.za) From rshepard at appl-ecosys.com Tue Aug 29 09:13:26 2006 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Tue, 29 Aug 2006 06:13:26 -0700 (PDT) Subject: [SciPy-user] Intallation Tests Failed In-Reply-To: <44F3B3E3.6040608@ieee.org> References: <44F38329.20504@ieee.org> <44F3872E.6090405@ieee.org> <44F3B3E3.6040608@ieee.org> Message-ID: On Mon, 28 Aug 2006, Travis Oliphant wrote: > This error is known on Python 2.3. I wouldn't worry about it. It's probably > a missing function that Python added to it's C-API in Python 2.4 [rshepard at salmo ~]$ ls /var/log/packages/ | grep python python-2.4.1-i486-1 Apparently not. > Both of these errors indicate some problem with the installation. Are you > sure you have the right SVN tree? What is the output of > svn info (run from the same directory you ran setup.py from) Workstation: [rshepard at salmo /usr/local/scipy]$ svn info Path: . 
URL: http://svn.scipy.org/svn/scipy/trunk Repository Root: http://svn.scipy.org/svn/scipy Repository UUID: d6536bca-fef9-0310-8506-e4c0a848fbcf Revision: 2180 Node Kind: directory Schedule: normal Last Changed Author: oliphant Last Changed Rev: 2180 Last Changed Date: 2006-08-27 00:44:15 -0700 (Sun, 27 Aug 2006) Properties Last Updated: 2006-08-28 17:38:34 -0700 (Mon, 28 Aug 2006) > What about > python -c "import numpy; print numpy.__version__" [rshepard at salmo ~]$ python -c "import numpy; print numpy.__version__" 1.0b4 Notebook: Path: . URL: http://svn.scipy.org/svn/numpy/trunk Repository Root: http://svn.scipy.org/svn/numpy Repository UUID: 94b884b6-d6fd-0310-90d3-974f1d3f35e1 Revision: 3089 Node Kind: directory Schedule: normal Last Changed Author: oliphant Last Changed Rev: 3089 Last Changed Date: 2006-08-28 13:56:23 -0700 (Mon, 28 Aug 2006) Properties Last Updated: 2006-08-28 17:52:36 -0700 (Mon, 28 Aug 2006) Path: . URL: http://svn.scipy.org/svn/scipy/trunk Repository Root: http://svn.scipy.org/svn/scipy Repository UUID: d6536bca-fef9-0310-8506-e4c0a848fbcf Revision: 2180 Node Kind: directory Schedule: normal Last Changed Author: oliphant Last Changed Rev: 2180 Last Changed Date: 2006-08-27 00:44:15 -0700 (Sun, 27 Aug 2006) Properties Last Updated: 2006-08-28 17:47:40 -0700 (Mon, 28 Aug 2006) Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc.(TM) | Accelerator Voice: 503-667-4517 Fax: 503-667-8863 From rshepard at appl-ecosys.com Tue Aug 29 11:53:01 2006 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Tue, 29 Aug 2006 08:53:01 -0700 (PDT) Subject: [SciPy-user] NumPy or Numeric for Matplotlib? Message-ID: Yesterday, when I upgraded numpy and scipy I removed numeric because it seems to have been replaced by numpy. However, the latest version of matplotlib, 0.83 still, seems to want numeric or numarray. Do I re-install numeric on my systems or is there a work-around for this? Thanks, Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc.(TM) | Accelerator Voice: 503-667-4517 Fax: 503-667-8863 From willemjagter at gmail.com Tue Aug 29 12:12:43 2006 From: willemjagter at gmail.com (William Hunter) Date: Tue, 29 Aug 2006 18:12:43 +0200 Subject: [SciPy-user] NumPy or Numeric for Matplotlib? In-Reply-To: References: Message-ID: <8b3894bc0608290912n331b3c67g221181a2092ed14e@mail.gmail.com> There's an option in your matplotlibrc file under configuration, close to the top. It gives options for numerix, just type in in that line and all should be ok. There's a whole lot of other stuff you can set here too. -- William On 29/08/06, Rich Shepard wrote: > Yesterday, when I upgraded numpy and scipy I removed numeric because it > seems to have been replaced by numpy. However, the latest version of > matplotlib, 0.83 still, seems to want numeric or numarray. Do I re-install > numeric on my systems or is there a work-around for this? > > Thanks, > > Rich > > -- > Richard B. Shepard, Ph.D. | The Environmental Permitting > Applied Ecosystem Services, Inc.(TM) | Accelerator > Voice: 503-667-4517 Fax: 503-667-8863 > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From conor.robinson at gmail.com Tue Aug 29 12:34:20 2006 From: conor.robinson at gmail.com (Conor Robinson) Date: Tue, 29 Aug 2006 09:34:20 -0700 Subject: [SciPy-user] Academic Question? 
In-Reply-To: <44F3664A.9080405@pair.com> References: <44F3664A.9080405@pair.com> Message-ID: Thanks Aarre, I have both of those resources, they are good, but I'm really looking for a study comparing encoding schema. I've been combing the University California libraries, however most studies use a "hand wave" gesture or don't mention how they encode whatsoever. I've developed a basic technique for compressing 1ofC, but I would like to see what others have done. In bishop around p. 230 he notes you can use cross entropy with sigmoid for probabilities. Conor On 8/28/06, Aarre Laakso wrote: > > Conor Robinson wrote: > > If one is using 1ofC encoding, what does PCA > > applied to 1ofC result in? 1ofc is nice if you're trying to get the > > posterior probability distribution as output (sigmoid single output > > unit feedforward network), > > however would this still hold true after applying PCA to reduce input > > dimensionality? Does this even make "sense" for 1ofc? Furthermore, > > is there any solid literature reviewing encoding schemes for nnets > > etc? > > >From what I've been told, it doesn't make sense to apply PCA to > categorical variables, although it is a common practice. > > If you want the posterior probability distribution at the outputs, then > I believe you want to use softmax. > > The neural nets FAQ http://www.faqs.org/faqs/ai-faq/neural-nets/ has > some practical advice about encoding, with citations to the literature > (part 2, "How should categories be encoded?"), as well as some > recommendations on the literature in general (part 4). If you haven't > read Bishop (1995), I highly recommend it. > > Aarre > > > -- > Aarre Laakso > http://www.laakshmi.com/aarre/ > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From rshepard at appl-ecosys.com Tue Aug 29 12:42:56 2006 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Tue, 29 Aug 2006 09:42:56 -0700 (PDT) Subject: [SciPy-user] NumPy or Numeric for Matplotlib? In-Reply-To: <8b3894bc0608290912n331b3c67g221181a2092ed14e@mail.gmail.com> References: <8b3894bc0608290912n331b3c67g221181a2092ed14e@mail.gmail.com> Message-ID: On Tue, 29 Aug 2006, William Hunter wrote: > There's an option in your matplotlibrc file under configuration, close to > the top. It gives options for numerix, just type in in > that line and all should be ok. There's a whole lot of other stuff you can > set here too. William, Thank you very much! I assumed there was some way to specify preferences, but hadn't stumbled across this one by myself. Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc.(TM) | Accelerator Voice: 503-667-4517 Fax: 503-667-8863 From wbaxter at gmail.com Tue Aug 29 13:29:10 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Wed, 30 Aug 2006 02:29:10 +0900 Subject: [SciPy-user] NumPy or Numeric for Matplotlib? In-Reply-To: References: <8b3894bc0608290912n331b3c67g221181a2092ed14e@mail.gmail.com> Message-ID: Watch out though, the current release version of matplotlib (.87.4) isn't compatible with the current numpy (anything since 1.0b). So you may want to keep numeric installed for the time being for matplotlib's sake. --bb On 8/30/06, Rich Shepard wrote: > On Tue, 29 Aug 2006, William Hunter wrote: > > > There's an option in your matplotlibrc file under configuration, close to > > the top. It gives options for numerix, just type in in > > that line and all should be ok. 
There's a whole lot of other stuff you can > > set here too. > > William, > > Thank you very much! I assumed there was some way to specify preferences, > but hadn't stumbled across this one by myself. > > Rich > > -- > Richard B. Shepard, Ph.D. | The Environmental Permitting > Applied Ecosystem Services, Inc.(TM) | Accelerator > Voice: 503-667-4517 Fax: 503-667-8863 > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From rshepard at appl-ecosys.com Tue Aug 29 13:32:11 2006 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Tue, 29 Aug 2006 10:32:11 -0700 (PDT) Subject: [SciPy-user] NumPy or Numeric for Matplotlib? In-Reply-To: References: <8b3894bc0608290912n331b3c67g221181a2092ed14e@mail.gmail.com> Message-ID: On Wed, 30 Aug 2006, Bill Baxter wrote: > Watch out though, the current release version of matplotlib (.87.4) isn't > compatible with the current numpy (anything since 1.0b). So you may want > to keep numeric installed for the time being for matplotlib's sake. Oh. OK. I just replaced 1.0b4 with the svn head last evening as I try to get scipy to pass all tests after building. Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc.(TM) | Accelerator Voice: 503-667-4517 Fax: 503-667-8863 From agn at noc.soton.ac.uk Tue Aug 29 14:13:40 2006 From: agn at noc.soton.ac.uk (George Nurser) Date: Tue, 29 Aug 2006 19:13:40 +0100 Subject: [SciPy-user] NumPy or Numeric for Matplotlib? In-Reply-To: References: <8b3894bc0608290912n331b3c67g221181a2092ed14e@mail.gmail.com> Message-ID: <3673DE07-3F74-4E8D-8827-CD54D299F599@noc.soton.ac.uk> On 29 Aug 2006, at 18:32, Rich Shepard wrote: > On Wed, 30 Aug 2006, Bill Baxter wrote: > >> Watch out though, the current release version of matplotlib (. >> 87.4) isn't >> compatible with the current numpy (anything since 1.0b). So you >> may want >> to keep numeric installed for the time being for matplotlib's sake. > I believe that _matplotlib_ SVN works with the latest numpy. -George Nurser. From rshepard at appl-ecosys.com Tue Aug 29 14:22:04 2006 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Tue, 29 Aug 2006 11:22:04 -0700 (PDT) Subject: [SciPy-user] NumPy or Numeric for Matplotlib? In-Reply-To: <3673DE07-3F74-4E8D-8827-CD54D299F599@noc.soton.ac.uk> References: <8b3894bc0608290912n331b3c67g221181a2092ed14e@mail.gmail.com> <3673DE07-3F74-4E8D-8827-CD54D299F599@noc.soton.ac.uk> Message-ID: On Tue, 29 Aug 2006, George Nurser wrote: > I believe that _matplotlib_ SVN works with the latest numpy. George, I just checked out the trunk and will build it later today or tomorrow. I'll see if it imports properly, then go on from there. Thanks, Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc.(TM) | Accelerator Voice: 503-667-4517 Fax: 503-667-8863 From jdhunter at ace.bsd.uchicago.edu Tue Aug 29 14:28:41 2006 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Tue, 29 Aug 2006 13:28:41 -0500 Subject: [SciPy-user] NumPy or Numeric for Matplotlib? In-Reply-To: (Rich Shepard's message of "Tue, 29 Aug 2006 11:22:04 -0700 (PDT)") References: <8b3894bc0608290912n331b3c67g221181a2092ed14e@mail.gmail.com> <3673DE07-3F74-4E8D-8827-CD54D299F599@noc.soton.ac.uk> Message-ID: <87y7t7cs6e.fsf@peds-pc311.bsd.uchicago.edu> >>>>> "Rich" == Rich Shepard writes: Rich> I just checked out the trunk and will build it later Rich> today or tomorrow. 
I'll see if it imports properly, then go Rich> on from there. svn numpy + svn mpl work fine together for me. JDH From rshepard at appl-ecosys.com Tue Aug 29 16:15:09 2006 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Tue, 29 Aug 2006 13:15:09 -0700 (PDT) Subject: [SciPy-user] Intallation Tests Failed In-Reply-To: <44F3B3E3.6040608@ieee.org> References: <44F38329.20504@ieee.org> <44F3872E.6090405@ieee.org> <44F3B3E3.6040608@ieee.org> Message-ID: On Mon, 28 Aug 2006, Travis Oliphant wrote: > Both of these errors indicate some problem with the installation. Are you > sure you have the right SVN tree? What is the output of Eureka! I checked out the numpy svn trunk on my workstation and built/installed that. Now scipy generates warnings on the tests, but no errors. Ran 1569 tests in 3.977s OK Whew! I'll see what's different on the notebook later; now I need to get some productive work done on my application. Thanks, Travis, Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc.(TM) | Accelerator Voice: 503-667-4517 Fax: 503-667-8863 From oliphant at ee.byu.edu Tue Aug 29 16:45:04 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 29 Aug 2006 14:45:04 -0600 Subject: [SciPy-user] Intallation Tests Failed In-Reply-To: References: <44F38329.20504@ieee.org> <44F3872E.6090405@ieee.org> <44F3B3E3.6040608@ieee.org> Message-ID: <44F4A750.3090408@ee.byu.edu> Rich Shepard wrote: >On Mon, 28 Aug 2006, Travis Oliphant wrote: > > > >>Both of these errors indicate some problem with the installation. Are you >>sure you have the right SVN tree? What is the output of >> >> > > Eureka! > > I checked out the numpy svn trunk on my workstation and built/installed >that. Now scipy generates warnings on the tests, but no errors. > > Printed warnings are no big deal. Only test failures and errors are signficant. Congradualations. I suspect the big problem was the confusion over which SciPy works with which NumPy? We need to make that clear. Hopefully by the first of next week, the two will be in sync and stay that way for a long time (in that SciPy will work with all Numpy 1.0.X releases. That is the goal. -Travis From rshepard at appl-ecosys.com Tue Aug 29 16:50:34 2006 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Tue, 29 Aug 2006 13:50:34 -0700 (PDT) Subject: [SciPy-user] Intallation Tests Failed In-Reply-To: <44F4A750.3090408@ee.byu.edu> References: <44F38329.20504@ieee.org> <44F3872E.6090405@ieee.org> <44F3B3E3.6040608@ieee.org> <44F4A750.3090408@ee.byu.edu> Message-ID: On Tue, 29 Aug 2006, Travis Oliphant wrote: > Printed warnings are no big deal. Only test failures and errors are > signficant. Travis, That was my assumption. Warnings can be cleaned up later without affecting usability. > Congradualations. I suspect the big problem was the confusion over which > SciPy works with which NumPy? We need to make that clear. Hopefully by the > first of next week, the two will be in sync and stay that way for a long > time (in that SciPy will work with all Numpy 1.0.X releases. I'd like to suggest that the version numbers then be synchronized. It's not obvious to folks like me that numpy-1.0b4 and scipy-0.5.0 might (or might not) work together. If the numbers are identical it's a good bet us novices will figure out how to make a matched pair. The next issue I need to test and, I hope, not need to resolve is the svn trunk checkout of matplotlib to go with numpy and scipy. Whew! At least I have my pysqlite issues fixed up now. 
Progress on several fronts today. Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc.(TM) | Accelerator Voice: 503-667-4517 Fax: 503-667-8863 From a.mcmorland at auckland.ac.nz Wed Aug 30 06:00:26 2006 From: a.mcmorland at auckland.ac.nz (Angus McMorland) Date: Wed, 30 Aug 2006 22:00:26 +1200 Subject: [SciPy-user] amd64 specific error in fitpack spline interpolation Message-ID: <1156932026.713.36.camel@amcnote2.mcmorland.mph.auckland.ac.nz> Hi all, I posted about this problem a few months ago, but we didn't get very far, but the annoyance level of the problem is enough now for me to have another go. I'm trying to run the spline example from the cookbook (my exact code, called spl.py, is inlined below). On an ix86 computer this works fine, and returns the expected graphs, but on an amd64 machine I get the error below. Both computers are running debian etch, with recent svn numpy and scipy. TypeError: array cannot be safely cast to required type > /usr/lib/python2.4/site-packages/scipy/interpolate/fitpack.py(217)splprep() 216 t,c,o=_fitpack._parcur(ravel(transpose(x)),w,u,ub,ue,k,task,ipar,s,t, --> 217 nest,wrk,iwrk,per) 218 _parcur_cache['u']=o['u'] I believe the error is generated by the _parcur call -> once in pdb, calling it again re-raises the same error. ipdb>_fitpack._parcur(ravel(transpose(x)),w,u,ub,ue,k,task,ipar,s,t, nest,wrk,iwrk,per) *** TypeError: array cannot be safely cast to required type I've checked the values of all the inputs to the call ravel(transpose(x)) | shape=(300,) w | shape=(100,) u | shape=(100,) ub | 0 ue | 1 k | 2 task | 0 ipar | F s | 3.0 t | [] nest | 103 wrk | [] iwrk | [] per | 0 and they are all the same on both machines, suggesting to me that something is happening wrong in the internals of the function call, rather than with one of the parameters. Any suggestions what could be going on, or how I could debug this further? Can anyone at least verify that the problem exists on other amd64 platforms. Thanks, Angus. spl.py: -------------------------------- from numpy import arange, cos, linspace, pi, sin from scipy.interpolate import splprep, splev from numpy.random import randn # make ascending spiral in 3-space t=linspace(0,1.75*2*pi,100) x = sin(t) y = cos(t) z = t # add noise x+= 0.1*randn(*x.shape) y+= 0.1*randn(*y.shape) z+= 0.1*randn(*z.shape) # spline parameters s=3.0 # smoothness parameter k=2 # spline order nest=-1 # estimate of number of knots needed (-1 = maximal) # find the knot points tckp,u = splprep([x,y,z],s=s,k=k,nest=-1) # evaluate spline, including interpolated points xnew,ynew,znew = splev(linspace(0,1,400),tckp) import pylab pylab.subplot(2,2,1) data,=pylab.plot(x,y,'bo-',label='data') fit,=pylab.plot(xnew,ynew,'r-',label='fit') pylab.legend() pylab.xlabel('x') pylab.ylabel('y') pylab.subplot(2,2,2) data,=pylab.plot(x,z,'bo-',label='data') fit,=pylab.plot(xnew,znew,'r-',label='fit') pylab.legend() pylab.xlabel('x') pylab.ylabel('z') pylab.subplot(2,2,3) data,=pylab.plot(y,z,'bo-',label='data') fit,=pylab.plot(ynew,znew,'r-',label='fit') pylab.legend() pylab.xlabel('y') pylab.ylabel('z') -- Angus McMorland email a.mcmorland at auckland.ac.nz mobile +64-21-155-4906 PhD Student, Neurophysiology / Multiphoton & Confocal Imaging Physiology, University of Auckland phone +64-9-3737-599 x89707 Armourer, Auckland University Fencing Secretary, Fencing North Inc. 
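As the replies further down this thread point out, the fitpack wrappers expect 32-bit (Fortran) integers, while the default numpy integer on many amd64 builds is 64-bit. A quick way to compare the two machines is to print the widths of the relevant integer types; this is only a diagnostic sketch using standard numpy dtype attributes, not something posted in the thread:

import numpy
# default integer dtype (C long): 8 bytes on a typical amd64 Linux build, 4 on ix86
print numpy.dtype(numpy.int_).itemsize
# width of a C int, which is what many of the wrappers actually assume
print numpy.dtype(numpy.intc).itemsize
# dtype that a plain integer array actually gets on this platform
print numpy.array([1, 2, 3]).dtype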
From willemjagter at gmail.com Wed Aug 30 06:49:49 2006 From: willemjagter at gmail.com (William Hunter) Date: Wed, 30 Aug 2006 12:49:49 +0200 Subject: [SciPy-user] amd64 specific error in fitpack spline interpolation In-Reply-To: <1156932026.713.36.camel@amcnote2.mcmorland.mph.auckland.ac.nz> References: <1156932026.713.36.camel@amcnote2.mcmorland.mph.auckland.ac.nz> Message-ID: <8b3894bc0608300349y77d77996se43de854fdbb57a0@mail.gmail.com> Angus; We recently got an amd64; I've not had time to get the thing up and running with scipy and friends, but I'll do that tonight (I hope). I'll try your script and post before Monday. -- WH On 30/08/06, Angus McMorland wrote: > Hi all, > > I posted about this problem a few months ago, but we didn't get very > far, but the annoyance level of the problem is enough now for me to have > another go. > > I'm trying to run the spline example from the cookbook (my exact code, > called spl.py, is inlined below). On an ix86 computer this works fine, > and returns the expected graphs, but on an amd64 machine I get the error > below. Both computers are running debian etch, with recent svn numpy and > scipy. > > TypeError: array cannot be safely cast to required type > > /usr/lib/python2.4/site-packages/scipy/interpolate/fitpack.py(217)splprep() > 216 > t,c,o=_fitpack._parcur(ravel(transpose(x)),w,u,ub,ue,k,task,ipar,s,t, > --> 217 nest,wrk,iwrk,per) > 218 _parcur_cache['u']=o['u'] > > I believe the error is generated by the _parcur call -> once in pdb, > calling it again re-raises the same error. > > ipdb>_fitpack._parcur(ravel(transpose(x)),w,u,ub,ue,k,task,ipar,s,t, > nest,wrk,iwrk,per) > *** TypeError: array cannot be safely cast to required type > > I've checked the values of all the inputs to the call > > ravel(transpose(x)) | shape=(300,) > w | shape=(100,) > u | shape=(100,) > ub | 0 > ue | 1 > k | 2 > task | 0 > ipar | F > s | 3.0 > t | [] > nest | 103 > wrk | [] > iwrk | [] > per | 0 > > and they are all the same on both machines, suggesting to me that > something is happening wrong in the internals of the function call, > rather than with one of the parameters. > > Any suggestions what could be going on, or how I could debug this > further? Can anyone at least verify that the problem exists on other > amd64 platforms. > > Thanks, > > Angus. 
> > > > spl.py: > -------------------------------- > from numpy import arange, cos, linspace, pi, sin > from scipy.interpolate import splprep, splev > from numpy.random import randn > > # make ascending spiral in 3-space > t=linspace(0,1.75*2*pi,100) > > x = sin(t) > y = cos(t) > z = t > > # add noise > x+= 0.1*randn(*x.shape) > y+= 0.1*randn(*y.shape) > z+= 0.1*randn(*z.shape) > > # spline parameters > s=3.0 # smoothness parameter > k=2 # spline order > nest=-1 # estimate of number of knots needed (-1 = maximal) > > # find the knot points > tckp,u = splprep([x,y,z],s=s,k=k,nest=-1) > > # evaluate spline, including interpolated points > xnew,ynew,znew = splev(linspace(0,1,400),tckp) > > import pylab > pylab.subplot(2,2,1) > data,=pylab.plot(x,y,'bo-',label='data') > fit,=pylab.plot(xnew,ynew,'r-',label='fit') > pylab.legend() > pylab.xlabel('x') > pylab.ylabel('y') > > pylab.subplot(2,2,2) > data,=pylab.plot(x,z,'bo-',label='data') > fit,=pylab.plot(xnew,znew,'r-',label='fit') > pylab.legend() > pylab.xlabel('x') > pylab.ylabel('z') > > pylab.subplot(2,2,3) > data,=pylab.plot(y,z,'bo-',label='data') > fit,=pylab.plot(ynew,znew,'r-',label='fit') > pylab.legend() > pylab.xlabel('y') > pylab.ylabel('z') > > -- > Angus McMorland > email a.mcmorland at auckland.ac.nz > mobile +64-21-155-4906 > > PhD Student, Neurophysiology / Multiphoton & Confocal Imaging > Physiology, University of Auckland > phone +64-9-3737-599 x89707 > > Armourer, Auckland University Fencing > Secretary, Fencing North Inc. > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From agn at noc.soton.ac.uk Wed Aug 30 07:22:04 2006 From: agn at noc.soton.ac.uk (George Nurser) Date: Wed, 30 Aug 2006 12:22:04 +0100 Subject: [SciPy-user] amd64 specific error in fitpack spline interpolation In-Reply-To: <1156932026.713.36.camel@amcnote2.mcmorland.mph.auckland.ac.nz> References: <1156932026.713.36.camel@amcnote2.mcmorland.mph.auckland.ac.nz> Message-ID: On 30 Aug 2006, at 11:00, Angus McMorland wrote: > Hi all, > > I posted about this problem a few months ago, but we didn't get very > far, but the annoyance level of the problem is enough now for me to > have > another go. > > I'm trying to run the spline example from the cookbook (my exact code, > called spl.py, is inlined below). On an ix86 computer this works fine, > and returns the expected graphs, but on an amd64 machine I get the > error > below. Both computers are running debian etch, with recent svn > numpy and > scipy. > > TypeError: array cannot be safely cast to required type >> /usr/lib/python2.4/site-packages/scipy/interpolate/fitpack.py(217) >> splprep() > 216 > t,c,o=_fitpack._parcur(ravel(transpose(x)),w,u,ub,ue,k,task,ipar,s,t, > --> 217 nest,wrk,iwrk,per) > 218 _parcur_cache['u']=o['u'] > > I believe the error is generated by the _parcur call -> once in pdb, > calling it again re-raises the same error. > > ipdb>_fitpack._parcur(ravel(transpose(x)),w,u,ub,ue,k,task,ipar,s,t, > nest,wrk,iwrk,per) Same problem on our Opterons. numpy v 2631 scipy v 1614 built & linked to acml [8]nohow@/noc/users/agn/python> python amdtest.py Traceback (most recent call last): File "amdtest.py", line 23, in ? tckp,u = splprep([x,y,z],s=s,k=k,nest=-1) File "/data/jrd/mod1/agn/ext/Linux/lib64/python2.3/site-packages/ scipy/interpolate/fitpack.py", line 217, in splprep TypeError: array cannot be safely cast to required type -George. 
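For anyone who wants to check this on another 64-bit machine without pulling in pylab, the cookbook script can be trimmed down to just the splprep/splev calls that trigger the error. This is simply a reduced version of the spl.py posted earlier in the thread, with the noise and the plotting removed:

from numpy import linspace, sin, cos, pi
from scipy.interpolate import splprep, splev

# same ascending spiral and spline parameters as in spl.py
t = linspace(0, 1.75*2*pi, 100)
x, y, z = sin(t), cos(t), t

# this call raises "TypeError: array cannot be safely cast ..." on the affected amd64 builds
tckp, u = splprep([x, y, z], s=3.0, k=2, nest=-1)

# if splprep succeeds, evaluating the spline should too
xnew, ynew, znew = splev(linspace(0, 1, 400), tckp)
print 'splprep/splev OK,', len(xnew), 'points'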
From dd55 at cornell.edu Wed Aug 30 08:24:46 2006 From: dd55 at cornell.edu (Darren Dale) Date: Wed, 30 Aug 2006 08:24:46 -0400 Subject: [SciPy-user] amd64 specific error in fitpack spline interpolation In-Reply-To: References: <1156932026.713.36.camel@amcnote2.mcmorland.mph.auckland.ac.nz> Message-ID: <200608300824.46389.dd55@cornell.edu> On Wednesday 30 August 2006 07:22, George Nurser wrote: > On 30 Aug 2006, at 11:00, Angus McMorland wrote: > > Hi all, > > > > I posted about this problem a few months ago, but we didn't get very > > far, but the annoyance level of the problem is enough now for me to > > have > > another go. > > > > I'm trying to run the spline example from the cookbook (my exact code, > > called spl.py, is inlined below). On an ix86 computer this works fine, > > and returns the expected graphs, but on an amd64 machine I get the > > error > > below. Both computers are running debian etch, with recent svn > > numpy and > > scipy. > > > > TypeError: array cannot be safely cast to required type > > > >> /usr/lib/python2.4/site-packages/scipy/interpolate/fitpack.py(217) > >> splprep() > > > > 216 > > t,c,o=_fitpack._parcur(ravel(transpose(x)),w,u,ub,ue,k,task,ipar,s,t, > > --> 217 nest,wrk,iwrk,per) > > 218 _parcur_cache['u']=o['u'] > > > > I believe the error is generated by the _parcur call -> once in pdb, > > calling it again re-raises the same error. > > > > ipdb>_fitpack._parcur(ravel(transpose(x)),w,u,ub,ue,k,task,ipar,s,t, > > nest,wrk,iwrk,per) > > Same problem on our Opterons. > numpy v 2631 > scipy v 1614 > built & linked to acml I also see it on an Athlon, with numpy 1.0b5.dev3094, scipy 0.5.1.dev2184, linked to atlas 3.7.11. From xpeng at cse.cuhk.edu.hk Wed Aug 30 10:12:16 2006 From: xpeng at cse.cuhk.edu.hk (Xiang Peng) Date: Wed, 30 Aug 2006 22:12:16 +0800 Subject: [SciPy-user] Could you help me? Message-ID: <001201c6cc3e$4f90b4c0$9f59bd89@PC89159> Hi all, Hello, I am a freshman here. And I am a postgraduate student in Chinese University of Hong Kong. Currently I am trying to install numpy and scipy for my project. It's fine with numpy. But somehow I can't get scipy into working properly. When I tried to import scipy, it always says: RuntimeError: module compiled against version 1000000 of C-API but this version of numpy is 1000002. Could somebody help me? Thank you very very much. Best Regards, Xiang Peng -------------- next part -------------- An HTML attachment was scrubbed... URL: From rshepard at appl-ecosys.com Wed Aug 30 10:31:35 2006 From: rshepard at appl-ecosys.com (Rich Shepard) Date: Wed, 30 Aug 2006 07:31:35 -0700 (PDT) Subject: [SciPy-user] Could you help me? In-Reply-To: <001201c6cc3e$4f90b4c0$9f59bd89@PC89159> References: <001201c6cc3e$4f90b4c0$9f59bd89@PC89159> Message-ID: On Wed, 30 Aug 2006, Xiang Peng wrote: > It's fine with numpy. But somehow I can't get scipy into working properly. > When I tried to import scipy, it always says: RuntimeError: module > compiled against version 1000000 of C-API but this version of numpy is > 1000002. Xiang, You need the latest SVN builds for both of them. On the SciPy web site's download page you'll find the syntax to check out each. Make a directory for numpy/ and one for scipy/, cd to each in turn and run 'svn co ...' to get the trunk for each. If these directories already exist, 'rm -rf *' to get a clean install. Then, as root in the numpy/ directory re-run 'python setup.py install.' Now, before you do anything with SciPy, follow the links and install BLAS, LAPACK, and ATLAS. 
Once those are successfully built in subdirectories within scipy/, then run 'python setup.py install' on that code. HTH, Rich -- Richard B. Shepard, Ph.D. | The Environmental Permitting Applied Ecosystem Services, Inc.(TM) | Accelerator Voice: 503-667-4517 Fax: 503-667-8863 From stefan at sun.ac.za Wed Aug 30 11:33:21 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 30 Aug 2006 17:33:21 +0200 Subject: [SciPy-user] amd64 specific error in fitpack spline interpolation In-Reply-To: <1156932026.713.36.camel@amcnote2.mcmorland.mph.auckland.ac.nz> References: <1156932026.713.36.camel@amcnote2.mcmorland.mph.auckland.ac.nz> Message-ID: <20060830153321.GU23074@mentat.za.net> On Wed, Aug 30, 2006 at 10:00:26PM +1200, Angus McMorland wrote: > Hi all, > > I posted about this problem a few months ago, but we didn't get very > far, but the annoyance level of the problem is enough now for me to have > another go. > > I'm trying to run the spline example from the cookbook (my exact code, > called spl.py, is inlined below). On an ix86 computer this works fine, > and returns the expected graphs, but on an amd64 machine I get the error > below. Both computers are running debian etch, with recent svn numpy and > scipy. > > TypeError: array cannot be safely cast to required type > > /usr/lib/python2.4/site-packages/scipy/interpolate/fitpack.py(217)splprep() > 216 > t,c,o=_fitpack._parcur(ravel(transpose(x)),w,u,ub,ue,k,task,ipar,s,t, > --> 217 nest,wrk,iwrk,per) > 218 _parcur_cache['u']=o['u'] Hi Angus, This is fixed in SVN (it works on the AMD Athlon 64 that I have access to). The C-wrappers expect 32-bit integers, instead of the 64-bit integers associated with 'int' on your platform. Can any of the developers tell me what the correct way is of handling 64-bit integers in C, using the numpy API? Or more specifically, of handling the default system type integer. Regards St?fan From edmondo_minisci at yahoo.it Wed Aug 30 12:18:53 2006 From: edmondo_minisci at yahoo.it (Edmondo Minisci) Date: Wed, 30 Aug 2006 18:18:53 +0200 Subject: [SciPy-user] amd64 specific error in fitpack spline, interpolation Message-ID: <44F5BA6D.7050206@yahoo.it> Dear Angus, I run your file on my amd64 dual core with SLES9. It apparently works fine. Edmondo Minisci From peridot.faceted at gmail.com Wed Aug 30 12:39:35 2006 From: peridot.faceted at gmail.com (A. M. Archibald) Date: Wed, 30 Aug 2006 12:39:35 -0400 Subject: [SciPy-user] amd64 specific error in fitpack spline interpolation In-Reply-To: <20060830153321.GU23074@mentat.za.net> References: <1156932026.713.36.camel@amcnote2.mcmorland.mph.auckland.ac.nz> <20060830153321.GU23074@mentat.za.net> Message-ID: On 30/08/06, Stefan van der Walt wrote: > Hi Angus, > > This is fixed in SVN (it works on the AMD Athlon 64 that I have access > to). The C-wrappers expect 32-bit integers, instead of the 64-bit > integers associated with 'int' on your platform. > > Can any of the developers tell me what the correct way is of handling > 64-bit integers in C, using the numpy API? Or more specifically, of > handling the default system type integer. In this case, the problem is not with C but with Fortran. The fitpack routines, and many others in scipy, are written in Fortran, which specifies its integer size machine-independently. So fortran INTEGERS are always int32s. Unfortunately, there are places in the scipy wrappers where people have assumed that they are ints. A. M. 
Archibald From wbaxter at gmail.com Wed Aug 30 17:19:58 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Thu, 31 Aug 2006 06:19:58 +0900 Subject: [SciPy-user] Academic Question? In-Reply-To: References: <44F3664A.9080405@pair.com> Message-ID: What is 1ofC? Googling for it didn't turn up anything useful. --bb On 8/30/06, Conor Robinson wrote: > Thanks Aarre, > > I have both of those resources, they are good, but I'm really looking > for a study comparing encoding schema. I've been combing the > University California libraries, however most studies use a "hand > wave" gesture or don't mention how they encode whatsoever. I've > developed a basic technique for compressing 1ofC, but I would like to > see what others have done. In bishop around p. 230 he notes you can > use cross entropy with sigmoid for probabilities. > > Conor > From redredliuyan at hotmail.com Wed Aug 30 20:41:59 2006 From: redredliuyan at hotmail.com (li red) Date: Thu, 31 Aug 2006 00:41:59 +0000 Subject: [SciPy-user] About cholesky funtion Message-ID: Hi, In my procedure, I use the cholesky(a) function in the scipy.linalg.decomp model. If the max number of row and column in matrix a is less equal 15, cholesky(a) is OK. But when the max number is larger than 15, there will have a error message: ?File "C:\Python24\Lib\site-packages\scipy\linalg\decomp.py", line 413, in cholesky if info>0: raise LinAlgError, "matrix not positive definite" scipy.linalg.basic.LinAlgError: matrix not positive definite?. My matrix is a real symmetric matrix and positive definite, so I don?t know why the error emerges. Have there some limits on the number of row and column in a matrix? the attachment is my code. Please give me your help. Thanks! Best wishes, yan _________________________________________________________________ ?????????????? MSN Messenger: http://messenger.msn.com/cn -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: temp1.py URL: From robert.kern at gmail.com Wed Aug 30 20:53:58 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 30 Aug 2006 19:53:58 -0500 Subject: [SciPy-user] About cholesky funtion In-Reply-To: References: Message-ID: <44F63326.7090601@gmail.com> li red wrote: > Hi, > In my procedure, I use the cholesky(a) function in the > scipy.linalg.decomp model. If the max number of row and column in matrix > a is less equal 15, cholesky(a) is OK. But when the max number is larger > than 15, there will have a error message: > ?File "C:\Python24\Lib\site-packages\scipy\linalg\decomp.py", line 413, > in cholesky > if info>0: raise LinAlgError, "matrix not positive definite" > scipy.linalg.basic.LinAlgError: matrix not positive definite?. > My matrix is a real symmetric matrix and positive definite, so I don?t > know why the error emerges. Have there some limits on the number of row > and column in a matrix? the attachment is my code. Please give me your > help. Your 20x20 matrix is not positive definite. 
In [4]: numpy.linalg.eigvals(numpy.array(r2)) Out[4]: array([ 7.72597025e+00, 3.54906260e+00, 2.48954386e+00, 2.02734388e+00, 1.35138284e+00, 8.68691981e-01, 5.44877668e-01, 5.97051355e-01, 3.35814212e-01, 2.38247257e-01, 1.40268138e-01, 1.15689284e-01, 1.60566735e-02, 1.08883148e-08, 8.16773522e-09, -9.54799978e-09, -3.75866766e-09, -1.92201032e-09, 1.99914405e-09, 2.30997213e-09]) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From xpeng at cse.cuhk.edu.hk Wed Aug 30 23:27:25 2006 From: xpeng at cse.cuhk.edu.hk (Xiang Peng) Date: Thu, 31 Aug 2006 11:27:25 +0800 Subject: [SciPy-user] Could you help me? References: <001201c6cc3e$4f90b4c0$9f59bd89@PC89159> Message-ID: <004501c6ccad$64289eb0$9f59bd89@PC89159> Dear Rich, Thanks a lot for your reply. I am trying it now. Thanks again, Xiang ----- Original Message ----- From: "Rich Shepard" To: "SciPy Users List" Sent: Wednesday, August 30, 2006 10:31 PM Subject: Re: [SciPy-user] Could you help me? > On Wed, 30 Aug 2006, Xiang Peng wrote: > >> It's fine with numpy. But somehow I can't get scipy into working >> properly. >> When I tried to import scipy, it always says: RuntimeError: module >> compiled against version 1000000 of C-API but this version of numpy is >> 1000002. > > Xiang, > > You need the latest SVN builds for both of them. On the SciPy web site's > download page you'll find the syntax to check out each. Make a directory > for > numpy/ and one for scipy/, cd to each in turn and run 'svn co ...' to get > the trunk for each. If these directories already exist, 'rm -rf *' to get > a > clean install. > > Then, as root in the numpy/ directory re-run 'python setup.py install.' > > Now, before you do anything with SciPy, follow the links and install > BLAS, > LAPACK, and ATLAS. Once those are successfully built in subdirectories > within scipy/, then run 'python setup.py install' on that code. > > HTH, > > Rich > > -- > Richard B. Shepard, Ph.D. | The Environmental Permitting > Applied Ecosystem Services, Inc.(TM) | Accelerator > Voice: 503-667-4517 Fax: > 503-667-8863 > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From xpeng at cse.cuhk.edu.hk Thu Aug 31 04:13:53 2006 From: xpeng at cse.cuhk.edu.hk (Xiang Peng) Date: Thu, 31 Aug 2006 16:13:53 +0800 Subject: [SciPy-user] Could you help me? References: <001201c6cc3e$4f90b4c0$9f59bd89@PC89159> Message-ID: <014401c6ccd5$694b4a00$9f59bd89@PC89159> Hi Rich, I succeeded following your suggestions. Thank you so much. Best Regards, Xiang Peng ----- Original Message ----- From: "Rich Shepard" To: "SciPy Users List" Sent: Wednesday, August 30, 2006 10:31 PM Subject: Re: [SciPy-user] Could you help me? > On Wed, 30 Aug 2006, Xiang Peng wrote: > >> It's fine with numpy. But somehow I can't get scipy into working >> properly. >> When I tried to import scipy, it always says: RuntimeError: module >> compiled against version 1000000 of C-API but this version of numpy is >> 1000002. > > Xiang, > > You need the latest SVN builds for both of them. On the SciPy web site's > download page you'll find the syntax to check out each. Make a directory > for > numpy/ and one for scipy/, cd to each in turn and run 'svn co ...' to get > the trunk for each. If these directories already exist, 'rm -rf *' to get > a > clean install. 
> > Then, as root in the numpy/ directory re-run 'python setup.py install.' > > Now, before you do anything with SciPy, follow the links and install > BLAS, > LAPACK, and ATLAS. Once those are successfully built in subdirectories > within scipy/, then run 'python setup.py install' on that code. > > HTH, > > Rich > > -- > Richard B. Shepard, Ph.D. | The Environmental Permitting > Applied Ecosystem Services, Inc.(TM) | Accelerator > Voice: 503-667-4517 Fax: > 503-667-8863 > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From conor.robinson at gmail.com Thu Aug 31 12:41:32 2006 From: conor.robinson at gmail.com (Conor Robinson) Date: Thu, 31 Aug 2006 09:41:32 -0700 Subject: [SciPy-user] Academic Question? In-Reply-To: References: <44F3664A.9080405@pair.com> Message-ID: One 1ofC is a way of encoding categories for a neural network or other model For example: attribute: color red 100 green 010 blue 001 You can run into problems, however if you have an attribute with lets say 400 different categories, aka the curse of dimensionality. On one had you would like your net to consider all categories (given there are ample examples of each in your training set). RBF functions are especially bad at this (look into the biological premise of the RBF), but fast to train. Feed forward nets are better, but slower to train. In a typical sample you may not have examples of all 400 categories and you would like to compress your input so the net is looking at statically relevant information. Im looking into a GA process for compressing 1ofC, and in general papers or a plethora of information on encoding and represention. In my opinion the represention problem is the cornerstone of any model in most cases. On 8/30/06, Bill Baxter wrote: > What is 1ofC? Googling for it didn't turn up anything useful. > > --bb > > On 8/30/06, Conor Robinson wrote: > > Thanks Aarre, > > > > I have both of those resources, they are good, but I'm really looking > > for a study comparing encoding schema. I've been combing the > > University California libraries, however most studies use a "hand > > wave" gesture or don't mention how they encode whatsoever. I've > > developed a basic technique for compressing 1ofC, but I would like to > > see what others have done. In bishop around p. 230 he notes you can > > use cross entropy with sigmoid for probabilities. > > > > Conor > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From atriolo at research.telcordia.com Thu Aug 31 14:40:35 2006 From: atriolo at research.telcordia.com (Tony Triolo) Date: Thu, 31 Aug 2006 14:40:35 -0400 Subject: [SciPy-user] Problem with Windows Scipy 0.5 Message-ID: <44F72D23.8080206@research.telcordia.com> I've been trying to use Scipy 0.5 with Python 2.4 for Windows and can't seem to get it to play nicely with any version of Numpy. Here is what I have tried along with the result: ********************************* Python 2.4.3 (#69, Mar 29 2006, 17:35:34) [MSC v.1310 32 bit (Intel)] on win32 Type "copyright", "credits" or "license()" for more information. **************************************************************** Personal firewall software may warn about the connection IDLE makes to its subprocess using this computer's internal loopback interface. 
This connection is not visible on any external interface and no data is sent to or received from the Internet. **************************************************************** IDLE 1.1.3 >>> import numpy,scipy Overwriting info= from scipy.misc.helpmod (was from numpy.lib.utils) Overwriting who= from scipy.misc.common (was from numpy.lib.utils) Overwriting source= from scipy.misc.helpmod (was from numpy.lib.utils) RuntimeError: module compiled against version 1000000 of C-API but this version of numpy is 1000002 ********************************** The installer used for Scipy was : scipy-0.5.0.win32-py2.4.exe and the installer for Numpy was : numpy-1.0b4.win32-py2.4.exe I've tried Numpy 0.9.8 and 0.9.6 with the same result. I've also tried on Windows XP and 2000 machines. Before I go trying to make my own Numpy and Scipy builds, any ideas on how I can successfully install SciPy on a Windows machine with the available binaries? I don't have an out of the ordinary PC configuration and figured the binaries on the download site should work just fine. Has anyone tested the SciPy install on Win2000 or WinXP Pentium 4 machines? Thanks. -Tony ----------------------------------------------- Anthony A. Triolo, Ph.D. Senior Scientist Telcordia Technologies (732) 758 - 3098 ----------------------------------------------- From davidlinke at tiscali.de Thu Aug 31 15:22:52 2006 From: davidlinke at tiscali.de (David Linke) Date: Thu, 31 Aug 2006 21:22:52 +0200 Subject: [SciPy-user] Problem with Windows Scipy 0.5 In-Reply-To: <44F72D23.8080206@research.telcordia.com> References: <44F72D23.8080206@research.telcordia.com> Message-ID: <44F7370C.2040401@tiscali.de> Tony Triolo wrote: > I've been trying to use Scipy 0.5 with Python 2.4 for Windows and can't > seem to get it to play nicely with any version of Numpy. Here is what I > have tried along with the result: > (...) > RuntimeError: module compiled against version 1000000 of C-API but this > version of numpy is 1000002 > > ********************************** > > The installer used for Scipy was : scipy-0.5.0.win32-py2.4.exe > and the installer for Numpy was : numpy-1.0b4.win32-py2.4.exe Hi Tony! You have to use a (C-API) compatible Numpy version which should be numpy-1.0b1.win32-py2.4.exe for your scipy download. ...but it seems like this is no longer available on SF for download. If you don't want to compile yourself you may also give http://code.enthought.com/enthon/ a try. The download page on the wiki is not pointing out this C-API compatibility issue clearly. To avoid confusion it would be good to add the numpy version which was used for compiling scipy into the name of the binary in the future. E.g. scipy-0.5.0-numpy1.0.1b1-win32-py2.4.exe Regards, David From jallikattu at googlemail.com Wed Aug 9 04:16:40 2006 From: jallikattu at googlemail.com (morovia morovia) Date: Wed, 09 Aug 2006 08:16:40 -0000 Subject: [SciPy-user] global fitting of data sets with common parameters Message-ID: <72da94d60608090115j371a6c46odfd1a8c384adba2c@mail.gmail.com> Hello I have data sets and would like to fit them globally with common parameters. Is there a way to do this using existing scipy modules leastsq. Any pointers for algo and suggestions are welcome. Thanks Morovia. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 
From jallikattu at googlemail.com Thu Aug 31 09:00:34 2006 From: jallikattu at googlemail.com (morovia morovia) Date: Thu, 31 Aug 2006 13:00:34 -0000 Subject: [SciPy-user] global fitting of data sets with linked parameters Message-ID: <72da94d60608310600o36677194l30fed34ccacea55e@mail.gmail.com> Hello, I have data sets and would like to fit them globally with common parameters. Is there a way to do this using the existing scipy leastsq function? Any pointers to algorithms and suggestions are welcome. Thanks, Morovia. -------------- next part -------------- An HTML attachment was scrubbed... URL: 
From jallikattu at googlemail.com Thu Aug 31 09:05:00 2006 From: jallikattu at googlemail.com (morovia morovia) Date: Thu, 31 Aug 2006 13:05:00 -0000 Subject: [SciPy-user] global fitting of data sets with linked parameters Message-ID: <72da94d60608310604h10a50327vb458bf13d64c5ed6@mail.gmail.com> Hello, I have data sets and would like to fit them globally with common parameters. Is there a way to do this using the existing scipy leastsq function? Any pointers to algorithms and suggestions are welcome. Thanks, Morovia. -------------- next part -------------- An HTML attachment was scrubbed... URL: 
From jallikattu at googlemail.com Sat Aug 5 04:03:41 2006 From: jallikattu at googlemail.com (morovia morovia) Date: Sat, 05 Aug 2006 08:03:41 -0000 Subject: [SciPy-user] global fitting Message-ID: <72da94d60608050103v47610638y9fc9e993d2b600ff@mail.gmail.com> Hello, I have data sets and would like to fit them globally with common parameters. Is there a way to do this using scipy? Any pointers to algorithms and suggestions are welcome. Thanks, Morovia. -------------- next part -------------- An HTML attachment was scrubbed... URL: 
From jallikattu at googlemail.com Tue Aug 8 04:59:09 2006 From: jallikattu at googlemail.com (morovia morovia) Date: Tue, 08 Aug 2006 08:59:09 -0000 Subject: [SciPy-user] on global fitting for sets of data Message-ID: <72da94d60608080159x580f710n38b28fab3932e794@mail.gmail.com> Hello, I have data sets and would like to fit them globally with common parameters. Is there a way to do this using the existing scipy leastsq function? Any pointers to algorithms and suggestions are welcome. Thanks, Morovia. -------------- next part -------------- An HTML attachment was scrubbed... URL:
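Regarding the global-fit question above: one common way to do this with the existing scipy.optimize.leastsq is to put the shared parameters and the per-dataset parameters into a single parameter vector, and have the residual function return the concatenated residuals of all data sets, so that a single leastsq call fits everything simultaneously. The sketch below uses a simple exponential-decay model with a shared rate and per-dataset amplitudes purely as an illustration; the model, the names and the synthetic data are assumptions, not anything posted in the thread:

import numpy
from scipy.optimize import leastsq

def model(A, k, t):
    # illustrative model: single exponential with amplitude A and shared rate k
    return A * numpy.exp(-k * t)

def residuals(p, t1, y1, t2, y2):
    # p = [k, A1, A2]; k is shared by both data sets, A1 and A2 are per-dataset
    k, A1, A2 = p
    return numpy.concatenate((y1 - model(A1, k, t1),
                              y2 - model(A2, k, t2)))

# synthetic data sets with a common decay rate k = 0.7
t1 = numpy.linspace(0.0, 5.0, 50)
t2 = numpy.linspace(0.0, 8.0, 80)
y1 = model(2.0, 0.7, t1) + 0.05 * numpy.random.randn(len(t1))
y2 = model(5.0, 0.7, t2) + 0.05 * numpy.random.randn(len(t2))

p0 = [1.0, 1.0, 1.0]   # initial guess for [k, A1, A2]
pbest, ier = leastsq(residuals, p0, args=(t1, y1, t2, y2))
print pbest             # fitted shared k, followed by A1 and A2

If the data sets have different numbers of per-dataset parameters, the same pattern still works: slice the parameter vector accordingly inside the residual function and concatenate whatever residual arrays result.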