From robert.kern at gmail.com Sat Apr 1 20:51:33 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 01 Apr 2006 19:51:33 -0600 Subject: [SciPy-user] Statistics review months Message-ID: <442F2E25.9040705@gmail.com> In the interest of improving the quality of the scipy.stats package, I hereby declare April and May of 2006 to be Statistics Review Months. I propose that we set ourselves a goal to review each function in stats.py and morestats.py (and a few others) for correctness and completeness of implementation by the end of May. By my count, that's about 2.5 functions every day. Surely this is a reasonable amount of effort for a rather large payoff: a robust, well-tested and thorough statistics library. I have added a Wiki page describing the details: http://projects.scipy.org/scipy/scipy/wiki/StatisticsReview Barring any objections, I will be irretrievably creating the ~150 tickets or so for all of the functions to be reviewed later tonight. So if you object, act fast! [Disclosure: this idea isn't mine. Eric Jones mentioned it to me once, and I'm just running with it.] -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From gruben at bigpond.net.au Sat Apr 1 22:49:46 2006 From: gruben at bigpond.net.au (Gary Ruben) Date: Sun, 02 Apr 2006 13:49:46 +1000 Subject: [SciPy-user] docstring improvement Message-ID: <442F49DA.5000200@bigpond.net.au> Just looking at Robert Kern's checklist from a user's p.o.v., it would be nice to add some simple (one-liner) function call examples to each docstring. This is something I think is lacking from most functions in scipy. Gary R. Robert Kern wrote: > In the interest of improving the quality of the scipy.stats package, > I hereby declare April and May of 2006 to be Statistics Review > Months. 
I propose that we set ourselves a goal to review each > function in stats.py and morestats.py (and a few others) for > correctness and completeness of implementation by the end of May. By > my count, that's about 2.5 functions every day. Surely this is a > reasonable amount of effort for a rather large payoff: a robust, > well-tested and thorough statistics library. > > I have added a Wiki page describing the details: > http://projects.scipy.org/scipy/scipy/wiki/StatisticsReview From jonathan.taylor at stanford.edu Sat Apr 1 23:20:02 2006 From: jonathan.taylor at stanford.edu (Jonathan Taylor) Date: Sat, 01 Apr 2006 20:20:02 -0800 Subject: [SciPy-user] [SciPy-dev] Statistics review months In-Reply-To: <442F2E25.9040705@gmail.com> References: <442F2E25.9040705@gmail.com> Message-ID: <442F50F2.2060303@stanford.edu> on this topic, as an honest-to-goodness statistician it might be nice to see more statistical modelling in scipy. i know Rpy exists, but the interface is not very pythonic. i have some "home-brew" modules for linear regression, formula building (something like R's) and a few other things. if it went into something like scipy, it might gain from the criticisms of others.... is there any interest in making the equivalent of a scipy.stats.models module? i think an easily (medium-term) achievable goal is: i) linear (least-squares) regression models with/without weights or non-diagonal covariance matrices (in R: lm + more) ii) generalized linear models (in R: glm) iii) iteratively reweighted least squares algorithms (glm is a special case), i.e. robust regression (in R: rlm). iv) ordinary least squares multivariate linear models (i.e. multivariate responses) some of these models can easily be "broadcasted", others not so easily.... further goals are more general models: classification, constrained model fitting, model selection.... for some of these things, it may not be worth duplicating R's (or other packages') efforts. 
-- jonathan

Robert Kern wrote:
>In the interest of improving the quality of the scipy.stats package, I hereby
>declare April and May of 2006 to be Statistics Review Months. I propose that we
>set ourselves a goal to review each function in stats.py and morestats.py (and a
>few others) for correctness and completeness of implementation by the end of
>May. By my count, that's about 2.5 functions every day. Surely this is a
>reasonable amount of effort for a rather large payoff: a robust, well-tested and
>thorough statistics library.
>
>I have added a Wiki page describing the details:
>
>  http://projects.scipy.org/scipy/scipy/wiki/StatisticsReview
>
>Barring any objections, I will be irretrievably creating the ~150 tickets or so
>for all of the functions to be reviewed later tonight. So if you object, act fast!
>
>[Disclosure: this idea isn't mine. Eric Jones mentioned it to me once, and I'm
>just running with it.]

--
------------------------------------------------------------------------
I'm part of the Team in Training: please support our efforts for the
Leukemia and Lymphoma Society!

http://www.active.com/donate/tntsvmb/tntsvmbJTaylor

GO TEAM !!!
------------------------------------------------------------------------
Jonathan Taylor                     Tel: 650.723.9230
Dept. of Statistics                 Fax: 650.725.8977
Sequoia Hall, 137                   www-stat.stanford.edu/~jtaylo
390 Serra Mall
Stanford, CA 94305

From imcsee at gmail.com Sun Apr 2 04:33:25 2006
From: imcsee at gmail.com (imcs ee)
Date: Sun, 2 Apr 2006 16:33:25 +0800
Subject: [SciPy-user] ask help for scipy install
Message-ID:

In Ubuntu 5.10 I installed scipy with

  apt-get install scipy

and ran the unit tests:

  from scipy import *
  test(10)

It shows the result below. Is there any guide to solving it?
Thanks in advance.

======================================================================
FAIL: check_expon (scipy.stats.morestats.test_morestats.test_anderson)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/scipy/stats/tests/test_morestats.py", line 55, in check_expon
    assert_array_less(A, crit[-2:])
  File "/usr/lib/python2.4/site-packages/scipy_test/testing.py", line 708, in assert_array_less
    assert cond,\
AssertionError:
Arrays are not less-ordered (mismatch 50.0%):
	Array 1: 1.65613125073
	Array 2: [ 1.587 1.9339999999999999]

----------------------------------------------------------------------
Ran 986 tests in 102.422s

FAILED (failures=1)
Out[3]:
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From elcorto at gmx.net Sun Apr 2 12:09:20 2006
From: elcorto at gmx.net (Steve Schmerler)
Date: Sun, 02 Apr 2006 18:09:20 +0200
Subject: [SciPy-user] building numpy fails
Message-ID: <442FF730.1090502@gmx.net>

The latest numpy svn checkout fails to build (Python 2.3.5) when trying
to call tempfile.mktemp(). The module index says:

mktemp([suffix[, prefix[, dir]]])
    Deprecated since release 2.3. Use mkstemp() instead.

[...]
------------------------------------------------------------------------------------
[...]
Traceback (most recent call last):
  File "setup.py", line 84, in ?
    setup_package()
  File "setup.py", line 77, in setup_package
    setup( configuration=configuration )
  File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/core.py", line 152, in setup
    return old_setup(**new_attr)
  File "/usr/lib/python2.3/distutils/core.py", line 149, in setup
    dist.run_commands()
  File "/usr/lib/python2.3/distutils/dist.py", line 907, in run_commands
    self.run_command(cmd)
  File "/usr/lib/python2.3/distutils/dist.py", line 927, in run_command
    cmd_obj.run()
  File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/command/install.py", line 11, in run
    r = old_install.run(self)
  File "/usr/lib/python2.3/distutils/command/install.py", line 506, in run
    self.run_command('build')
  File "/usr/lib/python2.3/distutils/cmd.py", line 333, in run_command
    self.distribution.run_command(command)
  File "/usr/lib/python2.3/distutils/dist.py", line 927, in run_command
    cmd_obj.run()
  File "/usr/lib/python2.3/distutils/command/build.py", line 107, in run
    self.run_command(cmd_name)
  File "/usr/lib/python2.3/distutils/cmd.py", line 333, in run_command
    self.distribution.run_command(command)
  File "/usr/lib/python2.3/distutils/dist.py", line 927, in run_command
    cmd_obj.run()
  File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/command/build_src.py", line 84, in run
    self.build_sources()
  File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/command/build_src.py", line 103, in build_sources
    self.build_extension_sources(ext)
  File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/command/build_src.py", line 209, in build_extension_sources
    sources = self.generate_sources(sources, ext)
  File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/command/build_src.py", line 267, in generate_sources
    source = func(extension, build_dir)
  File "numpy/core/setup.py", line 35, in generate_config_h
    library_dirs = default_lib_dirs)
  File "/usr/lib/python2.3/distutils/command/config.py", line 278, in try_run
    self._check_compiler()
  File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/command/config.py", line 35, in _check_compiler
    self.fcompiler.customize(self.distribution)
  File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/fcompiler/__init__.py", line 294, in customize
    oflags = self.__get_flags(self.get_flags_opt,'FOPT',(conf,'opt'))
  File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/fcompiler/__init__.py", line 511, in __get_flags
    var = command()
  File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/fcompiler/gnu.py", line 122, in get_flags_opt
    if self.get_version()<='3.3.3':
  File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/ccompiler.py", line 251, in CCompiler_get_version
    status, output = exec_command(cmd,use_tee=0)
  File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/exec_command.py", line 254, in exec_command
    use_tee=use_tee,
  File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/exec_command.py", line 279, in _exec_command_posix
    tmpfile = tempfile.mktemp()
AttributeError: 'module' object has no attribute 'mktemp'
------------------------------------------------------------------------------------

Replacing mktemp() by mkstemp() doesn't help:

[...]
  File "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/exec_command.py", line 279, in _exec_command_posix
    tmpfile = tempfile.mkstemp()
AttributeError: 'module' object has no attribute 'mkstemp'

but doing it in an interactive session works:

In [1]: import tempfile

In [2]: tempfile.mkstemp()
Out[2]: (3, '/tmp/tmp0Iv8K7')

What's going on?

cheers,
steve

--
Random number generation is the art of producing pure gibberish as
quickly as possible.

From ryanlists at gmail.com Sun Apr 2 13:55:47 2006
From: ryanlists at gmail.com (Ryan Krauss)
Date: Sun, 2 Apr 2006 13:55:47 -0400
Subject: [SciPy-user] ask help for scipy install
In-Reply-To:
References:
Message-ID:

I run scipy in Ubuntu Breezy, so it definitely works.
But I installed from source, so I can't tell you what might be wrong with the package. Installing from source in Ubuntu isn't too painful since Atlas, Blas, Lapack and all that are available as packages (that I know do work). If you can't resolve your issue with the package install, let me know if you want help installing from source. Ryan On 4/2/06, imcs ee wrote: > in ubuntu5.10 > i install scipy with apt-get install scipy > > and run the unittest > > from scipy import * > test(10) > > it shows the result as below. is here any guide to solve it? thanks in > advance > > > ====================================================================== > FAIL: check_expon > (scipy.stats.morestats.test_morestats.test_anderson) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "/usr/lib/python2.4/site-packages/scipy/stats/tests/test_morestats.py", > line 55, in check_expon > assert_array_less(A, crit[-2:]) > File > "/usr/lib/python2.4/site-packages/scipy_test/testing.py", > line 708, in > assert_array_less > assert cond,\ > AssertionError: > Arrays are not less-ordered (mismatch 50.0%): > Array 1: 1.65613125073 > Array 2: [ 1.587 1.9339999999999999] > > > ---------------------------------------------------------------------- > Ran 986 tests in 102.422s > > FAILED (failures=1) > Out[3]: > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > > > From robert.kern at gmail.com Sun Apr 2 14:45:41 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 02 Apr 2006 13:45:41 -0500 Subject: [SciPy-user] building numpy fails In-Reply-To: <442FF730.1090502@gmx.net> References: <442FF730.1090502@gmx.net> Message-ID: <44301BD5.1040601@gmail.com> Steve Schmerler wrote: > The latest numpy svn checkout fails to build (Python 2.3.5) when trying > to call tempfile.mktemp(). 
The module index says: > > mktemp([suffix[, prefix[, dir]]]) > Deprecated since release 2.3. Use mkstemp() instead. > [...] > File > "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/exec_command.py", > line 279, in _exec_command_posix > tmpfile = tempfile.mktemp() > AttributeError: 'module' object has no attribute 'mktemp' Do you have a file tempfile.py sitting around that isn't the standard library's tempfile.py? -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From alan at ajackson.org Sun Apr 2 14:51:45 2006 From: alan at ajackson.org (Alan Jackson) Date: Sun, 2 Apr 2006 13:51:45 -0500 Subject: [SciPy-user] [SciPy-dev] Statistics review months In-Reply-To: <442F50F2.2060303@stanford.edu> References: <442F2E25.9040705@gmail.com> <442F50F2.2060303@stanford.edu> Message-ID: <20060402135145.c8670067.alan@ajackson.org> On Sat, 01 Apr 2006 20:20:02 -0800 Jonathan Taylor wrote: > on this topic, as an honest-to-goodness statistician it might be nice to > see more statistical modelling in scipy. i know Rpy exists, but the > interface is not very pythonic. > > i have some "home-brew" modules for linear regression, formula building > (something like R's) and a few other things. if it went into something > like scipy, it might gain from the criticisms of others.... > > is there any interest in making the equivalent of a > > scipy.stats.models > > module? > > i think an easily (medium-term) achievable goal is: > > i) linear (least-squares) regression models with/without weights or > non-diagonal covariance matrices (in R: lm + more) > > ii) generalized linear models (in R: glm) > > iii) iteratively reweighted least squares algorithms (glm is a special > case), i.e. robust regression (in R: rlm). I'm a big fan of R and of rlm in particular. 
I have to agree with your comments about Rpy, though I think their plans
for it head in the right direction and give hope of a better interface.
But, yes, I would support adding those capabilities to SciPy. I have Rpy
accessing rlm in a little product right now, and it would certainly
simplify life!

--
-----------------------------------------------------------------------
| Alan K. Jackson        | To see a World in a Grain of Sand       |
| alan at ajackson.org     | And a Heaven in a Wild Flower,          |
| www.ajackson.org       | Hold Infinity in the palm of your hand  |
| Houston, Texas         | And Eternity in an hour. - Blake        |
-----------------------------------------------------------------------

From mantha at chem.unr.edu Sun Apr 2 18:01:02 2006
From: mantha at chem.unr.edu (Jordan Mantha)
Date: Sun, 02 Apr 2006 15:01:02 -0700
Subject: [SciPy-user] ask help for scipy install
In-Reply-To:
References:
Message-ID: <4430499E.3080703@chem.unr.edu>

Ryan Krauss wrote:
> I run scipy in Ubuntu Breezy, so it definitely works. But I installed
> from source, so I can't tell you what might be wrong with the package.
> Installing from source in Ubuntu isn't too painful since Atlas, Blas,
> Lapack and all that are available as packages (that I know do work).
> If you can't resolve your issue with the package install, let me know
> if you want help installing from source.

I've run Scipy on both Ubuntu Breezy and Dapper. scipy.test(10) worked
fine for me (on Dapper) without building from source. But then I also
got numpy and scipy working on my Intel iMac last week so maybe I'm just
getting lucky.
-Jordan

From zpincus at stanford.edu Sun Apr 2 17:39:20 2006
From: zpincus at stanford.edu (Zachary Pincus)
Date: Sun, 2 Apr 2006 16:39:20 -0500
Subject: [SciPy-user] stats review: std/var and samplestd/samplevar
Message-ID: <52740E64-79E7-4905-8518-42A7F66D1D0D@stanford.edu>

Hi folks -

It appears to me that the scipy.stats implementations for calculating
sample variances and population variances (and hence standard deviations
too) are somehow reversed.

Specifically, the variance of an entire population is calculated with a
denominator of the population size N. The variance of a sample from a
population is estimated using a denominator of either the sample size n
(to obtain a biased estimate) or n-1 (to obtain an unbiased estimate).
Note that saying "sample variance" does not imply the use of the n-1
estimator, as there are cases in which the biased estimator may
legitimately be used.(*)

See e.g.:
http://en.wikipedia.org/wiki/Variance
http://en.wikipedia.org/wiki/Standard_deviation

However, scipy.stats.std and scipy.stats.var use N-1, while
scipy.stats.samplestd and scipy.stats.samplevar use N. This is clearly
incorrect notation any way you slice it. I would propose to have:

(1) scipy.stats.var and scipy.stats.std -- use N as the denominator.

(2) scipy.stats.samplevar and scipy.stats.samplestd -- at least use n-1
as the denominator. Better would be to deprecate / remove them because,
as above, "sample variance" is ambiguous.

(3) scipy.stats.var_unbiased -- use n-1 as the denominator. As per the
note below, there is no general unbiased estimator of the standard
deviation, and so there should be no scipy.stats.std_unbiased function.
(See the wikipedia entry and also
http://www.itl.nist.gov/div898/handbook/pmc/section3/pmc32.htm )

I feel vaguely that the n-1 estimator is always problematic, because if
you have a small enough sample that it makes a difference, you've got
bigger problems than using N or N-1.
Not that these problems are insurmountable, but you've got to have some statistical savvy to deal properly with them. As such, I think that the default functions (var and std) should just return the population statistics. But reasonable people may disagree. Zach Pincus Program in Biomedical Informatics and Department of Biochemistry Stanford University School of Medicine (*) E.g.: While it is possible to estimate the variance in an unbiased manner, estimating the standard deviation of a population from a sample without bias is actually impossible without assumptions about the population. (There is a complex correction factor for samples from normal populations discussed on the NIST page.) Moreover, though the (N-1)-denominated estimator of the variance is unbiased, the estimator itself has a greater variance around the true value than the N-denominated estimator. As such, using the unbiased estimator can sap statistical power from some tests. This is why sometimes one might use the N-denominated estimator for the sample variance. From gruben at bigpond.net.au Sun Apr 2 19:10:44 2006 From: gruben at bigpond.net.au (Gary Ruben) Date: Mon, 03 Apr 2006 09:10:44 +1000 Subject: [SciPy-user] docstring improvement In-Reply-To: <442F49DA.5000200@bigpond.net.au> References: <442F49DA.5000200@bigpond.net.au> Message-ID: <443059F4.8070901@bigpond.net.au> Robert, Thanks for taking this suggestion on and adding it to the checklist. I agree with the importance of keeping any examples short. Gary Gary Ruben wrote: > Just looking at Robert Kern's checklist from a user's p.o.v., it would > be nice to add some simple (one-liner) function call examples to each > docstring. This is something I think is lacking from most functions in > scipy. > > Gary R. 
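[Editor's note: the N vs. n-1 distinction debated in this thread is easy to demonstrate numerically. A minimal sketch using today's NumPy API, which exposes the denominator through the `ddof` argument, rather than the 2006 scipy.stats functions under review:]

```python
import numpy as np

# Small sample where the two denominators differ noticeably.
data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
n = len(data)
dev2 = (data - data.mean()) ** 2     # squared deviations from the mean

pop_var = dev2.sum() / n             # population variance, denominator N
unbiased_var = dev2.sum() / (n - 1)  # unbiased estimate, denominator n-1

# NumPy's ddof ("delta degrees of freedom") selects denominator n - ddof.
assert np.isclose(pop_var, data.var(ddof=0))       # ddof=0 is the default
assert np.isclose(unbiased_var, data.var(ddof=1))

print(pop_var)       # 4.0
print(unbiased_var)  # 32/7, about 4.571
```

With ddof made explicit, "std/var use N-1 while samplestd/samplevar use N" describes exactly the kind of silent denominator swap the review aimed to catch.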
From elcorto at gmx.net Sun Apr 2 19:13:04 2006
From: elcorto at gmx.net (Steve Schmerler)
Date: Mon, 03 Apr 2006 01:13:04 +0200
Subject: [SciPy-user] building numpy fails
In-Reply-To: <44301BD5.1040601@gmail.com>
References: <442FF730.1090502@gmx.net> <44301BD5.1040601@gmail.com>
Message-ID: <44305A80.4010704@gmx.net>

Robert Kern wrote:
> Steve Schmerler wrote:
>
>> The latest numpy svn checkout fails to build (Python 2.3.5) when trying
>> to call tempfile.mktemp(). The module index says:
>>
>> mktemp([suffix[, prefix[, dir]]])
>>     Deprecated since release 2.3. Use mkstemp() instead.
>> [...]
>
>> File
>> "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/exec_command.py",
>> line 279, in _exec_command_posix
>>     tmpfile = tempfile.mktemp()
>> AttributeError: 'module' object has no attribute 'mktemp'
>
> Do you have a file tempfile.py sitting around that isn't the standard
> library's tempfile.py?

No.

elcorto at ramrod:~/install/python/matplotlib$ sudo updatedb
Password:
elcorto at ramrod:~/install/python/matplotlib$ locate tempfile.py
/usr/lib/python2.3/tempfile.py
/usr/lib/python2.3/tempfile.pyc
/usr/lib/python2.3/tempfile.pyo
/usr/share/reportbug/rbtempfile.py
/usr/share/reportbug/rbtempfile.pyc
/usr/share/reportbug/rbtempfile.pyo

numpy 0.9.6 from sourceforge builds just fine.

cheers,
steve

--
Random number generation is the art of producing pure gibberish as
quickly as possible.
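[Editor's note: Robert's shadowing question can also be answered from inside Python itself, which is quicker than updatedb/locate: a module's `__file__` attribute shows exactly which copy was imported. A generic diagnostic sketch, not code from the thread:]

```python
import os
import tempfile

# Which file did "import tempfile" actually pick up? A stray tempfile.py
# in the current directory or earlier on sys.path would show up here
# instead of the standard library location.
print(tempfile.__file__)

# Both functions from the thread live in the standard library module:
# mktemp() is deprecated but still present; mkstemp() is its replacement.
assert hasattr(tempfile, "mktemp")
assert hasattr(tempfile, "mkstemp")

# mkstemp() returns an OS-level file descriptor plus the absolute path,
# matching the interactive Out[2]: (3, '/tmp/tmp...') in Steve's session.
fd, path = tempfile.mkstemp()
os.close(fd)
os.remove(path)
```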
From schofield at ftw.at Sun Apr 2 19:23:11 2006 From: schofield at ftw.at (Ed Schofield) Date: Mon, 3 Apr 2006 01:23:11 +0200 Subject: [SciPy-user] maxentropy In-Reply-To: <43f499ca0603301755v71e48642vf90db1becbfed72b@mail.gmail.com> References: <43f499ca0603131746i76670b35m438fc5f7421d0341@mail.gmail.com> <0286C503-6542-4C97-B842-922396FDDA50@ftw.at> <441ABBC5.90304@ftw.at> <43f499ca0603171631w57400cf4n583bfb0c26e6fc5c@mail.gmail.com> <43f499ca0603211253k585c56eev3bd08fb42f6574b8@mail.gmail.com> <4422DB17.40402@ftw.at> <8866A7C4-460D-4EE0-9102-54B71810EEA9@ftw.at> <43f499ca0603301755v71e48642vf90db1becbfed72b@mail.gmail.com> Message-ID: <31611557-230F-4200-95A2-4721E906765F@ftw.at> On 31/03/2006, at 3:55 AM, Matthew Cooper wrote: > I went through the new conditionalexample_high_level.py and I still > think there a small change that needs to be made (I think it's > small anyway). I think that we want F to be the size > > F = sparse.lil_matrix((len(f), numcorpus*numsamplespace)) > > where numcorpus = len(corpus) > Okay, this seems straightforward. I've changed the example so there are only columns of F for contexts that appear in the corpus. > I don't think this alters your code, as long as the pmf and F > matrices are initialized correctly. > Yes, you're right. > At test time, we do need to evaluate the feature functions on > unseen documents, but this can be handled more easily. > I'm not sure how yet. I'll give this some thought. > I have another question. I haven't installed your version of scipy > outright since it was a bit of a pain to get the current stable > distribution up on my machine. However, if I need to load a bunch > of modules from your version to test the conditional models is > there an easy way to do that? Which scipy version are you using? If it's recent enough, you can just copy my maxentropy.py and sparse.py files over the installed ones. 
I'm happy enough that it works now; I've merged the new sparse
functionality back into the trunk, and I'll do the same with the
conditional maxent class in the next few days.

> At the moment, I couldn't import sparseutils (I can't find
> the .py file since I probably haven't built it?).

sparsetools is written in FORTRAN, with an f2py interface, so it needs
to be installed properly by numpy.distutils. But sparsetools is the same
in my branch as in the trunk ...

-- Ed

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From imcsee at gmail.com Sun Apr 2 21:35:05 2006
From: imcsee at gmail.com (imcs ee)
Date: Mon, 3 Apr 2006 09:35:05 +0800
Subject: [SciPy-user] ask help for scipy install
In-Reply-To:
References:
Message-ID:

Thanks, I'll try it tonight.

Some further info on installs: I installed scipy on Windows Server 2003.

1. ActivePython-2.4.2.10-win32-x86.msi + numpy-0.9.6r1.win32-py2.4.exe +
   scipy-0.4.8.win32-py2.4-pentium4sse2.exe
   scipy.test(10) works fine.

2. python-2.4.3.msi + numpy-0.9.6r1.win32-py2.4.exe +
   scipy-0.4.8.win32-py2.4-pentium4sse2.exe
   ... gets the same error (less-ordered...)

On 4/3/06, Ryan Krauss wrote:
>
> I run scipy in Ubuntu Breezy, so it definitely works. But I installed
> from source, so I can't tell you what might be wrong with the package.
> Installing from source in Ubuntu isn't too painful since Atlas, Blas,
> Lapack and all that are available as packages (that I know do work).
> If you can't resolve your issue with the package install, let me know
> if you want help installing from source.
>
> Ryan
>
> On 4/2/06, imcs ee wrote:
> > in ubuntu5.10
> > i install scipy with apt-get install scipy
> >
> > and run the unittest
> >
> > from scipy import *
> > test(10)
> >
> > it shows the result as below. is there any guide to solve it?
thanks in > > advance > > > > > > ====================================================================== > > FAIL: check_expon > > (scipy.stats.morestats.test_morestats.test_anderson) > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File > > "/usr/lib/python2.4/site-packages/scipy/stats/tests/test_morestats.py", > > line 55, in check_expon > > assert_array_less(A, crit[-2:]) > > File > > "/usr/lib/python2.4/site-packages/scipy_test/testing.py", > > line 708, in > > assert_array_less > > assert cond,\ > > AssertionError: > > Arrays are not less-ordered (mismatch 50.0%): > > Array 1: 1.65613125073 > > Array 2: [ 1.587 1.9339999999999999] > > > > > > ---------------------------------------------------------------------- > > Ran 986 tests in 102.422s > > > > FAILED (failures=1) > > Out[3]: > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.net > > http://www.scipy.net/mailman/listinfo/scipy-user > > > > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From webb.sprague at gmail.com Sun Apr 2 23:05:27 2006 From: webb.sprague at gmail.com (Webb Sprague) Date: Sun, 2 Apr 2006 19:05:27 -0800 Subject: [SciPy-user] weird error in mod_python (3.1.4.r1) /scipy (0.4.8)/gentoo (~x86) web application Message-ID: This may be un-reproducible, and it does NOT happen within ipython shell, but I am getting the backtrace below when trying to import scipy within my application. I would guess it has to do with the very outdated ebuild of mod_python, (see the gentoo bug: http://bugs.gentoo.org/show_bug.cgi?id=123852), but just in case anybody has any quick fixes, please let me know. 
I have restarted Apache numerous times to make sure there isn't some weird cache thing going on (the source of most of my phantom bug reports). Scipy 0.3.2 does not have this problem. Backtrace follows. (6), with LcUtil.py, is where it tries to import and errors out. Mod_python error: "PythonHandler mod_python.publisher" Traceback (most recent call last): (1) File "/usr/lib/python2.4/site-packages/mod_python/apache.py", line 299, in HandlerDispatch result = object(req) (2) File "/usr/lib/python2.4/site-packages/mod_python/publisher.py", line 98, in handler path=[path]) (3) File "/usr/lib/python2.4/site-packages/mod_python/apache.py", line 457, in import_module module = imp.load_module(mname, f, p, d) (4) File "/var/www/localhost/htdocs/larry/lc.py", line 32, in ? import LcSinglePopObject (5) File "/var/www/localhost/htdocs/larry/LcSinglePopObject.py", line 40, in ? import LcUtil (6) File "/var/www/localhost/htdocs/larry/LcUtil.py", line 8, in ? import scipy as S File "/usr/lib/python2.4/site-packages/scipy/__init__.py", line 18, in ? import pkg_resources as _pr # activate namespace packages (manipulates __path__) File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 2347, in ? 
working_set = WorkingSet() File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 343, in __init__ self.add_entry(entry) File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 358, in add_entry for dist in find_distributions(entry, True): File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 1450, in find_distributions importer = get_importer(path_item) File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 1407, in get_importer importer = hook(path_item) TypeError: zipimporter() argument 1 must be string, not builtin_function_or_method From robert.kern at gmail.com Sun Apr 2 23:27:31 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 02 Apr 2006 22:27:31 -0500 Subject: [SciPy-user] weird error in mod_python (3.1.4.r1) /scipy (0.4.8)/gentoo (~x86) web application In-Reply-To: References: Message-ID: <44309623.7000903@gmail.com> Webb Sprague wrote: > This may be un-reproducible, and it does NOT happen within ipython > shell, but I am getting the backtrace below when trying to import > scipy within my application. I would guess it has to do with the very > outdated ebuild of mod_python, (see the gentoo bug: > http://bugs.gentoo.org/show_bug.cgi?id=123852), but just in case > anybody has any quick fixes, please let me know. > > I have restarted Apache numerous times to make sure there isn't some > weird cache thing going on (the source of most of my phantom bug > reports). Scipy 0.3.2 does not have this problem. > > Backtrace follows. (6), with LcUtil.py, is where it tries to import > and errors out. 
> > Mod_python error: "PythonHandler mod_python.publisher" > > Traceback (most recent call last): > > (1) File "/usr/lib/python2.4/site-packages/mod_python/apache.py", > line 299, in HandlerDispatch > result = object(req) > > (2) File "/usr/lib/python2.4/site-packages/mod_python/publisher.py", > line 98, in handler > path=[path]) > > (3) File "/usr/lib/python2.4/site-packages/mod_python/apache.py", > line 457, in import_module > module = imp.load_module(mname, f, p, d) > > (4) File "/var/www/localhost/htdocs/larry/lc.py", line 32, in ? > import LcSinglePopObject > > (5) File "/var/www/localhost/htdocs/larry/LcSinglePopObject.py", line 40, in ? > import LcUtil > > (6) File "/var/www/localhost/htdocs/larry/LcUtil.py", line 8, in ? > import scipy as S > > File "/usr/lib/python2.4/site-packages/scipy/__init__.py", line 18, in ? > import pkg_resources as _pr # activate namespace packages > (manipulates __path__) > > File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 2347, in ? > working_set = WorkingSet() > > File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 343, > in __init__ > self.add_entry(entry) > > File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 358, > in add_entry > for dist in find_distributions(entry, True): > > File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 1450, > in find_distributions > importer = get_importer(path_item) > > File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 1407, > in get_importer > importer = hook(path_item) > > TypeError: zipimporter() argument 1 must be string, not > builtin_function_or_method It looks like this is an issue with setuptools which provides pkg_resources.py. You may want to ask on the Distutils-SIG mailing list. In the meantime, you can just delete try: except: suite. It's not necessary if you aren't using namespace package eggs. 
-- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From webb.sprague at gmail.com Mon Apr 3 00:06:16 2006 From: webb.sprague at gmail.com (Webb Sprague) Date: Sun, 2 Apr 2006 20:06:16 -0800 Subject: [SciPy-user] [mod_python] weird error in mod_python (3.1.4.r1) /scipy(0.4.8)/gentoo (~x86) web application In-Reply-To: <1144035392.2357@dscpl.user.openhosting.com> References: <1144035392.2357@dscpl.user.openhosting.com> Message-ID: Thanks to Graham's help, I found the typo ("sys.path.append(os.getcwd)" should have been "sys.path.append(os.getcwd())"), and all is well. W On 4/2/06, Graham Dumpleton wrote: > Webb Sprague wrote .. > > This may be un-reproducible, and it does NOT happen within ipython > > shell, but I am getting the backtrace below when trying to import > > scipy within my application. I would guess it has to do with the very > > outdated ebuild of mod_python, (see the gentoo bug: > > http://bugs.gentoo.org/show_bug.cgi?id=123852), but just in case > > anybody has any quick fixes, please let me know. > > > > I have restarted Apache numerous times to make sure there isn't some > > weird cache thing going on (the source of most of my phantom bug > > reports). Scipy 0.3.2 does not have this problem. > > Looking through pkg_resources.py, critical bit of code seems to be: > > class WorkingSet(object): > """A collection of active distributions on sys.path (or a similar list)""" > > def __init__(self, entries=None): > """Create working set from list of path entries (default=sys.path)""" > self.entries = [] > self.entry_keys = {} > self.by_key = {} > self.callbacks = [] > > if entries is None: > entries = sys.path > > for entry in entries: > self.add_entry(entry) > > Specifically, "entries" is set to 'sys.path' and then each entry in that is > processed. 
The final error suggests that one of the entries in 'sys.path' > is actually a function and not a string. > > Are you setting 'sys.path' using PythonPath directive in mod_python > and somehow stuffed it up, or are you setting 'sys.path' explicitly in > any other places? > > Graham > > > Backtrace follows. (6), with LcUtil.py, is where it tries to import > > and errors out. > > > > Mod_python error: "PythonHandler mod_python.publisher" > > > > Traceback (most recent call last): > > > > (1) File "/usr/lib/python2.4/site-packages/mod_python/apache.py", > > line 299, in HandlerDispatch > > result = object(req) > > > > (2) File "/usr/lib/python2.4/site-packages/mod_python/publisher.py", > > line 98, in handler > > path=[path]) > > > > (3) File "/usr/lib/python2.4/site-packages/mod_python/apache.py", > > line 457, in import_module > > module = imp.load_module(mname, f, p, d) > > > > (4) File "/var/www/localhost/htdocs/larry/lc.py", line 32, in ? > > import LcSinglePopObject > > > > (5) File "/var/www/localhost/htdocs/larry/LcSinglePopObject.py", line > > 40, in ? > > import LcUtil > > > > (6) File "/var/www/localhost/htdocs/larry/LcUtil.py", line 8, in ? > > import scipy as S > > > > File "/usr/lib/python2.4/site-packages/scipy/__init__.py", line 18, in > > ? > > import pkg_resources as _pr # activate namespace packages > > (manipulates __path__) > > > > File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 2347, > > in ? 
> > working_set = WorkingSet() > > > > File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 343, > > in __init__ > > self.add_entry(entry) > > > > File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 358, > > in add_entry > > for dist in find_distributions(entry, True): > > > > File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 1450, > > in find_distributions > > importer = get_importer(path_item) > > > > File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 1407, > > in get_importer > > importer = hook(path_item) > > > > TypeError: zipimporter() argument 1 must be string, not > > builtin_function_or_method > > > > _______________________________________________ > > Mod_python mailing list > > Mod_python at modpython.org > > http://mailman.modpython.org/mailman/listinfo/mod_python > From imcsee at gmail.com Mon Apr 3 02:35:00 2006 From: imcsee at gmail.com (imcs ee) Date: Mon, 3 Apr 2006 14:35:00 +0800 Subject: [SciPy-user] ask help for scipy install In-Reply-To: References: Message-ID: Dapper works, :) Ran 986 tests in 157.908s OK Thanks Ryan Krauss, thanks Jordan Mantha. On 4/3/06, imcs ee wrote: > > Thanks, I'll try it tonight. > Some further info on the install: I installed scipy on Windows Server 2003, > 1. ActivePython-2.4.2.10-win32-x86.msi + numpy-0.9.6r1.win32-py2.4.exe + > scipy-0.4.8.win32-py2.4-pentium4sse2.exe > scipy.test(10) works fine. > 2. python-2.4.3.msi + numpy-0.9.6r1.win32-py2.4.exe + > scipy-0.4.8.win32-py2.4-pentium4sse2.exe ... I get the same error > (less-ordered...) > > > On 4/3/06, Ryan Krauss wrote: > > > > I run scipy in Ubuntu Breezy, so it definitely works. But I installed > > from source, so I can't tell you what might be wrong with the package. > > Installing from source in Ubuntu isn't too painful since Atlas, Blas, > > Lapack and all that are available as packages (that I know do work). > > If you can't resolve your issue with the package install, let me know > > if you want help installing from source.
> > > > Ryan > > > > On 4/2/06, imcs ee < imcsee at gmail.com> wrote: > > > In Ubuntu 5.10 > > > I installed scipy with apt-get install scipy > > > > > > and ran the unit test > > > > > > from scipy import * > > > test(10) > > > > > > It shows the result below. Is there any guide to solve it? Thanks > > in > > > advance > > > > > > > > > ====================================================================== > > > FAIL: check_expon > > > (scipy.stats.morestats.test_morestats.test_anderson) > > > ---------------------------------------------------------------------- > > > Traceback (most recent call last): > > > File > > > > > "/usr/lib/python2.4/site-packages/scipy/stats/tests/test_morestats.py", > > > line 55, in check_expon > > > assert_array_less(A, crit[-2:]) > > > File > > > "/usr/lib/python2.4/site-packages/scipy_test/testing.py", > > > line 708, in > > > assert_array_less > > > assert cond,\ > > > AssertionError: > > > Arrays are not less-ordered (mismatch 50.0%): > > > Array 1: 1.65613125073 > > > Array 2: [ 1.587 1.9339999999999999] > > > > > > > > > ---------------------------------------------------------------------- > > > > > Ran 986 tests in 102.422s > > > > > > FAILED (failures=1) > > > Out[3]: > > > > > > > > > _______________________________________________ > > > SciPy-user mailing list > > > SciPy-user at scipy.net > > > http://www.scipy.net/mailman/listinfo/scipy-user > > > > > > > > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.net > > http://www.scipy.net/mailman/listinfo/scipy-user > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pgmdevlist at mailcan.com Mon Apr 3 05:03:52 2006 From: pgmdevlist at mailcan.com (Pierre GM) Date: Mon, 3 Apr 2006 05:03:52 -0400 Subject: [SciPy-user] Statistics review months In-Reply-To: References: Message-ID: <200604030503.53714.pgmdevlist@mailcan.com> Robert, Excellent initiative, thanks a lot!
Before getting too involved, I have a question: should the functions support MaskedArrays (when possible) ? I think about var/std (already available for MA in this patch, that should be checked before inclusion [http://projects.scipy.org/scipy/numpy/attachment/wiki/MaskedArray/ma-200603280900.patch]), or median, in particular... From matthew at sel.cam.ac.uk Mon Apr 3 06:27:21 2006 From: matthew at sel.cam.ac.uk (Matthew Vernon) Date: Mon, 3 Apr 2006 11:27:21 +0100 Subject: [SciPy-user] stats review: std/var and samplestd/samplevar In-Reply-To: <52740E64-79E7-4905-8518-42A7F66D1D0D@stanford.edu> References: <52740E64-79E7-4905-8518-42A7F66D1D0D@stanford.edu> Message-ID: <6313ED18-D464-4759-8018-285E30C05794@sel.cam.ac.uk> Hi, I think the original poster meant (N-1) some of the time when they said (1-N). > I would propose to have: > (1) scipy.stats.var and scipy.stats.std -- use N as the denominator > > (2) scipy.stats.samplevar and scipy.stats.samplesdt -- at least use > n-1 as the denominator. Better would be to deprecate / remove them > because as above "sample variance" is ambiguous. > > (3) scipy.stats.var_unbiased -- use n-1 as denominator. As per the > note below, there is no general unbiased estimator of the standard > deviation, and so there should be no scipy.stats.std_unbiased > function. (See the wikipedia entry and also http://www.itl.nist.gov/ > div898/handbook/pmc/section3/pmc32.htm ) > I feel vaguely that the N-1 estimator is always problematic, because > if you have a small enough sample that it makes a difference, you've > got bigger problems than using N or N-1. Not that these problems are > insurmountable, but you've got to have some statistical savvy to deal > properly with them. As such, I think that the default functions (var > and std) should just return the population statistics. But reasonable > people may disagree. 
Whilst you might argue that N vs N-1 isn't going to make much of a difference on a large sample, I am still strongly of the opinion that it should be an option. why not simply have scipy.stats.var (and std) with an option for whether you want N or N-1? Matthew -- Matthew Vernon MA VetMB LGSM MRCVS Farm Animal Epidemiology and Informatics Unit Department of Veterinary Medicine, University of Cambridge http://www.cus.cam.ac.uk/~mcv21/ From matthew at sel.cam.ac.uk Mon Apr 3 06:32:25 2006 From: matthew at sel.cam.ac.uk (Matthew Vernon) Date: Mon, 3 Apr 2006 11:32:25 +0100 Subject: [SciPy-user] [SciPy-dev] Statistics review months In-Reply-To: <442F50F2.2060303@stanford.edu> References: <442F2E25.9040705@gmail.com> <442F50F2.2060303@stanford.edu> Message-ID: Hi, On 2 Apr 2006, at 05:20, Jonathan Taylor wrote: > on this topic, as an honest-to-goodness statistician it might be > nice to > see more statistical modelling in scipy. i know Rpy exists, but the > interface is not very pythonic. I am not a statistician, but I do a fair amount of stats nonetheless. Enough that I wish there were some basic stats functions in "vanilla" python! > i have some "home-brew" modules for linear regression, formula > building > (something like R's) and a few other things. if it went into something > like scipy, it might gain from the criticisms of others.... > > is there any interest in making the equivalent of a > > scipy.stats.models > > module? My concern is that this is effort re-inventing a wheel. Would it not be better to improve the R interface rather than effectively duplicating functionality that's already in R (a Free, and high quality piece of software)? 
Matthew -- Matthew Vernon MA VetMB LGSM MRCVS Farm Animal Epidemiology and Informatics Unit Department of Veterinary Medicine, University of Cambridge http://www.cus.cam.ac.uk/~mcv21/ From matthew.brett at gmail.com Mon Apr 3 06:41:32 2006 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 3 Apr 2006 11:41:32 +0100 Subject: [SciPy-user] [SciPy-dev] Statistics review months In-Reply-To: References: <442F2E25.9040705@gmail.com> <442F50F2.2060303@stanford.edu> Message-ID: <1e2af89e0604030341g137d0b50yef4d656acc2439ec@mail.gmail.com> Hi, > My concern is that this is effort re-inventing a wheel. Would it not > be better to improve the R interface rather than effectively > duplicating functionality that's already in R (a Free, and high > quality piece of software)? Well, I think the argument would be that if you are writing software in Python that needs this functionality, then making R and RPy a dependency is a burden, even if RPy is working perfectly on all platforms. It also means that NumPy will be less likely to attract statisticians, if you still require R for your statistical processing. Best, Matthew From vincenzo.cacciatore at gmail.com Mon Apr 3 07:06:53 2006 From: vincenzo.cacciatore at gmail.com (vincenzo cacciatore) Date: Mon, 3 Apr 2006 13:06:53 +0200 Subject: [SciPy-user] Band pass filter design Message-ID: <7b580e5d0604030406o469f4441u2140989e60421478@mail.gmail.com> Hi all, I would like to design a high-pass filter with the scipy.signal module. This is the code I'm using:

import scipy.signal as signal
import scipy

# First of all I design the lowpass FIR filter. This is a 10-tap filter
# with cutoff frequency = 1 (as the help tells me to do)
lpwindow = signal.firwin(10, 1)

# With the following instruction I'm creating a band-pass filter from
# the low-pass one
bpwindow = signal.lp2bp(lpwindow, 1, 0.5, 0.2)

My problem is that the band-pass filter obtained with the lp2bp function is a 16-tap one! How is it possible??
thanks, Vincenzo -------------- next part -------------- An HTML attachment was scrubbed... URL: From icy.flame.gm at gmail.com Mon Apr 3 07:27:18 2006 From: icy.flame.gm at gmail.com (iCy-fLaME) Date: Mon, 3 Apr 2006 12:27:18 +0100 Subject: [SciPy-user] Filtering high frequency noise In-Reply-To: References: <441E77C4.5070702@axetic.com> <11C42000-3040-4A34-9597-BC109F4DB385@qwest.net> Message-ID: Assuming the noise power is constant throughout your measurement, and relatively small compared to the signal level, cubic spline interpolation with the weight option might be a fast solution. The weight is the standard deviation of the noise; this is very easy to find if you have a stretch of the trace clear of signal, otherwise it doesn't take long to guess a close enough value for it. The advantage of this method is that you don't have to worry about the group delay you get with filters, or dispersion of your signal. Take a look at my data as an example; it might help you choose what to do: This is the original signal as received, 1024 points. https://warwickultrasound.co.uk/smooth/Org_1024.png This is the result after a moving average filter: https://warwickultrasound.co.uk/smooth/Avg_1024.png This is the result after a badly done FIR filter: https://warwickultrasound.co.uk/smooth/FIR_1024.png NB: this is a very bad example; I am sure a carefully chosen FIR filter can do much better than this. This is the result after using the cubic spline method I described above: https://warwickultrasound.co.uk/smooth/Cub_1024.png Looking at the difference between the original and filtered signals, I chose the cubic spline method, because the residual seems uniform enough not to include any signal, i.e. only the noise is filtered out. -- iCy-fLaME The body may be wounded, but it is the mind that hurts.
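[Editor's note: the weighted-spline smoothing described above can be sketched with SciPy's interpolation API — shown here with today's UnivariateSpline; the 2006-era splrep took the same w= weights. The signal, noise level, and weights below are invented for illustration:]

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
x = np.linspace(0, 4 * np.pi, 1024)
clean = np.sin(x) * np.exp(-0.1 * x)            # stand-in for the received trace
sigma = 0.05                                    # noise std, e.g. estimated from a
noisy = clean + rng.normal(0.0, sigma, x.size)  # signal-free stretch of the trace

# Weight each sample by 1/sigma; with the default smoothing condition
# s = len(w), the spline stays within roughly one standard deviation of
# the data, smoothing out the noise without a filter's group delay.
spline = UnivariateSpline(x, noisy, w=np.full(x.size, 1.0 / sigma))
smoothed = spline(x)

# The residual against the true signal shrinks after smoothing.
assert np.std(smoothed - clean) < np.std(noisy - clean)
```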
From zpincus at stanford.edu Mon Apr 3 09:24:43 2006 From: zpincus at stanford.edu (Zachary Pincus) Date: Mon, 3 Apr 2006 08:24:43 -0500 Subject: [SciPy-user] stats review: std/var and samplestd/samplevar In-Reply-To: <6313ED18-D464-4759-8018-285E30C05794@sel.cam.ac.uk> References: <52740E64-79E7-4905-8518-42A7F66D1D0D@stanford.edu> <6313ED18-D464-4759-8018-285E30C05794@sel.cam.ac.uk> Message-ID: <83FACFF0-3147-4FAC-B8E1-CE43BE8E4281@stanford.edu> Hi again folks, > I think the original poster meant (N-1) some of the time when they > said (1-N). Yeah, sorry. The take-home message is that scipy.stats uses "sample variance" to mean "a variance denominated by N", when the rest of the world uses "sample variance" to mean "an estimator of the population variance denominated by N-1 or N", and scipy.stats uses "variance" to mean "the unbiased estimator of population variance (denominated by N-1)", which is not in general what "variance" means. In both cases, these usages are not clear, and in the "sample" case, it is directly contrary to established usage. > why not simply have scipy.stats.var (and std) with an option for > whether you want N or N-1? How do people feel about this? The folks on the numpy list have relatively strong feelings that when functions have a boolean flag such as you're proposing, then that means that they really should be two functions. I'm not really sure how strongly I feel about that. Would it be OK to have scipy.stats.var take a boolean 'unbiased_estimator' or 'UnbiasedEstimator' flag? I'm not so sure that scipy.stats.std ought to have such a flag, given the caveats (e.g. that there is no general unbiased estimator), but if that's what people want... Zach > >> I would propose to have: >> (1) scipy.stats.var and scipy.stats.std -- use N as the denominator >> >> (2) scipy.stats.samplevar and scipy.stats.samplestd -- at least use >> n-1 as the denominator.
Better would be to deprecate / remove them >> because as above "sample variance" is ambiguous. >> >> (3) scipy.stats.var_unbiased -- use n-1 as denominator. As per the >> note below, there is no general unbiased estimator of the standard >> deviation, and so there should be no scipy.stats.std_unbiased >> function. (See the wikipedia entry and also http://www.itl.nist.gov/ >> div898/handbook/pmc/section3/pmc32.htm ) > > >> I feel vaguely that the N-1 estimator is always problematic, because >> if you have a small enough sample that it makes a difference, you've >> got bigger problems than using N or N-1. Not that these problems are >> insurmountable, but you've got to have some statistical savvy to deal >> properly with them. As such, I think that the default functions (var >> and std) should just return the population statistics. But reasonable >> people may disagree. > > Whilst you might argue that N vs N-1 isn't going to make much of a > difference on a large sample, I am still strongly of the opinion that > it should be an option. > > why not simply have scipy.stats.var (and std) with an option for > whether you want N or N-1? 
> > Matthew > > -- > Matthew Vernon MA VetMB LGSM MRCVS > Farm Animal Epidemiology and Informatics Unit > Department of Veterinary Medicine, University of Cambridge > http://www.cus.cam.ac.uk/~mcv21/ > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user From pearu at scipy.org Mon Apr 3 10:31:30 2006 From: pearu at scipy.org (Pearu Peterson) Date: Mon, 3 Apr 2006 09:31:30 -0500 (CDT) Subject: [SciPy-user] building numpy fails In-Reply-To: <44305A80.4010704@gmx.net> References: <442FF730.1090502@gmx.net> <44301BD5.1040601@gmail.com> <44305A80.4010704@gmx.net> Message-ID: On Mon, 3 Apr 2006, Steve Schmerler wrote: > Robert Kern wrote: >> Steve Schmerler wrote: >> >>> The latest numpy svn checkout fails to build (Python 2.3.5) when trying >>> to call tempfile.mktemp(). The module index says: >>> >>> mktemp([suffix[, prefix[, dir]]]) >>> Deprecated since release 2.3. Use mkstemp() instead. >>> [...] >> >> >>> File >>> "/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/exec_command.py", >>> line 279, in _exec_command_posix >>> tmpfile = tempfile.mktemp() >>> AttributeError: 'module' object has no attribute 'mktemp' >> >> >> Do you have a file tempfile.py sitting around that isn't the standard library's >> tempfile.py? >> > > No. I can reproduce this error with pyhton 2.3 but not with 2.4. So it seems to be a bug of import machinery of Python 2.3. I have commited a workaround to this problem to numpy svn. 
Pearu From aisaac at american.edu Mon Apr 3 12:02:44 2006 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 3 Apr 2006 12:02:44 -0400 Subject: [SciPy-user] [SciPy-dev] Statistics review months In-Reply-To: <1e2af89e0604030341g137d0b50yef4d656acc2439ec@mail.gmail.com> References: <442F2E25.9040705@gmail.com> <442F50F2.2060303@stanford.edu><1e2af89e0604030341g137d0b50yef4d656acc2439ec@mail.gmail.com> Message-ID: >> My concern is that this is effort re-inventing a wheel. >> Would it not be better to improve the R interface rather >> than effectively duplicating functionality that's already >> in R (a Free, and high quality piece of software)? On Mon, 3 Apr 2006, Matthew Brett apparently wrote: > Well, I think the argument would be that if you are writing software > in Python that needs this functionality, then making R and RPy a > dependency is a burden, even if RPy is working perfectly on all > platforms. It also means that NumPy will be less likely to attract > statisticians, if you still require R for your statistical processing. It would also mean that using numpy for stats would involve GPL'd rather than free software. ;-) Cheers, Alan Isaac From fonnesbeck at gmail.com Mon Apr 3 12:04:57 2006 From: fonnesbeck at gmail.com (Chris Fonnesbeck) Date: Mon, 3 Apr 2006 12:04:57 -0400 Subject: [SciPy-user] [SciPy-dev] Statistics review months In-Reply-To: References: <442F2E25.9040705@gmail.com> <442F50F2.2060303@stanford.edu> Message-ID: <723eb6930604030904y7dec7fc2yce44ecbe42ee847f@mail.gmail.com> On 4/3/06, Matthew Vernon wrote: > > > My concern is that this is effort re-inventing a wheel. Would it not > be better to improve the R interface rather than effectively > duplicating functionality that's already in R (a Free, and high > quality piece of software)? > I strongly disagree with this approach. I'm not interested in dropping back to a 3rd party application to do sophisticated statistical analysis. We should be able to do it in python, and do it well. 
I specifically chose python to implement my Bayesian analysis software because it is faster, more flexible, and more user-friendly than R. In fact, all of scipy could have been replicated in R if desired, but that would be a bad choice. C. -- Chris Fonnesbeck + Atlanta, GA + http://trichech.us -------------- next part -------------- An HTML attachment was scrubbed... URL: From elcorto at gmx.net Mon Apr 3 13:04:48 2006 From: elcorto at gmx.net (Steve Schmerler) Date: Mon, 03 Apr 2006 19:04:48 +0200 Subject: [SciPy-user] building numpy fails In-Reply-To: References: <442FF730.1090502@gmx.net> <44301BD5.1040601@gmail.com> <44305A80.4010704@gmx.net> Message-ID: <443155B0.2010800@gmx.net> Pearu Peterson wrote: > > On Mon, 3 Apr 2006, Steve Schmerler wrote: > > >>Robert Kern wrote: >> >>>Steve Schmerler wrote: >>> >>> >>>>The latest numpy svn checkout fails to build (Python 2.3.5) when trying >>>>to call tempfile.mktemp(). The module index says: >>>> >>>>mktemp([suffix[, prefix[, dir]]]) >>>> Deprecated since release 2.3. Use mkstemp() instead. >>>> [...] >>> >>> >>>> File >>>>"/home/elcorto/install/python/scipy/svn/numpy/numpy/distutils/exec_command.py", >>>>line 279, in _exec_command_posix >>>> tmpfile = tempfile.mktemp() >>>>AttributeError: 'module' object has no attribute 'mktemp' >>> >>> >>>Do you have a file tempfile.py sitting around that isn't the standard library's >>>tempfile.py? >>> >> >>No. > > > I can reproduce this error with pyhton 2.3 but not with 2.4. > So it seems to be a bug of import machinery of Python 2.3. I have commited > a workaround to this problem to numpy svn. > Thanks. I should switch to python 2.4 anyway sometime soon ... cheers, steve -- When danger or in doubt, run in circles, scream and shout. 
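[Editor's note: for anyone hitting the same deprecation notice — the mkstemp() the docs point at differs from mktemp() in that it creates the file atomically and hands back an open descriptor, instead of returning a name that another process could race you for. A small self-contained sketch:]

```python
import os
import tempfile

# mktemp() only invented a file name (racy, hence the deprecation);
# mkstemp() creates the file and returns (fd, path) in one step.
fd, path = tempfile.mkstemp(suffix=".txt")
try:
    os.write(fd, b"hello")
finally:
    os.close(fd)

# The file exists and holds what we wrote; clean up afterwards.
with open(path) as f:
    assert f.read() == "hello"
os.remove(path)
```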
From hetland at tamu.edu Mon Apr 3 16:05:01 2006 From: hetland at tamu.edu (Robert Hetland) Date: Mon, 3 Apr 2006 15:05:01 -0500 Subject: [SciPy-user] Floating point exception in scipy In-Reply-To: <60752119-1E41-4D4A-980F-DB855D52E0DA@tamu.edu> References: <60752119-1E41-4D4A-980F-DB855D52E0DA@tamu.edu> Message-ID: <718E1606-B256-48F3-9275-C5833A8A4F30@tamu.edu> I guess I shouldn't expect a flood of answers if I post late Friday afternoon.... I figured out the problem -- you need to use the *latest* gfortran from hpc.sf.net. I had been using a version that was a few weeks old.. Now it does not bomb when I test; I get two errors with scipy.test(10,10), which is acceptable to me. So, to summarize, scipy works on an Intel Mac using gcc 4.0.1 and gfortran. -Rob. On Mar 31, 2006, at 4:37 PM, Rob Hetland wrote: > > Compiled on an Intel Mac os X using gcc 4.0.1 (the only one available > on intel macs) and gfortran (from hpc.sf.net). Python is MacPython > Universal build 2.4.3. > > Numpy compiles without a hitch, and tests with no errors. > > SciPy also compiles without errors, but I get a floating point > exception when trying to test scipy. This includes doing scipy.test > (10,10). Below are the details of some attempts. I'm really not > sure where to begin, as it compiles fine. Jordan Mantha has claimed > success basically following the PPC build instructions with this > compiler configuration, but I have not had any luck. I have also > tried to exclude modules that need umfpack, but that also failed in a > similar way. > > Any ideas? 
> > -Rob ----- Rob Hetland, Assistant Professor Dept of Oceanography, Texas A&M University p: 979-458-0096, f: 979-845-6331 e: hetland at tamu.edu, w: http://pong.tamu.edu From mantha at chem.unr.edu Mon Apr 3 16:53:10 2006 From: mantha at chem.unr.edu (Jordan Mantha) Date: Mon, 3 Apr 2006 13:53:10 -0700 Subject: [SciPy-user] Floating point exception in scipy In-Reply-To: <718E1606-B256-48F3-9275-C5833A8A4F30@tamu.edu> References: <60752119-1E41-4D4A-980F-DB855D52E0DA@tamu.edu> <718E1606-B256-48F3-9275-C5833A8A4F30@tamu.edu> Message-ID: <05678DEB-BB3D-4A9E-830C-938FAA47C875@chem.unr.edu> On Apr 3, 2006, at 1:05 PM, Robert Hetland wrote: > > I guess I shouldn't expect a flood of answers if I post late Friday > afternoon.... > > I figured out the problem -- you need to use the *latest* gfortran > from hpc.sf.net. I had been using a version that was a few weeks > old.. Now it does not bomb when I test; I get two errors with > scipy.test(10,10), which is acceptable to me. > > So, to summarize, scipy works on an Intel Mac using gcc 4.0.1 and > gfortran. Yeah! I was hoping it wasn't just something I did weird. -Jordan Mantha From oliphant at ee.byu.edu Mon Apr 3 19:20:50 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 03 Apr 2006 17:20:50 -0600 Subject: [SciPy-user] stats review: std/var and samplestd/samplevar In-Reply-To: <83FACFF0-3147-4FAC-B8E1-CE43BE8E4281@stanford.edu> References: <52740E64-79E7-4905-8518-42A7F66D1D0D@stanford.edu> <6313ED18-D464-4759-8018-285E30C05794@sel.cam.ac.uk> <83FACFF0-3147-4FAC-B8E1-CE43BE8E4281@stanford.edu> Message-ID: <4431ADD2.6050002@ee.byu.edu> Zachary Pincus wrote: >Hi again folks, > > > >>I think the original poster meant (N-1) some of the time when they >>said (1-N). >> >> > >Yeah, sorry. 
> >The take-home message is that scipy.stats uses "sample variance" to >mean "a variance denominated by N", when the rest of the world uses >"sample variance" to mean "an estimator of the population variance >denominated by N-1 or N", and scipy.stats uses "variance" to mean >"the unbiased estimator of population variance (denominated by N-1)", >which is not in general what "variance" means. > > Let's change the documentation to be more consistent and minimize confusion. Let's add an option. Frankly, I'm not enamored with unbiased estimators and would probably divide by N on computing variance by default and allow the option to change it. The only reason to do differently in library code is because of overwhelming expectation. But, if we are wrong about that, then let's do it right. -Travis From oliphant at ee.byu.edu Mon Apr 3 19:21:51 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 03 Apr 2006 17:21:51 -0600 Subject: [SciPy-user] [SciPy-dev] Statistics review months In-Reply-To: <723eb6930604030904y7dec7fc2yce44ecbe42ee847f@mail.gmail.com> References: <442F2E25.9040705@gmail.com> <442F50F2.2060303@stanford.edu> <723eb6930604030904y7dec7fc2yce44ecbe42ee847f@mail.gmail.com> Message-ID: <4431AE0F.4070209@ee.byu.edu> Chris Fonnesbeck wrote: > On 4/3/06, *Matthew Vernon* > wrote: > > > My concern is that this is effort re-inventing a wheel. Would it not > be better to improve the R interface rather than effectively > duplicating functionality that's already in R (a Free, and high > quality piece of software)? > > > I strongly disagree with this approach. I'm not interested in dropping > back to a 3rd party application to do sophisticated statistical > analysis. We should be able to do it in python, and do it well. I > specifically chose python to implement my Bayesian analysis software > because it is faster, more flexible, and more user-friendly than R. In > fact, all of scipy could have been replicated in R if desired, but > that would be a bad choice. 
I'm of this opinion as well. Certainly we can learn from R (especially regarding interfaces and useful functions to implement). -Travis From oliphant at ee.byu.edu Mon Apr 3 20:11:52 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 03 Apr 2006 18:11:52 -0600 Subject: [SciPy-user] ***[Possible UCE]*** Band pass filter design In-Reply-To: <7b580e5d0604030406o469f4441u2140989e60421478@mail.gmail.com> References: <7b580e5d0604030406o469f4441u2140989e60421478@mail.gmail.com> Message-ID: <4431B9C8.10203@ee.byu.edu> vincenzo cacciatore wrote: > Hi all, > i would like to design a high pass filter with scipy.signal module. > > This is the code i'm using to: > > import scipy.signal as signal > import scipy > > #first of all i design the lowpass fir filter. This is a 10 taps filter > with cutoff frequency =1 (as help tell me to do) > > lpwindow=signal.firwin(10,1) > > #with the following instruction i'm creating a band pass filter from > the low pass one > bpwindow=signal.lp2bp(lpwindow,1,0.5,0.2) > > My problem is that the band-pass filter obtained with lp2bp function is > 16 taps one! > How is it possible?? > The low-pass to band-pass function uses a specific transformation that changes the number of taps from N to 2*N-1. So, for me the number of taps goes from 10 to 19. You have two options: 1) Change the number of taps in your underlying low-pass filter 2) (Preferred) Design the filter in the frequency domain explicitly (i.e. make a vector of ones and zeros that defines your filter). Use the inverse FFT to get the ideal set of coefficients and then window them using the Hamming window. Something like this: We will use normalized frequencies where 1 corresponds to Nyquist (i.e. pi radians / sample) --- your use of 1 for a cutoff showed that you need to re-think what normalized coordinates are.
Assume the pass-band is [f1, f2], let the number of taps in the filter be N, and let desired(f) = 1 if |f| is in [f1, f2], and 0 otherwise:

from numpy import *
from scipy import signal

Ndesign = 256  # assume it is even
f = dft.fftfreq(Ndesign, d=0.5)
af = abs(f)
desired = where((af > f1) & (af < f2), 1, 0)
hideal = ifft(desired).real
win = signal.get_window('hamming', N)
# Now we need to get the actual tap coefficients by windowing this result.
htrunc = dft.fftshift(r_[hideal[:N/2], hideal[-(N-1)/2:]])
hfinal = win * htrunc
# The final coefficients contain the taps of your filter (with the largest
# tap at the center).
# We should probably add something like this to the library.

Best, -Travis From meesters at uni-mainz.de Tue Apr 4 07:14:18 2006 From: meesters at uni-mainz.de (Christian Meesters) Date: Tue, 4 Apr 2006 13:14:18 +0200 Subject: [SciPy-user] implementation for sine transformation? Message-ID: <200604041314.18190.meesters@uni-mainz.de> Hi, I'm looking for a way of calculating a sine transformation in Python, which I'd like to apply to a 3D-array. Does anybody know a (tested) implementation? Or could somebody give me a hint how to achieve this using a function from fftpack as a shortcut? If nothing is already available: would it be worthwhile to implement it in SciPy? In that case I would try to code it with this goal in mind. (But I would need some time, since other projects have a certain priority these weeks.) Cheers Christian From a.u.r.e.l.i.a.n at gmx.net Tue Apr 4 08:08:27 2006 From: a.u.r.e.l.i.a.n at gmx.net (Johannes Loehnert) Date: Tue, 4 Apr 2006 14:08:27 +0200 Subject: [SciPy-user] implementation for sine transformation? In-Reply-To: <200604041314.18190.meesters@uni-mainz.de> References: <200604041314.18190.meesters@uni-mainz.de> Message-ID: <200604041408.27688.a.u.r.e.l.i.a.n@gmx.net> Hi, > I'm looking for a way of calculating a sine transformation in Python, which > I'd like to apply to a 3D-array. Does anybody know a (tested) > implementation?
> Or could somebody give me a hint how to achieve this using > a function from fftpack as a shortcut? There is the function fftpack.fftn, which calculates the n-dimensional Fourier transform. You can then use the symmetries that arise from sin(kx) = 1/(2i) * (exp(ikx) - exp(-ikx)). My intuition says that you probably have to subtract the Fourier coefficients for (+k) and (-k) and multiply by something like 1/2i. Please figure out the details yourself. :-) Johannes From arauzo at decsai.ugr.es Tue Apr 4 14:50:38 2006 From: arauzo at decsai.ugr.es (Antonio Arauzo Azofra) Date: Tue, 04 Apr 2006 20:50:38 +0200 Subject: [SciPy-user] Error using 0.4.8 on code that worked on 0.3.2 Message-ID: <4432BFFE.4020803@decsai.ugr.es> Hello everybody, I am fairly new to Scipy. I was using the following in my code on 0.3.2 and it worked:

>>> mat = [[1,2,3],[4,5,6],[7,8,9]]
>>> import scipy
>>> scipy.cov(mat)
Traceback (most recent call last):
  File "", line 1, in ?
  File "/usr/lib/python2.4/site-packages/numpy/lib/function_base.py", line 649, in cov
    if (m.shape[0] == 1):
AttributeError: 'list' object has no attribute 'shape'

Now it is necessary to always use an array:

>>> amat = array(mat)
Traceback (most recent call last):
  File "", line 1, in ?
NameError: name 'array' is not defined
>>> amat = scipy.array(mat)
>>> scipy.cov(amat)
array([[ 9.,  9.,  9.],
       [ 9.,  9.,  9.],
       [ 9.,  9.,  9.]])

Is this better? If it is (by no means do I mean to criticize the good work done on scipy), maybe it would be better if the 'cov' function checked the type at the beginning and threw a more meaningful error. :-? Just a suggestion. Also I had problems because the cov function returns a plain number (not an array) if the matrix is 1x1. I have used this workaround, but I am not sure if this is the best way to work with Scipy. Any suggestions?
-----------------------------------------------------------
...
disp1 = scipy.cov(class1)  # within class1 scatter matrix
disp2 = scipy.cov(class2)  # within class2 scatter matrix
# We always want covariance matrices, even if they are 1 by 1
if not type(disp1) == type(scipy.array(0)):
    disp1 = scipy.array([[disp1]])
if not type(disp2) == type(scipy.array(0)):
    disp2 = scipy.array([[disp2]])
aux = scipy.add(disp1, disp2)
aux = scipy.divide(aux, 2)
...
-----------------------------------------------------------

-- Regards, Antonio Arauzo Azofra From arserlom at gmail.com Tue Apr 4 15:10:21 2006 From: arserlom at gmail.com (Armando Serrano Lombillo) Date: Tue, 4 Apr 2006 21:10:21 +0200 Subject: [SciPy-user] Linear programming with scipy Message-ID: Hello, I'm looking for a tool to do linear programming with Python. Has this been included in SciPy? Armando Serrano From oliphant at ee.byu.edu Tue Apr 4 16:32:12 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Tue, 04 Apr 2006 14:32:12 -0600 Subject: [SciPy-user] implementation for sine transformation? In-Reply-To: <200604041314.18190.meesters@uni-mainz.de> References: <200604041314.18190.meesters@uni-mainz.de> Message-ID: <4432D7CC.7090802@ee.byu.edu> Christian Meesters wrote: >Hi, > >I'm looking for a way of calculating a sine transformation in Python, which >I'd like to apply on a 3D-array. Does anybody know a (tested) >implementation? Or could somebody give me a hint how to achieve this using a >function from fftpack as a shortcut? > > > If you are talking about the discrete sine transform, then there is an implementation in the sandbox area of SciPy. Here's a link to the Python script (look for the functions dst, dst2, and dstn).
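Johannes's symmetry hint from earlier in this thread can be made concrete. Below is a sketch of a type-I discrete sine transform built from an odd extension and the FFT, written with modern numpy names (the old `dft` module is gone); note that sign and scaling conventions vary between DST definitions, so treat this as one possible convention rather than the canonical one.

```python
import numpy as np

def dst1(x):
    """Type-I discrete sine transform via an odd extension and the FFT.

    Computes X[k] = sum_{n=0}^{N-1} x[n] * sin(pi*(n+1)*(k+1)/(N+1)).
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    # The odd extension [0, x, 0, -reversed(x)] has length 2*(N+1); its FFT
    # is purely imaginary, and bins 1..N equal -2i times the DST coefficients.
    y = np.concatenate(([0.0], x, [0.0], -x[::-1]))
    return -0.5 * np.fft.fft(y)[1:N + 1].imag

# For a 3-D array, apply it along each axis in turn, e.g.:
#   np.apply_along_axis(dst1, axis, arr)
```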
http://projects.scipy.org/scipy/scipy/browser/trunk/Lib/sandbox/image/transforms.py -Travis From webb.sprague at gmail.com Tue Apr 4 17:53:59 2006 From: webb.sprague at gmail.com (Webb Sprague) Date: Tue, 4 Apr 2006 14:53:59 -0700 Subject: [SciPy-user] [SciPy-dev] Statistics review months Message-ID: I just wanted to add my two cents to the "how should we incorporate statistics into scipy" question. Take it with all the grains of salt necessary.... I might be able to play a statistician on TV, but not much more... Point 1: One of the *nicest* things about R (and I suppose S-Plus) is that when you do a statistical procedure, even a simple regression, it calculates all sorts of useful stuff for you, from diagnostic plots to all that fancy ANOVA stuff to the residuals etc, and stores it in an object that you can manipulate later. I am not sure how to do this "pythonically" in scipy, but I think it should be considered when we design a stats extension. Point 2: One of the most *annoying* things about R/S-Plus is that it assumes a user-interaction paradigm rather than a server paradigm. I think that as we conceptualize a scipy extension we should remember that a lot of people might want to do something like make statistical analyses available through a web interface (one reason for going with Python, for me, anyway), and relying on an X system, on-the-fly generated graphics, a persistent session across commands, etc, would make that extremely difficult (I tried...). At the very least, I would need to be able to pickle the analysis results and save them in a database, retrieving both data and graphics (or nicely re-generating the graphics) later. Anyway, I just wanted to get that off my chest.
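The "results object" pattern Webb describes in Point 1 might look like this in Python — a sketch only, with hypothetical names (`OLSResults` is not a real scipy class), showing that the fitted object can also be pickled for the server-style use in Point 2:

```python
import pickle
import numpy as np

class OLSResults:
    """Hold everything derived from a simple least-squares fit, so it
    can be inspected later or pickled into a database."""
    def __init__(self, x, y):
        X = np.column_stack([np.ones_like(x), x])   # intercept + slope
        self.beta, self.rss, self.rank, _ = np.linalg.lstsq(X, y, rcond=None)
        self.fitted = X @ self.beta                 # fitted values
        self.resid = y - self.fitted                # residuals

x = np.array([0.0, 1.0, 2.0, 3.0])
res = OLSResults(x, 2.0 + 3.0 * x)
blob = pickle.dumps(res)   # storable now, unpickled and re-examined later
```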
I hope it makes the world a better place :) From ryanlists at gmail.com Tue Apr 4 20:17:10 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 4 Apr 2006 20:17:10 -0400 Subject: [SciPy-user] latex listings package for python code Message-ID: This is a little off topic, and Fernando is the guy who would know but he is sort of out of email contact right now: does anyone know how to alter line spacing within Python code snippets in LaTeX using the listings package? The documentation for the package seems not to talk about it. I thought I found something in Google, but it doesn't seem to work. It would be great if it worked with floating and non-floating snippets. Thanks, Ryan From meesters at uni-mainz.de Wed Apr 5 04:17:47 2006 From: meesters at uni-mainz.de (Christian Meesters) Date: Wed, 5 Apr 2006 10:17:47 +0200 Subject: [SciPy-user] implementation for sine transformation? In-Reply-To: <200604041314.18190.meesters@uni-mainz.de> References: <200604041314.18190.meesters@uni-mainz.de> Message-ID: <200604051017.48094.meesters@uni-mainz.de> Hi, Thanks a lot Johannes and Travis. Indeed I was looking for a discrete sine transform (I should have mentioned that, too). Yesterday evening I hacked one version, but looking at the sandbox version I was pointed to, I must admit that it looks at first glance cleaner and also probably faster than my own code. Christian From josegomez at gmx.net Wed Apr 5 06:19:12 2006 From: josegomez at gmx.net (Jose Luis Gomez Dans) Date: Wed, 5 Apr 2006 12:19:12 +0200 (MEST) Subject: [SciPy-user] Interpolation and Extrapolation Message-ID: <14754.1144232352@www076.gmx.net> Hi! I want to interpolate and smooth a regular time series, with gaps. Gaps in the middle of the series are not a problem, as I want to have a periodic temporal sampling, and I can interpolate those points. However, I cannot extrapolate points which are in the extrema using the interpolate package (doh!).
A first step is to use the nearest neighbour, but what's the most efficient way of doing this? Is there a way of flagging the gaps (at the moment, they are set to zero), and using maybe interpolate.interp1d to get a first estimate of the time series prior to smoothing? Many thanks! Jose -- Analog/ISDN users save up to 70% with GMX SmartSurfer! Download for free: http://www.gmx.net/de/go/smartsurfer From aisaac at american.edu Wed Apr 5 07:51:15 2006 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 5 Apr 2006 07:51:15 -0400 Subject: [SciPy-user] [SciPy-dev] Statistics review months In-Reply-To: References: Message-ID: On Tue, 4 Apr 2006, Webb Sprague apparently wrote: > Point 1: One of the nicest things about R (and I suppose > S-Plus) is that when you do a statistical procedure, even > a simple regression, it calculates all sorts of useful > stuff for you, from diagnostic plots to all that fancy > ANOVA stuff to the residuals etc, and stores it in an > object that you can manipulate later. I am not sure how > to do this "pythonically" in scipy, but I think it should > be considered when we design a stats extension. Although this has not been the past direction of SciPy, I completely agree. I think that Scientific Python suggests good possibilities. If you work on this, I can pitch in a bit this summer. Cheers, Alan Isaac From citrog at gmail.com Wed Apr 5 15:11:50 2006 From: citrog at gmail.com (Gil Citro) Date: Wed, 5 Apr 2006 15:11:50 -0400 Subject: [SciPy-user] Trouble Compiling SciPy 0.4.8 under Debian 3.1r0a on AMD64 Message-ID: <9c80f12d0604051211u25563a23ya24589f42da53663@mail.gmail.com> I'm trying to install SciPy 0.4.8 from source under Debian 3.1r0a on an AMD64 machine.
When I type python setup.py install, I get this error:

/usr/bin/ld: /usr/local/lib/libfftw3.a(mapflags.o): relocation R_X86_64_32 can not be used when making a shared object; recompile with -fPIC
/usr/local/lib/libfftw3.a: could not read symbols: Bad value
collect2: ld returned 1 exit status
/usr/bin/ld: /usr/local/lib/libfftw3.a(mapflags.o): relocation R_X86_64_32 can not be used when making a shared object; recompile with -fPIC
/usr/local/lib/libfftw3.a: could not read symbols: Bad value
collect2: ld returned 1 exit status
error: Command "/usr/bin/g77 -shared build/temp.linux-x86_64-2.3/build/src/Lib/fftpack/_fftpackmodule.o build/temp.linux-x86_64-2.3/Lib/fftpack/src/zfft.o build/temp.linux-x86_64-2.3/Lib/fftpack/src/drfft.o build/temp.linux-x86_64-2.3/Lib/fftpack/src/zrfft.o build/temp.linux-x86_64-2.3/Lib/fftpack/src/zfftnd.o build/temp.linux-x86_64-2.3/build/src/fortranobject.o -L/usr/local/lib -Lbuild/temp.linux-x86_64-2.3 -ldfftpack -lfftw3 -lg2c-pic -o build/lib.linux-x86_64-2.3/scipy/fftpack/_fftpack.so" failed with exit status 1

FFTW 3.1.1 is installed from source and tests fine with make check. Has anyone seen this problem? The closest match I came across was this, but it doesn't suggest to me how to fix the problem. http://www.mail-archive.com/cross-lfs at linuxfromscratch.org/msg00411.html If anyone has a suggestion about what to try, I'd appreciate it. Thanks. Gil From robert.kern at gmail.com Wed Apr 5 15:16:12 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 05 Apr 2006 14:16:12 -0500 Subject: [SciPy-user] Trouble Compiling SciPy 0.4.8 under Debian 3.1r0a on AMD64 In-Reply-To: <9c80f12d0604051211u25563a23ya24589f42da53663@mail.gmail.com> References: <9c80f12d0604051211u25563a23ya24589f42da53663@mail.gmail.com> Message-ID: <4434177C.5080909@gmail.com> Gil Citro wrote: > I'm trying to install SciPy 0.4.8 from source under Debian 3.1r0a on > an AMD64 machine.
When I type python setup.py install, I get this > error > > /usr/bin/ld: /usr/local/lib/libfftw3.a(mapflags.o): relocation > R_X86_64_32 can not be used when making a shared object; recompile > with -fPIC This error message tells you what you need to do: recompile the FFTW3 libraries using the -fPIC flag to gcc. Otherwise, they can't be linked into shared libraries like Python extension modules. Ideally, you would compile FFTW3 as shared libraries themselves although I'm not sure if the FFTW3 build process makes that easy. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From citrog at gmail.com Wed Apr 5 15:32:50 2006 From: citrog at gmail.com (Gil Citro) Date: Wed, 5 Apr 2006 15:32:50 -0400 Subject: [SciPy-user] Trouble Compiling SciPy 0.4.8 under Debian 3.1r0a on AMD64 In-Reply-To: <4434177C.5080909@gmail.com> References: <9c80f12d0604051211u25563a23ya24589f42da53663@mail.gmail.com> <4434177C.5080909@gmail.com> Message-ID: <9c80f12d0604051232n4d427997o9759dda651d683c6@mail.gmail.com> On 4/5/06, Robert Kern wrote: > Gil Citro wrote: > > I'm trying to install SciPy 0.4.8 from source under Debian 3.1r0a on > > an AMD64 machine. When I type python setup.py install, I get this > > error > > > > /usr/bin/ld: /usr/local/lib/libfftw3.a(mapflags.o): relocation > > R_X86_64_32 can not be used when making a shared object; recompile > > with -fPIC > > This error message tells you what you need to do: recompile the FFTW3 libraries > using the -fPIC flag to gcc. Otherwise, they can't be linked into shared > libraries like Python extension modules. Ideally, you would compile FFTW3 as > shared libraries themselves although I'm not sure if the FFTW3 build process > makes that easy. > Thanks, but would you have any idea how to do that? 
I tried modifying the Makefile to add -fPIC to CFLAGS, CPPFLAGS, and FFLAGS but it didn't change the error when attempting to build SciPy. I'm not that familiar with compiling software under Linux. Thanks again if you have any other suggestions. Gil From robert.kern at gmail.com Wed Apr 5 15:48:09 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 05 Apr 2006 14:48:09 -0500 Subject: [SciPy-user] Trouble Compiling SciPy 0.4.8 under Debian 3.1r0a on AMD64 In-Reply-To: <9c80f12d0604051232n4d427997o9759dda651d683c6@mail.gmail.com> References: <9c80f12d0604051211u25563a23ya24589f42da53663@mail.gmail.com> <4434177C.5080909@gmail.com> <9c80f12d0604051232n4d427997o9759dda651d683c6@mail.gmail.com> Message-ID: <44341EF9.7050107@gmail.com> Gil Citro wrote: > On 4/5/06, Robert Kern wrote: > >>Gil Citro wrote: >> >>>I'm trying to install SciPy 0.4.8 from source under Debian 3.1r0a on >>>an AMD64 machine. When I type python setup.py install, I get this >>>error >>> >>>/usr/bin/ld: /usr/local/lib/libfftw3.a(mapflags.o): relocation >>>R_X86_64_32 can not be used when making a shared object; recompile >>>with -fPIC >> >>This error message tells you what you need to do: recompile the FFTW3 libraries >>using the -fPIC flag to gcc. Otherwise, they can't be linked into shared >>libraries like Python extension modules. Ideally, you would compile FFTW3 as >>shared libraries themselves although I'm not sure if the FFTW3 build process >>makes that easy. > > Thanks, but would you have any idea how to do that? I tried modifying > the Makefile to add -fPIC to CFLAGS, CPPFLAGS, and FFLAGS but it > didn't change the error when attempting to build SciPy. I'm not that > familiar with compiling software under Linux. Thanks again if you > have any other suggestions. Is there a reason you aren't using the Debian package which provides the shared libraries? $ sudo apt-get install fftw3-dev If you must build from source, pass --enable-shared to the configure script to get shared libraries. 
If you must use static libraries, pass --with-pic to the configure script to ensure that the static libraries are built with the -fPIC flag. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From citrog at gmail.com Wed Apr 5 16:22:23 2006 From: citrog at gmail.com (Gil Citro) Date: Wed, 5 Apr 2006 16:22:23 -0400 Subject: [SciPy-user] Trouble Compiling SciPy 0.4.8 under Debian 3.1r0a on AMD64 In-Reply-To: <44341EF9.7050107@gmail.com> References: <9c80f12d0604051211u25563a23ya24589f42da53663@mail.gmail.com> <4434177C.5080909@gmail.com> <9c80f12d0604051232n4d427997o9759dda651d683c6@mail.gmail.com> <44341EF9.7050107@gmail.com> Message-ID: <9c80f12d0604051322ob04137du62721797fd5dffc7@mail.gmail.com> On 4/5/06, Robert Kern wrote: > > Is there a reason you aren't using the Debian package which provides the shared > libraries? > > $ sudo apt-get install fftw3-dev > > If you must build from source, pass --enable-shared to the configure script to > get shared libraries. If you must use static libraries, pass --with-pic to the > configure script to ensure that the static libraries are built with the -fPIC flag. > fftw3-dev was already installed, which is why I decided to try compiling from source. Passing --enable-shared to configure results in this error /usr/bin/ld: kernel/.libs/libkernel.a(alloc.o): relocation R_X86_64_32 can not be used when making a shared object; recompile with -fPIC Passing -with-pic causes no problem when compiling FFTW, but when building SciPy I get the same error as before. I'm starting to think it might have been a mistake to load the AMD64 kernel on this machine. If anyone has any other suggestions let me know, otherwise I might just try starting over with a 32 bit kernel. Thanks again! 
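For reference, Robert's two configure options as concrete commands, plus one guess that is not from the thread: if the tree is not cleaned between configurations, stale non-PIC object files from the earlier static build can keep triggering the same relocation error.

```shell
# Rebuilding FFTW3 so it can link into Python extension modules
# (version and install prefix are just examples).
cd fftw-3.1.1
make distclean                # guess: clear any stale non-PIC objects first
./configure --enable-shared   # preferred: shared libraries, or instead:
# ./configure --with-pic      # static libraries built with -fPIC
make
sudo make install
```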
Gil From rahul.kanwar at gmail.com Wed Apr 5 18:33:40 2006 From: rahul.kanwar at gmail.com (Rahul Kanwar) Date: Wed, 5 Apr 2006 18:33:40 -0400 Subject: [SciPy-user] Numpy on 64 bit Xeon with ifort and mkl Message-ID: <63dec5bf0604051533n51bce00at2e4117b3bc2aeab9@mail.gmail.com> Hello, I am trying to compile Numpy on a 64 bit Xeon with ifort and the mkl libraries, running Suse 10.0 linux. I had set the MKLROOT variable to the mkl library root but it couldn't find the 64 bit library. After a little bit of snooping I found the following in numpy/distutils/cpuinfo.py

------------------------------
def _is_XEON(self):
    return re.match(r'.*?XEON\b', self.info[0]['model name']) is not None
_is_Xeon = _is_XEON
------------------------------

I changed XEON to Xeon and it worked and was able to identify the em64t libraries. But it again got stuck with the following message. I used the following command to build Numpy: python setup.py config_fc --fcompiler=intel install

------------------------------
building 'numpy.core._dotblas' extension
compiling C sources
gcc options: '-pthread -fno-strict-aliasing -DNDEBUG -O2 -fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g -fPIC'
compile options: '-Inumpy/core/blasdot -I/opt/intel/mkl/8.0.2/include -Inumpy/core/include -Ibuild/src/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/include/python2.4 -c'
gcc -pthread -shared build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o -L/opt/intel/mkl/8.0.2/lib/em64t -lmkl_em64t -lmkl -lvml -lguide -lpthread -o build/lib.linux-x86_64-2.4/numpy/core/_dotblas.so
/usr/lib64/gcc/x86_64-suse-linux/4.0.2/../../../../x86_64-suse-linux/bin/ld: /opt/intel/mkl/8.0.2/lib/em64t/libmkl_em64t.a(def_cgemm_omp.o): relocation R_X86_64_PC32 against `_mkl_blas_def_cgemm_276__par_loop0' can not be used when making a shared object; recompile with -fPIC
/usr/lib64/gcc/x86_64-suse-linux/4.0.2/../../../../x86_64-suse-linux/bin/ld: final link failed: Bad value
collect2: ld returned 1 exit status
/usr/lib64/gcc/x86_64-suse-linux/4.0.2/../../../../x86_64-suse-linux/bin/ld: /opt/intel/mkl/8.0.2/lib/em64t/libmkl_em64t.a(def_cgemm_omp.o): relocation R_X86_64_PC32 against `_mkl_blas_def_cgemm_276__par_loop0' can not be used when making a shared object; recompile with -fPIC
/usr/lib64/gcc/x86_64-suse-linux/4.0.2/../../../../x86_64-suse-linux/bin/ld: final link failed: Bad value
collect2: ld returned 1 exit status
error: Command "gcc -pthread -shared build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o -L/opt/intel/mkl/8.0.2/lib/em64t -lmkl_em64t -lmkl -lvml -lguide -lpthread -o build/lib.linux-x86_64-2.4/numpy/core/_dotblas.so" failed with exit status 1
----------------------------------------------

I successfully compiled it without the -lmkl_em64t flag, but when I import numpy in python it gives an error that some symbol is missing. I think that maybe if I use ifort as the linker instead of gcc then things will work out properly, but I couldn't find how to change the linker to ifort. Anyone there who can help me with this problem? regards, Rahul From jaonary at free.fr Thu Apr 6 04:00:50 2006 From: jaonary at free.fr (jaonary at free.fr) Date: Thu, 06 Apr 2006 10:00:50 +0200 Subject: [SciPy-user] scipy.io.write_array and the precision Message-ID: <1144310450.4434cab2b9df4@imp3-g19.free.fr> Hi all, I'm trying to write an ascii output of my computation. To do this I'm planning to use the package io of scipy. My problem is that I can't get a simple ascii output. The write_array function writes the numbers in scientific form: 110.330e10. I'd like to write my numbers simply as 122.00323123. So how can I do that with this module? Is it possible, or do I have to do this by hand?
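Plain fixed-point output needs nothing more than a printf-style format string; a minimal sketch (the '%.8f' format and the file name are arbitrary examples, not scipy.io API):

```python
import numpy as np

# Write each row on its own line, with fixed-point formatting instead
# of scientific notation.
arr = np.array([[122.00323123, 110.330e1],
                [1.5, 2.25]])
lines = [' '.join('%.8f' % v for v in row) for row in arr]
with open('out.txt', 'w') as fh:
    fh.write('\n'.join(lines) + '\n')
# first line of out.txt: '122.00323123 1103.30000000'
```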
Best regards From oliphant.travis at ieee.org Thu Apr 6 05:55:49 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 06 Apr 2006 03:55:49 -0600 Subject: [SciPy-user] scipy.io.write_array and the precision In-Reply-To: <1144310450.4434cab2b9df4@imp3-g19.free.fr> References: <1144310450.4434cab2b9df4@imp3-g19.free.fr> Message-ID: <4434E5A5.5040602@ieee.org> jaonary at free.fr wrote: > Hi all, > I'm trying to write an ascii output of my computation. To do this I'm planning to > use the package io of scipy. The write_array function is pretty simple and can be adjusted as you like. But, if you are using NumPy (the new one...), then you can also write ascii data to a file using the tofile method:

arr.tofile('myfile.txt', sep=' ')

will do it... -Travis From icy.flame.gm at gmail.com Thu Apr 6 07:41:09 2006 From: icy.flame.gm at gmail.com (iCy-fLaME) Date: Thu, 6 Apr 2006 12:41:09 +0100 Subject: [SciPy-user] FAIL tests, seems fftpack related, check_dot takes forever. Message-ID: Problem: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple) seems to take forever on my system; it has been running at 100% CPU time for at least five minutes on this P4 3GHz machine, so I cancelled the test in the end. Several FAIL tests related to fftpack; not sure whether they are important or not. Can someone point me in a direction to resolve the problems, or indeed is there a problem to begin with?

OS: Fedora Core 4
Platform: i686 (smp sse sse2)

Installed packages from rpm repo:
atlas-3.6.0-9.fc4
atlas-sse-3.6.0-9.fc4
atlas-sse2-3.6.0-9.fc4
atlas-3dnow-3.6.0-9.fc4
fftw-3.1-3.fc4
fftw-devel-3.1-3.fc4
fftw2-2.1.5-11.fc4
fftw2-devel-2.1.5-11.fc4
python-2.4.1-2

Installed from source:
numpy 0.9.6
scipy 0.4.8

Failed test log:
check_definition (scipy.fftpack.tests.test_pseudo_diffs.test_hilbert) ... FAIL
check_random_even (scipy.fftpack.tests.test_pseudo_diffs.test_hilbert) ... FAIL
check_tilbert_relation (scipy.fftpack.tests.test_pseudo_diffs.test_hilbert) ...
FAIL
check_definition (scipy.fftpack.tests.test_pseudo_diffs.test_ihilbert) ... FAIL
check_itilbert_relation (scipy.fftpack.tests.test_pseudo_diffs.test_ihilbert) ... FAIL
check_definition (scipy.fftpack.tests.test_pseudo_diffs.test_itilbert) ... FAIL
check_definition (scipy.fftpack.tests.test_pseudo_diffs.test_shift) ... FAIL
check_definition (scipy.fftpack.tests.test_pseudo_diffs.test_tilbert) ... FAIL
check_random_even (scipy.fftpack.tests.test_pseudo_diffs.test_tilbert) ... FAIL
line-search Newton conjugate gradient optimization routine ... ERROR
check_definition (scipy.fftpack.tests.test_basic.test_fft) ... FAIL
check_djbfft (scipy.fftpack.tests.test_basic.test_fft) ... FAIL
check_definition (scipy.fftpack.tests.test_basic.test_fftn) ... FAIL
check_definition (scipy.fftpack.tests.test_basic.test_ifft) ... FAIL
check_djbfft (scipy.fftpack.tests.test_basic.test_ifft) ... FAIL
check_random_complex (scipy.fftpack.tests.test_basic.test_ifft) ... FAIL
check_random_real (scipy.fftpack.tests.test_basic.test_ifft) ... FAIL
check_definition (scipy.fftpack.tests.test_basic.test_ifftn) ... FAIL
check_definition (scipy.fftpack.tests.test_basic.test_irfft) ... FAIL
check_dot (scipy.lib.tests.test_blas.test_fblas1_simple)
Traceback (most recent call last):
  File "", line 1, in ?
  File "/usr/lib/python2.4/site-packages/scipy/__init__.py", line 67, in test
    return ScipyTest(scipy).test(level, verbosity)
  File "/usr/lib/python2.4/site-packages/numpy/testing/numpytest.py", line 438, in test
    runner.run(all_tests)
  File "/usr/lib/python2.4/unittest.py", line 696, in run
    test(result)
  File "/usr/lib/python2.4/unittest.py", line 428, in __call__
    return self.run(*args, **kwds)
  File "/usr/lib/python2.4/unittest.py", line 424, in run
    test(result)
  File "/usr/lib/python2.4/site-packages/numpy/testing/numpytest.py", line 139, in __call__
    unittest.TestCase.__call__(self, result)
  File "/usr/lib/python2.4/unittest.py", line 281, in __call__
    return self.run(*args, **kwds)
  File "/usr/lib/python2.4/unittest.py", line 260, in run
    testMethod()
  File "/usr/lib/python2.4/site-packages/scipy/lib/blas/tests/test_blas.py", line 76, in check_dot
    assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j)

-- iCy-fLaME The body maybe wounded, but it is the mind that hurts. From schofield at ftw.at Thu Apr 6 08:55:22 2006 From: schofield at ftw.at (Ed Schofield) Date: Thu, 06 Apr 2006 14:55:22 +0200 Subject: [SciPy-user] Conditional maximum entropy models Message-ID: <44350FBA.4060002@ftw.at> Hi all, I've just merged new functionality for conditional maximum entropy models into the SVN trunk. There are two small example scripts with comments that demonstrate how to use it, and docstrings that describe the class and its methods in more detail. Thanks to Matt Cooper for his feedback on getting it working! -- Ed From jaonary at free.fr Thu Apr 6 09:18:59 2006 From: jaonary at free.fr (jaonary at free.fr) Date: Thu, 06 Apr 2006 15:18:59 +0200 Subject: [SciPy-user] scipy.io.write_array and the precision In-Reply-To: <4434E5A5.5040602@ieee.org> References: <1144310450.4434cab2b9df4@imp3-g19.free.fr> <4434E5A5.5040602@ieee.org> Message-ID: <1144329539.44351543365fb@imp2-g19.free.fr> Selon Travis Oliphant : > arr.tofile('myfile.txt',sep=' ') > > will do it... > Thank you for your answer.
In fact, with arr.tofile() things go well; there's just one more little problem. With this method (tofile) my array is written on one line. How can I put one row per line? Also, where can I find more information on these built-in methods of the array object and related stuff? I tried something like this: help(numpy.array.tofile) but there's nothing, and the same on the web site. From ryanlists at gmail.com Thu Apr 6 09:32:12 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 6 Apr 2006 09:32:12 -0400 Subject: [SciPy-user] scipy.io.write_array and the precision In-Reply-To: <1144329539.44351543365fb@imp2-g19.free.fr> References: <1144310450.4434cab2b9df4@imp3-g19.free.fr> <4434E5A5.5040602@ieee.org> <1144329539.44351543365fb@imp2-g19.free.fr> Message-ID: I see this as well:

mymat = rand(3,3)

In [5]: mymat
Out[5]:
array([[ 0.19463406,  0.92311955,  0.18562841],
       [ 0.31952113,  0.42110699,  0.91320285],
       [ 0.2302922 ,  0.04191094,  0.13106267]])

mymat.tofile('mymat.txt', sep='\t') or sep=' ' produces a file with only one line.

In [11]: mat2 = scipy.io.read_array('mymat.txt')
In [12]: mat2
Out[12]:
array([ 0.19463406, 0.92311955, 0.18562841, 0.31952113, 0.42110699,
        0.91320285, 0.2302922 , 0.04191094, 0.13106267])
In [13]: shape(mat2)
Out[13]: (9,)

So, how do you read these back in without knowing their shape ahead of time? Ryan On 4/6/06, jaonary at free.fr wrote:
> Selon Travis Oliphant :
> > arr.tofile('myfile.txt',sep=' ')
> >
> > will do it...
>
> Thank you for your answer. In fact, with arr.tofile() things go well; there's
> just one more little problem. With this method (tofile) my array is written on
> one line. How can I put one row per line? Also, where can I find more
> information on these built-in methods of the array object and related stuff?
> I tried something like this:
> help(numpy.array.tofile)
> but there's nothing, and the same on the web site.
> It seems that the documentation on numpy is difficult to find :-)
>
> Jaonary
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-user
>

From aisaac at american.edu Thu Apr 6 10:14:25 2006 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 6 Apr 2006 10:14:25 -0400 Subject: [SciPy-user] scipy.io.write_array and the precision In-Reply-To: <1144310450.4434cab2b9df4@imp3-g19.free.fr> References: <1144310450.4434cab2b9df4@imp3-g19.free.fr> Message-ID: On Thu, 06 Apr 2006, jaonary at free.fr apparently wrote:
> I'm trying to write an ascii output of my computation. To do this I'm planning to
> use the package io of scipy. My problem is that I can't get a simple ascii
> output. The write_array function writes the numbers in scientific form:
> 110.330e10. I'd like to write my numbers simply as 122.00323123.

Possibly of use (below). Cheers, Alan Isaac

>>> help(N.set_printoptions)
Help on function set_printoptions in module numpy.core.arrayprint:

set_printoptions(precision=None, threshold=None, edgeitems=None, linewidth=None, suppress=None)
    Set options associated with printing.

    precision   the default number of digits of precision for floating
                point output (default 8)
    threshold   total number of array elements which trigger summarization
                rather than full repr. (default 1000)
    edgeitems   number of array items in summary at beginning and end of
                each dimension. (default 3)
    linewidth   the number of characters per line for the purpose of
                inserting line breaks.
                (default 75)
    suppress    Boolean value indicating whether or not to suppress printing
                of small floating point values using scientific notation
                (default False)

From jaonary at free.fr Thu Apr 6 10:36:19 2006 From: jaonary at free.fr (jaonary at free.fr) Date: Thu, 06 Apr 2006 16:36:19 +0200 Subject: [SciPy-user] scipy.io.write_array and the precision In-Reply-To: References: <1144310450.4434cab2b9df4@imp3-g19.free.fr> Message-ID: <1144334179.443527633b43b@imp2-g19.free.fr> I had a look into io.array_import.py and made the following modification:

def str_array(arr, precision=5, col_sep=' ', row_sep="\n", ss=0):
    # --> added
    if precision > -1:
        fmtstr = "%%.%de" % precision
    else:
        fmtstr = "%f"

And now I have what I need when I use io.write_array(file, array, precision=-1). Jaonary From ryanlists at gmail.com Thu Apr 6 10:49:44 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Thu, 6 Apr 2006 10:49:44 -0400 Subject: [SciPy-user] scipy.io.write_array and the precision In-Reply-To: <1144334179.443527633b43b@imp2-g19.free.fr> References: <1144310450.4434cab2b9df4@imp3-g19.free.fr> <1144334179.443527633b43b@imp2-g19.free.fr> Message-ID: That seems like a decent solution. Should the code be patched to replace precision with fmtstr="%0.5e" so that users can pass in a more general formatting string? Or add fmtstr=None, use it if it is given, and fall back to fmtstr = "%%.%de" % precision if fmtstr is None?
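Ryan's fmtstr idea can be sketched like this (`format_row` is a hypothetical helper, not the real io.write_array, whose signature has more parameters):

```python
# Accept an optional printf-style format string, falling back to the
# old precision-based behaviour when none is given.
def format_row(row, precision=5, fmtstr=None, col_sep=' '):
    if fmtstr is None:
        fmtstr = "%%.%de" % precision   # e.g. precision=5 -> '%.5e'
    return col_sep.join(fmtstr % v for v in row)

format_row([1.5, 2.0])                # -> '1.50000e+00 2.00000e+00'
format_row([1.5, 2.0], fmtstr='%f')   # -> '1.500000 2.000000'
```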
Ryan On 4/6/06, jaonary at free.fr wrote:
> I had a look into the io.array_import.py and did the following modification :
>
> def str_array(arr, precision=5,col_sep=' ',row_sep="\n",ss=0):
>
> --> added
> if precision > -1 :
>     fmtstr = "%%.%de" % precision
> else :
>     fmtstr = "%f"
>
> And now I have what I need when I use io.write_array(file,array,precision=-1)
>
> Jaonary
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.net
> http://www.scipy.net/mailman/listinfo/scipy-user
>

From rahul.kanwar at gmail.com Thu Apr 6 20:51:36 2006 From: rahul.kanwar at gmail.com (Rahul Kanwar) Date: Thu, 6 Apr 2006 20:51:36 -0400 Subject: [SciPy-user] Error while installing Scipy Message-ID: <63dec5bf0604061751r1b7a038dm29fc9c69a20bb666@mail.gmail.com> Hello, I am working on a 64 bit Xeon machine running Suse 10. I am using Intel's MKL and ifort for compiling Numpy and Scipy. I successfully compiled Numpy using this combination and was able to load it in the python interpreter.
But i get errors while compiling Scipy, here is what i am getting: --------------------------------------------------------------- ifort: Command line warning: ignoring unknown option '-fno-second-underscore' ifort: Command line warning: overriding '-O3' with '-O2' ifort:f77: Lib/integrate/odepack/cfode.f ifort: Command line warning: ignoring option '-W'; no argument required ifort: Command line warning: ignoring unknown option '-fno-second-underscore' ifort: Command line warning: overriding '-O3' with '-O2' ifort:f77: Lib/integrate/odepack/iprep.f ifort: Command line warning: ignoring option '-W'; no argument required ifort: Command line warning: ignoring unknown option '-fno-second-underscore' ifort: Command line warning: overriding '-O3' with '-O2' ifort:f77: Lib/integrate/odepack/prepj.f ifort: Command line warning: ignoring option '-W'; no argument required ifort: Command line warning: ignoring unknown option '-fno-second-underscore' ifort: Command line warning: overriding '-O3' with '-O2' ar: adding 50 object files to build/temp.linux-x86_64-2.4/libodepack.a ar: adding 9 object files to build/temp.linux-x86_64-2.4/libodepack.a running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext customize GnuFCompiler customize IntelFCompiler customize LaheyFCompiler customize PGroupFCompiler customize AbsoftFCompiler customize NAGFCompiler customize VastFCompiler customize GnuFCompiler customize CompaqFCompiler customize IntelItaniumFCompiler customize Gnu95FCompiler customize G95FCompiler customize GnuFCompiler customize Gnu95FCompiler warning: build_ext: fcompiler=gnu is not available. 
building 'scipy.cluster._vq' extension compiling C++ sources c++ options: '-pthread -fno-strict-aliasing -DNDEBUG -O2 -fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g -fPIC' creating build/temp.linux-x86_64-2.4/Lib/cluster creating build/temp.linux-x86_64-2.4/Lib/cluster/src compile options: '-I/usr/local/lib64/python2.4/site-packages/numpy/core/include -I/usr/include/python2.4 -c' c++: Lib/cluster/src/vq_wrap.cpp In file included from /usr/include/python2.4/Python.h:8, from Lib/cluster/src/vq_wrap.cpp:176: /usr/include/python2.4/pyconfig.h:838:1: warning: "_POSIX_C_SOURCE" redefined In file included from /usr/include/string.h:26, from Lib/cluster/src/vq_wrap.cpp:27: /usr/include/features.h:154:1: warning: this is the location of the previous definition In file included from Lib/cluster/src/vq_wrap.cpp:499: Lib/cluster/src/vq.h:57:7: warning: no newline at end of file Lib/cluster/src/vq_wrap.cpp: In function 'int char_to_numtype(char)': Lib/cluster/src/vq_wrap.cpp:590: warning: control reaches end of non-void function Lib/cluster/src/vq_wrap.cpp: In function 'int char_to_size(char)': Lib/cluster/src/vq_wrap.cpp:582: warning: control reaches end of non-void function Lib/cluster/src/vq_wrap.cpp: At global scope: Lib/cluster/src/vq_wrap.cpp:147: warning: 'void* SWIG_TypeQuery(const char*)' defined but not used Lib/cluster/src/vq_wrap.cpp:301: warning: 'void SWIG_addvarlink(PyObject*, char*, PyObject* (*)(), int (*)(PyObject*))' defined but not used Lib/cluster/src/vq_wrap.cpp:315: warning: 'int SWIG_ConvertPtr(PyObject*, void**, swig_type_info*, int)' defined but not used Lib/cluster/src/vq_wrap.cpp:516: warning: 'PyObject* l_output_helper(PyObject*, PyObject*)' defined but not used [] c++ -pthread -shared build/temp.linux-x86_64-2.4/Lib/cluster/src/vq_wrap.o -Lbuild/temp.linux-x86_64-2.4 -o build/lib.linux-x86_64-2.4/scipy/cluster/_vq.so building 'scipy.integrate._quadpack' extension compiling C sources gcc options: '-pthread -fno-strict-aliasing -DNDEBUG -O2 
-fmessage-length=0 -Wall -D_FORTIFY_SOURCE=2 -g -fPIC' compile options: '-I/usr/local/lib64/python2.4/site-packages/numpy/core/include -I/usr/include/python2.4 -c' gcc: Lib/integrate/_quadpackmodule.c In file included from Lib/integrate/_quadpackmodule.c:6: Lib/integrate/__quadpack.h: In function 'quad_function': Lib/integrate/__quadpack.h:60: warning: unused variable 'nb' Traceback (most recent call last): File "setup.py", line 48, in ? setup_package() File "setup.py", line 41, in setup_package setup( **config.todict() ) File "/usr/local/lib64/python2.4/site-packages/numpy/distutils/core.py", line 85, in setup return old_setup(**new_attr) File "/usr/lib64/python2.4/distutils/core.py", line 149, in setup dist.run_commands() File "/usr/lib64/python2.4/distutils/dist.py", line 946, in run_commands self.run_command(cmd) File "/usr/lib64/python2.4/distutils/dist.py", line 966, in run_command cmd_obj.run() File "/usr/lib64/python2.4/distutils/command/build.py", line 112, in run self.run_command(cmd_name) File "/usr/lib64/python2.4/distutils/cmd.py", line 333, in run_command self.distribution.run_command(command) File "/usr/lib64/python2.4/distutils/dist.py", line 966, in run_command cmd_obj.run() File "/usr/local/lib64/python2.4/site-packages/numpy/distutils/command/build_ext.py", line 109, in run self.build_extensions() File "/usr/lib64/python2.4/distutils/command/build_ext.py", line 405, in build_extensions self.build_extension(ext) File "/usr/local/lib64/python2.4/site-packages/numpy/distutils/command/build_ext.py", line 301, in build_extension link = self.fcompiler.link_shared_object AttributeError: 'NoneType' object has no attribute 'link_shared_object' -------------------------------------------------------------------------------- Can anyone please help me with it ? Thanks. 
regards, Rahul From jonathan.taylor at stanford.edu Thu Apr 6 21:00:47 2006 From: jonathan.taylor at stanford.edu (Jonathan Taylor) Date: Thu, 06 Apr 2006 18:00:47 -0700 Subject: [SciPy-user] iterating through permutations Message-ID: <4435B9BF.7010505@stanford.edu> just wondering -- is there any easy way to iterate over all permutations of, say, K integers in scipy? i know the package probstat does this, just wondered if it existed in scipy..... thanks, jonathan -- ------------------------------------------------------------------------ I'm part of the Team in Training: please support our efforts for the Leukemia and Lymphoma Society! http://www.active.com/donate/tntsvmb/tntsvmbJTaylor GO TEAM !!! ------------------------------------------------------------------------ Jonathan Taylor Tel: 650.723.9230 Dept. of Statistics Fax: 650.725.8977 Sequoia Hall, 137 www-stat.stanford.edu/~jtaylo 390 Serra Mall Stanford, CA 94305 -------------- next part -------------- A non-text attachment was scrubbed... Name: jonathan.taylor.vcf Type: text/x-vcard Size: 329 bytes Desc: not available URL: From aisaac at american.edu Thu Apr 6 21:21:22 2006 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 6 Apr 2006 21:21:22 -0400 Subject: [SciPy-user] iterating through permutations In-Reply-To: <4435B9BF.7010505@stanford.edu> References: <4435B9BF.7010505@stanford.edu> Message-ID: On Thu, 06 Apr 2006, Jonathan Taylor apparently wrote: > just wondering -- is there any easy way to iterate over > all permutations of, say, K integers in scipy? At http://www.american.edu/econ/pytrix/pytrix.py find the below. Cheers, Alan Isaac

def permuteg(lst):
    '''Return generator of all permutations of a list.
    :type `lst`: sequence
    :rtype: list of lists
    :return: all permutations of `lst`
    :requires: Python 2.4+
    :note: recursive
    :since: 2005-06-20
    '''
    return ([lst[i]]+x for i in range(len(lst))
            for x in permute(lst[:i]+lst[i+1:])) \
           or [[]]

From robert.kern at gmail.com Thu Apr 6 21:21:29 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 06 Apr 2006 20:21:29 -0500 Subject: [SciPy-user] Error while installing Scipy In-Reply-To: <63dec5bf0604061751r1b7a038dm29fc9c69a20bb666@mail.gmail.com> References: <63dec5bf0604061751r1b7a038dm29fc9c69a20bb666@mail.gmail.com> Message-ID: <4435BE99.6030407@gmail.com> Rahul Kanwar wrote: > Hello, > I am working on 64 bit Xeon machine running Suse 10. I am using > Intel's MKL and ifort for compiling Numpy and Scipy. > I succesfuly compiled Numpy using this combination and was able to > load it in the python interpreter. But i get errors while compiling > Scipy, here is what i am getting: > > --------------------------------------------------------------- > ifort: Command line warning: ignoring unknown option '-fno-second-underscore' > ifort: Command line warning: overriding '-O3' with '-O2' > ifort:f77: Lib/integrate/odepack/cfode.f > ifort: Command line warning: ignoring option '-W'; no argument required > ifort: Command line warning: ignoring unknown option '-fno-second-underscore' > ifort: Command line warning: overriding '-O3' with '-O2' > ifort:f77: Lib/integrate/odepack/iprep.f > ifort: Command line warning: ignoring option '-W'; no argument required > ifort: Command line warning: ignoring unknown option '-fno-second-underscore' > ifort: Command line warning: overriding '-O3' with '-O2' > ifort:f77: Lib/integrate/odepack/prepj.f > ifort: Command line warning: ignoring option '-W'; no argument required > ifort: Command line warning: ignoring unknown option '-fno-second-underscore' > ifort: Command line warning: overriding '-O3' with '-O2' > ar: adding 50 object files to build/temp.linux-x86_64-2.4/libodepack.a > ar: adding 9
object files to build/temp.linux-x86_64-2.4/libodepack.a > running build_ext > customize UnixCCompiler > customize UnixCCompiler using build_ext > customize GnuFCompiler > customize IntelFCompiler > customize LaheyFCompiler > customize PGroupFCompiler > customize AbsoftFCompiler > customize NAGFCompiler > customize VastFCompiler > customize GnuFCompiler > customize CompaqFCompiler > customize IntelItaniumFCompiler > customize Gnu95FCompiler > customize G95FCompiler > customize GnuFCompiler > customize Gnu95FCompiler > warning: build_ext: fcompiler=gnu is not available. What was the command-line that you used? It appears that you did not set --fcompiler=intel on the build_ext command. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From nmarais at sun.ac.za Fri Apr 7 05:34:38 2006 From: nmarais at sun.ac.za (Neilen Marais) Date: Fri, 07 Apr 2006 11:34:38 +0200 Subject: [SciPy-user] F2PY stopped working with new scipy Message-ID: Hi I'm using F2PY with the Intel Fortran compiler under Ubuntu Breezy (5.10) AMD64 to wrap some F90 code. If I use the old scipy (package version 0.3.2-2ubuntu1) with: F2PY Version:2.46.243_2020 scipy.distutils Version: 0.3.2 the following test command works as expected: $ f2py --fcompiler=intel -m testmod -c test_data.f90 test_prog.f90 Before installing the new scipy, I removed all the f2py files, since it is now included with scipy. I also removed the ubuntu scipy package.
After installing numpy-0.9.6 and scipy-0.4.8, trying to generate the wrappers results in the following output: $ f2py --fcompiler=intel -m testmod -c test_data.f90 test_prog.f90 running build running config_fc running build_src building extension "testmod" sources f2py options: [] f2py:> /tmp/tmp_cZX2X/src/testmodmodule.c creating /tmp/tmp_cZX2X creating /tmp/tmp_cZX2X/src Reading fortran codes... Reading file 'test_data.f90' (format:free) Reading file 'test_prog.f90' (format:free) Post-processing... Block: testmod Block: data Block: prog Block: init_data Block: process_data Post-processing (stage 2)... Block: testmod Block: unknown_interface Block: data Block: prog Block: init_data Block: process_data Building modules... Building module "testmod"... Constructing F90 module support for "data"... Variables: test_arr Constructing F90 module support for "prog"... Constructing wrapper function "prog.init_data"... init_data() Constructing wrapper function "prog.process_data"... process_data(factors,[n]) Wrote C/API module "testmod" to file "/tmp/tmp_cZX2X/src/testmodmodule.c" Fortran 90 wrappers are saved to "/tmp/tmp_cZX2X/src/testmod-f2pywrappers2.f90" adding '/tmp/tmp_cZX2X/src/fortranobject.c' to sources. adding '/tmp/tmp_cZX2X/src' to include_dirs. copying /usr/lib/python2.4/site-packages/numpy/f2py/src/fortranobject.c -> /tmp/tmp_cZX2X/src copying /usr/lib/python2.4/site-packages/numpy/f2py/src/fortranobject.h -> /tmp/tmp_cZX2X/src adding '/tmp/tmp_cZX2X/src/testmod-f2pywrappers2.f90' to sources. running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext Could not locate executable efort Could not locate executable efc warning: build_ext: fcompiler=intel is not available. 
building 'testmod' extension compiling C sources gcc options: '-pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC' creating /tmp/tmp_cZX2X/tmp creating /tmp/tmp_cZX2X/tmp/tmp_cZX2X creating /tmp/tmp_cZX2X/tmp/tmp_cZX2X/src compile options: '-I/tmp/tmp_cZX2X/src -I/usr/lib/python2.4/site-packages/numpy/core/include -I/usr/include/python2.4 -c' gcc: /tmp/tmp_cZX2X/src/fortranobject.c gcc: /tmp/tmp_cZX2X/src/testmodmodule.c Traceback (most recent call last): File "/usr/bin/f2py", line 6, in ? f2py.main() File "/usr/lib/python2.4/site-packages/numpy/f2py/f2py2e.py", line 546, in main run_compile() File "/usr/lib/python2.4/site-packages/numpy/f2py/f2py2e.py", line 533, in run_compile setup(ext_modules = [ext]) File "/usr/lib/python2.4/site-packages/numpy/distutils/core.py", line 85, in setup return old_setup(**new_attr) File "/usr/lib/python2.4/distutils/core.py", line 149, in setup dist.run_commands() File "/usr/lib/python2.4/distutils/dist.py", line 946, in run_commands self.run_command(cmd) File "/usr/lib/python2.4/distutils/dist.py", line 966, in run_command cmd_obj.run() File "/usr/lib/python2.4/distutils/command/build.py", line 112, in run self.run_command(cmd_name) File "/usr/lib/python2.4/distutils/cmd.py", line 333, in run_command self.distribution.run_command(command) File "/usr/lib/python2.4/distutils/dist.py", line 966, in run_command cmd_obj.run() File "/usr/lib/python2.4/site-packages/numpy/distutils/command/build_ext.py", line 109, in run self.build_extensions() File "/usr/lib/python2.4/distutils/command/build_ext.py", line 405, in build_extensions self.build_extension(ext) File "/usr/lib/python2.4/site-packages/numpy/distutils/command/build_ext.py", line 220, in build_extension if self.fcompiler.module_dir_switch is None: AttributeError: 'NoneType' object has no attribute 'module_dir_switch' Is there something wrong with my setup, or what is causing this behaviour? 
Thanks Neilen -- you know its kind of tragic we live in the new world but we've lost the magic -- Battery 9 (www.battery9.co.za) From steffen.loeck at gmx.de Fri Apr 7 10:05:56 2006 From: steffen.loeck at gmx.de (steffen.loeck at gmx.de) Date: Fri, 7 Apr 2006 16:05:56 +0200 Subject: [SciPy-user] Vectorize wrapped Fortran routine Message-ID: <200604071605.56959.steffen.loeck@gmx.de> Hi, I would like to vectorize a Fortran routine wrapped with f2py using scipy.vectorize. With the old scipy this works fine but with the new one i get the following error: TypeError: object is not a callable Python object Wrapping was done with: f2py -m hermite -h hermite.pyf hermite.f f2py2.3 -c hermite.pyf hermite.f The routine works without using vectorize but scipy.vectorize(hermite.routine) fails. Is there any way to get this working under new scipy? Thanks, Steffen From robert.kern at gmail.com Fri Apr 7 11:50:47 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 07 Apr 2006 10:50:47 -0500 Subject: [SciPy-user] F2PY stopped working with new scipy In-Reply-To: References: Message-ID: <44368A57.9000206@gmail.com> Neilen Marais wrote: > Hi > > I'm using F2PY with the intel fortran compiler under Ubunty Breezy (05.10) > AMD64 to wrap some F90 code. If I use the old scipy (package version > 0.3.2-2ubuntu1) with: > > F2PY Version:2.46.243_2020 > scipy.distutils Version: 0.3.2 > > the following test command works as expected: > > $ f2py --fcompiler=intel -m testmod -c test_data.f90 test_prog.f90 > > Before I install new scipy, I removed all the f2py files , since it is now > included with scipy. I also removed the ubuntu scipy package. > > After installing numpy-0.9.6 and scipy-0.4.8, trying to generate the wrappers > results in the following output: > > $ f2py --fcompiler=intel -m testmod -c test_data.f90 test_prog.f90 > adding '/tmp/tmp_cZX2X/src/testmod-f2pywrappers2.f90' to sources. 
running > build_ext customize UnixCCompiler customize UnixCCompiler using build_ext > Could not locate executable efort Could not locate executable efc warning: > build_ext: fcompiler=intel is not available. This is the problem. The first thing to check is that efc is on your PATH. The second thing to check is the version string of the compiler. numpy.distutils uses regexes to extract the version of the compiler from the version string. It is possible that you are using a version of the compiler that has a different string than we are expecting. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From pearu at scipy.org Fri Apr 7 13:19:41 2006 From: pearu at scipy.org (Pearu Peterson) Date: Fri, 7 Apr 2006 12:19:41 -0500 (CDT) Subject: [SciPy-user] Vectorize wrapped Fortran routine In-Reply-To: <200604071605.56959.steffen.loeck@gmx.de> References: <200604071605.56959.steffen.loeck@gmx.de> Message-ID: On Fri, 7 Apr 2006, steffen.loeck at gmx.de wrote: > Hi, > > I would like to vectorize a Fortran routine wrapped with f2py using > scipy.vectorize. With the old scipy this works fine but with the new one i > get the following error: > > TypeError: object is not a callable Python object > > Wrapping was done with: > > f2py -m hermite -h hermite.pyf hermite.f > f2py2.3 -c hermite.pyf hermite.f > > The routine works without using vectorize but scipy.vectorize(hermite.routine) > fails. > > Is there any way to get this working under new scipy? vectorize expects a Python function or method as a first argument as this assumption allows it to determine the number of expected arguments. As a workaround, you can use scipy.vectorize(lambda x:hermite.routine(x)) (the number of x-es may vary in your case). However, the error is misleading. 
For example, f2py generated fortran objects and instances of a class with __call__ method are callable according to callable() test but fail in vectorize. As a possible fix, here is a more general way to determine the number of arguments of a callable Python object:

# File: test_nargs.py
import re
import types

def get_nargs(obj):
    if not callable(obj):
        raise TypeError, 'object is not a callable Python object: '+str(type(obj))
    if hasattr(obj,'func_code'):
        fcode = obj.func_code
        nargs = fcode.co_argcount
        if obj.func_defaults is not None:
            nargs -= len(obj.func_defaults)
        if isinstance(obj, types.MethodType):
            nargs -= 1
        return nargs
    terr = re.compile(r'.*? takes exactly (?P<exargs>\d+) argument(s|) \((?P<gargs>\d+) given\)')
    try:
        obj()
        return 0
    except TypeError, msg:
        m = terr.match(str(msg))
        if m:
            nargs = int(m.group('exargs'))-int(m.group('gargs'))
            if isinstance(obj, types.MethodType):
                nargs -= 1
            return nargs
    raise ValueError, 'failed to determine the number of arguments for %s' % (obj)

# TEST CODE FOLLOWS:
class A:
    def foo(self, a1, a2, a3):
        pass
    def __call__(self, a1, a2):
        pass
    def car(self, a1, a2=2):
        pass

def bar(a1,a2,a3,a4):
    pass

def gun(a1,a2,a3=1,a4=2):
    pass

from numpy.testing import *
assert_equal(get_nargs(A()),2)
assert_equal(get_nargs(A().foo),3)
assert_equal(get_nargs(A().car),1)
assert_equal(get_nargs(bar),4)
assert_equal(get_nargs(gun),2)

import t
# t is f2py generated module using a command:
#   f2py -c foo.f -m t
# where foo.f contains:
"""
subroutine sin(x,r)
double precision x,r
cf2py intent(out) r
r = dsin(x)
end
"""
assert_equal(get_nargs(t.sin),1)
#EOF

From rahul.kanwar at gmail.com Fri Apr 7 16:12:24 2006 From: rahul.kanwar at gmail.com (Rahul Kanwar) Date: Fri, 7 Apr 2006 16:12:24 -0400 Subject: [SciPy-user] Error while installing Scipy In-Reply-To: <4435BE99.6030407@gmail.com> References: <63dec5bf0604061751r1b7a038dm29fc9c69a20bb666@mail.gmail.com> <4435BE99.6030407@gmail.com> Message-ID: <63dec5bf0604071312r3fef4c60s641707f52416c3c8@mail.gmail.com> Thanks for
your reply. I did use the --fcompiler flag. Here is the command line I am using to build scipy. python setup.py config_fc --fcompiler=intel build here is what i get when i do python setup.py config_fc --fcompiler=intel config ------------------------------------------------------------------ fft_opt_info: fftw3_info: /usr/local/lib64/python2.4/site-packages/numpy/distutils/system_info.py:531: UserWarning: Library error: libs=['fftw3'] found_libs=[] warnings.warn("Library error: libs=%s found_libs=%s" % \ fftw3 not found NOT AVAILABLE fftw2_info: /usr/local/lib64/python2.4/site-packages/numpy/distutils/system_info.py:531: UserWarning: Library error: libs=['rfftw', 'fftw'] found_libs=[] warnings.warn("Library error: libs=%s found_libs=%s" % \ FOUND: libraries = ['rfftw', 'fftw'] library_dirs = ['/opt/fft2/lib'] define_macros = [('SCIPY_FFTW_H', None)] include_dirs = ['/opt/fft2/include'] djbfft_info: NOT AVAILABLE FOUND: libraries = ['rfftw', 'fftw'] library_dirs = ['/opt/fft2/lib'] define_macros = [('SCIPY_FFTW_H', None)] include_dirs = ['/opt/fft2/include'] blas_opt_info: blas_mkl_info: FOUND: libraries = ['mkl', 'vml', 'guide', 'pthread'] library_dirs = ['/opt/intel/mkl/8.0.2/lib/em64t'] include_dirs = ['/opt/intel/mkl/8.0.2/include'] FOUND: libraries = ['mkl', 'vml', 'guide', 'pthread'] library_dirs = ['/opt/intel/mkl/8.0.2/lib/em64t'] include_dirs = ['/opt/intel/mkl/8.0.2/include'] lapack_opt_info: lapack_mkl_info: mkl_info: FOUND: libraries = ['mkl', 'vml', 'guide', 'pthread'] library_dirs = ['/opt/intel/mkl/8.0.2/lib/em64t'] include_dirs = ['/opt/intel/mkl/8.0.2/include'] FOUND: libraries = ['mkl_lapack64', 'mkl', 'vml', 'guide', 'pthread'] library_dirs = ['/opt/intel/mkl/8.0.2/lib/em64t'] include_dirs = ['/opt/intel/mkl/8.0.2/include'] FOUND: libraries = ['mkl_lapack64', 'mkl', 'vml', 'guide', 'pthread'] library_dirs = ['/opt/intel/mkl/8.0.2/lib/em64t'] include_dirs = ['/opt/intel/mkl/8.0.2/include'] Warning: not existing path in Lib/maxentropy: doc scipy 
version 0.4.8 running config_fc running config ------------------------------------------------------------------------- I am using the 64 bit ifort compiler. I used the same options to build numpy and it was sucessfully created. Plus numpy passed all the tests inside the python interpreter. but i still keep getting the error when i compile scipy. regards, Rahul On 4/6/06, Robert Kern wrote: > Rahul Kanwar wrote: > > Hello, > > I am working on 64 bit Xeon machine running Suse 10. I am using > > Intel's MKL and ifort for compiling Numpy and Scipy. > > I succesfuly compiled Numpy using this combination and was able to > > load it in the python interpreter. But i get errors while compiling > > Scipy, here is what i am getting: > > > > --------------------------------------------------------------- > > ifort: Command line warning: ignoring unknown option '-fno-second-underscore' > > ifort: Command line warning: overriding '-O3' with '-O2' > > ifort:f77: Lib/integrate/odepack/cfode.f > > ifort: Command line warning: ignoring option '-W'; no argument required > > ifort: Command line warning: ignoring unknown option '-fno-second-underscore' > > ifort: Command line warning: overriding '-O3' with '-O2' > > ifort:f77: Lib/integrate/odepack/iprep.f > > ifort: Command line warning: ignoring option '-W'; no argument required > > ifort: Command line warning: ignoring unknown option '-fno-second-underscore' > > ifort: Command line warning: overriding '-O3' with '-O2' > > ifort:f77: Lib/integrate/odepack/prepj.f > > ifort: Command line warning: ignoring option '-W'; no argument required > > ifort: Command line warning: ignoring unknown option '-fno-second-underscore' > > ifort: Command line warning: overriding '-O3' with '-O2' > > ar: adding 50 object files to build/temp.linux-x86_64-2.4/libodepack.a > > ar: adding 9 object files to build/temp.linux-x86_64-2.4/libodepack.a > > running build_ext > > customize UnixCCompiler > > customize UnixCCompiler using build_ext > > customize 
GnuFCompiler > > customize IntelFCompiler > > customize LaheyFCompiler > > customize PGroupFCompiler > > customize AbsoftFCompiler > > customize NAGFCompiler > > customize VastFCompiler > > customize GnuFCompiler > > customize CompaqFCompiler > > customize IntelItaniumFCompiler > > customize Gnu95FCompiler > > customize G95FCompiler > > customize GnuFCompiler > > customize Gnu95FCompiler > > warning: build_ext: fcompiler=gnu is not available. > > What was the command-line that you used? It appears that you did not set > --fcompiler=intel on the build_ext command. > > -- > Robert Kern > robert.kern at gmail.com > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it had > an underlying truth." > -- Umberto Eco > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From zpincus at stanford.edu Fri Apr 7 16:40:04 2006 From: zpincus at stanford.edu (Zachary Pincus) Date: Fri, 7 Apr 2006 15:40:04 -0500 Subject: [SciPy-user] Doesn't work: setup.py config_fc --help-fcompiler Message-ID: Hi folks, I'm playing around with getting scipy to build with gfortran and gcc4 on my OS X box. Unfortunately, for some reason when I run 'python setup.py config_fc --help-fcompiler' I get absolutely no output. This is with the latest SVN scipy and numpy, with numpy installed. Any ideas? Zach From robert.kern at gmail.com Fri Apr 7 17:10:43 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 07 Apr 2006 16:10:43 -0500 Subject: [SciPy-user] Doesn't work: setup.py config_fc --help-fcompiler In-Reply-To: References: Message-ID: <4436D553.2010701@gmail.com> Zachary Pincus wrote: > Hi folks, > > I'm playing around with getting scipy to build with gfortran and gcc4 > on my OS X box. 
> > Unfortunately, for some reason when I run 'python setup.py config_fc > --help-fcompiler' I get absolutely no output. This is with the latest > SVN scipy and numpy, with numpy installed. I can confirm. This appears to be a bug. http://projects.scipy.org/scipy/numpy/ticket/48 -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Fri Apr 7 17:12:15 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 07 Apr 2006 16:12:15 -0500 Subject: [SciPy-user] Error while installing Scipy In-Reply-To: <63dec5bf0604071312r3fef4c60s641707f52416c3c8@mail.gmail.com> References: <63dec5bf0604061751r1b7a038dm29fc9c69a20bb666@mail.gmail.com> <4435BE99.6030407@gmail.com> <63dec5bf0604071312r3fef4c60s641707f52416c3c8@mail.gmail.com> Message-ID: <4436D5AF.4010408@gmail.com> Rahul Kanwar wrote: > Thanks for your reply. I did use the --fcompiler flag. Here is the > command line I am using to build scipy. > python setup.py config_fc --fcompiler=intel build This would be a bug, then. Until it gets fixed, please try setting --fcompiler on both the build_clib and build_ext commands explicitly. E.g.: $ python setup.py config build_src build_clib --fcompiler=intel build_ext --fcompiler=intel build -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From robert.kern at gmail.com Fri Apr 7 17:16:58 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 07 Apr 2006 16:16:58 -0500 Subject: [SciPy-user] Error while installing Scipy In-Reply-To: <63dec5bf0604071312r3fef4c60s641707f52416c3c8@mail.gmail.com> References: <63dec5bf0604061751r1b7a038dm29fc9c69a20bb666@mail.gmail.com> <4435BE99.6030407@gmail.com> <63dec5bf0604071312r3fef4c60s641707f52416c3c8@mail.gmail.com> Message-ID: <4436D6CA.1030706@gmail.com> Rahul Kanwar wrote: > Thanks for your reply. I did use the --fcompiler flag. Here is the > command line I am using to build scipy. > python setup.py config_fc --fcompiler=intel build Hmm, I cannot confirm the bug with the latest SVN checkout of numpy and scipy. What version of numpy were you using? -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From zpincus at stanford.edu Fri Apr 7 17:31:00 2006 From: zpincus at stanford.edu (Zachary Pincus) Date: Fri, 7 Apr 2006 16:31:00 -0500 Subject: [SciPy-user] SciPy with gcc4 and gfortran on OS X Message-ID: <12B081E0-DC83-480F-BEDF-75D81D938E6A@stanford.edu> Hi folks, Emboldened by some reported successes with compiling SciPy with gfortran and gcc4 on OSX/Intel machines, I thought I'd give it a try on my powerbook. Unfortunately, it did not work -- but hopefully it's close, because the problems I saw are all things that various people on this list have seen and solved. VITALS OS X 10.4.6, on a PPC G4 gcc: powerpc-apple-darwin8-gcc-4.0.1 (GCC) 4.0.1 (Apple Computer, Inc. build 5250) gfortran: GNU Fortran 95 (GCC) 4.2.0 20060218 (experimental) (gfortran binary downloaded from hpc.sf.net today.) SVN checkout of numpy and scipy from today.
BUILD PROBLEM I only had one problem during the build -- there were errors like: /usr/bin/ld: can't locate file for: -lcc_dynamic I fixed this by modifying numpy/distutils/fcompiler/gnu.py so that the Gnu95FCompiler class had a method like:

def get_libraries(self):
    opt = GnuFCompiler.get_libraries(self)
    if sys.platform=='darwin':
        opt.remove('cc_dynamic')
    return opt

so that the unnecessary cc_dynamic library was not included. RUNTIME PROBLEMS > In [1]: import scipy > import linsolve.umfpack -> failed: No module named _umfpack I think that people had seen this problem before, but using a more recent gfortran solved it for them on intel macs. I'm using the most recent gfortran from hpc.sf.net on my G4, so this problem still persists. > In [2]: scipy.test() > import linsolve.umfpack -> failed: No module named _umfpack > Overwriting fft= from > scipy.fftpack.basic (was from > numpy.dft.fftpack) > Overwriting ifft= from > scipy.fftpack.basic (was from > numpy.dft.fftpack) ... > Adjust D1MACH by uncommenting data statements > appropriate for your machine. > STOP 779 Also a problem people had seen with gfortran, but one that I thought had been patched. For the time being, I'll just switch back to gcc3/g77. Hopefully this information will help, though. Zach From robert.kern at gmail.com Fri Apr 7 17:48:14 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 07 Apr 2006 16:48:14 -0500 Subject: [SciPy-user] SciPy with gcc4 and gfortran on OS X In-Reply-To: <12B081E0-DC83-480F-BEDF-75D81D938E6A@stanford.edu> References: <12B081E0-DC83-480F-BEDF-75D81D938E6A@stanford.edu> Message-ID: <4436DE1E.4070308@gmail.com> Zachary Pincus wrote: > Hi folks, > > Emboldened by the some reported successes with compiling SciPy with > gfortran and gcc4 on OSX/Intel machines, I thought I'd give it a try > on my powerbook. > > Unfortunately, it did not work -- but hopefully it's close, because > the problems I saw are all things that various people on this list > have seen and solved.
> > VITALS > OS X 10.4.6, on a PPC G4 > gcc: powerpc-apple-darwin8-gcc-4.0.1 (GCC) 4.0.1 (Apple Computer, > Inc. build 5250) > gfortran: GNU Fortran 95 (GCC) 4.2.0 20060218 (experimental) > (gfortran binary downloaded from hpc.sf.net today.) > SVN checkout of numpy and scipy from today. > > BUILD PROBLEM > I only had one problem during the build -- there were errors like: > /usr/bin/ld: can't locate file for: -lcc_dynamic > I fixed this by modifying numpy/distutils/fcompiler/gnu.py so that > the Gnu95FCompiler class had a method like: > def get_libraries(self): > opt = GnuFCompiler.get_libraries(self) > if sys.platform=='darwin': > opt.remove('cc_dynamic') > return opt > so that the unnecessary cc_dynamic library was not included. That's reasonable, yes. > RUNTIME PROBLEMS > >>In [1]: import scipy >>import linsolve.umfpack -> failed: No module named _umfpack > > I think that people had seen this problem before, but using a more > recent gfortran solved it for them on intel macs. I'm using the most > recent gfortran from hpc.sf.net on my G4, so this problem still > persists. This has nothing to do with gfortran. The linsolve setup.py is screwy and is building __umfpack.so instead of _umfpack.so. >>In [2]: scipy.test() >>import linsolve.umfpack -> failed: No module named _umfpack >>Overwriting fft= from >>scipy.fftpack.basic (was from >>numpy.dft.fftpack) >>Overwriting ifft= from >>scipy.fftpack.basic (was from >>numpy.dft.fftpack) > > ... > >>Adjust D1MACH by uncommenting data statements >>appropriate for your machine. >>STOP 779 > > Also a problem people had seen with gfortran, but one that I thought > had been patched. No patch has been submitted. I believe that the solution is to compile d1mach.f with -O only and not -O2. Possibly also the -ffloat-store flag needs to be set as well. 
-- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From zpincus at stanford.edu Fri Apr 7 18:05:09 2006 From: zpincus at stanford.edu (Zachary Pincus) Date: Fri, 7 Apr 2006 17:05:09 -0500 Subject: [SciPy-user] SciPy with gcc4 and gfortran on OS X In-Reply-To: <4436DE1E.4070308@gmail.com> References: <12B081E0-DC83-480F-BEDF-75D81D938E6A@stanford.edu> <4436DE1E.4070308@gmail.com> Message-ID: <585D99A2-3ABF-4482-8B94-151C3B2BF5FC@stanford.edu> Robert, Thanks for your quick reply. >> I fixed this by modifying numpy/distutils/fcompiler/gnu.py so that >> the Gnu95FCompiler class had a method like: >> def get_libraries(self): >> opt = GnuFCompiler.get_libraries(self) >> if sys.platform=='darwin': >> opt.remove('cc_dynamic') >> return opt >> so that the unnecessary cc_dynamic library was not included. > > That's reasonable, yes. Presumably a fix along these lines should go into the numpy svn? >>> In [1]: import scipy >>> import linsolve.umfpack -> failed: No module named _umfpack >> > This has nothing to do with gfortran. The linsolve setup.py is > screwy and is > building __umfpack.so instead of _umfpack.so. Aah, OK. Presumably I should just sit tight and this will get resolved at some point? Or can I help track down the problem in any way? (It doesn't really affect me, so no matter.) >>> Adjust D1MACH by uncommenting data statements >>> appropriate for your machine. >>> STOP 779 >> >> Also a problem people had seen with gfortran, but one that I thought >> had been patched. > > No patch has been submitted. I believe that the solution is to > compile d1mach.f > with -O only and not -O2. Possibly also the -ffloat-store flag > needs to be set > as well. 
I thought I had seen something, so some searching turned up a March 7 email by Neil Becker to scipy-dev (subject "[PATCH] d1mach problem") which has a patch to make d1mach compile with -O0. I'll try out this patch (and a version with -O) and let you know if it works. Neil also had to patch numpy/distutils/command/build_clib.py to allow 'extra_postargs' to be passed through to the fortran compiler. Anyhow, I'll report on the success of these patches. If they work, is this something that should or should not go into scipy, do you think? (If they do go in, a bug ticket should be filed about not using -O2 so that this vaguely ugly hack gets revisited if/when gfortran gets better.) Zach From robert.kern at gmail.com Fri Apr 7 18:34:31 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 07 Apr 2006 17:34:31 -0500 Subject: [SciPy-user] SciPy with gcc4 and gfortran on OS X In-Reply-To: <585D99A2-3ABF-4482-8B94-151C3B2BF5FC@stanford.edu> References: <12B081E0-DC83-480F-BEDF-75D81D938E6A@stanford.edu> <4436DE1E.4070308@gmail.com> <585D99A2-3ABF-4482-8B94-151C3B2BF5FC@stanford.edu> Message-ID: <4436E8F7.3050505@gmail.com> Zachary Pincus wrote: > I thought I had seen something, so some searching turned up a March 7 > email by Neil Becker to scipy-dev (subject "[PATCH] d1mach problem") > which has a patch to make d1mach compile with -O0. I'll try out this > patch (and a version with -O) and let you know if it works. Neil also > had to patch numpy/distutils/command/build_clib.py to allow > 'extra_postargs' to be passed through to the fortran compiler. > > Anyhow, I'll report on the success of these patches. If they work, is > this something that should or should not go into scipy, do you think? > (If they do go in, a bug ticket should be filed about not using -O2 > so that this vaguely ugly hack gets revisited if/when gfortran gets > better.) 
Yes, you're right, there was a patch, and I missed it (as an aside, I do recommend contributing patches to the Trac rather than the mailing list so that we don't lose track of them). My only concern is that adding extra_compile_args to work around one compiler's bug may interfere with other compilers. As for the extra_postargs fix, I have no objection to it, but it's possible Pearu did that intentionally.

--
Robert Kern
robert.kern at gmail.com

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco

From piscinerome at tutopia.com Fri Apr 7 20:56:38 2006
From: piscinerome at tutopia.com (Andres Gonzalez-Mancera)
Date: Fri, 7 Apr 2006 20:56:38 -0400
Subject: [SciPy-user] SciPy with gcc4 and gfortran on OS X
Message-ID: <5CB6788D-CDBD-462C-829A-AD0E4690875C@tutopia.com>

I agree that your problem does not have to do with gfortran at this point. I tried installing Scipy from yesterday's SVN and ran into the same error regarding _umfpack. I was able to install the latest released versions of Numpy and Scipy on Mac OS X 10.4.6 on a G4, but using G77 and GCC 3.3.

From my experience the SVN versions are a little difficult, since they can work one day and not the next. May I ask why you want to use gfortran and the newer version of GCC? I don't think this will bring any improvement in speed over g77, but I might be wrong.

Andres

From robert.kern at gmail.com Fri Apr 7 21:06:15 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 07 Apr 2006 20:06:15 -0500
Subject: [SciPy-user] SciPy with gcc4 and gfortran on OS X
In-Reply-To: <5CB6788D-CDBD-462C-829A-AD0E4690875C@tutopia.com>
References: <5CB6788D-CDBD-462C-829A-AD0E4690875C@tutopia.com>
Message-ID: <44370C87.2020302@gmail.com>

Andres Gonzalez-Mancera wrote:
> I agree that your problem does not have to do with gfortran at this
> point.
> I tried installing Scipy from yesterday's SVN and ran into the
> same error regarding _umfpack. I was able to install the latest
> released versions of Numpy and Scipy on Mac OS X 10.4.6 on a G4, but
> using G77 and GCC 3.3.
>
> From my experience the SVN versions are a little difficult, since
> they can work one day and not the next. May I ask why you want to use
> gfortran and the newer version of GCC? I don't think this will bring
> any improvement in speed over g77, but I might be wrong.

The new Intel Macs do not support gcc 3.x. gcc 4.x does not support g77.

--
Robert Kern
robert.kern at gmail.com

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco

From hetland at tamu.edu Fri Apr 7 20:55:34 2006
From: hetland at tamu.edu (Rob Hetland)
Date: Fri, 7 Apr 2006 19:55:34 -0500
Subject: [SciPy-user] SciPy with gcc4 and gfortran on OS X
In-Reply-To: <4436DE1E.4070308@gmail.com>
References: <12B081E0-DC83-480F-BEDF-75D81D938E6A@stanford.edu> <4436DE1E.4070308@gmail.com>
Message-ID:

I found that I needed to symlink

/usr/lib/gcc/i686-apple-darwin8/4.0.1/libgcc.a -> /usr/local/lib/libcc_dynamic.a

(Similar to the previous PPC gcc4 instructions.) That fixes the missing cc_dynamic library problem for me.

-Rob

On Apr 7, 2006, at 4:48 PM, Robert Kern wrote:

> Zachary Pincus wrote:
>> Hi folks,
>>
>> Emboldened by some reported successes with compiling SciPy with
>> gfortran and gcc4 on OSX/Intel machines, I thought I'd give it a try
>> on my powerbook.
>>
>> Unfortunately, it did not work -- but hopefully it's close, because
>> the problems I saw are all things that various people on this list
>> have seen and solved.
>>
>> VITALS
>> OS X 10.4.6, on a PPC G4
>> gcc: powerpc-apple-darwin8-gcc-4.0.1 (GCC) 4.0.1 (Apple Computer,
>> Inc.
build 5250) >> gfortran: GNU Fortran 95 (GCC) 4.2.0 20060218 (experimental) >> (gfortran binary downloaded from hpc.sf.net today.) >> SVN checkout of numpy and scipy from today. >> >> BUILD PROBLEM >> I only had one problem during the build -- there were errors like: >> /usr/bin/ld: can't locate file for: -lcc_dynamic >> I fixed this by modifying numpy/distutils/fcompiler/gnu.py so that >> the Gnu95FCompiler class had a method like: >> def get_libraries(self): >> opt = GnuFCompiler.get_libraries(self) >> if sys.platform=='darwin': >> opt.remove('cc_dynamic') >> return opt >> so that the unnecessary cc_dynamic library was not included. > > That's reasonable, yes. > >> RUNTIME PROBLEMS >> >>> In [1]: import scipy >>> import linsolve.umfpack -> failed: No module named _umfpack >> >> I think that people had seen this problem before, but using a more >> recent gfortran solved it for them on intel macs. I'm using the most >> recent gfortran from hpc.sf.net on my G4, so this problem still >> persists. > > This has nothing to do with gfortran. The linsolve setup.py is > screwy and is > building __umfpack.so instead of _umfpack.so. > >>> In [2]: scipy.test() >>> import linsolve.umfpack -> failed: No module named _umfpack >>> Overwriting fft= from >>> scipy.fftpack.basic (was from >>> numpy.dft.fftpack) >>> Overwriting ifft= from >>> scipy.fftpack.basic (was from >>> numpy.dft.fftpack) >> >> ... >> >>> Adjust D1MACH by uncommenting data statements >>> appropriate for your machine. >>> STOP 779 >> >> Also a problem people had seen with gfortran, but one that I thought >> had been patched. > > No patch has been submitted. I believe that the solution is to > compile d1mach.f > with -O only and not -O2. Possibly also the -ffloat-store flag > needs to be set > as well. 
> > -- > Robert Kern > robert.kern at gmail.com > > "I have come to believe that the whole world is an enigma, a > harmless enigma > that is made terrible by our own mad attempt to interpret it as > though it had > an underlying truth." > -- Umberto Eco > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user ----- Rob Hetland, Assistant Professor Dept of Oceanography, Texas A&M University p: 979-458-0096, f: 979-845-6331 e: hetland at tamu.edu, w: http://pong.tamu.edu From robert.kern at gmail.com Fri Apr 7 21:22:46 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 07 Apr 2006 20:22:46 -0500 Subject: [SciPy-user] SciPy with gcc4 and gfortran on OS X In-Reply-To: References: <12B081E0-DC83-480F-BEDF-75D81D938E6A@stanford.edu> <4436DE1E.4070308@gmail.com> Message-ID: <44371066.8050706@gmail.com> Rob Hetland wrote: > I found that I needed to simlink > > /usr/lib/gcc/i686-apple-darwin8/4.0.1/libgcc.a -> /usr/local/lib/ > libcc_dynamic.a > > (Similar to the previous PPC gcc4 instructions.) That fixes the > missing cc_dynamic library problem for me. cc_dynamic *shouldn't* be needed for gcc 4. The fact that numpy.distutils requests it is purely a leftover from the pre-Tiger days that I haven't yet fixed. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From zpincus at stanford.edu Fri Apr 7 23:57:18 2006 From: zpincus at stanford.edu (Zachary Pincus) Date: Fri, 7 Apr 2006 22:57:18 -0500 Subject: [SciPy-user] SciPy with gcc4 and gfortran on OS X In-Reply-To: <44371066.8050706@gmail.com> References: <12B081E0-DC83-480F-BEDF-75D81D938E6A@stanford.edu> <4436DE1E.4070308@gmail.com> <44371066.8050706@gmail.com> Message-ID: <69325117-8B86-4D8E-870C-0EFD6B8C977A@stanford.edu> Hi folks - Several issues: (1) I think that the cc_dynamic library is still needed for gcc3/g77, so it shouldn't be removed from numpy.distutils wholesale. Removing cc_dynamic from *just* gnu95 is the way to go. (2) After applying the patches that I referred to earlier, I was able to compile and test SciPy. Unfortunately, several of the tests fail with gcc4/gfortran that do not fail when I build with gcc3/g77. (Failures noted below.) I can now understand why gfortran is still somewhat suspect. I will look into rebuilding with all optimizations disabled to see if this fixes things. Now, I'm using a PPC chip, so I can just revert easily to gcc3/g77, which I will do for any real work. However, I'm not sure if I would feel totally comfortable using scipy built with gfortran on an intel chip right now, regardless of whether the tests work or not. Maybe for the time being scipy should disable gfortran optimization until the compiler gets a bit more stable? I'll report back with more information and a complete patch when I get a chance. 
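As a concrete illustration of point (1) above, here is a minimal, self-contained sketch of the darwin-only cc_dynamic removal. The helper name and the default library list here are hypothetical; the real change lives in the Gnu95FCompiler.get_libraries override in numpy/distutils/fcompiler/gnu.py quoted earlier in this thread.

```python
import sys

def strip_cc_dynamic(libs, platform=sys.platform):
    # cc_dynamic is a gcc-3-era library; gfortran (gnu95) builds on
    # Darwin must not link against it, while gcc3/g77 still needs it.
    libs = list(libs)  # don't mutate the caller's list
    if platform == 'darwin' and 'cc_dynamic' in libs:
        libs.remove('cc_dynamic')
    return libs
```

In the real override, GnuFCompiler.get_libraries(self) supplies the list, and only the gnu95 subclass filters it, so gcc3/g77 builds are untouched.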
Zach scipy.test failures: ====================================================================== FAIL: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy/lib/blas/tests/test_blas.py", line 76, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/numpy/testing/utils.py", line 148, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: DESIRED: (-9+2j) ACTUAL: (-1.9985527992248535+3.2729130506728153e-37j) ====================================================================== FAIL: check_normal (scipy.stats.tests.test_morestats.test_anderson) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy/stats/tests/test_morestats.py", line 45, in check_normal assert_array_less(crit[:-1], A) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/numpy/testing/utils.py", line 255, in assert_array_less assert cond,\ AssertionError: Arrays are not less-ordered (mismatch 100.0%): Array 1: [ 0.538 0.613 0.736 0.858] Array 2: nan ====================================================================== FAIL: check_dot (scipy.linalg.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy/linalg/tests/test_blas.py", line 75, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ 
python2.4/site-packages/numpy/testing/utils.py", line 148, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: DESIRED: (-9+2j) ACTUAL: (-1.9985527992248535+3.2744197267816573e-37j) ---------------------------------------------------------------------- Ran 1506 tests in 11.506s FAILED (failures=3) On Apr 7, 2006, at 8:22 PM, Robert Kern wrote: > Rob Hetland wrote: >> I found that I needed to simlink >> >> /usr/lib/gcc/i686-apple-darwin8/4.0.1/libgcc.a -> /usr/local/lib/ >> libcc_dynamic.a >> >> (Similar to the previous PPC gcc4 instructions.) That fixes the >> missing cc_dynamic library problem for me. > > cc_dynamic *shouldn't* be needed for gcc 4. The fact that > numpy.distutils > requests it is purely a leftover from the pre-Tiger days that I > haven't yet fixed. > > -- > Robert Kern > robert.kern at gmail.com > > "I have come to believe that the whole world is an enigma, a > harmless enigma > that is made terrible by our own mad attempt to interpret it as > though it had > an underlying truth." > -- Umberto Eco > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user From pearu at scipy.org Sat Apr 8 04:32:38 2006 From: pearu at scipy.org (Pearu Peterson) Date: Sat, 8 Apr 2006 03:32:38 -0500 (CDT) Subject: [SciPy-user] Doesn't work: setup.py config_fc --help-fcompiler In-Reply-To: <4436D553.2010701@gmail.com> References: <4436D553.2010701@gmail.com> Message-ID: On Fri, 7 Apr 2006, Robert Kern wrote: > Zachary Pincus wrote: >> Hi folks, >> >> I'm playing around with getting scipy to build with gfortran and gcc4 >> on my OS X box. >> >> Unfortunately, for some reason when I run 'python setup.py config_fc >> --help-fcompiler' I get absolutely no output. This is with the latest >> SVN scipy and numpy, with numpy installed. > > I can confirm. This appears to be a bug. 
> > http://projects.scipy.org/scipy/numpy/ticket/48 Fixed in svn. Pearu From pearu at scipy.org Sat Apr 8 04:41:31 2006 From: pearu at scipy.org (Pearu Peterson) Date: Sat, 8 Apr 2006 03:41:31 -0500 (CDT) Subject: [SciPy-user] SciPy with gcc4 and gfortran on OS X In-Reply-To: <69325117-8B86-4D8E-870C-0EFD6B8C977A@stanford.edu> References: <12B081E0-DC83-480F-BEDF-75D81D938E6A@stanford.edu> <69325117-8B86-4D8E-870C-0EFD6B8C977A@stanford.edu> Message-ID: On Fri, 7 Apr 2006, Zachary Pincus wrote: > Hi folks - > > Several issues: > (1) I think that the cc_dynamic library is still needed for gcc3/g77, > so it shouldn't be removed from numpy.distutils wholesale. Removing > cc_dynamic from *just* gnu95 is the way to go. The corresponding patch has been applied to SVN. > (2) After applying the patches that I referred to earlier, I was able > to compile and test SciPy. Could you send me the patch or file the corresponding ticket to numpy.distutils? > Maybe for the time being scipy should disable gfortran optimization > until the compiler gets a bit more stable? Try `config_fc --noopt --noarch build`. Does this fix these issues? Pearu From zpincus at stanford.edu Sat Apr 8 12:03:49 2006 From: zpincus at stanford.edu (Zachary Pincus) Date: Sat, 8 Apr 2006 11:03:49 -0500 Subject: [SciPy-user] SciPy with gcc4 and gfortran on OS X In-Reply-To: References: <12B081E0-DC83-480F-BEDF-75D81D938E6A@stanford.edu> <69325117-8B86-4D8E-870C-0EFD6B8C977A@stanford.edu> Message-ID: > Try `config_fc --noopt --noarch build`. Does this fix these issues? Hmm. Now I get different errors (see below). Impressive. > Could you send me the patch or file the corresponding ticket to > numpy.distutils? My slightly modified versions of Neil Becker's patches are attached. Now, I'm not sure that these should be applied generally. 
First, regarding the numpy patch, I'm not sure if there's a reason for not passing the extra_postargs to the fortran compiler, so I'm not sure if this patch would cause problems if generally applied.

Second, regarding the scipy patch, as is, it will pass '-O' to all fortran compilers when the mach codes are compiled. Probably not good! Better would be to test if gfortran was in use and just pass '-O' in that case. Even better might be to just assume that gfortran optimizations are bad in general and disable them wholesale. Then at least bizarre compiler-specific stuff wouldn't be littered all over various setup.py files.

Zach

New errors when compiling without optimizations:

======================================================================
FAIL: check_cdf (scipy.stats.tests.test_distributions.test_fatiguelife)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "", line 9, in check_cdf
AssertionError: D = 0.393270222394; pval = 5.39820008418e-05; alpha = 0.01 args = (1.299268127930717,)

======================================================================
FAIL: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/lib/blas/tests/test_blas.py", line 76, in check_dot
    assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j)
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/testing/utils.py", line 148, in assert_almost_equal
    assert round(abs(desired - actual),decimal) == 0, msg
AssertionError: Items are not equal:
DESIRED: (-9+2j)
ACTUAL: (-1.9985527992248535+3.0378070058602205e-37j)

======================================================================
FAIL: check_dot (scipy.linalg.tests.test_blas.test_fblas1_simple)
---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy/linalg/tests/test_blas.py", line 75, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/numpy/testing/utils.py", line 148, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: DESIRED: (-9+2j) ACTUAL: (-1.9985527992248535+3.0392849833765131e-37j) ---------------------------------------------------------------------- Ran 1506 tests in 12.599s FAILED (failures=3) -------------- next part -------------- A non-text attachment was scrubbed... Name: numpy-gfortran.patch Type: application/octet-stream Size: 702 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: scipy-gfortran.patch Type: application/octet-stream Size: 1366 bytes Desc: not available URL: -------------- next part -------------- On Apr 8, 2006, at 3:41 AM, Pearu Peterson wrote: > > > On Fri, 7 Apr 2006, Zachary Pincus wrote: > >> Hi folks - >> >> Several issues: >> (1) I think that the cc_dynamic library is still needed for gcc3/g77, >> so it shouldn't be removed from numpy.distutils wholesale. Removing >> cc_dynamic from *just* gnu95 is the way to go. > > The corresponding patch has been applied to SVN. > >> (2) After applying the patches that I referred to earlier, I was able >> to compile and test SciPy. > > Could you send me the patch or file the corresponding ticket to > numpy.distutils? > >> Maybe for the time being scipy should disable gfortran optimization >> until the compiler gets a bit more stable? > > Try `config_fc --noopt --noarch build`. Does this fix these issues? 
> > Pearu
> >
> > _______________________________________________
> > SciPy-user mailing list
> > SciPy-user at scipy.net
> > http://www.scipy.net/mailman/listinfo/scipy-user

From pearu at scipy.org Sat Apr 8 13:16:38 2006
From: pearu at scipy.org (Pearu Peterson)
Date: Sat, 8 Apr 2006 12:16:38 -0500 (CDT)
Subject: [SciPy-user] SciPy with gcc4 and gfortran on OS X
In-Reply-To:
References: <12B081E0-DC83-480F-BEDF-75D81D938E6A@stanford.edu> <69325117-8B86-4D8E-870C-0EFD6B8C977A@stanford.edu>
Message-ID:

On Sat, 8 Apr 2006, Zachary Pincus wrote:

>> Try `config_fc --noopt --noarch build`. Does this fix these issues?
>
> Hmm. Now I get different errors (see below). Impressive.
>
>> Could you send me the patch or file the corresponding ticket to
>> numpy.distutils?
>
> My slightly modified versions of Neil Becker's patches are attached. Now, I'm
> not sure that these should be applied generally. First, regarding the numpy
> patch, I'm not sure if there's a reason for not passing the extra_postargs to
> the fortran compiler, so I'm not sure if this patch would cause problems if
> generally applied.
> Second, regarding the scipy patch, as is, it will pass '-O' to all fortran
> compilers when the mach codes are compiled. Probably not good! Better would
> be to test if gfortran was in use and just pass '-O' in that case. Even
> better might be to just assume that gfortran optimizations are bad in general
> and disable them wholesale. Then at least bizarre compiler-specific stuff
> wouldn't be littered all over various setup.py files.

Try the latest numpy and scipy from svn. I have added support to specify config_fc options inside setup.py files for libraries. This feature would probably also be useful for C libraries and extension modules, but let's see if this can fix the current issue first.
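A rough sketch of how such a per-library setting might look in a subpackage setup.py. The keyword shape (config_fc as an add_library argument) is inferred from Pearu's description, not confirmed API, and the package and source names are illustrative:

```python
# illustrative setup.py fragment -- assumes the SVN numpy.distutils
# of the time, with the per-library config_fc support described above
def configuration(parent_package='', top_path=None):
    from numpy.distutils.misc_util import Configuration
    config = Configuration('special', parent_package, top_path)
    # build these Fortran sources with optimization disabled,
    # regardless of the global config_fc flags
    config.add_library('sc_mach', sources=['mach/d1mach.f'],
                       config_fc={'noopt': (__file__, 1)})
    return config
```

The point of the feature is exactly what Zach asked for: the compiler-specific workaround lives next to the sources that need it, instead of being passed globally on the command line.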
Pearu From zpincus at stanford.edu Sat Apr 8 14:43:47 2006 From: zpincus at stanford.edu (Zachary Pincus) Date: Sat, 8 Apr 2006 13:43:47 -0500 Subject: [SciPy-user] SciPy with gcc4 and gfortran on OS X In-Reply-To: References: <12B081E0-DC83-480F-BEDF-75D81D938E6A@stanford.edu> <69325117-8B86-4D8E-870C-0EFD6B8C977A@stanford.edu> Message-ID: > Try the latest numpy and scipy from svn. I have added support to > specify > config_fc options inside setup.py files for libraries. This feature > would > be probably useful also C libraries and extension modules but let's > see > if this can fix the current issue first.. OK, with latest SVN of scipy and numpy, I can properly build everything with gcc4 and gfortran. However, a couple of tests still fail, which worries me. Also, some of the failures don't happen every time. (See below.) Finally, I would note that maybe the patched setup.py files should only turn off optimization for the mach files if gfortran is being used. (If this is something that can easily be done.) Anyhow, I'm not sure what to make of all of this. Not looking too good for gfortran, though. Zach Test failures: The first two happen every time, the third isn't so regular. 
====================================================================== FAIL: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy/lib/blas/tests/test_blas.py", line 76, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/numpy/testing/utils.py", line 148, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: DESIRED: (-9+2j) ACTUAL: (-1.9985527992248535+3.0377747199436024e-37j) ====================================================================== FAIL: check_dot (scipy.linalg.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy/linalg/tests/test_blas.py", line 75, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/numpy/testing/utils.py", line 148, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: DESIRED: (-9+2j) ACTUAL: (-1.9985527992248535+3.0397298115610284e-37j) ====================================================================== FAIL: check_normal (scipy.stats.tests.test_morestats.test_anderson) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy/stats/tests/test_morestats.py", line 45, in check_normal assert_array_less(crit[:-1], A) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ 
python2.4/site-packages/numpy/testing/utils.py", line 255, in assert_array_less assert cond,\ AssertionError: Arrays are not less-ordered (mismatch 100.0%): Array 1: [ 0.538 0.613 0.736 0.858] Array 2: nan From pearu at scipy.org Sat Apr 8 14:55:00 2006 From: pearu at scipy.org (Pearu Peterson) Date: Sat, 8 Apr 2006 13:55:00 -0500 (CDT) Subject: [SciPy-user] SciPy with gcc4 and gfortran on OS X In-Reply-To: References: <12B081E0-DC83-480F-BEDF-75D81D938E6A@stanford.edu> <69325117-8B86-4D8E-870C-0EFD6B8C977A@stanford.edu> Message-ID: On Sat, 8 Apr 2006, Zachary Pincus wrote: >> Try the latest numpy and scipy from svn. I have added support to specify >> config_fc options inside setup.py files for libraries. This feature would >> be probably useful also C libraries and extension modules but let's see >> if this can fix the current issue first.. > > OK, with latest SVN of scipy and numpy, I can properly build everything with > gcc4 and gfortran. However, a couple of tests still fail, which worries me. > Also, some of the failures don't happen every time. (See below.) > > Finally, I would note that maybe the patched setup.py files should only turn > off optimization for the mach files if gfortran is being used. (If this is > something that can easily be done.) Nope. Actually mach files should be compiled without optimization for all compilers. So the patched setup.py files are ok. > Anyhow, I'm not sure what to make of all of this. Not looking too good for > gfortran, though. On my debian sid linux box all blas tests pass ok when using gfortran 4.0.3. But that's linux.. 
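As a side note on what those check_dot tests actually expect: the target value is just the unconjugated complex dot product of the two test vectors, which can be reproduced without any BLAS at all.

```python
# the vectors from the failing test: f([3j,-4,3-4j], [2,3,1])
x = [3j, -4, 3 - 4j]
y = [2, 3, 1]

# unconjugated (cdotu-style) dot product:
# 3j*2 + (-4)*3 + (3-4j)*1 = 6j - 12 + 3 - 4j = -9 + 2j
expected = sum(a * b for a, b in zip(x, y))
print(expected)  # -> (-9+2j)
```

So an ACTUAL value like (-1.9985...+3.03e-37j) points at the compiled BLAS routine or its calling convention, not at the test itself.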
Pearu From robert.kern at gmail.com Sat Apr 8 15:01:09 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 08 Apr 2006 14:01:09 -0500 Subject: [SciPy-user] SciPy with gcc4 and gfortran on OS X In-Reply-To: References: <12B081E0-DC83-480F-BEDF-75D81D938E6A@stanford.edu> <69325117-8B86-4D8E-870C-0EFD6B8C977A@stanford.edu> Message-ID: <44380875.3090803@gmail.com> Pearu Peterson wrote: > On Sat, 8 Apr 2006, Zachary Pincus wrote: >>Finally, I would note that maybe the patched setup.py files should only turn >>off optimization for the mach files if gfortran is being used. (If this is >>something that can easily be done.) > > Nope. Actually mach files should be compiled without optimization for all > compilers. So the patched setup.py files are ok. My question is, does -O mean the same thing for all compilers? -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From pearu at scipy.org Sat Apr 8 15:10:38 2006 From: pearu at scipy.org (Pearu Peterson) Date: Sat, 8 Apr 2006 14:10:38 -0500 (CDT) Subject: [SciPy-user] SciPy with gcc4 and gfortran on OS X In-Reply-To: <44380875.3090803@gmail.com> References: <12B081E0-DC83-480F-BEDF-75D81D938E6A@stanford.edu> <69325117-8B86-4D8E-870C-0EFD6B8C977A@stanford.edu> <44380875.3090803@gmail.com> Message-ID: On Sat, 8 Apr 2006, Robert Kern wrote: > Pearu Peterson wrote: >> On Sat, 8 Apr 2006, Zachary Pincus wrote: >>> Finally, I would note that maybe the patched setup.py files should only turn >>> off optimization for the mach files if gfortran is being used. (If this is >>> something that can easily be done.) >> >> Nope. Actually mach files should be compiled without optimization for all >> compilers. So the patched setup.py files are ok. > > My question is, does -O mean the same thing for all compilers? Hmm, not sure. But I wouldn't count on it. 
For example, for gcc -O means level 1 optimization, for intel compilers -O sets level 2 optimization. I wonder, why this question is relevant for you? Btw, now that I have compiled the whole scipy with gfortran, I get ====================================================================== ERROR: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.3/site-packages/scipy/lib/blas/tests/test_blas.py", line 76, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) SystemError: NULL result without error in PyObject_Call ====================================================================== ERROR: check_dot (scipy.linalg.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.3/site-packages/scipy/linalg/tests/test_blas.py", line 75, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) SystemError: NULL result without error in PyObject_Call ---------------------------------------------------------------------- Ran 1510 tests in 6.353s FAILED (errors=2) So now I have something to track down.. Pearu From pearu at scipy.org Sat Apr 8 15:40:12 2006 From: pearu at scipy.org (Pearu Peterson) Date: Sat, 8 Apr 2006 14:40:12 -0500 (CDT) Subject: [SciPy-user] SciPy with gcc4 and gfortran on OS X In-Reply-To: References: <12B081E0-DC83-480F-BEDF-75D81D938E6A@stanford.edu> <69325117-8B86-4D8E-870C-0EFD6B8C977A@stanford.edu> <44380875.3090803@gmail.com> Message-ID: On Sat, 8 Apr 2006, Pearu Peterson wrote: > > > On Sat, 8 Apr 2006, Robert Kern wrote: > >> Pearu Peterson wrote: >>> On Sat, 8 Apr 2006, Zachary Pincus wrote: >>>> Finally, I would note that maybe the patched setup.py files should only turn >>>> off optimization for the mach files if gfortran is being used. 
(If this is
>>>> something that can easily be done.)
>>>
>>> Nope. Actually mach files should be compiled without optimization for all
>>> compilers. So the patched setup.py files are ok.

Now I realized that mach files should be compiled with optimization on (I confused them with the LAPACK ?lamch.f files, which should always be compiled without optimization).

> Btw, now that I have compiled the whole scipy with gfortran, I get
>
> ======================================================================
> ERROR: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/usr/local/lib/python2.3/site-packages/scipy/lib/blas/tests/test_blas.py", line 76, in check_dot
>     assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j)
> SystemError: NULL result without error in PyObject_Call

When I compiled blas sources with gfortran and disabled ATLAS (export ATLAS=None; export BLAS=None; export BLAS_OPT=None; export BLAS_SRC=/path/to/blas/sources/) then all tests pass. So it could also be an ATLAS issue. Try building scipy against BLAS sources and disable the optimized BLAS libraries.

Pearu

From robert.kern at gmail.com Sat Apr 8 15:42:37 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 08 Apr 2006 14:42:37 -0500
Subject: [SciPy-user] SciPy with gcc4 and gfortran on OS X
In-Reply-To:
References: <12B081E0-DC83-480F-BEDF-75D81D938E6A@stanford.edu> <69325117-8B86-4D8E-870C-0EFD6B8C977A@stanford.edu> <44380875.3090803@gmail.com>
Message-ID: <4438122D.5010004@gmail.com>

Pearu Peterson wrote:
>
> On Sat, 8 Apr 2006, Robert Kern wrote:
>
>> Pearu Peterson wrote:
>>
>>> On Sat, 8 Apr 2006, Zachary Pincus wrote:
>>>
>>>> Finally, I would note that maybe the patched setup.py files should only turn
>>>> off optimization for the mach files if gfortran is being used. (If this is
>>>> something that can easily be done.)
>>>
>>> Nope.
>>> Actually mach files should be compiled without optimization for all
>>> compilers. So the patched setup.py files are ok.
>>
>> My question is, does -O mean the same thing for all compilers?
>
> Hmm, not sure. But I wouldn't count on it. For example, for gcc -O means
> level 1 optimization, for intel compilers -O sets level 2 optimization.
> I wonder, why this question is relevant for you?

The scipy-gfortran.patch that Zach provided adds extra_compile_args=['-O'] to all of the mach files regardless of what compiler is compiling them. I'm also wondering about Windows Fortran compilers, which may try to mimic MSVC's /slash /style /arguments.

--
Robert Kern
robert.kern at gmail.com

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco

From rahul.kanwar at gmail.com Sat Apr 8 16:37:23 2006
From: rahul.kanwar at gmail.com (Rahul Kanwar)
Date: Sat, 8 Apr 2006 16:37:23 -0400
Subject: [SciPy-user] Error while installing Scipy
In-Reply-To: <4436D6CA.1030706@gmail.com>
References: <63dec5bf0604061751r1b7a038dm29fc9c69a20bb666@mail.gmail.com> <4435BE99.6030407@gmail.com> <63dec5bf0604071312r3fef4c60s641707f52416c3c8@mail.gmail.com> <4436D6CA.1030706@gmail.com>
Message-ID: <63dec5bf0604081337u4314c062vb16d88641b06508d@mail.gmail.com>

Hello,

I finally got numpy and scipy working on my computer! My computer's configuration is a 64-bit Xeon, ifort (64-bit Fortran compiler), MKL (em64t) and gcc 4.1.0, running Suse 10.1 beta 9. I used the following command to build NumPy and SciPy:

python setup.py config_fc --fcompiler=intel build

Here is what I did; this may be helpful for others, and maybe the developers can make the following changes in the code so that it works out of the box!
1) To compile numpy edit numpy/distutils/cpuinfo.py and replace XEON by Xeon, ------------------------------------------------------------------- def _is_XEON(self): return re.match(r'.*?XEON\b', self.info[0]['model name']) is not None ------------------------------------------------------------------- now open numpy/distutils/fcompiler/intel.py and change ------------------------------------------------------------------- version_pattern = r'Intel\(R\) Fortran Compiler for 32-bit '\ 'applications, Version (?P<version>[^\s*]*)' ------------------------------------------------------------------- with ******************************************************************* version_pattern = r'Intel\(R\) Fortran Compiler for Intel\(R\) EM64T-based '\ 'applications, Version (?P<version>[^\s*]*)' ******************************************************************* 2) You need to change the mkl linker flags as -lmkl_em64t cannot be used to build a shared library as there is only a libmkl_em64t.a file in the mkl lib folder. We can add the LAPACK and BLAS routines by linking with the libraries libmkl_lapack32.so and libmkl_lapack64.so (we need to include both as single and double precision functions are kept in separate files).
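Both regex edits in step 1 are easy to sanity-check from a Python prompt; the model-name and banner strings below are made-up samples of what /proc/cpuinfo and ifort typically report, not output from a real machine:

```python
import re

# Step 1a: on Linux, /proc/cpuinfo reports "Xeon", not "XEON", so the
# pattern must use the mixed-case spelling (sample model-name string):
model_name = "Intel(R) Xeon(TM) CPU 3.20GHz"
assert re.match(r".*?Xeon\b", model_name) is not None
assert re.match(r".*?XEON\b", model_name) is None   # old pattern never matches

# Step 1b: the EM64T ifort banner differs from the 32-bit one, so the
# original version_pattern never matches it (sample banner string):
banner = ("Intel(R) Fortran Compiler for Intel(R) EM64T-based "
          "applications, Version 9.0")
version_pattern = (r"Intel\(R\) Fortran Compiler for Intel\(R\) EM64T-based "
                   r"applications, Version (?P<version>[^\s*]*)")
m = re.match(version_pattern, banner)
print(m.group("version"))   # -> 9.0
```

If the pattern fails to match the local compiler's banner, numpy.distutils treats the compiler as unavailable, which is the symptom seen elsewhere in this thread.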
Here is how I did this, first open numpy/distutils/system_info.py and edit ------------------------------------------------------------------- elif cpu.is_Xeon(): plt = 'em64t' l = 'mkl_em64t' ------------------------------------------------------------------- with (I am removing mkl_em64t from the linker command and replacing it with its shared counterpart) ******************************************************************* elif cpu.is_Xeon(): plt = 'em64t' l = 'mkl' #'mkl_em64t' ******************************************************************* now in the same file replace ------------------------------------------------------------------- lapack_libs = self.get_libs('lapack_libs',['mkl_lapack']) info = {'libraries': lapack_libs} dict_append(info,**mkl) self.set_info(**info) ------------------------------------------------------------------- with ******************************************************************* lapack_libs = self.get_libs('lapack_libs',['mkl_lapack32', 'mkl_lapack64']) info = {'libraries': lapack_libs} dict_append(info,**mkl) self.set_info(**info) ******************************************************************* 3) you can also replace ------------------------------------------------------------------- if cpu.has_mmx(): opt.append('-xM') ------------------------------------------------------------------- with (it helps ifort to vectorize the loops on a Xeon machine and does not give the xM warning) ******************************************************************* if cpu.has_mmx(): opt.append('') #-xM ******************************************************************* That's all I did to get numPy and scipy running on my machine; hope this info helps someone else.
bye, Rahul From zpincus at stanford.edu Sat Apr 8 17:07:52 2006 From: zpincus at stanford.edu (Zachary Pincus) Date: Sat, 8 Apr 2006 16:07:52 -0500 Subject: [SciPy-user] SciPy with gcc4 and gfortran on OS X In-Reply-To: <4438122D.5010004@gmail.com> References: <12B081E0-DC83-480F-BEDF-75D81D938E6A@stanford.edu> <69325117-8B86-4D8E-870C-0EFD6B8C977A@stanford.edu> <44380875.3090803@gmail.com> <4438122D.5010004@gmail.com> Message-ID: <1405509B-D5A1-4891-84BA-6C506D7CF1D7@stanford.edu> >Robert > The scipy-gfortran.patch that Zach provided adds extra_compile_args= > ['-O'] to > all of the mach files regardless of what compiler is compiling > them. > > I'm also wondering about Windows Fortran compilers which may try to > mimic MSVC's > /slash /style /arguments. Pearu added a more sophisticated patch to the SVN that passes a '--noopt' argument to config_fc, instead of the more crude job that I had initially done with forcibly passing '-O' to the compiler. So that's good. It's still for all compilers/platforms, but at least it won't be passing overtly wrong arguments to non-gfortran compilers. >Pearu > When I compiled blas sources with gfortran and disabled ATLAS (export > ATLAS=None; export BLAS=None; export BLAS_OPT=None; export > BLAS_SRC=/path/to/blas/sources/) Hmm, I've been using the Apple-supplied optimized whatever that's in their VecLib, so I haven't bothered configuring or compiling any blas or atlas sources. I do get the following warning on scipy.test(): > WARNING: cblas module is empty ... > * If atlas library is not found by numpy/distutils/system_info.py, > then scipy uses fblas instead of cblas. Should I try anything in particular to see about tracking the problems down on my box? Maybe tell scipy to not use VecLib or whatever (how?) ? Also, I'll try this on a G5 when I get into work on monday, to see if things are different on those.
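For reference, the environment-variable recipe Pearu quotes above amounts to the following shell session; the BLAS source path is a placeholder, and the rebuild commands are shown commented out:

```shell
# Disable detection of ATLAS and any optimized BLAS so that scipy
# falls back to building the reference BLAS from source.
export ATLAS=None
export BLAS=None
export BLAS_OPT=None
export BLAS_SRC=/path/to/blas/sources/   # placeholder path

# Rebuild from a clean tree so the new configuration takes effect:
#   rm -rf build && python setup.py build
echo "ATLAS=$ATLAS BLAS_SRC=$BLAS_SRC"
```

numpy/distutils/system_info.py reads these variables at build time, so they must be set in the same shell that runs setup.py.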
Zach From robert.kern at gmail.com Sat Apr 8 17:19:28 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 08 Apr 2006 16:19:28 -0500 Subject: [SciPy-user] SciPy with gcc4 and gfortran on OS X In-Reply-To: <1405509B-D5A1-4891-84BA-6C506D7CF1D7@stanford.edu> References: <12B081E0-DC83-480F-BEDF-75D81D938E6A@stanford.edu> <69325117-8B86-4D8E-870C-0EFD6B8C977A@stanford.edu> <44380875.3090803@gmail.com> <4438122D.5010004@gmail.com> <1405509B-D5A1-4891-84BA-6C506D7CF1D7@stanford.edu> Message-ID: <443828E0.1050504@gmail.com> Zachary Pincus wrote: > >Robert > >>The scipy-gfortran.patch that Zach provided adds extra_compile_args= >>['-O'] to >>the all of the mach files regardless of what compiler is compiling >>them. >> >>I'm also wondering about Windows Fortran compilers which may try to >>mimic MSVC's >>/slash /style /arguments. > > Pearu added a more sophisticated patch to the SVN that passes a '-- > noopt' argument to config_fc, instead of the more crude job that I > had initially done with forcibly passing '-O' to the compiler. So > that's good. It's still for all compilers/platforms, but at least it > won't be passing overtly wrong arguments to non-gfortran compilers. Oh good. I didn't see that before posting. > >Pearu > >>When I compiled blas sources with gfortran and disabled ATLAS (export >>ATLAS=None; export BLAS=None; export BLAS_OPT=None; export >>BLAS_SRC=/path/to/blas/sources/) > > Hmm, I've been using the Apple-supplied optimized whatever that's in > their VecLib, so I haven't bothered configuring or compiling any blas > or atlas sources. I do get the following warning on scipy.test(): > >>WARNING: cblas module is empty > > ... > >>* If atlas library is not found by numpy/distutils/system_info.py, >> then scipy uses fblas instead of cblas. > > Should I try anything in particular to see about tracking the > problems down on my box? Maybe tell scipy to not use VecLib or > whatever (how?) ? 
Well, that behavior is a bit wrong since the VecLib framework does in fact contain all of the ATLAS CBLAS functions. I don't think it contains the optimized C versions of the LAPACK functions, though. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From pearu at scipy.org Sun Apr 9 06:13:51 2006 From: pearu at scipy.org (Pearu Peterson) Date: Sun, 9 Apr 2006 05:13:51 -0500 (CDT) Subject: [SciPy-user] Error while installing Scipy In-Reply-To: <63dec5bf0604081337u4314c062vb16d88641b06508d@mail.gmail.com> References: <63dec5bf0604061751r1b7a038dm29fc9c69a20bb666@mail.gmail.com> <63dec5bf0604071312r3fef4c60s641707f52416c3c8@mail.gmail.com> <63dec5bf0604081337u4314c062vb16d88641b06508d@mail.gmail.com> Message-ID: On Sat, 8 Apr 2006, Rahul Kanwar wrote: > Hello, > I finally got numpy and scipy working on my computer! My computer's > configuration is > Xeon 64 bit, ifort (64 bit fortran compiler), mkl (em64t) and gcc > 4.1.0 running Suse 10.1 beta 9. I used the following command to build > numPy and scipy: > python setup.py config_fc --fcompiler=intel build Thanks for the feedback. Everything that you did is now in numpy svn. Different from the above command line, you should now use python setup.py config_fc --fcompiler=intelem build Pearu > Here is what I did, this may be helpful for others and maybe the > developers can do the following changes in the code to make it work > out of the box!
> > 1) To compile numpy edit numpy/distutils/cpuinfo.py and replace XEON by Xeon, > > ------------------------------------------------------------------- > def _is_XEON(self): > return re.match(r'.*?XEON\b', > self.info[0]['model name']) is not None > ------------------------------------------------------------------- > > now open numpy/distutils/fcompiler/intel.py and change > > ------------------------------------------------------------------- > version_pattern = r'Intel\(R\) Fortran Compiler for 32-bit '\ > 'applications, Version (?P<version>[^\s*]*)' > ------------------------------------------------------------------- > > with > > ******************************************************************* > version_pattern = r'Intel\(R\) Fortran Compiler for Intel\(R\) > EM64T-based '\ > 'applications, Version (?P<version>[^\s*]*)' > ******************************************************************* > > 2) You need to change the mkl linker flags as -lmkl_em64t cannot be > used to build a shared library as there is only a libmkl_em64t.a file > in the mkl lib folder. We can add the LAPACK and BLAS routines by > linking with the libraries libmkl_lapack32.so and libmkl_lapack64.so (we > need to include both as single and double precision functions are kept > in separate files).
Here is how I did this, first open > numpy/distutils/system_info.py and edit > > ------------------------------------------------------------------- > elif cpu.is_Xeon(): > plt = 'em64t' > l = 'mkl_em64t' > ------------------------------------------------------------------- > > with (i am removing mkl_em64t from the linker command and replacing > with its shared counterpart) > > ******************************************************************* > elif cpu.is_Xeon(): > plt = 'em64t' > l = 'mkl' #'mkl_em64t' > ******************************************************************* > > now in the same file replace > > ------------------------------------------------------------------- > lapack_libs = self.get_libs('lapack_libs',['mkl_lapack']) > info = {'libraries': lapack_libs} > dict_append(info,**mkl) > self.set_info(**info) > ------------------------------------------------------------------- > > with > > ******************************************************************* > lapack_libs = self.get_libs('lapack_libs',['mkl_lapack32', > 'mkl_lapack64']) > info = {'libraries': lapack_libs} > dict_append(info,**mkl) > self.set_info(**info) > ******************************************************************* > > 3) you can also replace > > ------------------------------------------------------------------- > if cpu.has_mmx(): > opt.append('-xM') > ------------------------------------------------------------------- > > with(it helps ifort to vectorize the loops on Xeon machine and does > not give the xM warning) > > ******************************************************************* > if cpu.has_mmx(): > opt.append('') #-xM > ******************************************************************* > > Thats all i did to get numPy and scipy running on my machine, hope > this info helps some one else. 
> > bye, > Rahul > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From zpincus at stanford.edu Mon Apr 10 05:14:17 2006 From: zpincus at stanford.edu (Zachary Pincus) Date: Mon, 10 Apr 2006 02:14:17 -0700 Subject: [SciPy-user] SciPy with gcc4 and gfortran on OS X In-Reply-To: <443828E0.1050504@gmail.com> References: <12B081E0-DC83-480F-BEDF-75D81D938E6A@stanford.edu> <69325117-8B86-4D8E-870C-0EFD6B8C977A@stanford.edu> <44380875.3090803@gmail.com> <4438122D.5010004@gmail.com> <1405509B-D5A1-4891-84BA-6C506D7CF1D7@stanford.edu> <443828E0.1050504@gmail.com> Message-ID: <3EA0C9C7-15E0-4AE5-8AE8-53B419BD100F@stanford.edu> >>> WARNING: cblas module is empty >> >> ... >> >>> * If atlas library is not found by numpy/distutils/system_info.py, >>> then scipy uses fblas instead of cblas. >> >> Should I try anything in particular to see about tracking the >> problems down on my box? Maybe tell scipy to not use VecLib or >> whatever (how?) ? > > Well, that behavior is a bit wrong since the VecLib framework does > in fact > contain all of the ATLAS CBLAS functions. I don't think it contains > the > optimized C versions of the LAPACK functions, though. Is there anything I can do to help track down this slightly wrong behavior? Also, in general, should I be building/installing other blas/atlas/ lapack-stuff libs on the mac to get numpy in optimal shape? Or (modulo wrong behavior) should what numpy provides and what VecLib provides be all I want? 
thanks, Zach From nmarais at sun.ac.za Mon Apr 10 06:18:55 2006 From: nmarais at sun.ac.za (Neilen Marais) Date: Mon, 10 Apr 2006 12:18:55 +0200 Subject: [SciPy-user] F2PY stopped working with new scipy References: <44368A57.9000206@gmail.com> Message-ID: Hi Robert On Fri, 07 Apr 2006 10:50:47 -0500, Robert Kern wrote: > Neilen Marais wrote: >> After installing numpy-0.9.6 and scipy-0.4.8, trying to generate the wrappers >> results in the following output: >> >> $ f2py --fcompiler=intel -m testmod -c test_data.f90 test_prog.f90 > >> adding '/tmp/tmp_cZX2X/src/testmod-f2pywrappers2.f90' to sources. running >> build_ext customize UnixCCompiler customize UnixCCompiler using build_ext >> Could not locate executable efort Could not locate executable efc warning: >> build_ext: fcompiler=intel is not available. > > This is the problem. The first thing to check is that efc is on your PATH. The efc? AFAIK the official driver name for intel fortran is ifort, and this is indeed on my path. An older name is ifc, though it complains about that command name being deprecated: brick at genugtig:/usr/local/src/numpy-0.9.6 $ ifc ifc: warning: The Intel Fortran driver is now named ifort. You can suppress this message with '-quiet' ifort: Command line error: no files specified; for help type "ifort -help" brick at genugtig:/usr/local/src/numpy-0.9.6 $ efc bash: efc: command not found brick at genugtig:/usr/local/src/numpy-0.9.6 $ ifort -v Version 9.0 brick at genugtig:/usr/local/src/numpy-0.9.6 I also added symbolic links for efc and efort. Still no-go though. > second thing to check is the version string of the compiler. numpy.distutils > uses regexes to extract the version of the compiler from the version string. It > is possible that you are using a version of the compiler that has a different > string than we are expecting. How can I obtain this test string? It did work with the older version of scipy/f2py, so this may be some sort of regression. 
Thanks Neilen -- you know its kind of tragic we live in the new world but we've lost the magic -- Battery 9 (www.battery9.co.za) From robert.kern at gmail.com Mon Apr 10 12:45:32 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 10 Apr 2006 11:45:32 -0500 Subject: [SciPy-user] F2PY stopped working with new scipy In-Reply-To: References: <44368A57.9000206@gmail.com> Message-ID: <443A8BAC.8080302@gmail.com> Neilen Marais wrote: > Hi Robert > > On Fri, 07 Apr 2006 10:50:47 -0500, Robert Kern wrote: > >>Neilen Marais wrote: >> >>>After installing numpy-0.9.6 and scipy-0.4.8, trying to generate the wrappers >>>results in the following output: >>> >>>$ f2py --fcompiler=intel -m testmod -c test_data.f90 test_prog.f90 >> >>> adding '/tmp/tmp_cZX2X/src/testmod-f2pywrappers2.f90' to sources. running >>> build_ext customize UnixCCompiler customize UnixCCompiler using build_ext >>> Could not locate executable efort Could not locate executable efc warning: >>> build_ext: fcompiler=intel is not available. >> >>This is the problem. The first thing to check is that efc is on your PATH. The > > efc? AFAIK the official driver name for intel fortran is ifort, and this is > indeed on my path. Well, according to the error message, it was looking for efort and efc for some reason. Looking at the code (numpy/distutils/fcompiler/intel.py), it appears that the IntelItaniumFCompiler class looks for efort and efc; however, that compiler is supposed to be specified by intele, not intel. > An older name is ifc, though it complains about that command > name being deprecated: > > brick at genugtig:/usr/local/src/numpy-0.9.6 > $ ifc > ifc: warning: The Intel Fortran driver is now named ifort.
You can suppress > this message with '-quiet' > ifort: Command line error: no files specified; for help type "ifort -help" > > brick at genugtig:/usr/local/src/numpy-0.9.6 > $ efc > bash: efc: command not found > > brick at genugtig:/usr/local/src/numpy-0.9.6 > $ ifort -v > Version 9.0 > > brick at genugtig:/usr/local/src/numpy-0.9.6 > > I also added symbolic links for efc and efort. Still no-go though. > >>second thing to check is the version string of the compiler. numpy.distutils >>uses regexes to extract the version of the compiler from the version string. It >>is possible that you are using a version of the compiler that has a different >>string than we are expecting. > > How can I obtain this test string? It did work with the older version > of scipy/f2py, so this may be some sort of regression. The regexes are the version_pattern class attributes in the file intel.py given above. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From jelle.feringa at ezct.net Mon Apr 10 12:57:20 2006 From: jelle.feringa at ezct.net (Jelle Feringa / EZCT Architecture & Design Research) Date: Mon, 10 Apr 2006 18:57:20 +0200 Subject: [SciPy-user] ndimage | LOG / DOG In-Reply-To: <443A8BAC.8080302@gmail.com> Message-ID: <008701c65cbf$d54844f0$0b01a8c0@JELLE> A question on the (terrific!) ndimage module: Is the LOG operator (Laplacian of Gaussian) == scipy.ndimage.gaussian_laplace? Is the DOG operator (difference of Gaussians) already implemented in ndimage or have I just overlooked it? # A snippet on how to efficiently compute a difference of Gaussians would # be greatly appreciated... -jelle From webb.sprague at gmail.com Mon Apr 10 14:40:46 2006 From: webb.sprague at gmail.com (Webb Sprague) Date: Mon, 10 Apr 2006 11:40:46 -0700 Subject: [SciPy-user] Any SPLUS to scipy ideas for lm and summary(lm)?
Message-ID: Hi Scipy-ers, I would like to duplicate the following piece of SPLUS/R code in Python-scipy, and would love somebody smarter than me to give me some ideas. (If you don't know SPLUS/R, you may not want to bother with this.) model.kt <- summary(lm(kt.diff ~ 1 )) kt.drift <- model.kt$coefficients[1,1] # Coefficient sec <- model.kt$coefficients[1,2] # Standard Error of the Coefficient (SEC) see <- model.kt$sigma # Standard error of the Equation (SEE) Getting a least-squares fit in scipy is not a problem, but getting all that other nice stuff IS kind of a problem. I don't mind either hacking scipy.stats, or writing my own function, but maybe someone has some ideas for this, maybe it can be contributed, or ???. I also realize that the SPLUS formula notation doesn't exist at all in scipy-Python, so no need to point that out to me. Perhaps there should be a scipy.stats working group? It seems like scipy.stats (not including the probability distributions and basic summary functions, which are fine) is kind of a forgotten stepchild in scipy, and probably needs a nurturing aunt or uncle or several.... Thx, sorry for such an open ended question. W From jonathan.taylor at stanford.edu Mon Apr 10 16:21:53 2006 From: jonathan.taylor at stanford.edu (Jonathan Taylor) Date: Mon, 10 Apr 2006 13:21:53 -0700 Subject: [SciPy-user] Any SPLUS to scipy ideas for lm and summary(lm)? In-Reply-To: References: Message-ID: <443ABE61.9020701@stanford.edu> actually, i have some implementation of the model formula stuff in python, and some linear model stuff. i hope to contribute to scipy soon.... there was a brief discussion of this on scipy-dev over the past two weeks and it seems there is some interest in getting this stuff into scipy. -- jonathan Webb Sprague wrote: >Hi Scipy-ers, > >I would like to duplicate the following piece of SPLUS/R code in >Python-scipy, and would love somebody smarter than me to give me some >ideas. 
(If you don't know SPLUS/R, you may not want to bother with >this.) > >model.kt <- summary(lm(kt.diff ~ 1 )) >kt.drift <- model.kt$coefficients[1,1] # Coefficient >sec <- model.kt$coefficients[1,2] # Standard Error of the Coefficient (SEC) >see <- model.kt$sigma # Standard error of the Equation (SEE) > >Getting a least-squares fit in scipy is not a problem, but getting all >that other nice stuff IS kind of a problem. I don't mind either >hacking scipy.stats, or writing my own function, but maybe someone has >some ideas for this, maybe it can be contributed, or ???. I also >realize that the SPLUS formula notation doesn't exist at all in >scipy-Python, so no need to point that out to me. > >Perhaps there should be a scipy.stats working group? It seems like >scipy.stats (not including the probability distributions and basic >summary functions, which are fine) is kind of a forgotten stepchild in >scipy, and probably needs a nurturing aunt or uncle or several.... > >Thx, sorry for such an open ended question. >W > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > -- ------------------------------------------------------------------------ I'm part of the Team in Training: please support our efforts for the Leukemia and Lymphoma Society! http://www.active.com/donate/tntsvmb/tntsvmbJTaylor GO TEAM !!! ------------------------------------------------------------------------ Jonathan Taylor Tel: 650.723.9230 Dept. 
of Statistics Fax: 650.725.8977 Sequoia Hall, 137 www-stat.stanford.edu/~jtaylo 390 Serra Mall Stanford, CA 94305 From zpincus at stanford.edu Mon Apr 10 18:13:14 2006 From: zpincus at stanford.edu (Zachary Pincus) Date: Mon, 10 Apr 2006 15:13:14 -0700 Subject: [SciPy-user] SciPy with gcc4 and gfortran on OS X In-Reply-To: <3EA0C9C7-15E0-4AE5-8AE8-53B419BD100F@stanford.edu> References: <12B081E0-DC83-480F-BEDF-75D81D938E6A@stanford.edu> <69325117-8B86-4D8E-870C-0EFD6B8C977A@stanford.edu> <44380875.3090803@gmail.com> <4438122D.5010004@gmail.com> <1405509B-D5A1-4891-84BA-6C506D7CF1D7@stanford.edu> <443828E0.1050504@gmail.com> <3EA0C9C7-15E0-4AE5-8AE8-53B419BD100F@stanford.edu> Message-ID: <9F470E6B-F2D9-4D2F-820C-7351FC41F0F6@stanford.edu> Hi folks, I just built the latest scipy svn version on a G5 with gfortran to see how things went. Everything built OK, and the tests don't segfault, but all the problems I reported on my G4 persist. Specifically, some tests fail all the time and some fail sporadically (see below). Also, I still get the 'WARNING: clapack module is empty' message, which Robert informed me shouldn't be happening on OS X. Oh well. Anything I can do to help sort out these issues?
Zach TESTS WHICH FAIL ALL THE TIME ====================================================================== FAIL: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy/lib/blas/tests/test_blas.py", line 76, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/numpy/testing/utils.py", line 148, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: DESIRED: (-9+2j) ACTUAL: (-1.9984917640686035-1.9984936714172363j) ====================================================================== FAIL: check_dot (scipy.linalg.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy/linalg/tests/test_blas.py", line 75, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/numpy/testing/utils.py", line 148, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: DESIRED: (-9+2j) ACTUAL: (-1.9984917640686035-1.9984936714172363j) SPORADIC FAILURES ====================================================================== FAIL: check_expon (scipy.stats.tests.test_morestats.test_anderson) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy/stats/tests/test_morestats.py", line 57, in check_expon assert_array_less(A, crit[-2:]) File 
"/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/numpy/testing/utils.py", line 255, in assert_array_less assert cond,\ AssertionError: Arrays are not less-ordered (mismatch 100.0%): Array 1: 2.1501866413808912 Array 2: [ 1.587 1.9339999999999999] ====================================================================== FAIL: check_normal (scipy.stats.tests.test_morestats.test_anderson) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy/stats/tests/test_morestats.py", line 45, in check_normal assert_array_less(crit[:-1], A) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/numpy/testing/utils.py", line 255, in assert_array_less assert cond,\ AssertionError: Arrays are not less-ordered (mismatch 100.0%): Array 1: [ 0.538 0.613 0.736 0.858] Array 2: nan From robert.kern at gmail.com Mon Apr 10 18:37:27 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 10 Apr 2006 17:37:27 -0500 Subject: [SciPy-user] SciPy with gcc4 and gfortran on OS X In-Reply-To: <9F470E6B-F2D9-4D2F-820C-7351FC41F0F6@stanford.edu> References: <12B081E0-DC83-480F-BEDF-75D81D938E6A@stanford.edu> <69325117-8B86-4D8E-870C-0EFD6B8C977A@stanford.edu> <44380875.3090803@gmail.com> <4438122D.5010004@gmail.com> <1405509B-D5A1-4891-84BA-6C506D7CF1D7@stanford.edu> <443828E0.1050504@gmail.com> <3EA0C9C7-15E0-4AE5-8AE8-53B419BD100F@stanford.edu> <9F470E6B-F2D9-4D2F-820C-7351FC41F0F6@stanford.edu> Message-ID: <443ADE27.3050903@gmail.com> Zachary Pincus wrote: > Hi folks, > > I just built the latest scipy svn version on a G5 with gfortran to > see how things went. > Everything build OK, and the tests don't segfault, but all the > problems I reported on my G4 persist. Specifically, some tests fail > all the time and some fail sporadically (see below). 
Also, I still > get the 'WARNING: clapack module is empty' message, which Robert > informed me shouldn't be happening on OS X. No, the clapack module should be empty if you are using the Accelerate framework. Although the Accelerate framework provides most of ATLAS, it does not provide the row-major versions of LAPACK routines that the clapack module wraps. The cblas module, on the other hand, ideally shouldn't be empty. In order to fix that, you will need to alter the test in the function generate_pyf() in Lib/linalg/setup.py (line 79 or so) to take the Accelerate framework into account as a special case. You will also probably have to adjust the list of functions in the cblas module to match those provided by the Accelerate framework. All off my scipy time is going towards scipy.stats at the moment, so I don't have time to develop and test the appropriate choices here. > SPORADIC FAILURES > ====================================================================== > FAIL: check_expon (scipy.stats.tests.test_morestats.test_anderson) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ > python2.4/site-packages/scipy/stats/tests/test_morestats.py", line > 57, in check_expon > assert_array_less(A, crit[-2:]) > File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ > python2.4/site-packages/numpy/testing/utils.py", line 255, in > assert_array_less > assert cond,\ > AssertionError: > Arrays are not less-ordered (mismatch 100.0%): > Array 1: 2.1501866413808912 > Array 2: [ 1.587 1.9339999999999999] Well, this is a stochastic test; it is *supposed* to fail sporadically. However, the mismatch in the array shapes is probably indicative of a real bug. The anderson function is on the list for review, of course. 
http://projects.scipy.org/scipy/scipy/ticket/159 -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From Zhong.Huang at uth.tmc.edu Mon Apr 10 19:06:47 2006 From: Zhong.Huang at uth.tmc.edu (Huang, Zhong ) Date: Mon, 10 Apr 2006 18:06:47 -0500 Subject: [SciPy-user] NameError Message-ID: <7B54DE0D8F88E1418C0B804A737E5D90073CA9@UTHEVS4.mail.uthouston.edu> Hi, I am trying to use least-squares procedure of scipy. I encountered NameError problem: from scipy.optimize import leastsq plsq=leastsq(residuals,p0,args(y_meas,x)) NameError: name 'args' is not defined Could anybody help me out? Thanks! From oliphant at ee.byu.edu Mon Apr 10 19:15:00 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Mon, 10 Apr 2006 17:15:00 -0600 Subject: [SciPy-user] NameError In-Reply-To: <7B54DE0D8F88E1418C0B804A737E5D90073CA9@UTHEVS4.mail.uthouston.edu> References: <7B54DE0D8F88E1418C0B804A737E5D90073CA9@UTHEVS4.mail.uthouston.edu> Message-ID: <443AE6F4.4080007@ee.byu.edu> Huang, Zhong wrote: >Hi, I am trying to use least-squares procedure >of scipy. I encountered NameError problem: > >from scipy.optimize import leastsq >plsq=leastsq(residuals,p0,args(y_meas,x)) > >NameError: name 'args' is not defined > >Could anybody help me out? > > I think you mean: args = (y_meas, x) i.e. plsq=leastsq(residuals,p0,args=(y_meas,x)) -Travis From robert.kern at gmail.com Mon Apr 10 19:21:08 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 10 Apr 2006 18:21:08 -0500 Subject: [SciPy-user] NameError In-Reply-To: <7B54DE0D8F88E1418C0B804A737E5D90073CA9@UTHEVS4.mail.uthouston.edu> References: <7B54DE0D8F88E1418C0B804A737E5D90073CA9@UTHEVS4.mail.uthouston.edu> Message-ID: <443AE864.7060704@gmail.com> Huang, Zhong wrote: > Hi, I am trying to use least-squares procedure > of scipy. 
I encountered NameError problem: > > from scipy.optimize import leastsq > plsq=leastsq(residuals,p0,args(y_meas,x)) You are missing an equals sign. plsq=leastsq(residuals,p0,args=(y_meas,x)) -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From Zhong.Huang at uth.tmc.edu Mon Apr 10 19:21:28 2006 From: Zhong.Huang at uth.tmc.edu (Huang, Zhong ) Date: Mon, 10 Apr 2006 18:21:28 -0500 Subject: [SciPy-user] NameError Message-ID: <7B54DE0D8F88E1418C0B804A737E5D90073CAB@UTHEVS4.mail.uthouston.edu> Quite right. I tried plsq=leastsq(residuals,p0,(y_meas,x)) it works, too. Thank you, Travis. Zhong -----Original Message----- From: scipy-user-bounces at scipy.net on behalf of Travis Oliphant Sent: Mon 4/10/2006 6:15 PM To: SciPy Users List Subject: Re: [SciPy-user] NameError Huang, Zhong wrote: >Hi, I am trying to use least-squares procedure >of scipy. I encountered NameError problem: > >from scipy.optimize import leastsq >plsq=leastsq(residuals,p0,args(y_meas,x)) > >NameError: name 'args' is not defined > >Could anybody help me out? > > I think you mean: args = (y_meas, x) i.e. plsq=leastsq(residuals,p0,args=(y_meas,x)) -Travis From Zhong.Huang at uth.tmc.edu Mon Apr 10 19:22:57 2006 From: Zhong.Huang at uth.tmc.edu (Huang, Zhong ) Date: Mon, 10 Apr 2006 18:22:57 -0500 Subject: [SciPy-user] NameError Message-ID: <7B54DE0D8F88E1418C0B804A737E5D90073CAC@UTHEVS4.mail.uthouston.edu> Yes, thank you, Robert.
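For completeness, here is the corrected call in a small self-contained script; the straight-line model and the data are invented for illustration:

```python
import numpy as np
from scipy.optimize import leastsq

def residuals(p, y_meas, x):
    # Residuals of a straight-line model y = a*x + b.
    a, b = p
    return y_meas - (a * x + b)

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y_meas = 2.0 * x + 1.0      # noise-free data, so the fit recovers (2, 1)
p0 = [1.0, 0.0]             # initial guess for (a, b)

# Note the '=': args is a *keyword* argument carrying the extra
# parameters that leastsq passes through to residuals().
plsq = leastsq(residuals, p0, args=(y_meas, x))
print(plsq[0])              # -> approximately [2. 1.]
```

As Zhong noticed, leastsq(residuals, p0, (y_meas, x)) also works, because args is simply the third positional parameter; the original call failed only because args(y_meas, x) tried to call a nonexistent function named args.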
Zhong -----Original Message----- From: scipy-user-bounces at scipy.net on behalf of Robert Kern Sent: Mon 4/10/2006 6:21 PM To: SciPy Users List Subject: Re: [SciPy-user] NameError Huang, Zhong wrote: > Hi, I am trying to use least-squares procedure > of scipy. I encountered NameError problem: > > from scipy.optimize import leastsq > plsq=leastsq(residuals,p0,args(y_meas,x)) You are missing an equals sign. plsq=leastsq(residuals,p0,args=(y_meas,x)) -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco _______________________________________________ SciPy-user mailing list SciPy-user at scipy.net http://www.scipy.net/mailman/listinfo/scipy-user -------------- next part -------------- A non-text attachment was scrubbed... Name: winmail.dat Type: application/ms-tnef Size: 2755 bytes Desc: not available URL: From michael.sorich at gmail.com Mon Apr 10 19:37:20 2006 From: michael.sorich at gmail.com (Michael Sorich) Date: Tue, 11 Apr 2006 09:07:20 +0930 Subject: [SciPy-user] Any SPLUS to scipy ideas for lm and summary(lm)? In-Reply-To: References: Message-ID: <16761e100604101637k74b97117u90a0ae78729afbe@mail.gmail.com> On 4/11/06, Webb Sprague wrote: > > Hi Scipy-ers, > > I would like to duplicate the following piece of SPLUS/R code in > Python-scipy, and would love somebody smarter than me to give me some > ideas. (If you don't know SPLUS/R, you may not want to bother with > this.) > > model.kt <- summary(lm(kt.diff ~ 1 )) > kt.drift <- model.kt$coefficients[1,1] # Coefficient > sec <- model.kt$coefficients[1,2] # Standard Error of the Coefficient > (SEC) > see <- model.kt$sigma # Standard error of the Equation (SEE) > > Getting a least-squares fit in scipy is not a problem, but getting all > that other nice stuff IS kind of a problem. 
I don't mind either > hacking scipy.stats, or writing my own function, but maybe someone has > some ideas for this, maybe it can be contributed, or ???. I also > realize that the SPLUS formula notation doesn't exist at all in > scipy-Python, so no need to point that out to me. > > Perhaps there should be a scipy.stats working group? It seems like > scipy.stats (not including the probability distributions and basic > summary functions, which are fine) is kind of a forgotten stepchild in > scipy, and probably needs a nurturing aunt or uncle or several.... I don't have anything really useful to say, other than to say that I would also like a stronger focus on stats in scipy. I typically use R at the moment but would prefer to use scipy. I find the data.frame data type in R/Splus particularly helpful for the type of statistical analysis I undertake (basically something like a masked recarray with ability to have col and row names). In my spare time I am working on trying to make something similar for numpy. Mike -------------- next part -------------- An HTML attachment was scrubbed... URL: From zpincus at stanford.edu Tue Apr 11 04:13:27 2006 From: zpincus at stanford.edu (Zachary Pincus) Date: Tue, 11 Apr 2006 01:13:27 -0700 Subject: [SciPy-user] SciPy with gcc4 and gfortran on OS X In-Reply-To: <443ADE27.3050903@gmail.com> References: <12B081E0-DC83-480F-BEDF-75D81D938E6A@stanford.edu> <69325117-8B86-4D8E-870C-0EFD6B8C977A@stanford.edu> <44380875.3090803@gmail.com> <4438122D.5010004@gmail.com> <1405509B-D5A1-4891-84BA-6C506D7CF1D7@stanford.edu> <443828E0.1050504@gmail.com> <3EA0C9C7-15E0-4AE5-8AE8-53B419BD100F@stanford.edu> <9F470E6B-F2D9-4D2F-820C-7351FC41F0F6@stanford.edu> <443ADE27.3050903@gmail.com> Message-ID: Robert - Thanks for the feedback about the errors, etc. Do you have any feel for the persistent errors on scipy.test() that I've been seeing with gfortran? 
You commented on the sporadic errors earlier, but the consistent ones worry me the most, given that it appears to be a simple dot product (on complex numbers, reproduced below for convenience) that is failing badly, and consistently. Is this indicative of gfortran problems, do you think, or something else that I can hunt down? Zach PS. Thanks for your time here, especially since this is all sort of tangential (because gfortran isn't a necessity on PPC Apple hardware, and apparently these problems aren't happening on Intel Apple scipy builds with gfortran (??) ). That dot product problem: FAIL: check_dot (scipy.lib.tests.test_blas.test_fblas1_simple) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy/lib/blas/tests/test_blas.py", line 76, in check_dot assert_almost_equal(f([3j,-4,3-4j],[2,3,1]),-9+2j) File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/numpy/testing/utils.py", line 148, in assert_almost_equal assert round(abs(desired - actual),decimal) == 0, msg AssertionError: Items are not equal: DESIRED: (-9+2j) ACTUAL: (-1.9984917640686035-1.9984936714172363j) > >> SPORADIC FAILURES >> ===================================================================== >> = >> FAIL: check_expon (scipy.stats.tests.test_morestats.test_anderson) >> --------------------------------------------------------------------- >> - >> Traceback (most recent call last): >> File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ >> python2.4/site-packages/scipy/stats/tests/test_morestats.py", line >> 57, in check_expon >> assert_array_less(A, crit[-2:]) >> File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ >> python2.4/site-packages/numpy/testing/utils.py", line 255, in >> assert_array_less >> assert cond,\ >> AssertionError: >> Arrays are not less-ordered (mismatch 100.0%): >> Array 1: 
2.1501866413808912 >> Array 2: [ 1.587 1.9339999999999999] > > Well, this is a stochastic test; it is *supposed* to fail > sporadically. However, > the mismatch in the array shapes is probably indicative of a real > bug. The > anderson function is on the list for review, of course. > > http://projects.scipy.org/scipy/scipy/ticket/159 From robert.kern at gmail.com Tue Apr 11 04:34:29 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 11 Apr 2006 03:34:29 -0500 Subject: [SciPy-user] SciPy with gcc4 and gfortran on OS X In-Reply-To: References: <12B081E0-DC83-480F-BEDF-75D81D938E6A@stanford.edu> <69325117-8B86-4D8E-870C-0EFD6B8C977A@stanford.edu> <44380875.3090803@gmail.com> <4438122D.5010004@gmail.com> <1405509B-D5A1-4891-84BA-6C506D7CF1D7@stanford.edu> <443828E0.1050504@gmail.com> <3EA0C9C7-15E0-4AE5-8AE8-53B419BD100F@stanford.edu> <9F470E6B-F2D9-4D2F-820C-7351FC41F0F6@stanford.edu> <443ADE27.3050903@gmail.com> Message-ID: <443B6A15.9030301@gmail.com> Zachary Pincus wrote: > Robert - > > Thanks for the feedback about the errors, etc. > Do you have any feel for the persistent errors on scipy.test() that > I've been seeing with gfortran? You commented on the sporadic errors > earlier, but the consistent ones worry me the most, given that it > appears to be a simple dot product (on complex numbers, reproduced > below for convenience) that is failing badly, and consistently. Is > this indicative of gfortran problems, do you think, or something else > that I can hunt down? I can't think of anything offhand. You will want to check whether the function that is failing is cdotu or zdotu. Also, check if {c,z}dotc is giving incorrect results, too. {c,z}dotu seems to be used in _dotblas, so you should check numpy.dot() for the same bug. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From jelle.feringa at ezct.net Tue Apr 11 04:56:25 2006 From: jelle.feringa at ezct.net (Jelle Feringa / EZCT Architecture & Design Research) Date: Tue, 11 Apr 2006 10:56:25 +0200 Subject: [SciPy-user] ndimage | LOG/DOG operator Message-ID: <004101c65d45$d07c9530$0b01a8c0@JELLE> A question on the (terrific!) ndimage module: Is the LOG operator (Laplacian of Gaussian) == scipy.ndimage.gaussian_laplace? Is the DOG operator (difference of Gaussians) already implemented in ndimage or have I just overlooked it? # A snippet on how to efficiently compute a difference of Gaussians would # be greatly appreciated... -jelle --- Sorry for posting this again, I mistakenly posted it in another thread --- From travis at enthought.com Tue Apr 11 16:10:30 2006 From: travis at enthought.com (Travis N. Vaught) Date: Tue, 11 Apr 2006 15:10:30 -0500 Subject: [SciPy-user] ANN: SciPy 2006 Conference Message-ID: <443C0D36.80608@enthought.com> Greetings, The *SciPy 2006 Conference* is scheduled for August 17-18, 2006 at CalTech. A tremendous amount of work has gone into SciPy and Numpy over the past few months, and the scientific python community around these and other tools has truly flourished[1]. The Scipy 2006 Conference is an excellent opportunity to exchange ideas, learn techniques, contribute code and affect the direction of scientific computing with Python. Conference details are at http://www.scipy.org/SciPy2006 Keynote ------- Python language author Guido van Rossum (!) has agreed to be the Keynote speaker at this year's Conference. http://www.python.org/~guido/ Registration: ------------- Registration is now open. You may register early online for $100.00 at http://www.enthought.com/scipy06. Registration includes breakfast and lunch Thursday & Friday and a very nice dinner Thursday night. After July 14, 2006, registration will cost $150.00.
Call for Presenters ------------------- If you are interested in presenting at the conference, you may submit an abstract in Plain Text, PDF or MS Word formats to abstracts at scipy.org -- the deadline for abstract submission is July 7, 2006. Papers and/or presentation slides are acceptable and are due by August 4, 2006. Tutorial Sessions ----------------- Several people have expressed interest in attending a tutorial session. The Wednesday before the conference might be a good day for this. Please email the list if you have particular topics that you are interested in. Here's a preliminary list: - Migrating from Numeric or Numarray to Numpy - 2D Visualization with Python - 3D Visualization with Python - Introduction to Scientific Computing with Python - Building Scientific Simulation Applications - Traits/TraitsUI Please rate these and add others in a subsequent thread to the SciPy-user mailing list. Perhaps we can pick 4-6 top ideas and recruit speakers as demand dictates. The authoritative list will be tracked here: http://www.scipy.org/SciPy2006/TutorialSessions Coding Sprints -------------- If anyone would like to arrive earlier (Monday and Tuesday the 14th and 15th of August), we can borrow a room on the CalTech campus to sit and code against particular libraries or apps of interest. Please register your interest in these coding sprints on the SciPy-user mailing list as well. The authoritative list will be tracked here: http://www.scipy.org/SciPy2006/CodingSprints Mailing list address: scipy-user at scipy.org Mailing list archives: http://dir.gmane.org/gmane.comp.python.scientific.user Mailing list signup: http://www.scipy.net/mailman/listinfo/scipy-user [1] Some stats: NumPy has averaged over 16,000 downloads per month Sept. 05 to March 06. SciPy has averaged over 3,800 downloads per month in Feb. and March 06. 
(both scipy and numpy figures do not include the 2000 instances per month downloaded as part of the Python Enthought Edition Distribution for Windows.) From webb.sprague at gmail.com Tue Apr 11 21:04:32 2006 From: webb.sprague at gmail.com (Webb Sprague) Date: Tue, 11 Apr 2006 18:04:32 -0700 Subject: [SciPy-user] Any SPLUS to scipy ideas for lm and summary(lm)? In-Reply-To: <443ABE61.9020701@stanford.edu> References: <443ABE61.9020701@stanford.edu> Message-ID: Hi All I can only offer my services as a tester for scipy-stats, but better statistics in Scipy would be great. If we do go ahead with a new and improved stats package, I think a lot of up front design work would be great (I can help some with that, even if real statistical programming is beyond me). R/SPLUS seems to have grown partly by accretion and some of it is pretty ugly, especially wrt naming conventions. However, a lot of it is really great and would serve as a good model. I also think that a data.frame type of data type would be great. If we could concentrate on that and a really good (general) linear model framework we would be making great progress, I think. It is funny that if you grep the scipy/stats directory for "residual" you get nothing :)... Cheers W On 4/10/06, Jonathan Taylor wrote: > actually, i have some implementation of the model formula stuff in > python, and some linear model stuff. i hope to contribute to scipy > soon.... there was a brief discussion of this on scipy-dev over the past > two weeks and it seems there is some interest in getting this stuff into > scipy. > > -- jonathan > > Webb Sprague wrote: > > >Hi Scipy-ers, > > > >I would like to duplicate the following piece of SPLUS/R code in > >Python-scipy, and would love somebody smarter than me to give me some > >ideas. (If you don't know SPLUS/R, you may not want to bother with > >this.) 
> > > >model.kt <- summary(lm(kt.diff ~ 1 )) > >kt.drift <- model.kt$coefficients[1,1] # Coefficient > >sec <- model.kt$coefficients[1,2] # Standard Error of the Coefficient (SEC) > >see <- model.kt$sigma # Standard error of the Equation (SEE) > > > >Getting a least-squares fit in scipy is not a problem, but getting all > >that other nice stuff IS kind of a problem. I don't mind either > >hacking scipy.stats, or writing my own function, but maybe someone has > >some ideas for this, maybe it can be contributed, or ???. I also > >realize that the SPLUS formula notation doesn't exist at all in > >scipy-Python, so no need to point that out to me. > > > >Perhaps there should be a scipy.stats working group? It seems like > >scipy.stats (not including the probability distributions and basic > >summary functions, which are fine) is kind of a forgotten stepchild in > >scipy, and probably needs a nurturing aunt or uncle or several.... > > > >Thx, sorry for such an open ended question. > >W > > > >_______________________________________________ > >SciPy-user mailing list > >SciPy-user at scipy.net > >http://www.scipy.net/mailman/listinfo/scipy-user > > > > > > -- > ------------------------------------------------------------------------ > I'm part of the Team in Training: please support our efforts for the > Leukemia and Lymphoma Society! > > http://www.active.com/donate/tntsvmb/tntsvmbJTaylor > > GO TEAM !!! > > ------------------------------------------------------------------------ > Jonathan Taylor Tel: 650.723.9230 > Dept. 
of Statistics Fax: 650.725.8977 > Sequoia Hall, 137 www-stat.stanford.edu/~jtaylo > 390 Serra Mall > Stanford, CA 94305 > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From perry at stsci.edu Wed Apr 12 09:45:03 2006 From: perry at stsci.edu (Perry Greenfield) Date: Wed, 12 Apr 2006 09:45:03 -0400 Subject: [SciPy-user] ANN: SciPy 2006 Conference In-Reply-To: <443C0D36.80608@enthought.com> References: <443C0D36.80608@enthought.com> Message-ID: Hi Travis, Who is helping organize this year's conference besides you. I'm thinking I may want to help, particularly in involving the astronomical community more. The next month isn't that good for me but I'd like to be involved if that is of interest to you (i.e., with the program and such). If you are interested, what is your schedule of activities for planning the conference? Thanks, Perry On Apr 11, 2006, at 4:10 PM, Travis N. Vaught wrote: > Greetings, > > The *SciPy 2006 Conference* is scheduled for August 17-18, 2006 at > CalTech. > > A tremendous amount of work has gone into SciPy and Numpy over the > past > few months, and the scientific python community around these and other > tools has truly flourished[1]. The Scipy 2006 Conference is an > excellent opportunity to exchange ideas, learn techniques, contribute > code and affect the direction of scientific computing with Python. > > Conference details are at http://www.scipy.org/SciPy2006 > > Keynote > ------- > Python language author Guido van Rossum (!) has agreed to be the > Keynote > speaker at this year's Conference. > http://www.python.org/~guido/ > > > Registration: > ------------- > Registration is now open. > > You may register early online for $100.00 at > http://www.enthought.com/scipy06. Registration includes breakfast and > lunch Thursday & Friday and a very nice dinner Thursday night. After > July 14, 2006, registration will cost $150.00. 
> > > Call for Presenters > ------------------- > If you are interested in presenting at the conference, you may > submit an > abstract in Plain Text, PDF or MS Word formats to > abstracts at scipy.org -- > the deadline for abstract submission is July 7, 2006. Papers and/or > presentation slides are acceptable and are due by August 4, 2006. > > > Tutorial Sessions > ----------------- > Several people have expressed interest in attending a tutorial > session. > The Wednesday before the conference might be a good day for this. > Please email the list if you have particular topics that you are > interested in. Here's a preliminary list: > > - Migrating from Numeric or Numarray to Numpy > - 2D Visualization with Python > - 3D Visualization with Python > - Introduction to Scientific Computing with Python > - Building Scientific Simulation Applications > - Traits/TraitsUI > > Please rate these and add others in a subsequent thread to the > SciPy-user mailing list. Perhaps we can pick 4-6 top ideas and > recruit speakers as demand dictates. The authoritative list will > be tracked here: > http://www.scipy.org/SciPy2006/TutorialSessions > > > Coding Sprints > -------------- > If anyone would like to arrive earlier (Monday and Tuesday the 14th > and > 15th of August), we can borrow a room on the CalTech campus to sit and > code against particular libraries or apps of interest. Please > register > your interest in these coding sprints on the SciPy-user mailing > list as > well. The authoritative list will be tracked here: > http://www.scipy.org/SciPy2006/CodingSprints > > Mailing list address: scipy-user at scipy.org > Mailing list archives: > http://dir.gmane.org/gmane.comp.python.scientific.user > Mailing list signup: http://www.scipy.net/mailman/listinfo/scipy-user > > > [1] Some stats: > NumPy has averaged over 16,000 downloads per month Sept. 05 to > March 06. > SciPy has averaged over 3,800 downloads per month in Feb. and > March 06. 
> (both scipy and numpy figures do not include the 2000 instances per > month downloaded as part of the Python Enthought Edition > Distribution > for Windows.) > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user From travis at enthought.com Wed Apr 12 13:33:42 2006 From: travis at enthought.com (Travis N. Vaught) Date: Wed, 12 Apr 2006 12:33:42 -0500 Subject: [SciPy-user] ANN: SciPy 2006 Conference In-Reply-To: References: <443C0D36.80608@enthought.com> Message-ID: <443D39F6.6040805@enthought.com> Perry Greenfield wrote: > Hi Travis, > > Who is helping organize this year's conference besides you. I'm > thinking I may want to help, particularly in involving the > astronomical community more. The next month isn't that good for me > but I'd like to be involved if that is of interest to you (i.e., with > the program and such). If you are interested, what is your schedule > of activities for planning the conference? > > Thanks, Perry Hey Perry (and anyone else interested in helping), I'm open to any help we can get to organize the conference (thus the cross-post to scipy-dev)--thanks for the willingness to pitch in. It's probably useful to give a breakdown of all the tasks (at least the ones I can think of right now): Abstracts Review/Speaker Recruitment & Acceptance ------------------------------------------------- This has traditionally been under the purview of the co-sponsors, represented by me, Eric Jones, Michael Aivazis and Michel Sanner. If anyone would like to participate, let us know and we'll organize a committee to do this with more rigor. I think there's a lot of potential in this area that, frankly, has gone untapped. Organize Auxiliary Sessions --------------------------- Several things to do here, particularly in accommodating the various sub-communities like astronomy, biology, etc.
This may be a good area for you to pitch in, Perry.: - Organize Sprint projects - Organize Tutorials - Initiate and moderate conversation about BOF interests and organize BOF meeting times. Marketing the Conference ------------------------ We have largely relied on the usual mailing lists to get the word out about the conference. We could definitely have a broader campaign here. I think we could reasonably accommodate 50% more attendees and still have a nice collaborative environment. Some things that come immediately to mind are: - A prominent announcement on the python.org home page & encouraging other PSF involvement (sponsorship of a speaker(s), a sprint, student attendees?) - Announce/articles on other sites/blogs. - Better follow-up reminders for registration. - Targeting particular folks in various Organizations/Universities/Labs to spread the word internally. - Pitch in on keeping the scipy site updated, graphics, etc. - Any other ideas? Sponsorship/Encouragement of Student Participation -------------------------------------------------- There has been some discussion about ways to encourage more students to attend the meeting. Because of our goal of keeping registration costs low, we don't have funds to physically bring students to the conference. We could possibly waive student registration fees, or recruit organizations to sponsor student attendance. We could organize Sprints/Tutorials/BOFs specifically for this sort of group. Of course there's also the issue of getting the word out and targeting interested students. Any ideas? Event Planning -------------- The wonderful folks at CalTech handle this superbly. All planning for meals, meeting rooms, A/V, parking, nametags, check-in, etc. are pretty much taken care of by them--an amazing thing, really. So, not much to do here (thankfully). Ideas? ------ We're interested in any ideas to make this a compelling, productive time--I'm sure I'm forgetting/missing some things. 
Now, to actually answer your question about the schedule for the Conference planning. Here are some dates: --Between now and June, we should try to build a schedule of tutorials and sprint projects. We should follow up with threads for this. I've created wiki stubs to hold the results: http://www.scipy.org/SciPy2006/TutorialSessions http://www.scipy.org/SciPy2006/CodingSprints -- June 30,2006: Arbitrary target date for preliminary Sprint & Tutorial Schedule --manage 'registration' of tutorial and sprint attendees-- July 7, 2006: Presentation Abstracts Due --week of reviewing abstracts and defining the schedule-- July 14, 2006: Accept/announce presentation schedule July 14, 2006: Early registration deadline --Figure out a BOF schedule sometime in August and announce at the Conference-- We're a bit flexible on this, so suggestions are welcome. Thanks again, Perry. Travis From travis at enthought.com Wed Apr 12 16:16:30 2006 From: travis at enthought.com (Travis N. Vaught) Date: Wed, 12 Apr 2006 15:16:30 -0500 Subject: [SciPy-user] [SciPy-dev] ANN: SciPy 2006 Conference In-Reply-To: <443D39F6.6040805@enthought.com> References: <443C0D36.80608@enthought.com> <443D39F6.6040805@enthought.com> Message-ID: <443D601E.3020500@enthought.com> Travis N. Vaught wrote: > > Abstracts Review/Speaker Recruitment & Acceptance > ------------------------------------------------- > This has traditionally been under the purview of the co-sponsors, > represented by me, Eric Jones, Michael Aivazis and Michel Sanner. One correction...I believe Travis Oliphant took on the bulk of this work last year. Apologies to "the hard-working Travis" for forgetting about that. 
Travis From perry at stsci.edu Wed Apr 12 18:00:02 2006 From: perry at stsci.edu (Perry Greenfield) Date: Wed, 12 Apr 2006 18:00:02 -0400 Subject: [SciPy-user] Regarding what "where" returns In-Reply-To: <443D601E.3020500@enthought.com> References: <443C0D36.80608@enthought.com> <443D39F6.6040805@enthought.com> <443D601E.3020500@enthought.com> Message-ID: We've noticed that in numpy the where() function behaves differently than in numarray. In numarray, where() (when used with a mask or condition array only) always returns a tuple of index arrays, even for the 1D case, whereas numpy returns an index array for the 1D case and a tuple for higher dimension cases. While the tuple is an annoyance for users when they want to manipulate the 1D case, the benefit is that one always knows that where is returning a tuple, and thus can write code accordingly. The problem with the current numpy behavior is that it requires special case testing to see which kind of return one has before manipulating, if you aren't certain of what the dimensionality of the argument is going to be. I'd like to raise the issue of whether or not numpy should change the behavior. We often deal with both 1D and 2D arrays, so it is an inconvenience for us. How many others deal with this and have an opinion on which way it should work? There is no difference in using the result of where() as an index in any case. Tuples are handled transparently, even for the 1-d case. For example (for numarray): >>> x = arange(10) >>> ind = where(x > 6) >>> print x[ind] [7 8 9] >>> print ind (array([7, 8, 9]),) So to access the actual index array, one must index the tuple, e.g.: ind[0][:2] Thoughts?
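[Editor's note: the tuple-indexing idiom Perry illustrates, written as a small runnable sketch. In present-day NumPy, where(condition) returns a tuple of index arrays for every dimensionality — the resolution this thread argues for — so indexing the tuple is the portable pattern.]

```python
import numpy as np

x = np.arange(10)
ind = np.where(x > 6)    # a tuple of index arrays, one per dimension

# The tuple is handled transparently when used as an index:
print(x[ind])            # [7 8 9]

# To get at the bare 1-D index array itself, index the tuple first:
print(ind[0][:2])        # [7 8]
```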
Perry From albert at csail.mit.edu Wed Apr 12 18:06:04 2006 From: albert at csail.mit.edu (Albert Huang) Date: Wed, 12 Apr 2006 18:06:04 -0400 Subject: [SciPy-user] interp2d raises AttributeError: interp2d instance has no attribute 'tck', scipy 0.4.8, numpy 0.9.6 Message-ID: Hi, The following program raises an AttributeError file: testinterp2d.py ==== from scipy.interpolate.interpolate import interp2d from numpy import * X, Y = mgrid[0:3, 0:3] Z = X * Y ip = interp2d( X, Y, Z ) ip( 0.5, 0.5 ) ==== # python testinterp2d.py Traceback (most recent call last): File "testinterp2d.py", line 9, in ? ip( 0.5, 0.5 ) File "/home/albert/local/lib/python2.4/site-packages/scipy/interpolate/interpolate.py", line 64, in __call__ z,ier=fitpack._fitpack._bispev(*(self.tck+[x,y,dx,dy])) AttributeError: interp2d instance has no attribute 'tck' ==== This is on scipy 0.4.8, numpy 0.9.6, Ubuntu 5.10, python 2.4 Am I not using interp2d correctly? Thanks, -albert -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant at ee.byu.edu Wed Apr 12 18:14:47 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Wed, 12 Apr 2006 16:14:47 -0600 Subject: [SciPy-user] Regarding what "where" returns In-Reply-To: References: <443C0D36.80608@enthought.com> <443D39F6.6040805@enthought.com> <443D601E.3020500@enthought.com> Message-ID: <443D7BD7.3060007@ee.byu.edu> Perry Greenfield wrote: >We've noticed that in numpy that the where() function behaves >differently than for numarray. In numarray, where() (when used with a >mask or condition array only) always returns a tuple of index arrays, >even for the 1D case whereas numpy returns an index array for the 1D >case and a tuple for higher dimension cases. While the tuple is a >annoyance for users when they want to manipulate the 1D case, the >benefit is that one always knows that where is returning a tuple, and >thus can write code accordingly. 
The problem with the current numpy >behavior is that it requires special case testing to see which kind >return one has before manipulating if you aren't certain of what the >dimensionality of the argument is going to be. > > I think this is reasonable. I don't think much thought went into the current behavior as it simply defaults to the behavior of the nonzero method (where just defaults to nonzero in the circumstances you are describing). The nonzero method has its behavior because of the nonzero function in Numeric (which only worked with 1-d and returned an array, not a tuple). Ideally, I think we should fix the nonzero method and where to have the same behavior (both return tuples --- that's actually what the docstring of nonzero says right now). The nonzero function can be special-cased to index the tuple for backward compatibility. -Travis From vel.accel at gmail.com Thu Apr 13 01:57:13 2006 From: vel.accel at gmail.com (dHering) Date: Wed, 12 Apr 2006 22:57:13 -0700 Subject: [SciPy-user] Any SPLUS to scipy ideas for lm and summary(lm)? In-Reply-To: References: <443ABE61.9020701@stanford.edu> Message-ID: <1e52e0880604122257k19ca8e3dkfd08e0f2474d5d7f@mail.gmail.com> Hi Webb, As Jonathan mentioned, the devs are currently in the process of overhauling the stats package. You should probably take a look at what they're working on and make some comments/contributions if you wish. Go to the SciPy-dev mail list: http://www.scipy.net/mailman/listinfo/scipy-dev Scipy-dev at scipy.net On 4/11/06, Webb Sprague wrote: > Hi All > > I can only offer my services as a tester for scipy-stats, but better > statistics in Scipy would be great. > > If we do go ahead with a new and improved stats package, I think a lot > of up front design work would be great (I can help some with that, > even if real statistical programming is beyond me). R/SPLUS seems to > have grown partly by accretion and some of it is pretty ugly, > especially wrt naming conventions.
However, a lot of it is really > great and would serve as a good model. I also think that a data.frame > type of data type would be great. If we could concentrate on that and > a really good (general) linear model framework we would be making > great progress, I think. > > It is funny that if you grep the scipy/stats directory for "residual" > you get nothing :)... > > Cheers > W > > On 4/10/06, Jonathan Taylor wrote: > > actually, i have some implementation of the model formula stuff in > > python, and some linear model stuff. i hope to contribute to scipy > > soon.... there was a brief discussion of this on scipy-dev over the past > > two weeks and it seems there is some interest in getting this stuff into > > scipy. > > > > -- jonathan > > > > Webb Sprague wrote: > > > > >Hi Scipy-ers, > > > > > >I would like to duplicate the following piece of SPLUS/R code in > > >Python-scipy, and would love somebody smarter than me to give me some > > >ideas. (If you don't know SPLUS/R, you may not want to bother with > > >this.) > > > > > >model.kt <- summary(lm(kt.diff ~ 1 )) > > >kt.drift <- model.kt$coefficients[1,1] # Coefficient > > >sec <- model.kt$coefficients[1,2] # Standard Error of the Coefficient > (SEC) > > >see <- model.kt$sigma # Standard error of the Equation (SEE) > > > > > >Getting a least-squares fit in scipy is not a problem, but getting all > > >that other nice stuff IS kind of a problem. I don't mind either > > >hacking scipy.stats, or writing my own function, but maybe someone has > > >some ideas for this, maybe it can be contributed, or ???. I also > > >realize that the SPLUS formula notation doesn't exist at all in > > >scipy-Python, so no need to point that out to me. > > > > > >Perhaps there should be a scipy.stats working group? 
It seems like > > >scipy.stats (not including the probability distributions and basic > > >summary functions, which are fine) is kind of a forgotten stepchild in > > >scipy, and probably needs a nurturing aunt or uncle or several.... > > > > > >Thx, sorry for such an open ended question. > > >W > > > > > >_______________________________________________ > > >SciPy-user mailing list > > >SciPy-user at scipy.net > > >http://www.scipy.net/mailman/listinfo/scipy-user > > > > > > > > > > -- > > ------------------------------------------------------------------------ > > I'm part of the Team in Training: please support our efforts for the > > Leukemia and Lymphoma Society! > > > > http://www.active.com/donate/tntsvmb/tntsvmbJTaylor > > > > GO TEAM !!! > > > > ------------------------------------------------------------------------ > > Jonathan Taylor Tel: 650.723.9230 > > Dept. of Statistics Fax: 650.725.8977 > > Sequoia Hall, 137 www-stat.stanford.edu/~jtaylo > > 390 Serra Mall > > Stanford, CA 94305 > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.net > > http://www.scipy.net/mailman/listinfo/scipy-user > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From jonathan.taylor at stanford.edu Thu Apr 13 02:57:20 2006 From: jonathan.taylor at stanford.edu (Jonathan Taylor) Date: Wed, 12 Apr 2006 23:57:20 -0700 Subject: [SciPy-user] Any SPLUS to scipy ideas for lm and summary(lm)? In-Reply-To: <1e52e0880604122257k19ca8e3dkfd08e0f2474d5d7f@mail.gmail.com> References: <443ABE61.9020701@stanford.edu> <1e52e0880604122257k19ca8e3dkfd08e0f2474d5d7f@mail.gmail.com> Message-ID: <443DF650.6050804@stanford.edu> glad to hear there is interest in this. sorry i haven't got around to making my linear model/ formula stuff ready yet. 
hope to do it by early next week -- jonathan dHering wrote: >Hi Webb, > >As Jonathan mentioned the devs are currently in the process of >over-hauling the stats package. You should probably take a look at >what they're working on and make some comments/contributions if you >wish. > >Go to the SciPy-dev mail list: >http://www.scipy.net/mailman/listinfo/scipy-dev > >Scipy-dev at scipy.net > >On 4/11/06, Webb Sprague wrote: > > >>Hi All >> >>I can only offer my services as a tester for scipy-stats, but better >>statistics in Scipy would be great. >> >>If we do go ahead with a new and improved stats package, I think a lot >>of up front design work would be great (I can help some with that, >>even if real statistical programming is beyond me). R/SPLUS seems to >>have grown partly by accretion and some of it is pretty ugly, >>especially wrt naming conventions. However, a lot of it is really >>great and would serve as a good model. I also think that a data.frame >>type of data type would be great. If we could concentrate on that and >>a really good (general) linear model framework we would be making >>great progress, I think. >> >>It is funny that if you grep the scipy/stats directory for "residual" >>you get nothing :)... >> >>Cheers >>W >> >>On 4/10/06, Jonathan Taylor wrote: >> >> >>>actually, i have some implementation of the model formula stuff in >>>python, and some linear model stuff. i hope to contribute to scipy >>>soon.... there was a brief discussion of this on scipy-dev over the past >>>two weeks and it seems there is some interest in getting this stuff into >>>scipy. >>> >>>-- jonathan >>> >>>Webb Sprague wrote: >>> >>> >>> >>>>Hi Scipy-ers, >>>> >>>>I would like to duplicate the following piece of SPLUS/R code in >>>>Python-scipy, and would love somebody smarter than me to give me some >>>>ideas. (If you don't know SPLUS/R, you may not want to bother with >>>>this.) 
>>>> >>>>model.kt <- summary(lm(kt.diff ~ 1 )) >>>>kt.drift <- model.kt$coefficients[1,1] # Coefficient >>>>sec <- model.kt$coefficients[1,2] # Standard Error of the Coefficient >>>> >>>> >>(SEC) >> >> >>>>see <- model.kt$sigma # Standard error of the Equation (SEE) >>>> >>>>Getting a least-squares fit in scipy is not a problem, but getting all >>>>that other nice stuff IS kind of a problem. I don't mind either >>>>hacking scipy.stats, or writing my own function, but maybe someone has >>>>some ideas for this, maybe it can be contributed, or ???. I also >>>>realize that the SPLUS formula notation doesn't exist at all in >>>>scipy-Python, so no need to point that out to me. >>>> >>>>Perhaps there should be a scipy.stats working group? It seems like >>>>scipy.stats (not including the probability distributions and basic >>>>summary functions, which are fine) is kind of a forgotten stepchild in >>>>scipy, and probably needs a nurturing aunt or uncle or several.... >>>> >>>>Thx, sorry for such an open ended question. >>>>W >>>> >>>>_______________________________________________ >>>>SciPy-user mailing list >>>>SciPy-user at scipy.net >>>>http://www.scipy.net/mailman/listinfo/scipy-user >>>> >>>> >>>> >>>> >>>-- >>>------------------------------------------------------------------------ >>>I'm part of the Team in Training: please support our efforts for the >>>Leukemia and Lymphoma Society! >>> >>>http://www.active.com/donate/tntsvmb/tntsvmbJTaylor >>> >>>GO TEAM !!! >>> >>>------------------------------------------------------------------------ >>>Jonathan Taylor Tel: 650.723.9230 >>>Dept. 
of Statistics Fax: 650.725.8977 >>>Sequoia Hall, 137 www-stat.stanford.edu/~jtaylo >>>390 Serra Mall >>>Stanford, CA 94305 >>> >>>_______________________________________________ >>>SciPy-user mailing list >>>SciPy-user at scipy.net >>>http://www.scipy.net/mailman/listinfo/scipy-user >>> >>> >>> >>_______________________________________________ >>SciPy-user mailing list >>SciPy-user at scipy.net >>http://www.scipy.net/mailman/listinfo/scipy-user >> >> >> > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > -- ------------------------------------------------------------------------ I'm part of the Team in Training: please support our efforts for the Leukemia and Lymphoma Society! http://www.active.com/donate/tntsvmb/tntsvmbJTaylor GO TEAM !!! ------------------------------------------------------------------------ Jonathan Taylor Tel: 650.723.9230 Dept. of Statistics Fax: 650.725.8977 Sequoia Hall, 137 www-stat.stanford.edu/~jtaylo 390 Serra Mall Stanford, CA 94305 From pau.gargallo at gmail.com Thu Apr 13 04:36:02 2006 From: pau.gargallo at gmail.com (Pau Gargallo) Date: Thu, 13 Apr 2006 10:36:02 +0200 Subject: [SciPy-user] interp2d raises AttributeError: interp2d instance has no attribute 'tck', scipy 0.4.8, numpy 0.9.6 In-Reply-To: References: Message-ID: <6ef8f3380604130136s314b33b9td5f841689bcfddf7@mail.gmail.com> as far as i know, interp2d is not implemented. There is some skeleton code, but is not finished. See http://projects.scipy.org/scipy/scipy/browser/trunk/Lib/interpolate/interpolate.py i have a pure python implementation of interpn that may be useful for you at http://www.scipy.org/PauGargallo/Interpolation it is probably very buggy, but seems to work. if you want more sophisticated, accurate or fast interpolation you will have to use the fitpack wrappings directly. Are there any plans for reviewing the interpolate package? 
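[Editor's note] Using the fitpack wrappings directly, as Pau suggests above, looks roughly like this with scipy.interpolate's bisplrep/bisplev (a sketch; the z = x*y surface and grid size are illustrative):

```python
import numpy as np
from scipy.interpolate import bisplrep, bisplev

x, y = np.mgrid[0:5, 0:5]
z = (x * y).astype(float)
# Fit an exact (s=0) bilinear spline surface through the grid samples.
tck = bisplrep(x.ravel(), y.ravel(), z.ravel(), kx=1, ky=1, s=0)
val = bisplev(0.5, 0.5, tck)  # bilinear interpolation of z = x*y
```

Since z = x*y is bilinear on each grid cell, the kx=ky=1 spline reproduces it exactly, so evaluating at (0.5, 0.5) gives 0.25.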
Something like the 'interpolation review week'? pau On 4/13/06, Albert Huang wrote: > Hi, > > The following program raises an AttributeError > > file: testinterp2d.py > ==== > from scipy.interpolate.interpolate import interp2d > from numpy import * > > X, Y = mgrid[0:3, 0:3] > Z = X * Y > ip = interp2d( X, Y, Z ) > ip( 0.5, 0.5 ) > > > ==== > # python testinterp2d.py > Traceback (most recent call last): > File "testinterp2d.py", line 9, in ? > ip( 0.5, 0.5 ) > File > "/home/albert/local/lib/python2.4/site-packages/scipy/interpolate/interpolate.py", > line 64, in __call__ > z,ier=fitpack._fitpack._bispev(*(self.tck+[x,y,dx,dy])) > AttributeError: interp2d instance has no attribute 'tck' > > ==== > This is on scipy 0.4.8, numpy 0.9.6, Ubuntu 5.10, python 2.4 > > Am I not using interp2d correctly? > > Thanks, > -albert > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > > > From as8ca at virginia.edu Thu Apr 13 04:44:25 2006 From: as8ca at virginia.edu (Alok Singhal) Date: Thu, 13 Apr 2006 04:44:25 -0400 Subject: [SciPy-user] [bug?] scipy.ndimage.rotate Message-ID: <20060413084425.GA16220@virginia.edu> Hi, I am using scipy 0.4.8 and numpy 0.9.6 on a Linux machine. I am trying to figure out scipy.ndimage.rotate() function. From info(rotate): ---- rotate(input, angle, axes=(-1, -2), reshape=True, output_type=None, output=None, order=3, mode='constant', cval=0.0, prefilter=True) Rotate an array. The array is rotated in the plane defined by the two axes given by the axes parameter using spline interpolation of the requested order. The angle is given in degrees. Points outside the boundaries of the input are filled according to the given mode. If reshape is true, the output shape is adapted so that the input array is contained completely in the output. 
The parameter prefilter determines if the input is pre- filtered before interpolation, if False it is assumed that the input is already filtered. ---- I am trying to rotate an array by an arbitrary angle, but the rotation does not seem to work well unless the angle is a multiple of 90 degrees[1]: >>> a = eye(4, dtype=Float64) >>> a array([[ 1., 0., 0., 0.], [ 0., 1., 0., 0.], [ 0., 0., 1., 0.], [ 0., 0., 0., 1.]]) >>> # Rotation by 90 degrees >>> b = rotate(a, 90) >>> b # looks good array([[ -2.37169225e-20, 1.23341550e-16, -2.22044605e-16, 1.00000000e+00], [ 1.23338162e-16, -2.12801781e-16, 1.00000000e+00, -2.21990395e-16], [ -1.70206189e-16, 1.00000000e+00, -3.51959130e-17, 7.89366944e-17], [ 1.00000000e+00, -3.99677578e-16, 1.08562519e-16, 3.69983991e-18]]) >>> # Rotation by 45 degrees >>> b = rotate(a, 45) >>> b array([[ 0. , 0. , 0. , 0.02360487, 0. , 0. ], [ 0. , 0. , 0.01925501, 0.01925501, 0.27416806, 0. ], [ 0. , 0.22915545, 0.44416261, 0.44416261, 0.22915545, 0.44507896], [ 0. , 0.22915545, 0.44416261, 0.44416261, 0.22915545, 0.44507896], [ 0. , 0. , 0.01925501, 0.01925501, 0.27416806, 0. ], [ 0. , 0. , 0. , 0.02360487, 0. , 0. ]]) >>> b = rotate(a, 45, reshape=False) >>> b array([[ 0. , 0.01925501, 0.01925501, 0.27416806], [ 0.22915545, 0.44416261, 0.44416261, 0.22915545], [ 0.22915545, 0.44416261, 0.44416261, 0.22915545], [ 0. , 0.01925501, 0.01925501, 0.27416806]]) I cannot understand the output in this case. Rotating the Identity matrix by 45 degrees (or any arbitrary angle) should result in a matrix that still has 1's "along a line" and zeros everywhere else (or something close to that). But as can be seen above, the resulting matrix does not have that property. Am I missing something, or is it a bug in ndimage.rotate()? 
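[Editor's note] This is expected behaviour rather than a bug: rotate() resamples the image onto a pixel grid using spline interpolation, so a diagonal of ones rotated by 45 degrees lands between pixel centers and its mass is spread over neighbouring pixels. Only multiples of 90 degrees map pixel centers exactly onto pixel centers. A sketch of the two cases:

```python
import numpy as np
from scipy.ndimage import rotate

a = np.eye(4)
b = rotate(a, 90)
# 90 degrees maps pixel centers onto pixel centers: exact up to rounding noise.
exact = np.allclose(b, np.rot90(a), atol=1e-7)
c = rotate(a, 45)
# At 45 degrees the ones fall between pixel centers, so the output (enlarged to
# 6x6 because reshape=True) contains interpolated values, not a clean diagonal.
```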
Thanks, Alok [1] Even the case for multiples of 90 degrees "does not work": >>> a = eye(5, dtype=Float64) >>> b = rotate(a, 270) >>> b array([[ 1.23480463e-17, 3.97766672e-18, 9.09628682e-17, 0.00000000e+00, 0.00000000e+00], [ 2.21922632e-18, 1.87363688e-17, -6.40458552e-17, 1.00000000e+00, 0.00000000e+00], [ 2.81757040e-17, 5.07155895e-16, 1.00000000e+00, -5.28853491e-16, 0.00000000e+00], [ 6.07356504e-17, 1.00000000e+00, -5.87237778e-16, 1.79130528e-16, 0.00000000e+00], [ 1.00000000e+00, -1.00977847e-15, 2.66090318e-16, -6.32022104e-17, 0.00000000e+00]]) >>> # b[0][4] is not 1 >>> b = rotate(a, 90) >>> b array([[ 1.23531285e-17, -2.15688470e-17, 1.23355102e-16, -2.96048179e-16, 1.00000000e+00], [ -2.46749168e-17, 6.01901612e-17, -2.06574395e-16, 1.00000000e+00, 6.07898606e-17], [ -7.70630575e-18, -1.12540186e-16, 1.00000000e+00, -2.06567619e-16, 2.81824802e-17], [ -4.17228101e-16, 1.00000000e+00, -8.78678098e-17, 1.87363688e-17, -9.66972813e-18], [ 1.00000000e+00, -6.22545499e-16, 1.86127020e-16, -1.98154888e-17, 1.13553237e-17]]) >>> # Looks OK -- Alok Singhal * * Graduate Student, dept. of Astronomy * * * University of Virginia http://www.astro.virginia.edu/~as8ca/ * * From sgarcia at olfac.univ-lyon1.fr Thu Apr 13 12:56:11 2006 From: sgarcia at olfac.univ-lyon1.fr (Samuel GARCIA) Date: Thu, 13 Apr 2006 18:56:11 +0200 Subject: [SciPy-user] weave.inline : resize an array in c++ Message-ID: <443E82AB.1090305@olfac.univ-lyon1.fr> Hi, how to resize a array in the c++ code with weave.inline() I try : from scipy import * from scipy import weave from scipy.weave import converters c = ones((3,6)) code = """ PyArray_Dims dims; dims.len = 2; dims.ptr = new int[2]; dims.ptr[0] = 4; dims.ptr[1] = 7; PyArray_Resize(&c,&dims); """ err = weave.inline(code, ['c'], type_converters=converters.blitz) print c.shape #I want (4,7) !!! but it does'nt work because c is not a PyArray_Object. Is there a easier (and working) way ? 
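[Editor's note] From the Python side, the resize Samuel wants is a one-liner (a sketch; note that np.resize returns a new array and repeats the existing data to fill the new shape, so it is not a cheap in-place operation):

```python
import numpy as np

c = np.ones((3, 6))
c = np.resize(c, (4, 7))  # new (4, 7) array; old data is repeated to fill it
print(c.shape)            # (4, 7)
```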
Thanks Samuel -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgarcia at olfac.univ-lyon1.fr Thu Apr 13 12:58:18 2006 From: sgarcia at olfac.univ-lyon1.fr (Samuel GARCIA) Date: Thu, 13 Apr 2006 18:58:18 +0200 Subject: [SciPy-user] weave.inline : resize an array in c++ In-Reply-To: <443E82AB.1090305@olfac.univ-lyon1.fr> References: <443E82AB.1090305@olfac.univ-lyon1.fr> Message-ID: <443E832A.7020006@olfac.univ-lyon1.fr> Other approch : Where can I found good and easy examples for using weave.inline() thanks samuel Samuel GARCIA a ?crit : > Hi, > how to resize a array in the c++ code with weave.inline() > > I try : > > from scipy import * > from scipy import weave > from scipy.weave import converters > > c = ones((3,6)) > > code = """ > PyArray_Dims dims; > dims.len = 2; > dims.ptr = new int[2]; > dims.ptr[0] = 4; > dims.ptr[1] = 7; > PyArray_Resize(&c,&dims); > """ > err = weave.inline(code, > ['c'], > type_converters=converters.blitz) > print c.shape #I want (4,7) !!! > > but it does'nt work because c is not a PyArray_Object. > > Is there a easier (and working) way ? > > Thanks > > Samuel > >------------------------------------------------------------------------ > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Thu Apr 13 15:10:10 2006 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 13 Apr 2006 12:10:10 -0700 Subject: [SciPy-user] 64-bit ndimage Message-ID: <1e2af89e0604131210l436ee528x6bba3bd0f614d84c@mail.gmail.com> Hi Travis, > >Sorry to keep bombarding the list with 64-bit troubles. > > > >In brief - scipy.ndimage.test() segfaults on my x86-64 P4 system, but > >passes on other x86-32 systems. > > > > > ndimage is not 64-bit ready at this point. 
I would disable it in the > setup.py script until it is (or remove it from the scipy directory). Just to ask - is 64-bit compatibility for ndimage in progress? I may have some time to help, but have little experience of scipy internals; do you have a feel for how much work this would be? Is there a good place to start looking for pointers? Thanks a lot, Matthew From sgarcia at olfac.univ-lyon1.fr Fri Apr 14 05:18:06 2006 From: sgarcia at olfac.univ-lyon1.fr (Samuel GARCIA) Date: Fri, 14 Apr 2006 11:18:06 +0200 Subject: [SciPy-user] weave.inline : resize an array in c++ In-Reply-To: <443E832A.7020006@olfac.univ-lyon1.fr> References: <443E82AB.1090305@olfac.univ-lyon1.fr> <443E832A.7020006@olfac.univ-lyon1.fr> Message-ID: <443F68CE.7000709@olfac.univ-lyon1.fr> I also try : from scipy import * from scipy import weave from scipy.weave import converters c = ones((3,6)) code = """ PyArray_Dims dims; dims.len = 2; dims.ptr = new int[2]; dims.ptr[0] = 4; dims.ptr[1] = 7; PyArray_Resize(c_array,&dims); """ err = weave.inline(code, ['c'], type_converters=converters.blitz) print c but fails again Any ideas ? Sam Samuel GARCIA a ?crit : > Other approch : Where can I found good and easy examples for using > weave.inline() > > thanks > > samuel > > Samuel GARCIA a ?crit : > >> Hi, >> how to resize a array in the c++ code with weave.inline() >> >> I try : >> >> from scipy import * >> from scipy import weave >> from scipy.weave import converters >> >> c = ones((3,6)) >> >> code = """ >> PyArray_Dims dims; >> dims.len = 2; >> dims.ptr = new int[2]; >> dims.ptr[0] = 4; >> dims.ptr[1] = 7; >> PyArray_Resize(&c,&dims); >> """ >> err = weave.inline(code, >> ['c'], >> type_converters=converters.blitz) >> print c.shape #I want (4,7) !!! >> >> but it does'nt work because c is not a PyArray_Object. >> >> Is there a easier (and working) way ? 
>> >> Thanks >> >> Samuel >> >>------------------------------------------------------------------------ >> >>_______________________________________________ >>SciPy-user mailing list >>SciPy-user at scipy.net >>http://www.scipy.net/mailman/listinfo/scipy-user >> >> >------------------------------------------------------------------------ > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Sun Apr 16 05:36:37 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 16 Apr 2006 04:36:37 -0500 Subject: [SciPy-user] Trac Wikis closed for anonymous edits until further notice Message-ID: <44421025.9060804@gmail.com> We've been hit badly by spammers, so I can only presume our Trac sites are now on the traded spam lists. I am going to turn off anonymous edits for now. Ticket creation will probably still be left open for now. Many thanks to David Cooke for quickly removing the spam. I am looking into ways to allow people to register themselves with the Trac sites so they can edit the Wikis and submit tickets without needing to be added by a project admin. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From nmb at unc.edu Fri Apr 14 16:23:23 2006 From: nmb at unc.edu (Neil Martinsen-Burrell) Date: Fri, 14 Apr 2006 16:23:23 -0400 Subject: [SciPy-user] Building SciPy with IBM XL Fortran compiler on Mac OS X Message-ID: <444004BB.1020905@unc.edu> I am having a problem build SciPy (svn r1854) with the IBM XL 8.1 fortran compiler on Mac OS X 10.4.6. Starting from a clean checkout, it builds a number of libraries successfully, but fails one the final one. 
Here's the log (elided, full results at http://braeburn.amath.unc.edu/~nburrell/scipy.build.log) [...] building 'scipy.fftpack._fftpack' extension compiling C sources gcc options: '-fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -DNDEBUG -g -O3 -Wall -Wstrict-prototypes' creating build/temp.darwin-8.5.0-Power_Macintosh-2.4/build creating build/temp.darwin-8.5.0-Power_Macintosh-2.4/build/src creating build/temp.darwin-8.5.0-Power_Macintosh-2.4/build/src/Lib creating build/temp.darwin-8.5.0-Power_Macintosh-2.4/build/src/Lib/fftpack creating build/temp.darwin-8.5.0-Power_Macintosh-2.4/Lib/fftpack/src compile options: '-Ibuild/src -I/Network/Servers/core.amath.unc.edu/Volumes/data/home/nburrell/usr/lib/python2.4/site-packages/numpy/core/include -I/Network/Servers/core.amath.unc.edu/Volumes/data/home/nburrell/usr/include/python2.4 -c' gcc: Lib/fftpack/src/drfft.c gcc: Lib/fftpack/src/zfftnd.c gcc: build/src/fortranobject.c In file included from /Network/Servers/core.amath.unc.edu/Volumes/data/home/nburrell/usr/include/python2.4/Python.h:55, from build/src/fortranobject.h:7, from build/src/fortranobject.c:2: /Network/Servers/core.amath.unc.edu/Volumes/data/home/nburrell/usr/include/python2.4/pyport.h:396: warning: 'struct winsize' declared inside parameter list /Network/Servers/core.amath.unc.edu/Volumes/data/home/nburrell/usr/include/python2.4/pyport.h:397: warning: 'struct winsize' declared inside parameter list gcc: build/src/Lib/fftpack/_fftpackmodule.c In file included from /Network/Servers/core.amath.unc.edu/Volumes/data/home/nburrell/usr/include/python2.4/Python.h:55, from build/src/Lib/fftpack/_fftpackmodule.c:16: /Network/Servers/core.amath.unc.edu/Volumes/data/home/nburrell/usr/include/python2.4/pyport.h:396: warning: 'struct winsize' declared inside parameter list /Network/Servers/core.amath.unc.edu/Volumes/data/home/nburrell/usr/include/python2.4/pyport.h:397: warning: 'struct winsize' declared inside parameter list gcc: 
Lib/fftpack/src/zfft.c gcc: Lib/fftpack/src/zrfft.c Traceback (most recent call last): File "setup.py", line 50, in ? setup_package() File "setup.py", line 42, in setup_package configuration=configuration ) File "/Network/Servers/core.amath.unc.edu/Volumes/data/home/nburrell/usr/lib/python2.4/site-packages/numpy/distutils/core.py", line 153, in setup return old_setup(**new_attr) File "/Network/Servers/core.amath.unc.edu/Volumes/data/home/nburrell/usr/lib/python2.4/distutils/core.py", line 149, in setup dist.run_commands() File "/Network/Servers/core.amath.unc.edu/Volumes/data/home/nburrell/usr/lib/python2.4/distutils/dist.py", line 946, in run_commands self.run_command(cmd) File "/Network/Servers/core.amath.unc.edu/Volumes/data/home/nburrell/usr/lib/python2.4/distutils/dist.py", line 966, in run_command cmd_obj.run() File "/Network/Servers/core.amath.unc.edu/Volumes/data/home/nburrell/usr/lib/python2.4/site-packages/numpy/distutils/command/install.py", line 11, in run r = old_install.run(self) File "/Network/Servers/core.amath.unc.edu/Volumes/data/home/nburrell/usr/lib/python2.4/distutils/command/install.py", line 506, in run self.run_command('build') File "/Network/Servers/core.amath.unc.edu/Volumes/data/home/nburrell/usr/lib/python2.4/distutils/cmd.py", line 333, in run_command self.distribution.run_command(command) File "/Network/Servers/core.amath.unc.edu/Volumes/data/home/nburrell/usr/lib/python2.4/distutils/dist.py", line 966, in run_command cmd_obj.run() File "/Network/Servers/core.amath.unc.edu/Volumes/data/home/nburrell/usr/lib/python2.4/distutils/command/build.py", line 112, in run self.run_command(cmd_name) File "/Network/Servers/core.amath.unc.edu/Volumes/data/home/nburrell/usr/lib/python2.4/distutils/cmd.py", line 333, in run_command self.distribution.run_command(command) File "/Network/Servers/core.amath.unc.edu/Volumes/data/home/nburrell/usr/lib/python2.4/distutils/dist.py", line 966, in run_command cmd_obj.run() File 
"/Network/Servers/core.amath.unc.edu/Volumes/data/home/nburrell/usr/lib/python2.4/site-packages/numpy/distutils/command/build_ext.py", line 109, in run self.build_extensions() File "/Network/Servers/core.amath.unc.edu/Volumes/data/home/nburrell/usr/lib/python2.4/distutils/command/build_ext.py", line 405, in build_extensions self.build_extension(ext) File "/Network/Servers/core.amath.unc.edu/Volumes/data/home/nburrell/usr/lib/python2.4/site-packages/numpy/distutils/command/build_ext.py", line 301, in build_extension link = self.fcompiler.link_shared_object AttributeError: 'NoneType' object has no attribute 'link_shared_object' It appears that self.fcompiler is None in build_ext.py because IBMFCompiler.get_version() returns None even when there is a valid IBM fortran compiler installed. Digging into numpy/distutils/fcompiler/ibm.py, in get_version(), the version discovery code contains: [...] if not l: from distutils.version import LooseVersion self.version = version = LooseVersion(l[0]) return version I believe that this should be "if l". Making that change shows the IBM compiler as available in python setup.py config_fc --help-fcompiler and scipy builds correctly. Can someone make this change to numpy/distutils/fcompiler/ibm.py? Thanks. Peace, -Neil -- Neil Martinsen-Burrell nmb at unc.edu From robert.kern at gmail.com Sun Apr 16 06:23:48 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 16 Apr 2006 05:23:48 -0500 Subject: [SciPy-user] Building SciPy with IBM XL Fortran compiler on Mac OS X In-Reply-To: <444004BB.1020905@unc.edu> References: <444004BB.1020905@unc.edu> Message-ID: <44421B34.6010408@gmail.com> Neil Martinsen-Burrell wrote: > It appears that self.fcompiler is None in build_ext.py because > IBMFCompiler.get_version() returns None even when there is a valid IBM > fortran compiler installed. Digging into > numpy/distutils/fcompiler/ibm.py, in get_version(), the version > discovery code contains: > > [...] 
> if not l: > from distutils.version import LooseVersion > self.version = version = LooseVersion(l[0]) > return version > > > I believe that this should be "if l". > > Making that change shows the IBM compiler as available in > > python setup.py config_fc --help-fcompiler > > and scipy builds correctly. Can someone make this change to > numpy/distutils/fcompiler/ibm.py? Thanks. Peace, Done. Thank you! -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From prabhu_r at users.sf.net Sat Apr 15 09:51:09 2006 From: prabhu_r at users.sf.net (Prabhu Ramachandran) Date: Sat, 15 Apr 2006 19:21:09 +0530 Subject: [SciPy-user] weave.inline : resize an array in c++ In-Reply-To: <443E82AB.1090305@olfac.univ-lyon1.fr> References: <443E82AB.1090305@olfac.univ-lyon1.fr> Message-ID: <17472.64077.796376.991043@prpc.aero.iitb.ac.in> >>>>> "Samuel" == Samuel GARCIA writes: Samuel> Hi, how to resize a array in the c++ code with Samuel> weave.inline() [...] Samuel> c = ones((3,6)) [...] Samuel> err = weave.inline(code, ['c'], type_converters=converters.blitz) Samuel> print c.shape #I want (4,7) !!! I am not sure why you'd want to do that. Can't you resize it from Python? In any case, if you really must do it, take a look at the c++ code generated by weave and look at it. You'll see that c_array is a PyArrayObject that you can use if you want it. However, I am not sure what will happen if you do resize the array and are using blitz. I suspect something bad may happen. Resizing the array might also relocate the entire block of memory for the array so accessing the older pointer will likely be disastrous. So, you should be careful doing this. 
cheers, prabhu From oneelkruns at hotmail.com Mon Apr 17 09:22:55 2006 From: oneelkruns at hotmail.com (Ron Kneusel) Date: Mon, 17 Apr 2006 08:22:55 -0500 Subject: [SciPy-user] No module named _umfpack Message-ID: Hi- I'm building numpy and scipy according to the directions on the web site (Steve Baum's doc). Everything works fine and compiles properly. When I load scipy: >>>import scipy I'm told that there is no module named '_umfpack'. Looking through the archives I see a recent post where someone said that setup.py for linsolve was screwy and was building __umfpack instead of _umfpack. What the post didn't mention is how to fix this error. What needs to be modified? Thanks! Ron From robert.kern at gmail.com Mon Apr 17 11:08:41 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 17 Apr 2006 10:08:41 -0500 Subject: [SciPy-user] No module named _umfpack In-Reply-To: References: Message-ID: <4443AF79.9060507@gmail.com> Ron Kneusel wrote: > Hi- > > I'm building numpy and scipy according to the directions on the web site > (Steve Baum's doc). Everything works fine and compiles properly. When I > load scipy: > >>>>import scipy > > I'm told that there is no module named '_umfpack'. > > Looking through the archives I see a recent post where someone said that > setup.py for linsolve was screwy and was building __umfpack instead of > _umfpack. > > What the post didn't mention is how to fix this error. What needs to be > modified? If you actually do want to build the UMFPACK bindings, then you can edit Lib/linsolve/setup.py to replace "__umfpack" with "_umfpack". There are two bugs open and assigned to Robert Cimrman. I hope he can find the time to attend to them soon. 
http://projects.scipy.org/scipy/scipy/ticket/190 http://projects.scipy.org/scipy/scipy/ticket/191 -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From sgarcia at olfac.univ-lyon1.fr Tue Apr 18 03:16:07 2006 From: sgarcia at olfac.univ-lyon1.fr (Samuel GARCIA) Date: Tue, 18 Apr 2006 09:16:07 +0200 Subject: [SciPy-user] weave.inline : resize an array in c++ In-Reply-To: <17472.64077.796376.991043@prpc.aero.iitb.ac.in> References: <443E82AB.1090305@olfac.univ-lyon1.fr> <17472.64077.796376.991043@prpc.aero.iitb.ac.in> Message-ID: <44449237.3060705@olfac.univ-lyon1.fr> No I can't it resize from python because it is to slow. There are a lot of iteration and for some of them I add a new value to a vector. My code was only an example, of course. Is it easier to add one element to a 1D array than a ND ? I am porting a old code from matlab and I was able to to do that in mex file with mxRealloc. thanks Sam Prabhu Ramachandran a ?crit : >>>>>>"Samuel" == Samuel GARCIA writes: >>>>>> >>>>>> > > Samuel> Hi, how to resize a array in the c++ code with > Samuel> weave.inline() >[...] > Samuel> c = ones((3,6)) >[...] > Samuel> err = weave.inline(code, ['c'], type_converters=converters.blitz) > Samuel> print c.shape #I want (4,7) !!! > >I am not sure why you'd want to do that. Can't you resize it from >Python? In any case, if you really must do it, take a look at the c++ >code generated by weave and look at it. You'll see that c_array is a >PyArrayObject that you can use if you want it. However, I am not sure >what will happen if you do resize the array and are using blitz. I >suspect something bad may happen. Resizing the array might also >relocate the entire block of memory for the array so accessing the >older pointer will likely be disastrous. So, you should be careful >doing this. 
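[Editor's note] For Samuel's actual use case — occasionally appending a value inside a long loop — the usual idiom avoids resizing the array at all: collect values in a Python list, whose appends are amortized O(1), and convert once at the end. A sketch with an illustrative condition:

```python
import numpy as np

values = []                  # grows cheaply; no per-step array reallocation
for i in range(1000):
    if i % 3 == 0:           # illustrative "sometimes add a value" condition
        values.append(i * 0.5)
result = np.asarray(values)  # single conversion once the loop is done
```

This sidesteps the mxRealloc-style pattern entirely: the array is only allocated once, at its final size.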
> >cheers, >prabhu > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cimrman3 at ntc.zcu.cz Tue Apr 18 04:03:32 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 18 Apr 2006 10:03:32 +0200 Subject: [SciPy-user] No module named _umfpack In-Reply-To: <4443AF79.9060507@gmail.com> References: <4443AF79.9060507@gmail.com> Message-ID: <44449D54.1090408@ntc.zcu.cz> Robert Kern wrote: > Ron Kneusel wrote: > >>Hi- >> >>I'm building numpy and scipy according to the directions on the web site >>(Steve Baum's doc). Everything works fine and compiles properly. When I >>load scipy: >> >> >>>>>import scipy >> >>I'm told that there is no module named '_umfpack'. >> >>Looking through the archives I see a recent post where someone said that >>setup.py for linsolve was screwy and was building __umfpack instead of >>_umfpack. >> >>What the post didn't mention is how to fix this error. What needs to be >>modified? > > > If you actually do want to build the UMFPACK bindings, then you can edit > Lib/linsolve/setup.py to replace "__umfpack" with "_umfpack". you mean Lib/linsolve/umfpack/setup.py? in Lib/linsolve/setup.py there is just config.add_subpackage('umfpack')... > There are two bugs open and assigned to Robert Cimrman. I hope he can find the > time to attend to them soon. Sorry, I missed them... > http://projects.scipy.org/scipy/scipy/ticket/190 Well, it should work as it is now - the top-level umfpack module is 'umfpack.py', the swig-generated modules are '_umfpack.py' and '__umfpack.so', so please do not change this. UMFPACK is optional, and it is not included in the scipy SVN, so you have to install it yourself from Tim Davis' homepage if you want (the version 4.4). 
Then just add [amd] and [umfpack] sections to your site.cfg, as it is described in numpy/site.cfg.example > http://projects.scipy.org/scipy/scipy/ticket/191 Done in the SVN. cheers, r. From robert.kern at gmail.com Tue Apr 18 08:05:53 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 18 Apr 2006 07:05:53 -0500 Subject: [SciPy-user] No module named _umfpack In-Reply-To: <44449D54.1090408@ntc.zcu.cz> References: <4443AF79.9060507@gmail.com> <44449D54.1090408@ntc.zcu.cz> Message-ID: <4444D621.7000801@gmail.com> Robert Cimrman wrote: > Robert Kern wrote: >>If you actually do want to build the UMFPACK bindings, then you can edit >>Lib/linsolve/setup.py to replace "__umfpack" with "_umfpack". > > you mean Lib/linsolve/umfpack/setup.py? in Lib/linsolve/setup.py there > is just config.add_subpackage('umfpack')... Umm, yes. That's what I meant. >>There are two bugs open and assigned to Robert Cimrman. I hope he can find the >>time to attend to them soon. > > Sorry, I missed them... I'm working on setting up email notification. >>http://projects.scipy.org/scipy/scipy/ticket/190 > > Well, it should work as it is now - the top-level umfpack module is > 'umfpack.py', the swig-generated modules are '_umfpack.py' and > '__umfpack.so', so please do not change this. Yes, you're right. I realize now that I wasn't building umfpack at all because I don't have UMFPACK installed. And I probably won't until there is a source distribution of it that doesn't require me to hack makefiles that attempt to build MEX files. > UMFPACK is optional, and it is not included in the scipy SVN, so you > have to install it yourself from Tim Davis' homepage if you want (the > version 4.4). Then just add [amd] and [umfpack] sections to your > site.cfg, as it is described in numpy/site.cfg.example > > > http://projects.scipy.org/scipy/scipy/ticket/191 > > Done in the SVN. Thank you! 
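[Editor's note] The site.cfg sections Robert Cimrman refers to look roughly like this, following numpy/site.cfg.example (a sketch; the paths are illustrative assumptions for a typical /usr/local install of UMFPACK and AMD):

```ini
[amd]
library_dirs = /usr/local/lib
include_dirs = /usr/local/include
amd_libs = amd

[umfpack]
library_dirs = /usr/local/lib
include_dirs = /usr/local/include
umfpack_libs = umfpack
```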
-- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From dd55 at cornell.edu Tue Apr 18 08:25:40 2006 From: dd55 at cornell.edu (Darren Dale) Date: Tue, 18 Apr 2006 08:25:40 -0400 Subject: [SciPy-user] No module named _umfpack In-Reply-To: <44449D54.1090408@ntc.zcu.cz> References: <4443AF79.9060507@gmail.com> <44449D54.1090408@ntc.zcu.cz> Message-ID: <200604180825.40833.dd55@cornell.edu> On Tuesday 18 April 2006 04:03, Robert Cimrman wrote: > UMFPACK is optional, and it is not included in the scipy SVN, so you > have to install it yourself from Tim Davis' homepage if you want (the > version 4.4). Then just add [amd] and [umfpack] sections to your > site.cfg, as it is described in numpy/site.cfg.example For any gentoo users out there, a umfpack ebuild is available here: http://gentooscience.org/browser/overlay/sci-libs/umfpack. I was able to build scipy with umfpack support on an 64bit Athlon machine using this ebuild, which provides version 4.6 with support to build shared libraries. No changes to site.cfg were necessary. scipy.test() and scipy.linsolve.umfpack.test() were both successful. Darren From cimrman3 at ntc.zcu.cz Tue Apr 18 08:35:57 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 18 Apr 2006 14:35:57 +0200 Subject: [SciPy-user] No module named _umfpack In-Reply-To: <200604180825.40833.dd55@cornell.edu> References: <4443AF79.9060507@gmail.com> <44449D54.1090408@ntc.zcu.cz> <200604180825.40833.dd55@cornell.edu> Message-ID: <4444DD2D.6080108@ntc.zcu.cz> Darren Dale wrote: > On Tuesday 18 April 2006 04:03, Robert Cimrman wrote: > >>UMFPACK is optional, and it is not included in the scipy SVN, so you >>have to install it yourself from Tim Davis' homepage if you want (the >>version 4.4). 
Then just add [amd] and [umfpack] sections to your >>site.cfg, as it is described in numpy/site.cfg.example > > > For any gentoo users out there, a umfpack ebuild is available here: > http://gentooscience.org/browser/overlay/sci-libs/umfpack. > > I was able to build scipy with umfpack support on an 64bit Athlon machine > using this ebuild, which provides version 4.6 with support to build shared > libraries. No changes to site.cfg were necessary. > > scipy.test() and scipy.linsolve.umfpack.test() were both successful. great news! I will try it immediately :) r. From dd55 at cornell.edu Tue Apr 18 08:43:54 2006 From: dd55 at cornell.edu (Darren Dale) Date: Tue, 18 Apr 2006 08:43:54 -0400 Subject: [SciPy-user] No module named _umfpack In-Reply-To: <4444DD2D.6080108@ntc.zcu.cz> References: <200604180825.40833.dd55@cornell.edu> <4444DD2D.6080108@ntc.zcu.cz> Message-ID: <200604180843.54807.dd55@cornell.edu> On Tuesday 18 April 2006 08:35, Robert Cimrman wrote: > Darren Dale wrote: > > On Tuesday 18 April 2006 04:03, Robert Cimrman wrote: > >>UMFPACK is optional, and it is not included in the scipy SVN, so you > >>have to install it yourself from Tim Davis' homepage if you want (the > >>version 4.4). Then just add [amd] and [umfpack] sections to your > >>site.cfg, as it is described in numpy/site.cfg.example > > > > For any gentoo users out there, a umfpack ebuild is available here: > > http://gentooscience.org/browser/overlay/sci-libs/umfpack. > > > > I was able to build scipy with umfpack support on an 64bit Athlon machine > > using this ebuild, which provides version 4.6 with support to build > > shared libraries. No changes to site.cfg were necessary. > > > > scipy.test() and scipy.linsolve.umfpack.test() were both successful. > > great news! I will try it immediately :) Oh, let me make clear, the ebuild provides support to build shared libraries, not the umfpack-4.6 package itself. 
More information here: http://bugs.gentoo.org/show_bug.cgi?id=40255 Darren From oneelkruns at hotmail.com Tue Apr 18 10:13:53 2006 From: oneelkruns at hotmail.com (Ron Kneusel) Date: Tue, 18 Apr 2006 09:13:53 -0500 Subject: [SciPy-user] No gplt or xplt? In-Reply-To: <200604180825.40833.dd55@cornell.edu> Message-ID: Ok, I finally got umfpack built and installed and I get no errors now when I do: >>>import scipy Thanks to all those who helped out! However, the gplt and xplt packages do not appear to be installed. What additional modules are required to get them working? I'm using Fedora Core 5. Ron From nwagner at iam.uni-stuttgart.de Tue Apr 18 10:20:58 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 18 Apr 2006 16:20:58 +0200 Subject: [SciPy-user] No gplt or xplt? In-Reply-To: References: Message-ID: <4444F5CA.6090305@iam.uni-stuttgart.de> Ron Kneusel wrote: > Ok, I finally got umfpack built and installed and I get no errors now when I > do: > > >>>> import scipy >>>> > > Thanks to all those who helped out! > > However, the gplt and xplt packages do not appear to be installed. What > additional modules are required to get them working? I'm using Fedora Core > 5. > > Ron > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > # Gist-based plotting library for X11 #config.add_subpackage('xplt') It is disabled. See the setup.py file in scipy/Lib/sandbox Nils From robert.kern at gmail.com Tue Apr 18 10:29:28 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 18 Apr 2006 09:29:28 -0500 Subject: [SciPy-user] No gplt or xplt? In-Reply-To: References: Message-ID: <4444F7C8.4050806@gmail.com> Ron Kneusel wrote: > Ok, I finally got umfpack built and installed and I get no errors now when I > do: > >>>>import scipy > > Thanks to all those who helped out! > > However, the gplt and xplt packages do not appear to be installed. 
They have been removed from the main package. If you really want to use them, they are in the sandbox. You can edit Lib/sandbox/setup.py to enable building them. They can be then accessed as scipy.sandbox.gplt, scipy.sandbox.xplt. They are not supported anymore. gplt, at least, hasn't been ported to numpy at all. The general recommendation is to use matplotlib instead. > What > additional modules are required to get them working? I'm using Fedora Core > 5. Each should work alone. Except that gnuplot itself must be installed for gplt. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From cimrman3 at ntc.zcu.cz Tue Apr 18 11:24:58 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 18 Apr 2006 17:24:58 +0200 Subject: [SciPy-user] No module named _umfpack In-Reply-To: <4444D621.7000801@gmail.com> References: <4443AF79.9060507@gmail.com> <44449D54.1090408@ntc.zcu.cz> <4444D621.7000801@gmail.com> Message-ID: <444504CA.3020600@ntc.zcu.cz> > Robert Cimrman wrote: >> > http://projects.scipy.org/scipy/scipy/ticket/191 >> >>Done in the SVN. whoops, it was not fixed. I was too quick and moreover did not counted with the numpy import machinery. I hope this time I got it right... r. 
From cimrman3 at ntc.zcu.cz Tue Apr 18 11:41:01 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 18 Apr 2006 17:41:01 +0200 Subject: [SciPy-user] No module named _umfpack In-Reply-To: <200604180843.54807.dd55@cornell.edu> References: <200604180825.40833.dd55@cornell.edu> <4444DD2D.6080108@ntc.zcu.cz> <200604180843.54807.dd55@cornell.edu> Message-ID: <4445088D.60205@ntc.zcu.cz> Darren Dale wrote: > On Tuesday 18 April 2006 08:35, Robert Cimrman wrote: > >>Darren Dale wrote: >> >>>For any gentoo users out there, a umfpack ebuild is available here: >>>http://gentooscience.org/browser/overlay/sci-libs/umfpack. >>> >>>I was able to build scipy with umfpack support on an 64bit Athlon machine >>>using this ebuild, which provides version 4.6 with support to build >>>shared libraries. No changes to site.cfg were necessary. >>> >>>scipy.test() and scipy.linsolve.umfpack.test() were both successful. >> >>great news! I will try it immediately :) > > > Oh, let me make clear, the ebuild provides support to build shared libraries, > not the umfpack-4.6 package itself. More information here: > http://bugs.gentoo.org/show_bug.cgi?id=40255 well, it works well for me, moreover I got about 30% speed-up w.r.t the manually built version 4.4 for a simple problem I had at hand. Conclusion: the wrappers seem to work also with the version 4.6; still there might be some extra functionality in 4.6 worth exposing, I will check it out when time permits. r. From oneelkruns at hotmail.com Tue Apr 18 14:00:51 2006 From: oneelkruns at hotmail.com (Ron Kneusel) Date: Tue, 18 Apr 2006 13:00:51 -0500 Subject: [SciPy-user] No gplt or xplt? In-Reply-To: <4444F7C8.4050806@gmail.com> Message-ID: Robert Kern wrote: >The general recommendation is to use matplotlib instead. Ok. I already installed matplotlib, very nice. I was just trying to run a few samples I found online. Ron From cookedm at physics.mcmaster.ca Tue Apr 18 16:09:24 2006 From: cookedm at physics.mcmaster.ca (David M. 
Cooke) Date: Tue, 18 Apr 2006 16:09:24 -0400 Subject: [SciPy-user] [Numpy-discussion] Trac Wikis closed for anonymous edits until further notice In-Reply-To: <44421025.9060804@gmail.com> (Robert Kern's message of "Sun, 16 Apr 2006 04:36:37 -0500") References: <44421025.9060804@gmail.com> Message-ID: Robert Kern writes: > We've been hit badly by spammers, so I can only presume our Trac sites are now > on the traded spam lists. I am going to turn off anonymous edits for now. Ticket > creation will probably still be left open for now. Another thing that's concerned me is closing of tickets by anonymous; can we turn that off? It disturbs me when I'm browsing the RSS feed and I see that. If a user who's not a developer thinks it could be closed, they could post a comment saying that, and a developer could close it. > Many thanks to David Cooke for quickly removing the spam. The RSS feeds are great for that. Although having a way to quickly revert a change would have made it easier :-) > I am looking into ways to allow people to register themselves with the Trac > sites so they can edit the Wikis and submit tickets without needing to be added > by a project admin. that'd be good. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From morovia at rediffmail.com Wed Apr 19 11:08:51 2006 From: morovia at rediffmail.com (morovia) Date: 19 Apr 2006 15:08:51 -0000 Subject: [SciPy-user] wofz and optimized Humlicek algorithm. Message-ID: <20060419150851.29323.qmail@webmail50.rediffmail.com> Hi, I would like to know if anyone had tried on wofz and the optimized Humlicek algorithm for the voigt algorithm developed by F. Schreier (Courtesy : http://www.op.dlr.de/ne-oe/ir/voigt.html). I dont know about the license issue of this code but nice to use the optimized one to save computation time. 
I tried to f2py the above code but since I am using enthought edition of scipy it demands visual studio. Thanks for your comments and info in advance, Best regards, Morovia. -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Wed Apr 19 11:32:57 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 19 Apr 2006 10:32:57 -0500 Subject: [SciPy-user] wofz and optimized Humlicek algorithm. In-Reply-To: <20060419150851.29323.qmail@webmail50.rediffmail.com> References: <20060419150851.29323.qmail@webmail50.rediffmail.com> Message-ID: <44465829.7080708@gmail.com> morovia wrote: > > Hi, > > I would like to know if anyone had tried on wofz and the optimized > Humlicek algorithm for the voigt algorithm developed by F. > Schreier (Courtesy : http://www.op.dlr.de/ne-oe/ir/voigt.html). I dont > know about the license issue of this code but nice to use the optimized > one to save computation time. > > I tried to f2py the above code but since I am using enthought > edition of scipy it demands visual studio. Use --compiler=mingw32 on the appropriate commands. If you are building from a setup.py, you would do $ python setup.py build_src build_clib --compiler=mingw32 build_ext --compiler=mingw32 I forget what you would do when using the f2py2e script, though. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ryanlists at gmail.com Wed Apr 19 11:39:33 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 19 Apr 2006 11:39:33 -0400 Subject: [SciPy-user] accessing a class's code Message-ID: I don't have the best of luck with comp.lang.python and I get great results from this list. So, I am mildly sorry about the general python question and the double post. 
I have a set of Python classes that represent elements in a structural model for vibration modeling (sort of like FEA). Some of the parameters of the model are initially unknown and I do some system identification to determine the parameters. After I determine these unknown parameters, I would like to substitute them back into the model and save the model as a new python class. To do this, I think each element needs to be able to read in the code for its __init__ method, make the substitutions and then write the new __init__ method to a file defining a new class with the now known parameters. Is there a way for a Python instance to access its own code (especially the __init__ method)? And if there is, is there a clean way to write the modified code back to a file? I assume that if I can get the code as a list of strings, I can output it to a file easily enough. I am tempted to just read in the code and write a little Python script to parse it to get me the __init__ methods, but that seems like reinventing the wheel. I don't just want to read and write a vector of coefficients because after I have identified the unknown structural parameters, I will use the model for control design and there will be a new set of unknown control parameters. Thanks, Ryan From pebarrett at gmail.com Wed Apr 19 11:46:14 2006 From: pebarrett at gmail.com (Paul Barrett) Date: Wed, 19 Apr 2006 11:46:14 -0400 Subject: [SciPy-user] wofz and optimized Humlicek algorithm. In-Reply-To: <20060419150851.29323.qmail@webmail50.rediffmail.com> References: <20060419150851.29323.qmail@webmail50.rediffmail.com> Message-ID: <40e64fa20604190846s35a9f136p3740c6b29ff726df@mail.gmail.com> Morovia, I've written a C version of a fast Humlicek algorithm, which I use for spectral fitting. I'm willing to part with it if someone would like to include it in SciPy. 
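[Editor's note: for anyone in this thread who needs the Voigt profile itself rather than maximum speed, it can be computed directly from scipy.special.wofz via the standard Faddeeva-function identity. A sketch; the function and parameter names are the editor's, not from any poster's code.]

```python
import numpy as np
from scipy.special import wofz

def voigt(x, sigma, gamma):
    """Area-normalized Voigt profile: a Gaussian of standard deviation
    `sigma` convolved with a Lorentzian of half-width `gamma`, evaluated
    through the Faddeeva function w(z) = wofz(z)."""
    z = (np.asarray(x, dtype=float) + 1j * gamma) / (sigma * np.sqrt(2.0))
    return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))
```

Because wofz is vectorized, evaluating an entire wavelength grid in one call is often fast enough in practice, which may be why several posters switched to it.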
-- Paul On 19 Apr 2006 15:08:51 -0000, morovia wrote: > > > Hi, > > I would like to know if anyone had tried on wofz and the optimized > Humlicek algorithm for the voigt algorithm developed by F. > Schreier (Courtesy : http://www.op.dlr.de/ne-oe/ir/voigt.html). I dont > know about the license issue of this code but nice to use the optimized > one to save computation time. > > I tried to f2py the above code but since I am using enthought > edition of scipy it demands visual studio. > > Thanks for your comments and info in advance, > > Best regards, > Morovia. > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Doug.LATORNELL at mdsinc.com Wed Apr 19 12:51:49 2006 From: Doug.LATORNELL at mdsinc.com (LATORNELL, Doug) Date: Wed, 19 Apr 2006 09:51:49 -0700 Subject: [SciPy-user] accessing a class's code Message-ID: <34090E25C2327C4AA5D276799005DDE001010A40@SMDMX0501.mds.mdsinc.com> I think the inspect module (http://docs.python.org/lib/module-inspect.html) in the standard library might enable you to do what you want. I've never used inspect (just read its docs when they caught my eye a couple of weeks ago), nor done what you describe though, so I can't promise that I'm not sending you on a wild goose chase :-) Doug > -----Original Message----- > From: scipy-user-bounces at scipy.net > [mailto:scipy-user-bounces at scipy.net] On Behalf Of Ryan Krauss > Sent: April 19, 2006 08:40 > To: SciPy Users List > Subject: [SciPy-user] accessing a class's code > > I don't have the best of luck with comp.lang.python and I get > great results from this list. So, I am mildly sorry about > the general python question and the double post. > > I have a set of Python classes that represent elements in a > structural model for vibration modeling (sort of like FEA). 
> Some of the parameters of the model are initially unknown and > I do some system identification to determine the parameters. > After I determine these unknown parameters, I would like to > substitute them back into the model and save the model as a > new python class. To do this, I think each element needs to > be able to read in the code for its __init__ method, make the > substitutions and then write the new __init__ method to a > file defining a new class with the now known parameters. > > Is there a way for a Python instance to access its own code > (especially the __init__ method)? And if there is, is there a clean > way to write the modified code back to a file? I assume that if I > can get the code as a list of strings, I can output it to a > file easily enough. > > I am tempted to just read in the code and write a little > Python script to parse it to get me the __init__ methods, but > that seems like reinventing the wheel. > > I don't just want to read and write a vector of coefficients > because after I have identified the unknown structural > parameters, I will use the model for control design and there > will be a new set of unknown control parameters. > > Thanks, > > Ryan > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > This email and any files transmitted with it may contain privileged or confidential information and may be read or used only by the intended recipient. If you are not the intended recipient of the email or any of its attachments, please be advised that you have received this email in error and any use, dissemination, distribution, forwarding, printing or copying of this email or any attached files is strictly prohibited. If you have received this email in error, please immediately purge it and all attachments and notify the sender by reply email or contact the sender at the number listed. 
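[Editor's note: Doug's inspect suggestion can be sketched in a few lines. The class and attribute names below are made up for illustration, not taken from Ryan's model code.]

```python
import inspect

class Element:
    """Stand-in for one structural-model element with unknown parameters."""
    def __init__(self, stiffness=None, damping=None):
        self.stiffness = stiffness
        self.damping = damping

# Retrieve the source of __init__ as a string.
src = inspect.getsource(Element.__init__)

# A naive textual substitution of an identified parameter; real code
# would parse more carefully before writing the result to a new module.
new_src = src.replace("stiffness=None", "stiffness=42.0")
```

One caveat: inspect.getsource only works when the class was defined in a file on disk, so it fails for classes typed at the interactive prompt.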
From ryanlists at gmail.com Wed Apr 19 14:04:42 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Wed, 19 Apr 2006 14:04:42 -0400 Subject: [SciPy-user] accessing a class's code In-Reply-To: <34090E25C2327C4AA5D276799005DDE001010A40@SMDMX0501.mds.mdsinc.com> References: <34090E25C2327C4AA5D276799005DDE001010A40@SMDMX0501.mds.mdsinc.com> Message-ID: Thanks Doug. I think that is a great start. Not to justify myself, but I have gotten like 10 responses on comp.lang.python mainly asking why I would want to do such a thing or assuming I am an idiot. I got one response here and it was very helpful. Ryan On 4/19/06, LATORNELL, Doug wrote: > I think the inspect module > (http://docs.python.org/lib/module-inspect.html) in the standard library > might enable you to do what you want. I've never used inspect (just > read its docs when they caught my eye a couple of weeks ago), nor done > what you describe though, so I can't promise that I'm not sending you on > a wild goose chase :-) > > Doug > > > > -----Original Message----- > > From: scipy-user-bounces at scipy.net > > [mailto:scipy-user-bounces at scipy.net] On Behalf Of Ryan Krauss > > Sent: April 19, 2006 08:40 > > To: SciPy Users List > > Subject: [SciPy-user] accessing a class's code > > > > I don't have the best of luck with comp.lang.python and I get > > great results from this list. So, I am mildly sorry about > > the general python question and the double post. > > > > I have a set of Python classes that represent elements in a > > structural model for vibration modeling (sort of like FEA). > > Some of the parameters of the model are initially unknown and > > I do some system identification to determine the parameters. > > After I determine these unknown parameters, I would like to > > substitute them back into the model and save the model as a > > new python class. 
To do this, I think each element needs to > > be able to read in the code for its __init__ method, make the > > substitutions and then write the new __init__ method to a > > file defining a new class with the now known parameters. > > > > Is there a way for a Python instance to access its own code > > (especially the __init__ method)? And if there is, is there a clean > > way to write the modified code back to a file? I assume that if I > > can get the code as a list of strings, I can output it to a > > file easily enough. > > > > I am tempted to just read in the code and write a little > > Python script to parse it to get me the __init__ methods, but > > that seems like reinventing the wheel. > > > > I don't just want to read and write a vector of coefficients > > because after I have identified the unknown structural > > parameters, I will use the model for control design and there > > will be a new set of unknown control parameters. > > > > Thanks, > > > > Ryan > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.net > > http://www.scipy.net/mailman/listinfo/scipy-user > > > > > This email and any files transmitted with it may contain privileged or confidential information and may be read or used only by the intended recipient. If you are not the intended recipient of the email or any of its attachments, please be advised that you have received this email in error and any use, dissemination, distribution, forwarding, printing or copying of this email or any attached files is strictly prohibited. If you have received this email in error, please immediately purge it and all attachments and notify the sender by reply email or contact the sender at the number listed. 
> > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From hetland at tamu.edu Wed Apr 19 14:29:56 2006 From: hetland at tamu.edu (Robert Hetland) Date: Wed, 19 Apr 2006 13:29:56 -0500 Subject: [SciPy-user] Errors compiling matplotlib on intel Mac (with fix) Message-ID: I have had errors complaining about uint and ushort being defined in both types.h and numpy/arrayobject.h. I fixed this problem by commenting out the lines in arrayobject where these things were redefined -- matplotlib compiles fine then. This hack is unnecessary on the PPC Mac (as well as other platforms, I imagine). Will this hack give me trouble with other packages? Do any of you have suggestions for a more permanent fix? -Rob ----- Rob Hetland, Assistant Professor Dept of Oceanography, Texas A&M University p: 979-458-0096, f: 979-845-6331 e: hetland at tamu.edu, w: http://pong.tamu.edu From robert.kern at gmail.com Wed Apr 19 16:13:53 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 19 Apr 2006 15:13:53 -0500 Subject: [SciPy-user] Errors compiling matplotlib on intel Mac (with fix) In-Reply-To: References: Message-ID: <44469A01.3090708@gmail.com> Robert Hetland wrote: > I have had errors complaining about uint and ushort being defined in > both types.h and numpy/arrayobject.h. I fixed this problem by > commenting out the lines in arrayobject where these things were > redefined -- matplotlib compiles fine then. This hack is unnecessary > on the PPC Mac (as well as other platforms, I imagine). > > Will this hack give me trouble with other packages? Do any of you > have suggestions for a more permanent fix? Define PY_ARRAY_TYPES_PREFIX to be something, I think. Add defines=[('PY_ARRAY_TYPES_PREFIX', 'numpy_or_anything_else_thats_unique')], to any Extension() that is #including numpy/arrayobject.h . 
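[Editor's note: in a plain distutils/setuptools setup.py, the defines= entry Robert describes corresponds to a define_macros item on the Extension. A sketch; the module name, source file, and prefix string below are placeholders.]

```python
from setuptools import Extension

# Defining PY_ARRAY_TYPES_PREFIX makes numpy/arrayobject.h emit its
# typedefs (uint, ushort, ...) with this prefix, avoiding clashes with
# the same names declared in the system's types.h.
ext = Extension(
    "_backend_agg",                       # placeholder module name
    sources=["src/_backend_agg.cpp"],     # placeholder source file
    define_macros=[("PY_ARRAY_TYPES_PREFIX", "mpl_")],
)
```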
-- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From Norman.Shelley at freescale.com Wed Apr 19 16:47:50 2006 From: Norman.Shelley at freescale.com (Norman Shelley) Date: Wed, 19 Apr 2006 13:47:50 -0700 Subject: [SciPy-user] ?Inclusion of stats.py (GPL) Message-ID: <4446A1F6.3070005@freescale.com> Should scipy use stats.py? The current version at least is GPL and "infects" other software on installation or importation. I got rid of it after seeing this header. # Copyright (c) 1999-2002 Gary Strangman; All Rights Reserved. # # This software is distributable under the terms of the GNU # General Public License (GPL) v2, the text of which can be found at # http://www.gnu.org/copyleft/gpl.html. Installing, importing or otherwise # using this module constitutes acceptance of the terms of this License. # http://www.nmr.mgh.harvard.edu/Neural_Systems_Group/gary/python/stats.py From robert.kern at gmail.com Wed Apr 19 16:59:10 2006 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 19 Apr 2006 15:59:10 -0500 Subject: [SciPy-user] ?Inclusion of stats.py (GPL) In-Reply-To: <4446A1F6.3070005@freescale.com> References: <4446A1F6.3070005@freescale.com> Message-ID: <4446A49E.8030304@gmail.com> Norman Shelley wrote: > Should scipy use stats.py? > > The current version at least is GPL > and "infects" other software on installation or importation. The version that we derived stats.py from had a more permissive license. See http://projects.scipy.org/scipy/scipy/browser/trunk/Lib/stats/stats.py -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From jonathan.taylor at stanford.edu Wed Apr 19 19:16:17 2006 From: jonathan.taylor at stanford.edu (Jonathan Taylor) Date: Wed, 19 Apr 2006 16:16:17 -0700 Subject: [SciPy-user] some statistical models / formulas Message-ID: <4446C4C1.2060003@stanford.edu> i have made a numpy/scipy package for some linear statistical models http://www-stat.stanford.edu/~jtaylo/scipy_stats_models-0.01a.tar.gz i was hoping that it might someday get into scipy.stats, maybe as scipy.stats.models? anyways, i am sure the code needs work and more docs with examples, but right now there is basic functionality for the following (the tests give some examples): - model formulae as in R (to some extent) - OLS (ordinary least square regression) - WLS (weighted least square regression) - AR1 regression (non-diagonal covariance -- right now just AR1 but easy to extend to ARp) - generalized linear models (all of R's links and variance functions but extensible as well -- not everything has been rigorously tested but logistic agrees with R, for instance) - robust linear models using M estimators (with a number of standard default robust norms as in R's rlm) - robust scale estimates (MAD, Huber's proposal 2). it would be nice to add a few things over time, too, like: - mixed effects models - generalized additive models (gam), generalized estimating equations (gee).... - nonlinear regression (i have some quasi working code for this, too, but it is not yet included). + anything else people want to add. -- jonathan -- ------------------------------------------------------------------------ I'm part of the Team in Training: please support our efforts for the Leukemia and Lymphoma Society! http://www.active.com/donate/tntsvmb/tntsvmbJTaylor GO TEAM !!! ------------------------------------------------------------------------ Jonathan Taylor Tel: 650.723.9230 Dept. 
of Statistics Fax: 650.725.8977 Sequoia Hall, 137 www-stat.stanford.edu/~jtaylo 390 Serra Mall Stanford, CA 94305 -------------- next part -------------- A non-text attachment was scrubbed... Name: jonathan.taylor.vcf Type: text/x-vcard Size: 329 bytes Desc: not available URL: From ckkart at hoc.net Wed Apr 19 19:54:55 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Thu, 20 Apr 2006 08:54:55 +0900 Subject: [SciPy-user] wofz and optimized Humlicek algorithm. In-Reply-To: <20060419150851.29323.qmail@webmail50.rediffmail.com> References: <20060419150851.29323.qmail@webmail50.rediffmail.com> Message-ID: <4446CDCF.3050900@hoc.net> morovia wrote: > > Hi, > > I would like to know if anyone had tried on wofz and the optimized > Humlicek algorithm for the voigt algorithm developed by F. > Schreier (Courtesy : http://www.op.dlr.de/ne-oe/ir/voigt.html). I dont > know about the license issue of this code but nice to use the optimized > one to save computation time. > > I tried to f2py the above code but since I am using enthought > edition of scipy it demands visual studio. > > Thanks for your comments and info in advance, > I have wrapped the humdev subroutine found here http://www-atm.physics.ox.ac.uk/user/wells/voigt.html some time ago with f2py but switched to wofz since I didn't need high speed. I can look for the .pyf file if you're interested. Regards, Christian From stefan at sun.ac.za Thu Apr 20 05:33:30 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu, 20 Apr 2006 11:33:30 +0200 Subject: [SciPy-user] interp2d raises AttributeError: interp2d instance has no attribute 'tck', scipy 0.4.8, numpy 0.9.6 In-Reply-To: <6ef8f3380604130136s314b33b9td5f841689bcfddf7@mail.gmail.com> References: <6ef8f3380604130136s314b33b9td5f841689bcfddf7@mail.gmail.com> Message-ID: <20060420093330.GC26396@alpha> On Thu, Apr 13, 2006 at 10:36:02AM +0200, Pau Gargallo wrote: > as far as i know, interp2d is not implemented. 
> There is some skeleton code, but is not finished. See > http://projects.scipy.org/scipy/scipy/browser/trunk/Lib/interpolate/interpolate.py > > i have a pure python implementation of interpn that may be useful for you at > http://www.scipy.org/PauGargallo/Interpolation > it is probably very buggy, but seems to work. > > if you want more sophisticated, accurate or fast interpolation you > will have to use the fitpack wrappings directly. > > Are there any plans for reviewing the interpolate package? > Something like the 'interpolation review week'? I filed a ticket at http://projects.scipy.org/scipy/scipy/ticket/195 linking to the previous threads, patches and to Pau's implementation. Regards St?fan From morovia at rediffmail.com Thu Apr 20 06:05:54 2006 From: morovia at rediffmail.com (morovia) Date: 20 Apr 2006 10:05:54 -0000 Subject: [SciPy-user] wofz and optimized Humlicek algorithm. Message-ID: <20060420100554.2625.qmail@webmail10.rediffmail.com> Thanks for the responses. > I've written a C version of a fast Humlicek algorithm, which I use for > spectral fitting. I'm willing to part with it if someone would like to > include it in SciPy. Paul : I dont know how to write wrappers around c programmes. > I have wrapped the humdev subroutine found here > http://www-atm.physics.ox.ac.uk/user/wells/voigt.html > some time ago with f2py but switched to wofz since I didn't need high > speed. I can look for the .pyf file if you're interested. Christian : I will try this option, if you can send it. Thanks, Morovia. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From david at ar.media.kyoto-u.ac.jp Thu Apr 20 06:38:53 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 20 Apr 2006 19:38:53 +0900 Subject: [SciPy-user] array vs matrix, converting code from matlab Message-ID: <444764BD.4060206@ar.media.kyoto-u.ac.jp> Dear numpy users, I am converting some code from matlab to numpy/scipy, but still a bit confused by numpy, mostly the array vs matrix issue. Looking at the scipy website, the matrix type looks the closest to matlab syntax, but I still have some issues: - under matlab, everything, including scalars, is a matrix in the matlab sense. In python, they are not. So, if I want to handle the scalar case in a function which takes arrays, what should I do ? Having a special case for scalars sounds like a pain, so is asarray/asmatrix the best way to handle those cases so my function only deals with array types ? - what is the difference between matrix and array, except syntax ? If I want to handle both in one function, what is the "best" method ? Using one type only (for example matrix), and using asmatrix on all arguments accordingly ? To convert my matlab code, I was thinking about using asmatrix for arguments in all my functions, but I am not sure this is really the "right" way. I was hoping some other people would have some experience with the same issues, and could give me some general advice. thank you, David P.S: it would be great to have this kind of information on the scipy website; right now, the scipy for matlab users part is a bit sparse... I am willing to change this once I understand the problem myself, of course :) From pebarrett at gmail.com Thu Apr 20 07:48:10 2006 From: pebarrett at gmail.com (Paul Barrett) Date: Thu, 20 Apr 2006 07:48:10 -0400 Subject: [SciPy-user] wofz and optimized Humlicek algorithm.
In-Reply-To: <20060420100554.2625.qmail@webmail10.rediffmail.com> References: <20060420100554.2625.qmail@webmail10.rediffmail.com> Message-ID: <40e64fa20604200448n1e0595b8vf444e7d34d32e4dd@mail.gmail.com> On 20 Apr 2006 10:05:54 -0000, morovia wrote: > > > Thanks for the responses. > > I've written a C version of a fast Humlicek algorithm, which I use for > > spectral fitting. I'm willing to part with it if someone would like to > > include it in SciPy. > > Paul : I dont know how to write wrappers around c programmes. > A simple SWIG wrapper should suffice. I suppose that I could supply that also. Let me know if you are still interested. -- Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From gnchen at cortechs.net Thu Apr 20 09:50:07 2006 From: gnchen at cortechs.net (Gennan Chen) Date: Thu, 20 Apr 2006 06:50:07 -0700 Subject: [SciPy-user] array vs matrix, converting code from matlab In-Reply-To: <444764BD.4060206@ar.media.kyoto-u.ac.jp> References: <444764BD.4060206@ar.media.kyoto-u.ac.jp> Message-ID: Hi! All, I am also in the same process. And I would like to add one more question: In Matlab, for a 3D array or matrix, the indexing is a(i,j,k). In numpy, it became a[k-1,i-1,j-1]. Is there any way to make it become a[i-1,j-1,k-1]? Or I am doing something wrong here?? Gen On Apr 20, 2006, at 3:38 AM, David Cournapeau wrote: > Dear numpy users, > > I am converting some code from matlab to numpy/scipy, but still a > bit confused by numpy, mostly the array vs matrix issue. Looking at > the > scipy website, the matrix type looks the closest to matlab syntax, > but I > still have some issues: > > - under matlab, everything, including scalar, are matrices in > matlab > sense. In python, they are not. So, of I want to handle scalar > case in > a function which takes arrays, what should I do ? 
Having special case > for scalar sounds like a pain, so is asarray/asmatrix the best way to > handle those cases so my function only deal with array types ? > - what is the difference between matrix and array, except syntax ? > If I want to handle both in one function, what is the "best" method ? > Using one type only (for example matrix), and using asmatrix on all > arguments accordingly ? > > To convert my matlab code, I was thinking about using asmatrix for > arguments in all my functions, but I am not sure this is really the > "right" way. I was hoping some other people would have some experience > with the same issues, and could give me some general advices > > thank you, > > David > > P.S: it would be great to have this kind of information on scipy > website; right now, the scipy for matlab users part is a bit > sparse... I > am willing to change this once I understand the problem myself, of > course:) > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From ckkart at hoc.net Thu Apr 20 10:24:40 2006 From: ckkart at hoc.net (Christian Kristukat) Date: Thu, 20 Apr 2006 23:24:40 +0900 Subject: [SciPy-user] wofz and optimized Humlicek algorithm. In-Reply-To: <20060420100554.2625.qmail@webmail10.rediffmail.com> References: <20060420100554.2625.qmail@webmail10.rediffmail.com> Message-ID: <444799A8.9030006@hoc.net> morovia wrote: > > Thanks for the responses. > >> I've written a C version of a fast Humlicek algorithm, which I use for >> spectral fitting. I'm willing to part with it if someone would like to >> include it in SciPy. > > Paul : I dont know how to write wrappers around c programmes. > >> I have wrapped the humdev subroutine found here >> http://www-atm.physics.ox.ac.uk/user/wells/voigt.html >> some time ago with f2py but switched to wofz since I didn't need high >> speed. I can look for the .pyf file if you're interested. 
> > Christian : I will try this option, if you can send it. Here you go. Christian -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: humdev.for URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: humdev.pyf URL: From emsellem at obs.univ-lyon1.fr Thu Apr 20 12:13:33 2006 From: emsellem at obs.univ-lyon1.fr (Eric Emsellem) Date: Thu, 20 Apr 2006 18:13:33 +0200 Subject: [SciPy-user] stupid python question : for in in range(nlines) : In-Reply-To: References: Message-ID: <4447B32D.3050300@obs.univ-lyon1.fr> Hi, I have a very dumb python question here, hopefully someone can answer this in no time: I have a script including lines such as (reading a file with "nlines" lines): ############################## for i in range(nlines) : ... ... while i < nlines : .... i += 1 if ... : break ############################# But of course the "while" loop changes the increment "i", but then, when the break condition is valid, it returns to the "for" loop and starts again with the set of lines WITHOUT taking into account the fact that "i" was incremented (so that it should not read these lines AGAIN). Hope this is clear. Let me know if you have a simple solution (my scripting habits are coming from C, hence the way I stupidly wrote things here...) Thanks in advance for any help there Eric From schofield at ftw.at Thu Apr 20 12:37:52 2006 From: schofield at ftw.at (Ed Schofield) Date: Thu, 20 Apr 2006 18:37:52 +0200 Subject: [SciPy-user] stupid python question : for in in range(nlines) : In-Reply-To: <4447B32D.3050300@obs.univ-lyon1.fr> References: <4447B32D.3050300@obs.univ-lyon1.fr> Message-ID: <4447B8E0.6080703@ftw.at> Eric Emsellem wrote: > Hi, > > I have a script including lines such as (reading a file with "nlines" > lines): > > ############################## > for i in range(nlines) : > ... > ... > while i < nlines : > .... > i += 1 > if ... 
: > break > ############################# > I think you need to use a 'while' loop instead of 'for'. For example: i = 0 while i < nlines: ... ... while i < nlines : .... i += 1 if ... : break i += 1 Hope this helps! -- Ed From aisaac at american.edu Thu Apr 20 12:50:39 2006 From: aisaac at american.edu (Alan G Isaac) Date: Thu, 20 Apr 2006 12:50:39 -0400 Subject: [SciPy-user] stupid python question : for in in range(nlines) : In-Reply-To: <4447B32D.3050300@obs.univ-lyon1.fr> References: <4447B32D.3050300@obs.univ-lyon1.fr> Message-ID: On Thu, 20 Apr 2006, Eric Emsellem apparently wrote: > ############################## for i in range(nlines) > : ... ... while i < nlines : .... i += 1 if ... : break > ############################# The for loop provides a rule for assignment to the name 'i', so that must fail. Files are iterators: http://docs.python.org/lib/bltin-file-objects.html Use 'for line in file:' along with the next() method. hth, Alan Isaac From nmarais at sun.ac.za Thu Apr 20 13:36:15 2006 From: nmarais at sun.ac.za (Neilen Marais) Date: Thu, 20 Apr 2006 19:36:15 +0200 Subject: [SciPy-user] F2PY stopped working with new scipy References: <44368A57.9000206@gmail.com> <443A8BAC.8080302@gmail.com> Message-ID: Hi On Mon, 10 Apr 2006 11:45:32 -0500, Robert Kern wrote: > Neilen Marais wrote: >> Hi Robert >> >> On Fri, 07 Apr 2006 10:50:47 -0500, Robert Kern wrote: >> > Well, according to the error message, it was looking for efort and efc for some > reason. Looking at the code (numpy/distutils/fcompiler/intel.py), it appears > that the IntelItaniamFCompiler class looks for efort and efc; however, that > compiler is supposed to be specified by intele, not intel. It seems to be confused by the fact that I'm using the EM64T version of the intel compilers. The version string printed by my compiler is: Intel(R) Fortran Compiler for Intel(R) EM64T-based applications, Version 9.0 Build 20050430 Package ID: l_fc_p_9.0.021 >> >> How can I obtain this test string? 
It did work with the older version >> of scipy/f2py, so this may be some sort of regression. > > The regexes are the version_pattern class attributes in the file intel.py given > above. I updated this regex, and also commented out some options that aren't valid for the EM64T compiler. A diff on intel.py from today's svn reveals: --- intel.py~ 2006-04-12 18:34:30.000000000 +0200 +++ intel.py 2006-04-20 12:39:01.000000000 +0200 @@ -10,7 +10,7 @@ class IntelFCompiler(FCompiler): compiler_type = 'intel' - version_pattern = r'Intel\(R\) Fortran Compiler for 32-bit '\ + version_pattern = r'Intel\(R\) Fortran Compiler for .* '\ 'applications, Version (?P[^\s*]*)' for fc_exe in map(find_executable,['ifort','ifc']): @@ -56,12 +56,12 @@ opt.append('-tpp5') elif cpu.is_PentiumIV() or cpu.is_Xeon(): opt.extend(['-tpp7','-xW']) - if cpu.has_mmx() and not cpu.is_Xeon(): - opt.append('-xM') - if cpu.has_sse2(): - opt.append('-arch SSE2') - elif cpu.has_sse(): - opt.append('-arch SSE') +# if cpu.has_mmx() and not cpu.is_Xeon(): +# opt.append('-xM') +# if cpu.has_sse2(): +# opt.append('-arch SSE2') +# elif cpu.has_sse(): +# opt.append('-arch SSE') return opt def get_flags_linker_so(self): This gets the compiler to run, and builds the extension module. The resulting module doesn't quite work right though. I'll make a separate post about that though. Of course these changes may break things for 32-bit platforms. Cheers Neilen -- you know its kind of tragic we live in the new world but we've lost the magic -- Battery 9 (www.battery9.co.za) From nmarais at sun.ac.za Thu Apr 20 13:40:07 2006 From: nmarais at sun.ac.za (Neilen Marais) Date: Thu, 20 Apr 2006 19:40:07 +0200 Subject: [SciPy-user] F2PY stopped working with new scipy References: <44368A57.9000206@gmail.com> <443A8BAC.8080302@gmail.com> Message-ID: Hi Again On Thu, 20 Apr 2006 19:36:15 +0200, Neilen Marais wrote: >>> How can I obtain this test string? 
It did work with the older version >>> of scipy/f2py, so this may be some sort of regression. >> >> The regexes are the version_pattern class attributes in the file intel.py given >> above. > > I updated this regex, and also commented out some options that aren't valid for > the EM64T compiler. A diff on intel.py from today's svn reveals: Actually, I just looked in the old scipy distutils, and the regexps are exactly the same as in numpy! Strange that it worked before, unless it's a bug in the old distutils code that it ignored the test? Cheers Neilen From nmarais at sun.ac.za Thu Apr 20 14:05:24 2006 From: nmarais at sun.ac.za (Neilen Marais) Date: Thu, 20 Apr 2006 20:05:24 +0200 Subject: [SciPy-user] F2PY, F90 ALLOCATED arrays and AMD64 Message-ID: Hi. There seems to be a problem with recent version of F2PY on 64bit (or at least my 64bit) platforms. I'm using the Intel EM64T version of Intel Fortran 9.0. By default, this compiler doesn't work, since distutils don't recognise the EM64T versions of the compiler. After patching numpy/distutils/fcompilers/intel.py as explained in the thread "F2PY stopped working with new scipy", I was able to compile the wrappers, but they don't quite work right.I think the easiest way to explain it is by example. The following code in the file test_data.f90 is being wrapped : MODULE DATA IMPLICIT NONE REAL, DIMENSION(:), ALLOCATABLE :: test_arr CONTAINS SUBROUTINE init() ALLOCATE(test_arr(10)) test_arr=55 END SUBROUTINE init END MODULE DATA using this command: $ f2py --fcompiler=intel -m testmod -c test_data.f90 Using this version of f2py: $ f2py -v 2.46.243_2020 I get the expected output: $ python Python 2.4.2 (#2, Sep 30 2005, 22:19:27) [GCC 4.0.2 20050808 (prerelease) (Ubuntu 4.0.1-4ubuntu8)] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> import testmod >>> testmod.data.init() >>> testmod.data.test_arr array([ 55., 55., 55., 55., 55., 55., 55., 55., 55., 55.],'f') >>> If, instead, I use this version of f2py: $ f2py -v 2_2383 I get: $ python Python 2.4.2 (#2, Sep 30 2005, 22:19:27) [GCC 4.0.2 20050808 (prerelease) (Ubuntu 4.0.1-4ubuntu8)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import testmod >>> testmod.data.init() >>> testmod.data.test_arr Traceback (most recent call last): File "", line 1, in ? ValueError: negative dimensions are not allowed I also checked on a very similarly setup 32-bit machine. I first ran into a problem compiling: $ f2py --fcompiler=intel -m testmod -c test_data.f90 ....... ifort:f90: test_data.f90 ifort: Command line warning: extension 'M' not supported ignored in option '-x' ifort: Command line error: Unrecognized keyword 'SSE' for option '-arch' ifort: Command line warning: extension 'M' not supported ignored in option '-x' ifort: Command line error: Unrecognized keyword 'SSE' for option '-arch' error: Command "/usr/local/bin/ifort -FR -KPIC -cm -O3 -unroll -tpp6 -xM -arch SSE -I/tmp/tmp1fRBC7/src -I/usr/lib/python2.4/site-packages/numpy/core/include -I/usr/include/python2.4 -c -c test_data.f90 -o /tmp/tmp1fRBC7/test_data.o -module /tmp/tmp1fRBC7/ -I/tmp/tmp1fRBC7/" failed with exit status 1 I fixed this by commenting out the offending compiler options in intel.py. After this, the wrapped code works both with the old and new f2py. I must add that the 32-bit system is running python 2.4.3 instead of 2.4.2. Don't know if this would make any difference. 
Thanks Neilen -- you know its kind of tragic we live in the new world but we've lost the magic -- Battery 9 (www.battery9.co.za) From oliphant at ee.byu.edu Thu Apr 20 14:28:26 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 20 Apr 2006 12:28:26 -0600 Subject: [SciPy-user] array vs matrix, converting code from matlab In-Reply-To: References: <444764BD.4060206@ar.media.kyoto-u.ac.jp> Message-ID: <4447D2CA.50505@ee.byu.edu> Gennan Chen wrote: >Hi! All, > >I am also in the same process. And I would like to add one more >question: > >In Matlab, for a 3D array or matrix, the indexing is a(i,j,k). In >numpy, it became a[k-1,i-1,j-1]. Is there any way to make it become >a[i-1,j-1,k-1]? Or I am doing something wrong here?? > > > In NumPy, arrays and matrices are by default in C-contiguous order so that the last index varies the fastest. Matlab is based on Fortran originally and defines arrays in Fortran-contiguous order (the first index varies the fastest as you walk linearly throught memory). The only time this really matters is if you are interfacing with some compiled code. Otherwise, how you think about indexing is up to you and how you define the array. So, to make it a[i-1,j-1,k-1] you need to reshape the array from the way you defined it in MATLAB. It really does just depend on how you define things. Perhaps you could give a specific example so we could be more specific on how you would write the same thing in NumPy. -Travis From wjdandreta at att.net Thu Apr 20 14:39:20 2006 From: wjdandreta at att.net (Bill Dandreta) Date: Thu, 20 Apr 2006 14:39:20 -0400 Subject: [SciPy-user] stupid python question : for in in range(nlines) : In-Reply-To: <4447B32D.3050300@obs.univ-lyon1.fr> References: <4447B32D.3050300@obs.univ-lyon1.fr> Message-ID: <4447D558.9030009@att.net> I do this kind of thing with for loops all the time. 
It usually takes this form: This skips a line if it doesn't meet your processing condition: for line in file('filename'): parse line here if condition: continue process line here If you need to look forward (or backward) some number of lines, use a second loop like this: for i in range(nlines) : if condition: continue ... k=1 while i + k < nlines : examine line i+k here k += 1 Eric Emsellem wrote: >Hi, > >I have a very dumb python question here, hopefully someone can answer >this in no time: > >I have a script including lines such as (reading a file with "nlines" >lines): > >############################## >for i in range(nlines) : > ... > ... > while i < nlines : > .... > i += 1 > if ... : > break >############################# > >But of course the "while" loop changes the increment "i", but then, when >the break condition is valid, it returns to the "for" loop and starts >again with the set of lines WITHOUT taking into account the fact that >"i" was incremented (so that it should not read these lines AGAIN). > >Hope this is clear. Let me know if you have a simple solution (my >scripting habits are coming from C, hence the way I stupidly wrote things >here...) > >Thanks in advance for any help there > >Eric > >_______________________________________________ >SciPy-user mailing list >SciPy-user at scipy.net >http://www.scipy.net/mailman/listinfo/scipy-user > > > From gnchen at cortechs.net Thu Apr 20 14:46:26 2006 From: gnchen at cortechs.net (Gennan Chen) Date: Thu, 20 Apr 2006 11:46:26 -0700 Subject: [SciPy-user] array vs matrix, converting code from matlab In-Reply-To: <4447D2CA.50505@ee.byu.edu> References: <444764BD.4060206@ar.media.kyoto-u.ac.jp> <4447D2CA.50505@ee.byu.edu> Message-ID: <8EF895A7-6A9C-408A-9158-CA8B6641D334@cortechs.net> Hi! 
Travis, Let's start with an example under matlab: >> d = [0:23] >> k = reshape(d, 3,4,2) k(:,:,1) = 0 3 6 9 1 4 7 10 2 5 8 11 k(:,:,2) = 12 15 18 21 13 16 19 22 14 17 20 23 under numpy: >In [2]: d = numpy.asarray(range(0,24), numpy.float32) In [3]: d Out[3]: array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13., 14., 15., 16., 17., 18., 19., 20., 21., 22., 23.], dtype=float32) In [4]: k = d.reshape(3,4,2) In [5]: k Out[5]: array([[[ 0., 1.], [ 2., 3.], [ 4., 5.], [ 6., 7.]], [[ 8., 9.], [ 10., 11.], [ 12., 13.], [ 14., 15.]], [[ 16., 17.], [ 18., 19.], [ 20., 21.], [ 22., 23.]]], dtype=float32) So, if I want to port my Matlab code, I need to pay attention to this. And Since I have a lot of C/C++ mexing code in Matlab, I need to fix not just 1-0 based but also indexing issue here. Any chance I can make the indexing like the Matlab's way? Or I should just hunker down... Gen-Nan Chen, PhD Chief Scientist Research and Development Group CorTechs Labs Inc (www.cortechs.net) 1020 Prospect St., #304, La Jolla, CA, 92037 Tel: 1-858-459-9700 ext 16 Fax: 1-858-459-9705 Email: gnchen at cortechs.net On Apr 20, 2006, at 11:28 AM, Travis Oliphant wrote: > Gennan Chen wrote: > >> Hi! All, >> >> I am also in the same process. And I would like to add one more >> question: >> >> In Matlab, for a 3D array or matrix, the indexing is a(i,j,k). In >> numpy, it became a[k-1,i-1,j-1]. Is there any way to make it become >> a[i-1,j-1,k-1]? Or I am doing something wrong here?? >> >> >> > > In NumPy, arrays and matrices are by default in C-contiguous order so > that the last index varies the fastest. Matlab is based on Fortran > originally and defines arrays in Fortran-contiguous order (the first > index varies the fastest as you walk linearly throught memory). > The > only time this really matters is if you are interfacing with some > compiled code. Otherwise, how you think about indexing is up to you > and how you define the array. 
> > So, to make it a[i-1,j-1,k-1] you need to reshape the array from the > way you defined it in MATLAB. It really does just depend on how you > define things. Perhaps you could give a specific example so we could > be more specific on how you would write the same thing in NumPy. > > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From oliphant at ee.byu.edu Thu Apr 20 16:44:06 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 20 Apr 2006 14:44:06 -0600 Subject: [SciPy-user] array vs matrix, converting code from matlab In-Reply-To: <8EF895A7-6A9C-408A-9158-CA8B6641D334@cortechs.net> References: <444764BD.4060206@ar.media.kyoto-u.ac.jp> <4447D2CA.50505@ee.byu.edu> <8EF895A7-6A9C-408A-9158-CA8B6641D334@cortechs.net> Message-ID: <4447F296.8060307@ee.byu.edu> Gennan Chen wrote: >Hi! Travis, > >Let's start with an example under matlab: > > >> d = [0:23] > >> k = reshape(d, 3,4,2) >k(:,:,1) = >0 3 6 9 >1 4 7 10 >2 5 8 11 >k(:,:,2) = >12 15 18 21 >13 16 19 22 >14 17 20 23 > > > This is a FORTRAN-order reshaping. The linear sequence of values is reshaped into an array by varying the first index (the row) the fastest. NumPy, by default, uses C-contiguous order so that the last index varies the fastest as it places elements in the array. NumPy does have support for the FORTRAN-order, but there are a few constructs that don't support it: (arr.flat iterators are always in C-contiguous order and a.shape = (3,4,2) always assumes C-contiguous order for advancing through the elements). I don't know of anyone who has used the FORTRAN support to successfully convert MATLAB code so unless you want to be a guinea pig, you might want to just hunker down and convert to C-contiguous order. Alternatively you can just re-think the shape of your arrays in reverse: i.e. instead of creating a 3,4,2 array, create a 2,4,3 array and reverse all your indices. 
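The two options Travis describes (NumPy's Fortran-order support on one hand, reversed shapes with the default C order on the other) can be illustrated side by side. The sketch below is an editorial addition, not code from the thread, and assumes a NumPy recent enough to accept the order= keyword to reshape:

```python
import numpy as np

d = np.arange(24)

# Fortran order fills the first index fastest, exactly like Matlab's
# reshape(d, 3, 4, 2): Matlab's k(i,j,k) (1-based) is kf[i-1, j-1, k-1].
kf = np.reshape(d, (3, 4, 2), order='F')

# Default C order with the shape reversed: the same linear layout,
# indexed with the axes in the opposite order.
kc = np.reshape(d, (2, 4, 3))

print(kf[0, 1, 0])                          # -> 3, Matlab's k(1,2,1)
print(kc[0, 1, 0])                          # -> 3, same element, indices reversed
print((kf == kc.transpose(2, 1, 0)).all())  # -> True
```

Either way the same linear buffer is being viewed; only the mapping from indices to memory changes, which is why reversing the shape is usually the cheaper habit to adopt when porting.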
>So, if I want to port my Matlab code, I need to pay attention to >this. And Since I have a lot of C/C++ mexing code in Matlab, I need >to fix not just 1-0 based but also indexing issue here. Any chance I >can make the indexing like the Matlab's way? Or I should just hunker >down... > > As far as the indexing is concerned. The only way to alter it is to implement a new class that subtracts 1 from all the indices. You could also define "end" as a simple object and when you see it replace with the number of dimension in the array. Such a thing is possible (you could also implement it to reverse the order of all your indices and simulate a matlab-style array). -Travis From gnchen at cortechs.net Thu Apr 20 16:58:37 2006 From: gnchen at cortechs.net (Gennan Chen) Date: Thu, 20 Apr 2006 13:58:37 -0700 Subject: [SciPy-user] array vs matrix, converting code from matlab In-Reply-To: <4447F296.8060307@ee.byu.edu> References: <444764BD.4060206@ar.media.kyoto-u.ac.jp> <4447D2CA.50505@ee.byu.edu> <8EF895A7-6A9C-408A-9158-CA8B6641D334@cortechs.net> <4447F296.8060307@ee.byu.edu> Message-ID: <09A7C334-6299-4421-B903-72372417B89B@cortechs.net> Hi! Travis, Thanks for your suggestion. In fact, that is what I do. Creating a class and use __getitem__ to swap the indices. Unfortunately, In the medical imaging, index (i,j,k) is always related to FORTRAN way. Gen On Apr 20, 2006, at 1:44 PM, Travis Oliphant wrote: > Gennan Chen wrote: > >> Hi! Travis, >> >> Let's start with an example under matlab: >> >>>> d = [0:23] >>>> k = reshape(d, 3,4,2) >> k(:,:,1) = >> 0 3 6 9 >> 1 4 7 10 >> 2 5 8 11 >> k(:,:,2) = >> 12 15 18 21 >> 13 16 19 22 >> 14 17 20 23 >> >> >> > > This is a FORTRAN-order reshaping. The linear sequence of values is > reshaped into an array by varying the first index (the row) the > fastest. > > NumPy, by default, uses C-contiguous order so that the last index > varies > the fastest as it places elements in the array. 
> > NumPy does have support for the FORTRAN-order, but there are a few > constructs that don't support it: (arr.flat iterators are always in > C-contiguous order and a.shape = (3,4,2) always assumes C-contiguous > order for advancing through the elements). I don't know of anyone > who > has used the FORTRAN support to successfully convert MATLAB code so > unless you want to be a guinea pig, you might want to just hunker down > and convert to C-contiguous order. Alternatively you can just re- > think > the shape of your arrays in reverse: i.e. instead of creating a 3,4,2 > array, create a 2,4,3 array and reverse all your indices. > >> So, if I want to port my Matlab code, I need to pay attention to >> this. And Since I have a lot of C/C++ mexing code in Matlab, I need >> to fix not just 1-0 based but also indexing issue here. Any chance I >> can make the indexing like the Matlab's way? Or I should just hunker >> down... >> >> > As far as the indexing is concerned. The only way to alter it is to > implement a new class that subtracts 1 from all the indices. You > could > also define "end" as a simple object and when you see it replace with > the number of dimension in the array. Such a thing is possible (you > could also implement it to reverse the order of all your indices and > simulate a matlab-style array). > > -Travis > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From david at ar.media.kyoto-u.ac.jp Thu Apr 20 23:27:25 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 21 Apr 2006 12:27:25 +0900 Subject: [SciPy-user] array vs matrix, converting code from matlab In-Reply-To: References: <444764BD.4060206@ar.media.kyoto-u.ac.jp> Message-ID: <4448511D.3070306@ar.media.kyoto-u.ac.jp> Gennan Chen wrote: > Hi! All, > > I am also in the same process. 
And I would like to add one more > question: > > In Matlab, for a 3D array or matrix, the indexing is a(i,j,k). In > numpy, it became a[k-1,i-1,j-1]. Is there any way to make it become > a[i-1,j-1,k-1]? Or I am doing something wrong here?? > > To be more specific, I am trying to convert a function which computes multivariate Gaussian densities. It should be able to handle the scalar case, the case where the mean is a vector, and the case where va is a vector (diagonal covariance matrix) or square matrix (full covariance matrix). So, in matlab, I simply do: function [n, d, K, varmode] = gaussd_args(data, mu, var) [n, d] = size(data); [dm0, dm1] = size(mu); [dv0, dv1]= size(var); And I check that the dimensions are what I expect afterwards. Using arrays, I don't see a simple way to do that while passing scalar arguments to the functions. So either I should be using the matrix type (and using asmatrix on the arguments), or I should never pass scalars to the function, and always pass arrays. But maybe I've used matlab too much, and there is a much simpler way to do that in scipy. To sum it up, what is the convention in scipy when a function handles both scalar and arrays ? Is there an idiom to treat scalar and arrays of size 1 the same way, whatever the number of dimensions arrays may have ? David From robert.kern at gmail.com Fri Apr 21 00:07:01 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 20 Apr 2006 23:07:01 -0500 Subject: [SciPy-user] array vs matrix, converting code from matlab In-Reply-To: <4448511D.3070306@ar.media.kyoto-u.ac.jp> References: <444764BD.4060206@ar.media.kyoto-u.ac.jp> <4448511D.3070306@ar.media.kyoto-u.ac.jp> Message-ID: <44485A65.70309@gmail.com> David Cournapeau wrote: > To sum it up, what is the convention in scipy when a function > handles both scalar and arrays ? Is there an idiom to treat scalar and > arrays of size 1 the same way, whatever the number of dimensions arrays > may have ? 
Very frequently, you can simply rely on the array broadcasting of the ufuncs and basic operations to do the work for you. I can't find a simple description of the broadcasting rules on the Web at the moment (big opportunity for a Wiki page), but very basically: In [1]: from numpy import * In [2]: a = arange(20).reshape((4,5)) In [3]: a Out[3]: array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19]]) In [4]: a + 10 Out[4]: array([[10, 11, 12, 13, 14], [15, 16, 17, 18, 19], [20, 21, 22, 23, 24], [25, 26, 27, 28, 29]]) If you really do want scalars to be treated as arrays of size 1 (what dimensionality?), then you can usually use one of the atleast_* functions: In [5]: atleast*? atleast_1d atleast_2d atleast_3d In [6]: atleast_1d(10) Out[6]: array([10]) In [7]: atleast_2d(10) Out[7]: array([[10]]) In [8]: atleast_3d(10) Out[8]: array([[[10]]]) -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From david at ar.media.kyoto-u.ac.jp Fri Apr 21 00:32:33 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 21 Apr 2006 13:32:33 +0900 Subject: [SciPy-user] array vs matrix, converting code from matlab In-Reply-To: <44485A65.70309@gmail.com> References: <444764BD.4060206@ar.media.kyoto-u.ac.jp> <4448511D.3070306@ar.media.kyoto-u.ac.jp> <44485A65.70309@gmail.com> Message-ID: <44486061.8010008@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > David Cournapeau wrote: > > >> To sum it up, what is the convention in scipy when a function >> handles both scalar and arrays ? Is there an idiom to treat scalar and >> arrays of size 1 the same way, whatever the number of dimensions arrays >> may have ? >> > > Very frequently, you can simply rely on the array broadcasting of the ufuncs and > basic operations to do the work for you. 
I can't find a simple description of > the broadcasting rules on the Web at the moment (big opportunity for a Wiki > page), but very basically: > > In [1]: from numpy import * > > In [2]: a = arange(20).reshape((4,5)) > > In [3]: a > Out[3]: > array([[ 0, 1, 2, 3, 4], > [ 5, 6, 7, 8, 9], > [10, 11, 12, 13, 14], > [15, 16, 17, 18, 19]]) > > In [4]: a + 10 > Out[4]: > array([[10, 11, 12, 13, 14], > [15, 16, 17, 18, 19], > [20, 21, 22, 23, 24], > [25, 26, 27, 28, 29]]) > > I understand those cases, this is pretty similar to matlab, so I am used to it. But my problem is different (or maybe not ?) > If you really do want scalars to be treated as arrays of size 1 (what > dimensionality?), then you can usually use one of the atleast_* functions: > > This looks exactly like what I am looking for. My problem for my function is the following (pseudo code): foo(x, mu, va): if mu and va scalars: call scalar_implementation return result if mu and va rank 1: call scalar implementation on each element if mu rank 1 and va rank 2: call matrix implementation and assume all arguments are always rank 2, even if they are "scalar" (size 1), a bit like in numpy.linalg, if I understood correctly (calling numpy.linalg.inv(1) does not work). It looks like those atleast* methods should do the work. Actually, my problem is pretty similar to implementing a wrapper around numpy.linalg.inv which works in the scalar and rank 1 (assuming rank 1 means diagonal) cases. Are those atleast* functions expensive ? For small size arrays, I don't care too much, but in the case of a big array of rank 1 converted to a rank 2 array, do those functions need to copy the data ? 
David From robert.kern at gmail.com Fri Apr 21 00:42:23 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 20 Apr 2006 23:42:23 -0500 Subject: [SciPy-user] array vs matrix, converting code from matlab In-Reply-To: <44486061.8010008@ar.media.kyoto-u.ac.jp> References: <444764BD.4060206@ar.media.kyoto-u.ac.jp> <4448511D.3070306@ar.media.kyoto-u.ac.jp> <44485A65.70309@gmail.com> <44486061.8010008@ar.media.kyoto-u.ac.jp> Message-ID: <444862AF.9060606@gmail.com> David Cournapeau wrote: > Actually, my problem is pretty similar to implementing wrapper around > numpy.linalg.inv which works in scalar case and rank 1 (assuming rank 1 > means diagonal) cases. Are those atleast* functions expensive ? For > small size arrays, I don't care too much, but in the case of a big array > of rank 1 converted to a rank 2 array, does those function need to copy > the data ? No: def atleast_2d(*arys): """ Force a sequence of arrays to each be at least 2D. Description: Force an array to each be at least 2D. If the array is 0D or 1D, the array is converted to a single row of values. Otherwise, the array is unaltered. Arguments: arys -- arrays to be converted to 2 or more dimensional array. Returns: input array converted to at least 2D array. """ res = [] for ary in arys: res.append(array(ary,copy=False,ndmin=2)) if len(res) == 1: return res[0] else: return res -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From robert.kern at gmail.com Fri Apr 21 00:44:01 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 20 Apr 2006 23:44:01 -0500 Subject: [SciPy-user] array vs matrix, converting code from matlab In-Reply-To: <4448511D.3070306@ar.media.kyoto-u.ac.jp> References: <444764BD.4060206@ar.media.kyoto-u.ac.jp> <4448511D.3070306@ar.media.kyoto-u.ac.jp> Message-ID: <44486311.7050007@gmail.com> David Cournapeau wrote: > Gennan Chen wrote: > >>Hi! All, >> >>I am also in the same process. And I would like to add one more >>question: >> >>In Matlab, for a 3D array or matrix, the indexing is a(i,j,k). In >>numpy, it became a[k-1,i-1,j-1]. Is there any way to make it become >>a[i-1,j-1,k-1]? Or I am doing something wrong here?? > > To be more specific, I am trying to convert a function which compute > multivariate Gaussian densities. It should be able to handle scalar > case, the case where the mean is a vector, and the case where va is a > vector (diagonal covariance matrix) or square matrix (full covariance > matrix). Also, both of you might want to take a look at this page and its links: http://www.scipy.org/NumPy_for_Matlab_Users -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From david at ar.media.kyoto-u.ac.jp Fri Apr 21 00:52:10 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 21 Apr 2006 13:52:10 +0900 Subject: [SciPy-user] array vs matrix, converting code from matlab In-Reply-To: <44486311.7050007@gmail.com> References: <444764BD.4060206@ar.media.kyoto-u.ac.jp> <4448511D.3070306@ar.media.kyoto-u.ac.jp> <44486311.7050007@gmail.com> Message-ID: <444864FA.2040505@ar.media.kyoto-u.ac.jp> Robert Kern wrote: > > Also, both of you might want to take a look at this page and its links: > > http://www.scipy.org/NumPy_for_Matlab_Users > > I think the page lacks some key points, at least concerning these rank issues, where numpy and matlab are quite different (for example, nowhere it is said that slicing gives an array whose rank is different than the original array). Once I am sure to get my head around the whole issue, I will try to complete it accordingly. David From wbaxter at gmail.com Fri Apr 21 01:03:42 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Fri, 21 Apr 2006 14:03:42 +0900 Subject: [SciPy-user] array vs matrix, converting code from matlab In-Reply-To: <44486061.8010008@ar.media.kyoto-u.ac.jp> References: <444764BD.4060206@ar.media.kyoto-u.ac.jp> <4448511D.3070306@ar.media.kyoto-u.ac.jp> <44485A65.70309@gmail.com> <44486061.8010008@ar.media.kyoto-u.ac.jp> Message-ID: On 4/21/06, David Cournapeau wrote: > > Robert Kern wrote: > > David Cournapeau wrote: > > > This looks exactly like what I am looking for. My problem for my > function is the following (pseudo code): > > foo(x, mu, va): > > if mu and va scalars: > call scalar_implementation > return result > if mu and va rank 1: > call scalar implementation on each element > if mu rank 1 and va rank 2: > call matrix implementation To handle the first two cases (scalar, and call scalar on every element), you should be able to use 'numpy.frompyfunc' to create a version of your scalar function that automatically works that way. 
def plusone(v):
    return v+1

uf = numpy.frompyfunc(plusone,1,1)

>>> uf(1)
2
>>> uf([1,2,3])
array([2, 3, 4], dtype=object)

> and assumed all arguments are always rank 2, even if they are "scalar"
> (size 1), a bit like in numpy.linalg, if I understood
> correctly (calling numpy.linalg.inv(1) does not work). It looks like
> those atleast* methods should do the work.
>
> Actually, my problem is pretty similar to implementing a wrapper around
> numpy.linalg.inv which works in the scalar case and rank-1 (assuming rank 1
> means diagonal) cases. Are those atleast* functions expensive? For
> small arrays, I don't care too much, but in the case of a big array
> of rank 1 converted to a rank-2 array, do those functions need to copy
> the data?

Looks like atleast_2d doesn't copy the data, so yes, it should be fast.

>>> a = numpy.array([1,2,3,4])
>>> b = numpy.atleast_2d(a)
>>> a
array([1, 2, 3, 4])
>>> b
array([[1, 2, 3, 4]])
>>> a[1] = 0
>>> a
array([1, 0, 3, 4])
>>> b
array([[1, 0, 3, 4]])

-- William V. Baxter III OLM Digital Kono Dens Building Rm 302 1-8-8 Wakabayashi Setagaya-ku Tokyo, Japan 154-0023 +81 (3) 3422-3380 -------------- next part -------------- An HTML attachment was scrubbed... URL: From wbaxter at gmail.com Fri Apr 21 01:20:19 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Fri, 21 Apr 2006 14:20:19 +0900 Subject: [SciPy-user] array vs matrix, converting code from matlab In-Reply-To: <44486061.8010008@ar.media.kyoto-u.ac.jp> References: <444764BD.4060206@ar.media.kyoto-u.ac.jp> <4448511D.3070306@ar.media.kyoto-u.ac.jp> <44485A65.70309@gmail.com> <44486061.8010008@ar.media.kyoto-u.ac.jp> Message-ID: By the way, I'd be interested in an n-dimension Gaussian function for NumPy/SciPy too. Anyone else interested in machine learning and/or Bayesian methods? A port of Netlab (http://www.ncrg.aston.ac.uk/netlab/index.php) in SciPy would be great. --Bill -------------- next part -------------- An HTML attachment was scrubbed...
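[Editor's note: two claims in the messages above can be checked directly with a current NumPy — this sketch is not part of the original thread. `frompyfunc` returns object-dtype arrays, while `numpy.vectorize` is a similar convenience that keeps a numeric dtype; and `atleast_2d` really does promote a rank-1 array without copying it:]

```python
import numpy as np

def plusone(v):
    return v + 1

# Like frompyfunc, vectorize maps a scalar function over arrays,
# but the result keeps a numeric dtype instead of dtype=object.
uf = np.vectorize(plusone)
out = uf([1, 2, 3])
assert list(out) == [2, 3, 4]
assert out.dtype != object

# atleast_2d promotes a rank-1 array to rank 2 as a view:
# the result shares the input's buffer, so no copy is made.
a = np.arange(1000000)
b = np.atleast_2d(a)
assert b.shape == (1, a.size)
assert np.shares_memory(a, b)
a[0] = 42
assert b[0, 0] == 42   # writes are visible through both names
```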
URL: From wbaxter at gmail.com Fri Apr 21 01:31:06 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Fri, 21 Apr 2006 14:31:06 +0900 Subject: [SciPy-user] array vs matrix, converting code from matlab In-Reply-To: <444864FA.2040505@ar.media.kyoto-u.ac.jp> References: <444764BD.4060206@ar.media.kyoto-u.ac.jp> <44486311.7050007@gmail.com> <444864FA.2040505@ar.media.kyoto-u.ac.jp> Message-ID: On 4/21/06, David Cournapeau wrote: > > Robert Kern wrote: > > > > Also, both of you might want to take a look at this page and its links: > > > > http://www.scipy.org/NumPy_for_Matlab_Users > > > > > I think the page lacks some key points, at least concerning these rank > issues, where numpy and matlab are quite different (for example, nowhere > it is said that slicing gives an array whose rank is different than the > original array). Once I am sure to get my head around the whole issue, I > will try to complete it accordingly. That would be great. Despite the fact that there's a column for 'array' there, most of the page was written with the assumption that you're using matlab for linear algebra, and so will be doing most everything with 'matrix', not 'array'. Slicing a 2-index matrix always returns another 2-index matrix. But you have a very good point when it comes to 'array's. --bb -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at ar.media.kyoto-u.ac.jp Fri Apr 21 01:38:38 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 21 Apr 2006 14:38:38 +0900 Subject: [SciPy-user] array vs matrix, converting code from matlab In-Reply-To: References: <444764BD.4060206@ar.media.kyoto-u.ac.jp> <4448511D.3070306@ar.media.kyoto-u.ac.jp> <44485A65.70309@gmail.com> <44486061.8010008@ar.media.kyoto-u.ac.jp> Message-ID: <44486FDE.3080700@ar.media.kyoto-u.ac.jp> Bill Baxter wrote: > By the way, I'd be interested in an n-dimension Gaussian function for > NumPy/SciPy too. 
> > Anyone else interested in machine learning and/or Bayesian methods? A > port of Netlab ( http://www.ncrg.aston.ac.uk/netlab/index.php) in > SciPy would be great. Actually, I am porting code for Gaussian Mixture Models with batch and online EM. I am first doing a pure Python version to get an idea of scipy's capabilities, and then I intend to create the stub for a C implementation (which already exists for matlab, the core being independent of matlab). I am hoping for a much cleaner and more extensible implementation (i.e. using other pdfs, and why not more general models), using Python's language capabilities (modules, inheritance, etc.). I think porting netlab would be a huge task, and quite difficult; there is also torch (http://www.torch.ch/) which may be interesting to use (C++ code, BSD license). Having a machine learning toolbox would be a step forward for scipy, I guess. David From wbaxter at gmail.com Fri Apr 21 02:12:04 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Fri, 21 Apr 2006 15:12:04 +0900 Subject: [SciPy-user] array vs matrix, converting code from matlab In-Reply-To: <44486FDE.3080700@ar.media.kyoto-u.ac.jp> References: <444764BD.4060206@ar.media.kyoto-u.ac.jp> <4448511D.3070306@ar.media.kyoto-u.ac.jp> <44485A65.70309@gmail.com> <44486061.8010008@ar.media.kyoto-u.ac.jp> <44486FDE.3080700@ar.media.kyoto-u.ac.jp> Message-ID: Yeh, I'm not so interested in having a PyNetlab per se, i.e. api-for-api equivalent, as much as having similar functionality. But I reckon copying the design and organization of something that already exists would be faster than doing it completely from scratch. The bits I use are GMM, Gaussian Processes, RBF networks, (P)PCA. But I guess that's probably the bulk of the code right there. Anyway, I can dream, can't I? :-) --bb On 4/21/06, David Cournapeau wrote: > > Bill Baxter wrote: > > By the way, I'd be interested in an n-dimension Gaussian function for > > NumPy/SciPy too.
> > > > Anyone else interested in machine learning and or bayesian methods? A > > port of Netlab ( http://www.ncrg.aston.ac.uk/netlab/index.php) in > > SciPy would be great. > Actually, I am porting a code for Gaussian Mixture Models with batch and > online EM. I first try to do a pure python version to get an idea on > scipy capabilities, and then I intend to create the stub to a C > implementation (which already exists for matlab, the core being > independant of matlab). I am hoping to have a much cleaner > implementation, and more extensible (ie using other pdf, and why not > more general models) using python > languages capabilities (module, inheritance, etc...). > > I think porting netlab would be a huge task, and quite difficult; there > is also torch (http://www.torch.ch/) which may be interesting to use > (C++ code, BSD license). Having a machine learning tool box would be a > step forward for scipy, I guess. > > David > > ____________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From wbaxter at gmail.com Fri Apr 21 02:33:33 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Fri, 21 Apr 2006 15:33:33 +0900 Subject: [SciPy-user] array vs matrix, converting code from matlab In-Reply-To: References: <444764BD.4060206@ar.media.kyoto-u.ac.jp> <4448511D.3070306@ar.media.kyoto-u.ac.jp> <44485A65.70309@gmail.com> <44486061.8010008@ar.media.kyoto-u.ac.jp> <44486FDE.3080700@ar.media.kyoto-u.ac.jp> Message-ID: One thing... I'm not sure why you think porting Netlab to SciPy would be such a huge task. It's a big task, sure. Porting to C++ would definitely be a huge task. But I would think porting to another high-level language like python would be a one-summer job for a reasonably clueful grad student. It's only 9000 lines of code excluding comments and blank lines: [BAXTER-PC<7>netlab]> egrep -v '(^%|^$)' *.m | wc 8971 41422 361276 I think converting 100 lines a day for 90 days is not unreasonable. That includes all the demos too. 
If you leave out the demos it's about half that:

[BAXTER-PC<18>netlab]> egrep -v '(^%|^\s*$)' `ls *.m | grep -v '^dem'` | wc
   4171   18725  156628

Ok, maybe it's still a little unreasonable. Alright, maybe it's not a 1-man summer job. I've also ignored testing and converting the comments, but the task is also fairly parallelizable. Probably a little team of 3 eager new grad students could do a bang-up job over a summer. --bb On 4/21/06, David Cournapeau wrote: > > > Bill Baxter wrote: > > > By the way, I'd be interested in an n-dimension Gaussian function for > > > NumPy/SciPy too. > > > > > > Anyone else interested in machine learning and or bayesian methods? A > > > port of Netlab ( http://www.ncrg.aston.ac.uk/netlab/index.php) in > > > SciPy would be great. > > Actually, I am porting a code for Gaussian Mixture Models with batch and > > online EM. I first try to do a pure python version to get an idea on > > scipy capabilities, and then I intend to create the stub to a C > > implementation (which already exists for matlab, the core being > > independant of matlab). I am hoping to have a much cleaner > > implementation, and more extensible (ie using other pdf, and why not > > more general models) using python > > languages capabilities (module, inheritance, etc...). > > > > I think porting netlab would be a huge task, and quite difficult; there > > is also torch ( http://www.torch.ch/) which may be interesting to use > > (C++ code, BSD license). Having a machine learning tool box would be a > > step forward for scipy, I guess. > > > > David > > > > ____________ > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From david at ar.media.kyoto-u.ac.jp Fri Apr 21 02:54:49 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 21 Apr 2006 15:54:49 +0900 Subject: [SciPy-user] array vs matrix, converting code from matlab In-Reply-To: References: <444764BD.4060206@ar.media.kyoto-u.ac.jp> <4448511D.3070306@ar.media.kyoto-u.ac.jp> <44485A65.70309@gmail.com> <44486061.8010008@ar.media.kyoto-u.ac.jp> <44486FDE.3080700@ar.media.kyoto-u.ac.jp> Message-ID: <444881B9.4000801@ar.media.kyoto-u.ac.jp> Bill Baxter wrote: > One thing... I'm not sure why you think porting Netlab to SciPy would > be such a huge task. It's a big task, sure. Porting to C++ would > definitely be a huge task. Well, the nice thing with C++ is that you can plug it directly into python using swig and hand-coded wrapping code. It is actually one reason why I want to move to python: wrapping C code for matlab is awful (there is no way to control the memory handler, for example), and things like swig or boost::python are much better (without even taking into account that C and python have the same convention for indexing and row-major ordering). As the code is BSD, I think the licenses are compatible with scipy. I think in a summer internship, you could write a good swig or boost::python extension to have the wrapping mostly automated. Porting from matlab to scipy involves porting/testing all the code, whereas using C++ code involves mostly glue code. But maybe I am underestimating the difficulty...
David From wbaxter at gmail.com Fri Apr 21 03:28:06 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Fri, 21 Apr 2006 16:28:06 +0900 Subject: [SciPy-user] array vs matrix, converting code from matlab In-Reply-To: <444881B9.4000801@ar.media.kyoto-u.ac.jp> References: <444764BD.4060206@ar.media.kyoto-u.ac.jp> <4448511D.3070306@ar.media.kyoto-u.ac.jp> <44485A65.70309@gmail.com> <44486061.8010008@ar.media.kyoto-u.ac.jp> <44486FDE.3080700@ar.media.kyoto-u.ac.jp> <444881B9.4000801@ar.media.kyoto-u.ac.jp> Message-ID: Torch does look pretty nice. Yeh, providing wrappers for torch may be easier (and result in faster code) as long as their data format is relatively sane. I'm not really sure how one goes about interfacing numpy.arrays with external code, but it's certainly possible, since that's how the bulk of SciPy was written (by calling on external fortran or C code, not sure about C++). [info about NumPy and SWIG here if you haven't seen it already: http://www.scipy.org/Cookbook/SWIG_and_NumPy] The other problem with my estimate on time to port Matlab code is that a figure like 4200 lines doesn't reveal the real cost if one of those lines happens to be a call to something like Matlab's nonlinear optimization routines or something else for which there is currently no numpy equivalent. I don't think there are /many/ of those gotchas in Netlab, but eigs() is one of them. As far as I know SciPy has no function to get just a few eigenvalues without having to find them all. Nothing prevents it from being added to SciPy (matlab's eigs is just a wrapper for the freely available ARPACK) it just hasn't been done yet. Anyway if you're just wrapping existing C++, you know you're not going to run into rats' nests like that. --bb On 4/21/06, David Cournapeau wrote: > > Bill Baxter wrote: > > One thing... I'm not sure why you think porting Netlab to SciPy would > > be such a huge task. It's a big task, sure. Porting to C++ would > > definitely be a huge task. 
> Well, the nice thing with C++ is that you can plug it directly to python > using swig and hand-coded wrapping code. It is actually one reason why I > want to go on python: wrapping C code for matlab is awful (there is no > way to control the memory handler, for example), and things like swig or > python::boost are much better (without even taking into account that C > and python have the same convention for indexing and row major > ordering). As the code is BSD, I think the licenses are compatible with > scipy. I think in a summer internship, you could write good swig or > boost::python extension to have the wrapping mostly automated. > > Porting from matlab to scipy involve porting/testing all the code, > whereas using C++ code involve mostly glue-code. But maybe I am > underestimating the difficulty... > > David > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From schofield at ftw.at Fri Apr 21 06:17:34 2006 From: schofield at ftw.at (Ed Schofield) Date: Fri, 21 Apr 2006 12:17:34 +0200 Subject: [SciPy-user] array vs matrix, converting code from matlab In-Reply-To: <4448511D.3070306@ar.media.kyoto-u.ac.jp> References: <444764BD.4060206@ar.media.kyoto-u.ac.jp> <4448511D.3070306@ar.media.kyoto-u.ac.jp> Message-ID: <4448B13E.4090103@ftw.at> David Cournapeau wrote: > To be more specific, I am trying to convert a function which compute > multivariate Gaussian densities. It should be able to handle scalar > case, the case where the mean is a vector, and the case where va is a > vector (diagonal covariance matrix) or square matrix (full covariance > matrix). > So, in matlab, I simply do: > > function [n, d, K, varmode] = gaussd_args(data, mu, var) > > [n, d] = size(data); > [dm0, dm1] = size(mu); > [dv0, dv1]= size(var); > > And I check that the dimensions are what I expect afterwards. Using > arrays, I don't see a simple way to do that while passing scalar > arguments to the functions. 
So either I should be using matrix type (and > using asmatrix on the arguments), or I should never pass scalar to the > function, and always pass arrays. But maybe I've used matlab too much, > and there is a much simpler way to do that in scipy. > To sum it up, what is the convention in scipy when a function > handles both scalars and arrays? Is there an idiom to treat scalars and > arrays of size 1 the same way, whatever the number of dimensions arrays > may have? > You could use rank-0 arrays instead of scalars. For example, if your function were to wrap the arguments up with 'asarray', they'd then have the normal methods and attributes of arrays:

def foo(x, mu, va):
    x = asarray(x)
    mu = asarray(mu)
    va = asarray(va)
    if mu.ndim == 0 and va.ndim == 0:
        call scalar_implementation
        return result
    if mu.ndim == 1 and va.ndim == 1:
        call scalar implementation on each element
    if mu.ndim == 1 and va.ndim == 2:
        call matrix implementation

-- Ed From chuckles at llnl.gov Fri Apr 21 11:30:27 2006 From: chuckles at llnl.gov (Chuckles McGregor) Date: Fri, 21 Apr 2006 08:30:27 -0700 Subject: [SciPy-user] (no subject) Message-ID: <6.2.1.2.2.20060421083018.033feaa8@mail.llnl.gov> good day, I've been trying to get weave working. I'm on a win2k box running python 2.4.3, with mingw g++ ver 3.4.2, scipy 0.4.8, numpy 0.9.6, and I can't get this example (and some of the others) from the doc to work. the error is: `Py' has not been declared what did I miss in installing/configuring this? I got hello world to work ok. chuckles

>>> a=1
>>> a = weave.inline("return_val = Py::new_reference_to(Py::Int(a+1));",['a'])
No module named msvccompiler in numpy.distutils, trying from distutils..
cc1plus.exe: warning: command line option "-Wstrict-prototypes" is valid for Ada/C/ObjC but not for C++ c:\docume~1\mcgreg~1\locals~1\temp\mcgregor1\python24_compiled\sc_5b09eaf68ff529a1fbaedc892ca5a4530.cpp:1: warning: ignoring #pragma warning c:\docume~1\mcgreg~1\locals~1\temp\mcgregor1\python24_compiled\sc_5b09eaf68ff529a1fbaedc892ca5a4530.cpp:2: warning: ignoring #pragma warning c:\docume~1\mcgreg~1\locals~1\temp\mcgregor1\python24_compiled\sc_5b09eaf68ff529a1fbaedc892ca5a4530.cpp: In function `PyObject* file_to_py(FILE*, char*, char*)': c:\docume~1\mcgreg~1\locals~1\temp\mcgregor1\python24_compiled\sc_5b09eaf68ff529a1fbaedc892ca5a4530.cpp:399: warning: unused variable 'py_obj' c:\docume~1\mcgreg~1\locals~1\temp\mcgregor1\python24_compiled\sc_5b09eaf68ff529a1fbaedc892ca5a4530.cpp: In function `PyObject* compiled_func(PyObject*, PyObject*)': c:\docume~1\mcgreg~1\locals~1\temp\mcgregor1\python24_compiled\sc_5b09eaf68ff529a1fbaedc892ca5a4530.cpp:658: error: `Py' has not been declared c:\docume~1\mcgreg~1\locals~1\temp\mcgregor1\python24_compiled\sc_5b09eaf68ff529a1fbaedc892ca5a4530.cpp:658: error: `Py' has not been declared c:\docume~1\mcgreg~1\locals~1\temp\mcgregor1\python24_compiled\sc_5b09eaf68ff529a1fbaedc892ca5a4530.cpp:658: error: `Int' undeclared (first use this function) c:\docume~1\mcgreg~1\locals~1\temp\mcgregor1\python24_compiled\sc_5b09eaf68ff529a1fbaedc892ca5a4530.cpp:658: error: (Each undeclared identifier is reported only once for each function it appears in.) 
c:\docume~1\mcgreg~1\locals~1\temp\mcgregor1\python24_compiled\sc_5b09eaf68ff529a1fbaedc892ca5a4530.cpp:658: error: `new_reference_to' undeclared (first use this function) Traceback (most recent call last): File "", line 1, in -toplevel- a = weave.inline("return_val = Py::new_reference_to(Py::Int(a+1));",['a']) File "C:\Python24\Lib\site-packages\scipy\weave\inline_tools.py", line 334, in inline auto_downcast = auto_downcast, File "C:\Python24\Lib\site-packages\scipy\weave\inline_tools.py", line 442, in compile_function verbose=verbose, **kw) File "C:\Python24\Lib\site-packages\scipy\weave\ext_tools.py", line 353, in compile verbose = verbose, **kw) File "C:\Python24\Lib\site-packages\scipy\weave\build_tools.py", line 274, in build_extension setup(name = module_name, ext_modules = [ext],verbose=verb) File "C:\Python24\Lib\site-packages\numpy\distutils\core.py", line 85, in setup return old_setup(**new_attr) File "C:\Python24\lib\distutils\core.py", line 166, in setup raise SystemExit, "error: " + str(msg) CompileError: error: Command "g++ -O2 -Wall -Wstrict-prototypes -IC:\Python24\lib\site-packages\scipy\weave -IC:\Python24\lib\site-packages\scipy\weave\scxx -IC:\Python24\lib\site-packages\numpy\core\include -IC:\Python24\include -IC:\Python24\PC -c c:\docume~1\mcgreg~1\locals~1\temp\mcgregor1\python24_compiled\sc_5b09eaf68ff529a1fbaedc892ca5a4530.cpp -o c:\docume~1\mcgreg~1\locals~1\temp\mcgregor1\python24_intermediate\compiler_d41d8cd98f00b204e9800998ecf8427e\Release\docume~1\mcgreg~1\locals~1\temp\mcgregor1\python24_compiled\sc_5b09eaf68ff529a1fbaedc892ca5a4530.o" failed with exit status 1 >>> -------------- next part -------------- An HTML attachment was scrubbed... 
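[Editor's note, returning to the array-vs-matrix thread: Ed Schofield's `asarray`/`ndim` dispatch sketch can be fleshed out into runnable code. The following is an illustration of the Gaussian-density case David described — the function name, argument names, and exact normalisations are the editor's assumptions, not from the thread:]

```python
import numpy as np

def gauss_den(x, mu, va):
    """Gaussian density, dispatching on the rank of mu and va (illustrative)."""
    x, mu, va = np.asarray(x, float), np.asarray(mu, float), np.asarray(va, float)
    if mu.ndim == 0 and va.ndim == 0:
        # scalar mean and variance
        return np.exp(-0.5 * (x - mu) ** 2 / va) / np.sqrt(2 * np.pi * va)
    if mu.ndim == 1 and va.ndim == 1:
        # diagonal covariance: product of independent 1-D densities
        q = ((x - mu) ** 2 / va).sum(axis=-1)
        norm = np.sqrt((2 * np.pi) ** mu.size * va.prod())
        return np.exp(-0.5 * q) / norm
    if mu.ndim == 1 and va.ndim == 2:
        # full covariance matrix
        dx = x - mu
        q = dx @ np.linalg.inv(va) @ dx
        norm = np.sqrt((2 * np.pi) ** mu.size * np.linalg.det(va))
        return np.exp(-0.5 * q) / norm
    raise ValueError("unsupported ranks for mu and va")
```

In the scalar branch this gives 1/sqrt(2*pi) at the mode of a standard normal, and the diagonal and full-covariance branches agree when va is [1, 1] versus eye(2).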
URL: From robert.kern at gmail.com Fri Apr 21 11:39:48 2006 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 21 Apr 2006 10:39:48 -0500 Subject: [SciPy-user] array vs matrix, converting code from matlab In-Reply-To: References: <444764BD.4060206@ar.media.kyoto-u.ac.jp> <4448511D.3070306@ar.media.kyoto-u.ac.jp> <44485A65.70309@gmail.com> <44486061.8010008@ar.media.kyoto-u.ac.jp> <44486FDE.3080700@ar.media.kyoto-u.ac.jp> <444881B9.4000801@ar.media.kyoto-u.ac.jp> Message-ID: <4448FCC4.6030003@gmail.com> Bill Baxter wrote: > The other problem with my estimate on time to port Matlab code is that a > figure like 4200 lines doesn't reveal the real cost if one of those > lines happens to be a call to something like Matlab's nonlinear > optimization routines or something else for which there is currently no > numpy equivalent. Have you looked at scipy.optimize? -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From massimo.sandal at unibo.it Fri Apr 21 14:19:35 2006 From: massimo.sandal at unibo.it (massimo sandal) Date: Fri, 21 Apr 2006 20:19:35 +0200 Subject: [SciPy-user] [OT] - what happens to the cmd module in python 2.3.5 on windows? Message-ID: <44492237.5050101@unibo.it> Hi, Sorry for the off-topic request, but I can't find anything by googling and I'm sure to talk to a crowd of expert Pythoneers, so... I'm building a CLI+GUI app for data analysis using wxpython and matplotlib for the GUI, and the Cmd standard Python module for the command line. The cli and the gui run in two separate threads. On Debian GNU/Linux the application works perfectly. I'm trying to get it working on Windows too. I'd like it to be able to work with the Enthought python distribution on Windows, that already includes 90% of the external libraries (scipy, numarray, wxpython etc.) 
I need, so people don't have to install a bazillion dependencies one by one to get it working -just a few must be downloaded in addition. This distribution ships Python 2.3.5. Now, on windows the GUI thread starts apparently correctly, but the CLI doesn't work and stops with the following error: Exception in thread Thread-1:Traceback (most recent call last): File "C:\Python23\lib\threading.py", line 442, in __bootstrap self.run() File "hooke.py", line 57, in run cli.cmdloop() File "C:\Python23\lib\cmd.py", line 109, in cmdloop self.preloop() File "C:\Python23\lib\cmd.py", line 152, in preloop import readline File "C:\Python23\lib\site-packages\readline\__init__.py", line 1, in ? from PyReadline import * File "C:\Python23\lib\site-packages\readline\PyReadline.py", line 1091, in ? rl = Readline() File "C:\Python23\lib\site-packages\readline\PyReadline.py", line 46, in __ini t__ self.emacs_editing_mode(None) File "C:\Python23\lib\site-packages\readline\PyReadline.py", line 1008, in ema cs_editing_mode self._bind_key('"%s"' % chr(c), self.self_insert) File "C:\Python23\lib\site-packages\readline\PyReadline.py", line 1000, in _bi nd_key keyinfo = key_text_to_keyinfo(key) File "C:\Python23\lib\site-packages\readline\keysyms.py", line 101, in key_tex t_to_keyinfo return keyseq_to_keyinfo(keytext[1:-1]) File "C:\Python23\lib\site-packages\readline\keysyms.py", line 163, in keyseq_ to_keyinfo res.append(char_to_keyinfo(keyseq[0], control, meta, shift)) File "C:\Python23\lib\site-packages\readline\keysyms.py", line 111, in char_to _keyinfo raise ValueError, 'bad key' ValueError: bad key I really can't understand how to patch the thing here. Any suggestion? Thanks again for your patience, m. -- Massimo Sandal University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: massimo.sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From fullung at gmail.com Fri Apr 21 21:26:53 2006 From: fullung at gmail.com (Albert Strasheim) Date: Sat, 22 Apr 2006 03:26:53 +0200 Subject: [SciPy-user] array vs matrix, converting code from matlab In-Reply-To: Message-ID: <001e01c665ab$d63b95c0$0502010a@dsp.sun.ac.za> Hello all Just thought I'd throw in my 2 cents.
I recently started with my masters thesis, which will probably focus on the application of Gaussian Mixture Models and Support Vector Machines to the problem of speaker verification. I would be very interested in cooperating on any effort to implement or wrap GMM and SVM code for SciPy. As far as SVM libraries go, I quite like SVM-Light and libsvm, which both have Python wrappers. Unfortunately these don't integrate with NumPy at the moment. http://www.cs.cornell.edu/~tomf/svmpython/ http://www.csie.ntu.edu.tw/~cjlin/libsvm/ I'm also interested in implementing some feature extraction algorithms and compensation techniques, such as Mel-Frequency Cepstral Coefficients (MFCC), RASTA filtering and others. There's some very nice MATLAB code that can serve as a starting point for these efforts: http://www.ee.columbia.edu/labrosa/matlab/rastamat/ Anybody else doing speech recognition/speaker verification/etc. with NumPy and SciPy? Regards, Albert _____ From: scipy-user-bounces at scipy.net [mailto:scipy-user-bounces at scipy.net] On Behalf Of Bill Baxter Sent: 21 April 2006 09:28 To: SciPy Users List Subject: Re: [SciPy-user] array vs matrix, converting code from matlab Torch does look pretty nice. Yeh, providing wrappers for torch may be easier (and result in faster code) as long as their data format is relatively sane. I'm not really sure how one goes about interfacing numpy.arrays with external code, but it's certainly possible, since that's how the bulk of SciPy was written (by calling on external fortran or C code, not sure about C++). [info about NumPy and SWIG here if you haven't seen it already: http://www.scipy.org/Cookbook/SWIG_and_NumPy] The other problem with my estimate on time to port Matlab code is that a figure like 4200 lines doesn't reveal the real cost if one of those lines happens to be a call to something like Matlab's nonlinear optimization routines or something else for which there is currently no numpy equivalent. 
I don't think there are /many/ of those gotchas in Netlab, but eigs() is one of them. As far as I know SciPy has no function to get just a few eigenvalues without having to find them all. Nothing prevents it from being added to SciPy (matlab's eigs is just a wrapper for the freely available ARPACK) it just hasn't been done yet. Anyway if you're just wrapping existing C++, you know you're not going to run into rats' nests like that. --bb On 4/21/06, David Cournapeau wrote: Bill Baxter wrote: > One thing... I'm not sure why you think porting Netlab to SciPy would > be such a huge task. It's a big task, sure. Porting to C++ would > definitely be a huge task. Well, the nice thing with C++ is that you can plug it directly to python using swig and hand-coded wrapping code. It is actually one reason why I want to go on python: wrapping C code for matlab is awful (there is no way to control the memory handler, for example), and things like swig or python::boost are much better (without even taking into account that C and python have the same convention for indexing and row major ordering). As the code is BSD, I think the licenses are compatible with scipy. I think in a summer internship, you could write good swig or boost::python extension to have the wrapping mostly automated. Porting from matlab to scipy involve porting/testing all the code, whereas using C++ code involve mostly glue-code. But maybe I am underestimating the difficulty... David -------------- next part -------------- An HTML attachment was scrubbed... URL: From williams at astro.ox.ac.uk Sat Apr 22 09:25:59 2006 From: williams at astro.ox.ac.uk (Michael Williams) Date: Sat, 22 Apr 2006 14:25:59 +0100 Subject: [SciPy-user] [wxPython-users] [OT] - what happens to the cmd module in python 2.3.5 on windows? 
In-Reply-To: <44492237.5050101@unibo.it> References: <44492237.5050101@unibo.it> Message-ID: <20060422132559.GA385@astro.ox.ac.uk> Hi Massimo, On Fri, Apr 21, 2006 at 08:19:35PM +0200, massimo sandal wrote: >Sorry for the off-topic request, but I can't find anything by googling >and I'm sure to talk to a crowd of expert Pythoneers, so... I really don't want to seem aggressive, but I think you might be more likely to get help by posting to another list. If you know it's off-topic then it remains off-topic -- even if you know that knowledgeable people are reading it. What you're doing now, by cross-posting an off-topic question to two lists of knowledgeable people, is the same as posting a question about how to do some generic thing in C to linux-kernel. The recipients might well know the answer, but I doubt they'd welcome the question! Good luck! -- Mike From ryanlists at gmail.com Sat Apr 22 20:54:23 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Sat, 22 Apr 2006 20:54:23 -0400 Subject: [SciPy-user] import/include in f2py Message-ID: If I have a user-defined function in a separate .f or .so file, how do I include that function in another fortran file for use with f2py? i.e. if I define:

double complex function zcosh(z)
double complex z
zcosh = 0.5*(exp(z)+exp(-z))
RETURN
END

in mylib.f, and then I want to use it like this:

double complex function bode(s)
double complex s
double complex zsinh, zcosh

in bode(s) in otherfile.f, what line(s) do I need in otherfile.f to make it find zcosh(z)? I guess I am asking what is the fortran equivalent of include from c or import in python? I have looked in numerous fortran books and poked around in google, but I am missing something here. Thanks, Ryan From robert.kern at gmail.com Sat Apr 22 21:04:40 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 22 Apr 2006 20:04:40 -0500 Subject: [SciPy-user] import/include in f2py In-Reply-To: References: Message-ID: <444AD2A8.5040502@gmail.com> Ryan Krauss wrote:
> If I have a user-defined function in a separate .f or .so file, how do
> I include that function in another fortran file for use with f2py?
> i.e. if I define:
>
> double complex function zcosh(z)
> double complex z
> zcosh = 0.5*(exp(z)+exp(-z))
> RETURN
> END
>
> in mylib.f
>
> and then I want to use it like this:
>
> double complex function bode(s)
> double complex s
> double complex zsinh, zcosh
>
> in bode(s) in otherfile.f, what line(s) do I need in otherfile.f to
> make it find zcosh(z)?
None. Just link the object files during linking. > I guess I am asking what is the fortran equivalent of include from c > or import in python?
There is none. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From wegwerp at gmail.com Sun Apr 23 13:25:18 2006 From: wegwerp at gmail.com (weg werp) Date: Sun, 23 Apr 2006 19:25:18 +0200 Subject: [SciPy-user] scipy.test() crashes under win2k In-Reply-To: <6f54c160604141427m4f28c5fbse630d75acc30a2dd@mail.gmail.com> References: <6f54c160604141427m4f28c5fbse630d75acc30a2dd@mail.gmail.com> Message-ID: <6f54c160604231025h18a322b5wb23e638c89fd1788@mail.gmail.com> Hi group, I finally tried to install scipy. Everything appears to work, but scipy.test() crashes. I installed the prebuilt versions: first numpy-0.9.6r1.win32-py2.4.exe, then scipy-0.4.8.win32-py2.4-pentium3.exe (I assume that Pentium 4/SSE2 does not work on my Athlon). import numpy;numpy.test() works fine, but import scipy;scipy.test() first gives some warnings about overwriting fft, then runs all the tests ok, then shows a lot of dots (file test?) and crashes (drwatson log available). Any ideas? System: win2k sp4, Athlon XP 1700+, 768 MB Python 2.4 (#60, Nov 30 2004, 09:34:21) [MSC v.1310 32 bit (Intel)] on win32 Other point, slight nitpick for the website: it took me a few minutes to figure out the difference between the download and the install sections on the scipy website. In the download section I see some executables for windows, which is what I want as a newby. I then went to 'install' and expected some installation guidelines, but this seems more like an instruction for compiling from source (instant newby panic). Could the difference be made a little bit more clear? A few words are probably ok: -download section: 'these are prebuilt versions, for the latest version you have to compile yourself, see install'.
-install section: 'installing from source, to download a prebuilt stable version see download' Thanks, Bas p.s.: I tried to send this message without subscribing to the list, but it probably got stuck in moderator approval. Is the policy to allow no posts from non-subscribers? If so, this should be stated on the web-page.... From robert.kern at gmail.com Sun Apr 23 14:27:11 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 23 Apr 2006 13:27:11 -0500 Subject: [SciPy-user] scipy.test() crashes under win2k In-Reply-To: <6f54c160604231025h18a322b5wb23e638c89fd1788@mail.gmail.com> References: <6f54c160604141427m4f28c5fbse630d75acc30a2dd@mail.gmail.com> <6f54c160604231025h18a322b5wb23e638c89fd1788@mail.gmail.com> Message-ID: <444BC6FF.3010005@gmail.com> weg werp wrote: > Hi group, > > I finally tried to install scipy. Everything appears to work, but > scipy.test() crashes. > > I installed the prebuilt versions: > first numpy-0.9.6r1.win32-py2.4.exe, then > scipy-0.4.8.win32-py2.4-pentium3.exe (I assume that Pentium 4/SSE2 > does not work on my Athlon). > > import numpy;numpy.test() > works fine, but > import scipy;scipy.test() > first gives some warnings about overwriting fft, then runs all the > tests ok, then shows a lot of dots (file test?) and crashes (drwatson > log available). > > Any ideas? Please run the tests with greater verbosity; i.e. scipy.test(10,10). Then the test framework will print out the name of the test before it runs. That way, we will know what fails in particular. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco From schofield at ftw.at Sun Apr 23 14:43:18 2006 From: schofield at ftw.at (Ed Schofield) Date: Sun, 23 Apr 2006 20:43:18 +0200 Subject: [SciPy-user] scipy.test() crashes under win2k In-Reply-To: <6f54c160604231025h18a322b5wb23e638c89fd1788@mail.gmail.com> References: <6f54c160604141427m4f28c5fbse630d75acc30a2dd@mail.gmail.com> <6f54c160604231025h18a322b5wb23e638c89fd1788@mail.gmail.com> Message-ID: <444BCAC6.9060302@ftw.at> weg werp wrote: > Other point, slight nitpick for the website: it took my a few minutes > to figure out the difference between the download and the install > sections on the scipy website. In the download section I see some > executables for windows, which is what I want as a newby. I then went > to 'install' and expected some installation guidelines, but this seems > more like an instruction for compiling from source (instant newby > panic). Could the difference be made a little bit more clear? A few > words are probably ok: > -download section: 'these are prebuilt versions, for the latest > version you have to compile yourself, see install'. > -install section: 'installing from source, to download a prebuilt > stable version see download' > Thanks for the tip. I've changed the Install page to try to explain this better. -- Ed From wegwerp at gmail.com Sun Apr 23 15:13:37 2006 From: wegwerp at gmail.com (weg werp) Date: Sun, 23 Apr 2006 21:13:37 +0200 Subject: [SciPy-user] scipy.test() crashes under win2k In-Reply-To: <444BC6FF.3010005@gmail.com> References: <6f54c160604141427m4f28c5fbse630d75acc30a2dd@mail.gmail.com> <6f54c160604231025h18a322b5wb23e638c89fd1788@mail.gmail.com> <444BC6FF.3010005@gmail.com> Message-ID: <6f54c160604231213x4f98e24bre9261f9dbe3d7a7a@mail.gmail.com> > Please run the tests with greater verbosity; i.e. scipy.test(10,10). Then the > test framework will print out the name of the test before it runs. That way, we > will know what fails in particular. 
The last line before the crash is check_simple (scipy.linalg.tests.test_decomp.test_schur) ... ok so it is probably (one of) the next test(s) that crashes. According to Dr. Watson it is Exception number c000001d (illegal instruction) Bas From robert.kern at gmail.com Sun Apr 23 16:01:58 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 23 Apr 2006 15:01:58 -0500 Subject: [SciPy-user] scipy.test() crashes under win2k In-Reply-To: <6f54c160604231213x4f98e24bre9261f9dbe3d7a7a@mail.gmail.com> References: <6f54c160604141427m4f28c5fbse630d75acc30a2dd@mail.gmail.com> <6f54c160604231025h18a322b5wb23e638c89fd1788@mail.gmail.com> <444BC6FF.3010005@gmail.com> <6f54c160604231213x4f98e24bre9261f9dbe3d7a7a@mail.gmail.com> Message-ID: <444BDD36.6090003@gmail.com> weg werp wrote: >>Please run the tests with greater verbosity; i.e. scipy.test(10,10). Then the >>test framework will print out the name of the test before it runs. That way, we >>will know what fails in particular. > > The last line before the crash is > check_simple (scipy.linalg.tests.test_decomp.test_schur) ... ok > so it is probably (one of) the next test(s) that crashes. Hmm. The "ok" means that the test ran successfully. The name of the test is always printed *before* the test is run, so it seems that something is crashing *in between* tests. I recommend trying other binaries and seeing if the same crash happens. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Sun Apr 23 20:50:46 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 23 Apr 2006 19:50:46 -0500 Subject: [SciPy-user] Changing the Trac authentication Message-ID: <444C20E5.7090309@gmail.com> I will be changing the Trac authentication over the next hour or so.
I will be installing the AccountManagerPlugin to allow users to create accounts for themselves without needing to have SVN write access. Anonymous users will not be able to edit the Wikis or tickets. Non-developer but registered users will be able to do so with some restrictions, notably not being able to resolve tickets. Developers who currently have accounts will have the same username/password as before. If you have problems using the Trac sites before I announce that I am done, please wait until I am finished. If there are still problems, please let me know and I will try to fix them as soon as possible. Thank you for your patience. Hopefully, this change will resolve the spam problem. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Sun Apr 23 21:11:05 2006 From: robert.kern at gmail.com (Robert Kern) Date: Sun, 23 Apr 2006 20:11:05 -0500 Subject: [SciPy-user] Changing the Trac authentication In-Reply-To: <444C20E5.7090309@gmail.com> References: <444C20E5.7090309@gmail.com> Message-ID: <444C25A9.8080701@gmail.com> Robert Kern wrote: > I will be changing the Trac authentication over the next hour or so. Never mind. I'll have to do it tomorrow when I get to the office. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From imcsee at gmail.com Mon Apr 24 08:35:58 2006 From: imcsee at gmail.com (imcs ee) Date: Mon, 24 Apr 2006 20:35:58 +0800 Subject: [SciPy-user] asking for help with a UserWarning Message-ID: Debian Sarge. When I use the scipy package, it prints the warning below, but the python2.4-profiler is not in the sources ... and I do not have the privileges to install it.
Could I just ignore it, and will the results be reliable? ---
/usr/lib/python2.4/site-packages/scipy_base/ppimport.py:273: UserWarning: The pstats module is not available. Install the python2.4-profiler Debian package if you need it
  module = __import__(name,None,None,['*'])
From gnchen at cortechs.net Mon Apr 24 11:42:18 2006 From: gnchen at cortechs.net (Gennan Chen) Date: Mon, 24 Apr 2006 08:42:18 -0700 Subject: [SciPy-user] port C/C++ matlab mexing code to numpy Message-ID: <732BCAFB-8511-4D79-904C-22CD69DF0B19@cortechs.net> Hi! All, We have a lot of C/C++ code written for interacting with Matlab (i.e. mexing code). I was wondering what's the best approach to port them into python/numpy/scipy? How about using SWIG? Any recommendation will be welcomed.... Gen-Nan Chen, PhD Chief Scientist Research and Development Group CorTechs Labs Inc (www.cortechs.net) 1020 Prospect St., #304, La Jolla, CA, 92037 Tel: 1-858-459-9700 ext 16 Fax: 1-858-459-9705 Email: gnchen at cortechs.net From cimrman3 at ntc.zcu.cz Mon Apr 24 12:09:05 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Mon, 24 Apr 2006 18:09:05 +0200 Subject: [SciPy-user] which parallel programming package? Message-ID: <444CF821.50505@ntc.zcu.cz> Hi, I have checked the homepages of various parallel programming packages (based on MPI) listed at http://scipy.org/Topical_Software, and none of them seems to have been updated recently. Which of them do/would you use and recommend? I do not need anything fancy, just pass a bunch of data (numpy arrays, scalars) over a cluster, and quickly. r.
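[On Gennan's question about porting mex-style C code: besides the SWIG and Boost.Python routes discussed below, plain C entry points can also be reached with ctypes (bundled with Python from 2.5 onward, a separate package before that). A minimal sketch, using the C math library's cosh() as a stand-in for one's own compiled routine; the library lookup and fallback are platform-dependent assumptions:]

```python
# Sketch only: calling a plain C function through ctypes rather than a
# generated SWIG/Boost.Python wrapper. cosh() from libm stands in for a
# user's own compiled routine.
import ctypes
import ctypes.util
import math

name = ctypes.util.find_library("m")  # e.g. "libm.so.6" on Linux
libm = ctypes.CDLL(name) if name else ctypes.CDLL(None)

# Declare the C prototype, double cosh(double), so arguments and the
# return value are converted properly instead of defaulting to int.
libm.cosh.restype = ctypes.c_double
libm.cosh.argtypes = [ctypes.c_double]

assert abs(libm.cosh(1.0) - math.cosh(1.0)) < 1e-12
print(libm.cosh(0.0))  # 1.0
```

[For whole arrays one would still loop or pass a pointer obtained from numpy, so for heavy numeric kernels the SWIG/numpy.i route below remains the better fit; ctypes shines when there are only a few C entry points to expose.]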
From tom.denniston at alum.dartmouth.org Mon Apr 24 12:09:44 2006 From: tom.denniston at alum.dartmouth.org (Tom Denniston) Date: Mon, 24 Apr 2006 11:09:44 -0500 Subject: [SciPy-user] port C/C++ matlab mexing code to numpy In-Reply-To: <732BCAFB-8511-4D79-904C-22CD69DF0B19@cortechs.net> References: <732BCAFB-8511-4D79-904C-22CD69DF0B19@cortechs.net> Message-ID: Look at swig and boost python. Boost is more pythonic. Swig a little more automatic. I would try both on small examples and determine what you prefer. I personally like boost a little better. On 4/24/06, Gennan Chen wrote: > Hi! All, > > We have a a lot of C/C++ code written for interacting with Matlab > (i.,e mexing code). I was wondering what's the best approach to port > them into python/numpy/scipy? How about using SWIG? Any > recommendation will be welcomed.... > > Gen-Nan Chen, PhD > Chief Scientist > Research and Development Group > CorTechs Labs Inc (www.cortechs.net) > 1020 Prospect St., #304, La Jolla, CA, 92037 > Tel: 1-858-459-9700 ext 16 > Fax: 1-858-459-9705 > Email: gnchen at cortechs.net > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From gnchen at cortechs.net Mon Apr 24 12:16:38 2006 From: gnchen at cortechs.net (Gennan Chen) Date: Mon, 24 Apr 2006 09:16:38 -0700 Subject: [SciPy-user] port C/C++ matlab mexing code to numpy In-Reply-To: References: <732BCAFB-8511-4D79-904C-22CD69DF0B19@cortechs.net> Message-ID: <08C44B88-D753-4E11-837A-4422A9B0DD71@cortechs.net> Thanks!! Since we have more C code than C++, is there a example for using SWIG with numpy in the numpy or scipy's repository? I need to access and return a numpy's 3d array for most of my calculation. Gen On Apr 24, 2006, at 9:09 AM, Tom Denniston wrote: > Look at swig and boost python. Boost is more pythonic. Swig a little > more automatic. I would try both on small examples and determine what > you prefer. 
I personally like boost a little better. > > On 4/24/06, Gennan Chen wrote: >> Hi! All, >> >> We have a a lot of C/C++ code written for interacting with Matlab >> (i.,e mexing code). I was wondering what's the best approach to port >> them into python/numpy/scipy? How about using SWIG? Any >> recommendation will be welcomed.... >> >> Gen-Nan Chen, PhD >> Chief Scientist >> Research and Development Group >> CorTechs Labs Inc (www.cortechs.net) >> 1020 Prospect St., #304, La Jolla, CA, 92037 >> Tel: 1-858-459-9700 ext 16 >> Fax: 1-858-459-9705 >> Email: gnchen at cortechs.net >> >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.net >> http://www.scipy.net/mailman/listinfo/scipy-user >> > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From russel at appliedminds.net Mon Apr 24 12:18:54 2006 From: russel at appliedminds.net (Russel) Date: Mon, 24 Apr 2006 09:18:54 -0700 Subject: [SciPy-user] which parallel programming package? In-Reply-To: <444CF821.50505@ntc.zcu.cz> References: <444CF821.50505@ntc.zcu.cz> Message-ID: <1DF59A0E-DA4A-4272-BEE5-D77EE11E5E3D@appliedminds.net> I have had success with http://www.penzilla.net/mmpi/ on gentoo linux with mpich2 I am trying to get it working on solaris today, but mpich2 is being difficult Russel On Apr 24, 2006, at 9:09 AM, Robert Cimrman wrote: > Hi, > > I have checked the homepages of various parallel programing packages > (based on MPI) listed at http://scipy.org/Topical_Software, and > none of > them seems to be recently updated. Which of them do/would you use and > recommend? I do not need anything fancy, just pass a bunch of data > (numpy arrays, scalars) over a cluster, and quickly. > > r. 
> > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user From fullung at gmail.com Mon Apr 24 12:28:39 2006 From: fullung at gmail.com (Albert Strasheim) Date: Mon, 24 Apr 2006 18:28:39 +0200 Subject: [SciPy-user] port C/C++ matlab mexing code to numpy In-Reply-To: <08C44B88-D753-4E11-837A-4422A9B0DD71@cortechs.net> Message-ID: <009201c667bc$24d29190$0502010a@dsp.sun.ac.za> Hello Gen I was able to get going with SWIG using the following: http://projects.scipy.org/scipy/numpy/browser/trunk/numpy/doc/swig/ http://www.scipy.org/Cookbook/SWIG_and_NumPy If you're building on Windows, check out: http://projects.scipy.org/scipy/numpy/ticket/77 My SWIG file so far:

/* -*- C -*- */
%module _svmlight
%{
#define SWIG_FILE_WITH_INIT
#include "svm_light/svm_common.h"
#include "svm_light/svm_learn.h"
#include "svmlight_wrap.h"
%}
%include "numpy.i"
%init %{
import_array();
%}
%apply (double* IN_ARRAY2, int DIM1, int DIM2) {(double* const features, int const rows, int const cols)};
%apply (double* IN_ARRAY1, int DIM1) {(double* const labels, int const size)};
#include "svmlight_wrap.h"
/* eof */

This wraps the following function defined in svmlight_wrap.h:

void* svmlearn(double* const features,
               int const rows,
               int const cols,
               double* const labels,
               int const size);

From Python I can call it like so:

x = array([[1.0,2.0,3.0],[4.0,5.0,6.0]])
y = array([1,-1])
model = svmlight.svmlearn(x, y)

Check the example in the SWIG NumPy docs for more info. Regards, Albert > -----Original Message----- > From: scipy-user-bounces at scipy.net [mailto:scipy-user-bounces at scipy.net] > On Behalf Of Gennan Chen > Sent: 24 April 2006 18:17 > To: SciPy Users List > Subject: Re: [SciPy-user] port C/C++ matlab mexing code to numpy > > Thanks!! Since we have more C code than C++, is there a example for > using SWIG with numpy in the numpy or scipy's repository?
> I need to access and return a numpy's 3d array for most of my > calculation. > > Gen > > > On Apr 24, 2006, at 9:09 AM, Tom Denniston wrote: > > > Look at swig and boost python. Boost is more pythonic. Swig a little > > more automatic. I would try both on small examples and determine what > > you prefer. I personally like boost a little better. > > > > On 4/24/06, Gennan Chen wrote: > >> Hi! All, > >> > >> We have a a lot of C/C++ code written for interacting with Matlab > >> (i.,e mexing code). I was wondering what's the best approach to port > >> them into python/numpy/scipy? How about using SWIG? Any > >> recommendation will be welcomed.... > >> > >> Gen-Nan Chen, PhD > >> Chief Scientist > >> Research and Development Group > >> CorTechs Labs Inc (www.cortechs.net) > >> 1020 Prospect St., #304, La Jolla, CA, 92037 > >> Tel: 1-858-459-9700 ext 16 > >> Fax: 1-858-459-9705 > >> Email: gnchen at cortechs.net From stefan at sun.ac.za Mon Apr 24 12:34:36 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Mon, 24 Apr 2006 18:34:36 +0200 Subject: [SciPy-user] which parallel programming package? In-Reply-To: <444CF821.50505@ntc.zcu.cz> References: <444CF821.50505@ntc.zcu.cz> Message-ID: <20060424163436.GD29509@sun.ac.za> The development branch of IPython now has some support for parallel computing. An overview of the new design is at http://projects.scipy.org/ipython/ipython/wiki/NewDesign where you will also find http://projects.scipy.org/ipython/ipython/wiki/NewDesign/ParallelOverview Regards St?fan On Mon, Apr 24, 2006 at 06:09:05PM +0200, Robert Cimrman wrote: > Hi, > > I have checked the homepages of various parallel programing packages > (based on MPI) listed at http://scipy.org/Topical_Software, and none of > them seems to be recently updated. Which of them do/would you use and > recommend? I do not need anything fancy, just pass a bunch of data > (numpy arrays, scalars) over a cluster, and quickly. > > r. 
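[A note on Albert's typemap lines above: `%apply (double* IN_ARRAY2, int DIM1, int DIM2)` makes the generated wrapper accept any object convertible to a contiguous, C-ordered array of C doubles and hand it to the C function as a (pointer, rows, cols) triple. A rough pure-Python mock of that caller-side contract; `as_in_array2` is a hypothetical illustration, not a function numpy.i actually generates:]

```python
import numpy as np

def as_in_array2(obj):
    # Hypothetical illustration of what a numpy.i IN_ARRAY2 typemap
    # roughly enforces on the way into C: a contiguous, C-ordered,
    # 2-d float64 array, delivered as (data, rows, cols).
    arr = np.ascontiguousarray(obj, dtype=np.float64)
    if arr.ndim != 2:
        raise TypeError("expected a 2-d array, got %d-d" % arr.ndim)
    rows, cols = arr.shape
    return arr, rows, cols

# Plain lists are converted on the fly, as in the svmlearn call above.
features, rows, cols = as_in_array2([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
print(rows, cols)  # 2 3
```

[The practical consequence for callers: arrays of the wrong dtype or a Fortran-ordered layout get copied before the C call, so passing float64 C-contiguous arrays avoids silent copies of large data.]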
From gnchen at cortechs.net Mon Apr 24 12:54:23 2006 From: gnchen at cortechs.net (Gennan Chen) Date: Mon, 24 Apr 2006 09:54:23 -0700 Subject: [SciPy-user] port C/C++ matlab mexing code to numpy In-Reply-To: <009201c667bc$24d29190$0502010a@dsp.sun.ac.za> References: <009201c667bc$24d29190$0502010a@dsp.sun.ac.za> Message-ID: <53EBAE19-803E-41E4-BA06-36721DE5C528@cortechs.net> Thanks Albert!. I missed that. BTW, when will your wrapper for svmlight be in the repository? I am using libsvm in Matlab now. If you took care of this, I probably won't spend time to make a wrapper of it. Gen On Apr 24, 2006, at 9:28 AM, Albert Strasheim wrote: > Hello Gen > > I was able to get going with SWIG using the following: > > http://projects.scipy.org/scipy/numpy/browser/trunk/numpy/doc/swig/ > http://www.scipy.org/Cookbook/SWIG_and_NumPy > > If you're building on Windows, check out: > > http://projects.scipy.org/scipy/numpy/ticket/77 > > My SWIG file so far: > > /* -*- C -*- */ > %module _svmlight > %{ > #define SWIG_FILE_WITH_INIT > #include "svm_light/svm_common.h" > #include "svm_light/svm_learn.h" > #include "svmlight_wrap.h" > %} > %include "numpy.i" > %init %{ > import_array(); > %} > %apply (double* IN_ARRAY2, int DIM1, int DIM2) {(double* const > features, int > const rows, int const cols)}; > %apply (double* IN_ARRAY1, int DIM1) {(double* const labels, int const > size)}; > #include "svmlight_wrap.h" > /* eof */ > > This wraps the following function defined in svmlight_wrap.h: > > void* svmlearn(double* const features, > int const rows, > int const cols, > double* const labels, > int const size); > >> From Python I can call it like so: > > x = array([[1.0,2.0,3.0],[4.0,5.0,6.0]]) > y = array([1,-1]) > model = svmlight.svmlearn(x, y) > > Check the example in the SWIG NumPy docs for more info. 
> > Regards, > > Albert > >> -----Original Message----- >> From: scipy-user-bounces at scipy.net [mailto:scipy-user- >> bounces at scipy.net] >> On Behalf Of Gennan Chen >> Sent: 24 April 2006 18:17 >> To: SciPy Users List >> Subject: Re: [SciPy-user] port C/C++ matlab mexing code to numpy >> >> Thanks!! Since we have more C code than C++, is there a example for >> using SWIG with numpy in the numpy or scipy's repository? >> I need to access and return a numpy's 3d array for most of my >> calculation. >> >> Gen >> >> >> On Apr 24, 2006, at 9:09 AM, Tom Denniston wrote: >> >>> Look at swig and boost python. Boost is more pythonic. Swig a >>> little >>> more automatic. I would try both on small examples and determine >>> what >>> you prefer. I personally like boost a little better. >>> >>> On 4/24/06, Gennan Chen wrote: >>>> Hi! All, >>>> >>>> We have a a lot of C/C++ code written for interacting with Matlab >>>> (i.,e mexing code). I was wondering what's the best approach to >>>> port >>>> them into python/numpy/scipy? How about using SWIG? Any >>>> recommendation will be welcomed.... 
>>>> >>>> Gen-Nan Chen, PhD >>>> Chief Scientist >>>> Research and Development Group >>>> CorTechs Labs Inc (www.cortechs.net) >>>> 1020 Prospect St., #304, La Jolla, CA, 92037 >>>> Tel: 1-858-459-9700 ext 16 >>>> Fax: 1-858-459-9705 >>>> Email: gnchen at cortechs.net > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From bblais at bryant.edu Mon Apr 24 13:04:52 2006 From: bblais at bryant.edu (Brian Blais) Date: Mon, 24 Apr 2006 13:04:52 -0400 Subject: [SciPy-user] port C/C++ matlab mexing code to numpy In-Reply-To: <53EBAE19-803E-41E4-BA06-36721DE5C528@cortechs.net> References: <009201c667bc$24d29190$0502010a@dsp.sun.ac.za> <53EBAE19-803E-41E4-BA06-36721DE5C528@cortechs.net> Message-ID: <444D0534.9020900@bryant.edu> Gennan Chen wrote: >>>>> We have a lot of C/C++ code written for interacting with Matlab >>>>> (i.e. mexing code). I was wondering what's the best approach to >>>>> port >>>>> them into python/numpy/scipy? How about using SWIG? Any >>>>> recommendation will be welcomed.... >>>>> although more work in the short run, I have often found it faster to port my mex code to Pyrex (http://www.cosc.canterbury.ac.nz/~greg/python/Pyrex/). it's by hand, but the syntax is so close to Python that the porting wasn't hard, and some of the C-code I could simply include in a .h, and link in. bb -- ----------------- bblais at bryant.edu http://web.bryant.edu/~bblais From fullung at gmail.com Mon Apr 24 13:59:22 2006 From: fullung at gmail.com (Albert Strasheim) Date: Mon, 24 Apr 2006 19:59:22 +0200 Subject: [SciPy-user] port C/C++ matlab mexing code to numpy In-Reply-To: <53EBAE19-803E-41E4-BA06-36721DE5C528@cortechs.net> Message-ID: <00a501c667c8$d153ad80$0502010a@dsp.sun.ac.za> Hello I hope to have some code for public consumption within the next few days. I'd also like to wrap libsvm, since SVM-Light isn't free for commercial use. I'll keep you posted.
Regards, Albert > -----Original Message----- > From: scipy-user-bounces at scipy.net [mailto:scipy-user-bounces at scipy.net] > On Behalf Of Gennan Chen > Sent: 24 April 2006 18:54 > To: SciPy Users List > Subject: Re: [SciPy-user] port C/C++ matlab mexing code to numpy > > Thanks, Albert! I missed that. BTW, when will your wrapper for > svmlight be in the repository? I am using libsvm in Matlab now. If > you took care of this, I probably won't spend time to make a wrapper > of it. > > Gen From bgranger at scu.edu Mon Apr 24 17:55:40 2006 From: bgranger at scu.edu (Brian Granger) Date: Mon, 24 Apr 2006 14:55:40 -0700 Subject: [SciPy-user] which parallel programming package? In-Reply-To: <20060424163436.GD29509@sun.ac.za> References: <444CF821.50505@ntc.zcu.cz> <20060424163436.GD29509@sun.ac.za> Message-ID: The current state of Python wrappings for MPI is less than ideal. At one point I was keeping track of each implementation, but there are so many now (pympi, pypar, scientific python, mmpi, mympi) I can't keep up - and new ones appear every so often. Why are there so many? I think it is because there has never been one really good solution that discouraged others from attempting a new one. But for what it is worth, I do know someone who uses PyPar for real work. > The development branch of IPython now has some support for parallel > computing. An overview of the new design is at > > http://projects.scipy.org/ipython/ipython/wiki/NewDesign > http://projects.scipy.org/ipython/ipython/wiki/NewDesign/ParallelOverview The parallel features in the IPython chainsaw branch focus more on allowing parallel computations to be done interactively within IPython. This approach is not necessarily orthogonal to using MPI, but it is different. It really depends on your needs. While there is a working prototype in the chainsaw branch, it is still under heavy development and the docs and wiki may not reflect the current state of affairs.
Brian -- Brian Granger Santa Clara University ellisonbg at gmail.com From gnchen at cortechs.net Mon Apr 24 20:38:28 2006 From: gnchen at cortechs.net (Gennan Chen) Date: Mon, 24 Apr 2006 17:38:28 -0700 Subject: [SciPy-user] ndimage and 64 bit Message-ID: <898F5CB0-339E-4076-8FE3-D0DE9DDD9509@cortechs.net> Hi! It looks like some of the functions I want to port from Matlab are already in scipy.ndimage. However, I remember I saw a post a while ago saying it is not working on 64-bit. Is it still true, or is it just my illusion? Gen-Nan Chen, PhD Chief Scientist Research and Development Group CorTechs Labs Inc (www.cortechs.net) 1020 Prospect St., #304, La Jolla, CA, 92037 Tel: 1-858-459-9700 ext 16 Fax: 1-858-459-9705 Email: gnchen at cortechs.net From robert.kern at gmail.com Mon Apr 24 20:45:11 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 24 Apr 2006 19:45:11 -0500 Subject: [SciPy-user] ndimage and 64 bit In-Reply-To: <898F5CB0-339E-4076-8FE3-D0DE9DDD9509@cortechs.net> References: <898F5CB0-339E-4076-8FE3-D0DE9DDD9509@cortechs.net> Message-ID: <444D7117.1030307@gmail.com> Gennan Chen wrote: > Hi! > > It looks like some of the functions I want to port from Matlab are > already in scipy.ndimage. However, I remember I saw a post a while > ago saying it is not working on 64-bit. Is it still true, or is it > just my illusion? Yes, it's still true. I imagine it might be easier to fix the 64-bit issues in scipy.ndimage than porting the Matlab code. We would appreciate any contribution you could make. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Mon Apr 24 20:59:04 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 24 Apr 2006 19:59:04 -0500 Subject: [SciPy-user] Changing the Trac authentication, for real this time!
Message-ID: <444D7458.3020402@gmail.com> If you encounter errors accessing the Trac sites for NumPy and SciPy over the next hour or so, please wait until I have announced that I have finished. If things are still broken after that, please let me know and I will try to fix it immediately. The details of the changes were posted to the previous thread "Changing the Trac authentication". Apologies for any disruption and for the noise. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From gcross at u.washington.edu Mon Apr 24 21:03:09 2006 From: gcross at u.washington.edu (Gregory Crosswhite) Date: Mon, 24 Apr 2006 18:03:09 -0700 Subject: [SciPy-user] OSX Issue -- Symbol not found: _fprintf$LDBLStub Message-ID: <19DA9FD8-3927-4FDD-8206-BA72A0D81E0A@u.washington.edu> Hey! I'm attempting to get SciPy to run on OSX 10.4 (Tiger), with the latest version of Xcode (2.2.1, I believe) installed. When I run python and import scipy.fftpack, I get the following error: Python 2.4.3 (#1, Mar 30 2006, 11:02:15) [GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import scipy.fftpack Traceback (most recent call last): File "", line 1, in ? File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy/fftpack/__init__.py", line 10, in ? from basic import * File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ python2.4/site-packages/scipy/fftpack/basic.py", line 13, in ? 
import _fftpack as fftpack ImportError: Failure linking new module: /Library/Frameworks/ Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/ fftpack/_fftpack.so: Symbol not found: _fprintf$LDBLStub Referenced from: /Library/Frameworks/Python.framework/Versions/2.4/ lib/python2.4/site-packages/scipy/fftpack/_fftpack.so Expected in: dynamic lookup I've tried both the MacPython binaries distributed at python.org, and the ActiveState distribution. I've tried both downloading and installing SciPy binaries (the latest version, 0.4.8, and an older version, 0.4.4), and compiling them from source. The problem does not go away whether I use GCC 3.3 or 4.0. Now, at one point in the past I had installed and gotten working SciPy 0.3 from sources, so there might be a library sitting around from that which is screwing things up, but I don't know where to look! Under the belief that maybe the problem was an old version of FFTW (or one compiled with GCC version 4.0 instead of 3.3) I downloaded, compiled, and installed BOTH FFTW 2.1.5 and 3.1.1 using GCC 3.3. Again, no change in the error message. There is exactly one thing that does seem to work, and that is using the binary of version 0.4.9 built by Christopher Fonnesbeck, downloadable from http://trichech.us/. I am using that for now, so this isn't a terribly urgent issue, but it really bothers me that I can't get any other binary or source distribution of SciPy working; I wish I could figure out why his build works when even builds performed on my own computer won't work. Does anyone have thoughts on what could be going wrong? Thanks a lot! :-) - Greg From gnchen at cortechs.net Mon Apr 24 21:10:49 2006 From: gnchen at cortechs.net (Gennan Chen) Date: Mon, 24 Apr 2006 18:10:49 -0700 Subject: [SciPy-user] ndimage and 64 bit In-Reply-To: <444D7117.1030307@gmail.com> References: <898F5CB0-339E-4076-8FE3-D0DE9DDD9509@cortechs.net> <444D7117.1030307@gmail.com> Message-ID: Robert, What is the issue there?? 
Gen-Nan Chen, PhD Chief Scientist Research and Development Group CorTechs Labs Inc (www.cortechs.net) 1020 Prospect St., #304, La Jolla, CA, 92037 Tel: 1-858-459-9700 ext 16 Fax: 1-858-459-9705 Email: gnchen at cortechs.net On Apr 24, 2006, at 5:45 PM, Robert Kern wrote: > Gennan Chen wrote: >> Hi! >> >> It looks like some of functions I want to port from Matlab are >> already in scipy.ndimage. However, I remember i saw a post a while >> ago about it is not working for 64 bit. Is it still true? or It is >> just my illusion. > > Yes, it's still true. I imagine it might be easier to fix the 64- > bit issues in > scipy.ndimage than porting the Matlab code. We would appreciate any > contribution > you could make. > > -- > Robert Kern > robert.kern at gmail.com > > "I have come to believe that the whole world is an enigma, a > harmless enigma > that is made terrible by our own mad attempt to interpret it as > though it had > an underlying truth." > -- Umberto Eco > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From jonathan.taylor at stanford.edu Mon Apr 24 21:11:52 2006 From: jonathan.taylor at stanford.edu (Jonathan Taylor) Date: Mon, 24 Apr 2006 18:11:52 -0700 Subject: [SciPy-user] ndimage and 64 bit In-Reply-To: <444D7117.1030307@gmail.com> References: <898F5CB0-339E-4076-8FE3-D0DE9DDD9509@cortechs.net> <444D7117.1030307@gmail.com> Message-ID: <444D7758.7090002@stanford.edu> Are the things needing to be fixed in scipy.ndimage laid out anywhere? Jonathan Robert Kern wrote: >Gennan Chen wrote: > > >>Hi! >> >>It looks like some of functions I want to port from Matlab are >>already in scipy.ndimage. However, I remember i saw a post a while >>ago about it is not working for 64 bit. Is it still true? or It is >>just my illusion. >> >> > >Yes, it's still true. I imagine it might be easier to fix the 64-bit issues in >scipy.ndimage than porting the Matlab code. 
We would appreciate any contribution >you could make. > > > -- ------------------------------------------------------------------------ I'm part of the Team in Training: please support our efforts for the Leukemia and Lymphoma Society! http://www.active.com/donate/tntsvmb/tntsvmbJTaylor GO TEAM !!! ------------------------------------------------------------------------ Jonathan Taylor Tel: 650.723.9230 Dept. of Statistics Fax: 650.725.8977 Sequoia Hall, 137 www-stat.stanford.edu/~jtaylo 390 Serra Mall Stanford, CA 94305 From robert.kern at gmail.com Mon Apr 24 21:19:27 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 24 Apr 2006 20:19:27 -0500 Subject: [SciPy-user] ndimage and 64 bit In-Reply-To: <444D7758.7090002@stanford.edu> References: <898F5CB0-339E-4076-8FE3-D0DE9DDD9509@cortechs.net> <444D7117.1030307@gmail.com> <444D7758.7090002@stanford.edu> Message-ID: <444D791F.3090608@gmail.com> Jonathan Taylor wrote: > Are the things needing to be fixed in scipy.ndimage > laid out anywhere? Not to my knowledge although you can search the archives. It's probably not hard; we just haven't found the right combination of available 64-bit machines and interested developers, I think. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From robert.kern at gmail.com Mon Apr 24 22:38:26 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 24 Apr 2006 21:38:26 -0500 Subject: [SciPy-user] Changing the Trac authentication, for real this time! In-Reply-To: <444D7458.3020402@gmail.com> References: <444D7458.3020402@gmail.com> Message-ID: <444D8BA2.1080407@gmail.com> I hate computers. It's still not done. 
-- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From Tony.Mannucci at jpl.nasa.gov Mon Apr 24 23:37:13 2006 From: Tony.Mannucci at jpl.nasa.gov (Tony Mannucci) Date: Mon, 24 Apr 2006 20:37:13 -0700 Subject: [SciPy-user] OSX Issue -- Symbol not found: _fprintf$LDBLStub In-Reply-To: References: Message-ID: I used to have a very similar problem until I used g77 with gcc 3.3 (sudo gcc_select 3.3) rather than gcc 4.0. Then the proper libraries were searched and the symbol found. The library you are having problems with appears to be written in C, so I don't if this applies. Make sure you are using the correct gcc by typing gcc_select without arguments. I think scipy needs f77 for some modules, so you must have a fortran compiler somewhere? I downloaded the binary from Khanna's HPC on OS X site, and followed the installation instructions carefully. Be sure to remove any old SciPy installations before retrying (e.g. /usr/lib/python2.4/site-packages/scipy or $HOME/lib/python2.4/ site-packages/scipy). -Tony > >Message: 5 >Date: Mon, 24 Apr 2006 18:03:09 -0700 >From: Gregory Crosswhite >Subject: [SciPy-user] OSX Issue -- Symbol not found: _fprintf$LDBLStub >To: scipy-user at scipy.net >Message-ID: <19DA9FD8-3927-4FDD-8206-BA72A0D81E0A at u.washington.edu> >Content-Type: text/plain; charset=US-ASCII; delsp=yes; format=flowed > >Hey! I'm attempting to get SciPy to run on OSX 10.4 (Tiger), with >the latest version of Xcode (2.2.1, I believe) installed. When I run >python and import scipy.fftpack, I get the following error: > >Python 2.4.3 (#1, Mar 30 2006, 11:02:15) >[GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin >Type "help", "copyright", "credits" or "license" for more information. > >>> import scipy.fftpack >Traceback (most recent call last): > File "", line 1, in ? 
> File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ >python2.4/site-packages/scipy/fftpack/__init__.py", line 10, in ? > from basic import * > File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ >python2.4/site-packages/scipy/fftpack/basic.py", line 13, in ? > import _fftpack as fftpack >ImportError: Failure linking new module: /Library/Frameworks/ >Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/ >fftpack/_fftpack.so: Symbol not found: _fprintf$LDBLStub > Referenced from: /Library/Frameworks/Python.framework/Versions/2.4/ >lib/python2.4/site-packages/scipy/fftpack/_fftpack.so > Expected in: dynamic lookup > > >I've tried both the MacPython binaries distributed at python.org, and >the ActiveState distribution. I've tried both downloading and >installing SciPy binaries (the latest version, 0.4.8, and an older >version, 0.4.4), and compiling them from source. The problem does >not go away whether I use GCC 3.3 or 4.0. > >Now, at one point in the past I had installed and gotten working >SciPy 0.3 from sources, so there might be a library sitting around >from that which is screwing things up, but I don't know where to >look! Under the belief that maybe the problem was an old version of >FFTW (or one compiled with GCC version 4.0 instead of 3.3) I >downloaded, compiled, and installed BOTH FFTW 2.1.5 and 3.1.1 using >GCC 3.3. Again, no change in the error message. > >There is exactly one thing that does seem to work, and that is using >the binary of version 0.4.9 built by Christopher Fonnesbeck, >downloadable from http://trichech.us/. I am using that for now, so >this isn't a terribly urgent issue, but it really bothers me that I >can't get any other binary or source distribution of SciPy working; >I wish I could figure out why his build works when even builds >performed on my own computer won't work. > >Does anyone have thoughts on what could be going wrong? > >Thanks a lot! 
:-) > >- Greg > > > -- Tony Mannucci Supervisor, Ionospheric and Atmospheric Remote Sensing Group Mail-Stop 138-308, Tel > (818) 354-1699 Jet Propulsion Laboratory, Fax > (818) 393-5115 California Institute of Technology, Email > Tony.Mannucci at jpl.nasa.gov 4800 Oak Grove Drive, http://genesis.jpl.nasa.gov Pasadena, CA 91109 From robert.kern at gmail.com Tue Apr 25 01:14:27 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 25 Apr 2006 00:14:27 -0500 Subject: [SciPy-user] [SciPy-dev] Google Summer of Code In-Reply-To: <44476AEA.7080003@decsai.ugr.es> References: <44476AEA.7080003@decsai.ugr.es> Message-ID: <444DB033.4000906@gmail.com> [Cross-posted because this is partially an announcement. Continuing discussion should go to only one list, please.] Antonio Arauzo Azofra wrote: > Google Summer of Code > http://code.google.com/soc/ > > Have you considered participating as a Mentoring organization? Offering > any project about Scipy? I'm not sure which "you" you are referring to here, but yes! Unfortunately, it was a bit late in the process to be applying as a mentoring organization. Google started consolidating mentoring organizations. However, I and several others at Enthought are volunteering to mentor through the PSF. I encourage others on these lists to do the same or to apply as students, whichever is appropriate. We'll be happy to provide SVN workspace for numpy and scipy SoC projects. I've added one fairly general scipy entry to the python.org Wiki page listing project ideas: http://wiki.python.org/moin/SummerOfCode If you have more specific ideas, please add them to the Wiki. Potential mentors: Neal Norwitz is coordinating PSF mentors this year and has asked that those he or Guido does not know personally to give personal references. If you've been active on this list, I'm sure we can play the "Two Degrees of Separation From Guido Game" and get you a reference from someone else here. 
-- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From nomo17k at gmail.com Tue Apr 25 02:09:14 2006 From: nomo17k at gmail.com (Taro Sato) Date: Mon, 24 Apr 2006 23:09:14 -0700 Subject: [SciPy-user] singular matrix linalg.basic.LinAlgError in optimize.leastsq Message-ID: I frequently use optimize.leastsq, and there is this annoying tendency for the routine to throw an exception when the resulting covariant matrix ends up singular. The error message follows. Basically, I need full_output=1 to just obtain a stopping condition in the output parameter ier, but the way the code prepare the full output, it also tries to compute covariant matrix. The problem is when it fails, it throws an exception which is not handled within minpack.py: ----------------------------------------------------------------- Traceback (most recent call last): File "./test.py", line 27, in ? if __name__ == '__main__': main() File "./test.py", line 23, in main p = O.leastsq(residual, [10.,.1,.1], args=(x,y),full_output=1) File "/usr/lib/python2.3/site-packages/scipy/optimize/minpack.py", line 271, in leastsq cov_x = sl.inv(dot(transpose(R),R)) File "/usr/lib/python2.3/site-packages/scipy/linalg/basic.py", line 221, in inv if info>0: raise LinAlgError, "singular matrix" scipy.linalg.basic.LinAlgError: singular matrix ---------------------------------------------------------------- I'm not sure when exactly a covariant matrix cannot be computed this way. My test code (attached below and used to produce the error above) seems to indicate when the initial guess parameters are way off from the best parameters, the above error can result. 
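A minimal sketch of the failure mode and one possible guard, using numpy.linalg purely for illustration (minpack.py actually calls scipy.linalg.inv at this step, and safe_cov is a made-up name, not scipy API):

```python
import numpy as np

def safe_cov(R):
    # Mimic the step in leastsq's full_output path that computes
    # cov_x = inv(R^T R), but catch the singular case instead of
    # letting the exception escape -- the patch idea discussed here.
    try:
        return np.linalg.inv(np.dot(R.T, R))
    except np.linalg.LinAlgError:
        return None  # caller can still inspect ier and the other outputs

# A rank-deficient R (second column is twice the first) makes
# R^T R exactly singular, so inv() raises LinAlgError internally.
R = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(safe_cov(R))          # singular case: handled, returns None
print(safe_cov(np.eye(2)))  # well-conditioned case: the usual inverse
```

The same try/except around the cov_x computation inside leastsq would let full_output=1 return ier with cov_x set to None instead of blowing up.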
The existing behavior of leastsq is very annoying, since to obtain a stopping condition that can be handled easily (i.e., in ier as an integer), I must use full_output=1, but when the computation of a covariance matrix is not well behaved, it simply explodes; there's no way to get all the other info that is relevant for knowing why the fitting failed. Is there any better way to handle the situation (which might include suggesting a modification to minpack.py)? Also, it appears to make more sense (at least to me) for leastsq to return ier when full_output=0, rather than mesg, as it makes error handling easier... a string message is human friendly but not really friendly to coders.... :) By the way, in the mailing list archive, I noticed there was a similar request about a year ago for adding exception handling in this case. Thank you for your time, Taro ---CODE BEGINS (test.py)-------------------------------------------
#!/usr/bin/env python
import numpy as N
import scipy.optimize as O

def fgauss(lamb, params):
    lambc, a, b = params
    return 1.+a*N.exp(-0.5*((lamb-lambc)/b)**2)

def residual(p, x, y):
    return y - fgauss(x, p)

def main():
    p0 = [100., 50., 5.]
    x = N.arange(200).astype(float)
    y0 = fgauss(x, p0)
    y = N.zeros(y0.shape).astype(float)
    for i in xrange(y.size):
        y[i] = N.random.poisson(y0[i])
    p = O.leastsq(residual, [10.,.1,.1], args=(x,y),full_output=1)
    print p[0],p[3:]

if __name__ == '__main__': main()
---CODE ENDS------------------------------------------- From oliphant.travis at ieee.org Tue Apr 25 02:17:05 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 25 Apr 2006 00:17:05 -0600 Subject: [SciPy-user] singular matrix linalg.basic.LinAlgError in optimize.leastsq In-Reply-To: References: Message-ID: <444DBEE1.7020406@ieee.org> Taro Sato wrote: > I frequently use optimize.leastsq, and there is this annoying tendency > for the routine to throw an exception when the resulting covariant > matrix ends up singular. The error message follows.
> > Basically, I need full_output=1 to just obtain a stopping condition in > the output parameter ier, but the way the code prepare the full > output, it also tries to compute covariant matrix. The problem is > when it fails, it throws an exception which is not handled within > minpack.py: > This is a good suggestion. Unfortunately, this year has been very busy for me and I have not been able to spend much time on SciPy (most of my Python time has been spent on NumPy). Fortunately, other SciPy developers have helped but we could use more who can make these kinds of changes. Think about submitting a patch (please at least enter a ticket on the Trac page for SciPy). This can help. -Travis From jelle.feringa at ezct.net Tue Apr 25 03:51:22 2006 From: jelle.feringa at ezct.net (Jelle Feringa / EZCT Architecture & Design Research) Date: Tue, 25 Apr 2006 09:51:22 +0200 Subject: [SciPy-user] GiNaC / Scipy ? Message-ID: <050a01c6683d$0beb46a0$0b01a8c0@JELLE> Dear group, I've been really impressed by some recent efforts in bringing FEM to python: especially SyFi caught my eye, Symbolic Finite Element, and effort of Kent-Andre Mardal. The interesting thing here is that its built on top of the symbolic math lib GiNaC, http://www.ginac.de/ which has swig wrappers for python, Swigniac, http://swiginac.berlios.de/ I'm wondering how relevant Swigniac could be for Scipy? Cheers, -jelle //We get quite a bit of Matlab references on this list, //let's just not overlook Mathematica ;) From wbaxter at gmail.com Tue Apr 25 05:28:32 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Tue, 25 Apr 2006 18:28:32 +0900 Subject: [SciPy-user] GiNaC / Scipy ? 
In-Reply-To: <050a01c6683d$0beb46a0$0b01a8c0@JELLE> References: <050a01c6683d$0beb46a0$0b01a8c0@JELLE> Message-ID: On 4/25/06, Jelle Feringa / EZCT Architecture & Design Research < jelle.feringa at ezct.net> wrote: > > Dear group, > > I've been really impressed by some recent efforts in bringing FEM to > python: > especially SyFi caught my eye, Symbolic Finite Element, and effort of > Kent-Andre Mardal. The interesting thing here is that its built on top of > the symbolic math lib GiNaC, > http://www.ginac.de/ > which has swig wrappers for python, Swigniac, > http://swiginac.berlios.de/ > > I'm wondering how relevant Swigniac could be for Scipy? I don't have much to say Re: SciPy and this, but it's certainly worth linking to from the SciPy wiki, if nothing else. I guess the main thing you'd want is to be able to do is evaluate GiNaC expressions using values taken from Numpy arrays. Are the SWIG wrappers currently better than the Boost::python wrappers linked to from the main ginac page? Since they're linked, the boost ones seem to be more "official" than the swig ones. > Cheers, > > -jelle > > //We get quite a bit of Matlab references on this list, > //let's just not overlook Mathematica ;) > > // Let's not overlook Matlab's symbolic toolbox either. ;-) // (It's pretty much the only add-on toolbox I ever use in Matlab.) --bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From cimrman3 at ntc.zcu.cz Tue Apr 25 05:30:51 2006 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Tue, 25 Apr 2006 11:30:51 +0200 Subject: [SciPy-user] which parallel programming package? In-Reply-To: References: <444CF821.50505@ntc.zcu.cz> <20060424163436.GD29509@sun.ac.za> Message-ID: <444DEC4B.5060707@ntc.zcu.cz> Thanks for all the answers, I will try first mmpi, since it seems to be actively developed and is known to work on Gentoo Linux. But pypar and mympi would serve me equally well, IMHO. regards, r. 
From jelle.feringa at ezct.net Tue Apr 25 06:26:32 2006 From: jelle.feringa at ezct.net (Jelle Feringa / EZCT Architecture & Design Research) Date: Tue, 25 Apr 2006 12:26:32 +0200 Subject: [SciPy-user] GiNaC / Scipy ? In-Reply-To: Message-ID: <052601c66852$b96419f0$0b01a8c0@JELLE> I don't have much to say Re: SciPy and this, but it's certainly worth linking to from the SciPy wiki, if nothing else. I guess the main thing you'd want is to be able to do is evaluate GiNaC expressions using values taken from Numpy arrays. From what I understand one is able to do so with the current state of the wrapper. Which is pretty impressive. I can't say so for sure since I haven't been able to build GiNaC so far. Are the SWIG wrappers currently better than the Boost::python wrappers linked to from the main ginac page? Since they're linked, the boost ones seem to be more "official" than the swig ones. Some pointers on that matter here: http://swiginac.berlios.de/chicago05.pdf The boost.python wrapper is also an orphan, perhaps that's been a strong consideration as well. Also it seems the author prefers SWIG over boost.python. // Let's not overlook Matlab's symbolic toolbox either. ;-) // (It's pretty much the only add-on toolbox I ever use in Matlab.) I'm a bit surprised sometimes by the dominance of matlab references on this list. Quite sure you'd like Mathematica a lot if symbolic computing is of interest to you. On that note, mathematica comes supplied with a (mathlink) module for binding it to python. I've tried building this module -\Wolfram Research\Mathematica\5.1\AddOns\MathLink\LanguageBindings\Python\- But never managed to do so successfully. Python dies when I import the module. I would love to know whether anyone has successfully built it. Cheers, -jelle -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ryanlists at gmail.com Tue Apr 25 07:54:08 2006 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 25 Apr 2006 07:54:08 -0400 Subject: [SciPy-user] GiNaC / Scipy ? In-Reply-To: <052601c66852$b96419f0$0b01a8c0@JELLE> References: <052601c66852$b96419f0$0b01a8c0@JELLE> Message-ID: I think it looks pretty cool, but it can't be included in SciPy unless the ginac authors would change the licensing from GPL to BSD-like. Does anyone have a feel for its linear algebra capabilities? Ryan On 4/25/06, Jelle Feringa / EZCT Architecture & Design Research wrote: > > > > > > > I don't have much to say Re: SciPy and this, but it's certainly worth > linking to from the SciPy wiki, if nothing else. I guess the main thing > you'd want is to be able to do is evaluate GiNaC expressions using values > taken from Numpy arrays. > > > > > From what I understand one is able to do so with the current state of the > wrapper. > > Which is pretty impressive. > > I cant say so for sure since I haven't been able to build GiNaC so far. > > > > Are the SWIG wrappers currently better than the Boost::python wrappers > linked to from the main ginac page? Since they're linked, the boost ones > seem to be more "official" than the swig ones. > > > > > Some pointers on that matter here: > http://swiginac.berlios.de/chicago05.pdf > > The boost.python wrapper is also a orphane, perhaps that's been a strong > consideration as well. > > Also it seems the auther prefers SWIG over boost.python > > > > > > // Let's not overlook Matlab's symbolic toolbox either. ;-) > // (It's pretty much the only add-on toolbox I ever use in Matlab.) > > > > > I'm a bit surprised sometimes by the dominance of matlab references on this > list. > > Quite sure you'd like mathematica a lot of symbolic computing is of your > interest. > > On that note, mathematica comes supplied with a (mathlink) module for > binding it to python. 
> > I've tried building this module > > -\Wolfram > Research\Mathematica\5.1\AddOns\MathLink\LanguageBindings\Python\- > > But never managed to do so successfully. Python dies when I import the > module. > > I would love to know whether anyone successfully built it. > > > > Cheers, > > -jelle > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > > > From ringueet at ete.inrs.ca Tue Apr 25 09:02:22 2006 From: ringueet at ete.inrs.ca (Etienne Ringuet) Date: Tue, 25 Apr 2006 09:02:22 -0400 Subject: [SciPy-user] Unable to build : Unknown distribution option: 'configuration' Message-ID: This is my first post to the list so hello everyone. I am trying to build today's svn source of scipy on a dual opteron server. I am using this guide as a reference: http://pong.tamu.edu/tiki/tiki-view_blog_post.php?blogId=6&postId=97 Here is the error:
[root at marie scipy]# python setup.py build
/usr/lib64/python2.3/distutils/dist.py:227: UserWarning: Unknown distribution option: 'configuration'
  warnings.warn(msg)
running build
running config_fc
[root at marie scipy]# uname -a
Linux marie.ad.inrs.ca 2.6.9-5.0.5.ELsmp #1 SMP Tue Apr 19 17:06:07 CDT 2005 x86_64 x86_64 x86_64 GNU/Linux
I don't know much about python, I am just adding requested packages to our cluster. Thanks in advance for your help, Etienne Ringuet -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliphant.travis at ieee.org Tue Apr 25 11:57:17 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 25 Apr 2006 09:57:17 -0600 Subject: [SciPy-user] ***[Possible UCE]*** Re: ndimage and 64 bit In-Reply-To: References: <898F5CB0-339E-4076-8FE3-D0DE9DDD9509@cortechs.net> <444D7117.1030307@gmail.com> Message-ID: <444E46DD.3080004@ieee.org> Gennan Chen wrote: > Robert, > > What is the issue there??
> I think most of the problem is that in several places in the ndimage code an int pointer and a long pointer are being used interchangeably as if they were the same thing. On 32-bit platforms this is usually true but it is rarely true on 64-bit platforms. Thus, the code fails. Most of these places should be picked up by a compiler and should be fixable in a pretty straightforward way. I've already fixed many of them using output-logs of people with 64-bit systems (I don't have one myself). We need someone with a 64-bit system willing to track down the remaining instances. It is also possible that there is a problem with the numcompat.h file and numcompat.c file on 64-bit systems that I am yet unaware of. This also needs a 64-bit-system user to help debug. -Travis From gnchen at cortechs.net Tue Apr 25 12:20:39 2006 From: gnchen at cortechs.net (Gennan Chen) Date: Tue, 25 Apr 2006 09:20:39 -0700 Subject: [SciPy-user] ***[Possible UCE]*** Re: ndimage and 64 bit In-Reply-To: <444E46DD.3080004@ieee.org> References: <898F5CB0-339E-4076-8FE3-D0DE9DDD9509@cortechs.net> <444D7117.1030307@gmail.com> <444E46DD.3080004@ieee.org> Message-ID: <676C2B37-397B-4E01-A547-B9D4F10C682D@cortechs.net> Travis, Thanks for the heads up. I need to make it working on our new 64 bit machine (dual dual-core opteron running Centos 4.3) anyway. Hope I can track down the remaining instances. Gen On Apr 25, 2006, at 8:57 AM, Travis Oliphant wrote: > Gennan Chen wrote: >> Robert, >> >> What is the issue there?? >> > > I think most of the problem is that in several places in the ndimage > code an int pointer and a long pointer are being used > interchangeably as > if they were the same thing. On 32-bit platforms this is usually true > but it is rarely true on 64-bit platforms. > > Thus, the code fails. Most of these places should be picked up by a > compiler and should be fixable in a pretty straightforward way. 
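The int/long aliasing Travis describes can be simulated from Python with ctypes -- an illustration of the failure mode only, not code from ndimage:

```python
import ctypes

n_int = ctypes.sizeof(ctypes.c_int)    # 4 on common platforms
n_long = ctypes.sizeof(ctypes.c_long)  # 4 on ILP32, 8 on LP64 (most 64-bit Unix)
print("sizeof(int) = %d, sizeof(long) = %d" % (n_int, n_long))

# Storage that is really a long, pre-filled with stale bytes, then
# written through as if it were an int -- the unsafe pattern.
storage = (ctypes.c_ubyte * n_long)(*([0xFF] * n_long))
ctypes.cast(storage, ctypes.POINTER(ctypes.c_int))[0] = 7
value = ctypes.cast(storage, ctypes.POINTER(ctypes.c_long))[0]

# On LP64 the extra bytes of the long still hold the old garbage, so
# value != 7 -- exactly the silent corruption a 64-bit port must hunt down.
print(value == 7)  # True only where sizeof(int) == sizeof(long)
```

On a 32-bit box both sizes are 4 and the write happens to cover the whole long, which is why the bug stays invisible until someone runs the code on a 64-bit machine.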
I've > already fixed many of them using output-logs of people with 64-bit > systems (I don't have one myself). > > We need someone with a 64-bit system willing to track down the > remaining > instances. > > It is also possible that there is a problem with the numcompat.h file > and numcompat.c file on 64-bit systems that I am yet unaware of. > This > also needs a 64-bit-system user to help debug. > > > -Travis > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From robert.kern at gmail.com Tue Apr 25 12:51:29 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 25 Apr 2006 11:51:29 -0500 Subject: [SciPy-user] Unable to build : Unknown distribution option: 'configuration' In-Reply-To: References: Message-ID: <444E5391.9050203@gmail.com> Etienne Ringuet wrote: > This is my first post to the list so hello everyone. > > I am trying to build today's svn source of scipy on a dual opteron server. > > I am using this guide as a reference: > http://pong.tamu.edu/tiki/tiki-view_blog_post.php?blogId=6&postId=97 > > Here is the error: > > [root at marie scipy]# python setup.py build > /usr/lib64/python2.3/distutils/dist.py:227: UserWarning: Unknown > distribution option: 'configuration' Please also get the most recent SVN of numpy. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From ringueet at ete.inrs.ca Tue Apr 25 13:15:18 2006 From: ringueet at ete.inrs.ca (Etienne Ringuet) Date: Tue, 25 Apr 2006 13:15:18 -0400 Subject: [SciPy-user] Unable to build : Unknown distribution option: 'configuration' Message-ID: Hello Robert, I do have the latest version of numpy, compiled from yesterday's svn.
I am using CentOS 4 with Python 2.3. I built BLAS and LAPACK from source, and I tried removing the blas and lapack libraries from the CentOS install. Thanks, Etienne -----Original Message----- From: Robert Kern [mailto:robert.kern at gmail.com] Sent: April 25, 2006 12:51 To: ringueet at ete.inrs.ca; SciPy Users List Subject: Re: [SciPy-user] Unable to build : Unknown distribution option: 'configuration' Etienne Ringuet wrote: > This is my first post to the list so hello everyone. > > I am trying to build today's svn source of scipy on a dual opteron server. > > I am using this guide as a reference: > http://pong.tamu.edu/tiki/tiki-view_blog_post.php?blogId=6&postId=97 > > Here is the error: > > [root at marie scipy]# python setup.py build > /usr/lib64/python2.3/distutils/dist.py:227: UserWarning: Unknown > distribution option: 'configuration' Please also get the most recent SVN of numpy. -- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From w.northcott at unsw.edu.au Tue Apr 25 19:03:26 2006 From: w.northcott at unsw.edu.au (Bill Northcott) Date: Wed, 26 Apr 2006 09:03:26 +1000 Subject: [SciPy-user] OSX Issue -- Symbol not found: _fprintf$LDBLStub In-Reply-To: References: Message-ID: On 25/04/2006, at 12:38 PM, Greg wrote: > Hey! I'm attempting to get SciPy to run on OSX 10.4 (Tiger), with > the latest version of Xcode (2.2.1, I believe) installed. When I run > python and import scipy.fftpack, I get the following error: > > Python 2.4.3 (#1, Mar 30 2006, 11:02:15) > [GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin > Type "help", "copyright", "credits" or "license" for more information. >>>> import scipy.fftpack > Traceback (most recent call last): > File "", line 1, in ?
> File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ > python2.4/site-packages/scipy/fftpack/__init__.py", line 10, in ? > from basic import * > File "/Library/Frameworks/Python.framework/Versions/2.4/lib/ > python2.4/site-packages/scipy/fftpack/basic.py", line 13, in ? > import _fftpack as fftpack > ImportError: Failure linking new module: /Library/Frameworks/ > Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/ > fftpack/_fftpack.so: Symbol not found: _fprintf$LDBLStub > .......... > > Does anyone have thoughts on what could be going wrong? This is caused by trying to link object code files and/or static libraries produced by different versions of gcc. It will go away if you ensure that all objects are compiled either with gcc-3.x/g77 or gcc-4.x/gfortran. It does not matter what compiler is used for linked dynamic libraries, which is why they are preferred. Bill Northcott From robert.kern at gmail.com Tue Apr 25 23:09:23 2006 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 25 Apr 2006 22:09:23 -0500 Subject: [SciPy-user] Chang*ed* the Trac authentication Message-ID: <444EE463.10007@gmail.com> Trying not to embarrass myself again, I made the changes without telling you. :-) In order to create or modify Wiki pages or tickets on the NumPy and SciPy Tracs, you will have to be logged in. You can register yourself by clicking the "Register" link in the upper right-hand corner of the page. Developers who previously had accounts have the same username/password as before. You can now change your password if you like. Only developers have the ability to close tickets, delete Wiki pages entirely, or create new ticket reports (and possibly a couple of other things). Developers, please enter your name and email by clicking on the "Settings" link up at top once logged in. Thank you for your patience. If there are any problems, please email me, and I will try to correct them quickly.
-- Robert Kern robert.kern at gmail.com "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From nomo17k at gmail.com Wed Apr 26 12:03:41 2006 From: nomo17k at gmail.com (Taro Sato) Date: Wed, 26 Apr 2006 09:03:41 -0700 Subject: [SciPy-user] singular matrix linalg.basic.LinAlgError in optimize.leastsq In-Reply-To: <444DBEE1.7020406@ieee.org> References: <444DBEE1.7020406@ieee.org> Message-ID: On 4/24/06, Travis Oliphant wrote: > Taro Sato wrote: > > I frequently use optimize.leastsq, and there is this annoying tendency > > for the routine to throw an exception when the resulting covariance > > matrix ends up singular. The error message follows. > > > > Basically, I need full_output=1 just to obtain a stopping condition in > > the output parameter ier, but the way the code prepares the full > > output, it also tries to compute the covariance matrix. The problem is > > that when it fails, it throws an exception which is not handled within > > minpack.py: > > > > This is a good suggestion. Unfortunately, this year has been very busy > for me and I have not been able to spend much time on SciPy (most of my > Python time has been spent on NumPy). > > Fortunately, other SciPy developers have helped but we could use more > who can make these kinds of changes. > > Think about submitting a patch (please at least enter a ticket on the > Trac page for SciPy). This can help. > > -Travis Submitted a patch and entered a ticket on the Trac page. Thanks.
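The change Taro asks for (keeping a usable stopping condition `ier` even when the covariance matrix is singular) boils down to guarding the matrix inversion. A minimal sketch of that pattern, with a made-up helper name `safe_cov`; this is not the actual minpack.py patch:

```python
import numpy as np

def safe_cov(jtj):
    """Return inv(jtj), or None if jtj is singular.

    Guarding the inversion like this lets a caller still inspect the
    stopping condition (ier) from full_output=1 even when the
    covariance estimate cannot be computed.
    """
    try:
        return np.linalg.inv(jtj)
    except np.linalg.LinAlgError:
        return None

# A rank-deficient normal matrix, as a degenerate fit would produce:
singular = np.array([[1.0, 2.0], [2.0, 4.0]])
cov = safe_cov(singular)  # None instead of an uncaught exception
```

In leastsq itself the analogous spot is the full_output=1 branch that builds the covariance estimate; wrapping the singular step the same way yields a None covariance rather than a crash.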
Taro From chuckles at llnl.gov Wed Apr 26 14:12:51 2006 From: chuckles at llnl.gov (Chuckles McGregor) Date: Wed, 26 Apr 2006 11:12:51 -0700 Subject: [SciPy-user] example dict_sort.py compile error Message-ID: <6.2.1.2.2.20060426110136.13311990@mail.llnl.gov> good day, I'm trying to run the dict_sort.py script from the weave/examples dir and at this piece of the script:

def c_sort2(adict):
    assert(type(adict) is dict)
    code = """
    #line 44 "dict_sort.py"
    py::list keys = adict.keys();
    py::list items(keys.len());
    keys.sort();
    int N = keys.length();
    for(int i = 0; i < N;i++)
        items[i] = adict[keys[i]];
    return_val = items;
    """
    return inline_tools.inline(code,['adict'],verbose=1)

I'm getting this error from the compiler:

dict_sort.py(49) : error C2593: 'operator [' is ambiguous
    C:\Python24\Lib\site-packages\scipy\weave\scxx/dict.h(120): could be 'py::object::keyed_ref py::dict::operator [](const std::string &)'
    C:\Python24\Lib\site-packages\scipy\weave\scxx/dict.h(113): or 'py::object::keyed_ref py::dict::operator [](const char *)'
    C:\Python24\Lib\site-packages\scipy\weave\scxx/dict.h(105): or 'py::object::keyed_ref py::dict::operator [](const std::complex &)'
    C:\Python24\Lib\site-packages\scipy\weave\scxx/dict.h(101): or 'py::object::keyed_ref py::dict::operator [](double)'
    C:\Python24\Lib\site-packages\scipy\weave\scxx/dict.h(97): or 'py::object::keyed_ref py::dict::operator [](int)'
    while trying to match the argument list '(py::dict, py::indexed_ref)'

Traceback (most recent call last):
  File "dict_sort.py", line 119, in ?
    sort_compare(a,n)
  File "dict_sort.py", line 96, in sort_compare
    b=c_sort2(a)
  File "dict_sort.py", line 52, in c_sort2
    return inline_tools.inline(code,['adict'],verbose=1)
  File "C:\Python24\Lib\site-packages\scipy\weave\inline_tools.py", line 334, in inline
    auto_downcast = auto_downcast,
  File "C:\Python24\Lib\site-packages\scipy\weave\inline_tools.py", line 442, in compile_function
    verbose=verbose, **kw)
  File "C:\Python24\Lib\site-packages\scipy\weave\ext_tools.py", line 353, in compile
    verbose = verbose, **kw)
  File "C:\Python24\Lib\site-packages\scipy\weave\build_tools.py", line 274, in build_extension
    setup(name = module_name, ext_modules = [ext],verbose=verb)
  File "C:\Python24\Lib\site-packages\numpy\distutils\core.py", line 85, in setup
    return old_setup(**new_attr)
  File "C:\Python24\lib\distutils\core.py", line 166, in setup
    raise SystemExit, "error: " + str(msg)
distutils.errors.CompileError: error: Command "cl.exe /c /nologo /Ox /MD /W3 /EHsc /DNDEBUG -IC:\Python24\Lib\site-packages\scipy\weave -IC:\Python24\Lib\site-packages\scipy\weave\scxx -IC:\python24\lib\site-packages\numpy\core\include -IC:\python24\include -IC:\python24\PC /Tpc:\docume~1\mcgreg~1\locals~1\temp\mcgregor1\python24_compiled\sc_9fef7eb8a0d63221b946305b186f457b1.cpp /Foc:\docume~1\mcgreg~1\locals~1\temp\mcgregor1\python24_intermediate\compiler_d41d8cd98f00b204e9800998ecf8427e\Release\docume~1\mcgreg~1\locals~1\temp\mcgregor1\python24_compiled\sc_9fef7eb8a0d63221b946305b186f457b1.obj /Zm1000" failed with exit status 2

I'm using python 2.4.3, scipy 0.4.8, and the visual C++ express edition (ver 8) compiler. any suggestions? chuckles From cookedm at physics.mcmaster.ca Wed Apr 26 15:31:54 2006 From: cookedm at physics.mcmaster.ca (David M.
Cooke) Date: Wed, 26 Apr 2006 15:31:54 -0400 Subject: [SciPy-user] [Numpy-discussion] Chang*ed* the Trac authentication In-Reply-To: <444EE463.10007@gmail.com> (Robert Kern's message of "Tue, 25 Apr 2006 22:09:23 -0500") References: <444EE463.10007@gmail.com> Message-ID: Robert Kern writes: > Trying not to embarrass myself again, I made the changes without telling you. :-) > > In order to create or modify Wiki pages or tickets on the NumPy and SciPy Tracs, > you will have to be logged in. You can register yourself by clicking the > "Register" link in the upper right-hand corner of the page. > > Developers who previously had accounts have the same username/password as > before. You can now change your password if you like. Only developers have the > ability to close tickets, delete Wiki pages entirely, or create new ticket > reports (and possibly a couple of other things). Developers, please enter your > name and email by clicking on the "Settings" link up at top once logged in. > > Thank you for your patience. If there are any problems, please email me, and I > will try to correct them quickly. Thanks Robert; I hope this helps with our spam problem to an extent. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From python at axtom.com Thu Apr 27 05:08:46 2006 From: python at axtom.com (python at axtom.com) Date: Thu, 27 Apr 2006 11:08:46 +0200 Subject: [SciPy-user] install - floating point exception (scipy-tests) Message-ID: <1146128926.44508a1e8648a@ssl0.ovh.net> Hi all, I have installed scipy, and have some fp exceptions when running the tests. Does anyone else experience the same thing?
Thanks -- Jean Pierre configuration ----------------- Debian GNU/Linux 3.1 kernel 2.6 compilers: gcc-3.3.5 and g77-3.3.5 (c++ not installed) python-2.4.2 blas http://www.netlib.org/blas/blas.tgz -- installation from src lapack http://www.netlib.org/lapack/lapack.tgz -- installation from src fftw http://www.fftw.org/fftw-2.1.5.tar.gz -- installation from src numpy-0.9.6 scipy-0.4.8 (without cluster and weave packages) output of the tests (by package and by module when there is an exception) ------------------------------------------------------------------ >>> from scipy import integrate;NumpyTest(integrate).test(level=10) Found 10 tests for scipy.integrate.quadpack Found 1 tests for scipy.integrate Found 0 tests for __main__ ..Floating point exception >>> from scipy import fftpack;NumpyTest(fftpack).test(level=10) Found 23 tests for scipy.fftpack.basic Found 24 tests for scipy.fftpack.pseudo_diffs Found 4 tests for scipy.fftpack.helper Found 0 tests for __main__ Fast Fourier Transform ================================================= | real input | complex input ------------------------------------------------- size | scipy | Numeric | scipy | Numeric ------------------------------------------------- 100 | 0.10 | N/A | 0.10 | N/A (secs for 7000 calls) 1000 | 0.08 | N/A | 0.12 | N/A (secs for 2000 calls) 256 | 0.17 | N/A | 0.19 | N/A (secs for 10000 calls) 512 | 0.23 | N/A | 0.31 | N/A (secs for 10000 calls) 1024 | 0.04 | N/A | 0.05 | N/A (secs for 1000 calls) 2048 | 0.08 | N/A | 0.11 | N/A (secs for 1000 calls) 4096 | 0.07 | N/A | 0.13 | N/A (secs for 500 calls) 8192 | 0.19 | N/A | 0.57 | N/A (secs for 500 calls) ..Warning: Skipping check_djbfft (failed to import FFT) .. 
Multi-dimensional Fast Fourier Transform =================================================== | real input | complex input --------------------------------------------------- size | scipy | Numeric | scipy | Numeric --------------------------------------------------- 100x100 | 0.09 | N/A | 0.07 | N/A (secs for 100 calls) 1000x100 | 0.09 | N/A | 0.08 | N/A (secs for 7 calls) 256x256 | 0.11 | N/A | 0.12 | N/A (secs for 10 calls) 512x512 | 0.30 | N/A | 0.30 | N/A (secs for 3 calls) ..... Inverse Fast Fourier Transform =============================================== | real input | complex input ----------------------------------------------- size | scipy | Numeric | scipy | Numeric ----------------------------------------------- 100 | 0.09 | N/A | 0.13 | N/A (secs for 7000 calls) 1000 | 0.09 | N/A | 0.18 | N/A (secs for 2000 calls) 256 | 0.17 | N/A | 0.22 | N/A (secs for 10000 calls) 512 | 0.26 | N/A | 0.35 | N/A (secs for 10000 calls) 1024 | 0.04 | N/A | 0.07 | N/A (secs for 1000 calls) 2048 | 0.09 | N/A | 0.12 | N/A (secs for 1000 calls) 4096 | 0.09 | N/A | 0.15 | N/A (secs for 500 calls) 8192 | 0.21 | N/A | 0.59 | N/A (secs for 500 calls) ....... Inverse Fast Fourier Transform (real data) ================================== size | scipy | Numeric ---------------------------------- 100 | 0.12 | N/A (secs for 7000 calls) 1000 | 0.10 | N/A (secs for 2000 calls) 256 | 0.19 | N/A (secs for 10000 calls) 512 | 0.26 | N/A (secs for 10000 calls) 1024 | 0.04 | N/A (secs for 1000 calls) 2048 | 0.08 | N/A (secs for 1000 calls) 4096 | 0.09 | N/A (secs for 500 calls) 8192 | 0.19 | N/A (secs for 500 calls) ..Warning: Skipping check_djbfft (failed to import FFT) .. 
Fast Fourier Transform (real data) ================================== size | scipy | Numeric ---------------------------------- 100 | 0.10 | N/A (secs for 7000 calls) 1000 | 0.08 | N/A (secs for 2000 calls) 256 | 0.18 | N/A (secs for 10000 calls) 512 | 0.24 | N/A (secs for 10000 calls) 1024 | 0.03 | N/A (secs for 1000 calls) 2048 | 0.07 | N/A (secs for 1000 calls) 4096 | 0.07 | N/A (secs for 500 calls) 8192 | 0.17 | N/A (secs for 500 calls) ..Warning: Skipping check_djbfft (failed to import FFT: No module named FFT) . Differentiation of periodic functions ===================================== size | convolve | naive ------------------------------------- 100 | 0.03 | 0.23 (secs for 1500 calls) 1000 | 0.03 | 0.21 (secs for 300 calls) 256 | 0.04 | 0.34 (secs for 1500 calls) 512 | 0.04 | 0.38 (secs for 1000 calls) 1024 | 0.02 | 0.34 (secs for 500 calls) 2048 | 0.03 | 0.27 (secs for 200 calls) 4096 | 0.02 | 0.28 (secs for 100 calls) 8192 | 0.04 | 0.33 (secs for 50 calls) .......... Hilbert transform of periodic functions ========================================= size | optimized | naive ----------------------------------------- 100 | 0.03 | 0.17 (secs for 1500 calls) 1000 | 0.03 | 0.14 (secs for 300 calls) 256 | 0.04 | 0.24 (secs for 1500 calls) 512 | 0.03 | 0.26 (secs for 1000 calls) 1024 | 0.03 | 0.23 (secs for 500 calls) 2048 | 0.02 | 0.16 (secs for 200 calls) 4096 | 0.03 | 0.19 (secs for 100 calls) 8192 | 0.03 | 0.25 (secs for 50 calls) ........ Shifting periodic functions ============================== size | optimized | naive ------------------------------ 100 | 0.03 | 0.23 (secs for 1500 calls) 1000 | 0.01 | 0.25 (secs for 300 calls) 256 | 0.03 | 0.38 (secs for 1500 calls) 512 | 0.03 | 0.43 (secs for 1000 calls) 1024 | 0.02 | 0.39 (secs for 500 calls) 2048 | 0.01 | 0.30 (secs for 200 calls) 4096 | 0.03 | 0.31 (secs for 100 calls) 8192 | 0.03 | 0.32 (secs for 50 calls) .. 
Tilbert transform of periodic functions ========================================= size | optimized | naive ----------------------------------------- 100 | 0.02 | 0.23 (secs for 1500 calls) 1000 | 0.02 | 0.17 (secs for 300 calls) 256 | 0.04 | 0.33 (secs for 1500 calls) 512 | 0.04 | 0.32 (secs for 1000 calls) 1024 | 0.03 | 0.28 (secs for 500 calls) 2048 | 0.03 | 0.20 (secs for 200 calls) 4096 | 0.03 | 0.21 (secs for 100 calls) 8192 | 0.04 | 0.25 (secs for 50 calls) ........ ---------------------------------------------------------------------- Ran 51 tests in 26.269s OK >>> from scipy import interpolate;NumpyTest(interpolate).test(level=10) Found 5 tests for scipy.interpolate.fitpack Found 0 tests for __main__ /opt/scipy/lib/scipy/interpolate/fitpack2.py:410: UserWarning: The coefficients of the spline returned have been computed as the minimal norm least-squares solution of a (numerically) rank deficient system (deficiency=7). If deficiency is large, the results may be inaccurate. Deficiency may strongly depend on the value of eps. warnings.warn(message) ..... ---------------------------------------------------------------------- Ran 5 tests in 0.012s OK >>> from scipy import io;NumpyTest(io).test(level=10) Found 4 tests for scipy.io.array_import Found 12 tests for scipy.io.mmio Found 0 tests for __main__ Don't worry about a warning regarding the number of bytes read. Warning: 1000000 bytes requested, 20 bytes read. ................ ---------------------------------------------------------------------- Ran 16 tests in 0.137s OK >>> from scipy import lib;NumpyTest(lib).test(level=10); **************************************************************** WARNING: clapack module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses flapack instead of clapack. 
**************************************************************** Found 42 tests for scipy.lib.lapack Found 0 tests for __main__ .......................................... ---------------------------------------------------------------------- Ran 42 tests in 0.059s OK >>> from scipy import linalg;NumpyTest(linalg).test(level=10) Found 128 tests for scipy.linalg.fblas Found 37 tests for scipy.linalg.decomp Found 4 tests for scipy.linalg.lapack Found 44 tests for scipy.linalg.basic Found 7 tests for scipy.linalg.matfuncs Found 14 tests for scipy.linalg.blas Found 0 tests for __main__ ...caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 ......... Finding matrix eigenvalues ================================== | contiguous ---------------------------------------------- size | scipy 20 | 0.08 (secs for 150 calls) 100 | 0.17 (secs for 7 calls) 200 | 0.36 (secs for 2 calls) .................Floating point exception >>> from scipy import maxentropy;NumpyTest(maxentropy).test(level=10) Found 2 tests for scipy.maxentropy Found 0 tests for __main__ .. 
---------------------------------------------------------------------- Ran 2 tests in 0.003s OK >>> from scipy import ndimage;NumpyTest(ndimage).test(level=10) Found 397 tests for scipy.ndimage Found 0 tests for __main__ ............................................................................................................................................................................................................................................................................................................................................................................................................. ---------------------------------------------------------------------- Ran 397 tests in 1.174s OK >>> from scipy import optimize;NumpyTest(optimize).test(level=10) Found 6 tests for scipy.optimize.optimize Found 2 tests for scipy.optimize.zeros Found 1 tests for scipy.optimize.cobyla Found 0 tests for __main__ ......%s f2 is a symmetric parabola, x**2 - 1 f3 is a quartic polynomial with large hump in interval f4 is step function with a discontinuity at 1 f5 is a hyperbola with vertical asymptote at 1 f6 has random values positive to left of 1 , negative to right of course these are not real problems. They just test how the 'good' solvers behave in bad circumstances where bisection is really the best. A good solver should not be much worse than bisection in such circumstance, while being faster for smooth monotone sorts of functions. 
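As the test banner above says, bisection is the yardstick for robustness because it cannot fail once the root is bracketed. A minimal pure-Python bisection (an illustrative sketch, not the scipy.optimize implementation):

```python
def bisect(f, a, b, tol=1e-12, maxiter=200):
    """Find a root of f in [a, b], assuming f(a) and f(b) differ in sign.

    Each iteration halves the bracket, so convergence is guaranteed,
    if slow, even for step functions and discontinuities like the
    f4/f5 test cases above.
    """
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("root is not bracketed by [a, b]")
    for _ in range(maxiter):
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0 or (b - a) < tol:
            return m
        if fa * fm < 0:
            b = m              # root lies in [a, m]
        else:
            a, fa = m, fm      # root lies in [m, b]
    return 0.5 * (a + b)

# f2 from the suite is x**2 - 1, with the root at 1:
root = bisect(lambda x: x * x - 1, 0.5, 1.7)
```

The faster solvers (ridder, brenth, brentq) win on smooth functions by replacing the midpoint with an interpolated guess, but they maintain a bracket precisely to keep this fallback guarantee.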
TESTING SPEED times in seconds for 2000 iterations function f2 cc.bisect : 0.080 cc.ridder : 0.040 cc.brenth : 0.030 cc.brentq : 0.030 function f3 cc.bisect : 0.110 cc.ridder : 0.030 cc.brenth : 0.040 cc.brentq : 0.030 function f4 cc.bisect : 0.090 cc.ridder : 0.110 cc.brenth : 0.100 cc.brentq : 0.110 function f5 cc.bisect : 0.090 cc.ridder : 0.120 cc.brenth : 0.110 cc.brentq : 0.110 function f6 cc.bisect : 0.100 cc.ridder : 0.120 cc.brenth : 0.110 cc.brentq : 0.130 .TESTING CONVERGENCE zero should be 1 function f2 cc.bisect : 1.0000000000001952 cc.ridder : 1.0000000000004658 cc.brenth : 0.9999999999999997 cc.brentq : 0.9999999999999577 function f3 cc.bisect : 1.0000000000001952 cc.ridder : 1.0000000000000000 cc.brenth : 1.0000000000000009 cc.brentq : 1.0000000000000011 function f4 cc.bisect : 1.0000000000001952 cc.ridder : 1.0000000000001452 cc.brenth : 0.9999999999993339 cc.brentq : 0.9999999999993339 function f5 cc.bisect : 1.0000000000001952 cc.ridder : 1.0000000000004574 cc.brenth : 0.9999999999991442 cc.brentq : 0.9999999999991442 function f6 cc.bisect : 1.0000000000001952 cc.ridder : 0.9999999999995509 cc.brenth : 1.0000000000004117 cc.brentq : 0.9999999999988777 .Result: [ 4.957975 0.64690335] (exact result = 4.955356249106168, 0.666666666666666) . ---------------------------------------------------------------------- Ran 9 tests in 2.344s OK >>> from scipy import signal;NumpyTest(signal).test(level=10) Found 4 tests for scipy.signal.signaltools Found 0 tests for __main__ .... ---------------------------------------------------------------------- Ran 4 tests in 0.005s OK >>> from scipy import sparse;NumpyTest(sparse).test(level=10) Found 89 tests for scipy.sparse.sparse Found 0 tests for __main__ 2 3 1 2 2 3 2 1 3 3 3 3 1 3 3 . 3 3 1 3 3 ...........Use minimum degree ordering on A'+A. .....................Use minimum degree ordering on A'+A. .....................Use minimum degree ordering on A'+A. .......................Use minimum degree ordering on A'+A. 
............ ---------------------------------------------------------------------- Ran 89 tests in 0.487s OK >>> from scipy import special;NumpyTest(special).test(level=10) Found 341 tests for scipy.special.basic Found 0 tests for __main__ .........Floating point exception >>> from scipy import stats;NumpyTest(stats).test(level=10) Found 95 tests for scipy.stats.stats Found 70 tests for scipy.stats.distributions Found 10 tests for scipy.stats.morestats Found 0 tests for __main__ ..................................................................................................................................................Floating point exception ***************BY MODULE****************** python ./integrate/tests/test_integrate.py -l 10 Found 1 tests for __main__ Residual: 1.05006950433e-07 . ---------------------------------------------------------------------- Ran 1 test in 0.010s OK python ./integrate/tests/test_quadpack.py -l 10 Found 10 tests for __main__ ..Floating point exception python ./linalg/tests/test_atlas_version.py -l 10 NO ATLAS INFO AVAILABLE python ./linalg/tests/test_basic.py -l 10 Found 44 tests for __main__ Finding matrix determinant ================================== | contiguous | non-contiguous ---------------------------------------------- size | scipy | basic | scipy | basic 20 | 0.21 | 0.22 | 0.26 | 0.28 (secs for 2000 calls) 100 | 0.52 | 0.47 | 0.66 | 0.68 (secs for 300 calls) 500 | 0.68 | 0.68 | 0.72 | 0.73 (secs for 4 calls) ...... 
Finding matrix inverse ================================== | contiguous | non-contiguous ---------------------------------------------- size | scipy | basic | scipy | basic 20 | 0.34 | 0.31 | 0.36 | 0.36 (secs for 2000 calls) 100 | 1.21 | 1.51 | 1.22 | 1.68 (secs for 300 calls) 500 | 2.42 | 2.73 | 2.40 | 2.68 (secs for 4 calls) ....../opt/numpy/lib/numpy/core/oldnumeric.py:573: DeprecationWarning: integer argument expected, got float result = a.round(decimals) .......Floating point exception python ./linalg/tests/test_blas.py -l 10 Found 14 tests for __main__ **************************************************************** WARNING: cblas module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses fblas instead of cblas. **************************************************************** .............. ---------------------------------------------------------------------- Ran 14 tests in 0.013s OK python ./linalg/tests/test_decomp.py -l 10 Found 37 tests for __main__ ....... Finding matrix eigenvalues ================================== | contiguous ---------------------------------------------- size | scipy 20 | 0.08 (secs for 150 calls) 100 | 0.16 (secs for 7 calls) 200 | 0.35 (secs for 2 calls) .................Floating point exception python ./linalg/tests/test_fblas.py -l 10 Found 128 tests for __main__ ...caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 .. 
---------------------------------------------------------------------- Ran 128 tests in 0.074s OK python ./linalg/tests/test_lapack.py -l 10 Found 4 tests for __main__ .. **************************************************************** WARNING: clapack module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses flapack instead of clapack. **************************************************************** .. ---------------------------------------------------------------------- Ran 4 tests in 0.004s OK python ./linalg/tests/test_matfuncs.py -l 10 Found 7 tests for __main__ .Floating point exception python ./stats/tests/test_distributions.py -l 10 Found 70 tests for stats.distributions Found 0 tests for __main__ ...................................................Floating point exception python ./stats/tests/test_morestats.py -l 10 Found 10 tests for __main__ ..Ties preclude use of exact statistic. ..Ties preclude use of exact statistic. .....Floating point exception python ./stats/tests/test_stats.py -l 10 Found 95 tests for __main__ ............................................................................................... ---------------------------------------------------------------------- Ran 95 tests in 0.050s OK python ./special/tests/test_basic.py -l 10 Found 341 tests for __main__ .........Floating point exception From kwgoodman at gmail.com Thu Apr 27 13:43:41 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Thu, 27 Apr 2006 10:43:41 -0700 Subject: [SciPy-user] import stats -> failed Message-ID: I installed scipy from SVN source less than an hour ago. I can't import stats. (If it's not something I did wrong, would a bug report like this be useful information?) 
>> from scipy import * --------------------------------------------------------------------------- exceptions.ValueError Traceback (most recent call last) /home/me/ /usr/local/scipy/lib/python2.4/site-packages/scipy/stats/__init__.py 5 from info import __doc__ 6 ----> 7 from stats import * 8 from distributions import * 9 from rv import * /usr/local/scipy/lib/python2.4/site-packages/scipy/stats/stats.py 1693 1694 import scipy.stats -> 1695 import distributions 1696 def kstest(rvs, cdf, args=(), N=20): 1697 """Return the D-value and the p-value for a /usr/local/scipy/lib/python2.4/site-packages/scipy/stats/distributions.py 3812 lvals = where(vals==0,0.0,log(vals)) 3813 return -sum(vals*lvals) -> 3814 binom = binom_gen(name='binom',shapes="n,pr",extradoc=""" 3815 3816 Binomial distribution /usr/local/scipy/lib/python2.4/site-packages/scipy/stats/distributions.py in __init__(self, a, b, name, badvalue, moment_tol, values, inc, longname, shapes, extradoc) 3373 self.numargs=0 3374 else: -> 3375 self._vecppf = new.instancemethod(sgf(_drv2_ppfsingle,otypes='d'), 3376 self, rv_discrete) 3377 self.generic_moment = new.instancemethod(sgf(_drv2_moment, /usr/local/scipy/lib/python2.4/site-packages/numpy/lib/function_base.py in __init__(self, pyfunc, otypes, doc) 594 """ 595 def __init__(self, pyfunc, otypes='', doc=None): --> 596 nin, ndefault = _get_nargs(pyfunc) 597 self.thefunc = pyfunc 598 self.ufunc = None /usr/local/scipy/lib/python2.4/site-packages/numpy/lib/function_base.py in _get_nargs(obj) 559 nargs -= 1 560 return nargs, ndefaults --> 561 raise ValueError, 'failed to determine the number of arguments for %s' % (obj) 562 563 ValueError: failed to determine the number of arguments for From oliphant at ee.byu.edu Thu Apr 27 15:05:44 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 27 Apr 2006 13:05:44 -0600 Subject: [SciPy-user] import stats -> failed In-Reply-To: References: Message-ID: <44511608.6080300@ee.byu.edu> Keith Goodman wrote: >I installed 
scipy from SVN source less than an hour ago. I can't import stats. > >(If it's not something I did wrong, would a bug report like this be >useful information?) > > It's actually an indentation problem in NumPy. Get the latest NumPy out of SVN and scipy should work. You shouldn't have to recompile NumPy. The fix was a simple indentation problem in numpy/lib/function_base.py -Travis From kwgoodman at gmail.com Thu Apr 27 16:01:52 2006 From: kwgoodman at gmail.com (Keith Goodman) Date: Thu, 27 Apr 2006 13:01:52 -0700 Subject: [SciPy-user] repmat of matrix returns array Message-ID: Do most operations on matrices return matrices? In porting Octave code, I noticed that the repmat of a matrix is an array. I decided to use matrices simply because that's what they are called in Octave. What do you recommend for new users who have a background in Octave, matrix or array? Do people generally pick one and not use the other? From oliphant at ee.byu.edu Thu Apr 27 16:05:28 2006 From: oliphant at ee.byu.edu (Travis Oliphant) Date: Thu, 27 Apr 2006 14:05:28 -0600 Subject: [SciPy-user] ***[Possible UCE]*** repmat of matrix returns array In-Reply-To: References: Message-ID: <44512408.8030205@ee.byu.edu> Keith Goodman wrote: >Do most operations on matrices return matrices? In porting Octave >code, I noticed that the repmat of a matrix is an array. > >I decided to use matrices simply because that's what they are called >in Octave. What do you recommend for new users who have a background >in Octave, matrix or array? Do people generally pick one and not use >the other? > > I usually use arrays all the time and matrices only when I need to express some matrix formula. As a result, the matrices are not as well developed (there are issues like the one you mentioned all over the place). Matrices not being preserved through operations is a big reason they have not been used in the past. 
We are trying to fix this and have some strategies for doing it (asarray followed by __array_wrap__ at the end). These strategies have just not been universally applied to all the functions yet. -Travis From wbaxter at gmail.com Thu Apr 27 21:25:35 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Fri, 28 Apr 2006 10:25:35 +0900 Subject: [SciPy-user] repmat of matrix returns array In-Reply-To: References: Message-ID: Keith, The decision of matrix vs array really comes down to whether you intend to mostly do linear algebra with the entities in question or not. In Matlab or Octave A * B always means matrix multiply (er, at least for ndims=2), and A .* B always means element-wise multiply. If you're doing stuff with images you rarely need *, and if you're doing mostly linear algebra you don't need as much .* . In numpy the meaning of * changes depending on whether the arguments are of type 'numpy.matrix' or not. That's pretty much the most significant difference between array and matrix. That plus the fact that reduction operations on matrices like sum() will generally return matrices (with ndims==2), while the same operations on arrays will return arrays with ndims reduced by one. The whole changing meaning of * seemed reasonable to me at first -- from an object oriented programming perspective it makes sense. But you know, I find that I get bitten by it a lot in several ways: 1) I end up accidentally multiplying element-wise because the arguments end up being arrays instead of matrices like I had planned. (No runtime errors either way if the arguments are both NxN) 2) Because of Python's dynamic typing, it's often quite difficult to tell when reading code like " a * b " if a and b are arrays or matrices (or something else entirely). But you can bet when I wrote the code I knew what I meant.
3) Sometimes you do get an array back after performing an operation on a matrix. (This one's getting fixed gradually in Numpy code, but I don't think it will ever go away because it's just as easy for developers to do it to themselves in their own code, i.e. write a function that takes both arrays and matrices but always returns an array. Since it's easier to get that wrong than it is to get it right, I predict such issues will persist.) Because of these points, I think I have to conclude that overloading * for matrices was not such a great idea. Or at least, there should be an unambiguous way to spell 'matrix multiply' that _always_ means matrix multiply. Matlab seems to have gotten this one right. I think if there were such a function in Numpy, there would be very little reason to use the matrix class. We can't make it a unique infix operator like Matlab does, so let's just call it numpy.linalg.mult, and people who want to use it can import it as something easier to type. I propose the function should behave as follows:

def mult(a, b):
    """Multiplies a times b using the rules of tensor algebra, and
    returns c, the product.

    (In the following description, a_ij means a[i, j]. The tensor
    summation convention of summing over repeated dummy indices is
    also assumed. For instance, a_ij * b_jk means dot( a[i,:], b[:, k] ).)

    If a and b are of type matrix, then the result is the same as
    'a * b'. In other words c_ik = a_ij b_jk. The returned c will also
    have type matrix.

    The remaining rules all deal with cases when one or both of the
    arguments are of type array.

    If a.shape==(N,M) and b.shape==(M,P), then the result is just like
    the matrix-matrix multiply: c_ik = a_ij b_jk. The resulting shape
    is (N,P). If b.shape[0] is not M, then an error is raised.

    If a.shape==(N,M), and b.shape==(M) or (M,1), this is treated as a
    matrix-vector multiply: c_i = a_ij b_j. The return value c will
    have shape (N) or (N,1), matching the kind of vector b is. If
    b.shape[0] is not M, an error is raised.

    If a.shape==(N) or (1,N), and b.shape==(N,M), this is treated as a
    vector-matrix multiply: c_j = a_i b_ij. The return value c will
    have shape (M) or (1,M), matching the kind of vector a is. If
    b.shape[0] is not N, an error is raised.

    TODO: scalar cases...
    TODO: higher dimensional cases...
    """

The general idea is to allow both matrices and arrays to be treated like linear (or tensor) algebra thingies. It could be restricted to just matrix linear alg, but you might as well go ahead and support higher rank entities while you're at it. I think there's a simple succinct statement which could cover all the above rules that you'll probably find in a tensor algebra text book. Basically from memory it's this (in a hodgepodge of notations):

Let A be rank-N with shape (s1, ..., sN)
Let B be rank-M with shape (t1, ..., tM)
For the tensor product A * B to exist, we must have sN==t1
Let i1,...,iN be indices for A, and j1,...,jM be indices for B. Then
C[i1,..., iN-1, j2, ..., jM] = dot( A[i1, ..., iN-1, :], B[:, j2, ..., jM] )

Where the ...'s are mathematical ...'s --- meaning I left the middle part out -- not python syntax. I'd have to think about it some more, but I think this fundamental definition from tensor algebra automatically covers all the cases enumerated in the above docstring, including returning shape (N,1) for (N,M)*(M,1) versus just (N) for (N,M)*(M). Oh, there's one more twist needed to cover transposes. So actually it should be allowable to do (M,N)*(M,P). It just means A.transpose() * B. Hmm, the more I think about it, the more I think the idea of a separate matrix class in numpy is flawed. It seems to exist essentially for the sole purpose of overloading *, but that idea itself seems to be questionable as argued above. If you don't want to use the overloaded *, what benefit remains? Maybe it's this: with array you can easily end up with an (N) array, and you want to know if it is (N,1) or (1,N). The thing is, tensor algebra gets along quite fine without that distinction.
If you multiply an (N) times a (M,N), there's only one thing that makes sense. Hmm, OK, but (N) times (N,N) could mean either, yeah. Well in that case I'd say the mult() above should prefer the version that keeps the arguments in the order presented. So in the presence of ambiguity: mult(rand(N),rand(N,N)) --> row-vec * matrix mult(rand(N,N),rand(N)) --> matrix * column vec That puts the burden of remembering whether you meant to have a row vector or a column vector on the user. Then again, you can always explicitly tack on the extra axis to remind yourself, if you want it there. There is also potential ambiguity if you allow tensor-algebra-like automatic transposes: mult(rand(N,M),rand(N,P)) --> ok, means (N,M)^t * (N,P) mult(rand(N,M),rand(P,M)) --> ok, means (N,M) * (P,M)^t mult(rand(N,M),rand(M,N)) --> ambiguous, but return (N,M) * (M,N) [ could also mean (N,M)^t * (M,N)^t ] mult(rand(N,M),rand(N,M)) --> ambiguous, not clear which is better: (N,M)^t * (N,M) or (N,M) * (N,M)^t ... These ambiguities seem to suggest it would be better not to allow automatic transposes, or maybe to allow them only for axes of size 1, i.e. in the (1,N) or (N,1) case. So this would work: mult(rand(N,1),rand(N,N)) --> row-vec * matrix mult(rand(N,N),rand(N,1)) --> matrix * column vec So what do you think? My gut tells me that tensor math on arrays fits well with the numpy.array philosophy. E.g. a vector is a rank-1 entity, not a rank-2 entity with one of its dimensions being 1. So any thoughts? --bb PS. Sorry, Keith, for going off on a tangent like this... On 4/28/06, Keith Goodman wrote: > > Do most operations on matrices return matrices? In porting Octave > code, I noticed that the repmat of a matrix is an array. > > I decided to use matrices simply because that's what they are called > in Octave. What do you recommend for new users who have a background > in Octave, matrix or array? Do people generally pick one and not use > the other?
> > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > -- William V. Baxter III OLM Digital Kono Dens Building Rm 302 1-8-8 Wakabayashi Setagaya-ku Tokyo, Japan 154-0023 +81 (3) 3422-3380 -------------- next part -------------- An HTML attachment was scrubbed... URL: From wbaxter at gmail.com Thu Apr 27 22:27:05 2006 From: wbaxter at gmail.com (Bill Baxter) Date: Fri, 28 Apr 2006 11:27:05 +0900 Subject: [SciPy-user] repmat of matrix returns array In-Reply-To: References: Message-ID: Just one quick addendum that occurred to me... On 4/28/06, Bill Baxter wrote: > > ... Basically from memory it's this (in a hodgepodge of notations): > Let A be rank-N with shape (s1, ..., sN) > Let B be rank-M with shape (t1, ..., tM) > For the tensor product A * B to exist, we must have sN==t1 > Let i1,...,iN be indices for A, and j1,...,jM be indices for B. > Then > C[i1,..., iN-1, j2, ..., jM] = dot( A[i1, ..., iN-1, :], > B[: , j2, ..., jM] ) > > Where the ...'s are mathematical ...'s --- meaning I left the middle part > out -- not python syntax. > I'd have to think about it some more, but I think this fundamental > definition from tensor algebra automatically covers all the cases enumerated > in the above docstring, including returning shape (N,1) for (N,M)*(M,1) > versus just (N) for (N,M)*(M). Oh, there's one more twist needed to cover > transposes. So actually it should be allowable to do (M,N)*(M,P). It just > means A.transpose() * B. > Actually, thinking about it more, there are plenty more variations on the tensor product for higher rank tensors. For instance with a.shape==(N,N), and b.shape==(N,N), the dummy indices can appear pretty much anywhere and there can be any number of them.
Some examples from math: 1) c_ik = a_ij * b_jk --- normal matrix multiply a * b 2) c_ik = a_ji * b_jk --- transpose multiply a^t * b 3) c = a_ij * b_ij --- element-wise product and sum of all elements (aka trace of a^t * b) 4) c_ijkl = a_ij * b_kl --- forgot what to call this, but it creates a rank-4 tensor from two rank-2's. 5) c_kilj = a_ij * b_kl --- same as above, just with the axes in the result swapped around. So what I'm thinking is that the first behavior should be the default assumed by the 'linalg.mult()' function. That is, assume there's one dummy index, that it's on the last axis of a and the first axis of b, and that the result comes from tacking the remaining non-dummy axes of a and b together in order (so no transposes on the result). However, it would be nice to be able to use the full generality of tensor index notation to specify tensor products when wanted. I can think of a couple of ways you might want to specify the dummies. First, as a pair of axes (or a list of pairs). So mult(a, b, (n1,n2)) would mean that axis n1 of a and axis n2 of b share a dummy index that is summed over. Trying it out on the above examples would look like this: 1) mult(a, b, (1,0)), or maybe also mult(a, b, (-1,0)), meaning the last axis of a and the first axis of b. This could be the default value of the extra parameter. 2) mult(a, b, (0,0)) 3) mult(a, b, [(0,0), (1,1)]) 4) mult(a, b, []) 5) (not possible with this notation, but could be done as mult(a, b, []) followed by some swapaxes calls.) The other option is to allow something like the mathematical notation to be used. This would require a little parsing, but it would definitely be handy, I think: 1) mult(a, b, "ij*jk") 2) mult(a, b, "ji*jk") 3) mult(a, b, "ij*ij") 4) mult(a, b, "ij*kl") 5) mult(a, b, "kilj=ij*kl") The first option would be easier to use if you just want something like "use dummies for the last and second-to-last dimensions". Then you could just say [(-1,-1),(-2,-2)] and it will work for tensors of any rank >= 2.
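For what it's worth, both specification styles sketched here map onto tools NumPy provides (or later grew): numpy.tensordot accepts axis-pair lists much like the first option, and numpy.einsum — added in later NumPy releases, so treat it as a sketch of where the idea ended up rather than something available in a 2006 install — accepts index strings essentially identical to the second. The five examples above, in those terms:

```python
import numpy as np

a = np.arange(4.0).reshape(2, 2)        # [[0, 1], [2, 3]]
b = np.arange(4.0, 8.0).reshape(2, 2)   # [[4, 5], [6, 7]]

# 1) c_ik = a_ij b_jk -- ordinary matrix multiply
c1 = np.tensordot(a, b, axes=([1], [0]))              # axis-pair style
assert np.allclose(c1, np.einsum("ij,jk->ik", a, b))  # index-string style
assert np.allclose(c1, np.dot(a, b))

# 2) c_ik = a_ji b_jk -- transpose multiply, a^t * b
c2 = np.tensordot(a, b, axes=([0], [0]))
assert np.allclose(c2, np.einsum("ji,jk->ik", a, b))

# 3) c = a_ij b_ij -- full contraction, i.e. trace(a^t * b)
c3 = np.tensordot(a, b, axes=([0, 1], [0, 1]))
assert np.allclose(c3, np.einsum("ij,ij->", a, b))

# 4) c_ijkl = a_ij b_kl -- outer product: a rank-4 tensor from two rank-2's
c4 = np.tensordot(a, b, axes=0)
assert np.allclose(c4, np.einsum("ij,kl->ijkl", a, b))

# 5) c_kilj = a_ij b_kl -- einsum can also reorder the output axes directly
c5 = np.einsum("ij,kl->kilj", a, b)
assert np.allclose(c5, c4.transpose(2, 0, 3, 1))
```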
The latter option would be easier if you're working with known, fixed, smallish-rank entities. Plus it would be much easier to use if you're just transcribing some tensor math from your notebook into numpy. --Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From strawman at astraw.com Thu Apr 27 23:40:19 2006 From: strawman at astraw.com (Andrew Straw) Date: Thu, 27 Apr 2006 20:40:19 -0700 Subject: [SciPy-user] install - floating point exception (scipy-tests) In-Reply-To: <1146128926.44508a1e8648a@ssl0.ovh.net> References: <1146128926.44508a1e8648a@ssl0.ovh.net> Message-ID: <44518EA3.6050001@astraw.com> python at axtom.com wrote: >Hi all, > >I have installed scipy, and have some fp exceptions when running the tests. >Does anyone else experiment the same thing? > > GNU libc version 2.3.2 has a bug "feclearexcept() error on CPUs with SSE" (fixed in 2.3.3) which has been submitted to Debian but not fixed in sarge. See the following URL for more info, including links to the above bug reports and patched .debs: http://www.its.caltech.edu/~astraw/coding.html#id3 From gnchen at cortechs.net Fri Apr 28 10:53:04 2006 From: gnchen at cortechs.net (Gennan Chen) Date: Fri, 28 Apr 2006 07:53:04 -0700 Subject: [SciPy-user] marching cubes / isosurface in scipy? Message-ID: <2DEA0A99-DA7E-40AA-8104-B20F43517C51@cortechs.net> Hi! Is there an algorithm like marching cubes or isosurface extraction in scipy? If not, is anyone working on this type of algorithm? Since my levelset algorithms depend on that, I might need to DIY if no one is working on it.
Gen-Nan Chen, PhD Chief Scientist Research and Development Group CorTechs Labs Inc (www.cortechs.net) 1020 Prospect St., #304, La Jolla, CA, 92037 Tel: 1-858-459-9700 ext 16 Fax: 1-858-459-9705 Email: gnchen at cortechs.net From jelle.feringa at ezct.net Fri Apr 28 11:04:34 2006 From: jelle.feringa at ezct.net (Jelle Feringa / EZCT Architecture & Design Research) Date: Fri, 28 Apr 2006 17:04:34 +0200 Subject: [SciPy-user] marching cubes / isosurface in scipy? In-Reply-To: <2DEA0A99-DA7E-40AA-8104-B20F43517C51@cortechs.net> Message-ID: <017c01c66ad5$11fdc5a0$0b01a8c0@JELLE> Dear Gen-Nan, I know FiPy has some levelset support. I haven't used it, but perhaps it could be of good use to you. -jelle -----Original Message----- From: scipy-user-bounces at scipy.net [mailto:scipy-user-bounces at scipy.net] On Behalf Of Gennan Chen Sent: Friday, April 28, 2006 4:53 PM To: SciPy Users List Subject: [SciPy-user] marching cubes / isosurface in scipy? Hi! Is there a algorithm like machine cubes or isosurface in scipy? If not, is anyone working on this type of algorithms? Since my levelset algorithms depends on that, I might need to DIY if no one is working on that. Gen-Nan Chen, PhD Chief Scientist Research and Development Group CorTechs Labs Inc (www.cortechs.net) 1020 Prospect St., #304, La Jolla, CA, 92037 Tel: 1-858-459-9700 ext 16 Fax: 1-858-459-9705 Email: gnchen at cortechs.net _______________________________________________ SciPy-user mailing list SciPy-user at scipy.net http://www.scipy.net/mailman/listinfo/scipy-user From jdhunter at ace.bsd.uchicago.edu Fri Apr 28 11:39:24 2006 From: jdhunter at ace.bsd.uchicago.edu (John Hunter) Date: Fri, 28 Apr 2006 10:39:24 -0500 Subject: [SciPy-user] marching cubes / isosurface in scipy? 
In-Reply-To: <2DEA0A99-DA7E-40AA-8104-B20F43517C51@cortechs.net> (Gennan Chen's message of "Fri, 28 Apr 2006 07:53:04 -0700") References: <2DEA0A99-DA7E-40AA-8104-B20F43517C51@cortechs.net> Message-ID: <873bfxvheb.fsf@peds-pc311.bsd.uchicago.edu> >>>>> "Gennan" == Gennan Chen writes: Gennan> Hi! Is there a algorithm like machine cubes or isosurface Gennan> in scipy? If not, is anyone working on this type of Gennan> algorithms? Since my levelset algorithms depends on that, Gennan> I might need to DIY if no one is working on that. VTK has a marching cubes algorithm, and a python interface #!/usr/local/bin/python import os from vtk import * from colors import * ren = vtkRenderer() renWin = vtkRenderWindow() renWin.AddRenderer(ren) iren = vtkRenderWindowInteractor() iren.SetRenderWindow(renWin) # create pipeline # v16 = vtkVolume16Reader() v16.SetDataDimensions(256,256) v16.GetOutput().SetOrigin(0.0,0.0,0.0) v16.SetFilePrefix( os.environ['HOME'] + "/python/examples/vtk/images/r") v16.SetFilePattern( '%s%d.ima') v16.SetDataByteOrderToBigEndian() v16.SetImageRange(1001,1060) v16.SetDataSpacing(1.0,1.0,3.5) v16.Update() #vtkImageMarchingCubes iso iso = vtkMarchingCubes() iso.SetInput(v16.GetOutput()) iso.SetValue(0,30) #120 vessles near cerebellum #100 cortex #20 face #iso SetStartMethod {puts "Start Marching"} isoMapper = vtkPolyDataMapper() isoMapper.SetInput(iso.GetOutput()) isoMapper.ScalarVisibilityOff() isoActor = vtkActor() isoActor.SetMapper(isoMapper) isoActor.GetProperty().SetColor(antique_white) outline = vtkOutlineFilter() outline.SetInput(v16.GetOutput()) outlineMapper = vtkPolyDataMapper() outlineMapper.SetInput(outline.GetOutput()) outlineActor = vtkActor() outlineActor.SetMapper(outlineMapper) outlineActor.VisibilityOff() # Add the actors to the renderer, set the background and size # ren.AddActor(outlineActor) ren.AddActor(isoActor) ren.SetBackground(0.2,0.3,0.4) renWin.SetSize(450,450) ## ren.GetActiveCamera().Elevation(235) ## 
ren.GetActiveCamera().SetViewUp(0,.5,-1) ## ren.GetActiveCamera().Azimuth(90) iren.Initialize() iren.Start() From python at axtom.com Fri Apr 28 12:03:39 2006 From: python at axtom.com (python at axtom.com) Date: Fri, 28 Apr 2006 18:03:39 +0200 Subject: [SciPy-user] install - floating point exception (scipy-tests) Message-ID: <1146240219.44523cdb5a3d8@ssl0.ovh.net> Hi all, Thank you Andrew, my processor does indeed have SSE support. It is an AMD Opteron 64-bit. I will install a patched libc6 debian package from your site, and let you know. Jean Pierre python at axtom.com wrote: >Hi all, > >I have installed scipy, and have some fp exceptions when running the tests. >Does anyone else experiment the same thing? > > GNU libc version 2.3.2 has a bug "feclearexcept() error on CPUs with SSE" (fixed in 2.3.3) which has been submitted to Debian but not fixed in sarge. See the following URL for more info, including links to the above bug reports and patched .debs: http://www.its.caltech.edu/~astraw/coding.html#id3 -- Jean Pierre From silesalvarado at hotmail.com Fri Apr 28 12:13:00 2006 From: silesalvarado at hotmail.com (Hugo Siles) Date: Fri, 28 Apr 2006 16:13:00 +0000 Subject: [SciPy-user] test running error Message-ID: Hi, I am trying to do a correct installation of the latest version of scipy (for the first time ever). During the installation everything seems to go quite well, but when I run the test >>>import scipy >>>scipy.test(level=1, verbosity=2) after 700 tests or more (ok), it gives an import clapack error: clapack_sgesv not defined. I am using complete libraries for lapack and atlas, all compiled with Intel Fortran, and I also have the latest version of numpy. I get the same error for all levels (from 1 to 10), usually after about 700 tests ok. I would appreciate any help. Hugo Siles From gnchen at cortechs.net Fri Apr 28 12:16:31 2006 From: gnchen at cortechs.net (Gennan Chen) Date: Fri, 28 Apr 2006 09:16:31 -0700 Subject: [SciPy-user] marching cubes / isosurface in scipy?
In-Reply-To: <873bfxvheb.fsf@peds-pc311.bsd.uchicago.edu> References: <2DEA0A99-DA7E-40AA-8104-B20F43517C51@cortechs.net> <873bfxvheb.fsf@peds-pc311.bsd.uchicago.edu> Message-ID: Thanks! I probably need to dig into VTK more Gen On Apr 28, 2006, at 8:39 AM, John Hunter wrote: >>>>>> "Gennan" == Gennan Chen writes: > > Gennan> Hi! Is there a algorithm like machine cubes or isosurface > Gennan> in scipy? If not, is anyone working on this type of > Gennan> algorithms? Since my levelset algorithms depends on that, > Gennan> I might need to DIY if no one is working on that. > > VTK has a marching cubes algorithm, and a python interface > > > #!/usr/local/bin/python > import os > > from vtk import * > from colors import * > > ren = vtkRenderer() > renWin = vtkRenderWindow() > renWin.AddRenderer(ren) > iren = vtkRenderWindowInteractor() > iren.SetRenderWindow(renWin) > > # create pipeline > # > v16 = vtkVolume16Reader() > v16.SetDataDimensions(256,256) > v16.GetOutput().SetOrigin(0.0,0.0,0.0) > v16.SetFilePrefix( > os.environ['HOME'] + "/python/examples/vtk/images/r") > v16.SetFilePattern( '%s%d.ima') > v16.SetDataByteOrderToBigEndian() > v16.SetImageRange(1001,1060) > v16.SetDataSpacing(1.0,1.0,3.5) > v16.Update() > > #vtkImageMarchingCubes iso > iso = vtkMarchingCubes() > iso.SetInput(v16.GetOutput()) > iso.SetValue(0,30) > #120 vessles near cerebellum > #100 cortex > #20 face > #iso SetStartMethod {puts "Start Marching"} > > > > isoMapper = vtkPolyDataMapper() > isoMapper.SetInput(iso.GetOutput()) > isoMapper.ScalarVisibilityOff() > > isoActor = vtkActor() > isoActor.SetMapper(isoMapper) > isoActor.GetProperty().SetColor(antique_white) > > outline = vtkOutlineFilter() > outline.SetInput(v16.GetOutput()) > outlineMapper = vtkPolyDataMapper() > outlineMapper.SetInput(outline.GetOutput()) > outlineActor = vtkActor() > outlineActor.SetMapper(outlineMapper) > outlineActor.VisibilityOff() > > # Add the actors to the renderer, set the background and size > # > 
ren.AddActor(outlineActor) > ren.AddActor(isoActor) > ren.SetBackground(0.2,0.3,0.4) > renWin.SetSize(450,450) > ## ren.GetActiveCamera().Elevation(235) > ## ren.GetActiveCamera().SetViewUp(0,.5,-1) > ## ren.GetActiveCamera().Azimuth(90) > > > iren.Initialize() > > iren.Start() > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user > From strawman at astraw.com Fri Apr 28 14:11:32 2006 From: strawman at astraw.com (Andrew Straw) Date: Fri, 28 Apr 2006 11:11:32 -0700 Subject: [SciPy-user] install - floating point exception (scipy-tests) In-Reply-To: <1146240219.44523cdb5a3d8@ssl0.ovh.net> References: <1146240219.44523cdb5a3d8@ssl0.ovh.net> Message-ID: <44525AD4.9090209@astraw.com> python at axtom.com wrote: >Hi all, > >Thank you Andrew, my processor has indeed SSE support. It is an AMD Opteron >64-bit. >I will install a patched libc6 debian package from your site, and let you know. > > Hmm. Are you running with amd64 architecture or the i386? I haven't found that patch to be necessary on amd64. If it is, you'd have to rebuild the .debs -- the binaries are only compiled for i386. (The bug certainly does affect the i386 architecture, though...) From sransom at nrao.edu Fri Apr 28 14:16:14 2006 From: sransom at nrao.edu (Scott Ransom) Date: Fri, 28 Apr 2006 14:16:14 -0400 Subject: [SciPy-user] install - floating point exception (scipy-tests) In-Reply-To: <44525AD4.9090209@astraw.com> References: <1146240219.44523cdb5a3d8@ssl0.ovh.net> <44525AD4.9090209@astraw.com> Message-ID: <200604281416.14919.sransom@nrao.edu> Just another data point: I'm running new numpy/scipy on a cluster of Opterons with Debian AMD64 unstable (with the AMD64 Atlas .debs) and have not seen this problem. Scott On Friday 28 April 2006 14:11, Andrew Straw wrote: > python at axtom.com wrote: > >Hi all, > > > >Thank you Andrew, my processor has indeed SSE support. It is an AMD > > Opteron 64-bit. 
> >I will install a patched libc6 debian package from your site, and let > > you know. > > Hmm. Are you running with amd64 architecture or the i386? I haven't > found that patch to be necessary on amd64. If it is, you'd have to > rebuild the .debs -- the binaries are only compiled for i386. (The bug > certainly does affect the i386 architecture, though...) > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.net > http://www.scipy.net/mailman/listinfo/scipy-user -- Scott M. Ransom Address: NRAO Phone: (434) 296-0320 520 Edgemont Rd. email: sransom at nrao.edu Charlottesville, VA 22903 USA GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 From skip at pobox.com Fri Apr 28 15:43:25 2006 From: skip at pobox.com (skip at pobox.com) Date: Fri, 28 Apr 2006 14:43:25 -0500 Subject: [SciPy-user] marching cubes / isosurface in scipy? In-Reply-To: <2DEA0A99-DA7E-40AA-8104-B20F43517C51@cortechs.net> References: <2DEA0A99-DA7E-40AA-8104-B20F43517C51@cortechs.net> Message-ID: <17490.28765.222528.13852@montanaro.dyndns.org> Gen-nan> Is there a algorithm like machine cubes or isosurface in scipy? Gen-nan> If not, is anyone working on this type of algorithms? Since my Gen-nan> levelset algorithms depends on that, I might need to DIY if no Gen-nan> one is working on that. You might ask around the VTK community. There's probably an implementation there. Skip From Tony.Mannucci at jpl.nasa.gov Sat Apr 29 04:27:46 2006 From: Tony.Mannucci at jpl.nasa.gov (Tony Mannucci) Date: Sat, 29 Apr 2006 01:27:46 -0700 Subject: [SciPy-user] scipy fails on OS X 10.4 Message-ID: I downloaded the binary version of scipy from www.scipy.org/Download for OS X 10.4. I installed in the usual OS X way (from .dmg package file). 
I get the "old" error I used to see: >>> import scipy >>> scipy.test(level=1) import signal -> failed: Failure linking new module: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/fftpack/_fftpack.so: Symbol not found: _fprintf$LDBLStub Referenced from: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/scipy/fftpack/_fftpack.so Expected in: dynamic lookup Versions: >>> scipy.__version__ '0.4.8' >>> scipy.__numpy_version__ '0.9.6' I have selected gcc 3.3 using gcc_select. I had this problem before when I built from source, and it had to do with the gcc version I was using along with g77. Thanks for your help. -Tony -- Tony Mannucci Supervisor, Ionospheric and Atmospheric Remote Sensing Group Mail-Stop 138-308, Tel > (818) 354-1699 Jet Propulsion Laboratory, Fax > (818) 393-5115 California Institute of Technology, Email > Tony.Mannucci at jpl.nasa.gov 4800 Oak Grove Drive, http://genesis.jpl.nasa.gov Pasadena, CA 91109 From python at axtom.com Sat Apr 29 13:32:12 2006 From: python at axtom.com (python at axtom.com) Date: Sat, 29 Apr 2006 19:32:12 +0200 Subject: [SciPy-user] install - floating point exception (sc Message-ID: <1146331932.4453a31c3f6a3@ssl0.ovh.net> Hi all, The processor is used in 32-bit mode. Andrew Straw wrote >Hmm. Are you running with amd64 architecture or the i386? I haven't >found that patch to be necessary on amd64.
If it is, you'd have to >rebuild the .debs -- the binaries are only compiled for i386. (The bug >certainly does affect the i386 architecture, though...) Jean Pierre From python at axtom.com Sun Apr 30 10:28:37 2006 From: python at axtom.com (python at axtom.com) Date: Sun, 30 Apr 2006 16:28:37 +0200 Subject: [SciPy-user] install - floating point exception (sc Message-ID: <1146407317.4454c9959c152@ssl0.ovh.net> Hi, After installing the patched libraries, the FP exceptions are gone. Thanks again, Jean Pierre Jean Pierre DENYS wrote >The processor is used in 32-bit mode. Andrew Straw wrote >>Hmm. Are you running with amd64 architecture or the i386? I haven't >>found that patch to be necessary on amd64. If it is, you'd have to >>rebuild the .debs -- the binaries are only compiled for i386. (The bug >>certainly does affect the i386 architecture, though...) -- Jean Pierre
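A footnote on this thread: the exceptions above turned out to be a glibc bug rather than anything in scipy itself, but NumPy also has its own portable floating-point error handling (numpy.seterr / numpy.geterr), which is useful for telling library-level FP errors apart from hard crashes like the one reported here. A short sketch, assuming a reasonably recent NumPy:

```python
import numpy as np

# Ask NumPy to raise Python exceptions on divide/invalid FP errors,
# saving the previous settings so they can be restored afterwards.
old = np.seterr(divide="raise", invalid="raise")
try:
    np.array([1.0]) / np.array([0.0])
    caught = False
except FloatingPointError:
    caught = True
assert caught  # the zero-division was trapped as a Python exception

# Restore the previous error-handling state (the default is to warn).
np.seterr(**old)
assert np.geterr() == old
```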