From yosefmel at post.tau.ac.il Tue Jul 1 01:09:08 2008
From: yosefmel at post.tau.ac.il (Yosef Meller)
Date: Tue, 1 Jul 2008 08:09:08 +0300
Subject: [SciPy-user] single elements and arrays
In-Reply-To: <3d375d730806301157t13d8d3eejc62b1f9ced005ed7@mail.gmail.com>
References: <02AA0899-7A57-4CD3-8DF3-388135BD5D0A@columbia.edu> <3d375d730806301157t13d8d3eejc62b1f9ced005ed7@mail.gmail.com>
Message-ID: <200807010809.08232.yosefmel@post.tau.ac.il>

On Monday 30 June 2008 21:57:18 Robert Kern wrote:
> On Mon, Jun 30, 2008 at 13:14, Gideon Simpson wrote:
> > foo finds a root where x is a parameter in the equation to be solved.
> > If x is an array, I iterate through the elements of the array.
>
> In that case, just special case it. Use numpy.isscalar() to do the test.

Or else, put this at the beginning:
x = numpy.atleast_1d(x)

From robert.kern at gmail.com Tue Jul 1 01:13:58 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Tue, 1 Jul 2008 00:13:58 -0500
Subject: [SciPy-user] single elements and arrays
In-Reply-To: <200807010809.08232.yosefmel@post.tau.ac.il>
References: <02AA0899-7A57-4CD3-8DF3-388135BD5D0A@columbia.edu> <3d375d730806301157t13d8d3eejc62b1f9ced005ed7@mail.gmail.com> <200807010809.08232.yosefmel@post.tau.ac.il>
Message-ID: <3d375d730806302213w7877cd8dl6f94540a66ac0da5@mail.gmail.com>

On Tue, Jul 1, 2008 at 00:09, Yosef Meller wrote:
> On Monday 30 June 2008 21:57:18 Robert Kern wrote:
>> On Mon, Jun 30, 2008 at 13:14, Gideon Simpson wrote:
>> > foo finds a root where x is a parameter in the equation to be solved.
>> > If x is an array, I iterate through the elements of the array.
>>
>> In that case, just special case it. Use numpy.isscalar() to do the test.
>
> Or else, put this at the beginning:
> x = numpy.atleast_1d(x)

Presumably, he also wants to return a scalar if given a scalar. The general outline would probably look like this:

import numpy

def foo(x):
    xisscalar = numpy.isscalar(x)
    x = numpy.atleast_1d(x)
    y = ...
    if xisscalar:
        y = y[0]
    return y

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
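For concreteness, here is a runnable version of that outline; the squaring is only a stand-in for the real per-element computation (e.g. a call to a root finder):

import numpy

def foo(x):
    xisscalar = numpy.isscalar(x)
    x = numpy.atleast_1d(x)
    y = x ** 2          # placeholder for the actual per-element work
    if xisscalar:
        y = y[0]        # unwrap so a scalar in gives a scalar out
    return y

foo(3.0)                     # -> 9.0
foo(numpy.array([1., 2.]))   # -> array([ 1.,  4.])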
From peridot.faceted at gmail.com Tue Jul 1 01:49:23 2008
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Tue, 1 Jul 2008 01:49:23 -0400
Subject: [SciPy-user] single elements and arrays
In-Reply-To: <02AA0899-7A57-4CD3-8DF3-388135BD5D0A@columbia.edu>
References: <3d375d730806301052k703db955iac318c6bcfa7f200@mail.gmail.com> <02AA0899-7A57-4CD3-8DF3-388135BD5D0A@columbia.edu>
Message-ID:

2008/6/30 Gideon Simpson :
> foo finds a root where x is a parameter in the equation to be solved.
> If x is an array, I iterate through the elements of the array.

If you're actually *iterating*, in the sense of using a python loop to go through every element, there will be substantial overhead - something like a few tens or hundreds of floating-point operations for every trip through the loop - involved in looping. If the function itself is reasonably fast and if you can write it in terms of vectors, that will be much faster. It may also, with some care, operate transparently on scalars.

However, if you can't - let's say it's a numerical root-finding using brentq - then there's a handy tool to provide python-level looping: the vectorize decorator:

@np.vectorize
def acos(y):
    return scipy.optimize.brentq(lambda x: y-np.cos(x),0,np.pi)

This makes acos behave a little like a ufunc from the user's point of view: you can hand it a scalar or an array of arbitrary dimensionality and it will iterate as appropriate. The iteration passes through python, necessarily since the function being wrapped is in python, so it won't be fast, but it is convenient.

Anne

From eiffleduarte at gmail.com Tue Jul 1 09:30:07 2008
From: eiffleduarte at gmail.com (Marcus Vinicius Eiffle Duarte)
Date: Tue, 1 Jul 2008 10:30:07 -0300
Subject: [SciPy-user] Is there a way to use Sundials with scipy?
Message-ID: <280253810807010630k2e4fbdfey7018aa55641badfb@mail.gmail.com>

I am a new user of both python and scipy and was searching for a way to use sundials with scipy.
I found some old messages in the history of the mailing list dating back to 2004, but nothing indicating whether such a wrapper was effectively developed.
At present, is it possible to use sundials with scipy?

Thanks,

Marcus Vinicius Eiffle Duarte

From cimrman3 at ntc.zcu.cz Tue Jul 1 10:13:46 2008
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Tue, 01 Jul 2008 16:13:46 +0200
Subject: [SciPy-user] ANN: SfePy 00.46.02
Message-ID: <486A3B9A.8060106@ntc.zcu.cz>

I am pleased to announce the release of SfePy 00.46.02.

SfePy is finite element analysis software in Python, based primarily on Numpy and SciPy.

Mailing lists, issue tracking, mercurial repository: http://sfepy.org
Home page: http://sfepy.kme.zcu.cz

Major improvements:
- alternative short syntax for specifying essential boundary conditions, variables and regions
- manufactured solutions tests:
  - SymPy support
- site configuration now via script/config.py + site_cfg.py
- new solvers
- new terms

For more information on this release, see
http://sfepy.googlecode.com/svn/web/releases/004602_RELEASE_NOTES.txt

If you happen to come to Leipzig for EuroSciPy 2008, see you there!

Best regards,
Robert Cimrman & SfePy developers

From nwagner at iam.uni-stuttgart.de Tue Jul 1 13:04:18 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 01 Jul 2008 19:04:18 +0200
Subject: [SciPy-user] Is there a way to use Sundials with scipy?
In-Reply-To: <280253810807010630k2e4fbdfey7018aa55641badfb@mail.gmail.com>
References: <280253810807010630k2e4fbdfey7018aa55641badfb@mail.gmail.com>
Message-ID:

On Tue, 1 Jul 2008 10:30:07 -0300 "Marcus Vinicius Eiffle Duarte" wrote:
> I am a new user of both python and scipy and was searching for a way to use
> sundials with scipy.
> I found some old messages in the history of the mailing list dating back to
> 2004, but nothing indicating whether such a wrapper was effectively
> developed.
> At present, is it possible to use sundials with scipy?
>
> Thanks,
>
> Marcus Vinicius Eiffle Duarte

Hi Marcus,

you can use

svn co https://pysundials.svn.sourceforge.net/svnroot/pysundials/ pysundials

to get the latest version of pysundials. Sundials is required of course.

* Download and untar the complete SUNDIALS suite.
* $ ./configure --enable-shared
* $ make && make install
* $ python setup.py install

Further information is given in pysundials/doc/README

Cheers,
Nils

From travis at enthought.com Wed Jul 2 10:32:43 2008
From: travis at enthought.com (Travis Vaught)
Date: Wed, 2 Jul 2008 09:32:43 -0500
Subject: [SciPy-user] Enthought Python Distribution
Message-ID: <78A3D219-9C3C-4B4C-BA77-8B63ED1191BF@enthought.com>

Greetings,

We're pleased to announce the beta release of the Enthought Python Distribution for *Mac OS X*.

http://www.enthought.com/products/epd.php

This release should safely install alongside other existing Python installations on your Mac. With the Mac OS X platform support, EPD now provides a consistent scientific application tool set across three major platforms (Windows, RedHat Linux (32 and 64 bit) and OS X).

This is a _beta_ release, so install at your own risk. Please provide any feedback to info at enthought.com. See the included EPD Readme.txt for instructions and known issues.

About EPD
---------
The Enthought Python Distribution (EPD) is a "kitchen-sink-included" distribution of the Python™ Programming Language, including over 60 additional tools and libraries. The EPD bundle includes the following major packages:

Python                        Core Python
NumPy                         Multidimensional arrays and fast numerics for Python
SciPy                         Scientific Library for Python
Enthought Tool Suite (ETS)    A suite of tools including:
    Traits                    Manifest typing, validation, visualization, delegation, etc.
    Mayavi                    3D interactive data exploration environment.
    Chaco                     Advanced 2D plotting toolkit for interactive 2D visualization.
    Kiva                      2D drawing library in the spirit of DisplayPDF.
    Enable                    Object-based canvas for interacting with 2D components and widgets.
Matplotlib                    2D plotting library
wxPython                      Cross-platform windowing and widget library.
Visualization Toolkit (VTK)   3D visualization framework

There are many more included packages as well. There's a complete list here:

http://www.enthought.com/products/epdlibraries.php

License
-------
EPD is a bundle of software--every piece of which is available for free under various open-source licenses. The bundle itself is offered as a free download for academic and individual hobbyist use. Commercial and non-degree granting institutions and agencies may purchase individual subscriptions for the bundle (http://www.enthought.com/products/order.php?ver=MacOSX) or contact Enthought to discuss an Enterprise license (http://www.enthought.com/products/enterprise.php).

Please see the FAQ for further explanation about how the software came together. (http://www.enthought.com/products/epdfaq.php)

Thanks,

Travis

From fperez.net at gmail.com Wed Jul 2 13:10:10 2008
From: fperez.net at gmail.com (Fernando Perez)
Date: Wed, 2 Jul 2008 10:10:10 -0700
Subject: [SciPy-user] Enthought Python Distribution
In-Reply-To: <78A3D219-9C3C-4B4C-BA77-8B63ED1191BF@enthought.com>
References: <78A3D219-9C3C-4B4C-BA77-8B63ED1191BF@enthought.com>
Message-ID:

Hey Travis,

On Wed, Jul 2, 2008 at 7:32 AM, Travis Vaught wrote:
> Greetings,
>
> We're pleased to announce the beta release of the Enthought Python
> Distribution for *Mac OS X*.

First of all, this is *fantastic*. OSX has been a major pain for us here in terms of distribution, so I am extremely happy and grateful to you guys for putting this out.

Two minor requests: any chance in the next release, you could update ipython and matplotlib?
You are shipping 0.8.1 and 0.91.2, both of which are fairly old and the current versions of both do have reasonably useful improvements and fixes (ipython has threading fixes that impact plotting, and mpl has all the new mathtext code that gives good math rendering without needing latex). In any case, a big cheer from me :) Regards, f From grs2103 at columbia.edu Wed Jul 2 23:19:51 2008 From: grs2103 at columbia.edu (Gideon Simpson) Date: Wed, 2 Jul 2008 23:19:51 -0400 Subject: [SciPy-user] 0.7 test results Message-ID: On an OS X 10.5.4 machine with fink python 2.5.2 and numpy 1.1.0, I get the following output when running scipy.test() with scipy 0.7.0.dev4518: /opt/lib/python2.5/site-packages/scipy/sparse/linalg/dsolve/ linsolve.py:20: DeprecationWarning: scipy.sparse.linalg.dsolve.umfpack will be removed, install scikits.umfpack instead ' install scikits.umfpack instead', DeprecationWarning ) /opt/lib/python2.5/site-packages/scipy/linsolve/__init__.py:4: DeprecationWarning: scipy.linsolve has moved to scipy.sparse.linalg.dsolve warn('scipy.linsolve has moved to scipy.sparse.linalg.dsolve', DeprecationWarning) /opt/lib/python2.5/site-packages/scipy/splinalg/__init__.py:3: DeprecationWarning: scipy.splinalg has moved to scipy.sparse.linalg warn('scipy.splinalg has moved to scipy.sparse.linalg', DeprecationWarning) ....(1, 1) (0,) (2, 2) (1,) (3, 3) (3,) (4, 4) (6,) (5, 5) (10,) (6, 6) (15,) (7, 7) (21,) (8, 8) (28,) (9, 9) (36,) ................/opt/lib/python2.5/site-packages/scipy/cluster/vq.py: 570: UserWarning: One of the clusters is empty. Re-run kmean with a different initialization. warnings.warn("One of the clusters is empty. " exception raised as expected: One of the clusters is empty. Re-run kmean with a different initialization. ................................................................./opt/ lib/python2.5/site-packages/scipy/interpolate/fitpack2.py:479: UserWarning: The coefficients of the spline returned have been computed as the minimal norm least-squares solution of a (numerically) rank deficient system (deficiency=7). If deficiency is large, the results may be inaccurate. Deficiency may strongly depend on the value of eps. warnings.warn(message) ..../opt/lib/python2.5/site-packages/scipy/interpolate/fitpack2.py: 420: UserWarning: The required storage space exceeds the available storage space: nxest or nyest too small, or s too small. The weighted least-squares spline corresponds to the current set of knots. warnings.warn(message) ..................................................................................... Don't worry about a warning regarding the number of bytes read. Warning: 1000000 bytes requested, 20 bytes read. 
./opt/lib/python2.5/site-packages/numpy/lib/utils.py:114: DeprecationWarning: write_array is deprecated warnings.warn(str1, DeprecationWarning) /opt/lib/python2.5/site-packages/numpy/lib/utils.py:114: DeprecationWarning: read_array is deprecated warnings.warn(str1, DeprecationWarning) ....................../opt/lib/python2.5/site-packages/numpy/lib/ utils.py:114: DeprecationWarning: npfile is deprecated warnings.warn(str1, DeprecationWarning) ............................caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 .. **************************************************************** WARNING: clapack module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses flapack instead of clapack. **************************************************************** ..........................................NO ATLAS INFO AVAILABLE ......................................... **************************************************************** WARNING: cblas module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses fblas instead of cblas. **************************************************************** ...............................................................................................caxpy:n =4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 .... **************************************************************** WARNING: clapack module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses flapack instead of clapack. **************************************************************** ...Result may be inaccurate, approximate err = 1.23518201169e-08 ...Result may be inaccurate, approximate err = 7.27595761418e-12 .....SSS ........................................................................................................................................................................................................................................................................................................................................................................................................./opt /lib/python2.5/site-packages/scipy/ndimage/_registration.py:25: UserWarning: The registration code is under heavy development and therefore the public API will change in the future. The NIPY group is actively working on this code, and has every intention of generalizing this for the Scipy community. Use this module minimally, if at all, until it this warning is removed. 
warnings.warn(_msg, UserWarning) ...E..EEE./opt/lib/python2.5/site-packages/scipy/ndimage/_segmenter.py: 30: UserWarning: The segmentation code is under heavy development and therefore the public API will change in the future. The NIPY group is actively working on this code, and has every intention of generalizing this for the Scipy community. Use this module minimally, if at all, until it this warning is removed. warnings.warn(_msg, UserWarning) ...F ................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................F ......................EE ......................................................................................Ties preclude use of exact statistic. ..Ties preclude use of exact statistic. ........................................................../opt/lib/ python2.5/site-packages/numpy/lib/function_base.py:166: FutureWarning: The semantics of histogram will be modified in release 1.2 to improve outlier handling. The new behavior can be obtained using new=True. Note that the new version accepts/ returns the bin edges instead of the left bin edges. Please read the docstring for more information. Please read the docstring for more information.""", FutureWarning) /opt/lib/python2.5/site-packages/numpy/lib/function_base.py:181: FutureWarning: Outliers handling will change in version 1.2. Please read the docstring for details. Please read the docstring for details.""", FutureWarning) ................................................................................................warning : specified build_dir '_bad_path_' does not exist or is not writable. Trying default locations ...warning: specified build_dir '..' does not exist or is not writable. Trying default locations ..warning: specified build_dir '_bad_path_' does not exist or is not writable. Trying default locations ...warning: specified build_dir '..' does not exist or is not writable. Trying default locations ............................building extensions here: /Users/ gideon/.python25_compiled/m11 ................................................................................................ ====================================================================== ERROR: Execute a single test. Returns a success boolean ---------------------------------------------------------------------- TypeError: exec_test() takes exactly 4 arguments (1 given) ====================================================================== ERROR: Run one test with arguments ---------------------------------------------------------------------- TypeError: run_test() takes exactly 3 arguments (1 given) ====================================================================== ERROR: Run many tests with a common setUp/tearDown. 
---------------------------------------------------------------------- TypeError: run_tests() takes exactly 3 arguments (1 given) ====================================================================== ERROR: set_testMethodDoc (test_regression.ParametricTestCase) ---------------------------------------------------------------------- TypeError: set_testMethodDoc() takes exactly 2 arguments (1 given) ====================================================================== ERROR: test_huber (test_scale.TestScale) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/lib/python2.5/site-packages/scipy/stats/models/tests/ test_scale.py", line 35, in test_huber m = scale.huber(X) File "/opt/lib/python2.5/site-packages/scipy/stats/models/robust/ scale.py", line 82, in __call__ for donothing in self: File "/opt/lib/python2.5/site-packages/scipy/stats/models/robust/ scale.py", line 102, in next scale = N.sum(subset * (a - mu)**2, axis=self.axis) / (self.n * Huber.gamma - N.sum(1. - subset, axis=self.axis) * Huber.c**2) File "/opt/lib/python2.5/site-packages/numpy/core/fromnumeric.py", line 994, in sum return sum(axis, dtype, out) TypeError: only length-1 arrays can be converted to Python scalars ====================================================================== ERROR: test_huberaxes (test_scale.TestScale) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/lib/python2.5/site-packages/scipy/stats/models/tests/ test_scale.py", line 40, in test_huberaxes m = scale.huber(X, axis=0) File "/opt/lib/python2.5/site-packages/scipy/stats/models/robust/ scale.py", line 82, in __call__ for donothing in self: File "/opt/lib/python2.5/site-packages/scipy/stats/models/robust/ scale.py", line 102, in next scale = N.sum(subset * (a - mu)**2, axis=self.axis) / (self.n * Huber.gamma - N.sum(1. - subset, axis=self.axis) * Huber.c**2) File "/opt/lib/python2.5/site-packages/numpy/core/fromnumeric.py", line 994, in sum return sum(axis, dtype, out) TypeError: only length-1 arrays can be converted to Python scalars ====================================================================== FAIL: test_texture2 (test_segment.TestSegment) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/lib/python2.5/site-packages/scipy/ndimage/tests/ test_segment.py", line 152, in test_texture2 assert_array_almost_equal(tem0, truth_tem0, decimal=6) File "/opt/lib/python2.5/site-packages/numpy/testing/utils.py", line 255, in assert_array_almost_equal header='Arrays are not almost equal') File "/opt/lib/python2.5/site-packages/numpy/testing/utils.py", line 240, in assert_array_compare assert cond, msg AssertionError: Arrays are not almost equal (mismatch 66.6666666667%) x: array([ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 1.91816598e-01, 1.02515288e-01, 9.30087343e-02,... y: array([ 0. , 0. , 0. , 0. , 0. , 0. , 0.13306101, 0.08511007, 0.05084148, 0.07550675, 0.4334695 , 0.03715914, 0.00289055, 0.02755581, 0.48142046, 0.03137803, 0.00671277, 0.51568902, 0.01795249, 0.49102375, 1. 
], dtype=float32)

======================================================================
FAIL: test_namespace (test_formula.TestFormula)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/opt/lib/python2.5/site-packages/scipy/stats/models/tests/test_formula.py", line 119, in test_namespace
    self.assertEqual(xx.namespace, Y.namespace)
AssertionError: {} != {'Y': array([ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40, 42, 44, 46, 48, 50, 52, 54, 56, 58, 60, 62, 64, 66, 68, 70, 72, 74, 76, 78, 80, 82, 84, 86, 88, 90, 92, 94, 96, 98]), 'X': array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49])}

----------------------------------------------------------------------
Ran 2275 tests in 23.705s

FAILED (SKIP=3, errors=6, failures=2)

From nwagner at iam.uni-stuttgart.de Thu Jul 3 05:24:01 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Thu, 03 Jul 2008 11:24:01 +0200
Subject: [SciPy-user] array manipulation
Message-ID:

Hi all,

How can I remove duplicate entries from an array?

>>> Omega
array([ 157.08,  314.16,  471.24,  157.08,  157.08,  314.16,  314.16,
        471.24,  471.24])

I am looking for an array

Omega_new = array(([157.08, 314.16, 471.24]))

Nils

From robert.kern at gmail.com Thu Jul 3 05:26:27 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 3 Jul 2008 04:26:27 -0500
Subject: [SciPy-user] array manipulation
In-Reply-To:
References:
Message-ID: <3d375d730807030226j6d11aea9t3eb16ab321629272@mail.gmail.com>

On Thu, Jul 3, 2008 at 04:24, Nils Wagner wrote:
> Hi all,
>
> How can I remove duplicate entries from an array?
>
>>>> Omega
> array([ 157.08,  314.16,  471.24,  157.08,  157.08,
>         314.16,  314.16,
>         471.24,  471.24])
>
> I am looking for an array
>
> Omega_new = array(([157.08, 314.16, 471.24]))

numpy.unique1d()

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
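For example (a quick sketch; unique1d() also returns the values sorted):

>>> import numpy
>>> Omega = numpy.array([157.08, 314.16, 471.24, 157.08, 157.08,
...                      314.16, 314.16, 471.24, 471.24])
>>> numpy.unique1d(Omega)
array([ 157.08,  314.16,  471.24])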
From lorenzo.isella at gmail.com Thu Jul 3 10:39:41 2008
From: lorenzo.isella at gmail.com (Lorenzo Isella)
Date: Thu, 3 Jul 2008 16:39:41 +0200
Subject: [SciPy-user] Real Array Expressed as Complex Array
Message-ID:

Dear All,
I am a bit puzzled: I was plotting a (rather complicated) analytical potential for which an analytical form is available.
When asking to print out the value of the potential at the cut-off:

print "at the cut-off, the dimensionless potential takes the value, ", pot_ext_dimensionless[-1]

I got the following:

at the cut-off, the dimensionless potential takes the value, (-6.48829965957e-06+0j)

Now, since the potential I am coding via a function has to be a real function, I checked that the real part was always zero (as it should).
Since the result was that the array pot_ext_dimensionless is real, how come it is expressed as a complex array (though the imaginary part is always zero)?
It is true, however, that the potential could become complex for certain (physically unsound) choices of some parameters.
Any suggestions?
Cheers

Lorenzo

From kwmsmith at gmail.com Thu Jul 3 11:14:16 2008
From: kwmsmith at gmail.com (Kurt Smith)
Date: Thu, 3 Jul 2008 10:14:16 -0500
Subject: [SciPy-user] Real Array Expressed as Complex Array
In-Reply-To:
References:
Message-ID:

On Thu, Jul 3, 2008 at 9:39 AM, Lorenzo Isella wrote:
> Dear All,
> I am a bit puzzled: I was plotting a (rather complicated) analytical
> potential for which an analytical form is available.
> When asking to print out the value of the potential at the cut-off:
>
> print "at the cut-off, the dimensionless potential takes the value, ",
> pot_ext_dimensionless[-1]
>
> I got the following:
>
> at the cut-off, the dimensionless potential takes the value,
> (-6.48829965957e-06+0j)
>
> Now, since the potential I am coding via a function has to be a real
> function, I checked that the real part was always zero (as it should).

I think you mean "...the *imaginary* part was always zero..."

> Since the result was that the array pot_ext_dimensionless is real, how
> come it is expressed as a complex array (though the imaginary
> part is always zero)?

It all depends on how you're calculating the pot_ext_dimensionless array; clearly somewhere in there an operation makes it complex. You'll have to show us how it's calculated.

You can always access the (real,imaginary) part of a complex array with (pot_ext_dimensionless.real, pot_ext_dimensionless.imag)

But be careful, these arrays are not contiguous (they're a view into the complex array). That wrinkle has bitten me before, but I can't quite recall the circumstances. You can always make them contiguous with numpy.ascontiguousarray().

> It is true, however, that the potential could become complex for
> certain (physically unsound) choices of some parameters.
> Any suggestions?
> Cheers
>
> Lorenzo
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
>

From markbak at gmail.com Thu Jul 3 13:15:35 2008
From: markbak at gmail.com (Mark Bakker)
Date: Thu, 3 Jul 2008 19:15:35 +0200
Subject: [SciPy-user] Enthought Python Distribution
Message-ID: <6946b9500807031015j392e8a20ic5b7636208e086d6@mail.gmail.com>

Hello Travis and other Enthought gurus -

I have benefited from the Enthought distribution for years, and really want to switch to a Mac, so this is great news.

My main concern with the Enthought distribution is the frequency of new releases. The guys at Python(x,y) are exceptionally quick at updating. That may be hard to match (and cost a lot of effort). What are your plans for releasing updates? Any chance we can look forward to a new version maybe 4 times a year or so?

Thanks again for putting these together,

Mark

From lorenzo.isella at gmail.com Thu Jul 3 16:27:22 2008
From: lorenzo.isella at gmail.com (Lorenzo Isella)
Date: Thu, 03 Jul 2008 22:27:22 +0200
Subject: [SciPy-user] Real Array Expressed as Complex Array
In-Reply-To:
References:
Message-ID: <486D362A.1020902@gmail.com>

Hello,

> On Thu, Jul 3, 2008 at 9:39 AM, Lorenzo Isella wrote:
>> > Dear All,
>> > I am a bit puzzled: I was plotting a (rather complicated) analytical
>> > potential for which an analytical form is available.
>> > When asking to print out the value of the potential at the cut-off:
>> >
>> > print "at the cut-off, the dimensionless potential takes the value, ",
>> > pot_ext_dimensionless[-1]
>> >
>> > I got the following:
>> >
>> > at the cut-off, the dimensionless potential takes the value,
>> > (-6.48829965957e-06+0j)
>> >
>> > Now, since the potential I am coding via a function has to be a real
>> > function, I checked that the real part was always zero (as it should).
>
> I think you mean "...the *imaginary* part was always zero..."

Absolutely, a slip of the tongue.

>> > Since the result was that the array pot_ext_dimensionless is real, how
>> > come it is expressed as a complex array (though the imaginary
>> > part is always zero)?
>
> It all depends on how you're calculating the pot_ext_dimensionless
> array; clearly somewhere in there an operation makes it complex.
> You'll have to show us how it's calculated.

I think I solved the problem: I introduced some real elements taken from an array that also has some complex entries into pot_ext_dimensionless; although the elements of pot_ext_dimensionless are all real, somehow scipy retains memory of these once-existing complex entries.

> You can always access the (real,imaginary) part of a complex array
> with (pot_ext_dimensionless.real, pot_ext_dimensionless.imag)
>
> But be careful, these arrays are not contiguous (they're a view into
> the complex array). That wrinkle has bitten me before, but I can't
> quite recall the circumstances. You can always make them contiguous
> with numpy.ascontiguousarray().

This sounds important and not 100% clear to me. Do you mean that if I have a complex array z and call z.real, I do not get in general an array with the same length as z, since purely imaginary entries are "skipped" rather than appearing as entries with zero real part, as one would expect?

Many thanks

Lorenzo

From peridot.faceted at gmail.com Thu Jul 3 16:57:00 2008
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Thu, 3 Jul 2008 16:57:00 -0400
Subject: [SciPy-user] Real Array Expressed as Complex Array
In-Reply-To: <486D362A.1020902@gmail.com>
References: <486D362A.1020902@gmail.com>
Message-ID:

2008/7/3 Lorenzo Isella :
> Hello,
>> On Thu, Jul 3, 2008 at 9:39 AM, Lorenzo Isella wrote:
>>> > Since the result was that the array pot_ext_dimensionless is real, how
>>> > come it is expressed as a complex array (though the imaginary
>>> > part is always zero)?
>>
>> It all depends on how you're calculating the pot_ext_dimensionless
>> array; clearly somewhere in there an operation makes it complex.
>> You'll have to show us how it's calculated.
>>
> I think I solved the problem: I introduced some real elements taken from
> an array that also has some complex entries into pot_ext_dimensionless;
> although the elements of pot_ext_dimensionless are all real, somehow
> scipy retains memory of these once-existing complex entries.

The key idea is that arrays have a data type: that is, each numpy array, upon creation, specifies the type of all its contents. So your numpy arrays are marked as containing complex numbers. The fact that the imaginary part of these complex numbers is approximately or exactly zero isn't relevant; they are still stored as a pair of floating-point numbers. If you prefer, you can think of their type as "potentially complex numbers".
In any case, such numbers are printed as a+bj even if a or b is zero, and various arithmetic operations treat them as complex numbers. If you know that the answer should be real, and you want to represent them more conveniently and efficiently, you can simply take the real part.

>> You can always access the (real,imaginary) part of a complex array
>> with (pot_ext_dimensionless.real, pot_ext_dimensionless.imag)
>>
>> But be careful, these arrays are not contiguous (they're a view into
>> the complex array). That wrinkle has bitten me before, but I can't
>> quite recall the circumstances. You can always make them contiguous
>> with numpy.ascontiguousarray().
>
> This sounds important and not 100% clear to me. Do you mean that if I
> have a complex array z and call z.real, I do not get in general an array
> with the same length as z, since purely imaginary entries are "skipped"
> rather than appearing as entries with zero real part, as one would expect?

No. Taking "X.real" is the same as the mathematical operation of taking the real part: it throws away any imaginary part and interprets what's left as real, whether it's zero or not. "Contiguous", in this context, is a technical feature of numpy arrays that you should almost never need to care about. (Numpy arrays can be homogeneous blocks of memory, but they can also be homogeneous elements "strided" through memory with other data in between. A few functions cannot deal with this striding.)

Anne
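The dtype idea is easy to see interactively (a small illustration; exact repr spacing varies between numpy versions):

>>> import numpy as np
>>> z = np.array([1.0, 2.0]) + 0j     # any operation mixing in a complex value
>>> z.dtype
dtype('complex128')
>>> z.real                            # a float view, same length as z
array([ 1.,  2.])
>>> z.imag
array([ 0.,  0.])
>>> z.real.flags['C_CONTIGUOUS']      # the strided view Kurt warned about
False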
From travis at enthought.com Thu Jul 3 17:16:40 2008
From: travis at enthought.com (Travis Vaught)
Date: Thu, 3 Jul 2008 16:16:40 -0500
Subject: [SciPy-user] Enthought Python Distribution
In-Reply-To: <6946b9500807031015j392e8a20ic5b7636208e086d6@mail.gmail.com>
References: <6946b9500807031015j392e8a20ic5b7636208e086d6@mail.gmail.com>
Message-ID:

Hey Mark,

You're right. Pierre has been extremely responsive--a great example to us all! Our plan is to have a _minimum_ of two releases per year and your suggestion of 4 times is fairly likely. It depends a lot on the frequency and urgency of updates to underlying libraries. Any bugs in the installer itself should prompt urgent correction, so there may be updates due to this alone.

I'm glad you've found these things useful. Let me know if you have any other questions.

Best,

Travis

On Jul 3, 2008, at 12:15 PM, Mark Bakker wrote:
> Hello Travis and other Enthought gurus -
>
> I have benefited from the Enthought distribution for years, and
> really want to switch to a Mac, so this is great news.
>
> My main concern with the Enthought distribution is the frequency of
> new releases. The guys at Python(x,y) are exceptionally quick at
> updating. That may be hard to match (and cost a lot of effort). What
> are your plans for releasing updates? Any chance we can look forward
> to a new version maybe 4 times a year or so?
>
> Thanks again for putting these together,
>
> Mark
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From travis at enthought.com Thu Jul 3 17:26:13 2008
From: travis at enthought.com (Travis Vaught)
Date: Thu, 3 Jul 2008 16:26:13 -0500
Subject: [SciPy-user] Enthought Python Distribution
In-Reply-To:
References: <78A3D219-9C3C-4B4C-BA77-8B63ED1191BF@enthought.com>
Message-ID:

Hey Fernando,

On Jul 2, 2008, at 12:10 PM, Fernando Perez wrote:
> Hey Travis,
>
> On Wed, Jul 2, 2008 at 7:32 AM, Travis Vaught
> wrote:
>> Greetings,
>>
>> We're pleased to announce the beta release of the Enthought Python
>> Distribution for *Mac OS X*.
>
> First of all, this is *fantastic*. OSX has been a major pain for us
> here in terms of distribution, so I am extremely happy and grateful to
> you guys for putting this out.

You're quite welcome. I'm glad it's useful. Let us know if you see any issues in the beta.

> Two minor requests: any chance in the next release, you could update
> ipython and matplotlib? You are shipping 0.8.1 and 0.91.2, both of
> which are fairly old and the current versions of both do have
> reasonably useful improvements and fixes (ipython has threading fixes
> that impact plotting, and mpl has all the new mathtext code that gives
> good math rendering without needing latex).

I want to see these as well. It should be no problem to get these in the next release. In fact we'll upgrade to the latest versions of _everything_ as long as the developers recommend doing so and there are no compatibility issues.

> In any case, a big cheer from me :)

:-)

Thanks,

Travis

> Regards,
>
> f
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From jh at physics.ucf.edu Thu Jul 3 17:59:08 2008
From: jh at physics.ucf.edu (Joe Harrington)
Date: Thu, 03 Jul 2008 17:59:08 -0400
Subject: [SciPy-user] Summer Doc Marathon status report and request for more writers
Message-ID:

This is an interim status report on the Summer Documentation Marathon. It is also an invitation and plea for all experienced users to participate! I am cross-posting in an effort to get broader participation. Please hold any discussion on the scipy-dev mailing list.

As you know, our immediate goal is to produce first-draft docstrings for the user-visible parts of Numpy in time for a release before Fall classes (about 1 August). The short version is: We are really moving along! But, we need *your* help to make it in time for August. Here's the scoop:

1. We have all our infrastructure, standards, and procedure in place: We have a wiki that makes editing the docs easy and even fun. It communicates directly with the numpy sources. We have PDF and HTML reference guides being generated essentially automatically:

http://sd-2116.dedibox.fr/pydocweb
http://mentat.za.net/numpy/refguide/NumPy.pdf
http://mentat.za.net/numpy/refguide/

The wiki front page contains or points to all you need to get started. The wiki lets you pull down a docstring with a few mouse clicks, edit on your machine, upload it, and see how it will look in its HTML version right away. You can also read everyone else's docstrings, comment on them, see the status of the project, and so on. The formatted versions necessarily lag the docstrings on the wiki because they are made whenever the docstrings are checked into the sources.

2. We have documented about 1/4 of numpy in a fairly professional way, comparable to the reference pages of the major commercial packages.
The doc wiki is probably the next place to go if your question isn't answered by the docstring in the current version's help(), since you can look at the new docstrings we've generated, and they're *good*!

3. But, we're only 1/4 of the way there, we're halfway through the summer, and some of the initial enthusiasm is waning. The following page tells the tale:

http://sd-2116.dedibox.fr/pydocweb/stats/

As you can see (you did click, right? please click...), there are 2323 numpy objects with docstrings. Of these, 1464 we deemed "unimportant" (for now). These are generally items not seen by regular users. This left 859 objects to document in this first pass. We've done 24% of them at this writing.

Now, 24% is really exciting, and I'd like to take a moment to say a public "Hooray!" for the team (no particular order):

Stéfan van der Walt
Pauli Virtanen
Robert Hetland
Gael Varoquaux
Scott Sinclair
Alan Jackson
Tim Cera
Johann Cohen-Tanugi
David Huard
Keith Goodman

Together these ten have written around 7500 words of documentation on the community's behalf, mainly as volunteers.

HOWEVER, we can all do the math. We've spent one of our two months. We are 1/4 of the way there. Progress is slowing, and even if it didn't, we wouldn't make it in time. This is not a sprint, it's a MARATHON. WE NEED YOUR HELP! And we need it now.

Are you excited by the idea of having documentation for numpy by the Fall release? Of having docs that answer your questions, that have *good* examples, that really save you time? If so, then please invest just a fraction of the time that documentation will save you in the next year alone by signing up on the wiki and writing some. If each experienced user wrote just a few pages, we'd be done!

If you don't think you know enough to write, then do some reviewing. Are the docs readable? Do you understand the examples? Each docstring on the wiki has an easy comment box waiting for your thoughts.

You will have a reference guide in the next release of numpy! I hope you will help make it a complete one. Sign up on the doc wiki today:

http://sd-2116.dedibox.fr/pydocweb/

Thanks,

--jh-- and the SciPy Doc Team

Prof. Joseph Harrington
Department of Physics
MAP 414
4000 Central Florida Blvd.
University of Central Florida
Orlando, FL 32816-2385
(407) 823-3416 voice
(407) 823-5112 fax
(407) 823-2325 physics office
jh at physics.ucf.edu

From fperez.net at gmail.com Thu Jul 3 18:40:58 2008
From: fperez.net at gmail.com (Fernando Perez)
Date: Thu, 3 Jul 2008 15:40:58 -0700
Subject: [SciPy-user] Enthought Python Distribution
In-Reply-To:
References: <78A3D219-9C3C-4B4C-BA77-8B63ED1191BF@enthought.com>
Message-ID:

On Thu, Jul 3, 2008 at 2:26 PM, Travis Vaught wrote:
> I want to see these as well. It should be no problem to get these in
> the next release. In fact we'll upgrade to the latest versions of
> _everything_ as long as the developers recommend doing so and there
> are no compatibility issues.

Any chance the next release might happen before scipy'08? It would make an enormous difference for those of us teaching the tutorials to have this as an installation option. Perhaps after waiting for the outcome of the Mayavi sprint you guys are holding right now... I know that if we could point all attendees to reasonably up to date installers for the tools the tutorials will cover on Win32 and OSX, things will be a LOT smoother than they were last year. I think we were really hurt last year by underestimating (I fully take the blame for that) the importance of installation issues.
We burned valuable time on that and probably left quite a few people behind simply because debugging install problems on a one-by-one basis was impossible under those circumstances.

In any case, thanks again for this!

Cheers,

f

From emanuele at relativita.com Fri Jul 4 12:23:04 2008
From: emanuele at relativita.com (Emanuele Olivetti)
Date: Fri, 04 Jul 2008 18:23:04 +0200
Subject: [SciPy-user] [OpenOpt] lb issue
In-Reply-To: <48672993.1040001@scipy.org>
References: <48672993.1040001@scipy.org>
Message-ID: <486E4E68.4090004@relativita.com>

Yes Dmitrey you are right. After some thinking I successfully implemented your advice in my code, and added contol and inf. It seems to work quite well. Now "ralg" tries some attempts in the forbidden area and then comes back. I hope that generating the next 'x' (attempt) does not require lots of computation for 'ralg', in the sense that wasting some attempts does not cost too much time (especially for bigger problems, like hundreds of variables).

I have another question. Is there a way in OpenOpt to tell the solver to optimize "just" on some variables/dimensions and not all of them? Example: assume my function takes 3 numbers and returns 1; I want to minimize it but with respect to just the first 2 of the 3 input values; let's say that the initial guess of the 3rd value is already OK or that I can't change it. I'd like to pass something like a boolean vector [True, True, False] to p.solve() or the NLP instance to say that the third parameter should not be changed. In other words it can be thought of as another kind of constraint. Obviously I can wrap my function in order to handle the boolean vector and do the right thing, but I'm wondering if OpenOpt is able to handle this kind of request.

Thanks,

Emanuele

dmitrey wrote:
> I don't see any difficulties in mapping my advice to more difficult cases
> (just don't forget to increase your ineq constraints by contol, so as to
> compare them with zero rather than contol, and use inf, not the huge
> value you have mentioned). The current ralg implementation doesn't need
> the objfunc value when a point is outside of the feasible region (i.e. if *any*
> constraint is bigger than p.contol, not only the lb-ub constraint). OO calls
> objFunc outside the feasible region just to check some stop criteria,
> yield iter output point text and possible graphics output.
>
> Regards, D.
>
> Emanuele Olivetti wrote:
>
>> Unfortunately it is not so simple to map this advice to my
>> real situation, which is more complex than the
>> proof-of-concept example of the initial message. Returning
>> a big positive value when x is outside the bounds is an option
>> I considered some time ago but then discarded. But I'll think
>> more about it now.
>>
>> Ciao,
>>
>> Emanuele
>>
>> dmitrey wrote:
>>
>>> Hi Emanuele,
>>>
>>> I can propose a temporary solution (see below); this one doesn't
>>> require updating oo from svn. However, usually ALGENCAN, ipopt and
>>> scipy_slsqp work much better for box-bound constrained problems (w/o
>>> other constraints) than the current ralg implementation.
>>> D.
>>> import numpy as N
>>> from scikits.openopt import NLP
>>> from numpy import any, inf
>>> size = 100
>>> dimensions = 2
>>> data = N.random.rand(size,dimensions)-0.5
>>>
>>> contol = 1e-6
>>> lb = N.zeros(dimensions) + contol
>>>
>>> def f(x):
>>>     global data
>>>     if any(x<0):
>>>         # objective function is not defined here, let's use inf instead
>>>         # however, some iters will show objFunVal = inf in text output
>>>         # and graphic output is currently unavailable for the case
>>>         return inf
>>>     return N.dot(data**2,x.T)
>>>
>>> x0 = N.ones(dimensions)
>>> p = NLP(f,x0,lb=lb, contol = contol)
>>> p.solve('ralg')
>>> print p.ff,p.xf

> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From dmitrey.kroshko at scipy.org Fri Jul 4 14:58:55 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Fri, 04 Jul 2008 21:58:55 +0300
Subject: [SciPy-user] [OpenOpt] lb issue
In-Reply-To: <486E4E68.4090004@relativita.com>
References: <48672993.1040001@scipy.org> <486E4E68.4090004@relativita.com>
Message-ID: <486E72EF.8000001@scipy.org>

hi Emanuele,

Emanuele Olivetti wrote:
> Yes Dmitrey you are right. After some thinking I successfully
> implemented your advice in my code, and added contol and
> inf. It seems to work quite well. Now "ralg" tries some attempts
> in the forbidden area and then comes back. I hope that
> generating the next 'x' (attempt) does not require lots of
> computation for 'ralg', in the sense that wasting some attempts
> does not cost too much time (especially for bigger problems,
> like hundreds of variables).

each iteration with a point infeasible due to box-bound constraints (if no others are present) takes no more than 5*nVars^2 multiplication operations (+ some evaluations of the objective function while the line search is going on within the feasible region, usually 2-4 steps, no matter how big nVars is), and for nVars < 1000 that's not a big number, I guess (however, I don't know your situation). Also, there are some methods to reduce the number, but they are not implemented in the current python ralg solver yet.

> I have another question. Is there a way in OpenOpt to
> tell the solver to optimize "just" on some variables/dimensions
> and not all of them?

I intend to implement a "fixedVars" prob field (similar to the intVars and boolVars that are handled by some MILP solvers). However, it's not done yet.

Regards, D.
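Until that lands, the wrapping approach Emanuele mentions might look something like this (a sketch only: fix_vars is a hypothetical helper, not an OpenOpt API, and the toy objective just stands in for the user's own function):

import numpy as N
from scikits.openopt import NLP

def f(x):
    # toy objective for illustration only
    return ((x - N.array([0.3, -0.2, 2.5]))**2).sum()

def fix_vars(f, x0, free):
    # free: boolean mask of the entries the solver may change;
    # the others stay clamped at their values in x0.
    x0 = N.asarray(x0, dtype=float)
    def wrapped(xfree):
        x = x0.copy()
        x[free] = xfree
        return f(x)
    return wrapped, x0[free]

free = N.array([True, True, False])
x_start = N.array([1.0, 1.0, 2.5])    # third value held fixed
wrapped, x0free = fix_vars(f, x_start, free)
p = NLP(wrapped, x0free)
p.solve('ralg')
x_full = x_start.copy()
x_full[free] = p.xf                   # rebuild the full 3-vector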
From hectorvd at gmail.com Sat Jul 5 12:20:57 2008
From: hectorvd at gmail.com (Hector Villafuerte)
Date: Sat, 5 Jul 2008 10:20:57 -0600
Subject: [SciPy-user] EPD on Vista
Message-ID: <350f72300807050920r1d8625c3j355d2f20ff2fc1ed@mail.gmail.com>

Hi,
a colleague of mine is having trouble installing the Enthought Python Distro on Windows Vista. I don't have access to Vista now (just Windows XP, where EPD works like a charm). So basically my question: has somebody successfully installed EPD on Vista? (also, it seems that Vista is not currently supported).

As a side note: this other colleague had lots of trouble getting Matlab to work when he "upgraded" to Vista... thought you would like to know :)

Best,
-- 
Hector

From robert.kern at gmail.com Sat Jul 5 19:28:55 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Sat, 5 Jul 2008 18:28:55 -0500
Subject: [SciPy-user] EPD on Vista
In-Reply-To: <350f72300807050920r1d8625c3j355d2f20ff2fc1ed@mail.gmail.com>
References: <350f72300807050920r1d8625c3j355d2f20ff2fc1ed@mail.gmail.com>
Message-ID: <3d375d730807051628j55ab6b45l7b2fea32dbe80684@mail.gmail.com>

On Sat, Jul 5, 2008 at 11:20, Hector Villafuerte wrote:
> Hi,
> a colleague of mine is having trouble installing the Enthought Python
> Distro on Windows Vista. I don't have access to Vista now (just
> Windows XP, where EPD works like a charm). So basically my question:
> has somebody successfully installed EPD on Vista? (also, it seems that
> Vista is not currently supported).

I think we tried once with an early unreleased pre-alpha, and it worked. Vista is not officially supported, yet. However, we (Enthought) would be interested in receiving bug reports on the enthought-dev mailing list. There might be simple things we can do to get it to work for your colleague.

https://mail.enthought.com/mailman/listinfo/enthought-dev

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From yosefmel at post.tau.ac.il Sun Jul 6 00:48:10 2008
From: yosefmel at post.tau.ac.il (Yosef Meller)
Date: Sun, 6 Jul 2008 07:48:10 +0300
Subject: [SciPy-user] array manipulation
In-Reply-To: <3d375d730807030226j6d11aea9t3eb16ab321629272@mail.gmail.com>
References: <3d375d730807030226j6d11aea9t3eb16ab321629272@mail.gmail.com>
Message-ID: <200807060748.10076.yosefmel@post.tau.ac.il>

On Thursday 03 July 2008 12:26:27 Robert Kern wrote:
> On Thu, Jul 3, 2008 at 04:24, Nils Wagner wrote:
> > How can I remove duplicate entries from an array?
>
> numpy.unique1d()

Why is there both numpy.unique1d() and numpy.unique()? Their code seems very similar. And, if they're different, it would be nice to see unique1d in
http://www.scipy.org/Numpy_Example_List

From mwojc at p.lodz.pl Sun Jul 6 10:17:06 2008
From: mwojc at p.lodz.pl (Marek Wojciechowski)
Date: Sun, 6 Jul 2008 16:17:06 +0200
Subject: [SciPy-user] scipy-0.6 on AIX 5.3
In-Reply-To:
References:
Message-ID: <200807061617.07505.mwojc@p.lodz.pl>

Hi!

I tried to install scipy on AIX 5.3 and I'm partially successful. For now I see no way to compile and use C++ code from scipy, so the sparse and weave modules are not working. I used the native IBM compilers: xlf_r, cc_r, xlc++_r. I also compiled blas and lapack properly for use with scipy.

I applied two dirty hacks in numpy.distutils.ccompiler (1.0.4) and some hacks in scipy itself. I'm attaching diffs because probably some changes could be applied to the development code...

Greetings
-- 
Marek Wojciechowski
-------------- next part --------------
A non-text attachment was scrubbed...
Name: scipy-0.6.0-aix.diff
Type: text/x-diff
Size: 60159 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: ccompiler-aix.diff
Type: text/x-diff
Size: 1419 bytes
Desc: not available
URL:

From gael.varoquaux at normalesup.org Sun Jul 6 17:07:31 2008
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Sun, 6 Jul 2008 23:07:31 +0200
Subject: [SciPy-user] Schedule for the SciPy08 conference
Message-ID: <20080706210731.GC25810@phare.normalesup.org>

We have received a large number of excellent contributions for papers for the SciPy 2008 conference. The program committee has had to make a difficult selection and we are happy to bring to you a preliminary schedule:

Thursday
=========

**8:00** Registration/Breakfast

**8:55** Welcome (Travis Vaught)

**9:10** Keynote (Alex Martelli)

**10:00** State of SciPy (Travis Vaught, Jarrod Millman)

**10:40** -- Break --

**11:00** Sympy - Python library for symbolic mathematics: introduction and applications (Ondřej Čertík)

**11:40** Interval arithmetic: Python implementation and applications (Stefano Taschini)

**12:00** Experiences Using Scipy for Computer Vision Research (Damian Eads)

**12:20** -- Lunch --

**1:40** The new NumPy documentation framework (Stéfan Van der Walt)

**2:00** Matplotlib solves the riddle of the sphinx (Michael Droettboom)

**2:40** The SciPy documentation project (Joe Harrington)

**3:00** -- Break --

**3:40** Sage: creating a viable free Python-based open source alternative to Magma, Maple, Mathematica and Matlab (William Stein)

**4:20** Open space for lightning talks

Friday
========

**8:30** Breakfast

**9:00** Pysynphot: A Python Re-Implementation of a Legacy App in Astronomy (Perry Greenfield)

**9:40** How the Large Synoptic Survey Telescope (LSST) is using Python (Robert Lupton)

**10:00** Real-time Astronomical Time-series Classification and Broadcast Pipeline (Dan Starr)

**10:20** Analysis and Visualization of Multi-Scale Astrophysical Simulations using Python and NumPy (Matthew Turk)

**10:40** -- Break --

**11:00** Exploring network structure, dynamics, and function using NetworkX (Aric Hagberg)

**11:40** Mayavi: Making 3D data visualization reusable (Prabhu Ramachandran, Gaël Varoquaux)

**12:00** Finite Element Modeling of Contact and Impact Problems Using Python (Ryan Krauss)

**12:20** -- Lunch --

**2:00** PyCircuitScape: A Tool for Landscape Ecology (Viral Shah)

**2:20** Summarizing Complexity in High Dimensional Spaces (Karl Young)

**2:40** UFuncs: A generic function mechanism in Python (Travis Oliphant)

**3:20** -- Break --

**3:40** NumPy Optimization: Manual tuning and automated approaches (Evan Patterson)

**4:00** Converting Python functions to dynamically-compiled C (Ilan Schnell)

**4:20** unPython: Converting Python numerical programs into C (Rahul Garg)

**4:40** Implementing the Grammar of Graphics for Python (Robert Kern)

**5:00** Ask the experts session.

A more detailed booklet including the abstract text will be available soon.

We are looking forward to seeing you at Caltech,

Gaël Varoquaux, on behalf of the program committee.

-- 
SciPy2008 conference.
Program committee

Anne Archibald, McGill University
Matthew Brett
Perry Greenfield, Space Telescope Science Institute
Charles Harris
Ryan Krauss, Southern Illinois University
Gaël Varoquaux
Stéfan van der Walt, University of Stellenbosch

From Joris.DeRidder at ster.kuleuven.be Sun Jul 6 18:00:03 2008
From: Joris.DeRidder at ster.kuleuven.be (Joris De Ridder)
Date: Mon, 7 Jul 2008 00:00:03 +0200
Subject: [SciPy-user] array manipulation
In-Reply-To: <200807060748.10076.yosefmel@post.tau.ac.il>
References: <3d375d730807030226j6d11aea9t3eb16ab321629272@mail.gmail.com> <200807060748.10076.yosefmel@post.tau.ac.il>
Message-ID: <8019D79F-820A-4B50-83CE-A36FCD109DE8@ster.kuleuven.be>

On 6 Jul 2008, at 6:48 , Yosef Meller wrote:
> On Thursday 03 July 2008 12:26:27 Robert Kern wrote:
>> On Thu, Jul 3, 2008 at 04:24, Nils Wagner wrote:
>>> How can I remove duplicate entries from an array?
>>
>> numpy.unique1d()
>
> Why is there both numpy.unique1d() and numpy.unique()? Their code
> seems very similar. And, if they're different, it would be nice to see
> unique1d in
> http://www.scipy.org/Numpy_Example_List

Good question... unique1d() supports returning the indices of the unique values while unique() does not, but that's the only difference I can see. Perhaps a somewhat more consistent implementation would only consist of unique() and argunique(). But changing this would perhaps break too much code.

Also, at the moment there seems to be no unique()-like function that doesn't first flatten its input array. That is, a function that is able to transform, for example, array([[1,2],[3,4],[1,2]]) into array([[1,2],[3,4]]).

Joris

Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm
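For the row-wise case, a plain-Python sketch does the job today (keeping rows in order of first appearance; no axis-aware unique() needed):

import numpy as np

a = np.array([[1, 2], [3, 4], [1, 2]])

seen = {}
keep = []
for i, row in enumerate(a):
    key = tuple(row)
    if key not in seen:      # first time this row appears
        seen[key] = True
        keep.append(i)

unique_rows = a[keep]        # array([[1, 2], [3, 4]])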
Now there is one: it lives in 'c:\program files\...'. Could the weave maintainer fix this? Or is it already fixed but not yet included in the distribution? Thanks, Samuel Søren Nielsen wrote: > Hi, > > I've done a fresh install of MinGw 5.14, Python 2.4.4, Scipy 0.6.0, > Numpy 1.1.0 and numarray 1.5.2 > > When I try to use weave i get this: > > C:\>test_weave.py > > running build_ext > running build_src > building extension "sc_552cccf5dbf4f6eadd273cdcbd5860523" sources > customize Mingw32CCompiler > customize Mingw32CCompiler using build_ext > customize Mingw32CCompiler > customize Mingw32CCompiler using build_ext > building 'sc_552cccf5dbf4f6eadd273cdcbd5860523' extension > compiling C++ sources > C compiler: g++ -mno-cygwin -O2 -Wall > > compile options: '-IC:\Python24\lib\site-packages\scipy\weave > -IC:\Python24\lib\ > site-packages\scipy\weave\scxx > -IC:\Python24\lib\site-packages\numpy\core\includ > e -IC:\Python24\include -IC:\Python24\PC -c' > g++ -mno-cygwin -O2 -Wall -IC:\Python24\lib\site-packages\scipy\weave > -IC:\Pytho > n24\lib\site-packages\scipy\weave\scxx > -IC:\Python24\lib\site-packages\numpy\cor > e\include -IC:\Python24\include -IC:\Python24\PC -c > C:\Python24\lib\site-package > s\scipy\weave\scxx\weave_imp.cpp -o > c:\docume~1\lisear~1\lokale~1\temp\Lise Arle > th\python24_intermediate\compiler_08edc7e348e1c33f63a33ab500aef08e\Release\pytho > n24\lib\site-packages\scipy\weave\scxx\weave_imp.o > Found executable C:\MinGw\bin\g++.exe > g++.exe: > Arleth\python24_intermediate\compiler_08edc7e348e1c33f63a33ab500aef08e\ > Release\python24\lib\site-packages\scipy\weave\scxx\weave_imp.o: No > such file or > directory > Traceback (most recent call last): > File "C:\test_weave.py", line 341, in ? > main() > File "C:\test_weave.py", line 224, in main > weave.inline('printf("%d\\n",a);',['a'], verbose=2, > type_converters=converte > rs.blitz) #, compiler = 'msvc', verbpse=2, > type_converters=converters.blitz, au > to_downcast=0) #'msvc' or 'gcc' or 'mingw32' > File "C:\Python24\Lib\site-packages\scipy\weave\inline_tools.py", > line 338, in > inline > auto_downcast = auto_downcast, > File "C:\Python24\Lib\site-packages\scipy\weave\inline_tools.py", > line 447, in > compile_function > verbose=verbose, **kw) > File "C:\Python24\Lib\site-packages\scipy\weave\ext_tools.py", line > 365, in co > mpile > verbose = verbose, **kw) > File "C:\Python24\Lib\site-packages\scipy\weave\build_tools.py", > line 269, in > build_extension > setup(name = module_name, ext_modules = [ext],verbose=verb) > File "C:\Python24\Lib\site-packages\numpy\distutils\core.py", line > 184, in set > up > return old_setup(**new_attr) > File "C:\Python24\lib\distutils\core.py", line 166, in setup > raise SystemExit, "error: " + str(msg) > distutils.errors.CompileError: error: Command "g++ -mno-cygwin -O2 > -Wall -IC:\Py > thon24\lib\site-packages\scipy\weave > -IC:\Python24\lib\site-packages\scipy\weave > \scxx -IC:\Python24\lib\site-packages\numpy\core\include > -IC:\Python24\include - > IC:\Python24\PC -c > C:\Python24\lib\site-packages\scipy\weave\scxx\weave_imp.cpp > -o c:\docume~1\lisear~1\lokale~1\temp\Lise > Arleth\python24_intermediate\compiler > _08edc7e348e1c33f63a33ab500aef08e\Release\python24\lib\site-packages\scipy\weave > \scxx\weave_imp.o" failed with exit status 1 > > the test_weave file was something i found on the scipy dev wiki... I > also tried some of my older files that also uses weave and they all > give the same error... > > Can anyone help me with this?
> > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Samuel Garcia Laboratoire de Neurosciences Sensorielles, Comportement, Cognition. CNRS - UMR5020 - Universite Claude Bernard LYON 1 Equipe logistique et technique 50, avenue Tony Garnier 69366 LYON Cedex 07 FRANCE Tél : 04 37 28 74 64 Fax : 04 37 28 76 01 http://olfac.univ-lyon1.fr/unite/equipe-07/ http://neuralensemble.org/trac/OpenElectrophy ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mwojc at p.lodz.pl Mon Jul 7 09:06:46 2008 From: mwojc at p.lodz.pl (Marek Wojciechowski) Date: Mon, 07 Jul 2008 15:06:46 +0200 Subject: [SciPy-user] scipy-0.6 on AIX 5.3 References: <200807061617.07505.mwojc@p.lodz.pl> <48718535.8000703@ar.media.kyoto-u.ac.jp> Message-ID: David Cournapeau wrote: > Marek Wojciechowski wrote: >> Hi! >> >> I tried to install scipy in AIX 5.3 and i'm partially successful. For now >> i see no way to compile and use C++ code from scipy, so sparse and weave >> modules are not working. I used native IBM compilers: xlf_r, cc_r, >> xlc++_r. I compiled also properly blas and lapack for use with scipy. >> >> I applied two dirty hacks in numpy.distutils.ccompiler (1.0.4) and some >> hacks in scipy itself. I'm attaching diffs because probably some changes >> could be applied to the development code... >> > > C++-style comments in C are bugs (I am guilty of most of them), but > they should have been solved in the SVN trunk. Concerning the fortran code, > it seems that the AIX fortran compiler does not like special characters > in comments or code lines starting with &... > > I would consider using the Gnu compiler instead of the AIX fortran compiler, > IBM Fortran compiler (xlf) worked very well for me. The only compilation problem is with two functions in specfun.f, namely: CLQN and CLQMN. However, this is because of the aggressive -O5 optimization flag (which is set up by default on AIX in numpy.distutils). With -O3 the whole code compiles nicely. BTW, I'm not sure if there is a way to get a working g77 or gfortran on AIX. Greetings, -- Marek Wojciechowski From soren.skou.nielsen at gmail.com Mon Jul 7 15:31:03 2008 From: soren.skou.nielsen at gmail.com (=?ISO-8859-1?Q?S=F8ren_Nielsen?=) Date: Mon, 7 Jul 2008 21:31:03 +0200 Subject: [SciPy-user] weave problems, weave_imp.o no such file or directory In-Reply-To: <4871E500.6070704@olfac.univ-lyon1.fr> References: <4871E500.6070704@olfac.univ-lyon1.fr> Message-ID: I have a space in my path too... So that's why it works on some computers and not on others... If you find a solution I'd really like to hear it... The activity level on this mailing list is almost flatlining. Soren On Mon, Jul 7, 2008 at 11:42 AM, Samuel GARCIA wrote: > Hi list, > I have exactly the same problem under Windows and the excellent Python(X,Y) > distribution. > > The problem is in the command line: there is no quote at the end just after > -c and -o, and so the path gets split. > > The problem is new for me because before using this distribution there was > no space in my python path. > Now there is one: it lives in 'c:\program files\...'. > > Could the weave maintainer fix this? Or is it already fixed but not yet > included in the distribution?
> > Thanks, > > Samuel > > Søren Nielsen wrote: > > Hi, > > I've done a fresh install of MinGw 5.14, Python 2.4.4, Scipy 0.6.0, Numpy > 1.1.0 and numarray 1.5.2 > > When I try to use weave i get this: > > C:\>test_weave.py > > running build_ext > running build_src > building extension "sc_552cccf5dbf4f6eadd273cdcbd5860523" sources > customize Mingw32CCompiler > customize Mingw32CCompiler using build_ext > customize Mingw32CCompiler > customize Mingw32CCompiler using build_ext > building 'sc_552cccf5dbf4f6eadd273cdcbd5860523' extension > compiling C++ sources > C compiler: g++ -mno-cygwin -O2 -Wall > > compile options: '-IC:\Python24\lib\site-packages\scipy\weave > -IC:\Python24\lib\ > site-packages\scipy\weave\scxx > -IC:\Python24\lib\site-packages\numpy\core\includ > e -IC:\Python24\include -IC:\Python24\PC -c' > g++ -mno-cygwin -O2 -Wall -IC:\Python24\lib\site-packages\scipy\weave > -IC:\Pytho > n24\lib\site-packages\scipy\weave\scxx > -IC:\Python24\lib\site-packages\numpy\cor > e\include -IC:\Python24\include -IC:\Python24\PC -c > C:\Python24\lib\site-package > s\scipy\weave\scxx\weave_imp.cpp -o c:\docume~1\lisear~1\lokale~1\temp\Lise > Arle > > th\python24_intermediate\compiler_08edc7e348e1c33f63a33ab500aef08e\Release\pytho > n24\lib\site-packages\scipy\weave\scxx\weave_imp.o > Found executable C:\MinGw\bin\g++.exe > g++.exe: > Arleth\python24_intermediate\compiler_08edc7e348e1c33f63a33ab500aef08e\ > Release\python24\lib\site-packages\scipy\weave\scxx\weave_imp.o: No such > file or > directory > Traceback (most recent call last): > File "C:\test_weave.py", line 341, in ? > main() > File "C:\test_weave.py", line 224, in main > weave.inline('printf("%d\\n",a);',['a'], verbose=2, > type_converters=converte > rs.blitz) #, compiler = 'msvc', verbpse=2, > type_converters=converters.blitz, au > to_downcast=0) #'msvc' or 'gcc' or 'mingw32' > File "C:\Python24\Lib\site-packages\scipy\weave\inline_tools.py", line > 338, in > inline > auto_downcast = auto_downcast, > File "C:\Python24\Lib\site-packages\scipy\weave\inline_tools.py", line > 447, in > compile_function > verbose=verbose, **kw) > File "C:\Python24\Lib\site-packages\scipy\weave\ext_tools.py", line 365, > in co > mpile > verbose = verbose, **kw) > File "C:\Python24\Lib\site-packages\scipy\weave\build_tools.py", line > 269, in > build_extension > setup(name = module_name, ext_modules = [ext],verbose=verb) > File "C:\Python24\Lib\site-packages\numpy\distutils\core.py", line 184, > in set > up > return old_setup(**new_attr) > File "C:\Python24\lib\distutils\core.py", line 166, in setup > raise SystemExit, "error: " + str(msg) > distutils.errors.CompileError: error: Command "g++ -mno-cygwin -O2 -Wall > -IC:\Py > thon24\lib\site-packages\scipy\weave > -IC:\Python24\lib\site-packages\scipy\weave > \scxx -IC:\Python24\lib\site-packages\numpy\core\include > -IC:\Python24\include - > IC:\Python24\PC -c > C:\Python24\lib\site-packages\scipy\weave\scxx\weave_imp.cpp > -o c:\docume~1\lisear~1\lokale~1\temp\Lise > Arleth\python24_intermediate\compiler > > _08edc7e348e1c33f63a33ab500aef08e\Release\python24\lib\site-packages\scipy\weave > \scxx\weave_imp.o" failed with exit status 1 > > the test_weave file was something i found on the scipy dev wiki... I also > tried some of my older files that also uses weave and they all give the same > error... > > Can anyone help me with this?
> > ------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -- > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > Samuel Garcia > Laboratoire de Neurosciences Sensorielles, Comportement, Cognition. > CNRS - UMR5020 - Universite Claude Bernard LYON 1 > Equipe logistique et technique > 50, avenue Tony Garnier > 69366 LYON Cedex 07 > FRANCE > Tél : 04 37 28 74 64 > Fax : 04 37 28 76 01 http://olfac.univ-lyon1.fr/unite/equipe-07/ http://neuralensemble.org/trac/OpenElectrophy > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jturner at gemini.edu Mon Jul 7 20:36:48 2008 From: jturner at gemini.edu (James Turner) Date: Mon, 07 Jul 2008 20:36:48 -0400 Subject: [SciPy-user] Portability problems (on Solaris) Message-ID: <4872B6A0.8080706@gemini.edu> Following my last post on numpy-discussion (Re: Ctypes required? Fails to build.), I have found some additional portability issues -- this time in SciPy (0.6.0) -- that cause my build to fail on Solaris 9 with the Sun WorkShop 6 compiler. Is this a good place to report these, or should I make a ticket for them? They don't really need any troubleshooting -- just changing (or not). 1. In the ndimage module, there is a C++ style comment at lines 144-145 of ni_interpolation.c. 2. The Sun ForTran compiler doesn't like the "END" syntax used in linalg/iterative -- for example "END SUBROUTINE sSTOPTEST2" fails but "END" on its own works (a shame, since the first syntax is nicer). This applies to all the ForTran files in that directory. 3. The main INSTALL.txt file says this: "Currently the SciPy build process does not use a C++ compiler, but the SciPy module Weave uses a C++ compiler at run time, so it is good to have a C++ compiler around as well". However, when I try to build SciPy with no C++ compiler available, the build stops at "scipy/sparse/sparsetools/ sparsetools_wrap.cxx" with the error "c++: not found" and SciPy doesn't get installed. Do I need to tell it not to build certain modules? Thanks, James. From robert.kern at gmail.com Mon Jul 7 20:56:36 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 7 Jul 2008 19:56:36 -0500 Subject: [SciPy-user] Portability problems (on Solaris) In-Reply-To: <4872B6A0.8080706@gemini.edu> References: <4872B6A0.8080706@gemini.edu> Message-ID: <3d375d730807071756q6057eea0pf0cc95730a176cfb@mail.gmail.com> On Mon, Jul 7, 2008 at 19:36, James Turner wrote: > Following my last post on numpy-discussion (Re: Ctypes required? > Fails to build.), I have found some additional portability > issues -- this time in SciPy (0.6.0) -- that cause my build to > fail on Solaris 9 with the Sun WorkShop 6 compiler. > > Is this a good place to report these, or should I make a ticket > for them? They don't really need any troubleshooting -- just > changing (or not). > > 1. In the ndimage module, there is a C++ style comment > at lines 144-145 of ni_interpolation.c. I can fix that. Unfortunately, there are a lot of new such comments in SVN. > 2. The Sun ForTran compiler doesn't like the "END" syntax > used in linalg/iterative -- for example > "END SUBROUTINE sSTOPTEST2" fails but "END" on its > own works (a shame, since the first syntax is nicer).
> This applies to all the ForTran files in that directory. Isn't that standard Fortran-77? If it isn't, I'll change it. If it is, I am going to have to insist on not supporting such broken compilers. > 3. The main INSTALL.txt file says this: "Currently the SciPy > build process does not use a C++ compiler, but the SciPy > module Weave uses a C++ compiler at run time, so it is > good to have a C++ compiler around as well". However, > when I try to build SciPy with no C++ compiler available, > the build stops at "scipy/sparese/sparsetools/ > sparsetools_wrap.cxx" with the error "c++: not found" and > SciPy doesn't get installed. Do I need to tell it not to > build certain modules? The docs are old and wrong. scipy.sparse does require C++. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From dineshbvadhia at hotmail.com Mon Jul 7 21:33:45 2008 From: dineshbvadhia at hotmail.com (Dinesh B Vadhia) Date: Mon, 7 Jul 2008 18:33:45 -0700 Subject: [SciPy-user] A.mean(0) for sparse matrices Message-ID: Binary sparse matrices are created as follows: row = numpy.zeros(nnz, dtype='intc') column = numpy.zeros(nnz, dtype='intc') data = scipy.ones(nnz, dtype='intc') A = sparse.csr_matrix((data, (row, column)), shape=(I,J)) Next, I want to get the mean of A with: column_mean = A.mean(0) I would expect column_mean to be a float vector but instead I get an integer vector. Have I missed something? Dinesh -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryanlists at gmail.com Tue Jul 8 01:17:13 2008 From: ryanlists at gmail.com (Ryan Krauss) Date: Tue, 8 Jul 2008 00:17:13 -0500 Subject: [SciPy-user] EPD on Vista In-Reply-To: <3d375d730807051628j55ab6b45l7b2fea32dbe80684@mail.gmail.com> References: <350f72300807050920r1d8625c3j355d2f20ff2fc1ed@mail.gmail.com> <3d375d730807051628j55ab6b45l7b2fea32dbe80684@mail.gmail.com> Message-ID: You might try right clicking on the installer and choosing run as administrator. This has solved some Vista install problems for my students (though not with EPD). On Sat, Jul 5, 2008 at 6:28 PM, Robert Kern wrote: > On Sat, Jul 5, 2008 at 11:20, Hector Villafuerte wrote: >> Hi, >> a colleague of mine is having troubles installing the Enthought Python >> Distro on Windows Vista. I don't have access to Vista now (just >> Windows XP, where EPD works like a charm). So basically my question: >> has somebody successfully installed EPD on Vista? (also, it seems that >> Vista is not currently supported). > > I think we tried once with an early unreleased pre-alpha, and it > worked. Vista is not officially supported, yet. However, we > (Enthought) would be interested in receiving bug reports on the > enthought-dev mailing list. There might be simple things we can do to > get it to work for your colleague. > > https://mail.enthought.com/mailman/listinfo/enthought-dev > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." 
> -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From wnbell at gmail.com Tue Jul 8 06:08:10 2008 From: wnbell at gmail.com (Nathan Bell) Date: Tue, 8 Jul 2008 05:08:10 -0500 Subject: [SciPy-user] A.mean(0) for sparse matrices In-Reply-To: References: Message-ID: On Mon, Jul 7, 2008 at 8:33 PM, Dinesh B Vadhia wrote: > > Next, I want to get the mean of A with: > column_mean = A.mean(0) > > I would expect column_mean to be a float vector but instead I get an integer > vector. Have I missed something? > Nope, it's a bug. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From twaite at berkeley.edu Tue Jul 8 10:36:34 2008 From: twaite at berkeley.edu (Tom Waite) Date: Tue, 8 Jul 2008 07:36:34 -0700 Subject: [SciPy-user] EPD on Vista In-Reply-To: <3d375d730807051628j55ab6b45l7b2fea32dbe80684@mail.gmail.com> References: <350f72300807050920r1d8625c3j355d2f20ff2fc1ed@mail.gmail.com> <3d375d730807051628j55ab6b45l7b2fea32dbe80684@mail.gmail.com> Message-ID: I will try EPD on my Vista 32 laptop. I have had problems with Vista. Cygwin would not install fully (I would have to use task manager to kill some of the installations), Matlab 6 would not install and there are problems with older versions of MSVS (VC6 will install but project builds will have problems). On Sat, Jul 5, 2008 at 4:28 PM, Robert Kern wrote: > On Sat, Jul 5, 2008 at 11:20, Hector Villafuerte > wrote: > > Hi, > > a colleague of mine is having troubles installing the Enthought Python > > Distro on Windows Vista. I don't have access to Vista now (just > > Windows XP, where EPD works like a charm). So basically my question: > > has somebody successfully installed EPD on Vista? (also, it seems that > > Vista is not currently supported). > > I think we tried once with an early unreleased pre-alpha, and it > worked. Vista is not officially supported, yet. However, we > (Enthought) would be interested in receiving bug reports on the > enthought-dev mailing list. There might be simple things we can do to > get it to work for your colleague. > > https://mail.enthought.com/mailman/listinfo/enthought-dev > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > -- Umberto Eco > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jturner at gemini.edu Tue Jul 8 11:14:36 2008 From: jturner at gemini.edu (James Turner) Date: Tue, 08 Jul 2008 11:14:36 -0400 Subject: [SciPy-user] Portability problems (on Solaris) References: 3d375d730807071756q6057eea0pf0cc95730a176cfb@mail.gmail.com Message-ID: <4873845C.4080904@gemini.edu> > > 2. The Sun ForTran compiler doesn't like the "END" syntax > > used in linalg/iterative -- for example > > "END SUBROUTINE sSTOPTEST2" fails but "END" on its > > own works (a shame, since the first syntax is nicer). > > This applies to all the ForTran files in that directory. > > Isn't that standard Fortran-77? If it isn't, I'll change it. If > it is, I am going to have to insist on not supporting such > broken compilers. 
I did a quick search for "Fortran 77 standard" and came up with the following, which suggests that the syntax is just "END" without extra characters but AFAICT doesn't explicitly forbid the latter. http://www.fortran.com/fortran/F77_std/rjcnf0001-sh-11.html#sh-11.14 > The docs are old and wrong. scipy.sparse does require C++. OK, thanks. I'd be happy to update that paragraph (is it just sparse that needs C++?) if there is some wiki page for editing documentation like NumPy has? In the meantime I'm trying to get a working C++ compiler installed (seemed for a moment like I was finally almost finished building stuff...). Cheers, James. From jturner at gemini.edu Tue Jul 8 11:31:33 2008 From: jturner at gemini.edu (James Turner) Date: Tue, 08 Jul 2008 11:31:33 -0400 Subject: [SciPy-user] Portability problems (on Solaris) In-Reply-To: <4873845C.4080904@gemini.edu> References: 3d375d730807071756q6057eea0pf0cc95730a176cfb@mail.gmail.com <4873845C.4080904@gemini.edu> Message-ID: <48738855.8060207@gemini.edu> >> The docs are old and wrong. scipy.sparse does require C++. > > OK, thanks. I'd be happy to update that paragraph (is it just > sparse that needs C++?) if there is some wiki page for editing > documentation like NumPy has? In the meantime I'm trying to get > a working C++ compiler installed (seemed for a moment like I > was finally almost finished building stuff...). I have updated the following wiki page, which is similar to the INSTALL.txt file, but I'm not sure how to change the latter: http://scipy.org/Installing_SciPy/BuildingGeneral Cheers, James. From isaulv at gmail.com Tue Jul 8 16:31:51 2008 From: isaulv at gmail.com (Isaul Vargas) Date: Tue, 8 Jul 2008 16:31:51 -0400 Subject: [SciPy-user] Scipy optimize fmin_l_bfgs_b gives me an obscure error Message-ID: <12e2f0d90807081331q6a676115pc5c8f4854ab6425e@mail.gmail.com> I am using Scipy 0.6 on windows vista with python 2.5.2 (same thing happens in my linux setup), and when I use the function fmin_l_bfgs_b, I get this obscure error: ValueError: failed to initialize intent(inout) array -- input not fortran contiguous This is a part of the traceback within the function fmin_l_bfgs_b: 197 _lbfgsb.setulb(m, x, low_bnd, upper_bnd, nbd, f, g, factr, 198 pgtol, wa, iwa, task, iprint, csave, lsave, 199 isave, dsave) I have looked at the variables involved and they are all 1d arrays, so I don't see how fortran ordering would apply. Any ideas on how to fix this? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nwagner at iam.uni-stuttgart.de Tue Jul 8 17:37:12 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 08 Jul 2008 23:37:12 +0200 Subject: [SciPy-user] Scipy optimize fmin_l_bfgs_b gives me an obscure error In-Reply-To: <12e2f0d90807081331q6a676115pc5c8f4854ab6425e@mail.gmail.com> References: <12e2f0d90807081331q6a676115pc5c8f4854ab6425e@mail.gmail.com> Message-ID: On Tue, 8 Jul 2008 16:31:51 -0400 "Isaul Vargas" wrote: > I am using Scipy 0.6 on windows vista with python 2.5.2 >(same thing happens > in my linux setup), and when I use the function >fmin_l_bfgs_b, I get this > obscure error: > > ValueError: failed to initialize intent(inout) array -- >input not fortran > contiguous > > This is a part of the traceback within the function >fmin_l_bfgs_b: > 197 _lbfgsb.setulb(m, x, low_bnd, upper_bnd, >nbd, f, g, factr, > 198 pgtol, wa, iwa, task, iprint, >csave, lsave, > 199 isave, dsave) > > > I have looked at the variables involved and they are all >1d arrays, so I > don't see how fortran ordering would apply. > > Any ideas on how to fix this? Please can you submit your example? Nils From robert.kern at gmail.com Tue Jul 8 18:11:39 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 8 Jul 2008 17:11:39 -0500 Subject: [SciPy-user] Portability problems (on Solaris) In-Reply-To: <4873845C.4080904@gemini.edu> References: <4873845C.4080904@gemini.edu> Message-ID: <3d375d730807081511j32530f3dr19deb0032adb8b65@mail.gmail.com> On Tue, Jul 8, 2008 at 10:14, James Turner wrote: >> > 2. The Sun ForTran compiler doesn't like the "END" syntax >> > used in linalg/iterative -- for example >> > "END SUBROUTINE sSTOPTEST2" fails but "END" on its >> > own works (a shame, since the first syntax is nicer). >> > This applies to all the ForTran files in that directory. >> >> Isn't that standard Fortran-77? If it isn't, I'll change it. If >> it is, I am going to have to insist on not supporting such >> broken compilers. > > I did a quick search for "Fortran 77 standard" and came up with > the following, which suggests that the syntax is just "END" without > extra characters but AFAICT doesn't explicitly forbid the latter. > > http://www.fortran.com/fortran/F77_std/rjcnf0001-sh-11.html#sh-11.14 Hmm. I can't find an explicit statement either, but I changed them anyways. >> The docs are old and wrong. scipy.sparse does require C++. > > OK, thanks. I'd be happy to update that paragraph (is it just > sparse that needs C++?) scipy.stats.models seems to sort of optionally have some generated C++ code that it might possibly be able to ignore at runtime. It's a bit vague. We should sort that situation out. But otherwise, no. Just scipy.sparse. > if there is some wiki page for editing > documentation like NumPy has? Not at the moment. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From anirudhvij at gmail.com Wed Jul 9 09:03:24 2008 From: anirudhvij at gmail.com (anirudh vij) Date: Wed, 9 Jul 2008 15:03:24 +0200 Subject: [SciPy-user] Automatic MATLAB to scipy/numpy/pylab conversion Message-ID: <1aa72c450807090603k1237c3dcm79d8a73558001b08@mail.gmail.com> Hi, I have a bunch of matlab scripts that I want to convert to scipy/numpy/pylab. There is a nice resource at mathesaurus.sourceforge.net However, it is tedious and error-prone to do these conversions manually. For example: 1. there is no "end" in python.
So all "end" statements from matlab need to be deleted 2. the syntax "for i=1:10" has to be changed to "for i in range(1,11):" 3. special functions from matlab need to be converted to their scipy equivalents. 4. For comments "%" must be replaced by "#" 5. for array subscripting "()" should be replaced by "[]" 6. Many others........ To my mind these examples are prime contenders for automation. A reasonably small python/perl script can do the above (it will get more complicated of course, but the principle is the same). I realize that such a script/module won't be a complete replacement. A human will still need to look at the output python file and edit it. But the script/module can remove the grunt work and leave the human programmer to do more productive stuff. Is there a pre-existing project that accomplishes this kind of stuff? If not, I would like to start one. Feedback/Suggestions will be very helpful. From david.huard at gmail.com Wed Jul 9 09:44:06 2008 From: david.huard at gmail.com (David Huard) Date: Wed, 9 Jul 2008 09:44:06 -0400 Subject: [SciPy-user] Automatic MATLAB to scipy/numpy/pylab conversion In-Reply-To: <1aa72c450807090603k1237c3dcm79d8a73558001b08@mail.gmail.com> References: <1aa72c450807090603k1237c3dcm79d8a73558001b08@mail.gmail.com> Message-ID: <91cf711d0807090644n155c6881m47c026ba653229a2@mail.gmail.com> Hi, As an ex-matlab user, I remember being in the position where I would either 1. translate matlab codes to python, 2. rewrite the whole thing Python-like. I always ended up rewriting the code from scratch because 1. it's a great way to learn python, and 2. there are things that you can do in Python that you don't even dream are possible in matlab, so it's worth rethinking the code structure. Besides, a translator that does half the job will likely be seen as buggy software rather than as a useful tool for translation. I suggest you look at ways to interface matlab and python to launch matlab scripts from python. Sorry to be so pessimistic, David 2008/7/9 anirudh vij : > Hi, > > I have a bunch of matlab scripts that I want to convert to > scipy/numpy/pylab. There is a nice resource at > mathesaurus.sourceforge.net > > However, it is tedious and error-prone to do these conversions manually. > For example: > > 1. there is no "end" in python. So all "end" statements from matlab > need to be deleted > 2. the syntax "for i=1:10" has to be changed to "for i in range(1,11):" > 3. special functions from matlab need to be converted to their scipy > equivalents. > 4. For comments "%" must be replaced by "#" > 5. for array subscripting "()" should be replaced by "[]" > 6. Many others........ > > To my mind these examples are prime contenders for automation. A > reasonably small python/perl script can do the above (it will get more > complicated of course, but the principle is the same). > > I realize that such a script/module won't be a complete replacement. A > human will still need to look at the output python file and edit it. > But the script/module can remove the grunt work and leave the human > programmer to do more productive stuff. > > Is there a pre-existing project that accomplishes this kind of stuff? > If not, I would like to start one. Feedback/Suggestions will be very > helpful. > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From anirudhvij at gmail.com Wed Jul 9 09:52:44 2008 From: anirudhvij at gmail.com (anirudh vij) Date: Wed, 9 Jul 2008 15:52:44 +0200 Subject: [SciPy-user] Automatic MATLAB to scipy/numpy/pylab conversion In-Reply-To: <91cf711d0807090644n155c6881m47c026ba653229a2@mail.gmail.com> References: <1aa72c450807090603k1237c3dcm79d8a73558001b08@mail.gmail.com> <91cf711d0807090644n155c6881m47c026ba653229a2@mail.gmail.com> Message-ID: <1aa72c450807090652t34a9b908le57cee0736cd7266@mail.gmail.com> On Wed, Jul 9, 2008 at 3:44 PM, David Huard wrote: > Hi, > > As an ex-matlab user, I remember being in the position where I would either > 1. translate matlab codes to python, 2. rewrite the whole thing Python-like. > I always ended up rewriting the code from scratch because 1. it's a great > way to learn python, and 2. there are things that you can do in Python that you > don't even dream are possible in matlab, so it's worth rethinking the code > structure. > I already know python (I think :)). My purpose is to preprocess text from matlab so that I don't have to do the grunt work of typing range(n) everywhere where 1:n is used etc. I agree python is millions of times superior to matlab. It has to be, since it is a _real_ programming language, unlike matlab. > Besides, a translator that does half the job will likely be seen as > buggy software rather than as a useful tool for translation. I suggest you look at > ways to interface matlab and python to launch matlab scripts from python. > I want the guys in my workplace to make the switch from matlab to python. Thanks to large scale brainwashing (by me), they are receptive to the idea :) However, it is too much to expect matlab gurus to leave their environment of choice and make the switch in a week. So, I hope to use this tool/module to ease the translation. As a nice side effect, it also helps me right away in changing stuff from matlab to scipy. > Sorry to be so pessimistic, > I can understand your pessimism. But this does not replace the human programmer, but makes his life less dull (hopefully!), by doing the boring work for him. > David > > > 2008/7/9 anirudh vij : >> >> Hi, >> >> I have a bunch of matlab scripts that I want to convert to >> scipy/numpy/pylab. There is a nice resource at >> mathesaurus.sourceforge.net >> >> However, it is tedious and error-prone to do these conversions manually. >> For example: >> >> 1. there is no "end" in python. So all "end" statements from matlab >> need to be deleted >> 2. the syntax "for i=1:10" has to be changed to "for i in range(1,11):" >> 3. special functions from matlab need to be converted to their scipy >> equivalents. >> 4. For comments "%" must be replaced by "#" >> 5. for array subscripting "()" should be replaced by "[]" >> 6. Many others........ >> >> To my mind these examples are prime contenders for automation. A >> reasonably small python/perl script can do the above (it will get more >> complicated of course, but the principle is the same). >> >> I realize that such a script/module won't be a complete replacement. A >> human will still need to look at the output python file and edit it. >> But the script/module can remove the grunt work and leave the human >> programmer to do more productive stuff. >> >> Is there a pre-existing project that accomplishes this kind of stuff? >> If not, I would like to start one. Feedback/Suggestions will be very >> helpful.
>> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > cheers, Anirudh. From ivo.maljevic at gmail.com Wed Jul 9 10:13:29 2008 From: ivo.maljevic at gmail.com (Ivo Maljevic) Date: Wed, 9 Jul 2008 10:13:29 -0400 Subject: [SciPy-user] Automatic MATLAB to scipy/numpy/pylab conversion In-Reply-To: <1aa72c450807090652t34a9b908le57cee0736cd7266@mail.gmail.com> References: <1aa72c450807090603k1237c3dcm79d8a73558001b08@mail.gmail.com> <91cf711d0807090644n155c6881m47c026ba653229a2@mail.gmail.com> <1aa72c450807090652t34a9b908le57cee0736cd7266@mail.gmail.com> Message-ID: <826c64da0807090713w424ae510hffcc8c6dbda360a3@mail.gmail.com> If they are matlab experts then maybe it will be enough to give them the matlab-python cross reference: http://mathesaurus.sourceforge.net/matlab-python-xref.pdf Ivo 2008/7/9 anirudh vij : > On Wed, Jul 9, 2008 at 3:44 PM, David Huard wrote: > > Hi, > > > > As an ex-matlab user, I remember being in the position where I would > either > > 1. translate matlab codes to python, 2. rewrite the whole thing > Python-like. > > I always ended up rewriting the code from scratch because 1. it's a great > > way to learn python, and 2. there are things that you can do in Python that > you > > don't even dream are possible in matlab, so it's worth rethinking the > code > > structure. > > > > I already know python (I think :)). My purpose is to preprocess text > from matlab so that I don't have to do the grunt work of typing range(n) > everywhere where 1:n is used etc. > > I agree python is millions of times superior to matlab. It has to be, > since it is a _real_ programming language, unlike matlab. > > > Besides, a translator that does half the job will likely be seen as > > buggy software rather than as a useful tool for translation. I suggest you look > at > > ways to interface matlab and python to launch matlab scripts from python. > > > > I want the guys in my workplace to make the switch from matlab to > python. Thanks to large scale brainwashing (by me), they are receptive > to the idea :) > > However, it is too much to expect matlab gurus to leave their > environment of choice and make the switch in a week. So, I hope to use > this tool/module to ease the translation. As a nice side effect, it > also helps me right away in changing stuff from matlab to scipy. > > > Sorry to be so pessimistic, > > > > I can understand your pessimism. But this does not replace the human > programmer, but makes his life less dull (hopefully!), by doing the > boring work for him. > > David > > > > > > 2008/7/9 anirudh vij : > >> > >> Hi, > >> > >> I have a bunch of matlab scripts that I want to convert to > >> scipy/numpy/pylab. There is a nice resource at > >> mathesaurus.sourceforge.net > >> > >> However, it is tedious and error-prone to do these conversions manually. > >> For example: > >> > >> 1. there is no "end" in python. So all "end" statements from matlab > >> need to be deleted > >> 2. the syntax "for i=1:10" has to be changed to "for i in range(1,11):" > >> 3. special functions from matlab need to be converted to their scipy > >> equivalents. > >> 4. For comments "%" must be replaced by "#" > >> 5. for array subscripting "()" should be replaced by "[]" > >> 6. Many others........
> >> > >> To my mind these examples are prime contenders for automation. A > >> reasonably small python/perl script can do the above (it will get more > >> complicated of course, but the principle is the same). > >> > >> I realize that such a script/module won't be a complete replacement. A > >> human will still need to look at the output python file and edit it. > >> But the script/module can remove the grunt work and leave the human > >> programmer to do more productive stuff. > >> > >> Is there a pre-existing project that accomplishes this kind of stuff? > >> If not, I would like to start one. Feedback/Suggestions will be very > >> helpful. > >> _______________________________________________ > >> SciPy-user mailing list > >> SciPy-user at scipy.org > >> http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > > > cheers, > Anirudh. > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anirudhvij at gmail.com Wed Jul 9 10:29:39 2008 From: anirudhvij at gmail.com (anirudh vij) Date: Wed, 9 Jul 2008 16:29:39 +0200 Subject: [SciPy-user] Automatic MATLAB to scipy/numpy/pylab conversion In-Reply-To: <826c64da0807090713w424ae510hffcc8c6dbda360a3@mail.gmail.com> References: <1aa72c450807090603k1237c3dcm79d8a73558001b08@mail.gmail.com> <91cf711d0807090644n155c6881m47c026ba653229a2@mail.gmail.com> <1aa72c450807090652t34a9b908le57cee0736cd7266@mail.gmail.com> <826c64da0807090713w424ae510hffcc8c6dbda360a3@mail.gmail.com> Message-ID: <1aa72c450807090729y259fc803yc27917e4f3d29ad7@mail.gmail.com> On Wed, Jul 9, 2008 at 4:13 PM, Ivo Maljevic wrote: > If they are matlab experts then maybe it will be enough to give them the > matlab-python cross reference: > > http://mathesaurus.sourceforge.net/matlab-python-xref.pdf > Mathesaurus is great for guys creating new python code (where they can use the cross-reference). But conversion of legacy matlab code (which I have to do, unfortunately) is still an issue. A python script could perhaps help with that. I don't know whether such a script is better called a preprocessor or a converter. cheers, Anirudh. From rob.clewley at gmail.com Wed Jul 9 11:28:15 2008 From: rob.clewley at gmail.com (Rob Clewley) Date: Wed, 9 Jul 2008 11:28:15 -0400 Subject: [SciPy-user] Automatic MATLAB to scipy/numpy/pylab conversion In-Reply-To: <1aa72c450807090729y259fc803yc27917e4f3d29ad7@mail.gmail.com> References: <1aa72c450807090603k1237c3dcm79d8a73558001b08@mail.gmail.com> <91cf711d0807090644n155c6881m47c026ba653229a2@mail.gmail.com> <1aa72c450807090652t34a9b908le57cee0736cd7266@mail.gmail.com> <826c64da0807090713w424ae510hffcc8c6dbda360a3@mail.gmail.com> Message-ID: Unfortunately, basic "for" loop syntax and "end" statements are just the tip of the iceberg in terms of conversion issues. Even if you had a reasonable way to fix those, I think you'll find it much harder to automatically convert array syntax. Not least of which is indices starting at 0, not 1. That can really change the presentation of an algorithm in subtle ways.
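To make the indexing point concrete, here is a tiny Euler-step fragment translated by hand (the toy f, N and h are made up just for illustration; this is not output from any real translator):

import numpy as np

def f(v):
    return -v          # toy right-hand side

N, h = 10, 0.1
x = np.zeros(N)
x[0] = 1.0             # MATLAB's x(1) becomes x[0]
# MATLAB original:  for i = 2:N  ->  x(i) = x(i-1) + h*f(x(i-1));  end
for i in range(1, N):  # Python: 0-based, and range() excludes N
    x[i] = x[i-1] + h * f(x[i-1])

The loop bounds, the indices and the range endpoint all shift at once, and a naive textual substitution will usually get at least one of them wrong.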
I'm no expert in writing parsers, etc., but you might learn something about what's necessary to convert from matlab by looking at the python 2to3 tool that is intended to help users automatically upgrade their code from the python version 2.x series to python 3000. This is fairly complex stuff, making use of a formal python grammar in an abstract syntax tree. I suspect you won't achieve a reliable conversion of matlab scripts without a similarly sophisticated approach, which would be a lot of work for you. Given your success with your colleagues so far, I'd suggest you keep diligently working on the psychological angle until they make the effort themselves to discover how easy it is to learn python. Maybe dangle them a carrot, such as a proper object-oriented example relevant to your work to show off what they are missing... -Rob On Wed, Jul 9, 2008 at 10:29 AM, anirudh vij wrote: > On Wed, Jul 9, 2008 at 4:13 PM, Ivo Maljevic wrote: >> If they are matlab experts than maybe it will be enough to give them the >> matlab-python cross reference: >> >> http://mathesaurus.sourceforge.net/matlab-python-xref.pdf >> > Mathersaurus is great for guys creating new python code(where they can > use the cross-reference). > > But conversion of legacy matlab code (which i have to do, > unfortunately) is still an issue. A python script could perhaps help > with that. I dont know whether such a script is better called a > preprocessor or a converter. > > cheers, > Anirudh. > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From pete.forman at westerngeco.com Wed Jul 9 12:49:47 2008 From: pete.forman at westerngeco.com (Pete Forman) Date: Wed, 09 Jul 2008 17:49:47 +0100 Subject: [SciPy-user] Automatic MATLAB to scipy/numpy/pylab conversion References: <1aa72c450807090603k1237c3dcm79d8a73558001b08@mail.gmail.com> <91cf711d0807090644n155c6881m47c026ba653229a2@mail.gmail.com> <1aa72c450807090652t34a9b908le57cee0736cd7266@mail.gmail.com> <826c64da0807090713w424ae510hffcc8c6dbda360a3@mail.gmail.com> <1aa72c450807090729y259fc803yc27917e4f3d29ad7@mail.gmail.com> Message-ID: <7ibvdmtw.fsf@wgmail2.gatwick.eur.slb.com> "Rob Clewley" writes: > I'm no expert in writing parsers, etc., but you might learn > something about what's necessary to convert from matlab ... I've done a few and would recommend that general approach. It will be much easier doing things like changing to zero base indexes working with an AST. ANTLR would be my tool of choice but there are others. Just avoid lex and friends, they are a lot of work compared with modern parsers. -- Pete Forman -./\.- Disclaimer: This post is originated WesternGeco -./\.- by myself and does not represent pete.forman at westerngeco.com -./\.- the opinion of Schlumberger or http://petef.22web.net -./\.- WesternGeco. From millman at berkeley.edu Wed Jul 9 13:04:46 2008 From: millman at berkeley.edu (Jarrod Millman) Date: Wed, 9 Jul 2008 10:04:46 -0700 Subject: [SciPy-user] REMINDER: SciPy 2008 Early Registration ends in 2 days Message-ID: Hello, This is a reminder that early registration for SciPy 2008 ends in two days on Friday, July 11th. To register, please see: http://conference.scipy.org/to_register This year's conference has two days for tutorials, two days of presentations, and ends with a two day coding sprint. 
If you want to learn more see my blog post: http://jarrodmillman.blogspot.com/2008/07/scipy-2008-conference-program-posted.html Cheers, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ From isaulv at gmail.com Wed Jul 9 13:24:59 2008 From: isaulv at gmail.com (Isaul Vargas) Date: Wed, 9 Jul 2008 13:24:59 -0400 Subject: [SciPy-user] Automatic MATLAB to scipy/numpy/pylab conversion Message-ID: <12e2f0d90807091024p42aa7977vced0ae210a0a9c50@mail.gmail.com> There are some issues in converting Matlab code to Python. Matlab has "dynamic arrays". Suppose you have a 1x3 array, and you want to add a new row via this syntax: a(4,:) = b; The literal translation a[4,:] = b won't work in python. You will have to preallocate the array size somehow. Second, there's the 0-based indexing. Generally you shift indices by -1, but sometimes you may have to add a 1 to the end of the range. Also concatenation can be a pain. Hope this helps. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bjracine at glosten.com Wed Jul 9 13:42:25 2008 From: bjracine at glosten.com (Benjamin Racine ) Date: Wed, 09 Jul 2008 10:42:25 -0700 Subject: [SciPy-user] Fwd: pep8.py References: <48749568.7EC6.00A7.0@glosten.com> Message-ID: <487495F1.7EC6.00A7.0@glosten.com> I am new, Anybody care to tell me where I should put the pep8.py style conformance checking script in order that ipython will see it when I invoke: run pep8.py test.py I have installed from python(x,y) on windows. Thanks, Ben R. From anirudhvij at gmail.com Wed Jul 9 13:59:15 2008 From: anirudhvij at gmail.com (anirudh vij) Date: Wed, 9 Jul 2008 19:59:15 +0200 Subject: [SciPy-user] Automatic MATLAB to scipy/numpy/pylab conversion In-Reply-To: <12e2f0d90807091024p42aa7977vced0ae210a0a9c50@mail.gmail.com> References: <12e2f0d90807091024p42aa7977vced0ae210a0a9c50@mail.gmail.com> Message-ID: <1aa72c450807091059j2d373f95ia290097cafd1b35c@mail.gmail.com> On Wed, Jul 9, 2008 at 7:24 PM, Isaul Vargas wrote: > There are some issues in converting Matlab code to Python. Tons of them in fact :) The idea is to handle the easy stuff via the script, and the harder ones manually. > Matlab has "dynamic arrays". Suppose you have a 1x3 array, and you want to > add a new row via this syntax: > a(4,:) = b; > The literal translation a[4,:] = b won't work in python. You will have to preallocate the array size > somehow. > Second, there's the 0-based indexing. Generally you shift indices by -1, but > sometimes you may have to add a 1 to the end of the range. > This can be handled, I think. > Also concatenation can be a pain. > Currently using hstack and vstack > Hope this helps. I am working on the script right now. Will post it somewhere once the thing starts working. Currently doing very basic stuff. As Pete pointed out, for a full conversion, something like ANTLR will be needed. cheers, Anirudh.
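P.S. To give an idea of what I mean by "very basic stuff", the current core is little more than a line-by-line regex pass, roughly like this (a simplified sketch, function name made up; the output still needs a human):

import re

def rough_translate(line):
    """First-pass MATLAB -> Python rewrites. Deliberately naive:
    it mangles '%' inside strings and only handles simple loop headers."""
    line = line.replace('%', '#')                 # comments
    line = re.sub(r'^\s*end\s*;?\s*$', '', line)  # bare 'end' lines
    # for i=a:b  ->  for i in range(a, b+1):
    line = re.sub(r'^(\s*)for\s+(\w+)\s*=\s*(\w+)\s*:\s*(\w+)\s*$',
                  r'\1for \2 in range(\3, \4+1):', line)
    return line

for line in open('old_script.m'):
    print rough_translate(line.rstrip())

Everything beyond these mechanical rewrites gets flagged for manual editing.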
From robert.kern at gmail.com Wed Jul 9 15:31:34 2008 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 9 Jul 2008 14:31:34 -0500 Subject: [SciPy-user] Fwd: pep8.py In-Reply-To: <487495F1.7EC6.00A7.0@glosten.com> References: <48749568.7EC6.00A7.0@glosten.com> <487495F1.7EC6.00A7.0@glosten.com> Message-ID: <3d375d730807091231m15aa7a08x2931d85f9233919a@mail.gmail.com> On Wed, Jul 9, 2008 at 12:42, Benjamin Racine wrote: > I am new, > > Anybody care to tell me where I > should put the pep8.py style conformance > checking script in order > that ipython will see it when I invoke: > > run pep8.py test.py > > I have installed from python(x,y) on windows. You will want to ask IPython questions on the IPython mailing list, but basically, you need to give the full path to the pep8.py file. %run does not search your PATH or anything. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From isaulv at gmail.com Wed Jul 9 19:17:58 2008 From: isaulv at gmail.com (Isaul Vargas) Date: Wed, 9 Jul 2008 19:17:58 -0400 Subject: [SciPy-user] Scipy optimize fmin_l_bfgs_b gives me an obscure error Message-ID: <12e2f0d90807091617u711d7fbbp9baf1e7de698d786@mail.gmail.com> Here is what my function call looks like: xt = scipy.optimize.fmin_l_bfgs_b(obj_grad_func, xcur, args = (b,h,Beta,R,wR,wh,muh, alpha_b, beta_b, BN, sp), maxfun = 1000) The args are mostly arrays except for BN and SP which are dicts that contain various flags. In the debugger, I tried the following: xcur is a 1x3 array with the values 1,1,0. I check the flags of xcur and I see it is not fortran contiguous. I do a copy: xcur2 = xcur.copy('f') and now I get this error message: *** error: failed in converting 10th argument `wa' of _lbfgsb.setulb to C/Fortran array There is no variable 'wa' in any of my functions, so I am not sure why this is failing. Also, how do I set up the bounds? I am not clear on how to set them. I have a list of upper bounds (an array of 3 values) and a list of lower bounds. Not sure how to set them. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Thu Jul 10 02:06:43 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 10 Jul 2008 08:06:43 +0200 Subject: [SciPy-user] Scipy optimize fmin_l_bfgs_b gives me an obscure error In-Reply-To: <12e2f0d90807091617u711d7fbbp9baf1e7de698d786@mail.gmail.com> References: <12e2f0d90807091617u711d7fbbp9baf1e7de698d786@mail.gmail.com> Message-ID: On Wed, 9 Jul 2008 19:17:58 -0400 "Isaul Vargas" wrote: > Here is what my function call looks like: > > xt = scipy.optimize.fmin_l_bfgs_b(obj_grad_func, xcur, >args = > (b,h,Beta,R,wR,wh,muh, alpha_b, beta_b, BN, sp), maxfun >= 1000) > > > The args are mostly arrays except for BN and SP which >are dicts that contain > various flags. > > In the debugger, I tried the following: > xcur is a 1x3 array with the values 1,1,0. > I check the flags of xcur and I see it is not fortran >contiguous. > I do a copy: xcur2 = xcur.copy('f') > and now I get this error message: > *** error: failed in converting 10th argument `wa' of >_lbfgsb.setulb to > C/Fortran array > > There is no variable 'wa' in any of my functions, so I >am not sure why this > is failing. > > Also, how do I set up the bounds? I am not clear on how >to set them. I have a > list of upper bounds (an array of 3 values) and a list >of lower bounds. Not > sure how to set them.
For lower and upper bounds: bounds = [(2000.,2100.),(1800.,1850.),(1600.,1630.)] x_lbl,f,d = optimize.fmin_l_bfgs_b(func1, x0, fprime=None, args=(), approx_grad=1, bounds=bounds) Nils From nwagner at iam.uni-stuttgart.de Thu Jul 10 02:10:59 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 10 Jul 2008 08:10:59 +0200 Subject: [SciPy-user] Scipy optimize fmin_l_bfgs_b gives me an obscure error In-Reply-To: <12e2f0d90807091617u711d7fbbp9baf1e7de698d786@mail.gmail.com> References: <12e2f0d90807091617u711d7fbbp9baf1e7de698d786@mail.gmail.com> Message-ID: On Wed, 9 Jul 2008 19:17:58 -0400 "Isaul Vargas" wrote: > Here is what my function call looks like: > > xt = scipy.optimize.fmin_l_bfgs_b(obj_grad_func, xcur, >args = > (b,h,Beta,R,wR,wh,muh, alpha_b, beta_b, BN, sp), maxfun >= 1000) > > > The args are mostly arrays except for BN and SP which >are dicts that contain > various flags. > > In the debugger, I tried the following: > xcur is a 1x3 array with the values 1,1,0. > I check the flags of xcur and I see it is not fortran >contiguous. > I do a copy: xcur2 = xcur.copy('f') > and now I get this error message: > *** error: failed in converting 10th argument `wa' of >_lbfgsb.setulb to > C/Fortran array > You can find the corresponding file (lbfgsb.pyf) in scipy/scipy/optimize/lbfgsb ! -*- f90 -*- python module _lbfgsb ! in interface ! in :_lbfgsb subroutine setulb(n,m,x,l,u,nbd,f,g,factr,pgtol,wa,iwa,task,iprint,csave,lsave,isave,dsave) ! in :lbfsgb:routines.f integer intent(in),optional,check(len(x)>=n),depend(x) :: n=len(x) integer intent(in) :: m double precision dimension(n),intent(inout) :: x double precision dimension(n),depend(n),intent(in) :: l double precision dimension(n),depend(n),intent(in) :: u integer dimension(n),depend(n),intent(in) :: nbd double precision intent(inout) :: f double precision dimension(n),depend(n),intent(inout) :: g double precision intent(in) :: factr double precision intent(in) :: pgtol double precision dimension(2*m*n+4*n+12*m*m+12*m),depend(n,m),intent(inout) :: wa integer dimension(3 * n),depend(n),intent(inout) :: iwa character*60 intent(inout) :: task integer intent(in) :: iprint character*60 intent(inout) :: csave logical dimension(4),intent(inout) :: lsave integer dimension(44),intent(inout) :: isave double precision dimension(29),intent(inout) :: dsave end subroutine setulb end interface end python module _lbfgsb Nils > There is no variable 'wa' in any of my functions, so I >am not sure why this > is failing. > > Also, how do I set up the bounds? I am not clear on how >to set them. I have a > list of upper bounds (an array of 3 values) and a list >of lower bounds. Not > sure how to set them. From silva at lma.cnrs-mrs.fr Thu Jul 10 09:13:55 2008 From: silva at lma.cnrs-mrs.fr (Fabrice Silva) Date: Thu, 10 Jul 2008 15:13:55 +0200 Subject: [SciPy-user] odepack or f2py error Message-ID: <1215695635.3278.7.camel@Portable-s2m.cnrs-mrs.fr> Hi, Using odeint, I've got an "Internal error" when the maximum number of steps is reached: lsoda-- at current t (=r1), mxstep (=i1) steps taken on this call before reaching tout In above message, I1 = 2000 At line 1991 of file scipy/integrate/odepack/ddasrt.f (unit = 6, file = 'stdout') Internal Error: printf is broken In above message, R1 =f fab at laptop$ As "printf is broken", I cannot know the instant at which it breaks... I'm not sure the internal error is due to odeint or to f2py, as it seems that I've already seen this problem outside of odeint. Any suggestions?
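For reference, a stripped-down call showing where the step limit and the diagnostics come in (the right-hand side here is a made-up stiff example, not my real system):

import numpy as np
from scipy.integrate import odeint

def rhs(y, t):
    # artificially stiff toy problem, just to force many internal steps
    return -1000.0 * y + 3000.0 - 2000.0 * np.exp(-t)

t = np.linspace(0., 1., 101)
y, info = odeint(rhs, 0., t, full_output=True, mxstep=5000)
print info['message']   # reports whether the integration succeeded

Raising mxstep above the default makes the warning go away, but I would still like to see the t value that should have been printed in the message.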
-- Fabrice Silva From pav at iki.fi Thu Jul 10 15:30:33 2008 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 10 Jul 2008 19:30:33 +0000 (UTC) Subject: [SciPy-user] odepack or f2py error References: <1215695635.3278.7.camel@Portable-s2m.cnrs-mrs.fr> Message-ID: Thu, 10 Jul 2008 15:13:55 +0200, Fabrice Silva wrote: > Hi, > Using odeint, I've got an "Internal error" when the maximum number of > steps is reached: > lsoda-- at current t (=r1), mxstep (=i1) steps > taken on this call before reaching tout > In above message, I1 = 2000 > At line 1991 of file scipy/integrate/odepack/ddasrt.f (unit = 6, > file = 'stdout') > Internal Error: printf is broken > In above message, R1 =f > fab at laptop$ > > As "printf is broken", I cannot know the instant at which it breaks... I'm > not sure the internal error is due to odeint or to f2py, as it seems > that I've already seen this problem outside of odeint. > > Any suggestions? What Fortran compiler are you using? Some old version of gfortran perhaps? I think this "Internal Error:" message is from your Fortran compiler. Googling for it brings up several messages referring to gfortran, so it's likely this is a gfortran bug (possibly fixed in newer versions). -- Pauli Virtanen From astrofitz at gmail.com Thu Jul 10 19:19:49 2008 From: astrofitz at gmail.com (Michael Fitzgerald) Date: Thu, 10 Jul 2008 16:19:49 -0700 Subject: [SciPy-user] patch for ndimage.generic_filter Message-ID: <477CA520-C850-47AD-9888-92DDC968BBF3@gmail.com> Hello, I found a small bug in ndimage.generic_filter() that arises when using the 'size' keyword with multidimensional arrays. Patch attached. Mike -------------- next part -------------- A non-text attachment was scrubbed... Name: filter.diff Type: application/octet-stream Size: 580 bytes Desc: not available URL: -------------- next part -------------- From callen at gemini.edu Fri Jul 11 21:26:48 2008 From: callen at gemini.edu (C. Allen) Date: Fri, 11 Jul 2008 15:26:48 -1000 Subject: [SciPy-user] HDULists sharing HDU entries Message-ID: <1215826008.5560.24.camel@phact.hi.gemini.edu> I have been making a system for handling astronomical data, the actual fits manipulation using pyfits. Please correct me if there is another list this question would be more appropriate on; I did not see a pyfits list at the project page. When I started this system I was new to python (but experienced in many other languages) and in the process I have done some things I am not sure about. I of course test my ideas to make sure they work, but such tests can miss some condition in which the idiom ultimately is bound to fail horribly. So... I'm curious if something I'm doing with HDULists is ok. Basically, we have an abstraction for our data, let's call it AstroData because that's what we call it. It contains a member which is the HDUList as returned by pyfits when opening a MEF. I also have some syntax that allows one to get an AstroData instance of just sub-selections of extensions in the MEF, e.g.: if ad = AstroData(fitsfilename) then (e.g.) ad["SCI"] returns a separate AstroData instance from 'ad'. In this case I build the HDUList out of the HDU entries in the original instance. I make a python list with the PHU and whatever extensions are being selected (in the sample case, those where EXTNAME="SCI"), and initialize a new HDUList instance with this list. Then this unique HDUList is stored in the new AstroData instance. So each AstroData instance has its own HDUList, but the HDU entries in it are shared.
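In sketch form, the selection does something like this (simplified; the filename and the literal EXTNAME match are just for illustration, the real code handles more cases):

import pyfits

hdulist = pyfits.open('somefile.fits')
# reuse the PHU and the matching extension HDUs -- nothing is copied
selected = [hdulist[0]] + [hdu for hdu in hdulist[1:]
                           if hdu.header.get('EXTNAME') == 'SCI']
sub_hdulist = pyfits.HDUList(selected)

So the two HDUList objects are distinct, but they point at the very same HDU objects.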
Everything works fine, to write the constructed HDUList is to save just those extensions listed, as desired, and I can pass the HDUList around as normal to anything that wants HDUList instances. Of course, if I change any of the shared extensions, it will be seen in all HDULists that refer to it, which is confusing to some but is exactly what we want in this case. All this runs perfectly fine in practice. Does anyone know any reason this might get me into trouble? It seems I have nothing to worry about, but I have a nagging idea that perhaps the HDU is more tightly linked to the list containing it in some way that I do not yet know. thanks, craig -------------- next part -------------- An HTML attachment was scrubbed... URL: From chanley at stsci.edu Fri Jul 11 21:36:20 2008 From: chanley at stsci.edu (Christopher Hanley) Date: Fri, 11 Jul 2008 21:36:20 -0400 Subject: [SciPy-user] HDULists sharing HDU entries In-Reply-To: <1215826008.5560.24.camel@phact.hi.gemini.edu> References: <1215826008.5560.24.camel@phact.hi.gemini.edu> Message-ID: <48780A94.1030303@stsci.edu> C. Allen wrote: > I have been making a system for handling astronomical data, the actual > fits manipulation using pyfits. Please correct me if there is another > list this question would be more appropriate on, I did not see a pyfits > list at the project page. > > When I started this system I was new to python (but experienced in many > other languages) and in the process I have done some things I am not > sure about. I of course test my ideas to make sure they work, but of > course, such tests can miss some condition in which the idiom ultimately > is bound to fail horribly. > > So... I'm curious if something I'm doing with HDULists is ok. > > Basically, we have an abstraction for our data, lets call it AstroData > because that's what we call it. It contains a member which is the > HDUList as returned by pyfits when opening a MEF. > > I also have some syntax that allows one to get an AstroData instance of > just sub-selections of extensions in the MEF, e.g.: > > if ad = AstroData(fitsfilename) then (e.g.) ad["SCI"] returns a separate > AstroData instance from 'ad'. In this case I build the HDUList out of > the HDU entries in the original instance. I make a python list with the > PHU and whatever extensions are being selected (in the sample case, > those where EXTNAME="SCI"), and initialize a new HDUList instance with > this list. Then this unique HDUList is stored in the new AstroData > instance. So each AstroData instance has it's own HDUList, but the HDU > entries in it are shared. > > Everything works fine, to write the constructed HDUList is to save just > those extensions listed, as desired, and I can pass the HDUList around > as normal to anything that wants HDUList instances. Of course, if I > change any of the shared extensions, it will be seen in all HDULists > that refer to it, which is confusing to some but is exactly what we want > in this case. > > All this runs perfectly fine in practice. Does anyone know any reason > this might get me into trouble? It seems I have nothing to worry about, > but I have a nagging idea that perhaps the HDU is more tightly linked to > the list containing it in some way that I do not yet know. 
> > thanks, > craig > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user Hi Craig, The more appropriate list is probably astropy at scipy.org. As to your question. Assuming that I understand what you are attempting, I don't think you should run into any problems. Chris -- Christopher Hanley Systems Software Engineer Space Telescope Science Institute 3700 San Martin Drive Baltimore MD, 21218 (410) 338-4338 From callen at gemini.edu Fri Jul 11 21:53:05 2008 From: callen at gemini.edu (C. Allen) Date: Fri, 11 Jul 2008 15:53:05 -1000 Subject: [SciPy-user] HDULists sharing HDU entries In-Reply-To: <48780A94.1030303@stsci.edu> References: <1215826008.5560.24.camel@phact.hi.gemini.edu> <48780A94.1030303@stsci.edu> Message-ID: <1215827585.5560.27.camel@phact.hi.gemini.edu> thanks chris On Fri, 2008-07-11 at 21:36 -0400, Christopher Hanley wrote: > C. Allen wrote: > > I have been making a system for handling astronomical data, the actual > > fits manipulation using pyfits. Please correct me if there is another > > list this question would be more appropriate on, I did not see a pyfits > > list at the project page. > > > > When I started this system I was new to python (but experienced in many > > other languages) and in the process I have done some things I am not > > sure about. I of course test my ideas to make sure they work, but of > > course, such tests can miss some condition in which the idiom ultimately > > is bound to fail horribly. > > > > So... I'm curious if something I'm doing with HDULists is ok. > > > > Basically, we have an abstraction for our data, lets call it AstroData > > because that's what we call it. It contains a member which is the > > HDUList as returned by pyfits when opening a MEF. > > > > I also have some syntax that allows one to get an AstroData instance of > > just sub-selections of extensions in the MEF, e.g.: > > > > if ad = AstroData(fitsfilename) then (e.g.) ad["SCI"] returns a separate > > AstroData instance from 'ad'. In this case I build the HDUList out of > > the HDU entries in the original instance. I make a python list with the > > PHU and whatever extensions are being selected (in the sample case, > > those where EXTNAME="SCI"), and initialize a new HDUList instance with > > this list. Then this unique HDUList is stored in the new AstroData > > instance. So each AstroData instance has it's own HDUList, but the HDU > > entries in it are shared. > > > > Everything works fine, to write the constructed HDUList is to save just > > those extensions listed, as desired, and I can pass the HDUList around > > as normal to anything that wants HDUList instances. Of course, if I > > change any of the shared extensions, it will be seen in all HDULists > > that refer to it, which is confusing to some but is exactly what we want > > in this case. > > > > All this runs perfectly fine in practice. Does anyone know any reason > > this might get me into trouble? It seems I have nothing to worry about, > > but I have a nagging idea that perhaps the HDU is more tightly linked to > > the list containing it in some way that I do not yet know. 
> >
> > thanks,
> > craig
> >
> > ------------------------------------------------------------------------
> >
> > _______________________________________________
> > SciPy-user mailing list
> > SciPy-user at scipy.org
> > http://projects.scipy.org/mailman/listinfo/scipy-user
>
> Hi Craig,
>
> The more appropriate list is probably astropy at scipy.org.
>
> As to your question: assuming that I understand what you are attempting, I don't think you should run into any problems.
>
> Chris

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From silva at lma.cnrs-mrs.fr  Sat Jul 12 07:32:29 2008
From: silva at lma.cnrs-mrs.fr (Fabrice Silva)
Date: Sat, 12 Jul 2008 13:32:29 +0200
Subject: [SciPy-user] odepack or f2py error
In-Reply-To: 
References: <1215695635.3278.7.camel@Portable-s2m.cnrs-mrs.fr>
Message-ID: <1215862349.3033.3.camel@Portable-s2m.cnrs-mrs.fr>

On Thursday 10 July 2008 at 19:30 +0000, Pauli Virtanen wrote:
> What Fortran compiler are you using? Some old version of gfortran perhaps?

I do have gfortran installed:

fab at Portable-s2m:~$ gfortran --version
GNU Fortran (Debian 4.3.1-6) 4.3.1

How can I check that gfortran is used by f2py?

> I think this "Internal Error:" message is from your Fortran compiler.
> Googling for it brings up several messages referring to gfortran, so
> it's likely this is a gfortran bug (possibly fixed in newer versions).

I am using debian unstable, with weekly upgrades. I remember using printf calls for debugging last year without any error. I have looked at the gfortran bugzilla, but I did not see any bugs which seem similar to this one...
--
Fabrice Silva

From pav at iki.fi  Sat Jul 12 08:22:36 2008
From: pav at iki.fi (Pauli Virtanen)
Date: Sat, 12 Jul 2008 12:22:36 +0000 (UTC)
Subject: [SciPy-user] odepack or f2py error
References: <1215695635.3278.7.camel@Portable-s2m.cnrs-mrs.fr> <1215862349.3033.3.camel@Portable-s2m.cnrs-mrs.fr>
Message-ID: 

Sat, 12 Jul 2008 13:32:29 +0200, Fabrice Silva wrote:
> On Thursday 10 July 2008 at 19:30 +0000, Pauli Virtanen wrote:
>> What Fortran compiler are you using? Some old version of gfortran perhaps?
>
> I do have gfortran installed
>
> fab at Portable-s2m:~$ gfortran --version
> GNU Fortran (Debian 4.3.1-6) 4.3.1
>
> How can I check that gfortran is used by f2py?

Look at the output when scipy is compiled, or try to ldd scipy/integrate/vode.so and check if it's linked with libgfortran.

>> I think this "Internal Error:" message is from your Fortran compiler.
>> Googling for it brings up several messages referring to gfortran, so
>> it's likely this is a gfortran bug (possibly fixed in newer versions).
>
> I am using debian unstable, with weekly upgrades. I remember using printf calls for debugging last year without any error. I have looked at the gfortran bugzilla, but I did not see any bugs which seem similar to this one...

It would help if you can write a minimal testcase for this, i.e. what arguments to give to odeint so that it produces this error. It's probably best to continue the discussion here:

        http://scipy.org/scipy/scipy/ticket/696

so that the information is archived in a single place.
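A testcase of roughly that shape (hypothetical right-hand side; mxstep deliberately tiny so that lsoda hits the limit and tries to print) could be:

import numpy
from scipy import integrate

def rhs(y, t):
    return -1e6*y        # stiff on purpose, hypothetical

t = numpy.linspace(0., 1., 11)
integrate.odeint(rhs, 1.0, t, mxstep=5)   # should trigger the mxstep message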
-- Pauli Virtanen From travis at enthought.com Sat Jul 12 11:34:57 2008 From: travis at enthought.com (Travis Vaught) Date: Sat, 12 Jul 2008 10:34:57 -0500 Subject: [SciPy-user] SciPy 2008 Registration Deadline Extended Message-ID: <3A00FC0B-B463-4308-B0CF-7D6F3195E609@enthought.com> Greetings, The merchant account processor that we use for the SciPy Conference online registration has been experiencing some inexplicable problems authorizing some registrations. Apologies to those who have struggled to register and have not been successful. Because of the problems, we're extending the early-bird rates through Monday at midnight Central Time. If you experience any problems registering, please give us a call during business hours Monday (9:00am - 5:00pm Central - 512.536.1057). http://conference.scipy.org/ For those of you who have set up an account on the conference site, but have not yet registered, I encourage you to do so in time to take advantage of the lower rates. I also encourage everyone to make sure you've specified which tutorial track, T-shirt size, whether you'll attend the sprint, and meal preferences in your profile (http://conference.scipy.org/profile ). Please send me an email if you have any questions. Best, Travis From williamhpurcell at gmail.com Sun Jul 13 15:59:47 2008 From: williamhpurcell at gmail.com (William Purcell) Date: Sun, 13 Jul 2008 14:59:47 -0500 Subject: [SciPy-user] signal.lti(A,B,C,D) with D=0 Message-ID: I am trying to set up state space representation of a system using signal.lti. The system has no feedforward so D = 0. I've tried the three options listed in the code below to represent D. The only way I can get it to work is option 2 if C has 1 row. If C has more than 1 row it won't work. Any thoughts? -Bill Code --------------------------------------------------------------- from numpy import ones,matrix from scipy import signal r = ones(3) A = matrix([r,r,r]) B = matrix([r]).T C = matrix([r,r]) #three options of D to make it 0 #1) D=0 #2) D = matrix([0,0]).T #3) D = None D = 0 #D = None #D = matrix([0,0]).T Gss = signal.lti(A,B,C,D) ----------------------------------------------------------- Tracebacks ----------------------------------------------------------- Option 1 /usr/lib/python2.5/site-packages/scipy/signal/ltisys.py in abcd_normalize(A, B, C, D) 101 raise ValueError, "A and C must have the same number of columns." 102 if MD != MC: --> 103 raise ValueError, "C and D must have the same number of rows." 104 if ND != NB: 105 raise ValueError, "B and D must have the same number of columns." : C and D must have the same number of rows. WARNING: Failure executing file: Option 2 (with C as two rows...if C is a single row I do not get this traceback) /usr/lib/python2.5/site-packages/scipy/signal/ltisys.py in ss2tf(A, B, C, D, input) 147 148 num_states = A.shape[0] --> 149 type_test = A[:,0] + B[:,0] + C[0,:] + D 150 num = numpy.zeros((nout, num_states+1), type_test.dtype) 151 for k in range(nout): : shape mismatch: objects cannot be broadcast to a single shape WARNING: Failure executing file: Option 3 same as 1 -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at pythonxy.com Sun Jul 13 13:19:08 2008 From: contact at pythonxy.com (Pierre Raybaut) Date: Sun, 13 Jul 2008 19:19:08 +0200 Subject: [SciPy-user] [python(x,y)] New Release: 2.0.0 Message-ID: <629b08a40807131019o36a7c35cw19ee640bbbcc173@mail.gmail.com> Hi all, Python(x,y) 2.0.0 is now available on http://www.pythonxy.com. 
Changes history

07-12-2008 - Version 2.0.0:

* Added:
  o ITK 3.6 (WrapITK) - Open-source software system for image processing (leading-edge segmentation and registration algorithms)
  o Windows installer: new component manager system (updating is easier and more flexible, install and uninstall processes are much cleaner, ...)
  o Windows installer: Python(x,y) may now be installed silently (/S option), and the non-silent installation now allows the user to keep working on the machine during the process

* Updated:
  o Matplotlib 0.98.1 (see release notes)
  o SymPy 0.6.0
  o GDAL 1.5.2
  o PySerial 2.4

Regards,
Pierre Raybaut

From ryanlists at gmail.com  Mon Jul 14 09:41:43 2008
From: ryanlists at gmail.com (Ryan Krauss)
Date: Mon, 14 Jul 2008 08:41:43 -0500
Subject: [SciPy-user] signal.lti(A,B,C,D) with D=0
In-Reply-To: 
References: 
Message-ID: 

This seems like a bug in ltisys. I think this line

149         type_test = A[:,0] + B[:,0] + C[0,:] + D

should be

149         type_test = A[:,0] + B[:,0] + C[0,:] + D[0,:]

For a multiple-input/multiple-output system with i inputs, n states, and m outputs, A should be n by n, B should be n by i, C should be m by n, and D should be m by i.

(Actually, because of this code a few lines above:

    # make MOSI from possibly MOMI system.
    if B.shape[-1] != 0:
        B = B[:,input]
        B.shape = (B.shape[0],1)
    if D.shape[-1] != 0:
        D = D[:,input]

I guess line 149 should be

149         type_test = A[:,0] + B[:,0] + C[0,:] + D[0]

)

But making this change exposes another problem:

/usr/lib/python2.5/site-packages/scipy/signal/ltisys.py in __init__(self, *args, **kwords)
    224         self.__dict__['D'] = abcd_normalize(*args)
    225         self.__dict__['zeros'], self.__dict__['poles'], \
--> 226         self.__dict__['gain'] = ss2zpk(*args)
    227         self.__dict__['num'], self.__dict__['den'] = ss2tf(*args)
    228         self.inputs = self.B.shape[-1]

/usr/lib/python2.5/site-packages/scipy/signal/ltisys.py in ss2zpk(A, B, C, D, input)
    183     """
    184     Pdb().set_trace()
--> 185     return tf2zpk(*ss2tf(A,B,C,D,input=input))
    186
    187 class lti(object):

/usr/lib/python2.5/site-packages/scipy/signal/filter_design.py in tf2zpk(b, a)
    128     k = b[0]
    129     b /= b[0]
--> 130     z = roots(b)
    131     p = roots(a)
    132     return z, p, k

/usr/lib/python2.5/site-packages/numpy/lib/polynomial.py in roots(p)
     98     p = atleast_1d(p)
     99     if len(p.shape) != 1:
--> 100         raise ValueError,"Input must be a rank-1 array."
    101
    102     # find non-zero array entries

: Input must be a rank-1 array.
WARNING: Failure executing file: 

In [2]: %debug
> /usr/lib/python2.5/site-packages/numpy/lib/polynomial.py(100)roots()
     99     if len(p.shape) != 1:
--> 100         raise ValueError,"Input must be a rank-1 array."
    101
ipdb> print p
[[  1.00000000e+00   1.00000000e+00   1.00000000e+00   1.00000000e+00   1.00000000e+00   1.00000000e+00]
 [  1.81898940e-11  -3.06222400e+00  -2.79196150e+02  -5.87518033e+03  -1.59843721e+04   1.91386789e-07]
 [  1.75910240e+01   1.60479206e+03   3.37698682e+04   9.12618857e+04  -1.23219975e+04  -3.29828641e+04]]

ss2tf seems to handle MIMO (or at least SIMO) systems correctly and returns one denominator polynomial with several (m) numerator polynomials. But tf2zpk in filter_design.py does not seem able to handle more than SISO systems (which makes sense, as it is expecting a transfer function, which is just a SISO system).

How should this be fixed? I understand why ss2tf converts a MIMO system to SIMO - trying to represent a multiple-input, multiple-output system with a transfer function has some limitations. I think the real offender is in the __init__ method of signal.lti:

        elif N == 4:
            ...
self.__dict__['zeros'], self.__dict__['poles'], \ self.__dict__['gain'] = ss2zpk(*args) For a MIMO system, what should the zpk representation be? For a SIMO system, I would expect a vector of poles and a matrix of zeros. This seems to be in line with what ss2tf does. It seems like the init method could find the pole by passing C[0,:] and D[0,:] to ss2zpk. The zeros and gains could then be found by calling ss2zpk in some vectorized way or simply in a for loop for each row of C and D. But there may well be a more elegant solution. Any thoughts? Ryan On Sun, Jul 13, 2008 at 2:59 PM, William Purcell wrote: > I am trying to set up state space representation of a system using > signal.lti. The system has no feedforward so D = 0. I've tried the three > options listed in the code below to represent D. > > The only way I can get it to work is option 2 if C has 1 row. If C has more > than 1 row it won't work. > > Any thoughts? > -Bill > > Code > --------------------------------------------------------------- > from numpy import ones,matrix > from scipy import signal > r = ones(3) > A = matrix([r,r,r]) > B = matrix([r]).T > C = matrix([r,r]) > > #three options of D to make it 0 > #1) D=0 > #2) D = matrix([0,0]).T > #3) D = None > > D = 0 > #D = None > #D = matrix([0,0]).T > > Gss = signal.lti(A,B,C,D) > ----------------------------------------------------------- > > Tracebacks > ----------------------------------------------------------- > Option 1 > /usr/lib/python2.5/site-packages/scipy/signal/ltisys.py in abcd_normalize(A, > B, C, D) > 101 raise ValueError, "A and C must have the same number of > columns." > 102 if MD != MC: > --> 103 raise ValueError, "C and D must have the same number of > rows." > 104 if ND != NB: > 105 raise ValueError, "B and D must have the same number of > columns." > > : C and D must have the same number of rows. > WARNING: Failure executing file: > > Option 2 (with C as two rows...if C is a single row I do not get this > traceback) > /usr/lib/python2.5/site-packages/scipy/signal/ltisys.py in ss2tf(A, B, C, D, > input) > 147 > 148 num_states = A.shape[0] > --> 149 type_test = A[:,0] + B[:,0] + C[0,:] + D > 150 num = numpy.zeros((nout, num_states+1), type_test.dtype) > 151 for k in range(nout): > > : shape mismatch: objects cannot be broadcast > to a single shape > WARNING: Failure executing file: > > Option 3 > same as 1 > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From emanuele at relativita.com Mon Jul 14 12:42:05 2008 From: emanuele at relativita.com (Emanuele Olivetti) Date: Mon, 14 Jul 2008 18:42:05 +0200 Subject: [SciPy-user] [OpenOpt] Segmentation fault Message-ID: <487B81DD.5000608@relativita.com> Dear all, Whenever I run example 'nlp_1.py' of OpenOpt (both tarball 0.18 and svn) on both my linux boxes I get "Segmentation fault"! Or better, I get it 9 times out of ten (due to random initialization without fixed seed, I guess). My configuration: box1: Ubuntu Gutsy, amd64 (Intel C2D) box2: Ubuntu Hardy, amd64 (Intel QuadCore) Python, Numpy, Scipy, Matplotlib etc. are standard versions from Ubuntu repositories. Can anyone confirm/reproduce this behavior? I've a report that in i386 debian it seems to work flawlessly. 
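(One way to make the failure reproducible while hunting this down -- assuming, as the parenthesis above suggests, that the example draws its start point from numpy.random -- is to pin the seed before running it:)

import numpy
numpy.random.seed(0)      # fix the random start point
execfile('nlp_1.py')      # Python 2.x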
Thanks, Emanuele From emanuele at relativita.com Mon Jul 14 13:06:47 2008 From: emanuele at relativita.com (Emanuele Olivetti) Date: Mon, 14 Jul 2008 19:06:47 +0200 Subject: [SciPy-user] [OpenOpt] Segmentation fault In-Reply-To: <487B81DD.5000608@relativita.com> References: <487B81DD.5000608@relativita.com> Message-ID: <487B87A7.8030400@relativita.com> Emanuele Olivetti wrote: > Dear all, > > Whenever I run example 'nlp_1.py' of OpenOpt (both tarball 0.18 > and svn) on both my linux boxes I get "Segmentation fault"! > Or better, I get it 9 times out of ten (due to random > initialization without fixed seed, I guess). > > My configuration: > box1: Ubuntu Gutsy, amd64 (Intel C2D) > box2: Ubuntu Hardy, amd64 (Intel QuadCore) > > Python, Numpy, Scipy, Matplotlib etc. are standard versions > from Ubuntu repositories. > > Can anyone confirm/reproduce this behavior? > > I've a report that in i386 debian it seems to work flawlessly. > > Here is the a full log on box1: ---- /usr/lib/python2.5/site-packages/scikits/openopt/examples# python nlp_1.py OpenOpt checks user-supplied gradient df (shape: (150,) ) according to prob.diffInt = [ 1.00000000e-07] lines with 1 - info_user/info_numerical greater than maxViolation = 0.01 will be shown max(abs(df_user - df_numerical)) = 1.72111294887e-05 (is registered in df number 67) ======================== OpenOpt checks user-supplied gradient dc (shape: (2, 150) ) according to prob.diffInt = [ 1.00000000e-07] lines with 1 - info_user/info_numerical greater than maxViolation = 0.01 will be shown max(abs(dc_user - dc_numerical)) = 8.42460444801e-05 (is registered in dc number 0) ======================== OpenOpt checks user-supplied gradient dh (shape: (2, 150) ) according to prob.diffInt = [ 1.00000000e-07] lines with 1 - info_user/info_numerical greater than maxViolation = 0.01 will be shown max(abs(dh_user - dh_numerical)) = 0.00043945014113 (is registered in dh number 149) ======================== ----------------------------------------------------- solver: ralg problem: unnamed goal: minimum iter objFunVal log10(maxResidual) 0 8.596e+03 3.91 Segmentation fault (core dumped) ---- The core dump is available here: http://sra.fbk.eu/people/olivetti/tmp/core.bz2 HTH, Emanuele From nwagner at iam.uni-stuttgart.de Mon Jul 14 13:21:45 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Mon, 14 Jul 2008 19:21:45 +0200 Subject: [SciPy-user] [OpenOpt] Segmentation fault In-Reply-To: <487B87A7.8030400@relativita.com> References: <487B81DD.5000608@relativita.com> <487B87A7.8030400@relativita.com> Message-ID: On Mon, 14 Jul 2008 19:06:47 +0200 Emanuele Olivetti wrote: > Emanuele Olivetti wrote: >> Dear all, >> >> Whenever I run example 'nlp_1.py' of OpenOpt (both >>tarball 0.18 >> and svn) on both my linux boxes I get "Segmentation >>fault"! >> Or better, I get it 9 times out of ten (due to random >> initialization without fixed seed, I guess). >> >> My configuration: >> box1: Ubuntu Gutsy, amd64 (Intel C2D) >> box2: Ubuntu Hardy, amd64 (Intel QuadCore) >> >> Python, Numpy, Scipy, Matplotlib etc. are standard >>versions >> from Ubuntu repositories. >> >> Can anyone confirm/reproduce this behavior? >> >> I've a report that in i386 debian it seems to work >>flawlessly. 
>>
>>
>
> Here is the a full log on box1:
> ----
> /usr/lib/python2.5/site-packages/scikits/openopt/examples# python nlp_1.py
> OpenOpt checks user-supplied gradient df (shape: (150,) )
> according to prob.diffInt = [ 1.00000000e-07]
> lines with 1 - info_user/info_numerical greater than maxViolation = 0.01 will be shown
> max(abs(df_user - df_numerical)) = 1.72111294887e-05
> (is registered in df number 67)
> ========================
> OpenOpt checks user-supplied gradient dc (shape: (2, 150) )
> according to prob.diffInt = [ 1.00000000e-07]
> lines with 1 - info_user/info_numerical greater than maxViolation = 0.01 will be shown
> max(abs(dc_user - dc_numerical)) = 8.42460444801e-05
> (is registered in dc number 0)
> ========================
> OpenOpt checks user-supplied gradient dh (shape: (2, 150) )
> according to prob.diffInt = [ 1.00000000e-07]
> lines with 1 - info_user/info_numerical greater than maxViolation = 0.01 will be shown
> max(abs(dh_user - dh_numerical)) = 0.00043945014113
> (is registered in dh number 149)
> ========================
> -----------------------------------------------------
> solver: ralg   problem: unnamed   goal: minimum
>  iter    objFunVal    log10(maxResidual)
>     0  8.596e+03            3.91
> Segmentation fault (core dumped)
> ----
>
> The core dump is available here:
> http://sra.fbk.eu/people/olivetti/tmp/core.bz2
>
> HTH,
> Emanuele
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

Hi Emanuele,

I am using r1129 of openopt and r5410 (numpy) on a 64 bit linux box (openSUSE 10.2). nlp_1.py works for me but nlp_3.py segfaults within the algencan part. IIRC this is a known issue.

solver: algencan   problem: nlp3   goal: maximum
 iter    objFunVal    log10(maxResidual)
    0  -1.640e+02            0.81

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 47292235252112 (LWP 5534)]
0x00002b0314b4af03 in free () from /lib64/libc.so.6
(gdb) bt
#0  0x00002b0314b4af03 in free () from /lib64/libc.so.6
#1  0x00002b032a717bd9 in memFree () from /home/nwagner/TANGO/ALGENCAN/pywrapper.so
#2  0x00002b032a719b1b in pywrapper_solver () from /home/nwagner/TANGO/ALGENCAN/pywrapper.so

Nils

From silva at lma.cnrs-mrs.fr  Mon Jul 14 14:12:21 2008
From: silva at lma.cnrs-mrs.fr (Fabrice Silva)
Date: Mon, 14 Jul 2008 20:12:21 +0200
Subject: [SciPy-user] odepack or f2py error
In-Reply-To: 
References: <1215695635.3278.7.camel@Portable-s2m.cnrs-mrs.fr> <1215862349.3033.3.camel@Portable-s2m.cnrs-mrs.fr>
Message-ID: <1216059141.3007.5.camel@Portable-s2m.cnrs-mrs.fr>

On Saturday 12 July 2008 at 12:22 +0000, Pauli Virtanen wrote:
> Sat, 12 Jul 2008 13:32:29 +0200, Fabrice Silva wrote:
> > How can I check that gfortran is used by f2py?
> Look at the output when scipy is compiled, or try to ldd scipy/integrate/vode.so and check if it's linked with libgfortran.

I am using scipy from debian packages, so I did not compile it.
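(For the f2py side, one quick check -- assuming a reasonably recent numpy/f2py -- is to ask it which Fortran compilers it detects:

$ f2py -c --help-fcompiler

The listing shows the compiler f2py would actually pick up, gfortran included if it is found.)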
The ldd command on either vode.so or my shared object from f2py gives the following output:

ldd /usr/lib/python2.5/site-packages/scipy/integrate/vode.so
        linux-gate.so.1 =>  (0xb7f25000)
        libblas.so.3gf => /usr/lib/atlas/libblas.so.3gf (0xb7b83000)
        libgfortran.so.3 => /usr/lib/libgfortran.so.3 (0xb7ad1000)
        libm.so.6 => /lib/i686/cmov/libm.so.6 (0xb7aab000)
        libgcc_s.so.1 => /lib/libgcc_s.so.1 (0xb7a9e000)
        libc.so.6 => /lib/i686/cmov/libc.so.6 (0xb7943000)
        /lib/ld-linux.so.2 (0xb7f26000)

> http://scipy.org/scipy/scipy/ticket/696

You may see that on this ticket, I've provided an example of a small computation where the solver reaches the mxstep number of iterations, but the error message is nicely displayed. The small example is also written in fortran and compiled with f2py in an identical manner to my study case...
--
Fabrice Silva
LMA UPR CNRS 7051 - Équipe S2M

From dmitrey.kroshko at scipy.org  Mon Jul 14 16:37:34 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Mon, 14 Jul 2008 23:37:34 +0300
Subject: [SciPy-user] [OpenOpt] Segmentation fault
In-Reply-To: <487B87A7.8030400@relativita.com>
References: <487B81DD.5000608@relativita.com> <487B87A7.8030400@relativita.com>
Message-ID: <487BB90E.9090200@scipy.org>

hi Emanuele,
I have no idea, and I have no experience of working with core dump files. I can only guess that it's related to numpy (since OO has no code other than Python). IIRC there were some troubles with the numpy version from the channels (IIRC release 1.0.5), although with different output; AFAIK both Nils and I have numpy from latest svn.
Regards, D.

Emanuele Olivetti wrote:
> Dear all,
>
> Whenever I run example 'nlp_1.py' of OpenOpt (both tarball 0.18 and svn) on both my linux boxes I get "Segmentation fault"! Or better, I get it 9 times out of ten (due to random initialization without fixed seed, I guess).
>
> My configuration:
> box1: Ubuntu Gutsy, amd64 (Intel C2D)
> box2: Ubuntu Hardy, amd64 (Intel QuadCore)
>
> Python, Numpy, Scipy, Matplotlib etc. are standard versions from Ubuntu repositories.
>
> Can anyone confirm/reproduce this behavior?
>
> I've a report that in i386 debian it seems to work flawlessly.
>> >> >> > > Here is the a full log on box1: > ---- > /usr/lib/python2.5/site-packages/scikits/openopt/examples# python nlp_1.py > OpenOpt checks user-supplied gradient df (shape: (150,) ) > according to prob.diffInt = [ 1.00000000e-07] > lines with 1 - info_user/info_numerical greater than maxViolation = 0.01 > will be shown > max(abs(df_user - df_numerical)) = 1.72111294887e-05 > (is registered in df number 67) > ======================== > OpenOpt checks user-supplied gradient dc (shape: (2, 150) ) > according to prob.diffInt = [ 1.00000000e-07] > lines with 1 - info_user/info_numerical greater than maxViolation = 0.01 > will be shown > max(abs(dc_user - dc_numerical)) = 8.42460444801e-05 > (is registered in dc number 0) > ======================== > OpenOpt checks user-supplied gradient dh (shape: (2, 150) ) > according to prob.diffInt = [ 1.00000000e-07] > lines with 1 - info_user/info_numerical greater than maxViolation = 0.01 > will be shown > max(abs(dh_user - dh_numerical)) = 0.00043945014113 > (is registered in dh number 149) > ======================== > ----------------------------------------------------- > solver: ralg problem: unnamed goal: minimum > iter objFunVal log10(maxResidual) > 0 8.596e+03 3.91 > Segmentation fault (core dumped) > ---- > > The core dump is available here: > http://sra.fbk.eu/people/olivetti/tmp/core.bz2 > > > HTH, > > Emanuele > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > From dmitrey.kroshko at scipy.org Mon Jul 14 16:39:12 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Mon, 14 Jul 2008 23:39:12 +0300 Subject: [SciPy-user] [OpenOpt] Segmentation fault In-Reply-To: References: <487B81DD.5000608@relativita.com> <487B87A7.8030400@relativita.com> Message-ID: <487BB970.9070209@scipy.org> Hi Nils, what algencan version do you have? I have published a post in my blog today that is related to algencan 2.0.3 and it works well for me. Regards, D. Nils Wagner wrote: > Hi Emanuele, > > I am using r1129 of openopt and r5410 (numpy) on a 64 bit > linux bos (open SuSe 10.2). > nlp_1.py works for me but nlp_3.py segfaults within the > algencan part. IIRC this is a known issue. > solver: algencan problem: nlp3 goal: maximum > iter objFunVal log10(maxResidual) > 0 -1.640e+02 0.81 > > Program received signal SIGSEGV, Segmentation fault. > [Switching to Thread 47292235252112 (LWP 5534)] > 0x00002b0314b4af03 in free () from /lib64/libc.so.6 > (gdb) bt > #0 0x00002b0314b4af03 in free () from /lib64/libc.so.6 > #1 0x00002b032a717bd9 in memFree () from > /home/nwagner/TANGO/ALGENCAN/pywrapper.so > #2 0x00002b032a719b1b in pywrapper_solver () from > /home/nwagner/TANGO/ALGENCAN/pywrapper.so > > Nils > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > From forrest at physics.Auburn.EDU Mon Jul 14 17:16:57 2008 From: forrest at physics.Auburn.EDU (Phil Forrest) Date: Mon, 14 Jul 2008 16:16:57 -0500 Subject: [SciPy-user] -lf77compat failure - SciPy on AMD64 Sun Studio Message-ID: <00d101c8e5f6$f2992f20$d7cb8d60$@auburn.edu> Hello, I'm a non-Python using system administrator attempting to install scipy on Sun Solaris 10 AMD64 architecture (IBM Blade Server). Sun Studio 12 is installed in default, vanilla mode (/opt/SUNWspro) I was able to install numpy with no apparent (no yet tested) problems. 
First things: In the file "INSTALL.txt" there is a line stating the following:

"See http://new.scipy.org/Wiki/Installing_SciPy for updates of this document."

When I go to the above URL, I get a misconfigured web page (500: Internal Server Error).

The crux of my problem is that the scipy build is complaining that it cannot find the F77 Compatibility library:

/opt/SUNWspro/bin/f90 -Bdynamic -G -Bdynamic -G build/temp.solaris-2.10-i86pc-2.4/build/src.solaris-2.10-i86pc-2.4/scipy/fftpack/_fftpackmodule.o build/temp.solaris-2.10-i86pc-2.4/scipy/fftpack/src/zfft.o build/temp.solaris-2.10-i86pc-2.4/scipy/fftpack/src/drfft.o build/temp.solaris-2.10-i86pc-2.4/scipy/fftpack/src/zrfft.o build/temp.solaris-2.10-i86pc-2.4/scipy/fftpack/src/zfftnd.o build/temp.solaris-2.10-i86pc-2.4/build/src.solaris-2.10-i86pc-2.4/fortranobject.o -Lbuild/temp.solaris-2.10-i86pc-2.4 -ldfftpack -lfsu -lsunmath -lmvec -lf77compat -o build/lib.solaris-2.10-i86pc-2.4/scipy/fftpack/_fftpack.so
ld: fatal: library -lf77compat: not found
ld: fatal: File processing errors. No output written to build/lib.solaris-2.10-i86pc-2.4/scipy/fftpack/_fftpack.so
ld: fatal: library -lf77compat: not found
ld: fatal: File processing errors. No output written to build/lib.solaris-2.10-i86pc-2.4/scipy/fftpack/_fftpack.so
error: Command "/opt/SUNWspro/bin/f90 -Bdynamic -G -Bdynamic -G build/temp.solaris-2.10-i86pc-2.4/build/src.solaris-2.10-i86pc-2.4/scipy/fftpack/_fftpackmodule.o build/temp.solaris-2.10-i86pc-2.4/scipy/fftpack/src/zfft.o build/temp.solaris-2.10-i86pc-2.4/scipy/fftpack/src/drfft.o build/temp.solaris-2.10-i86pc-2.4/scipy/fftpack/src/zrfft.o build/temp.solaris-2.10-i86pc-2.4/scipy/fftpack/src/zfftnd.o build/temp.solaris-2.10-i86pc-2.4/build/src.solaris-2.10-i86pc-2.4/fortranobject.o -Lbuild/temp.solaris-2.10-i86pc-2.4 -ldfftpack -lfsu -lsunmath -lmvec -lf77compat -o build/lib.solaris-2.10-i86pc-2.4/scipy/fftpack/_fftpack.so" failed with exit status 1
#

Here are some environment variables I have defined:

# env | grep BLAS
BLAS=/opt/SUNWspro/lib/libsunperf.so
# env | grep LAPACK
LAPACK=/opt/SUNWspro/lib/libsunmath.so

Here is my crle:

# crle

Configuration file [version 4]: /var/ld/ld.config
  Default Library Path (ELF):   /lib:/usr/lib:/opt/SUNWspro/lib:/usr/local/lib:/opt/sfw/lib
  Trusted Directories (ELF):    /lib/secure:/usr/lib/secure  (system default)

Command line:
  crle -c /var/ld/ld.config -l /lib:/usr/lib:/opt/SUNWspro/lib:/usr/local/lib:/opt/sfw/lib

# crle -64

Configuration file [version 4]: /var/ld/64/ld.config
  Default Library Path (ELF):   /lib/64:/usr/lib/64:/opt/SUNWspro/lib/amd64
  Trusted Directories (ELF):    /lib/secure/64:/usr/lib/secure/64  (system default)

Command line:
  crle -64 -c /var/ld/64/ld.config -l /lib/64:/usr/lib/64:/opt/SUNWspro/lib/amd64

Can anyone shed some light on how to resolve the "ld: fatal: library -lf77compat: not found" error? Is the scipy build attempting calls that force F77 mode/calls?

I would appreciate any documentation URL that might help clear this up.
Thanks,
Phil

From michael.abshoff at googlemail.com  Mon Jul 14 17:37:36 2008
From: michael.abshoff at googlemail.com (Michael Abshoff)
Date: Mon, 14 Jul 2008 14:37:36 -0700
Subject: [SciPy-user] -lf77compat failure - SciPy on AMD64 Sun Studio
In-Reply-To: <00d101c8e5f6$f2992f20$d7cb8d60$@auburn.edu>
References: <00d101c8e5f6$f2992f20$d7cb8d60$@auburn.edu>
Message-ID: <487BC720.8060904@gmail.com>

Phil Forrest wrote:
> Hello,

Hi Phil,

> I'm a non-Python using system administrator attempting to install scipy on Sun Solaris 10 AMD64 architecture (IBM Blade Server).
>
> Sun Studio 12 is installed in default, vanilla mode (/opt/SUNWspro)
>
> I was able to install numpy with no apparent (no yet tested) problems.
>
> First things: In the file "INSTALL.txt" there is a line stating the following:
>
> "See http://new.scipy.org/Wiki/Installing_SciPy for updates of this document."
>
> When I go to the above URL, I get a misconfigured web page (500: Internal Server Error).
>
> The crux of my problem is that the scipy build is complaining that it cannot find the F77 Compatibility library:

This is coming from numpy:

-bash-3.00$ /usr/sfw/bin/ggrep -r f77compat *
site-packages/numpy/distutils/fcompiler/sun.py:            opt.extend(['fsu','sunmath','mvec','f77compat'])

You might want to throw it out of there. Depending on which symbols are missing at link or module import time you might need to add some libs there. I guess by the time I have built Python, numpy and scipy on that box using the Sun compiler you will have fixed the problem :)

If you have problems I can poke around in case it is not fixed in numpy 1.1.x.

> Here are some environment variables I have defined:
> # env | grep BLAS
> BLAS=/opt/SUNWspro/lib/libsunperf.so
> # env | grep LAPACK
> LAPACK=/opt/SUNWspro/lib/libsunmath.so
>
> Here is my crle:
>
> # crle
>
> Configuration file [version 4]: /var/ld/ld.config
>   Default Library Path (ELF):   /lib:/usr/lib:/opt/SUNWspro/lib:/usr/local/lib:/opt/sfw/lib
>   Trusted Directories (ELF):    /lib/secure:/usr/lib/secure  (system default)
>
> Command line:
>   crle -c /var/ld/ld.config -l /lib:/usr/lib:/opt/SUNWspro/lib:/usr/local/lib:/opt/sfw/lib
>
> # crle -64
>
> Configuration file [version 4]: /var/ld/64/ld.config
>   Default Library Path (ELF):   /lib/64:/usr/lib/64:/opt/SUNWspro/lib/amd64
>   Trusted Directories (ELF):    /lib/secure/64:/usr/lib/secure/64  (system default)
>
> Command line:
>   crle -64 -c /var/ld/64/ld.config -l /lib/64:/usr/lib/64:/opt/SUNWspro/lib/amd64
>
> Can anyone shed some light on how to resolve the "ld: fatal: library -lf77compat: not found" error? Is the scipy build attempting calls that force F77 mode/calls?

Sun Studio 12 does not ship a libf77compat.[so|a]. I would guess this is a bug in numpy and it should probably be fixed there.

> I would appreciate any documentation URL that might help clear this up.
> > Thanks, > Phil > Cheers, Michael > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From robert.kern at gmail.com Mon Jul 14 17:41:47 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 14 Jul 2008 16:41:47 -0500 Subject: [SciPy-user] -lf77compat failure - SciPy on AMD64 Sun Studio In-Reply-To: <00d101c8e5f6$f2992f20$d7cb8d60$@auburn.edu> References: <00d101c8e5f6$f2992f20$d7cb8d60$@auburn.edu> Message-ID: <3d375d730807141441x4c7a1044wca49fafe653578ca@mail.gmail.com> On Mon, Jul 14, 2008 at 16:16, Phil Forrest wrote: > Hello, > > I'm a non-Python using system administrator attempting to install scipy on > Sun Solaris 10 AMD64 architecture (IBM Blade Server). > > Sun Studio 12 is installed in default, vanilla mode (/opt/SUNWspro) > > I was able to install numpy with no apparent (no yet tested) problems. > > First things: In the file "INSTALL.txt" there is a line stating the > following: > > "See http://new.scipy.org/Wiki/Installing_SciPy for updates of this > document." > > When I go to the above URL, I get a misconfigured web page (500: Internal > Server Error). Sorry about that. The new URL is http://www.scipy.org/Installing_SciPy . The document has been fixed. > The crux of my problem is that the scipy build is complaining that it cannot > find the F77 Compatibility library: > > /opt/SUNWspro/bin/f90 -Bdynamic -G -Bdynamic -G > build/temp.solaris-2.10-i86pc-2.4/build/src.solaris-2.10-i86pc-2.4/scipy/fft > pack/_fftpackmodule.o > build/temp.solaris-2.10-i86pc-2.4/scipy/fftpack/src/zfft.o > build/temp.solaris-2.10-i86pc-2.4/scipy/fftpack/src/drfft.o > build/temp.solaris-2.10-i86pc-2.4/scipy/fftpack/src/zrfft.o > build/temp.solaris-2.10-i86pc-2.4/scipy/fftpack/src/zfftnd.o > build/temp.solaris-2.10-i86pc-2.4/build/src.solaris-2.10-i86pc-2.4/fortranob > ject.o -Lbuild/temp.solaris-2.10-i86pc-2.4 -ldfftpack -lfsu -lsunmath -lmvec > -lf77compat -o build/lib.solaris-2.10-i86pc-2.4/scipy/fftpack/_fftpack.so > ld: fatal: library -lf77compat: not found > ld: fatal: File processing errors. No output written to > build/lib.solaris-2.10-i86pc-2.4/scipy/fftpack/_fftpack.so > ld: fatal: library -lf77compat: not found > ld: fatal: File processing errors. 
No output written to > build/lib.solaris-2.10-i86pc-2.4/scipy/fftpack/_fftpack.so > error: Command "/opt/SUNWspro/bin/f90 -Bdynamic -G -Bdynamic -G > build/temp.solaris-2.10-i86pc-2.4/build/src.solaris-2.10-i86pc-2.4/scipy/fft > pack/_fftpackmodule.o > build/temp.solaris-2.10-i86pc-2.4/scipy/fftpack/src/zfft.o > build/temp.solaris-2.10-i86pc-2.4/scipy/fftpack/src/drfft.o > build/temp.solaris-2.10-i86pc-2.4/scipy/fftpack/src/zrfft.o > build/temp.solaris-2.10-i86pc-2.4/scipy/fftpack/src/zfftnd.o > build/temp.solaris-2.10-i86pc-2.4/build/src.solaris-2.10-i86pc-2.4/fortranob > ject.o -Lbuild/temp.solaris-2.10-i86pc-2.4 -ldfftpack -lfsu -lsunmath -lmvec > -lf77compat -o build/lib.solaris-2.10-i86pc-2.4/scipy/fftpack/_fftpack.so" > failed with exit status 1 > # > > Here are some environment variables I have defined: > # env | grep BLAS > BLAS=/opt/SUNWspro/lib/libsunperf.so > # env | grep LAPACK > LAPACK=/opt/SUNWspro/lib/libsunmath.so > > Here is my crle: > > # crle > > Configuration file [version 4]: /var/ld/ld.config > Default Library Path (ELF): > /lib:/usr/lib:/opt/SUNWspro/lib:/usr/local/lib:/opt/sfw/lib > Trusted Directories (ELF): /lib/secure:/usr/lib/secure (system > default) > > Command line: > crle -c /var/ld/ld.config -l > /lib:/usr/lib:/opt/SUNWspro/lib:/usr/local/lib:/opt/sfw/lib > > # crle -64 > > Configuration file [version 4]: /var/ld/64/ld.config > Default Library Path (ELF): /lib/64:/usr/lib/64:/opt/SUNWspro/lib/amd64 > Trusted Directories (ELF): /lib/secure/64:/usr/lib/secure/64 (system > default) > > Command line: > crle -64 -c /var/ld/64/ld.config -l > /lib/64:/usr/lib/64:/opt/SUNWspro/lib/amd64 > > > Can anyone shed some light on how to resolve the "ld: fatal: library > -lf77compat: not found" error? Is the scipy build attempting calls that > force F77 mode/calls? All of our FORTRAN code is FORTRAN-77. > I would appreciate any documentation URL that might help clear this up. I'm afraid that we don't have much in the way of Sun-specific information, nor am I much familiar with the lay of the land, here. Is the FORTRAN-77 support an optional install? Or is your version of the compiler supposed to work without any extra libraries like f77compat any more? If this is the case and you can figure out which version no longer requires f77compat, we can put the appropriate check into numpy. Please provide the output of "f90 -V". Thanks. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From michael.abshoff at googlemail.com Mon Jul 14 18:08:56 2008 From: michael.abshoff at googlemail.com (Michael Abshoff) Date: Mon, 14 Jul 2008 15:08:56 -0700 Subject: [SciPy-user] -lf77compat failure - SciPy on AMD64 Sun Studio In-Reply-To: <3d375d730807141441x4c7a1044wca49fafe653578ca@mail.gmail.com> References: <00d101c8e5f6$f2992f20$d7cb8d60$@auburn.edu> <3d375d730807141441x4c7a1044wca49fafe653578ca@mail.gmail.com> Message-ID: <487BCE78.3040403@gmail.com> Robert Kern wrote: > On Mon, Jul 14, 2008 at 16:16, Phil Forrest wrote: >> Can anyone shed some light on how to resolve the "ld: fatal: library >> -lf77compat: not found" error? Is the scipy build attempting calls that >> force F77 mode/calls? > > All of our FORTRAN code is FORTRAN-77. > >> I would appreciate any documentation URL that might help clear this up. 
> I'm afraid that we don't have much in the way of Sun-specific information, nor am I much familiar with the lay of the land, here. Is the FORTRAN-77 support an optional install? Or is your version of the compiler supposed to work without any extra libraries like f77compat any more? If this is the case and you can figure out which version no longer requires f77compat, we can put the appropriate check into numpy.

I am not sure when it changed. When I look at

http://projects.scipy.org/scipy/numpy/ticket/671

it seems that up to and including Release 6 of the Sun compiler it was still valid.

> Please provide the output of "f90 -V".

This is SunStudio12-200709:

-bash-3.00$ ./f77 -V
NOTICE: Invoking ./f90 -f77 -ftrap=%none -V
f90: Sun Fortran 95 8.3 SunOS_i386 Patch 127002-01 2007/07/18
Usage: f90 [ options ] files.  Use 'f90 -flags' for details
-bash-3.00$ ./f90 -V
f90: Sun Fortran 95 8.3 SunOS_i386 Patch 127002-01 2007/07/18
Usage: f90 [ options ] files.  Use 'f90 -flags' for details
-bash-3.00$ ./f95 -V
f95: Sun Fortran 95 8.3 SunOS_i386 Patch 127002-01 2007/07/18
Usage: f95 [ options ] files.
Use 'f95 -flags' for details > > I currently do not have access to older Sun Studio releases personally, > but I could ask some people if they could check. This is unlikely to > work, but I will try. Manuals sometimes give a little history. If it got dropped in version 8.0, it's probably documented in there, at least as a bit of advertising. Are the manuals publicly accessible online anywhere? I'm happy to do some searching myself. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From michael.abshoff at googlemail.com Mon Jul 14 19:09:33 2008 From: michael.abshoff at googlemail.com (Michael Abshoff) Date: Mon, 14 Jul 2008 16:09:33 -0700 Subject: [SciPy-user] -lf77compat failure - SciPy on AMD64 Sun Studio In-Reply-To: <3d375d730807141542i1086c63fx9a43e0bad275fb7c@mail.gmail.com> References: <00d101c8e5f6$f2992f20$d7cb8d60$@auburn.edu> <3d375d730807141441x4c7a1044wca49fafe653578ca@mail.gmail.com> <487BCE78.3040403@gmail.com> <3d375d730807141542i1086c63fx9a43e0bad275fb7c@mail.gmail.com> Message-ID: <487BDCAD.4090907@gmail.com> Robert Kern wrote: >>> Please provide the output of "f90 -V". >> This is SunStudio12-200709: >> >> -bash-3.00$ ./f77 -V >> NOTICE: Invoking ./f90 -f77 -ftrap=%none -V >> f90: Sun Fortran 95 8.3 SunOS_i386 Patch 127002-01 2007/07/18 >> Usage: f90 [ options ] files. Use 'f90 -flags' for details >> -bash-3.00$ ./f90 -V >> f90: Sun Fortran 95 8.3 SunOS_i386 Patch 127002-01 2007/07/18 >> Usage: f90 [ options ] files. Use 'f90 -flags' for details >> -bash-3.00$ ./f95 -V >> f95: Sun Fortran 95 8.3 SunOS_i386 Patch 127002-01 2007/07/18 >> Usage: f95 [ options ] files. Use 'f95 -flags' for details >> >> I currently do not have access to older Sun Studio releases personally, >> but I could ask some people if they could check. This is unlikely to >> work, but I will try. > > Manuals sometimes give a little history. If it got dropped in version > 8.0, it's probably documented in there, at least as a bit of > advertising. Are the manuals publicly accessible online anywhere? I'm > happy to do some searching myself. > Hi Robert, caught a lucky break and found a box with Sun Studio 11: It ships a libf77compat.so. To identify: mabshoff at cerberus: /usr/local/SUNWspro/bin>./f77 -V NOTICE: Invoking ./f90 -f77 -ftrap=%none -V f90: Sun Fortran 95 8.2 Patch 121019-04 2006/10/26 Usage: f90 [ options ] files. Use 'f90 -flags' for details mabshoff at cerberus: /usr/local/SUNWspro/bin>./f90 -V f90: Sun Fortran 95 8.2 Patch 121019-04 2006/10/26 Usage: f90 [ options ] files. Use 'f90 -flags' for details mabshoff at cerberus: /usr/local/SUNWspro/bin>./f95 -V f95: Sun Fortran 95 8.2 Patch 121019-04 2006/10/26 Usage: f95 [ options ] files. Use 'f95 -flags' for details Hope this helps. It seems that Phil is not poking around. I have some odd numpy 1.0.4cvs failure with gcc in 32 bit mode on Solaris 10 Intel when computing roots of polynomials. Since Sage will finally upgrade to numpy 1.1 soon I will try to see if I can reproduce the problem there and for fun I can also build numpy 1.1 + scipy in 64 bit mode with Sun Forte. So if there are any patches let me know and I can try them out. 
Otherwise I will likely do some crude patching to get the flags right and let you guys work out the fine details :)

Cheers,

Michael

From robert.kern at gmail.com  Mon Jul 14 19:35:24 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 14 Jul 2008 18:35:24 -0500
Subject: [SciPy-user] -lf77compat failure - SciPy on AMD64 Sun Studio
In-Reply-To: <487BDCAD.4090907@gmail.com>
References: <00d101c8e5f6$f2992f20$d7cb8d60$@auburn.edu> <3d375d730807141441x4c7a1044wca49fafe653578ca@mail.gmail.com> <487BCE78.3040403@gmail.com> <3d375d730807141542i1086c63fx9a43e0bad275fb7c@mail.gmail.com> <487BDCAD.4090907@gmail.com>
Message-ID: <3d375d730807141635l667ecc94j4cea55c0c4251909@mail.gmail.com>

On Mon, Jul 14, 2008 at 18:09, Michael Abshoff wrote:
> Robert Kern wrote:
>>>> Please provide the output of "f90 -V".
>>> This is SunStudio12-200709:
>>>
>>> -bash-3.00$ ./f77 -V
>>> NOTICE: Invoking ./f90 -f77 -ftrap=%none -V
>>> f90: Sun Fortran 95 8.3 SunOS_i386 Patch 127002-01 2007/07/18
>>> Usage: f90 [ options ] files.  Use 'f90 -flags' for details
>>> -bash-3.00$ ./f90 -V
>>> f90: Sun Fortran 95 8.3 SunOS_i386 Patch 127002-01 2007/07/18
>>> Usage: f90 [ options ] files.  Use 'f90 -flags' for details
>>> -bash-3.00$ ./f95 -V
>>> f95: Sun Fortran 95 8.3 SunOS_i386 Patch 127002-01 2007/07/18
>>> Usage: f95 [ options ] files.  Use 'f95 -flags' for details
>>>
>>> I currently do not have access to older Sun Studio releases personally, but I could ask some people if they could check. This is unlikely to work, but I will try.
>>
>> Manuals sometimes give a little history. If it got dropped in version 8.0, it's probably documented in there, at least as a bit of advertising. Are the manuals publicly accessible online anywhere? I'm happy to do some searching myself.
>
> Hi Robert,
>
> caught a lucky break and found a box with Sun Studio 11: It ships a libf77compat.so.
>
> To identify:
>
> mabshoff at cerberus: /usr/local/SUNWspro/bin>./f77 -V
> NOTICE: Invoking ./f90 -f77 -ftrap=%none -V
> f90: Sun Fortran 95 8.2 Patch 121019-04 2006/10/26
> Usage: f90 [ options ] files.  Use 'f90 -flags' for details
> mabshoff at cerberus: /usr/local/SUNWspro/bin>./f90 -V
> f90: Sun Fortran 95 8.2 Patch 121019-04 2006/10/26
> Usage: f90 [ options ] files.  Use 'f90 -flags' for details
> mabshoff at cerberus: /usr/local/SUNWspro/bin>./f95 -V
> f95: Sun Fortran 95 8.2 Patch 121019-04 2006/10/26
> Usage: f95 [ options ] files.  Use 'f95 -flags' for details
>
> Hope this helps.

Yup. We can do a version check. For >= 8.3, we don't put in f77compat. Can you and Phil check that if you modify that line of code in sun.py to omit f77compat on your Sun Fortran 95 8.3 boxes, that it works? (Ugh, that sentence got away from me, but too late now!)

> It seems that Phil is not poking around. I have some odd numpy 1.0.4cvs failure with gcc in 32 bit mode on Solaris 10 Intel when computing roots of polynomials. Since Sage will finally upgrade to numpy 1.1 soon I will try to see if I can reproduce the problem there and for fun I can also build numpy 1.1 + scipy in 64 bit mode with Sun Forte. So if there are any patches let me know and I can try them out.

I'm not sure if any are relevant, but we have a branch for the upcoming 1.1.1 bugfix release:

http://svn.scipy.org/svn/numpy/branches/1.1.x/

> Otherwise I will likely do some crude patching to get the flags right and let you guys work out the fine details :)

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From michael.abshoff at googlemail.com  Mon Jul 14 22:35:57 2008
From: michael.abshoff at googlemail.com (Michael Abshoff)
Date: Mon, 14 Jul 2008 19:35:57 -0700
Subject: [SciPy-user] -lf77compat failure - SciPy on AMD64 Sun Studio
In-Reply-To: <3d375d730807141635l667ecc94j4cea55c0c4251909@mail.gmail.com>
References: <00d101c8e5f6$f2992f20$d7cb8d60$@auburn.edu> <3d375d730807141441x4c7a1044wca49fafe653578ca@mail.gmail.com> <487BCE78.3040403@gmail.com> <3d375d730807141542i1086c63fx9a43e0bad275fb7c@mail.gmail.com> <487BDCAD.4090907@gmail.com> <3d375d730807141635l667ecc94j4cea55c0c4251909@mail.gmail.com>
Message-ID: <487C0D0D.5040904@gmail.com>

Robert Kern wrote:

Hi Robert,

> Yup. We can do a version check. For >= 8.3, we don't put in f77compat. Can you and Phil check that if you modify that line of code in sun.py to omit f77compat on your Sun Fortran 95 8.3 boxes, that it works? (Ugh, that sentence got away from me, but too late now!)

:) - I am working on it, but it will be tomorrow until I am done.

>> It seems that Phil is not poking around. I have some odd numpy 1.0.4cvs failure with gcc in 32 bit mode on Solaris 10 Intel when computing roots of polynomials. Since Sage will finally upgrade to numpy 1.1 soon I will try to see if I can reproduce the problem there and for fun I can also build numpy 1.1 + scipy in 64 bit mode with Sun Forte. So if there are any patches let me know and I can try them out.
>
> I'm not sure if any are relevant, but we have a branch for the upcoming 1.1.1 bugfix release:
>
> http://svn.scipy.org/svn/numpy/branches/1.1.x/

Yeah, I am aware of that, but I just built Python 2.5.2 and numpy 1.1.0 and, except for the broken ctypes, all tests pass on x86-64 Solaris with Sun Studio 12 in 64 bit mode. Unfortunately I do not have the SunPerf lib installed for x86-64, so I will have to do the whole thing on Sparc again before attempting to come up with a set of working flags. I have an ATLAS+Lapack build on that box, but I used gfortran, so it won't be any good in debugging this problem.

>> Otherwise I will likely do some crude patching to get the flags right and let you guys work out the fine details :)

One more thing: Since Phil wants to build a 64 bit version he ought to set

export BLAS=/usr/local/SunStudio12-200709/x86_64-SunOS/SUNWspro/lib/amd64/libsunmath.so

Notice .../lib/amd64/... since lib contains the 32 bit version of the library and lib/amd64 the 64 bit version.
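A sketch of the check Robert describes, against the opt.extend line found earlier in numpy/distutils/fcompiler/sun.py (a hypothetical patch, not the committed fix; it assumes FCompiler.get_version() returns a version string like '8.2' or '8.3'):

    def get_libraries(self):
        # sketch only: drop f77compat for Sun Fortran 95 >= 8.3
        opt = []
        opt.extend(['fsu', 'sunmath', 'mvec'])
        v = self.get_version()
        # Sun Fortran 95 8.3 (Sun Studio 12) no longer ships libf77compat,
        # so only link it for 8.2 and older.
        if v and v < '8.3':   # plain string compare is fine for '8.x'
            opt.append('f77compat')
        return opt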
I'm not sure if any are relevant, but we have a branch for the upcoming 1.1.1 bugfix release: http://svn.scipy.org/svn/numpy/branches/1.1.x/ > Otherwise I will likely > do some crude patching to get the flags right and let you guys work out > the fines details :) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From michael.abshoff at googlemail.com Mon Jul 14 22:35:57 2008 From: michael.abshoff at googlemail.com (Michael Abshoff) Date: Mon, 14 Jul 2008 19:35:57 -0700 Subject: [SciPy-user] -lf77compat failure - SciPy on AMD64 Sun Studio In-Reply-To: <3d375d730807141635l667ecc94j4cea55c0c4251909@mail.gmail.com> References: <00d101c8e5f6$f2992f20$d7cb8d60$@auburn.edu> <3d375d730807141441x4c7a1044wca49fafe653578ca@mail.gmail.com> <487BCE78.3040403@gmail.com> <3d375d730807141542i1086c63fx9a43e0bad275fb7c@mail.gmail.com> <487BDCAD.4090907@gmail.com> <3d375d730807141635l667ecc94j4cea55c0c4251909@mail.gmail.com> Message-ID: <487C0D0D.5040904@gmail.com> Robert Kern wrote: Hi Robert, > Yup. We can do a version check. For >= 8.3, we don't put in f77compat. > Can you and Phil check that if you modify that line of code in sun.py > to omit f77compat on your Sun Fortran 95 8.3 boxes, that it works? > (Ugh, that sentence got away from me, but too late now!) :) - I am working on it, but it will be tomorrow until I am done. >> It seems that Phil is not poking around. I have some odd numpy 1.0.4cvs >> failure with gcc in 32 bit mode on Solaris 10 Intel when computing roots >> of polynomials. Since Sage will finally upgrade to numpy 1.1 soon I will >> try to see if I can reproduce the problem there and for fun I can also >> build numpy 1.1 + scipy in 64 bit mode with Sun Forte. So if there are >> any patches let me know and I can try them out. > > I'm not sure if any are relevant, but we have a branch for the > upcoming 1.1.1 bugfix release: > > http://svn.scipy.org/svn/numpy/branches/1.1.x/ Yeah, I am aware of that, but I just build Python 2.5.2 and numpy 1.1.0 and expect for the borken ctypes all tests pass on x86-64 Solaris with Sun Studio 12 in 64 bit mode. Unfortunately I do not have the SunPerf lib installed for x86-64, so I will have to do the whole thing on Sparc again before attempting to come up with a set of working flags. I have an ATLAS+Lapack build on that box, but I used gfortran, so it won't be any good in debugging this problem. >> Otherwise I will likely >> do some crude patching to get the flags right and let you guys work out >> the fines details :) > One more thing: Sin Phil wants to build a 64 bit version he ought to set export BLAS=/usr/local/SunStudio12-200709/x86_64-SunOS/SUNWspro/lib/amd64/libsunmath.so Notice .../lib/amd64/... since lib contains the 32 bit version of the library and lib/amd64 the 64 bit version. 
And for the SunPerf lib itself: There are a bunch of them on my Sparc box: -bash-3.00$ find /usr/local/SunStudio12-200709/ -name "lib*perf*" /usr/local/SunStudio12-200709/sparc-SunOS/SUNWspro/bin/libsunperf_check /usr/local/SunStudio12-200709/sparc-SunOS/SUNWspro/prod/bin/libsunperf_check /usr/local/SunStudio12-200709/sparc-SunOS/SUNWspro/prod/lib/v9/libsuniperf.a /usr/local/SunStudio12-200709/sparc-SunOS/SUNWspro/prod/lib/libsuniperf.a /usr/local/SunStudio12-200709/sparc-SunOS/SUNWspro/prod/lib/sparcfmaf/libsuniperf.a /usr/local/SunStudio12-200709/sparc-SunOS/SUNWspro/prod/lib/v7/libsuniperf.a /usr/local/SunStudio12-200709/sparc-SunOS/SUNWspro/prod/lib/v8/libsuniperf.a /usr/local/SunStudio12-200709/sparc-SunOS/SUNWspro/prod/lib/v8a/libsuniperf.a /usr/local/SunStudio12-200709/sparc-SunOS/SUNWspro/prod/lib/v8plus/libsuniperf.a /usr/local/SunStudio12-200709/sparc-SunOS/SUNWspro/prod/lib/v8plusa/libsuniperf.a /usr/local/SunStudio12-200709/sparc-SunOS/SUNWspro/prod/lib/v8plusb/libsuniperf.a /usr/local/SunStudio12-200709/sparc-SunOS/SUNWspro/prod/lib/v9a/libsuniperf.a /usr/local/SunStudio12-200709/sparc-SunOS/SUNWspro/prod/lib/v9b/libsuniperf.a And the same applies here: the v9 is 64 bit while v8 and v7 are 32 bit. Hope this helps. If anybody cares I can send my notes on the build (including Python) to the list so that someone interested can stuff them in the wiki. There seems to be no mention of Solaris on http://www.scipy.org/Installing_SciPy http://www.scipy.org/Installing_SciPy/BuildingGeneral http://www.scipy.org/FAQ I also send some build info on 64 bit OSX out a while ago and somebody mentioned that those might also be well placed in the wiki. Cheers, Michael From robert.kern at gmail.com Mon Jul 14 22:55:47 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 14 Jul 2008 21:55:47 -0500 Subject: [SciPy-user] -lf77compat failure - SciPy on AMD64 Sun Studio In-Reply-To: <487C0D0D.5040904@gmail.com> References: <00d101c8e5f6$f2992f20$d7cb8d60$@auburn.edu> <3d375d730807141441x4c7a1044wca49fafe653578ca@mail.gmail.com> <487BCE78.3040403@gmail.com> <3d375d730807141542i1086c63fx9a43e0bad275fb7c@mail.gmail.com> <487BDCAD.4090907@gmail.com> <3d375d730807141635l667ecc94j4cea55c0c4251909@mail.gmail.com> <487C0D0D.5040904@gmail.com> Message-ID: <3d375d730807141955l641a1866s24e2103c1de134d@mail.gmail.com> On Mon, Jul 14, 2008 at 21:35, Michael Abshoff wrote: > Hope this helps. If anybody cares I can send my notes on the build > (including Python) to the list so that someone interested can stuff them > in the wiki. Yes, please! However, I don't think anyone who hasn't built numpyor scipy on Solaris will have the motivation (or background, probably) to pretty them up for the wiki, so they'll probably get lost if you don't do it yourself. But toss 'em over anyways. Actually, if you have notes on building anything in the Sage stack on any platform, I'm personally interested in seeing them. > There seems to be no mention of Solaris on > > http://www.scipy.org/Installing_SciPy > http://www.scipy.org/Installing_SciPy/BuildingGeneral > http://www.scipy.org/FAQ > > I also send some build info on 64 bit OSX out a while ago and somebody > mentioned that those might also be well placed in the wiki. To this list? I'll have to go back through the archives. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From michael.abshoff at googlemail.com Mon Jul 14 23:21:53 2008 From: michael.abshoff at googlemail.com (Michael Abshoff) Date: Mon, 14 Jul 2008 20:21:53 -0700 Subject: [SciPy-user] -lf77compat failure - SciPy on AMD64 Sun Studio In-Reply-To: <3d375d730807141955l641a1866s24e2103c1de134d@mail.gmail.com> References: <00d101c8e5f6$f2992f20$d7cb8d60$@auburn.edu> <3d375d730807141441x4c7a1044wca49fafe653578ca@mail.gmail.com> <487BCE78.3040403@gmail.com> <3d375d730807141542i1086c63fx9a43e0bad275fb7c@mail.gmail.com> <487BDCAD.4090907@gmail.com> <3d375d730807141635l667ecc94j4cea55c0c4251909@mail.gmail.com> <487C0D0D.5040904@gmail.com> <3d375d730807141955l641a1866s24e2103c1de134d@mail.gmail.com> Message-ID: <487C17D1.2080704@gmail.com> Robert Kern wrote: > On Mon, Jul 14, 2008 at 21:35, Michael Abshoff > wrote: Hi Robert, >> Hope this helps. If anybody cares I can send my notes on the build >> (including Python) to the list so that someone interested can stuff them >> in the wiki. > > Yes, please! However, I don't think anyone who hasn't built numpy or > scipy on Solaris will have the motivation (or background, probably) to > pretty them up for the wiki, so they'll probably get lost if you don't > do it yourself. But toss 'em over anyways. Actually, if you have notes > on building anything in the Sage stack on any platform, I'm personally > interested in seeing them. Ok, besides the 64 bit OSX there is little that is working and of interest to you guys. The main issue is still getting the ctypes extension to build at all. For various reasons, what is in-tree in Python is crap and completely lacks 64 bit Darwin support, and I have not had time to come up with a clean patch for ffcall (or is it fficall?). For 64 bit OSX Scipy there was one failure with one extension that was compiled in 32 bit mode even on 64 bit. I would assume someone is cleverly clearing the CFLAGS or CXXFLAGS and then shooting me in the foot, but maybe I myself was dumb and overlooked something. Solaris in 32 and 64 bit mode is mostly done with regard to Python, numpy and scipy. I promised Stefan to run a build bot on Solaris and I have started to put together eight builds, i.e. a) Sparc, Sparc 64, x86 and x86-64, each with b) gcc with gfortran or cc with the SunPerf lib. So those are eight build bots in total. But progress there has also been hampered by me working on other Sage issues. What will hopefully happen next month is a 64 bit ATLAS+Netlib.org Lapack+numpy+scipy on Vista 64bit, but we will see how that goes. If I have trouble with ATLAS I will probably use the Intel MKL for now to get up and running on the 64 bit native Sage on Windows port. >> There seems to be no mention of Solaris on >> >> http://www.scipy.org/Installing_SciPy >> http://www.scipy.org/Installing_SciPy/BuildingGeneral >> http://www.scipy.org/FAQ >> >> I also sent some build info on 64 bit OSX out a while ago and somebody >> mentioned that those might also be well placed in the wiki. > > To this list? I'll have to go back through the archives. > It was sent to the numpy list IIRC. If they don't turn up in the archives I can send you a copy off list. Cheers, Michael From jh at physics.ucf.edu Tue Jul 15 00:16:18 2008 From: jh at physics.ucf.edu (Joe Harrington) Date: Tue, 15 Jul 2008 00:16:18 -0400 Subject: [SciPy-user] Automatic MATLAB to scipy/numpy/pylab conversion In-Reply-To: (scipy-user-request@scipy.org) References: Message-ID: Automatic conversion of at least the boring stuff is crucial.
No matter how educational rewriting your code may be, if you have 100,000 lines of tools and utilities to convert, you generally can't. Our IDL-based group has made good use of i2py, which may at least provide some educational examples for converting MATLAB, if not a usable framework. The general approach is to convert what is easy, and at least flag what is difficult. It doesn't do a mapping of function names to function names or anything like that. Here's the URL, hope it helps. http://software.pseudogreen.org/i2py/ If you do a general parser approach, it would be useful to do it in such a way that the front end can be ripped out and modified or replaced with one for another language. I've added a section on the scipy.org wiki under Topical Software for conversion programs. When yours is ready, please put in a link there. --jh-- From nwagner at iam.uni-stuttgart.de Tue Jul 15 02:17:21 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 15 Jul 2008 08:17:21 +0200 Subject: [SciPy-user] [OpenOpt] Segmentation fault In-Reply-To: <487BB970.9070209@scipy.org> References: <487B81DD.5000608@relativita.com> <487B87A7.8030400@relativita.com> <487BB970.9070209@scipy.org> Message-ID: On Mon, 14 Jul 2008 23:39:12 +0300 dmitrey wrote: > Hi Nils, > what algencan version do you have? > I have published a post in my blog today that is related > to algencan 2.0.3 and it works well for me. > > Regards, D. > Hi Dmitrey, I am using the old algencan < 2.0. I will upgrade asap. Cheers, Nils From dmitrey.kroshko at scipy.org Tue Jul 15 02:44:42 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Tue, 15 Jul 2008 09:44:42 +0300 Subject: [SciPy-user] [OpenOpt] Segmentation fault In-Reply-To: References: <487B81DD.5000608@relativita.com> <487B87A7.8030400@relativita.com> <487BB970.9070209@scipy.org> Message-ID: <487C475A.2050701@scipy.org> NB you should call lower-case algencan for v >= 2.0 and upper-case ALGENCAN for v 1.0. Ensure you have a single pywrapper.so file in PYTHONPATH (i.e. remove the old pywrapper.so). It's easy to do via >>> import pywrapper >>> pywrapper.__file__ '/usr/lib/python2.5/site-packages/pywrapper.so' $ [sudo ]rm -rf ... On the other hand, in my tests ALGENCAN 1.0 works better. Of course, its maintenance will sooner or later be ceased. Regards, D. Nils Wagner wrote: > On Mon, 14 Jul 2008 23:39:12 +0300 > dmitrey wrote: > >> Hi Nils, >> what algencan version do you have? >> I have published a post in my blog today that is related >> to algencan >> 2.0.3 and it works well for me. >> >> Regards, D. >> >> > Hi Dmitrey, > > I am using the old algencan < 2.0. > > I will upgrade asap. > > Cheers, > Nils > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > From hoytak at gmail.com Tue Jul 15 03:24:06 2008 From: hoytak at gmail.com (Hoyt Koepke) Date: Tue, 15 Jul 2008 10:24:06 +0300 Subject: [SciPy-user] Automatic MATLAB to scipy/numpy/pylab conversion In-Reply-To: References: Message-ID: <4db580fd0807150024v79726315j7570b0785783381d@mail.gmail.com> I know I'm a little late to the main discussion, but FYI there is a project, pym, which claims to do some conversion: http://code.google.com/p/pym/ It's only available from svn, and there's no claim about how well it works, but it might be a good starting point. I believe it's MIT licensed, so no issue there. Plus the authors might be good contacts.
--Hoyt On Tue, Jul 15, 2008 at 7:16 AM, Joe Harrington wrote: > Automatic conversion of at least the boring stuff is crucial. No > matter how educational rewriting your code may be, if you have 100,000 > lines of tools and utilities to convert, you generally can't. Our > IDL-based group has made good use of i2py, which may at least provide > some educational examples for converting MATLAB, if not a usable > framework. The general approach is to convert what is easy, and at least > flag what is difficult. It doesn't do a mapping of function names to > function names or anything like that. Here's the URL, hope it helps. > > http://software.pseudogreen.org/i2py/ > > If you do a general parser approach, it would be useful to do it in > such a way that the front end can be ripped out and modified or > replaced with one for another language. > > I've added a section on the scipy.org wiki under Topical Software for > conversion programs. When yours is ready, please put in a link there. > > --jh-- > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- +++++++++++++++++++++++++++++++++++ Hoyt Koepke UBC Department of Computer Science http://www.cs.ubc.ca/~hoytak/ hoytak at gmail.com +++++++++++++++++++++++++++++++++++ From emanuele at relativita.com Tue Jul 15 05:50:35 2008 From: emanuele at relativita.com (Emanuele Olivetti) Date: Tue, 15 Jul 2008 11:50:35 +0200 Subject: [SciPy-user] [OpenOpt] Segmentation fault In-Reply-To: <487BB90E.9090200@scipy.org> References: <487B81DD.5000608@relativita.com> <487B87A7.8030400@relativita.com> <487BB90E.9090200@scipy.org> Message-ID: <487C72EB.9040709@relativita.com> Updated to latest svn (numpy, scipy, etc.). Now nlp_1.py works. I came across this bug because of troubles in my code. Now let's go back and fix mine :D Thanks, Emanuele dmitrey wrote: > hi Emanuele, > I have no idea, and I have no experience of working with core dump files. > I can only guess that it's related to numpy (since OO has nothing other than > Python code). IIRC there were some troubles with the numpy version from > channels (IIRC release 1.0.5) (however, other troubles with other > output); AFAIK both Nils and I have numpy from latest svn. > > Regards, D. > > Emanuele Olivetti wrote: > >> Emanuele Olivetti wrote: >> >> >>> Dear all, >>> >>> Whenever I run example 'nlp_1.py' of OpenOpt (both tarball 0.18 >>> and svn) on both my linux boxes I get "Segmentation fault"! >>> Or rather, I get it 9 times out of ten (due to random >>> initialization without a fixed seed, I guess). >>> >>> My configuration: >>> box1: Ubuntu Gutsy, amd64 (Intel C2D) >>> box2: Ubuntu Hardy, amd64 (Intel QuadCore) >>> >>> Python, Numpy, Scipy, Matplotlib etc. are standard versions >>> from Ubuntu repositories. >>> >>> Can anyone confirm/reproduce this behavior? >>> >>> I've a report that on i386 debian it seems to work flawlessly.
>>> >>> >>> >> Here is a full log on box1: >> ---- >> /usr/lib/python2.5/site-packages/scikits/openopt/examples# python nlp_1.py >> OpenOpt checks user-supplied gradient df (shape: (150,) ) >> according to prob.diffInt = [ 1.00000000e-07] >> lines with 1 - info_user/info_numerical greater than maxViolation = 0.01 >> will be shown >> max(abs(df_user - df_numerical)) = 1.72111294887e-05 >> (is registered in df number 67) >> ======================== >> OpenOpt checks user-supplied gradient dc (shape: (2, 150) ) >> according to prob.diffInt = [ 1.00000000e-07] >> lines with 1 - info_user/info_numerical greater than maxViolation = 0.01 >> will be shown >> max(abs(dc_user - dc_numerical)) = 8.42460444801e-05 >> (is registered in dc number 0) >> ======================== >> OpenOpt checks user-supplied gradient dh (shape: (2, 150) ) >> according to prob.diffInt = [ 1.00000000e-07] >> lines with 1 - info_user/info_numerical greater than maxViolation = 0.01 >> will be shown >> max(abs(dh_user - dh_numerical)) = 0.00043945014113 >> (is registered in dh number 149) >> ======================== >> ----------------------------------------------------- >> solver: ralg problem: unnamed goal: minimum >> iter objFunVal log10(maxResidual) >> 0 8.596e+03 3.91 >> Segmentation fault (core dumped) >> ---- >> >> The core dump is available here: >> http://sra.fbk.eu/people/olivetti/tmp/core.bz2 >> >> >> HTH, >> >> Emanuele >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> >> >> >> > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From contact at pythonxy.com Tue Jul 15 13:53:05 2008 From: contact at pythonxy.com (Pierre Raybaut) Date: Tue, 15 Jul 2008 19:53:05 +0200 Subject: [SciPy-user] [python(x,y)] Call for donations In-Reply-To: <629b08a40807150338y42dab8bbxab15a7d990afdf26@mail.gmail.com> References: <629b08a40807150338y42dab8bbxab15a7d990afdf26@mail.gmail.com> Message-ID: <629b08a40807151053j139b51b8g5992d9859705789c@mail.gmail.com> Hi all, As you may already know, Python(x,y) is a free Python / Qt / Eclipse distribution for scientific computing, numerical simulations, data analysis and visualization, ... The basic idea of Python(x,y) was to freely share with other people a ready-to-use Python distribution that is a viable alternative to MATLAB or IDL, i.e. providing documentation, a development environment and interactive consoles integrated with each other. Almost everything was already available to make it; one had only to gather all these tools, documents and libraries and put them together in one self-sufficient package (and script the installation process to configure all of this software automatically). At first, I did this for myself (in my free time) because it was a good way to install Python from scratch. Then I quickly decided to distribute it freely - and this takes a lot of time, but I was very glad that other people could benefit from my work. However, freely distributing Python(x,y) paradoxically takes money too (which is mine, by the way, because this is a private initiative). Not much though. But still, hosting 500Mb of data with a traffic limit of 24Gb per month is not free.
Now Python(x,y) is a victim of its own success (traffic is increasing each month: 37Gb last month, which is over the limit, but my web hosting provider has a flexible quota policy) and the new release (2.0.0) allows individual Python(x,y) packages to be distributed easily (even on computers with only Python 2.5 installed, and for users who don't want to install the whole distribution but only some packages), but 500Mb is not enough... I could pay for it because it's really not much (it's clearly not a question of money but more likely a question of principle), but I've already paid for the current website and spent a lot of time on this, so I thought that I could share my debts as well as my work :) So, feel free to donate on the Python(x,y) website. Thanks ! Pierre From anirudhvij at gmail.com Tue Jul 15 14:31:14 2008 From: anirudhvij at gmail.com (anirudh vij) Date: Tue, 15 Jul 2008 20:31:14 +0200 Subject: [SciPy-user] Automatic MATLAB to scipy/numpy/pylab conversion In-Reply-To: <4db580fd0807150024v79726315j7570b0785783381d@mail.gmail.com> References: <4db580fd0807150024v79726315j7570b0785783381d@mail.gmail.com> Message-ID: <1aa72c450807151131s51636acayf46d736aa802c193@mail.gmail.com> On Tue, Jul 15, 2008 at 9:24 AM, Hoyt Koepke wrote: > I know I'm a little late to the main discussion, but FYI there is a > project, pym, which claims to do some conversion: > http://code.google.com/p/pym/ It's only available from svn, and > there's no claim about how well it works, but it might be a good > starting point. I believe it's MIT licensed, so no issue there. Plus > the authors might be good contacts. > > --Hoyt > thanks for this. Your google-fu is stronger than mine :) Work on my own script has been sluggish, since there is a conference going on here. I am looking at the pym code right now to see how usable it is. >> >> >> On Tue, Jul 15, 2008 at 7:16 AM, Joe Harrington wrote: >>> Automatic conversion of at least the boring stuff is crucial. No >>> matter how educational rewriting your code may be, if you have 100,000 >>> lines of tools and utilities to convert, you generally can't. Our >>> IDL-based group has made good use of i2py, which may at least provide >>> some educational examples for converting MATLAB, if not a usable >>> framework. The general approach is to convert what is easy, and at least >>> flag what is difficult. It doesn't do a mapping of function names to >>> function names or anything like that. Here's the URL, hope it helps. >>> >>> http://software.pseudogreen.org/i2py/ >>> >>> If you do a general parser approach, it would be useful to do it in >>> such a way that the front end can be ripped out and modified or >>> replaced with one for another language. >>> >>> I've added a section on the scipy.org wiki under Topical Software for >>> conversion programs. When yours is ready, please put in a link there. >>> Will do. Meanwhile, should a link to pym be put up on the wiki?
From forrest at physics.Auburn.EDU Tue Jul 15 16:13:15 2008 From: forrest at physics.Auburn.EDU (Phil Forrest) Date: Tue, 15 Jul 2008 15:13:15 -0500 Subject: [SciPy-user] -lf77compat failure - SciPy on AMD64 Sun Studio In-Reply-To: <487C0D0D.5040904@gmail.com> References: <00d101c8e5f6$f2992f20$d7cb8d60$@auburn.edu> <3d375d730807141441x4c7a1044wca49fafe653578ca@mail.gmail.com> <487BCE78.3040403@gmail.com> <3d375d730807141542i1086c63fx9a43e0bad275fb7c@mail.gmail.com> <487BDCAD.4090907@gmail.com> <3d375d730807141635l667ecc94j4cea55c0c4251909@mail.gmail.com> <487C0D0D.5040904@gmail.com> Message-ID: <015401c8e6b7$36f552f0$a4dff8d0$@auburn.edu> Robert Kern wrote: >> Yup. We can do a version check. For >= 8.3, we don't put in f77compat. >> Can you and Phil check that if you modify that line of code in sun.py >> to omit f77compat on your Sun Fortran 95 8.3 boxes, that it works? >> (Ugh, that sentence got away from me, but too late now!) Hey Guys, I did not have any success on a Sun Studio 12 (8.3) scipy build with f77compat removed from the sun.py in the numpy tree. The numpy build went fine (I cleaned the tree, downloaded the source, built via 'python setup.py install'). I didn't want to flood this list with compile error messages, so please let me know if someone would like me to send out the build failure output for scipy. >One more thing: since Phil wants to build a 64 bit version he ought to set I do not need (at the moment) a 64-bit build. Which is why my environment variables for BLAS and LAPACK point to 32 bit libs: BLAS=/opt/SUNWspro/lib/libsunperf.so LAPACK=/opt/SUNWspro/lib/libsunmath.so >export BLAS=/usr/local/SunStudio12-200709/x86_64-SunOS/SUNWspro/lib/amd64/libsunmath.so >Notice .../lib/amd64/... since lib contains the 32 bit version of the library and lib/amd64 the 64 bit version. Again, I know about as much of Python as my 2 year old nephew. I'm attempting this install at the request of a researcher who needs it. Thanks, Phil From robert.kern at gmail.com Tue Jul 15 16:19:28 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 15 Jul 2008 15:19:28 -0500 Subject: [SciPy-user] -lf77compat failure - SciPy on AMD64 Sun Studio In-Reply-To: <015401c8e6b7$36f552f0$a4dff8d0$@auburn.edu> References: <00d101c8e5f6$f2992f20$d7cb8d60$@auburn.edu> <3d375d730807141441x4c7a1044wca49fafe653578ca@mail.gmail.com> <487BCE78.3040403@gmail.com> <3d375d730807141542i1086c63fx9a43e0bad275fb7c@mail.gmail.com> <487BDCAD.4090907@gmail.com> <3d375d730807141635l667ecc94j4cea55c0c4251909@mail.gmail.com> <487C0D0D.5040904@gmail.com> <015401c8e6b7$36f552f0$a4dff8d0$@auburn.edu> Message-ID: <3d375d730807151319x405f6966i947357bb49cbdd19@mail.gmail.com> On Tue, Jul 15, 2008 at 15:13, Phil Forrest wrote: > Robert Kern wrote: > >>> Yup. We can do a version check. For >= 8.3, we don't put in f77compat. >>> Can you and Phil check that if you modify that line of code in sun.py >>> to omit f77compat on your Sun Fortran 95 8.3 boxes, that it works? >>> (Ugh, that sentence got away from me, but too late now!) > > Hey Guys, > > I did not have any success on a Sun Studio 12 (8.3) scipy build with > f77compat removed from the sun.py in the numpy tree. The numpy build went > fine (I cleaned the tree, downloaded the source, built via 'python setup.py > install') > > I didn't want to flood this list with compile error messages, so please let > me know if someone would like me to send out the build failure output for > scipy. You can send it to me in private email.
-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From s.mientki at ru.nl Tue Jul 15 16:24:16 2008 From: s.mientki at ru.nl (Stef Mientki) Date: Tue, 15 Jul 2008 22:24:16 +0200 Subject: [SciPy-user] Automatic MATLAB to scipy/numpy/pylab conversion In-Reply-To: <1aa72c450807151131s51636acayf46d736aa802c193@mail.gmail.com> References: <4db580fd0807150024v79726315j7570b0785783381d@mail.gmail.com> <1aa72c450807151131s51636acayf46d736aa802c193@mail.gmail.com> Message-ID: <487D0770.8060400@ru.nl> anirudh vij wrote: > On Tue, Jul 15, 2008 at 9:24 AM, Hoyt Koepke wrote: > >> I know I'm a little late to the main discussion, but FYI there is a >> project, pym, which claims to do some conversion: >> http://code.google.com/p/pym/ It's only available from svn, and >> there's no claim about how well it works, but it might be a good >> starting point. I believe it's MIT licensed, so no issue there. Plus >> the authors might be good contacts. >> >> --Hoyt >> >> > > thanks for this. Your google-fu is stronger than mine :) > > I couldn't find any files (don't understand anything of SVN) could you please attach the downloaded files to this list ? thanks, Stef > Work on my own script has been sluggish, since there is a conference > going on here. I am looking at the pym code right now to see how > usable it is. > >> >> On Tue, Jul 15, 2008 at 7:16 AM, Joe Harrington wrote: >> >>> Automatic conversion of at least the boring stuff is crucial. No >>> matter how educational rewriting your code may be, if you have 100,000 >>> lines of tools and utilities to convert, you generally can't. Our >>> IDL-based group has made good use of i2py, which may at least provide >>> some educational examples for converting MATLAB, if not a usable >>> framework. The general thing is to covert what is easy, and at least >>> flag what is difficult. It doesn't do a mapping of function names to >>> function names or anything like that. Here's the URL, hope it helps. >>> >>> http://software.pseudogreen.org/i2py/ >>> >>> If you do a general parser approach, it would be useful to do it in >>> such a way that the front end can be ripped out and modified or >>> replaced with one for another language. >>> >>> I've added a section on the scipy.org wiki under Topical Software for >>> conversion programs. When yours is ready, please put in a link there. >>> >>> > Will do. Meanwhile, should a link to pym be put up on the wiki? > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > From cournape at gmail.com Tue Jul 15 16:28:16 2008 From: cournape at gmail.com (david) Date: Tue, 15 Jul 2008 20:28:16 +0000 (UTC) Subject: [SciPy-user] -lf77compat failure - SciPy on AMD64 Sun Studio References: <00d101c8e5f6$f2992f20$d7cb8d60$@auburn.edu> <3d375d730807141441x4c7a1044wca49fafe653578ca@mail.gmail.com> <487BCE78.3040403@gmail.com> <3d375d730807141542i1086c63fx9a43e0bad275fb7c@mail.gmail.com> <487BDCAD.4090907@gmail.com> <3d375d730807141635l667ecc94j4cea55c0c4251909@mail.gmail.com> <487C0D0D.5040904@gmail.com> <015401c8e6b7$36f552f0$a4dff8d0$@auburn.edu> <3d375d730807151319x405f6966i947357bb49cbdd19@mail.gmail.com> Message-ID: > > me know if someone would like me to send out the build failure output for > > scipy. 
One thing which I had problems with on Solaris (OpenSolaris with Sun Studio Express, which is a bit different from the version we are talking about here) was the C++ code in sparsetools. Some weird missing symbols related to RTTI (g++ builds and links the thing ok). Apart from that, I could build and test numpy and scipy correctly with Sun Studio Express and the SunPerf libraries, but using numscons. I did not feel like supporting sunperf in numpy.distutils, cheers, David From robert.kern at gmail.com Tue Jul 15 16:44:27 2008 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 15 Jul 2008 15:44:27 -0500 Subject: [SciPy-user] Automatic MATLAB to scipy/numpy/pylab conversion In-Reply-To: <487D0770.8060400@ru.nl> References: <4db580fd0807150024v79726315j7570b0785783381d@mail.gmail.com> <1aa72c450807151131s51636acayf46d736aa802c193@mail.gmail.com> <487D0770.8060400@ru.nl> Message-ID: <3d375d730807151344m6d85a2d8j8d6bdefd3af4349b@mail.gmail.com> On Tue, Jul 15, 2008 at 15:24, Stef Mientki wrote: > > anirudh vij wrote: >> On Tue, Jul 15, 2008 at 9:24 AM, Hoyt Koepke wrote: >> >>> I know I'm a little late to the main discussion, but FYI there is a >>> project, pym, which claims to do some conversion: >>> http://code.google.com/p/pym/ It's only available from svn, and >>> there's no claim about how well it works, but it might be a good >>> starting point. I believe it's MIT licensed, so no issue there. Plus >>> the authors might be good contacts. >>> >>> --Hoyt >> >> thanks for this. Your google-fu is stronger than mine :) >> > I couldn't find any files > (don't understand anything of SVN) > could you please attach the downloaded files to this list ? Um, not to the list, please. You can browse the raw files here, if you wish: http://pym.googlecode.com/svn/trunk/ -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From s.mientki at ru.nl Tue Jul 15 17:13:41 2008 From: s.mientki at ru.nl (Stef Mientki) Date: Tue, 15 Jul 2008 23:13:41 +0200 Subject: [SciPy-user] Automatic MATLAB to scipy/numpy/pylab conversion In-Reply-To: <3d375d730807151344m6d85a2d8j8d6bdefd3af4349b@mail.gmail.com> References: <4db580fd0807150024v79726315j7570b0785783381d@mail.gmail.com> <1aa72c450807151131s51636acayf46d736aa802c193@mail.gmail.com> <487D0770.8060400@ru.nl> <3d375d730807151344m6d85a2d8j8d6bdefd3af4349b@mail.gmail.com> Message-ID: <487D1305.5020506@ru.nl> Robert Kern wrote: > On Tue, Jul 15, 2008 at 15:24, Stef Mientki wrote: > >> anirudh vij wrote: >> >>> On Tue, Jul 15, 2008 at 9:24 AM, Hoyt Koepke wrote: >>> >>> >>>> I know I'm a little late to the main discussion, but FYI there is a >>>> project, pym, which claims to do some conversion: >>>> http://code.google.com/p/pym/ It's only available from svn, and >>>> there's no claim about how well it works, but it might be a good >>>> starting point. I believe it's MIT licensed, so no issue there. Plus >>>> the authors might be good contacts. >>>> >>>> --Hoyt >>>> >>> thanks for this. Your google-fu is stronger than mine :) >>> >>> >> I couldn't find any files >> (don't understand anything of SVN) >> could you please attach the downloaded files to this list ? >> > > Um, not to the list, please. You can browse the raw files here, if you wish: > > http://pym.googlecode.com/svn/trunk/ > thanks, now I can see some files.
cheers, Stef From cournape at gmail.com Tue Jul 15 17:27:31 2008 From: cournape at gmail.com (david) Date: Tue, 15 Jul 2008 21:27:31 +0000 (UTC) Subject: [SciPy-user] -lf77compat failure - SciPy on AMD64 Sun Studio References: <00d101c8e5f6$f2992f20$d7cb8d60$@auburn.edu> <3d375d730807141441x4c7a1044wca49fafe653578ca@mail.gmail.com> <487BCE78.3040403@gmail.com> <3d375d730807141542i1086c63fx9a43e0bad275fb7c@mail.gmail.com> <487BDCAD.4090907@gmail.com> <3d375d730807141635l667ecc94j4cea55c0c4251909@mail.gmail.com> <487C0D0D.5040904@gmail.com> <015401c8e6b7$36f552f0$a4dff8d0$@auburn.edu> <3d375d730807151319x405f6966i947357bb49cbdd19@mail.gmail.com> Message-ID: david gmail.com> writes: > > > > me know if someone would like me to send out the build failure output for > > > scipy. > > One thing which I had problems with on solaris (open solaris with sun studio > express, that is a bit different than the version we are talking about here) > was C++ code in sparsetools. Some weird missing symbols related to RTTI (g++ > builds and link the thing ok). Ok, I found out the problem: when using some features of C++, the sun C++ compiler needs additional libraries... The one I needed so far was Crun. Appending -lCrun to the C++ link command seems to work. Now, I got some segfaults in sparse unit tests, but that may be caused by recent changes to it. cheers, David From cournape at gmail.com Tue Jul 15 17:43:04 2008 From: cournape at gmail.com (david) Date: Tue, 15 Jul 2008 21:43:04 +0000 (UTC) Subject: [SciPy-user] scipy-0.6 on AIX 5.3 References: <200807061617.07505.mwojc@p.lodz.pl> <48718535.8000703@ar.media.kyoto-u.ac.jp> Message-ID: Marek Wojciechowski p.lodz.pl> writes: > > > > IBM Fortran compiler (xlf) worked very well for me. Well, it does not work here. > The only compilation > problem is with two functions in specfun.f, namely: CLQN and CLQMN. > However, this is because of the aggressive -O5 optimization flag (which is > set up by default on AIX in numpy.distutils). With -O3 whole code compiles > nicely. > That's strange, since the errors are syntax related. In particular, from your output, it seemed like some comments were causing problems. > BTW. i'm not sure if there is a way to get working g77 or gfortran on AIX. Yes. When I googled for your problem on AIX, I saw similar problems compiling octave, and using gnu fortran compilers did solve the problem. But anyway, since changing the optimization level worked, I guess there is no need for that anymore. cheers, David From silva at lma.cnrs-mrs.fr Tue Jul 15 17:51:17 2008 From: silva at lma.cnrs-mrs.fr (Fabrice Silva) Date: Tue, 15 Jul 2008 23:51:17 +0200 Subject: [SciPy-user] odepack or f2py error In-Reply-To: <1216059141.3007.5.camel@Portable-s2m.cnrs-mrs.fr> References: <1215695635.3278.7.camel@Portable-s2m.cnrs-mrs.fr> <1215862349.3033.3.camel@Portable-s2m.cnrs-mrs.fr> <1216059141.3007.5.camel@Portable-s2m.cnrs-mrs.fr> Message-ID: <1216158677.2943.2.camel@Portable-s2m.cnrs-mrs.fr> On ticket 696 (http://scipy.org/scipy/scipy/ticket/696), you can find a minimal example of code reproducing the problem. It does not use f2py anymore, but the trouble seems to be linked to the import of pylab... 
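For readers who have not used odepack from Python before, the general shape of such a test is roughly the following (a hypothetical skeleton only - the actual minimal example is attached to the ticket, and this is not it):

import pylab                        # the import that seems to trigger the trouble
import numpy as np
from scipy.integrate import odeint  # odepack's lsoda under the hood

def rhs(y, t):
    # trivial linear ODE: dy/dt = -y
    return -y

t = np.linspace(0.0, 1.0, 11)
y = odeint(rhs, 1.0, t)             # plain odepack call, no f2py wrapper of my own
print y[-1]                         # should be close to exp(-1)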
-- Fabrice Silva From hoytak at gmail.com Wed Jul 16 02:42:54 2008 From: hoytak at gmail.com (Hoyt Koepke) Date: Wed, 16 Jul 2008 09:42:54 +0300 Subject: [SciPy-user] Automatic MATLAB to scipy/numpy/pylab conversion In-Reply-To: <1aa72c450807151131s51636acayf46d736aa802c193@mail.gmail.com> References: <4db580fd0807150024v79726315j7570b0785783381d@mail.gmail.com> <1aa72c450807151131s51636acayf46d736aa802c193@mail.gmail.com> Message-ID: <4db580fd0807152342o1e9cae2atb4b3937a109788b@mail.gmail.com> I only knew about it from a few months ago, when I was looking for a similar tool, and it seems it was higher ranked then. I couldn't find it except for googling the exact name... --Hoyt On Tue, Jul 15, 2008 at 9:31 PM, anirudh vij wrote: > On Tue, Jul 15, 2008 at 9:24 AM, Hoyt Koepke wrote: >> I know I'm a little late to the main discussion, but FYI there is a >> project, pym, which claims to do some conversion: >> http://code.google.com/p/pym/ It's only available from svn, and >> there's no claim about how well it works, but it might be a good >> starting point. I believe it's MIT licensed, so no issue there. Plus >> the authors might be good contacts. >> >> --Hoyt >> > > thanks for this. Your google-fu is stronger than mine :) > > > Work on my own script has been sluggish, since there is a conference > going on here. I am looking at the pym code right now to see how > usable it is. >> >> >> >> On Tue, Jul 15, 2008 at 7:16 AM, Joe Harrington wrote: >>> Automatic conversion of at least the boring stuff is crucial. No >>> matter how educational rewriting your code may be, if you have 100,000 >>> lines of tools and utilities to convert, you generally can't. Our >>> IDL-based group has made good use of i2py, which may at least provide >>> some educational examples for converting MATLAB, if not a usable >>> framework. The general thing is to covert what is easy, and at least >>> flag what is difficult. It doesn't do a mapping of function names to >>> function names or anything like that. Here's the URL, hope it helps. >>> >>> http://software.pseudogreen.org/i2py/ >>> >>> If you do a general parser approach, it would be useful to do it in >>> such a way that the front end can be ripped out and modified or >>> replaced with one for another language. >>> >>> I've added a section on the scipy.org wiki under Topical Software for >>> conversion programs. When yours is ready, please put in a link there. >>> > Will do. Meanwhile, should a link to pym be put up on the wiki? > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- +++++++++++++++++++++++++++++++++++ Hoyt Koepke UBC Department of Computer Science http://www.cs.ubc.ca/~hoytak/ hoytak at gmail.com +++++++++++++++++++++++++++++++++++ From haase at msg.ucsf.edu Wed Jul 16 07:47:56 2008 From: haase at msg.ucsf.edu (Sebastian Haase) Date: Wed, 16 Jul 2008 13:47:56 +0200 Subject: [SciPy-user] [python(x,y)] Call for donations In-Reply-To: <629b08a40807151053j139b51b8g5992d9859705789c@mail.gmail.com> References: <629b08a40807150338y42dab8bbxab15a7d990afdf26@mail.gmail.com> <629b08a40807151053j139b51b8g5992d9859705789c@mail.gmail.com> Message-ID: On Tue, Jul 15, 2008 at 7:53 PM, Pierre Raybaut wrote: > Hi all, > > As you may already know, Python(x,y) is a free Python / Qt / Eclipse > distribution for scientific computing, numerical simulations, data > analysis and visualization, ... 
> > The basic idea of Python(x,y) was to freely share with other people a > > ready-to-use Python distribution that is a viable alternative to MATLAB > > or IDL, i.e. providing documentation, a development environment and > > interactive consoles integrated with each other. Almost > > everything was already available to make it; one had only to gather > > all these tools, documents and libraries and put them together in one > > self-sufficient package (and script the installation process to > > configure all of this software automatically). > > > > At first, I did this for myself (in my free time) because it was a good > > way to install Python from scratch. > > Then I quickly decided to distribute it freely - and this takes a lot > > of time, but I was very glad that other people could benefit from my > > work. > > However, freely distributing Python(x,y) paradoxically takes money too > > (which is mine, by the way, because this is a private initiative). Not > > much though. But still, hosting 500Mb of data with a traffic limit of > > 24Gb per month is not free. > > > > Now Python(x,y) is a victim of its own success (traffic is increasing > > each month: 37Gb last month, which is over the limit, but my web > > hosting provider has a flexible quota policy) and the new release > > (2.0.0) allows individual Python(x,y) packages to be distributed easily > > (even on computers with only Python 2.5 installed, and for users who > > don't want to install the whole distribution but only some packages), but > > 500Mb is not enough... > > > > I could pay for it because it's really not much (it's clearly not a > > question of money but more likely a question of principle), but I've > > already paid for the current website and spent a lot of time on this, > > so I thought that I could share my debts as well as my work :) > > > > So, feel free to donate on the Python(x,y) website. > > Thanks ! > > Pierre Did you look into free hosting solutions - like google code !? - Sebastian Haase From dave.hirschfeld at gmail.com Wed Jul 16 09:33:14 2008 From: dave.hirschfeld at gmail.com (Dave) Date: Wed, 16 Jul 2008 13:33:14 +0000 (UTC) Subject: [SciPy-user] OpenOpt svn broken on Windows Message-ID: Under BrasilOpt there are two files ALGENCAN_oo.py and algencan_oo.py. Windows filenames are not case sensitive, so this causes the svn checkout to fail. -Dave From dmitrey.kroshko at scipy.org Wed Jul 16 11:02:29 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Wed, 16 Jul 2008 18:02:29 +0300 Subject: [SciPy-user] OpenOpt svn broken on Windows In-Reply-To: References: Message-ID: <487E0D85.5040709@scipy.org> I don't know what the easiest solution is here; currently you could use the zip-file from the OO installation page (~ 10 Mb, contains all scikits from the latest svn, automatically generated by Trac). Regards, D. Dave wrote: > Under BrasilOpt there are two files ALGENCAN_oo.py and algencan_oo.py. Windows > filenames are not case sensitive, so this causes the svn checkout to fail.
> > -Dave > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > From william.ratcliff at gmail.com Wed Jul 16 11:25:19 2008 From: william.ratcliff at gmail.com (william ratcliff) Date: Wed, 16 Jul 2008 11:25:19 -0400 Subject: [SciPy-user] [python(x,y)] Call for donations In-Reply-To: References: <629b08a40807150338y42dab8bbxab15a7d990afdf26@mail.gmail.com> <629b08a40807151053j139b51b8g5992d9859705789c@mail.gmail.com> Message-ID: <827183970807160825l232363f0p8893d33edce09b0f@mail.gmail.com> Google code is actually very convenient to use :> On Wed, Jul 16, 2008 at 7:47 AM, Sebastian Haase wrote: > On Tue, Jul 15, 2008 at 7:53 PM, Pierre Raybaut > wrote: > > Hi all, > > > > As you may already know, Python(x,y) is a free Python / Qt / Eclipse > > distribution for scientific computing, numerical simulations, data > > analysis and visualization, ... > > > > The basic idea of Python(x,y) was to freely share with other people a > > ready-to-use Python distribution being a viable alternative to MATLAB > > or IDL, i.e. providing documentation, development environment and > > interactive consoles in an integrated way with each other. Almost > > everything was already available to make it, one had only to gather > > all these tools, documents, libraries, and put them together in one > > self-sufficient software (and scripting the installation process to > > configure automatically all these softwares). > > > > At first, I did this for me (in my free time) because it was a good > > way to install Python from scratch. > > Then I quickly decided to distribute it freely - and this takes a lot > > of time, but I was very glad that other people could benefit from my > > work. > > However, distributing freely Python(x,y) paradoxically takes money too > > (which is mine by the way because this is a private initiative). Not > > much though. But still, hosting 500Mb of data with a traffic limit of > > 24Gb per month is not free. > > > > Now Python(x,y) is a victim of its own success (traffic is increasing > > each month : 37Gb last month, which is over the limit but my web > > hosting provider has a flexible quota policy) and the new release > > (2.0.0) allow to distribute easily Python(x,y) individual packages > > (even on computers with only Python 2.5 installed and for users who > > don't want to install the distribution but only some packages) but > > 500Mb are not enough... > > > > I could pay for it because it's really not much (it's clearly not a > > question of money but more likely a question of principle), but I've > > already paid for the current website and spend a lot of time on this, > > so I thought that I could share my debts as well as my work :) > > > > So, feel free to donate on the Python(x,y) website. > > Thanks ! > > Pierre > > Did you look into free hosting solutions - like google code !? > > - Sebastian Haase > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From aisaac at american.edu Wed Jul 16 11:52:44 2008 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 16 Jul 2008 11:52:44 -0400 Subject: [SciPy-user] OpenOpt svn broken on Windows In-Reply-To: <487E0D85.5040709@scipy.org> References: <487E0D85.5040709@scipy.org> Message-ID: On Wed, 16 Jul 2008, dmitrey apparently wrote: > I don't know what the easiest solution is here; currently you > could use the zip-file from the OO installation page (~ 10 Mb, > contains all scikits from the latest svn, automatically > generated by Trac). Hi Dmitrey, Since this is a real problem for cross platform use, it needs to be addressed. As a rule, two files with the same name except for capitalization are a bad idea. Clearly these names do not distinguish the file content. Additionally, the two files contain serious DRY/SPOT violations, so it looks like this bug report offers a chance to address those as well. Cheers, Alan Isaac From dmitrey.kroshko at scipy.org Wed Jul 16 12:18:15 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Wed, 16 Jul 2008 19:18:15 +0300 Subject: [SciPy-user] OpenOpt svn broken on Windows In-Reply-To: References: <487E0D85.5040709@scipy.org> Message-ID: <487E1F47.9020303@scipy.org> Since OO has solvers "ALGENCAN" (for ALGENCAN v 1.0) and "algencan" (for ALGENCAN v 2.0), openopt must have the files ALGENCAN_oo.py and algencan_oo.py (because calling a solver (which is done in OO by name) depends on the file names).
I could just remove ALGENCAN (in any case, ALGENCAN v 1.0 support will eventually be dropped), but currently some of my tests show that v 1.0 works better than 2.0. So I intend to remove ALGENCAN_oo.py for the next OO release, and doing it right now isn't a good idea. D. Alan G Isaac wrote: > On Wed, 16 Jul 2008, dmitrey apparently wrote: > >> I don't know what the easiest solution is here; currently you >> could use the zip-file from the OO installation page (~ 10 Mb, >> contains all scikits from the latest svn, automatically >> generated by Trac). >> > > Hi Dmitrey, > > Since this is a real problem for cross platform use, > it needs to be addressed. > > As a rule, two files with the same name except for > capitalization are a bad idea. Clearly these names > do not distinguish the file content. > > Additionally, the two files contain serious DRY/SPOT > violations, > so it looks like this bug report offers a chance to address > those as well. > > Cheers, > Alan Isaac > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From matthieu.brucher at gmail.com Wed Jul 16 12:28:16 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 16 Jul 2008 18:28:16 +0200 Subject: [SciPy-user] OpenOpt svn broken on Windows In-Reply-To: <487E1F47.9020303@scipy.org> References: <487E0D85.5040709@scipy.org> <487E1F47.9020303@scipy.org> Message-ID: Hi, As Alan states, you could use a different name. Why not use algencan2_oo.py ? With the current registry pattern in OO, the name of the solver is not dependent on the actual implementation file. Besides, you could refactor the code by using some patterns, as Alan suggested ;) Matthieu 2008/7/16 dmitrey : > Since OO has solvers "ALGENCAN" (for ALGENCAN v 1.0) and "algencan" (for > ALGENCAN v 2.0), openopt must have the files ALGENCAN_oo.py and > algencan_oo.py (because calling a solver (which is done in OO by name) > depends on the file names). From dmitrey.kroshko at scipy.org Wed Jul 16 12:37:06 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Wed, 16 Jul 2008 19:37:06 +0300 Subject: [SciPy-user] OpenOpt svn broken on Windows In-Reply-To: References: <487E0D85.5040709@scipy.org> <487E1F47.9020303@scipy.org> Message-ID: <487E23B2.9060507@scipy.org> Matthieu Brucher wrote: > Hi, > > As Alan states, you could use a different name. Why not use > algencan2_oo.py ? I don't think using r = p.solve('algencan2') is a good idea; even "algencan" is too long to type. Also, there are a number of those algencan versions - 2.0beta, 2.0.1beta, ..., 2.0.4 beta - and the number will continue to grow. It also means that after each new ALGENCAN version I would have to remove old files, breaking some users' code. > With the current registry pattern in OO, the name of > the solver is not dependent on the actual implementation file. > I don't understand what you mean here; see runProbSolver.py - the solver is extracted from the py-file with the same name. > Besides, you could refactor the code by using some patterns, as Alan > suggested ;) > I just intend to remove ALGENCAN_oo.py, because I don't see a need to continue maintaining it in the future. D. From matthieu.brucher at gmail.com Wed Jul 16 12:47:58 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 16 Jul 2008 18:47:58 +0200 Subject: [SciPy-user] OpenOpt svn broken on Windows In-Reply-To: <487E23B2.9060507@scipy.org> References: <487E0D85.5040709@scipy.org> <487E1F47.9020303@scipy.org> <487E23B2.9060507@scipy.org> Message-ID: 2008/7/16 dmitrey : > Matthieu Brucher wrote: >> Hi, >> >> As Alan states, you could use a different name. Why not use >> algencan2_oo.py ? > I don't think using r = p.solve('algencan2') is a good idea; even > "algencan" is too long to type. Also, there are a number of those > algencan versions - 2.0beta, 2.0.1beta, ..., 2.0.4 beta - and the number will continue to grow. > > It also means that after each new ALGENCAN version I would have to remove old files, > breaking some users' code.
Algencan 2 is supposed to have a clear interface, isn't it? So you only have one wrapper per major version. This is done in other packages as well. And it is clearer that this is for Algencan 2.x and not for Algencan 1.x. >> It wasn't the case some time ago. But you can still do what I mean. >> I see that you can't register a new solver in OO on the fly and use >> your wrappers. Why didn't you make __solverPaths__ directly a >> dictionary that you populate, then? This way, you always have one >> __solverPaths__ that is not different across modules, as could be >> expected. >> Because the solution you propose turned out to be a headache, especially for me - and that's the main reason why I removed your dictionary __solverPaths__. Each time I created solvers like "ralgELS", "ralg2" etc (with some modifications to compare results with the original ralg) I got an error during execution and had to go to the runProbSolver.py file to fix the issue (as well as to connect a new solver).
Also, I was not fond of those dozens of additional lines of code in rPS.py. > >>> Besides, you could refactor the code by using some patterns, as Alan >>> suggested ;) >>> >> I just intend to remove ALGENCAN_oo.py, because I don't see a need to >> continue maintaining it in the future. >> > > Of course ;) but you should try to reuse your code at the beginning as well ;) > I don't know what code you mean here; I had already moved some code from ALGENCAN_oo.py to algencan_oo.py. Also, only 1 version of algencan can be used with OO, because both versions produce a pywrapper.so file and only 1 can be used from PYTHONPATH. Of course, I can create a workaround, but I have lots of other things to do, especially completing my assigned GSoC tasks. D. From aisaac at american.edu Wed Jul 16 13:15:36 2008 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 16 Jul 2008 13:15:36 -0400 Subject: [SciPy-user] OpenOpt svn broken on Windows In-Reply-To: <487E28D0.7070505@scipy.org> References: <487E0D85.5040709@scipy.org> <487E1F47.9020303@scipy.org> <487E23B2.9060507@scipy.org> <487E28D0.7070505@scipy.org> Message-ID: > Matthieu Brucher wrote: >> but you should try to reuse your code at the beginning as >> well ;) On Wed, 16 Jul 2008, dmitrey apparently wrote: > I don't know what code you mean here; I had already moved some code > from ALGENCAN_oo.py to algencan_oo.py. I think Matthieu is just referring to the DRY/SPOT violations I pointed out. Generally speaking, you should not have two files containing duplicate code. Just for example, one might refactor as follows: algencan_common.py (for code found in both current files) algencan.py (support for current version (unnumbered)) algencan1.py (support for version 1) The last two would import from the first as needed. This is not a "proposal" per se, just an illustration. Cheers, Alan Isaac From matthieu.brucher at gmail.com Wed Jul 16 13:23:14 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 16 Jul 2008 19:23:14 +0200 Subject: [SciPy-user] OpenOpt svn broken on Windows In-Reply-To: <487E28D0.7070505@scipy.org> References: <487E0D85.5040709@scipy.org> <487E1F47.9020303@scipy.org> <487E23B2.9060507@scipy.org> <487E28D0.7070505@scipy.org> Message-ID: >> Algencan 2 is supposed to have a clear interface, isn't it? So you only >> have one wrapper per major version. This is done in other packages as >> well. And it is clearer that this is for Algencan 2.x and not for >> Algencan 1.x. >> > I meant not 2.0.1 vs 2.0.3 but 2.x vs 3.x vs ... as well
Also, I was not fond of > those dozens of additional lines of code in rPS.py I didn't say that you shouldn't have done this, you obviously can, you have done it and it works. What I'm saying is that there were several purposes for this registry : - not polluting sys.path - allow a clean way of adding a solver For the moment, I think you have two places where you can do the second part : - ooMisc.py - runProbSolver.py Each time, it uses __solverPaths__. The funny thing is that __solverPaths__ in ooMisc is in fact never used or modified. If it were a dictionary, it would be of some use. If you modify ooMic.__solverPaths__ after loading OO, you can't add a new solver, as it equals None. And you cannot add it before, as you have to load OO to get access to it. So I think it is runProbSolver.__solverPaths__ that should be used, but then, ooMic.__solverPaths__ really is just junk code that is of no use. Or have I missed something ? >> Of course ;) but you should try to reuse your code at the beginning as well ;) >> > I don't know what code do you mean here, I had already moved some code > from ALGENCAN_oo.py to algencan_oo.py. It is one of the basis of agile development. Never copy code. What is cool with Python is that you can import the pieces you want. For instance, you can factor these code lines in a new package that you will import in each solver. This is especially important when you find a bug in one wrapper and you forget to apply it in the second wrapper. And it will work great with algecan3 if you can also reuse the same parts, as you will have a much shorter file to modify (and thus less headhache ;)) Also, only 1 version of algencan > can be used with OO because both versions produce pywrapper.so file, and > only 1 can be used from PYTHONPATH. Of course, I can create a workaround > but I have lots of other things to do, especially to complite my GSoC > assigned tasks. I didn't know that there was only one filename. Is it something you build or is it something algecan provides ? Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher From matthieu.brucher at gmail.com Wed Jul 16 13:24:52 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Wed, 16 Jul 2008 19:24:52 +0200 Subject: [SciPy-user] OpenOpt svn broken on Windows In-Reply-To: <487E28D0.7070505@scipy.org> References: <487E0D85.5040709@scipy.org> <487E1F47.9020303@scipy.org> <487E23B2.9060507@scipy.org> <487E28D0.7070505@scipy.org> Message-ID: > I don't know what code do you mean here, I had already moved some code > from ALGENCAN_oo.py to algencan_oo.py. Also, only 1 version of algencan > can be used with OO because both versions produce pywrapper.so file, and > only 1 can be used from PYTHONPATH. Of course, I can create a workaround > but I have lots of other things to do, especially to complite my GSoC > assigned tasks. 
In fact, refactoring is also what you did when you rewrote the registry part ;) Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher From aisaac at american.edu Wed Jul 16 13:51:33 2008 From: aisaac at american.edu (Alan G Isaac) Date: Wed, 16 Jul 2008 13:51:33 -0400 Subject: [SciPy-user] OpenOpt svn broken on Windows In-Reply-To: <487E23B2.9060507@scipy.org> References: <487E0D85.5040709@scipy.org> <487E1F47.9020303@scipy.org> <487E23B2.9060507@scipy.org> Message-ID: On Wed, 16 Jul 2008, dmitrey apparently wrote: > I just intend to remove ALGENCAN_oo.py, because I don't > see a need to continue maintaining it in the future. I understand, but - there is a problem right now: SVN is not working for Windows users. This is not acceptable. - the same problem will reappear in the future (new ALGENCAN versions), so you might as well pick a solution now. See my previous message for what might be a simple solution. Cheers, Alan Isaac From dmitrey.kroshko at scipy.org Wed Jul 16 14:19:23 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Wed, 16 Jul 2008 21:19:23 +0300 Subject: [SciPy-user] OpenOpt svn broken on Windows In-Reply-To: References: <487E0D85.5040709@scipy.org> <487E1F47.9020303@scipy.org> <487E23B2.9060507@scipy.org> Message-ID: <487E3BAB.3070402@scipy.org> Alan G Isaac wrote: > On Wed, 16 Jul 2008, dmitrey apparently wrote: > >> I just intend to remove ALGENCAN_oo.py, because I don't >> see a need to continue maintaining it in the future. >> > > I understand, but > > - there is a problem right now: SVN is not working for > Windows users. This is not acceptable. > - the same problem will reappear in the future (new ALGENCAN > versions), so you might as well pick a solution now. >
>>> I see that you can't register a new solver in OO on the fly and use
>>> your wrappers. Why didn't you make __solverPaths__ directly a
>>> dictionary that you populate then? This way, you always have one
>>> __solverPaths__ that is not different across modules, as it could be
>>> expected to be.
>>>
>> Because the solution you propose turned out to be a headache, especially
>> for me - and that's the main reason why I removed your dictionary
>> __solverPaths__.
>> Each time I created solvers like "ralgELS", "ralg2" etc (with some
>> modifications, to compare results with the original ralg) I got an error
>> during execution and had to go to the runProbSolver.py file to fix the
>> issue (as well as for connecting a new solver). Also, I was not fond of
>> those dozens of additional lines of code in rPS.py
>>
> I didn't say that you shouldn't have done this, you obviously can, you
> have done it and it works. What I'm saying is that there were several
> purposes for this registry:
> - not polluting sys.path
> - allowing a clean way of adding a solver
>
Adding a solver has never been clearer than in the current style: just add an _oo file to any directory under /solvers, and it will work. Also, this is the way it worked before your changes.

> For the moment, I think you have two places where you can do the second part:
> - ooMisc.py
> - runProbSolver.py
> Each one uses __solverPaths__. The funny thing is that
> __solverPaths__ in ooMisc is in fact never used or modified.
> If it were a dictionary, it would be of some use.
> If you modify ooMisc.__solverPaths__ after loading OO, you can't add a
> new solver, as it equals None. And you cannot add it before, as you
> have to load OO to get access to it. So I think it is
> runProbSolver.__solverPaths__ that should be used, but then
> ooMisc.__solverPaths__ really is just junk code that is of no use. Or
> have I missed something?
>
__solverPaths__ is exported by runProbSolver.py, and then converted from None to a dict. I intended to initialize it in runProbSolver.py, but some issues appeared (Python reported errors), so I moved it to ooMisc.py.

>>> Of course ;) but you should try to reuse your code at the beginning as well ;)
>>>
>> I don't know what code you mean here; I had already moved some code
>> from ALGENCAN_oo.py to algencan_oo.py.
>>
> It is one of the bases of agile development: never copy code. What is
> cool with Python is that you can import the pieces you want. For
> instance, you can factor these code lines into a new package that you
> import in each solver. This is especially important when you find
> a bug in one wrapper and forget to apply the fix in the second wrapper.
> And it will work great with ALGENCAN 3 if you can also reuse the same
> parts, as you will have a much shorter file to modify (and thus fewer
> headaches ;))
>
I understand the DRY issue; in fact, there is some code related to CVXOPT that is used by several solvers. But when I created algencan_oo.py, I intended to remove ALGENCAN_oo.py as soon as the new file was ready. However, I decided to leave it in the repository for some more time, because the old ALGENCAN passes some tests better than the new version. In any case, it will soon be removed and the problem will go away.

> Also, only 1 version of algencan
>> can be used with OO because both versions produce a pywrapper.so file, and
>> only 1 can be used from PYTHONPATH. Of course, I can create a workaround
>> but I have lots of other things to do, especially to complete my GSoC
>> assigned tasks.
>>
> I didn't know that there was only one filename. Is it something you
> build or something ALGENCAN provides?
>
The latter: pywrapper.so is generated by ALGENCAN. As I mentioned, I could create a workaround, but I have lots of other things to do.
D.

From matthieu.brucher at gmail.com Wed Jul 16 14:43:03 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Wed, 16 Jul 2008 20:43:03 +0200
Subject: [SciPy-user] OpenOpt svn broken on Windows
In-Reply-To: <487E3BAE.1000004@scipy.org>
References: <487E0D85.5040709@scipy.org> <487E1F47.9020303@scipy.org>
	<487E23B2.9060507@scipy.org> <487E28D0.7070505@scipy.org>
	<487E3BAE.1000004@scipy.org>
Message-ID:

>> I didn't say that you shouldn't have done this, you obviously can, you
>> have done it and it works. What I'm saying is that there were several
>> purposes for this registry:
>> - not polluting sys.path
>> - allowing a clean way of adding a solver
>>
> Adding a solver has never been clearer than in the current style: just add
> an _oo file to any directory under /solvers, and it will work. Also, this
> is the way it worked before your changes.

You are still missing the point after so many months...

> __solverPaths__ is exported by runProbSolver.py, and then converted from
> None to a dict. I intended to initialize it in runProbSolver.py, but some
> issues appeared (Python reported errors), so I moved it to ooMisc.py.

It has not been moved to ooMisc. The actual registry is only available in runProbSolver, because you initialize it in runProbSolver and not in ooMisc.

>> I didn't know that there was only one filename. Is it something you
>> build or something ALGENCAN provides?
>>
> The latter: pywrapper.so is generated by ALGENCAN. As I mentioned, I
> could create a workaround, but I have lots of other things to do.

And it is not your job to try to get a workaround (IMHO) ;)

Matthieu
--
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher

From cdcasey at gmail.com Wed Jul 16 15:04:22 2008
From: cdcasey at gmail.com (chris)
Date: Wed, 16 Jul 2008 14:04:22 -0500
Subject: [SciPy-user] [python(x,y)] Call for donations
In-Reply-To:
References: <629b08a40807150338y42dab8bbxab15a7d990afdf26@mail.gmail.com>
	<629b08a40807151053j139b51b8g5992d9859705789c@mail.gmail.com>
Message-ID:

On Wed, Jul 16, 2008 at 6:47 AM, Sebastian Haase wrote:
>
> Did you look into free hosting solutions - like google code !?
>

Or, as far as I know, sourceforge.

From aisaac at american.edu Wed Jul 16 15:09:27 2008
From: aisaac at american.edu (Alan G Isaac)
Date: Wed, 16 Jul 2008 15:09:27 -0400
Subject: [SciPy-user] OpenOpt svn broken on Windows
In-Reply-To: <487E3BAB.3070402@scipy.org>
References: <487E0D85.5040709@scipy.org> <487E1F47.9020303@scipy.org>
	<487E23B2.9060507@scipy.org> <487E3BAB.3070402@scipy.org>
Message-ID:

On Wed, 16 Jul 2008, dmitrey apparently wrote:
> Windows users very seldom download OO via svn

You may have just learned why: it does not work.

What is the problem with the approach I suggested? It looks like 15 minutes (max) to fix something broken, which is 15 minutes well spent. Or am I overlooking another problem?

As I said, fixing this should pay benefits in the future as well, since you can always keep old versions around in numbered form. You are very likely to have this need again.
Cheers, Alan Isaac

From matthieu.brucher at gmail.com Wed Jul 16 15:07:35 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Wed, 16 Jul 2008 21:07:35 +0200
Subject: [SciPy-user] [python(x,y)] Call for donations
In-Reply-To:
References: <629b08a40807150338y42dab8bbxab15a7d990afdf26@mail.gmail.com>
	<629b08a40807151053j139b51b8g5992d9859705789c@mail.gmail.com>
Message-ID:

2008/7/16 chris :
> On Wed, Jul 16, 2008 at 6:47 AM, Sebastian Haase wrote:
>>
>> Did you look into free hosting solutions - like google code !?
>>
> Or, as far as I know, sourceforge.

or Launchpad, or gna, ...

--
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher

From dmitrey.kroshko at scipy.org Wed Jul 16 15:32:56 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Wed, 16 Jul 2008 22:32:56 +0300
Subject: [SciPy-user] OpenOpt svn broken on Windows
In-Reply-To:
References: <487E0D85.5040709@scipy.org> <487E1F47.9020303@scipy.org>
	<487E23B2.9060507@scipy.org> <487E28D0.7070505@scipy.org>
	<487E3BAE.1000004@scipy.org>
Message-ID: <487E4CE8.5000609@scipy.org>

Matthieu Brucher wrote:
>> __solverPaths__ is exported by runProbSolver.py, and then converted from
>> None to a dict. I intended to initialize it in runProbSolver.py, but some
>> issues appeared (Python reported errors), so I moved it to ooMisc.py.
>>
> It has not been moved to ooMisc. The actual registry is only available
> in runProbSolver, because you initialize it in runProbSolver and not in
> ooMisc.
>
What has not been moved to ooMisc?! I said that I encountered bugs while creating __solverPaths__ in runProbSolver, so now it's created in ooMisc (as None). Of course, I then redefine it as a Python dict (in runProbSolver).

>>> I didn't know that there was only one filename. Is it something you
>>> build or something ALGENCAN provides?
>>>
>> The latter: pywrapper.so is generated by ALGENCAN. As I mentioned, I
>> could create a workaround, but I have lots of other things to do.
>>
> And it is not your job to try to get a workaround (IMHO) ;)
>
Neither connecting IPOPT nor the ALGENCAN 2.x version (nor some other work done during my GSoC 2008) is mentioned in my GSoC-assigned schedule (still, I spent some extra time because some OO users were very interested in these issues). So it's not my job to get an ALGENCAN workaround. If my GSoC mentors insist on getting the OO svn to work for Windows without waiting for the next OO release, then to save my time I'll just remove ALGENCAN_oo.py; it will take much less effort.
D.

From pav at iki.fi Wed Jul 16 15:52:54 2008
From: pav at iki.fi (Pauli Virtanen)
Date: Wed, 16 Jul 2008 19:52:54 +0000 (UTC)
Subject: [SciPy-user] odepack or f2py error
References: <1215695635.3278.7.camel@Portable-s2m.cnrs-mrs.fr>
	<1215862349.3033.3.camel@Portable-s2m.cnrs-mrs.fr>
	<1216059141.3007.5.camel@Portable-s2m.cnrs-mrs.fr>
	<1216158677.2943.2.camel@Portable-s2m.cnrs-mrs.fr>
Message-ID:

Tue, 15 Jul 2008 23:51:17 +0200, Fabrice Silva wrote:
> On ticket 696 (http://scipy.org/scipy/scipy/ticket/696), you can find a
> minimal example of code reproducing the problem. It does not use f2py
> anymore, but the trouble seems to be linked to the import of pylab...

gcc bug #36857. Can be worked around by

    export LC_ALL=C

in shell before running your codes. (The scipy ticket can be closed.)
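The same workaround can be applied from inside Python, as long as it happens before anything pulls in gtk. A minimal sketch added for illustration (the LC_NUMERIC-only variant is an assumption about what the Fortran output needs, not part of the advice above):

    import os
    import locale

    # Mirror `export LC_ALL=C` for this process; this must run before the
    # first `import pylab`, because gtk applies the environment via setlocale().
    os.environ["LC_ALL"] = "C"

    # Alternatively, pin only the numeric category so formatted floating-point
    # output keeps '.' as the decimal separator.
    locale.setlocale(locale.LC_NUMERIC, "C")
    assert locale.localeconv()["decimal_point"] == "."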
--
Pauli Virtanen

From matthieu.brucher at gmail.com Wed Jul 16 16:01:30 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Wed, 16 Jul 2008 22:01:30 +0200
Subject: [SciPy-user] OpenOpt svn broken on Windows
In-Reply-To: <487E4CE8.5000609@scipy.org>
References: <487E1F47.9020303@scipy.org> <487E23B2.9060507@scipy.org>
	<487E28D0.7070505@scipy.org> <487E3BAE.1000004@scipy.org>
	<487E4CE8.5000609@scipy.org>
Message-ID:

2008/7/16 dmitrey :
> Matthieu Brucher wrote:
>>> __solverPaths__ is exported by runProbSolver.py, and then converted from
>>> None to a dict. I intended to initialize it in runProbSolver.py, but some
>>> issues appeared (Python reported errors), so I moved it to ooMisc.py.
>>>
>> It has not been moved to ooMisc. The actual registry is only available
>> in runProbSolver, because you initialize it in runProbSolver and not in
>> ooMisc.
>>
> What has not been moved to ooMisc?! I said that I encountered bugs while
> creating __solverPaths__ in runProbSolver, so now it's created in ooMisc
> (as None). Of course, I then redefine it as a Python dict (in runProbSolver).

Try and get the type of ooMisc.__solverPaths__. It will always be None. Why? Because you import the variable None as __solverPaths__ in rPS. Then you modify the reference, but not the underlying variable. Try this:

module1.py:

    s = "Something"

module2.py:

    import module1
    module1.s = "Something else"

module3.py:

    from module1 import s
    s = "Something else"

And now:

    >>> import module1
    >>> module1.s
    'Something'
    >>> import module2
    >>> module1.s
    'Something else'

Try the same with module3:

    >>> import module1
    >>> module1.s
    'Something'
    >>> import module3
    >>> module1.s
    'Something'

This is exactly your case. This question showed up on the python-list ML. At first, I didn't understand either, but if you think about references to variables, you can understand what is really going on.

Matthieu
--
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher

From aisaac at american.edu Wed Jul 16 16:42:52 2008
From: aisaac at american.edu (Alan G Isaac)
Date: Wed, 16 Jul 2008 16:42:52 -0400
Subject: [SciPy-user] OpenOpt svn broken on Windows
In-Reply-To:
References: <487E1F47.9020303@scipy.org><487E23B2.9060507@scipy.org><487E28D0.7070505@scipy.org><487E3BAE.1000004@scipy.org><487E4CE8.5000609@scipy.org>
Message-ID:

On Wed, 16 Jul 2008, Matthieu Brucher apparently wrote:
> you modify the reference, but not the underlying variable.

This may be helpful:
http://effbot.org/zone/import-confusion.htm#what-does-python-do

Importing a module creates a module object and then executes the module's code in this object's namespace. Like other objects, this object allows dynamic attribute assignment. (E.g., you could say ``module1.t='test'`` even though module1 does not define ``t``.) But each module is imported only once (ignoring ``reload``): subsequent import statements reuse the existing module object rather than re-executing its code. One thing that follows is that ``from module1 import s`` is a lot like

    import module1
    s = module1.s

(except that the latter leaves ``module1`` in the namespace).
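The binding behavior described above can be checked without OpenOpt at all. The following self-contained sketch fakes a module in place of ooMisc (the module name and solver path are illustrative only): rebinding a name obtained through ``from ... import`` leaves the module attribute untouched, while mutating a dictionary stored on the module is visible to every importer, which is why a dict-valued registry works where a rebound None does not.

    import sys
    import types

    # A stand-in for ooMisc: a module whose registry attribute starts as None.
    oomisc = types.ModuleType("oomisc_demo")
    oomisc.__solverPaths__ = None
    sys.modules["oomisc_demo"] = oomisc

    # Rebinding a name imported via "from ... import" is purely local:
    from oomisc_demo import __solverPaths__ as paths
    paths = {"ralg": "UkrOpt/ralg_oo.py"}   # rebinds the local name only
    assert oomisc.__solverPaths__ is None   # the module never noticed

    # A dictionary mutated in place, however, is shared by all importers:
    oomisc.__solverPaths__ = {}
    registry = oomisc.__solverPaths__
    registry["ralg"] = "UkrOpt/ralg_oo.py"  # in-place mutation, no rebinding
    assert oomisc.__solverPaths__ == {"ralg": "UkrOpt/ralg_oo.py"}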
fwiw, Alan

From astrofitz at gmail.com Wed Jul 16 17:44:23 2008
From: astrofitz at gmail.com (Michael Fitzgerald)
Date: Wed, 16 Jul 2008 14:44:23 -0700
Subject: [SciPy-user] OpenOpt svn broken on Windows
In-Reply-To:
References:
Message-ID: <2df6e3390807161444x527080b2gf75cbc0cd1f66f55@mail.gmail.com>

On Jul 16, 2008, at 6:33 AM, Dave wrote:

Under BrasilOpt there are two files ALGENCAN_oo.py and algencan_oo.py. Windows filenames are not case sensitive, so this causes the svn checkout to fail.

This is also the case under OS X with most HFS+ file systems (case sensitivity is disabled by default).

Mike
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From silva at lma.cnrs-mrs.fr Wed Jul 16 21:06:19 2008
From: silva at lma.cnrs-mrs.fr (Fabrice Silva)
Date: Thu, 17 Jul 2008 03:06:19 +0200
Subject: [SciPy-user] odepack or f2py error
In-Reply-To:
References: <1215695635.3278.7.camel@Portable-s2m.cnrs-mrs.fr>
	<1215862349.3033.3.camel@Portable-s2m.cnrs-mrs.fr>
	<1216059141.3007.5.camel@Portable-s2m.cnrs-mrs.fr>
	<1216158677.2943.2.camel@Portable-s2m.cnrs-mrs.fr>
Message-ID: <1216256779.3088.6.camel@Portable-s2m.cnrs-mrs.fr>

On Wednesday 16 July 2008 at 19:52 +0000, Pauli Virtanen wrote:
> gcc bug #36857. Can be worked around by
>
> export LC_ALL=C
> in shell before running your codes.
> (The scipy ticket can be closed.)

If this bug is linked to the locale setting and the decimal separator symbol, how can it be explained that without the pylab import the displayed result is

    lsoda--  at current t (=r1), mxstep (=i1) steps
    taken on this call before reaching tout
    In above message,  I1 =  2
    In above message,  R1 =  0.4064399263119E-05
    Excess work done on this call (perhaps wrong Dfun type).
    Run with full_output = 1 to get quantitative information.

and with the pylab import it crashes as follows:

    lsoda--  at current t (=r1), mxstep (=i1) steps
    taken on this call before reaching tout
    In above message,  I1 =  2
    In above message,  R1 =
    At line 1991 of file scipy/integrate/odepack/ddasrt.f (unit = 6, file = 'stdout')
    Internal Error: printf is broken

with the example file Solv.py given in Ticket #696? In the first example, we can see that the separator is '.'. Does pylab modify this desktop setting?

--
Fabrice Silva

From silva at lma.cnrs-mrs.fr Wed Jul 16 21:35:38 2008
From: silva at lma.cnrs-mrs.fr (Fabrice Silva)
Date: Thu, 17 Jul 2008 03:35:38 +0200
Subject: [SciPy-user] odepack or f2py error
In-Reply-To: <1216256779.3088.6.camel@Portable-s2m.cnrs-mrs.fr>
References: <1215695635.3278.7.camel@Portable-s2m.cnrs-mrs.fr>
	<1215862349.3033.3.camel@Portable-s2m.cnrs-mrs.fr>
	<1216059141.3007.5.camel@Portable-s2m.cnrs-mrs.fr>
	<1216158677.2943.2.camel@Portable-s2m.cnrs-mrs.fr>
	<1216256779.3088.6.camel@Portable-s2m.cnrs-mrs.fr>
Message-ID: <1216258538.3088.10.camel@Portable-s2m.cnrs-mrs.fr>

On Thursday 17 July 2008 at 03:06 +0200, Fabrice Silva wrote:
> On Wednesday 16 July 2008 at 19:52 +0000, Pauli Virtanen wrote:
>> gcc bug #36857. Can be worked around by
>>
>> export LC_ALL=C
>> in shell before running your codes.
>> (The scipy ticket can be closed.)
>
> If this bug is linked to the locale setting and the decimal separator symbol,
> how can it be explained that without the pylab import[...]

Another comment on this bug: http://telin.ugent.be/~slippens/drupal/pylabllocaleproblem which is really due to pylab.
--
Fabrice Silva

From robert.kern at gmail.com Wed Jul 16 20:09:59 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 16 Jul 2008 19:09:59 -0500
Subject: [SciPy-user] odepack or f2py error
In-Reply-To: <1216256779.3088.6.camel@Portable-s2m.cnrs-mrs.fr>
References: <1215695635.3278.7.camel@Portable-s2m.cnrs-mrs.fr>
	<1215862349.3033.3.camel@Portable-s2m.cnrs-mrs.fr>
	<1216059141.3007.5.camel@Portable-s2m.cnrs-mrs.fr>
	<1216158677.2943.2.camel@Portable-s2m.cnrs-mrs.fr>
	<1216256779.3088.6.camel@Portable-s2m.cnrs-mrs.fr>
Message-ID: <3d375d730807161709q22b2685el10d32695c0e268c0@mail.gmail.com>

On Wed, Jul 16, 2008 at 20:06, Fabrice Silva wrote:
> On Wednesday 16 July 2008 at 19:52 +0000, Pauli Virtanen wrote:
>> gcc bug #36857. Can be worked around by
>>
>> export LC_ALL=C
>> in shell before running your codes.
>> (The scipy ticket can be closed.)
>
> If this bug is linked to the locale setting and the decimal separator symbol,
> how can it be explained that without the pylab import the displayed result is
>
> lsoda--  at current t (=r1), mxstep (=i1) steps
> taken on this call before reaching tout
> In above message,  I1 =  2
> In above message,  R1 =  0.4064399263119E-05
> Excess work done on this call (perhaps wrong Dfun type).
> Run with full_output = 1 to get quantitative information.
>
> and with the pylab import it crashes as follows:
>
> lsoda--  at current t (=r1), mxstep (=i1) steps
> taken on this call before reaching tout
> In above message,  I1 =  2
> In above message,  R1 =
> At line 1991 of file scipy/integrate/odepack/ddasrt.f (unit = 6, file = 'stdout')
> Internal Error: printf is broken
>
> with the example file Solv.py given in Ticket #696? In the first example,
> we can see that the separator is '.'. Does pylab modify this desktop setting?

Can you try doing this:

    import locale
    encoding = locale.getpreferredencoding()
    # ... whatever crashes

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From matthieu.brucher at gmail.com Thu Jul 17 01:52:37 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Thu, 17 Jul 2008 07:52:37 +0200
Subject: [SciPy-user] OpenOpt svn broken on Windows
In-Reply-To: <487E4CE8.5000609@scipy.org>
References: <487E1F47.9020303@scipy.org> <487E23B2.9060507@scipy.org>
	<487E28D0.7070505@scipy.org> <487E3BAE.1000004@scipy.org>
	<487E4CE8.5000609@scipy.org>
Message-ID:

> Neither connecting IPOPT nor the ALGENCAN 2.x version (nor some
> other work done during my GSoC 2008) is mentioned in my
> GSoC-assigned schedule (still, I spent some extra time because some OO
> users were very interested in these issues). So it's not my job to get
> an ALGENCAN workaround.

I've just noticed this. If you want people to help you, provide a simple way of doing so. Write a little tutorial about registering a basic solver, and then you may get help. I don't think that maintaining a package is only GSoC work; you will always have work that you did not plan to do. But modifying an external package (ALGENCAN) is not the same as enhancing your own package. The first is not your primary job. The latter is.
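As a rough illustration of the kind of registration hook being asked for here, a dict-backed registry can be very small. The sketch below is hypothetical and is not OpenOpt's actual API; all function names and paths are made up.

    # Hypothetical registration API, for illustration only.
    _solver_registry = {}

    def register_solver(name, path):
        """Record where the _oo module for `name` lives."""
        _solver_registry[name] = path

    def solver_path(name):
        try:
            return _solver_registry[name]
        except KeyError:
            raise KeyError("no solver registered under %r" % (name,))

    # A user could then hook in a private solver without touching the SVN:
    register_solver("myralg", "/home/user/solvers/myralg_oo.py")
    assert solver_path("myralg").endswith("myralg_oo.py")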
Matthieu
--
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher

From dmitrey.kroshko at scipy.org Thu Jul 17 02:24:08 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Thu, 17 Jul 2008 09:24:08 +0300
Subject: [SciPy-user] OpenOpt svn broken on Windows
In-Reply-To:
References: <487E1F47.9020303@scipy.org> <487E23B2.9060507@scipy.org>
	<487E28D0.7070505@scipy.org> <487E3BAE.1000004@scipy.org>
	<487E4CE8.5000609@scipy.org>
Message-ID: <487EE588.1000204@scipy.org>

Matthieu Brucher wrote:
> 2008/7/16 dmitrey :
>> Matthieu Brucher wrote:
>>>> __solverPaths__ is exported by runProbSolver.py, and then converted from
>>>> None to a dict. I intended to initialize it in runProbSolver.py, but some
>>>> issues appeared (Python reported errors), so I moved it to ooMisc.py.
>>>>
>>> It has not been moved to ooMisc. The actual registry is only available
>>> in runProbSolver, because you initialize it in runProbSolver and not in
>>> ooMisc.
>>>
>> What has not been moved to ooMisc?! I said that I encountered bugs while
>> creating __solverPaths__ in runProbSolver, so now it's created in ooMisc
>> (as None). Of course, I then redefine it as a Python dict (in runProbSolver).
>
> Try and get the type of ooMisc.__solverPaths__. It will always be
> None. Why? Because you import the variable None as __solverPaths__ in
> rPS. Then you modify the reference, but not the underlying variable.
>
Are you telling me something is wrong with my code? I have no need to import ooMisc.__solverPaths__ each time. It is imported only once per Python session, no matter how many solvers are called, and it is redefined as a dictionary in rPS.py just a single time per session.

> But modifying an external package (ALGENCAN) is not the same as enhancing
> your own package. The first is not your primary job. The latter is.

I just said "it's not my job to get an ALGENCAN workaround". I have to do my GSoC milestones from OOTimeLine and no more, so it is up to me to decide whether or not to take on any additional work, as well as to choose which.
D.

From matthieu.brucher at gmail.com Thu Jul 17 02:31:44 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Thu, 17 Jul 2008 08:31:44 +0200
Subject: [SciPy-user] OpenOpt svn broken on Windows
In-Reply-To: <487EE588.1000204@scipy.org>
References: <487E23B2.9060507@scipy.org> <487E28D0.7070505@scipy.org>
	<487E3BAE.1000004@scipy.org> <487E4CE8.5000609@scipy.org>
	<487EE588.1000204@scipy.org>
Message-ID:

>> Try and get the type of ooMisc.__solverPaths__. It will always be
>> None. Why? Because you import the variable None as __solverPaths__ in
>> rPS. Then you modify the reference, but not the underlying variable.
>>
> Are you telling me something is wrong with my code?

Yes...

> I have no need to import ooMisc.__solverPaths__ each time. It is
> imported only once per Python session, no matter how many solvers are
> called, and it is redefined as a dictionary in rPS.py just a single
> time per session.

What is the purpose of ooMisc.__solverPaths__? If it should be a dictionary once scikits.openopt is imported, you have an error in your program. If it is not, you have some junk code that can be deleted.
Matthieu
--
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher

From dmitrey.kroshko at scipy.org Thu Jul 17 02:43:47 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Thu, 17 Jul 2008 09:43:47 +0300
Subject: [SciPy-user] OpenOpt svn broken on Windows
In-Reply-To:
References: <487E23B2.9060507@scipy.org> <487E28D0.7070505@scipy.org>
	<487E3BAE.1000004@scipy.org> <487E4CE8.5000609@scipy.org>
	<487EE588.1000204@scipy.org>
Message-ID: <487EEA23.1020105@scipy.org>

Matthieu Brucher wrote:
>> I have no need to import ooMisc.__solverPaths__ each time. It is
>> imported only once per Python session, no matter how many solvers are
>> called, and it is redefined as a dictionary in rPS.py just a single
>> time per session.
>>
> What is the purpose of ooMisc.__solverPaths__? If it should be a
> dictionary once scikits.openopt is imported, you have an error in your
> program. If it is not, you have some junk code that can be deleted.
>
Why should it be a dictionary once scikits.openopt is imported? What's wrong now? scikits.openopt doesn't contain __solverPaths__; it's an auxiliary variable used only deep in the kernel, and no one needs to care about its value. Currently I removed the import of __solverPaths__ and added __solverPaths__ = None in the rPS.py file; it works OK with my Python 2.5, but I got an error in 2.4, which is why I moved __solverPaths__ to ooMisc. I intend to remove it (from svn) later, when Python 2.4 is obsolete.
D.

From matthieu.brucher at gmail.com Thu Jul 17 03:02:53 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Thu, 17 Jul 2008 09:02:53 +0200
Subject: [SciPy-user] OpenOpt svn broken on Windows
In-Reply-To: <487EEA23.1020105@scipy.org>
References: <487E28D0.7070505@scipy.org> <487E3BAE.1000004@scipy.org>
	<487E4CE8.5000609@scipy.org> <487EE588.1000204@scipy.org>
	<487EEA23.1020105@scipy.org>
Message-ID:

>> What is the purpose of ooMisc.__solverPaths__? If it should be a
>> dictionary once scikits.openopt is imported, you have an error in your
>> program. If it is not, you have some junk code that can be deleted.
>>
> Why should it be a dictionary once scikits.openopt is imported? What's
> wrong now? scikits.openopt doesn't contain __solverPaths__; it's an
> auxiliary variable used only deep in the kernel, and no one needs to
> care about its value.

There are two parts in what I'm saying:
- the first is that there is a language (Python) misunderstanding, see the link Alan provided earlier in the discussion, probably leading to mistakes in the future;
- the second is that you have to expose it if you want people to help you.

Let me explain this. You are complaining that you have to connect several optimizers to OO yourself. This is sad because, as you've stated, you don't have much time; you have to work on the GSoC. So it is in your best interest, and in the best interest of the OO users, that additional people show up and help you. So what is the best course of action? Not everyone wants to have access to the SVN. Enabling adding solvers on the fly solves this (and you already stated before that you don't like people committing changes to the SVN without your approval, so this IS the only way of doing things properly). People install OO from the latest tarball. Then they can create their own solver, add some tests to check that it is working, and then come back to you.
This way, you will have feedback from the community as well, and you won't feel alone developing the package. So if you want people to help you, please advertise __solverPaths__ and write a small tutorial. If you don't want people helping you, please stop saying that it is not your job to connect solver X or solver Y. It is, as you are the only developer of this specific part of the package.

> Currently I removed the import of __solverPaths__ and added
> __solverPaths__ = None in the rPS.py file; it works OK with my Python 2.5,
> but I got an error in 2.4, which is why I moved __solverPaths__ to ooMisc.

This is strange. I'll try to find out what is happening and I'll let you know ;)

Matthieu
--
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher

From dmitrey.kroshko at scipy.org Thu Jul 17 03:19:12 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Thu, 17 Jul 2008 10:19:12 +0300
Subject: [SciPy-user] OpenOpt svn broken on Windows
In-Reply-To:
References: <487E28D0.7070505@scipy.org> <487E3BAE.1000004@scipy.org>
	<487E4CE8.5000609@scipy.org> <487EE588.1000204@scipy.org>
	<487EEA23.1020105@scipy.org>
Message-ID: <487EF270.3050901@scipy.org>

Matthieu Brucher wrote:
> There are two parts in what I'm saying:
> - the first is that there is a language (Python) misunderstanding, see
> the link Alan provided earlier in the discussion, probably leading to
> mistakes in the future;
> - the second is that you have to expose it if you want people to help you.
>
> Let me explain this.
> You are complaining that you have to connect several optimizers
> yourself to OO.

Matthieu, IIRC I have never complained that I have to connect several optimizers to OO. Nor do I need any help; it would be enough for me to feel OK if nobody committed changes without even notifying me (after which I have to find the bugs). IIRC there was a time when I proposed that you send me a tarball with your proposed changes before committing to svn (to check for bugs), but you ignored it. Things like these drive me crazy much more than doing OO jobs by myself (I prefer the latter).

> This is sad because, as you've stated, you don't have
> much time; you have to work on the GSoC. So it is in your best interest,
> and in the best interest of the OO users, that additional people show up
> and help you.
> So what is the best course of action? Not everyone wants to have
> access to the SVN. Enabling adding solvers on the fly solves this (and
> you already stated before that you don't like people committing
> changes to the SVN without your approval, so this IS the only way of
> doing things properly). People install OO from the latest tarball.
> Then they can create their own solver, add some tests to check that it
> is working, and then come back to you. This way, you will have feedback
> from the community as well, and you won't feel alone developing the
> package.
>
> So if you want people to help you, please advertise __solverPaths__
> and write a small tutorial.
> If you don't want people helping you, please stop saying that it is
> not your job to connect solver X or solver Y.

Please give me a link to my message where I say "it is not my job to connect solver X or solver Y".

> It is, as you are the
> only developer of this specific part of the package.
>
>> Currently I removed the import of __solverPaths__ and added __solverPaths__ = None
>> in the rPS.py file; it works OK with my Python 2.5, but I got an error in
>> 2.4, which is why I moved __solverPaths__ to ooMisc. I intend to
>> remove it (from svn) later, when Python 2.4 is obsolete.
>>
> This is strange. I'll try to find out what is happening and I'll let you know ;)
>
> Matthieu
>
Let me also remind you that OO was started even before GSoC 2007, and in the GSoC 2008 rules (Python dept) they clarified that all code produced by a student is his own (though it should be available under a BSD license). If you want to fork openopt into another package and develop that one by yourself - please inform me ASAP.
D.

From matthieu.brucher at gmail.com Thu Jul 17 03:37:30 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Thu, 17 Jul 2008 09:37:30 +0200
Subject: [SciPy-user] OpenOpt svn broken on Windows
In-Reply-To: <487EF270.3050901@scipy.org>
References: <487E3BAE.1000004@scipy.org> <487E4CE8.5000609@scipy.org>
	<487EE588.1000204@scipy.org> <487EEA23.1020105@scipy.org>
	<487EF270.3050901@scipy.org>
Message-ID:

>> Let me explain this.
>> You are complaining that you have to connect several optimizers
>> yourself to OO.
> Matthieu, IIRC I have never complained that I have to connect
> several optimizers to OO.

This is what you were saying some hours ago:

"> And it is not your job to try to get a workaround (IMHO) ;)
> Neither connecting IPOPT nor the ALGENCAN 2.x version (nor some
other work done during my GSoC 2008) is mentioned in my
GSoC-assigned schedule (still, I spent some extra time because some OO
users were very interested in these issues). So it's not my job to get
an ALGENCAN workaround."

Sorry, but for me, this is complaining. And I'm not talking about the mails you sent some months ago (in which you were saying that you wanted to get paid to connect some solvers).

> Please give me a link to my message where I say "it is not my job to
> connect solver X or solver Y".

See my quote above.

> Let me also remind you that OO was started even before GSoC 2007,
> and in the GSoC 2008 rules (Python dept) they clarified that all code
> produced by a student is his own (though it should be available under
> a BSD license). If you want to fork openopt into another package and
> develop that one by yourself - please inform me ASAP.

Let me also inform you that OO is a scikit to which I contributed a lot as well, and to which I will keep contributing (the generic framework + some other lines). But I'll stop there because, as always, it is not possible to discuss anything with you. I only wanted to provide you with a simple and efficient way to get additional contributions.

Matthieu
--
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher

From pav at iki.fi Thu Jul 17 03:49:45 2008
From: pav at iki.fi (Pauli Virtanen)
Date: Thu, 17 Jul 2008 07:49:45 +0000 (UTC)
Subject: [SciPy-user] odepack or f2py error
References: <1215695635.3278.7.camel@Portable-s2m.cnrs-mrs.fr>
	<1215862349.3033.3.camel@Portable-s2m.cnrs-mrs.fr>
	<1216059141.3007.5.camel@Portable-s2m.cnrs-mrs.fr>
	<1216158677.2943.2.camel@Portable-s2m.cnrs-mrs.fr>
	<1216256779.3088.6.camel@Portable-s2m.cnrs-mrs.fr>
Message-ID:

Thu, 17 Jul 2008 03:06:19 +0200, Fabrice Silva wrote:
> On Wednesday 16 July 2008 at 19:52 +0000, Pauli Virtanen wrote:
>> gcc bug #36857.
>> Can be worked around by
>>
>> export LC_ALL=C
>> in shell before running your codes.
>> (The scipy ticket can be closed.)
>
> If this bug is linked to the locale setting and the decimal separator symbol,
> how can it be explained that without the pylab import the displayed result is
[clip]
> and with the pylab import it crashes as follows:
[clip]

As said in the ticket comments, pylab imports gtk, which calls setlocale and sets the active locale according to the environment variables. This does not occur if you don't import pylab or call setlocale through some other route.

--
Pauli Virtanen

From robert.kern at gmail.com Thu Jul 17 03:50:25 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Thu, 17 Jul 2008 02:50:25 -0500
Subject: [SciPy-user] OpenOpt svn broken on Windows
In-Reply-To:
References: <487E4CE8.5000609@scipy.org> <487EE588.1000204@scipy.org>
	<487EEA23.1020105@scipy.org> <487EF270.3050901@scipy.org>
Message-ID: <3d375d730807170050m15aebd4es93c37a0258b8609d@mail.gmail.com>

I have asked both of you before to take your personal arguments off the list. Keep it civil, or keep it in private email.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From dmitrey.kroshko at scipy.org Thu Jul 17 03:55:15 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Thu, 17 Jul 2008 10:55:15 +0300
Subject: [SciPy-user] OpenOpt svn broken on Windows
In-Reply-To:
References: <487E3BAE.1000004@scipy.org> <487E4CE8.5000609@scipy.org>
	<487EE588.1000204@scipy.org> <487EEA23.1020105@scipy.org>
	<487EF270.3050901@scipy.org>
Message-ID: <487EFAE3.9020503@scipy.org>

Matthieu Brucher wrote:
>>> Let me explain this.
>>> You are complaining that you have to connect several optimizers
>>> yourself to OO.
>>
>> Matthieu, IIRC I have never complained that I have to connect
>> several optimizers to OO.
>>
> This is what you were saying some hours ago:
>
> "> And it is not your job to try to get a workaround (IMHO) ;)
> > Neither connecting IPOPT nor the ALGENCAN 2.x version (nor some
> other work done during my GSoC 2008) is mentioned in my
> GSoC-assigned schedule (still, I spent some extra time because some OO
> users were very interested in these issues). So it's not my job to get
> an ALGENCAN workaround."
>
> Sorry, but for me, this is complaining.

Maybe for you, but not for me (and, I guess, not for others as well).

> And I'm not talking about the
> mails you sent some months ago (in which you were saying that you
> wanted to get paid to connect some solvers).
>
Why not talk about them? Give a link to the messages (because I can't remember them) and let's discuss it.

>> Please give me a link to my message where I say "it is not my job to
>> connect solver X or solver Y".
>>
> See my quote above.
>
I see your quote above and don't find those words in it.

>> Let me also remind you that OO was started even before GSoC 2007,
>> and in the GSoC 2008 rules (Python dept) they clarified that all code
>> produced by a student is his own (though it should be available under
>> a BSD license). If you want to fork openopt into another package and
>> develop that one by yourself - please inform me ASAP.
>>
> Let me also inform you that OO is a scikit to which I contributed a
> lot as well, and to which I will keep contributing (the generic framework
> + some other lines).
>
Let me remind you that when I was urged to connect GenericOpt to OpenOpt, I said "GenericOpt should stay available as a separate package", and I obtained agreement (and AFAIK it remains available as a separate package). So please, enough speculating about GenericOpt. You have a GenericOpt page on the scikits web server; at any time you can remove GenericOpt from the OpenOpt svn and continue to develop and use it as a standalone package (I would gladly hear of such a decision from you). The situation is as if I committed something to the Linux kernel and then committed whatever I wanted to any Linux files, saying "I'm a Linux developer too; I have committed something, so I will continue more and more, no matter what you think".
D.

From gael.varoquaux at normalesup.org Thu Jul 17 04:02:28 2008
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Thu, 17 Jul 2008 10:02:28 +0200
Subject: [SciPy-user] OpenOpt svn broken on Windows
In-Reply-To: <487EFAE3.9020503@scipy.org>
References: <487E4CE8.5000609@scipy.org> <487EE588.1000204@scipy.org>
	<487EEA23.1020105@scipy.org> <487EF270.3050901@scipy.org>
	<487EFAE3.9020503@scipy.org>
Message-ID: <20080717080228.GD616@phare.normalesup.org>

Guys. Quit it.

Gaël

From silva at lma.cnrs-mrs.fr Thu Jul 17 06:52:15 2008
From: silva at lma.cnrs-mrs.fr (Fabrice Silva)
Date: Thu, 17 Jul 2008 12:52:15 +0200
Subject: [SciPy-user] odepack or f2py error
In-Reply-To: <3d375d730807161709q22b2685el10d32695c0e268c0@mail.gmail.com>
References: <1215695635.3278.7.camel@Portable-s2m.cnrs-mrs.fr>
	<1215862349.3033.3.camel@Portable-s2m.cnrs-mrs.fr>
	<1216059141.3007.5.camel@Portable-s2m.cnrs-mrs.fr>
	<1216158677.2943.2.camel@Portable-s2m.cnrs-mrs.fr>
	<1216256779.3088.6.camel@Portable-s2m.cnrs-mrs.fr>
	<3d375d730807161709q22b2685el10d32695c0e268c0@mail.gmail.com>
Message-ID: <1216291935.2996.9.camel@Portable-s2m.cnrs-mrs.fr>

On Wednesday 16 July 2008 at 19:09 -0500, Robert Kern wrote:
> Can you try doing this:
> import locale
> encoding = locale.getpreferredencoding()

'Preferred encoding' seems not to be affected by the pylab import.

    import locale
    print locale.getpreferredencoding()
    import pylab
    print locale.getpreferredencoding()

gives

    UTF-8
    UTF-8

and the script crashes or not just the same, depending on whether pylab is imported. Is that what you were expecting?

--
Fabrice Silva

From stefan at sun.ac.za Thu Jul 17 09:08:02 2008
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Thu, 17 Jul 2008 15:08:02 +0200
Subject: [SciPy-user] patch for ndimage.generic_filter
In-Reply-To: <477CA520-C850-47AD-9888-92DDC968BBF3@gmail.com>
References: <477CA520-C850-47AD-9888-92DDC968BBF3@gmail.com>
Message-ID: <9457e7c80807170608m1ae4daeap8aeb18b52b18b4bb@mail.gmail.com>

Hey Mike,

2008/7/11 Michael Fitzgerald :
> I found a small bug in ndimage.generic_filter() that arises when using the
> 'size' keyword with multidimensional arrays. Patch attached.

The bug you found indicates that we have inadequate unit tests. Would you mind writing one that exercises generic_filter, so that this doesn't happen again?
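A test along the requested lines might look like the sketch below; the test name and the comparison against uniform_filter are suggestions, not the test that was actually committed.

    import numpy as np
    from scipy import ndimage

    def test_generic_filter_size_2d():
        # Exercise the 'size' keyword on a 2-D input; a mean computed via
        # generic_filter should agree with the dedicated uniform_filter.
        a = np.arange(12, dtype=np.float64).reshape(3, 4)
        filtered = ndimage.generic_filter(a, np.mean, size=2)
        expected = ndimage.uniform_filter(a, size=2)
        assert filtered.shape == a.shape
        np.testing.assert_allclose(filtered, expected)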
Thanks
Stéfan

From aisaac at american.edu Thu Jul 17 11:54:47 2008
From: aisaac at american.edu (Alan G Isaac)
Date: Thu, 17 Jul 2008 11:54:47 -0400
Subject: [SciPy-user] OpenOpt svn broken on Windows
In-Reply-To: <487F6203.9030403@ukr.net>
References: <487E3BAE.1000004@scipy.org> <487E4CE8.5000609@scipy.org>
	<487EE588.1000204@scipy.org> <487EEA23.1020105@scipy.org>
	<487EF270.3050901@scipy.org> <487EFAE3.9020503@scipy.org>
	<487F6203.9030403@ukr.net>
Message-ID:

OpenOpt SVN is once again usable by Windows and Mac users. (Thanks Dmitrey!) Please report any problems.

Cheers, Alan Isaac

From dmitrey.kroshko at scipy.org Thu Jul 17 12:46:57 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Thu, 17 Jul 2008 19:46:57 +0300
Subject: [SciPy-user] OpenOpt svn broken on Windows
In-Reply-To:
References: <487E3BAE.1000004@scipy.org> <487E4CE8.5000609@scipy.org>
	<487EE588.1000204@scipy.org> <487EEA23.1020105@scipy.org>
	<487EF270.3050901@scipy.org> <487EFAE3.9020503@scipy.org>
	<487F6203.9030403@ukr.net>
Message-ID: <487F7781.3040904@scipy.org>

Alan G Isaac wrote:
> OpenOpt SVN is once again usable by Windows and Mac users.
> (Thanks Dmitrey!) Please report any problems.
>
However, there is no reason to thank me, because I just removed ALGENCAN_oo.py from the repository, as mentioned in the OO blog. So now only ALGENCAN v 2.0.3 or later can be used.

Regards, D.

From forrest at physics.Auburn.EDU Thu Jul 17 16:53:35 2008
From: forrest at physics.Auburn.EDU (Phil Forrest)
Date: Thu, 17 Jul 2008 15:53:35 -0500
Subject: [SciPy-user] sparsetools involvement? (was -lf77compat build failure)
Message-ID: <001301c8e84f$2dfdd750$89f985f0$@auburn.edu>

Hey Guys,

After reviewing the compilation failures, Robert Kern suggested I post the following error messages to this list. Again, the original problem is that no "f77 compatibility" library is included with Sun Studio 12 (8.3) - at least on Sun Solaris x86. The lack of this library caused the scipy build to fail.

Below is a portion of the error messages from building scipy with "f77compat" REMOVED from sun.py in the numpy tree/build.

Thanks, Phil

> building 'scipy.sparse._sparsetools' extension
> compiling C++ sources
> C compiler: /usr/lib/python2.4/pyCC -DNDEBUG
>
> compile options: '-Iscipy/sparse/sparsetools
> -I/usr/lib/python2.4/site-packages/numpy/core/include
> -I/usr/include/python2.4 -c'
> pyCC: scipy/sparse/sparsetools/sparsetools_wrap.cxx
> "/usr/lib/python2.4/site-packages/numpy/core/include/numpy/__multiarray_api.h", line 960: Warning: String literal converted to char* in formal argument name in call to PyImport_ImportModule(char*).
> "/usr/lib/python2.4/site-packages/numpy/core/include/numpy/__multiarray_api.h", line 963: Warning: String literal converted to char* in formal argument 2 in call to PyObject_GetAttrString(_object*, char*).
> "scipy/sparse/sparsetools/sparsetools_wrap.cxx", line 2602: Warning: String literal converted to char* in initialization.
> "scipy/sparse/sparsetools/sparsetools_wrap.cxx", line 2603: Warning: String literal converted to char* in initialization.
> "scipy/sparse/sparsetools/sparsetools_wrap.cxx", line 2604: Warning: String literal converted to char* in initialization.
> "scipy/sparse/sparsetools/sparsetools_wrap.cxx", line 2605: Warning: String literal converted to char* in initialization.
> "scipy/sparse/sparsetools/sparsetools_wrap.cxx", line 2606: Warning: String literal converted to char* in initialization.
> "scipy/sparse/sparsetools/sparsetools_wrap.cxx", line 2607: Warning: String literal converted to char* in initialization.
> "scipy/sparse/sparsetools/sparsetools_wrap.cxx", line 2607: Warning: > String literal converted to char* in initialization. > "scipy/sparse/sparsetools/sparsetools_wrap.cxx", line 2608: Warning: > String literal converted to char* in initialization. > "scipy/sparse/sparsetools/sparsetools_wrap.cxx", line 2609: Warning: > String literal converted to char* in initialization. > "scipy/sparse/sparsetools/sparsetools_wrap.cxx", line 2610: Warning: > String literal converted to char* in initialization. > "scipy/sparse/sparsetools/sparsetools_wrap.cxx", line 2611: Warning: > String literal converted to char* in initialization. > "scipy/sparse/sparsetools/sparsetools_wrap.cxx", line 2612: Warning: > String literal converted to char* in initialization. > "scipy/sparse/sparsetools/sparsetools_wrap.cxx", line 2614: Warning: > String literal converted to char* in initialization. > "scipy/sparse/sparsetools/sparsetools.h", line 409: Error: multiplies > is not a member of std. > "scipy/sparse/sparsetools/sparsetools_wrap.cxx", line 15087: Where: > While instantiating "csr_elmul_csr(const int, const int, > const int*, const int*, const int*, const int*, const int*, const > int*, std::vector*, std::vector*, std::vector*)". > "scipy/sparse/sparsetools/sparsetools_wrap.cxx", line 15087: Where: > Instantiated from non-template code. > "scipy/sparse/sparsetools/sparsetools.h", line 409: Error: Unexpected > type name "T" encountered. > "scipy/sparse/sparsetools/sparsetools_wrap.cxx", line 15087: Where: > While instantiating "csr_elmul_csr(const int, const int, > const int*, const int*, const int*, const int*, const int*, const > int*, std::vector*, std::vector*, std::vector*)". > "scipy/sparse/sparsetools/sparsetools_wrap.cxx", line 15087: Where: > Instantiated from non-template code. > "scipy/sparse/sparsetools/sparsetools.h", line 409: Error: Operand > expected instead of ")". > "scipy/sparse/sparsetools/sparsetools_wrap.cxx", line 15087: Where: > While instantiating "csr_elmul_csr(const int, const int, > const int*, const int*, const int*, const int*, const int*, const > int*, std::vector*, std::vector*, std::vector*)". > "scipy/sparse/sparsetools/sparsetools_wrap.cxx", line 15087: Where: > Instantiated from non-template code. > 3 Error(s) and 14 Warning(s) detected. > "/usr/lib/python2.4/site-packages/numpy/core/include/numpy/__multiarray_api. > h", line 960: Warning: String literal converted to char* in formal > argument name in call to PyImport_ImportModule(char*). > "/usr/lib/python2.4/site-packages/numpy/core/include/numpy/__multiarray_api. > h", line 963: Warning: String literal converted to char* in formal > argument > 2 in call to PyObject_GetAttrString(_object*, char*). > "scipy/sparse/sparsetools/sparsetools_wrap.cxx", line 2602: Warning: > String literal converted to char* in initialization. > "scipy/sparse/sparsetools/sparsetools_wrap.cxx", line 2603: Warning: > String literal converted to char* in initialization. > "scipy/sparse/sparsetools/sparsetools_wrap.cxx", line 2604: Warning: > String literal converted to char* in initialization. > "scipy/sparse/sparsetools/sparsetools_wrap.cxx", line 2605: Warning: > String literal converted to char* in initialization. > "scipy/sparse/sparsetools/sparsetools_wrap.cxx", line 2606: Warning: > String literal converted to char* in initialization. > "scipy/sparse/sparsetools/sparsetools_wrap.cxx", line 2607: Warning: > String literal converted to char* in initialization. 
> "scipy/sparse/sparsetools/sparsetools_wrap.cxx", line 2608: Warning: > String literal converted to char* in initialization. > "scipy/sparse/sparsetools/sparsetools_wrap.cxx", line 2609: Warning: > String literal converted to char* in initialization. > "scipy/sparse/sparsetools/sparsetools_wrap.cxx", line 2610: Warning: > String literal converted to char* in initialization. > "scipy/sparse/sparsetools/sparsetools_wrap.cxx", line 2611: Warning: > String literal converted to char* in initialization. > "scipy/sparse/sparsetools/sparsetools_wrap.cxx", line 2612: Warning: > String literal converted to char* in initialization. > "scipy/sparse/sparsetools/sparsetools_wrap.cxx", line 2614: Warning: > String literal converted to char* in initialization. > "scipy/sparse/sparsetools/sparsetools.h", line 409: Error: multiplies > is not a member of std. > "scipy/sparse/sparsetools/sparsetools_wrap.cxx", line 15087: Where: > While instantiating "csr_elmul_csr(const int, const int, > const int*, const int*, const int*, const int*, const int*, const > int*, std::vector*, std::vector*, std::vector*)". > "scipy/sparse/sparsetools/sparsetools_wrap.cxx", line 15087: Where: > Instantiated from non-template code. > "scipy/sparse/sparsetools/sparsetools.h", line 409: Error: Unexpected > type name "T" encountered. > "scipy/sparse/sparsetools/sparsetools_wrap.cxx", line 15087: Where: > While instantiating "csr_elmul_csr(const int, const int, > const int*, const int*, const int*, const int*, const int*, const > int*, std::vector*, std::vector*, std::vector*)". > "scipy/sparse/sparsetools/sparsetools_wrap.cxx", line 15087: Where: > Instantiated from non-template code. > "scipy/sparse/sparsetools/sparsetools.h", line 409: Error: Operand > expected instead of ")". > "scipy/sparse/sparsetools/sparsetools_wrap.cxx", line 15087: Where: > While instantiating "csr_elmul_csr(const int, const int, > const int*, const int*, const int*, const int*, const int*, const > int*, std::vector*, std::vector*, std::vector*)". > "scipy/sparse/sparsetools/sparsetools_wrap.cxx", line 15087: Where: > Instantiated from non-template code. > 3 Error(s) and 14 Warning(s) detected. > error: Command "/usr/lib/python2.4/pyCC -DNDEBUG > -Iscipy/sparse/sparsetools > -I/usr/lib/python2.4/site-packages/numpy/core/include > -I/usr/include/python2.4 -c > scipy/sparse/sparsetools/sparsetools_wrap.cxx -o build/temp.solaris-2.10-i86pc-2.4/scipy/sparse/sparsetools/sparsetools_wrap. > o" failed with exit status 3 > # From washakie at gmail.com Fri Jul 18 01:37:31 2008 From: washakie at gmail.com (John [H2O]) Date: Thu, 17 Jul 2008 22:37:31 -0700 (PDT) Subject: [SciPy-user] help assigning arrays - global model output Message-ID: <18522938.post@talk.nabble.com> Hello, I am trying to fill an array reading in unformatted binary data :-/ I actually have most the code working now, and I have a F2Py module which reads the binary efficiently. My problem is assigning the arrays into the python multidimensional array. This works: G={} for date in dates: data = F2Py.module.getdata() # data is shape (180,360,3) G[date] = data What I WANT is this: G=zeros((180,360,3,len(dates))) for i in range(len(dates)): data = F2Py.module.getdata() # data is shape (180,360,3) G[:,:,:,i] = data But I get a shape mismatch error, something about not broadcasting an array.... Could someone please provide comment on how I need to do this assignment? Thank you. 
--
View this message in context: http://www.nabble.com/help-assigning-arrays---global-model-output-tp18522938p18522938.html
Sent from the Scipy-User mailing list archive at Nabble.com.

From robert.kern at gmail.com Fri Jul 18 01:40:24 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 18 Jul 2008 00:40:24 -0500
Subject: [SciPy-user] help assigning arrays - global model output
In-Reply-To: <18522938.post@talk.nabble.com>
References: <18522938.post@talk.nabble.com>
Message-ID: <3d375d730807172240i33727681g508a8a6003da222a@mail.gmail.com>

On Fri, Jul 18, 2008 at 00:37, John [H2O] wrote:
>
> Hello, I am trying to fill an array by reading in unformatted binary data :-/
>
> I actually have most of the code working now, and I have an F2Py module
> which reads the binary efficiently. My problem is assigning the arrays
> into the Python multidimensional array.
>
> This works:
>
>     G = {}
>     for date in dates:
>         data = F2Py.module.getdata()
>         # data is shape (180,360,3)
>         G[date] = data
>
> What I WANT is this:
>
>     G = zeros((180,360,3,len(dates)))
>     for i in range(len(dates)):
>         data = F2Py.module.getdata()
>         # data is shape (180,360,3)
>         G[:,:,:,i] = data
>
> But I get a shape mismatch error, something about not being able to
> broadcast an array...
>
> Could someone please provide comment on how I need to do this assignment?

Please double-check your shapes by printing out the actual shapes:

    G = zeros((180,360,3,len(dates)))
    for i in range(len(dates)):
        data = F2Py.module.getdata()
        print 'data: %s' % (data.shape,)
        print 'G[:,:,:,i]: %s' % (G[:,:,:,i].shape,)
        # data is shape (180,360,3)
        G[:,:,:,i] = data

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From washakie at gmail.com Fri Jul 18 02:07:14 2008
From: washakie at gmail.com (John [H2O])
Date: Thu, 17 Jul 2008 23:07:14 -0700 (PDT)
Subject: [SciPy-user] help assigning arrays - global model output
In-Reply-To: <3d375d730807172240i33727681g508a8a6003da222a@mail.gmail.com>
References: <18522938.post@talk.nabble.com>
	<3d375d730807172240i33727681g508a8a6003da222a@mail.gmail.com>
Message-ID: <18523212.post@talk.nabble.com>

Robert Kern-2 wrote:
>
> Please double-check your shapes by printing out the actual shapes:
>

Seems I must have been doing something wrong earlier! You're correct: I just rewrote it in the 'old' (desired) way without using the dict, and it works! Now I just need to figure out which dimensions to sum across.

Mind providing some insight on how to generate the average? What I have is an array with 20 days' worth of global model output at three layers (180,360,3), so my array is actually shape (180,360,3,20). I want the average over the 20 days of the sum of the three vertical layers. I would also like the complete sum. Thanks in advance if you're so inclined!
--
View this message in context: http://www.nabble.com/help-assigning-arrays---global-model-output-tp18522938p18523212.html
Sent from the Scipy-User mailing list archive at Nabble.com.
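The axis bookkeeping in the replies that follow is easy to verify on a scaled-down array. In this sketch the axes are ordered (lat, lon, layer, day), matching the (180, 360, 3, 20) layout above, with the first two dimensions shrunk for speed:

    import numpy as np

    G = np.random.rand(4, 5, 3, 20)   # small stand-in for (180, 360, 3, 20)

    layer_sum = G.sum(axis=2)                       # (4, 5, 20): layers summed, per day
    mean_of_layer_sum = G.sum(axis=2).mean(axis=2)  # (4, 5): daily mean of the layer sum
    grand_sum = G.sum(axis=2).sum(axis=2)           # (4, 5): summed over layers and days
    day_mean = G.mean(axis=3)                       # (4, 5, 3): mean over days, per layer

    assert layer_sum.shape == (4, 5, 20)
    assert mean_of_layer_sum.shape == (4, 5)
    assert grand_sum.shape == (4, 5)
    assert day_mean.shape == (4, 5, 3)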
From robert.kern at gmail.com Fri Jul 18 02:30:10 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 18 Jul 2008 01:30:10 -0500
Subject: [SciPy-user] help assigning arrays - global model output
In-Reply-To: <18523212.post@talk.nabble.com>
References: <18522938.post@talk.nabble.com>
	<3d375d730807172240i33727681g508a8a6003da222a@mail.gmail.com>
	<18523212.post@talk.nabble.com>
Message-ID: <3d375d730807172330h440cd0f6ya3dc9f6292fc813e@mail.gmail.com>

On Fri, Jul 18, 2008 at 01:07, John [H2O] wrote:
> Robert Kern-2 wrote:
>> Please double-check your shapes by printing out the actual shapes:
>
> Seems I must have been doing something wrong earlier! You're correct: I just
> rewrote it in the 'old' (desired) way without using the dict, and it works!
> Now I just need to figure out which dimensions to sum across.
>
> Mind providing some insight on how to generate the average? What I have is
> an array with 20 days' worth of global model output at three layers
> (180,360,3), so my array is actually shape (180,360,3,20). I want the
> average over the 20 days of the sum of the three vertical layers.

G.sum(axis=2).mean(axis=2)

> I would also like
> the complete sum.

Do you mean the sum over all of the layers and all of the days?

G.sum(axis=2).sum(axis=2)

Or did you mean something else?

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco

From washakie at gmail.com Fri Jul 18 02:39:49 2008
From: washakie at gmail.com (John [H2O])
Date: Thu, 17 Jul 2008 23:39:49 -0700 (PDT)
Subject: [SciPy-user] sum / average of multidimensional arrays
Message-ID: <18523480.post@talk.nabble.com>

Hello, I have an array with 20 days' worth of global model output at three layers (180,360,3), so my array is actually shape (180,360,3,20).

I'm trying to produce a few new arrays:

1) The sum of the three layers for all 20 days, so shape (180,360,20)

and

2) The sum of all 20 days AND all three layers, so shape (180,360)

and

3) The average of all 20 days at each layer, so shape (180,360,3)

Anyone mind providing some insight? Thanks!
--
View this message in context: http://www.nabble.com/sum---average-of-multidimensional-arrays-tp18523480p18523480.html
Sent from the Scipy-User mailing list archive at Nabble.com.

From lbolla at gmail.com Fri Jul 18 04:07:35 2008
From: lbolla at gmail.com (lorenzo bolla)
Date: Fri, 18 Jul 2008 09:07:35 +0100
Subject: [SciPy-user] sum / average of multidimensional arrays
In-Reply-To: <18523480.post@talk.nabble.com>
References: <18523480.post@talk.nabble.com>
Message-ID: <80c99e790807180107i489b0d2fr874f784e16413172@mail.gmail.com>

You can use the "axis" keyword in numpy.sum() and numpy.mean().
L.

On Fri, Jul 18, 2008 at 7:39 AM, John [H2O] wrote:
>
> Hello, I have an array with 20 days' worth of global model output at three
> layers (180,360,3), so my array is actually shape (180,360,3,20).
>
> I'm trying to produce a few new arrays:
>
> 1) The sum of the three layers for all 20 days, so shape (180,360,20)
>
> and
>
> 2) The sum of all 20 days AND all three layers, so shape (180,360)
>
> and
>
> 3) The average of all 20 days at each layer, so shape (180,360,3)
>
> Anyone mind providing some insight? Thanks!
> --
> View this message in context:
> http://www.nabble.com/sum---average-of-multidimensional-arrays-tp18523480p18523480.html
> Sent from the Scipy-User mailing list archive at Nabble.com.
-- "Whereof one cannot speak, thereof one must be silent." -- Ludwig Wittgenstein -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Fri Jul 18 06:14:37 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 18 Jul 2008 12:14:37 +0200 Subject: [SciPy-user] OpenOpt svn broken on Windows In-Reply-To: <487F7781.3040904@scipy.org> References: <487E3BAE.1000004@scipy.org> <487E4CE8.5000609@scipy.org> <487EE588.1000204@scipy.org> <487EEA23.1020105@scipy.org> <487EF270.3050901@scipy.org> <487EFAE3.9020503@scipy.org> <487F6203.9030403@ukr.net> <487F7781.3040904@scipy.org> Message-ID: On Thu, 17 Jul 2008 19:46:57 +0300 dmitrey wrote: > Alan G Isaac wrote: >> OpenOpt SVN is once again usable by Windows and Mac users >> (Thanks Dmitrey!) Please report any problems. >> > however, there is no reason to thank me, because I just removed > ALGENCAN_oo.py from the repository, as has been mentioned in the OO blog. So > now only ALGENCAN v 2.0.3 or later can be used. > > Regards, D. Hi Dmitrey, I have some trouble installing the new ALGENCAN version on my linux box. Please can you add some notes to your blog. Thanks in advance Nils From dmitrey.kroshko at scipy.org Fri Jul 18 07:24:22 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Fri, 18 Jul 2008 14:24:22 +0300 Subject: [SciPy-user] OpenOpt svn broken on Windows In-Reply-To: References: <487E3BAE.1000004@scipy.org> <487E4CE8.5000609@scipy.org> <487EE588.1000204@scipy.org> <487EEA23.1020105@scipy.org> <487EF270.3050901@scipy.org> <487EFAE3.9020503@scipy.org> <487F6203.9030403@ukr.net> <487F7781.3040904@scipy.org> Message-ID: <48807D66.2060002@scipy.org> Hi Nils, without details on the trouble you have encountered I can hardly solve it. Have you looked at the detailed instructions here: http://openopt.blogspot.com/2008/06/connecting-algencan-2x-beta.html ? The chief idea is to ensure you have pywrapper.so on your PYTHONPATH and that the old pywrapper.so from algencan v 1.0 has been removed. I will mention this on the NLP page. Regards, D. Nils Wagner wrote: > Hi Dmitrey, > > I have some trouble installing the new ALGENCAN version > on my linux box. > > Please can you add some notes to your blog. > > Thanks in advance > > Nils > > From wnbell at gmail.com Fri Jul 18 09:37:46 2008 From: wnbell at gmail.com (Nathan Bell) Date: Fri, 18 Jul 2008 08:37:46 -0500 Subject: [SciPy-user] sparsetools involvement? (was -lf77compat build failure) In-Reply-To: <001301c8e84f$2dfdd750$89f985f0$@auburn.edu> References: <001301c8e84f$2dfdd750$89f985f0$@auburn.edu> Message-ID: On Thu, Jul 17, 2008 at 3:53 PM, Phil Forrest wrote: > > After reviewing the compilation failures, Robert Kern suggested I post the > following error messages to this list. Again, the original problem has to do > with no "f77 compatibility" library included with Sun Studio 12 (8.3) - at > least on Sun Solaris x86. The lack of this library caused the scipy build to > fail. > > Below is a portion of the error messages from building scipy with > "f77compat" REMOVED from sun.py in the numpy tree/build. > Phil, could you try compiling SciPy from SVN? Much of sparsetools has changed since the last release. I believe there was a missing #include in sparsetools.h in the release you're using.
-- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From washakie at gmail.com Fri Jul 18 12:50:44 2008 From: washakie at gmail.com (John [H2O]) Date: Fri, 18 Jul 2008 09:50:44 -0700 (PDT) Subject: [SciPy-user] sum / average of multidimensional arrays In-Reply-To: <80c99e790807180107i489b0d2fr874f784e16413172@mail.gmail.com> References: <18523480.post@talk.nabble.com> <80c99e790807180107i489b0d2fr874f784e16413172@mail.gmail.com> Message-ID: <18533448.post@talk.nabble.com> lorenzo bolla wrote: > > You can use the "axis" keyword in numpy.sum() and numpy.mean(). > L. > Thank you. -- View this message in context: http://www.nabble.com/sum---average-of-multidimensional-arrays-tp18523480p18533448.html Sent from the Scipy-User mailing list archive at Nabble.com. From Dharhas.Pothina at twdb.state.tx.us Fri Jul 18 15:05:34 2008 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Fri, 18 Jul 2008 14:05:34 -0500 Subject: [SciPy-user] Help interpolating equally spaced 2D gridded data to a different grid. Message-ID: <4880A32E0200009B00014590@GWWEB.twdb.state.tx.us> Hi, I have a regular grid of data : lat_old.shape = (277, 349) lon_old.shape = (277, 349) precip_old.shape = (8, 277, 349) # the first axis is time I need to interpolate precip_old to a new regular grid (lat,lon) where lat.shape = (204, 160) lon.shape = (204, 160) Basically I need to calculate precip_new where precip_new.shape = (8, 204, 160) I've had a look at the cookbook page for 'N-D interpolation for equally-spaced data' and also some of the documentation for interp2D etc but am having trouble understanding what I need to do. Any help is appreciated. thanks - dharhas From dmitrey.kroshko at scipy.org Fri Jul 18 15:08:37 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Fri, 18 Jul 2008 22:08:37 +0300 Subject: [SciPy-user] OpenOpt svn broken on Windows In-Reply-To: References: <487E3BAE.1000004@scipy.org> <487E4CE8.5000609@scipy.org> <487EE588.1000204@scipy.org> <487EEA23.1020105@scipy.org> <487EF270.3050901@scipy.org> <487EFAE3.9020503@scipy.org> <487F6203.9030403@ukr.net> <487F7781.3040904@scipy.org> <48807D66.2060002@scipy.org> Message-ID: <4880EA35.5020604@scipy.org> hi Nils, I don't know for sure yet but it can be actual: http://openopt.blogspot.com/2008/07/algencan-diffint-issue.html Regards, D. Nils Wagner wrote: > > Hi Dmitrey, > > It works fine for me now with algencan-2.0.4-beta. > I have used a modified Makefile together with > > make algencan-py > > Then I have modified my PYTHONPATH. > > However the new algencan shows a poor convergence > behavior wrt nlp_3.py. Any idea ? > From emanuele at relativita.com Sat Jul 19 11:29:30 2008 From: emanuele at relativita.com (Emanuele Olivetti) Date: Sat, 19 Jul 2008 17:29:30 +0200 Subject: [SciPy-user] bug in cho_factor? Message-ID: <4882085A.2040107@relativita.com> In scipy v0.7.0.dev4543 (svn) the documentation of scipy.linalg.cho_factor says that cho_factor returns a tuple made of: ---- c : array, shape (M, M) Upper- or lower-triangular Cholesky factor of A lower : array, shape (M, M) Flag indicating whether the factor is lower or upper triangular ---- But the following simple example shows that 'c' is not triangular and the result of cho_factor() is pretty different from that of cholesky()! ---- import numpy as N import scipy.linalg as SL A = N.array([[2,1],[1,2]]) c_wrong,lower = SL.cho_factor(A) print c_wrong c_correct = SL.cholesky(A) print c_correct print c_wrong==c_correct ---- The output is: ---- [[ 1.41421356 0.70710678] [ 1. 
1.22474487]] [[ 1.41421356 0.70710678] [ 0. 1.22474487]] [[ True True] [False True]] ---- Why c_wrong[1,0] is not zero? And why is it 1.0 ? I went mad tracking this issue in my code 8-| Regards, Emanuele P.S.: there is a type in cho_factor documentation: 'lower' is not an 'array' but an 'int'. From lbolla at gmail.com Sat Jul 19 12:06:41 2008 From: lbolla at gmail.com (Lorenzo Bolla) Date: Sat, 19 Jul 2008 17:06:41 +0100 Subject: [SciPy-user] Help interpolating equally spaced 2D gridded data to a different grid. In-Reply-To: <4880A32E0200009B00014590@GWWEB.twdb.state.tx.us> References: <4880A32E0200009B00014590@GWWEB.twdb.state.tx.us> Message-ID: <20080719160631.GA6332@lollo-laptop> something like this should work (adjust the dimensions as you wish): ---------------------------------------- import numpy from scipy.interpolate import interp2d x = numpy.linspace(0, 1, 27) y = numpy.linspace(0, 1, 34) lat_old, lon_old = numpy.meshgrid(y, x) precip_old = numpy.sin(lat_old + lon_old) print lat_old.shape print lon_old.shape print precip_old.shape x_new = numpy.linspace(0, 1, 20) y_new = numpy.linspace(0, 1, 16) lat, lon = numpy.meshgrid(y_new, x_new) precip_interp = interp2d(lat_old, lon_old, precip_old) precip = precip_interp(y_new, x_new) print lat.shape print lon.shape print precip.shape ---------------------------------------- hth, L. On Fri, Jul 18, 2008 at 02:05:34PM -0500, Dharhas Pothina wrote: > Hi, > > I have a regular grid of data : > > lat_old.shape = (277, 349) > lon_old.shape = (277, 349) > precip_old.shape = (8, 277, 349) # the first axis is time > > I need to interpolate precip_old to a new regular grid (lat,lon) where > > lat.shape = (204, 160) > lon.shape = (204, 160) > > Basically I need to calculate precip_new where precip_new.shape = (8, 204, 160) > > I've had a look at the cookbook page for 'N-D interpolation for equally-spaced data' and also some of the documentation for interp2D etc but am having trouble understanding what I need to do. > > Any help is appreciated. > > thanks > > - dharhas > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From warren.weckesser at gmail.com Sat Jul 19 12:28:55 2008 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Sat, 19 Jul 2008 12:28:55 -0400 Subject: [SciPy-user] bug in cho_factor? In-Reply-To: <4882085A.2040107@relativita.com> References: <4882085A.2040107@relativita.com> Message-ID: <114880320807190928h29f43122ye85f060181c2d39b@mail.gmail.com> It appears that cho_factor is a wrapper for the lapack routine potrf. This function doesn't zero out the irrelevant part of the matrix; it appears to leave it untouched. In your case, the upper triangular part of your answer is correct; the subdiagonal part is irrelevant (and is apparently left over from the input matrix). Apparently this is not a problem, since the output of cho_factor is intended to be used in cho_solve (as the docstring states). Presumably cho_solve ignores the irrelevant parts of the input matrix. 
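To make the distinction concrete, here is a small sketch reusing the 2x2 matrix from this thread (the exact leftover values in the unused triangle are not meaningful):

----------------------------------------
import numpy as np
from scipy.linalg import cho_factor, cho_solve, cholesky

A = np.array([[2., 1.],
              [1., 2.]])
b = np.array([1., 0.])

# cho_factor returns (c, lower); only one triangle of c is the
# Cholesky factor, the other triangle holds leftover entries.
c, lower = cho_factor(A)
x = cho_solve((c, lower), b)   # cho_solve ignores the unused triangle

# cholesky() returns a proper triangular factor, zeros included.
U = cholesky(A)                # upper triangular by default
assert np.allclose(np.dot(U.T, U), A)
assert np.allclose(np.dot(A, x), b)
----------------------------------------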
Warren On Sat, Jul 19, 2008 at 11:29 AM, Emanuele Olivetti wrote: > In scipy v0.7.0.dev4543 (svn) the documentation of > scipy.linalg.cho_factor says that cho_factor returns a tuple > made of: > ---- > c : array, shape (M, M) > Upper- or lower-triangular Cholesky factor of A > lower : array, shape (M, M) > Flag indicating whether the factor is lower or upper triangular > ---- > But the following simple example shows that 'c' is not triangular > and the result of cho_factor() is pretty different from that of > cholesky()! > ---- > import numpy as N > import scipy.linalg as SL > A = N.array([[2,1],[1,2]]) > c_wrong,lower = SL.cho_factor(A) > print c_wrong > c_correct = SL.cholesky(A) > print c_correct > print c_wrong==c_correct > ---- > The output is: > ---- > [[ 1.41421356 0.70710678] > [ 1. 1.22474487]] > [[ 1.41421356 0.70710678] > [ 0. 1.22474487]] > [[ True True] > [False True]] > ---- > Why c_wrong[1,0] is not zero? And why is it 1.0 ? > > I went mad tracking this issue in my code 8-| > > Regards, > > Emanuele > > P.S.: there is a type in cho_factor documentation: 'lower' is > not an 'array' but an 'int'. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at gmail.com Sat Jul 19 13:17:48 2008 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Sat, 19 Jul 2008 13:17:48 -0400 Subject: [SciPy-user] bug in cho_factor? In-Reply-To: <4882085A.2040107@relativita.com> References: <4882085A.2040107@relativita.com> Message-ID: <114880320807191017j4684602ak9cbe80439d82d5f@mail.gmail.com> Emanuele, After a closer look at the code, it appears that the only difference between cholesky() and cho_factor() is that cho_factor() does not set the irrelevant terms to zero. So, if you are only going to use the result to call cho_solve(), you can use cho_factor(), but if you need the correct square triangular matrix with zeros in the subdiagonal, use cholesky(). Warren On Sat, Jul 19, 2008 at 11:29 AM, Emanuele Olivetti wrote: > In scipy v0.7.0.dev4543 (svn) the documentation of > scipy.linalg.cho_factor says that cho_factor returns a tuple > made of: > ---- > c : array, shape (M, M) > Upper- or lower-triangular Cholesky factor of A > lower : array, shape (M, M) > Flag indicating whether the factor is lower or upper triangular > ---- > But the following simple example shows that 'c' is not triangular > and the result of cho_factor() is pretty different from that of > cholesky()! > ---- > import numpy as N > import scipy.linalg as SL > A = N.array([[2,1],[1,2]]) > c_wrong,lower = SL.cho_factor(A) > print c_wrong > c_correct = SL.cholesky(A) > print c_correct > print c_wrong==c_correct > ---- > The output is: > ---- > [[ 1.41421356 0.70710678] > [ 1. 1.22474487]] > [[ 1.41421356 0.70710678] > [ 0. 1.22474487]] > [[ True True] > [False True]] > ---- > Why c_wrong[1,0] is not zero? And why is it 1.0 ? > > I went mad tracking this issue in my code 8-| > > Regards, > > Emanuele > > P.S.: there is a type in cho_factor documentation: 'lower' is > not an 'array' but an 'int'. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From emanuele at relativita.com Sat Jul 19 13:50:47 2008 From: emanuele at relativita.com (Emanuele Olivetti) Date: Sat, 19 Jul 2008 19:50:47 +0200 Subject: [SciPy-user] bug in cho_factor? In-Reply-To: <114880320807191017j4684602ak9cbe80439d82d5f@mail.gmail.com> References: <4882085A.2040107@relativita.com> <114880320807191017j4684602ak9cbe80439d82d5f@mail.gmail.com> Message-ID: <48822977.80601@relativita.com> Thanks a lot for the explanation. I believe cho_factor's docstring should be updated in order to mention these facts. It is definitely unexpected that the results of the two decompositions are different, and it can cause problems like the one I had (a couple of hours spent). A clear "Warning" would fit. Consider that U,lower=cho_factor(A) outputs a U that does not satisfy A==N.dot(U.T,U) !! Do you know why cho_factor does not zero out the matrix? Is it for performance reasons? Otherwise there is no reason to use cho_factor() instead of cholesky(). Best Regards, Emanuele Warren Weckesser wrote: > Emanuele, > > After a closer look at the code, it appears that the only difference > between cholesky() and cho_factor() is that cho_factor() does not > set the irrelevant terms to zero. So, if you are only going to > use the result to call cho_solve(), you can use cho_factor(), but > if you need the correct square triangular matrix with zeros in the > subdiagonal, use cholesky(). > > Warren > > > On Sat, Jul 19, 2008 at 11:29 AM, Emanuele Olivetti > > wrote: > > In scipy v0.7.0.dev4543 (svn) the documentation of > scipy.linalg.cho_factor says that cho_factor returns a tuple > made of: > ---- > c : array, shape (M, M) > Upper- or lower-triangular Cholesky factor of A > lower : array, shape (M, M) > Flag indicating whether the factor is lower or upper triangular > ---- > But the following simple example shows that 'c' is not triangular > and the result of cho_factor() is pretty different from that of > cholesky()! > ---- > import numpy as N > import scipy.linalg as SL > A = N.array([[2,1],[1,2]]) > c_wrong,lower = SL.cho_factor(A) > print c_wrong > c_correct = SL.cholesky(A) > print c_correct > print c_wrong==c_correct > ---- > The output is: > ---- > [[ 1.41421356 0.70710678] > [ 1. 1.22474487]] > [[ 1.41421356 0.70710678] > [ 0. 1.22474487]] > [[ True True] > [False True]] > ---- > Why is c_wrong[1,0] not zero? And why is it 1.0 ? > > I went mad tracking this issue in my code 8-| > > Regards, > > Emanuele > > P.S.: there is a typo in cho_factor documentation: 'lower' is > not an 'array' but an 'int'. > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From warren.weckesser at gmail.com Sat Jul 19 14:24:29 2008 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Sat, 19 Jul 2008 14:24:29 -0400 Subject: [SciPy-user] bug in cho_factor? In-Reply-To: <48822977.80601@relativita.com> References: <4882085A.2040107@relativita.com> <114880320807191017j4684602ak9cbe80439d82d5f@mail.gmail.com> <48822977.80601@relativita.com> Message-ID: <114880320807191124q453b288y2ab29f11d3def6c7@mail.gmail.com> Hi Emanuele, On Sat, Jul 19, 2008 at 1:50 PM, Emanuele Olivetti wrote: > Thanks a lot for the explanation.
I believe cho_factor's docstring > should be updated in order to mention these facts. It is definitely > unexpected that the result of the two decompositions are different > and can cause problems like I had (a couple of hours spent). A > clear "Warning" should fit. Consider that U,lower=cho_factor(A) > outputs an U that does not satisfy A==N.dot(U.T,U) !! I agree--the docstring description of the return matrix c is wrong. c is a matrix whose upper or lower (depending on the parameter lower) triangular part gives the Cholesky factor, but c itself is not triangular. > Do you know why cho_factor does not zeros out the matrix? > Is it for performance reasons? That would be my guess, and it makes sense (why zero out elements that will be ignored by cho_solve()?), but you'd have to ask the author of the code to be sure. Warren -------------- next part -------------- An HTML attachment was scrubbed... URL: From emanuele at relativita.com Sat Jul 19 17:25:27 2008 From: emanuele at relativita.com (Emanuele Olivetti) Date: Sat, 19 Jul 2008 23:25:27 +0200 Subject: [SciPy-user] bug in cho_factor? In-Reply-To: <114880320807191124q453b288y2ab29f11d3def6c7@mail.gmail.com> References: <4882085A.2040107@relativita.com> <114880320807191017j4684602ak9cbe80439d82d5f@mail.gmail.com> <48822977.80601@relativita.com> <114880320807191124q453b288y2ab29f11d3def6c7@mail.gmail.com> Message-ID: <48825BC7.5090904@relativita.com> In the meanwhile I've submitted ticket #704 about this issue and attached a tentative patch for the docstring. I hope someone will review, improve it and commit. Regards, Emanuele Warren Weckesser wrote: > Hi Emanuele, > > On Sat, Jul 19, 2008 at 1:50 PM, Emanuele Olivetti > > wrote: > > Thanks a lot for the explanation. I believe cho_factor's docstring > should be updated in order to mention these facts. It is definitely > unexpected that the result of the two decompositions are different > and can cause problems like I had (a couple of hours spent). A > clear "Warning" should fit. Consider that U,lower=cho_factor(A) > outputs an U that does not satisfy A==N.dot(U.T,U) !! > > > I agree--the docstring description of the return matrix c is wrong. > c is a matrix whose upper or lower (depending on the parameter > lower) triangular part gives the Cholesky factor, but c itself is not > triangular. > > > Do you know why cho_factor does not zeros out the matrix? > Is it for performance reasons? > > > That would be my guess, and it makes sense (why zero out elements > that will be ignored by cho_solve()?), but you'd have to ask the > author of the > code to be sure. > > Warren > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From rpmuller at gmail.com Sun Jul 20 14:24:43 2008 From: rpmuller at gmail.com (Rick Muller) Date: Sun, 20 Jul 2008 12:24:43 -0600 Subject: [SciPy-user] Problems while trying to build scipy with umfpack on Intel MacBook Pro Message-ID: I'm trying to get a scipy build that contains umfpack on an Intel MacBook Pro. I believe that I've compiled and installed umfpack correctly. I appended the output I get from the build to this message. In short, I get some warnings about having multiply defined "-arch" flags, but I'm not clear whether this is the fatal error or not. 
At the end of the install process, I have what looks like a umfpack directory: $ ls /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/ site-packages/scipy/linsolve/umfpack/ __init__.py _umfpack.py info.pyc tests/ __init__.pyc _umfpack.pyc setup.py umfpack.py __umfpack.so info.py setup.pyc umfpack.pyc But for some reason I can't import it: s886301{rmuller}[1]: import scipy.linsolve.umfpack as um --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) /Users/rmuller/ in () AttributeError: 'module' object has no attribute 'umfpack' Any suggestions? mkl_info: libraries mkl,vml,guide not found in /Library/Frameworks/Python.framework/Versions/2.5/lib libraries mkl,vml,guide not found in /usr/local/lib libraries mkl,vml,guide not found in /usr/lib NOT AVAILABLE fftw3_info: libraries fftw3 not found in /Library/Frameworks/Python.framework/Versions/2.5/lib libraries fftw3 not found in /usr/local/lib libraries fftw3 not found in /usr/lib fftw3 not found NOT AVAILABLE fftw2_info: libraries rfftw,fftw not found in /Library/Frameworks/Python.framework/Versions/2.5/lib libraries rfftw,fftw not found in /usr/local/lib libraries rfftw,fftw not found in /usr/lib fftw2 not found NOT AVAILABLE dfftw_info: libraries drfftw,dfftw not found in /Library/Frameworks/Python.framework/Versions/2.5/lib libraries drfftw,dfftw not found in /usr/local/lib libraries drfftw,dfftw not found in /usr/lib dfftw not found NOT AVAILABLE djbfft_info: NOT AVAILABLE blas_opt_info: FOUND: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3)] extra_compile_args = ['-faltivec', '-I/System/Library/Frameworks/vecLib.framework/Headers'] lapack_opt_info: FOUND: extra_link_args = ['-Wl,-framework', '-Wl,Accelerate'] define_macros = [('NO_ATLAS_INFO', 3)] extra_compile_args = ['-faltivec'] non-existing path in 'scipy/linsolve': 'tests' umfpack_info: libraries umfpack not found in /Library/Frameworks/Python.framework/Versions/2.5/lib amd_info: libraries amd not found in /Library/Frameworks/Python.framework/Versions/2.5/lib libraries amd not found in /usr/local/lib libraries amd not found in /usr/lib NOT AVAILABLE FOUND: libraries = ['umfpack'] library_dirs = ['/usr/local/lib'] swig_opts = ['-I/usr/local/include'] define_macros = [('SCIPY_UMFPACK_H', None)] include_dirs = ['/usr/local/include'] running build running scons customize UnixCCompiler Found executable /usr/bin/gcc customize NAGFCompiler Could not locate executable f95 customize AbsoftFCompiler Could not locate executable f90 Could not locate executable f77 customize IBMFCompiler Could not locate executable xlf90 Could not locate executable xlf customize IntelFCompiler Could not locate executable ifort Could not locate executable ifc customize GnuFCompiler Could not locate executable g77 customize Gnu95FCompiler Found executable /usr/local/bin/gfortran customize Gnu95FCompiler customize UnixCCompiler customize UnixCCompiler using scons running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src building py_modules sources building library "dfftpack" sources building library "linpack_lite" sources building library "mach" sources building library "quadpack" sources building library "odepack" sources building library "fitpack" sources building library "superlu_src" sources building library "odrpack" 
sources building library "minpack" sources building library "rootfind" sources building library "c_misc" sources building library "cephes" sources building library "mach" sources building library "toms" sources building library "amos" sources building library "cdf" sources building library "specfun" sources building library "statlib" sources building extension "scipy.cluster._vq" sources building extension "scipy.fftpack._fftpack" sources f2py options: [] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. building extension "scipy.fftpack.convolve" sources f2py options: [] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. building extension "scipy.integrate._quadpack" sources building extension "scipy.integrate._odepack" sources building extension "scipy.integrate.vode" sources f2py options: [] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. building extension "scipy.interpolate._fitpack" sources building extension "scipy.interpolate.dfitpack" sources f2py options: [] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. adding 'build/src.macosx-10.3-fat-2.5/scipy/interpolate/dfitpack-f2pywrappers.f' to sources. building extension "scipy.io.numpyio" sources building extension "scipy.lib.blas.fblas" sources f2py options: ['skip:', ':'] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. adding 'build/src.macosx-10.3-fat-2.5/build/src.macosx-10.3-fat-2.5/scipy/lib/blas/fblas-f2pywrappers.f' to sources. building extension "scipy.lib.blas.cblas" sources adding 'build/src.macosx-10.3-fat-2.5/scipy/lib/blas/cblas.pyf' to sources. f2py options: ['skip:', ':'] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. building extension "scipy.lib.lapack.flapack" sources f2py options: ['skip:', ':'] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. building extension "scipy.lib.lapack.clapack" sources adding 'build/src.macosx-10.3-fat-2.5/scipy/lib/lapack/clapack.pyf' to sources. f2py options: ['skip:', ':'] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. building extension "scipy.lib.lapack.calc_lwork" sources f2py options: [] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. building extension "scipy.lib.lapack.atlas_version" sources building extension "scipy.linalg.fblas" sources adding 'build/src.macosx-10.3-fat-2.5/scipy/linalg/fblas.pyf' to sources. f2py options: [] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. adding 'build/src.macosx-10.3-fat-2.5/build/src.macosx-10.3-fat-2.5/scipy/linalg/fblas-f2pywrappers.f' to sources. building extension "scipy.linalg.cblas" sources adding 'build/src.macosx-10.3-fat-2.5/scipy/linalg/cblas.pyf' to sources. f2py options: [] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. 
building extension "scipy.linalg.flapack" sources adding 'build/src.macosx-10.3-fat-2.5/scipy/linalg/flapack.pyf' to sources. f2py options: [] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. adding 'build/src.macosx-10.3-fat-2.5/build/src.macosx-10.3-fat-2.5/scipy/linalg/flapack-f2pywrappers.f' to sources. building extension "scipy.linalg.clapack" sources adding 'build/src.macosx-10.3-fat-2.5/scipy/linalg/clapack.pyf' to sources. f2py options: [] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. building extension "scipy.linalg._flinalg" sources f2py options: [] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. building extension "scipy.linalg.calc_lwork" sources f2py options: [] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. building extension "scipy.linalg.atlas_version" sources building extension "scipy.linalg._iterative" sources f2py options: [] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. building extension "scipy.linsolve._zsuperlu" sources building extension "scipy.linsolve._dsuperlu" sources building extension "scipy.linsolve._csuperlu" sources building extension "scipy.linsolve._ssuperlu" sources building extension "scipy.linsolve.umfpack.__umfpack" sources adding 'scipy/linsolve/umfpack/umfpack.i' to sources. building extension "scipy.odr.__odrpack" sources building extension "scipy.optimize._minpack" sources building extension "scipy.optimize._zeros" sources building extension "scipy.optimize._lbfgsb" sources f2py options: [] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. building extension "scipy.optimize.moduleTNC" sources building extension "scipy.optimize._cobyla" sources f2py options: [] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. building extension "scipy.optimize.minpack2" sources f2py options: [] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. building extension "scipy.signal.sigtools" sources building extension "scipy.signal.spline" sources building extension "scipy.sparse._sparsetools" sources building extension "scipy.special._cephes" sources building extension "scipy.special.specfun" sources f2py options: ['--no-wrap-functions'] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. building extension "scipy.stats.statlib" sources f2py options: ['--no-wrap-functions'] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. building extension "scipy.stats.futil" sources f2py options: [] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. building extension "scipy.stats.mvn" sources f2py options: [] adding 'build/src.macosx-10.3-fat-2.5/fortranobject.c' to sources. adding 'build/src.macosx-10.3-fat-2.5' to include_dirs. adding 'build/src.macosx-10.3-fat-2.5/scipy/stats/mvn-f2pywrappers.f' to sources. 
building extension "scipy.ndimage._nd_image" sources building data_files sources running build_py copying build/src.macosx-10.3-fat-2.5/scipy/__config__.py -> build/lib.macosx-10.3-fat-2.5/scipy creating build/lib.macosx-10.3-fat-2.5/scipy/linsolve/umfpack copying scipy/linsolve/umfpack/__init__.py -> build/lib.macosx-10.3-fat-2.5/scipy/linsolve/umfpack copying scipy/linsolve/umfpack/info.py -> build/lib.macosx-10.3-fat-2.5/scipy/linsolve/umfpack copying scipy/linsolve/umfpack/setup.py -> build/lib.macosx-10.3-fat-2.5/scipy/linsolve/umfpack copying scipy/linsolve/umfpack/umfpack.py -> build/lib.macosx-10.3-fat-2.5/scipy/linsolve/umfpack copying build/src.macosx-10.3-fat-2.5/scipy/linsolve/umfpack/_umfpack.py -> build/lib.macosx-10.3-fat-2.5/scipy/linsolve/umfpack running build_clib customize UnixCCompiler customize UnixCCompiler using build_clib customize NAGFCompiler customize AbsoftFCompiler customize IBMFCompiler customize IntelFCompiler customize GnuFCompiler customize Gnu95FCompiler customize Gnu95FCompiler customize Gnu95FCompiler using build_clib running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext library 'mach' defined more than once, overwriting build_info {'sources': ['scipy/integrate/mach/d1mach.f', 'scipy/integrate/mach/i1mach.f', 'scipy/integrate/mach/r1mach.f', 'scipy/integrate/mach/xerror.f'], 'config_fc': {'noopt': ('scipy/integrate/setup.pyc', 1)}, 'source_languages': ['f77']}... with {'sources': ['scipy/special/mach/d1mach.f', 'scipy/special/mach/i1mach.f', 'scipy/special/mach/r1mach.f', 'scipy/special/mach/xerror.f'], 'config_fc': {'noopt': ('scipy/special/setup.pyc', 1)}, 'source_languages': ['f77']}... extending extension 'scipy.linsolve._zsuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)] extending extension 'scipy.linsolve._dsuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)] extending extension 'scipy.linsolve._csuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)] extending extension 'scipy.linsolve._ssuperlu' defined_macros with [('USE_VENDOR_BLAS', 1)] customize UnixCCompiler customize UnixCCompiler using build_ext customize NAGFCompiler customize AbsoftFCompiler customize IBMFCompiler customize IntelFCompiler customize GnuFCompiler customize Gnu95FCompiler customize Gnu95FCompiler customize Gnu95FCompiler using build_ext building 'scipy.linsolve.umfpack.__umfpack' extension compiling C sources C compiler: gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -fno-common -dynamic -DNDEBUG -g -O3 compile options: '-DSCIPY_UMFPACK_H -DNO_ATLAS_INFO=3 -I/usr/local/include -I/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/numpy/core/include -I/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5 -c' extra options: '-faltivec -I/System/Library/Frameworks/vecLib.framework/Headers' gcc -arch i386 -arch ppc -isysroot /Developer/SDKs/MacOSX10.4u.sdk -g -bundle -undefined dynamic_lookup build/temp.macosx-10.3-fat-2.5/build/src.macosx-10.3-fat-2.5/scipy/linsolve/umfpack/_umfpack_wrap.o -L/usr/local/lib -Lbuild/temp.macosx-10.3-fat-2.5 -lumfpack -o build/lib.macosx-10.3-fat-2.5/scipy/linsolve/umfpack/__umfpack.so -Wl,-framework -Wl,Accelerate /usr/bin/ld: for architecture ppc /usr/bin/ld: warning /usr/local/lib/libumfpack.a archive's cputype (7, architecture i386) does not match cputype (18) for specified -arch flag: ppc (can't load from it) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rpmuller at gmail.com Sun Jul 20 17:26:22 2008 From: rpmuller at gmail.com (Rick Muller) Date: Sun, 20 Jul 2008 15:26:22 -0600 Subject: [SciPy-user] Problems while trying to build scipy with umfpack on Intel MacBook Pro In-Reply-To: References: Message-ID: On Sun, Jul 20, 2008 at 12:24 PM, Rick Muller wrote: > I'm trying to get a scipy build that contains umfpack on an Intel MacBook > Pro. I believe that I've compiled and installed umfpack correctly. > > I appended the output I get from the build to this message. In short, I get > some warnings about multiply defined "-arch" flags, but I'm not clear > whether this is the fatal error or not. At the end of the install process, I > have what looks like a umfpack directory: > I'm replying to my own message to give a little more information, just in case it helps at all. Oddly enough, if I chdir to the relevant place, I can import umfpack: $ cd /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/scipy/linsolve/umfpack/ s886301{umfpack}508$ python Python 2.5 (r25:51918, Sep 19 2006, 08:49:13) [GCC 4.0.1 (Apple Computer, Inc. build 5341)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import umfpack as um Does this make sense to anyone? Thanks in advance for any help. Rick -------------- next part -------------- An HTML attachment was scrubbed... URL: From didier.rano at gmail.com Sun Jul 20 22:56:14 2008 From: didier.rano at gmail.com (didier rano) Date: Sun, 20 Jul 2008 22:56:14 -0400 Subject: [SciPy-user] Statistics advice with scipy Message-ID: Hi all, I have been using the TimeSeries module with scipy for several months. I am very impressed by this module. It is very easy to use. I have a problem keeping only the relevant data. Sometimes the retrieved data are of poor quality (values far too high or far too low). I've tried to use a median filter, but the data can be bad over a long period. I know that the pattern "strong change in a short time" is normally not possible. So how do I filter this correctly? You may see the data at http://spreadsheets.google.com/pub?key=pF2qPjwUpy_-1m7FbqafDXQ Thanks Didier Rano -------------- next part -------------- An HTML attachment was scrubbed... URL: From qing at qpeng.org Mon Jul 21 04:43:30 2008 From: qing at qpeng.org (Qing Peng) Date: Mon, 21 Jul 2008 01:43:30 -0700 Subject: [SciPy-user] Mode "reflect" may yield incorrect results on boundaries Message-ID: Hi there, I carefully followed the instructions to install numpy-1.1.0 and scipy-0.6.0, after installing lapack, atlas, amd, umfpack and fftw3. The site.cfg is the following.
[DEFAULT] library_dirs = /home/users3/cmt/qing/usr/local/lib:/home/users3/cmt/qing/usr/local/gcc-4.3.1/lib64 include_dirs = /home/users3/cmt/qing/usr/local/include:/home/users3/cmt/qing/usr/local/gcc-4.3.1/include [atlas] library_dirs = /home/users3/cmt/qing/usr/local/lib atlas_libs = lapack, f77blas, cblas, atlas [amd] library_dirs = /home/users3/cmt/qing/usr/local/lib:/home/users3/cmt/qing/usr/local/gcc-4.3.1/lib64 include_dirs = /home/users3/cmt/qing/usr/local/include:/home/users3/cmt/qing/usr/local/gcc-4.3.1/include amd_libs = amd [umfpack] library_dirs = /home/users3/cmt/qing/usr/local/lib:/home/users3/cmt/qing/usr/local/gcc-4.3.1/lib64 include_dirs = /home/users3/cmt/qing/usr/local/include:/home/users3/cmt/qing/usr/local/gcc-4.3.1/include umfpack_libs = umfpack, gfortran [fftw3] library_dirs = /home/users3/cmt/qing/usr/local/fftw-3.1.2/lib include_dirs = /home/users3/cmt/qing/usr/local/fftw-3.1.2/include fftw3_libs = fftw3 "python setup.py install" successfully install numpy and scipy. But when I run the test: it gives: -bash-2.05b$ python -c 'import numpy; import scipy; scipy.test(1)' Failed importing scipy.linsolve.umfpack: 'module' object has no attribute 'umfpack' Found 9/9 tests for scipy.cluster.tests.test_vq Found 18/18 tests for scipy.fftpack.tests.test_basic Found 4/4 tests for scipy.fftpack.tests.test_helper Found 20/20 tests for scipy.fftpack.tests.test_pseudo_diffs Found 1/1 tests for scipy.integrate.tests.test_integrate Found 3/3 tests for scipy.integrate.tests.test_quadrature Found 10/10 tests for scipy.integrate.tests.test_quadpack Found 6/6 tests for scipy.tests.test_fitpack Found 6/6 tests for scipy.tests.test_interpolate Found 28/28 tests for scipy.io.tests.test_mio Found 5/5 tests for scipy.io.tests.test_npfile Found 4/4 tests for scipy.io.tests.test_array_import Found 4/4 tests for scipy.io.tests.test_recaster Found 13/13 tests for scipy.io.tests.test_mmio Found 128/128 tests for scipy.lib.blas.tests.test_fblas Found 16/16 tests for scipy.lib.blas.tests.test_blas Found 42/42 tests for scipy.lib.lapack.tests.test_lapack Found 128/128 tests for scipy.linalg.tests.test_fblas Found 6/6 tests for scipy.linalg.tests.test_iterative Found 72/72 tests for scipy.linalg.tests.test_decomp Found 4/4 tests for scipy.linalg.tests.test_lapack Found 41/41 tests for scipy.linalg.tests.test_basic Found 7/7 tests for scipy.linalg.tests.test_matfuncs Found 16/16 tests for scipy.linalg.tests.test_blas Failed importing /home/users3/cmt/qing/usr/local/python/lib/python2.5/site-packages/scipy/linsolve/umfpack/tests/test_umfpack.py: 'module' object has no attribute 'umfpack' Found 2/2 tests for scipy.maxentropy.tests.test_maxentropy Failed importing /home/users3/cmt/qing/usr/local/python/lib/python2.5/site-packages/scipy/misc/tests/test_pilutil.py: No module named PIL.Image Found 399/399 tests for scipy.ndimage.tests.test_ndimage Found 5/5 tests for scipy.odr.tests.test_odr Found 8/8 tests for scipy.optimize.tests.test_optimize Found 4/4 tests for scipy.optimize.tests.test_zeros Found 1/1 tests for scipy.optimize.tests.test_cobyla Found 10/10 tests for scipy.optimize.tests.test_nonlin Found 5/5 tests for scipy.signal.tests.test_signaltools Found 4/4 tests for scipy.signal.tests.test_wavelets Found 152/152 tests for scipy.sparse.tests.test_sparse Found 3/3 tests for scipy.special.tests.test_spfun_stats Found 342/342 tests for scipy.special.tests.test_basic Found 107/107 tests for scipy.stats.tests.test_stats Found 10/10 tests for scipy.stats.tests.test_morestats Found 73/73 tests for 
scipy.stats.tests.test_distributions Found 9/9 tests for scipy.weave.tests.test_build_tools Found 0/0 tests for scipy.weave.tests.test_scxx_dict Failed importing /home/users3/cmt/qing/usr/local/python/lib/python2.5/site-packages/scipy/weave/tests/test_wx_spec.py: Could not locate wxPython base directory. Found 3/3 tests for scipy.weave.tests.test_standard_array_spec Found 0/0 tests for scipy.weave.tests.test_inline_tools Found 0/0 tests for scipy.weave.tests.test_c_spec Found 1/1 tests for scipy.weave.tests.test_ast_tools Found 0/0 tests for scipy.weave.tests.test_scxx_sequence Found 16/16 tests for scipy.weave.tests.test_slice_handler Found 0/0 tests for scipy.weave.tests.test_scxx_object Found 2/2 tests for scipy.weave.tests.test_blitz_tools building extensions here: /home/users3/cmt/qing/.python25_compiled/m5 Found 1/1 tests for scipy.weave.tests.test_ext_tools Found 26/26 tests for scipy.weave.tests.test_catalog Found 74/74 tests for scipy.weave.tests.test_size_check .../home/users3/cmt/qing/usr/local/python/lib/python2.5/site-packages/scipy/cluster/vq.py:477: UserWarning: One of the clusters is empty. Re-run kmean with a different initialization. warnings.warn("One of the clusters is empty. " exception raised as expected: One of the clusters is empty. Re-run kmean with a different initialization. ................................................Residual: 1.05006987327e-07 ............../home/users3/cmt/qing/usr/local/python/lib/python2.5/site-packages/scipy/interpolate/fitpack2.py:458: UserWarning: The coefficients of the spline returned have been computed as the minimal norm least-squares solution of a (numerically) rank deficient system (deficiency=7). If deficiency is large, the results may be inaccurate. Deficiency may strongly depend on the value of eps. warnings.warn(message) ............................................. Don't worry about a warning regarding the number of bytes read. Warning: 1000000 bytes requested, 20 bytes read. ........................caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 ............................................FF.................caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 ..............................................................................................................................Result may be inaccurate, approximate err = 2.53191024037e-09 ...Result may be inaccurate, approximate err = 3.85396349383e-12 ............................................................................................................................/home/users3/cmt/qing/usr/local/python/lib/python2.5/site-packages/scipy/ndimage/interpolation.py:41: UserWarning: Mode "reflect" may yield incorrect results on boundaries. Please use "mirror" instead. 
warnings.warn('Mode "reflect" may yield incorrect results on ' .............................................................................................Illegal instruction After googling the key words: 'Mode "reflect" may yield incorrect results on boundaries', it looks like a bug which was said to have been fixed in 2007. I am just wondering where I went wrong. Thanks -- Qing ============= -------------- next part -------------- An HTML attachment was scrubbed... URL: From mwojc at p.lodz.pl Mon Jul 21 05:13:53 2008 From: mwojc at p.lodz.pl (Marek Wojciechowski) Date: Mon, 21 Jul 2008 11:13:53 +0200 Subject: [SciPy-user] optimize.fmin_tnc - how to catch its output? Message-ID: Hello! I tried to catch fmin_tnc output to a file in the following way: file = open('messages', 'w') stdout = sys.stdout stderr = sys.stderr sys.stdout = file sys.stderr = file res = optimize.fmin_tnc(..., messages=1) file.close() sys.stdout = stdout sys.stderr = stderr But this does not work. I realized that fmin_tnc calls a C routine which probably uses its own stdout and stderr. Is there a way to catch its output? Greetings, -- Marek Wojciechowski From emanuele at relativita.com Mon Jul 21 05:24:56 2008 From: emanuele at relativita.com (Emanuele Olivetti) Date: Mon, 21 Jul 2008 11:24:56 +0200 Subject: [SciPy-user] fastest 'solve' for triangular matrix? Message-ID: <488455E8.60900@relativita.com> Dear All, Is there a better/faster way (provided by scipy) to solve a linear system Lx=b (when L is lower-triangular) than the following? piv = N.arange(L.shape[0]) x = scipy.linalg.lu_solve((L,piv),b) The linear system is trivial to solve, but using 'solve' is much, much slower and I was not able to find something ad hoc for triangular matrices. Best, Emanuele From emanuele at relativita.com Mon Jul 21 06:12:47 2008 From: emanuele at relativita.com (Emanuele Olivetti) Date: Mon, 21 Jul 2008 12:12:47 +0200 Subject: [SciPy-user] fastest 'solve' for triangular matrix? In-Reply-To: <488455E8.60900@relativita.com> References: <488455E8.60900@relativita.com> Message-ID: <4884611F.5020106@relativita.com> Oops, the previous code was incorrect (it referred to the upper-triangular case, not the lower-triangular one). Here is the correct version: # L = lower-triangular matrix piv = N.arange(L.shape[0]) x = scipy.linalg.lu_solve((L.T,piv),b,trans=1) And in the case of an upper-triangular matrix, Ux=b, it is: # U = upper-triangular matrix piv = N.arange(U.shape[0]) x = scipy.linalg.lu_solve((U,piv),b,trans=0) Emanuele P.S.: for some reason the 'upper' case is consistently faster than the 'lower' one... Emanuele Olivetti wrote: > Dear All, > > Is there a better/faster way (provided by scipy) to solve a linear system > Lx=b (when L is lower-triangular) than the following? > > piv = N.arange(L.shape[0]) > x = scipy.linalg.lu_solve((L,piv),b) > > The linear system is trivial to solve, but using 'solve' is much, much slower > and I was not able to find something ad hoc for triangular matrices. > > Best, > > Emanuele > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From emanuele at relativita.com Mon Jul 21 08:22:48 2008 From: emanuele at relativita.com (Emanuele Olivetti) Date: Mon, 21 Jul 2008 14:22:48 +0200 Subject: [SciPy-user] [OpenOpt] evaluation of f(x) and df(x) Message-ID: <48847F98.7030600@relativita.com> Dear All and Dmitrey, in my code the evaluation of f(x) and df(x) shares many intermediate steps.
I'd like to re-use what is computed inside f(x) to evaluate df(x) more efficiently, during f(x) optimization. Then is it _always_ true that, when OpenOpt evaluates df(x) at a certain x=x^*, f(x) too was previously evaluated at x=x^*? And in case f(x) was evaluated multiple times before evaluating df(x), is it true that the last x at which f(x) was evaluated (before computing df(x=x^*)) was x=x^*? If these assumptions hold (as it seems from preliminary tests on NLP using ralg), the extra code to take advantage of this fact is extremely simple. Best, Emanuele P.S.: if the previous assumptions are false in general, I'd like to know if they are true at least for the NLP case. From dmitrey.kroshko at scipy.org Mon Jul 21 08:35:48 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Mon, 21 Jul 2008 15:35:48 +0300 Subject: [SciPy-user] [OpenOpt] evaluation of f(x) and df(x) In-Reply-To: <48847F98.7030600@relativita.com> References: <48847F98.7030600@relativita.com> Message-ID: <488482A4.5060601@scipy.org> Hi Emanuele, if df(x1) is obtained via finite-difference calculations then f(x1) is stored and compared during the next call to f / df, and vice versa: if f(x1) is called then the value obtained is stored and compared during the next call to f and/or finite-difference df. At least it is intended so; I can take a more precise look if you have noticed it doesn't work properly. Regards, D. Emanuele Olivetti wrote: > Dear All and Dmitrey, > > in my code the evaluation of f(x) and df(x) shares many > intermediate steps. > [...] From stefan at sun.ac.za Mon Jul 21 08:53:28 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Mon, 21 Jul 2008 14:53:28 +0200 Subject: [SciPy-user] Mode "reflect" may yield incorrect results on boundaries In-Reply-To: References: Message-ID: <9457e7c80807210553y1c729d48n4eadf31f62553210@mail.gmail.com> 2008/7/21 Qing Peng : > After googling the key words: 'Mode "reflect" may yield incorrect results > on boundaries', it looks like a bug which was said to have been fixed in > 2007. I am just wondering where I went wrong. That's true, but unfortunately we haven't released in the meantime. Hopefully, 0.7 should be out by September. In the meantime, building the latest sources from SVN should solve your problem. Regards Stéfan From Dharhas.Pothina at twdb.state.tx.us Mon Jul 21 08:53:55 2008 From: Dharhas.Pothina at twdb.state.tx.us (Dharhas Pothina) Date: Mon, 21 Jul 2008 07:53:55 -0500 Subject: [SciPy-user] Help interpolating equally spaced 2D gridded data to a different grid.
In-Reply-To: <20080719160631.GA6332@lollo-laptop> References: <4880A32E0200009B00014590@GWWEB.twdb.state.tx.us> <20080719160631.GA6332@lollo-laptop> Message-ID: <48844093.63BA.009B.0@twdb.state.tx.us> Thanks I'll try that. I guess I didn't realize interp2d returns something that can be used as a function and was wondering where to specify ynew,xnew in the interp2d function call. - dharhas >>> Lorenzo Bolla 7/19/2008 11:06 AM >>> something like this should work (adjust the dimensions as you wish): ---------------------------------------- import numpy from scipy.interpolate import interp2d x = numpy.linspace(0, 1, 27) y = numpy.linspace(0, 1, 34) lat_old, lon_old = numpy.meshgrid(y, x) precip_old = numpy.sin(lat_old + lon_old) print lat_old.shape print lon_old.shape print precip_old.shape x_new = numpy.linspace(0, 1, 20) y_new = numpy.linspace(0, 1, 16) lat, lon = numpy.meshgrid(y_new, x_new) precip_interp = interp2d(lat_old, lon_old, precip_old) precip = precip_interp(y_new, x_new) print lat.shape print lon.shape print precip.shape ---------------------------------------- hth, L. On Fri, Jul 18, 2008 at 02:05:34PM -0500, Dharhas Pothina wrote: > Hi, > > I have a regular grid of data : > > lat_old.shape = (277, 349) > lon_old.shape = (277, 349) > precip_old.shape = (8, 277, 349) # the first axis is time > > I need to interpolate precip_old to a new regular grid (lat,lon) where > > lat.shape = (204, 160) > lon.shape = (204, 160) > > Basically I need to calculate precip_new where precip_new.shape = (8, 204, 160) > > I've had a look at the cookbook page for 'N-D interpolation for equally-spaced data' and also some of the documentation for interp2D etc but am having trouble understanding what I need to do. > > Any help is appreciated. > > thanks > > - dharhas > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user _______________________________________________ SciPy-user mailing list SciPy-user at scipy.org http://projects.scipy.org/mailman/listinfo/scipy-user From emanuele at relativita.com Mon Jul 21 10:04:35 2008 From: emanuele at relativita.com (Emanuele Olivetti) Date: Mon, 21 Jul 2008 16:04:35 +0200 Subject: [SciPy-user] [OpenOpt] evaluation of f(x) and df(x) In-Reply-To: <488482A4.5060601@scipy.org> References: <48847F98.7030600@relativita.com> <488482A4.5060601@scipy.org> Message-ID: <48849773.3060907@relativita.com> Hi Dmitrey, I do not understand if your message answers my question so let me reformulate. I have the exact gradient df(x) implemented in my code so I don't use finite differences. In my problem, In order to compute the gradient df(x=x1), I'd like to take advantage of intermediate results of f(x=x1)'s compuation. The re-use of these results is trivial to implement if the sequence of function calls made by OpenOpt is, e.g., like this: f(x0), df(x0), f(x1), f(x2), df(x2), f(x3), df(x3).... . Instead the implementation could become quite difficult if the sequence would be like this: f(x0), f(x1), df(x0), f(x2), f(x3), f(x4), df(x3),... (i.e., the sequence of f / df is not evaluated on the same values). Is OpenOpt working as in the first case? Thanks, Emanuele dmitrey wrote: > Hi Emanuele, > > if df(x1) is obtained via finite-difference calculations then f(x1) is > stored and compared during next call to f / df, and vice versa: if f(x1) > is called then the value obtained is stored and compared during next > call to f and/or finite-difference df. 
> > At least it is intended so, I can take more precise look if you have > noticed it doesn't work properly. > > Regards, D. > > Emanuele Olivetti wrote: > >> Dear All and Dmitrey, >> >> in my code the evaluation of f(x) and df(x) shares many >> intermediate steps. I'd like to re-use what is computed >> inside f(x) to evaluate df(x) more efficiently, during f(x) >> optimization. Then is it _always_ true that, when OpenOpt >> evaluates df(x) at a certain x=x^*, f(x) too was previously >> evaluated at x=x^*? And in case f(x) was evaluated multiple >> times before evaluating df(x), is it true that the last x at >> which f(x) was evaluated (before computing df(x=x^*)) >> was x=x^*? >> >> If these assumptions holds (as it seems from preliminary >> tests on NLP using ralg), the extra code to take advantage >> of this fact is extremely simple. >> >> Best, >> >> Emanuele >> >> P.S.: if the previous assumptions are false in general, I'd >> like to know it they are true at least for the NLP case. >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> >> >> >> >> > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From dmitrey.kroshko at scipy.org Mon Jul 21 10:58:32 2008 From: dmitrey.kroshko at scipy.org (dmitrey) Date: Mon, 21 Jul 2008 17:58:32 +0300 Subject: [SciPy-user] [OpenOpt] evaluation of f(x) and df(x) In-Reply-To: <48849773.3060907@relativita.com> References: <48847F98.7030600@relativita.com> <488482A4.5060601@scipy.org> <48849773.3060907@relativita.com> Message-ID: <4884A418.5000006@scipy.org> Hi Emanuele, no, openopt will never hold in memory more then 1 point value, because it's too expensive to check all those previous values, as well as to store in memory so much data (sometimes nVars are very large). The proble you have mentioned is too specific, so it assumes user will take care of the situation. As for solvers (especially those which use derivatives), they will hardly use previous points (if so it may be likely considered as a bug). To prevent the bugs I use openopt Point concept (in my ralg solver). Instead of handling in current workspace so lots of variables (f,df, f_prev, df_prev, c_prev, h_prev, dh_prev, etc, + any kinds of linear, + possible 2nd derivatives) I use something like this: iterPoint = p.point(x) if I need f(x), df(x), dc(x), maxResidual(x) etc I use iterPoint.f(), iterPoint.df(), iterPoint.dc(), iterPoint.mr() etc and I'm sure these values will not be recalculated. Still I'm not sure in my current Point having f will benefit of having df and wise versa (if some calculations with other points had been done between them). As for you as a user, if you want to take usage of OO Point, it could be done via p = NLP() p.args = p and then using p.point in your objFunc/constraints (still I'm not sure it will not bring endless recursive cycle). Or, you could write something like this by yourself in your code. Point class is situated in /Kernel/Point.py. Regards, D. Emanuele Olivetti wrote: > Hi Dmitrey, > > I do not understand if your message answers my question so let me > reformulate. I have the exact gradient df(x) implemented in my code > so I don't use finite differences. > > In my problem, In order to compute the gradient df(x=x1), I'd like to > take advantage of intermediate results of f(x=x1)'s compuation. 
> The re-use of these results is trivial to implement if
> the sequence of function calls made by OpenOpt is, e.g., like this:
> f(x0), df(x0), f(x1), f(x2), df(x2), f(x3), df(x3), ... Instead the
> implementation could become quite difficult if the sequence would
> be like this: f(x0), f(x1), df(x0), f(x2), f(x3), f(x4), df(x3), ...
> (i.e., the sequence of f / df is not evaluated on the same values).
>
> Is OpenOpt working as in the first case?
>
> Thanks,
>
> Emanuele
>
> dmitrey wrote:
>
>> Hi Emanuele,
>>
>> if df(x1) is obtained via finite-difference calculations then f(x1) is
>> stored and compared during next call to f / df, and vice versa: if f(x1)
>> is called then the value obtained is stored and compared during next
>> call to f and/or finite-difference df.
>>
>> At least it is intended so, I can take more precise look if you have
>> noticed it doesn't work properly.
>>
>> Regards, D.
>>
>> Emanuele Olivetti wrote:
>>
>>> Dear All and Dmitrey,
>>>
>>> in my code the evaluation of f(x) and df(x) shares many
>>> intermediate steps. I'd like to re-use what is computed
>>> inside f(x) to evaluate df(x) more efficiently, during f(x)
>>> optimization. Then is it _always_ true that, when OpenOpt
>>> evaluates df(x) at a certain x=x^*, f(x) too was previously
>>> evaluated at x=x^*? And in case f(x) was evaluated multiple
>>> times before evaluating df(x), is it true that the last x at
>>> which f(x) was evaluated (before computing df(x=x^*))
>>> was x=x^*?
>>>
>>> If these assumptions holds (as it seems from preliminary
>>> tests on NLP using ralg), the extra code to take advantage
>>> of this fact is extremely simple.
>>>
>>> Best,
>>>
>>> Emanuele
>>>
>>> P.S.: if the previous assumptions are false in general, I'd
>>> like to know it they are true at least for the NLP case.
>>>
>>> _______________________________________________
>>> SciPy-user mailing list
>>> SciPy-user at scipy.org
>>> http://projects.scipy.org/mailman/listinfo/scipy-user
>>
>> _______________________________________________
>> SciPy-user mailing list
>> SciPy-user at scipy.org
>> http://projects.scipy.org/mailman/listinfo/scipy-user
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

From dmitrey.kroshko at scipy.org Mon Jul 21 12:08:52 2008
From: dmitrey.kroshko at scipy.org (dmitrey)
Date: Mon, 21 Jul 2008 19:08:52 +0300
Subject: [SciPy-user] [OpenOpt] evaluation of f(x) and df(x)
In-Reply-To: <48847F98.7030600@relativita.com>
References: <48847F98.7030600@relativita.com>
Message-ID: <4884B494.9070209@scipy.org>

Sorry, I re-read your letter once again and it seems now I understand the question.

Emanuele Olivetti wrote:
> Dear All and Dmitrey,
>
> in my code the evaluation of f(x) and df(x) shares many
> intermediate steps. I'd like to re-use what is computed
> inside f(x) to evaluate df(x) more efficiently, during f(x)
> optimization. Then is it _always_ true that, when OpenOpt
> evaluates df(x) at a certain x=x^*, f(x) too was previously
> evaluated at x=x^*?

No, this is not guaranteed.

> And in case f(x) was evaluated multiple
> times before evaluating df(x), is it true that the last x at
> which f(x) was evaluated (before computing df(x=x^*))
> was x=x^*?

No, this is not guaranteed.

I can only guarantee that the code

f = user_obj_fun(x)
f2 = user_obj_fun(x)

will not call the user-supplied objective func twice. Same for df, c, dc, dh etc.
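If you want to take care of the shared computations on your own side, a small user-level cache shared by f and df could look like the sketch below (just an illustration with a made-up objective sum(exp(x)) whose gradient reuses the same intermediate; SharedCache is not an OpenOpt class):

import numpy

class SharedCache:
    # Caches an intermediate result that both f(x) and df(x) need,
    # keyed on the most recent x (only one point is remembered).
    def __init__(self):
        self._x = None
        self._e = None   # the shared, expensive intermediate

    def _update(self, x):
        x = numpy.asarray(x, dtype=float)
        if self._x is None or not numpy.all(x == self._x):
            self._x = x.copy()
            self._e = numpy.exp(x)   # hypothetical costly computation

    def f(self, x):
        self._update(x)
        return self._e.sum()   # f(x) = sum(exp(x))

    def df(self, x):
        self._update(x)
        return self._e   # d/dx sum(exp(x)) = exp(x)

cache = SharedCache()
# pass cache.f and cache.df to the solver; whichever of f/df is called
# first at a new x pays the full cost, the other one reuses the result.

This stays correct whatever the order of f/df calls turns out to be; it only saves time when they happen to be evaluated at the same x.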
> If these assumptions holds (as it seems from preliminary > tests on NLP using ralg), the extra code to take advantage > of this fact is extremely simple. > It is what oofun is intended for (constructing f,c,h from lots of pieces, some of which can be used for several times, including cross-cases, for example same part of code used by c and f). Currently I continue my work on them. See the file (committed some minutes ago, still require more detailed docstrings): http://projects.scipy.org/scipy/scikits/browser/trunk/openopt/scikits/openopt/examples/oofun_input.py Still there is much more work to be done (including recursive 1st derivatives, conception of oovar that I intend to add etc). Regards, D. > Best, > > Emanuele > > P.S.: if the previous assumptions are false in general, I'd > like to know it they are true at least for the NLP case. > > From pgmdevlist at gmail.com Mon Jul 21 12:26:46 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Mon, 21 Jul 2008 12:26:46 -0400 Subject: [SciPy-user] Statistics advise with scipy In-Reply-To: References: Message-ID: <200807211226.46646.pgmdevlist@gmail.com> On Sunday 20 July 2008 22:56:14 didier rano wrote: > I have a problem to filter only relevant data. Didier, That's a trick question, whose answer depends too heavily on the kind of data you're processing. * Are you expecting some kind of trend ? If yes, detrend your data first and work on the residuals. * Are you interested in finding the breaks in trend and/or regimes ? If so, I can send you some algos that do just that. * Are you interested into finding the outliers ? There are common approaches, some based on an expected normality of the data, some based on more robust methods... Googling 'outliers robust method' should get you started. From didier.rano at gmail.com Mon Jul 21 12:59:06 2008 From: didier.rano at gmail.com (didier rano) Date: Mon, 21 Jul 2008 12:59:06 -0400 Subject: [SciPy-user] Statistics advise with scipy In-Reply-To: <200807211226.46646.pgmdevlist@gmail.com> References: <200807211226.46646.pgmdevlist@gmail.com> Message-ID: Thanks Pierre, What do you mean by "finding the breaks in trend and/or regimes" ? In fact, in my case, I have several objectives: * Draw a graph without outliers data * Compute a trend wihout outliers data * Determine when the behavior/trend data will change. May be "finding the breaks in trend and/or regimes " ? Bye Didier Rano 2008/7/21, Pierre GM : > > On Sunday 20 July 2008 22:56:14 didier rano wrote: > > > I have a problem to filter only relevant data. > > Didier, > That's a trick question, whose answer depends too heavily on the kind of > data > you're processing. > * Are you expecting some kind of trend ? If yes, detrend your data first > and > work on the residuals. > * Are you interested in finding the breaks in trend and/or regimes ? If so, > I > can send you some algos that do just that. > * Are you interested into finding the outliers ? There are common > approaches, > some based on an expected normality of the data, some based on more robust > methods... Googling 'outliers robust method' should get you started. > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Didier Rano didier.rano at gmail.com http://www.jaxtr.com/didierrano -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From emanuele at relativita.com Mon Jul 21 13:03:47 2008 From: emanuele at relativita.com (Emanuele Olivetti) Date: Mon, 21 Jul 2008 19:03:47 +0200 Subject: [SciPy-user] [OpenOpt] evaluation of f(x) and df(x) In-Reply-To: <4884B494.9070209@scipy.org> References: <48847F98.7030600@relativita.com> <4884B494.9070209@scipy.org> Message-ID: <4884C173.8040501@relativita.com> OK you got it now. Thanks for the answer. In the meanwhile I made a little change to my code - just a sub-optimal solution - but allows to avoid lots of useless computations (until now I have no example for which the assumptions I described are false, and when they will be my algorithm will just become slower and compute everything). Oofun seems very interesting I'll spend more time on it. Let me express again my appreciation for OpenOpt! Emanuele dmitrey wrote: > Sorry, I re-read your letter once again and seems now I understand the > question. > > Emanuele Olivetti wrote: > >> Dear All and Dmitrey, >> >> in my code the evaluation of f(x) and df(x) shares many >> intermediate steps. I'd like to re-use what is computed >> inside f(x) to evaluate df(x) more efficiently, during f(x) >> optimization. Then is it _always_ true that, when OpenOpt >> evaluates df(x) at a certain x=x^*, f(x) too was previously >> evaluated at x=x^*? >> > No, this is not guarantied. > >> And in case f(x) was evaluated multiple >> times before evaluating df(x), is it true that the last x at >> which f(x) was evaluated (before computing df(x=x^*)) >> was x=x^*? >> >> > No, this is not guarantied. > > I can only guarantee the code > f = user_obj_fun(x) > f2 = user_obj_fun(x) > will not call user-supplied objective func for twice. Same for df, c, > dc, dh etc. > > >> If these assumptions holds (as it seems from preliminary >> tests on NLP using ralg), the extra code to take advantage >> of this fact is extremely simple. >> >> > It is what oofun is intended for (constructing f,c,h from lots of > pieces, some of which can be used for several times, including > cross-cases, for example same part of code used by c and f). Currently I > continue my work on them. See the file (committed some minutes ago, > still require more detailed docstrings): > http://projects.scipy.org/scipy/scikits/browser/trunk/openopt/scikits/openopt/examples/oofun_input.py > Still there is much more work to be done (including recursive 1st > derivatives, conception of oovar that I intend to add etc). > > Regards, D. > >> Best, >> >> Emanuele >> >> P.S.: if the previous assumptions are false in general, I'd >> like to know it they are true at least for the NLP case. >> >> >> > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From pgmdevlist at gmail.com Mon Jul 21 13:12:39 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Mon, 21 Jul 2008 13:12:39 -0400 Subject: [SciPy-user] Statistics advise with scipy In-Reply-To: References: <200807211226.46646.pgmdevlist@gmail.com> Message-ID: <200807211312.40202.pgmdevlist@gmail.com> On Monday 21 July 2008 12:59:06 didier rano wrote: > Thanks Pierre, > > What do you mean by "finding the breaks in trend and/or regimes" ? There's a lot of literature on the detection of change-points. Roughly, a change-point can be a step change-point (eg., switching from one mean to another, as you observe in your data), or a trend change-points (the slope of a linear model changes at some point), or both. 
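The references below list proper methods; just to fix ideas on the step case, here is a very crude cumulative-sum sketch (a toy example of mine, not one of the published algorithms):

import numpy as np

def step_changepoint(x):
    # Crude CUSUM-style sketch: the cumulative sum of deviations from
    # the global mean peaks near a shift in the mean level.
    x = np.asarray(x, dtype=float)
    s = np.cumsum(x - x.mean())
    return np.abs(s).argmax()

# example: the mean shifts from 0 to 2 at index 100
data = np.concatenate([np.random.normal(0., 1., 100),
                       np.random.normal(2., 1., 80)])
print step_changepoint(data)   # typically prints something near 100

Some references: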
www.beringclimate.noaa.gov/regimes/Regime_shift_methods_list.htm http://ams.allenpress.com/perlserv/?request=get-pdf&doi=10.1175%2F2008JCLI1956.1 > In fact, in my case, I have several objectives: > * Draw a graph without outliers data Well, first you need to define what an outlier is in your problem: assuming normal data, how far away from the mean should a point be to be an outlier ? 1,2, 3 standard deviation ? What if your data is not normal ? In that case, robust methods can give good result. > * Compute a trend wihout outliers data > * Determine when the behavior/trend data will change. May be "finding the > breaks in trend and/or regimes " ? From S.Collis at bom.gov.au Mon Jul 21 18:00:43 2008 From: S.Collis at bom.gov.au (Scott Collis) Date: Tue, 22 Jul 2008 08:00:43 +1000 Subject: [SciPy-user] Matplotlib Help with colorbar [SEC=UNCLASSIFIED] Message-ID: <00E49660AB21DD44A0D4A3115DFCBF61010126F0@officeho2.bom.gov.au> Good morning all, While not strictly a Scipy question, I know many use Matplotlib for visualisation. I have a 2D matrix of classifications (say 1 to 10 in discrete steps) and I am using pcolor to create a mesh plot (which I overlay contours and then a quiver on...) Now I wish to put a colour bar on but instead of 1-10 on the axis I want say "apple" to "Jake" (or in my case "Unidentified" to "Rain + Hail") Is there a way I can give colourbar or one of its children (colorbar.ax.yaxis?) a list or dictionary to annotate the bar? ie [(0,"unclassified"), ..., (10, "Rain + Hail")] Dr Scott Collis Postdoctoral Researcher Centre for Australian Weather and Climate Research (CAWCR) Atmosphere and Land Observation and Assessment Group Desk: (+613) 96694766 MB: 0412177550 www: http://www.bom.gov.au/bmrc/wefor/staff/scollis/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Mon Jul 21 18:46:38 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 21 Jul 2008 17:46:38 -0500 Subject: [SciPy-user] Matplotlib Help with colorbar [SEC=UNCLASSIFIED] In-Reply-To: <00E49660AB21DD44A0D4A3115DFCBF61010126F0@officeho2.bom.gov.au> References: <00E49660AB21DD44A0D4A3115DFCBF61010126F0@officeho2.bom.gov.au> Message-ID: <3d375d730807211546v1bbea7f0wcff2b7f70d898a3f@mail.gmail.com> On Mon, Jul 21, 2008 at 17:00, Scott Collis wrote: > Good morning all, > While not strictly a Scipy question, I know many use Matplotlib for > visualisation. I suspect you will have better luck on the matplotlib-users mailing list. https://lists.sourceforge.net/lists/listinfo/matplotlib-users -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From stefan at sun.ac.za Mon Jul 21 18:56:26 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 22 Jul 2008 00:56:26 +0200 Subject: [SciPy-user] optimize.fmin_tnc - how to catch its output? In-Reply-To: References: Message-ID: <9457e7c80807211556x5ae7d4e1m9079534f518e0cb9@mail.gmail.com> Hi Marek 2008/7/21 Marek Wojciechowski : > Hallo! > I tried to catch fmin_tnc output to a file in the following way: > > file = open('messages', 'w') > stdout = sys.stdout > stderr = sys.stderr > sys.stdout = file > sys.stderr = file > res = optimize.fmin_tnc(..., messages=1) > file.close() > sys.stdout = stdout > sys.stderr = stderr > > But this does not work. I realized that fmin_tnc calls C routine which > probably uses its own stdout, stderr. 
Is there a way to catch its output? I'm afraid you are right. The quickest way I can think of is to grab it using standard unix pipes: python optimise_script.py 2>&1 | cat > out.txt Otherwise, the C-code will have to be modified. Does anybody know what the best way is to write to sys.stdout and sys.stderr (not printf that goes to /dev/stdout) via the Python C API? Regards St?fan From robert.kern at gmail.com Mon Jul 21 19:22:29 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 21 Jul 2008 18:22:29 -0500 Subject: [SciPy-user] optimize.fmin_tnc - how to catch its output? In-Reply-To: <9457e7c80807211556x5ae7d4e1m9079534f518e0cb9@mail.gmail.com> References: <9457e7c80807211556x5ae7d4e1m9079534f518e0cb9@mail.gmail.com> Message-ID: <3d375d730807211622p4b73cd4bo2fe69b2ebf4d8a61@mail.gmail.com> On Mon, Jul 21, 2008 at 17:56, St?fan van der Walt wrote: > Hi Marek > > 2008/7/21 Marek Wojciechowski : >> Hallo! >> I tried to catch fmin_tnc output to a file in the following way: >> >> file = open('messages', 'w') >> stdout = sys.stdout >> stderr = sys.stderr >> sys.stdout = file >> sys.stderr = file >> res = optimize.fmin_tnc(..., messages=1) >> file.close() >> sys.stdout = stdout >> sys.stderr = stderr >> >> But this does not work. I realized that fmin_tnc calls C routine which >> probably uses its own stdout, stderr. Is there a way to catch its output? > > I'm afraid you are right. The quickest way I can think of is to grab > it using standard unix pipes: > > python optimise_script.py 2>&1 | cat > out.txt There is a way to redirect it to a (temporary) file from within Python. You can then read this file. I have an old and not-recently-tested implementation over here: http://www.enthought.com/~rkern/cgi-bin/hgwebdir.cgi/redir/ > Otherwise, the C-code will have to be modified. Does anybody know > what the best way is to write to sys.stdout and sys.stderr (not printf > that goes to /dev/stdout) via the Python C API? PySys_WriteStdout() and PySys_WriteStderr() as long as the output is coming from the C extension modules themselves. Modifying the library's C or (hah!) FORTRAN code is less palatable. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From didier.rano at gmail.com Mon Jul 21 22:49:51 2008 From: didier.rano at gmail.com (didier rano) Date: Mon, 21 Jul 2008 22:49:51 -0400 Subject: [SciPy-user] Statistics advise with scipy In-Reply-To: <200807211312.40202.pgmdevlist@gmail.com> References: <200807211226.46646.pgmdevlist@gmail.com> <200807211312.40202.pgmdevlist@gmail.com> Message-ID: 2008/7/21 Pierre GM : > On Monday 21 July 2008 12:59:06 didier rano wrote: > > Thanks Pierre, > > > > What do you mean by "finding the breaks in trend and/or regimes" ? > > There's a lot of literature on the detection of change-points. Roughly, a > change-point can be a step change-point (eg., switching from one mean to > another, as you observe in your data), or a trend change-points (the slope > of > a linear model changes at some point), or both. 
> > www.beringclimate.noaa.gov/regimes/Regime_shift_methods_list.htm > > http://ams.allenpress.com/perlserv/?request=get-pdf&doi=10.1175%2F2008JCLI1956.1 > > > In fact, in my case, I have several objectives: > > * Draw a graph without outliers data > > Well, first you need to define what an outlier is in your problem: assuming > normal data, how far away from the mean should a point be to be an outlier > ? > 1,2, 3 standard deviation ? What if your data is not normal ? In that case, > robust methods can give good result. > My data is not normal. Do you know robusts method in scipy ? Or maybe in an other python module ? > > > * Compute a trend wihout outliers data > > * Determine when the behavior/trend data will change. May be "finding the > > breaks in trend and/or regimes " ? > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Didier Rano didier.rano at gmail.com http://www.jaxtr.com/didierrano -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan at aims.ac.za Mon Jul 21 23:10:54 2008 From: jan at aims.ac.za (Jan Groenewald) Date: Tue, 22 Jul 2008 05:10:54 +0200 Subject: [SciPy-user] optimize.fmin_tnc - how to catch its output? In-Reply-To: <9457e7c80807211556x5ae7d4e1m9079534f518e0cb9@mail.gmail.com> References: <9457e7c80807211556x5ae7d4e1m9079534f518e0cb9@mail.gmail.com> Message-ID: <3c1fa8220807212010k1d4a12b3scc138002424f5783@mail.gmail.com> Hi St?fan, On Tue, Jul 22, 2008 at 12:56 AM, St?fan van der Walt wrote: > python optimise_script.py 2>&1 | cat > out.txt http://en.wikipedia.org/wiki/Cat_(Unix)#cat_and_UUOC ? You don' t need the "| cat". Never mind the pipecat, there is a shortcut for "2>&1 >" and it is "&>". > Otherwise, the C-code will have to be modified. Does anybody know > what the best way is to write to sys.stdout and sys.stderr (not printf > that goes to /dev/stdout) via the Python C API? I don't know but found this interesting so googled it: http://www.ragestorm.net/tutorial?id=21#9 or http://docs.python.org/lib/module-subprocess.html regards, Jan From pgmdevlist at gmail.com Mon Jul 21 23:07:31 2008 From: pgmdevlist at gmail.com (Pierre GM) Date: Mon, 21 Jul 2008 23:07:31 -0400 Subject: [SciPy-user] Statistics advise with scipy In-Reply-To: References: <200807211312.40202.pgmdevlist@gmail.com> Message-ID: <200807212307.31208.pgmdevlist@gmail.com> On Monday 21 July 2008 22:49:51 didier rano wrote: > My data is not normal. Do you know robusts method in scipy ? Or maybe in an > other python module ? Mmh, I'm sure you could implement some yourself. That way, we could start another scikits. There are already some winsorization and trimming functions in scipy.stats. Alternatively, you can try to use R and numpy through rpy: http://rpy.sourceforge.net/ From nwagner at iam.uni-stuttgart.de Tue Jul 22 02:56:41 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 22 Jul 2008 08:56:41 +0200 Subject: [SciPy-user] Frequency content of a transient signal Message-ID: Hi all, How can I obtain the frequency content of a transient signal (signal.png) ? Any pointer would be appreciated. Nils -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signal.png
Type: image/png
Size: 21303 bytes
Desc: not available
URL: 

From matthieu.brucher at gmail.com Tue Jul 22 03:15:11 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Tue, 22 Jul 2008 09:15:11 +0200
Subject: [SciPy-user] Frequency content of a transient signal
In-Reply-To: References: Message-ID: 

Hi !

I think you can use a sliding DFT on your signal. This way, you will
get a moving estimation of the spectral information (it's a basic
time-frequency transform, before you use wavelets ;)).

Matthieu

2008/7/22 Nils Wagner :
> Hi all,
> How can I obtain the frequency content of a
> transient signal (signal.png) ?
>
> Any pointer would be appreciated.
>
> Nils
>
> _______________________________________________
> SciPy-user mailing list
> SciPy-user at scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user

-- 
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher

From nwagner at iam.uni-stuttgart.de Tue Jul 22 03:31:33 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 22 Jul 2008 09:31:33 +0200
Subject: [SciPy-user] Frequency content of a transient signal
In-Reply-To: References: Message-ID: 

On Tue, 22 Jul 2008 09:15:11 +0200 "Matthieu Brucher" wrote:
> Hi !
>
> I think you can use a sliding DFT on your signal. This way, you will
> get a moving estimation of the spectral information (it's a basic
> time-frequency transform, before you use wavelets ;)).
>
> Matthieu
>
Matthieu,

I am not very familiar with signal processing. Please can you provide
a short example ?
I have attached the data file. The first column corresponds
with the time, the second column represents the signal.
Thanks in advance

Nils
-------------- next part --------------
A non-text attachment was scrubbed...
Name: mysignal.dat.gz
Type: application/x-gzip
Size: 5668 bytes
Desc: not available
URL: 

From matthieu.brucher at gmail.com Tue Jul 22 03:59:13 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Tue, 22 Jul 2008 09:59:13 +0200
Subject: [SciPy-user] Frequency content of a transient signal
In-Reply-To: References: Message-ID: 

2008/7/22 Nils Wagner :
> On Tue, 22 Jul 2008 09:15:11 +0200 "Matthieu Brucher" wrote:
>>
>> Hi !
>>
>> I think you can use a sliding DFT on your signal. This way, you will
>> get a moving estimation of the spectral information (it's a basic
>> time-frequency transform, before you use wavelets ;)).
>>
>> Matthieu
>>
> Matthieu,
>
> I am not very familiar with signal processing. Please can
> you provide a short example ?
> I have attached the data file. The first column corresponds
> with the time, the second column represents the signal.
> Thanks in advance
>
> Nils

A wikipedia article states it better than me, so here is a link :
http://en.wikipedia.org/wiki/Short-time_Fourier_transform
Feel free to ask any question ;)

In fact, what you will do is :

my_ffts = []
for i in range(n):
    my_ffts.append(fft(data[i*size:(i+1)*size]))

with size the size of the sliding window you will use.
This will give you non-overlapping FFTs, but you can use overlapping FFTs, such as :

my_ffts.append(fft(data[int((i-1/2.)*size):int((i+3/2.)*size)]))

Matthieu
-- 
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher

From nwagner at iam.uni-stuttgart.de Tue Jul 22 04:52:31 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 22 Jul 2008 10:52:31 +0200
Subject: [SciPy-user] Frequency content of a transient signal
In-Reply-To: References: Message-ID: 

On Tue, 22 Jul 2008 09:59:13 +0200 "Matthieu Brucher" wrote:
> 2008/7/22 Nils Wagner :
>> On Tue, 22 Jul 2008 09:15:11 +0200 "Matthieu Brucher" wrote:
>>>
>>> Hi !
>>>
>>> I think you can use a sliding DFT on your signal. This way, you will
>>> get a moving estimation of the spectral information (it's a basic
>>> time-frequency transform, before you use wavelets ;)).
>>>
>>> Matthieu
>>>
>> Matthieu,
>>
>> I am not very familiar with signal processing. Please can
>> you provide a short example ?
>> I have attached the data file. The first column corresponds
>> with the time, the second column represents the signal.
>> Thanks in advance
>>
>> Nils
>
> A wikipedia article states it better than me, so here is a link :
> http://en.wikipedia.org/wiki/Short-time_Fourier_transform
> Feel free to ask any question ;)

Fine. Let's start with the example from the link.

How do I produce the nice spectrograms ?

from scipy import *
from pylab import plot, show
#
# Example taken from
# http://en.wikipedia.org/wiki/Short-time_Fourier_transform
#
def x(t):
    if t < 5:
        return cos(2*pi*10*t)
    if t >= 5. and t < 10:
        return cos(2*pi*25*t)
    if t >= 10. and t < 15:
        return cos(2*pi*50*t)
    if t >= 15. and t <= 20:
        return cos(2*pi*100*t)

t = linspace(0., 20., 8001)   # sampled at 400 Hz
x_vec = vectorize(x)
signal = x_vec(t)
plot(t, signal)
#
# How can I obtain the nice spectrograms ?
#
show()
...
to be continued

Nils

From matthieu.brucher at gmail.com Tue Jul 22 05:17:08 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Tue, 22 Jul 2008 11:17:08 +0200
Subject: [SciPy-user] Frequency content of a transient signal
In-Reply-To: References: Message-ID: 

I've found a matlab script; it's not very clear, but it will be a start :
http://www.mathworks.com/matlabcentral/fileexchange/loadFile.do?objectId=7463

The easiest thing to do is to use an array, so you will do something like :

>>> spectro = n.zeros((nb_chunks, precision))

nb_chunks is the number of time slices you want and precision is the
precision of the FFT you want.
sampling_f is your sampling frequency, so you can compute the size of a slice ;

>>> slice_size = int(time * sampling_f)   # time being the time in one slice

Then, something like :

>>> for i in range(nb_chunks):
...     spectro[i] = n.abs(fft(x[i*slice_size:(i+1)*slice_size]))   # you may need to scale data, as shown in the matlab script

And then :

>>> imshow(spectro)

This is a very very crude example. I might have some time this evening
to write a full example if you still need it.

Matthieu
-- 
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher

From nwagner at iam.uni-stuttgart.de Tue Jul 22 05:29:42 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Tue, 22 Jul 2008 11:29:42 +0200
Subject: [SciPy-user] Frequency content of a transient signal
In-Reply-To: References: Message-ID: 

On Tue, 22 Jul 2008 11:17:08 +0200 "Matthieu Brucher" wrote:
> I've found a matlab script; it's not very clear, but it will be a start :
> http://www.mathworks.com/matlabcentral/fileexchange/loadFile.do?objectId=7463
>
> The easiest thing to do is to use an array, so you will do something like :
>
>>>> spectro = n.zeros((nb_chunks, precision))
>
> nb_chunks is the number of time slices you want and precision is the
> precision of the FFT you want.
> sampling_f is your sampling frequency, so you can compute the size of a slice ;
>
>>>> slice_size = int(time * sampling_f)   # time being the time in one slice
>
> Then, something like :
>
>>>> for i in range(nb_chunks):
> ...     spectro[i] = n.abs(fft(x[i*slice_size:(i+1)*slice_size]))   # you
> may need to scale data, as shown in the matlab script
>
> And then :
>
>>>> imshow(spectro)
>
> This is a very very crude example. I might have some time this evening
> to write a full example if you still need it.
>
That would be great ! It could be added to the Cookbook for other users :-).

Thank you very much !

Cheers,
Nils

From matthieu.brucher at gmail.com Tue Jul 22 05:36:48 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Tue, 22 Jul 2008 11:36:48 +0200
Subject: [SciPy-user] Frequency content of a transient signal
In-Reply-To: References: Message-ID: 

>> This is a very very crude example. I might have some time this evening
>> to write a full example if you still need it.
>>
> That would be great ! It could be added to the Cookbook
> for other users :-).
Let's not rush, I have to stop checking my mails and start working if I want to find time for this :D Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher From nwagner at iam.uni-stuttgart.de Tue Jul 22 06:56:36 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 22 Jul 2008 12:56:36 +0200 Subject: [SciPy-user] Frequency content of a transient signal In-Reply-To: References: Message-ID: On Tue, 22 Jul 2008 11:36:48 +0200 "Matthieu Brucher" wrote: >>> This is a very very crude example. I might have some >>>time this evening >>> to write a full example if you still need it. >>> >> That would be great ! It could be added to the Cookbook >> for other users :-). > > Let's not rush, I have to stop checking my mails and >start working if > I want to find time for this :D > > Matthieu > -- >French PhD student > Website : http://matthieu-brucher.developpez.com/ > Blogs : http://matt.eifelle.com and >http://blog.developpez.com/?blog=92 > LinkedIn : http://www.linkedin.com/in/matthieubrucher > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user Hi Matthieu, I made some progress (See test_stft.py). But, how do I choose proper values for the parameter NFFT,noverlap ? Cheers, Nils -------------- next part -------------- A non-text attachment was scrubbed... Name: test_stft.py Type: text/x-python Size: 784 bytes Desc: not available URL: From stefan at sun.ac.za Tue Jul 22 07:59:39 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Tue, 22 Jul 2008 13:59:39 +0200 Subject: [SciPy-user] Frequency content of a transient signal In-Reply-To: References: Message-ID: <9457e7c80807220459o2680f370ge1265b717579ebb7@mail.gmail.com> 2008/7/22 Matthieu Brucher : > I think you can use a sliding DFT on your signal. This way, you will > get a moving estimation of the spectral information (it's a basic > time-frequency transform, before you use wavelets ;)). Wavelets always sound like so much fun. Attached, a script to decompose the signal using pywt (http://wavelets.scipy.org). Regards St?fan -------------- next part -------------- A non-text attachment was scrubbed... Name: growchip_wt.py Type: application/octet-stream Size: 883 bytes Desc: not available URL: From nwagner at iam.uni-stuttgart.de Tue Jul 22 08:14:43 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 22 Jul 2008 14:14:43 +0200 Subject: [SciPy-user] Frequency content of a transient signal In-Reply-To: <9457e7c80807220459o2680f370ge1265b717579ebb7@mail.gmail.com> References: <9457e7c80807220459o2680f370ge1265b717579ebb7@mail.gmail.com> Message-ID: On Tue, 22 Jul 2008 13:59:39 +0200 "St?fan van der Walt" wrote: > 2008/7/22 Matthieu Brucher : >> I think you can use a sliding DFT on your signal. This >>way, you will >> get a moving estimation of the spectral information >>(it's a basic >> time-frequency transform, before you use wavelets ;)). > > Wavelets always sound like so much fun. Attached, a >script to > decompose the signal using pywt >(http://wavelets.scipy.org). > > Regards > St?fan Hi Stefan, Thank you for your reply. I have installed pywt and run your script. It looks impressive but how can I interprete the results ? 
Regards,
Nils

From matthieu.brucher at gmail.com Tue Jul 22 08:37:01 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Tue, 22 Jul 2008 14:37:01 +0200
Subject: [SciPy-user] Frequency content of a transient signal
In-Reply-To: References: Message-ID: 

> Hi Matthieu,
>
> I made some progress (See test_stft.py).
> But, how do I choose proper values for the parameter
> NFFT, noverlap ?
>
> Cheers,
> Nils

Of course, if pylab has everything, it's better :D

This is what I think, I didn't try it much: NFFT is the size of one
slice, so if you want to compute the FFT over 25 ms, NFFT = 25e-3 * 400,
i.e. 10 samples. noverlap depends on the size of the slice and the number
of FFTs you want per second. If you want 50 FFTs per second (50 columns
per second), that means one FFT every 400/50 = 8 samples, so noverlap
is NFFT - 8.

If someone knows this better, please chime in ;)

Matthieu
-- 
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher

From matthieu.brucher at gmail.com Tue Jul 22 08:43:19 2008
From: matthieu.brucher at gmail.com (Matthieu Brucher)
Date: Tue, 22 Jul 2008 14:43:19 +0200
Subject: [SciPy-user] Frequency content of a transient signal
In-Reply-To: <9457e7c80807220459o2680f370ge1265b717579ebb7@mail.gmail.com>
References: <9457e7c80807220459o2680f370ge1265b717579ebb7@mail.gmail.com>
Message-ID: 

2008/7/22 Stéfan van der Walt :
> 2008/7/22 Matthieu Brucher :
>> I think you can use a sliding DFT on your signal. This way, you will
>> get a moving estimation of the spectral information (it's a basic
>> time-frequency transform, before you use wavelets ;)).
>
> Wavelets always sound like so much fun. Attached, a script to
> decompose the signal using pywt (http://wavelets.scipy.org).
>
> Regards
> Stéfan

What is great about the PSD (an estimate of the spectral density) is
precisely that it shows the spectral density. Wavelets are more complex
than the PSD. Let's just say that what a wavelet transform does to a
signal will generally never be a spectral density (IIRC).

Matthieu
-- 
French PhD student
Website : http://matthieu-brucher.developpez.com/
Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn : http://www.linkedin.com/in/matthieubrucher

From fredmfp at gmail.com Tue Jul 22 11:48:08 2008
From: fredmfp at gmail.com (fred)
Date: Tue, 22 Jul 2008 17:48:08 +0200
Subject: [SciPy-user] 3D bilinear interpolation...
Message-ID: <48860138.2060304@gmail.com>

Hi all,

I do know that scipy has a 2D interpolate.interp2d() method, but what
about in 3D ?

TIA.

Cheers,

-- 
Fred

From pav at iki.fi Tue Jul 22 11:54:57 2008
From: pav at iki.fi (Pauli Virtanen)
Date: Tue, 22 Jul 2008 15:54:57 +0000 (UTC)
Subject: [SciPy-user] 3D bilinear interpolation...
References: <48860138.2060304@gmail.com>
Message-ID: 

Tue, 22 Jul 2008 17:48:08 +0200, fred wrote:
> Hi all,
>
> I do know that scipy has a 2D interpolate.interp2d() method, but what
> about in 3D ?

If your grid is regularly spaced, you can use ndimage.map_coordinates:
http://scipy.org/Cookbook/Interpolation?highlight=%28map_coordinates%29

-- 
Pauli Virtanen

From peridot.faceted at gmail.com Tue Jul 22 11:55:44 2008
From: peridot.faceted at gmail.com (Anne Archibald)
Date: Tue, 22 Jul 2008 11:55:44 -0400
Subject: [SciPy-user] Frequency content of a transient signal
In-Reply-To: References: Message-ID: 

2008/7/22 Nils Wagner :

> I made some progress (See test_stft.py).
> But, how do I choose proper values for the parameter > NFFT,noverlap ? Suppose you have a time series with 4096 points and duration 1 s. If you take a real FFT, you will get a complex array of length about 2048. In each bin of this array is a complex number representing the amplitude and phase of a sinusoid at a particular frequency. The frequencies are evenly spaced from zero (a constant, necessarily real) up to (4096/2)/(1 s), that is, 2048 Hz. The spacing is linear, so bin 37 is 37 Hz (in this example). If you don't care about the phase and just want to find the power, you can take the (squared) amplitude of the complex number in each bin. This is useful, but it interprets *everything* as a sinusoid. If you have an array containing ten seconds of noise, one second of a tone, and ten more seconds of noise, doing the above will produce a spectrum in which sinusoids are very carefully chosen to cancel everywhere except during that one second. Correct, and occasionally useful. but hard to interpret. Instead, what you often want is something like the human ear: changes on a short timescale are interpreted in the Fourier domain, as tones, but changes on a longer scale remain in the time domain (tones turning on and off). This is a slightly messy sort of problem; be aware that you are introducing a timescale of transition, and things get very confusing near that timescale. (For the human ear, this frequency is somewhere between a few Hertz and maybe thirty Hertz.) Also, wavelet transforms are designed to deal with this sort of problem in a systematic way, but their results are more complicated to interpret still. So let's use a Fourier transform to compute a "dynamical power spectrum". For illustrative purposes, let's suppose we have a data stream at 48000 samples per second and we want to roughly mimic the human ear. So we want a lowest frequency of about 10 Hz; let's take chunks of 4096 samples, then. Now a first approach would simply chop the time sttream into 4096-sample chunks, FFT each one, and take the (squared) amplitude. This gives you a frequency spectrum with a spectral resolution of (48 kHz/2)/4096, roughly five Hertz, and a time resolution of a tenth of a second. If you want more time resolution take shorter FFTs; if you want more frequency resolution take longer FFTs. You can't have both (even changing the sample rate just changes your highest frequency, not the resolution). This is a fundamental mathematical limitation, the same one that produces the uncertainty principle in quantum mechanics. But subject to that limitation, you've got a dynamic power spectrum. What about that overlap business? Well, suppose you have a tone that turns on for a tenth of a second. Ideally, you'd see it in one FFT but not in any of the others, and it'd be nice and strong. But what if its start time falls in the middle of a chunk? Then you'll see it half as strong in two adjacent chunks - much less visible. So what people often do is make successive chunks overlap. Let's say you do that by a factor of two. Then your tone, instead of falling in the middle of a chunk, appears half as strong in one chunk, at full power in the next, and half as strong in the third. Of course misalignments are still possible, but the worst possible misalignment now puts three-quarters of the tone in each block. The less power you want to risk losing, the more overlap you should use (and, of course, the more time and space it takes). 
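In code, this chunk-and-overlap business might look roughly like the following (a bare-bones sketch of the idea, with no windowing, scaling or frequency interpolation):

import numpy as np

def dynamic_spectrum(data, chunk=4096, overlap=2):
    # Crude dynamic power spectrum: overlapping chunks, squared
    # amplitude of the real FFT of each chunk. overlap=2 means
    # successive chunks start half a chunk apart.
    step = chunk // overlap
    spectra = []
    for start in range(0, len(data) - chunk + 1, step):
        segment = data[start:start + chunk]
        spectra.append(np.abs(np.fft.rfft(segment)) ** 2)
    return np.array(spectra)   # shape (n_chunks, chunk//2 + 1)

Each row is the power spectrum of one chunk; plotting the array with imshow gives the kind of dynamic spectrum I'm describing.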
Also, you should note that you won't get any more time resolution this way - it's like (in fact it is) interpolating the power spectrum in the time dimension. You get more measurements, but the narrowest possible peak is still about a tenth of a second wide. A similar phenomenon occurs in the frequency domain (often called "scalloping"). If you take such an FFT, you get the coefficients needed to express your signal in terms of sinusoids at frequencies (48 kHz/2)/4096 * i. What happens if the true frequency of your tone falls halfway between two of these frequencies? Well, it appears as two sinusoids, one on either side at about half the power (actually there are more further out, but they are even weaker). If you're looking to detect tones, this means that you're more sensitive to some frequencies than others. You can address this with a similar "interpolation" procedure. In this case you'd take your 4096 samples and copy them into an array of size, say, 8192; you'd fill the rest of the array with the mean value of your samples (or zero, but this slightly reduces the junk at the low frequencies). Now when you do your FFT, you get twice as many coefficients; the new ones interpolate the old ones, reducing the "scalloping". As before, the more interpolation you do, the less you lose in sensitivity, but the more it costs in space and time. How should you choose the FFT size and the overlap? Well, to pick the FFT size, decide on the time resolution you want to have, keeping in mind that the reciprocal of this is (roguhly) the frequency resolution you will get. To pick the overlap, you need to think about space and time costs, but overlapping by a factor of two is a good habit, or four if it's not too slow. Whether you want to do interpolation in the frequency domain is up to you, but it's fairly cheap. I'd go with a factor of two. Good luck, Anne From fredmfp at gmail.com Tue Jul 22 12:00:50 2008 From: fredmfp at gmail.com (fred) Date: Tue, 22 Jul 2008 18:00:50 +0200 Subject: [SciPy-user] 3D bilinear interpolation... In-Reply-To: References: <48860138.2060304@gmail.com> Message-ID: <48860432.7040603@gmail.com> Pauli Virtanen a ?crit : > If your grid is regularly spaced, you can user ndimage.map_coordinates Thanks ! Cheers, -- Fred From nwagner at iam.uni-stuttgart.de Tue Jul 22 15:26:54 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 22 Jul 2008 21:26:54 +0200 Subject: [SciPy-user] Frequency content of a transient signal In-Reply-To: References: Message-ID: On Tue, 22 Jul 2008 11:55:44 -0400 "Anne Archibald" wrote: > 2008/7/22 Nils Wagner : > >> I made some progress (See test_stft.py). >> But, how do I choose proper values for the parameter >> NFFT,noverlap ? > > Suppose you have a time series with 4096 points and >duration 1 s. If > you take a real FFT, you will get a complex array of >length about > 2048. In each bin of this array is a complex number >representing the > amplitude and phase of a sinusoid at a particular >frequency. The > frequencies are evenly spaced from zero (a constant, >necessarily real) > up to (4096/2)/(1 s), that is, 2048 Hz. The spacing is >linear, so bin > 37 is 37 Hz (in this example). If you don't care about >the phase and > just want to find the power, you can take the (squared) >amplitude of > the complex number in each bin. > > This is useful, but it interprets *everything* as a >sinusoid. 
If you > have an array containing ten seconds of noise, one >second of a tone, > and ten more seconds of noise, doing the above will >produce a spectrum > in which sinusoids are very carefully chosen to cancel >everywhere > except during that one second. Correct, and occasionally >useful. but > hard to interpret. Instead, what you often want is >something like the > human ear: changes on a short timescale are interpreted >in the Fourier > domain, as tones, but changes on a longer scale remain >in the time > domain (tones turning on and off). This is a slightly >messy sort of > problem; be aware that you are introducing a timescale >of transition, > and things get very confusing near that timescale. (For >the human ear, > this frequency is somewhere between a few Hertz and >maybe thirty > Hertz.) Also, wavelet transforms are designed to deal >with this sort > of problem in a systematic way, but their results are >more complicated > to interpret still. > > So let's use a Fourier transform to compute a "dynamical >power > spectrum". For illustrative purposes, let's suppose we >have a data > stream at 48000 samples per second and we want to >roughly mimic the > human ear. So we want a lowest frequency of about 10 Hz; >let's take > chunks of 4096 samples, then. Now a first approach would >simply chop > the time sttream into 4096-sample chunks, FFT each one, >and take the > (squared) amplitude. This gives you a frequency spectrum >with a > spectral resolution of (48 kHz/2)/4096, roughly five >Hertz, and a time > resolution of a tenth of a second. If you want more time >resolution > take shorter FFTs; if you want more frequency resolution >take longer >FFTs. You can't have both (even changing the sample rate >just changes > your highest frequency, not the resolution). This is a >fundamental > mathematical limitation, the same one that produces the >uncertainty > principle in quantum mechanics. But subject to that >limitation, you've > got a dynamic power spectrum. > > What about that overlap business? Well, suppose you have >a tone that > turns on for a tenth of a second. Ideally, you'd see it >in one FFT but > not in any of the others, and it'd be nice and strong. >But what if its > start time falls in the middle of a chunk? Then you'll >see it half as > strong in two adjacent chunks - much less visible. So >what people > often do is make successive chunks overlap. Let's say >you do that by a > factor of two. Then your tone, instead of falling in the >middle of a > chunk, appears half as strong in one chunk, at full >power in the next, > and half as strong in the third. Of course misalignments >are still > possible, but the worst possible misalignment now puts >three-quarters > of the tone in each block. The less power you want to >risk losing, the > more overlap you should use (and, of course, the more >time and space > it takes). Also, you should note that you won't get any >more time > resolution this way - it's like (in fact it is) >interpolating the > power spectrum in the time dimension. You get more >measurements, but > the narrowest possible peak is still about a tenth of a >second wide. > > A similar phenomenon occurs in the frequency domain >(often called > "scalloping"). If you take such an FFT, you get the >coefficients > needed to express your signal in terms of sinusoids at >frequencies (48 > kHz/2)/4096 * i. What happens if the true frequency of >your tone falls > halfway between two of these frequencies? 
Well, it >appears as two > sinusoids, one on either side at about half the power >(actually there > are more further out, but they are even weaker). If >you're looking to > detect tones, this means that you're more sensitive to >some > frequencies than others. You can address this with a >similar > "interpolation" procedure. In this case you'd take your >4096 samples > and copy them into an array of size, say, 8192; you'd >fill the rest of > the array with the mean value of your samples (or zero, >but this > slightly reduces the junk at the low frequencies). Now >when you do > your FFT, you get twice as many coefficients; the new >ones interpolate > the old ones, reducing the "scalloping". As before, the >more > interpolation you do, the less you lose in sensitivity, >but the more > it costs in space and time. > > How should you choose the FFT size and the overlap? >Well, to pick the >FFT size, decide on the time resolution you want to have, >keeping in > mind that the reciprocal of this is (roguhly) the >frequency resolution > you will get. To pick the overlap, you need to think >about space and > time costs, but overlapping by a factor of two is a good >habit, or > four if it's not too slow. Whether you want to do >interpolation in the > frequency domain is up to you, but it's fairly cheap. >I'd go with a > factor of two. > > Good luck, > Anne Hi Anne, Thank you very much for your reply, which is very rich in content. It will take some time to "stomach" it though ;) Are you aware of a good reference to delve into this topic ? Best wishes Nils From matthieu.brucher at gmail.com Tue Jul 22 15:50:10 2008 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Tue, 22 Jul 2008 21:50:10 +0200 Subject: [SciPy-user] Frequency content of a transient signal In-Reply-To: References: Message-ID: > Hi Anne, > > Thank you very much for your reply, which is very rich in > content. It will take some time to "stomach" it though ;) > > Are you aware of a good reference to delve into this topic > ? Any book on signal processing ;) I have to correct something. When you do an FFT of a 4096 samples real signal, you get 2047 meaning full complex values and 2 real ones. Matthieu -- French PhD student Website : http://matthieu-brucher.developpez.com/ Blogs : http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn : http://www.linkedin.com/in/matthieubrucher From timmichelsen at gmx-topmail.de Tue Jul 22 18:46:06 2008 From: timmichelsen at gmx-topmail.de (Tim Michelsen) Date: Wed, 23 Jul 2008 00:46:06 +0200 Subject: [SciPy-user] Statistics advise with scipy In-Reply-To: <200807212307.31208.pgmdevlist@gmail.com> References: <200807211312.40202.pgmdevlist@gmail.com> <200807212307.31208.pgmdevlist@gmail.com> Message-ID: >> My data is not normal. Do you know robusts method in scipy ? Or maybe in an >> other python module ? > > Mmh, I'm sure you could implement some yourself. That way, we could start > another scikits. There are already some winsorization and trimming functions > in scipy.stats. > Alternatively, you can try to use R and numpy through rpy: > http://rpy.sourceforge.net/ Dider, may I ask you to give some feedback what method worked for you? I am also working with the problem of removing outliners etc. from data. 
Thanks in advance, Timmie From didier.rano at gmail.com Tue Jul 22 18:55:32 2008 From: didier.rano at gmail.com (didier rano) Date: Tue, 22 Jul 2008 18:55:32 -0400 Subject: [SciPy-user] Statistics advise with scipy In-Reply-To: References: <200807211312.40202.pgmdevlist@gmail.com> <200807212307.31208.pgmdevlist@gmail.com> Message-ID: Hi, I haven't found yet a solution to my problem. But I am reading a good article about removing outliers: http://www.lcgceurope.com/lcgceurope/data/articlestandard//lcgceurope/502001/4509/article.pdf Now, I need to experiment methods described in this article. Thanks Didier Rano 2008/7/22 Tim Michelsen : > >> My data is not normal. Do you know robusts method in scipy ? Or maybe in > an > >> other python module ? > > > > Mmh, I'm sure you could implement some yourself. That way, we could start > > another scikits. There are already some winsorization and trimming > functions > > in scipy.stats. > > Alternatively, you can try to use R and numpy through rpy: > > http://rpy.sourceforge.net/ > Dider, > may I ask you to give some feedback what method worked for you? > I am also working with the problem of removing outliners etc. from data. > > Thanks in advance, > Timmie > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Didier Rano didier.rano at gmail.com http://www.jaxtr.com/didierrano -------------- next part -------------- An HTML attachment was scrubbed... URL: From smcphers at stanford.edu Wed Jul 23 03:07:58 2008 From: smcphers at stanford.edu (Selwyn-Lloyd McPherson) Date: Wed, 23 Jul 2008 00:07:58 -0700 Subject: [SciPy-user] SciPy builds and installs but won't import due to f2c Message-ID: <06BDD519-19A2-4BE1-B6B7-C3BAB3AD799E@stanford.edu> Hi everyone, I've been working on getting SciPy installed on my machine and it seems almost there. I've got a x86_64 system running Red Hat and more or less followed http://www.scipy.org/Installing_SciPy/Linux#head-1c4018a51422706809ee96a4db03ca0669f5f6d1 to build and install SciPy. After dealing with the whole fPIC mess that the 64-bit processor requires the install seems to finish but when I try to import, I get: >>> from scipy import * Traceback (most recent call last): File "", line 1, in ? File "/usr/lib64/python2.4/site-packages/scipy/linalg/__init__.py", line 8, in ? from basic import * File "/usr/lib64/python2.4/site-packages/scipy/linalg/basic.py", line 17, in ? from lapack import get_lapack_funcs File "/usr/lib64/python2.4/site-packages/scipy/linalg/lapack.py", line 17, in ? from scipy.linalg import flapack ImportError: /usr/lib/libf2c.so.0: undefined symbol: MAIN__ Apparently a problem with f2c. . . Have any ideas? Thanks so much for reading! Selwyn-Lloyd From massimo.sandal at unibo.it Wed Jul 23 06:58:00 2008 From: massimo.sandal at unibo.it (massimo sandal) Date: Wed, 23 Jul 2008 12:58:00 +0200 Subject: [SciPy-user] scipy.stats.gaussian_kde for 2d kernel density estimation Message-ID: <48870EB8.8010208@unibo.it> Hi, I can't figure out how to do bivariate kernel density estimation with the scipy.stats.gaussian_kde module .1D evaluation seems working, but 2D evaluation escapes me. I have two vectors representing x and y coordinates of points: xvect=[72.11,81.52,66.46,52.34,81.12,76.83,...] yvect=[26.91,17.39,28.84,15.05,10.21,26.42,...] The problem is: how do I build the grid to evaluate the points? 
I would expect that he wants an x range and a y range (something like xrange=numpy.arange(0,100,1) ) from which he builds and evaluates the grid. However this does not work, and I do not understand how to create a proper 2d-grid that has the correct coordinate information. Where can I look for information? The built-in documentation seems a bit terse on this point. Thanks a lot, m. -- Massimo Sandal , Ph.D. University of Bologna Department of Biochemistry "G.Moruzzi" snail mail: Via Irnerio 48, 40126 Bologna, Italy email: massimo.sandal at unibo.it web: http://www.biocfarm.unibo.it/samori/people/sandal.html tel: +39-051-2094388 fax: +39-051-2094387 -------------- next part -------------- A non-text attachment was scrubbed... Name: massimo_sandal.vcf Type: text/x-vcard Size: 274 bytes Desc: not available URL: From wnbell at gmail.com Wed Jul 23 07:14:11 2008 From: wnbell at gmail.com (Nathan Bell) Date: Wed, 23 Jul 2008 06:14:11 -0500 Subject: [SciPy-user] sparsetools involvement? (was -lf77compat build failure) In-Reply-To: References: <001301c8e84f$2dfdd750$89f985f0$@auburn.edu> Message-ID: On Fri, Jul 18, 2008 at 8:37 AM, Nathan Bell wrote: > > Phil, could you try compiling SciPy from SVN? Much of sparsetools has > changed since the last release. > > I believe there was a missing #include in sparsetools.h in > the release you're using. > Phil, I was mistaken about the missing #include , I believe the fix is to add #include instead. Here's a related ticket: http://scipy.org/scipy/scipy/ticket/706 I still would appreciate you testing SciPy from SVN on your system. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From dave.hirschfeld at gmail.com Wed Jul 23 09:12:54 2008 From: dave.hirschfeld at gmail.com (Dave) Date: Wed, 23 Jul 2008 13:12:54 +0000 (UTC) Subject: [SciPy-user] =?utf-8?q?scipy=2Estats=2Egaussian=5Fkde_for_2d_kern?= =?utf-8?q?el_density=09estimation?= References: <48870EB8.8010208@unibo.it> Message-ID: massimo sandal unibo.it> writes: > > Hi, > > I can't figure out how to do bivariate kernel density estimation with > the scipy.stats.gaussian_kde module .1D evaluation seems working, but 2D > evaluation escapes me. > > I have two vectors representing x and y coordinates of points: > > xvect=[72.11,81.52,66.46,52.34,81.12,76.83,...] > yvect=[26.91,17.39,28.84,15.05,10.21,26.42,...] > > The problem is: how do I build the grid to evaluate the points? Hopefully the example below will help... 
-Dave import numpy as np import scipy.stats as stats from matplotlib.pyplot import imshow # Create some dummy data rvs = np.append(stats.norm.rvs(loc=2,scale=1,size=(2000,1)), stats.norm.rvs(loc=0,scale=3,size=(2000,1)), axis=1) kde = stats.kde.gaussian_kde(rvs.T) # Regular grid to evaluate kde upon x_flat = np.r_[rvs[:,0].min():rvs[:,0].max():128j] y_flat = np.r_[rvs[:,1].min():rvs[:,1].max():128j] x,y = np.meshgrid(x_flat,y_flat) grid_coords = np.append(x.reshape(-1,1),y.reshape(-1,1),axis=1) z = kde(grid_coords.T) z = z.reshape(128,128) imshow(z,aspect=x_flat.ptp()/y_flat.ptp()) From dfranci at seas.upenn.edu Wed Jul 23 09:58:02 2008 From: dfranci at seas.upenn.edu (Frank Lagor) Date: Wed, 23 Jul 2008 09:58:02 -0400 Subject: [SciPy-user] SciPy builds and installs but won't import due to f2c In-Reply-To: <06BDD519-19A2-4BE1-B6B7-C3BAB3AD799E@stanford.edu> References: <06BDD519-19A2-4BE1-B6B7-C3BAB3AD799E@stanford.edu> Message-ID: <9fddf64a0807230658n64438a98y5192822ea9706d2e@mail.gmail.com> Dear Selwyn-Lloyd, I hope I can help some, but I am not at all an expert on these things. I had what seemed like a similar problem (VERY similar, if I can remember). Here's what got me running-- my numpy installation was based on the lapack version that was causing my problem (I don't know if it was a lapack or scipy problem but whatever...) Anyways, I re-did the install of numpy, and I did not specify what lapack to use in the install. I you want to do this, be very careful that you do not have a LAPACK environment variable which is pointing somewhere and you don't know it. If you do, just unset it (unset LAPACK). The numpy install should not find any LAPACK (you'll be able to read that on the screen during install), and it will use some default one. Then re-do the scipy install and it should work. This is sort of a hack fix, because you will be using a non-optimized version of Lapack (assuming you wanted to use ATLAS or something), but it will get you running. One last thing, unfortunately I believe the default version of lapack installed is lapack lite or something like that (not the full lapack). This may cause you to not have full functionality. It has only caused me problems on one function: interp2d for 2d interpolation. Also, you are on Red Hat, why don't you use some red hat package manager or builtin software to install this stuff? That's all I've got, Frank On Wed, Jul 23, 2008 at 3:07 AM, Selwyn-Lloyd McPherson < smcphers at stanford.edu> wrote: > Hi everyone, > > I've been working on getting SciPy installed on my machine and it > seems almost there. I've got a x86_64 system running Red Hat and more > or less followed > http://www.scipy.org/Installing_SciPy/Linux#head-1c4018a51422706809ee96a4db03ca0669f5f6d1 > to build and install SciPy. After dealing with the whole fPIC mess > that the 64-bit processor requires the install seems to finish but > when I try to import, I get: > > >>> from scipy import * > Traceback (most recent call last): > File "", line 1, in ? > File "/usr/lib64/python2.4/site-packages/scipy/linalg/__init__.py", > line 8, in ? > from basic import * > File "/usr/lib64/python2.4/site-packages/scipy/linalg/basic.py", > line 17, in ? > from lapack import get_lapack_funcs > File "/usr/lib64/python2.4/site-packages/scipy/linalg/lapack.py", > line 17, in ? > from scipy.linalg import flapack > ImportError: /usr/lib/libf2c.so.0: undefined symbol: MAIN__ > > > Apparently a problem with f2c. . . Have any ideas? > > Thanks so much for reading! 
From dfranci at seas.upenn.edu  Wed Jul 23 09:58:02 2008
From: dfranci at seas.upenn.edu (Frank Lagor)
Date: Wed, 23 Jul 2008 09:58:02 -0400
Subject: [SciPy-user] SciPy builds and installs but won't import due to f2c
In-Reply-To: <06BDD519-19A2-4BE1-B6B7-C3BAB3AD799E@stanford.edu>
References: <06BDD519-19A2-4BE1-B6B7-C3BAB3AD799E@stanford.edu>
Message-ID: <9fddf64a0807230658n64438a98y5192822ea9706d2e@mail.gmail.com>

Dear Selwyn-Lloyd,

I hope I can help some, but I am not at all an expert on these things. I
had what seemed like a similar problem (VERY similar, if I remember
correctly). Here's what got me running -- my numpy installation was based
on the LAPACK version that was causing my problem (I don't know if it was
a LAPACK or scipy problem, but whatever...). Anyway, I re-did the install
of numpy, and I did not specify what LAPACK to use in the install. If you
want to do this, be very careful that you do not have a LAPACK environment
variable which is pointing somewhere without you knowing it. If you do,
just unset it (unset LAPACK). The numpy install should then not find any
LAPACK (you'll be able to read that on the screen during the install), and
it will use some default one. Then re-do the scipy install and it should
work. This is sort of a hack fix, because you will be using a
non-optimized version of LAPACK (assuming you wanted to use ATLAS or
something), but it will get you running. One last thing: unfortunately I
believe the default version of LAPACK installed is lapack lite or
something like that (not the full LAPACK). This may cause you to not have
full functionality. It has only caused me problems on one function:
interp2d for 2D interpolation. Also, you are on Red Hat -- why don't you
use some Red Hat package manager or builtin software to install this
stuff?

That's all I've got,
Frank

On Wed, Jul 23, 2008 at 3:07 AM, Selwyn-Lloyd McPherson
<smcphers at stanford.edu> wrote:
> Hi everyone,
>
> I've been working on getting SciPy installed on my machine and it
> seems almost there. I've got a x86_64 system running Red Hat and more
> or less followed
> http://www.scipy.org/Installing_SciPy/Linux#head-1c4018a51422706809ee96a4db03ca0669f5f6d1
> to build and install SciPy. After dealing with the whole fPIC mess
> that the 64-bit processor requires, the install seems to finish but
> when I try to import, I get:
>
> >>> from scipy import *
> Traceback (most recent call last):
>   File "", line 1, in ?
>   File "/usr/lib64/python2.4/site-packages/scipy/linalg/__init__.py",
>     line 8, in ?
>     from basic import *
>   File "/usr/lib64/python2.4/site-packages/scipy/linalg/basic.py",
>     line 17, in ?
>     from lapack import get_lapack_funcs
>   File "/usr/lib64/python2.4/site-packages/scipy/linalg/lapack.py",
>     line 17, in ?
>     from scipy.linalg import flapack
> ImportError: /usr/lib/libf2c.so.0: undefined symbol: MAIN__
>
> Apparently a problem with f2c... Have any ideas?
>
> Thanks so much for reading!
>
> Selwyn-Lloyd

--
Frank Lagor
Ph.D. Candidate
Mechanical Engineering and Applied Mechanics
University of Pennsylvania
dfranci at seas.upenn.edu
215.898.3144

From david.huard at gmail.com  Wed Jul 23 11:01:30 2008
From: david.huard at gmail.com (David Huard)
Date: Wed, 23 Jul 2008 11:01:30 -0400
Subject: [SciPy-user] Statistics advise with scipy
References: <200807211312.40202.pgmdevlist@gmail.com> <200807212307.31208.pgmdevlist@gmail.com>
Message-ID: <91cf711d0807230801o3f5b3b1ahc07dd0f53251b0a0@mail.gmail.com>

I've had some success with the following:

1. Define a simple statistical model for your data. That is, from the
   previous data, define a distribution for the probability of the next
   point.
2. Define a cutoff probability separating valid data from outliers.
3. For each datum, compute its probability based on previous data, and
   tag it as valid or outlier.

The advantage is that you can start with a simple statistical model (for
example a Gaussian centered on the last valid entry) and customize it as
you find cases that are not well handled.

David

2008/7/22 didier rano:
> Hi,
> I haven't yet found a solution to my problem. But I am reading a good
> article about removing outliers:
> http://www.lcgceurope.com/lcgceurope/data/articlestandard//lcgceurope/502001/4509/article.pdf
> Now, I need to experiment with the methods described in this article.
> Thanks
> Didier Rano
>
> 2008/7/22 Tim Michelsen:
>> [...]
>> Didier,
>> may I ask you to give some feedback on which method worked for you?
>> I am also working on the problem of removing outliers etc. from data.
>>
>> Thanks in advance,
>> Timmie
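To make the recipe concrete, here is a minimal sketch of steps 1-3 under
deliberately simple assumptions: the model is a Gaussian centered on the
last valid entry, its width sigma is fixed and assumed known, and the
probability cutoff is expressed as a number of standard deviations. The
helper name and parameters are illustrative, not an existing scipy API:

import numpy as np

def tag_outliers(data, sigma=1.0, n_sigma=3.0):
    # Step 1: model the next point as a Gaussian around the last valid entry.
    # Step 2: the cutoff is n_sigma standard deviations.
    # Step 3: scan the data in order, tagging each point.
    valid = np.empty(len(data), dtype=bool)
    last = data[0]
    for i, x in enumerate(data):
        valid[i] = abs(x - last) <= n_sigma * sigma
        if valid[i]:
            last = x          # outliers do not update the model
    return valid

A more faithful version would replace the fixed sigma with one estimated
from the valid points seen so far, which is exactly the kind of
customization David suggests.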
From stefan at sun.ac.za  Wed Jul 23 11:16:57 2008
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Wed, 23 Jul 2008 17:16:57 +0200
Subject: [SciPy-user] Statistics advise with scipy
In-Reply-To: <91cf711d0807230801o3f5b3b1ahc07dd0f53251b0a0@mail.gmail.com>
References: <200807211312.40202.pgmdevlist@gmail.com> <200807212307.31208.pgmdevlist@gmail.com> <91cf711d0807230801o3f5b3b1ahc07dd0f53251b0a0@mail.gmail.com>
Message-ID: <9457e7c80807230816w3e25a20are4dc4f6a8d732643@mail.gmail.com>

2008/7/23 David Huard:
> I've had some success with the following:
>
> 1. Define a simple statistical model for your data. That is, from the
>    previous data, define a distribution for the probability of the next
>    point.
> 2. Define a cutoff probability separating valid data from outliers.
> 3. For each datum, compute its probability based on previous data, and
>    tag it as valid or outlier.
> [...]

There's also http://www.scipy.org/Cookbook/RANSAC

Cheers
Stéfan

From christian-baehnisch at gmx.de  Wed Jul 23 16:46:14 2008
From: christian-baehnisch at gmx.de (Christian Bähnisch)
Date: Wed, 23 Jul 2008 22:46:14 +0200
Subject: [SciPy-user] spsolve
Message-ID: <200807232246.14492.christian-baehnisch@gmx.de>

Hello,

I'm using "scipy.linsolve.spsolve". Unfortunately, I have not been able to
figure out if it is possible to get the number of iterations actually used
by the method. I would be very glad if somebody could help me on this
topic.

greetings,
Christian

From aisaac at american.edu  Wed Jul 23 18:24:50 2008
From: aisaac at american.edu (Alan G Isaac)
Date: Wed, 23 Jul 2008 18:24:50 -0400
Subject: [SciPy-user] Fwd: Re: permission requested
Message-ID:

822

From smcphers at stanford.edu  Wed Jul 23 18:21:05 2008
From: smcphers at stanford.edu (Selwyn-Lloyd McPherson)
Date: Wed, 23 Jul 2008 15:21:05 -0700
Subject: [SciPy-user] SciPy builds and installs but won't import due to f2c
In-Reply-To: <9fddf64a0807230658n64438a98y5192822ea9706d2e@mail.gmail.com>
References: <06BDD519-19A2-4BE1-B6B7-C3BAB3AD799E@stanford.edu> <9fddf64a0807230658n64438a98y5192822ea9706d2e@mail.gmail.com>
Message-ID: <1AB18010-2249-4FEA-A6D3-96C001164ABF@stanford.edu>

Unfortunately, scipy doesn't seem to want to build without LAPACK -- I've
tried lots of different LAPACK distributions. I've tried using ATLAS with
LAPACK and ATLAS without LAPACK (built LAPACK standalone) with lots of
different flags, but nothing seems to work. I've tried rebuilding the f2c
libraries but it still comes up with the "undefined symbol" MAIN__ error.
I've read here that inserting a

    int MAIN__( )
    { return(0);
    }

in the codebase somewhere might fix the problem, but I'm not sure where it
could go.

Is there any reason that scipy / LAPACK needs f2c? Can't it just use f2py?

Selwyn-Lloyd

On Jul 23, 2008, at 6:58 Antemeridian, Frank Lagor wrote:
> Dear Selwyn-Lloyd,
> [...]
> Also, you are on Red Hat -- why don't you use some Red Hat package
> manager or builtin software to install this stuff?
>
> That's all I've got,
> Frank

From dfranci at seas.upenn.edu  Wed Jul 23 18:39:19 2008
From: dfranci at seas.upenn.edu (Frank Lagor)
Date: Wed, 23 Jul 2008 18:39:19 -0400
Subject: [SciPy-user] SciPy builds and installs but won't import due to f2c
In-Reply-To: <1AB18010-2249-4FEA-A6D3-96C001164ABF@stanford.edu>
References: <06BDD519-19A2-4BE1-B6B7-C3BAB3AD799E@stanford.edu> <9fddf64a0807230658n64438a98y5192822ea9706d2e@mail.gmail.com> <1AB18010-2249-4FEA-A6D3-96C001164ABF@stanford.edu>
Message-ID: <9fddf64a0807231539x262d38cj918926e957f90161@mail.gmail.com>

Honestly, I don't know. Here is a tip, though, that caused me a problem
with numpy and scipy installs, and I just want to make sure you are aware
of it. Maybe it will help...

Sometimes when you execute the step "python setup.py build" after you have
been trying a bunch of different things, it does not rebuild the files as
you want it to, so that when you do "python setup.py install" it is not
installing code that was built with the options you wanted. To avoid this
pitfall, EVERY time you try to install, execute:

python setup.py clean

and

rm -rf build    (this removes the old build directory)

Perhaps that was helpful... :(

Good Luck,
Frank

On Wed, Jul 23, 2008 at 6:21 PM, Selwyn-Lloyd McPherson
<smcphers at stanford.edu> wrote:
> Unfortunately, scipy doesn't seem to want to build without LAPACK [...]
> Is there any reason that scipy / LAPACK needs f2c? Can't it just use
> f2py?
>
> Selwyn-Lloyd
From robert.kern at gmail.com  Wed Jul 23 18:45:23 2008
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 23 Jul 2008 17:45:23 -0500
Subject: [SciPy-user] SciPy builds and installs but won't import due to f2c
In-Reply-To: <1AB18010-2249-4FEA-A6D3-96C001164ABF@stanford.edu>
References: <06BDD519-19A2-4BE1-B6B7-C3BAB3AD799E@stanford.edu> <9fddf64a0807230658n64438a98y5192822ea9706d2e@mail.gmail.com> <1AB18010-2249-4FEA-A6D3-96C001164ABF@stanford.edu>
Message-ID: <3d375d730807231545k2c5b0fb5m2e997750cb8c6809@mail.gmail.com>

On Wed, Jul 23, 2008 at 17:21, Selwyn-Lloyd McPherson
<smcphers at stanford.edu> wrote:
> Unfortunately, scipy doesn't seem to want to build without LAPACK [...]
> Is there any reason that scipy / LAPACK needs f2c? Can't it just use
> f2py?

It doesn't use f2c to convert the Fortran code to C. However, some Fortran
compilers use libf2c as a runtime library. Without more details about how
you built things, I don't really know why you are having linking problems.

One thing to consider is that using the environment variable LDFLAGS
replaces all link flags when building Fortran extension modules. So if you
are using $LDFLAGS, then you may have removed important link flags.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth." -- Umberto Eco

From wnbell at gmail.com  Wed Jul 23 22:48:41 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Wed, 23 Jul 2008 21:48:41 -0500
Subject: [SciPy-user] spsolve
In-Reply-To: <200807232246.14492.christian-baehnisch@gmx.de>
References: <200807232246.14492.christian-baehnisch@gmx.de>
Message-ID:

On Wed, Jul 23, 2008 at 3:46 PM, Christian Bähnisch wrote:
>
> I'm using "scipy.linsolve.spsolve". Unfortunately, I have not been able
> to figure out if it is possible to get the number of iterations actually
> used by the method. I would be very glad if somebody could help me on
> this topic.

Christian, the sparse solvers in scipy.linsolve are what is known as
"direct" methods (as opposed to "iterative" methods). Rather than
iteratively solving a linear system (like the Conjugate Gradient method),
direct solvers just compute a sparse LU factorization of the matrix in
order to solve the system. In order to do this efficiently, and to keep L
and U as sparse as possible, they incorporate a number of fairly
sophisticated algorithms/heuristics.

In SciPy 0.6, scipy.linsolve ultimately calls either the SuperLU or
UMFPACK packages to perform the solve.

If you're interested in iterative solvers, there are several in
scipy.linalg.iterative.

--
Nathan Bell wnbell at gmail.com
http://graphics.cs.uiuc.edu/~wnbell/
From dfranci at seas.upenn.edu  Wed Jul 23 23:02:05 2008
From: dfranci at seas.upenn.edu (Frank Lagor)
Date: Wed, 23 Jul 2008 23:02:05 -0400
Subject: [SciPy-user] error ellipse
Message-ID: <9fddf64a0807232002g31ba7a80nc4a26291c8590caf@mail.gmail.com>

Does anyone know if there is a function in scipy.stats, or somewhere else,
to plot a 2D error ellipse on a scatter plot of some data, given the
covariance matrix? I have not been able to find one.

Thanks in advance for your help,
Frank

From david at ar.media.kyoto-u.ac.jp  Wed Jul 23 23:23:55 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Thu, 24 Jul 2008 12:23:55 +0900
Subject: [SciPy-user] Frequency content of a transient signal
Message-ID: <4887F5CB.6080903@ar.media.kyoto-u.ac.jp>

Nils Wagner wrote:
> Fine. Let's start with the example from the link
>
> How do I produce the nice spectrograms

matplotlib has a function specgram, which mimics matlab's, and should do
what you want. You will need to adapt the window size to the timescale of
your transients, while keeping in mind that the frequency resolution of
your spectrogram depends on the window size (the shorter the window, the
cruder your frequency resolution is).

If you need a more elaborate time-frequency representation, then wavelets,
or things like matching pursuit, may be suitable, but they are not trivial
to use/code if you are new to signal processing. For audio signals, which
I am more familiar with, wavelets are generally not useful. On matching
pursuit: http://www.scholarpedia.org/article/Matching_pursuit (I wanted to
start wrapping MPTK in a scikit, but never got the time to start
something: http://mptk.irisa.fr/).

cheers,

David
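As a concrete illustration of the window-size trade-off David describes,
here is a small self-contained sketch; the signal, sampling rate, and NFFT
values are made up for illustration, while specgram and its NFFT/Fs/
noverlap keywords are matplotlib's:

import numpy as np
import matplotlib.pyplot as plt

fs = 1000.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1.0/fs)
# A transient: a 50 Hz tone that switches to 200 Hz after one second.
sig = np.where(t < 1.0, np.sin(2*np.pi*50*t), np.sin(2*np.pi*200*t))

# Short window: good time localization, crude frequency resolution.
plt.subplot(211)
plt.specgram(sig, NFFT=64, Fs=fs, noverlap=32)

# Long window: the reverse trade-off.
plt.subplot(212)
plt.specgram(sig, NFFT=512, Fs=fs, noverlap=256)
plt.show()

The transition at t = 1 s is sharp in the top panel but smeared in the
bottom one, while the two tones are sharper in frequency in the bottom
panel.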
From ajvogel at tuks.co.za  Thu Jul 24 01:59:03 2008
From: ajvogel at tuks.co.za (Adolph J. Vogel)
Date: Thu, 24 Jul 2008 07:59:03 +0200
Subject: [SciPy-user] Parallel linear solver.
Message-ID: <200807240759.03710.ajvogel@tuks.co.za>

Are there any bindings available for Python to a parallel linear solver
package like ScaLAPACK? I couldn't find anything conclusive on Google.

Thanx, Adolph

From christian-baehnisch at gmx.de  Thu Jul 24 02:57:33 2008
From: christian-baehnisch at gmx.de (Christian Bähnisch)
Date: Thu, 24 Jul 2008 08:57:33 +0200
Subject: [SciPy-user] spsolve
References: <200807232246.14492.christian-baehnisch@gmx.de>
Message-ID: <200807240857.33610.christian-baehnisch@gmx.de>

Thank you for your quick reply! Somehow I believed spsolve was a conjugate
gradient method ... my fault. If I got it right, the conjugate gradient
method in scipy.linalg.iterative should benefit from the sparsity of the
matrices (symmetric and p. semi-definite) I put into it?

> Christian, the sparse solvers in scipy.linsolve are what is known as
> "direct" methods (as opposed to "iterative" methods). [...]
>
> If you're interested in iterative solvers, there are several in
> scipy.linalg.iterative.

From wnbell at gmail.com  Thu Jul 24 05:05:58 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Thu, 24 Jul 2008 04:05:58 -0500
Subject: [SciPy-user] spsolve
In-Reply-To: <200807240857.33610.christian-baehnisch@gmx.de>
References: <200807232246.14492.christian-baehnisch@gmx.de> <200807240857.33610.christian-baehnisch@gmx.de>
Message-ID:

On Thu, Jul 24, 2008 at 1:57 AM, Christian Bähnisch wrote:
> Thank you for your quick reply! Somehow I believed spsolve was a
> conjugate gradient method ... my fault.

No problem. Documentation for the sparse solvers is incomplete, so it's
easy to get confused :)

> If I got it right, the conjugate gradient method in
> scipy.linalg.iterative should benefit from the sparsity of the matrices
> (symmetric and p. semi-definite) I put into it?

If your matrix is in a sparse format, such as the CSR or CSC formats in
scipy.sparse, then iterative solvers like conjugate gradient will execute
much more quickly.

--
Nathan Bell wnbell at gmail.com
http://graphics.cs.uiuc.edu/~wnbell/
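Tying this back to the original question -- counting iterations -- the
iterative solvers report convergence through their info return value, and
a user-supplied callback can count iterations explicitly. A minimal
sketch: the import path for cg has moved between scipy versions
(scipy.linalg.iterative in 0.6, scipy.sparse.linalg later), and the
callback keyword is assumed to be available, as it is in later releases:

import numpy as np
from scipy import sparse
from scipy.sparse.linalg import cg   # older releases: scipy.linalg.iterative

# A small symmetric positive definite system (1D Poisson matrix).
n = 100
data = np.array([-np.ones(n), 2.0*np.ones(n), -np.ones(n)])
A = sparse.spdiags(data, [-1, 0, 1], n, n).tocsr()
b = np.ones(n)

count = [0]
def counter(xk):        # called once per CG iteration
    count[0] += 1

x, info = cg(A, b, tol=1e-8, callback=counter)
print 'info:', info, '(0 means converged)'
print 'iterations used:', count[0]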
From wesmckinn at gmail.com  Thu Jul 24 09:00:38 2008
From: wesmckinn at gmail.com (Wes McKinney)
Date: Thu, 24 Jul 2008 09:00:38 -0400
Subject: [SciPy-user] Equivalent to 'match' function in R?
Message-ID: <6c476c8a0807240600r138c80fdmdd5d87accfdad73e@mail.gmail.com>

Hi all,

I've been working with users lately who are transitioning from using R to
NumPy/SciPy. Some are accustomed to using the 'match' function, for
example:

> allData <- cbind(c(1,2,3,4,5), c(12, 19, 27, 38, 51))
> allData
     [,1] [,2]
[1,]    1   12
[2,]    2   19
[3,]    3   27
[4,]    4   38
[5,]    5   51
> subData <- cbind(c(3,5,1), c(NA, NA, NA))
> matchInd <- match(subData[,1], allData[,1])
> subData[,2] <- allData[matchInd,2]
> subData
     [,1] [,2]
[1,]    3   27
[2,]    5   51
[3,]    1   12

> allData <- cbind(c(1,2,3,4,5), rep(NA,5))
> subData <- cbind(c(3,5,1), c(27, 51, 12))
> matchInd <- match(subData[,1], allData[,1])
> allData[matchInd,2] <- subData[,2]
> allData
     [,1] [,2]
[1,]    1   12
[2,]    2   NA
[3,]    3   27
[4,]    4   NA
[5,]    5   51

Before I reinvent the wheel, I was curious if there is an equivalent
function elsewhere already. If not, then some Cython/Pyrex is in order.

Thanks,
Wes

From yosefmel at post.tau.ac.il  Thu Jul 24 09:35:53 2008
From: yosefmel at post.tau.ac.il (Yosef Meller)
Date: Thu, 24 Jul 2008 16:35:53 +0300
Subject: [SciPy-user] Equivalent to 'match' function in R?
In-Reply-To: <6c476c8a0807240600r138c80fdmdd5d87accfdad73e@mail.gmail.com>
References: <6c476c8a0807240600r138c80fdmdd5d87accfdad73e@mail.gmail.com>
Message-ID: <200807241635.53457.yosefmel@post.tau.ac.il>

On Thursday 24 July 2008 16:00:38 Wes McKinney wrote:
> Hi all,
>
> I've been working with users lately who are transitioning from using R
> to NumPy/SciPy. Some are accustomed to using the 'match' function, for
> example:
[snipped outputs]
> Before I reinvent the wheel, I was curious if there is an equivalent
> function elsewhere already. If not, then some Cython/Pyrex is in order.

If I understand this correctly, this is what you want:

---
from scipy import c_, r_, equal, where, nan

vals = r_[12, 19, 27, 38, 51]
selector = r_[3, 5, 1]
idxs = r_[1, 2, 3, 4, 5]  # I assume these don't have to be consecutive.

mask = equal.outer(idxs, selector).any(axis=1)
alldata = c_[idxs, where(mask, vals, nan)]
---

Maybe it looks a little less obvious, but it does the job.
Anyways, Lisandro Dalcin, a friend of mine, wrote petsc4py bindings that allow you to do anything you want in PETSc in python. The bindings are very good, but not well documented. It doesn't really matter however, because he is so accessible, and PETSc's documentation is so good that you can browse through it and guess what the correcponding syntax is in petsc4py. I highly recommend that you check it out. Let me know how it goes or if you have any problems. -Frank On Thu, Jul 24, 2008 at 1:59 AM, Adolph J. Vogel wrote: > Are there any bindings available for python to a Parallel linear solver > package like ScaLapack? I couldnt find anything conclusive on google. > > Thanx, Adolph > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Frank Lagor Ph.D. Candidate Mechanical Engineering and Applied Mechanics University of Pennsylvania dfranci at seas.upenn.edu 215.898.3144 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mwojc at p.lodz.pl Thu Jul 24 10:36:42 2008 From: mwojc at p.lodz.pl (Marek Wojciechowski) Date: Thu, 24 Jul 2008 16:36:42 +0200 Subject: [SciPy-user] Parallel linear solver. References: <200807240759.03710.ajvogel@tuks.co.za> Message-ID: Adolph J. Vogel wrote: > Are there any bindings available for python to a Parallel linear solver > package like ScaLapack? I couldnt find anything conclusive on google. > > Thanx, Adolph There exist python bindings for PETSC: http://code.google.com/p/petsc4py. But i don't know its current state... Greetings -- Marek Wojciechowski From mwojc at p.lodz.pl Thu Jul 24 10:52:27 2008 From: mwojc at p.lodz.pl (Marek Wojciechowski) Date: Thu, 24 Jul 2008 16:52:27 +0200 Subject: [SciPy-user] optimize.fmin_tnc - how to catch its output? References: <9457e7c80807211556x5ae7d4e1m9079534f518e0cb9@mail.gmail.com> <3d375d730807211622p4b73cd4bo2fe69b2ebf4d8a61@mail.gmail.com> Message-ID: Robert Kern wrote: > On Mon, Jul 21, 2008 at 17:56, St?fan van der Walt > wrote: >> Hi Marek >> >> 2008/7/21 Marek Wojciechowski : >>> Hallo! >>> I tried to catch fmin_tnc output to a file in the following way: >>> >>> file = open('messages', 'w') >>> stdout = sys.stdout >>> stderr = sys.stderr >>> sys.stdout = file >>> sys.stderr = file >>> res = optimize.fmin_tnc(..., messages=1) >>> file.close() >>> sys.stdout = stdout >>> sys.stderr = stderr >>> >>> But this does not work. I realized that fmin_tnc calls C routine which >>> probably uses its own stdout, stderr. Is there a way to catch its >>> output? >> >> I'm afraid you are right. The quickest way I can think of is to grab >> it using standard unix pipes: >> >> python optimise_script.py 2>&1 | cat > out.txt > > There is a way to redirect it to a (temporary) file from within > Python. You can then read this file. I have an old and > not-recently-tested implementation over here: > > http://www.enthought.com/~rkern/cgi-bin/hgwebdir.cgi/redir/ > Hi! Thanks for that! It works very nice. Tested with fmin_tnc: import redirfile as redir rout = redir.Redirector(redir.STDOUT) rerr = redir.Redirector(redir.STDERR) rout.start() rerr.start() res = optimize.fmin_tnc(..., messages=1) out = rout.stop() err = rerr.stop() Greetings, -- Marek Wojciechowski From yannick.copin at laposte.net Thu Jul 24 11:09:22 2008 From: yannick.copin at laposte.net (Yannick Copin) Date: Thu, 24 Jul 2008 15:09:22 +0000 (UTC) Subject: [SciPy-user] Equivalent to 'match' function in R? 
References: <6c476c8a0807240600r138c80fdmdd5d87accfdad73e@mail.gmail.com> Message-ID: Wes McKinney gmail.com> writes: > Before I reinvent the wheel, was curious if there was an equivalent function elsewhere already. If not then some Cython/Pyrex is in order.Thanks,Wes For a similar purpose (with the handling of non-common items), I used: def find_indices(big, small): """Find indices in big array of elements of small arrays. Set index=-1 elements of small that cannot be found in big.""" idx = np.array([ np.argmax(big==s) for s in small ]) # Mark unfound value indices (beware: -1 is a valid index!) idx[big[idx]!=small] = -1 return idx In [63]: all_data = np.array([[1,2,3,4,5], [12,19,27,38,51]]).T In [64]: sub_data = np.array([[3,5,1], [nan,nan,nan]]).T In [67]: match_ind = find_indices(all_data[:,0],sub_data[:,0]) In [68]: sub_data[:,1] = all_data[match_ind,1] In [69]: sub_data Out[69]: array([[ 3., 27.], [ 5., 51.], [ 1., 12.]]) Cheers. From wesmckinn at gmail.com Thu Jul 24 11:12:12 2008 From: wesmckinn at gmail.com (Wes McKinney) Date: Thu, 24 Jul 2008 11:12:12 -0400 Subject: [SciPy-user] Equivalent to 'match' function in R? In-Reply-To: <5d3194020807240649m7d4d9c19he343da39f6c09181@mail.gmail.com> References: <6c476c8a0807240600r138c80fdmdd5d87accfdad73e@mail.gmail.com> <5d3194020807240649m7d4d9c19he343da39f6c09181@mail.gmail.com> Message-ID: <6c476c8a0807240812i397dcd09iee7c8d6a9076ec75@mail.gmail.com> I did a 'super naive' version of this: def match(a, b): bmap = dict([(v, i) for i, v in enumerate(b)]) res = empty(len(a)) for i, val in enumerate(a): res[i] = bmap.get(val, NaN) return res Runs pretty slow for a test case, matching arange(20000) with a shuffled version of itself In [28]: timeit match(a, b) 10 loops, best of 3: 49.9 ms per loop Same slightly less naive implementation done all with Cython and working only with ndarrays: In [30]: timeit cmatch(a, b) 100 loops, best of 3: 10.3 ms per loop I don't know how to compare performance of this with R, assume it's pretty comparable. The only thing that is kind of bust is that values not found in the target array get translated to NA in R, but NaN's get translated to 0 as numpy ints, you can't index an array with an array containing NaN's anyhow. Hmm. On Thu, Jul 24, 2008 at 9:49 AM, Arnar Flatberg wrote: > > > On Thu, Jul 24, 2008 at 3:00 PM, Wes McKinney wrote: > >> Hi all, >> >> I've been working with users lately who are transitioning from using R to >> NumPy/Scipy. Some are accustomed to using the 'match' function, for >> example: >> >> > allData <- cbind(c(1,2,3,4,5), c(12, 19, 27, 38, 51)) >> > allData >> [,1] [,2] >> [1,] 1 12 >> [2,] 2 19 >> [3,] 3 27 >> [4,] 4 38 >> [5,] 5 51 >> > subData <- cbind(c(3,5,1), c(NA, NA, NA)) >> > subData >> [,1] [,2] >> [1,] 3 NA >> [2,] 5 NA >> [3,] 1 NA >> > > What about using `intersect` combined with `where` ? > > all_data = np.array([[1,2,3,4,5], [12,19,27,38,51]]).T > sub_data = np.array([[3,5,1], [nan,nan,nan]]).T > match_ind = np.where(np.intersect_1d(sub_data[:,0], all_data[:,0])) > sub_data[:,1] = all_data[match_ind,1] > > It may not be pretty or the best approach for solving the above examples > but it behaves like R's match somewhat. > > Arnar > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nwagner at iam.uni-stuttgart.de Thu Jul 24 11:17:16 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 24 Jul 2008 17:17:16 +0200 Subject: [SciPy-user] Parallel linear solver. In-Reply-To: <9fddf64a0807240727w6d6526dbp1af089257ba348e8@mail.gmail.com> References: <200807240759.03710.ajvogel@tuks.co.za> <9fddf64a0807240727w6d6526dbp1af089257ba348e8@mail.gmail.com> Message-ID: On Thu, 24 Jul 2008 10:27:05 -0400 "Frank Lagor" wrote: > Yes, there are. What you are looking for is petsc4py. > PETSc is a very > good, well-developed package for scalable scientific >computing. It is based > on MPI and is very well-known and well-supported package >(it is really > amazing actually how good their support actually is!) > > Anyways, Lisandro Dalcin, a friend of mine, wrote >petsc4py bindings that > allow you to do anything you want in PETSc in python. > The bindings are very > good, but not well documented. It doesn't really matter >however, because he > is so accessible, and PETSc's documentation is so good >that you can browse > through it and guess what the correcponding syntax is in >petsc4py. > > I highly recommend that you check it out. Let me know >how it goes or if you > have any problems. > > -Frank > > On Thu, Jul 24, 2008 at 1:59 AM, Adolph J. Vogel > wrote: > >> Are there any bindings available for python to a >>Parallel linear solver >> package like ScaLapack? I couldnt find anything >>conclusive on google. >> >> Thanx, Adolph >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> > > > > -- >Frank Lagor > Ph.D. Candidate > Mechanical Engineering and Applied Mechanics > University of Pennsylvania > dfranci at seas.upenn.edu > 215.898.3144 The current version of PETSc is 2.3.3; released May 23, 2007. Are you aware of a forthcoming new release ? Nils From dfranci at seas.upenn.edu Thu Jul 24 11:31:53 2008 From: dfranci at seas.upenn.edu (Frank Lagor) Date: Thu, 24 Jul 2008 11:31:53 -0400 Subject: [SciPy-user] Parallel linear solver. In-Reply-To: References: <200807240759.03710.ajvogel@tuks.co.za> <9fddf64a0807240727w6d6526dbp1af089257ba348e8@mail.gmail.com> Message-ID: <9fddf64a0807240831s5c56d0ccv44c10196b6daab85@mail.gmail.com> Yes, there will be a new release of PETSc soon (I just asked one fo the developers), but they don't know exactly how long it will be. The 2.3.3 release of PETSc is still current. They use patches for a lot of their small changes. For example, just a two months ago or so, I downloaded a version that was 2.3.3-p8. The current patched version is 2.3.3-p13. It is still very active, and in my opinion, major releases are not a good thing, because it may end up affecting your code. Actually, I think this is probably why you asked (so you could wait until a new release so you wouldn't have to worry about code changes for a while). Hope this help, Frank On Thu, Jul 24, 2008 at 11:17 AM, Nils Wagner wrote: > On Thu, 24 Jul 2008 10:27:05 -0400 > "Frank Lagor" wrote: > > Yes, there are. What you are looking for is petsc4py. > > PETSc is a very > > good, well-developed package for scalable scientific > >computing. It is based > > on MPI and is very well-known and well-supported package > >(it is really > > amazing actually how good their support actually is!) > > > > Anyways, Lisandro Dalcin, a friend of mine, wrote > >petsc4py bindings that > > allow you to do anything you want in PETSc in python. 
> > The bindings are very > > good, but not well documented. It doesn't really matter > >however, because he > > is so accessible, and PETSc's documentation is so good > >that you can browse > > through it and guess what the correcponding syntax is in > >petsc4py. > > > > I highly recommend that you check it out. Let me know > >how it goes or if you > > have any problems. > > > > -Frank > > > > On Thu, Jul 24, 2008 at 1:59 AM, Adolph J. Vogel > > wrote: > > > >> Are there any bindings available for python to a > >>Parallel linear solver > >> package like ScaLapack? I couldnt find anything > >>conclusive on google. > >> > >> Thanx, Adolph > >> _______________________________________________ > >> SciPy-user mailing list > >> SciPy-user at scipy.org > >> http://projects.scipy.org/mailman/listinfo/scipy-user > >> > > > > > > > > -- > >Frank Lagor > > Ph.D. Candidate > > Mechanical Engineering and Applied Mechanics > > University of Pennsylvania > > dfranci at seas.upenn.edu > > 215.898.3144 > > The current version of PETSc is 2.3.3; released May 23, > 2007. Are you aware of a forthcoming new release ? > > Nils > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Frank Lagor Ph.D. Candidate Mechanical Engineering and Applied Mechanics University of Pennsylvania -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Thu Jul 24 11:48:24 2008 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 24 Jul 2008 17:48:24 +0200 Subject: [SciPy-user] Parallel linear solver. In-Reply-To: <9fddf64a0807240831s5c56d0ccv44c10196b6daab85@mail.gmail.com> References: <200807240759.03710.ajvogel@tuks.co.za> <9fddf64a0807240727w6d6526dbp1af089257ba348e8@mail.gmail.com> <9fddf64a0807240831s5c56d0ccv44c10196b6daab85@mail.gmail.com> Message-ID: On Thu, 24 Jul 2008 11:31:53 -0400 "Frank Lagor" wrote: > Yes, there will be a new release of PETSc soon (I just >asked one fo the > developers), but they don't know exactly how long it >will be. The 2.3.3 > release of PETSc is still current. They use patches for >a lot of their > small changes. For example, just a two months ago or >so, I downloaded a > version that was 2.3.3-p8. The current patched version >is 2.3.3-p13. It is > still very active, and in my opinion, major releases are >not a good thing, > because it may end up affecting your code. Actually, I >think this is > probably why you asked (so you could wait until a new >release so you > wouldn't have to worry about code changes for a while). > > Hope this help, >Frank > Fank, Thank you very much for your prompt. So far I have used serial programs solely. How do I benefit from parallel code in python ? I mean what are the minimal requirements to run codes in parallel (hardware/software) ? What software packages are needed to configure petsc in that context ? Nils From dfranci at seas.upenn.edu Thu Jul 24 12:06:38 2008 From: dfranci at seas.upenn.edu (Frank Lagor) Date: Thu, 24 Jul 2008 12:06:38 -0400 Subject: [SciPy-user] Parallel linear solver. 
In-Reply-To: References: <200807240759.03710.ajvogel@tuks.co.za> <9fddf64a0807240727w6d6526dbp1af089257ba348e8@mail.gmail.com> <9fddf64a0807240831s5c56d0ccv44c10196b6daab85@mail.gmail.com> Message-ID: <9fddf64a0807240906s70d2efb5yac913625ab1066d2@mail.gmail.com> Hi Nils, Typically users come to find PETSc when they need it-- that is if they are doing a lot of scientific computations and parallel processing is therefore definitely needed. For me, I had access to a small 64 processor cluster and sequential codes that would take days to run, so it was a natural fit. I could definitely see it being used in smaller settings (and I'm sure many people do), like on a desktop machine with a few processors, but that is not what I use it for. I'm not sure about your needs or your access to a cluster, so you'll can probably be the best judge of if it is for you. For software requirements-- PETSc uses BLAS, LAPACK, and an MPI distribution as a mimimum. It can also interface with countless other packages (e.g. SCALAPACK, ATLAS, SPRNG, etc.), but I don't bother with all this. For me, there was an OpenMPI implementation of MPI already installed on my cluster (as should be the case for most clusters), so I just linked to it. And the BLAS and LAPACK on the cluster were not working currently, so I told PETSc to download and install BLAS and LAPACK automatically. It did and it works fine. Anyways, that's my story-- I hope it helps. -Frank On Thu, Jul 24, 2008 at 11:48 AM, Nils Wagner wrote: > On Thu, 24 Jul 2008 11:31:53 -0400 > "Frank Lagor" wrote: > > Yes, there will be a new release of PETSc soon (I just > >asked one fo the > > developers), but they don't know exactly how long it > >will be. The 2.3.3 > > release of PETSc is still current. They use patches for > >a lot of their > > small changes. For example, just a two months ago or > >so, I downloaded a > > version that was 2.3.3-p8. The current patched version > >is 2.3.3-p13. It is > > still very active, and in my opinion, major releases are > >not a good thing, > > because it may end up affecting your code. Actually, I > >think this is > > probably why you asked (so you could wait until a new > >release so you > > wouldn't have to worry about code changes for a while). > > > > Hope this help, > >Frank > > > Fank, > > Thank you very much for your prompt. So far I have used > serial programs solely. > How do I benefit from parallel code in python ? > I mean what are the minimal requirements to run codes in > parallel (hardware/software) ? > What software packages are needed to configure petsc in > that context ? > > Nils > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Frank Lagor Ph.D. Candidate Mechanical Engineering and Applied Mechanics University of Pennsylvania -------------- next part -------------- An HTML attachment was scrubbed... URL: From dfranci at seas.upenn.edu Thu Jul 24 12:15:27 2008 From: dfranci at seas.upenn.edu (Frank Lagor) Date: Thu, 24 Jul 2008 12:15:27 -0400 Subject: [SciPy-user] Parallel linear solver. 
In-Reply-To: <9fddf64a0807240906s70d2efb5yac913625ab1066d2@mail.gmail.com> References: <200807240759.03710.ajvogel@tuks.co.za> <9fddf64a0807240727w6d6526dbp1af089257ba348e8@mail.gmail.com> <9fddf64a0807240831s5c56d0ccv44c10196b6daab85@mail.gmail.com> <9fddf64a0807240906s70d2efb5yac913625ab1066d2@mail.gmail.com> Message-ID: <9fddf64a0807240915i492e8dddqccec6c9cd9dbd44d@mail.gmail.com> One last thing-- If you just want parallel interaction between your python codes, and you think that petsc4py is overkill with all of it's solvers and such, then just use mpi4py to exchange messages, perform reductions, etc. in parallel. It is written by the same author for petsc4py (but it is not a requirement for petsc4py) and you may find it a bit easier to use if you want to run basically sequential codes in parallel and exchange results. -Frank On Thu, Jul 24, 2008 at 12:06 PM, Frank Lagor wrote: > Hi Nils, > > Typically users come to find PETSc when they need it-- that is if they are > doing a lot of scientific computations and parallel processing is therefore > definitely needed. For me, I had access to a small 64 processor cluster and > sequential codes that would take days to run, so it was a natural fit. I > could definitely see it being used in smaller settings (and I'm sure many > people do), like on a desktop machine with a few processors, but that is not > what I use it for. I'm not sure about your needs or your access to a > cluster, so you'll can probably be the best judge of if it is for you. > > For software requirements-- PETSc uses BLAS, LAPACK, and an MPI > distribution as a mimimum. > It can also interface with countless other packages (e.g. SCALAPACK, > ATLAS, SPRNG, etc.), but I don't bother with all this. For me, there was an > OpenMPI implementation of MPI already installed on my cluster (as should be > the case for most clusters), so I just linked to it. And the BLAS and > LAPACK on the cluster were not working currently, so I told PETSc to > download and install BLAS and LAPACK automatically. It did and it works > fine. Anyways, that's my story-- I hope it helps. > > -Frank > > > > > On Thu, Jul 24, 2008 at 11:48 AM, Nils Wagner < > nwagner at iam.uni-stuttgart.de> wrote: > >> On Thu, 24 Jul 2008 11:31:53 -0400 >> "Frank Lagor" wrote: >> > Yes, there will be a new release of PETSc soon (I just >> >asked one fo the >> > developers), but they don't know exactly how long it >> >will be. The 2.3.3 >> > release of PETSc is still current. They use patches for >> >a lot of their >> > small changes. For example, just a two months ago or >> >so, I downloaded a >> > version that was 2.3.3-p8. The current patched version >> >is 2.3.3-p13. It is >> > still very active, and in my opinion, major releases are >> >not a good thing, >> > because it may end up affecting your code. Actually, I >> >think this is >> > probably why you asked (so you could wait until a new >> >release so you >> > wouldn't have to worry about code changes for a while). >> > >> > Hope this help, >> >Frank >> > >> Fank, >> >> Thank you very much for your prompt. So far I have used >> serial programs solely. >> How do I benefit from parallel code in python ? >> I mean what are the minimal requirements to run codes in >> parallel (hardware/software) ? >> What software packages are needed to configure petsc in >> that context ? 
>> >> Nils >> >> >> _______________________________________________ >> SciPy-user mailing list >> SciPy-user at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-user >> > > > > -- > Frank Lagor > Ph.D. Candidate > Mechanical Engineering and Applied Mechanics > University of Pennsylvania > -- Frank Lagor Ph.D. Candidate Mechanical Engineering and Applied Mechanics University of Pennsylvania -------------- next part -------------- An HTML attachment was scrubbed... URL: From grs2103 at columbia.edu Thu Jul 24 16:05:28 2008 From: grs2103 at columbia.edu (Gideon Simpson) Date: Thu, 24 Jul 2008 16:05:28 -0400 Subject: [SciPy-user] quad question Message-ID: So I'm looking at the quad routine which seems to be built on quadpack. Assuming I give it a finite interval with no weights, will it use QAGS? My integrand has integrable singularities at one endpoint, which I don't really feel like sorting out the correct weighting for the QAW routine. -gideon From hep.sebastien.binet at gmail.com Fri Jul 25 01:21:44 2008 From: hep.sebastien.binet at gmail.com (Sebastien Binet) Date: Thu, 24 Jul 2008 22:21:44 -0700 Subject: [SciPy-user] mpi4py - MPI_Comm_join example ? Message-ID: <200807242221.50661.binet@cern.ch> hi there, I am trying to parallelize an application with mpi4py. For various reasons[*], I need to dynamically create new processes and get inter-communicators via MPI_Comm_join. Documentation about that function is pretty scarce and I am still a MPI rookie. I came up with that kind of code: ## import sys,os,socket from mpi4py import MPI def msg (fmt, *args): map (sys.stdout.write, ('[%i] :: '%os.getpid(), fmt%args, '\n')) sys.stdout.flush() def main(ntasks=5): msg ("creating a socket pair...") fd_parent, fd_child = socket.socketpair (socket.AF_UNIX) msg ("forking...") pid = os.fork() if pid == 0: fd_child.close() # not needed anymore msg ("child process...") main_child (fd_parent) else: fd_parent.close() # idem msg ("parent process...") main_parent (fd_child) return def main_child (fd): comm = MPI.COMM_WORLD msg ("joining parent communicator...") icomm = MPI.Comm.Join (fd.fileno()) fd.close() # not needed anymore msg ("icomm is [%s]", "OK" if icomm != MPI.COMM_NULL else "ERR") if icomm == MPI.COMM_NULL: return ### do something with icomm... return def main_parent (fd): comm = MPI.COMM_WORLD msg ("mpi: rank=%r nprocs=%r", comm.rank, comm.size) msg ("joining child communicator...") icomm = MPI.Comm.Join (fd.fileno()) fd.close() # not needed anymore msg ("icomm is [%s]", "OK" if icomm != MPI.COMM_NULL else "ERR") if icomm == MPI.COMM_NULL: return ### do something with icomm return ## I'd like to fork as many as `ntasks` subprocesses and farm out some work... But I am a bit stuck as to how to proceed... Any help would be much appreciated. Cheers, Sebastien. [*] main reason being that the application takes ~1-2 minutes to completely initialize and eats up to 1-2 Gb of memory. Hence, relying on os.fork to efficiently share memory between processes after the initialization completed seems to be a good idea. I don't think I can use MPI_Win_xyz as the pieces of data I want to share/reuse among processes are C++ containers, so the usual problem of STL allocators and shm-segments kicks in... Spawning new processes via MPI_Comm_spawn is a no go either b/c of the above main reason... -- ################################### # Dr. Sebastien Binet # # Lawrence Berkeley National Lab. 
From grs2103 at columbia.edu  Thu Jul 24 16:05:28 2008
From: grs2103 at columbia.edu (Gideon Simpson)
Date: Thu, 24 Jul 2008 16:05:28 -0400
Subject: [SciPy-user] quad question
Message-ID:

So I'm looking at the quad routine, which seems to be built on QUADPACK.
Assuming I give it a finite interval with no weights, will it use QAGS? My
integrand has integrable singularities at one endpoint, and I don't really
feel like sorting out the correct weighting for the QAW routines.

-gideon

From hep.sebastien.binet at gmail.com  Fri Jul 25 01:21:44 2008
From: hep.sebastien.binet at gmail.com (Sebastien Binet)
Date: Thu, 24 Jul 2008 22:21:44 -0700
Subject: [SciPy-user] mpi4py - MPI_Comm_join example ?
Message-ID: <200807242221.50661.binet@cern.ch>

hi there,

I am trying to parallelize an application with mpi4py. For various
reasons[*], I need to dynamically create new processes and get
inter-communicators via MPI_Comm_join. Documentation about that function
is pretty scarce and I am still an MPI rookie. I came up with this kind of
code:

##
import sys,os,socket
from mpi4py import MPI

def msg (fmt, *args):
    map (sys.stdout.write, ('[%i] :: '%os.getpid(), fmt%args, '\n'))
    sys.stdout.flush()

def main(ntasks=5):
    msg ("creating a socket pair...")
    fd_parent, fd_child = socket.socketpair (socket.AF_UNIX)
    msg ("forking...")
    pid = os.fork()
    if pid == 0:
        fd_child.close() # not needed anymore
        msg ("child process...")
        main_child (fd_parent)
    else:
        fd_parent.close() # idem
        msg ("parent process...")
        main_parent (fd_child)
    return

def main_child (fd):
    comm = MPI.COMM_WORLD
    msg ("joining parent communicator...")
    icomm = MPI.Comm.Join (fd.fileno())
    fd.close() # not needed anymore
    msg ("icomm is [%s]", "OK" if icomm != MPI.COMM_NULL else "ERR")
    if icomm == MPI.COMM_NULL:
        return
    ### do something with icomm...
    return

def main_parent (fd):
    comm = MPI.COMM_WORLD
    msg ("mpi: rank=%r nprocs=%r", comm.rank, comm.size)
    msg ("joining child communicator...")
    icomm = MPI.Comm.Join (fd.fileno())
    fd.close() # not needed anymore
    msg ("icomm is [%s]", "OK" if icomm != MPI.COMM_NULL else "ERR")
    if icomm == MPI.COMM_NULL:
        return
    ### do something with icomm
    return
##

I'd like to fork as many as `ntasks` subprocesses and farm out some
work... but I am a bit stuck as to how to proceed. Any help would be much
appreciated.

Cheers,
Sebastien.

[*] the main reason being that the application takes ~1-2 minutes to
completely initialize and eats up 1-2 GB of memory. Hence, relying on
os.fork to efficiently share memory between processes after the
initialization has completed seems to be a good idea. I don't think I can
use MPI_Win_xyz, as the pieces of data I want to share/reuse among
processes are C++ containers, so the usual problem of STL allocators and
shm-segments kicks in... Spawning new processes via MPI_Comm_spawn is a no
go either, because of the above main reason.

--
###################################
# Dr. Sebastien Binet             #
# Lawrence Berkeley National Lab. #
# 1 Cyclotron Road                #
# Berkeley, CA 94720              #
###################################

From ajvogel at tuks.co.za  Fri Jul 25 01:58:28 2008
From: ajvogel at tuks.co.za (Adolph J. Vogel)
Date: Fri, 25 Jul 2008 07:58:28 +0200
Subject: [SciPy-user] Parallel linear solver.
In-Reply-To: <9fddf64a0807240727w6d6526dbp1af089257ba348e8@mail.gmail.com>
References: <200807240759.03710.ajvogel@tuks.co.za> <9fddf64a0807240727w6d6526dbp1af089257ba348e8@mail.gmail.com>
Message-ID: <200807250758.28653.ajvogel@tuks.co.za>

Thank you for your reply. I will check it out today and see if I can get
it working.

Adolph

ps. Frank, I see we are semi-colleagues. :)

On Thursday 24 July 2008 16:27:05 you wrote:
> Yes, there are. What you are looking for is petsc4py. [...]
> I highly recommend that you check it out. Let me know how it goes or if
> you have any problems.

--
_/_/_/_/_/_/_/_/_/_/_/_/_/
Adolph J. Vogel BEng(Mechanical)
University of Pretoria
Office: 012-420-2189
Email: ajvogel at tuks.co.za
_/_/_/_/_/_/_/_/_/_/_/_/_/

From massimo.sandal at unibo.it  Fri Jul 25 05:45:36 2008
From: massimo.sandal at unibo.it (massimo sandal)
Date: Fri, 25 Jul 2008 11:45:36 +0200
Subject: [SciPy-user] scipy.stats.gaussian_kde for 2d kernel density estimation
In-Reply-To:
References: <48870EB8.8010208@unibo.it>
Message-ID: <4889A0C0.40707@unibo.it>

Hi Dave,

> Hopefully the example below will help...

Thanks a lot -- your example put me on the right way. I am a complete
newbie with 2D graphs, and meshgrid had just escaped my attention.

m.
--
Massimo Sandal, Ph.D.
University of Bologna
Department of Biochemistry "G.Moruzzi"

snail mail: Via Irnerio 48, 40126 Bologna, Italy
email: massimo.sandal at unibo.it
web: http://www.biocfarm.unibo.it/samori/people/sandal.html
tel: +39-051-2094388
fax: +39-051-2094387
From ajvogel at tuks.co.za  Fri Jul 25 05:48:33 2008
From: ajvogel at tuks.co.za (Adolph J. Vogel)
Date: Fri, 25 Jul 2008 11:48:33 +0200
Subject: [SciPy-user] Parallel linear solver.
In-Reply-To: <9fddf64a0807240727w6d6526dbp1af089257ba348e8@mail.gmail.com>
References: <200807240759.03710.ajvogel@tuks.co.za> <9fddf64a0807240727w6d6526dbp1af089257ba348e8@mail.gmail.com>
Message-ID: <200807251148.33395.ajvogel@tuks.co.za>

Hi Frank,

I'm having trouble using petsc4py. When trying to run the helloworld.py
example, I get the following:

adolph at adolph-laptop:~/lib/petsc4py-0.7.5/tests/sandbox$ mpirun -np 2 python helloworld.py
[adolph-laptop:07785] mca: base: component_find: unable to open osc pt2pt: file not found (ignored)
libibverbs: Fatal: couldn't read uverbs ABI version.
--------------------------------------------------------------------------
[0,0,0]: OpenIB on host adolph-laptop was unable to find any HCAs.
Another transport will be used instead, although this may result in
lower performance.
--------------------------------------------------------------------------
Hello, World!! I am process 0 of 1
[adolph-laptop:07786] mca: base: component_find: unable to open osc pt2pt: file not found (ignored)
libibverbs: Fatal: couldn't read uverbs ABI version.
[... same OpenIB warning for the second process ...]
-----------------------------------------------------------------------------
It seems that [at least] one of the processes that was started with
mpirun did not invoke MPI_INIT before quitting (it is possible that more
than one process did not invoke MPI_INIT -- mpirun was only notified of
the first one, which was on node n0).

mpirun can *only* be used with MPI programs (i.e., programs that invoke
MPI_INIT and MPI_FINALIZE). You can use the "lamexec" program to run
non-MPI programs over the lambooted nodes.
-----------------------------------------------------------------------------
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Caught signal number 15 Terminate: Some process (or the batch system) has told this process to end
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal
[0]PETSC ERROR: or try http://valgrind.org on linux or man libgmalloc on Apple to find memory corruption errors
[0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
[0]PETSC ERROR: to get more information on the crash.
[adolph-laptop:07786] MPI_ABORT invoked on rank 0 in communicator MPI_COMM_WORLD with errorcode 59

The invocation I used works with my mpi4py programs; however, nothing from
petsc4py seems to work. Any help would be greatly appreciated.

Adolph

On Thursday 24 July 2008 16:27:05 you wrote:
> Yes, there are. What you are looking for is petsc4py. [...]
> I highly recommend that you check it out. Let me know how it goes or if
> you have any problems.

--
_/_/_/_/_/_/_/_/_/_/_/_/_/
Adolph J. Vogel BEng(Mechanical)
University of Pretoria
Office: 012-420-2189
Email: ajvogel at tuks.co.za
Mobile: 072 592 5836
_/_/_/_/_/_/_/_/_/_/_/_/_/
It doesn't really matter, however, > because he is so accessible, and PETSc's documentation is so good that you > can browse through it and guess what the corresponding syntax is in > petsc4py. > > I highly recommend that you check it out. Let me know how it goes or if > you have any problems. > > -Frank > > On Thu, Jul 24, 2008 at 1:59 AM, Adolph J. Vogel wrote: > > Are there any bindings available for python to a parallel linear solver > > package like ScaLapack? I couldn't find anything conclusive on google. > > > > Thanx, Adolph > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user -- _/_/_/_/_/_/_/_/_/_/_/_/_/ Adolph J. Vogel BEng(Mechanical) University of Pretoria Office: 012-420-2189 Email: ajvogel at tuks.co.za Mobile: 072 592 5836 _/_/_/_/_/_/_/_/_/_/_/_/_/ From mjakubik at ta3.sk Fri Jul 25 05:51:29 2008 From: mjakubik at ta3.sk (Marian Jakubik) Date: Fri, 25 Jul 2008 11:51:29 +0200 Subject: [SciPy-user] Parallel linear solver. In-Reply-To: <9fddf64a0807240915i492e8dddqccec6c9cd9dbd44d@mail.gmail.com> References: <200807240759.03710.ajvogel@tuks.co.za> <9fddf64a0807240727w6d6526dbp1af089257ba348e8@mail.gmail.com> <9fddf64a0807240831s5c56d0ccv44c10196b6daab85@mail.gmail.com> <9fddf64a0807240906s70d2efb5yac913625ab1066d2@mail.gmail.com> <9fddf64a0807240915i492e8dddqccec6c9cd9dbd44d@mail.gmail.com> Message-ID: <20080725115129.37b80066@jakubik.ta3.sk> Hi Frank, is it possible to compare the speed of parallel codes written in C and in Python? I am preparing a parallel code, and until now I had decided to use C. I mean, the difference in speed between compiled (C) and interpreted (Python) code is important, isn't it? Thanks in advance for your response... Best, Marian On Thu, 24 Jul 2008 12:15:27 -0400, "Frank Lagor" wrote: > One last thing-- > > If you just want parallel interaction between your python codes, and you > think that petsc4py is overkill with all of its solvers and such, then just > use mpi4py to exchange messages, perform reductions, etc. in parallel. It > is written by the same author as petsc4py (but it is not a requirement for > petsc4py) and you may find it a bit easier to use if you want to run > basically sequential codes in parallel and exchange results. > > -Frank > > On Thu, Jul 24, 2008 at 12:06 PM, Frank Lagor > wrote: > > > Hi Nils, > > > > Typically users come to find PETSc when they need it -- that is, if they are > > doing a lot of scientific computations and parallel processing is therefore > > definitely needed. For me, I had access to a small 64 processor cluster and > > sequential codes that would take days to run, so it was a natural fit. I > > could definitely see it being used in smaller settings (and I'm sure many > > people do), like on a desktop machine with a few processors, but that is not > > what I use it for. I'm not sure about your needs or your access to a > > cluster, so you can probably be the best judge of whether it is for you. > > > > For software requirements -- PETSc uses BLAS, LAPACK, and an MPI > > distribution as a minimum. > > It can also interface with countless other packages (e.g. SCALAPACK, > > ATLAS, SPRNG, etc.), but I don't bother with all this. For me, there was an > > OpenMPI implementation of MPI already installed on my cluster (as should be > > the case for most clusters), so I just linked to it.
And the BLAS and > > LAPACK on the cluster were not currently working, so I told PETSc to > > download and install BLAS and LAPACK automatically. It did, and it works > > fine. Anyways, that's my story -- I hope it helps. > > > > -Frank > > > > On Thu, Jul 24, 2008 at 11:48 AM, Nils Wagner < > > nwagner at iam.uni-stuttgart.de> wrote: > > > >> On Thu, 24 Jul 2008 11:31:53 -0400 > >> "Frank Lagor" wrote: > >> > Yes, there will be a new release of PETSc soon (I just > >> > asked one of the developers), but they don't know exactly how long it > >> > will be. The 2.3.3 release of PETSc is still current. They use patches for > >> > a lot of their small changes. For example, just two months ago or > >> > so, I downloaded a version that was 2.3.3-p8. The current patched version > >> > is 2.3.3-p13. It is still very active, and in my opinion, major releases are > >> > not a good thing, because they may end up affecting your code. Actually, I > >> > think this is probably why you asked (so you could wait until a new > >> > release and wouldn't have to worry about code changes for a while). > >> > > >> > Hope this helps, > >> > Frank > >> > > >> Frank, > >> > >> Thank you very much for your prompt reply. So far I have used > >> serial programs solely. > >> How do I benefit from parallel code in python? > >> I mean, what are the minimal requirements to run codes in > >> parallel (hardware/software)? > >> What software packages are needed to configure petsc in > >> that context? > >> > >> Nils > >> > >> > >> _______________________________________________ > >> SciPy-user mailing list > >> SciPy-user at scipy.org > >> http://projects.scipy.org/mailman/listinfo/scipy-user > >> > > > > -- > > Frank Lagor > > Ph.D. Candidate > > Mechanical Engineering and Applied Mechanics > > University of Pennsylvania > > > From dineshbvadhia at hotmail.com Sat Jul 26 07:08:06 2008 From: dineshbvadhia at hotmail.com (Dinesh B Vadhia) Date: Sat, 26 Jul 2008 04:08:06 -0700 Subject: [SciPy-user] sparse matrix: getrow() Message-ID: Has the sparse.csr_matrix.getrow() function been fixed in the latest svn? Thanks! Dinesh -------------- next part -------------- An HTML attachment was scrubbed... URL: From wnbell at gmail.com Sat Jul 26 07:14:30 2008 From: wnbell at gmail.com (Nathan Bell) Date: Sat, 26 Jul 2008 06:14:30 -0500 Subject: [SciPy-user] sparse matrix: getrow() In-Reply-To: References: Message-ID: On Sat, Jul 26, 2008 at 6:08 AM, Dinesh B Vadhia wrote: > Has the sparse.csr_matrix.getrow() function been fixed in the latest svn? > It appears to work. You should use A[i,:] instead, since getrow() will eventually be deprecated. CSR and CSC matrices support most forms of slicing and indexing now. -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From lists at vrbka.net Sat Jul 26 10:43:13 2008 From: lists at vrbka.net (Lubos Vrbka) Date: Sat, 26 Jul 2008 16:43:13 +0200 Subject: [SciPy-user] sine transform prefactor Message-ID: <488B3801.6010602@vrbka.net> hi guys, since i am not a mathematician, i'd like to ask one question related to the discrete sine transform and its implementation in scipy (in the sandbox at the moment). by definition, the continuous sine transform comes with a constant prefactor of sqrt(2/pi), i.e.,

FST(f(r)) = f(k) = sqrt(2/pi) int_0^{\infty} f(r) sin(kr) dr
iFST(f(k)) = f(r) = sqrt(2/pi) int_0^{\infty} f(k) sin(kr) dk

how is it in the case of a discrete transform?
does this sqrt(2/pi) factor come into play also for discrete transforms, or is it applicable only to continuous transforms? if it is applicable, is it already included inside the routines? how is it then distributed between the forward and inverse transforms? i think the situation here might be complicated by the fact that the code in the sandbox uses the fast fourier transform for the evaluation of the sine transform - and this will probably change the constants involved as well (since the fourier transform should be 'normalized' with a factor of 1/2pi). i would be very grateful for any help on this issue. my program seems to run, but it is dependent on the choice of the constant factors involved in the transforms. so far i've found the correct combination (it seems so), but i'd like to make this more rigorous and to really understand _why_. thanks a lot in advance. best, -- Lubos _ at _" http://www.lubos.vrbka.net From dfranci at seas.upenn.edu Sat Jul 26 11:46:32 2008 From: dfranci at seas.upenn.edu (Frank Lagor) Date: Sat, 26 Jul 2008 11:46:32 -0400 Subject: [SciPy-user] prefactor response Message-ID: <9fddf64a0807260846r68c24f97k97fb964069bacfa4@mail.gmail.com> Hi Lubos, I don't know a whole lot about transforms; however, I do remember one bit of information that may be useful to you: the prefactors in the sine transform (which is of course related to the fourier transform) are actually matters of convention. The sqrt(2/pi) is not always the prefactor! The prefactor is usually chosen to be sqrt(2/pi) so that the inverse transform will also have a prefactor of sqrt(2/pi) and there will be some symmetry in the two formulas, so that they look alike and are easier to remember. However, SOMETIMES people choose the transform or inverse transform (I can't recall which) to have a 2/pi prefactor, and the other to have no prefactor. It comes down to a matter of convention and it doesn't matter which one you choose, as long as you are consistent. Please be careful, make sure you are certain of the convention chosen and are consistent in what you code, and you will be fine. Hope this helps, Frank PS> Dear Everyone, I am trying to get my mailing list style fixed so that I don't break mailing list etiquette. Please email me personally to tell me if this email sucked. Sorry for my previous mistakes! -- Frank Lagor Ph.D. Candidate Mechanical Engineering and Applied Mechanics University of Pennsylvania -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.varoquaux at normalesup.org Sat Jul 26 11:49:16 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sat, 26 Jul 2008 17:49:16 +0200 Subject: [SciPy-user] error ellipse In-Reply-To: <9fddf64a0807232002g31ba7a80nc4a26291c8590caf@mail.gmail.com> References: <9fddf64a0807232002g31ba7a80nc4a26291c8590caf@mail.gmail.com> Message-ID: <20080726154916.GB3248@phare.normalesup.org> On Wed, Jul 23, 2008 at 11:02:05PM -0400, Frank Lagor wrote: > Does anyone know if there is a function in scipy.stats or somewhere else > to plot a 2D error ellipse on a scatter plot of some data, given the > covariance matrix? I have not been able to find one. Thanks in advance > for your help, For plotting-related questions, you can try your luck on the matplotlib mailing list: https://lists.sourceforge.net/lists/listinfo/matplotlib-users You might get more answers.
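That said, here is a rough sketch of one way to draw such an ellipse directly with matplotlib (only an illustration: the helper name is made up, and the 2-sigma radius and toy data are arbitrary):

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse

def error_ellipse(cov, mean, nstd=2):
    # principal axes of the ellipse come from the eigendecomposition
    # of the (symmetric) 2x2 covariance matrix
    vals, vecs = np.linalg.eigh(cov)
    # orientation of the first principal axis, in degrees
    angle = np.degrees(np.arctan2(vecs[1, 0], vecs[0, 0]))
    width, height = 2 * nstd * np.sqrt(vals)
    return Ellipse(xy=mean, width=width, height=height, angle=angle, fill=False)

pts = np.random.multivariate_normal([0, 0], [[3, 1], [1, 2]], 500)
ax = plt.subplot(111)
ax.scatter(pts[:, 0], pts[:, 1])
ax.add_patch(error_ellipse(np.cov(pts.T), pts.mean(axis=0)))
plt.show()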
Cheers, Gaël From gael.varoquaux at normalesup.org Sat Jul 26 11:55:19 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sat, 26 Jul 2008 17:55:19 +0200 Subject: [SciPy-user] scipy.stats.gaussian_kde for 2d kernel density estimation In-Reply-To: <4889A0C0.40707@unibo.it> References: <48870EB8.8010208@unibo.it> <4889A0C0.40707@unibo.it> Message-ID: <20080726155519.GC3248@phare.normalesup.org> On Fri, Jul 25, 2008 at 11:45:36AM +0200, massimo sandal wrote: > Thanks a lot - your example put me on the right track. I am a complete newbie > with 2d graphs, and meshgrid just escaped my attention. Prefer mgrid (http://www.scipy.org/Numpy_Example_List_With_Doc#head-c2338cdfd293a3c60c3eb65d6df9a2d8e1803fc5 ) or ogrid (http://www.scipy.org/Numpy_Example_List_With_Doc#ogrid ) to meshgrid. Meshgrid is for matlab compatibility, and is less powerful. Gaël From lists at vrbka.net Sat Jul 26 12:24:48 2008 From: lists at vrbka.net (Lubos Vrbka) Date: Sat, 26 Jul 2008 18:24:48 +0200 Subject: [SciPy-user] sine transform prefactor In-Reply-To: <9fddf64a0807260846r68c24f97k97fb964069bacfa4@mail.gmail.com> References: <9fddf64a0807260846r68c24f97k97fb964069bacfa4@mail.gmail.com> Message-ID: <488B4FD0.3070804@vrbka.net> hi, > I don't know a whole lot about transforms; however, I do remember one bit of > information that may be useful to you: the prefactors in the sine transform > (which is of course related to the fourier transform) are actually matters > of convention. The sqrt(2/pi) is not always the prefactor! The prefactor > is usually chosen to be sqrt(2/pi) so that the inverse transform will also > have a prefactor of sqrt(2/pi) and there will be some symmetry in the two > formulas, so that they look alike and are easier to remember. However, > SOMETIMES people choose the transform or inverse transform (I can't recall > which) to have a 2/pi prefactor, and the other to have no prefactor. It > comes down to a matter of convention and it doesn't matter which one you > choose, as long as you are consistent. well, all this is clear to me. if i do

iDST(DST(f(r)))

on discrete data, i get the original data back (expected result). this would indicate that
1) either the sqrt(2/pi) norm shouldn't be used for discrete data (why?)
2) the sqrt(2/pi) factor is taken care of somewhere inside the DST function (how?)

this is the thing i'd really love to know - is answer 1 or answer 2 correct? in my opinion, it shouldn't matter what combination of norms i use, as long as they are consistent - but at the moment, i have the following problem. i need to iteratively solve

H(r) = C(r) + C(r)*H(r)

where the star denotes convolution and H and C are the functions of interest. the easiest way to solve this should be to fourier transform it, solving

H(k) = C(k) + C(k)H(k) by H(k) = C(k)/(1-C(k))

and then transforming H(k) back for further processing. in practice, i calculate C, transform it into fourier space, solve for H and transform it back, where it serves as a 'parameter' for the new C. due to the nature of my functions, the 3D-FT can be replaced by a 1D fourier-bessel transformation AND some constants.
depending on how one distributes the normalization of the FT, one arrives at, e.g.,

FB(f(r)) = f(k) = 4pi/k int_0^infty f(r) r sin(kr) dr
iFB(f(k)) = f(r) = 1/(2 pi^2 r) int_0^infty f(k) k sin(kr) dk

this can be calculated using the fourier-sine transform of new functions F(r)=f(r)*r and F(k)=f(k)*k, respectively:

FB(f(r)) = 4pi/k sqrt(pi/2) int_0^infty F(r) sin(kr) dr
iFB(f(k)) = 1/(2 pi^2 r) sqrt(pi/2) int_0^infty F(k) sin(kr) dk

so one would expect that the story is clear - do the fourier-sine transform, multiply by the respective normalization constant... but the problem is that this doesn't work. at this point i asked the question - *should* i actually use these factors (coming from continuous transforms) also in the case of discrete transforms, which are self-normalized to the number of discretization points? > PS> Dear Everyone, I am trying to get my mailing list style fixed so > that I don't break mailing list etiquette. Please email me personally to > tell me if this email sucked. Sorry for my previous mistakes! just a comment - don't change the subject line, since it breaks the thread. best, -- Lubos _ at _" http://www.lubos.vrbka.net From David.Kaplan at ird.fr Sat Jul 26 14:06:51 2008 From: David.Kaplan at ird.fr (David M. Kaplan) Date: Sat, 26 Jul 2008 20:06:51 +0200 Subject: [SciPy-user] unique, sort, sortrows Message-ID: <1217095611.7172.42.camel@localhost> Hi, I am fairly new to numpy/scipy (switching from matlab). I have noticed a few things that left me confused. I am wondering if I could get some input from others. 1) On the webpage http://www.scipy.org/NumPy_for_Matlab_Users , where it says matlab sortrows(a,i) is equivalent to a[argsort(a[:,0],i)], I believe this should be a[argsort(a[:,i]),:]. Or better yet:

matlab: [b,I] = sortrows(a,i)
numpy:  I = argsort(a[:,i]); b = a[I,:]

I have already changed this on the webpage, but I want to make sure I am not missing something. 2) Is there a simple equivalent of sortrows(a) (i.e., sorting by entire rows)? Similarly, is there a simple equivalent of the matlab Y = unique(X,'rows')? I looked online and there appears to have been previous discussion of these, but nothing simple and general seemed to come out. 3) Is there an equivalent of [Y,I,J] = unique(X)? In this case, I is the indices of the unique elements and J is an index from Y to X (i.e., where the unique elements appear in X). I can get I with:

I,Y = unique1d( X, return_index=True )

but J, which is very useful, is not available. I suppose I could do:

J = zeros( len(X), dtype=int )
for i,y in enumerate(Y):
    J[ nonzero( y == X ) ] = i

But it seems like J is useful enough that there should be an easier way (or perhaps it should be integrated into unique1d, at the risk of adding more keyword arguments). 4) What is the best equivalent of meshgrid(x,y,z,...) or ndgrid(x,y,z,...)? I defined my own function, but is there a built-in way:

def ndgrid( *args ):
    s = []
    for a in args:
        s.append(len(a))
    I = indices(s)
    res = []
    for a,i in zip(args,I):
        res.append(a[i])
    return res

This method will also be painful for large matrices, as one has to create twice as much data (the index matrix and the resulting matrices). 5) I was trying to do matrix multiplication of matrices with more than 2 axes. Normally, matrix multiplication of an [M,N,P] matrix with a [P,Q,R] matrix should produce something with dimensions [M,N,Q,R], but this produces an error. What is the appropriate idiom for this? Thanks for the help. Cheers, David -- ********************************** David M.
Kaplan Charge de Recherche 1 Institut de Recherche pour le Developpement Centre de Recherche Halieutique Mediterraneenne et Tropicale av. Jean Monnet B.P. 171 34203 Sete cedex France Phone: +33 (0)4 99 57 32 27 Fax: +33 (0)4 99 57 32 95 http://www.ur097.ird.fr/team/dkaplan/index.html ********************************** From dfranci at seas.upenn.edu Sat Jul 26 14:15:11 2008 From: dfranci at seas.upenn.edu (Frank Lagor) Date: Sat, 26 Jul 2008 14:15:11 -0400 Subject: [SciPy-user] sine transform prefactor In-Reply-To: <488B4FD0.3070804@vrbka.net> References: <9fddf64a0807260846r68c24f97k97fb964069bacfa4@mail.gmail.com> <488B4FD0.3070804@vrbka.net> Message-ID: <9fddf64a0807261115x426853e6wf888f7801472b539@mail.gmail.com> Lubos, First let me clarify: the factor in front of the continuous forms is 1/sqrt(2pi), not sqrt(2/pi) like I said previously (I was confused). > if i do > iDST(DST(f(r))) > on discrete data, i get the original data back (expected result). this > would indicate that > 1) either the sqrt(2/pi) norm shouldn't be used for discrete data (why?) I don't think the factors should be added on at all. The transform and inverse-transform functions themselves should handle all factors. This is related to what I want to type later on. > 2) the sqrt(2/pi) factor is taken care of somewhere inside the DST function > (how?) > Yes, it should be. Like I said before, the convention chosen doesn't really matter as long as you do the FT and then the IFT, because the factors from the two operations must multiply together to give 1/(2pi) in the end (this is built into the answer). The problem with conventions only comes when you want to compare numbers of transformed data (still in fourier space) with someone else. Then you need to know if your conventions are the same. > this is the thing i'd really love to know - is answer 1 or answer 2 > correct? > I think the actual answer lies in the fact that the fourier transform is actually derived from the fourier series. The fourier series is a sum of some coefficients times an exponential. There is an equation for calculating these coefficients and it is something like 1/(2pi) times an integral, and that's where the 1/(2pi) comes in (actually it comes in when the formula for the coefficients is derived). See the book Functions of a Complex Variable by Carrier, Krook, and Pearson for the details of this derivation. I think that if you see this, all your concerns about the factors will be taken care of. -Frank -- Frank Lagor Ph.D. Candidate Mechanical Engineering and Applied Mechanics University of Pennsylvania -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.varoquaux at normalesup.org Sat Jul 26 16:24:00 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sat, 26 Jul 2008 22:24:00 +0200 Subject: [SciPy-user] unique, sort, sortrows In-Reply-To: <1217095611.7172.42.camel@localhost> References: <1217095611.7172.42.camel@localhost> Message-ID: <20080726202400.GA683@phare.normalesup.org> Hi David, It's been a long time since I used Matlab, so I can't really answer all your questions; however, I think I can help you with one: On Sat, Jul 26, 2008 at 08:06:51PM +0200, David M. Kaplan wrote: > 4) What is the best equivalent of meshgrid(x,y,z,...) or > ndgrid(x,y,z,...)? I defined my own function, but is there a built-in > way: I think you should look at the mgrid and ogrid functions. (See http://www.scipy.org/Numpy_Example_List ).
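For instance, a minimal sketch of the difference between them (toy ranges, nothing specific to your use case):

>>> import numpy as np
>>> X, Y = np.mgrid[0:3, 0:2]
>>> X                            # fully meshed, like meshgrid/ndgrid
array([[0, 0],
       [1, 1],
       [2, 2]])
>>> x, y = np.ogrid[0:3, 0:2]    # 'open' grids that broadcast together
>>> x.shape, y.shape
((3, 1), (1, 2))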
For many operations, prefer ogrid; it saves memory and tends to work as easily as mgrid thanks to broadcasting. HTH, Gaël From cimrman3 at ntc.zcu.cz Sat Jul 26 17:35:24 2008 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Sat, 26 Jul 2008 23:35:24 +0200 Subject: [SciPy-user] unique, sort, sortrows In-Reply-To: <1217095611.7172.42.camel@localhost> References: <1217095611.7172.42.camel@localhost> Message-ID: <20080726233524.vcdq55buowgwgcso@webmail.zcu.cz> Hi David, I can comment on unique1d, as I am the culprit. I am cc'ing numpy-discussion, as this is a numpy function. Quoting "David M. Kaplan" : > 2) Is there a simple equivalent of sortrows(a) (i.e., sorting by entire > rows)? Similarly, is there a simple equivalent of the matlab Y = Have you looked at lexsort? > 3) Is there an equivalent of [Y,I,J] = unique(X)? In this case, I is > the indices of the unique elements and J is an index from Y to X (i.e., > where the unique elements appear in X). I can get I with: > > I,Y = unique1d( X, return_index=True ) > > but J, which is very useful, is not available. I suppose I could do: > > J = zeros( len(X), dtype=int ) > for i,y in enumerate(Y): > J[ nonzero( y == X ) ] = i > > But it seems like J is useful enough that there should be an easier way > (or perhaps it should be integrated into unique1d, at the risk of adding > more keyword arguments). So basically Y = X[I] and X = Y[J], right? I do not recall matlab well enough to know for sure. It certainly could be done; I could look at it after I return from EuroSciPy (i.e. after Monday). I would replace the return_index argument by two arguments: return_direct (->I) and return_inverse (->J), ok? Can anyone propose better names? Actually, most of the methods in arraysetops could optionally return some index arrays. Would anyone use it? (I do not need it personally :) cheers, r. From David.Kaplan at ird.fr Sun Jul 27 08:43:09 2008 From: David.Kaplan at ird.fr (David M. Kaplan) Date: Sun, 27 Jul 2008 14:43:09 +0200 Subject: [SciPy-user] unique, sort, sortrows In-Reply-To: 1217095611.7172.42.camel@localhost Message-ID: <1217162589.7128.36.camel@localhost> Hi, Thanks for the very helpful comments. Regarding Gael's comment, the problem with mgrid and ogrid (at least in the version of numpy I use: 1.1.0-3) is that they currently only accept standard indexing, so you can't use them with non-uniform values. I use this a lot when I have a model I want to run with a series of parameter values, not all of which are uniformly spaced. For example, the following fails:

mgrid[:5,[1,7,8]]

I would like this to give something equivalent to meshgrid(arange(5),[1,7,8]) (up to a transpose operation). Similarly for more input arguments. I haven't looked at the numpy source code, but it would seem that it shouldn't be too hard to add this functionality, as it already exists for r_ and c_:

r_[:5,[1,7,8]]  # no problem here

In response to Robert's comments, I looked a bit at lexsort and didn't immediately see how it could fix my problem of sorting the rows of a matrix, because I didn't really understand it. Finally I figured out that the following appears to do the trick:

I = lexsort(a[:,-1::-1].T)
b = a[I,:]

As this is a bit tricky to figure out, perhaps a helper function called sortrows would be useful in numpy? Also, an equivalent of unique(a,'rows') would still be very useful. As for [Y,I,J] = unique(X), yes Y=X[I] and X=Y[J].
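To make that identity concrete, here is a minimal sketch (it uses unique1d as it currently behaves, with the index first, and recovers J with searchsorted, which works because Y comes back sorted -- there is no existing single call for this):

import numpy as np

X = np.array([3, 1, 2, 1, 3, 3])
I, Y = np.unique1d(X, return_index=True)
J = np.searchsorted(Y, X)   # index from Y back to X
assert (Y == X[I]).all()
assert (X == Y[J]).all()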
Your fix would help me out, though it would be nice to specify the call signature in the help of the new version of unique1d (it took me a while to figure out that it was I,Y and not Y,I). I think it would also be useful to propagate these changes to the other arraysetops functions. In particular, I often use the indexes returned by matlab's intersect command:

[C,IA,IB] = intersect(A,B)

A use case for this: suppose you have a sparse dataset at x,y points that is "on a grid", but lacking some of the points (e.g., the instrument only returns points that had valid data). A simple way to solve this would be (using a few suggested changes to numpy/scipy):

[X,Y] = mgrid[ unique(pts[:,0]), unique(pts[:,1]) ]
s = X.shape
newData = tile( NaN, s ).flatten()
p,IA,IB = intersect( c_[X.flatten(),Y.flatten()], pts, rows=True )
newData[IA] = Data[IB]
newData = newData.reshape(s)

There may be other concise ways to solve this, but this one seems fairly efficient. Thanks again. Cheers, David -- ********************************** David M. Kaplan Charge de Recherche 1 Institut de Recherche pour le Developpement Centre de Recherche Halieutique Mediterraneenne et Tropicale av. Jean Monnet B.P. 171 34203 Sete cedex France Phone: +33 (0)4 99 57 32 27 Fax: +33 (0)4 99 57 32 95 http://www.ur097.ird.fr/team/dkaplan/index.html ********************************** From gael.varoquaux at normalesup.org Sun Jul 27 11:55:10 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Sun, 27 Jul 2008 17:55:10 +0200 Subject: [SciPy-user] unique, sort, sortrows In-Reply-To: <1217162589.7128.36.camel@localhost> References: <1217162589.7128.36.camel@localhost> Message-ID: <20080727155510.GA12519@phare.normalesup.org> On Sun, Jul 27, 2008 at 02:43:09PM +0200, David M. Kaplan wrote: > I haven't looked at the numpy source code, > but it would seem that it shouldn't be too hard to add this > functionality, as it already exists for r_ and c_: > r_[:5,[1,7,8]] # no problem here Well, if you have time to have a look at that, there certainly is value in adding this functionality. Cheers, Gaël From lorenzo.isella at gmail.com Sun Jul 27 12:17:47 2008 From: lorenzo.isella at gmail.com (Lorenzo Isella) Date: Sun, 27 Jul 2008 18:17:47 +0200 Subject: [SciPy-user] Ordering and Counting the Repetitions of the Rows of a Matrix Message-ID: Dear All, Consider an Nx2 matrix of the kind:

A = 1  2
    3 13
    1  2
    6  8
    3 13
    2  9
    1  1

The first entry in each row is always smaller than or equal to the second entry in the same row. Now there are two things I would like to do with this A matrix: (1) With something like n.unique1d (but I have not been very successful yet), let each row of A appear only once (i.e. get rid of the repetitions). Therefore one should obtain the matrix:

B = 1  2
    3 13
    6  8
    2  9
    1  1

(2) At the same time, efficiently count how many times each row of B appeared in A. I would like to get a C vector counting them as:

C = 2
    2
    1
    1
    1

Any suggestions about an efficient way of achieving this? Many thanks Lorenzo From David.Kaplan at ird.fr Sun Jul 27 13:31:01 2008 From: David.Kaplan at ird.fr (David M. Kaplan) Date: Sun, 27 Jul 2008 19:31:01 +0200 Subject: [SciPy-user] Ordering and Counting the Repetitions of the Rows of a Matrix Message-ID: <1217179861.7128.51.camel@localhost> Hi, This is related to the discussion of "unique, sort, sortrows" that is just before this email. Unfortunately, unique1d doesn't have a rows option (yet).
But I saw an email post that had a good suggestion that may work for you: hash your array so that you can get the almost certainly unique rows (an essentially infinitesimal chance of getting a false match):

B = rand(A.shape[1],1)
C = dot(A,B)
I,C2 = unique1d(C,return_index=True)
Aunique = A[I,:]
Anum = []
for a in Aunique:
    Anum.append( sum(all(A==a,axis=1)) )

This will do basically what you want. Aunique won't be sorted correctly, but you can use lexsort to fix that (see my recent post). Cheers, David -- ********************************** David M. Kaplan Charge de Recherche 1 Institut de Recherche pour le Developpement Centre de Recherche Halieutique Mediterraneenne et Tropicale av. Jean Monnet B.P. 171 34203 Sete cedex France Phone: +33 (0)4 99 57 32 27 Fax: +33 (0)4 99 57 32 95 http://www.ur097.ird.fr/team/dkaplan/index.html ********************************** From warren.weckesser at gmail.com Sun Jul 27 15:46:29 2008 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Sun, 27 Jul 2008 15:46:29 -0400 Subject: [SciPy-user] Ordering and Counting the Repetitions of the Rows of a Matrix In-Reply-To: References: Message-ID: <114880320807271246x1c922e7cg9539684fbad7bed9@mail.gmail.com> Lorenzo, Given a matrix A like you showed, here is one way to find (and count) the unique rows:

----------
d = {}
for r in A:
    t = tuple(r)
    d[t] = d.get(t,0) + 1

# The dict d now has the counts of the unique rows of A.

B = numpy.array(d.keys())    # The unique rows of A
C = numpy.array(d.values())  # The counts of the unique rows
----------

For a large number of rows (e.g. 10000), this appears to be significantly faster than the code that David Kaplan suggested in his email earlier today. Regards, Warren On Sun, Jul 27, 2008 at 12:17 PM, Lorenzo Isella wrote: > Dear All, > Consider an Nx2 matrix of the kind: > > A = 1  2 >     3 13 >     1  2 >     6  8 >     3 13 >     2  9 >     1  1 > > The first entry in each row is always smaller than or equal to the second > entry in the same row. > Now there are two things I would like to do with this A matrix: > (1) With something like n.unique1d (but I have not been very successful yet), > let each row of A appear only once (i.e. get rid of the repetitions). > Therefore one should obtain the matrix: > B = 1  2 >     3 13 >     6  8 >     2  9 >     1  1 > > (2) At the same time, efficiently count how many times each row of B > appeared in A. I would like to get a C vector counting them as: > > C = 2 >     2 >     1 >     1 >     1 > > Any suggestions about an efficient way of achieving this? > Many thanks > > Lorenzo > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at sun.ac.za Sun Jul 27 17:37:24 2008 From: stefan at sun.ac.za (Stéfan van der Walt) Date: Sun, 27 Jul 2008 23:37:24 +0200 Subject: [SciPy-user] Ordering and Counting the Repetitions of the Rows of a Matrix In-Reply-To: <114880320807271246x1c922e7cg9539684fbad7bed9@mail.gmail.com> References: <114880320807271246x1c922e7cg9539684fbad7bed9@mail.gmail.com> Message-ID: <9457e7c80807271437t664acc9fu706c7a029e57cc57@mail.gmail.com> 2008/7/27 Warren Weckesser : > Lorenzo, > > Given a matrix A like you showed, here is one way to find (and count) the > unique rows: > > ----------
> d = {}
> for r in A:
>     t = tuple(r)
>     d[t] = d.get(t,0) + 1
>
> # The dict d now has the counts of the unique rows of A.
> B = numpy.array(d.keys())    # The unique rows of A
> C = numpy.array(d.values())  # The counts of the unique rows
> ----------

And here's an evil one-liner:

x[np.sum(np.triu(np.all(x == x[:,None], axis=2)), axis=1) == 1]

that requires a lot of memory (given M rows of N columns, a temporary array of MxMxN is used). It is blazingly fast, though :) Cheers Stéfan From schut at sarvision.nl Mon Jul 28 07:09:57 2008 From: schut at sarvision.nl (Vincent Schut) Date: Mon, 28 Jul 2008 13:09:57 +0200 Subject: [SciPy-user] [python(x,y)] New Release: 2.0.0 In-Reply-To: <629b08a40807131019o36a7c35cw19ee640bbbcc173@mail.gmail.com> References: <629b08a40807131019o36a7c35cw19ee640bbbcc173@mail.gmail.com> Message-ID: Pierre Raybaut wrote: > Hi all, > > Python(x,y) 2.0.0 is now available on http://www.pythonxy.com. > > Changes history 07-12-2008 - Version 2.0.0:
> * Added:
>   o ITK 3.6 (WrapITK) - Open-source software system for image processing (leading-edge segmentation and registration algorithms)
>   o Windows installer: new component manager system (updating is easier and more flexible, install and uninstall processes are much cleaner, ...)
>   o Windows installer: Python(x,y) may now be installed silently (/S option), and the non-silent installation now allows the user to keep working on the machine during the process
> * Updated:
>   o Matplotlib 0.98.1 (see release notes)
>   o SymPy 0.6.0
>   o GDAL 1.5.2
>   o PySerial 2.4
> > Regards, > Pierre Raybaut Hi Pierre, as you include gdal and ITK in python(x,y), you might also be interested in including OTB (Orfeo ToolBox) in the future. OTB is a toolbox built on ITK which enhances/extends ITK with geospatial-focussed processing nodes and some extra glue (e.g. to read and propagate geospatial info during processing, etc). For more info, see http://otb.cnes.fr/ Cheers, Vincent. From elcorto at gmx.net Mon Jul 28 07:41:59 2008 From: elcorto at gmx.net (Steve Schmerler) Date: Mon, 28 Jul 2008 13:41:59 +0200 Subject: [SciPy-user] temporary copies for in-place array modification? Message-ID: <20080728114159.GA5776@ramrod.de> Hi all Say I do this:

>>> a
array([1, 2, 3, 4, 5, 6])
>>> a[1:] = a[1:] - a[:-1]
>>> a
array([1, 1, 1, 1, 1, 1])
>>> b = a[1:]
>>> ...

I.e. I want to compute the difference of a's elements and store them in place in b = a[1:] to work with that (b is only a view, so that's ok, no copy). Are there any temp copies of `a` involved? I ask b/c `a` will be large. Thanks! steve From David.Kaplan at ird.fr Mon Jul 28 08:55:22 2008 From: David.Kaplan at ird.fr (David M. Kaplan) Date: Mon, 28 Jul 2008 14:55:22 +0200 Subject: [SciPy-user] unique, sort, sortrows In-Reply-To: 1217162589.7128.36.camel@localhost Message-ID: <1217249722.7230.36.camel@localhost> Hi, Well, as usual there are compromises in everything, and the mgrid/ogrid functionality is the way it currently is for some good reasons. The first reason is that python appears to be fairly sloppy about how it passes indexing arguments to the __getitem__ method. It passes a tuple containing the arguments in all cases except when it has one argument, in which case it just passes that argument. This means that it is hard to tell a tuple argument from several non-tuple arguments. For example, the following two produce exactly the same call to __getitem__:

mgrid[1,2,3]
mgrid[(1,2,3)]

(__getitem__ receives a single tuple (1,2,3)), but different from:

mgrid[[1,2,3]]

(__getitem__ receives a single list = [1,2,3]). This seems like a bug to me, but is probably considered a feature by somebody.
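You can see exactly what python hands to __getitem__ with a tiny dummy class (a sketch, nothing numpy-specific):

class ShowKey(object):
    def __getitem__(self, key):
        # print the type and value of whatever arrives as the key
        print type(key), key

s = ShowKey()
s[1, 2, 3]    # prints: <type 'tuple'> (1, 2, 3)
s[(1, 2, 3)]  # prints: <type 'tuple'> (1, 2, 3) -- indistinguishable
s[[1, 2, 3]]  # prints: <type 'list'> [1, 2, 3]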
In any case, this is workable, but a bit annoying in that tuple arguments just aren't going to work well. The second problem is that the current implementation is fairly efficient because it forces all arguments to the same type so as to avoid some unnecessary copies (I think). Once you allow non-slice arguments, this is hard to maintain. That being said, attached is a replacement for index_tricks.py that should implement a reasonable set of features, while only very slightly altering performance. I have only touched nd_grid. I haven't fixed the documentation string yet, nor have I added tests to test_index_tricks.py, but I will do that if the changes are accepted into numpy. With the new version, old stuff should work as usual, except that mgrid now returns a list of arrays instead of an array of arrays (note that this change will cause the current test_index_tricks.py to fail). With the new changes, you can now do:

mgrid[-2:5:10j,[4.5,6,7.1],12,['abc','def']]

The following will work as expected:

mgrid[:5,(1,2,3)]

But this will not:

mgrid[(1,2,3)]  # same as mgrid[1,2,3], but different from mgrid[[1,2,3]]

Given these limitations, this seems like a fairly useful addition. If this looks usable, I will clean up and add tests if desired. If not, I recommend adding an ndgrid function to numpy that does the equivalent of matlab [X,Y,Z,...] = ndgrid(x,y,z,...) and then making the current meshgrid just call that, changing the order of the first two arguments. Cheers, David -- ********************************** David M. Kaplan Charge de Recherche 1 Institut de Recherche pour le Developpement Centre de Recherche Halieutique Mediterraneenne et Tropicale av. Jean Monnet B.P. 171 34203 Sete cedex France Phone: +33 (0)4 99 57 32 27 Fax: +33 (0)4 99 57 32 95 http://www.ur097.ird.fr/team/dkaplan/index.html ********************************** -------------- next part -------------- A non-text attachment was scrubbed... Name: index_tricks.py Type: text/x-python Size: 15089 bytes Desc: not available URL: From aisaac at american.edu Mon Jul 28 09:51:47 2008 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 28 Jul 2008 09:51:47 -0400 Subject: [SciPy-user] unique, sort, sortrows In-Reply-To: <1217249722.7230.36.camel@localhost> References: <1217249722.7230.36.camel@localhost> Message-ID: <488DCEF3.3010702@american.edu> David M. Kaplan wrote: > python appears to be fairly sloppy about how it passes > indexing arguments to the __getitem__ method. I do not generally find the word 'sloppy' to be descriptive of Python. > It passes a tuple containing the arguments in all cases > except when it has one argument, in which case it just > passes that argument. Well, not quite. The bracket syntax is for passing a key (a single object) to __getitem__. > For example, the following two produce exactly the same > call to __getitem__: > mgrid[1,2,3] > mgrid[(1,2,3)] Well, yes. Note::

>>> x = 1,2,3
>>> type(x)
<type 'tuple'>

In Python it is the commas, not the parentheses, that determine the tuple type. So perhaps the question you raise could be rephrased as "why does an ndarray (not Python) treat a list 'index' differently than a tuple 'index'?" I do not know the history of that decision, but it has been used to provide some additional functionality.
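For example (a small sketch of the difference on a 2d array):

>>> import numpy as np
>>> a = np.arange(12).reshape(3, 4)
>>> a[(1, 2)]     # a tuple key is multidimensional indexing, same as a[1, 2]
6
>>> a[[1, 2]]     # a list key is fancy indexing: rows 1 and 2
array([[ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])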
Cheers, Alan Isaac From ivo.maljevic at gmail.com Mon Jul 28 10:07:33 2008 From: ivo.maljevic at gmail.com (Ivo Maljevic) Date: Mon, 28 Jul 2008 10:07:33 -0400 Subject: [SciPy-user] sine transform prefactor In-Reply-To: <9fddf64a0807261115x426853e6wf888f7801472b539@mail.gmail.com> References: <9fddf64a0807260846r68c24f97k97fb964069bacfa4@mail.gmail.com> <488B4FD0.3070804@vrbka.net> <9fddf64a0807261115x426853e6wf888f7801472b539@mail.gmail.com> Message-ID: <826c64da0807280707m3c340acv1b1d3413b340e0d2@mail.gmail.com> I do not know much about the cosine transform, as I do not use it, but the coefficient 1/(2pi) in the continuous-time Fourier transform is not something that is selected at random (even though, I admit, in calculating the discrete FT you have more freedom in choosing the scaling factor, and sometimes for reasons of symmetry there is a 1/sqrt(N) scaling factor in both directions). Using latex, you either have completely symmetric expressions if one uses the 'f' frequency:

X(f) = \int_{-\infty}^{\infty} x(t)\ e^{- i 2\pi f t}\,dt,
x(t) = \int_{-\infty}^{\infty} X(f)\ e^{i 2 \pi f t}\,df,

or, if you use the angular frequency \omega = 2\pi f, then you have:

X(\omega) = \int_{-\infty}^{\infty} x(t)\ e^{- i \omega t}\,dt,
x(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} X(\omega)\ e^{i \omega t}\,d\omega,

Not that this will be of much help to Lubos. Lubos, you may want to look at the following Wiki page, or if you are still not happy, I will dig up my literature and find out more about the DCT. You are probably interested in the type-II DCT from this article: http://en.wikipedia.org/wiki/Discrete_cosine_transform and you might also want to look at: http://en.wikipedia.org/wiki/Discrete_Fourier_transform Ivo 2008/7/26 Frank Lagor : > Lubos, > > First let me clarify: the factor in front of the continuous forms is > 1/sqrt(2pi), not sqrt(2/pi) like I said previously (I was confused). > >> >> if i do >> iDST(DST(f(r))) >> on discrete data, i get the original data back (expected result). this >> would indicate that >> 1) either the sqrt(2/pi) norm shouldn't be used for discrete data (why?) > > > I don't think the factors should be added on at all. The transform and > inverse-transform functions themselves should handle all factors. This is related to what I > want to type later on. > >> >> 2) the sqrt(2/pi) factor is taken care of somewhere inside the DST >> function (how?) > > Yes, it should be. Like I said before, the convention chosen doesn't really > matter as long as you do the FT and then the IFT, because the factors > from the two operations must multiply together to give 1/(2pi) in the end > (this is built into the answer). The problem with conventions only comes when > you want to compare numbers of transformed data (still in fourier space) > with someone else. Then you need to know if your conventions are the same. > >> >> this is the thing i'd really love to know - is answer 1 or answer 2 >> correct? > > I think the actual answer lies in the fact that the fourier transform is > actually derived from the fourier series. The fourier series is a sum of > some coefficients times an exponential. There is an equation for calculating > these coefficients and it is something like 1/(2pi) times an integral, and > that's where the 1/(2pi) comes in (actually it comes in when the formula for > the coefficients is derived). See the book Functions of a Complex Variable > by Carrier, Krook, and Pearson for the details of this derivation. I think > that if you see this, all your concerns about the factors will be taken care
> > -Frank > > > -- > Frank Lagor > Ph.D. Candidate > Mechanical Engineering and Applied Mechanics > University of Pennsylvania > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > From zachary.pincus at yale.edu Mon Jul 28 10:25:56 2008 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Mon, 28 Jul 2008 10:25:56 -0400 Subject: [SciPy-user] Best way to save ND-arrays for Matlab? Message-ID: <790A19AE-EF29-491D-BA9E-EBBCA4398A51@yale.edu> Hi all, I'm trying to use the scipy.io.mio tools to write out some arrays for matlab users. This works quite well, with the exception of some 3D arrays that I have, which (as per the documentation) get flattened to 2D during the process. (I'm saving to version 4 .mat files, which I need for compatibility with some folk's versions of Matlab.) Is there any good way to save this data in a less-awkward format for the eventual users? Or is this a limitation of the format? Thanks, Zachary Pincus Postdoctoral Fellow, lab of Frank Slack Molecular, Cellular and Developmental Biology Yale University From David.Kaplan at ird.fr Mon Jul 28 10:56:00 2008 From: David.Kaplan at ird.fr (David M. Kaplan) Date: Mon, 28 Jul 2008 16:56:00 +0200 Subject: [SciPy-user] unique, sort, sortrows In-Reply-To: 1217249722.7230.36.camel@localhost Message-ID: <1217256960.7230.47.camel@localhost> Hi, >From Alan Isaac: Well, yes. Note:: >>> x = 1,2,3 >>> type(x) In Python it is the commas, not the paretheses, that is determining the tuple type. ---- Good point. In any case, this is a workable problem. Adding a comma after the last argument to mgrid[] assures that it behaves "as expected" (e.g. mgrid[(1,2),]). Attached is a newer much cleaner version of my replacement for index_tricks.py. I did some optimisation and it looks like this new version beats the old on similar operations for large meshed matrices. For single return arrays or small matrices it looses to the old version, but only slightly. As large matrices are probably the bottleneck, this seems like a reasonable tradeoff. Cheers, David -- ********************************** David M. Kaplan Charge de Recherche 1 Institut de Recherche pour le Developpement Centre de Recherche Halieutique Mediterraneenne et Tropicale av. Jean Monnet B.P. 171 34203 Sete cedex France Phone: +33 (0)4 99 57 32 27 Fax: +33 (0)4 99 57 32 95 http://www.ur097.ird.fr/team/dkaplan/index.html ********************************** -------------- next part -------------- A non-text attachment was scrubbed... Name: index_tricks.py Type: text/x-python Size: 14286 bytes Desc: not available URL: From zachary.pincus at yale.edu Mon Jul 28 12:00:17 2008 From: zachary.pincus at yale.edu (Zachary Pincus) Date: Mon, 28 Jul 2008 12:00:17 -0400 Subject: [SciPy-user] Matlab IO problem? (was: Best way to save ND-arrays for Matlab?) In-Reply-To: <790A19AE-EF29-491D-BA9E-EBBCA4398A51@yale.edu> References: <790A19AE-EF29-491D-BA9E-EBBCA4398A51@yale.edu> Message-ID: Hello all, I have investigated my issue with matlab IO a little more. Clearly, I need to be using matlab version-5 .mat files, which support multidimensional arrays. I had thought I needed to use version-4 files, because a user said that they couldn't read the version-5 ones. However, on closer investigation, they are using Version 6.5.0.180913a Release 13 of Matlab -- which should be able to read version-5 .mat files fine. 
It appears that scipy.io.matlab saves version-5 .mat files which cannot always be opened by this version of matlab. I can't quite figure out when things work.

scipy.io.matlab.mio.savemat('test', {'five':5}, format='5')

or

scipy.io.matlab.mio.savemat('test', {'five':5*numpy.ones((5,5))}, format='5')

both fail as below:

>> load test
??? Error using ==> load
Can't read file.

However, some more complex .mat files, with multiple variables, seem to work. Other more complex ones fail. Anyone have any thoughts? Zach On Jul 28, 2008, at 10:25 AM, Zachary Pincus wrote: > Hi all, > > I'm trying to use the scipy.io.mio tools to write out some arrays for > matlab users. This works quite well, with the exception of some 3D > arrays that I have, which (as per the documentation) get flattened to > 2D during the process. (I'm saving to version 4 .mat files, which I > need for compatibility with some folks' versions of Matlab.) > > Is there any good way to save this data in a less awkward format for > the eventual users? Or is this a limitation of the format? > > Thanks, > > Zachary Pincus > Postdoctoral Fellow, lab of Frank Slack > Molecular, Cellular and Developmental Biology > Yale University > > > > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From robert.kern at gmail.com Mon Jul 28 12:13:24 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 28 Jul 2008 11:13:24 -0500 Subject: [SciPy-user] temporary copies for in-place array modification? In-Reply-To: <20080728114159.GA5776@ramrod.de> References: <20080728114159.GA5776@ramrod.de> Message-ID: <3d375d730807280913y49f580b7yc051384781933e0e@mail.gmail.com> On Mon, Jul 28, 2008 at 06:41, Steve Schmerler wrote: > Hi all > > Say I do this: >
> >>> a
> array([1, 2, 3, 4, 5, 6])
>
> >>> a[1:] = a[1:] - a[:-1]

The right-hand side of this will be a temporary array.

> >>> a
> array([1, 1, 1, 1, 1, 1])
>
> >>> b = a[1:]
> >>> ...
>
> I.e. I want to compute the difference of a's elements and store them in place > in b = a[1:] to work with that (b is only a view, so that's ok, no copy). > > Are there any temp copies of `a` involved? I ask b/c `a` will be large. > Thanks! Unfortunately, a temporary is unavoidable. If you modify `a` in-place, you will mess up the computation.
For example, we could try using the third argument to the subtract() ufunc to place the results back into a[1:]:

In [1]: import numpy
In [2]: a = numpy.arange(1,7)
In [3]: numpy.subtract(a[1:], a[:-1], a[1:])
Out[3]: array([1, 2, 2, 3, 3])

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From aisaac at american.edu Mon Jul 28 12:48:39 2008 From: aisaac at american.edu (Alan G Isaac) Date: Mon, 28 Jul 2008 12:48:39 -0400 Subject: [SciPy-user] temporary copies for in-place array modification? In-Reply-To: <3d375d730807280913y49f580b7yc051384781933e0e@mail.gmail.com> References: <20080728114159.GA5776@ramrod.de> <3d375d730807280913y49f580b7yc051384781933e0e@mail.gmail.com> Message-ID: On Mon, 28 Jul 2008, Robert Kern apparently wrote: > Unfortunately, a temporary is unavoidable. If you modify `a` in-place, > you will mess up the computation. For example, we could try using the > third argument to the subtract() ufunc to place the results back into > a[1:]:
> In [1]: import numpy
> In [2]: a = numpy.arange(1,7)
> In [3]: numpy.subtract(a[1:], a[:-1], a[1:])
> Out[3]: array([1, 2, 2, 3, 3])

But he could work from the other end::

>>> np.subtract(a[1:],a[:-1],a[:-1])
array([1, 1, 1, 1, 1])

As long as he can then use a[:-1] instead of a[1:]. Right? Alan From robert.kern at gmail.com Mon Jul 28 12:47:58 2008 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 28 Jul 2008 11:47:58 -0500 Subject: [SciPy-user] temporary copies for in-place array modification? In-Reply-To: References: <20080728114159.GA5776@ramrod.de> <3d375d730807280913y49f580b7yc051384781933e0e@mail.gmail.com> Message-ID: <3d375d730807280947l2096f199p2f90d9d8cea57015@mail.gmail.com> On Mon, Jul 28, 2008 at 11:48, Alan G Isaac wrote: > On Mon, 28 Jul 2008, Robert Kern apparently wrote: >> Unfortunately, a temporary is unavoidable. If you modify `a` in-place, >> you will mess up the computation. For example, we could try using the >> third argument to the subtract() ufunc to place the results back into >> a[1:]:
>> In [1]: import numpy
>> In [2]: a = numpy.arange(1,7)
>> In [3]: numpy.subtract(a[1:], a[:-1], a[1:])
>> Out[3]: array([1, 2, 2, 3, 3])
> > But he could work from the other end::
> >>> np.subtract(a[1:],a[:-1],a[:-1])
> array([1, 1, 1, 1, 1])
> > As long as he can then use a[:-1] instead of a[1:]. > Right? Yup. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From peridot.faceted at gmail.com Mon Jul 28 13:41:37 2008 From: peridot.faceted at gmail.com (Anne Archibald) Date: Mon, 28 Jul 2008 13:41:37 -0400 Subject: [SciPy-user] temporary copies for in-place array modification? In-Reply-To: References: <20080728114159.GA5776@ramrod.de> <3d375d730807280913y49f580b7yc051384781933e0e@mail.gmail.com> Message-ID: 2008/7/28 Alan G Isaac : > On Mon, 28 Jul 2008, Robert Kern apparently wrote: >> Unfortunately, a temporary is unavoidable. If you modify `a` in-place, >> you will mess up the computation. For example, we could try using the >> third argument to the subtract() ufunc to place the results back into >> a[1:]:
>> In [1]: import numpy
>> In [2]: a = numpy.arange(1,7)
>> In [3]: numpy.subtract(a[1:], a[:-1], a[1:])
>> Out[3]: array([1, 2, 2, 3, 3])
> > But he could work from the other end::
> >>> np.subtract(a[1:],a[:-1],a[:-1])
> array([1, 1, 1, 1, 1])
> > As long as he can then use a[:-1] instead of a[1:]. > Right? Maybe. The result is dependent on the memory layout of a. Anne From jj20047 at gmail.com Mon Jul 28 13:47:31 2008 From: jj20047 at gmail.com (jb) Date: Mon, 28 Jul 2008 10:47:31 -0700 Subject: [SciPy-user] CPU you selected does not support x86-64 instruction set Message-ID: Hello: I just installed a new version of numpy from svn on my 64-bit, Fedora 8 machine, but cannot get scipy to install. I get the error: "CPU you selected does not support x86-64 instruction set", which others have had on this list. Suggestions have been to install a new version of numpy (which I did) and to use a gcc version > 3.4 (which I have). So I am stuck. Any suggestions?
Here is a bit of information on my machine:

[root at quad1 scipy]# g77 -v
Reading specs from /usr/lib/gcc/x86_64-redhat-linux/3.4.6/specs
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --enable-shared --enable-threads=posix --disable-checking --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-languages=c,c++,f77 --disable-libgcj --host=x86_64-redhat-linux
Thread model: posix
gcc version 3.4.6 20060404 (Red Hat 3.4.6-8)

[root at quad1 scipy]# uname -a
Linux quad1 2.6.25.10-47.fc8 #1 SMP Mon Jul 7 18:31:41 EDT 2008 x86_64 x86_64 x86_64 GNU/Linux

[john at quad1 site-packages]$ python numpy/distutils/cpuinfo.py
CPU information: CPUInfoBase__get_nbits=64 getNCPUs=4 has_3dnow has_3dnowext has_mmx has_sse has_sse2 has_sse3 is_64bit is_AMD is_AthlonK6_2

Here are the error messages during compile:

creating build/temp.linux-x86_64-2.5/scipy/fftpack/dfftpack
compile options: '-c'
g77:f77: scipy/fftpack/dfftpack/dsinti.f
scipy/fftpack/dfftpack/dsinti.f:0: error: CPU you selected does not support x86-64 instruction set
scipy/fftpack/dfftpack/dsinti.f:0: error: CPU you selected does not support x86-64 instruction set
scipy/fftpack/dfftpack/dsinti.f:0: error: CPU you selected does not support x86-64 instruction set
scipy/fftpack/dfftpack/dsinti.f:0: error: CPU you selected does not support x86-64 instruction set
error: Command "/usr/bin/g77 -g -Wall -fno-second-underscore -fPIC -O3 -funroll-loops -march=k6-2 -mmmx -m3dnow -msse2 -msse -msse3 -c -c scipy/fftpack/dfftpack/dsinti.f -o build/temp.linux-x86_64-2.5/scipy/fftpack/dfftpack/dsinti.o" failed with exit status 1

From wnbell at gmail.com Mon Jul 28 14:08:57 2008 From: wnbell at gmail.com (Nathan Bell) Date: Mon, 28 Jul 2008 13:08:57 -0500 Subject: [SciPy-user] CPU you selected does not support x86-64 instruction set In-Reply-To: References: Message-ID: On Mon, Jul 28, 2008 at 12:47 PM, jb wrote: > Hello: > I just installed a new version of numpy from svn on my 64-bit, Fedora > 8 machine, but cannot get scipy to install. I get the error: "CPU you > selected does not support x86-64 instruction set", which others have > had on this list. Suggestions have been to install a new version of > numpy (which I did) and to use a gcc version > 3.4 (which I have). So > I am stuck. Any suggestions? > This ticket may be related: http://projects.scipy.org/scipy/scipy/ticket/607 -- Nathan Bell wnbell at gmail.com http://graphics.cs.uiuc.edu/~wnbell/ From robfelty at gmail.com Mon Jul 28 14:27:20 2008 From: robfelty at gmail.com (Robert Felty) Date: Mon, 28 Jul 2008 14:27:20 -0400 Subject: [SciPy-user] trouble installing audiolab Message-ID: <200807281427.20767.robfelty@gmail.com> I am having trouble installing the audiolab scipy toolkit.
I have scipy installed just fine, but I am getting a library-not-found error when I try to install audiolab. This is the error I get:

sudo python setup.py install
sndfile_info:
  libraries libsndfile.so.1.0.17 not found in /usr/lib64
  libraries libsndfile.so.1.0.17 not found in /usr/local/lib
  libraries libsndfile.so.1.0.17 not found in /usr/lib
Traceback (most recent call last):
  File "setup.py", line 221, in <module>
    'Topic :: Scientific/Engineering']
  File "/usr/lib64/python2.5/site-packages/numpy/distutils/core.py", line 142, in setup
    config = configuration()
  File "setup.py", line 144, in configuration
    sf_config = sf_info.get_info(2)
  File "/usr/lib64/python2.5/site-packages/numpy/distutils/system_info.py", line 399, in get_info
    self.calc_info()
  File "setup.py", line 96, in calc_info
    raise SndfileNotFoundError("sndfile library not found")
__main__.SndfileNotFoundError: sndfile library not found

I am running on linux - Fedora 7 x86_64, with Python 2.5. I know that I have libsndfile installed: the 64bit version is here:

ls -l /usr/lib64/libsndfile*
-rw-r--r-- 1 root root 610500 2007-09-20 07:33 /usr/lib64/libsndfile.a
lrwxrwxrwx 1 root root     20 2008-07-28 12:45 /usr/lib64/libsndfile.so -> libsndfile.so.1.0.17
lrwxrwxrwx 1 root root     20 2007-09-27 14:01 /usr/lib64/libsndfile.so.1 -> libsndfile.so.1.0.17
-rwxr-xr-x 1 root root 328832 2007-09-20 07:33 /usr/lib64/libsndfile.so.1.0.17

and the 32bit here:

ls -l /usr/lib/libsndfile*
-rw-r--r-- 1 root root 501314 2007-09-20 07:31 /usr/lib/libsndfile.a
lrwxrwxrwx 1 root root     20 2008-07-28 12:45 /usr/lib/libsndfile.so -> libsndfile.so.1.0.17
lrwxrwxrwx 1 root root     20 2007-09-27 14:01 /usr/lib/libsndfile.so.1 -> libsndfile.so.1.0.17
-rwxr-xr-x 1 root root 369612 2007-09-20 07:31 /usr/lib/libsndfile.so.1.0.17

I changed the default value of "libname" in the setup.py script from "sndfile" to "libsndfile" (and also tried several other variants, like "libsndfile.so", "libsndfile.so.1"). What am I missing here? Any help would be much appreciated. Rob -- Robert Felty http://robfelty.com "Verbing weirds language." -- Calvin (of Calvin and Hobbes) From jj20047 at gmail.com Mon Jul 28 17:06:04 2008 From: jj20047 at gmail.com (John) Date: Mon, 28 Jul 2008 21:06:04 +0000 (UTC) Subject: [SciPy-user] CPU you selected does not support x86-64 instruction set References: Message-ID: Nathan Bell <wnbell at gmail.com> writes: > This ticket may be related: > http://projects.scipy.org/scipy/scipy/ticket/607 > That did it, thanks! From fperez.net at gmail.com Mon Jul 28 18:33:45 2008 From: fperez.net at gmail.com (Fernando Perez) Date: Mon, 28 Jul 2008 15:33:45 -0700 Subject: [SciPy-user] Python tools at the annual SIAM meeting Message-ID: Hi all, for those interested, here's a brief report on the recent SIAM meeting where a number of Python-based tools for scientific computing (including ipython, numpy, scipy, sage, and more) were discussed: http://fdoperez.blogspot.com/2008/07/python-tools-for-science-go-to-siam.html The punch line is that we got selected for the annual highlights of the conference: http://www.ams.org/ams/siam-2008.html#python Thanks again to all who contributed talks and attended!
From dominique.orban at gmail.com Mon Jul 28 19:09:09 2008
From: dominique.orban at gmail.com (Dominique Orban)
Date: Mon, 28 Jul 2008 19:09:09 -0400
Subject: [SciPy-user] Python tools at the annual SIAM meeting
Message-ID: <8793ae6e0807281609i4075e048sbaefb22b3b211450@mail.gmail.com>

Fernando,

Congratulations on being selected for the highlights of the meeting. For those of us who were not in San Diego, is there any chance to see the slides of the talks in the three sessions you co-organized? That would be awesome.

Cheers,
Dominique

On Mon, Jul 28, 2008 at 6:33 PM, Fernando Perez wrote:
> Hi all,
> for those interested, here's a brief report on the recent SIAM meeting where a number of Python-based tools for scientific computing (including ipython, numpy, scipy, sage, and more) were discussed:
> http://fdoperez.blogspot.com/2008/07/python-tools-for-science-go-to-siam.html
> The punch line is that we got selected for the annual highlights of the conference:
> http://www.ams.org/ams/siam-2008.html#python
> Thanks again to all who contributed talks and attended!
> Cheers,
> f

From fperez.net at gmail.com Mon Jul 28 19:16:40 2008
From: fperez.net at gmail.com (Fernando Perez)
Date: Mon, 28 Jul 2008 16:16:40 -0700
Subject: [SciPy-user] Python tools at the annual SIAM meeting
In-Reply-To: <8793ae6e0807281609i4075e048sbaefb22b3b211450@mail.gmail.com>
References: <8793ae6e0807281609i4075e048sbaefb22b3b211450@mail.gmail.com>

Hi Dominique,

On Mon, Jul 28, 2008 at 4:09 PM, Dominique Orban wrote:
> Fernando,
> Congratulations on being selected for the highlights of the meeting. For those of us who were not in San Diego, is there any chance to see the slides of the talks in the three sessions you co-organized? That would be awesome.

I should have thought of doing that before, sorry. Here are my slides:

https://cirl.berkeley.edu/fperez/talks/0807_siam_intro_python_scicomp.pdf

and I'm in the process of contacting the others for theirs. I'll report back in a couple of days with the appropriate links as I get them.

Cheers,

f

From zachary.pincus at yale.edu Mon Jul 28 23:04:24 2008
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Mon, 28 Jul 2008 23:04:24 -0400
Subject: [SciPy-user] Matlab IO -- can others with matlab test this bug?
Message-ID: <9595C26D-C67A-404F-AB42-9D7EDB8860F8@yale.edu>

Hi,

I have found that scipy's .mat-file IO creates version-5 files that cannot be read by the version of matlab I have access to, if a variable with less than five letters in the name is saved to that file.

The version of matlab I have is 6.5.0.180913a (R13). Here's an example to show the issue:

python:
>>> import scipy.io
>>> scipy.io.savemat('works.mat', {'abcde':1}, format='5')
>>> scipy.io.savemat('fails.mat', {'abcd':1}, format='5')

matlab:
>> load works
>> load fails
??? Error using ==> load
Can't read file.

Could others please test this code out with the version(s) of Matlab that they have access to?

The issue is that it appears that (at least my version of) matlab assumes that for variables with <= 4-byte names, the "compressed data element" format will be used in the file. scipy.io.savemat does not do this... However, the mat-file specification clearly states that this format is *optional*, and so the scipy routines are technically correct -- this is a matlab bug.

If this bug with matlab's file IO is widespread, I will make a patch for scipy so that short elements are written in the compressed format; otherwise I'll just work around the issue by ensuring that I never use any short names.

I would be very grateful if others would test this.

Zach
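Not from the thread: a minimal way to see, from Python, which tag format was actually written for the variable name. It assumes a little-endian file containing a single scalar variable, so the sub-element offsets are fixed (128-byte MAT 5 header, 8-byte miMATRIX tag, 16 bytes of array flags, 16 bytes of dimensions):

import struct

def name_tag_format(path):
    f = open(path, 'rb')
    f.seek(128 + 8 + 16 + 16)  # start of the array-name sub-element
    word1, word2 = struct.unpack('<II', f.read(8))
    f.close()
    if word1 >> 16:            # high 16 bits hold a byte count => small/compressed tag
        return 'small element, %d-byte name' % (word1 >> 16)
    return 'normal element, %d-byte name' % word2

print name_tag_format('works.mat'), name_tag_format('fails.mat')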
From david at ar.media.kyoto-u.ac.jp Mon Jul 28 23:14:44 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 29 Jul 2008 12:14:44 +0900
Subject: [SciPy-user] trouble installing audiolab
In-Reply-To: <200807281427.20767.robfelty@gmail.com>
References: <200807281427.20767.robfelty@gmail.com>
Message-ID: <488E8B24.8010209@ar.media.kyoto-u.ac.jp>

Robert Felty wrote:
> I am having trouble installing the audiolab scipy toolkit. I have scipy installed just fine, but I am getting a library not found error when I try to install audiolab

Hi Robert,

I am sorry this is not working for you. I am not sure what the problem is, though. You should not change libname to libsndfile, because distutils adds it automatically for you (it will look for liblibsndfile otherwise...). Could you add this at line 92 of setup.py (print statement)?

for i in lib_dirs:
    tmp = self.check_libs(i, sndfile_libs)
    print tmp, i
    if tmp is not None:
        info = tmp
        break

thanks,

David
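An independent sanity check (my addition, not David's): the standard library can ask the dynamic linker whether it can see libsndfile at all, which separates "library missing" from "setup.py looking in the wrong place":

from ctypes.util import find_library
print find_library('sndfile')   # e.g. 'libsndfile.so.1'; None means not found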
From jjh at 42quarks.com Mon Jul 28 23:43:41 2008
From: jjh at 42quarks.com (Jonathan Hunt)
Date: Tue, 29 Jul 2008 13:43:41 +1000
Subject: [SciPy-user] Matlab IO -- can others with matlab test this bug?
In-Reply-To: <9595C26D-C67A-404F-AB42-9D7EDB8860F8@yale.edu>
References: <9595C26D-C67A-404F-AB42-9D7EDB8860F8@yale.edu>

Which version of SciPy are you using? I have 0.6.0 and MATLAB 7.6.0.324 (R2008a).

>> import scipy.io
>> scipy.io.savemat('works.mat', {'abcde':1}) # My version of SciPy doesn't support a format argument
>> scipy.io.savemat('fails.mat', {'abcd':1})

works fine.

Hope this helps.
Jonny

On Tue, Jul 29, 2008 at 1:04 PM, Zachary Pincus wrote:
> Hi,
> I have found that scipy's .mat-file IO creates version-5 files that cannot be read by the version of matlab I have access to, if a variable with less than five letters in the name is saved to that file.
> The version of matlab I have is 6.5.0.180913a (R13). Here's an example to show the issue:
> python:
> >>> import scipy.io
> >>> scipy.io.savemat('works.mat', {'abcde':1}, format='5')
> >>> scipy.io.savemat('fails.mat', {'abcd':1}, format='5')
> matlab:
> >> load works
> >> load fails
> ??? Error using ==> load
> Can't read file.
> Could others please test this code out with the version(s) of Matlab that they have access to?
> The issue is that it appears that (at least my version of) matlab assumes that for variables with <= 4-byte names, the "compressed data element" format will be used in the file. scipy.io.savemat does not do this... However, the mat-file specification clearly states that this format is *optional*, and so the scipy routines are technically correct -- this is a matlab bug.
> If this bug with matlab's file IO is widespread, I will make a patch for scipy so that short elements are written in the compressed format; otherwise I'll just work around the issue by ensuring that I never use any short names.
> I would be very grateful if others would test this.
> Zach

--
Jonathan J Hunt
Homepage: http://www.42quarks.net.nz/wiki/JJH
(Further contact details there)
"Physics isn't the most important thing. Love is." Richard Feynman

From zachary.pincus at yale.edu Tue Jul 29 00:04:09 2008
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Tue, 29 Jul 2008 00:04:09 -0400
Subject: [SciPy-user] Matlab IO -- can others with matlab test this bug?
References: <9595C26D-C67A-404F-AB42-9D7EDB8860F8@yale.edu>
Message-ID: <67E4404D-CC00-4775-BE98-5C0BF81B53B5@yale.edu>

Hi Johnny,

> Which version of SciPy are you using? I have 0.6.0 and MATLAB 7.6.0.324 (R2008a).

Foolish me for not specifying! I'm using a recent-ish SVN checkout of scipy: 0.7.0.dev4393.

It seems that the version-5 format writing came into the svn early this year, after the 0.6 release. The version-4 mat-file IO code seems to deal fine with short names.

Thanks again,

Zach

> >> import scipy.io
> >> scipy.io.savemat('works.mat', {'abcde':1}) # My version of SciPy doesn't support a format argument
> >> scipy.io.savemat('fails.mat', {'abcd':1})
>
> works fine.
>
> Hope this helps.
> Jonny
>
> On Tue, Jul 29, 2008 at 1:04 PM, Zachary Pincus wrote:
>> Hi,
>> I have found that scipy's .mat-file IO creates version-5 files that cannot be read by the version of matlab I have access to, if a variable with less than five letters in the name is saved to that file.
>> The version of matlab I have is 6.5.0.180913a (R13). Here's an example to show the issue:
>> python:
>> >>> import scipy.io
>> >>> scipy.io.savemat('works.mat', {'abcde':1}, format='5')
>> >>> scipy.io.savemat('fails.mat', {'abcd':1}, format='5')
>> matlab:
>> >> load works
>> >> load fails
>> ??? Error using ==> load
>> Can't read file.
>> Could others please test this code out with the version(s) of Matlab that they have access to?
>> The issue is that it appears that (at least my version of) matlab assumes that for variables with <= 4-byte names, the "compressed data element" format will be used in the file. scipy.io.savemat does not do this... However, the mat-file specification clearly states that this format is *optional*, and so the scipy routines are technically correct -- this is a matlab bug.
>> If this bug with matlab's file IO is widespread, I will make a patch for scipy so that short elements are written in the compressed format; otherwise I'll just work around the issue by ensuring that I never use any short names.
>> I would be very grateful if others would test this.
>> Zach
From aisaac at american.edu Tue Jul 29 01:31:16 2008
From: aisaac at american.edu (Alan G Isaac)
Date: Tue, 29 Jul 2008 01:31:16 -0400
Subject: [SciPy-user] temporary copies for in-place array modification?
Message-ID: <488EAB24.4080009@american.edu>

>> On Mon, 28 Jul 2008, Robert Kern apparently wrote:
>>> Unfortunately, a temporary is unavoidable. If you modify `a` in-place, you will mess up the computation. For example, we could try using the third argument to the subtract() ufunc to place the results back into a[1:]:
>>> In [1]: import numpy
>>> In [2]: a = numpy.arange(1,7)
>>> In [3]: numpy.subtract(a[1:], a[:-1], a[1:])
>>> Out[3]: array([1, 2, 2, 3, 3])

> 2008/7/28 Alan G Isaac :
>> But he could work from the other end::
>>     >>> np.subtract(a[1:],a[:-1],a[:-1])
>>     array([1, 1, 1, 1, 1])
>> As long as he can then use a[:-1] instead of a[1:]. Right?

Anne Archibald wrote:
> Maybe. The result is dependent on the memory layout of a.

What would be the right test for this being ok? And would you give an example where this will fail? I assumed the implicit loop would always be over the elements in the order they appear in the array. E.g., ::

    >>> a = np.arange(20)
    >>> a2 = a[::-2]
    >>> a2.flags
      C_CONTIGUOUS : False
      F_CONTIGUOUS : False
      OWNDATA : False
      WRITEABLE : True
      ALIGNED : True
      UPDATEIFCOPY : False
    >>> np.subtract(a2[1:],a2[:-1],a2[:-1])
    array([-2, -2, -2, -2, -2, -2, -2, -2, -2])
    >>> a2
    array([-2, -2, -2, -2, -2, -2, -2, -2, -2,  1])

Thanks,
Alan

From jjh at 42quarks.com Tue Jul 29 01:37:29 2008
From: jjh at 42quarks.com (Jonathan Hunt)
Date: Tue, 29 Jul 2008 15:37:29 +1000
Subject: [SciPy-user] Matlab IO -- can others with matlab test this bug?
In-Reply-To: <67E4404D-CC00-4775-BE98-5C0BF81B53B5@yale.edu>
References: <9595C26D-C67A-404F-AB42-9D7EDB8860F8@yale.edu> <67E4404D-CC00-4775-BE98-5C0BF81B53B5@yale.edu>

Hi Zach (and others),

I'd be happy to test the latest SVN checkout if someone could tell me how I might build this in place (when I try and import from the scipy directory directly it tells me that it needs to be installed, which I don't want to do as I want to leave the existing release version installed).

Is there an easy way to go about using/modifying/testing the SVN checkout without installing it each time one makes changes?

Thanks,
Jonny

On Tue, Jul 29, 2008 at 2:04 PM, Zachary Pincus wrote:
> Hi Johnny,
>> Which version of SciPy are you using? I have 0.6.0 and MATLAB 7.6.0.324 (R2008a).
> Foolish me for not specifying! I'm using a recent-ish SVN checkout of scipy: 0.7.0.dev4393.
> It seems that the version-5 format writing came into the svn early this year, after the 0.6 release. The version-4 mat-file IO code seems to deal fine with short names.
> Thanks again,
> Zach

--
Jonathan J Hunt
Homepage: http://www.42quarks.net.nz/wiki/JJH
(Further contact details there)
"Physics isn't the most important thing. Love is." Richard Feynman

From wnbell at gmail.com Tue Jul 29 03:13:48 2008
From: wnbell at gmail.com (Nathan Bell)
Date: Tue, 29 Jul 2008 02:13:48 -0500
Subject: [SciPy-user] temporary copies for in-place array modification?
In-Reply-To: <488EAB24.4080009@american.edu>
References: <488EAB24.4080009@american.edu>

On Tue, Jul 29, 2008 at 12:31 AM, Alan G Isaac wrote:
>> Maybe. The result is dependent on the memory layout of a.
>
> What would be the right test for this being ok? And would you give an example where this will fail? I assumed the implicit loop would always be over the elements in the order they appear in the array.

I wouldn't write code that depended on such details. Doing so means that any attempt to parallelize the implementation will break your code in subtle, potentially non-deterministic ways.

--
Nathan Bell wnbell at gmail.com
http://graphics.cs.uiuc.edu/~wnbell/
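For reference (my addition, not part of the exchange): numpy already packages the safe version of this operation at the cost of exactly one temporary, so code that does not want to depend on ufunc traversal order can simply use it:

import numpy as np
a = np.arange(1, 7)
d = np.diff(a)    # array([1, 1, 1, 1, 1]); allocates a new length-5 array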
From elcorto at gmx.net Tue Jul 29 04:39:53 2008
From: elcorto at gmx.net (Steve Schmerler)
Date: Tue, 29 Jul 2008 10:39:53 +0200
Subject: [SciPy-user] temporary copies for in-place array modification?
In-Reply-To: <20080728114159.GA5776@ramrod.de>
References: <20080728114159.GA5776@ramrod.de>
Message-ID: <20080729083953.GA4340@ramrod.de>

On Jul 28 13:41, Steve Schmerler wrote:
> Hi all
>
> Say I do this:
>
> >>> a
> array([1, 2, 3, 4, 5, 6])
> >>> a[1:] = a[1:] - a[:-1]
> >>> a
> array([1, 1, 1, 1, 1, 1])
> >>> b = a[1:]
> >>> ...
>
> I.e. I want to compute the difference of a's elements and store them in place in b = a[1:] to work with that (b is only a view so that's ok, no copy).
>
> Are there any temp copies of `a` involved? I ask b/c `a` will be large. Thanks!

Thanks for all answers. Actually, `a` will be a 3D array which I'm filling with 2D arrays in a loop

    a = empty((x, y, z))
    first_2d_array(a[:,0,:]) # in-place modification
    for j in xrange(1, N+1):
        new_2d_array(a[:,j,:])

and I get 2D arrays of differences.

    a[:,1:,:] = a[:,1:,:] - a[:,:-1,:]
    b = a[:,1:,:]

ATM a's size is on the order of several MB, so the cost of the temporary is nothing. But for the future, since the temporary is in general unavoidable (or it's safe to assume one), I'll use smaller temp 2D arrays in the loop and compute the diffs right there (and b.shape[1] = a.shape[1]-1).

    old = first_2d_array(...)
    for j in xrange(0, N):
        # view only, operate on `w`
        w = b[:,j,:]
        new_2d_array(w)
        tmp = w.copy()
        w -= old
        old = tmp

steve

From zachary.pincus at yale.edu Tue Jul 29 08:42:51 2008
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Tue, 29 Jul 2008 08:42:51 -0400
Subject: [SciPy-user] Matlab IO -- can others with matlab test this bug?
References: <9595C26D-C67A-404F-AB42-9D7EDB8860F8@yale.edu> <67E4404D-CC00-4775-BE98-5C0BF81B53B5@yale.edu>
Message-ID: <8D21E970-E9A5-470C-A8B7-97960BB3C0FA@yale.edu>

Hi Jonny,

> I'd be happy to test the latest SVN checkout if someone could tell me how I might build this in place (when I try and import from the scipy directory directly it tells me that it needs to be installed which I don't want to do as I want to leave the existing release version installed).
>
> Is there an easy way to go about using/modifying/testing the SVN checkout without installing it each time one makes changes?

Perhaps some others can chime in with more "official" methods, but I typically do one of several things to test SVN versions:

(1) I cd to the build/lib-whatever subdirectory, after 'python setup.py build'. This dir contains a mostly-complete scipy install. After starting python and making sure that '.' is first on the sys.path (so that the scipy dir in that directory shadows the system one), 'import scipy' should get the new version. Alternately, you can start python from anywhere, and then do 'import sys; sys.path.insert(0, "/path/to/build/lib..."); import scipy' to get the same effect.

(2) The above method fails for certain things like using f2py, which needs certain source files that get installed, but not copied over into build/lib. In those cases, you can install scipy into an alternate location somewhere, and then add that to sys.path (or to the PYTHONPATH environment variable). Use the '--install-base' or '--prefix' switch to 'python setup.py install' to control where the files get installed. I'm not really sure which is most appropriate here.

(3) In this particular case, scipy.io.matlab doesn't involve any compiled modules. You could just cd to scipy/io/matlab in the SVN checkout, and then 'import mio' should work. The desired function is mio.savemat....

Zach
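A compact sanity check to go with method (1) above (mine; the build path is hypothetical and depends on your platform and python version):

import sys
sys.path.insert(0, '/path/to/scipy/build/lib.linux-x86_64-2.5')
import scipy
print scipy.__version__, scipy.__file__   # should show the SVN version and the build dir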
From david at ar.media.kyoto-u.ac.jp Tue Jul 29 08:37:30 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 29 Jul 2008 21:37:30 +0900
Subject: [SciPy-user] Matlab IO -- can others with matlab test this bug?
In-Reply-To: <8D21E970-E9A5-470C-A8B7-97960BB3C0FA@yale.edu>
References: <9595C26D-C67A-404F-AB42-9D7EDB8860F8@yale.edu> <67E4404D-CC00-4775-BE98-5C0BF81B53B5@yale.edu> <8D21E970-E9A5-470C-A8B7-97960BB3C0FA@yale.edu>
Message-ID: <488F0F0A.70105@ar.media.kyoto-u.ac.jp>

Zachary Pincus wrote:
> Perhaps some others can chime in with more "official" methods, but I typically do one of several things to test SVN versions:

On Unix, a nice and easy way to switch between different numpy installs without touching the environment at all is stow. Basically, what you do once stow is installed is:

python setup.py install --prefix=SOMEPATH/stow/numpy1
python setup.py install --prefix=SOMEPATH/stow/numpy2

And then stow makes links between the 'real' numpy and the one you want:

stow numpy1    -> numpy1 is used
stow -R numpy1 -> numpy1 is disabled
stow numpy2    -> numpy2 is enabled

I switch between many numpy configurations (one which only uses released versions for 'real' work, many others with various compiler combinations to test the build system). It also enables true uninstallation, since any installed version is self-contained in one directory (you can rm -rf the directory without any chance of removing something unrelated).

cheers,

David

From jjh at 42quarks.com Tue Jul 29 09:31:32 2008
From: jjh at 42quarks.com (Jonathan Hunt)
Date: Tue, 29 Jul 2008 23:31:32 +1000
Subject: [SciPy-user] Running SciPy SVN without installing Was: Matlab IO -- can others with matlab test this bug?

Hi,

Thanks for the advice. But all of these seem like hacks. Is there any clean way to use SVN SciPy without installing it? This is particularly important since I'm hoping to modify SciPy (I want to add a mio wrapper for MATLAB 7/HDF5 files), and obviously I would like to avoid having to build/install every time I make a modification to the library that I want to test.

Thanks,
Jonny

On Tue, Jul 29, 2008 at 10:37 PM, David Cournapeau wrote:
> Zachary Pincus wrote:
>> Perhaps some others can chime in with more "official" methods, but I typically do one of several things to test SVN versions:
>
> On Unix, a nice and easy way to switch between different numpy installs without touching the environment at all is stow. Basically, what you do once stow is installed is:
>
> python setup.py install --prefix=SOMEPATH/stow/numpy1
> python setup.py install --prefix=SOMEPATH/stow/numpy2
>
> And then stow makes links between the 'real' numpy and the one you want:
>
> stow numpy1    -> numpy1 is used
> stow -R numpy1 -> numpy1 is disabled
> stow numpy2    -> numpy2 is enabled
>
> I switch between many numpy configurations (one which only uses released versions for 'real' work, many others with various compiler combinations to test the build system). It also enables true uninstallation, since any installed version is self-contained in one directory (you can rm -rf the directory without any chance of removing something unrelated).
>
> cheers,
>
> David

--
Jonathan J Hunt
Homepage: http://www.42quarks.net.nz/wiki/JJH
(Further contact details there)
"Physics isn't the most important thing. Love is." Richard Feynman
From david at ar.media.kyoto-u.ac.jp Tue Jul 29 09:23:56 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 29 Jul 2008 22:23:56 +0900
Subject: [SciPy-user] Running SciPy SVN without installing Was: Matlab IO -- can others with matlab test this bug?
Message-ID: <488F19EC.2060506@ar.media.kyoto-u.ac.jp>

Jonathan Hunt wrote:
> Hi,
>
> Thanks for the advice. But all of these seem like hacks.

stow is certainly not a hack, particularly compared to not installing scipy. There is the possibility to use the develop mode of setuptools, or various "sandboxed" pythons, but they are likely to be more work, and I don't know anything about them, so I can't really comment on them (I know other scipy developers do, though).

> Is there any clean way to use SVN SciPy without installing it? This is particularly important since I'm hoping to modify SciPy (I want to add a mio wrapper for MATLAB 7/HDF5 files), and obviously I would like to avoid having to build/install every time I make a modification to the library that I want to test.

This is not obvious. For most software, you cannot test it without 'installing' it, so why should it be different for python packages? Any method enabling the test of the software without installing it (besides a sandbox) is a hack by definition.

cheers,

David

From emanuele at relativita.com Tue Jul 29 10:04:03 2008
From: emanuele at relativita.com (Emanuele Olivetti)
Date: Tue, 29 Jul 2008 16:04:03 +0200
Subject: [SciPy-user] Running SciPy SVN without installing Was: Matlab IO -- can others with matlab test this bug?
Message-ID: <488F2353.6040106@relativita.com>

Instead of stow, why not just use PYTHONPATH? (In bash, continuing the example on numpy1:)

export PYTHONPATH="SOMEPATH/stow/numpy1"

then run python/emacs/ipython/whatever and use numpy1.

Emanuele

Jonathan Hunt wrote:
> Hi,
>
> Thanks for the advice. But all of these seem like hacks. Is there any clean way to use SVN SciPy without installing it? This is particularly important since I'm hoping to modify SciPy (I want to add a mio wrapper for MATLAB 7/HDF5 files), and obviously I would like to avoid having to build/install every time I make a modification to the library that I want to test.
>
> Thanks,
> Jonny
>
> On Tue, Jul 29, 2008 at 10:37 PM, David Cournapeau wrote:
>> Zachary Pincus wrote:
>>> Perhaps some others can chime in with more "official" methods, but I typically do one of several things to test SVN versions:
>>
>> On Unix, a nice and easy way to switch between different numpy installs without touching the environment at all is stow. Basically, what you do once stow is installed is:
>>
>> python setup.py install --prefix=SOMEPATH/stow/numpy1
>> python setup.py install --prefix=SOMEPATH/stow/numpy2
>>
>> And then stow makes links between the 'real' numpy and the one you want:
>>
>> stow numpy1    -> numpy1 is used
>> stow -R numpy1 -> numpy1 is disabled
>> stow numpy2    -> numpy2 is enabled
>>
>> I switch between many numpy configurations (one which only uses released versions for 'real' work, many others with various compiler combinations to test the build system). It also enables true uninstallation, since any installed version is self-contained in one directory (you can rm -rf the directory without any chance of removing something unrelated).
>>
>> cheers,
>>
>> David
From david at ar.media.kyoto-u.ac.jp Tue Jul 29 09:56:17 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 29 Jul 2008 22:56:17 +0900
Subject: [SciPy-user] Running SciPy SVN without installing Was: Matlab IO -- can others with matlab test this bug?
In-Reply-To: <488F2353.6040106@relativita.com>
References: <488F2353.6040106@relativita.com>
Message-ID: <488F2181.3040307@ar.media.kyoto-u.ac.jp>

Emanuele Olivetti wrote:
> Instead of stow, why not just use PYTHONPATH? (In bash, continuing the example on numpy1:)
> export PYTHONPATH="SOMEPATH/stow/numpy1"
> then run python/emacs/ipython/whatever and use numpy1.

it breaks (or more exactly becomes complicated) once you depend on several interdependent packages (say numpy and scipy, for example). Granted, it may not be a problem for the problem encountered by John, but I am always surprised by the number of people who use - buggy - hacks to do something for which stow was invented.

cheers,

David

From jjh at 42quarks.com Tue Jul 29 10:15:38 2008
From: jjh at 42quarks.com (Jonathan Hunt)
Date: Wed, 30 Jul 2008 00:15:38 +1000
Subject: [SciPy-user] Running SciPy SVN without installing Was: Matlab IO -- can others with matlab test this bug?
In-Reply-To: <488F2181.3040307@ar.media.kyoto-u.ac.jp>
References: <488F2353.6040106@relativita.com> <488F2181.3040307@ar.media.kyoto-u.ac.jp>

Hi,

Firstly David, hack was not intended as an insult. Thanks for pointing me to stow; I hadn't heard of it before.

However, could someone tell me how most SciPy developers work on the library? Because stow still doesn't seem to get around the need to do a build/install every time one makes a modification to the library. I looked at the online documentation for developers but I can't find any clues. What's the best way to modify/test SciPy SVN?

Thanks,
Jonny

On Tue, Jul 29, 2008 at 11:56 PM, David Cournapeau wrote:
> Emanuele Olivetti wrote:
>> Instead of stow, why not just use PYTHONPATH? (In bash, continuing the example on numpy1:)
>> export PYTHONPATH="SOMEPATH/stow/numpy1"
>> then run python/emacs/ipython/whatever and use numpy1.
>
> it breaks (or more exactly becomes complicated) once you depend on several interdependent packages (say numpy and scipy, for example). Granted, it may not be a problem for the problem encountered by John, but I am always surprised by the number of people who use - buggy - hacks to do something for which stow was invented.
>
> cheers,
>
> David

--
Jonathan J Hunt
Homepage: http://www.42quarks.net.nz/wiki/JJH
(Further contact details there)
"Physics isn't the most important thing. Love is." Richard Feynman

From pgmdevlist at gmail.com Tue Jul 29 10:15:24 2008
From: pgmdevlist at gmail.com (Pierre GM)
Date: Tue, 29 Jul 2008 10:15:24 -0400
Subject: [SciPy-user] Running SciPy SVN without installing Was: Matlab IO -- can others with matlab test this bug?
Message-ID: <200807291015.24171.pgmdevlist@gmail.com>

Jonathan,
Hopefully you won't need to modify a lot of files all over Scipy, but just a few of them? Then you may want to install the SVN version and create symbolic links pointing to the files/directories you'll actually be modifying. That works well, may come back to bite you if you don't delete an old installation of Scipy before installing a new one, and is a bit problematic from time to time, but (goto 1)...

From david at ar.media.kyoto-u.ac.jp Tue Jul 29 10:07:52 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 29 Jul 2008 23:07:52 +0900
Subject: [SciPy-user] Running SciPy SVN without installing Was: Matlab IO -- can others with matlab test this bug?
References: <488F2353.6040106@relativita.com> <488F2181.3040307@ar.media.kyoto-u.ac.jp>
Message-ID: <488F2438.9040300@ar.media.kyoto-u.ac.jp>

Jonathan Hunt wrote:
> Hi,
>
> Firstly David, hack was not intended as an insult.

I did not take it as an insult, don't worry. I just pointed out that any supposedly non-hackish thing is more hackish than stow.

> However, could someone tell me how most SciPy developers work on the library?

Personally, that's how I develop on scipy and numpy.

> Because stow still doesn't seem to get around the need to do a build/install every time one makes a modification to the library.

I think that getting around this need brings more problems than it solves. I always install and test from the installed version; in most cases, I even clean the installed version first because this often leads to subtle problems. Installing numpy/scipy like that takes a few seconds, and is easy to automate by make, rake files if I need portability on Windows, etc...

cheers,

David

From david at ar.media.kyoto-u.ac.jp Tue Jul 29 10:08:57 2008
From: david at ar.media.kyoto-u.ac.jp (David Cournapeau)
Date: Tue, 29 Jul 2008 23:08:57 +0900
Subject: [SciPy-user] Running SciPy SVN without installing Was: Matlab IO -- can others with matlab test this bug?
In-Reply-To: <200807291015.24171.pgmdevlist@gmail.com>
References: <200807291015.24171.pgmdevlist@gmail.com>
Message-ID: <488F2479.8020706@ar.media.kyoto-u.ac.jp>

Pierre GM wrote:
> Jonathan,
>
> Hopefully you won't need to modify a lot of files all over Scipy, but just a few of them? Then you may want to install the SVN version and create symbolic links pointing to the files/directories you'll actually be modifying.

That's precisely how stow works ...

> That works well, may come back to bite you if you don't delete an old installation of Scipy before installing a new one, and is a bit problematic from time to time, but (goto 1)...

... except that it solves this problem too :) Really, stow is nice. The only problem is that it does not work on windows.

cheers,

David

From jjh at 42quarks.com Tue Jul 29 10:32:08 2008
From: jjh at 42quarks.com (Jonathan Hunt)
Date: Wed, 30 Jul 2008 00:32:08 +1000
Subject: [SciPy-user] Running SciPy SVN without installing Was: Matlab IO -- can others with matlab test this bug?
In-Reply-To: <488F2479.8020706@ar.media.kyoto-u.ac.jp>
References: <200807291015.24171.pgmdevlist@gmail.com> <488F2479.8020706@ar.media.kyoto-u.ac.jp>

Hi,

Thanks for all the comments. If anyone uses the develop mode of setuptools or other methods I'd be interested to hear about it. Also, might I suggest (apologies if they do exist somewhere and I missed them) that the (summarized) contents of this thread might be useful on the SciPy developers wiki, to lower the barrier for others getting started hacking SciPy.

Thanks for all the help so far,
Jonny

--
Jonathan J Hunt
Homepage: http://www.42quarks.net.nz/wiki/JJH
(Further contact details there)
"Physics isn't the most important thing. Love is." Richard Feynman
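For reference (my addition, hedged): setuptools' develop mode installs a link to the source tree instead of copying files, so edits to pure-python modules are picked up without reinstalling. Assuming setuptools is available -- if I recall correctly, numpy (and, I believe, scipy) ship a setupegg.py wrapper around setup.py for exactly this kind of use -- the invocation is:

python setupegg.py develop

Compiled extensions still need a rebuild after changes, so this mostly helps for pure-python work like the mat-file IO code discussed here.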
From pgmdevlist at gmail.com Tue Jul 29 10:54:32 2008
From: pgmdevlist at gmail.com (Pierre GM)
Date: Tue, 29 Jul 2008 10:54:32 -0400
Subject: [SciPy-user] Running SciPy SVN without installing Was: Matlab IO -- can others with matlab test this bug?
In-Reply-To: <488F2479.8020706@ar.media.kyoto-u.ac.jp>
References: <200807291015.24171.pgmdevlist@gmail.com> <488F2479.8020706@ar.media.kyoto-u.ac.jp>
Message-ID: <200807291054.32764.pgmdevlist@gmail.com>

On Tuesday 29 July 2008 10:08:57 David Cournapeau wrote:
> Pierre GM wrote:
>> Then you may want to install the SVN version and create symbolic links pointing to the files/directories you'll actually be modifying.
>
> That's precisely how stow works ...

OK, I just found out about stow. I'm gonna have to try it. Thanks for the info.

From strawman at astraw.com Wed Jul 30 02:28:38 2008
From: strawman at astraw.com (Andrew Straw)
Date: Tue, 29 Jul 2008 23:28:38 -0700
Subject: [SciPy-user] Matlab IO -- can others with matlab test this bug?
In-Reply-To: <488F0F0A.70105@ar.media.kyoto-u.ac.jp>
References: <9595C26D-C67A-404F-AB42-9D7EDB8860F8@yale.edu> <67E4404D-CC00-4775-BE98-5C0BF81B53B5@yale.edu> <8D21E970-E9A5-470C-A8B7-97960BB3C0FA@yale.edu> <488F0F0A.70105@ar.media.kyoto-u.ac.jp>
Message-ID: <48900A16.6050401@astraw.com>

David Cournapeau wrote:
> Zachary Pincus wrote:
>> Perhaps some others can chime in with more "official" methods, but I typically do one of several things to test SVN versions:
>
> On Unix, a nice and easy way to switch between different numpy installs without touching the environment at all is stow. Basically, what you do once stow is installed is:
>
> python setup.py install --prefix=SOMEPATH/stow/numpy1
> python setup.py install --prefix=SOMEPATH/stow/numpy2
>
> And then stow makes links between the 'real' numpy and the one you want:
>
> stow numpy1    -> numpy1 is used
> stow -R numpy1 -> numpy1 is disabled
> stow numpy2    -> numpy2 is enabled
>
> I switch between many numpy configurations (one which only uses released versions for 'real' work, many others with various compiler combinations to test the build system). It also enables true uninstallation, since any installed version is self-contained in one directory (you can rm -rf the directory without any chance of removing something unrelated).

No time to do a similar writeup for it, but: virtualenv

http://pypi.python.org/pypi/virtualenv

From jjh at 42quarks.com Wed Jul 30 02:42:46 2008
From: jjh at 42quarks.com (Jonathan Hunt)
Date: Wed, 30 Jul 2008 16:42:46 +1000
Subject: [SciPy-user] Matlab IO -- can others with matlab test this bug?
In-Reply-To: <48900A16.6050401@astraw.com>
References: <9595C26D-C67A-404F-AB42-9D7EDB8860F8@yale.edu> <67E4404D-CC00-4775-BE98-5C0BF81B53B5@yale.edu> <8D21E970-E9A5-470C-A8B7-97960BB3C0FA@yale.edu> <488F0F0A.70105@ar.media.kyoto-u.ac.jp> <48900A16.6050401@astraw.com>

This is no doubt a dumb question. But why is it not possible simply to import the library in place (i.e. no install necessary, just use the python files as they are laid out in the SVN co directory)? Is it because of the C/Fortran files that need to be compiled?

Thanks,
Jonny

--
Jonathan J Hunt
Homepage: http://www.42quarks.net.nz/wiki/JJH
(Further contact details there)
"Physics isn't the most important thing. Love is." Richard Feynman
From nwagner at iam.uni-stuttgart.de Wed Jul 30 05:41:03 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 30 Jul 2008 11:41:03 +0200
Subject: [SciPy-user] Differential Algebraic Equation Solvers

Hi all,

I am looking for a python tool to solve a set of DAE's, e.g.

m \ddot{x} = 2 \lambda x
m \ddot{y} = 2 \lambda y
x^2 + y^2 - l^2 = 0

Any pointers?

Nils

From jr at sun.ac.za Wed Jul 30 06:43:12 2008
From: jr at sun.ac.za (Johann Rohwer)
Date: Wed, 30 Jul 2008 12:43:12 +0200
Subject: [SciPy-user] Differential Algebraic Equation Solvers
Message-ID: <200807301243.12871.jr@sun.ac.za>

On Wednesday, 30 July 2008, Nils Wagner wrote:
> Hi all,
>
> I am looking for a python tool to solve a set of DAE's, e.g.
>
> m \ddot{x} = 2 \lambda x
> m \ddot{y} = 2 \lambda y
> x^2 + y^2 - l^2 = 0

Our group has developed PySUNDIALS (http://pysundials.sourceforge.net), which is a Python package providing bindings for the SUNDIALS suite of solvers (using ctypes). SUNDIALS has a DAE solver (IDA), which should be able to handle your kind of problem. The PySUNDIALS distribution comes with examples for each of the wrapped solvers.

Johann

From nwagner at iam.uni-stuttgart.de Wed Jul 30 07:07:36 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 30 Jul 2008 13:07:36 +0200
Subject: [SciPy-user] Differential Algebraic Equation Solvers
In-Reply-To: <200807301243.12871.jr@sun.ac.za>
References: <200807301243.12871.jr@sun.ac.za>

On Wed, 30 Jul 2008 12:43:12 +0200 Johann Rohwer wrote:
> Our group has developed PySUNDIALS (http://pysundials.sourceforge.net), which is a Python package providing bindings for the SUNDIALS suite of solvers (using ctypes). SUNDIALS has a DAE solver (IDA), which should be able to handle your kind of problem. The PySUNDIALS distribution comes with examples for each of the wrapped solvers.

Hi Johann,

I have installed pysundials. Can you give me some advice on how to implement my simple example using pysundials? idadenx.py seems to be a good starting point. How do I define the set of equations, initial conditions etc.? Is it necessary to define the Jacobian?

Thank you in advance.

Nils
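Not from the thread: the pendulum system above, rewritten as the first-order residual F(t, y, y') = 0 that IDA-style solvers expect. The names and the state ordering are my own choices; m and l are the parameters from the original equations:

import numpy as np

def residual(t, y, yp, m=1.0, l=1.0):
    # state: y = [x, ypos, vx, vy, lam]
    x, ypos, vx, vy, lam = y
    return np.array([
        yp[0] - vx,                    # x' = vx
        yp[1] - vy,                    # y' = vy
        m * yp[2] - 2.0 * lam * x,     # m x'' = 2 lambda x
        m * yp[3] - 2.0 * lam * ypos,  # m y'' = 2 lambda y
        x**2 + ypos**2 - l**2,         # algebraic constraint
    ])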
From ferrell at diablotech.com Wed Jul 30 08:45:46 2008
From: ferrell at diablotech.com (Robert Ferrell)
Date: Wed, 30 Jul 2008 06:45:46 -0600
Subject: [SciPy-user] OS X install success! + Question
Message-ID: <5BE66D24-E4A2-4076-BF49-B77B1A1459C2@diablotech.com>

I just installed NumPy & SciPy on a new Mac Book Pro (OS X 10.5.2). Thanks for the great step-by-step instructions at http://www.scipy.org/Installing_SciPy/Mac_OS_X . The whole process was flawless until I tried to run the tests and I found I don't have nose installed. That leads to an elementary question about Python and OS X.

Before installing NumPy & SciPy I installed the OS X Python from python.org. That's 2.5.2 (laptop shipped with 2.5.1) and was installed in

/Library/Frameworks/Python.framework/Versions/Current/bin/python

which is just a link to

/Library/Frameworks/Python.framework/Versions/2.5/bin/python.

numpy and scipy installed in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/

So far so good. But, when I used easy_install for nose, that got installed into

/Library/Python/2.5/site-packages/.

What is the best way to deal with the Frameworks vs no Frameworks thing? Should I just put a link from /Library/Python/2.5 to /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5?

I realize this is a Python + OS X question and has nothing to do with SciPy per se. But I've searched the web for an answer with no luck. It seems like this group is the most likely to provide useful information.

Thanks, and thanks for the awesome install page. SciPy has come a very long way!

-robert

From zachary.pincus at yale.edu Wed Jul 30 09:57:07 2008
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Wed, 30 Jul 2008 09:57:07 -0400
Subject: [SciPy-user] OS X install success! + Question
In-Reply-To: <5BE66D24-E4A2-4076-BF49-B77B1A1459C2@diablotech.com>
References: <5BE66D24-E4A2-4076-BF49-B77B1A1459C2@diablotech.com>
Message-ID: <5449A4F1-E90B-430B-9223-72AFC7C9DF40@yale.edu>

Hi Robert,

I'm sure others with a little deeper knowledge of easy_install and friends may chime in, but this should get you started.

> Before installing NumPy & SciPy I installed the OS X Python from python.org. That's 2.5.2 (laptop shipped with 2.5.1) and was installed in
> /Library/Frameworks/Python.framework/Versions/Current/bin/python
> which is just a link to
> /Library/Frameworks/Python.framework/Versions/2.5/bin/python.
> numpy and scipy installed in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/
> So far so good. But, when I used easy_install for nose, that got installed into
> /Library/Python/2.5/site-packages/.
> What is the best way to deal with the Frameworks vs no Frameworks thing? Should I just put a link from /Library/Python/2.5 to /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5?

Short answer: the version of easy_install on your machine is for the version of python that came with the operating system. So "easy_install nose" installed nose for that version -- not for the python.org python that you just installed.

You will need to install easy_install for the new version of python that you just installed from python.org:
http://peak.telecommunity.com/DevCenter/EasyInstall#installing-easy-install

Then run the version of easy_install that gets installed by the ez_setup script. That version, for better or worse, lives by default in:
/Library/Frameworks/Python.framework/Versions/2.5/bin/easy_install

(I would recommend making a symlink to /usr/local/bin for ease. Alternately, you could run the ez_setup.py script with the --script-dir=/usr/local/bin option.)

Background information:

Apple ships a version of Python with OS X, and some housekeeping scripts, etc., depend on this python and in some cases, the specific (old or broken) versions of some of the libraries it comes with (e.g. numpy). This is installed in:
/System/Library/Frameworks/Python.framework/Versions/2.5

This version of python is accessible as /usr/bin/python, which is a symlink to
/System/Library/Frameworks/Python.framework/Versions/2.5/bin/python

The basic consensus is that one really shouldn't be mucking with this python, so the best bet is to install the python.org python, which goes into /Library and /usr/local/bin.

Now, what's the deal with /Library/Python/2.5/site-packages? Well, Apple's basic decree is that you must not alter things in /System/... directories.
But installing a new library for the system python would do just that, under the default regime (which would put them in /System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages). So Apple's python instead puts new libraries in /Library/Python/2.5/site-packages.

This is all fine, and controlled by the default values of the sys.path variable (which is the list of directories Python searches when you try to import something). It's illuminating to do:

import sys; print sys.path

in both the system and python.org versions of python.

Now, how does one deal with multiple versions of python when installing things? In the pre-easy_install era, it was quite simple. Python things are installed with setup.py files in a directory:

python setup.py install

would install whatever library for the version of python that was used to run the script. So /usr/bin/python setup.py install would install the library for the system python, and /usr/local/bin/python setup.py install would do so for the python.org version. (Plain 'python' is the /usr/local/bin version, thanks to the python.org installer.)

With easy_install, this sort of transparency is a bit lost. You need to look to see which version of python is invoked in the first line of that script to determine where the files will eventually go. If you have two versions of python, you need two versions of easy_install in different places. I personally find this a bit frustrating...

Zach
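A one-liner version of that check (my addition; the path is the usual location of Apple's copy and may differ on your system):

print open('/usr/bin/easy_install').readline()   # e.g. '#!/usr/bin/python' => installs for the system python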
From zachary.pincus at yale.edu Wed Jul 30 10:00:08 2008
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Wed, 30 Jul 2008 10:00:08 -0400
Subject: [SciPy-user] Matlab IO -- can others with matlab test this bug?
References: <9595C26D-C67A-404F-AB42-9D7EDB8860F8@yale.edu> <67E4404D-CC00-4775-BE98-5C0BF81B53B5@yale.edu> <8D21E970-E9A5-470C-A8B7-97960BB3C0FA@yale.edu> <488F0F0A.70105@ar.media.kyoto-u.ac.jp> <48900A16.6050401@astraw.com>

> This is no doubt a dumb question. But why is it not possible simply to import the library in place (i.e. no install necessary, just use the python files as they are laid out in the SVN co directory)? Is it because of the C/Fortran files that need to be compiled?

Yes, basically. Pure-python libraries can usually be imported in-place -- though sometimes, too-clever setup.py files can actually make it so the installed folder structure is different from the one in the source directory, and thus an in-place import will fail. (I think this is probably considered bad form though.)

In the specific, limited case of the matlab IO sub-sub-library, it is indeed pure python and can be imported in-place -- this was one of my hackish suggestions for testing it.

Zach

From gael.varoquaux at normalesup.org Wed Jul 30 10:10:40 2008
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Wed, 30 Jul 2008 16:10:40 +0200
Subject: [SciPy-user] OS X install success! + Question
In-Reply-To: <5449A4F1-E90B-430B-9223-72AFC7C9DF40@yale.edu>
References: <5BE66D24-E4A2-4076-BF49-B77B1A1459C2@diablotech.com> <5449A4F1-E90B-430B-9223-72AFC7C9DF40@yale.edu>
Message-ID: <20080730141040.GD21845@phare.normalesup.org>

On Wed, Jul 30, 2008 at 09:57:07AM -0400, Zachary Pincus wrote:
> With easy_install, this sort of transparency is a bit lost. You need to look to see which version of python is invoked in the first line of that script to determine where the files will eventually go. If you have two versions of python, you need two versions of easy_install in different places. I personally find this a bit frustrating...

Hum, are you sure you need two versions of easy_install? Doesn't using the "-d" option, or the "--prefix", work? Alternatively, if you don't like typing the options all the time, there is a configuration file to set it:
http://peak.telecommunity.com/DevCenter/EasyInstall#configuration-files

Gaël

From jjh at 42quarks.com Wed Jul 30 10:16:33 2008
From: jjh at 42quarks.com (Jonathan Hunt)
Date: Thu, 31 Jul 2008 00:16:33 +1000
Subject: [SciPy-user] Matlab IO -- can others with matlab test this bug?
References: <9595C26D-C67A-404F-AB42-9D7EDB8860F8@yale.edu> <67E4404D-CC00-4775-BE98-5C0BF81B53B5@yale.edu> <8D21E970-E9A5-470C-A8B7-97960BB3C0FA@yale.edu> <488F0F0A.70105@ar.media.kyoto-u.ac.jp> <48900A16.6050401@astraw.com>

Hi Zachary,

I just tested your test with yesterday's SVN SciPy and both .mat files load with no problems in MATLAB 7.6.0.324 (R2008a) (with Python 2.5, OS X 10.5.4). Perhaps Mathworks fixed the bug at their end, if that's where it is.

Hope that helps.
Jonny

On Thu, Jul 31, 2008 at 12:00 AM, Zachary Pincus wrote:
>> This is no doubt a dumb question. But why is it not possible simply to import the library in place (i.e. no install necessary, just use the python files as they are laid out in the SVN co directory)? Is it because of the C/Fortran files that need to be compiled?
>
> Yes, basically. Pure-python libraries can usually be imported in-place -- though sometimes, too-clever setup.py files can actually make it so the installed folder structure is different from the one in the source directory, and thus an in-place import will fail. (I think this is probably considered bad form though.)
>
> In the specific, limited case of the matlab IO sub-sub-library, it is indeed pure python and can be imported in-place -- this was one of my hackish suggestions for testing it.
>
> Zach

--
Jonathan J Hunt
Homepage: http://www.42quarks.net.nz/wiki/JJH
(Further contact details there)
"Physics isn't the most important thing. Love is." Richard Feynman

From ferrell at diablotech.com Wed Jul 30 10:17:08 2008
From: ferrell at diablotech.com (Robert Ferrell)
Date: Wed, 30 Jul 2008 08:17:08 -0600
Subject: [SciPy-user] OS X install success! + Question
In-Reply-To: <5449A4F1-E90B-430B-9223-72AFC7C9DF40@yale.edu>
References: <5BE66D24-E4A2-4076-BF49-B77B1A1459C2@diablotech.com> <5449A4F1-E90B-430B-9223-72AFC7C9DF40@yale.edu>
Message-ID: <3EAC1400-3C3A-4438-9DE3-2383C2F27D41@diablotech.com>

Thanks so much. I installed a new version of easy_install, as suggested, that installed nose, and now everything works great. All numpy tests and almost all scipy tests passed. (Nearly) Flawless victory!

I really appreciate the description of what's going on with the OS X python directories. You've explained it very well.

Thanks,

-robert

On Jul 30, 2008, at 7:57 AM, Zachary Pincus wrote:
> Hi Robert,
>
> I'm sure others with a little deeper knowledge of easy_install and friends may chime in, but this should get you started.
>
>> Before installing NumPy & SciPy I installed the OS X Python from python.org. That's 2.5.2 (laptop shipped with 2.5.1) and was installed in
>> /Library/Frameworks/Python.framework/Versions/Current/bin/python
>> which is just a link to
>> /Library/Frameworks/Python.framework/Versions/2.5/bin/python.
>> numpy and scipy installed in /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/
>>
>> So far so good. But, when I used easy_install for nose, that got installed into
>> /Library/Python/2.5/site-packages/.
>>
>> What is the best way to deal with the Frameworks vs no Frameworks thing? Should I just put a link from /Library/Python/2.5 to /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5?
>
> Short answer: the version of easy_install on your machine is for the version of python that came with the operating system. So "easy_install nose" installed nose for that version -- not for the python.org python that you just installed.
>
> You will need to install easy_install for the new version of python that you just installed from python.org:
> http://peak.telecommunity.com/DevCenter/EasyInstall#installing-easy-install
>
> Then run the version of easy_install that gets installed by the ez_setup script. That version, for better or worse, lives by default in:
> /Library/Frameworks/Python.framework/Versions/2.5/bin/easy_install
>
> (I would recommend making a symlink to /usr/local/bin for ease. Alternately, you could run the ez_setup.py script with the --script-dir=/usr/local/bin option.)
>
> Background information:
> Apple ships a version of Python with OS X, and some housekeeping scripts, etc., depend on this python and in some cases, the specific (old or broken) versions of some of the libraries it comes with (e.g. numpy). This is installed in:
> /System/Library/Frameworks/Python.framework/Versions/2.5
>
> This version of python is accessible as /usr/bin/python, which is a symlink to
> /System/Library/Frameworks/Python.framework/Versions/2.5/bin/python
>
> The basic consensus is that one really shouldn't be mucking with this python, so the best bet is to install the python.org python, which goes into /Library and /usr/local/bin.
>
> Now, what's the deal with /Library/Python/2.5/site-packages? Well, Apple's basic decree is that you must not alter things in /System/... directories. But installing a new library for the system python would do just that, under the default regime (which would put them in /System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages). So Apple's python instead puts new libraries in /Library/Python/2.5/site-packages.
>
> This is all fine, and controlled by the default values of the sys.path variable (which is the list of directories Python searches when you try to import something). It's illuminating to do:
> import sys; print sys.path
> in both the system and python.org versions of python.
>
> Now, how does one deal with multiple versions of python when installing things? In the pre-easy_install era, it was quite simple. Python things are installed with setup.py files in a directory:
> python setup.py install
> would install whatever library for the version of python that was used to run the script. So /usr/bin/python setup.py install would install the library for the system python, and /usr/local/bin/python setup.py install would do so for the python.org version. (Plain 'python' is the /usr/local/bin version, thanks to the python.org installer.)
>
> With easy_install, this sort of transparency is a bit lost. You need to look to see which version of python is invoked in the first line of that script to determine where the files will eventually go.
> If you have two versions of python, you need two versions of easy_install in different places. I personally find this a bit frustrating...
>
> Zach

From gael.varoquaux at normalesup.org Wed Jul 30 10:21:10 2008
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Wed, 30 Jul 2008 16:21:10 +0200
Subject: [SciPy-user] Matlab IO -- can others with matlab test this bug?
In-Reply-To: <48900A16.6050401@astraw.com>
References: <9595C26D-C67A-404F-AB42-9D7EDB8860F8@yale.edu> <67E4404D-CC00-4775-BE98-5C0BF81B53B5@yale.edu> <8D21E970-E9A5-470C-A8B7-97960BB3C0FA@yale.edu> <488F0F0A.70105@ar.media.kyoto-u.ac.jp> <48900A16.6050401@astraw.com>
Message-ID: <20080730142110.GE21845@phare.normalesup.org>

On Tue, Jul 29, 2008 at 11:28:38PM -0700, Andrew Straw wrote:
> No time to do a similar writeup for it, but: virtualenv
> http://pypi.python.org/pypi/virtualenv

Use with caution: there are lots of gotchas with it (for instance, there is a 90% chance that ipython or mayavi, or any program using python other than the interpreter, will not start in your virtual environment if it is not installed in your virtual env). I have shot myself in the foot many times. This is due to how they are installed, and has been reported, and is 'not a bug' :).

I use virtualenv a lot, but do remember when you are using it that something is making some magic, and that if things fail you have to look closer.

Gaël
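For context (my addition, not Gaël's): the basic virtualenv workflow being cautioned about looks like this -- everything installed while the environment is active stays inside it:

virtualenv SOMEPATH/scipy-svn            # create an isolated environment
source SOMEPATH/scipy-svn/bin/activate   # its python comes first on PATH
python setup.py install                  # installs into the virtualenv only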
From ferrell at diablotech.com Wed Jul 30 10:24:09 2008
From: ferrell at diablotech.com (Robert Ferrell)
Date: Wed, 30 Jul 2008 08:24:09 -0600
Subject: [SciPy-user] OS X install success! + Question
In-Reply-To: <20080730141040.GD21845@phare.normalesup.org>
References: <5BE66D24-E4A2-4076-BF49-B77B1A1459C2@diablotech.com> <5449A4F1-E90B-430B-9223-72AFC7C9DF40@yale.edu> <20080730141040.GD21845@phare.normalesup.org>

On Jul 30, 2008, at 8:10 AM, Gael Varoquaux wrote:
> Hum, are you sure you need two versions of easy_install? Doesn't using the "-d" option, or the "--prefix", work? Alternatively, if you don't like typing the options all the time, there is a configuration file to set it:
> http://peak.telecommunity.com/DevCenter/EasyInstall#configuration-files

I tried the -d option. The original (came with OS X) easy_install didn't like that. Here's the complaint:

$> sudo easy_install -v -n -d/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages

TEST FAILED: /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages does NOT support .pth files
error: bad install directory or PYTHONPATH

You are attempting to install a package to a directory that is not on PYTHONPATH and which Python does not read ".pth" files from.  The installation directory you specified (via --install-dir, --prefix, or the distutils default setting) was:

    /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages

and your PYTHONPATH environment variable currently contains:

    ''

That's what got me thinking there was some subtlety that I didn't appreciate. Zachary's email explained it. /Library/Python/2.5/site-packages is really for OS X's own python. Now that I understand what's up with OS X, I have greater confidence that I'll put future packages in the right place.

-robert

From shad0wfade at yahoo.com Wed Jul 30 10:42:50 2008
From: shad0wfade at yahoo.com (Joshua)
Date: Wed, 30 Jul 2008 07:42:50 -0700 (PDT)
Subject: [SciPy-user] Is there a collection of useful functions/modules?
Message-ID: <267237.208.qm@web32905.mail.mud.yahoo.com>

I'm very new to Python, as in only a week of programming, and was wondering if there is a page with a collection of highly useful functions/modules that are not necessarily maintained on a release-to-release basis, for things like performing FFT's on data in a file.

I wrote a function to do this using Numpy, and put in options to normalize the FFT for me and take the absolute value. There are many other options I should add, but before I add complexity, I wouldn't mind having my code scrutinized and made available for others to use and optimize. That way students and researchers don't have to reinvent the wheel, unless they want to or are using some odd formatting scheme. Just have the amplitude data in the list column-wise, exempli gratia:

0.0
0.1
0.0
-0.1

... et cetera.

This code makes no particular frequency sampling assumptions (other than the sampling was done correctly), and only deals with simple FFT processing. I've attached the code and welcome scrutiny. There are a few things that I know I should add/restructure in my code: better error handling, printing out the phase, multi-dimensional analysis, and |Magnitude| in 20dB.

Thanks for your help,
Joshua

[attachment scrubbed in the archive: fftfromfile.py, text/x-python, 3099 bytes]
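A minimal sketch of the kind of helper described above (mine; the original fftfromfile.py attachment is not in the archive, so the names and normalization choices here are assumptions):

import numpy as np

def fft_from_file(path, dt=1.0):
    x = np.loadtxt(path)                 # one amplitude per line, column-wise
    n = len(x)
    spectrum = np.fft.rfft(x) / n        # normalized one-sided spectrum
    freqs = np.arange(n // 2 + 1) / (n * dt)  # bin frequencies for sampling interval dt
    return freqs, np.abs(spectrum)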
> Alternatively, if you don't like typing the options all the time,
> there is a configuration file to set it:
> http://peak.telecommunity.com/DevCenter/EasyInstall#configuration-files

To the best of my limited knowledge, that won't work too well, since you're trying to use version A of Python to install a package for version B, but the installation process often (but not always?) gathers information from the currently-running version of Python -- version A -- to make decisions about the installation. (Information which may not be correct for version B.)

Zach

From zachary.pincus at yale.edu Wed Jul 30 10:45:46 2008
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Wed, 30 Jul 2008 10:45:46 -0400
Subject: [SciPy-user] Matlab IO -- can others with matlab test this bug?
In-Reply-To: 
References: <9595C26D-C67A-404F-AB42-9D7EDB8860F8@yale.edu> <67E4404D-CC00-4775-BE98-5C0BF81B53B5@yale.edu> <8D21E970-E9A5-470C-A8B7-97960BB3C0FA@yale.edu> <488F0F0A.70105@ar.media.kyoto-u.ac.jp> <48900A16.6050401@astraw.com>
Message-ID: 

Thanks! It looks like it's a limited matlab bug... I'll still be making a patch to fix this, but that will definitely need some testing to make sure I didn't inadvertently break other stuff. I'll post more information after I have a patch.

Zach

On Jul 30, 2008, at 10:16 AM, Jonathan Hunt wrote:
> Hi Zachary,
>
> I just tested your test with yesterday's SVN SciPy and both .mat files
> load with no problems in MATLAB 7.6.0.324 (R2008a) (with Python 2.5,
> OS X 10.5.4). Perhaps Mathworks fixed the bug on their end, if that's
> where it is.
>
> Hope that helps.
> Jonny
>
> On Thu, Jul 31, 2008 at 12:00 AM, Zachary Pincus wrote:
>>> This is no doubt a dumb question. But why is it not possible simply to
>>> import the library in place (i.e. no install necessary, just use the
>>> python files as they are laid out in the SVN checkout directory)? Is it
>>> because of the C/Fortran files that need to be compiled?
>>
>> Yes, basically. Pure-python libraries can usually be imported in-place
>> -- though sometimes, too-clever setup.py files can actually make it so
>> the installed folder structure is different from the one in the source
>> directory, and thus an in-place import will fail. (I think this is
>> probably considered bad form though.)
>>
>> In the specific, limited case of the matlab IO sub-sub-library, it is
>> indeed pure python and can be imported in-place -- this was one of my
>> hackish suggestions for testing it.
>>
>> Zach
>
> --
> Jonathan J Hunt
> Homepage: http://www.42quarks.net.nz/wiki/JJH
> (Further contact details there)
> "Physics isn't the most important thing. Love is." Richard Feynman

From pkienzle at nist.gov Wed Jul 30 10:50:04 2008
From: pkienzle at nist.gov (Paul Kienzle)
Date: Wed, 30 Jul 2008 10:50:04 -0400
Subject: [SciPy-user] OS X install success!
+ Question
In-Reply-To: <20080730141040.GD21845@phare.normalesup.org>; from gael.varoquaux@normalesup.org on Wed, Jul 30, 2008 at 04:10:40PM +0200
References: <5BE66D24-E4A2-4076-BF49-B77B1A1459C2@diablotech.com> <5449A4F1-E90B-430B-9223-72AFC7C9DF40@yale.edu> <20080730141040.GD21845@phare.normalesup.org>
Message-ID: <20080730105004.C780476@jazz.ncnr.nist.gov>

On Wed, Jul 30, 2008 at 04:10:40PM +0200, Gael Varoquaux wrote:
> On Wed, Jul 30, 2008 at 09:57:07AM -0400, Zachary Pincus wrote:
> > With easy_install, this sort of transparency is a bit lost. You need
> > to look to see which version of python is invoked in the first line of
> > that script to determine where the files will eventually go. If you
> > have two versions of python, you need two versions of easy_install in
> > different places. I personally find this a bit frustrating...

For Python 2.4 and up you can use:

    $ python -m easy_install --help

This uses the easy_install for the particular python interpreter used to invoke it.

- Paul

From gael.varoquaux at normalesup.org Wed Jul 30 10:58:17 2008
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Wed, 30 Jul 2008 16:58:17 +0200
Subject: [SciPy-user] OS X install success! + Question
In-Reply-To: <2D851907-2E03-4843-9CB1-B02A05F3FA18@yale.edu>
References: <5BE66D24-E4A2-4076-BF49-B77B1A1459C2@diablotech.com> <5449A4F1-E90B-430B-9223-72AFC7C9DF40@yale.edu> <20080730141040.GD21845@phare.normalesup.org> <2D851907-2E03-4843-9CB1-B02A05F3FA18@yale.edu>
Message-ID: <20080730145817.GF21845@phare.normalesup.org>

On Wed, Jul 30, 2008 at 10:44:01AM -0400, Zachary Pincus wrote:
> To the best of my limited knowledge, that won't work too well, since
> you're trying to use version A of Python to install a package for
> version B, but the installation process often (but not always?)
> gathers information from the currently-running version of Python --
> version A -- to make decisions about the installation. (Information
> which may not be correct for version B.)

It seems you are right. Sorry for the noise.

Gaël

From m.starnes05 at imperial.ac.uk Wed Jul 30 11:11:30 2008
From: m.starnes05 at imperial.ac.uk (mark starnes)
Date: Wed, 30 Jul 2008 16:11:30 +0100
Subject: [SciPy-user] [Fwd: 3D interpolation over irregular data]
Message-ID: <489084A2.3000408@imperial.ac.uk>

Hi everyone,

I've looked through the list here and in Numpy-users, and checked the 'net, but can't find an answer to this problem (with luck, I've missed something obvious!).

I've an array of velocities at 80,000 points, irregularly spaced (from a CFD analysis). I'd like to generate the interpolated velocity at any position in the domain, to map the data to an acoustics analysis on a different mesh.

I tried a least squares approach, but the errors are too large using polynomials and trigonometric functions. My conclusion is that I need a nearest-neighbour type interpolation routine.

Is there such a routine in Scipy or Numpy? From inspection of Matlab's functions, 'interp3' may be similar to what I would like, but I can't test. If there's nothing available, I'll end up converting one of the submitted scripts at the URL

http://www.mathworks.com/matlabcentral/fileexchange/loadCategory.do?objectId=14&objectType=Category

Thanks in advance,

Mark.
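The replies below converge on radial basis functions. As a rough sketch of the kind of call involved -- a minimal illustration only, assuming a SciPy recent enough to ship scipy.interpolate.Rbf (older releases carried the class in the sandbox, and the cookbook page linked later in the thread covers the same interface), and with made-up sample data standing in for the CFD nodes:

    import numpy as np
    from scipy.interpolate import Rbf

    # 200 hypothetical scattered sample points and one velocity component
    x, y, z = np.random.rand(3, 200)
    v = x * np.exp(-x**2 - y**2 - z**2)

    # build the interpolant once from the scattered data...
    # (note: a global RBF solves an N x N system, so very large N gets expensive)
    rbf = Rbf(x, y, z, v, function='multiquadric')

    # ...then evaluate it at any position in the domain
    vi = rbf(0.5, 0.5, 0.5)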
From josemaria.alkala at gmail.com Wed Jul 30 11:18:02 2008
From: josemaria.alkala at gmail.com (José María García Pérez)
Date: Wed, 30 Jul 2008 17:18:02 +0200
Subject: [SciPy-user] [Fwd: 3D interpolation over irregular data]
In-Reply-To: <489084A2.3000408@imperial.ac.uk>
References: <489084A2.3000408@imperial.ac.uk>
Message-ID: 

Hi Mark,
Try Radial Basis Functions. I'm not sure whether they are implemented in SciPy/Numpy. I implemented them myself a long time ago, but it's better if there is something already done within the project.

Regards,
José M.

2008/7/30 mark starnes 

> Hi everyone,
>
> I've looked through the list here and in Numpy-users, and checked the
> 'net, but can't find an answer to this problem (with luck, I've missed
> something obvious!).
>
> I've an array of velocities at 80,000 points, irregularly spaced (from a
> CFD analysis). I'd like to generate the interpolated velocity at any
> position in the domain, to map the data to an acoustics analysis on a
> different mesh.
>
> I tried a least squares approach, but the errors are too large using
> polynomials and trigonometric functions. My conclusion is that I need a
> nearest-neighbour type interpolation routine.
>
> Is there such a routine in Scipy or Numpy? From inspection of Matlab's
> functions, 'interp3' may be similar to what I would like, but I can't test.
>
> If there's nothing available, I'll end up converting one of the
> submitted scripts at the URL
>
> http://www.mathworks.com/matlabcentral/fileexchange/loadCategory.do?objectId=14&objectType=Category
>
> Thanks in advance,
>
> Mark.

From rob.clewley at gmail.com Wed Jul 30 11:28:50 2008
From: rob.clewley at gmail.com (Rob Clewley)
Date: Wed, 30 Jul 2008 11:28:50 -0400
Subject: [SciPy-user] Differential Algebraic Equation Solvers
In-Reply-To: 
References: <200807301243.12871.jr@sun.ac.za>
Message-ID: 

Hi,

PyDSTool wraps Hairer and Wanner's Radau 5 code using SWIG. This supports DAEs. An example is in PyDSTool/tests/DAE_example.py, and you get the advantage of the higher-level Pointset and Trajectory structures for your solutions, with named coordinates (rather than plain arrays with integer indices), if that appeals to you.

-Rob

From nwagner at iam.uni-stuttgart.de Wed Jul 30 11:46:21 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 30 Jul 2008 17:46:21 +0200
Subject: [SciPy-user] Differential Algebraic Equation Solvers
In-Reply-To: 
References: <200807301243.12871.jr@sun.ac.za>
Message-ID: 

On Wed, 30 Jul 2008 11:28:50 -0400 "Rob Clewley" wrote:
> Hi,
>
> PyDSTool wraps Hairer and Wanner's Radau 5 code using SWIG. This
> supports DAEs. An example is in PyDSTool/tests/DAE_example.py, and you
> get the advantage of the higher-level Pointset and Trajectory
> structures for your solutions, with named coordinates (rather than
> plain arrays with integer indices), if that appeals to you.
>
> -Rob

Hi Rob,

Thank you for your reply. How do I install PyDSTool? Unlike other projects such as numpy/scipy/matplotlib, I can't find a setup.py.

/usr/bin/python -i src/PyDSTool/tests/DAE_example.py
Traceback (most recent call last):
File "src/PyDSTool/tests/DAE_example.py", line 12, in ?
from PyDSTool import *
ImportError: No module named PyDSTool

Nils

From zachary.pincus at yale.edu Wed Jul 30 11:55:09 2008
From: zachary.pincus at yale.edu (Zachary Pincus)
Date: Wed, 30 Jul 2008 11:55:09 -0400
Subject: [SciPy-user] [Fwd: 3D interpolation over irregular data]
In-Reply-To: 
References: <489084A2.3000408@imperial.ac.uk>
Message-ID: <45918DC2-F3E5-4F7E-A031-539A659491B1@yale.edu>

Mark:
> My conclusion is that I need a
> nearest-neighbour type interpolation routine.

José:
> Try Radial Basis Functions.

This was my thought as well.

Scipy has a handy implementation:
http://www.scipy.org/Cookbook/RadialBasisFunctions

Zach

From rob.clewley at gmail.com Wed Jul 30 11:59:25 2008
From: rob.clewley at gmail.com (Rob Clewley)
Date: Wed, 30 Jul 2008 11:59:25 -0400
Subject: [SciPy-user] Differential Algebraic Equation Solvers
In-Reply-To: 
References: <200807301243.12871.jr@sun.ac.za>
Message-ID: 

Erm... yes, there isn't one! I know it's a bit clunky, but I have been waiting to add a setup.py until the next major release because I wanted to get some details right about pre-compiling the C and Fortran extensions. Those issues have been ironed out, so it shouldn't be long now...

Anyway, it's still very easy. The instructions at
http://www.cam.cornell.edu/~rclewley/cgi-bin/moin.cgi/GettingStarted
will fill you in, but you basically just put the unzipped directory wherever you like and add it to your pythonpath. There is currently no pre-compiling of the extensions; it will all be done when you first invoke the compiler to build your Radau model. Let me know if you have any other questions or if there's something I should add to the documentation to make things clearer.

Cheers
Rob

On Wed, Jul 30, 2008 at 11:46 AM, Nils Wagner wrote:
> On Wed, 30 Jul 2008 11:28:50 -0400 "Rob Clewley" wrote:
>> Hi,
>>
>> PyDSTool wraps Hairer and Wanner's Radau 5 code using SWIG. This
>> supports DAEs. An example is in PyDSTool/tests/DAE_example.py, and you
>> get the advantage of the higher-level Pointset and Trajectory
>> structures for your solutions, with named coordinates (rather than
>> plain arrays with integer indices), if that appeals to you.
>>
>> -Rob
>
> Hi Rob,
>
> Thank you for your reply. How do I install PyDSTool? Unlike other
> projects such as numpy/scipy/matplotlib, I can't find a setup.py.
>
> /usr/bin/python -i src/PyDSTool/tests/DAE_example.py
> Traceback (most recent call last):
> File "src/PyDSTool/tests/DAE_example.py", line 12, in ?
> from PyDSTool import *
> ImportError: No module named PyDSTool
>
> Nils

--
Robert H. Clewley, Ph. D.
Assistant Professor
Department of Mathematics and Statistics
Georgia State University
720 COE, 30 Pryor St
Atlanta, GA 30303, USA
tel: 404-413-6420 fax: 404-413-6403
http://www2.gsu.edu/~matrhc
http://brainsbehavior.gsu.edu/

From nwagner at iam.uni-stuttgart.de Wed Jul 30 12:11:44 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 30 Jul 2008 18:11:44 +0200
Subject: [SciPy-user] Differential Algebraic Equation Solvers
In-Reply-To: 
References: <200807301243.12871.jr@sun.ac.za>
Message-ID: 

On Wed, 30 Jul 2008 11:59:25 -0400 "Rob Clewley" wrote:
> Erm... yes, there isn't one!
> I know it's a bit clunky, but I have been waiting to add a setup.py
> until the next major release because I wanted to get some details
> right about pre-compiling the C and Fortran extensions. Those issues
> have been ironed out, so it shouldn't be long now...
>
> Anyway, it's still very easy. The instructions at
> http://www.cam.cornell.edu/~rclewley/cgi-bin/moin.cgi/GettingStarted
> will fill you in, but you basically just put the unzipped directory
> wherever you like and add it to your pythonpath. There is currently no
> pre-compiling of the extensions; it will all be done when you first
> invoke the compiler to build your Radau model. Let me know if you have
> any other questions or if there's something I should add to the
> documentation to make things clearer.
>
> Cheers
> Rob

Hi Rob,

I think I followed all these instructions.

echo $PYTHONPATH
/home/nwagner/:/home/nwagner/src/PyDSTool/:/home/nwagner/src/PyDSTool/tests/:/usr/lib/python2.4/site-packages/

nwagner at linux:~> /usr/bin/python -i src/PyDSTool/tests/DAE_example.py
Traceback (most recent call last):
File "src/PyDSTool/tests/DAE_example.py", line 12, in ?
from PyDSTool import *
ImportError: No module named PyDSTool

Am I missing something?

Cheers,
Nils

From rob.clewley at gmail.com Wed Jul 30 12:17:17 2008
From: rob.clewley at gmail.com (Rob Clewley)
Date: Wed, 30 Jul 2008 12:17:17 -0400
Subject: [SciPy-user] Differential Algebraic Equation Solvers
In-Reply-To: 
References: <200807301243.12871.jr@sun.ac.za>
Message-ID: 

Did you also add a PyDSTool.pth into python's site-packages/ directory? It would contain the same PyDSTool-specific lines that are in your $PYTHONPATH, one per line, and not separated by colons.

From nwagner at iam.uni-stuttgart.de Wed Jul 30 12:25:46 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 30 Jul 2008 18:25:46 +0200
Subject: [SciPy-user] Differential Algebraic Equation Solvers
In-Reply-To: 
References: <200807301243.12871.jr@sun.ac.za>
Message-ID: 

On Wed, 30 Jul 2008 12:17:17 -0400 "Rob Clewley" wrote:
> Did you also add a PyDSTool.pth into python's site-packages/ directory?

The file

/usr/lib/python2.4/site-packages/PyDSTool.pth

consists of two lines

/home/nwagner/src/PyDSTool/
/home/nwagner/src/PyDSTool/tests/

Nils

From rob.clewley at gmail.com Wed Jul 30 12:36:05 2008
From: rob.clewley at gmail.com (Rob Clewley)
Date: Wed, 30 Jul 2008 12:36:05 -0400
Subject: [SciPy-user] Differential Algebraic Equation Solvers
In-Reply-To: 
References: <200807301243.12871.jr@sun.ac.za>
Message-ID: 

As I say in the wiki in the Windows install section (I must add it to the *nix section!) of GettingStarted, my experience is that you'll need to add

/home
/home/nwagner
/home/nwagner/src

before these other lines. I don't understand why this is necessary from a technical perspective. It seems daft to me.

On Wed, Jul 30, 2008 at 12:25 PM, Nils Wagner wrote:
> On Wed, 30 Jul 2008 12:17:17 -0400 "Rob Clewley" wrote:
>> Did you also add a PyDSTool.pth into python's site-packages/ directory?
>
> The file
>
> /usr/lib/python2.4/site-packages/PyDSTool.pth
>
> consists of two lines
>
> /home/nwagner/src/PyDSTool/
> /home/nwagner/src/PyDSTool/tests/
>
> Nils

--
Robert H. Clewley, Ph. D.
Assistant Professor
Department of Mathematics and Statistics
Georgia State University
720 COE, 30 Pryor St
Atlanta, GA 30303, USA
tel: 404-413-6420 fax: 404-413-6403
http://www2.gsu.edu/~matrhc
http://brainsbehavior.gsu.edu/

From nwagner at iam.uni-stuttgart.de Wed Jul 30 12:47:41 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 30 Jul 2008 18:47:41 +0200
Subject: [SciPy-user] Differential Algebraic Equation Solvers
In-Reply-To: 
References: <200807301243.12871.jr@sun.ac.za>
Message-ID: 

On Wed, 30 Jul 2008 12:36:05 -0400 "Rob Clewley" wrote:
> As I say in the wiki in the Windows install section (I must add it to
> the *nix section!) of GettingStarted, my experience is that you'll
> need to add
>
> /home
> /home/nwagner
> /home/nwagner/src
>
> before these other lines. I don't understand why this is necessary
> from a technical perspective. It seems daft to me.

Strange. Now it works fine for me, but it's very slow...

Thank you again

Nils

From rob.clewley at gmail.com Wed Jul 30 13:03:24 2008
From: rob.clewley at gmail.com (Rob Clewley)
Date: Wed, 30 Jul 2008 13:03:24 -0400
Subject: [SciPy-user] Differential Algebraic Equation Solvers
In-Reply-To: 
References: 
Message-ID: 

>> Strange. Now it works fine for me, but it's very slow...

I assume you mean the time to be able to run the radau test script? If so, that's because the current version of the system needs to recompile everything afresh each time, even though 90% of the code is the same. That's a distutils issue, but the version I'll be releasing in the next few months side-steps that problem. Then, only your vector field will need to be re-compiled if you change it, and the core radau and dopri parts will be compiled once only at the original installation time. Hence the complicated issues getting setup.py working.

I think you'll find that once the model is compiled, it integrates solutions *very* quickly. Radau is very efficient, and there's only a small overhead in simulating models with it using the additional structures from PyDSTool. If this isn't what you find then there's something else wrong!

-Rob

From nwagner at iam.uni-stuttgart.de Wed Jul 30 13:12:09 2008
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 30 Jul 2008 19:12:09 +0200
Subject: [SciPy-user] Differential Algebraic Equation Solvers
In-Reply-To: 
References: 
Message-ID: 

On Wed, 30 Jul 2008 13:03:24 -0400 "Rob Clewley" wrote:
>> Strange. Now it works fine for me, but it's very slow...
>
> I assume you mean the time to be able to run the radau test script?

Exactly,

> If so, that's because the current version of the system needs to
> recompile everything afresh each time, even though 90% of the code is
> the same. That's a distutils issue, but the version I'll be releasing
> in the next few months side-steps that problem.

Will you post an announcement to this list? I am looking forward to the enhanced version! Is PyDSTool available via svn?

> Then, only your vector field will need to be re-compiled if you change
> it, and the core radau and dopri parts will be compiled once only at
> the original installation time. Hence the complicated issues getting
> setup.py working.
>
> I think you'll find that once the model is compiled, it integrates
> solutions *very* quickly. Radau is very efficient, and there's only a
> small overhead in simulating models with it using the additional
> structures from PyDSTool. If this isn't what you find then there's
> something else wrong!
>
> -Rob

Nils

From bnuttall at uky.edu Wed Jul 30 13:41:15 2008
From: bnuttall at uky.edu (Nuttall, Brandon C)
Date: Wed, 30 Jul 2008 13:41:15 -0400
Subject: [SciPy-user] Robust Statistics
In-Reply-To: 
References: <200807301243.12871.jr@sun.ac.za>
Message-ID: 

Folks,

I have some rate/time data I'm in the process of analyzing. The data represent the natural gas produced from a well over time. The data are fit best by either an exponential model (a least squares best fit of the log of the rate data, using just qi and di; see below) or a "hyperbolic" model:

qt = qi * (1 - b*di*t)**(-1.0/b)

qt = rate at time t
qi = initial rate (calculated)
b = decline exponent (rate of change of the decline rate, calculated)
di = initial decline rate (calculated)
t = time

The calculated parameters are found using the optimization routine scipy.optimize.fmin_tnc(). I'm already rejecting some best-fit results because either the correlation coefficient is not statistically significant or the di (slope) is not significantly different from 0.

The image I've attached shows two things. An exponential best fit is shown by the solid line and a hyperbolic best fit by the dotted line. It is common practice in analyzing this type of data to begin the best-fit analysis at an early time point where the data actually start to "decline"; thus, the hyperbolic best fit started at time 2 and the first rate point (rate = about 1350) is ignored. The other thing the graph shows is that there appear to be outliers.

My question is, what Python resources are there for testing for outliers and robust statistics?

Is RANSAC appropriate for this? Note that when I run ransac.py from the cookbook, I get:

Traceback (most recent call last):
--Snip other traceback info---
File "C:\Documents and Settings\bnuttall\Desktop\ransac.py", line 132, in fit
A = numpy.vstack([data[:,i] for i in self.input_columns]).T
AttributeError: 'numpy.ndarray' object has no attribute 'T'

I presume the T attribute is supposed to mean the transpose operator, and I have changed the offending statements (there are 4 total) to the form:

A = numpy.vstack([data[:,i] for i in self.input_columns]).transpose()

Thanks.

Brandon Nuttall
-------------- next part --------------
A non-text attachment was scrubbed...
Name: R0114442_raw.png
Type: image/png
Size: 55799 bytes
Desc: R0114442_raw.png
URL: 

From stefan at sun.ac.za Wed Jul 30 14:17:47 2008
From: stefan at sun.ac.za (Stéfan van der Walt)
Date: Wed, 30 Jul 2008 20:17:47 +0200
Subject: [SciPy-user] Robust Statistics
In-Reply-To: 
References: 
Message-ID: <9457e7c80807301117x7070364bx925a52afb01c1e77@mail.gmail.com>

2008/7/30 Nuttall, Brandon C :
> I presume the T attribute is supposed to mean the transpose operator, and I
> have changed the offending statements (there are 4 total) to the form:

Which version of NumPy do you have? This was changed very long ago.

Cheers
Stéfan

From gael.varoquaux at normalesup.org Wed Jul 30 14:36:52 2008
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Wed, 30 Jul 2008 20:36:52 +0200
Subject: [SciPy-user] Running SciPy SVN without installing Was: Matlab IO -- can others with matlab test this bug?
In-Reply-To: 
References: <488F2353.6040106@relativita.com> <488F2181.3040307@ar.media.kyoto-u.ac.jp>
Message-ID: <20080730183652.GJ32733@phare.normalesup.org>

On Wed, Jul 30, 2008 at 12:15:38AM +1000, Jonathan Hunt wrote:
> However. Could someone tell me how most SciPy developers work on the
> library.
I don't work on scipy, but on other projects I use a combination of virtualenv and the setuptools develop feature, giving it the right prefix. It does add some overhead. I rerun python setup.py develop when there are any changes to the C code.

I do this because I need to be able to run different branches of the same project at the same time, to check for differences.

Prabhu has explained the way he uses virtualenv here:
http://prabhuramachandran.blogspot.com/2008/03/using-virtualenv-under-linux.html

I do things a bit differently. One of the big differences is that instead of python setup.py install, I use python setup.py develop.

All these approaches tend to have their own quirks, so you have to get to know them.

Gaël

From gael.varoquaux at normalesup.org Wed Jul 30 14:38:37 2008
From: gael.varoquaux at normalesup.org (Gael Varoquaux)
Date: Wed, 30 Jul 2008 20:38:37 +0200
Subject: [SciPy-user] Running SciPy SVN without installing Was: Matlab IO -- can others with matlab test this bug?
In-Reply-To: <488F2479.8020706@ar.media.kyoto-u.ac.jp>
References: <200807291015.24171.pgmdevlist@gmail.com> <488F2479.8020706@ar.media.kyoto-u.ac.jp>
Message-ID: <20080730183837.GK32733@phare.normalesup.org>

On Tue, Jul 29, 2008 at 11:08:57PM +0900, David Cournapeau wrote:
> Really, stow is nice. The
> only problem is that it does not work on windows.
http://www.netlib.org/toms/ I also have a program implementing the Hubble Telescope's "Drizzle" algorithm in 3D, which is similar to nearest neighbour or linear, but unfortunately it's written in an obscure language that would make it difficult to use from Python unless you are prepared to install STScI's PyRAF. I also suspect it will only work well for data that are mildly irregular and it may be fiddly to set up for your application, but anyway, just so you have the option... The paper is at http://xxx.lanl.gov/abs/astro-ph/9708242. Are the points really irregular in 3D, or just in 1-2 of the dimensions? If any dimension is regularly spaced, you might be able to simplify the problem by resampling in 1-2D at a time? Stefan might also have suggestions ;-). But maybe the radial basis functions will do the trick... I'll have to look at those. Cheers, James. From m.starnes05 at imperial.ac.uk Wed Jul 30 16:15:32 2008 From: m.starnes05 at imperial.ac.uk (mark starnes) Date: Wed, 30 Jul 2008 21:15:32 +0100 Subject: [SciPy-user] [Fwd: 3D interpolation over irregular data] In-Reply-To: <45918DC2-F3E5-4F7E-A031-539A659491B1@yale.edu> References: <489084A2.3000408@imperial.ac.uk> <45918DC2-F3E5-4F7E-A031-539A659491B1@yale.edu> Message-ID: <4890CBE4.6000908@imperial.ac.uk> Jos?, Zachary, I seem to be having some success running the example from the link posted! Thanks for your rapid and helpful responses. Best regards, Mark. Zachary Pincus wrote: > Mark: >> My conclusion is that I need a >> nearest-neighbour type interpolation routine. > > Jos?: >> Try Radial Basis Functions. > > This was my thought as well. > > Scipy has a handy implementation: > http://www.scipy.org/Cookbook/RadialBasisFunctions > > Zach > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From m.starnes05 at imperial.ac.uk Wed Jul 30 16:32:31 2008 From: m.starnes05 at imperial.ac.uk (mark starnes) Date: Wed, 30 Jul 2008 21:32:31 +0100 Subject: [SciPy-user] [Fwd: 3D interpolation over irregular data] In-Reply-To: <4890B701.40402@gemini.edu> References: <489084A2.3000408@imperial.ac.uk> <4890B701.40402@gemini.edu> Message-ID: <4890CFDF.2060304@imperial.ac.uk> Hi James, I'm having a go with the radial basis functions, as they seem to be working with my Scipy (surprisingly!). Thanks for your suggestions; I'll move onto them if I fail with this approach. The points are partly on a structured grid, though not regular, and partly an unstructured grid. I was hoping not to treat the regions differently. Thanks for your response, though. I'll be sure to write back if the radial functions don't work. Best regards, Mark. James Turner wrote: > Hi Mark, > >> I've an array of velocities at 80,000 points, irregularly spaced (from a >> CFD analysis). I'd like to generate the interpolated velocity at any >> position in the domain, to map the data to an acoustics analysis on a >> different mesh. >> >> I tried a least squares approach but the errors are too large using >> polynomials and trigonometric functions. My conclusion is that I need a >> nearest-neighbour type interpolation routine. > > If you don't have any joy with the other suggestions, there are some > useful routines by Renka in Netlib, at least one of which is 3D, > though I believe they are also piecewise polynomials (and in ForTran, > of course, so need wrapping). 
http://www.netlib.org/toms/

I also have a program implementing the Hubble Telescope's "Drizzle" algorithm in 3D, which is similar to nearest neighbour or linear, but unfortunately it's written in an obscure language that would make it difficult to use from Python unless you are prepared to install STScI's PyRAF. I also suspect it will only work well for data that are mildly irregular, and it may be fiddly to set up for your application, but anyway, just so you have the option... The paper is at http://xxx.lanl.gov/abs/astro-ph/9708242.

Are the points really irregular in 3D, or just in 1-2 of the dimensions? If any dimension is regularly spaced, you might be able to simplify the problem by resampling in 1-2D at a time?

Stefan might also have suggestions ;-). But maybe the radial basis functions will do the trick... I'll have to look at those.

Cheers,

James.

From m.starnes05 at imperial.ac.uk Wed Jul 30 16:15:32 2008
From: m.starnes05 at imperial.ac.uk (mark starnes)
Date: Wed, 30 Jul 2008 21:15:32 +0100
Subject: [SciPy-user] [Fwd: 3D interpolation over irregular data]
In-Reply-To: <45918DC2-F3E5-4F7E-A031-539A659491B1@yale.edu>
References: <489084A2.3000408@imperial.ac.uk> <45918DC2-F3E5-4F7E-A031-539A659491B1@yale.edu>
Message-ID: <4890CBE4.6000908@imperial.ac.uk>

José, Zachary,

I seem to be having some success running the example from the link posted! Thanks for your rapid and helpful responses.

Best regards,

Mark.

Zachary Pincus wrote:
> Mark:
>> My conclusion is that I need a
>> nearest-neighbour type interpolation routine.
>
> José:
>> Try Radial Basis Functions.
>
> This was my thought as well.
>
> Scipy has a handy implementation:
> http://www.scipy.org/Cookbook/RadialBasisFunctions
>
> Zach

From m.starnes05 at imperial.ac.uk Wed Jul 30 16:32:31 2008
From: m.starnes05 at imperial.ac.uk (mark starnes)
Date: Wed, 30 Jul 2008 21:32:31 +0100
Subject: [SciPy-user] [Fwd: 3D interpolation over irregular data]
In-Reply-To: <4890B701.40402@gemini.edu>
References: <489084A2.3000408@imperial.ac.uk> <4890B701.40402@gemini.edu>
Message-ID: <4890CFDF.2060304@imperial.ac.uk>

Hi James,

I'm having a go with the radial basis functions, as they seem to be working with my Scipy (surprisingly!). Thanks for your suggestions; I'll move on to them if I fail with this approach.

The points are partly on a structured grid, though not regular, and partly an unstructured grid. I was hoping not to treat the regions differently. Thanks for your response, though. I'll be sure to write back if the radial functions don't work.

Best regards,

Mark.

James Turner wrote:
> Hi Mark,
>
>> I've an array of velocities at 80,000 points, irregularly spaced (from a
>> CFD analysis). I'd like to generate the interpolated velocity at any
>> position in the domain, to map the data to an acoustics analysis on a
>> different mesh.
>>
>> I tried a least squares approach, but the errors are too large using
>> polynomials and trigonometric functions. My conclusion is that I need a
>> nearest-neighbour type interpolation routine.
>
> If you don't have any joy with the other suggestions, there are some
> useful routines by Renka in Netlib, at least one of which is 3D,
> though I believe they are also piecewise polynomials (and in Fortran,
> of course, so they need wrapping).
Loaded symbols for /astro/iraf/i686/gempylocal/lib/python2.5/site-packages/scipy/cluster/_vq.so #0 0xb61359c0 in dqawfe_ (f=0xb612a6e8 , a=0xbfff94a0, omega=0xbfff94c0, integr=0xbfff94a8, epsabs=0xbfff94d8, limlst=0xbfff94b4, limit=0xbfff94b8, maxp1=0xbfff94bc, result=0xbfff94d0, abserr=0xbfff94c8, neval=0xbfff9484, ier=0xbfff9488, lst=0xbfff948c, alist=0x88024a8, blist=0x8802640, rlist=0x8805698, elist=0x8805830, iord=0x8719b70, nnlog=0x87566b8, chebmo=0x8802f80) at scipy/integrate/quadpack/dqawfe.f:267 267 10 l = dabs(omega) In addition, I have a load of import failures at the start of the tests, due to a missing symbol (zfftnd_fftw) in _fftpack.so. From a Google search, it looks like this is a known SciPy problem that David has fixed in the SVN repository. Is there a workaround for the public SciPy release? I don't really understand what makes this show up on my machine or whether everyone sees it (but then presumably there would have been a SciPy 0.6.1 release?). It looks like one of the files is just missing from the compilation. Thanks for any suggestions, James. From jjh at 42quarks.com Wed Jul 30 19:50:52 2008 From: jjh at 42quarks.com (Jonathan Hunt) Date: Thu, 31 Jul 2008 09:50:52 +1000 Subject: [SciPy-user] Running SciPy SVN without installing Was: Matlab IO -- can others with matlab test this bug? In-Reply-To: <20080730184102.GL32733@phare.normalesup.org> References: <200807291015.24171.pgmdevlist@gmail.com> <488F2479.8020706@ar.media.kyoto-u.ac.jp> <20080730184102.GL32733@phare.normalesup.org> Message-ID: Thanks for the help. I tried using develop mode in SciPy but since SciPy just uses distutils it doesn't have develop mode. Is there an easy way to make it use setuptools? Thanks, Jonny On Thu, Jul 31, 2008 at 4:41 AM, Gael Varoquaux wrote: > On Wed, Jul 30, 2008 at 12:32:08AM +1000, Jonathan Hunt wrote: >> Thanks for all the comments. If anyone uses the develop mode of >> setuptools or other methods I'd be interested to hear. > > I do. I am not sure what to tell you about it. Just be aware that when > you use setuptools, your import path gets messed up (print sys.path to > reaize that), and as a result import priorities get scrambled. If you > know that and keep watch on what is important, when you have several > version of different packages installed, you can usual understand what is > going on. > > Ga?l > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -- Jonathan J Hunt Homepage: http://www.42quarks.net.nz/wiki/JJH (Further contact details there) "Physics isn't the most important thing. Love is." Richard Feynman From gael.varoquaux at normalesup.org Wed Jul 30 20:09:39 2008 From: gael.varoquaux at normalesup.org (Gael Varoquaux) Date: Thu, 31 Jul 2008 02:09:39 +0200 Subject: [SciPy-user] Running SciPy SVN without installing Was: Matlab IO -- can others with matlab test this bug? In-Reply-To: References: <200807291015.24171.pgmdevlist@gmail.com> <488F2479.8020706@ar.media.kyoto-u.ac.jp> <20080730184102.GL32733@phare.normalesup.org> Message-ID: <20080731000939.GG27172@phare.normalesup.org> On Thu, Jul 31, 2008 at 09:50:52AM +1000, Jonathan Hunt wrote: > Thanks for the help. I tried using develop mode in SciPy but since > SciPy just uses distutils it doesn't have develop mode. Is there an > easy way to make it use setuptools? Use setupegg.py instead of setup.py. 
Ga?l From m.starnes05 at imperial.ac.uk Thu Jul 31 04:13:26 2008 From: m.starnes05 at imperial.ac.uk (mark starnes) Date: Thu, 31 Jul 2008 09:13:26 +0100 Subject: [SciPy-user] [Fwd: 3D interpolation over irregular data] In-Reply-To: <9457e7c80807301411p68d04884ne0a02b6d4f30828a@mail.gmail.com> References: <489084A2.3000408@imperial.ac.uk> <9457e7c80807301411p68d04884ne0a02b6d4f30828a@mail.gmail.com> Message-ID: <48917426.5000901@imperial.ac.uk> Hi St?fan, thanks for the tip. I tried this approach before, but couldn't get it to work, due to my installation (I broke it, got it working again, then wasn't brave enough to tinker). However, I got the radial basis functions working by modifying my PYTHONPATH, rather than with a rebuild, so may try the Delauney / natgrid approaches too. The radial basis function seems to have trouble with more than 5000 points so I may need other options. Thanks for taking the time to help! Best regards, Mark. St?fan van der Walt wrote: > Hi Mark > > 2008/7/30 mark starnes : >> Hi everyone, >> >> I've looked through the list here and in Numpy-users, and checked the >> 'net but can't find an answer to this problem (with luck, I've missed >> something obvious!). >> >> I've an array of velocities at 80,000 points, irregularly spaced (from a >> CFD analysis). I'd like to generate the interpolated velocity at any >> position in the domain, to map the data to an acoustics analysis on a >> different mesh. >> >> I tried a least squares approach but the errors are too large using >> polynomials and trigonometric functions. My conclusion is that I need a >> nearest-neighbour type interpolation routine. > > Also take a look at > > http://www.scipy.org/Cookbook/Matplotlib/Gridding_irregularly_spaced_data > > Robert Kern's delaunay package does natural neighbour interpolation, IIRC. > > Regards > St?fan > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From m.starnes05 at imperial.ac.uk Thu Jul 31 04:58:05 2008 From: m.starnes05 at imperial.ac.uk (mark starnes) Date: Thu, 31 Jul 2008 09:58:05 +0100 Subject: [SciPy-user] [Fwd: 3D interpolation over irregular data] In-Reply-To: <9457e7c80807301411p68d04884ne0a02b6d4f30828a@mail.gmail.com> References: <489084A2.3000408@imperial.ac.uk> <9457e7c80807301411p68d04884ne0a02b6d4f30828a@mail.gmail.com> Message-ID: <48917E9D.207@imperial.ac.uk> Hi again, St?fan. Is Robert Kern's package limited to two-dimensional data? I've had a look and can't see any three-dimensional options. Best regards, Mark. St?fan van der Walt wrote: > Hi Mark > > 2008/7/30 mark starnes : >> Hi everyone, >> >> I've looked through the list here and in Numpy-users, and checked the >> 'net but can't find an answer to this problem (with luck, I've missed >> something obvious!). >> >> I've an array of velocities at 80,000 points, irregularly spaced (from a >> CFD analysis). I'd like to generate the interpolated velocity at any >> position in the domain, to map the data to an acoustics analysis on a >> different mesh. >> >> I tried a least squares approach but the errors are too large using >> polynomials and trigonometric functions. My conclusion is that I need a >> nearest-neighbour type interpolation routine. > > Also take a look at > > http://www.scipy.org/Cookbook/Matplotlib/Gridding_irregularly_spaced_data > > Robert Kern's delaunay package does natural neighbour interpolation, IIRC. 
> > Regards > St?fan > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > From josemaria.alkala at gmail.com Thu Jul 31 05:18:50 2008 From: josemaria.alkala at gmail.com (=?ISO-8859-1?Q?Jos=E9_Mar=EDa_Garc=EDa_P=E9rez?=) Date: Thu, 31 Jul 2008 11:18:50 +0200 Subject: [SciPy-user] [Fwd: 3D interpolation over irregular data] In-Reply-To: <48917E9D.207@imperial.ac.uk> References: <489084A2.3000408@imperial.ac.uk> <9457e7c80807301411p68d04884ne0a02b6d4f30828a@mail.gmail.com> <48917E9D.207@imperial.ac.uk> Message-ID: Mark, >From my experience working with RBF, they work pretty well even when you use few points for the interpolation. They track non linear behavior very well. I have worked with RBF with very big FEM models (200000-500000 grid points) and with more than 3D (in other disciplines), but I don't take all the points at the same. What I would do is to use for example the 100 nearest points to the geometric point where you want to interpolate (probably with 10 would be enough). That's something you can try: test using 10, 20, 50, 100 points, and you will see that the difference is small pretty soon, and for sure smaller that the error you may expect from a CFD simulation. Hope this tip helps! Jos? M. 2008/7/31 mark starnes > Hi again, St?fan. > > Is Robert Kern's package limited to two-dimensional data? I've had a > look and can't see any three-dimensional options. > > Best regards, > > Mark. > > St?fan van der Walt wrote: > > Hi Mark > > > > 2008/7/30 mark starnes : > >> Hi everyone, > >> > >> I've looked through the list here and in Numpy-users, and checked the > >> 'net but can't find an answer to this problem (with luck, I've missed > >> something obvious!). > >> > >> I've an array of velocities at 80,000 points, irregularly spaced (from a > >> CFD analysis). I'd like to generate the interpolated velocity at any > >> position in the domain, to map the data to an acoustics analysis on a > >> different mesh. > >> > >> I tried a least squares approach but the errors are too large using > >> polynomials and trigonometric functions. My conclusion is that I need a > >> nearest-neighbour type interpolation routine. > > > > Also take a look at > > > > > http://www.scipy.org/Cookbook/Matplotlib/Gridding_irregularly_spaced_data > > > > Robert Kern's delaunay package does natural neighbour interpolation, > IIRC. > > > > Regards > > St?fan > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From m.starnes05 at imperial.ac.uk Thu Jul 31 07:12:10 2008 From: m.starnes05 at imperial.ac.uk (mark starnes) Date: Thu, 31 Jul 2008 12:12:10 +0100 Subject: [SciPy-user] [Fwd: 3D interpolation over irregular data] In-Reply-To: References: <489084A2.3000408@imperial.ac.uk> <9457e7c80807301411p68d04884ne0a02b6d4f30828a@mail.gmail.com> <48917E9D.207@imperial.ac.uk> Message-ID: <48919E0A.2020505@imperial.ac.uk> Hi Jos?, Thanks for your comments. Knowing that other people split the interpolation into regions is useful. 
I'll be attempting an approach similar to that you suggested; splitting the domain, not around each desired point but into regions containing sets of points. I didn't expect the effect you mention about accuracy being related to the number of points. Looks like I need to read up on the theory.... Thanks again, Mark. Jos? Mar?a Garc?a P?rez wrote: > Mark, > From my experience working with RBF, they work pretty well even when you > use few points for the interpolation. They track non linear behavior > very well. > I have worked with RBF with very big FEM models (200000-500000 grid > points) and with more than 3D (in other disciplines), but I don't take > all the points at the same. What I would do is to use for example the > 100 nearest points to the geometric point where you want to interpolate > (probably with 10 would be enough). That's something you can try: test > using 10, 20, 50, 100 points, and you will see that the difference is > small pretty soon, and for sure smaller that the error you may expect > from a CFD simulation. > Hope this tip helps! > Jos? M. > > > 2008/7/31 mark starnes > > > Hi again, St?fan. > > Is Robert Kern's package limited to two-dimensional data? I've had a > look and can't see any three-dimensional options. > > Best regards, > > Mark. > > St?fan van der Walt wrote: > > Hi Mark > > > > 2008/7/30 mark starnes >: > >> Hi everyone, > >> > >> I've looked through the list here and in Numpy-users, and checked the > >> 'net but can't find an answer to this problem (with luck, I've missed > >> something obvious!). > >> > >> I've an array of velocities at 80,000 points, irregularly spaced > (from a > >> CFD analysis). I'd like to generate the interpolated velocity at any > >> position in the domain, to map the data to an acoustics analysis on a > >> different mesh. > >> > >> I tried a least squares approach but the errors are too large using > >> polynomials and trigonometric functions. My conclusion is that I > need a > >> nearest-neighbour type interpolation routine. > > > > Also take a look at > > > > > http://www.scipy.org/Cookbook/Matplotlib/Gridding_irregularly_spaced_data > > > > Robert Kern's delaunay package does natural neighbour > interpolation, IIRC. > > > > Regards > > St?fan > > _______________________________________________ > > SciPy-user mailing list > > SciPy-user at scipy.org > > http://projects.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user > > > > ------------------------------------------------------------------------ > > _______________________________________________ > SciPy-user mailing list > SciPy-user at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-user From david at ar.media.kyoto-u.ac.jp Thu Jul 31 08:07:20 2008 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 31 Jul 2008 21:07:20 +0900 Subject: [SciPy-user] Running SciPy SVN without installing Was: Matlab IO -- can others with matlab test this bug? In-Reply-To: <20080730183837.GK32733@phare.normalesup.org> References: <200807291015.24171.pgmdevlist@gmail.com> <488F2479.8020706@ar.media.kyoto-u.ac.jp> <20080730183837.GK32733@phare.normalesup.org> Message-ID: <4891AAF8.9030906@ar.media.kyoto-u.ac.jp> Gael Varoquaux wrote: > On Tue, Jul 29, 2008 at 11:08:57PM +0900, David Cournapeau wrote: > >> Really, stow is nice. The >> only problem is that it does not work on windows. 
>> > > Can I have two shells, one running one environment, and the other > another, say to compare results with stow? > Sure, just get one stow environment per shell, and set PYTHONPATH differently for each one (for example by using an alias pythonfoo='PYTHONPATH=foo python'). The great advantage of stow is its simplicity. If you screw up something, you just remove the directory, there is no side effect at all. In a way, stow fixes one of the things which I really hate with unix, the FHS. cheers, David From stefan at sun.ac.za Thu Jul 31 08:32:38 2008 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Thu, 31 Jul 2008 14:32:38 +0200 Subject: [SciPy-user] [Fwd: 3D interpolation over irregular data] In-Reply-To: <48917E9D.207@imperial.ac.uk> References: <489084A2.3000408@imperial.ac.uk> <9457e7c80807301411p68d04884ne0a02b6d4f30828a@mail.gmail.com> <48917E9D.207@imperial.ac.uk> Message-ID: <9457e7c80807310532j4ae692d8qf038fbf8162f3004@mail.gmail.com> 2008/7/31 mark starnes : > Is Robert Kern's package limited to two-dimensional data? I've had a > look and can't see any three-dimensional options. Sorry, Mark, my mistake, it doesn't. The only Python wrapper for 3-dimensional Delaunay I'm aware of is from enthought.tvtk.api import tvtk d = tvtk.Delaunay3D() Unfortunately, I don't have much more detail on how to use it. Regards St?fan From bsouthey at gmail.com Thu Jul 31 09:47:14 2008 From: bsouthey at gmail.com (Bruce Southey) Date: Thu, 31 Jul 2008 08:47:14 -0500 Subject: [SciPy-user] Robust Statistics In-Reply-To: References: <200807301243.12871.jr@sun.ac.za> Message-ID: <4891C262.9020002@gmail.com> Nuttall, Brandon C wrote: > Folks, > > I have some rate/time data I'm in the process of analyzing. The data represent the natural gas produced from a well over time. The data are fit best by either an exponential model (least squares best fit of log of rate data, just qi and di see below) or a "hyperbolic" model: > > qt = qi *(1-b*di*t)**(-1.0/b) > > qt = rate at time t > qi = initial rate (calculated) > b = decline exponent (rate of change of the decline rate, calculated) > di = initial decline rate (calculated) > t = time > > The calculated parameters are found using the optimization routine scipy.optimize.fmin_tnc(). I'm already rejecting some best fit results because either the correlation coefficient is not statistically significant or the di (slope) is not significantly different from 0. > > The image I've attached shows two things. An exponential best fit is shown by the solid line and a hyperbolic best fit by the dotted line. It is common practice in analyzing this type of data to begin the best fit analysis at an early time point where the data actually start to "decline"; thus, the hyperbolic best fit started at time 2 and the first rate point (rate = about 1350) is ignored. The other thing the graph shows is that there appear to be outliers. > > My question is, what Python resources are there for testing for outliers and robust statistics? > > Is RANSAC appropriate for this? 
Note that when I run ransac.py from the cookbook, I get:
>
> Traceback (most recent call last):
> --Snip other traceback info---
> File "C:\Documents and Settings\bnuttall\Desktop\ransac.py", line 132, in fit
> A = numpy.vstack([data[:,i] for i in self.input_columns]).T
> AttributeError: 'numpy.ndarray' object has no attribute 'T'
>
> I presume the T attribute is supposed to mean the transpose operator, and I
> have changed the offending statements (there are 4 total) to the form:
>
> A = numpy.vstack([data[:,i] for i in self.input_columns]).transpose()
>
> Thanks.
>
> Brandon Nuttall

Hi,

First, what do you mean by 'outlier'? Please see, for example, http://en.wikipedia.org/wiki/Outlier because outliers fully depend on the (implicit or explicit) model fitted. Note this is really defined for linear models, not the nonlinear models that you are fitting.

Second, NumPy provides the tools to fit various models, but you need to implement your own statistics, especially for nonlinear models. (In particular, I would strongly suggest looking at the likelihood ratio test and the Bayesian information criterion for model selection.) Really, your best bet is to use R directly or interface to R with Rpy. That way you also have access to many nonlinear statistical routines that can fit that type of data.

If you do not need a parametric model, try fitting splines. These may be more suited to handle the variability near t=60, although that may be more related to data collection and time scale.

Regards
Bruce

From lists at vrbka.net Thu Jul 31 10:43:22 2008
From: lists at vrbka.net (Lubos Vrbka)
Date: Thu, 31 Jul 2008 16:43:22 +0200
Subject: [SciPy-user] sine transformation weirdness
Message-ID: <4891CF8A.2040505@vrbka.net>

hi guys,

while using the sine transformation from sandbox
http://projects.scipy.org/scipy/scipy/browser/trunk/scipy/sandbox/image/transforms.py
i encountered some confusion with respect to the constants involved in the transformation process.

to test it, i tried to do a discrete sine transformation of f(r) = exp(-r). analytically, the result is

f(k) = sqrt(2/pi) * k/(1+k^2)

for the discrete transformation i used dr=0.1, 1000 points, dk = pi/(npoints*dr), since the dst effectively uses double the number of sampling points.

you can see the result in the attached file (hopefully it gets through). the blue line is the analytical result (without the sqrt(2/pi) factor), green is the numerical discrete sine transform, and the red line is the f(k,numerical)/f(k,analytical) ratio. in case the figure doesn't make it through, the red line looks like one half of an inverted parabola. for r=0, its value is ~10. it reaches 0 at the distance corresponding to the last sampling point.

apparently, there is some non-constant factor present here - i just don't know what factor it might be. it has to come from the FFT used inside the DST routine... i'd be very glad for any hint in this respect.

thanks in advance! with best regards,

lubos

--
Lubos _@_"
http://www.lubos.vrbka.net
-------------- next part --------------
A non-text attachment was scrubbed...
Name: fst_exp_r.png
Type: image/png
Size: 14188 bytes
Desc: not available
URL: 

From jturner at gemini.edu Thu Jul 31 18:06:50 2008
From: jturner at gemini.edu (James Turner)
Date: Thu, 31 Jul 2008 18:06:50 -0400
Subject: [SciPy-user] Core dump and import errors during scipy.test()
In-Reply-To: <4890F984.6080101@gemini.edu>
References: <4890F984.6080101@gemini.edu>
Message-ID: <4892377A.3040803@gemini.edu>

> #0 0xb61359c0 in dqawfe_ (f=0xb612a6e8 , a=0xbfff94a0,
> omega=0xbfff94c0, integr=0xbfff94a8, epsabs=0xbfff94d8, limlst=0xbfff94b4,
> limit=0xbfff94b8, maxp1=0xbfff94bc, result=0xbfff94d0, abserr=0xbfff94c8,
> neval=0xbfff9484, ier=0xbfff9488, lst=0xbfff948c, alist=0x88024a8,
> blist=0x8802640, rlist=0x8805698, elist=0x8805830, iord=0x8719b70,
> nnlog=0x87566b8, chebmo=0x8802f80) at scipy/integrate/quadpack/dqawfe.f:267
> 267 10 l = dabs(omega)

I had a look at the file scipy/integrate/quadpack/dqawfe.f and realized that whilst "dabs()" is a double-precision built-in function, it is being assigned to an integer, "l", on line 267. I wondered if the core dump is just due to their different lengths, so I changed "l" to be declared as double precision. Now that crash seems to have gone away, but I'm getting another one from the same _vq.so library:

Loaded symbols for /astro/iraf/i686/gempylocal/lib/python2.5/site-packages/scipy/cluster/_vq.so
#0 gesdd_ (min_lwork_=0xbfff6af0, max_lwork_=0xbfff6af0, prefix=0x1,
m=0x5, n=0xbfff6b20, compute_uv_=0x0, __g77_length_prefix=-1249493092)
at scipy/linalg/src/calc_lwork.f:39
39 MNTHR = INT( MINMN*11.0D0 / 6.0D0 )

The last lines from test(verbosity=2) are:

Check the behavior when the inputs and outputs are multidimensional. ... ok
Make sure that appropriate exceptions are raised when invalid valuesFloating exception (core dumped)

(...seems like they weren't?)

Are these kinds of test crashes a common occurrence? It seems like they should be, if they're declaration errors? Or does no-one else build the official public SciPy from source? I still have to look into my other problem with the imports, but it sounds like that one is already fixed in the latest SVN anyway.

Thanks!

James.

From brennan.williams at visualreservoir.com Thu Jul 31 18:12:17 2008
From: brennan.williams at visualreservoir.com (Brennan Williams)
Date: Fri, 01 Aug 2008 10:12:17 +1200
Subject: [SciPy-user] scipy.io.numpyio fwrite - appending or updating an array
Message-ID: <489238C1.7040902@visualreservoir.com>

I have an existing binary file containing numpy array data. It has been created using open, fwrite & close, and I can read the data back using fread.

I want to be able to either append a new array to the end of the file or update an existing array within the file.

I've tried opening the file with a mode of either 'ab+' or 'wb+' and then writing the data using something like....

fd = open(vfname, 'ab+')
if fd:
    filepos = (self.id-1)*self.yarray.size*4
    fd.seek(filepos)
    fwrite(fd, self.yarray.size, self.yarray, 'f')
    fd.close()

When I use a mode of 'ab+' it looks like the data has been written to the file ok (no errors reported), but when I read it back I get my original data.

When I use 'wb+' my updated data gets written and read back ok. But when I reload the file, everything apart from my updated data (i.e. everything before it in the file) is now zero.

The '+' in the mode seems to make no difference. What am I doing wrong?

Thanks

Bren.
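This behaviour matches the standard C stdio semantics that Python file modes inherit, rather than anything specific to numpyio: in 'a' modes, every write goes to the end of the file no matter where you seek, and 'w' modes truncate the file on open (hence the zeros). A mode of 'rb+' opens an existing file for update without truncating, and honours seek() for writes. A minimal sketch under that assumption -- the filename is made up, and the file must already exist, since 'rb+' will not create it:

    import numpy as np
    from scipy.io.numpyio import fwrite   # the same fwrite used in the post above

    arr = np.zeros(100, dtype=np.float32)

    # update the array stored in slot 3 of an existing file of
    # equally-sized float32 arrays (4 bytes per element)
    fd = open('vectors.bin', 'rb+')
    fd.seek(3 * arr.size * 4)
    fwrite(fd, arr.size, arr, 'f')
    fd.close()

    # to append a new array instead, seek to the end first
    fd = open('vectors.bin', 'rb+')
    fd.seek(0, 2)                         # whence=2 means "relative to end of file"
    fwrite(fd, arr.size, arr, 'f')
    fd.close()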
From brennan.williams at visualreservoir.com Thu Jul 31 22:42:15 2008
From: brennan.williams at visualreservoir.com (Brennan Williams)
Date: Fri, 01 Aug 2008 14:42:15 +1200
Subject: [SciPy-user] scipy.io.numpyio fwrite - appending or updating an array
In-Reply-To: <489238C1.7040902@visualreservoir.com>
References: <489238C1.7040902@visualreservoir.com>
Message-ID: <48927807.3080603@visualreservoir.com>

I've tried replacing numpyio with both fopen and now also npfile, but I'm getting the same problem: if I write a numpy array to the file, everything else before that position in the file is now zero. It is as if it is a new file, not an existing one.

Brennan Williams wrote:
> I have an existing binary file containing numpy array data. It has been
> created using open, fwrite & close, and I can read the data back using
> fread.
>
> I want to be able to either append a new array to the end of the file or
> update an existing array within the file.
>
> I've tried opening the file with a mode of either 'ab+' or 'wb+' and
> then writing the data using something like....
>
> fd = open(vfname, 'ab+')
> if fd:
>     filepos = (self.id-1)*self.yarray.size*4
>     fd.seek(filepos)
>     fwrite(fd, self.yarray.size, self.yarray, 'f')
>     fd.close()
>
> When I use a mode of 'ab+' it looks like the data has been written to
> the file ok (no errors reported), but when I read it back I get my
> original data.
>
> When I use 'wb+' my updated data gets written and read back ok. But
> when I reload the file, everything apart from my updated data (i.e.
> everything before it in the file) is now zero.
>
> The '+' in the mode seems to make no difference. What am I doing wrong?
>
> Thanks
>
> Bren.
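Since the same thing happens with fopen and npfile, the mode string passed to the underlying open() is the likely culprit rather than any of the wrappers. A throwaway demonstration of the append-mode behaviour described above, with a made-up filename -- in 'ab+', seek() is honoured for reads, but every write still lands at the end of the file:

    fd = open('demo.bin', 'wb')
    fd.write('\x00' * 8)          # eight zero bytes
    fd.close()

    fd = open('demo.bin', 'ab+')
    fd.seek(0)                    # ask for position 0...
    fd.write('\x01')              # ...but append mode writes at the end anyway
    fd.close()

    print repr(open('demo.bin', 'rb').read())
    # -> '\x00\x00\x00\x00\x00\x00\x00\x00\x01'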