From robert.kern at gmail.com  Sat Jul  1 00:54:24 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 30 Jun 2006 23:54:24 -0500
Subject: [SciPy-dev] Mail lists for Trac tickets and SVN checkins
Message-ID: <44A60000.9070602@gmail.com>

We now have mailing lists set up to receive notifications of changes to Trac
tickets and SVN checkins for both NumPy and SciPy. We do not have Gmane
gateways for them yet.

http://www.scipy.org/Mailing_Lists

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth." -- Umberto Eco

From nwagner at iam.uni-stuttgart.de  Sat Jul  1 04:53:45 2006
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Sat, 01 Jul 2006 10:53:45 +0200
Subject: [SciPy-dev] iterative.py AttributeError: 'module' object has no attribute 'ArrayType'
Message-ID: 

x0,info = linalg.cgs(A,r)
  File "/usr/local/lib/python2.4/site-packages/scipy/linalg/iterative.py", line 507, in cgs
    matvec = get_matvec(A)
  File "/usr/local/lib/python2.4/site-packages/scipy/linalg/iterative.py", line 38, in __init__
    if isinstance(obj, sb.ArrayType):
AttributeError: 'module' object has no attribute 'ArrayType'

>>> A
<4x4 sparse matrix of type ''
        with 4 stored elements (space for 100)
        in Compressed Sparse Column format>
>>> r
array([  7.80423197e-01,   7.08887770e-04,   2.48475465e-01,
         8.27064601e-02])

From stefan at sun.ac.za  Sat Jul  1 06:19:11 2006
From: stefan at sun.ac.za (Stefan van der Walt)
Date: Sat, 1 Jul 2006 12:19:11 +0200
Subject: [SciPy-dev] iterative.py AttributeError: 'module' object has no attribute 'ArrayType'
In-Reply-To: 
References: 
Message-ID: <20060701101911.GA15179@mentat.za.net>

On Sat, Jul 01, 2006 at 10:53:45AM +0200, Nils Wagner wrote:
> x0,info = linalg.cgs(A,r)
>   File "/usr/local/lib/python2.4/site-packages/scipy/linalg/iterative.py", line 507, in cgs
>     matvec = get_matvec(A)
>   File
"/usr/local/lib/python2.4/site-packages/scipy/linalg/iterative.py", > line 38, in __init__ > if isinstance(obj, sb.ArrayType): > AttributeError: 'module' object has no attribute > 'ArrayType' > >>> A > <4x4 sparse matrix of type '' > with 4 stored elements (space for 100) > in Compressed Sparse Column format> > >>> r > array([ 7.80423197e-01, 7.08887770e-04, > 2.48475465e-01, > 8.27064601e-02]) This was fixed on 28/06. St?fan From pmarks at gmail.com Mon Jul 3 15:24:26 2006 From: pmarks at gmail.com (Patrick Marks) Date: Mon, 3 Jul 2006 14:24:26 -0500 Subject: [SciPy-dev] A couple problems of stats.poisson.pmf(k,mu) Message-ID: <90d2b3470607031224vca37b70r1e2360200df41bac@mail.gmail.com> Hi, I've come across a couple problems in stats.poisson.pmf: In [1]: from scipy.stats import * In [2]: from scipy import special In [3]: from numpy import asarray, exp In [4]: poisson.pmf(10,10) Out[4]: array(-0.026867175591182967) # Negative PMF value! In [5]: 10**10 * exp(-10) / asarray(special.gamma(11)) Out[5]: 0.1251100357211333 In [6]: poisson.pmf(10,10.0) Out[6]: array(0.1251100357211333) # OK when mu is not an integer In [7]: poisson.pmf(150,150.0) Out[7]: array(1.#INF) # Some sort of arithmetic overflow The overflow could be avoided by computing the PMF in log space: >poisson_gen._pmf(self, k, mu): > logPk = k * log(mu) - mu - arr(special.gammaln(k+1)) > return exp(logPk) which works to much higher numbers, and has a fractional deviation of <10e-13 and typically == 0.0 from the current implementation in my tests. I'm not sure why the pmf is wrong when mu is an integer, although i guess it's an easy fix. Should i make a ticket in trac for this? 
Patrick Marks
University of Illinois

From robert.kern at gmail.com  Mon Jul  3 15:31:22 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Mon, 03 Jul 2006 14:31:22 -0500
Subject: [SciPy-dev] A couple problems of stats.poisson.pmf(k,mu)
In-Reply-To: <90d2b3470607031224vca37b70r1e2360200df41bac@mail.gmail.com>
References: <90d2b3470607031224vca37b70r1e2360200df41bac@mail.gmail.com>
Message-ID: <44A9708A.1080600@gmail.com>

Patrick Marks wrote:
> Should I make a ticket in Trac for this?

Yes, please.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth." -- Umberto Eco

From nwagner at iam.uni-stuttgart.de  Wed Jul  5 08:59:38 2006
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 05 Jul 2006 14:59:38 +0200
Subject: [SciPy-dev] Bug in scipy.special.jv
Message-ID: <44ABB7BA.8010401@iam.uni-stuttgart.de>

I didn't expect nan for small arguments.

>>> scipy.__version__
'0.5.0.2034'
>>> special.jv(2./3,1.e-8)
3.2390285067614573e-06
>>> special.jv(2./3,1.e-9)
6.9782753769690383e-07
>>> special.jv(2./3,1.e-10)
nan
>>> special.jv(2./3,0.0)
0.0

Nils

From nwagner at iam.uni-stuttgart.de  Wed Jul  5 14:49:00 2006
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Wed, 05 Jul 2006 20:49:00 +0200
Subject: [SciPy-dev] Errors and failures
Message-ID: 

A short bug report wrt the latest svn numpy/scipy:

>>> scipy.__version__
'0.5.0.2041'
>>> numpy.__version__
'0.9.9.2735'

numpy.test(1,10)

======================================================================
ERROR: check_basic (numpy.core.tests.test_defmatrix.test_algebra)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/numpy/core/tests/test_defmatrix.py", line 125, in check_basic
    Ainv = linalg.inv(A)
  File "/usr/lib/python2.4/site-packages/numpy/linalg/linalg.py", line 122, in inv
    return
wrap(solve(a, identity(a.shape[0])))
  File "/usr/lib/python2.4/site-packages/numpy/linalg/linalg.py", line 109, in solve
    results = lapack_routine(n_eq, n_rhs, a, n_eq, pivots, b, n_eq, 0)
LapackError: Parameter ipiv is not of type PyArray_INT in lapack_lite.dgesv

======================================================================
ERROR: check_instance_methods (numpy.core.tests.test_defmatrix.test_matrix_return)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/numpy/core/tests/test_defmatrix.py", line 157, in check_instance_methods
    f = eval('a.%s' % attrib)
  File "", line 0, in ?
  File "/usr/lib/python2.4/site-packages/numpy/core/defmatrix.py", line 293, in getI
    return matrix(inv(self))
  File "/usr/lib/python2.4/site-packages/numpy/linalg/linalg.py", line 122, in inv
    return wrap(solve(a, identity(a.shape[0])))
  File "/usr/lib/python2.4/site-packages/numpy/linalg/linalg.py", line 109, in solve
    results = lapack_routine(n_eq, n_rhs, a, n_eq, pivots, b, n_eq, 0)
LapackError: Parameter ipiv is not of type PyArray_INT in lapack_lite.dgesv

======================================================================
ERROR: check_basic (numpy.core.tests.test_defmatrix.test_properties)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/numpy/core/tests/test_defmatrix.py", line 48, in check_basic
    assert allclose(linalg.inv(A), mA.I)
  File "/usr/lib/python2.4/site-packages/numpy/linalg/linalg.py", line 122, in inv
    return wrap(solve(a, identity(a.shape[0])))
  File "/usr/lib/python2.4/site-packages/numpy/linalg/linalg.py", line 109, in solve
    results = lapack_routine(n_eq, n_rhs, a, n_eq, pivots, b, n_eq, 0)
LapackError: Parameter ipiv is not of type PyArray_INT in lapack_lite.dgesv

----------------------------------------------------------------------
Ran 387 tests in 0.821s

FAILED (errors=3)
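A hedged aside on the repeated LapackError above: lapack_lite passes the
pivot array to dgesv as a C `int` array, and the message complains that
`ipiv` is not of that type. One plausible reading (an assumption here, not
something established in this thread) is a 32-bit/64-bit integer-width
mismatch on the reporter's platform. The dtype distinction itself can be
illustrated with numpy:

```python
import numpy as np

# LAPACK's dgesv takes pivot indices as C 'int'. numpy distinguishes the
# C int type (intc) from fixed-width and index integers, and on common
# 64-bit (LP64) platforms they differ in width:
pivots_ok  = np.zeros(4, dtype=np.intc)   # matches C int
pivots_bad = np.zeros(4, dtype=np.int64)  # 64-bit; not C int on LP64

print(pivots_ok.dtype.itemsize)   # 4 on common platforms
print(pivots_bad.dtype.itemsize)  # 8
```

An array of the wrong width cannot be handed to a C routine expecting
`int*`, which is exactly the kind of check the `ipiv` error reports.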
scipy.test(1,10)

======================================================================
ERROR: Compare dgbtrs solutions for linear equation system A*x = b
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/scipy/linalg/tests/test_decomp.py", line 383, in check_dgbtrs
    y_lin = linalg.solve(self.real_mat, self.b)
  File "/usr/lib/python2.4/site-packages/numpy/linalg/linalg.py", line 109, in solve
    results = lapack_routine(n_eq, n_rhs, a, n_eq, pivots, b, n_eq, 0)
LapackError: Parameter ipiv is not of type PyArray_INT in lapack_lite.dgesv

======================================================================
ERROR: Compare zgbtrs solutions for linear equation system A*x = b
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/scipy/linalg/tests/test_decomp.py", line 393, in check_zgbtrs
    y_lin = linalg.solve(self.comp_mat, self.bc)
  File "/usr/lib/python2.4/site-packages/numpy/linalg/linalg.py", line 109, in solve
    results = lapack_routine(n_eq, n_rhs, a, n_eq, pivots, b, n_eq, 0)
LapackError: Parameter ipiv is not of type PyArray_INT in lapack_lite.zgesv

======================================================================
ERROR: check_random (scipy.linalg.tests.test_basic.test_det)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/scipy/linalg/tests/test_basic.py", line 287, in check_random
    d2 = basic_det(a)
  File "/usr/lib/python2.4/site-packages/numpy/linalg/linalg.py", line 453, in det
    results = lapack_routine(n, n, a, n, pivots, 0)
LapackError: Parameter ipiv is not of type PyArray_INT in lapack_lite.dgetrf

======================================================================
ERROR: check_random_complex (scipy.linalg.tests.test_basic.test_det)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/scipy/linalg/tests/test_basic.py", line 297, in check_random_complex
    d2 = basic_det(a)
  File "/usr/lib/python2.4/site-packages/numpy/linalg/linalg.py", line 453, in det
    results = lapack_routine(n, n, a, n, pivots, 0)
LapackError: Parameter ipiv is not of type PyArray_INT in lapack_lite.zgetrf

======================================================================
FAIL: check_hyp2f1 (scipy.special.tests.test_basic.test_hyp2f1)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/scipy/special/tests/test_basic.py", line 1174, in check_hyp2f1
    assert_almost_equal(cv, v, 8, err_msg='test #%d' % i)
  File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 154, in assert_almost_equal
    assert round(abs(desired - actual),decimal) == 0, msg
AssertionError: Items are not equal: test #6
 ACTUAL: 1.7976931348623157e+308
 DESIRED: 0.12499999999999999

======================================================================
FAIL: check_simple (scipy.optimize.tests.test_cobyla.test_cobyla)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/scipy/optimize/tests/test_cobyla.py", line 20, in check_simple
    assert_almost_equal(x, [x0, x1], decimal=5)
  File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 152, in assert_almost_equal
    return assert_array_almost_equal(actual, desired, decimal, err_msg)
  File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 222, in assert_array_almost_equal
    header='Arrays are not almost equal')
  File "/usr/lib/python2.4/site-packages/numpy/testing/utils.py", line 207, in assert_array_compare
    assert cond, msg
AssertionError: Arrays are not almost equal
(mismatch 100.0%)
 x: array([ 4.957975  ,  0.64690335])
 y: array([ 4.95535625,  0.66666667])

----------------------------------------------------------------------
Ran 1552 tests in 6.465s

FAILED (failures=2, errors=4)

From robert.kern at gmail.com  Wed Jul  5 15:09:13 2006
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 05 Jul 2006 14:09:13 -0500
Subject: [SciPy-dev] Errors and failures
In-Reply-To: 
References: 
Message-ID: <44AC0E59.5010109@gmail.com>

Nils Wagner wrote:
> A short bug report wrt to latest svn numpy/scipy

Make tickets instead of posting here. Only post here if you have something
substantive to discuss.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth." -- Umberto Eco

From bhendrix at enthought.com  Wed Jul  5 19:58:31 2006
From: bhendrix at enthought.com (Bryce Hendrix)
Date: Wed, 05 Jul 2006 18:58:31 -0500
Subject: [SciPy-dev] ANN: Python Enthought Edition Version 1.0.0.beta3 Released
Message-ID: <44AC5227.9030902@enthought.com>

Enthought is pleased to announce the release of Python Enthought Edition
Version 1.0.0.beta3 (http://code.enthought.com/enthon/) -- a python
distribution for Windows.

1.0.0.beta3 Release Notes:
--------------------------
Version 1.0.0.beta3 of Python Enthought Edition is the first version based
on Python 2.4.3 and includes updates to nearly every package. This is the
third and (hopefully) last beta release. This release includes version 1.0.8
of the Enthought Tool Suite (ETS) Package and bug fixes -- you can look at
the release notes for this ETS version here:

http://svn.enthought.com/downloads/enthought/changelog-release.1.0.8.html

About Python Enthought Edition:
-------------------------------
Python 2.4.3, Enthought Edition is a kitchen-sink-included Python
distribution for Windows including the following packages out of the box:

    Numeric
    SciPy
    IPython
    Enthought Tool Suite
    wxPython
    PIL
    mingw
    f2py
    MayaVi
    Scientific Python
    VTK

and many more...
More information is available about all Open Source code written and
released by Enthought, Inc. at http://code.enthought.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From izakmarais at yahoo.com  Thu Jul  6 09:51:47 2006
From: izakmarais at yahoo.com (izak marais)
Date: Thu, 6 Jul 2006 06:51:47 -0700 (PDT)
Subject: [SciPy-dev] BUG: Memory leak in Numpy
Message-ID: <20060706135147.54390.qmail@web50908.mail.yahoo.com>

Hi

I'm a new scipy and numpy user, so I don't know if this is the right mailing
list to post numpy bugs to, or what additional information you'll need, but
I have code that leaks memory when accessing a numpy array inside a while
loop. Here is the script. (Note that I spent a great deal of time isolating
the problem to this piece of code, because I worried it might have been my
own code that was buggy; the script therefore doesn't actually do anything
useful in its isolated state.)

def grow_region(image, start):
    import numpy as N                  #--------> numpy causes leak
    test_array = N.zeros((316, 316), N.uint8)
    #import Numeric as N2              #--------> Numeric works fine
    #test_array = N2.zeros((316, 316), N2.UInt8)
    print test_array
    remaining_area = 316**2
    print 'Before while loop. press enter to continue'
    raw_input()
    while (remaining_area>1):
        remaining_area -= 1
        for dr in [-1, 0, 1]:
            for dc in [-1, 0, 1]:
                r = 157 + dr
                c = 157 + dc
                # test_array[r, c] = 1           #--------> NO leak
                if test_array[r,c] == 0:         #--------> causes leak
                    pass
                # if test_array[157,157] == 0:   #--------> NO leak
                #     pass
                # test = test_array[r,c]         #--------> causes leak
#test
grow_region(1,1)

I run it on Windows XP. The numpy.test(level=1) test executes correctly. I
can see it leaks memory by watching the memory usage in the task list. The
version/install file of Python is 'python-2.4.2.msi' and numpy is
'numpy-0.9.8.win32-py2.4.exe'. Let me know if you need extra info on my
setup and how I should retrieve it (if it is non-trivial).

Izak

---------------------------------
Do you Yahoo!?
Get on board. You're invited to try the new Yahoo! Mail Beta.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From oliphant at ee.byu.edu  Thu Jul  6 12:14:37 2006
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Thu, 06 Jul 2006 10:14:37 -0600
Subject: [SciPy-dev] BUG: Memory leak in Numpy
In-Reply-To: <20060706135147.54390.qmail@web50908.mail.yahoo.com>
References: <20060706135147.54390.qmail@web50908.mail.yahoo.com>
Message-ID: <44AD36ED.8080700@ee.byu.edu>

izak marais wrote:
> Hi
>
> I'm a new scipy and numpy user, so I don't know if this is the right
> mailing list to post numpy bugs to or what additional info you'll
> need, but I have code that leaks memory when accessing a numpy array
> inside a while loop.

I'm pretty sure this has been fixed in SVN.

-Travis

From clarence at broad.mit.edu  Thu Jul  6 14:07:12 2006
From: clarence at broad.mit.edu (clarence at broad.mit.edu)
Date: Thu, 06 Jul 2006 14:07:12 -0400
Subject: [SciPy-dev] Scipy-0.4.9 and BLAS? srotmg_ errors
Message-ID: <20060706140712.d1tqfepuok484ocs@imap.broad.mit.edu>

Hopefully I've subscribed to the correct list for this inquiry. I'm
currently attempting to get scipy-0.4.9 built and installed correctly with
numpy-0.9.8 and the latest BLAS/LAPACK libs, to be used on Linux.
Everything builds and installs fine without errors (seemingly), and I
encounter the problem when trying to run scipy.test(level=1) from Python:

import integrate -> failed: scipy/linalg/fblas.so: undefined symbol: srotmg_
import signal -> failed: scipy/linalg/fblas.so: undefined symbol: srotmg_
import special -> failed: scipy/linalg/fblas.so: undefined symbol: srotmg_
import lib.blas -> failed: scipy/lib/blas/fblas.so: undefined symbol: srotmg_
import linalg -> failed: scipy/linalg/fblas.so: undefined symbol: srotmg_
import maxentropy -> failed: scipy/linalg/fblas.so: undefined symbol: srotmg_
import stats -> failed: scipy/linalg/fblas.so: undefined symbol: srotmg_

I've seen that this is a known bug and that I should use the full BLAS
implementation, which I have and was using anyway. I don't want to use the
ATLAS option if possible, and my BLAS/LAPACK libs did build without a hitch.
Are there any other checks I can do to verify that scipy built against my
libs correctly? Unfortunately this is a time-sensitive issue for my team,
and any clue you can provide is appreciated.

/clarence/

From nwagner at iam.uni-stuttgart.de  Fri Jul  7 04:27:19 2006
From: nwagner at iam.uni-stuttgart.de (Nils Wagner)
Date: Fri, 07 Jul 2006 10:27:19 +0200
Subject: [SciPy-dev] 32 versus 64 bit
Message-ID: <44AE1AE7.2010004@iam.uni-stuttgart.de>

Hi all,

I have submitted a new bug report. Can someone reproduce these results? If
so, what is the reason for different results on 32- and 64-bit machines? I
am interested in reliable results independent of the processor.

http://projects.scipy.org/scipy/scipy/ticket/223

Nils

From prabhu_r at users.sf.net  Fri Jul  7 04:45:46 2006
From: prabhu_r at users.sf.net (Prabhu Ramachandran)
Date: Fri, 7 Jul 2006 14:15:46 +0530
Subject: [SciPy-dev] Weave, numpy, external libraries and conflicting typedefs
Message-ID: <17582.7994.142818.120995@prpc.aero.iitb.ac.in>

Hi,

I have some C++ code that uses an external library (Ygl).
This library makes the following typedefs:

typedef signed char    Int8;
typedef unsigned char  Uint8;
typedef signed short   Int16;
typedef unsigned short Uint16;
typedef signed int     Int32;
typedef unsigned int   Uint32;
typedef float          Float32;
typedef double         Float64;

My C++ code is wrapped to Python and includes the header file that defines
these typedefs. I use weave to efficiently script this code from Python.

Earlier, with Numeric, arrayobject.h did not define typedefs for Int32 and
so on. However, numpy defines several typedefs that conflict with the
typedefs in Ygl. Because of this change between Numeric and numpy, my weave
code that uses both the C++ header files (which include Ygl's headers) and
arrayobject.h no longer compiles.

I could try to work around the problem in my code, but would like to know
whether this can be handled in numpy. Any clarifications will be much
appreciated. Thanks!

cheers,
prabhu

From oliphant.travis at ieee.org  Fri Jul  7 13:46:58 2006
From: oliphant.travis at ieee.org (Travis Oliphant)
Date: Fri, 07 Jul 2006 11:46:58 -0600
Subject: [SciPy-dev] Weave, numpy, external libraries and conflicting typedefs
In-Reply-To: <17582.7994.142818.120995@prpc.aero.iitb.ac.in>
References: <17582.7994.142818.120995@prpc.aero.iitb.ac.in>
Message-ID: <44AE9E12.5000703@ieee.org>

Prabhu Ramachandran wrote:
> Hi,
>
> I have some C++ code that uses an external library (Ygl). This
> library makes the following typedefs:
>
> typedef signed char Int8;
> typedef unsigned char Uint8;
> typedef signed short Int16;
> typedef unsigned short Uint16;
> typedef signed int Int32;
> typedef unsigned int Uint32;
> typedef float Float32;
> typedef double Float64;
>
> My C++ code is wrapped to Python and includes the header file that
> defines these typedefs. I use weave to efficiently script this code
> from Python.
>
> Earlier, with Numeric, arrayobject.h did not define typedefs for the
> Int32 and so on.
However, numpy defines several typedefs that
> conflict with the typedefs in Ygl.

You need to

#define PY_ARRAY_TYPES_PREFIX ygl_

or something like that before including arrayobject.h

-Travis

From nmarais at sun.ac.za  Fri Jul  7 13:53:44 2006
From: nmarais at sun.ac.za (Neilen Marais)
Date: Fri, 07 Jul 2006 19:53:44 +0200
Subject: [SciPy-dev] Sparse matrix design and broadcasting
Message-ID: <1152294824.5814.270.camel@localhost.localdomain>

Hi

I've been looking at the scipy.sparse routines a little, specifically the
sparse_lil type. Currently, fancy indexing seems to be handled by a lot of
special-case code.

I wonder whether there is some way we can use numpy's built-in broadcasting
to make this easier. My thinking is something like this:

1) lil_matrix.__setitem__() figures out what the equivalent shape of the
   region being assigned to is. Perhaps some numpy routines exist that can
   help here? e.g.

   lil_mat[1,:]                    -> (1, lil_mat.shape[1])
   lil_mat[[0,1,2], [5,6,7]]       -> (3,)
   lil_mat[ix_([0,1,2], [5,6,7])]  -> (3,3)

2) get numpy to broadcast the value being set to that shape

3) use a single code path that knows how to handle assignment from 2D or 1D
   arrays.

Does this make sense?

Regards
Neilen

From prabhu_r at users.sf.net  Fri Jul  7 15:33:54 2006
From: prabhu_r at users.sf.net (Prabhu Ramachandran)
Date: Sat, 8 Jul 2006 01:03:54 +0530
Subject: [SciPy-dev] Weave, numpy, external libraries and conflicting typedefs
In-Reply-To: <44AE9E12.5000703@ieee.org>
References: <17582.7994.142818.120995@prpc.aero.iitb.ac.in> <44AE9E12.5000703@ieee.org>
Message-ID: <17582.46882.286683.846598@prpc.aero.iitb.ac.in>

>>>>> "Travis" == Travis Oliphant writes:

  >> Earlier, with Numeric, arrayobject.h did not define typedefs
  >> for the Int32 and so on. However, numpy defines several
  >> typedefs that conflict with the typedefs in Ygl.

  Travis> You need to #define PY_ARRAY_TYPES_PREFIX ygl_
  Travis> or something like that before including arrayobject.h

Thanks!
I tried to do this, but there are more problems: arrayobject.h defines
MAX/MIN macros, and my own code defines these as generic functions. I can
work around those easily, but after I did so I ended up with the compilation
errors below (where foo.cpp is the weave-generated code):

foo.cpp: In member function 'void numpy_type_handler::conversion_numpy_check_type(PyArrayObject*, int, const char*)':
foo.cpp:494: error: expected primary-expression before ')' token
foo.cpp:494: error: 'Bool' was not declared in this scope
foo.cpp:494: error: expected primary-expression before 'int'
foo.cpp:494: error: expected primary-expression before 'int'
foo.cpp:494: error: expected `)' before 'PyArray_API'
[...]

The code in question is:

482  void conversion_numpy_check_type(PyArrayObject* arr_obj, int numeric_type,
483                                   const char* name)
484  {
485      // Make sure input has correct numeric type.
486      int arr_type = arr_obj->descr->type_num;
487      if (PyTypeNum_ISEXTENDED(numeric_type))
488      {
489          char msg[80];
490          sprintf(msg, "Conversion Error: extended types not supported for variable '%s'",
491                  name);
492          throw_error(PyExc_TypeError, msg);
493      }
494      if (!PyArray_EquivTypenums(arr_type, numeric_type))
495      {

If I don't "#define PY_ARRAY_TYPES_PREFIX ygl_", I don't get these errors. I
see the problem: PyArray_EquivTypenums ultimately returns a Bool (defined in
__multiarray_api.h), and Bool is undef'd when PY_ARRAY_TYPES_PREFIX is set.
If I comment out the undef of Bool, this compiles fine.

Is there anything that can be done, or should I just try to work around the
problem in my code? Also, would you consider changing the MAX/MIN macro
names?

Thanks again!
regards,
prabhu

From fperez.net at gmail.com  Fri Jul  7 15:38:06 2006
From: fperez.net at gmail.com (Fernando Perez)
Date: Fri, 7 Jul 2006 13:38:06 -0600
Subject: [SciPy-dev] Weave, numpy, external libraries and conflicting typedefs
In-Reply-To: <17582.46882.286683.846598@prpc.aero.iitb.ac.in>
References: <17582.7994.142818.120995@prpc.aero.iitb.ac.in> <44AE9E12.5000703@ieee.org> <17582.46882.286683.846598@prpc.aero.iitb.ac.in>
Message-ID: 

On 7/7/06, Prabhu Ramachandran wrote:
> Also, would you consider changing the MAX/MIN macro names?

I think there was a recent message about this as well. Shouldn't all
publicly visible names from the numpy API be PyArray prefixed? Otherwise
these kinds of problems will pop up more and more in the future... If
MIN/MAX are widely used internally by numpy, they can be declared in a
header for internal use which is not the public API one.

Cheers,

f

From oliphant at ee.byu.edu  Fri Jul  7 16:03:43 2006
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Fri, 07 Jul 2006 14:03:43 -0600
Subject: [SciPy-dev] Weave, numpy, external libraries and conflicting typedefs
In-Reply-To: <17582.46882.286683.846598@prpc.aero.iitb.ac.in>
References: <17582.7994.142818.120995@prpc.aero.iitb.ac.in> <44AE9E12.5000703@ieee.org> <17582.46882.286683.846598@prpc.aero.iitb.ac.in>
Message-ID: <44AEBE1F.8010304@ee.byu.edu>

Prabhu Ramachandran wrote:
>>>>>> "Travis" == Travis Oliphant writes:
>
>  >> Earlier, with Numeric, arrayobject.h did not define typedefs
>  >> for the Int32 and so on. However, numpy defines several
>  >> typedefs that conflict with the typedefs in Ygl.
>
>  Travis> You need to #define PY_ARRAY_TYPES_PREFIX ygl_
>  Travis> or something like that before including arrayobject.h
>
> Thanks! I tried to do this but there are more problems.
> arrayobject.h defines MAX/MIN macros. My own code defines these as
> generic functions.
I can work around those easily, and after I did
> that I ended up with compilation errors below (where foo.cpp is the
> weave generated code):
>
> foo.cpp: In member function 'void numpy_type_handler::conversion_numpy_check_type(PyArrayObject*, int, const char*)':
> foo.cpp:494: error: expected primary-expression before ')' token
> foo.cpp:494: error: 'Bool' was not declared in this scope
> foo.cpp:494: error: expected primary-expression before 'int'
> foo.cpp:494: error: expected primary-expression before 'int'
> foo.cpp:494: error: expected `)' before 'PyArray_API'
> [...]

This should be fixed in SVN. Basically, we can't use the defined types in
the C-API.

Also, (in latest SVN) the MAXMIN macros can be avoided using

#define PYA_NOMAXMIN

before including arrayobject.h

-Travis

From fperez.net at gmail.com  Fri Jul  7 16:32:28 2006
From: fperez.net at gmail.com (Fernando Perez)
Date: Fri, 7 Jul 2006 14:32:28 -0600
Subject: [SciPy-dev] Weave, numpy, external libraries and conflicting typedefs
In-Reply-To: <44AEBE1F.8010304@ee.byu.edu>
References: <17582.7994.142818.120995@prpc.aero.iitb.ac.in> <44AE9E12.5000703@ieee.org> <17582.46882.286683.846598@prpc.aero.iitb.ac.in> <44AEBE1F.8010304@ee.byu.edu>
Message-ID: 

On 7/7/06, Travis Oliphant wrote:
> Also, (in latest SVN) the MAXMIN macros can be avoided using
>
> #define PYA_NOMAXMIN
>
> before including arrayobject.h

Mmh, this looks crufty to me: special cases like these look bad in a
library, and break the 'just works' ideal we all strive for, IMHO.

Why not have arrayobject.h be fully include-safe, with prefixing of all of
its #defines, leaving a private header for use by numpy's internals?

I really don't like having to remember (or teach) special cases. One never
seems like too many, until you forget it months later and waste an
afternoon hunting for a strange bug.

Just my opinion...
f

From oliphant at ee.byu.edu  Fri Jul  7 16:39:06 2006
From: oliphant at ee.byu.edu (Travis Oliphant)
Date: Fri, 07 Jul 2006 14:39:06 -0600
Subject: [SciPy-dev] Weave, numpy, external libraries and conflicting typedefs
In-Reply-To: 
References: <17582.7994.142818.120995@prpc.aero.iitb.ac.in> <44AE9E12.5000703@ieee.org> <17582.46882.286683.846598@prpc.aero.iitb.ac.in> <44AEBE1F.8010304@ee.byu.edu>
Message-ID: <44AEC66A.2070507@ee.byu.edu>

Fernando Perez wrote:
> On 7/7/06, Travis Oliphant wrote:
>
>> Also, (in latest SVN) the MAXMIN macros can be avoided using
>>
>> #define PYA_NOMAXMIN
>>
>> before including arrayobject.h
>
> Mmh, this looks crufty to me: special cases like these look bad in a
> library, and break the 'just works' ideal we all strive for, IMHO.

But it fixes the problem he's having without breaking anybody else's code
that already uses the MAX/MIN macros. Besides, the PY_ARRAY_TYPES_PREFIX
business is a lot more crufty.

I'm not opposed to putting a *short* prefix in front of everything (the
Int32, Float64, etc. came from numarray, which now has its own
backward-compatible header where they could be placed anyway). Perhaps npy_
would be a suitable prefix. That way we could get rid of the cruft entirely.

I suppose we could also provide the noprefix.h header that defines the old
un-prefixed cases for "backwards NumPy compatibility".
-Travis

From fperez.net at gmail.com  Fri Jul  7 16:53:45 2006
From: fperez.net at gmail.com (Fernando Perez)
Date: Fri, 7 Jul 2006 14:53:45 -0600
Subject: [SciPy-dev] Weave, numpy, external libraries and conflicting typedefs
In-Reply-To: <44AEC66A.2070507@ee.byu.edu>
References: <17582.7994.142818.120995@prpc.aero.iitb.ac.in> <44AE9E12.5000703@ieee.org> <17582.46882.286683.846598@prpc.aero.iitb.ac.in> <44AEC66A.2070507@ee.byu.edu>
Message-ID: 

On 7/7/06, Travis Oliphant wrote:
> I'm not opposed to putting a *short* prefix in front of everything (the
> Int32, Float64, stuff came from numarray which now has it's own
> back-ward compatible header where it could be placed now anyway).
> Perhaps npy_ would be a suitable prefix.
>
> That way we could get rid of the cruft entirely.

Well, now is your chance to clean up all the APIs, /including/ the C ones :)
npy_ or NPy -- I'm not too sure what conventions you are following at the C
naming level. I'm all for cruft removal and making things easy to use out
of the box, even at the cost of making the transition a bit more work.
Remember, that's a one-time cost. And numpy is so good that people /will/
transition from Numeric eventually, so we might as well make the end result
as nice and appealing as possible. As other tools (like matplotlib)
eventually move to numpy-only support, the incentive to make the switch
will really go up for just about anyone using Python for numerical work.

At the risk of sounding a bit harsh: I think you can then say, 'take the
pain of the switch if you really want all the new goodies'. Those who
positively, absolutely can't update from Numeric can just keep a frozen
codebase. It's not as if you're breaking Numeric 24.2 or deleting it from
the internet :)

Cheers,
f

From cookedm at physics.mcmaster.ca  Fri Jul  7 17:01:48 2006
From: cookedm at physics.mcmaster.ca (David M.
Cooke)
Date: Fri, 7 Jul 2006 17:01:48 -0400
Subject: [SciPy-dev] Weave, numpy, external libraries and conflicting typedefs
In-Reply-To: 
References: <17582.7994.142818.120995@prpc.aero.iitb.ac.in> <44AE9E12.5000703@ieee.org> <17582.46882.286683.846598@prpc.aero.iitb.ac.in> <44AEBE1F.8010304@ee.byu.edu>
Message-ID: <20060707170148.071ae785@arbutus.physics.mcmaster.ca>

On Fri, 7 Jul 2006 14:32:28 -0600
"Fernando Perez" wrote:

> On 7/7/06, Travis Oliphant wrote:
>
>> Also, (in latest SVN) the MAXMIN macros can be avoided using
>>
>> #define PYA_NOMAXMIN
>>
>> before including arrayobject.h
>
> Mmh, this looks crufty to me: special cases like these look bad in a
> library, and break the 'just works' ideal we all strive for, IMHO.
>
> Why not have arrayobject.h be fully include-safe with prefixing of all
> of its #defines, leaving a private header for use by numpy's
> internals?
>
> I really don't like having to remember (or teach) special cases. One
> never seems like too many, until you forget it months later and waste
> an afternoon hunting for a strange bug.
>
> Just my opinion...

There's really not all that many uses of MAX() and MIN() in numpy. I can
check in a change to rename all of those to PyArray_MAX/PyArray_MIN.

As for the others in arrayobject.h:
- a bunch are handled by PY_ARRAY_TYPES_PREFIX
- there are a bunch of MAX_*, MIN_* constants for ints
  * also BITSOF_*, SIZEOF_*
  * similarly, why STRBITSOF_*?
- stuff handling long long: LONGLONG_* and ULONGLONG_*
  * FALSE and TRUE
- flag stuff, some for Numeric backwards compatibility
  * the thread macros

Stuff with a '*' I think should be private or removed.

-- 
|>|\/|<
/--------------------------------------------------------------------------\
|David M.
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From schofield at ftw.at Sat Jul 8 13:49:40 2006 From: schofield at ftw.at (Ed Schofield) Date: Sat, 8 Jul 2006 19:49:40 +0200 Subject: [SciPy-dev] Sparse matrix design and broadcasting In-Reply-To: <1152294824.5814.270.camel@localhost.localdomain> References: <1152294824.5814.270.camel@localhost.localdomain> Message-ID: <1C110344-8C71-45BF-9C34-E43FA49F58E2@ftw.at> On 07/07/2006, at 7:53 PM, Neilen Marais wrote: > Hi > > I've been looking at the scipy.sparse routines a little, specifically > the sparse_lil type. Currently fancy indexing seems to be handled by a > lot of special case code. > > I wonder if there is not some way we can use numpy's build in > broadcasting to make this easier. My thinking is something like this: > > 1) lil_matrix.__setitem__() figures out what the equivalent shape > of the > region being assigned to is. Perhaps some numpy routines exist that > can > help here? > > e.g. > > lil_mat[1,:] -> (1, lil_mat.shape[1]) > lil_mat[[0,1,2], [5,6,7]] -> (3,) > lil_mat[ix_([0,1,2], [5,6,7])] -> (3,3) > > 2) get numpy to broadcast the value being set to that shape > > 3) use a single code path that knows how to handle assignation from 2D > or 1D arrays. > > Does this make sense? Interesting idea. I wouldn't be surprised if we could re-use some code from NumPy for this, or perhaps from numarray (whose indexing code is written in Python). I went through the numarray code for interpreting slices and index arrays before writing the sparse fancy indexing code, hoping to re-use whatever I could, but it made quite heavy use of strides, and I didn't see how this could be applied to sparse matrices. So I more or less reinvented the wheel. But some ideas from numarray (in Lib/generic.py) could probably be lifted, such as the way the _universalIndexing method handles both getting and setting. 
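As an aside, the three-step scheme quoted above can be mocked up with dense numpy arrays. This is only a sketch: broadcast_setitem is a hypothetical helper, and np.broadcast_to stands in for whatever broadcasting hook a sparse implementation would actually use.

```python
import numpy as np

def broadcast_setitem(mat, index, value):
    # Step 1: determine the equivalent shape of the region being assigned to.
    target_shape = mat[index].shape
    # Step 2: let numpy broadcast the value being set to that shape.
    value = np.broadcast_to(np.asarray(value), target_shape)
    # Step 3: a single assignment code path handles all cases.
    mat[index] = value

m = np.zeros((3, 4))
broadcast_setitem(m, (1, slice(None)), 7.0)             # m[1, :] = 7, scalar -> (4,)
broadcast_setitem(m, np.ix_([0, 2], [1, 3]), [10, 20])  # (2,) row -> (2, 2) region
```

The appeal of this structure is that step 3 no longer needs special cases: once the value has the target shape, one assignment path covers scalars, rows and index-array regions alike.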
Maybe we could indeed use functionality from NumPy in interpreting indices, but I don't know whether this is possible or how we'd do it. I'd appreciate any help. -- Ed From david.huard at gmail.com Mon Jul 10 17:12:39 2006 From: david.huard at gmail.com (David Huard) Date: Mon, 10 Jul 2006 17:12:39 -0400 Subject: [SciPy-dev] Common data sets for testing purposes Message-ID: <1152565959.2501.40.camel@localhost> Hi, I think it would be useful if there were standard, common data sets included in the scipy distribution (as in matlab or R). They could be used to ease testing, the creation of demos or simply to give examples. Also, if the data sets are chosen wisely, they could serve to attract people from targeted discipline to scipy (IQ scores data won't attract the same crowd as neutrino counts or distributed sea surface temperatures). David -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Mon Jul 10 17:53:17 2006 From: robert.kern at gmail.com (Robert Kern) Date: Mon, 10 Jul 2006 16:53:17 -0500 Subject: [SciPy-dev] Common data sets for testing purposes In-Reply-To: <1152565959.2501.40.camel@localhost> References: <1152565959.2501.40.camel@localhost> Message-ID: <44B2CC4D.8050002@gmail.com> David Huard wrote: > Hi, > > I think it would be useful if there were standard, common data sets > included in the scipy distribution (as in matlab or R). They could be > used to ease testing, the creation of demos or simply to give examples. > Also, if the data sets are chosen wisely, they could serve to attract > people from targeted discipline to scipy (IQ scores data won't attract > the same crowd as neutrino counts or distributed sea surface temperatures). Good idea. The first step would be collecting some datasets and writing one scipy/matplotlib (dare I say Chaco?) example per dataset. 
As we write the examples, the idioms we use to access the data should come to the surface, and we can possibly settle on a common data format and some utilities in scipy to make the demos accessible through a uniform interface (more or less; at the very least the file structure should settle out quickly: a README, example01.py, example02.py, plot01.png, data/*.dat, etc.). I would prefer to keep the datasets out of the trunk and the distribution tarballs, though. The current download burden is somewhat heavy as it is, and some of the worthwhile datasets will probably be substantial in size. A few might be absorbed into the scipy trunk for use in unit tests or the (very lonely) tutorial. I suggest making a data/ directory in the repository sibling to branches/, tags/, and trunk/. I'll try to get around to it if no one beats me to it. If you would like to start a Wiki page on www.scipy.org to collect pointers to useful datasets and example code, that would be great. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From fullung at gmail.com Mon Jul 10 19:56:36 2006 From: fullung at gmail.com (Albert Strasheim) Date: Tue, 11 Jul 2006 01:56:36 +0200 Subject: [SciPy-dev] Common data sets for testing purposes In-Reply-To: <44B2CC4D.8050002@gmail.com> Message-ID: <003b01c6a47c$7a8d5320$0100000a@dsp.sun.ac.za> Hello all > If you would like to start a Wiki page on www.scipy.org to collect > pointers to > useful datasets and example code, that would be great. Here's a link you can add once the wiki page name has been decided upon: http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/ The libsvm authors did a pretty decent job of collecting quite a few datasets for classification, regression and other problems. You'll also find links to many other sites with interesting datasets from the page I mentioned. 
Regards, Albert From pwang at enthought.com Tue Jul 11 02:05:18 2006 From: pwang at enthought.com (Peter Wang) Date: Tue, 11 Jul 2006 01:05:18 -0500 (CDT) Subject: [SciPy-dev] Common data sets for testing purposes In-Reply-To: <44B2CC4D.8050002@gmail.com> Message-ID: <1152597918.958@mail.enthought.com> Robert Kern wrote .. > Good idea. The first step would be collecting some datasets and writing > one scipy/matplotlib (dare I say Chaco?) example per dataset. I double DOG dare you to say it. Actually, having some good datasets would be really helpful. As pretty as bessel functions are, I'm rather tired of using them for all the basic chaco demos. > I would prefer to keep the datasets out of the trunk and the distribution > tarballs, though. The current download burden is somewhat heavy as it is, > and some of the worthwhile datasets will probably be substantial in size. I vote for this as well. The GIS data that is used for some of the older chaco examples is 2.6mb compressed, and I moved it out of the main enthought/src/lib/ directory structure for that reason. > If you would like to start a Wiki page on www.scipy.org to collect pointers > to useful datasets and example code, that would be great. The UN has a Statistics Division with tons of demographic data: http://unstats.un.org/unsd/cdb/cdb_help/cdb_quick_start.asp Now, granted, this is financial and demographic data instead of strictly scientific data, but perhaps the sheer volume and quality of the data outweighs the lack of direct scientific applicability. -Peter From david.huard at gmail.com Tue Jul 11 10:22:47 2006 From: david.huard at gmail.com (David Huard) Date: Tue, 11 Jul 2006 10:22:47 -0400 Subject: [SciPy-dev] Common data sets for testing purposes In-Reply-To: <1152597918.958@mail.enthought.com> References: <44B2CC4D.8050002@gmail.com> <1152597918.958@mail.enthought.com> Message-ID: <91cf711d0607110722s3e50cefdy923da92d435b9aab@mail.gmail.com> 2006/7/11, Peter Wang : > > Robert Kern wrote .. 
> > If you would like to start a Wiki page on www.scipy.org to collect > pointers > > to useful datasets and example code, that would be great. > I put the page in the developer zone/documentation/projects/Data sets and Examples. I wrote some guidelines for naming conventions, feel free to change them. > Good idea. The first step would be collecting some datasets and writing > one scipy/matplotlib (dare I say Chaco?) example per dataset. Yes, but it should also work without matplotlib (or chaco) installed (all plotting done inside a try: statement). David -------------- next part -------------- An HTML attachment was scrubbed... URL: From prabhu_r at users.sf.net Wed Jul 12 17:14:25 2006 From: prabhu_r at users.sf.net (Prabhu Ramachandran) Date: Thu, 13 Jul 2006 02:44:25 +0530 Subject: [SciPy-dev] Weave, numpy, external libraries and conflicting typedefs In-Reply-To: <44AEC66A.2070507@ee.byu.edu> References: <17582.7994.142818.120995@prpc.aero.iitb.ac.in> <44AE9E12.5000703@ieee.org> <17582.46882.286683.846598@prpc.aero.iitb.ac.in> <44AEBE1F.8010304@ee.byu.edu> <44AEC66A.2070507@ee.byu.edu> Message-ID: <17589.26161.788478.50305@prpc.aero.iitb.ac.in> I sent this reply on 9th but the message seems to have never made it to scipy-dev and is still pending moderator approval on numpy-discussion. I had attached a patch for weave that made the email larger than 40 KB. I don't have checkin privileges to scipy/numpy. So if someone would be kind enough to apply the patch for me, let me know and I'll send you the patch off-list. >>>>> "Travis" == Travis Oliphant writes: [...] Travis> I'm not opposed to putting a *short* prefix in front of Travis> everything (the Int32, Float64, stuff came from numarray Travis> which now has it's own back-ward compatible header where Travis> it could be placed now anyway). Perhaps npy_ would be a Travis> suitable prefix. Travis> That way we could get rid of the cruft entirely. 
Travis> I suppose we could also provide the noprefix.h header that Travis> defines the old un-prefixed cases for "backwards NumPy Travis> compatibility". Travis, you rock! Thanks for fixing this in SVN. All the problems I was having earlier are gone. Here is a patch for weave's SWIG support that should be applied to scipy. The SWIG runtime layout changed in version 1.3.28 and this broke weave support. This should be fixed soon (hopefully!) in SWIG CVS and the following patch will ensure that weave works against it. Many thanks once again for the speedy fixes! cheers, prabhu From fperez.net at gmail.com Wed Jul 12 17:24:32 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Wed, 12 Jul 2006 15:24:32 -0600 Subject: [SciPy-dev] Weave, numpy, external libraries and conflicting typedefs In-Reply-To: <17589.26161.788478.50305@prpc.aero.iitb.ac.in> References: <17582.7994.142818.120995@prpc.aero.iitb.ac.in> <44AE9E12.5000703@ieee.org> <17582.46882.286683.846598@prpc.aero.iitb.ac.in> <44AEBE1F.8010304@ee.byu.edu> <44AEC66A.2070507@ee.byu.edu> <17589.26161.788478.50305@prpc.aero.iitb.ac.in> Message-ID: On 7/12/06, Prabhu Ramachandran wrote: > I sent this reply on 9th but the message seems to have never made it > to scipy-dev and is still pending moderator approval on > numpy-discussion. I had attached a patch for weave that made the > email larger than 40 KB. I don't have checkin privileges to > scipy/numpy. So if someone would be kind enough to apply the patch > for me, let me know and I'll send you the patch off-list. Send me the patches. I'll make them available online so that anyone can commit them later (I can do it if there's no objection), while you're asleep (trying to make this available across our timezone differences). Cheers, f From cookedm at physics.mcmaster.ca Wed Jul 12 17:29:26 2006 From: cookedm at physics.mcmaster.ca (David M. 
Cooke) Date: Wed, 12 Jul 2006 17:29:26 -0400 Subject: [SciPy-dev] Weave, numpy, external libraries and conflicting typedefs In-Reply-To: References: <17582.7994.142818.120995@prpc.aero.iitb.ac.in> <44AE9E12.5000703@ieee.org> <17582.46882.286683.846598@prpc.aero.iitb.ac.in> <44AEBE1F.8010304@ee.byu.edu> <44AEC66A.2070507@ee.byu.edu> <17589.26161.788478.50305@prpc.aero.iitb.ac.in> Message-ID: <20060712172926.39e65405@arbutus.physics.mcmaster.ca> On Wed, 12 Jul 2006 15:24:32 -0600 "Fernando Perez" wrote: > On 7/12/06, Prabhu Ramachandran wrote: > > I sent this reply on 9th but the message seems to have never made it > > to scipy-dev and is still pending moderator approval on > > numpy-discussion. I had attached a patch for weave that made the > > email larger than 40 KB. I don't have checkin privileges to > > scipy/numpy. So if someone would be kind enough to apply the patch > > for me, let me know and I'll send you the patch off-list. > > Send me the patches. I'll make them available online so that anyone > can commit them later (I can do it if there's no objection), while > you're asleep (trying to make this available across our timezone > differences). Or, make a ticket on the Scipy bug tracker (http://projects.scipy.org/scipy/scipy/newticket), and attach them there. In general I'd suggest that people do this; then we don't lose them on the mailing list. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. 
Cooke http://arbutus.physics.mcmaster.ca/dmc/ |cookedm at physics.mcmaster.ca From nwagner at iam.uni-stuttgart.de Thu Jul 13 01:46:49 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 13 Jul 2006 07:46:49 +0200 Subject: [SciPy-dev] Please send the f2py-generated source code of _cobylamodule.c In-Reply-To: <44B55EA4.30101@ieee.org> References: <44B55EA4.30101@ieee.org> Message-ID: On Wed, 12 Jul 2006 14:42:12 -0600 Travis Oliphant wrote: > > Please send the 32-bit and 64-bit generated interface >modules > > _cobylamodule.c > > located on my system (after a build) > > scipy/build/src.linux-i686-2.4/Lib/optimize/cobyla/_cobylamodule.c > > > -Travis > Hi Travis, I had a mail delivery problem. Please find attached the 32-bit generated interface module. BTW, did you receive testcob.out32? Nils -------------- next part -------------- A non-text attachment was scrubbed... Name: _cobylamodule.c Type: text/x-csrc Size: 23522 bytes Desc: not available URL: From stefan at sun.ac.za Thu Jul 13 06:54:21 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu, 13 Jul 2006 12:54:21 +0200 Subject: [SciPy-dev] acceptable licenses for scipy Message-ID: <20060713105421.GA15355@mentat.za.net> Hi all, When including source with scipy, which licenses are acceptable? I know that the BSD-style licenses are good, but how about the Lesser GPL etc.? Regards Stéfan From david at ar.media.kyoto-u.ac.jp Thu Jul 13 08:12:11 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Thu, 13 Jul 2006 21:12:11 +0900 Subject: [SciPy-dev] acceptable licenses for scipy In-Reply-To: <20060713105421.GA15355@mentat.za.net> References: <20060713105421.GA15355@mentat.za.net> Message-ID: <44B6389B.3060604@ar.media.kyoto-u.ac.jp> Stefan van der Walt wrote: > Hi all, > > When including source with scipy, which licenses are acceptable? I > know that the BSD-style licenses are good, but how about the Lesser > GPL etc.? 
> > That's something I was wondering recently, concerning extensions written in C: is it possible to have C extension modules licensed under LGPL ? Does it make sense ? David From listservs at mac.com Thu Jul 13 11:00:26 2006 From: listservs at mac.com (listservs at mac.com) Date: Thu, 13 Jul 2006 11:00:26 -0400 Subject: [SciPy-dev] scipy svn build errors (fftpack) Message-ID: <2DF1CEB3-58EA-4355-895C-248BAD42E7DE@mac.com> SVN builds of scipy on OSX are failing. The errors occur with fftpack: creating build/lib.darwin-8.7.0-Power_Macintosh-2.4/scipy/fftpack /usr/local/bin/g77 -undefined dynamic_lookup -bundle build/ temp.darwin-8.7.0-Power_Macintosh-2.4/build/src.darwin-8.7.0- Power_Macintosh-2.4/Lib/fftpack/_fftpackmodule.o build/ temp.darwin-8.7.0-Power_Macintosh-2.4/Lib/fftpack/src/zfft.o build/ temp.darwin-8.7.0-Power_Macintosh-2.4/Lib/fftpack/src/drfft.o build/ temp.darwin-8.7.0-Power_Macintosh-2.4/Lib/fftpack/src/zrfft.o build/ temp.darwin-8.7.0-Power_Macintosh-2.4/Lib/fftpack/src/zfftnd.o build/ temp.darwin-8.7.0-Power_Macintosh-2.4/build/src.darwin-8.7.0- Power_Macintosh-2.4/fortranobject.o -L/usr/local/lib -L/usr/local/lib/ gcc/powerpc-apple-darwin6.8/3.4.2 -L../staticlibs -Lbuild/ temp.darwin-8.7.0-Power_Macintosh-2.4 -ldfftpack -lfftw3 -lg2c - lcc_dynamic -o build/lib.darwin-8.7.0-Power_Macintosh-2.4/scipy/ fftpack/_fftpack.so /usr/bin/ld: can't locate file for: -ldfftpack collect2: ld returned 1 exit status /usr/bin/ld: can't locate file for: -ldfftpack collect2: ld returned 1 exit status error: Command "/usr/local/bin/g77 -undefined dynamic_lookup -bundle build/temp.darwin-8.7.0-Power_Macintosh-2.4/build/src.darwin-8.7.0- Power_Macintosh-2.4/Lib/fftpack/_fftpackmodule.o build/ temp.darwin-8.7.0-Power_Macintosh-2.4/Lib/fftpack/src/zfft.o build/ temp.darwin-8.7.0-Power_Macintosh-2.4/Lib/fftpack/src/drfft.o build/ temp.darwin-8.7.0-Power_Macintosh-2.4/Lib/fftpack/src/zrfft.o build/ temp.darwin-8.7.0-Power_Macintosh-2.4/Lib/fftpack/src/zfftnd.o build/ 
temp.darwin-8.7.0-Power_Macintosh-2.4/build/src.darwin-8.7.0- Power_Macintosh-2.4/fortranobject.o -L/usr/local/lib -L/usr/local/lib/ gcc/powerpc-apple-darwin6.8/3.4.2 -L../staticlibs -Lbuild/ temp.darwin-8.7.0-Power_Macintosh-2.4 -ldfftpack -lfftw3 -lg2c - lcc_dynamic -o build/lib.darwin-8.7.0-Power_Macintosh-2.4/scipy/ fftpack/_fftpack.so" failed with exit status 1 These builds worked fine until a week or so ago. I have fftw3: fftw3_info: libraries fftw3 not found in /Library/Frameworks/Python.framework/ Versions/2.4/lib FOUND: libraries = ['fftw3'] library_dirs = ['/usr/local/lib'] define_macros = [('SCIPY_FFTW3_H', None)] include_dirs = ['/usr/local/include'] Any ideas regarding what can be done? Thanks, C. - -- Christopher Fonnesbeck + Atlanta, GA + fonnesbeck at mac.com + Contact me on AOL IM using email address From robert.kern at gmail.com Thu Jul 13 13:20:06 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 13 Jul 2006 12:20:06 -0500 Subject: [SciPy-dev] acceptable licenses for scipy In-Reply-To: <20060713105421.GA15355@mentat.za.net> References: <20060713105421.GA15355@mentat.za.net> Message-ID: <44B680C6.3040009@gmail.com> Stefan van der Walt wrote: > Hi all, > > When including source with scipy, which licenses are acceptable? I > know that the BSD-style licenses are good, but how about the Lesser > GPL etc.? No, we're still trying to keep scipy as a whole BSDish. If there is an LGPL library which you wish to provide, it would provide an excellent seed for Fernando's "scikits" namespace package. We would be happy to provide the infrastructure on projects.scipy.org. Of course, every time this topic comes up, I make the same offer, but so far, no one has done anything about it. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." 
-- Umberto Eco From stefan at sun.ac.za Thu Jul 13 15:19:48 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Thu, 13 Jul 2006 21:19:48 +0200 Subject: [SciPy-dev] acceptable licenses for scipy In-Reply-To: <44B680C6.3040009@gmail.com> References: <20060713105421.GA15355@mentat.za.net> <44B680C6.3040009@gmail.com> Message-ID: <20060713191948.GA30126@mentat.za.net> On Thu, Jul 13, 2006 at 12:20:06PM -0500, Robert Kern wrote: > Stefan van der Walt wrote: > > Hi all, > > > > When including source with scipy, which licenses are acceptable? I > > know that the BSD-style licenses are good, but how about the Lesser > > GPL etc.? > > No, we're still trying to keep scipy as a whole BSDish. > > If there is an LGPL library which you wish to provide, it would provide an > excellent seed for Fernando's "scikits" namespace package. We would be happy to > provide the infrastructure on projects.scipy.org. > > Of course, every time this topic comes up, I make the same offer, but so far, no > one has done anything about it. Robert Hetland's post got me thinking the other day. I would really like to have polygon manipulation routines in (or pluggable into) scipy. He provided a link to some C-code, which, after I spent a morning trying to get everything running smoothly with the C API, I wrapped with ctypes in half an hour (thanks, Albert!). Unfortunately, that code was only for point-in-polygon. The next obvious thing to include would be polygon clipping. There are a couple of packages out there, but their licenses are either unspecified, GPL or LGPL. I spent the whole afternoon googling for free polygon clipping routines, but to no avail (maybe I should just write them, I've found enough articles in the process). While the package is currently rather small, there might be room for a larger package, 'geometry'. And since licensing seems to be an issue, doing this outside (but closely involved with) scipy seems to be a good idea. 
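For reference, the point-in-polygon test mentioned above is small enough to sketch in pure Python with the standard ray-casting algorithm (a generic illustration, not the wrapped C code):

```python
def point_in_polygon(x, y, poly):
    """Ray-casting test: count crossings of a ray from (x, y) to +infinity.

    poly is a sequence of (x, y) vertices; an odd crossing count means
    the point lies inside.
    """
    inside = False
    j = len(poly) - 1
    for i in range(len(poly)):
        xi, yi = poly[i]
        xj, yj = poly[j]
        # Does edge (j, i) straddle the horizontal line through y, and
        # does the crossing point lie to the right of x?
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
```

Clipping is a considerably harder problem than containment, which may explain why freely licensed implementations are so scarce.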
So, how does a person go about creating a 'scikit'? At the moment, the source is formatted like a sandbox module, but I'd gladly rip its guts out to make it fit. Cheers Stéfan From Eric.Buehler at smiths-aerospace.com Thu Jul 13 15:24:36 2006 From: Eric.Buehler at smiths-aerospace.com (Buehler, Eric (AGRE)) Date: Thu, 13 Jul 2006 13:24:36 -0600 Subject: [SciPy-dev] Question about Kaiser Implementation in firwin Message-ID: Hello, I have been using the signal.firwin FIR design tool and I have a question about the implementation. Scipy-0.4.8 Numpy-0.9.6 I am generating a kaiser window with the following parameters: N = 128 Cutoff = ~4e7 width = .3 All of the special kaiser generation parameters in the function follow Oppenheim and Schafer well. Even the sinc function for linear phase follows exactly. However, the last line of the firwin function is causing me some heartburn. filter_design.py 1538 win = get_window(window,N,fftbins=1) 1539 alpha = N//2 1540 m = numpy.arange(0,N) 1541 h = win*special.sinc(cutoff*(m-alpha)) 1542 return h / sum(h) Line 1542 of filter_design.py, "return h / sum(h)", normalizes the function where it doesn't seem necessary, at least in the kaiser window case. Without the normalization, the kaiser window already returns a value of 1 at the zero frequency point. This normalization scales all of the data, making the window difficult to use in the frequency domain. Can someone point me to the rationale for this line? Looking at the code, this seems to be a pretty recent change (within the last year/year and a half). Thanks, Eric Buehler eric buehler smiths-aerospace com ****************************************** The information contained in, or attached to, this e-mail, may contain confidential information and is intended solely for the use of the individual or entity to whom they are addressed and may be subject to legal privilege. 
If you have received this e-mail in error you should notify the sender immediately by reply e-mail, delete the message from your system and notify your system manager. Please do not copy it for any purpose, or disclose its contents to any other person. The views or opinions presented in this e-mail are solely those of the author and do not necessarily represent those of the company. The recipient should check this e-mail and any attachments for the presence of viruses. The company accepts no liability for any damage caused, directly or indirectly, by any virus transmitted in this email. ****************************************** From gnchen at cortechs.net Thu Jul 13 15:40:01 2006 From: gnchen at cortechs.net (Gennan Chen) Date: Thu, 13 Jul 2006 12:40:01 -0700 Subject: [SciPy-dev] scipy compilation error Message-ID: <1152819601.13467.1.camel@t43p-1> Hi! I tried to compile scipy from svn repository on FC5. But I got following error: g77:f77: Lib/fftpack/dfftpack/dffti.f g77:f77: Lib/fftpack/dfftpack/zfftb1.f /tmp/ccB1Yghk.s: Assembler messages: /tmp/ccB1Yghk.s:589: Error: suffix or operands invalid for `movd' /tmp/ccB1Yghk.s:2948: Error: suffix or operands invalid for `movd' /tmp/ccB1Yghk.s: Assembler messages: /tmp/ccB1Yghk.s:589: Error: suffix or operands invalid for `movd' /tmp/ccB1Yghk.s:2948: Error: suffix or operands invalid for `movd' error: Command "/usr/bin/g77 -g -Wall -fno-second-underscore -fPIC -O2 -funroll-loops -march=pentium3 -mmmx -msse2 -msse -fomit-frame-pointer -malign-double -c -c Lib/fftpack/dfftpack/zfftb1.f -o build/temp.linux-i686-2.4/Lib/fftpack/dfftpack/zfftb1.o" failed with exit status 1 Anyone know how to fix it?? -- Gen-Nan Chen -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bhendrix at enthought.com Thu Jul 13 18:48:45 2006 From: bhendrix at enthought.com (Bryce Hendrix) Date: Thu, 13 Jul 2006 17:48:45 -0500 Subject: [SciPy-dev] ANN: Python Enthought Edition Version 1.0.0.beta4 Released Message-ID: <44B6CDCD.7010206@enthought.com> Enthought is pleased to announce the release of Python Enthought Edition Version 1.0.0.beta4 (http://code.enthought.com/enthon/) -- a python distribution for Windows. 1.0.0.beta4 Release Notes: -------------------- There are two known issues: * No documentation is included due to problems with the chm. Instead, all documentation for this beta is available on the web at http://code.enthought.com/enthon/docs. The official 1.0.0 will include a chm containing all of our docs again. * IPython may cause problems when starting the first time if a previous version of IPython was run. If you see "WARNING: could not import user config", follow the directions which follow the warning. Unless something terrible is discovered between now and the next release, we intend to release 1.0.0 on July 25th. This release includes version 1.0.9 of the Enthought Tool Suite (ETS) Package and bug fixes -- you can look at the release notes for this ETS version here: http://svn.enthought.com/downloads/enthought/changelog-release.1.0.9.html About Python Enthought Edition: ------------------------------- Python 2.4.3, Enthought Edition is a kitchen-sink-included Python distribution for Windows including the following packages out of the box: Numpy SciPy IPython Enthought Tool Suite wxPython PIL mingw f2py MayaVi Scientific Python VTK and many more... More information is available about all Open Source code written and released by Enthought, Inc. at http://code.enthought.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From schofield at ftw.at Fri Jul 14 02:22:16 2006 From: schofield at ftw.at (Ed Schofield) Date: Fri, 14 Jul 2006 08:22:16 +0200 Subject: [SciPy-dev] Scipy-0.4.9 and BLAS? srotmg_ erros In-Reply-To: <20060706140712.d1tqfepuok484ocs@imap.broad.mit.edu> References: <20060706140712.d1tqfepuok484ocs@imap.broad.mit.edu> Message-ID: <62C94E54-E298-46FF-825A-9DDB3C5806E4@ftw.at> On 06/07/2006, at 8:07 PM, clarence at broad.mit.edu wrote: > Hopefully I've subscribed to the correct list for this inquiry. I'm > currently attempting to get scipy-0.4.9 built/installed correctly with > numpy-0.9.8 and the latest BLAS/LAPACK libs to be used on Linux. > Everything builds and installs fine without errors (seemingly), and I > encounter the problem when trying to do a python scipy.test(level=1): > > import integrate -> failed: scipy/linalg/fblas.so: undefined > symbol: srotmg_ > import signal -> failed: scipy/linalg/fblas.so: undefined symbol: > srotmg_ > import special -> failed: scipy/linalg/fblas.so: undefined symbol: > srotmg_ > import lib.blas -> failed: scipy/lib/blas/fblas.so: undefined symbol: > srotmg_ > import linalg -> failed: scipy/linalg/fblas.so: undefined symbol: > srotmg_ > import maxentropy -> failed: scipy/linalg/fblas.so: undefined symbol: > srotmg_ > import stats -> failed: scipy/linalg/fblas.so: undefined symbol: > srotmg_ > > I've seen that this is a known bug and that I should get the full BLAS > implementation, which I have and was using anyways. I don't want to > use the ATLAS option if possible, and my BLAS/LAPACK libs did build > without a hitch. Are there any other checks I can do to verify that > scipy built against my libs correctly? Unfortunately this is a time > sensitive issue for my team and any clue you can provide is > appreciated. Hi Clarence, Check out this thread: http://www.scipy.net/pipermail/scipy-user/2002-May/000434.html In summary: BLAS provides srotmg but LAPACK doesn't. So you need to download and build BLAS from Netlib. 
There are instructions for this at http://new.scipy.org/Wiki/Installing_SciPy/BuildingGeneral. -- Ed From david at ar.media.kyoto-u.ac.jp Fri Jul 14 07:39:51 2006 From: david at ar.media.kyoto-u.ac.jp (David Cournapeau) Date: Fri, 14 Jul 2006 20:39:51 +0900 Subject: [SciPy-dev] [ANN] pyem 0.4.1, a numpy module for Gaussian Mixture Models Message-ID: <44B78287.503@ar.media.kyoto-u.ac.jp> Hi there, a few weeks ago, I submitted a small module for Expectation Maximization for Gaussian Mixture Models. I had the time recently to improve it significantly, and I've just made available a second public release: http://www.ar.media.kyoto-u.ac.jp/members/david/pyem-0.4.1.tar.gz Example of training: http://www.ar.media.kyoto-u.ac.jp/members/david/example2.png The major user-visible changes are the use of distutils for packaging, and speed improvements for diagonal models. Now, on a Pentium M @ 1.2 GHz, 10 iterations of EM take around 6 seconds for a 20-dimension, diagonal model of 20 components with 10^4 points for training, including the k-means initialization (pyrex version). This should make it usable for applications such as speaker recognition, etc... More speed improvements (particularly full covariance matrices) are to be expected once I manage to include my separate project which implements the whole EM for GMM in C. Examples are available in the module, and several parts of the module can be executed directly to see some basic uses (including plotting). I tried several tools which I am not very familiar with yet (distutils, pyrex), so there may be some problems depending on the configuration; I tested the package on linux x86 (ubuntu dapper), one machine with and one without pyrex. 
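For readers who have not met the algorithm, a single EM iteration for a diagonal-covariance Gaussian mixture of the kind pyem trains can be sketched in a few lines of numpy. This is a generic illustration of the update equations, not pyem's actual API:

```python
import numpy as np

def em_step(x, w, mu, va):
    """One EM iteration for a diagonal-covariance Gaussian mixture.

    x: (n, d) data; w: (k,) weights; mu: (k, d) means; va: (k, d) variances.
    """
    n, d = x.shape
    k = len(w)
    # E-step: responsibilities, computed in log space for stability.
    log_r = np.empty((n, k))
    for j in range(k):
        log_r[:, j] = (np.log(w[j])
                       - 0.5 * np.sum(np.log(2.0 * np.pi * va[j]))
                       - 0.5 * np.sum((x - mu[j]) ** 2 / va[j], axis=1))
    log_r -= log_r.max(axis=1, keepdims=True)
    r = np.exp(log_r)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means and variances from the responsibilities.
    nk = r.sum(axis=0)
    w = nk / n
    mu = (r.T @ x) / nk[:, None]
    va = (r.T @ (x ** 2)) / nk[:, None] - mu ** 2
    return w, mu, va
```

Iterating em_step to convergence recovers the weights, means and per-dimension variances; the log-space E-step avoids underflow in high dimensions.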
As always, comments, suggestions, bug reports are welcome, David From nmarais at sun.ac.za Fri Jul 14 11:47:34 2006 From: nmarais at sun.ac.za (Neilen Marais) Date: Fri, 14 Jul 2006 17:47:34 +0200 Subject: [SciPy-dev] ARPACK wrappers Message-ID: <1152892054.4632.9.camel@localhost.localdomain> Hi I saw some mention of this in scipy-user a while ago. I'm quite motivated to attempt the wrapping of ARPACK, since I'm driven by personal need :) Anyway, I'd like to hear thoughts on how to go about it. My current (limited) thinking is to use f2py for the ARPACK "utility" functions, and then to port the example "driver" functions to python, calling the wrapped ARPACK utility functions. The pythonised driver functions should be the interface that the user sees, and should allow the specification of matrix solution strategy (if appropriate) etc. Unfortunately I'm going to start 6 weeks of RealWork (tm) on Monday, so I probably won't spend much time on the project till end August, but I think we should at least get some discussion rolling. Caveat: I have squat experience with ARPACK, so this will be my first use of it :) Regards Neilen From r.demaria at tiscali.it Fri Jul 14 11:56:32 2006 From: r.demaria at tiscali.it (Riccardo de Maria) Date: Fri, 14 Jul 2006 17:56:32 +0200 Subject: [SciPy-dev] ARPACK wrappers In-Reply-To: <1152892054.4632.9.camel@localhost.localdomain> References: <1152892054.4632.9.camel@localhost.localdomain> Message-ID: <20060714155632.GA19965@localhost.localdomain> Hello, I'm very interested in having an eigenvalue solver for sparse matrices, but I've never used the ARPACK library. > My current (limited) thinking is to use f2py for the ARPACK "utility" > functions, and then to port the example "driver" functions to python, > calling the wrapped ARPACK utility functions. The pythonised driver > functions should be the interface that the user sees, and should allow > the specification of matrix solution strategy (if appropriate) etc. 
As far as I remember, ARPACK needs the definition of a vector matrix multiplication routine for handling the matrix format. I don't know if scipy already has a set of low-level vector matrix multiplication routines for each type of matrix storage. If they exist I think most of the job is done. Any ideas? Riccardo From travis at enthought.com Fri Jul 14 13:19:11 2006 From: travis at enthought.com (Travis N. Vaught) Date: Fri, 14 Jul 2006 12:19:11 -0500 Subject: [SciPy-dev] ANN: SciPy 2006 Schedule/Early Registration Reminder Message-ID: <44B7D20F.8060701@enthought.com> Greetings, The SciPy 2006 Conference (http://www.scipy.org/SciPy2006) is August 17-18 this year. The deadline for early registration is *today*, July 14, 2006. The registration price will increase from $100 to $150 after today. You can register online at https://www.enthought.com/scipy06. We invite everyone attending the conference to also attend the Coding Sprints on Monday-Tuesday, August 14-15 and also the Tutorials Wednesday, August 16. There is no additional charge for these sessions. A *tentative* schedule of talks has now been posted. http://www.scipy.org/SciPy2006/Schedule We look forward to seeing you at CalTech in August! Best, Travis From oliphant.travis at ieee.org Sun Jul 16 00:50:56 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Sat, 15 Jul 2006 22:50:56 -0600 Subject: [SciPy-dev] Question about Kaiser Implementation in firwin In-Reply-To: References: Message-ID: <44B9C5B0.1050201@ieee.org> Buehler, Eric (AGRE) wrote: > Hello, > > However, the last line of the firwin function is causing me some > heartburn. > filter_design.py > > 1538 win = get_window(window,N,fftbins=1) > 1539 alpha = N//2 > 1540 m = numpy.arange(0,N) > 1541 h = win*special.sinc(cutoff*(m-alpha)) > 1542 return h / sum(h) > > Line 1542 of filter_design.py, "return h / sum(h)", normalizes the > function where it doesn't seem necessary, at least in the kaiser window > case. 
Without the normalization, the kaiser window already returns a > value of 1 at the zero frequency point. This normalization scales all > of the data, making the window difficult to use in the frequency domain. > > Can someone point me to the rationale for this line? Looking at the > code, this seems to be a pretty recent change (within the last year/year > and a half). > > I'm not aware of whether or when the change was made. Was there a time when firwin did not have this normalization? The normalization is done so that the resulting filter has a 0 dB gain at DC (which is the center of the pass-band). In other words, fft(h)[0] is approximately equal to 1. This is usually what is desired as firwin returns "time-domain" filter coefficients. The return value of the function is not designed for use in the "frequency"-domain. I'm not even sure what you mean by that in this context. The intended usage of the result of firwin is in a convolution: convolve(h, ) -Travis From oliphant.travis at ieee.org Mon Jul 17 15:41:49 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 17 Jul 2006 13:41:49 -0600 Subject: [SciPy-dev] 64-bit run request Message-ID: <44BBE7FD.8080108@ieee.org> Another request to run the latest scipy SVN on a 64-bit system and send me the output. -Travis From oliphant.travis at ieee.org Mon Jul 17 15:43:30 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Mon, 17 Jul 2006 13:43:30 -0600 Subject: [SciPy-dev] 64-bit run request Message-ID: <44BBE862.8090904@ieee.org> Please just run the following script using latest SVN of scipy on a 64-bit system. -Travis -------------- next part -------------- A non-text attachment was scrubbed... Name: testcob.py Type: text/x-python Size: 585 bytes Desc: not available URL: From oliphant.travis at ieee.org Mon Jul 17 20:19:01 2006 From: oliphant.travis at ieee.org (Travis E.
Oliphant) Date: Mon, 17 Jul 2006 18:19:01 -0600 Subject: [SciPy-dev] Cobyla test fixed Message-ID: <44BC28F5.8000200@ieee.org> I think we tracked down the problem in cobyla. Apparently, a certain operation deep in the bowels of the Fortran code was producing a very small negative number on 32-bit platforms but 0.0d0 on 64-bit platforms. Different behavior occurred based on whether the number was less than zero. On 32-bit platforms the different behavior ensued; on 64-bit platforms it did not. I changed the code in cobyla/trstlp.f so that it checks for a number that is less than eps; currently eps is hard-coded to -2.2e-16, but this is a hack. Still, the hack works and allows all scipy tests to pass on 32-bit systems (and hopefully 64-bit systems as well). There are a lot more print statements in the code now, but they are all embedded in a test for iprint to be equal to 3 so they don't run by default (the tests will cause the code to run very slightly slower, however). Thanks to those with 64-bit systems who helped in debugging. -Travis From david.huard at gmail.com Thu Jul 20 10:39:35 2006 From: david.huard at gmail.com (David Huard) Date: Thu, 20 Jul 2006 10:39:35 -0400 Subject: [SciPy-dev] What is this dispatch thing ? Message-ID: <91cf711d0607200739s38be100dh98956b5d6b949d54@mail.gmail.com> Hi, I just updated scipy from svn and stats.linregress(x,y) returns /usr/lib/python2.4/site-packages/stats.py in __call__(self, arg1, *args, **kw) 244 def __call__(self, arg1, *args, **kw): 245 if type(arg1) not in self._types: --> 246 raise TypeError, "don't know how to dispatch %s arguments" % type(arg1) 247 return apply(self._dispatch[type(arg1)], (arg1,) + args, kw) 248 TypeError: don't know how to dispatch arguments Also, in ipython, linregress? returns a doc about the dispatch class. Is this intended ? David -------------- next part -------------- An HTML attachment was scrubbed...
URL: From robert.kern at gmail.com Thu Jul 20 12:26:06 2006 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 20 Jul 2006 11:26:06 -0500 Subject: [SciPy-dev] What is this dispatch thing ? In-Reply-To: <91cf711d0607200739s38be100dh98956b5d6b949d54@mail.gmail.com> References: <91cf711d0607200739s38be100dh98956b5d6b949d54@mail.gmail.com> Message-ID: <44BFAE9E.60407@gmail.com> David Huard wrote: > Hi, > I just updated scipy from svn and stats.linregress(x,y) returns > > /usr/lib/python2.4/site-packages/stats.py in __call__(self, arg1, *args, > **kw) > 244 def __call__(self, arg1, *args, **kw): > 245 if type(arg1) not in self._types: > --> 246 raise TypeError, "don't know how to dispatch %s > arguments" % type(arg1) > 247 return apply(self._dispatch[type(arg1)], (arg1,) + args, kw) > 248 > > TypeError: don't know how to dispatch arguments > > Also, in ipython, linregress? returns a doc about the dispatch class. Is > this intended ? That version of stats.py is not from scipy. That version tried to handle lists as well as arrays. The functions for each type were wrapped by a dispatching class that would call the appropriate version depending on the argument type. That version of stats.py was written for Numeric and cannot deal with numpy arrays. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From schofield at ftw.at Fri Jul 21 04:03:18 2006 From: schofield at ftw.at (Ed Schofield) Date: Fri, 21 Jul 2006 10:03:18 +0200 Subject: [SciPy-dev] Next SciPy release Message-ID: Hi all, I'd like to make a new SciPy release early next week to sync with NumPy 1.0b1. It would have been nice to release SciPy 0.5.0 to coincide with NumPy 1.0 (final), but I suppose the next release should be numbered 0.5.0, following Andrew Straw's suggestion in http://projects.scipy.org/scipy/numpy/ticket/170. 
We have a growing list of outstanding tickets. I'll go through these, but I'd appreciate any help here ;) -- Ed From strawman at astraw.com Fri Jul 21 05:24:32 2006 From: strawman at astraw.com (Andrew Straw) Date: Fri, 21 Jul 2006 02:24:32 -0700 Subject: [SciPy-dev] Next SciPy release In-Reply-To: References: Message-ID: <7951BBAF-6452-461C-96B6-C8675136272F@astraw.com> On Jul 21, 2006, at 1:03 AM, Ed Schofield wrote: > > Hi all, > > I'd like to make a new SciPy release early next week to sync with > NumPy 1.0b1. Great! > It would have been nice to release SciPy 0.5.0 to > coincide with NumPy 1.0 (final), but I suppose the next release > should be numbered 0.5.0, following Andrew Straw's suggestion in > http://projects.scipy.org/scipy/numpy/ticket/170. After re-reading that sentence a few times, it's still not clear to me exactly what you mean. Let me skip to my suggestion, which actually seems to be what you're rejecting:

numpy      scipy
0.9.8      0.4.9      # release (already done)
0.9.9.x    0.5.0.x    # svn revision x (already done)
1.0b1      0.5.0b1    # release
1.0b1.x    0.5.0b1.x  # svn revision x
1.0b2      0.5.0b2    # release
1.0b2.x    0.5.0b2.x  # svn revision x
1.0        0.5.0
1.0.x      0.5.0.x
...

Technically, yes, this violates the principle that the temporal sequence of version numbers should also sort with setuptools or debian's dpkg or whatever. (The scipy 0.5.0.x series, numbered after the 0.4.9 svn versions but temporally preceding the 0.5.0b series, would sort after the 0.5.0b series.) However, I'd rather deal with that than break the 2:1 ratio that has happened with the release version numbers, which would be particularly nice to have at 1.0/0.5. If we stick to the above suggestion, future releases (and interim svn versions) should sort in temporal order. As a digression (since discussing version numbering schemes is so productive), numpy is presumably going to have a slowed release cycle after 1.0, and thus we probably won't be able to keep scipy in such version-number lock(half)step.
(Other ways to keep the lockstep exist but seem less desirable IMO: few scipy releases will get made, numpy version numbers will have to jump to allow scipy some room to grow.) Back to the first issue -- one could argue that we might as well break the lockstep between numpy/scipy version numbers now, as I think you might be. My opinion leans towards opposition because the releases are so close to some nice, round easy-to-remember numbers. Anyway, debian's dpkg has ways to force version numbers to sort properly, even if the release numbers don't. I don't know if setuptools does, but I presume you can at least force things to work somehow. Finally, this is really only an issue for people packaging stuff out of svn, who basically know they are living on the bleeding edge to begin with, and I hope no one spends as much time on this issue as I have! :) -- Andrew PS I'm posting this email back into Trac so it doesn't get forgotten when we get closer to future releases. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Fri Jul 21 08:21:40 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 21 Jul 2006 14:21:40 +0200 Subject: [SciPy-dev] Next SciPy release In-Reply-To: References: Message-ID: <44C0C6D4.9030007@iam.uni-stuttgart.de> Ed Schofield wrote: > Hi all, > > I'd like to make a new SciPy release early next week to sync with > NumPy 1.0b1. It would have been nice to release SciPy 0.5.0 to > coincide with NumPy 1.0 (final), but I suppose the next release > should be numbered 0.5.0, following Andrew Straw's suggestion in > http://projects.scipy.org/scipy/numpy/ticket/170. > > We have a growing list of outstanding tickets. 
I'll go through these, > but I'd appreciate any help here ;) > > -- Ed > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > Hi Ed, Please can you try to confirm the results wrt my bug report. http://projects.scipy.org/scipy/scipy/ticket/223 Thanks in advance. Nils From dvp at mwl.MIT.EDU Fri Jul 21 08:23:39 2006 From: dvp at mwl.MIT.EDU (Dennis V. Perepelitsa) Date: Fri, 21 Jul 2006 08:23:39 -0400 Subject: [SciPy-dev] Unit Tests for Probabilistic Functions Message-ID: <1153484619.19448.3.camel@m66-080-12.mit.edu> Hi, all. I'm in the process of writing unit tests for a hidden markov model module I've been hacking on. I want to ensure that my implementation generates sequences of hidden and observed states with the correct probabilities. How can I test for this? More generally, what are some good methods to use when writing unit tests for probabilistic functions? Can we do any better than "run it a bunch of times and make sure the true mean is within a 98% (or higher/lower) confidence interval of the sample mean"? I'd rather not devise a unit test that is guaranteed to fail 2% of the time. Dennis V. Perepelitsa Picower Institute for Learning and Memory MIT, Class of 2008 - Course VIII, XVIII From david.huard at gmail.com Fri Jul 21 08:48:18 2006 From: david.huard at gmail.com (David Huard) Date: Fri, 21 Jul 2006 08:48:18 -0400 Subject: [SciPy-dev] What is this dispatch thing ? In-Reply-To: <44BFAE9E.60407@gmail.com> References: <91cf711d0607200739s38be100dh98956b5d6b949d54@mail.gmail.com> <44BFAE9E.60407@gmail.com> Message-ID: <91cf711d0607210548o256de6e9w67f6d028a5105ecf@mail.gmail.com> Thanks a lot. Sorry for the bother. 
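On the unit-testing question above, one common pattern avoids tests that fail some fixed fraction of the time: seed the generator so every run draws exactly the same sample, then compare the observed state frequencies against the expected distribution with a goodness-of-fit statistic. A minimal Python sketch of the idea (the `sample_states` function is a hypothetical stand-in for the HMM sampler, not code from the module under discussion):

```python
import random
from collections import Counter

def sample_states(n, probs, seed=0):
    # Hypothetical stand-in for the sampler under test: draws n states
    # from the categorical distribution `probs` ({state: probability}).
    rng = random.Random(seed)          # fixed seed -> deterministic test
    states = sorted(probs)
    weights = [probs[s] for s in states]
    return rng.choices(states, weights=weights, k=n)

def chi_square_stat(samples, probs):
    # Pearson chi-square statistic comparing observed counts with the
    # counts expected under `probs`.
    n = len(samples)
    counts = Counter(samples)
    return sum((counts[s] - n * p) ** 2 / (n * p) for s, p in probs.items())

probs = {"A": 0.5, "B": 0.3, "C": 0.2}
samples = sample_states(10000, probs, seed=42)
stat = chi_square_stat(samples, probs)
# A real unit test would assert that `stat` stays below the chi-square
# critical value (about 9.21 for 2 degrees of freedom at the 99% level).
# Because the seed is fixed, the draw never changes, so the test either
# always passes or always fails; re-pick the seed once if it fails.
print(stat)
```

Since the sample is frozen by the seed, the test is deterministic rather than "guaranteed to fail 2% of the time"; the trade-off is that it only guards against regressions relative to the frozen draw.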
David 2006/7/20, Robert Kern : > > David Huard wrote: > > Hi, > > I just updated scipy from svn and stats.linregress(x,y) returns > > > > /usr/lib/python2.4/site-packages/stats.py in __call__(self, arg1, *args, > > **kw) > > 244 def __call__(self, arg1, *args, **kw): > > 245 if type(arg1) not in self._types: > > --> 246 raise TypeError, "don't know how to dispatch %s > > arguments" % type(arg1) > > 247 return apply(self._dispatch[type(arg1)], (arg1,) + args, > kw) > > 248 > > > > TypeError: don't know how to dispatch arguments > > > > Also, in ipython, linregress? returns a doc about the dispatch class. Is > > this intended ? > > That version of stats.py is not from scipy. That version tried to handle > lists > as well as arrays. The functions for each type were wrapped by a > dispatching > class that would call the appropriate version depending on the argument > type. > That version of stats.py was written for Numeric and cannot deal with > numpy arrays. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma > that is made terrible by our own mad attempt to interpret it as though > it had > an underlying truth." > -- Umberto Eco > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ravi at ati.com Fri Jul 21 10:27:37 2006 From: ravi at ati.com (Ravikiran Rajagopal) Date: Fri, 21 Jul 2006 10:27:37 -0400 Subject: [SciPy-dev] Next SciPy release In-Reply-To: <7951BBAF-6452-461C-96B6-C8675136272F@astraw.com> References: <7951BBAF-6452-461C-96B6-C8675136272F@astraw.com> Message-ID: <200607211027.37655.ravi@ati.com> On Friday 21 July 2006 05:24, Andrew Straw wrote: > Technically, yes, this violates the principle that the temporal > sequence of version numbers should also sort with setuptools or > debian's dpkg or whatever.
(scipy numbered 0.5.0.x after the 0.4.9 > svn versions but that temporally precedes the 0.5.0b series would > sort after the 0.5.0b series). However, I'd rather deal with that > than break the 2:1 ratio that has happened with the release version > numbers, which would be particularly nice to have at 1.0/0.5. Please, no. There really is no reason to keep the 2:1 ratio; having easy to remember numbers is really pointless since most users just want the latest version. In particular, the actual version number roundness matters probably only to the people on this list and we all use svn anyway. Violating version numbering standards makes life a *lot* harder for packagers, even if it is only for a few months. Please follow the scheme recognized by virtually every tool in *nix world; it would be an administration nightmare otherwise. I, for example, am slowly sneaking in scipy as an alternative to commercial math packages. As a result, I support scipy (and some other mathematical or statistical software) installations for many users and it is hard enough to ensure that everyone is using the same version without resorting to extra scripts just for scipy. Regards, Ravi From dd55 at cornell.edu Fri Jul 21 10:41:32 2006 From: dd55 at cornell.edu (Darren Dale) Date: Fri, 21 Jul 2006 10:41:32 -0400 Subject: [SciPy-dev] Next SciPy release In-Reply-To: <200607211027.37655.ravi@ati.com> References: <7951BBAF-6452-461C-96B6-C8675136272F@astraw.com> <200607211027.37655.ravi@ati.com> Message-ID: <200607211041.32441.dd55@cornell.edu> On Friday 21 July 2006 10:27, Ravikiran Rajagopal wrote: > On Friday 21 July 2006 05:24, Andrew Straw wrote: > > Technically, yes, this violates the principle that the temporal > > sequence of version numbers should also sort with setuptools or > > debian's dpkg or whatever. (scipy numbered 0.5.0.x after the 0.4.9 > > svn versions but that temporally precedes the 0.5.0b series would > > sort after the 0.5.0b series).
However, I'd rather deal with that > > than break the 2:1 ratio that has happened with the release version > > numbers, which would be particularly nice to have at 1.0/0.5. > > Please, no. I agree that breaking from standard versioning practice in order to maintain the 2:1 ratio would be a mistake. You suggested the ratio would be broken soon enough after the release of numpy-1.0 anyway. Just my opinion. Darren From schofield at ftw.at Mon Jul 24 18:48:38 2006 From: schofield at ftw.at (Ed Schofield) Date: Tue, 25 Jul 2006 00:48:38 +0200 Subject: [SciPy-dev] Next SciPy release In-Reply-To: <7951BBAF-6452-461C-96B6-C8675136272F@astraw.com> References: <7951BBAF-6452-461C-96B6-C8675136272F@astraw.com> Message-ID: <64690B84-C5C3-4D99-A8F6-7B61F563F4CC@ftw.at> On 21/07/2006, at 11:24 AM, Andrew Straw wrote: > > On Jul 21, 2006, at 1:03 AM, Ed Schofield wrote: > >> >> Hi all, >> >> I'd like to make a new SciPy release early next week to sync with >> NumPy 1.0b1. > > Great! > >> It would have been nice to release SciPy 0.5.0 to >> coincide with NumPy 1.0 (final), but I suppose the next release >> should be numbered 0.5.0, following Andrew Straw's suggestion in >> http://projects.scipy.org/scipy/numpy/ticket/170. > > After re-reading that sentence a few times, it's still not clear to > me exactly what you mean. Let me skip to my suggestion, which > actually seems to be what you're rejecting: Oops, sorry if my post was confusing. I think your suggestion is a great idea. I didn't really know whether to bump the SVN version numbers before or after the previous releases, but your suggestion makes sense. In this post I was just thinking aloud -- and I suppose hoping for some discussion on the version numbers of releases ;) [Mental note: confusing posts are good for generating discussion] After reading Ravi and Darren's posts I agree that the nice 2:1 ratio should die.
This happened by accident anyway, and doesn't really make sense while NumPy is in feature freeze but SciPy isn't. So under this scheme the next versions would be 0.5.0, then 0.5.0.x (svn), then 0.5.1. -- Ed -------------- next part -------------- An HTML attachment was scrubbed... URL: From schofield at ftw.at Mon Jul 24 18:49:51 2006 From: schofield at ftw.at (Ed Schofield) Date: Tue, 25 Jul 2006 00:49:51 +0200 Subject: [SciPy-dev] Next SciPy release In-Reply-To: <44C0C6D4.9030007@iam.uni-stuttgart.de> References: <44C0C6D4.9030007@iam.uni-stuttgart.de> Message-ID: On 21/07/2006, at 2:21 PM, Nils Wagner wrote: > Ed Schofield wrote: >> Hi all, >> >> I'd like to make a new SciPy release early next week to sync with >> NumPy 1.0b1. It would have been nice to release SciPy 0.5.0 to >> coincide with NumPy 1.0 (final), but I suppose the next release >> should be numbered 0.5.0, following Andrew Straw's suggestion in >> http://projects.scipy.org/scipy/numpy/ticket/170. >> >> We have a growing list of outstanding tickets. I'll go through these, >> but I'd appreciate any help here ;) >> >> -- Ed >> >> _______________________________________________ >> Scipy-dev mailing list >> Scipy-dev at scipy.org >> http://projects.scipy.org/mailman/listinfo/scipy-dev >> > > > Hi Ed, > > Please can you try to confirm the results wrt my bug report. > > http://projects.scipy.org/scipy/scipy/ticket/223 > > Thanks in advance. Hi Nils, I'll take a look at it tomorrow. Best wishes, Ed From nwagner at iam.uni-stuttgart.de Tue Jul 25 04:53:03 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Tue, 25 Jul 2006 10:53:03 +0200 Subject: [SciPy-dev] Next SciPy release In-Reply-To: References: <44C0C6D4.9030007@iam.uni-stuttgart.de> Message-ID: On Tue, 25 Jul 2006 00:49:51 +0200 Ed Schofield wrote: > > On 21/07/2006, at 2:21 PM, Nils Wagner wrote: > >> Ed Schofield wrote: >>> Hi all, >>> >>> I'd like to make a new SciPy release early next week to >>>sync with >>> NumPy 1.0b1. 
It would have been nice to release SciPy >>>0.5.0 to >>> coincide with NumPy 1.0 (final), but I suppose the next >>>release >>> should be numbered 0.5.0, following Andrew Straw's >>>suggestion in >>> http://projects.scipy.org/scipy/numpy/ticket/170. >>> >>> We have a growing list of outstanding tickets. I'll go >>>through these, >>> but I'd appreciate any help here ;) >>> >>> -- Ed >>> >>> _______________________________________________ >>> Scipy-dev mailing list >>> Scipy-dev at scipy.org >>> http://projects.scipy.org/mailman/listinfo/scipy-dev >>> >> >> >> Hi Ed, >> >> Please can you try to confirm the results wrt my bug >>report. >> >> http://projects.scipy.org/scipy/scipy/ticket/223 >> >> Thanks in advance. > > Hi Nils, > > I'll take a look at it tomorrow. > > Best wishes, > Ed > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev Hi Ed, I am curious about it. As far as I remember the order of the matrix (N at line 30 in test*.py) can be very small. Actually I have no idea why I get different results on 32/64 bit systems. Cheers, Nils From prabhu at aero.iitb.ac.in Wed Jul 26 11:51:25 2006 From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran) Date: Wed, 26 Jul 2006 21:21:25 +0530 Subject: [SciPy-dev] Weave support for recent SWIG releases (patch) Message-ID: <17607.36733.573172.910350@prpc.aero.iitb.ac.in> Hi, I've submitted a ticket and patch to get weave working with SWIG (from CVS) wrapped objects. Versions of SWIG before 1.3.28 should continue to work. The patch and ticket are available here: http://projects.scipy.org/scipy/scipy/ticket/235 Could someone please take a look and apply the patch. Thanks. 
cheers, prabhu From oliphant.travis at ieee.org Wed Jul 26 14:32:01 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Wed, 26 Jul 2006 12:32:01 -0600 Subject: [SciPy-dev] Moving stsci into the main scipy library Message-ID: <44C7B521.8070706@ieee.org> I'd like to propose moving the stsci package (which contains the convolve and image sub-packages from numarray) into SciPy from the sandbox. These packages duplicate some of the features available in other packages and a better solution might be to merge the functionality into other packages. However, this will take work and time. Right now, my main concern is giving numarray users who depended on those packages an immediate working mechanism for using them. The ndimage package is already in SciPy. Creating an stsci package will allow numarray converts to install scipy to get access to their familiar code. Users not afraid of a compile could check-out just these packages and install them. Users who rely on binary installations will want the stsci package to get built and that means it needs to be in the main library. -Travis From stefan at sun.ac.za Wed Jul 26 17:59:14 2006 From: stefan at sun.ac.za (Stefan van der Walt) Date: Wed, 26 Jul 2006 23:59:14 +0200 Subject: [SciPy-dev] Weave support for recent SWIG releases (patch) In-Reply-To: <17607.36733.573172.910350@prpc.aero.iitb.ac.in> References: <17607.36733.573172.910350@prpc.aero.iitb.ac.in> Message-ID: <20060726215913.GB6338@mentat.za.net> On Wed, Jul 26, 2006 at 09:21:25PM +0530, Prabhu Ramachandran wrote: > Hi, > > I've submitted a ticket and patch to get weave working with SWIG (from > CVS) wrapped objects. Versions of SWIG before 1.3.28 should continue > to work. The patch and ticket are available here: > > http://projects.scipy.org/scipy/scipy/ticket/235 Older versions of SWIG seem to work fine. Applied in r2137. Thanks, Prabhu! 
Regards Stéfan From prabhu at aero.iitb.ac.in Wed Jul 26 22:46:20 2006 From: prabhu at aero.iitb.ac.in (Prabhu Ramachandran) Date: Thu, 27 Jul 2006 08:16:20 +0530 Subject: [SciPy-dev] Weave support for recent SWIG releases (patch) In-Reply-To: <20060726215913.GB6338@mentat.za.net> References: <17607.36733.573172.910350@prpc.aero.iitb.ac.in> <20060726215913.GB6338@mentat.za.net> Message-ID: <17608.10492.585917.676648@prpc.aero.iitb.ac.in> >>>>> "Stefan" == Stefan van der Walt writes: Stefan> On Wed, Jul 26, 2006 at 09:21:25PM +0530, Prabhu Stefan> Ramachandran wrote: >> Hi, >> >> I've submitted a ticket and patch to get weave working with >> SWIG (from CVS) wrapped objects. Versions of SWIG before >> 1.3.28 should continue to work. The patch and ticket are >> available here: >> >> http://projects.scipy.org/scipy/scipy/ticket/235 Stefan> Older versions of SWIG seem to work fine. Applied in Stefan> r2137. Thanks, Prabhu! Thanks! Just a note. SWIG broke the runtime version variable between 1.3.28 and CVS (1.3.30), i.e. they updated the object layout and forgot to increment the version. This means weave will not work correctly with swig versions 1.3.28 and 1.3.29. cheers, prabhu From ndbecker2 at gmail.com Thu Jul 27 10:27:17 2006 From: ndbecker2 at gmail.com (Neal Becker) Date: Thu, 27 Jul 2006 10:27:17 -0400 Subject: [SciPy-dev] building numpy-1.0b1 on linux x86_64 Message-ID: I'm trying to make an RPM for fedora 5 (x86_64). lapack and blas are in /usr/lib64. I've used this in the past: env CFLAGS="$RPM_OPT_FLAGS" BLAS=%{_libdir} LAPACK=%{_libdir} python setup.py config_fc --fcompiler=gnu95 build That is, pass BLAS and LAPACK in env during build. I noticed comments about site.cfg. I wonder if it would be better to specify BLAS, LAPACK there? Also, does it matter whether I do the same during install?
BLAS=%{_libdir} LAPACK=%{_libdir} python setup.py config_fc --fcompiler=gnu95 install --root=$RPM_BUILD_ROOT From nwagner at iam.uni-stuttgart.de Fri Jul 28 05:42:21 2006 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 28 Jul 2006 11:42:21 +0200 Subject: [SciPy-dev] Ticket 223 Message-ID: <44C9DBFD.3070605@iam.uni-stuttgart.de> Dear developers, Please can you look into my bug report http://projects.scipy.org/scipy/scipy/ticket/223 I'm really sorry for annoying you but I want to know the reason for this bug. Thanks in advance Nils P.S.: Before I recompile ATLAS I would like to know if it is really an ATLAS issue... From jonathan.taylor at stanford.edu Fri Jul 28 18:52:24 2006 From: jonathan.taylor at stanford.edu (Jonathan Taylor) Date: Fri, 28 Jul 2006 15:52:24 -0700 Subject: [SciPy-dev] netcdf module with ctypes Message-ID: <44CA9528.1010304@stanford.edu> I have been playing around with ctypes to see how they work and have written a fairly complete netcdf module (with some tests + doctests) in pure python. The reason I wrote this one is that I have a specific subclass of netcdf files I want to work with and couldn't get the sandbox.netcdf package to behave exactly as I wanted it to. By default, I read variables into temporary memmap'ed array files because some of the data files I use are quite large. This could be customized, of course. Wondering if it is of any interest to anyone, Jonathan -- ------------------------------------------------------------------------ Jonathan Taylor Tel: 650.723.9230 Dept. of Statistics Fax: 650.725.8977 Sequoia Hall, 137 www-stat.stanford.edu/~jtaylo 390 Serra Mall Stanford, CA 94305 -------------- next part -------------- A non-text attachment was scrubbed... Name: netcdf.tgz Type: application/x-compressed-tar Size: 7627 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: jonathan.taylor.vcf Type: text/x-vcard Size: 329 bytes Desc: not available URL: From aisaac at american.edu Fri Jul 28 19:25:42 2006 From: aisaac at american.edu (Alan G Isaac) Date: Fri, 28 Jul 2006 19:25:42 -0400 Subject: [SciPy-dev] netcdf module with ctypes In-Reply-To: <44CA9528.1010304@stanford.edu> References: <44CA9528.1010304@stanford.edu> Message-ID: On Fri, 28 Jul 2006, Jonathan Taylor apparently wrote: > Wondering if it is of any interest to anyone Maybe getting it linked from http://www.unidata.ucar.edu/software/netcdf/software.html#Python might be useful ... Cheers, Alan Isaac From jonathan.taylor at stanford.edu Fri Jul 28 19:58:09 2006 From: jonathan.taylor at stanford.edu (Jonathan Taylor) Date: Fri, 28 Jul 2006 16:58:09 -0700 Subject: [SciPy-dev] netcdf module with ctypes In-Reply-To: References: <44CA9528.1010304@stanford.edu> Message-ID: <44CAA491.5040606@stanford.edu> Thanks, Wasn't aware of so many python implementations. One of the other reasons that I did this is I wanted the variable data to have at least "put" and "compress" methods rather than just assigning/retrieving by slice, hence the memmap hack. Does anyone know if any of these packages have "put" and "compress" methods for the variables? -- Jonathan Alan G Isaac wrote: > On Fri, 28 Jul 2006, Jonathan Taylor apparently wrote: > >> Wondering if it is of any interest to anyone >> > > Maybe getting it linked from > http://www.unidata.ucar.edu/software/netcdf/software.html#Python > might be useful ... > > Cheers, > Alan Isaac > > > > > _______________________________________________ > Scipy-dev mailing list > Scipy-dev at scipy.org > http://projects.scipy.org/mailman/listinfo/scipy-dev > -- ------------------------------------------------------------------------ Jonathan Taylor Tel: 650.723.9230 Dept.
of Statistics Fax: 650.725.8977 Sequoia Hall, 137 www-stat.stanford.edu/~jtaylo 390 Serra Mall Stanford, CA 94305 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: jonathan.taylor.vcf Type: text/x-vcard Size: 329 bytes Desc: not available URL:
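For context on the "put" and "compress" methods discussed in the netcdf thread above: these names mirror the numpy array methods of the same names, where `put` assigns values at flat indices in place and `compress` selects the elements satisfying a boolean condition. A short illustration on plain numpy arrays (not on any of the netcdf packages' variable objects):

```python
import numpy as np

a = np.arange(6.0)            # array([0., 1., 2., 3., 4., 5.])
a.put([0, 3], [-1.0, -2.0])   # in-place assignment at flat indices 0 and 3
print(a)                      # [-1.  1.  2. -2.  4.  5.]
selected = a.compress(a > 0)  # keep only the strictly positive elements
print(selected)               # [1. 2. 4. 5.]
```

Unlike slice assignment, `put` takes arbitrary (not necessarily contiguous) flat indices, and `compress` returns a new array rather than a view.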