From josef.pktd at gmail.com  Mon Nov 1 23:13:51 2010
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Mon, 1 Nov 2010 23:13:51 -0400
Subject: [SciPy-Dev] scipy.stats.distributions: note on initial parameters for fitting the beta distribution

On Sun, Oct 31, 2010 at 9:34 AM, James Phillips wrote:
> File attached.
>
> On Sun, Oct 31, 2010 at 8:33 AM, James Phillips wrote:
>> Here is a more polished and quite smaller version of your example file
>> matchdist.py that uses either nnlf or residuals for ranking, and
>> includes checks for NaN, +inf and -inf. I think this has all of the
>> logic and range checks that it needs.

I tried out both your scripts during the weekend but didn't get around to replying. Most distributions work pretty fast, but there are still a few unsuccessful time wasters in there. For example, ksone is mainly a distribution for a statistical test, and in my run it took a long time without a successful fit.

I'm not sure how nnlf will work as a selection criterion for the distributions, and similarly the residual sum of squares might not be a good or robust criterion, for example with heavy-tailed distributions. But that's just a guess; the only (commercial) package that I looked at offered the choice between Kolmogorov-Smirnov, Anderson-Darling and 2 chisquare tests (equal-spaced and equal-probability) as the distance or goodness-of-fit measure. (Entropy would be another criterion, but I haven't seen it used yet for selecting the distribution.)

Your version also will help to narrow down what might be good starting values; eventually I would prefer to hardcode (optional) distribution-specific _start_values.

Your second script (after dropping the failures) has 10 distributions fewer than the first script (70 instead of 80), so there is still some distribution-specific work left.

Josef

>>      James
>>
>> 2010/10/30 James Phillips :
>>> I'll parallelize this code and make a few more tweaks, and then add it
>>> to my web site.

From warren.weckesser at enthought.com  Tue Nov 2 01:24:06 2010
From: warren.weckesser at enthought.com (Warren Weckesser)
Date: Tue, 2 Nov 2010 00:24:06 -0500
Subject: [SciPy-Dev] Sporadic failures of tests of signal.correlate with dtype complex64

On Mac OSX 10.5.8, I'm seeing occasional failures like the following:

$ python -c "import scipy.signal; scipy.signal.test()"
Running unit tests for scipy.signal
NumPy version 1.5.0.dev8716
NumPy is installed in /Users/warren/tmp_py_install/lib/python2.6/site-packages/numpy
SciPy version 0.9.0.dev6856
SciPy is installed in /Users/warren/tmp_py_install/lib/python2.6/site-packages/scipy
Python version 2.6.5 |EPD 6.2-2 (32-bit)| (r265:79063, May 28 2010, 15:13:03) [GCC 4.0.1 (Apple Inc. build 5488)]
nose version 0.11.3
................./Users/warren/tmp_py_install/lib/python2.6/site-packages/scipy/signal/filter_design.py:256: BadCoefficients: Badly conditioned filter coefficients (numerator): the results may be meaningless
  "results may be meaningless", BadCoefficients)
......................................................F...............................................................................................................................................................................................................................................
======================================================================
FAIL: test_rank1_same (test_signaltools.TestCorrelateComplex64)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/warren/tmp_py_install/lib/python2.6/site-packages/scipy/signal/tests/test_signaltools.py", line 606, in test_rank1_same
    assert_array_almost_equal(y, y_r)
  File "/Users/warren/tmp_py_install/lib/python2.6/site-packages/numpy/testing/utils.py", line 774, in assert_array_almost_equal
    header='Arrays are not almost equal')
  File "/Users/warren/tmp_py_install/lib/python2.6/site-packages/numpy/testing/utils.py", line 618, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Arrays are not almost equal

(mismatch 10.0%)
 x: array([-6.76370811-8.55324841j,  0.68672836-4.2681613j ,
       -3.22760987-8.69287109j,  0.75051951-5.50820398j,
       -7.33016682-1.14685655j, -5.99573374+7.84123898j,...
 y: array([-6.76370859-8.55324745j,  0.68672895-4.2681613j ,
       -3.22761011-8.69286919j,  0.75051963-5.50820446j,
       -7.33016682-1.14685678j, -5.99573517+7.84123898j,...

----------------------------------------------------------------------
Ran 311 tests in 2.307s

FAILED (failures=1)

$ python -c "import scipy.signal; scipy.signal.test()"
Running unit tests for scipy.signal
NumPy version 1.5.0.dev8716
NumPy is installed in /Users/warren/tmp_py_install/lib/python2.6/site-packages/numpy
SciPy version 0.9.0.dev6856
SciPy is installed in /Users/warren/tmp_py_install/lib/python2.6/site-packages/scipy
Python version 2.6.5 |EPD 6.2-2 (32-bit)| (r265:79063, May 28 2010, 15:13:03) [GCC 4.0.1 (Apple Inc. build 5488)]
nose version 0.11.3
................./Users/warren/tmp_py_install/lib/python2.6/site-packages/scipy/signal/filter_design.py:256: BadCoefficients: Badly conditioned filter coefficients (numerator): the results may be meaningless
  "results may be meaningless", BadCoefficients)
.......................................................F..............................................................................................................................................................................................................................................
======================================================================
FAIL: test_rank1_same_old (test_signaltools.TestCorrelateComplex64)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/warren/tmp_py_install/lib/python2.6/site-packages/numpy/testing/decorators.py", line 257, in _deprecated_imp
    f(*args, **kwargs)
  File "/Users/warren/tmp_py_install/lib/python2.6/site-packages/scipy/signal/tests/test_signaltools.py", line 641, in test_rank1_same_old
    assert_array_almost_equal(y, y_r)
  File "/Users/warren/tmp_py_install/lib/python2.6/site-packages/numpy/testing/utils.py", line 774, in assert_array_almost_equal
    header='Arrays are not almost equal')
  File "/Users/warren/tmp_py_install/lib/python2.6/site-packages/numpy/testing/utils.py", line 618, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Arrays are not almost equal

(mismatch 10.0%)
 x: array([ 2.46665049+2.02072477j, -7.42591763-0.54789257j,
        3.41454220-0.15863085j, -0.14030695+5.01129198j,
       -2.11230707+2.68583822j,  7.78784609+7.19434834j,...
 y: array([ 2.46665049+2.02072501j, -7.42591763-0.54789257j,
        3.41454196-0.15863061j, -0.14030659+5.01129246j,
       -2.11230707+2.68583822j,  7.78784752+7.19434786j,...

----------------------------------------------------------------------
Ran 311 tests in 2.623s

FAILED (failures=1)

The above tests are part of a suite of tests that use random data, and usually the tests all pass. It took several tries to get the above failures.

I suspect the problem is simply that the default tolerance of 'assert_array_almost_equal' is too small for the complex64 data type for these tests.

Could someone verify that they can reproduce those failures? Does simply increasing the tolerance of the test look like a reasonable fix?

Warren

From warren.weckesser at enthought.com  Tue Nov 2 02:21:50 2010
From: warren.weckesser at enthought.com (Warren Weckesser)
Date: Tue, 2 Nov 2010 01:21:50 -0500
Subject: [SciPy-Dev] The function 'c0_P' in signal/bsplines.py

There is a function called c0_P in signal/bsplines.py that sets local variables c0 and P, but does not return anything. Except for an update to the raise statement at the end of the function that I just checked in, this function hasn't been touched since r122. Obviously no one has used it, since it doesn't return anything.

Any objection to removing it? (Note: objections that don't include a patch containing a docstring that conforms to the scipy standard, an actual 'return' statement, and appropriate tests will be ignored. :)

Warren
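In miniature, the pattern Warren describes looks like this (a hypothetical simplification, not the actual bsplines source): the function assigns its results to locals and never returns them, so every call evaluates to None.

    def c0_P(x, order):
        c0 = x ** order   # result assigned to a local only
        P = x * order     # result assigned to a local only
        # no "return c0, P" here, so callers always get None

    print(c0_P(2.0, 3))   # -> None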
From zunzun at zunzun.com  Tue Nov 2 06:07:35 2010
From: zunzun at zunzun.com (James Phillips)
Date: Tue, 2 Nov 2010 05:07:35 -0500
Subject: [SciPy-Dev] scipy.stats.distributions: note on initial parameters for fitting the beta distribution

Thank you kindly for trying the scripts.

     James

On Mon, Nov 1, 2010 at 10:13 PM, wrote:
> I tried out both your scripts during the weekend but didn't get around
> to replying. Most distributions work pretty fast, but there are still a
> few unsuccessful time wasters in there.
[clip]
> Your second script (after dropping the failures) has 10 distributions
> fewer than the first script (70 instead of 80), so there is still some
> distribution-specific work left.
>
> Josef
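For readers following the matchdist.py discussion, here is a minimal sketch of this kind of fit-and-rank loop (the candidate list and the choice of nnlf as the score are illustrative; this is not the actual script):

    import numpy as np
    from scipy import stats

    np.random.seed(0)  # reproducible sample data
    data = stats.norm.rvs(loc=5.0, scale=2.0, size=500)

    results = []
    for name in ['norm', 'gamma', 'lognorm', 'expon']:  # illustrative subset
        dist = getattr(stats, name)
        try:
            params = dist.fit(data)            # maximum-likelihood fit
            score = dist.nnlf(params, data)    # negative log-likelihood
        except Exception:
            continue                           # skip distributions that fail to fit
        if np.isfinite(score):                 # guard against NaN/inf scores
            results.append((score, name))

    for score, name in sorted(results):        # lower nnlf ranks better
        print(name, score)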
From josef.pktd at gmail.com  Tue Nov 2 09:41:45 2010
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Tue, 2 Nov 2010 09:41:45 -0400
Subject: [SciPy-Dev] Sporadic failures of tests of signal.correlate with dtype complex64

On Tue, Nov 2, 2010 at 1:24 AM, Warren Weckesser wrote:
> On Mac OSX 10.5.8, I'm seeing occasional failures like the following:
[clip - full test output quoted in the original post above]
> The above tests are part of a suite of tests that use random data, and
> usually the tests all pass. It took several tries to get the above
> failures.

Is there a purpose behind the randomness in the tests? If not, you could choose a seed that works.
For example, after some discussions on the mailing list, I set a seed for most of the stats.distributions tests.

Josef

> I suspect the problem is simply that the default tolerance of
> 'assert_array_almost_equal' is too small for the complex64 data type for
> these tests.
>
> Could someone verify that they can reproduce those failures? Does simply
> increasing the tolerance of the test look like a reasonable fix?
>
> Warren

From pav at iki.fi  Tue Nov 2 10:11:31 2010
From: pav at iki.fi (Pauli Virtanen)
Date: Tue, 2 Nov 2010 14:11:31 +0000 (UTC)
Subject: [SciPy-Dev] Sporadic failures of tests of signal.correlate with dtype complex64

Tue, 02 Nov 2010 09:41:45 -0400, josef.pktd wrote:
[clip]
> Is there a purpose behind the randomness in the tests? If not, you could
> choose a seed that works. For example, after some discussions on the
> mailing list, I set a seed for most of the stats.distributions tests.

As a rule, all tests using random numbers should set the seed at the beginning of the test.

-- 
Pauli Virtanen
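A sketch of the convention Pauli describes (the test body is illustrative, not the actual scipy.signal test code):

    import numpy as np
    from numpy.testing import assert_array_almost_equal
    from scipy import signal

    def test_correlate_complex64():
        np.random.seed(1234)  # fix the seed first: same "random" data on every run
        a = np.random.randn(10) + 1j * np.random.randn(10)
        b = np.random.randn(8) + 1j * np.random.randn(8)
        # Compare the single-precision result against a double-precision
        # reference, with a tolerance loose enough for complex64.
        y32 = signal.correlate(a.astype(np.complex64), b.astype(np.complex64))
        y64 = signal.correlate(a, b)
        assert_array_almost_equal(y32, y64, decimal=4)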
From vincent at vincentdavis.net  Tue Nov 2 10:18:08 2010
From: vincent at vincentdavis.net (Vincent Davis)
Date: Tue, 2 Nov 2010 08:18:08 -0600
Subject: [SciPy-Dev] Sporadic failures of tests of signal.correlate with dtype complex64

I remember running into a random number issue with EPD ("Python version 2.6.5 |EPD 6.2-2 (32-bit)| (r265:79063, May 28 2010, 15:13:03) [GCC 4.0.1 (Apple Inc. build 5488)]") because of the MKL library. I wrote a whole set of numpy.random.test() tests just to make sure the results were consistent, but I am not sure they were ever used. I'm not where I can get to this now, but I can look later.

Vincent

On Tue, Nov 2, 2010 at 8:11 AM, Pauli Virtanen wrote:
> As a rule, all tests using random numbers should set the seed at the
> beginning of the test.
[clip]

-- 
Thanks
Vincent Davis
720-301-3003

From ralf.gommers at googlemail.com  Tue Nov 2 10:18:57 2010
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Tue, 2 Nov 2010 22:18:57 +0800
Subject: [SciPy-Dev] Sporadic failures of tests of signal.correlate with dtype complex64

On Tue, Nov 2, 2010 at 1:24 PM, Warren Weckesser wrote:
> On Mac OSX 10.5.8, I'm seeing occasional failures like the following:
[clip - full test output quoted in the original post above]
> The above tests are part of a suite of tests that use random data, and
> usually the tests all pass. It took several tries to get the above
> failures.
>
> I suspect the problem is simply that the default tolerance of
> 'assert_array_almost_equal' is too small for the complex64 data type for
> these tests.
>
> Could someone verify that they can reproduce those failures?

Can't reproduce these.

> Does simply increasing the tolerance of the test look like a reasonable fix?

The default tolerance is decimal=6, which is not all that strict. I notice that only the single-precision (complex64) tests fail while the double tests do not. This is very likely platform dependent, otherwise it would have been noticed before.

Like Josef and Pauli say, fixing the seed is one thing (preferably with a value that's failing for you). But then I would split the tests and only increase the tolerance of the complex64 version, perhaps even only on certain platforms.

Cheers,
Ralf
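One way to express the split Ralf suggests, sketched with hypothetical class names (not the actual test-suite layout): keep a single test body and make the required precision a per-dtype class attribute.

    import numpy as np
    from numpy.testing import assert_array_almost_equal

    class _CorrelateBase(object):
        dt = None       # dtype under test
        decimal = None  # precision handed to assert_array_almost_equal

        def _check(self, y, y_r):
            assert_array_almost_equal(y, y_r, decimal=self.decimal)

    class TestCorrelateComplex64(_CorrelateBase):
        dt, decimal = np.complex64, 5     # looser: single precision rounds more

    class TestCorrelateComplex128(_CorrelateBase):
        dt, decimal = np.complex128, 10   # double precision can be held stricter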
From dsdale24 at gmail.com  Tue Nov 2 16:56:18 2010
From: dsdale24 at gmail.com (Darren Dale)
Date: Tue, 2 Nov 2010 16:56:18 -0400
Subject: [SciPy-Dev] scipy.org unresponsive

I'm having trouble connecting to www.scipy.org, the server is not responding:

$ ping www.scipy.org
PING www.scipy.org (216.62.213.231): 56 data bytes
Request timeout for icmp_seq 0
Request timeout for icmp_seq 1
Request timeout for icmp_seq 2
Request timeout for icmp_seq 3

From dagss at student.matnat.uio.no  Wed Nov 3 06:26:56 2010
From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn)
Date: Wed, 03 Nov 2010 11:26:56 +0100
Subject: [SciPy-Dev] Work on fwrap and SciPy
Message-ID: <4CD138F0.3010708@student.matnat.uio.no>

This is just a quick note to inform people that I am currently working with Enthought to bring SciPy to the .NET platform. In particular, I will work on the Fortran parts of SciPy.

The primary strategy will be to improve fwrap enough to make it usable for SciPy, and then move SciPy over to fwrap instead of f2py. The point here is that Carl Witty is already working on a .NET backend for Cython, and since fwrap generates Cython code we get the .NET port that way.

All work is done in Enthought's "refactor" branches for now [1]. The intention is certainly to merge back to main eventually, but questions of how or when or whether will have to wait; getting things up and running on .NET has priority.

Some details:

a) The most important missing feature in fwrap is callbacks. I'm sure there are other things I'll have to implement as well.
b) The main strategy is to first move (our own branch of) SciPy over to fwrap and have that work on CPython, and then move to compiling things on .NET.

c) fwrap does not support g77, only Fortran 90 compilers like gfortran/ifort etc. For the purposes of the .NET port this is likely to be good enough. Before a merge with the main CPython branch one must perhaps look into implementing g77 support in fwrap. I know that David C. at least earlier stated that g77 support is not going away anytime soon. Feedback on this welcome.

Dag Sverre

[1]
http://github.com/teoliphant/numpy-refactor
http://github.com/jasonmccampbell/scipy-refactor

From william.ratcliff at gmail.com  Wed Nov 3 08:09:51 2010
From: william.ratcliff at gmail.com (william ratcliff)
Date: Wed, 3 Nov 2010 08:09:51 -0400
Subject: [SciPy-Dev] Work on fwrap and SciPy

Is fwrap feature complete now for f95?

William

On Wed, Nov 3, 2010 at 6:26 AM, Dag Sverre Seljebotn <dagss at student.matnat.uio.no> wrote:
> This is just a quick note to inform people that I am currently working
> with Enthought to bring SciPy to the .NET platform. In particular, I
> will work on the Fortran parts of SciPy.
[clip]

From dagss at student.matnat.uio.no  Wed Nov 3 08:47:13 2010
From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn)
Date: Wed, 03 Nov 2010 13:47:13 +0100
Subject: [SciPy-Dev] Work on fwrap and SciPy
Message-ID: <4CD159D1.4020608@student.matnat.uio.no>

On 11/03/2010 01:09 PM, william ratcliff wrote:
> Is fwrap feature complete now for f95?

Look here for general information on fwrap:

- http://fwrap.sourceforge.net/
- http://fortrancython.wordpress.com/

Short answer: No, but it's getting there. In particular, there's no support for modules; one must wrap global functions.
AFAIK I won't be doing anything about Fortran 90 support.

Dag Sverre

> William
>
> On Wed, Nov 3, 2010 at 6:26 AM, Dag Sverre Seljebotn
> <dagss at student.matnat.uio.no> wrote:
[clip]

From matthew.brett at gmail.com  Wed Nov 3 12:55:03 2010
From: matthew.brett at gmail.com (Matthew Brett)
Date: Wed, 3 Nov 2010 09:55:03 -0700
Subject: [SciPy-Dev] Work on fwrap and SciPy

Hi,

On Wed, Nov 3, 2010 at 3:26 AM, Dag Sverre Seljebotn wrote:
...
> All work is done in Enthought's "refactor" branches for now [1]. The
> intention is certainly to merge back to main eventually, but questions
> of how or when or whether will have to wait; getting things up and
> running on .NET has priority.

How great is the divergence of the refactor branches from the trunk?

One worry would be that if the divergence gets too large, it will not be possible to review the changes.

Best,

Matthew
From dagss at student.matnat.uio.no  Wed Nov 3 16:20:26 2010
From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn)
Date: Wed, 03 Nov 2010 21:20:26 +0100
Subject: [SciPy-Dev] Work on fwrap and SciPy
Message-ID: <4CD1C40A.3030208@student.matnat.uio.no>

On 11/03/2010 05:55 PM, Matthew Brett wrote:
> How great is the divergence of the refactor branches from the trunk?
>
> One worry would be that if the divergence gets too large, it will not
> be possible to review the changes.

Speaking about the Fortran part only, I honestly don't expect anybody else to do major rewrites of the f2py wrappers (.pyf files) while this is going on; any (unlikely) minor tweaks can readily be merged (provided a decision is made for SciPy on CPython to move from f2py to fwrap).

I can try to keep the replacement of f2py with fwrap in a separate branch that could potentially be merged without taking the other stuff from the .NET branch. Not sure whether that will succeed yet, but it is definitely worth a try. So perhaps one can have two or three branches with different kinds of changes rather than one large one. Replacing f2py with fwrap seems to be orthogonal to the changes necessitated by the refactoring of NumPy, but I don't know yet whether that will hold true in practice.

(For non-Fortran it would be better to start a new thread on the .NET refactoring in general and ask Jason McCampbell, I can't tell.)

Dag Sverre

From kwmsmith at gmail.com  Wed Nov 3 17:00:14 2010
From: kwmsmith at gmail.com (Kurt Smith)
Date: Wed, 3 Nov 2010 16:00:14 -0500
Subject: [SciPy-Dev] [fwrap-users] Work on fwrap and SciPy

On Wed, Nov 3, 2010 at 5:26 AM, Dag Sverre Seljebotn wrote:
> This is just a quick note to inform people that I am currently working with
> Enthought to bring SciPy to the .NET platform. In particular, I will work on
> the Fortran parts of SciPy.
>
> The primary strategy will be to improve fwrap enough to make it usable for
> SciPy, and then move SciPy over to fwrap instead of f2py.
[clip]

This is great news. Thanks for doing this, and to Enthought for providing support. I think all parties will benefit :-)

> All work is done in Enthought's "refactor" branches for now [1]. The
> intention is certainly to merge back to main eventually, but questions of
> how or when or whether will have to wait; getting things up and running on
> .NET has priority.

Sure. The fact that the .NET-specific stuff is in Cython should make merging fwrap much easier than it otherwise would be. I'm interested in keeping fwrap-main synced up with fwrap-.NET, so that they won't diverge too much. Your thoughts on this are welcome.

At any rate, whatever improvements I make to fwrap in the coming months will likely be orthogonal to callbacks. The code could use some refactoring, to use the visitor pattern consistently throughout.

> Some details:
>
> a) The most important missing feature in fwrap is callbacks. I'm sure there
> are other things I'll have to implement as well.

There are differences between callbacks in f77 and callbacks in F90, using modern (Fortran 2003) features. So figuring out whether g77 support is a requirement would be good to know before attacking this problem. F90 callbacks (using the 2003 features) are much stricter, and last time I looked not all compilers support all features that would be useful for callback support.
> b) The main strategy is to first move (our own branch of) SciPy over to
> fwrap and have that work on CPython, and then move to compiling things on
> .NET
>
> c) fwrap does not support g77, only Fortran 90 compilers like
> gfortran/ifort etc. Before a merge with the main CPython branch one must
> perhaps look into implementing g77 support in fwrap. I know that David C.
> at least earlier stated that g77 support is not going away anytime soon.
> Feedback on this welcome.

I'm interested to hear what DC has to say...

> Dag Sverre
>
> [1]
> http://github.com/teoliphant/numpy-refactor
> http://github.com/jasonmccampbell/scipy-refactor

From matthew.brett at gmail.com  Wed Nov 3 17:06:16 2010
From: matthew.brett at gmail.com (Matthew Brett)
Date: Wed, 3 Nov 2010 14:06:16 -0700
Subject: [SciPy-Dev] Work on fwrap and SciPy

Hi,

On Wed, Nov 3, 2010 at 1:20 PM, Dag Sverre Seljebotn wrote:
>> How great is the divergence of the refactor branches from the trunk?
>>
>> One worry would be that if the divergence gets too large, it will not
>> be possible to review the changes.
>
> Speaking about the Fortran part only, I honestly don't expect anybody else
> to do major rewrites of the f2py wrappers (.pyf files) while this is going
> on; any (unlikely) minor tweaks can readily be merged (provided a decision
> is made for SciPy on CPython to move from f2py to fwrap).
>
> I can try to keep the replacement of f2py with fwrap in a separate branch
> that could potentially be merged without taking the other stuff from the
> .NET branch.
[clip]

It sounds like a very good idea to try to separate orthogonal changes. I can imagine that it would get very hard to merge back the changes if they are not separated.

> (For non-Fortran it would be better to start a new thread on the .NET
> refactoring in general and ask Jason McCampbell, I can't tell.)

Good idea. I will try :)

Thanks,

Matthew

From matthew.brett at gmail.com  Wed Nov 3 17:09:40 2010
From: matthew.brett at gmail.com (Matthew Brett)
Date: Wed, 3 Nov 2010 14:09:40 -0700
Subject: [SciPy-Dev] Scipy / numpy refactor plans

Hi,

Y'all might have seen Dag Sverre's post on .NET, fwrap and so on:

On Wed, Nov 3, 2010 at 3:26 AM, Dag Sverre Seljebotn wrote:
> This is just a quick note to inform people that I am currently working
> with Enthought to bring SciPy to the .NET platform. In particular, I
> will work on the Fortran parts of SciPy.
...
> All work is done in Enthought's "refactor" branches for now [1]. The
> intention is certainly to merge back to main eventually, but questions
> of how or when or whether will have to wait; getting things up and
> running on .NET has priority.

I am wondering what the plan is for eventual merge of the Enthought refactoring of scipy / numpy. Are the changes minor enough that there is a reasonable hope of community review?
Best,

Matthew

From matthew.brett at gmail.com  Wed Nov 3 19:38:57 2010
From: matthew.brett at gmail.com (Matthew Brett)
Date: Wed, 3 Nov 2010 16:38:57 -0700
Subject: [SciPy-Dev] Work on fwrap and SciPy

Hi,

> That is our desire as well. It is much easier to merge together several
> small projects than one all-encompassing one. :)

Sorry to ask - I realize I could have a look myself - but can I ask - how is it going from that point of view, with the refactoring branches?

Thanks a lot,

Matthew

From fperez.net at gmail.com  Wed Nov 3 19:51:00 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Wed, 3 Nov 2010 16:51:00 -0700
Subject: [SciPy-Dev] Work on fwrap and SciPy

On Wed, Nov 3, 2010 at 4:38 PM, Matthew Brett wrote:
> Hi,
>
>> That is our desire as well. It is much easier to merge together several
>> small projects than one all-encompassing one. :)

Sorry, who said this? I can't find it in the thread, and without the original attribution line it's impossible to understand the conversation...

Cheers,

f

From matthew.brett at gmail.com  Wed Nov 3 20:01:47 2010
From: matthew.brett at gmail.com (Matthew Brett)
Date: Wed, 3 Nov 2010 17:01:47 -0700
Subject: [SciPy-Dev] Work on fwrap and SciPy

Hi,

On Wed, Nov 3, 2010 at 4:51 PM, Fernando Perez wrote:
> Sorry, who said this? I can't find it in the thread, and without the
> original attribution line it's impossible to understand the
> conversation...

Previous message in this thread (Jason McCampbell)

Matthew

From robert.kern at gmail.com  Wed Nov 3 20:03:25 2010
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 3 Nov 2010 19:03:25 -0500
Subject: [SciPy-Dev] Work on fwrap and SciPy

On Wed, Nov 3, 2010 at 19:01, Matthew Brett wrote:
[clip]
> Previous message in this thread (Jason McCampbell)

It either hasn't arrived, or it was sent to you personally.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
  -- Umberto Eco

From matthew.brett at gmail.com  Wed Nov 3 20:06:03 2010
From: matthew.brett at gmail.com (Matthew Brett)
Date: Wed, 3 Nov 2010 17:06:03 -0700
Subject: [SciPy-Dev] Work on fwrap and SciPy

On Wed, Nov 3, 2010 at 5:03 PM, Robert Kern wrote:
[clip]
> It either hasn't arrived, or it was sent to you personally.

Thanks for the clarification - it was Cc'ed to the scipy-dev list - should arrive soon I guess.

Best,

Matthew

From fperez.net at gmail.com  Wed Nov 3 20:11:18 2010
From: fperez.net at gmail.com (Fernando Perez)
Date: Wed, 3 Nov 2010 17:11:18 -0700
Subject: [SciPy-Dev] Work on fwrap and SciPy

On Wed, Nov 3, 2010 at 5:06 PM, Matthew Brett wrote:
> Thanks for the clarification - it was Cc'ed to the scipy-dev list -
> should arrive soon I guess.

Thanks, I'll get the full context then.

Cheers,

f

From david at silveregg.co.jp  Wed Nov 3 20:58:59 2010
From: david at silveregg.co.jp (David)
Date: Thu, 04 Nov 2010 09:58:59 +0900
Subject: [SciPy-Dev] [fwrap-users] Work on fwrap and SciPy
Message-ID: <4CD20553.7090008@silveregg.co.jp>

On 11/04/2010 06:00 AM, Kurt Smith wrote:
>>> c) fwrap does not support g77, only Fortran 90 compilers like
>>> gfortran/ifort etc. For the purposes of the .NET port this is likely to be
>>> good enough. Before a merge with the main CPython branch one must perhaps
>>> look into implementing g77 support in fwrap. I know that David C. at least
>>> earlier stated that g77 support is not going away anytime soon. Feedback on
>>> this welcome.

G77 is still the "standard" fortran compiler on significant platforms (mingw and RHEL/Centos).
Could you quickly explain why g77 is not usable with fwrap? Would it take a long time to add support (if that is even possible)?

cheers,

David

From kwmsmith at gmail.com  Wed Nov 3 23:31:51 2010
From: kwmsmith at gmail.com (Kurt Smith)
Date: Wed, 3 Nov 2010 22:31:51 -0500
Subject: [SciPy-Dev] [fwrap-users] Work on fwrap and SciPy

On Wed, Nov 3, 2010 at 7:58 PM, David wrote:
> G77 is still the "standard" fortran compiler on significant platforms
> (mingw and RHEL/Centos). Could you quickly explain why g77 is not usable
> with fwrap? Would it take a long time to add support (if that is even
> possible)?

...not *currently* usable. Fwrap's fortran-C wrappers use features defined in the 2003 standard that make it easy to generate portable wrapping code. These features allow wrapping of fortran 90/95-specific constructs.

g77 support is certainly possible, given sufficient effort. I am in favor of it, but someone else will have to do it (I'm happy to pitch in, of course). It will require a good deal of work that I can't justify given my requirements for the tool.

Kurt

From wardefar at iro.umontreal.ca  Thu Nov 4 00:10:48 2010
From: wardefar at iro.umontreal.ca (David Warde-Farley)
Date: Thu, 4 Nov 2010 00:10:48 -0400
Subject: [SciPy-Dev] [fwrap-users] Work on fwrap and SciPy
Message-ID: <764F7C8A-5219-4A2E-8C65-94D6D84D6C51@iro.umontreal.ca>

On 2010-11-03, at 11:31 PM, Kurt Smith wrote:
> g77 support is certainly possible, given sufficient effort. I am in
> favor of it, but someone else will have to do it (I'm happy to pitch
> in, of course). It will require a good deal of work that I can't
> justify given my requirements for the tool.

There's also the issue (correct me if this has changed since we spoke in Austin, Kurt) that Fwrap isn't gfortran 4.2 compatible, yet on OS X the recommended binary distribution of gfortran for SciPy is a 4.2 build. I recall Robert saying at some point that building a 100% correctly working gfortran (on OS X, at least?) is a nightmare. So there's that hurdle to deal with somehow when Fwrap starts getting used for SciPy.

David

From kwmsmith at gmail.com  Thu Nov 4 00:30:28 2010
From: kwmsmith at gmail.com (Kurt Smith)
Date: Wed, 3 Nov 2010 23:30:28 -0500
Subject: [SciPy-Dev] [fwrap-users] Work on fwrap and SciPy

On Wed, Nov 3, 2010 at 11:10 PM, David Warde-Farley wrote:
> There's also the issue (correct me if this has changed since we spoke in
> Austin, Kurt) that Fwrap isn't gfortran 4.2 compatible, yet on OS X the
> recommended binary distribution of gfortran for SciPy is a 4.2 build.
[clip]

Yes. I agree. If you want a universal gfortran on OS X that isn't the recommended binary version, you would have to reproduce what was done here, under "Building a universal compiler":

http://r.research.att.com/tools/

I've tried it for gfortran 4.4.4 with no success (dependency hell, IIRC).

A suboptimal solution, suitable only for local use (non-universal build, so no fat binaries), is to use the gfortran from fink or macports. This introduces its own issues, none insurmountable. Fwrap's new build system *greatly* improves things here. Numpy and Scipy point people to R's OS X gfortran.
One day I plan on making a case to some benefactor to host a gfortran build suitable for use with fwrap, but that's only when fwrap is an essential component.

If/when fwrap gets an 'f77-compat' mode, then the gfortran version won't matter, but it won't support all fortran 90/95 constructs. To support those, fwrap requires gfortran >= 4.4 (the 4.3 series is buggy). Or one can simply use g95 >= 0.92, and bypass all of the above.

Kurt

From david at silveregg.co.jp  Thu Nov 4 00:43:33 2010
From: david at silveregg.co.jp (David)
Date: Thu, 04 Nov 2010 13:43:33 +0900
Subject: [SciPy-Dev] [fwrap-users] Work on fwrap and SciPy
Message-ID: <4CD239F5.3080303@silveregg.co.jp>

On 11/04/2010 01:10 PM, David Warde-Farley wrote:
> I recall Robert saying at some point that building a 100% correctly working
> gfortran (on OS X, at least?) is a nightmare.

Building compilers with universal support is indeed very error prone, especially gfortran. I have built gcc in many weird configurations, and none (even building gcc on Windows) was as difficult as this. It may be worth looking into, maybe with the help of the R guys who did the build available on the att website.

cheers,

David

From dagss at student.matnat.uio.no  Thu Nov 4 04:42:33 2010
From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn)
Date: Thu, 04 Nov 2010 09:42:33 +0100
Subject: [SciPy-Dev] [fwrap-users] Work on fwrap and SciPy
Message-ID: <4CD271F9.3000701@student.matnat.uio.no>

On 11/04/2010 04:31 AM, Kurt Smith wrote:
> On Wed, Nov 3, 2010 at 7:58 PM, David wrote:
>> G77 is still the "standard" fortran compiler on significant platforms
>> (mingw and RHEL/Centos). Could you quickly explain why g77 is not usable
>> with fwrap? Would it take a long time to add support (if that is even
>> possible)?
[clip]

Just to give everyone a taste, here's an example of modern Fortran interoperability:

    function add(x, y) bind(c)
        use iso_c_binding
        real(c_double), value :: x, y
        real(c_double) :: add
        add = x + y
    end function add

This function is callable from C as "double add(double x, double y)".
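For illustration, assuming the function above has been compiled into a shared library (e.g. "gfortran -shared -fPIC add.f90 -o libadd.so"; the file and library names are hypothetical), the unmangled symbol can be called directly from Python with ctypes:

    import ctypes

    lib = ctypes.CDLL("./libadd.so")   # library name as assumed above

    # bind(c) disables name mangling, so the symbol is plain "add";
    # the "value" attribute matches ctypes' pass-by-value doubles.
    lib.add.argtypes = [ctypes.c_double, ctypes.c_double]
    lib.add.restype = ctypes.c_double

    print(lib.add(1.5, 2.5))   # -> 4.0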
In particular, "bind(c)" turns off name mangling (and allows setting a different link symbol name through another parameter), and the "value" attribute means the argument is not passed by reference. This is all standard across modern compilers. Essentially, modern Fortran compilers allow your Fortran code to be callable just as if it were C code.

The problem of interoperability is much worse with Fortran 90/95/2003 than with Fortran 77, though. E.g., Fortran 90 can pass arrays with shape and strides implicitly being passed along, something that has absolutely no standard binary interface between compilers. This is one of the main problems that fwrap solves. With Fortran 77, it is much easier to rely on de facto standards/assumptions and do some probing.

> ...not *currently* usable. Fwrap's fortran-C wrappers use features
> defined in the 2003 standard that make it easy to generate portable
> wrapping code. These features allow wrapping of fortran
> 90/95-specific constructs.
>
> g77 support is certainly possible, given sufficient effort. I am in
> favor of it, but someone else will have to do it (I'm happy to pitch
> in, of course). It will require a good deal of work that I can't
> justify given my requirements for the tool.

If all goes well, I'm hoping to have time for this (no promises though, as it is kind of low priority with respect to the .NET port). However, given that I have such time available, then if SciPy instead starts requiring a Fortran 90 compiler, my time could be more profitably spent on more useful stuff :-)

I think the big questions here are:

a) What time scales are we looking at here... in two years' time, relying on a modern gfortran may be less of an issue.

b) Is there a case to move SciPy to Fortran 90 anyway (because one wants to include scientific code written in Fortran 90)?

My feeling is that gfortran 4.2 compatibility is rather out of the question though, as it would require another set of special cases for a compiler that will soon be outdated.

Dag Sverre

From dagss at student.matnat.uio.no  Thu Nov 4 05:59:26 2010
From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn)
Date: Thu, 04 Nov 2010 10:59:26 +0100
Subject: [SciPy-Dev] [fwrap-users] Work on fwrap and SciPy
Message-ID: <4CD283FE.8010707@student.matnat.uio.no>

On 11/03/2010 10:00 PM, Kurt Smith wrote:
> On Wed, Nov 3, 2010 at 5:26 AM, Dag Sverre Seljebotn wrote:
>> This is just a quick note to inform people that I am currently working with
>> Enthought to bring SciPy to the .NET platform. In particular, I will work on
>> the Fortran parts of SciPy.
[clip]
> This is great news. Thanks for doing this, and to Enthought for
> providing support. I think all parties will benefit :-)
>
> Sure. The fact that the .NET-specific stuff is in Cython should make
> merging fwrap much easier than it otherwise would be. I'm interested
> in keeping fwrap-main synced up with fwrap-.NET, so that they won't
> diverge too much.
Your thoughts on this are welcome. > I'm basically happy to work as closely with the main branch as you want me to. Bug-fix stuff I'll prepare as patches straight into your trunk. More special-purpose functionality that I'm less confident you'd accept without discussion would need to live in a branch, unless you in fact have time to discuss it with me :-) I think the limiting factor here is what time you have available for discussion. As I work on this full-time and you probably must make do on scraps of time here and there, I see some danger that you wouldn't be able to "keep up" with the decisions I must make as I go, and that some temporary forking must happen here and there for that reason. This does of course also depend on how much control you want to have over fwrap, or if you're happy to let me loose on my own on the main branch :-) Obviously, the more time we spend talking together, the more my work will be in sync with your wishes for the fwrap code base. On my end, I feel I can easily justify spending time to discuss issues with you because it greatly reduces the risk of going in a wrong direction, and also makes me find out how to do things more quickly than if I have to go look at the source code. If you have time for it, it would be good to schedule a Skype session. I already have a small list of decisions for the code base I'd like your input on :-) > At any rate, whatever improvements I make to fwrap in the coming > months will likely be orthogonal to callbacks. The code could use > some refactoring, to use the visitor pattern consistently throughout. > > >> Some details: >> >> a) The most important missing feature in fwrap is callbacks. I'm sure there >> are other things I'll have to implement as well. >> >> > There are differences between callbacks in f77 and callbacks in F90, > using modern (Fortran 2003) features. So it would be good to know whether > g77 support is a requirement before attacking this problem. F90 callbacks > (using the 2003 features) are much stricter, and last time I looked not all > compilers support all features that would be useful for callback support. > As all the code in SciPy is F77, that's what I'll need to support. I'm trying to substitute for f2py, and will try to emulate it as closely as is reasonable. If nothing else, it should at least get the ground well prepared for callbacks on the Cython code generation side, where they ought to be very similar. Dag Sverre From matthew.brett at gmail.com Thu Nov 4 11:58:42 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 4 Nov 2010 08:58:42 -0700 Subject: [SciPy-Dev] Work on fwrap and SciPy In-Reply-To: References: <4CD138F0.3010708@student.matnat.uio.no> <4CD1C40A.3030208@student.matnat.uio.no> Message-ID: Hi, On Thu, Nov 4, 2010 at 7:13 AM, Jason McCampbell wrote: >> > That is our desire as well. It is much easier to merge together several >> > small projects than one all-encompassing one. :) >> >> Sorry to ask - I realize I could have a look myself - but can I ask - >> how is it going from that point of view, with the refactoring >> branches? > It's a mixed bag. Cython .NET and fwrap should be independent of the > refactor in that they can use it, but don't require it. The numpy changes > are self-contained but impact SciPy. The .NET branch for SciPy then ends up > dependent on all three projects. Thanks - that's helpful. How close then are the numpy changes to being ready for preliminary review? Are they of a size that is sensible for review in one go?
> Sorry about the prior out-of-context reply. I realized I wasn't subscribed > to the scipy list (just numpy) and didn't manage to subscribe before my > reply was rejected. Oh - don't worry - I think that wrist-slap was aimed at me :) See you, Matthew From matthew.brett at gmail.com Thu Nov 4 13:03:07 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 4 Nov 2010 10:03:07 -0700 Subject: [SciPy-Dev] Work on fwrap and SciPy In-Reply-To: References: <4CD138F0.3010708@student.matnat.uio.no> <4CD1C40A.3030208@student.matnat.uio.no> Message-ID: Hi, On Thu, Nov 4, 2010 at 9:58 AM, Jason McCampbell wrote: >> Thanks - that's helpful. How close then are the numpy changes to >> being ready for preliminary review? Are they of a size that is >> sensible for review in one go? > Depending on how cursory a review is done, perhaps but probably not. More > likely it will require some conversations to explain what was done and why. > By and large the changes are in core/src/multiarray and core/src/umath > (plus appropriate header files). Thanks for the useful summary. Do you anticipate any performance changes? I seem to remember we're a bit short on benchmarks - is that correct? Cheers, Matthew From matthew.brett at gmail.com Thu Nov 4 14:41:37 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 4 Nov 2010 11:41:37 -0700 Subject: [SciPy-Dev] Work on fwrap and SciPy In-Reply-To: References: <4CD138F0.3010708@student.matnat.uio.no> <4CD1C40A.3030208@student.matnat.uio.no> Message-ID: Hi, On Thu, Nov 4, 2010 at 11:11 AM, Jason McCampbell wrote: >> Thanks for the useful summary. Do you anticipate any performance >> changes? I seem to remember we're a bit short on benchmarks - is that >> correct? > The intent is that performance changes should be minimal, if any. All of > the tight ufunc loops are unchanged, except for some handling of object > arrays, and those might be slightly faster. There is an extra level of > indirection because the PyObject_HEAD structure had to be separated from > the main structure, so there could be a slight amount of overhead there. We > still need to do some benchmark runs now that things are pieced back > together. (I nearly accidentally deleted your name from the top of the email but remembered just in time!) Am I right in thinking that our current benchmark suite is not very comprehensive? I see we have /benchmarks - does that cover most or all of the cases you want to test? Maybe that in itself could be an orthogonal piece of work? I hesitate to ask - but can we (I) help in that? Best, Matthew From ariver at enthought.com Thu Nov 4 18:08:14 2010 From: ariver at enthought.com (Aaron River) Date: Thu, 4 Nov 2010 17:08:14 -0500 Subject: [SciPy-Dev] scipy.org unresponsive In-Reply-To: References: Message-ID: Hello, This was due to a major hit to our bandwidth with the release of EPD 6.3. We've offloaded the traffic to an off-site service, which has brought our bandwidth usage back into normal usage patterns.
Thanks, -- Aaron On Tue, Nov 2, 2010 at 15:56, Darren Dale wrote: > I'm having trouble connecting to www.scipy.org, the server is not responding: > > > $ ping www.scipy.org > PING www.scipy.org (216.62.213.231): 56 data bytes > Request timeout for icmp_seq 0 > Request timeout for icmp_seq 1 > Request timeout for icmp_seq 2 > Request timeout for icmp_seq 3 > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From dave.hirschfeld at gmail.com Mon Nov 8 05:11:25 2010 From: dave.hirschfeld at gmail.com (Dave Hirschfeld) Date: Mon, 8 Nov 2010 10:11:25 +0000 (UTC) Subject: [SciPy-Dev] TimeSeries Plotting Broken? Message-ID: I get the following error when trying to run the "Separate scales for left and right axis" example from the scikits.timeseries website: Executing Python(x,y) 2.6.5.3 profile startup script: default.py Python 2.6.5 (r265:79096, Mar 19 2010, 21:48:26) [MSC v.1500 32 bit (Intel)] IPython 0.10 -- An enhanced Interactive Python. In [2]: numpy.__version__ Out[2]: '1.5.1rc1' In [3]: ts.__version__ Out[3]: '0.91.3' In [4]: matplotlib.__version__ Out[4]: '1.0.0' In [5]: import numpy as np In [6]: import numpy.ma as ma In [7]: import matplotlib.pyplot as plt In [8]: import scikits.timeseries as ts In [9]: import scikits.timeseries.lib.plotlib as tpl In [10]: In [11]: # generate some random data In [12]: data1 = np.cumprod(1 + np.random.normal(0, 1, 300)/100) In [13]: data2 = np.cumprod(1 + np.random.normal(0, 1, 300)/100)*100 In [14]: start_date = ts.Date(freq='M', year=1982, month=1) In [15]: series1 = ts.time_series(data1, start_date=start_date-50) In [16]: series2 = ts.time_series(data2, start_date=start_date) In [17]: fig = tpl.tsfigure() In [18]: fsp = fig.add_tsplot(111) In [19]: # plot series on left axis In [20]: fsp.tsplot(series1, 'b-', label='<- left series') Out[20]: [] In [21]: fsp.set_ylim(ma.min(series1.series), ma.max(series1.series)) Out[21]: (0.83169831486028944, 1.0802663771556904) In [22]: # create right axis In [23]: fsp_right = fsp.add_yaxis(position='right', yscale='log') --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) scikits.timeseries-0.91.3-py2.6-win32.egg\scikits\timeseries\lib\plotlib.pyc in add_yaxis(fsp, position, yscale, basey, subsy) 1194 fig = fsp.figure 1195 axisini = fsp.axis() -> 1196 fsp_alt_args = (fsp._rows, fsp._cols, fsp._num + 1) 1197 fsp_alt = fig.add_tsplot(frameon=False, position=fsp.get_position(), 1198 sharex=fsp, *fsp_alt_args) AttributeError: 'TimeSeriesPlot' object has no attribute '_rows' Let me know if there's anything further I can do to help getting it fixed... Thanks, Dave From dave.hirschfeld at gmail.com Mon Nov 8 07:59:03 2010 From: dave.hirschfeld at gmail.com (Dave Hirschfeld) Date: Mon, 8 Nov 2010 12:59:03 +0000 (UTC) Subject: [SciPy-Dev] Bug in TimeSeries.dump? 
Message-ID: Looks like quite a serious bug in the TimeSeries.dump function:

In [175]: test = ts.time_series(np.column_stack([randn(100) for _ in range(24)]), start_date='01-Jan-2010', freq='D')
In [176]: test.dump('test.npy')
In [177]: all(test == np.load('test.npy'))
Out[177]: False
In [178]: test_copy = test.copy()
In [179]: all(test_copy == test)
Out[179]: True
In [180]: test_copy.dump('test.npy')
In [181]: all(test == np.load('test.npy'))
Out[181]: True

...which doesn't seem to happen with a plain ndarray:

In [182]: test = np.column_stack([randn(100) for _ in range(24)])
In [183]: test.dump('test.npy')
In [184]: all(test == np.load('test.npy'))
Out[184]: True

Regards, Dave From bsouthey at gmail.com Mon Nov 8 10:53:01 2010 From: bsouthey at gmail.com (Bruce Southey) Date: Mon, 08 Nov 2010 09:53:01 -0600 Subject: [SciPy-Dev] Scipy failures with 0.9.0.dev6859 Message-ID: <4CD81CDD.7030806@gmail.com> Hi, Having upgraded to Fedora 14, I had to reinstall everything. I am getting various scipy failures across Python versions. test_ndimage.TestNdimage.test_gauss03 (FAIL: gaussian filter 3) test_ncg (test_optimize.TestOptimize) (FAIL: line-search Newton conjugate gradient optimization routine) I know the first has been previously mentioned, for example by Vincent in the OS X/Python 2.7 thread (http://permalink.gmane.org/gmane.comp.python.scientific.user/26891) The second test appears to vary between versions, as the test code (section given next) refers to specific scipy versions. Since I don't understand this area, could the test be written not to depend on the version?

# Ensure that function call counts are 'known good'; these are from
# Scipy 0.7.0. Don't allow them to increase.
assert_(self.funccalls == 7, self.funccalls)
assert_(self.gradcalls == 18, self.gradcalls) # 0.8.0
#assert_(self.gradcalls == 22, self.gradcalls) # 0.7.0

Python 2.4 also has multiple errors with these lines in common:

File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 706, in readsav r = _read_record(f)
File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 341, in _read_record rectypedesc['array_desc'])
File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 294, in _read_array dims = array_desc['dims'][:array_desc['ndims']]
TypeError: slice indices must be integers or None

Bruce

$ python -c "import scipy; scipy.test()"
Running unit tests for scipy
NumPy version 2.0.0.dev
NumPy is installed in /usr/lib64/python2.7/site-packages/numpy
SciPy version 0.9.0.dev6859
SciPy is installed in /usr/lib64/python2.7/site-packages/scipy
Python version 2.7 (r27:82500, Sep 16 2010, 18:02:00) [GCC 4.5.1 20100907 (Red Hat 4.5.1-3)]
nose version 0.11.2
[snip]
======================================================================
FAIL: test_ndimage.TestNdimage.test_gauss03 gaussian filter 3
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/nose-0.11.2-py2.7.egg/nose/case.py", line 186, in runTest self.test(*self.arg)
File "/usr/lib64/python2.7/site-packages/scipy/ndimage/tests/test_ndimage.py", line 468, in test_gauss03 assert_almost_equal(output.sum(), input.sum())
File "/usr/lib64/python2.7/site-packages/numpy/testing/utils.py", line 463, in assert_almost_equal raise AssertionError(msg)
AssertionError: Arrays are not almost equal ACTUAL: 49993304.0 DESIRED: 49992896.0
======================================================================
FAIL: test_ncg (test_optimize.TestOptimize) line-search Newton
conjugate gradient optimization routine ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib64/python2.7/site-packages/scipy/optimize/tests/test_optimize.py", line 177, in test_ncg assert_(self.gradcalls == 18, self.gradcalls) # 0.8.0 File "/usr/lib64/python2.7/site-packages/numpy/testing/utils.py", line 34, in assert_ raise AssertionError(msg) AssertionError: 16 ---------------------------------------------------------------------- Ran 4776 tests in 41.348s FAILED (KNOWNFAIL=13, SKIP=34, failures=2) Python2.4 errors ====================================================================== ERROR: test_idl.TestArrayDimensions.test_2d ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.4/site-packages/nose/case.py", line 186, in runTest self.test(*self.arg) File "/usr/local/lib/python2.4/site-packages/scipy/io/tests/test_idl.py", line 133, in test_2d s = readsav(path.join(DATA_PATH, 'array_float32_2d.sav'), verbose=False) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 706, in readsav r = _read_record(f) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 341, in _read_record rectypedesc['array_desc']) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 294, in _read_array dims = array_desc['dims'][:array_desc['ndims']] TypeError: slice indices must be integers or None ====================================================================== ERROR: test_idl.TestArrayDimensions.test_3d ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.4/site-packages/nose/case.py", line 186, in runTest self.test(*self.arg) File "/usr/local/lib/python2.4/site-packages/scipy/io/tests/test_idl.py", line 137, in test_3d s = readsav(path.join(DATA_PATH, 'array_float32_3d.sav'), verbose=False) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 706, in readsav r = _read_record(f) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 341, in _read_record rectypedesc['array_desc']) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 294, in _read_array dims = array_desc['dims'][:array_desc['ndims']] TypeError: slice indices must be integers or None ====================================================================== ERROR: test_idl.TestArrayDimensions.test_4d ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.4/site-packages/nose/case.py", line 186, in runTest self.test(*self.arg) File "/usr/local/lib/python2.4/site-packages/scipy/io/tests/test_idl.py", line 141, in test_4d s = readsav(path.join(DATA_PATH, 'array_float32_4d.sav'), verbose=False) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 706, in readsav r = _read_record(f) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 341, in _read_record rectypedesc['array_desc']) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 294, in _read_array dims = array_desc['dims'][:array_desc['ndims']] TypeError: slice indices must be integers or None ====================================================================== ERROR: test_idl.TestArrayDimensions.test_5d ---------------------------------------------------------------------- Traceback (most recent call last): File 
"/usr/local/lib/python2.4/site-packages/nose/case.py", line 186, in runTest self.test(*self.arg) File "/usr/local/lib/python2.4/site-packages/scipy/io/tests/test_idl.py", line 145, in test_5d s = readsav(path.join(DATA_PATH, 'array_float32_5d.sav'), verbose=False) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 706, in readsav r = _read_record(f) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 341, in _read_record rectypedesc['array_desc']) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 294, in _read_array dims = array_desc['dims'][:array_desc['ndims']] TypeError: slice indices must be integers or None ====================================================================== ERROR: test_idl.TestArrayDimensions.test_6d ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.4/site-packages/nose/case.py", line 186, in runTest self.test(*self.arg) File "/usr/local/lib/python2.4/site-packages/scipy/io/tests/test_idl.py", line 149, in test_6d s = readsav(path.join(DATA_PATH, 'array_float32_6d.sav'), verbose=False) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 706, in readsav r = _read_record(f) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 341, in _read_record rectypedesc['array_desc']) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 294, in _read_array dims = array_desc['dims'][:array_desc['ndims']] TypeError: slice indices must be integers or None ====================================================================== ERROR: test_idl.TestArrayDimensions.test_7d ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.4/site-packages/nose/case.py", line 186, in runTest self.test(*self.arg) File "/usr/local/lib/python2.4/site-packages/scipy/io/tests/test_idl.py", line 153, in test_7d s = readsav(path.join(DATA_PATH, 'array_float32_7d.sav'), verbose=False) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 706, in readsav r = _read_record(f) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 341, in _read_record rectypedesc['array_desc']) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 294, in _read_array dims = array_desc['dims'][:array_desc['ndims']] TypeError: slice indices must be integers or None ====================================================================== ERROR: test_idl.TestArrayDimensions.test_8d ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/lib/python2.4/site-packages/nose/case.py", line 186, in runTest self.test(*self.arg) File "/usr/local/lib/python2.4/site-packages/scipy/io/tests/test_idl.py", line 157, in test_8d s = readsav(path.join(DATA_PATH, 'array_float32_8d.sav'), verbose=False) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 706, in readsav r = _read_record(f) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 341, in _read_record rectypedesc['array_desc']) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 294, in _read_array dims = array_desc['dims'][:array_desc['ndims']] TypeError: slice indices must be integers or None ====================================================================== ERROR: test_idl.TestCompressed.test_compressed ---------------------------------------------------------------------- 
Traceback (most recent call last): File "/usr/local/lib/python2.4/site-packages/nose/case.py", line 186, in runTest self.test(*self.arg) File "/usr/local/lib/python2.4/site-packages/scipy/io/tests/test_idl.py", line 114, in test_compressed s = readsav(path.join(DATA_PATH, 'various_compressed.sav'), verbose=False) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 706, in readsav r = _read_record(f) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 341, in _read_record rectypedesc['array_desc']) File "/usr/local/lib/python2.4/site-packages/scipy/io/idl.py", line 294, in _read_array dims = array_desc['dims'][:array_desc['ndims']] TypeError: slice indices must be integers or None From zunzun at zunzun.com Mon Nov 8 11:43:09 2010 From: zunzun at zunzun.com (James Phillips) Date: Mon, 8 Nov 2010 06:43:09 -1000 Subject: [SciPy-Dev] scipy.stats.distributions: note on initial parameters for fitting the beta distribution In-Reply-To: References: Message-ID: I am finding that using eps alone is insufficient, as a given floating point number may be too large for adding or subtracting eps to change its value. Below is example code that illustrates my meaning on my 32-bit Ubuntu Linux distribution. James

import numpy

eps = numpy.finfo(float).eps
print 'eps =', eps

a = 500.0
b = 500.0
print 'should be zero: a-b =', a-b

c = b + eps
print 'should be -eps, not zero: a-c =', a-c

d = 1.0E-290
e = 1.0E-290
print 'should be zero: d-e =', d-e

f = e + eps
print 'should be -eps, not zero: d-f =', d-f

On Mon, Oct 25, 2010 at 10:14 AM, wrote: > > Are you handling fixed loc and scale in your code?
In that case, it >> might be possible just to restrict the usage of the beta distribution >> for data between zero and one, or fix loc=x.min()-eps, scale=(x.max() >> - x.min() + 2*eps) or something like this, if you don't want to get >> another estimation method. (I don't remember if I tried this for the >> beta distribution.) > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From pav at iki.fi Thu Nov 11 15:53:51 2010 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 11 Nov 2010 20:53:51 +0000 (UTC) Subject: [SciPy-Dev] Google custom search for ask.scipy.org? References: Message-ID: On Sat, 30 Oct 2010 17:49:56 +0000, Pauli Virtanen wrote: > A current (killer) problem with ask.scipy.org is that it does not have > search. As a band-aid solution, it should be possible to add a Google > custom search box, like on this page: > http://scikit-learn.sourceforge.net/ > > Google searches "site:ask.scipy.org/en/topic KEYWORDS" work well, so > that a custom search should also work OK. Here's a patch (my previous didn't seem to reach the list...): # HG changeset patch # User Pauli Virtanen # Date 1288530804 -3600 # Node ID 342b2a291ab1a37ff9f198e1833d3b7a4d60d8b3 # Parent 1af6eeb468587f11ff8017bd422f8c90830e27a9 ENH: add Google-powered search diff --git a/solace/default_settings.cfg b/solace/default_settings.cfg --- a/solace/default_settings.cfg +++ b/solace/default_settings.cfg @@ -172,5 +172,12 @@ #: a proxy. IS_BEHIND_PROXY = False +# target site for Google "site:" search +GOOGLE_SEARCH_SITE = "ask.scipy.org/*/topic" + +# key for Google Custom Search +# if None, the search is simply a redirect to Google +GOOGLE_SEARCH_KEY = "001245708749086444135:49umyzakcpw" + # get rid of them again del os, tempfile diff --git a/solace/templates/_helpers.html b/solace/templates/_helpers.html --- a/solace/templates/_helpers.html +++ b/solace/templates/_helpers.html @@ -27,3 +27,34 @@ {%- endmacro %} +{% macro render_search() %} +
+  [search-form HTML scrubbed by the list archiver]
+  {% if settings.GOOGLE_SEARCH_KEY %}
+    [Google Custom Search embed markup scrubbed]
+  {% else %}
+    [plain form submitting a "site:" query to Google; markup scrubbed]
+  {% endif %}
+{% endmacro %}
diff --git a/solace/templates/kb/overview.html b/solace/templates/kb/overview.html
--- a/solace/templates/kb/overview.html
+++ b/solace/templates/kb/overview.html
@@ -1,5 +1,6 @@
 {% extends 'layout.html' %}
 {% from 'kb/_boxes.html' import render_topics, render_topic_tabs %}
+{% from '_helpers.html' import render_search %}
 {% set page_title = _('Overview') %}
 {% block html_head %}
 {{- super() }}
@@ -10,8 +11,10 @@
 [heading markup scrubbed] {{ page_title }}
 {% trans -%}
-  This is an overview of some of the newest topics here on Solace. You can view other topics grouped by activity, votes and hotness.
+  This is an overview of some of the newest topics here on Solace. You can view other topics grouped by activity, votes and hotness. [the -/+ pair differed only in link markup, which was scrubbed]
 {%- endtrans %}
+ {{ render_search() }}
 {{ render_topic_tabs('kb.overview', order_by) }}
 {{ render_topics(topics) }}
diff --git a/solace/templates/kb/search.html b/solace/templates/kb/search.html
new file mode 100644
--- /dev/null
+++ b/solace/templates/kb/search.html
@@ -0,0 +1,9 @@
+{% extends 'layout.html' %}
+{% from '_helpers.html' import render_search %}
+{% set page_title = _('Search') %}
+{% block body %}
+  [heading markup scrubbed] {{ _('Search') }}
+  {{ render_search() }}
+{% endblock %}
diff --git a/solace/templates/layout.html b/solace/templates/layout.html
--- a/solace/templates/layout.html
+++ b/solace/templates/layout.html
@@ -77,6 +77,7 @@
 ('kb.new', true, _('Ask')),
 ('kb.overview', true, _('Overview')),
 ('kb.unanswered', true, _('Unanswered')),
+ ('kb.search', true, _('Search')),
 ('kb.tags', true, _('Tags')),
 (('kb.userlist', true, _('Users')) if request.view_lang and settings.LANGUAGE_SECTIONS|length > 1 else ('users.userlist', false, _('Users'))),
diff --git a/solace/urls.py b/solace/urls.py
--- a/solace/urls.py
+++ b/solace/urls.py
@@ -43,6 +43,9 @@
 Rule('/users/') > 'kb.userlist'
 ]),
+ # kb search
+ Rule('/search/') > 'kb.search',
+
 # kb sections not depending on the lang code
 Rule('/sections/') > 'kb.sections',
diff --git a/solace/views/kb.py b/solace/views/kb.py
--- a/solace/views/kb.py
+++ b/solace/views/kb.py
@@ -381,6 +381,10 @@
 return common_userlist(request, locale=request.view_lang)
+def search(request):
+    """Show a Google search form."""
+    return render_template('kb/search.html')
+
 @no_cache
 @require_login
 @exchange_token_protected

From matthew.brett at gmail.com Thu Nov 11 17:42:42 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 11 Nov 2010 14:42:42 -0800 Subject: [SciPy-Dev] Advice on py3 debugging Message-ID: Hi, I think this one's for Pauli. I am in the process of trying to make sure that my matlab io changes work on python 3. For python2 I am used to: linking /scipy/io/matlab to the repository working tree scipy/scipy/io/matlab, and building the scipy/io/matlab directory in place with 'python setup.py build_ext --inplace'. Of course I now realize that's not going to work because of the python 2to3 step. Do you have any advice on how to avoid a full reinstall for testing individual packages in scipy? Thanks a lot, Matthew From matthew.brett at gmail.com Thu Nov 11 18:07:59 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 11 Nov 2010 15:07:59 -0800 Subject: [SciPy-Dev] Advice on py3 debugging In-Reply-To: References: Message-ID: Hi, On Thu, Nov 11, 2010 at 2:42 PM, Matthew Brett wrote: > I am in the process of trying to make sure that my matlab io changes > work on python 3. > > For python2 I am used to: > > linking /scipy/io/matlab to the repository working > tree scipy/scipy/io/matlab > building the scipy/io/matlab directory inplace with 'python setup.py > build_ext --inplace' > > Of course I now realize that's not going to work because of the python > 2to3 step. Do you have any advice on how to avoid a full reinstall > for testing individual packages in scipy? I should say that the nasty solution that I have now is a temporary Makefile in /scipy/scipy/io/matlab:

TMP_BUILD = $(HOME)/tmp/scipy-matlab-files

py3-py:
	rsync -r --delete . $(TMP_BUILD)
	cd $(TMP_BUILD) && 2to3 -w *.py tests/*.py && python3 setup.py build_ext -i

and I link $(TMP_BUILD) as /scipy/io/matlab See you, Matthew From pav at iki.fi Thu Nov 11 18:19:46 2010 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 11 Nov 2010 23:19:46 +0000 (UTC) Subject: [SciPy-Dev] Advice on py3 debugging References: Message-ID: On Thu, 11 Nov 2010 15:07:59 -0800, Matthew Brett wrote: [clip] > I should say that the nasty solution that I have now is a temporary > Makefile in /scipy/scipy/io/matlab: > > TMP_BUILD = $(HOME)/tmp/scipy-matlab-files > > py3-py: > rsync -r --delete . $(TMP_BUILD) > cd $(TMP_BUILD) && 2to3 -w *.py tests/*.py && python3 setup.py > build_ext -i > > and I link $(TMP_BUILD) as /scipy/io/matlab I have the following Makefile.
Full (re)install runs fast enough for me.

SVN_BASE_REVISION=$(shell git log|sed -n -e '/git-svn-id:.*@\([0-9]\+\)/{s/.*@//;s/ .*//;p;q}')
GIT_REVISION=$(shell git log|sed -e '1{s/commit \(......\).*/\1/;q;}')
REVISION=$(SVN_BASE_REVISION)+$(GIT_REVISION)

all: build test
all-wine: build-wine test-wine
build: build-linux
test: test-linux
test-all: test-linux test-wine
build-all: build-linux build-wine

PYVER=2.6
export OPT=-ggdb
export PYTHONPATH=$(CURDIR)/../numpy/dist/linux/lib/python$(PYVER)/site-packages/
PATH := /usr/lib/ccache:/usr/local/lib/f90cache:$(PATH)
export PATH
USE_2TO3CACHE=1
export USE_2TO3CACHE
LANG=C
export LANG

TEST_MODULE=scipy
TEST_TYPE=full
TEST_STANZA='import sys, os; sys.path.insert(0, os.path.join(os.getcwd(), "site-packages")); import $(TEST_MODULE) as tst; sys.exit(not tst.test("$(TEST_TYPE)", verbose=2).wasSuccessful())'

build-linux:
	@echo "version = \"$(REVISION)\"" > scipy/__svn_version__.py
	@echo "--- Building..."
	python$(PYVER) setup.py build --debug install --prefix=$(CURDIR)/dist/linux \
	    > build.log 2>&1 || { cat build.log; exit 1; }

build-linux-scons:
	@echo "version = \"$(REVISION)\"" > scipy/__svn_version__.py
	@echo "--- Building..."
	python$(PYVER) setupscons.py build --debug install --prefix=$(CURDIR)/dist/linux \
	    > build.log 2>&1 || { cat build.log; exit 1; }

test-linux:
	@echo "--- Testing in Linux"
	(cd dist/linux/lib/python$(PYVER) && python$(PYVER) -c $(TEST_STANZA)) \
	    > test.log 2>&1 || { cat test.log; exit 1; }

test-cgdb:
	@echo "--- Testing in Linux"
	cd dist/linux/lib/python$(PYVER) && cgdb --args python$(PYVER) -c $(TEST_STANZA)

test-valgrind:
	@echo "--- Testing in Linux"
	cd dist/linux/lib/python$(PYVER) && valgrind --suppressions=$(HOME)/.valgrind/valgrind-python.supp -- python$(PYVER) -c $(TEST_STANZA)

egg-install:
	install -d $(CURDIR)/dist/linux/lib/python$(PYVER)/site-packages
	PYTHONPATH=$$PYTHONPATH:$(CURDIR)/dist/linux/lib/python$(PYVER)/site-packages \
	    python$(PYVER) setupegg.py install --prefix=$(CURDIR)/dist/linux \
	    > install.log 2>&1 || { cat install.log; exit 1; }
	find $(CURDIR)/dist -name 'test_*.py' -print0|xargs -0r chmod a-x

build-wine:
	@echo "--- Building..."
	wine c:\\Python25\\python.exe setup.py build --compiler=mingw32 install --prefix="dist\\win32" \
	    > build.log 2>&1 || { cat build.log; exit 1; }

test-wine:
	@echo "--- Testing in WINE"
	(cd dist/win32/Lib && wine c:\\Python25\\python.exe -c $(TEST_STANZA)) \
	    > test.log 2>&1 || { cat test.log; exit 1; }

ipython:
	cd $(CURDIR)/dist && PYTHONPATH=$$PYTHONPATH:$(CURDIR)/dist/linux/lib/python$(PYVER)/site-packages python$(PYVER) `which ipython` -pylab

cgdb:
	cd $(CURDIR)/dist && PYTHONPATH=$$PYTHONPATH:$(CURDIR)/dist/linux/lib/python$(PYVER)/site-packages cgdb --args python$(PYVER)

python:
	cd $(CURDIR)/dist && PYTHONPATH=$$PYTHONPATH:$(CURDIR)/dist/linux/lib/python$(PYVER)/site-packages python$(PYVER)

sh:
	cd $(CURDIR)/dist && PYTHONPATH=$$PYTHONPATH:$(CURDIR)/dist/linux/lib/python$(PYVER)/site-packages bash

etags:
	find scipy -name '*.[ch]' -o -name '*.src' \
	    | ctags-exuberant -L - \
	    -e --extra=+fq --fields=+afiksS --c++-kinds=+px \
	    --langmap=c:+.src,python:+.pyx --if0=yes \
	    --regex-c="/#define ([a-zA-Z0-9@_]*@[a-zA-Z0-9@_]*)/\1/" \
	    --regex-c="/^([a-zA-Z0-9@_]*@[a-zA-Z0-9@_]*)\(/\1/"

tags: etags

clean:
	rm -rf build dist

.PHONY: test build test-linux build-linux test-wine build-wine clean etags tags

From matthew.brett at gmail.com Thu Nov 11 20:00:29 2010 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 11 Nov 2010 17:00:29 -0800 Subject: [SciPy-Dev] matlab reading / writing changes - review anyone? In-Reply-To: References: Message-ID: Hi, On Tue, Oct 12, 2010 at 8:40 PM, Matthew Brett wrote: >>> I'm sorry - I have probably made the comparison a little too >>> large to be taken in in an instant, but I'd be very grateful for >>> feedback before merge. >> Based on a very shallow glance, looks mostly good to me, although: >> >> - Using UserWarning sounds a bit nasty -- maybe use >> class MatlabReaderWarning(UserWarning): pass instead? > Thanks, good idea. > >> - Is there a test for "varmats_from_mat"? > No, only the doctest, thanks for the reminder, I'll put some in. > >> - Breaks on Python 3: Thanks for the review - sorry it took me a little time to python3-ize. I believe the tests are passing on 2 and 3 now, and I've added changes as you suggested, thanks for the review. Committed: 6871 (and previous) Cheers, Matthew From ralf.gommers at googlemail.com Fri Nov 12 10:45:39 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Fri, 12 Nov 2010 23:45:39 +0800 Subject: [SciPy-Dev] schedule for scipy 0.9.0 Message-ID: Hi all, I'm interested to hear what you think about releasing scipy 0.9.0 in the near future, and if you have any things that should go in or be fixed for 0.9.0. If we want to keep to a ~6 month release schedule then we should branch around the end of this month and have a final release in January. As far as I can tell trunk is in good shape, and we have more than enough bug fixes, test and documentation improvements to make this a solid release. Plus of course the highlight - Python 3 compatibility.
Some things to be done: - resolve a few test failures, as reported recently by Bruce - clean up a lot of noise in the test output - merge work done in the doc wiki - (for the very brave) have a look at the state of our bug tracker Cheers, Ralf From zunzun at zunzun.com Fri Nov 12 10:58:24 2010 From: zunzun at zunzun.com (James Phillips) Date: Fri, 12 Nov 2010 05:58:24 -1000 Subject: [SciPy-Dev] schedule for scipy 0.9.0 In-Reply-To: References: Message-ID: That would allow 0.9.0 to be in the next Ubuntu Linux release, version 11.04 named "Natty Narwhal", which is set for the end of April 2011 according to this release schedule: https://wiki.ubuntu.com/Releases The current Ubuntu release uses SciPy 0.7.2 according to http://packages.ubuntu.com/maverick/python-scipy James On Fri, Nov 12, 2010 at 5:45 AM, Ralf Gommers wrote: > > If we want to keep to a ~6 month release schedule > then we should branch around the end of this month and have a final > release in January. From josef.pktd at gmail.com Fri Nov 12 11:18:58 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 12 Nov 2010 11:18:58 -0500 Subject: [SciPy-Dev] schedule for scipy 0.9.0 In-Reply-To: References: Message-ID: On Fri, Nov 12, 2010 at 10:45 AM, Ralf Gommers wrote: > Hi all, > > I'm interested to hear what you think about releasing scipy 0.9.0 in > the near future, and if you have any things that should go in or be > fixed for 0.9.0. If we want to keep to a ~6 month release schedule > then we should branch around the end of this month and have a final > release in January. > > As far as I can tell trunk is in good shape, and we have more then > enough bug fixes, test and documentation improvements to make this a > solid release. Plus of course the highlight - Python 3 compatibility. as Pauli recently mentioned scipy.weave is still missing for Python 3 and there are still some problems in stats.distributions. Urgent fixes shouldn't be too time consuming, but there is some backlog that will take more time (statsmodels is already months behind release schedule and is more fun). I will work through the tickets and the tests, and scipy.stats should be releasable. Josef > > Some things to be done: > - resolve a few test failures, as reported recently by Bruce > - clean up a lot of noise in the test output > - merge work done in the doc wiki > - (for the very brave) have a look at the state of our bug tracker > > Cheers, > Ralf > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From pav at iki.fi Fri Nov 12 11:21:54 2010 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 12 Nov 2010 16:21:54 +0000 (UTC) Subject: [SciPy-Dev] schedule for scipy 0.9.0 [scipy.interpolate] References: Message-ID: Fri, 12 Nov 2010 23:45:39 +0800, Ralf Gommers wrote: > I'm interested to hear what you think about releasing scipy 0.9.0 in the > near future, and if you have any things that should go in or be fixed > for 0.9.0. If we want to keep to a ~6 month release schedule then we > should branch around the end of this month and have a final release in > January. Sounds OK to me. I think we should try to time it so that it gets in Ubuntu. Some tasks that would be nice to do before that in scipy.interpolate: - Write a interpolation routine for N-d *structured* interpolation. (ndimage.map_coordinates is difficult to find, so it would be useful to have an interpolation-oriented routine for that in scipy.interpolate). 
- Deprecate interp2d, as it does not work well for most problems, and its calling convention is a bit wonky. People should use either `griddata` or splines depending on what they want to do. - Clean up the spline routines: - We have two incompatible implementations for 1-D splines there, one from FITPACK, and a pure-Python one by Travis. I don't remember if one of them has an advantage over the other. I think that either the `splmake` one should be deprecated, or the spline representation it uses made compatible with FITPACK. - Deprecate the bispl* and spl* routines, leaving only the object-oriented interface. - Clean up the OO API. Currently it does nasty things such as changing the class of the objects on the fly. I'd perhaps go forward by deprecating the current API, and starting with a clean slate. Some additional ones that would be nice, but I'm not sure if I can make it in this timeframe: - Natural neighbor interpolation in N-d. I currently know how to do this up to 3-d, but doing it in N-d takes still more thinking (the difficult part is computing the volume of the N-d Voronoi polyhedron). -- Pauli Virtanen From bsouthey at gmail.com Fri Nov 12 11:30:58 2010 From: bsouthey at gmail.com (Bruce Southey) Date: Fri, 12 Nov 2010 10:30:58 -0600 Subject: [SciPy-Dev] schedule for scipy 0.9.0 In-Reply-To: References: Message-ID: <4CDD6BC2.4020504@gmail.com> On 11/12/2010 09:45 AM, Ralf Gommers wrote: > Hi all, > > I'm interested to hear what you think about releasing scipy 0.9.0 in > the near future, and if you have any things that should go in or be > fixed for 0.9.0. If we want to keep to a ~6 month release schedule > then we should branch around the end of this month and have a final > release in January. > > As far as I can tell trunk is in good shape, and we have more than > enough bug fixes, test and documentation improvements to make this a > solid release. Plus of course the highlight - Python 3 compatibility. > > Some things to be done: > - resolve a few test failures, as reported recently by Bruce > - clean up a lot of noise in the test output > - merge work done in the doc wiki > - (for the very brave) have a look at the state of our bug tracker > > Cheers, > Ralf > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev This is also a good time to add any deprecation and function change notifications. Will this be the last release before scipy is 'officially' moved to git, or should the git move occur before then? Bruce From charlesr.harris at gmail.com Fri Nov 12 11:42:57 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 12 Nov 2010 09:42:57 -0700 Subject: [SciPy-Dev] schedule for scipy 0.9.0 [scipy.interpolate] In-Reply-To: References: Message-ID: On Fri, Nov 12, 2010 at 9:21 AM, Pauli Virtanen wrote: > Fri, 12 Nov 2010 23:45:39 +0800, Ralf Gommers wrote: > > I'm interested to hear what you think about releasing scipy 0.9.0 in the > > near future, and if you have any things that should go in or be fixed > > for 0.9.0. If we want to keep to a ~6 month release schedule then we > > should branch around the end of this month and have a final release in > > January. > > Sounds OK to me. I think we should try to time it so that it gets in > Ubuntu. > > Some tasks that would be nice to do before that in scipy.interpolate: > > - Write a interpolation routine for N-d *structured* interpolation.
> (ndimage.map_coordinates is difficult to find, so it would be useful > to have an interpolation-oriented routine for that in > scipy.interpolate). > > - Deprecate interp2d, as it does not work well for most problems, > and its calling convention is a bit wonky. People should use either > `griddata` or splines depending on what they want to do. > > - Clean up the spline routines: > > - We have two incompatible implementations for 1-D splines there, > one from FITPACK, and a pure-Python one by Travis. I don't remember > if one of them has an advantage over the other. > > Which is the pure python one? The interpolate.py module is, I believe, Pearu's and uses fitpack. However, there are two interfaces to fitpack, one generated by f2py, and another handwritten in C. We probably want to keep the f2py interface, but the handwritten interface is a bit higher level. Fitpack uses b-splines, which isn't the fastest representation to evaluate but is essential for least squares. I agree that it would be nice to rationalize the spline packages, but I'm not sure we can do that for 0.9.0. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Fri Nov 12 11:48:09 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 12 Nov 2010 11:48:09 -0500 Subject: [SciPy-Dev] schedule for scipy 0.9.0 [scipy.interpolate] In-Reply-To: References: Message-ID: On Fri, Nov 12, 2010 at 11:21 AM, Pauli Virtanen wrote: > Fri, 12 Nov 2010 23:45:39 +0800, Ralf Gommers wrote: >> I'm interested to hear what you think about releasing scipy 0.9.0 in >> the >> near future, and if you have any things that should go in or be fixed >> for 0.9.0. If we want to keep to a ~6 month release schedule then we >> should branch around the end of this month and have a final release in >> January. > > Sounds OK to me. I think we should try to time it so that it gets in > Ubuntu. > > Some tasks that would be nice to do before that in scipy.interpolate: > > - Write a interpolation routine for N-d *structured* interpolation. > (ndimage.map_coordinates is difficult to find, so it would be useful > to have an interpolation-oriented routine for that in > scipy.interpolate). > > - Deprecate interp2d, as it does not work well for most problems, > and its calling convention is a bit wonky. People should use either > `griddata` or splines depending on what they want to do. > > - Clean up the spline routines: > > - We have two incompatible implementations for 1-D splines there, > one from FITPACK, and a pure-Python one by Travis. I don't remember > if one of them has an advantage over the other. > > I think that either the `splmake` one should be deprecated, or > the spline representation it uses made compatible with FITPACK. > > - Deprecate the bispl* and spl* routines, leaving only the > object-oriented interface. These are in some cases more robust and give better control, the classes look a bit "messy". I would only deprecate them if the classes get some makeover. Josef > > - Clean up the OO API. Currently it does nasty things such as > changing the class of the objects on the fly. I'd perhaps go > forward by deprecating the current API, and starting with a clean > slate. > > Some additional ones that would be nice, but I'm not sure if I can make > it in this timeframe: > > - Natural neighbor interpolation in N-d. > > I currently know how to do this up to 3-d, but doing it in N-d takes >
still more thinking (the difficult part is computing the volume of > the N-d Voronoi polyhedron). > > -- > Pauli Virtanen > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From charlesr.harris at gmail.com Fri Nov 12 12:25:45 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 12 Nov 2010 10:25:45 -0700 Subject: [SciPy-Dev] schedule for scipy 0.9.0 [scipy.interpolate] In-Reply-To: References: Message-ID: On Fri, Nov 12, 2010 at 9:48 AM, wrote: >> On Fri, Nov 12, 2010 at 11:21 AM, Pauli Virtanen wrote: >> > Fri, 12 Nov 2010 23:45:39 +0800, Ralf Gommers wrote: >> >> I'm interested to hear what you think about releasing scipy 0.9.0 in >> >> the >> >> near future, and if you have any things that should go in or be fixed >> >> for 0.9.0. If we want to keep to a ~6 month release schedule then we >> >> should branch around the end of this month and have a final release in >> >> January. >> > >> > Sounds OK to me. I think we should try to time it so that it gets in >> > Ubuntu. >> > >> > Some tasks that would be nice to do before that in scipy.interpolate: >> > >> > - Write a interpolation routine for N-d *structured* interpolation. >> > (ndimage.map_coordinates is difficult to find, so it would be useful >> > to have an interpolation-oriented routine for that in >> > scipy.interpolate). >> > >> > - Deprecate interp2d, as it does not work well for most problems, >> > and its calling convention is a bit wonky. People should use either >> > `griddata` or splines depending on what they want to do. >> > >> > - Clean up the spline routines: >> > >> > - We have two incompatible implementations for 1-D splines there, >> > one from FITPACK, and a pure-Python one by Travis. I don't remember >> > if one of them has an advantage over the other. >> > >> > I think that either the `splmake` one should be deprecated, or >> > the spline representation it uses made compatible with FITPACK. >> > >> > - Deprecate the bispl* and spl* routines, leaving only the >> > object-oriented interface. >> These are in some cases more robust and give better control, the >> classes look a bit "messy". I would only deprecate them if the >> classes get some makeover. Which classes? If the ones in interpolate, I agree that they are a bit limited. The simple case of running a 1-d spline through a set of points, with some provision for handling the boundary conditions, is often done without using b-splines. Doing tricky things with varying degrees of discontinuity at the knot points or generating least squares fits is better done with b-splines. If we are going to use just one package I would stick with fitpack, or an equivalent subset of opennurbs. The latter is a large package in C++ intended for CAD with the usual tangle of inter-dependencies, but the core bits don't look like too much trouble to disentangle and convert to simple C. Chuck -------------- next part -------------- An HTML attachment was scrubbed...
URL: From josef.pktd at gmail.com Fri Nov 12 15:09:35 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 12 Nov 2010 15:09:35 -0500 Subject: [SciPy-Dev] schedule for scipy 0.9.0 [scipy.interpolate] In-Reply-To: References: Message-ID: On Fri, Nov 12, 2010 at 12:25 PM, Charles R Harris wrote: > > > On Fri, Nov 12, 2010 at 9:48 AM, wrote: >> >> On Fri, Nov 12, 2010 at 11:21 AM, Pauli Virtanen wrote: >> > Fri, 12 Nov 2010 23:45:39 +0800, Ralf Gommers wrote: >> >> I'm interested to hear what you think about releasing scipy 0.9.0 in >> >> the >> >> near future, and if you have any things that should go in or be fixed >> >> for 0.9.0. If we want to keep to a ~6 month release schedule then we >> >> should branch around the end of this month and have a final release in >> >> January. >> > >> > Sounds OK to me. I think we should try to time it so that it gets in >> > Ubuntu. >> > >> > Some tasks that would be nice to do before that in scipy.interpolate: >> > >> > ?- Write a interpolation routine for N-d *structured* interpolation. >> > ? ?(ndimage.map_coordinates is difficult to find, so it would be useful >> > ? ?to have an interpolation-oriented routine for that in >> > ? ?scipy.interpolate). >> > >> > ?- Deprecate interp2d, as it does not work well for most problems, >> > ? ?and its calling convention is a bit wonky. People should use either >> > ? ?`griddata` or splines depending on what they want to do. >> > >> > ?- Clean up the spline routines: >> > >> > ? ?- We have two incompatible implementations for 1-D splines there, >> > ? ? ?on from FITPACK, and a pure-Python one by Travis. I don't remember >> > ? ? ?if one of them has an advantage over the other. >> > >> > ? ? ?I think that either `splmake` the one should be deprecated, or >> > ? ? ?the spline representation it uses made compatible with FITPACK. >> > >> > ? ?- Deprecate the bispl* and spl* routines, leaving only the >> > ? ? ?object-oriented interface. >> >> These are in some cases more robust and give better control, the >> classes look a bit "messy". I would only depreciate them if the >> classes get some makeover. > > Which classes? If the ones in interpolate, I agree that they are a bit > limited. interpolate UnivariateSpline, and BivariateSpline classes (or family of classes) which are supposed to be the successor or frontend to the bispl* and spl* routines. Josef > > The simple case of running a 1-d spline through a set of points, with some > provision for handling the boundaty conditions, is often? done without using > b-splines. Doing tricky things with varying degrees of discontinuity at the > knot points or generating least squares fits is better done with b-splines. > If we are going to use just one package I would stick with fitpack, or an > equivalent subset of opennurbs. The latter is a large package in c++ > intended for CAD with the usual tangle of inter-dependencies, but the core > bits don't look like too much trouble to disintangle and convert to simple > C. > > Chuck > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > From pav at iki.fi Fri Nov 12 12:16:23 2010 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 12 Nov 2010 17:16:23 +0000 (UTC) Subject: [SciPy-Dev] schedule for scipy 0.9.0 [scipy.interpolate] References: Message-ID: Fri, 12 Nov 2010 09:42:57 -0700, Charles R Harris wrote: [clip: scipy.interpolate] > Which is the pure python one? 
The interpolate.py module is, I believe, > Pearu's and uses fitpack. - The one in interpolate.py uses fitpack for evaluation, but is otherwise pure-Python, and has only a procedural interface (splmake & spleval) - There are also two Python-FITPACK interfaces, _fitpack.so and dfitpack.so, which are both used by the procedural (splrep & splev) and OO interfaces. IIRC, there is an incompatibility in the spline representation used by splev and spleval. -- Pauli Virtanen From zunzun at zunzun.com Fri Nov 12 15:20:42 2010 From: zunzun at zunzun.com (James Phillips) Date: Fri, 12 Nov 2010 10:20:42 -1000 Subject: [SciPy-Dev] schedule for scipy 0.9.0 [scipy.interpolate] In-Reply-To: References: Message-ID: You might be interested to see my free web interfaces to these same spline classes: for 2D data using the UnivariateSpline class: http://zunzun.com/Equation/2/Spline/Spline/ for 3D data using the SmoothBivariateSpline class: http://zunzun.com/Equation/3/Spline/Spline/ The web site also outputs spline evaluation code in Python, C++ and Java. BSD source code links with syntax-highlighting via Google: 2D: http://code.google.com/p/pythonequations/source/browse/trunk/pythonequations/Equations2D/Spline.py 3D: http://code.google.com/p/pythonequations/source/browse/trunk/pythonequations/Equations3D/Spline.py James On Fri, Nov 12, 2010 at 10:09 AM, wrote: > > interpolate UnivariateSpline, and BivariateSpline classes (or family > of classes) which are supposed to be the successor or frontend to the > bispl* and spl* routines. From ralf.gommers at googlemail.com Fri Nov 12 23:07:55 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sat, 13 Nov 2010 12:07:55 +0800 Subject: [SciPy-Dev] schedule for scipy 0.9.0 In-Reply-To: References: Message-ID: On Sat, Nov 13, 2010 at 12:18 AM, wrote: > > and there are still some problems in stats.distributions. Urgent fixes > shouldn't be too time consuming, but there is some backlog that will > take more time (statsmodels is already months behind release schedule > and is more fun). > I looked through the tickets and did not see anything in stats that should hold up a release. The most important thing IMHO is to handle the tickets with patches attached: http://projects.scipy.org/scipy/ticket/1098 http://projects.scipy.org/scipy/ticket/1131 http://projects.scipy.org/scipy/ticket/793 http://projects.scipy.org/scipy/ticket/1200 http://projects.scipy.org/scipy/ticket/956 And one of: http://projects.scipy.org/scipy/ticket/893 http://projects.scipy.org/scipy/ticket/999 Stats is one of the more interesting parts to me, so I'm willing to help out with this (on the parts that do not require a licensed statistician). Cheers, Ralf From ralf.gommers at googlemail.com Fri Nov 12 23:10:54 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sat, 13 Nov 2010 12:10:54 +0800 Subject: [SciPy-Dev] schedule for scipy 0.9.0 In-Reply-To: <4CDD6BC2.4020504@gmail.com> References: <4CDD6BC2.4020504@gmail.com> Message-ID: On Sat, Nov 13, 2010 at 12:30 AM, Bruce Southey wrote: > On 11/12/2010 09:45 AM, Ralf Gommers wrote: > This is also a good time to add any deprecation and function change > notifications. > That happens in every release. Do you have specific tickets in mind? > Will this be the last release before scipy is 'officially' moved to git or > should the git move occur before then? > For me the sooner we move the better. I don't mind if that would be in the middle of a release cycle.
Cheers, Ralf From ralf.gommers at googlemail.com Fri Nov 12 23:35:19 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sat, 13 Nov 2010 12:35:19 +0800 Subject: [SciPy-Dev] schedule for scipy 0.9.0 [scipy.interpolate] In-Reply-To: References: Message-ID: On Sat, Nov 13, 2010 at 12:21 AM, Pauli Virtanen wrote: > Fri, 12 Nov 2010 23:45:39 +0800, Ralf Gommers wrote: >> I'm interested to hear what you think about releasing scipy 0.9.0 in the >> near future, and if you have any things that should go in or be fixed >> for 0.9.0. If we want to keep to a ~6 month release schedule then we >> should branch around the end of this month and have a final release in >> January. > > Sounds OK to me. I think we should try to time it so that it gets in > Ubuntu. Looking at https://wiki.ubuntu.com/NattyReleaseSchedule, I guess that means it would be best to release a few weeks before feature freeze, which is Feb 24th. So end of Jan should be fine. Ralf From charlesr.harris at gmail.com Sat Nov 13 00:28:26 2010 From: charlesr.harris at gmail.com (Charles R Harris) Date: Fri, 12 Nov 2010 22:28:26 -0700 Subject: [SciPy-Dev] schedule for scipy 0.9.0 [scipy.interpolate] In-Reply-To: References: Message-ID: On Fri, Nov 12, 2010 at 10:16 AM, Pauli Virtanen wrote: > Fri, 12 Nov 2010 09:42:57 -0700, Charles R Harris wrote: > [clip: scipy.interpolate] > > Which is the pure python one? The interpolate.py module is, I believe, > > Pearu's and uses fitpack. > > - The one in interpolate.py uses fitpack for evaluation, but is otherwise > pure-Python, and has only procedular interface (splmake & spleval) > > - There are also two Python-FITPACK interfaces, _fitpack.so and > dfitpack.so, which both are both used by the procedural (splrep & splev) > and OO interfaces. > > I think I may have added the _fitpack.so dependency for a bug fix, IIRC the original only used dfitpack.so. I'm not really happy with fitpack, it is a bit musty and not that clean, but it would be a significant amount of work to fix it up or replace it. It might be possible to use just a small subset of the functions and build higher functionality using cython or some such. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From dagss at student.matnat.uio.no Sat Nov 13 07:05:05 2010 From: dagss at student.matnat.uio.no (Dag Sverre Seljebotn) Date: Sat, 13 Nov 2010 13:05:05 +0100 Subject: [SciPy-Dev] schedule for scipy 0.9.0 [scipy.interpolate] In-Reply-To: References: Message-ID: <4CDE7EF1.7010408@student.matnat.uio.no> On 11/13/2010 06:28 AM, Charles R Harris wrote: > > > On Fri, Nov 12, 2010 at 10:16 AM, Pauli Virtanen > wrote: > > Fri, 12 Nov 2010 09:42:57 -0700, Charles R Harris wrote: > [clip: scipy.interpolate] > > Which is the pure python one? The interpolate.py module is, I > believe, > > Pearu's and uses fitpack. > > - The one in interpolate.py uses fitpack for evaluation, but is > otherwise > pure-Python, and has only procedular interface (splmake & spleval) > > - There are also two Python-FITPACK interfaces, _fitpack.so and > dfitpack.so, which both are both used by the procedural (splrep & > splev) > and OO interfaces. > > > I think I may have added the _fitpack.so dependency for a bug fix, > IIRC the original only used dfitpack.so. I'm not really happy with > fitpack, it is a bit musty and not that clean, but it would be a > significant amount of work to fix it up or replace it. 
> It might be possible to use just a small subset of the functions and
> build higher functionality using cython or some such.

I don't know fitpack well enough to say, but in general there's a chance
that this may be helped by what I'm currently doing for Enthought. The
way of getting a version of SciPy running on .NET is essentially to
replace all f2py .pyf files with Cython .pyx files. That also adds a fast
Cython interface to the module, so that when writing Cython code there
will be no significant function-call overhead when going between Cython
and the Fortran code (that is, a .pxd file is provided).

If somebody wants to do this, we should get in touch so that we can work
something out and see if and how I can contribute with what I'm doing.

(BTW, the conversion is automatic with the aid of fwrap, but once the
conversion is done I contemplate removing the .pyf files and only having
the .pyx files checked in. Changes in the Fortran files can be merged
into the .pyx files using git.)

Dag Sverre
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From warren.weckesser at enthought.com  Sat Nov 13 18:22:35 2010
From: warren.weckesser at enthought.com (Warren Weckesser)
Date: Sat, 13 Nov 2010 17:22:35 -0600
Subject: [SciPy-Dev] FIR filter design - patch attached to ticket #457
Message-ID: 

For any of you interested in FIR filter design, feedback on the patch in
ticket #457 (http://projects.scipy.org/scipy/ticket/457) would be
appreciated.

Thanks,

Warren
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From charlesr.harris at gmail.com  Sat Nov 13 19:29:16 2010
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 13 Nov 2010 17:29:16 -0700
Subject: [SciPy-Dev] FIR filter design - patch attached to ticket #457
In-Reply-To: 
References: 
Message-ID: 

On Sat, Nov 13, 2010 at 4:22 PM, Warren Weckesser <
warren.weckesser at enthought.com> wrote:
> For any of you interested in FIR filter design, feedback on the patch in
> ticket #457 (http://projects.scipy.org/scipy/ticket/457) would be
> appreciated.

What about negative frequencies? Do you want to limit yourself to real
filters? I expect yes ;) But at some point I want to add my version of the
Remez algorithm which I use to design complex filters and it would be nice
to have a ready made template for the interface.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From charlesr.harris at gmail.com  Sat Nov 13 19:53:29 2010
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 13 Nov 2010 17:53:29 -0700
Subject: [SciPy-Dev] FIR filter design - patch attached to ticket #457
In-Reply-To: 
References: 
Message-ID: 

On Sat, Nov 13, 2010 at 5:29 PM, Charles R Harris wrote:
> On Sat, Nov 13, 2010 at 4:22 PM, Warren Weckesser <
> warren.weckesser at enthought.com> wrote:
>> For any of you interested in FIR filter design, feedback on the patch in
>> ticket #457 (http://projects.scipy.org/scipy/ticket/457) would be
>> appreciated.
>
> What about negative frequencies? Do you want to limit yourself to real
> filters? I expect yes ;) But at some point I want to add my version of the
> Remez algorithm which I use to design complex filters and it would be nice
> to have a ready made template for the interface.
The conversion isn't difficult, but it is repetitive and usually required which makes it something the software should do. Most filter specs are in frequency and it just makes life easier. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at enthought.com Sat Nov 13 21:38:42 2010 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Sat, 13 Nov 2010 20:38:42 -0600 Subject: [SciPy-Dev] FIR filter design - patch attached to ticket #457 In-Reply-To: References: Message-ID: On Sat, Nov 13, 2010 at 6:29 PM, Charles R Harris wrote: > > > On Sat, Nov 13, 2010 at 4:22 PM, Warren Weckesser < > warren.weckesser at enthought.com> wrote: > >> For any of you interested in FIR filter design, feedback on the patch in >> ticket #457 (http://projects.scipy.org/scipy/ticket/457) would be >> appreciated. >> >> > What about negative frequencies? Do you want to limit yourself to real > filters? I expect yes ;) > That is the intended use-case of the patch, yes. > But at some point I want to add my version of the Remez algorithm which I > use to design complex filters and it would be nice to have a ready made > template for the interface. > > How does the patch in #457 affect your plans for adding the Remez algorithm? Warren -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at enthought.com Sat Nov 13 21:46:48 2010 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Sat, 13 Nov 2010 20:46:48 -0600 Subject: [SciPy-Dev] FIR filter design - patch attached to ticket #457 In-Reply-To: References: Message-ID: On Sat, Nov 13, 2010 at 6:53 PM, Charles R Harris wrote: > > > On Sat, Nov 13, 2010 at 5:29 PM, Charles R Harris < > charlesr.harris at gmail.com> wrote: > >> >> >> On Sat, Nov 13, 2010 at 4:22 PM, Warren Weckesser < >> warren.weckesser at enthought.com> wrote: >> >>> For any of you interested in FIR filter design, feedback on the patch in >>> ticket #457 (http://projects.scipy.org/scipy/ticket/457) would be >>> appreciated. >>> >>> >> What about negative frequencies? Do you want to limit yourself to real >> filters? I expect yes ;) But at some point I want to add my version of the >> Remez algorithm which I use to design complex filters and it would be nice >> to have a ready made template for the interface. >> >> > Also, I've always found it easier to design with actual frequencies rather > than setting the sampling frequency to 1. The conversion isn't difficult, > but it is repetitive and usually required which makes it something the > software should do. Most filter specs are in frequency and it just makes > life easier. > That is what the `nyq` argument is for: specify the frequencies in whatever units you prefer (e.g. Hz, samples per year, scores of samples per fortnight, etc), and give the `nyq` keyword argument in the same units. Warren -------------- next part -------------- An HTML attachment was scrubbed... 
From charlesr.harris at gmail.com  Sat Nov 13 21:48:07 2010
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 13 Nov 2010 19:48:07 -0700
Subject: [SciPy-Dev] FIR filter design - patch attached to ticket #457
In-Reply-To: 
References: 
Message-ID: 

On Sat, Nov 13, 2010 at 7:38 PM, Warren Weckesser <
warren.weckesser at enthought.com> wrote:
> On Sat, Nov 13, 2010 at 6:29 PM, Charles R Harris <
> charlesr.harris at gmail.com> wrote:
>> On Sat, Nov 13, 2010 at 4:22 PM, Warren Weckesser <
>> warren.weckesser at enthought.com> wrote:
>>> For any of you interested in FIR filter design, feedback on the patch in
>>> ticket #457 (http://projects.scipy.org/scipy/ticket/457) would be
>>> appreciated.
>>
>> What about negative frequencies? Do you want to limit yourself to real
>> filters? I expect yes ;)
>
> That is the intended use-case of the patch, yes.
>
>> But at some point I want to add my version of the Remez algorithm which I
>> use to design complex filters and it would be nice to have a ready made
>> template for the interface.
>
> How does the patch in #457 affect your plans for adding the Remez
> algorithm?

I'd rather copy someone else's interface than write it myself; I'm lazy
that way. Having interfaces that are at least similar is also helpful to
users. As to actually adding the algorithm, the code has been sitting
around since September 2009 and I'll get around to polishing it up and
adding tests real soon now ;) Although I don't expect to actually do
anything until after the new year.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From charlesr.harris at gmail.com  Sat Nov 13 21:53:16 2010
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sat, 13 Nov 2010 19:53:16 -0700
Subject: [SciPy-Dev] FIR filter design - patch attached to ticket #457
In-Reply-To: 
References: 
Message-ID: 

On Sat, Nov 13, 2010 at 7:46 PM, Warren Weckesser <
warren.weckesser at enthought.com> wrote:
> On Sat, Nov 13, 2010 at 6:53 PM, Charles R Harris <
> charlesr.harris at gmail.com> wrote:
>> On Sat, Nov 13, 2010 at 5:29 PM, Charles R Harris <
>> charlesr.harris at gmail.com> wrote:
>>> On Sat, Nov 13, 2010 at 4:22 PM, Warren Weckesser <
>>> warren.weckesser at enthought.com> wrote:
>>>> For any of you interested in FIR filter design, feedback on the patch in
>>>> ticket #457 (http://projects.scipy.org/scipy/ticket/457) would be
>>>> appreciated.
>>>
>>> What about negative frequencies? Do you want to limit yourself to real
>>> filters? I expect yes ;) But at some point I want to add my version of the
>>> Remez algorithm which I use to design complex filters and it would be nice
>>> to have a ready made template for the interface.
>>
>> Also, I've always found it easier to design with actual frequencies rather
>> than setting the sampling frequency to 1. The conversion isn't difficult,
>> but it is repetitive and usually required, which makes it something the
>> software should do. Most filter specs are in frequency and it just makes
>> life easier.
>
> That is what the `nyq` argument is for: specify the frequencies in whatever
> units you prefer (e.g. Hz, samples per year, scores of samples per
> fortnight, etc), and give the `nyq` keyword argument in the same units.

Ah, I didn't see that, it's in firwin2. Great.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
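Warren's `nyq` convention can be shown with a short sketch, assuming the firwin2 interface from the #457 patch as it later appeared in scipy.signal; the sampling rate, band edges and tap count here are invented for illustration:

import numpy as np
from scipy import signal

fs = 1000.0       # sampling rate in Hz (invented for this example)
nyq = fs / 2.0    # Nyquist frequency, in the same units as `freq`

# Lowpass: unity gain up to 100 Hz, rolling off to zero by 150 Hz.
# `freq` must start at 0 and end at the Nyquist frequency.
freq = [0.0, 100.0, 150.0, nyq]
gain = [1.0, 1.0, 0.0, 0.0]
taps = signal.firwin2(75, freq, gain, nyq=nyq)

# Inspect the realized response on a Hz axis instead of radians/sample.
w, h = signal.freqz(taps, worN=2000)
f_hz = w * fs / (2 * np.pi)
print(f_hz[np.abs(h) < 0.5][0])   # roughly the half-gain point, near 125 Hz

With the default nyq=1.0 the same design would require dividing every band edge by 500 by hand, which is exactly the repetitive conversion Chuck describes.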
From pgmdevlist at gmail.com  Sun Nov 14 09:15:40 2010
From: pgmdevlist at gmail.com (Pierre GM)
Date: Sun, 14 Nov 2010 15:15:40 +0100
Subject: [SciPy-Dev] Bug in TimeSeries.dump?
In-Reply-To: 
References: 
Message-ID: 

On Nov 8, 2010, at 1:59 PM, Dave Hirschfeld wrote:
>
> Looks like quite a serious bug in the TimeSeries.dump function:
>
Indeed, indeed. Looks like there's a problem of order:

>>> ctrl=ts.time_series(np.column_stack([np.random.randn(100) for _ in range(24)]), start_date='01-Jan-2010', freq='D')
>>> ctrl.dump('test.npy')
>>> test = np.load('test.npy')
>>> np.allclose(test.data.T.ravel(),ctrl.data.ravel())
True

mmh... Wonder where it can come from. I'll keep you posted.

From charlesr.harris at gmail.com  Sun Nov 14 12:11:48 2010
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 14 Nov 2010 10:11:48 -0700
Subject: [SciPy-Dev] Weave patch applied by ubuntu.
Message-ID: 

I note that of the three new inclusions of only one is currently present
in scipy. Can someone familiar with the code confirm that all the
includes are needed.

Chuck

diff -u python-scipy-0.7.0/debian/changelog python-scipy-0.7.0/debian/changelog
--- python-scipy-0.7.0/debian/changelog
+++ python-scipy-0.7.0/debian/changelog
@@ -1,3 +1,9 @@
+python-scipy (0.7.0-2ubuntu1) lucid; urgency=low
+
+  * Fix scipy.weave.inline compilations (LP: #302649)
+
+ -- Sameer Morar  Wed, 29 Sep 2010 15:34:04 +0200
+
 python-scipy (0.7.0-2) unstable; urgency=medium
 
   [ Julien Lavergne ]
only in patch2:
unchanged:
--- python-scipy-0.7.0.orig/scipy/weave/blitz/blitz/mathfunc.h
+++ python-scipy-0.7.0/scipy/weave/blitz/blitz/mathfunc.h
@@ -12,6 +12,8 @@
 #include 
 #endif
 
+#include 
+
 BZ_NAMESPACE(blitz)
 
 // abs(P_numtype1)    Absolute value
only in patch2:
unchanged:
--- python-scipy-0.7.0.orig/scipy/weave/blitz/blitz/blitz.h
+++ python-scipy-0.7.0/scipy/weave/blitz/blitz/blitz.h
@@ -65,6 +65,8 @@
 
 #define BZ_THROW // Needed in 
 
+#include 
+
 BZ_NAMESPACE(blitz)
 
 #ifdef BZ_HAVE_STD
only in patch2:
unchanged:
--- python-scipy-0.7.0.orig/scipy/weave/blitz/blitz/prettyprint.h
+++ python-scipy-0.7.0/scipy/weave/blitz/blitz/prettyprint.h
@@ -22,6 +22,8 @@
 #ifndef BZ_PRETTYPRINT_H
 #define BZ_PRETTYPRINT_H
 
+#include 
+
 BZ_NAMESPACE(blitz)
 
 class prettyPrintFormat {
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zbyszek at in.waw.pl  Sun Nov 14 18:02:56 2010
From: zbyszek at in.waw.pl (Zbyszek Szmek)
Date: Mon, 15 Nov 2010 00:02:56 +0100
Subject: [SciPy-Dev] segfault in test_qhull.TestRidgeIter2D.test_complicated
Message-ID: <20101114230256.GA5801@in.waw.pl>

Hi,
I'm getting a segfault when running scipy.test()...

scipy: trunk at 6894
numpy: git tip (a07ac0f ENH: Add '-' to separate git hash in version.)
python: 3.1 (Python 3.1.2 (release31-maint, Sep 26 2010, 16:45:30) [GCC 4.4.5] on linux2)
        or 3.2 (from current hg)

The backtrace is pretty impressive :)

> import scipy; scipy.test(verbose=10)
...
test_qhull.TestRidgeIter2D.test_complicated ...
Program received signal SIGSEGV, Segmentation fault.
0x00007ffff2912f5c in qh_setappend (setp=0x48f4f88, newelem=0x48f4e80) at scipy/spatial/qhull/src/qset.c:138 138 *(endp= &((*setp)->e[(*sizep)++ - 1].p))= newelem; (gdb) bt #0 0x00007ffff2912f5c in qh_setappend (setp=0x48f4f88, newelem=0x48f4e80) at scipy/spatial/qhull/src/qset.c:138 #1 0x00007ffff290b21f in qh_vertexneighbors () at scipy/spatial/qhull/src/poly2.c:3105 #2 0x00007ffff2932245 in qh_mergefacet (facet1=0x48f4d68, facet2=0x48f5060, mindist=0x0, maxdist=0x0, mergeapex=1) at scipy/spatial/qhull/src/merge.c:2325 #3 0x00007ffff2932678 in qh_mergecycle_all (facetlist=, wasmerge=0x7fffffff73ac) at scipy/spatial/qhull/src/merge.c:1856 #4 0x00007ffff2933bd0 in qh_premerge (apex=, maxcentrum=, maxangle=) at scipy/spatial/qhull/src/merge.c:70 #5 0x00007ffff291a005 in qh_addpoint (furthest=0x463b668, facet=0x48f5128, checkdist=) at scipy/spatial/qhull/src/qhull.c:235 #6 0x00007ffff291a3d3 in qh_buildhull () at scipy/spatial/qhull/src/qhull.c:394 #7 0x00007ffff291a9ad in qh_qhull () at scipy/spatial/qhull/src/qhull.c:66 #8 0x00007ffff2913d75 in qh_new_qhull (dim=, numpoints=, points=, ismalloc=, qhull_cmd=, outfile=, errfile=0x7ffff6aff860) at scipy/spatial/qhull/src/user.c:156 #9 0x00007ffff28eec10 in __pyx_pf_5scipy_7spatial_5qhull__construct_delaunay (__pyx_self=, __pyx_v_points=) at scipy/spatial/qhull.c:1249 #10 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #11 0x00007ffff28ecc9b in __pyx_pf_5scipy_7spatial_5qhull_8Delaunay___init__ (__pyx_self=, __pyx_args=, __pyx_kwds=) at scipy/spatial/qhull.c:4067 #12 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #13 0x000000000053fbbc in method_call (func=, arg=(,), kw=0x0) at ../Objects/classobject.c:319 #14 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #15 0x00000000004353d8 in slot_tp_init (self=, args=(,), kwds=0x0) at ../Objects/typeobject.c:5206 #16 0x0000000000429e1f in type_call (type=0x1dba640, args=(,), kwds=0x0) at ../Objects/typeobject.c:679 #17 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #18 0x0000000000465a5e in do_call (f= Frame 0x48e6610, for file <__main__.PyUnicodeObjectPtr object at 0x7f1062ae6d10>, line 118, in <__main__.PyUnicodeObjectPtr object at 0x7f1062ae6cd0> (self=, points=), throwflag=) at ../Python/ceval.c:3982 #19 call_function (f= Frame 0x48e6610, for file <__main__.PyUnicodeObjectPtr object at 0x7f1062ae6c50>, line 118, in <__main__.PyUnicodeObjectPtr object at 0x7f1062ae6cd0> (self=, points=), throwflag=) at ../Python/ceval.c:3785 #20 PyEval_EvalFrameEx (f= Frame 0x48e6610, for file <__main__.PyUnicodeObjectPtr object at 0x7f1062ae6d10>, line 118, in <__main__.PyUnicodeObjectPtr object at 0x7f1062ae6cd0> (self=, points=), throwflag=) at ../Python/ceval.c:2548 #21 0x0000000000466ac5 in PyEval_EvalCodeEx (co=0x42310b0, globals=, locals=, args=0x1, argcount=1, kws=, kwcount=0, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0) at ../Python/ceval.c:3198 #22 0x00000000005528b4 in function_call (func=, arg=(,), kw=0x0) at ../Objects/funcobject.c:630 #23 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #24 0x000000000046427e in ext_do_call (f=, throwflag=) at ../Python/ceval.c:4077 #25 PyEval_EvalFrameEx (f=, throwflag=) at ../Python/ceval.c:2589 #26 0x0000000000465b2f in fast_function (f=, throwflag=) at ../Python/ceval.c:3850 #27 call_function (f=, throwflag=) at ../Python/ceval.c:3783 #28 PyEval_EvalFrameEx (f=, throwflag=) at 
../Python/ceval.c:2548 #29 0x0000000000466ac5 in PyEval_EvalCodeEx (co=0xf372b0, globals=, locals=, args=0x2, argcount=1, kws=, kwcount=0, defs=0xfbd5a8, defcount=1, kwdefs=0x0, closure=0x0) at ../Python/ceval.c:3198 #30 0x00000000005528b4 in function_call (func=, arg=, kw={}) at ../Objects/funcobject.c:630 #31 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #32 0x000000000046427e in ext_do_call (f=, throwflag=) at ../Python/ceval.c:4077 #33 PyEval_EvalFrameEx (f=, throwflag=) at ../Python/ceval.c:2589 #34 0x0000000000466ac5 in PyEval_EvalCodeEx (co=0xf37530, globals=, locals=, args=0x1, argcount=1, kws=, kwcount=0, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0) at ../Python/ceval.c:3198 #35 0x00000000005528b4 in function_call (func=, arg=, kw=0x0) at ../Objects/funcobject.c:630 #36 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #37 0x000000000053fbbc in method_call (func=, arg=, kw=0x0) at ../Objects/classobject.c:319 #38 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #39 0x0000000000434028 in slot_tp_call (self=, args=, kwds=0x0) at ../Objects/typeobject.c:4946 #40 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #41 0x0000000000465a5e in do_call (f=, throwflag=) at ../Python/ceval.c:3982 #42 call_function (f=, throwflag=) at ../Python/ceval.c:3785 #43 PyEval_EvalFrameEx (f=, throwflag=) at ../Python/ceval.c:2548 #44 0x0000000000465b2f in fast_function (f=, throwflag=) at ../Python/ceval.c:3850 #45 call_function (f=, throwflag=) at ../Python/ceval.c:3783 #46 PyEval_EvalFrameEx (f=, throwflag=) at ../Python/ceval.c:2548 #47 0x0000000000466ac5 in PyEval_EvalCodeEx (co=0x18ebab0, globals=, locals=, args=0x2, argcount=1, kws=, kwcount=0, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0) at ../Python/ceval.c:3198 #48 0x00000000005528b4 in function_call (func=, arg=, kw={}) at ../Objects/funcobject.c:630 #49 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #50 0x000000000046427e in ext_do_call (f=, throwflag=) at ../Python/ceval.c:4077 #51 PyEval_EvalFrameEx (f=, throwflag=) at ../Python/ceval.c:2589 #52 0x0000000000466ac5 in PyEval_EvalCodeEx (co=0x1898830, globals=, locals=, args=0x1, argcount=1, kws=, kwcount=0, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0) at ../Python/ceval.c:3198 #53 0x00000000005528b4 in function_call (func=, arg=, kw=0x0) at ../Objects/funcobject.c:630 #54 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #55 0x000000000053fbbc in method_call (func=, arg=, kw=0x0) at ../Objects/classobject.c:319 #56 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #57 0x0000000000434028 in slot_tp_call (self=, args=, kwds=0x0) at ../Objects/typeobject.c:4946 #58 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #59 0x0000000000465a5e in do_call (f=, throwflag=) at ../Python/ceval.c:3982 #60 call_function (f=, throwflag=) at ../Python/ceval.c:3785 #61 PyEval_EvalFrameEx (f=, throwflag=) at ../Python/ceval.c:2548 #62 0x0000000000466ac5 in PyEval_EvalCodeEx (co=0x1908c30, globals=, locals=, args=0x2, argcount=1, kws=, kwcount=0, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0) at ../Python/ceval.c:3198 #63 0x00000000005528b4 in function_call (func=, arg=, kw={}) at ../Objects/funcobject.c:630 #64 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #65 0x000000000046427e in 
ext_do_call (f=, throwflag=) at ../Python/ceval.c:4077 #66 PyEval_EvalFrameEx (f=, throwflag=) at ../Python/ceval.c:2589 #67 0x0000000000466ac5 in PyEval_EvalCodeEx (co=0x1908b30, globals=, locals=, args=0x1, argcount=1, kws=, kwcount=0, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0) at ../Python/ceval.c:3198 #68 0x00000000005528b4 in function_call (func=, arg=, kw=0x0) at ../Objects/funcobject.c:630 #69 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #70 0x000000000053fbbc in method_call (func=, arg=, kw=0x0) at ../Objects/classobject.c:319 #71 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #72 0x0000000000434028 in slot_tp_call (self=, args=, kwds=0x0) at ../Objects/typeobject.c:4946 #73 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #74 0x0000000000465a5e in do_call (f=, throwflag=) at ../Python/ceval.c:3982 #75 call_function (f=, throwflag=) at ../Python/ceval.c:3785 #76 PyEval_EvalFrameEx (f=, throwflag=) at ../Python/ceval.c:2548 #77 0x0000000000466ac5 in PyEval_EvalCodeEx (co=0x1908c30, globals=, locals=, args=0x2, argcount=1, kws=, kwcount=0, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0) at ../Python/ceval.c:3198 #78 0x00000000005528b4 in function_call (func=, arg=, kw={}) at ../Objects/funcobject.c:630 #79 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #80 0x000000000046427e in ext_do_call (f=, throwflag=) at ../Python/ceval.c:4077 #81 PyEval_EvalFrameEx (f=, throwflag=) at ../Python/ceval.c:2589 #82 0x0000000000466ac5 in PyEval_EvalCodeEx (co=0x1908b30, globals=, locals=, args=0x1, argcount=1, kws=, kwcount=0, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0) at ../Python/ceval.c:3198 #83 0x00000000005528b4 in function_call (func=, arg=, kw=0x0) at ../Objects/funcobject.c:630 #84 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #85 0x000000000053fbbc in method_call (func=, arg=, kw=0x0) at ../Objects/classobject.c:319 #86 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #87 0x0000000000434028 in slot_tp_call (self=, args=, kwds=0x0) at ../Objects/typeobject.c:4946 #88 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #89 0x0000000000465a5e in do_call (f=, throwflag=) at ../Python/ceval.c:3982 #90 call_function (f=, throwflag=) at ../Python/ceval.c:3785 #91 PyEval_EvalFrameEx (f=, throwflag=) at ../Python/ceval.c:2548 #92 0x0000000000466ac5 in PyEval_EvalCodeEx (co=0x1908c30, globals=, locals=, args=0x2, argcount=1, kws=, kwcount=0, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0) at ../Python/ceval.c:3198 #93 0x00000000005528b4 in function_call (func=, arg=, kw={}) at ../Objects/funcobject.c:630 #94 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #95 0x000000000046427e in ext_do_call (f=, throwflag=) at ../Python/ceval.c:4077 #96 PyEval_EvalFrameEx (f=, throwflag=) at ../Python/ceval.c:2589 #97 0x0000000000466ac5 in PyEval_EvalCodeEx (co=0x1908b30, globals=, locals=, args=0x1, argcount=1, kws=, kwcount=0, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0) at ../Python/ceval.c:3198 #98 0x00000000005528b4 in function_call (func=, arg=, kw=0x0) at ../Objects/funcobject.c:630 #99 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #100 0x000000000053fbbc in method_call (func=, arg=, kw=0x0) at ../Objects/classobject.c:319 #101 0x0000000000528a97 in PyObject_Call (func=, 
arg=, kw=) at ../Objects/abstract.c:2160 #102 0x0000000000434028 in slot_tp_call (self=, args=, kwds=0x0) at ../Objects/typeobject.c:4946 #103 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #104 0x0000000000465a5e in do_call (f=, throwflag=) at ../Python/ceval.c:3982 #105 call_function (f=, throwflag=) at ../Python/ceval.c:3785 #106 PyEval_EvalFrameEx (f=, throwflag=) at ../Python/ceval.c:2548 #107 0x0000000000466ac5 in PyEval_EvalCodeEx (co=0x1908c30, globals=, locals=, args=0x2, argcount=1, kws=, kwcount=0, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0) at ../Python/ceval.c:3198 #108 0x00000000005528b4 in function_call (func=, arg=, kw={}) at ../Objects/funcobject.c:630 #109 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #110 0x000000000046427e in ext_do_call (f=, throwflag=) at ../Python/ceval.c:4077 #111 PyEval_EvalFrameEx (f=, throwflag=) at ../Python/ceval.c:2589 #112 0x0000000000466ac5 in PyEval_EvalCodeEx (co=0x1908b30, globals=, locals=, args=0x1, argcount=1, kws=, kwcount=0, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0) at ../Python/ceval.c:3198 #113 0x00000000005528b4 in function_call (func=, arg=, kw=0x0) at ../Objects/funcobject.c:630 #114 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #115 0x000000000053fbbc in method_call (func=, arg=, kw=0x0) at ../Objects/classobject.c:319 #116 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #117 0x0000000000434028 in slot_tp_call (self=, args=, kwds=0x0) at ../Objects/typeobject.c:4946 #118 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #119 0x0000000000465a5e in do_call (f=, throwflag=) at ../Python/ceval.c:3982 #120 call_function (f=, throwflag=) at ../Python/ceval.c:3785 #121 PyEval_EvalFrameEx (f=, throwflag=) at ../Python/ceval.c:2548 #122 0x0000000000466ac5 in PyEval_EvalCodeEx (co=0x1908c30, globals=, locals=, args=0x2, argcount=1, kws=, kwcount=0, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0) at ../Python/ceval.c:3198 #123 0x00000000005528b4 in function_call (func=, arg=, kw={}) at ../Objects/funcobject.c:630 #124 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #125 0x000000000046427e in ext_do_call (f=, throwflag=) at ../Python/ceval.c:4077 #126 PyEval_EvalFrameEx (f=, throwflag=) at ../Python/ceval.c:2589 #127 0x0000000000466ac5 in PyEval_EvalCodeEx (co=0x1908b30, globals=, locals=, args=0x1, argcount=1, kws=, kwcount=0, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0) at ../Python/ceval.c:3198 #128 0x00000000005528b4 in function_call (func=, arg=, kw=0x0) at ../Objects/funcobject.c:630 #129 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #130 0x000000000053fbbc in method_call (func=, arg=, kw=0x0) at ../Objects/classobject.c:319 #131 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #132 0x0000000000434028 in slot_tp_call (self=, args=, kwds=0x0) at ../Objects/typeobject.c:4946 #133 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #134 0x0000000000465a5e in do_call (f=, throwflag=) at ../Python/ceval.c:3982 #135 call_function (f=, throwflag=) at ../Python/ceval.c:3785 #136 PyEval_EvalFrameEx (f=, throwflag=) at ../Python/ceval.c:2548 #137 0x0000000000466ac5 in PyEval_EvalCodeEx (co=0x1908c30, globals=, locals=, args=0x2, argcount=1, kws=, kwcount=0, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0) at 
../Python/ceval.c:3198 #138 0x00000000005528b4 in function_call (func=, arg=, kw={}) at ../Objects/funcobject.c:630 #139 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #140 0x000000000046427e in ext_do_call (f=, throwflag=) at ../Python/ceval.c:4077 #141 PyEval_EvalFrameEx (f=, throwflag=) at ../Python/ceval.c:2589 #142 0x0000000000466ac5 in PyEval_EvalCodeEx (co=0x1908b30, globals=, locals=, args=0x1, argcount=1, kws=, kwcount=0, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0) at ../Python/ceval.c:3198 #143 0x00000000005528b4 in function_call (func=, arg=, kw=0x0) at ../Objects/funcobject.c:630 #144 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #145 0x000000000053fbbc in method_call (func=, arg=, kw=0x0) at ../Objects/classobject.c:319 #146 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #147 0x0000000000434028 in slot_tp_call (self=, args=, kwds=0x0) at ../Objects/typeobject.c:4946 #148 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #149 0x0000000000465a5e in do_call (f=, throwflag=) at ../Python/ceval.c:3982 #150 call_function (f=, throwflag=) at ../Python/ceval.c:3785 #151 PyEval_EvalFrameEx (f=, throwflag=) at ../Python/ceval.c:2548 #152 0x0000000000465b2f in fast_function (f=, throwflag=) at ../Python/ceval.c:3850 #153 call_function (f=, throwflag=) at ../Python/ceval.c:3783 #154 PyEval_EvalFrameEx (f=, throwflag=) at ../Python/ceval.c:2548 #155 0x0000000000465b2f in fast_function (f=, throwflag=) at ../Python/ceval.c:3850 #156 call_function (f=, throwflag=) at ../Python/ceval.c:3783 #157 PyEval_EvalFrameEx (f=, throwflag=) at ../Python/ceval.c:2548 #158 0x0000000000466ac5 in PyEval_EvalCodeEx (co=0xf52830, globals=, locals=, args=0x7, argcount=1, kws=, kwcount=5, defs=0xf44f70, defcount=6, kwdefs=0x0, closure=0x0) at ../Python/ceval.c:3198 #159 0x0000000000465972 in fast_function (f=, throwflag=) at ../Python/ceval.c:3860 #160 call_function (f=, throwflag=) at ../Python/ceval.c:3783 #161 PyEval_EvalFrameEx (f=, throwflag=) at ../Python/ceval.c:2548 #162 0x0000000000466ac5 in PyEval_EvalCodeEx (co=0x1553730, globals=, locals=, args=0xb, argcount=1, kws=, kwcount=3, defs=0x196c8e8, defcount=10, kwdefs=0x0, closure=0x0) at ../Python/ceval.c:3198 #163 0x00000000005528b4 in function_call (func=, arg=, kw=) at ../Objects/funcobject.c:630 #164 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #165 0x000000000053fbbc in method_call (func=, arg=(), kw=) at ../Objects/classobject.c:319 #166 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #167 0x00000000004353d8 in slot_tp_init (self=, args=(), kwds=) at ../Objects/typeobject.c:5206 #168 0x0000000000429e1f in type_call (type=0x1b5e650, args=(), kwds=) at ../Objects/typeobject.c:679 #169 0x0000000000528a97 in PyObject_Call (func=, arg=, kw=) at ../Objects/abstract.c:2160 #170 0x0000000000465a5e in do_call (f=, throwflag=) at ../Python/ceval.c:3982 #171 call_function (f=, throwflag=) at ../Python/ceval.c:3785 #172 PyEval_EvalFrameEx (f=, throwflag=) at ../Python/ceval.c:2548 #173 0x0000000000466ac5 in PyEval_EvalCodeEx (co=0x100a130, globals=, locals=, args=0x6, argcount=1, kws=, kwcount=1, defs=0xfc4ae8, defcount=5, kwdefs=0x0, closure=0x0) at ../Python/ceval.c:3198 #174 0x0000000000465972 in fast_function (f=Frame 0xa9af30, for file <__main__.PyUnicodeObjectPtr object at 0x7f106037cb90>, line 1, in 
<__main__.PyUnicodeObjectPtr object at 0x7f106037cb10> (), throwflag=) at ../Python/ceval.c:3860 #175 call_function (f=Frame 0xa9af30, for file <__main__.PyUnicodeObjectPtr object at 0x7f106037c910>, line 1, in <__main__.PyUnicodeObjectPtr object at 0x7f106037cb10> (), throwflag=) at ../Python/ceval.c:3783 #176 PyEval_EvalFrameEx (f=Frame 0xa9af30, for file <__main__.PyUnicodeObjectPtr object at 0x7f106037cb90>, line 1, in <__main__.PyUnicodeObjectPtr object at 0x7f106037cb10> (), throwflag=) at ../Python/ceval.c:2548 #177 0x0000000000466ac5 in PyEval_EvalCodeEx (co=0x9e39b0, globals=, locals=, args=0x0, argcount=1, kws=, kwcount=0, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0) at ../Python/ceval.c:3198 #178 0x0000000000466bcb in PyEval_EvalCode (co=0x48f4f88, globals=, locals=) at ../Python/ceval.c:668 #179 0x0000000000489a20 in run_mod (command=, flags=) at ../Python/pythonrun.c:1708 #180 PyRun_StringFlags (command=, flags=) at ../Python/pythonrun.c:1642 #181 PyRun_SimpleStringFlags (command=, flags=) at ../Python/pythonrun.c:1214 #182 0x000000000049dfa9 in Py_Main (argc=, argv=) at ../Modules/main.c:528 #183 0x000000000041d0ae in main (argc=, argv=) at ../Modules/python.c:152 (gdb) p **setp $2 = {maxsize = 0, e = {{p = 0x48f51f0, i = 76501488}}} The qset code is pretty opaque. I'm posting this here in hope that somebody who understands the qset code can point me in the right direction. Thanks, Zbyszek From zbyszek at in.waw.pl Sun Nov 14 18:33:49 2010 From: zbyszek at in.waw.pl (Zbyszek Szmek) Date: Mon, 15 Nov 2010 00:33:49 +0100 Subject: [SciPy-Dev] Weave patch applied by ubuntu. In-Reply-To: References: Message-ID: <20101114233349.GB5801@in.waw.pl> On Sun, Nov 14, 2010 at 10:11:48AM -0700, Charles R Harris wrote: > I note that of the three new inclusions of only one is > currently present in scipy. Can someone familiar > with the code confirm that all the includes are needed. > > Chuck > > diff -u python-scipy-0.7.0/debian/changelog python-scipy-0.7.0/debian/changelog > --- python-scipy-0.7.0/debian/changelog > +++ python-scipy-0.7.0/debian/changelog > @@ -1,3 +1,9 @@ > +python-scipy (0.7.0-2ubuntu1) lucid; urgency=low > + > + * Fix scipy.weave.inline compilations (LP: #302649) > + > + -- Sameer Morar Wed, 29 Sep 2010 15:34:04 +0200 > + > python-scipy (0.7.0-2) unstable; urgency=medium > > [ Julien Lavergne ] > only in patch2: > unchanged: > --- python-scipy-0.7.0.orig/scipy/weave/blitz/blitz/mathfunc.h > +++ python-scipy-0.7.0/scipy/weave/blitz/blitz/mathfunc.h > @@ -12,6 +12,8 @@ > #include > #endif > > +#include > + prettyprint.h is included a few lines above. Therefore at least this one isn't necessary. 
> BZ_NAMESPACE(blitz) > > // abs(P_numtype1) Absolute value > only in patch2: > unchanged: > --- python-scipy-0.7.0.orig/scipy/weave/blitz/blitz/blitz.h > +++ python-scipy-0.7.0/scipy/weave/blitz/blitz/blitz.h > @@ -65,6 +65,8 @@ > > #define BZ_THROW // Needed in > > +#include > + > BZ_NAMESPACE(blitz) > > #ifdef BZ_HAVE_STD > only in patch2: > unchanged: > --- python-scipy-0.7.0.orig/scipy/weave/blitz/blitz/prettyprint.h > +++ python-scipy-0.7.0/scipy/weave/blitz/blitz/prettyprint.h > @@ -22,6 +22,8 @@ > #ifndef BZ_PRETTYPRINT_H > #define BZ_PRETTYPRINT_H > > +#include > + > BZ_NAMESPACE(blitz) > > class prettyPrintFormat { > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev From zbyszek at in.waw.pl Sun Nov 14 19:15:07 2010 From: zbyszek at in.waw.pl (Zbyszek Szmek) Date: Mon, 15 Nov 2010 01:15:07 +0100 Subject: [SciPy-Dev] segfault in test_qhull.TestRidgeIter2D.test_complicated In-Reply-To: <20101114230256.GA5801@in.waw.pl> References: <20101114230256.GA5801@in.waw.pl> Message-ID: <20101115001507.GC5801@in.waw.pl> Smaller testcase: Python 3.1.2 (release31-maint, Sep 26 2010, 16:45:30) [GCC 4.4.5] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy as np; from scipy.spatial import qhull >>> qhull.Delaunay(np.array([(0, 0), (0, 1), (1,0), (1,1) ])) [1] 11287 segmentation fault (core dumped) python3.1 - Zbyszek From pav at iki.fi Sun Nov 14 19:54:46 2010 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 15 Nov 2010 00:54:46 +0000 (UTC) Subject: [SciPy-Dev] segfault in test_qhull.TestRidgeIter2D.test_complicated References: <20101114230256.GA5801@in.waw.pl> <20101115001507.GC5801@in.waw.pl> Message-ID: On Mon, 15 Nov 2010 01:15:07 +0100, Zbyszek Szmek wrote: [clip] > >>> import numpy as np; from scipy.spatial import qhull > >>> qhull.Delaunay(np.array([(0, 0), (0, 1), (1,0), (1,1) ])) Works for me on Python 2.6 and 3.1, with Numpy 1.5.x branch. 1) What compiler you used? 2) Does it occur on Python 2 for you? 3) Do you get it with Numpy 1.5.x? -- Pauli Virtanen From pav at iki.fi Sun Nov 14 20:07:26 2010 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 15 Nov 2010 01:07:26 +0000 (UTC) Subject: [SciPy-Dev] segfault in test_qhull.TestRidgeIter2D.test_complicated References: <20101114230256.GA5801@in.waw.pl> <20101115001507.GC5801@in.waw.pl> Message-ID: On Mon, 15 Nov 2010 00:54:46 +0000, Pauli Virtanen wrote: > On Mon, 15 Nov 2010 01:15:07 +0100, Zbyszek Szmek wrote: [clip] >> >>> import numpy as np; from scipy.spatial import qhull >> >>> qhull.Delaunay(np.array([(0, 0), (0, 1), (1,0), (1,1) ])) > > Works for me on Python 2.6 and 3.1, with Numpy 1.5.x branch. > > 1) What compiler you used? > > 2) Does it occur on Python 2 for you? > > 3) Do you get it with Numpy 1.5.x? Cannot reproduce it on the Numpy 2 branch either: Python 3.1.2 (release31-maint, Sep 17 2010, 20:27:33) [GCC 4.4.5] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy as np; from scipy.spatial import qhull; print("AAAAA"); qhull.Delaunay(np.array([(0, 0), (0, 1), (1,0), (1,1) ])); print("BBBBB") AAAAAA BBBBB >>> np.__version__ '2.0.0.dev-d1a184c' Valgrind doesn't show anything suspicious. My gcc version used for compiling Numpy and Scipy was 4.4.5. 
-- Pauli Virtanen From jhn at phys.au.dk Mon Nov 15 05:10:40 2010 From: jhn at phys.au.dk (Jens Nielsen) Date: Mon, 15 Nov 2010 11:10:40 +0100 Subject: [SciPy-Dev] segfault in test_qhull.TestRidgeIter2D.test_complicated In-Reply-To: References: <20101114230256.GA5801@in.waw.pl> <20101115001507.GC5801@in.waw.pl> Message-ID: I get a segmentation fault at the same point on ubuntu 10.10 both 64 bit and 32 bit versions but only when linking against atlas. Linking against blas and lapack only is fine. However using python 2.6 everything is fine even when using atlas. Jens Nielsen On Mon, Nov 15, 2010 at 2:07 AM, Pauli Virtanen wrote: > On Mon, 15 Nov 2010 00:54:46 +0000, Pauli Virtanen wrote: >> On Mon, 15 Nov 2010 01:15:07 +0100, Zbyszek Szmek wrote: [clip] >>> >>> import numpy as np; from scipy.spatial import qhull >>> >>> qhull.Delaunay(np.array([(0, 0), (0, 1), (1,0), (1,1) ])) >> >> Works for me on Python 2.6 and 3.1, with Numpy 1.5.x branch. >> >> 1) What compiler you used? >> >> 2) Does it occur on Python 2 for you? >> >> 3) Do you get it with Numpy 1.5.x? > > Cannot reproduce it on the Numpy 2 branch either: > > Python 3.1.2 (release31-maint, Sep 17 2010, 20:27:33) > [GCC 4.4.5] on linux2 > Type "help", "copyright", "credits" or "license" for more information. >>>> import numpy as np; from scipy.spatial import qhull; print("AAAAA"); > qhull.Delaunay(np.array([(0, 0), (0, 1), (1,0), (1,1) ])); print("BBBBB") > AAAAAA > > BBBBB >>>> np.__version__ > '2.0.0.dev-d1a184c' > > Valgrind doesn't show anything suspicious. My gcc version used for > compiling Numpy and Scipy was 4.4.5. > > -- > Pauli Virtanen > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > From zbyszek at in.waw.pl Mon Nov 15 06:49:04 2010 From: zbyszek at in.waw.pl (Zbyszek Szmek) Date: Mon, 15 Nov 2010 12:49:04 +0100 Subject: [SciPy-Dev] segfault in test_qhull.TestRidgeIter2D.test_complicated In-Reply-To: References: <20101114230256.GA5801@in.waw.pl> <20101115001507.GC5801@in.waw.pl> Message-ID: <20101115114904.GD5801@in.waw.pl> On Mon, Nov 15, 2010 at 11:10:40AM +0100, Jens Nielsen wrote: > I get a segmentation fault at the same point on ubuntu 10.10 both 64 > bit and 32 bit versions > but only when linking against atlas. Linking against blas and lapack > only is fine. > > However using python 2.6 everything is fine even when using atlas. I'm running on 64 bit, and I have atlas installed. Will try without atlas later. > On Mon, Nov 15, 2010 at 2:07 AM, Pauli Virtanen wrote: > > On Mon, 15 Nov 2010 00:54:46 +0000, Pauli Virtanen wrote: > >> On Mon, 15 Nov 2010 01:15:07 +0100, Zbyszek Szmek wrote: [clip] > >>> >>> import numpy as np; from scipy.spatial import qhull > >>> >>> qhull.Delaunay(np.array([(0, 0), (0, 1), (1,0), (1,1) ])) > >> > >> Works for me on Python 2.6 and 3.1, with Numpy 1.5.x branch. > >> > >> 1) What compiler you used? gcc (Debian 4.4.5-6) 4.4.5 > >> 2) Does it occur on Python 2 for you? No. > >> 3) Do you get it with Numpy 1.5.x? Yes. 
Detailed log: Running unit tests for scipy.spatial NumPy version 2.0.0.dev-9c8e7d7 NumPy is installed in /usr/local/lib/python3.2/site-packages/numpy SciPy version 0.9.0.dev SciPy is installed in /usr/local/lib/python3.2/site-packages/scipy Python version 3.2a4 (py3k, Nov 13 2010, 14:15:45) [GCC 4.4.5] nose version 0.11.0 .....................................................................................................................................................................................................................................................................[3] 6511 segmentation fault (core dumped) Running unit tests for scipy.spatial NumPy version 2.0.0.dev-9c8e7d7 NumPy is installed in /usr/local/lib/python3.1/dist-packages/numpy SciPy version 0.9.0.dev SciPy is installed in /usr/local/lib/python3.1/dist-packages/scipy Python version 3.1.2 (release31-maint, Sep 26 2010, 16:45:30) [GCC 4.4.5] nose version 0.11.0 .....................................................................................................................................................................................................................................................................[3] 14045 segmentation fault (core dumped) Running unit tests for scipy.spatial NumPy version 1.5.1rc2 NumPy is installed in /usr/local/lib/python3.1/dist-packages/numpy SciPy version 0.9.0.dev SciPy is installed in /usr/local/lib/python3.1/dist-packages/scipy Python version 3.1.2 (release31-maint, Sep 26 2010, 16:45:30) [GCC 4.4.5] nose version 0.11.0 ... test_kdtree.test_onetree_query(, 0.1) ... ok test_kdtree.test_onetree_query(, 0.1) ... ok test_kdtree.test_onetree_query(, 0.001) ... ok test_kdtree.test_onetree_query(, 1e-05) ... ok test_kdtree.test_onetree_query(, 1e-06) ... ok test_qhull.TestRidgeIter2D.test_complicated ... [1] 5492 segmentation fault Running unit tests for scipy.spatial NumPy version 1.5.1rc2 NumPy is installed in /usr/local/lib/python2.6/dist-packages/numpy SciPy version 0.9.0.dev SciPy is installed in /usr/local/lib/python2.6/dist-packages/scipy Python version 2.6.6 (r266:84292, Oct 9 2010, 12:24:52) [GCC 4.4.5] nose version 0.11.1 Ran 271 tests in 13.085s OK On a different machine: Running unit tests for scipy.spatial NumPy version 2.0.0.dev-d1a184c NumPy is installed in /home/zbyszek/usr/lib/python3.1/site-packages/numpy SciPy version 0.9.0.dev SciPy is installed in /home/zbyszek/usr/lib/python3.1/site-packages/scipy Python version 3.1.2 (release31-maint, Sep 26 2010, 16:45:30) [GCC 4.4.5] nose version 0.11.0 ... test_kdtree.test_onetree_query(, 0.1) ... ok test_kdtree.test_onetree_query(, 0.1) ... ok test_kdtree.test_onetree_query(, 0.001) ... ok test_kdtree.test_onetree_query(, 1e-05) ... ok test_kdtree.test_onetree_query(, 1e-06) ... ok test_qhull.TestRidgeIter2D.test_complicated ... Segmentation fault I compiled from scratch on a different machine, so I don't think that this is some stupid compilation error on my part. - Zbyszek From pav at iki.fi Mon Nov 15 07:07:16 2010 From: pav at iki.fi (Pauli Virtanen) Date: Mon, 15 Nov 2010 12:07:16 +0000 (UTC) Subject: [SciPy-Dev] segfault in test_qhull.TestRidgeIter2D.test_complicated References: <20101114230256.GA5801@in.waw.pl> <20101115001507.GC5801@in.waw.pl> Message-ID: Mon, 15 Nov 2010 11:10:40 +0100, Jens Nielsen wrote: > I get a segmentation fault at the same point on ubuntu 10.10 both 64 bit > and 32 bit versions but only when linking against atlas. Linking > against blas and lapack only is fine. 
Ok, this bug is absent if Qhull is compiled without optimizations, and
present if compiled with -O2.

Qhull apparently breaks gcc's strict-aliasing assumptions, and compiling
with 'OPT="-fno-strict-aliasing -O2" python setup.py ...' makes it work
again. This should probably be enabled by default, or one would need to
check if the aliasing issues can be fixed on the source code level.

-- 
Pauli Virtanen

From ralf.gommers at googlemail.com  Mon Nov 15 08:30:33 2010
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Mon, 15 Nov 2010 21:30:33 +0800
Subject: [SciPy-Dev] schedule for scipy 0.9.0 [scipy.interpolate]
In-Reply-To: 
References: 
Message-ID: 

On Sat, Nov 13, 2010 at 12:35 PM, Ralf Gommers wrote:
> On Sat, Nov 13, 2010 at 12:21 AM, Pauli Virtanen wrote:
>> Fri, 12 Nov 2010 23:45:39 +0800, Ralf Gommers wrote:
>>> I'm interested to hear what you think about releasing scipy 0.9.0 in the
>>> near future, and if you have any things that should go in or be fixed
>>> for 0.9.0. If we want to keep to a ~6 month release schedule then we
>>> should branch around the end of this month and have a final release in
>>> January.
>>
>> Sounds OK to me. I think we should try to time it so that it gets in
>> Ubuntu.
>
> Looking at https://wiki.ubuntu.com/NattyReleaseSchedule, I guess that
> means it would be best to release a few weeks before feature freeze,
> which is Feb 24th. So end of Jan should be fine.

From the past few releases I've learned that getting from branching to
final release should take a bit over a month. So here's my proposal for
the release schedule (leaving a good gap for Christmas):

Dec 10: create branch 0.9.x (last date for getting new features in)
Dec 12: beta 1
Jan 16: release candidate 1 (last date for non-critical bug fixes)
Jan 23: release candidate 2 (if needed)
Jan 30: release 0.9.0

Cheers,
Ralf
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pav at iki.fi  Mon Nov 15 14:40:41 2010
From: pav at iki.fi (Pauli Virtanen)
Date: Mon, 15 Nov 2010 19:40:41 +0000 (UTC)
Subject: [SciPy-Dev] schedule for scipy 0.9.0 [scipy.interpolate]
References: 
Message-ID: 

On Mon, 15 Nov 2010 21:30:33 +0800, Ralf Gommers wrote:
[clip: Scipy 0.9 release plan]
> Dec 10: create branch 0.9.x (last date for getting new features in)
> Dec 12: beta 1
> Jan 16: release candidate 1 (last date for non-critical bug fixes)
> Jan 23: release candidate 2 (if needed)
> Jan 30: release 0.9.0

Looks good to me, I think we can stick with this.

Pauli

From pav at iki.fi  Mon Nov 15 20:27:09 2010
From: pav at iki.fi (Pauli Virtanen)
Date: Tue, 16 Nov 2010 01:27:09 +0000 (UTC)
Subject: [SciPy-Dev] segfault in test_qhull.TestRidgeIter2D.test_complicated
References: <20101114230256.GA5801@in.waw.pl> <20101115001507.GC5801@in.waw.pl>
Message-ID: 

On Mon, 15 Nov 2010 12:07:16 +0000, Pauli Virtanen wrote:
> Mon, 15 Nov 2010 11:10:40 +0100, Jens Nielsen wrote:
>> I get a segmentation fault at the same point on ubuntu 10.10 both 64
>> bit and 32 bit versions but only when linking against atlas. Linking
>> against blas and lapack only is fine.
>
> Ok, this bug is absent if Qhull is compiled without optimizations, and
> present if compiled with -O2.
>
> Qhull apparently breaks gcc's strict-aliasing assumptions, and compiling
> with 'OPT="-fno-strict-aliasing -O2" python setup.py ...' makes it work
> again. This should probably be enabled by default, or one would need to
> check if the aliasing issues can be fixed on the source code level.
Ok, source code level "fix" here:

https://github.com/pv/scipy-work/commit/8efed6e403597783243f476c7bdb1c506d99f22b

I guess this is not completely robust, but it seems unlikely the compiler
will go swapping the two statements after this.

-- 
Pauli Virtanen

From matthew.brett at gmail.com  Mon Nov 15 20:39:45 2010
From: matthew.brett at gmail.com (Matthew Brett)
Date: Mon, 15 Nov 2010 17:39:45 -0800
Subject: [SciPy-Dev] segfault in test_qhull.TestRidgeIter2D.test_complicated
In-Reply-To: 
References: <20101114230256.GA5801@in.waw.pl> <20101115001507.GC5801@in.waw.pl>
Message-ID: 

Hi,

On Mon, Nov 15, 2010 at 5:27 PM, Pauli Virtanen wrote:
> On Mon, 15 Nov 2010 12:07:16 +0000, Pauli Virtanen wrote:
>> Mon, 15 Nov 2010 11:10:40 +0100, Jens Nielsen wrote:
>>> I get a segmentation fault at the same point on ubuntu 10.10 both 64
>>> bit and 32 bit versions but only when linking against atlas. Linking
>>> against blas and lapack only is fine.
>>
>> Ok, this bug is absent if Qhull is compiled without optimizations, and
>> present if compiled with -O2.
>>
>> Qhull apparently breaks gcc's strict-aliasing assumptions, and compiling
>> with 'OPT="-fno-strict-aliasing -O2" python setup.py ...' makes it work
>> again. This should probably be enabled by default, or one would need to
>> check if the aliasing issues can be fixed on the source code level.
>
> Ok, source code level "fix" here:
>
> https://github.com/pv/scipy-work/commit/8efed6e403597783243f476c7bdb1c506d99f22b

Thank you - the comments explaining the change are particularly complete
and helpful. Maybe it would be good to point to these comments in the
source code for the change?

See you,

Matthew

From pav at iki.fi  Mon Nov 15 20:48:43 2010
From: pav at iki.fi (Pauli Virtanen)
Date: Tue, 16 Nov 2010 01:48:43 +0000 (UTC)
Subject: [SciPy-Dev] segfault in test_qhull.TestRidgeIter2D.test_complicated
References: <20101114230256.GA5801@in.waw.pl> <20101115001507.GC5801@in.waw.pl>
Message-ID: 

On Mon, 15 Nov 2010 17:39:45 -0800, Matthew Brett wrote:
[clip]
>> Ok, source code level "fix" here:
>>
>> https://github.com/pv/scipy-work/commit/8efed6e403597783243f476c7bdb1c506d99f22b
>
> Thank you - the comments explaining the change are particularly complete
> and helpful. Maybe it would be good to point to these comments in the
> source code for the change?

I just noticed a similar fix is in upstream, in a new release of Qhull.
but none of them is general purpose or flexible in its use. Only a collaborative effort may make that happen I would believe. What's the community interest in such an effort? (I for myself am not a GUI programmer...). -- Burley -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at googlemail.com Tue Nov 16 10:04:56 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Tue, 16 Nov 2010 23:04:56 +0800 Subject: [SciPy-Dev] optimize.fmin_bfgs patch to review Message-ID: Hi, I've gathered and updated the changes for review of #734 and #843 in https://github.com/rgommers/scipy/commits/bfgs-tickets. Happily sent a pull request on github, which of course ended up in my own inbox:) Description: The change for #843 is trivial and the exact same as for other minimizers. The changes for #734 are three changes: 1. more stable sk, this is obvious. 2. extra stopping criterion. there's a reference, which seems to be followed. But no test, although the submitter claims it works better. I can't come up with a test to exercise this. Reviewer please decide whether to keep this or not - looks OK to me but I'm not sure. 3. handle zero division better. looks fine to me. Tests still pass of course. Can someone please have a look? Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbyszek at in.waw.pl Tue Nov 16 11:14:50 2010 From: zbyszek at in.waw.pl (Zbyszek Szmek) Date: Tue, 16 Nov 2010 17:14:50 +0100 Subject: [SciPy-Dev] segfault in test_qhull.TestRidgeIter2D.test_complicated In-Reply-To: References: <20101114230256.GA5801@in.waw.pl> <20101115001507.GC5801@in.waw.pl> Message-ID: <20101116161450.GE5801@in.waw.pl> On Tue, Nov 16, 2010 at 01:48:43AM +0000, Pauli Virtanen wrote: > On Mon, 15 Nov 2010 17:39:45 -0800, Matthew Brett wrote: > [clip] > >> Ok, source code level "fix" here: > >> > >> https://github.com/pv/scipy-work/commit/8efed6e403597783243f476c7bdb1c506d99f22b Justed tested it: works perfectly. > > Thank you - the comments explaining the change are particularly complete > > and helpful. Maybe it would be good to point to these comments in the > > source code for the change? The comment is enlightening. Thanks, Zbyszek > I just noticed a similar fix is in upstream, in a new release of Qhull. > Oh well, maybe we'll just upgrade the bundled library. > > Although it might still be good to add -fno-strict-aliasing flags to > the compilation, as the upstream cautions that on gcc < 4.4 there might > be problems (although I don't see how they can come from the same source). > > I however have no idea how to do this with distutils only for gcc. From josh.holbrook at gmail.com Tue Nov 16 12:29:01 2010 From: josh.holbrook at gmail.com (Joshua Holbrook) Date: Tue, 16 Nov 2010 08:29:01 -0900 Subject: [SciPy-Dev] Pythonic Simulink clone... - how much interest? In-Reply-To: References: Message-ID: On Tue, Nov 16, 2010 at 2:48 AM, burley wrote: > Visual modeling support as offered by the Simulink/Matlab and Xcos/Scilab > suites is very powerful, for the modeler as well as in communicating > concepts and ideas to others. Neither is Pythonic or extendable by Python, > unfortunately (disregarding the prohibitively expensiveness of Sim/Matl). > Wouldn't it be *very* nice to have a free (LGPL++?), user extensible, Python > based Simulink alternative, based on Numpy, SciPy,??Wx and/or Qt-like,?and > the rest. All (e.g. most of) the algorithms are there, many would benefit. 
I > know of Viper (Tcl/Tk), Elefant (Bayesian Learning), ... but none of them is > general purpose or flexible in its use. Only a collaborative effort may make > that happen I would believe. What's the community interest in such an > effort? (I for myself am not a GUI programmer...). > > -- Burley > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > I've done some thinking on this problem, though I haven't had the time to pursue anything directly. I started working on a graphical front-end for dataflow-family languages in javascript, but didn't get very far. I would say that there are two major issues that come in trying to build something simulink-esque. The first, obviously, is the issue of being able to build block diagrams in some sort of GUI and then translate those to a computer representation of a directed graph. While I think it's not something to be ignored, it's definitely been done before, and I think a nice way to describe these sorts of directed graphs in some sort of text format (python would work ;) ) anyway--so, working from a computer representation first and making a gui tool tool to build said representations second could be the way to go. In fact, I think a generalized block diagram component would be pretty awesome, since there are many worthwhile variations of the graphical dataflow format besides Simulink (Yahoo! Pipes/Pypes, LabVIEW, PD, Excel). The other part, and the one that I think makes this actually difficult, is converting the block diagram into a solution. My understanding of what Simulink actually does to generate a solution is something like this: 1. Convert everything to be in the time domain (For example, transfer functions would need to be transformed). 2. Convert the graph to a state-space representation suitable for use with a RK-like method--that is, dx/dt = f(x) where x is a vector, and y = g(x), where y is a vector of all the things you want to put virtual O-scopes onto. 3. Apply an RK-like method to the state-space representation. 4. Profit! Matlab/Simulink give you lots of options for step (3), and this part can be looked up in any numerical methods textbook, so it's relatively easy to get a handle on. In fact, I'm sure scipy has RK methods in it somewhere if it can claim to be anywhere near complete. Similarly, I would hope part (1) to be fairly straightforward. The part that I didn't have much luck on is step (2), but my guess is that you would have to traverse the graph and try to unroll any loops using rules specific to 'special' blocks, such as "1/s" blocks. I'm not sure it can get any more general than that. My $0.02, --Josh From josef.pktd at gmail.com Tue Nov 16 12:55:03 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 16 Nov 2010 12:55:03 -0500 Subject: [SciPy-Dev] Pythonic Simulink clone... - how much interest? In-Reply-To: References: Message-ID: On Tue, Nov 16, 2010 at 12:29 PM, Joshua Holbrook wrote: > On Tue, Nov 16, 2010 at 2:48 AM, burley wrote: >> Visual modeling support as offered by the Simulink/Matlab and Xcos/Scilab >> suites is very powerful, for the modeler as well as in communicating >> concepts and ideas to others. Neither is Pythonic or extendable by Python, >> unfortunately (disregarding the prohibitively expensiveness of Sim/Matl). 
My $0.02,

--Josh

From josef.pktd at gmail.com Tue Nov 16 12:55:03 2010
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Tue, 16 Nov 2010 12:55:03 -0500
Subject: [SciPy-Dev] Pythonic Simulink clone... - how much interest?
In-Reply-To:
References:
Message-ID:

On Tue, Nov 16, 2010 at 12:29 PM, Joshua Holbrook wrote:
> On Tue, Nov 16, 2010 at 2:48 AM, burley wrote:
>> Visual modeling support as offered by the Simulink/Matlab and Xcos/Scilab
>> suites is very powerful, for the modeler as well as in communicating
>> concepts and ideas to others. [...]
>
> [...]
> I'm not sure it can get any more general than that.

this sounds a bit like what PyLab Works tries to do

http://pic.flappie.nl/
http://mientki.ruhosting.nl/data_www/pylab_works/pw_animations_screenshots.html

Compared to graphical programming (and UML and Excel), programming
directly in Python is a lot more fun and flexible in my opinion. There
isn't a whole lot of boilerplate code to automate away.
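For instance (a made-up toy, but note that it is the entire program):

import numpy as np

# three "blocks" wired in series, no framework required
blocks = [np.sin, np.abs, np.mean]

def run(pipeline, data):
    for block in pipeline:
        data = block(data)
    return data

print run(blocks, np.linspace(0, 10, 100))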
Josef

> My $0.02,
>
> --Josh
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev
>

From josh.holbrook at gmail.com Tue Nov 16 13:14:54 2010
From: josh.holbrook at gmail.com (Joshua Holbrook)
Date: Tue, 16 Nov 2010 09:14:54 -0900
Subject: [SciPy-Dev] Pythonic Simulink clone... - how much interest?
In-Reply-To:
References:
Message-ID:

> this sounds a bit like what PyLab Works tries to do

Only sort of. What Simulink does is significantly different from what
your typical graphical dataflow language does. What LabView, Pypes,
Pylab, etc. try to do is basically supply a graphical version of unix
pipelines. Simulink, on the other hand, is used to describe and solve
block diagrams like what you see in control theory:

https://secure.wikimedia.org/wikipedia/en/wiki/Control_theory

without making you have to convert it into a state-space
representation yourself. This is handy because some dynamical systems
are more natural to think about in terms of this block diagram
representation than in a more state-space-friendly manner. Anyways,
I'm not sure anybody else has implemented anything quite like
Simulink. Afaik, there are lots of graphical dataflow languages (Like
I said, I think an abstracted-out graphical flow diagram
representation tool would be valuable) but not very many
controls-style block diagram solvers.

Fwiw,

--Josh

On Tue, Nov 16, 2010 at 8:55 AM, <josef.pktd at gmail.com> wrote:
> [...]

From burley at zonnet.nl Tue Nov 16 13:46:26 2010
From: burley at zonnet.nl (burley)
Date: Tue, 16 Nov 2010 19:46:26 +0100
Subject: [SciPy-Dev] Pythonic Simulink clone... - how much interest?
In-Reply-To:
References:
Message-ID:

I've looked at Pylab but development seems dead...; couldn't find any code
either... To add to the complexity, in 1998 I was already looking into the
Modelica scene (https://www.modelica.org/), and actually wrote a Python
parser for that language. Several tools (commercial and quasi-free...) are
available: see https://www.modelica.org/tools and in particular
(optimization oriented) JModelica (http://www.jmodelica.org/), which has
Python as extension language.

Modeling in Modelica goes beyond the usual data-flow paradigm in that it
leaves the specification of the data-flow direction out of the modeling
session. Components are connected through interfaces with bi-directional
data-flow possible. Data-flow direction is problem-solving specific,
rather than modeling specific. Thus, dual modeling efforts (for simulation
and inverse, system identification problems) are collapsed into one.

My feel is that Modelica progress and acceptance has been slow over more
than 10 years, probably because of the intricacies involved. Therefore, I
think a Python-based Simulink-ish clone would more than suffice. Modelica
related publications, however, probably contain substantial guidance on
how loop unrolling could be done from a directed data-flow network.

It's a pity, with all the Pythonic power available, that it misses a
visual programming interface. Sage (www.sagemath.org) is great but not
really an alternative either. After a serious research effort, and
considerable brain torture, I had to default to Scilab/Xcos, because
communicating your ideas to others (less mathematically inclined) is
crucial for acceptance. We'll see... ("Interfacing" Python with
Scilab/Xcos will have to go through HDF5 files....:-(

-- Burley

On Tue, Nov 16, 2010 at 19:14, Joshua Holbrook wrote:
> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pav at iki.fi Tue Nov 16 16:34:27 2010
From: pav at iki.fi (Pauli Virtanen)
Date: Tue, 16 Nov 2010 21:34:27 +0000 (UTC)
Subject: [SciPy-Dev] segfault in test_qhull.TestRidgeIter2D.test_complicated
References: <20101114230256.GA5801@in.waw.pl> <20101115001507.GC5801@in.waw.pl> <20101116161450.GE5801@in.waw.pl>
Message-ID:

On Tue, 16 Nov 2010 17:14:50 +0100, Zbyszek Szmek wrote:
> On Tue, Nov 16, 2010 at 01:48:43AM +0000, Pauli Virtanen wrote:
>> On Mon, 15 Nov 2010 17:39:45 -0800, Matthew Brett wrote: [clip]
>> >> Ok, source code level "fix" here:
>> >>
>> >> https://github.com/pv/scipy-work/commit/8efed6e403597783243f476c7bdb1c506d99f22b
>
> Just tested it: works perfectly.

Anyway, Scipy trunk was updated to Qhull 2010.1 which should work OK in
this respect.

-- Pauli Virtanen

From ndbecker2 at gmail.com Wed Nov 17 07:08:31 2010
From: ndbecker2 at gmail.com (Neal Becker)
Date: Wed, 17 Nov 2010 07:08:31 -0500
Subject: [SciPy-Dev] Pythonic Simulink clone... - how much interest?
References:
Message-ID:

Anyone interested in this subject would do well to look at the Ptolemy
project http://ptolemy.eecs.berkeley.edu/

From ndbecker2 at gmail.com Wed Nov 17 14:16:18 2010
From: ndbecker2 at gmail.com (Neal Becker)
Date: Wed, 17 Nov 2010 14:16:18 -0500
Subject: [SciPy-Dev] Pythonic Simulink clone... - how much interest?
References:
Message-ID:

BTW, although I'm academically interested in this, I think a GUI
block-diagram approach is only useful for students/toy projects.

I'm an old electrical engineer. Back in the day, we used to draw schematics
(by hand). Today, we use hardware description languages. There are reasons
why using pictures to describe designs has been replaced by text:

* pictures work well only for low complexity. For complex designs,
productivity becomes unacceptably low
* What you see is all you got - text is easy to integrate with any kind of
tool.

From zunzun at zunzun.com Wed Nov 17 14:34:48 2010
From: zunzun at zunzun.com (James Phillips)
Date: Wed, 17 Nov 2010 09:34:48 -1000
Subject: [SciPy-Dev] Pythonic Simulink clone... - how much interest?
In-Reply-To:
References:
Message-ID:

I have used National Instrument's "G" graphical programming language
(LabVIEW) extensively in industrial settings internationally. Loops
and complex switch/case type code seemed to me to be, hmm, difficult
to maintain or follow program flow - especially for large industrial
programs. It also appeared to me that bad programmers could easily
write much worse code in G than in other languages, but that may just
be residual frustration talking. My personal opinion...

James

On Wed, Nov 17, 2010 at 9:16 AM, Neal Becker wrote:
> BTW, although I'm academically interested in this, I think a GUI
> block-diagram approach is only useful for students/toy projects.
> [...]
From charlesr.harris at gmail.com Wed Nov 17 14:46:09 2010
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Wed, 17 Nov 2010 12:46:09 -0700
Subject: [SciPy-Dev] Pythonic Simulink clone... - how much interest?
In-Reply-To:
References:
Message-ID:

On Wed, Nov 17, 2010 at 12:34 PM, James Phillips wrote:
> I have used National Instrument's "G" graphical programming language
> (LabVIEW) extensively in industrial settings internationally.
> [...]

Simulink is one way to describe a system for simulation and might not be
the best for complex systems. Is there any other language which is
particularly suitable for such things? Simulink does have an advantage in
that companies will sometimes supply blocks that model their product,
although such blocks are sometimes binary blobs that can't easily be
modified.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From josef.pktd at gmail.com Wed Nov 17 15:46:54 2010
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Wed, 17 Nov 2010 15:46:54 -0500
Subject: [SciPy-Dev] Pythonic Simulink clone... - how much interest?
In-Reply-To:
References:
Message-ID:

On Wed, Nov 17, 2010 at 2:46 PM, Charles R Harris wrote:
> [...]
> Simulink does have an advantage in that companies will sometimes supply
> blocks that model their product, although such blocks are sometimes
> binary blobs that can't easily be modified.

completely different application area
there are some graphical modeling tools for agent-based models that I
have seen on the internet and seem to be reasonably successful
for example (the first one I found again with google)
http://repast.sourceforge.net/

just a comment from the sidelines, since I have no idea about
engineering applications.

Josef

From josh.holbrook at gmail.com Wed Nov 17 16:52:09 2010
From: josh.holbrook at gmail.com (Joshua Holbrook)
Date: Wed, 17 Nov 2010 12:52:09 -0900
Subject: [SciPy-Dev] Pythonic Simulink clone... - how much interest?
In-Reply-To:
References:
Message-ID:

Maybe the easiest way to get pythonic simulink-esque action would be
to strap jython onto one of these pre-existing java-based solutions.
Otoh, I wouldn't be surprised if getting numpy to work on jython is
somewhere between painful and impossible. Thoughts?

--Josh

On Wed, Nov 17, 2010 at 11:46 AM, <josef.pktd at gmail.com> wrote:
> [...]

From scipy at SamuelJohn.de Fri Nov 19 07:32:49 2010
From: scipy at SamuelJohn.de (Samuel John)
Date: Fri, 19 Nov 2010 13:32:49 +0100
Subject: [SciPy-Dev] Pythonic Simulink clone... - how much interest?
In-Reply-To:
References:
Message-ID:

Perhaps the block canvas of enthought is a possibility:
http://code.enthought.com/projects/block_canvas.php

I am not sure.

--
Samuel

From njs at pobox.com Sun Nov 21 00:25:15 2010
From: njs at pobox.com (Nathaniel Smith)
Date: Sat, 20 Nov 2010 21:25:15 -0800
Subject: [SciPy-Dev] [SciPy-User] FIR filter with arbitrary frequency response
In-Reply-To:
References:
Message-ID:

[Moving this to scipy-dev]

On Sat, Nov 20, 2010 at 8:09 PM, Warren Weckesser wrote:
> There is one implemented as the function firwin2 currently under review in
> this ticket:
>     http://projects.scipy.org/scipy/ticket/457
> It will eventually be added to scipy.signal. Included in the ticket is a
> patch file that can be applied to the latest version of the scipy source,
> and also a copy of just the updated file fir_filter_design.py. You could
> grab that and try it stand-alone (but you would have to comment out the
> local import of sigtools, which is used by the remez function in
> fir_filter_design.py).
>
> Feedback would be appreciated, so if you try it, be sure to write back with
> comments or questions.

Ha, what luck! Thanks!

Some quick thoughts just based on reading the source code:

-- Your handling of discontinuities in the desired frequency response
looks questionable to me (not that I'm an expert). Octave's version of
this function has a 'ramp_n' parameter that controls how much they
smooth out discontinuities, documented as "transition width for jumps
in filter response. Defaults to grid_n/20; a wider ramp gives wider
transitions but has better stopband characteristics." Your function,
OTOH, always uses the narrowest possible ramp.

Octave's choice to make ramp_n defined in number-of-grid-point units
seems bizarre to me; I wouldn't copy that detail. But it probably
would be good to have some way to specify smarter handling of
discontinuities?

-- I suspect the first part of the docstring:

" From the given set of frequencies and gains, the desired response is
constructed in the frequency domain. The inverse FFT is applied to the
desired response to create the associated convolution kernel, and the
first `numtaps` coefficients of this kernel, scaled by `window`, are
returned."

will not make sense to anyone who doesn't already know how this works.

If it were me, I'd start by saying:
"Given any desired combination of frequencies (`freq`) and gains
(`gain`), this function constructs an FIR filter with linear phase and
(approximately) the given frequency response."
And I'd put the terse description of how it accomplishes this trick
lower down in the docstring.
-- In info.py, I'd expand the description of firwin to "Windowed FIR
filter design, with standard high/low/band-pass frequency response",
to make the contrast between it and firwin2 clear.

-- I'd also add a reference to http://www.dspguide.com/ch17/1.htm;
obviously any good DSP book will explain this idea, and the reference
you already have is fine, but it's much easier to click a link than to
go to the library... and the explanation is much less terse than the
one in the docstring :-).

-- Your docstring still refers to this function as 'fir2' at one point

-- I have a mild preference for specifying the sampling frequency
instead of the nyquist frequency, but that's not a big deal. What's
really bad is that we have no standard for this (it looks like the
only other filter design function that lets you work in sampling units
instead of normalized units is remez, and it uses the terribly named
'Hz'). Maybe I should start another thread to discuss that...
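For anyone following along without the patch, the interface under review
looks roughly like this (my reading of the ticket, untested):

from scipy.signal import firwin2   # from the #457 patch, not yet in scipy

# ~41 taps; gain 1.0 at DC, rolling off linearly to 0.0 at Nyquist
taps = firwin2(41, [0.0, 0.5, 1.0], [1.0, 0.5, 0.0])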
I don't have any authority or anything, but with those issues fixed
I'd vote for committing it. Tests seem reasonably complete.

-- Nathaniel

From warren.weckesser at enthought.com Sun Nov 21 20:28:30 2010
From: warren.weckesser at enthought.com (Warren Weckesser)
Date: Sun, 21 Nov 2010 19:28:30 -0600
Subject: [SciPy-Dev] Fwd: [SciPy-User] FIR filter with arbitrary frequency response
In-Reply-To:
References:
Message-ID:

Forwarded to the mailing list (didn't "reply to all" the first time).

---------- Forwarded message ----------
From: Warren Weckesser
Date: Sun, Nov 21, 2010 at 5:00 PM
Subject: Re: [SciPy-User] FIR filter with arbitrary frequency response
To: Nathaniel Smith

Nathaniel, thanks for the great feedback. Comments below...

On Sat, Nov 20, 2010 at 11:25 PM, Nathaniel Smith wrote:
> [Moving this to scipy-dev]
> [...]
> -- Your handling of discontinuities in the desired frequency response
> looks questionable to me (not that I'm an expert). [...] But it probably
> would be good to have some way to specify smarter handling of
> discontinuities?

As you noticed, currently it does not attempt to smooth the requested
frequency response. If the user wants a ramp, they can just define their
function with a ramp instead of a discontinuity. :)

To handle transition width issues, I've used the Kaiser design method,
which provides formulas for the number of taps and the Kaiser parameter
beta in terms of the desired attenuation and transition width. This works
reasonably well.

I have nothing against adding some sort of "auto-smoothing" option. I'd
prefer something with a sound mathematical basis, rather than an ad hoc
tweak. But if there is a widely used heuristic that works, it should
probably be included in the function.

In the Octave example, what happens if the user includes a ramp in the
response that isn't as wide as the transition width defined by grid_n/20?

> -- I suspect the first part of the docstring:
> [...]
> And I'd put the terse description of how it accomplishes this trick
> lower down in the docstring.

Thanks--after rereading, I see that it does need to be improved.

> -- In info.py, I'd expand the description of firwin [...]

Good idea.

> -- I'd also add a reference to http://www.dspguide.com/ch17/1.htm;
> [...]

I'm OK with web links, but the numpy/scipy documentation guidelines state
"Referencing sources of a temporary nature, like web pages, is discouraged."

> -- Your docstring still refers to this function as 'fir2' at one point

Thanks, I'll fix that.

> -- I have a mild preference for specifying the sampling frequency
> instead of the nyquist frequency, but that's not a big deal. [...]

Personally, I'm fine with either sample rate or Nyquist rate, but there
are other functions where the default is to express frequencies relative
to the Nyquist rate (e.g. all the IIR functions). I also noticed that
remez has the unfortunate keyword 'Hz'. For the sake of consistency, I
would like to deprecate that keyword, and add 'nyq' (or perhaps
'nyquist', or...) as the preferred keyword. This option should also be
added to firwin(), so all these FIR filter design functions have a
consistent interface.

> I don't have any authority or anything, but with those issues fixed
> I'd vote for committing it. Tests seem reasonably complete.

Great--thanks again for the comments.
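By the way, to be explicit about what I meant by the Kaiser design method
above -- the usual pattern is something like this (from memory, untested;
the attenuation and width numbers are arbitrary):

from scipy.signal import kaiserord, firwin

# 65 dB of stopband attenuation, transition width 0.05 (in Nyquist units)
numtaps, beta = kaiserord(65.0, 0.05)
taps = firwin(numtaps, 0.3, window=('kaiser', beta))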
Warren
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From njs at pobox.com Sun Nov 21 21:18:00 2010
From: njs at pobox.com (Nathaniel Smith)
Date: Sun, 21 Nov 2010 18:18:00 -0800
Subject: [SciPy-Dev] Fwd: [SciPy-User] FIR filter with arbitrary frequency response
In-Reply-To:
References:
Message-ID:

On Sun, Nov 21, 2010 at 5:28 PM, Warren Weckesser wrote:
>> -- Your handling of discontinuities in the desired frequency response
>> looks questionable to me (not that I'm an expert). [...]
>
> As you noticed, currently it does not attempt to smooth the requested
> frequency response. If the user wants a ramp, they can just define their
> function with a ramp instead of a discontinuity. :)

Certainly :-) But some convenience handling might be useful...

> To handle transition width issues, I've used the Kaiser design method,
> which provides formulas for the number of taps and the Kaiser parameter
> beta in terms of the desired attenuation and transition width. This works
> reasonably well.

You mean, you've used this in the past, not, you've used that in
coding up this function, right?

> I have nothing against adding some sort of "auto-smoothing" option. I'd
> prefer something with a sound mathematical basis, rather than an ad hoc
> tweak. But if there is a widely used heuristic that works, it should
> probably be included in the function.

Matlab appears to use a similar tweak (see
http://www.mathworks.com/help/toolbox/signal/fir2.html#f7-957922), and
I guess you don't get more widely used than that. Not that this is a
guarantee of quality... (In other news, this strange choice of
specifying the ramp in terms of grid-units is probably their fault
too.)

> In the Octave example, what happens if the user includes a ramp in the
> response that isn't as wide as the transition width defined by grid_n/20?

If I'm reading the code right, Octave's algorithm is:
-- find all the jump discontinuities
-- for each one, subtract ramp_width/2 from the left edge, and add
ramp_width/2 to the right edge (where ramp_width is ramp_n/grid_n)
-- use interpolation to recalculate the corresponding *gains* by
calculating what the gain *would* have been at the *new* frequency
points if we hadn't done this ramping stuff.
-- then do the linear interpolation step as usual

So, AFAICT the code says that this only happens for jump
discontinuities. Of course, the comments say it happens for "any
transition less than ramp_n units". Who you gonna trust...
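In numpy terms, I believe the preprocessing amounts to something like this
(my untested translation of the above, not a copy of their code):

import numpy as np

def ramp_jumps(freq, gain, ramp_width):
    # a jump is two consecutive identical frequencies with different gains
    freq = np.asarray(freq, dtype=float)
    gain = np.asarray(gain, dtype=float)
    jumps = np.nonzero((np.diff(freq) == 0) & (np.diff(gain) != 0))[0]
    new_freq = freq.copy()
    new_freq[jumps] -= ramp_width / 2.0      # widen each jump symmetrically...
    new_freq[jumps + 1] += ramp_width / 2.0
    # ...and re-sample the *original* response at the shifted points
    # (no clipping to [0, 1] here; a real version would need it)
    return new_freq, np.interp(new_freq, freq, gain)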
>> -- I'd also add a reference to http://www.dspguide.com/ch17/1.htm;
>> obviously any good DSP book will explain this idea, and the reference
>> you already have is fine, but it's much easier to click a link than to
>> go to the library... and the explanation is much less terse than the
>> one in the docstring :-).
>
> I'm OK with web links, but the numpy/scipy documentation guidelines state
> "Referencing sources of a temporary nature, like web pages, is discouraged."

I think the intention of that rule is more like "don't make your
primary reference some random schmuck's blog entry" -- not to force
people to go trek to the library just because. That's a real published
book
http://www.dspguide.com/editions.htm
that happens to also be online, and the link's been stable since 2006
http://web.archive.org/web/20060612222046/http://www.dspguide.com/ch17/1.htm
So I'd add it :-).

>> -- I have a mild preference for specifying the sampling frequency
>> instead of the nyquist frequency, but that's not a big deal. [...]
>
> Personally, I'm fine with either sample rate or Nyquist rate, but there
> are other functions where the default is to express frequencies relative
> to the Nyquist rate (e.g. all the IIR functions). I also noticed that
> remez has the unfortunate keyword 'Hz'. For the sake of consistency, I
> would like to deprecate that keyword, and add 'nyq' (or perhaps
> 'nyquist', or...) as the preferred keyword. This option should also be
> added to firwin(), so all these FIR filter design functions have a
> consistent interface.

There are two orthogonal issues here -- whether people specify
sampling or nyquist rate, and compatibility. If we let people specify
the sampling rate instead of the Nyquist frequency, and we make the
default sampling rate be "2", then compatibility will be preserved.
But really *anything* would be better than what we have now (if
nothing else, this would force it to be clear what the IIR functions
expect -- right now it's just undocumented!).
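Concretely, I'm imagining a hypothetical signature like this (names made
up; this isn't what the patch currently does):

def firwin2(numtaps, freq, gain, fs=2.0, window='hamming'):
    # fs=2.0 makes the current "1.0 == Nyquist" convention the default,
    # so existing code keeps working
    freq = [f / (fs / 2.0) for f in freq]
    # ... rest of the design unchanged ...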
np.

-- Nathaniel

From scopatz at gmail.com Sun Nov 21 22:23:53 2010
From: scopatz at gmail.com (Anthony Scopatz)
Date: Sun, 21 Nov 2010 21:23:53 -0600
Subject: [SciPy-Dev] Pythonic Simulink clone... - how much interest?
In-Reply-To:
References:
Message-ID:

Hi Samuel,

You are correct, BlockCanvas is a kind of process modeler like the OP
mentioned. Additionally, it sits on top of Python, NumPy, Traits, etc. It
is currently available as part of ETS. If I have my druthers, it and its
underlying BlockContext (CodeTools) model will see some love in the near
future. I tried to write up some documentation for it a little while ago,
but became distracted.

If you go into the "enthought/block_canvas/app/" directory and run
"python app.py" you'll get a pretty good idea of what it does.

Be Well
Anthony

On Fri, Nov 19, 2010 at 6:32 AM, Samuel John wrote:
> Perhaps the block canvas of enthought is a possibility:
> http://code.enthought.com/projects/block_canvas.php
> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ralf.gommers at googlemail.com Tue Nov 23 10:21:38 2010
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Tue, 23 Nov 2010 23:21:38 +0800
Subject: [SciPy-Dev] deprecations/removals for 0.9.0
Message-ID:

Hi all,

Here is a reminder: if you want to deprecate things for 0.9.0, do so in
the next two weeks. Furthermore, I took a look at currently deprecated
things that should be removed. Below is a long list; I believe that all
items on it have been deprecated long enough that they can be removed.
Please check, and if something can *not* be removed, say so in the next
few days.

Also, if you deprecate more things, please state in the source code (as
well as in the release notes) at what version removal should take place.
Which reminds me: what should the version number after 0.9.0 be?

Cheers,
Ralf

REMOVALS

Stats
-----
pdf_moments
pdfapprox
z
zs
stderr
samplestd
std
var
samplevar
mean
median
cov
corrcoef
erfc

Sparse
------
spkron
speye
spidentity
dims and nzmax keywords in _sc_matrix constructor

Sparse matrix methods:
getdata
rowcol
listprint
matvec
matmat
dot
rmatvec

Ndimage
-------
output_type keyword in interpolate.py

Linalg
------
the econ keyword in qr

Signal
------
a whole bunch of stuff that I assume Warren is going to clean.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From HAWRYLA at novachem.com Tue Nov 23 18:49:07 2010
From: HAWRYLA at novachem.com (Andrew Hawryluk)
Date: Tue, 23 Nov 2010 16:49:07 -0700
Subject: [SciPy-Dev] incorrect function-call count in fsolve
Message-ID: <48C01AE7354EC240A26F19CEB995E94306350E8D@CHMAILMBX01.novachem.com>

I have just noticed that scipy.optimize.fsolve reports two fewer function
calls than actually occur. It returns the number of Fortran function
calls, but the Python code calls the function once to check the
dimensions and the C code calls the function again for error checking
(assuming I have understood the code correctly). The same thing is true
for calls to the Jacobian.

My two questions are:

Would it be appropriate for fsolve to return the number of Fortran calls
+ 2?

Would it be appropriate to provide an 'unsafe' option that bypasses these
calls? For some of my work I am calling fsolve several times within other
optimization/solver loops, so the Fortran code may only be needing 1
Jacobian and 2 or 3 function calls much of the time. Bypassing the
additional 2 Jacobians and function calls could speed things along a bit.

Cheers,
Andrew

from scipy.optimize import fsolve
import numpy as np

nfev = 0

def func(x):
    global nfev
    nfev += 1
    return np.cos(x)

solution = fsolve(func, 1.3, full_output=1)
print 'fsolve performed %d function calls' % solution[1]['nfev']
print 'but there were %d function calls in total' % nfev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From warren.weckesser at enthought.com Wed Nov 24 23:33:36 2010
From: warren.weckesser at enthought.com (Warren Weckesser)
Date: Wed, 24 Nov 2010 22:33:36 -0600
Subject: [SciPy-Dev] Fwd: [SciPy-User] FIR filter with arbitrary frequency response
In-Reply-To:
References:
Message-ID:

On Sun, Nov 21, 2010 at 8:18 PM, Nathaniel Smith wrote:
> On Sun, Nov 21, 2010 at 5:28 PM, Warren Weckesser wrote:
> [...]
>> To handle transition width issues, I've used the Kaiser design method,
>> which provides formulas for the number of taps and the Kaiser parameter
>> beta in terms of the desired attenuation and transition width. This
>> works reasonably well.
>
> You mean, you've used this in the past, not, you've used that in
> coding up this function, right?

Yes, that's right.

> Matlab appears to use a similar tweak (see
> http://www.mathworks.com/help/toolbox/signal/fir2.html#f7-957922), and
> I guess you don't get more widely used than that. [...]
>
> If I'm reading the code right, Octave's algorithm is:
> -- find all the jump discontinuities
> [...]
> So, AFAICT the code says that this only happens for jump
> discontinuities. Of course, the comments say it happens for "any
> transition less than ramp_n units". Who you gonna trust...
> [...]
> There are two orthogonal issues here -- whether people specify
> sampling or nyquist rate, and compatibility. If we let people specify
> the sampling rate instead of the Nyquist frequency, and we make the
> default sampling rate be "2", then compatibility will be preserved.
> But really *anything* would be better than what we have now (if
> nothing else, this would force it to be clear what the IIR functions
> expect -- right now it's just undocumented!).

Some of the IIR functions (e.g. ellipord, buttord) say that the
frequencies are "normalized from 0 to 1 (1 corresponds to pi radians /
sample)". Whether one immediately sees that that means they are
normalized by the Nyquist rate depends on one's background. The
docstrings of other IIR functions (e.g. ellip, butter) definitely need
work.
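(For concreteness, under the current convention:

from scipy import signal

# here 0.2 means 0.2 * (Nyquist rate), i.e. a cutoff at a tenth of the
# sampling rate
b, a = signal.butter(4, 0.2)

which is exactly the kind of thing the docstrings should spell out.)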
Warren
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From warren.weckesser at enthought.com Thu Nov 25 00:08:30 2010
From: warren.weckesser at enthought.com (Warren Weckesser)
Date: Wed, 24 Nov 2010 23:08:30 -0600
Subject: [SciPy-Dev] deprecations/removals for 0.9.0
In-Reply-To:
References:
Message-ID:

On Tue, Nov 23, 2010 at 9:21 AM, Ralf Gommers wrote:
> Hi all,
>
> Here is a reminder: if you want to deprecate things for 0.9.0, do so in
> the next two weeks.
> [...]
> Signal
> ------
> a whole bunch of stuff that I assume Warren is going to clean.

I dealt with some deprecated features in r6897-r6900 and r6929.

One outstanding issue appears to be the "old_behavior" issue with
convolve, convolve2d, correlate and correlate2d. David C, if you're out
there, I noticed that the last change to signaltools.py is r5878, whose
comment is "Add old_behavior arg for convolve2d and correlate2d (not
fully implemented yet)." A problem related to this was reported in ticket
#1301 (http://projects.scipy.org/scipy/ticket/1301). Are you planning on
any more changes? If not, could someone familiar with this issue finish
up whatever needs to be done?
Are there other deprecated features in scipy.signal?

Warren
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ralf.gommers at googlemail.com Fri Nov 26 10:33:59 2010
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Fri, 26 Nov 2010 23:33:59 +0800
Subject: [SciPy-Dev] deprecations/removals for 0.9.0
In-Reply-To:
References:
Message-ID:

On Thu, Nov 25, 2010 at 1:08 PM, Warren Weckesser wrote:
> [...]
> Are there other deprecated features in scipy.signal?

No, I saw "grin deprecate" give a long list of results, but it's all
related to that correlate/convolve issue.

Ralf
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ralf.gommers at googlemail.com Fri Nov 26 10:55:35 2010
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Fri, 26 Nov 2010 23:55:35 +0800
Subject: [SciPy-Dev] mstats.linregress return values (#1273)
Message-ID:

Hi,

stats.linregress returns five values: slope, intercept, r_value, p_value,
stderr. mstats.linregress should return the same according to its
docstring (and my expectation), but adds a sixth return value: Syy/Sxx,
and it's not clear what for. We can either document the return value or
just remove it; I'd think the latter makes more sense.
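For reference, the stats version unpacks as its docstring says (quick
sketch):

import numpy as np
from scipy import stats

x = np.arange(10.0)
y = 2.0 * x + 1.0
slope, intercept, r_value, p_value, stderr = stats.linregress(x, y)
# the mstats version currently needs a sixth name on the left-hand side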
URL: 

From gokhansever at gmail.com  Fri Nov 26 14:18:41 2010
From: gokhansever at gmail.com (Gökhan Sever)
Date: Fri, 26 Nov 2010 13:18:41 -0600
Subject: [SciPy-Dev] mstats.linregress return values (#1273)
In-Reply-To: 
References: 
Message-ID: 

On Fri, Nov 26, 2010 at 9:55 AM, Ralf Gommers
 wrote:
> Hi,
>
> stats.linregress returns five values: slope, intercept, r_value, p_value,
> sterr.
> mstats.linregress should return the same according to its docstring (and my
> expectation), but adds a sixth return value: Syy/Sxx, not clear what for.
> We can either document the return value, or just remove it. I'd think the
> latter makes more sense.
>
> Ralf

+1 for the correction.

I had once reported this at
http://article.gmane.org/gmane.comp.python.numeric.general/40651/match=return+values+stats+mstats+linregress
but haven't gotten any response --probably due to posting at
numpy-discussion list instead of scipy-dev as you did.

-- 
Gökhan

From pav at iki.fi  Sat Nov 27 11:33:59 2010
From: pav at iki.fi (Pauli Virtanen)
Date: Sat, 27 Nov 2010 16:33:59 +0000 (UTC)
Subject: [SciPy-Dev] ARPACK fixes before 0.9
Message-ID: 

Hi,

The Arpack interface in Scipy needs some fixes, which would be useful to
get in for Scipy 0.9. I have them in a branch here:

        https://github.com/pv/scipy-work/tree/bug/1313-arpack

Comments would be appreciated (esp. from David who wrote the original
interface). I'm going to merge them probably the next W/E unless
objections arise.

Bugfixes:

- Raise ArpackNoConvergence when ARPACK iteration does not converge.
  Previously, there was no way to know if convergence was obtained.

- Fix a bug in return value extraction from dneupd, which resulted
  in invalid eigenvectors/eigenvalues being returned on non-convergence.

- Allow complex matrices in the sparse SVD. It's a naive approach,
  but probably still better than nothing.

Other changes:

- Rename routines eigen* -> eigs* to avoid shadowing the "eigen" module.
  "eigs" is also the name for the equivalent routines in Octave/Matlab.

- Remove the ``speigs`` ARPACK interface. It does not seem to make sense
  to have two different interfaces to the same library.

- Un-deprecate .dot() method of sparse matrices -- ndarrays also have
  it, so it makes sense to retain it.

-- 
Pauli Virtanen

From ralf.gommers at googlemail.com  Sun Nov 28 03:58:15 2010
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Sun, 28 Nov 2010 16:58:15 +0800
Subject: [SciPy-Dev] mstats.linregress return values (#1273)
In-Reply-To: 
References: 
Message-ID: 

On Sat, Nov 27, 2010 at 3:18 AM, Gökhan Sever wrote:
> On Fri, Nov 26, 2010 at 9:55 AM, Ralf Gommers
>  wrote:
> > Hi,
> >
> > stats.linregress returns five values: slope, intercept, r_value, p_value,
> > sterr.
> > mstats.linregress should return the same according to its docstring (and
> > my expectation), but adds a sixth return value: Syy/Sxx, not clear what
> > for. We can either document the return value, or just remove it. I'd
> > think the latter makes more sense.
> >
> > Ralf
>
> +1 for the correction.

OK, fixed in r6953.

Ralf

> I had once reported this at
> http://article.gmane.org/gmane.comp.python.numeric.general/40651/match=return+values+stats+mstats+linregress
> but haven't gotten any response --probably due to posting at
> numpy-discussion list instead of scipy-dev as you did.
> 
> 
> --
> Gökhan
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev
> 
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ralf.gommers at googlemail.com  Sun Nov 28 04:14:45 2010
From: ralf.gommers at googlemail.com (Ralf Gommers)
Date: Sun, 28 Nov 2010 17:14:45 +0800
Subject: [SciPy-Dev] deprecations/removals for 0.9.0
In-Reply-To: 
References: 
Message-ID: 

On Tue, Nov 23, 2010 at 11:21 PM, Ralf Gommers
 wrote:
> Hi all,
>
> Here is a reminder that if you want to deprecate things for 0.9.0, to do
> that in the next two weeks. Furthermore, I took a look at currently
> deprecated things that should be removed. Below is a long list, I believe
> that all items on it have been deprecated long enough that they can be
> removed. Please check and if something can *not* be removed, say so in the
> next few days.
>
> Also, if you deprecate more things, please state in the source code (as
> well as in the release notes) at what version removal should take place.
> Which reminds me: what should the version number after 0.9.0 be?
>
>
Changes for ndimage, linalg and sparse are at
https://github.com/rgommers/scipy/commits/deprecations. I have not touched
the sparse matrix methods yet, since Matthieu had a question on those and
Pauli has un-deprecated one already in his arpack branch.
> > Changes for stats are at https://github.com/rgommers/scipy/tree/stats-work. > There are also several small bugfixes and the new kendalltau implementation > in that branch, if anyone is interested to review. I just read through your change sets, and I think they are good removing printcc is more than the ticket asks for https://github.com/rgommers/scipy/commit/3eb3bd08ddbc1950fbedcb5e68f5f5960cc1c080 printcc is a useful function, getting a nice string/print for a list of list was important when arrays where not used, but I don't think anything is using it, and it doesn't really fit in. So, I think it can as well be removed completely. I haven't looked at how the z, zs removal should be reflected in mstats. Thanks, Josef > > Cheers, > Ralf > >> >> >> >> REMOVALS >> >> Stats >> ----- >> pdf_moments >> pdfapprox >> z >> zs >> stderr >> samplestd >> std >> var >> samplevar >> mean >> median >> cov >> corrcoef >> erfc >> >> Sparse >> ------ >> spkron >> speye >> spidentity >> dims and nzmax keywords in _sc_matrix constructor >> >> Sparse matrix methods: >> getdata >> rowcol >> listprint >> matvec >> matmat >> dot >> rmatvec >> >> >> Ndimage >> ------- >> output_type keyword in interpolate.py >> >> Linalg >> ------ >> the econ keyword in qr >> >> Signal >> ------ >> a whole bunch of stuff that I assume Warren is going to clean. > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > From ralf.gommers at googlemail.com Sun Nov 28 06:54:07 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 28 Nov 2010 19:54:07 +0800 Subject: [SciPy-Dev] deprecations/removals for 0.9.0 In-Reply-To: References: Message-ID: On Sun, Nov 28, 2010 at 7:43 PM, wrote: > On Sun, Nov 28, 2010 at 4:14 AM, Ralf Gommers > wrote: > > > > > > On Tue, Nov 23, 2010 at 11:21 PM, Ralf Gommers < > ralf.gommers at googlemail.com> > > wrote: > >> > >> Hi all, > >> > >> Here is a reminder that if you want to deprecate things for 0.9.0, to do > >> that in the next two weeks. Furthermore, I took a look at currently > >> deprecated things that should be removed. Below is a long list, I > believe > >> that all items on it have been deprecated long enough that they can be > >> removed. Please check and if something can *not* be removed, say so in > the > >> next few days. > >> > >> Also, if you deprecate more things, please state in the source code (as > >> well as in the release notes) at what version removal should take place. > >> Which reminds me: what should the version number after 0.9.0 be? > >> > >> > > Changes for ndimage, linalg and sparse are at > > https://github.com/rgommers/scipy/commits/deprecations. I have not > touched > > the sparse matrix methods yet, since Matthieu had a question on those and > > Pauli has un-deprecated one already in his arpack branch. > > > > Changes for stats are at > https://github.com/rgommers/scipy/tree/stats-work. > > There are also several small bugfixes and the new kendalltau > implementation > > in that branch, if anyone is interested to review. > > I just read through your change sets, and I think they are good > > Thanks for checking. I'll commit that whole branch then. 
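As a point of reference for the z/zs part of this thread, the surviving
stats interface is roughly the following (a sketch; exact signatures and
defaults may differ in the branch):

    import numpy as np
    from scipy import stats

    x = np.array([1.0, 2.0, 3.0, 4.0])
    print(stats.zscore(x))     # standardize x by its own mean and std
    print(stats.zmap(2.5, x))  # standardize new scores against x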
> removing printcc is more than the ticket asks for > > https://github.com/rgommers/scipy/commit/3eb3bd08ddbc1950fbedcb5e68f5f5960cc1c080 > > printcc is a useful function, getting a nice string/print for a list > of list was important when arrays where not used, but I don't think > anything is using it, and it doesn't really fit in. So, I think it can > as well be removed completely. > That's pretty much what I was thinking - it's completely unused and therefore can be removed. > > I haven't looked at how the z, zs removal should be reflected in mstats. > > Maybe better to answer first whether mstats should closely track stats or not. I did not touch z/zs there because they were not deprecated like the plain versions (but I'm not sure how important that is). There's also ticket 1195 for updating zmap. I can do that all at once if you agree to just mirror the changes to stats. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From josef.pktd at gmail.com Sun Nov 28 07:04:32 2010 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Sun, 28 Nov 2010 07:04:32 -0500 Subject: [SciPy-Dev] deprecations/removals for 0.9.0 In-Reply-To: References: Message-ID: On Sun, Nov 28, 2010 at 6:54 AM, Ralf Gommers wrote: > > > On Sun, Nov 28, 2010 at 7:43 PM, wrote: >> >> On Sun, Nov 28, 2010 at 4:14 AM, Ralf Gommers >> wrote: >> > >> > >> > On Tue, Nov 23, 2010 at 11:21 PM, Ralf Gommers >> > >> > wrote: >> >> >> >> Hi all, >> >> >> >> Here is a reminder that if you want to deprecate things for 0.9.0, to >> >> do >> >> that in the next two weeks. Furthermore, I took a look at currently >> >> deprecated things that should be removed. Below is a long list, I >> >> believe >> >> that all items on it have been deprecated long enough that they can be >> >> removed. Please check and if something can *not* be removed, say so in >> >> the >> >> next few days. >> >> >> >> Also, if you deprecate more things, please state in the source code (as >> >> well as in the release notes) at what version removal should take >> >> place. >> >> Which reminds me: what should the version number after 0.9.0 be? >> >> >> >> >> > Changes for ndimage, linalg and sparse are at >> > https://github.com/rgommers/scipy/commits/deprecations. I have not >> > touched >> > the sparse matrix methods yet, since Matthieu had a question on those >> > and >> > Pauli has un-deprecated one already in his arpack branch. >> > >> > Changes for stats are at >> > https://github.com/rgommers/scipy/tree/stats-work. >> > There are also several small bugfixes and the new kendalltau >> > implementation >> > in that branch, if anyone is interested to review. >> >> I just read through your change sets, and I think they are good >> > Thanks for checking. I'll commit that whole branch then. > >> >> removing printcc is more than the ticket asks for >> >> https://github.com/rgommers/scipy/commit/3eb3bd08ddbc1950fbedcb5e68f5f5960cc1c080 >> >> printcc is a useful function, getting a nice string/print for a list >> of list was important when arrays where not used, ?but I don't think >> anything is using it, and it doesn't really fit in. So, I think it can >> as well be removed completely. > > That's pretty much what I was thinking - it's completely unused and > therefore can be removed. > >> >> I haven't looked at how the z, zs removal should be reflected in mstats. >> > Maybe better to answer first whether mstats should closely track stats or > not. 
I did not touch z/zs there because they were not deprecated like the > plain versions (but I'm not sure how important that is). There's also ticket > 1195 for updating zmap. I can do that all at once if you agree to just > mirror the changes to stats. I still keep forgetting sometimes about mstats when I make changes. Some functions in mstats are enhanced versions, that require more decisions, but mirrored functions like the z... should be in sync. I haven't checked all options for zmap and the corresponding mstats version. But, since the new zmap can also handle masked arrays, the mstats versions could be depreciated, removed completely. I'm planning to update to scipy trunk this week, so I can "play" with your changes, and check some of the remaining issues in stats. Josef > > Cheers, > Ralf > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-dev > > From ralf.gommers at googlemail.com Sun Nov 28 07:57:57 2010 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sun, 28 Nov 2010 20:57:57 +0800 Subject: [SciPy-Dev] deprecations/removals for 0.9.0 In-Reply-To: References: Message-ID: On Sun, Nov 28, 2010 at 8:04 PM, wrote: > On Sun, Nov 28, 2010 at 6:54 AM, Ralf Gommers > wrote: > > > > > > On Sun, Nov 28, 2010 at 7:43 PM, wrote: > >> > >> On Sun, Nov 28, 2010 at 4:14 AM, Ralf Gommers > >> wrote: > >> > > >> > Changes for stats are at > >> > https://github.com/rgommers/scipy/tree/stats-work. > >> > There are also several small bugfixes and the new kendalltau > >> > implementation > >> > in that branch, if anyone is interested to review. > >> > >> I just read through your change sets, and I think they are good > >> > > Thanks for checking. I'll commit that whole branch then. > > > >> > >> I haven't looked at how the z, zs removal should be reflected in mstats. > >> > > Maybe better to answer first whether mstats should closely track stats or > > not. I did not touch z/zs there because they were not deprecated like the > > plain versions (but I'm not sure how important that is). There's also > ticket > > 1195 for updating zmap. I can do that all at once if you agree to just > > mirror the changes to stats. > > I still keep forgetting sometimes about mstats when I make changes. > Some functions in mstats are enhanced versions, that require more > decisions, but mirrored functions like the z... should be in sync. > I haven't checked all options for zmap and the corresponding mstats > version. But, since the new zmap can also handle masked arrays, the > mstats versions could be depreciated, removed completely. > The stats version is more comprehensive than the mstats one. Same for zscore. So stats/mstats can be easily synchronized: https://github.com/rgommers/scipy/commit/25b382e (zmap/zscore) https://github.com/rgommers/scipy/commit/c970fabf (other deprecations) I've checked that the mstats.cov tests are not needed in numpy.ma, there are quite a few tests there. Ralf > > I'm planning to update to scipy trunk this week, so I can "play" with > your changes, and check some of the remaining issues in stats. > > Josef > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsseabold at gmail.com Sun Nov 28 19:03:31 2010 From: jsseabold at gmail.com (Skipper Seabold) Date: Sun, 28 Nov 2010 19:03:31 -0500 Subject: [SciPy-Dev] class.rst information and bug with the newer sphinx? 
Message-ID: 

I've been playing around a bit with numpy/scipy's
doc/source/_templates/autosummary/class.rst.  First does this still
work as expected with the newer sphinx?  See for instance the method
of the dtype class for numpy 1.4 (assumed built with older sphinx
before 1.0?) and numpy 1.5

http://docs.scipy.org/doc/numpy-1.4.x/reference/generated/numpy.dtype.html#numpy.dtype
http://docs.scipy.org/doc/numpy-1.5.x/reference/generated/numpy.dtype.html#numpy.dtype

The newbyteorder method does not have a summary in the 1.5 version.

Second, is there anywhere to get more information on the autosummary templates?

This covers layout.html
This mentions the possible files

but I am missing the details.  In

where do the lines

      {% for item in methods %}
         {{ name }}.{{ item }}

come from?  Specifically, where does 'name' come from?  Is this from
Python, Sphinx, Numpy extension, or Jinja?  Any help to demystify this
would be appreciated.

Skipper
_______________________________________________
SciPy-Dev mailing list
SciPy-Dev at scipy.org
http://mail.scipy.org/mailman/listinfo/scipy-dev

From josef.pktd at gmail.com  Sun Nov 28 19:49:35 2010
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Sun, 28 Nov 2010 19:49:35 -0500
Subject: [SciPy-Dev] class.rst information and bug with the newer sphinx?
In-Reply-To: 
References: 
Message-ID: 

On Sun, Nov 28, 2010 at 7:03 PM, Skipper Seabold wrote:
> I've been playing around a bit with numpy/scipy's
> doc/source/_templates/autosummary/class.rst.  First does this still
> work as expected with the newer sphinx?  See for instance the method
> of the dtype class for numpy 1.4 (assumed built with older sphinx
> before 1.0?) and numpy 1.5
>
> http://docs.scipy.org/doc/numpy-1.4.x/reference/generated/numpy.dtype.html#numpy.dtype
> http://docs.scipy.org/doc/numpy-1.5.x/reference/generated/numpy.dtype.html#numpy.dtype
>
> The newbyteorder method does not have a summary in the 1.5 version.
>
> Second, is there anywhere to get more information on the autosummary templates?
>
> This covers layout.html
> This mentions the possible files
>
> but I am missing the details.  In
>
> where do the lines
>
>      {% for item in methods %}
>         {{ name }}.{{ item }}
>
> come from?  Specifically, where does 'name' come from?  Is this from
> Python, Sphinx, Numpy extension, or Jinja?  Any help to demystify this
> would be appreciated.

in
Sphinx-1.0.5\sphinx\ext\autosummary\generate.py

ns collects the description and methods, ... for an object, e.g. for a
class, this is given as argument to the
template:

          rendered = template.render(**ns)

which does the actual rendering of the template plus the info in `ns` to a
string. The template is supposed to produce valid rst with sphinx
instructions. I don't know where the HACK came from.

Josef

>
> Skipper
> _______________________________________________
> SciPy-Dev mailing list
> SciPy-Dev at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-dev
>

From jsseabold at gmail.com  Sun Nov 28 20:33:16 2010
From: jsseabold at gmail.com (Skipper Seabold)
Date: Sun, 28 Nov 2010 20:33:16 -0500
Subject: [SciPy-Dev] class.rst information and bug with the newer sphinx?
In-Reply-To: 
References: 
Message-ID: 

On Sun, Nov 28, 2010 at 7:49 PM, wrote:
> On Sun, Nov 28, 2010 at 7:03 PM, Skipper Seabold wrote:
>> I've been playing around a bit with numpy/scipy's
>> doc/source/_templates/autosummary/class.rst.  First does this still
>> work as expected with the newer sphinx?  See for instance the method
>> of the dtype class for numpy 1.4 (assumed built with older sphinx
>> before 1.0?) and numpy 1.5
>>
>> http://docs.scipy.org/doc/numpy-1.4.x/reference/generated/numpy.dtype.html#numpy.dtype
>> http://docs.scipy.org/doc/numpy-1.5.x/reference/generated/numpy.dtype.html#numpy.dtype
>>
>> The newbyteorder method does not have a summary in the 1.5 version.
>>
>> Second, is there anywhere to get more information on the autosummary templates?
>>
>> This covers layout.html
>> This mentions the possible files
>>
>> but I am missing the details.  In
>>
>> where do the lines
>>
>>      {% for item in methods %}
>>         {{ name }}.{{ item }}
>>
>> come from?  Specifically, where does 'name' come from?  Is this from
>> Python, Sphinx, Numpy extension, or Jinja?  Any help to demystify this
>> would be appreciated.
>
> in
> Sphinx-1.0.5\sphinx\ext\autosummary\generate.py
>
> ns collects the description and methods, ... for an object, e.g. for a
> class, this is given as argument to the
> template:
>
>          rendered = template.render(**ns)
>
> which does the actual rendering of the template plus the info in `ns` to a
> string. The template is supposed to produce valid rst with sphinx
> instructions. I don't know where the HACK came from.
>

Ah, thanks, I never would've found that...

HACK is ignored as a comment I believe, though the indentation is not
correct style-wise, hence the warnings.

http://sphinx.pocoo.org/rest.html#explicit-markup

From aric.hagberg at gmail.com  Sun Nov 28 22:45:25 2010
From: aric.hagberg at gmail.com (Aric Hagberg)
Date: Sun, 28 Nov 2010 20:45:25 -0700
Subject: [SciPy-Dev] ARPACK fixes before 0.9
In-Reply-To: 
References: 
Message-ID: 

On Sat, Nov 27, 2010 at 9:33 AM, Pauli Virtanen wrote:
> Hi,
>
> The Arpack interface in Scipy needs some fixes, which would be useful to
> get in for Scipy 0.9. I have them in a branch here:
>
>        https://github.com/pv/scipy-work/tree/bug/1313-arpack
>
> Comments would be appreciated (esp. from David who wrote the original
> interface). I'm going to merge them probably the next W/E unless
> objections arise.
>
> Bugfixes:
>
> - Raise ArpackNoConvergence when ARPACK iteration does not converge.
>   Previously, there was no way to know if convergence was obtained.
>
> - Fix a bug in return value extraction from dneupd, which resulted
>   in invalid eigenvectors/eigenvalues being returned on non-convergence.
>
> - Allow complex matrices in the sparse SVD. It's a naive approach,
>   but probably still better than nothing.
>
> Other changes:
>
> - Rename routines eigen* -> eigs* to avoid shadowing the "eigen" module.
>   "eigs" is also the name for the equivalent routines in Octave/Matlab.
>
> - Remove the ``speigs`` ARPACK interface. It does not seem to make sense
>   to have two different interfaces to the same library.
>
> - Un-deprecate .dot() method of sparse matrices -- ndarrays also have
>   it, so it makes sense to retain it.

I am actually to blame for the original interface. David made my hack
work correctly and deserves credit for the good parts.

I took a look at your branch and it seems fine to me.

The name changes make sense, though maybe svd should be svds to match
the eig->eigs pattern between the numpy and sparse implementations?
(Similarly, should eigs_symmetric be eigsh to match numpy.linalg.eigh?
It would need to be extended to handle the complex case.)
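For readers following the renaming, usage of the new entry point looks
roughly like this (a minimal sketch; spdiags just builds an easy test
matrix with well-separated eigenvalues):

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import eigs  # renamed from eigen* in the branch

    # 100x100 sparse diagonal matrix with eigenvalues 1..100
    A = sp.spdiags(np.arange(1, 101, dtype=float), 0, 100, 100)
    vals, vecs = eigs(A, k=6, which='LM')  # six eigenpairs, largest magnitude
    print(np.sort(vals.real))              # approximately 95..100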
I made some suggestions for documentation improvements at
https://github.com/hagberg/scipy-work/tree/bug/1313-arpack

Aric

From cournape at gmail.com  Mon Nov 29 00:12:53 2010
From: cournape at gmail.com (David Cournapeau)
Date: Mon, 29 Nov 2010 14:12:53 +0900
Subject: [SciPy-Dev] ARPACK fixes before 0.9
In-Reply-To: 
References: 
Message-ID: 

On Sun, Nov 28, 2010 at 1:33 AM, Pauli Virtanen wrote:
> Hi,
>
> The Arpack interface in Scipy needs some fixes, which would be useful to
> get in for Scipy 0.9. I have them in a branch here:
>
>        https://github.com/pv/scipy-work/tree/bug/1313-arpack
>
> Comments would be appreciated (esp. from David who wrote the original
> interface). I'm going to merge them probably the next W/E unless
> objections arise.
>
> Bugfixes:
>
> - Raise ArpackNoConvergence when ARPACK iteration does not converge.
>   Previously, there was no way to know if convergence was obtained.
>
> - Fix a bug in return value extraction from dneupd, which resulted
>   in invalid eigenvectors/eigenvalues being returned on non-convergence.
>
> - Allow complex matrices in the sparse SVD. It's a naive approach,
>   but probably still better than nothing.

I think so too, that's why I put SVD in the first place. When I was
looking at sparse svd implementations, the best solution looked like
adding support for PROPACK (which is much faster than ARPACK, up to 10
times on some middle-size problems I tried), but there is no license
from the author (and any known attempt to get clarification has been
ignored AFAIK).

The PROPACK method has been fairly well documented, though, with both
fortran and (early) matlab implementations, so it is doable. The
original code cannot be used as is anyway because of some hardcoded
limitations.

Besides speed, the main issue with solving A^H A is precision; I was
thinking about adding support for solving [A 0; 0 A^H], which is more
precise (but much slower), but never got around to it. Should be fairly
easy to add.

> Other changes:
>
> - Rename routines eigen* -> eigs* to avoid shadowing the "eigen" module.
>   "eigs" is also the name for the equivalent routines in Octave/Matlab.
>
> - Remove the ``speigs`` ARPACK interface. It does not seem to make sense
>   to have two different interfaces to the same library.

agreed - I only avoided removing anything at that time because I did not
know what the status of the interface was (and needed svd quickly).

For the parts I am familiar with, the changes made sense - but as Aric
mentioned, I have only contributed the svd part. There are still some
hard (crash-causing) bugs in the arpack submodule, but I suspect it is a
bug in arpack itself: when I tried to debug it, there was out-of-bounds
array indexing in some cases, which did not seem to be caused by invalid
input. I think I would rather spend time implementing better algorithms
than debugging unstructured, goto-ridden fortran code :)

cheers,

David

From pav at iki.fi  Mon Nov 29 06:38:44 2010
From: pav at iki.fi (Pauli Virtanen)
Date: Mon, 29 Nov 2010 11:38:44 +0000 (UTC)
Subject: [SciPy-Dev] ARPACK fixes before 0.9
References: 
Message-ID: 

Sun, 28 Nov 2010 20:45:25 -0700, Aric Hagberg wrote:
[clip]
> I am actually to blame for the original interface. David made my hack
> work correctly and deserves credit for the good parts.

Ah sorry, I only looked at the committer in "git blame" :)

> I took a look at your branch and it seems fine to me.
>
> The name changes make sense, though maybe svd should be svds to match
> the eig->eigs pattern between the numpy and sparse implementations?
Yes, that seems reasonable. > (Similarly should eigs_symmetric be eigsh to match numpy.linalg.eigh?. > It would need to be extended to handle the complex case). There doesn't seem to be any special support for Hermitian matrices in Arpack, so I'm not sure which is better, (i) to raise an error on complex- valued input, or (ii) to fall back to eigs(). > I made some suggestions for documentation improvements at > https://github.com/hagberg/scipy-work/tree/bug/1313-arpack Ok, thanks! Mon, 29 Nov 2010 14:12:53 +0900, David Cournapeau wrote: [clip] > The PROPACK method has been fairly well documented, though, with both > fortran and (early) matlab implementation, so it is doable. The > original code cannot be used as is anyway because of some hardcoded > limitations. > > Besides speed, the main issue with solving A^h A is precision, I was > thinking about adding support for solving [A 0; 0 A^H] which is more > precise (but much slower), but never got around it. Should be fairly > easy to add. Sparse SVDs probably need more work later on. From what you say, the easiest approach seems to write the code for them from scratch, based on some papers. I don't have a need for SVDs myself at the moment, so it's for later. [clip] > For the parts I am familiar with, the changes made sense - but as Aric > mentioned, I have only contributed the svd part. > There are still some > hard (causing crash) bugs in the arpack submodule, but I suspect it is a > bug in arpack itself: when I tried to debug it, it was out of bound > array indexing in some cases, which did not seem to be caused by invalid > input. I think I would rather spend time implementing better algorithms > than debugging unstrucuted, goto-ridden fortran code :) At least one crash due to out-of-bounds access was fixed by applying a patch from Debian. Let's hope there are not more of them... Pauli From warren.weckesser at enthought.com Mon Nov 29 18:52:52 2010 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Mon, 29 Nov 2010 17:52:52 -0600 Subject: [SciPy-Dev] Sporadic failures of tests of signal.correlate with dtype complex64 In-Reply-To: References: Message-ID: On Tue, Nov 2, 2010 at 9:18 AM, Ralf Gommers wrote: > On Tue, Nov 2, 2010 at 1:24 PM, Warren Weckesser > wrote: > > On Mac OSX 10.5.8, I'm seeing occasional failures like the following: > > > > $ python -c "import scipy.signal; scipy.signal.test()" > > Running unit tests for scipy.signal > > NumPy version 1.5.0.dev8716 > > NumPy is installed in > > /Users/warren/tmp_py_install/lib/python2.6/site-packages/numpy > > SciPy version 0.9.0.dev6856 > > SciPy is installed in > > /Users/warren/tmp_py_install/lib/python2.6/site-packages/scipy > > Python version 2.6.5 |EPD 6.2-2 (32-bit)| (r265:79063, May 28 2010, > > 15:13:03) [GCC 4.0.1 (Apple Inc. build 5488)] > > nose version 0.11.3 > > > ................./Users/warren/tmp_py_install/lib/python2.6/site-packages/scipy/signal/filter_design.py:256: > > BadCoefficients: Badly conditioned filter coefficients (numerator): the > > results may be meaningless > > "results may be meaningless", BadCoefficients) > > > ......................................................F............................................................................................................................................................................................................................................... 
> > ====================================================================== > > FAIL: test_rank1_same (test_signaltools.TestCorrelateComplex64) > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File > > > "/Users/warren/tmp_py_install/lib/python2.6/site-packages/scipy/signal/tests/test_signaltools.py", > > line 606, in test_rank1_same > > assert_array_almost_equal(y, y_r) > > File > > > "/Users/warren/tmp_py_install/lib/python2.6/site-packages/numpy/testing/utils.py", > > line 774, in assert_array_almost_equal > > header='Arrays are not almost equal') > > File > > > "/Users/warren/tmp_py_install/lib/python2.6/site-packages/numpy/testing/utils.py", > > line 618, in assert_array_compare > > raise AssertionError(msg) > > AssertionError: > > Arrays are not almost equal > > > > (mismatch 10.0%) > > x: array([-6.76370811-8.55324841j, 0.68672836-4.2681613j , > > -3.22760987-8.69287109j, 0.75051951-5.50820398j, > > -7.33016682-1.14685655j, -5.99573374+7.84123898j,... > > y: array([-6.76370859-8.55324745j, 0.68672895-4.2681613j , > > -3.22761011-8.69286919j, 0.75051963-5.50820446j, > > -7.33016682-1.14685678j, -5.99573517+7.84123898j,... > > > > ---------------------------------------------------------------------- > > Ran 311 tests in 2.307s > > > > FAILED (failures=1) > > > > $ python -c "import scipy.signal; scipy.signal.test()" > > Running unit tests for scipy.signal > > NumPy version 1.5.0.dev8716 > > NumPy is installed in > > /Users/warren/tmp_py_install/lib/python2.6/site-packages/numpy > > SciPy version 0.9.0.dev6856 > > SciPy is installed in > > /Users/warren/tmp_py_install/lib/python2.6/site-packages/scipy > > Python version 2.6.5 |EPD 6.2-2 (32-bit)| (r265:79063, May 28 2010, > > 15:13:03) [GCC 4.0.1 (Apple Inc. build 5488)] > > nose version 0.11.3 > > > ................./Users/warren/tmp_py_install/lib/python2.6/site-packages/scipy/signal/filter_design.py:256: > > BadCoefficients: Badly conditioned filter coefficients (numerator): the > > results may be meaningless > > "results may be meaningless", BadCoefficients) > > > .......................................................F.............................................................................................................................................................................................................................................. 
> > ====================================================================== > > FAIL: test_rank1_same_old (test_signaltools.TestCorrelateComplex64) > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File > > > "/Users/warren/tmp_py_install/lib/python2.6/site-packages/numpy/testing/decorators.py", > > line 257, in _deprecated_imp > > f(*args, **kwargs) > > File > > > "/Users/warren/tmp_py_install/lib/python2.6/site-packages/scipy/signal/tests/test_signaltools.py", > > line 641, in test_rank1_same_old > > assert_array_almost_equal(y, y_r) > > File > > > "/Users/warren/tmp_py_install/lib/python2.6/site-packages/numpy/testing/utils.py", > > line 774, in assert_array_almost_equal > > header='Arrays are not almost equal') > > File > > > "/Users/warren/tmp_py_install/lib/python2.6/site-packages/numpy/testing/utils.py", > > line 618, in assert_array_compare > > raise AssertionError(msg) > > AssertionError: > > Arrays are not almost equal > > > > (mismatch 10.0%) > > x: array([ 2.46665049+2.02072477j, -7.42591763-0.54789257j, > > 3.41454220-0.15863085j, -0.14030695+5.01129198j, > > -2.11230707+2.68583822j, 7.78784609+7.19434834j,... > > y: array([ 2.46665049+2.02072501j, -7.42591763-0.54789257j, > > 3.41454196-0.15863061j, -0.14030659+5.01129246j, > > -2.11230707+2.68583822j, 7.78784752+7.19434786j,... > > > > ---------------------------------------------------------------------- > > Ran 311 tests in 2.623s > > > > FAILED (failures=1) > > > > > > The above tests are part of a suite of tests that use random data, and > > usually the tests all pass. It took several tries to get the above > > failures. > > > > I suspect the problem is simply that the default tolerance of > > 'assert_array_almost_equal' is too small for the complex64 data type for > > these tests. > > > > Could someone verify that they can reproduce those failures? > > Can't reproduce these. > > > Does simply increasing the tolerance of the test look like a reasonable > fix? > > The default tolerance is decimal=6, which is not all that strict. I > notice that only the longdouble tests fail while the single/double > tests do not. This is very likely platform dependent, otherwise it > would have been noticed before. > Actually, it was not the longdouble case that was failing; the test class TestCorrelateComplex64 is the complex single precision case. > > Like Josef/Pauli say fixing the seed is one thing (preferably with a > value that's failing for you). But then I would split the tests and > only increase the tolerance of the longdouble version, perhaps even > only on certain platforms. > In r6982, I modified the complex test cases of the correlate function to use a decimal precision that depends on the data type; single, double and longdouble are tested with 'decimal' being 5, 10 and 15, respectively. This means that the double and longdouble cases are now being tested with a stricter tolerance than before. I also added a line to set the seed to a value for which the tests had been failing for me. Warren -------------- next part -------------- An HTML attachment was scrubbed... 
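The shape of that change, as a sketch (the committed test classes differ in
detail, and the seed value here is illustrative rather than the one in
r6982):

    import numpy as np
    from numpy.testing import assert_array_almost_equal

    np.random.seed(1234)  # fixed seed so failures are reproducible

    # decimal precision scaled to the dtype under test
    for dtype, decimal in [(np.complex64, 5),
                           (np.complex128, 10),
                           (np.clongdouble, 15)]:
        y = (np.random.randn(10) + 1j * np.random.randn(10)).astype(dtype)
        y_r = y + dtype(10.0 ** (-decimal - 2))  # error below the tolerance
        assert_array_almost_equal(y, y_r, decimal=decimal)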
URL: 

From aric.hagberg at gmail.com  Mon Nov 29 19:40:37 2010
From: aric.hagberg at gmail.com (Aric Hagberg)
Date: Mon, 29 Nov 2010 17:40:37 -0700
Subject: [SciPy-Dev] ARPACK fixes before 0.9
In-Reply-To: 
References: 
Message-ID: 

On Mon, Nov 29, 2010 at 4:38 AM, Pauli Virtanen wrote:
> Sun, 28 Nov 2010 20:45:25 -0700, Aric Hagberg wrote:
>> (Similarly, should eigs_symmetric be eigsh to match numpy.linalg.eigh?
>> It would need to be extended to handle the complex case.)
>
> There doesn't seem to be any special support for Hermitian matrices in
> Arpack, so I'm not sure which is better, (i) to raise an error on
> complex-valued input, or (ii) to fall back to eigs().

The simplest is to raise an error, though it would be nice, and fitting
for the name eigsh, to fall back to eigs(). If you choose the fall-back
plan, then the option which='BE' isn't implemented, so you'll have to
raise an error on that. The others can be mapped: 'LA'->'LR',
'SA'->'SR'. Also it might be good to cast the returned eigenvalues as
real (the eigenvectors will in general be complex).

Aric
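Concretely, the fallback could look like this rough sketch (the function
name and mapping are illustrative, not the actual patch):

    import numpy as np
    from scipy.sparse.linalg import eigs  # per the renaming in Pauli's branch

    # 'which' criteria translated from the symmetric to the general driver
    _WHICH_MAP = {'LA': 'LR', 'SA': 'SR', 'LM': 'LM', 'SM': 'SM'}

    def eigsh_fallback(A, k=6, which='LA', **kwargs):
        if which == 'BE':
            raise ValueError("which='BE' is not available for complex input")
        vals, vecs = eigs(A, k=k, which=_WHICH_MAP[which], **kwargs)
        # a Hermitian matrix has real eigenvalues; eigenvectors stay complex
        return vals.real, vecs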