From matthias at matthias-k.org  Fri May  2 11:47:54 2014
From: matthias at matthias-k.org (Matthias Kümmerer)
Date: Fri, 02 May 2014 17:47:54 +0200
Subject: [SciPy-Dev] GIL and gaussian filter
Message-ID: <1432519.KEEGSWvFZa@klio>

Hi,

I am using scipy.ndimage.filters.gaussian_filter a lot. Often, I have to blur
a range of arrays. This could be done perfectly in parallel; however, the
function does not release the GIL, so I cannot use threading, while
multiprocessing has quite a big overhead. If I understood the code correctly,
there are no Python calls in NI_Correlate1D, which is what ultimately performs
the filter. Thus I wanted to ask whether it is possible to release the GIL
inside the correlate function and make parallel filtering easier. I will be
happy to help with this if you can give me some hints on where I have to
search.

Best,
Matthias

From ralf.gommers at gmail.com  Sat May 3 10:32:25 2014
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Sat, 3 May 2014 16:32:25 +0200
Subject: [SciPy-Dev] Scipy build errors - undefined reference to `Py* - CentOS6
In-Reply-To:
References:
Message-ID:

On Mon, Apr 28, 2014 at 6:48 PM, sidharth kashyap
<sidharth.n.kashyap at gmail.com> wrote:

> Hi,
>
> I am getting the below errors when I try to build Scipy.
>
> build/temp.linux-x86_64-2.7/build/src.linux-x86_64-2.7/_fftpackmodule.o:
> In function `f2py_rout__fftpack_destroy_dst1_cache':
> scipy-0.13.3/scipy/fftpack/build/src.linux-x86_64-2.7/_fftpackmodule.c:3458:
> undefined reference to `PyArg_ParseTupleAndKeywords'
> ......
>
> The command at which it fails:
>
> gfortran -Wall -L/app/libraries/expat/2.0.1/gnu-4.6.2/lib -lexpat -g
> build/temp.linux-x86_64-2.7/build/src.linux-x86_64-2.7/_fftpackmodule.o
> build/temp.linux-x86_64-2.7/src/zfft.o
> build/temp.linux-x86_64-2.7/src/drfft.o
> build/temp.linux-x86_64-2.7/src/zrfft.o
> build/temp.linux-x86_64-2.7/src/zfftnd.o
> build/temp.linux-x86_64-2.7/build/src.linux-x86_64-2.7/src/dct.o
> build/temp.linux-x86_64-2.7/build/src.linux-x86_64-2.7/src/dst.o
> build/temp.linux-x86_64-2.7/build/src.linux-x86_64-2.7/fortranobject.o
> -Lbuild/temp.linux-x86_64-2.7 -ldfftpack -lfftpack -lgfortran -o
> build/lib.linux-x86_64-2.7/fftpack/_fftpack.so
>
> The error:
>
> scipy-0.13.3/scipy/fftpack/build/src.linux-x86_64-2.7/fortranobject.c:959:
> undefined reference to `PyCObject_AsVoidPtr'
> collect2: ld returned 1 exit status
>
> I have tried to put in -l and -L options to include the library paths.
>
> Any help on this is highly appreciated.
>
> Should I include any other build parameters?

Do you have CFLAGS or LDFLAGS set? With distutils they override the default
flags instead of appending to them.

Ralf

> Thanks,
> Sid

From ralf.gommers at gmail.com  Sun May 4 04:15:52 2014
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Sun, 4 May 2014 10:15:52 +0200
Subject: [SciPy-Dev] ANN: Scipy 0.14.0 release
Message-ID:

Hi,

On behalf of the Scipy development team I'm pleased to announce the
availability of Scipy 0.14.0. This release contains new features (see release
notes below) and 8 months' worth of maintenance work.
80 people contributed to this release. This is also the first release for
which binary wheels are available on PyPI for OS X, supporting the python.org
Python. Wheels for Windows are still being worked on; those may follow at a
later date.

This release requires Python 2.6, 2.7 or 3.2-3.4 and NumPy 1.5.1 or greater.

Sources and binaries can be found at
https://sourceforge.net/projects/scipy/files/scipy/0.14.0/.

Enjoy,
Ralf


==========================
SciPy 0.14.0 Release Notes
==========================

.. contents::

SciPy 0.14.0 is the culmination of 8 months of hard work. It contains many new
features, numerous bug-fixes, improved test coverage and better documentation.
There have been a number of deprecations and API changes in this release,
which are documented below. All users are encouraged to upgrade to this
release, as there are a large number of bug-fixes and optimizations. Moreover,
our development attention will now shift to bug-fix releases on the 0.14.x
branch, and on adding new features on the master branch.

This release requires Python 2.6, 2.7 or 3.2-3.4 and NumPy 1.5.1 or greater.


New features
============

``scipy.interpolate`` improvements
----------------------------------

A new wrapper function `scipy.interpolate.interpn` for interpolation on
regular grids has been added. `interpn` supports linear and nearest-neighbor
interpolation in arbitrary dimensions and spline interpolation in two
dimensions.

Faster implementations of piecewise polynomials in power and Bernstein
polynomial bases have been added as `scipy.interpolate.PPoly` and
`scipy.interpolate.BPoly`. New users should use these in favor of
`scipy.interpolate.PiecewisePolynomial`.

`scipy.interpolate.interp1d` now accepts non-monotonic inputs and sorts them.
If performance is critical, sorting can be turned off by using the new
``assume_sorted`` keyword.

Functionality for evaluation of bivariate spline derivatives in
``scipy.interpolate`` has been added.

The new class `scipy.interpolate.Akima1DInterpolator` implements the piecewise
cubic polynomial interpolation scheme devised by H. Akima.

Functionality for fast interpolation on regular, unevenly spaced grids in
arbitrary dimensions has been added as
`scipy.interpolate.RegularGridInterpolator`.

``scipy.linalg`` improvements
-----------------------------

The new function `scipy.linalg.dft` computes the matrix of the discrete
Fourier transform.

A condition number estimation function for matrix exponential,
`scipy.linalg.expm_cond`, has been added.

``scipy.optimize`` improvements
-------------------------------

A set of benchmarks for optimize, which can be run with ``optimize.bench()``,
has been added.

`scipy.optimize.curve_fit` now has more controllable error estimation via the
``absolute_sigma`` keyword.

Support for passing custom minimization methods to ``optimize.minimize()``
and ``optimize.minimize_scalar()`` has been added, currently useful especially
for combining ``optimize.basinhopping()`` with custom local optimizer
routines.

``scipy.stats`` improvements
----------------------------

A new class `scipy.stats.multivariate_normal` with functionality for
multivariate normal random variables has been added.

A lot of work on the ``scipy.stats`` distribution framework has been done.
Moment calculations (skew and kurtosis mainly) are fixed and verified, all
examples are now runnable, and many small accuracy and performance
improvements for individual distributions were merged.
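For illustration, a minimal usage sketch of the new multivariate normal class
(the mean, covariance and evaluation point below are made-up example values,
they are not from the release notes)::

    import numpy as np
    from scipy.stats import multivariate_normal

    mean = np.array([0.5, -0.2])
    cov = np.array([[2.0, 0.3],
                    [0.3, 0.5]])
    x = np.array([0.1, 0.4])

    # density and log-density at a point
    print(multivariate_normal.pdf(x, mean=mean, cov=cov))
    print(multivariate_normal.logpdf(x, mean=mean, cov=cov))

    # three random draws
    print(multivariate_normal.rvs(mean=mean, cov=cov, size=3))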
The new function `scipy.stats.anderson_ksamp` computes the k-sample
Anderson-Darling test for the null hypothesis that k samples come from the
same parent population.

``scipy.signal`` improvements
-----------------------------

``scipy.signal.iirfilter`` and related functions to design Butterworth,
Chebyshev, elliptical and Bessel IIR filters now all use pole-zero ("zpk")
format internally instead of using transformations to numerator/denominator
format. The accuracy of the produced filters, especially high-order ones, is
improved significantly as a result.

The new function `scipy.signal.vectorstrength` computes the vector strength,
a measure of phase synchrony, of a set of events.

``scipy.special`` improvements
------------------------------

The functions `scipy.special.boxcox` and `scipy.special.boxcox1p`, which
compute the Box-Cox transformation, have been added.

``scipy.sparse`` improvements
-----------------------------

- Significant performance improvement in CSR, CSC, and DOK indexing speed.
- When using Numpy >= 1.9 (to be released in MM 2014), sparse matrices
  function correctly when given to arguments of ``np.dot``, ``np.multiply``
  and other ufuncs. With earlier Numpy and Scipy versions, the results of such
  operations are undefined and usually unexpected.
- Sparse matrices are no longer limited to ``2^31`` nonzero elements. They
  automatically switch to using 64-bit index data type for matrices containing
  more elements. User code written assuming the sparse matrices use int32 as
  the index data type will continue to work, except for such large matrices.
  Code dealing with larger matrices needs to accept either int32 or int64
  indices.

Deprecated features
===================

``anneal``
----------

The global minimization function `scipy.optimize.anneal` is deprecated. All
users should use the `scipy.optimize.basinhopping` function instead.

``scipy.stats``
---------------

``randwcdf`` and ``randwppf`` functions are deprecated. All users should use
distribution-specific ``rvs`` methods instead.

Probability calculation aliases ``zprob``, ``fprob`` and ``ksprob`` are
deprecated. Use instead the ``sf`` methods of the corresponding distributions
or the ``special`` functions directly.

``scipy.interpolate``
---------------------

``PiecewisePolynomial`` class is deprecated.

Backwards incompatible changes
==============================

scipy.special.lpmn
------------------

``lpmn`` no longer accepts complex-valued arguments. A new function ``clpmn``
with uniform complex analytic behavior has been added, and it should be used
instead.

scipy.sparse.linalg
-------------------

Eigenvectors in the case of generalized eigenvalue problem are normalized to
unit vectors in 2-norm, rather than following the LAPACK normalization
convention.

The deprecated UMFPACK wrapper in ``scipy.sparse.linalg`` has been removed due
to license and install issues. If available, ``scikits.umfpack`` is still used
transparently in the ``spsolve`` and ``factorized`` functions. Otherwise,
SuperLU is used instead in these functions.

scipy.stats
-----------

The deprecated functions ``glm``, ``oneway`` and ``cmedian`` have been removed
from ``scipy.stats``. ``stats.scoreatpercentile`` now returns an array instead
of a list of percentiles.

scipy.interpolate
-----------------

The API for computing derivatives of a monotone piecewise interpolation has
changed: if `p` is a ``PchipInterpolator`` object, `p.derivative(der)` returns
a callable object representing the derivative of `p`.
For in-place derivatives use the second argument of the `__call__` method:
`p(0.1, der=2)` evaluates the second derivative of `p` at `x=0.1`.

The method `p.derivatives` has been removed.

Other changes
=============

Authors
=======

* Marc Abramowitz +
* Anders Bech Borchersen +
* Vincent Arel-Bundock +
* Petr Baudis +
* Max Bolingbroke
* François Boulogne
* Matthew Brett
* Lars Buitinck
* Evgeni Burovski
* CJ Carey +
* Thomas A Caswell +
* Pawel Chojnacki +
* Phillip Cloud +
* Stefano Costa +
* David Cournapeau
* David Menendez Hurtado +
* Matthieu Dartiailh +
* Christoph Deil +
* Jörg Dietrich +
* endolith
* Francisco de la Peña +
* Ben FrantzDale +
* Jim Garrison +
* André Gaul
* Christoph Gohlke
* Ralf Gommers
* Robert David Grant
* Alex Griffing
* Blake Griffith
* Yaroslav Halchenko
* Andreas Hilboll
* Kat Huang
* Gert-Ludwig Ingold
* James T. Webber +
* Dorota Jarecka +
* Todd Jennings +
* Thouis (Ray) Jones
* Juan Luis Cano Rodríguez
* ktritz +
* Jacques Kvam +
* Eric Larson +
* Justin Lavoie +
* Denis Laxalde
* Jussi Leinonen +
* lemonlaug +
* Tim Leslie
* Alain Leufroy +
* George Lewis +
* Max Linke +
* Brandon Liu +
* Benny Malengier +
* Matthias Kümmerer +
* Cimarron Mittelsteadt +
* Eric Moore
* Andrew Nelson +
* Niklas Hambüchen +
* Joel Nothman +
* Clemens Novak
* Emanuele Olivetti +
* Stefan Otte +
* peb +
* Josef Perktold
* pjwerneck
* poolio
* Jérôme Roy +
* Carl Sandrock +
* Andrew Sczesnak +
* Shauna +
* Fabrice Silva
* Daniel B. Smith
* Patrick Snape +
* Thomas Spura +
* Jacob Stevenson
* Julian Taylor
* Tomas Tomecek
* Richard Tsai
* Jacob Vanderplas
* Joris Vankerschaver +
* Pauli Virtanen
* Warren Weckesser

A total of 80 people contributed to this release. People with a "+" by their
names contributed a patch for the first time. This list of names is
automatically generated, and may not be fully complete.

From sturla.molden at gmail.com  Sun May 4 06:57:17 2014
From: sturla.molden at gmail.com (Sturla Molden)
Date: Sun, 4 May 2014 10:57:17 +0000 (UTC)
Subject: [SciPy-Dev] GIL and gaussian filter
References: <1432519.KEEGSWvFZa@klio>
Message-ID: <933891858420892625.784532sturla.molden-gmail.com@news.gmane.org>

Matthias Kümmerer wrote:

> Thus I wanted to ask whether it is possible to release the GIL inside the
> correlate function and make parallel filtering easier. I will be happy to
> help with this, if you can give me some hints where I have to search.

Possible, but not recommended. The way gaussian_filter is written, you would
have to release and re-acquire the GIL for every axis in the ndimage. This is
an expensive operation. It is better to keep everything in C or Cython, so
you can release the GIL only once.

Two other comments:

- It is better to do the multi-threading inside the gaussian_filter function.

- It is possible to express a Gaussian filter as a recursive IIR filter,
which is much faster than convolution. Also it does not truncate the filter
kernel, and the speed is not dependent on the kernel size.

Sturla

From charlesr.harris at gmail.com  Mon May 5 13:49:09 2014
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Mon, 5 May 2014 11:49:09 -0600
Subject: [SciPy-Dev] Current test errors/failures with upcoming 1.9.
Message-ID:

This is after the reversion to fix a long-standing test failure. The first
error is new to me, the second is the flaky qhull test, and the third looks
like a precision problem.
======================================================================
ERROR: test_fitpack.TestSplder.test_kink
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/charris/.local/lib/python2.7/site-packages/scipy/interpolate/tests/test_fitpack.py", line 329, in test_kink
    splder(spl2, 2)  # Should work
  File "/home/charris/.local/lib/python2.7/site-packages/scipy/interpolate/fitpack.py", line 1198, in splder
    "and is not differentiable %d times") % n)
ValueError: The spline has internal repeated knots and is not differentiable 2 times

======================================================================
FAIL: test_qhull.TestUtilities.test_degenerate_barycentric_transforms
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/charris/.local/lib/python2.7/site-packages/numpy/testing/decorators.py", line 146, in skipper_func
    return f(*args, **kwargs)
  File "/home/charris/.local/lib/python2.7/site-packages/scipy/spatial/tests/test_qhull.py", line 296, in test_degenerate_barycentric_transforms
    assert_(bad_count < 20, bad_count)
  File "/home/charris/.local/lib/python2.7/site-packages/numpy/testing/utils.py", line 50, in assert_
    raise AssertionError(smsg)
AssertionError: 20

======================================================================
FAIL: test_skewtest (test_mstats_basic.TestCompareWithStats)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/charris/.local/lib/python2.7/site-packages/scipy/stats/tests/test_mstats_basic.py", line 972, in test_skewtest
    assert_equal(r[0], rm[0])
  File "/home/charris/.local/lib/python2.7/site-packages/numpy/ma/testutils.py", line 100, in assert_equal
    raise AssertionError(msg)
AssertionError:
Items are not equal:
 ACTUAL: 0.86656913902327259
 DESIRED: 0.86656913902327182

----------------------------------------------------------------------
Ran 16601 tests in 106.443s

FAILED (KNOWNFAIL=277, SKIP=903, errors=1, failures=2)

Chuck

From jtaylor.debian at googlemail.com  Mon May 5 13:53:52 2014
From: jtaylor.debian at googlemail.com (Julian Taylor)
Date: Mon, 05 May 2014 19:53:52 +0200
Subject: [SciPy-Dev] Current test errors/failures with upcoming 1.9.
In-Reply-To:
References:
Message-ID: <5367D030.5070406@googlemail.com>

On 05.05.2014 19:49, Charles R Harris wrote:
> This is after the reversion to fix a long-standing test failure. The
> first error is new to me, the second is the flaky qhull test, and the
> third looks like a precision problem.

The first two issues are unrelated to numpy.

The first is a known issue which seems to appear with newer distributions:
https://github.com/scipy/scipy/issues/2911
The issue states 32-bit only, but now I can also reproduce it in 64-bit.
Likely compiler optimization changes.
The second is also known and very old; I can't find an issue, but I know it
has been happening on 32-bit for quite a while.

> ======================================================================
> ERROR: test_fitpack.TestSplder.test_kink
> [...]
>
> FAILED (KNOWNFAIL=277, SKIP=903, errors=1, failures=2)
>
> Chuck

From warren.weckesser at gmail.com  Mon May 5 18:50:09 2014
From: warren.weckesser at gmail.com (Warren Weckesser)
Date: Mon, 5 May 2014 18:50:09 -0400
Subject: [SciPy-Dev] 1.0 roadmap: weave
In-Reply-To:
References:
Message-ID:

On Mon, Apr 28, 2014 at 12:56 PM, Ralf Gommers wrote:

> On Thu, Nov 28, 2013 at 10:03 PM, Maxim Imakaev wrote:
>> Hi,
>>
>> Pauli Virtanen iki.fi> writes:
>>
>> > Arnd Baecker web.de> writes:
>> > [clip]
>> > > Finally, to answer Robert's question whether I could volunteer
>> > > to maintain weave as a separate package:
>> > > not sure - maybe with some helping hands (currently I have no clue what
>> > > this would require) it might be possible.
>> > > The important aspect is the long term perspective:
>> > > How many people would be interested in this and maybe even in a Python 3
>> > > port and actively use weave also for new code, or
>> > > is the general impression that the more modern tools (cython, ...)
>> > > should be used?
>> >
>> > Also, let's say that if someone with personal interest in keeping
>> > it working addresses the issues within the next few years, then
>> > the pressure of splitting it out decreases quite a lot.
>> >
>> > In the big picture, weave is a relatively "mature" code base, and
>> > keeping it working is probably not too big a job apart from the Py3 port.
>>
>> I found this two-month-old thread while checking (again) for any solutions
>> regarding weave.inline and Python 3.x compatibility.
>>
>> Our lab is another example of what Arnd was talking about: I'm a graduate
>> student close to the end of my Ph.D., and I built many parts of our code
>> using weave.inline. Now I'm looking for ways to make my code compatible
>> with future generations. And weave.inline is the only package which holds
>> us at Python 2.x.
>>
>> Did someone start porting weave.inline since this thread ended on Sep 26?
>> What are the chances someone will do this in the nearest future?
>
> They were small, it looks like.
>
> I spent some time this weekend on packaging weave as a separate package:
> https://github.com/rgommers/weave. My proposal is to:
> - release this under the name "Weave".
> - put it on PyPI and at https://github.com/scipy/weave
> - deprecate scipy.weave for 0.15.0
> - add an extra envvar to trigger running the scipy.weave tests marked as
>   slow. Don't run those by default when executing ``scipy.test('full')``.
>   This will fix the timeout issues we currently have on Travis CI.
> - remove scipy.weave for 0.17.0 or 1.0.0, whichever comes first.
>
> Thoughts?

+1

Warren

> Ralf

From cournape at gmail.com  Tue May 6 05:20:25 2014
From: cournape at gmail.com (David Cournapeau)
Date: Tue, 6 May 2014 10:20:25 +0100
Subject: [SciPy-Dev] 1.0 roadmap: weave
In-Reply-To:
References:
Message-ID:

On Mon, Apr 28, 2014 at 5:56 PM, Ralf Gommers wrote:
> [...]
>
> I spent some time this weekend on packaging weave as a separate package:
> https://github.com/rgommers/weave. My proposal is to:
> - release this under the name "Weave".
> - put it on PyPI and at https://github.com/scipy/weave
> - deprecate scipy.weave for 0.15.0
> - add an extra envvar to trigger running the scipy.weave tests marked as
>   slow. Don't run those by default when executing ``scipy.test('full')``.
>   This will fix the timeout issues we currently have on Travis CI.
> - remove scipy.weave for 0.17.0 or 1.0.0, whichever comes first.
>
> Thoughts?

In favour as well.

David

From kasturi.surya at gmail.com  Tue May 6 08:48:30 2014
From: kasturi.surya at gmail.com (Surya)
Date: Tue, 6 May 2014 18:18:30 +0530
Subject: [SciPy-Dev] PEP8: Acceptable line length
Message-ID:

Hey all,

PEP8 basically recommends 79 chars. However, the docs suggest it's not
strictly enforced.

So, what's the acceptable line length in the scipy community? How strictly
does it have to be followed?

I have many lines going up to the 80s, sometimes ~90-95, rarely ~101-103.

Are they okay? I think 80 seems to be quite common, and a few IDEs even relax
it to 100. In a few situations, I am unable to avoid the extra length unless
another variable is created solely for that purpose.

Thanks
Surya

From njs at pobox.com  Tue May 6 09:02:36 2014
From: njs at pobox.com (Nathaniel Smith)
Date: Tue, 6 May 2014 14:02:36 +0100
Subject: [SciPy-Dev] PEP8: Acceptable line length
In-Reply-To:
References:
Message-ID:

On Tue, May 6, 2014 at 1:48 PM, Surya wrote:
> Hey all,
>
> PEP8 basically recommends 79 chars. However, the docs suggest it's not
> strictly enforced.
>
> So, what's the acceptable line length in the scipy community? How strictly
> does it have to be followed?

I use 78, and violate it occasionally, but very rarely (and never by more
than a few characters). When tempted to violate it I almost always conclude
that the code has gotten out of hand (usually too much nesting), and split
things out into helper functions, add temporary variables, etc., all of
which also improves readability.

> I have many lines going up to the 80s, sometimes ~90-95, rarely ~101-103.
>
> Are they okay? I think 80 seems to be quite common, and a few IDEs even
> relax it to 100. In a few situations, I am unable to avoid the extra length
> unless another variable is created solely for that purpose.

I don't know your code, but yes, getting you to add more variables is in
general one of the goals of this restriction ;-).

--
Nathaniel J.
Smith
Postdoctoral researcher - Informatics - University of Edinburgh
http://vorpus.org

From lukashev.s at ya.ru  Wed May 7 02:29:10 2014
From: lukashev.s at ya.ru (Лукашев Сергей)
Date: Wed, 07 May 2014 10:29:10 +0400
Subject: [SciPy-Dev] sparse matrix problem
In-Reply-To: <2016491399444014@web3j.yandex.ru>
Message-ID: <2101321399444150@web9g.yandex.ru>

An HTML attachment was scrubbed...
URL:

From argriffi at ncsu.edu  Wed May 7 02:44:10 2014
From: argriffi at ncsu.edu (alex)
Date: Wed, 7 May 2014 02:44:10 -0400
Subject: [SciPy-Dev] sparse matrix problem
In-Reply-To: <2101321399444150@web9g.yandex.ru>
References: <2016491399444014@web3j.yandex.ru> <2101321399444150@web9g.yandex.ru>
Message-ID:

On Wed, May 7, 2014 at 2:29 AM, Лукашев Сергей wrote:
> Hello,
> I've been using scipy for a while and found the following, which seems
> to me like a true bug.
> Converting coo_matrix to dok_matrix doesn't take into account coinciding
> i and j cases:
>
>>>> cols = (0, 1, 2, 1)
>>>> rows = (0, 1, 2, 1)
>>>> vals = (1, 1, 1, 1)
>>>> B = sparse.coo_matrix((vals, (rows, cols)))
>>>> print B.todense()
> [[1 0 0]
>  [0 2 0]
>  [0 0 1]]
>>>> print B.todok().todense()
> [[1 0 0]
>  [0 1 0]
>  [0 0 1]]
>
>>>> sp.__version__
> Out[17]: '0.14.0'
>
> I've spent a lot of time figuring this out, therefore I decided to report it.

Thanks for reporting it. It looks like duplicate entries in scipy.sparse
matrices are not handled completely consistently, for example
https://github.com/scipy/scipy/issues/1281#issuecomment-17022480.

From rmcgibbo at gmail.com  Thu May 8 05:26:30 2014
From: rmcgibbo at gmail.com (Robert McGibbon)
Date: Thu, 8 May 2014 02:26:30 -0700
Subject: [SciPy-Dev] segfault from scipy.io.netcdf with scipy-0.14 numpy-1.8
Message-ID:

Hey all,

The Travis tests for a library I work on just stopped working, and I tracked
down the bug to the following test case.
The file "MDTraj/testing/reference/mdcrd.nc" is a netcdf3 file in our
repository
(https://github.com/rmcgibbo/mdtraj/tree/master/MDTraj/testing/reference).

This script:

conda install --yes scipy==0.13 numpy==1.7 --quiet
python -c 'import scipy.io; print scipy.io.netcdf.netcdf_file("MDTraj/testing/reference/mdcrd.nc").variables["coordinates"][:].sum()'

conda install --yes scipy==0.14 numpy==1.8 --quiet
python -c 'import scipy.io; print scipy.io.netcdf.netcdf_file("MDTraj/testing/reference/mdcrd.nc").variables["coordinates"][:].sum()'

works on scipy==0.13 numpy==1.7, but segfaults on scipy==0.14 numpy==1.8. I
got the segfault on both Linux and OS X.

I tried compiling a new version of numpy from source with debug symbols using
`python setup.py build_ext -g install`, but couldn't get a useful traceback.

$ gdb --core=core
(gdb) bt
#0  0x00007fd4f7887b18 in ?? ()
#1  0x00007fd4f786ecc6 in ?? ()
#2  0x0000000000000000 in ?? ()

Anyone have any advice for tracking this down?

-Robert

From rmcgibbo at gmail.com  Thu May 8 06:17:12 2014
From: rmcgibbo at gmail.com (Robert McGibbon)
Date: Thu, 8 May 2014 03:17:12 -0700
Subject: [SciPy-Dev] segfault from scipy.io.netcdf with scipy-0.14 numpy-1.8
In-Reply-To:
References:
Message-ID:

FWIW, I did a bisect, and the problem comes in at scipy commit
2dfffc7beeb924d6aa5e6529f25745c5330a7bac.

-Robert

On Thu, May 8, 2014 at 2:26 AM, Robert McGibbon wrote:
> [...]

From charlesr.harris at gmail.com  Thu May 8 08:44:21 2014
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Thu, 8 May 2014 06:44:21 -0600
Subject: [SciPy-Dev] segfault from scipy.io.netcdf with scipy-0.14 numpy-1.8
In-Reply-To:
References:
Message-ID:

On Thu, May 8, 2014 at 4:17 AM, Robert McGibbon wrote:
> FWIW, I did a bisect, and the problem comes in at scipy commit
> 2dfffc7beeb924d6aa5e6529f25745c5330a7bac.

You should open a scipy issue for this, or at least post to the scipy list.

Chuck

From jjhelmus at gmail.com  Thu May 8 10:12:01 2014
From: jjhelmus at gmail.com (Jonathan Helmus)
Date: Thu, 08 May 2014 09:12:01 -0500
Subject: [SciPy-Dev] segfault from scipy.io.netcdf with scipy-0.14 numpy-1.8
In-Reply-To:
References:
Message-ID: <536B90B1.6000701@gmail.com>

On 05/08/2014 07:44 AM, Charles R Harris wrote:
> On Thu, May 8, 2014 at 4:17 AM, Robert McGibbon wrote:
>> FWIW, I did a bisect, and the problem comes in at scipy commit
>> 2dfffc7beeb924d6aa5e6529f25745c5330a7bac.
>
> You should open a scipy issue for this, or at least post to the scipy list.
>
> Chuck

It looks like the netcdf_file.close() method is called and closes the mmap
without checking to see if an ndarray is still backed by that mmap. Accessing
that mmapped ndarray causes the segmentation fault. A more detailed example
which segmentation faults on all netCDF files I've tried is:

import scipy.io
f = scipy.io.netcdf.netcdf_file('mdcrd.nc', mmap=True)
v = f.variables['coordinates']
print v[:].sum()
f.close()
print v[:].sum()  # segmentation faults since the mmap backing v is closed

If you change the first line to mmap=False the script runs fine.

I can generate a backtrace if it would be helpful, but the offending lines
are 234 and 235 in scipy/io/netcdf.py [1].

Cheers,

- Jonathan Helmus

[1] https://github.com/scipy/scipy/blob/2dfffc7beeb924d6aa5e6529f25745c5330a7bac/scipy/io/netcdf.py#L234
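Until this is fixed, a minimal workaround sketch (reusing the same file and
variable as in the example above) is to copy the variable into a plain
in-memory array before calling close(), so that nothing references the mmap
afterwards:

import scipy.io

f = scipy.io.netcdf.netcdf_file('mdcrd.nc', mmap=True)
v = f.variables['coordinates']
data = v[:].copy()  # materialize a private copy while the mmap is still open
f.close()
print data.sum()    # safe: data no longer depends on the closed mmap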
From rmcgibbo at gmail.com  Thu May 8 18:17:21 2014
From: rmcgibbo at gmail.com (Robert McGibbon)
Date: Thu, 8 May 2014 15:17:21 -0700
Subject: [SciPy-Dev] segfault from scipy.io.netcdf with scipy-0.14 numpy-1.8
In-Reply-To: <536B90B1.6000701@gmail.com>
References: <536B90B1.6000701@gmail.com>
Message-ID:

Thanks for the tips, all. I've posted a PR.

-Robert

On Thu, May 8, 2014 at 7:12 AM, Jonathan Helmus wrote:
> [...]

From i.joung at gmail.com  Thu May 8 22:01:21 2014
From: i.joung at gmail.com (InSuk Joung)
Date: Fri, 9 May 2014 11:01:21 +0900
Subject: [SciPy-Dev] step size for optimize nelder-mead
Message-ID:

Hello developers,

I'd like to suggest adding an initial step size for Nelder-Mead
optimization as an option.
A suggested patch is pasted below:

diff --git a/scipy/optimize/optimize.py b/scipy/optimize/optimize.py
index 9b4ad6d..386a416 100644
--- a/scipy/optimize/optimize.py
+++ b/scipy/optimize/optimize.py
@@ -316,6 +316,8 @@ def fmin(func, x0, args=(), xtol=1e-4, ftol=1e-4, maxiter=None, maxfun=None,
         Set to True to print convergence messages.
     retall : bool, optional
         Set to True to return list of solutions at each iteration.
+    step : ndarray, optional
+        Initial step size.
     Returns
     -------
@@ -368,7 +370,8 @@ def fmin(func, x0, args=(), xtol=1e-4, ftol=1e-4, maxiter=None, maxfun=None,
             'maxiter': maxiter,
             'maxfev': maxfun,
             'disp': disp,
-            'return_all': retall}
+            'return_all': retall,
+            'step': step}

     res = _minimize_neldermead(func, x0, args, callback=callback, **opts)
     if full_output:
@@ -385,7 +388,7 @@ def fmin(func, x0, args=(), xtol=1e-4, ftol=1e-4, maxiter=None, maxfun=None,

 def _minimize_neldermead(func, x0, args=(), callback=None,
                          xtol=1e-4, ftol=1e-4, maxiter=None, maxfev=None,
-                         disp=False, return_all=False,
+                         disp=False, return_all=False, step=None,
                          **unknown_options):
     """
     Minimization of scalar function of one or more variables using the
@@ -440,7 +443,9 @@ def _minimize_neldermead(func, x0, args=(), callback=None,
     zdelt = 0.00025
     for k in range(0, N):
         y = numpy.array(x0, copy=True)
-        if y[k] != 0:
+        if step[k]:
+            y[k] += step[k]
+        elif y[k] != 0:
             y[k] = (1 + nonzdelt)*y[k]
         else:
             y[k] = zdelt
@@ -2609,6 +2614,8 @@ def show_options(solver=None, method=None):
         Relative error in ``fun(xopt)`` acceptable for convergence.
     maxfev : int
         Maximum number of function evaluations to make.
+    step : ndarray
+        Initial step size.

     *Newton-CG* options:

--
Best,
InSuk Joung

From i.joung at gmail.com  Thu May 8 22:18:25 2014
From: i.joung at gmail.com (InSuk Joung)
Date: Fri, 9 May 2014 11:18:25 +0900
Subject: [SciPy-Dev] step size for optimize nelder-mead
In-Reply-To:
References:
Message-ID:

The previous patch missed something. I attach a new one.

diff --git a/scipy/optimize/optimize.py b/scipy/optimize/optimize.py
index 9b4ad6d..a10ae6f 100644
--- a/scipy/optimize/optimize.py
+++ b/scipy/optimize/optimize.py
@@ -284,7 +284,7 @@ def wrap_function(function, args):

 def fmin(func, x0, args=(), xtol=1e-4, ftol=1e-4, maxiter=None, maxfun=None,
-         full_output=0, disp=1, retall=0, callback=None):
+         full_output=0, disp=1, retall=0, step=None, callback=None):
     """
     Minimize a function using the downhill simplex algorithm.
@@ -316,6 +316,8 @@ def fmin(func, x0, args=(), xtol=1e-4, ftol=1e-4, maxiter=None, maxfun=None,
         Set to True to print convergence messages.
     retall : bool, optional
         Set to True to return list of solutions at each iteration.
+    step : ndarray, optional
+        Initial step size.

     Returns
     -------
@@ -368,7 +370,8 @@ def fmin(func, x0, args=(), xtol=1e-4, ftol=1e-4, maxiter=None, maxfun=None,
             'maxiter': maxiter,
             'maxfev': maxfun,
             'disp': disp,
-            'return_all': retall}
+            'return_all': retall,
+            'step': step}

     res = _minimize_neldermead(func, x0, args, callback=callback, **opts)
     if full_output:
@@ -385,7 +388,7 @@ def fmin(func, x0, args=(), xtol=1e-4, ftol=1e-4, maxiter=None, maxfun=None,

 def _minimize_neldermead(func, x0, args=(), callback=None,
                          xtol=1e-4, ftol=1e-4, maxiter=None, maxfev=None,
-                         disp=False, return_all=False,
+                         disp=False, return_all=False, step=None,
                          **unknown_options):
     """
     Minimization of scalar function of one or more variables using the
@@ -440,7 +443,9 @@ def _minimize_neldermead(func, x0, args=(), callback=None,
     zdelt = 0.00025
     for k in range(0, N):
         y = numpy.array(x0, copy=True)
-        if y[k] != 0:
+        if step != None and step[k]:
+            y[k] += step[k]
+        elif y[k] != 0:
             y[k] = (1 + nonzdelt)*y[k]
         else:
             y[k] = zdelt
@@ -2609,6 +2614,8 @@ def show_options(solver=None, method=None):
         Relative error in ``fun(xopt)`` acceptable for convergence.
     maxfev : int
         Maximum number of function evaluations to make.
+    step : ndarray
+        Initial step size.
     *Newton-CG* options:

On Fri, May 9, 2014 at 11:01 AM, InSuk Joung wrote:
> [...]

--
Best,
InSuk Joung

From sturla.molden at gmail.com  Thu May 8 22:51:46 2014
From: sturla.molden at gmail.com (Sturla Molden)
Date: Fri, 09 May 2014 04:51:46 +0200
Subject: [SciPy-Dev] step size for optimize nelder-mead
In-Reply-To:
References:
Message-ID:

On 09/05/14 04:01, InSuk Joung wrote:
> Hello developers,
> I'd like to suggest adding an initial step size for Nelder-Mead
> optimization as an option.
> A suggested patch is pasted below:

Post a PR on GitHub instead :)

Sturla

From warren.weckesser at gmail.com  Sat May 10 16:01:16 2014
From: warren.weckesser at gmail.com (Warren Weckesser)
Date: Sat, 10 May 2014 16:01:16 -0400
Subject: [SciPy-Dev] Documentation at scipy.org still shows version 0.13.0.
Message-ID:

The scipy documentation at scipy.org
(http://docs.scipy.org/doc/scipy/reference/) still shows version 0.13.0.
What needs to be done to update that to 0.14.0?

Warren

From ralf.gommers at gmail.com  Sun May 11 02:31:25 2014
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Sun, 11 May 2014 08:31:25 +0200
Subject: [SciPy-Dev] Documentation at scipy.org still shows version 0.13.0.
In-Reply-To:
References:
Message-ID:

On Sat, May 10, 2014 at 10:01 PM, Warren Weckesser
<warren.weckesser at gmail.com> wrote:
> The scipy documentation at scipy.org
> (http://docs.scipy.org/doc/scipy/reference/) still shows version
> 0.13.0. What needs to be done to update that to 0.14.0?

Me asking Pauli nicely to do that - which I forgot for this release. Pauli?
Cheers,
Ralf

From valentin.zulkower at gmail.com  Sun May 11 03:38:58 2014
From: valentin.zulkower at gmail.com (Valentin Zulkower)
Date: Sun, 11 May 2014 09:38:58 +0200
Subject: [SciPy-Dev] Delay Differential Equations in Scipy ?
Message-ID: <536F2912.90607@gmail.com>

Hello Scipy-dev,

I wrote a practical delay differential equations solver on top of Scipy's
odeint and interp1d. Ralf Gommers on GitHub suggested I post it here for a
possible merge into scipy.integrate. Here is a demo, with a link to the
GitHub repo (it is really 30 lines of code):

http://zulko.github.io/blog/2013/10/22/delay-differential-equations-in-python/

Not sure if it meets the standards of Scipy (the current version is
relatively slow and does not check for convergence), but it is a prototype;
it can be improved in many ways.

Cheers,

From ralf.gommers at gmail.com  Sun May 11 16:45:02 2014
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Sun, 11 May 2014 22:45:02 +0200
Subject: [SciPy-Dev] ANN: Weave 0.15.0 release
Message-ID:

Hi,

I'm pleased to announce the release of Weave 0.15.0.

Weave provides tools for including C/C++ code within Python code. Inlining
C/C++ code within Python generally results in speedups of 1.5x to 30x over
algorithms written in pure Python. Weave is the stand-alone version of the
deprecated Scipy submodule scipy.weave. It is Python 2.x only, and is
provided for users that need new versions of Scipy (from which the weave
submodule will be removed in the future) but have existing code that still
depends on scipy.weave. For new code, users are recommended to use Cython.

Weave 0.15.0 is the first release of Weave as a standalone package. It is
numbered 0.15.0 because it was split from Scipy after the 0.14.0 release of
that package. No new functionality is included in this release compared to
Scipy 0.14.0, only changes needed to make Weave a standalone package.

This release requires Python 2.6 or 2.7. The source code can be found on
https://github.com/scipy/weave and the release itself on PyPI:
https://pypi.python.org/pypi/weave

Note that the Scipy developers are not planning to make any further
improvements to Weave. They may however merge pull requests and create
maintenance releases for urgent issues. If someone is interested in
maintaining Weave, that would be very welcome. Questions and discussions
relating to Weave should be directed to the scipy-dev mailing list (see
http://scipy.org/scipylib/mailing-lists.html).

Cheers,
Ralf

From nouiz at nouiz.org  Mon May 12 11:34:44 2014
From: nouiz at nouiz.org (Frédéric Bastien)
Date: Mon, 12 May 2014 11:34:44 -0400
Subject: [SciPy-Dev] conv2 and normxcorr2
In-Reply-To: <52c2bbf2.c57bcc0a.61c6.4ceb@mx.google.com>
References: <52c2d968.0937cc0a.334f.5101@mx.google.com>
Message-ID:

Just a follow-up. There is a new paper on that subject. They found that on
the GPU, for the big number of convolutions needed for convolutional neural
networks, it is worthwhile to do the conversion to the FFT:

http://arxiv.org/abs/1312.5851

Fred

On Tue, Jan 7, 2014 at 9:03 AM, Frédéric Bastien wrote:
> Using the FFT version needs a conversion from the FFT space. This takes
> time. That makes the FFT version useful only for big convolutions. For
> small convolutions, the direct version is faster. But I never timed it,
> so I can't give any idea of what is small and big.
> This also depends on how efficient both versions are. Comparing the
> current slow version in SciPy vs the FFT would make the FFT
> practically always faster.
>
> If someone has the time to compare the FFT version vs the Theano
> implementation, we could get an idea of the sizes for each case.
>
> Fred
>
> On Tue, Jan 7, 2014 at 12:32 AM, Aaron Webster wrote:
> > Do you know the reason why the convolutions here aren't computed using
> > their Fourier transform property? It seems like this is an obvious path
> > to take advantage of existing code and speed.
> >
> > On Mon, Jan 6, 2014 at 10:45 PM, Frédéric Bastien wrote:
> >> Hi,
> >>
> >> I implemented faster CPU convolution in Theano. The code isn't easily
> >> portable... So here are the optimizations that I recall as important.
> >>
> >> 1) In the inner loop, there is an if. This is very bad speed-wise. We
> >> can replace the inner loop containing the if with 3 consecutive inner
> >> loops: one for before the image (when we pad with 0 or something else),
> >> a second for when we are in the image, and a last one for after. In the
> >> valid mode, the first and last loops will be empty.
> >>
> >> 2) Don't copy data! This is very slow and not needed in many cases.
> >>
> >> 3) Don't use a jump table of function calls just to do a
> >> multiplication. This is used to make it work for all dtypes, but it
> >> needs a different code path for each dtype. Doing a pseudo-function
> >> call just to do a multiplication is very slow.
> >>
> >> 4) Do some type of unrolling.
> >>
> >> If someone wants to see the part of the Theano code that could be the
> >> most portable, it is this:
> >>
> >> https://github.com/Theano/Theano/blob/master/theano/tensor/nnet/conv.py#L2167
> >>
> >> It does those 4 optimizations, and I point to just the code that does
> >> the computation, so this should be readable by people who know the
> >> numpy C API. We have a paper that compares this implementation to the
> >> scipy one. From memory it was 100x faster... but for neural networks,
> >> as it also generalizes this code to do more than one convolution per
> >> call. That is why there are other loops before line 2167. Also, the
> >> parallel version doesn't always speed things up, so I disabled it by
> >> default in Theano. It needs tests to disable it when the shapes are
> >> too small.
> >>
> >> If someone looks at this and has questions, I can answer them.
> >>
> >> HTH
> >>
> >> Fred
> >>
> >> On Tue, Dec 31, 2013 at 10:50 AM, Ralf Gommers wrote:
> >> > On Tue, Dec 31, 2013 at 4:07 PM, Luke Pfister wrote:
> >> >> I *believe* that Matlab is calling either the Intel MKL or Intel IPP
> >> >> convolution routines, which is why they are so much faster.
> >> >>
> >> >> I ran into a situation where I needed to perform many, many small 2D
> >> >> convolutions, and wound up writing a Cython wrapper to call the IPP
> >> >> convolution. I seem to remember getting speedups of ~200x when
> >> >> convolving an 8x8 kernel with a 512x512 image.
> >> >>
> >> >> I'm not familiar with how the Scipy convolution functions are
> >> >> implemented under the hood. Do they use efficient algorithms for
> >> >> small convolution sizes (i.e., overlap-add, overlap-save)?
> >> >
> >> > It looks like the implementation is very straightforward and could
> >> > benefit from some optimization:
> >> > Convolve2d:
> >> > https://github.com/scipy/scipy/blob/master/scipy/signal/sigtoolsmodule.c#L1006
> >> > https://github.com/scipy/scipy/blob/master/scipy/signal/firfilter.c#L84
> >> > And correlate2d just calls convolve2d:
> >> > https://github.com/scipy/scipy/blob/master/scipy/signal/signaltools.py#L503
> >> >
> >> >> --
> >> >> Luke
> >> >>
> >> >> On Tue, Dec 31, 2013 at 8:49 AM, Aaron Webster wrote:
> >> >> > On Tue, Dec 31, 2013 at 2:42 PM, Ralf Gommers wrote:
> >> >> >> On Tue, Dec 31, 2013 at 1:43 PM, awebster at falsecolour.com wrote:
> >> >> >>> I noticed a couple of popular matlab functions - conv2 and
> >> >> >>> normxcorr2 - were not present in the scipy.signal packages. I
> >> >> >>> would like to submit them for addition. Can anyone point me to
> >> >> >>> instructions on how to write such a thing? Below are examples.
> >> >> >>
> >> >> >> Hi Aaron, isn't conv2 the same as signal.convolve2d? And can what
> >> >> >> normxcorr2 does be done with signal.correlate2d?
> >> >> >
> >> >> > I did a quick test and it seems that you are correct:
> >> >> > signal.convolve2d appears to generate basically the same output as
> >> >> > conv2, and following normxcorr2 can be done with signal.correlate2d.
> >> >> > However, I noticed while doing this that both signal.convolve2d and
> >> >> > signal.correlate2d are *extremely* slow. For example, on my computer
> >> >> > with a random 100x100 matrix signal.correlate2d takes 4.73 seconds
> >> >> > while normxcorr2 takes 0.253 seconds. The results are similar for
> >> >> > signal.convolve2d and conv2.
> >> >> >
> >> >> > As a practical matter, would it make most sense to fix
> >> >> > signal.correlate2d and signal.convolve2d, or implement new functions?
> >> >
> >> > Speeding up the existing functions would be preferable. firfilter.c
> >> > already contains a suggestion on how to do that.
> >> >
> >> > Ralf

From zhangj.sdu at gmail.com  Mon May 12 22:02:03 2014
From: zhangj.sdu at gmail.com (Zhang Jiang)
Date: Tue, 13 May 2014 02:02:03 +0000 (UTC)
Subject: [SciPy-Dev] Consideration of differential evolution minimizer being added to scipy.optimize.
References:
Message-ID:

Hi Andrew,

There is another Python implementation for Differential Evolution called
Sherpa, which has been used in X-ray astronomy.
See http://cxc.cfa.harvard.edu/sherpa/

Cheers,
Clark

From robert.kern at gmail.com  Wed May 14 05:31:01 2014
From: robert.kern at gmail.com (Robert Kern)
Date: Wed, 14 May 2014 10:31:01 +0100
Subject: [SciPy-Dev] [SciPy-User] ANN: Weave 0.15.0 release
In-Reply-To:
References:
Message-ID:

On Wed, May 14, 2014 at 10:15 AM, srean wrote:
> Very happy to see this, I thought weave had been relegated to abandonware.
> Although Weave did not get a lot of love, I have found it very useful.

Actually, it's even more abandoned than before. It's been carved off into
its own project such that its abandonment doesn't hold up the rest of scipy.
See Ralf's last paragraph:

> On Sun, May 11, 2014 at 3:45 PM, Ralf Gommers wrote:
>> Note that the Scipy developers are not planning to make any further
>> improvements to Weave. They may however merge pull requests and create
>> maintenance releases for urgent issues. If someone is interested in
>> maintaining Weave, that would be very welcome. Questions and discussions
>> relating to Weave should be directed to the scipy-dev mailing list (see
>> http://scipy.org/scipylib/mailing-lists.html).

If you wish to take up maintenance of weave to save it from its abandonment,
please let us know, and we will give you admin rights to the repo. Thanks!

--
Robert Kern

From Brian.Newsom at Colorado.EDU  Wed May 14 19:10:37 2014
From: Brian.Newsom at Colorado.EDU (Brian Lee Newsom)
Date: Wed, 14 May 2014 17:10:37 -0600
Subject: [SciPy-Dev] Multivariate Ctypes PR
Message-ID:

Hello all,

In regards to the multivariate ctypes PR
(https://github.com/scipy/scipy/pull/3262), Nathan and I have addressed all
the issues presented earlier regarding this code. Tests have been added and
integrated into the build system and are working and passing.

As for merging the code paths between Python and ctypes, my opinion is that
this is a route where little progress can be made. I have removed the need
for an extra stack storage struct (ZStorage) and am now using the one
previously used for Python (QStorage). Other than that, the initialization,
call, and restore all require different enough behaviors for the Python vs.
the C that I do not believe it is feasible to merge them. If anyone else has
a different opinion, I'd love to hear your thoughts.

Ideally, a route that could be explored soon would be compiling the Python
into C with code generation and handling that automatically; in this way the
complexities could be removed. However, with ctypes this possibility seems
extremely challenging or impossible, and Nathan and I believe that this
should be held off until Cython can be supported.

In its present state, this feature is usable, tested, and documented. Let me
know if anything else can be done to hasten its inclusion.

Thanks,
Brian

From matthew.brett at gmail.com  Fri May 16 16:01:11 2014
From: matthew.brett at gmail.com (Matthew Brett)
Date: Fri, 16 May 2014 13:01:11 -0700
Subject: [SciPy-Dev] Win32 binaries for Python 3.3, 3.4?
Message-ID:

Hi guys,

Am I right in thinking we don't have binary installers for Pythons 3.3 and
3.4 for Scipy 0.14.0?

Is this because of the need for VS 10? [1]

Are there plans for these guys? Can I help?
From matthew.brett at gmail.com  Fri May 16 16:01:11 2014
From: matthew.brett at gmail.com (Matthew Brett)
Date: Fri, 16 May 2014 13:01:11 -0700
Subject: [SciPy-Dev] Win32 binaries for Python 3.3, 3.4?
Message-ID: 

Hi guys,

Am I right in thinking we don't have binary installers for Pythons 3.3
and 3.4 for Scipy 0.14.0?

Is this because of the need for VS 10? [1]

Are there plans for these guys? Can I help?

Cheers,

Matthew

[1] http://matthew-brett.github.io/pydagogue/python_msvc.html#visual-studio-versions-used-to-compile-distributed-python-binaries

From ralf.gommers at gmail.com  Sat May 17 10:43:44 2014
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Sat, 17 May 2014 16:43:44 +0200
Subject: [SciPy-Dev] Win32 binaries for Python 3.3, 3.4?
In-Reply-To: 
References: 
Message-ID: 

On Fri, May 16, 2014 at 10:01 PM, Matthew Brett wrote:

> Hi guys,
>
> Am I right in thinking we don't have binary installers for Pythons 3.3
> and 3.4 for Scipy 0.14.0?

You're right.

> Is this because of the need for VS 10? [1]

This is because my build environment (Wine on OS X) refuses to install
Python 3.3/3.4, and I have not yet switched to an alternative setup. I
think Julian built 3.3/3.4 installers on Linux, so it should still be
possible with Wine.

> Are there plans for these guys? Can I help?

If you'd be willing to fix this situation, that would be fantastic. There's
a plan on my todo list that says "fix this", but my todo list never seems
to get shorter......

Ralf

From richard9404 at gmail.com  Tue May 20 09:55:34 2014
From: richard9404 at gmail.com (Richard Tsai)
Date: Tue, 20 May 2014 21:55:34 +0800
Subject: [SciPy-Dev] The performance of Cython typed memoryview
Message-ID: 

Hi all,

I'm rewriting cluster.vq in Cython now. I've been doing some experiments on
Cython recently and I encountered a problem. I tried to implement vq.vq
with a typed memoryview, but it seemed to be much slower than the original
implementation, which accesses the array through C pointers and updates the
pointers after every iteration. (See
https://github.com/scipy/scipy/blob/master/scipy/cluster/_vq_rewrite.pyx#L51)
I checked the C code Cython generated and noticed that Cython produces
expressions like `*(arr.data + i * arr.strides[0] + j * arr.strides[1])`
for memoryview accesses, which is inefficient when looping through the
array. Is there any workaround? Or should I abandon typed memoryviews in
this case?

BTW, in order to call BLAS routines in Cython conveniently, I generated a
cblas.pxd with cwrap (https://github.com/geggo/cwrap), but it seems
improper to leave it in the cluster directory (it may be useful somewhere
else). Is there a suitable place for it? Ralf said I can put a script
there, but then cwrap will become a build dependency and that seems
unnecessary.

Regards,
Richard
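One workaround that usually recovers most of the raw-pointer speed: declare the memoryview C-contiguous and turn off bounds and wraparound checking, so Cython can index the last axis with plain pointer arithmetic instead of computing i * strides[0] + j * strides[1] on every access. A minimal sketch (the function is illustrative, not the actual vq code):

    # cython: boundscheck=False, wraparound=False
    # A sketch: `double[:, ::1]` asserts C-contiguity, so indexing the
    # last axis needs no strides[1] multiply and the generated C is
    # close to hand-written pointer arithmetic.
    from libc.math cimport sqrt
    import numpy as np

    def row_norms(double[:, ::1] arr):
        cdef Py_ssize_t i, j
        cdef Py_ssize_t n = arr.shape[0]
        cdef Py_ssize_t m = arr.shape[1]
        cdef double s
        out = np.empty(n)
        cdef double[::1] out_view = out
        for i in range(n):
            s = 0.0
            for j in range(m):
                s += arr[i, j] * arr[i, j]
            out_view[i] = sqrt(s)
        return out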
From matthew.brett at gmail.com  Wed May 21 06:03:23 2014
From: matthew.brett at gmail.com (Matthew Brett)
Date: Wed, 21 May 2014 03:03:23 -0700
Subject: [SciPy-Dev] Win32 binaries for Python 3.3, 3.4?
In-Reply-To: 
References: 
Message-ID: 

Hi,

On Sat, May 17, 2014 at 7:43 AM, Ralf Gommers wrote:

> If you'd be willing to fix this situation, that would be fantastic. There's
> a plan on my todo list that says "fix this", but my todo list never seems
> to get shorter......

I built them on Windows, it was a bit ugly, notes here:

https://github.com/numpy/numpy/wiki/building_scipy_superpack

I tested them on a clean virtual machine. The Python 3.4 test run
found the same intermittent failure that Christoph found here:

http://mail.scipy.org/pipermail/scipy-dev/2013-August/019118.html

Error message:

======================================================================
FAIL: test_windows.test_windowfunc_basics
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\Python34\lib\site-packages\nose\case.py", line 198, in runTest
    self.test(*self.arg)
  File "C:\Python34\lib\site-packages\scipy\signal\tests\test_windows.py",
line 100, in test_windowfunc_basics
    assert_array_almost_equal(w1, w2)
  File "C:\Python34\lib\site-packages\numpy\testing\utils.py", line
811, in assert_array_almost_equal
    header=('Arrays are not almost equal to %d decimals' % decimal))
  File "C:\Python34\lib\site-packages\numpy\testing\utils.py", line
644, in assert_array_compare
    raise AssertionError(msg)
AssertionError:
Arrays are not almost equal to 6 decimals

(mismatch 100.0%)
 x: array([ 0.1892781 ,  1.        ,  0.30368426,  0.30368426,  0.06227148,
        0.18297787,  0.30368426])
 y: array([ 1.        ,  0.79697112,  0.51113591,  0.00201155,  0.28611295,
        0.4936433 ,  0.00201155])

It seems to fail in about 2/3 of runs.

Is this benign?
From cournape at gmail.com  Wed May 21 08:32:10 2014
From: cournape at gmail.com (David Cournapeau)
Date: Wed, 21 May 2014 13:32:10 +0100
Subject: [SciPy-Dev] Win32 binaries for Python 3.3, 3.4?
In-Reply-To: 
References: 
Message-ID: 

On Wed, May 21, 2014 at 11:03 AM, Matthew Brett wrote:

> I built them on Windows, it was a bit ugly, notes here:
>
> https://github.com/numpy/numpy/wiki/building_scipy_superpack
>
> I tested them on a clean virtual machine. The Python 3.4 test run
> found the same intermittent failure that Christoph found here:
>
> http://mail.scipy.org/pipermail/scipy-dev/2013-August/019118.html
>
> [quoted test failure snipped]
>
> It seems to fail in about 2/3 of runs.
>
> Is this benign?

Only if you don't use that function :)

I have seen that error myself when building packages @ Enthought; I never
took the time to track it down.
The Python 3.4 test run >> found the same intermittent failure that Christoph found here: >> >> http://mail.scipy.org/pipermail/scipy-dev/2013-August/019118.html >> >> Error message: >> >> ====================================================================== >> FAIL: test_windows.test_windowfunc_basics >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File "C:\Python34\lib\site-packages\nose\case.py", line 198, in runTest >> self.test(*self.arg) >> File "C:\Python34\lib\site-packages\scipy\signal\tests\test_windows.py", >> line 100, in test_windowfunc_basics >> assert_array_almost_equal(w1, w2) >> File "C:\Python34\lib\site-packages\numpy\testing\utils.py", line >> 811, in assert_array_almost_equal >> header=('Arrays are not almost equal to %d decimals' % decimal)) >> File "C:\Python34\lib\site-packages\numpy\testing\utils.py", line >> 644, in assert_array_compare >> raise AssertionError(msg) >> AssertionError: >> Arrays are not almost equal to 6 decimals >> >> (mismatch 100.0%) >> x: array([ 0.1892781 , 1. , 0.30368426, 0.30368426, >> 0.06227148, >> 0.18297787, 0.30368426]) >> y: array([ 1. , 0.79697112, 0.51113591, 0.00201155, >> 0.28611295, >> 0.4936433 , 0.00201155]) >> >> It seems to fail in about 2/3 of runs. >> >> Is this benign? > > > Only if you don't use that function :) > > I have seen that error myself when building packages @ Enthought, I never > took the time to track it down. OK then to upload these binaries to sourceforge? I'll try and debug the error later today. Cheers, Matthew From ralf.gommers at gmail.com Wed May 21 14:39:04 2014 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Wed, 21 May 2014 20:39:04 +0200 Subject: [SciPy-Dev] Win32 binaries for Python 3.3, 3.4? In-Reply-To: References: Message-ID: On Wed, May 21, 2014 at 5:29 PM, Matthew Brett wrote: > Hi, > > >> > >> I built them on Windows, it was a bit ugly, > No surprise there:) > OK then to upload these binaries to sourceforge? I'll try and debug > the error later today. > Go for it. And thanks! Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Wed May 21 15:58:11 2014 From: matthew.brett at gmail.com (Matthew Brett) Date: Wed, 21 May 2014 12:58:11 -0700 Subject: [SciPy-Dev] Win32 binaries for Python 3.3, 3.4? In-Reply-To: References: Message-ID: On Wed, May 21, 2014 at 5:32 AM, David Cournapeau wrote: > > > > > > On Wed, May 21, 2014 at 11:03 AM, Matthew Brett > wrote: >> >> Hi, >> >> On Sat, May 17, 2014 at 7:43 AM, Ralf Gommers >> wrote: >> > >> > >> > >> > On Fri, May 16, 2014 at 10:01 PM, Matthew Brett >> > >> > wrote: >> >> >> >> Hi guys, >> >> >> >> Am I right in thinking we don't have binary installers for Pythons 3.3 >> >> and 3.4 for Scipy 0.14.0? >> > >> > >> > You're right. >> > >> >> >> >> Is this because of the need for VS 10? [1] >> > >> > >> > This is because my build environment (Wine on OS X) refuses to install >> > Python 3.3/3.4, and I have not yet switched to an alternative setup. I >> > think >> > Julian built 3.3/3.4 installers on Linux, so it should still be possible >> > with Wine. >> > >> >> Are there plans for these guys? Can I help? >> > >> > >> > If you'd be willing to fix this situation, that would be fantastic. >> > There's >> > a plan on my todo list that says "fix this", but my todo list never >> > seems to >> > get shorter...... 
From matthew.brett at gmail.com  Wed May 21 15:58:11 2014
From: matthew.brett at gmail.com (Matthew Brett)
Date: Wed, 21 May 2014 12:58:11 -0700
Subject: [SciPy-Dev] Win32 binaries for Python 3.3, 3.4?
In-Reply-To: 
References: 
Message-ID: 

On Wed, May 21, 2014 at 5:32 AM, David Cournapeau wrote:

> [...]
>
> I have seen that error myself when building packages @ Enthought; I never
> took the time to track it down.

I tracked it down to different values returned from calls to linalg.eig
on the same input data: https://github.com/scipy/scipy/issues/3675

Cheers,

Matthew
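For reference, a minimal sketch of the kind of check that exposes what gh-3675 describes: ordinarily scipy.linalg.eig called repeatedly on the very same array returns identical eigenvalues every time, so any mismatch in a loop like this points at the nondeterminism above (the matrix size and seed here are arbitrary, not taken from the failing test):

    import numpy as np
    from scipy import linalg

    np.random.seed(0)
    a = np.random.randn(7, 7)

    # right=False returns eigenvalues only; compare them bit-for-bit
    # against a reference computed from the identical input.
    w_ref = linalg.eig(a, right=False)
    for i in range(100):
        w = linalg.eig(a, right=False)
        if not np.array_equal(w, w_ref):
            print("run %d: eig returned different values for identical input" % i)
            break
    else:
        print("all runs identical")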
From cimrman3 at ntc.zcu.cz  Fri May 23 07:24:56 2014
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Fri, 23 May 2014 13:24:56 +0200
Subject: [SciPy-Dev] ANN: SfePy 2014.2
Message-ID: <537F3008.6090804@ntc.zcu.cz>

I am pleased to announce release 2014.2 of SfePy.

Description
-----------

SfePy (simple finite elements in Python) is software for solving systems of
coupled partial differential equations by the finite element method. The
code is based on the NumPy and SciPy packages. It is distributed under the
new BSD license.

This release brings preliminary support for isogeometric analysis - a
recently developed computational approach that allows the NURBS-based
domain description from CAD design tools to be used for approximation as
well, in a manner similar to the finite element method.

Home page: http://sfepy.org
Mailing list: http://groups.google.com/group/sfepy-devel
Git (source) repository, issue tracker, wiki: http://github.com/sfepy

Highlights of this release
--------------------------

- preliminary support for isogeometric analysis
- improved post-processing and visualization script for time-dependent
  problems with adaptive time steps
- three new terms

For full release notes see http://docs.sfepy.org/doc/release_notes.html#id1
(rather long and technical).

Best regards,
Robert Cimrman and Contributors (*)

(*) Contributors to this release (alphabetical order):

Vladimír Lukeš

From charlesnwoods at gmail.com  Tue May 27 16:15:33 2014
From: charlesnwoods at gmail.com (Gmail)
Date: Tue, 27 May 2014 14:15:33 -0600
Subject: [SciPy-Dev] Multivariate Ctypes PR
In-Reply-To: 
References: 
Message-ID: <212EB293-2269-4444-963E-88B5529F9A77@gmail.com>

Hey guys,

We're unfortunately fast running out of time with this, since Brian's soon
moving on to bigger and better things, at least temporarily. I understand
that this is a less-tidy bit of code than we'd all like, but interfacing
with ancient Fortran code isn't really a tidy application anyway. Is there
any way we can get this reviewed and at least provisionally evaluated in
the next week or so?

Nathan

On May 14, 2014, at 5:10 PM, Brian Lee Newsom wrote:

> Hello all,
>
> In regards to the multivariate ctypes PR
> (https://github.com/scipy/scipy/pull/3262), Nathan and I have addressed
> all the issues presented earlier regarding this code. Tests have been
> added and integrated into the build system and are working and passing.
> [...]
>
> In its present state, this feature is usable, tested, and documented.
> Let me know if anything else can be done to hasten its inclusion.
>
> Thanks,
> Brian

From ralf.gommers at gmail.com  Wed May 28 13:31:34 2014
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Wed, 28 May 2014 19:31:34 +0200
Subject: [SciPy-Dev] Multivariate Ctypes PR
In-Reply-To: <212EB293-2269-4444-963E-88B5529F9A77@gmail.com>
References: <212EB293-2269-4444-963E-88B5529F9A77@gmail.com>
Message-ID: 

Hi Nathan, Brian,

On Tue, May 27, 2014 at 10:15 PM, Gmail wrote:

> Hey guys,
>
> We're unfortunately fast running out of time with this, since Brian's soon
> moving on to bigger and better things, at least temporarily. I understand
> that this is a less-tidy bit of code than we'd all like, but interfacing
> with ancient Fortran code isn't really a tidy application anyway. Is there
> any way we can get this reviewed and at least provisionally evaluated in
> the next week or so?

Thanks for being persistent. I did some testing, sent you a fix and
provided some feedback on the PR.

Cheers,
Ralf