From charlesr.harris at gmail.com Sat Jan 6 20:00:20 2018 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 6 Jan 2018 18:00:20 -0700 Subject: [SciPy-User] NumPy 1.14.0 release Message-ID: Hi All, On behalf of the NumPy team, I am pleased to announce NumPy 1.14.0. NumPy 1.14.0 is the result of seven months of work and contains a large number of bug fixes and new features, along with several changes with potential compatibility issues. The major change that users will notice is the new style in which NumPy arrays and scalars are printed, a change that will affect doctests. See the release notes for details on how to preserve the old-style printing when needed. A major decision affecting future development concerns the schedule for dropping Python 2.7 support in the run-up to 2020. The decision has been made to support 2.7 for all releases made in 2018, with the last release being designated a long-term release with support for bug fixes extending through the end of 2019. Starting from January 2019, support for 2.7 will be dropped in all new releases. More details can be found in the relevant NEP. This release supports Python 2.7 and 3.4 - 3.6. Wheels for the release are available on PyPI. Source tarballs, zipfiles, release notes, and the changelog are available on GitHub.

*Highlights*

- The ``np.einsum`` function now uses BLAS when possible.
- ``genfromtxt``, ``loadtxt``, ``fromregex`` and ``savetxt`` can now handle files with any encoding supported by Python.
- Major improvements to the printing of NumPy arrays and scalars.

*New functions*

- ``parametrize``: decorator added to numpy.testing.
- ``chebinterpolate``: interpolate a function at the Chebyshev points.
- ``format_float_positional`` and ``format_float_scientific``: format floating-point scalars unambiguously, with control of rounding and padding.
- ``PyArray_ResolveWritebackIfCopy`` and ``PyArray_SetWritebackIfCopyBase``: new C-API functions useful in achieving PyPy compatibility.
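[Editorial aside: the two new float-formatting helpers listed above can be exercised as follows. This snippet is not part of the original announcement; it assumes NumPy >= 1.14. With the default unique-representation mode, the printed strings are the shortest decimals that round-trip to the same value.]

```python
import numpy as np

# Shortest decimal string that round-trips to the same float32 value:
print(np.format_float_positional(np.float32(np.pi)))  # 3.1415927

# Scientific notation; "precision" caps the number of printed digits:
print(np.format_float_scientific(np.float64(0.25), precision=3))  # 2.5e-01
```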
*Contributors* A total of 100 people contributed to this release. People with a "+" by their names contributed a patch for the first time. * Alexey Brodkin + * Allan Haldane * Andras Deak + * Andrew Lawson + * Anna Chiara + * Antoine Pitrou * Bernhard M. Wiedemann + * Bob Eldering + * Brandon Carter * CJ Carey * Charles Harris * Chris Lamb * Christoph Boeddeker + * Christoph Gohlke * Daniel Hrisca + * Daniel Smith * Danny Hermes * David Freese * David Hagen * David Linke + * David Schaefer + * Dillon Niederhut + * Egor Panfilov + * Emilien Kofman * Eric Wieser * Erik Bray + * Erik Quaeghebeur + * Garry Polley + * Gunjan + * Han Shen + * Henke Adolfsson + * Hidehiro NAGAOKA + * Hemil Desai + * Hong Xu + * Iryna Shcherbina + * Jaime Fernandez * James Bourbeau + * Jamie Townsend + * Jarrod Millman * Jean Helie + * Jeroen Demeyer + * John Goetz + * John Kirkham * John Zwinck * Jonathan Helmus * Joseph Fox-Rabinovitz * Joseph Paul Cohen + * Joshua Leahy + * Julian Taylor * Jörg Döpfert + * Keno Goertz + * Kevin Sheppard + * Kexuan Sun + * Konrad Kapp + * Kristofor Maynard + * Licht Takeuchi + * Loïc Estève * Lukas Mericle + * Marten van Kerkwijk * Matheus Portela + * Matthew Brett * Matti Picus * Michael Lamparski + * Michael Odintsov + * Michael Schnaitter + * Michael Seifert * Mike Nolta * Nathaniel J. Smith * Nelle Varoquaux + * Nicholas Del Grosso + * Nico Schlömer + * Oleg Zabluda + * Oleksandr Pavlyk * Pauli Virtanen * Pim de Haan + * Ralf Gommers * Robert T. McGibbon + * Roland Kaufmann * Sebastian Berg * Serhiy Storchaka + * Shitian Ni + * Spencer Hill + * Srinivas Reddy Thatiparthy + * Stefan Winkler + * Stephan Hoyer * Steven Maude + * SuperBo + * Thomas Köppe + * Toon Verstraelen * Vedant Misra + * Warren Weckesser * Wirawan Purwanto + * Yang Li + * Ziyan Zhou + * chaoyu3 + * orbit-stabilizer + * solarjoe * wufangjie + * xoviat + * Élie Gouzien + Cheers, Charles Harris -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ndbecker2 at gmail.com Thu Jan 11 09:16:28 2018 From: ndbecker2 at gmail.com (Neal Becker) Date: Thu, 11 Jan 2018 14:16:28 +0000 Subject: [SciPy-User] Can fftconvolve use a faster fft? Message-ID: Can fftconvolve use fftw, or mkl fft? (Sorry if this post is a dup, I tried posting via gmane but I don't think it's working) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Thu Jan 11 14:03:43 2018 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Fri, 12 Jan 2018 08:03:43 +1300 Subject: [SciPy-User] Can fftconvolve use a faster fft? In-Reply-To: References: Message-ID: On Fri, Jan 12, 2018 at 3:16 AM, Neal Becker wrote: > Can fftconvolve use fftw, or mkl fft? > Yes, with pyfftw: https://hgomersall.github.io/pyFFTW/sphinx/tutorial.html?highlight=fftconvolve#monkey-patching-3rd-party-libraries Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at fredt.org Thu Jan 11 15:18:46 2018 From: me at fredt.org (Frederic Turmel) Date: Thu, 11 Jan 2018 12:18:46 -0800 Subject: [SciPy-User] Can fftconvolve use a faster fft? In-Reply-To: Message-ID: <38b08c99-03ea-4560-a04e-df7632e5aa49@email.android.com> An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Thu Jan 11 15:39:38 2018 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Fri, 12 Jan 2018 09:39:38 +1300 Subject: [SciPy-User] Can fftconvolve use a faster fft? In-Reply-To: <38b08c99-03ea-4560-a04e-df7632e5aa49@email.android.com> References: <38b08c99-03ea-4560-a04e-df7632e5aa49@email.android.com> Message-ID: On Fri, Jan 12, 2018 at 9:18 AM, Frederic Turmel wrote: > If you use anaconda by default it will install the MKL version of scipy > and numpy. > True, but that won't make scipy or numpy use the MKL FFT capabilities. We need a switchable backend for this; we have had discussions with one of the Intel MKL engineers about it. 
Ralf > On Jan 11, 2018 11:03 AM, Ralf Gommers wrote: > > > > On Fri, Jan 12, 2018 at 3:16 AM, Neal Becker wrote: > > Can fftconvolve use fftw, or mkl fft? > > > Yes, with pyfftw: https://hgomersall.github.io/pyFFTW/sphinx/tutorial.html?highlight=fftconvolve#monkey-patching-3rd-party-libraries > > Ralf > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at python.org > https://mail.python.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From grlee77 at gmail.com Thu Jan 11 16:04:49 2018 From: grlee77 at gmail.com (Gregory Lee) Date: Thu, 11 Jan 2018 16:04:49 -0500 Subject: [SciPy-User] Can fftconvolve use a faster fft? In-Reply-To: References: Message-ID: On Thu, Jan 11, 2018 at 2:03 PM, Ralf Gommers wrote: > > > On Fri, Jan 12, 2018 at 3:16 AM, Neal Becker wrote: > >> Can fftconvolve use fftw, or mkl fft? >> > > Yes, with pyfftw: https://hgomersall.github.io/pyFFTW/sphinx/tutorial.html?highlight=fftconvolve#monkey-patching-3rd-party-libraries > > Ralf > > Also, note that the pyFFTW scipy interfaces default to "threads=1", so the monkeypatching as listed in the pyFFTW docs may not give a big speed improvement for all transform sizes. It is likely you will get further speedup if you monkey patch specific functions using functools.partial to change the default threads to a more appropriate value for your system. > > _______________________________________________ > SciPy-User mailing list > SciPy-User at python.org > https://mail.python.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at fredt.org Thu Jan 11 22:06:29 2018 From: me at fredt.org (Frederic Turmel) Date: Thu, 11 Jan 2018 19:06:29 -0800 Subject: [SciPy-User] Can fftconvolve use a faster fft? In-Reply-To: Message-ID: An HTML attachment was scrubbed... 
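[Editorial aside: the monkey-patching plus functools.partial approach discussed above can be sketched like this. It is a hedged sketch, not an endorsed recipe: "threads=4" is purely illustrative, the patch targets the 2018-era ``scipy.fftpack`` module, and the snippet silently falls back to stock SciPy when pyFFTW is not installed.]

```python
import functools

import numpy as np
import scipy.signal

try:
    # Replace scipy.fftpack's transforms with pyFFTW's drop-in
    # equivalents, baking in a larger thread count via functools.partial
    # (as Gregory suggests) so every call is multi-threaded by default.
    import pyfftw.interfaces.scipy_fftpack as pyfftw_fftpack
    from pyfftw.interfaces import cache
    import scipy.fftpack

    for name in ("fft", "ifft", "fftn", "ifftn"):
        setattr(scipy.fftpack, name,
                functools.partial(getattr(pyfftw_fftpack, name), threads=4))
    cache.enable()  # reuse FFTW plans across repeated calls
except ImportError:
    pass  # pyFFTW absent: scipy's own FFT is used unchanged

a = np.random.rand(1024)
b = np.random.rand(128)
out = scipy.signal.fftconvolve(a, b, mode="full")
print(out.shape)  # (1151,)
```

Either way, fftconvolve's result has length len(a) + len(b) - 1; only the speed differs.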
URL: From ndbecker2 at gmail.com Fri Jan 12 08:26:00 2018 From: ndbecker2 at gmail.com (Neal Becker) Date: Fri, 12 Jan 2018 13:26:00 +0000 Subject: [SciPy-User] Can fftconvolve use a faster fft? In-Reply-To: References: Message-ID: I found this: https://github.com/IntelPython/mkl_fft Not sure if using this (not even sure how) would improve scipy fftconvolve though. On Thu, Jan 11, 2018 at 10:07 PM Frederic Turmel wrote: > I'm confused. I thought it was the default. > See > https://www.google.com/amp/s/amp.reddit.com/r/Python/comments/44klx4/anaconda_25_release_now_with_mkl_optimizations/ > > Benchmark > > https://github.com/ContinuumIO/mkl-optimizations-benchmarks/blob/master/README.md > > On Jan 11, 2018 12:39 PM, Ralf Gommers wrote: > > > > On Fri, Jan 12, 2018 at 9:18 AM, Frederic Turmel wrote: > > If you use anaconda by default it will install the MKL version of scipy > and numpy. > > > True, but that won't make scipy or numpy use the MKL FFT capabilities. > > We need a switchable backend for this, we have had discussions with one of > the Intel MKL engineers on this. > > Ralf > > > > On Jan 11, 2018 11:03 AM, Ralf Gommers wrote: > > > > On Fri, Jan 12, 2018 at 3:16 AM, Neal Becker wrote: > > Can fftconvolve use fftw, or mkl fft? > > > Yes, with pyfftw: > https://hgomersall.github.io/pyFFTW/sphinx/tutorial.html?highlight=fftconvolve#monkey-patching-3rd-party-libraries > > Ralf > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at python.org > https://mail.python.org/mailman/listinfo/scipy-user > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at python.org > https://mail.python.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... 
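[Editorial aside: mkl_fft exposes a NumPy-style interface that can also be called directly. A minimal hedged sketch follows; the module and function names are taken from the project's README and should be checked against the installed version, and the snippet falls back to numpy.fft when mkl_fft is absent.]

```python
import numpy as np

try:
    import mkl_fft  # Intel's MKL-backed FFT; API mirrors numpy.fft
    fft, ifft = mkl_fft.fft, mkl_fft.ifft
except ImportError:
    fft, ifft = np.fft.fft, np.fft.ifft  # stock NumPy fallback

x = np.random.rand(256)
X = fft(x)
# Round-tripping through the inverse transform recovers the signal.
print(np.allclose(ifft(X), x))  # True
```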
URL: From alex.eberspaecher at gmail.com Fri Jan 12 14:07:08 2018 From: alex.eberspaecher at gmail.com (Alexander Eberspächer) Date: Fri, 12 Jan 2018 20:07:08 +0100 Subject: [SciPy-User] Can fftconvolve use a faster fft? In-Reply-To: References: Message-ID: <873899c5-4bde-2ba1-99ce-d42136db9775@gmail.com> On 11.01.2018 22:04, Gregory Lee wrote: > Also, note that the pyFFTW scipy interfaces default to "threads=1", so > the monkeypatching as listed in the pyFFTW docs may not give a big speed > improvement for all transform sizes. It is likely you will get further > speedup if you monkey patch specific functions using functools.partial > to change the default threads to a more appropriate value for your system. Some time ago I wrote a small tool [1] that creates wrappers around the pyfftw routines (a "wrapper around a wrapper"). The wrappers are created on module import and inject a number of threads read from an environment variable. Maybe you'll find it useful. Please note there's no distutils or setuptools setup yet; instead, a waf-based build described in the readme is used. Regards Alex [1]: https://github.com/aeberspaecher/transparent_pyfftw -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 473 bytes Desc: OpenPGP digital signature URL: From ralf.gommers at gmail.com Sat Jan 13 03:50:01 2018 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 13 Jan 2018 21:50:01 +1300 Subject: [SciPy-User] Can fftconvolve use a faster fft? In-Reply-To: References: Message-ID: On Sat, Jan 13, 2018 at 2:26 AM, Neal Becker wrote: > I found this: > https://github.com/IntelPython/mkl_fft > Ah yes. From memory: because neither NumPy nor SciPy allows switching the implementation, this does some monkeypatching of numpy.fft directly. Probably that's what's shipped in Anaconda then, given the benchmark link below. 
Everyone wants to get rid of such monkeypatching though, hence support for different backends within numpy and/or scipy itself is needed. Ralf > > Not sure if using this (not even sure how) would improve scipy > fft_convolve though. > > On Thu, Jan 11, 2018 at 10:07 PM Frederic Turmel wrote: > >> I'm confused. I though it was default >> See https://www.google.com/amp/s/amp.reddit.com/r/Python/ >> comments/44klx4/anaconda_25_release_now_with_mkl_optimizations/ >> >> Benchmark >> https://github.com/ContinuumIO/mkl-optimizations- >> benchmarks/blob/master/README.md >> >> On Jan 11, 2018 12:39 PM, Ralf Gommers wrote: >> >> >> >> On Fri, Jan 12, 2018 at 9:18 AM, Frederic Turmel wrote: >> >> If you use anaconda by default it will install the MKL version of scipy >> and numpy. >> >> >> True, but that won't make scipy or numpy use the MKL FFT capabilities. >> >> We need a switchable backend for this, we have had discussions with one >> of the Intel MKL engineers on this. >> >> Ralf >> >> >> >> On Jan 11, 2018 11:03 AM, Ralf Gommers wrote: >> >> >> >> On Fri, Jan 12, 2018 at 3:16 AM, Neal Becker wrote: >> >> Can fftconvolve use fftw, or mkl fft? >> >> >> Yes, with pyfftw: https://hgomersall.github.io/ >> pyFFTW/sphinx/tutorial.html?highlight=fftconvolve#monkey- >> patching-3rd-party-libraries >> >> Ralf >> >> >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at python.org >> https://mail.python.org/mailman/listinfo/scipy-user >> >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at python.org >> https://mail.python.org/mailman/listinfo/scipy-user >> > > _______________________________________________ > SciPy-User mailing list > SciPy-User at python.org > https://mail.python.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... 
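[Editorial aside: the wrapper-generation idea behind Alexander's transparent_pyfftw, mentioned earlier in the thread, can be sketched roughly as below. The environment-variable name and the stand-in transform are hypothetical, chosen only to illustrate the pattern of injecting a thread count at import/wrap time.]

```python
import functools
import os

def with_env_threads(fn, var="FFT_THREADS", default=1):
    """Bake a thread count, read from an environment variable, into a
    transform that accepts a ``threads`` keyword argument."""
    threads = int(os.environ.get(var, default))
    return functools.partial(fn, threads=threads)

def fake_fft(x, threads=1):
    """Stand-in for a pyfftw routine; it just reports the thread count."""
    return threads

os.environ["FFT_THREADS"] = "8"
fft = with_env_threads(fake_fft)
print(fft([1.0, 2.0]))  # 8
```

In the real tool the wrapping is done for every pyfftw routine at module import, so user code only changes its import line.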
URL: From cgohlke at uci.edu Sat Jan 13 14:07:08 2018 From: cgohlke at uci.edu (Christoph Gohlke) Date: Sat, 13 Jan 2018 11:07:08 -0800 Subject: [SciPy-User] Can fftconvolve use a faster fft? In-Reply-To: References: Message-ID: <8dc253ce-35db-ce1a-2802-4a99b35345d3@uci.edu> On 1/13/2018 12:50 AM, Ralf Gommers wrote: > > > On Sat, Jan 13, 2018 at 2:26 AM, Neal Becker > wrote: > > I found this: > https://github.com/IntelPython/mkl_fft > > > > Ah yes. From memory: because neither NumPy nor SciPy allow switching the > implementation, this does some monkeypatching of numpy.fft directly. > Probably that's what's shipped in Anaconda then, given the benchmark > link below. > > Everyone wants to get rid of such monkeypatching though, hence the > support for different backends within numpy and(/or) scipy itself is needed. > > Ralf Anaconda's numpy is patched to use mkl_fft when available: Christoph > > > > > Not sure if using this (not even sure how) would improve scipy > fft_convolve though. > > On Thu, Jan 11, 2018 at 10:07 PM Frederic Turmel > wrote: > > I'm confused. I though it was default > See > https://www.google.com/amp/s/amp.reddit.com/r/Python/comments/44klx4/anaconda_25_release_now_with_mkl_optimizations/ > > > Benchmark > https://github.com/ContinuumIO/mkl-optimizations-benchmarks/blob/master/README.md > > > On Jan 11, 2018 12:39 PM, Ralf Gommers > wrote: > > > > On Fri, Jan 12, 2018 at 9:18 AM, Frederic Turmel > > wrote: > > If you use anaconda by default it will install the MKL > version of scipy and numpy. > > > True, but that won't make scipy or numpy use the MKL FFT > capabilities. > > We need a switchable backend for this, we have had > discussions with one of the Intel MKL engineers on this. > > Ralf > > > > On Jan 11, 2018 11:03 AM, Ralf Gommers > > > wrote: > > > > On Fri, Jan 12, 2018 at 3:16 AM, Neal Becker > > > wrote: > > Can fftconvolve use fftw, or mkl fft? 
> > > Yes, with pyfftw: > https://hgomersall.github.io/pyFFTW/sphinx/tutorial.html?highlight=fftconvolve#monkey-patching-3rd-party-libraries > > > Ralf > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at python.org > https://mail.python.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at python.org > https://mail.python.org/mailman/listinfo/scipy-user > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at python.org > https://mail.python.org/mailman/listinfo/scipy-user > > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at python.org > https://mail.python.org/mailman/listinfo/scipy-user > From immudzen at gmail.com Wed Jan 24 05:44:31 2018 From: immudzen at gmail.com (William Heymann) Date: Wed, 24 Jan 2018 11:44:31 +0100 Subject: [SciPy-User] optimize a piece of code pruning arrays based on distance Message-ID: Hello, [Sorry if this is a duplicate. I realized I sent my previous email from the wrong address] I am trying to find a way to use scipy/numpy to optimize a piece of code. From profiling I already know over 90% of the runtime of the function is in 4 lines of code. This code is from selSPEA2 in DEAP in case that matters.

for i in range(N):
    for j in range(1, size - 1):
        if sorted_indices[i,j] == min_pos:
            sorted_indices[i,j:size - 1] = sorted_indices[i,j + 1:size]
            sorted_indices[i,j:size] = min_pos
            break

The inner loop is inherently parallel since each iteration does not depend on any other. With C++ I would use OpenMP or TBB to parallelize the outer loop. For a more complete picture I have also included the surrounding code. The basic problem is that if the number of entries (size) is greater than the number allowed to be kept (k) for the next generation, then the most similar items (based on distance) need to be removed until size == k. 
At each iteration a solution with the lowest distance to another solution is removed, and then the distance from the removed solution to all other solutions is set as infinity. This prevents removing all points in a cluster. Thank you

while size > k:
    # Search for minimal distance
    min_pos = 0
    for i in range(1, N):
        for j in range(1, size):
            dist_i_sorted_j = distances[i,sorted_indices[i,j]]
            dist_min_sorted_j = distances[min_pos,sorted_indices[min_pos,j]]

            if dist_i_sorted_j < dist_min_sorted_j:
                min_pos = i
                break
            elif dist_i_sorted_j > dist_min_sorted_j:
                break

    distances[:,min_pos] = float("inf")
    distances[min_pos,:] = float("inf")

    # This part is still expensive but I don't know a better way to do it yet.
    # Essentially all remaining time in this function is in this section.
    # It may even make sense to do this in C++ later since it is trivially parallel.
    for i in range(N):
        for j in range(1, size - 1):
            if sorted_indices[i,j] == min_pos:
                sorted_indices[i,j:size - 1] = sorted_indices[i,j + 1:size]
                sorted_indices[i,j:size] = min_pos
                break

    # Remove corresponding individual from chosen_indices
    to_remove.append(min_pos)
    size -= 1

-------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at nicolas-cellier.net Wed Jan 24 06:05:21 2018 From: contact at nicolas-cellier.net (Nicolas Cellier) Date: Wed, 24 Jan 2018 12:05:21 +0100 Subject: [SciPy-User] optimize a piece of code pruning arrays based on distance In-Reply-To: References: Message-ID: You may be able to vectorise that piece of code with the np.where function. Otherwise, your code may be optimized with the just-in-time compiler numba. Good luck! 2018-01-24 11:44 GMT+01:00 William Heymann : > Hello, > > [Sorry if this is a duplicate. I realized I sent my previous email from > the wrong address] > > I am trying to find a way to use scipy/numpy to optimize a piece of code. > From profiling I already know over 90% of the runtime of the function is in > 4 lines of code. 
This code is from selSPEA2 in DEAP in case that matters. > > for i in range(N): > for j in range(1, size - 1): > if sorted_indices[i,j] == min_pos: > sorted_indices[i,j:size - 1] = sorted_indices[i,j + 1:size] > sorted_indices[i,j:size] = min_pos > break > > The inner loop is inherently parallel since it is each iteration does not > depend on any other. With C++ I would use OpenMP or TBB to parallelize the > outer loop. > > For a more complete picture I have also included the surrounding code. The > basic problem is if the number of entries (size) is greater than the number > allowed to be kept (k) for the next generation then the most similar items > (based on distance) need to be removed until size == k. At each iteration a > solution with the lowest distance to another solution is removed and then > the distance from the remove solution to all other solutions is set as > infinity. This prevents removing all points in a cluster. > > Thank you > > > while size > k: > # Search for minimal distance > min_pos = 0 > for i in range(1, N): > for j in range(1, size): > dist_i_sorted_j = distances[i,sorted_indices[i,j]] > dist_min_sorted_j = distances[min_pos,sorted_indices[min_pos,j]] > > if dist_i_sorted_j < dist_min_sorted_j: > min_pos = i > break > elif dist_i_sorted_j > dist_min_sorted_j: > break > > distances[:,min_pos] = float("inf") > distances[min_pos,:] = float("inf") > > #This part is still expensive but I don't know a better way to do it > yet. 
> #Essentially all remaining time in this function is in this section > #It may even make sense to do this in C++ later since it is trivially > parallel > for i in range(N): > for j in range(1, size - 1): > if sorted_indices[i,j] == min_pos: > sorted_indices[i,j:size - 1] = sorted_indices[i,j + 1:size] > sorted_indices[i,j:size] = min_pos > break > > # Remove corresponding individual from chosen_indices > to_remove.append(min_pos) > size -= 1 > > _______________________________________________ > SciPy-User mailing list > SciPy-User at python.org > https://mail.python.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL:
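[Editorial aside: following the np.where suggestion above, the inner pruning loop from the question can be vectorized per row roughly as below. This is a hedged sketch: it moves min_pos to the tail of the active slice while preserving the order of the other entries, which appears to be the intent of the original loop, though the original's boundary handling may differ.]

```python
import numpy as np

def prune_row(row, min_pos, size):
    # Locate min_pos in the active part of the row; as in the original
    # loop, only positions 1 .. size-2 are searched.
    hits = np.where(row[1:size - 1] == min_pos)[0]
    if hits.size:
        j = hits[0] + 1
        # Shift the remaining entries left and park min_pos at the end.
        row[j:size - 1] = row[j + 1:size]
        row[size - 1] = min_pos
    return row

row = np.array([3, 0, 5, 2, 7, 9])
print(prune_row(row, min_pos=5, size=6))  # [3 0 2 7 9 5]
```

The same per-row operation can then be applied across all rows of sorted_indices; alternatively, the whole loop nest is a good candidate for numba's @njit, which typically compiles such index-heavy loops efficiently.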