From charlesr.harris at gmail.com Sun Dec 2 23:54:21 2018 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 2 Dec 2018 21:54:21 -0700 Subject: [Numpy-discussion] OpenBLAS Message-ID: Hi All, Just a heads up that OpenBLAS 0.3.4 has been released. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Mon Dec 3 00:41:14 2018 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 2 Dec 2018 21:41:14 -0800 Subject: [Numpy-discussion] OpenBLAS In-Reply-To: References: Message-ID: On Sun, Dec 2, 2018 at 8:54 PM Charles R Harris wrote: > Hi All, > > Just a heads up that OpenBLAS 0.3.4 > has been released. > Thanks for the heads up! The release notes look promising and should solve a reasonable number of the issues we've been seeing. Ralf > Chuck > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Mon Dec 3 20:25:11 2018 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 3 Dec 2018 18:25:11 -0700 Subject: [Numpy-discussion] Dropping Python 3.4 Message-ID: Hi All, Currently, NumPy has dropped Python 3.4 support for the 1.16 release, while the upcoming SciPy 1.2 version still supports it. It looks like current NumPy master still works on Python 3.4, but I wasn't planning to build any wheels for it. Will that be a problem? Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Tue Dec 4 10:11:25 2018 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 4 Dec 2018 08:11:25 -0700 Subject: [Numpy-discussion] NumPy 1.16 branch Message-ID: Hi All, Matthew Brett has built OpenBLAS 0.3.4+ libraries and we are successfully building wheels with them.
There are a few remaining PRs/issues marked for 1.16, so I expect to make the 1.16 branch in the next several days. If there are PRs, especially bug fixes, that you really, really want in 1.16, now is the time to speak up. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From matti.picus at gmail.com Tue Dec 4 14:55:15 2018 From: matti.picus at gmail.com (Matti Picus) Date: Tue, 4 Dec 2018 11:55:15 -0800 Subject: [Numpy-discussion] There will be no developer status meeting this week Message-ID: <40714cf1-a944-2453-bf98-ef82fb0c1330@gmail.com> I am traveling, so we decided to cancel this week's developer online meeting. The next one will be Wed Dec 19. Matti From wjames at us.ibm.com Wed Dec 5 11:21:52 2018 From: wjames at us.ibm.com (William James) Date: Wed, 5 Dec 2018 11:21:52 -0500 Subject: [Numpy-discussion] Need help to install Numpy 1.11.2, getting setuptools error In-Reply-To: References: Message-ID: Hello All, I am getting a setuptools error when trying to install NumPy 1.11.2 on Python 2.6. Is there a way to download setuptools to complete the installation? Here is my build.log: more build.log Running from numpy source directory. Traceback (most recent call last): File "setup.py", line 386, in setup_package() File "setup.py", line 363, in setup_package from setuptools import setup ImportError: No module named setuptools Regards, William James Analytics - Hybrid Data Management IBM Cloud, US Distribution Market Phone: 813-838-7508 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From derek at astro.physik.uni-goettingen.de Wed Dec 5 11:56:58 2018 From: derek at astro.physik.uni-goettingen.de (Derek Homeier) Date: Wed, 5 Dec 2018 17:56:58 +0100 Subject: [Numpy-discussion] Need help to install Numpy 1.11.2, getting setuptools error In-Reply-To: References: Message-ID: <173E8BFD-ED38-46CD-8FE2-E32C84C167CC@astro.physik.uni-goettingen.de> On 5 Dec 2018, at 5:21 pm, William James wrote: > > I am getting a setuptools error when trying to install NumPy 1.11.2 on Python 2.6. > > Is there a way to download setuptools to complete the installation? Assuming you are stuck with Python 2.6, there is an archive on https://pypi.org/project/setuptools/#history where you should find a link to the last supported version. I'd check the Changelog on GitHub to see when Py 2.6 support was dropped - it seems to have been sometime before version 38.0. HTH, Derek From tyler.je.reddy at gmail.com Wed Dec 5 13:35:59 2018 From: tyler.je.reddy at gmail.com (Tyler Reddy) Date: Wed, 5 Dec 2018 10:35:59 -0800 Subject: [Numpy-discussion] ANN: SciPy 1.2.0rc2 -- please test Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Hi all, On behalf of the SciPy development team I'm pleased to announce the release candidate SciPy 1.2.0rc2. Please help us test out this release candidate -- the 1.2.x series will be an LTS release and the last to support Python 2.7.
Sources and binary wheels can be found at: https://pypi.org/project/scipy/ and at: https://github.com/scipy/scipy/releases/tag/v1.2.0rc2 One of a few ways to install the release candidate with pip: pip install scipy==1.2.0rc2 ========================== SciPy 1.2.0 Release Notes ========================== Note: SciPy 1.2.0 is not released yet! SciPy 1.2.0 is the culmination of 6 months of hard work. It contains many new features, numerous bug-fixes, improved test coverage and better documentation. There have been a number of deprecations and API changes in this release, which are documented below. All users are encouraged to upgrade to this release, as there are a large number of bug-fixes and optimizations. Before upgrading, we recommend that users check that their own code does not use deprecated SciPy functionality (to do so, run your code with ``python -Wd`` and check for ``DeprecationWarning`` s). Our development attention will now shift to bug-fix releases on the 1.2.x branch, and to adding new features on the master branch. This release requires Python 2.7 or 3.4+ and NumPy 1.8.2 or greater. Note: This will be the last SciPy release to support Python 2.7. Consequently, the 1.2.x series will be a long term support (LTS) release; we will backport bug fixes until 1 Jan 2020. For running on PyPy, PyPy3 6.0+ and NumPy 1.15.0 are required.
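The ``python -Wd`` advice above can also be applied from inside a script. A minimal sketch using only the standard library (the deprecated call below is a stand-in for deprecated SciPy functionality, not a real SciPy API):

```python
import warnings

def deprecated_call():
    # Stand-in for a call into deprecated SciPy functionality.
    warnings.warn("this function is deprecated", DeprecationWarning)

# Equivalent in spirit to running under ``python -Wd``: make the
# normally-ignored DeprecationWarning visible and collect it.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("default", DeprecationWarning)
    deprecated_call()

for w in caught:
    print(w.category.__name__, w.message)
```

This lets a test suite fail loudly on deprecated usage before upgrading.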
Highlights of this release --------------------------- - 1-D root finding improvements with a new solver, ``toms748``, and a new unified interface, ``root_scalar`` - New ``dual_annealing`` optimization method that combines stochastic and local deterministic searching - A new optimization algorithm, ``shgo`` (simplicial homology global optimization), for derivative-free optimization problems - A new category of quaternion-based transformations is available in `scipy.spatial.transform` New features ============ `scipy.ndimage` improvements --------------------------------- Proper spline coefficient calculations have been added for the ``mirror``, ``wrap``, and ``reflect`` modes of `scipy.ndimage.rotate`. `scipy.fftpack` improvements --------------------------------- DCT-IV, DST-IV, DCT-I, and DST-I orthonormalization are now supported in `scipy.fftpack`. `scipy.interpolate` improvements --------------------------------- `scipy.interpolate.pade` now accepts a new argument for the order of the numerator. `scipy.cluster` improvements ----------------------------- `scipy.cluster.vq.kmeans2` gained a new initialization method, kmeans++. `scipy.special` improvements ----------------------------- The function ``softmax`` was added to `scipy.special`. `scipy.optimize` improvements ------------------------------ The one-dimensional nonlinear solvers have been given a unified interface `scipy.optimize.root_scalar`, similar to the `scipy.optimize.root` interface for multi-dimensional solvers. ``scipy.optimize.root_scalar(f, bracket=[a, b], method="brenth")`` is equivalent to ``scipy.optimize.brenth(f, a, b)``. If no ``method`` is specified, an appropriate one will be selected based upon the bracket and the number of derivatives available. The so-called Algorithm 748 of Alefeld, Potra and Shi for root-finding within an enclosing interval has been added as `scipy.optimize.toms748`.
This provides guaranteed convergence to a root with a convergence rate per function evaluation of approximately 1.65 (for sufficiently well-behaved functions). ``differential_evolution`` now has the ``updating`` and ``workers`` keywords. The first chooses between continuous updating of the best solution vector (the default), or once per generation. Continuous updating can lead to faster convergence. The ``workers`` keyword accepts an ``int`` or map-like callable, and parallelises the solver (having the side effect of updating once per generation). Supplying an ``int`` evaluates the trial solutions in N parallel parts. Supplying a map-like callable allows other parallelisation approaches (such as ``mpi4py``, or ``joblib``) to be used. ``dual_annealing`` (and ``shgo`` below) is a powerful new general-purpose global optimization (GO) algorithm. ``dual_annealing`` uses two annealing processes to accelerate the convergence towards the global minimum of an objective mathematical function. The first annealing process controls the stochastic Markov chain searching and the second annealing process controls the deterministic minimization. So, dual annealing is a hybrid method that takes advantage of stochastic and local deterministic searching in an efficient way. ``shgo`` (simplicial homology global optimization) is a similar algorithm appropriate for solving black-box and derivative-free optimization (DFO) problems. The algorithm generally converges to the global solution in finite time. The convergence holds for non-linear inequality and equality constraints. In addition to returning a global minimum, the algorithm also returns any other global and local minima found after every iteration. This makes it useful for exploring the solutions in a domain. `scipy.optimize.newton` can now accept a scalar or an array. ``MINPACK`` usage is now thread-safe, such that ``MINPACK`` + callbacks may be used on multiple threads.
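The bracketing idea behind enclosing-interval solvers such as ``toms748`` can be illustrated with a pure-Python toy (this is a plain bisection sketch, not the SciPy implementation; real solvers keep the same root-enclosure invariant but converge much faster):

```python
def bisect(f, a, b, xtol=1e-12):
    """Toy bracketing root finder: the root stays enclosed in [a, b].

    Solvers like scipy.optimize.toms748 maintain the same invariant
    while achieving a much higher convergence rate per evaluation.
    """
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > xtol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:      # root is in the left half
            b, fb = m, fm
        else:                 # root is in the right half
            a, fa = m, fm
    return 0.5 * (a + b)

root = bisect(lambda x: x * x - 2.0, 0.0, 2.0)
print(root)  # close to sqrt(2)
```

The guaranteed convergence comes from the invariant that ``f`` changes sign across the current interval at every step.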
`scipy.signal` improvements ---------------------------- Digital filter design functions now include a parameter to specify the sampling rate. Previously, digital filters could only be specified using normalized frequency, but different functions used different scales (e.g. 0 to 1 for ``butter`` vs 0 to π for ``freqz``), leading to errors and confusion. With the ``fs`` parameter, ordinary frequencies can now be entered directly into functions, with the normalization handled internally. ``find_peaks`` and related functions no longer raise an exception if the properties of a peak have unexpected values (e.g. a prominence of 0). A ``PeakPropertyWarning`` is given instead. The new keyword argument ``plateau_size`` was added to ``find_peaks``. ``plateau_size`` may be used to select peaks based on the length of the flat top of a peak. ``welch()`` and ``csd()`` methods in `scipy.signal` now support calculation of a median average PSD, using the ``average='median'`` keyword. `scipy.sparse` improvements ---------------------------- The `scipy.sparse.bsr_matrix.tocsr` method is now implemented directly instead of converting via COO format, and the `scipy.sparse.bsr_matrix.tocsc` method is now also routed via CSR conversion instead of COO. The efficiency of both conversions is now improved. The issue where SuperLU or UMFPACK solvers crashed on matrices with non-canonical format in `scipy.sparse.linalg` was fixed. The solver wrapper canonicalizes the matrix if necessary before calling the SuperLU or UMFPACK solver. The ``largest`` option of `scipy.sparse.linalg.lobpcg()` was fixed to have a correct (and expected) behavior. The order of the eigenvalues was made consistent with the ARPACK solver (``eigs()``), i.e. ascending for the smallest eigenvalues, and descending for the largest eigenvalues. The `scipy.sparse.random` function is now faster and also supports integer and complex values by passing the appropriate value to the ``dtype`` argument.
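The normalization that the new ``fs`` keyword handles internally (described in the `scipy.signal` section above) amounts to dividing an ordinary frequency by the Nyquist frequency. A sketch of that scaling, using the ``butter``-style 0-to-1 convention where 1.0 is Nyquist:

```python
def normalized_frequency(f_hz, fs):
    """Map an ordinary frequency (Hz) to a butter-style normalized one.

    With ``fs`` given, scipy.signal design functions perform this
    scaling internally; 1.0 corresponds to the Nyquist frequency fs/2.
    """
    nyquist = fs / 2.0
    if not 0 <= f_hz <= nyquist:
        raise ValueError("frequency must lie between 0 and fs/2")
    return f_hz / nyquist

# A 1 kHz corner frequency at an 8 kHz sampling rate -> 0.25.
print(normalized_frequency(1000.0, 8000.0))
```

Previously users had to do this conversion by hand, and pick the right convention per function; now passing ``fs`` directly avoids that.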
`scipy.spatial` improvements ----------------------------- The function `scipy.spatial.distance.jaccard` was modified to return 0 instead of ``np.nan`` when two all-zero vectors are compared. Support for the Jensen-Shannon distance, the square-root of the divergence, has been added under `scipy.spatial.distance.jensenshannon`. An optional keyword was added to the function `scipy.spatial.cKDTree.query_ball_point()` to sort or not sort the returned indices. Not sorting the indices can speed up calls. A new category of quaternion-based transformations is available in `scipy.spatial.transform`, including spherical linear interpolation of rotations (``Slerp``), conversions to and from quaternions, Euler angles, and general rotation and inversion capabilities (`spatial.transform.Rotation`), and uniform random sampling of 3D rotations (`spatial.transform.Rotation.random`). `scipy.stats` improvements --------------------------- The Yeo-Johnson power transformation is now supported (``yeojohnson``, ``yeojohnson_llf``, ``yeojohnson_normmax``, ``yeojohnson_normplot``). Unlike the Box-Cox transformation, the Yeo-Johnson transformation can accept negative values. Added a general method to sample random variates based on the density only, in the new function ``rvs_ratio_uniforms``. The Yule-Simon distribution (``yulesimon``) was added -- this is a new discrete probability distribution. ``stats`` and ``mstats`` now have access to a new regression method, ``siegelslopes``, a robust linear regression algorithm. `scipy.stats.gaussian_kde` now has the ability to deal with weighted samples, and should have a modest improvement in performance. Levy Stable Parameter Estimation, PDF, and CDF calculations are now supported for `scipy.stats.levy_stable`.
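The all-zero convention for `scipy.spatial.distance.jaccard` described above can be sketched in pure Python (a toy for boolean sequences, not the SciPy implementation):

```python
def jaccard_distance(u, v):
    """Jaccard distance between two boolean sequences.

    Mirrors the new convention: two all-zero vectors are considered
    identical, so the distance is 0.0 rather than nan.
    """
    union = sum(1 for a, b in zip(u, v) if a or b)
    if union == 0:
        return 0.0  # both vectors all-zero: define the distance as 0
    differing = sum(1 for a, b in zip(u, v) if bool(a) != bool(b))
    return differing / union

print(jaccard_distance([0, 0, 0], [0, 0, 0]))  # 0.0 instead of nan
print(jaccard_distance([1, 0, 1], [1, 1, 0]))  # 2/3
```

Returning 0 for identical (even empty-support) inputs keeps the function a proper distance, with d(x, x) == 0.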
The Brunner-Munzel test is now available as ``brunnermunzel`` in ``stats`` and ``mstats``. `scipy.linalg` improvements --------------------------- `scipy.linalg.lapack` now exposes the LAPACK routines using the Rectangular Full Packed storage (RFP) for upper triangular, lower triangular, symmetric, or Hermitian matrices; the upper trapezoidal fat matrix RZ decomposition routines are now available as well. Deprecated features =================== The functions ``hyp2f0``, ``hyp1f2`` and ``hyp3f0`` in ``scipy.special`` have been deprecated. Backwards incompatible changes ============================== LAPACK version 3.4.0 or later is now required. Building with Apple Accelerate is no longer supported. The function ``scipy.linalg.subspace_angles(A, B)`` now gives correct results for all angles. Before this, the function only returned correct values for those angles which were greater than pi/4. Support for the Bento build system has been removed. Bento has not been maintained for several years, and did not have good Python 3 or wheel support, hence it was time to remove it. The required signature of the `scipy.optimize.linprog` ``method=simplex`` callback function has changed. Before iteration begins, the simplex solver first converts the problem into a standard form that does not, in general, have the same variables or constraints as the problem defined by the user. Previously, the simplex solver would pass a user-specified callback function several separate arguments, such as the current solution vector ``xk``, corresponding to this standard form problem. Unfortunately, the relationship between the standard form problem and the user-defined problem was not documented, limiting the utility of the information passed to the callback function. In addition to numerous bug fix changes, the simplex solver now passes a user-specified callback function a single ``OptimizeResult`` object containing information that corresponds directly to the user-defined problem.
In future releases, this ``OptimizeResult`` object may be expanded to include additional information, such as variables corresponding to the standard-form problem and information concerning the relationship between the standard-form and user-defined problems. The implementation of `scipy.sparse.random` has changed, and this affects the numerical values returned for both ``sparse.random`` and ``sparse.rand`` for some matrix shapes and a given seed. `scipy.optimize.newton` will no longer use Halley's method in cases where it negatively impacts convergence. Other changes ============= Authors ======= * @endolith * @luzpaz * Hameer Abbasi + * akahard2dj + * Anton Akhmerov * Joseph Albert * alexthomas93 + * ashish + * atpage + * Blair Azzopardi + * Yoshiki Vázquez Baeza * Bence Bagi + * Christoph Baumgarten * Lucas Bellomo + * BH4 + * Aditya Bharti * Max Bolingbroke * François Boulogne * Ward Bradt + * Matthew Brett * Evgeni Burovski * Rafał Byczek + * Alfredo Canziani + * CJ Carey * Lucía Cheung + * Poom Chiarawongse + * Jeanne Choo + * Robert Cimrman * Graham Clenaghan + * cynthia-rempel + * Johannes Damp + * Jaime Fernandez del Rio * Dowon + * emmi474 + * Stefan Endres + * Thomas Etherington + * Piotr Figiel * Alex Fikl + * fo40225 + * Joseph Fox-Rabinovitz * Lars G * Abhinav Gautam + * Stiaan Gerber + * C.A.M. Gerlach + * Ralf Gommers * Todd Goodall * Lars Grueter + * Sylvain Gubian + * Matt Haberland * David Hagen * Will Handley + * Charles Harris * Ian Henriksen * Thomas Hisch + * Theodore Hu * Michael Hudson-Doyle + * Nicolas Hug + * jakirkham + * Jakob Jakobson + * James + * Jan Schlüter * jeanpauphilet + * josephmernst + * Kai + * Kai-Striega + * kalash04 + * Toshiki Kataoka + * Konrad0 + * Tom Krauss + * Johannes Kulick * Lars Grüter + * Eric Larson * Denis Laxalde * Will Lee + * Katrin Leinweber + * Yin Li + * P. L.
Lim + * Jesse Livezey + * Duncan Macleod + * MatthewFlamm + * Nikolay Mayorov * Mike McClurg + * Christian Meyer + * Mark Mikofski * Naoto Mizuno + * mohmmadd + * Nathan Musoke * Anju Geetha Nair + * Andrew Nelson * Ayappan P + * Nick Papior * Haesun Park + * Ronny Pfannschmidt + * pijyoi + * Ilhan Polat * Anthony Polloreno + * Ted Pudlik * puenka * Eric Quintero * Pradeep Reddy Raamana + * Vyas Ramasubramani + * Ramon Viñas + * Tyler Reddy * Joscha Reimer * Antonio H Ribeiro * richardjgowers + * Rob + * robbystk + * Lucas Roberts + * rohan + * Joaquin Derrac Rus + * Josua Sassen + * Bruce Sharpe + * Max Shinn + * Scott Sievert * Sourav Singh * Strahinja Lukić + * Kai Striega + * Shinya SUZUKI + * Mike Toews + * Piotr Uchwat * Miguel de Val-Borro + * Nicky van Foreest * Paul van Mulbregt * Gael Varoquaux * Pauli Virtanen * Stefan van der Walt * Warren Weckesser * Joshua Wharton + * Bernhard M. Wiedemann + * Eric Wieser * Josh Wilson * Tony Xiang + * Roman Yurchak + * Roy Zywina + A total of 137 people contributed to this release. People with a "+" by their names contributed a patch for the first time. This list of names is automatically generated, and may not be fully complete. Issues closed for 1.2.0 ------------------------ * `#9520 `__: signal.correlate with method='fft' doesn't benefit from long... * `#9547 `__: signature of dual_annealing doesn't match other optimizers * `#9540 `__: SciPy v1.2.0rc1 cannot be imported on Python 2.7.15 * `#1240 `__: Allowing multithreaded use of minpack through scipy.optimize... * `#1432 `__: scipy.stats.mode extremely slow (Trac #905) * `#3372 `__: Please add Sphinx search field to online scipy html docs * `#3678 `__: _clough_tocher_2d_single direction between centroids * `#4174 `__: lobpcg "largest" option invalid? * `#5493 `__: anderson_ksamp p-values>1 * `#5743 `__: slsqp fails to detect infeasible problem * `#6139 `__: scipy.optimize.linprog failed to find a feasible starting point...
* `#6358 `__: stats: docstring for `vonmises_line` points to `vonmises_line`... * `#6498 `__: runtests.py is missing in pypi distfile * `#7426 `__: scipy.stats.ksone(n).pdf(x) returns nan for positive values of... * `#7455 `__: scipy.stats.ksone.pdf(2,x) return incorrect values for x near... * `#7456 `__: scipy.special.smirnov and scipy.special.smirnovi have accuracy... * `#7492 `__: scipy.special.kolmogorov(x)/kolmogi(p) inefficient, inaccurate... * `#7914 `__: TravisCI not failing when it should for -OO run * `#8064 `__: linalg.solve test crashes on Windows * `#8212 `__: LAPACK Rectangular Full Packed routines * `#8256 `__: differential_evolution bug converges to wrong results in complex... * `#8443 `__: Deprecate `hyp2f0`, `hyp1f2`, and `hyp3f0`? * `#8452 `__: DOC: ARPACK tutorial has two conflicting equations * `#8680 `__: scipy fails compilation when building from source * `#8686 `__: Division by zero in _trustregion.py when x0 is exactly equal... * `#8700 `__: _MINPACK_LOCK not held when calling into minpack from least_squares * `#8786 `__: erroneous moment values for t-distribution * `#8791 `__: Checking COLA condition in istft should be optional (or omitted) * `#8843 `__: imresize cannot be deprecated just yet * `#8844 `__: Inverse Wishart Log PDF Incorrect for Non-diagonal Scale Matrix? * `#8878 `__: vonmises and vonmises_line in stats: vonmises wrong and superfluous? * `#8895 `__: v1.1.0 `ndi.rotate` documentation – reused parameters not filled... * `#8900 `__: Missing complex conjugation in scipy.sparse.linalg.LinearOperator * `#8904 `__: BUG: if zero derivative at root, then Newton fails with RuntimeWarning * `#8911 `__: make_interp_spline bc_type incorrect input interpretation * `#8942 `__: MAINT: Refactor `_linprog.py` and `_linprog_ip.py` to remove... * `#8947 `__: np.int64 in scipy.fftpack.next_fast_len * `#9020 `__: BUG: linalg.subspace_angles gives wrong results * `#9033 `__: scipy.stats.normaltest sometimes gives incorrect returns b/c...
* `#9036 `__: Bizarre times for `scipy.sparse.rand` function with 'low' density... * `#9044 `__: optimize.minimize(method=`trust-constr`) result dict does not... * `#9071 `__: doc/linalg: add cho_solve_banded to see also of cholesky_banded * `#9082 `__: eigenvalue sorting in scipy.sparse.linalg.eigsh * `#9086 `__: signaltools.py:491: FutureWarning: Using a non-tuple sequence... * `#9091 `__: test_spline_filter failure on 32-bit * `#9122 `__: Typo on scipy minimization tutorial * `#9135 `__: doc error at https://docs.scipy.org/doc/scipy/reference/tutorial/stats/discrete_poisson.html * `#9167 `__: DOC: BUG: typo in ndimage LowLevelCallable tutorial example * `#9169 `__: truncnorm does not work if b < a in scipy.stats * `#9250 `__: scipy.special.tests.test_mpmath::TestSystematic::test_pcfw fails... * `#9259 `__: rv.expect() == rv.mean() is false for rv.mean() == nan (and inf) * `#9286 `__: DOC: Rosenbrock expression in optimize.minimize tutorial * `#9316 `__: SLSQP fails in nested optimization * `#9337 `__: scipy.signal.find_peaks key typo in documentation * `#9345 `__: Example from documentation of scipy.sparse.linalg.eigs raises... 
* `#9383 `__: Default value for "mode" in "ndimage.shift" * `#9419 `__: dual_annealing off by one in the number of iterations * `#9442 `__: Error in Defintion of Rosenbrock Function * `#9453 `__: TST: test_eigs_consistency() doesn't have consistent results Pull requests for 1.2.0 ------------------------ * `#9526 `__: TST: relax precision requirements in signal.correlate tests * `#9507 `__: CI: MAINT: Skip a ckdtree test on pypy * `#9512 `__: TST: test_random_sampling 32-bit handling * `#9494 `__: TST: test_kolmogorov xfail 32-bit * `#9486 `__: BUG: fix sparse random int handling * `#9550 `__: BUG: scipy/_lib/_numpy_compat: get_randint * `#9549 `__: MAINT: make dual_annealing signature match other optimizers * `#9541 `__: BUG: fix SyntaxError due to non-ascii character on Python 2.7 * `#7352 `__: ENH: add Brunner Munzel test to scipy.stats. * `#7373 `__: BUG: Jaccard distance for all-zero arrays would return np.nan * `#7374 `__: ENH: Add PDF, CDF and parameter estimation for Stable Distributions * `#8098 `__: ENH: Add shgo for global optimization of NLPs. * `#8203 `__: ENH: adding simulated dual annealing to optimize * `#8259 `__: Option to follow original Storn and Price algorithm and its parallelisation * `#8293 `__: ENH add ratio-of-uniforms method for rv generation to scipy.stats * `#8294 `__: BUG: Fix slowness in stats.mode * `#8295 `__: ENH: add Jensen Shannon distance to `scipy.spatial.distance` * `#8357 `__: ENH: vectorize scalar zero-search-functions * `#8397 `__: Add `fs=` parameter to filter design functions * `#8537 `__: ENH: Implement mode parameter for spline filtering. 
* `#8558 `__: ENH: small speedup for stats.gaussian_kde * `#8560 `__: BUG: fix p-value calc of anderson_ksamp in scipy.stats * `#8614 `__: ENH: correct p-values for stats.kendalltau and stats.mstats.kendalltau * `#8670 `__: ENH: Require Lapack 3.4.0 * `#8683 `__: Correcting kmeans documentation * `#8725 `__: MAINT: Cleanup scipy.optimize.leastsq * `#8726 `__: BUG: Fix _get_output in scipy.ndimage to support string * `#8733 `__: MAINT: stats: A bit of clean up. * `#8737 `__: BUG: Improve numerical precision/convergence failures of smirnov/kolmogorov * `#8738 `__: MAINT: stats: A bit of clean up in test_distributions.py. * `#8740 `__: BF/ENH: make minpack thread safe * `#8742 `__: BUG: Fix division by zero in trust-region optimization methods * `#8746 `__: MAINT: signal: Fix a docstring of a private function, and fix... * `#8750 `__: DOC clarified description of norminvgauss in scipy.stats * `#8753 `__: DOC: signal: Fix a plot title in the chirp docstring. * `#8755 `__: DOC: MAINT: Fix link to the wheel documentation in developer... * `#8760 `__: BUG: stats: boltzmann wasn't setting the upper bound. * `#8763 `__: [DOC] Improved scipy.cluster.hierarchy documentation * `#8765 `__: DOC: added example for scipy.stat.mstats.tmin * `#8788 `__: DOC: fix definition of optional `disp` parameter * `#8802 `__: MAINT: Suppress dd_real unused function compiler warnings. * `#8803 `__: ENH: Add full_output support to optimize.newton() * `#8804 `__: MAINT: stats cleanup * `#8808 `__: DOC: add note about isinstance for frozen rvs * `#8812 `__: Updated numpydoc submodule * `#8813 `__: MAINT: stats: Fix multinomial docstrings, and do some clean up. * `#8816 `__: BUG: fixed _stats of t-distribution in scipy.stats * `#8817 `__: BUG: ndimage: Fix validation of the origin argument in correlate... * `#8822 `__: BUG: integrate: Fix crash with repeated t values in odeint. 
* `#8832 `__: Hyperlink DOIs against preferred resolver * `#8837 `__: BUG: sparse: Ensure correct dtype for sparse comparison operations. * `#8839 `__: DOC: stats: A few tweaks to the linregress docstring. * `#8846 `__: BUG: stats: Fix logpdf method of invwishart. * `#8849 `__: DOC: signal: Fixed mistake in the firwin docstring. * `#8854 `__: DOC: fix type descriptors in ltisys documentation * `#8865 `__: Fix tiny typo in docs for chi2 pdf * `#8870 `__: Fixes related to invertibility of STFT * `#8872 `__: ENH: special: Add the softmax function * `#8874 `__: DOC correct gamma function in docstrings in scipy.stats * `#8876 `__: ENH: Added TOMS Algorithm 748 as 1-d root finder; 17 test function... * `#8882 `__: ENH: Only use Halley's adjustment to Newton if close enough. * `#8883 `__: FIX: optimize: make jac and hess truly optional for 'trust-constr' * `#8885 `__: TST: Do not error on warnings raised about non-tuple indexing. * `#8887 `__: MAINT: filter out np.matrix PendingDeprecationWarning's in numpy... * `#8889 `__: DOC: optimize: separate legacy interfaces from new ones * `#8890 `__: ENH: Add optimize.root_scalar() as a universal dispatcher for... * `#8899 `__: DCT-IV, DST-IV and DCT-I, DST-I orthonormalization support in... * `#8901 `__: MAINT: Reorganize flapack.pyf.src file * `#8907 `__: BUG: ENH: Check if guess for newton is already zero before checking... * `#8908 `__: ENH: Make sorting optional for cKDTree.query_ball_point() * `#8910 `__: DOC: sparse.csgraph simple examples. * `#8914 `__: DOC: interpolate: fix equivalences of string aliases * `#8918 `__: add float_control(precise, on) to _fpumode.c * `#8919 `__: MAINT: interpolate: improve error messages for common `bc_type`... * `#8920 `__: DOC: update Contributing to SciPy to say "prefer no PEP8 only... 
* `#8924 `__: MAINT: special: deprecate `hyp2f0`, `hyp1f2`, and `hyp3f0` * `#8927 `__: MAINT: special: remove `errprint` * `#8932 `__: Fix broadcasting scale arg of entropy * `#8936 `__: Fix (some) non-tuple index warnings * `#8937 `__: ENH: implement sparse matrix BSR to CSR conversion directly. * `#8938 `__: DOC: add @_ni_docstrings.docfiller in ndimage.rotate * `#8940 `__: Update _discrete_distns.py * `#8943 `__: DOC: Finish dangling sentence in `convolve` docstring * `#8944 `__: MAINT: Address tuple indexing and warnings * `#8945 `__: ENH: spatial.transform.Rotation [GSOC2018] * `#8950 `__: csgraph Dijkstra function description rewording * `#8953 `__: DOC, MAINT: HTTP -> HTTPS, and other linkrot fixes * `#8955 `__: BUG: np.int64 in scipy.fftpack.next_fast_len * `#8958 `__: MAINT: Add more descriptive error message for phase one simplex. * `#8962 `__: BUG: sparse.linalg: add missing conjugate to _ScaledLinearOperator.adjoint * `#8963 `__: BUG: sparse.linalg: downgrade LinearOperator TypeError to warning * `#8965 `__: ENH: Wrapped RFP format and RZ decomposition routines * `#8969 `__: MAINT: doc and code fixes for optimize.newton * `#8970 `__: Added 'average' keyword for welch/csd to enable median averaging * `#8971 `__: Better imresize deprecation warning * `#8972 `__: MAINT: Switch np.where(c) for np.nonzero(c) * `#8975 `__: MAINT: Fix warning-based failures * `#8979 `__: DOC: fix description of count_sort keyword of dendrogram * `#8982 `__: MAINT: optimize: Fixed minor mistakes in test_linprog.py (#8978) * `#8984 `__: BUG: sparse.linalg: ensure expm casts integer inputs to float * `#8986 `__: BUG: optimize/slsqp: do not exit with convergence on steps where... 
* `#8989 `__: MAINT: use collections.abc in basinhopping * `#8990 `__: ENH extend p-values of anderson_ksamp in scipy.stats * `#8991 `__: ENH: Weighted kde * `#8993 `__: ENH: spatial.transform.Rotation.random [GSOC 2018] * `#8994 `__: ENH: spatial.transform.Slerp [GSOC 2018] * `#8995 `__: TST: time.time in test * `#9007 `__: Fix typo in fftpack.rst * `#9013 `__: Added correct plotting code for two sided output from spectrogram * `#9014 `__: BUG: differential_evolution with inf objective functions * `#9017 `__: BUG: fixed #8446 corner case for asformat(array|dense) * `#9018 `__: MAINT: _lib/ccallback: remove unused code * `#9021 `__: BUG: Issue with subspace_angles * `#9022 `__: DOC: Added "See Also" section to lombscargle docstring * `#9034 `__: BUG: Fix tolerance printing behavior, remove meaningless tol... * `#9035 `__: TST: improve signal.bsplines test coverage * `#9037 `__: ENH: add a new init method for k-means * `#9039 `__: DOC: Add examples to fftpack.irfft docstrings * `#9048 `__: ENH: scipy.sparse.random * `#9050 `__: BUG: scipy.io.hb_write: fails for matrices not in csc format * `#9051 `__: MAINT: Fix slow sparse.rand for k < mn/3 (#9036). * `#9054 `__: MAINT: spatial: Explicitly initialize LAPACK output parameters. * `#9055 `__: DOC: Add examples to scipy.special docstrings * `#9056 `__: ENH: Use one thread in OpenBLAS * `#9059 `__: DOC: Update README with link to Code of Conduct * `#9060 `__: BLD: remove support for the Bento build system. * `#9062 `__: DOC add sections to overview in scipy.stats * `#9066 `__: BUG: Correct "remez" error message * `#9069 `__: DOC: update linalg section of roadmap for LAPACK versions. * `#9079 `__: MAINT: add spatial.transform to refguide check; complete some... 
* `#9081 `__: MAINT: Add warnings if pivot value is close to tolerance in linprog(method='simplex')
* `#9084 `__: BUG fix incorrect p-values of kurtosistest in scipy.stats
* `#9095 `__: DOC: add sections to mstats overview in scipy.stats
* `#9096 `__: BUG: Add test for Stackoverflow example from issue 8174.
* `#9101 `__: ENH: add Siegel slopes (robust regression) to scipy.stats
* `#9105 `__: allow resample_poly() to output float32 for float32 inputs.
* `#9112 `__: MAINT: optimize: make trust-constr accept constraint dict (#9043)
* `#9118 `__: Add doc entry to cholesky_banded
* `#9120 `__: eigsh documentation parameters
* `#9125 `__: interpolative: correctly reconstruct full rank matrices
* `#9126 `__: MAINT: Use warnings for unexpected peak properties
* `#9129 `__: BUG: Do not catch and silence KeyboardInterrupt
* `#9131 `__: DOC: Correct the typo in scipy.optimize tutorial page
* `#9133 `__: FIX: Avoid use of bare except
* `#9134 `__: DOC: Update of 'return_eigenvectors' description
* `#9137 `__: DOC: typo fixes for discrete Poisson tutorial
* `#9139 `__: FIX: Doctest failure in optimize tutorial
* `#9143 `__: DOC: missing sigma in Pearson r formula
* `#9145 `__: MAINT: Refactor linear programming solvers
* `#9149 `__: FIX: Make scipy.odr.ODR ifixx equal to its data.fix if given
* `#9156 `__: DOC: special: Mention the sigmoid function in the expit docstring.
* `#9160 `__: Fixed a latex delimiter error in levy()
* `#9170 `__: DOC: correction / update of docstrings of distributions in scipy.stats
* `#9171 `__: better description of the hierarchical clustering parameter
* `#9174 `__: domain check for a < b in stats.truncnorm
* `#9175 `__: DOC: Minor grammar fix
* `#9176 `__: BUG: CloughTocher2DInterpolator: fix miscalculation at neighborless...
* `#9177 `__: BUILD: Document the "clean" target in the doc/Makefile.
* `#9178 `__: MAINT: make refguide-check more robust for printed numpy arrays
* `#9186 `__: MAINT: Remove np.ediff1d occurence
* `#9188 `__: DOC: correct typo in extending ndimage with C
* `#9190 `__: ENH: Support specifying axes for fftconvolve
* `#9192 `__: MAINT: optimize: fixed @pv style suggestions from #9112
* `#9200 `__: Fix make_interp_spline(..., k=0 or 1, axis<0)
* `#9201 `__: BUG: sparse.linalg/gmres: use machine eps in breakdown check
* `#9204 `__: MAINT: fix up stats.spearmanr and match mstats.spearmanr with...
* `#9206 `__: MAINT: include benchmarks and dev files in sdist.
* `#9208 `__: TST: signal: bump bsplines test tolerance for complex data
* `#9210 `__: TST: mark tests as slow, fix missing random seed
* `#9211 `__: ENH: add capability to specify orders in pade func
* `#9217 `__: MAINT: Include ``success`` and ``nit`` in OptimizeResult returned...
* `#9222 `__: ENH: interpolate: Use scipy.spatial.distance to speed-up Rbf
* `#9229 `__: MNT: Fix Fourier filter double case
* `#9233 `__: BUG: spatial/distance: fix pdist/cdist performance regression...
* `#9234 `__: FIX: Proper suppression
* `#9235 `__: BENCH: rationalize slow benchmarks + miscellaneous fixes
* `#9238 `__: BENCH: limit number of parameter combinations in spatial.*KDTree...
* `#9239 `__: DOC: stats: Fix LaTeX markup of a couple distribution PDFs.
* `#9241 `__: ENH: Evaluate plateau size during peak finding
* `#9242 `__: ENH: stats: Implement _ppf and _logpdf for crystalball, and do...
* `#9246 `__: DOC: Properly render versionadded directive in HTML documentation
* `#9255 `__: DOC: mention RootResults in optimization reference guide
* `#9260 `__: TST: relax some tolerances so tests pass with x87 math
* `#9264 `__: TST Use assert_raises "match" parameter instead of the "message"...
* `#9267 `__: DOC: clarify expect() return val when moment is inf/nan
* `#9272 `__: DOC: Add description of default bounds to linprog
* `#9277 `__: MAINT: sparse/linalg: make test deterministic
* `#9278 `__: MAINT: interpolate: pep8 cleanup in test_polyint
* `#9279 `__: Fixed docstring for resample
* `#9280 `__: removed first check for float in get_sum_dtype
* `#9281 `__: BUG: only accept 1d input for bartlett / levene in scipy.stats
* `#9282 `__: MAINT: dense_output and t_eval are mutually exclusive inputs
* `#9283 `__: MAINT: add docs and do some cleanups in interpolate.Rbf
* `#9288 `__: Run distance_transform_edt tests on all types
* `#9294 `__: DOC: fix the formula typo
* `#9298 `__: MAINT: optimize/trust-constr: restore .niter attribute for backward-compat
* `#9299 `__: DOC: clarification of default rvs method in scipy.stats
* `#9301 `__: MAINT: removed unused import sys
* `#9302 `__: MAINT: removed unused imports
* `#9303 `__: DOC: signal: Refer to fs instead of nyq in the firwin docstring.
* `#9305 `__: ENH: Added Yeo-Johnson power transformation
* `#9306 `__: ENH - add dual annealing
* `#9309 `__: ENH add the yulesimon distribution to scipy.stats
* `#9317 `__: Nested SLSQP bug fix.
* `#9320 `__: MAINT: stats: avoid underflow in stats.geom.ppf
* `#9326 `__: Add example for Rosenbrock function
* `#9332 `__: Sort file lists
* `#9340 `__: Fix typo in find_peaks documentation
* `#9343 `__: MAINT Use np.full when possible
* `#9344 `__: DOC: added examples to docstring of dirichlet class
* `#9346 `__: DOC: Fix import of scipy.sparse.linalg in example (#9345)
* `#9350 `__: Fix interpolate read only
* `#9351 `__: MAINT: special.erf: use the x->-x symmetry
* `#9356 `__: Fix documentation typo
* `#9358 `__: DOC: improve doc for ksone and kstwobign in scipy.stats
* `#9362 `__: DOC: Change datatypes of A matrices in linprog
* `#9364 `__: MAINT: Adds implicit none to fftpack fortran sources
* `#9369 `__: DOC: minor tweak to CoC (updated NumFOCUS contact address).
* `#9373 `__: Fix exception if python is called with -OO option
* `#9374 `__: FIX: AIX compilation issue with NAN and INFINITY
* `#9376 `__: COBLYA -> COBYLA in docs
* `#9377 `__: DOC: Add examples integrate: fixed_quad and quadrature
* `#9379 `__: MAINT: TST: Make tests NumPy 1.8 compatible
* `#9385 `__: CI: On Travis matrix "OPTIMIZE=-OO" flag ignored
* `#9387 `__: Fix defaut value for 'mode' in 'ndimage.shift' in the doc
* `#9392 `__: BUG: rank has to be integer in rank_filter: fixed issue 9388
* `#9399 `__: DOC: Misc. typos
* `#9400 `__: TST: stats: Fix the expected r-value of a linregress test.
* `#9405 `__: BUG: np.hstack does not accept generator expressions
* `#9408 `__: ENH: linalg: Shorter ill-conditioned warning message
* `#9418 `__: DOC: Fix ndimage docstrings and reduce doc build warnings
* `#9421 `__: DOC: Add missing docstring examples in scipy.spatial
* `#9422 `__: DOC: Add an example to integrate.newton_cotes
* `#9427 `__: BUG: Fixed defect with maxiter #9419 in dual annealing
* `#9431 `__: BENCH: Add dual annealing to scipy benchmark (see #9415)
* `#9435 `__: DOC: Add docstring examples for stats.binom_test
* `#9443 `__: DOC: Fix the order of indices in optimize tutorial
* `#9444 `__: MAINT: interpolate: use operator.index for checking/coercing...
* `#9445 `__: DOC: Added missing example to stats.mstats.kruskal
* `#9446 `__: DOC: Add note about version changed for jaccard distance
* `#9447 `__: BLD: version-script handling in setup.py
* `#9448 `__: TST: skip a problematic linalg test
* `#9449 `__: TST: fix missing seed in lobpcg test.
* `#9456 `__: TST: test_eigs_consistency() now sorts output

Checksums
=========

MD5
~~~

8d056b1b8aabbbcb34116af4f132688c  scipy-1.2.0rc2-cp27-cp27m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
53097b88fa71ca0d8bf1105f431682bf  scipy-1.2.0rc2-cp27-cp27m-manylinux1_i686.whl
180c35a669c61f5aa487dcb0cb18f8ab  scipy-1.2.0rc2-cp27-cp27m-manylinux1_x86_64.whl
0d4008cb8e25953b1e23a32c660684bd  scipy-1.2.0rc2-cp27-cp27m-win32.whl
35d2091830466b6e37a672f59eae2f54  scipy-1.2.0rc2-cp27-cp27m-win_amd64.whl
0b6f7b8c0b8f1324ef2d87b7846fa6c5  scipy-1.2.0rc2-cp27-cp27mu-manylinux1_i686.whl
c91784a0889bb921b524555c68e224eb  scipy-1.2.0rc2-cp27-cp27mu-manylinux1_x86_64.whl
748d952957be7c48def5e77ccfbfa9fe  scipy-1.2.0rc2-cp34-cp34m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
ea7db734b9e87aa8c694a23f50da261d  scipy-1.2.0rc2-cp34-cp34m-manylinux1_i686.whl
897af31662c9d9bfeab4e34a8da8f7b0  scipy-1.2.0rc2-cp34-cp34m-manylinux1_x86_64.whl
c9a0789bb8c2cdf1c1a6b44f7f6958ff  scipy-1.2.0rc2-cp34-cp34m-win32.whl
34515d46edd4fc39ac97841c0ffdc535  scipy-1.2.0rc2-cp34-cp34m-win_amd64.whl
cb445d88d28a8df870442f8b75b1f076  scipy-1.2.0rc2-cp35-cp35m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
3107ddaec0337c99285d7ae7cd517c44  scipy-1.2.0rc2-cp35-cp35m-manylinux1_i686.whl
e801ee5e915ecc47d5800d36a62609ed  scipy-1.2.0rc2-cp35-cp35m-manylinux1_x86_64.whl
3282fa4b3088d18fb5d8a4176911f88c  scipy-1.2.0rc2-cp35-cp35m-win32.whl
7178e87b7407f10232c0ec45c53d299b  scipy-1.2.0rc2-cp35-cp35m-win_amd64.whl
ad5747f5295b4e7feb98233211d28fba  scipy-1.2.0rc2-cp36-cp36m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
57b1c8ae77febceee7f1453be210b888  scipy-1.2.0rc2-cp36-cp36m-manylinux1_i686.whl
2eb66820f6806f6b7a7f8e1e66b7ffe6  scipy-1.2.0rc2-cp36-cp36m-manylinux1_x86_64.whl
3cfbf4618f40d79a52caa03eb66eff4b  scipy-1.2.0rc2-cp36-cp36m-win32.whl
55a837b1d6bc1bac44f9d02fe71220fb  scipy-1.2.0rc2-cp36-cp36m-win_amd64.whl
17bed5da01da6157d33215a8e3d4676f  scipy-1.2.0rc2-cp37-cp37m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
e8fe35c2017709d36f4fcb3ad7499782  scipy-1.2.0rc2-cp37-cp37m-manylinux1_i686.whl
d670611d03945419b272c38ac109c542  scipy-1.2.0rc2-cp37-cp37m-manylinux1_x86_64.whl
cc5a920ee31c68404ccb947a08d8825d  scipy-1.2.0rc2-cp37-cp37m-win32.whl
eb3fd215b6a39af8d6ef5f44c5b0b6ce  scipy-1.2.0rc2-cp37-cp37m-win_amd64.whl
7fd5310c5b19a6faa32f845bba35208b  scipy-1.2.0rc2.tar.gz
1fd898bfbbb2ec2afb155ada32c4ed20  scipy-1.2.0rc2.tar.xz
6022906c840d14272c8000f5d48113be  scipy-1.2.0rc2.zip

SHA256
~~~~~~

3b08d5ed06d3518f4cee22536d18b3b33b94265f902a0daf9238ce286e481fb9  scipy-1.2.0rc2-cp27-cp27m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
3b2a24b91e1570a10f65cdcebf90bd8186738ba25d934be38385f6ad5e56997f  scipy-1.2.0rc2-cp27-cp27m-manylinux1_i686.whl
f3add2b81868de7b3ec147019a6d2a32227dae1366e447ff69ea2744c6d59a3d  scipy-1.2.0rc2-cp27-cp27m-manylinux1_x86_64.whl
b61606b5bb3d4706362d468b3581952cb5f702bd237510d6ef3347a99a6f2ed4  scipy-1.2.0rc2-cp27-cp27m-win32.whl
1498d3b296075b2281524c42706be6d8f41112edda105eb0f5aa3355cee51a21  scipy-1.2.0rc2-cp27-cp27m-win_amd64.whl
01a32acb43729219cf583ba50fd7f77e6ccadd37c6fb835a5a3695c2c51aceb1  scipy-1.2.0rc2-cp27-cp27mu-manylinux1_i686.whl
368fce2d186120a89d6e7ac6ed374fa0418f92c502a54f5b6c462036254496e0  scipy-1.2.0rc2-cp27-cp27mu-manylinux1_x86_64.whl
06a81e1d99f6b02c3f6ef54827dd4bacf53d014988dbc09b954f4124e6c414e1  scipy-1.2.0rc2-cp34-cp34m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
a859de1cd744f51ffb7870c52b3982e0b78b9ca25f3036a83aae106cd57eb8ab  scipy-1.2.0rc2-cp34-cp34m-manylinux1_i686.whl
e60239d88fd2e094b0369a268b4480750c3c8b37b222dd4da22d53e4a7dc25b3  scipy-1.2.0rc2-cp34-cp34m-manylinux1_x86_64.whl
69ce63646a13b24fb35549dfb4c9daf4ff7dacc964e1720cf8bccb4db54e4aba  scipy-1.2.0rc2-cp34-cp34m-win32.whl
58efa4bfe146e949d1b9a5c772ccb524a8f939e82423fd1129745bcdd029961a  scipy-1.2.0rc2-cp34-cp34m-win_amd64.whl
597c1fded8f49019ed03d1e3648d296d56de8a5d5530ca698addd5707f729048  scipy-1.2.0rc2-cp35-cp35m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
25735065cfc8a3fc9b6575945c6a130b1dd71d6e096a35288a3c39be87040934  scipy-1.2.0rc2-cp35-cp35m-manylinux1_i686.whl
749262355e689e1f3828cb530496202d9d6cd88a7734f6908266df05e8debad7  scipy-1.2.0rc2-cp35-cp35m-manylinux1_x86_64.whl
2b3d9c56ec70e9be1a8e0a3e483f0bf7f9998d4cd37e9a91d6352ac168de1d3d  scipy-1.2.0rc2-cp35-cp35m-win32.whl
dcd05569a7dd0fc267eadec54d416ab728db3f30447fdc078d536166637a993f  scipy-1.2.0rc2-cp35-cp35m-win_amd64.whl
6957a26d95be2b41dfb6bb0ede0b0d8826827954fdfbe8cf02e7a94fa29243e2  scipy-1.2.0rc2-cp36-cp36m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
42cd83178ab4ca83b8aa3a0001462895ada61b1ff656c6f6e455a6c9eb8d268a  scipy-1.2.0rc2-cp36-cp36m-manylinux1_i686.whl
e540be156010dac4b89e10b83193e6e700b58113781f20f627a83faafa335d79  scipy-1.2.0rc2-cp36-cp36m-manylinux1_x86_64.whl
a320b6f5cb059c7376ded4a9b464030de972dbf4e58807baf0145e042301a8b0  scipy-1.2.0rc2-cp36-cp36m-win32.whl
37d0352e786d28095df81ced7e68eea2724c6ae66dd2f74c0c59edfbf0427d3e  scipy-1.2.0rc2-cp36-cp36m-win_amd64.whl
6baa1c0242f41dabf3e4ab1be808c793cbd7ed494bff750fafe88546c0015bfb  scipy-1.2.0rc2-cp37-cp37m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
9c2b534675ee8b07b7778d27daafdee06343dc8c96061d45bd646e34663003c1  scipy-1.2.0rc2-cp37-cp37m-manylinux1_i686.whl
8a086063621a531f797188a17b99620b2a3a35153618dac30c8325e8b8eb13ae  scipy-1.2.0rc2-cp37-cp37m-manylinux1_x86_64.whl
d018053c116e9e26334f0a00cf3ef835f725dd168ccb1affeebe2f44c8724003  scipy-1.2.0rc2-cp37-cp37m-win32.whl
1b6b10e7b8b236efb98f51297cd8c73944d1e3f6bec13269b9a22531b298b5eb  scipy-1.2.0rc2-cp37-cp37m-win_amd64.whl
689c5ffb7634560e2a3671f3bac3c2a9926c87bc4d84d8bc7c54ca4695ca3552  scipy-1.2.0rc2.tar.gz
54b3a2f66c36f3cee1803ae843e249c85d814c2d4c4f18452059cdfa28b748af  scipy-1.2.0rc2.tar.xz
6d77e355a517847131c645ccd95e4dcbb6569288eef3ddd8d811cca711cf296f  scipy-1.2.0rc2.zip

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEgfAqrOuERrV4fwr6XCLdIy4FKasFAlwID4kACgkQXCLdIy4F
Kau2rRAAsTZRG9W1yCcIY3Oi7XY2XGdYdWQZ9JLQpMWpcjttXjA8N/sJ3mZIlWO8
Lwm8iWsLELmhgk19TxRIR38pu1Uofsqtm96X4HxcTg24G6nGvEABxh+QyOp8dQMX
wARy1Ji8JwJqydg/Vg3885BNf6Ti2fpOCS1avFqf16FueukjSFDYlBVY/tvDjd+j
q+Tv0ByjA0NXqh5HlJuVkiNXHsa/OUuVUaBFb4RE6VZPWHd4J47d3rkcpyActa2G
JF0OyLqx34xmC5Q8IhCEFNAvwU7EpzTo/4gtZf3SoRuF3326Z/RUvalBu5u1zkio
17p6Wzfu/AXSq82AD9FoujLIvss52Ne26xqqtgR/Fc+1XkoxWuF7fNFvdX4AApK8
quCLp5W+NsoQjkCjnmvYxRmBa5lsvtxoiTGrC25gRbxg2SGpVC6c65egCE9xoFza
6HarIXnVQ2mzo+7PUX2czHOJx6XYgCwIUqrOMmymRH4NCGWLPp6+wfVJ4mENWAZ/
GFcQhVoHw8AV07wHnaM6MXN7BBFcg8ZqYZaGPf0RYdVqN+crh410i2atG9GHZiMi
i7+5JwN75rK133KCn7p5wyZylm/7RyEGTs6ElhkapQ5/NRcYnjpG36qTJnTFQYzL
f9F5NnadP2+pEc97ZTOo9j/g9QIz3DZFHFFILy22enmq/4VcFD8=
=doxZ
-----END PGP SIGNATURE-----
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ralf.gommers at gmail.com Fri Dec 7 00:23:55 2018
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Thu, 6 Dec 2018 21:23:55 -0800
Subject: [Numpy-discussion] the making of the NumPy roadmap
Message-ID:

Hi all,

A little while ago I wrote a blog post about the history of us constructing the roadmap we currently have: https://rgommers.github.io/2018/10/the-making-of-the-numpy-roadmap/. There was a request to post it to this list, which seems appropriate. So below it is in full.

Cheers,
Ralf


NumPy now has a [roadmap](https://www.numpy.org/neps/index.html#roadmap) - long overdue and a major step forward for the project.
We're not done yet (see [my previous post](https://rgommers.github.io/2018/10/2018-numfocus-summit---a-summary/) on this topic) and updating the technical roadmap with new ideas and priorities should happen regularly. Despite everything having been done in the open, via minutes of in-person meetings and shared roadmap drafts on the numpy-discussion mailing list, it turns out that it's not completely clear to the community and even some maintainers how we got to this point. So now seems like a good time for a summary.

## Act 1: NumPy sprint at BIDS

During 24-25 May 2018 a number of people - Stéfan van der Walt, Charles Harris, Stephan Hoyer, Matti Picus, Jarrod Millman, Nathaniel J. Smith, and Matthew Rocklin - came together at [BIDS](https://bids.berkeley.edu/) in Berkeley for a NumPy sprint. They had a productive brainstorm, which resulted in a rough draft of a roadmap posted to the mailing list. The mailing list thread, [A roadmap for NumPy - longer term planning](https://mail.python.org/pipermail/numpy-discussion/2018-June/thread.html#78103), produced more suggestions and after incorporating those a [first pull request](https://github.com/numpy/numpy/pull/11446) was made.

## Act 2: BoF and sprint at SciPy'18

In order to collect input from as many people in the community as possible, a BoF ([birds of a feather](https://en.wikipedia.org/wiki/Birds_of_a_feather_(computing)) session) was held on 12 July 2018 during the SciPy 2018 conference in Austin. Stephan Hoyer presented an overview of recent NEPs, and Matti Picus presented the first roadmap draft and led a discussion with (I estimate) 30-40 people in which a number of new ideas were added. Other maintainers present included Charles Harris, Tyler Reddy, Ralf Gommers and Stéfan van der Walt; many other participants were developers of libraries that depend on NumPy.

At this point we had a fairly complete list of roadmap items, however the document itself was still more a brainstorm dump than a roadmap document. So during the NumPy sprint after the conference on 14 July 2018 Ralf Gommers and Stephan Hoyer sat together to reshape and clean up these ideas. That resulted in [roadmap draft v2](https://github.com/numpy/numpy/wiki/NumPy-roadmap-v2) - not yet fully polished, but getting closer to something ready for acceptance.

## Act 3: NumPy sprint at BIDS

In-person meetings seem to be critical to moving a complex and important document like a project roadmap forward. So the next draft was produced during a second NumPy sprint at Berkeley during 23-27 July 2018. Present were Stéfan van der Walt, Matti Picus, Tyler Reddy and Ralf Gommers. We polished the document, worked out the [Scope of NumPy](https://www.numpy.org/neps/scope.html) section in more detail, and integrated the roadmap in the html docs. During the sprint a [pull request](https://github.com/numpy/numpy/pull/11611) was sent, and [posted](https://mail.python.org/pipermail/numpy-discussion/2018-July/thread.html#78458) again to the mailing list. Everyone seemed happy, and after addressing the final few review comments the roadmap was merged on 2 August 2018.

*Final thought: NumPy development is accelerating at the moment, and in-person meetings and having the people and funding at BIDS to organize sprints has been a huge help. See https://github.com/BIDS-numpy/docs for an overview of what has been organized in the last year.*

From allanhaldane at gmail.com Sat Dec 8 01:49:50 2018
From: allanhaldane at gmail.com (Allan Haldane)
Date: Sat, 8 Dec 2018 01:49:50 -0500
Subject: [Numpy-discussion] the making of the NumPy roadmap
In-Reply-To:
References:
Message-ID:

Thanks a lot for this summary and review, so us devs who weren't there are aware of off-list discussions. Thanks all for the brainstorming work too!
Cheers,
Allan

On 12/7/18 12:23 AM, Ralf Gommers wrote:
> Hi all,
>
> A little while ago I wrote a blog post about the history of us
> constructing the roadmap we currently have:
> https://rgommers.github.io/2018/10/the-making-of-the-numpy-roadmap/.
> There was a request to post it to this list, which seems appropriate. So
> below it is in full.
>
> Cheers,
> Ralf
>
>
> NumPy now has a [roadmap](https://www.numpy.org/neps/index.html#roadmap)
> - long overdue and a major step forward for the project. We're not done
> yet (see [my previous
> post](https://rgommers.github.io/2018/10/2018-numfocus-summit---a-summary/)
> on this topic) and updating the technical roadmap with new ideas and
> priorities should happen regularly. Despite everything having been done
> in the open, via minutes of in-person meetings and shared roadmap drafts
> on the numpy-discussion mailing list, it turns out that it's not
> completely clear to the community and even some maintainers how we got
> to this point. So now seems like a good time for a summary.
>
> ## Act 1: NumPy sprint at BIDS
>
> During 24-25 May 2018 a number of people - Stéfan van der Walt, Charles
> Harris, Stephan Hoyer, Matti Picus, Jarrod Millman, Nathaniel J. Smith,
> and Matthew Rocklin - came together at
> [BIDS](https://bids.berkeley.edu/) in Berkeley for a NumPy sprint. They
> had a productive brainstorm, which resulted in a rough draft of a
> roadmap posted to the mailing list. The mailing list thread, [A roadmap
> for NumPy - longer term
> planning](https://mail.python.org/pipermail/numpy-discussion/2018-June/thread.html#78103),
> produced more suggestions and after incorporating those a [first pull
> request](https://github.com/numpy/numpy/pull/11446) was made.
>
> ## Act 2: BoF and sprint at SciPy'18
>
> In order to collect input from as many people in the community as
> possible, a BoF ([birds of a
> feather](https://en.wikipedia.org/wiki/Birds_of_a_feather_(computing))
> session) was held on 12 July 2018 during the SciPy 2018 conference in
> Austin. Stephan Hoyer presented an overview of recent NEPs, and Matti
> Picus presented the first roadmap draft and led a discussion with (I
> estimate) 30-40 people in which a number of new ideas were added. Other
> maintainers present included Charles Harris, Tyler Reddy, Ralf Gommers
> and Stéfan van der Walt; many other participants were developers of
> libraries that depend on NumPy.
>
> At this point we had a fairly complete list of roadmap items, however
> the document itself was still more a brainstorm dump than a roadmap
> document. So during the NumPy sprint after the conference on 14 July
> 2018 Ralf Gommers and Stephan Hoyer sat together to reshape and clean up
> these ideas. That resulted in [roadmap draft
> v2](https://github.com/numpy/numpy/wiki/NumPy-roadmap-v2) - not yet
> fully polished, but getting closer to something ready for acceptance.
>
> ## Act 3: NumPy sprint at BIDS
>
> In-person meetings seem to be critical to moving a complex and important
> document like a project roadmap forward. So the next draft was produced
> during a second NumPy sprint at Berkeley during 23-27 July 2018.
> Present were Stéfan van der Walt, Matti Picus, Tyler Reddy and Ralf
> Gommers. We polished the document, worked out the [Scope of
> NumPy](https://www.numpy.org/neps/scope.html) section in more detail,
> and integrated the roadmap in the html docs. During the sprint a [pull
> request](https://github.com/numpy/numpy/pull/11611) was sent, and
> [posted](https://mail.python.org/pipermail/numpy-discussion/2018-July/thread.html#78458)
> again to the mailing list. Everyone seemed happy, and after addressing
> the final few review comments the roadmap was merged on 2 August 2018.
>
> *Final thought: NumPy development is accelerating at the moment, and
> in-person meetings and having the people and funding at BIDS to organize
> sprints has been a huge help. See https://github.com/BIDS-numpy/docs for
> an overview of what has been organized in the last year.*
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at python.org
> https://mail.python.org/mailman/listinfo/numpy-discussion
>

From charlesr.harris at gmail.com Sun Dec 9 16:55:08 2018
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 9 Dec 2018 14:55:08 -0700
Subject: [Numpy-discussion] the making of the NumPy roadmap
In-Reply-To:
References:
Message-ID:

On Thu, Dec 6, 2018 at 10:24 PM Ralf Gommers wrote:

> Hi all,
>
> A little while ago I wrote a blog post about the history of us
> constructing the roadmap we currently have:
> https://rgommers.github.io/2018/10/the-making-of-the-numpy-roadmap/.
> There was a request to post it to this list, which seems appropriate. So
> below it is in full.
>
> Cheers,
> Ralf
>
>
> NumPy now has a [roadmap](https://www.numpy.org/neps/index.html#roadmap)
> - long overdue and a major step forward for the project. We're not done
> yet (see [my previous post](
> https://rgommers.github.io/2018/10/2018-numfocus-summit---a-summary/) on
> this topic) and updating the technical roadmap with new ideas and
> priorities should happen regularly. Despite everything having been done in
> the open, via minutes of in-person meetings and shared roadmap drafts on
> the numpy-discussion mailing list, it turns out that it's not completely
> clear to the community and even some maintainers how we got to this point.
> So now seems like a good time for a summary.
>
> ## Act 1: NumPy sprint at BIDS
>
> During 24-25 May 2018 a number of people - Stéfan van der Walt, Charles
> Harris, Stephan Hoyer, Matti Picus, Jarrod Millman, Nathaniel J. Smith, and
> Matthew Rocklin - came together at [BIDS](https://bids.berkeley.edu/) in
> Berkeley for a NumPy sprint. They had a productive brainstorm, which
> resulted in a rough draft of a roadmap posted to the mailing list. The
> mailing list thread, [A roadmap for NumPy - longer term planning](
> https://mail.python.org/pipermail/numpy-discussion/2018-June/thread.html#78103),
> produced more suggestions and after incorporating those a [first pull
> request](https://github.com/numpy/numpy/pull/11446) was made.
>
>
Jaime Fernandez del Rio also attended that sprint.

> ## Act 2: BoF and sprint at SciPy'18
>
> In order to collect input from as many people in the community as
> possible, a BoF ([birds of a feather](
> https://en.wikipedia.org/wiki/Birds_of_a_feather_(computing)) session) was
> held on 12 July 2018 during the SciPy 2018 conference in Austin. Stephan
> Hoyer presented an overview of recent NEPs, and Matti Picus presented the
> first roadmap draft and led a discussion with (I estimate) 30-40 people in
> which a number of new ideas were added. Other maintainers present included
> Charles Harris, Tyler Reddy, Ralf Gommers and Stéfan van der Walt; many
> other participants were developers of libraries that depend on NumPy.
>
> At this point we had a fairly complete list of roadmap items, however the
> document itself was still more a brainstorm dump than a roadmap document.
> So during the NumPy sprint after the conference on 14 July 2018 Ralf
> Gommers and Stephan Hoyer sat together to reshape and clean up these
> ideas. That resulted in [roadmap draft v2](
> https://github.com/numpy/numpy/wiki/NumPy-roadmap-v2) - not yet fully
> polished, but getting closer to something ready for acceptance.
>
> ## Act 3: NumPy sprint at BIDS
>
> In-person meetings seem to be critical to moving a complex and important
> document like a project roadmap forward. So the next draft was produced
> during a second NumPy sprint at Berkeley during 23-27 July 2018. Present
> were Stéfan van der Walt, Matti Picus, Tyler Reddy and Ralf Gommers. We
> polished the document, worked out the [Scope of NumPy](
> https://www.numpy.org/neps/scope.html) section in more detail, and
> integrated the roadmap in the html docs. During the sprint a [pull
> request](https://github.com/numpy/numpy/pull/11611) was sent, and
> [posted](
> https://mail.python.org/pipermail/numpy-discussion/2018-July/thread.html#78458)
> again to the mailing list. Everyone seemed happy, and after addressing the
> final few review comments the roadmap was merged on 2 August 2018.
>
> *Final thought: NumPy development is accelerating at the moment, and
> in-person meetings and having the people and funding at BIDS to organize
> sprints has been a huge help. See https://github.com/BIDS-numpy/docs for
> an overview of what has been organized in the last year.*
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at python.org
> https://mail.python.org/mailman/listinfo/numpy-discussion
>

From alan.isaac at gmail.com Sun Dec 9 16:59:53 2018
From: alan.isaac at gmail.com (Alan Isaac)
Date: Sun, 9 Dec 2018 16:59:53 -0500
Subject: [Numpy-discussion] align `choices` and `sample` with Python `random` module
Message-ID:

I believe this was proposed in the past to little enthusiasm,
with the response, "you're using a library; learn its functions".
Nevertheless, given the addition of `choices` to the Python
random module in 3.6, it would be nice to have the *same name*
for parallel functionality in numpy.random.
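For concreteness, the naming contrast looks like this today (a small sketch, not proposed API; the toy `population` list and the sample size of 3 are just illustrative):

```python
import random
import numpy as np

population = [10, 20, 30, 40, 50]

# Python stdlib (3.6+): two differently named functions,
# one per sampling strategy.
stdlib_with = random.choices(population, k=3)    # with replacement
stdlib_without = random.sample(population, 3)    # without replacement

# numpy.random: a single function, switched by the
# boolean `replace` flag.
np_with = np.random.choice(population, size=3, replace=True)
np_without = np.random.choice(population, size=3, replace=False)
```

Either way the functionality is the same; the question here is only the names and the flag.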
And given the redundancy of numpy.random.sample, it would be
nice to deprecate it with the intent to reintroduce
the name later, better aligned with Python's usage.

Obviously numpy.random.choice exists for both cases,
so this comment is not about functionality.
And I accept that some will think it is not about anything.
Perhaps it might be at least seen as being about this:
using the same function (`choice`) with a boolean argument
(`replace`) to switch between sampling strategies at least
appears to violate the proposal floated at times on this
list that called for two separate functions in apparently
similar cases. (I am not at all trying to claim that the
argument against flag parameters is definitive; I'm just
mentioning that this viewpoint has already been
promulgated on this list.)

Cheers, Alan Isaac
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion at python.org
https://mail.python.org/mailman/listinfo/numpy-discussion

From ralf.gommers at gmail.com Mon Dec 10 11:20:18 2018
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Mon, 10 Dec 2018 08:20:18 -0800
Subject: [Numpy-discussion] align `choices` and `sample` with Python `random` module
In-Reply-To:
References:
Message-ID:

On Sun, Dec 9, 2018 at 2:00 PM Alan Isaac wrote:

> I believe this was proposed in the past to little enthusiasm,
> with the response, "you're using a library; learn its functions".
>

Not only that, NumPy and the core libraries around it are the standard for
numerical/statistical computing. If core Python devs want to replicate a
small subset of that functionality in a new Python version like 3.6, it
would be sensible for them to choose compatible names. I don't think
there's any justification for us to bother our users based on new things
that get added to the stdlib.


> Nevertheless, given the addition of `choices` to the Python
> random module in 3.6, it would be nice to have the *same name*
> for parallel functionality in numpy.random.
>
> And given the redundancy of numpy.random.sample, it would be
> nice to deprecate it with the intent to reintroduce
> the name later, better aligned with Python's usage.
>

No, there is nothing wrong with the current API, so I'm -10 on deprecating
it.

Ralf


> Obviously numpy.random.choice exists for both cases,
> so this comment is not about functionality.
> And I accept that some will think it is not about anything.
> Perhaps it might be at least seen as being about this:
> using the same function (`choice`) with a boolean argument
> (`replace`) to switch between sampling strategies at least
> appears to violate the proposal floated at times on this
> list that called for two separate functions in apparently
> similar cases. (I am not at all trying to claim that the
> argument against flag parameters is definitive; I'm just
> mentioning that this viewpoint has already been
> promulgated on this list.)
>
> Cheers, Alan Isaac
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at python.org
> https://mail.python.org/mailman/listinfo/numpy-discussion
>

From ralf.gommers at gmail.com Mon Dec 10 11:21:02 2018
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Mon, 10 Dec 2018 08:21:02 -0800
Subject: [Numpy-discussion] the making of the NumPy roadmap
In-Reply-To:
References:
Message-ID:
>> >> Cheers, >> Ralf >> >> >> NumPy now has a [roadmap](https://www.numpy.org/neps/index.html#roadmap) >> - long overdue and a major step forward for the project. We're not done >> yet (see [my previous post]( >> https://rgommers.github.io/2018/10/2018-numfocus-summit---a-summary/) on >> this topic) and updating the technical roadmap with new ideas and >> priorities should happen regularly. Despite everything having been done in >> the open, via minutes of in-person meetings and shared roadmap drafts on >> the numpy-discussion mailing list, it turns out that it's not completely >> clear to the community and even some maintainers how we got to this point. >> So now seems like a good time for a summary. >> >> ## Act 1: NumPy sprint at BIDS >> >> During 24-25 May 2018 a number of people - St?fan van der Walt, Charles >> Harris, Stephan Hoyer, Matti Picus, Jarrod Millman, Nathaniel J. Smith, and >> Matthew Rocklin - came together at [BIDS](https://bids.berkeley.edu/) in >> Berkeley for a NumPy sprint. They had a productive brainstorm, which >> resulted in a rough draft of a roadmap posted to the mailing list. The >> mailing list thread, [A roadmap for NumPy - longer term planning]( >> https://mail.python.org/pipermail/numpy-discussion/2018-June/thread.html#78103), >> produced more suggestions and after incorporating those a [first pull >> request](https://github.com/numpy/numpy/pull/11446) was made. >> >> > Jaime Fernandez del Rio also attended that sprint. > Thanks Chuck, I'll update the post for that. Cheers, Ralf > >> ## Act 2: BoF and sprint at SciPy'18 >> >> In order to collect input from as many people in the community as >> possible, a BoF ([birds of a feather]( >> https://en.wikipedia.org/wiki/Birds_of_a_feather_(computing) session) >> was held on 12 July 2018 during the SciPy 2018 conference in Austin. 
>> Stephan Hoyer presented an overview of recent NEPs, and Matti Picus >> presented the first roadmap draft and led a discussion with (I estimate) >> 30-40 people in which a number of new ideas were added. Other maintainers >> present included Charles Harris, Tyler Reddy, Ralf Gommers and Stéfan van >> der Walt; many other participants were developers of libraries that depend >> on NumPy. >> >> At this point we had a fairly complete list of roadmap items, however the >> document itself was still more a brainstorm dump than a roadmap document. >> So during the NumPy sprint after the conference on 14 July 2018 Ralf >> Gommers and Stephan Hoyer sat together to reshape and clean up these >> ideas. That resulted in [roadmap draft v2]( >> https://github.com/numpy/numpy/wiki/NumPy-roadmap-v2) - not yet fully >> polished, but getting closer to something ready for acceptance. >> >> ## Act 3: NumPy sprint at BIDS >> >> In-person meetings seem to be critical to moving a complex and important >> document like a project roadmap forward. So the next draft was produced >> during a second NumPy sprint at Berkeley during 23-27 July 2018. Present >> were Stéfan van der Walt, Matti Picus, Tyler Reddy and Ralf Gommers. We >> polished the document, worked out the [Scope of NumPy]( >> https://www.numpy.org/neps/scope.html) section in more detail, and >> integrated the roadmap in the html docs. During the sprint a [pull >> request](https://github.com/numpy/numpy/pull/11611) was sent, and >> [posted]( >> https://mail.python.org/pipermail/numpy-discussion/2018-July/thread.html#78458) >> again to the mailing list. Everyone seemed happy, and after addressing the >> final few review comments the roadmap was merged on 2 August 2018. >> >> *Final thought: NumPy development is accelerating at the moment, and >> in-person meetings and having the people and funding at BIDS to organize >> sprints has been a huge help.
See https://github.com/BIDS-numpy/docs for >> an overview of what has been organized in the last year.* >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at python.org >> https://mail.python.org/mailman/listinfo/numpy-discussion >> > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.isaac at gmail.com Mon Dec 10 11:26:46 2018 From: alan.isaac at gmail.com (Alan Isaac) Date: Mon, 10 Dec 2018 11:26:46 -0500 Subject: [Numpy-discussion] align `choices` and `sample` with Python `random` module In-Reply-To: References: Message-ID: <706cf43b-7663-6d62-d0d9-bbf7aadf6d60@gmail.com> On 12/10/2018 11:20 AM, Ralf Gommers wrote: > there is nothing wrong with the current API Just to be clear: you completely reject the past cautions on this list against creating APIs with flag parameters. Is that correct? Or is "nothing wrong" just a narrow approval in this particular case? Alan Isaac From tyler.je.reddy at gmail.com Mon Dec 10 12:18:26 2018 From: tyler.je.reddy at gmail.com (Tyler Reddy) Date: Mon, 10 Dec 2018 09:18:26 -0800 Subject: [Numpy-discussion] align `choices` and `sample` with Python `random` module In-Reply-To: <706cf43b-7663-6d62-d0d9-bbf7aadf6d60@gmail.com> References: <706cf43b-7663-6d62-d0d9-bbf7aadf6d60@gmail.com> Message-ID: I think the current random infrastructure is mostly considered frozen anyway, even for bugfixes, given the pending NEP to produce a new random infrastructure and the commitment therein to guarantee that old random streams behave the same way given their extensive use in testing and so on. Maybe there are opportunities to have fruitful suggestions for the new system moving forward. 
On Mon, 10 Dec 2018 at 08:27, Alan Isaac wrote: > On 12/10/2018 11:20 AM, Ralf Gommers wrote: > > there is nothing wrong with the current API > > Just to be clear: you completely reject the past > cautions on this list against creating APIs > with flag parameters. Is that correct? > > Or is "nothing wrong" just a narrow approval in > this particular case? > > Alan Isaac > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Mon Dec 10 12:20:12 2018 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Mon, 10 Dec 2018 09:20:12 -0800 Subject: [Numpy-discussion] align `choices` and `sample` with Python `random` module In-Reply-To: <706cf43b-7663-6d62-d0d9-bbf7aadf6d60@gmail.com> References: <706cf43b-7663-6d62-d0d9-bbf7aadf6d60@gmail.com> Message-ID: On Mon, Dec 10, 2018 at 8:27 AM Alan Isaac wrote: > On 12/10/2018 11:20 AM, Ralf Gommers wrote: > > there is nothing wrong with the current API > > Just to be clear: you completely reject the past > cautions on this list against creating APIs > with flag parameters. Is that correct? > There's no such caution in general. There are particular cases of keyword arguments that behave in certain ways that are best avoided in the future, for example `full_output=False` to return extra arguments. In this case, the `replace` keyword just switches between two methods, which seems perfectly normal to me. Either way, even things like `full_output` are not a good reason to deprecate something. We deprecate things because they're buggy, have severe usability issues, or some similar reason that translates to user pain. Cheers, Ralf > > Or is "nothing wrong" just a narrow approval in > this particular case? 
> > Alan Isaac > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shoyer at gmail.com Mon Dec 10 12:25:30 2018 From: shoyer at gmail.com (Stephan Hoyer) Date: Mon, 10 Dec 2018 09:25:30 -0800 Subject: [Numpy-discussion] align `choices` and `sample` with Python `random` module In-Reply-To: <706cf43b-7663-6d62-d0d9-bbf7aadf6d60@gmail.com> References: <706cf43b-7663-6d62-d0d9-bbf7aadf6d60@gmail.com> Message-ID: On Mon, Dec 10, 2018 at 8:26 AM Alan Isaac wrote: > On 12/10/2018 11:20 AM, Ralf Gommers wrote: > > there is nothing wrong with the current API > > Just to be clear: you completely reject the past > cautions on this list against creating APIs > with flag parameters. Is that correct? > > Or is "nothing wrong" just a narrow approval in > this particular case? I agree with you that numpy.random.sample is redundant, that APIs based on flags are generally poorly designed and that all things being equal it would be desirable for NumPy and Python's standard library to be aligned. That said, "replacing a function/parameter with something totally different by the same name" is a really painful/slow deprecation process that is best avoided if at all possible in mature projects. Personally, I would be +1 for issuing a deprecation warning for np.random.sample, and removing it after a good amount of notice (maybe several years). This is a similar deprecation cycle to what you see in Python itself (e.g., for ABCs in collections vs collections.abc). If you look at NumPy's docs for "Simple random data" [1] we have four different names for this same function ("random_sample", "random", "ranf" and "sample"), which is frankly absurd. Some cleanup is long overdue. But we should be extremely hesitant to actually reuse these names for something else.
People depend on NumPy for stability, and there is plenty of code written against NumPy from five years ago that still runs just fine today. It's one thing to break code noisily by removing a function, but if there's any chance of introducing silent errors that would be inexcusable. Best, Stephan [1] https://docs.scipy.org/doc/numpy-1.15.0/reference/routines.random.html#simple-random-data -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at gmail.com Mon Dec 10 13:27:18 2018 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Mon, 10 Dec 2018 13:27:18 -0500 Subject: [Numpy-discussion] align `choices` and `sample` with Python `random` module In-Reply-To: References: Message-ID: On 12/10/18, Ralf Gommers wrote: > On Sun, Dec 9, 2018 at 2:00 PM Alan Isaac wrote: > >> I believe this was proposed in the past to little enthusiasm, >> with the response, "you're using a library; learn its functions". >> > > Not only that, NumPy and the core libraries around it are the standard for > numerical/statistical computing. If core Python devs want to replicate a > small subset of that functionality in a new Python version like 3.6, it > would be sensible for them to choose compatible names. I don't think > there's any justification for us to bother our users based on new things > that get added to the stdlib. > > >> Nevertheless, given the addition of `choices` to the Python >> random module in 3.6, it would be nice to have the *same name* >> for parallel functionality in numpy.random. >> >> And given the redundancy of numpy.random.sample, it would be >> nice to deprecate it with the intent to reintroduce >> the name later, better aligned with Python's usage. >> > > No, there is nothing wrong with the current API, so I'm -10 on deprecating > it. Actually, the `numpy.random.choice` API has one major weakness. 
When `replace` is False and `size` is greater than 1, the function is actually drawing *one* sample from a multivariate distribution. For the other multivariate distributions (multinomial, multivariate_normal and dirichlet), `size` sets the number of samples to draw from the distribution. With `replace=False` in `choice`, size becomes a *parameter* of the distribution, and it is only possible to draw one (multivariate) sample. I thought about this some time ago, and came up with an API that eliminates the boolean flag, and separates the `size` argument from the number of items drawn in one sample, which I'll call `nsample`. To avoid creating a "false friend" with the standard library and with numpy's `choice`, I'll call this function `select`. Here's the proposed signature and docstring. (A prototype implementation is in a gist at https://gist.github.com/WarrenWeckesser/2e5905d116e710914af383ee47adc2bf.) The key feature is the `nsample` argument, which sets how many items to select from the given collection. Also note that this function is *always* drawing *without replacement*. It covers the `replace=True` case because drawing one item without replacement is the same as drawing one item with replacement. Whether or not an API like the following is used, it would be nice if there was some way to get multiple samples in the `replace=False` case in one function call. def select(items, nsample=None, p=None, size=None): """ Select random samples from `items`. The function randomly selects `nsample` items from `items` without replacement. Parameters ---------- items : sequence The collection of items from which the selection is made. nsample : int, optional Number of items to select without replacement in each draw. It must be between 0 and len(items), inclusive. p : array-like of floats, same length as `items`, optional Probabilities of the items. If this argument is not given, the elements in `items` are assumed to have equal probability.
size : int, optional Number of variates to draw. Notes ----- `size=None` means "generate a single selection". If `size` is None, the result is equivalent to numpy.random.choice(items, size=nsample, replace=False) `nsample=None` means draw one (scalar) sample. If `nsample` is None, the function acts (almost) like nsample=1 (see below for more information), and the result is equivalent to numpy.random.choice(items, size=size) In effect, it does choice with replacement. The case `nsample=None` can be interpreted as each sample is a scalar, and `nsample=k` means each sample is a sequence with length k. If `nsample` is not None, it must be a nonnegative integer with 0 <= nsample <= len(items). If `size` is not None, it must be an integer or a tuple of integers. When `size` is an integer, it is treated as the tuple ``(size,)``. When both `nsample` and `size` are not None, the result has shape ``size + (nsample,)``. Examples -------- Make 6 choices with replacement from [10, 20, 30, 40]. (This is equivalent to "Make 1 choice without replacement from [10, 20, 30, 40]; do it six times.") >>> select([10, 20, 30, 40], size=6) array([20, 20, 40, 10, 40, 30]) Choose two items from [10, 20, 30, 40] without replacement. Do it six times. >>> select([10, 20, 30, 40], nsample=2, size=6) array([[40, 10], [20, 30], [10, 40], [30, 10], [10, 30], [10, 20]]) When `nsample` is an integer, there is always an axis at the end of the result with length `nsample`, even when `nsample=1`. For example, the shape of the array returned in the following call is (2, 3, 1) >>> select([10, 20, 30, 40], nsample=1, size=(2, 3)) array([[[10], [30], [20]], [[10], [40], [20]]]) When `nsample` is None, it acts like `nsample=1`, but the trivial dimension is not included. The shape of the array returned in the following call is (2, 3).
>>> select([10, 20, 30, 40], size=(2, 3)) array([[20, 40, 30], [30, 20, 40]]) """ Warren > > Ralf > > >> Obviously numpy.random.choice exists for both cases, >> so this comment is not about functionality. >> And I accept that some will think it is not about anything. >> Perhaps it might be at least seen as being about this: >> using the same function (`choice`) with a boolean argument >> (`replace`) to switch between sampling strategies at least >> appears to violate the proposal floated at times on this >> list that called for two separate functions in apparently >> similar cases. (I am not at all trying to claim that the >> argument against flag parameters is definitive; I'm just >> mentioning that this viewpoint has already been >> promulgated on this list.) >> >> Cheers, Alan Isaac >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at python.org >> https://mail.python.org/mailman/listinfo/numpy-discussion >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Tue Dec 11 10:30:27 2018 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Tue, 11 Dec 2018 07:30:27 -0800 Subject: [Numpy-discussion] align `choices` and `sample` with Python `random` module In-Reply-To: References: Message-ID: On Mon, Dec 10, 2018 at 10:27 AM Warren Weckesser < warren.weckesser at gmail.com> wrote: > > > On 12/10/18, Ralf Gommers wrote: > > On Sun, Dec 9, 2018 at 2:00 PM Alan Isaac wrote: > > > >> I believe this was proposed in the past to little enthusiasm, > >> with the response, "you're using a library; learn its functions". > >> > > > > Not only that, NumPy and the core libraries around it are the standard > for > > numerical/statistical computing. If core Python devs want to replicate a > > small subset of that functionality in a new Python version like 3.6, it > > would be sensible for them to choose compatible names. 
I don't think > > there's any justification for us to bother our users based on new things > > that get added to the stdlib. > > > > > >> Nevertheless, given the addition of `choices` to the Python > >> random module in 3.6, it would be nice to have the *same name* > >> for parallel functionality in numpy.random. > >> > >> And given the redundancy of numpy.random.sample, it would be > >> nice to deprecate it with the intent to reintroduce > >> the name later, better aligned with Python's usage. > >> > > > > No, there is nothing wrong with the current API, so I'm -10 on > deprecating > > it. > > Actually, the `numpy.random.choice` API has one major weakness. When > `replace` is False and `size` is greater than 1, the function is actually > drawing a *one* sample from a multivariate distribution. For the other > multivariate distributions (multinomial, multivariate_normal and > dirichlet), `size` sets the number of samples to draw from the > distribution. With `replace=False` in `choice`, size becomes a *parameter* > of the distribution, and it is only possible to draw one (multivariate) > sample. > I'm not sure I follow. `choice` draws samples from a given 1-D array, more than 1: In [12]: np.random.choice(np.arange(5), size=2, replace=True) Out[12]: array([2, 2]) In [13]: np.random.choice(np.arange(5), size=2, replace=False) Out[13]: array([3, 0]) The multivariate distribution you're talking about is for generating the indices I assume. Does the current implementation actually give a result for size>1 that has different statistic properties from calling the function N times with size=1? If so, that's definitely worth a bug report at least (I don't think there is one for this). Cheers, Ralf > I thought about this some time ago, and came up with an API that > eliminates the boolean flag, and separates the `size` argument from the > number of items drawn in one sample, which I'll call `nsample`. 
To avoid > creating a "false friend" with the standard library and with numpy's > `choice`, I'll call this function `select`. > > Here's the proposed signature and docstring. (A prototype implementation > is in a gist at > https://gist.github.com/WarrenWeckesser/2e5905d116e710914af383ee47adc2bf.) > The key feature is the `nsample` argument, which sets how many items to > select from the given collection. Also note that this function is *always* > drawing *without replacement*. It covers the `replace=True` case because > drawing one item without replacement is the same as drawing one item with > replacement. > > Whether or not an API like the following is used, it would be nice if > there was some way to get multiple samples in the `replace=False` case in > one function call. > > def select(items, nsample=None, p=None, size=None): > """ > Select random samples from `items`. > > The function randomly selects `nsample` items from `items` without > replacement. > > Parameters > ---------- > items : sequence > The collection of items from which the selection is made. > nsample : int, optional > Number of items to select without replacement in each draw. > It must be between 0 and len(items), inclusize. > p : array-like of floats, same length as `items, optional > Probabilities of the items. If this argument is not given, > the elements in `items` are assumed to have equal probability. > size : int, optional > Number of variates to draw. > > Notes > ----- > `size=None` means "generate a single selection". > > If `size` is None, the result is equivalent to > numpy.random.choice(items, size=nsample, replace=False) > > `nsample=None` means draw one (scalar) sample. > If `nsample` is None, the functon acts (almost) like nsample=1 (see > below for more information), and the result is equivalent to > numpy.random.choice(items, size=size) > In effect, it does choice with replacement. 
The case `nsample=None` > can be interpreted as each sample is a scalar, and `nsample=k` > means each sample is a sequence with length k. > > If `nsample` is not None, it must be a nonnegative integer with > 0 <= nsample <= len(items). > > If `size` is not None, it must be an integer or a tuple of integers. > When `size` is an integer, it is treated as the tuple ``(size,)``. > > When both `nsample` and `size` are not None, the result > has shape ``size + (nsample,)``. > > Examples > -------- > Make 6 choices with replacement from [10, 20, 30, 40]. (This is > equivalent to "Make 1 choice without replacement from [10, 20, 30, 40]; > do it six times.") > > >>> select([10, 20, 30, 40], size=6) > array([20, 20, 40, 10, 40, 30]) > > Choose two items from [10, 20, 30, 40] without replacement. Do it six > times. > > >>> select([10, 20, 30, 40], nsample=2, size=6) > array([[40, 10], > [20, 30], > [10, 40], > [30, 10], > [10, 30], > [10, 20]]) > > When `nsample` is an integer, there is always an axis at the end of the > result with length `nsample`, even when `nsample=1`. For example, the > shape of the array returned in the following call is (2, 3, 1) > > >>> select([10, 20, 30, 40], nsample=1, size=(2, 3)) > array([[[10], > [30], > [20]], > > [[10], > [40], > [20]]]) > > When `nsample` is None, it acts like `nsample=1`, but the trivial > dimension is not included. The shape of the array returned in the > following call is (2, 3). > > >>> select([10, 20, 30, 40], size=(2, 3)) > array([[20, 40, 30], > [30, 20, 40]]) > > """ > > > Warren > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From warren.weckesser at gmail.com Tue Dec 11 13:37:32 2018 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Tue, 11 Dec 2018 13:37:32 -0500 Subject: [Numpy-discussion] align `choices` and `sample` with Python `random` module In-Reply-To: References: Message-ID: On Tue, Dec 11, 2018 at 10:32 AM Ralf Gommers wrote: > > > On Mon, Dec 10, 2018 at 10:27 AM Warren Weckesser < > warren.weckesser at gmail.com> wrote: > >> >> >> On 12/10/18, Ralf Gommers wrote: >> > On Sun, Dec 9, 2018 at 2:00 PM Alan Isaac wrote: >> > >> >> I believe this was proposed in the past to little enthusiasm, >> >> with the response, "you're using a library; learn its functions". >> >> >> > >> > Not only that, NumPy and the core libraries around it are the standard >> for >> > numerical/statistical computing. If core Python devs want to replicate a >> > small subset of that functionality in a new Python version like 3.6, it >> > would be sensible for them to choose compatible names. I don't think >> > there's any justification for us to bother our users based on new things >> > that get added to the stdlib. >> > >> > >> >> Nevertheless, given the addition of `choices` to the Python >> >> random module in 3.6, it would be nice to have the *same name* >> >> for parallel functionality in numpy.random. >> >> >> >> And given the redundancy of numpy.random.sample, it would be >> >> nice to deprecate it with the intent to reintroduce >> >> the name later, better aligned with Python's usage. >> >> >> > >> > No, there is nothing wrong with the current API, so I'm -10 on >> deprecating >> > it. >> >> Actually, the `numpy.random.choice` API has one major weakness. When >> `replace` is False and `size` is greater than 1, the function is actually >> drawing a *one* sample from a multivariate distribution. For the other >> multivariate distributions (multinomial, multivariate_normal and >> dirichlet), `size` sets the number of samples to draw from the >> distribution. 
With `replace=False` in `choice`, size becomes a *parameter* >> of the distribution, and it is only possible to draw one (multivariate) >> sample. >> > > I'm not sure I follow. `choice` draws samples from a given 1-D array, more > than 1: > > In [12]: np.random.choice(np.arange(5), size=2, replace=True) > Out[12]: array([2, 2]) > > In [13]: np.random.choice(np.arange(5), size=2, replace=False) > Out[13]: array([3, 0]) > > The multivariate distribution you're talking about is for generating the > indices I assume. Does the current implementation actually give a result > for size>1 that has different statistic properties from calling the > function N times with size=1? If so, that's definitely worth a bug report > at least (I don't think there is one for this). > > There is no bug, just a limitation in the API. When I draw without replacement, say, three values from a collection of length five, the three values that I get are not independent. So really, this is *one* sample from a three-dimensional (discrete-valued) distribution. The problem with the current API is that I can't get multiple samples from this three-dimensional distribution in one call. If I need to repeat the process six times, I have to use a loop, e.g.: >>> samples = [np.random.choice([10, 20, 30, 40, 50], replace=False, size=3) for _ in range(6)] With the `select` function I described in my previous email, which I'll call `random_select` here, the parameter that determines the number of items per sample, `nsample`, is separate from the parameter that determines the number of samples, `size`: >>> samples = random_select([10, 20, 30, 40, 50], nsample=3, size=6) >>> samples array([[30, 40, 50], [40, 50, 30], [10, 20, 40], [20, 30, 50], [40, 20, 50], [20, 10, 30]]) (`select` is a really bad name, since `numpy.select` already exists and is something completely different. I had the longer name `random.select` in mind when I started using it. "There are only two hard problems..." etc.) 
Warren > Cheers, > Ralf > > > >> I thought about this some time ago, and came up with an API that >> eliminates the boolean flag, and separates the `size` argument from the >> number of items drawn in one sample, which I'll call `nsample`. To avoid >> creating a "false friend" with the standard library and with numpy's >> `choice`, I'll call this function `select`. >> >> Here's the proposed signature and docstring. (A prototype implementation >> is in a gist at >> https://gist.github.com/WarrenWeckesser/2e5905d116e710914af383ee47adc2bf.) >> The key feature is the `nsample` argument, which sets how many items to >> select from the given collection. Also note that this function is *always* >> drawing *without replacement*. It covers the `replace=True` case because >> drawing one item without replacement is the same as drawing one item with >> replacement. >> >> Whether or not an API like the following is used, it would be nice if >> there was some way to get multiple samples in the `replace=False` case in >> one function call. >> >> def select(items, nsample=None, p=None, size=None): >> """ >> Select random samples from `items`. >> >> The function randomly selects `nsample` items from `items` without >> replacement. >> >> Parameters >> ---------- >> items : sequence >> The collection of items from which the selection is made. >> nsample : int, optional >> Number of items to select without replacement in each draw. >> It must be between 0 and len(items), inclusize. >> p : array-like of floats, same length as `items, optional >> Probabilities of the items. If this argument is not given, >> the elements in `items` are assumed to have equal probability. >> size : int, optional >> Number of variates to draw. >> >> Notes >> ----- >> `size=None` means "generate a single selection". >> >> If `size` is None, the result is equivalent to >> numpy.random.choice(items, size=nsample, replace=False) >> >> `nsample=None` means draw one (scalar) sample. 
>> If `nsample` is None, the functon acts (almost) like nsample=1 (see >> below for more information), and the result is equivalent to >> numpy.random.choice(items, size=size) >> In effect, it does choice with replacement. The case `nsample=None` >> can be interpreted as each sample is a scalar, and `nsample=k` >> means each sample is a sequence with length k. >> >> If `nsample` is not None, it must be a nonnegative integer with >> 0 <= nsample <= len(items). >> >> If `size` is not None, it must be an integer or a tuple of integers. >> When `size` is an integer, it is treated as the tuple ``(size,)``. >> >> When both `nsample` and `size` are not None, the result >> has shape ``size + (nsample,)``. >> >> Examples >> -------- >> Make 6 choices with replacement from [10, 20, 30, 40]. (This is >> equivalent to "Make 1 choice without replacement from [10, 20, 30, >> 40]; >> do it six times.") >> >> >>> select([10, 20, 30, 40], size=6) >> array([20, 20, 40, 10, 40, 30]) >> >> Choose two items from [10, 20, 30, 40] without replacement. Do it six >> times. >> >> >>> select([10, 20, 30, 40], nsample=2, size=6) >> array([[40, 10], >> [20, 30], >> [10, 40], >> [30, 10], >> [10, 30], >> [10, 20]]) >> >> When `nsample` is an integer, there is always an axis at the end of >> the >> result with length `nsample`, even when `nsample=1`. For example, the >> shape of the array returned in the following call is (2, 3, 1) >> >> >>> select([10, 20, 30, 40], nsample=1, size=(2, 3)) >> array([[[10], >> [30], >> [20]], >> >> [[10], >> [40], >> [20]]]) >> >> When `nsample` is None, it acts like `nsample=1`, but the trivial >> dimension is not included. The shape of the array returned in the >> following call is (2, 3). 
>> >> >>> select([10, 20, 30, 40], size=(2, 3)) >> array([[20, 40, 30], >> [30, 20, 40]]) >> >> """ >> >> >> Warren >> >> _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at gmail.com Tue Dec 11 13:50:26 2018 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Tue, 11 Dec 2018 13:50:26 -0500 Subject: [Numpy-discussion] align `choices` and `sample` with Python `random` module In-Reply-To: References: Message-ID: On Tue, Dec 11, 2018 at 1:37 PM Warren Weckesser wrote: > > > On Tue, Dec 11, 2018 at 10:32 AM Ralf Gommers > wrote: > >> >> >> On Mon, Dec 10, 2018 at 10:27 AM Warren Weckesser < >> warren.weckesser at gmail.com> wrote: >> >>> >>> >>> On 12/10/18, Ralf Gommers wrote: >>> > On Sun, Dec 9, 2018 at 2:00 PM Alan Isaac >>> wrote: >>> > >>> >> I believe this was proposed in the past to little enthusiasm, >>> >> with the response, "you're using a library; learn its functions". >>> >> >>> > >>> > Not only that, NumPy and the core libraries around it are the standard >>> for >>> > numerical/statistical computing. If core Python devs want to replicate >>> a >>> > small subset of that functionality in a new Python version like 3.6, it >>> > would be sensible for them to choose compatible names. I don't think >>> > there's any justification for us to bother our users based on new >>> things >>> > that get added to the stdlib. >>> > >>> > >>> >> Nevertheless, given the addition of `choices` to the Python >>> >> random module in 3.6, it would be nice to have the *same name* >>> >> for parallel functionality in numpy.random. >>> >> >>> >> And given the redundancy of numpy.random.sample, it would be >>> >> nice to deprecate it with the intent to reintroduce >>> >> the name later, better aligned with Python's usage. 
>>> >> >>> > >>> > No, there is nothing wrong with the current API, so I'm -10 on >>> deprecating >>> > it. >>> >>> Actually, the `numpy.random.choice` API has one major weakness. When >>> `replace` is False and `size` is greater than 1, the function is actually >>> drawing a *one* sample from a multivariate distribution. For the other >>> multivariate distributions (multinomial, multivariate_normal and >>> dirichlet), `size` sets the number of samples to draw from the >>> distribution. With `replace=False` in `choice`, size becomes a *parameter* >>> of the distribution, and it is only possible to draw one (multivariate) >>> sample. >>> >> >> I'm not sure I follow. `choice` draws samples from a given 1-D array, >> more than 1: >> >> In [12]: np.random.choice(np.arange(5), size=2, replace=True) >> Out[12]: array([2, 2]) >> >> In [13]: np.random.choice(np.arange(5), size=2, replace=False) >> Out[13]: array([3, 0]) >> >> The multivariate distribution you're talking about is for generating the >> indices I assume. Does the current implementation actually give a result >> for size>1 that has different statistic properties from calling the >> function N times with size=1? If so, that's definitely worth a bug report >> at least (I don't think there is one for this). >> >> > There is no bug, just a limitation in the API. > > When I draw without replacement, say, three values from a collection of > length five, the three values that I get are not independent. So really, > this is *one* sample from a three-dimensional (discrete-valued) > distribution. The problem with the current API is that I can't get > multiple samples from this three-dimensional distribution in one call. 
If > I need to repeat the process six times, I have to use a loop, e.g.: > > >>> samples = [np.random.choice([10, 20, 30, 40, 50], replace=False, > size=3) for _ in range(6)] > > With the `select` function I described in my previous email, which I'll > call `random_select` here, the parameter that determines the number of > items per sample, `nsample`, is separate from the parameter that determines > the number of samples, `size`: > > >>> samples = random_select([10, 20, 30, 40, 50], nsample=3, size=6) > >>> samples > array([[30, 40, 50], > [40, 50, 30], > [10, 20, 40], > [20, 30, 50], > [40, 20, 50], > [20, 10, 30]]) > > (`select` is a really bad name, since `numpy.select` already exists and is > something completely different. I had the longer name `random.select` in > mind when I started using it. "There are only two hard problems..." etc.) > > As I reread this, I see another naming problem: "sample" is used to mean different things. In my description above, I referred to one "sample" as the length-3 sequence generated by one call to `numpy.random.choice([10, 20, 30, 40, 50], replace=False, size=3)`, but in `random_select`, `nsample` refers to the length of each sequence generated. I use the name 'nsample' to be consistent with `numpy.random.hypergeometric`. I hope the output of the `random_select` call shown above makes clear the desired behavior. Warren Warren > > > >> Cheers, >> Ralf >> >> >> >>> I thought about this some time ago, and came up with an API that >>> eliminates the boolean flag, and separates the `size` argument from the >>> number of items drawn in one sample, which I'll call `nsample`. To avoid >>> creating a "false friend" with the standard library and with numpy's >>> `choice`, I'll call this function `select`. >>> >>> Here's the proposed signature and docstring. (A prototype >>> implementation is in a gist at >>> https://gist.github.com/WarrenWeckesser/2e5905d116e710914af383ee47adc2bf.) 
>>> The key feature is the `nsample` argument, which sets how many items to >>> select from the given collection. Also note that this function is *always* >>> drawing *without replacement*. It covers the `replace=True` case because >>> drawing one item without replacement is the same as drawing one item with >>> replacement. >>> >>> Whether or not an API like the following is used, it would be nice if >>> there was some way to get multiple samples in the `replace=False` case in >>> one function call. >>> >>> def select(items, nsample=None, p=None, size=None): >>> """ >>> Select random samples from `items`. >>> >>> The function randomly selects `nsample` items from `items` without >>> replacement. >>> >>> Parameters >>> ---------- >>> items : sequence >>> The collection of items from which the selection is made. >>> nsample : int, optional >>> Number of items to select without replacement in each draw. >>> It must be between 0 and len(items), inclusize. >>> p : array-like of floats, same length as `items, optional >>> Probabilities of the items. If this argument is not given, >>> the elements in `items` are assumed to have equal probability. >>> size : int, optional >>> Number of variates to draw. >>> >>> Notes >>> ----- >>> `size=None` means "generate a single selection". >>> >>> If `size` is None, the result is equivalent to >>> numpy.random.choice(items, size=nsample, replace=False) >>> >>> `nsample=None` means draw one (scalar) sample. >>> If `nsample` is None, the functon acts (almost) like nsample=1 (see >>> below for more information), and the result is equivalent to >>> numpy.random.choice(items, size=size) >>> In effect, it does choice with replacement. The case `nsample=None` >>> can be interpreted as each sample is a scalar, and `nsample=k` >>> means each sample is a sequence with length k. >>> >>> If `nsample` is not None, it must be a nonnegative integer with >>> 0 <= nsample <= len(items). 
>>> >>> If `size` is not None, it must be an integer or a tuple of integers. >>> When `size` is an integer, it is treated as the tuple ``(size,)``. >>> >>> When both `nsample` and `size` are not None, the result >>> has shape ``size + (nsample,)``. >>> >>> Examples >>> -------- >>> Make 6 choices with replacement from [10, 20, 30, 40]. (This is >>> equivalent to "Make 1 choice without replacement from [10, 20, 30, >>> 40]; >>> do it six times.") >>> >>> >>> select([10, 20, 30, 40], size=6) >>> array([20, 20, 40, 10, 40, 30]) >>> >>> Choose two items from [10, 20, 30, 40] without replacement. Do it >>> six >>> times. >>> >>> >>> select([10, 20, 30, 40], nsample=2, size=6) >>> array([[40, 10], >>> [20, 30], >>> [10, 40], >>> [30, 10], >>> [10, 30], >>> [10, 20]]) >>> >>> When `nsample` is an integer, there is always an axis at the end of >>> the >>> result with length `nsample`, even when `nsample=1`. For example, >>> the >>> shape of the array returned in the following call is (2, 3, 1) >>> >>> >>> select([10, 20, 30, 40], nsample=1, size=(2, 3)) >>> array([[[10], >>> [30], >>> [20]], >>> >>> [[10], >>> [40], >>> [20]]]) >>> >>> When `nsample` is None, it acts like `nsample=1`, but the trivial >>> dimension is not included. The shape of the array returned in the >>> following call is (2, 3). >>> >>> >>> select([10, 20, 30, 40], size=(2, 3)) >>> array([[20, 40, 30], >>> [30, 20, 40]]) >>> >>> """ >>> >>> >>> Warren >>> >>> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at python.org >> https://mail.python.org/mailman/listinfo/numpy-discussion >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From shoyer at gmail.com Tue Dec 11 14:26:34 2018 From: shoyer at gmail.com (Stephan Hoyer) Date: Tue, 11 Dec 2018 11:26:34 -0800 Subject: [Numpy-discussion] align `choices` and `sample` with Python `random` module In-Reply-To: References: Message-ID: On Tue, Dec 11, 2018 at 10:39 AM Warren Weckesser < warren.weckesser at gmail.com> wrote: > There is no bug, just a limitation in the API. > > When I draw without replacement, say, three values from a collection of > length five, the three values that I get are not independent. So really, > this is *one* sample from a three-dimensional (discrete-valued) > distribution. The problem with the current API is that I can't get > multiple samples from this three-dimensional distribution in one call. If > I need to repeat the process six times, I have to use a loop, e.g.: > > >>> samples = [np.random.choice([10, 20, 30, 40, 50], replace=False, > size=3) for _ in range(6)] > > With the `select` function I described in my previous email, which I'll > call `random_select` here, the parameter that determines the number of > items per sample, `nsample`, is separate from the parameter that determines > the number of samples, `size`: > > >>> samples = random_select([10, 20, 30, 40, 50], nsample=3, size=6) > >>> samples > array([[30, 40, 50], > [40, 50, 30], > [10, 20, 40], > [20, 30, 50], > [40, 20, 50], > [20, 10, 30]]) > > (`select` is a really bad name, since `numpy.select` already exists and is > something completely different. I had the longer name `random.select` in > mind when I started using it. "There are only two hard problems..." etc.) > > Warren > This is an issue for the probability distributions from scipy.stats, too. The only library that I know handles this well is TensorFlow Probability, which has a notion of "batch" vs "events" dimensions in distributions. 
It's actually pretty comprehensive, and makes it easy to express these sorts of operations: >>> import tensorflow_probability as tfp >>> import tensorflow as tf >>> tf.enable_eager_execution() >>> dist = tfp.distributions.Categorical(tf.zeros((3, 5))) >>> dist >>> dist.sample(6) -------------- next part -------------- An HTML attachment was scrubbed... URL: From matti.picus at gmail.com Tue Dec 11 16:05:55 2018 From: matti.picus at gmail.com (Matti Picus) Date: Tue, 11 Dec 2018 23:05:55 +0200 Subject: [Numpy-discussion] Reminder: weekly status meeting Dec 12 at 12:00 pacific time Message-ID: The draft agenda is at https://hackmd.io/Gn1ymjwkRjm9WVY5Cgbwsw?both Everyone is invited to join. Matti, Tyler and Stefan From warren.weckesser at gmail.com Tue Dec 11 17:10:55 2018 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Tue, 11 Dec 2018 17:10:55 -0500 Subject: [Numpy-discussion] align `choices` and `sample` with Python `random` module In-Reply-To: References: Message-ID: On Tue, Dec 11, 2018 at 2:27 PM Stephan Hoyer wrote: > On Tue, Dec 11, 2018 at 10:39 AM Warren Weckesser < > warren.weckesser at gmail.com> wrote: > >> There is no bug, just a limitation in the API. >> >> When I draw without replacement, say, three values from a collection of >> length five, the three values that I get are not independent. So really, >> this is *one* sample from a three-dimensional (discrete-valued) >> distribution. The problem with the current API is that I can't get >> multiple samples from this three-dimensional distribution in one call. 
If >> I need to repeat the process six times, I have to use a loop, e.g.: >> >> >>> samples = [np.random.choice([10, 20, 30, 40, 50], replace=False, >> size=3) for _ in range(6)] >> >> With the `select` function I described in my previous email, which I'll >> call `random_select` here, the parameter that determines the number of >> items per sample, `nsample`, is separate from the parameter that determines >> the number of samples, `size`: >> >> >>> samples = random_select([10, 20, 30, 40, 50], nsample=3, size=6) >> >>> samples >> array([[30, 40, 50], >> [40, 50, 30], >> [10, 20, 40], >> [20, 30, 50], >> [40, 20, 50], >> [20, 10, 30]]) >> >> (`select` is a really bad name, since `numpy.select` already exists and >> is something completely different. I had the longer name `random.select` >> in mind when I started using it. "There are only two hard problems..." etc.) >> >> Warren >> > > This is an issue for the probability distributions from scipy.stats, too. > > The only library that I know handles this well is TensorFlow Probability, > which has a notion of "batch" vs "events" dimensions in distributions. It's > actually pretty comprehensive, and makes it easy to express these sorts of > operations: > > >>> import tensorflow_probability as tfp > >>> import tensorflow as tf > >>> tf.enable_eager_execution() > >>> dist = tfp.distributions.Categorical(tf.zeros((3, 5))) > >>> dist > event_shape=() dtype=int32> > >>> dist.sample(6) > [2, 1, 3], [4, 4, 2], [0, 1, 1], [0, 2, 2], [2, 0, 4]], dtype=int32)> > Yes, tensorflow-probability includes broadcasting of the parameters and generating multiple variates in one call, but note that your example is not sampling without replacement. For sampling 3 items without replacement from a population, the *event_shape* (to use tensorflow-probability terminology) would have to be (3,). 
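For what it's worth, batched draws without replacement can already be emulated in plain NumPy by ranking independent random keys per row. The `batched_select` helper below is hypothetical, written only for illustration (equal probabilities only, no `p` support):

```python
import numpy as np

def batched_select(items, nsample, size):
    # One row of uniform random keys per draw; the indices of the
    # `nsample` smallest keys in a row form a uniform random subset,
    # i.e. one draw without replacement (equal weights only).
    items = np.asarray(items)
    keys = np.random.random((size, len(items)))
    idx = np.argpartition(keys, nsample - 1, axis=1)[:, :nsample]
    return items[idx]

samples = batched_select([10, 20, 30, 40, 50], nsample=3, size=6)
# samples has shape (6, 3); each row contains 3 distinct values
```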
Warren _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lagru at mailbox.org Wed Dec 12 11:25:26 2018 From: lagru at mailbox.org (Lars Grueter) Date: Wed, 12 Dec 2018 17:25:26 +0100 Subject: [Numpy-discussion] Incremental rewrite of numpy.pad (gh-11358) Message-ID: <0fe5a2c1-922d-d8fe-13d3-f00c7101b5f2@mailbox.org> Dear community, I started the rewrite of "numpy.pad" [1] some time ago and feel like it has now left the WIP status. All tests are green and the benchmarks promise significant improvements for large arrays [2]. However progress on this PR seems to have stalled somewhat. As was brought up multiple times, a reasonable concern is that introducing these changes incrementally is preferable to a single PR. This was partly possible (see [3]) but as explained further down is not trivial for the remaining changes. Concerning the suggestion to split the remaining changes I proposed 4 ways forward (copied from comment [4]): Option 1: Split this PR in smaller PRs with the requirement to pass the test suite. I expect this will make this a lot more complicated. At some points I will just have to introduce an overhead of logic to control old and new code paths at the same time. I just don't see another way because the two approaches are so different. I'm really not a fan of creating functions that work with both approaches if this leads to a weird architecture in the long run. Also while this will split the review burden into smaller pieces I think it will increase the sum of changes to review in total. Option 2: Split this PR in smaller PRs without the requirement to pass the test suite (mark appropriate tests as xfail). Create a new branch in numpy's repo just for the rewrite. I could then make my PRs against this branch without master suffering. 
I'm thinking of first wiping the slate, introducing shared functions (for
all modes) and then incrementally introducing each "new" mode with a new
PR. This would keep the review burden for each PR and the rewrite overall
small and still allow benchmarking of all combined changes against master.

Option 3: Squash changes in this PR and continue with a new PR to maintain
clarity as this PR has gotten quite big while I was still making changes
and addressing reviews.

Option 4: Continue this PR.

I myself think option 2 is the most elegant and best compromise between all
concerns. However I can't move forward on my own as this option requires a
maintainer to set up a temporary branch for me to make PRs against. So I'd
like to kindly ask for your thoughts and guidance.

Best regards,
Lars

[1] https://github.com/numpy/numpy/pull/11358
[2] https://github.com/numpy/numpy/pull/11358#issuecomment-441246090
[3] https://github.com/numpy/numpy/pull/11966
[4] https://github.com/numpy/numpy/pull/11358#issuecomment-441362401

From ralf.gommers at gmail.com Wed Dec 12 22:08:23 2018
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Wed, 12 Dec 2018 19:08:23 -0800
Subject: [Numpy-discussion] align `choices` and `sample` with Python `random` module
In-Reply-To:
References: <706cf43b-7663-6d62-d0d9-bbf7aadf6d60@gmail.com>
Message-ID:

On Mon, Dec 10, 2018 at 9:26 AM Stephan Hoyer wrote:

> On Mon, Dec 10, 2018 at 8:26 AM Alan Isaac wrote:
>
>> On 12/10/2018 11:20 AM, Ralf Gommers wrote:
>> > there is nothing wrong with the current API
>>
>> Just to be clear: you completely reject the past
>> cautions on this list against creating APIs
>> with flag parameters. Is that correct?
>>
>> Or is "nothing wrong" just a narrow approval in
>> this particular case?
>
>
> I agree with you that numpy.random.sample is redundant, that APIs based on
> flags are generally poorly designed
>
That argument is for things like 3 boolean variables that together create
8 states.
It then becomes hard to spot bugs like 1 of the 8 cases not being handled
or not being well-defined. In this case, the meaning is clear and imho the
API is better like this than if we'd create two almost identical functions.

> and that all things being equal it would be desirable for NumPy and
> Python's standard library to be aligned.
>
> That said, "replacing a function/parameter with something totally
> different by the same name" is a really painful/slow deprecation process
> that is best avoided if at all possible in mature projects.
>

+1

> Personally, I would be +1 for issuing a deprecation warning for
> np.random.sample, and removing it after a good amount of notice (maybe
> several years). This is a similar deprecation cycle to what you see in
> Python itself (e.g., for ABCs in collections vs collections.abc). If you
> look at NumPy's docs for "Simple random data" [1] we have four different
> names for this same function ("random_sample", "random", "ranf" and
> "sample"), which is frankly absurd. Some cleanup is long overdue.
>

These aliases have always been there, since before 2005:
https://github.com/numpy/numpy/commit/9338dea9. Deprecating some of those
seems fine to me, `ranf` is a really bad name, and `random` is too. I don't
see a good reason to remove `sample` and `random_sample`; they're sane
names, the function is not buggy, and it just seems like bothering our
users for no good reason.

Cheers,
Ralf

> But we should be extremely hesitant to actually reuse these names for
> something else. People depend on NumPy for stability, and there is plenty
> of code written against NumPy from five years ago that still runs just fine
> today. It's one thing to break code noisily by removing a function, but if
> there's any chance of introducing silent errors that would be inexcusable.
> > Best, > Stephan > > [1] > https://docs.scipy.org/doc/numpy-1.15.0/reference/routines.random.html#simple-random-data > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sat Dec 15 18:02:23 2018 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 15 Dec 2018 15:02:23 -0800 Subject: [Numpy-discussion] gear with NumPy logo Message-ID: Hi all, I thought this nontechnical announcement was worth copying from the NumFOCUS list: you can now order clothes, mugs and phone cases with the NumPy logo on it: https://shop.spreadshirt.com/numfocus/numpy+official+logo?q=I1018707210. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From tyler.je.reddy at gmail.com Tue Dec 18 11:57:02 2018 From: tyler.je.reddy at gmail.com (Tyler Reddy) Date: Tue, 18 Dec 2018 08:57:02 -0800 Subject: [Numpy-discussion] ANN: SciPy 1.2.0 Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Hi all, On behalf of the SciPy development team I'm pleased to announce the release of SciPy 1.2.0. This is an LTS release and the last to support Python 2.7. Sources and binary wheels can be found at: https://pypi.org/project/scipy/ and at: https://github.com/scipy/scipy/releases/tag/v1.2.0 One of a few ways to install this release with pip: pip install scipy==1.2.0 ========================== SciPy 1.2.0 Release Notes ========================== SciPy 1.2.0 is the culmination of 6 months of hard work. It contains many new features, numerous bug-fixes, improved test coverage and better documentation. There have been a number of deprecations and API changes in this release, which are documented below. All users are encouraged to upgrade to this release, as there are a large number of bug-fixes and optimizations. 
Before upgrading, we recommend that users check that their own code does
not use deprecated SciPy functionality (to do so, run your code with
``python -Wd`` and check for ``DeprecationWarning`` s). Our development
attention will now shift to bug-fix releases on the 1.2.x branch, and on
adding new features on the master branch.

This release requires Python 2.7 or 3.4+ and NumPy 1.8.2 or greater.

Note: This will be the last SciPy release to support Python 2.7.
Consequently, the 1.2.x series will be a long term support (LTS) release;
we will backport bug fixes until 1 Jan 2020.

For running on PyPy, PyPy3 6.0+ and NumPy 1.15.0 are required.

Highlights of this release
---------------------------

- 1-D root finding improvements with a new solver, ``toms748``, and a new
  unified interface, ``root_scalar``
- New ``dual_annealing`` optimization method that combines stochastic and
  local deterministic searching
- A new optimization algorithm, ``shgo`` (simplicial homology global
  optimization) for derivative free optimization problems
- A new category of quaternion-based transformations is available in
  `scipy.spatial.transform`

New features
============

`scipy.ndimage` improvements
---------------------------------

Proper spline coefficient calculations have been added for the ``mirror``,
``wrap``, and ``reflect`` modes of `scipy.ndimage.rotate`.

`scipy.fftpack` improvements
---------------------------------

DCT-IV, DST-IV, DCT-I, and DST-I orthonormalization are now supported in
`scipy.fftpack`.

`scipy.interpolate` improvements
---------------------------------

`scipy.interpolate.pade` now accepts a new argument for the order of the
numerator.

`scipy.cluster` improvements
-----------------------------

`scipy.cluster.vq.kmeans2` gained a new initialization method, kmeans++.

`scipy.special` improvements
-----------------------------

The function ``softmax`` was added to `scipy.special`.
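A quick, minimal illustration of the new ``softmax`` function (a sketch assuming SciPy >= 1.2 is installed):

```python
import numpy as np
from scipy.special import softmax

x = np.array([0.0, 1.0, 2.0])
p = softmax(x)
# p equals exp(x) / exp(x).sum(): non-negative entries that sum to 1,
# with the ordering of x preserved
```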
`scipy.optimize` improvements
------------------------------

The one-dimensional nonlinear solvers have been given a unified interface
`scipy.optimize.root_scalar`, similar to the `scipy.optimize.root` interface
for multi-dimensional solvers. ``scipy.optimize.root_scalar(f, bracket=[a, b],
method="brenth")`` is equivalent to ``scipy.optimize.brenth(f, a, b)``. If no
``method`` is specified, an appropriate one will be selected based upon the
bracket and the number of derivatives available.

The so-called Algorithm 748 of Alefeld, Potra and Shi for root-finding within
an enclosing interval has been added as `scipy.optimize.toms748`. This
provides guaranteed convergence to a root with convergence rate per function
evaluation of approximately 1.65 (for sufficiently well-behaved functions).

``differential_evolution`` now has the ``updating`` and ``workers`` keywords.
The first chooses between continuous updating of the best solution vector
(the default), or once per generation. Continuous updating can lead to faster
convergence. The ``workers`` keyword accepts an ``int`` or map-like callable,
and parallelises the solver (having the side effect of updating once per
generation). Supplying an ``int`` evaluates the trial solutions in N parallel
parts. Supplying a map-like callable allows other parallelisation approaches
(such as ``mpi4py``, or ``joblib``) to be used.

``dual_annealing`` (and ``shgo`` below) is a powerful new general purpose
global optimization (GO) algorithm. ``dual_annealing`` uses two annealing
processes to accelerate the convergence towards the global minimum of an
objective mathematical function. The first annealing process controls the
stochastic Markov chain searching and the second annealing process controls
the deterministic minimization. So, dual annealing is a hybrid method that
takes advantage of stochastic and local deterministic searching in an
efficient way.
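A minimal sketch of calling the new solver (assumes SciPy >= 1.2; the Rastrigin function below is a standard multimodal test problem chosen for illustration, not taken from the release notes):

```python
import numpy as np
from scipy.optimize import dual_annealing

def rastrigin(x):
    # Classic multimodal test function: global minimum 0 at the origin,
    # surrounded by a regular grid of local minima
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

result = dual_annealing(rastrigin, bounds=[(-5.12, 5.12)] * 2,
                        maxiter=200, seed=1234)
# result.x should be close to [0, 0] and result.fun close to 0
```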
``shgo`` (simplicial homology global optimization) is a similar algorithm
appropriate for solving black box and derivative free optimization (DFO)
problems. The algorithm generally converges to the global solution in finite
time. The convergence holds for non-linear inequality and equality
constraints. In addition to returning a global minimum, the algorithm also
returns any other global and local minima found after every iteration. This
makes it useful for exploring the solutions in a domain.

`scipy.optimize.newton` can now accept a scalar or an array.

``MINPACK`` usage is now thread-safe, such that ``MINPACK`` + callbacks may
be used on multiple threads.

`scipy.signal` improvements
----------------------------

Digital filter design functions now include a parameter to specify the
sampling rate. Previously, digital filters could only be specified using
normalized frequency, but different functions used different scales (e.g.
0 to 1 for ``butter`` vs 0 to π for ``freqz``), leading to errors and
confusion. With the ``fs`` parameter, ordinary frequencies can now be
entered directly into functions, with the normalization handled internally.

``find_peaks`` and related functions no longer raise an exception if the
properties of a peak have unexpected values (e.g. a prominence of 0). A
``PeakPropertyWarning`` is given instead.

The new keyword argument ``plateau_size`` was added to ``find_peaks``.
``plateau_size`` may be used to select peaks based on the length of the
flat top of a peak.

``welch()`` and ``csd()`` methods in `scipy.signal` now support calculation
of a median average PSD, using the ``average='median'`` keyword.

`scipy.sparse` improvements
----------------------------

The `scipy.sparse.bsr_matrix.tocsr` method is now implemented directly
instead of converting via COO format, and the `scipy.sparse.bsr_matrix.tocsc`
method is now also routed via CSR conversion instead of COO. The efficiency
of both conversions is now improved.
The issue where SuperLU or UMFPACK solvers crashed on matrices with
non-canonical format in `scipy.sparse.linalg` was fixed. The solver wrapper
canonicalizes the matrix if necessary before calling the SuperLU or UMFPACK
solver.

The ``largest`` option of `scipy.sparse.linalg.lobpcg()` was fixed to have a
correct (and expected) behavior. The order of the eigenvalues was made
consistent with the ARPACK solver (``eigs()``), i.e. ascending for the
smallest eigenvalues, and descending for the largest eigenvalues.

The `scipy.sparse.random` function is now faster and also supports integer
and complex values by passing the appropriate value to the ``dtype``
argument.

`scipy.spatial` improvements
-----------------------------

The function `scipy.spatial.distance.jaccard` was modified to return 0
instead of ``np.nan`` when two all-zero vectors are compared.

Support for the Jensen Shannon distance, the square-root of the divergence,
has been added under `scipy.spatial.distance.jensenshannon`.

An optional keyword was added to the function
`scipy.spatial.cKDTree.query_ball_point()` to sort or not sort the returned
indices. Not sorting the indices can speed up calls.

A new category of quaternion-based transformations is available in
`scipy.spatial.transform`, including spherical linear interpolation of
rotations (``Slerp``), conversions to and from quaternions, Euler angles,
and general rotation and inversion capabilities
(`spatial.transform.Rotation`), and uniform random sampling of 3D rotations
(`spatial.transform.Rotation.random`).

`scipy.stats` improvements
---------------------------

The Yeo-Johnson power transformation is now supported (``yeojohnson``,
``yeojohnson_llf``, ``yeojohnson_normmax``, ``yeojohnson_normplot``). Unlike
the Box-Cox transformation, the Yeo-Johnson transformation can accept
negative values.

Added a general method to sample random variates based on the density only,
in the new function ``rvs_ratio_uniforms``.
The Yule-Simon distribution (``yulesimon``) was added -- this is a new
discrete probability distribution.

``stats`` and ``mstats`` now have access to a new regression method,
``siegelslopes``, a robust linear regression algorithm.

`scipy.stats.gaussian_kde` now has the ability to deal with weighted
samples, and should have a modest improvement in performance.

Levy Stable Parameter Estimation, PDF, and CDF calculations are now
supported for `scipy.stats.levy_stable`.

The Brunner-Munzel test is now available as ``brunnermunzel`` in ``stats``
and ``mstats``.

`scipy.linalg` improvements
---------------------------

`scipy.linalg.lapack` now exposes the LAPACK routines using the Rectangular
Full Packed storage (RFP) for upper triangular, lower triangular, symmetric,
or Hermitian matrices; the upper trapezoidal fat matrix RZ decomposition
routines are now available as well.

Deprecated features
===================

The functions ``hyp2f0``, ``hyp1f2`` and ``hyp3f0`` in ``scipy.special``
have been deprecated.

Backwards incompatible changes
==============================

LAPACK version 3.4.0 or later is now required. Building with Apple
Accelerate is no longer supported.

The function ``scipy.linalg.subspace_angles(A, B)`` now gives correct
results for all angles. Before this, the function only returned correct
values for those angles which were greater than pi/4.

Support for the Bento build system has been removed. Bento has not been
maintained for several years, and did not have good Python 3 or wheel
support, hence it was time to remove it.

The required signature of the `scipy.optimize.linprog` ``method=simplex``
callback function has changed. Before iteration begins, the simplex solver
first converts the problem into a standard form that does not, in general,
have the same variables or constraints as the problem defined by the user.
Previously, the simplex solver would pass a user-specified callback function
several separate arguments, such as the current solution vector ``xk``,
corresponding to this standard form problem. Unfortunately, the relationship
between the standard form problem and the user-defined problem was not
documented, limiting the utility of the information passed to the callback
function.

In addition to numerous bug fix changes, the simplex solver now passes a
user-specified callback function a single ``OptimizeResult`` object
containing information that corresponds directly to the user-defined
problem. In future releases, this ``OptimizeResult`` object may be expanded
to include additional information, such as variables corresponding to the
standard-form problem and information concerning the relationship between
the standard-form and user-defined problems.

The implementation of `scipy.sparse.random` has changed, and this affects
the numerical values returned for both ``sparse.random`` and ``sparse.rand``
for some matrix shapes and a given seed.

`scipy.optimize.newton` will no longer use Halley's method in cases where it
negatively impacts convergence.

Other changes
=============

Authors
=======

* @endolith
* @luzpaz
* Hameer Abbasi +
* akahard2dj +
* Anton Akhmerov
* Joseph Albert
* alexthomas93 +
* ashish +
* atpage +
* Blair Azzopardi +
* Yoshiki Vázquez Baeza
* Bence Bagi +
* Christoph Baumgarten
* Lucas Bellomo +
* BH4 +
* Aditya Bharti
* Max Bolingbroke
* François Boulogne
* Ward Bradt +
* Matthew Brett
* Evgeni Burovski
* Rafał Byczek +
* Alfredo Canziani +
* CJ Carey
* Lucía Cheung +
* Poom Chiarawongse +
* Jeanne Choo +
* Robert Cimrman
* Graham Clenaghan +
* cynthia-rempel +
* Johannes Damp +
* Jaime Fernandez del Rio
* Dowon +
* emmi474 +
* Stefan Endres +
* Thomas Etherington +
* Piotr Figiel
* Alex Fikl +
* fo40225 +
* Joseph Fox-Rabinovitz
* Lars G
* Abhinav Gautam +
* Stiaan Gerber +
* C.A.M. Gerlach +
* Ralf Gommers
* Todd Goodall
* Lars Grueter +
* Sylvain Gubian +
* Matt Haberland
* David Hagen
* Will Handley +
* Charles Harris
* Ian Henriksen
* Thomas Hisch +
* Theodore Hu
* Michael Hudson-Doyle +
* Nicolas Hug +
* jakirkham +
* Jakob Jakobson +
* James +
* Jan Schlüter
* jeanpauphilet +
* josephmernst +
* Kai +
* Kai-Striega +
* kalash04 +
* Toshiki Kataoka +
* Konrad0 +
* Tom Krauss +
* Johannes Kulick
* Lars Grüter +
* Eric Larson
* Denis Laxalde
* Will Lee +
* Katrin Leinweber +
* Yin Li +
* P. L. Lim +
* Jesse Livezey +
* Duncan Macleod +
* MatthewFlamm +
* Nikolay Mayorov
* Mike McClurg +
* Christian Meyer +
* Mark Mikofski
* Naoto Mizuno +
* mohmmadd +
* Nathan Musoke
* Anju Geetha Nair +
* Andrew Nelson
* Ayappan P +
* Nick Papior
* Haesun Park +
* Ronny Pfannschmidt +
* pijyoi +
* Ilhan Polat
* Anthony Polloreno +
* Ted Pudlik
* puenka
* Eric Quintero
* Pradeep Reddy Raamana +
* Vyas Ramasubramani +
* Ramon Viñas +
* Tyler Reddy
* Joscha Reimer
* Antonio H Ribeiro
* richardjgowers +
* Rob +
* robbystk +
* Lucas Roberts +
* rohan +
* Joaquin Derrac Rus +
* Josua Sassen +
* Bruce Sharpe +
* Max Shinn +
* Scott Sievert
* Sourav Singh
* Strahinja Lukić +
* Kai Striega +
* Shinya SUZUKI +
* Mike Toews +
* Piotr Uchwat
* Miguel de Val-Borro +
* Nicky van Foreest
* Paul van Mulbregt
* Gael Varoquaux
* Pauli Virtanen
* Stefan van der Walt
* Warren Weckesser
* Joshua Wharton +
* Bernhard M. Wiedemann +
* Eric Wieser
* Josh Wilson
* Tony Xiang +
* Roman Yurchak +
* Roy Zywina +

A total of 137 people contributed to this release.
People with a "+" by their names contributed a patch for the first time.
This list of names is automatically generated, and may not be fully complete.

Issues closed for 1.2.0
------------------------

* `#9520 `__: signal.correlate with method='fft' doesn't benefit from long...
* `#9547 `__: signature of dual_annealing doesn't match other optimizers * `#9540 `__: SciPy v1.2.0rc1 cannot be imported on Python 2.7.15 * `#1240 `__: Allowing multithreaded use of minpack through scipy.optimize... * `#1432 `__: scipy.stats.mode extremely slow (Trac #905) * `#3372 `__: Please add Sphinx search field to online scipy html docs * `#3678 `__: _clough_tocher_2d_single direction between centroids * `#4174 `__: lobpcg "largest" option invalid? * `#5493 `__: anderson_ksamp p-values>1 * `#5743 `__: slsqp fails to detect infeasible problem * `#6139 `__: scipy.optimize.linprog failed to find a feasible starting point... * `#6358 `__: stats: docstring for `vonmises_line` points to `vonmises_line`... * `#6498 `__: runtests.py is missing in pypi distfile * `#7426 `__: scipy.stats.ksone(n).pdf(x) returns nan for positive values of... * `#7455 `__: scipy.stats.ksone.pdf(2,x) return incorrect values for x near... * `#7456 `__: scipy.special.smirnov and scipy.special.smirnovi have accuracy... * `#7492 `__: scipy.special.kolmogorov(x)/kolmogi(p) inefficient, inaccurate... * `#7914 `__: TravisCI not failing when it should for -OO run * `#8064 `__: linalg.solve test crashes on Windows * `#8212 `__: LAPACK Rectangular Full Packed routines * `#8256 `__: differential_evolution bug converges to wrong results in complex... * `#8443 `__: Deprecate `hyp2f0`, `hyp1f2`, and `hyp3f0`? * `#8452 `__: DOC: ARPACK tutorial has two conflicting equations * `#8680 `__: scipy fails compilation when building from source * `#8686 `__: Division by zero in _trustregion.py when x0 is exactly equal... * `#8700 `__: _MINPACK_LOCK not held when calling into minpack from least_squares * `#8786 `__: erroneous moment values for t-distribution * `#8791 `__: Checking COLA condition in istft should be optional (or omitted) * `#8843 `__: imresize cannot be deprecated just yet * `#8844 `__: Inverse Wishart Log PDF Incorrect for Non-diagonal Scale Matrix? 
* `#8878 `__: vonmises and vonmises_line in stats: vonmises wrong and superfluous? * `#8895 `__: v1.1.0 `ndi.rotate` documentation ? reused parameters not filled... * `#8900 `__: Missing complex conjugation in scipy.sparse.linalg.LinearOperator * `#8904 `__: BUG: if zero derivative at root, then Newton fails with RuntimeWarning * `#8911 `__: make_interp_spline bc_type incorrect input interpretation * `#8942 `__: MAINT: Refactor `_linprog.py` and `_linprog_ip.py` to remove... * `#8947 `__: np.int64 in scipy.fftpack.next_fast_len * `#9020 `__: BUG: linalg.subspace_angles gives wrong results * `#9033 `__: scipy.stats.normaltest sometimes gives incorrect returns b/c... * `#9036 `__: Bizarre times for `scipy.sparse.rand` function with 'low' density... * `#9044 `__: optimize.minimize(method=`trust-constr`) result dict does not... * `#9071 `__: doc/linalg: add cho_solve_banded to see also of cholesky_banded * `#9082 `__: eigenvalue sorting in scipy.sparse.linalg.eigsh * `#9086 `__: signaltools.py:491: FutureWarning: Using a non-tuple sequence... * `#9091 `__: test_spline_filter failure on 32-bit * `#9122 `__: Typo on scipy minimization tutorial * `#9135 `__: doc error at https://docs.scipy.org/doc/scipy/reference/tutorial/stats/discrete_poisson.html * `#9167 `__: DOC: BUG: typo in ndimage LowLevelCallable tutorial example * `#9169 `__: truncnorm does not work if b < a in scipy.stats * `#9250 `__: scipy.special.tests.test_mpmath::TestSystematic::test_pcfw fails... * `#9259 `__: rv.expect() == rv.mean() is false for rv.mean() == nan (and inf) * `#9286 `__: DOC: Rosenbrock expression in optimize.minimize tutorial * `#9316 `__: SLSQP fails in nested optimization * `#9337 `__: scipy.signal.find_peaks key typo in documentation * `#9345 `__: Example from documentation of scipy.sparse.linalg.eigs raises... 
* `#9383 `__: Default value for "mode" in "ndimage.shift" * `#9419 `__: dual_annealing off by one in the number of iterations * `#9442 `__: Error in Defintion of Rosenbrock Function * `#9453 `__: TST: test_eigs_consistency() doesn't have consistent results Pull requests for 1.2.0 ------------------------ * `#9526 `__: TST: relax precision requirements in signal.correlate tests * `#9507 `__: CI: MAINT: Skip a ckdtree test on pypy * `#9512 `__: TST: test_random_sampling 32-bit handling * `#9494 `__: TST: test_kolmogorov xfail 32-bit * `#9486 `__: BUG: fix sparse random int handling * `#9550 `__: BUG: scipy/_lib/_numpy_compat: get_randint * `#9549 `__: MAINT: make dual_annealing signature match other optimizers * `#9541 `__: BUG: fix SyntaxError due to non-ascii character on Python 2.7 * `#7352 `__: ENH: add Brunner Munzel test to scipy.stats. * `#7373 `__: BUG: Jaccard distance for all-zero arrays would return np.nan * `#7374 `__: ENH: Add PDF, CDF and parameter estimation for Stable Distributions * `#8098 `__: ENH: Add shgo for global optimization of NLPs. * `#8203 `__: ENH: adding simulated dual annealing to optimize * `#8259 `__: Option to follow original Storn and Price algorithm and its parallelisation * `#8293 `__: ENH add ratio-of-uniforms method for rv generation to scipy.stats * `#8294 `__: BUG: Fix slowness in stats.mode * `#8295 `__: ENH: add Jensen Shannon distance to `scipy.spatial.distance` * `#8357 `__: ENH: vectorize scalar zero-search-functions * `#8397 `__: Add `fs=` parameter to filter design functions * `#8537 `__: ENH: Implement mode parameter for spline filtering. 
* `#8558 `__: ENH: small speedup for stats.gaussian_kde * `#8560 `__: BUG: fix p-value calc of anderson_ksamp in scipy.stats * `#8614 `__: ENH: correct p-values for stats.kendalltau and stats.mstats.kendalltau * `#8670 `__: ENH: Require Lapack 3.4.0 * `#8683 `__: Correcting kmeans documentation * `#8725 `__: MAINT: Cleanup scipy.optimize.leastsq * `#8726 `__: BUG: Fix _get_output in scipy.ndimage to support string * `#8733 `__: MAINT: stats: A bit of clean up. * `#8737 `__: BUG: Improve numerical precision/convergence failures of smirnov/kolmogorov * `#8738 `__: MAINT: stats: A bit of clean up in test_distributions.py. * `#8740 `__: BF/ENH: make minpack thread safe * `#8742 `__: BUG: Fix division by zero in trust-region optimization methods * `#8746 `__: MAINT: signal: Fix a docstring of a private function, and fix... * `#8750 `__: DOC clarified description of norminvgauss in scipy.stats * `#8753 `__: DOC: signal: Fix a plot title in the chirp docstring. * `#8755 `__: DOC: MAINT: Fix link to the wheel documentation in developer... * `#8760 `__: BUG: stats: boltzmann wasn't setting the upper bound. * `#8763 `__: [DOC] Improved scipy.cluster.hierarchy documentation * `#8765 `__: DOC: added example for scipy.stat.mstats.tmin * `#8788 `__: DOC: fix definition of optional `disp` parameter * `#8802 `__: MAINT: Suppress dd_real unused function compiler warnings. * `#8803 `__: ENH: Add full_output support to optimize.newton() * `#8804 `__: MAINT: stats cleanup * `#8808 `__: DOC: add note about isinstance for frozen rvs * `#8812 `__: Updated numpydoc submodule * `#8813 `__: MAINT: stats: Fix multinomial docstrings, and do some clean up. * `#8816 `__: BUG: fixed _stats of t-distribution in scipy.stats * `#8817 `__: BUG: ndimage: Fix validation of the origin argument in correlate... * `#8822 `__: BUG: integrate: Fix crash with repeated t values in odeint. 
* `#8832 `__: Hyperlink DOIs against preferred resolver * `#8837 `__: BUG: sparse: Ensure correct dtype for sparse comparison operations. * `#8839 `__: DOC: stats: A few tweaks to the linregress docstring. * `#8846 `__: BUG: stats: Fix logpdf method of invwishart. * `#8849 `__: DOC: signal: Fixed mistake in the firwin docstring. * `#8854 `__: DOC: fix type descriptors in ltisys documentation * `#8865 `__: Fix tiny typo in docs for chi2 pdf * `#8870 `__: Fixes related to invertibility of STFT * `#8872 `__: ENH: special: Add the softmax function * `#8874 `__: DOC correct gamma function in docstrings in scipy.stats * `#8876 `__: ENH: Added TOMS Algorithm 748 as 1-d root finder; 17 test function... * `#8882 `__: ENH: Only use Halley's adjustment to Newton if close enough. * `#8883 `__: FIX: optimize: make jac and hess truly optional for 'trust-constr' * `#8885 `__: TST: Do not error on warnings raised about non-tuple indexing. * `#8887 `__: MAINT: filter out np.matrix PendingDeprecationWarning's in numpy... * `#8889 `__: DOC: optimize: separate legacy interfaces from new ones * `#8890 `__: ENH: Add optimize.root_scalar() as a universal dispatcher for... * `#8899 `__: DCT-IV, DST-IV and DCT-I, DST-I orthonormalization support in... * `#8901 `__: MAINT: Reorganize flapack.pyf.src file * `#8907 `__: BUG: ENH: Check if guess for newton is already zero before checking... * `#8908 `__: ENH: Make sorting optional for cKDTree.query_ball_point() * `#8910 `__: DOC: sparse.csgraph simple examples. * `#8914 `__: DOC: interpolate: fix equivalences of string aliases * `#8918 `__: add float_control(precise, on) to _fpumode.c * `#8919 `__: MAINT: interpolate: improve error messages for common `bc_type`... * `#8920 `__: DOC: update Contributing to SciPy to say "prefer no PEP8 only... 
* `#8924 `__: MAINT: special: deprecate `hyp2f0`, `hyp1f2`, and `hyp3f0` * `#8927 `__: MAINT: special: remove `errprint` * `#8932 `__: Fix broadcasting scale arg of entropy * `#8936 `__: Fix (some) non-tuple index warnings * `#8937 `__: ENH: implement sparse matrix BSR to CSR conversion directly. * `#8938 `__: DOC: add @_ni_docstrings.docfiller in ndimage.rotate * `#8940 `__: Update _discrete_distns.py * `#8943 `__: DOC: Finish dangling sentence in `convolve` docstring * `#8944 `__: MAINT: Address tuple indexing and warnings * `#8945 `__: ENH: spatial.transform.Rotation [GSOC2018] * `#8950 `__: csgraph Dijkstra function description rewording * `#8953 `__: DOC, MAINT: HTTP -> HTTPS, and other linkrot fixes * `#8955 `__: BUG: np.int64 in scipy.fftpack.next_fast_len * `#8958 `__: MAINT: Add more descriptive error message for phase one simplex. * `#8962 `__: BUG: sparse.linalg: add missing conjugate to _ScaledLinearOperator.adjoint * `#8963 `__: BUG: sparse.linalg: downgrade LinearOperator TypeError to warning * `#8965 `__: ENH: Wrapped RFP format and RZ decomposition routines * `#8969 `__: MAINT: doc and code fixes for optimize.newton * `#8970 `__: Added 'average' keyword for welch/csd to enable median averaging * `#8971 `__: Better imresize deprecation warning * `#8972 `__: MAINT: Switch np.where(c) for np.nonzero(c) * `#8975 `__: MAINT: Fix warning-based failures * `#8979 `__: DOC: fix description of count_sort keyword of dendrogram * `#8982 `__: MAINT: optimize: Fixed minor mistakes in test_linprog.py (#8978) * `#8984 `__: BUG: sparse.linalg: ensure expm casts integer inputs to float * `#8986 `__: BUG: optimize/slsqp: do not exit with convergence on steps where... 
* `#8989 `__: MAINT: use collections.abc in basinhopping * `#8990 `__: ENH extend p-values of anderson_ksamp in scipy.stats * `#8991 `__: ENH: Weighted kde * `#8993 `__: ENH: spatial.transform.Rotation.random [GSOC 2018] * `#8994 `__: ENH: spatial.transform.Slerp [GSOC 2018] * `#8995 `__: TST: time.time in test * `#9007 `__: Fix typo in fftpack.rst * `#9013 `__: Added correct plotting code for two sided output from spectrogram * `#9014 `__: BUG: differential_evolution with inf objective functions * `#9017 `__: BUG: fixed #8446 corner case for asformat(array|dense) * `#9018 `__: MAINT: _lib/ccallback: remove unused code * `#9021 `__: BUG: Issue with subspace_angles * `#9022 `__: DOC: Added "See Also" section to lombscargle docstring * `#9034 `__: BUG: Fix tolerance printing behavior, remove meaningless tol... * `#9035 `__: TST: improve signal.bsplines test coverage * `#9037 `__: ENH: add a new init method for k-means * `#9039 `__: DOC: Add examples to fftpack.irfft docstrings * `#9048 `__: ENH: scipy.sparse.random * `#9050 `__: BUG: scipy.io.hb_write: fails for matrices not in csc format * `#9051 `__: MAINT: Fix slow sparse.rand for k < mn/3 (#9036). * `#9054 `__: MAINT: spatial: Explicitly initialize LAPACK output parameters. * `#9055 `__: DOC: Add examples to scipy.special docstrings * `#9056 `__: ENH: Use one thread in OpenBLAS * `#9059 `__: DOC: Update README with link to Code of Conduct * `#9060 `__: BLD: remove support for the Bento build system. * `#9062 `__: DOC add sections to overview in scipy.stats * `#9066 `__: BUG: Correct "remez" error message * `#9069 `__: DOC: update linalg section of roadmap for LAPACK versions. * `#9079 `__: MAINT: add spatial.transform to refguide check; complete some... 
* `#9081 `__: MAINT: Add warnings if pivot value is close to tolerance in linprog(method='simplex') * `#9084 `__: BUG fix incorrect p-values of kurtosistest in scipy.stats * `#9095 `__: DOC: add sections to mstats overview in scipy.stats * `#9096 `__: BUG: Add test for Stackoverflow example from issue 8174. * `#9101 `__: ENH: add Siegel slopes (robust regression) to scipy.stats * `#9105 `__: allow resample_poly() to output float32 for float32 inputs. * `#9112 `__: MAINT: optimize: make trust-constr accept constraint dict (#9043) * `#9118 `__: Add doc entry to cholesky_banded * `#9120 `__: eigsh documentation parameters * `#9125 `__: interpolative: correctly reconstruct full rank matrices * `#9126 `__: MAINT: Use warnings for unexpected peak properties * `#9129 `__: BUG: Do not catch and silence KeyboardInterrupt * `#9131 `__: DOC: Correct the typo in scipy.optimize tutorial page * `#9133 `__: FIX: Avoid use of bare except * `#9134 `__: DOC: Update of 'return_eigenvectors' description * `#9137 `__: DOC: typo fixes for discrete Poisson tutorial * `#9139 `__: FIX: Doctest failure in optimize tutorial * `#9143 `__: DOC: missing sigma in Pearson r formula * `#9145 `__: MAINT: Refactor linear programming solvers * `#9149 `__: FIX: Make scipy.odr.ODR ifixx equal to its data.fix if given * `#9156 `__: DOC: special: Mention the sigmoid function in the expit docstring. * `#9160 `__: Fixed a latex delimiter error in levy() * `#9170 `__: DOC: correction / update of docstrings of distributions in scipy.stats * `#9171 `__: better description of the hierarchical clustering parameter * `#9174 `__: domain check for a < b in stats.truncnorm * `#9175 `__: DOC: Minor grammar fix * `#9176 `__: BUG: CloughTocher2DInterpolator: fix miscalculation at neighborless... * `#9177 `__: BUILD: Document the "clean" target in the doc/Makefile. 
* `#9178 `__: MAINT: make refguide-check more robust for printed numpy arrays * `#9186 `__: MAINT: Remove np.ediff1d occurence * `#9188 `__: DOC: correct typo in extending ndimage with C * `#9190 `__: ENH: Support specifying axes for fftconvolve * `#9192 `__: MAINT: optimize: fixed @pv style suggestions from #9112 * `#9200 `__: Fix make_interp_spline(..., k=0 or 1, axis<0) * `#9201 `__: BUG: sparse.linalg/gmres: use machine eps in breakdown check * `#9204 `__: MAINT: fix up stats.spearmanr and match mstats.spearmanr with... * `#9206 `__: MAINT: include benchmarks and dev files in sdist. * `#9208 `__: TST: signal: bump bsplines test tolerance for complex data * `#9210 `__: TST: mark tests as slow, fix missing random seed * `#9211 `__: ENH: add capability to specify orders in pade func * `#9217 `__: MAINT: Include ``success`` and ``nit`` in OptimizeResult returned... * `#9222 `__: ENH: interpolate: Use scipy.spatial.distance to speed-up Rbf * `#9229 `__: MNT: Fix Fourier filter double case * `#9233 `__: BUG: spatial/distance: fix pdist/cdist performance regression... * `#9234 `__: FIX: Proper suppression * `#9235 `__: BENCH: rationalize slow benchmarks + miscellaneous fixes * `#9238 `__: BENCH: limit number of parameter combinations in spatial.*KDTree... * `#9239 `__: DOC: stats: Fix LaTeX markup of a couple distribution PDFs. * `#9241 `__: ENH: Evaluate plateau size during peak finding * `#9242 `__: ENH: stats: Implement _ppf and _logpdf for crystalball, and do... * `#9246 `__: DOC: Properly render versionadded directive in HTML documentation * `#9255 `__: DOC: mention RootResults in optimization reference guide * `#9260 `__: TST: relax some tolerances so tests pass with x87 math * `#9264 `__: TST Use assert_raises "match" parameter instead of the "message"... 
* `#9267 `__: DOC: clarify expect() return val when moment is inf/nan * `#9272 `__: DOC: Add description of default bounds to linprog * `#9277 `__: MAINT: sparse/linalg: make test deterministic * `#9278 `__: MAINT: interpolate: pep8 cleanup in test_polyint * `#9279 `__: Fixed docstring for resample * `#9280 `__: removed first check for float in get_sum_dtype * `#9281 `__: BUG: only accept 1d input for bartlett / levene in scipy.stats * `#9282 `__: MAINT: dense_output and t_eval are mutually exclusive inputs * `#9283 `__: MAINT: add docs and do some cleanups in interpolate.Rbf * `#9288 `__: Run distance_transform_edt tests on all types * `#9294 `__: DOC: fix the formula typo * `#9298 `__: MAINT: optimize/trust-constr: restore .niter attribute for backward-compat * `#9299 `__: DOC: clarification of default rvs method in scipy.stats * `#9301 `__: MAINT: removed unused import sys * `#9302 `__: MAINT: removed unused imports * `#9303 `__: DOC: signal: Refer to fs instead of nyq in the firwin docstring. * `#9305 `__: ENH: Added Yeo-Johnson power transformation * `#9306 `__: ENH - add dual annealing * `#9309 `__: ENH add the yulesimon distribution to scipy.stats * `#9317 `__: Nested SLSQP bug fix. * `#9320 `__: MAINT: stats: avoid underflow in stats.geom.ppf * `#9326 `__: Add example for Rosenbrock function * `#9332 `__: Sort file lists * `#9340 `__: Fix typo in find_peaks documentation * `#9343 `__: MAINT Use np.full when possible * `#9344 `__: DOC: added examples to docstring of dirichlet class * `#9346 `__: DOC: Fix import of scipy.sparse.linalg in example (#9345) * `#9350 `__: Fix interpolate read only * `#9351 `__: MAINT: special.erf: use the x->-x symmetry * `#9356 `__: Fix documentation typo * `#9358 `__: DOC: improve doc for ksone and kstwobign in scipy.stats * `#9362 `__: DOC: Change datatypes of A matrices in linprog * `#9364 `__: MAINT: Adds implicit none to fftpack fortran sources * `#9369 `__: DOC: minor tweak to CoC (updated NumFOCUS contact address). 
* `#9373 `__: Fix exception if python is called with -OO option * `#9374 `__: FIX: AIX compilation issue with NAN and INFINITY * `#9376 `__: COBLYA -> COBYLA in docs * `#9377 `__: DOC: Add examples integrate: fixed_quad and quadrature * `#9379 `__: MAINT: TST: Make tests NumPy 1.8 compatible * `#9385 `__: CI: On Travis matrix "OPTIMIZE=-OO" flag ignored * `#9387 `__: Fix defaut value for 'mode' in 'ndimage.shift' in the doc * `#9392 `__: BUG: rank has to be integer in rank_filter: fixed issue 9388 * `#9399 `__: DOC: Misc. typos * `#9400 `__: TST: stats: Fix the expected r-value of a linregress test. * `#9405 `__: BUG: np.hstack does not accept generator expressions * `#9408 `__: ENH: linalg: Shorter ill-conditioned warning message * `#9418 `__: DOC: Fix ndimage docstrings and reduce doc build warnings * `#9421 `__: DOC: Add missing docstring examples in scipy.spatial * `#9422 `__: DOC: Add an example to integrate.newton_cotes * `#9427 `__: BUG: Fixed defect with maxiter #9419 in dual annealing * `#9431 `__: BENCH: Add dual annealing to scipy benchmark (see #9415) * `#9435 `__: DOC: Add docstring examples for stats.binom_test * `#9443 `__: DOC: Fix the order of indices in optimize tutorial * `#9444 `__: MAINT: interpolate: use operator.index for checking/coercing... * `#9445 `__: DOC: Added missing example to stats.mstats.kruskal * `#9446 `__: DOC: Add note about version changed for jaccard distance * `#9447 `__: BLD: version-script handling in setup.py * `#9448 `__: TST: skip a problematic linalg test * `#9449 `__: TST: fix missing seed in lobpcg test. 
* `#9456 `__: TST: test_eigs_consistency() now sorts output Checksums ========= MD5 ~~~ 0bb53a49e77bca11fb26698744c60f97 scipy-1.2.0-cp27-cp27m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl 39e215ac7e8d6de33d939486987dcba4 scipy-1.2.0-cp27-cp27m-manylinux1_i686.whl d6e33b2c05ffbcf9790628d656c8e61f scipy-1.2.0-cp27-cp27m-manylinux1_x86_64.whl a463a12d77b87df0a2d202323771a908 scipy-1.2.0-cp27-cp27m-win32.whl 3dc17a11c7dd211ce51a338cfe30eb48 scipy-1.2.0-cp27-cp27m-win_amd64.whl 82b1ecbecadfeddd2be1c6d616491029 scipy-1.2.0-cp27-cp27mu-manylinux1_i686.whl 65021ade783f1416b1920d2a2cc39d4d scipy-1.2.0-cp27-cp27mu-manylinux1_x86_64.whl 7cc2cdbc9b421ef10695b898cdc241e7 scipy-1.2.0-cp34-cp34m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl 2a4ccbfcccb9395fa7554a82db40e454 scipy-1.2.0-cp34-cp34m-manylinux1_i686.whl 662dc35acd6f588565cd6467465fc742 scipy-1.2.0-cp34-cp34m-manylinux1_x86_64.whl a19bd9969bb5b92595e82d924b8272f9 scipy-1.2.0-cp34-cp34m-win32.whl 5f1eaa3745956db0da724f83dd174559 scipy-1.2.0-cp34-cp34m-win_amd64.whl d32a4c31d0a188f3550c1306d20b03c7 scipy-1.2.0-cp35-cp35m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl 34f4a7f04abfda87fac8ab38a9a70a77 scipy-1.2.0-cp35-cp35m-manylinux1_i686.whl bd4d56910802870072e3c5ded69a8f08 scipy-1.2.0-cp35-cp35m-manylinux1_x86_64.whl cb3fb7ddd3992928f9173e4eb489d23e scipy-1.2.0-cp35-cp35m-win32.whl 68f5ddcb6e592b1d9cba95f24faee7b5 scipy-1.2.0-cp35-cp35m-win_amd64.whl c0e110f3a935731782c96a13cc264ea2 scipy-1.2.0-cp36-cp36m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl 9d898924498abbe2d26dd18b3413fb11 scipy-1.2.0-cp36-cp36m-manylinux1_i686.whl dd9ae664cbe7de54828d83c772e24da3 scipy-1.2.0-cp36-cp36m-manylinux1_x86_64.whl 8dea8432610bb3c63114eb6469c5f99a scipy-1.2.0-cp36-cp36m-win32.whl ebda830aec7b60193772741f85fee28c 
scipy-1.2.0-cp36-cp36m-win_amd64.whl 0e8b7a7908c50e635e639d2c69901140 scipy-1.2.0-cp37-cp37m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl 56719437a821f9f2f98f069225e70c87 scipy-1.2.0-cp37-cp37m-manylinux1_i686.whl 93d9d978855516ec38fa08620ef3443c scipy-1.2.0-cp37-cp37m-manylinux1_x86_64.whl fff3d66b877b6e6b9984b84ae9e4d76c scipy-1.2.0-cp37-cp37m-win32.whl 3defb2c8b2f69057919ee3b0c92de65c scipy-1.2.0-cp37-cp37m-win_amd64.whl e57011507865b0b702aff6077d412e03 scipy-1.2.0.tar.gz 8eb6c1d7ceae0d06aef474f7801b8fca scipy-1.2.0.tar.xz b0fb16b09319d3031d27ccf21a3ef474 scipy-1.2.0.zip SHA256 ~~~~~~ d1ae77b79fd5e27a10ba7c4e7c3a62927b9d29a4dccf28f6905c25d983aaf183 scipy-1.2.0-cp27-cp27m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl 4b1f0883cb9d8ee963cf0a31c87341e9e758abb2cf1e5bcc0d7b066ef6b17573 scipy-1.2.0-cp27-cp27m-manylinux1_i686.whl c5eae911cf26b3c7eda889ec98d3c226f312c587acfaaf02602473f98b4c16d6 scipy-1.2.0-cp27-cp27m-manylinux1_x86_64.whl 58f0435f052cb60f1472c77f52a8f6642f8877b70559e5e0b9a1744f33f5cbe5 scipy-1.2.0-cp27-cp27m-win32.whl 4cce25c6e7ff7399c67dfe1b5423c36c391cf9b4b2be390c1675ab410f1ef503 scipy-1.2.0-cp27-cp27m-win_amd64.whl 02cb79ea38114dc480e9b08d6b87095728e8fb39b9a49b449ee443d678001611 scipy-1.2.0-cp27-cp27mu-manylinux1_i686.whl 7dc4002f0a0a688774ef04878afe769ecd1ac21fe9b4b1d7125e2cf16170afd3 scipy-1.2.0-cp27-cp27mu-manylinux1_x86_64.whl 7994c044bf659b0a24ad7673ec7db85c2fadb87e4980a379a9fd5b086fe3649a scipy-1.2.0-cp34-cp34m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl 72bd766f753fd32f072d30d7bc2ad492d56dbcbf3e13764e27635d5c41079339 scipy-1.2.0-cp34-cp34m-manylinux1_i686.whl 3132a9fab3f3545c8b0ba15688d11857efdd4a58d388d3785dc474f56fea7138 scipy-1.2.0-cp34-cp34m-manylinux1_x86_64.whl 7413080b381766a22d52814edb65631f0e323a7cea945c70021a672f38acc73f scipy-1.2.0-cp34-cp34m-win32.whl 
6f791987899532305126309578727c0197bddbafab9596bafe3e7bfab6e1ce13 scipy-1.2.0-cp34-cp34m-win_amd64.whl 937147086e8b4338bf139ca8fa98da650e3a46bf2ca617f8e9dd68c3971ec420 scipy-1.2.0-cp35-cp35m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl 63e1d5ca9e5e1984f1a275276991b036e25d39d37dd7edbb3f4f6165c2da7dbb scipy-1.2.0-cp35-cp35m-manylinux1_i686.whl 03c827cdbc584e935264040b958e5fa0570a16095bee23a013482ba3f0e963a2 scipy-1.2.0-cp35-cp35m-manylinux1_x86_64.whl bc294841f6c822714af362095b181a853f47ed5ce757354bd2e4776d579ff3a4 scipy-1.2.0-cp35-cp35m-win32.whl cca33a01a5987c650b87a1a910aa311ffa44e67cca1ff502ef9efdae5d9a8624 scipy-1.2.0-cp35-cp35m-win_amd64.whl 8608316d0cc01f8b25111c8adfe6efbc959cbba037a62c784551568d7ffbf280 scipy-1.2.0-cp36-cp36m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl fb36064047e6bf87b6320a9b6eb7f525ef6863c7a4aef1a84a4bbfb043612617 scipy-1.2.0-cp36-cp36m-manylinux1_i686.whl 854bd87cc23824d5db4983956bc30f3790e1c7448f1a9e6a8fb7bff7601aef87 scipy-1.2.0-cp36-cp36m-manylinux1_x86_64.whl fc1a19d95649439dbd50baca676bceb29bbfcd600aed2c5bd71d9bad82a87cfe scipy-1.2.0-cp36-cp36m-win32.whl 8f5fcc87b48fc3dd6d901669c89af4feeb856dffb6f671539a238b7e8af1799c scipy-1.2.0-cp36-cp36m-win_amd64.whl bc6a88b0009a1b60eab5c22ac3a006f6968d6328de10c6a64ebb0d64a05548d3 scipy-1.2.0-cp37-cp37m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl 64b2c35824da3ef6bb1e722216e4ef28802af6413c7586136500e343d34ba179 scipy-1.2.0-cp37-cp37m-manylinux1_i686.whl 78a67ee4845440e81cfbfabde20537ca12051d0eeac951fe4c6d8751feac3103 scipy-1.2.0-cp37-cp37m-manylinux1_x86_64.whl 04f2b23258139c109d0524f111597dd095a505d9cb2c71e381d688d653877fa3 scipy-1.2.0-cp37-cp37m-win32.whl 5706b785ca289fdfd91aa05066619e51d140613b613e35932601f2315f5d8470 scipy-1.2.0-cp37-cp37m-win_amd64.whl 51a2424c8ed80e60bdb9a896806e7adaf24a58253b326fbad10f80a6d06f2214 scipy-1.2.0.tar.gz 
1cfafc64e1c1c0b6d040d35780ba06f01e9cb44f3bacd8f33a5df6efe5a144a6 scipy-1.2.0.tar.xz 5b3d4b1654b4723431ad7f47b4556498a86a53d2fdb86e85a45c3da0c2a72a4b scipy-1.2.0.zip -----BEGIN PGP SIGNATURE----- iQIzBAEBCAAdFiEEgfAqrOuERrV4fwr6XCLdIy4FKasFAlwYY1YACgkQXCLdIy4F KauihhAAo0fk+a4U4ilFXeZrUTuZTeqqHtD60yV8Z2pZvqQyH1BBP8BKksV/v3bP 6ckl+upWaolMjZgehWhAKyGgE2oUHDu6RxaRS97Kbbns580M+wWduPX7kjf7OKoX KFX1a8/GlBWTQKIMj6P2oShT7rvYB+WABvyoTxXKbHG0/ArOtOqqAYJFKLeOYB6m o0+qajqt9syV2iqBFlHnrPKQ8qjtfPxqCP9KHhNIbHd17305YwJc58CBhpulIjaP HIhJOuP7xigELX9yCzJ2qFGBjTd2HNvBQWIRjNDfbox6mhWO4no30c+OamG3MAr7 n9TDzSIjxkRedvBzMRJwA/Q5/Mou/R16BF+ZzvCVZnp/h6LrXQg4ENfR304Byyy3 NlYzmQKlV0XvP4oYewBnLfq6hcXAum7rf3L8ene8mu0OWJumW7Yr6PWfDNECDKvX sPWSwkNu01Pzg/KUUkrS9w6m9bZTj4UP15L2Z8JSVdp/wxCHGh8txJWjOenoZBnD BPyitzsgbuW+pd7+WmjZoJpr8QjL/Uw8vpUwlvKAvvVfFnVeVh3X1awK+D6iba69 LbyfhOG3R846WkxHh458uvNxmYUbm6sqaZ7lzY91bD8z61jS6PCvpR8BJcwmpP7Y q/duWsaakRWLe35CE8KuvKCgVFhhStX2sZ/hRkNsvm2/hEAtNW0= =3Enc -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Tue Dec 18 16:29:42 2018 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 18 Dec 2018 14:29:42 -0700 Subject: [Numpy-discussion] NumPy 1.16.0rc1 release coming up. Message-ID: Hi All, I plan on releasing NumPy 1.16.0rc in a day or two. The preliminary release notes are up and any suggestions/corrections/additions are welcome. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Tue Dec 18 17:08:43 2018 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 18 Dec 2018 15:08:43 -0700 Subject: [Numpy-discussion] [SciPy-Dev] ANN: SciPy 1.2.0 In-Reply-To: References: Message-ID: Congratulations. 
May there be many more :) Chuck On Tue, Dec 18, 2018 at 9:57 AM Tyler Reddy wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA256 > > Hi all, > > On behalf of the SciPy development team I'm pleased to announce > the release of SciPy 1.2.0. This is an LTS release and the > last to support Python 2.7. > > Sources and binary wheels can be found at: > https://pypi.org/project/scipy/ > and at: > https://github.com/scipy/scipy/releases/tag/v1.2.0 > > One of a few ways to install this release with pip: > > pip install scipy==1.2.0 > > ========================== > SciPy 1.2.0 Release Notes > ========================== > > SciPy 1.2.0 is the culmination of 6 months of hard work. It contains > many new features, numerous bug-fixes, improved test coverage and better > documentation. There have been a number of deprecations and API changes > in this release, which are documented below. All users are encouraged to > upgrade to this release, as there are a large number of bug-fixes and > optimizations. Before upgrading, we recommend that users check that > their own code does not use deprecated SciPy functionality (to do so, > run your code with ``python -Wd`` and check for ``DeprecationWarning`` s). > Our development attention will now shift to bug-fix releases on the > 1.2.x branch, and on adding new features on the master branch. > > This release requires Python 2.7 or 3.4+ and NumPy 1.8.2 or greater. > > Note: This will be the last SciPy release to support Python 2.7. > Consequently, the 1.2.x series will be a long term support (LTS) > release; we will backport bug fixes until 1 Jan 2020. > > For running on PyPy, PyPy3 6.0+ and NumPy 1.15.0 are required. 
> > Highlights of this release > --------------------------- > > - 1-D root finding improvements with a new solver, ``toms748``, and a new > unified interface, ``root_scalar`` > - New ``dual_annealing`` optimization method that combines stochastic and > local deterministic searching > - A new optimization algorithm, ``shgo`` (simplicial homology > global optimization) for derivative free optimization problems > - A new category of quaternion-based transformations is available in > `scipy.spatial.transform` > > New features > ============ > > `scipy.ndimage` improvements > --------------------------------- > > Proper spline coefficient calculations have been added for the ``mirror``, > ``wrap``, and ``reflect`` modes of `scipy.ndimage.rotate` > > `scipy.fftpack` improvements > --------------------------------- > > DCT-IV, DST-IV, DCT-I, and DST-I orthonormalization are now supported in > `scipy.fftpack`. > > `scipy.interpolate` improvements > --------------------------------- > > `scipy.interpolate.pade` now accepts a new argument for the order of the > numerator > > `scipy.cluster` improvements > ----------------------------- > > `scipy.cluster.vq.kmeans2` gained a new initialization method, kmeans++. > > `scipy.special` improvements > ----------------------------- > > The function ``softmax`` was added to `scipy.special`. > > `scipy.optimize` improvements > ------------------------------ > > The one-dimensional nonlinear solvers have been given a unified interface > `scipy.optimize.root_scalar`, similar to the `scipy.optimize.root` > interface > for multi-dimensional solvers. ``scipy.optimize.root_scalar(f, bracket=[a > ,b], > method="brenth")`` is equivalent to ``scipy.optimize.brenth(f, a ,b)``. > If no > ``method`` is specified, an appropriate one will be selected based upon the > bracket and the number of derivatives available.
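[Editor's note: a minimal sketch of the unified ``root_scalar`` interface described above; the example function and bracket are my own, not part of the original announcement.]

```python
from scipy import optimize

def f(x):
    return x**3 - 1.0  # single real root at x = 1

# Equivalent to optimize.brenth(f, 0.5, 2.0), but via the new dispatcher.
sol = optimize.root_scalar(f, bracket=[0.5, 2.0], method="brenth")
print(sol.root, sol.converged)
```

Omitting ``method`` lets ``root_scalar`` pick a suitable solver from the bracket and any derivatives supplied.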
> > The so-called Algorithm 748 of Alefeld, Potra and Shi for root-finding > within > an enclosing interval has been added as `scipy.optimize.toms748`. This > provides > guaranteed convergence to a root with convergence rate per function > evaluation > of approximately 1.65 (for sufficiently well-behaved functions.) > > ``differential_evolution`` now has the ``updating`` and ``workers`` > keywords. > The first chooses between continuous updating of the best solution vector > (the > default), or once per generation. Continuous updating can lead to faster > convergence. The ``workers`` keyword accepts an ``int`` or map-like > callable, > and parallelises the solver (having the side effect of updating once per > generation). Supplying an ``int`` evaluates the trial solutions in N > parallel > parts. Supplying a map-like callable allows other parallelisation > approaches > (such as ``mpi4py``, or ``joblib``) to be used. > > ``dual_annealing`` (and ``shgo`` below) is a powerful new general purpose > global optimization (GO) algorithm. ``dual_annealing`` uses two annealing > processes to accelerate the convergence towards the global minimum of an > objective mathematical function. The first annealing process controls the > stochastic Markov chain searching and the second annealing process > controls the > deterministic minimization. So, dual annealing is a hybrid method that > takes > advantage of stochastic and local deterministic searching in an efficient > way. > > ``shgo`` (simplicial homology global optimization) is a similar algorithm > appropriate for solving black box and derivative free optimization (DFO) > problems. The algorithm generally converges to the global solution in > finite > time. The convergence holds for non-linear inequality and > equality constraints. In addition to returning a global minimum, the > algorithm also returns any other global and local minima found after every > iteration. This makes it useful for exploring the solutions in a domain.
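[Editor's note: a small illustration of ``dual_annealing`` on the 2-D Rastrigin function, a standard multimodal test problem chosen for this sketch; it is not from the original announcement. The solver is stochastic, so intermediate iterates vary between runs.]

```python
import numpy as np
from scipy.optimize import dual_annealing

# Rastrigin function: many local minima, global minimum of 0 at the origin.
def rastrigin(x):
    return 10 * len(x) + np.sum(x * x - 10 * np.cos(2 * np.pi * x))

bounds = [(-5.12, 5.12)] * 2
result = dual_annealing(rastrigin, bounds)
print(result.x, result.fun)  # near the origin, objective near 0
```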
> > `scipy.optimize.newton` can now accept a scalar or an array > > ``MINPACK`` usage is now thread-safe, such that ``MINPACK`` + callbacks may > be used on multiple threads. > > `scipy.signal` improvements > ---------------------------- > > Digital filter design functions now include a parameter to specify the > sampling > rate. Previously, digital filters could only be specified using normalized > frequency, but different functions used different scales (e.g. 0 to 1 for > ``butter`` vs 0 to π for ``freqz``), leading to errors and confusion. With > the ``fs`` parameter, ordinary frequencies can now be entered directly into > functions, with the normalization handled internally. > > ``find_peaks`` and related functions no longer raise an exception if the > properties of a peak have unexpected values (e.g. a prominence of 0). A > ``PeakPropertyWarning`` is given instead. > > The new keyword argument ``plateau_size`` was added to ``find_peaks``. > ``plateau_size`` may be used to select peaks based on the length of the > flat top of a peak. > > ``welch()`` and ``csd()`` methods in `scipy.signal` now support calculation > of a median average PSD, using ``average='median'`` keyword > > `scipy.sparse` improvements > ---------------------------- > > The `scipy.sparse.bsr_matrix.tocsr` method is now implemented directly > instead > of converting via COO format, and the `scipy.sparse.bsr_matrix.tocsc` > method > is now also routed via CSR conversion instead of COO. The efficiency of > both > conversions is now improved. > > The issue where SuperLU or UMFPACK solvers crashed on matrices with > non-canonical format in `scipy.sparse.linalg` was fixed. The solver wrapper > canonicalizes the matrix if necessary before calling the SuperLU or UMFPACK > solver. > > The ``largest`` option of `scipy.sparse.linalg.lobpcg()` was fixed to have > a correct (and expected) behavior. The order of the eigenvalues was made > consistent with the ARPACK solver (``eigs()``), i.e.
ascending for the > smallest eigenvalues, and descending for the largest eigenvalues. > > The `scipy.sparse.random` function is now faster and also supports integer > and > complex values by passing the appropriate value to the ``dtype`` argument. > > `scipy.spatial` improvements > ----------------------------- > > The function `scipy.spatial.distance.jaccard` was modified to return 0 > instead > of ``np.nan`` when two all-zero vectors are compared. > > Support for the Jensen-Shannon distance, the square root of the > Jensen-Shannon divergence, has > been added under `scipy.spatial.distance.jensenshannon`. > > An optional keyword was added to the function > `scipy.spatial.cKDTree.query_ball_point()` to sort or not sort the returned > indices. Not sorting the indices can speed up calls. > > A new category of quaternion-based transformations is available in > `scipy.spatial.transform`, including spherical linear interpolation of > rotations (``Slerp``), conversions to and from quaternions, Euler angles, > and general rotation and inversion capabilities > (`spatial.transform.Rotation`), and uniform random sampling of 3D > rotations (`spatial.transform.Rotation.random`). > > `scipy.stats` improvements > --------------------------- > > The Yeo-Johnson power transformation is now supported (``yeojohnson``, > ``yeojohnson_llf``, ``yeojohnson_normmax``, ``yeojohnson_normplot``). > Unlike > the Box-Cox transformation, the Yeo-Johnson transformation can accept > negative > values. > > Added a general method to sample random variates based on the density > only, in > the new function ``rvs_ratio_uniforms``. > > The Yule-Simon distribution (``yulesimon``) was added -- this is a new > discrete probability distribution.
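A short illustration of the two `scipy.stats` additions just mentioned (not from the release notes; the sample data are placeholders, and SciPy >= 1.2 is assumed):

```python
import numpy as np
from scipy import stats

# Yeo-Johnson handles negative inputs, which Box-Cox cannot.  With
# lmbda=None the optimal lambda is estimated and returned as well.
data = np.array([-2.0, -1.0, 0.0, 1.0, 3.0, 7.0])
transformed, lmbda = stats.yeojohnson(data)

# yulesimon is a discrete distribution with shape parameter alpha;
# its pmf is alpha * B(k, alpha + 1), so at k=1, alpha=2 it equals
# 2 * B(1, 3) = 2/3.
p = stats.yulesimon.pmf(1, 2.0)
```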
> > ``stats`` and ``mstats`` now have access to a new regression method, > ``siegelslopes``, a robust linear regression algorithm. > > `scipy.stats.gaussian_kde` now has the ability to deal with weighted > samples, > and should have a modest improvement in performance. > > Levy Stable Parameter Estimation, PDF, and CDF calculations are now > supported > for `scipy.stats.levy_stable`. > > The Brunner-Munzel test is now available as ``brunnermunzel`` in ``stats`` > and ``mstats``. > > `scipy.linalg` improvements > --------------------------- > > `scipy.linalg.lapack` now exposes the LAPACK routines using the Rectangular > Full Packed storage (RFP) for upper triangular, lower triangular, > symmetric, > or Hermitian matrices; the upper trapezoidal fat matrix RZ decomposition > routines are now available as well. > > Deprecated features > =================== > The functions ``hyp2f0``, ``hyp1f2`` and ``hyp3f0`` in ``scipy.special`` > have > been deprecated. > > > Backwards incompatible changes > ============================== > > LAPACK version 3.4.0 or later is now required. Building with > Apple Accelerate is no longer supported. > > The function ``scipy.linalg.subspace_angles(A, B)`` now gives correct > results for all angles. Before this, the function only returned > correct values for those angles which were greater than pi/4. > > Support for the Bento build system has been removed. Bento has not been > maintained for several years, and did not have good Python 3 or wheel > support, > hence it was time to remove it. > > The required signature of the `scipy.optimize.linprog` ``method=simplex`` > callback function has changed. Before iteration begins, the simplex solver > first converts the problem into a standard form that does not, in general, > have the same variables or constraints > as the problem defined by the user.
Previously, the simplex solver would > pass a > user-specified callback function several separate arguments, such as the > current solution vector ``xk``, corresponding to this standard form > problem. > Unfortunately, the relationship between the standard form problem and the > user-defined problem was not documented, limiting the utility of the > information passed to the callback function. > > In addition to numerous bug fix changes, the simplex solver now passes a > user-specified callback function a single ``OptimizeResult`` object > containing > information that corresponds directly to the user-defined problem. In > future > releases, this ``OptimizeResult`` object may be expanded to include > additional > information, such as variables corresponding to the standard-form problem > and > information concerning the relationship between the standard-form and > user-defined problems. > > The implementation of `scipy.sparse.random` has changed, and this affects > the > numerical values returned for both ``sparse.random`` and ``sparse.rand`` > for > some matrix shapes and a given seed. > > `scipy.optimize.newton` will no longer use Halley's method in cases where > it > negatively impacts convergence. > > Other changes > ============= > > > Authors > ======= > > * @endolith > * @luzpaz > * Hameer Abbasi + > * akahard2dj + > * Anton Akhmerov > * Joseph Albert > * alexthomas93 + > * ashish + > * atpage + > * Blair Azzopardi + > * Yoshiki Vázquez Baeza > * Bence Bagi + > * Christoph Baumgarten > * Lucas Bellomo + > * BH4 + > * Aditya Bharti > * Max Bolingbroke > * François Boulogne > * Ward Bradt + > * Matthew Brett > * Evgeni Burovski > * Rafał
Byczek + > * Alfredo Canziani + > * CJ Carey > * Lucía Cheung + > * Poom Chiarawongse + > * Jeanne Choo + > * Robert Cimrman > * Graham Clenaghan + > * cynthia-rempel + > * Johannes Damp + > * Jaime Fernandez del Rio > * Dowon + > * emmi474 + > * Stefan Endres + > * Thomas Etherington + > * Piotr Figiel > * Alex Fikl + > * fo40225 + > * Joseph Fox-Rabinovitz > * Lars G > * Abhinav Gautam + > * Stiaan Gerber + > * C.A.M. Gerlach + > * Ralf Gommers > * Todd Goodall > * Lars Grueter + > * Sylvain Gubian + > * Matt Haberland > * David Hagen > * Will Handley + > * Charles Harris > * Ian Henriksen > * Thomas Hisch + > * Theodore Hu > * Michael Hudson-Doyle + > * Nicolas Hug + > * jakirkham + > * Jakob Jakobson + > * James + > * Jan Schlüter > * jeanpauphilet + > * josephmernst + > * Kai + > * Kai-Striega + > * kalash04 + > * Toshiki Kataoka + > * Konrad0 + > * Tom Krauss + > * Johannes Kulick > * Lars Grüter + > * Eric Larson > * Denis Laxalde > * Will Lee + > * Katrin Leinweber + > * Yin Li + > * P. L. Lim + > * Jesse Livezey + > * Duncan Macleod + > * MatthewFlamm + > * Nikolay Mayorov > * Mike McClurg + > * Christian Meyer + > * Mark Mikofski > * Naoto Mizuno + > * mohmmadd + > * Nathan Musoke > * Anju Geetha Nair + > * Andrew Nelson > * Ayappan P + > * Nick Papior > * Haesun Park + > * Ronny Pfannschmidt + > * pijyoi + > * Ilhan Polat > * Anthony Polloreno + > * Ted Pudlik > * puenka > * Eric Quintero > * Pradeep Reddy Raamana + > * Vyas Ramasubramani + > * Ramon Viñas + > * Tyler Reddy > * Joscha Reimer > * Antonio H Ribeiro > * richardjgowers + > * Rob + > * robbystk + > * Lucas Roberts + > * rohan + > * Joaquin Derrac Rus + > * Josua Sassen + > * Bruce Sharpe + > * Max Shinn + > * Scott Sievert > * Sourav Singh > * Strahinja Lukić
+ > * Kai Striega + > * Shinya SUZUKI + > * Mike Toews + > * Piotr Uchwat > * Miguel de Val-Borro + > * Nicky van Foreest > * Paul van Mulbregt > * Gael Varoquaux > * Pauli Virtanen > * Stefan van der Walt > * Warren Weckesser > * Joshua Wharton + > * Bernhard M. Wiedemann + > * Eric Wieser > * Josh Wilson > * Tony Xiang + > * Roman Yurchak + > * Roy Zywina + > > A total of 137 people contributed to this release. > People with a "+" by their names contributed a patch for the first time. > This list of names is automatically generated, and may not be fully > complete. > > Issues closed for 1.2.0 > ------------------------ > > * `#9520 `__: > signal.correlate with method='fft' doesn't benefit from long... > * `#9547 `__: signature of > dual_annealing doesn't match other optimizers > * `#9540 `__: SciPy v1.2.0rc1 > cannot be imported on Python 2.7.15 > * `#1240 `__: Allowing > multithreaded use of minpack through scipy.optimize... > * `#1432 `__: > scipy.stats.mode extremely slow (Trac #905) > * `#3372 `__: Please add > Sphinx search field to online scipy html docs > * `#3678 `__: > _clough_tocher_2d_single direction between centroids > * `#4174 `__: lobpcg > "largest" option invalid? > * `#5493 `__: anderson_ksamp > p-values>1 > * `#5743 `__: slsqp fails to > detect infeasible problem > * `#6139 `__: > scipy.optimize.linprog failed to find a feasible starting point... > * `#6358 `__: stats: > docstring for `vonmises_line` points to `vonmises_line`... > * `#6498 `__: runtests.py is > missing in pypi distfile > * `#7426 `__: > scipy.stats.ksone(n).pdf(x) returns nan for positive values of... > * `#7455 `__: > scipy.stats.ksone.pdf(2,x) return incorrect values for x near... > * `#7456 `__: > scipy.special.smirnov and scipy.special.smirnovi have accuracy... > * `#7492 `__: > scipy.special.kolmogorov(x)/kolmogi(p) inefficient, inaccurate... 
> * `#7914 `__: TravisCI not > failing when it should for -OO run > * `#8064 `__: linalg.solve > test crashes on Windows > * `#8212 `__: LAPACK > Rectangular Full Packed routines > * `#8256 `__: > differential_evolution bug converges to wrong results in complex... > * `#8443 `__: Deprecate > `hyp2f0`, `hyp1f2`, and `hyp3f0`? > * `#8452 `__: DOC: ARPACK > tutorial has two conflicting equations > * `#8680 `__: scipy fails > compilation when building from source > * `#8686 `__: Division by > zero in _trustregion.py when x0 is exactly equal... > * `#8700 `__: _MINPACK_LOCK > not held when calling into minpack from least_squares > * `#8786 `__: erroneous > moment values for t-distribution > * `#8791 `__: Checking COLA > condition in istft should be optional (or omitted) > * `#8843 `__: imresize cannot > be deprecated just yet > * `#8844 `__: Inverse Wishart > Log PDF Incorrect for Non-diagonal Scale Matrix? > * `#8878 `__: vonmises and > vonmises_line in stats: vonmises wrong and superfluous? > * `#8895 `__: v1.1.0 > `ndi.rotate` documentation ? reused parameters not filled... > * `#8900 `__: Missing complex > conjugation in scipy.sparse.linalg.LinearOperator > * `#8904 `__: BUG: if zero > derivative at root, then Newton fails with RuntimeWarning > * `#8911 `__: > make_interp_spline bc_type incorrect input interpretation > * `#8942 `__: MAINT: Refactor > `_linprog.py` and `_linprog_ip.py` to remove... > * `#8947 `__: np.int64 in > scipy.fftpack.next_fast_len > * `#9020 `__: BUG: > linalg.subspace_angles gives wrong results > * `#9033 `__: > scipy.stats.normaltest sometimes gives incorrect returns b/c... > * `#9036 `__: Bizarre times > for `scipy.sparse.rand` function with 'low' density... > * `#9044 `__: > optimize.minimize(method=`trust-constr`) result dict does not... 
> * `#9071 `__: doc/linalg: add > cho_solve_banded to see also of cholesky_banded > * `#9082 `__: eigenvalue > sorting in scipy.sparse.linalg.eigsh > * `#9086 `__: > signaltools.py:491: FutureWarning: Using a non-tuple sequence... > * `#9091 `__: > test_spline_filter failure on 32-bit > * `#9122 `__: Typo on scipy > minimization tutorial > * `#9135 `__: doc error at > https://docs.scipy.org/doc/scipy/reference/tutorial/stats/discrete_poisson.html > * `#9167 `__: DOC: BUG: typo > in ndimage LowLevelCallable tutorial example > * `#9169 `__: truncnorm does > not work if b < a in scipy.stats > * `#9250 `__: > scipy.special.tests.test_mpmath::TestSystematic::test_pcfw fails... > * `#9259 `__: rv.expect() == > rv.mean() is false for rv.mean() == nan (and inf) > * `#9286 `__: DOC: Rosenbrock > expression in optimize.minimize tutorial > * `#9316 `__: SLSQP fails in > nested optimization > * `#9337 `__: > scipy.signal.find_peaks key typo in documentation > * `#9345 `__: Example from > documentation of scipy.sparse.linalg.eigs raises... > * `#9383 `__: Default value > for "mode" in "ndimage.shift" > * `#9419 `__: dual_annealing > off by one in the number of iterations > * `#9442 `__: Error in > Defintion of Rosenbrock Function > * `#9453 `__: TST: > test_eigs_consistency() doesn't have consistent results > > > Pull requests for 1.2.0 > ------------------------ > > * `#9526 `__: TST: relax > precision requirements in signal.correlate tests > * `#9507 `__: CI: MAINT: Skip a > ckdtree test on pypy > * `#9512 `__: TST: > test_random_sampling 32-bit handling > * `#9494 `__: TST: > test_kolmogorov xfail 32-bit > * `#9486 `__: BUG: fix sparse > random int handling > * `#9550 `__: BUG: > scipy/_lib/_numpy_compat: get_randint > * `#9549 `__: MAINT: make > dual_annealing signature match other optimizers > * `#9541 `__: BUG: fix > SyntaxError due to non-ascii character on Python 2.7 > * `#7352 `__: ENH: add Brunner > Munzel test to scipy.stats. 
> * `#7373 `__: BUG: Jaccard > distance for all-zero arrays would return np.nan > * `#7374 `__: ENH: Add PDF, CDF > and parameter estimation for Stable Distributions > * `#8098 `__: ENH: Add shgo for > global optimization of NLPs. > * `#8203 `__: ENH: adding > simulated dual annealing to optimize > * `#8259 `__: Option to follow > original Storn and Price algorithm and its parallelisation > * `#8293 `__: ENH add > ratio-of-uniforms method for rv generation to scipy.stats > * `#8294 `__: BUG: Fix slowness > in stats.mode > * `#8295 `__: ENH: add Jensen > Shannon distance to `scipy.spatial.distance` > * `#8357 `__: ENH: vectorize > scalar zero-search-functions > * `#8397 `__: Add `fs=` > parameter to filter design functions > * `#8537 `__: ENH: Implement > mode parameter for spline filtering. > * `#8558 `__: ENH: small > speedup for stats.gaussian_kde > * `#8560 `__: BUG: fix p-value > calc of anderson_ksamp in scipy.stats > * `#8614 `__: ENH: correct > p-values for stats.kendalltau and stats.mstats.kendalltau > * `#8670 `__: ENH: Require > Lapack 3.4.0 > * `#8683 `__: Correcting kmeans > documentation > * `#8725 `__: MAINT: Cleanup > scipy.optimize.leastsq > * `#8726 `__: BUG: Fix > _get_output in scipy.ndimage to support string > * `#8733 `__: MAINT: stats: A > bit of clean up. > * `#8737 `__: BUG: Improve > numerical precision/convergence failures of smirnov/kolmogorov > * `#8738 `__: MAINT: stats: A > bit of clean up in test_distributions.py. > * `#8740 `__: BF/ENH: make > minpack thread safe > * `#8742 `__: BUG: Fix division > by zero in trust-region optimization methods > * `#8746 `__: MAINT: signal: > Fix a docstring of a private function, and fix... > * `#8750 `__: DOC clarified > description of norminvgauss in scipy.stats > * `#8753 `__: DOC: signal: Fix > a plot title in the chirp docstring. > * `#8755 `__: DOC: MAINT: Fix > link to the wheel documentation in developer... > * `#8760 `__: BUG: stats: > boltzmann wasn't setting the upper bound. 
> * `#8763 `__: [DOC] Improved > scipy.cluster.hierarchy documentation > * `#8765 `__: DOC: added > example for scipy.stat.mstats.tmin > * `#8788 `__: DOC: fix > definition of optional `disp` parameter > * `#8802 `__: MAINT: Suppress > dd_real unused function compiler warnings. > * `#8803 `__: ENH: Add > full_output support to optimize.newton() > * `#8804 `__: MAINT: stats > cleanup > * `#8808 `__: DOC: add note > about isinstance for frozen rvs > * `#8812 `__: Updated numpydoc > submodule > * `#8813 `__: MAINT: stats: Fix > multinomial docstrings, and do some clean up. > * `#8816 `__: BUG: fixed _stats > of t-distribution in scipy.stats > * `#8817 `__: BUG: ndimage: Fix > validation of the origin argument in correlate... > * `#8822 `__: BUG: integrate: > Fix crash with repeated t values in odeint. > * `#8832 `__: Hyperlink DOIs > against preferred resolver > * `#8837 `__: BUG: sparse: > Ensure correct dtype for sparse comparison operations. > * `#8839 `__: DOC: stats: A few > tweaks to the linregress docstring. > * `#8846 `__: BUG: stats: Fix > logpdf method of invwishart. > * `#8849 `__: DOC: signal: > Fixed mistake in the firwin docstring. > * `#8854 `__: DOC: fix type > descriptors in ltisys documentation > * `#8865 `__: Fix tiny typo in > docs for chi2 pdf > * `#8870 `__: Fixes related to > invertibility of STFT > * `#8872 `__: ENH: special: Add > the softmax function > * `#8874 `__: DOC correct gamma > function in docstrings in scipy.stats > * `#8876 `__: ENH: Added TOMS > Algorithm 748 as 1-d root finder; 17 test function... > * `#8882 `__: ENH: Only use > Halley's adjustment to Newton if close enough. > * `#8883 `__: FIX: optimize: > make jac and hess truly optional for 'trust-constr' > * `#8885 `__: TST: Do not error > on warnings raised about non-tuple indexing. > * `#8887 `__: MAINT: filter out > np.matrix PendingDeprecationWarning's in numpy... 
> * `#8889 `__: DOC: optimize: > separate legacy interfaces from new ones > * `#8890 `__: ENH: Add > optimize.root_scalar() as a universal dispatcher for... > * `#8899 `__: DCT-IV, DST-IV > and DCT-I, DST-I orthonormalization support in... > * `#8901 `__: MAINT: Reorganize > flapack.pyf.src file > * `#8907 `__: BUG: ENH: Check > if guess for newton is already zero before checking... > * `#8908 `__: ENH: Make sorting > optional for cKDTree.query_ball_point() > * `#8910 `__: DOC: > sparse.csgraph simple examples. > * `#8914 `__: DOC: interpolate: > fix equivalences of string aliases > * `#8918 `__: add > float_control(precise, on) to _fpumode.c > * `#8919 `__: MAINT: > interpolate: improve error messages for common `bc_type`... > * `#8920 `__: DOC: update > Contributing to SciPy to say "prefer no PEP8 only... > * `#8924 `__: MAINT: special: > deprecate `hyp2f0`, `hyp1f2`, and `hyp3f0` > * `#8927 `__: MAINT: special: > remove `errprint` > * `#8932 `__: Fix broadcasting > scale arg of entropy > * `#8936 `__: Fix (some) > non-tuple index warnings > * `#8937 `__: ENH: implement > sparse matrix BSR to CSR conversion directly. > * `#8938 `__: DOC: add > @_ni_docstrings.docfiller in ndimage.rotate > * `#8940 `__: Update > _discrete_distns.py > * `#8943 `__: DOC: Finish > dangling sentence in `convolve` docstring > * `#8944 `__: MAINT: Address > tuple indexing and warnings > * `#8945 `__: ENH: > spatial.transform.Rotation [GSOC2018] > * `#8950 `__: csgraph Dijkstra > function description rewording > * `#8953 `__: DOC, MAINT: HTTP > -> HTTPS, and other linkrot fixes > * `#8955 `__: BUG: np.int64 in > scipy.fftpack.next_fast_len > * `#8958 `__: MAINT: Add more > descriptive error message for phase one simplex. 
> * `#8962 `__: BUG: > sparse.linalg: add missing conjugate to _ScaledLinearOperator.adjoint > * `#8963 `__: BUG: > sparse.linalg: downgrade LinearOperator TypeError to warning > * `#8965 `__: ENH: Wrapped RFP > format and RZ decomposition routines > * `#8969 `__: MAINT: doc and > code fixes for optimize.newton > * `#8970 `__: Added 'average' > keyword for welch/csd to enable median averaging > * `#8971 `__: Better imresize > deprecation warning > * `#8972 `__: MAINT: Switch > np.where(c) for np.nonzero(c) > * `#8975 `__: MAINT: Fix > warning-based failures > * `#8979 `__: DOC: fix > description of count_sort keyword of dendrogram > * `#8982 `__: MAINT: optimize: > Fixed minor mistakes in test_linprog.py (#8978) > * `#8984 `__: BUG: > sparse.linalg: ensure expm casts integer inputs to float > * `#8986 `__: BUG: > optimize/slsqp: do not exit with convergence on steps where... > * `#8989 `__: MAINT: use > collections.abc in basinhopping > * `#8990 `__: ENH extend > p-values of anderson_ksamp in scipy.stats > * `#8991 `__: ENH: Weighted kde > * `#8993 `__: ENH: > spatial.transform.Rotation.random [GSOC 2018] > * `#8994 `__: ENH: > spatial.transform.Slerp [GSOC 2018] > * `#8995 `__: TST: time.time in > test > * `#9007 `__: Fix typo in > fftpack.rst > * `#9013 `__: Added correct > plotting code for two sided output from spectrogram > * `#9014 `__: BUG: > differential_evolution with inf objective functions > * `#9017 `__: BUG: fixed #8446 > corner case for asformat(array|dense) > * `#9018 `__: MAINT: > _lib/ccallback: remove unused code > * `#9021 `__: BUG: Issue with > subspace_angles > * `#9022 `__: DOC: Added "See > Also" section to lombscargle docstring > * `#9034 `__: BUG: Fix > tolerance printing behavior, remove meaningless tol... 
> * `#9035 `__: TST: improve > signal.bsplines test coverage > * `#9037 `__: ENH: add a new > init method for k-means > * `#9039 `__: DOC: Add examples > to fftpack.irfft docstrings > * `#9048 `__: ENH: > scipy.sparse.random > * `#9050 `__: BUG: > scipy.io.hb_write: fails for matrices not in csc format > * `#9051 `__: MAINT: Fix slow > sparse.rand for k < mn/3 (#9036). > * `#9054 `__: MAINT: spatial: > Explicitly initialize LAPACK output parameters. > * `#9055 `__: DOC: Add examples > to scipy.special docstrings > * `#9056 `__: ENH: Use one > thread in OpenBLAS > * `#9059 `__: DOC: Update > README with link to Code of Conduct > * `#9060 `__: BLD: remove > support for the Bento build system. > * `#9062 `__: DOC add sections > to overview in scipy.stats > * `#9066 `__: BUG: Correct > "remez" error message > * `#9069 `__: DOC: update > linalg section of roadmap for LAPACK versions. > * `#9079 `__: MAINT: add > spatial.transform to refguide check; complete some... > * `#9081 `__: MAINT: Add > warnings if pivot value is close to tolerance in linprog(method='simplex') > * `#9084 `__: BUG fix incorrect > p-values of kurtosistest in scipy.stats > * `#9095 `__: DOC: add sections > to mstats overview in scipy.stats > * `#9096 `__: BUG: Add test for > Stackoverflow example from issue 8174. > * `#9101 `__: ENH: add Siegel > slopes (robust regression) to scipy.stats > * `#9105 `__: allow > resample_poly() to output float32 for float32 inputs. 
> * `#9112 `__: MAINT: optimize: > make trust-constr accept constraint dict (#9043) > * `#9118 `__: Add doc entry to > cholesky_banded > * `#9120 `__: eigsh > documentation parameters > * `#9125 `__: interpolative: > correctly reconstruct full rank matrices > * `#9126 `__: MAINT: Use > warnings for unexpected peak properties > * `#9129 `__: BUG: Do not catch > and silence KeyboardInterrupt > * `#9131 `__: DOC: Correct the > typo in scipy.optimize tutorial page > * `#9133 `__: FIX: Avoid use of > bare except > * `#9134 `__: DOC: Update of > 'return_eigenvectors' description > * `#9137 `__: DOC: typo fixes > for discrete Poisson tutorial > * `#9139 `__: FIX: Doctest > failure in optimize tutorial > * `#9143 `__: DOC: missing > sigma in Pearson r formula > * `#9145 `__: MAINT: Refactor > linear programming solvers > * `#9149 `__: FIX: Make > scipy.odr.ODR ifixx equal to its data.fix if given > * `#9156 `__: DOC: special: > Mention the sigmoid function in the expit docstring. > * `#9160 `__: Fixed a latex > delimiter error in levy() > * `#9170 `__: DOC: correction / > update of docstrings of distributions in scipy.stats > * `#9171 `__: better > description of the hierarchical clustering parameter > * `#9174 `__: domain check for > a < b in stats.truncnorm > * `#9175 `__: DOC: Minor > grammar fix > * `#9176 `__: BUG: > CloughTocher2DInterpolator: fix miscalculation at neighborless... > * `#9177 `__: BUILD: Document > the "clean" target in the doc/Makefile. 
> * `#9178 `__: MAINT: make > refguide-check more robust for printed numpy arrays > * `#9186 `__: MAINT: Remove > np.ediff1d occurence > * `#9188 `__: DOC: correct typo > in extending ndimage with C > * `#9190 `__: ENH: Support > specifying axes for fftconvolve > * `#9192 `__: MAINT: optimize: > fixed @pv style suggestions from #9112 > * `#9200 `__: Fix > make_interp_spline(..., k=0 or 1, axis<0) > * `#9201 `__: BUG: > sparse.linalg/gmres: use machine eps in breakdown check > * `#9204 `__: MAINT: fix up > stats.spearmanr and match mstats.spearmanr with... > * `#9206 `__: MAINT: include > benchmarks and dev files in sdist. > * `#9208 `__: TST: signal: bump > bsplines test tolerance for complex data > * `#9210 `__: TST: mark tests > as slow, fix missing random seed > * `#9211 `__: ENH: add > capability to specify orders in pade func > * `#9217 `__: MAINT: Include > ``success`` and ``nit`` in OptimizeResult returned... > * `#9222 `__: ENH: interpolate: > Use scipy.spatial.distance to speed-up Rbf > * `#9229 `__: MNT: Fix Fourier > filter double case > * `#9233 `__: BUG: > spatial/distance: fix pdist/cdist performance regression... > * `#9234 `__: FIX: Proper > suppression > * `#9235 `__: BENCH: > rationalize slow benchmarks + miscellaneous fixes > * `#9238 `__: BENCH: limit > number of parameter combinations in spatial.*KDTree... > * `#9239 `__: DOC: stats: Fix > LaTeX markup of a couple distribution PDFs. > * `#9241 `__: ENH: Evaluate > plateau size during peak finding > * `#9242 `__: ENH: stats: > Implement _ppf and _logpdf for crystalball, and do... > * `#9246 `__: DOC: Properly > render versionadded directive in HTML documentation > * `#9255 `__: DOC: mention > RootResults in optimization reference guide > * `#9260 `__: TST: relax some > tolerances so tests pass with x87 math > * `#9264 `__: TST Use > assert_raises "match" parameter instead of the "message"... 
> * `#9267 `__: DOC: clarify > expect() return val when moment is inf/nan > * `#9272 `__: DOC: Add > description of default bounds to linprog > * `#9277 `__: MAINT: > sparse/linalg: make test deterministic > * `#9278 `__: MAINT: > interpolate: pep8 cleanup in test_polyint > * `#9279 `__: Fixed docstring > for resample > * `#9280 `__: removed first > check for float in get_sum_dtype > * `#9281 `__: BUG: only accept > 1d input for bartlett / levene in scipy.stats > * `#9282 `__: MAINT: > dense_output and t_eval are mutually exclusive inputs > * `#9283 `__: MAINT: add docs > and do some cleanups in interpolate.Rbf > * `#9288 `__: Run > distance_transform_edt tests on all types > * `#9294 `__: DOC: fix the > formula typo > * `#9298 `__: MAINT: > optimize/trust-constr: restore .niter attribute for backward-compat > * `#9299 `__: DOC: > clarification of default rvs method in scipy.stats > * `#9301 `__: MAINT: removed > unused import sys > * `#9302 `__: MAINT: removed > unused imports > * `#9303 `__: DOC: signal: > Refer to fs instead of nyq in the firwin docstring. > * `#9305 `__: ENH: Added > Yeo-Johnson power transformation > * `#9306 `__: ENH - add dual > annealing > * `#9309 `__: ENH add the > yulesimon distribution to scipy.stats > * `#9317 `__: Nested SLSQP bug > fix. 
> * `#9320 `__: MAINT: stats: > avoid underflow in stats.geom.ppf > * `#9326 `__: Add example for > Rosenbrock function > * `#9332 `__: Sort file lists > * `#9340 `__: Fix typo in > find_peaks documentation > * `#9343 `__: MAINT Use np.full > when possible > * `#9344 `__: DOC: added > examples to docstring of dirichlet class > * `#9346 `__: DOC: Fix import > of scipy.sparse.linalg in example (#9345) > * `#9350 `__: Fix interpolate > read only > * `#9351 `__: MAINT: > special.erf: use the x->-x symmetry > * `#9356 `__: Fix documentation > typo > * `#9358 `__: DOC: improve doc > for ksone and kstwobign in scipy.stats > * `#9362 `__: DOC: Change > datatypes of A matrices in linprog > * `#9364 `__: MAINT: Adds > implicit none to fftpack fortran sources > * `#9369 `__: DOC: minor tweak > to CoC (updated NumFOCUS contact address). > * `#9373 `__: Fix exception if > python is called with -OO option > * `#9374 `__: FIX: AIX > compilation issue with NAN and INFINITY > * `#9376 `__: COBLYA -> COBYLA > in docs > * `#9377 `__: DOC: Add examples > integrate: fixed_quad and quadrature > * `#9379 `__: MAINT: TST: Make > tests NumPy 1.8 compatible > * `#9385 `__: CI: On Travis > matrix "OPTIMIZE=-OO" flag ignored > * `#9387 `__: Fix defaut value > for 'mode' in 'ndimage.shift' in the doc > * `#9392 `__: BUG: rank has to > be integer in rank_filter: fixed issue 9388 > * `#9399 `__: DOC: Misc. typos > * `#9400 `__: TST: stats: Fix > the expected r-value of a linregress test. 
> * `#9405 `__: BUG: np.hstack > does not accept generator expressions > * `#9408 `__: ENH: linalg: > Shorter ill-conditioned warning message > * `#9418 `__: DOC: Fix ndimage > docstrings and reduce doc build warnings > * `#9421 `__: DOC: Add missing > docstring examples in scipy.spatial > * `#9422 `__: DOC: Add an > example to integrate.newton_cotes > * `#9427 `__: BUG: Fixed defect > with maxiter #9419 in dual annealing > * `#9431 `__: BENCH: Add dual > annealing to scipy benchmark (see #9415) > * `#9435 `__: DOC: Add > docstring examples for stats.binom_test > * `#9443 `__: DOC: Fix the > order of indices in optimize tutorial > * `#9444 `__: MAINT: > interpolate: use operator.index for checking/coercing... > * `#9445 `__: DOC: Added > missing example to stats.mstats.kruskal > * `#9446 `__: DOC: Add note > about version changed for jaccard distance > * `#9447 `__: BLD: > version-script handling in setup.py > * `#9448 `__: TST: skip a > problematic linalg test > * `#9449 `__: TST: fix missing > seed in lobpcg test. 
> * `#9456 `__: TST: > test_eigs_consistency() now sorts output > > Checksums > ========= > > MD5 > ~~~ > > 0bb53a49e77bca11fb26698744c60f97 > scipy-1.2.0-cp27-cp27m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl > 39e215ac7e8d6de33d939486987dcba4 > scipy-1.2.0-cp27-cp27m-manylinux1_i686.whl > d6e33b2c05ffbcf9790628d656c8e61f > scipy-1.2.0-cp27-cp27m-manylinux1_x86_64.whl > a463a12d77b87df0a2d202323771a908 scipy-1.2.0-cp27-cp27m-win32.whl > 3dc17a11c7dd211ce51a338cfe30eb48 scipy-1.2.0-cp27-cp27m-win_amd64.whl > 82b1ecbecadfeddd2be1c6d616491029 > scipy-1.2.0-cp27-cp27mu-manylinux1_i686.whl > 65021ade783f1416b1920d2a2cc39d4d > scipy-1.2.0-cp27-cp27mu-manylinux1_x86_64.whl > 7cc2cdbc9b421ef10695b898cdc241e7 > scipy-1.2.0-cp34-cp34m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl > 2a4ccbfcccb9395fa7554a82db40e454 > scipy-1.2.0-cp34-cp34m-manylinux1_i686.whl > 662dc35acd6f588565cd6467465fc742 > scipy-1.2.0-cp34-cp34m-manylinux1_x86_64.whl > a19bd9969bb5b92595e82d924b8272f9 scipy-1.2.0-cp34-cp34m-win32.whl > 5f1eaa3745956db0da724f83dd174559 scipy-1.2.0-cp34-cp34m-win_amd64.whl > d32a4c31d0a188f3550c1306d20b03c7 > scipy-1.2.0-cp35-cp35m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl > 34f4a7f04abfda87fac8ab38a9a70a77 > scipy-1.2.0-cp35-cp35m-manylinux1_i686.whl > bd4d56910802870072e3c5ded69a8f08 > scipy-1.2.0-cp35-cp35m-manylinux1_x86_64.whl > cb3fb7ddd3992928f9173e4eb489d23e scipy-1.2.0-cp35-cp35m-win32.whl > 68f5ddcb6e592b1d9cba95f24faee7b5 scipy-1.2.0-cp35-cp35m-win_amd64.whl > c0e110f3a935731782c96a13cc264ea2 > scipy-1.2.0-cp36-cp36m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl > 9d898924498abbe2d26dd18b3413fb11 > scipy-1.2.0-cp36-cp36m-manylinux1_i686.whl > dd9ae664cbe7de54828d83c772e24da3 > scipy-1.2.0-cp36-cp36m-manylinux1_x86_64.whl > 
8dea8432610bb3c63114eb6469c5f99a scipy-1.2.0-cp36-cp36m-win32.whl > ebda830aec7b60193772741f85fee28c scipy-1.2.0-cp36-cp36m-win_amd64.whl > 0e8b7a7908c50e635e639d2c69901140 > scipy-1.2.0-cp37-cp37m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl > 56719437a821f9f2f98f069225e70c87 > scipy-1.2.0-cp37-cp37m-manylinux1_i686.whl > 93d9d978855516ec38fa08620ef3443c > scipy-1.2.0-cp37-cp37m-manylinux1_x86_64.whl > fff3d66b877b6e6b9984b84ae9e4d76c scipy-1.2.0-cp37-cp37m-win32.whl > 3defb2c8b2f69057919ee3b0c92de65c scipy-1.2.0-cp37-cp37m-win_amd64.whl > e57011507865b0b702aff6077d412e03 scipy-1.2.0.tar.gz > 8eb6c1d7ceae0d06aef474f7801b8fca scipy-1.2.0.tar.xz > b0fb16b09319d3031d27ccf21a3ef474 scipy-1.2.0.zip > > SHA256 > ~~~~~~ > > d1ae77b79fd5e27a10ba7c4e7c3a62927b9d29a4dccf28f6905c25d983aaf183 > scipy-1.2.0-cp27-cp27m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl > 4b1f0883cb9d8ee963cf0a31c87341e9e758abb2cf1e5bcc0d7b066ef6b17573 > scipy-1.2.0-cp27-cp27m-manylinux1_i686.whl > c5eae911cf26b3c7eda889ec98d3c226f312c587acfaaf02602473f98b4c16d6 > scipy-1.2.0-cp27-cp27m-manylinux1_x86_64.whl > 58f0435f052cb60f1472c77f52a8f6642f8877b70559e5e0b9a1744f33f5cbe5 > scipy-1.2.0-cp27-cp27m-win32.whl > 4cce25c6e7ff7399c67dfe1b5423c36c391cf9b4b2be390c1675ab410f1ef503 > scipy-1.2.0-cp27-cp27m-win_amd64.whl > 02cb79ea38114dc480e9b08d6b87095728e8fb39b9a49b449ee443d678001611 > scipy-1.2.0-cp27-cp27mu-manylinux1_i686.whl > 7dc4002f0a0a688774ef04878afe769ecd1ac21fe9b4b1d7125e2cf16170afd3 > scipy-1.2.0-cp27-cp27mu-manylinux1_x86_64.whl > 7994c044bf659b0a24ad7673ec7db85c2fadb87e4980a379a9fd5b086fe3649a > scipy-1.2.0-cp34-cp34m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl > 72bd766f753fd32f072d30d7bc2ad492d56dbcbf3e13764e27635d5c41079339 > scipy-1.2.0-cp34-cp34m-manylinux1_i686.whl > 3132a9fab3f3545c8b0ba15688d11857efdd4a58d388d3785dc474f56fea7138 
> scipy-1.2.0-cp34-cp34m-manylinux1_x86_64.whl > 7413080b381766a22d52814edb65631f0e323a7cea945c70021a672f38acc73f > scipy-1.2.0-cp34-cp34m-win32.whl > 6f791987899532305126309578727c0197bddbafab9596bafe3e7bfab6e1ce13 > scipy-1.2.0-cp34-cp34m-win_amd64.whl > 937147086e8b4338bf139ca8fa98da650e3a46bf2ca617f8e9dd68c3971ec420 > scipy-1.2.0-cp35-cp35m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl > 63e1d5ca9e5e1984f1a275276991b036e25d39d37dd7edbb3f4f6165c2da7dbb > scipy-1.2.0-cp35-cp35m-manylinux1_i686.whl > 03c827cdbc584e935264040b958e5fa0570a16095bee23a013482ba3f0e963a2 > scipy-1.2.0-cp35-cp35m-manylinux1_x86_64.whl > bc294841f6c822714af362095b181a853f47ed5ce757354bd2e4776d579ff3a4 > scipy-1.2.0-cp35-cp35m-win32.whl > cca33a01a5987c650b87a1a910aa311ffa44e67cca1ff502ef9efdae5d9a8624 > scipy-1.2.0-cp35-cp35m-win_amd64.whl > 8608316d0cc01f8b25111c8adfe6efbc959cbba037a62c784551568d7ffbf280 > scipy-1.2.0-cp36-cp36m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl > fb36064047e6bf87b6320a9b6eb7f525ef6863c7a4aef1a84a4bbfb043612617 > scipy-1.2.0-cp36-cp36m-manylinux1_i686.whl > 854bd87cc23824d5db4983956bc30f3790e1c7448f1a9e6a8fb7bff7601aef87 > scipy-1.2.0-cp36-cp36m-manylinux1_x86_64.whl > fc1a19d95649439dbd50baca676bceb29bbfcd600aed2c5bd71d9bad82a87cfe > scipy-1.2.0-cp36-cp36m-win32.whl > 8f5fcc87b48fc3dd6d901669c89af4feeb856dffb6f671539a238b7e8af1799c > scipy-1.2.0-cp36-cp36m-win_amd64.whl > bc6a88b0009a1b60eab5c22ac3a006f6968d6328de10c6a64ebb0d64a05548d3 > scipy-1.2.0-cp37-cp37m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl > 64b2c35824da3ef6bb1e722216e4ef28802af6413c7586136500e343d34ba179 > scipy-1.2.0-cp37-cp37m-manylinux1_i686.whl > 78a67ee4845440e81cfbfabde20537ca12051d0eeac951fe4c6d8751feac3103 > scipy-1.2.0-cp37-cp37m-manylinux1_x86_64.whl > 04f2b23258139c109d0524f111597dd095a505d9cb2c71e381d688d653877fa3 > 
scipy-1.2.0-cp37-cp37m-win32.whl > 5706b785ca289fdfd91aa05066619e51d140613b613e35932601f2315f5d8470 > scipy-1.2.0-cp37-cp37m-win_amd64.whl > 51a2424c8ed80e60bdb9a896806e7adaf24a58253b326fbad10f80a6d06f2214 > scipy-1.2.0.tar.gz > 1cfafc64e1c1c0b6d040d35780ba06f01e9cb44f3bacd8f33a5df6efe5a144a6 > scipy-1.2.0.tar.xz > 5b3d4b1654b4723431ad7f47b4556498a86a53d2fdb86e85a45c3da0c2a72a4b > scipy-1.2.0.zip > -----BEGIN PGP SIGNATURE----- > > iQIzBAEBCAAdFiEEgfAqrOuERrV4fwr6XCLdIy4FKasFAlwYY1YACgkQXCLdIy4F > KauihhAAo0fk+a4U4ilFXeZrUTuZTeqqHtD60yV8Z2pZvqQyH1BBP8BKksV/v3bP > 6ckl+upWaolMjZgehWhAKyGgE2oUHDu6RxaRS97Kbbns580M+wWduPX7kjf7OKoX > KFX1a8/GlBWTQKIMj6P2oShT7rvYB+WABvyoTxXKbHG0/ArOtOqqAYJFKLeOYB6m > o0+qajqt9syV2iqBFlHnrPKQ8qjtfPxqCP9KHhNIbHd17305YwJc58CBhpulIjaP > HIhJOuP7xigELX9yCzJ2qFGBjTd2HNvBQWIRjNDfbox6mhWO4no30c+OamG3MAr7 > n9TDzSIjxkRedvBzMRJwA/Q5/Mou/R16BF+ZzvCVZnp/h6LrXQg4ENfR304Byyy3 > NlYzmQKlV0XvP4oYewBnLfq6hcXAum7rf3L8ene8mu0OWJumW7Yr6PWfDNECDKvX > sPWSwkNu01Pzg/KUUkrS9w6m9bZTj4UP15L2Z8JSVdp/wxCHGh8txJWjOenoZBnD > BPyitzsgbuW+pd7+WmjZoJpr8QjL/Uw8vpUwlvKAvvVfFnVeVh3X1awK+D6iba69 > LbyfhOG3R846WkxHh458uvNxmYUbm6sqaZ7lzY91bD8z61jS6PCvpR8BJcwmpP7Y > q/duWsaakRWLe35CE8KuvKCgVFhhStX2sZ/hRkNsvm2/hEAtNW0= > =3Enc > -----END PGP SIGNATURE----- > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matti.picus at gmail.com Wed Dec 19 00:52:40 2018 From: matti.picus at gmail.com (Matti Picus) Date: Wed, 19 Dec 2018 07:52:40 +0200 Subject: [Numpy-discussion] Reminder: weekly status meeting Dec 19 at 12:00 pacific time Message-ID: <7c424525-46bf-f126-c361-510c9592b346@gmail.com> The draft agenda is at https://hackmd.io/Gn1ymjwkRjm9WVY5Cgbwsw?both This will be our last meeting of 2018. 
The BIDS team From shoyer at gmail.com Wed Dec 19 12:47:01 2018 From: shoyer at gmail.com (Stephan Hoyer) Date: Wed, 19 Dec 2018 09:47:01 -0800 Subject: [Numpy-discussion] Adding "maximum difference" to np.testing.assert_array_equal errors Message-ID: I have a PR up for review that adds "maximum difference" to the error messages produced by NumPy's testing functions for comparing arrays: https://github.com/numpy/numpy/pull/12591 Because this changes NumPy's public interface, I'm running it by the mailing list to see if there are any objections or suggestions for improvement. Example behavior: >>> x = np.array([1, 2, 3]) >>> y = np.array([1, 2, 3.0001]) >>> np.testing.assert_allclose(x, y) AssertionError: Not equal to tolerance rtol=1e-07, atol=0 (mismatch 33.333333333333336%, maximum difference 0.00010000000000021103) x: array([1, 2, 3]) y: array([1. , 2. , 3.0001]) Motivation: when writing numerical algorithms, I frequently find myself experimenting to pick the right value of atol and rtol for np.testing.assert_allclose(). If I make the tolerance too generous, I risk missing regressions in accuracy, so I usually try to pick the smallest values for which tests pass. This change immediately reveals appropriate values to use for these parameters, so I don't need to guess and check. One alternative would be to print both "atol" and "rtol" numbers directly that could be immediately used in assert_allclose(). Here we effectively only have "atol". The main reason why I didn't do this is that I felt it would add more clutter (rtol is slightly redundant with the displayed array values). But we could probably do this if we're willing to split the error message onto a few more lines, e.g., AssertionError: Not equal to tolerance rtol=1e-07, atol=0 (mismatch 33.333333333333336%, maximum absolute difference 0.00010000000000021103, maximum relative difference 3.333333333340368e-05) x: array([1, 2, 3]) y: array([1. , 2.
, 3.0001]) Best, Stephan -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefanv at berkeley.edu Wed Dec 19 13:50:46 2018 From: stefanv at berkeley.edu (Stefan van der Walt) Date: Wed, 19 Dec 2018 10:50:46 -0800 Subject: [Numpy-discussion] Adding "maximum difference" to np.testing.assert_array_equal errors In-Reply-To: References: Message-ID: <20181219185046.imvel36fud2ysqla@carbo> On Wed, 19 Dec 2018 09:47:01 -0800, Stephan Hoyer wrote: > Example behavior: > > >>> x = np.array([1, 2, 3]) > >>> y = np.array([1, 2, 3.0001]) > >>> np.testing.assert_allclose(x, y) > AssertionError: > Not equal to tolerance rtol=1e-07, atol=0 > > (mismatch 33.333333333333336%, maximum difference > 0.00010000000000021103) This is a helpful addition; thank you! I don't have a strong preference around whether to also include rtol. Stéfan From deil.christoph at googlemail.com Wed Dec 19 15:27:08 2018 From: deil.christoph at googlemail.com (Christoph Deil) Date: Wed, 19 Dec 2018 21:27:08 +0100 Subject: [Numpy-discussion] Adding "maximum difference" to np.testing.assert_array_equal errors In-Reply-To: <20181219185046.imvel36fud2ysqla@carbo> References: <20181219185046.imvel36fud2ysqla@carbo> Message-ID: <7975FB8F-C2C6-4783-95E0-A669DC380D0F@googlemail.com> > On 19. Dec 2018, at 19:50, Stefan van der Walt wrote: > > On Wed, 19 Dec 2018 09:47:01 -0800, Stephan Hoyer wrote: >> Example behavior: >> >>>>> x = np.array([1, 2, 3]) >>>>> y = np.array([1, 2, 3.0001]) >>>>> np.testing.assert_allclose(x, y) >> AssertionError: >> Not equal to tolerance rtol=1e-07, atol=0 >> >> (mismatch 33.333333333333336%, maximum difference >> 0.00010000000000021103) > > This is a helpful addition; thank you! I don't have a strong preference > around whether to also include rtol.
> > St?fan > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion +1 to this addition. And +1 to also print rtol. I frequently use rtol with assert_allclose, and having the value printed helps quickly choose an appropriate value. Christoph From jni.soma at gmail.com Wed Dec 19 19:40:57 2018 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Wed, 19 Dec 2018 19:40:57 -0500 Subject: [Numpy-discussion] =?utf-8?q?Adding_=22maximum_difference=22_to_?= =?utf-8?q?np=2Etesting=2Eassert=5Farray=5Fequal_errors?= In-Reply-To: <7975FB8F-C2C6-4783-95E0-A669DC380D0F@googlemail.com> References: <20181219185046.imvel36fud2ysqla@carbo> <7975FB8F-C2C6-4783-95E0-A669DC380D0F@googlemail.com> Message-ID: <8445aa5a-9731-4be1-bcde-d6004c007440@sloti1d3t01> Another +1 on printing rtol, and +100 (can I do that?) on the overall idea! Thanks Stephan! On Thu, Dec 20, 2018, at 7:27 AM, Christoph Deil wrote: > > > > On 19. Dec 2018, at 19:50, Stefan van der Walt wrote: > > > > On Wed, 19 Dec 2018 09:47:01 -0800, Stephan Hoyer wrote: > >> Example behavior: > >> > >>>>> x = np.array([1, 2, 3]) > >>>>> y = np.array([1, 2, 3.0001]) > >>>>> np.testing.assert_allclose(x, y) > >> AssertionError: > >> Not equal to tolerance rtol=1e-07, atol=0 > >> > >> (mismatch 33.333333333333336%, maximum difference > >> 0.00010000000000021103) > > > > This is a helpful addition; thank you! I don't have a strong preference > > around whether to also include rtol. > > > > St?fan > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at python.org > > https://mail.python.org/mailman/listinfo/numpy-discussion > > +1 to this addition. > > And +1 to also print rtol. > I frequently use rtol with assert_allclose, and having the value > printed helps quickly choose an appropriate value. 
> > Christoph > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > From shoyer at gmail.com Thu Dec 20 00:53:19 2018 From: shoyer at gmail.com (Stephan Hoyer) Date: Wed, 19 Dec 2018 21:53:19 -0800 Subject: [Numpy-discussion] Adding "maximum difference" to np.testing.assert_array_equal errors In-Reply-To: <8445aa5a-9731-4be1-bcde-d6004c007440@sloti1d3t01> References: <20181219185046.imvel36fud2ysqla@carbo> <7975FB8F-C2C6-4783-95E0-A669DC380D0F@googlemail.com> <8445aa5a-9731-4be1-bcde-d6004c007440@sloti1d3t01> Message-ID: OK, it sounds like a popular change :). I'll add relative error to the message as well. Please comment on the pull request if you have strong feelings about exactly what it should look like. On Wed, Dec 19, 2018 at 4:41 PM Juan Nunez-Iglesias wrote: > Another +1 on printing rtol, and +100 (can I do that?) on the overall > idea! Thanks Stephan! > > On Thu, Dec 20, 2018, at 7:27 AM, Christoph Deil wrote: > > > > > > > On 19. Dec 2018, at 19:50, Stefan van der Walt > wrote: > > > > > > On Wed, 19 Dec 2018 09:47:01 -0800, Stephan Hoyer wrote: > > >> Example behavior: > > >> > > >>>>> x = np.array([1, 2, 3]) > > >>>>> y = np.array([1, 2, 3.0001]) > > >>>>> np.testing.assert_allclose(x, y) > > >> AssertionError: > > >> Not equal to tolerance rtol=1e-07, atol=0 > > >> > > >> (mismatch 33.333333333333336%, maximum difference > > >> 0.00010000000000021103) > > > > > > This is a helpful addition; thank you! I don't have a strong > preference > > > around whether to also include rtol. > > > > > > St?fan > > > _______________________________________________ > > > NumPy-Discussion mailing list > > > NumPy-Discussion at python.org > > > https://mail.python.org/mailman/listinfo/numpy-discussion > > > > +1 to this addition. > > > > And +1 to also print rtol. 
> > I frequently use rtol with assert_allclose, and having the value > > printed helps quickly choose an appropriate value. > > > > Christoph > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at python.org > > https://mail.python.org/mailman/listinfo/numpy-discussion > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Thu Dec 20 11:16:50 2018 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 20 Dec 2018 09:16:50 -0700 Subject: [Numpy-discussion] NumPy 1.16.0rc1 released Message-ID: Hi All, On behalf of the NumPy team I'm pleased to announce the release of NumPy 1.16.0rc1. This is the last NumPy release to support Python 2.7 and will be maintained as a long term release with bug fixes until 2020. This release has seen a lot of refactoring and features many bug fixes, improved code organization, and better cross platform compatibility. Not all of these improvements will be visible to users, but they should help make maintenance easier going forward. Highlights are - Experimental support for overriding numpy functions in downstream projects. - The matmul function is now a ufunc and can be overridden using __array_ufunc__. - Improved support for the ARM and POWER architectures. - Improved support for AIX and PyPy. - Improved interoperation with ctypes. - Improved support for PEP 3118. The supported Python versions are 2.7 and 3.5-3.7, support for 3.4 has been dropped. The wheels on PyPI are linked with OpenBLAS v0.3.4+, which should fix the known threading issues found in previous OpenBLAS versions. Downstream developers building this release should use Cython >= 0.29 and, if linking OpenBLAS, OpenBLAS > v0.3.4. 
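One of the highlights above — matmul becoming a ufunc — means `np.matmul` (and the `@` operator) can now be intercepted via `__array_ufunc__`. A minimal sketch of what that enables (the `Logged` subclass here is invented purely for illustration, not part of NumPy):

```python
import numpy as np

class Logged(np.ndarray):
    """Hypothetical subclass that intercepts every ufunc call,
    including np.matmul, which is a ufunc as of NumPy 1.16."""

    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        print(f"intercepted ufunc: {ufunc.__name__}")
        # Unwrap to plain ndarrays and defer to the default implementation
        unwrapped = [np.asarray(x) for x in inputs]
        return getattr(ufunc, method)(*unwrapped, **kwargs)

a = np.eye(2).view(Logged)
b = np.arange(4.0).reshape(2, 2)
result = a @ b  # dispatches through __array_ufunc__ on NumPy >= 1.16
```

On releases before 1.16, matmul was not a ufunc, so the same `@` call would bypass `__array_ufunc__` entirely.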
Wheels for this release can be downloaded from PyPI, source archives are available from Github. *Contributors* A total of 111 people contributed to this release. People with a "+" by their names contributed a patch for the first time. - Alan Fontenot + - Allan Haldane - Alon Hershenhorn + - Alyssa Quek + - Andreas Nussbaumer + - Anner + - Anthony Sottile + - Antony Lee - Ayappan P + - Bas van Schaik + - C.A.M. Gerlach + - Charles Harris - Chris Billington - Christian Clauss - Christoph Gohlke - Christopher Pezley + - Daniel B Allan + - Daniel Smith - Dawid Zych + - Derek Kim + - Dima Pasechnik + - Edgar Giovanni Lepe + - Elena Mokeeva + - Elliott Sales de Andrade + - Emil Hessman + - Eric Schles + - Eric Wieser - Giulio Benetti + - Guillaume Gautier + - Guo Ci - Heath Henley + - Isuru Fernando + - J. Lewis Muir + - Jack Vreeken + - Jaime Fernandez - James Bourbeau - Jeff VanOss - Jeffrey Yancey + - Jeremy Chen + - Jeremy Manning + - Jeroen Demeyer - John Darbyshire + - John Zwinck - Jonas Jensen + - Joscha Reimer + - Juan Azcarreta + - Julian Taylor - Kevin Sheppard - Krzysztof Chomski + - Kyle Sunden - Lars Grüter - Lilian Besson + - MSeifert04 - Mark Harfouche - Marten van Kerkwijk - Martin Thoma - Matt Harrigan + - Matthew Bowden + - Matthew Brett - Matthias Bussonnier - Matti Picus - Max Aifer + - Michael Hirsch, Ph.D + - Michael James Jamie Schnaitter + - MichaelSaah + - Mike Toews - Minkyu Lee + - Mircea Akos Bruma + - Mircea-Akos Brumă + - Moshe Looks + - Muhammad Kasim + - Nathaniel J.
Smith - Nikita Titov + - Paul Müller + - Paul van Mulbregt - Pauli Virtanen - Pierre Glaser + - Pim de Haan - Ralf Gommers - Robert Kern - Robin Aggleton + - Rohit Pandey + - Roman Yurchak + - Ryan Soklaski - Sebastian Berg - Sho Nakamura + - Simon Gibbons - Stan Seibert + - Stefan Otte - Stefan van der Walt - Stephan Hoyer - Stuart Archibald - Taylor Smith + - Tim Felgentreff + - Tim Swast + - Tim Teichmann + - Toshiki Kataoka - Travis Oliphant - Tyler Reddy - Uddeshya Singh + - Warren Weckesser - Weitang Li + - Wenjamin Petrenko + - William D. Irons - Yannick Jadoul + - Yaroslav Halchenko - Yug Khanna + - Yuji Kanagawa + - Yukun Guo + - lerbuke + - @ankokumoyashi + Cheers, Charles Harris -------------- next part -------------- An HTML attachment was scrubbed... URL: From robbmcleod at gmail.com Fri Dec 21 13:44:53 2018 From: robbmcleod at gmail.com (Robert McLeod) Date: Fri, 21 Dec 2018 10:44:53 -0800 Subject: [Numpy-discussion] ANN: NumExpr 2.6.9 Message-ID: Hi everyone, This is a version-bump release to provide wheels for Python 3.7.1 on Windows platforms. Also Mike Toews made the handling of our environment variables more robust. Project documentation is available at: http://numexpr.readthedocs.io/ Changes from 2.6.8 to 2.6.9 --------------------------- - Thanks to Mike Toews for more robust handling of the thread-setting environment variables. - With Appveyor updating to Python 3.7.1, wheels for Python 3.7 are now available in addition to those for other OSes. What's Numexpr? --------------- Numexpr is a fast numerical expression evaluator for NumPy. With it, expressions that operate on arrays (like "3*a+4*b") are accelerated and use less memory than doing the same calculation in Python. It has multi-threaded capabilities, as well as support for Intel's MKL (Math Kernel Library), which allows an extremely fast evaluation of transcendental functions (sin, cos, tan, exp, log...) while squeezing the last drop of performance out of your multi-core processors.
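A quick sketch of the expression style described above (illustrative only; assumes numexpr and NumPy are installed, and the array names are invented for the example):

```python
import numpy as np
import numexpr as ne

a = np.arange(1_000_000, dtype=np.float64)
b = np.arange(1_000_000, dtype=np.float64)

# The expression string is compiled once and evaluated in cache-sized
# chunks, avoiding the temporary arrays that plain NumPy would
# allocate for the intermediate results 3*a and 4*b
result = ne.evaluate("3*a + 4*b")
```

Since `a` equals `b` here, the result matches `7*a`, the same value plain NumPy computes (with more temporaries) as `3*a + 4*b`.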
Look here for some benchmarks of numexpr using MKL: https://github.com/pydata/numexpr/wiki/NumexprMKL Its only dependency is NumPy (MKL is optional), so it works well as an easy-to-deploy, easy-to-use, computational engine for projects that don't want to adopt other solutions requiring more heavy dependencies. Where can I find Numexpr? ------------------------- The project is hosted at GitHub in: https://github.com/pydata/numexpr You can get the packages from PyPI as well (but not for RC releases): http://pypi.python.org/pypi/numexpr Documentation is hosted at: http://numexpr.readthedocs.io/en/latest/ Share your experience --------------------- Let us know of any bugs, suggestions, gripes, kudos, etc. you may have. Enjoy data! -- Robert McLeod, Ph.D. robbmcleod at gmail.com robbmcleod at protonmail.com robert.mcleod at hitachi-hhtc.ca www.entropyreduction.al -------------- next part -------------- An HTML attachment was scrubbed... URL: From matti.picus at gmail.com Tue Dec 25 06:13:22 2018 From: matti.picus at gmail.com (Matti Picus) Date: Tue, 25 Dec 2018 13:13:22 +0200 Subject: [Numpy-discussion] Warn or immediately change readonly flag on broadcast_arrays return value? Message-ID: In PR 12609 https://github.com/numpy/numpy/pull/12609 I added code to emit a DeprecationWarning when broadcast_arrays returns an array where the output is repeated. While this is a minimal fix to the problem, perhaps we should consider making the output readonly immediately instead? - A deprecation cycle requires two changes to downstream users' code: one to filter the deprecation warning, and another when we actually make the change - Writing to the repeated data will cause errors now. What do you think, should we change the behaviour at all, and if so should we deprecate it over two releases or change it immediately?
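For context, the "repeated output" in question comes from zero-stride views: the broadcast result shares memory with the input, which is what makes writing to it surprising. A small illustrative sketch:

```python
import numpy as np

a = np.arange(3)           # shape (3,)
b = np.ones((2, 1))        # shape (2, 1)
x, y = np.broadcast_arrays(a, b)

# x "looks like" a (2, 3) array, but its first axis has stride 0:
# both rows are the very same memory as the original 1-d array `a`,
# so writing to one row would silently affect the other
print(x.shape)                 # (2, 3)
print(x.strides[0])            # 0
print(np.shares_memory(a, x))  # True
```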
The original issue is here https://github.com/numpy/numpy/issues/2705 Matti From einstein.edison at gmail.com Tue Dec 25 06:26:29 2018 From: einstein.edison at gmail.com (Hameer Abbasi) Date: Tue, 25 Dec 2018 12:26:29 +0100 Subject: [Numpy-discussion] Warn or immediately change readonly flag on broadcast_arrays return value? In-Reply-To: References: Message-ID: <152fa1a7-fb1f-4866-a6b6-cdc5128c37d3@Canary> Hi! Broadcasting almost always returns a repeated output (except when all arrays are the same shape), that's the entire point. I suspect this function is in fairly widespread use and will therefore cause a lot of downstream issues when repeating, so I'm -0.5 on a DeprecationWarning. A FutureWarning might be more appropriate, in which case I'm +0.2. As for making the output read-only, that might break code, but most likely the code was erroneous anyway. But breaking backward-compatibility without a grace period is unheard of in this community. I'm +0.5 on it anyway. Overall, a kind of hairy problem with no clear solution. Best Regards, Hameer Abbasi > On Tuesday, Dec 25, 2018 at 12:13 PM, Matti Picus wrote: > In PR 12609 https://github.com/numpy/numpy/pull/12609 I added code to > emit a DeprecationWarning when broadcast_arrays returns an array where > the output is repeated. While this is a minimal fix to the problem, > perhaps we should consider making the output readonly immediately instead? > > > - A deprecation cycle requires two changes to downstream users' code: > one to filter the deprecation warning, and another when we actually make > the change > > - Writing to the repeated data will cause errors now. > > > What do you think, should we change the behaviour at all, and if so > should we deprecate it over two releases or change it immediately?
> > The original issue is here https://github.com/numpy/numpy/issues/2705 > > Matti > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion -------------- next part -------------- An HTML attachment was scrubbed... URL: From jni.soma at gmail.com Tue Dec 25 07:53:23 2018 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Tue, 25 Dec 2018 23:53:23 +1100 Subject: [Numpy-discussion] Warn or immediately change readonly flag on broadcast_arrays return value? In-Reply-To: <152fa1a7-fb1f-4866-a6b6-cdc5128c37d3@Canary> References: <152fa1a7-fb1f-4866-a6b6-cdc5128c37d3@Canary> Message-ID: <6A6ACE82-B586-4EF1-9406-D2E8E27E8154@gmail.com> Probably this will cause a lot of groans, but I've definitely written code modifying `broadcast_to` outputs, intentionally. As such I am -1 on this whole endeavour. My preference on making arrays read-only is to have a very light touch if any. As an example, at some point `np.diag` started returning read-only views. Setting or modifying the diagonal of a matrix is a common operation, so this decision uglified my code (grab the diagonal, make it writeable, write to it, instead of `np.diag(M) = x`). I'll admit that the times that I modified the `broadcast_to` output I felt rather hacky and sheepish, but the point is that it's unlikely that someone who is using this function doesn't know what they're doing, isn't it? I wouldn't have even thought to look for this function before I understood what broadcasting and 0-strides were. In fact, I use it *specifically* to save memory and use zero strides. Matti, could you comment a bit on the motivation behind this change and why you feel it's necessary? Thanks, Juan. > On 25 Dec 2018, at 10:26 pm, Hameer Abbasi wrote: > > Hi! > > Broadcasting almost always returns a repeated output > (except when all arrays are the same shape), that's the entire point.
I suspect this function is in fairly widespread use and will therefore cause a lot of downstream issues when repeating, so I'm -0.5 on a DeprecationWarning. A FutureWarning might be more appropriate, in which case I'm +0.2. > > As for making the output read-only, that might break code, but most likely the code was erroneous anyway. But breaking backward-compatibility without a grace period is unheard of in this community. I'm +0.5 on it anyway. > > Overall, a kind of hairy problem with no clear solution. > > Best Regards, > Hameer Abbasi > > On Tuesday, Dec 25, 2018 at 12:13 PM, Matti Picus > wrote: > In PR 12609 https://github.com/numpy/numpy/pull/12609 I added code to > emit a DeprecationWarning when broadcast_arrays returns an array where > the output is repeated. While this is a minimal fix to the problem, > perhaps we should consider making the output readonly immediately instead? > > > - A deprecation cycle requires two changes to downstream users' code: > one to filter the deprecation warning, and another when we actually make > the change > > - Writing to the repeated data will cause errors now. > > > What do you think, should we change the behaviour at all, and if so > should we deprecate it over two releases or change it immediately?
URL: From m.h.vankerkwijk at gmail.com Tue Dec 25 11:32:27 2018 From: m.h.vankerkwijk at gmail.com (Marten van Kerkwijk) Date: Tue, 25 Dec 2018 11:32:27 -0500 Subject: [Numpy-discussion] Warn or immidiately change readonly flag on broadcast_arrays return value? In-Reply-To: <6A6ACE82-B586-4EF1-9406-D2E8E27E8154@gmail.com> References: <152fa1a7-fb1f-4866-a6b6-cdc5128c37d3@Canary> <6A6ACE82-B586-4EF1-9406-D2E8E27E8154@gmail.com> Message-ID: Hi Juan, I also use `broadcast_to` a lot, to save memory, but definitely have been in a situation where in another piece of code the array is assumed to be normal and writable (typically, that other piece was also written by me; so much for awareness...). Fortunately, `broadcast_to` already returns read-only views (it is newer and was designed as such from the start), so I was alerted. At the time `broadcast_to` was introduced [1], there was a discussion about `broadcast_arrays` as well, and it was decided to leave it writable - see the note at https://github.com/numpy/numpy/blob/master/numpy/lib/stride_tricks.py#L265. The same note suggests a deprecation cycle, but like Hameer I don't really see how that helps - there is no way to avoid the warning in the large majority of cases where readonly views are exactly what the user wanted in the first place. So, that argues for cold turkey... The one alternative would be to actually warn only if one attempts to write to a broadcast array - apparently this has been done for `np.diag` as well [2], so code to do so supposedly exists. All the best, Marten [1] https://github.com/numpy/numpy/pull/5371 [2] https://github.com/numpy/numpy/pull/5371#issuecomment-75238827 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jeffp at vodafonemail.de Tue Dec 25 21:10:07 2018 From: jeffp at vodafonemail.de (Jeff) Date: Wed, 26 Dec 2018 10:10:07 +0800 Subject: [Numpy-discussion] three-dim array Message-ID: <7f83e26b-1dc9-5c5c-111b-211008570e3f@vodafonemail.de> hello, sorry newbe to numpy. I want to define a three-dim array. I know this works: >>> np.array([[[1,2],[3,4]],[[5,6],[7,8]]]) array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]) But can you tell why this doesnt work? >>> np.array([[1,2],[[1,2],[3,4]]]) Traceback (most recent call last): File "", line 1, in ValueError: setting an array element with a sequence. Thank you. From mikofski at berkeley.edu Wed Dec 26 00:23:15 2018 From: mikofski at berkeley.edu (Mark Alexander Mikofski) Date: Tue, 25 Dec 2018 21:23:15 -0800 Subject: [Numpy-discussion] three-dim array In-Reply-To: <7f83e26b-1dc9-5c5c-111b-211008570e3f@vodafonemail.de> References: <7f83e26b-1dc9-5c5c-111b-211008570e3f@vodafonemail.de> Message-ID: I believe numpy arrays must be rectangular, yours is jagged, instead try >>> x3d = np.array([[[1, 2], [1, 2], [3, 4]]]) >>> x3d.shape (1, 3, 2) Note: 3 opening brackets, yours has 2 And single brackets around the 3 innermost arrays, yours has single brackets for the 1st, and double brackets around the 2nd and 3rd On Tue, Dec 25, 2018, 6:20 PM Jeff hello, > > sorry newbe to numpy. > > I want to define a three-dim array. > I know this works: > > >>> np.array([[[1,2],[3,4]],[[5,6],[7,8]]]) > array([[[1, 2], > [3, 4]], > > [[5, 6], > [7, 8]]]) > > But can you tell why this doesnt work? > > >>> np.array([[1,2],[[1,2],[3,4]]]) > Traceback (most recent call last): > File "", line 1, in > ValueError: setting an array element with a sequence. > > > Thank you. > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From wieser.eric+numpy at gmail.com Wed Dec 26 01:58:41 2018 From: wieser.eric+numpy at gmail.com (Eric Wieser) Date: Wed, 26 Dec 2018 07:58:41 +0100 Subject: [Numpy-discussion] three-dim array In-Reply-To: References: <7f83e26b-1dc9-5c5c-111b-211008570e3f@vodafonemail.de> Message-ID: In the latest version of numpy, this runs without an error, although may or may not be what you want: In [1]: np.array([[1,2],[[1,2],[3,4]]]) Out[1]: array([[1, 2], [list([1, 2]), list([3, 4])]], dtype=object) Here the result is a 2x2 array, where some elements are numbers and others are lists. ? On Wed, 26 Dec 2018 at 06:23 Mark Alexander Mikofski wrote: > I believe numpy arrays must be rectangular, yours is jagged, instead try > > >>> x3d = np.array([[[1, 2], [1, 2], [3, 4]]]) > >>> x3d.shape > (1, 3, 2) > > Note: 3 opening brackets, yours has 2 > And single brackets around the 3 innermost arrays, yours has single > brackets for the 1st, and double brackets around the 2nd and 3rd > > > On Tue, Dec 25, 2018, 6:20 PM Jeff >> hello, >> >> sorry newbe to numpy. >> >> I want to define a three-dim array. >> I know this works: >> >> >>> np.array([[[1,2],[3,4]],[[5,6],[7,8]]]) >> array([[[1, 2], >> [3, 4]], >> >> [[5, 6], >> [7, 8]]]) >> >> But can you tell why this doesnt work? >> >> >>> np.array([[1,2],[[1,2],[3,4]]]) >> Traceback (most recent call last): >> File "", line 1, in >> ValueError: setting an array element with a sequence. >> >> >> Thank you. >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at python.org >> https://mail.python.org/mailman/listinfo/numpy-discussion >> > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sebastian at sipsolutions.net Wed Dec 26 18:28:02 2018 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Thu, 27 Dec 2018 00:28:02 +0100 Subject: [Numpy-discussion] Add guaranteed no-copy to array creation and reshape? Message-ID: <629842683a77aed9acfe7ca2f00a55e164abf6ac.camel@sipsolutions.net> Hi all, In https://github.com/numpy/numpy/pull/11897 I am looking into the addition of a `copy=np.never_copy` argument to: * np.array * arr.reshape/np.reshape * arr.astype Which would cause an error to be raised when numpy cannot guarantee that the returned array is a view of the input array. The motivation is to easier avoid accidental copies of large data, or ensure that in-place manipulation will be meaningful. The copy flag API would be: * `copy=True` forces a copy * `copy=False` allows numpy to copy if necessary * `copy=np.never_copy` will error if a copy would be necessary * (almost) all other input will be deprecated. Unfortunately using `copy="never"` is tricky, because currently `np.array(..., copy="never")` behaves exactly the same as `np.array(..., copy=bool("never"))`. So that the wrong result would be given on old numpy versions and it would be a behaviour change. Some things that are a not so nice maybe: * adding/using `np.never_copy` is not very nice * Scalars need a copy and so will not be allowed * For rare array-likes numpy may not be able to guarantee no-copy, although it could happen (but should not). The history is that a long while ago I considered adding a copy flag to `reshape` so that it is possible to do `copy=np.never_copy` (or similar) to ensure that no copy is made. 
In these, you may want something like an assertion: ``` new_arr = arr.reshape(new_shape) assert np.may_share_memory(arr, new_arr) # Which is sometimes -- but should not be -- written as: arr.shape = new_shape # unnecessary container modification # Or: view = np.array(arr, order="F") assert np.may_share_memory(arr, new_arr) ``` but is more readable and will not cause an intermediate copy on error. So what do you think? Other variants would be to not expose this for `np.array` and probably limit `copy="never"` to the reshape method. Or just to not do it at all. Or to also accept "never" for `reshape`, although I think I would prefer to keep it in sync and wait for a few years to consider that. Best, Sebastian -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From cimrman3 at ntc.zcu.cz Wed Dec 26 19:31:19 2018 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 27 Dec 2018 01:31:19 +0100 Subject: [Numpy-discussion] ANN: SfePy 2018.4 Message-ID: I am pleased to announce release 2018.4 of SfePy. Description ----------- SfePy (simple finite elements in Python) is a software for solving systems of coupled partial differential equations by the finite element method or by the isogeometric analysis (limited support). It is distributed under the new BSD license. Home page: http://sfepy.org Mailing list: https://mail.python.org/mm3/mailman3/lists/sfepy.python.org/ Git (source) repository, issue tracker: https://github.com/sfepy/sfepy Highlights of this release -------------------------- - better support for eigenvalue problems - improved MUMPS solver interface - support for logging and plotting of complex values For full release notes see [1]. 
Cheers, Robert Cimrman [1] http://docs.sfepy.org/doc/release_notes.html#id1 --- Contributors to this release in alphabetical order: Robert Cimrman Vladimir Lukes Matyas Novak Jan Heczko Lubos Kejzlar From ralf.gommers at gmail.com Wed Dec 26 19:40:52 2018 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Wed, 26 Dec 2018 16:40:52 -0800 Subject: [Numpy-discussion] Add guaranteed no-copy to array creation and reshape? In-Reply-To: <629842683a77aed9acfe7ca2f00a55e164abf6ac.camel@sipsolutions.net> References: <629842683a77aed9acfe7ca2f00a55e164abf6ac.camel@sipsolutions.net> Message-ID: On Wed, Dec 26, 2018 at 3:29 PM Sebastian Berg wrote: > Hi all, > > In https://github.com/numpy/numpy/pull/11897 I am looking into the > addition of a `copy=np.never_copy` argument to: > * np.array > * arr.reshape/np.reshape > * arr.astype > > Which would cause an error to be raised when numpy cannot guarantee > that the returned array is a view of the input array. > The motivation is to easier avoid accidental copies of large data, or > ensure that in-place manipulation will be meaningful. > > The copy flag API would be: > * `copy=True` forces a copy > * `copy=False` allows numpy to copy if necessary > * `copy=np.never_copy` will error if a copy would be necessary > * (almost) all other input will be deprecated. > > Unfortunately using `copy="never"` is tricky, because currently > `np.array(..., copy="never")` behaves exactly the same as > `np.array(..., copy=bool("never"))`. So that the wrong result would be > given on old numpy versions and it would be a behaviour change. > I think np.never_copy is really ugly. I'd much rather simply use 'never', and clearly document that if users start using this and they critically rely on it really being never, then they should ensure that their code is only used with numpy >= 1.17.0. Note also that this would not be a backwards compatibility break, because `copy` is now clearly documented as bool, and not bool_like or some such thing. 
So we do not need to worry about the very improbable case that users now are using `copy='never'`. If others think `copy='never'` isn't acceptable now, there are two other options: 1. add code first to catch `copy='never'` in 1.17.x and raise on it, then in a later numpy version introduce it. 2. just do nothing. I'd prefer that over `np.never_copy`. Cheers, Ralf > Some things that are a not so nice maybe: > * adding/using `np.never_copy` is not very nice > * Scalars need a copy and so will not be allowed > * For rare array-likes numpy may not be able to guarantee no-copy, > although it could happen (but should not). > > > The history is that a long while ago I considered adding a copy flag to > `reshape` so that it is possible to do `copy=np.never_copy` (or > similar) to ensure that no copy is made. In these, you may want > something like an assertion: > > ``` > new_arr = arr.reshape(new_shape) > assert np.may_share_memory(arr, new_arr) > > # Which is sometimes -- but should not be -- written as: > arr.shape = new_shape # unnecessary container modification > > # Or: > view = np.array(arr, order="F") > assert np.may_share_memory(arr, new_arr) > ``` > > but is more readable and will not cause an intermediate copy on error. > > > So what do you think? Other variants would be to not expose this for > `np.array` and probably limit `copy="never"` to the reshape method. Or > just to not do it at all. Or to also accept "never" for `reshape`, > although I think I would prefer to keep it in sync and wait for a few > years to consider that. > > Best, > > Sebastian > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... 
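The compatibility hazard under discussion can be reproduced with nothing but current NumPy and plain Python; this small sketch (no proposed `copy='never'` behaviour, only the truthiness trap) shows why a string flag would silently flip meaning on any release that coerces `copy` to bool:

```python
import numpy as np

# On any release that coerces `copy` to bool, a non-empty string is
# truthy, so copy="never" would be read as copy=True -- the exact
# opposite of the intent:
assert bool("never") is True

# With explicit booleans the behaviour is unambiguous everywhere:
arr = np.arange(4)
forced = np.array(arr, copy=True)     # always a fresh buffer
assert not np.may_share_memory(arr, forced)

no_copy_needed = np.asarray(arr)      # ndarray input: no copy made
assert np.may_share_memory(arr, no_copy_needed)
```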
URL: From ben.v.root at gmail.com Wed Dec 26 20:21:30 2018 From: ben.v.root at gmail.com (Benjamin Root) Date: Wed, 26 Dec 2018 20:21:30 -0500 Subject: [Numpy-discussion] three-dim array In-Reply-To: References: <7f83e26b-1dc9-5c5c-111b-211008570e3f@vodafonemail.de> Message-ID: Ewww, kinda wish that would be an error... It would be too easy for a typo to get accepted this way. On Wed, Dec 26, 2018 at 1:59 AM Eric Wieser wrote: > In the latest version of numpy, this runs without an error, although may > or may not be what you want: > > In [1]: np.array([[1,2],[[1,2],[3,4]]]) > Out[1]: > array([[1, 2], > [list([1, 2]), list([3, 4])]], dtype=object) > > Here the result is a 2x2 array, where some elements are numbers and others > are lists. > ? > > On Wed, 26 Dec 2018 at 06:23 Mark Alexander Mikofski < > mikofski at berkeley.edu> wrote: > >> I believe numpy arrays must be rectangular, yours is jagged, instead try >> >> >>> x3d = np.array([[[1, 2], [1, 2], [3, 4]]]) >> >>> x3d.shape >> (1, 3, 2) >> >> Note: 3 opening brackets, yours has 2 >> And single brackets around the 3 innermost arrays, yours has single >> brackets for the 1st, and double brackets around the 2nd and 3rd >> >> >> On Tue, Dec 25, 2018, 6:20 PM Jeff > >>> hello, >>> >>> sorry newbe to numpy. >>> >>> I want to define a three-dim array. >>> I know this works: >>> >>> >>> np.array([[[1,2],[3,4]],[[5,6],[7,8]]]) >>> array([[[1, 2], >>> [3, 4]], >>> >>> [[5, 6], >>> [7, 8]]]) >>> >>> But can you tell why this doesnt work? >>> >>> >>> np.array([[1,2],[[1,2],[3,4]]]) >>> Traceback (most recent call last): >>> File "", line 1, in >>> ValueError: setting an array element with a sequence. >>> >>> >>> Thank you. 
>>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion at python.org >>> https://mail.python.org/mailman/listinfo/numpy-discussion >>> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at python.org >> https://mail.python.org/mailman/listinfo/numpy-discussion >> > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matti.picus at gmail.com Thu Dec 27 01:32:05 2018 From: matti.picus at gmail.com (Matti Picus) Date: Thu, 27 Dec 2018 08:32:05 +0200 Subject: [Numpy-discussion] three-dim array In-Reply-To: References: <7f83e26b-1dc9-5c5c-111b-211008570e3f@vodafonemail.de> Message-ID: On 27/12/18 3:21 am, Benjamin Root wrote: > Ewww, kinda wish that would be an error... It would be too easy for a > typo to get accepted this way. > > On Wed, Dec 26, 2018 at 1:59 AM Eric Wieser > > > wrote: > > In the latest version of numpy, this runs without an error, > although may or may not be what you want: > > |In [1]: np.array([[1,2],[[1,2],[3,4]]]) Out[1]: array([[1, 2], > [list([1, 2]), list([3, 4])]], dtype=object) | > > Here the result is a 2x2 array, where some elements are numbers > and others are lists. > > ? > Specify the dtype explicitly: `dtype=int` or so, then NumPy will refuse to create a ragged array. There has been occasional discussion of `dtype='not-object'`, but I don't think it resulted in an issue or PR. 
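Matti's suggestion is easy to check: with an explicit non-object dtype, NumPy cannot fall back to building an object array, so the ragged input from the original question fails loudly (this is existing behaviour, not a proposal):

```python
import numpy as np

ragged = [[1, 2], [[1, 2], [3, 4]]]   # jagged nesting from the question

# An explicit element dtype rules out the object-array fallback, so
# the mistake is rejected immediately instead of silently succeeding:
try:
    np.array(ragged, dtype=int)
    outcome = "created"
except (ValueError, TypeError):
    outcome = "rejected"

assert outcome == "rejected"
```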
Matti From wieser.eric+numpy at gmail.com Thu Dec 27 01:41:04 2018 From: wieser.eric+numpy at gmail.com (Eric Wieser) Date: Thu, 27 Dec 2018 07:41:04 +0100 Subject: [Numpy-discussion] three-dim array In-Reply-To: References: <7f83e26b-1dc9-5c5c-111b-211008570e3f@vodafonemail.de> Message-ID: The rationale for the change allowing that construction was twofold: it's easier to understand what has gone wrong when seeing the `list`s in the repr than it was from the cryptic error message; and there were some jagged cases that already succeeded in this way, and it was less confusing to be consistent. I agree that the behavior is not terribly useful, and object arrays constructed containing lists are quite possibly something we should warn about. Eric On Thu, Dec 27, 2018, 2:22 AM Benjamin Root Ewww, kinda wish that would be an error... It would be too easy for a typo > to get accepted this way. > > On Wed, Dec 26, 2018 at 1:59 AM Eric Wieser > wrote: > >> In the latest version of numpy, this runs without an error, although may >> or may not be what you want: >> >> In [1]: np.array([[1,2],[[1,2],[3,4]]]) >> Out[1]: >> array([[1, 2], >> [list([1, 2]), list([3, 4])]], dtype=object) >> >> Here the result is a 2x2 array, where some elements are numbers and >> others are lists. >> ? >> >> On Wed, 26 Dec 2018 at 06:23 Mark Alexander Mikofski < >> mikofski at berkeley.edu> wrote: >> >>> I believe numpy arrays must be rectangular, yours is jagged, instead try >>> >>> >>> x3d = np.array([[[1, 2], [1, 2], [3, 4]]]) >>> >>> x3d.shape >>> (1, 3, 2) >>> >>> Note: 3 opening brackets, yours has 2 >>> And single brackets around the 3 innermost arrays, yours has single >>> brackets for the 1st, and double brackets around the 2nd and 3rd >>> >>> >>> On Tue, Dec 25, 2018, 6:20 PM Jeff >> >>>> hello, >>>> >>>> sorry newbe to numpy. >>>> >>>> I want to define a three-dim array. 
>>>> I know this works: >>>> >>>> >>> np.array([[[1,2],[3,4]],[[5,6],[7,8]]]) >>>> array([[[1, 2], >>>> [3, 4]], >>>> >>>> [[5, 6], >>>> [7, 8]]]) >>>> >>>> But can you tell why this doesnt work? >>>> >>>> >>> np.array([[1,2],[[1,2],[3,4]]]) >>>> Traceback (most recent call last): >>>> File "", line 1, in >>>> ValueError: setting an array element with a sequence. >>>> >>>> >>>> Thank you. >>>> _______________________________________________ >>>> NumPy-Discussion mailing list >>>> NumPy-Discussion at python.org >>>> https://mail.python.org/mailman/listinfo/numpy-discussion >>>> >>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion at python.org >>> https://mail.python.org/mailman/listinfo/numpy-discussion >>> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at python.org >> https://mail.python.org/mailman/listinfo/numpy-discussion >> > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sebastian at sipsolutions.net Thu Dec 27 07:51:09 2018 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Thu, 27 Dec 2018 13:51:09 +0100 Subject: [Numpy-discussion] Add guaranteed no-copy to array creation and reshape? 
In-Reply-To: References: <629842683a77aed9acfe7ca2f00a55e164abf6ac.camel@sipsolutions.net> Message-ID: <921d947bfcefcfe38834a9363d2a2b39313c2330.camel@sipsolutions.net> On Wed, 2018-12-26 at 16:40 -0800, Ralf Gommers wrote: > > > On Wed, Dec 26, 2018 at 3:29 PM Sebastian Berg < > sebastian at sipsolutions.net> wrote: > > Hi all, > > > > In https://github.com/numpy/numpy/pull/11897 I am looking into the > > addition of a `copy=np.never_copy` argument to: > > * np.array > > * arr.reshape/np.reshape > > * arr.astype > > > > Which would cause an error to be raised when numpy cannot guarantee > > that the returned array is a view of the input array. > > The motivation is to easier avoid accidental copies of large data, > > or > > ensure that in-place manipulation will be meaningful. > > > > The copy flag API would be: > > * `copy=True` forces a copy > > * `copy=False` allows numpy to copy if necessary > > * `copy=np.never_copy` will error if a copy would be necessary > > * (almost) all other input will be deprecated. > > > > Unfortunately using `copy="never"` is tricky, because currently > > `np.array(..., copy="never")` behaves exactly the same as > > `np.array(..., copy=bool("never"))`. So that the wrong result would > > be > > given on old numpy versions and it would be a behaviour change. > > I think np.never_copy is really ugly. I'd much rather simply use > 'never', and clearly document that if users start using this and they > critically rely on it really being never, then they should ensure > that their code is only used with numpy >= 1.17.0. > > Note also that this would not be a backwards compatibility break, > because `copy` is now clearly documented as bool, and not bool_like > or some such thing. So we do not need to worry about the very > improbable case that users now are using `copy='never'`. > I agree that it is much nicer to pass "never" and that I am not too worried about people actually passing in strings. 
But, I am a bit worried of new code silently doing the exact opposite when run on an older version. Although, I guess we typically do not worry too much about such compatibilities and it is possible to make a note of it in the docs. - Sebastian > If others think `copy='never'` isn't acceptable now, there are two > other options: > 1. add code first to catch `copy='never'` in 1.17.x and raise on it, > then in a later numpy version introduce it. > 2. just do nothing. I'd prefer that over `np.never_copy`. > > Cheers, > Ralf > > > Some things that are a not so nice maybe: > > * adding/using `np.never_copy` is not very nice > > * Scalars need a copy and so will not be allowed > > * For rare array-likes numpy may not be able to guarantee no-copy, > > although it could happen (but should not). > > > > > > The history is that a long while ago I considered adding a copy > > flag to > > `reshape` so that it is possible to do `copy=np.never_copy` (or > > similar) to ensure that no copy is made. In these, you may want > > something like an assertion: > > > > ``` > > new_arr = arr.reshape(new_shape) > > assert np.may_share_memory(arr, new_arr) > > > > # Which is sometimes -- but should not be -- written as: > > arr.shape = new_shape # unnecessary container modification > > > > # Or: > > view = np.array(arr, order="F") > > assert np.may_share_memory(arr, new_arr) > > ``` > > > > but is more readable and will not cause an intermediate copy on > > error. > > > > > > So what do you think? Other variants would be to not expose this > > for > > `np.array` and probably limit `copy="never"` to the reshape method. > > Or > > just to not do it at all. Or to also accept "never" for `reshape`, > > although I think I would prefer to keep it in sync and wait for a > > few > > years to consider that. 
> > > > Best,
> > > >
> > > > Sebastian
> > > >
> > > > _______________________________________________
> > > > NumPy-Discussion mailing list
> > > > NumPy-Discussion at python.org
> > > > https://mail.python.org/mailman/listinfo/numpy-discussion
> >
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at python.org
> https://mail.python.org/mailman/listinfo/numpy-discussion
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: This is a digitally signed message part
URL: 

From wieser.eric+numpy at gmail.com  Thu Dec 27 08:50:29 2018
From: wieser.eric+numpy at gmail.com (Eric Wieser)
Date: Thu, 27 Dec 2018 14:50:29 +0100
Subject: [Numpy-discussion] Add guaranteed no-copy to array creation and reshape?
In-Reply-To: 
References: <629842683a77aed9acfe7ca2f00a55e164abf6ac.camel@sipsolutions.net>
Message-ID: 

> I think np.never_copy is really ugly. I'd much rather simply use 'never',

I actually think np.never_copy is the better choice of the two.
Arguments for it:

- It's consistent with np.newaxis in spelling (modulo the _)
- If mistyped, it can be caught easily by IDEs.
- It's typeable with mypy, unlike constant string literals which
  currently aren't
- If code written against new numpy is run on old numpy, it will error
  rather than doing the wrong thing
- It won't be possible to miss parts of our existing API that evaluate
  a copy argument in a boolean context
- It won't be possible for downstream projects to misinterpret by not
  checking for 'never' - an error will be raised instead.
Arguments against it:

- It's more characters than "never"
- The implementation of np.never_copy is a little verbose / ugly

Eric

On Thu, Dec 27, 2018, 1:41 AM Ralf Gommers 

> On Wed, Dec 26, 2018 at 3:29 PM Sebastian Berg
> wrote:
>
>> Hi all,
>>
>> In https://github.com/numpy/numpy/pull/11897 I am looking into the
>> addition of a `copy=np.never_copy` argument to:
>> * np.array
>> * arr.reshape/np.reshape
>> * arr.astype
>>
>> Which would cause an error to be raised when numpy cannot guarantee
>> that the returned array is a view of the input array.
>> The motivation is to easier avoid accidental copies of large data, or
>> ensure that in-place manipulation will be meaningful.
>>
>> The copy flag API would be:
>> * `copy=True` forces a copy
>> * `copy=False` allows numpy to copy if necessary
>> * `copy=np.never_copy` will error if a copy would be necessary
>> * (almost) all other input will be deprecated.
>>
>> Unfortunately using `copy="never"` is tricky, because currently
>> `np.array(..., copy="never")` behaves exactly the same as
>> `np.array(..., copy=bool("never"))`. So that the wrong result would be
>> given on old numpy versions and it would be a behaviour change.
>>
>
> I think np.never_copy is really ugly. I'd much rather simply use 'never',
> and clearly document that if users start using this and they critically
> rely on it really being never, then they should ensure that their code is
> only used with numpy >= 1.17.0.
>
> Note also that this would not be a backwards compatibility break, because
> `copy` is now clearly documented as bool, and not bool_like or some such
> thing. So we do not need to worry about the very improbable case that users
> now are using `copy='never'`.
>
> If others think `copy='never'` isn't acceptable now, there are two other
> options:
> 1. add code first to catch `copy='never'` in 1.17.x and raise on it, then
> in a later numpy version introduce it.
> 2. just do nothing. I'd prefer that over `np.never_copy`.
> > Cheers, > Ralf > > >> Some things that are a not so nice maybe: >> * adding/using `np.never_copy` is not very nice >> * Scalars need a copy and so will not be allowed >> * For rare array-likes numpy may not be able to guarantee no-copy, >> although it could happen (but should not). >> >> >> The history is that a long while ago I considered adding a copy flag to >> `reshape` so that it is possible to do `copy=np.never_copy` (or >> similar) to ensure that no copy is made. In these, you may want >> something like an assertion: >> >> ``` >> new_arr = arr.reshape(new_shape) >> assert np.may_share_memory(arr, new_arr) >> >> # Which is sometimes -- but should not be -- written as: >> arr.shape = new_shape # unnecessary container modification >> >> # Or: >> view = np.array(arr, order="F") >> assert np.may_share_memory(arr, new_arr) >> ``` >> >> but is more readable and will not cause an intermediate copy on error. >> >> >> So what do you think? Other variants would be to not expose this for >> `np.array` and probably limit `copy="never"` to the reshape method. Or >> just to not do it at all. Or to also accept "never" for `reshape`, >> although I think I would prefer to keep it in sync and wait for a few >> years to consider that. >> >> Best, >> >> Sebastian >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at python.org >> https://mail.python.org/mailman/listinfo/numpy-discussion >> > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > ? -------------- next part -------------- An HTML attachment was scrubbed... 
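For concreteness, Eric's last two bullets can be illustrated with a sentinel that refuses boolean evaluation. This is only a sketch of the idea, not NumPy's implementation; `never_copy` and `consumer` are made-up names. Downstream code that naively evaluates the flag in a boolean context then fails loudly instead of silently doing the wrong thing:

```python
class _NeverCopy:
    """Hypothetical sentinel: refuses to be used as a plain bool."""
    def __bool__(self):
        raise ValueError("never_copy is not a boolean; compare with "
                         "`copy is never_copy` instead")
    def __repr__(self):
        return "<never_copy>"

never_copy = _NeverCopy()

def consumer(data, copy=False):
    # Downstream code unaware of the sentinel typically does `if copy:`.
    # With a string flag this silently means "always copy"; with the
    # sentinel it raises immediately:
    if copy:
        return list(data)
    return data

try:
    consumer([1, 2, 3], copy=never_copy)
    failed_loudly = False
except ValueError:
    failed_loudly = True

assert failed_loudly
```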
URL: From m.h.vankerkwijk at gmail.com Thu Dec 27 09:40:30 2018 From: m.h.vankerkwijk at gmail.com (Marten van Kerkwijk) Date: Thu, 27 Dec 2018 09:40:30 -0500 Subject: [Numpy-discussion] three-dim array In-Reply-To: References: <7f83e26b-1dc9-5c5c-111b-211008570e3f@vodafonemail.de> Message-ID: There is a long-standing request to require an explicit opt-in for dtype=object: https://github.com/numpy/numpy/issues/5353 -- Marten -------------- next part -------------- An HTML attachment was scrubbed... URL: From sebastian at sipsolutions.net Thu Dec 27 11:39:07 2018 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Thu, 27 Dec 2018 17:39:07 +0100 Subject: [Numpy-discussion] Add guaranteed no-copy to array creation and reshape? In-Reply-To: References: <629842683a77aed9acfe7ca2f00a55e164abf6ac.camel@sipsolutions.net> Message-ID: <58cb2bf7b0b058590687bbbba14b526c1c276a58.camel@sipsolutions.net> On Thu, 2018-12-27 at 14:50 +0100, Eric Wieser wrote: > > I think np.never_copy is really ugly. I?d much rather simply use > > ?never?, > > > > I actually think np.never_copy is the better choice of the two. > Arguments for it: > > It?s consistent with np.newaxis in spelling (modulo the _) > If mistyped, it can be caught easily by IDEs. > It?s typeable with mypy, unlike constant string literals which > currently aren?t > If code written against new numpy is run on old numpy, it will error > rather than doing the wrong thing I am not sure I think that the above are too strong arguments, since it is not what is typically done for keywords (nobody suggests np.CLIP instead of "clip"). Unless you feel this is different because it is a mix of strings and bools here? > It won?t be possible to miss parts of our existing API that evaluate > a copy argument in a boolean context > It won?t be possible for downstream projects to misinterpret by not > checking for ?never? - an error will be raised instead. 
The last one is a good reason I think, since the dispatcher would pass on the string, so a downstream API user which are not picky about the input will interpret it wrong without the special argument. Unless we replace the string when dispatching, which seems strange on first sight. Overall, I am still a bit undecided right now. - Sebastian > Arguments against it: > > It?s more characters than "never" > The implementation of np.never_copy is a little verbose / ugly > Eric > > On Thu, Dec 27, 2018, 1:41 AM Ralf Gommers wrote: > > > > > On Wed, Dec 26, 2018 at 3:29 PM Sebastian Berg < > > sebastian at sipsolutions.net> wrote: > > > Hi all, > > > > > > In https://github.com/numpy/numpy/pull/11897 I am looking into > > > the > > > addition of a `copy=np.never_copy` argument to: > > > * np.array > > > * arr.reshape/np.reshape > > > * arr.astype > > > > > > Which would cause an error to be raised when numpy cannot > > > guarantee > > > that the returned array is a view of the input array. > > > The motivation is to easier avoid accidental copies of large > > > data, or > > > ensure that in-place manipulation will be meaningful. > > > > > > The copy flag API would be: > > > * `copy=True` forces a copy > > > * `copy=False` allows numpy to copy if necessary > > > * `copy=np.never_copy` will error if a copy would be necessary > > > * (almost) all other input will be deprecated. > > > > > > Unfortunately using `copy="never"` is tricky, because currently > > > `np.array(..., copy="never")` behaves exactly the same as > > > `np.array(..., copy=bool("never"))`. So that the wrong result > > > would be > > > given on old numpy versions and it would be a behaviour change. > > > > I think np.never_copy is really ugly. I'd much rather simply use > > 'never', and clearly document that if users start using this and > > they critically rely on it really being never, then they should > > ensure that their code is only used with numpy >= 1.17.0. 
> > > > Note also that this would not be a backwards compatibility break, > > because `copy` is now clearly documented as bool, and not bool_like > > or some such thing. So we do not need to worry about the very > > improbable case that users now are using `copy='never'`. > > > > If others think `copy='never'` isn't acceptable now, there are two > > other options: > > 1. add code first to catch `copy='never'` in 1.17.x and raise on > > it, then in a later numpy version introduce it. > > 2. just do nothing. I'd prefer that over `np.never_copy`. > > > > Cheers, > > Ralf > > > > > Some things that are a not so nice maybe: > > > * adding/using `np.never_copy` is not very nice > > > * Scalars need a copy and so will not be allowed > > > * For rare array-likes numpy may not be able to guarantee no- > > > copy, > > > although it could happen (but should not). > > > > > > > > > The history is that a long while ago I considered adding a copy > > > flag to > > > `reshape` so that it is possible to do `copy=np.never_copy` (or > > > similar) to ensure that no copy is made. In these, you may want > > > something like an assertion: > > > > > > ``` > > > new_arr = arr.reshape(new_shape) > > > assert np.may_share_memory(arr, new_arr) > > > > > > # Which is sometimes -- but should not be -- written as: > > > arr.shape = new_shape # unnecessary container modification > > > > > > # Or: > > > view = np.array(arr, order="F") > > > assert np.may_share_memory(arr, new_arr) > > > ``` > > > > > > but is more readable and will not cause an intermediate copy on > > > error. > > > > > > > > > So what do you think? Other variants would be to not expose this > > > for > > > `np.array` and probably limit `copy="never"` to the reshape > > > method. Or > > > just to not do it at all. Or to also accept "never" for > > > `reshape`, > > > although I think I would prefer to keep it in sync and wait for a > > > few > > > years to consider that. 
> > > > > > Best, > > > > > > Sebastian > > > > > > _______________________________________________ > > > NumPy-Discussion mailing list > > > NumPy-Discussion at python.org > > > https://mail.python.org/mailman/listinfo/numpy-discussion > > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at python.org > > https://mail.python.org/mailman/listinfo/numpy-discussion > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From jni.soma at gmail.com Thu Dec 27 18:26:14 2018 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Thu, 27 Dec 2018 18:26:14 -0500 Subject: [Numpy-discussion] =?utf-8?q?Add_guaranteed_no-copy_to_array_cre?= =?utf-8?q?ation_and_reshape=3F?= In-Reply-To: <58cb2bf7b0b058590687bbbba14b526c1c276a58.camel@sipsolutions.net> References: <629842683a77aed9acfe7ca2f00a55e164abf6ac.camel@sipsolutions.net> <58cb2bf7b0b058590687bbbba14b526c1c276a58.camel@sipsolutions.net> Message-ID: <9ad06acc-93f3-436c-8df6-20a7eb2b2643@www.fastmail.com> On Fri, Dec 28, 2018, at 3:40 AM, Sebastian Berg wrote: > > It?s consistent with np.newaxis in spelling (modulo the _) > > If mistyped, it can be caught easily by IDEs. > > It?s typeable with mypy, unlike constant string literals which > > currently aren?t > > If code written against new numpy is run on old numpy, it will error > > rather than doing the wrong thing > > I am not sure I think that the above are too strong arguments, since it > is not what is typically done for keywords (nobody suggests np.CLIP > instead of "clip"). Unless you feel this is different because it is a > mix of strings and bools here? :+1: to Eric's list. 
I don't think it's different because of the mix, I think it's different because deprecating things is painful. But now that we have good typing in Python, I think of string literals as an anti-pattern going forward. But, for me, the biggest reason is the silent different behaviour in old vs new NumPy. As a package maintainer this is just a nightmare. I have a lot of sympathy for the ugliness argument, but I don't think `np.never_copy` (or e.g. `np.modes.never`) is that much worse than a string. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Thu Dec 27 18:57:25 2018 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Thu, 27 Dec 2018 15:57:25 -0800 Subject: [Numpy-discussion] Add guaranteed no-copy to array creation and reshape? In-Reply-To: <9ad06acc-93f3-436c-8df6-20a7eb2b2643@www.fastmail.com> References: <629842683a77aed9acfe7ca2f00a55e164abf6ac.camel@sipsolutions.net> <58cb2bf7b0b058590687bbbba14b526c1c276a58.camel@sipsolutions.net> <9ad06acc-93f3-436c-8df6-20a7eb2b2643@www.fastmail.com> Message-ID: On Thu, Dec 27, 2018 at 3:27 PM Juan Nunez-Iglesias wrote: > On Fri, Dec 28, 2018, at 3:40 AM, Sebastian Berg wrote: > > > It?s consistent with np.newaxis in spelling (modulo the _) > > If mistyped, it can be caught easily by IDEs. > > It?s typeable with mypy, unlike constant string literals which > > currently aren?t > > If code written against new numpy is run on old numpy, it will error > > rather than doing the wrong thing > > I am not sure I think that the above are too strong arguments, since it > is not what is typically done for keywords (nobody suggests np.CLIP > instead of "clip"). Unless you feel this is different because it is a > mix of strings and bools here? > > > :+1: to Eric's list. I don't think it's different because of the mix, I > think it's different because deprecating things is painful. > Technically there's nothing we are deprecating. 
If anything, the one not super uncommon pattern will be that people use 0/1 instead of False/True, which works as expected now and will start raising in case we'd go with np.never_copy But now that we have good typing in Python, I think of string literals as > an anti-pattern going forward. > I don't quite get that, it'll still be the most Pythonic thing to do in many API design scenarios. Adding a new class instance with unusual behavior like raising on bool() is not even a pattern, it would just be an oddity. > But, for me, the biggest reason is the silent different behaviour in old > vs new NumPy. As a package maintainer this is just a nightmare. > That is the main downside indeed in this particular case. > I have a lot of sympathy for the ugliness argument, but I don't think > `np.never_copy` (or e.g. `np.modes.never`) is that much worse than a string. > The point is not that `.reshape(..., copy=np.never_copy)` is uglier (it is, but indeed just a little), but that we're adding a new object to the numpy namespace that's without precedent. If we'd do that regularly the API would really blow up. np.newaxis is not relevant here - it's a simple alias for None, is just there for code readability, and is much more widely applicable than np.never_copy would be. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
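Ralf's point about `np.newaxis` is easy to verify in current NumPy: it is literally `None`, so it adds readability without adding any behaviour of its own:

```python
import numpy as np

# np.newaxis is a plain alias for None:
assert np.newaxis is None

a = np.arange(3)
# Both spellings insert a new length-1 axis identically:
assert a[:, np.newaxis].shape == (3, 1)
assert a[:, None].shape == (3, 1)
```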
URL: From jni.soma at gmail.com Thu Dec 27 19:16:39 2018 From: jni.soma at gmail.com (Juan Nunez-Iglesias) Date: Thu, 27 Dec 2018 19:16:39 -0500 Subject: [Numpy-discussion] =?utf-8?q?Add_guaranteed_no-copy_to_array_cre?= =?utf-8?q?ation_and_reshape=3F?= In-Reply-To: References: <629842683a77aed9acfe7ca2f00a55e164abf6ac.camel@sipsolutions.net> <58cb2bf7b0b058590687bbbba14b526c1c276a58.camel@sipsolutions.net> <9ad06acc-93f3-436c-8df6-20a7eb2b2643@www.fastmail.com> Message-ID: <7d78d1f2-d42e-4bb5-ac20-3c0b87ad41da@www.fastmail.com> On Fri, Dec 28, 2018, at 10:58 AM, Ralf Gommers wrote: > > Technically there's nothing we are deprecating. If anything, the one not super uncommon pattern will be that people use 0/1 instead of False/True, which works as expected now and will start raising in case we'd go with np.never_copy Oh, no, here I meant, I'd consider `np.modes.CLIP` to be something worth advocating for if it wasn't for deprecation cycles and extremely widespread usage. >> But now that we have good typing in Python, I think of string literals as an anti-pattern going forward. > > I don't quite get that, it'll still be the most Pythonic thing to do in many API design scenarios. Adding a new class instance with unusual behavior like raising on bool() is not even a pattern, it would just be an oddity. Oh, no, I wouldn't suggest that the class raise on boolification, I suggest that the typing module warn if something other than a bool or an np.types.CopyMode is provided. Then you can have a useful typed function signature. >> I have a lot of sympathy for the ugliness argument, but I don't think `np.never_copy` (or e.g. `np.modes.never`) is that much worse than a string. > > The point is not that `.reshape(..., copy=np.never_copy)` is uglier (it is, but indeed just a little), but that we're adding a new object to the numpy namespace that's without precedent. If we'd do that regularly the API would really blow up. 
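The namespaced, typed mode Juan describes could look something like the following sketch. `CopyMode` and `resolve_copy` are invented names, not NumPy API; the point is only that a type checker can flag anything that is neither a bool nor a mode, while strings (ambiguous across versions) are rejected at runtime:

```python
from enum import Enum
from typing import Union

class CopyMode(Enum):
    ALWAYS = "always"          # today's copy=True
    IF_NEEDED = "if_needed"    # today's copy=False
    NEVER = "never"            # the proposed new mode

def resolve_copy(copy: Union[bool, CopyMode]) -> CopyMode:
    # Map the legacy booleans onto the enum; reject everything else,
    # including strings and 0/1, which would be ambiguous:
    if copy is True:
        return CopyMode.ALWAYS
    if copy is False:
        return CopyMode.IF_NEEDED
    if isinstance(copy, CopyMode):
        return copy
    raise TypeError(f"invalid copy flag: {copy!r}")

assert resolve_copy(True) is CopyMode.ALWAYS
assert resolve_copy(CopyMode.NEVER) is CopyMode.NEVER
```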
Sure, I totally agree here, which is why I suggested `np.modes.never` as an alternative. Namespaces are a honking great idea! -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Thu Dec 27 20:45:58 2018 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 27 Dec 2018 17:45:58 -0800 Subject: [Numpy-discussion] Add guaranteed no-copy to array creation and reshape? In-Reply-To: <9ad06acc-93f3-436c-8df6-20a7eb2b2643@www.fastmail.com> References: <629842683a77aed9acfe7ca2f00a55e164abf6ac.camel@sipsolutions.net> <58cb2bf7b0b058590687bbbba14b526c1c276a58.camel@sipsolutions.net> <9ad06acc-93f3-436c-8df6-20a7eb2b2643@www.fastmail.com> Message-ID: On Thu, Dec 27, 2018 at 3:27 PM Juan Nunez-Iglesias wrote: > > On Fri, Dec 28, 2018, at 3:40 AM, Sebastian Berg wrote: > > > It's consistent with np.newaxis in spelling (modulo the _) > > If mistyped, it can be caught easily by IDEs. > > It's typeable with mypy, unlike constant string literals which > > currently aren't > > If code written against new numpy is run on old numpy, it will error > > rather than doing the wrong thing > > I am not sure I think that the above are too strong arguments, since it > is not what is typically done for keywords (nobody suggests np.CLIP > instead of "clip"). Unless you feel this is different because it is a > mix of strings and bools here? > > > :+1: to Eric's list. I don't think it's different because of the mix, I think it's different because deprecating things is painful. But now that we have good typing in Python, I think of string literals as an anti-pattern going forward. I've certainly seen people argue that it's better to use proper enums in this kind of case instead of strings. The 'enum' package is even included in the stdlib on all our supported versions now: https://docs.python.org/3/library/enum.html I guess another possibility to throw out there would be a second kwarg, require_view=False/True. -n -- Nathaniel J.
Smith -- https://vorpus.org From sebastian at sipsolutions.net Fri Dec 28 08:39:58 2018 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Fri, 28 Dec 2018 14:39:58 +0100 Subject: [Numpy-discussion] Add guaranteed no-copy to array creation and reshape? In-Reply-To: References: <629842683a77aed9acfe7ca2f00a55e164abf6ac.camel@sipsolutions.net> <58cb2bf7b0b058590687bbbba14b526c1c276a58.camel@sipsolutions.net> <9ad06acc-93f3-436c-8df6-20a7eb2b2643@www.fastmail.com> Message-ID: <1496454a35d93bed6dece86fc38526148f48be98.camel@sipsolutions.net> On Thu, 2018-12-27 at 17:45 -0800, Nathaniel Smith wrote: > On Thu, Dec 27, 2018 at 3:27 PM Juan Nunez-Iglesias < > jni.soma at gmail.com> wrote: > > On Fri, Dec 28, 2018, at 3:40 AM, Sebastian Berg wrote: > > > > > It's consistent with np.newaxis in spelling (modulo the _) > > > If mistyped, it can be caught easily by IDEs. > > > It's typeable with mypy, unlike constant string literals which > > > currently aren't > > > If code written against new numpy is run on old numpy, it will > > > error > > > rather than doing the wrong thing > > > > I am not sure I think that the above are too strong arguments, > > since it > > is not what is typically done for keywords (nobody suggests np.CLIP > > instead of "clip"). Unless you feel this is different because it is > > a > > mix of strings and bools here? > > > > > > :+1: to Eric's list. I don't think it's different because of the > > mix, I think it's different because deprecating things is painful. > > But now that we have good typing in Python, I think of string > > literals as an anti-pattern going forward. > > I've certainly seen people argue that it's better to use proper > enums > in this kind of case instead of strings. The 'enum' package is even > included in the stdlib on all our supported versions now: > https://docs.python.org/3/library/enum.html > I am sympathetic with that, but it is something we (or scipy, etc.)
currently simply do not use, so I am not sure that I think it has much validity at this time. That is, at least unless we agree to aim to use this more generally in the future. > I guess another possibility to throw out there would be a second > kwarg, require_view=False/True. > My first gut feeling was that it is clumsy at least for `reshape`, but one will always only use one of the two arguments at a time. The more I look at it, the better the suggestion seems. Plus it reduces the possible `copy=False` not meaning "never" confusion. I think with a bit more pondering, that will become my favorite solution. - Sebastian > -n > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From matthias.geier at gmail.com Sat Dec 29 11:16:42 2018 From: matthias.geier at gmail.com (Matthias Geier) Date: Sat, 29 Dec 2018 17:16:42 +0100 Subject: [Numpy-discussion] Add guaranteed no-copy to array creation and reshape? In-Reply-To: <629842683a77aed9acfe7ca2f00a55e164abf6ac.camel@sipsolutions.net> References: <629842683a77aed9acfe7ca2f00a55e164abf6ac.camel@sipsolutions.net> Message-ID: Hi Sebastian. I don't have an opinion (yet) about this matter, but I have a question: On Thu, Dec 27, 2018 at 12:30 AM Sebastian Berg wrote: [...] > new_arr = arr.reshape(new_shape) > assert np.may_share_memory(arr, new_arr) > > # Which is sometimes -- but should not be -- written as: > arr.shape = new_shape # unnecessary container modification [...] Why is this discouraged? Why do you call this "unnecessary container modification"? I've used this idiom in the past for exactly those cases where I wanted to make sure no copy is made. And if we are not supposed to assign to arr.shape, why is it allowed in the first place?
cheers, Matthias From sebastian at sipsolutions.net Sat Dec 29 11:58:16 2018 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Sat, 29 Dec 2018 17:58:16 +0100 Subject: [Numpy-discussion] Add guaranteed no-copy to array creation and reshape? In-Reply-To: References: <629842683a77aed9acfe7ca2f00a55e164abf6ac.camel@sipsolutions.net> Message-ID: <429011baa0cb7f19f20db84dfc65276c76eaccf9.camel@sipsolutions.net> On Sat, 2018-12-29 at 17:16 +0100, Matthias Geier wrote: > Hi Sebastian. > > I don't have an opinion (yet) about this matter, but I have a > question: > > On Thu, Dec 27, 2018 at 12:30 AM Sebastian Berg wrote: > > [...] > > > new_arr = arr.reshape(new_shape) > > assert np.may_share_memory(arr, new_arr) > > > > # Which is sometimes -- but should not be -- written as: > > arr.shape = new_shape # unnecessary container modification > > [...] > > Why is this discouraged? > > Why do you call this "unnecessary container modification"? > > I've used this idiom in the past for exactly those cases where I > wanted to make sure no copy is made. > > And if we are not supposed to assign to arr.shape, why is it allowed > in the first place? Well, this may be a matter of taste, but say you have an object that stores an array: class MyObject: def __init__(self): self.myarr = some_array Now, let's say I do: def some_func(arr): # Do something with the array: arr.shape = -1 myobject = MyObject() some_func(myobject) then myobject will suddenly have the wrong shape stored. In most cases this is harmless, but I truly believe this is exactly why we have views and why they are so awesome. The content of arrays is mutable, but the array object itself should not be mutated normally. There may be some corner cases, but a lot of the "then why is it allowed" questions are answered with: for history reasons. By the way, on error the `arr.shape = ...` code currently creates the copy temporarily.
- Sebastian > > cheers, > Matthias > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From matthias.geier at gmail.com Sun Dec 30 10:03:08 2018 From: matthias.geier at gmail.com (Matthias Geier) Date: Sun, 30 Dec 2018 16:03:08 +0100 Subject: [Numpy-discussion] Add guaranteed no-copy to array creation and reshape? In-Reply-To: <429011baa0cb7f19f20db84dfc65276c76eaccf9.camel@sipsolutions.net> References: <629842683a77aed9acfe7ca2f00a55e164abf6ac.camel@sipsolutions.net> <429011baa0cb7f19f20db84dfc65276c76eaccf9.camel@sipsolutions.net> Message-ID: On Sat, Dec 29, 2018 at 6:00 PM Sebastian Berg wrote: > > On Sat, 2018-12-29 at 17:16 +0100, Matthias Geier wrote: > > Hi Sebastian. > > > > I don't have an opinion (yet) about this matter, but I have a > > question: > > > > On Thu, Dec 27, 2018 at 12:30 AM Sebastian Berg wrote: > > > > [...] > > > > > new_arr = arr.reshape(new_shape) > > > assert np.may_share_memory(arr, new_arr) > > > > > > # Which is sometimes -- but should not be -- written as: > > > arr.shape = new_shape # unnecessary container modification > > > > [...] > > > > Why is this discouraged? > > > > Why do you call this "unnecessary container modification"? > > > > I've used this idiom in the past for exactly those cases where I > > wanted to make sure no copy is made. > > > > And if we are not supposed to assign to arr.shape, why is it allowed > > in the first place? 
> > Well, this may be a matter of taste, but say you have an object > > that > > stores an array: > > > > class MyObject: > > def __init__(self): > > self.myarr = some_array > > > > > > Now, let's say I do: > > > > def some_func(arr): > > # Do something with the array: > > arr.shape = -1 > > > > myobject = MyObject() > > some_func(myobject) > > > > then myobject will suddenly have the wrong shape stored. In most > > cases > > this is harmless, but I truly believe this is exactly why we have > > views > > and why they are so awesome. > > The content of arrays is mutable, but the array object itself > > should > > not be mutated normally. Thanks for the example! I don't understand its point, though. Also, it's not working since MyObject doesn't have a .shape attribute. > There may be some corner cases, but a lot of the > "then why is it allowed" questions are answered with: for history > reasons. OK, that's a good point. > By the way, on error the `arr.shape = ...` code currently creates the > copy temporarily. That's interesting and it should probably be fixed. But it is not reason enough for me not to use it. I find it important that it doesn't make a copy in the success case, I don't care very much for the error case. Would you mind elaborating on the real reasons why I shouldn't use it? cheers, Matthias > > - Sebastian > > > > > > cheers, > > > Matthias > > > _______________________________________________ > > > NumPy-Discussion mailing list > > > NumPy-Discussion at python.org > > > https://mail.python.org/mailman/listinfo/numpy-discussion > > > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at python.org > > https://mail.python.org/mailman/listinfo/numpy-discussion From sebastian at sipsolutions.net Sun Dec 30 11:23:39 2018 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Sun, 30 Dec 2018 17:23:39 +0100 Subject: [Numpy-discussion] Add guaranteed no-copy to array creation and reshape?
In-Reply-To: References: <629842683a77aed9acfe7ca2f00a55e164abf6ac.camel@sipsolutions.net> <429011baa0cb7f19f20db84dfc65276c76eaccf9.camel@sipsolutions.net> Message-ID: On Sun, 2018-12-30 at 16:03 +0100, Matthias Geier wrote: > On Sat, Dec 29, 2018 at 6:00 PM Sebastian Berg wrote: > > On Sat, 2018-12-29 at 17:16 +0100, Matthias Geier wrote: > > > Hi Sebastian. > > > > > > I don't have an opinion (yet) about this matter, but I have a > > > question: > > > > > > On Thu, Dec 27, 2018 at 12:30 AM Sebastian Berg wrote: > > > > > > [...] > > > > > > > new_arr = arr.reshape(new_shape) > > > > assert np.may_share_memory(arr, new_arr) > > > > > > > > # Which is sometimes -- but should not be -- written as: > > > > arr.shape = new_shape # unnecessary container modification > > > > > > [...] > > > > > > Why is this discouraged? > > > > > > Why do you call this "unnecessary container modification"? > > > > > > I've used this idiom in the past for exactly those cases where I > > > wanted to make sure no copy is made. > > > > > > And if we are not supposed to assign to arr.shape, why is it > > > allowed > > > in the first place? > > > > Well, this may be a matter of taste, but say you have an object > > that > > stores an array: > > > > class MyObject: > > def __init__(self): > > self.myarr = some_array > > > > > > Now, let's say I do: > > > > def some_func(arr): > > # Do something with the array: > > arr.shape = -1 > > > > myobject = MyObject() > > some_func(myobject) > > > > then myobject will suddenly have the wrong shape stored. In most > > cases > > this is harmless, but I truly believe this is exactly why we have > > views > > and why they are so awesome. > > The content of arrays is mutable, but the array object itself > > should > > not be mutated normally. > > Thanks for the example! I don't understand its point, though. > Also, it's not working since MyObject doesn't have a .shape > attribute. > The example should have called `some_func(myobject.myarr)`.
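With that correction applied, the sketch runs as intended. Here is a self-contained version (the concrete `np.zeros((3, 4))` stands in for the unspecified `some_array` in the original example):

```python
import numpy as np

class MyObject:
    def __init__(self):
        # Stand-in for `some_array`; any concrete array will do.
        self.myarr = np.zeros((3, 4))

def some_func(arr):
    # Mutates the array object itself: the caller's stored
    # reference sees the new shape too.
    arr.shape = -1

myobject = MyObject()
some_func(myobject.myarr)
# The array stored inside myobject has silently been flattened:
print(myobject.myarr.shape)  # (12,)
```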
The thing is that if you have more references to the same array around, you change all their shapes. And if those other references are there for a reason, that is not what you want. That does not matter much in most cases, but it could change the shape of an array in a completely different place than intended. Creating a new view is cheap, so I think such things should be avoided. I admit, most code will effectively do: arr = input_arr[...] # create a new view arr.shape = ... so that there is no danger. But conceptually, I do not think there should be a danger of magically changing the shape of a stored array in a different part of the code. Does that make some sense? Maybe shorter example: arr = np.arange(10) arr2 = arr arr2.shape = (5, 2) print(arr.shape) # also (5, 2) so the arr container (shape, dtype) is changed/mutated. I think we expect that for content here, but not for the shape. - Sebastian > > There may be some corner cases, but a lot of the > > "then why is it allowed" questions are answered with: for history > > reasons. > > OK, that's a good point. > > > By the way, on error the `arr.shape = ...` code currently creates > > the > > copy temporarily. > > That's interesting and it should probably be fixed. > But it is not reason enough for me not to use it. > I find it important that it doesn't make a copy in the success case, > I > don't care very much for the error case. > > Would you mind elaborating on the real reasons why I shouldn't use > it?
> > cheers, > Matthias > > > - Sebastian > > > > > > > cheers, > > > Matthias > > > _______________________________________________ > > > NumPy-Discussion mailing list > > > NumPy-Discussion at python.org > > > https://mail.python.org/mailman/listinfo/numpy-discussion > > > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at python.org > > https://mail.python.org/mailman/listinfo/numpy-discussion > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL:
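The distinction at the heart of this sub-thread -- `reshape` returning a new view versus `.shape` assignment mutating the array object in place -- can be demonstrated in a few lines (a minimal sketch; the arrays are illustrative):

```python
import numpy as np

# reshape returns a *new* array object; the original keeps its shape,
# and no data is copied when a view is possible.
a = np.arange(10)
b = a.reshape(5, 2)
print(a.shape, b.shape)           # (10,) (5, 2)
print(np.may_share_memory(a, b))  # True -- no copy was made

# Assigning to .shape mutates the array object itself, so every name
# bound to that same object observes the new shape.
c = np.arange(10)
d = c             # same object, not a view
c.shape = (5, 2)
print(d.shape)    # (5, 2) -- changed through the other name
```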