From cimrman3 at ntc.zcu.cz Wed Dec 7 04:37:03 2016
From: cimrman3 at ntc.zcu.cz (Robert Cimrman)
Date: Wed, 7 Dec 2016 10:37:03 +0100
Subject: [Numpy-discussion] ANN: SfePy 2016.4
Message-ID: <1505540b-41a7-8568-10f1-c8afbac661b2@ntc.zcu.cz>

I am pleased to announce release 2016.4 of SfePy.

Description
-----------

SfePy (simple finite elements in Python) is software for solving systems of coupled partial differential equations by the finite element method or by isogeometric analysis (limited support). It is distributed under the new BSD license.

Home page: http://sfepy.org
Mailing list: http://groups.google.com/group/sfepy-devel
Git (source) repository, issue tracker: https://github.com/sfepy/sfepy

Highlights of this release
--------------------------

- support tensor product element meshes with one-level hanging nodes
- improve homogenization support for large deformations
- parallel calculation of homogenized coefficients and related sub-problems
- evaluation of second derivatives of Lagrange basis functions

For full release notes see http://docs.sfepy.org/doc/release_notes.html#id1 (rather long and technical).

Cheers,
Robert Cimrman

---

Contributors to this release in alphabetical order:

Robert Cimrman
Vladimir Lukes
Matyas Novak

From stuart at stuartreynolds.net Wed Dec 14 14:45:07 2016
From: stuart at stuartreynolds.net (Stuart Reynolds)
Date: Wed, 14 Dec 2016 11:45:07 -0800
Subject: [Numpy-discussion] Possible to pickle new state in NDArray subclasses?
Message-ID:

I'm trying to subclass an NDArray as shown here: https://docs.scipy.org/doc/numpy/user/basics.subclassing.html

My problem is that when I save the new class' state with pickle, the new attributes are lost. I don't seem to be able to override __getstate__ or __setstate__ to achieve this?

Is it possible to allow new state to be serialized when overriding an NDArray?

In my example below, __setstate__ gets called by pickle but not __getstate__.
In the final line, a RealisticInfoArray has been deserialized, but it has no .info attribute. ---- import cPickle as pickle import numpy as np class RealisticInfoArray(np.ndarray): def __new__(cls, arr, info): obj = np.asarray(arr).view(cls) obj.info = info return obj def __array_finalize__(self, obj): if obj is None: return self.info = getattr(obj,"info",None) def __setstate__(self, *args): print "SET" return np.ndarray.__setstate__(self,*args) def __getstate__(self): print "GET" assert False, "EXPLODE" return np.ndarray.__getstate__(self) arr = np.zeros((2,3), int) arr = RealisticInfoArray(arr, "blarg") print arr.info arr2 = pickle.loads(pickle.dumps(arr)) print arr2.info # no .info attribute! -------------- next part -------------- An HTML attachment was scrubbed... URL: From nathan12343 at gmail.com Wed Dec 14 14:51:26 2016 From: nathan12343 at gmail.com (Nathan Goldbaum) Date: Wed, 14 Dec 2016 13:51:26 -0600 Subject: [Numpy-discussion] Possible to pickle new state in NDArray subclasses? In-Reply-To: References: Message-ID: I'm able to do this in my ndarrary subclass using __reduce__ and __setstate__: https://bitbucket.org/yt_analysis/yt/src/yt/yt/units/yt_array.py#yt_array.py-1250 Here it's being used to save the unit information into the pickle for a unit-aware ndarray subclass. On Wed, Dec 14, 2016 at 1:45 PM, Stuart Reynolds wrote: > I'm trying to subclass an NDArray as shown here: > https://docs.scipy.org/doc/numpy/user/basics.subclassing.html > > My problem is that when I save the new class' state with pickle, the new > attributes are lost. I don't seem to be able to override __getstate__ or > __setstate__ to achieve this? > > Is it possible to allow new state to serialized when overriding an NDArray? > > In my example below, __setstate__ gets called by pickle but not > __getstate__. > In the final line, a RealisticInfoArray has been deserialized, but it has > no .info attribute. 
> > ---- > > import cPickle as pickle > import numpy as np > > class RealisticInfoArray(np.ndarray): > def __new__(cls, arr, info): > obj = np.asarray(arr).view(cls) > obj.info = info > return obj > > def __array_finalize__(self, obj): > if obj is None: return > self.info = getattr(obj,"info",None) > > def __setstate__(self, *args): > print "SET" > return np.ndarray.__setstate__(self,*args) > > def __getstate__(self): > print "GET" > assert False, "EXPLODE" > return np.ndarray.__getstate__(self) > > arr = np.zeros((2,3), int) > arr = RealisticInfoArray(arr, "blarg") > print arr.info > arr2 = pickle.loads(pickle.dumps(arr)) > print arr2.info # no .info attribute! > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > https://mail.scipy.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stuart at stuartreynolds.net Wed Dec 14 17:05:18 2016 From: stuart at stuartreynolds.net (Stuart Reynolds) Date: Wed, 14 Dec 2016 14:05:18 -0800 Subject: [Numpy-discussion] Possible to pickle new state in NDArray subclasses? In-Reply-To: References: Message-ID: Works great! Thank you. On Wed, Dec 14, 2016 at 11:51 AM, Nathan Goldbaum wrote: > I'm able to do this in my ndarrary subclass using __reduce__ and > __setstate__: > > https://bitbucket.org/yt_analysis/yt/src/yt/yt/units/ > yt_array.py#yt_array.py-1250 > > Here it's being used to save the unit information into the pickle for a > unit-aware ndarray subclass. > > On Wed, Dec 14, 2016 at 1:45 PM, Stuart Reynolds < > stuart at stuartreynolds.net> wrote: > >> I'm trying to subclass an NDArray as shown here: >> https://docs.scipy.org/doc/numpy/user/basics.subclassing.html >> >> My problem is that when I save the new class' state with pickle, the new >> attributes are lost. I don't seem to be able to override __getstate__ or >> __setstate__ to achieve this? 
>> >> Is it possible to allow new state to serialized when overriding an >> NDArray? >> >> In my example below, __setstate__ gets called by pickle but not >> __getstate__. >> In the final line, a RealisticInfoArray has been deserialized, but it has >> no .info attribute. >> >> ---- >> >> import cPickle as pickle >> import numpy as np >> >> class RealisticInfoArray(np.ndarray): >> def __new__(cls, arr, info): >> obj = np.asarray(arr).view(cls) >> obj.info = info >> return obj >> >> def __array_finalize__(self, obj): >> if obj is None: return >> self.info = getattr(obj,"info",None) >> >> def __setstate__(self, *args): >> print "SET" >> return np.ndarray.__setstate__(self,*args) >> >> def __getstate__(self): >> print "GET" >> assert False, "EXPLODE" >> return np.ndarray.__getstate__(self) >> >> arr = np.zeros((2,3), int) >> arr = RealisticInfoArray(arr, "blarg") >> print arr.info >> arr2 = pickle.loads(pickle.dumps(arr)) >> print arr2.info # no .info attribute! >> >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> https://mail.scipy.org/mailman/listinfo/numpy-discussion >> >> > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > https://mail.scipy.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sun Dec 18 20:21:56 2016 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 18 Dec 2016 18:21:56 -0700 Subject: [Numpy-discussion] PyPI source files. Message-ID: Hi All, It seems that PyPI will only accept one source file at this time, e.g., numpy-1.11.3.zip and numpy-1.11.3.tar.gz are considered duplicates. Does anyone know if this is intentional or a bug on the PyPI end? It makes sense in a screwy sort of way. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
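The ``__reduce__``/``__setstate__`` pattern Nathan points to can be sketched in a few lines. This is a minimal Python 3 sketch (the thread's code is Python 2; ``InfoArray`` is an illustrative name, not a NumPy API): the extra attribute is appended to the state tuple built by ``ndarray.__reduce__`` and peeled off again before delegating in ``__setstate__``:

```python
import pickle
import numpy as np

class InfoArray(np.ndarray):
    """ndarray subclass whose extra ``info`` attribute survives pickling."""

    def __new__(cls, arr, info=None):
        obj = np.asarray(arr).view(cls)
        obj.info = info
        return obj

    def __array_finalize__(self, obj):
        if obj is None:
            return
        self.info = getattr(obj, 'info', None)

    def __reduce__(self):
        # ndarray.__reduce__ returns (reconstructor, args, state);
        # append our attribute to the state tuple.
        reconstruct, args, state = super().__reduce__()
        return reconstruct, args, state + (self.info,)

    def __setstate__(self, state):
        # Peel our attribute off, hand the rest to ndarray.
        self.info = state[-1]
        super().__setstate__(state[:-1])

arr = InfoArray(np.zeros((2, 3), int), info="blarg")
arr2 = pickle.loads(pickle.dumps(arr))
print(arr2.info)  # prints: blarg
```

Unlike ``__getstate__``, ``__reduce__`` is actually consulted by pickle for ndarray subclasses, which is why this round trip preserves the attribute.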
URL:

From njs at pobox.com Sun Dec 18 20:39:19 2016
From: njs at pobox.com (Nathaniel Smith)
Date: Sun, 18 Dec 2016 17:39:19 -0800
Subject: [Numpy-discussion] PyPI source files.
In-Reply-To: References: Message-ID:

On Sun, Dec 18, 2016 at 5:21 PM, Charles R Harris wrote:
> Hi All,
>
> It seems that PyPI will only accept one source file at this time, e.g.,
> numpy-1.11.3.zip and numpy-1.11.3.tar.gz are considered duplicates. Does
> anyone know if this is intentional or a bug on the PyPI end? It makes sense
> in a screwy sort of way.

It's intentional: see PEP 527 and in particular: https://www.python.org/dev/peps/pep-0527/#limiting-number-of-sdists-per-release

-n

--
Nathaniel J. Smith -- https://vorpus.org

From charlesr.harris at gmail.com Sun Dec 18 21:19:22 2016
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 18 Dec 2016 19:19:22 -0700
Subject: [Numpy-discussion] PyPI source files.
In-Reply-To: References: Message-ID:

On Sun, Dec 18, 2016 at 6:39 PM, Nathaniel Smith wrote:
> On Sun, Dec 18, 2016 at 5:21 PM, Charles R Harris
> wrote:
> > Hi All,
> >
> > It seems that PyPI will only accept one source file at this time, e.g.,
> > numpy-1.11.3.zip and numpy-1.11.3.tar.gz are considered duplicates. Does
> > anyone know if this is intentional or a bug on the PyPI end? It makes
> sense
> > in a screwy sort of way.
>
> It's intentional: see PEP 527 and in particular:
> https://www.python.org/dev/peps/pep-0527/#limiting-number-of-sdists-per-release

Thanks for the info Nathaniel ;)

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From charlesr.harris at gmail.com Sun Dec 18 21:24:50 2016
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Sun, 18 Dec 2016 19:24:50 -0700
Subject: [Numpy-discussion] NumPy 1.11.3 release.
Message-ID:

Hi All,

I'm pleased to announce the release of NumPy 1.11.3.
This is a one bug fix release to take care of a bug that could corrupt large files opened in append mode and then used as an argument to ndarray.tofile. Thanks to Pavel Potocek for the fix. Cheers, Chuck -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 ========================== NumPy 1.11.3 Release Notes ========================== Numpy 1.11.3 fixes a bug that leads to file corruption when very large files opened in append mode are used in ``ndarray.tofile``. It supports Python versions 2.6 - 2.7 and 3.2 - 3.5. Wheels for Linux, Windows, and OS X can be found on PyPI. Contributors to maintenance/1.11.3 ================================== A total of 2 people contributed to this release. People with a "+" by their names contributed a patch for the first time. - - Charles Harris - - Pavel Potocek + Pull Requests Merged ==================== - - `#8341 `__: BUG: Fix ndarray.tofile large file corruption in append mode. - - `#8346 `__: TST: Fix tests in PR #8341 for NumPy 1.11.x Checksums ========= MD5 ~~~ f36503c6665701e1ca0fd2953b6419dd numpy-1.11.3-cp27-cp27m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl ada01f12b747c0669be00be843fde6dd numpy-1.11.3-cp27-cp27m-manylinux1_i686.whl e3f454dc204b90015e4d8991b12069fb numpy-1.11.3-cp27-cp27m-manylinux1_x86_64.whl cccfb3f765fa2eb4759590467a5f3fb1 numpy-1.11.3-cp27-cp27mu-manylinux1_i686.whl 479c0c8b50ab0ed4acca0a66887fe74c numpy-1.11.3-cp27-cp27mu-manylinux1_x86_64.whl 110b93cc26ca556b075316bee81f8652 numpy-1.11.3-cp27-none-win32.whl 33bfb4c5f5608d3966a6600fa3d7623c numpy-1.11.3-cp27-none-win_amd64.whl 81df8e91c06595572583cd67fcb7d68f numpy-1.11.3-cp34-cp34m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl 194d8903cb3fd3b17af4093089b1a154 numpy-1.11.3-cp34-cp34m-manylinux1_i686.whl 837d9d7c911d4589172d19d0d8fb4eaf numpy-1.11.3-cp34-cp34m-manylinux1_x86_64.whl f6b24305ab3edba245106b49b97fd9d7 numpy-1.11.3-cp34-none-win32.whl 
2f3fdd08d9ad43304d67c16182ff92de numpy-1.11.3-cp34-none-win_amd64.whl f90839ad86e3ccda9a409ce93ca1cccc numpy-1.11.3-cp35-cp35m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl 3b2268154e405f895402cbd4cbcaad7a numpy-1.11.3-cp35-cp35m-manylinux1_i686.whl 3d6754274af48c1c19154dd370ddb569 numpy-1.11.3-cp35-cp35m-manylinux1_x86_64.whl f8b64f46cc0e9a3fc877f24efd5e3b7c numpy-1.11.3-cp35-none-win32.whl b1a53851dde805a233e6c4eafe116e82 numpy-1.11.3-cp35-none-win_amd64.whl b8a9dec6901c046edaea706bad1448b1 numpy-1.11.3.tar.gz aa70cd5bba81b78382694d654ed10036 numpy-1.11.3.zip SHA256 ~~~~~~ 5941d3dbd0afed1ecd3746c0371b2a8b79977d084004cc320c2a4cf9d88589d8 numpy-1.11.3-cp27-cp27m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl ca37b5bebcc4ebde39dfbff0bda69fdc28785a8ff21155fd7adacf473c7b40dd numpy-1.11.3-cp27-cp27m-manylinux1_i686.whl 276cbb35b69eb2f0d5f264b7c71bdc1f4e91ecd3125d32cd1839873268239892 numpy-1.11.3-cp27-cp27m-manylinux1_x86_64.whl 1226e259d796207e8ef36762dce139e7da1cc0bb78f5d54e739252acd07834e5 numpy-1.11.3-cp27-cp27mu-manylinux1_i686.whl 674d0c1318890357f27ce3a8939e643eaf55140cfb8e84730aeee1dd769b0c21 numpy-1.11.3-cp27-cp27mu-manylinux1_x86_64.whl f8b30c76e0f805da7ea641f52c3f6bade55d50a0767f9c89c50e4c42b2a1b34c numpy-1.11.3-cp27-none-win32.whl 8cd184b0341e1db3a5619c85f875ce511ef0eb7ec01ec320116959a3de77f1b8 numpy-1.11.3-cp27-none-win_amd64.whl f0824beb03aff58d4062508b1dd4f737f08f5d2369f25a73c2350fe081beab2c numpy-1.11.3-cp34-cp34m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl 9e4228ac322743dea101a90305ee6d54b4bf82f15d6499e55d1d9cef17bccdbb numpy-1.11.3-cp34-cp34m-manylinux1_i686.whl 195604fc19a9333f3342fcad93094b6a21bc6e6b28d7bfec14d120cb4391a032 numpy-1.11.3-cp34-cp34m-manylinux1_x86_64.whl 71a6aa8b8c9f666b541208d38b30c84df1666e4cc02fb33b59086aaea10affad numpy-1.11.3-cp34-none-win32.whl 
135586ce1966dbecd9494ba30cb9beca93fad323ef9264c21efc2a0b59e449d2 numpy-1.11.3-cp34-none-win_amd64.whl cca8af884cbf220656ca2f8f9120a634e5cfb5fdcb0a21fd83ec279cc4f46654 numpy-1.11.3-cp35-cp35m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl ab810c942ead3f5988a7bef95dc6e85b586b6e814b83d571dfbca879e245bd45 numpy-1.11.3-cp35-cp35m-manylinux1_i686.whl 7c6eb737dc3d53977c558d57625dfbecd9900a5807ff17edd6842a102cb95c3b numpy-1.11.3-cp35-cp35m-manylinux1_x86_64.whl ab2af03dabecb97de27badfa944c56d799774a1fa975d52083197bb81858b742 numpy-1.11.3-cp35-none-win32.whl dd1800ec19192fd853bc255917eb3ecb34de268551b9c561f36d089023883807 numpy-1.11.3-cp35-none-win_amd64.whl 6e89f41217028452977cddb2a6c614e2210214bf3efb8494e7a9137b26985d41 numpy-1.11.3.tar.gz 2e0fc5248246a64628656fe14fcab0a959741a2820e003bd15538226501b82f7 numpy-1.11.3.zip -----BEGIN PGP SIGNATURE----- iQEcBAEBAgAGBQJYVz6CAAoJEGefIoN3xSR7PUsH/iK2boFNPG2x6RQ2bIvYW/eC iDw5Aewhv5VJRch8QUDAJAVX228Zw4rKY4mxgyXMBvd2ZGL0+E6lMSlooK9r4Sz+ y+lnHyWFc1UjxgQla46TUV77l8PBMjXUKgPQl+Whp8YYwqKd5Q1ZYmDcbNwuNZNc lw7aRcVMF2/hv02B+SYaxp2eo7VQVYf6OJQ3a2Ya8bdLUr2M5kUbVeUk4fpUhQXT FX3E0jFlxELm3y6YkC3mxgyp9kAOOn0d2M7y9IjufOaj7F+oCg+uHMfPTK6EqnoU 7T08Hy8TSdyoUY4ueRsoj3Ns/iwaM88iR133A4UR7vbNkaIDkVhCE/isrdvwbwk= =SJME -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From brstone at gmail.com Mon Dec 19 18:25:33 2016 From: brstone at gmail.com (Brenton R S Recht) Date: Mon, 19 Dec 2016 18:25:33 -0500 Subject: [Numpy-discussion] in1d, but preserve shape of ar1 Message-ID: I started an enhancement request in the Github bug tracker at https://github.com/numpy/numpy/issues/8331 , but Jaime Frio recommended I bring it to the mailing list. `in1d` takes two arrays, `ar1` and `ar2`, and returns a 1d array with the same number of elements as `ar1`. 
The logical extension would be a function that does the same thing but returns a (possibly multi-dimensional) array of the same shape as `ar1`. The code already has a comment suggesting this could be done (see https://github.com/numpy/numpy/blob/master/numpy/lib/arraysetops.py#L444 ).

I agree that changing the behavior of the existing function isn't an option, since it would break backwards compatibility. I'm not sure adding an option keep_shape is good, since the name of the function ("1d") wouldn't match what it does (returns an array that might not be 1d). I think a new function is the way to go. This would be it, more or less:

def items_in(ar1, ar2, **kwargs):
    return np.in1d(ar1, ar2, **kwargs).reshape(ar1.shape)

Questions I have are:
* Function name? I was thinking something like `items_in` or `item_in`: the function returns whether each item in `ar1` is in `ar2`. Is "item" or "element" the right term here?
* Are there any other changes that need to happen in arraysetops.py? Or other files? I ask this because although the file says "Set operations for 1D numeric arrays" right at the top, it's growing increasingly not 1D: `unique` recently changed to operate on multidimensional arrays, and I'm proposing a multidimensional version of `in1d`. `ediff1d` could probably be tweaked into a version that operates along an axis the same way unique does now, fwiw. Mostly I want to know if I should put my code changes in this file or somewhere else.

Thanks,

-brsr
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From shoyer at gmail.com Mon Dec 19 20:43:41 2016
From: shoyer at gmail.com (Stephan Hoyer)
Date: Mon, 19 Dec 2016 17:43:41 -0800
Subject: [Numpy-discussion] in1d, but preserve shape of ar1
In-Reply-To: References: Message-ID:

I think this is a great idea! I agree that we need a new function. Because the new API is almost strictly superior, we should try to pick a more general name that we can encourage users to switch to from in1d.
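Brenton's proposed wrapper from the message above is small enough to exercise directly. A self-contained sketch (``items_in`` is the thread's working name, not an existing NumPy function at the time):

```python
import numpy as np

def items_in(ar1, ar2, **kwargs):
    """Like np.in1d, but the boolean result keeps the shape of ar1."""
    ar1 = np.asarray(ar1)
    return np.in1d(ar1, ar2, **kwargs).reshape(ar1.shape)

a = np.array([[1, 2], [3, 4]])
print(items_in(a, [2, 4]))
# [[False  True]
#  [False  True]]
```

Where plain ``np.in1d`` would return a flat length-4 array here, the wrapper preserves ``a``'s (2, 2) shape.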
Pandas calls this method "isin", which I think is a perfectly good name for the multi-dimensional NumPy version, too: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html

It's a subjective call, but I would probably keep the new function in arraysetops.py. (This is the sort of question well suited to GitHub rather than the mailing list, though.)

On Mon, Dec 19, 2016 at 3:25 PM, Brenton R S Recht wrote:
> I started an enhancement request in the Github bug tracker at
> https://github.com/numpy/numpy/issues/8331 , but Jaime Frio recommended I
> bring it to the mailing list.
>
> `in1d` takes two arrays, `ar1` and `ar2`, and returns a 1d array with the
> same number of elements as `ar1`. The logical extension would be a function
> that does the same thing but returns a (possibly multi-dimensional) array
> of the same shape as `ar1`. The code already has a comment suggesting this
> could be done (see https://github.com/numpy/numpy/blob/master/numpy/lib/arraysetops.py#L444 ).
>
> I agree that changing the behavior of the existing function isn't an
> option, since it would break backwards compatibility. I'm not sure adding
> an option keep_shape is good, since the name of the function ("1d")
> wouldn't match what it does (returns an array that might not be 1d). I
> think a new function is the way to go. This would be it, more or less:
>
> def items_in(ar1, ar2, **kwargs):
>     return np.in1d(ar1, ar2, **kwargs).reshape(ar1.shape)
>
> Questions I have are:
> * Function name? I was thinking something like `items_in` or `item_in`:
> the function returns whether each item in `ar1` is in `ar2`. Is "item" or
> "element" the right term here?
> * Are there any other changes that need to happen in arraysetops.py? Or
> other files?
I ask this because although the file says "Set operations for
> 1D numeric arrays" right at the top, it's growing increasingly not 1D:
> `unique` recently changed to operate on multidimensional arrays, and I'm
> proposing a multidimensional version of `in1d`. `ediff1d` could probably be
> tweaked into a version that operates along an axis the same way unique does
> now, fwiw. Mostly I want to know if I should put my code changes in this
> file or somewhere else.
>
> Thanks,
>
> -brsr
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From charlesr.harris at gmail.com Mon Dec 19 20:43:48 2016
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Mon, 19 Dec 2016 18:43:48 -0700
Subject: [Numpy-discussion] NumPy 1.12.0rc1
Message-ID:

Hi All,

I am pleased to announce the release of NumPy 1.12.0rc1. This release supports Python 2.7 and 3.4 - 3.6 and is the result of 406 pull requests submitted by 139 contributors and comprises a large number of fixes and improvements. Among the many improvements it is difficult to pick out just a few as standing above the others, but the following may be of particular interest or indicate areas likely to have future consequences.

* Order of operations in ``np.einsum`` can now be optimized for large speed improvements.
* New ``signature`` argument to ``np.vectorize`` for vectorizing with core dimensions.
* The ``keepdims`` argument was added to many functions.
* New context manager for testing warnings
* Support for BLIS in numpy.distutils
* Much improved support for PyPy (not yet finished)

The release notes are quite sizable and rather than put them inline I've attached them as a file. They may also be viewed at Github. Zip files and tarballs may also be found at the Github link.
Wheels and a zip archive are available from PyPI, which is the recommended method of installation. Cheers, Charles Harris -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 ========================== NumPy 1.12.0 Release Notes ========================== This release supports Python 2.7 and 3.4 - 3.6. Highlights ========== The NumPy 1.12.0 release contains a large number of fixes and improvements, but few that stand out above all others. That makes picking out the highlights somewhat arbitrary but the following may be of particular interest or indicate areas likely to have future consequences. * Order of operations in ``np.einsum`` can now be optimized for large speed improvements. * New ``signature`` argument to ``np.vectorize`` for vectorizing with core dimensions. * The ``keepdims`` argument was added to many functions. * New context manager for testing warnings * Support for BLIS in numpy.distutils * Much improved support for PyPy (not yet finished) Dropped Support =============== * Support for Python 2.6, 3.2, and 3.3 has been dropped. Added Support ============= * Support for PyPy 2.7 v5.6.0 has been added. While not complete (nditer ``updateifcopy`` is not supported yet), this is a milestone for PyPy's C-API compatibility layer. Build System Changes ==================== * Library order is preserved, instead of being reordered to match that of the directories. Deprecations ============ Assignment of ndarray object's ``data`` attribute ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Assigning the 'data' attribute is an inherently unsafe operation as pointed out in gh-7083. Such a capability will be removed in the future. Unsafe int casting of the num attribute in ``linspace`` ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``np.linspace`` now raises DeprecationWarning when num cannot be safely interpreted as an integer. 
Insufficient bit width parameter to ``binary_repr``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If a 'width' parameter is passed into ``binary_repr`` that is insufficient to represent the number in base 2 (positive) or 2's complement (negative) form, the function used to silently ignore the parameter and return a representation using the minimal number of bits needed for the form in question. Such behavior is now considered unsafe from a user perspective and will raise an error in the future.

Future Changes
==============

* In 1.13 NAT will always compare False except for ``NAT != NAT``, which will be True. In short, NAT will behave like NaN
* In 1.13 np.average will preserve subclasses, to match the behavior of most other numpy functions such as np.mean. In particular, this means calls which returned a scalar may return a 0-d subclass object instead.

Multiple-field manipulation of structured arrays
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In 1.13 the behavior of structured arrays involving multiple fields will change in two ways:

First, indexing a structured array with multiple fields (eg, ``arr[['f1', 'f3']]``) will return a view into the original array in 1.13, instead of a copy. Note the returned view will have extra padding bytes corresponding to intervening fields in the original array, unlike the copy in 1.12, which will affect code such as ``arr[['f1', 'f3']].view(newdtype)``.

Second, for numpy versions 1.6 to 1.12 assignment between structured arrays occurs "by field name": Fields in the destination array are set to the identically-named field in the source array or to 0 if the source does not have a field::

    >>> a = np.array([(1,2),(3,4)], dtype=[('x', 'i4'), ('y', 'i4')])
    >>> b = np.ones(2, dtype=[('z', 'i4'), ('y', 'i4'), ('x', 'i4')])
    >>> b[:] = a
    >>> b
    array([(0, 2, 1), (0, 4, 3)],
          dtype=[('z', '<i4'), ('y', '<i4'), ('x', '<i4')])

``np.vectorize`` now takes a ``signature`` argument for vectorizing with core dimensions. This allows for vectorizing a much broader class of functions.
For example, an arbitrary distance metric that combines two vectors to produce a scalar could be vectorized with ``signature='(n),(n)->()'``. See ``np.vectorize`` for full details. Emit py3kwarnings for division of integer arrays ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ To help people migrate their code bases from Python 2 to Python 3, the python interpreter has a handy option -3, which issues warnings at runtime. One of its warnings is for integer division:: $ python -3 -c "2/3" -c:1: DeprecationWarning: classic int division In Python 3, the new integer division semantics also apply to numpy arrays. With this version, numpy will emit a similar warning:: $ python -3 -c "import numpy as np; np.array(2)/np.array(3)" -c:1: DeprecationWarning: numpy: classic int division numpy.sctypes now includes bytes on Python3 too ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Previously, it included str (bytes) and unicode on Python2, but only str (unicode) on Python3. Improvements ============ ``bitwise_and`` identity changed ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The previous identity was 1 with the result that all bits except the LSB were masked out when the reduce method was used. The new identity is -1, which should work properly on twos complement machines as all bits will be set to one. Generalized Ufuncs will now unlock the GIL ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Generalized Ufuncs, including most of the linalg module, will now unlock the Python global interpreter lock. Caches in `np.fft` are now bounded in total size and item count ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The caches in `np.fft` that speed up successive FFTs of the same length can no longer grow without bounds. They have been replaced with LRU (least recently used) caches that automatically evict no longer needed items if either the memory size or item count limit has been reached. 
Improved handling of zero-width string/unicode dtypes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Fixed several interfaces that explicitly disallowed arrays with zero-width string dtypes (i.e. ``dtype('S0')`` or ``dtype('U0')``), and fixed several bugs where such dtypes were not handled properly. In particular, changed ``ndarray.__new__`` to not implicitly convert ``dtype('S0')`` to ``dtype('S1')`` (and likewise for unicode) when creating new arrays.

Integer ufuncs vectorized with AVX2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If the CPU supports it at runtime the basic integer ufuncs now use AVX2 instructions. This feature is currently only available when compiled with GCC.

Order of operations optimization in ``np.einsum``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
``np.einsum`` now supports the ``optimize`` argument which will optimize the order of contraction. For example, ``np.einsum`` would complete the chain dot example ``np.einsum('ij,jk,kl->il', a, b, c)`` in a single pass which would scale like ``N^4``; however, when ``optimize=True`` ``np.einsum`` will create an intermediate array to reduce this scaling to ``N^3`` or effectively ``np.dot(a, b).dot(c)``. Usage of intermediate tensors to reduce scaling has been applied to the general einsum summation notation. See ``np.einsum_path`` for more details.

quicksort has been changed to an introsort
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The quicksort kind of ``np.sort`` and ``np.argsort`` is now an introsort which is regular quicksort but changing to a heapsort when not enough progress is made. This retains the good quicksort performance while changing the worst case runtime from ``O(N^2)`` to ``O(N*log(N))``.

``ediff1d`` improved performance and subclass handling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The ediff1d function uses an array instead of a flat iterator for the subtraction.
When to_begin or to_end is not None, the subtraction is performed in place to eliminate a copy operation. A side effect is that certain subclasses are handled better, namely astropy.Quantity, since the complete array is created, wrapped, and then begin and end values are set, instead of using concatenate.

Improved precision of ``ndarray.mean`` for float16 arrays
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The computation of the mean of float16 arrays is now carried out in float32 for improved precision. This should be useful in packages such as scikit-learn where the precision of float16 is adequate and its smaller footprint is desirable.

Changes
=======

All array-like methods are now called with keyword arguments in fromnumeric.py
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Internally, many array-like methods in fromnumeric.py were being called with positional arguments instead of keyword arguments as their external signatures were doing. This caused a complication in the downstream 'pandas' library that encountered an issue with 'numpy' compatibility. Now, all array-like methods in this module are called with keyword arguments instead.

Operations on np.memmap objects return numpy arrays in most cases
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Previously operations on a memmap object would misleadingly return a memmap instance even if the result was actually not memmapped. For example, ``arr + 1`` or ``arr + arr`` would return memmap instances, although no memory from the output array is memmapped. Version 1.12 returns ordinary numpy arrays from these operations. Also, reduction of a memmap (e.g. ``.sum(axis=None)``) now returns a numpy scalar instead of a 0d memmap.
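The memmap change described above is easy to check directly. A minimal sketch (the temporary file exists only for the demonstration; the types shown are the 1.12 behavior):

```python
import os
import tempfile
import numpy as np

# Back the memmap with a throwaway file.
path = os.path.join(tempfile.mkdtemp(), 'demo.dat')
mm = np.memmap(path, dtype='float64', mode='w+', shape=(4,))
mm[:] = [1.0, 2.0, 3.0, 4.0]

out = mm + 1               # arithmetic no longer wraps the result in a memmap
total = mm.sum(axis=None)  # reductions return a numpy scalar, not a 0d memmap

print(type(out))    # <class 'numpy.ndarray'>
print(type(total))  # <class 'numpy.float64'>
```

Before 1.12, both results would have claimed to be memmap instances even though none of their memory was backed by the file.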
stacklevel of warnings increased
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The stacklevel for python based warnings was increased so that most warnings will report the offending line of the user code instead of the line the warning itself is given. Passing of stacklevel is now tested to ensure that new warnings will receive the ``stacklevel`` argument. This causes warnings with the "default" or "module" filter to be shown once for every offending user code line or user module instead of only once. On python versions before 3.4, this can cause warnings to appear that were falsely ignored before, which may be surprising especially in test suites.

Contributors to maintenance/1.12.x
==================================

A total of 139 people contributed to this release. People with a "+" by their names contributed a patch for the first time.

- - Aditya Panchal - - Ales Erjavec + - - Alex Griffing - - Alexandr Shadchin + - - Alistair Muldal - - Allan Haldane - - Amit Aronovitch + - - Andrei Kucharavy + - - Antony Lee - - Antti Kaihola + - - Arne de Laat + - - Auke Wiggers + - - AustereCuriosity + - - Badhri Narayanan Krishnakumar + - - Ben North + - - Ben Rowland + - - Bertrand Lefebvre - - Boxiang Sun - - CJ Carey - - Charles Harris - - Christoph Gohlke - - Daniel Ching + - - Daniel Rasmussen + - - Daniel Smith + - - David Schaich + - - Denis Alevi + - - Devin Jeanpierre + - - Dmitry Odzerikho - - Dongjoon Hyun + - - Edward Richards + - - Ekaterina Tuzova + - - Emilien Kofman + - - Endolith - - Eren Sezener + - - Eric Moore - - Eric Quintero + - - Eric Wieser + - - Erik M.
Bray - - Frederic Bastien + - - Friedrich Dunne + - - Gerrit Holl - - Golnaz Irannejad + - - Graham Markall + - - Greg Knoll + - - Greg Young - - Gustavo Serra Scalet + - - Ines Wichert + - - Irvin Probst + - - Jaime Fernandez - - James Sanders + - - Jan David Mol + - - Jan Schl?ter - - Jeremy Tuloup + - - John Kirkham - - John Zwinck + - - Jonathan Helmus - - Joseph Fox-Rabinovitz - - Josh Wilson + - - Joshua Warner + - - Julian Taylor - - Ka Wo Chen + - - Kamil Rytarowski + - - Kelsey Jordahl + - - Kevin Deldycke + - - Khaled Ben Abdallah Okuda + - - Lion Krischer + - - Lo?c Est?ve + - - Luca Mussi + - - Mads Ohm Larsen + - - Manoj Kumar + - - Mario Emmenlauer + - - Marshall Bockrath-Vandegrift + - - Marshall Ward + - - Marten van Kerkwijk - - Mathieu Lamarre + - - Matthew Brett - - Matthew Harrigan + - - Matthias Geier - - Matti Picus + - - Meet Udeshi + - - Michael Felt + - - Michael Goerz + - - Michael Martin + - - Michael Seifert + - - Mike Nolta + - - Nathaniel Beaver + - - Nathaniel J. Smith - - Naveen Arunachalam + - - Nick Papior - - Nikola Forr? + - - Oleksandr Pavlyk + - - Olivier Grisel - - Oren Amsalem + - - Pauli Virtanen - - Pavel Potocek + - - Pedro Lacerda + - - Peter Creasey + - - Phil Elson + - - Philip Gura + - - Phillip J. 
Wolfram + - - Pierre de Buyl + - - Raghav RV + - - Ralf Gommers - - Ray Donnelly + - - Rehas Sachdeva - - Rob Malouf + - - Robert Kern - - Samuel St-Jean - - Sanchez Gonzalez Alvaro + - - Saurabh Mehta + - - Scott Sanderson + - - Sebastian Berg - - Shayan Pooya + - - Shota Kawabuchi + - - Simon Conseil - - Simon Gibbons - - Sorin Sbarnea + - - Stefan van der Walt - - Stephan Hoyer - - Steven J Kern + - - Stuart Archibald - - Tadeu Manoel + - - Takuya Akiba + - - Thomas A Caswell - - Tom Bird + - - Tony Kelman + - - Toshihiro Kamishima + - - Valentin Valls + - - Varun Nayyar - - Victor Stinner + - - Warren Weckesser - - Wendell Smith - - Wojtek Ruszczewski + - - Xavier Abellan Ecija + - - Yaroslav Halchenko - - Yash Shah + - - Yinon Ehrlich + - - Yu Feng + - - nevimov + Pull requests merged for maintenance/1.12.x =========================================== A total of 406 pull requests were merged for this release. - - `#4073 `__: BUG: change real output checking to test if all imaginary parts... - - `#4619 `__: BUG : np.sum silently drops keepdims for sub-classes of ndarray - - `#5488 `__: ENH: add `contract`: optimizing numpy's einsum expression - - `#5706 `__: ENH: make some masked array methods behave more like ndarray... - - `#5822 `__: Allow many distributions to have a scale of 0. - - `#6054 `__: WIP: MAINT: Add deprecation warning to views of multi-field indexes - - `#6298 `__: Check lower base limit in base_repr. - - `#6430 `__: Fix issues with zero-width string fields - - `#6656 `__: ENH: usecols now accepts an int when only one column has to be... 
- - `#6660 `__: Added pathlib support for several functions - - `#6872 `__: ENH: linear interpolation of complex values in lib.interp - - `#6997 `__: MAINT: Simplify mtrand.pyx helpers - - `#7003 `__: BUG: Fix string copying for np.place - - `#7026 `__: DOC: Clarify behavior in np.random.uniform - - `#7055 `__: BUG: One Element Array Inputs Return Scalars in np.random - - `#7063 `__: REL: Update master branch after 1.11.x branch has been made. - - `#7073 `__: DOC: Update the 1.11.0 release notes. - - `#7076 `__: MAINT: Update the git .mailmap file. - - `#7082 `__: TST, DOC: Added Broadcasting Tests in test_random.py - - `#7087 `__: BLD: fix compilation on non glibc-Linuxes - - `#7088 `__: BUG: Have `norm` cast non-floating point arrays to 64-bit float... - - `#7090 `__: ENH: Added 'doane' and 'sqrt' estimators to np.histogram in numpy.function_base - - `#7091 `__: Revert "BLD: fix compilation on non glibc-Linuxes" - - `#7092 `__: BLD: fix compilation on non glibc-Linuxes - - `#7099 `__: TST: Suppressed warnings - - `#7102 `__: MAINT: Removed conditionals that are always false in datetime_strings.c - - `#7105 `__: DEP: Deprecate as_strided returning a writable array as default - - `#7109 `__: DOC: update Python versions requirements in the install docs - - `#7114 `__: MAINT: Fix typos in docs - - `#7116 `__: TST: Fixed f2py test for win32 virtualenv - - `#7118 `__: TST: Fixed f2py test for non-versioned python executables - - `#7119 `__: BUG: Fixed mingw.lib error - - `#7125 `__: DOC: Updated documentation wording and examples for np.percentile. - - `#7129 `__: BUG: Fixed 'midpoint' interpolation of np.percentile in odd cases. - - `#7131 `__: Fix setuptools sdist - - `#7133 `__: ENH: savez: temporary file alongside with target file and improve... - - `#7134 `__: MAINT: Fix some typos in a code string and comments - - `#7141 `__: BUG: Unpickled void scalars should be contiguous - - `#7144 `__: MAINT: Change `call_fortran` into `callfortran` in comments. 
- - `#7145 `__: BUG: Fixed regressions in np.piecewise in ref to #5737 and #5729. - - `#7147 `__: Temporarily disable __numpy_ufunc__ - - `#7148 `__: ENH,TST: Bump stacklevel and add tests for warnings - - `#7149 `__: TST: Add missing suffix to temppath manager - - `#7152 `__: BUG: mode kwargs passed as unicode to np.pad raises an exception - - `#7156 `__: BUG: Reascertain that linspace respects ndarray subclasses in... - - `#7167 `__: DOC: Update Wikipedia references for mtrand.pyx - - `#7171 `__: TST: Fixed f2py test for Anaconda non-win32 - - `#7174 `__: DOC: Fix broken pandas link in release notes - - `#7177 `__: ENH: added axis param for np.count_nonzero - - `#7178 `__: BUG: Fix binary_repr for negative numbers - - `#7180 `__: BUG: Fixed previous attempt to fix dimension mismatch in nanpercentile - - `#7181 `__: DOC: Updated minor typos in function_base.py and test_function_base.py - - `#7191 `__: DOC: add vstack, hstack, dstack reference to stack documentation. - - `#7193 `__: MAINT: Removed supurious assert in histogram estimators - - `#7194 `__: BUG: Raise a quieter `MaskedArrayFutureWarning` for mask changes. - - `#7195 `__: STY: Drop some trailing spaces in `numpy.ma.core`. - - `#7196 `__: Revert "DOC: add vstack, hstack, dstack reference to stack documentation." - - `#7197 `__: TST: Pin virtualenv used on Travis CI. 
- - `#7198 `__: ENH: Unlock the GIL for gufuncs - - `#7199 `__: MAINT: Cleanup for histogram bin estimator selection - - `#7201 `__: Raise IOError on not a file in python2 - - `#7202 `__: MAINT: Made `iterable` return a boolean - - `#7209 `__: TST: Bump `virtualenv` to 14.0.6 - - `#7211 `__: DOC: Fix fmin examples - - `#7215 `__: MAINT: Use PySlice_GetIndicesEx instead of custom reimplementation - - `#7229 `__: ENH: implement __complex__ - - `#7231 `__: MRG: allow distributors to run custom init - - `#7232 `__: BLD: Switch order of test for lapack_mkl and openblas_lapack - - `#7239 `__: DOC: Removed residual merge markup from previous commit - - `#7240 `__: Change 'pubic' to 'public'. - - `#7241 `__: MAINT: update doc/sphinxext to numpydoc 0.6.0, and fix up some... - - `#7243 `__: ENH: Adding support to the range keyword for estimation of the... - - `#7246 `__: DOC: metion writeable keyword in as_strided in release notes - - `#7247 `__: TST: Fail quickly on AppVeyor for superseded PR builds - - `#7248 `__: DOC: remove link to documentation wiki editor from HOWTO_DOCUMENT. - - `#7250 `__: DOC,REL: Update 1.11.0 notes. - - `#7251 `__: BUG: only benchmark complex256 if it exists - - `#7252 `__: Forward port a fix and enhancement from 1.11.x - - `#7253 `__: DOC: note in h/v/dstack points users to stack/concatenate - - `#7254 `__: BUG: Enforce dtype for randint singletons - - `#7256 `__: MAINT: Use `is None` or `is not None` instead of `== None` or... - - `#7257 `__: DOC: Fix mismatched variable names in docstrings. - - `#7258 `__: ENH: Make numpy floor_divide and remainder agree with Python... - - `#7260 `__: BUG/TST: Fix #7259, do not "force scalar" for already scalar... 
- - `#7261 `__: Added self to mailmap - - `#7266 `__: BUG: Segfault for classes with deceptive __len__ - - `#7268 `__: ENH: add geomspace function - - `#7274 `__: BUG: Preserve array order in np.delete - - `#7275 `__: DEP: Warn about assigning 'data' attribute of ndarray - - `#7276 `__: DOC: apply_along_axis missing whitespace inserted (before colon) - - `#7278 `__: BUG: Make returned unravel_index arrays writeable - - `#7279 `__: TST: Fixed elements being shuffled - - `#7280 `__: MAINT: Remove redundant trailing semicolons. - - `#7285 `__: BUG: Make Randint Backwards Compatible with Pandas - - `#7286 `__: MAINT: Fix typos in docs/comments of `ma` and `polynomial` modules. - - `#7292 `__: Clarify error on repr failure in assert_equal. - - `#7294 `__: ENH: add support for BLIS to numpy.distutils - - `#7295 `__: DOC: understanding code and getting started section to dev doc - - `#7296 `__: Revert part of #3907 which incorrectly propogated MaskedArray... - - `#7299 `__: DOC: Fix mismatched variable names in docstrings. - - `#7300 `__: DOC: dev: stop recommending keeping local master updated with... - - `#7301 `__: DOC: Update release notes - - `#7305 `__: BUG: Remove data race in mtrand: two threads could mutate the... - - `#7307 `__: DOC: Missing some characters in link. - - `#7308 `__: BUG: Incrementing the wrong reference on return - - `#7310 `__: STY: Fix GitHub rendering of ordered lists >9 - - `#7311 `__: ENH: Make _pointer_type_cache functional - - `#7313 `__: DOC: corrected grammatical error in quickstart doc - - `#7325 `__: BUG, MAINT: Improve fromnumeric.py interface for downstream compatibility - - `#7328 `__: DEP: Deprecated using a float index in linspace - - `#7331 `__: Add comment, TST: fix MemoryError on win32 - - `#7332 `__: Check for no solution in np.irr Fixes #6744 - - `#7338 `__: TST: Install `pytz` in the CI. - - `#7340 `__: DOC: Fixed math rendering in tensordot docs. 
- - `#7341 `__: TST: Add test for #6469 - - `#7344 `__: DOC: Fix more typos in docs and comments. - - `#7346 `__: Generalized flip - - `#7347 `__: ENH Generalized rot90 - - `#7348 `__: Maint: Removed extra space from `ureduce` - - `#7349 `__: MAINT: Hide nan warnings for masked internal MA computations - - `#7350 `__: BUG: MA ufuncs should set mask to False, not array([False]) - - `#7351 `__: TST: Fix some MA tests to avoid looking at the .data attribute - - `#7358 `__: BUG: pull request related to the issue #7353 - - `#7359 `__: Update 7314, DOC: Clarify valid integer range for random.seed... - - `#7361 `__: MAINT: Fix copy and paste oversight. - - `#7363 `__: ENH: Make no unshare mask future warnings less noisy - - `#7366 `__: TST: fix #6542, add tests to check non-iterable argument raises... - - `#7373 `__: ENH: Add bitwise_and identity - - `#7378 `__: added NumPy logo and separator - - `#7382 `__: MAINT: cleanup np.average - - `#7385 `__: DOC: note about wheels / windows wheels for pypi - - `#7386 `__: Added label icon to Travis status - - `#7397 `__: BUG: incorrect type for objects whose __len__ fails - - `#7398 `__: DOC: fix typo - - `#7404 `__: Use PyMem_RawMalloc on Python 3.4 and newer - - `#7406 `__: ENH ufunc called on memmap return a ndarray - - `#7407 `__: BUG: Fix decref before incref for in-place accumulate - - `#7410 `__: DOC: add nanprod to the list of math routines - - `#7414 `__: Tweak corrcoef - - `#7415 `__: DOC: Documention fixes - - `#7416 `__: BUG: Incorrect handling of range in `histogram` with automatic... - - `#7418 `__: DOC: Minor typo fix, hermefik -> hermefit. - - `#7421 `__: ENH: adds np.nancumsum and np.nancumprod - - `#7423 `__: BUG: Ongoing fixes to PR#7416 - - `#7430 `__: DOC: Update 1.11.0-notes. - - `#7433 `__: MAINT: FutureWarning for changes to np.average subclass handling - - `#7437 `__: np.full now defaults to the filling value's dtype. - - `#7438 `__: Allow rolling multiple axes at the same time. 
- - `#7439 `__: BUG: Do not try sequence repeat unless necessary - - `#7442 `__: MANT: Simplify diagonal length calculation logic - - `#7445 `__: BUG: reference count leak in bincount, fixes #6805 - - `#7446 `__: DOC: ndarray typo fix - - `#7447 `__: BUG: scalar integer negative powers gave wrong results. - - `#7448 `__: DOC: array "See also" link to full and full_like instead of fill - - `#7456 `__: BUG: int overflow in reshape, fixes #7455, fixes #7293 - - `#7463 `__: BUG: fix array too big error for wide dtypes. - - `#7466 `__: BUG: segfault inplace object reduceat, fixes #7465 - - `#7468 `__: BUG: more on inplace reductions, fixes #615 - - `#7469 `__: MAINT: Update git .mailmap - - `#7472 `__: MAINT: Update .mailmap. - - `#7477 `__: MAINT: Yet more .mailmap updates for recent contributors. - - `#7481 `__: BUG: Fix segfault in PyArray_OrderConverter - - `#7482 `__: BUG: Memory Leak in _GenericBinaryOutFunction - - `#7489 `__: Faster real_if_close. - - `#7491 `__: DOC: Update subclassing doc regarding downstream compatibility - - `#7496 `__: BUG: don't use pow for integer power ufunc loops. - - `#7504 `__: DOC: remove "arr" from keepdims docstrings - - `#7505 `__: MAIN: fix to #7382, make scl in np.average writeable - - `#7507 `__: MAINT: Remove nose.SkipTest import. - - `#7508 `__: DOC: link frompyfunc and vectorize - - `#7511 `__: numpy.power(0, 0) should return 1 - - `#7515 `__: BUG: MaskedArray.count treats negative axes incorrectly - - `#7518 `__: BUG: Extend glibc complex trig functions blacklist to glibc <... - - `#7521 `__: DOC: rephrase writeup of memmap changes - - `#7522 `__: BUG: Fixed iteration over additional bad commands - - `#7526 `__: DOC: Removed an extra `:const:` - - `#7529 `__: BUG: Floating exception with invalid axis in np.lexsort - - `#7534 `__: MAINT: Update setup.py to reflect supported python versions. 
- - `#7536 `__: MAINT: Always use PyCapsule instead of PyCObject in mtrand.pyx - - `#7539 `__: MAINT: Cleanup of random stuff - - `#7549 `__: BUG: allow graceful recovery for no Liux compiler - - `#7562 `__: BUG: Fix test_from_object_array_unicode (test_defchararray.TestBasic)? - - `#7565 `__: BUG: Fix test_ctypeslib and test_indexing for debug interpreter - - `#7566 `__: MAINT: use manylinux1 wheel for cython - - `#7568 `__: Fix a false positive OverflowError in Python 3.x when value above... - - `#7579 `__: DOC: clarify purpose of Attributes section - - `#7584 `__: BUG: fixes #7572, percent in path - - `#7586 `__: Make np.ma.take works on scalars - - `#7587 `__: BUG: linalg.norm(): Don't convert object arrays to float - - `#7598 `__: Cast array size to int64 when loading from archive - - `#7602 `__: DOC: Remove isreal and iscomplex from ufunc list - - `#7605 `__: DOC: fix incorrect Gamma distribution parameterization comments - - `#7609 `__: BUG: Fix TypeError when raising TypeError - - `#7611 `__: ENH: expose test runner raise_warnings option - - `#7614 `__: BLD: Avoid using os.spawnve in favor of os.spawnv in exec_command - - `#7618 `__: BUG: distance arg of np.gradient must be scalar, fix docstring - - `#7626 `__: DOC: RST definition list fixes - - `#7627 `__: MAINT: unify tup processing, move tup use to after all PyTuple_SetItem... - - `#7630 `__: MAINT: add ifdef around PyDictProxy_Check macro - - `#7631 `__: MAINT: linalg: fix comment, simplify math - - `#7634 `__: BLD: correct C compiler customization in system_info.py Closes... - - `#7635 `__: BUG: ma.median alternate fix for #7592 - - `#7636 `__: MAINT: clean up testing.assert_raises_regexp, 2.6-specific code... - - `#7637 `__: MAINT: clearer exception message when importing multiarray fails. - - `#7639 `__: TST: fix a set of test errors in master. - - `#7643 `__: DOC : minor changes to linspace docstring - - `#7651 `__: BUG: one to any power is still 1. 
Broken edgecase for int arrays - - `#7655 `__: BLD: Remove Intel compiler flag -xSSE4.2 - - `#7658 `__: BUG: fix incorrect printing of 1D masked arrays - - `#7659 `__: BUG: Temporary fix for str(mvoid) for object field types - - `#7664 `__: BUG: Fix unicode with byte swap transfer and copyswap - - `#7667 `__: Restore histogram consistency - - `#7668 `__: ENH: Do not check the type of module.__dict__ explicit in test. - - `#7669 `__: BUG: boolean assignment no GIL release when transfer needs API - - `#7673 `__: DOC: Create Numpy 1.11.1 release notes. - - `#7675 `__: BUG: fix handling of right edge of final bin. - - `#7678 `__: BUG: Fix np.clip bug NaN handling for Visual Studio 2015 - - `#7679 `__: MAINT: Fix up C++ comment in arraytypes.c.src. - - `#7681 `__: DOC: Update 1.11.1 release notes. - - `#7686 `__: ENH: Changing FFT cache to a bounded LRU cache - - `#7688 `__: DOC: fix broken genfromtxt examples in user guide. Closes gh-7662. - - `#7689 `__: BENCH: add correlate/convolve benchmarks. - - `#7696 `__: DOC: update wheel build / upload instructions - - `#7699 `__: BLD: preserve library order - - `#7704 `__: ENH: Add bits attribute to np.finfo - - `#7712 `__: BUG: Fix race condition with new FFT cache - - `#7715 `__: BUG: Remove memory leak in np.place - - `#7719 `__: BUG: Fix segfault in np.random.shuffle for arrays of different... - - `#7723 `__: Change mkl_info.dir_env_var from MKL to MKLROOT - - `#7727 `__: DOC: Corrections in Datetime Units-arrays.datetime.rst - - `#7729 `__: DOC: fix typo in savetxt docstring (closes #7620) - - `#7733 `__: Update 7525, DOC: Fix order='A' docs of np.array. - - `#7734 `__: Update 7542, ENH: Add `polyrootval` to numpy.polynomial - - `#7735 `__: BUG: fix issue on OS X with Python 3.x where npymath.ini was... - - `#7739 `__: DOC: Mention the changes of #6430 in the release notes. 
- - `#7740 `__: DOC: add reference to poisson rng - - `#7743 `__: Update 7476, DEP: deprecate Numeric-style typecodes, closes #2148 - - `#7744 `__: DOC: Remove "ones_like" from ufuncs list (it is not) - - `#7746 `__: DOC: Clarify the effect of rcond in numpy.linalg.lstsq. - - `#7747 `__: Update 7672, BUG: Make sure we don't divide by zero - - `#7748 `__: DOC: Update float32 mean example in docstring - - `#7754 `__: Update 7612, ENH: Add broadcast.ndim to match code elsewhere. - - `#7757 `__: Update 7175, BUG: Invalid read of size 4 in PyArray_FromFile - - `#7759 `__: BUG: Fix numpy.i support for numpy API < 1.7. - - `#7760 `__: ENH: Make assert_almost_equal & assert_array_almost_equal consistent. - - `#7766 `__: fix an English typo - - `#7771 `__: DOC: link geomspace from logspace - - `#7773 `__: DOC: Remove a redundant the - - `#7777 `__: DOC: Update Numpy 1.11.1 release notes. - - `#7785 `__: DOC: update wheel building procedure for release - - `#7789 `__: MRG: add note of 64-bit wheels on Windows - - `#7791 `__: f2py.compile issues (#7683) - - `#7799 `__: "lambda" is not allowed to use as keyword arguments in a sample... - - `#7803 `__: BUG: interpret 'c' PEP3118/struct type as 'S1'. - - `#7807 `__: DOC: Misplaced parens in formula - - `#7817 `__: BUG: Make sure npy_mul_with_overflow_ detects overflow. - - `#7818 `__: numpy/distutils/misc_util.py fix for #7809: check that _tmpdirs... - - `#7820 `__: MAINT: Allocate fewer bytes for empty arrays. - - `#7823 `__: BUG: Fixed masked array behavior for scalar inputs to np.ma.atleast_*d - - `#7834 `__: DOC: Added an example - - `#7839 `__: Pypy fixes - - `#7840 `__: Fix ATLAS version detection - - `#7842 `__: Fix versionadded tags - - `#7848 `__: MAINT: Fix remaining uses of deprecated Python imp module. - - `#7853 `__: BUG: Make sure numpy globals keep identity after reload. 
- - `#7863 `__: ENH: turn quicksort into introsort - - `#7866 `__: Document runtests extra argv - - `#7871 `__: BUG: handle introsort depth limit properly - - `#7879 `__: DOC: fix typo in documentation of loadtxt (closes #7878) - - `#7885 `__: Handle NetBSD specific - - `#7889 `__: DOC: #7881. Fix link to record arrays - - `#7894 `__: fixup-7790, BUG: construct ma.array from np.array which contains... - - `#7898 `__: Spelling and grammar fix. - - `#7903 `__: BUG: fix float16 type not being called due to wrong ordering - - `#7908 `__: BLD: Fixed detection for recent MKL versions - - `#7911 `__: BUG: fix for issue#7835 (ma.median of 1d) - - `#7912 `__: ENH: skip or avoid gc/objectmodel differences btwn pypy and cpython - - `#7918 `__: ENH: allow numpy.apply_along_axis() to work with ndarray subclasses - - `#7922 `__: ENH: Add ma.convolve and ma.correlate for #6458 - - `#7925 `__: Monkey-patch _msvccompile.gen_lib_option like any other compilators - - `#7931 `__: BUG: Check for HAVE_LDOUBLE_DOUBLE_DOUBLE_LE in npy_math_complex. - - `#7936 `__: ENH: improve duck typing inside iscomplexobj - - `#7937 `__: BUG: Guard against buggy comparisons in generic quicksort. - - `#7938 `__: DOC: add cbrt to math summary page - - `#7941 `__: BUG: Make sure numpy globals keep identity after reload. - - `#7943 `__: DOC: #7927. Remove deprecated note for memmap relevant for Python... - - `#7952 `__: BUG: Use keyword arguments to initialize Extension base class. - - `#7956 `__: BLD: remove __NUMPY_SETUP__ from builtins at end of setup.py - - `#7963 `__: BUG: MSVCCompiler grows 'lib' & 'include' env strings exponentially. - - `#7965 `__: BUG: cannot modify tuple after use - - `#7976 `__: DOC: Fixed documented dimension of return value - - `#7977 `__: DOC: Create 1.11.2 release notes. - - `#7979 `__: DOC: Corrected allowed keywords in add_(installed_)library - - `#7980 `__: ENH: Add ability to runtime select ufunc loops, add AVX2 integer... 
- - `#7985 `__: Rebase 7763, ENH: Add new warning suppression/filtering context - - `#7987 `__: DOC: See also np.load and np.memmap in np.lib.format.open_memmap - - `#7988 `__: DOC: Include docstring for cbrt, spacing and fabs in documentation - - `#7999 `__: ENH: add inplace cases to fast ufunc loop macros - - `#8006 `__: DOC: Update 1.11.2 release notes. - - `#8008 `__: MAINT: Remove leftover imp module imports. - - `#8009 `__: DOC: Fixed three typos in the c-info.ufunc-tutorial - - `#8011 `__: DOC: Update 1.11.2 release notes. - - `#8014 `__: BUG: Fix fid.close() to use os.close(fid) - - `#8016 `__: BUG: Fix numpy.ma.median. - - `#8018 `__: BUG: Fixes return for np.ma.count if keepdims is True and axis... - - `#8021 `__: DOC: change all non-code instances of Numpy to NumPy - - `#8027 `__: ENH: Add platform indepedent lib dir to PYTHONPATH - - `#8028 `__: DOC: Update 1.11.2 release notes. - - `#8030 `__: BUG: fix np.ma.median with only one non-masked value and an axis... - - `#8038 `__: MAINT: Update error message in rollaxis. - - `#8040 `__: Update add_newdocs.py - - `#8042 `__: BUG: core: fix bug in NpyIter buffering with discontinuous arrays - - `#8045 `__: DOC: Update 1.11.2 release notes. - - `#8050 `__: remove refcount semantics, now a.resize() almost always requires... - - `#8051 `__: Clear signaling NaN exceptions - - `#8054 `__: ENH: add signature argument to vectorize for vectorizing like... 
- - `#8057 `__: BUG: lib: Simplify (and fix) pad's handling of the pad_width - - `#8061 `__: BUG : financial.pmt modifies input (issue #8055) - - `#8064 `__: MAINT: Add PMIP files to .gitignore - - `#8065 `__: BUG: Assert fromfile ending earlier in pyx_processing - - `#8066 `__: BUG, TST: Fix python3-dbg bug in Travis script - - `#8071 `__: MAINT: Add Tempita to randint helpers - - `#8075 `__: DOC: Fix description of isinf in nan_to_num - - `#8080 `__: BUG: non-integers can end up in dtype offsets - - `#8081 `__: Update outdated Nose URL to nose.readthedocs.io - - `#8083 `__: ENH: Deprecation warnings for `/` integer division when running... - - `#8084 `__: DOC: Fix erroneous return type description for np.roots. - - `#8087 `__: BUG: financial.pmt modifies input #8055 - - `#8088 `__: MAINT: Remove duplicate randint helpers code. - - `#8093 `__: MAINT: fix assert_raises_regex when used as a context manager - - `#8096 `__: ENH: Vendorize tempita. - - `#8098 `__: DOC: Enhance description/usage for np.linalg.eig*h - - `#8103 `__: Pypy fixes - - `#8104 `__: Fix test code on cpuinfo's main function - - `#8107 `__: BUG: Fix array printing with precision=0. - - `#8109 `__: Fix bug in ravel_multi_index for big indices (Issue #7546) - - `#8110 `__: BUG: distutils: fix issue with rpath in fcompiler/gnu.py - - `#8111 `__: ENH: Add a tool for release authors and PRs. - - `#8112 `__: DOC: Fix "See also" links in linalg. - - `#8114 `__: BUG: core: add missing error check after PyLong_AsSsize_t - - `#8121 `__: DOC: Improve histogram2d() example. - - `#8122 `__: BUG: Fix broken pickle in MaskedArray when dtype is object (Return... - - `#8124 `__: BUG: Fixed build break - - `#8125 `__: Rebase, BUG: Fixed deepcopy of F-order object arrays. - - `#8127 `__: BUG: integers to a negative integer powers should error. 
- - `#8141 `__: improve configure checks for broken systems - - `#8142 `__: BUG: np.ma.mean and var should return scalar if no mask - - `#8148 `__: BUG: import full module path in npy_load_module - - `#8153 `__: MAINT: Expose void-scalar "base" attribute in python - - `#8156 `__: DOC: added example with empty indices for a scalar, #8138 - - `#8160 `__: BUG: fix _array2string for structured array (issue #5692) - - `#8164 `__: MAINT: Update mailmap for NumPy 1.12.0 - - `#8165 `__: Fixup 8152, BUG: assert_allclose(..., equal_nan=False) doesn't... - - `#8167 `__: Fixup 8146, DOC: Clarify when PyArray_{Max, Min, Ptp} return... - - `#8168 `__: DOC: Minor spelling fix in genfromtxt() docstring. - - `#8173 `__: BLD: Enable build on AIX - - `#8174 `__: DOC: warn that dtype.descr is only for use in PEP3118 - - `#8177 `__: MAINT: Add python 3.6 support to suppress_warnings - - `#8178 `__: MAINT: Fix ResourceWarning new in Python 3.6. - - `#8180 `__: FIX: protect stolen ref by PyArray_NewFromDescr in array_empty - - `#8181 `__: ENH: Improve announce to find github squash-merge commits. - - `#8182 `__: MAINT: Update .mailmap - - `#8183 `__: MAINT: Ediff1d performance - - `#8184 `__: MAINT: make `assert_allclose` behavior on `nan`s match pre 1.12 - - `#8188 `__: DOC: 'highest' is exclusive for randint() - - `#8189 `__: BUG: setfield should raise if arr is not writeable - - `#8190 `__: ENH: Add a float_power function with at least float64 precision. - - `#8197 `__: DOC: Add missing arguments to np.ufunc.outer - - `#8198 `__: DEP: Deprecate the keepdims argument to accumulate - - `#8199 `__: MAINT: change path to env in distutils.system_info. Closes gh-8195. - - `#8200 `__: BUG: Fix structured array format functions - - `#8202 `__: ENH: specialize name of dev package by interpreter - - `#8205 `__: DOC: change development instructions from SSH to HTTPS access. 
- - `#8216 `__: DOC: Patch doc errors for atleast_nd and frombuffer - - `#8218 `__: BUG: ediff1d should return subclasses - - `#8219 `__: DOC: Turn SciPy references into links. - - `#8222 `__: ENH: Make numpy.mean() do more precise computation - - `#8227 `__: BUG: Better check for invalid bounds in np.random.uniform. - - `#8231 `__: ENH: Refactor numpy ** operators for numpy scalar integer powers - - `#8234 `__: DOC: Clarified when a copy is made in numpy.asarray - - `#8236 `__: DOC: Fix documentation pull requests. - - `#8238 `__: MAINT: Update pavement.py - - `#8239 `__: ENH: Improve announce tool. - - `#8240 `__: REL: Prepare for 1.12.x branch - - `#8243 `__: BUG: Update operator `**` tests for new behavior. - - `#8246 `__: REL: Reset strides for RELAXED_STRIDE_CHECKING for 1.12 releases. - - `#8265 `__: BUG: np.piecewise not working for scalars - - `#8272 `__: TST: Path test should resolve symlinks when comparing - - `#8282 `__: DOC: Update 1.12.0 release notes. - - `#8286 `__: BUG: Fix pavement.py write_release_task. - - `#8296 `__: BUG: Fix iteration over reversed subspaces in mapiter_ at name@. - - `#8304 `__: BUG: Fix PyPy crash in PyUFunc_GenericReduction. - - `#8319 `__: BLD: blacklist powl (longdouble power function) on OS X. - - `#8320 `__: BUG: do not link to Accelerate if OpenBLAS, MKL or BLIS are found. - - `#8322 `__: BUG: fixed kind specifications for parameters - - `#8336 `__: BUG: fix packbits and unpackbits to correctly handle empty arrays - - `#8338 `__: BUG: fix test_api test that fails intermittently in python 3 - - `#8339 `__: BUG: Fix ndarray.tofile large file corruption in append mode. - - `#8359 `__: BUG: Fix suppress_warnings (again) for Python 3.6. - - `#8372 `__: BUG: Fixes for ma.median and nanpercentile. - - `#8373 `__: BUG: correct letter case - - `#8379 `__: DOC: Update 1.12.0-notes.rst. - - `#8390 `__: ENH: retune apply_along_axis nanmedian cutoff in 1.12 - - `#8391 `__: DEP: Fix escaped string characters deprecated in Python 3.6. 
- `#8394 `__: DOC: create 1.11.3 release notes.
- `#8399 `__: BUG: Fix author search in announce.py
- `#8402 `__: DOC, MAINT: Update 1.12.0 notes and mailmap.

Checksums
=========

MD5
~~~

a8dcb183dbe9b336f7bdfb3c9dcab1c4  numpy-1.12.0rc1-cp27-cp27m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
a0fffded3c5b4562782a60c062205d24  numpy-1.12.0rc1-cp27-cp27m-manylinux1_i686.whl
74a6d2ca493f4ed2aa9a26076eda592d  numpy-1.12.0rc1-cp27-cp27m-manylinux1_x86_64.whl
51a3e553c16d830f9600d071d0a70c34  numpy-1.12.0rc1-cp27-cp27mu-manylinux1_i686.whl
6382c2e48197c009a23b8f122c50c782  numpy-1.12.0rc1-cp27-cp27mu-manylinux1_x86_64.whl
108384da4efc1271cd9a7b9f58763fdc  numpy-1.12.0rc1-cp27-none-win32.whl
c5d45db386d4170d1f19900c55680385  numpy-1.12.0rc1-cp27-none-win_amd64.whl
e06a3e32380f0157ac8829b6da60989b  numpy-1.12.0rc1-cp34-cp34m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
4678191e76df688e8b9fea08267484f9  numpy-1.12.0rc1-cp34-cp34m-manylinux1_i686.whl
a3cfbc1d045476e7ab1fc4cb297f1410  numpy-1.12.0rc1-cp34-cp34m-manylinux1_x86_64.whl
23aa6b0e7997bee8ba161054b8eca263  numpy-1.12.0rc1-cp34-none-win32.whl
f85e2376c835c6a4aed56908df0b2627  numpy-1.12.0rc1-cp34-none-win_amd64.whl
30b88d280aca4d979ce274a6f8802786  numpy-1.12.0rc1-cp35-cp35m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
a24092d34595bd8f47183e15c512fd25  numpy-1.12.0rc1-cp35-cp35m-manylinux1_i686.whl
99f1613b09845583e4cdd3c27fe0b581  numpy-1.12.0rc1-cp35-cp35m-manylinux1_x86_64.whl
41ed4df28cc2aa4b8ead5474d92e68ec  numpy-1.12.0rc1-cp35-none-win32.whl
dbed48e11c520aed00701cd77f4eff3b  numpy-1.12.0rc1-cp35-none-win_amd64.whl
fec43bde3d3dac92873c11f574aa3c28  numpy-1.12.0rc1.tar.gz
92e300ed24811cd119d3f04168bfcdae  numpy-1.12.0rc1.zip

SHA256
~~~~~~

cdad735c13b344321c9b9174761bdc2e22a199a4cd6c18612f1b02ebe58b4ff7  numpy-1.12.0rc1-cp27-cp27m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
0172a1987bb3c0ad0b6eec88abc49207d9746cc6f519c599b797a992a53a404c  numpy-1.12.0rc1-cp27-cp27m-manylinux1_i686.whl
1bbd7f2a07514afd5c9298a19953b15ff2b4a2759135aee19231570aab220525  numpy-1.12.0rc1-cp27-cp27m-manylinux1_x86_64.whl
cf2539d4af2409feb005d227d4a88d5371f663974aed57d1fa489a6d17d67bbc  numpy-1.12.0rc1-cp27-cp27mu-manylinux1_i686.whl
6bb0cff8e6fbca38b5048509e991a4697ffdd21ddd7453830cfc80474517d46a  numpy-1.12.0rc1-cp27-cp27mu-manylinux1_x86_64.whl
d23ca5d4d83ac872b7f9d9cd8d6c0c2d243e491199b9bb581f3892b17108acce  numpy-1.12.0rc1-cp27-none-win32.whl
8ede1d028243199666e9ec3980afd773e91af12172e7c9274e1d9dfe86e22169  numpy-1.12.0rc1-cp27-none-win_amd64.whl
210afcb0d2893c19171d78e77d23ab05885379c0fb14cfacefdfc73d69192800  numpy-1.12.0rc1-cp34-cp34m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
2ffacc9645ed40a10c14aca477999e8ea5f7422e4806a3843fd2ccbaf7ed5ad4  numpy-1.12.0rc1-cp34-cp34m-manylinux1_i686.whl
fc59b0aa44d37c5183ef288d172d81eed8772017c349acb137f8f3cb871d3ff7  numpy-1.12.0rc1-cp34-cp34m-manylinux1_x86_64.whl
46c018582f7003bd424a2776e8bef1370c1a78bd6d76af921ad6ce28539b71de  numpy-1.12.0rc1-cp34-none-win32.whl
930815b162785e77e0ff41651931155a21482f286bc432f9bc552c4308782150  numpy-1.12.0rc1-cp34-none-win_amd64.whl
78835904da4e0e143ba671a7eff8d63282d4d2db868d09c7c9e0cd4f216e43c2  numpy-1.12.0rc1-cp35-cp35m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
0e71e0b43fcc925fbe227f58dc2f875c4d92299c644ae2243a8cd21e050e3acd  numpy-1.12.0rc1-cp35-cp35m-manylinux1_i686.whl
9c357f1cbaf3aae7f656e0ecd3d2f186622b9424fccb1b587ab13247b68bbc9a  numpy-1.12.0rc1-cp35-cp35m-manylinux1_x86_64.whl
0c3fe4c1ee1955a58cb98b0cc4727dff9c9acf2521d3d34609fc8f772f56ab79  numpy-1.12.0rc1-cp35-none-win32.whl
998b229248b6219938827175a5852463dd1640785df3dd8e15b2ba9018217652  numpy-1.12.0rc1-cp35-none-win_amd64.whl
2a8defaf03473fdaa15e34e849b71221f8ab3da1a7eed510799d55c913b99b06  numpy-1.12.0rc1.tar.gz
7c860cf028b13a88487fde1dcd1ce8f0859a6882c477c42083f50e48c297427e  numpy-1.12.0rc1.zip

-----BEGIN PGP SIGNATURE-----

iQEcBAEBAgAGBQJYWH3jAAoJEGefIoN3xSR7e/wIANIfp4+7gdsFKUMtyrWnlCfz
GtAOX+N+mecTelbDEylnl2D2SDwHrnv1XrJ6hT888jPqnjlnw9NDLkLa9ilVLTAd
8Y5Bo1AZ2PHxyOHPirfGflU0qdlk+T25Ekz43TuyRDNCXoe4ZuQ0GsEzvO2nWFLu
X/oaU16z/xNU5p4uV69emPsku5G3d11X6uP5JHYg5kSAE8pvx/jIv8PMFHC5GfTv
MR6Q7+Oi2gJVa22866o2pq8bqViwLehGQLe95Cs91F8NMTNxxyG3bb7gVO4oLPZ3
5wEVAzFLbwPqgYH9K++sG52FoTJxe7tLCuj+EaLrffK5PywoGEP6Ub5/qKiQaBU=
=hArc
-----END PGP SIGNATURE-----

From jfoxrabinovitz at gmail.com  Tue Dec 20 08:08:51 2016
From: jfoxrabinovitz at gmail.com (Joseph Fox-Rabinovitz)
Date: Tue, 20 Dec 2016 08:08:51 -0500
Subject: [Numpy-discussion] in1d, but preserve shape of ar1
In-Reply-To:
References:
Message-ID:

Perhaps you could move the code from in1d to your new function and
redefine in1d in terms of it? That may help encourage migration and also
make deprecation easier down the line.

-Joe

On Mon, Dec 19, 2016 at 8:43 PM, Stephan Hoyer wrote:

> I think this is a great idea!
>
> I agree that we need a new function. Because the new API is almost
> strictly superior, we should try to pick a more general name that we can
> encourage users to switch to from in1d.
>
> Pandas calls this method "isin", which I think is a perfectly good name
> for the multi-dimensional NumPy version, too:
> http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html
>
> It's a subjective call, but I would probably keep the new function in
> arraysetops.py. (This is the sort of question well suited to GitHub rather
> than the mailing list, though.)
>
> On Mon, Dec 19, 2016 at 3:25 PM, Brenton R S Recht wrote:
>
>> I started an enhancement request in the Github bug tracker at
>> https://github.com/numpy/numpy/issues/8331 , but Jaime Frio recommended
>> I bring it to the mailing list.
>>
>> `in1d` takes two arrays, `ar1` and `ar2`, and returns a 1d array with the
>> same number of elements as `ar1`. The logical extension would be a function
>> that does the same thing but returns a (possibly multi-dimensional) array
>> of the same shape as `ar1`. The code already has a comment suggesting this
>> could be done (see
>> https://github.com/numpy/numpy/blob/master/numpy/lib/arraysetops.py#L444 ).
>>
>> I agree that changing the behavior of the existing function isn't an
>> option, since it would break backwards compatibility. I'm not sure adding
>> an option keep_shape is good, since the name of the function ("1d")
>> wouldn't match what it does (returns an array that might not be 1d). I
>> think a new function is the way to go. This would be it, more or less:
>>
>>     def items_in(ar1, ar2, **kwargs):
>>         return np.in1d(ar1, ar2, **kwargs).reshape(ar1.shape)
>>
>> Questions I have are:
>> * Function name? I was thinking something like `items_in` or `item_in`:
>> the function returns whether each item in `ar1` is in `ar2`. Is "item" or
>> "element" the right term here?
>> * Are there any other changes that need to happen in arraysetops.py? Or
>> other files? I ask this because although the file says "Set operations for
>> 1D numeric arrays" right at the top, it's growing increasingly not 1D:
>> `unique` recently changed to operate on multidimensional arrays, and I'm
>> proposing a multidimensional version of `in1d`. `ediff1d` could probably be
>> tweaked into a version that operates along an axis the same way unique does
>> now, fwiw. Mostly I want to know if I should put my code changes in this
>> file or somewhere else.
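Brenton's one-liner above can be fleshed out into a self-contained sketch. The `items_in` name is his placeholder, not an existing NumPy function (the behavior discussed here eventually shipped as `np.isin`), so treat this as an illustration of the idea rather than a final API:

```python
import numpy as np

# np.in1d was later renamed np.isin; fall back if in1d is gone.
in1d = getattr(np, "in1d", np.isin)

def items_in(ar1, ar2, **kwargs):
    """Like in1d, but the boolean result keeps ar1's shape."""
    ar1 = np.asarray(ar1)
    # in1d flattens its input, so ravel explicitly and reshape back.
    return in1d(ar1.ravel(), ar2, **kwargs).reshape(ar1.shape)

a = np.array([[1, 2], [3, 4]])
print(items_in(a, [2, 3, 5]))
# [[False  True]
#  [ True False]]
```

The reshape is cheap (a view), so the cost is the same as a plain `in1d` call on the flattened data.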
>>
>> Thanks,
>>
>> -brsr
>>
>> _______________________________________________
>> NumPy-Discussion mailing list
>> NumPy-Discussion at scipy.org
>> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tcaswell at gmail.com  Wed Dec 21 00:06:49 2016
From: tcaswell at gmail.com (Thomas Caswell)
Date: Wed, 21 Dec 2016 05:06:49 +0000
Subject: [Numpy-discussion] [REL] matplotlib 2.0.0rc2
Message-ID:

Folks,

We are happy to announce matplotlib v2.0.0rc2! Please re-distribute this widely. This is the final planned release candidate for the long-awaited mpl v2.0 release.

For the full details of what is new please see
http://matplotlib.org/2.0.0rc2/users/whats_new.html

Some of the highlights:

- new default style (see http://matplotlib.org/2.0.0rc2/users/dflt_style_changes.html )
- new default color map (viridis)
- default font includes most western alphabets
- performance improvements in text and image rendering
- many bug fixes and documentation improvements
- many new rcparams

Please help us by testing the release candidate! We would like to hear about any uses where the new defaults are significantly worse, any changes we failed to document, or (as always) any bugs and regressions.

To make this easy we have both wheels and conda packages for mac, linux and windows. You can install pre-releases via pip:

    pip install --pre matplotlib

or via conda:

    conda update --all -c conda-forge
    conda install -c conda-forge/label/rc -c conda-forge matplotlib

For more details see http://matplotlib.org/style_changes.html .

Please report any issues to https://github.com/matplotlib/matplotlib/issues or matplotlib-users at python.org .

The target for v2.0 final is early Jan 2017.
This release is the work of over 200 individual code contributors and many more who took part in the discussions, tested the pre-releases, and reported bugs. Thank you to everyone who contributed!

Tom

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Nicolas.Rougier at inria.fr  Thu Dec 22 11:44:47 2016
From: Nicolas.Rougier at inria.fr (Nicolas P. Rougier)
Date: Thu, 22 Dec 2016 17:44:47 +0100
Subject: [Numpy-discussion] From Python to Numpy
Message-ID:

Dear all,

I've just put online a (kind of) book on Numpy and more specifically about vectorization methods. It's not yet finished, has not been reviewed, and it's a bit rough around the edges. But I think there is some material that may be interesting. I'm specifically happy with the boids example, which shows a nice combination of numpy and matplotlib strengths.

Book is online at: http://www.labri.fr/perso/nrougier/from-python-to-numpy/
Sources are available at: https://github.com/rougier/from-python-to-numpy

Comments/questions/fixes/ideas are of course welcome.

Nicolas

From chris.barker at noaa.gov  Thu Dec 22 14:15:41 2016
From: chris.barker at noaa.gov (Chris Barker)
Date: Thu, 22 Dec 2016 11:15:41 -0800
Subject: [Numpy-discussion] From Python to Numpy
In-Reply-To:
References:
Message-ID:

Nicolas,

From a quick glance, this looks really wonderful! I intend to point my students who are interested in numpy to it.

-CHB

On Thu, Dec 22, 2016 at 8:44 AM, Nicolas P. Rougier <Nicolas.Rougier at inria.fr> wrote:

>
> Dear all,
>
> I've just put online a (kind of) book on Numpy and more specifically about
> vectorization methods. It's not yet finished, has not been reviewed, and
> it's a bit rough around the edges. But I think there is some material that
> may be interesting. I'm specifically happy with the boids example, which shows
> a nice combination of numpy and matplotlib strengths.
>
> Book is online at: http://www.labri.fr/perso/nrougier/from-python-to-numpy/
> Sources are available at: https://github.com/rougier/from-python-to-numpy
>
> Comments/questions/fixes/ideas are of course welcome.
>
> Nicolas
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion

--

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From kikocorreoso at gmail.com  Fri Dec 23 03:14:21 2016
From: kikocorreoso at gmail.com (Kiko)
Date: Fri, 23 Dec 2016 09:14:21 +0100
Subject: [Numpy-discussion] From Python to Numpy
In-Reply-To:
References:
Message-ID:

2016-12-22 17:44 GMT+01:00 Nicolas P. Rougier:

>
> Dear all,
>
> I've just put online a (kind of) book on Numpy and more specifically about
> vectorization methods. It's not yet finished, has not been reviewed, and
> it's a bit rough around the edges. But I think there is some material that
> may be interesting. I'm specifically happy with the boids example, which shows
> a nice combination of numpy and matplotlib strengths.
>
> Book is online at: http://www.labri.fr/perso/nrougier/from-python-to-numpy/
> Sources are available at: https://github.com/rougier/from-python-to-numpy
>
> Comments/questions/fixes/ideas are of course welcome.

Wow!!! Beautiful.

Thanks for sharing.

>
> Nicolas
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jorisvandenbossche at gmail.com  Sat Dec 24 17:50:44 2016
From: jorisvandenbossche at gmail.com (Joris Van den Bossche)
Date: Sat, 24 Dec 2016 23:50:44 +0100
Subject: [Numpy-discussion] ANN: pandas v0.19.2 released!
Message-ID:

Hi all,

Just in time for the holidays, pandas 0.19.2 has been released! This is a minor bug-fix release in the 0.19.x series and includes some small regression fixes, bug fixes and performance improvements. We recommend that all users upgrade to this version.

Highlights include:

- Compatibility with Python 3.6.
- A new Pandas Cheat Sheet, thanks to Irv Lustig.

Wheels and conda packages for Python 3.6 are not yet available for all platforms, but will be shortly. See the v0.19.2 Whatsnew page for an overview of all bugs that have been fixed in 0.19.2.

Thanks to all contributors!

Joris

---

*How to get it:*

Source tarballs and windows/mac/linux wheels are available on PyPI (thanks to Christoph Gohlke for the windows wheels, and to Matthew Brett for setting up the mac/linux wheels). Conda packages are already available via the conda-forge channel (conda install pandas -c conda-forge). It will be available on the main channel shortly.

*Issues:*

Please report any issues on our issue tracker: https://github.com/pydata/pandas/issues

*Thanks to all the contributors of the 0.19.2 release:*

- Ajay Saxena
- Ben Kandel
- Chris
- Chris Ham
- Christopher C. Aycock
- Daniel Himmelstein
- Dave Willmer
- Dr-Irv
- gfyoung
- hesham shabana
- Jeff Carey
- Jeff Reback
- Joe Jevnik
- Joris Van den Bossche
- Julian Santander
- Kerby Shedden
- Keshav Ramaswamy
- Kevin Sheppard
- Luca Scarabello
- Matti Picus
- Matt Roeschke
- Maximilian Roos
- Mykola Golubyev
- Nate Yoder
- Nicholas Ver Halen
- Pawel Kordek
- Pietro Battiston
- Rodolfo Fernandez
- sinhrks
- Tara Adiseshan
- Tom Augspurger
- wandersoncferreira
- Yaroslav Halchenko

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Nicolas.Rougier at inria.fr  Mon Dec 26 04:34:06 2016
From: Nicolas.Rougier at inria.fr (Nicolas P. Rougier)
Date: Mon, 26 Dec 2016 10:34:06 +0100
Subject: [Numpy-discussion] Casting to np.byte before clearing values
Message-ID: <1A4BC86B-A978-4A6E-B025-FBCA5B95CBCA@inria.fr>

Hi all,

I'm trying to understand why viewing an array as bytes before clearing makes the whole operation faster. I imagine there is some kind of special treatment for byte arrays but I've no clue.

# Native float
Z_float = np.ones(1000000, float)
Z_int   = np.ones(1000000, int)

%timeit Z_float[...] = 0
1000 loops, best of 3: 361 µs per loop

%timeit Z_int[...] = 0
1000 loops, best of 3: 366 µs per loop

%timeit Z_float.view(np.byte)[...] = 0
1000 loops, best of 3: 267 µs per loop

%timeit Z_int.view(np.byte)[...] = 0
1000 loops, best of 3: 266 µs per loop

Nicolas

From sebastian at sipsolutions.net  Mon Dec 26 05:48:19 2016
From: sebastian at sipsolutions.net (Sebastian Berg)
Date: Mon, 26 Dec 2016 11:48:19 +0100
Subject: [Numpy-discussion] Casting to np.byte before clearing values
In-Reply-To: <1A4BC86B-A978-4A6E-B025-FBCA5B95CBCA@inria.fr>
References: <1A4BC86B-A978-4A6E-B025-FBCA5B95CBCA@inria.fr>
Message-ID: <1482749299.13822.7.camel@sipsolutions.net>

On Mo, 2016-12-26 at 10:34 +0100, Nicolas P. Rougier wrote:
> Hi all,
>
> I'm trying to understand why viewing an array as bytes before
> clearing makes the whole operation faster.
> I imagine there is some kind of special treatment for byte arrays but
> I've no clue.
>

Sure, if it's a 1-byte-wide type, the code will end up calling `memset`. If it is not, it will end up calling a loop with:

    while (N > 0) {
        *dst = output;
        dst += 8;  /* or whatever the element size/stride is */
        --N;
    }

now why this gives such a difference, I don't really know, but I guess it is not too surprising and may depend on other things as well.
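The comparison is easy to reproduce outside IPython for readers without `%timeit`; a rough sketch (the absolute numbers vary with machine, OS, and NumPy version, and nothing here depends on IPython):

```python
import timeit
import numpy as np

Z = np.ones(1_000_000)

# Assignment through the native float view: a strided element loop.
t_float = min(timeit.repeat("Z[...] = 0", globals=globals(),
                            number=100, repeat=5))

# Assignment through a byte view of the same memory: contiguous
# 1-byte elements are eligible for a memset fast path.
t_byte = min(timeit.repeat("Z.view(np.byte)[...] = 0", globals=globals(),
                           number=100, repeat=5))

print(f"float view: {t_float:.4f} s   byte view: {t_byte:.4f} s")

# Either way the array ends up fully zeroed.
Z.view(np.byte)[...] = 0
assert not Z.any()
```

Note that the byte-view shortcut only makes sense for zeroing: writing any other value byte-by-byte would not produce the same floats.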
- Sebastian

>
> # Native float
> Z_float = np.ones(1000000, float)
> Z_int   = np.ones(1000000, int)
>
> %timeit Z_float[...] = 0
> 1000 loops, best of 3: 361 µs per loop
>
> %timeit Z_int[...] = 0
> 1000 loops, best of 3: 366 µs per loop
>
> %timeit Z_float.view(np.byte)[...] = 0
> 1000 loops, best of 3: 267 µs per loop
>
> %timeit Z_int.view(np.byte)[...] = 0
> 1000 loops, best of 3: 266 µs per loop
>
> Nicolas
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: This is a digitally signed message part
URL:

From Nicolas.Rougier at inria.fr  Mon Dec 26 06:15:25 2016
From: Nicolas.Rougier at inria.fr (Nicolas P. Rougier)
Date: Mon, 26 Dec 2016 12:15:25 +0100
Subject: [Numpy-discussion] Casting to np.byte before clearing values
In-Reply-To: <1482749299.13822.7.camel@sipsolutions.net>
References: <1A4BC86B-A978-4A6E-B025-FBCA5B95CBCA@inria.fr> <1482749299.13822.7.camel@sipsolutions.net>
Message-ID: <8187B6CB-0C41-4328-ADDB-53C62E9DBEB8@inria.fr>

Thanks for the explanation Sebastian, makes sense.

Nicolas

> On 26 Dec 2016, at 11:48, Sebastian Berg wrote:
>
> On Mo, 2016-12-26 at 10:34 +0100, Nicolas P. Rougier wrote:
>> Hi all,
>>
>> I'm trying to understand why viewing an array as bytes before
>> clearing makes the whole operation faster.
>> I imagine there is some kind of special treatment for byte arrays but
>> I've no clue.
>>
>
> Sure, if it's a 1-byte-wide type, the code will end up calling
> `memset`. If it is not, it will end up calling a loop with:
>
>     while (N > 0) {
>         *dst = output;
>         dst += 8;  /* or whatever the element size/stride is */
>         --N;
>     }
>
> now why this gives such a difference, I don't really know, but I guess
> it is not too surprising and may depend on other things as well.
>
> - Sebastian
>
>> # Native float
>> Z_float = np.ones(1000000, float)
>> Z_int = np.ones(1000000, int)
>>
>> %timeit Z_float[...] = 0
>> 1000 loops, best of 3: 361 µs per loop
>>
>> %timeit Z_int[...] = 0
>> 1000 loops, best of 3: 366 µs per loop
>>
>> %timeit Z_float.view(np.byte)[...] = 0
>> 1000 loops, best of 3: 267 µs per loop
>>
>> %timeit Z_int.view(np.byte)[...] = 0
>> 1000 loops, best of 3: 266 µs per loop
>>
>> Nicolas
>> _______________________________________________
>> NumPy-Discussion mailing list
>> NumPy-Discussion at scipy.org
>> https://mail.scipy.org/mailman/listinfo/numpy-discussion

From ben.v.root at gmail.com  Mon Dec 26 11:01:25 2016
From: ben.v.root at gmail.com (Benjamin Root)
Date: Mon, 26 Dec 2016 11:01:25 -0500
Subject: [Numpy-discussion] Casting to np.byte before clearing values
In-Reply-To:
References: <1A4BC86B-A978-4A6E-B025-FBCA5B95CBCA@inria.fr> <1482749299.13822.7.camel@sipsolutions.net> <8187B6CB-0C41-4328-ADDB-53C62E9DBEB8@inria.fr>
Message-ID:

Might be OS-specific, too. Some virtual memory management systems might special-case the zeroing out of memory. Try doing the same thing with a value other than zero.

On Dec 26, 2016 6:15 AM, "Nicolas P. Rougier" wrote:

Thanks for the explanation Sebastian, makes sense.

Nicolas

> On 26 Dec 2016, at 11:48, Sebastian Berg wrote:
>
> On Mo, 2016-12-26 at 10:34 +0100, Nicolas P. Rougier wrote:
>> Hi all,
>>
>> I'm trying to understand why viewing an array as bytes before
>> clearing makes the whole operation faster.
>> I imagine there is some kind of special treatment for byte arrays but
>> I've no clue.
>>
>
> Sure, if it's a 1-byte-wide type, the code will end up calling
> `memset`.
> If it is not, it will end up calling a loop with:
>
>     while (N > 0) {
>         *dst = output;
>         dst += 8;  /* or whatever the element size/stride is */
>         --N;
>     }
>
> now why this gives such a difference, I don't really know, but I guess
> it is not too surprising and may depend on other things as well.
>
> - Sebastian
>
>> # Native float
>> Z_float = np.ones(1000000, float)
>> Z_int = np.ones(1000000, int)
>>
>> %timeit Z_float[...] = 0
>> 1000 loops, best of 3: 361 µs per loop
>>
>> %timeit Z_int[...] = 0
>> 1000 loops, best of 3: 366 µs per loop
>>
>> %timeit Z_float.view(np.byte)[...] = 0
>> 1000 loops, best of 3: 267 µs per loop
>>
>> %timeit Z_int.view(np.byte)[...] = 0
>> 1000 loops, best of 3: 266 µs per loop
>>
>> Nicolas
>> _______________________________________________
>> NumPy-Discussion mailing list
>> NumPy-Discussion at scipy.org
>> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion

_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion at scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From fmv1992 at gmail.com  Mon Dec 26 15:43:09 2016
From: fmv1992 at gmail.com (Felipe Vieira)
Date: Mon, 26 Dec 2016 18:43:09 -0200
Subject: [Numpy-discussion] script for building numpy from source in virtualenv and outside it
Message-ID:

Dear fellows,

I'm struggling with a single script to build numpy from source in a virtual env. I want the same script to be able to be run with a normal env.

So if run from a normal env it should affect all users. If run within a virtual env the installation should be constrained to that env.
I tried setting script variables and other tricks, but the script is always executed as an 'out of virtual env' user (I cannot make it aware that it is running from a virtualenv), thus affecting my real working Python. As the script activates other scripts I am not posting them for now (hoping that this is a simple issue).

tl;dr: How can I install numpy from source and build it in a script which uses the virtual env instead of affecting the whole system?

(And yes, I have looked for solutions on google but none of them worked.)

Best regards,

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ralf.gommers at gmail.com  Mon Dec 26 16:29:30 2016
From: ralf.gommers at gmail.com (Ralf Gommers)
Date: Tue, 27 Dec 2016 10:29:30 +1300
Subject: [Numpy-discussion] script for building numpy from source in virtualenv and outside it
In-Reply-To:
References:
Message-ID:

On Tue, Dec 27, 2016 at 9:43 AM, Felipe Vieira wrote:

> Dear fellows,
>
> I'm struggling with a single script to build numpy from source in a
> virtual env. I want the same script to be able to be run with a normal env.
>
> So if run from a normal env it should affect all users.
> If run within a virtual env the installation should be constrained to that
> env.
>
> I tried setting script variables and other tricks, but the script is always
> executed as an 'out of virtual env' user (I cannot make it aware that it is
> running from a virtualenv), thus affecting my real working Python. As the
> script activates other scripts I am not posting them for now (hoping that
> this is a simple issue).
>
> tl;dr: How can I install numpy from source and build it in a script which
> uses the virtual env instead of affecting the whole system?
>
> (And yes, I have looked for solutions on google but none of them worked.)

Sounds like you just need to run your script with the Python interpreter in the virtualenv. There's nothing numpy-specific about this.
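Ralf's point can be made concrete: the install target follows the interpreter, not the shell, so invoking the script as `path/to/venv/bin/python build_numpy.py` (a hypothetical script name) is enough, with no activation needed. A script can also check which kind of interpreter is running it; a minimal sketch:

```python
import sys

def in_virtualenv() -> bool:
    """True when the running interpreter belongs to a venv/virtualenv.

    The stdlib venv (Python 3.3+) makes sys.prefix differ from
    sys.base_prefix; the older virtualenv tool sets sys.real_prefix.
    """
    return (getattr(sys, "real_prefix", None) is not None
            or sys.prefix != getattr(sys, "base_prefix", sys.prefix))

# A build script can then report (or refuse) a system-wide install:
if in_virtualenv():
    print("building into virtualenv:", sys.prefix)
else:
    print("building system/user-wide:", sys.prefix)
```

The common pitfall in Felipe's situation is a build script that shells out to a bare `python`, which resolves through the invoking shell's PATH; using `sys.executable` for any subprocess keeps everything inside the same environment.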
Ralf

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From chris.barker at noaa.gov  Tue Dec 27 14:52:20 2016
From: chris.barker at noaa.gov (Chris Barker)
Date: Tue, 27 Dec 2016 11:52:20 -0800
Subject: [Numpy-discussion] Casting to np.byte before clearing values
In-Reply-To: <1A4BC86B-A978-4A6E-B025-FBCA5B95CBCA@inria.fr>
References: <1A4BC86B-A978-4A6E-B025-FBCA5B95CBCA@inria.fr>
Message-ID:

On Mon, Dec 26, 2016 at 1:34 AM, Nicolas P. Rougier <Nicolas.Rougier at inria.fr> wrote:

>
> I'm trying to understand why viewing an array as bytes before clearing
> makes the whole operation faster.
> I imagine there is some kind of special treatment for byte arrays but I've
> no clue.
>

I notice that the code is simply setting a value using broadcasting -- I don't think there is anything special about zero in that case. But your subject refers to "clearing" an array.

So I wonder if you have a use case where the performance difference matters, in which case _maybe_ it would be worth having an ndarray.zero() method that efficiently zeros out an array.

Actually, there is ndarray.fill():

In [7]: %timeit Z_float[...] = 0
1000 loops, best of 3: 380 µs per loop

In [8]: %timeit Z_float.view(np.byte)[...] = 0
1000 loops, best of 3: 271 µs per loop

In [9]: %timeit Z_float.fill(0)
1000 loops, best of 3: 363 µs per loop

which seems to take an insignificantly shorter time than assignment, probably because it's doing exactly the same loop, whereas a .zero() could use a memset, like it does with bytes.

can't say I have a use-case that would justify this, though.

-CHB

>
> # Native float
> Z_float = np.ones(1000000, float)
> Z_int = np.ones(1000000, int)
>
> %timeit Z_float[...] = 0
> 1000 loops, best of 3: 361 µs per loop
>
> %timeit Z_int[...] = 0
> 1000 loops, best of 3: 366 µs per loop
>
> %timeit Z_float.view(np.byte)[...] = 0
> 1000 loops, best of 3: 267 µs per loop
>
> %timeit Z_int.view(np.byte)[...] = 0
> 1000 loops, best of 3: 266 µs per loop
>
> Nicolas
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion

--

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Nicolas.Rougier at inria.fr  Tue Dec 27 17:11:06 2016
From: Nicolas.Rougier at inria.fr (Nicolas P. Rougier)
Date: Tue, 27 Dec 2016 23:11:06 +0100
Subject: [Numpy-discussion] Casting to np.byte before clearing values
In-Reply-To:
References: <1A4BC86B-A978-4A6E-B025-FBCA5B95CBCA@inria.fr>
Message-ID: <5A27DB06-313F-4379-8FCC-F5D18DD1D4F0@inria.fr>

Yes, "clearing" is not the proper word; the trick only works for 0 (I get the same result in both cases).

Nicolas

> On 27 Dec 2016, at 20:52, Chris Barker wrote:
>
> On Mon, Dec 26, 2016 at 1:34 AM, Nicolas P. Rougier wrote:
>
> I'm trying to understand why viewing an array as bytes before clearing makes the whole operation faster.
> I imagine there is some kind of special treatment for byte arrays but I've no clue.
>
> I notice that the code is simply setting a value using broadcasting -- I don't think there is anything special about zero in that case. But your subject refers to "clearing" an array.
>
> So I wonder if you have a use case where the performance difference matters, in which case _maybe_ it would be worth having an ndarray.zero() method that efficiently zeros out an array.
>
> Actually, there is ndarray.fill():
>
> In [7]: %timeit Z_float[...] = 0
> 1000 loops, best of 3: 380 µs per loop
>
> In [8]: %timeit Z_float.view(np.byte)[...] = 0
> 1000 loops, best of 3: 271 µs per loop
>
> In [9]: %timeit Z_float.fill(0)
> 1000 loops, best of 3: 363 µs per loop
>
> which seems to take an insignificantly shorter time than assignment, probably because it's doing exactly the same loop, whereas a .zero() could use a memset, like it does with bytes.
>
> can't say I have a use-case that would justify this, though.
>
> -CHB
>
>> # Native float
>> Z_float = np.ones(1000000, float)
>> Z_int = np.ones(1000000, int)
>>
>> %timeit Z_float[...] = 0
>> 1000 loops, best of 3: 361 µs per loop
>>
>> %timeit Z_int[...] = 0
>> 1000 loops, best of 3: 366 µs per loop
>>
>> %timeit Z_float.view(np.byte)[...] = 0
>> 1000 loops, best of 3: 267 µs per loop
>>
>> %timeit Z_int.view(np.byte)[...] = 0
>> 1000 loops, best of 3: 266 µs per loop
>>
>> Nicolas
>> _______________________________________________
>> NumPy-Discussion mailing list
>> NumPy-Discussion at scipy.org
>> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
> --
>
> Christopher Barker, Ph.D.
> Oceanographer
>
> Emergency Response Division
> NOAA/NOS/OR&R            (206) 526-6959   voice
> 7600 Sand Point Way NE   (206) 526-6329   fax
> Seattle, WA  98115       (206) 526-6317   main reception
>
> Chris.Barker at noaa.gov
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion

From alex.rogozhnikov at yandex.ru  Fri Dec 30 14:36:26 2016
From: alex.rogozhnikov at yandex.ru (Alex Rogozhnikov)
Date: Fri, 30 Dec 2016 23:36:26 +0400
Subject: [Numpy-discussion] From Python to Numpy
In-Reply-To:
References:
Message-ID:

Hi Nicolas,
that's very nice work!

> Comments/questions/fixes/ideas are of course welcome.

The boids example caught my attention too; some comments on it:
- I find using complex numbers here very natural; this should speed up
  things and also shorten the code (rotating without einsum, etc.)
- you probably can speed things up by going to sparse arrays
- and you can go to really large numbers of 'birds' if you combine it with
  preliminary splitting of space into squares, thus analyzing only birds
  from nearby squares

Also, I think it is worth adding some operations with HSV / HSL color spaces, as those can be visualized easily, e.g. on some photo.

Thanks,
Alex.

> On 23 Dec 2016, at 12:14, Kiko wrote:
>
> 2016-12-22 17:44 GMT+01:00 Nicolas P. Rougier:
>
> Dear all,
>
> I've just put online a (kind of) book on Numpy and more specifically about vectorization methods. It's not yet finished, has not been reviewed, and it's a bit rough around the edges. But I think there is some material that may be interesting. I'm specifically happy with the boids example, which shows a nice combination of numpy and matplotlib strengths.
>
> Book is online at: http://www.labri.fr/perso/nrougier/from-python-to-numpy/
> Sources are available at: https://github.com/rougier/from-python-to-numpy
>
> Comments/questions/fixes/ideas are of course welcome.
>
> Wow!!! Beautiful.
>
> Thanks for sharing.
>
> Nicolas
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Nicolas.Rougier at inria.fr  Fri Dec 30 17:09:37 2016
From: Nicolas.Rougier at inria.fr (Nicolas P. Rougier)
Date: Fri, 30 Dec 2016 23:09:37 +0100
Subject: [Numpy-discussion] From Python to Numpy
In-Reply-To:
References:
Message-ID: <62BD0BF9-534F-4A63-98E0-CE1C0137F805@inria.fr>

> On 30 Dec 2016, at 20:36, Alex Rogozhnikov wrote:
>
> Hi Nicolas,
> that's very nice work!
>
>> Comments/questions/fixes/ideas are of course welcome.
>
> The boids example caught my attention too; some comments on it:
> - I find using complex numbers here very natural; this should speed up
>   things and also shorten the code (rotating without einsum, etc.)
> - you probably can speed things up by going to sparse arrays
> - and you can go to really large numbers of 'birds' if you combine it with
>   preliminary splitting of space into squares, thus analyzing only birds
>   from nearby squares
>
> Also, I think it is worth adding some operations with HSV / HSL color
> spaces, as those can be visualized easily, e.g. on some photo.
>
> Thanks,
> Alex.

Thanks. I'm not sure how to use complex numbers with this example. Could you elaborate?

For the preliminary splitting, a quadtree (scipy KDTree) could also help a lot, but I wanted to stick to numpy only. A simpler square splitting, as you suggest, could make things faster but requires some work. I'm not sure yet how to restrict the analysis to nearby squares.

Nicolas

>> On 23 Dec 2016, at 12:14, Kiko wrote:
>>
>> 2016-12-22 17:44 GMT+01:00 Nicolas P. Rougier:
>>
>> Dear all,
>>
>> I've just put online a (kind of) book on Numpy and more specifically about vectorization methods. It's not yet finished, has not been reviewed, and it's a bit rough around the edges. But I think there is some material that may be interesting. I'm specifically happy with the boids example, which shows a nice combination of numpy and matplotlib strengths.
>>
>> Book is online at: http://www.labri.fr/perso/nrougier/from-python-to-numpy/
>> Sources are available at: https://github.com/rougier/from-python-to-numpy
>>
>> Comments/questions/fixes/ideas are of course welcome.
>>
>> Wow!!! Beautiful.
>>
>> Thanks for sharing.
>>
>> Nicolas
>> _______________________________________________
>> NumPy-Discussion mailing list
>> NumPy-Discussion at scipy.org
>> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>>
>> _______________________________________________
>> NumPy-Discussion mailing list
>> NumPy-Discussion at scipy.org
>> https://mail.scipy.org/mailman/listinfo/numpy-discussion
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at scipy.org
> https://mail.scipy.org/mailman/listinfo/numpy-discussion

From mmwoodman at gmail.com  Fri Dec 30 17:36:50 2016
From: mmwoodman at gmail.com (Marmaduke Woodman)
Date: Fri, 30 Dec 2016 23:36:50 +0100
Subject: [Numpy-discussion] From Python to Numpy
In-Reply-To:
References:
Message-ID:

> On 22 Dec 2016, at 17:44, Nicolas P. Rougier wrote:
>
> Dear all,
>
> I've just put online a (kind of) book on Numpy and more specifically about vectorization methods. It's not yet finished, has not been reviewed, and it's a bit rough around the edges. But I think there is some material that may be interesting. I'm specifically happy with the boids example, which shows a nice combination of numpy and matplotlib strengths.
>
> Book is online at: http://www.labri.fr/perso/nrougier/from-python-to-numpy/
> Sources are available at: https://github.com/rougier/from-python-to-numpy
>
> Comments/questions/fixes/ideas are of course welcome.

I've seen vectorisation taken to the extreme, with negative consequences in terms of both speed and readability, in both Python and MATLAB codebases, so I would suggest some discussion / wisdom about when not to vectorise.

From Nicolas.Rougier at inria.fr  Sat Dec 31 05:23:54 2016
From: Nicolas.Rougier at inria.fr (Nicolas P.
Rougier)
Date: Sat, 31 Dec 2016 11:23:54 +0100
Subject: [Numpy-discussion] From Python to Numpy
In-Reply-To:
References:
Message-ID: <4C818029-F4B8-4893-8E3E-42C24221EC49@inria.fr>

> I've seen vectorisation taken to the extreme, with negative consequences in terms of both speed and readability, in both Python and MATLAB codebases, so I would suggest some discussion / wisdom about when not to vectorise.

I agree, and there is actually a warning in the introduction about readability vs speed, with an example showing a clever optimization (by Jaime Fernández del Río) that is hardly readable for non-experts (including myself).

Nicolas
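For readers wondering about Alex's complex-number suggestion earlier in the thread: 2-D positions and headings in a boids-style simulation can be stored as complex values, so that rotating every heading is a single multiply and pairwise distances fall out of a subtraction. A hypothetical sketch of the idea, not taken from the book:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Positions and unit-speed headings as complex numbers: x + 1j*y.
pos = rng.uniform(0, 100, n) + 1j * rng.uniform(0, 100, n)
vel = np.exp(1j * rng.uniform(0, 2 * np.pi, n))

# Rotating every velocity by theta is one multiply:
# no rotation matrices, no einsum.
theta = np.pi / 4
vel = vel * np.exp(1j * theta)

# All pairwise distances from a broadcasted complex difference.
dist = np.abs(pos[:, None] - pos[None, :])
print(dist.shape)  # (500, 500)

# |exp(1j*theta)| == 1, so rotation preserves every speed.
assert np.allclose(np.abs(vel), 1.0)
```

Whether this actually speeds up a given boids implementation depends on the rest of the code; the clear win is notational, which is exactly the readability trade-off discussed in the last two messages.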