From valentinatessy at gmail.com Tue Apr 2 04:34:05 2019 From: valentinatessy at gmail.com (Mbeng Tanyi) Date: Tue, 2 Apr 2019 09:34:05 +0100 Subject: [Numpy-discussion] Beginner Help: Generating HTML using Sphinx In-Reply-To: References: Message-ID: Hello I still have a problem with this. I am using Sphinx 1.8.5 and changed the default Python version on my computer to 3.6, but I get the following error on $ make html: Traceback (most recent call last): > File "/home/valentina-t/.local/bin/sphinx-build", line 7, in > from sphinx.cmd.build import main > File > "/home/valentina-t/.local/lib/python2.7/site-packages/sphinx/cmd/build.py", > line 39 > file=stderr) > ^ > SyntaxError: invalid syntax > Makefile:123: recipe for target 'html' failed > make: *** [html] Error 1 I googled the error but didn't find anything really useful. I suspect it has something to do with python2.7 being part of the path. Assuming I am right, I need help changing it to python3.6, please. Regards Mbeng Tanyi On Sun, Mar 31, 2019 at 9:20 PM Matti Picus wrote: > > On 31/3/19 10:56 pm, Mbeng Tanyi wrote: > > Hello > > > > I also got an error the first time I tried $ make html, as follows: > > > > mkdir -p build/html build/doctrees > > LANG=C sphinx-build -b html -WT --keep-going -d build/doctrees > > source build/html > > /bin/sh: 1: sphinx-build: not found > > Makefile:123: recipe for target 'html' failed > > make: *** [html] Error 127 > > > > > > After upgrading to sphinx2 as was suggested here, I still get errors > > after $ make html: > > > > mkdir -p build/html build/doctrees > > LANG=C sphinx-build -b html -WT --keep-going -d build/doctrees > > source build/html > > Traceback (most recent call last): > > File "/home/valentina-t/.local/bin/sphinx-build", line 7, in > > > > from sphinx.cmd.build import main > > File > > "/home/valentina-t/.local/lib/python2.7/site-packages/sphinx/cmd/build.py", > > line 39 > > file=stderr) > > ^ > > SyntaxError: invalid syntax > > Makefile:123: recipe for target
'html' failed > > make: *** [html] Error > > > > You need to use sphinx version 1.8.5, and python3.6. > > Matti > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From einstein.edison at gmail.com Tue Apr 2 04:37:19 2019 From: einstein.edison at gmail.com (Hameer Abbasi) Date: Tue, 2 Apr 2019 10:37:19 +0200 Subject: [Numpy-discussion] Beginner Help: Generating HTML using Sphinx In-Reply-To: References: Message-ID: <007eceb2-e844-4ab7-bff1-db3e8eb16f4c@Canary> Hi Mbeng! What is the output of python --version and python3 --version? It seems to me that you're still using Python 2.7. You may need to use the command python3 rather than python. Best Regards, Hameer Abbasi > On Tuesday, Apr 02, 2019 at 10:34 AM, Mbeng Tanyi wrote: > Hello > > I still have a problem with this. I am using Sphinx 1.8.5 and changed the default Python version on my computer to 3.6, but I get the following error on $ make html: > > I googled the error but didn't find anything really useful. I suspect it has something to do with python2.7 being part of the path. Assuming I am right, I need help changing it to python3.6, please.
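One low-risk way to make sure the build runs under Python 3 is to use an isolated environment, so the stale sphinx-build script under ~/.local/lib/python2.7 can never be picked up. This is a hedged sketch, not the project's official procedure; the directory name "docs-env" is illustrative:

```shell
# Create an isolated Python 3 environment ("docs-env" is an illustrative name).
python3 -m venv docs-env
# Confirm the environment's interpreter really is Python 3.
docs-env/bin/python -c 'import sys; print(sys.version_info[0])'
# Inside this environment one would then install and run Sphinx explicitly
# (these two steps need network access and are shown for reference only):
#   docs-env/bin/python -m pip install "sphinx==1.8.5"
#   docs-env/bin/python -m sphinx -b html source build/html
```

Running Sphinx as `python -m sphinx` pins it to that environment's interpreter, sidestepping whichever `sphinx-build` script happens to be first on `$PATH`.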
> > Regards > Mbeng Tanyi > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 695 bytes Desc: not available URL: From valentinatessy at gmail.com Tue Apr 2 04:41:52 2019 From: valentinatessy at gmail.com (Mbeng Tanyi) Date: Tue, 2 Apr 2019 09:41:52 +0100 Subject: [Numpy-discussion] Beginner Help: Generating HTML using Sphinx In-Reply-To: <007eceb2-e844-4ab7-bff1-db3e8eb16f4c@Canary> References: <007eceb2-e844-4ab7-bff1-db3e8eb16f4c@Canary> Message-ID: Both python --version and python3 --version give Python 3.6.7 On Tue, Apr 2, 2019 at 9:39 AM Hameer Abbasi wrote: > Hi Mbeng! > > What is the output of python --version and python3 --version? It seems to > me that you're still using Python 2.7. > > You may need to use the command python3 rather than python. > > Best Regards, > Hameer Abbasi > > On Tuesday, Apr 02, 2019 at 10:34 AM, Mbeng Tanyi < > valentinatessy at gmail.com> wrote: > Hello > > I still have a problem with this. I am using Sphinx 1.8.5 and changed the > default Python version on my computer to 3.6, but I get the following > error on $ make html: > > I googled the error but didn't find anything really useful. I suspect it has > something to do with python2.7 being part of the path. Assuming I am right, I > need help changing it to python3.6, please.
> > Regards > Mbeng Tanyi > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Tue Apr 2 04:45:34 2019 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Tue, 2 Apr 2019 10:45:34 +0200 Subject: [Numpy-discussion] Google Summer of Docs ideas? Message-ID: Hi all, NumFOCUS has applied as an umbrella org for the inaugural Google Summer of Docs, and we're participating. We need 1-2 ideas and need to work them out very well (ideas need to be high quality; it's not yet certain that NumFOCUS will be accepted as an umbrella org). Guidelines are at https://developers.google.com/season-of-docs/docs/project-ideas#project-idea Any suggestions for ideas? Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From matti.picus at gmail.com Tue Apr 2 04:45:54 2019 From: matti.picus at gmail.com (Matti Picus) Date: Tue, 2 Apr 2019 11:45:54 +0300 Subject: [Numpy-discussion] Beginner Help: Generating HTML using Sphinx In-Reply-To: References: <007eceb2-e844-4ab7-bff1-db3e8eb16f4c@Canary> Message-ID: Note the /home/valentina-t/.local/lib/python2.7 in the error message. This indicates you are using python2.7, not python3.6. Please try using a virtual environment or activating a conda environment. If you need further help please reach out to me personally. Matti On 2/4/19 11:41 am, Mbeng Tanyi wrote: > Both python --version and python3 --version give Python 3.6.7 > > On Tue, Apr 2, 2019 at 9:39 AM Hameer Abbasi > wrote: > > Hi Mbeng! > > What is the output of python --version and python3 --version? It > seems to me that you're still using Python 2.7. > > You may need to use the command python3 rather than python. > > Best Regards, > Hameer Abbasi > > On Tuesday, Apr 02, 2019 at 10:34 AM, Mbeng Tanyi > wrote: > Hello > > I still have a problem with this. I am using Sphinx 1.8.5 and > changed the default Python version on my computer to 3.6, > but I get the following error on $ make html: > > I googled the error but didn't find anything really useful.
> I feel it has something to do with python2.7 being part of the > path. Assuming I am right, I need help changing it to python3.6, > please. > > Regards > Mbeng Tanyi > > From tyler.je.reddy at gmail.com Tue Apr 2 19:02:51 2019 From: tyler.je.reddy at gmail.com (Tyler Reddy) Date: Tue, 2 Apr 2019 16:02:51 -0700 Subject: [Numpy-discussion] BIDS / NumPy community meeting April 3/ 2019 Message-ID: Hi, A reminder of the NumPy community meeting scheduled for 12pm Pacific time tomorrow, April 3/ 2019. There's a work-in-progress document with a community topics section that may be edited: https://hackmd.io/30eMeRnDQCSW7yDOG05gxA?view -------------- next part -------------- An HTML attachment was scrubbed... URL: From kachine at protonmail.com Thu Apr 4 13:11:03 2019 From: kachine at protonmail.com (kikocorreoso) Date: Thu, 04 Apr 2019 17:11:03 +0000 Subject: [Numpy-discussion] proposal of new keywords for np.nan_to_num Message-ID: <6Zey3ELkTyKnP2jeQgOlP5Yb2i9XZdozbDxWHvpY5aSPraThkgxpbLXyyK7VPrgcrAkt_eT2Au5rO88N_JDGGX-1uM_qgc4u0V2DsCxqnMU=@protonmail.com> Hi all, I propose to add some keywords to the nan_to_num function. The addition does not modify the current behavior. Information related to this addition can be found at these links: https://github.com/numpy/numpy/pull/13219 https://github.com/numpy/numpy/pull/9355 The basic idea is to allow users to supply their own values when replacing nan, positive infinity and/or negative infinity. The proposed names for the keywords are 'nan', 'posinf', and 'neginf' respectively. So the usage would be something like this: >>> a = np.array((np.nan, 2, 3, np.inf, 4, 5, -np.inf)) >>> np.nan_to_num(a, nan=-999) array([-9.99000000e+002, 2.00000000e+000, 3.00000000e+000, 1.79769313e+308, 4.00000000e+000, 5.00000000e+000, -1.79769313e+308]) >>> np.nan_to_num(a, posinf=np.nan, neginf=np.nan) array([ 0., 2., 3., nan, 4., 5., nan]) Please, could you comment on whether the addition would be useful, and whether the PR needs any changes?
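For completeness, the proposed keywords can be exercised alongside the unchanged defaults. This is a hedged sketch assuming the `nan`/`posinf`/`neginf` signature from the PR as described above:

```python
import numpy as np

a = np.array([np.nan, 2.0, np.inf, -np.inf])

# Proposed keywords: choose the replacement for each special value.
custom = np.nan_to_num(a, nan=-999.0, posinf=1e9, neginf=-1e9)

# Existing defaults stay as they are: nan -> 0.0, +inf -> the largest
# finite float64, -inf -> the most negative finite float64.
default = np.nan_to_num(a)
```

Note that the keywords compose independently, so one can replace only the infinities while keeping the default treatment of nan, as in the `posinf=np.nan, neginf=np.nan` example above.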
Thanks to Eric, Joseph, Allan and Matti for their comments and revisions on GH. Kind regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From millman at berkeley.edu Thu Apr 4 20:50:08 2019 From: millman at berkeley.edu (Jarrod Millman) Date: Thu, 4 Apr 2019 17:50:08 -0700 Subject: [Numpy-discussion] NetworkX 2.3rc3 released (Python 3 only) Message-ID: I am happy to announce the third **release candidate** for NetworkX 2.3! NetworkX is a Python package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks. This release supports Python 3.5-3.7 (i.e., this is our first **Python 3 only** release). Please try out the pre-release and let us know about any problems you find. If no major issues arise, we will release 2.3 final next week. Please see the draft of the 2.3 release announcement: https://networkx.github.io/documentation/latest/release/release_dev.html Since this is a pre-release, pip won't automatically install it. So $ pip install networkx still installs networkx-2.2. But $ pip install --pre networkx will install networkx-2.3rc3. If you already have networkx installed then you need to do $ pip install --pre --upgrade networkx For more information, please visit our `website `_ and our `gallery of examples `_. Please send comments and questions to the `networkx-discuss mailing list `_. 
Best regards, Jarrod From allanhaldane at gmail.com Mon Apr 8 11:39:31 2019 From: allanhaldane at gmail.com (Allan Haldane) Date: Mon, 8 Apr 2019 11:39:31 -0400 Subject: [Numpy-discussion] proposal of new keywords for np.nan_to_num In-Reply-To: <6Zey3ELkTyKnP2jeQgOlP5Yb2i9XZdozbDxWHvpY5aSPraThkgxpbLXyyK7VPrgcrAkt_eT2Au5rO88N_JDGGX-1uM_qgc4u0V2DsCxqnMU=@protonmail.com> References: <6Zey3ELkTyKnP2jeQgOlP5Yb2i9XZdozbDxWHvpY5aSPraThkgxpbLXyyK7VPrgcrAkt_eT2Au5rO88N_JDGGX-1uM_qgc4u0V2DsCxqnMU=@protonmail.com> Message-ID: <904947c4-ae42-9b4a-1e43-dbb2c7736567@gmail.com> Since there seem to be no objections, I think we're going ahead with this enhancement for np.nan_to_num. Cheers, Allan On 4/4/19 1:11 PM, kikocorreoso wrote: > Hi all, > > I propose to add some keywords to the nan_to_num function. The addition > does not modify the current behavior. Information related to this addition > can be found at these links: > https://github.com/numpy/numpy/pull/13219 > https://github.com/numpy/numpy/pull/9355 > > The basic idea is to allow users to supply their own values when > replacing nan, positive infinity and/or negative infinity. The proposed > names for the keywords are 'nan', 'posinf', and 'neginf' respectively. So > the usage would be something like this: > >>>> a = np.array((np.nan, 2, 3, np.inf, 4, 5, -np.inf)) >>>> np.nan_to_num(a, nan=-999) > array([-9.99000000e+002,  2.00000000e+000,  3.00000000e+000, >         1.79769313e+308,  4.00000000e+000,  5.00000000e+000, >        -1.79769313e+308]) >>>> np.nan_to_num(a, posinf=np.nan, neginf=np.nan) > array([ 0.,  2.,  3., nan,  4.,  5., nan]) > > Please, could you comment on whether the addition would be useful, and > whether the PR needs any changes? > > Thanks to Eric, Joseph, Allan and Matti for their comments and revisions > on GH. > > Kind regards.
URL: From ralf.gommers at gmail.com Tue Apr 9 03:10:40 2019 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Tue, 9 Apr 2019 09:10:40 +0200 Subject: [Numpy-discussion] adding Quansight Labs as institutional partner Message-ID: Hi all, Last week I joined Quansight. In Quansight Labs I will be working on increasing the contributions to core SciPy/PyData projects, and will also have funded time to work on NumPy and other projects myself. Hameer Abbasi has had and will continue to have funded time to work on NumPy as well. So I have submitted a pull request (gh-13289) to add Quansight Labs as an Institutional Partner (we list those at https://docs.scipy.org/doc/numpy/dev/governance/people.html#institutional-partners, currently only BIDS). Both Travis and I wrote blog posts about where we want to go with Quansight Labs. Given the relevance to NumPy I thought it would be appropriate to reference those posts here: - https://www.quansight.com/single-post/2019/04/02/Welcoming-Ralf-Gommers-as-Director-of-Quansight-Labs - https://labs.quansight.org/blog/2019/4/joining-labs/ Any feedback, suggestion or idea is very welcome. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.isaac at gmail.com Tue Apr 9 12:04:42 2019 From: alan.isaac at gmail.com (Alan Isaac) Date: Tue, 9 Apr 2019 12:04:42 -0400 Subject: [Numpy-discussion] adding Quansight Labs as institutional partner In-Reply-To: References: Message-ID: Under the section "How will we fund this?" in the first Quansight link, there is not category of "individual and institutional donations". I noticed this because the question recently arose at my university, how can the university occasionally donate to NumPy development? In order for this to happen, the recipient of the donation and the intended use of the funds must be transparently documented. As an example, suppose one goes to numpy.org and scrolls down (!?) to the "Donate to Numpy" button. 
It is entirely unclear what that means, and clicking the button leads to a flipcause site that fails to clarify. I suspect many academic institutions would be interested in making occasional, modest contributions toward NumPy development, if the recipient and intended uses were entirely transparent. Cheers, Alan Isaac On 4/9/2019 3:10 AM, Ralf Gommers wrote: > Hi all, > > Last week I joined Quansight. In Quansight Labs I will be working on increasing the contributions to core SciPy/PyData projects, and will also have funded time to work on NumPy and > other projects myself. Hameer Abbasi has had and will continue to have funded time to work on NumPy as well. So I have submitted a pull request (gh-13289) to add Quansight Labs as > an Institutional Partner (we list those at https://docs.scipy.org/doc/numpy/dev/governance/people.html#institutional-partners, currently only BIDS). > > Both Travis and I wrote blog posts about where we want to go with Quansight Labs. Given the relevance to NumPy I thought it would be appropriate to reference those posts here: > - https://www.quansight.com/single-post/2019/04/02/Welcoming-Ralf-Gommers-as-Director-of-Quansight-Labs > > - https://labs.quansight.org/blog/2019/4/joining-labs/ > > Any feedback, suggestion or idea is very welcome. > > Cheers, > Ralf From ralf.gommers at gmail.com Tue Apr 9 12:25:13 2019 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Tue, 9 Apr 2019 18:25:13 +0200 Subject: [Numpy-discussion] adding Quansight Labs as institutional partner In-Reply-To: References: Message-ID: Thanks Alan, good questions. The donations via the Flipcause site go to NumFOCUS. NumFOCUS is a 501(c)3 and NumPy's fiscal sponsor, so any individual or institution that wants to donate to NumPy should preferably donate to NumFOCUS. That way your donation is tax-deductable if you're in the US, and it can be used in a way that the NumPy Steering Council prefers. 
Quansight Labs is not a nonprofit and it doesn't make much sense for it to focus on donations. That said, it does have a very capable team, so could contract with NumFOCUS to do work on NumPy, if the NumPy Steering Council thinks that's in NumPy's best interest (e.g. for developing a particular feature). On Tue, Apr 9, 2019 at 6:05 PM Alan Isaac wrote: > Under the section "How will we fund this?" in the first Quansight link, > there is not category of "individual and institutional donations". > I noticed this because the question recently arose at my university, > how can the university occasionally donate to NumPy development? > We should talk:) Anything we can do as a project to make that easier should be done. Also if you need an invoice or purchase order from NumFOCUS, I believe that can be easily arranged (and has been arranged for other projects in the past). In order for this to happen, the recipient of the donation and the > intended use of the funds must be transparently documented. As an > example, suppose one goes to numpy.org and scrolls down (!?) to > the "Donate to Numpy" button. It is entirely unclear what that > means, and clicking the button leads to a flipcause site that fails > to clarify. The numpy.org donation button needs to be overhauled anyway, because NumFOCUS is switching away from Flipcause. At the same time we can clarify on that page where donation go and how we then decide to use that funding. > I suspect many academic institutions would be interested > in making occasional, modest contributions toward NumPy development, > if the recipient and intended uses were entirely transparent. > I think academic institutions or the people in it may have a lot of goodwill towards NumPy, however as a project we historically have been very bad at communicating needs and asking for donations. That button doesn't really do much; our average donation level is like $50/month. I actually would like to improve that. E.g. 
if we have a good story and ask people whose research relies on NumPy (or other core projects) to build say a 0.5% software support item in their grant requests, that could turn into a decent revenue stream, which will then help with maintenance and accelerating development of new features on our roadmap. Cheers, Ralf > Cheers, Alan Isaac > > > On 4/9/2019 3:10 AM, Ralf Gommers wrote: > > Hi all, > > > > Last week I joined Quansight. In Quansight Labs I will be working on > increasing the contributions to core SciPy/PyData projects, and will also > have funded time to work on NumPy and > > other projects myself. Hameer Abbasi has had and will continue to have > funded time to work on NumPy as well. So I have submitted a pull request > (gh-13289) to add Quansight Labs as > > an Institutional Partner (we list those at > https://docs.scipy.org/doc/numpy/dev/governance/people.html#institutional-partners, > currently only BIDS). > > > > Both Travis and I wrote blog posts about where we want to go with > Quansight Labs. Given the relevance to NumPy I thought it would be > appropriate to reference those posts here: > > - > https://www.quansight.com/single-post/2019/04/02/Welcoming-Ralf-Gommers-as-Director-of-Quansight-Labs > > < > https://www.quansight.com/single-post/2019/04/02/Welcoming-Ralf-Gommers-as-Director-of-Quansight-Labs > > > > - https://labs.quansight.org/blog/2019/4/joining-labs/ > > > > Any feedback, suggestion or idea is very welcome. > > > > Cheers, > > Ralf > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tyler.je.reddy at gmail.com Tue Apr 9 13:20:03 2019 From: tyler.je.reddy at gmail.com (Tyler Reddy) Date: Tue, 9 Apr 2019 10:20:03 -0700 Subject: [Numpy-discussion] BIDS / NumPy community meeting April 10/ 2019 Message-ID: Hi, A reminder of the community call scheduled for April 10/ 2019 at 12 pm Pacific Time. There's a section for community-suggested topics on the draft meeting document that may be edited here: https://hackmd.io/WVHUbdriRe26t6s09HjEhA?view -------------- next part -------------- An HTML attachment was scrubbed... URL: From ggardu at gmail.com Tue Apr 9 22:24:49 2019 From: ggardu at gmail.com (Gheorghe Gardu) Date: Tue, 9 Apr 2019 20:24:49 -0600 Subject: [Numpy-discussion] (no subject) Message-ID: hello, Sir, I have downloaded mlxtend-0.15.0.0, and right now I have error when I compile python scripts. I receive "AttributeError: module 'numpy' has no attribute 'testing'", and it seems to be due to an installation issue. Do you have any idea why does this happen ? It is because of mlx? Someone said I should delete the numpy.py file and update it. Do you have idea where is numpy.py located ? Thank in advance. gheorghe gardu -------------- next part -------------- An HTML attachment was scrubbed... URL: From tyler.je.reddy at gmail.com Thu Apr 11 01:14:18 2019 From: tyler.je.reddy at gmail.com (Tyler Reddy) Date: Wed, 10 Apr 2019 22:14:18 -0700 Subject: [Numpy-discussion] Does numpy depend upon a Fortran library? In-Reply-To: References: <0F4FE3FA-50F9-421B-B323-3EFBA4EFBF2D@fnal.gov> Message-ID: I have a different view on the assessments in this thread, having built OpenBLAS manually a number of times on different platforms now. Those shared objects packed in the wheel, including libgfortran and libquadmath, are proper runtime dependencies for the OpenBLAS library we ship with wheels, not artifacts of an old ATLAS dependency structure. 
Another way to achieve compliance with the wheels standard is to statically link them in when we build OpenBLAS on macpython / manually. This seems to be relatively doable on some platforms, and harder on others. There is a demonstration for Mac OS + NumPy available: https://github.com/numpy/numpy/pull/13191 On Linux, it is much harder, we would need a custom build of gcc from source with -fPIC compiler flag used to build libgfortran.a. The Julia language also faced this challenge on static links: https://github.com/JuliaLang/julia/issues/326#issuecomment-191781005 I'm not saying we should jump on static linking of the GCC runtime into OpenBLAS right away, but removing those shared objects from the wheels without a linking change doesn't seem quite right unless I'm missing something major. If we do eventually embrace the static link of the GCC runtime into the OpenBLAS wheels, this also makes our daily CI infrastructure less complex because we don't get pinned to specific runtime library versions of libgfortran / libquadmath & could likely just remove the gfortran-install submodule from our wheels workflow as well. But we don't get that gain for nothing--we do transfer some non-trivial burden to "upstream" builds of OpenBLAS & things do mostly tend to work the way they are now. The PEPs surrounding the wheel ecosystem also contain some cautions about the complexities of trying to static link with default OS lib availabilities. 
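As an aside, what a wheel actually vendors is easy to enumerate after installation. A hedged sketch follows; the helper below is illustrative, not an existing NumPy or auditwheel API. Pointing it at os.path.dirname(numpy.__file__) and then running ldd (Linux) or otool -L (macOS) on the libopenblas entry shows the libgfortran/libquadmath link chain discussed above:

```python
import os
import sysconfig

def vendored_shared_libs(package_dir):
    """List shared-library files found under an installed package tree.

    auditwheel-repaired Linux wheels put vendored libraries in a
    ``numpy.libs``-style directory; delocate on macOS uses ``.dylibs``.
    Here we simply scan for common shared-object suffixes.
    """
    suffixes = (".so", ".dylib", ".dll", ".pyd")
    found = []
    for root, _dirs, files in os.walk(package_dir):
        for name in files:
            if name.endswith(suffixes) or ".so." in name:
                found.append(os.path.join(root, name))
    return sorted(found)

# Exercise the scanner on the standard library's own tree, which holds
# compiled extension modules on CPython; for NumPy one would pass
# os.path.dirname(numpy.__file__) instead.
libs = vendored_shared_libs(sysconfig.get_paths()["platstdlib"])
```

This only reports what is on disk; whether each library is a true runtime dependency is then confirmed by inspecting the dynamic link chain with the platform's loader tools.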
Best wishes, Tyler On Thu, 31 Jan 2019 at 15:58, Ralf Gommers wrote: > > > On Wed, Jan 30, 2019 at 6:03 PM Charles R Harris < > charlesr.harris at gmail.com> wrote: > >> >> >> On Wed, Jan 30, 2019 at 6:32 PM Ralf Gommers >> wrote: >> >>> >>> >>> On Wed, Jan 30, 2019 at 5:19 PM Charles R Harris < >>> charlesr.harris at gmail.com> wrote: >>> >>>> >>>> >>>> On Wed, Jan 30, 2019 at 5:28 PM Marc F Paterno >>>> wrote: >>>> >>>>> Hello, >>>>> >>>>> I have encountered a problem with a binary incompatibility between the >>>>> Fortran runtime library installed with numpy when using 'pip install --user >>>>> numpy', and that used by the rest of my program, which is built using >>>>> gfortran from GCC 8.2. The numpy installation uses libgfortran.5.dylib, >>>>> and GCC 8.2 provides libgfortran.5.dylib. >>>>> >>>>> While investigating the source of this problem, I downloaded the numpy >>>>> source >>>>> ( >>>>> https://files.pythonhosted.org/packages/04/b6/d7faa70a3e3eac39f943cc6a6a64ce378259677de516bd899dd9eb8f9b32/numpy-1.16.0.zip >>>>> ), >>>>> and tried building it. The resulting libraries have no coupling to any >>>>> Fortran library that I can find. I can find no Fortran source code files >>>>> in the numpy source, >>>>> except in tests or documentation. >>>>> >>>>> I am working on a MacBook laptop, running macOS Mojave, and so am >>>>> using the Accelerate framework to supply BLAS. >>>>> >>>>> I do not understand why the pip installation of numpy includes a >>>>> Fortran runtime library. Can someone explain to me what I am missing? >>>>> >>>>> >>>> That's interesting, it looks like the wheel includes four libraries: >>>> >>>> -rw-r--r--. 1 charris charris 273072 Jan 1 1980 libgcc_s.1.dylib >>>> -rwxr-xr-x. 1 charris charris 1550456 Jan 1 1980 libgfortran.3.dylib >>>> -rwxr-xr-x. 1 charris charris 63433364 Jan 1 1980 >>>> libopenblasp-r0.3.5.dev.dylib >>>> -rwxr-xr-x. 
1 charris charris 279932 Jan 1 1980 libquadmath.0.dylib >>>> >>>> I thought we only needed the openblas, but that in turn probably >>>> depends on libgcc. But why we have the fortran library and quadmath escapes >>>> me. Perhaps someone else knows. >>>> >>> >>> I suspect it's a leftover from when we were using ATLAS, we did need a >>> Fortran runtime library at some point. The cause will be somewhere in the >>> numpy-wheel build scripts, there is a gfortran-install git submodule: >>> https://github.com/MacPython/numpy-wheels >>> >> >> And fortran is probably why the quadmath is there. Hmm, if we fix it we >> will need to test it... >> > > I opened an issue: https://github.com/MacPython/numpy-wheels/issues/42 > > Ralf > > >> Chuck >> >>> >>> >>>> >>>> Note that compiling from source is different and will generally use >>>> different libraries. We don't use Accelerate because it is buggy, not >>>> thread safe, and it appears Apple is not interested in doing anything about >>>> that. >>>> >>>> Chuck >>>> _______________________________________________ >>>> NumPy-Discussion mailing list >>>> NumPy-Discussion at python.org >>>> https://mail.python.org/mailman/listinfo/numpy-discussion >>>> >>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion at python.org >>> https://mail.python.org/mailman/listinfo/numpy-discussion >>> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at python.org >> https://mail.python.org/mailman/listinfo/numpy-discussion >> > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From njs at pobox.com Thu Apr 11 02:26:39 2019 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 10 Apr 2019 23:26:39 -0700 Subject: [Numpy-discussion] Does numpy depend upon a Fortran library? In-Reply-To: References: <0F4FE3FA-50F9-421B-B323-3EFBA4EFBF2D@fnal.gov> Message-ID: Yeah, the libgfortran dependency is expected. The question is why this should cause any problems. The wheel building infrastructure goes to great lengths to make sure that the dynamic libraries in the wheels shouldn't interfere with other packages. The original complaint in this thread is very vague... On Wed, Apr 10, 2019, 22:16 Tyler Reddy wrote: > I have a different view on the assessments in this thread, having built > OpenBLAS manually a number of times on different platforms now. > > Those shared objects packed in the wheel, including libgfortran and > libquadmath, are proper runtime dependencies for the OpenBLAS library we > ship with wheels, not artifacts of an old ATLAS dependency structure. > Another way to achieve compliance with the wheels standard is to statically > link them in when we build OpenBLAS on macpython / manually. This seems to > be relatively doable on some platforms, and harder on others. > > There is a demonstration for Mac OS + NumPy available: > https://github.com/numpy/numpy/pull/13191 > > On Linux, it is much harder, we would need a custom build of gcc from > source with -fPIC compiler flag used to build libgfortran.a. The Julia > language also faced this challenge on static links: > https://github.com/JuliaLang/julia/issues/326#issuecomment-191781005 > > I'm not saying we should jump on static linking of the GCC runtime into > OpenBLAS right away, but removing those shared objects from the wheels > without a linking change doesn't seem quite right unless I'm missing > something major. 
If we do eventually embrace the static link of the GCC > runtime into the OpenBLAS wheels, this also makes our daily CI > infrastructure less complex because we don't get pinned to specific runtime > library versions of libgfortran / libquadmath & could likely just remove > the gfortran-install submodule from our wheels workflow as well. > > But we don't get that gain for nothing--we do transfer some non-trivial > burden to "upstream" builds of OpenBLAS & things do mostly tend to work the > way they are now. The PEPs surrounding the wheel ecosystem also contain > some cautions about the complexities of trying to static link with default > OS lib availabilities. > > Best wishes, > Tyler > > > > > > On Thu, 31 Jan 2019 at 15:58, Ralf Gommers wrote: > >> >> >> On Wed, Jan 30, 2019 at 6:03 PM Charles R Harris < >> charlesr.harris at gmail.com> wrote: >> >>> >>> >>> On Wed, Jan 30, 2019 at 6:32 PM Ralf Gommers >>> wrote: >>> >>>> >>>> >>>> On Wed, Jan 30, 2019 at 5:19 PM Charles R Harris < >>>> charlesr.harris at gmail.com> wrote: >>>> >>>>> >>>>> >>>>> On Wed, Jan 30, 2019 at 5:28 PM Marc F Paterno >>>>> wrote: >>>>> >>>>>> Hello, >>>>>> >>>>>> I have encountered a problem with a binary incompatibility between >>>>>> the Fortran runtime library installed with numpy when using 'pip install >>>>>> --user numpy', and that used by the rest of my program, which is built >>>>>> using gfortran from GCC 8.2. The numpy installation uses >>>>>> libgfortran.5.dylib, and GCC 8.2 provides libgfortran.5.dylib. >>>>>> >>>>>> While investigating the source of this problem, I downloaded the >>>>>> numpy source >>>>>> ( >>>>>> https://files.pythonhosted.org/packages/04/b6/d7faa70a3e3eac39f943cc6a6a64ce378259677de516bd899dd9eb8f9b32/numpy-1.16.0.zip >>>>>> ), >>>>>> and tried building it. The resulting libraries have no coupling to >>>>>> any Fortran library that I can find. I can find no Fortran source code >>>>>> files in the numpy source, >>>>>> except in tests or documentation. 
>>>>>> >>>>>> I am working on a MacBook laptop, running macOS Mojave, and so am >>>>>> using the Accelerate framework to supply BLAS. >>>>>> >>>>>> I do not understand why the pip installation of numpy includes a >>>>>> Fortran runtime library. Can someone explain to me what I am missing? >>>>>> >>>>>> >>>>> That's interesting, it looks like the wheel includes four libraries: >>>>> >>>>> -rw-r--r--. 1 charris charris 273072 Jan 1 1980 libgcc_s.1.dylib >>>>> -rwxr-xr-x. 1 charris charris 1550456 Jan 1 1980 libgfortran.3.dylib >>>>> -rwxr-xr-x. 1 charris charris 63433364 Jan 1 1980 >>>>> libopenblasp-r0.3.5.dev.dylib >>>>> -rwxr-xr-x. 1 charris charris 279932 Jan 1 1980 libquadmath.0.dylib >>>>> >>>>> I thought we only needed the openblas, but that in turn probably >>>>> depends on libgcc. But why we have the fortran library and quadmath escapes >>>>> me. Perhaps someone else knows. >>>>> >>>> >>>> I suspect it's a leftover from when we were using ATLAS, we did need a >>>> Fortran runtime library at some point. The cause will be somewhere in the >>>> numpy-wheel build scripts, there is a gfortran-install git submodule: >>>> https://github.com/MacPython/numpy-wheels >>>> >>> >>> And fortran is probably why the quadmath is there. Hmm, if we fix it we >>> will need to test it... >>> >> >> I opened an issue: https://github.com/MacPython/numpy-wheels/issues/42 >> >> Ralf >> >> >>> Chuck >>> >>>> >>>> >>>>> >>>>> Note that compiling from source is different and will generally use >>>>> different libraries. We don't use Accelerate because it is buggy, not >>>>> thread safe, and it appears Apple is not interested in doing anything about >>>>> that. 
>>>>> >>>>> Chuck >>>>> _______________________________________________ >>>>> NumPy-Discussion mailing list >>>>> NumPy-Discussion at python.org >>>>> https://mail.python.org/mailman/listinfo/numpy-discussion >>>>> >>>> _______________________________________________ >>>> NumPy-Discussion mailing list >>>> NumPy-Discussion at python.org >>>> https://mail.python.org/mailman/listinfo/numpy-discussion >>>> >>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion at python.org >>> https://mail.python.org/mailman/listinfo/numpy-discussion >>> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at python.org >> https://mail.python.org/mailman/listinfo/numpy-discussion >> > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From m.h.vankerkwijk at gmail.com Thu Apr 11 09:51:10 2019 From: m.h.vankerkwijk at gmail.com (Marten van Kerkwijk) Date: Thu, 11 Apr 2019 09:51:10 -0400 Subject: [Numpy-discussion] Behaviour of copy for structured dtypes with gaps Message-ID: Hi All, An issue [1] about the copying of arrays with structured dtype raised a question about what the expected behaviour is: does copy always preserve the dtype as is, or should it remove padding? Specifically, consider an array with a structure with many fields, say 'a' to 'z'. Since numpy 1.16, if one does a[['a', 'z']]`, a view will be returned. In this case, its dtype will include a large offset. Now, if we copy this view, should the result have exactly the same dtype, including the large offset (i.e., the copy takes as much memory as the original full array), or should the padding be removed? 
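[Editor's note: for concreteness, the situation just described can be reproduced with a small structured array. The three-field dtype below is an illustrative stand-in for the many-field 'a' to 'z' structure in the message.]

```python
import numpy as np

# Structured array standing in for the many-field 'a'..'z' example.
a = np.zeros(3, dtype=[('a', 'i8'), ('b', 'f8'), ('z', 'i8')])

# Since numpy 1.16, multi-field indexing returns a *view* whose dtype
# keeps the original offsets, so the unselected field 'b' survives
# as 8 bytes of padding between 'a' and 'z'.
v = a[['a', 'z']]
print(v.dtype.itemsize)   # 24: the full struct size, padding included

# The open question: should the copy keep itemsize 24 (contract 1),
# or compact to 16 (contract 2)?
c = v.copy()
print(c.dtype.itemsize)
```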
From the discussion so far, it seems the logic has boiled down to a choice between: (1) Copy is a contract that the dtype will not vary (e.g., we also do not change endianness); (2) Copy is a contract that any access to the data in the array will return exactly the same result, without wasting memory and possibly optimized for access with different strides. E.g., `array[::10].copy()` also compacts the result. An argument in favour of (2) is that, before numpy 1.16, `a[['a', 'z']].copy()` did return an array without padding. Of course, this relied on `a[['a', 'z']]` already returning a copy without padding, but still this is a regression. More generally, there should at least be a clear way to get the compact copy. Also, it would make sense for things like `np.save` to remove any padding (it currently does not). What do people think? All the best, Marten [1] https://github.com/numpy/numpy/issues/13299 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Thu Apr 11 16:03:23 2019 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Thu, 11 Apr 2019 22:03:23 +0200 Subject: [Numpy-discussion] adding Quansight Labs as institutional partner In-Reply-To: References: Message-ID: On Tue, Apr 9, 2019 at 6:25 PM Ralf Gommers wrote: > > > Thanks Alan, good questions. The donations via the Flipcause site go to > NumFOCUS. NumFOCUS is a 501(c)3 and NumPy's fiscal sponsor, so any > individual or institution that wants to donate to NumPy should preferably > donate to NumFOCUS. That way your donation is tax-deductible if you're in > the US, and it can be used in a way that the NumPy Steering Council prefers. > > Quansight Labs is not a nonprofit and it doesn't make much sense for it to > focus on donations. That said, it does have a very capable team, so could > contract with NumFOCUS to do work on NumPy, if the NumPy Steering Council > thinks that's in NumPy's best interest (e.g. for developing a particular > feature).
> > > On Tue, Apr 9, 2019 at 6:05 PM Alan Isaac wrote: > >> Under the section "How will we fund this?" in the first Quansight link, >> there is no category of "individual and institutional donations". >> I noticed this because the question recently arose at my university, >> how can the university occasionally donate to NumPy development? >> > > We should talk :) Anything we can do as a project to make that easier > should be done. Also if you need an invoice or purchase order from > NumFOCUS, I believe that can be easily arranged (and has been arranged for > other projects in the past). > > In order for this to happen, the recipient of the donation and the >> intended use of the funds must be transparently documented. As an >> example, suppose one goes to numpy.org and scrolls down (!?) to >> the "Donate to Numpy" button. It is entirely unclear what that >> means, and clicking the button leads to a flipcause site that fails >> to clarify. > > The numpy.org donation button needs to be overhauled anyway, because > NumFOCUS is switching away from Flipcause. At the same time we can clarify > on that page where donations go and how we then decide to use that funding. > Here is a PR with updates to the numpy.org front page: https://github.com/numpy/numpy.org/pull/20. I think it contains all the essentials (governance, roadmap, where the funds go). A larger website overhaul is also in order, but that's hard to do right now. However, if there is still information missing that people really look for when considering a donation, I'd love to know so I can add that straight away. Cheers, Ralf > >> I suspect many academic institutions would be interested >> in making occasional, modest contributions toward NumPy development, >> if the recipient and intended uses were entirely transparent.
>> > > I think academic institutions or the people in it may have a lot of > goodwill towards NumPy, however as a project we historically have been very > bad at communicating needs and asking for donations. That button doesn't > really do much; our average donation level is like $50/month. I actually > would like to improve that. E.g. if we have a good story and ask people > whose research relies on NumPy (or other core projects) to build say a 0.5% > software support item in their grant requests, that could turn into a > decent revenue stream, which will then help with maintenance and > accelerating development of new features on our roadmap. > > Cheers, > Ralf > > > >> Cheers, Alan Isaac >> >> >> On 4/9/2019 3:10 AM, Ralf Gommers wrote: >> > Hi all, >> > >> > Last week I joined Quansight. In Quansight Labs I will be working on >> increasing the contributions to core SciPy/PyData projects, and will also >> have funded time to work on NumPy and >> > other projects myself. Hameer Abbasi has had and will continue to have >> funded time to work on NumPy as well. So I have submitted a pull request >> (gh-13289) to add Quansight Labs as >> > an Institutional Partner (we list those at >> https://docs.scipy.org/doc/numpy/dev/governance/people.html#institutional-partners, >> currently only BIDS). >> > >> > Both Travis and I wrote blog posts about where we want to go with >> Quansight Labs. Given the relevance to NumPy I thought it would be >> appropriate to reference those posts here: >> > - >> https://www.quansight.com/single-post/2019/04/02/Welcoming-Ralf-Gommers-as-Director-of-Quansight-Labs >> > < >> https://www.quansight.com/single-post/2019/04/02/Welcoming-Ralf-Gommers-as-Director-of-Quansight-Labs >> > >> > - https://labs.quansight.org/blog/2019/4/joining-labs/ >> > >> > Any feedback, suggestion or idea is very welcome. 
>> > Cheers, >> > Ralf >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at python.org >> https://mail.python.org/mailman/listinfo/numpy-discussion >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefanv at berkeley.edu Thu Apr 11 18:59:54 2019 From: stefanv at berkeley.edu (Stefan van der Walt) Date: Thu, 11 Apr 2019 15:59:54 -0700 Subject: [Numpy-discussion] Behaviour of copy for structured dtypes with gaps In-Reply-To: References: Message-ID: <20190411225954.vlgfhb6afkthy3xf@carbo> Hi Marten, On Thu, 11 Apr 2019 09:51:10 -0400, Marten van Kerkwijk wrote: > From the discussion so far, it > seems the logic has boiled down to a choice between: > > (1) Copy is a contract that the dtype will not vary (e.g., we also do not > change endianness); > > (2) Copy is a contract that any access to the data in the array will return > exactly the same result, without wasting memory and possibly optimized for > access with different strides. E.g., `array[::10].copy()` also compacts the > result.

I think you'll get different answers, depending on whom you ask: those interested in low-level memory layout, vs those who use the higher-level API. Given that higher-level API use is much more common, I would lean in the direction of option (2).

From that perspective, we already don't make consistency guarantees about memory layout and other flags. E.g.,

In [16]: x = np.arange(12).reshape((3, 4))
In [17]: x.strides
Out[17]: (32, 8)
In [18]: x[::2, 1::2].strides
Out[18]: (64, 16)
In [19]: np.copy(x[::2, 1::2]).strides
Out[19]: (16, 8)

Not to mention this odd copy contract:

>>> x = np.array([[1,2,3],[4,5,6]], order='F')
>>> print(np.copy(x).flags['C_CONTIGUOUS'])
False
>>> print(x.copy().flags['C_CONTIGUOUS'])
True

The objection about arrays that don't behave identically in [0] feels somewhat arbitrary to me.
As shown above, you can always find attributes that differ between a copied array and the original. The user's expectation is that they'll get an array that behaves the same way as the original, not one that is byte-for-byte compatible. The most common use case is to make sure that the original array doesn't get overwritten. Just to play devil's advocate with myself: if you do choose option (2), how would you go about making an identical memory copy of the original array? Best regards, Stéfan [0] https://github.com/numpy/numpy/issues/13299#issuecomment-481912827 From teoliphant at gmail.com Thu Apr 11 21:04:31 2019 From: teoliphant at gmail.com (Travis Oliphant) Date: Thu, 11 Apr 2019 20:04:31 -0500 Subject: [Numpy-discussion] Behaviour of copy for structured dtypes with gaps In-Reply-To: <20190411225954.vlgfhb6afkthy3xf@carbo> References: <20190411225954.vlgfhb6afkthy3xf@carbo> Message-ID: I agree with Stefan that option 2 is what NumPy should go with for .copy(). If you want to get an identical memory copy you should be getting the .data attribute and doing something with that buffer. My $0.02 -Travis On Thu, Apr 11, 2019 at 6:01 PM Stefan van der Walt wrote: > Hi Marten, > > On Thu, 11 Apr 2019 09:51:10 -0400, Marten van Kerkwijk wrote: > > From the discussion so far, it > > seems the logic has boiled down to a choice between: > > > > (1) Copy is a contract that the dtype will not vary (e.g., we also do not > > change endianness); > > > > (2) Copy is a contract that any access to the data in the array will > return > > exactly the same result, without wasting memory and possibly optimized > for > > access with different strides. E.g., `array[::10].copy() also compacts > the > > result. > > I think you'll get different answers, depending on whom you ask: those > interested in low-level memory layout, vs those who use the higher-level > API. Given that higher-level API use is much more common, I would lean > in the direction of option (2).
> From that perspective, we already don't make consistency guarantees about > memory layout and other flags. E.g., > > In [16]: x = np.arange(12).reshape((3, 4)) > > In [17]: x.strides > > Out[17]: (32, 8) > > In [18]: x[::2, 1::2].strides > > Out[18]: (64, 16) > > In [19]: np.copy(x[::2, 1::2]).strides > > Out[19]: (16, 8) > > Not to mention this odd copy contract: > > >>> x = np.array([[1,2,3],[4,5,6]], order='F') > >>> print(np.copy(x).flags['C_CONTIGUOUS']) > >>> print(x.copy().flags['C_CONTIGUOUS']) > > False > True > > The objection about arrays that don't behave identically in [0] feels > somewhat arbitrary to me. As shown above, you can always find attributes > that differ between a copied array and the original. > > The user's expectation is that they'll get an array that behaves the > same way as the original, not one that is byte-for-byte compatible. The > most common use case is to make sure that the original array doesn't get > overwritten. > > Just to play devil's advocate with myself: if you do choose option (2), > how would you go about making an identical memory copy of the original > array? > > Best regards, > Stéfan > > > [0] https://github.com/numpy/numpy/issues/13299#issuecomment-481912827 > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Thu Apr 11 22:07:27 2019 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 11 Apr 2019 19:07:27 -0700 Subject: [Numpy-discussion] Behaviour of copy for structured dtypes with gaps In-Reply-To: References: Message-ID: My concern would be that to implement (2), I think .copy() has to either special-case certain dtypes, or else we have to add some kind of "simplify for copy" operation to the dtype protocol.
These both add architectural complexity, so maybe it's better to avoid it unless we have a compelling reason? On Thu, Apr 11, 2019 at 6:51 AM Marten van Kerkwijk wrote: > > Hi All, > > An issue [1] about the copying of arrays with structured dtype raised a question about what the expected behaviour is: does copy always preserve the dtype as is, or should it remove padding? > > Specifically, consider an array with a structure with many fields, say 'a' to 'z'. Since numpy 1.16, if one does a[['a', 'z']]`, a view will be returned. In this case, its dtype will include a large offset. Now, if we copy this view, should the result have exactly the same dtype, including the large offset (i.e., the copy takes as much memory as the original full array), or should the padding be removed? From the discussion so far, it seems the logic has boiled down to a choice between: > > (1) Copy is a contract that the dtype will not vary (e.g., we also do not change endianness); > > (2) Copy is a contract that any access to the data in the array will return exactly the same result, without wasting memory and possibly optimized for access with different strides. E.g., `array[::10].copy() also compacts the result. > > An argument in favour of (2) is that, before numpy 1.16, `a[['a', 'z']].copy()` did return an array without padding. Of course, this relied on `a[['a', 'z']]` already returning a copy without padding, but still this is a regression. > > More generally, there should at least be a clear way to get the compact copy. Also, it would make sense for things like `np.save` to remove any padding (it currently does not). > > What do people think? All the best, > > Marten > > [1] https://github.com/numpy/numpy/issues/13299 > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion -- Nathaniel J. 
Smith -- https://vorpus.org From faltet at gmail.com Fri Apr 12 05:20:58 2019 From: faltet at gmail.com (Francesc Alted) Date: Fri, 12 Apr 2019 11:20:58 +0200 Subject: [Numpy-discussion] Behaviour of copy for structured dtypes with gaps In-Reply-To: References: Message-ID: I recently put some thought into the issue because a user was complaining about PyTables inadvertently removing the padding while doing a copy. Incidentally, h5py also respects padding while doing copies, so I took this seriously and released a new PyTables version mainly for fixing this. You can see the use case and my reflections here: https://github.com/PyTables/PyTables/pull/720 So, my take on this is that the padding is an integral part of the dtype and should be respected during copies too (principle of minimal surprise). With this, I am definitely aligned (pun intended) with contract (1). Francesc Message from Nathaniel Smith on Fri, 12 Apr 2019 at 4:08: > My concern would be that to implement (2), I think .copy() has to > either special-case certain dtypes, or else we have to add some kind > of "simplify for copy" operation to the dtype protocol. These both add > architectural complexity, so maybe it's better to avoid it unless we > have a compelling reason? > > On Thu, Apr 11, 2019 at 6:51 AM Marten van Kerkwijk > wrote: > > > > Hi All, > > > > An issue [1] about the copying of arrays with structured dtype raised a > question about what the expected behaviour is: does copy always preserve > the dtype as is, or should it remove padding? > > > > Specifically, consider an array with a structure with many fields, say > 'a' to 'z'. Since numpy 1.16, if one does `a[['a', 'z']]`, a view will be > returned. In this case, its dtype will include a large offset. Now, if we > copy this view, should the result have exactly the same dtype, including > the large offset (i.e., the copy takes as much memory as the original full > array), or should the padding be removed?
From the discussion so far, it > seems the logic has boiled down to a choice between: > > > > (1) Copy is a contract that the dtype will not vary (e.g., we also do > not change endianness); > > > > (2) Copy is a contract that any access to the data in the array will > return exactly the same result, without wasting memory and possibly > optimized for access with different strides. E.g., `array[::10].copy() also > compacts the result. > > > > An argument in favour of (2) is that, before numpy 1.16, `a[['a', > 'z']].copy()` did return an array without padding. Of course, this relied > on `a[['a', 'z']]` already returning a copy without padding, but still this > is a regression. > > > > More generally, there should at least be a clear way to get the compact > copy. Also, it would make sense for things like `np.save` to remove any > padding (it currently does not). > > > > What do people think? All the best, > > > > Marten > > > > [1] https://github.com/numpy/numpy/issues/13299 > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at python.org > > https://mail.python.org/mailman/listinfo/numpy-discussion > > > > -- > Nathaniel J. Smith -- https://vorpus.org > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -- Francesc Alted -------------- next part -------------- An HTML attachment was scrubbed... URL: From allanhaldane at gmail.com Fri Apr 12 12:13:18 2019 From: allanhaldane at gmail.com (Allan Haldane) Date: Fri, 12 Apr 2019 12:13:18 -0400 Subject: [Numpy-discussion] Behaviour of copy for structured dtypes with gaps In-Reply-To: References: Message-ID: <4da60bc1-4d62-8aa7-131e-b1f89166b2c9@gmail.com> I would be much more in favor of `copy` eliminating padding in the dtype, if dtypes with different paddings were considered equivalent. But they are not. 
Numpy has always treated dtypes with different padding bytes as not-equal, and prints them very differently:

>>> a = np.array([1], dtype={'names': ['f'],
...                          'formats': ['i4'],
...                          'offsets': [0]})
>>> b = np.array([1], dtype={'names': ['f'],
...                          'formats': ['i4'],
...                          'offsets': [4]})
>>> a.dtype == b.dtype
False
>>> a.dtype
dtype([('f', '<i4')])
>>> b.dtype
dtype({'names':['f'], 'formats':['<i4'], 'offsets':[4], 'itemsize':8})
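[Editor's note: whichever contract `copy` ends up with, an explicit compact copy is already available via `np.lib.recfunctions.repack_fields` (added alongside the 1.16 multi-field view change, if I remember right). A sketch of "option 2 on demand", with an illustrative three-field dtype:]

```python
import numpy as np
from numpy.lib import recfunctions as rfn

# Padded view as produced by multi-field indexing (numpy >= 1.16).
a = np.zeros(3, dtype=[('a', 'i8'), ('b', 'f8'), ('z', 'i8')])
v = a[['a', 'z']]               # view: itemsize 24, with 8 padding bytes

# Explicit compact copy: same field values, packed dtype, itemsize 16.
packed = rfn.repack_fields(v)
print(v.dtype.itemsize, packed.dtype.itemsize)
```

This keeps `copy()` itself free to honour contract (1) while still giving users a clear way to drop the padding.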