From sebastian at sipsolutions.net Tue Aug 2 13:22:34 2016 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Tue, 02 Aug 2016 19:22:34 +0200 Subject: [Numpy-discussion] Euroscipy Message-ID: <1470158553.5372.2.camel@sipsolutions.net> Hi all, I am still pondering whether or not (and if so, which days) to go to EuroScipy. Who else is there and would like to discuss a bit or whatever else? - Sebastian -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From siegfried.gonzi at ed.ac.uk Tue Aug 2 13:41:21 2016 From: siegfried.gonzi at ed.ac.uk (Siegfried Gonzi) Date: Tue, 2 Aug 2016 18:41:21 +0100 Subject: [Numpy-discussion] scipy curve_fit variable list of optimisation parameters Message-ID: Hi all Does anyone know how to invoke curve_fit with a variable number of parameters, e.g. a1 to a10, without writing it out, e.g. def func2( x, a1,a2,a3,a4 ): # Bessel function tmp = scipy.special.j0( x[:,:] ) return np.dot( tmp[:,:] , np.array( [a1,a2,a3,a4] ) ) ### yi = M measurements (e.g. M=20) ### x = M (=20) rows of N (=4) columns popt = scipy.optimize.curve_fit( func2, x, yi ) I'd like to get *1 single vector* (in this case of size 4) of optimised A(i) values. The function I am trying to minimise (e.g. F(r) is a vector of 20 model measurements): F(r) = SUM_i_to_N [ A(i) * bessel_function_J0(i * r) ] Thanks, Siegfried Gonzi -- The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From evgeny.burovskiy at gmail.com Tue Aug 2 17:50:42 2016 From: evgeny.burovskiy at gmail.com (Evgeni Burovski) Date: Tue, 2 Aug 2016 22:50:42 +0100 Subject: [Numpy-discussion] scipy curve_fit variable list of optimisation parameters In-Reply-To: References: Message-ID: On Tue, Aug 2, 2016 at 6:41 PM, Siegfried Gonzi wrote: > Hi all > > Does anyone know how to invoke curve_fit with a variable number of parameters, e.g. a1 to a10, without writing it out, > > e.g. > > def func2( x, a1,a2,a3,a4 ): > > # Bessel function > tmp = scipy.special.j0( x[:,:] ) > > return np.dot( tmp[:,:] , np.array( [a1,a2,a3,a4] ) ) > > > ### yi = M measurements (e.g. M=20) > ### x = M (=20) rows of N (=4) columns > popt = scipy.optimize.curve_fit( func2, x, yi ) > > I'd like to get *1 single vector* (in this case of size 4) of optimised A(i) values. > > The function I am trying to minimise (e.g. F(r) is a vector of 20 model measurements): F(r) = SUM_i_to_N [ A(i) * bessel_function_J0(i * r) ] > > > Thanks, > Siegfried Gonzi > > > > > -- > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. You can use `leastsq` or `least_squares` directly: they both accept an array of parameters. BTW, since all of these functions are actually in scipy, you might want to redirect this discussion to the scipy-user mailing list. Cheers, Evgeni From ralf.gommers at gmail.com Wed Aug 3 03:50:00 2016 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Wed, 3 Aug 2016 09:50:00 +0200 Subject: [Numpy-discussion] Euroscipy In-Reply-To: <1470158553.5372.2.camel@sipsolutions.net> References: <1470158553.5372.2.camel@sipsolutions.net> Message-ID: On Tue, Aug 2, 2016 at 7:22 PM, Sebastian Berg wrote: > Hi all, > > I am still pondering whether or not (and if so, which days) to go to > EuroScipy. Who else is there and would like to discuss a bit or > whatever else? > Unfortunately I'm not able to go this year.
Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.gidden at gmail.com Wed Aug 3 04:11:24 2016 From: matthew.gidden at gmail.com (Matthew Gidden) Date: Wed, 3 Aug 2016 10:11:24 +0200 Subject: [Numpy-discussion] Euroscipy In-Reply-To: References: <1470158553.5372.2.camel@sipsolutions.net> Message-ID: Hi all, I've been lurking on the list, but I'll be at EuroScipy and would very much enjoy meeting any others and getting more involved with the community. Cheers, Matt On Wed, Aug 3, 2016 at 9:50 AM, Ralf Gommers wrote: > > > On Tue, Aug 2, 2016 at 7:22 PM, Sebastian Berg > wrote: > >> Hi all, >> >> I am still pondering whether or not (and if which days) to go to >> EuroScipy. Who else is there and would like to discuss a bit or >> whatever else? >> > > Unfortunately I'm not able to go this year. > > Ralf > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > https://mail.scipy.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From siegfried.gonzi at ed.ac.uk Wed Aug 3 08:53:09 2016 From: siegfried.gonzi at ed.ac.uk (Siegfried Gonzi) Date: Wed, 3 Aug 2016 13:53:09 +0100 Subject: [Numpy-discussion] scipy curve_fit variable list of optimisation parameters In-Reply-To: References: Message-ID: <3088B153-9E2A-4ABD-9A6B-3A46B8CBB441@ed.ac.uk> On 3 Aug 2016, at 13:00, numpy-discussion-request at scipy.org wrote: > Message: 3 > Date: Tue, 2 Aug 2016 22:50:42 +0100 > From: Evgeni Burovski > To: Discussion of Numerical Python > Subject: Re: [Numpy-discussion] scipy curve_fit variable list of > optimisation parameters > Message-ID: > > Content-Type: text/plain; charset=UTF-8 > > > You can use `leastsq` or `least_squares` directly: they both accept an > array of parameters. > > BTW, since all of these functions are actually in scipy, you might > want to redirect this discussion to the scipy-user mailing list. Hi all I found the solution in the following thread: http://stackoverflow.com/questions/28969611/multiple-arguments-in-python One has to call curve_fit with 'p0' (giving curve_fit a clue about the unknown number of variables) I changed func2 to (note the *): def func2( x, *a ): # Bessel function tmp = scipy.special.j0( x[:,:] ) return np.dot( tmp[:,:] , a[:] ) and call it: N = number of optimisation parameters popt = scipy.optimize.curve_fit( func2, x, yi , p0=[1.0]*N) Regards, Siegfried Gonzi Met Office, Exeter, UK -- The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From charlesr.harris at gmail.com Wed Aug 3 16:09:32 2016 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 3 Aug 2016 14:09:32 -0600 Subject: [Numpy-discussion] Numpy 1.11.2 Message-ID: Hi All, I would like to release Numpy 1.11.2rc1 this weekend. It will contain a few small fixes and enhancements for windows and the last Scipy release. If there are any pending PRs that you think should go in or be backported for this release, please speak up. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From matti.picus at gmail.com Fri Aug 5 03:06:02 2016 From: matti.picus at gmail.com (Matti Picus) Date: Fri, 5 Aug 2016 10:06:02 +0300 Subject: [Numpy-discussion] test_closing_fid (in test_io.py) on PyPy Message-ID: <7888be5c-52eb-4d6d-00b5-b31128f8cd0d@gmail.com> test_closing_fid essentially calls this to ensure close() is called when a NpzFile object goes out of context: for i in range(1, 1025): np.load(tmp)["data"] This raises a ResourceWarning on python 3, and fails on pypy since the garbage collector works differently. It seems to be a classic example of wrongly using np.load(), but I'm sure it is a common use case. I can submit a pull request to skip on pypy, or should this be solved in a more substantial way? Matti From florin.papa at intel.com Fri Aug 5 03:42:35 2016 From: florin.papa at intel.com (Papa, Florin) Date: Fri, 5 Aug 2016 07:42:35 +0000 Subject: [Numpy-discussion] NumPy in PyPy Message-ID: <3A375A669FBEFF45B6B60E689636EDCA09C0BDDA@IRSMSX101.ger.corp.intel.com> Hi, This is Florin Papa from the Dynamic Scripting Languages Optimizations team in Intel Corporation. Our team is working on optimizing the PyPy interpreter and part of this work is to find and fix incompatibilities between NumPy and PyPy. Does anyone have knowledge of real life workloads that use NumPy and cannot be run using PyPy? We are also interested in creating a repository with relevant benchmarks for real world usage of NumPy, like GUPB for CPython, but we have not found such workloads for NumPy. Thank you, Florin -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Fri Aug 5 13:35:44 2016 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 5 Aug 2016 17:35:44 +0000 (UTC) Subject: [Numpy-discussion] test_closing_fid (in test_io.py) on PyPy References: <7888be5c-52eb-4d6d-00b5-b31128f8cd0d@gmail.com> Message-ID: Fri, 05 Aug 2016 10:06:02 +0300, Matti Picus kirjoitti: [clip] > I can submit a pull request to skip on pypy, or should this be solved in > a more substantial way? Should also be safe to just skip it on Pypy, it's testing that the wrong way to use np.load also works on CPython. -- Pauli Virtanen From ben.v.root at gmail.com Fri Aug 5 17:20:48 2016 From: ben.v.root at gmail.com (Benjamin Root) Date: Fri, 5 Aug 2016 17:20:48 -0400 Subject: [Numpy-discussion] NumPy in PyPy In-Reply-To: <3A375A669FBEFF45B6B60E689636EDCA09C0BDDA@IRSMSX101.ger.corp.intel.com> References: <3A375A669FBEFF45B6B60E689636EDCA09C0BDDA@IRSMSX101.ger.corp.intel.com> Message-ID: Don't know if it is what you are looking for, but NumPy has a built-in suite of benchmarks: http://docs.scipy.org/doc/numpy/reference/generated/numpy.testing.Tester.bench.html Also, some projects have taken to utilizing the "airspeed velocity" utility to track benchmarking stats for their projects. I know astropy utilizes it. So, maybe their benchmarks might be a good starting point since they utilize numpy heavily? Cheers! Ben Root On Fri, Aug 5, 2016 at 3:42 AM, Papa, Florin wrote: > Hi, > > > > This is Florin Papa from the Dynamic Scripting Languages Optimizations > team in Intel Corporation. > > > > Our team is working on optimizing the PyPy interpreter and part of this > work is to find and fix incompatibilities between NumPy and PyPy. Does > anyone have knowledge of real life workloads that use NumPy and cannot be > run using PyPy? 
> > > > We are also interested in creating a repository with relevant benchmarks > for real world usage of NumPy, like GUPB for CPython, but we have not found > such workloads for NumPy. > > > > Thank you, > > Florin > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > https://mail.scipy.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sat Aug 6 04:05:24 2016 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 6 Aug 2016 20:05:24 +1200 Subject: [Numpy-discussion] NumPy in PyPy In-Reply-To: References: <3A375A669FBEFF45B6B60E689636EDCA09C0BDDA@IRSMSX101.ger.corp.intel.com> Message-ID: On Sat, Aug 6, 2016 at 9:20 AM, Benjamin Root wrote: > Don't know if it is what you are looking for, but NumPy has a built-in > suite of benchmarks: http://docs.scipy.org/doc/numpy/reference/generated/ > numpy.testing.Tester.bench.html > That's the very old (now unused) benchmark runner. Numpy has had an ASV test suite for a while, see https://github.com/numpy/numpy/tree/master/benchmarks for how to run it. > Also, some projects have taken to utilizing the "airspeed velocity" > utility to track benchmarking stats for their projects. I know astropy > utilizes it. So, maybe their benchmarks might be a good starting point > since they utilize numpy heavily? > > Cheers! > Ben Root > > > On Fri, Aug 5, 2016 at 3:42 AM, Papa, Florin > wrote: > >> Hi, >> >> >> >> This is Florin Papa from the Dynamic Scripting Languages Optimizations >> team in Intel Corporation. >> >> >> >> Our team is working on optimizing the PyPy interpreter and part of this >> work is to find and fix incompatibilities between NumPy and PyPy. Does >> anyone have knowledge of real life workloads that use NumPy and cannot be >> run using PyPy? >> >> >> >> We are also interested in creating a repository with relevant benchmarks >> for real world usage of NumPy, like GUPB for CPython, but we have not found >> such workloads for NumPy. >> > The approach of GUPB is interesting (the whole application part that is, the rest looks much more cumbersome than ASV benchmarks), but of course easier to create for Python than for Numpy. You'd need to find whole applications that spend most of their time in numpy but not in too small a set of numpy functions. Maybe benchmark suites of other projects aren't such a bad idea for that. Or spend a bit of time collecting relevant published ipython notebooks. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From akborder at gmail.com Mon Aug 8 09:11:17 2016 From: akborder at gmail.com (Anakim Border) Date: Mon, 8 Aug 2016 15:11:17 +0200 Subject: [Numpy-discussion] Views and Increments Message-ID: Dear List, I'm experimenting with views and array indexing. I have written two code blocks that I was expecting to produce the same result. First try: >>> a = np.arange(10) >>> b = a[np.array([1,6,5])] >>> b += 1 >>> a array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) Alternative version: >>> a = np.arange(10) >>> a[np.array([1,6,5])] += 1 >>> a array([0, 2, 2, 3, 4, 6, 7, 7, 8, 9]) I understand what is happening in the first case. In fact, the documentation is quite clear on the subject: For all cases of index arrays, what is returned is a copy of the original data, not a view as one gets for slices. What about the second case? There, I'm not keeping a reference to the intermediate copy (b, in the first example). 
Still, I don't see why the update (to the copy) is propagating to the original array. Is there any implementation detail that I'm missing? Best, ab -------------- next part -------------- An HTML attachment was scrubbed... URL: From sebastian at sipsolutions.net Mon Aug 8 09:21:15 2016 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Mon, 08 Aug 2016 15:21:15 +0200 Subject: [Numpy-discussion] Views and Increments In-Reply-To: References: Message-ID: <1470662475.20080.1.camel@sipsolutions.net> On Mo, 2016-08-08 at 15:11 +0200, Anakim Border wrote: > Dear List, > > I'm experimenting with views and array indexing. I have written two > code blocks that I was expecting to produce the same result. > > First try: > > >>> a = np.arange(10) > >>> b = a[np.array([1,6,5])] > >>> b += 1 > >>> a > array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) > > > Alternative version: > > >>> a = np.arange(10) > >>> a[np.array([1,6,5])] += 1 > >>> a > array([0, 2, 2, 3, 4, 6, 7, 7, 8, 9]) > > > I understand what is happening in the first case. In fact, the > documentation is quite clear on the subject: > > For all cases of index arrays, what is returned is a copy of the > original data, not a view as one gets for slices. > > What about the second case? There, I'm not keeping a reference to the > intermediate copy (b, in the first example). Still, I don't see why > the update (to the copy) is propagating to the original array. Is > there any implementation detail that I'm missing? > The second case translates to: tmp = a[np.array([1,6,5])] + 1 a[np.array([1,6,5])] = tmp this is done by python, without any interplay of numpy at all. Which is different from `arr += 1`, which is specifically defined and translates to `np.add(arr, 1, out=arr)`. - Sebastian > Best, > ab > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > https://mail.scipy.org/mailman/listinfo/numpy-discussion -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From bennyrowland at mac.com Mon Aug 8 11:35:25 2016 From: bennyrowland at mac.com (Ben Rowland) Date: Mon, 08 Aug 2016 11:35:25 -0400 Subject: [Numpy-discussion] PR 7918 apply_along_axis() Message-ID: Hi list, This is both my first post to this list and first pull request for numpy, so apologies if this is not the right list or for any other mistakes. Here is the link to my pull request: https://github.com/numpy/numpy/pull/7918 It is a simple modification to the numpy.apply_along_axis() function in numpy/lib/shape_base.py to allow it to deal better with ndarray subclasses. My use case is this: I have an ndarray subclass storing sets of temporal data which has additional metadata properties such as the time between data points. I frequently want to apply the same function to each time course in a set and apply_along_axis() is a compact and efficient way to do this. However there are two behaviours that I require which are not provided by the current numpy master: 1) apply_along_axis returns the same subclass as it was called upon, so that I don't lose all the metadata that it went in with. 2) func1d calls inside apply_along_axis should receive 1d slices of the same subclass as the supplied whole array, so that they can make use of the metadata in performing their function.
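As an aside for readers unfamiliar with the pattern being described, here is a minimal sketch of the kind of metadata-carrying subclass involved; it is only an illustration built on the standard __array_finalize__ recipe, and the class and attribute names are hypothetical rather than taken from the pull request:

```
import numpy as np

class TimeCourse(np.ndarray):
    """Hypothetical ndarray subclass carrying a sampling interval."""

    def __new__(cls, data, dt=1.0):
        obj = np.asarray(data).view(cls)
        obj.dt = dt
        return obj

    def __array_finalize__(self, obj):
        # Runs for views and slices too, so the metadata survives slicing.
        if obj is None:
            return
        self.dt = getattr(obj, 'dt', None)

ts = TimeCourse(np.zeros((4, 100)), dt=0.01)
row = ts[0]   # basic slicing preserves the subclass: row.dt == 0.01
```

The two behaviours requested above amount to asking that np.apply_along_axis(func1d, -1, ts) likewise pass func1d slices of this subclass and wrap its result back into the subclass.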
To achieve these two behaviours requires modifying only three lines of code: 1) At the start of the function, the input is converted to a bare ndarray by the function numpy.asarray(); I replace this with the subclass-friendly numpy.asanyarray(). 2) Depending on whether func1d returns a scalar or a vector there are two different code paths constructing and returning the output array. In each case I call __array_wrap__ on the input array, passing the output array to allow it to be updated with all the metadata of the original array. I have also implemented two new tests for this functionality. The first is very simple and merely calls apply_along_axis on a numpy.matrix class and checks that the result is also a numpy.matrix. The second is slightly more involved as it requires defining a MinimalSubclass class which adds a data member to the bare ndarray, then a function which returns that member. apply_along_axis() is then called passing in an instance of the subclass and the function to show that the slices passed to func1d preserve the data member. I would welcome any comment or constructive criticism. Ben Rowland -------------- next part -------------- An HTML attachment was scrubbed... URL: From m.h.vankerkwijk at gmail.com Mon Aug 8 12:32:36 2016 From: m.h.vankerkwijk at gmail.com (Marten van Kerkwijk) Date: Mon, 8 Aug 2016 12:32:36 -0400 Subject: [Numpy-discussion] Views and Increments In-Reply-To: References: Message-ID: Hi Anakim, The difference is really in the code path that gets taken: in the first case, you go through `a.__getitem__(np.array([1,6,5]))`, in the second through `a.__setitem__(...)`. The increments would not work if you added an extra indexing to it, as in: ``` a[np.array([1,6,5])][:] += 1 ``` (which would do `a.__getitem__(...).__setitem__(slice(None))`) Hope this clarifies it, Marten -------------- next part -------------- An HTML attachment was scrubbed... URL: From shoyer at gmail.com Mon Aug 8 16:08:46 2016 From: shoyer at gmail.com (Stephan Hoyer) Date: Mon, 8 Aug 2016 13:08:46 -0700 Subject: [Numpy-discussion] Views and Increments In-Reply-To: References: Message-ID: On Mon, Aug 8, 2016 at 6:11 AM, Anakim Border wrote: > Alternative version: > > >>> a = np.arange(10) > >>> a[np.array([1,6,5])] += 1 > >>> a > array([0, 2, 2, 3, 4, 6, 7, 7, 8, 9]) > I haven't checked, but a likely explanation is that Python itself interprets a[b] += c as a[b] = a[b] + c. Python has special methods for inplace assignment (__setitem__) and inplace arithmetic (__iadd__) but no special methods for inplace arithmetic and assignment at the same time, so this is really out of NumPy's control here. -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Mon Aug 8 18:11:37 2016 From: chris.barker at noaa.gov (Chris Barker) Date: Mon, 8 Aug 2016 15:11:37 -0700 Subject: [Numpy-discussion] NumPy in PyPy In-Reply-To: References: <3A375A669FBEFF45B6B60E689636EDCA09C0BDDA@IRSMSX101.ger.corp.intel.com> Message-ID: > > On Fri, Aug 5, 2016 at 3:42 AM, Papa, Florin >> wrote: >> >>> Does anyone have knowledge of real life workloads that use NumPy and >>> cannot be run using PyPy? >>> >>> >>> >>> We are also interested in creating a repository with relevant benchmarks >>> for real world usage of NumPy, >>> >> We have a numpy-heavy app. But it, like many others, I'm sure, also relies heavily on Cython-wrapped C++ code, as well as pure Cython extensions. As well as many other packages that are also wrappers around C libs, Cython-optimized, etc.
I've never tried to run it under PyPy; I've always assumed it's a non-starter. Is there any hope? If you are curious: https://github.com/NOAA-ORR-ERD/PyGnome -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From florin.papa at intel.com Tue Aug 9 06:49:14 2016 From: florin.papa at intel.com (Papa, Florin) Date: Tue, 9 Aug 2016 10:49:14 +0000 Subject: [Numpy-discussion] NumPy in PyPy In-Reply-To: References: <3A375A669FBEFF45B6B60E689636EDCA09C0BDDA@IRSMSX101.ger.corp.intel.com> Message-ID: <3A375A669FBEFF45B6B60E689636EDCA09C0C954@IRSMSX101.ger.corp.intel.com> > We have a numpy-heavy app. But it, like many others, I'm sure, also relies heavily on Cython-wrapped C++ code, as well as pure Cython extensions. > > As well as many other packages that are also wrappers around C libs, Cython-optimized, etc. > > I've never tried to run it under PyPy; I've always assumed it's a non-starter. > > Is there any hope? > > > If you are curious: > > https://github.com/NOAA-ORR-ERD/PyGnome > > -CHB Hi Christopher, Thank you for the information, I will investigate whether we can use PyPy for this workload. Regards, Florin From florin.papa at intel.com Tue Aug 9 07:03:11 2016 From: florin.papa at intel.com (Papa, Florin) Date: Tue, 9 Aug 2016 11:03:11 +0000 Subject: [Numpy-discussion] NumPy in PyPy In-Reply-To: References: <3A375A669FBEFF45B6B60E689636EDCA09C0BDDA@IRSMSX101.ger.corp.intel.com> Message-ID: <3A375A669FBEFF45B6B60E689636EDCA09C0C975@IRSMSX101.ger.corp.intel.com> On Sat, Aug 6, 2016 at 9:20 AM, Benjamin Root wrote: > Don't know if it is what you are looking for, but NumPy has a built-in suite of benchmarks: http://docs.scipy.org/doc/numpy/reference/generated/numpy.testing.Tester.bench.html > That's the very old (now unused) benchmark runner. Numpy has had an ASV test suite for a while, see https://github.com/numpy/numpy/tree/master/benchmarks for how to run it. > Also, some projects have taken to utilizing the "airspeed velocity" utility to track benchmarking stats for their projects. I know astropy utilizes it. So, maybe their benchmarks might be a good starting point since they utilize numpy heavily? > Cheers! > Ben Root >> On Fri, Aug 5, 2016 at 3:42 AM, Papa, Florin wrote: >> Hi, >> This is Florin Papa from the Dynamic Scripting Languages Optimizations team in Intel Corporation. >> Our team is working on optimizing the PyPy interpreter and part of this work is to find and fix incompatibilities between NumPy and PyPy. Does anyone have knowledge of real life workloads that use NumPy and cannot be run using PyPy? >> We are also interested in creating a repository with relevant benchmarks for real world usage of NumPy, like GUPB for CPython, but we have not found such workloads for NumPy. >> The approach of GUPB is interesting (the whole application part that is, the rest looks much more cumbersome than ASV benchmarks), but of course easier to create for Python than for Numpy. You'd need to find whole applications that spend most of their time in numpy but not in too small a set of numpy functions. Maybe benchmark suites of other projects aren't such a bad idea for that. Or spend a bit of time collecting relevant published ipython notebooks. >> Ralf Astropy definitely looks like a good candidate for a real life workload.
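For reference, numpy's ASV suite lives under benchmarks/benchmarks/ in the repository, and airspeed velocity times every method whose name starts with time_. A minimal sketch of what such a benchmark looks like; the class and the workload here are illustrative, not an existing file in the suite:

```
import numpy as np

class StencilUpdate:
    # asv calls setup() before each timing; its cost is excluded from the result
    def setup(self):
        self.u = np.zeros((500, 500))
        self.u[0, :] = 1.0

    # every method named time_* is collected and timed by asv
    def time_jacobi_step(self):
        u = self.u
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:])
```

Whole-application workloads of the kind being collected in this thread would complement such micro-benchmarks, which mostly exercise individual numpy operations.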
Thank you also for the useful information on ASV. Regards, Florin From morph at debian.org Sun Aug 14 13:11:08 2016 From: morph at debian.org (Sandro Tosi) Date: Sun, 14 Aug 2016 18:11:08 +0100 Subject: [Numpy-discussion] Numpy 1.11.2 In-Reply-To: References: Message-ID: hey there, what happened here? do you still plan to release a 1.11.2rc1 soon? On Wed, Aug 3, 2016 at 9:09 PM, Charles R Harris wrote: > Hi All, > > I would like to release Numpy 1.11.2rc1 this weekend. It will contain a few > small fixes and enhancements for windows and the last Scipy release. If > there are any pending PRs that you think should go in or be backported for > this release, please speak up. > > Chuck > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > https://mail.scipy.org/mailman/listinfo/numpy-discussion > -- Sandro "morph" Tosi My website: http://sandrotosi.me/ Me at Debian: http://wiki.debian.org/SandroTosi G+: https://plus.google.com/u/0/+SandroTosi From charlesr.harris at gmail.com Sun Aug 14 20:12:33 2016 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sun, 14 Aug 2016 18:12:33 -0600 Subject: [Numpy-discussion] Numpy 1.11.2 In-Reply-To: References: Message-ID: "Events, dear boy, events" ;) There were a couple of bugs that turned up at the last moment that needed fixing. At the moment there are two, possibly three, bugs that need finishing off. - A fix for compilation on PPC running RHEL 7.2 (done, but not verified) - Roll back Numpy reload error: more than one project was reloading. - Maybe fix crash for quicksort of object arrays with bogus comparison. Chuck On Sun, Aug 14, 2016 at 11:11 AM, Sandro Tosi wrote: > hey there, what happened here? do you still plan to release a 1.11.2rc1 > soon? > > On Wed, Aug 3, 2016 at 9:09 PM, Charles R Harris > wrote: > > Hi All, > > > > I would like to release Numpy 1.11.2rc1 this weekend. It will contain a > few > > small fixes and enhancements for windows and the last Scipy release. If > > there are any pending PRs that you think should go in or be backported > for > > this release, please speak up. > > > > Chuck > > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at scipy.org > > https://mail.scipy.org/mailman/listinfo/numpy-discussion > > > > > > -- > Sandro "morph" Tosi > My website: http://sandrotosi.me/ > Me at Debian: http://wiki.debian.org/SandroTosi > G+: https://plus.google.com/u/0/+SandroTosi > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > https://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.isaac at gmail.com Mon Aug 15 15:04:01 2016 From: alan.isaac at gmail.com (Alan Isaac) Date: Mon, 15 Aug 2016 15:04:01 -0400 Subject: [Numpy-discussion] weighted random choice in Python Message-ID: <1bf63320-0881-8097-a9a8-46c6a6ba6b7e@gmail.com> It seems that there may soon be movement on this enhancement request: http://bugs.python.org/issue18844 The API is currently under discussion. If this decision might interact with NumPy in any way, it would be good to have that documented now. (See Raymond Hettinger's comment today.) 
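For context, weighted sampling already exists on the NumPy side, which is presumably why any interaction is worth documenting. A short example of both APIs follows; note that np.random.choice requires the probabilities to sum to one, while the stdlib feature discussed in that issue (shipped as random.choices() in Python 3.6) accepts unnormalized weights:

```
import numpy as np

items = ['a', 'b', 'c']
probs = [0.1, 0.3, 0.6]        # must sum to 1 for np.random.choice
draws = np.random.choice(items, size=5, p=probs)

# stdlib counterpart on Python 3.6+ (weights need not be normalized):
# import random
# draws = random.choices(items, weights=[1, 3, 6], k=5)
```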
Cheers, Alan Isaac From ralf.gommers at gmail.com Tue Aug 16 04:06:32 2016 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Tue, 16 Aug 2016 20:06:32 +1200 Subject: [Numpy-discussion] weighted random choice in Python In-Reply-To: <1bf63320-0881-8097-a9a8-46c6a6ba6b7e@gmail.com> References: <1bf63320-0881-8097-a9a8-46c6a6ba6b7e@gmail.com> Message-ID: On Tue, Aug 16, 2016 at 7:04 AM, Alan Isaac wrote: > It seems that there may soon be movement on this enhancement request: > http://bugs.python.org/issue18844 > The API is currently under discussion. > If this decision might interact with NumPy in any way, > it would be good to have that documented now. > (See Raymond Hettinger's comment today.) > Doesn't look like it interacts with numpy.random. The whole enhancement request doesn't look very interesting imho. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at telenczuk.pl Tue Aug 16 08:28:37 2016 From: mail at telenczuk.pl (mail at telenczuk.pl) Date: Tue, 16 Aug 2016 14:28:37 +0200 Subject: [Numpy-discussion] Euroscipy In-Reply-To: References: <1470158553.5372.2.camel@sipsolutions.net> Message-ID: <57b306f53aebe_44ce14f7e54c9@Pct-EqAlain-Z30.notmuch> Hi all, I will present advanced NumPy tutorial at EuroScipy next week [1]. Is anyone going to attend the tutorials on Tuesday? If so, would you be interested in helping during my tutorial (mainly helping the participants with installation and the exercises). Your help would be much appreciated. Preliminary materials for the tutorial are on Github [2]. I have only 1.5h so I won't cover all lessons. My plan is to do lessons 1-5 (broadcasting, shape and strides, dtypes and structured array, intro to xarray) . Your feedback is most welcome! Thanks in advance! Cheers, Bartosz [1] https://www.euroscipy.org/2016/schedule/sessions/5/ [2] https://github.com/btel/2016-erlangen-euroscipy-advanced-numpy From alan.isaac at gmail.com Tue Aug 16 08:52:07 2016 From: alan.isaac at gmail.com (Alan Isaac) Date: Tue, 16 Aug 2016 08:52:07 -0400 Subject: [Numpy-discussion] weighted random choice in Python In-Reply-To: References: <1bf63320-0881-8097-a9a8-46c6a6ba6b7e@gmail.com> Message-ID: <157814d7-6825-da88-ebc5-3116d229a203@gmail.com> On 8/16/2016 4:06 AM, Ralf Gommers wrote: > The whole enhancement request doesn't look very interesting imho. Because the functionality is already in NumPy, or because it is easily user-written? Alan From matthew.gidden at gmail.com Tue Aug 16 09:36:01 2016 From: matthew.gidden at gmail.com (Matthew Gidden) Date: Tue, 16 Aug 2016 15:36:01 +0200 Subject: [Numpy-discussion] Euroscipy In-Reply-To: <57b306f53aebe_44ce14f7e54c9@Pct-EqAlain-Z30.notmuch> References: <1470158553.5372.2.camel@sipsolutions.net> <57b306f53aebe_44ce14f7e54c9@Pct-EqAlain-Z30.notmuch> Message-ID: Hi Bartosz, I'll happily help. I normally teach Software Carpentry lessons [1], so have experience in the workshop environment. I'll take a look over what you prepared. Cheers, Matt [1] http://software-carpentry.org/lessons/ On Tue, Aug 16, 2016 at 2:28 PM, wrote: > Hi all, > > I will present advanced NumPy tutorial at EuroScipy next week [1]. Is > anyone going to attend the tutorials on Tuesday? If so, would you be > interested in helping during my tutorial (mainly helping the participants > with installation and the exercises). Your help would be much appreciated. > > Preliminary materials for the tutorial are on Github [2]. I have only > 1.5h so I won't cover all lessons. 
My plan is to do lessons 1-5 > (broadcasting, shape and strides, dtypes and structured array, intro to > xarray) . Your feedback is most welcome! > > Thanks in advance! > > Cheers, > > Bartosz > > [1] https://www.euroscipy.org/2016/schedule/sessions/5/ > [2] https://github.com/btel/2016-erlangen-euroscipy-advanced-numpy > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > https://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stuarteberg at gmail.com Fri Aug 19 11:29:29 2016 From: stuarteberg at gmail.com (Stuart Berg) Date: Fri, 19 Aug 2016 11:29:29 -0400 Subject: [Numpy-discussion] How to trigger warnings for integer division in python 2 Message-ID: Hi, To help people migrate their code bases from Python 2 to Python 3, the python interpreter has a handy option '-3' that issues warnings at runtime. One of the warnings is for integer division: $ echo "print 3/2" > /tmp/foo.py $ python -3 /tmp/foo.py /tmp/foo.py:1: DeprecationWarning: classic int division print 3/2 1 But no warnings are shown for division of numpy arrays, e.g. for a statement like this: print np.array([3]) / np.array([2]) I see that np.seterr can be used to issue certain types of division warnings, but not this one. Is there a way to activate integer division warnings? It would really help me migrate my application to Python 3. Thanks, Stuart -------------- next part -------------- An HTML attachment was scrubbed... URL: From sebastian at sipsolutions.net Fri Aug 19 13:03:15 2016 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Fri, 19 Aug 2016 19:03:15 +0200 Subject: [Numpy-discussion] How to trigger warnings for integer division in python 2 In-Reply-To: References: Message-ID: <1471626195.9335.5.camel@sipsolutions.net> On Fr, 2016-08-19 at 11:29 -0400, Stuart Berg wrote: > Hi, > > To help people migrate their code bases from Python 2 to Python 3, > the python interpreter has a handy option '-3' that issues warnings > at runtime.? One of the warnings is for integer division: > > $ echo "print 3/2" > /tmp/foo.py > $ python -3 /tmp/foo.py > /tmp/foo.py:1: DeprecationWarning: classic int division > ? print 3/2 > 1 > > But no warnings are shown for division of numpy arrays, e.g. for a > statement like this: > print np.array([3]) / np.array([2]) > > I see that np.seterr can be used to issue certain types of division > warnings, but not this one.? Is there a way to activate integer > division warnings?? It would really help me migrate my application to > Python 3. > I don't think numpy implements any py3kwarnings. It seems that it would be possible though. On newer versions we got more strict about using floats instead of ints, so some of these places might follow up with a warning quickly. I guess the question is whether we should aim to add at least some of these warnings and someone is willing to put in the effort (I suppose it is likely only a few places). I am not sure how easy they are on the C side, but probably not difficult at all. - Sebastian > Thanks, > Stuart > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > https://mail.scipy.org/mailman/listinfo/numpy-discussion -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From stuarteberg at gmail.com Fri Aug 19 13:33:01 2016 From: stuarteberg at gmail.com (Stuart Berg) Date: Fri, 19 Aug 2016 13:33:01 -0400 Subject: [Numpy-discussion] How to trigger warnings for integer division in python 2 In-Reply-To: <1471626195.9335.5.camel@sipsolutions.net> References: <1471626195.9335.5.camel@sipsolutions.net> Message-ID: > I guess the question is whether we should aim to add at least some of > these warnings As you can probably guess, I think such a feature would be very useful :-) When my team started planning our migration to Python 3, we quickly realized that the integer division issue was our biggest risk. It has the potential for subtle bugs to remain hidden in our code base long after the migration, whereas most of the other Python 3 changes will be easier to identify. > and someone is willing to put in the effort (I suppose it is likely only a > few places). > I am not sure how easy they are on the C side, but probably not difficult > at all. OK, good to hear that it shouldn't be difficult. I've opened the following issue on the github repo, to continue this discussion and hopefully attract a volunteer. :-) https://github.com/numpy/numpy/issues/7949 Best regards, Stuart -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.isaac at gmail.com Sat Aug 20 16:16:14 2016 From: alan.isaac at gmail.com (Alan Isaac) Date: Sat, 20 Aug 2016 16:16:14 -0400 Subject: [Numpy-discussion] coordinate bounds Message-ID: Is there a numpy equivalent to Mma's CoordinateBounds command? http://reference.wolfram.com/language/ref/CoordinateBounds.html Thanks, Alan Isaac From robert.kern at gmail.com Sat Aug 20 17:14:38 2016 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 20 Aug 2016 22:14:38 +0100 Subject: [Numpy-discussion] coordinate bounds In-Reply-To: References: Message-ID: On Sat, Aug 20, 2016 at 9:16 PM, Alan Isaac wrote: > > Is there a numpy equivalent to Mma's CoordinateBounds command? > http://reference.wolfram.com/language/ref/CoordinateBounds.html The first signature can be computed like so: np.transpose([coords.min(axis=0), coords.max(axis=0)]) -- Robert Kern -------------- next part -------------- An HTML attachment was scrubbed... URL: From dipugee at gmail.com Sun Aug 21 13:46:23 2016 From: dipugee at gmail.com (=?utf-8?Q?Dipankar_=E2=80=9CDipu=E2=80=9D_Ganguly?=) Date: Sun, 21 Aug 2016 10:46:23 -0700 Subject: [Numpy-discussion] NumPy-Discussion Digest, Vol 119, Issue 11 In-Reply-To: References: Message-ID: Is there a way to use Wolframs? Mathematica 11 within IPython on Jupyter running on Anaconda?s Navigator on a Mac with OS 10.11.6? Failing that, what Python package would give me those capabilities? Thanks. 
Dipu Dipankar Ganguly Consultant: Strategy/Technology/Commercialization Bothell, WA Cell: 408-203-8814 email: dipugee at gmail.com http://www.linkedin.com/in/dipugee > On Aug 21, 2016, at 5:00 AM, numpy-discussion-request at scipy.org wrote: > > [clip] -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Sun Aug 21 18:53:43 2016 From: njs at pobox.com (Nathaniel Smith) Date: Sun, 21 Aug 2016 15:53:43 -0700 Subject: [Numpy-discussion] NumPy-Discussion Digest, Vol 119, Issue 11 In-Reply-To: References: Message-ID: On Aug 21, 2016 10:46 AM, "Dipankar “Dipu” Ganguly" wrote: > > Is there a way to use Wolfram's Mathematica 11 within IPython on Jupyter running on Anaconda's Navigator on a Mac with OS 10.11.6? Failing that, what Python package would give me those capabilities? This has nothing to do with numpy, so I'm afraid you're in the wrong place. You might try asking the Jupyter developers, or looking into Sage. -n -------------- next part -------------- An HTML attachment was scrubbed...
URL: From oleksandr.pavlyk at intel.com Wed Aug 24 15:41:06 2016 From: oleksandr.pavlyk at intel.com (Pavlyk, Oleksandr) Date: Wed, 24 Aug 2016 19:41:06 +0000 Subject: [Numpy-discussion] numpy.distutils issue Message-ID: <4C9EDA7282E297428F3986994EB0FBD3856C71@ORSMSX110.amr.corp.intel.com> Hi All, According to the documentation page: http://docs.scipy.org/doc/numpy/reference/distutils.html Function add_library allows the following keywords: extra_f77_compiler_args extra_f90_compiler_args however setting them seem to have no effect for my extension. Digging deeper, I discovered, the documentation is inconsistent with the implementation, as per https://github.com/numpy/numpy/blob/v1.11.0/numpy/distutils/fcompiler/__init__.py#L569 https://github.com/numpy/numpy/blob/v1.11.0/numpy/distutils/fcompiler/__init__.py#L583 And indeed, setting extra_f77_compile_arg has the effect I was looking for. Fixing it is easy, but I am less certain whether we should fix the docs, or the code. Given that add_extension lists extra_compile_args, extra_f77_compile_args, etc, I would think it Is the documentation that need to change. Please confirm, and I will open up a pull request for this. Thank you, Oleksandr -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Wed Aug 24 19:05:48 2016 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 24 Aug 2016 17:05:48 -0600 Subject: [Numpy-discussion] numpy.distutils issue In-Reply-To: <4C9EDA7282E297428F3986994EB0FBD3856C71@ORSMSX110.amr.corp.intel.com> References: <4C9EDA7282E297428F3986994EB0FBD3856C71@ORSMSX110.amr.corp.intel.com> Message-ID: On Wed, Aug 24, 2016 at 1:41 PM, Pavlyk, Oleksandr < oleksandr.pavlyk at intel.com> wrote: > Hi All, > > > > According to the documentation page: > > > > http://docs.scipy.org/doc/numpy/reference/distutils.html > > > > Function add_library allows the following keywords: > > extra_f77_compiler_args > > extra_f90_compiler_args > > > > however setting them seem to have no effect for my extension. Digging > deeper, I discovered, > > the documentation is inconsistent with the implementation, as per > > > > https://github.com/numpy/numpy/blob/v1.11.0/numpy/ > distutils/fcompiler/__init__.py#L569 > > > > https://github.com/numpy/numpy/blob/v1.11.0/numpy/ > distutils/fcompiler/__init__.py#L583 > > > > And indeed, setting extra_f77_compile_arg has the effect I was looking > for. > > Fixing it is easy, but I am less certain whether we should fix the docs, > or the code. > > > > Given that add_extension lists extra_compile_args, extra_f77_compile_args, > etc, I would think it > > Is the documentation that need to change. > > > > Please confirm, and I will open up a pull request for this. > That's rather unfortunate, "compiler" would be better than "compile", but it is best to document the actual behavior. If we later settle on changing the argument we can do that, but it is a more involved process. Chuck > -------------- next part -------------- An HTML attachment was scrubbed... 
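A sketch of the setup under discussion, with hypothetical package, library, and source names; the operative detail is that the keyword actually honoured is extra_f77_compile_args (matching add_extension), not the extra_f77_compiler_args spelling shown in the documentation:

```
# setup.py sketch (all names are illustrative)
def configuration(parent_package='', top_path=None):
    from numpy.distutils.misc_util import Configuration
    config = Configuration('mypkg', parent_package, top_path)
    # Fortran 77 flags for a helper library built via add_library
    config.add_library('flib',
                       sources=['flib.f'],
                       extra_f77_compile_args=['-ffixed-line-length-none'])
    config.add_extension('_flib',
                         sources=['_flibmodule.c'],
                         libraries=['flib'])
    return config

if __name__ == '__main__':
    from numpy.distutils.core import setup
    setup(configuration=configuration)
```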
URL: From charlesr.harris at gmail.com Wed Aug 24 19:18:35 2016 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 24 Aug 2016 17:18:35 -0600 Subject: [Numpy-discussion] numpy.distutils issue In-Reply-To: References: <4C9EDA7282E297428F3986994EB0FBD3856C71@ORSMSX110.amr.corp.intel.com> Message-ID: On Wed, Aug 24, 2016 at 5:05 PM, Charles R Harris wrote: > > > On Wed, Aug 24, 2016 at 1:41 PM, Pavlyk, Oleksandr < > oleksandr.pavlyk at intel.com> wrote: > >> Hi All, >> >> >> >> According to the documentation page: >> >> >> >> http://docs.scipy.org/doc/numpy/reference/distutils.html >> >> >> >> Function add_library allows the following keywords: >> >> extra_f77_compiler_args >> >> extra_f90_compiler_args >> >> >> >> however setting them seem to have no effect for my extension. Digging >> deeper, I discovered, >> >> the documentation is inconsistent with the implementation, as per >> >> >> >> https://github.com/numpy/numpy/blob/v1.11.0/numpy/distutils/ >> fcompiler/__init__.py#L569 >> >> >> >> https://github.com/numpy/numpy/blob/v1.11.0/numpy/distutils/ >> fcompiler/__init__.py#L583 >> >> >> >> And indeed, setting extra_f77_compile_arg has the effect I was looking >> for. >> >> Fixing it is easy, but I am less certain whether we should fix the docs, >> or the code. >> >> >> >> Given that add_extension lists extra_compile_args, >> extra_f77_compile_args, etc, I would think it >> >> Is the documentation that need to change. >> >> >> >> Please confirm, and I will open up a pull request for this. >> > > That's rather unfortunate, "compiler" would be better than "compile", but > it is best to document the actual behavior. If we later settle on changing > the argument we can do that, but it is a more involved process. > > Although I suppose we could allow either in the future. Chuck > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jfoxrabinovitz at gmail.com Thu Aug 25 10:36:50 2016 From: jfoxrabinovitz at gmail.com (Joseph Fox-Rabinovitz) Date: Thu, 25 Aug 2016 10:36:50 -0400 Subject: [Numpy-discussion] Indexing issue with ndarrays Message-ID: This issue recently came up on Stack Overflow: http://stackoverflow.com/questions/39145795/masking-a-series-with-a-boolean-array. The poster attempted to index an ndarray with a pandas boolean Series object (all False), but the result was as if he had indexed with an array of integer zeros. Can someone explain this behavior? I can see two obvious possibilities: 1. ndarray checks if the input to __getitem__ is of exactly the right type, not using instanceof. 2. pandas actually uses a wider datatype than boolean internally, so indexing with the series is in fact indexing with an integer array. In my attempt to reproduce the poster's results, I got the following warning: FutureWarning: in the future, boolean array-likes will be handled as a boolean array index This indicates that the issue is probably #1 and that a fix is already on the way. Please correct me if I am wrong. Also, where does the code for ndarray.__getitem__ live? Thanks, -Joe -------------- next part -------------- An HTML attachment was scrubbed... 
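A small demonstration of the two code paths involved, with a plain uint8 array standing in for the not-quite-boolean index (values chosen to make the difference visible):

```
import numpy as np

a = np.arange(10, 15)        # array([10, 11, 12, 13, 14])
mask = np.array([False, False, True, False, True])

a[mask]                      # boolean indexing  -> array([12, 14])
a[mask.astype(np.uint8)]     # integer indexing: the 0s and 1s are positions,
                             # giving array([10, 10, 11, 10, 11])
```

An index that is not recognised as a boolean array and falls through to the integer path therefore turns an all-False mask into an array of zeros, which selects a[0] repeatedly; that is exactly the effect described above.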
From sebastian at sipsolutions.net Thu Aug 25 16:37:45 2016 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Thu, 25 Aug 2016 22:37:45 +0200 Subject: [Numpy-discussion] Indexing issue with ndarrays In-Reply-To: References: Message-ID: <1472157465.1331.1.camel@sipsolutions.net> On Do, 2016-08-25 at 10:36 -0400, Joseph Fox-Rabinovitz wrote: > This issue recently came up on Stack Overflow: http://stackoverflow.c > om/questions/39145795/masking-a-series-with-a-boolean-array. The > poster attempted to index an ndarray with a pandas boolean Series > object (all False), but the result was as if he had indexed with an > array of integer zeros. > > Can someone explain this behavior? I can see two obvious > possibilities: > ndarray checks if the input to __getitem__ is of exactly the right > type, not using instanceof. > pandas actually uses a wider datatype than boolean internally, so > indexing with the series is in fact indexing with an integer array. You are overthinking it ;). The reason is quite simply that the logic used to be: * Boolean array? -> think about boolean indexing. * Everything "array-like" (not caught earlier) -> convert to `intp` array and do integer indexing. Now you might wonder why, but probably it is quite simply because boolean indexing was tagged on later. - Sebastian > In my attempt to reproduce the poster's results, I got the following > warning: > FutureWarning: in the future, boolean array-likes will be handled as > a boolean array index > This indicates that the issue is probably #1 and that a fix is > already on the way. Please correct me if I am wrong. Also, where does > the code for ndarray.__getitem__ live? > Thanks, > -Joe > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > https://mail.scipy.org/mailman/listinfo/numpy-discussion -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From jfoxrabinovitz at gmail.com Fri Aug 26 09:57:22 2016 From: jfoxrabinovitz at gmail.com (Joseph Fox-Rabinovitz) Date: Fri, 26 Aug 2016 09:57:22 -0400 Subject: [Numpy-discussion] Indexing issue with ndarrays In-Reply-To: <1472157465.1331.1.camel@sipsolutions.net> References: <1472157465.1331.1.camel@sipsolutions.net> Message-ID: On Thu, Aug 25, 2016 at 4:37 PM, Sebastian Berg wrote: > On Do, 2016-08-25 at 10:36 -0400, Joseph Fox-Rabinovitz wrote: > > This issue recently came up on Stack Overflow: http://stackoverflow.c > > om/questions/39145795/masking-a-series-with-a-boolean-array. The > > poster attempted to index an ndarray with a pandas boolean Series > > object (all False), but the result was as if he had indexed with an > > array of integer zeros. > > > > Can someone explain this behavior? I can see two obvious > > possibilities: > > ndarray checks if the input to __getitem__ is of exactly the right > > type, not using instanceof. > > pandas actually uses a wider datatype than boolean internally, so > > indexing with the series is in fact indexing with an integer array. > > You are overthinking it ;). The reason is quite simply that the logic > used to be: > > * Boolean array? -> think about boolean indexing. > * Everything "array-like" (not caught earlier) -> convert to `intp` > array and do integer indexing. > > Now you might wonder why, but probably it is quite simply because > boolean indexing was tagged on later.
> > - Sebastian > > > > In my attempt to reproduce the poster's results, I got the following > > warning: > > FutureWarning: in the future, boolean array-likes will be handled as > > a boolean array index > > This indicates that the issue is probably #1 and that a fix is > > already on the way. Please correct me if I am wrong. Also, where does > > the code for ndarray.__getitem__ live? > > Thanks, > > -Joe > > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at scipy.org > > https://mail.scipy.org/mailman/listinfo/numpy-discussion > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > https://mail.scipy.org/mailman/listinfo/numpy-discussion > > This makes perfect sense. I would like to help fix it if a fix is desired and has not been done already. Could you point me to where the "Boolean array?, etc." decision happens? I have had trouble navigating to `__getitem__` (which I assume is somewhere in np.core.multiarray C code. -Joe -------------- next part -------------- An HTML attachment was scrubbed... URL: From sebastian at sipsolutions.net Fri Aug 26 10:08:14 2016 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Fri, 26 Aug 2016 16:08:14 +0200 Subject: [Numpy-discussion] Indexing issue with ndarrays In-Reply-To: References: <1472157465.1331.1.camel@sipsolutions.net> Message-ID: <1472220494.14163.9.camel@sipsolutions.net> On Fr, 2016-08-26 at 09:57 -0400, Joseph Fox-Rabinovitz wrote: > > > On Thu, Aug 25, 2016 at 4:37 PM, Sebastian Berg ns.net> wrote: > > On Do, 2016-08-25 at 10:36 -0400, Joseph Fox-Rabinovitz wrote: > > > This issue recently came up on Stack Overflow: http://stackoverfl > > ow.c > > > om/questions/39145795/masking-a-series-with-a-boolean-array. The > > > poster attempted to index an ndarray with a pandas boolean Series > > > object (all False), but the result was as if he had indexed with > > an > > > array of integer zeros. > > > > > > Can someone explain this behavior? I can see two obvious > > > possibilities: > > > ndarray checks if the input to __getitem__ is of exactly the > > right > > > type, not using instanceof. > > > pandas actually uses a wider datatype than boolean internally, so > > > indexing with the series is in fact indexing with an integer > > array. > > > > You are overthinking it ;). The reason is quite simply that the > > logic > > used to be: > > > > ?* Boolean array? -> think about boolean indexing. > > ?* Everything "array-like" (not caught earlier) -> convert to > > `intp` > > array and do integer indexing. > > > > Now you might wonder why, but probably it is quite simply because > > boolean indexing was tagged on later. > > > > - Sebastian > > > > > > > In my attempt to reproduce the poster's results, I got the > > following > > > warning: > > > FutureWarning: in the future, boolean array-likes will be handled > > as > > > a boolean array index > > > This indicates that the issue is probably #1 and that a fix is > > > already on the way. Please correct me if I am wrong. Also, where > > does > > > the code for ndarray.__getitem__ live? > > > Thanks, > > > ??? 
-Joe > > > > > > _______________________________________________ > > > NumPy-Discussion mailing list > > > NumPy-Discussion at scipy.org > > > https://mail.scipy.org/mailman/listinfo/numpy-discussion > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at scipy.org > > https://mail.scipy.org/mailman/listinfo/numpy-discussion > > > This makes perfect sense. I would like to help fix it if a fix is > desired and has not been done already. Could you point me to where > the "Boolean array?, etc." decision happens? I have had trouble > navigating to `__getitem__` (which I assume is somewhere in > np.core.multiarray C code. > As the warning says, it already is fixed in a sense (we just have to move forward with the deprecation, which you can maybe actually do at this time). This is all in the mapping.c stuff, without checking, there is a function called something like "prepare index" which goes through all the different types of indexing objects. It should be pretty straight forward to find the warning. The actual old behaviour where this behaviour originated in was a completely different code base though (you would have to check out some pre 1.9 version of numpy if you are interested in archeology. - Sebastian > ??? -Joe > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > https://mail.scipy.org/mailman/listinfo/numpy-discussion -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From mail at telenczuk.pl Mon Aug 29 07:43:03 2016 From: mail at telenczuk.pl (mail at telenczuk.pl) Date: Mon, 29 Aug 2016 13:43:03 +0200 Subject: [Numpy-discussion] Which NumPy/Numpy/numpy spelling? Message-ID: <57c41fc7bf6ce_4222a36c5419@Pct-EqAlain-Z30.notmuch> Hi all, What is the official spelling of NumPy/Numpy/numpy? The documentation is not consistent and it mixes both NumPy and Numpy. For example, the reference manual uses both spellings in the introduction paragraph (http://docs.scipy.org/doc/numpy/reference/): "This reference manual details functions, modules, and objects included in Numpy, describing what they are and what they do. For learning how to use NumPy, see also NumPy User Guide." However, in all docs taken together "NumPy" is most frequently used (74%): % find . -name "*.rst" -exec grep Numpy -ow {} \; | wc -l 161 % find . -name "*.rst" -exec grep NumPy -ow {} \; | wc -l 471 I also reported it as an issue: https://github.com/numpy/numpy/issues/7986 Yours, Bartosz From hodge at stsci.edu Mon Aug 29 08:39:59 2016 From: hodge at stsci.edu (Phil Hodge) Date: Mon, 29 Aug 2016 08:39:59 -0400 Subject: [Numpy-discussion] Which NumPy/Numpy/numpy spelling? In-Reply-To: <57c41fc7bf6ce_4222a36c5419@Pct-EqAlain-Z30.notmuch> References: <57c41fc7bf6ce_4222a36c5419@Pct-EqAlain-Z30.notmuch> Message-ID: <0fbc81f4-c384-1e0d-6876-09a665caf628@stsci.edu> On 08/29/2016 07:43 AM, mail at telenczuk.pl wrote: > What is the official spelling of NumPy/Numpy/numpy? IMHO it should be written numpy, because ... 
>>> import NumPy Traceback (most recent call last): File "", line 1, in ImportError: No module named NumPy >>> import Numpy Traceback (most recent call last): File "", line 1, in ImportError: No module named Numpy >>> import numpy >>> Phil From mail at telenczuk.pl Mon Aug 29 09:22:13 2016 From: mail at telenczuk.pl (Bartosz Telenczuk) Date: Mon, 29 Aug 2016 15:22:13 +0200 Subject: [Numpy-discussion] Which NumPy/Numpy/numpy spelling? In-Reply-To: <0fbc81f4-c384-1e0d-6876-09a665caf628@stsci.edu> References: <57c41fc7bf6ce_4222a36c5419@Pct-EqAlain-Z30.notmuch> <0fbc81f4-c384-1e0d-6876-09a665caf628@stsci.edu> Message-ID: <57c43705eee41_4222a36c5420@Pct-EqAlain-Z30.notmuch> Hi, I would not mind any choice as long as it's consistent. I agree that using all-lowercase spelling may avoid some common errors. However, PEP8 requires all module/package names to be lower case [1]. If we force the name of the library and the corresponding package to be the same, all Python libraries would be named in lowercase. This would not be the best choice for libraries, which have multi-component names (like NumPy = Numerical Python). Note also that both the Wikipedia page [2] and the official NumPy logo [3] use "NumPy" spelling. Some other popular [4] libraries use similar dichotomies: - Django - import django - Cython - import cython - PyYAML - import yaml - scikit-learn - import sklearn On the other hand all standard Python libraries are lower-case named. Cheers, Bartosz [1] https://www.python.org/dev/peps/pep-0008/#package-and-module-names [2] https://en.wikipedia.org/wiki/NumPy [3] http://www.numpy.org/_static/numpy_logo.png [4] http://pypi-ranking.info/alltime > On 08/29/2016 07:43 AM, mail at telenczuk.pl wrote: > > What is the official spelling of NumPy/Numpy/numpy? > > IMHO it should be written numpy, because ... > > >>> import NumPy > Traceback (most recent call last): > File "", line 1, in > ImportError: No module named NumPy > >>> import Numpy > Traceback (most recent call last): > File "", line 1, in > ImportError: No module named Numpy > >>> import numpy > >>> > > Phil > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > https://mail.scipy.org/mailman/listinfo/numpy-discussion From josef.pktd at gmail.com Mon Aug 29 12:56:50 2016 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Mon, 29 Aug 2016 12:56:50 -0400 Subject: [Numpy-discussion] Which NumPy/Numpy/numpy spelling? In-Reply-To: <57c43705eee41_4222a36c5420@Pct-EqAlain-Z30.notmuch> References: <57c41fc7bf6ce_4222a36c5419@Pct-EqAlain-Z30.notmuch> <0fbc81f4-c384-1e0d-6876-09a665caf628@stsci.edu> <57c43705eee41_4222a36c5420@Pct-EqAlain-Z30.notmuch> Message-ID: https://mail.scipy.org/pipermail/scipy-user/2010-June/025756.html "NumPy and SciPy to refer to the projects. numpy and scipy to refer to the packages, specifically. When in doubt, use the former." I thought there was also another discussion about capital letters, but I don't find it. Josef On Mon, Aug 29, 2016 at 9:22 AM, Bartosz Telenczuk wrote: > Hi, > > I would not mind any choice as long as it's consistent. > > I agree that using all-lowercase spelling may avoid some common errors. However, > PEP8 requires all module/package names to be lower case [1]. If we force the > name of the library and the corresponding package to be the same, all Python > libraries would be named in lowercase. This would not be the best choice for > libraries, which have multi-component names (like NumPy = Numerical Python). 
> > Note also that both the Wikipedia page [2] and the official NumPy logo [3] use > "NumPy" spelling. > > Some other popular [4] libraries use similar dichotomies: > > - Django - import django > - Cython - import cython > - PyYAML - import yaml > - scikit-learn - import sklearn > > On the other hand all standard Python libraries are lower-case named. > > Cheers, > > Bartosz > > [1] https://www.python.org/dev/peps/pep-0008/#package-and-module-names > [2] https://en.wikipedia.org/wiki/NumPy > [3] http://www.numpy.org/_static/numpy_logo.png > [4] http://pypi-ranking.info/alltime > >> On 08/29/2016 07:43 AM, mail at telenczuk.pl wrote: >> > What is the official spelling of NumPy/Numpy/numpy? >> >> IMHO it should be written numpy, because ... >> >> >>> import NumPy >> Traceback (most recent call last): >> File "", line 1, in >> ImportError: No module named NumPy >> >>> import Numpy >> Traceback (most recent call last): >> File "", line 1, in >> ImportError: No module named Numpy >> >>> import numpy >> >>> >> >> Phil >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> https://mail.scipy.org/mailman/listinfo/numpy-discussion > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > https://mail.scipy.org/mailman/listinfo/numpy-discussion From ben.v.root at gmail.com Mon Aug 29 13:47:46 2016 From: ben.v.root at gmail.com (Benjamin Root) Date: Mon, 29 Aug 2016 13:47:46 -0400 Subject: [Numpy-discussion] Which NumPy/Numpy/numpy spelling? In-Reply-To: References: <57c41fc7bf6ce_4222a36c5419@Pct-EqAlain-Z30.notmuch> <0fbc81f4-c384-1e0d-6876-09a665caf628@stsci.edu> <57c43705eee41_4222a36c5420@Pct-EqAlain-Z30.notmuch> Message-ID: There was similar discussion almost two years ago with respect to capitalization of matplotlib in prose. Most of the time, it was lower-case in our documentation, but then the question was if it should be upper-case at the beginning of the sentence... or should it always be upper-cased like a proper noun. I don't think a clear consensus was reached, but I ended up treating it as a proper noun in my book. I am also pretty sure I used "NumPy" in most places, even for "NumPy arrays", which still looks weird to me when I go back over my book. Ben Root On Mon, Aug 29, 2016 at 12:56 PM, wrote: > https://mail.scipy.org/pipermail/scipy-user/2010-June/025756.html > > "NumPy and SciPy to refer to the projects. numpy and scipy to refer to > the packages, specifically. When in doubt, use the former." > > I thought there was also another discussion about capital letters, but > I don't find it. > > Josef > > > On Mon, Aug 29, 2016 at 9:22 AM, Bartosz Telenczuk > wrote: > > Hi, > > > > I would not mind any choice as long as it's consistent. > > > > I agree that using all-lowercase spelling may avoid some common errors. > However, > > PEP8 requires all module/package names to be lower case [1]. If we > force the > > name of the library and the corresponding package to be the same, all > Python > > libraries would be named in lowercase. This would not be the best choice > for > > libraries, which have multi-component names (like NumPy = Numerical > Python). > > > > Note also that both the Wikipedia page [2] and the official NumPy logo > [3] use > > "NumPy" spelling. 
> > > > Some other popular [4] libraries use similar dichotomies: > > > > - Django - import django > > - Cython - import cython > > - PyYAML - import yaml > > - scikit-learn - import sklearn > > > > On the other hand all standard Python libraries are lower-case named. > > > > Cheers, > > > > Bartosz > > > > [1] https://www.python.org/dev/peps/pep-0008/#package-and-module-names > > [2] https://en.wikipedia.org/wiki/NumPy > > [3] http://www.numpy.org/_static/numpy_logo.png > > [4] http://pypi-ranking.info/alltime > > > >> On 08/29/2016 07:43 AM, mail at telenczuk.pl wrote: > >> > What is the official spelling of NumPy/Numpy/numpy? > >> > >> IMHO it should be written numpy, because ... > >> > >> >>> import NumPy > >> Traceback (most recent call last): > >> File "", line 1, in > >> ImportError: No module named NumPy > >> >>> import Numpy > >> Traceback (most recent call last): > >> File "", line 1, in > >> ImportError: No module named Numpy > >> >>> import numpy > >> >>> > >> > >> Phil > >> _______________________________________________ > >> NumPy-Discussion mailing list > >> NumPy-Discussion at scipy.org > >> https://mail.scipy.org/mailman/listinfo/numpy-discussion > > > > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at scipy.org > > https://mail.scipy.org/mailman/listinfo/numpy-discussion > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > https://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefanv at berkeley.edu Tue Aug 30 15:17:54 2016 From: stefanv at berkeley.edu (Stefan van der Walt) Date: Tue, 30 Aug 2016 12:17:54 -0700 Subject: [Numpy-discussion] Which NumPy/Numpy/numpy spelling? In-Reply-To: <57c41fc7bf6ce_4222a36c5419@Pct-EqAlain-Z30.notmuch> References: <57c41fc7bf6ce_4222a36c5419@Pct-EqAlain-Z30.notmuch> Message-ID: <1472584674.1559543.710714393.59C5761E@webmail.messagingengine.com> On Mon, Aug 29, 2016, at 04:43, mail at telenczuk.pl wrote: > The documentation is not consistent and it mixes both NumPy and Numpy. > For example, the reference manual uses both spellings in the introduction > paragraph (http://docs.scipy.org/doc/numpy/reference/): > > "This reference manual details functions, modules, and objects > included in Numpy, describing what they are and what they do. For > learning how to use NumPy, see also NumPy User Guide." That's technically a bug: the official spelling is NumPy. But, no one really cares :) St?fan From sebastian at sipsolutions.net Wed Aug 31 04:30:06 2016 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Wed, 31 Aug 2016 10:30:06 +0200 Subject: [Numpy-discussion] Which NumPy/Numpy/numpy spelling? In-Reply-To: <1472584674.1559543.710714393.59C5761E@webmail.messagingengine.com> References: <57c41fc7bf6ce_4222a36c5419@Pct-EqAlain-Z30.notmuch> <1472584674.1559543.710714393.59C5761E@webmail.messagingengine.com> Message-ID: <1472632206.22852.0.camel@sipsolutions.net> On Di, 2016-08-30 at 12:17 -0700, Stefan van der Walt wrote: > On Mon, Aug 29, 2016, at 04:43, mail at telenczuk.pl wrote: > > > > The documentation is not consistent and it mixes both NumPy and > > Numpy. 
> > For example, the reference manual uses both spellings in the > > introduction > > paragraph (http://docs.scipy.org/doc/numpy/reference/): > > > >     "This reference manual details functions, modules, and objects > >     included in Numpy, describing what they are and what they do. > > For > >     learning how to use NumPy, see also NumPy User Guide." > That's technically a bug: the official spelling is NumPy. But, no > one > really cares :) >
I like the fact that this is all posted in: [Numpy-discussion] ;).
- Sebastian
> Stéfan > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > https://mail.scipy.org/mailman/listinfo/numpy-discussion
-------------- next part --------------
A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL:
From mibieri at gmail.com Wed Aug 31 07:28:21 2016 From: mibieri at gmail.com (Michael Bieri) Date: Wed, 31 Aug 2016 13:28:21 +0200 Subject: [Numpy-discussion] State-of-the-art to use a C/C++ library from Python Message-ID:
Hi all
There are several ways on how to use C/C++ code from Python with NumPy, as given in http://docs.scipy.org/doc/numpy/user/c-info.html . Furthermore, there's at least pybind11.
I'm not quite sure which approach is state-of-the-art as of 2016. How would you do it if you had to make a C/C++ library available in Python right now?
In my case, I have a C library with some scientific functions on matrices and vectors. You will typically call a few functions to configure the computation, then hand over some pointers to existing buffers containing vector data, then start the computation, and finally read back the data. The library also can use MPI to parallelize.
Best regards, Michael
-------------- next part --------------
An HTML attachment was scrubbed... URL:
From mviljamaa at kapsi.fi Wed Aug 31 07:28:18 2016 From: mviljamaa at kapsi.fi (Matti Viljamaa) Date: Wed, 31 Aug 2016 14:28:18 +0300 Subject: [Numpy-discussion] Include last element when subindexing numpy arrays? Message-ID: <75413495-B5EB-4973-A0C1-2307C986D1F3@kapsi.fi>
Is there a clean way to include the last element when subindexing numpy arrays? Since the default behaviour of numpy arrays is to omit the "stop index".
So for,
>>> A
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> A[0:5]
array([0, 1, 2, 3, 4])
From mviljamaa at kapsi.fi Wed Aug 31 08:14:30 2016 From: mviljamaa at kapsi.fi (Matti Viljamaa) Date: Wed, 31 Aug 2016 15:14:30 +0300 Subject: [Numpy-discussion] Why np.fft.rfftfreq only returns up to Nyquist? Message-ID:
What's the reasonability of np.fft.rfftfreq returning frequencies only up to Nyquist, rather than for the full sample rate?
From robert.kern at gmail.com Wed Aug 31 08:22:28 2016 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 31 Aug 2016 13:22:28 +0100 Subject: [Numpy-discussion] Why np.fft.rfftfreq only returns up to Nyquist? In-Reply-To: References: Message-ID:
On Wed, Aug 31, 2016 at 1:14 PM, Matti Viljamaa wrote: > > What's the reasonability of np.fft.rfftfreq returning frequencies only up to Nyquist, rather than for the full sample rate?
The answer to the question that you asked is that np.fft.rfft() only computes values for frequencies up to Nyquist, so np.fft.rfftfreq() must give you the frequencies to match. I'm not sure if there is another misunderstanding lurking that needs to be clarified.
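As a minimal illustration of that relationship (with an arbitrary, made-up sample rate and signal length): for n real samples, rfft returns n//2 + 1 values, rfftfreq returns the matching frequencies from 0 up to the Nyquist frequency fs/2, and fftfreq shows the full two-sided range for comparison.

>>> import numpy as np
>>> fs = 8.0   # assumed sample rate, for illustration only
>>> n = 8      # assumed number of samples
>>> x = np.arange(n, dtype=float)   # arbitrary real-valued signal
>>> np.fft.rfft(x).shape            # rfft keeps only 0 .. Nyquist
(5,)
>>> np.fft.rfftfreq(n, d=1/fs)      # matching frequencies, 0 .. fs/2
array([ 0.,  1.,  2.,  3.,  4.])
>>> np.fft.fftfreq(n, d=1/fs)       # full two-sided spectrum, for comparison
array([ 0.,  1.,  2.,  3., -4., -3., -2., -1.])

The negative-frequency half is redundant for real input (it is the complex conjugate of the positive half), which is why rfft does not compute it.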
-- Robert Kern
-------------- next part --------------
An HTML attachment was scrubbed... URL:
From robert.kern at gmail.com Wed Aug 31 08:22:54 2016 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 31 Aug 2016 13:22:54 +0100 Subject: [Numpy-discussion] Include last element when subindexing numpy arrays? In-Reply-To: <75413495-B5EB-4973-A0C1-2307C986D1F3@kapsi.fi> References: <75413495-B5EB-4973-A0C1-2307C986D1F3@kapsi.fi> Message-ID:
On Wed, Aug 31, 2016 at 12:28 PM, Matti Viljamaa wrote: > > Is there a clean way to include the last element when subindexing numpy arrays? > > Since the default behaviour of numpy arrays is to omit the "stop index". > > > > So for, > > > > >>> A > > array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) > > >>> A[0:5] > > array([0, 1, 2, 3, 4])
A[5:]
-- Robert Kern
-------------- next part --------------
An HTML attachment was scrubbed... URL:
From robert.kern at gmail.com Wed Aug 31 08:23:57 2016 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 31 Aug 2016 13:23:57 +0100 Subject: [Numpy-discussion] State-of-the-art to use a C/C++ library from Python In-Reply-To: References: Message-ID:
On Wed, Aug 31, 2016 at 12:28 PM, Michael Bieri wrote: > > Hi all > > There are several ways on how to use C/C++ code from Python with NumPy, as given in http://docs.scipy.org/doc/numpy/user/c-info.html . Furthermore, there's at least pybind11. > > I'm not quite sure which approach is state-of-the-art as of 2016. How would you do it if you had to make a C/C++ library available in Python right now? > > In my case, I have a C library with some scientific functions on matrices and vectors. You will typically call a few functions to configure the computation, then hand over some pointers to existing buffers containing vector data, then start the computation, and finally read back the data. The library also can use MPI to parallelize.
I usually reach for Cython:
http://cython.org/
http://docs.cython.org/en/latest/src/userguide/memoryviews.html
-- Robert Kern
-------------- next part --------------
An HTML attachment was scrubbed... URL:
From mviljamaa at kapsi.fi Wed Aug 31 08:34:54 2016 From: mviljamaa at kapsi.fi (Matti Viljamaa) Date: Wed, 31 Aug 2016 15:34:54 +0300 Subject: [Numpy-discussion] Include last element when subindexing numpy arrays? In-Reply-To: References: <75413495-B5EB-4973-A0C1-2307C986D1F3@kapsi.fi> Message-ID: <931CA2D0-5618-4C87-8F85-F5E3B0112795@kapsi.fi>
> On 31 Aug 2016, at 15:22, Robert Kern wrote: > > On Wed, Aug 31, 2016 at 12:28 PM, Matti Viljamaa > wrote: > > > > Is there a clean way to include the last element when subindexing numpy arrays? > > Since the default behaviour of numpy arrays is to omit the "stop index". > > > > So for, > > > > >>> A > > array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) > > >>> A[0:5] > > array([0, 1, 2, 3, 4]) > > A[5:] > > -- > Robert Kern
No, that returns the subarray starting from index 5 to the end.
What I want to be able to return
array([0, 1, 2, 3, 4, 5])
(i.e. last element 5 included)
but without the funky A[0:6] syntax, which looks like it should return
array([0, 1, 2, 3, 4, 5, 6])
but since numpy arrays omit the last index, returns
array([0, 1, 2, 3, 4, 5])
which syntactically would be more reasonable to be A[0:5].
> _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > https://mail.scipy.org/mailman/listinfo/numpy-discussion
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From robert.kern at gmail.com Wed Aug 31 08:42:07 2016 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 31 Aug 2016 13:42:07 +0100 Subject: [Numpy-discussion] Include last element when subindexing numpy arrays? In-Reply-To: <931CA2D0-5618-4C87-8F85-F5E3B0112795@kapsi.fi> References: <75413495-B5EB-4973-A0C1-2307C986D1F3@kapsi.fi> <931CA2D0-5618-4C87-8F85-F5E3B0112795@kapsi.fi> Message-ID:
On Wed, Aug 31, 2016 at 1:34 PM, Matti Viljamaa wrote: > > On 31 Aug 2016, at 15:22, Robert Kern wrote: > > On Wed, Aug 31, 2016 at 12:28 PM, Matti Viljamaa wrote: > > > > Is there a clean way to include the last element when subindexing numpy arrays? > > Since the default behaviour of numpy arrays is to omit the "stop index". > > > > So for, > > > > >>> A > > array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) > > >>> A[0:5] > > array([0, 1, 2, 3, 4]) > > A[5:] > > -- > Robert Kern > > No, that returns the subarray starting from index 5 to the end. > > What I want to be able to return > > array([0, 1, 2, 3, 4, 5]) > > (i.e. last element 5 included) > > but without the funky A[0:6] syntax, which looks like it should return > > array([0, 1, 2, 3, 4, 5, 6]) > > but since numpy arrays omit the last index, returns > > array([0, 1, 2, 3, 4, 5]) > > which syntactically would be more reasonable to be A[0:5].
Ah, I see what you are asking now. The answer is "no"; this is just the way that slicing works in Python in general. numpy merely follows suit. It is something that you will get used to with practice. My sense of "funkiness" and "reasonableness" is the opposite of yours, for instance.
-- Robert Kern
-------------- next part --------------
An HTML attachment was scrubbed... URL:
From ndbecker2 at gmail.com Wed Aug 31 09:04:09 2016 From: ndbecker2 at gmail.com (Neal Becker) Date: Wed, 31 Aug 2016 09:04:09 -0400 Subject: [Numpy-discussion] State-of-the-art to use a C/C++ library from Python References: Message-ID:
Michael Bieri wrote: > Hi all > > There are several ways on how to use C/C++ code from Python with NumPy, as > given in http://docs.scipy.org/doc/numpy/user/c-info.html . Furthermore, > there's at least pybind11. > > I'm not quite sure which approach is state-of-the-art as of 2016. How > would you do it if you had to make a C/C++ library available in Python > right now? > > In my case, I have a C library with some scientific functions on matrices > and vectors. You will typically call a few functions to configure the > computation, then hand over some pointers to existing buffers containing > vector data, then start the computation, and finally read back the data. > The library also can use MPI to parallelize. > > Best regards, > Michael
I prefer ndarray: https://github.com/ndarray/ndarray
From mailinglists at xgm.de Wed Aug 31 11:00:20 2016 From: mailinglists at xgm.de (Florian Lindner) Date: Wed, 31 Aug 2016 17:00:20 +0200 Subject: [Numpy-discussion] Reading in a mesh file Message-ID: <738e8474-524a-8cd0-9340-4b56d7a909da@xgm.de>
Hello,
I have a mesh (more exactly: just a bunch of nodes) description with values associated to the nodes in a file, e.g. for a 3x3 mesh:
0 0 10
0 0.3 11
0 0.6 12
0.3 0 20
0.3 0.3 21
0.3 0.6 22
0.6 0 30
0.6 0.3 31
0.6 0.6 32
What is the best way to read it in and get data structures like the ones I get from np.meshgrid?
Of course, I know about np.loadtxt, but I'm having trouble getting the resulting arrays (x, y, values) in the right form and to retain association to the values.
Thanks, Florian
From robert.kern at gmail.com Wed Aug 31 11:06:57 2016 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 31 Aug 2016 16:06:57 +0100 Subject: [Numpy-discussion] Reading in a mesh file In-Reply-To: <738e8474-524a-8cd0-9340-4b56d7a909da@xgm.de> References: <738e8474-524a-8cd0-9340-4b56d7a909da@xgm.de> Message-ID:
On Wed, Aug 31, 2016 at 4:00 PM, Florian Lindner wrote: > > Hello, > > I have a mesh (more exactly: just a bunch of nodes) description with values associated to the nodes in a file, e.g. for a > 3x3 mesh: > > 0 0 10 > 0 0.3 11 > 0 0.6 12 > 0.3 0 20 > 0.3 0.3 21 > 0.3 0.6 22 > 0.6 0 30 > 0.6 0.3 31 > 0.6 0.6 32 > > What is the best way to read it in and get data structures like the ones I get from np.meshgrid? > > Of course, I know about np.loadtxt, but I'm having trouble getting the resulting arrays (x, y, values) in the right form > and to retain association to the values.
For this particular case (known shape and ordering), this is what I would do. Maybe throw in a .T or three depending on exactly how you want them to be laid out.
[~/scratch]
|1> !cat mesh.txt
0 0 10
0 0.3 11
0 0.6 12
0.3 0 20
0.3 0.3 21
0.3 0.6 22
0.6 0 30
0.6 0.3 31
0.6 0.6 32
[~/scratch]
|2> nodes = np.loadtxt('mesh.txt')
[~/scratch]
|3> nodes
array([[ 0. , 0. , 10. ],
       [ 0. , 0.3, 11. ],
       [ 0. , 0.6, 12. ],
       [ 0.3, 0. , 20. ],
       [ 0.3, 0.3, 21. ],
       [ 0.3, 0.6, 22. ],
       [ 0.6, 0. , 30. ],
       [ 0.6, 0.3, 31. ],
       [ 0.6, 0.6, 32. ]])
[~/scratch]
|4> reshaped = nodes.reshape((3, 3, -1))
[~/scratch]
|5> reshaped
array([[[ 0. , 0. , 10. ],
        [ 0. , 0.3, 11. ],
        [ 0. , 0.6, 12. ]],
       [[ 0.3, 0. , 20. ],
        [ 0.3, 0.3, 21. ],
        [ 0.3, 0.6, 22. ]],
       [[ 0.6, 0. , 30. ],
        [ 0.6, 0.3, 31. ],
        [ 0.6, 0.6, 32. ]]])
[~/scratch]
|7> x = reshaped[..., 0]
[~/scratch]
|8> y = reshaped[..., 1]
[~/scratch]
|9> values = reshaped[..., 2]
[~/scratch]
|10> x
array([[ 0. , 0. , 0. ],
       [ 0.3, 0.3, 0.3],
       [ 0.6, 0.6, 0.6]])
[~/scratch]
|11> y
array([[ 0. , 0.3, 0.6],
       [ 0. , 0.3, 0.6],
       [ 0. , 0.3, 0.6]])
[~/scratch]
|12> values
array([[ 10., 11., 12.],
       [ 20., 21., 22.],
       [ 30., 31., 32.]])
-- Robert Kern
-------------- next part --------------
An HTML attachment was scrubbed... URL:
From othalan at othalan.net Wed Aug 31 13:08:58 2016 From: othalan at othalan.net (David Morris) Date: Wed, 31 Aug 2016 20:08:58 +0300 Subject: [Numpy-discussion] State-of-the-art to use a C/C++ library from Python In-Reply-To: References: Message-ID:
On Wed, Aug 31, 2016 at 2:28 PM, Michael Bieri wrote: > Hi all > > There are several ways on how to use C/C++ code from Python with NumPy, as > given in http://docs.scipy.org/doc/numpy/user/c-info.html . Furthermore, > there's at least pybind11. > > I'm not quite sure which approach is state-of-the-art as of 2016. How > would you do it if you had to make a C/C++ library available in Python > right now? > > In my case, I have a C library with some scientific functions on matrices > and vectors. You will typically call a few functions to configure the > computation, then hand over some pointers to existing buffers containing > vector data, then start the computation, and finally read back the data. > The library also can use MPI to parallelize. >
I have been delighted with Cython for this purpose. Great integration with NumPy (you can access numpy arrays directly as C arrays), very python like syntax and amazing performance.
Good luck,
David
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From sylvain.corlay at gmail.com Wed Aug 31 13:14:24 2016 From: sylvain.corlay at gmail.com (Sylvain Corlay) Date: Wed, 31 Aug 2016 19:14:24 +0200 Subject: [Numpy-discussion] State-of-the-art to use a C/C++ library from Python In-Reply-To: References: Message-ID: +1 on pybind11. Sylvain On Wed, Aug 31, 2016 at 1:28 PM, Michael Bieri wrote: > Hi all > > There are several ways on how to use C/C++ code from Python with NumPy, as > given in http://docs.scipy.org/doc/numpy/user/c-info.html . Furthermore, > there's at least pybind11. > > I'm not quite sure which approach is state-of-the-art as of 2016. How > would you do it if you had to make a C/C++ library available in Python > right now? > > In my case, I have a C library with some scientific functions on matrices > and vectors. You will typically call a few functions to configure the > computation, then hand over some pointers to existing buffers containing > vector data, then start the computation, and finally read back the data. > The library also can use MPI to parallelize. > > Best regards, > Michael > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > https://mail.scipy.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nevion at gmail.com Wed Aug 31 13:20:28 2016 From: nevion at gmail.com (Jason Newton) Date: Wed, 31 Aug 2016 13:20:28 -0400 Subject: [Numpy-discussion] State-of-the-art to use a C/C++ library from Python In-Reply-To: References: Message-ID: I just wanted to follow up on the C++ side of OP email - Cython has quite a few difficulties working with C++ code at the moment. It's really more of a C solution most of the time and you must split things up into a mostly C call interface (that is the C code Cython can call) and limit exposure/complications with templates and complex C++11+ constructs. This may change in the longer term but in the near, that is the state. I used to use Boost.Python but I'm getting my feet wet with Pybind (which is basically the same api but works more as you expect it to with it's signature/type plumbing (including std::shared_ptr islanding), with some other C++11 based improvements, and is header only + submodule friendly!). I also remembered ndarray thanks to Neal's post but I haven't figured out how to leverage it better than pybind, at the moment. I'd be interested to see ndarray gain support for pybind interoperability... -Jason On Wed, Aug 31, 2016 at 1:08 PM, David Morris wrote: > On Wed, Aug 31, 2016 at 2:28 PM, Michael Bieri wrote: > >> Hi all >> >> There are several ways on how to use C/C++ code from Python with NumPy, >> as given in http://docs.scipy.org/doc/numpy/user/c-info.html . >> Furthermore, there's at least pybind11. >> >> I'm not quite sure which approach is state-of-the-art as of 2016. How >> would you do it if you had to make a C/C++ library available in Python >> right now? >> >> In my case, I have a C library with some scientific functions on matrices >> and vectors. You will typically call a few functions to configure the >> computation, then hand over some pointers to existing buffers containing >> vector data, then start the computation, and finally read back the data. >> The library also can use MPI to parallelize. >> > > I have been delighted with Cython for this purpose. Great integration > with NumPy (you can access numpy arrays directly as C arrays), very python > like syntax and amazing performance. 
> > Good luck, > > David > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > https://mail.scipy.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From insertinterestingnamehere at gmail.com Wed Aug 31 14:17:56 2016 From: insertinterestingnamehere at gmail.com (Ian Henriksen) Date: Wed, 31 Aug 2016 18:17:56 +0000 Subject: [Numpy-discussion] State-of-the-art to use a C/C++ library from Python In-Reply-To: References: Message-ID: We use Cython very heavily in DyND's Python bindings. It has worked well for us even when working with some very modern C++. That said, a lot depends on exactly which C++ features you want to expose as a part of the interface. Interfaces that require things like non-type template parameters or variadic templates will often require a some extra C++ code to work them in to something that Cython can understand. In my experience, those particular limitations haven't been that hard to work with. Best, Ian Henriksen On Wed, Aug 31, 2016 at 12:20 PM Jason Newton wrote: > I just wanted to follow up on the C++ side of OP email - Cython has quite > a few difficulties working with C++ code at the moment. It's really more > of a C solution most of the time and you must split things up into a mostly > C call interface (that is the C code Cython can call) and limit > exposure/complications with templates and complex C++11+ constructs. This > may change in the longer term but in the near, that is the state. > > I used to use Boost.Python but I'm getting my feet wet with Pybind (which > is basically the same api but works more as you expect it to with it's > signature/type plumbing (including std::shared_ptr islanding), with some > other C++11 based improvements, and is header only + submodule friendly!). > I also remembered ndarray thanks to Neal's post but I haven't figured out > how to leverage it better than pybind, at the moment. I'd be interested to > see ndarray gain support for pybind interoperability... > > -Jason > > On Wed, Aug 31, 2016 at 1:08 PM, David Morris wrote: > >> On Wed, Aug 31, 2016 at 2:28 PM, Michael Bieri wrote: >> >>> Hi all >>> >>> There are several ways on how to use C/C++ code from Python with NumPy, >>> as given in http://docs.scipy.org/doc/numpy/user/c-info.html . >>> Furthermore, there's at least pybind11. >>> >>> I'm not quite sure which approach is state-of-the-art as of 2016. How >>> would you do it if you had to make a C/C++ library available in Python >>> right now? >>> >>> In my case, I have a C library with some scientific functions on >>> matrices and vectors. You will typically call a few functions to configure >>> the computation, then hand over some pointers to existing buffers >>> containing vector data, then start the computation, and finally read back >>> the data. The library also can use MPI to parallelize. >>> >> >> I have been delighted with Cython for this purpose. Great integration >> with NumPy (you can access numpy arrays directly as C arrays), very python >> like syntax and amazing performance. 
>> >> Good luck, >> >> David >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> https://mail.scipy.org/mailman/listinfo/numpy-discussion >> >> > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > https://mail.scipy.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nevion at gmail.com Wed Aug 31 16:57:06 2016 From: nevion at gmail.com (Jason Newton) Date: Wed, 31 Aug 2016 16:57:06 -0400 Subject: [Numpy-discussion] State-of-the-art to use a C/C++ library from Python In-Reply-To: References: Message-ID: Hey Ian - I hope I gave Cython a fair comment, but I have to add the disclaimer that your capability to understand/implement those solutions/workarounds in that project is greatly enhanced from your knowing the innards of Cython from being core developer on the Cython project. This doesn't detract from DyDN's accomplishments (if nothing it means Cython users should look there for how to use C++ with Cython and the workarounds used + shortcomings) but I would not expect not everyone would want to jump through those hoops to get things working without a firm understanding of Cython's edges, and all this potential for special/hack adaption code is still something to keep in mind when comparing to something more straight forward and easier to understand coming from a more pure C/C++ side, where things are a bit more dangerous and fairly more verbose but make play with the language and environment first-class (like Boost.Python/pybind). Since this thread is a survey over state and options it's my intent just to make sure readers have something bare in mind for current pros/cons of the approaches. -Jason On Wed, Aug 31, 2016 at 2:17 PM, Ian Henriksen < insertinterestingnamehere at gmail.com> wrote: > We use Cython very heavily in DyND's Python bindings. It has worked well > for us > even when working with some very modern C++. That said, a lot depends on > exactly which C++ features you want to expose as a part of the interface. > Interfaces that require things like non-type template parameters or > variadic > templates will often require a some extra C++ code to work them in to > something > that Cython can understand. In my experience, those particular limitations > haven't > been that hard to work with. > Best, > Ian Henriksen > > > On Wed, Aug 31, 2016 at 12:20 PM Jason Newton wrote: > >> I just wanted to follow up on the C++ side of OP email - Cython has quite >> a few difficulties working with C++ code at the moment. It's really more >> of a C solution most of the time and you must split things up into a mostly >> C call interface (that is the C code Cython can call) and limit >> exposure/complications with templates and complex C++11+ constructs. This >> may change in the longer term but in the near, that is the state. >> >> I used to use Boost.Python but I'm getting my feet wet with Pybind (which >> is basically the same api but works more as you expect it to with it's >> signature/type plumbing (including std::shared_ptr islanding), with some >> other C++11 based improvements, and is header only + submodule friendly!). >> I also remembered ndarray thanks to Neal's post but I haven't figured out >> how to leverage it better than pybind, at the moment. I'd be interested to >> see ndarray gain support for pybind interoperability... 
>> >> -Jason >> >> On Wed, Aug 31, 2016 at 1:08 PM, David Morris >> wrote: >> >>> On Wed, Aug 31, 2016 at 2:28 PM, Michael Bieri >>> wrote: >>> >>>> Hi all >>>> >>>> There are several ways on how to use C/C++ code from Python with NumPy, >>>> as given in http://docs.scipy.org/doc/numpy/user/c-info.html . >>>> Furthermore, there's at least pybind11. >>>> >>>> I'm not quite sure which approach is state-of-the-art as of 2016. How >>>> would you do it if you had to make a C/C++ library available in Python >>>> right now? >>>> >>>> In my case, I have a C library with some scientific functions on >>>> matrices and vectors. You will typically call a few functions to configure >>>> the computation, then hand over some pointers to existing buffers >>>> containing vector data, then start the computation, and finally read back >>>> the data. The library also can use MPI to parallelize. >>>> >>> >>> I have been delighted with Cython for this purpose. Great integration >>> with NumPy (you can access numpy arrays directly as C arrays), very python >>> like syntax and amazing performance. >>> >>> Good luck, >>> >>> David >>> >>> _______________________________________________ >>> NumPy-Discussion mailing list >>> NumPy-Discussion at scipy.org >>> https://mail.scipy.org/mailman/listinfo/numpy-discussion >>> >>> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at scipy.org >> https://mail.scipy.org/mailman/listinfo/numpy-discussion >> > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at scipy.org > https://mail.scipy.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefanv at berkeley.edu Wed Aug 31 18:04:33 2016 From: stefanv at berkeley.edu (Stefan van der Walt) Date: Wed, 31 Aug 2016 15:04:33 -0700 Subject: [Numpy-discussion] State-of-the-art to use a C/C++ library from Python In-Reply-To: References: Message-ID: <1472681073.2686064.712068009.73919B3D@webmail.messagingengine.com> On Wed, Aug 31, 2016, at 13:57, Jason Newton wrote: > Hey Ian - I hope I gave Cython a fair comment, but I have to add the > disclaimer that your capability to understand/implement those > solutions/workarounds in that project is greatly enhanced from your > knowing the innards of Cython from being core developer on the Cython > project. This doesn't detract from DyDN's accomplishments (if nothing > it means Cython users should look there for how to use C++ with Cython > and the workarounds used + shortcomings) but I would not expect not > everyone would want to jump through those hoops to get things working > without a firm understanding of Cython's edges, and all this potential > for special/hack adaption code is still something to keep in mind when > comparing to something more straight forward and easier to understand > coming from a more pure C/C++ side, where things are a bit more > dangerous and fairly more verbose but make play with the language and > environment first-class (like Boost.Python/pybind). Since this thread > is a survey over state and options it's my intent just to make sure > readers have something bare in mind for current pros/cons of the > approaches. There are many teaching resources available for Cython, after which exposure to sharp edges may be greatly reduced. 
See, e.g., https://github.com/stefanv/teaching/blob/master/2014_assp_split_cython/slides/split2014_cython.pdf and accompanying problems and exercises at https://github.com/stefanv/teaching/tree/master/2014_assp_split_cython St?fan -------------- next part -------------- An HTML attachment was scrubbed... URL: From insertinterestingnamehere at gmail.com Wed Aug 31 18:53:05 2016 From: insertinterestingnamehere at gmail.com (Ian Henriksen) Date: Wed, 31 Aug 2016 22:53:05 +0000 Subject: [Numpy-discussion] State-of-the-art to use a C/C++ library from Python In-Reply-To: References: Message-ID: On Wed, Aug 31, 2016 at 3:57 PM Jason Newton wrote: > Hey Ian - I hope I gave Cython a fair comment, but I have to add the > disclaimer that your capability to understand/implement those > solutions/workarounds in that project is greatly enhanced from your knowing > the innards of Cython from being core developer on the Cython project. This > doesn't detract from DyDN's accomplishments (if nothing it means Cython > users should look there for how to use C++ with Cython and the workarounds > used + shortcomings) but I would not expect not everyone would want to jump > through those hoops to get things working without a firm understanding of > Cython's edges, and all this potential for special/hack adaption code is > still something to keep in mind when comparing to something more straight > forward and easier to understand coming from a more pure C/C++ side, where > things are a bit more dangerous and fairly more verbose but make play with > the language and environment first-class (like Boost.Python/pybind). Since > this thread is a survey over state and options it's my intent just to make > sure readers have something bare in mind for current pros/cons of the > approaches. > > -Jason > No offense taken at all. I'm actually not a Cython developer, just a frequent contributor. That said, knowing the compiler internals certainly helps when finding workarounds and building intermediate interfaces. My main point was just that, in my experience, Cython has worked well for many things beyond plain C interfaces and that workarounds (hackery entirely aside) for any missing features are usually manageable. Given that my perspective is a bit different in that regard, it seemed worth chiming in on the discussion. I suppose the moral of the story is that there's still not a clear cut "best" way of building wrappers and that your mileage may vary depending on what features you need. Thanks, Ian Henriksen -------------- next part -------------- An HTML attachment was scrubbed... URL:
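To make the buffer-passing pattern from this thread concrete, here is a minimal sketch using ctypes plus numpy.ctypeslib, which is one of the options listed in the c-info docs linked above. The shared library name ("libmycomp.so") and the C function signature (compute(double *x, double *out, int n)) are hypothetical placeholders, not an existing API; with Cython or pybind11 the same idea applies, except that the declarations live in a .pyx or C++ file instead of in Python.

import ctypes
import numpy as np
from numpy.ctypeslib import ndpointer

# Hypothetical C library exposing: void compute(double *x, double *out, int n)
lib = ctypes.CDLL("./libmycomp.so")
lib.compute.restype = None
lib.compute.argtypes = [
    ndpointer(dtype=np.float64, ndim=1, flags="C_CONTIGUOUS"),  # input buffer
    ndpointer(dtype=np.float64, ndim=1, flags="C_CONTIGUOUS"),  # output buffer
    ctypes.c_int,
]

x = np.linspace(0.0, 1.0, 100)   # existing input buffer
out = np.empty_like(x)           # preallocated output buffer
lib.compute(x, out, x.size)      # the C code works directly on the numpy memory, no copies

This is only the low-effort end of the spectrum; Cython memoryviews and pybind11's py::array_t give the same zero-copy access with more type checking and less per-call overhead, which is part of why they come up repeatedly in this thread.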