From charlesr.harris at gmail.com Tue Oct 3 19:04:45 2017 From: charlesr.harris at gmail.com (Charles R Harris) Date: Tue, 3 Oct 2017 17:04:45 -0600 Subject: [Numpy-discussion] Sustainability Message-ID: Hi All, I and a number of others representing various open source projects under the NumFocus umbrella will be attending as meeting next Tuesday do discuss the problem of sustainability. In preparation for that meeting I would be interested in any ideas that the folks who follow this list may have on the subject. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From jfoxrabinovitz at gmail.com Wed Oct 4 10:42:47 2017 From: jfoxrabinovitz at gmail.com (Joseph Fox-Rabinovitz) Date: Wed, 4 Oct 2017 10:42:47 -0400 Subject: [Numpy-discussion] Sustainability In-Reply-To: References: Message-ID: Could you elaborate on the purpose of the meeting, or perhaps point to a link with a description if there is one? Sustainability is a very broad topic. What do you plan on discussing? -Joe On Tue, Oct 3, 2017 at 7:04 PM, Charles R Harris wrote: > Hi All, > > I and a number of others representing various open source projects under the > NumFocus umbrella will be attending as meeting next Tuesday do discuss the > problem of sustainability. In preparation for that meeting I would be > interested in any ideas that the folks who follow this list may have on the > subject. > > Chuck > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > From ben.v.root at gmail.com Wed Oct 4 11:03:06 2017 From: ben.v.root at gmail.com (Benjamin Root) Date: Wed, 4 Oct 2017 11:03:06 -0400 Subject: [Numpy-discussion] Sustainability In-Reply-To: References: Message-ID: One thing that concerns me is trying to keep up with demand. Our tools have become extremely popular, but it is very difficult for maintainers to keep up with this demand. So, we seem to have a tendency to "tribalize", in a sense, focusing on the demand for our respective pet projects. Various projects have created excellent tools for better managing the high demand such as circleci doc build views, back-porting bots, lint checking settings, vim/emacs settings, etc. These are important tools to help maintainers, and other projects need to know about them. Perhaps these dev tools should get centrally managed? Maybe we should have a "developer's conference"? It would be good to learn from others their techniques and workflows that make them so efficient. I swear, some of you have time-turners or cloning machines! Cheers! Ben Root On Wed, Oct 4, 2017 at 10:42 AM, Joseph Fox-Rabinovitz < jfoxrabinovitz at gmail.com> wrote: > Could you elaborate on the purpose of the meeting, or perhaps point to > a link with a description if there is one? Sustainability is a very > broad topic. What do you plan on discussing? > > -Joe > > On Tue, Oct 3, 2017 at 7:04 PM, Charles R Harris > wrote: > > Hi All, > > > > I and a number of others representing various open source projects under > the > > NumFocus umbrella will be attending as meeting next Tuesday do discuss > the > > problem of sustainability. In preparation for that meeting I would be > > interested in any ideas that the folks who follow this list may have on > the > > subject. 
> > > > Chuck > > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at python.org > > https://mail.python.org/mailman/listinfo/numpy-discussion > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From theodore.goetz at gmail.com Wed Oct 4 11:31:54 2017 From: theodore.goetz at gmail.com (John T. Goetz) Date: Wed, 04 Oct 2017 08:31:54 -0700 Subject: [Numpy-discussion] Sustainability In-Reply-To: References: Message-ID: <1507131114.14730.6.camel@gmail.com> Hello Chuck, Sustainability is indeed a broad topic and I think it's all too easy to think broadly about it. Please do discuss the big picture, but I am far more interested in the practical day-to-day action items that result from such a meeting. Here are my concerns with regards NumPy specifically: * How to handle the backlog of pull requests. * How to advertise outstanding issues that could be tackled by developers that are new to NumPy (like myself). This maybe just being more aggressive with the "Difficulty" tag. * Coding style has changed within the code-base over time and it would good to have a handful of functions one can point to as examples to follow. Notice these are all on the "ease of contributing" side of sustainability. I can't address the perhaps larger issues of ecosystem integration but I suspect NumPy doesn't suffer from being ignored. As to sponsored work or financial support, I'll look forward to the report that comes out of these meetings. Thanks for bringing this up here on the mailing list, Johann On Tue, 2017-10-03 at 17:04 -0600, Charles R Harris wrote: > Hi All, > > I and a number of others representing various open source projects > under the NumFocus umbrella will be attending as meeting next Tuesday > do discuss the problem of sustainability. In preparation for that > meeting I would be interested in any ideas that the folks who follow > this list may have on the subject. > > Chuck > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion From ralf.gommers at gmail.com Wed Oct 4 11:32:53 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Thu, 5 Oct 2017 04:32:53 +1300 Subject: [Numpy-discussion] Sustainability In-Reply-To: References: Message-ID: On Thu, Oct 5, 2017 at 4:03 AM, Benjamin Root wrote: > One thing that concerns me is trying to keep up with demand. Our tools > have become extremely popular, but it is very difficult for maintainers to > keep up with this demand. So, we seem to have a tendency to "tribalize", in > a sense, focusing on the demand for our respective pet projects. Various > projects have created excellent tools for better managing the high demand > such as circleci doc build views, back-porting bots, lint checking > settings, vim/emacs settings, etc. These are important tools to help > maintainers, and other projects need to know about them. > Yes, this is a great topic. We informally share these kinds of tools and techniques between projects, but there's no central place for any of them nor docs other than "read my code and yml config files". > Perhaps these dev tools should get centrally managed? Maybe we should have > a "developer's conference"? 
It would be good to learn from others their > techniques and workflows that make them so efficient. I swear, some of you > have time-turners or cloning machines! > > Cheers! > Ben Root > > > On Wed, Oct 4, 2017 at 10:42 AM, Joseph Fox-Rabinovitz < > jfoxrabinovitz at gmail.com> wrote: > >> Could you elaborate on the purpose of the meeting, or perhaps point to >> a link with a description if there is one? Sustainability is a very >> broad topic. What do you plan on discussing? >> > It's going to be a broad workshop, anything from dev tools to finding new maintainers, the role of community managers, and obtaining funding is in scope. Part of the preparation for organizing the workshop was interviews with a core developer from every project. I'd be interested in the replies to Chuck's question to get a sense of what the community thinks are NumPy's key challenges to remain (or become ...) a sustainable project in the years to come. Ralf > >> -Joe >> >> On Tue, Oct 3, 2017 at 7:04 PM, Charles R Harris >> wrote: >> > Hi All, >> > >> > I and a number of others representing various open source projects >> under the >> > NumFocus umbrella will be attending as meeting next Tuesday do discuss >> the >> > problem of sustainability. In preparation for that meeting I would be >> > interested in any ideas that the folks who follow this list may have on >> the >> > subject. >> > >> > Chuck >> > >> > _______________________________________________ >> > NumPy-Discussion mailing list >> > NumPy-Discussion at python.org >> > https://mail.python.org/mailman/listinfo/numpy-discussion >> > >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at python.org >> https://mail.python.org/mailman/listinfo/numpy-discussion >> > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ilhanpolat at gmail.com Wed Oct 4 18:08:41 2017 From: ilhanpolat at gmail.com (Ilhan Polat) Date: Thu, 5 Oct 2017 00:08:41 +0200 Subject: [Numpy-discussion] Sustainability In-Reply-To: <1507131114.14730.6.camel@gmail.com> References: <1507131114.14730.6.camel@gmail.com> Message-ID: I have two points that I know, from first hand, people (including myself) wonder: 1. Clear distinction between NumPy/SciPy development and respective roadmaps. In addition to Johann's summary; I am an occasional contributor to SciPy (mostly linalg) and again occasionally I wonder whether certain stuff can be done on NumPy side or how to sync linalg issues lingering due to say legacy reasons etc. So I start reading the source code. However in particular to NumPy, it is extremely difficult for me to find an entry point on how things actually work or what core team has in mind about the SciPy/NumPy separation. Story gets really complicated by invoking the backwards compatibility issues, say the recent dropping the Accelerate support discussion. There are so many details to take care of, I can only mention how I'm impressed with the work you guys pulled off over the years. If sustainability is meant for widening the spectrum of contributors, some care is needed for initialization of us even in the form of contribution guide or which files stay where. This would also return as ease of reviewing and less weight on the core team. 2. Feature completeness of basic modules. 
I have been in contact with a few companies, probing the opportunities about open-source usage in my domain of expertise. Many of them mentioned the feature incompleteness of the basics. One person used the analogy of potholes and bumpy ride in the linalg module "How come <...> is there but <...> is not?" . So it creates a maintenance obligation of a code base that not so many use. Another person used the term "a bit of this, a bit of that". Same applies for NumPy side too. I hope these won't be taken as complaints, I just want to give the perspective I've gained in the last few months. But similar to other "huge" projects in open source domain, it seems to me that if there is a plan to attract interest of commercial 3rd parties for funding or simply donations, it would really help if they can see some clear planning or a better structure. Best, ilhan On Wed, Oct 4, 2017 at 5:31 PM, John T. Goetz wrote: > Hello Chuck, > Sustainability is indeed a broad topic and I think it's all too easy to > think broadly about it. Please do discuss the big picture, but I am far > more interested in the practical day-to-day action items that result > from such a meeting. Here are my concerns with regards NumPy > specifically: > > * How to handle the backlog of pull requests. > > * How to advertise outstanding issues that could be tackled by > developers that are new to NumPy (like myself). This maybe just being > more aggressive with the "Difficulty" tag. > > * Coding style has changed within the code-base over time and it would > good to have a handful of functions one can point to as examples to > follow. > > Notice these are all on the "ease of contributing" side of > sustainability. I can't address the perhaps larger issues of ecosystem > integration but I suspect NumPy doesn't suffer from being ignored. As > to sponsored work or financial support, I'll look forward to the report > that comes out of these meetings. > > Thanks for bringing this up here on the mailing list, > Johann > > On Tue, 2017-10-03 at 17:04 -0600, Charles R Harris wrote: > > Hi All, > > > > I and a number of others representing various open source projects > > under the NumFocus umbrella will be attending as meeting next Tuesday > > do discuss the problem of sustainability. In preparation for that > > meeting I would be interested in any ideas that the folks who follow > > this list may have on the subject. > > > > Chuck > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at python.org > > https://mail.python.org/mailman/listinfo/numpy-discussion > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Thu Oct 5 15:24:21 2017 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 05 Oct 2017 21:24:21 +0200 Subject: [Numpy-discussion] Sustainability In-Reply-To: References: <1507131114.14730.6.camel@gmail.com> Message-ID: <1507231461.2367.19.camel@iki.fi> to, 2017-10-05 kello 00:08 +0200, Ilhan Polat kirjoitti: [clip] > 2. Feature completeness of basic modules. I think those judgments can be subjective, but it is true many things have been organically grown, and this becomes even more so as the number of contributors grows (fielding other people's PRs already takes a lot of time). 
I think most people send something that they need there and then, and it is not possible to tell them to go do something else instead. So even if there are many contributors, it's less clear how available they are for implementing "the plan". Regardless, I would recommend anyone who knows something obvious is missing that obviously should be included, to send a PR adding it to the roadmap: https://github.com/scipy/scipy/blob/master/doc/ROADMAP.rst.txt This document has not really been updated significantly since the single brainstorm in a Scipy conference many years ago, and we did not go through the contents then in great detail. Pauli From ralf.gommers at gmail.com Thu Oct 5 17:14:15 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Fri, 6 Oct 2017 10:14:15 +1300 Subject: [Numpy-discussion] [SciPy-Dev] Sustainability In-Reply-To: <1507231461.2367.19.camel@iki.fi> References: <1507131114.14730.6.camel@gmail.com> <1507231461.2367.19.camel@iki.fi> Message-ID: On Fri, Oct 6, 2017 at 8:24 AM, Pauli Virtanen wrote: > to, 2017-10-05 kello 00:08 +0200, Ilhan Polat kirjoitti: > [clip] > > 2. Feature completeness of basic modules. > > I think those judgments can be subjective, but it is true many things > have been organically grown, and this becomes even more so as the > number of contributors grows (fielding other people's PRs already takes > a lot of time). I think most people send something that they need there > and then, and it is not possible to tell them to go do something else > instead. So even if there are many contributors, it's less clear how > available they are for implementing "the plan". > > Regardless, I would recommend anyone who knows something obvious is > missing that obviously should be included, to send a PR adding it to > the roadmap: > > https://github.com/scipy/scipy/blob/master/doc/ROADMAP.rst.txt > > This document has not really been updated significantly since the > single brainstorm in a Scipy conference many years ago, and we did not > go through the contents then in great detail. > I did update it several times, removing things that were implemented or no longer relevant. However you're right that especially on the new features front it could use more inputs. Ralf > Pauli > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrea.gavana at gmail.com Sat Oct 7 05:52:34 2017 From: andrea.gavana at gmail.com (Andrea Gavana) Date: Sat, 7 Oct 2017 11:52:34 +0200 Subject: [Numpy-discussion] List comprehension and loops performances with NumPy arrays Message-ID: Hi All, I have this little snippet of code: import timeit import numpy class Item(object): def __init__(self, name): self.name = name self.values = numpy.random.rand(8, 1) def do_something(self): sv = self.values.sum(axis=0) array = numpy.empty((8, )) f = numpy.dot(0.5*numpy.ones((8, )), self.values)[0] array.fill(f) return array In my real application, the method do_something does a bit more than that, but I believe the snippet is enough to start playing with it. What I have is a list of (on average) 500-1,000 classes Item, and I am trying to retrieve the output of do_something for each of them in a single, big 2D numpy array. My current approach is to use list comprehension like this: output = numpy.asarray([item.do_something() for item in items]).T (Note: I need the transposed of that 2D array, always). 
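[Editor's note: as Chris Barker points out later in this thread, the transpose itself is essentially free: NumPy just swaps the strides and returns a view of the same buffer, so no data is moved. A quick check:

    import numpy as np
    a = np.empty((500, 8))
    b = a.T
    print(b.base is a)            # True -- b is a view, not a copy
    print(a.strides, b.strides)   # same numbers, in swapped order
]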
But then I though: why not preallocating the output array and make a simple loop: output = numpy.empty((500, 8)) for i, item in enumerate(items): output[i, :] = item.do_something() I was expecting this version to be marginally faster - as the previous one has to call asarray and then transpose the matrix, but I was in for a surprise: if __name__ == '__main__': repeat = 1000 items = [Item('item_%d'%(i+1)) for i in xrange(500)] statements = [''' output = numpy.asarray([item.do_something() for item in items]).T ''', ''' output = numpy.empty((500, 8)) for i, item in enumerate(items): output[i, :] = item.do_something() '''] methods = ['List Comprehension', 'Empty plus Loop '] setup = 'from __main__ import numpy, items' for stmnt, method in zip(statements, methods): elapsed = timeit.repeat(stmnt, setup=setup, number=1, repeat=repeat) minv, maxv, meanv = min(elapsed), max(elapsed), numpy.mean(elapsed) elapsed.sort() best_of_3 = numpy.mean(elapsed[0:3]) result = numpy.asarray((minv, maxv, meanv, best_of_3))*repeat print method, ': MIN: %0.2f ms , MAX: %0.2f ms , MEAN: %0.2f ms , BEST OF 3: %0.2f ms'%tuple(result.tolist()) I get this: List Comprehension : MIN: 7.32 ms , MAX: 9.13 ms , MEAN: 7.85 ms , BEST OF 3: 7.33 ms Empty plus Loop : MIN: 7.99 ms , MAX: 9.57 ms , MEAN: 8.31 ms , BEST OF 3: 8.01 ms Now, I know that list comprehensions are renowned for being insanely fast, but I though that doing asarray plus transpose would by far defeat their advantage, especially since the list comprehension is used to call a method, not to do some simple arithmetic inside it... I guess I am missing something obvious here... oh, and if anyone has suggestions about how to improve my crappy code (performance wise), please feel free to add your thoughts. Thank you. Andrea. -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrea.gavana at gmail.com Sat Oct 7 05:56:54 2017 From: andrea.gavana at gmail.com (Andrea Gavana) Date: Sat, 7 Oct 2017 11:56:54 +0200 Subject: [Numpy-discussion] List comprehension and loops performances with NumPy arrays In-Reply-To: References: Message-ID: Apologies, correct timeit code this time (I had gotten the wrong shape for the output matrix in the loop case): if __name__ == '__main__': repeat = 1000 items = [Item('item_%d'%(i+1)) for i in xrange(500)] output = numpy.asarray([item.do_something() for item in items]).T statements = [''' output = numpy.asarray([item.do_something() for item in items]).T ''', ''' output = numpy.empty((8, 500)) for i, item in enumerate(items): output[:, i] = item.do_something() '''] methods = ['List Comprehension', 'Empty plus Loop '] setup = 'from __main__ import numpy, items' for stmnt, method in zip(statements, methods): elapsed = timeit.repeat(stmnt, setup=setup, number=1, repeat=repeat) minv, maxv, meanv = min(elapsed), max(elapsed), numpy.mean(elapsed) elapsed.sort() best_of_3 = numpy.mean(elapsed[0:3]) result = numpy.asarray((minv, maxv, meanv, best_of_3))*repeat print method, ': MIN: %0.2f ms , MAX: %0.2f ms , MEAN: %0.2f ms , BEST OF 3: %0.2f ms'%tuple(result.tolist()) Results are the same as before... 
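[Editor's note: since Andrea asks for performance suggestions, here is a sketch of a fully vectorized alternative. It assumes do_something() really reduces to one scalar per item, as in the snippet above (numpy.dot(0.5*numpy.ones((8,)), values)[0] is just 0.5*values.sum()); the real application may not allow this.

    import numpy as np
    # stack all Item.values side by side once: shape (8, 500)
    vals = np.concatenate([item.values for item in items], axis=1)
    f = 0.5 * vals.sum(axis=0)                 # one scalar per item, shape (500,)
    output = np.broadcast_to(f, (8, f.size))   # read-only view; column i holds f[i]

This replaces 500 Python-level method calls with two array operations; if a writable result is needed, use np.tile(f, (8, 1)) instead of broadcast_to.]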
On 7 October 2017 at 11:52, Andrea Gavana wrote: > Hi All, > > I have this little snippet of code: > > import timeit > import numpy > > class Item(object): > > def __init__(self, name): > > self.name = name > self.values = numpy.random.rand(8, 1) > > def do_something(self): > > sv = self.values.sum(axis=0) > array = numpy.empty((8, )) > f = numpy.dot(0.5*numpy.ones((8, )), self.values)[0] > array.fill(f) > return array > > > In my real application, the method do_something does a bit more than that, > but I believe the snippet is enough to start playing with it. What I have > is a list of (on average) 500-1,000 classes Item, and I am trying to > retrieve the output of do_something for each of them in a single, big 2D > numpy array. > > My current approach is to use list comprehension like this: > > output = numpy.asarray([item.do_something() for item in items]).T > > (Note: I need the transposed of that 2D array, always). > > But then I though: why not preallocating the output array and make a > simple loop: > > output = numpy.empty((500, 8)) > for i, item in enumerate(items): > output[i, :] = item.do_something() > > > I was expecting this version to be marginally faster - as the previous one > has to call asarray and then transpose the matrix, but I was in for a > surprise: > > if __name__ == '__main__': > > repeat = 1000 > items = [Item('item_%d'%(i+1)) for i in xrange(500)] > > statements = [''' > output = numpy.asarray([item.do_something() for item in > items]).T > ''', > ''' > output = numpy.empty((500, 8)) > for i, item in enumerate(items): > output[i, :] = item.do_something() > '''] > > methods = ['List Comprehension', 'Empty plus Loop '] > > setup = 'from __main__ import numpy, items' > > for stmnt, method in zip(statements, methods): > > elapsed = timeit.repeat(stmnt, setup=setup, number=1, > repeat=repeat) > minv, maxv, meanv = min(elapsed), max(elapsed), numpy.mean(elapsed) > elapsed.sort() > best_of_3 = numpy.mean(elapsed[0:3]) > result = numpy.asarray((minv, maxv, meanv, best_of_3))*repeat > > print method, ': MIN: %0.2f ms , MAX: %0.2f ms , MEAN: %0.2f ms , > BEST OF 3: %0.2f ms'%tuple(result.tolist()) > > > I get this: > > List Comprehension : MIN: 7.32 ms , MAX: 9.13 ms , MEAN: 7.85 ms , BEST OF > 3: 7.33 ms > Empty plus Loop : MIN: 7.99 ms , MAX: 9.57 ms , MEAN: 8.31 ms , BEST OF > 3: 8.01 ms > > > Now, I know that list comprehensions are renowned for being insanely fast, > but I though that doing asarray plus transpose would by far defeat their > advantage, especially since the list comprehension is used to call a > method, not to do some simple arithmetic inside it... > > I guess I am missing something obvious here... oh, and if anyone has > suggestions about how to improve my crappy code (performance wise), please > feel free to add your thoughts. > > Thank you. > > Andrea. > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicholas.nadeau at gmail.com Sat Oct 7 10:58:05 2017 From: nicholas.nadeau at gmail.com (Nicholas Nadeau) Date: Sat, 7 Oct 2017 10:58:05 -0400 Subject: [Numpy-discussion] List comprehension and loops performances with NumPy arrays In-Reply-To: References: Message-ID: Hi Andrea! 
Checkout the following SO answers for similar contexts: - https://stackoverflow.com/questions/22108488/are-list-comprehensions-and-functional-functions-faster-than-for-loops - https://stackoverflow.com/questions/30245397/why-is-list-comprehension-so-faster To better visualize the issue, I made a iPython gist (simplifying the code a bit): https://gist.github.com/nnadeau/3deb6f18d028009a4495590cfbbfaa40 >From a quick view of the disassembled code (I'm not an expert, so correct me if I'm wrong), list comprehension has much less overhead compared to iterating/looping through the pre-allocated data and building/storing each slice. Cheers, -- Nicholas Nadeau, P.Eng., AVS On 7 October 2017 at 05:56, Andrea Gavana wrote: > Apologies, correct timeit code this time (I had gotten the wrong shape for > the output matrix in the loop case): > > if __name__ == '__main__': > > repeat = 1000 > items = [Item('item_%d'%(i+1)) for i in xrange(500)] > > output = numpy.asarray([item.do_something() for item in items]).T > statements = [''' > output = numpy.asarray([item.do_something() for item in > items]).T > ''', > ''' > output = numpy.empty((8, 500)) > for i, item in enumerate(items): > output[:, i] = item.do_something() > '''] > > methods = ['List Comprehension', 'Empty plus Loop '] > > setup = 'from __main__ import numpy, items' > > for stmnt, method in zip(statements, methods): > > elapsed = timeit.repeat(stmnt, setup=setup, number=1, > repeat=repeat) > minv, maxv, meanv = min(elapsed), max(elapsed), numpy.mean(elapsed) > elapsed.sort() > best_of_3 = numpy.mean(elapsed[0:3]) > result = numpy.asarray((minv, maxv, meanv, best_of_3))*repeat > > print method, ': MIN: %0.2f ms , MAX: %0.2f ms , MEAN: %0.2f ms , > BEST OF 3: %0.2f ms'%tuple(result.tolist()) > > > Results are the same as before... > > > > On 7 October 2017 at 11:52, Andrea Gavana wrote: > >> Hi All, >> >> I have this little snippet of code: >> >> import timeit >> import numpy >> >> class Item(object): >> >> def __init__(self, name): >> >> self.name = name >> self.values = numpy.random.rand(8, 1) >> >> def do_something(self): >> >> sv = self.values.sum(axis=0) >> array = numpy.empty((8, )) >> f = numpy.dot(0.5*numpy.ones((8, )), self.values)[0] >> array.fill(f) >> return array >> >> >> In my real application, the method do_something does a bit more than >> that, but I believe the snippet is enough to start playing with it. What I >> have is a list of (on average) 500-1,000 classes Item, and I am trying to >> retrieve the output of do_something for each of them in a single, big 2D >> numpy array. >> >> My current approach is to use list comprehension like this: >> >> output = numpy.asarray([item.do_something() for item in items]).T >> >> (Note: I need the transposed of that 2D array, always). 
>> >> But then I though: why not preallocating the output array and make a >> simple loop: >> >> output = numpy.empty((500, 8)) >> for i, item in enumerate(items): >> output[i, :] = item.do_something() >> >> >> I was expecting this version to be marginally faster - as the previous >> one has to call asarray and then transpose the matrix, but I was in for a >> surprise: >> >> if __name__ == '__main__': >> >> repeat = 1000 >> items = [Item('item_%d'%(i+1)) for i in xrange(500)] >> >> statements = [''' >> output = numpy.asarray([item.do_something() for item >> in items]).T >> ''', >> ''' >> output = numpy.empty((500, 8)) >> for i, item in enumerate(items): >> output[i, :] = item.do_something() >> '''] >> >> methods = ['List Comprehension', 'Empty plus Loop '] >> >> setup = 'from __main__ import numpy, items' >> >> for stmnt, method in zip(statements, methods): >> >> elapsed = timeit.repeat(stmnt, setup=setup, number=1, >> repeat=repeat) >> minv, maxv, meanv = min(elapsed), max(elapsed), >> numpy.mean(elapsed) >> elapsed.sort() >> best_of_3 = numpy.mean(elapsed[0:3]) >> result = numpy.asarray((minv, maxv, meanv, best_of_3))*repeat >> >> print method, ': MIN: %0.2f ms , MAX: %0.2f ms , MEAN: %0.2f ms , >> BEST OF 3: %0.2f ms'%tuple(result.tolist()) >> >> >> I get this: >> >> List Comprehension : MIN: 7.32 ms , MAX: 9.13 ms , MEAN: 7.85 ms , BEST >> OF 3: 7.33 ms >> Empty plus Loop : MIN: 7.99 ms , MAX: 9.57 ms , MEAN: 8.31 ms , BEST >> OF 3: 8.01 ms >> >> >> Now, I know that list comprehensions are renowned for being insanely >> fast, but I though that doing asarray plus transpose would by far defeat >> their advantage, especially since the list comprehension is used to call a >> method, not to do some simple arithmetic inside it... >> >> I guess I am missing something obvious here... oh, and if anyone has >> suggestions about how to improve my crappy code (performance wise), please >> feel free to add your thoughts. >> >> Thank you. >> >> Andrea. >> >> >> >> >> >> >> > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Sat Oct 7 11:29:12 2017 From: charlesr.harris at gmail.com (Charles R Harris) Date: Sat, 7 Oct 2017 09:29:12 -0600 Subject: [Numpy-discussion] Deprecate zipf distribution? Message-ID: Hi All, The current NumPy implementation of the truncated zipf distribution has several drawbacks. - Extremely poor performance when the parameter `a` is near 1. For instance, when `a = 1.000001` a simple change in the implementation speeds things up by a factor of 1,657. When the parameter is closer to 1, the algorithm effectively hangs. - Because the distribution is truncated, say to integers in the range of int64, the parameter could be allowed to take all values > 0, even though the untruncated series diverges. There is some indication that such values of `a` can be useful in modeling because of the heavy distribution in the tail. Because fixing these problems will change the output stream, I suggest implementing a truncated zeta distribution, which is an alternative name for the same distribution, and deprecating the the zipf distribution. 
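[Editor's note: for concreteness, a minimal sketch of what a truncated zeta sampler along these lines could look like, truncated at, say, the 2**44 suggested below. This is an illustration, not NumPy's implementation: it keeps the Devroye-style acceptance test that NumPy's zipf code uses, but inverts the continuous x**-a envelope on [1, x_max + 1) only, so the loop never generates the astronomically large candidates that make the current code crawl when `a` is near 1. It assumes a > 1; the extension to all a > 0 mentioned above needs a different normalization, and values of `a` extremely close to 1 would want log-space arithmetic.

    import numpy as np

    def truncated_zeta(a, x_max=2**44, rng=None):
        # Sketch only: one variate, a > 1, support {1, ..., x_max}.
        if rng is None:
            rng = np.random.RandomState()
        am1 = a - 1.0
        b = 2.0 ** am1
        B = (x_max + 1.0) ** -am1   # envelope CDF scale at the truncation point
        while True:
            u = rng.random_sample()
            v = rng.random_sample()
            # inverse CDF of the density ~ y**-a restricted to [1, x_max + 1)
            y = (1.0 - u * (1.0 - B)) ** (-1.0 / am1)
            x = min(np.floor(y), float(x_max))   # guard the upper edge
            t = (1.0 + 1.0 / x) ** am1
            # same acceptance test as the Devroye/NumPy zipf sampler
            if v * x * (t - 1.0) / (b - 1.0) <= t / b:
                return int(x)
]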
Furthermore, rather than truncate at the value of C long, which varies, truncate at max(int64), or some possibly smaller value, say 2**44, which allows all integers up to that value to be realized with approximately correct probabilities when using double precision for the intermediate computations. Thoughts? Chubk -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at gmail.com Sat Oct 7 14:15:17 2017 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Sat, 7 Oct 2017 14:15:17 -0400 Subject: [Numpy-discussion] Deprecate zipf distribution? In-Reply-To: References: Message-ID: On Sat, Oct 7, 2017 at 11:29 AM, Charles R Harris wrote: > Hi All, > > The current NumPy implementation of the truncated zipf distribution has > several drawbacks. > > > - Extremely poor performance when the parameter `a` is near 1. For > instance, when `a = 1.000001` a simple change in the implementation speeds > things up by a factor of 1,657. When the parameter is closer to 1, the > algorithm effectively hangs. > - Because the distribution is truncated, say to integers in the range > of int64, the parameter could be allowed to take all values > 0, even > though the untruncated series diverges. There is some indication that such > values of `a` can be useful in modeling because of the heavy distribution > in the tail. > > Because fixing these problems will change the output stream, I suggest > implementing a truncated zeta distribution, which is an alternative name > for the same distribution, and deprecating the the zipf distribution. > Furthermore, rather than truncate at the value of C long, which varies, > truncate at max(int64), or some possibly smaller value, say 2**44, which > allows all integers up to that value to be realized with approximately > correct probabilities when using double precision for the intermediate > computations. > > Thoughts? > > It is time that the 'random' API is extended to include some means of selecting a version of the random number generation algorithm. This has come up in discussions on github (e.g. https://github.com/numpy/numpy/pull/5158#issuecomment-58185802). Then instead of deprecating the existing 'zipf`' function, the user has the option of selecting which version of the code to use. Current users that are satisfied with the existing 'zipf' implementation are not affected. But I'm not against deprecating 'zipf' if the code is bad enough that the best long-term option is removing it. Something like this will be needed if there is interest in merging a pull request that I just submitted (https://github.com/numpy/numpy/pull/9834) that fixes (and improves the performance of) the generation of hypergeometric variates when the number of samples drawn is small. Warren > Chubk > I think Chuck just got a new hip-hop name. :) > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tcaswell at gmail.com Sat Oct 7 19:30:11 2017 From: tcaswell at gmail.com (Thomas Caswell) Date: Sat, 07 Oct 2017 23:30:11 +0000 Subject: [Numpy-discussion] [ANN] Matplotlib 2.1 released Message-ID: We are happy to announce the release of Matplotlib 2.1. This is the second minor release in the Matplotlib 2.x series and the first release with major new features since 1.5. 
This release contains approximately 2 years worth of work by 275 contributors across over 950 pull requests. Highlights from this release include: - support for string categorical values - export of animations to interactive javascript widgets - major overhaul of polar plots - reproducible output for ps/eps, pdf, and svg backends - performance improvements in drawing lines and images - GUIs show a busy cursor while rendering the plot along with many other enhancements and bug fixes. The gallery, examples, and tutorials have been overhauled and consolidated: Examples: http://matplotlib.org/gallery/index.html Tutorials: http://matplotlib.org/tutorials/index.html A big thank you to everyone who contributed to this release! Wheels are available on pypi for win/mac/manylinux and for conda via conda-forge. Full whats new: http://matplotlib.org/users/whats_new.html#new-in-matplotlib-2-1 Full API changes: http://matplotlib.org/api/api_changes.html#api-changes-in-2-1-0 github stats: http://matplotlib.org/users/github_stats.html Tom -------------- next part -------------- An HTML attachment was scrubbed... URL: From NissimD at elspec-ltd.com Sun Oct 8 03:12:56 2017 From: NissimD at elspec-ltd.com (Nissim Derdiger) Date: Sun, 8 Oct 2017 07:12:56 +0000 Subject: [Numpy-discussion] converting list of int16 values to bitmask and back to bitmask and back to list of int32\float values Message-ID: <9EFE3345170EF24DB67C61C1B05EEEDB407CFEEC@EX10.Elspec.local> Hi again, I realize that my question was not clear enough, so I've refined it into one runnable function (attached below) My question is basically - is there a way to perform the same operation, but faster using NumPy (or even just by using Python better..) Thanks again and sorry for the unclearness.. Nissim. import struct def Convert(): Endian = ' References: Message-ID: On Sat, 7 Oct 2017 at 16.59, Nicholas Nadeau wrote: > Hi Andrea! > > Checkout the following SO answers for similar contexts: > - > https://stackoverflow.com/questions/22108488/are-list-comprehensions-and-functional-functions-faster-than-for-loops > - > https://stackoverflow.com/questions/30245397/why-is-list-comprehension-so-faster > > To better visualize the issue, I made a iPython gist (simplifying the code > a bit): https://gist.github.com/nnadeau/3deb6f18d028009a4495590cfbbfaa40 > > From a quick view of the disassembled code (I'm not an expert, so correct > me if I'm wrong), list comprehension has much less overhead compared to > iterating/looping through the pre-allocated data and building/storing each > slice. > Thank you Nicholas, I suspected that the approach of using list comprehensions was close to unbeatable, thanks for the analysis! Andrea. 
> Cheers, > > > > -- > Nicholas Nadeau, P.Eng., AVS > > On 7 October 2017 at 05:56, Andrea Gavana wrote: > >> Apologies, correct timeit code this time (I had gotten the wrong shape >> for the output matrix in the loop case): >> >> if __name__ == '__main__': >> >> repeat = 1000 >> items = [Item('item_%d'%(i+1)) for i in xrange(500)] >> >> output = numpy.asarray([item.do_something() for item in items]).T >> statements = [''' >> output = numpy.asarray([item.do_something() for item in >> items]).T >> ''', >> ''' >> output = numpy.empty((8, 500)) >> for i, item in enumerate(items): >> output[:, i] = item.do_something() >> '''] >> >> methods = ['List Comprehension', 'Empty plus Loop '] >> >> setup = 'from __main__ import numpy, items' >> >> for stmnt, method in zip(statements, methods): >> >> elapsed = timeit.repeat(stmnt, setup=setup, number=1, >> repeat=repeat) >> minv, maxv, meanv = min(elapsed), max(elapsed), >> numpy.mean(elapsed) >> elapsed.sort() >> best_of_3 = numpy.mean(elapsed[0:3]) >> result = numpy.asarray((minv, maxv, meanv, best_of_3))*repeat >> >> print method, ': MIN: %0.2f ms , MAX: %0.2f ms , MEAN: %0.2f ms , >> BEST OF 3: %0.2f ms'%tuple(result.tolist()) >> >> >> Results are the same as before... >> >> >> >> On 7 October 2017 at 11:52, Andrea Gavana >> wrote: >> >>> Hi All, >>> >>> I have this little snippet of code: >>> >>> import timeit >>> import numpy >>> >>> class Item(object): >>> >>> def __init__(self, name): >>> >>> self.name = name >>> self.values = numpy.random.rand(8, 1) >>> >>> def do_something(self): >>> >>> sv = self.values.sum(axis=0) >>> array = numpy.empty((8, )) >>> f = numpy.dot(0.5*numpy.ones((8, )), self.values)[0] >>> array.fill(f) >>> return array >>> >>> >>> In my real application, the method do_something does a bit more than >>> that, but I believe the snippet is enough to start playing with it. What I >>> have is a list of (on average) 500-1,000 classes Item, and I am trying to >>> retrieve the output of do_something for each of them in a single, big 2D >>> numpy array. >>> >>> My current approach is to use list comprehension like this: >>> >>> output = numpy.asarray([item.do_something() for item in items]).T >>> >>> (Note: I need the transposed of that 2D array, always). 
>>> >>> But then I though: why not preallocating the output array and make a >>> simple loop: >>> >>> output = numpy.empty((500, 8)) >>> for i, item in enumerate(items): >>> output[i, :] = item.do_something() >>> >>> >>> I was expecting this version to be marginally faster - as the previous >>> one has to call asarray and then transpose the matrix, but I was in for a >>> surprise: >>> >>> if __name__ == '__main__': >>> >>> repeat = 1000 >>> items = [Item('item_%d'%(i+1)) for i in xrange(500)] >>> >>> statements = [''' >>> output = numpy.asarray([item.do_something() for item >>> in items]).T >>> ''', >>> ''' >>> output = numpy.empty((500, 8)) >>> for i, item in enumerate(items): >>> output[i, :] = item.do_something() >>> '''] >>> >>> methods = ['List Comprehension', 'Empty plus Loop '] >>> >>> setup = 'from __main__ import numpy, items' >>> >>> for stmnt, method in zip(statements, methods): >>> >>> elapsed = timeit.repeat(stmnt, setup=setup, number=1, >>> repeat=repeat) >>> minv, maxv, meanv = min(elapsed), max(elapsed), >>> numpy.mean(elapsed) >>> elapsed.sort() >>> best_of_3 = numpy.mean(elapsed[0:3]) >>> result = numpy.asarray((minv, maxv, meanv, best_of_3))*repeat >>> >>> print method, ': MIN: %0.2f ms , MAX: %0.2f ms , MEAN: %0.2f ms >>> , BEST OF 3: %0.2f ms'%tuple(result.tolist()) >>> >>> >>> I get this: >>> >>> List Comprehension : MIN: 7.32 ms , MAX: 9.13 ms , MEAN: 7.85 ms , BEST >>> OF 3: 7.33 ms >>> Empty plus Loop : MIN: 7.99 ms , MAX: 9.57 ms , MEAN: 8.31 ms , BEST >>> OF 3: 8.01 ms >>> >>> >>> Now, I know that list comprehensions are renowned for being insanely >>> fast, but I though that doing asarray plus transpose would by far defeat >>> their advantage, especially since the list comprehension is used to call a >>> method, not to do some simple arithmetic inside it... >>> >>> I guess I am missing something obvious here... oh, and if anyone has >>> suggestions about how to improve my crappy code (performance wise), please >>> feel free to add your thoughts. >>> >>> Thank you. >>> >>> Andrea. >>> >>> >>> >>> >>> >>> >>> >> >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at python.org >> https://mail.python.org/mailman/listinfo/numpy-discussion >> >> > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjol at tjol.eu Sun Oct 8 16:50:02 2017 From: tjol at tjol.eu (Thomas Jollans) Date: Sun, 8 Oct 2017 22:50:02 +0200 Subject: [Numpy-discussion] converting list of int16 values to bitmask and back to bitmask and back to list of int32\float values In-Reply-To: <9EFE3345170EF24DB67C61C1B05EEEDB407CFEEC@EX10.Elspec.local> References: <9EFE3345170EF24DB67C61C1B05EEEDB407CFEEC@EX10.Elspec.local> Message-ID: <78197fb1-3f0f-06a6-1385-a310fd556d38@tjol.eu> On 08/10/17 09:12, Nissim Derdiger wrote: > Hi again, > I realize that my question was not clear enough, so I've refined it into one runnable function (attached below) > My question is basically - is there a way to perform the same operation, but faster using NumPy (or even just by using Python better..) > Thanks again and sorry for the unclearness.. > Nissim. 
> > import struct > > def Convert(): > Endian = ' ParameterFormat = 'f' # float32 > RawDataList = [17252, 26334, 16141, 58057,17252, 15478, 16144, 43257] # list of int32 registers > NumOfParametersInRawData = int(len(RawDataList)/2) > Result = [] > for i in range(NumOfParametersInRawData): Iterating over indices is not very Pythonic, and there's usually a better way. In this case: for int1, int2 in zip(RawDataList[::2], RawDataList[1::2]) > # pack every 2 registers, take only the first 2 bytes from each one, change their endianess than unpack them back to the Parameter format > Result.append((struct.unpack(ParameterFormat,(struct.pack(Endian,RawDataList[(i*2)+1])[0:2] + struct.pack(' References: <9EFE3345170EF24DB67C61C1B05EEEDB407CFEEC@EX10.Elspec.local> <78197fb1-3f0f-06a6-1385-a310fd556d38@tjol.eu> Message-ID: On 08/10/17 22:50, Thomas Jollans wrote: > On 08/10/17 09:12, Nissim Derdiger wrote: >> Hi again, >> I realize that my question was not clear enough, so I've refined it into one runnable function (attached below) >> My question is basically - is there a way to perform the same operation, but faster using NumPy (or even just by using Python better..) >> Thanks again and sorry for the unclearness.. >> Nissim. >> >> import struct >> >> def Convert(): >> Endian = ' < is little endian. Make sure you're getting out the right values! >> ParameterFormat = 'f' # float32 >> RawDataList = [17252, 26334, 16141, 58057,17252, 15478, 16144, 43257] # list of int32 registers >> NumOfParametersInRawData = int(len(RawDataList)/2) >> Result = [] >> for i in range(NumOfParametersInRawData): > Iterating over indices is not very Pythonic, and there's usually a > better way. In this case: for int1, int2 in zip(RawDataList[::2], > RawDataList[1::2]) > >> # pack every 2 registers, take only the first 2 bytes from each one, change their endianess than unpack them back to the Parameter format >> Result.append((struct.unpack(ParameterFormat,(struct.pack(Endian,RawDataList[(i*2)+1])[0:2] + struct.pack(' > You can do this a little more elegantly (and probably faster) with > struct by putting it in a list comprehension: > > [struct.unpack('f', struct.pack(' i1, i2 in zip(raw_data[::2], raw_data[1::2])] > > Numpy can also do it. You can get your array of little-endian shorts with > > > le_shorts = np.array(raw_data, dtype=' > and then reinterpret the bytes backing it as float32 with np.frombuffer: > > np.frombuffer(le_shorts.data, dtype='f4') > > For small lists like the one in your example, the two approaches are > equally fast. For long ones, numpy is much faster: *sigh* let's try that again: In [82]: raw_data Out[82]: [17252, 26334, 16141, 58057, 17252, 15478, 16144, 43257] In [83]: raw_data2 = np.random.randint(0, 2**32, size=10**6, dtype='u4') In [84]: %timeit np.frombuffer(np.array(raw_data, dtype=' > In [82]: raw_data Out[82]: [17252, 26334, 16141, 58057, 17252, 15478, > 16144, 43257] In [83]: raw_data2 = np.random.randint(0, 2**32, > size=10**6, dtype='u4') # 1 million random integers In [84]: %timeit > np.frombuffer(np.array(raw_data, dtype=' ? 60.9 ns per loop (mean ? std. dev. of 7 runs, 100000 loops each) In > [85]: %timeit np.frombuffer(np.array(raw_data2, dtype=' dtype='f4') 854 ?s ? 37.3 ?s per loop (mean ? std. dev. of 7 runs, 1000 > loops each) In [86]: %timeit [struct.unpack('f', struct.pack(' 0xffff, i2 & 0xffff))[0] for i1, i2 in zip(raw_data[::2], > raw_data[1::2])] 4.87 ?s ? 17.3 ns per loop (mean ? std. dev. 
of 7 runs, > 100000 loops each) In [87]: %timeit [struct.unpack('f', > struct.pack(' zip(raw_data2[::2], raw_data2[1::2])] 3.6 s ? 9.78 ms per loop (mean ? > std. dev. of 7 runs, 1 loop each) > > -- Thomas > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > From chris.barker at noaa.gov Wed Oct 11 00:51:31 2017 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Tue, 10 Oct 2017 21:51:31 -0700 Subject: [Numpy-discussion] List comprehension and loops performances with NumPy arrays In-Reply-To: References: Message-ID: <-244552496598482232@unknownmsgid> Andrea, One note: transposing is almost free ? it just rearranges the strides ? I.e. changed how the array is interpreted. It doesn?t actually move the data around. -CHB Sent from my iPhone On Oct 7, 2017, at 2:58 AM, Andrea Gavana wrote: Apologies, correct timeit code this time (I had gotten the wrong shape for the output matrix in the loop case): if __name__ == '__main__': repeat = 1000 items = [Item('item_%d'%(i+1)) for i in xrange(500)] output = numpy.asarray([item.do_something() for item in items]).T statements = [''' output = numpy.asarray([item.do_something() for item in items]).T ''', ''' output = numpy.empty((8, 500)) for i, item in enumerate(items): output[:, i] = item.do_something() '''] methods = ['List Comprehension', 'Empty plus Loop '] setup = 'from __main__ import numpy, items' for stmnt, method in zip(statements, methods): elapsed = timeit.repeat(stmnt, setup=setup, number=1, repeat=repeat) minv, maxv, meanv = min(elapsed), max(elapsed), numpy.mean(elapsed) elapsed.sort() best_of_3 = numpy.mean(elapsed[0:3]) result = numpy.asarray((minv, maxv, meanv, best_of_3))*repeat print method, ': MIN: %0.2f ms , MAX: %0.2f ms , MEAN: %0.2f ms , BEST OF 3: %0.2f ms'%tuple(result.tolist()) Results are the same as before... On 7 October 2017 at 11:52, Andrea Gavana wrote: > Hi All, > > I have this little snippet of code: > > import timeit > import numpy > > class Item(object): > > def __init__(self, name): > > self.name = name > self.values = numpy.random.rand(8, 1) > > def do_something(self): > > sv = self.values.sum(axis=0) > array = numpy.empty((8, )) > f = numpy.dot(0.5*numpy.ones((8, )), self.values)[0] > array.fill(f) > return array > > > In my real application, the method do_something does a bit more than that, > but I believe the snippet is enough to start playing with it. What I have > is a list of (on average) 500-1,000 classes Item, and I am trying to > retrieve the output of do_something for each of them in a single, big 2D > numpy array. > > My current approach is to use list comprehension like this: > > output = numpy.asarray([item.do_something() for item in items]).T > > (Note: I need the transposed of that 2D array, always). 
> But then I though: why not preallocating the output array and make a
> simple loop:
>
> output = numpy.empty((500, 8))
> for i, item in enumerate(items):
>     output[i, :] = item.do_something()
>
> I was expecting this version to be marginally faster - as the previous one
> has to call asarray and then transpose the matrix, but I was in for a
> surprise:
>
> [clip]
>
> I get this:
>
> List Comprehension : MIN: 7.32 ms , MAX: 9.13 ms , MEAN: 7.85 ms , BEST OF 3: 7.33 ms
> Empty plus Loop    : MIN: 7.99 ms , MAX: 9.57 ms , MEAN: 8.31 ms , BEST OF 3: 8.01 ms
>
> Now, I know that list comprehensions are renowned for being insanely fast,
> but I though that doing asarray plus transpose would by far defeat their
> advantage, especially since the list comprehension is used to call a
> method, not to do some simple arithmetic inside it...
>
> I guess I am missing something obvious here... oh, and if anyone has
> suggestions about how to improve my crappy code (performance wise), please
> feel free to add your thoughts.
>
> Thank you.
>
> Andrea.

_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion at python.org
https://mail.python.org/mailman/listinfo/numpy-discussion
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From seth.ghandi.2017 at gmail.com Sun Oct 15 22:16:47 2017
From: seth.ghandi.2017 at gmail.com (Seth Ghandi)
Date: Mon, 16 Oct 2017 04:16:47 +0200
Subject: [Numpy-discussion] Vectorization of variant of piecewise or interpolation function
Message-ID: <523C328A-0468-46E8-AB59-2117CE07296D@gmail.com>

Hi everybody,

I am new to numpy and am trying to define a variant of a piecewise, or zero-order-hold, interpolation function, say ZeroOrderInterpolation(t,a), where t is a 1D array of size, say p, consisting of real numbers, and a is a 2D array of size, say nxm, with its first column consisting of increasing real numbers. This function should return an array, say y, of size px(m-1) such that y[i,:] is equal to
a[n-1,1:] if a[n-1,0] <= t[i], and
a[k,1:] if k < n-1 and a[k,0] <= t[i] < a[k+1,0].
Note that t[0] is assumed to be at least equal to a[0,0].

I have the following script made of "for loops" and I am trying to vectorize it so as to make it faster for large arrays.

def ZeroOrderInterpolation(t,a):
    import numpy as np
    p = t.shape[0]
    n, m = a.shape
    if n == 1:
        return a[0,1:]
    y = np.zeros((p,m-1))
    for i in range(p):
        if a[n-1,0] <= t[i]:
            y[i] = a[n-1,1:]
        else:
            for j in range(n-1):
                if (a[j,0] <= t[i]) and (t[i] <= a[j+1,0]):
                    y[i] = a[j,1:]
    return y

import numpy as np
t = np.array([0.5,1,1.5,2.5,3,10])
table = np.array([[0,3],[1,0],[2,5],[3,-1]])
ZeroOrderInterpolation(t,table)

[Out]: array([[ 3.],
       [ 0.],
       [ 0.],
       [ 5.],
       [-1.],
       [-1.]])

Any help with a vectorization "à la numpy" of this function will be appreciated.

Best regards,

From jni.soma at gmail.com Sun Oct 15 22:29:46 2017
From: jni.soma at gmail.com (Juan Nunez-Iglesias)
Date: Mon, 16 Oct 2017 13:29:46 +1100
Subject: [Numpy-discussion] Vectorization of variant of piecewise or interpolation function
In-Reply-To: <523C328A-0468-46E8-AB59-2117CE07296D@gmail.com>
References: <523C328A-0468-46E8-AB59-2117CE07296D@gmail.com>
Message-ID:

Hi Seth,

The function you're looking for is `np.digitize`:

In [1]: t = np.array([0.5,1,1.5,2.5,3,10])
   ...: table = np.array([[0,3],[1,0],[2,5],[3,-1]])
   ...:
In [2]: lookup, values = table[:, 0], table[:, 1:]
In [3]: values = np.concatenate((values[0:1], values), axis=0)
In [4]: indices = np.digitize(t, lookup)
In [5]: values[indices]
Out[5]:
array([[ 3],
       [ 0],
       [ 0],
       [ 5],
       [-1],
       [-1]])

Note the call to concatenate. Depending on how exactly you want your bins to align, you might need to concatenate at the end or at the start of the `values` array.

Hope this helps!

Juan.

On 16 Oct 2017, 1:17 PM +1100, Seth Ghandi , wrote:
> Hi everybody,
>
> I am new to numpy and am trying to define a variant of a piecewise, or zero-order-hold, interpolation function, say ZeroOrderInterpolation(t,a), [clip]
>
> Best regards,
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at python.org
> https://mail.python.org/mailman/listinfo/numpy-discussion
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From seth.ghandi.2017 at gmail.com Mon Oct 16 06:45:49 2017
From: seth.ghandi.2017 at gmail.com (Seth Ghandi)
Date: Mon, 16 Oct 2017 12:45:49 +0200
Subject: [Numpy-discussion] Vectorization of variant of piecewise or interpolation function
In-Reply-To:
References: <523C328A-0468-46E8-AB59-2117CE07296D@gmail.com>
Message-ID: <2CF30C83-0F26-4D01-9BB3-238661FD42DA@gmail.com>

Thanks so much Juan! I did not know about this np.digitize command. With it, the vectorization of my function reads as

def ZeroOrderInterpolation(t,table):
    import numpy as np
    lookup, values = table[:, 0], table[:, 1:]
    values = np.concatenate((values[0:1], values), axis=0)
    indices = np.digitize(t, lookup)
    return values[indices]

Hugs!
Seth

> On Oct 16, 2017, at 4:29 AM, Juan Nunez-Iglesias wrote:
>
> Hi Seth,
>
> The function you're looking for is `np.digitize`:
> [clip]
>
> Hope this helps!
>
> Juan.
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion at python.org
https://mail.python.org/mailman/listinfo/numpy-discussion
-------------- next part --------------
An HTML attachment was scrubbed...
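[Editor's note: a quick sanity check that Seth's np.digitize version reproduces the loop version on the sample data from this thread, plus an equivalent np.searchsorted spelling, since for increasing bins np.digitize(t, bins) returns the same indices as np.searchsorted(bins, t, side='right'):

    import numpy as np

    t = np.array([0.5, 1, 1.5, 2.5, 3, 10])
    table = np.array([[0, 3], [1, 0], [2, 5], [3, -1]])

    lookup, values = table[:, 0], table[:, 1:]
    values = np.concatenate((values[0:1], values), axis=0)
    print(values[np.digitize(t, lookup)].ravel())
    # -> [ 3  0  0  5 -1 -1], matching the loop version above

    assert np.array_equal(np.digitize(t, lookup),
                          np.searchsorted(lookup, t, side='right'))
]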
URL: From NissimD at elspec-ltd.com Mon Oct 16 08:23:19 2017 From: NissimD at elspec-ltd.com (Nissim Derdiger) Date: Mon, 16 Oct 2017 12:23:19 +0000 Subject: [Numpy-discussion] converting list of int16 values to bitmask and back to Message-ID: <9EFE3345170EF24DB67C61C1B05EEEDB407D14CD@EX10.Elspec.local> Thomas, Thanks for your answers! Just for the closer of this issue, here are the 2 np solutions that I used with Thomas help: (both are MUCH faster than my original solution, func2 slightly more) def func1(): Endian = ' To: numpy-discussion at python.org Subject: Re: [Numpy-discussion] converting list of int16 values to bitmask and back to bitmask and back to list of int32\float values Message-ID: <78197fb1-3f0f-06a6-1385-a310fd556d38 at tjol.eu> Content-Type: text/plain; charset=utf-8 On 08/10/17 09:12, Nissim Derdiger wrote: > Hi again, > I realize that my question was not clear enough, so I've refined it > into one runnable function (attached below) My question is basically - > is there a way to perform the same operation, but faster using NumPy (or even just by using Python better..) Thanks again and sorry for the unclearness.. > Nissim. > > import struct > > def Convert(): > Endian = ' ParameterFormat = 'f' # float32 > RawDataList = [17252, 26334, 16141, 58057,17252, 15478, 16144, 43257] # list of int32 registers > NumOfParametersInRawData = int(len(RawDataList)/2) > Result = [] > for i in range(NumOfParametersInRawData): Iterating over indices is not very Pythonic, and there's usually a better way. In this case: for int1, int2 in zip(RawDataList[::2], RawDataList[1::2]) > # pack every 2 registers, take only the first 2 bytes from each one, change their endianess than unpack them back to the Parameter format > > Result.append((struct.unpack(ParameterFormat,(struct.pack(Endian,RawDa > taList[(i*2)+1])[0:2] + struct.pack(' To: numpy-discussion at python.org Subject: Re: [Numpy-discussion] converting list of int16 values to bitmask and back to bitmask and back to list of int32\float values Message-ID: Content-Type: text/plain; charset=utf-8 On 08/10/17 22:50, Thomas Jollans wrote: > On 08/10/17 09:12, Nissim Derdiger wrote: >> Hi again, >> I realize that my question was not clear enough, so I've refined it >> into one runnable function (attached below) My question is basically >> - is there a way to perform the same operation, but faster using NumPy (or even just by using Python better..) Thanks again and sorry for the unclearness.. >> Nissim. >> >> import struct >> >> def Convert(): >> Endian = ' < is little endian. Make sure you're getting out the right values! >> ParameterFormat = 'f' # float32 >> RawDataList = [17252, 26334, 16141, 58057,17252, 15478, 16144, 43257] # list of int32 registers >> NumOfParametersInRawData = int(len(RawDataList)/2) >> Result = [] >> for i in range(NumOfParametersInRawData): > Iterating over indices is not very Pythonic, and there's usually a > better way. In this case: for int1, int2 in zip(RawDataList[::2], > RawDataList[1::2]) > >> # pack every 2 registers, take only the first 2 bytes from each one, change their endianess than unpack them back to the Parameter format >> >> Result.append((struct.unpack(ParameterFormat,(struct.pack(Endian,RawD >> ataList[(i*2)+1])[0:2] + >> struct.pack(' > You can do this a little more elegantly (and probably faster) with > struct by putting it in a list comprehension: > > [struct.unpack('f', struct.pack(' for i1, i2 in zip(raw_data[::2], raw_data[1::2])] > > Numpy can also do it. 
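(For concreteness, both approaches in one self-contained sketch. The byte order here is an assumption: little-endian registers with the low word first; swap each register pair if a device sends the high word first.)

    import struct
    import numpy as np

    raw_data = [17252, 26334, 16141, 58057, 17252, 15478, 16144, 43257]

    # struct: pack each pair of 16-bit registers as little-endian
    # unsigned shorts, then reinterpret the 4 bytes as a float32
    floats_struct = [struct.unpack('<f', struct.pack('<HH', lo, hi))[0]
                     for lo, hi in zip(raw_data[::2], raw_data[1::2])]

    # numpy: store the registers as little-endian uint16 and view the
    # same buffer as little-endian float32 (no per-element Python loop)
    le_shorts = np.array(raw_data, dtype='<u2')
    floats_np = np.frombuffer(le_shorts.data, dtype='<f4')

    # both variants lay down the same bytes, so the values agree
    assert np.allclose(floats_struct, floats_np)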
You can get your array of little-endian shorts > with > > > le_shorts = np.array(raw_data, dtype=' > and then reinterpret the bytes backing it as float32 with np.frombuffer: > > np.frombuffer(le_shorts.data, dtype='f4') > > For small lists like the one in your example, the two approaches are > equally fast. For long ones, numpy is much faster: *sigh* let's try that again: In [82]: raw_data Out[82]: [17252, 26334, 16141, 58057, 17252, 15478, 16144, 43257] In [83]: raw_data2 = np.random.randint(0, 2**32, size=10**6, dtype='u4') In [84]: %timeit np.frombuffer(np.array(raw_data, dtype=' > In [82]: raw_data Out[82]: [17252, 26334, 16141, 58057, 17252, 15478, > 16144, 43257] In [83]: raw_data2 = np.random.randint(0, 2**32, > size=10**6, dtype='u4') # 1 million random integers In [84]: %timeit > np.frombuffer(np.array(raw_data, dtype=' ?s ? 60.9 ns per loop (mean ? std. dev. of 7 runs, 100000 loops each) > In > [85]: %timeit np.frombuffer(np.array(raw_data2, dtype=' dtype='f4') 854 ?s ? 37.3 ?s per loop (mean ? std. dev. of 7 runs, > 1000 loops each) In [86]: %timeit [struct.unpack('f', > struct.pack(' zip(raw_data[::2], raw_data[1::2])] 4.87 ?s ? 17.3 ns per loop (mean ? > std. dev. of 7 runs, > 100000 loops each) In [87]: %timeit [struct.unpack('f', > struct.pack(' zip(raw_data2[::2], raw_data2[1::2])] 3.6 s ? 9.78 ms per loop (mean ? > std. dev. of 7 runs, 1 loop each) > > -- Thomas > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > ------------------------------ Subject: Digest Footer _______________________________________________ NumPy-Discussion mailing list NumPy-Discussion at python.org https://mail.python.org/mailman/listinfo/numpy-discussion ------------------------------ End of NumPy-Discussion Digest, Vol 133, Issue 7 ************************************************ From marc.barbry at mailoo.org Tue Oct 17 08:49:00 2017 From: marc.barbry at mailoo.org (marc) Date: Tue, 17 Oct 2017 14:49:00 +0200 Subject: [Numpy-discussion] compiler binary in numpy.distutils Message-ID: <20d91d32-5602-a1ab-5729-475c4172b1b7@mailoo.org> Hi! I'm trying to write a python wrapper to ScaLapack using f2py. I have troubles to set up the binary path to the compiler using numpy.distutils. What is the correct way? You can find my actual setup.py at the code repository, https://gitlab.com/mbarbry/python-scalapack Thanks in advance, Marc From ralf.gommers at gmail.com Wed Oct 18 06:04:40 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Wed, 18 Oct 2017 23:04:40 +1300 Subject: [Numpy-discussion] ANN: second SciPy 1.0.0 release candidate Message-ID: Hi all, I'm excited to be able to announce the availability of the second (and hopefully last) release candidate of Scipy 1.0. This is a big release, and a version number that has been 16 years in the making. It contains a few more deprecations and backwards incompatible changes than an average release. Therefore please do test it on your own code, and report any issues on the Github issue tracker or on the scipy-dev mailing list. Sources and binary wheels can be found at https://pypi.python.org/pypi/scipy and https://github.com/scipy/scipy/releases/tag/v1.0.0rc2. To install with pip: pip install --pre --upgrade scipy The most important issues fixed after v1.0.0rc1 is https://github.com/scipy/scipy/issues/7969 (missing DLL in Windows wheel). 
Pull requests merged after v1.0.0rc1: - `#7948 `__: DOC: add note on checking for deprecations before upgrade to... - `#7952 `__: DOC: update SciPy Roadmap for 1.0 release and recent discussions. - `#7960 `__: BUG: optimize: revert changes to bfgs in gh-7165 - `#7962 `__: TST: special: mark a failing hyp2f1 test as xfail - `#7973 `__: BUG: fixed keyword in 'info' in ``_get_mem_available`` utility - `#7986 `__: TST: Relax test_trsm precision to 5 decimals - `#8001 `__: TST: fix test failures from Matplotlib 2.1 update - `#8010 `__: BUG: signal: fix crash in lfilter - `#8019 `__: MAINT: fix test failures with NumPy master Thanks to everyone who contributed to this release! Ralf ========================== SciPy 1.0.0 Release Notes ========================== .. note:: Scipy 1.0.0 is not released yet! .. contents:: SciPy 1.0.0 is the culmination of 8 months of hard work. It contains many new features, numerous bug-fixes, improved test coverage and better documentation. There have been a number of deprecations and API changes in this release, which are documented below. All users are encouraged to upgrade to this release, as there are a large number of bug-fixes and optimizations. Moreover, our development attention will now shift to bug-fix releases on the 1.0.x branch, and on adding new features on the master branch. Some of the highlights of this release are: - Major build improvements. Windows wheels are available on PyPI for the first time, and continuous integration has been set up on Windows and OS X in addition to Linux. - A set of new ODE solvers and a unified interface to them (`scipy.integrate.solve_ivp`). - Two new trust region optimizers and a new linear programming method, with improved performance compared to what `scipy.optimize` offered previously. - Many new BLAS and LAPACK functions were wrapped. The BLAS wrappers are now complete. This release requires Python 2.7 or 3.4+ and NumPy 1.8.2 or greater. This is also the last release to support LAPACK 3.1.x - 3.3.x. Moving the lowest supported LAPACK version to >3.2.x was long blocked by Apple Accelerate providing the LAPACK 3.2.1 API. We have decided that it's time to either drop Accelerate or, if there is enough interest, provide shims for functions added in more recent LAPACK versions so it can still be used. New features ============ `scipy.cluster` improvements ---------------------------- `scipy.cluster.hierarchy.optimal_leaf_ordering`, a function to reorder a linkage matrix to minimize distances between adjacent leaves, was added. `scipy.fftpack` improvements ---------------------------- N-dimensional versions of the discrete sine and cosine transforms and their inverses were added as ``dctn``, ``idctn``, ``dstn`` and ``idstn``. `scipy.integrate` improvements ------------------------------ A set of new ODE solvers have been added to `scipy.integrate`. The convenience function `scipy.integrate.solve_ivp` allows uniform access to all solvers. The individual solvers (``RK23``, ``RK45``, ``Radau``, ``BDF`` and ``LSODA``) can also be used directly. `scipy.linalg` improvements ---------------------------- The BLAS wrappers in `scipy.linalg.blas` have been completed. 
Added functions are ``*gbmv``, ``*hbmv``, ``*hpmv``, ``*hpr``, ``*hpr2``, ``*spmv``, ``*spr``, ``*tbmv``, ``*tbsv``, ``*tpmv``, ``*tpsv``, ``*trsm``, ``*trsv``, ``*sbmv`` and ``*spr2``.

Wrappers for the LAPACK functions ``*gels``, ``*stev``, ``*sytrd``, ``*hetrd``, ``*sytf2``, ``*hetrf``, ``*sytrf``, ``*sycon``, ``*hecon``, ``*gglse``, ``*stebz``, ``*stemr``, ``*sterf``, and ``*stein`` have been added.

The function `scipy.linalg.subspace_angles` has been added to compute the subspace angles between two matrices.

The function `scipy.linalg.clarkson_woodruff_transform` has been added. It finds low-rank matrix approximation via the Clarkson-Woodruff Transform.

The functions `scipy.linalg.eigh_tridiagonal` and `scipy.linalg.eigvalsh_tridiagonal`, which find the eigenvalues and eigenvectors of tridiagonal hermitian/symmetric matrices, were added.

`scipy.ndimage` improvements
----------------------------

Support for homogeneous coordinate transforms has been added to `scipy.ndimage.affine_transform`.

The ``ndimage`` C code underwent a significant refactoring, and is now a lot easier to understand and maintain.

`scipy.optimize` improvements
-----------------------------

The methods ``trust-region-exact`` and ``trust-krylov`` have been added to the function `scipy.optimize.minimize`. These new trust-region methods solve the subproblem with higher accuracy at the cost of more Hessian factorizations (compared to dogleg) or more matrix-vector products (compared to ncg), but usually require fewer nonlinear iterations and are able to deal with indefinite Hessians. They seem very competitive against the other Newton methods implemented in scipy.

`scipy.optimize.linprog` gained an interior point method. Its performance is superior (both in accuracy and speed) to the older simplex method.

`scipy.signal` improvements
---------------------------

An argument ``fs`` (sampling frequency) was added to the following functions: ``firwin``, ``firwin2``, ``firls``, and ``remez``. This makes these functions consistent with many other functions in `scipy.signal` in which the sampling frequency can be specified.

`scipy.signal.freqz` has been sped up significantly for FIR filters.

`scipy.sparse` improvements
---------------------------

Iterating over and slicing of CSC and CSR matrices is now faster by up to ~35%.

The ``tocsr`` method of COO matrices is now several times faster.

The ``diagonal`` method of sparse matrices now takes a parameter, indicating which diagonal to return.

`scipy.sparse.linalg` improvements
----------------------------------

A new iterative solver for large-scale nonsymmetric sparse linear systems, `scipy.sparse.linalg.gcrotmk`, was added. It implements ``GCROT(m,k)``, a flexible variant of ``GCROT``.

`scipy.sparse.linalg.lsmr` now accepts an initial guess, yielding potentially faster convergence.

SuperLU was updated to version 5.2.1.

`scipy.spatial` improvements
----------------------------

Many distance metrics in `scipy.spatial.distance` gained support for weights.

The signatures of `scipy.spatial.distance.pdist` and `scipy.spatial.distance.cdist` were changed to ``*args, **kwargs`` in order to support a wider range of metrics (e.g. string-based metrics that need extra keywords).
Also, an optional ``out`` parameter was added to ``pdist`` and ``cdist``, allowing the user to specify where the resulting distance matrix is to be stored.

`scipy.stats` improvements
--------------------------

The methods ``cdf`` and ``logcdf`` were added to `scipy.stats.multivariate_normal`, providing the cumulative distribution function of the multivariate normal distribution.

New statistical distance functions were added, namely `scipy.stats.wasserstein_distance` for the first Wasserstein distance and `scipy.stats.energy_distance` for the energy distance.

Deprecated features
===================

The following functions in `scipy.misc` are deprecated: ``bytescale``, ``fromimage``, ``imfilter``, ``imread``, ``imresize``, ``imrotate``, ``imsave``, ``imshow`` and ``toimage``. Most of those functions have unexpected behavior (like rescaling and type casting image data without the user asking for that). Other functions simply have better alternatives.

``scipy.interpolate.interpolate_wrapper`` and all functions in that submodule are deprecated. This was a never-finished set of wrapper functions which is not relevant anymore.

The ``fillvalue`` of `scipy.signal.convolve2d` will be cast directly to the dtypes of the input arrays in the future, and it will be checked that it is a scalar or an array with a single element.

``scipy.spatial.distance.matching`` is deprecated. It is an alias of `scipy.spatial.distance.hamming`, which should be used instead.

The implementation of `scipy.spatial.distance.wminkowski` was based on a wrong interpretation of the metric definition. In scipy 1.0 it has only been deprecated in the documentation, to keep backwards compatibility, but it is recommended to use the new version of `scipy.spatial.distance.minkowski`, which implements the correct behaviour.

Positional arguments of `scipy.spatial.distance.pdist` and `scipy.spatial.distance.cdist` should be replaced with their keyword versions.

Backwards incompatible changes
==============================

The following deprecated functions have been removed from `scipy.stats`: ``betai``, ``chisqprob``, ``f_value``, ``histogram``, ``histogram2``, ``pdf_fromgamma``, ``signaltonoise``, ``square_of_sums``, ``ss`` and ``threshold``.

The following deprecated functions have been removed from `scipy.stats.mstats`: ``betai``, ``f_value_wilks_lambda``, ``signaltonoise`` and ``threshold``.

The deprecated ``a`` and ``reta`` keywords have been removed from `scipy.stats.shapiro`.

The deprecated functions ``sparse.csgraph.cs_graph_components`` and ``sparse.linalg.symeig`` have been removed from `scipy.sparse`.

The following deprecated keywords have been removed in `scipy.sparse.linalg`: ``drop_tol`` from ``splu``, and ``xtype`` from ``bicg``, ``bicgstab``, ``cg``, ``cgs``, ``gmres``, ``qmr`` and ``minres``.

The deprecated functions ``expm2`` and ``expm3`` have been removed from `scipy.linalg`. The deprecated keyword ``q`` was removed from `scipy.linalg.expm`. And the deprecated submodule ``linalg.calc_lwork`` was removed.

The deprecated functions ``C2K``, ``K2C``, ``F2C``, ``C2F``, ``F2K`` and ``K2F`` have been removed from `scipy.constants`.

The deprecated ``ppform`` class was removed from `scipy.interpolate`.

The deprecated keyword ``iprint`` was removed from `scipy.optimize.fmin_cobyla`.

The default value for the ``zero_phase`` keyword of `scipy.signal.decimate` has been changed to True.
The ``kmeans`` and ``kmeans2`` functions in `scipy.cluster.vq` changed the method used for random initialization, so using a fixed random seed will not necessarily produce the same results as in previous versions.

`scipy.special.gammaln` does not accept complex arguments anymore.

The deprecated functions ``sph_jn``, ``sph_yn``, ``sph_jnyn``, ``sph_in``, ``sph_kn``, and ``sph_inkn`` have been removed. Users should instead use the functions ``spherical_jn``, ``spherical_yn``, ``spherical_in``, and ``spherical_kn``. Be aware that the new functions have different signatures.

The cross-class properties of `scipy.signal.lti` systems have been removed. The following properties/setters have been removed:

Name - (accessing/setting has been removed) - (setting has been removed)

* StateSpace - (``num``, ``den``, ``gain``) - (``zeros``, ``poles``)
* TransferFunction - (``A``, ``B``, ``C``, ``D``, ``gain``) - (``zeros``, ``poles``)
* ZerosPolesGain - (``A``, ``B``, ``C``, ``D``, ``num``, ``den``) - ()

``signal.freqz(b, a)`` with ``b`` or ``a`` >1-D raises a ``ValueError``. This was a corner case for which it was unclear that the behavior was well-defined.

The method ``var`` of `scipy.stats.dirichlet` now returns a scalar rather than an ndarray when the length of alpha is 1.

Other changes
=============

SciPy now has a formal governance structure. It consists of a BDFL (Pauli Virtanen) and a Steering Committee. See `the governance document `_ for details.

It is now possible to build SciPy on Windows with MSVC + gfortran! Continuous integration has been set up for this build configuration on Appveyor, building against OpenBLAS. Continuous integration for OS X has been set up on TravisCI.

The SciPy test suite has been migrated from ``nose`` to ``pytest``.

``scipy/_distributor_init.py`` was added to allow redistributors of SciPy to add custom code that needs to run when importing SciPy (e.g. checks for hardware, DLL search paths, etc.).

Support for PEP 518 (specifying build system requirements) was added - see ``pyproject.toml`` in the root of the SciPy repository.

In order to have consistent function names, the function ``scipy.linalg.solve_lyapunov`` is renamed to `scipy.linalg.solve_continuous_lyapunov`. The old name is kept for backwards-compatibility.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From NissimD at elspec-ltd.com  Wed Oct 18 06:44:22 2017
From: NissimD at elspec-ltd.com (Nissim Derdiger)
Date: Wed, 18 Oct 2017 10:44:22 +0000
Subject: [Numpy-discussion] different values for ndarray when printed
	with or without [ ]
Message-ID: <9EFE3345170EF24DB67C61C1B05EEEDB407E0A02@EX10.Elspec.local>

Hi all,

I have an ndarray that shows different values when printed like this: print(arr), or like this: print(arr[0::]).

When changing it back to a Python list (with list = arr.tolist()) - both prints return the same value, but when converting that list back to an np array (arr=np.array(list)) - the printing issue returns.

Any ideas what may cause that?

Thanks,
Nissim.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From deak.andris at gmail.com Wed Oct 18 08:25:44 2017 From: deak.andris at gmail.com (Andras Deak) Date: Wed, 18 Oct 2017 14:25:44 +0200 Subject: [Numpy-discussion] different values for ndarray when printed with or without [ ] In-Reply-To: <9EFE3345170EF24DB67C61C1B05EEEDB407E0A02@EX10.Elspec.local> References: <9EFE3345170EF24DB67C61C1B05EEEDB407E0A02@EX10.Elspec.local> Message-ID: On Wed, Oct 18, 2017 at 12:44 PM, Nissim Derdiger wrote: > Hi all, > > I have a ndarray, that shows different values when called like that: > print(arr) or like that print(arr[0::]). > > When changing it back to a python string (with list = arr.tolist()) ? both > prints return same value, but when converting that list back to np array > (arr=np.array(list)) ? the printing issue returns. > > Any ideas what may cause that? Hi Nissim, I suggest adding some specifics. What is the shape and dtype of your array? What are the differences in values? In what way are they different? The best would be if you could provide a minimal, reproducible example. Regards, Andr?s From nicholas.nadeau at gmail.com Wed Oct 18 08:39:01 2017 From: nicholas.nadeau at gmail.com (Nicholas Nadeau) Date: Wed, 18 Oct 2017 08:39:01 -0400 Subject: [Numpy-discussion] different values for ndarray when printed with or without [ ] In-Reply-To: <9EFE3345170EF24DB67C61C1B05EEEDB407E0A02@EX10.Elspec.local> References: <9EFE3345170EF24DB67C61C1B05EEEDB407E0A02@EX10.Elspec.local> Message-ID: Hi Nissim, While a working example will be helpful, I just wanted to confirm that you're not assigning a value to `list`, as you did in your message (e.g., ` list = arr.tolist()`). Because if that's the case, then you may run into issues, as `list` is a built-in Python keyword (for the `list` class). Cheers, -- Nicholas Nadeau, P.Eng., AVS On 18 October 2017 at 06:44, Nissim Derdiger wrote: > Hi all, > > I have a ndarray, that shows different values when called like that: > print(arr) or like that print(arr[0::]). > > When changing it back to a python string (with list = arr.tolist()) ? both > prints return same value, but when converting that list back to np array > (arr=np.array(list)) ? the printing issue returns. > > Any ideas what may cause that? > > Thanks, > > Nissim. > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sebastian at sipsolutions.net Wed Oct 18 09:23:27 2017 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Wed, 18 Oct 2017 15:23:27 +0200 Subject: [Numpy-discussion] Github overview change Message-ID: <1508333007.27279.2.camel@sipsolutions.net> Hi all, probably silly, but is anyone else annoyed at not seeing comments anymore in the github overview/start page? I stopped getting everything as mails and had a (bad) habit of glancing at them which would spot at least bigger discussions going on, but now it only shows actual commits, which honestly are less interesting to me. Probably just me, was just wondering if anyone knew a setting or so? - Sebastian -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: This is a digitally signed message part URL: From charlesr.harris at gmail.com Wed Oct 18 12:43:06 2017 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 18 Oct 2017 10:43:06 -0600 Subject: [Numpy-discussion] Github overview change In-Reply-To: <1508333007.27279.2.camel@sipsolutions.net> References: <1508333007.27279.2.camel@sipsolutions.net> Message-ID: On Wed, Oct 18, 2017 at 7:23 AM, Sebastian Berg wrote: > Hi all, > > probably silly, but is anyone else annoyed at not seeing comments > anymore in the github overview/start page? I stopped getting everything > as mails and had a (bad) habit of glancing at them which would spot at > least bigger discussions going on, but now it only shows actual > commits, which honestly are less interesting to me. > > Probably just me, was just wondering if anyone knew a setting or so? > Don't know any settings. It's almost as annoying as not forwarding my own comments ... Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From NissimD at elspec-ltd.com Wed Oct 18 13:30:50 2017 From: NissimD at elspec-ltd.com (Nissim Derdiger) Date: Wed, 18 Oct 2017 17:30:50 +0000 Subject: [Numpy-discussion] different values for ndarray when printed with or without Message-ID: <9EFE3345170EF24DB67C61C1B05EEEDB407E1A70@EX10.Elspec.local> Hi, The reason I've didn't send an example, is that in order to reproduce this issue - you'll need to have a meter that uses Modbus communication protocol in order to get a valid response object. (it's the output of "read_holding_registers" function on pymodbus3, which is has almost no documentation AT ALL) Is there a way to "export" an object so I'll be able to send it here? One of the parameters in this object is a regular list, but when I replace it with a list of my own (same values same everything) - the issue is not reproduced. (when looking in the watch list of the PyCharm - both are registered as ) That's why I figured I'll ask without example and maybe this is a simple or known issue that can be answered without it. To answer your questions: 1. Dtype is float32 2. shape is : (4,) 3. difference between values are: [ 2.25699615e+02 5.51561475e-01 3.81394744e+00 1.03807904e-01] Instead of: [225.69961547851562, 0.5515614748001099, 3.8139474391937256, 0.10380790382623672] 4. I've only used the word "list" in the mail example and not in my code.. Anyway, the basic code looks like this: def func(): Input = [17249, 45850, 16141, 13090, 16500, 6071, 15828, 39229] #usually this would be: Input = payload.registers Result = np.array(Input, dtype='>u2') Result = np.frombuffer(Result.data, dtype='>u4') Result = np.array(Result, dtype=' To: "numpy-discussion at python.org" Subject: [Numpy-discussion] different values for ndarray when printed with or without [ ] Message-ID: <9EFE3345170EF24DB67C61C1B05EEEDB407E0A02 at EX10.Elspec.local> Content-Type: text/plain; charset="us-ascii" Hi all, I have a ndarray, that shows different values when called like that: print(arr) or like that print(arr[0::]). When changing it back to a python string (with list = arr.tolist()) - both prints return same value, but when converting that list back to np array (arr=np.array(list)) - the printing issue returns. Any ideas what may cause that? Thanks, Nissim. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

------------------------------

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: This is a digitally signed message part URL: ------------------------------ Subject: Digest Footer _______________________________________________ NumPy-Discussion mailing list NumPy-Discussion at python.org https://mail.python.org/mailman/listinfo/numpy-discussion ------------------------------ End of NumPy-Discussion Digest, Vol 133, Issue 13 ************************************************* From nathan12343 at gmail.com Wed Oct 18 14:25:27 2017 From: nathan12343 at gmail.com (Nathan Goldbaum) Date: Wed, 18 Oct 2017 13:25:27 -0500 Subject: [Numpy-discussion] Github overview change In-Reply-To: References: <1508333007.27279.2.camel@sipsolutions.net> Message-ID: This is a change in the UI that github introduced a couple weeks ago during their annual conference. See https://github.com/blog/2447-a-more-connected-universe On Wed, Oct 18, 2017 at 11:49 AM Charles R Harris wrote: > On Wed, Oct 18, 2017 at 7:23 AM, Sebastian Berg < > sebastian at sipsolutions.net> wrote: > >> Hi all, >> >> probably silly, but is anyone else annoyed at not seeing comments >> anymore in the github overview/start page? I stopped getting everything >> as mails and had a (bad) habit of glancing at them which would spot at >> least bigger discussions going on, but now it only shows actual >> commits, which honestly are less interesting to me. >> >> Probably just me, was just wondering if anyone knew a setting or so? >> > > Don't know any settings. It's almost as annoying as not forwarding my own > comments ... > > Chuck > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > -------------- next part -------------- An HTML attachment was scrubbed... URL: From deak.andris at gmail.com Wed Oct 18 14:31:54 2017 From: deak.andris at gmail.com (Andras Deak) Date: Wed, 18 Oct 2017 20:31:54 +0200 Subject: [Numpy-discussion] different values for ndarray when printed with or without In-Reply-To: <9EFE3345170EF24DB67C61C1B05EEEDB407E1A70@EX10.Elspec.local> References: <9EFE3345170EF24DB67C61C1B05EEEDB407E1A70@EX10.Elspec.local> Message-ID: On Wed, Oct 18, 2017 at 7:30 PM, Nissim Derdiger wrote: > 3. difference between values are: > [ 2.25699615e+02 5.51561475e-01 3.81394744e+00 1.03807904e-01] > Instead of: > [225.69961547851562, 0.5515614748001099, 3.8139474391937256, 0.10380790382623672] The behaviour you're describing sounds like a matter of pretty-printing. Numpy uses a shortened format for printing numeric values by default. When you convert to a list, you leave numpy behind and you get the native python behaviour. If you want to control how this pretty-printing happens in numpy, take a close look at numpy.set_printoptions: https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.set_printoptions.html . Now, I still don't see how taking a trivial view of your array would affect this printing, but I believe your values themselves are identical (i.e. correct) in both cases, and they are only displayed differently. If you were to do further computations with your arrays, the results would be the same. 
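For example, with the values from the message (the exact spacing of the output can differ a little between numpy versions):

    In [1]: import numpy as np

    In [2]: arr = np.array([225.69961547851562, 0.5515614748001099,
       ...:                 3.8139474391937256, 0.10380790382623672])

    In [3]: print(arr)                # default print options: exponential
    [  2.25699615e+02   5.51561475e-01   3.81394744e+00   1.03807904e-01]

    In [4]: np.set_printoptions(suppress=True)

    In [5]: print(arr)                # same values, fixed-point display
    [ 225.69961548    0.55156147    3.81394744    0.1038079 ]

    In [6]: arr.tolist()              # plain Python floats, full repr
    Out[6]: [225.69961547851562, 0.5515614748001099, 3.8139474391937256, 0.10380790382623672]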
Regards, Andr?s From sebastian at sipsolutions.net Wed Oct 18 15:20:24 2017 From: sebastian at sipsolutions.net (Sebastian Berg) Date: Wed, 18 Oct 2017 21:20:24 +0200 Subject: [Numpy-discussion] Github overview change In-Reply-To: References: <1508333007.27279.2.camel@sipsolutions.net> Message-ID: <1508354424.7052.1.camel@sipsolutions.net> On Wed, 2017-10-18 at 13:25 -0500, Nathan Goldbaum wrote: > This is a change in the UI that github introduced a couple weeks ago > during their annual conference. > > See https://github.com/blog/2447-a-more-connected-universe > This announces the "Discover repositories" thing, but my normal news feed changed significantly, maybe at the same time, not showing comments at all. Is there a simple setup where: 1. I can get a rough overview what is being discussed without necessarily reading everything. 2. Still get anything with @mention, etc. so that I can't really miss it? (right now I have those in mail -- which I like -- and on the website, which I don't care too much about). Probably I can set it up to get everything as mail, and set the website to still only give notifications for 2., which would be OK. Maybe I am just change resistant ;). - Sebastian > On Wed, Oct 18, 2017 at 11:49 AM Charles R Harris > wrote: > > On Wed, Oct 18, 2017 at 7:23 AM, Sebastian Berg > ions.net> wrote: > > > Hi all, > > > > > > probably silly, but is anyone else annoyed at not seeing comments > > > anymore in the github overview/start page? I stopped getting > > > everything > > > as mails and had a (bad) habit of glancing at them which would > > > spot at > > > least bigger discussions going on, but now it only shows actual > > > commits, which honestly are less interesting to me. > > > > > > Probably just me, was just wondering if anyone knew a setting or > > > so? > > > > Don't know any settings. It's almost as annoying as not forwarding > > my own comments ... > > > > Chuck? > > > > _______________________________________________ > > NumPy-Discussion mailing list > > NumPy-Discussion at python.org > > https://mail.python.org/mailman/listinfo/numpy-discussion > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: This is a digitally signed message part URL: From josef.pktd at gmail.com Wed Oct 18 17:03:09 2017 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 18 Oct 2017 17:03:09 -0400 Subject: [Numpy-discussion] Github overview change In-Reply-To: References: <1508333007.27279.2.camel@sipsolutions.net> Message-ID: On Wed, Oct 18, 2017 at 12:43 PM, Charles R Harris < charlesr.harris at gmail.com> wrote: > > > On Wed, Oct 18, 2017 at 7:23 AM, Sebastian Berg < > sebastian at sipsolutions.net> wrote: > >> Hi all, >> >> probably silly, but is anyone else annoyed at not seeing comments >> anymore in the github overview/start page? I stopped getting everything >> as mails and had a (bad) habit of glancing at them which would spot at >> least bigger discussions going on, but now it only shows actual >> commits, which honestly are less interesting to me. >> >> Probably just me, was just wondering if anyone knew a setting or so? >> > > Don't know any settings. It's almost as annoying as not forwarding my own > comments ... > That's an option now in notifications. 
I saw and changed the setting yesterday, and now I get distracted by my own comments on PRs and issues in the email. Josef > > Chuck > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Thu Oct 19 01:24:11 2017 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 18 Oct 2017 22:24:11 -0700 Subject: [Numpy-discussion] numpy grant update Message-ID: Hi all, I wanted to give everyone an update on what's going on with the NumPy grant [1]. As you may have noticed, things have been moving a bit slower than originally hoped -- unfortunately my health is improving but has continued to be rocky [2]. Fortunately, I have awesome co-workers, and BIDS has an institutional interest/mandate for figuring out how to make these things happen, so after thinking it over we've decided to reorganize how we're doing things internally and split up the work to let me focus on the core technical/community aspects without getting overloaded. Specifically, Fernando P?rez and Jonathan Dugan [3] are taking on PI/administration duties, St?fan van der Walt will focus on handling day-to-day management of the incoming hires, and Nelle Varoquaux & Jarrod Millman will also be joining the team (exact details TBD). This shouldn't really affect any of you, except that you might see some familiar faces with @berkeley.edu emails becoming more engaged. I'm still leading the Berkeley effort, and in any case it's still ultimately the community and NumPy steering council who will be making decisions about the project ? this is just some internal details about how we're planning to manage our contributions. But in the interest of full transparency I figured I'd let you know what's happening. In other news, the job ad to start the official hiring process has now been submitted for HR review, so it should hopefully be up soon -- depending on how efficient the bureaucracy is. I'll definitely let everyone know as soon as its posted. I'll also be giving a lunch talk at BIDS tomorrow to let folks locally know about what's going on, which I think will be recorded ? I'll send around a link after in case others are interested. -n [1] https://mail.python.org/pipermail/numpy-discussion/2017-May/076818.html [2] https://vorpus.org/blog/emerging-from-the-underworld/ [3] https://bids.berkeley.edu/people/jonathan-dugan -- Nathaniel J. Smith -- https://vorpus.org From NissimD at elspec-ltd.com Thu Oct 19 01:36:12 2017 From: NissimD at elspec-ltd.com (Nissim Derdiger) Date: Thu, 19 Oct 2017 05:36:12 +0000 Subject: [Numpy-discussion] different values for ndarray when printed with or without Message-ID: <9EFE3345170EF24DB67C61C1B05EEEDB407E2A95@EX10.Elspec.local> Nice catch Andre!!! np.set_printoptions(suppress=True) solved it. Thanks!!! Message: 4 Date: Wed, 18 Oct 2017 20:31:54 +0200 From: Andras Deak To: Discussion of Numerical Python Subject: Re: [Numpy-discussion] different values for ndarray when printed with or without Message-ID: Content-Type: text/plain; charset="UTF-8" On Wed, Oct 18, 2017 at 7:30 PM, Nissim Derdiger wrote: > 3. difference between values are: > [ 2.25699615e+02 5.51561475e-01 3.81394744e+00 1.03807904e-01] > Instead of: > [225.69961547851562, 0.5515614748001099, 3.8139474391937256, > 0.10380790382623672] The behaviour you're describing sounds like a matter of pretty-printing. 
Numpy uses a shortened format for printing numeric values by default. When you convert to a list, you leave numpy behind and you get the native python behaviour. If you want to control how this pretty-printing happens in numpy, take a close look at numpy.set_printoptions: https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.set_printoptions.html . Now, I still don't see how taking a trivial view of your array would affect this printing, but I believe your values themselves are identical (i.e. correct) in both cases, and they are only displayed differently. If you were to do further computations with your arrays, the results would be the same. Regards, Andr?s ------------------------------ Subject: Digest Footer _______________________________________________ NumPy-Discussion mailing list NumPy-Discussion at python.org https://mail.python.org/mailman/listinfo/numpy-discussion ------------------------------ End of NumPy-Discussion Digest, Vol 133, Issue 14 ************************************************* From charlesr.harris at gmail.com Thu Oct 19 13:02:47 2017 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 19 Oct 2017 11:02:47 -0600 Subject: [Numpy-discussion] numpy grant update In-Reply-To: References: Message-ID: On Wed, Oct 18, 2017 at 11:24 PM, Nathaniel Smith wrote: > Hi all, > > I wanted to give everyone an update on what's going on with the NumPy > grant [1]. As you may have noticed, things have been moving a bit > slower than originally hoped -- unfortunately my health is improving > but has continued to be rocky [2]. > > Fortunately, I have awesome co-workers, and BIDS has an institutional > interest/mandate for figuring out how to make these things happen, so > after thinking it over we've decided to reorganize how we're doing > things internally and split up the work to let me focus on the core > technical/community aspects without getting overloaded. Specifically, > Fernando P?rez and Jonathan Dugan [3] are taking on PI/administration > duties, St?fan van der Walt will focus on handling day-to-day > management of the incoming hires, and Nelle Varoquaux & Jarrod Millman > will also be joining the team (exact details TBD). > > This shouldn't really affect any of you, except that you might see > some familiar faces with @berkeley.edu emails becoming more engaged. > I'm still leading the Berkeley effort, and in any case it's still > ultimately the community and NumPy steering council who will be making > decisions about the project ? this is just some internal details about > how we're planning to manage our contributions. But in the interest of > full transparency I figured I'd let you know what's happening. > > In other news, the job ad to start the official hiring process has now > been submitted for HR review, so it should hopefully be up soon -- > depending on how efficient the bureaucracy is. I'll definitely let > everyone know as soon as its posted. > > I'll also be giving a lunch talk at BIDS tomorrow to let folks locally > know about what's going on, which I think will be recorded ? I'll send > around a link after in case others are interested. > > -n > > [1] https://mail.python.org/pipermail/numpy-discussion/ > 2017-May/076818.html > [2] https://vorpus.org/blog/emerging-from-the-underworld/ > [3] https://bids.berkeley.edu/people/jonathan-dugan > Thanks for the update. Chuck -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ralf.gommers at gmail.com Fri Oct 20 02:11:49 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Thu, 19 Oct 2017 23:11:49 -0700 Subject: [Numpy-discussion] numpy grant update In-Reply-To: References: Message-ID: On Thu, Oct 19, 2017 at 10:02 AM, Charles R Harris < charlesr.harris at gmail.com> wrote: > > > On Wed, Oct 18, 2017 at 11:24 PM, Nathaniel Smith wrote: > >> Hi all, >> >> I wanted to give everyone an update on what's going on with the NumPy >> grant [1]. As you may have noticed, things have been moving a bit >> slower than originally hoped -- unfortunately my health is improving >> but has continued to be rocky [2]. >> >> Fortunately, I have awesome co-workers, and BIDS has an institutional >> interest/mandate for figuring out how to make these things happen, so >> after thinking it over we've decided to reorganize how we're doing >> things internally and split up the work to let me focus on the core >> technical/community aspects without getting overloaded. Specifically, >> Fernando P?rez and Jonathan Dugan [3] are taking on PI/administration >> duties, St?fan van der Walt will focus on handling day-to-day >> management of the incoming hires, and Nelle Varoquaux & Jarrod Millman >> will also be joining the team (exact details TBD). >> >> This shouldn't really affect any of you, except that you might see >> some familiar faces with @berkeley.edu emails becoming more engaged. >> I'm still leading the Berkeley effort, and in any case it's still >> ultimately the community and NumPy steering council who will be making >> decisions about the project ? this is just some internal details about >> how we're planning to manage our contributions. But in the interest of >> full transparency I figured I'd let you know what's happening. >> >> In other news, the job ad to start the official hiring process has now >> been submitted for HR review, so it should hopefully be up soon -- >> depending on how efficient the bureaucracy is. I'll definitely let >> everyone know as soon as its posted. >> >> I'll also be giving a lunch talk at BIDS tomorrow to let folks locally >> know about what's going on, which I think will be recorded ? I'll send >> around a link after in case others are interested. >> >> -n >> >> [1] https://mail.python.org/pipermail/numpy-discussion/2017-May/ >> 076818.html >> [2] https://vorpus.org/blog/emerging-from-the-underworld/ >> [3] https://bids.berkeley.edu/people/jonathan-dugan >> > > Thanks for the update. > Thanks Nathaniel. I'm looking forward to all of those people getting involved. Hiring always takes longer than you want, but next year the pace of development promises to pick up significantly:) Ralf > Chuck > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kirillbalunov at gmail.com Fri Oct 20 06:00:00 2017 From: kirillbalunov at gmail.com (Kirill Balunov) Date: Fri, 20 Oct 2017 13:00:00 +0300 Subject: [Numpy-discussion] Sorting of an array row-by-row? 
Message-ID: 

Hi,

I was trying to sort an array (N, 3) by rows, and first came up with this solution:

N = 1000000
arr = np.random.randint(-100, 100, size=(N, 3))
dt = np.dtype([('x', int),('y', int),('z', int)])

arr.view(dtype=dt).sort(axis=0)

Then I found another way using the lexsort function:

idx = np.lexsort([arr[:, 2], arr[:, 1], arr[:, 0]])
arr = arr[idx]

Which is 4 times faster than the previous solution. And now I have several questions:

Why is the first way so much slower?
What is the fastest way in numpy to sort an array by rows?
Why is the order of keys in the lexsort function reversed?

The last question was really the root of the problem for me with the lexsort function. And I still can not understand the idea of such an order (the last is the primary); it seems confusing to me.

Thank you!!! With kind regards, Kirill.

p.s.: One more thing, when I first tried to use lexsort I caught this strange exception:

np.lexsort(arr, axis=1)

---------------------------------------------------------------------------
AxisError                                 Traceback (most recent call last)
 in ()
----> 1 np.lexsort(ls, axis=1)

AxisError: axis 1 is out of bounds for array of dimension 1

From jfoxrabinovitz at gmail.com  Fri Oct 20 10:11:06 2017
From: jfoxrabinovitz at gmail.com (Joseph Fox-Rabinovitz)
Date: Fri, 20 Oct 2017 10:11:06 -0400
Subject: Re: [Numpy-discussion] Sorting of an array row-by-row?
In-Reply-To: 
References: 
Message-ID: 

There are two mistakes in your PS. The immediate error comes from the fact that lexsort accepts an iterable of 1D arrays, so when you pass in arr as the argument, it is treated as an iterable over the rows, each of which is 1D. 1D arrays do not have an axis=1. You actually want to iterate over the columns, so np.lexsort(a.T) is the correct phrasing of that. No idea about the speed difference.

-Joe

On Fri, Oct 20, 2017 at 6:00 AM, Kirill Balunov  wrote:
> Hi,
>
> I was trying to sort an array (N, 3) by rows, and firstly come with this
> solution:
>
> N = 1000000
> arr = np.random.randint(-100, 100, size=(N, 3))
> dt = np.dtype([('x', int),('y', int),('z', int)])
>
> arr.view(dtype=dt).sort(axis=0)
>
> Then I found another way using lexsort function:
>
> idx = np.lexsort([arr[:, 2], arr[:, 1], arr[:, 0]])
> arr = arr[idx]
>
> Which is 4 times faster than the previous solution. And now i have several
> questions:
>
> Why is the first way so much slower?
> What is the fastest way in numpy to sort array by rows?
> Why is the order of keys in lexsort function reversed?
>
> The last question was really the root of the problem for me with the
> lexsort function.
> And I still can not understand the idea of such an order (the last is the
> primary), it seems to me confusing.
>
> Thank you!!! With kind regards, Kirill.
>
> p.s.: One more thing, when i first try to use lexsort. I catch this strange
> exception:
>
> np.lexsort(arr, axis=1)
>
> ---------------------------------------------------------------------------
> AxisError                                 Traceback (most recent call last)
>  in ()
> ----> 1 np.lexsort(ls, axis=1)
>
> AxisError: axis 1 is out of bounds for array of dimension 1
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at python.org
> https://mail.python.org/mailman/listinfo/numpy-discussion
>

From kirillbalunov at gmail.com  Fri Oct 20 15:03:37 2017
From: kirillbalunov at gmail.com (Kirill Balunov)
Date: Fri, 20 Oct 2017 22:03:37 +0300
Subject: Re: [Numpy-discussion] Sorting of an array row-by-row?
In-Reply-To: 
References: 
Message-ID: 

Thank you Joseph, you gave me an idea, and now the fastest version (for big arrays) on my laptop is:

np.lexsort(arr[:, ::-1].T)

For me the most strange thing is the order of keys: what was the idea behind keeping them right-to-left? How does this relate to lexicographic order?

2017-10-20 17:11 GMT+03:00 Joseph Fox-Rabinovitz :

> There are two mistakes in your PS. The immediate error comes from the
> fact that lexsort accepts an iterable of 1D arrays, so when you pass
> in arr as the argument, it is treated as an iterable over the rows,
> each of which is 1D. 1D arrays do not have an axis=1. You actually
> want to iterate over the columns, so np.lexsort(a.T) is the correct
> phrasing of that. No idea about the speed difference.
>
> -Joe
>
> On Fri, Oct 20, 2017 at 6:00 AM, Kirill Balunov
> wrote:
> > Hi,
> >
> > I was trying to sort an array (N, 3) by rows, and firstly come with this
> > solution:
> >
> > N = 1000000
> > arr = np.random.randint(-100, 100, size=(N, 3))
> > dt = np.dtype([('x', int),('y', int),('z', int)])
> >
> > arr.view(dtype=dt).sort(axis=0)
> >
> > Then I found another way using lexsort function:
> >
> > idx = np.lexsort([arr[:, 2], arr[:, 1], arr[:, 0]])
> > arr = arr[idx]
> >
> > Which is 4 times faster than the previous solution. And now i have
> several
> > questions:
> >
> > Why is the first way so much slower?
> > What is the fastest way in numpy to sort array by rows?
> > Why is the order of keys in lexsort function reversed?
> >
> > The last question was really the root of the problem for me with the
> > lexsort function.
> > And I still can not understand the idea of such an order (the last is the
> > primary), it seems to me confusing.
> >
> > Thank you!!! With kind regards, Kirill.
> >
> > p.s.: One more thing, when i first try to use lexsort. I catch this
> strange
> > exception:
> >
> > np.lexsort(arr, axis=1)
> >
> > ---------------------------------------------------------------------------
> > AxisError                                 Traceback (most recent call last)
> >  in ()
> > ----> 1 np.lexsort(ls, axis=1)
> >
> > AxisError: axis 1 is out of bounds for array of dimension 1
> >
> > _______________________________________________
> > NumPy-Discussion mailing list
> > NumPy-Discussion at python.org
> > https://mail.python.org/mailman/listinfo/numpy-discussion
> >
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at python.org
> https://mail.python.org/mailman/listinfo/numpy-discussion
>

From jfoxrabinovitz at gmail.com  Fri Oct 20 15:40:30 2017
From: jfoxrabinovitz at gmail.com (Joseph Fox-Rabinovitz)
Date: Fri, 20 Oct 2017 15:40:30 -0400
Subject: [Numpy-discussion] Sorting of an array row-by-row?
In-Reply-To: References: Message-ID: I do not think that there is any particular relationship between the order of the keys and lexicographic order. The key order is just a convention, which is clearly documented. I agree that it is a bit counter-intuitive for anyone that has used excel or MATLAB, but it is ingrained in the API at this point. -Joe On Fri, Oct 20, 2017 at 3:03 PM, Kirill Balunov wrote: > Thank you Josef, you gave me an idea, and now the fastest version (for big > arrays) on my laptop is: > > np.lexsort(arr[:, ::-1].T) > > For me the most strange thing is the order of keys, what was an idea to keep > then right-to-left? How does this relate to lexicographic order? > > 2017-10-20 17:11 GMT+03:00 Joseph Fox-Rabinovitz : >> >> There are two mistakes in your PS. The immediate error comes from the >> fact that lexsort accepts an iterable of 1D arrays, so when you pass >> in arr as the argument, it is treated as an iterable over the rows, >> each of which is 1D. 1D arrays do not have an axis=1. You actually >> want to iterate over the columns, so np.lexsort(a.T) is the correct >> phrasing of that. No idea about the speed difference. >> >> -Joe >> >> On Fri, Oct 20, 2017 at 6:00 AM, Kirill Balunov >> wrote: >> > Hi, >> > >> > I was trying to sort an array (N, 3) by rows, and firstly come with this >> > solution: >> > >> > N = 1000000 >> > arr = np.random.randint(-100, 100, size=(N, 3)) >> > dt = np.dtype([('x', int),('y', int),('z', int)]) >> > >> > arr.view(dtype=dt).sort(axis=0) >> > >> > Then I found another way using lexsort function: >> > >> > idx = np.lexsort([arr[:, 2], arr[:, 1], arr[:, 0]]) >> > arr = arr[idx] >> > >> > Which is 4 times faster than the previous solution. And now i have >> > several >> > questions: >> > >> > Why is the first way so much slower? >> > What is the fastest way in numpy to sort array by rows? >> > Why is the order of keys in lexsort function reversed? >> > >> > The last question was really the root of the problem for me with the >> > lexsort function. >> > And I still can not understand the idea of such an order (the last is >> > the >> > primary), it seems to me confusing. >> > >> > Thank you!!! With kind regards, Kirill. >> > >> > p.s.: One more thing, when i first try to use lexsort. I catch this >> > strange >> > exception: >> > >> > np.lexsort(arr, axis=1) >> > >> > >> > --------------------------------------------------------------------------- >> > AxisError Traceback (most recent call >> > last) >> > in () >> > ----> 1 np.lexsort(ls, axis=1) >> > >> > AxisError: axis 1 is out of bounds for array of dimension 1 >> > >> > >> > >> > >> > _______________________________________________ >> > NumPy-Discussion mailing list >> > NumPy-Discussion at python.org >> > https://mail.python.org/mailman/listinfo/numpy-discussion >> > >> _______________________________________________ >> NumPy-Discussion mailing list >> NumPy-Discussion at python.org >> https://mail.python.org/mailman/listinfo/numpy-discussion > > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion > From stefanv at berkeley.edu Fri Oct 20 19:04:39 2017 From: stefanv at berkeley.edu (Stefan van der Walt) Date: Fri, 20 Oct 2017 16:04:39 -0700 Subject: [Numpy-discussion] numpy grant update In-Reply-To: References: Message-ID: <1508540679.3986729.1145916696.62609493@webmail.messagingengine.com> On Thu, Oct 19, 2017, at 23:11, Ralf Gommers wrote: > Thanks Nathaniel. 
I'm looking forward to all of those people getting
> involved. Hiring always takes longer than you want, but next year the
> pace of development promises to pick up significantly:)

I'm excited for the opportunity to dedicate time to NumPy again! It was the first package I contributed to in the scientific Python ecosystem, at a time--back when segfaults were still a thing ;)--that could not have been more exciting to a young undergrad.

Now a bit older, but hopefully not too rusty ;)

Best regards
Stéfan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From charlesr.harris at gmail.com  Fri Oct 20 21:02:18 2017
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Fri, 20 Oct 2017 19:02:18 -0600
Subject: Re: [Numpy-discussion] Sorting of an array row-by-row?
In-Reply-To: 
References: 
Message-ID: 

On Fri, Oct 20, 2017 at 1:40 PM, Joseph Fox-Rabinovitz <
jfoxrabinovitz at gmail.com> wrote:

> I do not think that there is any particular relationship between the
> order of the keys and lexicographic order. The key order is just a
> convention, which is clearly documented. I agree that it is a bit
> counter-intuitive for anyone that has used excel or MATLAB, but it is
> ingrained in the API at this point.
>

When I wrote lexsort for numarray, together with the typed sorting routines, I went back and forth on the key order, but finally decided that the simplest thing would be to leave them in the same order as the sorts. That requires a bit of knowledge as to what the effect of that is, but if one remembers that the last sort dominates it isn't too bad.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From berceanu at runbox.com  Sat Oct 21 13:45:46 2017
From: berceanu at runbox.com (Andrei Berceanu)
Date: Sat, 21 Oct 2017 19:45:46 +0200 (CEST)
Subject: [Numpy-discussion] MATLAB to Numpy
Message-ID: 

Hi,

I am new to Numpy, and would like to start by translating a (badly written?) piece of MATLAB code.
What I have come up with so far is this:

px = np.zeros_like(tmp_px); py = np.zeros_like(tmp_py); pz = np.zeros_like(tmp_pz)
w = np.zeros_like(tmp_w)
x = np.zeros_like(tmp_x); y = np.zeros_like(tmp_y); z = np.zeros_like(tmp_z)

j=-1
for i in range(tmp_px.size):
    if tmp_px[i] > 2:
        j += 1
        px[j] = tmp_px[i]
        py[j] = tmp_py[i]
        pz[j] = tmp_pz[i]
        w[j] = tmp_w[i]
        x[j] = tmp_x[i]
        y[j] = tmp_y[i]
        z[j] = tmp_z[i]

px=px[:j+1]; py=py[:j+1]; pz=pz[:j+1]
w=w[:j+1]
x=x[:j+1]; y=y[:j+1]; z=z[:j+1]

It works, but I'm sure it's probably the most inefficient way of doing it.
What would be a decent rewrite?

Thank you so much,
Best regards,
Andrei

From pmhobson at gmail.com  Sat Oct 21 14:59:36 2017
From: pmhobson at gmail.com (Paul Hobson)
Date: Sat, 21 Oct 2017 11:59:36 -0700
Subject: Re: [Numpy-discussion] MATLAB to Numpy
In-Reply-To: 
References: 
Message-ID: 

Can you provide representative examples for tmp_p[x|y|z]?
-paul

On Sat, Oct 21, 2017 at 10:45 AM, Andrei Berceanu  wrote:
> Hi,
>
> I am new to Numpy, and would like to start by translating a (badly
> written?) piece of MATLAB code.
> What I have come up with so far is this: > > px = np.zeros_like(tmp_px); py = np.zeros_like(tmp_py); pz = > np.zeros_like(tmp_pz) > w = np.zeros_like(tmp_w) > x = np.zeros_like(tmp_x); y = np.zeros_like(tmp_y); z = > np.zeros_like(tmp_z) > > j=-1 > for i in range(tmp_px.size): > if tmp_px[i] > 2: > j += 1 > px[j] = tmp_px[i] > py[j] = tmp_py[i] > pz[j] = tmp_pz[i] > w[j] = tmp_w[i] > x[j] = tmp_x[i] > y[j] = tmp_y[i] > z[j] = tmp_z[i] > > px=px[:j+1]; py=py[:j+1]; pz=pz[:j+1] > w=w[:j+1] > x=x[:j+1]; y=y[:j+1]; z=z[:j+1] > > It works, but I'm sure it's probably the most inefficient way of doing it. > What would be a decent rewrite? > > Thank you so much, > Best regards, > Andrei > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion >
-------------- next part --------------
An HTML attachment was scrubbed... URL:

From robert.kern at gmail.com Sat Oct 21 15:03:07 2017 From: robert.kern at gmail.com (Robert Kern) Date: Sat, 21 Oct 2017 12:03:07 -0700 Subject: [Numpy-discussion] MATLAB to Numpy In-Reply-To: References: Message-ID:

On Sat, Oct 21, 2017 at 10:45 AM, Andrei Berceanu wrote: > > Hi, > > I am new to Numpy, and would like to start by translating a (badly written?) piece of MATLAB code. > What I have come up with so far is this: > > px = np.zeros_like(tmp_px); py = np.zeros_like(tmp_py); pz = np.zeros_like(tmp_pz) > w = np.zeros_like(tmp_w) > x = np.zeros_like(tmp_x); y = np.zeros_like(tmp_y); z = np.zeros_like(tmp_z) > > j=-1 > for i in range(tmp_px.size): > if tmp_px[i] > 2: > j += 1 > px[j] = tmp_px[i] > py[j] = tmp_py[i] > pz[j] = tmp_pz[i] > w[j] = tmp_w[i] > x[j] = tmp_x[i] > y[j] = tmp_y[i] > z[j] = tmp_z[i] > > px=px[:j+1]; py=py[:j+1]; pz=pz[:j+1] > w=w[:j+1] > x=x[:j+1]; y=y[:j+1]; z=z[:j+1] > > It works, but I'm sure it's probably the most inefficient way of doing it. What would be a decent rewrite?

Index with a boolean mask.

mask = (tmp_px > 2)
px = tmp_px[mask]
py = tmp_py[mask]
# ... etc.

-- Robert Kern
-------------- next part --------------
An HTML attachment was scrubbed... URL:

From davidmenhur at gmail.com Sat Oct 21 16:11:38 2017 From: davidmenhur at gmail.com (=?UTF-8?B?RGHPgGlk?=) Date: Sat, 21 Oct 2017 22:11:38 +0200 Subject: [Numpy-discussion] MATLAB to Numpy In-Reply-To: References: Message-ID:

On 21 October 2017 at 21:03, Robert Kern wrote: > Index with a boolean mask. > > mask = (tmp_px > 2) > px = tmp_px[mask] > py = tmp_py[mask] > # ... etc. >

That isn't equivalent, note that j only increases when tmp_px > 2. I think you can do it with something like:

mask = tmp_px > 2
j_values = np.cumsum(mask)[mask]
i_values = np.arange(len(j_values))

px[i_values] = tmp_i[j_values]
-------------- next part --------------
An HTML attachment was scrubbed... URL:

From wieser.eric+numpy at gmail.com Sat Oct 21 16:32:11 2017 From: wieser.eric+numpy at gmail.com (Eric Wieser) Date: Sat, 21 Oct 2017 20:32:11 +0000 Subject: [Numpy-discussion] MATLAB to Numpy In-Reply-To: References: Message-ID:

David, that doesn't work, because np.cumsum(mask)[mask] is always equal to np.arange(mask.sum()) + 1. Robert's answer is correct.

Eric

On Sat, 21 Oct 2017 at 13:12 Daπid wrote:

On 21 October 2017 at 21:03, Robert Kern wrote: > >> Index with a boolean mask. >> >> mask = (tmp_px > 2) >> px = tmp_px[mask] >> py = tmp_py[mask] >> # ... etc. >> >> > That isn't equivalent, note that j only increases when tmp_px > 2. I think > you can do it with something like: > > mask = tmp_px > 2 > j_values = np.cumsum(mask)[mask] > i_values = np.arange(len(j_values)) > > px[i_values] = tmp_i[j_values] > > > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion >
-------------- next part --------------
An HTML attachment was scrubbed... URL:
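Eric's identity is easy to verify with a small sketch (the mask values are illustrative):

import numpy as np

mask = np.array([True, False, True, True, False])
# the running count of True values, read off at the True positions,
# is always just 1..K, so the cumsum-based indices carry no extra information
print(np.cumsum(mask)[mask])      # [1 2 3]
print(np.arange(mask.sum()) + 1)  # [1 2 3]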
From davidmenhur at gmail.com Sat Oct 21 17:04:43 2017 From: davidmenhur at gmail.com (=?UTF-8?B?RGHPgGlk?=) Date: Sat, 21 Oct 2017 23:04:43 +0200 Subject: [Numpy-discussion] MATLAB to Numpy In-Reply-To: References: Message-ID:

On 21 October 2017 at 22:32, Eric Wieser wrote: > David, that doesn't work, because np.cumsum(mask)[mask] is always equal > to np.arange(mask.sum()) + 1. Robert's answer is correct. >

Of course, you are right. It makes sense in my head now.
-------------- next part --------------
An HTML attachment was scrubbed... URL:

From berceanu at runbox.com Mon Oct 23 09:05:26 2017 From: berceanu at runbox.com (Andrei Berceanu) Date: Mon, 23 Oct 2017 15:05:26 +0200 (CEST) Subject: [Numpy-discussion] MATLAB to Numpy In-Reply-To: Message-ID:

Thank you so much, the solution was much simpler than I expected!

On Sat, 21 Oct 2017 23:04:43 +0200, Daπid wrote: > On 21 October 2017 at 22:32, Eric Wieser > wrote: > > > David, that doesn't work, because np.cumsum(mask)[mask] is always equal > > to np.arange(mask.sum()) + 1. Robert's answer is correct. > > > Of course, you are right. It makes sense in my head now. > _______________________________________________ > NumPy-Discussion mailing list > NumPy-Discussion at python.org > https://mail.python.org/mailman/listinfo/numpy-discussion

From ralf.gommers at gmail.com Wed Oct 25 06:14:07 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Wed, 25 Oct 2017 23:14:07 +1300 Subject: [Numpy-discussion] SciPy 1.0 released! Message-ID:

Hi all,

We are extremely pleased to announce the release of SciPy 1.0, 16 years after version 0.1 saw the light of day. It has been a long, productive journey to get here, and we anticipate many more exciting new features and releases in the future.

Why 1.0 now?
------------

A version number should reflect the maturity of a project - and SciPy has been a mature and stable library, heavily used in production settings, for a long time already. From that perspective, the 1.0 version number is long overdue.

Some key project goals, both technical (e.g. Windows wheels and continuous integration) and organisational (a governance structure, code of conduct and a roadmap), have been achieved recently.

Many of us are a bit perfectionist, and therefore are reluctant to call something "1.0" because it may imply that it's "finished" or "we are 100% happy with it". This is normal for many open source projects; however, that doesn't make it right. We acknowledge to ourselves that it's not perfect, and there are some dusty corners left (that will probably always be the case). Despite that, SciPy is extremely useful to its users, on average has high quality code and documentation, and gives the stability and backwards compatibility guarantees that a 1.0 label implies.
Some history and perspectives
-----------------------------

- 2001: the first SciPy release
- 2005: transition to NumPy
- 2007: creation of scikits
- 2008: scipy.spatial module and first Cython code added
- 2010: moving to a 6-monthly release cycle
- 2011: SciPy development moves to GitHub
- 2011: Python 3 support
- 2012: adding a sparse graph module and unified optimization interface
- 2012: removal of scipy.maxentropy
- 2013: continuous integration with TravisCI
- 2015: adding Cython interface for BLAS/LAPACK and a benchmark suite
- 2017: adding a unified C API with scipy.LowLevelCallable; removal of scipy.weave
- 2017: SciPy 1.0 release

**Pauli Virtanen** is SciPy's Benevolent Dictator For Life (BDFL). He says:

*Truthfully speaking, we could have released a SciPy 1.0 a long time ago, so I'm happy we do it now at long last. The project has a long history, and during the years it has matured also as a software project. I believe it has well proved its merit to warrant a version number starting with unity.*

*Since its conception 15+ years ago, SciPy has largely been written by and for scientists, to provide a box of basic tools that they need. Over time, the set of people active in its development has undergone some rotation, and we have evolved towards a somewhat more systematic approach to development. Regardless, this underlying drive has stayed the same, and I think it will also continue propelling the project forward in future. This is all good, since not long after 1.0 comes 1.1.*

**Travis Oliphant** is one of SciPy's creators. He says:

*I'm honored to write a note of congratulations to the SciPy developers and the entire SciPy community for the release of SciPy 1.0. This release represents a dream of many that has been patiently pursued by a stalwart group of pioneers for nearly 2 decades. Efforts have been broad and consistent over that time from many hundreds of people. From initial discussions to efforts coding and packaging to documentation efforts to extensive conference and community building, the SciPy effort has been a global phenomenon that it has been a privilege to participate in.*

*The idea of SciPy was already in multiple people's minds in 1997 when I first joined the Python community as a young graduate student who had just fallen in love with the expressibility and extensibility of Python. The internet was just starting to bring together like-minded mathematicians and scientists in nascent electronically-connected communities. In 1998, there was a concerted discussion on the matrix-SIG Python mailing list with people like Paul Barrett, Joe Harrington, Perry Greenfield, Paul Dubois, Konrad Hinsen, David Ascher, and others. This discussion encouraged me in 1998 and 1999 to procrastinate my PhD and spend a lot of time writing extension modules to Python that mostly wrapped battle-tested Fortran and C-code, making it available to the Python user. This work attracted the help of others like Robert Kern, Pearu Peterson and Eric Jones, who joined their efforts with mine in 2000 so that by 2001, the first SciPy release was ready. This was long before Github simplified collaboration and input from others, and the "patch" command and email were how you helped a project improve.*

*Since that time, hundreds of people have spent an enormous amount of time improving the SciPy library and the community surrounding this library has dramatically grown. I stopped being able to participate actively in developing the SciPy library around 2010.
Fortunately, at that time, Pauli Virtanen and Ralf Gommers picked up the pace of development supported by dozens of other key contributors such as David Cournapeau, Evgeni Burovski, Josef Perktold, and Warren Weckesser. While I have only been able to admire the development of SciPy from a distance for the past 7 years, I have never lost my love of the project and the concept of community-driven development. I remain driven even now by a desire to help sustain the development of not only the SciPy library but many other affiliated and related open-source projects. I am extremely pleased that SciPy is in the hands of a world-wide community of talented developers who will ensure that SciPy remains an example of how grass-roots, community-driven development can succeed.* **Fernando Perez** offers a wider community perspective: *The existence of a nascent Scipy library, and the incredible --if tiny by today's standards-- community surrounding it is what drew me into the scientific Python world while still a physics graduate student in 2001. Today, I am awed when I see these tools power everything from high school education to the research that led to the 2017 Nobel Prize in physics.* *Don't be fooled by the 1.0 number: this project is a mature cornerstone of the modern scientific computing ecosystem. I am grateful for the many who have made it possible, and hope to be able to contribute again to it in the future. My sincere congratulations to the whole team!* Highlights of this release -------------------------- Some of the highlights of this release are: - Major build improvements. Windows wheels are available on PyPI for the first time, and continuous integration has been set up on Windows and OS X in addition to Linux. - A set of new ODE solvers and a unified interface to them (`scipy.integrate.solve_ivp`). - Two new trust region optimizers and a new linear programming method, with improved performance compared to what `scipy.optimize` offered previously. - Many new BLAS and LAPACK functions were wrapped. The BLAS wrappers are now complete. Upgrading and compatibility --------------------------- There have been a number of deprecations and API changes in this release, which are documented below. Before upgrading, we recommend that users check that their own code does not use deprecated SciPy functionality (to do so, run your code with ``python -Wd`` and check for ``DeprecationWarning`` s). This release requires Python 2.7 or >=3.4 and NumPy 1.8.2 or greater. This is also the last release to support LAPACK 3.1.x - 3.3.x. Moving the lowest supported LAPACK version to >3.2.x was long blocked by Apple Accelerate providing the LAPACK 3.2.1 API. We have decided that it's time to either drop Accelerate or, if there is enough interest, provide shims for functions added in more recent LAPACK versions so it can still be used. New features ============ `scipy.cluster` improvements ---------------------------- `scipy.cluster.hierarchy.optimal_leaf_ordering`, a function to reorder a linkage matrix to minimize distances between adjacent leaves, was added. `scipy.fftpack` improvements ---------------------------- N-dimensional versions of the discrete sine and cosine transforms and their inverses were added as ``dctn``, ``idctn``, ``dstn`` and ``idstn``. `scipy.integrate` improvements ------------------------------ A set of new ODE solvers have been added to `scipy.integrate`. The convenience function `scipy.integrate.solve_ivp` allows uniform access to all solvers. 
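For instance, a minimal sketch of the unified interface (the decay ODE and the values used here are purely illustrative):

from scipy.integrate import solve_ivp

# integrate dy/dt = -0.5*y over t in [0, 10], starting from y(0) = 2
sol = solve_ivp(lambda t, y: -0.5 * y, (0, 10), [2.0], method='RK45')
print(sol.t[-1], sol.y[0, -1])  # final time and state, roughly 2*exp(-5)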
The individual solvers (``RK23``, ``RK45``, ``Radau``, ``BDF`` and ``LSODA``) can also be used directly.

`scipy.linalg` improvements
----------------------------

The BLAS wrappers in `scipy.linalg.blas` have been completed. Added functions are ``*gbmv``, ``*hbmv``, ``*hpmv``, ``*hpr``, ``*hpr2``, ``*spmv``, ``*spr``, ``*tbmv``, ``*tbsv``, ``*tpmv``, ``*tpsv``, ``*trsm``, ``*trsv``, ``*sbmv`` and ``*spr2``.

Wrappers for the LAPACK functions ``*gels``, ``*stev``, ``*sytrd``, ``*hetrd``, ``*sytf2``, ``*hetrf``, ``*sytrf``, ``*sycon``, ``*hecon``, ``*gglse``, ``*stebz``, ``*stemr``, ``*sterf``, and ``*stein`` have been added.

The function `scipy.linalg.subspace_angles` has been added to compute the subspace angles between two matrices.

The function `scipy.linalg.clarkson_woodruff_transform` has been added. It finds low-rank matrix approximation via the Clarkson-Woodruff Transform.

The functions `scipy.linalg.eigh_tridiagonal` and `scipy.linalg.eigvalsh_tridiagonal`, which find the eigenvalues and eigenvectors of tridiagonal hermitian/symmetric matrices, were added.

`scipy.ndimage` improvements
----------------------------

Support for homogeneous coordinate transforms has been added to `scipy.ndimage.affine_transform`.

The ``ndimage`` C code underwent a significant refactoring, and is now a lot easier to understand and maintain.

`scipy.optimize` improvements
-----------------------------

The methods ``trust-region-exact`` and ``trust-krylov`` have been added to the function `scipy.optimize.minimize`. These new trust-region methods solve the subproblem with higher accuracy at the cost of more Hessian factorizations (compared to dogleg) or more matrix-vector products (compared to ncg), but usually require fewer nonlinear iterations and are able to deal with indefinite Hessians. They seem very competitive against the other Newton methods implemented in scipy.

`scipy.optimize.linprog` gained an interior point method. Its performance is superior (both in accuracy and speed) to the older simplex method.

`scipy.signal` improvements
---------------------------

An argument ``fs`` (sampling frequency) was added to the following functions: ``firwin``, ``firwin2``, ``firls``, and ``remez``. This makes these functions consistent with many other functions in `scipy.signal` in which the sampling frequency can be specified.

`scipy.signal.freqz` has been sped up significantly for FIR filters.

`scipy.sparse` improvements
---------------------------

Iterating over and slicing of CSC and CSR matrices is now faster by up to ~35%.

The ``tocsr`` method of COO matrices is now several times faster.

The ``diagonal`` method of sparse matrices now takes a parameter, indicating which diagonal to return.

`scipy.sparse.linalg` improvements
----------------------------------

A new iterative solver for large-scale nonsymmetric sparse linear systems, `scipy.sparse.linalg.gcrotmk`, was added. It implements ``GCROT(m,k)``, a flexible variant of ``GCROT``.

`scipy.sparse.linalg.lsmr` now accepts an initial guess, yielding potentially faster convergence.

SuperLU was updated to version 5.2.1.

`scipy.spatial` improvements
----------------------------

Many distance metrics in `scipy.spatial.distance` gained support for weights.

The signatures of `scipy.spatial.distance.pdist` and `scipy.spatial.distance.cdist` were changed to ``*args, **kwargs`` in order to support a wider range of metrics (e.g. string-based metrics that need extra keywords).
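For example, a short sketch of the weighted metrics; the data are made up, and the ``w`` keyword shown here is the per-coordinate weight argument that the reworked ``minkowski`` metric accepts:

import numpy as np
from scipy.spatial.distance import pdist

X = np.random.rand(5, 3)
w = np.array([1.0, 2.0, 0.5])               # per-coordinate weights
d = pdist(X, metric='minkowski', p=2, w=w)  # condensed distance matrix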
Also, an optional ``out`` parameter was added to ``pdist`` and ``cdist``, allowing the user to specify where the resulting distance matrix is to be stored.

`scipy.stats` improvements
--------------------------

The methods ``cdf`` and ``logcdf`` were added to `scipy.stats.multivariate_normal`, providing the cumulative distribution function of the multivariate normal distribution.

New statistical distance functions were added, namely `scipy.stats.wasserstein_distance` for the first Wasserstein distance and `scipy.stats.energy_distance` for the energy distance.
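Both functions operate on samples from two 1-D distributions; a tiny sketch (the sample values are illustrative):

from scipy.stats import wasserstein_distance, energy_distance

u, v = [0, 1, 3], [5, 6, 8]
print(wasserstein_distance(u, v))  # 5.0: each unit of mass moves a distance of 5
print(energy_distance(u, v))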
Deprecated features
===================

The following functions in `scipy.misc` are deprecated: ``bytescale``, ``fromimage``, ``imfilter``, ``imread``, ``imresize``, ``imrotate``, ``imsave``, ``imshow`` and ``toimage``. Most of those functions have unexpected behavior (like rescaling and type casting image data without the user asking for that). Other functions simply have better alternatives.

``scipy.interpolate.interpolate_wrapper`` and all functions in that submodule are deprecated. This was a never-finished set of wrapper functions which is not relevant anymore.

The ``fillvalue`` of `scipy.signal.convolve2d` will be cast directly to the dtypes of the input arrays in the future and checked that it is a scalar or an array with a single element.

``scipy.spatial.distance.matching`` is deprecated. It is an alias of `scipy.spatial.distance.hamming`, which should be used instead.

The implementation of `scipy.spatial.distance.wminkowski` was based on a wrong interpretation of the metric definition. In scipy 1.0 it has only been deprecated in the documentation, to keep backwards compatibility; it is recommended to use the new version of `scipy.spatial.distance.minkowski`, which implements the correct behaviour.

Positional arguments of `scipy.spatial.distance.pdist` and `scipy.spatial.distance.cdist` should be replaced with their keyword versions.

Backwards incompatible changes
==============================

The following deprecated functions have been removed from `scipy.stats`: ``betai``, ``chisqprob``, ``f_value``, ``histogram``, ``histogram2``, ``pdf_fromgamma``, ``signaltonoise``, ``square_of_sums``, ``ss`` and ``threshold``.

The following deprecated functions have been removed from `scipy.stats.mstats`: ``betai``, ``f_value_wilks_lambda``, ``signaltonoise`` and ``threshold``.

The deprecated ``a`` and ``reta`` keywords have been removed from `scipy.stats.shapiro`.

The deprecated functions ``sparse.csgraph.cs_graph_components`` and ``sparse.linalg.symeig`` have been removed from `scipy.sparse`.

The following deprecated keywords have been removed in `scipy.sparse.linalg`: ``drop_tol`` from ``splu``, and ``xtype`` from ``bicg``, ``bicgstab``, ``cg``, ``cgs``, ``gmres``, ``qmr`` and ``minres``.

The deprecated functions ``expm2`` and ``expm3`` have been removed from `scipy.linalg`. The deprecated keyword ``q`` was removed from `scipy.linalg.expm`. And the deprecated submodule ``linalg.calc_lwork`` was removed.

The deprecated functions ``C2K``, ``K2C``, ``F2C``, ``C2F``, ``F2K`` and ``K2F`` have been removed from `scipy.constants`.

The deprecated ``ppform`` class was removed from `scipy.interpolate`.

The deprecated keyword ``iprint`` was removed from `scipy.optimize.fmin_cobyla`.

The default value for the ``zero_phase`` keyword of `scipy.signal.decimate` has been changed to True.

The ``kmeans`` and ``kmeans2`` functions in `scipy.cluster.vq` changed the method used for random initialization, so using a fixed random seed will not necessarily produce the same results as in previous versions.

`scipy.special.gammaln` does not accept complex arguments anymore.

The deprecated functions ``sph_jn``, ``sph_yn``, ``sph_jnyn``, ``sph_in``, ``sph_kn``, and ``sph_inkn`` have been removed. Users should instead use the functions ``spherical_jn``, ``spherical_yn``, ``spherical_in``, and ``spherical_kn``. Be aware that the new functions have different signatures.

The cross-class properties of `scipy.signal.lti` systems have been removed. The following properties/setters have been removed:

Name - (accessing/setting has been removed) - (setting has been removed)

* StateSpace - (``num``, ``den``, ``gain``) - (``zeros``, ``poles``)
* TransferFunction - (``A``, ``B``, ``C``, ``D``, ``gain``) - (``zeros``, ``poles``)
* ZerosPolesGain - (``A``, ``B``, ``C``, ``D``, ``num``, ``den``) - ()

``signal.freqz(b, a)`` with ``b`` or ``a`` >1-D raises a ``ValueError``. This was a corner case for which it was unclear that the behavior was well-defined.

The method ``var`` of `scipy.stats.dirichlet` now returns a scalar rather than an ndarray when the length of alpha is 1.

Other changes
=============

SciPy now has a formal governance structure. It consists of a BDFL (Pauli Virtanen) and a Steering Committee. See `the governance document <https://github.com/scipy/scipy/blob/master/doc/source/dev/governance/governance.rst>`_ for details.

It is now possible to build SciPy on Windows with MSVC + gfortran! Continuous integration has been set up for this build configuration on Appveyor, building against OpenBLAS.

Continuous integration for OS X has been set up on TravisCI.

The SciPy test suite has been migrated from ``nose`` to ``pytest``.

``scipy/_distributor_init.py`` was added to allow redistributors of SciPy to add custom code that needs to run when importing SciPy (e.g. checks for hardware, DLL search paths, etc.).

Support for PEP 518 (specifying build system requirements) was added - see ``pyproject.toml`` in the root of the SciPy repository.

In order to have consistent function names, the function ``scipy.linalg.solve_lyapunov`` is renamed to `scipy.linalg.solve_continuous_lyapunov`. The old name is kept for backwards-compatibility.
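A short sketch of the renamed function (the matrices are illustrative; the old name resolves to the same routine):

import numpy as np
from scipy import linalg

a = np.array([[-3.0, 0.0], [1.0, -1.0]])
q = -np.eye(2)
x = linalg.solve_continuous_lyapunov(a, q)    # new, consistent name
print(np.allclose(a.dot(x) + x.dot(a.T), q))  # True: a x + x a^T = q
# linalg.solve_lyapunov(a, q) gives the same result via the old name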
Authors
=======

* @arcady + * @xoviat + * Anton Akhmerov * Dominic Antonacci + * Alessandro Pietro Bardelli * Ved Basu + * Michael James Bedford + * Ray Bell + * Juan M. Bello-Rivas + * Sebastian Berg * Felix Berkenkamp * Jyotirmoy Bhattacharya + * Matthew Brett * Jonathan Bright * Bruno Jiménez + * Evgeni Burovski * Patrick Callier * Mark Campanelli + * CJ Carey * Robert Cimrman * Adam Cox + * Michael Danilov + * David Haberthür + * Andras Deak + * Philip DeBoer * Anne-Sylvie Deutsch * Cathy Douglass + * Dominic Else + * Guo Fei + * Roman Feldbauer + * Yu Feng * Jaime Fernandez del Rio * Orestis Floros + * David Freese + * Adam Geitgey + * James Gerity + * Dezmond Goff + * Christoph Gohlke * Ralf Gommers * Dirk Gorissen + * Matt Haberland + * David Hagen + * Charles Harris * Lam Yuen Hei + * Jean Helie + * Gaute Hope + * Guillaume Horel + * Franziska Horn + * Yevhenii Hyzyla + * Vladislav Iakovlev + * Marvin Kastner + * Mher Kazandjian * Thomas Keck * Adam Kurkiewicz + * Ronan Lamy + * J.L. Lanfranchi + * Eric Larson * Denis Laxalde * Gregory R. Lee * Felix Lenders + * Evan Limanto * Julian Lukwata + * François Magimel * Syrtis Major + * Charles Masson + * Nikolay Mayorov * Tobias Megies * Markus Meister + * Roman Mirochnik + * Jordi Montes + * Nathan Musoke + * Andrew Nelson * M.J. Nichol * Juan Nunez-Iglesias * Arno Onken + * Nick Papior + * Dima Pasechnik + * Ashwin Pathak + * Oleksandr Pavlyk + * Stefan Peterson * Ilhan Polat * Andrey Portnoy + * Ravi Kumar Prasad + * Aman Pratik * Eric Quintero * Vedant Rathore + * Tyler Reddy * Joscha Reimer * Philipp Rentzsch + * Antonio Horta Ribeiro * Ned Richards + * Kevin Rose + * Benoit Rostykus + * Matt Ruffalo + * Eli Sadoff + * Pim Schellart * Nico Schlömer + * Klaus Sembritzki + * Nikolay Shebanov + * Jonathan Tammo Siebert * Scott Sievert * Max Silbiger + * Mandeep Singh + * Michael Stewart + * Jonathan Sutton + * Deep Tavker + * Martin Thoma * James Tocknell + * Aleksandar Trifunovic + * Paul van Mulbregt + * Jacob Vanderplas * Aditya Vijaykumar * Pauli Virtanen * James Webber * Warren Weckesser * Eric Wieser + * Josh Wilson * Zhiqing Xiao + * Evgeny Zhurko * Nikolay Zinov + * Zé Vinícius +

A total of 121 people contributed to this release. People with a "+" by their names contributed a patch for the first time. This list of names is automatically generated, and may not be fully complete.

Cheers, Ralf
-------------- next part --------------
An HTML attachment was scrubbed... URL:

From faltet at gmail.com Wed Oct 25 06:47:48 2017 From: faltet at gmail.com (Francesc Alted) Date: Wed, 25 Oct 2017 12:47:48 +0200 Subject: [Numpy-discussion] SciPy 1.0 released! In-Reply-To: References: Message-ID:

Congrats everybody!

2017-10-25 12:14 GMT+02:00 Ralf Gommers : > Hi all, > > [full SciPy 1.0 release announcement snipped]
-- Francesc Alted
-------------- next part --------------
An HTML attachment was scrubbed... URL:

From insertinterestingnamehere at gmail.com Wed Oct 25 12:17:14 2017 From: insertinterestingnamehere at gmail.com (Ian Henriksen) Date: Wed, 25 Oct 2017 16:17:14 +0000 Subject: [Numpy-discussion] SciPy 1.0 released! In-Reply-To: References: Message-ID:

Many thanks to Ralf for managing this release! Thanks to the many contributors too! This is a major milestone.

Best, Ian Henriksen

On Wed, Oct 25, 2017 at 5:48 AM Francesc Alted wrote: > Congrats everybody! > > [full SciPy 1.0 release announcement snipped]
> > > 2017-10-25 12:14 GMT+02:00 Ralf Gommers : > >> Hi all, >> >> We are extremely pleased to announce the release of SciPy 1.0, 16 years >> after >> version 0.1 saw the light of day. It has been a long, productive journey >> to >> get here, and we anticipate many more exciting new features and releases >> in the >> future. >> >> >> Why 1.0 now? >> ------------ >> >> A version number should reflect the maturity of a project - and SciPy was >> a >> mature and stable library that is heavily used in production settings for >> a >> long time already. From that perspective, the 1.0 version number is long >> overdue. >> >> Some key project goals, both technical (e.g. Windows wheels and continuous >> integration) and organisational (a governance structure, code of conduct >> and a >> roadmap), have been achieved recently. >> >> Many of us are a bit perfectionist, and therefore are reluctant to call >> something "1.0" because it may imply that it's "finished" or "we are 100% >> happy >> with it". This is normal for many open source projects, however that >> doesn't >> make it right. We acknowledge to ourselves that it's not perfect, and >> there >> are some dusty corners left (that will probably always be the case). >> Despite >> that, SciPy is extremely useful to its users, on average has high quality >> code >> and documentation, and gives the stability and backwards compatibility >> guarantees that a 1.0 label imply. >> >> >> Some history and perspectives >> ----------------------------- >> >> - 2001: the first SciPy release >> - 2005: transition to NumPy >> - 2007: creation of scikits >> - 2008: scipy.spatial module and first Cython code added >> - 2010: moving to a 6-monthly release cycle >> - 2011: SciPy development moves to GitHub >> - 2011: Python 3 support >> - 2012: adding a sparse graph module and unified optimization interface >> - 2012: removal of scipy.maxentropy >> - 2013: continuous integration with TravisCI >> - 2015: adding Cython interface for BLAS/LAPACK and a benchmark suite >> - 2017: adding a unified C API with scipy.LowLevelCallable; removal of >> scipy.weave >> - 2017: SciPy 1.0 release >> >> >> **Pauli Virtanen** is SciPy's Benevolent Dictator For Life (BDFL). He >> says: >> >> *Truthfully speaking, we could have released a SciPy 1.0 a long time ago, >> so I'm >> happy we do it now at long last. The project has a long history, and >> during the >> years it has matured also as a software project. I believe it has well >> proved >> its merit to warrant a version number starting with unity.* >> >> *Since its conception 15+ years ago, SciPy has largely been written by >> and for >> scientists, to provide a box of basic tools that they need. Over time, >> the set >> of people active in its development has undergone some rotation, and we >> have >> evolved towards a somewhat more systematic approach to development. >> Regardless, >> this underlying drive has stayed the same, and I think it will also >> continue >> propelling the project forward in future. This is all good, since not long >> after 1.0 comes 1.1.* >> >> **Travis Oliphant** is one of SciPy's creators. He says: >> >> *I'm honored to write a note of congratulations to the SciPy developers >> and the >> entire SciPy community for the release of SciPy 1.0. This release >> represents >> a dream of many that has been patiently pursued by a stalwart group of >> pioneers >> for nearly 2 decades. Efforts have been broad and consistent over that >> time >> from many hundreds of people. 
From initial discussions to efforts >> coding and >> packaging to documentation efforts to extensive conference and community >> building, the SciPy effort has been a global phenomenon that it has been a >> privilege to participate in.* >> >> *The idea of SciPy was already in multiple people?s minds in 1997 when I >> first >> joined the Python community as a young graduate student who had just >> fallen in >> love with the expressibility and extensibility of Python. The internet >> was >> just starting to bringing together like-minded mathematicians and >> scientists in >> nascent electronically-connected communities. In 1998, there was a >> concerted >> discussion on the matrix-SIG, python mailing list with people like Paul >> Barrett, Joe Harrington, Perry Greenfield, Paul Dubois, Konrad Hinsen, >> David >> Ascher, and others. This discussion encouraged me in 1998 and 1999 to >> procrastinate my PhD and spend a lot of time writing extension modules to >> Python that mostly wrapped battle-tested Fortran and C-code making it >> available >> to the Python user. This work attracted the help of others like Robert >> Kern, >> Pearu Peterson and Eric Jones who joined their efforts with mine in 2000 >> so >> that by 2001, the first SciPy release was ready. This was long before >> Github >> simplified collaboration and input from others and the "patch" command and >> email was how you helped a project improve.* >> >> *Since that time, hundreds of people have spent an enormous amount of time >> improving the SciPy library and the community surrounding this library has >> dramatically grown. I stopped being able to participate actively in >> developing >> the SciPy library around 2010. Fortunately, at that time, Pauli Virtanen >> and >> Ralf Gommers picked up the pace of development supported by dozens of >> other key >> contributors such as David Cournapeau, Evgeni Burovski, Josef Perktold, >> and >> Warren Weckesser. While I have only been able to admire the development >> of >> SciPy from a distance for the past 7 years, I have never lost my love of >> the >> project and the concept of community-driven development. I remain >> driven >> even now by a desire to help sustain the development of not only the SciPy >> library but many other affiliated and related open-source projects. I am >> extremely pleased that SciPy is in the hands of a world-wide community of >> talented developers who will ensure that SciPy remains an example of how >> grass-roots, community-driven development can succeed.* >> >> **Fernando Perez** offers a wider community perspective: >> >> *The existence of a nascent Scipy library, and the incredible --if tiny by >> today's standards-- community surrounding it is what drew me into the >> scientific Python world while still a physics graduate student in 2001. >> Today, >> I am awed when I see these tools power everything from high school >> education to >> the research that led to the 2017 Nobel Prize in physics.* >> >> *Don't be fooled by the 1.0 number: this project is a mature cornerstone >> of the >> modern scientific computing ecosystem. I am grateful for the many who >> have >> made it possible, and hope to be able to contribute again to it in the >> future. >> My sincere congratulations to the whole team!* >> >> >> Highlights of this release >> -------------------------- >> >> Some of the highlights of this release are: >> >> - Major build improvements. 
Windows wheels are available on PyPI for the >> first time, and continuous integration has been set up on Windows and >> OS X >> in addition to Linux. >> - A set of new ODE solvers and a unified interface to them >> (`scipy.integrate.solve_ivp`). >> - Two new trust region optimizers and a new linear programming method, >> with >> improved performance compared to what `scipy.optimize` offered >> previously. >> - Many new BLAS and LAPACK functions were wrapped. The BLAS wrappers are >> now >> complete. >> >> >> Upgrading and compatibility >> --------------------------- >> >> There have been a number of deprecations and API changes in this release, >> which >> are documented below. Before upgrading, we recommend that users check >> that >> their own code does not use deprecated SciPy functionality (to do so, run >> your >> code with ``python -Wd`` and check for ``DeprecationWarning`` s). >> >> This release requires Python 2.7 or >=3.4 and NumPy 1.8.2 or greater. >> >> This is also the last release to support LAPACK 3.1.x - 3.3.x. Moving the >> lowest supported LAPACK version to >3.2.x was long blocked by Apple >> Accelerate >> providing the LAPACK 3.2.1 API. We have decided that it's time to either >> drop >> Accelerate or, if there is enough interest, provide shims for functions >> added >> in more recent LAPACK versions so it can still be used. >> >> >> New features >> ============ >> >> `scipy.cluster` improvements >> ---------------------------- >> >> `scipy.cluster.hierarchy.optimal_leaf_ordering`, a function to reorder a >> linkage matrix to minimize distances between adjacent leaves, was added. >> >> >> `scipy.fftpack` improvements >> ---------------------------- >> >> N-dimensional versions of the discrete sine and cosine transforms and >> their >> inverses were added as ``dctn``, ``idctn``, ``dstn`` and ``idstn``. >> >> >> `scipy.integrate` improvements >> ------------------------------ >> >> A set of new ODE solvers have been added to `scipy.integrate`. The >> convenience >> function `scipy.integrate.solve_ivp` allows uniform access to all solvers. >> The individual solvers (``RK23``, ``RK45``, ``Radau``, ``BDF`` and >> ``LSODA``) >> can also be used directly. >> >> >> `scipy.linalg` improvements >> ---------------------------- >> >> The BLAS wrappers in `scipy.linalg.blas` have been completed. Added >> functions >> are ``*gbmv``, ``*hbmv``, ``*hpmv``, ``*hpr``, ``*hpr2``, ``*spmv``, >> ``*spr``, >> ``*tbmv``, ``*tbsv``, ``*tpmv``, ``*tpsv``, ``*trsm``, ``*trsv``, >> ``*sbmv``, >> ``*spr2``, >> >> Wrappers for the LAPACK functions ``*gels``, ``*stev``, ``*sytrd``, >> ``*hetrd``, >> ``*sytf2``, ``*hetrf``, ``*sytrf``, ``*sycon``, ``*hecon``, ``*gglse``, >> ``*stebz``, ``*stemr``, ``*sterf``, and ``*stein`` have been added. >> >> The function `scipy.linalg.subspace_angles` has been added to compute the >> subspace angles between two matrices. >> >> The function `scipy.linalg.clarkson_woodruff_transform` has been added. >> It finds low-rank matrix approximation via the Clarkson-Woodruff >> Transform. >> >> The functions `scipy.linalg.eigh_tridiagonal` and >> `scipy.linalg.eigvalsh_tridiagonal`, which find the eigenvalues and >> eigenvectors of tridiagonal hermitian/symmetric matrices, were added. >> >> >> `scipy.ndimage` improvements >> ---------------------------- >> >> Support for homogeneous coordinate transforms has been added to >> `scipy.ndimage.affine_transform`. 
>> >> The ``ndimage`` C code underwent a significant refactoring, and is now >> a lot easier to understand and maintain. >> >> >> `scipy.optimize` improvements >> ----------------------------- >> >> The methods ``trust-region-exact`` and ``trust-krylov`` have been added >> to the >> function `scipy.optimize.minimize`. These new trust-region methods solve >> the >> subproblem with higher accuracy at the cost of more Hessian factorizations >> (compared to dogleg) or more matrix vector products (compared to ncg) but >> usually require less nonlinear iterations and are able to deal with >> indefinite >> Hessians. They seem very competitive against the other Newton methods >> implemented in scipy. >> >> `scipy.optimize.linprog` gained an interior point method. Its >> performance is >> superior (both in accuracy and speed) to the older simplex method. >> >> >> `scipy.signal` improvements >> --------------------------- >> >> An argument ``fs`` (sampling frequency) was added to the following >> functions: >> ``firwin``, ``firwin2``, ``firls``, and ``remez``. This makes these >> functions >> consistent with many other functions in `scipy.signal` in which the >> sampling >> frequency can be specified. >> >> `scipy.signal.freqz` has been sped up significantly for FIR filters. >> >> >> `scipy.sparse` improvements >> --------------------------- >> >> Iterating over and slicing of CSC and CSR matrices is now faster by up to >> ~35%. >> >> The ``tocsr`` method of COO matrices is now several times faster. >> >> The ``diagonal`` method of sparse matrices now takes a parameter, >> indicating >> which diagonal to return. >> >> >> `scipy.sparse.linalg` improvements >> ---------------------------------- >> >> A new iterative solver for large-scale nonsymmetric sparse linear systems, >> `scipy.sparse.linalg.gcrotmk`, was added. It implements ``GCROT(m,k)``, a >> flexible variant of ``GCROT``. >> >> `scipy.sparse.linalg.lsmr` now accepts an initial guess, yielding >> potentially >> faster convergence. >> >> SuperLU was updated to version 5.2.1. >> >> >> `scipy.spatial` improvements >> ---------------------------- >> >> Many distance metrics in `scipy.spatial.distance` gained support for >> weights. >> >> The signatures of `scipy.spatial.distance.pdist` and >> `scipy.spatial.distance.cdist` were changed to ``*args, **kwargs`` in >> order to >> support a wider range of metrics (e.g. string-based metrics that need >> extra >> keywords). Also, an optional ``out`` parameter was added to ``pdist`` and >> ``cdist`` allowing the user to specify where the resulting distance >> matrix is >> to be stored >> >> >> `scipy.stats` improvements >> -------------------------- >> >> The methods ``cdf`` and ``logcdf`` were added to >> `scipy.stats.multivariate_normal`, providing the cumulative distribution >> function of the multivariate normal distribution. >> >> New statistical distance functions were added, namely >> `scipy.stats.wasserstein_distance` for the first Wasserstein distance and >> `scipy.stats.energy_distance` for the energy distance. >> >> >> Deprecated features >> =================== >> >> The following functions in `scipy.misc` are deprecated: ``bytescale``, >> ``fromimage``, ``imfilter``, ``imread``, ``imresize``, ``imrotate``, >> ``imsave``, ``imshow`` and ``toimage``. Most of those functions have >> unexpected >> behavior (like rescaling and type casting image data without the user >> asking >> for that). Other functions simply have better alternatives. 
>> >> ``scipy.interpolate.interpolate_wrapper`` and all functions in that >> submodule >> are deprecated. This was a never finished set of wrapper functions which >> is >> not relevant anymore. >> >> The ``fillvalue`` of `scipy.signal.convolve2d` will be cast directly to >> the >> dtypes of the input arrays in the future and checked that it is a scalar >> or >> an array with a single element. >> >> ``scipy.spatial.distance.matching`` is deprecated. It is an alias of >> `scipy.spatial.distance.hamming`, which should be used instead. >> >> Implementation of `scipy.spatial.distance.wminkowski` was based on a wrong >> interpretation of the metric definition. In scipy 1.0 it has been just >> deprecated in the documentation to keep retro-compatibility but is >> recommended >> to use the new version of `scipy.spatial.distance.minkowski` that >> implements >> the correct behaviour. >> >> Positional arguments of `scipy.spatial.distance.pdist` and >> `scipy.spatial.distance.cdist` should be replaced with their keyword >> version. >> >> >> Backwards incompatible changes >> ============================== >> >> The following deprecated functions have been removed from `scipy.stats`: >> ``betai``, ``chisqprob``, ``f_value``, ``histogram``, ``histogram2``, >> ``pdf_fromgamma``, ``signaltonoise``, ``square_of_sums``, ``ss`` and >> ``threshold``. >> >> The following deprecated functions have been removed from >> `scipy.stats.mstats`: >> ``betai``, ``f_value_wilks_lambda``, ``signaltonoise`` and ``threshold``. >> >> The deprecated ``a`` and ``reta`` keywords have been removed from >> `scipy.stats.shapiro`. >> >> The deprecated functions ``sparse.csgraph.cs_graph_components`` and >> ``sparse.linalg.symeig`` have been removed from `scipy.sparse`. >> >> The following deprecated keywords have been removed in >> `scipy.sparse.linalg`: >> ``drop_tol`` from ``splu``, and ``xtype`` from ``bicg``, ``bicgstab``, >> ``cg``, >> ``cgs``, ``gmres``, ``qmr`` and ``minres``. >> >> The deprecated functions ``expm2`` and ``expm3`` have been removed from >> `scipy.linalg`. The deprecated keyword ``q`` was removed from >> `scipy.linalg.expm`. And the deprecated submodule ``linalg.calc_lwork`` >> was >> removed. >> >> The deprecated functions ``C2K``, ``K2C``, ``F2C``, ``C2F``, ``F2K`` and >> ``K2F`` have been removed from `scipy.constants`. >> >> The deprecated ``ppform`` class was removed from `scipy.interpolate`. >> >> The deprecated keyword ``iprint`` was removed from >> `scipy.optimize.fmin_cobyla`. >> >> The default value for the ``zero_phase`` keyword of >> `scipy.signal.decimate` >> has been changed to True. >> >> The ``kmeans`` and ``kmeans2`` functions in `scipy.cluster.vq` changed the >> method used for random initialization, so using a fixed random seed will >> not necessarily produce the same results as in previous versions. >> >> `scipy.special.gammaln` does not accept complex arguments anymore. >> >> The deprecated functions ``sph_jn``, ``sph_yn``, ``sph_jnyn``, ``sph_in``, >> ``sph_kn``, and ``sph_inkn`` have been removed. Users should instead use >> the functions ``spherical_jn``, ``spherical_yn``, ``spherical_in``, and >> ``spherical_kn``. Be aware that the new functions have different >> signatures. >> >> The cross-class properties of `scipy.signal.lti` systems have been >> removed. 
>> The following properties/setters have been removed: >> >> Name - (accessing/setting has been removed) - (setting has been removed) >> >> * StateSpace - (``num``, ``den``, ``gain``) - (``zeros``, ``poles``) >> * TransferFunction (``A``, ``B``, ``C``, ``D``, ``gain``) - (``zeros``, >> ``poles``) >> * ZerosPolesGain (``A``, ``B``, ``C``, ``D``, ``num``, ``den``) - () >> >> ``signal.freqz(b, a)`` with ``b`` or ``a`` >1-D raises a ``ValueError``. >> This >> was a corner case for which it was unclear that the behavior was >> well-defined. >> >> The method ``var`` of `scipy.stats.dirichlet` now returns a scalar rather >> than >> an ndarray when the length of alpha is 1. >> >> >> Other changes >> ============= >> >> SciPy now has a formal governance structure. It consists of a BDFL (Pauli >> Virtanen) and a Steering Committee. See `the governance document >> < >> https://github.com/scipy/scipy/blob/master/doc/source/dev/governance/governance.rst >> >`_ >> for details. >> >> It is now possible to build SciPy on Windows with MSVC + gfortran! >> Continuous >> integration has been set up for this build configuration on Appveyor, >> building >> against OpenBLAS. >> >> Continuous integration for OS X has been set up on TravisCI. >> >> The SciPy test suite has been migrated from ``nose`` to ``pytest``. >> >> ``scipy/_distributor_init.py`` was added to allow redistributors of SciPy >> to >> add custom code that needs to run when importing SciPy (e.g. checks for >> hardware, DLL search paths, etc.). >> >> Support for PEP 518 (specifying build system requirements) was added - see >> ``pyproject.toml`` in the root of the SciPy repository. >> >> In order to have consistent function names, the function >> ``scipy.linalg.solve_lyapunov`` is renamed to >> `scipy.linalg.solve_continuous_lyapunov`. The old name is kept for >> backwards-compatibility. >> >> >> Authors >> ======= >> >> * @arcady + >> * @xoviat + >> * Anton Akhmerov >> * Dominic Antonacci + >> * Alessandro Pietro Bardelli >> * Ved Basu + >> * Michael James Bedford + >> * Ray Bell + >> * Juan M. Bello-Rivas + >> * Sebastian Berg >> * Felix Berkenkamp >> * Jyotirmoy Bhattacharya + >> * Matthew Brett >> * Jonathan Bright >> * Bruno Jim?nez + >> * Evgeni Burovski >> * Patrick Callier >> * Mark Campanelli + >> * CJ Carey >> * Robert Cimrman >> * Adam Cox + >> * Michael Danilov + >> * David Haberth?r + >> * Andras Deak + >> * Philip DeBoer >> * Anne-Sylvie Deutsch >> * Cathy Douglass + >> * Dominic Else + >> * Guo Fei + >> * Roman Feldbauer + >> * Yu Feng >> * Jaime Fernandez del Rio >> * Orestis Floros + >> * David Freese + >> * Adam Geitgey + >> * James Gerity + >> * Dezmond Goff + >> * Christoph Gohlke >> * Ralf Gommers >> * Dirk Gorissen + >> * Matt Haberland + >> * David Hagen + >> * Charles Harris >> * Lam Yuen Hei + >> * Jean Helie + >> * Gaute Hope + >> * Guillaume Horel + >> * Franziska Horn + >> * Yevhenii Hyzyla + >> * Vladislav Iakovlev + >> * Marvin Kastner + >> * Mher Kazandjian >> * Thomas Keck >> * Adam Kurkiewicz + >> * Ronan Lamy + >> * J.L. Lanfranchi + >> * Eric Larson >> * Denis Laxalde >> * Gregory R. Lee >> * Felix Lenders + >> * Evan Limanto >> * Julian Lukwata + >> * Fran?ois Magimel >> * Syrtis Major + >> * Charles Masson + >> * Nikolay Mayorov >> * Tobias Megies >> * Markus Meister + >> * Roman Mirochnik + >> * Jordi Montes + >> * Nathan Musoke + >> * Andrew Nelson >> * M.J. 
>
> --
> Francesc Alted

From charlesr.harris at gmail.com  Wed Oct 25 13:09:14 2017
From: charlesr.harris at gmail.com (Charles R Harris)
Date: Wed, 25 Oct 2017 11:09:14 -0600
Subject: [Numpy-discussion] SciPy 1.0 released!
In-Reply-To:
References:
Message-ID:

On Wed, Oct 25, 2017 at 4:14 AM, Ralf Gommers wrote:

> Hi all,
>
> We are extremely pleased to announce the release of SciPy 1.0, 16 years
> after version 0.1 saw the light of day. It has been a long, productive
> journey to get here, and we anticipate many more exciting new features
> and releases in the future.
>
>
> Why 1.0 now?
> ------------
>
> A version number should reflect the maturity of a project - and SciPy has
> been a mature and stable library, heavily used in production settings,
> for a long time already. From that perspective, the 1.0 version number is
> long overdue.
>
> Some key project goals, both technical (e.g. Windows wheels and
> continuous integration) and organisational (a governance structure, code
> of conduct and a roadmap), have been achieved recently.
>
> Many of us are a bit perfectionist, and therefore are reluctant to call
> something "1.0" because it may imply that it's "finished" or "we are 100%
> happy with it". This is normal for many open source projects; however,
> that doesn't make it right. We acknowledge to ourselves that it's not
> perfect, and there are some dusty corners left (that will probably always
> be the case). Despite that, SciPy is extremely useful to its users, on
> average has high quality code and documentation, and gives the stability
> and backwards compatibility guarantees that a 1.0 label implies.
>
>
> Some history and perspectives
> -----------------------------
>
> - 2001: the first SciPy release
> - 2005: transition to NumPy
> - 2007: creation of scikits
> - 2008: scipy.spatial module and first Cython code added
> - 2010: moving to a 6-monthly release cycle
> - 2011: SciPy development moves to GitHub
> - 2011: Python 3 support
> - 2012: adding a sparse graph module and unified optimization interface
> - 2012: removal of scipy.maxentropy
> - 2013: continuous integration with TravisCI
> - 2015: adding Cython interface for BLAS/LAPACK and a benchmark suite
> - 2017: adding a unified C API with scipy.LowLevelCallable; removal of
>   scipy.weave
> - 2017: SciPy 1.0 release
>
>
> **Pauli Virtanen** is SciPy's Benevolent Dictator For Life (BDFL). He
> says:
>
> *Truthfully speaking, we could have released a SciPy 1.0 a long time ago,
> so I'm happy we do it now at long last. The project has a long history,
> and during the years it has matured also as a software project. I believe
> it has well proved its merit to warrant a version number starting with
> unity.*
>
> *Since its conception 15+ years ago, SciPy has largely been written by
> and for scientists, to provide a box of basic tools that they need. Over
> time, the set of people active in its development has undergone some
> rotation, and we have evolved towards a somewhat more systematic approach
> to development. Regardless, this underlying drive has stayed the same,
> and I think it will also continue propelling the project forward in the
> future. This is all good, since not long after 1.0 comes 1.1.*
>
> **Travis Oliphant** is one of SciPy's creators. He says:
>
> *I'm honored to write a note of congratulations to the SciPy developers
> and the entire SciPy community for the release of SciPy 1.0. This release
> represents a dream of many that has been patiently pursued by a stalwart
> group of pioneers for nearly 2 decades. Efforts have been broad and
> consistent over that time from many hundreds of people. From initial
> discussions to efforts coding and packaging to documentation efforts to
> extensive conference and community building, the SciPy effort has been a
> global phenomenon that it has been a privilege to participate in.*
>
> *The idea of SciPy was already in multiple people's minds in 1997 when I
> first joined the Python community as a young graduate student who had
> just fallen in love with the expressibility and extensibility of Python.
> The internet was just starting to bring together like-minded
> mathematicians and scientists in nascent electronically-connected
> communities. In 1998, there was a concerted discussion on the matrix-SIG
> Python mailing list with people like Paul Barrett, Joe Harrington, Perry
> Greenfield, Paul Dubois, Konrad Hinsen, David Ascher, and others. This
> discussion encouraged me in 1998 and 1999 to procrastinate on my PhD and
> spend a lot of time writing extension modules to Python that mostly
> wrapped battle-tested Fortran and C code, making it available to the
> Python user. This work attracted the help of others like Robert Kern,
> Pearu Peterson and Eric Jones, who joined their efforts with mine in 2000
> so that by 2001, the first SciPy release was ready.
> This was long before GitHub simplified collaboration and input from
> others; the "patch" command and email were how you helped a project
> improve.*
>
> *Since that time, hundreds of people have spent an enormous amount of
> time improving the SciPy library, and the community surrounding this
> library has dramatically grown. I stopped being able to participate
> actively in developing the SciPy library around 2010. Fortunately, at
> that time, Pauli Virtanen and Ralf Gommers picked up the pace of
> development, supported by dozens of other key contributors such as David
> Cournapeau, Evgeni Burovski, Josef Perktold, and Warren Weckesser. While
> I have only been able to admire the development of SciPy from a distance
> for the past 7 years, I have never lost my love of the project and the
> concept of community-driven development. I remain driven even now by a
> desire to help sustain the development of not only the SciPy library but
> many other affiliated and related open-source projects. I am extremely
> pleased that SciPy is in the hands of a world-wide community of talented
> developers who will ensure that SciPy remains an example of how
> grass-roots, community-driven development can succeed.*
>
> **Fernando Perez** offers a wider community perspective:
>
> *The existence of a nascent SciPy library, and the incredible --if tiny
> by today's standards-- community surrounding it, is what drew me into the
> scientific Python world while still a physics graduate student in 2001.
> Today, I am awed when I see these tools power everything from high school
> education to the research that led to the 2017 Nobel Prize in physics.*
>
> *Don't be fooled by the 1.0 number: this project is a mature cornerstone
> of the modern scientific computing ecosystem. I am grateful for the many
> who have made it possible, and hope to be able to contribute again to it
> in the future. My sincere congratulations to the whole team!*
>
>
> Highlights of this release
> --------------------------
>
> Some of the highlights of this release are:
>
> - Major build improvements. Windows wheels are available on PyPI for the
>   first time, and continuous integration has been set up on Windows and
>   OS X in addition to Linux.
> - A set of new ODE solvers and a unified interface to them
>   (`scipy.integrate.solve_ivp`).
> - Two new trust region optimizers and a new linear programming method,
>   with improved performance compared to what `scipy.optimize` offered
>   previously.
> - Many new BLAS and LAPACK functions were wrapped. The BLAS wrappers are
>   now complete.
>
>
> Upgrading and compatibility
> ---------------------------
>
> There have been a number of deprecations and API changes in this release,
> which are documented below. Before upgrading, we recommend that users
> check that their own code does not use deprecated SciPy functionality (to
> do so, run your code with ``python -Wd`` and check for
> ``DeprecationWarning`` s).
>
> This release requires Python 2.7 or >=3.4 and NumPy 1.8.2 or greater.
>
> This is also the last release to support LAPACK 3.1.x - 3.3.x. Moving the
> lowest supported LAPACK version to >3.2.x was long blocked by Apple
> Accelerate providing the LAPACK 3.2.1 API. We have decided that it's time
> to either drop Accelerate or, if there is enough interest, provide shims
> for functions added in more recent LAPACK versions so it can still be
> used.
>
>
> New features
> ============
>
> `scipy.cluster` improvements
> ----------------------------
>
> `scipy.cluster.hierarchy.optimal_leaf_ordering`, a function to reorder a
> linkage matrix to minimize distances between adjacent leaves, was added.
>
>
> `scipy.fftpack` improvements
> ----------------------------
>
> N-dimensional versions of the discrete sine and cosine transforms and
> their inverses were added as ``dctn``, ``idctn``, ``dstn`` and ``idstn``.
>
>
> `scipy.integrate` improvements
> ------------------------------
>
> A set of new ODE solvers has been added to `scipy.integrate`. The
> convenience function `scipy.integrate.solve_ivp` allows uniform access to
> all solvers. The individual solvers (``RK23``, ``RK45``, ``Radau``,
> ``BDF`` and ``LSODA``) can also be used directly.
>
>
> `scipy.linalg` improvements
> ---------------------------
>
> The BLAS wrappers in `scipy.linalg.blas` have been completed. Added
> functions are ``*gbmv``, ``*hbmv``, ``*hpmv``, ``*hpr``, ``*hpr2``,
> ``*spmv``, ``*spr``, ``*tbmv``, ``*tbsv``, ``*tpmv``, ``*tpsv``,
> ``*trsm``, ``*trsv``, ``*sbmv`` and ``*spr2``.
>
> Wrappers for the LAPACK functions ``*gels``, ``*stev``, ``*sytrd``,
> ``*hetrd``, ``*sytf2``, ``*hetrf``, ``*sytrf``, ``*sycon``, ``*hecon``,
> ``*gglse``, ``*stebz``, ``*stemr``, ``*sterf``, and ``*stein`` have been
> added.
>
> The function `scipy.linalg.subspace_angles` has been added to compute the
> subspace angles between two matrices.
>
> The function `scipy.linalg.clarkson_woodruff_transform` has been added.
> It finds a low-rank matrix approximation via the Clarkson-Woodruff
> transform.
>
> The functions `scipy.linalg.eigh_tridiagonal` and
> `scipy.linalg.eigvalsh_tridiagonal`, which find the eigenvalues and
> eigenvectors of tridiagonal Hermitian/symmetric matrices, were added.
>
>
> `scipy.ndimage` improvements
> ----------------------------
>
> Support for homogeneous coordinate transforms has been added to
> `scipy.ndimage.affine_transform`.
>
> The ``ndimage`` C code underwent a significant refactoring, and is now a
> lot easier to understand and maintain.
>
>
> `scipy.optimize` improvements
> -----------------------------
>
> The methods ``trust-region-exact`` and ``trust-krylov`` have been added
> to the function `scipy.optimize.minimize`. These new trust-region methods
> solve the subproblem with higher accuracy at the cost of more Hessian
> factorizations (compared to dogleg) or more matrix-vector products
> (compared to ncg), but usually require fewer nonlinear iterations and are
> able to deal with indefinite Hessians. They seem very competitive against
> the other Newton methods implemented in SciPy.
>
> `scipy.optimize.linprog` gained an interior point method. Its performance
> is superior (both in accuracy and speed) to the older simplex method.
>
>
> `scipy.signal` improvements
> ---------------------------
>
> An argument ``fs`` (sampling frequency) was added to the following
> functions: ``firwin``, ``firwin2``, ``firls``, and ``remez``. This makes
> these functions consistent with many other functions in `scipy.signal` in
> which the sampling frequency can be specified.
>
> `scipy.signal.freqz` has been sped up significantly for FIR filters.
>
>
> `scipy.sparse` improvements
> ---------------------------
>
> Iterating over and slicing of CSC and CSR matrices is now faster by up to
> ~35%.
>
> The ``tocsr`` method of COO matrices is now several times faster.
>
> The ``diagonal`` method of sparse matrices now takes a parameter
> indicating which diagonal to return.
>
>
> `scipy.sparse.linalg` improvements
> ----------------------------------
>
> A new iterative solver for large-scale nonsymmetric sparse linear
> systems, `scipy.sparse.linalg.gcrotmk`, was added. It implements
> ``GCROT(m,k)``, a flexible variant of ``GCROT``.
>
> `scipy.sparse.linalg.lsmr` now accepts an initial guess, yielding
> potentially faster convergence.
>
> SuperLU was updated to version 5.2.1.
>
>
> `scipy.spatial` improvements
> ----------------------------
>
> Many distance metrics in `scipy.spatial.distance` gained support for
> weights.
>
> The signatures of `scipy.spatial.distance.pdist` and
> `scipy.spatial.distance.cdist` were changed to ``*args, **kwargs`` in
> order to support a wider range of metrics (e.g. string-based metrics that
> need extra keywords). Also, an optional ``out`` parameter was added to
> ``pdist`` and ``cdist``, allowing the user to specify where the resulting
> distance matrix is to be stored.
>
>
> `scipy.stats` improvements
> --------------------------
>
> The methods ``cdf`` and ``logcdf`` were added to
> `scipy.stats.multivariate_normal`, providing the cumulative distribution
> function of the multivariate normal distribution.
>
> New statistical distance functions were added, namely
> `scipy.stats.wasserstein_distance` for the first Wasserstein distance and
> `scipy.stats.energy_distance` for the energy distance.
>
>
> Deprecated features
> ===================
>
> The following functions in `scipy.misc` are deprecated: ``bytescale``,
> ``fromimage``, ``imfilter``, ``imread``, ``imresize``, ``imrotate``,
> ``imsave``, ``imshow`` and ``toimage``. Most of those functions have
> unexpected behavior (like rescaling and type casting image data without
> the user asking for that). Other functions simply have better
> alternatives.
>
> ``scipy.interpolate.interpolate_wrapper`` and all functions in that
> submodule are deprecated. This was a never-finished set of wrapper
> functions which is no longer relevant.
>
> The ``fillvalue`` of `scipy.signal.convolve2d` will in the future be cast
> directly to the dtypes of the input arrays, and it will be checked that
> it is a scalar or an array with a single element.
>
> ``scipy.spatial.distance.matching`` is deprecated. It is an alias of
> `scipy.spatial.distance.hamming`, which should be used instead.
>
> The implementation of `scipy.spatial.distance.wminkowski` was based on a
> wrong interpretation of the metric definition. In SciPy 1.0 it has only
> been deprecated in the documentation, to keep backwards compatibility;
> users are recommended to switch to the fixed version of
> `scipy.spatial.distance.minkowski`, which implements the correct
> behaviour.
>
> Positional arguments of `scipy.spatial.distance.pdist` and
> `scipy.spatial.distance.cdist` should be replaced with their keyword
> versions.
>
>
> Backwards incompatible changes
> ==============================
>
> The following deprecated functions have been removed from `scipy.stats`:
> ``betai``, ``chisqprob``, ``f_value``, ``histogram``, ``histogram2``,
> ``pdf_fromgamma``, ``signaltonoise``, ``square_of_sums``, ``ss`` and
> ``threshold``.
>
> The following deprecated functions have been removed from
> `scipy.stats.mstats`: ``betai``, ``f_value_wilks_lambda``,
> ``signaltonoise`` and ``threshold``.
>
> The deprecated ``a`` and ``reta`` keywords have been removed from
> `scipy.stats.shapiro`.
>
> The deprecated functions ``sparse.csgraph.cs_graph_components`` and
> ``sparse.linalg.symeig`` have been removed from `scipy.sparse`.
>
> The following deprecated keywords have been removed in
> `scipy.sparse.linalg`: ``drop_tol`` from ``splu``, and ``xtype`` from
> ``bicg``, ``bicgstab``, ``cg``, ``cgs``, ``gmres``, ``qmr`` and
> ``minres``.
>
> The deprecated functions ``expm2`` and ``expm3`` have been removed from
> `scipy.linalg`, the deprecated keyword ``q`` was removed from
> `scipy.linalg.expm`, and the deprecated submodule ``linalg.calc_lwork``
> was removed.
>
> The deprecated functions ``C2K``, ``K2C``, ``F2C``, ``C2F``, ``F2K`` and
> ``K2F`` have been removed from `scipy.constants`.
>
> The deprecated ``ppform`` class was removed from `scipy.interpolate`.
>
> The deprecated keyword ``iprint`` was removed from
> `scipy.optimize.fmin_cobyla`.
>
> The default value for the ``zero_phase`` keyword of
> `scipy.signal.decimate` has been changed to True.
>
> The ``kmeans`` and ``kmeans2`` functions in `scipy.cluster.vq` changed
> the method used for random initialization, so using a fixed random seed
> will not necessarily produce the same results as in previous versions.
>
> `scipy.special.gammaln` no longer accepts complex arguments.
>
> The deprecated functions ``sph_jn``, ``sph_yn``, ``sph_jnyn``,
> ``sph_in``, ``sph_kn``, and ``sph_inkn`` have been removed. Users should
> instead use the functions ``spherical_jn``, ``spherical_yn``,
> ``spherical_in``, and ``spherical_kn``. Be aware that the new functions
> have different signatures.
>
> The cross-class properties of `scipy.signal.lti` systems have been
> removed. The following properties/setters have been removed:
>
> Name - (accessing/setting has been removed) - (setting has been removed)
>
> * StateSpace - (``num``, ``den``, ``gain``) - (``zeros``, ``poles``)
> * TransferFunction - (``A``, ``B``, ``C``, ``D``, ``gain``) -
>   (``zeros``, ``poles``)
> * ZerosPolesGain - (``A``, ``B``, ``C``, ``D``, ``num``, ``den``) - ()
>
> ``signal.freqz(b, a)`` with ``b`` or ``a`` >1-D raises a ``ValueError``.
> This was a corner case for which it was unclear that the behavior was
> well-defined.
>
> The method ``var`` of `scipy.stats.dirichlet` now returns a scalar rather
> than an ndarray when the length of alpha is 1.
>
>
> Other changes
> =============
>
> SciPy now has a formal governance structure. It consists of a BDFL (Pauli
> Virtanen) and a Steering Committee. See `the governance document
> <https://github.com/scipy/scipy/blob/master/doc/source/dev/governance/governance.rst>`_
> for details.
>
> It is now possible to build SciPy on Windows with MSVC + gfortran!
> Continuous integration has been set up for this build configuration on
> Appveyor, building against OpenBLAS.
>
> Continuous integration for OS X has been set up on TravisCI.
>
> The SciPy test suite has been migrated from ``nose`` to ``pytest``.
>
> ``scipy/_distributor_init.py`` was added to allow redistributors of SciPy
> to add custom code that needs to run when importing SciPy (e.g. checks
> for hardware, DLL search paths, etc.).
>
> Support for PEP 518 (specifying build system requirements) was added -
> see ``pyproject.toml`` in the root of the SciPy repository.
>
> In order to have consistent function names, the function
> ``scipy.linalg.solve_lyapunov`` has been renamed to
> `scipy.linalg.solve_continuous_lyapunov`. The old name is kept for
> backwards compatibility.
>
>
> Authors
> =======
>
> * @arcady +
> * @xoviat +
> * Anton Akhmerov
> * Dominic Antonacci +
> * Alessandro Pietro Bardelli
> * Ved Basu +
> * Michael James Bedford +
> * Ray Bell +
> * Juan M. Bello-Rivas +
> * Sebastian Berg
> * Felix Berkenkamp
> * Jyotirmoy Bhattacharya +
> * Matthew Brett
> * Jonathan Bright
> * Bruno Jiménez +
> * Evgeni Burovski
> * Patrick Callier
> * Mark Campanelli +
> * CJ Carey
> * Robert Cimrman
> * Adam Cox +
> * Michael Danilov +
> * David Haberthür +
> * Andras Deak +
> * Philip DeBoer
> * Anne-Sylvie Deutsch
> * Cathy Douglass +
> * Dominic Else +
> * Guo Fei +
> * Roman Feldbauer +
> * Yu Feng
> * Jaime Fernandez del Rio
> * Orestis Floros +
> * David Freese +
> * Adam Geitgey +
> * James Gerity +
> * Dezmond Goff +
> * Christoph Gohlke
> * Ralf Gommers
> * Dirk Gorissen +
> * Matt Haberland +
> * David Hagen +
> * Charles Harris
> * Lam Yuen Hei +
> * Jean Helie +
> * Gaute Hope +
> * Guillaume Horel +
> * Franziska Horn +
> * Yevhenii Hyzyla +
> * Vladislav Iakovlev +
> * Marvin Kastner +
> * Mher Kazandjian
> * Thomas Keck
> * Adam Kurkiewicz +
> * Ronan Lamy +
> * J.L. Lanfranchi +
> * Eric Larson
> * Denis Laxalde
> * Gregory R. Lee
> * Felix Lenders +
> * Evan Limanto
> * Julian Lukwata +
> * François Magimel
> * Syrtis Major +
> * Charles Masson +
> * Nikolay Mayorov
> * Tobias Megies
> * Markus Meister +
> * Roman Mirochnik +
> * Jordi Montes +
> * Nathan Musoke +
> * Andrew Nelson
> * M.J. Nichol
> * Juan Nunez-Iglesias
> * Arno Onken +
> * Nick Papior +
> * Dima Pasechnik +
> * Ashwin Pathak +
> * Oleksandr Pavlyk +
> * Stefan Peterson
> * Ilhan Polat
> * Andrey Portnoy +
> * Ravi Kumar Prasad +
> * Aman Pratik
> * Eric Quintero
> * Vedant Rathore +
> * Tyler Reddy
> * Joscha Reimer
> * Philipp Rentzsch +
> * Antonio Horta Ribeiro
> * Ned Richards +
> * Kevin Rose +
> * Benoit Rostykus +
> * Matt Ruffalo +
> * Eli Sadoff +
> * Pim Schellart
> * Nico Schlömer +
> * Klaus Sembritzki +
> * Nikolay Shebanov +
> * Jonathan Tammo Siebert
> * Scott Sievert
> * Max Silbiger +
> * Mandeep Singh +
> * Michael Stewart +
> * Jonathan Sutton +
> * Deep Tavker +
> * Martin Thoma
> * James Tocknell +
> * Aleksandar Trifunovic +
> * Paul van Mulbregt +
> * Jacob Vanderplas
> * Aditya Vijaykumar
> * Pauli Virtanen
> * James Webber
> * Warren Weckesser
> * Eric Wieser +
> * Josh Wilson
> * Zhiqing Xiao +
> * Evgeny Zhurko
> * Nikolay Zinov +
> * Zé Vinícius +
>
> A total of 121 people contributed to this release.
> People with a "+" by their names contributed a patch for the first time.
> This list of names is automatically generated, and may not be fully
> complete.
>
>
> Cheers,
> Ralf

Congratulations to all. SciPy provides wonderful tools that are free for
all to use. That those tools are available, and easily installed, is a
great boon to many who would otherwise be at a disadvantage for lack of
money or access; that, in itself, will have a major impact.

Chuck

From dillon.niederhut at gmail.com  Wed Oct 25 13:32:17 2017
From: dillon.niederhut at gmail.com (Dillon Niederhut)
Date: Wed, 25 Oct 2017 17:32:17 +0000
Subject: [Numpy-discussion] SciPy 1.0 released!
In-Reply-To:
References:
Message-ID:

Woohoo!

On Wed, Oct 25, 2017, 12:10 Charles R Harris wrote:

> [Charles R Harris's reply, including the full quoted SciPy 1.0 release
> announcement, snipped -- see the messages above.]
From daniele at grinta.net  Thu Oct 26 14:11:37 2017
From: daniele at grinta.net (Daniele Nicolodi)
Date: Thu, 26 Oct 2017 12:11:37 -0600
Subject: [Numpy-discussion] Is there a better way to write a stacked matrix
 multiplication
Message-ID: <199cdc60-8ce3-b98f-3645-b9b002cf3bd4@grinta.net>

Hello,

is there a better way to write the dot product between a stack of
matrices? In my case I need to compute

y = A.T @ inv(B) @ A

with A a 3x1 matrix and B a 3x3 matrix, N times, with N in the
few-hundred-thousand range. I thus "vectorize" the computation using
stacks of matrices, so that A is an Nx3x1 matrix and B is an Nx3x3 matrix,
and I can write:

y = np.matmul(np.transpose(A, (0, 2, 1)), np.matmul(inv(B), A))

which I guess could also be written (in Python 3.5 and later):

y = np.transpose(A, (0, 2, 1)) @ inv(B) @ A

and I obtain an Nx1x1 y matrix which I can collapse to the vector I need
with np.squeeze(). However, the need for the second argument of
np.transpose() seems odd to me, because all the other functions handle the
matrix stacking transparently.

Am I missing something? Is there a more natural matrix arrangement that I
could use to obtain the same results more naturally?

Cheers,
Daniele
>
> y = A.T @ inv(B) @ A
>
> with A a 3x1 matrix and B a 3x3 matrix, N times, with N in the few
> hundred thousands range. I thus "vectorize" the thing using stacks of
> matrices, so that A is an Nx3x1 matrix and B is Nx3x3, and I can write:
>
> y = np.matmul(np.transpose(A, (0, 2, 1)), np.matmul(inv(B), A))
>
> which I guess could also be written (in Python 3.6 and later):
>
> y = np.transpose(A, (0, 2, 1)) @ inv(B) @ A
>
> and I obtain an Nx1x1 y matrix which I can collapse to the vector I need
> with np.squeeze().
>
> However, the need for the second argument of np.transpose() seems odd to
> me, because all other functions handle the matrix stacking transparently.
>
> Am I missing something? Is there a more natural matrix arrangement that
> I could use to obtain the same results more naturally?

There has been discussion of adding an operator for transposing the
matrices in a stack, but no resolution at this point. However, if you have
a stack of vectors (not matrices) you can turn them into transposed
matrices like `A[..., None, :]`, so `A[..., None, :] @ inv(B) @ A[...,
None]` and then squeeze.

Another option is to use einsum.

Chuck
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From njs at pobox.com  Thu Oct 26 15:40:10 2017
From: njs at pobox.com (Nathaniel Smith)
Date: Thu, 26 Oct 2017 12:40:10 -0700
Subject: [Numpy-discussion] numpy grant update
In-Reply-To: 
References: 
Message-ID: 

On Wed, Oct 18, 2017 at 10:24 PM, Nathaniel Smith wrote:
> I'll also be giving a lunch talk at BIDS tomorrow to let folks locally
> know about what's going on, which I think will be recorded -- I'll send
> around a link after in case others are interested.

Here's that link: https://www.youtube.com/watch?v=fowHwlpGb34

-n

-- 
Nathaniel J. Smith -- https://vorpus.org

From shoyer at gmail.com  Thu Oct 26 16:07:50 2017
From: shoyer at gmail.com (Stephan Hoyer)
Date: Thu, 26 Oct 2017 20:07:50 +0000
Subject: [Numpy-discussion] Is there a better way to write a stacked matrix
 multiplication
In-Reply-To: 
References: <199cdc60-8ce3-b98f-3645-b9b002cf3bd4@grinta.net>
Message-ID: 

I would certainly use einsum. It is almost perfect for these use cases,
e.g.,

np.einsum('ki,kij,kj->k', A, inv(B), A)

On Thu, Oct 26, 2017 at 12:38 PM Charles R Harris wrote:

> On Thu, Oct 26, 2017 at 12:11 PM, Daniele Nicolodi
> wrote:
>
>> Hello,
>>
>> is there a better way to write the dot product between a stack of
>> matrices? In my case I need to compute
>>
>> y = A.T @ inv(B) @ A
>>
>> with A a 3x1 matrix and B a 3x3 matrix, N times, with N in the few
>> hundred thousands range. I thus "vectorize" the thing using stacks of
>> matrices, so that A is an Nx3x1 matrix and B is Nx3x3, and I can write:
>>
>> y = np.matmul(np.transpose(A, (0, 2, 1)), np.matmul(inv(B), A))
>>
>> which I guess could also be written (in Python 3.6 and later):
>>
>> y = np.transpose(A, (0, 2, 1)) @ inv(B) @ A
>>
>> and I obtain an Nx1x1 y matrix which I can collapse to the vector I need
>> with np.squeeze().
>>
>> However, the need for the second argument of np.transpose() seems odd to
>> me, because all other functions handle the matrix stacking transparently.
>>
>> Am I missing something? Is there a more natural matrix arrangement that
>> I could use to obtain the same results more naturally?
>
>
> There has been discussion of adding an operator for transposing the
> matrices in a stack, but no resolution at this point.
> However, if you have
> a stack of vectors (not matrices) you can turn them into transposed
> matrices like `A[..., None, :]`, so `A[..., None, :] @ inv(B) @ A[...,
> None]` and then squeeze.
>
> Another option is to use einsum.
>
> Chuck
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at python.org
> https://mail.python.org/mailman/listinfo/numpy-discussion
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From m.h.vankerkwijk at gmail.com  Thu Oct 26 16:14:58 2017
From: m.h.vankerkwijk at gmail.com (Marten van Kerkwijk)
Date: Thu, 26 Oct 2017 16:14:58 -0400
Subject: [Numpy-discussion] numpy grant update
In-Reply-To: 
References: 
Message-ID: 

Hi Nathaniel,

Thanks for the link. The plans sound great! You'll not be surprised
to hear I'm particularly interested in the units aspect (and, no, I
don't mind at all if we can stop subclassing ndarray...). Is the idea
that there will be a general way to allow a dtype to define how to
convert an array to one with another dtype? (Just as one now
implicitly is able to convert between, say, int and float.) And, if
so, is the idea that one of those conversion possibilities might
involve checking units? Or were you thinking of implementing units
more directly? The former would seem most sensible, if only so you can
initially focus on other things than deciding how to support, say, esu
vs emu units, or whether or not to treat radians as equal to
dimensionless (which they formally are, but it is not always handy to
do so).

Anyway, do keep us posted! All the best,

Marten

On Thu, Oct 26, 2017 at 3:40 PM, Nathaniel Smith wrote:
> On Wed, Oct 18, 2017 at 10:24 PM, Nathaniel Smith wrote:
>> I'll also be giving a lunch talk at BIDS tomorrow to let folks locally
>> know about what's going on, which I think will be recorded -- I'll send
>> around a link after in case others are interested.
>
> Here's that link: https://www.youtube.com/watch?v=fowHwlpGb34
>
> -n
>
> --
> Nathaniel J. Smith -- https://vorpus.org
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at python.org
> https://mail.python.org/mailman/listinfo/numpy-discussion

From nathan12343 at gmail.com  Thu Oct 26 17:11:48 2017
From: nathan12343 at gmail.com (Nathan Goldbaum)
Date: Thu, 26 Oct 2017 18:11:48 -0300
Subject: [Numpy-discussion] numpy grant update
In-Reply-To: 
References: 
Message-ID: 

My understanding of this is that the dtype will only hold the unit
metadata. So that means units would propagate through calculations
automatically, but the dtype wouldn't be able to manipulate the array
data (in an in-place unit conversion for example).

In this world, astropy quantities and yt's YTArray would become containers
around an ndarray that would make use of the dtype metadata but also
implement all of the unit semantics that they already implement. Since
they would become container classes and would no longer be ndarray
subclasses, that avoids most of the pitfalls one encounters these days.

Please correct me if I'm wrong, Nathaniel.

-Nathan

On Thu, Oct 26, 2017 at 5:14 PM, Marten van Kerkwijk <
m.h.vankerkwijk at gmail.com> wrote:

> Hi Nathaniel,
>
> Thanks for the link. The plans sound great! You'll not be surprised
> to hear I'm particularly interested in the units aspect (and, no, I
> don't mind at all if we can stop subclassing ndarray...). Is the idea
> that there will be a general way to allow a dtype to define how to
> convert an array to one with another dtype?
> (Just as one now
> implicitly is able to convert between, say, int and float.) And, if
> so, is the idea that one of those conversion possibilities might
> involve checking units? Or were you thinking of implementing units
> more directly? The former would seem most sensible, if only so you can
> initially focus on other things than deciding how to support, say, esu
> vs emu units, or whether or not to treat radians as equal to
> dimensionless (which they formally are, but it is not always handy to
> do so).
>
> Anyway, do keep us posted! All the best,
>
> Marten
>
> On Thu, Oct 26, 2017 at 3:40 PM, Nathaniel Smith wrote:
> > On Wed, Oct 18, 2017 at 10:24 PM, Nathaniel Smith wrote:
> >> I'll also be giving a lunch talk at BIDS tomorrow to let folks locally
> >> know about what's going on, which I think will be recorded -- I'll send
> >> around a link after in case others are interested.
> >
> > Here's that link: https://www.youtube.com/watch?v=fowHwlpGb34
> >
> > -n
> >
> > --
> > Nathaniel J. Smith -- https://vorpus.org
> > _______________________________________________
> > NumPy-Discussion mailing list
> > NumPy-Discussion at python.org
> > https://mail.python.org/mailman/listinfo/numpy-discussion
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at python.org
> https://mail.python.org/mailman/listinfo/numpy-discussion
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From p.e.creasey.00 at googlemail.com  Thu Oct 26 17:01:53 2017
From: p.e.creasey.00 at googlemail.com (Peter Creasey)
Date: Thu, 26 Oct 2017 14:01:53 -0700
Subject: [Numpy-discussion] Is there a better way to write a stacked matrix
Message-ID: 

> > On Thu, Oct 26, 2017 at 12:11 PM, Daniele Nicolodi
> > wrote:
> >
> >> is there a better way to write the dot product between a stack of
> >> matrices? In my case I need to compute
> >>
> >> y = A.T @ inv(B) @ A
> >>
> >> with A a 3x1 matrix and B a 3x3 matrix, N times, with N in the few
> >> hundred thousands range. I thus "vectorize" the thing using stacks of
> >> matrices, so that A is an Nx3x1 matrix and B is Nx3x3, and I can write:
> >>
> >> y = np.matmul(np.transpose(A, (0, 2, 1)), np.matmul(inv(B), A))
> >>

If you only ever multiply your matrix inverse by a single vector then you
may also wish to consider np.linalg.solve(B, A) which usually has a better
prefactor (although for 3x3 it's pretty marginal, your hardware may vary).

Peter
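
For reference, the suggestions in this thread can be compared side by
side. Below is a minimal sketch (the data here are illustrative
assumptions, with A kept as a stack of vectors of shape (N, 3), as in the
einsum spelling above):

```python
import numpy as np
from numpy.linalg import inv, solve

N = 1000
B = np.random.rand(N, 3, 3) + 3 * np.eye(3)  # diagonally dominant, so invertible
A = np.random.rand(N, 3)                     # stack of vectors, one per matrix

# 1. einsum, summing over both matrix indices at once
y1 = np.einsum('ki,kij,kj->k', A, inv(B), A)

# 2. matmul with the A[..., None, :] trick, then collapse the 1x1 results
y2 = (A[:, None, :] @ inv(B) @ A[:, :, None])[:, 0, 0]

# 3. solve instead of inv, which avoids forming the inverses explicitly
y3 = np.einsum('ki,ki->k', A, solve(B, A[:, :, None])[:, :, 0])

print(np.allclose(y1, y2) and np.allclose(y1, y3))  # True
```

Note that both einsum spellings return shape (N,) directly, so no squeeze
is needed.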
From m.h.vankerkwijk at gmail.com  Thu Oct 26 17:27:33 2017
From: m.h.vankerkwijk at gmail.com (Marten van Kerkwijk)
Date: Thu, 26 Oct 2017 17:27:33 -0400
Subject: [Numpy-discussion] numpy grant update
In-Reply-To: 
References: 
Message-ID: 

That sounds somewhat puzzling as units cannot really propagate without
them somehow telling how they would change! (e.g., the outcome of
sin(a) is possible only for angular units and then depends on that
unit). But in any case, the mailing list is probably not the best place
to discuss this - rather, I look forward to -- and will most happily
give feedback on -- a NEP or other more detailed explanation!

-- Marten

From njs at pobox.com  Thu Oct 26 20:56:51 2017
From: njs at pobox.com (Nathaniel Smith)
Date: Thu, 26 Oct 2017 17:56:51 -0700
Subject: [Numpy-discussion] numpy grant update
In-Reply-To: 
References: 
Message-ID: 

On Thu, Oct 26, 2017 at 1:14 PM, Marten van Kerkwijk wrote:
> Hi Nathaniel,
>
> Thanks for the link. The plans sound great! You'll not be surprised
> to hear I'm particularly interested in the units aspect (and, no, I
> don't mind at all if we can stop subclassing ndarray...). Is the idea
> that there will be a general way to allow a dtype to define how to
> convert an array to one with another dtype? (Just as one now
> implicitly is able to convert between, say, int and float.) And, if
> so, is the idea that one of those conversion possibilities might
> involve checking units? Or were you thinking of implementing units
> more directly? The former would seem most sensible, if only so you can
> initially focus on other things than deciding how to support, say, esu
> vs emu units, or whether or not to treat radians as equal to
> dimensionless (which they formally are, but it is not always handy to
> do so).

Well, to some extent the answers here are going to be "you tell me" :-).
I'm not an expert in unit handling, and these plans are pretty high-level
right now -- there will be lots more discussions to work out details once
we've hired people and they're ramping up, and as we work out the larger
context around how to improve the dtype system.

But, generally, yeah, one of the things that a custom dtype will need to
be able to do is to hook into the casting and ufunc dispatch systems. That
means, when you define a dtype, you get to answer questions like "can you
cast yourself into float32 without loss of precision?", or "can you cast
yourself into int64, truncating values if you have to?". (Or even, "can
you cast yourself to <some other custom dtype>?", which would presumably
trigger unit conversion.) And you'd also get to override how things like
np.add and np.multiply work for your dtype -- it's already the case that
ufuncs have multiple implementations for different dtypes and there's
machinery to pick the best one; this would just be extending that to
these new dtypes as well.

One possible approach that I think might be particularly nice would be to
implement units as a "wrapper dtype". The idea would be that if we have a
standard interface that dtypes implement, then not only can you implement
those methods yourself to make a new dtype, but you can also call those
methods on an existing dtype. So you could do something like:

class WithUnits(np.dtype):
    def __init__(self, inner_dtype, unit):
        self.inner_dtype = np.dtype(inner_dtype)
        self.unit = unit

    # Simple operations like bulk data copying are delegated to the inner
    # dtype (invoked by arr.copy(), making temporary buffers for
    # calculations, etc.)
    def copy_data(self, source, dest):
        return self.inner_dtype.copy_data(source, dest)

    # Other operations like casting can do some unit-specific stuff and
    # then delegate
    def cast_to(self, other_dtype, source, dest):
        if isinstance(other_dtype, WithUnits):
            if other_dtype.unit == self.unit:
                # Something like casting WithUnits(float64, meters) ->
                # WithUnits(float32, meters), so no unit trickiness is
                # needed; delegate to the inner dtype to handle the
                # storage conversion (e.g. float64 -> float32)
                self.inner_dtype.cast_to(other_dtype.inner_dtype,
                                         source, dest)
        # ... other cases to handle unit conversion, etc. ...

And then as a user you'd use it like:

np.array([1, 2, 3], dtype=WithUnits(float, meters))

or whatever. (Or some convenience function that ultimately does this.)

This is obviously a hand-wavey sketch, I'm sure the actual details will
look very different. But hopefully it gives some sense of the kind of
possibilities here?

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
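
For comparison with existing behavior, the casting questions quoted above
can already be asked of the built-in dtypes through np.can_cast, which is
roughly the machinery a custom dtype would hook into; a quick illustrative
check:

```python
import numpy as np

# "can you cast yourself into float32 without loss of precision?"
print(np.can_cast(np.float64, np.float32))                       # False
# "can you cast yourself into float32, truncating values if you have to?"
print(np.can_cast(np.float64, np.float32, casting='same_kind'))  # True
# crossing kinds (float -> int) needs an explicit 'unsafe' cast
print(np.can_cast(np.float64, np.int64, casting='unsafe'))       # True
```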
From njs at pobox.com  Thu Oct 26 21:20:47 2017
From: njs at pobox.com (Nathaniel Smith)
Date: Thu, 26 Oct 2017 18:20:47 -0700
Subject: [Numpy-discussion] numpy grant update
In-Reply-To: 
References: 
Message-ID: 

On Thu, Oct 26, 2017 at 2:11 PM, Nathan Goldbaum wrote:
> My understanding of this is that the dtype will only hold the unit
> metadata. So that means units would propagate through calculations
> automatically, but the dtype wouldn't be able to manipulate the array
> data (in an in-place unit conversion for example).

I think that'd be fine actually... dtypes have methods[1] that are invoked
to do any operation that involves touching the actual array data. For
example, when you copy array data from one place to another (because
someone called arr.copy(), or did x[...] = y, or because the ufunc
internals need to copy part of the array into a temporary bounce buffer,
etc.), you have to let the dtype do that, because only the dtype knows how
to safely copy entries of this dtype. (For many dtypes it's just a simple
(strided) memmove, but then for the object dtype you have to take care of
refcounting...) Similarly, if your unit dtype implemented casting, then
array(..., dtype=WithUnits(float, meters)).astype(WithUnits(float, feet))
would Just Work.

It looks like we don't currently expose a user-level API for doing
in-place dtype conversions, but there's no reason we can't add one; all
the underlying casting machinery already exists and works on arbitrary
memory buffers. (And in the meantime there's a cute trick here [2] you
could use to implement it yourself.) And if we do add one, then you could
use it equally well to do in-place conversion from float64->int64 as for
float64-in-meters to float64-in-feet.

[1] Well, technically right now they're not methods, but instead a bunch
of instance attributes holding C level function pointers that act like
methods. But basically this is just an obfuscated way of implementing
methods; it made sense at the time, but in retrospect making them use the
more usual Python machinery for this will make things easier.

[2] https://stackoverflow.com/a/4396247/

> In this world, astropy quantities and yt's YTArray would become
> containers around an ndarray that would make use of the dtype metadata
> but also implement all of the unit semantics that they already
> implement. Since they would become container classes and would no longer
> be ndarray subclasses, that avoids most of the pitfalls one encounters
> these days.

I don't think you'd need a container class for basic functionality, but it
might turn out to be useful for some kind of
convenience/backwards-compatibility issues. For example, right now with
Quantity you can do 'arr.unit' to get the unit and 'arr.value' to get the
raw values with units stripped. It should definitely be possible to
support these with spellings like 'arr.dtype.unit' and 'asarray(arr,
dtype=float)' (or 'astropy.quantities.value(arr)'), but maybe not the
short array attribute based spellings? We'll have to have the discussion
about whether we want to provide some mechanism for *dtypes* to add new
attributes to the *ndarray* namespace. (There's some precedent in numpy's
built-in .real and .imag, but OTOH this is a kind of 'import *' feature
that can easily be confusing and create backwards compatibility issues --
what if ndarray and the dtype have a name clash? Keeping in mind that it
could be a clash between a third-party dtype we don't even know about and
a new ndarray attribute that didn't exist when the third-party dtype was
created...)

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
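
The cute trick at [2] is, roughly, to reinterpret the buffer under the
target dtype and assign through the view; a small sketch, assuming the two
dtypes share the same itemsize:

```python
import numpy as np

a = np.arange(5, dtype=np.float64)
b = a.view(np.int64)  # same 40 bytes, reinterpreted as int64
b[:] = a              # element-wise float64 -> int64 cast, written in place
print(b)              # [0 1 2 3 4]
```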
From m.h.vankerkwijk at gmail.com  Thu Oct 26 22:16:59 2017
From: m.h.vankerkwijk at gmail.com (Marten van Kerkwijk)
Date: Thu, 26 Oct 2017 22:16:59 -0400
Subject: [Numpy-discussion] numpy grant update
In-Reply-To: 
References: 
Message-ID: 

Hi Nathaniel,

That sounds like it could work very well indeed!

On a somewhat related note, for the inner loops I've been wondering
whether it might be possible to automatically create composite ufuncs,
where the inner loops are executed in some prescribed order, so that for
instance one could define
```
sinmul = sin(multiply(Input(1), Input(2)))
```
which would then create a new ufunc with two inputs and one output, which
would internally first multiply the inputs and then take the sin (you'll
see some similarity with an example in the talk you gave...). For this
purpose, I'm thinking one could just reuse the iterator, but call the
inner loops sequentially (being somewhat smart in that the sin can be done
in-place on the output of the multiply). I could see that even complicated
"casting" from dtypes could be implemented similarly (it probably already
happens for int/float/etc.?)

Anyway, looking forward to hearing more (in due time)! All the best,

Marten
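
Until such composite ufuncs exist, the same effect can be spelled by hand
with `out=`, reusing the intermediate buffer; a tiny sketch mirroring the
example above (the function is illustrative only):

```python
import numpy as np

def sinmul(in1, in2, out=None):
    # multiply first, then take the sin in place on the intermediate result
    tmp = np.multiply(in1, in2, out=out)
    return np.sin(tmp, out=tmp)

x = np.linspace(0.0, 1.0, 5)
print(np.allclose(sinmul(x, 2.0), np.sin(2.0 * x)))  # True
```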
From berceanu at runbox.com  Fri Oct 27 11:43:38 2017
From: berceanu at runbox.com (Andrei Berceanu)
Date: Fri, 27 Oct 2017 17:43:38 +0200 (CEST)
Subject: [Numpy-discussion] MATLAB to Numpy
In-Reply-To: 
Message-ID: 

Hmm, so how come this doesn't work now?

mask = ((px > 2.) & ((py**2 + pz**2) / px**2 < 1.))

for arr in (px, py, pz, w, x, y, z):
    arr = arr[mask]

On Mon, 23 Oct 2017 15:05:26 +0200 (CEST), "Andrei Berceanu" wrote:

> Thank you so much, the solution was much simpler than I expected!
>
> On Sat, 21 Oct 2017 23:04:43 +0200, Daπid wrote:
>
> > On 21 October 2017 at 22:32, Eric Wieser
> > wrote:
> >
> > > David, that doesn't work, because np.cumsum(mask)[mask] is always equal
> > > to np.arange(mask.sum()) + 1. Robert's answer is correct.
> > >
> > Of course, you are right. It makes sense in my head now.
> > _______________________________________________
> > NumPy-Discussion mailing list
> > NumPy-Discussion at python.org
> > https://mail.python.org/mailman/listinfo/numpy-discussion
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at python.org
> https://mail.python.org/mailman/listinfo/numpy-discussion

From ben.v.root at gmail.com  Fri Oct 27 12:16:44 2017
From: ben.v.root at gmail.com (Benjamin Root)
Date: Fri, 27 Oct 2017 12:16:44 -0400
Subject: [Numpy-discussion] MATLAB to Numpy
In-Reply-To: 
References: 
Message-ID: 

In what way does it not work? Does it error out at the `arr = arr[mask]`
step? Or is it that something unexpected happens?

I am guessing that you are trying to mutate the px, py, pz, w, x, y, z
arrays? If so, that for-loop won't do it. In Python, a plain simple
assignment merely makes the variable point to a different object. It
doesn't mutate the object itself.

Cheers!
Ben Root

On Fri, Oct 27, 2017 at 11:43 AM, Andrei Berceanu wrote:

> Hmm, so how come this doesn't work now?
>
> mask = ((px > 2.) & ((py**2 + pz**2) / px**2 < 1.))
>
> for arr in (px, py, pz, w, x, y, z):
>     arr = arr[mask]
>
> On Mon, 23 Oct 2017 15:05:26 +0200 (CEST), "Andrei Berceanu" <
> berceanu at runbox.com> wrote:
>
> > Thank you so much, the solution was much simpler than I expected!
> >
> > On Sat, 21 Oct 2017 23:04:43 +0200, Daπid wrote:
> >
> > > On 21 October 2017 at 22:32, Eric Wieser
> > > wrote:
> > >
> > > > David, that doesn't work, because np.cumsum(mask)[mask] is always
> equal
> > > > to np.arange(mask.sum()) + 1. Robert's answer is correct.
> > > >
> > > Of course, you are right. It makes sense in my head now.
> > > _______________________________________________
> > > NumPy-Discussion mailing list
> > > NumPy-Discussion at python.org
> > > https://mail.python.org/mailman/listinfo/numpy-discussion
> >
> > _______________________________________________
> > NumPy-Discussion mailing list
> > NumPy-Discussion at python.org
> > https://mail.python.org/mailman/listinfo/numpy-discussion
>
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at python.org
> https://mail.python.org/mailman/listinfo/numpy-discussion
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From robert.kern at gmail.com  Fri Oct 27 12:26:06 2017
From: robert.kern at gmail.com (Robert Kern)
Date: Fri, 27 Oct 2017 09:26:06 -0700
Subject: [Numpy-discussion] MATLAB to Numpy
In-Reply-To: 
References: 
Message-ID: 

On Fri, Oct 27, 2017 at 9:16 AM, Benjamin Root wrote:
>
> In what way does it not work? Does it error out at the `arr = arr[mask]`
step? Or is it that something unexpected happens?
>
> I am guessing that you are trying to mutate the px, py, pz, w, x, y, z
arrays? If so, that for-loop won't do it. In Python, a plain simple
assignment merely makes the variable point to a different object. It
doesn't mutate the object itself.

More specifically, it makes the name on the left-hand side point to the
object that's evaluated by the right-hand side. So this for loop is just
re-assigning objects to the name "arr". The names "px", "py", etc. are not
being reassigned. Here is a good article on how Python assignment works:

https://nedbatchelder.com/text/names.html

-- 
Robert Kern
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From p.e.creasey.00 at googlemail.com  Fri Oct 27 15:24:41 2017
From: p.e.creasey.00 at googlemail.com (Peter Creasey)
Date: Fri, 27 Oct 2017 12:24:41 -0700
Subject: [Numpy-discussion] numpy grant update
Message-ID: 

> Date: Thu, 26 Oct 2017 17:27:33 -0400
> From: Marten van Kerkwijk
>
> That sounds somewhat puzzling as units cannot really propagate without
> them somehow telling how they would change! (e.g., the outcome of
> sin(a) is possible only for angular units and then depends on that
> unit). But in any case, the mailing list is probably not the best place
> to discuss this - rather, I look forward to -- and will most happily
> give feedback on -- a NEP or other more detailed explanation!
>

So whilst it's true that trigonometric functions only make sense for
dimensionless quantities, you might still want to compute them for
dimensional quantities for reasons of computational efficiency. Taking
your example of sin(a) in a spectral density identity:

log(cos(ka) + i sin(ka)) = k log(cos(a) + i sin(a))

so if you are computing the LHS for many k and a single a (i.e. k the
wavenumber and ka dimensionless) then you might prefer the RHS, which
actually uses sin(a).

Peter
From willsheffler at gmail.com  Fri Oct 27 15:36:09 2017
From: willsheffler at gmail.com (William Sheffler)
Date: Fri, 27 Oct 2017 12:36:09 -0700
Subject: [Numpy-discussion] is __array_ufunc__ ready for prime-time?
Message-ID: 

Right before 1.12, I arranged an API around an np.ndarray subclass, making
use of __array_ufunc__ to customize behavior based on structured dtype (we
come from C++ and really like operator overloading). Having seen
__array_ufunc__ featured in Travis Oliphant's Guide to NumPy: 2nd Edition,
I assumed this was the way to go. But it was removed in 1.12. Now that
1.13 has reintroduced __array_ufunc__, can I now rely on its continued
availability? I am considering using it as a base-level component in
several libraries... is this a dangerous idea?

Thanks!
Will

-- 
William H. Sheffler Ph.D.
Principal Engineer
Institute for Protein Design
University of Washington
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From shoyer at gmail.com  Fri Oct 27 16:52:33 2017
From: shoyer at gmail.com (Stephan Hoyer)
Date: Fri, 27 Oct 2017 20:52:33 +0000
Subject: [Numpy-discussion] is __array_ufunc__ ready for prime-time?
In-Reply-To: 
References: 
Message-ID: 

Hi Will,

We spent a *long time* sorting out the messy details of __array_ufunc__
[1], especially for handling interactions between different types, e.g.,
between numpy arrays, non-numpy array-like objects, builtin Python
objects, objects that override arithmetic to act in non-numpy-like ways,
and of course subclasses of all the above.

We hope that we have it right this time, but as we wrote in the NumPy 1.13
release notes "The API is provisional, we do not yet guarantee backward
compatibility as modifications may be made pending feedback." That said,
let's give it a try!

If any changes are necessary, I expect it would likely relate to how we
handle interactions between different types. That's where we spent the
majority of the design effort, but debate is a poor substitute for
experience. I would be very surprised if the basic cases (one argument or
two arguments of the same type) need any changes.

Best,
Stephan

[1] https://docs.scipy.org/doc/numpy-1.13.0/neps/ufunc-overrides.html

On Fri, Oct 27, 2017 at 12:39 PM William Sheffler wrote:

> Right before 1.12, I arranged an API around an np.ndarray subclass,
> making use of __array_ufunc__ to customize behavior based on structured
> dtype (we come from C++ and really like operator overloading). Having
> seen __array_ufunc__ featured in Travis Oliphant's Guide to NumPy: 2nd
> Edition, I assumed this was the way to go. But it was removed in 1.12.
> Now that 1.13 has reintroduced __array_ufunc__, can I now rely on its
> continued availability? I am considering using it as a base-level
> component in several libraries... is this a dangerous idea?
>
> Thanks!
> Will
>
> --
> William H. Sheffler Ph.D.
> Principal Engineer
> Institute for Protein Design
> University of Washington
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at python.org
> https://mail.python.org/mailman/listinfo/numpy-discussion
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
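
For anyone wanting to experiment with the provisional API, a minimal
sketch of the single-type case on NumPy >= 1.13 (the class is illustrative
only; `out=` handling is omitted for brevity):

```python
import numpy as np

class MyArray(np.ndarray):
    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        # unwrap MyArray inputs, defer to the plain ufunc, rewrap the result
        unwrapped = tuple(x.view(np.ndarray) if isinstance(x, MyArray) else x
                          for x in inputs)
        result = getattr(ufunc, method)(*unwrapped, **kwargs)
        if isinstance(result, np.ndarray):
            result = result.view(MyArray)
        return result

a = np.arange(4.0).view(MyArray)
print(type(a + 1).__name__)  # MyArray
```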
From berceanu at runbox.com  Fri Oct 27 17:25:26 2017
From: berceanu at runbox.com (Andrei Berceanu)
Date: Fri, 27 Oct 2017 23:25:26 +0200 (CEST)
Subject: [Numpy-discussion] MATLAB to Numpy
In-Reply-To: 
Message-ID: 

So how can I mutate all of them at once?

On Fri, 27 Oct 2017 09:26:06 -0700, Robert Kern wrote:

> On Fri, Oct 27, 2017 at 9:16 AM, Benjamin Root wrote:
> >
> > In what way does it not work? Does it error out at the `arr = arr[mask]`
> step? Or is it that something unexpected happens?
> >
> > I am guessing that you are trying to mutate the px, py, pz, w, x, y, z
> arrays? If so, that for-loop won't do it. In Python, a plain simple
> assignment merely makes the variable point to a different object. It
> doesn't mutate the object itself.
>
> More specifically, it makes the name on the left-hand side point to the
> object that's evaluated by the right-hand side. So this for loop is just
> re-assigning objects to the name "arr". The names "px", "py", etc. are not
> being reassigned. Here is a good article on how Python assignment works:
>
> https://nedbatchelder.com/text/names.html
>
> --
> Robert Kern
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at python.org
> https://mail.python.org/mailman/listinfo/numpy-discussion

From m.h.vankerkwijk at gmail.com  Fri Oct 27 17:54:44 2017
From: m.h.vankerkwijk at gmail.com (Marten van Kerkwijk)
Date: Fri, 27 Oct 2017 17:54:44 -0400
Subject: [Numpy-discussion] is __array_ufunc__ ready for prime-time?
In-Reply-To: 
References: 
Message-ID: 

Just to second Stephan's comment: do try it! I've moved astropy's
Quantity over to it, and am certainly counting on the basic interface
staying put... -- Marten

From m.h.vankerkwijk at gmail.com  Fri Oct 27 17:56:18 2017
From: m.h.vankerkwijk at gmail.com (Marten van Kerkwijk)
Date: Fri, 27 Oct 2017 17:56:18 -0400
Subject: [Numpy-discussion] MATLAB to Numpy
In-Reply-To: 
References: 
Message-ID: 

One way would be
```
px, py, pz, w, x, y, z = [arr[mask] for arr in (px, py, pz, w, x, y, z)]
```
-- Marten
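
A compact way to see the difference between rebinding and mutating, with
made-up data:

```python
import numpy as np

px = np.array([1.0, 3.0, 5.0])
py = np.array([0.1, 0.2, 0.3])
mask = px > 2.0

for arr in (px, py):
    arr = arr[mask]  # rebinds only the loop name 'arr'; px and py are untouched
print(px.shape)      # (3,)

px, py = [arr[mask] for arr in (px, py)]  # rebind the outer names instead
print(px.shape)      # (2,)
```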
From m.h.vankerkwijk at gmail.com  Fri Oct 27 17:52:23 2017
From: m.h.vankerkwijk at gmail.com (Marten van Kerkwijk)
Date: Fri, 27 Oct 2017 17:52:23 -0400
Subject: [Numpy-discussion] numpy grant update
In-Reply-To: 
References: 
Message-ID: 

Hi Peter,

When using units, if `a` is not angular (or dimensionless), I don't
see how one could write code in which your example wouldn't fail...
But I may be missing something, since for your example one would just
realize that cos(ka)+i sin(ka) = exp(ika), in which case the log is
just ika and one can avoid the whole complexity...

All the best,

Marten

On Fri, Oct 27, 2017 at 3:24 PM, Peter Creasey wrote:
>> Date: Thu, 26 Oct 2017 17:27:33 -0400
>> From: Marten van Kerkwijk
>>
>> That sounds somewhat puzzling as units cannot really propagate without
>> them somehow telling how they would change! (e.g., the outcome of
>> sin(a) is possible only for angular units and then depends on that
>> unit). But in any case, the mailing list is probably not the best place
>> to discuss this - rather, I look forward to -- and will most happily
>> give feedback on -- a NEP or other more detailed explanation!
>>
>
> So whilst it's true that trigonometric functions only make sense for
> dimensionless quantities, you might still want to compute them for
> dimensional quantities for reasons of computational efficiency. Taking
> your example of sin(a) in a spectral density identity:
>
> log(cos(ka) + i sin(ka)) = k log(cos(a) + i sin(a))
>
> so if you are computing the LHS for many k and a single a (i.e. k the
> wavenumber and ka dimensionless) then you might prefer the RHS, which
> actually uses sin(a).
>
> Peter
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at python.org
> https://mail.python.org/mailman/listinfo/numpy-discussion

From nathan12343 at gmail.com  Fri Oct 27 18:18:02 2017
From: nathan12343 at gmail.com (Nathan Goldbaum)
Date: Fri, 27 Oct 2017 22:18:02 +0000
Subject: [Numpy-discussion] is __array_ufunc__ ready for prime-time?
In-Reply-To: 
References: 
Message-ID: 

I'm using it in yt. If we were able to drop support for all old numpy
versions, switching would allow me to delete hundreds of lines of code.
As-is, since we need to simultaneously support old and new versions, it
adds some additional complexity. If you're ok with only supporting numpy
>= 1.13, __array_ufunc__ will make your life a lot easier.

On Fri, Oct 27, 2017 at 6:55 PM Marten van Kerkwijk <
m.h.vankerkwijk at gmail.com> wrote:

> Just to second Stephan's comment: do try it! I've moved astropy's
> Quantity over to it, and am certainly counting on the basic interface
> staying put... -- Marten
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at python.org
> https://mail.python.org/mailman/listinfo/numpy-discussion
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From m.h.vankerkwijk at gmail.com  Fri Oct 27 20:46:07 2017
From: m.h.vankerkwijk at gmail.com (Marten van Kerkwijk)
Date: Fri, 27 Oct 2017 20:46:07 -0400
Subject: [Numpy-discussion] is __array_ufunc__ ready for prime-time?
In-Reply-To: 
References: 
Message-ID: 

Hi Nathan,

Happy to hear that it works well for yt! In astropy's Quantity as well,
it greatly simplifies the code, and has made many operations about two
times faster (which is why I pushed so hard to get __array_ufunc__
done...). But for now we're stuck with supporting __array_prepare__ and
__array_wrap__ as well.

Of course, do let us know if there are issues with the current design.
Some further cleanup, especially with the use of `super` for ndarray
subclasses, is still foreseen (but none of that should impact the API).

All the best,

Marten

From p.e.creasey.00 at googlemail.com  Tue Oct 31 14:38:46 2017
From: p.e.creasey.00 at googlemail.com (Peter Creasey)
Date: Tue, 31 Oct 2017 11:38:46 -0700
Subject: [Numpy-discussion] numpy grant update
Message-ID: 

> Date: Fri, 27 Oct 2017 17:52:23 -0400
> From: Marten van Kerkwijk
>
> Hi Peter,
>
> When using units, if `a` is not angular (or dimensionless), I don't
> see how one could write code in which your example wouldn't fail...
> But I may be missing something, since for your example one would just
> realize that cos(ka)+i sin(ka) = exp(ika), in which case the log is
> just ika and one can avoid the whole complexity...
>

Hi Marten,

Sorry, I thought I replied to you but somehow it didn't go through. Yes,
that example was a bit contrived, but it was just an example where
something like sin(x) can be meaningful even if x is dimensional (though
you much more typically see these things with log or exp).

Best,
Peter

From willsheffler at gmail.com  Tue Oct 31 15:15:24 2017
From: willsheffler at gmail.com (William Sheffler)
Date: Tue, 31 Oct 2017 12:15:24 -0700
Subject: [Numpy-discussion] is __array_ufunc__ ready for prime-time?
Message-ID: 

Thank you all kindly for your responses! Based on your encouragement, I
will pursue an ndarray subclass / __array_ufunc__ implementation. I had
been toying with np.set_numeric_ops, which is less than ideal (for
example, np.ndarray.around segfaults if I use set_numeric_ops in any way).

A second question: very broadly speaking, how much 'pain' can I expect
trying to use an np.ndarray subclass in the broader Python scientific
computing ecosystem, and is there general consensus that projects 'should'
support ndarray subclasses?

Will

> We spent a *long time* sorting out the messy details of __array_ufunc__
> [1], especially for handling interactions between different types, e.g.,
> between numpy arrays, non-numpy array-like objects, builtin Python
> objects, objects that override arithmetic to act in non-numpy-like ways,
> and of course subclasses of all the above.
>
> We hope that we have it right this time, but as we wrote in the NumPy
> 1.13 release notes "The API is provisional, we do not yet guarantee
> backward compatibility as modifications may be made pending feedback."
> That said, let's give it a try!
>
> If any changes are necessary, I expect it would likely relate to how we
> handle interactions between different types. That's where we spent the
> majority of the design effort, but debate is a poor substitute for
> experience. I would be very surprised if the basic cases (one argument
> or two arguments of the same type) need any changes.
>
> Best,
> Stephan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From josef.pktd at gmail.com  Tue Oct 31 19:06:02 2017
From: josef.pktd at gmail.com (josef.pktd at gmail.com)
Date: Tue, 31 Oct 2017 19:06:02 -0400
Subject: [Numpy-discussion] is __array_ufunc__ ready for prime-time?
In-Reply-To: 
References: 
Message-ID: 

On Tue, Oct 31, 2017 at 3:15 PM, William Sheffler wrote:
> Thank you all kindly for your responses! Based on your encouragement, I
> will pursue an ndarray subclass / __array_ufunc__ implementation. I had
> been toying with np.set_numeric_ops, which is less than ideal (for
> example, np.ndarray.around segfaults if I use set_numeric_ops in any
> way).
>
> A second question: very broadly speaking, how much 'pain' can I expect
> trying to use an np.ndarray subclass in the broader Python scientific
> computing ecosystem, and is there general consensus that projects
> 'should' support ndarray subclasses?
>

That depends on what the ndarray subclass does, which methods it
overrides, and what the function uses. My guess is that most general code
uses np.asarray and then assumes it behaves like an ndarray; the actual
behavior will then be whatever the non-ufunc functions do with it, e.g.
what does np.linalg.pinv(my_array) @ np.ones(len(my_array)) return?

Josef

>
> Will
>
> > We spent a *long time* sorting out the messy details of __array_ufunc__
> > [1], especially for handling interactions between different types,
> > e.g., between numpy arrays, non-numpy array-like objects, builtin
> > Python objects, objects that override arithmetic to act in
> > non-numpy-like ways, and of course subclasses of all the above.
> >
> > We hope that we have it right this time, but as we wrote in the NumPy
> > 1.13 release notes "The API is provisional, we do not yet guarantee
> > backward compatibility as modifications may be made pending feedback."
> > That said, let's give it a try!
> >
> > If any changes are necessary, I expect it would likely relate to how we
> > handle interactions between different types. That's where we spent the
> > majority of the design effort, but debate is a poor substitute for
> > experience. I would be very surprised if the basic cases (one argument
> > or two arguments of the same type) need any changes.
> >
> > Best,
> > Stephan
> >
> _______________________________________________
> NumPy-Discussion mailing list
> NumPy-Discussion at python.org
> https://mail.python.org/mailman/listinfo/numpy-discussion
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
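
A tiny sketch of the np.asarray behavior Josef describes (the subclass is
illustrative):

```python
import numpy as np

class MyArray(np.ndarray):
    pass

a = np.ones(3).view(MyArray)
print(type(np.asarray(a)).__name__)     # ndarray -- subclass information is gone
print(type(np.asanyarray(a)).__name__)  # MyArray -- subclass preserved
```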