From andyfaff at gmail.com Mon Jul 3 17:33:21 2017 From: andyfaff at gmail.com (Andrew Nelson) Date: Tue, 4 Jul 2017 07:33:21 +1000 Subject: [SciPy-Dev] Ci testing. In-Reply-To: References: Message-ID: There is currently an issue building the master branch on OSX. I thought we were testing OSX builds on travis, where did the osx entries go to in .travis.yml? Or am I imagining things? Also how are we testing on windows now, there's no appveyor script. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Tue Jul 4 04:20:45 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Tue, 4 Jul 2017 20:20:45 +1200 Subject: [SciPy-Dev] Ci testing. In-Reply-To: References: Message-ID: On Tue, Jul 4, 2017 at 9:33 AM, Andrew Nelson wrote: > There is currently an issue building the master branch on OSX. I thought > we were testing OSX builds on travis, where did the osx entries go to in > .travis.yml? Or am I imagining things? > I think so, we never had OS X testing. > Also how are we testing on windows now, there's no appveyor script. > Appveyor doesn't work, issues with Fortran compiler availability IIRC (or build taking too long?). Windows isn't easily solvable, but we should be able to add both OS X and 32-bit builds. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From marvintdeals at gmail.com Tue Jul 4 10:58:05 2017 From: marvintdeals at gmail.com (marvin thielk) Date: Tue, 4 Jul 2017 10:58:05 -0400 Subject: [SciPy-Dev] Two sample one sided ks test pull request Message-ID: Hi, Happy 4th of July if you're in the US and happy day to the rest of you. My name is Marvin and I am a computational neuroscience student at UCSD and I implemented the two sample one sided ks test. I was trying to use sp.stats.ks_2samp and needed a one sided version of the test but the scipy version only implemented the two sided test.
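[Editor's note: for readers unfamiliar with the statistics being discussed, the two-sided ks_2samp statistic is the maximum absolute distance between the two empirical CDFs, while the one-sided variants use the signed distances. A minimal numpy sketch of the statistics themselves (an illustration only, not the code in the pull request):

```python
import numpy as np

def ks_2samp_stats(data1, data2):
    """Two-sample KS statistics computed from the empirical CDFs.

    Returns (d_plus, d_minus, d_two_sided), where d_plus is the largest
    amount by which the ECDF of data1 exceeds that of data2 (one-sided),
    d_minus is the reverse, and d_two_sided is the usual KS statistic.
    """
    data1, data2 = np.sort(data1), np.sort(data2)
    # evaluate both empirical CDFs at every observed point
    all_pts = np.concatenate([data1, data2])
    cdf1 = np.searchsorted(data1, all_pts, side='right') / len(data1)
    cdf2 = np.searchsorted(data2, all_pts, side='right') / len(data2)
    d_plus = np.max(cdf1 - cdf2)   # one-sided: F1 above F2
    d_minus = np.max(cdf2 - cdf1)  # one-sided: F2 above F1
    return d_plus, d_minus, max(d_plus, d_minus)
```

The two-sided value agrees with the statistic that scipy.stats.ks_2samp already reports.]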
I've implemented it and updated the existing ks_2samp in the style of the existing ks_2samp and sp.stats.kstest. It's backwards compatible and I've updated the documentation but not the examples or unit tests. I've implemented it to match the matlab implementation kstest2. My pull request is located at: https://github.com/scipy/scipy/pull/7559 I was hoping I could get a quick code review of my pull request and figure out what I need to do in terms of unit tests etc. in order to get this hopefully straightforward pull request accepted. Thanks, ~Marvin P.S. This is my first pull request to a large open source project so forgive any mistakes in protocol. I do know how to squash and rebase my commits and can do that when I've finished with all the required changes. -------------- next part -------------- An HTML attachment was scrubbed... URL: From charles.masson at datadoghq.com Wed Jul 5 10:40:54 2017 From: charles.masson at datadoghq.com (Charles-Philippe Masson) Date: Wed, 5 Jul 2017 16:40:54 +0200 Subject: [SciPy-Dev] Adding statistical distances Message-ID: Hi, I am a data scientist at Datadog, a cloud monitoring company. We have been working with statistical distances, which are distances between distributions, and more specifically on a family of distances that can be computed from CDFs, e.g., the first Wasserstein distance and the Cramér-von Mises distance. We wrote and optimized some code in Python to compute those distances. Since those distances have various applications, we think that it might be helpful to others and that is why we intend to share it. Here is the PR: https://github.com/scipy/scipy/pull/7563 I put the code in scipy.stats.stats as statistical distances share common features and applications with statistical tests (such as chisquare or ks_2samp) but let me know if that is not the appropriate place. Looking forward to hearing your feedback, Charles -------------- next part -------------- An HTML attachment was scrubbed...
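[Editor's note: as an illustration of the CDF-based computation the message describes (a sketch, not the code in PR #7563): for 1-D empirical distributions the first Wasserstein distance is the area between the two empirical CDFs.

```python
import numpy as np

def wasserstein_1d(u, v):
    """First Wasserstein distance between two 1-D empirical samples,
    computed as the integral of |F_u - F_v| over the real line."""
    u, v = np.sort(u), np.sort(v)
    all_pts = np.sort(np.concatenate([u, v]))
    deltas = np.diff(all_pts)  # widths of the intervals between points
    # ECDFs are constant on each interval; evaluate at the left endpoints
    cdf_u = np.searchsorted(u, all_pts[:-1], side='right') / len(u)
    cdf_v = np.searchsorted(v, all_pts[:-1], side='right') / len(v)
    return np.sum(np.abs(cdf_u - cdf_v) * deltas)
```

The Cramér-von Mises family mentioned in the message can be computed along the same lines, integrating a squared CDF difference instead of the absolute one.]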
URL: From jaime.frio at gmail.com Thu Jul 6 17:45:01 2017 From: jaime.frio at gmail.com (=?UTF-8?Q?Jaime_Fern=C3=A1ndez_del_R=C3=ADo?=) Date: Thu, 6 Jul 2017 23:45:01 +0200 Subject: [SciPy-Dev] Refactoring ndimage In-Reply-To: References: Message-ID: Just a heads up: I have just submitted a large-ish PR with a proof of concept of what I would like to do with ndimage, see #7569 . If you have an interest in the future of ndimage, please either head over to the PR and leave your thoughts there, or comment here. Thanks, Jaime On Mon, Jun 26, 2017 at 1:17 PM, Ralf Gommers wrote: > > > On Mon, Jun 26, 2017 at 9:36 PM, Jaime Fernández del Río < > jaime.frio at gmail.com> wrote: > >> >> On Mon, Jun 26, 2017 at 9:45 AM, Ralf Gommers >> wrote: >> >>> >>> >>> On Mon, Jun 26, 2017 at 4:55 AM, Jaime Fernández del Río < >>> jaime.frio at gmail.com> wrote: >>> >>>> Hi all, >>>> >>>> I have started sending PRs for ndimage. It is my intention to keep at >>>> it through the summer, and would like to provide some context on what I am >>>> trying to achieve. >>>> >>>> A couple of years ago I mentored a GSoC to port ndimage to Cython that >>>> didn't make much progress. If nothing else, I think I did learn quite a bit >>>> about ndimage and what makes it hard to maintain. And I think one of the >>>> big ones is lack of encapsulation in the C code: ndimage defines four >>>> "classes" in ni_support.h >>>> that >>>> get used throughout, namely NI_Iterator, NI_LineBuffer, NI_FilterIterator >>>> and NI_CoordinateList. >>>> >>>> Unfortunately they are not very self contained, and e.g.
to instantiate >>>> a NI_FilterIterator you typically have to: >>>> >>>> - declare a NI_FilterIterator variable, >>>> - declare two NI_Iterator variables, one for the input array, >>>> another for the output, >>>> - declare a npy_intp pointer of offsets and assign NULL to it, >>>> - call NI_InitFilterOffsets to initialize the offsets pointer, >>>> - call NI_InitFilterIterator to initialize the filter iterator, >>>> - call NI_InitPointIterator twice, once for the input, another for >>>> the output, >>>> - after each iteration call NI_FILTER_NEXT2 to advance all three >>>> iterators, >>>> - after iteration is finished, call free on the pointer of offsets. >>>> >>>> There is no good reason why we couldn't refactor this to being more >>>> like: >>>> >>>> - call NI_FilterIterator_New and assign its return to a >>>> NI_FilterIterator pointer, >>>> - after each iteration call NI_FilterIterator_Next to advance all >>>> involved pointers, >>>> - after iteration is finished, call NI_FilterIterator_Delete to >>>> release any memory. >>>> >>>> Proper encapsulation would have many benefits: >>>> >>>> - high level functions would not be cluttered with boilerplate, and >>>> would be easier to understand and follow, >>>> - chances for memory leaks and the like would be minimized, >>>> - we could wrap those "classes" in Python and test them thoroughly, >>>> - it would make the transition to Cython for the higher level >>>> functions, much simpler. >>>> >>>> As an example, the lowest hanging fruit in this would be #7527 >>>> . >>>> >>>> So open source wise this is what I would like to spend my summer on. >>>> >>> >>> Awesome! ndimage definitely could use it! >>> >>> >>>> Any feedback is welcome, but I would especially like to hear about: >>>> >>>> - thoughts on the overall plan, >>>> >>>> >>> Sounds like a great plan. 
I do think that the current test suite is a >>> bit marginal for this exercise and review may not catch subtle issues, so >>> it would be useful to: >>> - use scikit-image as an extra testsuite regularly >>> - ideally find a few heavy users to test the master branch once in a >>> while >> >> Good points, I have e-mailed the scikit-image list to make them aware of >> this and to ask them for help. >> >> >>> - use tools like a static code analyzer and Valgrind where it makes >>> sense. >>> >> >> I may need help with setting up Valgrind: how hard is it to make it work >> under OSX? >> > > I managed to set it up once on OS X, don't remember it being too painful. > Here's a recent recipe for it: http://julianguyen.org/ > installing-valgrind-on-os-x-el-capitan/ > > >> >> We should also work on having a more complete test suite for the low >> level iterators, probably through Python wrappers. >> > > Makes sense. > > Ralf > > >> >> >>> >>>> - reviewer availability: I would like to make this as incremental >>>> as possible, but that means many smaller interdependent PRs, which require >>>> reviewer availability, >>>> >>>> For the next 4 weeks or so my availability will be pretty good. I'm >>> pretty sure that I don't know as much about ndimage as you do, but that's >>> likely true for all other current devs as well:) I think it's important to >>> keep up with your PRs; once we start getting too far behind in reviewing >>> the effort only goes up. I suggest not being too modest in pinging for >>> reviews of PRs that are going to be a bottleneck or result in conflicts >>> later on. >>> >>> Ralf >>> >>> >>> >>>> >>>> - if anyone wants to join in the fun, I'm more than happy to mentor! >>>> >>>> Jaime >>>> >>>> -- >>>> (\__/) >>>> ( O.o) >>>> ( > <) Este es Conejo. Copia a Conejo en tu firma y ayúdale en sus >>>> planes de dominación mundial.
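[Editor's note: to make the encapsulation proposal in this thread concrete: the idea is that constructing the iterator replaces the separate offset/iterator initialization calls, and cleanup stops being a manual free(). Here is a toy Python sketch of how a wrapped, testable filter iterator could behave in the 1-D case; the class and its behavior are hypothetical and are not the actual ndimage internals:

```python
import numpy as np

class FilterIterator:
    """Toy stand-in for an encapsulated NI_FilterIterator: one constructor
    instead of several init calls, automatic cleanup instead of free()."""

    def __init__(self, arr, size):
        self.arr = np.asarray(arr, dtype=float)
        self.offsets = np.arange(size) - size // 2  # filter footprint offsets

    def __iter__(self):
        n = len(self.arr)
        for i in range(n):
            # clip indices at the edges, like ndimage's 'nearest' boundary mode
            idx = np.clip(i + self.offsets, 0, n - 1)
            yield self.arr[idx]

def moving_max(arr, size=3):
    """A moving maximum built on the iterator; no boilerplate at the call site."""
    return np.array([window.max() for window in FilterIterator(arr, size)])
```

Wrapping the C iterators in Python like this is exactly what the thread suggests: the low-level machinery becomes directly testable, for instance against scipy.ndimage.maximum_filter.]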
>>>> _______________________________________________ >>>> SciPy-Dev mailing list >>>> SciPy-Dev at python.org >>>> https://mail.python.org/mailman/listinfo/scipy-dev >>>> >>>> >>> >>> _______________________________________________ >>> SciPy-Dev mailing list >>> SciPy-Dev at python.org >>> https://mail.python.org/mailman/listinfo/scipy-dev >>> >>> >> >> >> -- >> (\__/) >> ( O.o) >> ( > <) Este es Conejo. Copia a Conejo en tu firma y ayúdale en sus planes >> de dominación mundial.
>> >> _______________________________________________ >> SciPy-Dev mailing list >> SciPy-Dev at python.org >> https://mail.python.org/mailman/listinfo/scipy-dev >> >> > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > -- (\__/) ( O.o) ( > <) Este es Conejo. Copia a Conejo en tu firma y ayúdale en sus planes de dominación mundial. -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Thu Jul 6 22:20:18 2017 From: charlesr.harris at gmail.com (Charles R Harris) Date: Thu, 6 Jul 2017 20:20:18 -0600 Subject: [SciPy-Dev] NumPy 1.13.1 released Message-ID: Hi All, On behalf of the NumPy team, I am pleased to announce the release of NumPy 1.13.1. This is a bugfix release for problems found in 1.13.0. The major changes are: - fixes for the new memory overlap detection, - fixes for the new temporary elision capability, - reversion of the removal of the boolean binary ``-`` operator. It is recommended that users of 1.13.0 upgrade to 1.13.1. Wheels can be found on PyPI . Source tarballs, zipfiles, release notes, and the changelog are available on github . Note that the wheels for Python 3.6 are built against 3.6.1, hence will not work when used with 3.6.0 due to Python bug #29943 . The plan is to release NumPy 1.13.2 shortly after Python 3.6.2, which fixes that problem, is out. If you are using 3.6.0, the workaround is to upgrade to 3.6.1 or use an earlier Python version. *Pull requests merged* A total of 19 pull requests were merged for this release. * #9240 DOC: BLD: fix lots of Sphinx warnings/errors. * #9255 Revert "DEP: Raise TypeError for subtract(bool_, bool_)." * #9261 BUG: don't elide into readonly and updateifcopy temporaries for...
* #9262 BUG: fix missing keyword rename for common block in numpy.f2py * #9263 BUG: handle resize of 0d array * #9267 DOC: update f2py front page and some doc build metadata. * #9299 BUG: Fix Intel compilation on Unix. * #9317 BUG: fix wrong ndim used in empty where check * #9319 BUG: Make extensions compilable with MinGW on Py2.7 * #9339 BUG: Prevent crash if ufunc doc string is null * #9340 BUG: umath: un-break ufunc where= when no out= is given * #9371 DOC: Add isnat/positive ufunc to documentation * #9372 BUG: Fix error in fromstring function from numpy.core.records... * #9373 BUG: ')' is printed at the end pointer of the buffer in numpy.f2py. * #9374 DOC: Create NumPy 1.13.1 release notes. * #9376 BUG: Prevent hang traversing ufunc userloop linked list * #9377 DOC: Use x1 and x2 in the heaviside docstring. * #9378 DOC: Add $PARAMS to the isnat docstring * #9379 DOC: Update the 1.13.1 release notes *Contributors* A total of 12 people contributed to this release. People with a "+" by their names contributed a patch for the first time. * Andras Deak + * Bob Eldering + * Charles Harris * Daniel Hrisca + * Eric Wieser * Joshua Leahy + * Julian Taylor * Michael Seifert * Pauli Virtanen * Ralf Gommers * Roland Kaufmann * Warren Weckesser -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Fri Jul 14 07:41:00 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Fri, 14 Jul 2017 23:41:00 +1200 Subject: [SciPy-Dev] Two sample one sided ks test pull request In-Reply-To: References: Message-ID: On Wed, Jul 5, 2017 at 2:58 AM, marvin thielk wrote: > Hi, > > Happy 4th of July if you're in the US and happy day to the rest of you. > > My name is Marvin and I am a computational neuroscience student at UCSD > and I implemented the two sample one sided ks test. > > I was trying to use sp.stats.ks_2samp and needed a one sided version of > the test but the scipy version only implemented the two sided test. 
> > I've implemented it and updated the existing ks_2samp in the style of the > existing ks_2samp and sp.stats.kstest. > > It's backwards compatible and I've updated the documentation but not the > examples or unit tests. > > I've implemented it to match the matlab implementation kstest2. > > My pull request is located at: https://github.com/scipy/scipy/pull/7559 > > I was hoping I could get a quick code review of my pull request and figure > out what I need to do in terms of unit tests etc. in order to get this > hopefully straightforward pull request accepted. > > Thanks, > ~Marvin > > P.S. This is my first pull request to a large open source project so > forgive any mistakes in protocol. I do know how to squash and rebase my > commits and can do that when I've finished with the all the required > changes. > Hi Marvin, welcome! There are no mistakes in protocol, and thanks for introducing yourself. In general, when you add a new function/feature it's good to send an email to this list. Bug fixes or other maintenance type things don't really need to be discussed here. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Fri Jul 14 07:48:19 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Fri, 14 Jul 2017 23:48:19 +1200 Subject: [SciPy-Dev] Ci testing. In-Reply-To: References: Message-ID: On Tue, Jul 4, 2017 at 8:20 PM, Ralf Gommers wrote: > > > On Tue, Jul 4, 2017 at 9:33 AM, Andrew Nelson wrote: > >> There is currently an issue building the master branch on OSX. I thought >> we were testing OSX builds on travis, where did the osx entries go to in >> .travis.yml? Or am I imagining things? >> > > I think so, we never OS X testing. > > >> Also how are we testing on windows now, there's no appveyor script. >> > > Appveyor doesn't work, issues with Fortran compiler availability IIRC (or > build taking too long?). 
> > Windows isn't easily solvable, but we should be able to add both OS X and > 32-bit builds. > For whoever is interested, here's the issue that discusses how to add OS X continuous integration support: https://github.com/numpy/numpy/issues/9421 (same for scipy as for numpy). Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From andyfaff at gmail.com Fri Jul 14 16:37:45 2017 From: andyfaff at gmail.com (Andrew Nelson) Date: Sat, 15 Jul 2017 06:37:45 +1000 Subject: [SciPy-Dev] Ci testing. In-Reply-To: References: Message-ID: I'm having a go at adding osx support at the moment. On 14 Jul 2017 9:54 pm, "Ralf Gommers" wrote: > > > On Tue, Jul 4, 2017 at 8:20 PM, Ralf Gommers > wrote: > >> >> >> On Tue, Jul 4, 2017 at 9:33 AM, Andrew Nelson wrote: >> >>> There is currently an issue building the master branch on OSX. I thought >>> we were testing OSX builds on travis, where did the osx entries go to in >>> .travis.yml? Or am I imagining things? >>> >> >> I think so, we never had OS X testing. >> >> >>> Also how are we testing on windows now, there's no appveyor script. >>> >> >> Appveyor doesn't work, issues with Fortran compiler availability IIRC (or >> build taking too long?). >> >> Windows isn't easily solvable, but we should be able to add both OS X and >> 32-bit builds. >> > > For whoever is interested, here's the issue that discusses how to add OS X > continuous integration support: https://github.com/numpy/numpy/issues/9421 > (same for scipy as for numpy). > > Ralf > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ralf.gommers at gmail.com Sat Jul 15 00:16:14 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 15 Jul 2017 16:16:14 +1200 Subject: [SciPy-Dev] reviewing open pull requests Message-ID: Hi all, Our documentation on how to contribute to SciPy didn't say that reviewing pull requests is welcome. https://github.com/scipy/scipy/pull/7608 addresses that, but I think it's important enough to point out here as well - especially given that we have a backlog of ~170 PRs that is quite hard to reduce to a lower number. So here's the text I added on that: Reviewing open pull requests (PRs) is very welcome, and a valuable way to help increase the speed at which the project moves forward. Everyone who feels like they are competent to comment on a pull request is invited to do so; if you have specific knowledge/experience in a particular area (say ?optimization algorithms? or ?special functions?) then reviewing PRs in that area is especially valuable - sometimes PRs with technical code have to wait for a long time to get merged due to a shortage of appropriate reviewers. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sun Jul 16 06:27:56 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 16 Jul 2017 22:27:56 +1200 Subject: [SciPy-Dev] Adding statistical distances In-Reply-To: References: Message-ID: Hi Charles, On Thu, Jul 6, 2017 at 2:40 AM, Charles-Philippe Masson < charles.masson at datadoghq.com> wrote: > Hi, > > I am a data scientist at Datadog, a cloud monitoring company. We have been > working with statistical distances, which are distances between > distributions, and more specifically on a family of distances that can be > computed from CDFs, e.g., the first Wasserstein distance and the Cram?r-von > Mises distance. > > We wrote and optimized some code in Python to compute those distances. 
> Since those distances have various applications, we think that it might be > helpful to others and that is why we intend to share it. Here is the PR: > https://github.com/scipy/scipy/pull/7563 > Thanks for contributing! I put the code in scipy.stats.stats as statistical distances share common > features and applications with statistical tests (such as chisquare or > ks_2samp) but let me know if that is not the appropriate place. > I had a look at the other possible place to put them, scipy.spatial.distance. While it could fit there as well - your function signatures fit with distance.cdist - I agree that putting statistical distances in scipy.stats makes more sense. The Kullback-Leibler divergence is also present in scipy.stats already (a bit hidden, it's in `entropy`). Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sun Jul 16 07:35:10 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 16 Jul 2017 23:35:10 +1200 Subject: [SciPy-Dev] welcome Tyler and Cathy to the core team Message-ID: Hi all, On behalf of the SciPy developers I'd like to welcome Tyler Reddy and Cathy Douglass as members of the core dev team. Tyler has been contributing to scipy.spatial for well over a year. He's the author of SphericalVoronoi and distance.directed_hausdorff - https://github.com/scipy/scipy/pulls/tylerjereddy. Cathy has been active in fixing issues in various modules as well as PR reviewing and issue triaging for the past two months - https://github.com/scipy/scipy/pulls/cdouglass. I'm looking forward to their continued contributions! Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jaime.frio at gmail.com Sun Jul 16 15:20:49 2017 From: jaime.frio at gmail.com (=?UTF-8?Q?Jaime_Fern=C3=A1ndez_del_R=C3=ADo?=) Date: Sun, 16 Jul 2017 21:20:49 +0200 Subject: [SciPy-Dev] welcome Tyler and Cathy to the core team In-Reply-To: References: Message-ID: The more the merrier, welcome on board! Jaime On Sun, Jul 16, 2017 at 1:35 PM, Ralf Gommers wrote: > Hi all, > > On behalf of the SciPy developers I'd like to welcome Tyler Reddy and > Cathy Douglass as members of the core dev team. > > Tyler has been contributing to scipy.spatial for well over a year. He's > the author of SphericalVoronoi and distance.directed_hausdorff - > https://github.com/scipy/scipy/pulls/tylerjereddy. > > Cathy has been active in fixing issues in various modules as well as PR > reviewing and issue triaging for the past two months - > https://github.com/scipy/scipy/pulls/cdouglass. > > I'm looking forward to their continued contributions! > > Cheers, > Ralf > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > -- (\__/) ( O.o) ( > <) Este es Conejo. Copia a Conejo en tu firma y ayúdale en sus planes de dominación mundial. -------------- next part -------------- An HTML attachment was scrubbed... URL: From evgeny.burovskiy at gmail.com Sun Jul 16 17:24:09 2017 From: evgeny.burovskiy at gmail.com (Evgeni Burovski) Date: Mon, 17 Jul 2017 00:24:09 +0300 Subject: [SciPy-Dev] welcome Tyler and Cathy to the core team In-Reply-To: References: Message-ID: Welcome! On 16.07.2017 at 13:35, "Ralf Gommers" wrote: > Hi all, > > On behalf of the SciPy developers I'd like to welcome Tyler Reddy and > Cathy Douglass as members of the core dev team. > > Tyler has been contributing to scipy.spatial for well over a year. He's > the author of SphericalVoronoi and distance.directed_hausdorff - > https://github.com/scipy/scipy/pulls/tylerjereddy.
> > Cathy has been active in fixing issues in various modules as well as PR > reviewing and issue triaging for the past two months - > https://github.com/scipy/scipy/pulls/cdouglass. > > I'm looking forward to their continued contributions! > > Cheers, > Ralf > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at gmail.com Sun Jul 16 17:43:57 2017 From: warren.weckesser at gmail.com (Warren Weckesser) Date: Sun, 16 Jul 2017 17:43:57 -0400 Subject: [SciPy-Dev] welcome Tyler and Cathy to the core team In-Reply-To: References: Message-ID: On Sun, Jul 16, 2017 at 7:35 AM, Ralf Gommers wrote: > Hi all, > > On behalf of the SciPy developers I'd like to welcome Tyler Reddy and > Cathy Douglass as members of the core dev team. > > Tyler has been contributing to scipy.spatial for well over a year. He's > the author of SphericalVoronoi and distance.directed_hausdorff - > https://github.com/scipy/scipy/pulls/tylerjereddy. > > Cathy has been active in fixing issues in various modules as well as PR > reviewing and issue triaging for the past two months - > https://github.com/scipy/scipy/pulls/cdouglass. > > Welcome, Tyler and Cathy. Thanks for the great work so far. I hope we see a lot more! Warren I'm looking forward to their continued contributions! > > Cheers, > Ralf > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From andyfaff at gmail.com Wed Jul 19 23:52:27 2017 From: andyfaff at gmail.com (Andrew Nelson) Date: Thu, 20 Jul 2017 13:52:27 +1000 Subject: [SciPy-Dev] Appveyor and Windows builds Message-ID: Hi all, I wanted to say that I think the recent work going on to get scipy building on windows with mingwpy is fantastic, keep up the good work. It's great to see how you're teasing out all the problems and solving them. As part of these changes #7616 is going to add CI testing for windows builds on Appveyor. From my experience on another project (whose build requirements are much smaller) the rate-limiting step in the testing of PR commits is the Appveyor checks; Travis finishes in a fraction of the time. Of course, it's hard to complain when OSS gets checked for free. I'm wondering whether that's going to be an issue with the scipy dev process. cheers, Andrew. -- _____________________________________ Dr. Andrew Nelson _____________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Thu Jul 20 23:54:09 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Fri, 21 Jul 2017 15:54:09 +1200 Subject: [SciPy-Dev] GSoC status update Message-ID: Hi all, Here's an update on the status of our GSoC projects. Antonio is making some great progress on his large-scale constrained optimizer. He has written a couple of nice blog posts to explain parts of the design at https://antonior92.github.io/year-archive/. So far he has been working in a separate repo ( https://github.com/antonior92/ip-nonlinear-solver); a first pull request is forthcoming though. The wiki (https://github.com/antonior92/ip-nonlinear-solver/wiki) and issues 4 and 5 on that repo give a good overview of progress. Ashwin unfortunately did not pass the first evaluation.
His progress until that evaluation can be seen in his repo ( https://github.com/ashwinpathak20/scipy, branches diff-1 and hessian-diff) and his blog (https://ashwinpathak20.github.io/year-archive/). Matt and I (his co-mentors) do thank him for the effort he put into his project. All student blogs for projects under the PSF can be found at https://blogs.python-gsoc.org/ Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Fri Jul 21 00:04:31 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Fri, 21 Jul 2017 16:04:31 +1200 Subject: [SciPy-Dev] Appveyor and Windows builds In-Reply-To: References: Message-ID: On Thu, Jul 20, 2017 at 3:52 PM, Andrew Nelson wrote: > Hi all, > I wanted to say that I think the recent work going on to get scipy > building on windows with mingwpy is fantastic, keep up the good work. It's > great to see how you're teasing out all the problems and solving them. > +1 > > As part of these changes #7616 is going to add CI testing for windows > builds on Appveyor. From my experience on another project (whose build > requirements are much smaller) the rate limiting step in the testing of PR > commits are the Appveyor checks, Travis finishes in a fraction of the time. > That's my experience as well. > Of course, it's hard to complain when OSS gets checked for free. > I'm wondering whether that's going to be an issue with the scipy dev > process. > We shouldn't let it become one. I think if it's too slow (which is likely), then just doing a single build on Appveyor per PR - say 64-bit Python 3.6 - and leaving the rest to a daily/weekly build from the scipy-wheels repo should be fine. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ralf.gommers at gmail.com Fri Jul 21 00:16:44 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Fri, 21 Jul 2017 16:16:44 +1200 Subject: [SciPy-Dev] Stochastic branch and bound optimization algorithm In-Reply-To: References: Message-ID: Hi Benoit, Your email fell through the cracks, sorry for the slow reply. On Sat, Jun 10, 2017 at 2:46 AM, Benoit Rosa wrote: > Hi all, > > I recently implemented a stochastic branch and bound algorithm in python > for one of my projects (namely, 3D registration without an initial guess, > requiring global optimization over the SE3 group). > > After trying basinhopping without much success, I went for a modified > stochastic branch and bound algorithm, described in those two papers > (disclaimer: I'm an author in one of those) > [1] C. Papazov and D. Burschka, Stochastic global optimization for robust > point set registration, Comput. Vis. Image Understand. 115 (12) (2011) > 1598-1609 > [2] C. Gruijthuijsen, B. Rosa, P.T. Tran, J. Vander Sloten, E. Vander > Poorten, D. Reynaerts, An automatic registration method for radiation-free > catheter navigation guidance, J. Med. Robot Res. 1 (03) (2016), 1640009 > > Since I wanted to compare with other optimization algorithms, I > implemented things by modifying the basinhopping file, and as such my > implementation (git commit here: https://github.com/benoitrosa/ > scipy/commit/49a2c23b74b69dc4250e20e21db75bd071dfd92d ) is fully > compatible with Scipy already. I have added a bit of documentation in the > file too. If you want an idea, on my project (for which I can't share the > code now), this algorithm finds the global optimum over the S03 space (i.e. > rotations) in 4.5 seconds on average, where basinhopping takes more than 15 > seconds, and doesn't necessarily converge to the correct solution. > That sounds promising. It also sounds quite specific though, so my first question would be what performance on a wider range of problems looks like.
For scipy.optimize there's an extensive set of benchmarks (in the repo under benchmarks/), new optimizers should be added to that and should perform better than what's already present in scipy. > > More about the algorithm itself: it's a common branch and bound algorithm > (i.e. recursive traversal of a tree and bisecting of leaves to find an > optimum), with two additions: > > 1 - The tree is traversed stochastically. This is governed by a > temperature parameter, much like simulated annealing algorithms. At each > node, there is a probability of going towards a "less promising" > (understand: cost function higher) branch, this probability being governed > by the temperature parameter. In the beginning, "bad" branches will be > selected, ensuring an exploration of large portions of the space. As the > number of iterations increases the temperature decreases, and the > algorithms goes more towards the most promising branches/leaves, getting > higher resolution in that part of the space. > > 2 - When creating a new leaf and evaluating the cost function there, > the value is propagated down the tree, until reaching the root or a node > with a lower cost. This ensures a smart traversal of the tree. Moreover, it > also makes sure that at any time, the root node has the lowest cost, making > it easy to query for the best solution when it is interrupted. > > Another advantage of this algorithm is that parameter tuning is pretty > easy. It only needs the bounds of the search space (no initial guess), a > sampling function (default is uniform) and a cost function (obviously). > Additional parameters are the number of iterations, and the start and stop > temperatures (start should be high to accept many "bad solutions" in the > beginning, and stop should be tending to zero). > > If that algorithm is of interest to the SciPy community, I'd be happy to > clean up the code and make a pull request ! 
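The temperature-governed traversal described above can be sketched in a few lines (a minimal toy illustration, not Benoit's implementation; the Boltzmann-style acceptance rule and the exact weighting are assumptions made here for concreteness):

```python
import math
import random

def pick_child(costs, temperature, rng=random):
    """Choose a child branch by cost: the best branch is favoured, but
    at high temperature a worse ("less promising") branch may win."""
    best = min(costs)
    # Boltzmann-like weights: 1 for the best branch, smaller for worse
    # ones; as the temperature grows, the weights approach uniform.
    weights = [math.exp(-(c - best) / max(temperature, 1e-300)) for c in costs]
    r = rng.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r < acc:
            return i
    return len(weights) - 1

random.seed(0)
# Hot: the worse branch (cost 2.0) is still explored a fair fraction of the time.
picks_hot = [pick_child([1.0, 2.0], temperature=10.0) for _ in range(1000)]
# Cold: essentially always descend into the best branch.
picks_cold = [pick_child([1.0, 2.0], temperature=0.01) for _ in range(1000)]
```

As the iteration count grows, the actual algorithm would lower `temperature` on a cooling schedule, shifting from exploration to exploitation.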
> In principle there's interest, if it brings either better performance than the optimizers we have now, or if it solves a class of problems that are not handled well by existing solvers. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From b.rosa at unistra.fr Fri Jul 21 09:41:30 2017 From: b.rosa at unistra.fr (ROSA Benoit) Date: Fri, 21 Jul 2017 15:41:30 +0200 Subject: [SciPy-Dev] Stochastic branch and bound optimization algorithm In-Reply-To: Message-ID: <4e-59720480-127-4b65f400@27936266> Hi, Your reply comes just at the right time, since I had a bit of time to work on this today :) I tried to set up the benchmarks for this too, but I get some problems when running them. Long story short, the benchmark fails to run, throwing an error from the ASV package (see https://pastebin.com/0tnN2hQE for exact details). I don't have any experience with this, so it's quite hard for me to debug it. More interestingly, it also fails on a fresh clone from the main scipy repository. What I tried: python runtests.py --bench optimize.BenchGlobal --> fails on both my modified version (to add stochasticBB testing) and on the original scipy repository (errors described in the pastebin above) python runtests.py --bench optimize.BenchLeastSquares --> works flawlessly python runtests.py --bench optimize.BenchSmoothUnbounded --> works flawlessly Is there something I missed, or is the benchmarking pipeline for the global optimization algorithms broken at the moment? Best, Benoit On Friday, July 21, 2017 06:16 CEST, Ralf Gommers wrote: > Hi Benoit, > > Your email fell through the cracks, sorry for the slow reply. > > On Sat, Jun 10, 2017 at 2:46 AM, Benoit Rosa wrote: > > > Hi all, > > > > I recently implemented a stochastic branch and bound algorithm in python > > for one of my projects (namely, 3D registration without an initial guess, > > requiring global optimization over the SE3 group).
> > > > After trying basinhopping without much success, I went for a modified > > stochastic branch and bound algorithm, described in those two papers > > (disclaimer: I'm an author in one of those) > > [1] C. Papazov and D. Burschka, Stochastic global optimization for robust > > point set registration, Comput. Vis. Image Understand. 115 (12) (2011) > > 1598-1609 > > [2] C. Gruijthuijsen, B. Rosa, P.T. Tran, J. Vander Sloten, E. Vander > > Poorten, D. Reynaerts, An automatic registration method for radiation-free > > catheter navigation guidance, J. Med. Robot Res. 1 (03) (2016), 1640009 > > > > Since I wanted to compare with other optimization algorithms, I > > implemented things by modifying the basinhopping file, and as such my > > implementation (git commit here: https://github.com/benoitrosa/ > > scipy/commit/49a2c23b74b69dc4250e20e21db75bd071dfd92d ) is fully > > compatible with Scipy already. I have added a bit of documentation in the > > file too. If you want an idea, on my project (for which I can't share the > > code now), this algorithm finds the global optimum over the S03 space (i.e. > > rotations) in 4.5 seconds on average, where basinhopping takes more than 15 > > seconds, and doesn't necessarily converge to the correct solution. > > > > That sounds promising. It also sounds quite specific though, so my first > questions would be how performance on a wider range of problems looks like. > For scipy.optimize there's an extensive set of benchmarks (in the repo > under benchmarks/), new optimizers should be added to that and should > perform better than what's already present in scipy. > > > > > > More about the algorithm itself: it's a common branch and bound algorithm > > (i.e. recursive traversal of a tree and bisecting of leaves to find an > > optimum), with two additions: > > > > 1 - The tree is traversed stochastically. This is governed by a > > temperature parameter, much like simulated annealing algorithms. 
At each > > node, there is a probability of going towards a "less promising" > > (understand: cost function higher) branch, this probability being governed > > by the temperature parameter. In the beginning, "bad" branches will be > > selected, ensuring an exploration of large portions of the space. As the > > number of iterations increases the temperature decreases, and the > > algorithms goes more towards the most promising branches/leaves, getting > > higher resolution in that part of the space. > > > > 2 - When creating a new leaf and evaluating the cost function there, > > the value is propagated down the tree, until reaching the root or a node > > with a lower cost. This ensures a smart traversal of the tree. Moreover, it > > also makes sure that at any time, the root node has the lowest cost, making > > it easy to query for the best solution when it is interrupted. > > > > Another advantage of this algorithm is that parameter tuning is pretty > > easy. It only needs the bounds of the search space (no initial guess), a > > sampling function (default is uniform) and a cost function (obviously). > > Additional parameters are the number of iterations, and the start and stop > > temperatures (start should be high to accept many "bad solutions" in the > > beginning, and stop should be tending to zero). > > > > If that algorithm is of interest to the SciPy community, I'd be happy to > > clean up the code and make a pull request ! > > > > In principle there's interest, if it brings either better performance than > the optimizers we have now, or if it solves a class of problems that are > not handled well by existing solvers. 
> > Cheers, > Ralf From jaime.frio at gmail.com Mon Jul 24 17:58:46 2017 From: jaime.frio at gmail.com (Jaime Fernández del Río) Date: Mon, 24 Jul 2017 23:58:46 +0200 Subject: [SciPy-Dev] Normalizing derivatives of Gaussian kernels Message-ID: There is an interesting issue that was posted yesterday: #7644. In a nutshell, when ndimage does an N-dimensional convolution with a Gaussian kernel, we separate it into N 1-dimensional convolutions, one along each axis. If the border is set to 'constant' mode, we need to replace that constant with the result of convolving the 1D kernel with that constant value, something we are currently not doing. In an ideal world that would mean leaving the constant alone if the order of the Gaussian kernel is zero, or setting it to zero if the order is larger than zero, because that's what the integrals from -inf to +inf of those kernels are supposed to be. In real life we get things right for zero order (because we normalize the kernel values to add up to 1.0) and the odd-ordered higher orders (because they are antisymmetric, so they naturally add up to zero). But for even-ordered Gaussian kernels, which are symmetric, truncation means that we get a slightly negative sum, and that e.g. convolution of a constant field with a second order kernel, which should return all zeros, returns a small negative error. There are two ways we can solve this: 1. Make our separable approach to Gaussian filtering equivalent to what one would get without treating the filter as separable, by multiplying the constant value by the sum of the 1-D kernel after each 1-D convolution. This will at least make ndimage consistent with itself. 2. Normalize the Gaussian kernels of higher order, so that their total sums are equal to zero. I'm not sure if there is a standard way of doing this in the literature; my searches haven't found anything. But e.g.
all kernels you find as being an approximation to the Laplacian of Gaussian do add up to exactly zero. Thoughts and comments are very welcome. Jaime -- (\__/) ( O.o) ( > <) This is Rabbit. Copy Rabbit into your signature and help him with his plans for world domination. -------------- next part -------------- An HTML attachment was scrubbed... URL: From malte.ziebarth at fmvkb.de Tue Jul 25 06:56:33 2017 From: malte.ziebarth at fmvkb.de (Malte Ziebarth) Date: Tue, 25 Jul 2017 12:56:33 +0200 Subject: [SciPy-Dev] More efficient spherical Voronoi algorithm Message-ID: Dear Scipy developers, during a project, I worked with spherical Voronoi tessellations. At the time I started, no Python implementation of spherical Voronoi tessellations existed, so I started to implement one. The algorithm I implemented is an existing Fortune's sphere variant which runs in O(N log N). I figured this may be interesting to the Scipy library since the existing method is at least O(N^2). You can find the code here: github.com/mjziebarth/ACOSA In that compilation, there are also methods to calculate the Delaunay triangulation, convex hull, and alpha shape of a point set on a sphere. Now to my question: Are you interested in the Voronoi tessellation algorithm and possibly some of the others as well? If so, I would start porting the code to scipy. In that case, some style guidance from one of the more involved developers would be helpful. Best, Malte Ziebarth From antonior92 at gmail.com Tue Jul 25 07:57:43 2017 From: antonior92 at gmail.com (Antonio Ribeiro) Date: Tue, 25 Jul 2017 08:57:43 -0300 Subject: [SciPy-Dev] minimize_constrained interface Message-ID: <7AB6006F-99BB-4400-89EE-79EDF77BF7E7@gmail.com> Hi all! I am working on a constrained optimization solver for my GSoC project. I started on a separate repository and I am working now on a pull request to integrate it into SciPy.
In order to proceed we have to make the following decision: should we fit the constrained solver into the ``minimize`` interface or should we create a new one? The best idea we came up with for fitting the new solver into the ``minimize`` interface is to create a ``Constraints`` class and use it to pass the constraint specifications. I am, however, more and more convinced that the best option would be to create a new function ``minimize_constrained``. This new interface would be used for constrained minimization problems and I would include both SLSQP and COBYLA in it. The ``minimize`` interface would be reserved for unconstrained and bound-constrained problems. I think this would be better for the following reasons: 1. The ``minimize`` interface would be kept simple and reserved for unconstrained problems and problems with simple bound constraints. 2. We would have more freedom designing a ``minimize_constrained`` interface that best suits our needs. 3. The burden of implementing and including SLSQP and COBYLA in a new interface is not much greater than the burden of fitting the constrained solver being implemented into ``minimize``. 4. This new interface could be used for new constrained solvers added to scipy. Let me elaborate on 1: I want to include finite-difference options, quasi-Newton options, different ways of passing constraints, different ways of passing second-order information... All of that requires a great deal of documentation and several additional arguments in the function signature. And my point is: why complicate the ``minimize`` interface for something that will only be used by a single solver method? The idea is to create a new interface so we keep the ``minimize`` interface clean and easy to use. It would also give us more freedom to design a better ``minimize_constrained`` interface. I would much appreciate any feedback on this.
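For reference, this is how constraints reach SLSQP through the existing ``minimize`` interface today (standard SciPy usage; the proposed ``minimize_constrained`` signature is still under design and is deliberately not shown here):

```python
import numpy as np
from scipy.optimize import minimize

# Minimize (x - 1)^2 + (y - 2.5)^2 subject to x + y <= 3, x >= 0, y >= 0.
def fun(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

# SLSQP/COBYLA-style constraint dicts: 'ineq' means fun(x) >= 0 must hold.
cons = [{'type': 'ineq', 'fun': lambda x: 3.0 - x[0] - x[1]}]
bounds = [(0, None), (0, None)]

res = minimize(fun, x0=[2.0, 0.0], method='SLSQP',
               bounds=bounds, constraints=cons)
# The constrained optimum is the projection of (1, 2.5) onto the line
# x + y = 3, i.e. approximately (0.75, 2.25).
```

The dict-of-callables style above is exactly the kind of thing a ``Constraints`` class (or a dedicated function) could make more structured.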
I created a github issue (#7648) for this discussion. Best, Antônio -------------- next part -------------- An HTML attachment was scrubbed... URL: From tyler.je.reddy at gmail.com Tue Jul 25 13:28:09 2017 From: tyler.je.reddy at gmail.com (Tyler Reddy) Date: Tue, 25 Jul 2017 11:28:09 -0600 Subject: [SciPy-Dev] More efficient spherical Voronoi algorithm In-Reply-To: References: Message-ID: I think SphericalVoronoi should be getting close to loglinear performance after recent improvements (the docstring may lag behind a bit at the moment). Can you identify the step(s) (preferably the line in the source code) in the current algorithm we use that cause quadratic time complexity? If that is possible, we should be able to just fix it so that it is loglinear as intended, rather than replacing the entire algorithm. The steps were suggested from discussion with a CGAL developer. That said, I'm partially biased since I co-wrote the code. Certainly, any kind of major code replacement would require some thorough side-by-side benchmarks in Python (maybe up to 10 million generators at least--I know our current code tends to take a few minutes at that size, so maybe there's still a weakness to be ironed out). Delaunay triangulation and Convex Hull calculations in scipy leverage the highly performant Qhull C source code, so the need for side-by-side benchmark comparison would be even more crucial there. I think ConvexHull performs really well up to 200 million generators at least (!). I think Pauli knows a fair bit about these as well. I'm not super familiar with alpha shape so can't comment on that. Certainly, more contributors / critical reviewers for spatial / comp geometry code would be welcome! Tyler On 25 July 2017 at 04:56, Malte Ziebarth wrote: > Dear Scipy developers, > > during a project, I worked with spherical Voronoi tesselations. At the > time I started, no python > implementation of the spherical Voronoi tesselations existed, so I started > to implement one.
> > The algorithm I implemented is an existing Fortune's sphere variant which > runs in O(N log N). > I figured this may be interesting to the Scipy library since the existing > method is at least O(N^2). > > You can find the code here: > github.com/mjziebarth/ACOSA > > In that compilation, there's also methods to calculate the Delaunay > triangulation, convex hull, > and alpha shape of a point set on a sphere. > > Now to my question: Are you interested in the Voronoi tesselation > algorithm and possibly > some of the others as well? If so, I would start porting the code to > scipy. In that case, > some style guidance from one of the more involved developers would be > helpful. > > Best, > Malte Ziebarth > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From malte.ziebarth at fmvkb.de Tue Jul 25 17:38:29 2017 From: malte.ziebarth at fmvkb.de (Malte Ziebarth) Date: Tue, 25 Jul 2017 23:38:29 +0200 Subject: [SciPy-Dev] More efficient spherical Voronoi algorithm In-Reply-To: References: Message-ID: <8bfd7878-8f9d-d2c9-9b87-81cbc3306d3e@fmvkb.de> Thanks for the reply! > I think SphericalVoronoi should be getting close to loglinear > performance after recent improvements (the docstring may lag behind a > bit at the moment). Can you identify the step(s) (preferably line in > the source code) in the current algorithm we use that causes quadratic > time complexity? If that is possible, we should be able to just fix it > so that it is loglinear as intended, rather than replacing the entire > algorithm. The steps were suggested from discussion with a CGAL developer. Ah, I was relying solely on the documentation, since I developed my implementation in parallel to your scipy version and did not learn of it until I had already finished the project and consequently did not use it so far.
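A first cut at the side-by-side benchmark Tyler asks for could start from something like this (a rough sketch against the existing scipy class; the generator counts are placeholders, and drawing uniform random generators from projected Gaussians is an assumption about the test setup):

```python
import time
import numpy as np
from scipy.spatial import SphericalVoronoi

def random_sphere_points(n, seed=0):
    """Draw n roughly uniform generators on the unit sphere by
    normalizing standard-normal samples (rotation-invariant)."""
    rng = np.random.RandomState(seed)
    p = rng.normal(size=(n, 3))
    return p / np.linalg.norm(p, axis=1, keepdims=True)

timings = {}
for n in (500, 5000):
    pts = random_sphere_points(n)
    t0 = time.time()
    sv = SphericalVoronoi(pts, radius=1.0, center=np.zeros(3))
    timings[n] = time.time() - t0
# One Voronoi region per generator; all Voronoi vertices lie on the sphere.
```

Scaling `n` up by factors of 10 and fitting the timings would show whether the observed growth is closer to N log N or N^2 for each implementation.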
From a very quick glance, nothing suggests that you should fail to reach loglinear performance. I don't know Qhull but I assume it would have a loglinear implementation as a specialized C library. > > That said, I'm partially biased since I co-wrote the code. Certainly, > any kind of major code replacement would require some thorough > side-by-side benchmarks in Python (maybe up to 10 million generators > at least--I know our current code tends to take a few minutes at that > size, so maybe there's still a weakness to be ironed out). Since, contrary to my assumption, the complexity is not different, I guess there won't be much of a difference. I can, however, do a benchmark comparison, if you wish. > > Delaunay triangulation and Convex Hull calculations in scipy leverage > the highly performant Qhull C source code, so the need for > side-by-side benchmark comparison would be even more crucial there. I > think ConvexHull performs really well up to 200 million generators at > least (!). I think Pauli knows a fair bit about these as well. I haven't checked the Qhull docs, but now that I think about it, I guess I should have mentioned that it's a convex hull inside the spherical space, not a 3D Euclidean space hull around the set of points on the sphere. Again, maybe Qhull supplies such a hull already. > > I'm not super familiar with alpha shape so can't comment on that. > > Certainly, more contributors / critical reviewers for spatial / comp > geometry code would be welcome! > > Tyler > As soon as / if I can spare the time, I'll take a look at the spherical Voronoi code! Malte
> I figured this may be interesting to the Scipy library since the > existing method is at least O(N^2). > > You can find the code here: > github.com/mjziebarth/ACOSA > > In that compilation, there's also methods to calculate the > Delaunay triangulation, convex hull, > and alpha shape of a point set on a sphere. > > Now to my question: Are you interested in the Voronoi tesselation > algorithm and possibly > some of the others as well? If so, I would start porting the code > to scipy. In that case, > some style guidance from one of the more involved developers would > be helpful. > > Best, > Malte Ziebarth > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev > > > > > > _______________________________________________ > SciPy-Dev mailing list > SciPy-Dev at python.org > https://mail.python.org/mailman/listinfo/scipy-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Wed Jul 26 11:37:00 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Thu, 27 Jul 2017 03:37:00 +1200 Subject: [SciPy-Dev] ?==?utf-8?q? Stochastic branch and bound optimization algorithm In-Reply-To: <4e-59720480-127-4b65f400@27936266> References: <4e-59720480-127-4b65f400@27936266> Message-ID: On Sat, Jul 22, 2017 at 1:41 AM, ROSA Benoit wrote: > Hi, > > Your reply comes just at the right time, since I had a bit of time to work > on this today :) > > I tried to setup the benchmarks for this too, but I get some problems when > running them. > > Long story short, the benchmark fails to run, throwing an error from the > ASV package (see https://pastebin.com/0tnN2hQE for exact details). I > don't have any experience with this, so it's quite hard for me to debug it. > More interestingly, it also fails on a fresh clone from the main scipy > repository. 
> > What I tried: > > python runtests.py --bench optimize.BenchGlobal --> fails on both my > modified version (to add stochasticBB testing) and on the original scipy > repository (errors described in the pastebin above) > > python runtests.py --bench optimize.BenchLeastSquares --> works flawlessly > > python runtests.py --bench optimize.BenchSmoothUnbounded --> works > flawlessly > > Is there something I missed, or the benchmarking pipeline for the global > optimization algorithms is broken at the moment ? > No, looks like something indeed got broken recently. I've filed an issue here: https://github.com/scipy/scipy/issues/7658. The issue description contains a simple fix to make things work - probably not the optimal one, but using that locally should get you started on running your new benchmarks. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Wed Jul 26 12:05:35 2017 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Thu, 27 Jul 2017 04:05:35 +1200 Subject: [SciPy-Dev] Normalizing derivatives of Gaussian kernels In-Reply-To: References: Message-ID: On Tue, Jul 25, 2017 at 9:58 AM, Jaime Fern?ndez del R?o < jaime.frio at gmail.com> wrote: > There is an interesting issue that was posted yesterday: #7644 > > > in a nutshell, when ndimage does an N-dimensional convolution with a > Gaussian kernel, we separate it into N 1-dimensional convolutions, one > along each axis. If the border is set to 'constant' mode, we need to > replace that constant with the result of convolving the 1D kernel with that > constant value, something we are currently not doing. > > In an ideal world that would mean leaving the constant alone if the order > of the Gaussian kernel is zero, or setting it to zero if the order is > larger than zero, because that's what the integrals from -inf to +inf of > those kernels are supposed to be. 
> > In real life we get things right for zero order (because we normalize the > kernel values to add up to 1.0) and the odd-ordered higher orders, (because > they are antisymmetric, so they naturally add up to zero). > > But for even-ordered Gaussian kernels, which are symmetric, truncation > means that we get a slightly negative sum, and that e.g. convolution of a > constant field with a second order kernel, which should return all zeros, > returns a small negative error. > > There are two ways we can solve this: > > 1. Make our separable approach to Gaussian filtering equivalent to > what one would get without treating the filter as separable, by multiplying > the constant value by the sum of the 1-D kernel after each 1-D convolution. > This will at least make ndimage consistent with itself. > 2. Normalize the Gaussian kernels of higher order, so that their total > sums are equal to zero. I'm not sure if there's is a standard way of doing > this in the literature, my searches haven't found anything. But e.g. all > kernels you find as being an approximation to the Laplacian of Gaussian do > add up to exactly zero. > > Thoughts and comments are very welcome. > It sounds like (2) is the more correct approach. As far as I can tell either approach will change the numerical results for even-ordered kernels in a similar way, and making the change can be considered a bug fix rather than a backwards compatibility change. Is that right? Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From oss at multiwave.ch Fri Jul 28 09:31:51 2017 From: oss at multiwave.ch (oss) Date: Fri, 28 Jul 2017 15:31:51 +0200 Subject: [SciPy-Dev] N dimensional Gaussian quadrature Message-ID: <9426BFEE-91F1-4ACC-B57F-C3073A6B1CC7@multiwave.ch> Hi scipy-dev, We've implemented and tested an n-dimensional fixed_quad function with a choice of orders for each dimension separately (useful when known singular points need to be avoided). See the diff below.
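The per-dimension-order idea can be illustrated with a small tensor-product Gauss-Legendre sketch (an independent illustration of the approach, not the code in the linked diff; the function name and signature are made up here):

```python
import itertools
import numpy as np

def fixed_quad_nd(func, ranges, orders):
    """Tensor-product Gauss-Legendre quadrature: one (interval, order)
    pair per dimension, so orders can differ between dimensions, e.g.
    to place nodes away from known singular points."""
    nodes, weights = [], []
    for (a, b), n in zip(ranges, orders):
        x, w = np.polynomial.legendre.leggauss(n)
        # Map nodes and weights from the reference [-1, 1] to [a, b].
        nodes.append(0.5 * (b - a) * x + 0.5 * (b + a))
        weights.append(0.5 * (b - a) * w)
    total = 0.0
    for idx in itertools.product(*(range(n) for n in orders)):
        pt = [nodes[d][i] for d, i in enumerate(idx)]
        wt = np.prod([weights[d][i] for d, i in enumerate(idx)])
        total += wt * func(*pt)
    return total

# Integral of x*y^2 over [0,1] x [0,2] is (1/2)*(8/3) = 4/3; 2-point
# and 3-point rules are exact for these polynomial degrees.
val = fixed_quad_nd(lambda x, y: x * y * y, [(0.0, 1.0), (0.0, 2.0)], (2, 3))
```

An n-point Gauss-Legendre rule is exact for polynomials up to degree 2n-1 in that dimension, which is why mixed low orders suffice in the example.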
We're open to any requests for improvement. https://github.com/scipy/scipy/compare/master...multiwave:n_fixed_quad Thanks tryfon at multiwave -------------- next part -------------- An HTML attachment was scrubbed... URL: